Compare commits

...

604 Commits

Author SHA1 Message Date
ZQKC
a87a0663ed Bump version to 3.4.0 2023-12-03 15:24:11 +08:00
ZQKC
2390ae8941 Change version to 3.4.0 2023-12-03 15:21:49 +08:00
ZQKC
e2692a6fc4 Upgrade to version 3.4.0 2023-12-03 15:16:01 +08:00
ZQKC
c18eeb6d55 Add the 3.4.0 upgrade manual 2023-12-03 15:14:28 +08:00
ZQKC
f5de9789f2 Merge the enterprise edition branch 2023-12-03 14:40:40 +08:00
EricZeng
4ae34d0030 Merge the enterprise edition development branch (#1206) 2023-12-03 14:32:51 +08:00
EricZeng
95bce89ce5 Merge the master branch (#1205) 2023-12-03 14:31:47 +08:00
ZQKC
6853862753 Add the 3.4.0 changelog 2023-12-03 14:22:09 +08:00
EricZeng
610af4a9e8 [Optimize] Add to the Kafka version list (#1204)
Only the version list is added; features introduced in new versions will be added later as needed.

Co-authored-by: qiao.zeng <qiao.zeng@ingeek.com>
2023-12-03 12:52:52 +08:00
erge
49d3d078d3 Merge the main branch (#1199) (#1201)
Please do not create a Pull Request without first creating an Issue.

## What is the purpose of the change

XXXXX

## Brief changelog

- [Bugfix] Fix excessive calls to the reset-offset API
- [Bugfix] Fix the issue where, after a consumer group's offsets are reset, a success message is shown but the frontend does not refresh and the offsets stay unchanged
- [Optimize] Control real-time data refresh on the consumer group detail page

## Verifying this change

XXXX

Please follow this checklist so we can integrate your contribution quickly and easily:

* [ ] One PR (short for Pull Request) solves exactly one problem; one PR addressing multiple problems is not allowed;
* [ ] Make sure the PR has a corresponding Issue (usually created before you start working on it), unless it is a trivial change such as fixing a typo, which needs no Issue;
* [ ] Format the title and body of the PR and the Commit-Log, e.g. #861. PS: the Commit-Log must be written when running git commit and cannot be changed on GitHub;
* [ ] Write a PR description detailed enough to understand what the PR does, how, and why;
* [ ] Write the unit tests needed to verify your logic changes; if you submit a new feature or a major change, remember to add an integration-test to the test module;
* [ ] Make sure compilation passes and the integration tests pass;
2023-11-30 21:56:42 +08:00
erge
ac4ea13be9 Fix some frontend issues (#1199)
- [Bugfix] Fix excessive calls to the reset-offset API
- [Bugfix] Fix the issue where, after a consumer group's offsets are reset, a success message is shown but the frontend does not refresh and the offsets stay unchanged
- [Optimize] Control real-time data refresh on the consumer group detail page
2023-11-30 21:44:06 +08:00
EricZeng
2339a6f0cd Merge the main branch for testing (#1197) 2023-11-27 21:08:53 +08:00
EricZeng
b6ea4aec19 [Optimize] Fetch GroupTopic information in real time (#1196) 2023-11-27 21:08:05 +08:00
erge
8346453aa3 Fix some frontend issues (#1195)
- [Optimize] Update the Security and Consumer permission points
- [BugFix] Fix incorrect JMX port maintenance information
2023-11-27 20:26:42 +08:00
erge
a9eb4ae30e [BugFix] Fix incorrect JMX port maintenance information 2023-11-27 20:18:42 +08:00
erge
cceff91f81 [Optimize] Update the Security and Consumer permission points 2023-11-27 20:17:58 +08:00
EricZeng
2744f5b6dd Verify that features work correctly (#1193) 2023-11-27 13:51:21 +08:00
erge
009ffeb099 Fix some frontend issues (#1192)
- [Bugfix] Fix the system-management sub-application failing to start (#1167)
- [Optimize] Bring users and acls under security into permission management (#1089)
- [BugFix] Fix Topic message display not showing messages with offset 0 (#996)
- [Optimize] Bring Connect operations into permission management (previously any user could restart, edit, or delete) (#1050)
- [Optimize] Add ACL permissions, custom permission configuration, and TransactionalId resource improvements (#1160)
- [Optimize] Align the JSON format in Connect JSON mode with the official API format (#1048)
- [Optimize] Update the star count on the login page
- [Optimize] Connect style improvements
2023-11-26 16:44:00 +08:00
erge
e8e05812d0 [Optimize] Connect style improvements 2023-11-26 16:31:06 +08:00
erge
58a421c4b9 [Optimize] Update the star count on the login page 2023-11-26 16:29:38 +08:00
erge
af916d5a71 [Optimize] Align the JSON format in Connect JSON mode with the official API format (#1048) 2023-11-26 16:28:29 +08:00
erge
8b30f78744 [Optimize] Add ACL permissions, custom permission configuration, and TransactionalId resource improvements (#1160) 2023-11-26 16:26:06 +08:00
erge
592dee884a [Optimize] Bring Connect operations into permission management (previously any user could restart, edit, or delete) (#1050) 2023-11-26 16:25:00 +08:00
erge
715744ca15 [BugFix] Fix Topic message display not showing messages with offset 0 (#996) 2023-11-26 16:23:12 +08:00
erge
8a95401364 [Optimize] Bring users and acls under security into permission management (#1089) 2023-11-26 16:21:14 +08:00
erge
e80f8086d4 [Bugfix] Fix the system-management sub-application failing to start (#1167) 2023-11-26 16:18:39 +08:00
jiangminbing
af82c2e615 [Bugfix] Fix incorrect Overview metric text (#1190)
[Bugfix] Fix incorrect Overview metric text (#1150)

Co-authored-by: jiangmb <jiangmb@televehicle.com>
2023-11-25 09:27:43 +08:00
HuYueeer
1369e7b9eb FAQ (#1189)
Add a fix for metric information not displaying when depending on Elasticsearch 8.0 or above
2023-11-25 09:22:41 +08:00
Peng
ab6afe6dbc Update README.md 2023-11-14 10:10:51 +08:00
qiao.zeng
6e9dc4f807 Merge branch 'fix_1043' into ve_3.x_dev 2023-11-12 15:31:18 +08:00
qiao.zeng
a8be274ca6 Merge the master branch 2023-11-12 15:30:08 +08:00
qiao.zeng
e24a582067 [Doc] Modify the action trigger rules 2023-11-12 15:25:23 +08:00
qiao.zeng
251f7f7110 [Bugfix] Fix Truncate data not taking effect 2023-11-12 15:06:10 +08:00
Peng
65f8beef32 Update README.md 2023-11-08 14:11:59 +08:00
Peng
38366809f1 Update README.md 2023-11-08 14:10:40 +08:00
Peng
530219a317 Update README.md 2023-11-08 14:07:58 +08:00
erge
c07e544c50 [Bugfix] Fix the JSON format in Connect JSON mode being inconsistent with the official API format (#1048) (#1181)
2023-11-07 19:20:04 +08:00
EricZeng
c9308ee4f2 [Bugfix]修复ci_build失败的问题 (#1173) 2023-10-23 01:11:49 +08:00
爱喝药的大郎
95158813b9 [Bugfix] Fix the issue where, after a consumer group's offsets are reset, a success message is shown but the frontend does not refresh and the offsets stay unchanged (#1090)
## What is the purpose of the change

Fix the issue where, after a consumer group's offsets are reset, a success message is shown but the frontend does not refresh and the offsets stay unchanged

## Brief changelog

Solved using pubsub-js

## Verifying this change
### Before the reset:

![7c90f21063995e7a155d30a24f70c82](https://github.com/didi/KnowStreaming/assets/43955116/db10a87d-2353-48f6-bd29-71b6eb47dab9)
### Reset a specific partition

![039cf8a01ced8783ea957ab72187d83](https://github.com/didi/KnowStreaming/assets/43955116/f8cd4ac0-d093-4df2-aab3-915571bdd8de)

![84580ab27f725b68456793a47e0ad72](https://github.com/didi/KnowStreaming/assets/43955116/5ce85211-95a0-4809-accd-d57b141b4132)
### Reset to the latest offset

![image](https://github.com/didi/KnowStreaming/assets/43955116/227b7939-40ac-4c6c-8e92-03fc16413dce)
### Reset to the earliest offset

![image](https://github.com/didi/KnowStreaming/assets/43955116/56d08648-ac58-43c9-86cd-f88a2a8ae8dd)
2023-10-20 09:34:29 +08:00
爱喝药的大郎
59e8a416b5 [Bugfix] Fix the issue where system management could still be viewed even when the system-management view permission was not granted (#1105)
## What is the purpose of the change
Fix the issue where system management could still be viewed even when the system-management view permission was not granted
## Brief changelog
Fix the issue where system management could still be viewed even when the system-management view permission was not granted
## Verifying this change
### Permission table
<img width="587" alt="image"
src="https://github.com/didi/KnowStreaming/assets/43955116/497fea54-3216-4ae7-8dab-304a07e81209">

### Result
<img width="1500" alt="image"
src="https://github.com/didi/KnowStreaming/assets/43955116/1e4a8260-336e-4c15-a244-5f768107a990">

Co-authored-by: suzj <hzsuzj@qq.com>
2023-10-20 09:32:31 +08:00
erge
f6becbdf2c [Optimize] When submitting a Connect task, save only the configuration the user changed, and fix incomplete configuration display in JSON mode (#1047) (#1158)
## What is the purpose of the change

Improve Connect creation/editing

## Brief changelog

- [Bugfix] Custom advanced configuration items were not displayed in JSON mode (#1045)
- [Optimize] After submitting a task, save only the configuration the user changed instead of saving everything; currently all configurations are saved whether or not the user changed them (#1047)
2023-10-20 09:28:52 +08:00
erge
07bd00d60c [Optimize] Bring users and acls under security into permission management (#1089) (#1154)
2023-10-18 09:40:07 +08:00
erge
1adfa639ac [Bugfix] Fix the JSON format in Connect JSON mode being inconsistent with the official API format (#1048) (#1153)
2023-10-18 09:39:21 +08:00
erge
3f817991aa [Bugfix] Fix the system-management sub-application failing to start (#1167) 2023-10-15 11:15:39 +08:00
EricZeng
3b72f732be [Optimize] Fix the delayed Controller display on the cluster Brokers page (#1162)
Approach:
Fetch in real time from Kafka instead of from the DB.
2023-09-27 14:05:45 +08:00
lucasun
e2ad3afe3d [Optimize] Connect operations were not under permission management; any user could restart, edit, or delete (#1050) (#1147)
2023-09-14 19:37:12 +08:00
lucasun
ae04ffdd71 [Bugfix] Fix incorrect Connect JMX port maintenance information (#1044) (#1146)
2023-09-14 19:36:33 +08:00
lucasun
cf9d5b6832 [Feature] Add a Truncate data feature (#1043) (#1145)
2023-09-14 19:35:30 +08:00
lucasun
9c418d3b38 [Feature] Add the ability to delete a Group or GroupOffset (#1040) (#1144)
2023-09-14 19:34:55 +08:00
erge
128b180c83 [Bugfix] Remove the redundant prompt shown when truncating data 2023-09-10 22:45:59 +08:00
erge
b60941abc8 [Bugfix] Remove the redundant prompt shown when editing a connect fails (#1044) 2023-09-10 22:31:20 +08:00
erge
1a42472fd8 [Optimize] Connect operations were not under permission management; any user could restart, edit, or delete (#1050) 2023-09-10 17:17:20 +08:00
erge
18e00f043e [Bugfix] Fix incorrect Connect JMX port maintenance information (#1044) 2023-09-10 16:46:14 +08:00
erge
6385889902 [Feature] Add the ability to delete a Group or GroupOffset (didi#1040) 2023-09-10 16:14:48 +08:00
erge
ea0c744677 [Feature] Add a Truncate data feature (#1043) 2023-09-10 13:51:38 +08:00
EricZeng
d1417bef8c [Bugfix] Fix the NPE in Connect cluster tasks after a Kafka cluster is deleted (#1129)
Cause:

After a Kafka cluster was deleted, its Connect clusters were not deleted from the DB. Then, during Connect cluster metric collection, the owning Kafka cluster no longer existed, which ultimately caused an NPE;

Fix:
Publish a Kafka-cluster-deleted event that triggers the MetaDataService subclasses to delete their data from the DB.

Remaining work:

MetaDataService is currently implemented in only some of the metadata-sync classes, so dirty data in the DB is not cleaned up completely; once MetaDataService is implemented in all metadata-sync classes, the data can be cleaned up fully.

PS: this fix already guarantees the NPE will not occur again.
2023-08-16 10:54:58 +08:00
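The event-driven cleanup described in the commit above can be sketched in plain Java; the ClusterDeletedEvent record and MetaDataCleaner interface below are illustrative stand-ins, not KnowStreaming's actual MetaDataService types:

```java
import java.util.ArrayList;
import java.util.List;

public class ClusterEventDemo {
    // Hypothetical event published when a Kafka cluster is removed.
    record ClusterDeletedEvent(long clusterPhyId) {}

    // Hypothetical listener interface; each metadata-sync class would implement it.
    interface MetaDataCleaner {
        void onClusterDeleted(ClusterDeletedEvent event);
    }

    // A cleaner that, in the real fix, would delete the Connect-cluster
    // rows belonging to the deleted Kafka cluster; here it just records the id.
    static class ConnectClusterCleaner implements MetaDataCleaner {
        final List<Long> deletedClusters = new ArrayList<>();
        @Override
        public void onClusterDeleted(ClusterDeletedEvent event) {
            deletedClusters.add(event.clusterPhyId());
        }
    }

    public static void main(String[] args) {
        ConnectClusterCleaner cleaner = new ConnectClusterCleaner();
        List<MetaDataCleaner> listeners = List.of(cleaner);
        // Deleting a cluster publishes one event; every registered cleaner reacts.
        ClusterDeletedEvent event = new ClusterDeletedEvent(42L);
        listeners.forEach(l -> l.onClusterDeleted(event));
        System.out.println(cleaner.deletedClusters);
    }
}
```

The design advantage noted in the commit follows directly: new metadata-sync classes only need to register a listener, and the deletion code never has to enumerate them.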
EricZeng
a7309612d5 [Optimize] Unify the DB metadata update format, part 2 (#1127)
1. Rename KafkaMetaService to MetaService and move it to the Core layer;
2. Adjust the ZK and KafkaACL formats;
2023-08-15 18:46:41 +08:00
EricZeng
6e56688a31 [Optimize] Unify the DB metadata update format, part 1 (#1125)
1. Introduce KafkaMetaService;
2. Update Connectors through KafkaMetaService;
3. Simplify the Connect-MirrorMaker association logic;
4. Add a timestamp to the ClientID of the AdminClient created by GroupService to reduce MBean conflicts;
2023-08-15 14:24:23 +08:00
EricZeng
a6abfb3ea8 [Doc] Add notes on startup failures (#1126) 2023-08-15 14:21:05 +08:00
chang-wd
ca696dd6e1 [Bugfix] Fix the NPE caused by setting auth-user-registration: false when logging in via LDAP (#1117)
Configure LDAP And Set auth-user-registration: false will result in NPE(Null Pointer Exception) #1116 

---------

Co-authored-by: weidong_chang <weidong_chang@intsig.net>
2023-08-08 14:49:25 +08:00
EricZeng
db40a5cd0a [Optimize] Add AdminClient observability information (#1111)
1. Add a ClientID to the AdminClient;
2. Add a timeout when closing;
3. Add logging for close errors;
2023-08-02 21:19:03 +08:00
EricZeng
55161e439a [Optimize] Add Connector running-state metrics (#1110)
1. Add Connector running-state metrics;
2. Report Connector metrics to Prometheus;
3. Adjust the code inheritance hierarchy;
2023-08-02 21:07:45 +08:00
chang-wd
bdffc10ca6 [Bugfix] Fix the LDAP user.getId() NPE (#1108)
NPE solved

Co-authored-by: weidong_chang <weidong_chang@intsig.net>
2023-08-02 12:15:40 +08:00
lucasun
b1892c21e2 Fix frontend issues such as failing to create a new role (#1107)
1. Fix the error when creating a role without selecting a system-management permission point;
2. Mask the values of sensitive fields in Connect configuration with asterisks;
3. Support manual refresh of the ConsumerGroup table on the Topic detail and ConsumerGroup detail pages;
4. In the Topic Message preview, fix offset 0 not displaying a value, and add sorting by offset;

---------

Co-authored-by: 孙超 <jacksuny@foxmail.com>
Co-authored-by: EricZeng <zengqiao_cn@163.com>
2023-08-01 16:34:30 +08:00
EricZeng
90e5492060 [Bugfix] Add a permission point for viewing User passwords (#1095)
Backend addition only; the related frontend code has not been added yet
2023-07-18 14:25:26 +08:00
ZQKC
b1aa12bfa5 Merge the Master branch 2023-07-07 13:09:28 +08:00
ZQKC
64cddb7912 Merge the Master branch 2023-07-07 13:01:27 +08:00
EricZeng
42195c3180 Add CI for the enterprise edition & demo environment branches 2023-07-07 12:45:27 +08:00
EricZeng
94b1e508fd Generate an installation package after the build
1. Rename files;
2. Generate an installation package after the build so users can download and use it directly;
2023-07-07 12:36:50 +08:00
EricZeng
dd3dcd37e9 [Feature] Add Group and GroupOffset deletion, part 2 (#1084)
1. Fix an incorrect version gate;
2. Add the related permission points;

PS: backend code only; the frontend is still to come.
2023-07-06 15:42:18 +08:00
EricZeng
0a6e9b7633 [Optimize] Improve JMX-related logging (#1082)
1. Unify the log format of the JMX client;
2. Log the information used when creating a JMX connector;
3. Tune log levels;
2023-07-06 15:39:51 +08:00
EricZeng
470e471cad Add build-all.yml 2023-07-05 16:12:12 +08:00
EricZeng
bd58b48bcb [Optimize] Rename the configs field of the Connector create/update APIs to config (#1080)
1. Stay consistent with native Kafka Connect;
2. Currently compatible: both configs and config are supported;
2023-07-05 13:43:19 +08:00
EricZeng
0cd071c5c6 [Optimize] Remove dynamic updating of a Connect cluster's clusterUrl (#1079)
Problem:
The dynamically updated clusterUrl could pick up a wrong address, causing requests for Connect cluster information to fail;

Fix:
Remove the dynamic update; only user input is supported;

Remaining work:
The frontend needs to support user input;
2023-07-05 11:55:16 +08:00
EricZeng
abaadfb9a8 [Optimize] Add proactive timeouts to Topic-Partitions (#1076)
Problem:
Fetching offset information for partitions with leader=-1 can take so long that the frontend times out, and then none of the page's data can be retrieved;

Fix:
The backend proactively times out some requests before the frontend does, so that not all information fails to reach the frontend;
2023-07-04 14:18:12 +08:00
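The proactive-timeout approach in the commit above can be sketched with CompletableFuture.completeOnTimeout; the method names, sleep durations, and the 200 ms budget are illustrative, not the project's actual code:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class PartitionOffsetTimeoutDemo {
    // Simulate fetching one partition's offset; a leaderless partition
    // (leader == -1) hangs far longer than the frontend is willing to wait.
    static CompletableFuture<Long> fetchOffset(int leader) {
        return CompletableFuture.supplyAsync(() -> {
            try {
                TimeUnit.MILLISECONDS.sleep(leader == -1 ? 10_000 : 10);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return 100L;
        });
    }

    public static void main(String[] args) {
        // The backend gives up after 200 ms and returns a placeholder (-1L),
        // so the rest of the page's data still reaches the frontend in time.
        Long healthy = fetchOffset(1).completeOnTimeout(-1L, 200, TimeUnit.MILLISECONDS).join();
        Long stuck   = fetchOffset(-1).completeOnTimeout(-1L, 200, TimeUnit.MILLISECONDS).join();
        System.out.println(healthy + " " + stuck);
    }
}
```

The key point is that the timeout is applied per slow request, not to the whole page, so one bad partition no longer blanks out everything else.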
EricZeng
49e7fea6d3 [Optimize] Add backend sorting by Partition and Offset on the Topic-Messages page (#1075) 2023-07-03 15:33:15 +08:00
EricZeng
d68a19679e [Optimize] Fetch the Group list's maxLag metric in real time (#1074)
1. Add call timeouts so responses return within the time budget the frontend needs;
2. Fetch the Group list's maxLag metric in real time;
2023-07-03 14:37:35 +08:00
HwiLu
75be94fbea [Doc] Add a troubleshooting guide for ZK showing no data (#1004)
Add a troubleshooting guide for ZK showing no data

---------

Co-authored-by: EricZeng <zengqiao_cn@163.com>
2023-06-29 21:51:08 +08:00
EricZeng
c11aa4fd17 [Bugfix] Fix missing backend permission points in the Security module (#1069)
1. Add the ACL-related permission points of the Security module;
2. Add the User-related permission points of the Security module;
2023-06-29 21:36:58 +08:00
lucasun
cb96fef1a5 [Bugfix] Consumer groups could not reset to the earliest offset (#1039) (#1059)
Fixed consumer groups not supporting a reset to the earliest offset
2023-06-29 11:03:44 +08:00
EricZeng
e98cfbcf91 [Bugfix] Fix Connect-Worker JMX not taking effect (#1067)
1. Add notes on JMX connection failures to the FAQ;
2. Fix Connect-Worker JMX not taking effect;
2023-06-28 15:59:13 +08:00
EricZeng
0140b2e898 [Optimize] Add restart, edit, delete, and other permission points for Connectors (#1066) 2023-06-27 16:46:47 +08:00
ZQKC
b3b7ab9f6b [Doc] Remove the requirement to include the issue-id in commit messages
1. With squash merge, the commit log no longer needs to carry the issue-id;
2. The id on the home page is changed to the PR id;
2023-06-27 14:48:07 +08:00
EricZeng
b34edb9b64 [Feature] Add the ability to delete a Group or GroupOffset (#1064)
Backend additions only, frontend not included:
1. Add Group deletion;
2. Add offset deletion at the Group-Topic level;
3. Add offset deletion at the Group-Topic-Partition level;
2023-06-27 14:32:57 +08:00
诸葛子房
c2bc0f788d [Feature] Add a Truncate data feature (#1062)
Add a Truncate data feature (#1043)

The backend part is complete; the frontend is still to come.

---------

Co-authored-by: duanxiaoqiu <duanxiaoqiu@qiyi.com>
2023-06-27 10:58:00 +08:00
SUZJ
3f518c9e63 [Bugfix] Fix the consumer group list displaying incorrectly in ACL permission management (#991) 2023-06-25 09:35:43 +08:00
Richard
7f7801a5f7 [Bugfix] Consumer groups could not reset to the earliest offset (#1039) 2023-06-20 10:56:22 +08:00
jeff-zou
e1e02f7c2a 日志输出增加支持MDC,方便用户在logback.xml中json格式化日志。
<appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <encoder charset="utf-8" class="net.logstash.logback.encoder.LogstashEncoder">
            <includeCallerData>true</includeCallerData>
            <customFields>{"system": "know-streaming"}
            </customFields>
        </encoder>
    </appender>
2023-06-18 13:04:54 +08:00
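Conceptually, MDC is a per-thread key-value map that the encoder merges into each log event. A minimal stdlib-only sketch of that idea (it mimics the shape of the slf4j MDC API, not its actual implementation, and the JSON format method is purely illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class MdcSketch {
    // Per-thread context map, like slf4j's MDC: values put on one thread
    // are invisible to other threads and ride along with every log call.
    private static final ThreadLocal<Map<String, String>> CTX =
            ThreadLocal.withInitial(HashMap::new);

    static void put(String key, String val) { CTX.get().put(key, val); }
    static String get(String key) { return CTX.get().get(key); }
    static void remove(String key) { CTX.get().remove(key); }

    // Roughly what a JSON encoder does: merge the thread's context into the event.
    static String format(String message) {
        return "{\"system\": \"know-streaming\", \"ctx\": " + CTX.get()
                + ", \"msg\": \"" + message + "\"}";
    }

    public static void main(String[] args) {
        put("clusterId", "1");
        System.out.println(format("collect metrics"));
        remove("clusterId");
    }
}
```

With real slf4j, the same pattern is `MDC.put(...)` before the work and `MDC.remove(...)` (or `MDC.clear()`) after, and LogstashEncoder emits the context fields automatically.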
ZQKC
c497e4cb2d [Doc] Add notes on connecting to a specific JMX port (#965)
1. Add notes on connecting to a specific JMX port;
2. Consolidate the JMX troubleshooting docs;
3. Remove dead images;
2023-06-15 17:53:19 +08:00
ZQKC
e34e3f3e3d [Feature] Support specifying a concrete JMX port per server (#965)
Changes:
1. When onboarding a cluster, JMX ports can be configured at Broker granularity;
2. JMX port precedence: specified Broker port > Broker port obtained from ZK > specified Cluster port;

Notes:
1. This is a backend-only change; the product UI has not been updated yet;
2023-06-02 14:27:19 +08:00
Fangzhibin
b3fd494398 [Optimize] Fix the Connect module having no default-selected metrics (#926) 2023-05-30 22:03:34 +08:00
Richard
ffc115cb76 [Bugfix] Fix the ES index create/delete infinite loop (#1021) 2023-05-30 18:03:03 +08:00
ZQKC
7bfe787e39 [Bugfix] Fix incompatibility with ZK standalone mode 2023-05-19 11:48:50 +08:00
ZQKC
2256e8bbdb [Bugfix] Fix Connect GroupDescription parsing failures (#1010)
1. First try parsing with the IncrementalCooperativeConnectProtocol protocol;
2. If that fails, fall back to the original behavior and parse with the ConnectProtocol protocol;
2023-05-16 12:34:58 +08:00
ZQKC
e975932d41 [Bugfix] Fix missing Partition metric tags in the Prometheus endpoint (#1013) 2023-05-12 13:53:58 +08:00
ZQKC
db044caf8b [Optimize] Improve Group metadata updates (#1005)
1. Skip the updateById operation when the Group metadata has not changed;
2. Delete stale Group information directly;
2023-04-26 22:50:16 +08:00
ZQKC
82fbea4e5f [Doc] Add usage notes for the zk_properties field (#995)
1. Add usage notes for the zk_properties field;
2. Add a Digest-MD5 authentication example;
3. Adjust the Kerberos authentication notes;
2023-04-23 11:36:52 +08:00
ZQKC
6aaa4b34b8 Add a link to the troubleshooting manual for pages with no data 2023-04-19 15:27:44 +08:00
ZQKC
3cb1f03668 [Optimize] Document the exception log seen when the ES cluster's shards are full
1. Adjust the docs' directory structure;
2. Add the exception log seen when the ES cluster's shards are full;
3. Emphasize where the ES logs are located;
2023-04-19 15:27:44 +08:00
william
e61c446410 Test git commit permissions. 2023-04-14 18:08:59 +08:00
ZQKC
9d0345c9cd bump jackson version to 2.13.5 2023-04-11 11:01:20 +08:00
ZQKC
62f870a342 [Optimize] Improve the tag name of the KS version in pom.xml
1. After the change, IDEA recognizes it; otherwise an error hint is shown persistently.
2023-04-11 10:54:37 +08:00
ZQKC
13641c00ba [Bugfix] Fix cluster onboarding failures caused by the Broker metadata parsing method not being called (#986) 2023-04-04 12:20:44 +08:00
zhaoli
9f6882cf0d [bugfix] Ignore ElectionNotNeededException during leader re-election and return success 2023-04-03 11:49:06 +08:00
ZQKC
d3cc0cb687 [Bugfix] Fix the ES password not taking effect in the Balance feature (#992) 2023-04-02 20:30:19 +08:00
ZQKC
769c2c0fbc [Bugfix] Fix a ConsumerAssignment type-conversion error
1. Problem
Converting a KSGroupDescription's KSMemberBaseAssignment object to KSMemberConsumerAssignment fails with a conversion error.

2. Cause
When returning a KSMemberDescription, KSPartialKafkaAdminClient skipped initializing the memberBaseAssignment object when a ConsumerGroup's memberAssignment.length() <= 0.

3. Fix
When memberAssignment.length() <= 0, explicitly set the KSMemberDescription's memberBaseAssignment to a KSMemberConsumerAssignment object.
2023-03-17 20:35:35 +08:00
ZQKC
c71865f623 [Bugfix] Fix ZK four-letter-word command parsing errors
1. When a four-letter-word command returns a float-formatted string, Long.valueOf() throws a format-conversion exception. For simpler handling, the value is now converted with ConvertUtil.string2Float().
2. Tidy the code touched during the change.
2023-03-17 20:15:05 +08:00
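The failure mode in the commit above is easy to reproduce: Long.valueOf throws NumberFormatException on a float-formatted string such as "1.5", while parsing as float accepts both integer- and float-formatted values. A small sketch of the approach, with Float.parseFloat standing in for the project's ConvertUtil.string2Float helper:

```java
public class FourLetterParseDemo {
    // Parse a metric value from ZK four-letter-word output (e.g. "mntr" lines).
    // Long.valueOf("1.5") would throw NumberFormatException, which was the bug;
    // parsing as float handles "3" and "1.5" alike.
    static Float parseMetric(String raw) {
        try {
            return Float.parseFloat(raw);
        } catch (NumberFormatException e) {
            return null; // not a numeric value at all
        }
    }

    public static void main(String[] args) {
        System.out.println(parseMetric("1.5"));  // 1.5
        System.out.println(parseMetric("3"));    // 3.0
        System.out.println(parseMetric("n/a"));  // null
    }
}
```

Returning null for non-numeric values is one possible policy; the real helper may log or skip such metrics instead.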
zengqiao
aa35965d7a Demo environment 2023-02-27 18:43:48 +08:00
zengqiao
77b87f1dbe Upgrade to enterprise edition 3.3.0 2023-02-24 17:52:27 +08:00
zengqiao
a82d7f594e Merge the 3.3.0 enterprise edition changes 2023-02-24 17:49:26 +08:00
zengqiao
cca7246281 Merge the 3.3.0 branch 2023-02-24 17:13:50 +08:00
zengqiao
258385dc9a Upgrade to version 3.3.0 2023-02-24 11:12:31 +08:00
zengqiao
65238231f0 Add 3.3.0 upgrade information 2023-02-24 11:11:12 +08:00
zengqiao
cb22e02fbe Add 3.3.0 changelog information 2023-02-24 11:10:42 +08:00
erge
aa0bec1206 [Optimize] Pin the lerna version in package.json and update package-lock.json (#957) 2023-02-23 20:14:04 +08:00
zengqiao
c56d8cfb0f Add rebalance / testing / license capabilities 2023-02-23 11:56:46 +08:00
wyb
793c780015 [Bugfix] Fix mm2 list request timeouts (#949)
Restructure the code
2023-02-23 11:17:48 +08:00
erge
ec6f063450 [Optimize] Remove internal addresses from package.json (#939) 2023-02-22 17:08:21 +08:00
zengqiao
f25c65b98b [Doc] Add contributor information 2023-02-22 14:00:52 +08:00
Luckywustone
2d99aae779 [Bugfix] Unclear ZK health-check logs make problems hard to locate #904
2023-02-22 13:41:02 +08:00
erge
a8847dc282 [Bugfix] Fix the package build failing (#940) 2023-02-22 11:58:33 +08:00
zengqiao
4852c01c88 [Feature] Add documentation about contributing code (#947)
1. Add the contributor list; please let us know if anyone is missing;
2. Add a contribution guide;
2023-02-22 11:53:00 +08:00
zengqiao
3d6f405b69 [Bugfix] Correct an outdated email address (#944)
[Bugfix] Correct wording (#944)
2023-02-22 11:52:40 +08:00
erge
18e3fbf41d [Optimize] Display the time and result of health-check items (didi#930) 2023-02-21 10:41:49 +08:00
erge
ae8cc3092b [Optimize] When creating/editing MM2, fetch Topics from the corresponding sourceKafka cluster instead of the current cluster & improve the MM2 create/edit input parameters (#894) 2023-02-21 10:41:44 +08:00
erge
5c26e8947b [Optimize] Change the drawer title text when creating MM2 via JSON (#894) 2023-02-21 10:41:37 +08:00
erge
fbe6945d3b [Bugfix] Fix abnormal leader node display on the zookeeper page (#873) 2023-02-21 10:41:25 +08:00
zengqiao
7dc8f2dc48 [Bugfix] Fix search not working in the Connector and MM2 lists (#928) 2023-02-21 10:40:05 +08:00
zengqiao
91c60ce72c [Bugfix] Fix the Controller host not showing for newly onboarded clusters (#927)
Cause:
1. For a newly onboarded cluster, no Broker information is stored in the DB yet, so the Broker lookup in the DB comes back empty when storing the Controller.

Fix:
1. Proactively fetch the Broker information once before storing the Controller in the DB.
2023-02-21 10:39:46 +08:00
zengqiao
687eea80c8 Add 3.3.0 changelog information 2023-02-16 14:51:43 +08:00
zengqiao
9bfe3fd1db Set the license to AGPL 2023-02-15 17:53:46 +08:00
shizeying
03f81bc6de [Bugfix] Drop the idx_cluster_phy_id index and add the idx_cluster_update_time index (#918) 2023-02-15 17:45:53 +08:00
slhu
eed9571ffa [Bugfix] Fix the data-type conversion error when parsing metric values returned after command execution, and the NPE when storing/reporting metrics (#912)
1. Change the data type of the zk_min_latency and zk_max_latency metrics to float
2. Use ConvertUtil.string2Float() for string-to-float type conversion
2023-02-15 16:20:39 +08:00
edengyuan_v
e4651ef749 [Optimize] Distinguish single vs. multiple selection for the cleanup policy when creating a Topic (#770) 2023-02-15 11:18:33 +08:00
zengqiao
f715cf7a8d Add 3.3.0 changelog information 2023-02-13 11:57:51 +08:00
wyb
fad9ddb9a1 fix: update the login page copy 2023-02-13 11:49:00 +08:00
wyb
b6e4f50849 fix: improve health-status details & Connector styling & add a fallback page when no MM2 task metrics exist 2023-02-13 11:49:00 +08:00
wyb
5c6911e398 [Optimize] Overview metric card display logic 2023-02-13 11:49:00 +08:00
wyb
a0371ab88b feat: add a Topic copy feature 2023-02-13 11:49:00 +08:00
wyb
fa2abadc25 feat: add Mirror Maker 2.0 (MM2) 2023-02-13 11:49:00 +08:00
zengqiao
f03460f3cd [Bugfix] Fix Broker Similar Config displaying incorrectly (#872) 2023-02-13 11:22:13 +08:00
zengqiao
b5683b73c2 [Optimize] Improve the initialization of the MySQL & ES test containers (#906)
Main changes:
1. The knowstreaming/knowstreaming-manager container;
2. The knowstreaming/knowstreaming-mysql container is replaced with the mysql:5.7 container;
3. After the mysql:5.7 container is initialized, add a step that initializes the MySQL tables and data;

Affected changes:
1. Move the MySQL initialization scripts under km-dist/init/sql to km-persistence/src/main/resource/sql so the required initialization SQL is loaded when testing the project;
2. Delete the unused km-dist/init/template directory;
3. Because of the adjustments to the km-dist/init/sql and km-dist/init/template directories, also adjust the file contents of ReleaseKnowStreaming.xml;
2023-02-13 10:33:40 +08:00
zengqiao
c062586c7e [Optimize] Delete unused & redundant packaging configuration files 2023-02-10 16:51:32 +08:00
fengqiongfeng
98a5c7b776 [Optimize] Improve health-check logging (#869) 2023-02-10 11:02:24 +08:00
zengqiao
e204023b1f [Feature] Add a cluster-list API supporting Topic copy (#899) 2023-02-09 17:03:28 +08:00
zengqiao
4c5ffccc45 [Optimize] Delete dead code 2023-02-09 17:00:50 +08:00
zengqiao
fbcf58e19c [Feature] MM2 management: improve Connector metadata management (#894) 2023-02-09 16:59:38 +08:00
zengqiao
e5c6d00438 [Feature] MM2 management: add cluster Group list information (#894) 2023-02-09 16:59:38 +08:00
zengqiao
ab6a4d7099 [Feature] MM2 management: MM2 management API classes (#894) 2023-02-09 16:59:38 +08:00
zengqiao
78b2b8a45e [Feature] MM2 management: MM2 management business classes (#894) 2023-02-09 16:59:38 +08:00
zengqiao
add2af4f3f [Feature] MM2 management: MM2 management service classes (#894) 2023-02-09 16:59:38 +08:00
zengqiao
235c0ed30e [Feature] MM2 management: MM2 management entity classes (#894) 2023-02-09 16:59:38 +08:00
zengqiao
5bd93aa478 [Bugfix] Fix cluster status statistics being wrong under normal conditions (#865) 2023-02-09 16:44:26 +08:00
zengqiao
f95be2c1b3 [Optimize] Have TaskResult also return task grouping information 2023-02-09 16:36:19 +08:00
zengqiao
5110b30f62 [Feature] MM2 management: MM2 health checks (#894) 2023-02-09 15:36:35 +08:00
zengqiao
861faa5df5 [Feature] HA: mirror Topic management (#899)
1. The underlying Kafka must be Didi's Kafka distribution;
2. Add create/read/update/delete for mirror Topics;
3. Add metric viewing for mirror Topics;
2023-02-09 15:21:23 +08:00
zengqiao
efdf624c67 [Feature] HA: compatibility with Didi Kafka version information (#899) 2023-02-09 15:21:23 +08:00
zengqiao
caccf9cef5 [Feature] MM2 management: task for collecting MM2 metrics (#894) 2023-02-09 14:58:34 +08:00
zengqiao
6ba3dceb84 [Feature] MM2 management: collect MM2 metrics (#894) 2023-02-09 14:58:34 +08:00
zengqiao
9b7c41e804 [Feature] MM2 management: read/write MM2 metrics in ES (#894) 2023-02-09 14:58:34 +08:00
zengqiao
346aee8fe7 [Bugfix] Fix errors when the Topic metrics dashboard fetches TopN metrics (#896)
1. Replace ES-side sorting with sorting based on the local cache;
2. Move the database local cache from the core module to the persistence module;
2023-02-09 14:20:02 +08:00
zengqiao
353d781bca [Feature] Add MM2-related indices and database table information (#894) 2023-02-09 13:44:40 +08:00
EricZeng
3ce4bf231a Fix an incorrect conditional check
Co-authored-by: haoqi123 <49672871+haoqi123@users.noreply.github.com>
2023-02-09 11:28:26 +08:00
EricZeng
d046cb8bf4 Fix an incorrect conditional check
Co-authored-by: haoqi123 <49672871+haoqi123@users.noreply.github.com>
2023-02-09 11:28:26 +08:00
zengqiao
da95c63503 [Optimize] Improve the TestContainers-related dependencies (#892)
1. Remove the dependency on mysql-connector-j;
2. Tidy up the code;
2023-02-09 11:28:26 +08:00
haoqi
915e48de22 [Optimize] Add usage notes for Testcontainers (#890) 2023-02-09 11:05:44 +08:00
_haoqi
256f770971 [Feature]Support running tests with testcontainers(#870) 2023-02-08 14:56:44 +08:00
zengqiao
16e251cbe8 Adjust the open-source license 2023-02-08 14:10:37 +08:00
zengqiao
67743b859a [Optimize] Add configuration notes for Ldap login (#888) 2023-02-08 13:51:45 +08:00
congchen0321
c275b42632 Update faq.md 2023-02-08 13:41:08 +08:00
zengqiao
a02760417b [Optimize] Add default-displayed metrics to the ZK Overview page (#874) 2023-01-30 13:18:06 +08:00
zengqiao
0e50bfc5d4 Improve the PR template 2023-01-13 16:04:25 +08:00
wuyouwuyoulian
eab988e18f For #781, Fix "The partition display is incomplete" bug 2023-01-12 11:03:30 +08:00
zengqiao
dd6004b9d4 [Bugfix] Fix incorrect parameter passing when collecting replica metrics (#867) 2023-01-11 18:00:21 +08:00
zengqiao
ac7c32acd5 [Optimize] Improve the ES index and template initialization docs (#832)
1. Fix inconsistent shard counts for the index templates across different places;
2. Delete the redundant template.sh and use init_es_template.sh everywhere;
3. In init_es_template.sh, add initialization scripts for the connect-related index templates and delete those for the replica and zookeeper index templates;
2023-01-09 15:18:41 +08:00
zengqiao
f4a219ceef [Optimize] Remove the code that reads/writes Replica metrics from ES (#862) 2023-01-09 14:57:38 +08:00
zengqiao
a8b56fb613 [Bugfix] Fix the user list throwing an NPE after user information is modified (#860) 2023-01-09 14:57:23 +08:00
zengqiao
2925a20e8e [Bugfix] Fix partition selection not taking effect when viewing messages (#858) 2023-01-09 13:38:10 +08:00
zengqiao
6b3eb05735 [Bugfix] Fix ZK client configuration not taking effect (#694)
1. Fix ZK client configuration written to the zk_properties field of the ks_km_physical_cluster table not taking effect.
2. Remove the jmxConfig field, which is not needed for now, from the zk_properties field.
2023-01-09 10:44:35 +08:00
zengqiao
17e0c39f83 [Optimize] Improve Topic health-check logging (#855) 2023-01-06 14:42:08 +08:00
zengqiao
4994639111 [Optimize] Omit ZK from health-check details when there is no ZK module (#764) 2023-01-04 10:32:18 +08:00
wyb
c187b5246f [Bugfix] Fix missing metrics in the connector metric filter (#846) 2022-12-23 16:19:34 +08:00
wyb
6ed6d5ec8a [Bugfix] Fix user update failures (#840) 2022-12-22 15:56:48 +08:00
wyb
0735b332a8 [Bugfix] Fix an incorrect function mapping (#842) 2022-12-22 08:48:59 +08:00
wyb
344cec19fe [Bugfix] Fix incorrect max-value computation in connector metric collection (#836) 2022-12-20 09:50:42 +08:00
zengqiao
6ef365e201 bump version to 3.2.0 2022-12-16 13:58:40 +08:00
zengqiao
edfa6a9f71 Adjust the v3.2 containerized deployment information 2022-12-16 13:39:51 +08:00
孙超
860d0b92e2 V3.2 2022-12-16 13:27:09 +08:00
zengqiao
5bceed7105 [Optimize] Reduce the default ES index shard count 2022-12-15 14:44:18 +08:00
zengqiao
44a2fe0398 Add 3.2.0 upgrade information 2022-12-14 14:14:35 +08:00
zengqiao
218459ad1b Add 3.2.0 changelog information 2022-12-14 14:14:20 +08:00
zengqiao
7db757bc12 [Optimize] Improve the input parameters for Connector creation
1. Add a default value for config.action.reload;
2. Add a default value for errors.tolerance;
2022-12-14 14:12:32 +08:00
zengqiao
896a943587 [Optimize] Shorten the default ES index retention to 15 days 2022-12-14 14:10:46 +08:00
zengqiao
cd2c388e68 [Optimize] Address Sonar code-scan findings 2022-12-14 14:07:30 +08:00
wyb
4543a339b7 [Bugfix] Fix an array-out-of-bounds error in job updates (#744) 2022-12-14 13:56:29 +08:00
zengqiao
1c4fbef9f2 [Feature] Support deploying the API service and the Job service separately (#829)
1. The JMX check is required by every KS instance, so move it from the Task module to the Core module;
2. Add a global switch field for Task module jobs to application.yml;
2022-12-09 16:11:03 +08:00
zengqiao
b2f0f69365 [Optimize] Improve the Overview page's TopN ES query flow (#823)
1. Reuse the thread pool, and make its thread count configurable;
2. Avoid the duplicate queries that could occur when querying TopN metrics;
3. Address issues reported by the code scan (SonarLint);
2022-12-09 14:39:17 +08:00
wyb
c4fb18a73c [Bugfix] Fix inconsistent migration task status (#815) 2022-12-08 17:13:14 +08:00
zengqiao
5cad7b4106 [Bugfix] Fix the blank screen on the cluster Topic list page (#819)
The health-state mapping of the cluster Topic list was wrong, causing a blank screen whenever a health-state metric was present.
2022-12-07 16:27:27 +08:00
zengqiao
f3c4133cd2 [Bugfix] Query the most recent Topic metric from ES in batches (#817) 2022-12-07 16:15:01 +08:00
zengqiao
d9c59cb3d3 Add Connect REST APIs 2022-12-07 10:20:02 +08:00
zengqiao
7a0db7161b Add Connect business-layer methods 2022-12-07 10:20:02 +08:00
zengqiao
6aefc16fa0 Add Connect-related tasks 2022-12-07 10:20:02 +08:00
zengqiao
186dcd07e0 Add 3.2 upgrade information 2022-12-07 10:20:02 +08:00
zengqiao
e8652d5db5 Connect-related code 2022-12-07 10:20:02 +08:00
zengqiao
fb5964af84 Add kafka-connect-related packages 2022-12-07 10:20:02 +08:00
zengqiao
249fe7c700 Move ES-related files & add connectESDAO-related classes 2022-12-07 10:20:02 +08:00
zengqiao
cc2a590b33 Add a custom KSPartialKafkaAdminClient
The native KafkaAdminClient filters out a Connect cluster's Groups when parsing Groups, so the custom KSPartialKafkaAdminClient is added to make fetching Connect Groups possible
2022-12-07 10:20:02 +08:00
zengqiao
5b3f3e5575 Move the code that writes metrics into ES 2022-12-07 10:20:02 +08:00
wyb
36cf285397 [Bug] Fix incorrect database selection in the logi-security module (#808) 2022-12-06 20:02:49 +08:00
zengqiao
4386563c2c Adjust the default elapsed-time value of metric collection so it is visible immediately when viewing Top metrics 2022-12-06 16:47:53 +08:00
zengqiao
0123ce4a5a Improve the JMX port value returned in the Broker list 2022-12-06 16:47:07 +08:00
zengqiao
c3d47d3093 Pool KafkaAdminClient instances to avoid KafkaAdminClient performance problems 2022-12-06 16:46:11 +08:00
zengqiao
9735c4f885 Remove metrics that were collected twice 2022-12-06 16:41:27 +08:00
zengqiao
3a3141a361 Adjust the ZK metric collection time 2022-12-06 16:40:52 +08:00
zengqiao
ac30436324 [Bugfix] Fix a deadlock when updating health-check results (#728) 2022-12-05 16:30:37 +08:00
zengqiao
7176e418f5 [Optimize] Improve the computation of health-check metrics (#726)
1. Add caching to reduce IO when computing health-state metrics;
2. Run health checks concurrently per resource dimension;
3. Clarify the functional boundary between HealthCheckResultService and HealthStateService;
2022-12-05 16:26:31 +08:00
zengqiao
ca794f507e [Optimize] Standardize the log output format (#800)
Change the log output configuration so that emitted logs automatically carry class={className}, so this part no longer has to be written in future code.
2022-12-05 14:27:02 +08:00
zengqiao
0f8be4fadc [Optimize] Improve log output & unify local cache management (#800) 2022-12-05 14:04:19 +08:00
zengqiao
7066246e8f [Optimize] Stagger collection task trigger times to reduce timeouts when fetching offset information (#726)
Currently all metric collection tasks fire exactly on the minute, so they request partition offset information from Kafka at the same time, which:
1. produces too many requests, causing timeouts;
2. may fetch offset information for the same partition more than once;

So the tasks are staggered.
2022-12-05 13:49:35 +08:00
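One common way to stagger triggers like this is to derive a per-task delay from a stable hash of the task key, so each task keeps the same slot across restarts. This sketch, with an assumed 60-second window, is illustrative only and not the project's actual scheduler code:

```java
public class StaggerDemo {
    // Spread task triggers across a window instead of firing them all
    // exactly on the minute; identical keys always map to the same slot.
    static int staggerSeconds(String taskKey, int windowSeconds) {
        // floorMod keeps the result non-negative even for negative hash codes.
        return Math.floorMod(taskKey.hashCode(), windowSeconds);
    }

    public static void main(String[] args) {
        for (String task : new String[]{"topic-offset", "group-offset", "broker-metric"}) {
            // Each task gets a deterministic delay in [0, 60).
            System.out.println(task + " -> +" + staggerSeconds(task, 60) + "s");
        }
    }
}
```

The delay would then be added to the scheduled start time, so tasks that used to collide now hit Kafka at different seconds within the minute.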
zengqiao
7d1bb48b59 [Optimize] Improve ZK four-letter-word command parsing logs (#805)
Handle previously missed metric names to reduce that part of the warn logs
2022-12-05 13:39:26 +08:00
limaiwang
dd0d519677 [Optimize] Update the directory-structure search copy on the Zookeeper detail page (#793) 2022-12-05 12:15:03 +08:00
zengqiao
4293d05fca [Optimize] Improve the Topic metadata update strategy (#806) 2022-12-04 17:55:27 +08:00
zengqiao
2c82baf9fc [Optimize] Metric collection performance optimization, part 1 (#726) 2022-12-04 15:41:48 +08:00
zengqiao
921161d6d0 [Bugfix] Fix ReplicaMetricCollector compilation failures (#802) 2022-12-03 14:34:38 +08:00
zengqiao
e632c6c13f [Optimize] Address Sonar scan findings 2022-12-02 15:34:28 +08:00
zengqiao
5833a8644c [Optimize] Disable errorLogger and remove useless output (#801) 2022-12-02 15:29:17 +08:00
zengqiao
fab41e892f [Optimize] Unify log format & improve output content, part 3 (#800) 2022-12-02 15:14:21 +08:00
zengqiao
7a52cf67b0 [Optimize] Unify log format & improve output content, part 2 (#800) 2022-12-02 15:01:24 +08:00
zengqiao
175b8d643a [Optimize] Unify the log format, part 1 (#800) 2022-12-02 14:39:57 +08:00
zengqiao
6241eb052a [Bugfix] Fix the incorrect logger in the KafkaJMXClient class (#794) 2022-11-30 11:15:00 +08:00
zengqiao
c2fd0a8410 [Optimize] Clean up non-conforming code flagged by the Sonar scan 2022-11-29 20:54:41 +08:00
zengqiao
5127b600ec [Optimize] Improve ESClient's concurrent access control (#787) 2022-11-29 10:47:57 +08:00
zengqiao
feb03aede6 [Optimize] Improve thread pool names (#789) 2022-11-28 15:11:54 +08:00
duanxiaoqiu
47b6c5d86a [Bugfix] Fix topic creation so the cleanup policy (before kafka 0.10.1.0) allows only one of compact and delete (didi#770) 2022-11-27 14:18:50 +08:00
SimonTeo58
c4a81613f4 [Optimize] Update the Topic-Messages drawer copy (#771) 2022-11-24 21:54:29 +08:00
limaiwang
daeb5c4cec [Bugfix] Fix parameter-validation errors when the cluster configuration is left empty 2022-11-24 15:30:01 +08:00
WangYaobo
38def45ad6 [Doc] Add a no-data troubleshooting document (#773) 2022-11-24 10:44:37 +08:00
pen4
4b29a2fdfd update org.springframework:spring-context 5.3.18 to 5.3.19 2022-11-23 11:38:11 +08:00
zengqiao
a165ecaeef [Bugfix]修复Broker&Topic修改时,版本设置错误问题(#762)
Kafka v2.3增加了增量修改配置的功能,但是KS中错误的将其配置为0.11.0版本就具备该能力,因此对其进行调整。
2022-11-21 15:56:33 +08:00
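For context: Kafka's AdminClient gained `incrementalAlterConfigs` in 2.3 (KIP-339); older brokers only support the legacy full-replace `alterConfigs`. A hypothetical version gate of the kind this fix implies (illustrative only, not the actual KS code):

```java
public class ConfigAlterSupport {
    // Kafka 2.3.0 introduced AdminClient#incrementalAlterConfigs; older
    // brokers must fall back to the legacy (full-replace) alterConfigs.
    static boolean supportsIncrementalAlter(String brokerVersion) {
        String[] v = brokerVersion.split("\\.");
        int major = Integer.parseInt(v[0]);
        int minor = v.length > 1 ? Integer.parseInt(v[1]) : 0;
        return major > 2 || (major == 2 && minor >= 3);
    }

    public static void main(String[] args) {
        System.out.println(supportsIncrementalAlter("0.11.0")); // the old, wrong threshold
        System.out.println(supportsIncrementalAlter("2.3.1"));
    }
}
```

The bug described above amounts to this check returning true from 0.11.0 onward instead of from 2.3.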
night.liang
6637ba4ccc [Optimize] optimize zk OutstandingRequests checker’s exception log (#738) 2022-11-18 17:12:07 +08:00
duanxiaoqiu
2f807eec2b [Feat] Change health score to health state in the Topic list (#758) 2022-11-18 13:56:27 +08:00
石臻臻的杂货铺
636c2c6a83 Update README.md 2022-11-17 13:33:40 +08:00
zengqiao
898a55c703 [Bugfix] Fix wrong name used when storing the Broker-list LogSize metric (#759) 2022-11-17 13:27:45 +08:00
zengqiao
8ffe7e7101 [Bugfix] Fix missing Group metrics in Prometheus (#756) 2022-11-14 13:33:16 +08:00
zengqiao
7661826ea5 [Optimize] Add ClusterParam to health checks so Kafka- and Connect-related check tasks can be split 2022-11-10 16:24:39 +08:00
zengqiao
e456be91ef [Bugfix] Reload JMX when a cluster's JMX configuration changes 2022-11-10 16:04:40 +08:00
zengqiao
da0a97cabf [Optimize] Restructure Task code in preparation for the Connector feature 2022-11-09 10:28:52 +08:00
zengqiao
c1031a492a [Optimize] Add ES index deletion 2022-11-09 10:28:52 +08:00
zengqiao
3c8aaf528c [Bugfix] Fix wrong cluster count returned because of missing metrics (#741) 2022-11-09 10:28:52 +08:00
黄海婷
70ff20a2b0 styles: hover style for the cardBar card-title icon 2022-11-07 10:38:28 +08:00
黄海婷
6918f4babe styles: add hover background color to the custom-column button in the job list 2022-11-07 10:38:28 +08:00
黄海婷
805a704d34 styles: some icons need a background color on hover 2022-11-07 10:38:28 +08:00
黄海婷
c69c289bc4 styles: some icons need a background color on hover 2022-11-07 10:38:28 +08:00
zengqiao
dd5869e246 [Optimize] Restructure code in preparation for the Connect feature 2022-11-07 10:13:26 +08:00
Richard
b51ffb81a3 [Bugfix] No thread-bound request found. (#743) 2022-11-07 10:06:54 +08:00
黄海婷
ed0efd6bd2 styles: change font color #adb5bc to #74788D 2022-11-03 16:49:35 +08:00
黄海婷
39d2fe6195 styles: bold the hint below the message-size test dialog 2022-11-03 16:49:35 +08:00
黄海婷
7471d05c20 styles: adjust the font of the character count in the message-size test dialog 2022-11-03 16:49:35 +08:00
黄海婷
3492688733 feat: add a hover tooltip to the Consumer list refresh button 2022-11-01 17:37:37 +08:00
Sean
a603783615 [Optimize] Add flatten.xml to .gitignore in preparation for introducing flatten (#732) 2022-11-01 14:16:53 +08:00
night.liang
5c9096d564 [Bugfix] fix replica dsl (#708) 2022-11-01 10:45:59 +08:00
zengqiao
c27786a257 bump version to 3.1.0 2022-10-31 14:55:50 +08:00
zengqiao
81910d1958 [Hotfix] Fix NPE on the health-state page for newly onboarded clusters 2022-10-31 14:55:22 +08:00
zengqiao
55d5fc4bde Add v3.1.0 changelog entries 2022-10-31 14:05:42 +08:00
GraceWalk
f30586b150 fix: default to the taobao mirror for dependency installation 2022-10-29 13:55:36 +08:00
GraceWalk
37037c19f0 fix: update how version info is fetched 2022-10-29 13:55:36 +08:00
GraceWalk
1a5e2c7309 fix: improve error pages 2022-10-29 13:55:36 +08:00
GraceWalk
941dd4fd65 feat: support the Zookeeper module 2022-10-29 13:55:36 +08:00
GraceWalk
5f6df3681c feat: improve health-state display 2022-10-29 13:55:36 +08:00
zengqiao
7d045dbf05 Add ZK health-check tasks 2022-10-29 13:55:07 +08:00
zengqiao
4ff4accdc3 Add 3.1.0 upgrade notes 2022-10-29 13:55:07 +08:00
zengqiao
bbe967c4a8 Add a multi-cluster health-state overview 2022-10-29 13:55:07 +08:00
zengqiao
b101cec6fa Change health score to health state 2022-10-29 13:55:07 +08:00
zengqiao
e98ec562a2 Add the current node path to Znode info 2022-10-29 13:55:07 +08:00
zengqiao
0e71ecc587 Extend the expiry time of health-check results 2022-10-29 13:55:07 +08:00
zengqiao
0f11a65df8 Add a method to get the ZK namespace 2022-10-29 13:55:07 +08:00
zengqiao
da00c8c877 Restore the error message shown when a consumer-group reset fails 2022-10-29 13:55:07 +08:00
hongtenzone@foxmail.com
8b177877bb Add release notes 2022-10-28 15:35:26 +08:00
hongtenzone@foxmail.com
ea199dca8d Add release notes 2022-10-28 15:35:26 +08:00
renxiangde
88b5833f77 [Bugfix] Fix "Topic does not exist" when viewing Topic-Messages immediately after creating a Topic (#697) 2022-10-27 11:04:26 +08:00
zwen
127b5be651 [fix]Repair that preferredReplicaElection is not called as expected 2022-10-27 10:15:15 +08:00
Mengqi777
80f001cdd5 [ISSUE #723]Ignore error and continue to package km-rest if no git directory 2022-10-26 10:14:14 +08:00
zengqiao
30d297cae1 bump version to 3.1.0-SNAPSHOT 2022-10-21 17:13:02 +08:00
zengqiao
a96853db90 bump version to v3.0.1 2022-10-21 15:02:09 +08:00
zengqiao
c1502152c0 Revert "bump version to 3.1.0"
This reverts commit 7b5c2d80
2022-10-21 14:59:42 +08:00
GraceWalk
afda292796 fix: update typescript version 2022-10-21 14:47:01 +08:00
GraceWalk
163cab78ae fix: copy & style tweaks 2022-10-21 14:47:01 +08:00
GraceWalk
8f4ff36c09 fix: improve Topic partition-expansion name & description display 2022-10-21 14:47:01 +08:00
GraceWalk
47b6b3577a fix: Broker list jmxPort column shows connection status 2022-10-21 14:47:01 +08:00
GraceWalk
f3eca3b214 fix: refactor ConsumerGroup list & detail pages 2022-10-21 14:47:00 +08:00
GraceWalk
62f7d3f72f fix: chart logic & display improvements 2022-10-21 14:47:00 +08:00
GraceWalk
26e60d8a64 fix: improve global Message & Notification display 2022-10-21 14:47:00 +08:00
zengqiao
df655a250c Add v3.0.1 changelog 2022-10-21 14:36:29 +08:00
zengqiao
811fc9b400 Add v3.0.1 upgrade notes 2022-10-21 14:32:57 +08:00
zengqiao
83df02783c Remove docs-related files from the installation package 2022-10-21 14:32:07 +08:00
zengqiao
6a5efce874 [Bugfix] Fix exception caused by key conflicts when converting the metric-version list to a map 2022-10-21 12:06:22 +08:00
zengqiao
fa0ae5e474 [Optimize] Add JMX connection status to the cluster Broker list
1. When a page shows no data, a common cause is a failed JMX connection;
2. Showing connection status in the Broker list makes this easier to troubleshoot;
2022-10-21 12:03:19 +08:00
zengqiao
cafd665a2d [Optimize] Remove the Replica metric collection task
1. When a cluster has many replicas, metric collection performance degrades severely;
2. Replica metrics are mostly needed only on demand, so the collection task is disabled for now and can be re-enabled later if the product requires it;
2022-10-21 11:49:58 +08:00
zengqiao
e8f77a456b [Optimize] Improve ZK metric fetching to reduce duplicate collection (#709)
1. Avoid collecting the same metrics twice when different clusters share the same ZK address;
2. When a ZK address fails to return metrics, stop retrying that address in the next cycle;
2022-10-21 11:26:07 +08:00
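One way to picture the dedup in this commit (illustrative only, with invented names; not the KS code): key collection by ZK address rather than by cluster, and skip addresses that failed in the previous cycle.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class ZkCollectPlanner {
    // Given cluster -> ZK address, collect each distinct address once,
    // skipping addresses that failed in the last cycle.
    static Set<String> addressesToCollect(Map<String, String> clusterToZk,
                                          Set<String> failedLastCycle) {
        Set<String> targets = new HashSet<>(clusterToZk.values()); // dedup shared addresses
        targets.removeAll(failedLastCycle);                        // back off failed ones
        return targets;
    }

    public static void main(String[] args) {
        Map<String, String> clusters = new HashMap<>();
        clusters.put("cluster-a", "zk1:2181");
        clusters.put("cluster-b", "zk1:2181"); // shares ZK with cluster-a
        clusters.put("cluster-c", "zk2:2181");
        Set<String> failed = new HashSet<>(List.of("zk2:2181"));
        System.out.println(addressesToCollect(clusters, failed)); // only zk1:2181 remains
    }
}
```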
_haoqi
4510c62ebd [ISSUE #677] Restart causes NPE in parts of info collection 2022-10-20 15:36:32 +08:00
zengqiao
79864955e1 [Feature] Display the cluster Group list by Group dimension (#580) 2022-10-20 13:29:43 +08:00
Richard
ff26a8d46c fix issue:
* [issue #700] Adjust the prompt and replace the Arrays.asList() with the Collections.singletonList()
2022-10-19 15:19:43 +08:00
dianyang12138
cc226d552e fix: fix ES template error 2022-10-19 11:44:00 +08:00
EricZeng
962f89475b Merge pull request #699 from silent-night-no-trace/dev
[ISSUE #683]  fix ldap bug
2022-10-19 10:23:47 +08:00
night.liang
ec204a1605 fix ldap bug 2022-10-18 20:16:40 +08:00
早晚会起风
58d7623938 Merge pull request #696 from chenzhongyu11/dev
[ISSUE #672] Fix wrong time display in health-check results
2022-10-18 10:41:47 +08:00
EricZeng
8f4ecfcdc0 Merge pull request #691 from didi/dev
Add the Kafka-Group table
2022-10-17 20:30:32 +08:00
zengqiao
ef719cedbc Add the Kafka-Group table 2022-10-17 10:34:21 +08:00
EricZeng
b7856c892b Merge pull request #690 from didi/master
Merge default branch
2022-10-17 10:30:18 +08:00
EricZeng
7435a78883 Merge pull request #689 from didi/dev
Fix deadlock when replacing health-check results
2022-10-17 10:26:11 +08:00
chenzy
f49206b316 Fix wrong time display: switch from 12-hour to 24-hour format 2022-10-16 22:57:50 +08:00
EricZeng
7d500a0721 Merge pull request #684 from RichardZhengkay/dev
fix issue: [#662]
2022-10-15 14:39:37 +08:00
EricZeng
98a519f20b Merge pull request #682 from haoqi123/fix_678
[ISSUE #678] zk Latency avg with multiple decimal places throws NPE
2022-10-15 14:17:23 +08:00
Richard
39b655bb43 fix issue:
* [issue #662] Fix deadlocks caused by adding data using MySQL's REPLACE method
2022-10-14 14:03:16 +08:00
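The deadlocks behind issue #662 come from MySQL's REPLACE, which resolves a unique-key conflict as a delete plus an insert and, under InnoDB, can take locks that collide when sessions race on nearby key values. A hypothetical illustration of the safer rewrite (table and column names invented for the example):

```sql
-- REPLACE = delete + insert; concurrent sessions hitting conflicting
-- unique-key values can deadlock on the locks this takes.
REPLACE INTO health_check_result (cluster_id, check_name, passed)
VALUES (1, 'controller-alive', 1);

-- An upsert updates the row in place on conflict, avoiding the delete:
INSERT INTO health_check_result (cluster_id, check_name, passed)
VALUES (1, 'controller-alive', 1)
ON DUPLICATE KEY UPDATE passed = VALUES(passed);
```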
_haoqi
78d56a49fe Fix numeric-conversion exception when zk Latency avg is a decimal 2022-10-14 11:53:48 +08:00
EricZeng
d2e9d1fa01 Merge pull request #673 from didi/dev
fix [ISSUE-666] Error in ks_km_zookeeper table role type #666
2022-10-13 18:57:06 +08:00
zengqiao
41ff914dc3 Fix wrong type of the role field in the ZK metadata table 2022-10-13 18:50:41 +08:00
shirenchuang
3ba447fac2 update readme 2022-10-13 18:49:06 +08:00
shirenchuang
e9cc380a2e update readme 2022-10-13 18:30:13 +08:00
EricZeng
017cac9bbe Merge pull request #670 from RichardZhengkay/dev
fix issue: [#666]
2022-10-13 18:25:15 +08:00
Richard
9ad72694af fix issue:
* [issue #666] Fix the type of role phase in ks_km_zookeeper table
2022-10-13 18:00:43 +08:00
shirenchuang
e8f9821870 Merge remote-tracking branch 'origin/master' 2022-10-13 16:31:03 +08:00
shirenchuang
bb167b9f8d update readme 2022-10-13 15:31:34 +08:00
石臻臻的杂货铺
28fbb5e130 Merge pull request #665 from zwOvO/patch-1
[ISSUE #664] Fix the hyperlink to "resolving JMX connection failures"
2022-10-13 10:17:29 +08:00
EricZeng
16101e81e8 Merge pull request #661 from didi/dev
Merge dev branch
2022-10-13 10:16:14 +08:00
赤月
aced504d2a Update faq.md 2022-10-12 22:08:29 +08:00
shirenchuang
abb064d9d1 update readme: add who's using Know Streaming 2022-10-12 19:15:19 +08:00
zengqiao
dc1899a1cd Fix missing service-status field in the cluster ZK list response 2022-10-12 16:45:47 +08:00
zengqiao
442f34278c Include ZK metrics in the returned metric info 2022-10-12 16:44:07 +08:00
zengqiao
a6dcbcd35b Remove unused imports 2022-10-12 16:43:16 +08:00
zengqiao
2b600e96eb Improve health-check tasks 2022-10-12 16:41:27 +08:00
zengqiao
177bb80f31 Add ES username/password options to application.yml 2022-10-12 16:36:04 +08:00
zengqiao
63fbe728c4 Report ZK metrics to Prometheus 2022-10-12 11:11:25 +08:00
EricZeng
b33020840b Add a service-liveness statistics method to ZookeeperService (#659) 2022-10-12 11:07:52 +08:00
zengqiao
c5caf7c0d6 Add a service-liveness statistics method to ZookeeperService 2022-10-12 11:02:41 +08:00
EricZeng
0f0473db4c Add a float-to-integer conversion method (#658)
2022-10-12 10:09:16 +08:00
zengqiao
beadde3e06 Add a float-to-integer conversion method 2022-10-11 18:46:16 +08:00
EricZeng
a423a20480 Fix missing metrics when fetching TopN Broker metrics (#657)
2022-10-11 18:44:02 +08:00
shirenchuang
79f0a23813 update contributor document 2022-10-11 17:38:15 +08:00
zengqiao
780fdea2cc Fix missing metrics when fetching TopN Broker metrics 2022-10-11 16:54:39 +08:00
shirenchuang
1c0fda1adf Merge remote-tracking branch 'origin/master' 2022-10-11 10:39:08 +08:00
EricZeng
9cf13e9b30 Add a Broker liveness API (#654)
2022-10-10 19:56:12 +08:00
zengqiao
87cd058fd8 Add a Broker liveness API 2022-10-10 19:54:47 +08:00
EricZeng
81b1ec48c2 Update the contributor list (#653)
2022-10-10 19:52:50 +08:00
zengqiao
66dd82f4fd Update the contributor list 2022-10-10 19:49:22 +08:00
EricZeng
ce35b23911 Fix ZK metric query failure caused by a DSL error (#652)
2022-10-10 19:27:48 +08:00
zengqiao
e79342acf5 Fix ZK metric query failure caused by a DSL error 2022-10-10 19:19:05 +08:00
EricZeng
3fc9f39d24 Merge pull request #651 from didi/master
Merge master branch
2022-10-10 19:10:48 +08:00
shirenchuang
0221fb3a4a Contributor docs 2022-10-10 18:02:19 +08:00
shirenchuang
f009f8b7ba Contributor docs 2022-10-10 17:21:21 +08:00
shirenchuang
b76959431a Contributor docs 2022-10-10 16:55:33 +08:00
shirenchuang
975370b593 Contributor docs 2022-10-10 15:57:07 +08:00
shirenchuang
7275030971 Contributor docs 2022-10-10 15:50:16 +08:00
shirenchuang
99b0be5a95 Merge branch 'master' into docs_only 2022-10-10 15:01:00 +08:00
石臻臻的杂货铺
edd3f95fc4 Update CONTRIBUTING.md 2022-10-10 14:22:24 +08:00
石臻臻的杂货铺
479f983b09 Update CONTRIBUTING.md 2022-10-10 13:58:35 +08:00
石臻臻的杂货铺
7650332252 Update CONTRIBUTING.md 2022-10-10 13:50:55 +08:00
shirenchuang
8f1a021851 readme 2022-10-10 13:46:14 +08:00
shirenchuang
ce4df4d5fd Merge remote-tracking branch 'origin/master' 2022-10-10 13:00:28 +08:00
shirenchuang
bd43ae1b5d Issue templates 2022-10-10 12:57:53 +08:00
石臻臻的杂货铺
8fa34116b9 Merge pull request #648 from didi/docs_only
PR template
2022-10-10 12:39:38 +08:00
shirenchuang
7e92553017 PR template 2022-10-10 11:42:04 +08:00
shirenchuang
b7e243a693 Merge remote-tracking branch 'origin/master' 2022-10-09 17:23:16 +08:00
shirenchuang
35d4888afb Contributor guidelines doc 2022-10-09 17:03:46 +08:00
EricZeng
b3e8a4f0f6 Merge pull request #647 from didi/dev
Merge DEV branch
2022-10-09 16:54:45 +08:00
shirenchuang
321125caee issue template 2022-10-09 15:47:13 +08:00
shirenchuang
e01427aa4f issue template 2022-10-09 15:42:40 +08:00
shirenchuang
14652e7f7a issue template 2022-10-09 15:39:20 +08:00
shirenchuang
7c05899dbd issue template 2022-10-09 15:26:57 +08:00
shirenchuang
56726b703f issue template 2022-10-09 13:56:44 +08:00
shirenchuang
6237b0182f issue template 2022-10-09 12:27:27 +08:00
EricZeng
be5b662f65 Merge pull request #645 from didi/dev_feature_zk_kerberos
How to modify the code to support ZK Kerberos authentication
2022-10-09 10:39:26 +08:00
EricZeng
224698355c Restore the original code
2022-10-09 10:38:36 +08:00
EricZeng
8f47138ecd Merge pull request #643 from didi/dev_3.1
Monitor Kafka's ZK
2022-10-08 17:22:03 +08:00
zengqiao
d159746391 Update the docs for onboarding ZK clusters with Kerberos authentication 2022-10-08 17:00:08 +08:00
EricZeng
63df93ea5e Merge pull request #608 from luhea/dev_feature_zk_kerberos
Add zk supported kerberos
2022-10-08 16:11:37 +08:00
EricZeng
38948c0daa Merge pull request #644 from didi/master
Merge master branch
2022-10-08 16:09:40 +08:00
zengqiao
6c610427b6 ZK: add a ZK info query API 2022-10-08 15:46:18 +08:00
zengqiao
b4cc31c459 ZK: collect metrics into ES 2022-10-08 15:31:59 +08:00
zengqiao
7d781712c9 ZK: sync ZK metadata to the DB 2022-10-08 15:19:09 +08:00
zengqiao
dd61ce9b2a ZK: add config defaults 2022-10-08 14:58:28 +08:00
zengqiao
69a7212986 ZK: fetch four-letter-word command info 2022-10-08 14:52:17 +08:00
EricZeng
ff05a951fd Merge pull request #642 from didi/master
Merge master branch
2022-10-08 14:42:37 +08:00
EricZeng
89d5357b40 Merge pull request #641 from didi/dev
Remove dead health-score computation code
2022-10-08 14:41:27 +08:00
zengqiao
7ca3d65c42 Remove dead health-score computation code 2022-10-08 14:15:20 +08:00
zengqiao
7b5c2d800f bump version to 3.1.0 2022-09-29 15:13:41 +08:00
EricZeng
f414b47a78 1. Add upgrade-to-v3.0.0 notes; 2. Add v3.0.0 changelog (#636)
2022-09-29 13:08:32 +08:00
EricZeng
44f4e2f0f9 Merge pull request #635 from didi/dev
Merge frontend adjustments
2022-09-29 11:50:25 +08:00
zengqiao
2361008bdf Merge branch 'dev' of github.com:didi/KnowStreaming into dev 2022-09-29 11:49:00 +08:00
zengqiao
7377ef3ec5 Add v3.0.0 changelog 2022-09-29 11:45:29 +08:00
lucasun
a28d064b7a Merge pull request #634 from GraceWalk/dev
Frontend bug fixes & improvements
2022-09-29 11:23:25 +08:00
GraceWalk
e2e57e8575 fix: dependency version updates 2022-09-29 11:15:47 +08:00
zengqiao
9d90bd2835 Add upgrade-to-v3.0.0 notes 2022-09-29 11:04:49 +08:00
EricZeng
7445e68df4 Merge pull request #632 from didi/master
Merge master branch
2022-09-29 10:53:54 +08:00
GraceWalk
ab42625ad2 fix: number display formatting 2022-09-29 10:52:31 +08:00
GraceWalk
18789a0a53 fix: import the IconFont component from a standalone package 2022-09-29 10:51:52 +08:00
zengqiao
68a37bb56a Merge branch 'master' of github.com:didi/KnowStreaming 2022-09-29 10:49:46 +08:00
GraceWalk
3b33652c47 fix: adjust Rebalance card icon color 2022-09-29 10:48:52 +08:00
GraceWalk
1e0c4c3904 feat: support copying the Value column in Topic detail messages 2022-09-29 10:48:09 +08:00
zengqiao
04e223de16 Update license copy 2022-09-29 10:48:00 +08:00
GraceWalk
c4a691aa8a fix: multi-cluster list handles clusters without ZK 2022-09-29 10:44:28 +08:00
GraceWalk
ff9dde163a feat: charts support persisted drag-and-drop ordering & improved gap-filling logic 2022-09-29 10:42:44 +08:00
EricZeng
eb7efbd1a5 Add field validation annotations (#631)
2022-09-29 09:59:02 +08:00
zengqiao
8c8c362c54 Merge branch 'dev' of github.com:didi/KnowStreaming into dev 2022-09-28 20:19:35 +08:00
zengqiao
66e119ad5d Add field validation annotations 2022-09-28 20:16:06 +08:00
EricZeng
6dedc04a05 Merge pull request #630 from didi/dev
Merge dev branch
2022-09-28 20:14:51 +08:00
EricZeng
0cf8bad0df Merge pull request #629 from didi/master
Merge master branch
2022-09-28 20:06:26 +08:00
zengqiao
95c9582d8b Fetch consumer-group detail metrics in real time 2022-09-28 20:03:23 +08:00
EricZeng
7815126ff5 1. Fix ineffective Group metric deduplication; 2. Fix failure to auto-create ES index templates; (#628)
* Fix failure to auto-create ES index templates

* Fix ineffective Group metric deduplication

Co-authored-by: zengqiao <zengqiao@didiglobal.com>
2022-09-28 19:55:30 +08:00
zengqiao
a5fa9de54b Fix ineffective Group metric deduplication 2022-09-28 19:52:11 +08:00
zengqiao
95f1a2c630 Fix failure to auto-create ES index templates 2022-09-28 19:46:07 +08:00
zengqiao
1e256ae1fd Fix failure to auto-create ES index templates 2022-09-28 19:44:33 +08:00
zengqiao
9fc9c54fa1 bump version to 3.0.0 2022-09-28 11:20:16 +08:00
zengqiao
1b362b1e02 Merge branch 'master' of github.com:didi/KnowStreaming 2022-09-28 11:16:54 +08:00
EricZeng
04e3172cca [ISSUE-624] Filter out nonexistent Topics (#625)
2022-09-28 11:13:15 +08:00
EricZeng
1caab7f3f7 [ISSUE-624] Filter out nonexistent Topics (#624)
2022-09-28 10:41:39 +08:00
zengqiao
9d33c725ad [ISSUE-624] Filter out nonexistent Topics (#624)
When syncing Group metadata, drop Group+Topic entries whose Topic no longer exists.
2022-09-28 10:39:33 +08:00
EricZeng
6ed1d38106 [ISSUE-598]Fix start_time not set when create reassign job in MySQL-8 (#623)
2022-09-28 10:26:56 +08:00
zengqiao
0f07ddedaf [ISSUE-598]Fix start_time not set when create reassign job in MySQL-8 2022-09-28 10:24:32 +08:00
EricZeng
289945b471 Merge pull request #622 from didi/dev
Backend adds the Kafka cluster run-mode field
2022-09-28 10:08:17 +08:00
zengqiao
f331a6d144 Backend adds the Kafka cluster run-mode field 2022-09-27 18:43:22 +08:00
EricZeng
0c8c12a651 Merge pull request #621 from didi/dev
Split metric-to-ES sender classes by metric category
2022-09-27 18:38:05 +08:00
zengqiao
028c3bb2fa Split metric-to-ES sender classes by metric category 2022-09-27 10:19:18 +08:00
EricZeng
d7a5a0d405 Split health-check tasks by type
2022-09-27 10:17:12 +08:00
zengqiao
5ef5f6e531 Split health-check tasks by type 2022-09-26 20:10:49 +08:00
EricZeng
1d205734b3 Merge pull request #619 from didi/dev
Add the ZK config field to cluster info
2022-09-26 19:50:26 +08:00
Peng
5edd43884f Update README.md 2022-09-26 18:43:25 +08:00
zengqiao
c1992373bc Add the ZK config field to cluster info 2022-09-26 11:10:38 +08:00
EricZeng
ed562f9c8a Merge pull request #618 from didi/dev
Change DB Group updates from REPLACE to insert-or-update
2022-09-26 10:02:24 +08:00
zengqiao
b4d44ef8c7 Change DB Group updates from REPLACE to insert-or-update 2022-09-23 17:02:25 +08:00
EricZeng
ad0c16a1b4 Upgrade the Helm version and add Docker-related files
2022-09-23 16:17:00 +08:00
wangdongfang-aden
7eabe66853 Merge pull request #616 from wangdongfang-aden/dev
Add docker-compose deployment and update helm
2022-09-23 14:50:22 +08:00
wangdongfang-aden
3983d73695 Update Chart.yaml 2022-09-23 14:47:40 +08:00
wangdongfang-aden
161d4c4562 Update 单机部署手册.md 2022-09-23 14:46:27 +08:00
wangdongfang-aden
9a1e89564e Update 单机部署手册.md 2022-09-23 14:44:49 +08:00
wangdongfang-aden
0c18c5b4f6 Update 单机部署手册.md 2022-09-23 14:43:23 +08:00
wangdongfang-aden
3e12ba34f7 Update docker-compose.yml 2022-09-23 14:33:05 +08:00
wangdongfang-aden
e71e29391b Delete ks-start.sh 2022-09-23 14:26:24 +08:00
wangdongfang-aden
9b7b9a7af0 Delete es_template_create.sh 2022-09-23 14:26:16 +08:00
wangdongfang-aden
a23819c308 Create ks-start.sh 2022-09-23 14:19:35 +08:00
wangdongfang-aden
6cb1825d96 Create es_template_create.sh 2022-09-23 14:19:10 +08:00
wangdongfang-aden
77b8c758dc Create initsql 2022-09-23 14:18:17 +08:00
wangdongfang-aden
e5a582cfad Create my.cnf 2022-09-23 14:17:25 +08:00
wangdongfang-aden
ec83db267e Create init.sh 2022-09-23 14:17:02 +08:00
wangdongfang-aden
bfd026cae7 Create dockerfile 2022-09-23 14:16:28 +08:00
wangdongfang-aden
35f1dd8082 Create dockerfile 2022-09-23 14:14:47 +08:00
wangdongfang-aden
7ed0e7dd23 Create dockerfile 2022-09-23 14:14:02 +08:00
wangdongfang-aden
1a3cbf7a9d Create knowstreaming.conf 2022-09-23 14:07:04 +08:00
wangdongfang-aden
d9e4abc3de Create ks-start.sh 2022-09-23 14:05:59 +08:00
wangdongfang-aden
a4186085d3 Create es_template_create.sh 2022-09-23 14:05:05 +08:00
wangdongfang-aden
26b1846bb4 Create docker-compose.yml 2022-09-23 14:03:14 +08:00
wangdongfang-aden
1aa89527a6 helm update 3.0.0-beta.3 2022-09-23 11:36:46 +08:00
wangdongfang-aden
eac76d7ad0 helm update 3.0.0-beta.3 2022-09-23 11:36:01 +08:00
wangdongfang-aden
cea0cd56f6 Merge pull request #607 from haoqi123/dev
[单机部署手册.md] Add comments to the docker-compose deployment section
2022-09-23 10:27:04 +08:00
EricZeng
c4b897f282 bump version to 3.0.0-beta.4
2022-09-23 10:24:52 +08:00
zengqiao
47389dbabb bump version to 3.0.0-beta.4 2022-09-23 10:17:58 +08:00
haoqi
a2f8b1a851 1. [单机部署手册.md] Add comments to the docker-compose deployment section 2022-09-22 19:46:21 +08:00
EricZeng
feac0a058f Merge pull request #613 from didi/dev
Add v3.0.0-beta.2 changelog
2022-09-22 17:30:35 +08:00
zengqiao
27eeac9fd4 Add v3.0.0-beta.2 changelog 2022-09-22 17:28:51 +08:00
EricZeng
a14db4b194 Merge pull request #612 from didi/dev
Merge dev branch
2022-09-22 17:28:09 +08:00
lucasun
54ee271a47 Merge pull request #611 from GraceWalk/dev
Fix frontend bugs and UX issues
2022-09-22 15:51:46 +08:00
GraceWalk
a3a9be4f7f fix: correct the local-env API proxy address 2022-09-22 15:37:24 +08:00
GraceWalk
d4f0a832f3 fix: style updates 2022-09-22 15:31:52 +08:00
GraceWalk
7dc533372c fix: correct file reference paths 2022-09-22 15:31:34 +08:00
GraceWalk
1737d87713 fix: configs could not be deleted 2022-09-22 15:31:13 +08:00
GraceWalk
dbb98dea11 fix: update the login page image 2022-09-22 15:21:04 +08:00
GraceWalk
802b382b36 fix: improve Topic Messages detail hints 2022-09-22 15:20:31 +08:00
GraceWalk
fc82999d45 fix: cap the Message count in consume testing 2022-09-22 15:19:56 +08:00
GraceWalk
08aa000c07 refactor: improve cluster onboarding/editing 2022-09-22 15:19:03 +08:00
GraceWalk
39015b5100 feat: add manual refresh to the multi-cluster management list 2022-09-22 15:18:13 +08:00
GraceWalk
0d635ad419 refactor: restructure the webpack config 2022-09-22 15:13:25 +08:00
EricZeng
9133205915 Merge pull request #610 from didi/dev
Merge dev branch
2022-09-22 14:51:23 +08:00
zengqiao
725ac10c3d 1. Move KafkaZKDao; 2. Filter out leaderless partitions when fetching Offset info; 3. Adjust the session timeout used when validating a ZK address 2022-09-22 11:30:46 +08:00
zengqiao
2b76358c8f Backend adds sort info for the Overview page 2022-09-22 11:24:13 +08:00
zengqiao
833c360698 bump oshi-core version to 5.6.1 2022-09-22 11:17:59 +08:00
zengqiao
7da1e67b01 FAQ: add notes on permission-recognition failures 2022-09-22 11:13:54 +08:00
GraceWalk
7eb86a47dd fix: some dependency updates 2022-09-21 16:22:45 +08:00
GraceWalk
d67e383c28 feat: add manual refresh to system-management lists 2022-09-21 16:21:57 +08:00
GraceWalk
8749d3e1f5 fix: tolerate axios config errors in the config sub-app 2022-09-21 16:21:07 +08:00
GraceWalk
30fba21c48 fix: limit messages sent per produce-test run to 0~1000 2022-09-21 16:15:19 +08:00
GraceWalk
d83d35aee9 fix: style & copy tweaks 2022-09-21 16:12:13 +08:00
GraceWalk
1d3caeea7d feat: remove zoom from Cluster charts 2022-09-21 16:11:14 +08:00
luhe
c8806dbb4d Support ZK Kerberos authentication, with configuration docs 2022-09-21 16:09:04 +08:00
luhe
e5802c7f50 Support ZK Kerberos authentication, with configuration docs 2022-09-21 16:02:38 +08:00
luhe
590f684d66 Support ZK Kerberos authentication, with configuration docs 2022-09-21 15:59:31 +08:00
luhe
8e5a67f565 Support ZK Kerberos authentication 2022-09-21 15:58:59 +08:00
luhe
8d2fbce11e Support ZK Kerberos authentication 2022-09-21 15:54:30 +08:00
haoqi
26916f6632 1. [单机部署手册.md] Add comments to the docker-compose deployment section
2. Change the externally exposed ui port in docker-compose to 80
2022-09-21 12:55:43 +08:00
EricZeng
fbfa0d2d2a Merge pull request #600 from haoqi123/dev
docker-compose addition
2022-09-21 10:49:08 +08:00
haoqi
e626b99090 1. Delete the km-dist/docker folder; [单机部署手册.md] is now authoritative 2022-09-20 19:30:20 +08:00
haoqi123
203859b71b Merge branch 'didi:dev' into dev 2022-09-20 19:25:12 +08:00
haoqi
9a25c22f3a 1. Adjust each service's image in docker-compose.yml
2. Thanks to @wangdongfang-aden's debugging, the helm and docker images were merged into one, so the per-image Dockerfiles and startup scripts are removed and need no further maintenance
2022-09-20 19:23:18 +08:00
zengqiao
0a03f41a7c Backend adds metric display-ordering support 2022-09-20 14:42:22 +08:00
zengqiao
56191939c8 Merge branch 'dev' of github.com:didi/KnowStreaming into dev 2022-09-20 14:23:09 +08:00
zengqiao
beb754aaaa Fix missing connection rebuild after a closed JMX connection throws IOException 2022-09-20 14:22:06 +08:00
EricZeng
f234f740ca Merge pull request #603 from didi/dev
Merge dev branch
2022-09-20 10:51:39 +08:00
EricZeng
e14679694c Merge pull request #602 from f1558/dev
fix issue
2022-09-20 10:31:16 +08:00
zengqiao
e06712397e Fix NPE when fetching the TotalLogSize metric because Broker info is missing from the DB 2022-09-20 10:27:30 +08:00
Richard
b6c6df7ffc fix issue
* SQL specification comments to avoid direct operation failure
2022-09-20 09:42:42 +08:00
zengqiao
375c6f56c9 Rename GroupOffsetResetEnum to OffsetTypeEnum 2022-09-19 13:55:59 +08:00
EricZeng
0bf85c97b5 Merge pull request #555 from superspeedone/dev
Dev
2022-09-19 11:18:28 +08:00
EricZeng
630e582321 Merge pull request #593 from Mengqi777/mengqi-dev
fix: adjust os judgment method with uname
2022-09-19 10:34:16 +08:00
EricZeng
a89fe23bdd Merge pull request #597 from WYAOBO/dev
Docs update
2022-09-19 10:15:38 +08:00
haoqi
a7a5fa9a31 1. Adjust the networks config in docker-compose.yml
2. Add a health check to ks-manager
3. Update 单机部署手册
2022-09-18 19:10:22 +08:00
_haoqi
c73a7eee2f 1. Adjust docker-compose service and container names 2022-09-16 20:03:58 +08:00
_haoqi
121f8468d5 1. Convert file line endings to LF
2. Adjust docker-compose service and container names
2022-09-16 17:33:19 +08:00
haoqi
7b0b6936e0 1. Adjust container names in docker-compose.yml 2022-09-16 15:54:34 +08:00
Peng
597ea04a96 Update README.md 2022-09-16 15:20:04 +08:00
Peng
f7f90aeaaa Update README.md 2022-09-16 15:18:29 +08:00
_haoqi
227479f695 1. Modify the dockerfile
2. Remove unused config files
2022-09-16 15:13:18 +08:00
WYAOBO
6477fb3fe0 Merge branch 'didi:dev' into dev 2022-09-16 14:50:13 +08:00
wangdongfang-aden
4223f4f3c4 Merge pull request #596 from wangdongfang-aden/dev
helm update 3.0.0-beta.2
2022-09-16 14:45:43 +08:00
wangdongfang-aden
7288874d72 helm update 3.0.0-beta.2 2022-09-16 14:44:14 +08:00
wangdongfang-aden
68f76f2daf helm update 3.0.0-beta.2 2022-09-16 14:42:34 +08:00
wyb
fe6ddebc49 Docs update 2022-09-16 14:41:45 +08:00
wangdongfang-aden
12b5acd073 helm update 3.0.0-beta.2 2022-09-16 14:41:40 +08:00
wangdongfang-aden
a6f1fe07b3 helm update 3.0.0-beta.2 2022-09-16 14:41:02 +08:00
wangdongfang-aden
85e3f2a946 helm update 3.0.0-beta.2 2022-09-16 14:40:34 +08:00
pokemeng
d4f416de14 fix: adjust os judgment method with uname 2022-09-16 11:34:03 +08:00
haoqi
0d9a6702c1 1. Change the ES init script's output from append to redirect 2022-09-15 17:13:58 +08:00
haoqi
d11285cdbf Merge branch 'master' into dev
# Conflicts:
#	km-dist/init/sql/ddl-logi-security.sql
2022-09-15 17:01:39 +08:00
EricZeng
5f1f33d2b9 Merge pull request #591 from didi/master
Merge master branch
2022-09-15 16:59:11 +08:00
zengqiao
474daf752d bump version to 3.0.0-beta.3 2022-09-15 16:54:52 +08:00
haoqi
27d1b92690 1. Add an init container used only to initialize ES indices 2022-09-15 16:22:51 +08:00
zengqiao
993afa4c19 Notes on the default username/password change 2022-09-15 16:20:13 +08:00
EricZeng
028d891c32 Merge pull request #588 from didi/dev_v3.0.0-beta.2
Merge v3.0.0-beta.2
2022-09-15 15:46:58 +08:00
zengqiao
0df55ec22d Update the 3.0.0-beta.2 upgrade manual 2022-09-15 15:23:29 +08:00
zengqiao
579f64774d Update the 3.0.0-beta.2 changelog 2022-09-15 15:20:50 +08:00
haoqi
792f8d939d 1. Modify the Dockerfile 2022-09-15 15:06:19 +08:00
EricZeng
e4fb02fcda Merge pull request #587 from didi/dev
Merge dev branch
2022-09-15 14:35:00 +08:00
haoqi
0c14c641d0 1. Add docker-compose deployment
2. Change how the manage service is initialized
3. Change how ES is initialized
2022-09-15 14:26:45 +08:00
EricZeng
dba671fd1e Merge pull request #586 from GraceWalk/dev
Dev
2022-09-15 13:49:04 +08:00
GraceWalk
80d1693722 fix: fix wrong positioning of the single-cluster detail guided-tour steps 2022-09-15 13:39:09 +08:00
GraceWalk
26014a11b2 feat: add docs for the frontend build & packaging 2022-09-15 13:36:09 +08:00
GraceWalk
848fddd55a fix: switch the dependency install source to the taobao mirror 2022-09-15 13:34:15 +08:00
EricZeng
97f5f05f1a Merge pull request #585 from didi/dev
Update the standalone deployment docs
2022-09-15 13:05:14 +08:00
zengqiao
25b82810f2 Update the standalone deployment docs 2022-09-15 13:01:33 +08:00
EricZeng
9b1e506fa7 Merge pull request #584 from didi/dev
Fix too-short log table fields
2022-09-15 12:56:49 +08:00
zengqiao
7a42996e97 Fix too-short log table fields 2022-09-15 12:55:06 +08:00
EricZeng
dbfcebcf67 Merge pull request #583 from didi/dev
Merge dev branch
2022-09-15 12:38:11 +08:00
zengqiao
37c3f69a28 Fix type-conversion failure 2022-09-15 11:32:44 +08:00
zengqiao
5d412890b4 Adjust timeout configuration 2022-09-15 11:31:25 +08:00
zengqiao
1e318a4c40 Change the default username/password 2022-09-15 11:31:03 +08:00
EricZeng
d4549176ec Merge pull request #566 from lomodays207/master
Fix java.lang.NumberFormatException: For input string: "{"value":0,"relation":"eq"}"
2022-09-15 10:05:26 +08:00
haoqi
61efdf492f Add docker-compose deployment 2022-09-13 23:20:41 +08:00
lucasun
67ea4d44c8 Merge pull request #575 from GraceWalk/dev
Sync frontend code
2022-09-13 15:13:02 +08:00
GraceWalk
fdae05a4aa fix: login page copy changes 2022-09-13 14:46:42 +08:00
GraceWalk
5efb837ee8 fix: single-cluster detail style tweaks 2022-09-13 14:46:29 +08:00
GraceWalk
584b626d93 fix: Broker Card no longer shows a loading state after data returns 2022-09-13 14:45:56 +08:00
GraceWalk
de25a4ed8e fix: Broker Card no longer shows a loading state after data returns 2022-09-13 14:45:27 +08:00
GraceWalk
2e852e5ca6 fix: user could still access the system via back navigation after logout 2022-09-13 14:44:18 +08:00
GraceWalk
b11000715a Fix the Topic Config edit form not echoing current values 2022-09-13 14:43:35 +08:00
GraceWalk
b3f8b46f0f fix: default Topic could not be selected for replica expansion/shrink/migration & replica-migration time supports minute granularity 2022-09-13 14:42:21 +08:00
GraceWalk
8d22a0664a fix: mark the current Controller in the Broker list 2022-09-13 14:37:20 +08:00
GraceWalk
20756a3453 fix: change partitionId to a Select in reset-Offset & limit Offset values 2022-09-13 14:35:23 +08:00
GraceWalk
c9b4d45a64 fix: wrong details for Job replica expansion/shrink tasks 2022-09-13 14:31:45 +08:00
GraceWalk
83f7f5468b fix: restyle the rebalance-history list & scenario-based periodic rebalancing & immediate rebalance defaults to the periodic-rebalance parameters 2022-09-13 14:30:03 +08:00
GraceWalk
59c042ad67 fix: improve Topic list trend charts & adjust related copy 2022-09-13 14:26:12 +08:00
GraceWalk
d550fc5068 fix: Consume kept sending requests after clicking Stop 2022-09-13 14:24:30 +08:00
GraceWalk
6effba69a0 feat: add some ReBalance and Topic permission items 2022-09-13 14:22:50 +08:00
GraceWalk
9b46956259 fix: improve card-mode display of the Topic detail Partition tab 2022-09-13 14:18:17 +08:00
GraceWalk
b5a4a732da fix: health-score settings fixes 2022-09-13 14:15:15 +08:00
GraceWalk
487862367e feat: multi-cluster list supports editing & code restructuring 2022-09-13 14:14:15 +08:00
GraceWalk
5b63b9ce67 feat: adjust left sidebar content 2022-09-13 14:12:34 +08:00
GraceWalk
afbcd3e1df fix: Broker/Topic chart detail bugfixes & UX improvements 2022-09-13 14:09:57 +08:00
GraceWalk
12b82c1395 fix: chart display bugfixes & improvements 2022-09-13 14:09:03 +08:00
GraceWalk
863b765e0d feat: add the RenderEmpty component 2022-09-13 14:04:55 +08:00
GraceWalk
731429c51c fix: system-management sub-app adds response-code interception logic 2022-09-13 11:44:47 +08:00
GraceWalk
66f3bc61fe fix: improve role create/edit 2022-09-13 11:44:08 +08:00
GraceWalk
4efe35dd51 fix: improve the project build/packaging flow & add docs 2022-09-13 11:43:30 +08:00
EricZeng
c92461ef93 Merge pull request #565 from didi/dev
Merge dev branch
2022-09-12 05:53:34 +08:00
superspeedone
405e6e0c1d Topic message query supports Timestamp sorting; the API supports querying by a specified date 2022-09-09 18:56:45 +08:00
superspeedone
0d227aef49 Topic message query supports Timestamp sorting; the API supports querying by a specified date 2022-09-09 17:29:22 +08:00
superspeedone
0e49002f42 Topic message query supports Timestamp sorting; the API supports querying by a specified date 2022-09-09 15:45:31 +08:00
wangdongfang-aden
2e016800e0 Merge pull request #568 from wangdongfang-aden/dev
Use the 3.0.0-beta.1 image
2022-09-09 15:18:36 +08:00
wangdongfang-aden
09f317b991 Use the 3.0.0-beta.1 image 2022-09-09 15:17:02 +08:00
wangdongfang-aden
5a48cb1547 Use the 3.0.0-beta.1 image 2022-09-09 15:16:33 +08:00
wangdongfang-aden
f632febf33 Update Chart.yaml 2022-09-09 15:15:56 +08:00
wangdongfang-aden
3c53467943 Use the 3.0.0-beta.1 image 2022-09-09 15:15:24 +08:00
qiubo
d358c0f4f7 Fix conversion exception for the ES total query 2022-09-09 10:22:26 +08:00
zengqiao
de977a5b32 Speed up info fetching after a cluster is added 2022-09-08 14:21:26 +08:00
zengqiao
703d685d59 Split Task jobs into three categories (metrics, common, metadata), each executed by its own thread pool, reducing dependence on the Job module's thread pool 2022-09-08 14:17:15 +08:00
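The split in this commit can be pictured as one dedicated executor per task category (a rough sketch with invented pool sizes; the real KS configuration may differ):

```java
import java.util.EnumMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class TaskPools {
    enum Category { METRICS, COMMON, METADATA }

    // One dedicated pool per category, so slow metrics collection cannot
    // starve metadata sync, and nothing borrows the Job module's pool.
    private final Map<Category, ExecutorService> pools = new EnumMap<>(Category.class);

    TaskPools() {
        pools.put(Category.METRICS, Executors.newFixedThreadPool(4));
        pools.put(Category.COMMON, Executors.newFixedThreadPool(2));
        pools.put(Category.METADATA, Executors.newFixedThreadPool(2));
    }

    void submit(Category category, Runnable task) {
        pools.get(category).execute(task);
    }

    int poolCount() {
        return pools.size();
    }

    void shutdown() {
        pools.values().forEach(ExecutorService::shutdown);
    }

    public static void main(String[] args) {
        TaskPools pools = new TaskPools();
        pools.submit(Category.METRICS, () -> System.out.println("collect metrics"));
        pools.shutdown();
    }
}
```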
zengqiao
31a5f17408 Fix old replica count being NULL 2022-09-08 13:53:41 +08:00
zengqiao
c40ae3c455 Run preferred-replica election after a replica-change task finishes 2022-09-08 13:52:51 +08:00
zengqiao
b71a34279e Adjust default permissions 2022-09-08 13:50:08 +08:00
zengqiao
8f8c0c4eda Remove dead files 2022-09-08 13:49:07 +08:00
zengqiao
3a384f0e34 Improve the error message shown when resetting Offsets 2022-09-08 13:47:21 +08:00
zengqiao
cf7bc11cbd Add docs for integrating external login systems 2022-09-08 13:46:45 +08:00
EricZeng
be60ae8399 Merge pull request #560 from didi/dev
Merge dev branch
2022-09-07 14:20:04 +08:00
superspeedone
8e50d145d5 Topic message query supports Timestamp sorting and querying the latest or earliest messages #534 2022-09-07 11:17:59 +08:00
zengqiao
7a3d15525c Support LDAP login authentication 2022-09-06 15:25:27 +08:00
zengqiao
64f32d8b24 bump logi-security version to 2.10.13 and logi-elasticsearch-client version to 1.0.24 2022-09-06 15:24:05 +08:00
zengqiao
949d6ba605 Add Controller role info to the cluster Broker list 2022-09-06 15:22:57 +08:00
zengqiao
ceb8db09f4 Improve the error message when the queried Topic does not exist 2022-09-06 15:21:53 +08:00
zengqiao
ed05a0ebb8 Fix wrong search results in the cluster Group list 2022-09-06 15:20:50 +08:00
zengqiao
a7cbb76655 Fix wrong Offset unit 2022-09-06 15:19:29 +08:00
zengqiao
93cbfa0b1f Backend adds page permission points 2022-09-06 15:18:54 +08:00
zengqiao
6120613a98 Merge branch 'dev' of github.com:didi/KnowStreaming into dev 2022-09-06 15:15:14 +08:00
EricZeng
dbd00db159 Merge pull request #559 from didi/master
Merge master branch
2022-09-06 15:14:18 +08:00
zengqiao
befde952f5 Add notes on connecting KS to a specific JMX IP 2022-09-06 15:13:03 +08:00
zengqiao
1aa759e5be bump version to 3.0.0-beta.2 2022-09-06 10:25:14 +08:00
superspeedone
0f35427645 Merge branch 'dev' of https://github.com/superspeedone/KnowStreaming into dev 2022-09-05 15:41:08 +08:00
yanweiwen
fa7ad64140 Topic message query supports Timestamp sorting and querying the latest or earliest messages #534 2022-09-05 14:46:40 +08:00
1038 changed files with 98083 additions and 11072 deletions

.github/ISSUE_TEMPLATE/bug_report.md vendored Normal file

@@ -0,0 +1,51 @@
---
name: Report a Bug
about: Report a bug in KnowStreaming
title: ''
labels: bug
assignees: ''
---
- [ ] I have searched the [issues](https://github.com/didi/KnowStreaming/issues) and found no duplicate.

Would you like to claim this bug?
「 Y / N 」

### Environment
* KnowStreaming version : <font size=4 color =red> xxx </font>
* Operating System version : <font size=4 color =red> xxx </font>
* Java version : <font size=4 color =red> xxx </font>

### Steps to reproduce
1. xxx
2. xxx
3. xxx

### Expected result
<!-- What did you expect to happen? -->

### Actual result
<!-- What actually happened? -->
---
If there is an exception, attach the stack trace:
```
Just put your stack trace here!
```

.github/ISSUE_TEMPLATE/config.yml vendored Normal file

@@ -0,0 +1,8 @@
blank_issues_enabled: true
contact_links:
  - name: Discussions
    url: https://github.com/didi/KnowStreaming/discussions/new
    about: Ask questions, start discussions, etc.
  - name: KnowStreaming website
    url: https://knowstreaming.com/
    about: KnowStreaming website


@@ -0,0 +1,26 @@
---
name: Optimization Suggestion
about: Suggest an optimization to existing functionality
title: ''
labels: Optimization Suggestions
assignees: ''
---
- [ ] I have searched the [issues](https://github.com/didi/KnowStreaming/issues) and found no duplicate.

Would you like to claim this suggestion?
「 Y / N 」

### Environment
* KnowStreaming version : <font size=4 color =red> xxx </font>
* Operating System version : <font size=4 color =red> xxx </font>
* Java version : <font size=4 color =red> xxx </font>

### What should be optimized

### How to optimize it


@@ -0,0 +1,20 @@
---
name: Propose a Feature
about: Request a new feature for KnowStreaming
title: ''
labels: feature
assignees: ''
---
- [ ] I searched the [issues](https://github.com/didi/KnowStreaming/issues) and found no related feature request.
- [ ] I searched the released versions in the [release note](https://github.com/didi/KnowStreaming/releases) and did not find this feature.

Would you like to claim this feature?
「 Y / N 」

## Describe the requirement here
<!-- Please describe your requirement as clearly as possible -->

.github/ISSUE_TEMPLATE/question.md vendored Normal file

@@ -0,0 +1,12 @@
---
name: Ask a question
about: Ask a question about KnowStreaming
title: ''
labels: question
assignees: ''
---
- [ ] I have searched the [issues](https://github.com/didi/KnowStreaming/issues) and found no duplicates.

## Ask your question here

.github/PULL_REQUEST_TEMPLATE.md vendored Normal file

@@ -0,0 +1,23 @@
Please do not create a Pull Request without first creating an Issue.

## What is the purpose of the change

XXXXX

## Brief changelog

XX

## Verifying this change

XXXX

Please follow this checklist to help us integrate your contribution quickly and easily:

* [ ] One PR (short for Pull Request) solves exactly one problem; do not address multiple problems in a single PR;
* [ ] Make sure the PR has a corresponding Issue (usually created before you start working), except for trivial changes such as typos, which do not need an Issue;
* [ ] Format the title and content of the PR and the Commit-Log, e.g. #861. Note: the Commit-Log must be written when you run `git commit`; it cannot be changed on GitHub;
* [ ] Write a PR description detailed enough to understand what the PR does, how, and why;
* [ ] Write the unit tests needed to verify your logic corrections. If you submit a new feature or a significant change, remember to add an integration-test in the test module;
* [ ] Make sure the build compiles and the integration tests pass;

.github/workflows/ci_build.yml vendored Normal file

@@ -0,0 +1,43 @@
name: KnowStreaming Build
on:
  push:
    branches: [ "*" ]
  pull_request:
    branches: [ "*" ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up JDK 11
        uses: actions/setup-java@v3
        with:
          java-version: '11'
          distribution: 'temurin'
          cache: maven
      - name: Setup Node
        uses: actions/setup-node@v1
        with:
          node-version: '12.22.12'
      - name: Build With Maven
        run: mvn -Prelease-package -Dmaven.test.skip=true clean install -U
      - name: Get KnowStreaming Version
        if: ${{ success() }}
        run: |
          version=`mvn -Dexec.executable='echo' -Dexec.args='${project.version}' --non-recursive exec:exec -q`
          echo "VERSION=${version}" >> $GITHUB_ENV
      - name: Upload Binary Package
        if: ${{ success() }}
        uses: actions/upload-artifact@v3
        with:
          name: KnowStreaming-${{ env.VERSION }}.tar.gz
          path: km-dist/target/KnowStreaming-${{ env.VERSION }}.tar.gz

.gitignore vendored

@@ -109,4 +109,8 @@ out/*
dist/
dist/*
km-rest/src/main/resources/templates/
*dependency-reduced-pom*
#filter flattened xml
*/.flattened-pom.xml
.flattened-pom.xml
*/*/.flattened-pom.xml

CODE_OF_CONDUCT.md Normal file

@@ -0,0 +1,74 @@
# Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, gender identity and expression, level of experience,
education, socio-economic status, nationality, personal appearance, race,
religion, or sexual identity and orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment
include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at https://knowstreaming.com/support-center. All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
[homepage]: https://www.contributor-covenant.org


@@ -1,28 +1,150 @@
# Contributing to KnowStreaming

Welcome 👏🏻 to KnowStreaming! This document is a guide on how to contribute to KnowStreaming.
If you find anything incorrect or missing, please leave a comment/suggestion.

## Code of Conduct

Please make sure to read and observe our [Code of Conduct](./CODE_OF_CONDUCT.md).

## Contributing

**KnowStreaming** welcomes new participants in any role, including **User**, **Contributor**, **Committer**, and **PMC**.

We encourage newcomers to actively join the **KnowStreaming** project and move from User to Contributor, Committer, and even PMC.
To do so, newcomers need to actively contribute to the **KnowStreaming** project. The following describes how to contribute to **KnowStreaming**.

### Creating/opening an Issue

If you find a typo in the documentation, **find a bug** in the code, want a **new feature**, or want to **make a suggestion**, you can [create an Issue](https://github.com/didi/KnowStreaming/issues/new/choose) on GitHub to report it.

If you want to contribute directly, you can pick an issue with one of the labels below.

- [contribution welcome](https://github.com/didi/KnowStreaming/labels/contribution%20welcome): issues that badly need to be fixed or implemented
- [good first issue](https://github.com/didi/KnowStreaming/labels/good%20first%20issue): friendly to newcomers; a good issue to warm up with.

<font color=red ><b> Please note that any PR must be associated with a valid issue. Otherwise the PR will be rejected. </b></font>

### Starting your contribution

**Branches**

We use the `dev` branch as the development branch, which means it is an unstable branch.

In addition, our branching model follows [https://nvie.com/posts/a-successful-git-branching-model/](https://nvie.com/posts/a-successful-git-branching-model/). We strongly recommend newcomers read that article before creating a PR.

**Contribution workflow**

For convenience, we define two terms here:
the repository you forked out is your private repository, which we call the **forked repository**;
the project it was forked from, we call the **source repository**.

Now, if you are ready to create a PR, this is the workflow for contributors:

1. Fork the [KnowStreaming](https://github.com/didi/KnowStreaming) project into your own repository
2. Pull from the source repository's `dev` branch and create your own local branch, for example: `dev`
3. Modify the code on the local branch
4. Rebase onto the development branch and resolve conflicts
5. Commit and push your changes to your own **forked repository**
6. Create a Pull Request against the `dev` branch of the **source repository**.
7. Wait for a reply. If the reply is slow, feel free to nudge us mercilessly.

For a more detailed contribution workflow, see: [contribution workflow](./docs/contributer_guide/贡献流程.md)

When creating a Pull Request:

1. Please follow the PR [template](./.github/PULL_REQUEST_TEMPLATE.md)
2. Please make sure the PR has a corresponding issue.
3. If your PR contains large changes, e.g. a component refactor or a new component, please write detailed documentation about its design and usage (in the corresponding issue).
4. Note that a single PR should not be too large. If heavy changes are required, it is better to split them into several individual PRs.
5. Before the PR is merged, keep the final commit message clear and concise, and squash multiple commits into one as far as possible.
6. After the PR is created, one or more reviewers will be assigned to it.

<font color=red><b>If your PR contains large changes, e.g. a component refactor or a new component, please write detailed documentation about its design and usage.</b></font>

# Code review guidelines

Committers review code in turn to ensure at least one Committer has reviewed a PR before it is merged.

Some principles:

- Readability: important code should be well documented. APIs should have Javadoc. Code style should be consistent with the existing style.
- Elegance: new functions, classes, or components should be well designed.
- Testability: unit test cases should cover 80% of the new code.
- Maintainability: comply with our coding conventions.

# Developers

## Becoming a Contributor

Anyone who successfully submits and merges a PR is a Contributor.

For the list of contributors, see: [contributor list](./docs/contributer_guide/开发者名单.md)

## Trying to become a Committer

In general, contribute 8 significant patches and get at least three different people to review them (you need the support of 3 Committers).
Then ask someone to nominate you. You need to show your:

1. at least 8 significant PRs and the related issues for the project
2. ability to collaborate with the team
3. understanding of the project's codebase and coding style
4. ability to write good code

A current Committer can nominate you via the `nomination` issue label in KnowStreaming, with:

1. your first and last name
2. a link to your Git profile
3. an explanation of why you should be a Committer
4. details of 3 PRs, and the related issues, that the nominator worked on with you and that demonstrate your ability.

Two other Committers need to support your **nomination**. If no one objects within 5 working days, you become a Committer; if anyone objects or wants more information, the Committers will discuss and usually reach a consensus (within 5 working days).

# Open source incentive program

We warmly welcome developers to contribute to the KnowStreaming open source project, and we reward contributors accordingly to show our recognition and thanks.

## How to participate

1. Actively take part in Issue discussions, e.g. answering questions, offering ideas, or reporting unresolvable bugs (Issue)
2. Write and improve the project's documentation (Wiki)
3. Submit patches to improve the code (Coding)

## What you will get

1. Be listed and displayed in the KnowStreaming open source project contributor list
2. A KnowStreaming open source contributor certificate (paper & electronic)
3. A KnowStreaming contributor gift package (KnowStreaming/DiDi merchandise)

## Rules

- Both Contributors and Committers receive the corresponding certificate and gift package
- Every quarter the KnowStreaming project team selects outstanding contributors and issues the corresponding certificates.
- An annual selection is held at the end of the year

For the list of contributors, see: [contributor list](./docs/contributer_guide/开发者名单.md)
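The contributor workflow described above (fork, branch from `dev`, modify, rebase, push, open a PR) can be sketched with plain git commands. The sketch below is illustrative only: it uses a local directory as a stand-in for the GitHub source repository, and the branch name `my-fix` is made up.

```shell
# Sketch of the contributor workflow, using a local "upstream" repository
# as a stand-in for https://github.com/didi/KnowStreaming (illustrative only).
set -e
work=$(mktemp -d)
cd "$work"

# Source repository with a dev branch (stand-in for the real project)
git init -q upstream
git -C upstream -c user.email=ks@example.com -c user.name=ks commit -q --allow-empty -m "init"
git -C upstream branch dev

# Steps 1-2: fork (here: clone) and create a local branch from dev
git clone -q upstream fork
cd fork
git checkout -q -b my-fix origin/dev

# Steps 3-5: modify the code, commit, and rebase onto the latest dev
echo "fix" > fix.txt
git add fix.txt
git -c user.email=ks@example.com -c user.name=ks commit -q -m "[Bugfix] example change"
git fetch -q origin
git rebase -q origin/dev

# Step 6 would be: push to your forked repository and open a Pull Request
# against the source repository's dev branch.
git log --oneline -1
```

In the real workflow, `upstream` is the didi/KnowStreaming repository and `fork` is your forked repository on GitHub; step 7 is simply waiting for review.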


@@ -45,22 +45,29 @@
## `Know Streaming` Introduction

`Know Streaming` is a cloud-native Kafka management and control platform, distilled from years of Kafka operating experience inside many internet companies. It focuses on core scenarios such as Kafka operations and control, monitoring and alerting, resource governance, and multi-active disaster recovery. It offers platform-based, visualized, and intelligent capabilities for user experience, monitoring, and operations, with a series of distinctive features that greatly ease daily work for users and operators and let an ordinary operator work like a Kafka expert.

We are currently collecting information about Know Streaming users to help us improve Know Streaming further.
Please support us by sharing your usage on [issue#663](https://github.com/didi/KnowStreaming/issues/663): [Who is using Know Streaming](https://github.com/didi/KnowStreaming/issues/663)

Overall, it has the following characteristics:

- 👀 &nbsp;**Non-intrusive, full coverage**
  - No intrusive changes to `Apache Kafka`: one click brings Kafka versions from `0.10.x` to `3.x.x` under management, covering both `ZK` and `Raft` run modes, with good extensibility in the compatibility architecture to help you raise your cluster management level;
- 🌪️ &nbsp;**Zero cost, GUI-driven**
  - Distills high-frequency CLI capabilities into sensible product paths and a clean, attractive GUI, with GUI management for Cluster, Broker, Zookeeper, Topic, ConsumerGroup, Message, ACL, Connect, and other components; an ordinary user can get started in 5 minutes;
- 👏 &nbsp;**Cloud-native, pluggable**
  - Built cloud-natively with horizontal scalability: adding nodes yields stronger collection and serving capacity; provides many hot-pluggable enterprise features covering observability ecosystem integration, resource governance, multi-active disaster recovery, and other core scenarios;
- 🚀 &nbsp;**Professional capabilities**
  - Cluster management: one-click onboarding, health analysis, core component observation, etc.;
  - Observability: multi-dimensional metric dashboards, metric best practices, etc.;
  - Health inspection: multi-dimensional cluster health checks, multi-dimensional cluster health scoring, etc.;
  - Enhanced capabilities: cluster load balancing, Topic replica scaling, Topic replica migration, etc.;

&nbsp;

@@ -83,6 +90,7 @@
- [Standalone deployment guide](docs/install_guide/单机部署手册.md)
- [Version upgrade guide](docs/install_guide/版本升级手册.md)
- [Local source startup guide](docs/dev_guide/本地源码启动手册.md)
- [Page no-data troubleshooting guide](docs/dev_guide/页面无数据排查手册.md)

**`Product manuals`**

@@ -93,15 +101,21 @@
**Click [here](https://doc.knowstreaming.com/product) to find more documentation on the official website**

**`Product sites`**

- [Official website: https://knowstreaming.com](https://knowstreaming.com)
- [Demo environment: https://demo.knowstreaming.com](https://demo.knowstreaming.com), login account: admin/admin

## Becoming a community contributor

1. [Contributing code](https://doc.knowstreaming.com/product/10-contribution): learn how to become a Know Streaming contributor
2. [Detailed contribution workflow](https://doc.knowstreaming.com/product/10-contribution#102-贡献流程)
3. [Open source incentive program](https://doc.knowstreaming.com/product/10-contribution#105-开源激励计划)
4. [Contributor list](https://doc.knowstreaming.com/product/10-contribution#106-贡献者名单)

and receive a KnowStreaming open source community certificate.

## Join the tech discussion group

@@ -132,8 +146,16 @@ PS: When asking, please describe your problem completely in one message and include your environment information

**`2. WeChat group`**

To join via WeChat: add `PenceXie` or `szzdzhp001` and mention "KnowStreaming" to be added to the group.
<br/>
Before joining, please take a moment to give us a star: a small star is what motivates the KnowStreaming authors to keep building the community.
Thank you!!!

<img width="116" alt="wx" src="https://user-images.githubusercontent.com/71620349/192257217-c4ebc16c-3ad9-485d-a914-5911d3a4f46b.png">

## Star History

[![Star History Chart](https://api.star-history.com/svg?repos=didi/KnowStreaming&type=Date)](https://star-history.com/#didi/KnowStreaming&Date)


@@ -1,4 +1,370 @@
## v3.4.0

**Bug fixes**

- [Bugfix] Fix incorrect wording of Overview metric labels ([#1190](https://github.com/didi/KnowStreaming/issues/1190))
- [Bugfix] Fix an NPE in Connect cluster tasks after a Kafka cluster is deleted ([#1129](https://github.com/didi/KnowStreaming/issues/1129))
- [Bugfix] Fix an NPE caused by setting auth-user-registration: false with Ldap login ([#1117](https://github.com/didi/KnowStreaming/issues/1117))
- [Bugfix] Fix an NPE when calling user.getId() with Ldap login ([#1108](https://github.com/didi/KnowStreaming/issues/1108))
- [Bugfix] Fix frontend issues such as failing to add a role ([#1107](https://github.com/didi/KnowStreaming/issues/1107))
- [Bugfix] Fix incorrect parsing of ZK four-letter commands
- [Bugfix] Fix incorrect status retrieval in zk standalone mode
- [Bugfix] Fix cluster onboarding failures caused by the Broker metadata parsing method not being called ([#993](https://github.com/didi/KnowStreaming/issues/993))
- [Bugfix] Fix a ConsumerAssignment type conversion error
- [Bugfix] Fix dynamic updates of a Connect cluster's clusterUrl leaving the configuration ineffective ([#1079](https://github.com/didi/KnowStreaming/issues/1079))
- [Bugfix] Fix consumer groups not supporting reset to the earliest Offset ([#1059](https://github.com/didi/KnowStreaming/issues/1059))
- [Bugfix] Add a backend permission point for viewing User passwords ([#1095](https://github.com/didi/KnowStreaming/issues/1095))
- [Bugfix] Fix incorrect Connect-JMX port maintenance information ([#1146](https://github.com/didi/KnowStreaming/issues/1146))
- [Bugfix] Fix the system-management sub-application failing to start ([#1167](https://github.com/didi/KnowStreaming/issues/1167))
- [Bugfix] Fix missing permission points in the Security module ([#1069](https://github.com/didi/KnowStreaming/issues/1069)), ([#1154](https://github.com/didi/KnowStreaming/issues/1154))
- [Bugfix] Fix Connect-Worker Jmx not taking effect ([#1067](https://github.com/didi/KnowStreaming/issues/1067))
- [Bugfix] Fix the consumer group list being displayed incorrectly in ACL permission management ([#1037](https://github.com/didi/KnowStreaming/issues/1037))
- [Bugfix] Fix the Connect module not selecting metrics by default ([#1022](https://github.com/didi/KnowStreaming/issues/1022))
- [Bugfix] Fix an es index create/delete infinite loop ([#1021](https://github.com/didi/KnowStreaming/issues/1021))
- [Bugfix] Fix Connect-GroupDescription parsing failures ([#1015](https://github.com/didi/KnowStreaming/issues/1015))
- [Bugfix] Fix missing Partition metric tags in the Prometheus open API ([#1014](https://github.com/didi/KnowStreaming/issues/1014))
- [Bugfix] Fix Topic message display hiding messages with offset 0 ([#1192](https://github.com/didi/KnowStreaming/issues/1192))
- [Bugfix] Fix excessive calls to the reset-offset API
- [Bugfix] Connect task submission now saves only the configs the user modified, and fix incomplete config display in JSON mode ([#1158](https://github.com/didi/KnowStreaming/issues/1158))
- [Bugfix] Fix consumer group Offset resets reporting success while the frontend did not refresh and the Offset stayed unchanged ([#1090](https://github.com/didi/KnowStreaming/issues/1090))
- [Bugfix] Fix system management being viewable without the view permission checked ([#1105](https://github.com/didi/KnowStreaming/issues/1105))

**Product optimizations**

- [Optimize] Add the list of selectable Kafka versions when onboarding a cluster ([#1204](https://github.com/didi/KnowStreaming/issues/1204))
- [Optimize] Fetch GroupTopic information in real time ([#1196](https://github.com/didi/KnowStreaming/issues/1196))
- [Optimize] Add AdminClient observability information ([#1111](https://github.com/didi/KnowStreaming/issues/1111))
- [Optimize] Add Connector running-state metrics ([#1110](https://github.com/didi/KnowStreaming/issues/1110))
- [Optimize] Unify the DB metadata update format ([#1127](https://github.com/didi/KnowStreaming/issues/1127)), ([#1125](https://github.com/didi/KnowStreaming/issues/1125)), ([#1006](https://github.com/didi/KnowStreaming/issues/1006))
- [Optimize] Add MDC support to log output, making it easy to JSON-format logs in logback.xml ([#1032](https://github.com/didi/KnowStreaming/issues/1032))
- [Optimize] Improve Jmx-related logging ([#1082](https://github.com/didi/KnowStreaming/issues/1082))
- [Optimize] Add an active timeout for Topic-Partitions ([#1076](https://github.com/didi/KnowStreaming/issues/1076))
- [Optimize] Topic-Messages page: add backend sorting by Partition and Offset ([#1075](https://github.com/didi/KnowStreaming/issues/1075))
- [Optimize] Align the JSON format of Connect JSON mode with the official API format ([#1080](https://github.com/didi/KnowStreaming/issues/1080)), ([#1153](https://github.com/didi/KnowStreaming/issues/1153)), ([#1192](https://github.com/didi/KnowStreaming/issues/1192))
- [Optimize] Update the star count shown on the login page to the latest number
- [Optimize] Fetch the Group list's maxLag metric in real time ([#1074](https://github.com/didi/KnowStreaming/issues/1074))
- [Optimize] Add restart, edit, delete, and other permission points for Connectors ([#1066](https://github.com/didi/KnowStreaming/issues/1066)), ([#1147](https://github.com/didi/KnowStreaming/issues/1147))
- [Optimize] Improve the tag name of the KS version in pom.xml
- [Optimize] Reduce the delay of the Controller display under cluster Brokers ([#1162](https://github.com/didi/KnowStreaming/issues/1162))
- [Optimize] bump jackson version to 2.13.5
- [Optimize] ACL custom permission configuration: add the TransactionalId resource ([#1192](https://github.com/didi/KnowStreaming/issues/1192))
- [Optimize] Improve Connect styling
- [Optimize] Refresh consumer group detail data in real time

**New features**

- [Feature] Add deletion of a Group or GroupOffset ([#1064](https://github.com/didi/KnowStreaming/issues/1064)), ([#1084](https://github.com/didi/KnowStreaming/issues/1084)), ([#1040](https://github.com/didi/KnowStreaming/issues/1040)), ([#1144](https://github.com/didi/KnowStreaming/issues/1144))
- [Feature] Add a Truncate data feature ([#1062](https://github.com/didi/KnowStreaming/issues/1062)), ([#1043](https://github.com/didi/KnowStreaming/issues/1043)), ([#1145](https://github.com/didi/KnowStreaming/issues/1145))
- [Feature] Support specifying a Server's exact Jmx port ([#965](https://github.com/didi/KnowStreaming/issues/965))

**Documentation updates**

- [Doc] FAQ: add notes on using ES 8.x ([#1189](https://github.com/didi/KnowStreaming/issues/1189))
- [Doc] Add notes on startup failures ([#1126](https://github.com/didi/KnowStreaming/issues/1126))
- [Doc] Add ZK no-data troubleshooting notes ([#1004](https://github.com/didi/KnowStreaming/issues/1004))
- [Doc] No-data troubleshooting doc: add the exception log for full ES cluster Shards
- [Doc] README: add a link to the page no-data troubleshooting guide
- [Doc] Add notes on connecting to a specific Jmx port ([#965](https://github.com/didi/KnowStreaming/issues/965))
- [Doc] Add usage notes for the zk_properties field ([#1003](https://github.com/didi/KnowStreaming/issues/1003))
---
## v3.3.0

**Bug fixes**

- Fix the Connect JMX-Port configuration not taking effect;
- Fix the OverView page loading forever when no Connector exists;
- Fix incomplete display of Group partition information when paginated;
- Fix wrong parameter passing when collecting replica metrics;
- Fix the user list throwing an NPE after user information is modified;
- Fix partition selection not taking effect when viewing messages on the Topic detail page;
- Fix ZK client configuration not taking effect;
- Fix the connect module missing the passed-health-check count metric;
- Fix a metric getter mapping error in the connect module;
- Fix wrong retrieval of max-dimension metrics in the connect module;
- Fix wrong information in the Topic dashboard TopN metrics;
- Fix wrong display of Broker Similar Config;
- Fix an NPE caused by a wrong data type when parsing ZK four-letter commands;
- Fix wrong version gating of the cleanup policy options when creating a Topic;
- Fix Controller-Host not being displayed for newly onboarded clusters;
- Fix Connector and MM2 list search not working;
- Fix abnormal Leader display on the Zookeeper page;
- Fix frontend build failures;

**Product optimizations**

- ZK Overview page: add metrics displayed by default;
- Unify the ES index template initialization script as init_es_template.sh, add the missing connect index template initialization, and remove the redundant replica and zookeper index template initialization;
- Metrics dashboard: after metric filtering, metric cards without data are now shown instead of hidden, with a no-data fallback;
- Remove the code that reads/writes replica metrics from ES;
- Improve Topic health-check logs to clarify the cause of errors;
- When there is no ZK module, the inspection detail skips ZK display;
- Make the local cache size configurable;
- Task module responses now include the task's group information;
- FAQ: add Ldap configuration notes;
- FAQ: add configuration notes for onboarding Kerberos-authenticated Kafka clusters;
- Add a time-dimension index to the ks_km_kafka_change_record table to improve query performance;
- Improve ZK health-check logs to ease troubleshooting;

**New features**

- Add Topic replication based on DiDi Kafka (requires DiDi Kafka);
- Topic dashboard: add Topic replication metrics;
- Add unit tests based on TestContainers;

**Kafka MM2 Beta (newly released in v3.3.0)**

- CRUD for MM2 tasks;
- Metrics dashboard for MM2 tasks;
- Health status for MM2 tasks;
---
## v3.2.0

**Bug fixes**

- Fix a deadlock when writing health-check results to the DB;
- Fix a wrong logger in the KafkaJMXClient class;
- Backend: fix the Topic retention policy allowing multi-select on version 0.10.1.0 when it should be either-or;
- Fix an error when onboarding a cluster without filling in the cluster configuration;
- Upgrade spring-context to 5.3.19 to fix a security vulnerability;
- Fix wrong version information in multi-version compatible configs when modifying Broker & Topic configuration;
- Change the Topic list health score to health status;
- Fix the Broker LogSize metric being unqueryable due to a wrong storage name;
- Fix missing Group metrics in Prometheus;
- Fix a wrong cluster count caused by missing health status metrics;
- Fix an exception in background tasks when recording operation logs without operating-user information;
- Fix a wrong DSL in Replica metric queries;
- Disable errorLogger to fix duplicated error log output;
- Fix failures when updating user information in system management;
- Fix migration tasks stuck in running due to lost original AR information;
- Fix failures in real-time data queries on the cluster Topic list;
- Fix a blank page on the cluster Topic list;
- Fix an array index out of bounds caused by abnormal AR data during replica changes;

**Product optimizations**

- Run health checks concurrently with multiple threads per resource dimension;
- Unify the log output format and improve some of the logs;
- Improve misleading WARN logs when parsing ZK four-letter command results;
- Improve the search copy of the directory tree in Zookeeper details;
- Improve thread pool names to ease analysis by third-party systems;
- Remove concurrent access control on ESClient, reducing the number of ESClients created and improving utilization;
- Improve the Topic Messages drawer copy;
- Improve the error log when ZK health checks fail;
- Increase the timeout for fetching Offset information, reducing request timeouts under high concurrency;
- Improve the Topic & Partition metadata update strategy, reducing DB connection usage;
- Address Sonar code scan issues;
- Improve partition Offset metric collection;
- Improve frontend chart component logic;
- Improve the product theme color;
- Add a hover tip to the Consumer list refresh button;
- Improve the test dialog experience when configuring Topic message size;
- Improve the TopN query flow on the Overview page;

**New features**

- Add a page no-data troubleshooting document;
- Add ES index deletion;
- Support deploying the API service and the Job service separately;

**Kafka Connect Beta (newly released in v3.2.0)**

- Connect cluster onboarding;
- CRUD for Connectors;
- Metrics dashboards for Connect clusters & Connectors;
---
## v3.1.0

**Bug fixes**

- Fix the reset Group Offset hint missing the note that Dead-state groups can also be reset;
- Fix "Topic does not exist" when viewing Topic Messages immediately after creating a Topic;
- Fix preferred replica election not being triggered properly during replica changes;
- Fix packaging failing when the git directory does not exist;
- Fix JMX PORT showing -1 for Kafka clusters in KRaft mode;

**Experience improvements**

- Change the Cluster, Broker, Topic, and Group health score to health status;
- Remove weight information from the health-check configuration;
- Improve the error page display;
- Use the taobao mirror by default for frontend build dependencies;
- Redesign and improve the navigation bar icons;

**New features**

- Add product version information to the avatar dropdown;
- Add cluster health status distribution to the multi-cluster list page;

**Kafka ZK section (officially released in v3.1.0)**

- Add the ZK cluster metrics dashboard;
- Add the ZK cluster service status overview;
- Add the ZK cluster service node list;
- Add viewing of the Kafka data stored in ZK;
- Add ZK health checks and health status calculation;
---
## v3.0.1

**Bug fixes**

- Fix the reset Group Offset hint missing the note that Dead-state groups can also be reset;
- Fix login failures caused by an NPE when an Ldap attribute is missing;
- Fix wrong check-time display in the health score details on the cluster Topic list page;
- Fix a deadlock when updating health check results;
- Fix a wrong Replica index template;
- Fix broken links in the FAQ document;
- Fix the page not showing data when a Broker's TopN metrics do not exist;
- Fix the chart time range selection not working on the Group detail page;

**Experience improvements**

- Display the cluster Group list by Group dimension;
- Avoid flooding the logs with NPEs when a metric does not exist in ES;
- Improve global Message & Notification display;
- Improve the Topic partition-expansion name & description display;

**New features**

- Broker list page: add whether the JMX connection succeeded;

**ZK section (not fully released)**

- Backend: add Kafka ZK metric collection and Kafka ZK information retrieval;
- Add a local cache to avoid collecting the same ZK metrics repeatedly within one collection cycle;
- Add a skip strategy for ZK nodes that fail collection, avoiding endless retries against problematic nodes;
- Fix an exception when converting the zkAvgLatency metric to Long;
- Fix the wrong type of the role field in the ks_km_zookeeper table;
---
## v3.0.0

**Bug fixes**

- Fix Group metric duplicate-collection prevention not working
- Fix failures when automatically creating ES index templates
- Fix the Group+Topic list containing deleted Topics
- Fix task creation failing on MySQL-8 when, due to a compatibility issue, start_time is NULL
- Fix a deadlock when updating the Group information table
- Fix chart gap-filling logic not matching the chart time range

**Experience improvements**

- Split health-check tasks by resource category
- Fetch Group detail page metrics in real time
- Chart drag-and-drop ordering is stored per user
- Multi-cluster list ZK display is compatible with clusters without ZK
- Topic detail message preview supports copying
- Large numbers support thousands separators in some places

**New features**

- Cluster information: add a Zookeeper client configuration field
- Cluster information: add a Kafka cluster run-mode field
- Add a docker-compose deployment method
---
## v3.0.0-beta.3

**Documentation**

- FAQ: add notes on permission recognition failures
- Sync documentation updates to match the official website

**Bug fixes**

- Filter out leaderless partitions when fetching Offset information
- Upgrade oshi-core to 5.6.1 to fix system metric collection failures on Windows
- Fix JMX connections not being rebuilt after being closed
- Fix an NPE when fetching the TotalLogSize metric because Broker information is missing in the DB
- Fix a wrong SQL comment in dml-logi.sql
- Fix wrong OS type detection in startup.sh
- Fix configuration deletion failures on the configuration management page
- Fix the file reference path of the system-management application
- Fix a 404 when clicking the hint link in Topic Messages details
- Fix the current replica count not showing when expanding replicas

**Experience improvements**

- Topic-Messages page: add result sorting and Earliest/Latest fetch modes
- Rename GroupOffsetResetEnum to OffsetTypeEnum for a more accurate class name
- Move the KafkaZKDAO class and the Kafka Znode entity classes to make the Kafka Zookeeper DAO more cohesive and recognizable
- Backend: add metric sorting for the Overview page
- Improve the frontend Webpack configuration
- Remove the zoom feature from Cluster Overview charts
- Add manual refresh to list pages
- Cluster onboarding/editing: improve echoing of JMX-PORT and Version information, and improve JMX information display
- Improve the clarity of the login page image
- Misc style and copy improvements
---
## v3.0.0-beta.2

**Documentation**

- Add a login system integration document
- Improve the frontend build documentation
- FAQ: add notes on connecting KnowStreaming to a specific JMX IP

**Bug fixes**

- Fix operations such as Topic deletion not being recorded because a logi_security_oplog table field was too short
- Fix ES queries throwing java.lang.NumberFormatException: For input string: "{"value":0,"relation":"eq"}"
- Fix wrong units of the LogStartOffset and LogEndOffset metrics
- Fix the old replica count being NULL during replica changes
- Fix wrong pagination information when searching the cluster Group list on the second page
- Fix inconsistent error messages when resetting Offsets
- Fix missing permission points on pages such as cluster view, system view, and LoadRebalance
- Fix the error message not being prominent when querying a nonexistent Topic
- Fix frontend build errors for Windows users
- package-lock.json pins frontend dependency versions, fixing build failures caused by automatic dependency upgrades
- System-management sub-application: intercept backend response codes so backend errors are displayed
- Fix users still being able to access the system after logging out
- Fix wrong numeric display in health-check task configuration
- Fix Broker/Topic Overview chart and chart detail issues
- Fix wrong detail data in Job replica-scaling tasks
- Fix missing limits on partition ID and Offset values when resetting Offsets
- Fix Kafka system Topics not being selectable when scaling/migrating replicas
- Fix the Topic Config page edit form not echoing the current value correctly
- Fix Broker Card still showing a loading state after data is returned

**Experience improvements**

- Change the default user/password to admin/admin
- Shorten cluster information loading time after adding a cluster
- Add Controller role information to the cluster Broker list
- Run preferred replica election after replica-change tasks finish
- Task module: split tasks into Metrics, Common, and Metadata, each with its own thread pool, reducing the impact on the Job module's pool and between task types
- Remove redundant unused files from the code
- Automatically create ES index templates and the next 7 days of indices, reducing setup steps
- Improve the frontend build process
- Improve login page copy, left sidebar content, single-cluster detail styles, Topic list trend charts, etc.
- Pre-cache data on first entry to Broker/Topic chart details to improve the experience
- Improve the Partition tab display in Topic details
- Add editing to the multi-cluster list page
- Support minute-granularity migration times during replica changes
- Upgrade logi-security to 2.10.13
- Upgrade logi-elasticsearch-client to 1.0.24

**Capability improvements**

- Support Ldap login authentication
---
## v3.0.0-beta.1

**Documentation**

@@ -35,6 +401,7 @@
- Add a periodic task that proactively creates missing ES templates and indices, reducing extra script operations
- Add the ability to choose the Broker address for JMX connections

---
## v3.0.0-beta.0


@@ -13,7 +13,7 @@ curl -s --connect-timeout 10 -o /dev/null -X POST -H 'cache-control: no-cache' -
],
"settings" : {
"index" : {
"number_of_shards" : "2"
}
},
"mappings" : {
@@ -115,7 +115,7 @@ curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: appl
],
"settings" : {
"index" : {
"number_of_shards" : "2"
}
},
"mappings" : {
@@ -302,7 +302,7 @@ curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: appl
],
"settings" : {
"index" : {
"number_of_shards" : "6"
}
},
"mappings" : {
@@ -377,7 +377,7 @@ curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: appl
],
"settings" : {
"index" : {
"number_of_shards" : "6"
}
},
"mappings" : {
@@ -436,95 +436,6 @@ curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: appl
"aliases" : { }
}'
curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: application/json' http://${esaddr}:${port}/_template/ks_kafka_partition_metric -d '{
"order" : 10,
"index_patterns" : [
"ks_kafka_partition_metric*"
],
"settings" : {
"index" : {
"number_of_shards" : "10"
}
},
"mappings" : {
"properties" : {
"brokerId" : {
"type" : "long"
},
"partitionId" : {
"type" : "long"
},
"routingValue" : {
"type" : "text",
"fields" : {
"keyword" : {
"ignore_above" : 256,
"type" : "keyword"
}
}
},
"clusterPhyId" : {
"type" : "long"
},
"topic" : {
"type" : "keyword"
},
"metrics" : {
"properties" : {
"LogStartOffset" : {
"type" : "float"
},
"Messages" : {
"type" : "float"
},
"LogEndOffset" : {
"type" : "float"
}
}
},
"key" : {
"type" : "text",
"fields" : {
"keyword" : {
"ignore_above" : 256,
"type" : "keyword"
}
}
},
"timestamp" : {
"format" : "yyyy-MM-dd HH:mm:ss Z||yyyy-MM-dd HH:mm:ss||yyyy-MM-dd HH:mm:ss.SSS Z||yyyy-MM-dd HH:mm:ss.SSS||yyyy-MM-dd HH:mm:ss,SSS||yyyy/MM/dd HH:mm:ss||yyyy-MM-dd HH:mm:ss,SSS Z||yyyy/MM/dd HH:mm:ss,SSS Z||epoch_millis",
"index" : true,
"type" : "date",
"doc_values" : true
}
}
},
"aliases" : { }
}'
curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: application/json' http://${esaddr}:${port}/_template/ks_kafka_replication_metric -d '{
"order" : 10,
"index_patterns" : [
"ks_kafka_replication_metric*"
],
"settings" : {
"index" : {
"number_of_shards" : "10"
}
},
"mappings" : {
"properties" : {
"timestamp" : {
"format" : "yyyy-MM-dd HH:mm:ss Z||yyyy-MM-dd HH:mm:ss||yyyy-MM-dd HH:mm:ss.SSS Z||yyyy-MM-dd HH:mm:ss.SSS||yyyy-MM-dd HH:mm:ss,SSS||yyyy/MM/dd HH:mm:ss||yyyy-MM-dd HH:mm:ss,SSS Z||yyyy/MM/dd HH:mm:ss,SSS Z||epoch_millis",
"index" : true,
"type" : "date",
"doc_values" : true
}
}
},
"aliases" : { }
}'
curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: application/json' http://${esaddr}:${port}/_template/ks_kafka_topic_metric -d '{
"order" : 10,
"index_patterns" : [
@@ -532,7 +443,7 @@ curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: appl
],
"settings" : {
"index" : {
"number_of_shards" : "6"
}
},
"mappings" : {
@@ -640,7 +551,474 @@ curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: appl
}
},
"aliases" : { }
}'
curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: application/json' http://${SERVER_ES_ADDRESS}/_template/ks_kafka_zookeeper_metric -d '{
"order" : 10,
"index_patterns" : [
"ks_kafka_zookeeper_metric*"
],
"settings" : {
"index" : {
"number_of_shards" : "2"
}
},
"mappings" : {
"properties" : {
"routingValue" : {
"type" : "text",
"fields" : {
"keyword" : {
"ignore_above" : 256,
"type" : "keyword"
}
}
},
"clusterPhyId" : {
"type" : "long"
},
"metrics" : {
"properties" : {
"AvgRequestLatency" : {
"type" : "double"
},
"MinRequestLatency" : {
"type" : "double"
},
"MaxRequestLatency" : {
"type" : "double"
},
"OutstandingRequests" : {
"type" : "double"
},
"NodeCount" : {
"type" : "double"
},
"WatchCount" : {
"type" : "double"
},
"NumAliveConnections" : {
"type" : "double"
},
"PacketsReceived" : {
"type" : "double"
},
"PacketsSent" : {
"type" : "double"
},
"EphemeralsCount" : {
"type" : "double"
},
"ApproximateDataSize" : {
"type" : "double"
},
"OpenFileDescriptorCount" : {
"type" : "double"
},
"MaxFileDescriptorCount" : {
"type" : "double"
}
}
},
"key" : {
"type" : "text",
"fields" : {
"keyword" : {
"ignore_above" : 256,
"type" : "keyword"
}
}
},
"timestamp" : {
"format" : "yyyy-MM-dd HH:mm:ss Z||yyyy-MM-dd HH:mm:ss||yyyy-MM-dd HH:mm:ss.SSS Z||yyyy-MM-dd HH:mm:ss.SSS||yyyy-MM-dd HH:mm:ss,SSS||yyyy/MM/dd HH:mm:ss||yyyy-MM-dd HH:mm:ss,SSS Z||yyyy/MM/dd HH:mm:ss,SSS Z||epoch_millis",
"type" : "date"
}
}
},
"aliases" : { }
}'
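As a sanity check of the mapping above: a document written to an index matching `ks_kafka_zookeeper_metric*` would carry `clusterPhyId`, a `metrics` object of numeric gauges, and a `timestamp` in one of the accepted formats. A minimal sketch follows; the field values are illustrative, not taken from a real cluster:

```python
import json
from datetime import datetime

# Illustrative document for the ks_kafka_zookeeper_metric* template above.
# Values are made up; only the shape mirrors the mapping.
doc = {
    "clusterPhyId": 1,
    "routingValue": "zk",
    "key": "1@zk",
    "metrics": {
        "AvgRequestLatency": 0.5,
        "OutstandingRequests": 0.0,
        "NodeCount": 4000.0,
        "NumAliveConnections": 12.0,
    },
    # Matches the 'yyyy-MM-dd HH:mm:ss' variant of the mapping's date format.
    "timestamp": datetime(2023, 12, 3, 15, 0, 0).strftime("%Y-%m-%d %H:%M:%S"),
}

body = json.dumps(doc)
print(body)
```

Any value parseable by one of the `||`-separated patterns in the template's `format` string (including `epoch_millis`) would be accepted for `timestamp`.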
curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: application/json' http://${SERVER_ES_ADDRESS}/_template/ks_kafka_connect_cluster_metric -d '{
"order" : 10,
"index_patterns" : [
"ks_kafka_connect_cluster_metric*"
],
"settings" : {
"index" : {
"number_of_shards" : "2"
}
},
"mappings" : {
"properties" : {
"connectClusterId" : {
"type" : "long"
},
"routingValue" : {
"type" : "text",
"fields" : {
"keyword" : {
"ignore_above" : 256,
"type" : "keyword"
}
}
},
"clusterPhyId" : {
"type" : "long"
},
"metrics" : {
"properties" : {
"ConnectorCount" : {
"type" : "float"
},
"TaskCount" : {
"type" : "float"
},
"ConnectorStartupAttemptsTotal" : {
"type" : "float"
},
"ConnectorStartupFailurePercentage" : {
"type" : "float"
},
"ConnectorStartupFailureTotal" : {
"type" : "float"
},
"ConnectorStartupSuccessPercentage" : {
"type" : "float"
},
"ConnectorStartupSuccessTotal" : {
"type" : "float"
},
"TaskStartupAttemptsTotal" : {
"type" : "float"
},
"TaskStartupFailurePercentage" : {
"type" : "float"
},
"TaskStartupFailureTotal" : {
"type" : "float"
},
"TaskStartupSuccessPercentage" : {
"type" : "float"
},
"TaskStartupSuccessTotal" : {
"type" : "float"
}
}
},
"key" : {
"type" : "text",
"fields" : {
"keyword" : {
"ignore_above" : 256,
"type" : "keyword"
}
}
},
"timestamp" : {
"format" : "yyyy-MM-dd HH:mm:ss Z||yyyy-MM-dd HH:mm:ss||yyyy-MM-dd HH:mm:ss.SSS Z||yyyy-MM-dd HH:mm:ss.SSS||yyyy-MM-dd HH:mm:ss,SSS||yyyy/MM/dd HH:mm:ss||yyyy-MM-dd HH:mm:ss,SSS Z||yyyy/MM/dd HH:mm:ss,SSS Z||epoch_millis",
"index" : true,
"type" : "date",
"doc_values" : true
}
}
},
"aliases" : { }
}'
curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: application/json' http://${SERVER_ES_ADDRESS}/_template/ks_kafka_connect_connector_metric -d '{
"order" : 10,
"index_patterns" : [
"ks_kafka_connect_connector_metric*"
],
"settings" : {
"index" : {
"number_of_shards" : "2"
}
},
"mappings" : {
"properties" : {
"connectClusterId" : {
"type" : "long"
},
"routingValue" : {
"type" : "text",
"fields" : {
"keyword" : {
"ignore_above" : 256,
"type" : "keyword"
}
}
},
"connectorName" : {
"type" : "keyword"
},
"connectorNameAndClusterId" : {
"type" : "keyword"
},
"clusterPhyId" : {
"type" : "long"
},
"metrics" : {
"properties" : {
"HealthState" : {
"type" : "float"
},
"ConnectorTotalTaskCount" : {
"type" : "float"
},
"HealthCheckPassed" : {
"type" : "float"
},
"HealthCheckTotal" : {
"type" : "float"
},
"ConnectorRunningTaskCount" : {
"type" : "float"
},
"ConnectorPausedTaskCount" : {
"type" : "float"
},
"ConnectorFailedTaskCount" : {
"type" : "float"
},
"ConnectorUnassignedTaskCount" : {
"type" : "float"
},
"BatchSizeAvg" : {
"type" : "float"
},
"BatchSizeMax" : {
"type" : "float"
},
"OffsetCommitAvgTimeMs" : {
"type" : "float"
},
"OffsetCommitMaxTimeMs" : {
"type" : "float"
},
"OffsetCommitFailurePercentage" : {
"type" : "float"
},
"OffsetCommitSuccessPercentage" : {
"type" : "float"
},
"PollBatchAvgTimeMs" : {
"type" : "float"
},
"PollBatchMaxTimeMs" : {
"type" : "float"
},
"SourceRecordActiveCount" : {
"type" : "float"
},
"SourceRecordActiveCountAvg" : {
"type" : "float"
},
"SourceRecordActiveCountMax" : {
"type" : "float"
},
"SourceRecordPollRate" : {
"type" : "float"
},
"SourceRecordPollTotal" : {
"type" : "float"
},
"SourceRecordWriteRate" : {
"type" : "float"
},
"SourceRecordWriteTotal" : {
"type" : "float"
},
"OffsetCommitCompletionRate" : {
"type" : "float"
},
"OffsetCommitCompletionTotal" : {
"type" : "float"
},
"OffsetCommitSkipRate" : {
"type" : "float"
},
"OffsetCommitSkipTotal" : {
"type" : "float"
},
"PartitionCount" : {
"type" : "float"
},
"PutBatchAvgTimeMs" : {
"type" : "float"
},
"PutBatchMaxTimeMs" : {
"type" : "float"
},
"SinkRecordActiveCount" : {
"type" : "float"
},
"SinkRecordActiveCountAvg" : {
"type" : "float"
},
"SinkRecordActiveCountMax" : {
"type" : "float"
},
"SinkRecordLagMax" : {
"type" : "float"
},
"SinkRecordReadRate" : {
"type" : "float"
},
"SinkRecordReadTotal" : {
"type" : "float"
},
"SinkRecordSendRate" : {
"type" : "float"
},
"SinkRecordSendTotal" : {
"type" : "float"
},
"DeadletterqueueProduceFailures" : {
"type" : "float"
},
"DeadletterqueueProduceRequests" : {
"type" : "float"
},
"LastErrorTimestamp" : {
"type" : "float"
},
"TotalErrorsLogged" : {
"type" : "float"
},
"TotalRecordErrors" : {
"type" : "float"
},
"TotalRecordFailures" : {
"type" : "float"
},
"TotalRecordsSkipped" : {
"type" : "float"
},
"TotalRetries" : {
"type" : "float"
}
}
},
"key" : {
"type" : "text",
"fields" : {
"keyword" : {
"ignore_above" : 256,
"type" : "keyword"
}
}
},
"timestamp" : {
"format" : "yyyy-MM-dd HH:mm:ss Z||yyyy-MM-dd HH:mm:ss||yyyy-MM-dd HH:mm:ss.SSS Z||yyyy-MM-dd HH:mm:ss.SSS||yyyy-MM-dd HH:mm:ss,SSS||yyyy/MM/dd HH:mm:ss||yyyy-MM-dd HH:mm:ss,SSS Z||yyyy/MM/dd HH:mm:ss,SSS Z||epoch_millis",
"index" : true,
"type" : "date",
"doc_values" : true
}
}
},
"aliases" : { }
}'
curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: application/json' http://${SERVER_ES_ADDRESS}/_template/ks_kafka_connect_mirror_maker_metric -d '{
"order" : 10,
"index_patterns" : [
"ks_kafka_connect_mirror_maker_metric*"
],
"settings" : {
"index" : {
"number_of_shards" : "2"
}
},
"mappings" : {
"properties" : {
"connectClusterId" : {
"type" : "long"
},
"routingValue" : {
"type" : "text",
"fields" : {
"keyword" : {
"ignore_above" : 256,
"type" : "keyword"
}
}
},
"connectorName" : {
"type" : "keyword"
},
"connectorNameAndClusterId" : {
"type" : "keyword"
},
"clusterPhyId" : {
"type" : "long"
},
"metrics" : {
"properties" : {
"HealthState" : {
"type" : "float"
},
"HealthCheckTotal" : {
"type" : "float"
},
"ByteCount" : {
"type" : "float"
},
"ByteRate" : {
"type" : "float"
},
"RecordAgeMs" : {
"type" : "float"
},
"RecordAgeMsAvg" : {
"type" : "float"
},
"RecordAgeMsMax" : {
"type" : "float"
},
"RecordAgeMsMin" : {
"type" : "float"
},
"RecordCount" : {
"type" : "float"
},
"RecordRate" : {
"type" : "float"
},
"ReplicationLatencyMs" : {
"type" : "float"
},
"ReplicationLatencyMsAvg" : {
"type" : "float"
},
"ReplicationLatencyMsMax" : {
"type" : "float"
},
"ReplicationLatencyMsMin" : {
"type" : "float"
}
}
},
"key" : {
"type" : "text",
"fields" : {
"keyword" : {
"ignore_above" : 256,
"type" : "keyword"
}
}
},
"timestamp" : {
"format" : "yyyy-MM-dd HH:mm:ss Z||yyyy-MM-dd HH:mm:ss||yyyy-MM-dd HH:mm:ss.SSS Z||yyyy-MM-dd HH:mm:ss.SSS||yyyy-MM-dd HH:mm:ss,SSS||yyyy/MM/dd HH:mm:ss||yyyy-MM-dd HH:mm:ss,SSS Z||yyyy/MM/dd HH:mm:ss,SSS Z||epoch_millis",
"index" : true,
"type" : "date",
"doc_values" : true
}
}
},
"aliases" : { }
}'
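The template `POST`s above discard every response (`-s -o /dev/null`), so it can be worth confirming afterwards that Elasticsearch actually registered them. A minimal sketch, assuming `SERVER_ES_ADDRESS` is set as in the script; the fallback address and the commented-out query are only illustrations:

```shell
# Build the _template URL for each Connect metric template just posted.
# The curl line is commented out so the sketch also runs without a live cluster.
SERVER_ES_ADDRESS=${SERVER_ES_ADDRESS:-127.0.0.1:9200}
for tpl in ks_kafka_connect_cluster_metric ks_kafka_connect_connector_metric ks_kafka_connect_mirror_maker_metric
do
    url="http://${SERVER_ES_ADDRESS}/_template/${tpl}"
    echo "checking ${url}"
    # curl -s "${url}" | grep -q "${tpl}" || echo "template ${tpl} missing"
done
```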
@@ -649,7 +1027,10 @@
for i in {0..6};
do
	curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_cluster_metric${logdate} && \
	curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_group_metric${logdate} && \
	curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_partition_metric${logdate} && \
	curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_replication_metric${logdate} && \
	curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_zookeeper_metric${logdate} && \
	curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_connect_cluster_metric${logdate} && \
	curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_connect_connector_metric${logdate} && \
	curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_connect_mirror_maker_metric${logdate} && \
	curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_topic_metric${logdate} || \
	exit 2
done
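`logdate`, `esaddr`, and `port` in the loop above are defined earlier in the script, outside this excerpt. Purely as an illustration, and assuming a `_%Y-%m-%d` suffix (an assumption, not the script's actual definition), the per-day index names could be derived like this:

```shell
# Hypothetical sketch: print the daily index names the loop above would
# pre-create, using GNU date to compute a suffix for the next 7 days.
esaddr=127.0.0.1   # placeholder
port=9200          # placeholder
for i in {0..6}
do
    logdate=_$(date -d "+${i} days" +%Y-%m-%d)
    echo "PUT http://${esaddr}:${port}/ks_kafka_topic_metric${logdate}"
done
```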


@@ -9,7 +9,7 @@ error_exit ()
[ ! -e "$JAVA_HOME/bin/java" ] && unset JAVA_HOME
if [ -z "$JAVA_HOME" ]; then
  if [ "Darwin" = "$(uname -s)" ]; then
    if [ -x '/usr/libexec/java_home' ] ; then
      export JAVA_HOME=`/usr/libexec/java_home`


@@ -0,0 +1,111 @@
<mxfile host="65bd71144e">
<diagram id="vxzhwhZdNVAY19FZ4dgb" name="Page-1">
<mxGraphModel dx="1194" dy="733" grid="0" gridSize="10" guides="1" tooltips="1" connect="1" arrows="1" fold="1" page="1" pageScale="1" pageWidth="1169" pageHeight="827" math="0" shadow="0">
<root>
<mxCell id="0"/>
<mxCell id="1" parent="0"/>
<mxCell id="4" style="edgeStyle=none;html=1;exitX=0.5;exitY=1;exitDx=0;exitDy=0;startArrow=none;strokeWidth=2;strokeColor=#6666FF;" edge="1" parent="1" source="16">
<mxGeometry relative="1" as="geometry">
<mxPoint x="200" y="540" as="targetPoint"/>
</mxGeometry>
</mxCell>
<mxCell id="7" style="edgeStyle=none;html=1;exitX=1;exitY=0.5;exitDx=0;exitDy=0;exitPerimeter=0;strokeColor=#33FF33;strokeWidth=2;" edge="1" parent="1" source="2">
<mxGeometry relative="1" as="geometry">
<mxPoint x="360" y="240" as="targetPoint"/>
</mxGeometry>
</mxCell>
<mxCell id="5" style="edgeStyle=none;html=1;startArrow=none;strokeColor=#33FF33;strokeWidth=2;" edge="1" parent="1">
<mxGeometry relative="1" as="geometry">
<mxPoint x="200" y="400" as="targetPoint"/>
<mxPoint x="360" y="360" as="sourcePoint"/>
</mxGeometry>
</mxCell>
<mxCell id="3" value="C3" style="verticalLabelPosition=middle;verticalAlign=middle;html=1;shape=mxgraph.flowchart.on-page_reference;labelPosition=center;align=center;strokeColor=#FF8000;strokeWidth=2;" vertex="1" parent="1">
<mxGeometry x="340" y="280" width="40" height="40" as="geometry"/>
</mxCell>
<mxCell id="18" style="edgeStyle=none;html=1;entryX=0.5;entryY=0;entryDx=0;entryDy=0;entryPerimeter=0;endArrow=none;endFill=0;strokeColor=#FF8000;strokeWidth=2;" edge="1" parent="1" source="8" target="3">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
<mxCell id="8" value="fix_928" style="rounded=1;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=0;" vertex="1" parent="1">
<mxGeometry x="320" y="40" width="80" height="40" as="geometry"/>
</mxCell>
<mxCell id="9" value="github_master" style="rounded=1;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=0;" vertex="1" parent="1">
<mxGeometry x="160" y="40" width="80" height="40" as="geometry"/>
</mxCell>
<mxCell id="10" value="" style="edgeStyle=none;html=1;exitX=0.5;exitY=1;exitDx=0;exitDy=0;endArrow=classic;startArrow=none;endFill=1;strokeWidth=2;strokeColor=#6666FF;" edge="1" parent="1" source="11" target="2">
<mxGeometry relative="1" as="geometry">
<mxPoint x="200" y="640" as="targetPoint"/>
<mxPoint x="200" y="80" as="sourcePoint"/>
</mxGeometry>
</mxCell>
<mxCell id="2" value="C2" style="verticalLabelPosition=middle;verticalAlign=middle;html=1;shape=mxgraph.flowchart.on-page_reference;labelPosition=center;align=center;strokeColor=#6666FF;strokeWidth=2;" vertex="1" parent="1">
<mxGeometry x="180" y="200" width="40" height="40" as="geometry"/>
</mxCell>
<mxCell id="12" value="" style="edgeStyle=none;html=1;exitX=0.5;exitY=1;exitDx=0;exitDy=0;endArrow=classic;endFill=1;strokeWidth=2;strokeColor=#6666FF;" edge="1" parent="1" source="9" target="11">
<mxGeometry relative="1" as="geometry">
<mxPoint x="200" y="200" as="targetPoint"/>
<mxPoint x="200" y="80" as="sourcePoint"/>
</mxGeometry>
</mxCell>
<mxCell id="11" value="C1" style="verticalLabelPosition=middle;verticalAlign=middle;html=1;shape=mxgraph.flowchart.on-page_reference;labelPosition=center;align=center;strokeColor=#6666FF;strokeWidth=2;" vertex="1" parent="1">
<mxGeometry x="180" y="120" width="40" height="40" as="geometry"/>
</mxCell>
<mxCell id="23" style="edgeStyle=none;html=1;exitX=0.5;exitY=1;exitDx=0;exitDy=0;exitPerimeter=0;endArrow=none;endFill=0;strokeColor=#FF8000;strokeWidth=2;" edge="1" parent="1" source="3">
<mxGeometry relative="1" as="geometry">
<mxPoint x="360" y="360" as="targetPoint"/>
<mxPoint x="360" y="400" as="sourcePoint"/>
</mxGeometry>
</mxCell>
<mxCell id="17" value="" style="edgeStyle=none;html=1;exitX=0.5;exitY=1;exitDx=0;exitDy=0;startArrow=none;endArrow=none;strokeWidth=2;strokeColor=#6666FF;" edge="1" parent="1" source="2" target="16">
<mxGeometry relative="1" as="geometry">
<mxPoint x="200" y="640" as="targetPoint"/>
<mxPoint x="200" y="240" as="sourcePoint"/>
</mxGeometry>
</mxCell>
<mxCell id="16" value="C4" style="verticalLabelPosition=middle;verticalAlign=middle;html=1;shape=mxgraph.flowchart.on-page_reference;labelPosition=center;align=center;strokeColor=#6666FF;strokeWidth=2;" vertex="1" parent="1">
<mxGeometry x="180" y="440" width="40" height="40" as="geometry"/>
</mxCell>
<mxCell id="22" value="Tag-v3.2.0" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=0;fillColor=none;strokeColor=none;" vertex="1" parent="1">
<mxGeometry x="100" y="120" width="80" height="40" as="geometry"/>
</mxCell>
<mxCell id="24" value="Tag-v3.2.1" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=0;fillColor=none;strokeColor=none;" vertex="1" parent="1">
<mxGeometry x="100" y="440" width="80" height="40" as="geometry"/>
</mxCell>
<mxCell id="27" value="切换到主分支git checkout github_master" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=0;labelPosition=center;verticalLabelPosition=middle;align=center;verticalAlign=middle;" vertex="1" parent="1">
<mxGeometry x="520" y="90" width="240" height="30" as="geometry"/>
</mxCell>
<mxCell id="34" style="edgeStyle=none;html=1;exitX=0;exitY=0;exitDx=0;exitDy=0;entryX=0.855;entryY=0.145;entryDx=0;entryDy=0;entryPerimeter=0;dashed=1;dashPattern=8 8;fontSize=18;endArrow=none;endFill=0;" edge="1" parent="1" source="28" target="2">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
<mxCell id="28" value="主分支拉最新代码git pull" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=0;labelPosition=center;verticalLabelPosition=middle;align=center;verticalAlign=middle;" vertex="1" parent="1">
<mxGeometry x="520" y="120" width="160" height="30" as="geometry"/>
</mxCell>
<mxCell id="35" style="edgeStyle=none;html=1;exitX=0;exitY=0.5;exitDx=0;exitDy=0;dashed=1;dashPattern=8 8;fontSize=18;endArrow=none;endFill=0;" edge="1" parent="1" source="29">
<mxGeometry relative="1" as="geometry">
<mxPoint x="270" y="225" as="targetPoint"/>
</mxGeometry>
</mxCell>
<mxCell id="29" value="基于主分支拉新分支git checkout -b fix_928" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=0;labelPosition=center;verticalLabelPosition=middle;align=center;verticalAlign=middle;" vertex="1" parent="1">
<mxGeometry x="520" y="210" width="250" height="30" as="geometry"/>
</mxCell>
<mxCell id="37" style="edgeStyle=none;html=1;exitX=0;exitY=1;exitDx=0;exitDy=0;entryX=1;entryY=0.5;entryDx=0;entryDy=0;entryPerimeter=0;dashed=1;dashPattern=8 8;fontSize=18;endArrow=none;endFill=0;" edge="1" parent="1" source="30" target="3">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
<mxCell id="30" value="提交代码git commit -m &quot;[Optimize]优化xxx问题(#928)&quot;" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=0;labelPosition=center;verticalLabelPosition=middle;align=center;verticalAlign=middle;" vertex="1" parent="1">
<mxGeometry x="520" y="270" width="320" height="30" as="geometry"/>
</mxCell>
<mxCell id="31" value="提交到自己远端仓库git push --set-upstream origin fix_928" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=0;labelPosition=center;verticalLabelPosition=middle;align=center;verticalAlign=middle;" vertex="1" parent="1">
<mxGeometry x="520" y="300" width="334" height="30" as="geometry"/>
</mxCell>
<mxCell id="38" style="edgeStyle=none;html=1;exitX=0;exitY=0.5;exitDx=0;exitDy=0;dashed=1;dashPattern=8 8;fontSize=18;endArrow=none;endFill=0;" edge="1" parent="1" source="32">
<mxGeometry relative="1" as="geometry">
<mxPoint x="280" y="380" as="targetPoint"/>
</mxGeometry>
</mxCell>
<mxCell id="32" value="GitHub页面发起Pull Request请求管理员合入主仓库" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=0;labelPosition=center;verticalLabelPosition=middle;align=center;verticalAlign=middle;" vertex="1" parent="1">
<mxGeometry x="520" y="360" width="300" height="30" as="geometry"/>
</mxCell>
</root>
</mxGraphModel>
</diagram>
</mxfile>


@@ -0,0 +1 @@
TODO.


@@ -0,0 +1,100 @@
# Contributors
- [Contributors](#contributors)
  - [1. Contributor Roles](#1-contributor-roles)
    - [1.1 Maintainer](#11-maintainer)
    - [1.2 Committer](#12-committer)
    - [1.3 Contributor](#13-contributor)
  - [2. Contributor List](#2-contributor-list)
## 1. Contributor Roles
KnowStreaming developers fall into three roles: Maintainer, Committer, and Contributor. Each role is defined as follows.
### 1.1 Maintainer
A Maintainer is an individual who has contributed significantly to the evolution and development of the KnowStreaming project, meeting these criteria:
- Has designed and developed several key modules or projects, and is a core developer of the project;
- Shows sustained commitment and enthusiasm, actively maintaining the community, website, issues, PRs, and other project matters;
- Has recognized influence in the community and can represent KnowStreaming at important community meetings and events;
- Has the awareness and ability to mentor Committers and Contributors.
### 1.2 Committer
A Committer is an individual with write access to the KnowStreaming repository, meeting these criteria:
- Has continuously contributed issues and PRs over a long period;
- Participates in maintaining the issue list and in discussing important features;
- Participates in code review.
### 1.3 Contributor
A Contributor is an individual who has contributed to the KnowStreaming project. The criterion is:
- Has submitted a PR that was merged.
---
## 2. Contributor List
The open-source contributor list (updated from time to time).
If you are on the list but have not received a contributor gift, please contact szzdzhp001.
| Name | GitHub | Role | Company |
| ------------------- | ---------------------------------------------------------- | ----------- | -------- |
| 张亮 | [@zhangliangboy](https://github.com/zhangliangboy) | Maintainer | 滴滴出行 |
| 谢鹏 | [@PenceXie](https://github.com/PenceXie) | Maintainer | 滴滴出行 |
| 赵情融 | [@zqrferrari](https://github.com/zqrferrari) | Maintainer | 滴滴出行 |
| 石臻臻 | [@shirenchuang](https://github.com/shirenchuang) | Maintainer | 滴滴出行 |
| 曾巧 | [@ZQKC](https://github.com/ZQKC) | Maintainer | 滴滴出行 |
| 孙超 | [@lucasun](https://github.com/lucasun) | Maintainer | 滴滴出行 |
| 洪华驰 | [@brodiehong](https://github.com/brodiehong) | Maintainer | 滴滴出行 |
| 许喆 | [@potaaaaaato](https://github.com/potaaaaaato) | Committer | 滴滴出行 |
| 郭宇航 | [@GraceWalk](https://github.com/GraceWalk) | Committer | 滴滴出行 |
| 李伟 | [@velee](https://github.com/velee) | Committer | 滴滴出行 |
| 张占昌 | [@zzccctv](https://github.com/zzccctv) | Committer | 滴滴出行 |
| 王东方 | [@wangdongfang-aden](https://github.com/wangdongfang-aden) | Committer | 滴滴出行 |
| 王耀波 | [@WYAOBO](https://github.com/WYAOBO) | Committer | 滴滴出行 |
| 赵寅锐 | [@ZHAOYINRUI](https://github.com/ZHAOYINRUI) | Maintainer | 字节跳动 |
| haoqi123 | [@haoqi123](https://github.com/haoqi123) | Contributor | 前程无忧 |
| chaixiaoxue | [@chaixiaoxue](https://github.com/chaixiaoxue) | Contributor | SYNNEX |
| 陆晗 | [@luhea](https://github.com/luhea) | Contributor | 竞技世界 |
| Mengqi777 | [@Mengqi777](https://github.com/Mengqi777) | Contributor | 腾讯 |
| ruanliang-hualun | [@ruanliang-hualun](https://github.com/ruanliang-hualun) | Contributor | 网易 |
| 17hao | [@17hao](https://github.com/17hao) | Contributor | |
| Huyueeer | [@Huyueeer](https://github.com/Huyueeer) | Contributor | INVENTEC |
| lomodays207 | [@lomodays207](https://github.com/lomodays207) | Contributor | 建信金科 |
| Super .Wein星痕 | [@superspeedone](https://github.com/superspeedone) | Contributor | 韵达 |
| Hongten | [@Hongten](https://github.com/Hongten) | Contributor | Shopee |
| 徐正熙 | [@hyper-xx](https://github.com/hyper-xx) | Contributor | 滴滴出行 |
| RichardZhengkay | [@RichardZhengkay](https://github.com/RichardZhengkay) | Contributor | 趣街 |
| 罐子里的茶 | [@gzldc](https://github.com/gzldc) | Contributor | 道富 |
| 陈忠玉 | [@chenzhongyu11](https://github.com/chenzhongyu11) | Contributor | 平安产险 |
| 杨光 | [@yangvipguang](https://github.com/yangvipguang) | Contributor | |
| 王亚聪 | [@wangyacongi](https://github.com/wangyacongi) | Contributor | |
| Yang Jing | [@yangbajing](https://github.com/yangbajing) | Contributor | |
| 刘新元 Liu XinYuan | [@Liu-XinYuan](https://github.com/Liu-XinYuan) | Contributor | |
| Joker | [@JokerQueue](https://github.com/JokerQueue) | Contributor | 丰巢 |
| Eason Lau | [@Liubey](https://github.com/Liubey) | Contributor | |
| hailanxin | [@hailanxin](https://github.com/hailanxin) | Contributor | |
| Qi Zhang | [@zzzhangqi](https://github.com/zzzhangqi) | Contributor | 好雨科技 |
| fengxsong | [@fengxsong](https://github.com/fengxsong) | Contributor | |
| 谢晓东 | [@Strangevy](https://github.com/Strangevy) | Contributor | 花生日记 |
| ZhaoXinlong | [@ZhaoXinlong](https://github.com/ZhaoXinlong) | Contributor | |
| xuehaipeng | [@xuehaipeng](https://github.com/xuehaipeng) | Contributor | |
| 孔令续 | [@mrazkong](https://github.com/mrazkong) | Contributor | |
| pierre xiong | [@pierre94](https://github.com/pierre94) | Contributor | |
| PengShuaixin | [@PengShuaixin](https://github.com/PengShuaixin) | Contributor | |
| 梁壮 | [@silent-night-no-trace](https://github.com/silent-night-no-trace) | Contributor | |
| 张晓寅 | [@ahu0605](https://github.com/ahu0605) | Contributor | 电信数智 |
| 黄海婷 | [@Huanghaiting](https://github.com/Huanghaiting) | Contributor | 云徙科技 |
| 任祥德 | [@RenChauncy](https://github.com/RenChauncy) | Contributor | 探马企服 |
| 胡圣林 | [@slhu997](https://github.com/slhu997) | Contributor | |
| 史泽颖 | [@shizeying](https://github.com/shizeying) | Contributor | |
| 王玉博 | [@Wyb7290](https://github.com/Wyb7290) | Committer | |
| 伍璇 | [@Luckywustone](https://github.com/Luckywustone) | Contributor | |
| 邓苑 | [@CatherineDY](https://github.com/CatherineDY) | Contributor | |
| 封琼凤 | [@fengqiongfeng](https://github.com/fengqiongfeng) | Committer | |


@@ -0,0 +1,168 @@
# Contribution Guide
- [Contribution Guide](#contribution-guide)
  - [1. Code of Conduct](#1-code-of-conduct)
  - [2. Repository Conventions](#2-repository-conventions)
    - [2.1 Issue Conventions](#21-issue-conventions)
    - [2.2 Commit-Log Conventions](#22-commit-log-conventions)
    - [2.3 Pull-Request Conventions](#23-pull-request-conventions)
  - [3. Walkthrough](#3-walkthrough)
    - [3.1 Initializing the Environment](#31-initializing-the-environment)
    - [3.2 Claiming an Issue](#32-claiming-an-issue)
    - [3.3 Fixing an Issue & Submitting the Fix](#33-fixing-an-issue--submitting-the-fix)
    - [3.4 Requesting a Merge](#34-requesting-a-merge)
  - [4. FAQ](#4-faq)
    - [4.1 How do I squash multiple Commit-Logs into one?](#41-how-do-i-squash-multiple-commit-logs-into-one)
---
Welcome 👏🏻 👏🏻 👏🏻 to `KnowStreaming`. This document is a guide to contributing to `KnowStreaming`. If you find anything incorrect or missing, please leave your comments and suggestions.
---
## 1. Code of Conduct
Please read and follow our [Code of Conduct](https://github.com/didi/KnowStreaming/blob/master/CODE_OF_CONDUCT.md).
## 2. Repository Conventions
### 2.1 Issue Conventions
Create an issue as instructed at [Create Issue](https://github.com/didi/KnowStreaming/issues/new/choose).
Two points deserve emphasis:
- Describe the environment in which the problem occurs, including the operating system and the KnowStreaming version in use;
- Describe how to reproduce the problem.
### 2.2 Commit-Log Conventions
A `Commit-Log` has three parts: `Header`, `Body`, and `Footer`. The `Header` is mandatory and has a fixed format; the `Body` is used when the change warrants a detailed explanation.
**1. `Header` conventions**
The `Header` format is `[Type]Message`, made up of two parts, `Type` and `Message`:
- `Type`: the kind of commit, e.g. Bugfix, Feature, Optimize;
- `Message`: a description of the commit, e.g. "fix the xx problem".
A real example: [`[Bugfix]修复新接入的集群Controller-Host不显示的问题`](https://github.com/didi/KnowStreaming/pull/933/commits)
**2. `Body` conventions**
Usually unnecessary. For a complex fix or a large change, use the `Body` to explain the problem being solved, the approach taken, and so on.
---
**3. A real example**
```
[Optimize]Improve the initialization of the MySQL & ES test containers

Main changes:
1. the knowstreaming/knowstreaming-manager container;
2. the knowstreaming/knowstreaming-mysql container now uses the mysql:5.7 image;
3. after the mysql:5.7 container is initialized, the MySQL tables and data are initialized as well;

Changes affected by the above:
1. the MySQL init scripts moved from km-dist/init/sql to km-persistence/src/main/resource/sql, so the project tests can load the init SQL they need;
2. the unused km-dist/init/template directory was removed;
3. because of the km-dist/init/sql and km-dist/init/template changes, the file list in ReleaseKnowStreaming.xml was adjusted accordingly;
```
**TODO: anyone interested could introduce a Git hook for better Commit-Log management.**
### 2.3 Pull-Request Conventions
See the [PULL-REQUEST template](../../.github/PULL_REQUEST_TEMPLATE.md) for details.
Two points deserve emphasis:
- <font color=red>Any PR must be associated with a valid issue; otherwise the PR will be rejected;</font>
- <font color=red>One branch changes one thing, and one PR changes one thing.</font>
---
## 3. Walkthrough
This section covers the workflow and the commands involved in contributing code to `KnowStreaming`.
Terminology:
- Main repository: https://github.com/didi/KnowStreaming;
- Fork: the KnowStreaming repository forked into your own account.
### 3.1 Initializing the Environment
1. Fork the `KnowStreaming` main repository into your own account via the `Fork` button at the top right of https://github.com/didi/KnowStreaming;
2. Clone your fork locally: `git clone git@github.com:xxxxxxx/KnowStreaming.git`; its remote short name is usually `origin`;
3. Add the main repository as a remote: `git remote add upstream https://github.com/didi/KnowStreaming`; `upstream` is the local short name for the main repository, and any name works as long as you use it consistently;
4. Fetch the main repository: `git fetch upstream`;
5. Fetch your fork: `git fetch origin`;
6. Check out the main repository's `master` branch locally as `github_master`: `git checkout -b github_master upstream/master`.
Finally, the rough result after initialization looks like this:
![环境初始化](./assets/环境初始化.jpg)
The environment is now ready. From here on, the `github_master` branch mirrors the main repository's `master` branch: `git pull` fetches its latest code, and `git checkout -b xxx` creates whatever branch we need.
### 3.2 Claiming an Issue
Comment on the issue stating that you will handle it, as shown below:
![问题认领](./assets/问题认领.jpg)
### 3.3 Fixing an Issue & Submitting the Fix
This section covers branch management while fixing an issue and submitting the fix, as shown below:
![分支管理](./assets/分支管理.png)
1. Switch to the main branch: `git checkout github_master`;
2. Pull the latest code on it: `git pull`;
3. Create a new branch off the main branch: `git checkout -b fix_928`;
4. Commit the code following the Commit-Log conventions, e.g. `git commit -m "[Optimize]优化xxx问题"`;
5. Push to your own remote repository: `git push --set-upstream origin fix_928`;
6. Open a `Pull Request` on the `GitHub` page and ask a maintainer to merge it into the main repository. See the next section for details.
### 3.4 Requesting a Merge
Once the code is pushed to your fork on `GitHub`, create a `Pull Request` on the `GitHub` website to ask for it to be merged into the main repository, as shown below:
![申请合并](./assets/申请合并.jpg)
[An example Pull Request](https://github.com/didi/KnowStreaming/pull/945)
---
## 4. FAQ
### 4.1 How do I squash multiple Commit-Logs into one?
Squashing multiple commits into one is not required. If you do want to squash, `git rebase -i` does the job.
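As a quick illustration of `git rebase -i`, the interactive step can be driven non-interactively in a throw-away repository. All names and messages below are made up; `GIT_SEQUENCE_EDITOR` rewrites the todo list that the editor would normally show:

```shell
# Create a scratch repo with a base commit plus three fix-up commits,
# then squash the three into a single commit via `git rebase -i`.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email ks@example.com
git config user.name ks
echo base > f; git add f; git commit -qm "base"
for n in 1 2 3; do echo "$n" >> f; git commit -qam "step $n"; done
# Rewrite the todo list: keep the first "pick", turn the rest into "squash".
GIT_SEQUENCE_EDITOR='sed -i "2,\$ s/^pick/squash/"' GIT_EDITOR=true git rebase -i -q HEAD~3
git log --oneline
```

Marking a todo line as `squash` folds that commit into the previous one; `GIT_EDITOR=true` simply accepts the default combined commit message.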


@@ -6,72 +6,72 @@
### 3.3.1 Cluster Metrics
| Metric | Unit | Description | Kafka Version | Enterprise/Open-Source |
| ------------------------- | ------ | --------------------------------------------------- | --------------- | ----------- |
| HealthScore | score | Overall health score of the cluster | All versions | Open-source |
| HealthCheckPassed | count | Passed health checks for the whole cluster | All versions | Open-source |
| HealthCheckTotal | count | Total health checks for the whole cluster | All versions | Open-source |
| HealthScore_Topics | score | Health score of the cluster's Topics | All versions | Open-source |
| HealthCheckPassed_Topics | count | Passed health checks for the cluster's Topics | All versions | Open-source |
| HealthCheckTotal_Topics | count | Total health checks for the cluster's Topics | All versions | Open-source |
| HealthScore_Brokers | score | Health score of the cluster's Brokers | All versions | Open-source |
| HealthCheckPassed_Brokers | count | Passed health checks for the cluster's Brokers | All versions | Open-source |
| HealthCheckTotal_Brokers | count | Total health checks for the cluster's Brokers | All versions | Open-source |
| HealthScore_Groups | score | Health score of the cluster's Groups | All versions | Open-source |
| HealthCheckPassed_Groups | count | Passed health checks for the cluster's Groups | All versions | Open-source |
| HealthCheckTotal_Groups | count | Total health checks for the cluster's Groups | All versions | Open-source |
| HealthScore_Cluster | score | Health score of the cluster itself | All versions | Open-source |
| HealthCheckPassed_Cluster | count | Passed health checks for the cluster itself | All versions | Open-source |
| HealthCheckTotal_Cluster | count | Total health checks for the cluster itself | All versions | Open-source |
| TotalRequestQueueSize | count | Total request queue size across the cluster | All versions | Open-source |
| TotalResponseQueueSize | count | Total response queue size across the cluster | All versions | Open-source |
| EventQueueSize | count | Size of the Controller's EventQueue | 2.0.0 and above | Open-source |
| ActiveControllerCount | count | Number of live Controllers in the cluster | All versions | Open-source |
| TotalProduceRequests | count | Produce requests per second in the cluster | All versions | Open-source |
| TotalLogSize | byte | Total disk space used by the cluster | All versions | Open-source |
| ConnectionsCount | count | Number of connections in the cluster | All versions | Open-source |
| Zookeepers | count | Number of live ZK nodes in the cluster | All versions | Open-source |
| ZookeepersAvailable | yes/no | Whether the ZK address is valid | All versions | Open-source |
| Brokers | count | Total number of brokers in the cluster | All versions | Open-source |
| BrokersAlive | count | Number of live brokers in the cluster | All versions | Open-source |
| BrokersNotAlive | count | Number of brokers in the cluster that are not alive | All versions | Open-source |
| Replicas | count | Total number of replicas in the cluster | All versions | Open-source |
| Topics | count | Total number of topics in the cluster | All versions | Open-source |
| Partitions | count | Total number of partitions in the cluster | All versions | Open-source |
| PartitionNoLeader | count | Number of partitions without a leader | All versions | Open-source |
| PartitionMinISR_S | count | Number of partitions below MinISR | All versions | Open-source |
| PartitionMinISR_E | count | Number of partitions equal to MinISR | All versions | Open-source |
| PartitionURP | count | Number of under-replicated partitions | All versions | Open-source |
| MessagesIn | msgs/s | Messages written to the cluster per second | All versions | Open-source |
| Messages | msgs | Total number of messages in the cluster | All versions | Open-source |
| LeaderMessages | msgs | Total number of messages on leaders in the cluster | All versions | Open-source |
| BytesIn | byte/s | Bytes written to the cluster per second | All versions | Open-source |
| BytesIn_min_5 | byte/s | Bytes written per second, 5-minute average | All versions | Open-source |
| BytesIn_min_15 | byte/s | Bytes written per second, 15-minute average | All versions | Open-source |
| BytesOut | byte/s | Bytes flowing out of the cluster per second | All versions | Open-source |
| BytesOut_min_5 | byte/s | Bytes flowing out per second, 5-minute average | All versions | Open-source |
| BytesOut_min_15 | byte/s | Bytes flowing out per second, 15-minute average | All versions | Open-source |
| Groups | count | Total number of Groups in the cluster | All versions | Open-source |
| GroupActives | count | Number of Active Groups in the cluster | All versions | Open-source |
| GroupEmptys | count | Number of Empty Groups in the cluster | All versions | Open-source |
| GroupRebalances | count | Number of Rebalancing Groups in the cluster | All versions | Open-source |
| GroupDeads | count | Number of Dead Groups in the cluster | All versions | Open-source |
| Alive | yes/no | Whether the cluster is alive (1: alive, 0: not) | All versions | Open-source |
| AclEnable | yes/no | Whether ACL is enabled (1: yes, 0: no) | All versions | Open-source |
| Acls | count | Number of ACLs | All versions | Open-source |
| AclUsers | count | Number of ACL KafkaUsers | All versions | Open-source |
| AclTopics | count | Number of ACL Topics | All versions | Open-source |
| AclGroups | count | Number of ACL Groups | All versions | Open-source |
| Jobs | count | Total number of cluster jobs | All versions | Open-source |
| JobsRunning | count | Total number of running cluster jobs | All versions | Open-source |
| JobsWaiting | count | Total number of waiting cluster jobs | All versions | Open-source |
| JobsSuccess | count | Total number of successful cluster jobs | All versions | Open-source |
| JobsFailed | count | Total number of failed cluster jobs | All versions | Open-source |
| LoadReBalanceEnable | yes/no | Whether rebalancing is enabled (1: yes, 0: no) | All versions | Enterprise |
| LoadReBalanceCpu | yes/no | Whether CPU load is balanced (1: yes, 0: no) | All versions | Enterprise |
| LoadReBalanceNwIn | yes/no | Whether BytesIn is balanced (1: yes, 0: no) | All versions | Enterprise |
| LoadReBalanceNwOut | yes/no | Whether BytesOut is balanced (1: yes, 0: no) | All versions | Enterprise |
| LoadReBalanceDisk | yes/no | Whether disk usage is balanced (1: yes, 0: no) | All versions | Enterprise |
### 3.3.2 Broker Metrics


@@ -0,0 +1,180 @@
![Logo](https://user-images.githubusercontent.com/71620349/185368586-aed82d30-1534-453d-86ff-ecfa9d0f35bd.png)
---
# 接入 ZK 带认证的 Kafka 集群
- [接入 ZK 带认证的 Kafka 集群](#接入-zk-带认证的-kafka-集群)
- [1、简要说明](#1简要说明)
- [2、支持 Digest-MD5 认证](#2支持-digest-md5-认证)
- [3、支持 Kerberos 认证](#3支持-kerberos-认证)
## 1、简要说明
- 1、当前 KnowStreaming 暂无页面可以直接配置 ZK 的认证信息,但是 KnowStreaming 的后端预留了 MySQL 的字段用于存储 ZK 的认证信息,用户可通过将认证信息存储至该字段,从而达到支持接入 ZK 带认证的 Kafka 集群。
&nbsp;
- 2、该字段位于 MySQL 库 ks_km_physical_cluster 表中的 zk_properties 字段,该字段的格式是:
```json
{
"openSecure": false, # 是否开启认证开启时配置为true
"sessionTimeoutUnitMs": 15000, # session超时时间
"requestTimeoutUnitMs": 5000, # request超时时间
"otherProps": { # 其他配置,认证信息主要配置在该位置
"zookeeper.sasl.clientconfig": "kafkaClusterZK1" # 例子,
}
}
```
- 3、实际生效的代码位置
```java
// 代码位置https://github.com/didi/KnowStreaming/blob/master/km-persistence/src/main/java/com/xiaojukeji/know/streaming/km/persistence/kafka/KafkaAdminZKClient.java
kafkaZkClient = KafkaZkClient.apply(
clusterPhy.getZookeeper(),
zkConfig.getOpenSecure(), // 是否开启认证开启时配置为true
zkConfig.getSessionTimeoutUnitMs(), // session超时时间
zkConfig.getRequestTimeoutUnitMs(), // request超时时间
5,
Time.SYSTEM,
"KS-ZK-ClusterPhyId-" + clusterPhyId,
"KS-ZK-SessionExpireListener-clusterPhyId-" + clusterPhyId,
Option.apply("KS-ZK-ClusterPhyId-" + clusterPhyId),
Option.apply(this.getZKConfig(clusterPhyId, zkConfig.getOtherProps())) // 其他配置,认证信息主要配置在该位置
);
```
- 4、SQL例子
```sql
update ks_km_physical_cluster set zk_properties='{ "openSecure": true, "otherProps": { "zookeeper.sasl.clientconfig": "kafkaClusterZK1" } }' where id=集群1ID;
```
- 5、zk_properties 字段不能覆盖所有的场景,所以实际使用过程中还可能需要在此基础之上,进行其他的调整。比如,`Digest-MD5 认证` 与 `Kerberos 认证` 都还需要修改启动脚本等。后续看能否通过修改 ZK 客户端的源码,使得 ZK 认证的相关配置能和 Kafka 认证的配置一样方便。
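上面第 4 点的 SQL 在执行前,可以先校验待写入的 zk_properties 是否为合法 JSON,避免写入后解析失败(示例草稿,假设机器上装有 python3,字段值按实际环境替换):

```shell
# 待写入 zk_properties 字段的 JSON(示例值,按实际环境替换)
ZK_PROPS='{ "openSecure": true, "otherProps": { "zookeeper.sasl.clientconfig": "kafkaClusterZK1" } }'

# 借助 python3 自带的 json.tool 校验 JSON 格式
if echo "$ZK_PROPS" | python3 -m json.tool > /dev/null 2>&1; then
  echo "zk_properties JSON valid"
else
  echo "zk_properties JSON invalid"
fi
```

校验通过后,再将该字符串填入 UPDATE 语句的 zk_properties 字段。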
---
## 2、支持 Digest-MD5 认证
1. 假设你有两个 Kafka 集群, 对应两个 ZK 集群;
2. 两个 ZK 集群的认证信息如下所示
```bash
# ZK1集群的认证信息,这里的 kafkaClusterZK1 可以是随意的名称,只需要和后续数据库的配置对应上即可。
kafkaClusterZK1 {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="zk1"
password="zk1-passwd";
};
# ZK2集群的认证信息,这里的 kafkaClusterZK2 可以是随意的名称,只需要和后续数据库的配置对应上即可。
kafkaClusterZK2 {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="zk2"
password="zk2-passwd";
};
```
3. 将这两个ZK集群的认证信息存储到 `/xxx/zk_client_jaas.conf` 文件中,文件中的内容如下所示:
```bash
kafkaClusterZK1 {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="zk1"
password="zk1-passwd";
};
kafkaClusterZK2 {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="zk2"
password="zk2-passwd";
};
```
4. 修改 KnowStreaming 的启动脚本
```bash
# `KnowStreaming/bin/startup.sh` 中的 47 行的 JAVA_OPT 中追加如下设置
-Djava.security.auth.login.config=/xxx/zk_client_jaas.conf
```
5. 修改 KnowStreaming 的表数据
```sql
# 这里的 kafkaClusterZK1 要和 /xxx/zk_client_jaas.conf 中的对应上
update ks_km_physical_cluster set zk_properties='{ "openSecure": true, "otherProps": { "zookeeper.sasl.clientconfig": "kafkaClusterZK1" } }' where id=集群1ID;
update ks_km_physical_cluster set zk_properties='{ "openSecure": true, "otherProps": { "zookeeper.sasl.clientconfig": "kafkaClusterZK2" } }' where id=集群2ID;
```
6. 重启 KnowStreaming
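上面第 3 步的 `/xxx/zk_client_jaas.conf` 也可以用如下脚本生成并做简单校验(示例草稿,`JAAS_FILE` 的路径为占位值,实际按 `/xxx/zk_client_jaas.conf` 放置):

```shell
# 示例:生成包含两个 ZK 集群认证信息的 JAAS 文件(路径为占位,按实际调整)
JAAS_FILE=./zk_client_jaas.conf

cat > "$JAAS_FILE" <<'EOF'
kafkaClusterZK1 {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="zk1"
    password="zk1-passwd";
};
kafkaClusterZK2 {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="zk2"
    password="zk2-passwd";
};
EOF

# 简单校验:文件中应包含两个 DigestLoginModule 登录段
grep -c 'DigestLoginModule' "$JAAS_FILE"
```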
---
## 3、支持 Kerberos 认证
**第一步:查看用户在ZK的ACL**
假设我们使用的用户是 `kafka` 这个用户。
- 1、查看 server.properties 的配置的 zookeeper.connect 的地址;
- 2、使用 `zkCli.sh -server zookeeper.connect的地址` 登录到 ZK 命令行;
- 3、在 ZK 命令行上执行命令 `getAcl /kafka` 查看 `kafka` 用户的权限;
此时,我们可以看到如下信息:
![watch_user_acl.png](assets/support_kerberos_zk/watch_user_acl.png)
`kafka` 用户需要的权限是 `cdrwa`。如果用户没有 `cdrwa` 权限的话,需要创建用户并授权,授权命令为:`setAcl`
**第二步:创建Kerberos的keytab并修改 KnowStreaming 主机**
- 1、在 Kerberos 的域中创建 `kafka/_HOST` 的 `keytab`,并导出。例如:`kafka/dbs-kafka-test-8-53`;
- 2、导出 keytab 后上传到安装 KS 的机器的 `/etc/keytab` 下;
- 3、在 KS 机器上,执行 `kinit -kt zookeeper.keytab kafka/dbs-kafka-test-8-53` 看是否能进行 `Kerberos` 登录;
- 4、可以登录后配置 `/opt/zookeeper.jaas` 文件,例子如下:
```bash
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=false
serviceName="zookeeper"
keyTab="/etc/keytab/zookeeper.keytab"
principal="kafka/dbs-kafka-test-8-53@XXX.XXX.XXX";
};
```
- 5、需要在 `KDC-Server` 和 `KnowStreaming` 的机器之间开通防火墙,并在 KS 机器的 `/etc/hosts` 中配置 `kdc-server` 的 `hostname`,同时将 `krb5.conf` 导入到 `/etc` 下;
**第三步:修改 KnowStreaming 的配置**
- 1、修改数据库开启ZK的认证
```sql
update ks_km_physical_cluster set zk_properties='{ "openSecure": true }' where id=集群1ID;
```
- 2、在 `KnowStreaming/bin/startup.sh` 中的47行的JAVA_OPT中追加如下设置
```bash
-Dsun.security.krb5.debug=true -Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/zookeeper.jaas
```
- 3、重启KS集群后,在 start.out 中看到如下信息,则证明Kerberos配置成功:
![success_1.png](assets/support_kerberos_zk/success_1.png)
![success_2.png](assets/support_kerberos_zk/success_2.png)
**第四步:补充说明**
- 1、多 Kafka 集群如果用的是同一个 Kerberos 域,只需在每个 `ZK` 中给 `kafka` 用户配置 `cdrwa` 权限即可,这样集群初始化时 `zkclient` 都可以完成认证;
- 2、多个Kerberos域暂时未适配

IDEA 更多具体的配置如下图所示:
`Know Streaming` 启动之后,可以访问一些信息,包括:

- 产品页面:http://localhost:8080 ,默认账号密码:`admin` / `admin2022_` 进行登录。`v3.0.0-beta.2` 版本开始,默认账号密码为 `admin` / `admin`。
- 接口地址:http://localhost:8080/swagger-ui.html ,查看后端提供的相关接口。

更多信息,详见:[KnowStreaming 官网](https://knowstreaming.com/)

![Logo](https://user-images.githubusercontent.com/71620349/185368586-aed82d30-1534-453d-86ff-ecfa9d0f35bd.png)
## 登录系统对接
[KnowStreaming](https://github.com/didi/KnowStreaming)(以下简称 KS)除了实现基于本地 MySQL 的用户登录认证方式外,还实现了基于 Ldap 的登录认证。
但登录认证系统并非仅此两种。因此,为了具有更好的拓展性,KS 支持自定义登录认证逻辑,可快速对接已有系统。
在 KS 中,我们将登录认证相关的一些文件放在 [km-extends](https://github.com/didi/KnowStreaming/tree/master/km-extends) 模块下的 [km-account](https://github.com/didi/KnowStreaming/tree/master/km-extends/km-account) 模块里。
本文将介绍 KS 如何快速对接自有的用户登录认证系统。
### 对接步骤
- 创建一个登陆认证类,实现[LogiCommon](https://github.com/didi/LogiCommon)的LoginExtend接口
- 将[application.yml](https://github.com/didi/KnowStreaming/blob/master/km-rest/src/main/resources/application.yml)中的spring.logi-security.login-extend-bean-name字段改为登陆认证类的bean名称
```Java
//LoginExtend 接口
public interface LoginExtend {
/**
* 验证登录信息,同时记住登录状态
*/
UserBriefVO verifyLogin(AccountLoginDTO var1, HttpServletRequest var2, HttpServletResponse var3) throws LogiSecurityException;
/**
* 登出接口,清除登录状态
*/
Result<Boolean> logout(HttpServletRequest var1, HttpServletResponse var2);
/**
* 检查是否已经登录
*/
boolean interceptorCheck(HttpServletRequest var1, HttpServletResponse var2, String var3, List<String> var4) throws IOException;
}
```
### 对接例子
我们以Ldap对接为例说明KS如何对接登录认证系统。
+ 编写[LdapLoginServiceImpl](https://github.com/didi/KnowStreaming/blob/master/km-extends/km-account/src/main/java/com/xiaojukeji/know/streaming/km/account/login/ldap/LdapLoginServiceImpl.java)类实现LoginExtend接口。
+ 设置[application.yml](https://github.com/didi/KnowStreaming/blob/master/km-rest/src/main/resources/application.yml)中的spring.logi-security.login-extend-bean-name=ksLdapLoginService。
完成上述两步即可实现KS对接Ldap认证登陆。
```Java
@Service("ksLdapLoginService")
public class LdapLoginServiceImpl implements LoginExtend {
@Override
public UserBriefVO verifyLogin(AccountLoginDTO loginDTO,
HttpServletRequest request,
HttpServletResponse response) throws LogiSecurityException {
String decodePasswd = AESUtils.decrypt(loginDTO.getPw());
// 去LDAP验证账密
LdapPrincipal ldapAttrsInfo = ldapAuthentication.authenticate(loginDTO.getUserName(), decodePasswd);
if (ldapAttrsInfo == null) {
// 用户不存在;正常来说如果有问题,上一步会直接抛出异常
throw new LogiSecurityException(ResultCode.USER_NOT_EXISTS);
}
// 进行业务相关操作
// 记录登录状态:Ldap 无法记录登录状态,因此由 KnowStreaming 进行记录
initLoginContext(request, response, loginDTO.getUserName(), user.getId());
return CopyBeanUtil.copy(user, UserBriefVO.class);
}
@Override
public Result<Boolean> logout(HttpServletRequest request, HttpServletResponse response) {
//清理cookie和session
return Result.buildSucc(Boolean.TRUE);
}
@Override
public boolean interceptorCheck(HttpServletRequest request, HttpServletResponse response, String requestMappingValue, List<String> whiteMappingValues) throws IOException {
// 检查是否已经登录
String userName = HttpRequestUtil.getOperator(request);
if (StringUtils.isEmpty(userName)) {
// 未登录,则进行登出
logout(request, response);
return Boolean.FALSE;
}
return Boolean.TRUE;
}
}
```
### 实现原理
因为登陆和登出整体实现逻辑是一致的,所以我们以登陆逻辑为例进行介绍。
+ 登陆原理
登陆走的是[LogiCommon](https://github.com/didi/LogiCommon)自带的LoginController。
```java
@RestController
public class LoginController {
//登陆接口
@PostMapping({"/login"})
public Result<UserBriefVO> login(HttpServletRequest request, HttpServletResponse response, @RequestBody AccountLoginDTO loginDTO) {
try {
//登陆认证
UserBriefVO userBriefVO = this.loginService.verifyLogin(loginDTO, request, response);
return Result.success(userBriefVO);
} catch (LogiSecurityException var5) {
return Result.fail(var5);
}
}
}
```
而登陆操作是调用LoginServiceImpl类来实现但是具体由哪个登陆认证类来执行登陆操作却由loginExtendBeanTool来指定。
```java
//LoginServiceImpl类
@Service
public class LoginServiceImpl implements LoginService {
//实现登陆操作但是具体哪个登陆类由loginExtendBeanTool来管理
public UserBriefVO verifyLogin(AccountLoginDTO loginDTO, HttpServletRequest request, HttpServletResponse response) throws LogiSecurityException {
return this.loginExtendBeanTool.getLoginExtendImpl().verifyLogin(loginDTO, request, response);
}
}
```
而loginExtendBeanTool类会优先去查找用户指定的登陆认证类如果失败则调用默认的登陆认证函数。
```java
//LoginExtendBeanTool类
@Component("logiSecurityLoginExtendBeanTool")
public class LoginExtendBeanTool {
public LoginExtend getLoginExtendImpl() {
LoginExtend loginExtend;
//先调用用户指定登陆类,如果失败则调用系统默认登陆认证
try {
//调用的类由spring.logi-security.login-extend-bean-name指定
loginExtend = this.getCustomLoginExtendImplBean();
} catch (UnsupportedOperationException var3) {
loginExtend = this.getDefaultLoginExtendImplBean();
}
return loginExtend;
}
}
```
+ 认证原理
认证的实现则比较简单向Spring中注册我们的拦截器PermissionInterceptor。
拦截器会调用LoginServiceImpl类的拦截方法,后续处理逻辑就和前面的登录逻辑是一致的。
```java
public class PermissionInterceptor implements HandlerInterceptor {
/**
* 拦截预处理
* @return boolean false:拦截, 不向下执行, true:放行
*/
@Override
public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
//免登录相关校验,如果验证通过,提前返回
//走拦截函数,进行普通用户验证
return loginService.interceptorCheck(request, response, classRequestMappingValue, whiteMappingValues);
}
}
```

![Logo](https://user-images.githubusercontent.com/71620349/185368586-aed82d30-1534-453d-86ff-ecfa9d0f35bd.png)

## 2、解决连接 JMX 失败

- [2、解决连接 JMX 失败](#2解决连接-jmx-失败)
  - [2.1、正异常现象](#21正异常现象)
  - [2.2、异因一:JMX未开启](#22异因一jmx未开启)
    - [2.2.1、异常现象](#221异常现象)
    - [2.2.2、解决方案](#222解决方案)
  - [2.3、异原二:JMX配置错误](#23异原二jmx配置错误)
    - [2.3.1、异常现象](#231异常现象)
    - [2.3.2、解决方案](#232解决方案)
  - [2.4、异因三:JMX开启SSL](#24异因三jmx开启ssl)
    - [2.4.1、异常现象](#241异常现象)
    - [2.4.2、解决方案](#242解决方案)
  - [2.5、异因四:连接了错误IP](#25异因四连接了错误ip)
    - [2.5.1、异常现象](#251异常现象)
    - [2.5.2、解决方案](#252解决方案)
  - [2.6、异因五:连接了错误端口](#26异因五连接了错误端口)
    - [2.6.1、异常现象](#261异常现象)
    - [2.6.2、解决方案](#262解决方案)

背景:Kafka 通过 JMX 服务进行运行指标的暴露,因此 `KnowStreaming` 会主动连接 Kafka 的 JMX 服务进行指标采集。如果我们发现页面缺少指标,那么可能原因之一是 Kafka 的 JMX 端口配置有问题,导致指标获取失败,进而页面没有数据。
### 2.1、正异常现象

**1、异常现象**

Broker 列表的 JMX PORT 列出现红色感叹号,则表示 JMX 连接存在异常。

<img src=http://img-ys011.didistatic.com/static/dc2img/do1_MLlLCfAktne4X6MBtBUd width="90%">

**2、正常现象**

Broker 列表的 JMX PORT 列出现绿色,则表示 JMX 连接正常。

<img src=http://img-ys011.didistatic.com/static/dc2img/do1_ymtDTCiDlzfrmSCez2lx width="90%">

---
### 2.2、异因一JMX未开启
#### 2.2.1、异常现象
broker列表的JMX Port值为-1对应Broker的JMX未开启。
<img src=http://img-ys011.didistatic.com/static/dc2img/do1_E1PD8tPsMeR2zYLFBFAu width="90%">
#### 2.2.2、解决方案
开启JMX开启流程如下
1、修改kafka的bin目录下面的`kafka-server-start.sh`文件
```bash
# 在这个下面增加JMX端口的配置
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
    export JMX_PORT=9999 # 增加这个配置, 这里的数值并不一定是要9999
fi
```
&nbsp;
2、修改kafka的bin目录下面的`kafka-run-class.sh`文件

```bash
# JMX settings
if [ -z "$KAFKA_JMX_OPTS" ]; then
  KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=当前机器的IP"
fi

# JMX port to use
if [ $JMX_PORT ]; then
  KAFKA_JMX_OPTS="$KAFKA_JMX_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT -Dcom.sun.management.jmxremote.rmi.port=$JMX_PORT"
fi
```
3、重启Kafka-Broker

---
### 2.3、异原二JMX配置错误
#### 2.3.1、异常现象
错误日志:
```log
# 错误一: 错误提示的是真实的IP这样的话基本就是JMX配置的有问题了。
2021-01-27 10:06:20.730 ERROR 50901 --- [ics-Thread-1-62] c.x.k.m.c.utils.jmx.JmxConnectorWrap : JMX connect exception, host:192.168.0.1 port:9999. java.rmi.ConnectException: Connection refused to host: 192.168.0.1; nested exception is:
# 错误二错误提示的是127.0.0.1这个IP这个是机器的hostname配置的可能有问题。
2021-01-27 10:06:20.730 ERROR 50901 --- [ics-Thread-1-62] c.x.k.m.c.utils.jmx.JmxConnectorWrap : JMX connect exception, host:127.0.0.1 port:9999. java.rmi.ConnectException: Connection refused to host: 127.0.0.1;; nested exception is:
```
#### 2.3.2、解决方案
开启JMX开启流程如下
1、修改kafka的bin目录下面的`kafka-server-start.sh`文件
```bash
# 在这个下面增加JMX端口的配置
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
export JMX_PORT=9999 # 增加这个配置, 这里的数值并不一定是要9999
fi
```
2、修改kafka的bin目录下面的`kafka-run-class.sh`文件
```bash
# JMX settings
if [ -z "$KAFKA_JMX_OPTS" ]; then
KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=当前机器的IP"
fi
# JMX port to use
if [ $JMX_PORT ]; then
KAFKA_JMX_OPTS="$KAFKA_JMX_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT -Dcom.sun.management.jmxremote.rmi.port=$JMX_PORT"
fi
```
3、重启Kafka-Broker。
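修改并重启后,可以先在 `KnowStreaming` 所在机器上做一次 TCP 连通性自检,快速区分是网络不通还是 JMX 本身配置有问题(示例草稿,基于 bash 的 `/dev/tcp`;主机 IP 与端口均为占位的假设值,按实际 Broker 配置替换):

```shell
# 检查指定主机端口是否可以建立 TCP 连接,输出 open / closed
check_jmx_port() {
  local host=$1 port=$2
  if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}

# 示例调用:主机与 JMX 端口为占位值
check_jmx_port 192.168.0.1 9999
```

若输出 closed,优先排查防火墙及 `-Djava.rmi.server.hostname` 的配置;若 open 但指标仍缺失,再排查认证、SSL 等原因。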
---
### 2.4、异因三JMX开启SSL
#### 2.4.1、异常现象
```log
# 连接JMX的日志中出现SSL认证失败的相关日志。TODO欢迎补充具体日志案例。
```
#### 2.4.2、解决方案
<img src=http://img-ys011.didistatic.com/static/dc2img/do1_kNyCi8H9wtHSRkWurB6S width="50%">
---
### 2.5、异因四连接了错误IP
#### 2.5.1、异常现象
Broker 配置了内外网而JMX在配置时可能配置了内网IP或者外网IP此时`KnowStreaming` 需要连接到特定网络的IP才可以进行访问。
比如Broker在ZK的存储结构如下所示我们期望连接到 `endpoints` 中标记为 `INTERNAL` 的地址,但是 `KnowStreaming` 却连接了 `EXTERNAL` 的地址。
```json
{
    "listener_security_protocol_map": {
        "EXTERNAL": "SASL_PLAINTEXT",
        "INTERNAL": "SASL_PLAINTEXT"
    },
    "endpoints": [
        "EXTERNAL://192.168.0.1:7092",
        "INTERNAL://192.168.0.2:7093"
    ],
    "jmx_port": 8099,
    "host": "192.168.0.1",
    "timestamp": "1627289710439",
    "port": -1,
    "version": 4
}
```

#### 2.5.2、解决方案
可以手动往`ks_km_physical_cluster`表的`jmx_properties`字段增加一个`useWhichEndpoint`字段,从而控制 `KnowStreaming` 连接到特定的JMX IP及PORT。
`jmx_properties`格式:
```json
{
"maxConn": 100, // KM对单台Broker的最大JMX连接数
"username": "xxxxx", //用户名,可以不填写
"password": "xxxx", // 密码,可以不填写
"openSSL": true, //开启SSL, true表示开启ssl, false表示关闭
"useWhichEndpoint": "EXTERNAL" //指定要连接的网络名称填写EXTERNAL就是连接endpoints里面的EXTERNAL地址
}
```
SQL例子:

```sql
UPDATE ks_km_physical_cluster SET jmx_properties='{ "maxConn": 10, "username": "xxxxx", "password": "xxxx", "openSSL": false, "useWhichEndpoint": "xxx" }' where id={xxx};
```
---
### 2.6、异因五:连接了错误端口
3.3.0 以上版本,或者是 master 分支最新代码,才具备该能力。
#### 2.6.1、异常现象
在 AWS 或者是容器上的 Kafka-Broker,使用同一个IP,但是外部服务想要去连接 JMX 端口时,需要进行映射。KnowStreaming 如果直接连接 ZK 上获取到的 JMX 端口,会连接失败,因此需要具备连接端口可配置的能力。
TODO补充具体的日志。
#### 2.6.2、解决方案
可以手动往`ks_km_physical_cluster`表的`jmx_properties`字段增加一个`specifiedJmxPortList`字段,从而控制 `KnowStreaming` 连接到特定的JMX PORT。
`jmx_properties`格式:
```json
{
"jmxPort": 2445, // 最低优先级使用的jmx端口
"maxConn": 100, // KM对单台Broker的最大JMX连接数
"username": "xxxxx", //用户名,可以不填写
"password": "xxxx", // 密码,可以不填写
"openSSL": true, //开启SSL, true表示开启ssl, false表示关闭
"useWhichEndpoint": "EXTERNAL", //指定要连接的网络名称填写EXTERNAL就是连接endpoints里面的EXTERNAL地址
"specifiedJmxPortList": [ // 配置最高优先使用的jmx端口
{
"serverId": "1", // kafka-broker的brokerId, 注意这个是字符串类型字符串类型的原因是要兼容connect的jmx端口的连接
"jmxPort": 1234 // 该 broker 所连接的jmx端口
},
{
"serverId": "2",
"jmxPort": 1234
},
]
}
```
SQL例子
```sql
UPDATE ks_km_physical_cluster SET jmx_properties='{ "maxConn": 10, "username": "xxxxx", "password": "xxxx", "openSSL": false , "specifiedJmxPortList": [{"serverId": "1", "jmxPort": 1234}] }' where id={xxx};
```
---

![Logo](https://user-images.githubusercontent.com/71620349/185368586-aed82d30-1534-453d-86ff-ecfa9d0f35bd.png)
# 页面无数据排查手册
- [页面无数据排查手册](#页面无数据排查手册)
- [1、集群接入错误](#1集群接入错误)
- [1.1、异常现象](#11异常现象)
- [1.2、解决方案](#12解决方案)
- [1.3、正常情况](#13正常情况)
- [2、JMX连接失败](#2jmx连接失败)
- [3、ElasticSearch问题](#3elasticsearch问题)
- [3.1、异因一:缺少索引](#31异因一缺少索引)
- [3.1.1、异常现象](#311异常现象)
- [3.1.2、解决方案](#312解决方案)
- [3.2、异因二:索引模板错误](#32异因二索引模板错误)
- [3.2.1、异常现象](#321异常现象)
- [3.2.2、解决方案](#322解决方案)
- [3.3、异因三集群Shard满](#33异因三集群shard满)
- [3.3.1、异常现象](#331异常现象)
- [3.3.2、解决方案](#332解决方案)
---
## 1、集群接入错误
### 1.1、异常现象
如下图所示,集群非空时,大概率为地址配置错误导致。
<img src=http://img-ys011.didistatic.com/static/dc2img/do1_BRiXBvqYFK2dxSF1aqgZ width="80%">
### 1.2、解决方案
接入集群时,依据提示的错误,进行相应的解决。例如:
<img src=http://img-ys011.didistatic.com/static/dc2img/do1_Yn4LhV8aeSEKX1zrrkUi width="50%">
### 1.3、正常情况
接入集群时,页面信息都自动正常出现,没有提示错误。
---
## 2、JMX连接失败
背景Kafka 通过 JMX 服务进行运行指标的暴露,因此 `KnowStreaming` 会主动连接 Kafka 的 JMX 服务进行指标采集。如果我们发现页面缺少指标,那么可能原因之一是 Kafka 的 JMX 端口配置的有问题导致指标获取失败,进而页面没有数据。
具体见同目录下的文档:[解决连接JMX失败](./%E8%A7%A3%E5%86%B3%E8%BF%9E%E6%8E%A5JMX%E5%A4%B1%E8%B4%A5.md)
---
## 3、ElasticSearch问题
**背景:**
`KnowStreaming` 将从 Kafka 中采集到的指标存储到 ES 中,如果 ES 存在问题,则也可能会导致页面出现无数据的情况。
**日志:**
`KnowStreaming` 读写 ES 相关日志,在 `logs/es/es.log` 中!
**注意:**
mac 系统在执行 curl 指令时可能报 zsh 错误,可参考以下操作:

```bash
# 1. 编辑 .zshrc 文件
vim ~/.zshrc
# 2. 在 .zshrc 中加入
setopt no_nomatch
# 3. 更新配置
source ~/.zshrc
```
---
### 3.1、异因一:缺少索引
#### 3.1.1、异常现象
报错信息
```log
# 日志位置 logs/es/es.log
com.didiglobal.logi.elasticsearch.client.model.exception.ESIndexNotFoundException: method [GET], host[http://127.0.0.1:9200], URI [/ks_kafka_broker_metric_2022-10-21,ks_kafka_broker_metric_2022-10-22/_search], status line [HTTP/1.1 404 Not Found]
```
`curl http://{ES的IP地址}:{ES的端口号}/_cat/indices/ks_kafka*` 查看KS索引列表发现没有索引。
#### 3.1.2、解决方案
执行 [ES索引及模版初始化](https://github.com/didi/KnowStreaming/blob/master/bin/init_es_template.sh) 脚本,来创建索引及模版。
---
### 3.2、异因二:索引模板错误
#### 3.2.1、异常现象
多集群列表有数据集群详情页图标无数据。查询KS索引模板列表发现不存在。
```bash
curl {ES的IP地址}:{ES的端口号}/_cat/templates/ks_kafka*?v&h=name
```
正常KS模板如下图所示。
<img src=http://img-ys011.didistatic.com/static/dc2img/do1_l79bPYSci9wr6KFwZDA6 width="90%">
#### 3.2.2、解决方案
删除KS索引模板和索引
```bash
curl -XDELETE {ES的IP地址}:{ES的端口号}/ks_kafka*
curl -XDELETE {ES的IP地址}:{ES的端口号}/_template/ks_kafka*
```
执行 [ES索引及模版初始化](https://github.com/didi/KnowStreaming/blob/master/bin/init_es_template.sh) 脚本,来创建索引及模版。
---
### 3.3、异因三集群Shard满
#### 3.3.1、异常现象
报错信息
```log
# 日志位置 logs/es/es.log
{"error":{"root_cause":[{"type":"validation_exception","reason":"Validation Failed: 1: this action would add [4] total shards, but this cluster currently has [1000]/[1000] maximum shards open;"}],"type":"validation_exception","reason":"Validation Failed: 1: this action would add [4] total shards, but this cluster currently has [1000]/[1000] maximum shards open;"},"status":400}
```
尝试手动创建索引失败。
```bash
#创建ks_kafka_cluster_metric_test索引的指令
curl -s -XPUT http://{ES的IP地址}:{ES的端口号}/ks_kafka_cluster_metric_test
```
#### 3.3.2、解决方案
ES索引的默认分片数量为1000达到数量以后索引创建失败。
+ 扩大ES索引数量上限执行指令
```
curl -XPUT -H"content-type:application/json" http://{ES的IP地址}:{ES的端口号}/_cluster/settings -d '
{
"persistent": {
"cluster": {
"max_shards_per_node":{索引上限默认为1000, 测试时可以将其调整为10000}
}
}
}'
```
执行 [ES索引及模版初始化](https://github.com/didi/KnowStreaming/blob/master/bin/init_es_template.sh) 脚本,来补全索引。

### 2.1.1、安装说明

- 以 `v3.0.0-beta.1` 版本为例进行部署;
- 以 CentOS-7 为例,系统基础配置要求 4C-8G;
- 部署完成后,可通过浏览器:`IP:PORT` 进行访问,默认端口是 `8080`,系统默认账号密码: `admin` / `admin2022_`;
- `v3.0.0-beta.2` 版本开始,默认账号密码为 `admin` / `admin`;
- 本文为单机部署,如需分布式部署,[请联系我们](https://knowstreaming.com/support-center)

**软件依赖**
| ElasticSearch | v7.6+ | 8060 |
| JDK | v8+ | - |
| CentOS | v6+ | - |
| Ubuntu | v16+ | - |

&nbsp;
```bash
# 在服务器中下载安装脚本, 该脚本中会在当前目录下重新安装MySQL。重装后的mysql密码存放在当前目录的mysql.password文件中。
wget https://s3-gzpu.didistatic.com/pub/knowstreaming/deploy_KnowStreaming-3.0.0-beta.1.sh

# 执行脚本
sh deploy_KnowStreaming.sh
```

```bash
# 将安装包下载到本地且传输到目标服务器
wget https://s3-gzpu.didistatic.com/pub/knowstreaming/KnowStreaming-3.0.0-beta.1-offline.tar.gz

# 解压安装包
tar -zxf KnowStreaming-3.0.0-beta.1-offline.tar.gz

# 执行安装脚本
sh deploy_KnowStreaming-offline.sh
```
### 2.1.3、容器部署

#### 2.1.3.1、Helm

**环境依赖**

- Kubernetes >= 1.14 , Helm >= 2.17.0
- 默认依赖全部安装:ElasticSearch(3 节点集群模式) + MySQL(单机) + KnowStreaming-manager + KnowStreaming-ui
- 使用已有的 ElasticSearch(7.6.x) 和 MySQL(5.7) ,只需调整 values.yaml 部分参数即可

**安装命令**
```bash
# 相关镜像在Docker Hub都可以下载
# 快速安装(NAMESPACE需要更改为已存在的,安装启动需要几分钟,初始化请稍等~)
helm install -n [NAMESPACE] [NAME] http://download.knowstreaming.com/charts/knowstreaming-manager-0.1.5.tgz

# 获取KnowStreaming前端ui的service. 默认nodeport方式.
# (http://nodeIP:nodeport,默认用户名密码admin/admin2022_)
# `v3.0.0-beta.2`版本开始(helm chart包版本0.1.4开始),默认账号密码为`admin` / `admin`

# 添加仓库
helm repo add knowstreaming http://download.knowstreaming.com/charts
# 拉取最新版本
helm pull knowstreaming/knowstreaming-manager
```
&nbsp;
#### 2.1.3.2、Docker Compose
**环境依赖**
- [Docker](https://docs.docker.com/engine/install/)
- [Docker Compose](https://docs.docker.com/compose/install/)
**安装命令**
```bash
# `v3.0.0-beta.2`版本开始(docker镜像为0.2.0版本开始),默认账号密码为`admin` / `admin`
# https://hub.docker.com/u/knowstreaming 在此处寻找最新镜像版本
# mysql与es可以使用自己搭建的服务,调整对应配置即可
# 复制docker-compose.yml到指定位置后执行下方命令即可启动
docker-compose up -d
```
**验证安装**
```shell
docker-compose ps
# 验证启动 - 状态为 UP 则表示成功
Name Command State Ports
----------------------------------------------------------------------------------------------------
elasticsearch-single /usr/local/bin/docker-entr ... Up 9200/tcp, 9300/tcp
knowstreaming-init /bin/bash /es_template_cre ... Up
knowstreaming-manager /bin/sh /ks-start.sh Up 80/tcp
knowstreaming-mysql /entrypoint.sh mysqld Up (health: starting) 3306/tcp, 33060/tcp
knowstreaming-ui /docker-entrypoint.sh ngin ... Up 0.0.0.0:80->80/tcp
# 稍等一分钟左右 knowstreaming-init 会退出表示es初始化完成可以访问页面
Name Command State Ports
-------------------------------------------------------------------------------------------
knowstreaming-init /bin/bash /es_template_cre ... Exit 0
knowstreaming-mysql /entrypoint.sh mysqld Up (healthy) 3306/tcp, 33060/tcp
```
**访问**
```http request
http://127.0.0.1:80/
```
**docker-compose.yml**
```yml
version: "2"
services:
# *不要调整knowstreaming-manager服务名称ui中会用到
knowstreaming-manager:
image: knowstreaming/knowstreaming-manager:latest
container_name: knowstreaming-manager
privileged: true
restart: always
depends_on:
- elasticsearch-single
- knowstreaming-mysql
expose:
- 80
command:
- /bin/sh
- /ks-start.sh
environment:
TZ: Asia/Shanghai
# mysql服务地址
SERVER_MYSQL_ADDRESS: knowstreaming-mysql:3306
# mysql数据库名
SERVER_MYSQL_DB: know_streaming
# mysql用户名
SERVER_MYSQL_USER: root
# mysql用户密码
SERVER_MYSQL_PASSWORD: admin2022_
# es服务地址
SERVER_ES_ADDRESS: elasticsearch-single:9200
# 服务JVM参数
JAVA_OPTS: -Xmx1g -Xms1g
# 对于kafka中ADVERTISED_LISTENERS填写的hostname可以通过该方式完成
# extra_hosts:
# - "hostname:x.x.x.x"
# 服务日志路径
# volumes:
# - /ks/manage/log:/logs
knowstreaming-ui:
image: knowstreaming/knowstreaming-ui:latest
container_name: knowstreaming-ui
restart: always
ports:
- '80:80'
environment:
TZ: Asia/Shanghai
depends_on:
- knowstreaming-manager
# extra_hosts:
# - "hostname:x.x.x.x"
elasticsearch-single:
image: docker.io/library/elasticsearch:7.6.2
container_name: elasticsearch-single
restart: always
expose:
- 9200
- 9300
# ports:
# - '9200:9200'
# - '9300:9300'
environment:
TZ: Asia/Shanghai
# es的JVM参数
ES_JAVA_OPTS: -Xms512m -Xmx512m
# 单节点配置,多节点集群参考 https://www.elastic.co/guide/en/elasticsearch/reference/7.6/docker.html#docker-compose-file
discovery.type: single-node
# 数据持久化路径
# volumes:
# - /ks/es/data:/usr/share/elasticsearch/data
# es初始化服务与manager使用同一镜像
# 首次启动es需初始化模版和索引,后续会自动创建
knowstreaming-init:
image: knowstreaming/knowstreaming-manager:latest
container_name: knowstreaming-init
depends_on:
- elasticsearch-single
command:
- /bin/bash
- /es_template_create.sh
environment:
TZ: Asia/Shanghai
# es服务地址
SERVER_ES_ADDRESS: elasticsearch-single:9200
knowstreaming-mysql:
image: knowstreaming/knowstreaming-mysql:latest
container_name: knowstreaming-mysql
restart: always
environment:
TZ: Asia/Shanghai
# root 用户密码
MYSQL_ROOT_PASSWORD: admin2022_
# 初始化时创建的数据库名称
MYSQL_DATABASE: know_streaming
# 通配所有host,可以访问远程
MYSQL_ROOT_HOST: '%'
expose:
- 3306
# ports:
# - '3306:3306'
# 数据持久化路径
# volumes:
# - /ks/mysql/data:/data/mysql
```

&nbsp;
```bash
# 下载安装包
wget https://s3-gzpu.didistatic.com/pub/knowstreaming/KnowStreaming-3.0.0-beta.1.tar.gz

# 解压安装包到指定目录
tar -zxf KnowStreaming-3.0.0-beta.1.tar.gz -C /data/

# 修改启动脚本并加入systemd管理
cd /data/KnowStreaming/

mysql -uroot -pDidi_km_678 know_streaming < ./init/sql/dml-ks-km.sql
mysql -uroot -pDidi_km_678 know_streaming < ./init/sql/dml-logi.sql

# 创建elasticsearch初始化数据
sh ./bin/init_es_template.sh

# 修改配置文件
vim ./conf/application.yml
```

![Logo](https://user-images.githubusercontent.com/71620349/185368586-aed82d30-1534-453d-86ff-ecfa9d0f35bd.png)

# `Know Streaming` 源码编译打包手册

## 1、环境信息

`windows7+`、`Linux`、`Mac`

**环境依赖**

- Maven 3.6.3 (后端)
- Node v12.20.0/v14.17.3 (前端)
- Java 8+ (后端)

具体见下面描述。

### 2.1、前后端合并打包

1. 下载源码;
2. 进入 `KS-KM` 工程目录,执行 `mvn -Prelease-package -Dmaven.test.skip=true clean install -U` 命令;
3. 打包命令执行完成后,会在 `km-dist/target` 目录下面生成一个 `KnowStreaming-*.tar.gz` 的安装包。
### 2.2、前端单独打包

1. 下载源码;
2. 跳转到 [前端打包构建文档](https://github.com/didi/KnowStreaming/blob/master/km-console/README.md) 按步骤进行。打包成功后,会在 `km-rest/src/main/resources` 目录下生成名为 `templates` 的前端静态资源包;
3. 如果上一步过程中报错,请查看 [FAQ](https://github.com/didi/KnowStreaming/blob/master/docs/user_guide/faq.md) 第 8.10 条;
### 2.3、后端单独打包

1. 下载源码;
2. 修改顶层 `pom.xml` ,去掉其中的 `km-console` 模块,如下所示;

```xml
<modules>
    <!-- <module>km-console</module>-->
    <module>km-rest</module>
    <module>km-dist</module>
</modules>
```

3. 执行 `mvn -U clean package -Dmaven.test.skip=true` 命令;
4. 执行完成之后,会在 `KS-KM/km-rest/target` 目录下面生成一个 `ks-km.jar` ,即为 KS 的后端部署的 Jar 包(也可以执行 `mvn -Prelease-package -Dmaven.test.skip=true clean install -U` ,生成的 tar 包也仅有后端服务的功能);

## 6.2、版本升级手册

注意:
- 如果想升级至具体版本,需要将你当前版本至你期望使用版本的变更统统执行一遍,然后才能正常使用。
- 如果中间某个版本没有升级信息,则表示该版本直接替换安装包即可从前一个版本升级至当前版本。

### 升级至 `master` 版本

暂无

---

### 升级至 `3.4.0` 版本
**配置变更**
```yaml
# 新增的配置
request: # 请求相关的配置
api-call: # api调用
timeout-unit-ms: 8000 # 超时时间默认8000毫秒
```
**SQL 变更**
```sql
-- 多集群管理权限2023-06-27新增
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2026', 'Connector-新增', '1593', '1', '2', 'Connector-新增', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2028', 'Connector-编辑', '1593', '1', '2', 'Connector-编辑', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2030', 'Connector-删除', '1593', '1', '2', 'Connector-删除', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2032', 'Connector-重启', '1593', '1', '2', 'Connector-重启', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2034', 'Connector-暂停&恢复', '1593', '1', '2', 'Connector-暂停&恢复', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2026', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2028', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2030', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2032', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2034', '0', 'know-streaming');
-- Multi-cluster management permissions, added 2023-06-29
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2036', 'Security-ACL新增', '1593', '1', '2', 'Security-ACL新增', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2038', 'Security-ACL删除', '1593', '1', '2', 'Security-ACL删除', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2040', 'Security-User新增', '1593', '1', '2', 'Security-User新增', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2042', 'Security-User删除', '1593', '1', '2', 'Security-User删除', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2044', 'Security-User修改密码', '1593', '1', '2', 'Security-User修改密码', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2036', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2038', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2040', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2042', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2044', '0', 'know-streaming');
-- Multi-cluster management permissions, added 2023-07-06
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2046', 'Group-删除', '1593', '1', '2', 'Group-删除', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2048', 'GroupOffset-Topic纬度删除', '1593', '1', '2', 'GroupOffset-Topic纬度删除', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2050', 'GroupOffset-Partition纬度删除', '1593', '1', '2', 'GroupOffset-Partition纬度删除', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2046', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2048', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2050', '0', 'know-streaming');
-- Multi-cluster management permissions, added 2023-07-18
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2052', 'Security-User查看密码', '1593', '1', '2', 'Security-User查看密码', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2052', '0', 'know-streaming');
```
---
### Upgrading to `3.3.0`
**SQL changes**
```sql
ALTER TABLE `logi_security_user`
CHANGE COLUMN `phone` `phone` VARCHAR(20) NOT NULL DEFAULT '' COMMENT 'mobile' ;
ALTER TABLE ks_kc_connector ADD `heartbeat_connector_name` varchar(512) DEFAULT '' COMMENT '心跳检测connector名称';
ALTER TABLE ks_kc_connector ADD `checkpoint_connector_name` varchar(512) DEFAULT '' COMMENT '进度确认connector名称';
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_MIRROR_MAKER_TOTAL_RECORD_ERRORS', '{\"value\" : 1}', 'MirrorMaker消息处理错误的次数', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_MIRROR_MAKER_REPLICATION_LATENCY_MS_MAX', '{\"value\" : 6000}', 'MirrorMaker消息复制最大延迟时间', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_MIRROR_MAKER_UNASSIGNED_TASK_COUNT', '{\"value\" : 20}', 'MirrorMaker未被分配的任务数量', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_MIRROR_MAKER_FAILED_TASK_COUNT', '{\"value\" : 10}', 'MirrorMaker失败状态的任务数量', 'admin');
-- Multi-cluster management permissions, added 2023-01-05
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2012', 'Topic-新增Topic复制', '1593', '1', '2', 'Topic-新增Topic复制', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2014', 'Topic-详情-取消Topic复制', '1593', '1', '2', 'Topic-详情-取消Topic复制', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2012', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2014', '0', 'know-streaming');
-- Multi-cluster management permissions, added 2023-01-18
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2016', 'MM2-新增', '1593', '1', '2', 'MM2-新增', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2018', 'MM2-编辑', '1593', '1', '2', 'MM2-编辑', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2020', 'MM2-删除', '1593', '1', '2', 'MM2-删除', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2022', 'MM2-重启', '1593', '1', '2', 'MM2-重启', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2024', 'MM2-暂停&恢复', '1593', '1', '2', 'MM2-暂停&恢复', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2016', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2018', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2020', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2022', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2024', '0', 'know-streaming');
DROP TABLE IF EXISTS `ks_ha_active_standby_relation`;
CREATE TABLE `ks_ha_active_standby_relation` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
`active_cluster_phy_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT '主集群ID',
`standby_cluster_phy_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT '备集群ID',
`res_name` varchar(192) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT '资源名称',
`res_type` int(11) NOT NULL DEFAULT '-1' COMMENT '资源类型0集群1镜像Topic2主备Topic',
`create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
`modify_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
`update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
PRIMARY KEY (`id`),
UNIQUE KEY `uniq_cluster_res` (`res_type`,`active_cluster_phy_id`,`standby_cluster_phy_id`,`res_name`),
UNIQUE KEY `uniq_res_type_standby_cluster_res_name` (`res_type`,`standby_cluster_phy_id`,`res_name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='HA主备关系表';
-- Drop the idx_cluster_phy_id index and add the idx_cluster_update_time index
ALTER TABLE `ks_km_kafka_change_record` DROP INDEX `idx_cluster_phy_id` ,
ADD INDEX `idx_cluster_update_time` (`cluster_phy_id` ASC, `update_time` ASC);
```
---
### Upgrading to `3.2.0`
**Configuration changes**
```yaml
# New configuration
spring:
  logi-job: # database configuration for the logi-job module that know-streaming depends on; by default keep it identical to know-streaming's database configuration
    enable: true # true enables job tasks, false disables them. KS can be deployed as two services, one serving frontend requests and one running job tasks, controlled by this field

# Thread-pool sizes
thread-pool:
  es:
    search: # ES query thread pool
      thread-num: 20 # pool size
      queue-size: 10000 # queue size

# Client pool sizes
client-pool:
  kafka-admin:
    client-cnt: 1 # number of KafkaAdminClient instances created per Kafka cluster

# ES client configuration
es:
  index:
    expire: 15 # index expiry in days; 15 means KS deletes indices older than 15 days
```
**SQL changes**
```sql
DROP TABLE IF EXISTS `ks_kc_connect_cluster`;
CREATE TABLE `ks_kc_connect_cluster` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'Connect集群ID',
`kafka_cluster_phy_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'Kafka集群ID',
`name` varchar(128) NOT NULL DEFAULT '' COMMENT '集群名称',
`group_name` varchar(128) NOT NULL DEFAULT '' COMMENT '集群Group名称',
`cluster_url` varchar(1024) NOT NULL DEFAULT '' COMMENT '集群地址',
`member_leader_url` varchar(1024) NOT NULL DEFAULT '' COMMENT 'URL地址',
`version` varchar(64) NOT NULL DEFAULT '' COMMENT 'connect版本',
`jmx_properties` text COMMENT 'JMX配置',
`state` tinyint(4) NOT NULL DEFAULT '1' COMMENT '集群使用的消费组状态,也表示集群状态:-1 Unknown,0 ReBalance,1 Active,2 Dead,3 Empty',
`create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '接入时间',
`update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
PRIMARY KEY (`id`),
UNIQUE KEY `uniq_id_group_name` (`id`,`group_name`),
UNIQUE KEY `uniq_name_kafka_cluster` (`name`,`kafka_cluster_phy_id`),
KEY `idx_kafka_cluster_phy_id` (`kafka_cluster_phy_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='Connect集群信息表';
DROP TABLE IF EXISTS `ks_kc_connector`;
CREATE TABLE `ks_kc_connector` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
`kafka_cluster_phy_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'Kafka集群ID',
`connect_cluster_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'Connect集群ID',
`connector_name` varchar(512) NOT NULL DEFAULT '' COMMENT 'Connector名称',
`connector_class_name` varchar(512) NOT NULL DEFAULT '' COMMENT 'Connector类',
`connector_type` varchar(32) NOT NULL DEFAULT '' COMMENT 'Connector类型',
`state` varchar(45) NOT NULL DEFAULT '' COMMENT '状态',
`topics` text COMMENT '访问过的Topics',
`task_count` int(11) NOT NULL DEFAULT '0' COMMENT '任务数',
`create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
`update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
PRIMARY KEY (`id`),
UNIQUE KEY `uniq_connect_cluster_id_connector_name` (`connect_cluster_id`,`connector_name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='Connector信息表';
DROP TABLE IF EXISTS `ks_kc_worker`;
CREATE TABLE `ks_kc_worker` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
`kafka_cluster_phy_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'Kafka集群ID',
`connect_cluster_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'Connect集群ID',
`member_id` varchar(512) NOT NULL DEFAULT '' COMMENT '成员ID',
`host` varchar(128) NOT NULL DEFAULT '' COMMENT '主机名',
`jmx_port` int(16) NOT NULL DEFAULT '-1' COMMENT 'Jmx端口',
`url` varchar(1024) NOT NULL DEFAULT '' COMMENT 'URL信息',
`leader_url` varchar(1024) NOT NULL DEFAULT '' COMMENT 'leaderURL信息',
`leader` int(16) NOT NULL DEFAULT '0' COMMENT '状态: 1是leader0不是leader',
`worker_id` varchar(128) NOT NULL COMMENT 'worker地址',
`create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
`update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
PRIMARY KEY (`id`),
UNIQUE KEY `uniq_cluster_id_member_id` (`connect_cluster_id`,`member_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='worker信息表';
DROP TABLE IF EXISTS `ks_kc_worker_connector`;
CREATE TABLE `ks_kc_worker_connector` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
`kafka_cluster_phy_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'Kafka集群ID',
`connect_cluster_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'Connect集群ID',
`connector_name` varchar(512) NOT NULL DEFAULT '' COMMENT 'Connector名称',
`worker_member_id` varchar(256) NOT NULL DEFAULT '',
`task_id` int(16) NOT NULL DEFAULT '-1' COMMENT 'Task的ID',
`state` varchar(128) DEFAULT NULL COMMENT '任务状态',
`worker_id` varchar(128) DEFAULT NULL COMMENT 'worker信息',
`create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
`update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
PRIMARY KEY (`id`),
UNIQUE KEY `uniq_relation` (`connect_cluster_id`,`connector_name`,`task_id`,`worker_member_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='Worker和Connector关系表';
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_CONNECTOR_FAILED_TASK_COUNT', '{\"value\" : 1}', 'connector失败状态的任务数量', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_CONNECTOR_UNASSIGNED_TASK_COUNT', '{\"value\" : 1}', 'connector未被分配的任务数量', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_CONNECT_CLUSTER_TASK_STARTUP_FAILURE_PERCENTAGE', '{\"value\" : 0.05}', 'Connect集群任务启动失败概率', 'admin');
```
---
### Upgrading to `v3.1.0`
**SQL changes**
```sql
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_ZK_BRAIN_SPLIT', '{ \"value\": 1} ', 'ZK 脑裂', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_ZK_OUTSTANDING_REQUESTS', '{ \"amount\": 100, \"ratio\":0.8} ', 'ZK Outstanding 请求堆积数', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_ZK_WATCH_COUNT', '{ \"amount\": 100000, \"ratio\": 0.8 } ', 'ZK WatchCount 数', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_ZK_ALIVE_CONNECTIONS', '{ \"amount\": 10000, \"ratio\": 0.8 } ', 'ZK 连接数', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_ZK_APPROXIMATE_DATA_SIZE', '{ \"amount\": 524288000, \"ratio\": 0.8 } ', 'ZK 数据大小(Byte)', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_ZK_SENT_RATE', '{ \"amount\": 500000, \"ratio\": 0.8 } ', 'ZK 发包数', 'admin');
```
### Upgrading to `v3.0.1`
**ES index templates**
```bash
# Adds the ks_kafka_zookeeper_metric index template.
# It can be created by re-running the bin/init_es_template.sh script.
# Template content:
PUT _template/ks_kafka_zookeeper_metric
{
"order" : 10,
"index_patterns" : [
"ks_kafka_zookeeper_metric*"
],
"settings" : {
"index" : {
"number_of_shards" : "10"
}
},
"mappings" : {
"properties" : {
"routingValue" : {
"type" : "text",
"fields" : {
"keyword" : {
"ignore_above" : 256,
"type" : "keyword"
}
}
},
"clusterPhyId" : {
"type" : "long"
},
"metrics" : {
"properties" : {
"AvgRequestLatency" : {
"type" : "double"
},
"MinRequestLatency" : {
"type" : "double"
},
"MaxRequestLatency" : {
"type" : "double"
},
"OutstandingRequests" : {
"type" : "double"
},
"NodeCount" : {
"type" : "double"
},
"WatchCount" : {
"type" : "double"
},
"NumAliveConnections" : {
"type" : "double"
},
"PacketsReceived" : {
"type" : "double"
},
"PacketsSent" : {
"type" : "double"
},
"EphemeralsCount" : {
"type" : "double"
},
"ApproximateDataSize" : {
"type" : "double"
},
"OpenFileDescriptorCount" : {
"type" : "double"
},
"MaxFileDescriptorCount" : {
"type" : "double"
}
}
},
"key" : {
"type" : "text",
"fields" : {
"keyword" : {
"ignore_above" : 256,
"type" : "keyword"
}
}
},
"timestamp" : {
"format" : "yyyy-MM-dd HH:mm:ss Z||yyyy-MM-dd HH:mm:ss||yyyy-MM-dd HH:mm:ss.SSS Z||yyyy-MM-dd HH:mm:ss.SSS||yyyy-MM-dd HH:mm:ss,SSS||yyyy/MM/dd HH:mm:ss||yyyy-MM-dd HH:mm:ss,SSS Z||yyyy/MM/dd HH:mm:ss,SSS Z||epoch_millis",
"type" : "date"
}
}
},
"aliases" : { }
}
```
**SQL changes**
```sql
DROP TABLE IF EXISTS `ks_km_zookeeper`;
CREATE TABLE `ks_km_zookeeper` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
`cluster_phy_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT '物理集群ID',
`host` varchar(128) NOT NULL DEFAULT '' COMMENT 'zookeeper主机名',
`port` int(16) NOT NULL DEFAULT '-1' COMMENT 'zookeeper端口',
`role` varchar(16) NOT NULL DEFAULT '' COMMENT '角色, leader follower observer',
`version` varchar(128) NOT NULL DEFAULT '' COMMENT 'zookeeper版本',
`status` int(16) NOT NULL DEFAULT '0' COMMENT '状态: 1存活0未存活11存活但是4字命令使用不了',
`create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
`update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
PRIMARY KEY (`id`),
UNIQUE KEY `uniq_cluster_phy_id_host_port` (`cluster_phy_id`,`host`, `port`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='Zookeeper信息表';
DROP TABLE IF EXISTS `ks_km_group`;
CREATE TABLE `ks_km_group` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
`cluster_phy_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT '集群id',
`name` varchar(192) COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT 'Group名称',
`member_count` int(11) unsigned NOT NULL DEFAULT '0' COMMENT '成员数',
`topic_members` text CHARACTER SET utf8 COMMENT 'group消费的topic列表',
`partition_assignor` varchar(255) CHARACTER SET utf8 NOT NULL COMMENT '分配策略',
`coordinator_id` int(11) NOT NULL COMMENT 'group协调器brokerId',
`type` int(11) NOT NULL COMMENT 'group类型 0consumer 1connector',
`state` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '' COMMENT '状态',
`create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
`update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
PRIMARY KEY (`id`),
UNIQUE KEY `uniq_cluster_phy_id_name` (`cluster_phy_id`,`name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='Group信息表';
```
### Upgrading to `v3.0.0`
**SQL changes**
```sql
ALTER TABLE `ks_km_physical_cluster`
ADD COLUMN `zk_properties` TEXT NULL COMMENT 'ZK配置' AFTER `jmx_properties`;
```
---
### Upgrading to `v3.0.0-beta.2`
**Configuration changes**
```yaml
# New configuration
spring:
  logi-security: # database configuration for the logi-security module that know-streaming depends on; by default keep it identical to know-streaming's database configuration
    login-extend-bean-name: logiSecurityDefaultLoginExtendImpl # bean name of the login Service in use; no change needed

# Thread-pool configuration: three new pool types were added to the task module
# to reduce interference between task types and pressure on logi-job's internal pools
thread-pool:
  task: # task-module configuration
    metrics: # metrics-collection task configuration
      thread-num: 18 # core threads of the metrics-collection pool
      queue-size: 180 # queue size of the metrics-collection pool
    metadata: # metadata-sync task configuration
      thread-num: 27 # core threads of the metadata-sync pool
      queue-size: 270 # queue size of the metadata-sync pool
    common: # all remaining tasks
      thread-num: 15 # core threads of the remaining-tasks pool
      queue-size: 150 # queue size of the remaining-tasks pool

# Removed configuration; the following is no longer used
thread-pool:
  task: # task-module configuration
    heaven: # collection task configuration
      thread-num: 20 # core threads of the collection pool
      queue-size: 1000 # queue size of the collection pool
```
**SQL changes**
```sql
-- Multi-cluster management permissions, added 2022-09-06
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2000', '多集群管理查看', '1593', '1', '2', '多集群管理查看', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2002', 'Topic-迁移副本', '1593', '1', '2', 'Topic-迁移副本', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2004', 'Topic-扩缩副本', '1593', '1', '2', 'Topic-扩缩副本', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2006', 'Cluster-LoadReBalance-周期均衡', '1593', '1', '2', 'Cluster-LoadReBalance-周期均衡', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2008', 'Cluster-LoadReBalance-立即均衡', '1593', '1', '2', 'Cluster-LoadReBalance-立即均衡', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2010', 'Cluster-LoadReBalance-设置集群规格', '1593', '1', '2', 'Cluster-LoadReBalance-设置集群规格', '0', 'know-streaming');
-- System management permissions, added 2022-09-06
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('3000', '系统管理查看', '1595', '1', '2', '系统管理查看', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2000', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2002', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2004', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2006', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2008', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2010', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '3000', '0', 'know-streaming');
-- Widen column lengths
ALTER TABLE `logi_security_oplog`
CHANGE COLUMN `operator_ip` `operator_ip` VARCHAR(64) NOT NULL COMMENT '操作者ip' ,
CHANGE COLUMN `operator` `operator` VARCHAR(64) NULL DEFAULT NULL COMMENT '操作者账号' ,
CHANGE COLUMN `operate_page` `operate_page` VARCHAR(64) NOT NULL DEFAULT '' COMMENT '操作页面' ,
CHANGE COLUMN `operate_type` `operate_type` VARCHAR(64) NOT NULL COMMENT '操作类型' ,
CHANGE COLUMN `target_type` `target_type` VARCHAR(64) NOT NULL COMMENT '对象分类' ,
CHANGE COLUMN `target` `target` VARCHAR(1024) NOT NULL COMMENT '操作对象' ,
CHANGE COLUMN `operation_methods` `operation_methods` VARCHAR(64) NOT NULL DEFAULT '' COMMENT '操作方式' ;
```
---
### Upgrading to `v3.0.0-beta.1`
**SQL changes**
1. A listener-information field was added to the `ks_km_broker` table.
2. A default value of '' was set for the operation_methods field of the `logi_security_oplog` table.
Run the following SQL to update the tables:
```sql
ALTER TABLE `ks_km_broker`
...
ALTER COLUMN `operation_methods` set default '';
```
---
### Upgrading from `2.x` to `v3.0.0-beta.0`
**Upgrade steps:**
...
```sql
UPDATE ks_km_topic
INNER JOIN
    (SELECT
        topic.cluster_id   AS cluster_id,
        topic.topic_name   AS topic_name,
        topic.description  AS description
    FROM topic WHERE description != ''
    ) AS t
ON  ks_km_topic.cluster_phy_id = t.cluster_id
AND ks_km_topic.topic_name     = t.topic_name
AND ks_km_topic.id             > 0
SET ks_km_topic.description = t.description;
```
![Logo](https://user-images.githubusercontent.com/71620349/185368586-aed82d30-1534-453d-86ff-ecfa9d0f35bd.png)

# FAQ
- [FAQ](#faq)
  - [1. Which Kafka versions are supported?](#1-which-kafka-versions-are-supported)
  - [2. What are the differences between the 2.x and 3.0 versions?](#2-what-are-the-differences-between-the-2x-and-30-versions)
  - [3. No data for page traffic metrics](#3-no-data-for-page-traffic-metrics)
  - [4. How to fix `Jmx` connection failures?](#4-how-to-fix-jmx-connection-failures)
  - [5. Is there API documentation?](#5-is-there-api-documentation)
  - [6. Why does a deleted Topic reappear after a while?](#6-why-does-a-deleted-topic-reappear-after-a-while)
  - [7. How to call the APIs without logging in](#7-how-to-call-the-apis-without-logging-in)
  - [8. Specified key was too long; max key length is 767 bytes](#8-specified-key-was-too-long-max-key-length-is-767-bytes)
  - [9. ESIndexNotFoundException errors](#9-esindexnotfoundexception-errors)
  - [10. km-console build failures](#10-km-console-build-failures)
  - [11. Why is there no build or hot-reload output when running `npm run start` in the `km-console` directory? How do I start a single app?](#11-why-is-there-no-build-or-hot-reload-output-when-running-npm-run-start-in-the-km-console-directory-how-do-i-start-a-single-app)
  - [12. Permission recognition fails](#12-permission-recognition-fails)
  - [13. Connecting a Kerberos-enabled Kafka cluster](#13-connecting-a-kerberos-enabled-kafka-cluster)
  - [14. LDAP integration configuration](#14-ldap-integration-configuration)
  - [15. Using Testcontainers in tests](#15-using-testcontainers-in-tests)
  - [16. What to do when the JMX connection fails](#16-what-to-do-when-the-jmx-connection-fails)
  - [17. No data on the ZK monitoring page](#17-no-data-on-the-zk-monitoring-page)
  - [18. Startup fails with NoClassDefFoundError](#18-startup-fails-with-noclassdeffounderror)
  - 19. Metrics not displayed when deployed with ElasticSearch 8.0+
## 1. Which Kafka versions are supported?
- Kafka versions 0.10 and later are supported;
- Kafka clusters running in ZK mode and in Raft mode are both supported;
&nbsp;
## 2. What are the differences between the 2.x and 3.0 versions?
**A brand-new design philosophy**
…
&nbsp;
## 3. No data for page traffic metrics
- 1. `Broker JMX` is not enabled correctly
…
&nbsp;
## 4. How to fix `Jmx` connection failures?
- See [Jmx connection configuration & troubleshooting](https://doc.knowstreaming.com/product/9-attachment#91jmx-%E8%BF%9E%E6%8E%A5%E5%A4%B1%E8%B4%A5%E9%97%AE%E9%A2%98%E8%A7%A3%E5%86%B3).
&nbsp;
## 5. Is there API documentation?
`KnowStreaming` documents its APIs with Swagger; once the KnowStreaming service is started, they are available at the address below.
Swagger-API address: [http://IP:PORT/swagger-ui.html#/](http://IP:PORT/swagger-ui.html#/)
&nbsp;
## 6. Why does a deleted Topic reappear after a while?
**Cause:**
…
&nbsp;
## 7. How to call the APIs without logging in
Step 1: when calling the API, add the following information to the request header:
…
One more point to note: a bypassed user can still only call the APIs they are authorized for; an ordinary user, for example, can only call ordinary APIs, not operator-level ones.
## 8. Specified key was too long; max key length is 767 bytes
**Cause:** the default value of the InnoDB parameter innodb_large_prefix differs across versions: it is OFF in 5.6 and ON in 5.7.
With InnoDB, innodb_large_prefix=OFF, and an Antelope row format (REDUNDANT or COMPACT), the maximum index-key prefix length is 767 bytes; with innodb_large_prefix=ON and a Barracuda row format (DYNAMIC or COMPRESSED), it is 3072 bytes.
**Solutions:**
- Reduce the varchar size to below 767/4 = 191 characters.
- Switch the character set to latin1 (one character = one byte).
- Enable innodb_large_prefix, change the default row format (innodb_file_format) to Barracuda, and set row_format=dynamic.
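The third option can be sketched as a small script that generates the statements to run. This is a sketch only: `my_table` is a placeholder name, and the two `SET GLOBAL` lines apply to MySQL 5.6-era servers (5.7 already defaults to these values).

```shell
# Write the statements for enabling long index-key prefixes to a file.
# 'my_table' is a placeholder - substitute the table that triggered the error.
cat > fix_long_key.sql <<'EOF'
SET GLOBAL innodb_large_prefix = ON;
SET GLOBAL innodb_file_format = Barracuda;
ALTER TABLE my_table ROW_FORMAT=DYNAMIC;
EOF
echo "wrote fix_long_key.sql"
```

Apply it with something like `mysql -u root -p your_db < fix_long_key.sql` after reviewing the statements.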
## 9. ESIndexNotFoundException errors
**Cause:** the ES index templates have not been created.
**Solution:** run the init_es_template.sh script to create the ES index templates.
## 10. km-console build failures
First, **make sure you are on the latest version**; see [Tags](https://github.com/didi/KnowStreaming/tags) for the version list. If you are not, upgrade and check whether the problem persists.
The most common cause is that the project dependencies were not installed properly, so the build fails on missing dependencies. Check whether the following folders exist and are non-empty:
```
KnowStreaming/km-console/node_modules
KnowStreaming/km-console/packages/layout-clusters-fe/node_modules
KnowStreaming/km-console/packages/config-manager-fe/node_modules
```
If a `node_modules` directory is missing or empty, the dependencies were not installed successfully. In that case:
1. Manually delete the three folders above (if present).
2. If you previously built `km-console` via `mvn install`, re-run that command from the project root (KnowStreaming) and watch for errors; if any occur, see step 4.
3. If you build the frontend standalone (by running `npm run build` directly), enter the `KnowStreaming/km-console` directory and run the steps below (make sure you are using `node v12`):
   a. Run `npm run i`; if there are errors, see step 4.
   b. Run `npm run build`; if there are errors, see step 4.
4. Contact us for help. Please provide the following information so we can locate the problem quickly, for example:
```
OS: Mac
Terminal: bash
Node version: v12.22.12
Steps to reproduce: 1. -> 2.
Error screenshot:
```
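The cleanup in steps 1-3 can be scripted. This is a sketch under the assumption of a default checkout layout; `KS_ROOT` is a placeholder for the path to your KnowStreaming checkout.

```shell
# Remove possibly-broken dependency folders (the three node_modules
# directories listed above), then reinstall from a clean state.
KS_ROOT=${KS_ROOT:-./KnowStreaming}
for d in \
  "$KS_ROOT/km-console/node_modules" \
  "$KS_ROOT/km-console/packages/layout-clusters-fe/node_modules" \
  "$KS_ROOT/km-console/packages/config-manager-fe/node_modules"; do
  rm -rf "$d" && echo "removed $d"
done
# Reinstall and rebuild with node v12 (uncomment to run):
# (cd "$KS_ROOT/km-console" && npm run i && npm run build)
```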
## 11. Why is there no build or hot-reload output when running `npm run start` in the `km-console` directory? How do I start a single app?
Run `npm run start` inside the specific app, e.g. `cd packages/layout-clusters-fe` and then `npm run start`.
Once an app has started, view it through the base app (the base app, layout-clusters-fe, must be running).
## 12. Permission recognition fails
1. Log in to KnowStreaming as admin, go to System Management - User Management - Role Management - Add Role, and check whether the page renders correctly.
<img src="http://img-ys011.didistatic.com/static/dc2img/do1_gwGfjN9N92UxzHU8dfzr" width = "400" >
2. Check the response of the '/logi-security/api/v1/permission/tree' API; garbled output looks like the image below.
![API response](http://img-ys011.didistatic.com/static/dc2img/do1_jTxBkwNGU9vZuYQQbdNw)
3. Check whether the Chinese text in the logi_security_permission table is garbled.
If the checks above all show garbled text, the permission-recognition failure is caused by database mojibake.
+ Cause: the database encoding does not match the provided scripts, so the stored data became garbled and permission recognition fails.
+ Solution: empty the database, set the database character set to utf8, then re-run the [dml-logi.sql](https://github.com/didi/KnowStreaming/blob/master/km-dist/init/sql/dml-logi.sql) script to re-import the data.
## 13. Connecting a Kerberos-enabled Kafka cluster
1. Install the Kerberos client on the machine where KnowStreaming is deployed;
2. Replace the /etc/krb5.conf configuration file;
3. Copy the Kafka keytab file to a directory on that machine;
4. Provide the authentication configuration when connecting the cluster, filling in the values for your environment:
```json
{
"security.protocol": "SASL_PLAINTEXT",
"sasl.mechanism": "GSSAPI",
"sasl.jaas.config": "com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true keyTab=\"/etc/keytab/kafka.keytab\" storeKey=true useTicketCache=false principal=\"kafka/kafka@TEST.COM\";",
"sasl.kerberos.service.name": "kafka"
}
```
## 14. LDAP integration configuration
```yaml
# Add the following to application.yml; adjust the values to your environment
account:
  ldap:
    url: ldap://127.0.0.1:8080/
    basedn: DC=senz,DC=local
    factory: com.sun.jndi.ldap.LdapCtxFactory
    filter: sAMAccountName
    security:
      authentication: simple
      principal: CN=search,DC=senz,DC=local
      credentials: xxxxxxx
    auth-user-registration: false # whether to register the user in MySQL; false by default
    auth-user-registration-role: 1677 # 1677 is the super-admin role id; to grant an ordinary role by default, create one in KS

# Modify the following in application.yml
spring:
  logi-security:
    login-extend-bean-name: ksLdapLoginService # use the LDAP login service
```
## 15. Using Testcontainers in tests
1. A Docker runtime is required: [Testcontainers supported environments](https://www.testcontainers.org/supported_docker_environment/)
2. If Docker is not available locally, you can [access a remote Docker daemon](https://docs.docker.com/config/daemon/remote-access/); see the [Testcontainers configuration notes](https://www.testcontainers.org/features/configuration/#customizing-docker-host-detection)
## 16. What to do when the JMX connection fails
See [Fixing JMX connection failures](../dev_guide/%E8%A7%A3%E5%86%B3%E8%BF%9E%E6%8E%A5JMX%E5%A4%B1%E8%B4%A5.md) for details.
## 17. No data on the ZK monitoring page
**Symptom:**
The ZooKeeper cluster is healthy, but all monitoring metrics on the KS ZK page are empty, and the `KnowStreaming` log_error.log reports:
```vim
[MetricCollect-Shard-0-8-thread-1] ERROR class=c.x.k.s.k.c.s.h.c.z.HealthCheckZookeeperService||method=checkWatchCount||param=ZookeeperParam(zkAddressList=[Tuple{v1=192.168.xxx.xx, v2=2181}, Tuple{v1=192.168.xxx.xx, v2=2181}, Tuple{v1=192.168.xxx.xx, v2=2181}], zkConfig=null)||config=HealthAmountRatioConfig(amount=100000, ratio=0.8)||result=Result{message='mntr is not executed because it is not in the whitelist.
', code=8031, data=null}||errMsg=get metrics failed, may be collect failed or zk mntr command not in whitelist.
2023-04-23 14:39:07.234 [MetricCollect-Shard-0-8-thread-1] ERROR class=c.x.k.s.k.c.s.h.checker.AbstractHeal
```
The cause is clear: ZooKeeper's four-letter commands need to be enabled. Add the following to the `zoo.cfg` configuration file:
```
4lw.commands.whitelist=mntr,stat,ruok,envi,srvr,envi,cons,conf,wchs,wchp
```
We recommend enabling at least the commands above; alternatively, you can enable all of them:
```
4lw.commands.whitelist=*
```
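Once the whitelist is in place, you can confirm that `mntr` answers by sending the four letters over a raw TCP connection (for example `echo mntr | nc <zk-host> 2181`). The reply is one `key<TAB>value` pair per line, which is the format the metric collector consumes. The sketch below is an illustration only (not KnowStreaming code); the sample response is abbreviated and hypothetical.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Parses the tab-separated key/value lines returned by ZooKeeper's `mntr`
// four-letter-word command. If mntr is not whitelisted, the server instead
// answers with the "mntr is not executed because it is not in the whitelist."
// message quoted in the error log above.
public class MntrParser {
    public static Map<String, String> parse(String raw) {
        Map<String, String> metrics = new LinkedHashMap<>();
        for (String line : raw.split("\n")) {
            String[] kv = line.trim().split("\t", 2); // key <TAB> value
            if (kv.length == 2) {
                metrics.put(kv[0], kv[1]);
            }
        }
        return metrics;
    }

    public static void main(String[] args) {
        // abbreviated sample of an mntr response
        String sample = "zk_version\t3.6.3\nzk_watch_count\t42\nzk_server_state\tfollower";
        System.out.println(parse(sample).get("zk_watch_count")); // 42
    }
}
```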
## 18. Startup fails with NoClassDefFoundError
**Symptom:**
```log
# Startup fails with: nested exception is java.lang.NoClassDefFoundError: Could not initialize class com.didiglobal.logi.job.core.WorkerSingleton$Singleton
2023-08-11 22:54:29.842 [main] ERROR class=org.springframework.boot.SpringApplication||Application run failed
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'quartzScheduler' defined in class path resource [com/didiglobal/logi/job/LogIJobAutoConfiguration.class]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [com.didiglobal.logi.job.core.Scheduler]: Factory method 'quartzScheduler' threw exception; nested exception is java.lang.NoClassDefFoundError: Could not initialize class com.didiglobal.logi.job.core.WorkerSingleton$Singleton
at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:657)
```
**Cause:**
1. `Logi-Job`, a dependency of `KnowStreaming`, fails to initialize `WorkerSingleton$Singleton`.
2. While initializing, `WorkerSingleton$Singleton` queries some operating-system information; if that query throws an exception, class initialization fails.
**Workaround:**
It is hard to say when this will be fixed in `Logi-Job`. In our testing, `KnowStreaming` generally runs fine on `Windows`, `Mac`, and `CentOS`,
so if possible, deploy `KnowStreaming` on one of those systems for now.
If startup also fails on `Windows`, `Mac`, or `CentOS`, retry 2-3 times to see whether it keeps failing, or try another machine.
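The second point of the cause can be reproduced in isolation. In the sketch below (all names are hypothetical; only the failure mechanism mirrors `WorkerSingleton$Singleton`), a static initializer throws, so the first access fails with `ExceptionInInitializerError` and every later access to the same class fails with `NoClassDefFoundError: Could not initialize class ...`, which is exactly the shape of the error in the startup log.

```java
import java.util.List;

// A static initializer that throws leaves the class in an erroneous state:
// first touch -> ExceptionInInitializerError, every later touch -> NoClassDefFoundError.
class OsInfoHolder {
    static String value;
    static {
        value = readOsInfo();
    }
    static String readOsInfo() {
        // stand-in for the OS-information query that fails on some systems
        throw new IllegalStateException("failed to read OS info");
    }
}

public class NoClassDefFoundDemo {
    static String probe() {
        try {
            return OsInfoHolder.value; // triggers (or re-checks) class initialization
        } catch (Throwable t) {
            return t.getClass().getSimpleName();
        }
    }

    // Probe twice at load time so the result is stable no matter who calls first.
    public static final List<String> FAILURES = List.of(probe(), probe());

    public static void main(String[] args) {
        System.out.println(FAILURES); // [ExceptionInInitializerError, NoClassDefFoundError]
    }
}
```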
## 19. Metrics not displayed when deployed with Elasticsearch 8.0+
**Symptom:**
```log
Warnings: [299 Elasticsearch-8.9.1-a813d015ef1826148d9d389bd1c0d781c6e349f0 "Legacy index templates are deprecated in favor of composable templates."]
```
**Cause:**
1. ES 8.x handles index templates differently from 7.x; legacy templates are deprecated, and the `/_index_template` endpoint should be used to manage templates.
2. The ES Java client behaves oddly against such clusters: reads come back empty.
**Fix:**
Either replace every `/_template` in the `es_template_create.sh` script with `/_index_template` and rerun it, or run the `init_es_template.sh` script to create the ES index templates.
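The first fix is a plain string substitution over the script. A minimal sketch of that rewrite follows; the `curl` line is a hypothetical example of what a line in `es_template_create.sh` might look like, not a quote from the actual script.

```java
// Rewrites legacy template endpoints to the ES 8 composable-template endpoint.
// Replacing "/_template" with "/_index_template" is safe to run twice, because
// the rewritten path no longer contains the "/_template" substring.
public class EsTemplatePathFix {
    public static String rewrite(String scriptLine) {
        return scriptLine.replace("/_template", "/_index_template");
    }

    public static void main(String[] args) {
        // hypothetical line from es_template_create.sh
        String line = "curl -s -XPUT http://localhost:9200/_template/my_template -d @template.json";
        System.out.println(rewrite(line));
        // curl -s -XPUT http://localhost:9200/_index_template/my_template -d @template.json
    }
}
```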

---

```diff
@@ -11,7 +11,7 @@
 下面是用户第一次使用我们产品的典型体验路径:
-![text](http://img-ys011.didistatic.com/static/dc2img/do1_YehqxqmsVaqU5gf3XphI)
+![text](http://img-ys011.didistatic.com/static/dc2img/do1_qgqPsAY46sZeBaPUCwXY)
 ## 5.3、常用功能
```
---

```diff
@@ -5,13 +5,13 @@
     <modelVersion>4.0.0</modelVersion>
     <groupId>com.xiaojukeji.kafka</groupId>
     <artifactId>km-biz</artifactId>
-    <version>${km.revision}</version>
+    <version>${revision}</version>
     <packaging>jar</packaging>

     <parent>
         <artifactId>km</artifactId>
         <groupId>com.xiaojukeji.kafka</groupId>
-        <version>${km.revision}</version>
+        <version>${revision}</version>
     </parent>

     <properties>
@@ -29,6 +29,11 @@
             <artifactId>km-core</artifactId>
             <version>${project.parent.version}</version>
         </dependency>
+        <dependency>
+            <groupId>com.xiaojukeji.kafka</groupId>
+            <artifactId>km-rebalance</artifactId>
+            <version>${project.parent.version}</version>
+        </dependency>

         <!-- spring -->
         <dependency>
@@ -62,10 +67,6 @@
             <groupId>commons-lang</groupId>
             <artifactId>commons-lang</artifactId>
         </dependency>
-        <dependency>
-            <groupId>junit</groupId>
-            <artifactId>junit</artifactId>
-        </dependency>
         <dependency>
             <groupId>commons-codec</groupId>
```
---

**New file:**

```java
package com.xiaojukeji.know.streaming.km.biz.cluster;

import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterConnectorsOverviewDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.connect.ConnectStateVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.connector.ClusterConnectorOverviewVO;

/**
 * Kafka集群Connector概览
 */
public interface ClusterConnectorsManager {
    PaginationResult<ClusterConnectorOverviewVO> getClusterConnectorsOverview(Long clusterPhyId, ClusterConnectorsOverviewDTO dto);

    ConnectStateVO getClusterConnectorsState(Long clusterPhyId);
}
```
---

**New file:**

```java
package com.xiaojukeji.know.streaming.km.biz.cluster;

import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterZookeepersOverviewDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.vo.zookeeper.ClusterZookeepersOverviewVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.zookeeper.ClusterZookeepersStateVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.zookeeper.ZnodeVO;

/**
 * 多集群总体状态
 */
public interface ClusterZookeepersManager {
    Result<ClusterZookeepersStateVO> getClusterPhyZookeepersState(Long clusterPhyId);

    PaginationResult<ClusterZookeepersOverviewVO> getClusterPhyZookeepersOverview(Long clusterPhyId, ClusterZookeepersOverviewDTO dto);

    Result<ZnodeVO> getZnodeVO(Long clusterPhyId, String path);
}
```
---

```diff
@@ -1,10 +1,15 @@
 package com.xiaojukeji.know.streaming.km.biz.cluster;

+import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhysHealthState;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhysState;
 import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.MultiClusterDashboardDTO;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
+import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
+import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.ClusterPhyBaseVO;
 import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.ClusterPhyDashboardVO;
+
+import java.util.List;

 /**
  * 多集群总体状态
  */
@@ -15,10 +20,14 @@ public interface MultiClusterPhyManager {
      */
     ClusterPhysState getClusterPhysState();

+    ClusterPhysHealthState getClusterPhysHealthState();
+
     /**
      * 查询多集群大盘
      * @param dto 分页信息
     * @return
      */
     PaginationResult<ClusterPhyDashboardVO> getClusterPhysDashboard(MultiClusterDashboardDTO dto);
+
+    Result<List<ClusterPhyBaseVO>> getClusterPhysBasic();
 }
```
---

```diff
@@ -6,6 +6,8 @@ import com.xiaojukeji.know.streaming.km.biz.cluster.ClusterBrokersManager;
 import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterBrokersOverviewDTO;
 import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationSortDTO;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.broker.Broker;
+import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
+import com.xiaojukeji.know.streaming.km.common.bean.entity.config.JmxConfig;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.kafkacontroller.KafkaController;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.BrokerMetrics;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
@@ -14,7 +16,10 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.topic.Topic;
 import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.res.ClusterBrokersOverviewVO;
 import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.res.ClusterBrokersStateVO;
 import com.xiaojukeji.know.streaming.km.common.bean.vo.kafkacontroller.KafkaControllerVO;
+import com.xiaojukeji.know.streaming.km.common.constant.KafkaConstant;
 import com.xiaojukeji.know.streaming.km.common.enums.SortTypeEnum;
+import com.xiaojukeji.know.streaming.km.common.enums.cluster.ClusterRunStateEnum;
+import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.PaginationMetricsUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.PaginationUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
@@ -23,6 +28,8 @@ import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerMetricService;
 import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerService;
 import com.xiaojukeji.know.streaming.km.core.service.kafkacontroller.KafkaControllerService;
 import com.xiaojukeji.know.streaming.km.core.service.topic.TopicService;
+import com.xiaojukeji.know.streaming.km.persistence.cache.LoadedClusterPhyCache;
+import com.xiaojukeji.know.streaming.km.persistence.kafka.KafkaJMXClient;
 import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.stereotype.Service;
@@ -50,6 +57,9 @@ public class ClusterBrokersManagerImpl implements ClusterBrokersManager {
     @Autowired
     private KafkaControllerService kafkaControllerService;

+    @Autowired
+    private KafkaJMXClient kafkaJMXClient;
+
     @Override
     public PaginationResult<ClusterBrokersOverviewVO> getClusterPhyBrokersOverview(Long clusterPhyId, ClusterBrokersOverviewDTO dto) {
         // 获取集群Broker列表
@@ -71,14 +81,27 @@ public class ClusterBrokersManagerImpl implements ClusterBrokersManager {
         Topic groupTopic = topicService.getTopic(clusterPhyId, org.apache.kafka.common.internals.Topic.GROUP_METADATA_TOPIC_NAME);
         Topic transactionTopic = topicService.getTopic(clusterPhyId, org.apache.kafka.common.internals.Topic.TRANSACTION_STATE_TOPIC_NAME);

+        //获取controller信息
+        KafkaController kafkaController = kafkaControllerService.getKafkaControllerFromDB(clusterPhyId);
+
+        //获取jmx状态信息
+        Map<Integer, Boolean> jmxConnectedMap = new HashMap<>();
+        brokerList.forEach(elem -> jmxConnectedMap.put(elem.getBrokerId(), kafkaJMXClient.getClientWithCheck(clusterPhyId, elem.getBrokerId()) != null));
+
+        ClusterPhy clusterPhy = LoadedClusterPhyCache.getByPhyId(clusterPhyId);
+
         // 格式转换
         return PaginationResult.buildSuc(
                 this.convert2ClusterBrokersOverviewVOList(
+                        clusterPhy,
                         paginationResult.getData().getBizData(),
                         brokerList,
                         metricsResult.getData(),
                         groupTopic,
-                        transactionTopic
+                        transactionTopic,
+                        kafkaController,
+                        jmxConnectedMap
                 ),
                 paginationResult
         );
@@ -117,7 +140,8 @@ public class ClusterBrokersManagerImpl implements ClusterBrokersManager {
             clusterBrokersStateVO.setKafkaControllerAlive(true);
         }

-        clusterBrokersStateVO.setConfigSimilar(brokerConfigService.countBrokerConfigDiffsFromDB(clusterPhyId, Arrays.asList("broker.id", "listeners", "name", "value")) <= 0);
+        clusterBrokersStateVO.setConfigSimilar(brokerConfigService.countBrokerConfigDiffsFromDB(clusterPhyId, KafkaConstant.CONFIG_SIMILAR_IGNORED_CONFIG_KEY_LIST) <= 0
+        );

         return clusterBrokersStateVO;
     }
@@ -155,26 +179,36 @@ public class ClusterBrokersManagerImpl implements ClusterBrokersManager {
         );
     }

-    private List<ClusterBrokersOverviewVO> convert2ClusterBrokersOverviewVOList(List<Integer> pagedBrokerIdList,
+    private List<ClusterBrokersOverviewVO> convert2ClusterBrokersOverviewVOList(ClusterPhy clusterPhy,
+                                                                               List<Integer> pagedBrokerIdList,
                                                                                List<Broker> brokerList,
                                                                                List<BrokerMetrics> metricsList,
                                                                                Topic groupTopic,
-                                                                               Topic transactionTopic) {
-        Map<Integer, BrokerMetrics> metricsMap = metricsList == null? new HashMap<>(): metricsList.stream().collect(Collectors.toMap(BrokerMetrics::getBrokerId, Function.identity()));
-        Map<Integer, Broker> brokerMap = brokerList == null? new HashMap<>(): brokerList.stream().collect(Collectors.toMap(Broker::getBrokerId, Function.identity()));
+                                                                               Topic transactionTopic,
+                                                                               KafkaController kafkaController,
+                                                                               Map<Integer, Boolean> jmxConnectedMap) {
+        Map<Integer, BrokerMetrics> metricsMap = metricsList == null ? new HashMap<>() : metricsList.stream().collect(Collectors.toMap(BrokerMetrics::getBrokerId, Function.identity()));
+        Map<Integer, Broker> brokerMap = brokerList == null ? new HashMap<>() : brokerList.stream().collect(Collectors.toMap(Broker::getBrokerId, Function.identity()));

         List<ClusterBrokersOverviewVO> voList = new ArrayList<>(pagedBrokerIdList.size());

         for (Integer brokerId : pagedBrokerIdList) {
             Broker broker = brokerMap.get(brokerId);
             BrokerMetrics brokerMetrics = metricsMap.get(brokerId);
+            Boolean jmxConnected = jmxConnectedMap.get(brokerId);

-            voList.add(this.convert2ClusterBrokersOverviewVO(brokerId, broker, brokerMetrics, groupTopic, transactionTopic));
+            voList.add(this.convert2ClusterBrokersOverviewVO(brokerId, broker, brokerMetrics, groupTopic, transactionTopic, kafkaController, jmxConnected));
         }

+        //补充非zk模式的JMXPort信息
+        if (!clusterPhy.getRunState().equals(ClusterRunStateEnum.RUN_ZK.getRunState())) {
+            JmxConfig jmxConfig = ConvertUtil.str2ObjByJson(clusterPhy.getJmxProperties(), JmxConfig.class);
+            voList.forEach(elem -> elem.setJmxPort(jmxConfig.getFinallyJmxPort(String.valueOf(elem.getBrokerId()))));
+        }
+
         return voList;
     }

-    private ClusterBrokersOverviewVO convert2ClusterBrokersOverviewVO(Integer brokerId, Broker broker, BrokerMetrics brokerMetrics, Topic groupTopic, Topic transactionTopic) {
+    private ClusterBrokersOverviewVO convert2ClusterBrokersOverviewVO(Integer brokerId, Broker broker, BrokerMetrics brokerMetrics, Topic groupTopic, Topic transactionTopic, KafkaController kafkaController, Boolean jmxConnected) {
         ClusterBrokersOverviewVO clusterBrokersOverviewVO = new ClusterBrokersOverviewVO();
         clusterBrokersOverviewVO.setBrokerId(brokerId);
         if (broker != null) {
@@ -192,8 +226,12 @@ public class ClusterBrokersManagerImpl implements ClusterBrokersManager {
         if (transactionTopic != null && transactionTopic.getBrokerIdSet().contains(brokerId)) {
             clusterBrokersOverviewVO.getKafkaRoleList().add(transactionTopic.getTopicName());
         }

+        if (kafkaController != null && kafkaController.getBrokerId().equals(brokerId)) {
+            clusterBrokersOverviewVO.getKafkaRoleList().add(KafkaConstant.CONTROLLER_ROLE);
+        }
+
         clusterBrokersOverviewVO.setLatestMetrics(brokerMetrics);
+        clusterBrokersOverviewVO.setJmxConnected(jmxConnected);

         return clusterBrokersOverviewVO;
     }
```
---

**New file:**

```java
package com.xiaojukeji.know.streaming.km.biz.cluster.impl;

import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.biz.cluster.ClusterConnectorsManager;
import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterConnectorsOverviewDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.ClusterConnectorDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.MetricDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.connect.MetricsConnectorsDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.ConnectCluster;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.ConnectWorker;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.WorkerConnector;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.connect.ConnectorMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.po.connect.ConnectorPO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.connect.ConnectStateVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.connector.ClusterConnectorOverviewVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.metrics.line.MetricMultiLinesVO;
import com.xiaojukeji.know.streaming.km.common.converter.ConnectConverter;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.PaginationMetricsUtil;
import com.xiaojukeji.know.streaming.km.common.utils.PaginationUtil;
import com.xiaojukeji.know.streaming.km.core.service.connect.cluster.ConnectClusterService;
import com.xiaojukeji.know.streaming.km.core.service.connect.connector.ConnectorMetricService;
import com.xiaojukeji.know.streaming.km.core.service.connect.connector.ConnectorService;
import com.xiaojukeji.know.streaming.km.core.service.connect.worker.WorkerConnectorService;
import com.xiaojukeji.know.streaming.km.core.service.connect.worker.WorkerService;
import org.apache.kafka.connect.runtime.AbstractStatus;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

@Service
public class ClusterConnectorsManagerImpl implements ClusterConnectorsManager {
    private static final ILog LOGGER = LogFactory.getLog(ClusterConnectorsManagerImpl.class);

    @Autowired
    private ConnectorService connectorService;

    @Autowired
    private ConnectClusterService connectClusterService;

    @Autowired
    private ConnectorMetricService connectorMetricService;

    @Autowired
    private WorkerService workerService;

    @Autowired
    private WorkerConnectorService workerConnectorService;

    @Override
    public PaginationResult<ClusterConnectorOverviewVO> getClusterConnectorsOverview(Long clusterPhyId, ClusterConnectorsOverviewDTO dto) {
        List<ConnectCluster> clusterList = connectClusterService.listByKafkaCluster(clusterPhyId);

        List<ConnectorPO> poList = connectorService.listByKafkaClusterIdFromDB(clusterPhyId);

        // 查询实时指标
        Result<List<ConnectorMetrics>> latestMetricsResult = connectorMetricService.getLatestMetricsFromES(
                clusterPhyId,
                poList.stream().map(elem -> new ClusterConnectorDTO(elem.getConnectClusterId(), elem.getConnectorName())).collect(Collectors.toList()),
                dto.getLatestMetricNames()
        );
        if (latestMetricsResult.failed()) {
            LOGGER.error("method=getClusterConnectorsOverview||clusterPhyId={}||result={}||errMsg=get latest metric failed", clusterPhyId, latestMetricsResult);
            return PaginationResult.buildFailure(latestMetricsResult, dto);
        }

        // 转换成vo
        List<ClusterConnectorOverviewVO> voList = ConnectConverter.convert2ClusterConnectorOverviewVOList(clusterList, poList, latestMetricsResult.getData());

        // 请求分页信息
        PaginationResult<ClusterConnectorOverviewVO> voPaginationResult = this.pagingConnectorInLocal(voList, dto);
        if (voPaginationResult.failed()) {
            LOGGER.error("method=getClusterConnectorsOverview||clusterPhyId={}||result={}||errMsg=pagination in local failed", clusterPhyId, voPaginationResult);
            return PaginationResult.buildFailure(voPaginationResult, dto);
        }

        // 查询历史指标
        Result<List<MetricMultiLinesVO>> lineMetricsResult = connectorMetricService.listConnectClusterMetricsFromES(
                clusterPhyId,
                this.buildMetricsConnectorsDTO(
                        voPaginationResult.getData().getBizData().stream().map(elem -> new ClusterConnectorDTO(elem.getConnectClusterId(), elem.getConnectorName())).collect(Collectors.toList()),
                        dto.getMetricLines()
                )
        );

        return PaginationResult.buildSuc(
                ConnectConverter.supplyData2ClusterConnectorOverviewVOList(
                        voPaginationResult.getData().getBizData(),
                        lineMetricsResult.getData()
                ),
                voPaginationResult
        );
    }

    @Override
    public ConnectStateVO getClusterConnectorsState(Long clusterPhyId) {
        //获取Connect集群Id列表
        List<ConnectCluster> connectClusterList = connectClusterService.listByKafkaCluster(clusterPhyId);
        List<ConnectorPO> connectorPOList = connectorService.listByKafkaClusterIdFromDB(clusterPhyId);
        List<WorkerConnector> workerConnectorList = workerConnectorService.listByKafkaClusterIdFromDB(clusterPhyId);
        List<ConnectWorker> connectWorkerList = workerService.listByKafkaClusterIdFromDB(clusterPhyId);

        return convert2ConnectStateVO(connectClusterList, connectorPOList, workerConnectorList, connectWorkerList);
    }

    /**************************************************** private method ****************************************************/

    private MetricsConnectorsDTO buildMetricsConnectorsDTO(List<ClusterConnectorDTO> connectorDTOList, MetricDTO metricDTO) {
        MetricsConnectorsDTO dto = ConvertUtil.obj2Obj(metricDTO, MetricsConnectorsDTO.class);
        dto.setConnectorNameList(connectorDTOList == null? new ArrayList<>(): connectorDTOList);
        return dto;
    }

    private ConnectStateVO convert2ConnectStateVO(List<ConnectCluster> connectClusterList, List<ConnectorPO> connectorPOList, List<WorkerConnector> workerConnectorList, List<ConnectWorker> connectWorkerList) {
        ConnectStateVO connectStateVO = new ConnectStateVO();
        connectStateVO.setConnectClusterCount(connectClusterList.size());
        connectStateVO.setTotalConnectorCount(connectorPOList.size());
        connectStateVO.setAliveConnectorCount(connectorPOList.stream().filter(elem -> elem.getState().equals(AbstractStatus.State.RUNNING.name())).collect(Collectors.toList()).size());
        connectStateVO.setWorkerCount(connectWorkerList.size());
        connectStateVO.setTotalTaskCount(workerConnectorList.size());
        connectStateVO.setAliveTaskCount(workerConnectorList.stream().filter(elem -> elem.getState().equals(AbstractStatus.State.RUNNING.name())).collect(Collectors.toList()).size());
        return connectStateVO;
    }

    private PaginationResult<ClusterConnectorOverviewVO> pagingConnectorInLocal(List<ClusterConnectorOverviewVO> connectorVOList, ClusterConnectorsOverviewDTO dto) {
        //模糊匹配
        connectorVOList = PaginationUtil.pageByFuzzyFilter(connectorVOList, dto.getSearchKeywords(), Arrays.asList("connectorName"));

        //排序
        if (!dto.getLatestMetricNames().isEmpty()) {
            PaginationMetricsUtil.sortMetrics(connectorVOList, "latestMetrics", dto.getSortMetricNameList(), "connectorName", dto.getSortType());
        } else {
            PaginationUtil.pageBySort(connectorVOList, dto.getSortField(), dto.getSortType(), "connectorName", dto.getSortType());
        }

        //分页
        return PaginationUtil.pageBySubData(connectorVOList, dto);
    }
}
```
---

```diff
@@ -14,10 +14,12 @@ import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.res.ClusterPhyTop
 import com.xiaojukeji.know.streaming.km.common.bean.vo.metrics.line.MetricMultiLinesVO;
 import com.xiaojukeji.know.streaming.km.common.constant.KafkaConstant;
 import com.xiaojukeji.know.streaming.km.common.converter.TopicVOConverter;
+import com.xiaojukeji.know.streaming.km.common.enums.ha.HaResTypeEnum;
 import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.PaginationMetricsUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.PaginationUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
+import com.xiaojukeji.know.streaming.km.core.service.ha.HaActiveStandbyRelationService;
 import com.xiaojukeji.know.streaming.km.core.service.topic.TopicMetricService;
 import com.xiaojukeji.know.streaming.km.core.service.topic.TopicService;
 import org.springframework.beans.factory.annotation.Autowired;
@@ -38,16 +40,22 @@
     @Autowired
     private TopicMetricService topicMetricService;

+    @Autowired
+    private HaActiveStandbyRelationService haActiveStandbyRelationService;
+
     @Override
     public PaginationResult<ClusterPhyTopicsOverviewVO> getClusterPhyTopicsOverview(Long clusterPhyId, ClusterTopicsOverviewDTO dto) {
         // 获取集群所有的Topic信息
         List<Topic> topicList = topicService.listTopicsFromDB(clusterPhyId);

         // 获取集群所有Topic的指标
-        Map<String, TopicMetrics> metricsMap = topicMetricService.getLatestMetricsFromCacheFirst(clusterPhyId);
+        Map<String, TopicMetrics> metricsMap = topicMetricService.getLatestMetricsFromCache(clusterPhyId);
+
+        // 获取HA信息
+        Set<String> haTopicNameSet = haActiveStandbyRelationService.listByClusterAndType(clusterPhyId, HaResTypeEnum.MIRROR_TOPIC).stream().map(elem -> elem.getResName()).collect(Collectors.toSet());

         // 转换成vo
-        List<ClusterPhyTopicsOverviewVO> voList = TopicVOConverter.convert2ClusterPhyTopicsOverviewVOList(topicList, metricsMap);
+        List<ClusterPhyTopicsOverviewVO> voList = TopicVOConverter.convert2ClusterPhyTopicsOverviewVOList(topicList, metricsMap, haTopicNameSet);

         // 请求分页信息
         PaginationResult<ClusterPhyTopicsOverviewVO> voPaginationResult = this.pagingTopicInLocal(voList, dto);
```
View File

@@ -0,0 +1,138 @@
package com.xiaojukeji.know.streaming.km.biz.cluster.impl;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.biz.cluster.ClusterZookeepersManager;
import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterZookeepersOverviewDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.ZookeeperMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.ResultStatus;
import com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.Znode;
import com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.ZookeeperInfo;
import com.xiaojukeji.know.streaming.km.common.bean.vo.zookeeper.ClusterZookeepersOverviewVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.zookeeper.ClusterZookeepersStateVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.zookeeper.ZnodeVO;
import com.xiaojukeji.know.streaming.km.common.constant.MsgConstant;
import com.xiaojukeji.know.streaming.km.common.enums.zookeeper.ZKRoleEnum;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.PaginationUtil;
import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
import com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.ZookeeperMetricVersionItems;
import com.xiaojukeji.know.streaming.km.core.service.zookeeper.ZnodeService;
import com.xiaojukeji.know.streaming.km.core.service.zookeeper.ZookeeperMetricService;
import com.xiaojukeji.know.streaming.km.core.service.zookeeper.ZookeeperService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import java.util.Arrays;
import java.util.List;
@Service
public class ClusterZookeepersManagerImpl implements ClusterZookeepersManager {
private static final ILog LOGGER = LogFactory.getLog(ClusterZookeepersManagerImpl.class);
@Autowired
private ClusterPhyService clusterPhyService;
@Autowired
private ZookeeperService zookeeperService;
@Autowired
private ZookeeperMetricService zookeeperMetricService;
@Autowired
private ZnodeService znodeService;
@Override
public Result<ClusterZookeepersStateVO> getClusterPhyZookeepersState(Long clusterPhyId) {
ClusterPhy clusterPhy = clusterPhyService.getClusterByCluster(clusterPhyId);
if (clusterPhy == null) {
return Result.buildFromRSAndMsg(ResultStatus.CLUSTER_NOT_EXIST, MsgConstant.getClusterPhyNotExist(clusterPhyId));
}
List<ZookeeperInfo> infoList = zookeeperService.listFromDBByCluster(clusterPhyId);
ClusterZookeepersStateVO vo = new ClusterZookeepersStateVO();
vo.setTotalServerCount(infoList.size());
vo.setAliveFollowerCount(0);
vo.setTotalFollowerCount(0);
vo.setAliveObserverCount(0);
vo.setTotalObserverCount(0);
vo.setAliveServerCount(0);
for (ZookeeperInfo info: infoList) {
if (info.getRole().equals(ZKRoleEnum.LEADER.getRole()) || info.getRole().equals(ZKRoleEnum.STANDALONE.getRole())) {
// leader 或者 standalone
vo.setLeaderNode(info.getHost());
}
if (info.getRole().equals(ZKRoleEnum.FOLLOWER.getRole())) {
vo.setTotalFollowerCount(vo.getTotalFollowerCount() + 1);
vo.setAliveFollowerCount(info.alive()? vo.getAliveFollowerCount() + 1: vo.getAliveFollowerCount());
}
if (info.getRole().equals(ZKRoleEnum.OBSERVER.getRole())) {
vo.setTotalObserverCount(vo.getTotalObserverCount() + 1);
vo.setAliveObserverCount(info.alive()? vo.getAliveObserverCount() + 1: vo.getAliveObserverCount());
}
if (info.alive()) {
vo.setAliveServerCount(vo.getAliveServerCount() + 1);
}
}
// 指标获取
Result<ZookeeperMetrics> metricsResult = zookeeperMetricService.batchCollectMetricsFromZookeeper(
clusterPhyId,
Arrays.asList(
ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_WATCH_COUNT,
ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_HEALTH_STATE,
ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_HEALTH_CHECK_PASSED,
ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_HEALTH_CHECK_TOTAL
)
);
if (metricsResult.failed()) {
LOGGER.error(
"method=getClusterPhyZookeepersState||clusterPhyId={}||errMsg={}",
clusterPhyId, metricsResult.getMessage()
);
return Result.buildSuc(vo);
}
ZookeeperMetrics metrics = metricsResult.getData();
vo.setWatchCount(ConvertUtil.float2Integer(metrics.getMetrics().get(ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_WATCH_COUNT)));
vo.setHealthState(ConvertUtil.float2Integer(metrics.getMetrics().get(ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_HEALTH_STATE)));
vo.setHealthCheckPassed(ConvertUtil.float2Integer(metrics.getMetrics().get(ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_HEALTH_CHECK_PASSED)));
vo.setHealthCheckTotal(ConvertUtil.float2Integer(metrics.getMetrics().get(ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_HEALTH_CHECK_TOTAL)));
return Result.buildSuc(vo);
}
@Override
public PaginationResult<ClusterZookeepersOverviewVO> getClusterPhyZookeepersOverview(Long clusterPhyId, ClusterZookeepersOverviewDTO dto) {
// fetch the cluster's zookeeper node list
List<ClusterZookeepersOverviewVO> clusterZookeepersOverviewVOList = ConvertUtil.list2List(zookeeperService.listFromDBByCluster(clusterPhyId), ClusterZookeepersOverviewVO.class);
// fuzzy-filter by host keyword
clusterZookeepersOverviewVOList = PaginationUtil.pageByFuzzyFilter(clusterZookeepersOverviewVOList, dto.getSearchKeywords(), Arrays.asList("host"));
// paginate
return PaginationUtil.pageBySubData(clusterZookeepersOverviewVOList, dto);
}
@Override
public Result<ZnodeVO> getZnodeVO(Long clusterPhyId, String path) {
Result<Znode> result = znodeService.getZnode(clusterPhyId, path);
if (result.failed()) {
return Result.buildFromIgnoreData(result);
}
return Result.buildSuc(ConvertUtil.obj2ObjByJSON(result.getData(), ZnodeVO.class));
}
/**************************************************** private method ****************************************************/
}
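The role-tallying loop in getClusterPhyZookeepersState above can be exercised in isolation. The sketch below is illustrative only: `ZkNode` and the bare role strings are hypothetical stand-ins for the project's `ZookeeperInfo` and `ZKRoleEnum` types.

```java
import java.util.List;

// ZkNode is a simplified stand-in for ZookeeperInfo; the role strings
// mirror ZKRoleEnum values but are plain literals here.
record ZkNode(String host, String role, boolean alive) {}

class ZkRoleCountSketch {
    // Mirrors the tallying pattern above: returns {totalFollowers, aliveFollowers}.
    static int[] followerCounts(List<ZkNode> nodes) {
        int total = 0;
        int alive = 0;
        for (ZkNode n : nodes) {
            if ("FOLLOWER".equals(n.role())) {
                total++;
                if (n.alive()) {
                    alive++;
                }
            }
        }
        return new int[]{total, alive};
    }

    public static void main(String[] args) {
        List<ZkNode> nodes = List.of(
                new ZkNode("zk1", "LEADER", true),
                new ZkNode("zk2", "FOLLOWER", true),
                new ZkNode("zk3", "FOLLOWER", false));
        int[] counts = followerCounts(nodes);
        System.out.println(counts[0] + " total followers, " + counts[1] + " alive");
    }
}
```

The observer counters in the real method follow the identical shape, just keyed on the OBSERVER role.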


@@ -5,32 +5,34 @@ import com.didiglobal.logi.log.LogFactory;
 import com.xiaojukeji.know.streaming.km.biz.cluster.MultiClusterPhyManager;
 import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.MetricDTO;
 import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.MetricsClusterPhyDTO;
+import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhysHealthState;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhysState;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
 import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.MultiClusterDashboardDTO;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.kafkacontroller.KafkaController;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.ClusterMetrics;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
+import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.ClusterPhyBaseVO;
 import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.ClusterPhyDashboardVO;
 import com.xiaojukeji.know.streaming.km.common.bean.vo.metrics.line.MetricMultiLinesVO;
 import com.xiaojukeji.know.streaming.km.common.constant.Constant;
 import com.xiaojukeji.know.streaming.km.common.converter.ClusterVOConverter;
+import com.xiaojukeji.know.streaming.km.common.enums.health.HealthStateEnum;
 import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.PaginationUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.PaginationMetricsUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
 import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterMetricService;
 import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
-import com.xiaojukeji.know.streaming.km.core.service.kafkacontroller.KafkaControllerService;
-import com.xiaojukeji.know.streaming.km.core.service.version.metrics.ClusterMetricVersionItems;
+import com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.ClusterMetricVersionItems;
+import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.Resource;
+import com.xiaojukeji.know.streaming.km.rebalance.common.BalanceMetricConstant;
+import com.xiaojukeji.know.streaming.km.rebalance.common.bean.entity.ClusterBalanceItemState;
+import com.xiaojukeji.know.streaming.km.rebalance.core.service.ClusterBalanceService;
 import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.stereotype.Service;
-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.List;
-import java.util.Map;
+import java.util.*;
 import java.util.stream.Collectors;
 @Service
@@ -44,33 +46,50 @@ public class MultiClusterPhyManagerImpl implements MultiClusterPhyManager {
     private ClusterMetricService clusterMetricService;
     @Autowired
-    private KafkaControllerService kafkaControllerService;
+    private ClusterBalanceService clusterBalanceService;
     @Override
     public ClusterPhysState getClusterPhysState() {
         List<ClusterPhy> clusterPhyList = clusterPhyService.listAllClusters();
-        Map<Long, KafkaController> controllerMap = kafkaControllerService.getKafkaControllersFromDB(
-                clusterPhyList.stream().map(elem -> elem.getId()).collect(Collectors.toList()),
-                false
-        );
-        // TODO: product-wise, consider adding an "unknown" state; otherwise newly added clusters show stale data because their metrics are delayed
-        ClusterPhysState physState = new ClusterPhysState(0, 0, clusterPhyList.size());
-        for (ClusterPhy clusterPhy: clusterPhyList) {
-            KafkaController kafkaController = controllerMap.get(clusterPhy.getId());
-            if (kafkaController != null && !kafkaController.alive()) {
-                // explicit signal that the controller is down
-                physState.setDownCount(physState.getDownCount() + 1);
-            } else if ((System.currentTimeMillis() - clusterPhy.getCreateTime().getTime() >= 5 * 60 * 1000) && kafkaController == null) {
-                // cluster was added more than 5 minutes ago and controller info is still missing: mark as down
-                physState.setDownCount(physState.getDownCount() + 1);
-            } else {
-                // everything else counts as alive
-                physState.setLiveCount(physState.getLiveCount() + 1);
-            }
-        }
+        ClusterPhysState physState = new ClusterPhysState(0, 0, 0, clusterPhyList.size());
+        for (ClusterPhy clusterPhy : clusterPhyList) {
+            ClusterMetrics metrics = clusterMetricService.getLatestMetricsFromCache(clusterPhy.getId());
+            Float state = metrics.getMetric(ClusterMetricVersionItems.CLUSTER_METRIC_HEALTH_STATE);
+            if (state == null) {
+                physState.setUnknownCount(physState.getUnknownCount() + 1);
+            } else if (state.intValue() == HealthStateEnum.DEAD.getDimension()) {
+                physState.setDownCount(physState.getDownCount() + 1);
+            } else {
+                physState.setLiveCount(physState.getLiveCount() + 1);
+            }
+        }
+        return physState;
+    }
+    @Override
+    public ClusterPhysHealthState getClusterPhysHealthState() {
+        List<ClusterPhy> clusterPhyList = clusterPhyService.listAllClusters();
+        ClusterPhysHealthState physState = new ClusterPhysHealthState(clusterPhyList.size());
+        for (ClusterPhy clusterPhy: clusterPhyList) {
+            ClusterMetrics metrics = clusterMetricService.getLatestMetricsFromCache(clusterPhy.getId());
+            Float state = metrics.getMetric(ClusterMetricVersionItems.CLUSTER_METRIC_HEALTH_STATE);
+            if (state == null) {
+                physState.setUnknownCount(physState.getUnknownCount() + 1);
+            } else if (state.intValue() == HealthStateEnum.GOOD.getDimension()) {
+                physState.setGoodCount(physState.getGoodCount() + 1);
+            } else if (state.intValue() == HealthStateEnum.MEDIUM.getDimension()) {
+                physState.setMediumCount(physState.getMediumCount() + 1);
+            } else if (state.intValue() == HealthStateEnum.POOR.getDimension()) {
+                physState.setPoorCount(physState.getPoorCount() + 1);
+            } else if (state.intValue() == HealthStateEnum.DEAD.getDimension()) {
+                physState.setDeadCount(physState.getDeadCount() + 1);
+            } else {
+                physState.setUnknownCount(physState.getUnknownCount() + 1);
+            }
+        }
         return physState;
     }
@@ -83,24 +102,6 @@ public class MultiClusterPhyManagerImpl implements MultiClusterPhyManager {
     // convert to VO format for subsequent paging/filtering
     List<ClusterPhyDashboardVO> voList = ConvertUtil.list2List(clusterPhyList, ClusterPhyDashboardVO.class);
-    // TODO: product-wise, consider adding an "unknown" state; otherwise newly added clusters show stale data because their metrics are delayed
-    // fetch the clusters' controller info and fill it into the VOs
-    Map<Long, KafkaController> controllerMap = kafkaControllerService.getKafkaControllersFromDB(clusterPhyList.stream().map(elem -> elem.getId()).collect(Collectors.toList()), false);
-    for (ClusterPhyDashboardVO vo: voList) {
-        KafkaController kafkaController = controllerMap.get(vo.getId());
-        if (kafkaController != null && !kafkaController.alive()) {
-            // explicit signal that the controller is down
-            vo.setAlive(Constant.DOWN);
-        } else if ((System.currentTimeMillis() - vo.getCreateTime().getTime() >= 5 * 60L * 1000L) && kafkaController == null) {
-            // cluster was added more than 5 minutes ago and controller info is still missing: mark as down
-            vo.setAlive(Constant.DOWN);
-        } else {
-            // everything else counts as alive
-            vo.setAlive(Constant.ALIVE);
-        }
-    }
     // local paging filter
     voList = this.getAndPagingDataInLocal(voList, dto);
@@ -125,6 +126,15 @@ public class MultiClusterPhyManagerImpl implements MultiClusterPhyManager {
     );
 }
+@Override
+public Result<List<ClusterPhyBaseVO>> getClusterPhysBasic() {
+    // fetch all clusters
+    List<ClusterPhy> clusterPhyList = clusterPhyService.listAllClusters();
+    // convert to VO format for the caller
+    return Result.buildSuc(ConvertUtil.list2List(clusterPhyList, ClusterPhyBaseVO.class));
+}
 /**************************************************** private method ****************************************************/
@@ -149,12 +159,11 @@ public class MultiClusterPhyManagerImpl implements MultiClusterPhyManager {
     List<ClusterMetrics> metricsList = new ArrayList<>();
     for (ClusterPhyDashboardVO vo: voList) {
         ClusterMetrics clusterMetrics = clusterMetricService.getLatestMetricsFromCache(vo.getId());
-        if (!clusterMetrics.getMetrics().containsKey(ClusterMetricVersionItems.CLUSTER_METRIC_HEALTH_SCORE)) {
-            Float alive = clusterMetrics.getMetrics().get(ClusterMetricVersionItems.CLUSTER_METRIC_ALIVE);
-            // if the cluster has no health score, set a default value
-            clusterMetrics.putMetric(ClusterMetricVersionItems.CLUSTER_METRIC_HEALTH_SCORE,
-                    (alive != null && alive <= 0)? 0.0f: Constant.DEFAULT_CLUSTER_HEALTH_SCORE.floatValue()
-            );
+        clusterMetrics.getMetrics().putIfAbsent(ClusterMetricVersionItems.CLUSTER_METRIC_HEALTH_STATE, (float) HealthStateEnum.UNKNOWN.getDimension());
+        Result<ClusterMetrics> balanceMetricsResult = this.getClusterLoadReBalanceInfo(vo.getId());
+        if (balanceMetricsResult.hasData()) {
+            clusterMetrics.putMetric(balanceMetricsResult.getData().getMetrics());
         }
         metricsList.add(clusterMetrics);
@@ -178,4 +187,21 @@ public class MultiClusterPhyManagerImpl implements MultiClusterPhyManager {
     dto.setClusterPhyIds(clusterIdList);
     return dto;
 }
+private Result<ClusterMetrics> getClusterLoadReBalanceInfo(Long clusterPhyId) {
+    Result<ClusterBalanceItemState> stateResult = clusterBalanceService.getItemStateFromCacheFirst(clusterPhyId);
+    if (stateResult.failed()) {
+        return Result.buildFromIgnoreData(stateResult);
+    }
+    ClusterBalanceItemState state = stateResult.getData();
+    ClusterMetrics metric = ClusterMetrics.initWithMetrics(clusterPhyId, BalanceMetricConstant.CLUSTER_METRIC_LOAD_RE_BALANCE_ENABLE, state.getEnable()? Constant.YES: Constant.NO);
+    metric.putMetric(BalanceMetricConstant.CLUSTER_METRIC_LOAD_RE_BALANCE_CPU, state.getResItemState(Resource.CPU).floatValue());
+    metric.putMetric(BalanceMetricConstant.CLUSTER_METRIC_LOAD_RE_BALANCE_NW_IN, state.getResItemState(Resource.NW_IN).floatValue());
+    metric.putMetric(BalanceMetricConstant.CLUSTER_METRIC_LOAD_RE_BALANCE_NW_OUT, state.getResItemState(Resource.NW_OUT).floatValue());
+    metric.putMetric(BalanceMetricConstant.CLUSTER_METRIC_LOAD_RE_BALANCE_DISK, state.getResItemState(Resource.DISK).floatValue());
+    return Result.buildSuc(metric);
+}
 }
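The getClusterPhysHealthState method introduced in this diff buckets each cluster's cached HEALTH_STATE metric by enum dimension, counting a missing metric as unknown. A minimal standalone sketch of that branching, assuming illustrative dimension values in place of the real HealthStateEnum constants:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class HealthBucketSketch {
    // Illustrative dimension values; the real constants live in HealthStateEnum.
    static final int GOOD = 0, MEDIUM = 1, POOR = 2, DEAD = 3;

    // Mirrors the null-check-then-bucket pattern: a missing metric is counted
    // as UNKNOWN, as is any value outside the known dimensions.
    static Map<String, Integer> bucket(List<Float> states) {
        Map<String, Integer> counts = new HashMap<>();
        for (Float state : states) {
            String key;
            if (state == null) {
                key = "UNKNOWN";
            } else if (state.intValue() == GOOD) {
                key = "GOOD";
            } else if (state.intValue() == MEDIUM) {
                key = "MEDIUM";
            } else if (state.intValue() == POOR) {
                key = "POOR";
            } else if (state.intValue() == DEAD) {
                key = "DEAD";
            } else {
                key = "UNKNOWN";
            }
            counts.merge(key, 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        // Arrays.asList is used because List.of rejects nulls.
        List<Float> states = Arrays.asList(0f, 0f, 3f, null);
        System.out.println(bucket(states));
    }
}
```

The same null-means-unknown default also drives the `putIfAbsent(..., UNKNOWN)` line added to the metrics loop in this diff.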


@@ -0,0 +1,16 @@
package com.xiaojukeji.know.streaming.km.biz.connect.connector;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector.ConnectorCreateDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.vo.connect.connector.ConnectorStateVO;
import java.util.Properties;
public interface ConnectorManager {
Result<Void> updateConnectorConfig(Long connectClusterId, String connectorName, Properties configs, String operator);
Result<Void> createConnector(ConnectorCreateDTO dto, String operator);
Result<Void> createConnector(ConnectorCreateDTO dto, String heartbeatName, String checkpointName, String operator);
Result<ConnectorStateVO> getConnectorStateVO(Long connectClusterId, String connectorName);
}


@@ -0,0 +1,16 @@
package com.xiaojukeji.know.streaming.km.biz.connect.connector;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.vo.connect.task.KCTaskOverviewVO;
import java.util.List;
/**
* @author wyb
* @date 2022/11/14
*/
public interface WorkerConnectorManager {
Result<List<KCTaskOverviewVO>> getTaskOverview(Long connectClusterId, String connectorName);
}


@@ -0,0 +1,119 @@
package com.xiaojukeji.know.streaming.km.biz.connect.connector.impl;
import com.xiaojukeji.know.streaming.km.biz.connect.connector.ConnectorManager;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector.ConnectorCreateDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.WorkerConnector;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.config.ConnectConfigInfos;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.connector.KSConnector;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.connector.KSConnectorInfo;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.ResultStatus;
import com.xiaojukeji.know.streaming.km.common.bean.po.connect.ConnectorPO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.connect.connector.ConnectorStateVO;
import com.xiaojukeji.know.streaming.km.common.constant.connect.KafkaConnectConstant;
import com.xiaojukeji.know.streaming.km.core.service.connect.connector.ConnectorService;
import com.xiaojukeji.know.streaming.km.core.service.connect.connector.OpConnectorService;
import com.xiaojukeji.know.streaming.km.core.service.connect.plugin.PluginService;
import com.xiaojukeji.know.streaming.km.core.service.connect.worker.WorkerConnectorService;
import org.apache.kafka.connect.runtime.AbstractStatus;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import java.util.List;
import java.util.Properties;
import java.util.stream.Collectors;
@Service
public class ConnectorManagerImpl implements ConnectorManager {
@Autowired
private PluginService pluginService;
@Autowired
private ConnectorService connectorService;
@Autowired
private OpConnectorService opConnectorService;
@Autowired
private WorkerConnectorService workerConnectorService;
@Override
public Result<Void> updateConnectorConfig(Long connectClusterId, String connectorName, Properties configs, String operator) {
Result<ConnectConfigInfos> infosResult = pluginService.validateConfig(connectClusterId, configs);
if (infosResult.failed()) {
return Result.buildFromIgnoreData(infosResult);
}
if (infosResult.getData().getErrorCount() > 0) {
return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, "Invalid connector configuration");
}
return opConnectorService.updateConnectorConfig(connectClusterId, connectorName, configs, operator);
}
@Override
public Result<Void> createConnector(ConnectorCreateDTO dto, String operator) {
dto.getSuitableConfig().put(KafkaConnectConstant.MIRROR_MAKER_NAME_FIELD_NAME, dto.getConnectorName());
Result<KSConnectorInfo> createResult = opConnectorService.createConnector(dto.getConnectClusterId(), dto.getConnectorName(), dto.getSuitableConfig(), operator);
if (createResult.failed()) {
return Result.buildFromIgnoreData(createResult);
}
Result<KSConnector> ksConnectorResult = connectorService.getConnectorFromKafka(dto.getConnectClusterId(), dto.getConnectorName());
if (ksConnectorResult.failed()) {
return Result.buildFromRSAndMsg(ResultStatus.SUCCESS, "Created successfully, but fetching its metadata failed; the page metadata may lag by about 1 minute");
}
opConnectorService.addNewToDB(ksConnectorResult.getData());
return Result.buildSuc();
}
@Override
public Result<Void> createConnector(ConnectorCreateDTO dto, String heartbeatName, String checkpointName, String operator) {
dto.getSuitableConfig().put(KafkaConnectConstant.MIRROR_MAKER_NAME_FIELD_NAME, dto.getConnectorName());
Result<KSConnectorInfo> createResult = opConnectorService.createConnector(dto.getConnectClusterId(), dto.getConnectorName(), dto.getSuitableConfig(), operator);
if (createResult.failed()) {
return Result.buildFromIgnoreData(createResult);
}
Result<KSConnector> ksConnectorResult = connectorService.getConnectorFromKafka(dto.getConnectClusterId(), dto.getConnectorName());
if (ksConnectorResult.failed()) {
return Result.buildFromRSAndMsg(ResultStatus.SUCCESS, "Created successfully, but fetching its metadata failed; the page metadata may lag by about 1 minute");
}
KSConnector connector = ksConnectorResult.getData();
connector.setCheckpointConnectorName(checkpointName);
connector.setHeartbeatConnectorName(heartbeatName);
opConnectorService.addNewToDB(connector);
return Result.buildSuc();
}
@Override
public Result<ConnectorStateVO> getConnectorStateVO(Long connectClusterId, String connectorName) {
ConnectorPO connectorPO = connectorService.getConnectorFromDB(connectClusterId, connectorName);
if (connectorPO == null) {
return Result.buildFailure(ResultStatus.NOT_EXIST);
}
List<WorkerConnector> workerConnectorList = workerConnectorService.listFromDB(connectClusterId).stream().filter(elem -> elem.getConnectorName().equals(connectorName)).collect(Collectors.toList());
return Result.buildSuc(convert2ConnectorOverviewVO(connectorPO, workerConnectorList));
}
private ConnectorStateVO convert2ConnectorOverviewVO(ConnectorPO connectorPO, List<WorkerConnector> workerConnectorList) {
ConnectorStateVO connectorStateVO = new ConnectorStateVO();
connectorStateVO.setConnectClusterId(connectorPO.getConnectClusterId());
connectorStateVO.setName(connectorPO.getConnectorName());
connectorStateVO.setType(connectorPO.getConnectorType());
connectorStateVO.setState(connectorPO.getState());
connectorStateVO.setTotalTaskCount(workerConnectorList.size());
connectorStateVO.setAliveTaskCount(workerConnectorList.stream().filter(elem -> elem.getState().equals(AbstractStatus.State.RUNNING.name())).collect(Collectors.toList()).size());
connectorStateVO.setTotalWorkerCount(workerConnectorList.stream().map(elem -> elem.getWorkerId()).collect(Collectors.toSet()).size());
return connectorStateVO;
}
}
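convert2ConnectorOverviewVO above derives its three counts from a single worker-connector list. A self-contained sketch of the same aggregations, with a hypothetical `Task` record standing in for `WorkerConnector` (the "RUNNING" literal matches `AbstractStatus.State.RUNNING.name()`), using `count()`/`distinct()` instead of intermediate collections:

```java
import java.util.List;

// Task is a simplified stand-in for WorkerConnector: one task on one worker.
record Task(String workerId, String state) {}

class TaskStatsSketch {
    // Same aggregations as convert2ConnectorOverviewVO: RUNNING task count
    // and the number of distinct workers hosting at least one task.
    static int aliveTasks(List<Task> tasks) {
        return (int) tasks.stream().filter(t -> "RUNNING".equals(t.state())).count();
    }

    static int distinctWorkers(List<Task> tasks) {
        return (int) tasks.stream().map(Task::workerId).distinct().count();
    }

    public static void main(String[] args) {
        List<Task> tasks = List.of(
                new Task("w1", "RUNNING"),
                new Task("w1", "FAILED"),
                new Task("w2", "RUNNING"));
        System.out.println(aliveTasks(tasks) + " running tasks on " + distinctWorkers(tasks) + " workers");
    }
}
```

`count()` and `distinct()` are behaviorally equivalent to the `collect(...).size()` calls in the original, but avoid materializing temporary lists and sets.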


@@ -0,0 +1,37 @@
package com.xiaojukeji.know.streaming.km.biz.connect.connector.impl;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.biz.connect.connector.WorkerConnectorManager;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.ConnectCluster;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.WorkerConnector;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.vo.connect.task.KCTaskOverviewVO;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.core.service.connect.worker.WorkerConnectorService;
import com.xiaojukeji.know.streaming.km.persistence.connect.cache.LoadedConnectClusterCache;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import java.util.List;
/**
* @author wyb
* @date 2022/11/14
*/
@Service
public class WorkerConnectorManageImpl implements WorkerConnectorManager {
private static final ILog LOGGER = LogFactory.getLog(WorkerConnectorManageImpl.class);
@Autowired
private WorkerConnectorService workerConnectorService;
@Override
public Result<List<KCTaskOverviewVO>> getTaskOverview(Long connectClusterId, String connectorName) {
ConnectCluster connectCluster = LoadedConnectClusterCache.getByPhyId(connectClusterId);
List<WorkerConnector> workerConnectorList = workerConnectorService.getWorkerConnectorListFromCluster(connectCluster, connectorName);
return Result.buildSuc(ConvertUtil.list2List(workerConnectorList, KCTaskOverviewVO.class));
}
}


@@ -0,0 +1,43 @@
package com.xiaojukeji.know.streaming.km.biz.connect.mm2;
import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterMirrorMakersOverviewDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.mm2.MirrorMakerCreateDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.mm2.ClusterMirrorMakerOverviewVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.mm2.MirrorMakerBaseStateVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.mm2.MirrorMakerStateVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.connect.plugin.ConnectConfigInfosVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.connect.task.KCTaskOverviewVO;
import java.util.List;
import java.util.Map;
import java.util.Properties;
/**
* @author wyb
* @date 2022/12/26
*/
public interface MirrorMakerManager {
Result<Void> createMirrorMaker(MirrorMakerCreateDTO dto, String operator);
Result<Void> deleteMirrorMaker(Long connectClusterId, String sourceConnectorName, String operator);
Result<Void> modifyMirrorMakerConfig(MirrorMakerCreateDTO dto, String operator);
Result<Void> restartMirrorMaker(Long connectClusterId, String sourceConnectorName, String operator);
Result<Void> stopMirrorMaker(Long connectClusterId, String sourceConnectorName, String operator);
Result<Void> resumeMirrorMaker(Long connectClusterId, String sourceConnectorName, String operator);
Result<MirrorMakerStateVO> getMirrorMakerStateVO(Long clusterPhyId);
PaginationResult<ClusterMirrorMakerOverviewVO> getClusterMirrorMakersOverview(Long clusterPhyId, ClusterMirrorMakersOverviewDTO dto);
Result<MirrorMakerBaseStateVO> getMirrorMakerState(Long connectId, String connectName);
Result<Map<String, List<KCTaskOverviewVO>>> getTaskOverview(Long connectClusterId, String connectorName);
Result<List<Properties>> getMM2Configs(Long connectClusterId, String connectorName);
Result<List<ConnectConfigInfosVO>> validateConnectors(MirrorMakerCreateDTO dto);
}


@@ -0,0 +1,653 @@
package com.xiaojukeji.know.streaming.km.biz.connect.mm2.impl;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.biz.connect.connector.ConnectorManager;
import com.xiaojukeji.know.streaming.km.biz.connect.mm2.MirrorMakerManager;
import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterMirrorMakersOverviewDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.ClusterConnectorDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector.ConnectorCreateDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.mm2.MirrorMakerCreateDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.MetricDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.mm2.MetricsMirrorMakersDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.ConnectCluster;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.ConnectWorker;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.WorkerConnector;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.config.ConnectConfigInfos;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.connector.KSConnectorInfo;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.mm2.MirrorMakerMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.ResultStatus;
import com.xiaojukeji.know.streaming.km.common.bean.po.connect.ConnectorPO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.mm2.ClusterMirrorMakerOverviewVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.mm2.MirrorMakerBaseStateVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.mm2.MirrorMakerStateVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.connect.plugin.ConnectConfigInfosVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.metrics.line.MetricLineVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.metrics.line.MetricMultiLinesVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.connect.task.KCTaskOverviewVO;
import com.xiaojukeji.know.streaming.km.common.constant.MsgConstant;
import com.xiaojukeji.know.streaming.km.common.utils.*;
import com.xiaojukeji.know.streaming.km.common.constant.connect.KafkaConnectConstant;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.MirrorMakerUtil;
import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
import com.xiaojukeji.know.streaming.km.core.service.connect.cluster.ConnectClusterService;
import com.xiaojukeji.know.streaming.km.core.service.connect.connector.ConnectorService;
import com.xiaojukeji.know.streaming.km.core.service.connect.connector.OpConnectorService;
import com.xiaojukeji.know.streaming.km.core.service.connect.mm2.MirrorMakerMetricService;
import com.xiaojukeji.know.streaming.km.core.service.connect.plugin.PluginService;
import com.xiaojukeji.know.streaming.km.core.service.connect.worker.WorkerConnectorService;
import com.xiaojukeji.know.streaming.km.core.service.connect.worker.WorkerService;
import com.xiaojukeji.know.streaming.km.core.utils.ApiCallThreadPoolService;
import com.xiaojukeji.know.streaming.km.persistence.cache.LoadedClusterPhyCache;
import org.apache.commons.lang.StringUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import java.util.*;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;
import java.util.stream.Collectors;
import static org.apache.kafka.connect.runtime.AbstractStatus.State.RUNNING;
import static com.xiaojukeji.know.streaming.km.common.constant.connect.KafkaConnectConstant.*;
/**
* @author wyb
* @date 2022/12/26
*/
@Service
public class MirrorMakerManagerImpl implements MirrorMakerManager {
private static final ILog LOGGER = LogFactory.getLog(MirrorMakerManagerImpl.class);
@Autowired
private ConnectorService connectorService;
@Autowired
private OpConnectorService opConnectorService;
@Autowired
private WorkerConnectorService workerConnectorService;
@Autowired
private WorkerService workerService;
@Autowired
private ConnectorManager connectorManager;
@Autowired
private ClusterPhyService clusterPhyService;
@Autowired
private MirrorMakerMetricService mirrorMakerMetricService;
@Autowired
private ConnectClusterService connectClusterService;
@Autowired
private PluginService pluginService;
@Override
public Result<Void> createMirrorMaker(MirrorMakerCreateDTO dto, String operator) {
// validate basic parameters
Result<Void> rv = this.checkCreateMirrorMakerParamAndUnifyData(dto);
if (rv.failed()) {
return rv;
}
// create the MirrorSourceConnector
Result<Void> sourceConnectResult = connectorManager.createConnector(
dto,
dto.getCheckpointConnectorConfigs() != null? MirrorMakerUtil.genCheckpointName(dto.getConnectorName()): "",
dto.getHeartbeatConnectorConfigs() != null? MirrorMakerUtil.genHeartbeatName(dto.getConnectorName()): "",
operator
);
if (sourceConnectResult.failed()) {
// creation failed, return immediately
return Result.buildFromIgnoreData(sourceConnectResult);
}
// create the checkpoint task
Result<Void> checkpointResult = Result.buildSuc();
if (dto.getCheckpointConnectorConfigs() != null) {
checkpointResult = connectorManager.createConnector(
new ConnectorCreateDTO(dto.getConnectClusterId(), MirrorMakerUtil.genCheckpointName(dto.getConnectorName()), dto.getCheckpointConnectorConfigs()),
operator
);
}
// create the heartbeat task
Result<Void> heartbeatResult = Result.buildSuc();
if (dto.getHeartbeatConnectorConfigs() != null) {
heartbeatResult = connectorManager.createConnector(
new ConnectorCreateDTO(dto.getConnectClusterId(), MirrorMakerUtil.genHeartbeatName(dto.getConnectorName()), dto.getHeartbeatConnectorConfigs()),
operator
);
}
// evaluate the two sub-task results
if (checkpointResult.successful() && heartbeatResult.successful()) {
return Result.buildSuc();
} else if (checkpointResult.failed() && heartbeatResult.failed()) {
return Result.buildFromRSAndMsg(
ResultStatus.KAFKA_CONNECTOR_OPERATE_FAILED,
String.format("Failed to create checkpoint & heartbeat.%nFailure messages: %s%n%n%s", checkpointResult.getMessage(), heartbeatResult.getMessage())
);
} else if (checkpointResult.failed()) {
return Result.buildFromRSAndMsg(
ResultStatus.KAFKA_CONNECTOR_OPERATE_FAILED,
String.format("Failed to create checkpoint.%nFailure message: %s", checkpointResult.getMessage())
);
} else {
return Result.buildFromRSAndMsg(
ResultStatus.KAFKA_CONNECTOR_OPERATE_FAILED,
String.format("Failed to create heartbeat.%nFailure message: %s", heartbeatResult.getMessage())
);
}
}
@Override
public Result<Void> deleteMirrorMaker(Long connectClusterId, String sourceConnectorName, String operator) {
ConnectorPO connectorPO = connectorService.getConnectorFromDB(connectClusterId, sourceConnectorName);
if (connectorPO == null) {
return Result.buildFromRSAndMsg(ResultStatus.NOT_EXIST, MsgConstant.getConnectorNotExist(connectClusterId, sourceConnectorName));
}
Result<Void> rv = Result.buildSuc();
if (!ValidateUtils.isBlank(connectorPO.getCheckpointConnectorName())) {
rv = opConnectorService.deleteConnector(connectClusterId, connectorPO.getCheckpointConnectorName(), operator);
}
if (rv.failed()) {
return rv;
}
if (!ValidateUtils.isBlank(connectorPO.getHeartbeatConnectorName())) {
rv = opConnectorService.deleteConnector(connectClusterId, connectorPO.getHeartbeatConnectorName(), operator);
}
if (rv.failed()) {
return rv;
}
return opConnectorService.deleteConnector(connectClusterId, sourceConnectorName, operator);
}
@Override
public Result<Void> modifyMirrorMakerConfig(MirrorMakerCreateDTO dto, String operator) {
ConnectorPO connectorPO = connectorService.getConnectorFromDB(dto.getConnectClusterId(), dto.getConnectorName());
if (connectorPO == null) {
return Result.buildFromRSAndMsg(ResultStatus.NOT_EXIST, MsgConstant.getConnectorNotExist(dto.getConnectClusterId(), dto.getConnectorName()));
}
Result<Void> rv = Result.buildSuc();
if (!ValidateUtils.isBlank(connectorPO.getCheckpointConnectorName()) && dto.getCheckpointConnectorConfigs() != null) {
rv = opConnectorService.updateConnectorConfig(dto.getConnectClusterId(), connectorPO.getCheckpointConnectorName(), dto.getCheckpointConnectorConfigs(), operator);
}
if (rv.failed()) {
return rv;
}
if (!ValidateUtils.isBlank(connectorPO.getHeartbeatConnectorName()) && dto.getHeartbeatConnectorConfigs() != null) {
rv = opConnectorService.updateConnectorConfig(dto.getConnectClusterId(), connectorPO.getHeartbeatConnectorName(), dto.getHeartbeatConnectorConfigs(), operator);
}
if (rv.failed()) {
return rv;
}
return opConnectorService.updateConnectorConfig(dto.getConnectClusterId(), dto.getConnectorName(), dto.getSuitableConfig(), operator);
}
@Override
public Result<Void> restartMirrorMaker(Long connectClusterId, String sourceConnectorName, String operator) {
ConnectorPO connectorPO = connectorService.getConnectorFromDB(connectClusterId, sourceConnectorName);
if (connectorPO == null) {
return Result.buildFromRSAndMsg(ResultStatus.NOT_EXIST, MsgConstant.getConnectorNotExist(connectClusterId, sourceConnectorName));
}
Result<Void> rv = Result.buildSuc();
if (!ValidateUtils.isBlank(connectorPO.getCheckpointConnectorName())) {
rv = opConnectorService.restartConnector(connectClusterId, connectorPO.getCheckpointConnectorName(), operator);
}
if (rv.failed()) {
return rv;
}
if (!ValidateUtils.isBlank(connectorPO.getHeartbeatConnectorName())) {
rv = opConnectorService.restartConnector(connectClusterId, connectorPO.getHeartbeatConnectorName(), operator);
}
if (rv.failed()) {
return rv;
}
return opConnectorService.restartConnector(connectClusterId, sourceConnectorName, operator);
}
@Override
public Result<Void> stopMirrorMaker(Long connectClusterId, String sourceConnectorName, String operator) {
ConnectorPO connectorPO = connectorService.getConnectorFromDB(connectClusterId, sourceConnectorName);
if (connectorPO == null) {
return Result.buildFromRSAndMsg(ResultStatus.NOT_EXIST, MsgConstant.getConnectorNotExist(connectClusterId, sourceConnectorName));
}
Result<Void> rv = Result.buildSuc();
if (!ValidateUtils.isBlank(connectorPO.getCheckpointConnectorName())) {
rv = opConnectorService.stopConnector(connectClusterId, connectorPO.getCheckpointConnectorName(), operator);
}
if (rv.failed()) {
return rv;
}
if (!ValidateUtils.isBlank(connectorPO.getHeartbeatConnectorName())) {
rv = opConnectorService.stopConnector(connectClusterId, connectorPO.getHeartbeatConnectorName(), operator);
}
if (rv.failed()) {
return rv;
}
return opConnectorService.stopConnector(connectClusterId, sourceConnectorName, operator);
}
@Override
public Result<Void> resumeMirrorMaker(Long connectClusterId, String sourceConnectorName, String operator) {
ConnectorPO connectorPO = connectorService.getConnectorFromDB(connectClusterId, sourceConnectorName);
if (connectorPO == null) {
return Result.buildFromRSAndMsg(ResultStatus.NOT_EXIST, MsgConstant.getConnectorNotExist(connectClusterId, sourceConnectorName));
}
Result<Void> rv = Result.buildSuc();
if (!ValidateUtils.isBlank(connectorPO.getCheckpointConnectorName())) {
rv = opConnectorService.resumeConnector(connectClusterId, connectorPO.getCheckpointConnectorName(), operator);
}
if (rv.failed()) {
return rv;
}
if (!ValidateUtils.isBlank(connectorPO.getHeartbeatConnectorName())) {
rv = opConnectorService.resumeConnector(connectClusterId, connectorPO.getHeartbeatConnectorName(), operator);
}
if (rv.failed()) {
return rv;
}
return opConnectorService.resumeConnector(connectClusterId, sourceConnectorName, operator);
}
@Override
public Result<MirrorMakerStateVO> getMirrorMakerStateVO(Long clusterPhyId) {
List<ConnectorPO> connectorPOList = connectorService.listByKafkaClusterIdFromDB(clusterPhyId);
List<WorkerConnector> workerConnectorList = workerConnectorService.listByKafkaClusterIdFromDB(clusterPhyId);
List<ConnectWorker> workerList = workerService.listByKafkaClusterIdFromDB(clusterPhyId);
return Result.buildSuc(convert2MirrorMakerStateVO(connectorPOList, workerConnectorList, workerList));
}
@Override
public PaginationResult<ClusterMirrorMakerOverviewVO> getClusterMirrorMakersOverview(Long clusterPhyId, ClusterMirrorMakersOverviewDTO dto) {
List<ConnectorPO> mirrorMakerList = connectorService.listByKafkaClusterIdFromDB(clusterPhyId).stream().filter(elem -> elem.getConnectorClassName().equals(MIRROR_MAKER_SOURCE_CONNECTOR_TYPE)).collect(Collectors.toList());
List<ConnectCluster> connectClusterList = connectClusterService.listByKafkaCluster(clusterPhyId);
Result<List<MirrorMakerMetrics>> latestMetricsResult = mirrorMakerMetricService.getLatestMetricsFromES(clusterPhyId,
mirrorMakerList.stream().map(elem -> new Tuple<>(elem.getConnectClusterId(), elem.getConnectorName())).collect(Collectors.toList()),
dto.getLatestMetricNames());
if (latestMetricsResult.failed()) {
LOGGER.error("method=getClusterMirrorMakersOverview||clusterPhyId={}||result={}||errMsg=get latest metric failed", clusterPhyId, latestMetricsResult);
return PaginationResult.buildFailure(latestMetricsResult, dto);
}
List<ClusterMirrorMakerOverviewVO> mirrorMakerOverviewVOList = this.convert2ClusterMirrorMakerOverviewVO(mirrorMakerList, connectClusterList, latestMetricsResult.getData());
List<ClusterMirrorMakerOverviewVO> mirrorMakerVOList = this.completeClusterInfo(mirrorMakerOverviewVOList);
PaginationResult<ClusterMirrorMakerOverviewVO> voPaginationResult = this.pagingMirrorMakerInLocal(mirrorMakerVOList, dto);
if (voPaginationResult.failed()) {
LOGGER.error("method=ClusterMirrorMakerOverviewVO||clusterPhyId={}||result={}||errMsg=pagination in local failed", clusterPhyId, voPaginationResult);
return PaginationResult.buildFailure(voPaginationResult, dto);
}
// Query historical metrics
Result<List<MetricMultiLinesVO>> lineMetricsResult = mirrorMakerMetricService.listMirrorMakerClusterMetricsFromES(
clusterPhyId,
this.buildMetricsConnectorsDTO(
voPaginationResult.getData().getBizData().stream().map(elem -> new ClusterConnectorDTO(elem.getConnectClusterId(), elem.getConnectorName())).collect(Collectors.toList()),
dto.getMetricLines()
));
return PaginationResult.buildSuc(
this.supplyData2ClusterMirrorMakerOverviewVOList(
voPaginationResult.getData().getBizData(),
lineMetricsResult.getData()
),
voPaginationResult
);
}
@Override
public Result<MirrorMakerBaseStateVO> getMirrorMakerState(Long connectClusterId, String connectName) {
// MM2 task
ConnectorPO connectorPO = connectorService.getConnectorFromDB(connectClusterId, connectName);
if (connectorPO == null){
return Result.buildFrom(ResultStatus.NOT_EXIST);
}
List<WorkerConnector> workerConnectorList = workerConnectorService.listFromDB(connectClusterId).stream()
.filter(workerConnector -> workerConnector.getConnectorName().equals(connectorPO.getConnectorName())
|| (!StringUtils.isBlank(connectorPO.getCheckpointConnectorName()) && workerConnector.getConnectorName().equals(connectorPO.getCheckpointConnectorName()))
|| (!StringUtils.isBlank(connectorPO.getHeartbeatConnectorName()) && workerConnector.getConnectorName().equals(connectorPO.getHeartbeatConnectorName())))
.collect(Collectors.toList());
MirrorMakerBaseStateVO mirrorMakerBaseStateVO = new MirrorMakerBaseStateVO();
mirrorMakerBaseStateVO.setTotalTaskCount(workerConnectorList.size());
mirrorMakerBaseStateVO.setAliveTaskCount(workerConnectorList.stream().filter(elem -> elem.getState().equals(RUNNING.name())).collect(Collectors.toList()).size());
mirrorMakerBaseStateVO.setWorkerCount(workerConnectorList.stream().collect(Collectors.groupingBy(WorkerConnector::getWorkerId)).size());
return Result.buildSuc(mirrorMakerBaseStateVO);
}
@Override
public Result<Map<String, List<KCTaskOverviewVO>>> getTaskOverview(Long connectClusterId, String connectorName) {
ConnectorPO connectorPO = connectorService.getConnectorFromDB(connectClusterId, connectorName);
if (connectorPO == null){
return Result.buildFrom(ResultStatus.NOT_EXIST);
}
Map<String, List<KCTaskOverviewVO>> listMap = new HashMap<>();
List<WorkerConnector> workerConnectorList = workerConnectorService.listFromDB(connectClusterId);
if (workerConnectorList.isEmpty()){
return Result.buildSuc(listMap);
}
workerConnectorList.forEach(workerConnector -> {
if (workerConnector.getConnectorName().equals(connectorPO.getConnectorName())){
listMap.putIfAbsent(KafkaConnectConstant.MIRROR_MAKER_SOURCE_CONNECTOR_TYPE, new ArrayList<>());
listMap.get(MIRROR_MAKER_SOURCE_CONNECTOR_TYPE).add(ConvertUtil.obj2Obj(workerConnector, KCTaskOverviewVO.class));
} else if (workerConnector.getConnectorName().equals(connectorPO.getCheckpointConnectorName())) {
listMap.putIfAbsent(KafkaConnectConstant.MIRROR_MAKER_CHECKPOINT_CONNECTOR_TYPE, new ArrayList<>());
listMap.get(MIRROR_MAKER_CHECKPOINT_CONNECTOR_TYPE).add(ConvertUtil.obj2Obj(workerConnector, KCTaskOverviewVO.class));
} else if (workerConnector.getConnectorName().equals(connectorPO.getHeartbeatConnectorName())) {
listMap.putIfAbsent(KafkaConnectConstant.MIRROR_MAKER_HEARTBEAT_CONNECTOR_TYPE, new ArrayList<>());
listMap.get(MIRROR_MAKER_HEARTBEAT_CONNECTOR_TYPE).add(ConvertUtil.obj2Obj(workerConnector, KCTaskOverviewVO.class));
}
});
return Result.buildSuc(listMap);
}
@Override
public Result<List<Properties>> getMM2Configs(Long connectClusterId, String connectorName) {
ConnectorPO connectorPO = connectorService.getConnectorFromDB(connectClusterId, connectorName);
if (connectorPO == null){
return Result.buildFrom(ResultStatus.NOT_EXIST);
}
List<Properties> propList = new ArrayList<>();
// source
Result<KSConnectorInfo> connectorResult = connectorService.getConnectorInfoFromCluster(connectClusterId, connectorPO.getConnectorName());
if (connectorResult.failed()) {
return Result.buildFromIgnoreData(connectorResult);
}
Properties props = new Properties();
props.putAll(connectorResult.getData().getConfig());
propList.add(props);
// checkpoint
if (!ValidateUtils.isBlank(connectorPO.getCheckpointConnectorName())) {
connectorResult = connectorService.getConnectorInfoFromCluster(connectClusterId, connectorPO.getCheckpointConnectorName());
if (connectorResult.failed()) {
return Result.buildFromIgnoreData(connectorResult);
}
props = new Properties();
props.putAll(connectorResult.getData().getConfig());
propList.add(props);
}
// heartbeat
if (!ValidateUtils.isBlank(connectorPO.getHeartbeatConnectorName())) {
connectorResult = connectorService.getConnectorInfoFromCluster(connectClusterId, connectorPO.getHeartbeatConnectorName());
if (connectorResult.failed()) {
return Result.buildFromIgnoreData(connectorResult);
}
props = new Properties();
props.putAll(connectorResult.getData().getConfig());
propList.add(props);
}
return Result.buildSuc(propList);
}
@Override
public Result<List<ConnectConfigInfosVO>> validateConnectors(MirrorMakerCreateDTO dto) {
List<ConnectConfigInfosVO> voList = new ArrayList<>();
Result<ConnectConfigInfos> infoResult = pluginService.validateConfig(dto.getConnectClusterId(), dto.getSuitableConfig());
if (infoResult.failed()) {
return Result.buildFromIgnoreData(infoResult);
}
voList.add(ConvertUtil.obj2Obj(infoResult.getData(), ConnectConfigInfosVO.class));
if (dto.getHeartbeatConnectorConfigs() != null) {
infoResult = pluginService.validateConfig(dto.getConnectClusterId(), dto.getHeartbeatConnectorConfigs());
if (infoResult.failed()) {
return Result.buildFromIgnoreData(infoResult);
}
voList.add(ConvertUtil.obj2Obj(infoResult.getData(), ConnectConfigInfosVO.class));
}
if (dto.getCheckpointConnectorConfigs() != null) {
infoResult = pluginService.validateConfig(dto.getConnectClusterId(), dto.getCheckpointConnectorConfigs());
if (infoResult.failed()) {
return Result.buildFromIgnoreData(infoResult);
}
voList.add(ConvertUtil.obj2Obj(infoResult.getData(), ConnectConfigInfosVO.class));
}
return Result.buildSuc(voList);
}
/**************************************************** private method ****************************************************/
private MetricsMirrorMakersDTO buildMetricsConnectorsDTO(List<ClusterConnectorDTO> connectorDTOList, MetricDTO metricDTO) {
MetricsMirrorMakersDTO dto = ConvertUtil.obj2Obj(metricDTO, MetricsMirrorMakersDTO.class);
dto.setConnectorNameList(connectorDTOList == null? new ArrayList<>(): connectorDTOList);
return dto;
}
public Result<Void> checkCreateMirrorMakerParamAndUnifyData(MirrorMakerCreateDTO dto) {
ClusterPhy sourceClusterPhy = clusterPhyService.getClusterByCluster(dto.getSourceKafkaClusterId());
if (sourceClusterPhy == null) {
return Result.buildFromRSAndMsg(ResultStatus.CLUSTER_NOT_EXIST, MsgConstant.getClusterPhyNotExist(dto.getSourceKafkaClusterId()));
}
ConnectCluster connectCluster = connectClusterService.getById(dto.getConnectClusterId());
if (connectCluster == null) {
return Result.buildFromRSAndMsg(ResultStatus.CLUSTER_NOT_EXIST, MsgConstant.getConnectClusterNotExist(dto.getConnectClusterId()));
}
ClusterPhy targetClusterPhy = clusterPhyService.getClusterByCluster(connectCluster.getKafkaClusterPhyId());
if (targetClusterPhy == null) {
return Result.buildFromRSAndMsg(ResultStatus.CLUSTER_NOT_EXIST, MsgConstant.getClusterPhyNotExist(connectCluster.getKafkaClusterPhyId()));
}
if (!dto.getSuitableConfig().containsKey(CONNECTOR_CLASS_FILED_NAME)) {
return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, "SourceConnector is missing connector.class");
}
if (!MIRROR_MAKER_SOURCE_CONNECTOR_TYPE.equals(dto.getSuitableConfig().getProperty(CONNECTOR_CLASS_FILED_NAME))) {
return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, "SourceConnector has an incorrect connector.class type");
}
if (dto.getCheckpointConnectorConfigs() != null) {
if (!dto.getCheckpointConnectorConfigs().containsKey(CONNECTOR_CLASS_FILED_NAME)) {
return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, "CheckpointConnector is missing connector.class");
}
if (!MIRROR_MAKER_CHECKPOINT_CONNECTOR_TYPE.equals(dto.getCheckpointConnectorConfigs().getProperty(CONNECTOR_CLASS_FILED_NAME))) {
return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, "Checkpoint has an incorrect connector.class type");
}
}
if (dto.getHeartbeatConnectorConfigs() != null) {
if (!dto.getHeartbeatConnectorConfigs().containsKey(CONNECTOR_CLASS_FILED_NAME)) {
return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, "HeartbeatConnector is missing connector.class");
}
if (!MIRROR_MAKER_HEARTBEAT_CONNECTOR_TYPE.equals(dto.getHeartbeatConnectorConfigs().getProperty(CONNECTOR_CLASS_FILED_NAME))) {
return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, "Heartbeat has an incorrect connector.class type");
}
}
dto.unifyData(
sourceClusterPhy.getId(), sourceClusterPhy.getBootstrapServers(), ConvertUtil.str2ObjByJson(sourceClusterPhy.getClientProperties(), Properties.class),
targetClusterPhy.getId(), targetClusterPhy.getBootstrapServers(), ConvertUtil.str2ObjByJson(targetClusterPhy.getClientProperties(), Properties.class)
);
return Result.buildSuc();
}
private MirrorMakerStateVO convert2MirrorMakerStateVO(List<ConnectorPO> connectorPOList, List<WorkerConnector> workerConnectorList, List<ConnectWorker> workerList) {
MirrorMakerStateVO mirrorMakerStateVO = new MirrorMakerStateVO();
List<ConnectorPO> sourceSet = connectorPOList.stream().filter(elem -> elem.getConnectorClassName().equals(MIRROR_MAKER_SOURCE_CONNECTOR_TYPE)).collect(Collectors.toList());
mirrorMakerStateVO.setMirrorMakerCount(sourceSet.size());
Set<Long> connectClusterIdSet = sourceSet.stream().map(ConnectorPO::getConnectClusterId).collect(Collectors.toSet());
mirrorMakerStateVO.setWorkerCount(workerList.stream().filter(elem -> connectClusterIdSet.contains(elem.getConnectClusterId())).collect(Collectors.toList()).size());
List<ConnectorPO> mirrorMakerConnectorList = new ArrayList<>();
mirrorMakerConnectorList.addAll(sourceSet);
mirrorMakerConnectorList.addAll(connectorPOList.stream().filter(elem -> elem.getConnectorClassName().equals(MIRROR_MAKER_CHECKPOINT_CONNECTOR_TYPE)).collect(Collectors.toList()));
mirrorMakerConnectorList.addAll(connectorPOList.stream().filter(elem -> elem.getConnectorClassName().equals(MIRROR_MAKER_HEARTBEAT_CONNECTOR_TYPE)).collect(Collectors.toList()));
mirrorMakerStateVO.setTotalConnectorCount(mirrorMakerConnectorList.size());
mirrorMakerStateVO.setAliveConnectorCount(mirrorMakerConnectorList.stream().filter(elem -> elem.getState().equals(RUNNING.name())).collect(Collectors.toList()).size());
Set<String> connectorNameSet = mirrorMakerConnectorList.stream().map(elem -> elem.getConnectorName()).collect(Collectors.toSet());
List<WorkerConnector> taskList = workerConnectorList.stream().filter(elem -> connectorNameSet.contains(elem.getConnectorName())).collect(Collectors.toList());
mirrorMakerStateVO.setTotalTaskCount(taskList.size());
mirrorMakerStateVO.setAliveTaskCount(taskList.stream().filter(elem -> elem.getState().equals(RUNNING.name())).collect(Collectors.toList()).size());
return mirrorMakerStateVO;
}
private List<ClusterMirrorMakerOverviewVO> convert2ClusterMirrorMakerOverviewVO(List<ConnectorPO> mirrorMakerList, List<ConnectCluster> connectClusterList, List<MirrorMakerMetrics> latestMetric) {
List<ClusterMirrorMakerOverviewVO> clusterMirrorMakerOverviewVOList = new ArrayList<>();
Map<String, MirrorMakerMetrics> metricsMap = latestMetric.stream().collect(Collectors.toMap(elem -> elem.getConnectClusterId() + "@" + elem.getConnectorName(), Function.identity()));
Map<Long, ConnectCluster> connectClusterMap = connectClusterList.stream().collect(Collectors.toMap(elem -> elem.getId(), Function.identity()));
for (ConnectorPO mirrorMaker : mirrorMakerList) {
ClusterMirrorMakerOverviewVO clusterMirrorMakerOverviewVO = new ClusterMirrorMakerOverviewVO();
clusterMirrorMakerOverviewVO.setConnectClusterId(mirrorMaker.getConnectClusterId());
clusterMirrorMakerOverviewVO.setConnectClusterName(connectClusterMap.get(mirrorMaker.getConnectClusterId()).getName());
clusterMirrorMakerOverviewVO.setConnectorName(mirrorMaker.getConnectorName());
clusterMirrorMakerOverviewVO.setState(mirrorMaker.getState());
clusterMirrorMakerOverviewVO.setCheckpointConnector(mirrorMaker.getCheckpointConnectorName());
clusterMirrorMakerOverviewVO.setTaskCount(mirrorMaker.getTaskCount());
clusterMirrorMakerOverviewVO.setHeartbeatConnector(mirrorMaker.getHeartbeatConnectorName());
clusterMirrorMakerOverviewVO.setLatestMetrics(metricsMap.getOrDefault(mirrorMaker.getConnectClusterId() + "@" + mirrorMaker.getConnectorName(), new MirrorMakerMetrics(mirrorMaker.getConnectClusterId(), mirrorMaker.getConnectorName())));
clusterMirrorMakerOverviewVOList.add(clusterMirrorMakerOverviewVO);
}
return clusterMirrorMakerOverviewVOList;
}
PaginationResult<ClusterMirrorMakerOverviewVO> pagingMirrorMakerInLocal(List<ClusterMirrorMakerOverviewVO> mirrorMakerOverviewVOList, ClusterMirrorMakersOverviewDTO dto) {
List<ClusterMirrorMakerOverviewVO> mirrorMakerVOList = PaginationUtil.pageByFuzzyFilter(mirrorMakerOverviewVOList, dto.getSearchKeywords(), Arrays.asList("connectorName"));
// Sort
if (!dto.getLatestMetricNames().isEmpty()) {
PaginationMetricsUtil.sortMetrics(mirrorMakerVOList, "latestMetrics", dto.getSortMetricNameList(), "connectorName", dto.getSortType());
} else {
PaginationUtil.pageBySort(mirrorMakerVOList, dto.getSortField(), dto.getSortType(), "connectorName", dto.getSortType());
}
// Paginate
return PaginationUtil.pageBySubData(mirrorMakerVOList, dto);
}
public static List<ClusterMirrorMakerOverviewVO> supplyData2ClusterMirrorMakerOverviewVOList(List<ClusterMirrorMakerOverviewVO> voList,
List<MetricMultiLinesVO> metricLineVOList) {
Map<String, List<MetricLineVO>> metricLineMap = new HashMap<>();
if (metricLineVOList != null) {
for (MetricMultiLinesVO metricMultiLinesVO : metricLineVOList) {
metricMultiLinesVO.getMetricLines()
.forEach(metricLineVO -> {
String key = metricLineVO.getName();
List<MetricLineVO> metricLineVOS = metricLineMap.getOrDefault(key, new ArrayList<>());
metricLineVOS.add(metricLineVO);
metricLineMap.put(key, metricLineVOS);
});
}
}
voList.forEach(elem -> elem.setMetricLines(metricLineMap.get(elem.getConnectClusterId() + "#" + elem.getConnectorName())));
return voList;
}
private List<ClusterMirrorMakerOverviewVO> completeClusterInfo(List<ClusterMirrorMakerOverviewVO> mirrorMakerVOList) {
Map<String, KSConnectorInfo> connectorInfoMap = new ConcurrentHashMap<>();
for (ClusterMirrorMakerOverviewVO mirrorMakerVO : mirrorMakerVOList) {
ApiCallThreadPoolService.runnableTask(String.format("method=completeClusterInfo||connectClusterId=%d||connectorName=%s||getMirrorMakerInfo", mirrorMakerVO.getConnectClusterId(), mirrorMakerVO.getConnectorName()),
3000
, () -> {
Result<KSConnectorInfo> connectorInfoRet = connectorService.getConnectorInfoFromCluster(mirrorMakerVO.getConnectClusterId(), mirrorMakerVO.getConnectorName());
if (connectorInfoRet.hasData()) {
connectorInfoMap.put(mirrorMakerVO.getConnectClusterId() + mirrorMakerVO.getConnectorName(), connectorInfoRet.getData());
}
});
}
ApiCallThreadPoolService.waitResult();
List<ClusterMirrorMakerOverviewVO> newMirrorMakerVOList = new ArrayList<>();
for (ClusterMirrorMakerOverviewVO mirrorMakerVO : mirrorMakerVOList) {
KSConnectorInfo connectorInfo = connectorInfoMap.get(mirrorMakerVO.getConnectClusterId() + mirrorMakerVO.getConnectorName());
if (connectorInfo == null) {
continue;
}
String sourceClusterAlias = connectorInfo.getConfig().get(MIRROR_MAKER_SOURCE_CLUSTER_ALIAS_FIELD_NAME);
String targetClusterAlias = connectorInfo.getConfig().get(MIRROR_MAKER_TARGET_CLUSTER_ALIAS_FIELD_NAME);
// Default to the cluster alias first
mirrorMakerVO.setSourceKafkaClusterName(sourceClusterAlias);
mirrorMakerVO.setDestKafkaClusterName(targetClusterAlias);
if (!ValidateUtils.isBlank(sourceClusterAlias) && CommonUtils.isNumeric(sourceClusterAlias)) {
ClusterPhy clusterPhy = LoadedClusterPhyCache.getByPhyId(Long.valueOf(sourceClusterAlias));
if (clusterPhy != null) {
mirrorMakerVO.setSourceKafkaClusterId(clusterPhy.getId());
mirrorMakerVO.setSourceKafkaClusterName(clusterPhy.getName());
}
}
if (!ValidateUtils.isBlank(targetClusterAlias) && CommonUtils.isNumeric(targetClusterAlias)) {
ClusterPhy clusterPhy = LoadedClusterPhyCache.getByPhyId(Long.valueOf(targetClusterAlias));
if (clusterPhy != null) {
mirrorMakerVO.setDestKafkaClusterId(clusterPhy.getId());
mirrorMakerVO.setDestKafkaClusterName(clusterPhy.getName());
}
}
newMirrorMakerVOList.add(mirrorMakerVO);
}
return newMirrorMakerVOList;
}
}
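The aggregation of the checkpoint and heartbeat creation results above can be sketched in isolation; note that both results must be compared against each other (not the checkpoint result twice) to distinguish the "both failed" case from "only one failed". This is a minimal standalone sketch: `SimpleResult` and `aggregate` are hypothetical stand-ins for the project's `Result` type and branch logic, not the actual API.

```java
// Standalone sketch of the checkpoint/heartbeat result aggregation.
// "SimpleResult" is a hypothetical stand-in for the project's Result type.
public class MirrorMakerResultAggregation {

    record SimpleResult(boolean failed, String message) {
        boolean successful() { return !failed; }
    }

    // Compare the checkpoint AND heartbeat results; comparing one result
    // with itself would collapse the four cases into two.
    static String aggregate(SimpleResult checkpoint, SimpleResult heartbeat) {
        if (checkpoint.successful() && heartbeat.successful()) {
            return "ok";
        } else if (checkpoint.failed() && heartbeat.failed()) {
            return "checkpoint & heartbeat failed: "
                    + checkpoint.message() + " / " + heartbeat.message();
        } else if (checkpoint.failed()) {
            return "checkpoint failed: " + checkpoint.message();
        } else {
            return "heartbeat failed: " + heartbeat.message();
        }
    }

    public static void main(String[] args) {
        SimpleResult ok  = new SimpleResult(false, "");
        SimpleResult bad = new SimpleResult(true, "timeout");
        System.out.println(aggregate(ok, ok));    // ok
        System.out.println(aggregate(bad, ok));   // checkpoint failed: timeout
        System.out.println(aggregate(bad, bad));  // checkpoint & heartbeat failed: timeout / timeout
    }
}
```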


@@ -1,11 +1,15 @@
 package com.xiaojukeji.know.streaming.km.biz.group;
 
+import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterGroupSummaryDTO;
+import com.xiaojukeji.know.streaming.km.common.bean.dto.group.GroupOffsetDeleteDTO;
 import com.xiaojukeji.know.streaming.km.common.bean.dto.group.GroupOffsetResetDTO;
 import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationBaseDTO;
 import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationSortDTO;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.topic.TopicPartitionKS;
+import com.xiaojukeji.know.streaming.km.common.bean.po.group.GroupMemberPO;
+import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupOverviewVO;
 import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupTopicConsumedDetailVO;
 import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupTopicOverviewVO;
 import com.xiaojukeji.know.streaming.km.common.exception.AdminOperateException;
@@ -22,6 +26,10 @@ public interface GroupManager {
                                                      String searchGroupKeyword,
                                                      PaginationBaseDTO dto);
 
+    PaginationResult<GroupTopicOverviewVO> pagingGroupTopicMembers(Long clusterPhyId, String groupName, PaginationBaseDTO dto) throws Exception;
+
+    PaginationResult<GroupOverviewVO> pagingClusterGroupsOverview(Long clusterPhyId, ClusterGroupSummaryDTO dto);
+
     PaginationResult<GroupTopicConsumedDetailVO> pagingGroupTopicConsumedMetrics(Long clusterPhyId,
                                                                                  String topicName,
                                                                                  String groupName,
@@ -31,4 +39,10 @@ public interface GroupManager {
     Result<Set<TopicPartitionKS>> listClusterPhyGroupPartitions(Long clusterPhyId, String groupName, Long startTime, Long endTime);
     Result<Void> resetGroupOffsets(GroupOffsetResetDTO dto, String operator) throws Exception;
+    Result<Void> deleteGroupOffsets(GroupOffsetDeleteDTO dto, String operator) throws Exception;
+
+    @Deprecated
+    List<GroupTopicOverviewVO> getGroupTopicOverviewVOList(Long clusterPhyId, List<GroupMemberPO> groupMemberPOList);
+
+    List<GroupTopicOverviewVO> getGroupTopicOverviewVOList(Long clusterPhyId, List<GroupMemberPO> groupMemberPOList, Integer timeoutUnitMs);
 }


@@ -3,49 +3,71 @@ package com.xiaojukeji.know.streaming.km.biz.group.impl;
 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
 import com.xiaojukeji.know.streaming.km.biz.group.GroupManager;
+import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterGroupSummaryDTO;
+import com.xiaojukeji.know.streaming.km.common.bean.dto.group.GroupOffsetDeleteDTO;
 import com.xiaojukeji.know.streaming.km.common.bean.dto.group.GroupOffsetResetDTO;
 import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationBaseDTO;
 import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationSortDTO;
 import com.xiaojukeji.know.streaming.km.common.bean.dto.partition.PartitionOffsetDTO;
+import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
+import com.xiaojukeji.know.streaming.km.common.bean.entity.group.Group;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.group.GroupTopic;
+import com.xiaojukeji.know.streaming.km.common.bean.entity.group.GroupTopicMember;
+import com.xiaojukeji.know.streaming.km.common.bean.entity.kafka.KSGroupDescription;
+import com.xiaojukeji.know.streaming.km.common.bean.entity.kafka.KSMemberConsumerAssignment;
+import com.xiaojukeji.know.streaming.km.common.bean.entity.kafka.KSMemberDescription;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.GroupMetrics;
+import com.xiaojukeji.know.streaming.km.common.bean.entity.offset.KSOffsetSpec;
+import com.xiaojukeji.know.streaming.km.common.bean.entity.param.group.DeleteGroupParam;
+import com.xiaojukeji.know.streaming.km.common.bean.entity.param.group.DeleteGroupTopicParam;
+import com.xiaojukeji.know.streaming.km.common.bean.entity.param.group.DeleteGroupTopicPartitionParam;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.topic.Topic;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.topic.TopicPartitionKS;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.result.ResultStatus;
 import com.xiaojukeji.know.streaming.km.common.bean.po.group.GroupMemberPO;
+import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupOverviewVO;
 import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupTopicConsumedDetailVO;
 import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupTopicOverviewVO;
 import com.xiaojukeji.know.streaming.km.common.constant.MsgConstant;
+import com.xiaojukeji.know.streaming.km.common.constant.PaginationConstant;
+import com.xiaojukeji.know.streaming.km.common.converter.GroupConverter;
 import com.xiaojukeji.know.streaming.km.common.enums.AggTypeEnum;
-import com.xiaojukeji.know.streaming.km.common.enums.GroupOffsetResetEnum;
+import com.xiaojukeji.know.streaming.km.common.enums.OffsetTypeEnum;
+import com.xiaojukeji.know.streaming.km.common.enums.SortTypeEnum;
+import com.xiaojukeji.know.streaming.km.common.enums.group.DeleteGroupTypeEnum;
+import com.xiaojukeji.know.streaming.km.common.enums.group.GroupStateEnum;
 import com.xiaojukeji.know.streaming.km.common.exception.AdminOperateException;
 import com.xiaojukeji.know.streaming.km.common.exception.NotExistException;
 import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.PaginationMetricsUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.PaginationUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
+import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
+import com.xiaojukeji.know.streaming.km.core.service.config.KSConfigUtils;
 import com.xiaojukeji.know.streaming.km.core.service.group.GroupMetricService;
 import com.xiaojukeji.know.streaming.km.core.service.group.GroupService;
+import com.xiaojukeji.know.streaming.km.core.service.group.OpGroupService;
 import com.xiaojukeji.know.streaming.km.core.service.partition.PartitionService;
 import com.xiaojukeji.know.streaming.km.core.service.topic.TopicService;
-import com.xiaojukeji.know.streaming.km.core.service.version.metrics.GroupMetricVersionItems;
+import com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.GroupMetricVersionItems;
+import com.xiaojukeji.know.streaming.km.core.utils.ApiCallThreadPoolService;
 import com.xiaojukeji.know.streaming.km.persistence.es.dao.GroupMetricESDAO;
-import org.apache.kafka.clients.admin.ConsumerGroupDescription;
-import org.apache.kafka.clients.admin.MemberDescription;
-import org.apache.kafka.clients.admin.OffsetSpec;
 import org.apache.kafka.common.ConsumerGroupState;
 import org.apache.kafka.common.TopicPartition;
 import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.stereotype.Component;
 
 import java.util.*;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.Collectors; import java.util.stream.Collectors;
import static com.xiaojukeji.know.streaming.km.common.enums.group.GroupTypeEnum.CONNECT_CLUSTER_PROTOCOL_TYPE;
@Component
public class GroupManagerImpl implements GroupManager {
    private static final ILog LOGGER = LogFactory.getLog(GroupManagerImpl.class);

    @Autowired
    private TopicService topicService;
@@ -53,6 +75,9 @@ public class GroupManagerImpl implements GroupManager {
    @Autowired
    private GroupService groupService;

    @Autowired
    private OpGroupService opGroupService;

    @Autowired
    private PartitionService partitionService;

@@ -62,6 +87,12 @@ public class GroupManagerImpl implements GroupManager {
    @Autowired
    private GroupMetricESDAO groupMetricESDAO;

    @Autowired
    private ClusterPhyService clusterPhyService;

    @Autowired
    private KSConfigUtils ksConfigUtils;
    @Override
    public PaginationResult<GroupTopicOverviewVO> pagingGroupMembers(Long clusterPhyId,
                                                                     String topicName,
@@ -69,41 +100,96 @@ public class GroupManagerImpl implements GroupManager {
                                                                     String searchTopicKeyword,
                                                                     String searchGroupKeyword,
                                                                     PaginationBaseDTO dto) {
        long startTimeUnitMs = System.currentTimeMillis();

        PaginationResult<GroupMemberPO> paginationResult = groupService.pagingGroupMembers(clusterPhyId, topicName, groupName, searchTopicKeyword, searchGroupKeyword, dto);
        if (paginationResult.failed()) {
            return PaginationResult.buildFailure(paginationResult, dto);
        }

        if (!paginationResult.hasData()) {
            return PaginationResult.buildSuc(new ArrayList<>(), paginationResult);
        }

        List<GroupTopicOverviewVO> groupTopicVOList = this.getGroupTopicOverviewVOList(
                clusterPhyId,
                paginationResult.getData().getBizData(),
                ksConfigUtils.getApiCallLeftTimeUnitMs(System.currentTimeMillis() - startTimeUnitMs) // timeout
        );

        return PaginationResult.buildSuc(groupTopicVOList, paginationResult);
    }

    @Override
    public PaginationResult<GroupTopicOverviewVO> pagingGroupTopicMembers(Long clusterPhyId, String groupName, PaginationBaseDTO dto) throws Exception {
        long startTimeUnitMs = System.currentTimeMillis();

        ClusterPhy clusterPhy = clusterPhyService.getClusterByCluster(clusterPhyId);
        if (clusterPhy == null) {
            return PaginationResult.buildFailure(MsgConstant.getClusterPhyNotExist(clusterPhyId), dto);
        }

        Group group = groupService.getGroupFromKafka(clusterPhy, groupName);
        // return directly if there are no topic members
        if (group == null || ValidateUtils.isEmptyList(group.getTopicMembers())) {
            return PaginationResult.buildSuc(dto);
        }

        // sort
        List<GroupTopicMember> groupTopicMembers = PaginationUtil.pageBySort(group.getTopicMembers(), PaginationConstant.DEFAULT_GROUP_TOPIC_SORTED_FIELD, SortTypeEnum.DESC.getSortType());

        // paginate
        PaginationResult<GroupTopicMember> paginationResult = PaginationUtil.pageBySubData(groupTopicMembers, dto);

        List<GroupMemberPO> groupMemberPOList = paginationResult.getData().getBizData().stream().map(elem -> new GroupMemberPO(clusterPhyId, elem.getTopicName(), groupName, group.getState().getState(), elem.getMemberCount())).collect(Collectors.toList());

        return PaginationResult.buildSuc(
                this.getGroupTopicOverviewVOList(
                        clusterPhyId,
                        groupMemberPOList,
                        ksConfigUtils.getApiCallLeftTimeUnitMs(System.currentTimeMillis() - startTimeUnitMs) // timeout
                ),
                paginationResult
        );
    }
    @Override
    public PaginationResult<GroupOverviewVO> pagingClusterGroupsOverview(Long clusterPhyId, ClusterGroupSummaryDTO dto) {
        List<Group> groupList = groupService.listClusterGroups(clusterPhyId);

        // convert the type
        List<GroupOverviewVO> voList = groupList.stream().map(GroupConverter::convert2GroupOverviewVO).collect(Collectors.toList());

        // search by group name
        voList = PaginationUtil.pageByFuzzyFilter(voList, dto.getSearchGroupName(), Arrays.asList("name"));

        // search by topic
        if (!ValidateUtils.isBlank(dto.getSearchTopicName())) {
            voList = voList.stream().filter(elem -> {
                for (String topicName : elem.getTopicNameList()) {
                    if (topicName.contains(dto.getSearchTopicName())) {
                        return true;
                    }
                }
                return false;
            }).collect(Collectors.toList());
        }

        // paginate, then return
        return PaginationUtil.pageBySubData(voList, dto);
    }
    @Override
    public PaginationResult<GroupTopicConsumedDetailVO> pagingGroupTopicConsumedMetrics(Long clusterPhyId,
                                                                                        String topicName,
                                                                                        String groupName,
                                                                                        List<String> latestMetricNames,
                                                                                        PaginationSortDTO dto) throws NotExistException, AdminOperateException {
        ClusterPhy clusterPhy = clusterPhyService.getClusterByCluster(clusterPhyId);
        if (clusterPhy == null) {
            return PaginationResult.buildFailure(MsgConstant.getClusterPhyNotExist(clusterPhyId), dto);
        }

        // get the list of TopicPartitions consumed by the group
        Map<TopicPartition, Long> consumedOffsetMap = groupService.getGroupOffsetFromKafka(clusterPhyId, groupName);
        List<Integer> partitionList = consumedOffsetMap.keySet()
                .stream()
                .filter(elem -> elem.topic().equals(topicName))
@@ -112,13 +198,19 @@ public class GroupManagerImpl implements GroupManager {
        Collections.sort(partitionList);

        // get the group's current runtime information
        KSGroupDescription groupDescription = groupService.getGroupDescriptionFromKafka(clusterPhy, groupName);

        // convert the storage format
        Map<TopicPartition, KSMemberDescription> tpMemberMap = new HashMap<>();

        // if it is not a connect cluster
        if (!groupDescription.protocolType().equals(CONNECT_CLUSTER_PROTOCOL_TYPE)) {
            for (KSMemberDescription description : groupDescription.members()) {
                // for a consumer's description, the assignment is of type KSMemberConsumerAssignment
                KSMemberConsumerAssignment assignment = (KSMemberConsumerAssignment) description.assignment();
                for (TopicPartition tp : assignment.topicPartitions()) {
                    tpMemberMap.put(tp, description);
                }
            }
        }
@@ -135,11 +227,11 @@ public class GroupManagerImpl implements GroupManager {
            vo.setTopicName(topicName);
            vo.setPartitionId(groupMetrics.getPartitionId());

            KSMemberDescription ksMemberDescription = tpMemberMap.get(new TopicPartition(topicName, groupMetrics.getPartitionId()));
            if (ksMemberDescription != null) {
                vo.setMemberId(ksMemberDescription.consumerId());
                vo.setHost(ksMemberDescription.host());
                vo.setClientId(ksMemberDescription.clientId());
            }

            vo.setLatestMetrics(groupMetrics);
@@ -165,13 +257,18 @@ public class GroupManagerImpl implements GroupManager {
            return rv;
        }

        ClusterPhy clusterPhy = clusterPhyService.getClusterByCluster(dto.getClusterId());
        if (clusterPhy == null) {
            return Result.buildFromRSAndMsg(ResultStatus.CLUSTER_NOT_EXIST, MsgConstant.getClusterPhyNotExist(dto.getClusterId()));
        }

        KSGroupDescription description = groupService.getGroupDescriptionFromKafka(clusterPhy, dto.getGroupName());
        if (ConsumerGroupState.DEAD.equals(description.state()) && !dto.isCreateIfNotExist()) {
            return Result.buildFromRSAndMsg(ResultStatus.KAFKA_OPERATE_FAILED, "group不存在, 重置失败");
        }
        if (!ConsumerGroupState.EMPTY.equals(description.state()) && !ConsumerGroupState.DEAD.equals(description.state())) {
            return Result.buildFromRSAndMsg(ResultStatus.KAFKA_OPERATE_FAILED, String.format("group处于%s, 重置失败(仅Empty | Dead 情况可重置)", GroupStateEnum.getByRawState(description.state()).getState()));
        }

        // get the offsets
@@ -184,6 +281,111 @@ public class GroupManagerImpl implements GroupManager {
        return groupService.resetGroupOffsets(dto.getClusterId(), dto.getGroupName(), offsetMapResult.getData(), operator);
    }
    @Override
    public Result<Void> deleteGroupOffsets(GroupOffsetDeleteDTO dto, String operator) throws Exception {
        ClusterPhy clusterPhy = clusterPhyService.getClusterByCluster(dto.getClusterPhyId());
        if (clusterPhy == null) {
            return Result.buildFromRSAndMsg(ResultStatus.CLUSTER_NOT_EXIST, MsgConstant.getClusterPhyNotExist(dto.getClusterPhyId()));
        }

        // delete at the group level
        if (ValidateUtils.isBlank(dto.getGroupName())) {
            return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, "groupName不允许为空");
        }
        if (DeleteGroupTypeEnum.GROUP.getCode().equals(dto.getDeleteType())) {
            return opGroupService.deleteGroupOffset(
                    new DeleteGroupParam(dto.getClusterPhyId(), dto.getGroupName(), DeleteGroupTypeEnum.GROUP),
                    operator
            );
        }

        // delete at the topic level
        if (ValidateUtils.isBlank(dto.getTopicName())) {
            return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, "topicName不允许为空");
        }
        if (DeleteGroupTypeEnum.GROUP_TOPIC.getCode().equals(dto.getDeleteType())) {
            return opGroupService.deleteGroupTopicOffset(
                    new DeleteGroupTopicParam(dto.getClusterPhyId(), dto.getGroupName(), DeleteGroupTypeEnum.GROUP, dto.getTopicName()),
                    operator
            );
        }

        // delete at the partition level
        if (ValidateUtils.isNullOrLessThanZero(dto.getPartitionId())) {
            return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, "partitionId不允许为空或小于0");
        }
        if (DeleteGroupTypeEnum.GROUP_TOPIC_PARTITION.getCode().equals(dto.getDeleteType())) {
            return opGroupService.deleteGroupTopicPartitionOffset(
                    new DeleteGroupTopicPartitionParam(dto.getClusterPhyId(), dto.getGroupName(), DeleteGroupTypeEnum.GROUP, dto.getTopicName(), dto.getPartitionId()),
                    operator
            );
        }

        return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, "deleteType类型错误");
    }
    @Override
    public List<GroupTopicOverviewVO> getGroupTopicOverviewVOList(Long clusterPhyId, List<GroupMemberPO> groupMemberPOList) {
        // get the metrics
        Result<List<GroupMetrics>> metricsListResult = groupMetricService.listLatestMetricsAggByGroupTopicFromES(
                clusterPhyId,
                groupMemberPOList.stream().map(elem -> new GroupTopic(elem.getGroupName(), elem.getTopicName())).collect(Collectors.toList()),
                Arrays.asList(GroupMetricVersionItems.GROUP_METRIC_LAG),
                AggTypeEnum.MAX
        );
        if (metricsListResult.failed()) {
            // if the query fails, log the error but still return the data already available
            LOGGER.error("method=completeMetricData||clusterPhyId={}||result={}||errMsg=search es failed", clusterPhyId, metricsListResult);
        }

        return this.convert2GroupTopicOverviewVOList(groupMemberPOList, metricsListResult.getData());
    }
    @Override
    public List<GroupTopicOverviewVO> getGroupTopicOverviewVOList(Long clusterPhyId, List<GroupMemberPO> poList, Integer timeoutUnitMs) {
        Set<String> requestedGroupSet = new HashSet<>();

        // get the metrics
        Map<String, Map<String, Float>> groupTopicLagMap = new ConcurrentHashMap<>();
        poList.forEach(elem -> {
            if (requestedGroupSet.contains(elem.getGroupName())) {
                // this group has already been handled
                return;
            }

            requestedGroupSet.add(elem.getGroupName());
            ApiCallThreadPoolService.runnableTask(
                    String.format("clusterPhyId=%d||groupName=%s||msg=getGroupTopicLag", clusterPhyId, elem.getGroupName()),
                    timeoutUnitMs,
                    () -> {
                        Result<List<GroupMetrics>> listResult = groupMetricService.collectGroupMetricsFromKafka(clusterPhyId, elem.getGroupName(), GroupMetricVersionItems.GROUP_METRIC_LAG);
                        if (listResult == null || !listResult.hasData()) {
                            return;
                        }

                        Map<String, Float> lagMetricMap = new HashMap<>();
                        listResult.getData().forEach(item -> {
                            Float newLag = item.getMetric(GroupMetricVersionItems.GROUP_METRIC_LAG);
                            if (newLag == null) {
                                return;
                            }

                            Float oldLag = lagMetricMap.getOrDefault(item.getTopic(), newLag);
                            lagMetricMap.put(item.getTopic(), Math.max(oldLag, newLag));
                        });

                        groupTopicLagMap.put(elem.getGroupName(), lagMetricMap);
                    }
            );
        });

        ApiCallThreadPoolService.waitResult();

        return this.convert2GroupTopicOverviewVOList(poList, groupTopicLagMap);
    }
    /**************************************************** private method ****************************************************/

@@ -198,12 +400,12 @@ public class GroupManagerImpl implements GroupManager {
            return Result.buildFromRSAndMsg(ResultStatus.NOT_EXIST, MsgConstant.getTopicNotExist(dto.getClusterId(), dto.getTopicName()));
        }

        if (OffsetTypeEnum.PRECISE_OFFSET.getResetType() == dto.getResetType()
                && ValidateUtils.isEmptyList(dto.getOffsetList())) {
            return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, "参数错误,指定offset重置需传offset信息");
        }

        if (OffsetTypeEnum.PRECISE_TIMESTAMP.getResetType() == dto.getResetType()
                && ValidateUtils.isNull(dto.getTimestamp())) {
            return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, "参数错误,指定时间重置需传时间信息");
        }
@@ -212,7 +414,7 @@ public class GroupManagerImpl implements GroupManager {
    }

    private Result<Map<TopicPartition, Long>> getPartitionOffset(GroupOffsetResetDTO dto) {
        if (OffsetTypeEnum.PRECISE_OFFSET.getResetType() == dto.getResetType()) {
            return Result.buildSuc(dto.getOffsetList().stream().collect(Collectors.toMap(
                    elem -> new TopicPartition(dto.getTopicName(), elem.getPartitionId()),
                    PartitionOffsetDTO::getOffset,
@@ -220,16 +422,16 @@ public class GroupManagerImpl implements GroupManager {
            )));
        }

        KSOffsetSpec offsetSpec = null;
        if (OffsetTypeEnum.PRECISE_TIMESTAMP.getResetType() == dto.getResetType()) {
            offsetSpec = KSOffsetSpec.forTimestamp(dto.getTimestamp());
        } else if (OffsetTypeEnum.EARLIEST.getResetType() == dto.getResetType()) {
            offsetSpec = KSOffsetSpec.earliest();
        } else {
            offsetSpec = KSOffsetSpec.latest();
        }

        return partitionService.getPartitionOffsetFromKafka(dto.getClusterId(), dto.getTopicName(), offsetSpec);
    }
    private List<GroupTopicOverviewVO> convert2GroupTopicOverviewVOList(List<GroupMemberPO> poList, List<GroupMetrics> metricsList) {
@@ -237,13 +439,22 @@ public class GroupManagerImpl implements GroupManager {
            metricsList = new ArrayList<>();
        }

        // <GroupName, <TopicName, lag>>
        Map<String, Map<String, Float>> metricsMap = new HashMap<>();
        metricsList.stream().forEach(elem -> {
            Float metricValue = elem.getMetrics().get(GroupMetricVersionItems.GROUP_METRIC_LAG);
            if (metricValue == null) {
                return;
            }

            metricsMap.putIfAbsent(elem.getGroup(), new HashMap<>());
            metricsMap.get(elem.getGroup()).put(elem.getTopic(), metricValue);
        });

        return this.convert2GroupTopicOverviewVOList(poList, metricsMap);
    }

    private List<GroupTopicOverviewVO> convert2GroupTopicOverviewVOList(List<GroupMemberPO> poList, Map<String, Map<String, Float>> metricsMap) {
        List<GroupTopicOverviewVO> voList = new ArrayList<>();
        for (GroupMemberPO po: poList) {
            GroupTopicOverviewVO vo = ConvertUtil.obj2Obj(po, GroupTopicOverviewVO.class);
@@ -251,9 +462,9 @@ public class GroupManagerImpl implements GroupManager {
                continue;
            }

            Float metricValue = metricsMap.getOrDefault(po.getGroupName(), new HashMap<>()).get(po.getTopicName());
            if (metricValue != null) {
                vo.setMaxLag(ConvertUtil.Float2Long(metricValue));
            }

            voList.add(vo);
@@ -271,15 +482,11 @@ public class GroupManagerImpl implements GroupManager {
        // get the Group metrics
        Result<List<GroupMetrics>> groupMetricsResult = groupMetricService.collectGroupMetricsFromKafka(clusterPhyId, groupName, latestMetricNames == null ? Arrays.asList() : latestMetricNames);

        // convert the Group metrics
        List<GroupMetrics> esGroupMetricsList = groupMetricsResult.hasData() ? groupMetricsResult.getData().stream().filter(elem -> topicName.equals(elem.getTopic())).collect(Collectors.toList()) : new ArrayList<>();
        Map<Integer, GroupMetrics> esMetricsMap = new HashMap<>();
        for (GroupMetrics groupMetrics: esGroupMetricsList) {
            esMetricsMap.put(groupMetrics.getPartitionId(), groupMetrics);
@@ -295,5 +502,4 @@ public class GroupManagerImpl implements GroupManager {
                dto
        );
    }
}
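The lag bookkeeping above appears twice: both the Kafka-side collector and `convert2GroupTopicOverviewVOList` keep, per topic, the maximum lag seen across partitions, skipping partitions with no lag metric. A minimal standalone sketch of that merge pattern (class name and sample data are hypothetical, not from the project):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MaxLagSketch {
    /** One (topic, lag) sample; a null lag models a partition with no metric. */
    public record Sample(String topic, Float lag) {}

    /** Keep the maximum lag per topic, mirroring the getOrDefault + Math.max pattern above. */
    public static Map<String, Float> maxLagByTopic(List<Sample> samples) {
        Map<String, Float> lagMap = new HashMap<>();
        for (Sample s : samples) {
            if (s.lag() == null) {
                continue; // skip partitions without a lag metric, as the forEach above does
            }
            Float old = lagMap.getOrDefault(s.topic(), s.lag());
            lagMap.put(s.topic(), Math.max(old, s.lag()));
        }
        return lagMap;
    }

    public static void main(String[] args) {
        Map<String, Float> result = maxLagByTopic(List.of(
                new Sample("orders", 3f),
                new Sample("orders", 7f),
                new Sample("orders", null),
                new Sample("payments", 0f)));
        System.out.println(result); // per-topic maxima
    }
}
```

Seeding the default with the current sample (`getOrDefault(topic, lag)`) makes the first sample for a topic its initial maximum, so no sentinel value is needed.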
@@ -22,7 +22,7 @@ import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
import com.xiaojukeji.know.streaming.km.core.service.reassign.ReassignService;
import com.xiaojukeji.know.streaming.km.core.service.topic.TopicMetricService;
import com.xiaojukeji.know.streaming.km.core.service.topic.TopicService;
import com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.TopicMetricVersionItems;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
@@ -19,4 +19,9 @@ public interface OpTopicManager {
     * expand partitions
     */
    Result<Void> expandTopic(TopicExpansionDTO dto, String operator);

    /**
     * truncate a Topic
     */
    Result<Void> truncateTopic(Long clusterPhyId, String topicName, String operator);
}
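`truncateTopic` (implemented further down in `OpTopicManagerImpl`) only works when the topic's `cleanup.policy` is exactly `delete`, so the implementation first parses the comma-separated policy value before deciding whether to rewrite it. A minimal sketch of that check, approximating `CommonUtils.string2StrList` with `String.split` (an assumption; the class and helper names here are hypothetical):

```java
import java.util.Arrays;
import java.util.List;

public class CleanupPolicySketch {
    /** Split a comma-separated config value into trimmed, non-empty entries. */
    public static List<String> toList(String value) {
        if (value == null || value.isEmpty()) {
            return List.of();
        }
        return Arrays.stream(value.split(","))
                .map(String::trim)
                .filter(s -> !s.isEmpty())
                .toList();
    }

    /** True when the policy is already exactly "delete", so no config change is needed before truncating. */
    public static boolean isDeleteOnly(String cleanupPolicyValue) {
        List<String> values = toList(cleanupPolicyValue);
        return values.size() == 1 && values.contains("delete");
    }

    public static void main(String[] args) {
        System.out.println(isDeleteOnly("delete"));         // true
        System.out.println(isDeleteOnly("compact,delete")); // false
    }
}
```

A topic configured as `compact` or `compact,delete` fails this check, which is why the implementation temporarily adds the `delete` policy and restores the original value afterwards.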
@@ -1,7 +1,10 @@
package com.xiaojukeji.know.streaming.km.biz.topic;

import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationBaseDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.topic.TopicRecordDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupTopicOverviewVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.topic.TopicBrokersPartitionsSummaryVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.topic.TopicRecordVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.topic.TopicStateVO;
@@ -22,4 +25,6 @@ public interface TopicStateManager {
    Result<List<TopicPartitionVO>> getTopicPartitions(Long clusterPhyId, String topicName, List<String> metricsNames);

    Result<TopicBrokersPartitionsSummaryVO> getTopicBrokersPartitionsSummary(Long clusterPhyId, String topicName);

    PaginationResult<GroupTopicOverviewVO> pagingTopicGroupsOverview(Long clusterPhyId, String topicName, String searchGroupName, PaginationBaseDTO dto);
}
@@ -7,21 +7,28 @@ import com.xiaojukeji.know.streaming.km.common.bean.dto.topic.TopicCreateDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.topic.TopicExpansionDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.broker.Broker;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
import com.xiaojukeji.know.streaming.km.common.bean.entity.param.config.KafkaTopicConfigParam;
import com.xiaojukeji.know.streaming.km.common.bean.entity.param.topic.TopicCreateParam;
import com.xiaojukeji.know.streaming.km.common.bean.entity.param.topic.TopicParam;
import com.xiaojukeji.know.streaming.km.common.bean.entity.param.topic.TopicPartitionExpandParam;
import com.xiaojukeji.know.streaming.km.common.bean.entity.param.topic.TopicTruncateParam;
import com.xiaojukeji.know.streaming.km.common.bean.entity.partition.Partition;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.ResultStatus;
import com.xiaojukeji.know.streaming.km.common.bean.entity.topic.Topic;
import com.xiaojukeji.know.streaming.km.common.constant.KafkaConstant;
import com.xiaojukeji.know.streaming.km.common.constant.MsgConstant;
import com.xiaojukeji.know.streaming.km.common.utils.*;
import com.xiaojukeji.know.streaming.km.common.utils.kafka.KafkaReplicaAssignUtil;
import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerService;
import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
import com.xiaojukeji.know.streaming.km.core.service.partition.PartitionService;
import com.xiaojukeji.know.streaming.km.core.service.topic.OpTopicService;
import com.xiaojukeji.know.streaming.km.core.service.topic.TopicConfigService;
import com.xiaojukeji.know.streaming.km.core.service.topic.TopicService;
import kafka.admin.AdminUtils;
import kafka.admin.BrokerMetadata;
import org.apache.kafka.common.config.TopicConfig;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;
@@ -52,6 +59,12 @@ public class OpTopicManagerImpl implements OpTopicManager {
    @Autowired
    private ClusterPhyService clusterPhyService;

    @Autowired
    private PartitionService partitionService;

    @Autowired
    private TopicConfigService topicConfigService;

    @Override
    public Result<Void> createTopic(TopicCreateDTO dto, String operator) {
        log.info("method=createTopic||param={}||operator={}.", dto, operator);
@@ -80,7 +93,7 @@ public class OpTopicManagerImpl implements OpTopicManager {
        );

        // create the Topic
        Result<Void> createTopicRes = opTopicService.createTopic(
                new TopicCreateParam(
                        dto.getClusterId(),
                        dto.getTopicName(),
@@ -90,6 +103,21 @@ public class OpTopicManagerImpl implements OpTopicManager {
                ),
                operator
        );

        if (createTopicRes.successful()) {
            try {
                FutureUtil.quickStartupFutureUtil.submitTask(() -> {
                    BackoffUtils.backoff(3000);
                    Result<List<Partition>> partitionsResult = partitionService.listPartitionsFromKafka(clusterPhy, dto.getTopicName());
                    if (partitionsResult.successful()) {
                        partitionService.updatePartitions(clusterPhy.getId(), dto.getTopicName(), partitionsResult.getData(), new ArrayList<>());
                    }
                });
            } catch (Exception e) {
                log.error("method=createTopic||param={}||operator={}||msg=add partition to db failed||errMsg=exception", dto, operator, e);
                return Result.buildFromRSAndMsg(ResultStatus.MYSQL_OPERATE_FAILED, "Topic创建成功但记录Partition到DB中失败等待定时任务同步partition信息");
            }
        }

        return createTopicRes;
    }
@Override @Override
@@ -134,9 +162,74 @@ public class OpTopicManagerImpl implements OpTopicManager {
return rv; return rv;
} }
    @Override
    public Result<Void> truncateTopic(Long clusterPhyId, String topicName, String operator) {
        // Add the delete config if it is not present yet
        Result<Tuple<Boolean, String>> rt = this.addDeleteConfigIfNotExist(clusterPhyId, topicName, operator);
        if (rt.failed()) {
            log.error("method=truncateTopic||clusterPhyId={}||topicName={}||operator={}||result={}||msg=get config from kafka failed", clusterPhyId, topicName, operator, rt);
            return Result.buildFromIgnoreData(rt);
        }

        // Truncate the topic
        Result<Void> rv = opTopicService.truncateTopic(new TopicTruncateParam(clusterPhyId, topicName, KafkaConstant.TOPICK_TRUNCATE_DEFAULT_OFFSET), operator);
        if (rv.failed()) {
            log.error("method=truncateTopic||clusterPhyId={}||topicName={}||originConfig={}||operator={}||result={}||msg=truncate topic failed", clusterPhyId, topicName, rt.getData().v2(), operator, rv);

            // If the config was changed, the error message must say so; otherwise return the error as-is
            return rt.getData().v1() ? Result.buildFailure(rv.getCode(), rv.getMessage() + "\t\n" + String.format("The topic's CleanupPolicy has been changed and must be restored manually to %s", rt.getData().v2())) : rv;
        }

        // Restore the compact config
        rv = this.recoverConfigIfChanged(clusterPhyId, topicName, rt.getData().v1(), rt.getData().v2(), operator);
        if (rv.failed()) {
            log.error("method=truncateTopic||clusterPhyId={}||topicName={}||originConfig={}||operator={}||result={}||msg=truncate topic success but recover config failed", clusterPhyId, topicName, rt.getData().v2(), operator, rv);

            // If the config was changed, the error message must say so; otherwise return the error as-is
            return Result.buildFailure(rv.getCode(), String.format("The topic was truncated successfully, but restoring the CleanupPolicy config failed; it must be restored manually to %s.", rt.getData().v2()) + "\t\n" + rv.getMessage());
        }

        return Result.buildSuc();
    }
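`truncateTopic` is a modify-act-restore compensation: force `cleanup.policy=delete`, run the truncate, then put the original policy back. A toy sketch of that choreography over an in-memory config map — the real code goes through `topicConfigService` and `opTopicService` and propagates `Result` objects instead:

```java
import java.util.HashMap;
import java.util.Map;

public class TruncateWithPolicySwap {
    // Stand-in for the topic's live config in Kafka.
    static final Map<String, String> topicConfig =
            new HashMap<>(Map.of("cleanup.policy", "compact"));

    /** Force cleanup.policy=delete, run the action, then restore the original value. */
    static String runWithDeletePolicy(Runnable truncateAction) {
        String origin = topicConfig.getOrDefault("cleanup.policy", "");
        boolean changed = !"delete".equals(origin);
        if (changed) {
            topicConfig.put("cleanup.policy", "delete");  // like addDeleteConfigIfNotExist
        }
        truncateAction.run();                             // like opTopicService.truncateTopic
        if (changed) {
            topicConfig.put("cleanup.policy", origin);    // like recoverConfigIfChanged
        }
        return topicConfig.get("cleanup.policy");
    }

    public static void main(String[] args) {
        String finalPolicy = runWithDeletePolicy(() -> { /* records deleted here */ });
        System.out.println(finalPolicy); // compact
    }
}
```

The manual-recovery error messages in the real method exist because this compensation is not transactional: if the restore step fails, the topic is left on the `delete` policy.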
    /**************************************************** private method ****************************************************/
    private Result<Tuple<Boolean, String>> addDeleteConfigIfNotExist(Long clusterPhyId, String topicName, String operator) {
        // Get the topic's config
        Result<Map<String, String>> configMapResult = topicConfigService.getTopicConfigFromKafka(clusterPhyId, topicName);
        if (configMapResult.failed()) {
            return Result.buildFromIgnoreData(configMapResult);
        }

        String cleanupPolicyValue = configMapResult.getData().getOrDefault(TopicConfig.CLEANUP_POLICY_CONFIG, "");
        List<String> cleanupPolicyValueList = CommonUtils.string2StrList(cleanupPolicyValue);
        if (cleanupPolicyValueList.size() == 1 && cleanupPolicyValueList.contains(TopicConfig.CLEANUP_POLICY_DELETE)) {
            // No change needed
            return Result.buildSuc(new Tuple<>(Boolean.FALSE, cleanupPolicyValue));
        }

        Map<String, String> changedConfigMap = new HashMap<>(1);
        changedConfigMap.put(TopicConfig.CLEANUP_POLICY_CONFIG, TopicConfig.CLEANUP_POLICY_DELETE);
        Result<Void> rv = topicConfigService.modifyTopicConfig(new KafkaTopicConfigParam(clusterPhyId, topicName, changedConfigMap), operator);
        if (rv.failed()) {
            // The modification failed
            return Result.buildFromIgnoreData(rv);
        }

        return Result.buildSuc(new Tuple<>(Boolean.TRUE, cleanupPolicyValue));
    }
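The guard in `addDeleteConfigIfNotExist` is a membership check on the comma-separated `cleanup.policy` value: only a policy of exactly `delete` lets the truncate skip the config change. A standalone sketch of that check, with `CommonUtils.string2StrList` approximated by a plain split:

```java
import java.util.Arrays;
import java.util.List;

public class CleanupPolicyCheck {
    /** Returns true when cleanup.policy is already exactly "delete". */
    static boolean isPureDelete(String cleanupPolicyValue) {
        if (cleanupPolicyValue == null || cleanupPolicyValue.isEmpty()) {
            return false;
        }
        // "compact,delete"-style values split into individual policies
        List<String> policies = Arrays.asList(cleanupPolicyValue.split(","));
        return policies.size() == 1 && policies.contains("delete");
    }

    public static void main(String[] args) {
        System.out.println(isPureDelete("delete"));         // true  -> no config change needed
        System.out.println(isPureDelete("compact"));        // false -> must switch to delete first
        System.out.println(isPureDelete("compact,delete")); // false -> mixed policy still needs the switch
    }
}
```

The mixed `compact,delete` case matters: truncation via record deletion only works when `delete` is the sole policy, so even a topic that already includes `delete` alongside `compact` gets rewritten.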
    private Result<Void> recoverConfigIfChanged(Long clusterPhyId, String topicName, Boolean changed, String originValue, String operator) {
        if (!changed) {
            // Nothing was changed, return directly
            return Result.buildSuc();
        }

        // Restore the config
        Map<String, String> changedConfigMap = new HashMap<>(1);
        changedConfigMap.put(TopicConfig.CLEANUP_POLICY_CONFIG, originValue);
        return topicConfigService.modifyTopicConfig(new KafkaTopicConfigParam(clusterPhyId, topicName, changedConfigMap), operator);
    }
    private Seq<BrokerMetadata> buildBrokerMetadataSeq(Long clusterPhyId, final List<Integer> selectedBrokerIdList) {
        // Select the broker list

View File

@@ -16,7 +16,7 @@ import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
 import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerConfigService;
 import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerService;
 import com.xiaojukeji.know.streaming.km.core.service.topic.TopicConfigService;
-import com.xiaojukeji.know.streaming.km.core.service.version.BaseVersionControlService;
+import com.xiaojukeji.know.streaming.km.core.service.version.BaseKafkaVersionControlService;
 import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.stereotype.Component;
@@ -27,7 +27,7 @@ import java.util.stream.Collectors;
 import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionEnum.*;
 
 @Component
-public class TopicConfigManagerImpl extends BaseVersionControlService implements TopicConfigManager {
+public class TopicConfigManagerImpl extends BaseKafkaVersionControlService implements TopicConfigManager {
     private static final ILog log = LogFactory.getLog(TopicConfigManagerImpl.class);
 
     private static final String GET_DEFAULT_TOPIC_CONFIG = "getDefaultTopicConfig";

View File

@@ -2,17 +2,23 @@ package com.xiaojukeji.know.streaming.km.biz.topic.impl;
 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
+import com.xiaojukeji.know.streaming.km.biz.group.GroupManager;
 import com.xiaojukeji.know.streaming.km.biz.topic.TopicStateManager;
+import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationBaseDTO;
 import com.xiaojukeji.know.streaming.km.common.bean.dto.topic.TopicRecordDTO;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.broker.Broker;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.PartitionMetrics;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.TopicMetrics;
+import com.xiaojukeji.know.streaming.km.common.bean.entity.offset.KSOffsetSpec;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.partition.Partition;
+import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.result.ResultStatus;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.topic.Topic;
+import com.xiaojukeji.know.streaming.km.common.bean.po.group.GroupMemberPO;
 import com.xiaojukeji.know.streaming.km.common.bean.vo.broker.BrokerReplicaSummaryVO;
+import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupTopicOverviewVO;
 import com.xiaojukeji.know.streaming.km.common.bean.vo.topic.TopicBrokersPartitionsSummaryVO;
 import com.xiaojukeji.know.streaming.km.common.bean.vo.topic.TopicRecordVO;
 import com.xiaojukeji.know.streaming.km.common.bean.vo.topic.TopicStateVO;
@@ -22,25 +28,27 @@ import com.xiaojukeji.know.streaming.km.common.bean.vo.topic.partition.TopicPart
 import com.xiaojukeji.know.streaming.km.common.constant.Constant;
 import com.xiaojukeji.know.streaming.km.common.constant.KafkaConstant;
 import com.xiaojukeji.know.streaming.km.common.constant.MsgConstant;
-import com.xiaojukeji.know.streaming.km.common.converter.PartitionConverter;
+import com.xiaojukeji.know.streaming.km.common.constant.PaginationConstant;
 import com.xiaojukeji.know.streaming.km.common.converter.TopicVOConverter;
+import com.xiaojukeji.know.streaming.km.common.enums.OffsetTypeEnum;
+import com.xiaojukeji.know.streaming.km.common.enums.SortTypeEnum;
 import com.xiaojukeji.know.streaming.km.common.exception.AdminOperateException;
 import com.xiaojukeji.know.streaming.km.common.exception.NotExistException;
 import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
+import com.xiaojukeji.know.streaming.km.common.utils.PaginationUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
 import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerService;
 import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
+import com.xiaojukeji.know.streaming.km.core.service.config.KSConfigUtils;
+import com.xiaojukeji.know.streaming.km.core.service.group.GroupService;
 import com.xiaojukeji.know.streaming.km.core.service.partition.PartitionMetricService;
-import com.xiaojukeji.know.streaming.km.core.service.topic.TopicConfigService;
 import com.xiaojukeji.know.streaming.km.core.service.partition.PartitionService;
+import com.xiaojukeji.know.streaming.km.core.service.topic.TopicConfigService;
 import com.xiaojukeji.know.streaming.km.core.service.topic.TopicMetricService;
 import com.xiaojukeji.know.streaming.km.core.service.topic.TopicService;
-import com.xiaojukeji.know.streaming.km.core.service.version.metrics.TopicMetricVersionItems;
-import org.apache.kafka.clients.admin.OffsetSpec;
-import org.apache.kafka.clients.consumer.ConsumerConfig;
-import org.apache.kafka.clients.consumer.ConsumerRecord;
-import org.apache.kafka.clients.consumer.ConsumerRecords;
-import org.apache.kafka.clients.consumer.KafkaConsumer;
+import com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.TopicMetricVersionItems;
+import com.xiaojukeji.know.streaming.km.core.utils.ApiCallThreadPoolService;
+import org.apache.kafka.clients.consumer.*;
 import org.apache.kafka.common.TopicPartition;
 import org.apache.kafka.common.config.TopicConfig;
 import org.springframework.beans.factory.annotation.Autowired;
@@ -53,7 +61,7 @@ import java.util.stream.Collectors;
 @Component
 public class TopicStateManagerImpl implements TopicStateManager {
-    private static final ILog log = LogFactory.getLog(TopicStateManagerImpl.class);
+    private static final ILog LOGGER = LogFactory.getLog(TopicStateManagerImpl.class);
 
     @Autowired
     private TopicService topicService;
@@ -76,6 +84,15 @@ public class TopicStateManagerImpl implements TopicStateManager {
     @Autowired
     private TopicConfigService topicConfigService;
 
+    @Autowired
+    private GroupService groupService;
+
+    @Autowired
+    private GroupManager groupManager;
+
+    @Autowired
+    private KSConfigUtils ksConfigUtils;
+
     @Override
     public TopicBrokerAllVO getTopicBrokerAll(Long clusterPhyId, String topicName, String searchBrokerHost) throws NotExistException {
         Topic topic = topicService.getTopic(clusterPhyId, topicName);
@@ -88,7 +105,7 @@ public class TopicStateManagerImpl implements TopicStateManager {
         TopicBrokerAllVO allVO = new TopicBrokerAllVO();
         allVO.setTotal(topic.getBrokerIdSet().size());
-        allVO.setLive((int) brokerMap.values().stream().filter(elem -> elem.alive()).count());
+        allVO.setLive((int) brokerMap.values().stream().filter(Broker::alive).count());
         allVO.setDead(allVO.getTotal() - allVO.getLive());
 
         allVO.setPartitionCount(topic.getPartitionNum());
@@ -130,75 +147,38 @@ public class TopicStateManagerImpl implements TopicStateManager {
         }
 
         // Get each partition's beginOffset
-        Result<Map<TopicPartition, Long>> beginOffsetsMapResult = partitionService.getPartitionOffsetFromKafka(clusterPhyId, topicName, dto.getFilterPartitionId(), OffsetSpec.earliest(), null);
+        Result<Map<TopicPartition, Long>> beginOffsetsMapResult = partitionService.getPartitionOffsetFromKafka(clusterPhyId, topicName, dto.getFilterPartitionId(), KSOffsetSpec.earliest());
         if (beginOffsetsMapResult.failed()) {
             return Result.buildFromIgnoreData(beginOffsetsMapResult);
         }
 
         // Get each partition's endOffset
-        Result<Map<TopicPartition, Long>> endOffsetsMapResult = partitionService.getPartitionOffsetFromKafka(clusterPhyId, topicName, dto.getFilterPartitionId(), OffsetSpec.latest(), null);
+        Result<Map<TopicPartition, Long>> endOffsetsMapResult = partitionService.getPartitionOffsetFromKafka(clusterPhyId, topicName, dto.getFilterPartitionId(), KSOffsetSpec.latest());
         if (endOffsetsMapResult.failed()) {
             return Result.buildFromIgnoreData(endOffsetsMapResult);
         }
 
-        List<TopicRecordVO> voList = new ArrayList<>();
-
-        KafkaConsumer<String, String> kafkaConsumer = null;
-        try {
-            // Create the kafka-consumer
-            kafkaConsumer = new KafkaConsumer<>(this.generateClientProperties(clusterPhy, dto.getMaxRecords()));
-            List<TopicPartition> partitionList = new ArrayList<>();
-            long maxMessage = 0;
-            for (Map.Entry<TopicPartition, Long> entry : endOffsetsMapResult.getData().entrySet()) {
-                long begin = beginOffsetsMapResult.getData().get(entry.getKey());
-                long end = entry.getValue();
-                if (begin == end) {
-                    continue;
-                }
-                maxMessage += end - begin;
-                partitionList.add(entry.getKey());
-            }
-            maxMessage = Math.min(maxMessage, dto.getMaxRecords());
-            kafkaConsumer.assign(partitionList);
-            for (TopicPartition partition : partitionList) {
-                kafkaConsumer.seek(partition, Math.max(beginOffsetsMapResult.getData().get(partition), endOffsetsMapResult.getData().get(partition) - dto.getMaxRecords()));
-            }
-            // KafkaConstant.POLL_ONCE_TIMEOUT_UNIT_MS is subtracted because each poll itself takes time; without the subtraction a poll could run past the requested deadline
-            while (System.currentTimeMillis() - startTime <= dto.getPullTimeoutUnitMs() && voList.size() < maxMessage) {
-                ConsumerRecords<String, String> consumerRecords = kafkaConsumer.poll(Duration.ofMillis(KafkaConstant.POLL_ONCE_TIMEOUT_UNIT_MS));
-                for (ConsumerRecord<String, String> consumerRecord : consumerRecords) {
-                    if (this.checkIfIgnore(consumerRecord, dto.getFilterKey(), dto.getFilterValue())) {
-                        continue;
-                    }
-                    voList.add(TopicVOConverter.convert2TopicRecordVO(topicName, consumerRecord));
-                    if (voList.size() >= dto.getMaxRecords()) {
-                        break;
-                    }
-                }
-                // Return once the deadline is reached
-                if (System.currentTimeMillis() - startTime + KafkaConstant.POLL_ONCE_TIMEOUT_UNIT_MS > dto.getPullTimeoutUnitMs()
-                        || voList.size() > dto.getMaxRecords()) {
-                    break;
-                }
-            }
-            return Result.buildSuc(voList.subList(0, Math.min(dto.getMaxRecords(), voList.size())));
-        } catch (Exception e) {
-            log.error("method=getTopicMessages||clusterPhyId={}||topicName={}||param={}||errMsg=exception", clusterPhyId, topicName, dto, e);
-            throw new AdminOperateException(e.getMessage(), e, ResultStatus.KAFKA_OPERATE_FAILED);
-        } finally {
-            if (kafkaConsumer != null) {
-                try {
-                    kafkaConsumer.close(Duration.ofMillis(KafkaConstant.POLL_ONCE_TIMEOUT_UNIT_MS));
-                } catch (Exception e) {
-                    // ignore
-                }
-            }
-        }
+        // Collect the records
+        List<TopicRecordVO> voList = this.getTopicMessages(clusterPhy, topicName, beginOffsetsMapResult.getData(), endOffsetsMapResult.getData(), startTime, dto);
+
+        // Sort
+        if (ValidateUtils.isBlank(dto.getSortType())) {
+            // Default: order by time, descending
+            dto.setSortType(SortTypeEnum.DESC.getSortType());
+        }
+        if (ValidateUtils.isBlank(dto.getSortField())) {
+            // Default: order by the timestampUnitMs field
+            dto.setSortField(PaginationConstant.TOPIC_RECORDS_TIME_SORTED_FIELD);
+        }
+
+        if (PaginationConstant.TOPIC_RECORDS_TIME_SORTED_FIELD.equals(dto.getSortField())) {
+            // When sorting by time, the secondary sort key is the offset
+            PaginationUtil.pageBySort(voList, dto.getSortField(), dto.getSortType(), PaginationConstant.TOPIC_RECORDS_OFFSET_SORTED_FIELD, dto.getSortType());
+        } else {
+            // Otherwise, the secondary sort key is the time
+            PaginationUtil.pageBySort(voList, dto.getSortField(), dto.getSortType(), PaginationConstant.TOPIC_RECORDS_TIME_SORTED_FIELD, dto.getSortType());
+        }
+
+        return Result.buildSuc(voList.subList(0, Math.min(dto.getMaxRecords(), voList.size())));
     }
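The sort step pairs a primary key with a fixed secondary key (offset when sorting by time, time otherwise). Assuming a simplified record with only `timestampUnitMs` and `offset` fields, the time-descending branch can be sketched with JDK comparators in place of `PaginationUtil.pageBySort`:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class RecordSortSketch {
    static class RecordVO {
        final long timestampUnitMs;
        final long offset;
        RecordVO(long ts, long off) { this.timestampUnitMs = ts; this.offset = off; }
    }

    /** Sort by timestamp desc, ties broken by offset desc (mirrors the time-sorted branch). */
    static void sortByTimeDesc(List<RecordVO> records) {
        records.sort(Comparator.comparingLong((RecordVO r) -> r.timestampUnitMs)
                .thenComparingLong(r -> r.offset)
                .reversed());
    }

    public static void main(String[] args) {
        List<RecordVO> list = new ArrayList<>(List.of(
                new RecordVO(100, 5), new RecordVO(200, 1), new RecordVO(100, 9)));
        sortByTimeDesc(list);
        System.out.println(list.get(0).timestampUnitMs + ":" + list.get(0).offset); // 200:1
        System.out.println(list.get(1).timestampUnitMs + ":" + list.get(1).offset); // 100:9
    }
}
```

The secondary key makes the result deterministic when many records share the same millisecond timestamp, which is common for high-throughput partitions.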
    @Override
@@ -253,26 +233,37 @@ public class TopicStateManagerImpl implements TopicStateManager {
     @Override
     public Result<List<TopicPartitionVO>> getTopicPartitions(Long clusterPhyId, String topicName, List<String> metricsNames) {
+        long startTime = System.currentTimeMillis();
+
         List<Partition> partitionList = partitionService.listPartitionByTopic(clusterPhyId, topicName);
         if (ValidateUtils.isEmptyList(partitionList)) {
             return Result.buildSuc();
         }
 
-        Result<List<PartitionMetrics>> metricsResult = partitionMetricService.collectPartitionsMetricsFromKafka(clusterPhyId, topicName, metricsNames);
-        if (metricsResult.failed()) {
-            // Only log the error; do not return it directly
-            log.error(
-                    "class=TopicStateManagerImpl||method=getTopicPartitions||clusterPhyId={}||topicName={}||result={}||msg=get metrics from es failed",
-                    clusterPhyId, topicName, metricsResult
-            );
-        }
-
-        // Convert to a map
         Map<Integer, PartitionMetrics> metricsMap = new HashMap<>();
-        if (metricsResult.hasData()) {
-            for (PartitionMetrics metrics : metricsResult.getData()) {
-                metricsMap.put(metrics.getPartitionId(), metrics);
-            }
-        }
+        ApiCallThreadPoolService.runnableTask(
+                String.format("clusterPhyId=%d||topicName=%s||method=getTopicPartitions", clusterPhyId, topicName),
+                ksConfigUtils.getApiCallLeftTimeUnitMs(System.currentTimeMillis() - startTime),
+                () -> {
+                    Result<List<PartitionMetrics>> metricsResult = partitionMetricService.collectPartitionsMetricsFromKafka(clusterPhyId, topicName, metricsNames);
+                    if (metricsResult.failed()) {
+                        // Only log the error; do not return it directly
+                        LOGGER.error(
+                                "method=getTopicPartitions||clusterPhyId={}||topicName={}||result={}||msg=get metrics from kafka failed",
+                                clusterPhyId, topicName, metricsResult
+                        );
+                    }
+
+                    for (PartitionMetrics metrics : metricsResult.getData()) {
+                        metricsMap.put(metrics.getPartitionId(), metrics);
+                    }
+                }
+        );
+
+        boolean finished = ApiCallThreadPoolService.waitResultAndReturnFinished(1);
+        if (!finished && metricsMap.isEmpty()) {
+            // Not finished -> log it
+            LOGGER.error("method=getTopicPartitions||clusterPhyId={}||topicName={}||msg=get metrics from kafka failed", clusterPhyId, topicName);
+        }
 
         List<TopicPartitionVO> voList = new ArrayList<>();
@@ -291,7 +282,7 @@ public class TopicStateManagerImpl implements TopicStateManager {
         // Broker statistics
         vo.setBrokerCount(brokerMap.size());
-        vo.setLiveBrokerCount((int) brokerMap.values().stream().filter(elem -> elem.alive()).count());
+        vo.setLiveBrokerCount((int) brokerMap.values().stream().filter(Broker::alive).count());
         vo.setDeadBrokerCount(vo.getBrokerCount() - vo.getLiveBrokerCount());
 
         // Partition statistics
@@ -313,6 +304,25 @@ public class TopicStateManagerImpl implements TopicStateManager {
         return Result.buildSuc(vo);
     }
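`ApiCallThreadPoolService.runnableTask(...)` followed by `waitResultAndReturnFinished(...)` amounts to running the Kafka call on a worker thread and waiting no longer than the remaining API budget, keeping whatever partial metrics arrived. A JDK-only sketch of that pattern — the pool and the metric fetch below are hypothetical stand-ins, not the real service:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class BoundedMetricFetch {
    /** Run the fetch on a pool; wait at most timeoutMs, return whether it finished. */
    static boolean fetchWithBudget(Map<Integer, Long> metricsMap, long timeoutMs) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<?> future = pool.submit(() -> {
            // Stand-in for collectPartitionsMetricsFromKafka(...)
            metricsMap.put(0, 42L);
        });
        try {
            future.get(timeoutMs, TimeUnit.MILLISECONDS);
            return true;                 // finished inside the budget
        } catch (TimeoutException e) {
            return false;                // budget exhausted; caller keeps what arrived
        } catch (Exception e) {
            return false;
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) {
        Map<Integer, Long> metricsMap = new ConcurrentHashMap<>();
        boolean finished = fetchWithBudget(metricsMap, 1000);
        System.out.println(finished + ":" + metricsMap.get(0)); // true:42
    }
}
```

This is why the real method only logs when the wait times out: a slow metrics fetch degrades the response instead of failing it.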
    @Override
    public PaginationResult<GroupTopicOverviewVO> pagingTopicGroupsOverview(Long clusterPhyId, String topicName, String searchGroupName, PaginationBaseDTO dto) {
        long startTimeUnitMs = System.currentTimeMillis();

        PaginationResult<GroupMemberPO> paginationResult = groupService.pagingGroupMembers(clusterPhyId, topicName, "", "", searchGroupName, dto);
        if (!paginationResult.hasData()) {
            return PaginationResult.buildSuc(new ArrayList<>(), paginationResult);
        }

        List<GroupTopicOverviewVO> groupTopicVOList = groupManager.getGroupTopicOverviewVOList(
                clusterPhyId,
                paginationResult.getData().getBizData(),
                ksConfigUtils.getApiCallLeftTimeUnitMs(System.currentTimeMillis() - startTimeUnitMs) // remaining timeout
        );

        return PaginationResult.buildSuc(groupTopicVOList, paginationResult);
    }
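`ksConfigUtils.getApiCallLeftTimeUnitMs(elapsed)` hands the downstream call whatever is left of the request's time budget, so the arithmetic is just a clamped subtraction. A sketch with a hypothetical 8-second budget (the real default lives in `KSConfigUtils`):

```java
public class ApiBudget {
    /** Remaining time in a fixed per-request budget, never negative. */
    static long leftTimeUnitMs(long totalBudgetMs, long elapsedMs) {
        return Math.max(0, totalBudgetMs - elapsedMs);
    }

    public static void main(String[] args) {
        System.out.println(leftTimeUnitMs(8000, 1500)); // 6500
        System.out.println(leftTimeUnitMs(8000, 9000)); // 0
    }
}
```

Clamping to zero matters: passing a negative timeout into a wait call would either throw or wait indefinitely, depending on the API.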
    /**************************************************** private method ****************************************************/
     private boolean checkIfIgnore(ConsumerRecord<String, String> consumerRecord, String filterKey, String filterValue) {
@@ -328,11 +338,8 @@ public class TopicStateManagerImpl implements TopicStateManager {
             // ignore
             return true;
         }
 
-        if (filterValue != null && consumerRecord.value() != null && !consumerRecord.value().contains(filterValue)) {
-            return true;
-        }
-
-        return false;
+        return (filterValue != null && consumerRecord.value() != null && !consumerRecord.value().contains(filterValue));
     }
     private TopicBrokerSingleVO getTopicBrokerSingle(Long clusterPhyId,
@@ -392,4 +399,90 @@ public class TopicStateManagerImpl implements TopicStateManager {
         props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, Math.max(2, Math.min(5, maxPollRecords)));
         return props;
     }
    private List<TopicRecordVO> getTopicMessages(ClusterPhy clusterPhy,
                                                 String topicName,
                                                 Map<TopicPartition, Long> beginOffsetsMap,
                                                 Map<TopicPartition, Long> endOffsetsMap,
                                                 long startTime,
                                                 TopicRecordDTO dto) throws AdminOperateException {
        List<TopicRecordVO> voList = new ArrayList<>();
        try (KafkaConsumer<String, String> kafkaConsumer = new KafkaConsumer<>(this.generateClientProperties(clusterPhy, dto.getMaxRecords()))) {
            // Seek to the specified positions
            long maxMessage = this.assignAndSeekToSpecifiedOffset(kafkaConsumer, beginOffsetsMap, endOffsetsMap, dto);

            // KafkaConstant.POLL_ONCE_TIMEOUT_UNIT_MS is subtracted here because each poll itself takes time; without the subtraction a poll could run past the requested deadline
            while (System.currentTimeMillis() - startTime <= dto.getPullTimeoutUnitMs() && voList.size() < maxMessage) {
                ConsumerRecords<String, String> consumerRecords = kafkaConsumer.poll(Duration.ofMillis(KafkaConstant.POLL_ONCE_TIMEOUT_UNIT_MS));
                for (ConsumerRecord<String, String> consumerRecord : consumerRecords) {
                    if (this.checkIfIgnore(consumerRecord, dto.getFilterKey(), dto.getFilterValue())) {
                        continue;
                    }
                    voList.add(TopicVOConverter.convert2TopicRecordVO(topicName, consumerRecord));
                    if (voList.size() >= dto.getMaxRecords()) {
                        break;
                    }
                }

                // Return once the deadline is reached
                if (System.currentTimeMillis() - startTime + KafkaConstant.POLL_ONCE_TIMEOUT_UNIT_MS > dto.getPullTimeoutUnitMs()
                        || voList.size() > dto.getMaxRecords()) {
                    break;
                }
            }

            return voList;
        } catch (Exception e) {
            LOGGER.error("method=getTopicMessages||clusterPhyId={}||topicName={}||param={}||errMsg=exception", clusterPhy.getId(), topicName, dto, e);
            throw new AdminOperateException(e.getMessage(), e, ResultStatus.KAFKA_OPERATE_FAILED);
        }
    }
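The loop guard above reserves one `POLL_ONCE_TIMEOUT_UNIT_MS` slice of the budget so that a final `poll(...)` cannot overshoot the requested deadline. The decision in isolation:

```java
public class PollDeadline {
    /**
     * True when another poll of pollOnceMs still fits in the remaining budget
     * and more records are wanted.
     */
    static boolean canPollAgain(long elapsedMs, long budgetMs, long pollOnceMs,
                                int collected, int maxRecords) {
        return elapsedMs + pollOnceMs <= budgetMs && collected < maxRecords;
    }

    public static void main(String[] args) {
        System.out.println(canPollAgain(7000, 8000, 1000, 10, 100));  // true: one more 1s poll fits exactly
        System.out.println(canPollAgain(7500, 8000, 1000, 10, 100));  // false: the next poll would overshoot
        System.out.println(canPollAgain(1000, 8000, 1000, 100, 100)); // false: already have enough records
    }
}
```

Without the `pollOnceMs` reservation, a poll started just before the deadline would block past it, which is exactly the behavior the comment in the real code warns about.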
    private long assignAndSeekToSpecifiedOffset(KafkaConsumer<String, String> kafkaConsumer,
                                                Map<TopicPartition, Long> beginOffsetsMap,
                                                Map<TopicPartition, Long> endOffsetsMap,
                                                TopicRecordDTO dto) {
        List<TopicPartition> partitionList = new ArrayList<>();
        long maxMessage = 0;
        for (Map.Entry<TopicPartition, Long> entry : endOffsetsMap.entrySet()) {
            long begin = beginOffsetsMap.get(entry.getKey());
            long end = entry.getValue();
            if (begin == end) {
                continue;
            }
            maxMessage += end - begin;
            partitionList.add(entry.getKey());
        }
        maxMessage = Math.min(maxMessage, dto.getMaxRecords());
        kafkaConsumer.assign(partitionList);

        Map<TopicPartition, OffsetAndTimestamp> partitionOffsetAndTimestampMap = new HashMap<>();
        // Get each partition's offset for the specified time (used when querying messages from a given start time)
        if (OffsetTypeEnum.PRECISE_TIMESTAMP.getResetType() == dto.getFilterOffsetReset()) {
            Map<TopicPartition, Long> timestampsToSearch = new HashMap<>();
            partitionList.forEach(topicPartition -> timestampsToSearch.put(topicPartition, dto.getStartTimestampUnitMs()));
            partitionOffsetAndTimestampMap = kafkaConsumer.offsetsForTimes(timestampsToSearch);
        }

        for (TopicPartition partition : partitionList) {
            if (OffsetTypeEnum.EARLIEST.getResetType() == dto.getFilterOffsetReset()) {
                // Reset to the earliest offset
                kafkaConsumer.seek(partition, beginOffsetsMap.get(partition));
            } else if (OffsetTypeEnum.PRECISE_TIMESTAMP.getResetType() == dto.getFilterOffsetReset()) {
                // Reset to the specified time
                kafkaConsumer.seek(partition, partitionOffsetAndTimestampMap.get(partition).offset());
            } else if (OffsetTypeEnum.PRECISE_OFFSET.getResetType() == dto.getFilterOffsetReset()) {
                // Reset to the specified offset
            } else {
                // Default: reset to the latest records
                kafkaConsumer.seek(partition, Math.max(beginOffsetsMap.get(partition), endOffsetsMap.get(partition) - dto.getMaxRecords()));
            }
        }

        return maxMessage;
    }
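`assignAndSeekToSpecifiedOffset` picks a per-partition start offset from the reset type. The pure selection logic can be sketched with the consumer and the `offsetsForTimes` lookup abstracted away; the precise-offset branch, which leaves the position untouched in the original, is omitted here. Note that `offsetsForTimes` maps a partition to `null` when no record is at or after the timestamp, which this sketch falls back from:

```java
public class SeekTargetSketch {
    enum ResetType { EARLIEST, PRECISE_TIMESTAMP, LATEST }

    /**
     * Start offset for one partition; timeOffset is the offsetsForTimes result
     * (null when no record exists at or after the requested timestamp).
     */
    static long seekTarget(ResetType type, long begin, long end, int maxRecords, Long timeOffset) {
        switch (type) {
            case EARLIEST:
                return begin;
            case PRECISE_TIMESTAMP:
                return timeOffset != null ? timeOffset : end;
            default:
                // LATEST: read at most maxRecords from the tail
                return Math.max(begin, end - maxRecords);
        }
    }

    public static void main(String[] args) {
        System.out.println(seekTarget(ResetType.EARLIEST, 5, 100, 20, null));         // 5
        System.out.println(seekTarget(ResetType.LATEST, 5, 100, 20, null));           // 80
        System.out.println(seekTarget(ResetType.LATEST, 95, 100, 20, null));          // 95
        System.out.println(seekTarget(ResetType.PRECISE_TIMESTAMP, 5, 100, 20, 42L)); // 42
    }
}
```

The `Math.max(begin, end - maxRecords)` clamp in the latest branch keeps the seek inside the valid range when a partition holds fewer than `maxRecords` messages.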
}

View File

@@ -20,7 +20,7 @@ public interface VersionControlManager {
      * Get all Kafka versions currently supported by KS
      * @return
      */
-    Result<Map<String, Long>> listAllVersions();
+    Result<Map<String, Long>> listAllKafkaVersions();
 
     /**
      * Get the metrics of type `type` in cluster `clusterId`, whether supported or not
@@ -28,7 +28,7 @@ public interface VersionControlManager {
      * @param type
      * @return
      */
-    Result<List<VersionItemVO>> listClusterVersionControlItem(Long clusterId, Integer type);
+    Result<List<VersionItemVO>> listKafkaClusterVersionControlItem(Long clusterId, Integer type);
 
     /**
     * Get the metric display configuration set by the current user

View File

@@ -7,6 +7,7 @@ import com.didiglobal.logi.log.LogFactory;
 import com.didiglobal.logi.security.common.dto.config.ConfigDTO;
 import com.didiglobal.logi.security.service.ConfigService;
 import com.xiaojukeji.know.streaming.km.biz.version.VersionControlManager;
+import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.MetricDetailDTO;
 import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.UserMetricConfigDTO;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.config.metric.UserMetricConfig;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
@@ -16,6 +17,7 @@ import com.xiaojukeji.know.streaming.km.common.bean.vo.version.VersionItemVO;
 import com.xiaojukeji.know.streaming.km.common.enums.version.VersionEnum;
 import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.VersionUtil;
+import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
 import com.xiaojukeji.know.streaming.km.core.service.version.VersionControlService;
 import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.stereotype.Service;
@@ -28,10 +30,14 @@ import java.util.stream.Collectors;
 import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionEnum.V_MAX;
 import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum.*;
-import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.BrokerMetricVersionItems.*;
-import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.ClusterMetricVersionItems.*;
-import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.GroupMetricVersionItems.*;
-import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.TopicMetricVersionItems.*;
+import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.BrokerMetricVersionItems.*;
+import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.ClusterMetricVersionItems.*;
+import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.GroupMetricVersionItems.*;
+import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.TopicMetricVersionItems.*;
+import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.connect.MirrorMakerMetricVersionItems.*;
+import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.connect.ConnectClusterMetricVersionItems.*;
+import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.connect.ConnectorMetricVersionItems.*;
+import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.ZookeeperMetricVersionItems.*;
@Service @Service
public class VersionControlManagerImpl implements VersionControlManager { public class VersionControlManagerImpl implements VersionControlManager {
@@ -46,51 +52,120 @@ public class VersionControlManagerImpl implements VersionControlManager {
@PostConstruct @PostConstruct
public void init(){ public void init(){
-        defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_HEALTH_SCORE, true));
-        defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_TOTAL_PRODUCE_REQUESTS, true));
+        // topic
+        defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_HEALTH_STATE, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_FAILED_FETCH_REQ, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_FAILED_PRODUCE_REQ, true));
-        defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_MESSAGE_IN, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_UNDER_REPLICA_PARTITIONS, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_TOTAL_PRODUCE_REQUESTS, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_BYTES_IN, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_BYTES_OUT, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_BYTES_REJECTED, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_MESSAGE_IN, true));
-        defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_HEALTH_SCORE, true));
-        defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_TOTAL_REQ_QUEUE_SIZE, true));
-        defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_TOTAL_RES_QUEUE_SIZE, true));
+        // cluster
+        defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_HEALTH_STATE, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_ACTIVE_CONTROLLER_COUNT, true));
-        defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_TOTAL_PRODUCE_REQ, true));
-        defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_TOTAL_LOG_SIZE, true));
-        defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_CONNECTIONS, true));
-        defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_MESSAGES_IN, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_BYTES_IN, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_BYTES_OUT, true));
-        defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_GROUP_REBALANCES, true));
-        defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_JOB_RUNNING, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_CONNECTIONS, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_MESSAGES_IN, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_PARTITIONS_NO_LEADER, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_PARTITION_URP, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_TOTAL_LOG_SIZE, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_TOTAL_PRODUCE_REQ, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_TOTAL_REQ_QUEUE_SIZE, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_TOTAL_RES_QUEUE_SIZE, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_GROUP_REBALANCES, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_JOB_RUNNING, true));
+        // group
         defaultMetrics.add(new UserMetricConfig(METRIC_GROUP.getCode(), GROUP_METRIC_OFFSET_CONSUMED, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_GROUP.getCode(), GROUP_METRIC_LAG, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_GROUP.getCode(), GROUP_METRIC_STATE, true));
-        defaultMetrics.add(new UserMetricConfig(METRIC_GROUP.getCode(), GROUP_METRIC_HEALTH_SCORE, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_GROUP.getCode(), GROUP_METRIC_HEALTH_STATE, true));
-        defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_HEALTH_SCORE, true));
-        defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_TOTAL_REQ_QUEUE, true));
-        defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_TOTAL_RES_QUEUE, true));
+        // broker
+        defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_HEALTH_STATE, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_CONNECTION_COUNT, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_MESSAGE_IN, true));
-        defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_TOTAL_PRODUCE_REQ, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_NETWORK_RPO_AVG_IDLE, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_REQ_AVG_IDLE, true));
-        defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_CONNECTION_COUNT, true));
-        defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_BYTES_IN, true));
-        defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_BYTES_OUT, true));
-        defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_PARTITIONS_SKEW, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_TOTAL_PRODUCE_REQ, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_TOTAL_REQ_QUEUE, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_TOTAL_RES_QUEUE, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_LEADERS_SKEW, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_UNDER_REPLICATE_PARTITION, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_PARTITIONS_SKEW, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_BYTES_IN, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_BYTES_OUT, true));
+        // zookeeper
+        defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_HEALTH_STATE, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_HEALTH_CHECK_PASSED, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_HEALTH_CHECK_TOTAL, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_MAX_REQUEST_LATENCY, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_OUTSTANDING_REQUESTS, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_NODE_COUNT, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_WATCH_COUNT, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_NUM_ALIVE_CONNECTIONS, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_PACKETS_RECEIVED, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_PACKETS_SENT, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_EPHEMERALS_COUNT, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_APPROXIMATE_DATA_SIZE, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_OPEN_FILE_DESCRIPTOR_COUNT, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_KAFKA_ZK_DISCONNECTS_PER_SEC, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_KAFKA_ZK_SYNC_CONNECTS_PER_SEC, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_KAFKA_ZK_REQUEST_LATENCY_99TH, true));
+        // mm2
+        defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_MIRROR_MAKER.getCode(), MIRROR_MAKER_METRIC_BYTE_COUNT, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_MIRROR_MAKER.getCode(), MIRROR_MAKER_METRIC_BYTE_RATE, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_MIRROR_MAKER.getCode(), MIRROR_MAKER_METRIC_RECORD_AGE_MS_MAX, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_MIRROR_MAKER.getCode(), MIRROR_MAKER_METRIC_RECORD_COUNT, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_MIRROR_MAKER.getCode(), MIRROR_MAKER_METRIC_RECORD_RATE, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_MIRROR_MAKER.getCode(), MIRROR_MAKER_METRIC_REPLICATION_LATENCY_MS_MAX, true));
+        // Connect Cluster
+        defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CLUSTER.getCode(), CONNECT_CLUSTER_METRIC_CONNECTOR_COUNT, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CLUSTER.getCode(), CONNECT_CLUSTER_METRIC_TASK_COUNT, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CLUSTER.getCode(), CONNECT_CLUSTER_METRIC_CONNECTOR_STARTUP_ATTEMPTS_TOTAL, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CLUSTER.getCode(), CONNECT_CLUSTER_METRIC_CONNECTOR_STARTUP_FAILURE_PERCENTAGE, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CLUSTER.getCode(), CONNECT_CLUSTER_METRIC_CONNECTOR_STARTUP_FAILURE_TOTAL, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CLUSTER.getCode(), CONNECT_CLUSTER_METRIC_TASK_STARTUP_ATTEMPTS_TOTAL, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CLUSTER.getCode(), CONNECT_CLUSTER_METRIC_TASK_STARTUP_FAILURE_PERCENTAGE, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CLUSTER.getCode(), CONNECT_CLUSTER_METRIC_TASK_STARTUP_FAILURE_TOTAL, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CLUSTER.getCode(), CONNECT_CLUSTER_METRIC_COLLECT_COST_TIME, true));
+        // Connect Connector
+        defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CONNECTOR.getCode(), CONNECTOR_METRIC_HEALTH_STATE, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CONNECTOR.getCode(), CONNECTOR_METRIC_HEALTH_CHECK_PASSED, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CONNECTOR.getCode(), CONNECTOR_METRIC_HEALTH_CHECK_TOTAL, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CONNECTOR.getCode(), CONNECTOR_METRIC_COLLECT_COST_TIME, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CONNECTOR.getCode(), CONNECTOR_METRIC_CONNECTOR_TOTAL_TASK_COUNT, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CONNECTOR.getCode(), CONNECTOR_METRIC_CONNECTOR_RUNNING_TASK_COUNT, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CONNECTOR.getCode(), CONNECTOR_METRIC_CONNECTOR_FAILED_TASK_COUNT, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CONNECTOR.getCode(), CONNECTOR_METRIC_SOURCE_RECORD_ACTIVE_COUNT, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CONNECTOR.getCode(), CONNECTOR_METRIC_SOURCE_RECORD_POLL_TOTAL, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CONNECTOR.getCode(), CONNECTOR_METRIC_SOURCE_RECORD_WRITE_TOTAL, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CONNECTOR.getCode(), CONNECTOR_METRIC_SINK_RECORD_ACTIVE_COUNT, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CONNECTOR.getCode(), CONNECTOR_METRIC_SINK_RECORD_READ_TOTAL, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CONNECTOR.getCode(), CONNECTOR_METRIC_SINK_RECORD_SEND_TOTAL, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CONNECTOR.getCode(), CONNECTOR_METRIC_DEADLETTERQUEUE_PRODUCE_FAILURES, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CONNECTOR.getCode(), CONNECTOR_METRIC_DEADLETTERQUEUE_PRODUCE_REQUESTS, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CONNECTOR.getCode(), CONNECTOR_METRIC_TOTAL_ERRORS_LOGGED, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CONNECTOR.getCode(), CONNECTOR_METRIC_SOURCE_RECORD_POLL_RATE, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CONNECTOR.getCode(), CONNECTOR_METRIC_SOURCE_RECORD_WRITE_RATE, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CONNECTOR.getCode(), CONNECTOR_METRIC_SINK_RECORD_READ_RATE, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CONNECTOR.getCode(), CONNECTOR_METRIC_SINK_RECORD_SEND_RATE, true));
    }

+    @Autowired
+    private ClusterPhyService clusterPhyService;

    @Autowired
    private VersionControlService versionControlService;
@@ -106,27 +181,40 @@ public class VersionControlManagerImpl implements VersionControlManager {
        allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_BROKER.getCode()), VersionItemVO.class));
        allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_PARTITION.getCode()), VersionItemVO.class));
        allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_REPLICATION.getCode()), VersionItemVO.class));
+        allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_ZOOKEEPER.getCode()), VersionItemVO.class));
+        allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_CONNECT_CLUSTER.getCode()), VersionItemVO.class));
+        allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_CONNECT_CONNECTOR.getCode()), VersionItemVO.class));
+        allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_CONNECT_MIRROR_MAKER.getCode()), VersionItemVO.class));
        allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(WEB_OP.getCode()), VersionItemVO.class));

        Map<String, VersionItemVO> map = allVersionItemVO.stream().collect(
-                Collectors.toMap(u -> u.getType() + "@" + u.getName(), Function.identity() ));
+                Collectors.toMap(
+                        u -> u.getType() + "@" + u.getName(),
+                        Function.identity(),
+                        (v1, v2) -> v1)
+        );

        return Result.buildSuc(map);
    }
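The three-argument `Collectors.toMap` in this hunk adds a merge function `(v1, v2) -> v1`: the two-argument overload throws `IllegalStateException` as soon as two items map to the same `type + "@" + name` key, whereas keep-first silently tolerates duplicates. A minimal, self-contained sketch of that behavior; the `Item` record here is a hypothetical stand-in for `VersionItemVO`, not the project's real class:

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class ToMapMergeSketch {
    // Hypothetical stand-in for VersionItemVO: only type and name form the key.
    record Item(int type, String name, String desc) {}

    static Map<String, Item> index(List<Item> items) {
        return items.stream().collect(Collectors.toMap(
                u -> u.type() + "@" + u.name(),
                Function.identity(),
                (v1, v2) -> v1)); // on key collision, keep the first item
    }

    public static void main(String[] args) {
        List<Item> items = List.of(
                new Item(100, "BytesIn", "first"),
                new Item(100, "BytesIn", "second"), // duplicate key "100@BytesIn"
                new Item(200, "Lag", "lag"));
        Map<String, Item> map = index(items);
        System.out.println(map.size());                    // 2
        System.out.println(map.get("100@BytesIn").desc()); // first
    }
}
```

With `Function.identity()` alone, the second `100@BytesIn` item would abort the whole stream; the merge function makes indexing robust when several metric-type lists happen to share an entry.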
    @Override
-    public Result<Map<String, Long>> listAllVersions() {
+    public Result<Map<String, Long>> listAllKafkaVersions() {
        return Result.buildSuc(VersionEnum.allVersionsWithOutMax());
    }

    @Override
-    public Result<List<VersionItemVO>> listClusterVersionControlItem(Long clusterId, Integer type) {
+    public Result<List<VersionItemVO>> listKafkaClusterVersionControlItem(Long clusterId, Integer type) {
        List<VersionControlItem> allItem = versionControlService.listVersionControlItem(type);
        List<VersionItemVO> versionItemVOS = new ArrayList<>();
+        String versionStr = clusterPhyService.getVersionFromCacheFirst(clusterId);
        for (VersionControlItem item : allItem){
            VersionItemVO itemVO = ConvertUtil.obj2Obj(item, VersionItemVO.class);
-            boolean support = versionControlService.isClusterSupport(clusterId, item);
+            boolean support = versionControlService.isClusterSupport(versionStr, item);
            itemVO.setSupport(support);
            itemVO.setDesc(itemSupportDesc(item, support));
@@ -139,7 +227,7 @@ public class VersionControlManagerImpl implements VersionControlManager {
    @Override
    public Result<List<UserMetricConfigVO>> listUserMetricItem(Long clusterId, Integer type, String operator) {
-        Result<List<VersionItemVO>> ret = listClusterVersionControlItem(clusterId, type);
+        Result<List<VersionItemVO>> ret = listKafkaClusterVersionControlItem(clusterId, type);
        if(null == ret || ret.failed()){
            return Result.buildFail();
        }
@@ -159,6 +247,9 @@ public class VersionControlManagerImpl implements VersionControlManager {
            UserMetricConfig umc = userMetricConfigMap.get(itemType + "@" + metric);
            userMetricConfigVO.setSet(null != umc && umc.isSet());
+            if (umc != null) {
+                userMetricConfigVO.setRank(umc.getRank());
+            }
            userMetricConfigVO.setName(itemVO.getName());
            userMetricConfigVO.setType(itemVO.getType());
            userMetricConfigVO.setDesc(itemVO.getDesc());
@@ -178,13 +269,29 @@ public class VersionControlManagerImpl implements VersionControlManager {
    @Override
    public Result<Void> updateUserMetricItem(Long clusterId, Integer type, UserMetricConfigDTO dto, String operator) {
        Map<String, Boolean> metricsSetMap = dto.getMetricsSet();
-        if(null == metricsSetMap || metricsSetMap.isEmpty()){
+        // convert metricDetailDTOList
+        List<MetricDetailDTO> metricDetailDTOList = dto.getMetricDetailDTOList();
+        Map<String, MetricDetailDTO> metricDetailMap = new HashMap<>();
+        if (metricDetailDTOList != null && !metricDetailDTOList.isEmpty()) {
+            metricDetailMap = metricDetailDTOList.stream().collect(Collectors.toMap(MetricDetailDTO::getMetric, Function.identity()));
+        }
+        // convert metricsSetMap
+        if (metricsSetMap != null && !metricsSetMap.isEmpty()) {
+            for (Map.Entry<String, Boolean> metricAndShowEntry : metricsSetMap.entrySet()) {
+                if (metricDetailMap.containsKey(metricAndShowEntry.getKey())) continue;
+                metricDetailMap.put(metricAndShowEntry.getKey(), new MetricDetailDTO(metricAndShowEntry.getKey(), metricAndShowEntry.getValue(), null));
+            }
+        }
+        if (metricDetailMap.isEmpty()) {
            return Result.buildSuc();
        }

        Set<UserMetricConfig> userMetricConfigs = getUserMetricConfig(operator);
-        for(Map.Entry<String, Boolean> metricAndShowEntry : metricsSetMap.entrySet()){
-            UserMetricConfig userMetricConfig = new UserMetricConfig(type, metricAndShowEntry.getKey(), metricAndShowEntry.getValue());
+        for (MetricDetailDTO metricDetailDTO : metricDetailMap.values()) {
+            UserMetricConfig userMetricConfig = new UserMetricConfig(type, metricDetailDTO.getMetric(), metricDetailDTO.getSet(), metricDetailDTO.getRank());
            userMetricConfigs.remove(userMetricConfig);
            userMetricConfigs.add(userMetricConfig);
        }
@@ -228,7 +335,7 @@ public class VersionControlManagerImpl implements VersionControlManager {
            return defaultMetrics;
        }

-        return JSON.parseObject(value, new TypeReference<Set<UserMetricConfig>>(){});
+        return JSON.parseObject(value, new TypeReference<Set<UserMetricConfig>>() {});
    }

    public static void main(String[] args){
@@ -5,13 +5,13 @@
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.xiaojukeji.kafka</groupId>
    <artifactId>km-collector</artifactId>
-    <version>${km.revision}</version>
+    <version>${revision}</version>
    <packaging>jar</packaging>
    <parent>
        <artifactId>km</artifactId>
        <groupId>com.xiaojukeji.kafka</groupId>
-        <version>${km.revision}</version>
+        <version>${revision}</version>
    </parent>
    <dependencies>
@@ -1,7 +1,6 @@
package com.xiaojukeji.know.streaming.km.collector.metric;

import com.xiaojukeji.know.streaming.km.collector.service.CollectThreadPoolService;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.BaseMetricEvent;
import com.xiaojukeji.know.streaming.km.common.component.SpringTool;
import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
@@ -9,17 +8,20 @@ import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
import org.springframework.beans.factory.annotation.Autowired;

/**
 * @author didi
 */
-public abstract class AbstractMetricCollector<T> {
+public abstract class AbstractMetricCollector<M, C> {

-    public abstract void collectMetrics(ClusterPhy clusterPhy);
+    public abstract String getClusterVersion(C c);

    public abstract VersionItemTypeEnum collectorType();

    @Autowired
    private CollectThreadPoolService collectThreadPoolService;

+    public abstract void collectMetrics(C c);

    protected FutureWaitUtil<Void> getFutureUtilByClusterPhyId(Long clusterPhyId) {
        return collectThreadPoolService.selectSuitableFutureUtil(clusterPhyId * 1000L + this.collectorType().getCode());
    }
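The hunk above makes the collector base class generic over both the metric type `M` and the cluster type `C`, so Kafka physical clusters and Connect clusters can share one collection skeleton instead of hard-coding `ClusterPhy`. A minimal sketch of that pattern; every class name here (`MetricCollectorSketch`, `FakeConnectCluster`, `ConnectorMetric`, `ConnectCollector`) is a hypothetical stand-in, not the project's real types:

```java
import java.util.List;

// Base class parameterized by metric type M and cluster type C,
// mirroring AbstractMetricCollector<M, C> above.
abstract class MetricCollectorSketch<M, C> {
    abstract String getClusterVersion(C cluster);
    abstract List<M> collect(C cluster);
}

class FakeConnectCluster {
    final String version;
    FakeConnectCluster(String version) { this.version = version; }
}

class ConnectorMetric {
    final double value;
    ConnectorMetric(double value) { this.value = value; }
}

// A Connect-side subclass fixes C = FakeConnectCluster; a Kafka-side
// subclass would fix C to its own cluster type instead.
class ConnectCollector extends MetricCollectorSketch<ConnectorMetric, FakeConnectCluster> {
    @Override
    String getClusterVersion(FakeConnectCluster c) { return c.version; }

    @Override
    List<ConnectorMetric> collect(FakeConnectCluster c) {
        return List.of(new ConnectorMetric(1.0));
    }
}

public class CollectorPatternSketch {
    public static void main(String[] args) {
        ConnectCollector collector = new ConnectCollector();
        System.out.println(collector.getClusterVersion(new FakeConnectCluster("3.3.1")));
    }
}
```

The payoff of the second type parameter is that `getClusterVersion` and `collectMetrics` take the concrete cluster object directly, with no casts in the subclasses.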
@@ -1,121 +0,0 @@
package com.xiaojukeji.know.streaming.km.collector.metric;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.*;
import com.xiaojukeji.know.streaming.km.common.bean.po.BaseESPO;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.*;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.EnvUtil;
import com.xiaojukeji.know.streaming.km.common.utils.NamedThreadFactory;
import com.xiaojukeji.know.streaming.km.persistence.es.dao.BaseMetricESDAO;
import org.apache.commons.collections.CollectionUtils;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;
import javax.annotation.PostConstruct;
import java.util.List;
import java.util.Objects;
import java.util.concurrent.LinkedBlockingDeque;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import static com.xiaojukeji.know.streaming.km.common.constant.ESIndexConstant.*;
@Component
public class MetricESSender implements ApplicationListener<BaseMetricEvent> {
protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
private static final int THRESHOLD = 100;
private ThreadPoolExecutor esExecutor = new ThreadPoolExecutor(10, 20, 6000, TimeUnit.MILLISECONDS,
new LinkedBlockingDeque<>(1000),
new NamedThreadFactory("KM-Collect-MetricESSender-ES"),
(r, e) -> LOGGER.warn("class=MetricESSender||msg=KM-Collect-MetricESSender-ES Deque is blocked, taskCount:{}" + e.getTaskCount()));
@PostConstruct
public void init(){
LOGGER.info("class=MetricESSender||method=init||msg=init finished");
}
@Override
public void onApplicationEvent(BaseMetricEvent event) {
if(event instanceof BrokerMetricEvent) {
BrokerMetricEvent brokerMetricEvent = (BrokerMetricEvent)event;
send2es(BROKER_INDEX,
ConvertUtil.list2List(brokerMetricEvent.getBrokerMetrics(), BrokerMetricPO.class)
);
} else if(event instanceof ClusterMetricEvent) {
ClusterMetricEvent clusterMetricEvent = (ClusterMetricEvent)event;
send2es(CLUSTER_INDEX,
ConvertUtil.list2List(clusterMetricEvent.getClusterMetrics(), ClusterMetricPO.class)
);
} else if(event instanceof TopicMetricEvent) {
TopicMetricEvent topicMetricEvent = (TopicMetricEvent)event;
send2es(TOPIC_INDEX,
ConvertUtil.list2List(topicMetricEvent.getTopicMetrics(), TopicMetricPO.class)
);
} else if(event instanceof PartitionMetricEvent) {
PartitionMetricEvent partitionMetricEvent = (PartitionMetricEvent)event;
send2es(PARTITION_INDEX,
ConvertUtil.list2List(partitionMetricEvent.getPartitionMetrics(), PartitionMetricPO.class)
);
} else if(event instanceof GroupMetricEvent) {
GroupMetricEvent groupMetricEvent = (GroupMetricEvent)event;
send2es(GROUP_INDEX,
ConvertUtil.list2List(groupMetricEvent.getGroupMetrics(), GroupMetricPO.class)
);
} else if(event instanceof ReplicaMetricEvent) {
ReplicaMetricEvent replicaMetricEvent = (ReplicaMetricEvent)event;
send2es(REPLICATION_INDEX,
ConvertUtil.list2List(replicaMetricEvent.getReplicationMetrics(), ReplicationMetricPO.class)
);
}
}
/**
 * Send to ES according to the monitoring dimension
*/
private boolean send2es(String index, List<? extends BaseESPO> statsList){
if (CollectionUtils.isEmpty(statsList)) {
return true;
}
if (!EnvUtil.isOnline()) {
LOGGER.info("class=MetricESSender||method=send2es||ariusStats={}||size={}",
index, statsList.size());
}
BaseMetricESDAO baseMetricESDao = BaseMetricESDAO.getByStatsType(index);
if (Objects.isNull( baseMetricESDao )) {
LOGGER.error("class=MetricESSender||method=send2es||errMsg=fail to find {}", index);
return false;
}
int size = statsList.size();
int num = (size) % THRESHOLD == 0 ? (size / THRESHOLD) : (size / THRESHOLD + 1);
if (size < THRESHOLD) {
esExecutor.execute(
() -> baseMetricESDao.batchInsertStats(statsList)
);
return true;
}
for (int i = 1; i < num + 1; i++) {
int end = (i * THRESHOLD) > size ? size : (i * THRESHOLD);
int start = (i - 1) * THRESHOLD;
esExecutor.execute(
() -> baseMetricESDao.batchInsertStats(statsList.subList(start, end))
);
}
return true;
}
}
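`send2es` in the (now removed) `MetricESSender` splits the stats list into batches of at most `THRESHOLD` (100) documents before submitting each batch to the executor, computing the batch count as ceil(size / THRESHOLD). That chunking arithmetic can be sketched and checked in isolation:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchSplitSketch {
    static final int THRESHOLD = 100;

    // Split a list into sub-lists of at most THRESHOLD elements, mirroring
    // the arithmetic in send2es: num = ceil(size / THRESHOLD).
    static <T> List<List<T>> split(List<T> stats) {
        List<List<T>> batches = new ArrayList<>();
        int size = stats.size();
        int num = size % THRESHOLD == 0 ? size / THRESHOLD : size / THRESHOLD + 1;
        for (int i = 1; i <= num; i++) {
            int start = (i - 1) * THRESHOLD;
            int end = Math.min(i * THRESHOLD, size);
            batches.add(stats.subList(start, end));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Integer> data = new ArrayList<>();
        for (int i = 0; i < 250; i++) data.add(i);
        List<List<Integer>> batches = split(data);
        System.out.println(batches.size());        // 3
        System.out.println(batches.get(2).size()); // 50
    }
}
```

One caveat worth noting about the original: `subList` returns a view backed by the source list, so the batches handed to the executor stay valid only as long as the underlying list is not modified afterwards.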
@@ -1,124 +0,0 @@
package com.xiaojukeji.know.streaming.km.collector.metric;
import com.alibaba.fastjson.JSON;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.ReplicationMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.partition.Partition;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionControlItem;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.ReplicaMetricEvent;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
import com.xiaojukeji.know.streaming.km.common.utils.EnvUtil;
import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
import com.xiaojukeji.know.streaming.km.core.service.partition.PartitionService;
import com.xiaojukeji.know.streaming.km.core.service.replica.ReplicaMetricService;
import com.xiaojukeji.know.streaming.km.core.service.version.VersionControlService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import java.util.ArrayList;
import java.util.List;
import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum.METRIC_REPLICATION;
/**
* @author didi
*/
@Component
public class ReplicaMetricCollector extends AbstractMetricCollector<ReplicationMetrics> {
protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
@Autowired
private VersionControlService versionControlService;
@Autowired
private ReplicaMetricService replicaMetricService;
@Autowired
private PartitionService partitionService;
@Override
public void collectMetrics(ClusterPhy clusterPhy) {
Long startTime = System.currentTimeMillis();
Long clusterPhyId = clusterPhy.getId();
List<VersionControlItem> items = versionControlService.listVersionControlItem(clusterPhyId, collectorType().getCode());
List<Partition> partitions = partitionService.listPartitionByCluster(clusterPhyId);
FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(clusterPhyId);
List<ReplicationMetrics> metricsList = new ArrayList<>();
for(Partition partition : partitions) {
for (Integer brokerId: partition.getAssignReplicaList()) {
ReplicationMetrics metrics = new ReplicationMetrics(clusterPhyId, partition.getTopicName(), brokerId, partition.getPartitionId());
metricsList.add(metrics);
future.runnableTask(
String.format("method=ReplicaMetricCollector||clusterPhyId=%d||brokerId=%d||topicName=%s||partitionId=%d",
clusterPhyId, brokerId, partition.getTopicName(), partition.getPartitionId()),
30000,
() -> collectMetrics(clusterPhyId, metrics, items)
);
}
}
future.waitExecute(30000);
publishMetric(new ReplicaMetricEvent(this, metricsList));
LOGGER.info("method=ReplicaMetricCollector||clusterPhyId={}||startTime={}||costTime={}||msg=collect finished.",
clusterPhyId, startTime, System.currentTimeMillis() - startTime);
}
@Override
public VersionItemTypeEnum collectorType() {
return METRIC_REPLICATION;
}
/**************************************************** private method ****************************************************/
private ReplicationMetrics collectMetrics(Long clusterPhyId, ReplicationMetrics metrics, List<VersionControlItem> items) {
long startTime = System.currentTimeMillis();
metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
for(VersionControlItem v : items) {
try {
if (metrics.getMetrics().containsKey(v.getName())) {
continue;
}
Result<ReplicationMetrics> ret = replicaMetricService.collectReplicaMetricsFromKafkaWithCache(
clusterPhyId,
metrics.getTopic(),
metrics.getBrokerId(),
metrics.getPartitionId(),
v.getName()
);
if (null == ret || ret.failed() || null == ret.getData()) {
continue;
}
metrics.putMetric(ret.getData().getMetrics());
if (!EnvUtil.isOnline()) {
LOGGER.info("method=ReplicaMetricCollector||clusterPhyId={}||topicName={}||partitionId={}||metricName={}||metricValue={}",
clusterPhyId, metrics.getTopic(), metrics.getPartitionId(), v.getName(), JSON.toJSONString(ret.getData().getMetrics()));
}
} catch (Exception e) {
LOGGER.error("method=ReplicaMetricCollector||clusterPhyId={}||topicName={}||partition={}||metricName={}||errMsg=exception!",
clusterPhyId, metrics.getTopic(), metrics.getPartitionId(), v.getName(), e);
}
}
// Record collection cost
metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (System.currentTimeMillis() - startTime) / 1000.0f);
return metrics;
}
}
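The private collectMetrics above brackets the work with a cost-time metric: an error sentinel is written first and only overwritten with the real elapsed seconds once collection finishes. A minimal sketch of that bracketing, with illustrative names standing in for the project's Constant fields (not the actual API):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the cost-time bracketing used by the collectors. The metric name
// and the sentinel value are assumptions, not the project's real constants.
class CostTimeSketch {
    static final String COST_TIME_METRIC = "CollectCostTimeUnitSec"; // assumed name
    static final float ERROR_COST_TIME = -1.0f;                      // assumed sentinel

    public static Map<String, Float> collect() {
        Map<String, Float> metrics = new HashMap<>();
        long startTime = System.currentTimeMillis();
        // Sentinel first: if collection dies mid-way, the metric records the failure.
        metrics.put(COST_TIME_METRIC, ERROR_COST_TIME);

        metrics.put("someMetric", 42.0f); // stand-in for the real collection work

        // Overwrite the sentinel with the real cost in seconds.
        metrics.put(COST_TIME_METRIC, (System.currentTimeMillis() - startTime) / 1000.0f);
        return metrics;
    }

    public static void main(String[] args) {
        System.out.println(collect());
    }
}
```

The point of the sentinel is that a crash between the two puts still leaves a recognizable "collection failed" value in the emitted metrics.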


@@ -0,0 +1,50 @@
package com.xiaojukeji.know.streaming.km.collector.metric.connect;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.collector.metric.AbstractMetricCollector;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.ConnectCluster;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.LoggerUtil;
import com.xiaojukeji.know.streaming.km.core.service.connect.cluster.ConnectClusterService;
import org.springframework.beans.factory.annotation.Autowired;
import java.util.List;
/**
* @author didi
*/
public abstract class AbstractConnectMetricCollector<M> extends AbstractMetricCollector<M, ConnectCluster> {
private static final ILog LOGGER = LogFactory.getLog(AbstractConnectMetricCollector.class);
protected static final ILog METRIC_COLLECTED_LOGGER = LoggerUtil.getMetricCollectedLogger();
@Autowired
private ConnectClusterService connectClusterService;
public abstract List<M> collectConnectMetrics(ConnectCluster connectCluster);
@Override
public String getClusterVersion(ConnectCluster connectCluster){
return connectClusterService.getClusterVersion(connectCluster.getId());
}
@Override
public void collectMetrics(ConnectCluster connectCluster) {
long startTime = System.currentTimeMillis();
// Collect the metrics
List<M> metricsList = this.collectConnectMetrics(connectCluster);
// Log the time cost
LOGGER.info(
"metricType={}||connectClusterId={}||costTimeUnitMs={}",
this.collectorType().getMessage(), connectCluster.getId(), System.currentTimeMillis() - startTime
);
// Log the collected metrics
METRIC_COLLECTED_LOGGER.debug("metricType={}||connectClusterId={}||metrics={}!",
this.collectorType().getMessage(), connectCluster.getId(), ConvertUtil.obj2Json(metricsList)
);
}
}


@@ -0,0 +1,83 @@
package com.xiaojukeji.know.streaming.km.collector.metric.connect;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.ConnectCluster;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.connect.ConnectClusterMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionControlItem;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.connect.ConnectClusterMetricEvent;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
import com.xiaojukeji.know.streaming.km.core.service.connect.cluster.ConnectClusterMetricService;
import com.xiaojukeji.know.streaming.km.core.service.version.VersionControlService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import java.util.Collections;
import java.util.List;
import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum.METRIC_CONNECT_CLUSTER;
/**
* @author didi
*/
@Component
public class ConnectClusterMetricCollector extends AbstractConnectMetricCollector<ConnectClusterMetrics> {
protected static final ILog LOGGER = LogFactory.getLog(ConnectClusterMetricCollector.class);
@Autowired
private VersionControlService versionControlService;
@Autowired
private ConnectClusterMetricService connectClusterMetricService;
@Override
public List<ConnectClusterMetrics> collectConnectMetrics(ConnectCluster connectCluster) {
Long startTime = System.currentTimeMillis();
Long clusterPhyId = connectCluster.getKafkaClusterPhyId();
Long connectClusterId = connectCluster.getId();
ConnectClusterMetrics metrics = new ConnectClusterMetrics(clusterPhyId, connectClusterId);
metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
List<VersionControlItem> items = versionControlService.listVersionControlItem(getClusterVersion(connectCluster), collectorType().getCode());
FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(connectClusterId);
for (VersionControlItem item : items) {
future.runnableTask(
String.format("class=ConnectClusterMetricCollector||connectClusterId=%d||metricName=%s", connectClusterId, item.getName()),
30000,
() -> {
try {
Result<ConnectClusterMetrics> ret = connectClusterMetricService.collectConnectClusterMetricsFromKafka(connectClusterId, item.getName());
if (null == ret || !ret.hasData()) {
return null;
}
metrics.putMetric(ret.getData().getMetrics());
} catch (Exception e) {
LOGGER.error(
"method=collectConnectMetrics||connectClusterId={}||metricName={}||errMsg=exception!",
connectClusterId, item.getName(), e
);
}
return null;
}
);
}
future.waitExecute(30000);
metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (System.currentTimeMillis() - startTime) / 1000.0f);
this.publishMetric(new ConnectClusterMetricEvent(this, Collections.singletonList(metrics)));
return Collections.singletonList(metrics);
}
@Override
public VersionItemTypeEnum collectorType() {
return METRIC_CONNECT_CLUSTER;
}
}
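ConnectClusterMetricCollector submits one task per version-control item through FutureWaitUtil and then waits a bounded 30 seconds for the batch. A rough sketch of that shape using a plain ExecutorService as an assumed equivalent of FutureWaitUtil (the utility's real semantics may differ):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.concurrent.*;

// Assumed equivalent of future.runnableTask(...) + future.waitExecute(30000):
// submit one task per metric, then wait a bounded time for all of them.
class ParallelCollectSketch {
    public static Map<String, Float> collectAll(List<String> metricNames) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        Map<String, Float> metrics = new ConcurrentHashMap<>();
        List<Callable<Void>> tasks = new ArrayList<>();
        for (String name : metricNames) {
            tasks.add(() -> {
                metrics.put(name, 1.0f); // stand-in for collectConnectClusterMetricsFromKafka
                return null;
            });
        }
        try {
            // invokeAll with a timeout mirrors the bounded waitExecute(30000)
            pool.invokeAll(tasks, 30, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            pool.shutdown();
        }
        return metrics;
    }

    public static void main(String[] args) {
        System.out.println(collectAll(Arrays.asList("a", "b")).size());
    }
}
```

The bounded wait keeps one slow JMX endpoint from stalling the whole collection cycle; tasks still running at the deadline simply miss that cycle.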


@@ -0,0 +1,107 @@
package com.xiaojukeji.know.streaming.km.collector.metric.connect;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.ConnectCluster;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.connect.ConnectorMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionControlItem;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.connect.ConnectorMetricEvent;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.enums.connect.ConnectorTypeEnum;
import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
import com.xiaojukeji.know.streaming.km.core.service.connect.connector.ConnectorMetricService;
import com.xiaojukeji.know.streaming.km.core.service.connect.connector.ConnectorService;
import com.xiaojukeji.know.streaming.km.core.service.version.VersionControlService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import java.util.ArrayList;
import java.util.List;
import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum.METRIC_CONNECT_CONNECTOR;
/**
* @author didi
*/
@Component
public class ConnectConnectorMetricCollector extends AbstractConnectMetricCollector<ConnectorMetrics> {
protected static final ILog LOGGER = LogFactory.getLog(ConnectConnectorMetricCollector.class);
@Autowired
private VersionControlService versionControlService;
@Autowired
private ConnectorService connectorService;
@Autowired
private ConnectorMetricService connectorMetricService;
@Override
public List<ConnectorMetrics> collectConnectMetrics(ConnectCluster connectCluster) {
Long clusterPhyId = connectCluster.getKafkaClusterPhyId();
Long connectClusterId = connectCluster.getId();
List<VersionControlItem> items = versionControlService.listVersionControlItem(this.getClusterVersion(connectCluster), collectorType().getCode());
Result<List<String>> connectorList = connectorService.listConnectorsFromCluster(connectCluster);
FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(connectClusterId);
List<ConnectorMetrics> metricsList = new ArrayList<>();
for (String connectorName : connectorList.getData()) {
ConnectorMetrics metrics = new ConnectorMetrics(connectClusterId, connectorName);
metrics.setClusterPhyId(clusterPhyId);
metricsList.add(metrics);
future.runnableTask(
String.format("class=ConnectConnectorMetricCollector||connectClusterId=%d||connectorName=%s", connectClusterId, connectorName),
30000,
() -> collectMetrics(connectClusterId, connectorName, metrics, items)
);
}
future.waitResult(30000);
this.publishMetric(new ConnectorMetricEvent(this, metricsList));
return metricsList;
}
@Override
public VersionItemTypeEnum collectorType() {
return METRIC_CONNECT_CONNECTOR;
}
/**************************************************** private method ****************************************************/
private void collectMetrics(Long connectClusterId, String connectorName, ConnectorMetrics metrics, List<VersionControlItem> items) {
long startTime = System.currentTimeMillis();
ConnectorTypeEnum connectorType = connectorService.getConnectorType(connectClusterId, connectorName);
metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
for (VersionControlItem v : items) {
try {
// Skip metrics that have already been collected
if (metrics.getMetrics().get(v.getName()) != null) {
continue;
}
Result<ConnectorMetrics> ret = connectorMetricService.collectConnectClusterMetricsFromKafka(connectClusterId, connectorName, v.getName(), connectorType);
if (null == ret || ret.failed() || null == ret.getData()) {
continue;
}
metrics.putMetric(ret.getData().getMetrics());
} catch (Exception e) {
LOGGER.error(
"method=collectMetrics||connectClusterId={}||connectorName={}||metric={}||errMsg=exception!",
connectClusterId, connectorName, v.getName(), e
);
}
}
// Record collection cost
metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (System.currentTimeMillis() - startTime) / 1000.0f);
}
}
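The `metrics.getMetrics().get(v.getName()) != null` check above exists because one call to the metric service can return a whole batch of values; later items whose value already landed in the map are skipped instead of re-queried. A small sketch of that dedupe (all names illustrative):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the already-collected dedupe: asking for any one metric returns
// the whole batch, so subsequent items found in the map skip the service call.
class DedupeSketch {
    // pretend metric service: any request returns the full batch
    static Map<String, Float> fetchBatch() {
        Map<String, Float> batch = new HashMap<>();
        batch.put("BytesIn", 1.0f);
        batch.put("BytesOut", 2.0f);
        return batch;
    }

    public static int collect(List<String> items) {
        Map<String, Float> metrics = new HashMap<>();
        int serviceCalls = 0;
        for (String item : items) {
            if (metrics.get(item) != null) {
                continue; // already obtained via an earlier batch, skip the call
            }
            serviceCalls++;
            metrics.putAll(fetchBatch());
        }
        return serviceCalls;
    }

    public static void main(String[] args) {
        // BytesOut arrives with the BytesIn batch, so only one service call happens
        System.out.println(collect(Arrays.asList("BytesIn", "BytesOut")));
    }
}
```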


@@ -0,0 +1,117 @@
package com.xiaojukeji.know.streaming.km.collector.metric.connect.mm2;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.collector.metric.connect.AbstractConnectMetricCollector;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.ConnectCluster;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.mm2.MirrorMakerTopic;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.mm2.MirrorMakerMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionControlItem;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.mm2.MirrorMakerMetricEvent;
import com.xiaojukeji.know.streaming.km.common.bean.po.connect.ConnectorPO;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
import com.xiaojukeji.know.streaming.km.core.service.connect.connector.ConnectorService;
import com.xiaojukeji.know.streaming.km.core.service.connect.mm2.MirrorMakerMetricService;
import com.xiaojukeji.know.streaming.km.core.service.connect.mm2.MirrorMakerService;
import com.xiaojukeji.know.streaming.km.core.service.version.VersionControlService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import static com.xiaojukeji.know.streaming.km.common.constant.connect.KafkaConnectConstant.MIRROR_MAKER_SOURCE_CONNECTOR_TYPE;
import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum.METRIC_CONNECT_MIRROR_MAKER;
/**
* @author wyb
* @date 2022/12/15
*/
@Component
public class MirrorMakerMetricCollector extends AbstractConnectMetricCollector<MirrorMakerMetrics> {
protected static final ILog LOGGER = LogFactory.getLog(MirrorMakerMetricCollector.class);
@Autowired
private VersionControlService versionControlService;
@Autowired
private MirrorMakerService mirrorMakerService;
@Autowired
private ConnectorService connectorService;
@Autowired
private MirrorMakerMetricService mirrorMakerMetricService;
@Override
public VersionItemTypeEnum collectorType() {
return METRIC_CONNECT_MIRROR_MAKER;
}
@Override
public List<MirrorMakerMetrics> collectConnectMetrics(ConnectCluster connectCluster) {
Long clusterPhyId = connectCluster.getKafkaClusterPhyId();
Long connectClusterId = connectCluster.getId();
List<ConnectorPO> mirrorMakerList = connectorService.listByConnectClusterIdFromDB(connectClusterId).stream().filter(elem -> elem.getConnectorClassName().equals(MIRROR_MAKER_SOURCE_CONNECTOR_TYPE)).collect(Collectors.toList());
Map<String, MirrorMakerTopic> mirrorMakerTopicMap = mirrorMakerService.getMirrorMakerTopicMap(connectClusterId).getData();
List<VersionControlItem> items = versionControlService.listVersionControlItem(this.getClusterVersion(connectCluster), collectorType().getCode());
FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(clusterPhyId);
List<MirrorMakerMetrics> metricsList = new ArrayList<>();
for (ConnectorPO mirrorMaker : mirrorMakerList) {
MirrorMakerMetrics metrics = new MirrorMakerMetrics(clusterPhyId, connectClusterId, mirrorMaker.getConnectorName());
metricsList.add(metrics);
List<MirrorMakerTopic> mirrorMakerTopicList = mirrorMakerService.getMirrorMakerTopicList(mirrorMaker, mirrorMakerTopicMap);
future.runnableTask(String.format("class=MirrorMakerMetricCollector||connectClusterId=%d||mirrorMakerName=%s", connectClusterId, mirrorMaker.getConnectorName()),
30000,
() -> collectMetrics(connectClusterId, mirrorMaker.getConnectorName(), metrics, items, mirrorMakerTopicList));
}
future.waitResult(30000);
this.publishMetric(new MirrorMakerMetricEvent(this,metricsList));
return metricsList;
}
/**************************************************** private method ****************************************************/
private void collectMetrics(Long connectClusterId, String mirrorMakerName, MirrorMakerMetrics metrics, List<VersionControlItem> items, List<MirrorMakerTopic> mirrorMakerTopicList) {
long startTime = System.currentTimeMillis();
metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
for (VersionControlItem v : items) {
try {
// Skip metrics that have already been collected
if (metrics.getMetrics().get(v.getName()) != null) {
continue;
}
Result<MirrorMakerMetrics> ret = mirrorMakerMetricService.collectMirrorMakerMetricsFromKafka(connectClusterId, mirrorMakerName, mirrorMakerTopicList, v.getName());
if (ret == null || !ret.hasData()) {
continue;
}
metrics.putMetric(ret.getData().getMetrics());
} catch (Exception e) {
LOGGER.error(
"method=collectMetrics||connectClusterId={}||mirrorMakerName={}||metric={}||errMsg=exception!",
connectClusterId, mirrorMakerName, v.getName(), e
);
}
}
metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (System.currentTimeMillis() - startTime) / 1000.0f);
}
}


@@ -0,0 +1,50 @@
package com.xiaojukeji.know.streaming.km.collector.metric.kafka;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.collector.metric.AbstractMetricCollector;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.LoggerUtil;
import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
import org.springframework.beans.factory.annotation.Autowired;
import java.util.List;
/**
* @author didi
*/
public abstract class AbstractKafkaMetricCollector<M> extends AbstractMetricCollector<M, ClusterPhy> {
private static final ILog LOGGER = LogFactory.getLog(AbstractMetricCollector.class);
protected static final ILog METRIC_COLLECTED_LOGGER = LoggerUtil.getMetricCollectedLogger();
@Autowired
private ClusterPhyService clusterPhyService;
public abstract List<M> collectKafkaMetrics(ClusterPhy clusterPhy);
@Override
public String getClusterVersion(ClusterPhy clusterPhy){
return clusterPhyService.getVersionFromCacheFirst(clusterPhy.getId());
}
@Override
public void collectMetrics(ClusterPhy clusterPhy) {
long startTime = System.currentTimeMillis();
// Collect the metrics
List<M> metricsList = this.collectKafkaMetrics(clusterPhy);
// Log the time cost
LOGGER.info(
"metricType={}||clusterPhyId={}||costTimeUnitMs={}",
this.collectorType().getMessage(), clusterPhy.getId(), System.currentTimeMillis() - startTime
);
// Log the collected metrics
METRIC_COLLECTED_LOGGER.debug("metricType={}||clusterPhyId={}||metrics={}!",
this.collectorType().getMessage(), clusterPhy.getId(), ConvertUtil.obj2Json(metricsList)
);
}
}
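AbstractKafkaMetricCollector and AbstractConnectMetricCollector share the same template-method shape: the base class times and logs the run, the subclass only implements the actual collection. A condensed sketch of that pattern (names illustrative, logging replaced by a print):

```java
import java.util.Collections;
import java.util.List;

// Base class owns the timing/logging boilerplate; subclasses supply the hook.
abstract class MetricCollectorSketch<M, C> {
    public abstract List<M> doCollect(C cluster);

    public final List<M> collectMetrics(C cluster) {
        long startTime = System.currentTimeMillis();
        List<M> metricsList = doCollect(cluster); // subclass hook
        // stands in for LOGGER.info("metricType=...||costTimeUnitMs=...")
        System.out.println("costTimeUnitMs=" + (System.currentTimeMillis() - startTime));
        return metricsList;
    }
}

class TemplateSketch extends MetricCollectorSketch<String, Long> {
    @Override
    public List<String> doCollect(Long clusterId) {
        return Collections.singletonList("metric-for-" + clusterId);
    }

    public static void main(String[] args) {
        System.out.println(new TemplateSketch().collectMetrics(1L));
    }
}
```

This is why the refactor in the diffs below renames the subclass entry points to collectKafkaMetrics/collectConnectMetrics and has them return the metrics list: the base class needs the result to log and publish uniformly.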


@@ -1,6 +1,5 @@
-package com.xiaojukeji.know.streaming.km.collector.metric;
+package com.xiaojukeji.know.streaming.km.collector.metric.kafka;
-import com.alibaba.fastjson.JSON;
 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.broker.Broker;
@@ -11,7 +10,6 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionContro
 import com.xiaojukeji.know.streaming.km.common.bean.event.metric.BrokerMetricEvent;
 import com.xiaojukeji.know.streaming.km.common.constant.Constant;
 import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
-import com.xiaojukeji.know.streaming.km.common.utils.EnvUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
 import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerMetricService;
 import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerService;
@@ -28,8 +26,8 @@ import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemT
  * @author didi
  */
 @Component
-public class BrokerMetricCollector extends AbstractMetricCollector<BrokerMetrics> {
+public class BrokerMetricCollector extends AbstractKafkaMetricCollector<BrokerMetrics> {
-    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
+    private static final ILog LOGGER = LogFactory.getLog(BrokerMetricCollector.class);
     @Autowired
     private VersionControlService versionControlService;
@@ -41,32 +39,31 @@ public class BrokerMetricCollector extends AbstractMetricCollector<BrokerMetrics
     private BrokerService brokerService;
     @Override
-    public void collectMetrics(ClusterPhy clusterPhy) {
+    public List<BrokerMetrics> collectKafkaMetrics(ClusterPhy clusterPhy) {
-        Long startTime = System.currentTimeMillis();
         Long clusterPhyId = clusterPhy.getId();
         List<Broker> brokers = brokerService.listAliveBrokersFromDB(clusterPhy.getId());
-        List<VersionControlItem> items = versionControlService.listVersionControlItem(clusterPhyId, collectorType().getCode());
+        List<VersionControlItem> items = versionControlService.listVersionControlItem(this.getClusterVersion(clusterPhy), collectorType().getCode());
         FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(clusterPhyId);
-        List<BrokerMetrics> brokerMetrics = new ArrayList<>();
+        List<BrokerMetrics> metricsList = new ArrayList<>();
         for(Broker broker : brokers) {
             BrokerMetrics metrics = new BrokerMetrics(clusterPhyId, broker.getBrokerId(), broker.getHost(), broker.getPort());
-            brokerMetrics.add(metrics);
+            metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
+            metricsList.add(metrics);
             future.runnableTask(
-                String.format("method=BrokerMetricCollector||clusterPhyId=%d||brokerId=%d", clusterPhyId, broker.getBrokerId()),
+                String.format("class=BrokerMetricCollector||clusterPhyId=%d||brokerId=%d", clusterPhyId, broker.getBrokerId()),
                 30000,
                 () -> collectMetrics(clusterPhyId, metrics, items)
             );
         }
         future.waitExecute(30000);
-        this.publishMetric(new BrokerMetricEvent(this, brokerMetrics));
+        this.publishMetric(new BrokerMetricEvent(this, metricsList));
-        LOGGER.info("method=BrokerMetricCollector||clusterPhyId={}||startTime={}||costTime={}||msg=collect finished.",
-                clusterPhyId, startTime, System.currentTimeMillis() - startTime);
+        return metricsList;
     }
     @Override
@@ -78,7 +75,6 @@ public class BrokerMetricCollector extends AbstractMetricCollector<BrokerMetrics
     private void collectMetrics(Long clusterPhyId, BrokerMetrics metrics, List<VersionControlItem> items) {
         long startTime = System.currentTimeMillis();
-        metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
         for(VersionControlItem v : items) {
             try {
@@ -92,14 +88,11 @@ public class BrokerMetricCollector extends AbstractMetricCollector<BrokerMetrics
                 }
                 metrics.putMetric(ret.getData().getMetrics());
-                if(!EnvUtil.isOnline()){
-                    LOGGER.info("method=BrokerMetricCollector||clusterId={}||brokerId={}||metric={}||metric={}!",
-                        clusterPhyId, metrics.getBrokerId(), v.getName(), JSON.toJSONString(ret.getData().getMetrics()));
-                }
             } catch (Exception e){
-                LOGGER.error("method=BrokerMetricCollector||clusterId={}||brokerId={}||metric={}||errMsg=exception!",
-                    clusterPhyId, metrics.getBrokerId(), v.getName(), e);
+                LOGGER.error(
+                    "method=collectMetrics||clusterPhyId={}||brokerId={}||metricName={}||errMsg=exception!",
+                    clusterPhyId, metrics.getBrokerId(), v.getName(), e
+                );
             }
         }


@@ -1,4 +1,4 @@
-package com.xiaojukeji.know.streaming.km.collector.metric;
+package com.xiaojukeji.know.streaming.km.collector.metric.kafka;
 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
@@ -7,18 +7,15 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.ClusterMetric
 import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionControlItem;
 import com.xiaojukeji.know.streaming.km.common.bean.event.metric.ClusterMetricEvent;
-import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.ClusterMetricPO;
 import com.xiaojukeji.know.streaming.km.common.constant.Constant;
 import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
-import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
-import com.xiaojukeji.know.streaming.km.common.utils.EnvUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
 import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterMetricService;
 import com.xiaojukeji.know.streaming.km.core.service.version.VersionControlService;
 import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.stereotype.Component;
-import java.util.Arrays;
+import java.util.Collections;
 import java.util.List;
 import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum.METRIC_CLUSTER;
@@ -27,8 +24,8 @@ import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemT
  * @author didi
  */
 @Component
-public class ClusterMetricCollector extends AbstractMetricCollector<ClusterMetricPO> {
+public class ClusterMetricCollector extends AbstractKafkaMetricCollector<ClusterMetrics> {
-    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
+    protected static final ILog LOGGER = LogFactory.getLog(ClusterMetricCollector.class);
     @Autowired
     private VersionControlService versionControlService;
@@ -37,35 +34,37 @@ public class ClusterMetricCollector extends AbstractMetricCollector<ClusterMetri
     private ClusterMetricService clusterMetricService;
     @Override
-    public void collectMetrics(ClusterPhy clusterPhy) {
+    public List<ClusterMetrics> collectKafkaMetrics(ClusterPhy clusterPhy) {
         Long startTime = System.currentTimeMillis();
         Long clusterPhyId = clusterPhy.getId();
-        List<VersionControlItem> items = versionControlService.listVersionControlItem(clusterPhyId, collectorType().getCode());
+        List<VersionControlItem> items = versionControlService.listVersionControlItem(this.getClusterVersion(clusterPhy), collectorType().getCode());
         ClusterMetrics metrics = new ClusterMetrics(clusterPhyId, clusterPhy.getKafkaVersion());
+        metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
         FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(clusterPhyId);
         for(VersionControlItem v : items) {
             future.runnableTask(
-                String.format("method=ClusterMetricCollector||clusterPhyId=%d||metricName=%s", clusterPhyId, v.getName()),
+                String.format("class=ClusterMetricCollector||clusterPhyId=%d||metricName=%s", clusterPhyId, v.getName()),
                 30000,
                 () -> {
                     try {
-                        if(null != metrics.getMetrics().get(v.getName())){return null;}
+                        if(null != metrics.getMetrics().get(v.getName())){
+                            return null;
+                        }
                         Result<ClusterMetrics> ret = clusterMetricService.collectClusterMetricsFromKafka(clusterPhyId, v.getName());
-                        if(null == ret || ret.failed() || null == ret.getData()){return null;}
+                        if(null == ret || ret.failed() || null == ret.getData()){
+                            return null;
+                        }
                         metrics.putMetric(ret.getData().getMetrics());
-                        if(!EnvUtil.isOnline()){
-                            LOGGER.info("method=ClusterMetricCollector||clusterPhyId={}||metricName={}||metricValue={}",
-                                clusterPhyId, v.getName(), ConvertUtil.obj2Json(ret.getData().getMetrics()));
-                        }
                     } catch (Exception e){
-                        LOGGER.error("method=ClusterMetricCollector||clusterPhyId={}||metricName={}||errMsg=exception!",
-                            clusterPhyId, v.getName(), e);
+                        LOGGER.error(
+                            "method=collectKafkaMetrics||clusterPhyId={}||metricName={}||errMsg=exception!",
+                            clusterPhyId, v.getName(), e
+                        );
                     }
                     return null;
@@ -76,10 +75,9 @@ public class ClusterMetricCollector extends AbstractMetricCollector<ClusterMetri
         metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (System.currentTimeMillis() - startTime) / 1000.0f);
-        publishMetric(new ClusterMetricEvent(this, Arrays.asList(metrics)));
+        publishMetric(new ClusterMetricEvent(this, Collections.singletonList(metrics)));
-        LOGGER.info("method=ClusterMetricCollector||clusterPhyId={}||startTime={}||costTime={}||msg=msg=collect finished.",
-            clusterPhyId, startTime, System.currentTimeMillis() - startTime);
+        return Collections.singletonList(metrics);
     }
     @Override


@@ -1,6 +1,5 @@
-package com.xiaojukeji.know.streaming.km.collector.metric;
+package com.xiaojukeji.know.streaming.km.collector.metric.kafka;
-import com.alibaba.fastjson.JSON;
 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
@@ -10,20 +9,16 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionContro
 import com.xiaojukeji.know.streaming.km.common.bean.event.metric.GroupMetricEvent;
 import com.xiaojukeji.know.streaming.km.common.constant.Constant;
 import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
-import com.xiaojukeji.know.streaming.km.common.utils.EnvUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
 import com.xiaojukeji.know.streaming.km.core.service.group.GroupMetricService;
 import com.xiaojukeji.know.streaming.km.core.service.group.GroupService;
 import com.xiaojukeji.know.streaming.km.core.service.version.VersionControlService;
-import org.apache.commons.collections.CollectionUtils;
+import org.apache.kafka.common.TopicPartition;
 import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.stereotype.Component;
-import java.util.ArrayList;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
+import java.util.*;
 import java.util.concurrent.ConcurrentHashMap;
 import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum.METRIC_GROUP;
@@ -32,8 +27,8 @@ import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemT
  * @author didi
  */
 @Component
-public class GroupMetricCollector extends AbstractMetricCollector<List<GroupMetrics>> {
+public class GroupMetricCollector extends AbstractKafkaMetricCollector<GroupMetrics> {
-    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
+    protected static final ILog LOGGER = LogFactory.getLog(GroupMetricCollector.class);
     @Autowired
     private VersionControlService versionControlService;
@@ -45,40 +40,38 @@ public class GroupMetricCollector extends AbstractMetricCollector<List<GroupMetr
     private GroupService groupService;
     @Override
-    public void collectMetrics(ClusterPhy clusterPhy) {
+    public List<GroupMetrics> collectKafkaMetrics(ClusterPhy clusterPhy) {
-        Long startTime = System.currentTimeMillis();
         Long clusterPhyId = clusterPhy.getId();
-        List<String> groups = new ArrayList<>();
+        List<String> groupNameList = new ArrayList<>();
         try {
-            groups = groupService.listGroupsFromKafka(clusterPhyId);
+            groupNameList = groupService.listGroupsFromKafka(clusterPhy);
         } catch (Exception e) {
-            LOGGER.error("method=GroupMetricCollector||clusterPhyId={}||msg=exception!", clusterPhyId, e);
+            LOGGER.error("method=collectKafkaMetrics||clusterPhyId={}||msg=exception!", clusterPhyId, e);
         }
-        if(CollectionUtils.isEmpty(groups)){return;}
+        if(ValidateUtils.isEmptyList(groupNameList)) {
+            return Collections.emptyList();
+        }
-        List<VersionControlItem> items = versionControlService.listVersionControlItem(clusterPhyId, collectorType().getCode());
+        List<VersionControlItem> items = versionControlService.listVersionControlItem(this.getClusterVersion(clusterPhy), collectorType().getCode());
FutureWaitUtil<Void> future = getFutureUtilByClusterPhyId(clusterPhyId); FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(clusterPhyId);
Map<String, List<GroupMetrics>> metricsMap = new ConcurrentHashMap<>(); Map<String, List<GroupMetrics>> metricsMap = new ConcurrentHashMap<>();
for(String groupName : groups) { for(String groupName : groupNameList) {
future.runnableTask( future.runnableTask(
String.format("method=GroupMetricCollector||clusterPhyId=%d||groupName=%s", clusterPhyId, groupName), String.format("class=GroupMetricCollector||clusterPhyId=%d||groupName=%s", clusterPhyId, groupName),
30000, 30000,
() -> collectMetrics(clusterPhyId, groupName, metricsMap, items)); () -> collectMetrics(clusterPhyId, groupName, metricsMap, items));
} }
future.waitResult(30000); future.waitResult(30000);
List<GroupMetrics> metricsList = new ArrayList<>(); List<GroupMetrics> metricsList = metricsMap.values().stream().collect(ArrayList::new, ArrayList::addAll, ArrayList::addAll);
metricsMap.values().forEach(elem -> metricsList.addAll(elem));
publishMetric(new GroupMetricEvent(this, metricsList)); publishMetric(new GroupMetricEvent(this, metricsList));
return metricsList;
LOGGER.info("method=GroupMetricCollector||clusterPhyId={}||startTime={}||cost={}||msg=collect finished.",
clusterPhyId, startTime, System.currentTimeMillis() - startTime);
} }
@Override @Override
@@ -91,9 +84,7 @@ public class GroupMetricCollector extends AbstractMetricCollector<List<GroupMetr
private void collectMetrics(Long clusterPhyId, String groupName, Map<String, List<GroupMetrics>> metricsMap, List<VersionControlItem> items) { private void collectMetrics(Long clusterPhyId, String groupName, Map<String, List<GroupMetrics>> metricsMap, List<VersionControlItem> items) {
long startTime = System.currentTimeMillis(); long startTime = System.currentTimeMillis();
List<GroupMetrics> groupMetricsList = new ArrayList<>(); Map<TopicPartition, GroupMetrics> subMetricMap = new HashMap<>();
Map<String, GroupMetrics> tpGroupPOMap = new HashMap<>();
GroupMetrics groupMetrics = new GroupMetrics(clusterPhyId, groupName, true); GroupMetrics groupMetrics = new GroupMetrics(clusterPhyId, groupName, true);
groupMetrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME); groupMetrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
@@ -107,38 +98,31 @@ public class GroupMetricCollector extends AbstractMetricCollector<List<GroupMetr
continue; continue;
} }
ret.getData().stream().forEach(metrics -> { ret.getData().forEach(metrics -> {
if (metrics.isBGroupMetric()) { if (metrics.isBGroupMetric()) {
groupMetrics.putMetric(metrics.getMetrics()); groupMetrics.putMetric(metrics.getMetrics());
} else { return;
String topicName = metrics.getTopic();
Integer partitionId = metrics.getPartitionId();
String tpGroupKey = genTopicPartitionGroupKey(topicName, partitionId);
tpGroupPOMap.putIfAbsent(tpGroupKey, new GroupMetrics(clusterPhyId, partitionId, topicName, groupName, false));
tpGroupPOMap.get(tpGroupKey).putMetric(metrics.getMetrics());
} }
});
if(!EnvUtil.isOnline()){ TopicPartition tp = new TopicPartition(metrics.getTopic(), metrics.getPartitionId());
LOGGER.info("method=GroupMetricCollector||clusterPhyId={}||groupName={}||metricName={}||metricValue={}", subMetricMap.putIfAbsent(tp, new GroupMetrics(clusterPhyId, metrics.getPartitionId(), metrics.getTopic(), groupName, false));
clusterPhyId, groupName, metricName, JSON.toJSONString(ret.getData())); subMetricMap.get(tp).putMetric(metrics.getMetrics());
} });
}catch (Exception e){ } catch (Exception e) {
LOGGER.error("method=GroupMetricCollector||clusterPhyId={}||groupName={}||errMsg=exception!", clusterPhyId, groupName, e); LOGGER.error(
"method=collectMetrics||clusterPhyId={}||groupName={}||errMsg=exception!",
clusterPhyId, groupName, e
);
} }
} }
groupMetricsList.add(groupMetrics); List<GroupMetrics> metricsList = new ArrayList<>();
groupMetricsList.addAll(tpGroupPOMap.values()); metricsList.add(groupMetrics);
metricsList.addAll(subMetricMap.values());
// 记录采集性能 // 记录采集性能
groupMetrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (System.currentTimeMillis() - startTime) / 1000.0f); groupMetrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (System.currentTimeMillis() - startTime) / 1000.0f);
metricsMap.put(groupName, groupMetricsList); metricsMap.put(groupName, metricsList);
}
private String genTopicPartitionGroupKey(String topic, Integer partitionId){
return topic + "@" + partitionId;
} }
} }
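The one-liner that replaces the manual `forEach`/`addAll` loop uses `Stream.collect` with an explicit supplier, accumulator, and combiner to flatten all per-group metric lists into a single list. A minimal sketch of that pattern (class and method names here are hypothetical, not project API):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class FlattenDemo {
    // Flatten every value list of the map into one list in a single collect:
    // ArrayList::new supplies the result container, the first ArrayList::addAll
    // accumulates each value list into it, and the second ArrayList::addAll
    // merges partial containers (used when the stream runs in parallel).
    public static <T> List<T> flatten(Map<String, List<T>> metricsMap) {
        return metricsMap.values().stream()
                .collect(ArrayList::new, ArrayList::addAll, ArrayList::addAll);
    }

    public static void main(String[] args) {
        Map<String, List<String>> metricsMap = new ConcurrentHashMap<>();
        metricsMap.put("group-a", Arrays.asList("m1", "m2"));
        metricsMap.put("group-b", Arrays.asList("m3"));
        System.out.println(flatten(metricsMap).size()); // prints 3
    }
}
```

Unlike `Collectors.toList()`, this three-argument form flattens without an intermediate `flatMap`, since the accumulator consumes whole sub-lists.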


@@ -1,4 +1,4 @@
-package com.xiaojukeji.know.streaming.km.collector.metric;
+package com.xiaojukeji.know.streaming.km.collector.metric.kafka;
 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
@@ -9,8 +9,6 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.topic.Topic;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionControlItem;
 import com.xiaojukeji.know.streaming.km.common.bean.event.metric.PartitionMetricEvent;
 import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
-import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
-import com.xiaojukeji.know.streaming.km.common.utils.EnvUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
 import com.xiaojukeji.know.streaming.km.core.service.partition.PartitionMetricService;
 import com.xiaojukeji.know.streaming.km.core.service.topic.TopicService;
@@ -27,8 +25,8 @@ import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemT
  * @author didi
  */
 @Component
-public class PartitionMetricCollector extends AbstractMetricCollector<PartitionMetrics> {
-    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
+public class PartitionMetricCollector extends AbstractKafkaMetricCollector<PartitionMetrics> {
+    protected static final ILog LOGGER = LogFactory.getLog(PartitionMetricCollector.class);
     @Autowired
     private VersionControlService versionControlService;
@@ -40,13 +38,10 @@ public class PartitionMetricCollector extends AbstractMetricCollector<PartitionM
     private TopicService topicService;
     @Override
-    public void collectMetrics(ClusterPhy clusterPhy) {
-        Long startTime = System.currentTimeMillis();
+    public List<PartitionMetrics> collectKafkaMetrics(ClusterPhy clusterPhy) {
         Long clusterPhyId = clusterPhy.getId();
         List<Topic> topicList = topicService.listTopicsFromCacheFirst(clusterPhyId);
-        List<VersionControlItem> items = versionControlService.listVersionControlItem(clusterPhyId, collectorType().getCode());
-        // fetch all partitions of the cluster
+        List<VersionControlItem> items = versionControlService.listVersionControlItem(this.getClusterVersion(clusterPhy), collectorType().getCode());
         FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(clusterPhyId);
@@ -55,9 +50,9 @@ public class PartitionMetricCollector extends AbstractMetricCollector<PartitionM
             metricsMap.put(topic.getTopicName(), new ConcurrentHashMap<>());
             future.runnableTask(
-                    String.format("method=PartitionMetricCollector||clusterPhyId=%d||topicName=%s", clusterPhyId, topic.getTopicName()),
+                    String.format("class=PartitionMetricCollector||clusterPhyId=%d||topicName=%s", clusterPhyId, topic.getTopicName()),
                     30000,
-                    () -> collectMetrics(clusterPhyId, topic.getTopicName(), metricsMap.get(topic.getTopicName()), items)
+                    () -> this.collectMetrics(clusterPhyId, topic.getTopicName(), metricsMap.get(topic.getTopicName()), items)
             );
         }
@@ -68,10 +63,7 @@ public class PartitionMetricCollector extends AbstractMetricCollector<PartitionM
         this.publishMetric(new PartitionMetricEvent(this, metricsList));
-        LOGGER.info(
-                "method=PartitionMetricCollector||clusterPhyId={}||startTime={}||costTime={}||msg=collect finished.",
-                clusterPhyId, startTime, System.currentTimeMillis() - startTime
-        );
+        return metricsList;
     }
     @Override
@@ -109,17 +101,9 @@ public class PartitionMetricCollector extends AbstractMetricCollector<PartitionM
                 PartitionMetrics allMetrics = metricsMap.get(subMetrics.getPartitionId());
                 allMetrics.putMetric(subMetrics.getMetrics());
             }
-            if (!EnvUtil.isOnline()) {
-                LOGGER.info(
-                        "class=PartitionMetricCollector||method=collectMetrics||clusterPhyId={}||topicName={}||metricName={}||metricValue={}!",
-                        clusterPhyId, topicName, v.getName(), ConvertUtil.obj2Json(ret.getData())
-                );
-            }
         } catch (Exception e) {
             LOGGER.info(
-                    "class=PartitionMetricCollector||method=collectMetrics||clusterPhyId={}||topicName={}||metricName={}||errMsg=exception",
+                    "method=collectMetrics||clusterPhyId={}||topicName={}||metricName={}||errMsg=exception",
                     clusterPhyId, topicName, v.getName(), e
             );
         }


@@ -1,4 +1,4 @@
-package com.xiaojukeji.know.streaming.km.collector.metric;
+package com.xiaojukeji.know.streaming.km.collector.metric.kafka;
 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
@@ -10,8 +10,6 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionContro
 import com.xiaojukeji.know.streaming.km.common.bean.event.metric.TopicMetricEvent;
 import com.xiaojukeji.know.streaming.km.common.constant.Constant;
 import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
-import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
-import com.xiaojukeji.know.streaming.km.common.utils.EnvUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
 import com.xiaojukeji.know.streaming.km.core.service.topic.TopicMetricService;
@@ -31,8 +29,8 @@ import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemT
  * @author didi
 */
 @Component
-public class TopicMetricCollector extends AbstractMetricCollector<List<TopicMetrics>> {
-    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
+public class TopicMetricCollector extends AbstractKafkaMetricCollector<TopicMetrics> {
+    protected static final ILog LOGGER = LogFactory.getLog(TopicMetricCollector.class);
     @Autowired
     private VersionControlService versionControlService;
@@ -46,11 +44,10 @@ public class TopicMetricCollector extends AbstractMetricCollector<List<TopicMetr
     private static final Integer AGG_METRICS_BROKER_ID = -10000;
     @Override
-    public void collectMetrics(ClusterPhy clusterPhy) {
-        Long startTime = System.currentTimeMillis();
+    public List<TopicMetrics> collectKafkaMetrics(ClusterPhy clusterPhy) {
         Long clusterPhyId = clusterPhy.getId();
         List<Topic> topics = topicService.listTopicsFromCacheFirst(clusterPhyId);
-        List<VersionControlItem> items = versionControlService.listVersionControlItem(clusterPhyId, collectorType().getCode());
+        List<VersionControlItem> items = versionControlService.listVersionControlItem(this.getClusterVersion(clusterPhy), collectorType().getCode());
         FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(clusterPhyId);
@@ -64,7 +61,7 @@ public class TopicMetricCollector extends AbstractMetricCollector<List<TopicMetr
             allMetricsMap.put(topic.getTopicName(), metricsMap);
             future.runnableTask(
-                    String.format("method=TopicMetricCollector||clusterPhyId=%d||topicName=%s", clusterPhyId, topic.getTopicName()),
+                    String.format("class=TopicMetricCollector||clusterPhyId=%d||topicName=%s", clusterPhyId, topic.getTopicName()),
                     30000,
                     () -> collectMetrics(clusterPhyId, topic.getTopicName(), metricsMap, items)
             );
@@ -77,8 +74,7 @@ public class TopicMetricCollector extends AbstractMetricCollector<List<TopicMetr
         this.publishMetric(new TopicMetricEvent(this, metricsList));
-        LOGGER.info("method=TopicMetricCollector||clusterPhyId={}||startTime={}||costTime={}||msg=collect finished.",
-                clusterPhyId, startTime, System.currentTimeMillis() - startTime);
+        return metricsList;
     }
     @Override
@@ -118,14 +114,9 @@ public class TopicMetricCollector extends AbstractMetricCollector<List<TopicMetr
                     metricsMap.get(metrics.getBrokerId()).putMetric(metrics.getMetrics());
                 }
             });
-            if (!EnvUtil.isOnline()) {
-                LOGGER.info("method=TopicMetricCollector||clusterPhyId={}||topicName={}||metricName={}||metricValue={}.",
-                        clusterPhyId, topicName, v.getName(), ConvertUtil.obj2Json(ret.getData())
-                );
-            }
         } catch (Exception e) {
-            LOGGER.error("method=TopicMetricCollector||clusterPhyId={}||topicName={}||metricName={}||errMsg=exception!",
+            LOGGER.error(
+                    "method=collectMetrics||clusterPhyId={}||topicName={}||metricName={}||errMsg=exception!",
                     clusterPhyId, topicName, v.getName(), e
             );
         }


@@ -0,0 +1,111 @@
package com.xiaojukeji.know.streaming.km.collector.metric.kafka;

import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
import com.xiaojukeji.know.streaming.km.common.bean.entity.config.ZKConfig;
import com.xiaojukeji.know.streaming.km.common.bean.entity.kafkacontroller.KafkaController;
import com.xiaojukeji.know.streaming.km.common.bean.entity.param.metric.ZookeeperMetricParam;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionControlItem;
import com.xiaojukeji.know.streaming.km.common.utils.Tuple;
import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.ZookeeperMetricEvent;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.ZookeeperInfo;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.ZookeeperMetrics;
import com.xiaojukeji.know.streaming.km.core.service.kafkacontroller.KafkaControllerService;
import com.xiaojukeji.know.streaming.km.core.service.version.VersionControlService;
import com.xiaojukeji.know.streaming.km.core.service.zookeeper.ZookeeperMetricService;
import com.xiaojukeji.know.streaming.km.core.service.zookeeper.ZookeeperService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import java.util.Collections;
import java.util.List;
import java.util.stream.Collectors;
import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum.METRIC_ZOOKEEPER;

/**
 * @author didi
 */
@Component
public class ZookeeperMetricCollector extends AbstractKafkaMetricCollector<ZookeeperMetrics> {
    protected static final ILog LOGGER = LogFactory.getLog(ZookeeperMetricCollector.class);

    @Autowired
    private VersionControlService versionControlService;

    @Autowired
    private ZookeeperMetricService zookeeperMetricService;

    @Autowired
    private ZookeeperService zookeeperService;

    @Autowired
    private KafkaControllerService kafkaControllerService;

    @Override
    public List<ZookeeperMetrics> collectKafkaMetrics(ClusterPhy clusterPhy) {
        Long startTime = System.currentTimeMillis();
        Long clusterPhyId = clusterPhy.getId();
        List<VersionControlItem> items = versionControlService.listVersionControlItem(this.getClusterVersion(clusterPhy), collectorType().getCode());
        List<ZookeeperInfo> aliveZKList = zookeeperService.listFromDBByCluster(clusterPhyId)
                .stream()
                .filter(elem -> Constant.ALIVE.equals(elem.getStatus()))
                .collect(Collectors.toList());
        KafkaController kafkaController = kafkaControllerService.getKafkaControllerFromDB(clusterPhyId);
        ZookeeperMetrics metrics = ZookeeperMetrics.initWithMetric(clusterPhyId, Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
        if (ValidateUtils.isEmptyList(aliveZKList)) {
            // no alive ZK node: publish the event, then return directly
            publishMetric(new ZookeeperMetricEvent(this, Collections.singletonList(metrics)));
            return Collections.singletonList(metrics);
        }

        // build the collection param
        ZookeeperMetricParam param = new ZookeeperMetricParam(
                clusterPhyId,
                aliveZKList.stream().map(elem -> new Tuple<String, Integer>(elem.getHost(), elem.getPort())).collect(Collectors.toList()),
                ConvertUtil.str2ObjByJson(clusterPhy.getZkProperties(), ZKConfig.class),
                kafkaController == null ? Constant.INVALID_CODE : kafkaController.getBrokerId(),
                null
        );

        for (VersionControlItem v : items) {
            try {
                if (null != metrics.getMetrics().get(v.getName())) {
                    continue;
                }
                param.setMetricName(v.getName());
                Result<ZookeeperMetrics> ret = zookeeperMetricService.collectMetricsFromZookeeper(param);
                if (null == ret || ret.failed() || null == ret.getData()) {
                    continue;
                }
                metrics.putMetric(ret.getData().getMetrics());
            } catch (Exception e) {
                LOGGER.error(
                        "method=collectMetrics||clusterPhyId={}||metricName={}||errMsg=exception!",
                        clusterPhyId, v.getName(), e
                );
            }
        }
        metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (System.currentTimeMillis() - startTime) / 1000.0f);
        this.publishMetric(new ZookeeperMetricEvent(this, Collections.singletonList(metrics)));
        return Collections.singletonList(metrics);
    }

    @Override
    public VersionItemTypeEnum collectorType() {
        return METRIC_ZOOKEEPER;
    }
}


@@ -237,7 +237,7 @@ public class CollectThreadPoolService {
     private synchronized FutureWaitUtil<Void> closeOldAndCreateNew(Long shardId) {
         // the new one
         FutureWaitUtil<Void> newFutureUtil = FutureWaitUtil.init(
-                "CollectorMetricsFutureUtil-Shard-" + shardId,
+                "MetricCollect-Shard-" + shardId,
                 this.futureUtilThreadNum,
                 this.futureUtilThreadNum,
                 this.futureUtilQueueSize


@@ -0,0 +1,52 @@
package com.xiaojukeji.know.streaming.km.collector.sink;

import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.po.BaseESPO;
import com.xiaojukeji.know.streaming.km.common.utils.FutureUtil;
import com.xiaojukeji.know.streaming.km.persistence.es.dao.BaseMetricESDAO;
import org.apache.commons.collections.CollectionUtils;
import java.util.List;
import java.util.Objects;

public abstract class AbstractMetricESSender {
    private static final ILog LOGGER = LogFactory.getLog(AbstractMetricESSender.class);

    private static final int THRESHOLD = 100;

    private static final FutureUtil<Void> esExecutor = FutureUtil.init(
            "MetricsESSender",
            10,
            20,
            10000
    );

    /**
     * Send to ES according to the monitoring dimension
     */
    protected boolean send2es(String index, List<? extends BaseESPO> statsList) {
        LOGGER.info("method=send2es||indexName={}||metricsSize={}||msg=send metrics to es", index, statsList.size());
        if (CollectionUtils.isEmpty(statsList)) {
            return true;
        }
        BaseMetricESDAO baseMetricESDao = BaseMetricESDAO.getByStatsType(index);
        if (Objects.isNull(baseMetricESDao)) {
            LOGGER.error("method=send2es||indexName={}||errMsg=find dao failed", index);
            return false;
        }
        for (int i = 0; i < statsList.size(); i += THRESHOLD) {
            final int idxStart = i;
            // send each batch asynchronously
            esExecutor.submitTask(
                    () -> baseMetricESDao.batchInsertStats(statsList.subList(idxStart, Math.min(idxStart + THRESHOLD, statsList.size())))
            );
        }
        return true;
    }
}
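The batching loop in `send2es` cuts the stats list into sub-lists of at most `THRESHOLD` entries before handing each one to the async executor, clamping the end index so the final partial batch is kept. A minimal sketch of just that slicing step (class and method names are illustrative, not project API):

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkDemo {
    static final int THRESHOLD = 100;

    // Cut statsList into views of at most THRESHOLD elements each; subList
    // returns a view, so no element copying happens here.
    public static <T> List<List<T>> chunk(List<T> statsList) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < statsList.size(); i += THRESHOLD) {
            // Math.min clamps the end so the last (possibly shorter) batch is included
            batches.add(statsList.subList(i, Math.min(i + THRESHOLD, statsList.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Integer> stats = new ArrayList<>();
        for (int i = 0; i < 250; i++) stats.add(i);
        List<List<Integer>> batches = chunk(stats);
        System.out.println(batches.size());        // prints 3
        System.out.println(batches.get(2).size()); // prints 50
    }
}
```

Because `subList` views share the backing list, the real sender captures `idxStart` as an effectively final local before the lambda, as required for lambda capture.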


@@ -0,0 +1,33 @@
package com.xiaojukeji.know.streaming.km.collector.sink.connect;

import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.connect.ConnectClusterMetricEvent;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.connect.ConnectClusterMetricPO;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;
import javax.annotation.PostConstruct;
import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.CONNECT_CLUSTER_INDEX;

/**
 * @author wyb
 * @date 2022/11/7
 */
@Component
public class ConnectClusterMetricESSender extends AbstractMetricESSender implements ApplicationListener<ConnectClusterMetricEvent> {
    protected static final ILog LOGGER = LogFactory.getLog(ConnectClusterMetricESSender.class);

    @PostConstruct
    public void init() {
        LOGGER.info("class=ConnectClusterMetricESSender||method=init||msg=init finished");
    }

    @Override
    public void onApplicationEvent(ConnectClusterMetricEvent event) {
        send2es(CONNECT_CLUSTER_INDEX, ConvertUtil.list2List(event.getConnectClusterMetrics(), ConnectClusterMetricPO.class));
    }
}


@@ -0,0 +1,33 @@
package com.xiaojukeji.know.streaming.km.collector.sink.connect;

import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.connect.ConnectorMetricEvent;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.connect.ConnectorMetricPO;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;
import javax.annotation.PostConstruct;
import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.CONNECT_CONNECTOR_INDEX;

/**
 * @author wyb
 * @date 2022/11/7
 */
@Component
public class ConnectorMetricESSender extends AbstractMetricESSender implements ApplicationListener<ConnectorMetricEvent> {
    protected static final ILog LOGGER = LogFactory.getLog(ConnectorMetricESSender.class);

    @PostConstruct
    public void init() {
        LOGGER.info("class=ConnectorMetricESSender||method=init||msg=init finished");
    }

    @Override
    public void onApplicationEvent(ConnectorMetricEvent event) {
        send2es(CONNECT_CONNECTOR_INDEX, ConvertUtil.list2List(event.getConnectorMetricsList(), ConnectorMetricPO.class));
    }
}


@@ -0,0 +1,29 @@
package com.xiaojukeji.know.streaming.km.collector.sink.kafka;

import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.BrokerMetricEvent;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.BrokerMetricPO;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;
import javax.annotation.PostConstruct;
import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.BROKER_INDEX;

@Component
public class BrokerMetricESSender extends AbstractMetricESSender implements ApplicationListener<BrokerMetricEvent> {
    private static final ILog LOGGER = LogFactory.getLog(BrokerMetricESSender.class);

    @PostConstruct
    public void init() {
        LOGGER.info("method=init||msg=init finished");
    }

    @Override
    public void onApplicationEvent(BrokerMetricEvent event) {
        send2es(BROKER_INDEX, ConvertUtil.list2List(event.getBrokerMetrics(), BrokerMetricPO.class));
    }
}


@@ -0,0 +1,29 @@
package com.xiaojukeji.know.streaming.km.collector.sink.kafka;

import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.ClusterMetricEvent;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.ClusterMetricPO;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;
import javax.annotation.PostConstruct;
import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.CLUSTER_INDEX;

@Component
public class ClusterMetricESSender extends AbstractMetricESSender implements ApplicationListener<ClusterMetricEvent> {
    private static final ILog LOGGER = LogFactory.getLog(ClusterMetricESSender.class);

    @PostConstruct
    public void init() {
        LOGGER.info("method=init||msg=init finished");
    }

    @Override
    public void onApplicationEvent(ClusterMetricEvent event) {
        send2es(CLUSTER_INDEX, ConvertUtil.list2List(event.getClusterMetrics(), ClusterMetricPO.class));
    }
}


@@ -0,0 +1,29 @@
package com.xiaojukeji.know.streaming.km.collector.sink.kafka;

import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.GroupMetricEvent;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.GroupMetricPO;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;
import javax.annotation.PostConstruct;
import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.GROUP_INDEX;

@Component
public class GroupMetricESSender extends AbstractMetricESSender implements ApplicationListener<GroupMetricEvent> {
    private static final ILog LOGGER = LogFactory.getLog(GroupMetricESSender.class);

    @PostConstruct
    public void init() {
        LOGGER.info("method=init||msg=init finished");
    }

    @Override
    public void onApplicationEvent(GroupMetricEvent event) {
        send2es(GROUP_INDEX, ConvertUtil.list2List(event.getGroupMetrics(), GroupMetricPO.class));
    }
}


@@ -0,0 +1,29 @@
package com.xiaojukeji.know.streaming.km.collector.sink.kafka;

import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.PartitionMetricEvent;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.PartitionMetricPO;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;
import javax.annotation.PostConstruct;
import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.PARTITION_INDEX;

@Component
public class PartitionMetricESSender extends AbstractMetricESSender implements ApplicationListener<PartitionMetricEvent> {
    private static final ILog LOGGER = LogFactory.getLog(PartitionMetricESSender.class);

    @PostConstruct
    public void init() {
        LOGGER.info("method=init||msg=init finished");
    }

    @Override
    public void onApplicationEvent(PartitionMetricEvent event) {
        send2es(PARTITION_INDEX, ConvertUtil.list2List(event.getPartitionMetrics(), PartitionMetricPO.class));
    }
}


@@ -0,0 +1,29 @@
package com.xiaojukeji.know.streaming.km.collector.sink.kafka;

import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.*;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.*;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;
import javax.annotation.PostConstruct;
import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.TOPIC_INDEX;

@Component
public class TopicMetricESSender extends AbstractMetricESSender implements ApplicationListener<TopicMetricEvent> {
    private static final ILog LOGGER = LogFactory.getLog(TopicMetricESSender.class);

    @PostConstruct
    public void init() {
        LOGGER.info("method=init||msg=init finished");
    }

    @Override
    public void onApplicationEvent(TopicMetricEvent event) {
        send2es(TOPIC_INDEX, ConvertUtil.list2List(event.getTopicMetrics(), TopicMetricPO.class));
    }
}


@@ -0,0 +1,29 @@
package com.xiaojukeji.know.streaming.km.collector.sink.kafka;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.ZookeeperMetricEvent;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.ZookeeperMetricPO;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;
import javax.annotation.PostConstruct;
import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.ZOOKEEPER_INDEX;
@Component
public class ZookeeperMetricESSender extends AbstractMetricESSender implements ApplicationListener<ZookeeperMetricEvent> {
    private static final ILog LOGGER = LogFactory.getLog(ZookeeperMetricESSender.class);

    @PostConstruct
    public void init() {
        LOGGER.info("method=init||msg=init finished");
    }

    @Override
    public void onApplicationEvent(ZookeeperMetricEvent event) {
        send2es(ZOOKEEPER_INDEX, ConvertUtil.list2List(event.getZookeeperMetrics(), ZookeeperMetricPO.class));
    }
}


@@ -0,0 +1,33 @@
package com.xiaojukeji.know.streaming.km.collector.sink.mm2;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.mm2.MirrorMakerMetricEvent;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.mm2.MirrorMakerMetricPO;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;
import javax.annotation.PostConstruct;
import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.CONNECT_MM2_INDEX;
/**
 * @author zengqiao
 * @date 2022/12/20
 */
@Component
public class MirrorMakerMetricESSender extends AbstractMetricESSender implements ApplicationListener<MirrorMakerMetricEvent> {
    protected static final ILog LOGGER = LogFactory.getLog(MirrorMakerMetricESSender.class);

    @PostConstruct
    public void init() {
        LOGGER.info("method=init||msg=init finished");
    }

    @Override
    public void onApplicationEvent(MirrorMakerMetricEvent event) {
        send2es(CONNECT_MM2_INDEX, ConvertUtil.list2List(event.getMetricsList(), MirrorMakerMetricPO.class));
    }
}
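The four senders above all follow the same pattern: a Spring `ApplicationListener` receives a metric event, `ConvertUtil.list2List` converts the metric beans into ES PO objects, and `send2es` writes them to the matching index. `ConvertUtil` is project-internal; as a rough, self-contained sketch of the list-conversion step (the explicit mapper function here is hypothetical — the real utility presumably copies matching bean properties by reflection):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Hypothetical simplification of ConvertUtil.list2List: converts each
// source element into a target object via an explicit mapper, returning
// an empty list for null input (the usual null-tolerant utility behavior).
public class ListConvertSketch {
    public static <S, T> List<T> list2List(List<S> sources, Function<S, T> mapper) {
        List<T> result = new ArrayList<>();
        if (sources == null) {
            return result;
        }
        for (S source : sources) {
            result.add(mapper.apply(source));
        }
        return result;
    }

    public static void main(String[] args) {
        // One "PO" per source metric, in order.
        List<String> pos = list2List(List.of(1, 2, 3), i -> "po-" + i);
        System.out.println(pos);
    }
}
```

The real `list2List(list, TargetPO.class)` takes a target class rather than a mapper, but the shape of the operation — one converted PO per event metric, handed straight to the ES writer — is the same.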


@@ -5,13 +5,13 @@
     <modelVersion>4.0.0</modelVersion>
     <groupId>com.xiaojukeji.kafka</groupId>
     <artifactId>km-common</artifactId>
-    <version>${km.revision}</version>
+    <version>${revision}</version>
     <packaging>jar</packaging>
     <parent>
         <artifactId>km</artifactId>
         <groupId>com.xiaojukeji.kafka</groupId>
-        <version>${km.revision}</version>
+        <version>${revision}</version>
     </parent>
     <properties>
@@ -81,10 +81,6 @@
             <version>3.0.2</version>
         </dependency>
-        <dependency>
-            <groupId>junit</groupId>
-            <artifactId>junit</artifactId>
-        </dependency>
         <dependency>
             <groupId>org.projectlombok</groupId>
             <artifactId>lombok</artifactId>
@@ -127,5 +123,9 @@
             <groupId>org.apache.kafka</groupId>
             <artifactId>kafka_2.13</artifactId>
         </dependency>
+        <dependency>
+            <groupId>org.apache.kafka</groupId>
+            <artifactId>connect-runtime</artifactId>
+        </dependency>
     </dependencies>
 </project>
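The pom change above swaps the custom `${km.revision}` property for `${revision}`, one of Maven's CI-friendly version properties, which lets a build pipeline override the version from the command line. In such setups the property is typically defined once in the root pom; a hypothetical excerpt (the `3.4.0` value is an assumption based on the release named in the commit log, not taken from this diff):

```xml
<!-- Hypothetical root-pom excerpt: CI-friendly versioning via ${revision}.
     Overridable at build time with: mvn -Drevision=3.4.1-SNAPSHOT package -->
<project>
    <groupId>com.xiaojukeji.kafka</groupId>
    <artifactId>km</artifactId>
    <version>${revision}</version>
    <properties>
        <revision>3.4.0</revision>
    </properties>
</project>
```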


@@ -0,0 +1,28 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.cluster;
import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.MetricDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationSortDTO;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import javax.validation.constraints.NotNull;
import java.util.List;
/**
 * @author zengqiao
 * @date 22/02/24
 */
@Data
public class ClusterConnectorsOverviewDTO extends PaginationSortDTO {
    @NotNull(message = "latestMetricNames不允许为空")
    @ApiModelProperty("需要指标点的信息")
    private List<String> latestMetricNames;

    @NotNull(message = "metricLines不允许为空")
    @ApiModelProperty("需要指标曲线的信息")
    private MetricDTO metricLines;

    @ApiModelProperty("需要排序的指标名称列表,比较第一个不为空的metric")
    private List<String> sortMetricNameList;
}


@@ -1,19 +1,18 @@
 package com.xiaojukeji.know.streaming.km.common.bean.dto.cluster;
-import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationMulFuzzySearchDTO;
+import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationBaseDTO;
 import io.swagger.annotations.ApiModelProperty;
 import lombok.Data;
 /**
- * @author zengqiao
- * @date 22/02/24
+ * @author wyb
+ * @date 2022/10/17
  */
 @Data
-public class ClusterGroupsOverviewDTO extends PaginationMulFuzzySearchDTO {
+public class ClusterGroupSummaryDTO extends PaginationBaseDTO {
     @ApiModelProperty("查找该Topic")
-    private String topicName;
+    private String searchTopicName;
     @ApiModelProperty("查找该Group")
-    private String groupName;
+    private String searchGroupName;
 }


@@ -0,0 +1,12 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.cluster;
import lombok.Data;
/**
* @author zengqiao
* @date 22/12/12
*/
@Data
public class ClusterMirrorMakersOverviewDTO extends ClusterConnectorsOverviewDTO {
}


@@ -3,6 +3,7 @@ package com.xiaojukeji.know.streaming.km.common.bean.dto.cluster;
 import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
 import com.xiaojukeji.know.streaming.km.common.bean.dto.BaseDTO;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.config.JmxConfig;
+import com.xiaojukeji.know.streaming.km.common.bean.entity.config.ZKConfig;
 import io.swagger.annotations.ApiModel;
 import io.swagger.annotations.ApiModelProperty;
 import lombok.Data;
@@ -34,4 +35,8 @@ public class ClusterPhyBaseDTO extends BaseDTO {
     @NotNull(message = "jmxProperties不允许为空")
     @ApiModelProperty(value="Jmx配置")
     protected JmxConfig jmxProperties;
+
+    // TODO 前端页面增加时,需要加一个不为空的限制
+    @ApiModelProperty(value="ZK配置")
+    protected ZKConfig zkProperties;
 }


@@ -0,0 +1,13 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.cluster;
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationBaseDTO;
import lombok.Data;
/**
* @author wyc
* @date 2022/9/23
*/
@Data
public class ClusterZookeepersOverviewDTO extends PaginationBaseDTO {
}


@@ -0,0 +1,32 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.connect;
import com.xiaojukeji.know.streaming.km.common.bean.dto.BaseDTO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import lombok.NoArgsConstructor;
import javax.validation.constraints.NotBlank;
import javax.validation.constraints.NotNull;
/**
* @author zengqiao
* @date 2022-10-17
*/
@Data
@NoArgsConstructor
@ApiModel(description = "集群Connector")
public class ClusterConnectorDTO extends BaseDTO {
    @NotNull(message = "connectClusterId不允许为空")
    @ApiModelProperty(value = "Connector集群ID", example = "1")
    protected Long connectClusterId;

    @NotBlank(message = "name不允许为空串")
    @ApiModelProperty(value = "Connector名称", example = "know-streaming-connector")
    protected String connectorName;

    public ClusterConnectorDTO(Long connectClusterId, String connectorName) {
        this.connectClusterId = connectClusterId;
        this.connectorName = connectorName;
    }
}


@@ -0,0 +1,29 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.connect.cluster;
import com.xiaojukeji.know.streaming.km.common.bean.dto.BaseDTO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
/**
* @author zengqiao
* @date 2022-10-17
*/
@Data
@ApiModel(description = "集群Connector")
public class ConnectClusterDTO extends BaseDTO {
    @ApiModelProperty(value = "Connect集群ID", example = "1")
    private Long id;

    @ApiModelProperty(value = "Connect集群名称", example = "know-streaming")
    private String name;

    @ApiModelProperty(value = "Connect集群URL", example = "http://127.0.0.1:8080")
    private String clusterUrl;

    @ApiModelProperty(value = "Connect集群版本", example = "2.5.1")
    private String version;

    @ApiModelProperty(value = "JMX配置", example = "")
    private String jmxProperties;
}


@@ -0,0 +1,20 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.ClusterConnectorDTO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import javax.validation.constraints.NotBlank;
/**
* @author zengqiao
* @date 2022-10-17
*/
@Data
@ApiModel(description = "操作Connector")
public class ConnectorActionDTO extends ClusterConnectorDTO {
    @NotBlank(message = "action不允许为空串")
    @ApiModelProperty(value = "Connector名称", example = "stop|restart|resume")
    private String action;
}
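The `@NotBlank` constraint on `action` only rejects empty strings; the `example` value suggests the accepted operations are `stop`, `restart`, and `resume`. A minimal standalone sketch of how a handler might check the field against that set (class and method names here are hypothetical, not part of the DTO above):

```java
import java.util.Set;

// Hypothetical whitelist check for ConnectorActionDTO.action, assuming the
// accepted operations are exactly those listed in the Swagger example.
public class ConnectorActionCheck {
    private static final Set<String> ALLOWED_ACTIONS = Set.of("stop", "restart", "resume");

    public static boolean isValidAction(String action) {
        // Null-safe: @NotBlank catches empty strings, this catches unknown verbs.
        return action != null && ALLOWED_ACTIONS.contains(action);
    }

    public static void main(String[] args) {
        System.out.println(isValidAction("restart")); // accepted operation
        System.out.println(isValidAction("delete"));  // rejected: not whitelisted
    }
}
```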


@@ -0,0 +1,36 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector;
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.ClusterConnectorDTO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.util.Properties;
/**
* @author zengqiao
* @date 2022-10-17
*/
@Data
@JsonIgnoreProperties(ignoreUnknown = true)
@NoArgsConstructor
@ApiModel(description = "创建Connector")
public class ConnectorCreateDTO extends ClusterConnectorDTO {
    @Deprecated
    @ApiModelProperty(value = "配置, 优先使用config字段3.5.0版本将删除该字段", example = "")
    protected Properties configs;

    @ApiModelProperty(value = "配置", example = "")
    protected Properties config;

    public ConnectorCreateDTO(Long connectClusterId, String connectorName, Properties config) {
        super(connectClusterId, connectorName);
        this.config = config;
    }

    public Properties getSuitableConfig() {
        return config != null ? config : configs;
    }
}
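`getSuitableConfig` is the migration shim for the deprecated `configs` field: callers always read through it, so requests using either field keep working until `configs` is removed in 3.5.0. A standalone sketch of just that fallback (field names taken from the DTO above; the class itself is a simplified stand-in, not the real DTO):

```java
import java.util.Properties;

// Standalone sketch of ConnectorCreateDTO's config fallback: the preferred
// `config` field wins, and the deprecated `configs` field is consulted only
// when `config` is null.
public class ConfigFallbackSketch {
    private final Properties config;  // preferred field
    private final Properties configs; // deprecated legacy field

    public ConfigFallbackSketch(Properties config, Properties configs) {
        this.config = config;
        this.configs = configs;
    }

    public Properties getSuitableConfig() {
        return config != null ? config : configs;
    }

    public static void main(String[] args) {
        Properties legacy = new Properties();
        legacy.setProperty("tasks.max", "1");

        // Only the deprecated field is populated, so the getter falls back to it.
        ConfigFallbackSketch dto = new ConfigFallbackSketch(null, legacy);
        System.out.println(dto.getSuitableConfig().getProperty("tasks.max"));
    }
}
```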


@@ -0,0 +1,14 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.ClusterConnectorDTO;
import io.swagger.annotations.ApiModel;
import lombok.Data;
/**
* @author zengqiao
* @date 2022-10-17
*/
@Data
@ApiModel(description = "删除Connector")
public class ConnectorDeleteDTO extends ClusterConnectorDTO {
}

Some files were not shown because too many files have changed in this diff.