Compare commits

...

130 Commits

Author SHA1 Message Date
zengqiao
c27786a257 bump version to 3.1.0 2022-10-31 14:55:50 +08:00
zengqiao
81910d1958 [Hotfix] Fix NPE on the health status page for newly connected clusters 2022-10-31 14:55:22 +08:00
zengqiao
55d5fc4bde Add v3.1.0 changelog entries 2022-10-31 14:05:42 +08:00
GraceWalk
f30586b150 fix: use the taobao mirror by default for dependency installation 2022-10-29 13:55:36 +08:00
GraceWalk
37037c19f0 fix: update how version info is fetched 2022-10-29 13:55:36 +08:00
GraceWalk
1a5e2c7309 fix: improve the error pages 2022-10-29 13:55:36 +08:00
GraceWalk
941dd4fd65 feat: support the Zookeeper module 2022-10-29 13:55:36 +08:00
GraceWalk
5f6df3681c feat: improve health status display 2022-10-29 13:55:36 +08:00
zengqiao
7d045dbf05 Add ZK health inspection tasks 2022-10-29 13:55:07 +08:00
zengqiao
4ff4accdc3 Add v3.1.0 upgrade notes 2022-10-29 13:55:07 +08:00
zengqiao
bbe967c4a8 Add a multi-cluster health status overview 2022-10-29 13:55:07 +08:00
zengqiao
b101cec6fa Change health score to health status 2022-10-29 13:55:07 +08:00
zengqiao
e98ec562a2 Add the current node path to Znode info 2022-10-29 13:55:07 +08:00
zengqiao
0e71ecc587 Extend the expiration time of health check results 2022-10-29 13:55:07 +08:00
zengqiao
0f11a65df8 Add a method to get the ZK namespace 2022-10-29 13:55:07 +08:00
zengqiao
da00c8c877 Restore the error message for consumer group reset failures 2022-10-29 13:55:07 +08:00
hongtenzone@foxmail.com
8b177877bb Add release notes 2022-10-28 15:35:26 +08:00
hongtenzone@foxmail.com
ea199dca8d Add release notes 2022-10-28 15:35:26 +08:00
renxiangde
88b5833f77 [Bugfix] Fix "Topic does not exist" when viewing Topic-Messages right after creating a Topic (#697) 2022-10-27 11:04:26 +08:00
zwen
127b5be651 [fix] preferredReplicaElection is not invoked as expected 2022-10-27 10:15:15 +08:00
Mengqi777
80f001cdd5 [ISSUE #723] Ignore the error and continue packaging km-rest when there is no git directory 2022-10-26 10:14:14 +08:00
zengqiao
30d297cae1 bump version to 3.1.0-SNAPSHOT 2022-10-21 17:13:02 +08:00
zengqiao
a96853db90 bump version to v3.0.1 2022-10-21 15:02:09 +08:00
zengqiao
c1502152c0 Revert "bump version to 3.1.0"
This reverts commit 7b5c2d80
2022-10-21 14:59:42 +08:00
GraceWalk
afda292796 fix: update the typescript version 2022-10-21 14:47:01 +08:00
GraceWalk
163cab78ae fix: copy & style tweaks 2022-10-21 14:47:01 +08:00
GraceWalk
8f4ff36c09 fix: improve Topic partition-expansion name & description display 2022-10-21 14:47:01 +08:00
GraceWalk
47b6b3577a fix: show connection status in the Broker list jmxPort column 2022-10-21 14:47:01 +08:00
GraceWalk
f3eca3b214 fix: refactor the ConsumerGroup list & detail pages 2022-10-21 14:47:00 +08:00
GraceWalk
62f7d3f72f fix: chart logic & display improvements 2022-10-21 14:47:00 +08:00
GraceWalk
26e60d8a64 fix: improve global Message & Notification display 2022-10-21 14:47:00 +08:00
zengqiao
df655a250c Add v3.0.1 changelog entries 2022-10-21 14:36:29 +08:00
zengqiao
811fc9b400 Add v3.0.1 upgrade notes 2022-10-21 14:32:57 +08:00
zengqiao
83df02783c Remove docs-related files from the installation package 2022-10-21 14:32:07 +08:00
zengqiao
6a5efce874 [Bugfix] Fix an exception caused by key conflicts when converting metric version info from a list to a map 2022-10-21 12:06:22 +08:00
zengqiao
fa0ae5e474 [Optimize] Add JMX connection status info to the cluster Broker list
1. When the current page has no data, one common cause is a JMX connection failure;
2. Adding connection status to the Broker list makes troubleshooting easier;
2022-10-21 12:03:19 +08:00
zengqiao
cafd665a2d [Optimize] Remove the Replica metric collection task
1. When a cluster has many replicas, metric collection performance degrades severely;
2. Replica metrics are basically only needed for real-time fetches, so the Replica metric collection task is disabled for now; it may be re-enabled later depending on product needs;
2022-10-21 11:49:58 +08:00
zengqiao
e8f77a456b [Optimize] Optimize ZK metric collection to reduce duplicate collection (#709)
1. Avoid collecting metrics twice when different clusters share the same ZK address;
2. Avoid retrying a ZK address in the next cycle after metric collection from that address has failed;
2022-10-21 11:26:07 +08:00
_haoqi
4510c62ebd [ISSUE #677] Restart causes NPE in some info collection 2022-10-20 15:36:32 +08:00
zengqiao
79864955e1 [Feature] Display the cluster Group list by Group dimension (#580) 2022-10-20 13:29:43 +08:00
Richard
ff26a8d46c fix issue:
* [issue #700] Adjust the prompt and replace the Arrays.asList() with the Collections.singletonList()
2022-10-19 15:19:43 +08:00
dianyang12138
cc226d552e fix: fix the ES template error 2022-10-19 11:44:00 +08:00
EricZeng
962f89475b Merge pull request #699 from silent-night-no-trace/dev
[ISSUE #683]  fix ldap bug
2022-10-19 10:23:47 +08:00
night.liang
ec204a1605 fix ldap bug 2022-10-18 20:16:40 +08:00
早晚会起风
58d7623938 Merge pull request #696 from chenzhongyu11/dev
[ISSUE #672] Fix incorrect time display in health inspection results
2022-10-18 10:41:47 +08:00
EricZeng
8f4ecfcdc0 Merge pull request #691 from didi/dev
Add the Kafka-Group table
2022-10-17 20:30:32 +08:00
zengqiao
ef719cedbc Add the Kafka-Group table 2022-10-17 10:34:21 +08:00
EricZeng
b7856c892b Merge pull request #690 from didi/master
Merge the default branch
2022-10-17 10:30:18 +08:00
EricZeng
7435a78883 Merge pull request #689 from didi/dev
Fix a deadlock when replacing health check results
2022-10-17 10:26:11 +08:00
chenzy
f49206b316 Fix incorrect time display by switching from 12-hour to 24-hour format 2022-10-16 22:57:50 +08:00
EricZeng
7d500a0721 Merge pull request #684 from RichardZhengkay/dev
fix issue: [#662]
2022-10-15 14:39:37 +08:00
EricZeng
98a519f20b Merge pull request #682 from haoqi123/fix_678
[ISSUE #678] zk-Latency avg with multiple decimal places throws NPE
2022-10-15 14:17:23 +08:00
Richard
39b655bb43 fix issue:
* [issue #662] Fix deadlocks caused by adding data using MySQL's REPLACE method
2022-10-14 14:03:16 +08:00
_haoqi
78d56a49fe Fix a numeric conversion exception when zk-Latency avg is a decimal 2022-10-14 11:53:48 +08:00
EricZeng
d2e9d1fa01 Merge pull request #673 from didi/dev
fix [ISSUE-666] Error in ks_km_zookeeper table role type #666
2022-10-13 18:57:06 +08:00
zengqiao
41ff914dc3 Fix the wrong type of the role field in the ZK metadata table 2022-10-13 18:50:41 +08:00
shirenchuang
3ba447fac2 update readme 2022-10-13 18:49:06 +08:00
shirenchuang
e9cc380a2e update readme 2022-10-13 18:30:13 +08:00
EricZeng
017cac9bbe Merge pull request #670 from RichardZhengkay/dev
fix issue: [#666]
2022-10-13 18:25:15 +08:00
Richard
9ad72694af fix issue:
* [issue #666] Fix the type of the role field in the ks_km_zookeeper table
2022-10-13 18:00:43 +08:00
shirenchuang
e8f9821870 Merge remote-tracking branch 'origin/master' 2022-10-13 16:31:03 +08:00
shirenchuang
bb167b9f8d update readme 2022-10-13 15:31:34 +08:00
石臻臻的杂货铺
28fbb5e130 Merge pull request #665 from zwOvO/patch-1
[ISSUE #664] Fix the hyperlink for "Resolving JMX connection failures"
2022-10-13 10:17:29 +08:00
EricZeng
16101e81e8 Merge pull request #661 from didi/dev
Merge the dev branch
2022-10-13 10:16:14 +08:00
赤月
aced504d2a Update faq.md 2022-10-12 22:08:29 +08:00
shirenchuang
abb064d9d1 update readme: add who's using Know Streaming 2022-10-12 19:15:19 +08:00
zengqiao
dc1899a1cd Fix the missing service status field in the cluster ZK list response 2022-10-12 16:45:47 +08:00
zengqiao
442f34278c Add ZK metrics to the returned metric info 2022-10-12 16:44:07 +08:00
zengqiao
a6dcbcd35b Remove unused imports 2022-10-12 16:43:16 +08:00
zengqiao
2b600e96eb Optimize health check tasks 2022-10-12 16:41:27 +08:00
zengqiao
177bb80f31 Add ES username/password config options to application.yml 2022-10-12 16:36:04 +08:00
zengqiao
63fbe728c4 Report ZK metrics to Prometheus 2022-10-12 11:11:25 +08:00
EricZeng
b33020840b Add a service liveness statistics method to ZookeeperService (#659) 2022-10-12 11:07:52 +08:00
zengqiao
c5caf7c0d6 Add a service liveness statistics method to ZookeeperService 2022-10-12 11:02:41 +08:00
EricZeng
0f0473db4c Add a float-to-integer conversion method (#658)
Add a float-to-integer conversion method
2022-10-12 10:09:16 +08:00
zengqiao
beadde3e06 Add a float-to-integer conversion method 2022-10-11 18:46:16 +08:00
EricZeng
a423a20480 Fix missing metrics when fetching TopN Broker metrics (#657)
Fix missing metrics when fetching TopN Broker metrics
2022-10-11 18:44:02 +08:00
shirenchuang
79f0a23813 update contributor document 2022-10-11 17:38:15 +08:00
zengqiao
780fdea2cc Fix missing metrics when fetching TopN Broker metrics 2022-10-11 16:54:39 +08:00
shirenchuang
1c0fda1adf Merge remote-tracking branch 'origin/master' 2022-10-11 10:39:08 +08:00
EricZeng
9cf13e9b30 Add a Broker service liveness API (#654)
Add a Broker service liveness API
2022-10-10 19:56:12 +08:00
zengqiao
87cd058fd8 Add a Broker service liveness API 2022-10-10 19:54:47 +08:00
EricZeng
81b1ec48c2 Update the contributor list (#653)
Update the contributor list
2022-10-10 19:52:50 +08:00
zengqiao
66dd82f4fd Update the contributor list 2022-10-10 19:49:22 +08:00
EricZeng
ce35b23911 Fix ZK metric query failures caused by a DSL error (#652)
Fix ZK metric query failures caused by a DSL error
2022-10-10 19:27:48 +08:00
zengqiao
e79342acf5 Fix ZK metric query failures caused by a DSL error 2022-10-10 19:19:05 +08:00
EricZeng
3fc9f39d24 Merge pull request #651 from didi/master
Merge the master branch
2022-10-10 19:10:48 +08:00
shirenchuang
0221fb3a4a Contributor docs 2022-10-10 18:02:19 +08:00
shirenchuang
f009f8b7ba Contributor docs 2022-10-10 17:21:21 +08:00
shirenchuang
b76959431a Contributor docs 2022-10-10 16:55:33 +08:00
shirenchuang
975370b593 Contributor docs 2022-10-10 15:57:07 +08:00
shirenchuang
7275030971 Contributor docs 2022-10-10 15:50:16 +08:00
shirenchuang
99b0be5a95 Merge branch 'master' into docs_only 2022-10-10 15:01:00 +08:00
石臻臻的杂货铺
edd3f95fc4 Update CONTRIBUTING.md 2022-10-10 14:22:24 +08:00
石臻臻的杂货铺
479f983b09 Update CONTRIBUTING.md 2022-10-10 13:58:35 +08:00
石臻臻的杂货铺
7650332252 Update CONTRIBUTING.md 2022-10-10 13:50:55 +08:00
shirenchuang
8f1a021851 readme 2022-10-10 13:46:14 +08:00
shirenchuang
ce4df4d5fd Merge remote-tracking branch 'origin/master' 2022-10-10 13:00:28 +08:00
shirenchuang
bd43ae1b5d Issue templates 2022-10-10 12:57:53 +08:00
石臻臻的杂货铺
8fa34116b9 Merge pull request #648 from didi/docs_only
PR template
2022-10-10 12:39:38 +08:00
shirenchuang
7e92553017 PR template 2022-10-10 11:42:04 +08:00
shirenchuang
b7e243a693 Merge remote-tracking branch 'origin/master' 2022-10-09 17:23:16 +08:00
shirenchuang
35d4888afb Contributor guideline docs 2022-10-09 17:03:46 +08:00
EricZeng
b3e8a4f0f6 Merge pull request #647 from didi/dev
Merge the DEV branch
2022-10-09 16:54:45 +08:00
shirenchuang
321125caee issue template 2022-10-09 15:47:13 +08:00
shirenchuang
e01427aa4f issue template 2022-10-09 15:42:40 +08:00
shirenchuang
14652e7f7a issue template 2022-10-09 15:39:20 +08:00
shirenchuang
7c05899dbd issue template 2022-10-09 15:26:57 +08:00
shirenchuang
56726b703f issue template 2022-10-09 13:56:44 +08:00
shirenchuang
6237b0182f issue template 2022-10-09 12:27:27 +08:00
EricZeng
be5b662f65 Merge pull request #645 from didi/dev_feature_zk_kerberos
How to modify the code to support ZK Kerberos authentication
2022-10-09 10:39:26 +08:00
EricZeng
224698355c Revert to the original code
Revert to the original code
2022-10-09 10:38:36 +08:00
EricZeng
8f47138ecd Merge pull request #643 from didi/dev_3.1
Monitor Kafka's ZK
2022-10-08 17:22:03 +08:00
zengqiao
d159746391 Update the docs for connecting ZK clusters with Kerberos authentication 2022-10-08 17:00:08 +08:00
EricZeng
63df93ea5e Merge pull request #608 from luhea/dev_feature_zk_kerberos
Add zk supported kerberos
2022-10-08 16:11:37 +08:00
EricZeng
38948c0daa Merge pull request #644 from didi/master
Merge the master branch
2022-10-08 16:09:40 +08:00
zengqiao
6c610427b6 ZK: add a ZK info query API 2022-10-08 15:46:18 +08:00
zengqiao
b4cc31c459 ZK: collect metrics into ES 2022-10-08 15:31:59 +08:00
zengqiao
7d781712c9 ZK: sync ZK metadata to the DB 2022-10-08 15:19:09 +08:00
zengqiao
dd61ce9b2a ZK: add default config values 2022-10-08 14:58:28 +08:00
zengqiao
69a7212986 ZK: add retrieval of four-letter-word command info 2022-10-08 14:52:17 +08:00
EricZeng
ff05a951fd Merge pull request #642 from didi/master
Merge the master branch
2022-10-08 14:42:37 +08:00
EricZeng
89d5357b40 Merge pull request #641 from didi/dev
Remove dead health score calculation code
2022-10-08 14:41:27 +08:00
zengqiao
7ca3d65c42 Remove dead health score calculation code 2022-10-08 14:15:20 +08:00
zengqiao
7b5c2d800f bump version to 3.1.0 2022-09-29 15:13:41 +08:00
luhe
c8806dbb4d Modify the code to support ZK Kerberos authentication, with config docs 2022-09-21 16:09:04 +08:00
luhe
e5802c7f50 Modify the code to support ZK Kerberos authentication, with config docs 2022-09-21 16:02:38 +08:00
luhe
590f684d66 Modify the code to support ZK Kerberos authentication, with config docs 2022-09-21 15:59:31 +08:00
luhe
8e5a67f565 Modify the code to support ZK Kerberos authentication 2022-09-21 15:58:59 +08:00
luhe
8d2fbce11e Modify the code to support ZK Kerberos authentication 2022-09-21 15:54:30 +08:00
305 changed files with 11124 additions and 2708 deletions

.github/ISSUE_TEMPLATE/bug_report.md

@@ -0,0 +1,51 @@
---
name: Report a Bug
about: Report a bug in KnowStreaming
title: ''
labels: bug
assignees: ''
---
- [ ] I have searched the existing [issues](https://github.com/didi/KnowStreaming/issues) and found no duplicates.
Would you like to claim this bug?
「 Y / N 」
### Environment
* KnowStreaming version : <font size=4 color =red> xxx </font>
* Operating System version : <font size=4 color =red> xxx </font>
* Java version : <font size=4 color =red> xxx </font>
### Steps to reproduce
1. xxx
2. xxx
3. xxx
### Expected result
<!-- What did you expect to happen? -->
### Actual result
<!-- What actually happened? -->
---
If there is an exception, please attach the stack trace:
```
Just put your stack trace here!
```

.github/ISSUE_TEMPLATE/config.yml

@@ -0,0 +1,8 @@
blank_issues_enabled: true
contact_links:
  - name: Discussions
    url: https://github.com/didi/KnowStreaming/discussions/new
    about: Start questions, discussions, etc.
  - name: KnowStreaming website
    url: https://knowstreaming.com/
    about: KnowStreaming website


@@ -0,0 +1,26 @@
---
name: Optimization suggestion
about: Suggest an optimization for an existing feature
title: ''
labels: Optimization Suggestions
assignees: ''
---
- [ ] I have searched the existing [issues](https://github.com/didi/KnowStreaming/issues) and found no duplicates.
Would you like to claim this optimization suggestion?
「 Y / N 」
### Environment
* KnowStreaming version : <font size=4 color =red> xxx </font>
* Operating System version : <font size=4 color =red> xxx </font>
* Java version : <font size=4 color =red> xxx </font>
### Feature to optimize
### Suggested optimization


@@ -0,0 +1,20 @@
---
name: Propose a new feature/requirement
about: Propose a feature requirement for KnowStreaming
title: ''
labels: feature
assignees: ''
---
- [ ] I searched [issues](https://github.com/didi/KnowStreaming/issues) and found no related feature request.
- [ ] I searched the released versions in the [release note](https://github.com/didi/KnowStreaming/releases) and found no such feature.
Would you like to claim this Feature?
「 Y / N 」
## Describe the requirement here
<!-- Please describe your requirement as clearly as possible -->

.github/ISSUE_TEMPLATE/question.md

@@ -0,0 +1,12 @@
---
name: Ask a question
about: Ask a question about KnowStreaming
title: ''
labels: question
assignees: ''
---
- [ ] I have searched the existing [issues](https://github.com/didi/KnowStreaming/issues) and found no duplicates.
## Ask your question here

.github/PULL_REQUEST_TEMPLATE.md

@@ -0,0 +1,22 @@
Please do not create a Pull Request without first creating an Issue.
## What is the purpose of the change
XXXXX
## Brief changelog
XX
## Verifying this change
XXXX
Please follow this checklist to help us incorporate your contribution quickly and easily:
* [ ] Make sure there is a GitHub issue filed for the change, usually before you start working on it. Trivial changes like typos do not require a GitHub issue. Your Pull Request should address just this issue, without other changes: one PR resolves one issue.
* [ ] Format the Pull Request title, e.g. [ISSUE #123] support Confluent Schema Registry. Each commit in the Pull Request should have a meaningful subject line and body.
* [ ] Write a Pull Request description that is detailed enough to understand what the Pull Request does, how, and why.
* [ ] Write the necessary unit tests to verify your logic. If a new feature or significant change is submitted, remember to add integration-test in the test module.
* [ ] Make sure compilation passes and the integration tests pass.

CODE_OF_CONDUCT.md

@@ -0,0 +1,74 @@
# Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, gender identity and expression, level of experience,
education, socio-economic status, nationality, personal appearance, race,
religion, or sexual identity and orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment
include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at shirenchuang@didiglobal.com . All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
[homepage]: https://www.contributor-covenant.org


@@ -1,28 +1,150 @@
# Contribution Guideline
Thanks for considering to contribute this project. All issues and pull requests are highly appreciated.
## Pull Requests
Before sending pull request to this project, please read and follow guidelines below.
# Contributing to KnowStreaming
1. Branch: We only accept pull request on `dev` branch.
2. Coding style: Follow the coding style used in LogiKM.
3. Commit message: Use English and be aware of your spell.
4. Test: Make sure to test your code.
Add device mode, API version, related log, screenshots and other related information in your pull request if possible.
Welcome 👏🏻 to KnowStreaming! This document is a guide on how to contribute to KnowStreaming.
NOTE: We assume all your contribution can be licensed under the [AGPL-3.0](LICENSE).
If you find anything incorrect or missing, please leave comments/suggestions.
## Issues
## Code of Conduct
Please read and observe our [Code of Conduct](./CODE_OF_CONDUCT.md).
We love clearly described issues. :)
Following information can help us to resolve the issue faster.
* Device mode and hardware information.
* API version.
* Logs.
* Screenshots.
* Steps to reproduce the issue.
## Contributing
**KnowStreaming** welcomes new participants in any role, including **User**, **Contributor**, **Committer**, and **PMC**.
We encourage newcomers to actively join the **KnowStreaming** project and grow from User to Contributor, Committer, and even PMC.
To do so, newcomers need to actively contribute to the **KnowStreaming** project. The following describes how to contribute to **KnowStreaming**.
### Creating/opening an Issue
If you find a typo in the docs, **find a bug** in the code, want a **new feature**, or want to **make a suggestion**, you can [create an Issue](https://github.com/didi/KnowStreaming/issues/new/choose) on GitHub to report it.
If you want to contribute directly, you can pick issues with the labels below:
- [contribution welcome](https://github.com/didi/KnowStreaming/labels/contribution%20welcome): issues that badly need to be fixed or added
- [good first issue](https://github.com/didi/KnowStreaming/labels/good%20first%20issue): newcomer-friendly issues, good for warming up.
<font color=red ><b>Note that any PR must be associated with a valid issue. Otherwise the PR will be rejected.</b></font>
### Starting your contribution
**Branches**
We use the `dev` branch as the development branch, which means it is an unstable branch.
Our branching model also follows [https://nvie.com/posts/a-successful-git-branching-model/](https://nvie.com/posts/a-successful-git-branching-model/). We strongly recommend newcomers read the above article before creating a PR.
**Contribution process**
For convenience, let us define two terms:
the repository you fork is your personal repository, which we call the **forked repository**;
the source project you forked from, we call the **source repository**.
If you are ready to create a PR, here is the contributor workflow:
1. Fork the [KnowStreaming](https://github.com/didi/KnowStreaming) project into your own repository
2. Pull from the source repository's `dev` and create your own local branch, for example: `dev`
3. Modify the code on your local branch
4. Rebase onto the development branch and resolve conflicts
5. Commit and push your changes to your own **forked repository**
6. Create a Pull Request against the `dev` branch of the **source repository**.
7. Wait for a reply. If the reply is slow, please push us mercilessly.
For a more detailed contribution process, see: [Contribution Process](./docs/contributer_guide/贡献流程.md)
When creating a Pull Request:
1. Please follow the PR [template](./.github/PULL_REQUEST_TEMPLATE.md)
2. Please make sure the PR has a corresponding issue.
3. If your PR contains large changes, e.g. a component refactor or a new component, please write detailed documentation about its design and usage (in the corresponding issue).
4. Note that a single PR should not be too large. If heavy changes are needed, it is better to split them into several separate PRs.
5. Before the PR is merged, try to keep the final commit message clear and concise, squashing multiple commits into one where possible.
6. After the PR is created, one or more reviewers will be assigned to it.
<font color=red><b>If your PR contains large changes, e.g. a component refactor or a new component, please write detailed documentation about its design and usage.</b></font>
# Code Review Guideline
Committers take turns reviewing code, to ensure every PR is reviewed by at least one Committer before merging.
Some principles:
- Readability: important code should be well documented. APIs should have Javadoc. Code style should be consistent with the existing style.
- Elegance: new functions, classes, or components should be well designed.
- Testability: unit tests should cover 80% of new code.
- Maintainability: comply with our coding conventions.
# Developers
## Becoming a Contributor
Anyone who successfully submits and merges a PR is a Contributor.
For the contributor list, see: [Contributor List](./docs/contributer_guide/开发者名单.md)
## Trying to become a Committer
In general, contribute 8 significant patches and get at least three different people to review them (you need the support of 3 Committers).
Then ask someone to nominate you. You need to show your:
1. at least 8 significant PRs and the related project issues
2. ability to collaborate with the team
3. understanding of the project's codebase and coding style
4. ability to write good code
A current Committer can nominate you via an issue with the `nomination` label in KnowStreaming, including:
1. your first and last name
2. a link to your Git profile
3. an explanation of why you should be a Committer
4. details of 3 PRs and related issues the nominator worked on with you that demonstrate your ability.
Two other Committers need to support your **nomination**. If no one objects within 5 working days, you become a Committer; if anyone objects or wants more information, the Committers will discuss and usually reach a consensus (within 5 working days).
# Open Source Incentive Program
We warmly welcome developers to contribute to the KnowStreaming open-source project, and we will reward contributors accordingly as recognition and thanks.
## Ways to contribute
1. Actively participate in Issue discussions, e.g. answering questions, offering ideas, or reporting unresolvable bugs (Issue)
2. Write and improve the project's documentation (Wiki)
3. Submit patches to improve the code (Coding)
## What you will get
1. Inclusion and display in the KnowStreaming open-source contributor list
2. A KnowStreaming open-source contributor certificate (paper & electronic)
3. A KnowStreaming contributor gift package (KnowStreaming/DiDi merchandise)
## Rules
- Both Contributors and Committers receive the corresponding certificate and gift package
- Each quarter the KnowStreaming project team selects outstanding contributors and issues certificates.
- An annual selection is held at the end of the year
For the contributor list, see: [Contributor List](./docs/contributer_guide/开发者名单.md)


@@ -45,7 +45,14 @@
## About `Know Streaming`
`Know Streaming` is a cloud-native Kafka management platform, born from years of Kafka operating practice across many Internet companies. It focuses on core scenarios such as Kafka operations and management, monitoring and alerting, resource governance, and multi-active disaster recovery. It delivers platform-level, visualized, and intelligent capabilities for user experience, monitoring, and operations, and provides a series of distinctive features that greatly simplify daily work for users and operators, enabling ordinary operators to become Kafka experts. Overall it has the following characteristics:
`Know Streaming` is a cloud-native Kafka management platform, born from years of Kafka operating practice across many Internet companies. It focuses on core scenarios such as Kafka operations and management, monitoring and alerting, resource governance, and multi-active disaster recovery. It delivers platform-level, visualized, and intelligent capabilities for user experience, monitoring, and operations, and provides a series of distinctive features that greatly simplify daily work for users and operators, enabling ordinary operators to become Kafka experts.
We are now collecting Know Streaming user information to help us improve Know Streaming further.
Please support us by sharing your usage info on [issue#663](https://github.com/didi/KnowStreaming/issues/663): [Who is using Know Streaming](https://github.com/didi/KnowStreaming/issues/663)
Overall it has the following characteristics:
- 👀 &nbsp;**Zero intrusion, full coverage**
- No intrusive changes to `Apache Kafka` are required: one click manages Kafka versions from `0.10.x` to `3.x.x`, covering both `ZK` and `Raft` run modes, with good extensibility in the compatibility architecture to help you improve your cluster management;
@@ -99,9 +106,13 @@
## Become a community contributor
Click [here](CONTRIBUTING.md) to learn how to become a Know Streaming contributor
1. [Contribute source code](https://doc.knowstreaming.com/product/10-contribution) to learn how to become a Know Streaming contributor
2. [Detailed contribution process](https://doc.knowstreaming.com/product/10-contribution#102-贡献流程)
3. [Open source incentive program](https://doc.knowstreaming.com/product/10-contribution#105-开源激励计划)
4. [Contributor list](https://doc.knowstreaming.com/product/10-contribution#106-贡献者名单)
and receive a KnowStreaming open-source community certificate.
## Join the tech discussion group
@@ -134,6 +145,11 @@ PS: When asking, please describe your problem fully in one message and include environment info
To join via WeChat: add `mike_zhangliang` or `PenceXie` on WeChat with the note "KnowStreaming" to be added to the group.
<br/>
Before joining, please take a moment to give us a star; a small star motivates the KnowStreaming authors to keep building the community.
Many thanks!
<img width="116" alt="wx" src="https://user-images.githubusercontent.com/71620349/192257217-c4ebc16c-3ad9-485d-a914-5911d3a4f46b.png">
## Star History


@@ -1,4 +1,71 @@
## v3.1.0
**Bug fixes**
- Fix the reset Group Offset prompt missing the note that Dead-state groups can also be reset;
- Fix "Topic does not exist" when viewing Topic Messages immediately after creating a Topic;
- Fix preferred replica election not being triggered as expected during replica changes;
- Fix packaging failures when the git directory does not exist;
- Fix JMX PORT showing -1 for Kafka clusters in KRaft mode;
**UX improvements**
- Change the health score of Cluster, Broker, Topic, and Group to a health status;
- Remove weight info from the health inspection config;
- Improve the error page display;
- Use the taobao mirror by default for front-end build dependencies;
- Redesign and improve the navigation bar icons;
**New**
- Add product version info to the avatar dropdown;
- Add cluster health status distribution to the multi-cluster list page;
**Kafka ZK (officially released in v3.1.0)**
- Add a ZK cluster metrics dashboard;
- Add a ZK cluster service status overview;
- Add a ZK cluster service node list;
- Add viewing of the Kafka data stored in ZK;
- Add ZK health inspection and health status calculation;
---
## v3.0.1
**Bug fixes**
- Fix the reset Group Offset prompt missing the note that Dead-state groups can also be reset;
- Fix login failures with an NPE when an Ldap attribute does not exist;
- Fix incorrect check-time display in the health score details on the cluster Topic list page;
- Fix a deadlock when updating health check results;
- Fix the wrong Replica index template;
- Fix broken links in the FAQ docs;
- Fix page data not showing when a Broker's TopN metrics do not exist;
- Fix the chart time-range selector not taking effect on the Group detail page;
**UX improvements**
- Display the cluster Group list by Group dimension;
- Avoid large numbers of NPEs in the logs when a metric does not exist in ES;
- Improve global Message & Notification display;
- Improve Topic partition-expansion name & description display;
**New**
- Add JMX connection status info to the Broker list page;
**ZK (not fully released)**
- Back end: add Kafka ZK metric collection and Kafka ZK info retrieval;
- Add a local cache to avoid duplicate ZK metric collection within one collection cycle;
- Add a skip policy for ZK nodes that fail collection, to avoid repeatedly retrying problematic nodes;
- Fix an exception when converting the zkAvgLatency metric to Long;
- Fix the wrong type of the role field in the ks_km_zookeeper table;
---
## v3.0.0
@@ -25,7 +92,7 @@
- Add a Kafka cluster run-mode field to the cluster info
- Add a docker-compose deployment option
---
## v3.0.0-beta.3


@@ -439,7 +439,7 @@ curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: appl
curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: application/json' http://${esaddr}:${port}/_template/ks_kafka_replication_metric -d '{
"order" : 10,
"index_patterns" : [
"ks_kafka_partition_metric*"
"ks_kafka_replication_metric*"
],
"settings" : {
"index" : {
@@ -500,30 +500,7 @@ curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: appl
}
},
"aliases" : { }
}[root@10-255-0-23 template]# cat ks_kafka_replication_metric
PUT _template/ks_kafka_replication_metric
{
"order" : 10,
"index_patterns" : [
"ks_kafka_replication_metric*"
],
"settings" : {
"index" : {
"number_of_shards" : "10"
}
},
"mappings" : {
"properties" : {
"timestamp" : {
"format" : "yyyy-MM-dd HH:mm:ss Z||yyyy-MM-dd HH:mm:ss||yyyy-MM-dd HH:mm:ss.SSS Z||yyyy-MM-dd HH:mm:ss.SSS||yyyy-MM-dd HH:mm:ss,SSS||yyyy/MM/dd HH:mm:ss||yyyy-MM-dd HH:mm:ss,SSS Z||yyyy/MM/dd HH:mm:ss,SSS Z||epoch_millis",
"index" : true,
"type" : "date",
"doc_values" : true
}
}
},
"aliases" : { }
}'
}'
curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: application/json' http://${esaddr}:${port}/_template/ks_kafka_topic_metric -d '{
"order" : 10,
@@ -640,7 +617,92 @@ curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: appl
}
},
"aliases" : { }
}'
}'
curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: application/json' http://${SERVER_ES_ADDRESS}/_template/ks_kafka_zookeeper_metric -d '{
"order" : 10,
"index_patterns" : [
"ks_kafka_zookeeper_metric*"
],
"settings" : {
"index" : {
"number_of_shards" : "10"
}
},
"mappings" : {
"properties" : {
"routingValue" : {
"type" : "text",
"fields" : {
"keyword" : {
"ignore_above" : 256,
"type" : "keyword"
}
}
},
"clusterPhyId" : {
"type" : "long"
},
"metrics" : {
"properties" : {
"AvgRequestLatency" : {
"type" : "double"
},
"MinRequestLatency" : {
"type" : "double"
},
"MaxRequestLatency" : {
"type" : "double"
},
"OutstandingRequests" : {
"type" : "double"
},
"NodeCount" : {
"type" : "double"
},
"WatchCount" : {
"type" : "double"
},
"NumAliveConnections" : {
"type" : "double"
},
"PacketsReceived" : {
"type" : "double"
},
"PacketsSent" : {
"type" : "double"
},
"EphemeralsCount" : {
"type" : "double"
},
"ApproximateDataSize" : {
"type" : "double"
},
"OpenFileDescriptorCount" : {
"type" : "double"
},
"MaxFileDescriptorCount" : {
"type" : "double"
}
}
},
"key" : {
"type" : "text",
"fields" : {
"keyword" : {
"ignore_above" : 256,
"type" : "keyword"
}
}
},
"timestamp" : {
"format" : "yyyy-MM-dd HH:mm:ss Z||yyyy-MM-dd HH:mm:ss||yyyy-MM-dd HH:mm:ss.SSS Z||yyyy-MM-dd HH:mm:ss.SSS||yyyy-MM-dd HH:mm:ss,SSS||yyyy/MM/dd HH:mm:ss||yyyy-MM-dd HH:mm:ss,SSS Z||yyyy/MM/dd HH:mm:ss,SSS Z||epoch_millis",
"type" : "date"
}
}
},
"aliases" : { }
}'
for i in {0..6};
do
@@ -650,6 +712,7 @@ do
curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_group_metric${logdate} && \
curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_partition_metric${logdate} && \
curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_replication_metric${logdate} && \
curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_zookeeper_metric${logdate} && \
curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_topic_metric${logdate} || \
exit 2
done
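The loop above pre-creates seven days of daily indices by appending a date suffix to each index name. The suffix logic can be sketched locally as follows; the exact `_%Y-%m-%d` suffix format is an assumption inferred from the index names here, and `date -d` requires GNU coreutils:

```shell
# Compute the daily index names the init script would create,
# starting from a fixed date so the output is deterministic.
base_date="2022-10-10"
for i in 0 1 2; do
  logdate=$(date -u -d "${base_date} +${i} days" +"_%Y-%m-%d")
  echo "ks_kafka_zookeeper_metric${logdate}"
done
# prints:
# ks_kafka_zookeeper_metric_2022-10-10
# ks_kafka_zookeeper_metric_2022-10-11
# ks_kafka_zookeeper_metric_2022-10-12
```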


@@ -0,0 +1 @@
TODO.


@@ -0,0 +1,6 @@
List of open-source contributor certificate recipients (updated regularly)
See the contributor list: [Contributor List](https://doc.knowstreaming.com/product/10-contribution#106-贡献者名单)


@@ -0,0 +1,6 @@
<br>
<br>
Please see: [Contribution Process](https://doc.knowstreaming.com/product/10-contribution#102-贡献流程)



@@ -0,0 +1,69 @@
## ZK with Kerberos authentication
### 1. Modify the KnowStreaming code
Code location: `src/main/java/com/xiaojukeji/know/streaming/km/persistence/kafka/KafkaAdminZKClient.java`
In `createZKClient`, change the `false` at line 135 to `true`.
![need_modify_code.png](assets/support_kerberos_zk/need_modify_code.png)
After the change, rebuild the package. For build instructions, see: [build guide](https://github.com/didi/KnowStreaming/blob/master/docs/install_guide/%E6%BA%90%E7%A0%81%E7%BC%96%E8%AF%91%E6%89%93%E5%8C%85%E6%89%8B%E5%86%8C.md)
### 2. Check the user's ACL in ZK
Assume we are using the `kafka` user.
- 1. Check the zookeeper.connect address configured in server.properties;
- 2. Log in to ZK with `zkCli.sh -server <zookeeper.connect address>`;
- 3. In the ZK shell, run `getAcl /kafka` to check the `kafka` user's permissions;
We should then see the following:
![watch_user_acl.png](assets/support_kerberos_zk/watch_user_acl.png)
The `kafka` user needs `cdrwa` permissions. If the user does not have `cdrwa`, create the user and grant the permissions with `setAcl`.
### 3. Create the Kerberos keytab and configure the KnowStreaming host
- 1. In the Kerberos realm, create a `keytab` for `kafka/_HOST` and export it. For example: `kafka/dbs-kafka-test-8-53`
- 2. Upload the exported keytab to `/etc/keytab` on the machine where KS is installed;
- 3. On the KS machine, run `kinit -kt zookeeper.keytab kafka/dbs-kafka-test-8-53` to check that `Kerberos` login works;
- 4. Once login works, configure the `/opt/zookeeper.jaas` file, for example:
```
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=false
serviceName="zookeeper"
keyTab="/etc/keytab/zookeeper.keytab"
principal="kafka/dbs-kafka-test-8-53@XXX.XXX.XXX";
};
```
- 5. Open the `KDC-Server` firewall to the `KnowStreaming` machine, configure the `kdc-server` hostname in `/etc/hosts` on the KS machine, and copy `krb5.conf` into `/etc`;
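Step 5 references a `krb5.conf`, but its contents are not shown here. A minimal illustrative file might look like the following; the realm and KDC host names are placeholders, not values from this repository:

```ini
[libdefaults]
    default_realm = XXX.XXX.XXX

[realms]
    XXX.XXX.XXX = {
        kdc = kdc-server.example.com
        admin_server = kdc-server.example.com
    }
```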
### 4. Modify the KnowStreaming configuration
- 1. Append the following to JAVA_OPT at line 47 of `/usr/local/KnowStreaming/KnowStreaming/bin/startup.sh`:
```bash
-Dsun.security.krb5.debug=true -Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/zookeeper.jaas
```
- 2. After restarting the KS cluster, if you see the following in start.out, Kerberos is configured successfully:
![success_1.png](assets/support_kerberos_zk/success_1.png)
![success_2.png](assets/support_kerberos_zk/success_2.png)
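To make the JAVA_OPT change in step 1 concrete, the resulting assignment in startup.sh might look like the sketch below; the pre-existing `-Xms1g -Xmx1g` options are placeholders, not the real contents of startup.sh:

```shell
#!/bin/sh
# Placeholder for whatever options startup.sh already sets around line 47.
JAVA_OPT="-Xms1g -Xmx1g"
# Append the Kerberos-related flags described above.
JAVA_OPT="${JAVA_OPT} -Dsun.security.krb5.debug=true -Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/zookeeper.jaas"
echo "${JAVA_OPT}"
```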
### 5. Notes
- 1. If multiple Kafka clusters share the same Kerberos realm, you only need to grant the `kafka` user `cdrwa` permissions in each `ZK`; the zkclient can then authenticate for all clusters during cluster initialization;
- 2. Currently you must modify the code and rebuild to enable this; supporting Kerberos-authenticated ZK through the UI may come later;
- 3. Multiple Kerberos realms are not yet supported;


@@ -4,13 +4,158 @@
- To upgrade to a specific version, you must apply, in order, all changes from your current version up to the target version before the target version will work properly.
- If a version has no upgrade notes, you can upgrade from the previous version to that version by simply replacing the installation package.
### 6.2.0 Upgrading to the `master` version
None yet
### 6.2.1 Upgrading to the `v3.1.0` version
### 6.2.1 Upgrading to the `v3.0.0` version
```sql
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_ZK_BRAIN_SPLIT', '{ \"value\": 1} ', 'ZK 脑裂', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_ZK_OUTSTANDING_REQUESTS', '{ \"amount\": 100, \"ratio\":0.8} ', 'ZK Outstanding 请求堆积数', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_ZK_WATCH_COUNT', '{ \"amount\": 100000, \"ratio\": 0.8 } ', 'ZK WatchCount 数', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_ZK_ALIVE_CONNECTIONS', '{ \"amount\": 10000, \"ratio\": 0.8 } ', 'ZK 连接数', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_ZK_APPROXIMATE_DATA_SIZE', '{ \"amount\": 524288000, \"ratio\": 0.8 } ', 'ZK 数据大小(Byte)', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_ZK_SENT_RATE', '{ \"amount\": 500000, \"ratio\": 0.8 } ', 'ZK 发包数', 'admin');
```
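The `value` column in the INSERTs above stores JSON payloads that the health-check configuration presumably parses at runtime (the parsing behavior is an assumption, not shown in this diff). The payloads can be sanity-checked locally before running the SQL:

```shell
# Validate each health-check config payload as JSON using python3's json module.
for v in '{ "value": 1}' '{ "amount": 100, "ratio":0.8}' '{ "amount": 100000, "ratio": 0.8 }'; do
  echo "$v" | python3 -m json.tool > /dev/null || { echo "invalid: $v"; exit 1; }
  echo "valid: $v"
done
```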
### 6.2.2 Upgrading to the `v3.0.1` version
**ES index template**
```bash
# Add the ks_kafka_zookeeper_metric index template.
# It can be created by re-running the bin/init_es_template.sh script.
# Template content:
PUT _template/ks_kafka_zookeeper_metric
{
"order" : 10,
"index_patterns" : [
"ks_kafka_zookeeper_metric*"
],
"settings" : {
"index" : {
"number_of_shards" : "10"
}
},
"mappings" : {
"properties" : {
"routingValue" : {
"type" : "text",
"fields" : {
"keyword" : {
"ignore_above" : 256,
"type" : "keyword"
}
}
},
"clusterPhyId" : {
"type" : "long"
},
"metrics" : {
"properties" : {
"AvgRequestLatency" : {
"type" : "double"
},
"MinRequestLatency" : {
"type" : "double"
},
"MaxRequestLatency" : {
"type" : "double"
},
"OutstandingRequests" : {
"type" : "double"
},
"NodeCount" : {
"type" : "double"
},
"WatchCount" : {
"type" : "double"
},
"NumAliveConnections" : {
"type" : "double"
},
"PacketsReceived" : {
"type" : "double"
},
"PacketsSent" : {
"type" : "double"
},
"EphemeralsCount" : {
"type" : "double"
},
"ApproximateDataSize" : {
"type" : "double"
},
"OpenFileDescriptorCount" : {
"type" : "double"
},
"MaxFileDescriptorCount" : {
"type" : "double"
}
}
},
"key" : {
"type" : "text",
"fields" : {
"keyword" : {
"ignore_above" : 256,
"type" : "keyword"
}
}
},
"timestamp" : {
"format" : "yyyy-MM-dd HH:mm:ss Z||yyyy-MM-dd HH:mm:ss||yyyy-MM-dd HH:mm:ss.SSS Z||yyyy-MM-dd HH:mm:ss.SSS||yyyy-MM-dd HH:mm:ss,SSS||yyyy/MM/dd HH:mm:ss||yyyy-MM-dd HH:mm:ss,SSS Z||yyyy/MM/dd HH:mm:ss,SSS Z||epoch_millis",
"type" : "date"
}
}
},
"aliases" : { }
}
```
**SQL changes**
```sql
DROP TABLE IF EXISTS `ks_km_zookeeper`;
CREATE TABLE `ks_km_zookeeper` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
  `cluster_phy_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'physical cluster ID',
  `host` varchar(128) NOT NULL DEFAULT '' COMMENT 'ZooKeeper hostname',
  `port` int(16) NOT NULL DEFAULT '-1' COMMENT 'ZooKeeper port',
  `role` varchar(16) NOT NULL DEFAULT '' COMMENT 'role: leader / follower / observer',
  `version` varchar(128) NOT NULL DEFAULT '' COMMENT 'ZooKeeper version',
  `status` int(16) NOT NULL DEFAULT '0' COMMENT 'status: 1 alive, 0 not alive, 11 alive but four-letter commands unavailable',
  `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'creation time',
  `update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'last modified time',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_cluster_phy_id_host_port` (`cluster_phy_id`,`host`, `port`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='ZooKeeper info table';
DROP TABLE IF EXISTS `ks_km_group`;
CREATE TABLE `ks_km_group` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
  `cluster_phy_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'cluster ID',
  `name` varchar(192) COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT 'group name',
  `member_count` int(11) unsigned NOT NULL DEFAULT '0' COMMENT 'member count',
  `topic_members` text CHARACTER SET utf8 COMMENT 'list of topics consumed by the group',
  `partition_assignor` varchar(255) CHARACTER SET utf8 NOT NULL COMMENT 'partition assignment strategy',
  `coordinator_id` int(11) NOT NULL COMMENT 'group coordinator broker ID',
  `type` int(11) NOT NULL COMMENT 'group type: 0 consumer, 1 connector',
  `state` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '' COMMENT 'state',
  `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'creation time',
  `update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'last modified time',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_cluster_phy_id_name` (`cluster_phy_id`,`name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='Group info table';
```
### 6.2.3、Upgrading to `v3.0.0`
**SQL changes**
@@ -22,7 +167,7 @@ ADD COLUMN `zk_properties` TEXT NULL COMMENT 'ZK配置' AFTER `jmx_properties`;
---
### 6.2.4、Upgrading to `v3.0.0-beta.2`
**Configuration changes**
@@ -93,7 +238,7 @@ ALTER TABLE `logi_security_oplog`
---
### 6.2.5、Upgrading to `v3.0.0-beta.1`
**SQL changes**
@@ -112,7 +257,7 @@ ALTER COLUMN `operation_methods` set default '';
---
### 6.2.6、Upgrading from `2.x` to `v3.0.0-beta.0`
**Upgrade steps:**

View File

@@ -37,7 +37,7 @@
## 8.4、How do I resolve `Jmx` connection failures?
- See the [Jmx connection configuration & troubleshooting](https://doc.knowstreaming.com/product/9-attachment#91jmx-%E8%BF%9E%E6%8E%A5%E5%A4%B1%E8%B4%A5%E9%97%AE%E9%A2%98%E8%A7%A3%E5%86%B3) guide.
&nbsp;

View File

@@ -0,0 +1,19 @@
package com.xiaojukeji.know.streaming.km.biz.cluster;
import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterZookeepersOverviewDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.vo.zookeeper.ClusterZookeepersOverviewVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.zookeeper.ClusterZookeepersStateVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.zookeeper.ZnodeVO;
/**
 * Overall multi-cluster state
*/
public interface ClusterZookeepersManager {
Result<ClusterZookeepersStateVO> getClusterPhyZookeepersState(Long clusterPhyId);
PaginationResult<ClusterZookeepersOverviewVO> getClusterPhyZookeepersOverview(Long clusterPhyId, ClusterZookeepersOverviewDTO dto);
Result<ZnodeVO> getZnodeVO(Long clusterPhyId, String path);
}

View File

@@ -1,5 +1,6 @@
package com.xiaojukeji.know.streaming.km.biz.cluster;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhysHealthState;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhysState;
import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.MultiClusterDashboardDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
@@ -15,6 +16,8 @@ public interface MultiClusterPhyManager {
*/
ClusterPhysState getClusterPhysState();
ClusterPhysHealthState getClusterPhysHealthState();
/**
 * Query the multi-cluster dashboard
 * @param dto pagination info

View File

@@ -24,6 +24,7 @@ import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerMetricService;
import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerService;
import com.xiaojukeji.know.streaming.km.core.service.kafkacontroller.KafkaControllerService;
import com.xiaojukeji.know.streaming.km.core.service.topic.TopicService;
import com.xiaojukeji.know.streaming.km.persistence.kafka.KafkaJMXClient;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
@@ -51,6 +52,9 @@ public class ClusterBrokersManagerImpl implements ClusterBrokersManager {
@Autowired
private KafkaControllerService kafkaControllerService;
@Autowired
private KafkaJMXClient kafkaJMXClient;
@Override
public PaginationResult<ClusterBrokersOverviewVO> getClusterPhyBrokersOverview(Long clusterPhyId, ClusterBrokersOverviewDTO dto) {
// Fetch the cluster's broker list
@@ -75,6 +79,10 @@ public class ClusterBrokersManagerImpl implements ClusterBrokersManager {
// Fetch controller info
KafkaController kafkaController = kafkaControllerService.getKafkaControllerFromDB(clusterPhyId);
// Fetch JMX connection status
Map<Integer, Boolean> jmxConnectedMap = new HashMap<>();
brokerList.forEach(elem -> jmxConnectedMap.put(elem.getBrokerId(), kafkaJMXClient.getClientWithCheck(clusterPhyId, elem.getBrokerId()) != null));
// Convert format
return PaginationResult.buildSuc(
this.convert2ClusterBrokersOverviewVOList(
@@ -83,7 +91,8 @@ public class ClusterBrokersManagerImpl implements ClusterBrokersManager {
metricsResult.getData(),
groupTopic,
transactionTopic,
kafkaController,
jmxConnectedMap
),
paginationResult
);
@@ -165,22 +174,24 @@ public class ClusterBrokersManagerImpl implements ClusterBrokersManager {
List<BrokerMetrics> metricsList,
Topic groupTopic,
Topic transactionTopic,
KafkaController kafkaController,
Map<Integer, Boolean> jmxConnectedMap) {
Map<Integer, BrokerMetrics> metricsMap = metricsList == null ? new HashMap<>() : metricsList.stream().collect(Collectors.toMap(BrokerMetrics::getBrokerId, Function.identity()));
Map<Integer, Broker> brokerMap = brokerList == null ? new HashMap<>() : brokerList.stream().collect(Collectors.toMap(Broker::getBrokerId, Function.identity()));
List<ClusterBrokersOverviewVO> voList = new ArrayList<>(pagedBrokerIdList.size());
for (Integer brokerId : pagedBrokerIdList) {
Broker broker = brokerMap.get(brokerId);
BrokerMetrics brokerMetrics = metricsMap.get(brokerId);
Boolean jmxConnected = jmxConnectedMap.get(brokerId);
voList.add(this.convert2ClusterBrokersOverviewVO(brokerId, broker, brokerMetrics, groupTopic, transactionTopic, kafkaController, jmxConnected));
}
return voList;
}
private ClusterBrokersOverviewVO convert2ClusterBrokersOverviewVO(Integer brokerId, Broker broker, BrokerMetrics brokerMetrics, Topic groupTopic, Topic transactionTopic, KafkaController kafkaController, Boolean jmxConnected) {
ClusterBrokersOverviewVO clusterBrokersOverviewVO = new ClusterBrokersOverviewVO();
clusterBrokersOverviewVO.setBrokerId(brokerId);
if (broker != null) {
@@ -203,6 +214,7 @@ public class ClusterBrokersManagerImpl implements ClusterBrokersManager {
}
clusterBrokersOverviewVO.setLatestMetrics(brokerMetrics);
clusterBrokersOverviewVO.setJmxConnected(jmxConnected);
return clusterBrokersOverviewVO;
}

View File

@@ -0,0 +1,137 @@
package com.xiaojukeji.know.streaming.km.biz.cluster.impl;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.biz.cluster.ClusterZookeepersManager;
import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterZookeepersOverviewDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.ZookeeperMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.ResultStatus;
import com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.Znode;
import com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.ZookeeperInfo;
import com.xiaojukeji.know.streaming.km.common.bean.vo.zookeeper.ClusterZookeepersOverviewVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.zookeeper.ClusterZookeepersStateVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.zookeeper.ZnodeVO;
import com.xiaojukeji.know.streaming.km.common.constant.MsgConstant;
import com.xiaojukeji.know.streaming.km.common.enums.zookeeper.ZKRoleEnum;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.PaginationUtil;
import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
import com.xiaojukeji.know.streaming.km.core.service.version.metrics.ZookeeperMetricVersionItems;
import com.xiaojukeji.know.streaming.km.core.service.zookeeper.ZnodeService;
import com.xiaojukeji.know.streaming.km.core.service.zookeeper.ZookeeperMetricService;
import com.xiaojukeji.know.streaming.km.core.service.zookeeper.ZookeeperService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import java.util.Arrays;
import java.util.List;
@Service
public class ClusterZookeepersManagerImpl implements ClusterZookeepersManager {
private static final ILog LOGGER = LogFactory.getLog(ClusterZookeepersManagerImpl.class);
@Autowired
private ClusterPhyService clusterPhyService;
@Autowired
private ZookeeperService zookeeperService;
@Autowired
private ZookeeperMetricService zookeeperMetricService;
@Autowired
private ZnodeService znodeService;
@Override
public Result<ClusterZookeepersStateVO> getClusterPhyZookeepersState(Long clusterPhyId) {
ClusterPhy clusterPhy = clusterPhyService.getClusterByCluster(clusterPhyId);
if (clusterPhy == null) {
return Result.buildFromRSAndMsg(ResultStatus.CLUSTER_NOT_EXIST, MsgConstant.getClusterPhyNotExist(clusterPhyId));
}
List<ZookeeperInfo> infoList = zookeeperService.listFromDBByCluster(clusterPhyId);
ClusterZookeepersStateVO vo = new ClusterZookeepersStateVO();
vo.setTotalServerCount(infoList.size());
vo.setAliveFollowerCount(0);
vo.setTotalFollowerCount(0);
vo.setAliveObserverCount(0);
vo.setTotalObserverCount(0);
vo.setAliveServerCount(0);
for (ZookeeperInfo info: infoList) {
if (info.getRole().equals(ZKRoleEnum.LEADER.getRole())) {
vo.setLeaderNode(info.getHost());
}
if (info.getRole().equals(ZKRoleEnum.FOLLOWER.getRole())) {
vo.setTotalFollowerCount(vo.getTotalFollowerCount() + 1);
vo.setAliveFollowerCount(info.alive()? vo.getAliveFollowerCount() + 1: vo.getAliveFollowerCount());
}
if (info.getRole().equals(ZKRoleEnum.OBSERVER.getRole())) {
vo.setTotalObserverCount(vo.getTotalObserverCount() + 1);
vo.setAliveObserverCount(info.alive()? vo.getAliveObserverCount() + 1: vo.getAliveObserverCount());
}
if (info.alive()) {
vo.setAliveServerCount(vo.getAliveServerCount() + 1);
}
}
// Fetch metrics
Result<ZookeeperMetrics> metricsResult = zookeeperMetricService.batchCollectMetricsFromZookeeper(
clusterPhyId,
Arrays.asList(
ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_WATCH_COUNT,
ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_HEALTH_STATE,
ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_HEALTH_CHECK_PASSED,
ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_HEALTH_CHECK_TOTAL
)
);
if (metricsResult.failed()) {
LOGGER.error(
"class=ClusterZookeepersManagerImpl||method=getClusterPhyZookeepersState||clusterPhyId={}||errMsg={}",
clusterPhyId, metricsResult.getMessage()
);
return Result.buildSuc(vo);
}
ZookeeperMetrics metrics = metricsResult.getData();
vo.setWatchCount(ConvertUtil.float2Integer(metrics.getMetrics().get(ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_WATCH_COUNT)));
vo.setHealthState(ConvertUtil.float2Integer(metrics.getMetrics().get(ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_HEALTH_STATE)));
vo.setHealthCheckPassed(ConvertUtil.float2Integer(metrics.getMetrics().get(ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_HEALTH_CHECK_PASSED)));
vo.setHealthCheckTotal(ConvertUtil.float2Integer(metrics.getMetrics().get(ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_HEALTH_CHECK_TOTAL)));
return Result.buildSuc(vo);
}
@Override
public PaginationResult<ClusterZookeepersOverviewVO> getClusterPhyZookeepersOverview(Long clusterPhyId, ClusterZookeepersOverviewDTO dto) {
// Fetch the cluster's ZooKeeper node list
List<ClusterZookeepersOverviewVO> clusterZookeepersOverviewVOList = ConvertUtil.list2List(zookeeperService.listFromDBByCluster(clusterPhyId), ClusterZookeepersOverviewVO.class);
// Fuzzy search
clusterZookeepersOverviewVOList = PaginationUtil.pageByFuzzyFilter(clusterZookeepersOverviewVOList, dto.getSearchKeywords(), Arrays.asList("host"));
// Paginate
PaginationResult<ClusterZookeepersOverviewVO> paginationResult = PaginationUtil.pageBySubData(clusterZookeepersOverviewVOList, dto);
return paginationResult;
}
@Override
public Result<ZnodeVO> getZnodeVO(Long clusterPhyId, String path) {
Result<Znode> result = znodeService.getZnode(clusterPhyId, path);
if (result.failed()) {
return Result.buildFromIgnoreData(result);
}
return Result.buildSuc(ConvertUtil.obj2ObjByJSON(result.getData(), ZnodeVO.class));
}
/**************************************************** private method ****************************************************/
}
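The per-role counting loop in `getClusterPhyZookeepersState` above can be sketched in isolation. The `ZkNode` class and map keys below are illustrative stand-ins for the project's `ZookeeperInfo` entity and `ClusterZookeepersStateVO`, not the real API:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ZkStateSketch {
    // Illustrative stand-in for the project's ZookeeperInfo entity.
    static class ZkNode {
        final String host; final String role; final boolean alive;
        ZkNode(String host, String role, boolean alive) {
            this.host = host; this.role = role; this.alive = alive;
        }
    }

    // One pass over the ensemble, tallying totals and alive counts per role,
    // mirroring the loop in getClusterPhyZookeepersState.
    static Map<String, Object> summarize(List<ZkNode> nodes) {
        Map<String, Object> vo = new LinkedHashMap<>();
        int aliveServers = 0, totalFollowers = 0, aliveFollowers = 0,
            totalObservers = 0, aliveObservers = 0;
        String leader = null;
        for (ZkNode n : nodes) {
            if ("leader".equals(n.role)) leader = n.host;
            if ("follower".equals(n.role)) {
                totalFollowers++;
                if (n.alive) aliveFollowers++;
            }
            if ("observer".equals(n.role)) {
                totalObservers++;
                if (n.alive) aliveObservers++;
            }
            if (n.alive) aliveServers++;
        }
        vo.put("leaderNode", leader);
        vo.put("totalServerCount", nodes.size());
        vo.put("aliveServerCount", aliveServers);
        vo.put("totalFollowerCount", totalFollowers);
        vo.put("aliveFollowerCount", aliveFollowers);
        vo.put("totalObserverCount", totalObservers);
        vo.put("aliveObserverCount", aliveObservers);
        return vo;
    }

    public static void main(String[] args) {
        List<ZkNode> ensemble = List.of(
            new ZkNode("zk-1", "leader", true),
            new ZkNode("zk-2", "follower", true),
            new ZkNode("zk-3", "follower", false));
        System.out.println(summarize(ensemble));
    }
}
```

A single pass suffices because each node contributes independently to the counters; the metric-based fields (watch count, health state) are filled in afterwards from the collected metrics, as the method shows.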

View File

@@ -5,6 +5,7 @@ import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.biz.cluster.MultiClusterPhyManager;
import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.MetricDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.MetricsClusterPhyDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhysHealthState;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhysState;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.MultiClusterDashboardDTO;
@@ -16,6 +17,7 @@ import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.ClusterPhyDashboa
import com.xiaojukeji.know.streaming.km.common.bean.vo.metrics.line.MetricMultiLinesVO;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.converter.ClusterVOConverter;
import com.xiaojukeji.know.streaming.km.common.enums.health.HealthStateEnum;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.PaginationUtil;
import com.xiaojukeji.know.streaming.km.common.utils.PaginationMetricsUtil;
@@ -75,6 +77,32 @@ public class MultiClusterPhyManagerImpl implements MultiClusterPhyManager {
return physState;
}
@Override
public ClusterPhysHealthState getClusterPhysHealthState() {
List<ClusterPhy> clusterPhyList = clusterPhyService.listAllClusters();
ClusterPhysHealthState physState = new ClusterPhysHealthState(clusterPhyList.size());
for (ClusterPhy clusterPhy: clusterPhyList) {
ClusterMetrics metrics = clusterMetricService.getLatestMetricsFromCache(clusterPhy.getId());
Float state = metrics.getMetric(ClusterMetricVersionItems.CLUSTER_METRIC_HEALTH_STATE);
if (state == null) {
physState.setUnknownCount(physState.getUnknownCount() + 1);
} else if (state.intValue() == HealthStateEnum.GOOD.getDimension()) {
physState.setGoodCount(physState.getGoodCount() + 1);
} else if (state.intValue() == HealthStateEnum.MEDIUM.getDimension()) {
physState.setMediumCount(physState.getMediumCount() + 1);
} else if (state.intValue() == HealthStateEnum.POOR.getDimension()) {
physState.setPoorCount(physState.getPoorCount() + 1);
} else if (state.intValue() == HealthStateEnum.DEAD.getDimension()) {
physState.setDeadCount(physState.getDeadCount() + 1);
} else {
physState.setUnknownCount(physState.getUnknownCount() + 1);
}
}
return physState;
}
@Override
public PaginationResult<ClusterPhyDashboardVO> getClusterPhysDashboard(MultiClusterDashboardDTO dto) {
// Fetch clusters
@@ -148,16 +176,7 @@ public class MultiClusterPhyManagerImpl implements MultiClusterPhyManager {
// Fetch all metrics
List<ClusterMetrics> metricsList = new ArrayList<>();
for (ClusterPhyDashboardVO vo: voList) {
metricsList.add(clusterMetricService.getLatestMetricsFromCache(vo.getId()));
}
// Range search
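The bucketing in `getClusterPhysHealthState` reduces to mapping each cluster's nullable health-state metric onto one of five counters. The sketch below is a standalone illustration; the numeric codes 0–3 for GOOD/MEDIUM/POOR/DEAD are assumptions for the example, while the project reads the real codes from `HealthStateEnum`:

```java
import java.util.EnumMap;
import java.util.List;
import java.util.Map;

public class HealthBucketSketch {
    enum State { GOOD, MEDIUM, POOR, DEAD, UNKNOWN }

    // Maps one cluster's latest HealthState metric to a bucket.
    // A missing metric (null) and any unrecognized code both fall into
    // UNKNOWN, like the null/else branches in getClusterPhysHealthState.
    static State classify(Float code) {
        if (code == null) return State.UNKNOWN;
        switch (code.intValue()) {
            case 0: return State.GOOD;
            case 1: return State.MEDIUM;
            case 2: return State.POOR;
            case 3: return State.DEAD;
            default: return State.UNKNOWN;
        }
    }

    // Tallies one counter per bucket across all clusters.
    static Map<State, Integer> bucket(List<Float> latestMetrics) {
        Map<State, Integer> counts = new EnumMap<>(State.class);
        for (State s : State.values()) counts.put(s, 0);
        for (Float m : latestMetrics) counts.merge(classify(m), 1, Integer::sum);
        return counts;
    }

    public static void main(String[] args) {
        // Arrays.asList permits null entries, standing in for clusters
        // whose health metric is not yet in the cache.
        List<Float> metrics = java.util.Arrays.asList(0f, 0f, 3f, null, 9f);
        System.out.println(bucket(metrics));
    }
}
```

Keeping the null and default branches separate from the known codes ensures newly added states degrade to UNKNOWN instead of being silently dropped.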

View File

@@ -1,11 +1,14 @@
package com.xiaojukeji.know.streaming.km.biz.group;
import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterGroupSummaryDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.group.GroupOffsetResetDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationBaseDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationSortDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.topic.TopicPartitionKS;
import com.xiaojukeji.know.streaming.km.common.bean.po.group.GroupMemberPO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupOverviewVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupTopicConsumedDetailVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupTopicOverviewVO;
import com.xiaojukeji.know.streaming.km.common.exception.AdminOperateException;
@@ -22,6 +25,10 @@ public interface GroupManager {
String searchGroupKeyword,
PaginationBaseDTO dto);
PaginationResult<GroupTopicOverviewVO> pagingGroupTopicMembers(Long clusterPhyId, String groupName, PaginationBaseDTO dto);
PaginationResult<GroupOverviewVO> pagingClusterGroupsOverview(Long clusterPhyId, ClusterGroupSummaryDTO dto);
PaginationResult<GroupTopicConsumedDetailVO> pagingGroupTopicConsumedMetrics(Long clusterPhyId,
String topicName,
String groupName,
@@ -31,4 +38,6 @@ public interface GroupManager {
Result<Set<TopicPartitionKS>> listClusterPhyGroupPartitions(Long clusterPhyId, String groupName, Long startTime, Long endTime);
Result<Void> resetGroupOffsets(GroupOffsetResetDTO dto, String operator) throws Exception;
List<GroupTopicOverviewVO> getGroupTopicOverviewVOList (Long clusterPhyId, List<GroupMemberPO> groupMemberPOList);
}

View File

@@ -3,11 +3,14 @@ package com.xiaojukeji.know.streaming.km.biz.group.impl;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.biz.group.GroupManager;
import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterGroupSummaryDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.group.GroupOffsetResetDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationBaseDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationSortDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.partition.PartitionOffsetDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.group.Group;
import com.xiaojukeji.know.streaming.km.common.bean.entity.group.GroupTopic;
import com.xiaojukeji.know.streaming.km.common.bean.entity.group.GroupTopicMember;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.GroupMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
@@ -15,11 +18,15 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.topic.Topic;
import com.xiaojukeji.know.streaming.km.common.bean.entity.topic.TopicPartitionKS;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.ResultStatus;
import com.xiaojukeji.know.streaming.km.common.bean.po.group.GroupMemberPO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupOverviewVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupTopicConsumedDetailVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupTopicOverviewVO;
import com.xiaojukeji.know.streaming.km.common.constant.MsgConstant;
import com.xiaojukeji.know.streaming.km.common.constant.PaginationConstant;
import com.xiaojukeji.know.streaming.km.common.converter.GroupConverter;
import com.xiaojukeji.know.streaming.km.common.enums.AggTypeEnum;
import com.xiaojukeji.know.streaming.km.common.enums.OffsetTypeEnum;
import com.xiaojukeji.know.streaming.km.common.enums.SortTypeEnum;
import com.xiaojukeji.know.streaming.km.common.enums.group.GroupStateEnum;
import com.xiaojukeji.know.streaming.km.common.exception.AdminOperateException;
import com.xiaojukeji.know.streaming.km.common.exception.NotExistException;
@@ -71,30 +78,60 @@ public class GroupManagerImpl implements GroupManager {
String searchGroupKeyword,
PaginationBaseDTO dto) {
PaginationResult<GroupMemberPO> paginationResult = groupService.pagingGroupMembers(clusterPhyId, topicName, groupName, searchTopicKeyword, searchGroupKeyword, dto);
if (paginationResult.failed()) {
return PaginationResult.buildFailure(paginationResult, dto);
}
if (!paginationResult.hasData()) {
return PaginationResult.buildSuc(new ArrayList<>(), paginationResult);
}
List<GroupTopicOverviewVO> groupTopicVOList = this.getGroupTopicOverviewVOList(clusterPhyId, paginationResult.getData().getBizData());
return PaginationResult.buildSuc(groupTopicVOList, paginationResult);
}
@Override
public PaginationResult<GroupTopicOverviewVO> pagingGroupTopicMembers(Long clusterPhyId, String groupName, PaginationBaseDTO dto) {
Group group = groupService.getGroupFromDB(clusterPhyId, groupName);
// Return immediately if there are no topic members
if (group == null || ValidateUtils.isEmptyList(group.getTopicMembers())) {
return PaginationResult.buildSuc(dto);
}
// Sort
List<GroupTopicMember> groupTopicMembers = PaginationUtil.pageBySort(group.getTopicMembers(), PaginationConstant.DEFAULT_GROUP_TOPIC_SORTED_FIELD, SortTypeEnum.DESC.getSortType());
// Paginate
PaginationResult<GroupTopicMember> paginationResult = PaginationUtil.pageBySubData(groupTopicMembers, dto);
List<GroupMemberPO> groupMemberPOList = paginationResult.getData().getBizData().stream().map(elem -> new GroupMemberPO(clusterPhyId, elem.getTopicName(), groupName, group.getState().getState(), elem.getMemberCount())).collect(Collectors.toList());
return PaginationResult.buildSuc(this.getGroupTopicOverviewVOList(clusterPhyId, groupMemberPOList), paginationResult);
}
@Override
public PaginationResult<GroupOverviewVO> pagingClusterGroupsOverview(Long clusterPhyId, ClusterGroupSummaryDTO dto) {
List<Group> groupList = groupService.listClusterGroups(clusterPhyId);
// Convert types
List<GroupOverviewVO> voList = groupList.stream().map(elem -> GroupConverter.convert2GroupOverviewVO(elem)).collect(Collectors.toList());
// Filter by group name
voList = PaginationUtil.pageByFuzzyFilter(voList, dto.getSearchGroupName(), Arrays.asList("name"));
// Filter by topic name
if (!ValidateUtils.isBlank(dto.getSearchTopicName())) {
voList = voList.stream().filter(elem -> {
for (String topicName : elem.getTopicNameList()) {
if (topicName.contains(dto.getSearchTopicName())) {
return true;
}
}
return false;
}).collect(Collectors.toList());
}
// Paginate, then return
return PaginationUtil.pageBySubData(voList, dto);
}
@Override
@@ -104,7 +141,7 @@ public class GroupManagerImpl implements GroupManager {
List<String> latestMetricNames,
PaginationSortDTO dto) throws NotExistException, AdminOperateException {
// Fetch the TopicPartition list consumed by the group
Map<TopicPartition, Long> consumedOffsetMap = groupService.getGroupOffsetFromKafka(clusterPhyId, groupName);
List<Integer> partitionList = consumedOffsetMap.keySet()
.stream()
.filter(elem -> elem.topic().equals(topicName))
@@ -113,7 +150,7 @@ public class GroupManagerImpl implements GroupManager {
Collections.sort(partitionList);
// Fetch the group's current runtime info
ConsumerGroupDescription groupDescription = groupService.getGroupDescriptionFromKafka(clusterPhyId, groupName);
// Convert the storage format
Map<TopicPartition, MemberDescription> tpMemberMap = new HashMap<>();
@@ -166,13 +203,13 @@ public class GroupManagerImpl implements GroupManager {
return rv;
}
ConsumerGroupDescription description = groupService.getGroupDescriptionFromKafka(dto.getClusterId(), dto.getGroupName());
if (ConsumerGroupState.DEAD.equals(description.state()) && !dto.isCreateIfNotExist()) {
return Result.buildFromRSAndMsg(ResultStatus.KAFKA_OPERATE_FAILED, "group不存在, 重置失败");
}
if (!ConsumerGroupState.EMPTY.equals(description.state()) && !ConsumerGroupState.DEAD.equals(description.state())) {
return Result.buildFromRSAndMsg(ResultStatus.KAFKA_OPERATE_FAILED, String.format("group处于%s, 重置失败(仅Empty | Dead 情况可重置)", GroupStateEnum.getByRawState(description.state()).getState()));
}
// Fetch offsets
@@ -185,6 +222,22 @@ public class GroupManagerImpl implements GroupManager {
return groupService.resetGroupOffsets(dto.getClusterId(), dto.getGroupName(), offsetMapResult.getData(), operator);
}
@Override
public List<GroupTopicOverviewVO> getGroupTopicOverviewVOList(Long clusterPhyId, List<GroupMemberPO> groupMemberPOList) {
// Fetch metrics
Result<List<GroupMetrics>> metricsListResult = groupMetricService.listLatestMetricsAggByGroupTopicFromES(
clusterPhyId,
groupMemberPOList.stream().map(elem -> new GroupTopic(elem.getGroupName(), elem.getTopicName())).collect(Collectors.toList()),
Arrays.asList(GroupMetricVersionItems.GROUP_METRIC_LAG),
AggTypeEnum.MAX
);
if (metricsListResult.failed()) {
// If the query fails, log the error but still return the data already fetched
log.error("method=completeMetricData||clusterPhyId={}||result={}||errMsg=search es failed", clusterPhyId, metricsListResult);
}
return this.convert2GroupTopicOverviewVOList(groupMemberPOList, metricsListResult.getData());
}
/**************************************************** private method ****************************************************/
@@ -293,4 +346,31 @@ public class GroupManagerImpl implements GroupManager {
);
}
private List<GroupTopicOverviewVO> convert2GroupTopicOverviewVOList(String groupName, String state, List<GroupTopicMember> groupTopicList, List<GroupMetrics> metricsList) {
if (metricsList == null) {
metricsList = new ArrayList<>();
}
// <TopicName, GroupMetrics>
Map<String, GroupMetrics> metricsMap = new HashMap<>();
for (GroupMetrics metrics : metricsList) {
if (!groupName.equals(metrics.getGroup())) continue;
metricsMap.put(metrics.getTopic(), metrics);
}
List<GroupTopicOverviewVO> voList = new ArrayList<>();
for (GroupTopicMember po : groupTopicList) {
GroupTopicOverviewVO vo = ConvertUtil.obj2Obj(po, GroupTopicOverviewVO.class);
vo.setGroupName(groupName);
vo.setState(state);
GroupMetrics metrics = metricsMap.get(po.getTopicName());
if (metrics != null) {
vo.setMaxLag(ConvertUtil.Float2Long(metrics.getMetrics().get(GroupMetricVersionItems.GROUP_METRIC_LAG)));
}
voList.add(vo);
}
return voList;
}
}

View File

@@ -1,8 +1,10 @@
package com.xiaojukeji.know.streaming.km.biz.topic;
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationSortDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationBaseDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.topic.TopicRecordDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupTopicOverviewVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.topic.TopicBrokersPartitionsSummaryVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.topic.TopicRecordVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.topic.TopicStateVO;
@@ -23,4 +25,6 @@ public interface TopicStateManager {
Result<List<TopicPartitionVO>> getTopicPartitions(Long clusterPhyId, String topicName, List<String> metricsNames);
Result<TopicBrokersPartitionsSummaryVO> getTopicBrokersPartitionsSummary(Long clusterPhyId, String topicName);
PaginationResult<GroupTopicOverviewVO> pagingTopicGroupsOverview(Long clusterPhyId, String topicName, String searchGroupName, PaginationBaseDTO dto);
}

View File

@@ -10,14 +10,18 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
import com.xiaojukeji.know.streaming.km.common.bean.entity.param.topic.TopicCreateParam;
import com.xiaojukeji.know.streaming.km.common.bean.entity.param.topic.TopicParam;
import com.xiaojukeji.know.streaming.km.common.bean.entity.param.topic.TopicPartitionExpandParam;
import com.xiaojukeji.know.streaming.km.common.bean.entity.partition.Partition;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.ResultStatus;
import com.xiaojukeji.know.streaming.km.common.bean.entity.topic.Topic;
import com.xiaojukeji.know.streaming.km.common.constant.MsgConstant;
import com.xiaojukeji.know.streaming.km.common.utils.BackoffUtils;
import com.xiaojukeji.know.streaming.km.common.utils.FutureUtil;
import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
import com.xiaojukeji.know.streaming.km.common.utils.kafka.KafkaReplicaAssignUtil;
import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerService;
import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
import com.xiaojukeji.know.streaming.km.core.service.partition.PartitionService;
import com.xiaojukeji.know.streaming.km.core.service.topic.OpTopicService;
import com.xiaojukeji.know.streaming.km.core.service.topic.TopicService;
import kafka.admin.AdminUtils;
@@ -52,6 +56,9 @@ public class OpTopicManagerImpl implements OpTopicManager {
@Autowired
private ClusterPhyService clusterPhyService;
@Autowired
private PartitionService partitionService;
@Override
public Result<Void> createTopic(TopicCreateDTO dto, String operator) {
log.info("method=createTopic||param={}||operator={}.", dto, operator);
@@ -80,7 +87,7 @@ public class OpTopicManagerImpl implements OpTopicManager {
);
// Create the Topic
return opTopicService.createTopic(
Result<Void> createTopicRes = opTopicService.createTopic(
new TopicCreateParam(
dto.getClusterId(),
dto.getTopicName(),
@@ -90,6 +97,21 @@ public class OpTopicManagerImpl implements OpTopicManager {
),
operator
);
if (createTopicRes.successful()){
try{
FutureUtil.quickStartupFutureUtil.submitTask(() -> {
BackoffUtils.backoff(3000);
Result<List<Partition>> partitionsResult = partitionService.listPartitionsFromKafka(clusterPhy, dto.getTopicName());
if (partitionsResult.successful()){
partitionService.updatePartitions(clusterPhy.getId(), dto.getTopicName(), partitionsResult.getData(), new ArrayList<>());
}
});
}catch (Exception e) {
log.error("method=createTopic||param={}||operator={}||msg=add partition to db failed||errMsg=exception", dto, operator, e);
return Result.buildFromRSAndMsg(ResultStatus.MYSQL_OPERATE_FAILED, "Topic created successfully, but persisting partitions to the DB failed; waiting for the scheduled task to sync partition info");
}
}
return createTopicRes;
}
@Override

View File
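The `createTopic` change above submits an async task that backs off for 3 seconds, reloads the new topic's partitions from Kafka, and persists them, so the Topic page no longer reports "topic not found" right after creation. A minimal sketch of this backoff-then-refresh pattern (class and method names here are illustrative stand-ins, not the project's API):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

public class PostCreateSync {
    // Illustrative stand-in for partitionService.listPartitionsFromKafka
    static List<Integer> listPartitionsFromKafka(String topic) {
        return List.of(0, 1, 2);
    }

    /**
     * Wait a fixed backoff, then fetch the freshly created topic's partitions,
     * mirroring the diff's submitTask + BackoffUtils.backoff(3000) sequence.
     */
    static CompletableFuture<List<Integer>> syncAfterBackoff(String topic,
                                                             long backoffMs,
                                                             Supplier<List<Integer>> fetcher) {
        return CompletableFuture.supplyAsync(() -> {
            try {
                TimeUnit.MILLISECONDS.sleep(backoffMs); // back off before reading metadata
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return fetcher.get(); // read partitions once metadata has settled
        });
    }

    public static void main(String[] args) {
        List<Integer> partitions = syncAfterBackoff(
                "demo-topic", 50, () -> listPartitionsFromKafka("demo-topic")).join();
        System.out.println(partitions.size()); // 3
    }
}
```

Note the diff still returns a MYSQL_OPERATE_FAILED result if task submission itself throws, relying on the scheduled sync task as a fallback.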

@@ -2,17 +2,22 @@ package com.xiaojukeji.know.streaming.km.biz.topic.impl;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.biz.group.GroupManager;
import com.xiaojukeji.know.streaming.km.biz.topic.TopicStateManager;
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationBaseDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.topic.TopicRecordDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.broker.Broker;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.PartitionMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.TopicMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.partition.Partition;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.ResultStatus;
import com.xiaojukeji.know.streaming.km.common.bean.entity.topic.Topic;
import com.xiaojukeji.know.streaming.km.common.bean.po.group.GroupMemberPO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.broker.BrokerReplicaSummaryVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupTopicOverviewVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.topic.TopicBrokersPartitionsSummaryVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.topic.TopicRecordVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.topic.TopicStateVO;
@@ -32,6 +37,7 @@ import com.xiaojukeji.know.streaming.km.common.utils.PaginationUtil;
import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerService;
import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
import com.xiaojukeji.know.streaming.km.core.service.group.GroupService;
import com.xiaojukeji.know.streaming.km.core.service.partition.PartitionMetricService;
import com.xiaojukeji.know.streaming.km.core.service.partition.PartitionService;
import com.xiaojukeji.know.streaming.km.core.service.topic.TopicConfigService;
@@ -77,6 +83,12 @@ public class TopicStateManagerImpl implements TopicStateManager {
@Autowired
private TopicConfigService topicConfigService;
@Autowired
private GroupService groupService;
@Autowired
private GroupManager groupManager;
@Override
public TopicBrokerAllVO getTopicBrokerAll(Long clusterPhyId, String topicName, String searchBrokerHost) throws NotExistException {
Topic topic = topicService.getTopic(clusterPhyId, topicName);
@@ -346,6 +358,19 @@ public class TopicStateManagerImpl implements TopicStateManager {
return Result.buildSuc(vo);
}
@Override
public PaginationResult<GroupTopicOverviewVO> pagingTopicGroupsOverview(Long clusterPhyId, String topicName, String searchGroupName, PaginationBaseDTO dto) {
PaginationResult<GroupMemberPO> paginationResult = groupService.pagingGroupMembers(clusterPhyId, topicName, "", "", searchGroupName, dto);
if (!paginationResult.hasData()) {
return PaginationResult.buildSuc(new ArrayList<>(), paginationResult);
}
List<GroupTopicOverviewVO> groupTopicVOList = groupManager.getGroupTopicOverviewVOList(clusterPhyId, paginationResult.getData().getBizData());
return PaginationResult.buildSuc(groupTopicVOList, paginationResult);
}
/**************************************************** private method ****************************************************/
private boolean checkIfIgnore(ConsumerRecord<String, String> consumerRecord, String filterKey, String filterValue) {

View File

@@ -14,7 +14,6 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionControlItem;
import com.xiaojukeji.know.streaming.km.common.bean.vo.config.metric.UserMetricConfigVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.version.VersionItemVO;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.enums.version.VersionEnum;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.VersionUtil;
@@ -48,7 +47,7 @@ public class VersionControlManagerImpl implements VersionControlManager {
@PostConstruct
public void init(){
defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_HEALTH_SCORE, true));
defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_HEALTH_STATE, true));
defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_FAILED_FETCH_REQ, true));
defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_FAILED_PRODUCE_REQ, true));
defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_UNDER_REPLICA_PARTITIONS, true));
@@ -58,7 +57,7 @@ public class VersionControlManagerImpl implements VersionControlManager {
defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_BYTES_REJECTED, true));
defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_MESSAGE_IN, true));
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_HEALTH_SCORE, true));
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_HEALTH_STATE, true));
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_ACTIVE_CONTROLLER_COUNT, true));
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_BYTES_IN, true));
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_BYTES_OUT, true));
@@ -76,9 +75,9 @@ public class VersionControlManagerImpl implements VersionControlManager {
defaultMetrics.add(new UserMetricConfig(METRIC_GROUP.getCode(), GROUP_METRIC_OFFSET_CONSUMED, true));
defaultMetrics.add(new UserMetricConfig(METRIC_GROUP.getCode(), GROUP_METRIC_LAG, true));
defaultMetrics.add(new UserMetricConfig(METRIC_GROUP.getCode(), GROUP_METRIC_STATE, true));
defaultMetrics.add(new UserMetricConfig(METRIC_GROUP.getCode(), GROUP_METRIC_HEALTH_SCORE, true));
defaultMetrics.add(new UserMetricConfig(METRIC_GROUP.getCode(), GROUP_METRIC_HEALTH_STATE, true));
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_HEALTH_SCORE, true));
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_HEALTH_STATE, true));
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_CONNECTION_COUNT, true));
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_MESSAGE_IN, true));
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_NETWORK_RPO_AVG_IDLE, true));
@@ -108,10 +107,15 @@ public class VersionControlManagerImpl implements VersionControlManager {
allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_BROKER.getCode()), VersionItemVO.class));
allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_PARTITION.getCode()), VersionItemVO.class));
allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_REPLICATION.getCode()), VersionItemVO.class));
allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_ZOOKEEPER.getCode()), VersionItemVO.class));
allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(WEB_OP.getCode()), VersionItemVO.class));
Map<String, VersionItemVO> map = allVersionItemVO.stream().collect(
Collectors.toMap(u -> u.getType() + "@" + u.getName(), Function.identity() ));
Collectors.toMap(
u -> u.getType() + "@" + u.getName(),
Function.identity(),
(v1, v2) -> v1)
);
return Result.buildSuc(map);
}
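The switch to the three-argument `Collectors.toMap` above matters because the two-argument overload throws `IllegalStateException` on duplicate keys; with ZooKeeper metric items now appended to the combined list, a `type@name` collision would otherwise crash the endpoint. A small self-contained demonstration of the first-wins merge function (names are illustrative):

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class ToMapMerge {
    public static Map<String, String> firstWins(List<String> items) {
        // Keep the first value on duplicate keys instead of throwing
        return items.stream().collect(Collectors.toMap(
                s -> s.split("@")[0],      // key: the part before '@'
                Function.identity(),       // value: the whole string
                (v1, v2) -> v1));          // merge: first occurrence wins
    }

    public static void main(String[] args) {
        // Two entries collide on key "metric"; the merge function resolves it
        Map<String, String> m = firstWins(List.of("metric@a", "metric@b", "other@c"));
        System.out.println(m.get("metric")); // metric@a
    }
}
```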

View File

@@ -91,7 +91,7 @@ public class ReplicaMetricCollector extends AbstractMetricCollector<ReplicationM
continue;
}
Result<ReplicationMetrics> ret = replicaMetricService.collectReplicaMetricsFromKafkaWithCache(
Result<ReplicationMetrics> ret = replicaMetricService.collectReplicaMetricsFromKafka(
clusterPhyId,
metrics.getTopic(),
metrics.getBrokerId(),

View File

@@ -0,0 +1,122 @@
package com.xiaojukeji.know.streaming.km.collector.metric;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
import com.xiaojukeji.know.streaming.km.common.bean.entity.config.ZKConfig;
import com.xiaojukeji.know.streaming.km.common.bean.entity.kafkacontroller.KafkaController;
import com.xiaojukeji.know.streaming.km.common.bean.entity.param.metric.ZookeeperMetricParam;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionControlItem;
import com.xiaojukeji.know.streaming.km.common.utils.Tuple;
import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.ZookeeperMetricEvent;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.EnvUtil;
import com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.ZookeeperInfo;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.ZookeeperMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.ZookeeperMetricPO;
import com.xiaojukeji.know.streaming.km.core.service.kafkacontroller.KafkaControllerService;
import com.xiaojukeji.know.streaming.km.core.service.version.VersionControlService;
import com.xiaojukeji.know.streaming.km.core.service.zookeeper.ZookeeperMetricService;
import com.xiaojukeji.know.streaming.km.core.service.zookeeper.ZookeeperService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum.METRIC_ZOOKEEPER;
/**
* @author didi
*/
@Component
public class ZookeeperMetricCollector extends AbstractMetricCollector<ZookeeperMetricPO> {
protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
@Autowired
private VersionControlService versionControlService;
@Autowired
private ZookeeperMetricService zookeeperMetricService;
@Autowired
private ZookeeperService zookeeperService;
@Autowired
private KafkaControllerService kafkaControllerService;
@Override
public void collectMetrics(ClusterPhy clusterPhy) {
Long startTime = System.currentTimeMillis();
Long clusterPhyId = clusterPhy.getId();
List<VersionControlItem> items = versionControlService.listVersionControlItem(clusterPhyId, collectorType().getCode());
List<ZookeeperInfo> aliveZKList = zookeeperService.listFromDBByCluster(clusterPhyId)
.stream()
.filter(elem -> Constant.ALIVE.equals(elem.getStatus()))
.collect(Collectors.toList());
KafkaController kafkaController = kafkaControllerService.getKafkaControllerFromDB(clusterPhyId);
ZookeeperMetrics metrics = ZookeeperMetrics.initWithMetric(clusterPhyId, Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (float)Constant.INVALID_CODE);
if (ValidateUtils.isEmptyList(aliveZKList)) {
// No alive ZK nodes: publish the event and then return immediately
publishMetric(new ZookeeperMetricEvent(this, Arrays.asList(metrics)));
return;
}
// Build the collection parameters
ZookeeperMetricParam param = new ZookeeperMetricParam(
clusterPhyId,
aliveZKList.stream().map(elem -> new Tuple<String, Integer>(elem.getHost(), elem.getPort())).collect(Collectors.toList()),
ConvertUtil.str2ObjByJson(clusterPhy.getZkProperties(), ZKConfig.class),
kafkaController == null? Constant.INVALID_CODE: kafkaController.getBrokerId(),
null
);
for(VersionControlItem v : items) {
try {
if(null != metrics.getMetrics().get(v.getName())) {
continue;
}
param.setMetricName(v.getName());
Result<ZookeeperMetrics> ret = zookeeperMetricService.collectMetricsFromZookeeper(param);
if(null == ret || ret.failed() || null == ret.getData()){
continue;
}
metrics.putMetric(ret.getData().getMetrics());
if(!EnvUtil.isOnline()){
LOGGER.info(
"class=ZookeeperMetricCollector||method=collectMetrics||clusterPhyId={}||metricName={}||metricValue={}",
clusterPhyId, v.getName(), ConvertUtil.obj2Json(ret.getData().getMetrics())
);
}
} catch (Exception e){
LOGGER.error(
"class=ZookeeperMetricCollector||method=collectMetrics||clusterPhyId={}||metricName={}||errMsg=exception!",
clusterPhyId, v.getName(), e
);
}
}
metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (System.currentTimeMillis() - startTime) / 1000.0f);
publishMetric(new ZookeeperMetricEvent(this, Arrays.asList(metrics)));
LOGGER.info(
"class=ZookeeperMetricCollector||method=collectMetrics||clusterPhyId={}||startTime={}||costTime={}||msg=collect finished.",
clusterPhyId, startTime, System.currentTimeMillis() - startTime
);
}
@Override
public VersionItemTypeEnum collectorType() {
return METRIC_ZOOKEEPER;
}
}

View File

@@ -0,0 +1,28 @@
package com.xiaojukeji.know.streaming.km.collector.sink;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.ZookeeperMetricEvent;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.ZookeeperMetricPO;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;
import javax.annotation.PostConstruct;
import static com.xiaojukeji.know.streaming.km.common.constant.ESIndexConstant.ZOOKEEPER_INDEX;
@Component
public class ZookeeperMetricESSender extends AbstractMetricESSender implements ApplicationListener<ZookeeperMetricEvent> {
protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
@PostConstruct
public void init(){
LOGGER.info("class=ZookeeperMetricESSender||method=init||msg=init finished");
}
@Override
public void onApplicationEvent(ZookeeperMetricEvent event) {
send2es(ZOOKEEPER_INDEX, ConvertUtil.list2List(event.getZookeeperMetrics(), ZookeeperMetricPO.class));
}
}
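The collector and the ES sender are decoupled through Spring's event bus: `collectMetrics` publishes a `ZookeeperMetricEvent`, and any `ApplicationListener<ZookeeperMetricEvent>`, such as `ZookeeperMetricESSender` above, receives it. A dependency-free sketch of the same publish/subscribe shape (names here are illustrative, not Spring's API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class MetricBus {
    // Registered listeners (stand-in for Spring's ApplicationListener beans)
    private final List<Consumer<List<Float>>> listeners = new ArrayList<>();

    public void subscribe(Consumer<List<Float>> listener) {
        listeners.add(listener);
    }

    // Stand-in for ApplicationEventPublisher.publishEvent
    public void publish(List<Float> metrics) {
        listeners.forEach(l -> l.accept(metrics));
    }

    public static void main(String[] args) {
        MetricBus bus = new MetricBus();
        List<Float> received = new ArrayList<>();
        bus.subscribe(received::addAll);      // plays the role of the ES sender
        bus.publish(List.of(1.0f, 2.5f));     // plays the role of the collector
        System.out.println(received.size());  // 2
    }
}
```

The design keeps the collector free of any ES knowledge: adding another sink is just another listener bean.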

View File

@@ -0,0 +1,18 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.cluster;
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationBaseDTO;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
/**
* @author wyb
* @date 2022/10/17
*/
@Data
public class ClusterGroupSummaryDTO extends PaginationBaseDTO {
@ApiModelProperty("Search by Topic name")
private String searchTopicName;
@ApiModelProperty("Search by Group name")
private String searchGroupName;
}

View File

@@ -0,0 +1,13 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.cluster;
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationBaseDTO;
import lombok.Data;
/**
* @author wyc
* @date 2022/9/23
*/
@Data
public class ClusterZookeepersOverviewDTO extends PaginationBaseDTO {
}

View File

@@ -3,6 +3,7 @@ package com.xiaojukeji.know.streaming.km.common.bean.entity.broker;
import com.alibaba.fastjson.TypeReference;
import com.xiaojukeji.know.streaming.km.common.bean.entity.common.IpPortData;
import com.xiaojukeji.know.streaming.km.common.bean.entity.config.JmxConfig;
import com.xiaojukeji.know.streaming.km.common.bean.po.broker.BrokerPO;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import lombok.AllArgsConstructor;
@@ -65,13 +66,13 @@ public class Broker implements Serializable {
*/
private Map<String, IpPortData> endpointMap;
public static Broker buildFrom(Long clusterPhyId, Node node, Long startTimestamp) {
public static Broker buildFrom(Long clusterPhyId, Node node, Long startTimestamp, JmxConfig jmxConfig) {
Broker metadata = new Broker();
metadata.setClusterPhyId(clusterPhyId);
metadata.setBrokerId(node.id());
metadata.setHost(node.host());
metadata.setPort(node.port());
metadata.setJmxPort(-1);
metadata.setJmxPort(jmxConfig != null ? jmxConfig.getJmxPort() : -1);
metadata.setStartTimestamp(startTimestamp);
metadata.setRack(node.rack());
metadata.setStatus(1);

View File

@@ -0,0 +1,37 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.cluster;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
/**
* Cluster health state information
* @author zengqiao
* @date 22/02/24
*/
@Data
@NoArgsConstructor
@AllArgsConstructor
public class ClusterPhysHealthState {
private Integer unknownCount;
private Integer goodCount;
private Integer mediumCount;
private Integer poorCount;
private Integer deadCount;
private Integer total;
public ClusterPhysHealthState(Integer total) {
this.unknownCount = 0;
this.goodCount = 0;
this.mediumCount = 0;
this.poorCount = 0;
this.deadCount = 0;
this.total = total;
}
}

View File

@@ -1,8 +1,8 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.config;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import java.io.Serializable;
import java.util.Properties;
@@ -11,7 +11,6 @@ import java.util.Properties;
* @author zengqiao
* @date 22/02/24
*/
@Data
@ApiModel(description = "ZK configuration")
public class ZKConfig implements Serializable {
@ApiModelProperty(value="ZK JMX configuration")
@@ -21,11 +20,51 @@ public class ZKConfig implements Serializable {
private Boolean openSecure = false;
@ApiModelProperty(value="ZK session timeout", example = "15000")
private Long sessionTimeoutUnitMs = 15000L;
private Integer sessionTimeoutUnitMs = 15000;
@ApiModelProperty(value="ZK request timeout", example = "5000")
private Long requestTimeoutUnitMs = 5000L;
private Integer requestTimeoutUnitMs = 5000;
@ApiModelProperty(value="Other ZK properties")
private Properties otherProps = new Properties();
public JmxConfig getJmxConfig() {
return jmxConfig == null? new JmxConfig(): jmxConfig;
}
public void setJmxConfig(JmxConfig jmxConfig) {
this.jmxConfig = jmxConfig;
}
public Boolean getOpenSecure() {
return openSecure != null && openSecure;
}
public void setOpenSecure(Boolean openSecure) {
this.openSecure = openSecure;
}
public Integer getSessionTimeoutUnitMs() {
return sessionTimeoutUnitMs == null? Constant.DEFAULT_SESSION_TIMEOUT_UNIT_MS: sessionTimeoutUnitMs;
}
public void setSessionTimeoutUnitMs(Integer sessionTimeoutUnitMs) {
this.sessionTimeoutUnitMs = sessionTimeoutUnitMs;
}
public Integer getRequestTimeoutUnitMs() {
return requestTimeoutUnitMs == null? Constant.DEFAULT_REQUEST_TIMEOUT_UNIT_MS: requestTimeoutUnitMs;
}
public void setRequestTimeoutUnitMs(Integer requestTimeoutUnitMs) {
this.requestTimeoutUnitMs = requestTimeoutUnitMs;
}
public Properties getOtherProps() {
return otherProps == null? new Properties() : otherProps;
}
public void setOtherProps(Properties otherProps) {
this.otherProps = otherProps;
}
}
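`ZKConfig`'s hand-written getters return defaults when a field is null, which can happen when the cluster's `zkProperties` JSON omits a key or binds an explicit null, overriding the field initializer. A minimal sketch of this null-safe-getter pattern (class name and the 15000 default are illustrative, taken from the values shown above):

```java
public class NullSafeConfig {
    private Integer sessionTimeoutMs;                             // may become null after JSON binding
    private static final int DEFAULT_SESSION_TIMEOUT_MS = 15000;  // assumed default, as in ZKConfig

    // The getter guards against null instead of trusting the field initializer,
    // which JSON binders may overwrite with an explicit null
    public int getSessionTimeoutMs() {
        return sessionTimeoutMs == null ? DEFAULT_SESSION_TIMEOUT_MS : sessionTimeoutMs;
    }

    public void setSessionTimeoutMs(Integer v) {
        this.sessionTimeoutMs = v;
    }

    public static void main(String[] args) {
        NullSafeConfig c = new NullSafeConfig();
        c.setSessionTimeoutMs(null);                 // simulate "sessionTimeoutMs": null in JSON
        System.out.println(c.getSessionTimeoutMs()); // 15000
    }
}
```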

View File

@@ -13,9 +13,4 @@ public class BaseClusterHealthConfig extends BaseClusterConfigValue {
* Health check name
*/
protected HealthCheckNameEnum checkNameEnum;
/**
* Weight
*/
protected Float weight;
}

View File

@@ -0,0 +1,19 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.config.healthcheck;
import lombok.Data;
/**
* @author wyb
* @date 2022/10/26
*/
@Data
public class HealthAmountRatioConfig extends BaseClusterHealthConfig {
/**
* Total amount
*/
private Integer amount;
/**
* Ratio
*/
private Double ratio;
}

View File

@@ -0,0 +1,74 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.group;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.enums.group.GroupStateEnum;
import com.xiaojukeji.know.streaming.km.common.enums.group.GroupTypeEnum;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import org.apache.kafka.clients.admin.ConsumerGroupDescription;
import java.util.ArrayList;
import java.util.List;
/**
* @author wyb
* @date 2022/10/10
*/
@Data
@NoArgsConstructor
@AllArgsConstructor
public class Group {
/**
 * Cluster ID
 */
private Long clusterPhyId;
/**
 * Group type
 * @see GroupTypeEnum
 */
private GroupTypeEnum type;
/**
 * Group name
 */
private String name;
/**
 * Group state
 * @see GroupStateEnum
 */
private GroupStateEnum state;
/**
 * Number of group members
 */
private Integer memberCount;
/**
 * Topics consumed by the group
 */
private List<GroupTopicMember> topicMembers;
/**
 * Group partition assignment strategy
 */
private String partitionAssignor;
/**
 * Broker ID of the group coordinator
 */
private int coordinatorId;
public Group(Long clusterPhyId, String groupName, ConsumerGroupDescription groupDescription) {
this.clusterPhyId = clusterPhyId;
this.type = groupDescription.isSimpleConsumerGroup()? GroupTypeEnum.CONSUMER: GroupTypeEnum.CONNECTOR;
this.name = groupName;
this.state = GroupStateEnum.getByRawState(groupDescription.state());
this.memberCount = groupDescription.members() == null? 0: groupDescription.members().size();
this.topicMembers = new ArrayList<>();
this.partitionAssignor = groupDescription.partitionAssignor();
this.coordinatorId = groupDescription.coordinator() == null? Constant.INVALID_CODE: groupDescription.coordinator().id();
}
}

View File

@@ -0,0 +1,27 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.group;
import lombok.Data;
import lombok.NoArgsConstructor;
/**
* @author wyb
* @date 2022/10/10
*/
@Data
@NoArgsConstructor
public class GroupTopicMember {
/**
* Topic name
*/
private String topicName;
/**
* Number of members consuming this Topic
*/
private Integer memberCount;
public GroupTopicMember(String topicName, Integer memberCount) {
this.topicName = topicName;
this.memberCount = memberCount;
}
}

View File

@@ -0,0 +1,83 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.health;
import com.xiaojukeji.know.streaming.km.common.bean.po.health.HealthCheckResultPO;
import com.xiaojukeji.know.streaming.km.common.enums.health.HealthCheckNameEnum;
import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;
import java.util.stream.Collectors;
@Data
@NoArgsConstructor
public class HealthCheckAggResult {
private HealthCheckNameEnum checkNameEnum;
private List<HealthCheckResultPO> poList;
private Boolean passed;
public HealthCheckAggResult(HealthCheckNameEnum checkNameEnum, List<HealthCheckResultPO> poList) {
this.checkNameEnum = checkNameEnum;
this.poList = poList;
if (!ValidateUtils.isEmptyList(poList) && poList.stream().filter(elem -> elem.getPassed() <= 0).count() <= 0) {
passed = true;
} else {
passed = false;
}
}
public Integer getTotalCount() {
if (poList == null) {
return 0;
}
return poList.size();
}
public Integer getPassedCount() {
if (poList == null) {
return 0;
}
return (int) (poList.stream().filter(elem -> elem.getPassed() > 0).count());
}
/**
* Compute the health score of the current check
* e.g. the score of a single item within the cluster Broker health check
*/
public Integer calRawHealthScore() {
if (poList == null || poList.isEmpty()) {
return 100;
}
return 100 * this.getPassedCount() / this.getTotalCount();
}
public List<String> getNotPassedResNameList() {
if (poList == null) {
return new ArrayList<>();
}
return poList.stream().filter(elem -> elem.getPassed() <= 0).map(elem -> elem.getResName()).collect(Collectors.toList());
}
public Date getCreateTime() {
if (ValidateUtils.isEmptyList(poList)) {
return null;
}
return poList.get(0).getCreateTime();
}
public Date getUpdateTime() {
if (ValidateUtils.isEmptyList(poList)) {
return null;
}
return poList.get(0).getUpdateTime();
}
}
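`calRawHealthScore` above uses integer arithmetic: `100 * passed / total` truncates toward zero, so 2 of 3 checks passing yields 66, not 66.7. A sketch of the same computation (the empty-list-scores-100 rule matches the class above; the method name is illustrative):

```java
public class RawHealthScore {
    /** Integer health score: percentage of passed checks, truncated. */
    static int score(int passedCount, int totalCount) {
        if (totalCount == 0) {
            return 100; // no checks recorded counts as healthy, as in HealthCheckAggResult
        }
        return 100 * passedCount / totalCount; // integer division truncates
    }

    public static void main(String[] args) {
        System.out.println(score(2, 3)); // 66
        System.out.println(score(0, 0)); // 100
    }
}
```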

View File

@@ -17,10 +17,6 @@ import java.util.stream.Collectors;
public class HealthScoreResult {
private HealthCheckNameEnum checkNameEnum;
private Float presentDimensionTotalWeight;
private Float allDimensionTotalWeight;
private BaseClusterHealthConfig baseConfig;
private List<HealthCheckResultPO> poList;
@@ -28,15 +24,11 @@ public class HealthScoreResult {
private Boolean passed;
public HealthScoreResult(HealthCheckNameEnum checkNameEnum,
Float presentDimensionTotalWeight,
Float allDimensionTotalWeight,
BaseClusterHealthConfig baseConfig,
List<HealthCheckResultPO> poList) {
this.checkNameEnum = checkNameEnum;
this.baseConfig = baseConfig;
this.poList = poList;
this.presentDimensionTotalWeight = presentDimensionTotalWeight;
this.allDimensionTotalWeight = allDimensionTotalWeight;
if (!ValidateUtils.isEmptyList(poList) && poList.stream().filter(elem -> elem.getPassed() <= 0).count() <= 0) {
passed = true;
} else {
@@ -59,32 +51,6 @@ public class HealthScoreResult {
return (int) (poList.stream().filter(elem -> elem.getPassed() > 0).count());
}
/**
* Compute the health score across all check results
* e.g. the overall cluster health score
*/
public Float calAllWeightHealthScore() {
Float healthScore = 100 * baseConfig.getWeight() / allDimensionTotalWeight;
if (poList == null || poList.isEmpty()) {
return 0.0f;
}
return healthScore * this.getPassedCount() / this.getTotalCount();
}
/**
* Compute the health score for the current dimension
* e.g. the cluster Broker health score
*/
public Float calDimensionWeightHealthScore() {
Float healthScore = 100 * baseConfig.getWeight() / presentDimensionTotalWeight;
if (poList == null || poList.isEmpty()) {
return 0.0f;
}
return healthScore * this.getPassedCount() / this.getTotalCount();
}
/**
* Compute the health score of the current check
* e.g. the score of a single item within the cluster Broker health check
@@ -102,7 +68,7 @@ public class HealthScoreResult {
return new ArrayList<>();
}
return poList.stream().filter(elem -> elem.getPassed() <= 0).map(elem -> elem.getResName()).collect(Collectors.toList());
return poList.stream().filter(elem -> elem.getPassed() <= 0 && !ValidateUtils.isBlank(elem.getResName())).map(elem -> elem.getResName()).collect(Collectors.toList());
}
public Date getCreateTime() {

View File

@@ -0,0 +1,28 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.metrics;
import lombok.Data;
import lombok.ToString;
/**
* @author zengqiao
* @date 20/6/17
*/
@Data
@ToString
public class ZookeeperMetrics extends BaseMetrics {
public ZookeeperMetrics(Long clusterPhyId) {
super(clusterPhyId);
}
public static ZookeeperMetrics initWithMetric(Long clusterPhyId, String metric, Float value) {
ZookeeperMetrics metrics = new ZookeeperMetrics(clusterPhyId);
metrics.setClusterPhyId(clusterPhyId);
metrics.putMetric(metric, value);
return metrics;
}
@Override
public String unique() {
return "ZK@" + clusterPhyId;
}
}

View File

@@ -0,0 +1,47 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.param.metric;
import com.xiaojukeji.know.streaming.km.common.bean.entity.config.ZKConfig;
import com.xiaojukeji.know.streaming.km.common.utils.Tuple;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.util.List;
/**
* @author didi
*/
@Data
@NoArgsConstructor
public class ZookeeperMetricParam extends MetricParam {
private Long clusterPhyId;
private List<Tuple<String, Integer>> zkAddressList;
private ZKConfig zkConfig;
private String metricName;
private Integer kafkaControllerId;
public ZookeeperMetricParam(Long clusterPhyId,
List<Tuple<String, Integer>> zkAddressList,
ZKConfig zkConfig,
String metricName) {
this.clusterPhyId = clusterPhyId;
this.zkAddressList = zkAddressList;
this.zkConfig = zkConfig;
this.metricName = metricName;
}
public ZookeeperMetricParam(Long clusterPhyId,
List<Tuple<String, Integer>> zkAddressList,
ZKConfig zkConfig,
Integer kafkaControllerId,
String metricName) {
this.clusterPhyId = clusterPhyId;
this.zkAddressList = zkAddressList;
this.zkConfig = zkConfig;
this.kafkaControllerId = kafkaControllerId;
this.metricName = metricName;
}
}

View File

@@ -0,0 +1,26 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.param.zookeeper;
import com.xiaojukeji.know.streaming.km.common.bean.entity.config.ZKConfig;
import com.xiaojukeji.know.streaming.km.common.bean.entity.param.cluster.ClusterPhyParam;
import com.xiaojukeji.know.streaming.km.common.utils.Tuple;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.util.List;
/**
* @author didi
*/
@Data
@NoArgsConstructor
public class ZookeeperParam extends ClusterPhyParam {
private List<Tuple<String, Integer>> zkAddressList;
private ZKConfig zkConfig;
public ZookeeperParam(Long clusterPhyId, List<Tuple<String, Integer>> zkAddressList, ZKConfig zkConfig) {
super(clusterPhyId);
this.zkAddressList = zkAddressList;
this.zkConfig = zkConfig;
}
}

View File

@@ -1,5 +1,6 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.reassign;
import com.xiaojukeji.know.streaming.km.common.utils.CommonUtils;
import lombok.Data;
import org.apache.kafka.common.TopicPartition;
@@ -19,4 +20,10 @@ public class ReassignResult {
return state.isDone();
}
public boolean checkPreferredReplicaElectionUnNeed(String reassignBrokerIds, String originalBrokerIds) {
Integer targetLeader = CommonUtils.string2IntList(reassignBrokerIds).get(0);
Integer originalLeader = CommonUtils.string2IntList(originalBrokerIds).get(0);
return originalLeader.equals(targetLeader);
}
}
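`checkPreferredReplicaElectionUnNeed` above treats the first broker in each comma-separated replica list as the preferred leader: if the reassignment keeps the same first broker, a preferred-replica election would be a no-op. A self-contained sketch of that comparison (the parser stands in for `CommonUtils.string2IntList`; names are illustrative):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class PreferredLeaderCheck {
    // Parse "1,2,3" into [1, 2, 3] (stand-in for CommonUtils.string2IntList)
    static List<Integer> parseBrokerIds(String csv) {
        return Arrays.stream(csv.split(","))
                .map(String::trim)
                .map(Integer::valueOf)
                .collect(Collectors.toList());
    }

    /**
     * Election is unnecessary when the first broker in the reassigned replica
     * list (the preferred leader) equals the original one.
     */
    static boolean electionUnneeded(String reassignBrokerIds, String originalBrokerIds) {
        return parseBrokerIds(originalBrokerIds).get(0)
                .equals(parseBrokerIds(reassignBrokerIds).get(0));
    }

    public static void main(String[] args) {
        System.out.println(electionUnneeded("1,3,2", "1,2,3")); // true: leader 1 unchanged
        System.out.println(electionUnneeded("2,1,3", "1,2,3")); // false: leader moved to 2
    }
}
```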

View File

@@ -56,6 +56,7 @@ public enum ResultStatus {
KAFKA_OPERATE_FAILED(8010, "Kafka operation failed"),
MYSQL_OPERATE_FAILED(8020, "MySQL operation failed"),
ZK_OPERATE_FAILED(8030, "ZK operation failed"),
ZK_FOUR_LETTER_CMD_FORBIDDEN(8031, "ZK four-letter command forbidden"),
ES_OPERATE_ERROR(8040, "ES operation failed"),
HTTP_REQ_ERROR(8050, "Third-party HTTP request error"),

View File

@@ -23,6 +23,8 @@ public class VersionMetricControlItem extends VersionControlItem{
public static final String CATEGORY_PERFORMANCE = "Performance";
public static final String CATEGORY_FLOW = "Flow";
public static final String CATEGORY_CLIENT = "Client";
/**
* Metric unit name; non-metric items have none
*/

View File

@@ -0,0 +1,22 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper;
import com.xiaojukeji.know.streaming.km.common.utils.Tuple;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import org.apache.zookeeper.data.Stat;
@Data
public class Znode {
@ApiModelProperty(value = "Node name", example = "broker")
private String name;
@ApiModelProperty(value = "Node data", example = "saassad")
private String data;
@ApiModelProperty(value = "Node stat attributes", example = "")
private Stat stat;
@ApiModelProperty(value = "Node path", example = "")
private String namespace;
}

View File

@@ -0,0 +1,42 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper;
import com.xiaojukeji.know.streaming.km.common.bean.entity.BaseEntity;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import lombok.Data;
@Data
public class ZookeeperInfo extends BaseEntity {
/**
 * Cluster ID
 */
private Long clusterPhyId;
/**
 * Host
 */
private String host;
/**
 * Port
 */
private Integer port;
/**
 * Role
 */
private String role;
/**
 * Version
 */
private String version;
/**
 * ZK status
 */
private Integer status;
public boolean alive() {
return !(Constant.DOWN.equals(status));
}
}
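
Note that `alive()` tests only for the explicit DOWN state: any other value, including the intermediate `ZK_ALIVE_BUT_4_LETTER_FORBIDDEN` status (11, added to `Constant` in this change) and an unset status, counts as alive. A minimal sketch using the constants from this diff:

```java
public class ZkAliveSketch {
    // Mirrors Constant.ALIVE / Constant.DOWN and the intermediate state added in this diff.
    static final Integer ALIVE = 1;
    static final Integer DOWN = 0;
    static final Integer ZK_ALIVE_BUT_4_LETTER_FORBIDDEN = 11;

    // DOWN.equals(null) is false, so an unset status also counts as alive.
    static boolean alive(Integer status) {
        return !DOWN.equals(status);
    }
}
```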

View File

@@ -0,0 +1,9 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.fourletterword;
import java.io.Serializable;
/**
* Base class for four-letter-word command result data
*/
public class BaseFourLetterWordCmdData implements Serializable {
}

View File

@@ -0,0 +1,38 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.fourletterword;
import lombok.Data;
/**
* clientPort=2183
* dataDir=/data1/data/zkData2/version-2
* dataLogDir=/data1/data/zkLog2/version-2
* tickTime=2000
* maxClientCnxns=60
* minSessionTimeout=4000
* maxSessionTimeout=40000
* serverId=2
* initLimit=15
* syncLimit=10
* electionAlg=3
* electionPort=4445
* quorumPort=4444
* peerType=0
*/
@Data
public class ConfigCmdData extends BaseFourLetterWordCmdData {
private Long clientPort;
private String dataDir;
private String dataLogDir;
private Long tickTime;
private Long maxClientCnxns;
private Long minSessionTimeout;
private Long maxSessionTimeout;
private Integer serverId;
private String initLimit;
private Long syncLimit;
private Long electionAlg;
private Long electionPort;
private Long quorumPort;
private Long peerType;
}

View File

@@ -0,0 +1,39 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.fourletterword;
import lombok.Data;
/**
* zk_version 3.4.6-1569965, built on 02/20/2014 09:09 GMT
* zk_avg_latency 0
* zk_max_latency 399
* zk_min_latency 0
* zk_packets_received 234857
* zk_packets_sent 234860
* zk_num_alive_connections 4
* zk_outstanding_requests 0
* zk_server_state follower
* zk_znode_count 35566
* zk_watch_count 39
* zk_ephemerals_count 10
* zk_approximate_data_size 3356708
* zk_open_file_descriptor_count 35
* zk_max_file_descriptor_count 819200
*/
@Data
public class MonitorCmdData extends BaseFourLetterWordCmdData {
private String zkVersion;
private Float zkAvgLatency;
private Long zkMaxLatency;
private Long zkMinLatency;
private Long zkPacketsReceived;
private Long zkPacketsSent;
private Long zkNumAliveConnections;
private Long zkOutstandingRequests;
private String zkServerState;
private Long zkZnodeCount;
private Long zkWatchCount;
private Long zkEphemeralsCount;
private Long zkApproximateDataSize;
private Long zkOpenFileDescriptorCount;
private Long zkMaxFileDescriptorCount;
}
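
The `mntr` reply documented above is one `key<TAB>value` pair per line; a self-contained sketch of the split that `MonitorCmdDataParser` performs on it:

```java
import java.util.HashMap;
import java.util.Map;

public class MntrOutputSketch {
    // Splits raw "mntr" output (one key<TAB>value pair per line) into a map,
    // mirroring the split step of MonitorCmdDataParser.
    public static Map<String, String> toMap(String raw) {
        Map<String, String> dataMap = new HashMap<>();
        for (String line : raw.split("\n")) {
            int idx = line.indexOf('\t');
            if (idx >= 0) {
                dataMap.put(line.substring(0, idx), line.substring(idx + 1).trim());
            }
        }
        return dataMap;
    }
}
```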

View File

@@ -0,0 +1,30 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.fourletterword;
import lombok.Data;
/**
* Zookeeper version: 3.5.9-83df9301aa5c2a5d284a9940177808c01bc35cef, built on 01/06/2021 19:49 GMT
* Latency min/avg/max: 0/0/2209
* Received: 278202469
* Sent: 279449055
* Connections: 31
* Outstanding: 0
* Zxid: 0x20033fc12
* Mode: leader
* Node count: 10084
* Proposal sizes last/min/max: 36/32/31260 (leader only)
*/
@Data
public class ServerCmdData extends BaseFourLetterWordCmdData {
private String zkVersion;
private Float zkAvgLatency;
private Long zkMaxLatency;
private Long zkMinLatency;
private Long zkPacketsReceived;
private Long zkPacketsSent;
private Long zkNumAliveConnections;
private Long zkOutstandingRequests;
private String zkServerState;
private Long zkZnodeCount;
private Long zkZxid;
}

View File

@@ -0,0 +1,116 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.fourletterword.parser;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.fourletterword.ConfigCmdData;
import com.xiaojukeji.know.streaming.km.common.utils.zookeeper.FourLetterWordUtil;
import lombok.Data;
import java.util.HashMap;
import java.util.Map;
/**
* clientPort=2183
* dataDir=/data1/data/zkData2/version-2
* dataLogDir=/data1/data/zkLog2/version-2
* tickTime=2000
* maxClientCnxns=60
* minSessionTimeout=4000
* maxSessionTimeout=40000
* serverId=2
* initLimit=15
* syncLimit=10
* electionAlg=3
* electionPort=4445
* quorumPort=4444
* peerType=0
*/
@Data
public class ConfigCmdDataParser implements FourLetterWordDataParser<ConfigCmdData> {
private static final ILog LOGGER = LogFactory.getLog(ConfigCmdDataParser.class);
private Result<ConfigCmdData> dataResult = null;
@Override
public String getCmd() {
return FourLetterWordUtil.ConfigCmd;
}
@Override
public ConfigCmdData parseAndInitData(Long clusterPhyId, String host, int port, String cmdData) {
Map<String, String> dataMap = new HashMap<>();
for (String elem : cmdData.split("\n")) {
if (elem.isEmpty()) {
continue;
}
int idx = elem.indexOf('=');
if (idx >= 0) {
dataMap.put(elem.substring(0, idx), elem.substring(idx + 1).trim());
}
}
ConfigCmdData configCmdData = new ConfigCmdData();
dataMap.entrySet().stream().forEach(elem -> {
try {
switch (elem.getKey()) {
case "clientPort":
configCmdData.setClientPort(Long.valueOf(elem.getValue()));
break;
case "dataDir":
configCmdData.setDataDir(elem.getValue());
break;
case "dataLogDir":
configCmdData.setDataLogDir(elem.getValue());
break;
case "tickTime":
configCmdData.setTickTime(Long.valueOf(elem.getValue()));
break;
case "maxClientCnxns":
configCmdData.setMaxClientCnxns(Long.valueOf(elem.getValue()));
break;
case "minSessionTimeout":
configCmdData.setMinSessionTimeout(Long.valueOf(elem.getValue()));
break;
case "maxSessionTimeout":
configCmdData.setMaxSessionTimeout(Long.valueOf(elem.getValue()));
break;
case "serverId":
configCmdData.setServerId(Integer.valueOf(elem.getValue()));
break;
case "initLimit":
configCmdData.setInitLimit(elem.getValue());
break;
case "syncLimit":
configCmdData.setSyncLimit(Long.valueOf(elem.getValue()));
break;
case "electionAlg":
configCmdData.setElectionAlg(Long.valueOf(elem.getValue()));
break;
case "electionPort":
configCmdData.setElectionPort(Long.valueOf(elem.getValue()));
break;
case "quorumPort":
configCmdData.setQuorumPort(Long.valueOf(elem.getValue()));
break;
case "peerType":
configCmdData.setPeerType(Long.valueOf(elem.getValue()));
break;
default:
LOGGER.warn(
"class=ConfigCmdDataParser||method=parseAndInitData||name={}||value={}||msg=data not parsed!",
elem.getKey(), elem.getValue()
);
}
} catch (Exception e) {
LOGGER.error(
"class=ConfigCmdDataParser||method=parseAndInitData||clusterPhyId={}||host={}||port={}||name={}||value={}||errMsg=exception!",
clusterPhyId, host, port, elem.getKey(), elem.getValue(), e
);
}
});
return configCmdData;
}
}

View File

@@ -0,0 +1,10 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.fourletterword.parser;
/**
* Parser for four-letter-word command results
*/
public interface FourLetterWordDataParser<T> {
String getCmd();
T parseAndInitData(Long clusterPhyId, String host, int port, String cmdData);
}

View File

@@ -0,0 +1,117 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.fourletterword.parser;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.fourletterword.MonitorCmdData;
import com.xiaojukeji.know.streaming.km.common.utils.zookeeper.FourLetterWordUtil;
import lombok.Data;
import java.util.HashMap;
import java.util.Map;
/**
* zk_version 3.4.6-1569965, built on 02/20/2014 09:09 GMT
* zk_avg_latency 0
* zk_max_latency 399
* zk_min_latency 0
* zk_packets_received 234857
* zk_packets_sent 234860
* zk_num_alive_connections 4
* zk_outstanding_requests 0
* zk_server_state follower
* zk_znode_count 35566
* zk_watch_count 39
* zk_ephemerals_count 10
* zk_approximate_data_size 3356708
* zk_open_file_descriptor_count 35
* zk_max_file_descriptor_count 819200
*/
@Data
public class MonitorCmdDataParser implements FourLetterWordDataParser<MonitorCmdData> {
private static final ILog LOGGER = LogFactory.getLog(MonitorCmdDataParser.class);
@Override
public String getCmd() {
return FourLetterWordUtil.MonitorCmd;
}
@Override
public MonitorCmdData parseAndInitData(Long clusterPhyId, String host, int port, String cmdData) {
Map<String, String> dataMap = new HashMap<>();
for (String elem : cmdData.split("\n")) {
if (elem.isEmpty()) {
continue;
}
int idx = elem.indexOf('\t');
if (idx >= 0) {
dataMap.put(elem.substring(0, idx), elem.substring(idx + 1).trim());
}
}
MonitorCmdData monitorCmdData = new MonitorCmdData();
dataMap.entrySet().stream().forEach(elem -> {
try {
switch (elem.getKey()) {
case "zk_version":
monitorCmdData.setZkVersion(elem.getValue().split("-")[0]);
break;
case "zk_avg_latency":
monitorCmdData.setZkAvgLatency(Float.valueOf(elem.getValue()));
break;
case "zk_max_latency":
monitorCmdData.setZkMaxLatency(Long.valueOf(elem.getValue()));
break;
case "zk_min_latency":
monitorCmdData.setZkMinLatency(Long.valueOf(elem.getValue()));
break;
case "zk_packets_received":
monitorCmdData.setZkPacketsReceived(Long.valueOf(elem.getValue()));
break;
case "zk_packets_sent":
monitorCmdData.setZkPacketsSent(Long.valueOf(elem.getValue()));
break;
case "zk_num_alive_connections":
monitorCmdData.setZkNumAliveConnections(Long.valueOf(elem.getValue()));
break;
case "zk_outstanding_requests":
monitorCmdData.setZkOutstandingRequests(Long.valueOf(elem.getValue()));
break;
case "zk_server_state":
monitorCmdData.setZkServerState(elem.getValue());
break;
case "zk_znode_count":
monitorCmdData.setZkZnodeCount(Long.valueOf(elem.getValue()));
break;
case "zk_watch_count":
monitorCmdData.setZkWatchCount(Long.valueOf(elem.getValue()));
break;
case "zk_ephemerals_count":
monitorCmdData.setZkEphemeralsCount(Long.valueOf(elem.getValue()));
break;
case "zk_approximate_data_size":
monitorCmdData.setZkApproximateDataSize(Long.valueOf(elem.getValue()));
break;
case "zk_open_file_descriptor_count":
monitorCmdData.setZkOpenFileDescriptorCount(Long.valueOf(elem.getValue()));
break;
case "zk_max_file_descriptor_count":
monitorCmdData.setZkMaxFileDescriptorCount(Long.valueOf(elem.getValue()));
break;
default:
LOGGER.warn(
"class=MonitorCmdDataParser||method=parseAndInitData||name={}||value={}||msg=data not parsed!",
elem.getKey(), elem.getValue()
);
}
} catch (Exception e) {
LOGGER.error(
"class=MonitorCmdDataParser||method=parseAndInitData||clusterPhyId={}||host={}||port={}||name={}||value={}||errMsg=exception!",
clusterPhyId, host, port, elem.getKey(), elem.getValue(), e
);
}
});
return monitorCmdData;
}
}

View File

@@ -0,0 +1,97 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.fourletterword.parser;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.fourletterword.ServerCmdData;
import com.xiaojukeji.know.streaming.km.common.utils.zookeeper.FourLetterWordUtil;
import lombok.Data;
import java.util.HashMap;
import java.util.Map;
/**
* Zookeeper version: 3.5.9-83df9301aa5c2a5d284a9940177808c01bc35cef, built on 01/06/2021 19:49 GMT
* Latency min/avg/max: 0/0/2209
* Received: 278202469
* Sent: 279449055
* Connections: 31
* Outstanding: 0
* Zxid: 0x20033fc12
* Mode: leader
* Node count: 10084
* Proposal sizes last/min/max: 36/32/31260 (leader only)
*/
@Data
public class ServerCmdDataParser implements FourLetterWordDataParser<ServerCmdData> {
private static final ILog LOGGER = LogFactory.getLog(ServerCmdDataParser.class);
@Override
public String getCmd() {
return FourLetterWordUtil.ServerCmd;
}
@Override
public ServerCmdData parseAndInitData(Long clusterPhyId, String host, int port, String cmdData) {
Map<String, String> dataMap = new HashMap<>();
for (String elem : cmdData.split("\n")) {
if (elem.isEmpty()) {
continue;
}
int idx = elem.indexOf(':');
if (idx >= 0) {
dataMap.put(elem.substring(0, idx), elem.substring(idx + 1).trim());
}
}
ServerCmdData serverCmdData = new ServerCmdData();
dataMap.entrySet().stream().forEach(elem -> {
try {
switch (elem.getKey()) {
case "Zookeeper version":
serverCmdData.setZkVersion(elem.getValue().split("-")[0]);
break;
case "Latency min/avg/max":
String[] data = elem.getValue().split("/");
serverCmdData.setZkMinLatency(Long.valueOf(data[0]));
serverCmdData.setZkAvgLatency(Float.valueOf(data[1]));
serverCmdData.setZkMaxLatency(Long.valueOf(data[2]));
break;
case "Received":
serverCmdData.setZkPacketsReceived(Long.valueOf(elem.getValue()));
break;
case "Sent":
serverCmdData.setZkPacketsSent(Long.valueOf(elem.getValue()));
break;
case "Connections":
serverCmdData.setZkNumAliveConnections(Long.valueOf(elem.getValue()));
break;
case "Outstanding":
serverCmdData.setZkOutstandingRequests(Long.valueOf(elem.getValue()));
break;
case "Mode":
serverCmdData.setZkServerState(elem.getValue());
break;
case "Node count":
serverCmdData.setZkZnodeCount(Long.valueOf(elem.getValue()));
break;
case "Zxid":
serverCmdData.setZkZxid(Long.parseUnsignedLong(elem.getValue().trim().substring(2), 16));
break;
default:
LOGGER.warn(
"class=ServerCmdDataParser||method=parseAndInitData||name={}||value={}||msg=data not parsed!",
elem.getKey(), elem.getValue()
);
}
} catch (Exception e) {
LOGGER.error(
"class=ServerCmdDataParser||method=parseAndInitData||clusterPhyId={}||host={}||port={}||name={}||value={}||errMsg=exception!",
clusterPhyId, host, port, elem.getKey(), elem.getValue(), e
);
}
});
return serverCmdData;
}
}
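
Two `srvr` fields need conversion beyond the plain `key: value` split: the latency triple and the hexadecimal Zxid. A standalone sketch of those two conversions:

```java
public class SrvrFieldSketch {
    // "Latency min/avg/max" value, e.g. "0/0/2209" -> the max component.
    public static long maxLatency(String triple) {
        return Long.parseLong(triple.split("/")[2]);
    }

    // "0x20033fc12" -> decimal zxid; strip the "0x" prefix, then parse base 16.
    public static long parseZxid(String zxid) {
        return Long.parseUnsignedLong(zxid.trim().substring(2), 16);
    }
}
```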

View File

@@ -8,8 +8,6 @@ import org.springframework.context.ApplicationEvent;
*/
@Getter
public class BaseMetricEvent extends ApplicationEvent {
public BaseMetricEvent(Object source) {
super( source );
}

View File

@@ -0,0 +1,20 @@
package com.xiaojukeji.know.streaming.km.common.bean.event.metric;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.ZookeeperMetrics;
import lombok.Getter;
import java.util.List;
/**
* @author didi
*/
@Getter
public class ZookeeperMetricEvent extends BaseMetricEvent {
private List<ZookeeperMetrics> zookeeperMetrics;
public ZookeeperMetricEvent(Object source, List<ZookeeperMetrics> zookeeperMetrics) {
super( source );
this.zookeeperMetrics = zookeeperMetrics;
}
}

View File

@@ -3,7 +3,6 @@ package com.xiaojukeji.know.streaming.km.common.bean.po.group;
import com.baomidou.mybatisplus.annotation.TableName;
import com.xiaojukeji.know.streaming.km.common.bean.po.BasePO;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.enums.group.GroupStateEnum;
import lombok.Data;
import lombok.NoArgsConstructor;
@@ -23,12 +22,19 @@ public class GroupMemberPO extends BasePO {
private Integer memberCount;
public GroupMemberPO(Long clusterPhyId, String topicName, String groupName, Date updateTime) {
public GroupMemberPO(Long clusterPhyId, String topicName, String groupName, String state, Integer memberCount) {
this.clusterPhyId = clusterPhyId;
this.topicName = topicName;
this.groupName = groupName;
this.state = GroupStateEnum.UNKNOWN.getState();
this.memberCount = 0;
this.state = state;
this.memberCount = memberCount;
}
public GroupMemberPO(Long clusterPhyId, String topicName, String groupName, String state, Integer memberCount, Date updateTime) {
this.clusterPhyId = clusterPhyId;
this.topicName = topicName;
this.groupName = groupName;
this.state = state;
this.memberCount = memberCount;
this.updateTime = updateTime;
}
}

View File

@@ -0,0 +1,61 @@
package com.xiaojukeji.know.streaming.km.common.bean.po.group;
import com.baomidou.mybatisplus.annotation.TableName;
import com.xiaojukeji.know.streaming.km.common.bean.po.BasePO;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.enums.group.GroupStateEnum;
import com.xiaojukeji.know.streaming.km.common.enums.group.GroupTypeEnum;
import lombok.Data;
import lombok.NoArgsConstructor;
@Data
@NoArgsConstructor
@TableName(Constant.MYSQL_TABLE_NAME_PREFIX + "group")
public class GroupPO extends BasePO {
/**
* Cluster ID
*/
private Long clusterPhyId;
/**
* Group type
*
* @see GroupTypeEnum
*/
private Integer type;
/**
* Group name
*/
private String name;
/**
* Group state
*
* @see GroupStateEnum
*/
private String state;
/**
* Number of group members
*/
private Integer memberCount;
/**
* List of topics consumed by the group
*/
private String topicMembers;
/**
* Partition assignment strategy
*/
private String partitionAssignor;
/**
* Broker ID of the group coordinator
*/
private int coordinatorId;
}

View File

@@ -0,0 +1,24 @@
package com.xiaojukeji.know.streaming.km.common.bean.po.metrice;
import lombok.Data;
import lombok.NoArgsConstructor;
import static com.xiaojukeji.know.streaming.km.common.utils.CommonUtils.monitorTimestamp2min;
@Data
@NoArgsConstructor
public class ZookeeperMetricPO extends BaseMetricESPO {
public ZookeeperMetricPO(Long clusterPhyId){
super(clusterPhyId);
}
@Override
public String getKey() {
return "ZK@" + clusterPhyId + "@" + monitorTimestamp2min(timestamp);
}
@Override
public String getRoutingValue() {
return String.valueOf(clusterPhyId);
}
}
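
`CommonUtils.monitorTimestamp2min` is not shown in this diff; assuming it floors a millisecond timestamp to the start of its minute, `getKey()` resolves to one ES document per cluster per minute:

```java
public class ZkMetricKeySketch {
    // Assumed behavior of CommonUtils.monitorTimestamp2min: floor a millisecond
    // timestamp to the start of its minute.
    static long monitorTimestamp2min(long timestampMs) {
        return timestampMs / 60_000L * 60_000L;
    }

    // One ES document per cluster per minute: "ZK@{clusterPhyId}@{minuteTimestamp}".
    public static String buildKey(long clusterPhyId, long timestampMs) {
        return "ZK@" + clusterPhyId + "@" + monitorTimestamp2min(timestampMs);
    }
}
```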

View File

@@ -0,0 +1,40 @@
package com.xiaojukeji.know.streaming.km.common.bean.po.zookeeper;
import com.baomidou.mybatisplus.annotation.TableName;
import com.xiaojukeji.know.streaming.km.common.bean.po.BasePO;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import lombok.Data;
@Data
@TableName(Constant.MYSQL_TABLE_NAME_PREFIX + "zookeeper")
public class ZookeeperInfoPO extends BasePO {
/**
* Cluster ID
*/
private Long clusterPhyId;
/**
* Host
*/
private String host;
/**
* Port
*/
private Integer port;
/**
* Role
*/
private String role;
/**
* Version
*/
private String version;
/**
* ZK status
*/
private Integer status;
}

View File

@@ -0,0 +1,32 @@
package com.xiaojukeji.know.streaming.km.common.bean.vo.cluster;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
/**
* @author zengqiao
* @date 22/02/24
*/
@Data
@ApiModel(description = "集群健康状态信息")
public class ClusterPhysHealthStateVO {
@ApiModelProperty(value = "未知", example = "30")
private Integer unknownCount;
@ApiModelProperty(value = "良好", example = "30")
private Integer goodCount;
@ApiModelProperty(value = "中等", example = "30")
private Integer mediumCount;
@ApiModelProperty(value = "较差", example = "30")
private Integer poorCount;
@ApiModelProperty(value = "down", example = "30")
private Integer deadCount;
@ApiModelProperty(value = "总数", example = "150")
private Integer total;
}

View File

@@ -31,6 +31,9 @@ public class ClusterBrokersOverviewVO extends BrokerMetadataVO {
@ApiModelProperty(value = "jmx端口")
private Integer jmxPort;
@ApiModelProperty(value = "jmx连接状态 true:连接成功 false:连接失败")
private Boolean jmxConnected;
@ApiModelProperty(value = "是否存活 true存活 false不存活")
private Boolean alive;
}

View File

@@ -0,0 +1,27 @@
package com.xiaojukeji.know.streaming.km.common.bean.vo.group;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import java.util.List;
/**
* @author wyb
* @date 2022/10/9
*/
@Data
@ApiModel(value = "Group信息")
public class GroupOverviewVO {
@ApiModelProperty(value = "Group名称", example = "group-know-streaming-test")
private String name;
@ApiModelProperty(value = "Group状态", example = "Empty")
private String state;
@ApiModelProperty(value = "group的成员数", example = "12")
private Integer memberCount;
@ApiModelProperty(value = "Topic列表", example = "[topic1,topic2]")
private List<String> topicNameList;
}

View File

@@ -10,7 +10,7 @@ import lombok.Data;
*/
@Data
@ApiModel(value = "GroupTopic信息")
public class GroupTopicOverviewVO extends GroupTopicBasicVO{
public class GroupTopicOverviewVO extends GroupTopicBasicVO {
@ApiModelProperty(value = "最大Lag", example = "12345678")
private Long maxLag;
}

View File

@@ -32,9 +32,6 @@ public class HealthCheckConfigVO {
@ApiModelProperty(value="检查说明", example = "Group延迟")
private String configDesc;
@ApiModelProperty(value="权重", example = "10")
private Float weight;
@ApiModelProperty(value="检查配置", example = "100")
private String value;
}

View File

@@ -18,6 +18,9 @@ public class HealthScoreBaseResultVO extends BaseTimeVO {
@ApiModelProperty(value="检查维度", example = "1")
private Integer dimension;
@ApiModelProperty(value="检查维度名称", example = "cluster")
private String dimensionName;
@ApiModelProperty(value="检查名称", example = "Group延迟")
private String configName;
@@ -27,9 +30,6 @@ public class HealthScoreBaseResultVO extends BaseTimeVO {
@ApiModelProperty(value="检查说明", example = "Group延迟")
private String configDesc;
@ApiModelProperty(value="权重百分比[0-100]", example = "10")
private Integer weightPercent;
@ApiModelProperty(value="得分", example = "100")
private Integer score;

View File

@@ -1,16 +1,12 @@
package com.xiaojukeji.know.streaming.km.common.bean.vo.metrics.line;
import com.xiaojukeji.know.streaming.km.common.bean.vo.metrics.point.MetricPointVO;
import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;
/**
* @author didi
@@ -26,19 +22,4 @@ public class MetricMultiLinesVO {
@ApiModelProperty(value = "指标名称对应的指标线")
private List<MetricLineVO> metricLines;
public List<MetricPointVO> getMetricPoints(String resName) {
if (ValidateUtils.isNull(metricLines)) {
return new ArrayList<>();
}
List<MetricLineVO> voList = metricLines.stream().filter(elem -> elem.getName().equals(resName)).collect(Collectors.toList());
if (ValidateUtils.isEmptyList(voList)) {
return new ArrayList<>();
}
// Only take the points of the first (idx=0) matching line
return voList.get(0).getMetricPoints();
}
}

View File

@@ -0,0 +1,29 @@
package com.xiaojukeji.know.streaming.km.common.bean.vo.zookeeper;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
/**
* @author wyc
* @date 2022/9/23
*/
@Data
@ApiModel(description = "Zookeeper信息概览")
public class ClusterZookeepersOverviewVO {
@ApiModelProperty(value = "主机ip", example = "121.0.0.1")
private String host;
@ApiModelProperty(value = "主机存活状态,1:Live,0:Down", example = "1")
private Integer status;
@ApiModelProperty(value = "端口号", example = "2416")
private Integer port;
@ApiModelProperty(value = "版本", example = "1.1.2")
private String version;
@ApiModelProperty(value = "角色", example = "Leader")
private String role;
}

View File

@@ -0,0 +1,47 @@
package com.xiaojukeji.know.streaming.km.common.bean.vo.zookeeper;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
/**
* @author wyc
* @date 2022/9/23
*/
@Data
@ApiModel(description = "ZK状态信息")
public class ClusterZookeepersStateVO {
@ApiModelProperty(value = "健康检查状态", example = "1")
private Integer healthState;
@ApiModelProperty(value = "健康检查通过数", example = "1")
private Integer healthCheckPassed;
@ApiModelProperty(value = "健康检查总数", example = "1")
private Integer healthCheckTotal;
@ApiModelProperty(value = "ZK的Leader机器", example = "127.0.0.1")
private String leaderNode;
@ApiModelProperty(value = "Watch数", example = "123456")
private Integer watchCount;
@ApiModelProperty(value = "节点存活数", example = "8")
private Integer aliveServerCount;
@ApiModelProperty(value = "总节点数", example = "10")
private Integer totalServerCount;
@ApiModelProperty(value = "Follower角色存活数", example = "8")
private Integer aliveFollowerCount;
@ApiModelProperty(value = "Follower角色总数", example = "10")
private Integer totalFollowerCount;
@ApiModelProperty(value = "Observer角色存活数", example = "3")
private Integer aliveObserverCount;
@ApiModelProperty(value = "Observer角色总数", example = "3")
private Integer totalObserverCount;
}

View File

@@ -0,0 +1,44 @@
package com.xiaojukeji.know.streaming.km.common.bean.vo.zookeeper;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
/**
* @author wyc
* @date 2022/9/23
*/
@Data
public class ZnodeStatVO {
@ApiModelProperty(value = "节点被创建时的事务ID", example = "0x1f09")
private Long czxid;
@ApiModelProperty(value = "创建时间", example = "Sat Mar 16 15:38:34 CST 2019")
private Long ctime;
@ApiModelProperty(value = "节点最后一次被修改时的事务ID", example = "0x1f09")
private Long mzxid;
@ApiModelProperty(value = "最后一次修改时间", example = "Sat Mar 16 15:38:34 CST 2019")
private Long mtime;
@ApiModelProperty(value = "子节点列表最近一次被修改的事务ID", example = "0x31")
private Long pzxid;
@ApiModelProperty(value = "子节点版本号", example = "0")
private Integer cversion;
@ApiModelProperty(value = "数据版本号", example = "0")
private Integer version;
@ApiModelProperty(value = "ACL版本号", example = "0")
private Integer aversion;
@ApiModelProperty(value = "创建临时节点的事务ID,持久节点为0", example = "0")
private Long ephemeralOwner;
@ApiModelProperty(value = "数据长度,每个节点都可保存数据", example = "22")
private Integer dataLength;
@ApiModelProperty(value = "子节点的个数", example = "6")
private Integer numChildren;
}

View File

@@ -0,0 +1,25 @@
package com.xiaojukeji.know.streaming.km.common.bean.vo.zookeeper;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
/**
* @author wyc
* @date 2022/9/23
*/
@Data
public class ZnodeVO {
@ApiModelProperty(value = "节点名称", example = "broker")
private String name;
@ApiModelProperty(value = "节点数据", example = "saassad")
private String data;
@ApiModelProperty(value = "节点属性", example = "")
private ZnodeStatVO stat;
@ApiModelProperty(value = "节点路径", example = "/cluster")
private String namespace;
}

View File

@@ -23,8 +23,8 @@ public class Constant {
public static final Integer YES = 1;
public static final Integer NO = 0;
public static final Integer ALIVE = 1;
public static final Integer DOWN = 0;
public static final Integer ALIVE = 1;
public static final Integer DOWN = 0;
public static final Integer ONE_HUNDRED = 100;
@@ -33,15 +33,12 @@ public class Constant {
public static final Long B_TO_MB = 1024L * 1024L;
public static final Integer DEFAULT_SESSION_TIMEOUT_UNIT_MS = 15000;
public static final Float MIN_HEALTH_SCORE = 10f;
public static final Integer DEFAULT_REQUEST_TIMEOUT_UNIT_MS = 5000;
/**
* Metric-related
*/
public static final Integer DEFAULT_CLUSTER_HEALTH_SCORE = 90;
public static final Integer PER_BATCH_MAX_VALUE = 100;
public static final String DEFAULT_USER_NAME = "know-streaming-app";
@@ -66,4 +63,5 @@ public class Constant {
public static final Integer DEFAULT_RETRY_TIME = 3;
public static final Integer ZK_ALIVE_BUT_4_LETTER_FORBIDDEN = 11;
}

View File

@@ -34,6 +34,8 @@ public class ESConstant {
public static final String TOTAL = "total";
public static final Integer DEFAULT_RETRY_TIME = 3;
private ESConstant() {
}
}

View File

@@ -558,7 +558,7 @@ public class ESIndexConstant {
public final static String REPLICATION_TEMPLATE = "{\n" +
" \"order\" : 10,\n" +
" \"index_patterns\" : [\n" +
" \"ks_kafka_partition_metric*\"\n" +
" \"ks_kafka_replication_metric*\"\n" +
" ],\n" +
" \"settings\" : {\n" +
" \"index\" : {\n" +
@@ -619,12 +619,13 @@ public class ESIndexConstant {
" }\n" +
" },\n" +
" \"aliases\" : { }\n" +
" }[root@10-255-0-23 template]# cat ks_kafka_replication_metric\n" +
"PUT _template/ks_kafka_replication_metric\n" +
"{\n" +
" }";
public final static String ZOOKEEPER_INDEX = "ks_kafka_zookeeper_metric";
public final static String ZOOKEEPER_TEMPLATE = "{\n" +
" \"order\" : 10,\n" +
" \"index_patterns\" : [\n" +
" \"ks_kafka_replication_metric*\"\n" +
" \"ks_kafka_zookeeper_metric*\"\n" +
" ],\n" +
" \"settings\" : {\n" +
" \"index\" : {\n" +
@@ -633,15 +634,76 @@ public class ESIndexConstant {
" },\n" +
" \"mappings\" : {\n" +
" \"properties\" : {\n" +
" \"routingValue\" : {\n" +
" \"type\" : \"text\",\n" +
" \"fields\" : {\n" +
" \"keyword\" : {\n" +
" \"ignore_above\" : 256,\n" +
" \"type\" : \"keyword\"\n" +
" }\n" +
" }\n" +
" },\n" +
" \"clusterPhyId\" : {\n" +
" \"type\" : \"long\"\n" +
" },\n" +
" \"metrics\" : {\n" +
" \"properties\" : {\n" +
" \"AvgRequestLatency\" : {\n" +
" \"type\" : \"double\"\n" +
" },\n" +
" \"MinRequestLatency\" : {\n" +
" \"type\" : \"double\"\n" +
" },\n" +
" \"MaxRequestLatency\" : {\n" +
" \"type\" : \"double\"\n" +
" },\n" +
" \"OutstandingRequests\" : {\n" +
" \"type\" : \"double\"\n" +
" },\n" +
" \"NodeCount\" : {\n" +
" \"type\" : \"double\"\n" +
" },\n" +
" \"WatchCount\" : {\n" +
" \"type\" : \"double\"\n" +
" },\n" +
" \"NumAliveConnections\" : {\n" +
" \"type\" : \"double\"\n" +
" },\n" +
" \"PacketsReceived\" : {\n" +
" \"type\" : \"double\"\n" +
" },\n" +
" \"PacketsSent\" : {\n" +
" \"type\" : \"double\"\n" +
" },\n" +
" \"EphemeralsCount\" : {\n" +
" \"type\" : \"double\"\n" +
" },\n" +
" \"ApproximateDataSize\" : {\n" +
" \"type\" : \"double\"\n" +
" },\n" +
" \"OpenFileDescriptorCount\" : {\n" +
" \"type\" : \"double\"\n" +
" },\n" +
" \"MaxFileDescriptorCount\" : {\n" +
" \"type\" : \"double\"\n" +
" }\n" +
" }\n" +
" },\n" +
" \"key\" : {\n" +
" \"type\" : \"text\",\n" +
" \"fields\" : {\n" +
" \"keyword\" : {\n" +
" \"ignore_above\" : 256,\n" +
" \"type\" : \"keyword\"\n" +
" }\n" +
" }\n" +
" },\n" +
" \"timestamp\" : {\n" +
" \"format\" : \"yyyy-MM-dd HH:mm:ss Z||yyyy-MM-dd HH:mm:ss||yyyy-MM-dd HH:mm:ss.SSS Z||yyyy-MM-dd HH:mm:ss.SSS||yyyy-MM-dd HH:mm:ss,SSS||yyyy/MM/dd HH:mm:ss||yyyy-MM-dd HH:mm:ss,SSS Z||yyyy/MM/dd HH:mm:ss,SSS Z||epoch_millis\",\n" +
" \"index\" : true,\n" +
" \"type\" : \"date\",\n" +
" \"doc_values\" : true\n" +
" \"type\" : \"date\"\n" +
" }\n" +
" }\n" +
" },\n" +
" \"aliases\" : { }\n" +
" }";
}

View File

@@ -18,4 +18,14 @@ public class PaginationConstant {
* Default page size
*/
public static final Integer DEFAULT_PAGE_SIZE = 10;
/**
* Default sort field for the group list
*/
public static final String DEFAULT_GROUP_SORTED_FIELD = "name";
/**
* Default sort field for the group-topic list
*/
public static final String DEFAULT_GROUP_TOPIC_SORTED_FIELD = "topicName";
}

View File

@@ -0,0 +1,62 @@
package com.xiaojukeji.know.streaming.km.common.converter;
import com.xiaojukeji.know.streaming.km.common.bean.entity.group.Group;
import com.xiaojukeji.know.streaming.km.common.bean.entity.group.GroupTopicMember;
import com.xiaojukeji.know.streaming.km.common.bean.po.group.GroupPO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupOverviewVO;
import com.xiaojukeji.know.streaming.km.common.enums.group.GroupStateEnum;
import com.xiaojukeji.know.streaming.km.common.enums.group.GroupTypeEnum;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
import java.util.ArrayList;
import java.util.stream.Collectors;
/**
* @author wyb
* @date 2022/10/10
*/
public class GroupConverter {
private GroupConverter() {
}
public static GroupOverviewVO convert2GroupOverviewVO(Group group) {
GroupOverviewVO vo = ConvertUtil.obj2Obj(group, GroupOverviewVO.class);
vo.setState(group.getState().getState());
vo.setTopicNameList(group.getTopicMembers().stream().map(elem -> elem.getTopicName()).collect(Collectors.toList()));
return vo;
}
public static Group convert2Group(GroupPO po) {
if (po == null) {
return null;
}
Group group = ConvertUtil.obj2Obj(po, Group.class);
if (!ValidateUtils.isBlank(po.getTopicMembers())) {
group.setTopicMembers(ConvertUtil.str2ObjArrayByJson(po.getTopicMembers(), GroupTopicMember.class));
} else {
group.setTopicMembers(new ArrayList<>());
}
group.setType(GroupTypeEnum.getTypeByCode(po.getType()));
group.setState(GroupStateEnum.getByState(po.getState()));
return group;
}
public static GroupPO convert2GroupPO(Group group) {
if (group == null) {
return null;
}
GroupPO po = ConvertUtil.obj2Obj(group, GroupPO.class);
po.setTopicMembers(ConvertUtil.obj2Json(group.getTopicMembers()));
po.setType(group.getType().getCode());
po.setState(group.getState().getState());
return po;
}
}

View File

@@ -15,24 +15,15 @@ public class HealthScoreVOConverter {
private HealthScoreVOConverter() {
}
-public static List<HealthScoreResultDetailVO> convert2HealthScoreResultDetailVOList(List<HealthScoreResult> healthScoreResultList, boolean useGlobalWeight) {
-Float globalWeightSum = 1f;
-if (!healthScoreResultList.isEmpty()) {
-globalWeightSum = healthScoreResultList.get(0).getAllDimensionTotalWeight();
-}
+public static List<HealthScoreResultDetailVO> convert2HealthScoreResultDetailVOList(List<HealthScoreResult> healthScoreResultList) {
List<HealthScoreResultDetailVO> voList = new ArrayList<>();
for (HealthScoreResult healthScoreResult: healthScoreResultList) {
HealthScoreResultDetailVO vo = new HealthScoreResultDetailVO();
vo.setDimension(healthScoreResult.getCheckNameEnum().getDimensionEnum().getDimension());
vo.setDimensionName(healthScoreResult.getCheckNameEnum().getDimensionEnum().getMessage());
vo.setConfigName(healthScoreResult.getCheckNameEnum().getConfigName());
vo.setConfigItem(healthScoreResult.getCheckNameEnum().getConfigItem());
vo.setConfigDesc(healthScoreResult.getCheckNameEnum().getConfigDesc());
-if (useGlobalWeight) {
-vo.setWeightPercent(healthScoreResult.getBaseConfig().getWeight().intValue() * 100 / globalWeightSum.intValue());
-} else {
-vo.setWeightPercent(healthScoreResult.getBaseConfig().getWeight().intValue() * 100 / healthScoreResult.getPresentDimensionTotalWeight().intValue());
-}
+vo.setWeightPercent(healthScoreResult.getBaseConfig().getWeight().intValue() * 100 / healthScoreResult.getPresentDimensionTotalWeight().intValue());
vo.setScore(healthScoreResult.calRawHealthScore());
if (healthScoreResult.getTotalCount() <= 0) {
@@ -57,9 +48,9 @@ public class HealthScoreVOConverter {
for (HealthScoreResult healthScoreResult: healthScoreResultList) {
HealthScoreBaseResultVO vo = new HealthScoreBaseResultVO();
vo.setDimension(healthScoreResult.getCheckNameEnum().getDimensionEnum().getDimension());
vo.setDimensionName(healthScoreResult.getCheckNameEnum().getDimensionEnum().getMessage());
vo.setConfigName(healthScoreResult.getCheckNameEnum().getConfigName());
vo.setConfigDesc(healthScoreResult.getCheckNameEnum().getConfigDesc());
vo.setWeightPercent(healthScoreResult.getBaseConfig().getWeight().intValue() * 100 / healthScoreResult.getPresentDimensionTotalWeight().intValue());
vo.setScore(healthScoreResult.calRawHealthScore());
vo.setPassed(healthScoreResult.getPassedCount().equals(healthScoreResult.getTotalCount()));
vo.setCheckConfig(convert2HealthCheckConfigVO(ConfigGroupEnum.HEALTH.name(), healthScoreResult.getBaseConfig()));
@@ -86,7 +77,6 @@ public class HealthScoreVOConverter {
vo.setConfigName(config.getCheckNameEnum().getConfigName());
vo.setConfigItem(config.getCheckNameEnum().getConfigItem());
vo.setConfigDesc(config.getCheckNameEnum().getConfigDesc());
-vo.setWeight(config.getWeight());
vo.setValue(ConvertUtil.obj2Json(config));
return vo;
}
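The `weightPercent` values above are computed with integer arithmetic, so the order of operations matters. A minimal standalone sketch (the class name is hypothetical, not part of the repository) showing why the `* 100` must happen before the division:

```java
// Sketch of the integer weight-percent arithmetic used in HealthScoreVOConverter:
// multiplying by 100 first keeps the percentage from truncating to 0.
public class WeightPercentSketch {
    static int weightPercent(int weight, int totalWeight) {
        // weight / totalWeight would truncate to 0 for any weight < totalWeight
        return weight * 100 / totalWeight;
    }

    public static void main(String[] args) {
        System.out.println(weightPercent(1, 4)); // prints 25
        System.out.println(weightPercent(1, 3)); // prints 33 (truncated, not rounded)
    }
}
```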

View File

@@ -0,0 +1,23 @@
package com.xiaojukeji.know.streaming.km.common.converter;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
import com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.Znode;
import com.xiaojukeji.know.streaming.km.common.utils.Tuple;
import com.xiaojukeji.know.streaming.km.common.utils.zookeeper.ZookeeperUtils;
import org.apache.zookeeper.data.Stat;
public class ZnodeConverter {
private ZnodeConverter() {
}
public static Znode convert2Znode(ClusterPhy clusterPhy, Tuple<byte[], Stat> dataAndStat, String path) {
Znode znode = new Znode();
znode.setStat(dataAndStat.getV2());
znode.setData(dataAndStat.getV1() == null ? null : new String(dataAndStat.getV1()));
znode.setName(path.substring(path.lastIndexOf('/') + 1));
znode.setNamespace(ZookeeperUtils.getNamespace(clusterPhy.getZookeeper()));
return znode;
}
}
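`convert2Znode` derives the node name with `path.substring(path.lastIndexOf('/') + 1)`. A standalone sketch of that rule (hypothetical class and method names), including the root-path edge case:

```java
// Standalone sketch of the name-extraction rule used in ZnodeConverter.convert2Znode.
public class ZnodeNameSketch {
    // Returns the last path segment; for the root path "/" this yields "".
    static String nameOf(String path) {
        return path.substring(path.lastIndexOf('/') + 1);
    }

    public static void main(String[] args) {
        System.out.println(nameOf("/brokers/ids/0")); // prints "0"
        System.out.println(nameOf("/"));              // prints "" (empty name for the root znode)
    }
}
```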

View File

@@ -0,0 +1,36 @@
package com.xiaojukeji.know.streaming.km.common.enums.group;
import lombok.Getter;
/**
* @author wyb
* @date 2022/10/11
*/
@Getter
public enum GroupTypeEnum {
UNKNOWN(-1, "Unknown"),
CONSUMER(0, "Consumer客户端的消费组"),
CONNECTOR(1, "Connector的消费组");
private final Integer code;
private final String msg;
GroupTypeEnum(Integer code, String msg) {
this.code = code;
this.msg = msg;
}
public static GroupTypeEnum getTypeByCode(Integer code) {
if (code == null) return UNKNOWN;
for (GroupTypeEnum groupTypeEnum : GroupTypeEnum.values()) {
if (groupTypeEnum.code.equals(code)) {
return groupTypeEnum;
}
}
return UNKNOWN;
}
}
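`getTypeByCode` implements a common code-to-enum lookup with an `UNKNOWN` fallback for null or unrecognized codes. A self-contained sketch of the same pattern (hypothetical names, reduced to the lookup logic):

```java
// Minimal sketch of the code-to-enum lookup with UNKNOWN fallback, as in GroupTypeEnum.
public class GroupTypeSketch {
    enum GroupType {
        UNKNOWN(-1), CONSUMER(0), CONNECTOR(1);

        final int code;
        GroupType(int code) { this.code = code; }

        static GroupType byCode(Integer code) {
            if (code == null) {
                return UNKNOWN; // null-safe: no unboxing before the null check
            }
            for (GroupType t : values()) {
                if (t.code == code) {
                    return t;
                }
            }
            return UNKNOWN; // unrecognized codes also fall back
        }
    }

    public static void main(String[] args) {
        System.out.println(GroupType.byCode(1));    // prints CONNECTOR
        System.out.println(GroupType.byCode(null)); // prints UNKNOWN
    }
}
```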

View File

@@ -10,13 +10,15 @@ import lombok.Getter;
public enum HealthCheckDimensionEnum {
UNKNOWN(-1, "未知"),
-CLUSTER(0, "Cluster维度"),
+CLUSTER(0, "Cluster"),
-BROKER(1, "Broker维度"),
+BROKER(1, "Broker"),
-TOPIC(2, "Topic维度"),
+TOPIC(2, "Topic"),
-GROUP(3, "消费组维度"),
+GROUP(3, "Group"),
+ZOOKEEPER(4, "Zookeeper"),
;

View File

@@ -1,6 +1,7 @@
package com.xiaojukeji.know.streaming.km.common.enums.health;
import com.xiaojukeji.know.streaming.km.common.bean.entity.config.healthcheck.BaseClusterHealthConfig;
import com.xiaojukeji.know.streaming.km.common.bean.entity.config.healthcheck.HealthAmountRatioConfig;
import com.xiaojukeji.know.streaming.km.common.bean.entity.config.healthcheck.HealthCompareValueConfig;
import com.xiaojukeji.know.streaming.km.common.bean.entity.config.healthcheck.HealthDetectedInLatestMinutesConfig;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
@@ -19,7 +20,8 @@ public enum HealthCheckNameEnum {
"未知",
Constant.HC_CONFIG_NAME_PREFIX + "UNKNOWN",
"未知",
-BaseClusterHealthConfig.class
+BaseClusterHealthConfig.class,
+false
),
CLUSTER_NO_CONTROLLER(
@@ -27,7 +29,8 @@ public enum HealthCheckNameEnum {
"Controller",
Constant.HC_CONFIG_NAME_PREFIX + "CLUSTER_NO_CONTROLLER",
"集群Controller数正常",
-HealthCompareValueConfig.class
+HealthCompareValueConfig.class,
+true
),
BROKER_REQUEST_QUEUE_FULL(
@@ -35,7 +38,8 @@ public enum HealthCheckNameEnum {
"RequestQueueSize",
Constant.HC_CONFIG_NAME_PREFIX + "BROKER_REQUEST_QUEUE_FULL",
"Broker-RequestQueueSize指标",
-HealthCompareValueConfig.class
+HealthCompareValueConfig.class,
+false
),
BROKER_NETWORK_PROCESSOR_AVG_IDLE_TOO_LOW(
@@ -43,7 +47,8 @@ public enum HealthCheckNameEnum {
"NetworkProcessorAvgIdlePercent",
Constant.HC_CONFIG_NAME_PREFIX + "BROKER_NETWORK_PROCESSOR_AVG_IDLE_TOO_LOW",
"Broker-NetworkProcessorAvgIdlePercent指标",
-HealthCompareValueConfig.class
+HealthCompareValueConfig.class,
+false
),
GROUP_RE_BALANCE_TOO_FREQUENTLY(
@@ -51,7 +56,8 @@ public enum HealthCheckNameEnum {
"Group Re-Balance",
Constant.HC_CONFIG_NAME_PREFIX + "GROUP_RE_BALANCE_TOO_FREQUENTLY",
"Group re-balance频率",
-HealthDetectedInLatestMinutesConfig.class
+HealthDetectedInLatestMinutesConfig.class,
+false
),
TOPIC_NO_LEADER(
@@ -59,7 +65,8 @@ public enum HealthCheckNameEnum {
"NoLeader",
Constant.HC_CONFIG_NAME_PREFIX + "TOPIC_NO_LEADER",
"Topic 无Leader数",
-HealthCompareValueConfig.class
+HealthCompareValueConfig.class,
+false
),
TOPIC_UNDER_REPLICA_TOO_LONG(
@@ -67,9 +74,66 @@ public enum HealthCheckNameEnum {
"UnderReplicaTooLong",
Constant.HC_CONFIG_NAME_PREFIX + "TOPIC_UNDER_REPLICA_TOO_LONG",
"Topic 未同步持续时间",
-HealthDetectedInLatestMinutesConfig.class
+HealthDetectedInLatestMinutesConfig.class,
+false
),
ZK_BRAIN_SPLIT(
HealthCheckDimensionEnum.ZOOKEEPER,
"BrainSplit",
Constant.HC_CONFIG_NAME_PREFIX + "ZK_BRAIN_SPLIT",
"ZK 脑裂",
HealthCompareValueConfig.class,
true
),
ZK_OUTSTANDING_REQUESTS(
HealthCheckDimensionEnum.ZOOKEEPER,
"OutstandingRequests",
Constant.HC_CONFIG_NAME_PREFIX + "ZK_OUTSTANDING_REQUESTS",
"ZK Outstanding 请求堆积数",
HealthAmountRatioConfig.class,
false
),
ZK_WATCH_COUNT(
HealthCheckDimensionEnum.ZOOKEEPER,
"WatchCount",
Constant.HC_CONFIG_NAME_PREFIX + "ZK_WATCH_COUNT",
"ZK WatchCount 数",
HealthAmountRatioConfig.class,
false
),
ZK_ALIVE_CONNECTIONS(
HealthCheckDimensionEnum.ZOOKEEPER,
"AliveConnections",
Constant.HC_CONFIG_NAME_PREFIX + "ZK_ALIVE_CONNECTIONS",
"ZK 连接数",
HealthAmountRatioConfig.class,
false
),
ZK_APPROXIMATE_DATA_SIZE(
HealthCheckDimensionEnum.ZOOKEEPER,
"ApproximateDataSize",
Constant.HC_CONFIG_NAME_PREFIX + "ZK_APPROXIMATE_DATA_SIZE",
"ZK 数据大小(Byte)",
HealthAmountRatioConfig.class,
false
),
ZK_SENT_RATE(
HealthCheckDimensionEnum.ZOOKEEPER,
"SentRate",
Constant.HC_CONFIG_NAME_PREFIX + "ZK_SENT_RATE",
"ZK 发包数",
HealthAmountRatioConfig.class,
false
),
;
/**
@@ -97,12 +161,18 @@ public enum HealthCheckNameEnum {
*/
private final Class configClazz;
-HealthCheckNameEnum(HealthCheckDimensionEnum dimensionEnum, String configItem, String configName, String configDesc, Class configClazz) {
+/**
+ * Whether this check is an availability check
+ */
+private final boolean availableChecker;
+HealthCheckNameEnum(HealthCheckDimensionEnum dimensionEnum, String configItem, String configName, String configDesc, Class configClazz, boolean availableChecker) {
this.dimensionEnum = dimensionEnum;
this.configItem = configItem;
this.configName = configName;
this.configDesc = configDesc;
this.configClazz = configClazz;
this.availableChecker = availableChecker;
}
public static HealthCheckNameEnum getByName(String configName) {

View File

@@ -0,0 +1,31 @@
package com.xiaojukeji.know.streaming.km.common.enums.health;
import lombok.Getter;
/**
* Health state
*/
@Getter
public enum HealthStateEnum {
UNKNOWN(-1, "未知"),
GOOD(0, ""),
MEDIUM(1, ""),
POOR(2, ""),
DEAD(3, "Down"),
;
private final int dimension;
private final String message;
HealthStateEnum(int dimension, String message) {
this.dimension = dimension;
this.message = message;
}
}

View File

@@ -9,7 +9,9 @@ public enum VersionItemTypeEnum {
METRIC_GROUP(102, "group_metric"),
METRIC_BROKER(103, "broker_metric"),
METRIC_PARTITION(104, "partition_metric"),
-METRIC_REPLICATION (105, "replication_metric"),
+METRIC_REPLICATION(105, "replication_metric"),
+METRIC_ZOOKEEPER(110, "zookeeper_metric"),
/**
* Server-side queries

View File

@@ -0,0 +1,22 @@
package com.xiaojukeji.know.streaming.km.common.enums.zookeeper;
import lombok.Getter;
@Getter
public enum ZKRoleEnum {
LEADER("leader"),
FOLLOWER("follower"),
OBSERVER("observer"),
UNKNOWN("unknown"),
;
private final String role;
ZKRoleEnum(String role) {
this.role = role;
}
}

View File

@@ -22,6 +22,12 @@ public class JmxAttribute {
public static final String PERCENTILE_99 = "99thPercentile";
public static final String MAX = "Max";
public static final String MEAN = "Mean";
public static final String MIN = "Min";
public static final String VALUE = "Value";
public static final String CONNECTION_COUNT = "connection-count";

View File

@@ -63,6 +63,12 @@ public class JmxName {
/*********************************************************** cluster ***********************************************************/
public static final String JMX_CLUSTER_PARTITION_UNDER_REPLICATED = "kafka.cluster:type=Partition,name=UnderReplicated";
/*********************************************************** zookeeper ***********************************************************/
public static final String JMX_ZK_REQUEST_LATENCY_MS = "kafka.server:type=ZooKeeperClientMetrics,name=ZooKeeperRequestLatencyMs";
public static final String JMX_ZK_SYNC_CONNECTS_PER_SEC = "kafka.server:type=SessionExpireListener,name=ZooKeeperSyncConnectsPerSec";
public static final String JMX_ZK_DISCONNECTORS_PER_SEC = "kafka.server:type=SessionExpireListener,name=ZooKeeperDisconnectsPerSec";
private JmxName() {
}
}

View File

@@ -389,4 +389,16 @@ public class ConvertUtil {
}
return null;
}
public static Integer float2Integer(Float f) {
if (null == f) {
return null;
}
try {
return f.intValue();
} catch (Exception e) {
// ignore exception
}
return null;
}
}

View File

@@ -2,6 +2,7 @@ package com.xiaojukeji.know.streaming.km.common.utils;
import org.apache.commons.lang.StringUtils;
import java.lang.reflect.Array;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
@@ -56,6 +57,18 @@ public class ValidateUtils {
return false;
}
public static <T> boolean isNotEmpty(T[] array) {
return !isEmpty(array);
}
public static boolean isEmpty(Object[] array) {
return getLength(array) == 0;
}
public static int getLength(Object array) {
return array == null ? 0 : Array.getLength(array);
}
/**
* Whether the string is blank
*/
@@ -65,7 +78,7 @@ public class ValidateUtils {
} else if (isNull(seq1) || isNull(seq2) || seq1.size() != seq2.size()) {
return false;
}
-for (Object elem: seq1) {
+for (Object elem : seq1) {
if (!seq2.contains(elem)) {
return false;
}

View File

@@ -0,0 +1,163 @@
package com.xiaojukeji.know.streaming.km.common.utils.zookeeper;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.ResultStatus;
import com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.fourletterword.parser.FourLetterWordDataParser;
import com.xiaojukeji.know.streaming.km.common.utils.BackoffUtils;
import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
import org.apache.zookeeper.common.ClientX509Util;
import org.apache.zookeeper.common.X509Exception;
import org.apache.zookeeper.common.X509Util;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.net.SocketTimeoutException;
import java.util.HashSet;
import java.util.Set;
public class FourLetterWordUtil {
private static final ILog LOGGER = LogFactory.getLog(FourLetterWordUtil.class);
public static final String MonitorCmd = "mntr";
public static final String ConfigCmd = "conf";
public static final String ServerCmd = "srvr";
private static final Set<String> supportedCommands = new HashSet<>();
public static <T> Result<T> executeFourLetterCmd(Long clusterPhyId,
String host,
int port,
boolean secure,
int timeout,
FourLetterWordDataParser<T> dataParser) {
try {
if (!supportedCommands.contains(dataParser.getCmd())) {
return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, String.format("ZK %s命令暂未进行支持", dataParser.getCmd()));
}
String cmdData = send4LetterWord(host, port, dataParser.getCmd(), secure, timeout);
if (cmdData.contains("not executed because it is not in the whitelist.")) {
return Result.buildFromRSAndMsg(ResultStatus.ZK_FOUR_LETTER_CMD_FORBIDDEN, cmdData);
}
if (ValidateUtils.isBlank(cmdData)) {
return Result.buildFromRSAndMsg(ResultStatus.ZK_OPERATE_FAILED, cmdData);
}
return Result.buildSuc(dataParser.parseAndInitData(clusterPhyId, host, port, cmdData));
} catch (Exception e) {
LOGGER.error(
"class=FourLetterWordUtil||method=executeFourLetterCmd||clusterPhyId={}||host={}||port={}||cmd={}||secure={}||timeout={}||errMsg=exception!",
clusterPhyId, host, port, dataParser.getCmd(), secure, timeout, e
);
return Result.buildFromRSAndMsg(ResultStatus.ZK_OPERATE_FAILED, e.getMessage());
}
}
/**************************************************** private method ****************************************************/
private static String send4LetterWord(
String host,
int port,
String cmd,
boolean secure,
int timeout) throws IOException, X509Exception.SSLContextException {
long startTime = System.currentTimeMillis();
LOGGER.info("connecting to {} {}", host, port);
Socket socket = null;
OutputStream outputStream = null;
BufferedReader bufferedReader = null;
try {
InetSocketAddress hostaddress = host != null
? new InetSocketAddress(host, port)
: new InetSocketAddress(InetAddress.getByName(null), port);
if (secure) {
LOGGER.info("using secure socket");
try (X509Util x509Util = new ClientX509Util()) {
SSLContext sslContext = x509Util.getDefaultSSLContext();
SSLSocketFactory socketFactory = sslContext.getSocketFactory();
SSLSocket sslSock = (SSLSocket) socketFactory.createSocket();
sslSock.connect(hostaddress, timeout);
sslSock.startHandshake();
socket = sslSock;
}
} else {
socket = new Socket();
socket.connect(hostaddress, timeout);
}
socket.setSoTimeout(timeout);
outputStream = socket.getOutputStream();
outputStream.write(cmd.getBytes());
outputStream.flush();
// wait until the InputStream has data available
while (System.currentTimeMillis() - startTime <= timeout && socket.getInputStream().available() <= 0) {
BackoffUtils.backoff(10);
}
bufferedReader = new BufferedReader(new InputStreamReader(socket.getInputStream()));
StringBuilder sb = new StringBuilder();
String line;
while ((line = bufferedReader.readLine()) != null) {
sb.append(line).append("\n");
}
return sb.toString();
} catch (SocketTimeoutException e) {
throw new IOException("Exception while executing four letter word: " + cmd, e);
} finally {
if (outputStream != null) {
try {
outputStream.close();
} catch (IOException e) {
LOGGER.error(
"class=FourLetterWordUtil||method=send4LetterWord||host={}||port={}||cmd={}||secure={}||timeout={}||errMsg=exception!",
host, port, cmd, secure, timeout, e
);
}
}
if (bufferedReader != null) {
try {
bufferedReader.close();
} catch (IOException e) {
LOGGER.error(
"class=FourLetterWordUtil||method=send4LetterWord||host={}||port={}||cmd={}||secure={}||timeout={}||errMsg=exception!",
host, port, cmd, secure, timeout, e
);
}
}
if (socket != null) {
try {
socket.close();
} catch (IOException e) {
LOGGER.error(
"class=FourLetterWordUtil||method=send4LetterWord||host={}||port={}||cmd={}||secure={}||timeout={}||errMsg=exception!",
host, port, cmd, secure, timeout, e
);
}
}
}
}
static {
supportedCommands.add(MonitorCmd);
supportedCommands.add(ConfigCmd);
supportedCommands.add(ServerCmd);
}
}
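`executeFourLetterCmd` hands the raw response text to a `FourLetterWordDataParser`. The parser implementations are not shown here, but `mntr` output is a simple tab-separated key/value listing, so a hypothetical sketch of the parsing step (class and method names are assumptions, not the repository's actual parser) looks like this:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of parsing ZooKeeper "mntr" output (one "key<TAB>value" per line).
// The real logic lives in FourLetterWordDataParser implementations not shown in this diff.
public class MntrParseSketch {
    static Map<String, String> parse(String raw) {
        Map<String, String> kv = new LinkedHashMap<>();
        for (String line : raw.split("\n")) {
            int idx = line.indexOf('\t'); // mntr separates key and value with a tab
            if (idx > 0) {
                kv.put(line.substring(0, idx), line.substring(idx + 1));
            }
        }
        return kv;
    }

    public static void main(String[] args) {
        String raw = "zk_version\t3.6.3\nzk_outstanding_requests\t0\n";
        System.out.println(parse(raw).get("zk_outstanding_requests")); // prints "0"
    }
}
```

Note that `mntr` must be whitelisted on the server (the `4lw.commands.whitelist` setting), which is why `executeFourLetterCmd` checks for the "not in the whitelist" response text.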

View File

@@ -0,0 +1,68 @@
package com.xiaojukeji.know.streaming.km.common.utils.zookeeper;
import com.xiaojukeji.know.streaming.km.common.utils.Tuple;
import org.apache.zookeeper.client.ConnectStringParser;
import org.apache.zookeeper.common.NetUtils;
import java.util.ArrayList;
import java.util.List;
import static org.apache.zookeeper.common.StringUtils.split;
public class ZookeeperUtils {
private static final int DEFAULT_PORT = 2181;
/**
* Parse the ZK connect string
* @see ConnectStringParser
*/
public static List<Tuple<String, Integer>> connectStringParser(String connectString) {
List<Tuple<String, Integer>> ipPortList = new ArrayList<>();
if (connectString == null) {
return ipPortList;
}
// parse out chroot, if any
int off = connectString.indexOf('/');
if (off >= 0) {
connectString = connectString.substring(0, off);
}
List<String> hostsList = split(connectString, ",");
for (String host : hostsList) {
int port = DEFAULT_PORT;
String[] hostAndPort = NetUtils.getIPV6HostAndPort(host);
if (hostAndPort.length != 0) {
host = hostAndPort[0];
if (hostAndPort.length == 2) {
port = Integer.parseInt(hostAndPort[1]);
}
} else {
int pidx = host.lastIndexOf(':');
if (pidx >= 0) {
// otherwise : is at the end of the string, ignore
if (pidx < host.length() - 1) {
port = Integer.parseInt(host.substring(pidx + 1));
}
host = host.substring(0, pidx);
}
}
ipPortList.add(new Tuple<>(host, port));
}
return ipPortList;
}
public static String getNamespace(String zookeeperAddress) {
int index = zookeeperAddress.indexOf('/');
String namespace = "/";
if (index != -1) {
namespace = zookeeperAddress.substring(index);
}
return namespace;
}
}
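`getNamespace` treats everything from the first `/` in the connect string as the chroot/namespace, falling back to `/` when no chroot is given. A self-contained sketch mirroring that logic (the class name is hypothetical):

```java
// Standalone sketch mirroring ZookeeperUtils.getNamespace: everything from the
// first '/' in the connect string is the chroot/namespace, defaulting to "/".
public class NamespaceSketch {
    static String namespaceOf(String zookeeperAddress) {
        int index = zookeeperAddress.indexOf('/');
        return index != -1 ? zookeeperAddress.substring(index) : "/";
    }

    public static void main(String[] args) {
        System.out.println(namespaceOf("zk1:2181,zk2:2181/kafka")); // prints "/kafka"
        System.out.println(namespaceOf("zk1:2181"));                // prints "/"
    }
}
```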

View File

@@ -24,7 +24,7 @@ npm install -g lerna
npm run i
```
-We keep the `package-lock.json` file by default to guard against problems that automatic dependency upgrades might cause. Dependencies are downloaded through the taobao mirror `https://registry.npmmirror.com/` by default.
+We keep the `package-lock.json` file by default to guard against problems that automatic dependency upgrades might cause. Dependencies are downloaded through the taobao mirror `https://registry.npmmirror.com/` by default (to change the registry, see the `package.json` file in the current directory).
## 3. Start the project (optional; for packaging and building, go straight to step 3)

View File

@@ -22,7 +22,7 @@
"prettier": "2.3.2"
},
"scripts": {
-"i": "npm install && lerna bootstrap",
+"i": "npm config set registry https://registry.npmmirror.com/ && npm install && lerna bootstrap",
"clean": "rm -rf node_modules package-lock.json packages/*/node_modules packages/*/package-lock.json",
"start": "lerna run start",
"build": "lerna run build",
