Compare commits


178 Commits
v3.0.0 ... v3.0

Author SHA1 Message Date
EricZeng
508402d8ec Merge pull request #717 from didi/master
Merge the master branch
2022-10-21 15:09:52 +08:00
EricZeng
eb3e573b22 Merge branch 'v3.0' into master 2022-10-21 15:07:31 +08:00
zengqiao
a96853db90 bump version to v3.0.1 2022-10-21 15:02:09 +08:00
zengqiao
c1502152c0 Revert "bump version to 3.1.0"
This reverts commit 7b5c2d80
2022-10-21 14:59:42 +08:00
GraceWalk
afda292796 fix: update the typescript version 2022-10-21 14:47:01 +08:00
GraceWalk
163cab78ae fix: copy & style improvements 2022-10-21 14:47:01 +08:00
GraceWalk
8f4ff36c09 fix: improve Topic partition-expansion name & description display 2022-10-21 14:47:01 +08:00
GraceWalk
47b6b3577a fix: show connection status in the Broker list jmxPort column 2022-10-21 14:47:01 +08:00
GraceWalk
f3eca3b214 fix: rework the ConsumerGroup list & detail pages 2022-10-21 14:47:00 +08:00
GraceWalk
62f7d3f72f fix: chart logic & display improvements 2022-10-21 14:47:00 +08:00
GraceWalk
26e60d8a64 fix: improve the global Message & Notification display 2022-10-21 14:47:00 +08:00
zengqiao
5e7fbcf078 Add the v3.0.1 changelog 2022-10-21 14:46:41 +08:00
zengqiao
3fb35d1fcc Add v3.0.1 upgrade notes 2022-10-21 14:46:41 +08:00
zengqiao
538d54cae0 Remove docs-related files from the installation package 2022-10-21 14:46:41 +08:00
zengqiao
78b02f80ba [Bugfix] Fix the exception thrown by key conflicts when converting the metric version list to a map 2022-10-21 14:46:41 +08:00
zengqiao
f9ec890e1d [Optimize] Add JMX connection status to the cluster Broker list
1. When the page shows no data, a failed JMX connection is one common cause;
2. Showing whether the connection succeeded in the Broker list makes troubleshooting easier;
2022-10-21 14:46:41 +08:00
zengqiao
af1bb2ccbd [Optimize] Remove the Replica metric collection task
1. When a cluster has many replicas, metric collection performance degrades severely;
2. Replica metrics are essentially only needed on real-time fetch, so the Replica collection task is disabled for now and may be re-enabled later based on product needs;
2022-10-21 14:46:41 +08:00
zengqiao
714e9a56a3 [Optimize] Improve ZK metric collection to reduce duplicate collection (#709)
1. Avoid fetching metrics twice when different clusters share the same ZK address;
2. When fetching metrics from a ZK address fails, stop retrying that address in the next cycle;
2022-10-21 14:46:41 +08:00
_haoqi
88d0a60182 [ISSUE #677] Restart causes some info collection to throw a NullPointerException 2022-10-21 14:46:41 +08:00
zengqiao
05c52cd672 [Feature] Show the cluster Group list by Group dimension (#580) 2022-10-21 14:46:41 +08:00
Richard
586b37caa0 fix issue:
* [issue #700] Adjust the prompt and replace the Arrays.asList() with the Collections.singletonList()
2022-10-21 14:46:41 +08:00
dianyang12138
d8aa3d64df fix: fix the ES template error 2022-10-21 14:46:41 +08:00
night.liang
13d8fd55c8 fix ldap bug 2022-10-21 14:46:41 +08:00
zengqiao
4133981048 Add the Kafka-Group table 2022-10-21 14:46:41 +08:00
chenzy
2f0b18b005 Fix incorrect time display by switching from the 12-hour to the 24-hour format 2022-10-21 14:46:41 +08:00
Richard
44134ce0d6 fix issue:
* [issue #662] Fix deadlocks caused by adding data using MySQL's REPLACE method
2022-10-21 14:46:41 +08:00
_haoqi
5f21e5a728 Fix the numeric conversion exception when zk-Latency avg is a decimal 2022-10-21 14:46:41 +08:00
zengqiao
d5079a1b75 Fix the wrong type of the role field in the ZK metadata table 2022-10-21 14:46:41 +08:00
shirenchuang
656dfc2285 update readme 2022-10-21 14:46:41 +08:00
shirenchuang
99be2d704f update readme 2022-10-21 14:46:41 +08:00
Richard
d071e31106 fix issue:
* [issue #666] Fix the type of role phase in ks_km_zookeeper table
2022-10-21 14:46:41 +08:00
shirenchuang
55b34d08dd update readme 2022-10-21 14:46:41 +08:00
赤月
7a29e58453 Update faq.md 2022-10-21 14:46:41 +08:00
shirenchuang
8892b5250e update readme add who's using know streaming 2022-10-21 14:46:41 +08:00
zengqiao
75e53a9617 Fix the missing service-status field in the cluster ZK list response 2022-10-21 14:46:41 +08:00
zengqiao
7294aba59f Include ZK metrics in the returned metric info 2022-10-21 14:46:41 +08:00
zengqiao
a8c779675a Remove unused imports 2022-10-21 14:46:41 +08:00
zengqiao
facae65f61 Optimize the health-check task 2022-10-21 14:46:41 +08:00
zengqiao
0c6475b063 Add ES username/password config options to application.yml 2022-10-21 14:46:41 +08:00
zengqiao
92d6214f4f Report ZK metrics to Prometheus 2022-10-21 14:46:41 +08:00
zengqiao
6ad29b9565 Add a service-liveness statistics method to ZookeeperService 2022-10-21 14:46:41 +08:00
zengqiao
f3b64ca463 Add a float-to-integer conversion method 2022-10-21 14:46:41 +08:00
shirenchuang
9340e07662 update contribuer document 2022-10-21 14:46:41 +08:00
zengqiao
50482c40d5 Fix some metrics missing when fetching TopN Broker metrics 2022-10-21 14:46:41 +08:00
zengqiao
12ebc32cec Add a Broker service-liveness API 2022-10-21 14:46:41 +08:00
zengqiao
215602bb84 Update the contributor list 2022-10-21 14:46:41 +08:00
zengqiao
5355c5c1f3 Fix ZK metric query failures caused by a DSL error 2022-10-21 14:46:41 +08:00
shirenchuang
e13d77c81d Contributor-related docs 2022-10-21 14:46:41 +08:00
shirenchuang
103db39460 Contributor-related docs 2022-10-21 14:46:41 +08:00
shirenchuang
750da7c9d7 Contributor-related docs 2022-10-21 14:46:41 +08:00
shirenchuang
0fea002142 Contributor-related docs 2022-10-21 14:46:41 +08:00
shirenchuang
7163c74cba Contributor-related docs 2022-10-21 14:46:41 +08:00
石臻臻的杂货铺
2fb3aa1c14 Update CONTRIBUTING.md 2022-10-21 14:46:41 +08:00
石臻臻的杂货铺
dc8604ad81 Update CONTRIBUTING.md 2022-10-21 14:46:41 +08:00
石臻臻的杂货铺
9c67afd170 Update CONTRIBUTING.md 2022-10-21 14:46:41 +08:00
shirenchuang
bd48bc6a3d readme 2022-10-21 14:46:41 +08:00
shirenchuang
b75e630bac Issue template 2022-10-21 14:46:41 +08:00
shirenchuang
ebd4e4735d PR template 2022-10-21 14:46:41 +08:00
shirenchuang
b3ad6a71ca Contributor guidelines doc 2022-10-21 14:46:41 +08:00
shirenchuang
91e2189864 issue template 2022-10-21 14:46:41 +08:00
shirenchuang
ddd5d1b892 issue template 2022-10-21 14:46:41 +08:00
shirenchuang
8aa877071c issue template 2022-10-21 14:46:41 +08:00
shirenchuang
efa253fac8 issue template 2022-10-21 14:46:41 +08:00
shirenchuang
3744c0e97d issue template 2022-10-21 14:46:41 +08:00
shirenchuang
d510640e43 issue template 2022-10-21 14:46:41 +08:00
EricZeng
d7986ad8dd Revert to the original code
Revert to the original code
2022-10-21 14:46:41 +08:00
zengqiao
fbc4d4a540 Update the doc for connecting ZK clusters that use Kerberos authentication 2022-10-21 14:46:41 +08:00
zengqiao
bc32c71048 ZK: add a ZK info query API 2022-10-21 14:46:41 +08:00
zengqiao
c4910964db ZK: collect metrics into ES 2022-10-21 14:46:41 +08:00
zengqiao
1bc725bd62 ZK: sync ZK metadata to the DB 2022-10-21 14:46:41 +08:00
zengqiao
34b7c6746b ZK: add default config values 2022-10-21 14:46:41 +08:00
zengqiao
20d5b27bb6 ZK: fetch four-letter-word command info 2022-10-21 14:46:41 +08:00
zengqiao
a4abb4069d Remove dead health-score calculation code 2022-10-21 14:46:41 +08:00
zengqiao
c73cfce780 bump version to 3.1.0 2022-10-21 14:46:41 +08:00
luhe
dfb9b6136b Modify code to support ZK Kerberos authentication, plus config docs 2022-10-21 14:46:41 +08:00
luhe
341bd58d51 Modify code to support ZK Kerberos authentication, plus config docs 2022-10-21 14:46:41 +08:00
luhe
4386181304 Modify code to support ZK Kerberos authentication, plus config docs 2022-10-21 14:46:41 +08:00
luhe
fb21d8135c Modify code to support ZK Kerberos authentication 2022-10-21 14:46:41 +08:00
luhe
b4580277a9 Modify code to support ZK Kerberos authentication 2022-10-21 14:46:41 +08:00
zengqiao
df655a250c Add the v3.0.1 changelog 2022-10-21 14:36:29 +08:00
zengqiao
811fc9b400 Add v3.0.1 upgrade notes 2022-10-21 14:32:57 +08:00
zengqiao
83df02783c Remove docs-related files from the installation package 2022-10-21 14:32:07 +08:00
zengqiao
6a5efce874 [Bugfix] Fix the exception thrown by key conflicts when converting the metric version list to a map 2022-10-21 12:06:22 +08:00
zengqiao
fa0ae5e474 [Optimize] Add JMX connection status to the cluster Broker list
1. When the page shows no data, a failed JMX connection is one common cause;
2. Showing whether the connection succeeded in the Broker list makes troubleshooting easier;
2022-10-21 12:03:19 +08:00
zengqiao
cafd665a2d [Optimize] Remove the Replica metric collection task
1. When a cluster has many replicas, metric collection performance degrades severely;
2. Replica metrics are essentially only needed on real-time fetch, so the Replica collection task is disabled for now and may be re-enabled later based on product needs;
2022-10-21 11:49:58 +08:00
zengqiao
e8f77a456b [Optimize] Improve ZK metric collection to reduce duplicate collection (#709)
1. Avoid fetching metrics twice when different clusters share the same ZK address;
2. When fetching metrics from a ZK address fails, stop retrying that address in the next cycle;
2022-10-21 11:26:07 +08:00
_haoqi
4510c62ebd [ISSUE #677] Restart causes some info collection to throw a NullPointerException 2022-10-20 15:36:32 +08:00
zengqiao
79864955e1 [Feature] Show the cluster Group list by Group dimension (#580) 2022-10-20 13:29:43 +08:00
Richard
ff26a8d46c fix issue:
* [issue #700] Adjust the prompt and replace the Arrays.asList() with the Collections.singletonList()
2022-10-19 15:19:43 +08:00
dianyang12138
cc226d552e fix: fix the ES template error 2022-10-19 11:44:00 +08:00
EricZeng
962f89475b Merge pull request #699 from silent-night-no-trace/dev
[ISSUE #683]  fix ldap bug
2022-10-19 10:23:47 +08:00
night.liang
ec204a1605 fix ldap bug 2022-10-18 20:16:40 +08:00
早晚会起风
58d7623938 Merge pull request #696 from chenzhongyu11/dev
[ISSUE #672] Fix incorrect time display in health-inspection results
2022-10-18 10:41:47 +08:00
EricZeng
8f4ecfcdc0 Merge pull request #691 from didi/dev
Add the Kafka-Group table
2022-10-17 20:30:32 +08:00
zengqiao
ef719cedbc Add the Kafka-Group table 2022-10-17 10:34:21 +08:00
EricZeng
b7856c892b Merge pull request #690 from didi/master
Merge the default branch
2022-10-17 10:30:18 +08:00
EricZeng
7435a78883 Merge pull request #689 from didi/dev
Fix the deadlock when replacing health-check results
2022-10-17 10:26:11 +08:00
chenzy
f49206b316 Fix incorrect time display by switching from the 12-hour to the 24-hour format 2022-10-16 22:57:50 +08:00
EricZeng
7d500a0721 Merge pull request #684 from RichardZhengkay/dev
fix issue: [#662]
2022-10-15 14:39:37 +08:00
EricZeng
98a519f20b Merge pull request #682 from haoqi123/fix_678
[ISSUE #678] zk-Latency avg with multiple decimal places throws an NPE
2022-10-15 14:17:23 +08:00
Richard
39b655bb43 fix issue:
* [issue #662] Fix deadlocks caused by adding data using MySQL's REPLACE method
2022-10-14 14:03:16 +08:00
_haoqi
78d56a49fe Fix the numeric conversion exception when zk-Latency avg is a decimal 2022-10-14 11:53:48 +08:00
EricZeng
d2e9d1fa01 Merge pull request #673 from didi/dev
fix [ISSUE-666] Error in ks_km_zookeeper table role type #666
2022-10-13 18:57:06 +08:00
zengqiao
41ff914dc3 Fix the wrong type of the role field in the ZK metadata table 2022-10-13 18:50:41 +08:00
shirenchuang
3ba447fac2 update readme 2022-10-13 18:49:06 +08:00
shirenchuang
e9cc380a2e update readme 2022-10-13 18:30:13 +08:00
EricZeng
017cac9bbe Merge pull request #670 from RichardZhengkay/dev
fix issue: [#666]
2022-10-13 18:25:15 +08:00
Richard
9ad72694af fix issue:
* [issue #666] Fix the type of role phase in ks_km_zookeeper table
2022-10-13 18:00:43 +08:00
shirenchuang
e8f9821870 Merge remote-tracking branch 'origin/master' 2022-10-13 16:31:03 +08:00
shirenchuang
bb167b9f8d update readme 2022-10-13 15:31:34 +08:00
石臻臻的杂货铺
28fbb5e130 Merge pull request #665 from zwOvO/patch-1
[ISSUE #664] Fix the hyperlink for 'JMX connection failure troubleshooting'
2022-10-13 10:17:29 +08:00
EricZeng
16101e81e8 Merge pull request #661 from didi/dev
Merge the dev branch
2022-10-13 10:16:14 +08:00
赤月
aced504d2a Update faq.md 2022-10-12 22:08:29 +08:00
shirenchuang
abb064d9d1 update readme add who's using know streaming 2022-10-12 19:15:19 +08:00
zengqiao
dc1899a1cd Fix the missing service-status field in the cluster ZK list response 2022-10-12 16:45:47 +08:00
zengqiao
442f34278c Include ZK metrics in the returned metric info 2022-10-12 16:44:07 +08:00
zengqiao
a6dcbcd35b Remove unused imports 2022-10-12 16:43:16 +08:00
zengqiao
2b600e96eb Optimize the health-check task 2022-10-12 16:41:27 +08:00
zengqiao
177bb80f31 Add ES username/password config options to application.yml 2022-10-12 16:36:04 +08:00
zengqiao
63fbe728c4 Report ZK metrics to Prometheus 2022-10-12 11:11:25 +08:00
EricZeng
b33020840b Add a service-liveness statistics method to ZookeeperService (#659) 2022-10-12 11:07:52 +08:00
zengqiao
c5caf7c0d6 Add a service-liveness statistics method to ZookeeperService 2022-10-12 11:02:41 +08:00
EricZeng
0f0473db4c Add a float-to-integer conversion method (#658)
Add a float-to-integer conversion method
2022-10-12 10:09:16 +08:00
zengqiao
beadde3e06 Add a float-to-integer conversion method 2022-10-11 18:46:16 +08:00
EricZeng
a423a20480 Fix some metrics missing when fetching TopN Broker metrics (#657)
Fix some metrics missing when fetching TopN Broker metrics
2022-10-11 18:44:02 +08:00
shirenchuang
79f0a23813 update contribuer document 2022-10-11 17:38:15 +08:00
zengqiao
780fdea2cc Fix some metrics missing when fetching TopN Broker metrics 2022-10-11 16:54:39 +08:00
shirenchuang
1c0fda1adf Merge remote-tracking branch 'origin/master' 2022-10-11 10:39:08 +08:00
EricZeng
9cf13e9b30 Add a Broker service-liveness API (#654)
Add a Broker service-liveness API
2022-10-10 19:56:12 +08:00
zengqiao
87cd058fd8 Add a Broker service-liveness API 2022-10-10 19:54:47 +08:00
EricZeng
81b1ec48c2 Update the contributor list (#653)
Update the contributor list
2022-10-10 19:52:50 +08:00
zengqiao
66dd82f4fd Update the contributor list 2022-10-10 19:49:22 +08:00
EricZeng
ce35b23911 Fix ZK metric query failures caused by a DSL error (#652)
Fix ZK metric query failures caused by a DSL error
2022-10-10 19:27:48 +08:00
zengqiao
e79342acf5 Fix ZK metric query failures caused by a DSL error 2022-10-10 19:19:05 +08:00
EricZeng
3fc9f39d24 Merge pull request #651 from didi/master
Merge the master branch
2022-10-10 19:10:48 +08:00
shirenchuang
0221fb3a4a Contributor-related docs 2022-10-10 18:02:19 +08:00
shirenchuang
f009f8b7ba Contributor-related docs 2022-10-10 17:21:21 +08:00
shirenchuang
b76959431a Contributor-related docs 2022-10-10 16:55:33 +08:00
shirenchuang
975370b593 Contributor-related docs 2022-10-10 15:57:07 +08:00
shirenchuang
7275030971 Contributor-related docs 2022-10-10 15:50:16 +08:00
shirenchuang
99b0be5a95 Merge branch 'master' into docs_only 2022-10-10 15:01:00 +08:00
石臻臻的杂货铺
edd3f95fc4 Update CONTRIBUTING.md 2022-10-10 14:22:24 +08:00
石臻臻的杂货铺
479f983b09 Update CONTRIBUTING.md 2022-10-10 13:58:35 +08:00
石臻臻的杂货铺
7650332252 Update CONTRIBUTING.md 2022-10-10 13:50:55 +08:00
shirenchuang
8f1a021851 readme 2022-10-10 13:46:14 +08:00
shirenchuang
ce4df4d5fd Merge remote-tracking branch 'origin/master' 2022-10-10 13:00:28 +08:00
shirenchuang
bd43ae1b5d Issue template 2022-10-10 12:57:53 +08:00
石臻臻的杂货铺
8fa34116b9 Merge pull request #648 from didi/docs_only
PR template
2022-10-10 12:39:38 +08:00
shirenchuang
7e92553017 PR template 2022-10-10 11:42:04 +08:00
shirenchuang
b7e243a693 Merge remote-tracking branch 'origin/master' 2022-10-09 17:23:16 +08:00
shirenchuang
35d4888afb Contributor guidelines doc 2022-10-09 17:03:46 +08:00
EricZeng
b3e8a4f0f6 Merge pull request #647 from didi/dev
Merge the DEV branch
2022-10-09 16:54:45 +08:00
shirenchuang
321125caee issue template 2022-10-09 15:47:13 +08:00
shirenchuang
e01427aa4f issue template 2022-10-09 15:42:40 +08:00
shirenchuang
14652e7f7a issue template 2022-10-09 15:39:20 +08:00
shirenchuang
7c05899dbd issue template 2022-10-09 15:26:57 +08:00
shirenchuang
56726b703f issue template 2022-10-09 13:56:44 +08:00
shirenchuang
6237b0182f issue template 2022-10-09 12:27:27 +08:00
EricZeng
be5b662f65 Merge pull request #645 from didi/dev_feature_zk_kerberos
How to modify the code to support ZK Kerberos authentication
2022-10-09 10:39:26 +08:00
EricZeng
224698355c Revert to the original code
Revert to the original code
2022-10-09 10:38:36 +08:00
EricZeng
8f47138ecd Merge pull request #643 from didi/dev_3.1
Monitor Kafka's ZK
2022-10-08 17:22:03 +08:00
zengqiao
d159746391 Update the doc for connecting ZK clusters that use Kerberos authentication 2022-10-08 17:00:08 +08:00
EricZeng
63df93ea5e Merge pull request #608 from luhea/dev_feature_zk_kerberos
Add zk supported kerberos
2022-10-08 16:11:37 +08:00
EricZeng
38948c0daa Merge pull request #644 from didi/master
Merge the master branch
2022-10-08 16:09:40 +08:00
zengqiao
6c610427b6 ZK: add a ZK info query API 2022-10-08 15:46:18 +08:00
zengqiao
b4cc31c459 ZK: collect metrics into ES 2022-10-08 15:31:59 +08:00
zengqiao
7d781712c9 ZK: sync ZK metadata to the DB 2022-10-08 15:19:09 +08:00
zengqiao
dd61ce9b2a ZK: add default config values 2022-10-08 14:58:28 +08:00
zengqiao
69a7212986 ZK: fetch four-letter-word command info 2022-10-08 14:52:17 +08:00
EricZeng
ff05a951fd Merge pull request #642 from didi/master
Merge the master branch
2022-10-08 14:42:37 +08:00
EricZeng
89d5357b40 Merge pull request #641 from didi/dev
Remove dead health-score calculation code
2022-10-08 14:41:27 +08:00
zengqiao
7ca3d65c42 Remove dead health-score calculation code 2022-10-08 14:15:20 +08:00
zengqiao
7b5c2d800f bump version to 3.1.0 2022-09-29 15:13:41 +08:00
luhe
c8806dbb4d Modify code to support ZK Kerberos authentication, plus config docs 2022-09-21 16:09:04 +08:00
luhe
e5802c7f50 Modify code to support ZK Kerberos authentication, plus config docs 2022-09-21 16:02:38 +08:00
luhe
590f684d66 Modify code to support ZK Kerberos authentication, plus config docs 2022-09-21 15:59:31 +08:00
luhe
8e5a67f565 Modify code to support ZK Kerberos authentication 2022-09-21 15:58:59 +08:00
luhe
8d2fbce11e Modify code to support ZK Kerberos authentication 2022-09-21 15:54:30 +08:00
223 changed files with 7946 additions and 1483 deletions

.github/ISSUE_TEMPLATE/bug_report.md (new file)

@@ -0,0 +1,51 @@
---
name: 报告Bug
about: 报告KnowStreaming的相关Bug
title: ''
labels: bug
assignees: ''
---
- [ ] 我已经在 [issues](https://github.com/didi/KnowStreaming/issues) 搜索过相关问题了,并没有重复的。
你是否希望来认领这个Bug。
「 Y / N 」
### 环境信息
* KnowStreaming version : <font size=4 color =red> xxx </font>
* Operating System version : <font size=4 color =red> xxx </font>
* Java version : <font size=4 color =red> xxx </font>
### 重现该问题的步骤
1. xxx
2. xxx
3. xxx
### 预期结果
<!-- 写下应该出现的预期结果?-->
### 实际结果
<!-- 实际发生了什么? -->
---
如果有异常请附上异常Trace:
```
Just put your stack trace here!
```

.github/ISSUE_TEMPLATE/config.yml (new file)

@@ -0,0 +1,8 @@
blank_issues_enabled: true
contact_links:
- name: 讨论问题
url: https://github.com/didi/KnowStreaming/discussions/new
about: 发起问题、讨论 等等
- name: KnowStreaming官网
url: https://knowstreaming.com/
about: KnowStreaming website


@@ -0,0 +1,26 @@
---
name: 优化建议
about: 相关功能优化建议
title: ''
labels: Optimization Suggestions
assignees: ''
---
- [ ] 我已经在 [issues](https://github.com/didi/KnowStreaming/issues) 搜索过相关问题了,并没有重复的。
你是否希望来认领这个优化建议。
「 Y / N 」
### 环境信息
* KnowStreaming version : <font size=4 color =red> xxx </font>
* Operating System version : <font size=4 color =red> xxx </font>
* Java version : <font size=4 color =red> xxx </font>
### 需要优化的功能点
### 建议如何优化


@@ -0,0 +1,20 @@
---
name: 提议新功能/需求
about: 给KnowStreaming提一个功能需求
title: ''
labels: feature
assignees: ''
---
- [ ] 我在 [issues](https://github.com/didi/KnowStreaming/issues) 中并未搜索到与此相关的功能需求。
- [ ] 我在 [release note](https://github.com/didi/KnowStreaming/releases) 已经发布的版本中并没有搜到相关功能.
你是否希望来认领这个Feature。
「 Y / N 」
## 这里描述需求
<!--请尽可能的描述清楚您的需求 -->

.github/ISSUE_TEMPLATE/question.md (new file)

@@ -0,0 +1,12 @@
---
name: 提个问题
about: 问KnowStreaming相关问题
title: ''
labels: question
assignees: ''
---
- [ ] 我已经在 [issues](https://github.com/didi/KnowStreaming/issues) 搜索过相关问题了,并没有重复的。
## 在这里提出你的问题

.github/PULL_REQUEST_TEMPLATE.md (new file)

@@ -0,0 +1,22 @@
请不要在没有先创建Issue的情况下创建Pull Request。
## 变更的目的是什么
XXXXX
## 简短的更新日志
XX
## 验证这一变化
XXXX
请遵循此清单,以帮助我们快速轻松地整合您的贡献:
* [ ] 确保有针对更改提交的 Github issue通常在您开始处理之前。诸如拼写错误之类的琐碎更改不需要 Github issue。您的Pull Request应该只解决这个问题而不需要进行其他更改—— 一个 PR 解决一个问题。
* [ ] 格式化 Pull Request 标题,如[ISSUE #123] support Confluent Schema Registry。 Pull Request 中的每个提交都应该有一个有意义的主题行和正文。
* [ ] 编写足够详细的Pull Request描述以了解Pull Request的作用、方式和原因。
* [ ] 编写必要的单元测试来验证您的逻辑更正。如果提交了新功能或重大更改请记住在test 模块中添加 integration-test
* [ ] 确保编译通过,集成测试通过

CODE_OF_CONDUCT.md (new file)

@@ -0,0 +1,74 @@
# Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, gender identity and expression, level of experience,
education, socio-economic status, nationality, personal appearance, race,
religion, or sexual identity and orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment
include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at shirenchuang@didiglobal.com . All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
[homepage]: https://www.contributor-covenant.org

CONTRIBUTING.md

@@ -1,28 +1,150 @@
# Contribution Guideline
Thanks for considering to contribute this project. All issues and pull requests are highly appreciated.
## Pull Requests
Before sending pull request to this project, please read and follow guidelines below.
1. Branch: We only accept pull request on `dev` branch.
2. Coding style: Follow the coding style used in LogiKM.
3. Commit message: Use English and be aware of your spell.
4. Test: Make sure to test your code.
Add device mode, API version, related log, screenshots and other related information in your pull request if possible.
NOTE: We assume all your contribution can be licensed under the [AGPL-3.0](LICENSE).
## Issues
We love clearly described issues. :)
Following information can help us to resolve the issue faster.
* Device mode and hardware information.
* API version.
* Logs.
* Screenshots.
* Steps to reproduce the issue.
# 为KnowStreaming做贡献
欢迎👏🏻来到KnowStreaming本文档是关于如何为KnowStreaming做出贡献的指南。
如果您发现不正确或遗漏的内容, 请留下意见/建议。
## 行为守则
请务必阅读并遵守我们的 [行为准则](./CODE_OF_CONDUCT.md).
## 贡献
**KnowStreaming** 欢迎任何角色的新参与者,包括 **User** 、**Contributor**、**Committer**、**PMC** 。
我们鼓励新人积极加入 **KnowStreaming** 项目从User到Contributor、Committer ,甚至是 PMC 角色。
为了做到这一点,新人需要积极地为 **KnowStreaming** 项目做出贡献。以下介绍如何对 **KnowStreaming** 进行贡献。
### 创建/打开 Issue
如果您在文档中发现拼写错误、在代码中**发现错误**或想要**新功能**或想要**提供建议**,您可以在 GitHub 上[创建一个Issue](https://github.com/didi/KnowStreaming/issues/new/choose) 进行报告。
如果您想直接贡献, 您可以选择下面标签的问题。
- [contribution welcome](https://github.com/didi/KnowStreaming/labels/contribution%20welcome) : 非常需要解决/新增 的Issues
- [good first issue](https://github.com/didi/KnowStreaming/labels/good%20first%20issue): 对新人比较友好, 新人可以拿这个Issue来练练手热热身。
<font color=red ><b> 请注意,任何 PR 都必须与有效issue相关联。否则PR 将被拒绝。</b></font>
### 开始你的贡献
**分支介绍**
我们将 `dev`分支作为开发分支, 说明这是一个不稳定的分支。
此外,我们的分支模型符合 [https://nvie.com/posts/a-successful-git-branching-model/](https://nvie.com/posts/a-successful-git-branching-model/). 我们强烈建议新人在创建PR之前先阅读上述文章。
**贡献流程**
为方便描述,我们这里定义一下2个名词
自己Fork出来的仓库是私人仓库, 我们这里称之为 **分叉仓库**
Fork的源项目,我们称之为:**源仓库**
现在如果您准备好创建PR, 以下是贡献者的工作流程:
1. Fork [KnowStreaming](https://github.com/didi/KnowStreaming) 项目到自己的仓库
2. 从源仓库的`dev`拉取并创建自己的本地分支,例如: `dev`
3. 在本地分支上对代码进行修改
4. Rebase 开发分支, 并解决冲突
5. commit 并 push 您的更改到您自己的**分叉仓库**
6. 创建一个 Pull Request 到**源仓库**的`dev`分支中。
7. 等待回复。如果回复的慢,请无情的催促。
更为详细的贡献流程请看:[贡献流程](./docs/contributer_guide/贡献流程.md)
创建Pull Request时
1. 请遵循 PR的 [模板](./.github/PULL_REQUEST_TEMPLATE.md)
2. 请确保 PR 有相应的issue。
3. 如果您的 PR 包含较大的更改,例如组件重构或新组件,请编写有关其设计和使用的详细文档(在对应的issue中)。
4. 注意单个 PR 不能太大。如果需要进行大量更改,最好将更改分成几个单独的 PR。
5. 在合并PR之前尽量的将最终的提交信息清晰简洁, 将多次修改的提交尽可能的合并为一次提交。
6. 创建 PR 后将为PR分配一个或多个reviewers。
<font color=red><b>如果您的 PR 包含较大的更改,例如组件重构或新组件,请编写有关其设计和使用的详细文档。</b></font>
# 代码审查指南
Commiter将轮流review代码以确保在合并前至少有一名Commiter
一些原则:
- 可读性——重要的代码应该有详细的文档。API 应该有 Javadoc。代码风格应与现有风格保持一致。
- 优雅:新的函数、类或组件应该设计得很好。
- 可测试性——单元测试用例应该覆盖 80% 的新代码。
- 可维护性 - 遵守我们的编码规范。
# 开发者
## 成为Contributor
只要成功提交并合并PR , 则为Contributor
贡献者名单请看:[贡献者名单](./docs/contributer_guide/开发者名单.md)
## 尝试成为Commiter
一般来说, 贡献8个重要的补丁并至少让三个不同的人来Review他们(您需要3个Commiter的支持)。
然后请人给你提名, 您需要展示您的
1. 至少8个重要的PR和项目的相关问题
2. 与团队合作的能力
3. 了解项目的代码库和编码风格
4. 编写好代码的能力
当前的Commiter可以通过在KnowStreaming中的Issue标签 `nomination`(提名)来提名您
1. 你的名字和姓氏
2. 指向您的Git个人资料的链接
3. 解释为什么你应该成为Commiter
4. 详细说明提名人与您合作的3个PR以及相关问题,这些问题可以证明您的能力。
另外2个Commiter需要支持您的**提名**如果5个工作日内没有人反对您就是提交者,如果有人反对或者想要更多的信息Commiter会讨论并通常达成共识(5个工作日内) 。
# 开源奖励计划
我们非常欢迎开发者们为KnowStreaming开源项目贡献一份力量相应也将给予贡献者激励以表认可与感谢。
## 参与贡献
1. 积极参与 Issue 的讨论如答疑解惑、提供想法或报告无法解决的错误Issue
2. 撰写和改进项目的文档Wiki
3. 提交补丁优化代码Coding
## 你将获得
1. 加入KnowStreaming开源项目贡献者名单并展示
2. KnowStreaming开源贡献者证书(纸质&电子版)
3. KnowStreaming贡献者精美大礼包(KnowStreamin/滴滴 周边)
## 相关规则
- Contributer和Commiter都会有对应的证书和对应的礼包
- 每季度有KnowStreaming项目团队评选出杰出贡献者,颁发相应证书。
- 年末进行年度评选
贡献者名单请看:[贡献者名单](./docs/contributer_guide/开发者名单.md)

README.md

@@ -45,7 +45,14 @@
## `Know Streaming` 简介
`Know Streaming`是一套云原生的Kafka管控平台脱胎于众多互联网内部多年的Kafka运营实践经验专注于Kafka运维管控、监控告警、资源治理、多活容灾等核心场景。在用户体验、监控、运维管控上进行了平台化、可视化、智能化的建设提供一系列特色的功能极大地方便了用户和运维人员的日常使用让普通运维人员都能成为Kafka专家。整体具有以下特点:
`Know Streaming`是一套云原生的Kafka管控平台脱胎于众多互联网内部多年的Kafka运营实践经验专注于Kafka运维管控、监控告警、资源治理、多活容灾等核心场景。在用户体验、监控、运维管控上进行了平台化、可视化、智能化的建设提供一系列特色的功能极大地方便了用户和运维人员的日常使用让普通运维人员都能成为Kafka专家。
我们现在正在收集 Know Streaming 用户信息,以帮助我们进一步改进 Know Streaming。
请在 [issue#663](https://github.com/didi/KnowStreaming/issues/663) 上提供您的使用信息来支持我们:[谁在使用 Know Streaming](https://github.com/didi/KnowStreaming/issues/663)
整体具有以下特点:
- 👀 &nbsp;**零侵入、全覆盖**
- 无需侵入改造 `Apache Kafka` ,一键便能纳管 `0.10.x` ~ `3.x.x` 众多版本的Kafka包括 `ZK``Raft` 运行模式的版本,同时在兼容架构上具备良好的扩展性,帮助您提升集群管理水平;
@@ -99,9 +106,13 @@
## 成为社区贡献者
点击 [这里](CONTRIBUTING.md)了解如何成为 Know Streaming 的贡献者
1. [贡献源码](https://doc.knowstreaming.com/product/10-contribution) 了解如何成为 Know Streaming 的贡献者
2. [具体贡献流程](https://doc.knowstreaming.com/product/10-contribution#102-贡献流程)
3. [开源激励计划](https://doc.knowstreaming.com/product/10-contribution#105-开源激励计划)
4. [贡献者名单](https://doc.knowstreaming.com/product/10-contribution#106-贡献者名单)
获取KnowStreaming开源社区证书。
## 加入技术交流群
@@ -134,6 +145,11 @@ PS: 提问请尽量把问题一次性描述清楚,并告知环境信息情况
微信加群:添加`mike_zhangliang``PenceXie`的微信号备注KnowStreaming加群。
<br/>
加群之前有劳点一下 star一个小小的 star 是对KnowStreaming作者们努力建设社区的动力。
感谢感谢!!!
<img width="116" alt="wx" src="https://user-images.githubusercontent.com/71620349/192257217-c4ebc16c-3ad9-485d-a914-5911d3a4f46b.png"> <img width="116" alt="wx" src="https://user-images.githubusercontent.com/71620349/192257217-c4ebc16c-3ad9-485d-a914-5911d3a4f46b.png">
## Star History ## Star History


@@ -1,4 +1,36 @@
## v3.0.1
**Bug修复**
- 修复重置 Group Offset 时,提示信息中缺少 Dead 状态也可进行重置的信息;
- 修复 Ldap 某个属性不存在时,会直接抛出空指针导致登陆失败的问题;
- 修复集群 Topic 列表页,健康分详情信息中,检查时间展示错误的问题;
- 修复更新健康检查结果时,出现死锁的问题;
- 修复 Replica 索引模版错误的问题;
- 修复 FAQ 文档中的错误链接;
- 修复 Broker 的 TopN 指标不存在时,页面数据不展示的问题;
- 修复 Group 详情页,图表时间范围选择不生效的问题;
**体验优化**
- 集群 Group 列表按照 Group 维度进行展示;
- 优化避免因 ES 中该指标不存在,导致日志中出现大量空指针的问题;
- 优化全局 Message & Notification 展示效果;
- 优化 Topic 扩分区名称 & 描述展示;
**新增**
- Broker 列表页面,新增 JMX 是否成功连接的信息;
**ZK 部分(未完全发布)**
- 后端补充 Kafka ZK 指标采集Kafka ZK 信息获取相关功能;
- 增加本地缓存,避免同一采集周期内 ZK 指标重复采集;
- 增加 ZK 节点采集失败跳过策略,避免不断对存在问题的节点不断尝试;
- 修复 zkAvgLatency 指标转 Long 时抛出异常问题;
- 修复 ks_km_zookeeper 表中role 字段类型错误问题;
---
## v3.0.0
@@ -25,7 +57,7 @@
- 集群信息中,新增 Kafka 集群运行模式字段
- 新增 docker-compose 的部署方式
---
## v3.0.0-beta.3


@@ -439,7 +439,7 @@ curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: appl
curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: application/json' http://${esaddr}:${port}/_template/ks_kafka_replication_metric -d '{
"order" : 10,
"index_patterns" : [
"ks_kafka_partition_metric*"
"ks_kafka_replication_metric*"
],
"settings" : {
"index" : {
@@ -500,30 +500,7 @@ curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: appl
}
},
"aliases" : { }
}[root@10-255-0-23 template]# cat ks_kafka_replication_metric
}'
PUT _template/ks_kafka_replication_metric
{
"order" : 10,
"index_patterns" : [
"ks_kafka_replication_metric*"
],
"settings" : {
"index" : {
"number_of_shards" : "10"
}
},
"mappings" : {
"properties" : {
"timestamp" : {
"format" : "yyyy-MM-dd HH:mm:ss Z||yyyy-MM-dd HH:mm:ss||yyyy-MM-dd HH:mm:ss.SSS Z||yyyy-MM-dd HH:mm:ss.SSS||yyyy-MM-dd HH:mm:ss,SSS||yyyy/MM/dd HH:mm:ss||yyyy-MM-dd HH:mm:ss,SSS Z||yyyy/MM/dd HH:mm:ss,SSS Z||epoch_millis",
"index" : true,
"type" : "date",
"doc_values" : true
}
}
},
"aliases" : { }
}'
curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: application/json' http://${esaddr}:${port}/_template/ks_kafka_topic_metric -d '{
"order" : 10,
@@ -640,7 +617,92 @@ curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: appl
}
},
"aliases" : { }
}'
curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: application/json' http://${SERVER_ES_ADDRESS}/_template/ks_kafka_zookeeper_metric -d '{
"order" : 10,
"index_patterns" : [
"ks_kafka_zookeeper_metric*"
],
"settings" : {
"index" : {
"number_of_shards" : "10"
}
},
"mappings" : {
"properties" : {
"routingValue" : {
"type" : "text",
"fields" : {
"keyword" : {
"ignore_above" : 256,
"type" : "keyword"
}
}
},
"clusterPhyId" : {
"type" : "long"
},
"metrics" : {
"properties" : {
"AvgRequestLatency" : {
"type" : "double"
},
"MinRequestLatency" : {
"type" : "double"
},
"MaxRequestLatency" : {
"type" : "double"
},
"OutstandingRequests" : {
"type" : "double"
},
"NodeCount" : {
"type" : "double"
},
"WatchCount" : {
"type" : "double"
},
"NumAliveConnections" : {
"type" : "double"
},
"PacketsReceived" : {
"type" : "double"
},
"PacketsSent" : {
"type" : "double"
},
"EphemeralsCount" : {
"type" : "double"
},
"ApproximateDataSize" : {
"type" : "double"
},
"OpenFileDescriptorCount" : {
"type" : "double"
},
"MaxFileDescriptorCount" : {
"type" : "double"
}
}
},
"key" : {
"type" : "text",
"fields" : {
"keyword" : {
"ignore_above" : 256,
"type" : "keyword"
}
}
},
"timestamp" : {
"format" : "yyyy-MM-dd HH:mm:ss Z||yyyy-MM-dd HH:mm:ss||yyyy-MM-dd HH:mm:ss.SSS Z||yyyy-MM-dd HH:mm:ss.SSS||yyyy-MM-dd HH:mm:ss,SSS||yyyy/MM/dd HH:mm:ss||yyyy-MM-dd HH:mm:ss,SSS Z||yyyy/MM/dd HH:mm:ss,SSS Z||epoch_millis",
"type" : "date"
}
}
},
"aliases" : { }
}'
for i in {0..6};
do
@@ -650,6 +712,7 @@ do
curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_group_metric${logdate} && \
curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_partition_metric${logdate} && \
curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_replication_metric${logdate} && \
curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_zookeeper_metric${logdate} && \
curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_topic_metric${logdate} || \
exit 2
done


@@ -0,0 +1 @@
TODO.


@@ -0,0 +1,6 @@
开源贡献者证书发放名单(定期更新)
贡献者名单请看:[贡献者名单](https://doc.knowstreaming.com/product/10-contribution#106-贡献者名单)


@@ -0,0 +1,6 @@
<br>
<br>
请点击:[贡献流程](https://doc.knowstreaming.com/product/10-contribution#102-贡献流程)

Binary file not shown (new image, 63 KiB).
Binary file not shown (new image, 306 KiB).
Binary file not shown (new image, 306 KiB).
Binary file not shown (new image, 17 KiB).

@@ -0,0 +1,69 @@
## 支持Kerberos认证的ZK
### 1、修改 KnowStreaming 代码
代码位置:`src/main/java/com/xiaojukeji/know/streaming/km/persistence/kafka/KafkaAdminZKClient.java`
`createZKClient``135行 的 false 改为 true
![need_modify_code.png](assets/support_kerberos_zk/need_modify_code.png)
修改完后重新进行打包编译,打包编译见:[打包编译](https://github.com/didi/KnowStreaming/blob/master/docs/install_guide/%E6%BA%90%E7%A0%81%E7%BC%96%E8%AF%91%E6%89%93%E5%8C%85%E6%89%8B%E5%86%8C.md
)
### 2、查看用户在ZK的ACL
假设我们使用的用户是 `kafka` 这个用户。
- 1、查看 server.properties 的配置的 zookeeper.connect 的地址;
- 2、使用 `zkCli.sh -serve zookeeper.connect的地址` 登录到ZK页面
- 3、ZK页面上执行命令 `getAcl /kafka` 查看 `kafka` 用户的权限;
此时,我们可以看到如下信息:
![watch_user_acl.png](assets/support_kerberos_zk/watch_user_acl.png)
`kafka` 用户需要的权限是 `cdrwa`。如果用户没有 `cdrwa` 权限的话,需要创建用户并授权,授权命令为:`setAcl`
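For illustration, here is a hedged sketch of checking and granting that ACL with the ZooKeeper CLI; the connect address is a placeholder, and the `sasl` scheme with the `kafka` principal is an assumption that must match your own Kerberos setup:
```bash
# Connect to ZooKeeper (placeholder address).
zkCli.sh -server zk1.example.com:2181

# Inside the zkCli shell: inspect the ACL on the Kafka root znode.
getAcl /kafka

# If the kafka principal lacks cdrwa, grant it (assumes the sasl ACL scheme).
setAcl /kafka sasl:kafka:cdrwa
```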
### 3、创建Kerberos的keytab并修改 KnowStreaming 主机
- 1、在 Kerberos 的域中创建 `kafka/_HOST` 的 `keytab`,并导出。例如:`kafka/dbs-kafka-test-8-53`
- 2、导出 keytab 后上传到安装 KS 的机器的 `/etc/keytab` 下;
- 3、在 KS 机器上,执行 `kinit -kt zookeepe.keytab kafka/dbs-kafka-test-8-53` 看是否能进行 `Kerberos` 登录;
- 4、可以登录后配置 `/opt/zookeeper.jaas` 文件,例子如下:
```sql
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=false
serviceName="zookeeper"
keyTab="/etc/keytab/zookeeper.keytab"
principal="kafka/dbs-kafka-test-8-53@XXX.XXX.XXX";
};
```
- 5、需要配置 `KDC-Server` 对 `KnowStreaming` 的机器开通防火墙并在KS的机器 `/etc/host/` 配置 `kdc-server` 的 `hostname`。并将 `krb5.conf` 导入到 `/etc` 下;
### 4、修改 KnowStreaming 的配置
- 1、在 `/usr/local/KnowStreaming/KnowStreaming/bin/startup.sh` 中的47行的JAVA_OPT中追加如下设置
```bash
-Dsun.security.krb5.debug=true -Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/zookeeper.jaas
```
- 2、重启KS集群后再 start.out 中看到如下信息则证明Kerberos配置成功
![success_1.png](assets/support_kerberos_zk/success_1.png)
![success_2.png](assets/support_kerberos_zk/success_2.png)
### 5、补充说明
- 1、多Kafka集群如果用的是一样的Kerberos域的话只需在每个`ZK`中给`kafka`用户配置`crdwa`权限即可,这样集群初始化的时候`zkclient`是都可以认证;
- 2、当前需要修改代码重新打包才可以支持后续考虑通过页面支持Kerberos认证的ZK接入
- 3、多个Kerberos域暂时未适配


@@ -4,13 +4,148 @@
- 如果想升级至具体版本,需要将你当前版本至你期望使用版本的变更统统执行一遍,然后才能正常使用。
- 如果中间某个版本没有升级信息,则表示该版本直接替换安装包即可从前一个版本升级至当前版本。
### 6.2.0、升级至 `master` 版本
暂无
### 6.2.1、升级至 `v3.0.1` 版本
### 6.2.1、升级至 `v3.0.0` 版本
**ES 索引模版**
```bash
# 新增 ks_kafka_zookeeper_metric 索引模版。
# 可通过再次执行 bin/init_es_template.sh 脚本,创建该索引模版。
# 索引模版内容
PUT _template/ks_kafka_zookeeper_metric
{
"order" : 10,
"index_patterns" : [
"ks_kafka_zookeeper_metric*"
],
"settings" : {
"index" : {
"number_of_shards" : "10"
}
},
"mappings" : {
"properties" : {
"routingValue" : {
"type" : "text",
"fields" : {
"keyword" : {
"ignore_above" : 256,
"type" : "keyword"
}
}
},
"clusterPhyId" : {
"type" : "long"
},
"metrics" : {
"properties" : {
"AvgRequestLatency" : {
"type" : "double"
},
"MinRequestLatency" : {
"type" : "double"
},
"MaxRequestLatency" : {
"type" : "double"
},
"OutstandingRequests" : {
"type" : "double"
},
"NodeCount" : {
"type" : "double"
},
"WatchCount" : {
"type" : "double"
},
"NumAliveConnections" : {
"type" : "double"
},
"PacketsReceived" : {
"type" : "double"
},
"PacketsSent" : {
"type" : "double"
},
"EphemeralsCount" : {
"type" : "double"
},
"ApproximateDataSize" : {
"type" : "double"
},
"OpenFileDescriptorCount" : {
"type" : "double"
},
"MaxFileDescriptorCount" : {
"type" : "double"
}
}
},
"key" : {
"type" : "text",
"fields" : {
"keyword" : {
"ignore_above" : 256,
"type" : "keyword"
}
}
},
"timestamp" : {
"format" : "yyyy-MM-dd HH:mm:ss Z||yyyy-MM-dd HH:mm:ss||yyyy-MM-dd HH:mm:ss.SSS Z||yyyy-MM-dd HH:mm:ss.SSS||yyyy-MM-dd HH:mm:ss,SSS||yyyy/MM/dd HH:mm:ss||yyyy-MM-dd HH:mm:ss,SSS Z||yyyy/MM/dd HH:mm:ss,SSS Z||epoch_millis",
"type" : "date"
}
}
},
"aliases" : { }
}
```
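As a hedged illustration, the new template can also be pushed on its own with a curl call in the same style as the project's init script; `${esaddr}`/`${port}` are placeholders for your Elasticsearch address, and `ks_kafka_zookeeper_metric.json` is a hypothetical local file holding the JSON body shown above:
```bash
# Sketch only: apply just the ks_kafka_zookeeper_metric template.
# ${esaddr}/${port} are placeholders; the JSON body comes from the block above,
# saved locally as ks_kafka_zookeeper_metric.json (hypothetical filename).
curl -s -o /dev/null -X POST \
  -H 'cache-control: no-cache' -H 'content-type: application/json' \
  "http://${esaddr}:${port}/_template/ks_kafka_zookeeper_metric" \
  -d @ks_kafka_zookeeper_metric.json
```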
**SQL 变更**
```sql
DROP TABLE IF EXISTS `ks_km_zookeeper`;
CREATE TABLE `ks_km_zookeeper` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
`cluster_phy_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT '物理集群ID',
`host` varchar(128) NOT NULL DEFAULT '' COMMENT 'zookeeper主机名',
`port` int(16) NOT NULL DEFAULT '-1' COMMENT 'zookeeper端口',
`role` varchar(16) NOT NULL DEFAULT '' COMMENT '角色, leader follower observer',
`version` varchar(128) NOT NULL DEFAULT '' COMMENT 'zookeeper版本',
`status` int(16) NOT NULL DEFAULT '0' COMMENT '状态: 1存活0未存活11存活但是4字命令使用不了',
`create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
`update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
PRIMARY KEY (`id`),
UNIQUE KEY `uniq_cluster_phy_id_host_port` (`cluster_phy_id`,`host`, `port`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='Zookeeper信息表';
DROP TABLE IF EXISTS `ks_km_group`;
CREATE TABLE `ks_km_group` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
`cluster_phy_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT '集群id',
`name` varchar(192) COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT 'Group名称',
`member_count` int(11) unsigned NOT NULL DEFAULT '0' COMMENT '成员数',
`topic_members` text CHARACTER SET utf8 COMMENT 'group消费的topic列表',
`partition_assignor` varchar(255) CHARACTER SET utf8 NOT NULL COMMENT '分配策略',
`coordinator_id` int(11) NOT NULL COMMENT 'group协调器brokerId',
`type` int(11) NOT NULL COMMENT 'group类型 0consumer 1connector',
`state` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '' COMMENT '状态',
`create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
`update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
PRIMARY KEY (`id`),
UNIQUE KEY `uniq_cluster_phy_id_name` (`cluster_phy_id`,`name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='Group信息表';
```
---
### 6.2.2、升级至 `v3.0.0` 版本
**SQL 变更**
@@ -22,7 +157,7 @@ ADD COLUMN `zk_properties` TEXT NULL COMMENT 'ZK配置' AFTER `jmx_properties`;
---
### 6.2.2、升级至 `v3.0.0-beta.2`版本
### 6.2.3、升级至 `v3.0.0-beta.2`版本
**配置变更**
@@ -93,7 +228,7 @@ ALTER TABLE `logi_security_oplog`
---
### 6.2.3、升级至 `v3.0.0-beta.1`版本
### 6.2.4、升级至 `v3.0.0-beta.1`版本
**SQL 变更**
@@ -112,7 +247,7 @@ ALTER COLUMN `operation_methods` set default '';
---
### 6.2.4、`2.x`版本 升级至 `v3.0.0-beta.0`版本
### 6.2.5、`2.x`版本 升级至 `v3.0.0-beta.0`版本
**升级步骤:**

faq.md

@@ -37,7 +37,7 @@
## 8.4、`Jmx`连接失败如何解决?
- 参看 [Jmx 连接配置&问题解决](./9-attachment#jmx-连接失败问题解决) 说明。
- 参看 [Jmx 连接配置&问题解决](https://doc.knowstreaming.com/product/9-attachment#91jmx-%E8%BF%9E%E6%8E%A5%E5%A4%B1%E8%B4%A5%E9%97%AE%E9%A2%98%E8%A7%A3%E5%86%B3) 说明。
&nbsp;

ClusterZookeepersManager.java (new file)

@@ -0,0 +1,19 @@
package com.xiaojukeji.know.streaming.km.biz.cluster;
import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterZookeepersOverviewDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.vo.zookeeper.ClusterZookeepersOverviewVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.zookeeper.ClusterZookeepersStateVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.zookeeper.ZnodeVO;
/**
* 多集群总体状态
*/
public interface ClusterZookeepersManager {
Result<ClusterZookeepersStateVO> getClusterPhyZookeepersState(Long clusterPhyId);
PaginationResult<ClusterZookeepersOverviewVO> getClusterPhyZookeepersOverview(Long clusterPhyId, ClusterZookeepersOverviewDTO dto);
Result<ZnodeVO> getZnodeVO(Long clusterPhyId, String path);
}

ClusterBrokersManagerImpl.java

@@ -24,6 +24,7 @@ import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerMetricService;
import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerService;
import com.xiaojukeji.know.streaming.km.core.service.kafkacontroller.KafkaControllerService;
import com.xiaojukeji.know.streaming.km.core.service.topic.TopicService;
import com.xiaojukeji.know.streaming.km.persistence.kafka.KafkaJMXClient;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
@@ -51,6 +52,9 @@ public class ClusterBrokersManagerImpl implements ClusterBrokersManager {
@Autowired
private KafkaControllerService kafkaControllerService;
@Autowired
private KafkaJMXClient kafkaJMXClient;
@Override
public PaginationResult<ClusterBrokersOverviewVO> getClusterPhyBrokersOverview(Long clusterPhyId, ClusterBrokersOverviewDTO dto) {
// 获取集群Broker列表
@@ -75,6 +79,10 @@ public class ClusterBrokersManagerImpl implements ClusterBrokersManager {
//获取controller信息
KafkaController kafkaController = kafkaControllerService.getKafkaControllerFromDB(clusterPhyId);
//获取jmx状态信息
Map<Integer, Boolean> jmxConnectedMap = new HashMap<>();
brokerList.forEach(elem -> jmxConnectedMap.put(elem.getBrokerId(), kafkaJMXClient.getClientWithCheck(clusterPhyId, elem.getBrokerId()) != null));
// 格式转换
return PaginationResult.buildSuc(
this.convert2ClusterBrokersOverviewVOList(
@@ -83,7 +91,8 @@ public class ClusterBrokersManagerImpl implements ClusterBrokersManager {
metricsResult.getData(),
groupTopic,
transactionTopic,
kafkaController
kafkaController,
jmxConnectedMap
),
paginationResult
);
@@ -165,22 +174,24 @@ public class ClusterBrokersManagerImpl implements ClusterBrokersManager {
List<BrokerMetrics> metricsList,
Topic groupTopic,
Topic transactionTopic,
KafkaController kafkaController) {
Map<Integer, BrokerMetrics> metricsMap = metricsList == null? new HashMap<>(): metricsList.stream().collect(Collectors.toMap(BrokerMetrics::getBrokerId, Function.identity()));
Map<Integer, Broker> brokerMap = brokerList == null? new HashMap<>(): brokerList.stream().collect(Collectors.toMap(Broker::getBrokerId, Function.identity()));
KafkaController kafkaController,
Map<Integer, Boolean> jmxConnectedMap) {
Map<Integer, BrokerMetrics> metricsMap = metricsList == null ? new HashMap<>() : metricsList.stream().collect(Collectors.toMap(BrokerMetrics::getBrokerId, Function.identity()));
Map<Integer, Broker> brokerMap = brokerList == null ? new HashMap<>() : brokerList.stream().collect(Collectors.toMap(Broker::getBrokerId, Function.identity()));
List<ClusterBrokersOverviewVO> voList = new ArrayList<>(pagedBrokerIdList.size());
for (Integer brokerId : pagedBrokerIdList) {
Broker broker = brokerMap.get(brokerId);
BrokerMetrics brokerMetrics = metricsMap.get(brokerId);
Boolean jmxConnected = jmxConnectedMap.get(brokerId);
voList.add(this.convert2ClusterBrokersOverviewVO(brokerId, broker, brokerMetrics, groupTopic, transactionTopic, kafkaController));
voList.add(this.convert2ClusterBrokersOverviewVO(brokerId, broker, brokerMetrics, groupTopic, transactionTopic, kafkaController, jmxConnected));
}
return voList;
}
private ClusterBrokersOverviewVO convert2ClusterBrokersOverviewVO(Integer brokerId, Broker broker, BrokerMetrics brokerMetrics, Topic groupTopic, Topic transactionTopic, KafkaController kafkaController) {
private ClusterBrokersOverviewVO convert2ClusterBrokersOverviewVO(Integer brokerId, Broker broker, BrokerMetrics brokerMetrics, Topic groupTopic, Topic transactionTopic, KafkaController kafkaController, Boolean jmxConnected) {
ClusterBrokersOverviewVO clusterBrokersOverviewVO = new ClusterBrokersOverviewVO();
clusterBrokersOverviewVO.setBrokerId(brokerId);
if (broker != null) {
@@ -203,6 +214,7 @@ public class ClusterBrokersManagerImpl implements ClusterBrokersManager {
}
clusterBrokersOverviewVO.setLatestMetrics(brokerMetrics);
clusterBrokersOverviewVO.setJmxConnected(jmxConnected);
return clusterBrokersOverviewVO;
}

ClusterZookeepersManagerImpl.java (new file)

@@ -0,0 +1,137 @@
package com.xiaojukeji.know.streaming.km.biz.cluster.impl;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.biz.cluster.ClusterZookeepersManager;
import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterZookeepersOverviewDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
import com.xiaojukeji.know.streaming.km.common.bean.entity.config.ZKConfig;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.ZookeeperMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.param.metric.ZookeeperMetricParam;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.ResultStatus;
import com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.Znode;
import com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.ZookeeperInfo;
import com.xiaojukeji.know.streaming.km.common.bean.vo.zookeeper.ClusterZookeepersOverviewVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.zookeeper.ClusterZookeepersStateVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.zookeeper.ZnodeVO;
import com.xiaojukeji.know.streaming.km.common.constant.MsgConstant;
import com.xiaojukeji.know.streaming.km.common.enums.zookeeper.ZKRoleEnum;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.PaginationUtil;
import com.xiaojukeji.know.streaming.km.common.utils.Tuple;
import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
import com.xiaojukeji.know.streaming.km.core.service.version.metrics.ZookeeperMetricVersionItems;
import com.xiaojukeji.know.streaming.km.core.service.zookeeper.ZnodeService;
import com.xiaojukeji.know.streaming.km.core.service.zookeeper.ZookeeperMetricService;
import com.xiaojukeji.know.streaming.km.core.service.zookeeper.ZookeeperService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
@Service
public class ClusterZookeepersManagerImpl implements ClusterZookeepersManager {
private static final ILog LOGGER = LogFactory.getLog(ClusterZookeepersManagerImpl.class);
@Autowired
private ClusterPhyService clusterPhyService;
@Autowired
private ZookeeperService zookeeperService;
@Autowired
private ZookeeperMetricService zookeeperMetricService;
@Autowired
private ZnodeService znodeService;
@Override
public Result<ClusterZookeepersStateVO> getClusterPhyZookeepersState(Long clusterPhyId) {
ClusterPhy clusterPhy = clusterPhyService.getClusterByCluster(clusterPhyId);
if (clusterPhy == null) {
return Result.buildFromRSAndMsg(ResultStatus.CLUSTER_NOT_EXIST, MsgConstant.getClusterPhyNotExist(clusterPhyId));
}
// // TODO
// private Integer healthState;
// private Integer healthCheckPassed;
// private Integer healthCheckTotal;
List<ZookeeperInfo> infoList = zookeeperService.listFromDBByCluster(clusterPhyId);
ClusterZookeepersStateVO vo = new ClusterZookeepersStateVO();
vo.setTotalServerCount(infoList.size());
vo.setAliveFollowerCount(0);
vo.setTotalFollowerCount(0);
vo.setAliveObserverCount(0);
vo.setTotalObserverCount(0);
vo.setAliveServerCount(0);
for (ZookeeperInfo info: infoList) {
if (info.getRole().equals(ZKRoleEnum.LEADER.getRole())) {
vo.setLeaderNode(info.getHost());
}
if (info.getRole().equals(ZKRoleEnum.FOLLOWER.getRole())) {
vo.setTotalFollowerCount(vo.getTotalFollowerCount() + 1);
vo.setAliveFollowerCount(info.alive()? vo.getAliveFollowerCount() + 1: vo.getAliveFollowerCount());
}
if (info.getRole().equals(ZKRoleEnum.OBSERVER.getRole())) {
vo.setTotalObserverCount(vo.getTotalObserverCount() + 1);
vo.setAliveObserverCount(info.alive()? vo.getAliveObserverCount() + 1: vo.getAliveObserverCount());
}
if (info.alive()) {
vo.setAliveServerCount(vo.getAliveServerCount() + 1);
}
}
Result<ZookeeperMetrics> metricsResult = zookeeperMetricService.collectMetricsFromZookeeper(new ZookeeperMetricParam(
clusterPhyId,
infoList.stream().filter(elem -> elem.alive()).map(item -> new Tuple<String, Integer>(item.getHost(), item.getPort())).collect(Collectors.toList()),
ConvertUtil.str2ObjByJson(clusterPhy.getZkProperties(), ZKConfig.class),
ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_WATCH_COUNT
));
if (metricsResult.failed()) {
LOGGER.error(
"class=ClusterZookeepersManagerImpl||method=getClusterPhyZookeepersState||clusterPhyId={}||errMsg={}",
clusterPhyId, metricsResult.getMessage()
);
return Result.buildSuc(vo);
}
Float watchCount = metricsResult.getData().getMetric(ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_WATCH_COUNT);
vo.setWatchCount(watchCount != null? watchCount.intValue(): null);
return Result.buildSuc(vo);
}
@Override
public PaginationResult<ClusterZookeepersOverviewVO> getClusterPhyZookeepersOverview(Long clusterPhyId, ClusterZookeepersOverviewDTO dto) {
//获取集群zookeeper列表
List<ClusterZookeepersOverviewVO> clusterZookeepersOverviewVOList = ConvertUtil.list2List(zookeeperService.listFromDBByCluster(clusterPhyId), ClusterZookeepersOverviewVO.class);
//搜索
clusterZookeepersOverviewVOList = PaginationUtil.pageByFuzzyFilter(clusterZookeepersOverviewVOList, dto.getSearchKeywords(), Arrays.asList("host"));
//分页
PaginationResult<ClusterZookeepersOverviewVO> paginationResult = PaginationUtil.pageBySubData(clusterZookeepersOverviewVOList, dto);
return paginationResult;
}
@Override
public Result<ZnodeVO> getZnodeVO(Long clusterPhyId, String path) {
Result<Znode> result = znodeService.getZnode(clusterPhyId, path);
if (result.failed()) {
return Result.buildFromIgnoreData(result);
}
return Result.buildSuc(ConvertUtil.obj2ObjByJSON(result.getData(), ZnodeVO.class));
}
/**************************************************** private method ****************************************************/
}

GroupManager.java

@@ -1,11 +1,14 @@
package com.xiaojukeji.know.streaming.km.biz.group;
import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterGroupSummaryDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.group.GroupOffsetResetDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationBaseDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationSortDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.topic.TopicPartitionKS;
import com.xiaojukeji.know.streaming.km.common.bean.po.group.GroupMemberPO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupOverviewVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupTopicConsumedDetailVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupTopicOverviewVO;
import com.xiaojukeji.know.streaming.km.common.exception.AdminOperateException;
@@ -22,6 +25,10 @@ public interface GroupManager {
String searchGroupKeyword,
PaginationBaseDTO dto);
PaginationResult<GroupTopicOverviewVO> pagingGroupTopicMembers(Long clusterPhyId, String groupName, PaginationBaseDTO dto);
PaginationResult<GroupOverviewVO> pagingClusterGroupsOverview(Long clusterPhyId, ClusterGroupSummaryDTO dto);
PaginationResult<GroupTopicConsumedDetailVO> pagingGroupTopicConsumedMetrics(Long clusterPhyId,
String topicName,
String groupName,
@@ -31,4 +38,6 @@ public interface GroupManager {
Result<Set<TopicPartitionKS>> listClusterPhyGroupPartitions(Long clusterPhyId, String groupName, Long startTime, Long endTime);
Result<Void> resetGroupOffsets(GroupOffsetResetDTO dto, String operator) throws Exception;
List<GroupTopicOverviewVO> getGroupTopicOverviewVOList (Long clusterPhyId, List<GroupMemberPO> groupMemberPOList);
}

GroupManagerImpl.java

@@ -3,11 +3,14 @@ package com.xiaojukeji.know.streaming.km.biz.group.impl;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.biz.group.GroupManager;
import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterGroupSummaryDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.group.GroupOffsetResetDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationBaseDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationSortDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.partition.PartitionOffsetDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.group.Group;
import com.xiaojukeji.know.streaming.km.common.bean.entity.group.GroupTopic;
import com.xiaojukeji.know.streaming.km.common.bean.entity.group.GroupTopicMember;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.GroupMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
@@ -15,11 +18,15 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.topic.Topic;
import com.xiaojukeji.know.streaming.km.common.bean.entity.topic.TopicPartitionKS;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.ResultStatus;
import com.xiaojukeji.know.streaming.km.common.bean.po.group.GroupMemberPO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupOverviewVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupTopicConsumedDetailVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupTopicOverviewVO;
import com.xiaojukeji.know.streaming.km.common.constant.MsgConstant;
import com.xiaojukeji.know.streaming.km.common.constant.PaginationConstant;
import com.xiaojukeji.know.streaming.km.common.converter.GroupConverter;
import com.xiaojukeji.know.streaming.km.common.enums.AggTypeEnum;
import com.xiaojukeji.know.streaming.km.common.enums.OffsetTypeEnum;
import com.xiaojukeji.know.streaming.km.common.enums.SortTypeEnum;
import com.xiaojukeji.know.streaming.km.common.enums.group.GroupStateEnum;
import com.xiaojukeji.know.streaming.km.common.exception.AdminOperateException;
import com.xiaojukeji.know.streaming.km.common.exception.NotExistException;
@@ -71,30 +78,60 @@ public class GroupManagerImpl implements GroupManager {
String searchGroupKeyword, String searchGroupKeyword,
PaginationBaseDTO dto) { PaginationBaseDTO dto) {
PaginationResult<GroupMemberPO> paginationResult = groupService.pagingGroupMembers(clusterPhyId, topicName, groupName, searchTopicKeyword, searchGroupKeyword, dto); PaginationResult<GroupMemberPO> paginationResult = groupService.pagingGroupMembers(clusterPhyId, topicName, groupName, searchTopicKeyword, searchGroupKeyword, dto);
if (paginationResult.failed()) {
return PaginationResult.buildFailure(paginationResult, dto);
}
if (!paginationResult.hasData()) { if (!paginationResult.hasData()) {
return PaginationResult.buildSuc(new ArrayList<>(), paginationResult); return PaginationResult.buildSuc(new ArrayList<>(), paginationResult);
} }
(removed: the old inline metric-fetch code in pagingGroupMembers)
// Fetch metrics
Result<List<GroupMetrics>> metricsListResult = groupMetricService.listLatestMetricsAggByGroupTopicFromES(
        clusterPhyId,
        paginationResult.getData().getBizData().stream().map(elem -> new GroupTopic(elem.getGroupName(), elem.getTopicName())).collect(Collectors.toList()),
        Arrays.asList(GroupMetricVersionItems.GROUP_METRIC_LAG),
        AggTypeEnum.MAX
);
if (metricsListResult.failed()) {
    // If the query fails, log the error but still return the data already fetched
    log.error("method=pagingGroupMembers||clusterPhyId={}||topicName={}||groupName={}||result={}||errMsg=search es failed", clusterPhyId, topicName, groupName, metricsListResult);
}
return PaginationResult.buildSuc(
        this.convert2GroupTopicOverviewVOList(paginationResult.getData().getBizData(), metricsListResult.getData()),
        paginationResult
);

(added: pagingGroupMembers now delegates to getGroupTopicOverviewVOList, and a new pagingGroupTopicMembers method is introduced)
List<GroupTopicOverviewVO> groupTopicVOList = this.getGroupTopicOverviewVOList(clusterPhyId, paginationResult.getData().getBizData());
return PaginationResult.buildSuc(groupTopicVOList, paginationResult);
}

@Override
public PaginationResult<GroupTopicOverviewVO> pagingGroupTopicMembers(Long clusterPhyId, String groupName, PaginationBaseDTO dto) {
    Group group = groupService.getGroupFromDB(clusterPhyId, groupName);
    // Return directly if there are no topic members
    if (group == null || ValidateUtils.isEmptyList(group.getTopicMembers())) {
        return PaginationResult.buildSuc(dto);
    }
    // Sort
    List<GroupTopicMember> groupTopicMembers = PaginationUtil.pageBySort(group.getTopicMembers(), PaginationConstant.DEFAULT_GROUP_TOPIC_SORTED_FIELD, SortTypeEnum.DESC.getSortType());
    // Paginate
    PaginationResult<GroupTopicMember> paginationResult = PaginationUtil.pageBySubData(groupTopicMembers, dto);
    List<GroupMemberPO> groupMemberPOList = paginationResult.getData().getBizData().stream().map(elem -> new GroupMemberPO(clusterPhyId, elem.getTopicName(), groupName, group.getState().getState(), elem.getMemberCount())).collect(Collectors.toList());
    return PaginationResult.buildSuc(this.getGroupTopicOverviewVOList(clusterPhyId, groupMemberPOList), paginationResult);
}
@Override
public PaginationResult<GroupOverviewVO> pagingClusterGroupsOverview(Long clusterPhyId, ClusterGroupSummaryDTO dto) {
List<Group> groupList = groupService.listClusterGroups(clusterPhyId);
// Convert to VO
List<GroupOverviewVO> voList = groupList.stream().map(elem -> GroupConverter.convert2GroupOverviewVO(elem)).collect(Collectors.toList());
// Fuzzy-filter by group name
voList = PaginationUtil.pageByFuzzyFilter(voList, dto.getSearchGroupName(), Arrays.asList("name"));
// Filter by topic name
if (!ValidateUtils.isBlank(dto.getSearchTopicName())) {
voList = voList.stream().filter(elem -> {
for (String topicName : elem.getTopicNameList()) {
if (topicName.contains(dto.getSearchTopicName())) {
return true;
}
}
return false;
}).collect(Collectors.toList());
}
// Paginate, then return
return PaginationUtil.pageBySubData(voList, dto);
} }
@Override @Override
@@ -104,7 +141,7 @@ public class GroupManagerImpl implements GroupManager {
List<String> latestMetricNames, List<String> latestMetricNames,
PaginationSortDTO dto) throws NotExistException, AdminOperateException { PaginationSortDTO dto) throws NotExistException, AdminOperateException {
// 获取消费组消费的TopicPartition列表 // 获取消费组消费的TopicPartition列表
Map<TopicPartition, Long> consumedOffsetMap = groupService.getGroupOffset(clusterPhyId, groupName); Map<TopicPartition, Long> consumedOffsetMap = groupService.getGroupOffsetFromKafka(clusterPhyId, groupName);
List<Integer> partitionList = consumedOffsetMap.keySet() List<Integer> partitionList = consumedOffsetMap.keySet()
.stream() .stream()
.filter(elem -> elem.topic().equals(topicName)) .filter(elem -> elem.topic().equals(topicName))
@@ -113,7 +150,7 @@ public class GroupManagerImpl implements GroupManager {
Collections.sort(partitionList); Collections.sort(partitionList);
// 获取消费组当前运行信息 // 获取消费组当前运行信息
ConsumerGroupDescription groupDescription = groupService.getGroupDescription(clusterPhyId, groupName); ConsumerGroupDescription groupDescription = groupService.getGroupDescriptionFromKafka(clusterPhyId, groupName);
// 转换存储格式 // 转换存储格式
Map<TopicPartition, MemberDescription> tpMemberMap = new HashMap<>(); Map<TopicPartition, MemberDescription> tpMemberMap = new HashMap<>();
@@ -166,7 +203,7 @@ public class GroupManagerImpl implements GroupManager {
return rv; return rv;
} }
ConsumerGroupDescription description = groupService.getGroupDescription(dto.getClusterId(), dto.getGroupName()); ConsumerGroupDescription description = groupService.getGroupDescriptionFromKafka(dto.getClusterId(), dto.getGroupName());
if (ConsumerGroupState.DEAD.equals(description.state()) && !dto.isCreateIfNotExist()) { if (ConsumerGroupState.DEAD.equals(description.state()) && !dto.isCreateIfNotExist()) {
return Result.buildFromRSAndMsg(ResultStatus.KAFKA_OPERATE_FAILED, "group不存在, 重置失败"); return Result.buildFromRSAndMsg(ResultStatus.KAFKA_OPERATE_FAILED, "group不存在, 重置失败");
} }
@@ -185,6 +222,22 @@ public class GroupManagerImpl implements GroupManager {
return groupService.resetGroupOffsets(dto.getClusterId(), dto.getGroupName(), offsetMapResult.getData(), operator); return groupService.resetGroupOffsets(dto.getClusterId(), dto.getGroupName(), offsetMapResult.getData(), operator);
} }
@Override
public List<GroupTopicOverviewVO> getGroupTopicOverviewVOList(Long clusterPhyId, List<GroupMemberPO> groupMemberPOList) {
// Fetch metrics
Result<List<GroupMetrics>> metricsListResult = groupMetricService.listLatestMetricsAggByGroupTopicFromES(
clusterPhyId,
groupMemberPOList.stream().map(elem -> new GroupTopic(elem.getGroupName(), elem.getTopicName())).collect(Collectors.toList()),
Arrays.asList(GroupMetricVersionItems.GROUP_METRIC_LAG),
AggTypeEnum.MAX
);
if (metricsListResult.failed()) {
// If the query fails, log the error but still return the data already fetched
log.error("method=getGroupTopicOverviewVOList||clusterPhyId={}||result={}||errMsg=search es failed", clusterPhyId, metricsListResult);
}
return this.convert2GroupTopicOverviewVOList(groupMemberPOList, metricsListResult.getData());
}
/**************************************************** private method ****************************************************/ /**************************************************** private method ****************************************************/
@@ -293,4 +346,31 @@ public class GroupManagerImpl implements GroupManager {
); );
} }
private List<GroupTopicOverviewVO> convert2GroupTopicOverviewVOList(String groupName, String state, List<GroupTopicMember> groupTopicList, List<GroupMetrics> metricsList) {
if (metricsList == null) {
metricsList = new ArrayList<>();
}
// <TopicName, GroupMetrics>
Map<String, GroupMetrics> metricsMap = new HashMap<>();
for (GroupMetrics metrics : metricsList) {
if (!groupName.equals(metrics.getGroup())) continue;
metricsMap.put(metrics.getTopic(), metrics);
}
List<GroupTopicOverviewVO> voList = new ArrayList<>();
for (GroupTopicMember po : groupTopicList) {
GroupTopicOverviewVO vo = ConvertUtil.obj2Obj(po, GroupTopicOverviewVO.class);
vo.setGroupName(groupName);
vo.setState(state);
GroupMetrics metrics = metricsMap.get(po.getTopicName());
if (metrics != null) {
vo.setMaxLag(ConvertUtil.Float2Long(metrics.getMetrics().get(GroupMetricVersionItems.GROUP_METRIC_LAG)));
}
voList.add(vo);
}
return voList;
}
} }
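The refactor above funnels both paging paths through getGroupTopicOverviewVOList, whose core is a lookup-and-merge: index the latest GROUP_METRIC_LAG values by group@topic and enrich each paged row, leaving maxLag unset when no metric is available (the same "return what we already have" behaviour that is kept when the ES query fails). A minimal, self-contained Java sketch of that merge, using hypothetical stand-in classes rather than the project's GroupMemberPO/GroupMetrics/GroupTopicOverviewVO:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class LagMergeSketch {
    // Hypothetical stand-ins for GroupMemberPO / GroupMetrics / GroupTopicOverviewVO.
    static class MemberRow {
        String groupName; String topicName; String state; Integer memberCount;
        MemberRow(String g, String t, String s, Integer c) { groupName = g; topicName = t; state = s; memberCount = c; }
    }
    static class LagMetric {
        String group; String topic; Float maxLag;
        LagMetric(String g, String t, Float l) { group = g; topic = t; maxLag = l; }
    }
    static class OverviewRow {
        String groupName; String topicName; String state; Integer memberCount; Long maxLag;
        @Override public String toString() { return groupName + "/" + topicName + " maxLag=" + maxLag; }
    }

    // Index the latest lag metrics by group@topic, then enrich each paged row.
    // Rows without a metric keep maxLag == null, mirroring the "return what we already have" behaviour.
    static List<OverviewRow> merge(List<MemberRow> rows, List<LagMetric> metrics) {
        Map<String, Float> lagByGroupTopic = new HashMap<>();
        for (LagMetric m : metrics) {
            lagByGroupTopic.put(m.group + "@" + m.topic, m.maxLag);
        }
        List<OverviewRow> voList = new ArrayList<>();
        for (MemberRow row : rows) {
            OverviewRow vo = new OverviewRow();
            vo.groupName = row.groupName;
            vo.topicName = row.topicName;
            vo.state = row.state;
            vo.memberCount = row.memberCount;
            Float lag = lagByGroupTopic.get(row.groupName + "@" + row.topicName);
            vo.maxLag = (lag == null) ? null : lag.longValue(); // Float -> Long, like ConvertUtil.Float2Long
            voList.add(vo);
        }
        return voList;
    }

    public static void main(String[] args) {
        List<MemberRow> rows = Arrays.asList(new MemberRow("g1", "topicA", "Stable", 3));
        List<LagMetric> metrics = Arrays.asList(new LagMetric("g1", "topicA", 12345678f));
        merge(rows, metrics).forEach(System.out::println); // g1/topicA maxLag=12345678
    }
}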

View File

@@ -1,8 +1,10 @@
package com.xiaojukeji.know.streaming.km.biz.topic; package com.xiaojukeji.know.streaming.km.biz.topic;
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationSortDTO; import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationBaseDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.topic.TopicRecordDTO; import com.xiaojukeji.know.streaming.km.common.bean.dto.topic.TopicRecordDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result; import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupTopicOverviewVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.topic.TopicBrokersPartitionsSummaryVO; import com.xiaojukeji.know.streaming.km.common.bean.vo.topic.TopicBrokersPartitionsSummaryVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.topic.TopicRecordVO; import com.xiaojukeji.know.streaming.km.common.bean.vo.topic.TopicRecordVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.topic.TopicStateVO; import com.xiaojukeji.know.streaming.km.common.bean.vo.topic.TopicStateVO;
@@ -23,4 +25,6 @@ public interface TopicStateManager {
Result<List<TopicPartitionVO>> getTopicPartitions(Long clusterPhyId, String topicName, List<String> metricsNames); Result<List<TopicPartitionVO>> getTopicPartitions(Long clusterPhyId, String topicName, List<String> metricsNames);
Result<TopicBrokersPartitionsSummaryVO> getTopicBrokersPartitionsSummary(Long clusterPhyId, String topicName); Result<TopicBrokersPartitionsSummaryVO> getTopicBrokersPartitionsSummary(Long clusterPhyId, String topicName);
PaginationResult<GroupTopicOverviewVO> pagingTopicGroupsOverview(Long clusterPhyId, String topicName, String searchGroupName, PaginationBaseDTO dto);
} }

View File

@@ -2,17 +2,22 @@ package com.xiaojukeji.know.streaming.km.biz.topic.impl;
import com.didiglobal.logi.log.ILog; import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory; import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.biz.group.GroupManager;
import com.xiaojukeji.know.streaming.km.biz.topic.TopicStateManager; import com.xiaojukeji.know.streaming.km.biz.topic.TopicStateManager;
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationBaseDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.topic.TopicRecordDTO; import com.xiaojukeji.know.streaming.km.common.bean.dto.topic.TopicRecordDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.broker.Broker; import com.xiaojukeji.know.streaming.km.common.bean.entity.broker.Broker;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy; import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.PartitionMetrics; import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.PartitionMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.TopicMetrics; import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.TopicMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.partition.Partition; import com.xiaojukeji.know.streaming.km.common.bean.entity.partition.Partition;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result; import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.ResultStatus; import com.xiaojukeji.know.streaming.km.common.bean.entity.result.ResultStatus;
import com.xiaojukeji.know.streaming.km.common.bean.entity.topic.Topic; import com.xiaojukeji.know.streaming.km.common.bean.entity.topic.Topic;
import com.xiaojukeji.know.streaming.km.common.bean.po.group.GroupMemberPO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.broker.BrokerReplicaSummaryVO; import com.xiaojukeji.know.streaming.km.common.bean.vo.broker.BrokerReplicaSummaryVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupTopicOverviewVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.topic.TopicBrokersPartitionsSummaryVO; import com.xiaojukeji.know.streaming.km.common.bean.vo.topic.TopicBrokersPartitionsSummaryVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.topic.TopicRecordVO; import com.xiaojukeji.know.streaming.km.common.bean.vo.topic.TopicRecordVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.topic.TopicStateVO; import com.xiaojukeji.know.streaming.km.common.bean.vo.topic.TopicStateVO;
@@ -32,6 +37,7 @@ import com.xiaojukeji.know.streaming.km.common.utils.PaginationUtil;
import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils; import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerService; import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerService;
import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService; import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
import com.xiaojukeji.know.streaming.km.core.service.group.GroupService;
import com.xiaojukeji.know.streaming.km.core.service.partition.PartitionMetricService; import com.xiaojukeji.know.streaming.km.core.service.partition.PartitionMetricService;
import com.xiaojukeji.know.streaming.km.core.service.partition.PartitionService; import com.xiaojukeji.know.streaming.km.core.service.partition.PartitionService;
import com.xiaojukeji.know.streaming.km.core.service.topic.TopicConfigService; import com.xiaojukeji.know.streaming.km.core.service.topic.TopicConfigService;
@@ -77,6 +83,12 @@ public class TopicStateManagerImpl implements TopicStateManager {
@Autowired @Autowired
private TopicConfigService topicConfigService; private TopicConfigService topicConfigService;
@Autowired
private GroupService groupService;
@Autowired
private GroupManager groupManager;
@Override @Override
public TopicBrokerAllVO getTopicBrokerAll(Long clusterPhyId, String topicName, String searchBrokerHost) throws NotExistException { public TopicBrokerAllVO getTopicBrokerAll(Long clusterPhyId, String topicName, String searchBrokerHost) throws NotExistException {
Topic topic = topicService.getTopic(clusterPhyId, topicName); Topic topic = topicService.getTopic(clusterPhyId, topicName);
@@ -346,6 +358,19 @@ public class TopicStateManagerImpl implements TopicStateManager {
return Result.buildSuc(vo); return Result.buildSuc(vo);
} }
@Override
public PaginationResult<GroupTopicOverviewVO> pagingTopicGroupsOverview(Long clusterPhyId, String topicName, String searchGroupName, PaginationBaseDTO dto) {
PaginationResult<GroupMemberPO> paginationResult = groupService.pagingGroupMembers(clusterPhyId, topicName, "", "", searchGroupName, dto);
if (!paginationResult.hasData()) {
return PaginationResult.buildSuc(new ArrayList<>(), paginationResult);
}
List<GroupTopicOverviewVO> groupTopicVOList = groupManager.getGroupTopicOverviewVOList(clusterPhyId, paginationResult.getData().getBizData());
return PaginationResult.buildSuc(groupTopicVOList, paginationResult);
}
/**************************************************** private method ****************************************************/ /**************************************************** private method ****************************************************/
private boolean checkIfIgnore(ConsumerRecord<String, String> consumerRecord, String filterKey, String filterValue) { private boolean checkIfIgnore(ConsumerRecord<String, String> consumerRecord, String filterKey, String filterValue) {

View File

@@ -14,7 +14,6 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionControlItem; import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionControlItem;
import com.xiaojukeji.know.streaming.km.common.bean.vo.config.metric.UserMetricConfigVO; import com.xiaojukeji.know.streaming.km.common.bean.vo.config.metric.UserMetricConfigVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.version.VersionItemVO; import com.xiaojukeji.know.streaming.km.common.bean.vo.version.VersionItemVO;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.enums.version.VersionEnum; import com.xiaojukeji.know.streaming.km.common.enums.version.VersionEnum;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil; import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.VersionUtil; import com.xiaojukeji.know.streaming.km.common.utils.VersionUtil;
@@ -108,10 +107,15 @@ public class VersionControlManagerImpl implements VersionControlManager {
allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_BROKER.getCode()), VersionItemVO.class)); allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_BROKER.getCode()), VersionItemVO.class));
allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_PARTITION.getCode()), VersionItemVO.class)); allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_PARTITION.getCode()), VersionItemVO.class));
allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_REPLICATION.getCode()), VersionItemVO.class)); allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_REPLICATION.getCode()), VersionItemVO.class));
allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_ZOOKEEPER.getCode()), VersionItemVO.class));
allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(WEB_OP.getCode()), VersionItemVO.class)); allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(WEB_OP.getCode()), VersionItemVO.class));
Map<String, VersionItemVO> map = allVersionItemVO.stream().collect( Map<String, VersionItemVO> map = allVersionItemVO.stream().collect(
Collectors.toMap(u -> u.getType() + "@" + u.getName(), Function.identity() )); Collectors.toMap(
u -> u.getType() + "@" + u.getName(),
Function.identity(),
(v1, v2) -> v1)
);
return Result.buildSuc(map); return Result.buildSuc(map);
} }
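The extra third argument to Collectors.toMap is the actual bugfix here: once the ZooKeeper metric items are added to the same list, two items can share the type@name key, and the two-argument toMap throws an exception on duplicate keys. A small standalone illustration (Item is a hypothetical stand-in for VersionItemVO):

import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class ToMapMergeSketch {
    static class Item {
        final String type; final String name;
        Item(String type, String name) { this.type = type; this.name = name; }
    }

    public static void main(String[] args) {
        List<Item> items = Arrays.asList(new Item("metric", "Lag"), new Item("metric", "Lag"));

        // The two-argument toMap would throw java.lang.IllegalStateException: Duplicate key ...
        // items.stream().collect(Collectors.toMap(i -> i.type + "@" + i.name, Function.identity()));

        // With the merge function (v1, v2) -> v1 the first occurrence wins and no exception is thrown.
        Map<String, Item> map = items.stream().collect(Collectors.toMap(
                i -> i.type + "@" + i.name,
                Function.identity(),
                (v1, v2) -> v1));
        System.out.println(map.size()); // 1
    }
}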

View File

@@ -91,7 +91,7 @@ public class ReplicaMetricCollector extends AbstractMetricCollector<ReplicationM
continue; continue;
} }
Result<ReplicationMetrics> ret = replicaMetricService.collectReplicaMetricsFromKafkaWithCache( Result<ReplicationMetrics> ret = replicaMetricService.collectReplicaMetricsFromKafka(
clusterPhyId, clusterPhyId,
metrics.getTopic(), metrics.getTopic(),
metrics.getBrokerId(), metrics.getBrokerId(),

View File

@@ -0,0 +1,122 @@
package com.xiaojukeji.know.streaming.km.collector.metric;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
import com.xiaojukeji.know.streaming.km.common.bean.entity.config.ZKConfig;
import com.xiaojukeji.know.streaming.km.common.bean.entity.kafkacontroller.KafkaController;
import com.xiaojukeji.know.streaming.km.common.bean.entity.param.metric.ZookeeperMetricParam;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionControlItem;
import com.xiaojukeji.know.streaming.km.common.utils.Tuple;
import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.ZookeeperMetricEvent;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.EnvUtil;
import com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.ZookeeperInfo;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.ZookeeperMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.ZookeeperMetricPO;
import com.xiaojukeji.know.streaming.km.core.service.kafkacontroller.KafkaControllerService;
import com.xiaojukeji.know.streaming.km.core.service.version.VersionControlService;
import com.xiaojukeji.know.streaming.km.core.service.zookeeper.ZookeeperMetricService;
import com.xiaojukeji.know.streaming.km.core.service.zookeeper.ZookeeperService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum.METRIC_ZOOKEEPER;
/**
* @author didi
*/
@Component
public class ZookeeperMetricCollector extends AbstractMetricCollector<ZookeeperMetricPO> {
protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
@Autowired
private VersionControlService versionControlService;
@Autowired
private ZookeeperMetricService zookeeperMetricService;
@Autowired
private ZookeeperService zookeeperService;
@Autowired
private KafkaControllerService kafkaControllerService;
@Override
public void collectMetrics(ClusterPhy clusterPhy) {
Long startTime = System.currentTimeMillis();
Long clusterPhyId = clusterPhy.getId();
List<VersionControlItem> items = versionControlService.listVersionControlItem(clusterPhyId, collectorType().getCode());
List<ZookeeperInfo> aliveZKList = zookeeperService.listFromDBByCluster(clusterPhyId)
.stream()
.filter(elem -> Constant.ALIVE.equals(elem.getStatus()))
.collect(Collectors.toList());
KafkaController kafkaController = kafkaControllerService.getKafkaControllerFromDB(clusterPhyId);
ZookeeperMetrics metrics = ZookeeperMetrics.initWithMetric(clusterPhyId, Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (float)Constant.INVALID_CODE);
if (ValidateUtils.isEmptyList(aliveZKList)) {
// No alive ZK nodes: publish the event and return directly
publishMetric(new ZookeeperMetricEvent(this, Arrays.asList(metrics)));
return;
}
// Build the collection parameter
ZookeeperMetricParam param = new ZookeeperMetricParam(
clusterPhyId,
aliveZKList.stream().map(elem -> new Tuple<String, Integer>(elem.getHost(), elem.getPort())).collect(Collectors.toList()),
ConvertUtil.str2ObjByJson(clusterPhy.getZkProperties(), ZKConfig.class),
kafkaController == null? Constant.INVALID_CODE: kafkaController.getBrokerId(),
null
);
for(VersionControlItem v : items) {
try {
if(null != metrics.getMetrics().get(v.getName())) {
continue;
}
param.setMetricName(v.getName());
Result<ZookeeperMetrics> ret = zookeeperMetricService.collectMetricsFromZookeeper(param);
if(null == ret || ret.failed() || null == ret.getData()){
continue;
}
metrics.putMetric(ret.getData().getMetrics());
if(!EnvUtil.isOnline()){
LOGGER.info(
"class=ZookeeperMetricCollector||method=collectMetrics||clusterPhyId={}||metricName={}||metricValue={}",
clusterPhyId, v.getName(), ConvertUtil.obj2Json(ret.getData().getMetrics())
);
}
} catch (Exception e){
LOGGER.error(
"class=ZookeeperMetricCollector||method=collectMetrics||clusterPhyId={}||metricName={}||errMsg=exception!",
clusterPhyId, v.getName(), e
);
}
}
metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (System.currentTimeMillis() - startTime) / 1000.0f);
publishMetric(new ZookeeperMetricEvent(this, Arrays.asList(metrics)));
LOGGER.info(
"class=ZookeeperMetricCollector||method=collectMetrics||clusterPhyId={}||startTime={}||costTime={}||msg=msg=collect finished.",
clusterPhyId, startTime, System.currentTimeMillis() - startTime
);
}
@Override
public VersionItemTypeEnum collectorType() {
return METRIC_ZOOKEEPER;
}
}

View File

@@ -0,0 +1,28 @@
package com.xiaojukeji.know.streaming.km.collector.sink;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.ZookeeperMetricEvent;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.ZookeeperMetricPO;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;
import javax.annotation.PostConstruct;
import static com.xiaojukeji.know.streaming.km.common.constant.ESIndexConstant.ZOOKEEPER_INDEX;
@Component
public class ZookeeperMetricESSender extends AbstractMetricESSender implements ApplicationListener<ZookeeperMetricEvent> {
protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
@PostConstruct
public void init(){
LOGGER.info("class=ZookeeperMetricESSender||method=init||msg=init finished");
}
@Override
public void onApplicationEvent(ZookeeperMetricEvent event) {
send2es(ZOOKEEPER_INDEX, ConvertUtil.list2List(event.getZookeeperMetrics(), ZookeeperMetricPO.class));
}
}
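ZookeeperMetricESSender closes the loop started in ZookeeperMetricCollector: the collector publishes a ZookeeperMetricEvent and Spring routes it to every ApplicationListener of that event type. A stripped-down sketch of the same publish/subscribe wiring with plain Spring types (MetricEvent, MetricPublisher and MetricListener are illustrative names, not project classes):

import org.springframework.context.ApplicationEvent;
import org.springframework.context.ApplicationEventPublisher;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;

// Simplified event, standing in for ZookeeperMetricEvent.
class MetricEvent extends ApplicationEvent {
    private final String payload;
    MetricEvent(Object source, String payload) { super(source); this.payload = payload; }
    String getPayload() { return payload; }
}

@Component
class MetricPublisher {
    private final ApplicationEventPublisher publisher;
    MetricPublisher(ApplicationEventPublisher publisher) { this.publisher = publisher; }

    // Equivalent of the collector's publishMetric(new ZookeeperMetricEvent(this, metricsList)).
    void publish(String payload) { publisher.publishEvent(new MetricEvent(this, payload)); }
}

@Component
class MetricListener implements ApplicationListener<MetricEvent> {
    // Equivalent of ZookeeperMetricESSender.onApplicationEvent: convert and ship to ES.
    @Override
    public void onApplicationEvent(MetricEvent event) {
        System.out.println("would send2es: " + event.getPayload());
    }
}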

View File

@@ -0,0 +1,18 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.cluster;
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationBaseDTO;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
/**
* @author wyb
* @date 2022/10/17
*/
@Data
public class ClusterGroupSummaryDTO extends PaginationBaseDTO {
@ApiModelProperty("查找该Topic")
private String searchTopicName;
@ApiModelProperty("查找该Group")
private String searchGroupName;
}

View File

@@ -0,0 +1,13 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.cluster;
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationBaseDTO;
import lombok.Data;
/**
* @author wyc
* @date 2022/9/23
*/
@Data
public class ClusterZookeepersOverviewDTO extends PaginationBaseDTO {
}

View File

@@ -1,8 +1,8 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.config; package com.xiaojukeji.know.streaming.km.common.bean.entity.config;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import io.swagger.annotations.ApiModel; import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty; import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import java.io.Serializable; import java.io.Serializable;
import java.util.Properties; import java.util.Properties;
@@ -11,7 +11,6 @@ import java.util.Properties;
* @author zengqiao * @author zengqiao
* @date 22/02/24 * @date 22/02/24
*/ */
@Data
@ApiModel(description = "ZK配置") @ApiModel(description = "ZK配置")
public class ZKConfig implements Serializable { public class ZKConfig implements Serializable {
@ApiModelProperty(value="ZK的jmx配置") @ApiModelProperty(value="ZK的jmx配置")
@@ -21,11 +20,51 @@ public class ZKConfig implements Serializable {
private Boolean openSecure = false; private Boolean openSecure = false;
@ApiModelProperty(value="ZK的Session超时时间", example = "15000") @ApiModelProperty(value="ZK的Session超时时间", example = "15000")
private Long sessionTimeoutUnitMs = 15000L; private Integer sessionTimeoutUnitMs = 15000;
@ApiModelProperty(value="ZK的Request超时时间", example = "5000") @ApiModelProperty(value="ZK的Request超时时间", example = "5000")
private Long requestTimeoutUnitMs = 5000L; private Integer requestTimeoutUnitMs = 5000;
@ApiModelProperty(value="ZK的Request超时时间") @ApiModelProperty(value="ZK的Request超时时间")
private Properties otherProps = new Properties(); private Properties otherProps = new Properties();
public JmxConfig getJmxConfig() {
return jmxConfig == null? new JmxConfig(): jmxConfig;
}
public void setJmxConfig(JmxConfig jmxConfig) {
this.jmxConfig = jmxConfig;
}
public Boolean getOpenSecure() {
return openSecure != null && openSecure;
}
public void setOpenSecure(Boolean openSecure) {
this.openSecure = openSecure;
}
public Integer getSessionTimeoutUnitMs() {
return sessionTimeoutUnitMs == null? Constant.DEFAULT_SESSION_TIMEOUT_UNIT_MS: sessionTimeoutUnitMs;
}
public void setSessionTimeoutUnitMs(Integer sessionTimeoutUnitMs) {
this.sessionTimeoutUnitMs = sessionTimeoutUnitMs;
}
public Integer getRequestTimeoutUnitMs() {
return requestTimeoutUnitMs == null? Constant.DEFAULT_REQUEST_TIMEOUT_UNIT_MS: requestTimeoutUnitMs;
}
public void setRequestTimeoutUnitMs(Integer requestTimeoutUnitMs) {
this.requestTimeoutUnitMs = requestTimeoutUnitMs;
}
public Properties getOtherProps() {
return otherProps == null? new Properties() : otherProps;
}
public void setOtherProps(Properties otherProps) {
this.otherProps = otherProps;
}
} }
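Dropping Lombok's @Data in favour of hand-written getters lets ZKConfig fall back to defaults whenever a field is missing from the stored zkProperties JSON. A tiny sketch of that null-safe getter pattern, with the default values inlined for illustration (the project reads them from Constant):

public class NullSafeConfigSketch {
    private static final int DEFAULT_SESSION_TIMEOUT_UNIT_MS = 15000;
    private static final int DEFAULT_REQUEST_TIMEOUT_UNIT_MS = 5000;

    private Integer sessionTimeoutUnitMs;   // may be null after deserializing a partial JSON config
    private Integer requestTimeoutUnitMs;

    // Null-safe getters: callers never have to handle a missing timeout themselves.
    public Integer getSessionTimeoutUnitMs() {
        return sessionTimeoutUnitMs == null ? DEFAULT_SESSION_TIMEOUT_UNIT_MS : sessionTimeoutUnitMs;
    }
    public Integer getRequestTimeoutUnitMs() {
        return requestTimeoutUnitMs == null ? DEFAULT_REQUEST_TIMEOUT_UNIT_MS : requestTimeoutUnitMs;
    }

    public static void main(String[] args) {
        NullSafeConfigSketch cfg = new NullSafeConfigSketch(); // nothing set, e.g. zkProperties was "{}"
        System.out.println(cfg.getSessionTimeoutUnitMs()); // 15000
        System.out.println(cfg.getRequestTimeoutUnitMs()); // 5000
    }
}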

View File

@@ -0,0 +1,74 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.group;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.enums.group.GroupStateEnum;
import com.xiaojukeji.know.streaming.km.common.enums.group.GroupTypeEnum;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import org.apache.kafka.clients.admin.ConsumerGroupDescription;
import java.util.ArrayList;
import java.util.List;
/**
* @author wyb
* @date 2022/10/10
*/
@Data
@NoArgsConstructor
@AllArgsConstructor
public class Group {
/**
* 集群id
*/
private Long clusterPhyId;
/**
* group类型
* @see GroupTypeEnum
*/
private GroupTypeEnum type;
/**
* group名称
*/
private String name;
/**
* group状态
* @see GroupStateEnum
*/
private GroupStateEnum state;
/**
* group成员数量
*/
private Integer memberCount;
/**
* group消费的topic列表
*/
private List<GroupTopicMember> topicMembers;
/**
* group分配策略
*/
private String partitionAssignor;
/**
* group协调器brokerId
*/
private int coordinatorId;
public Group(Long clusterPhyId, String groupName, ConsumerGroupDescription groupDescription) {
this.clusterPhyId = clusterPhyId;
this.type = groupDescription.isSimpleConsumerGroup()? GroupTypeEnum.CONSUMER: GroupTypeEnum.CONNECTOR;
this.name = groupName;
this.state = GroupStateEnum.getByRawState(groupDescription.state());
this.memberCount = groupDescription.members() == null? 0: groupDescription.members().size();
this.topicMembers = new ArrayList<>();
this.partitionAssignor = groupDescription.partitionAssignor();
this.coordinatorId = groupDescription.coordinator() == null? Constant.INVALID_CODE: groupDescription.coordinator().id();
}
}
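The new Group(Long, String, ConsumerGroupDescription) constructor is fed by Kafka's admin API. A hedged sketch of obtaining that description with the standard AdminClient (the broker address and group name are placeholders); how the project wires this up inside GroupService is not shown in this diff:

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ConsumerGroupDescription;

import java.util.Collections;
import java.util.Map;
import java.util.Properties;

public class DescribeGroupSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumed broker address, for illustration only.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient adminClient = AdminClient.create(props)) {
            Map<String, ConsumerGroupDescription> described = adminClient
                    .describeConsumerGroups(Collections.singletonList("group-know-streaming-test"))
                    .all()
                    .get();

            ConsumerGroupDescription description = described.get("group-know-streaming-test");
            // This is the object the new Group(Long, String, ConsumerGroupDescription) constructor consumes.
            System.out.println(description.state() + ", members=" + description.members().size()
                    + ", assignor=" + description.partitionAssignor()
                    + ", coordinator=" + description.coordinator().id());
        }
    }
}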

View File

@@ -0,0 +1,27 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.group;
import lombok.Data;
import lombok.NoArgsConstructor;
/**
* @author wyb
* @date 2022/10/10
*/
@Data
@NoArgsConstructor
public class GroupTopicMember {
/**
* Topic名称
*/
private String topicName;
/**
* 消费此Topic的成员数量
*/
private Integer memberCount;
public GroupTopicMember(String topicName, Integer memberCount) {
this.topicName = topicName;
this.memberCount = memberCount;
}
}

View File

@@ -0,0 +1,28 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.metrics;
import lombok.Data;
import lombok.ToString;
/**
* @author zengqiao
* @date 20/6/17
*/
@Data
@ToString
public class ZookeeperMetrics extends BaseMetrics {
public ZookeeperMetrics(Long clusterPhyId) {
super(clusterPhyId);
}
public static ZookeeperMetrics initWithMetric(Long clusterPhyId, String metric, Float value) {
ZookeeperMetrics metrics = new ZookeeperMetrics(clusterPhyId);
metrics.setClusterPhyId( clusterPhyId );
metrics.putMetric(metric, value);
return metrics;
}
@Override
public String unique() {
return "ZK@" + clusterPhyId;
}
}
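Usage follows the pattern seen in ZookeeperMetricCollector: one ZookeeperMetrics object per cluster, filled metric by metric and keyed by unique(). A compact sketch (the metric name is illustrative):

import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.ZookeeperMetrics;

public class ZkMetricsUsageSketch {
    public static void main(String[] args) {
        // Initialize with a placeholder value, then overwrite once the real cost is known.
        ZookeeperMetrics metrics = ZookeeperMetrics.initWithMetric(1L, "CollectCostTime", -1f);
        metrics.putMetric("CollectCostTime", 0.42f);
        System.out.println(metrics.unique()); // "ZK@1" -- one metrics object per cluster
    }
}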

View File

@@ -0,0 +1,47 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.param.metric;
import com.xiaojukeji.know.streaming.km.common.bean.entity.config.ZKConfig;
import com.xiaojukeji.know.streaming.km.common.utils.Tuple;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.util.List;
/**
* @author didi
*/
@Data
@NoArgsConstructor
public class ZookeeperMetricParam extends MetricParam {
private Long clusterPhyId;
private List<Tuple<String, Integer>> zkAddressList;
private ZKConfig zkConfig;
private String metricName;
private Integer kafkaControllerId;
public ZookeeperMetricParam(Long clusterPhyId,
List<Tuple<String, Integer>> zkAddressList,
ZKConfig zkConfig,
String metricName) {
this.clusterPhyId = clusterPhyId;
this.zkAddressList = zkAddressList;
this.zkConfig = zkConfig;
this.metricName = metricName;
}
public ZookeeperMetricParam(Long clusterPhyId,
List<Tuple<String, Integer>> zkAddressList,
ZKConfig zkConfig,
Integer kafkaControllerId,
String metricName) {
this.clusterPhyId = clusterPhyId;
this.zkAddressList = zkAddressList;
this.zkConfig = zkConfig;
this.kafkaControllerId = kafkaControllerId;
this.metricName = metricName;
}
}

View File

@@ -56,6 +56,7 @@ public enum ResultStatus {
KAFKA_OPERATE_FAILED(8010, "Kafka操作失败"), KAFKA_OPERATE_FAILED(8010, "Kafka操作失败"),
MYSQL_OPERATE_FAILED(8020, "MySQL操作失败"), MYSQL_OPERATE_FAILED(8020, "MySQL操作失败"),
ZK_OPERATE_FAILED(8030, "ZK操作失败"), ZK_OPERATE_FAILED(8030, "ZK操作失败"),
ZK_FOUR_LETTER_CMD_FORBIDDEN(8031, "ZK四字命令被禁止"),
ES_OPERATE_ERROR(8040, "ES操作失败"), ES_OPERATE_ERROR(8040, "ES操作失败"),
HTTP_REQ_ERROR(8050, "第三方http请求异常"), HTTP_REQ_ERROR(8050, "第三方http请求异常"),

View File

@@ -23,6 +23,8 @@ public class VersionMetricControlItem extends VersionControlItem{
public static final String CATEGORY_PERFORMANCE = "Performance"; public static final String CATEGORY_PERFORMANCE = "Performance";
public static final String CATEGORY_FLOW = "Flow"; public static final String CATEGORY_FLOW = "Flow";
public static final String CATEGORY_CLIENT = "Client";
/** /**
* 指标单位名称,非指标的没有 * 指标单位名称,非指标的没有
*/ */

View File

@@ -0,0 +1,19 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper;
import com.xiaojukeji.know.streaming.km.common.utils.Tuple;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import org.apache.zookeeper.data.Stat;
@Data
public class Znode {
@ApiModelProperty(value = "节点名称", example = "broker")
private String name;
@ApiModelProperty(value = "节点数据", example = "saassad")
private String data;
@ApiModelProperty(value = "节点属性", example = "")
private Stat stat;
}

View File

@@ -0,0 +1,42 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper;
import com.xiaojukeji.know.streaming.km.common.bean.entity.BaseEntity;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import lombok.Data;
@Data
public class ZookeeperInfo extends BaseEntity {
/**
* 集群Id
*/
private Long clusterPhyId;
/**
* 主机
*/
private String host;
/**
* 端口
*/
private Integer port;
/**
* 角色
*/
private String role;
/**
* 版本
*/
private String version;
/**
* ZK状态
*/
private Integer status;
public boolean alive() {
return !(Constant.DOWN.equals(status));
}
}

View File

@@ -0,0 +1,9 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.fourletterword;
import java.io.Serializable;
/**
* Base class for four-letter-word command result data
*/
public class BaseFourLetterWordCmdData implements Serializable {
}

View File

@@ -0,0 +1,38 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.fourletterword;
import lombok.Data;
/**
* clientPort=2183
* dataDir=/data1/data/zkData2/version-2
* dataLogDir=/data1/data/zkLog2/version-2
* tickTime=2000
* maxClientCnxns=60
* minSessionTimeout=4000
* maxSessionTimeout=40000
* serverId=2
* initLimit=15
* syncLimit=10
* electionAlg=3
* electionPort=4445
* quorumPort=4444
* peerType=0
*/
@Data
public class ConfigCmdData extends BaseFourLetterWordCmdData {
private Long clientPort;
private String dataDir;
private String dataLogDir;
private Long tickTime;
private Long maxClientCnxns;
private Long minSessionTimeout;
private Long maxSessionTimeout;
private Integer serverId;
private String initLimit;
private Long syncLimit;
private Long electionAlg;
private Long electionPort;
private Long quorumPort;
private Long peerType;
}

View File

@@ -0,0 +1,39 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.fourletterword;
import lombok.Data;
/**
* zk_version 3.4.6-1569965, built on 02/20/2014 09:09 GMT
* zk_avg_latency 0
* zk_max_latency 399
* zk_min_latency 0
* zk_packets_received 234857
* zk_packets_sent 234860
* zk_num_alive_connections 4
* zk_outstanding_requests 0
* zk_server_state follower
* zk_znode_count 35566
* zk_watch_count 39
* zk_ephemerals_count 10
* zk_approximate_data_size 3356708
* zk_open_file_descriptor_count 35
* zk_max_file_descriptor_count 819200
*/
@Data
public class MonitorCmdData extends BaseFourLetterWordCmdData {
private String zkVersion;
private Float zkAvgLatency;
private Long zkMaxLatency;
private Long zkMinLatency;
private Long zkPacketsReceived;
private Long zkPacketsSent;
private Long zkNumAliveConnections;
private Long zkOutstandingRequests;
private String zkServerState;
private Long zkZnodeCount;
private Long zkWatchCount;
private Long zkEphemeralsCount;
private Long zkApproximateDataSize;
private Long zkOpenFileDescriptorCount;
private Long zkMaxFileDescriptorCount;
}

View File

@@ -0,0 +1,30 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.fourletterword;
import lombok.Data;
/**
* Zookeeper version: 3.5.9-83df9301aa5c2a5d284a9940177808c01bc35cef, built on 01/06/2021 19:49 GMT
* Latency min/avg/max: 0/0/2209
* Received: 278202469
* Sent: 279449055
* Connections: 31
* Outstanding: 0
* Zxid: 0x20033fc12
* Mode: leader
* Node count: 10084
* Proposal sizes last/min/max: 36/32/31260 (leader only)
*/
@Data
public class ServerCmdData extends BaseFourLetterWordCmdData {
private String zkVersion;
private Float zkAvgLatency;
private Long zkMaxLatency;
private Long zkMinLatency;
private Long zkPacketsReceived;
private Long zkPacketsSent;
private Long zkNumAliveConnections;
private Long zkOutstandingRequests;
private String zkServerState;
private Long zkZnodeCount;
private Long zkZxid;
}

View File

@@ -0,0 +1,116 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.fourletterword.parser;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.fourletterword.ConfigCmdData;
import com.xiaojukeji.know.streaming.km.common.utils.zookeeper.FourLetterWordUtil;
import lombok.Data;
import java.util.HashMap;
import java.util.Map;
/**
* clientPort=2183
* dataDir=/data1/data/zkData2/version-2
* dataLogDir=/data1/data/zkLog2/version-2
* tickTime=2000
* maxClientCnxns=60
* minSessionTimeout=4000
* maxSessionTimeout=40000
* serverId=2
* initLimit=15
* syncLimit=10
* electionAlg=3
* electionPort=4445
* quorumPort=4444
* peerType=0
*/
@Data
public class ConfigCmdDataParser implements FourLetterWordDataParser<ConfigCmdData> {
private static final ILog LOGGER = LogFactory.getLog(ConfigCmdDataParser.class);
private Result<ConfigCmdData> dataResult = null;
@Override
public String getCmd() {
return FourLetterWordUtil.ConfigCmd;
}
@Override
public ConfigCmdData parseAndInitData(Long clusterPhyId, String host, int port, String cmdData) {
Map<String, String> dataMap = new HashMap<>();
for (String elem : cmdData.split("\n")) {
if (elem.isEmpty()) {
continue;
}
int idx = elem.indexOf('=');
if (idx >= 0) {
dataMap.put(elem.substring(0, idx), elem.substring(idx + 1).trim());
}
}
ConfigCmdData configCmdData = new ConfigCmdData();
dataMap.entrySet().stream().forEach(elem -> {
try {
switch (elem.getKey()) {
case "clientPort":
configCmdData.setClientPort(Long.valueOf(elem.getValue()));
break;
case "dataDir":
configCmdData.setDataDir(elem.getValue());
break;
case "dataLogDir":
configCmdData.setDataLogDir(elem.getValue());
break;
case "tickTime":
configCmdData.setTickTime(Long.valueOf(elem.getValue()));
break;
case "maxClientCnxns":
configCmdData.setMaxClientCnxns(Long.valueOf(elem.getValue()));
break;
case "minSessionTimeout":
configCmdData.setMinSessionTimeout(Long.valueOf(elem.getValue()));
break;
case "maxSessionTimeout":
configCmdData.setMaxSessionTimeout(Long.valueOf(elem.getValue()));
break;
case "serverId":
configCmdData.setServerId(Integer.valueOf(elem.getValue()));
break;
case "initLimit":
configCmdData.setInitLimit(elem.getValue());
break;
case "syncLimit":
configCmdData.setSyncLimit(Long.valueOf(elem.getValue()));
break;
case "electionAlg":
configCmdData.setElectionAlg(Long.valueOf(elem.getValue()));
break;
case "electionPort":
configCmdData.setElectionPort(Long.valueOf(elem.getValue()));
break;
case "quorumPort":
configCmdData.setQuorumPort(Long.valueOf(elem.getValue()));
break;
case "peerType":
configCmdData.setPeerType(Long.valueOf(elem.getValue()));
break;
default:
LOGGER.warn(
"class=ConfigCmdDataParser||method=parseAndInitData||name={}||value={}||msg=data not parsed!",
elem.getKey(), elem.getValue()
);
}
} catch (Exception e) {
LOGGER.error(
"class=ConfigCmdDataParser||method=parseAndInitData||clusterPhyId={}||host={}||port={}||name={}||value={}||errMsg=exception!",
clusterPhyId, host, port, elem.getKey(), elem.getValue(), e
);
}
});
return configCmdData;
}
}

View File

@@ -0,0 +1,10 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.fourletterword.parser;
/**
* Parser for four-letter-word command results
*/
public interface FourLetterWordDataParser<T> {
String getCmd();
T parseAndInitData(Long clusterPhyId, String host, int port, String cmdData);
}
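Each parser receives the raw text of a ZooKeeper four-letter command ("conf", "mntr", "srvr", ...). A hedged sketch of fetching that text over a plain socket, the moral equivalent of `echo mntr | nc host 2181`; the project's FourLetterWordUtil presumably does something similar. Note that ZooKeeper 3.5+ only answers commands listed in 4lw.commands.whitelist, which is what the new ZK_FOUR_LETTER_CMD_FORBIDDEN result status reports:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class FourLetterWordClientSketch {
    // Send a four-letter command (e.g. "mntr", "srvr", "conf") and return the raw response text.
    static String execute(String host, int port, String cmd, int timeoutMs) throws Exception {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            socket.setSoTimeout(timeoutMs);

            OutputStream out = socket.getOutputStream();
            out.write(cmd.getBytes(StandardCharsets.US_ASCII));
            out.flush();

            StringBuilder sb = new StringBuilder();
            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(socket.getInputStream(), StandardCharsets.UTF_8))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    sb.append(line).append('\n');
                }
            }
            return sb.toString();
        }
    }

    public static void main(String[] args) throws Exception {
        // The response is the cmdData string handed to parseAndInitData(clusterPhyId, host, port, cmdData).
        System.out.println(execute("127.0.0.1", 2181, "mntr", 5000));
    }
}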

View File

@@ -0,0 +1,117 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.fourletterword.parser;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.fourletterword.MonitorCmdData;
import com.xiaojukeji.know.streaming.km.common.utils.zookeeper.FourLetterWordUtil;
import lombok.Data;
import java.util.HashMap;
import java.util.Map;
/**
* zk_version 3.4.6-1569965, built on 02/20/2014 09:09 GMT
* zk_avg_latency 0
* zk_max_latency 399
* zk_min_latency 0
* zk_packets_received 234857
* zk_packets_sent 234860
* zk_num_alive_connections 4
* zk_outstanding_requests 0
* zk_server_state follower
* zk_znode_count 35566
* zk_watch_count 39
* zk_ephemerals_count 10
* zk_approximate_data_size 3356708
* zk_open_file_descriptor_count 35
* zk_max_file_descriptor_count 819200
*/
@Data
public class MonitorCmdDataParser implements FourLetterWordDataParser<MonitorCmdData> {
private static final ILog LOGGER = LogFactory.getLog(MonitorCmdDataParser.class);
@Override
public String getCmd() {
return FourLetterWordUtil.MonitorCmd;
}
@Override
public MonitorCmdData parseAndInitData(Long clusterPhyId, String host, int port, String cmdData) {
Map<String, String> dataMap = new HashMap<>();
for (String elem : cmdData.split("\n")) {
if (elem.isEmpty()) {
continue;
}
int idx = elem.indexOf('\t');
if (idx >= 0) {
dataMap.put(elem.substring(0, idx), elem.substring(idx + 1).trim());
}
}
MonitorCmdData monitorCmdData = new MonitorCmdData();
dataMap.entrySet().stream().forEach(elem -> {
try {
switch (elem.getKey()) {
case "zk_version":
monitorCmdData.setZkVersion(elem.getValue().split("-")[0]);
break;
case "zk_avg_latency":
monitorCmdData.setZkAvgLatency(Float.valueOf(elem.getValue()));
break;
case "zk_max_latency":
monitorCmdData.setZkMaxLatency(Long.valueOf(elem.getValue()));
break;
case "zk_min_latency":
monitorCmdData.setZkMinLatency(Long.valueOf(elem.getValue()));
break;
case "zk_packets_received":
monitorCmdData.setZkPacketsReceived(Long.valueOf(elem.getValue()));
break;
case "zk_packets_sent":
monitorCmdData.setZkPacketsSent(Long.valueOf(elem.getValue()));
break;
case "zk_num_alive_connections":
monitorCmdData.setZkNumAliveConnections(Long.valueOf(elem.getValue()));
break;
case "zk_outstanding_requests":
monitorCmdData.setZkOutstandingRequests(Long.valueOf(elem.getValue()));
break;
case "zk_server_state":
monitorCmdData.setZkServerState(elem.getValue());
break;
case "zk_znode_count":
monitorCmdData.setZkZnodeCount(Long.valueOf(elem.getValue()));
break;
case "zk_watch_count":
monitorCmdData.setZkWatchCount(Long.valueOf(elem.getValue()));
break;
case "zk_ephemerals_count":
monitorCmdData.setZkEphemeralsCount(Long.valueOf(elem.getValue()));
break;
case "zk_approximate_data_size":
monitorCmdData.setZkApproximateDataSize(Long.valueOf(elem.getValue()));
break;
case "zk_open_file_descriptor_count":
monitorCmdData.setZkOpenFileDescriptorCount(Long.valueOf(elem.getValue()));
break;
case "zk_max_file_descriptor_count":
monitorCmdData.setZkMaxFileDescriptorCount(Long.valueOf(elem.getValue()));
break;
default:
LOGGER.warn(
"class=MonitorCmdDataParser||method=parseAndInitData||name={}||value={}||msg=data not parsed!",
elem.getKey(), elem.getValue()
);
}
} catch (Exception e) {
LOGGER.error(
"class=MonitorCmdDataParser||method=parseAndInitData||clusterPhyId={}||host={}||port={}||name={}||value={}||errMsg=exception!",
clusterPhyId, host, port, elem.getKey(), elem.getValue(), e
);
}
});
return monitorCmdData;
}
}
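A quick usage example for this parser, feeding in a few tab-separated lines of the mntr output quoted in the class javadoc:

import com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.fourletterword.MonitorCmdData;
import com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.fourletterword.parser.MonitorCmdDataParser;

public class MonitorParseSketch {
    public static void main(String[] args) {
        // Sample lines from the "mntr" output above; fields are tab-separated.
        String cmdData = "zk_avg_latency\t0\n"
                       + "zk_server_state\tfollower\n"
                       + "zk_znode_count\t35566\n";

        MonitorCmdData data = new MonitorCmdDataParser().parseAndInitData(1L, "127.0.0.1", 2181, cmdData);
        System.out.println(data.getZkAvgLatency());   // 0.0
        System.out.println(data.getZkServerState());  // follower
        System.out.println(data.getZkZnodeCount());   // 35566
    }
}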

View File

@@ -0,0 +1,97 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.fourletterword.parser;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.fourletterword.ServerCmdData;
import com.xiaojukeji.know.streaming.km.common.utils.zookeeper.FourLetterWordUtil;
import lombok.Data;
import java.util.HashMap;
import java.util.Map;
/**
* Zookeeper version: 3.5.9-83df9301aa5c2a5d284a9940177808c01bc35cef, built on 01/06/2021 19:49 GMT
* Latency min/avg/max: 0/0/2209
* Received: 278202469
* Sent: 279449055
* Connections: 31
* Outstanding: 0
* Zxid: 0x20033fc12
* Mode: leader
* Node count: 10084
* Proposal sizes last/min/max: 36/32/31260 (leader only)
*/
@Data
public class ServerCmdDataParser implements FourLetterWordDataParser<ServerCmdData> {
private static final ILog LOGGER = LogFactory.getLog(ServerCmdDataParser.class);
@Override
public String getCmd() {
return FourLetterWordUtil.ServerCmd;
}
@Override
public ServerCmdData parseAndInitData(Long clusterPhyId, String host, int port, String cmdData) {
Map<String, String> dataMap = new HashMap<>();
for (String elem : cmdData.split("\n")) {
if (elem.isEmpty()) {
continue;
}
int idx = elem.indexOf(':');
if (idx >= 0) {
dataMap.put(elem.substring(0, idx), elem.substring(idx + 1).trim());
}
}
ServerCmdData serverCmdData = new ServerCmdData();
dataMap.entrySet().stream().forEach(elem -> {
try {
switch (elem.getKey()) {
case "Zookeeper version":
serverCmdData.setZkVersion(elem.getValue().split("-")[0]);
break;
case "Latency min/avg/max":
String[] data = elem.getValue().split("/");
serverCmdData.setZkMinLatency(Long.valueOf(data[0]));
serverCmdData.setZkAvgLatency(Float.valueOf(data[1]));
serverCmdData.setZkMaxLatency(Long.valueOf(data[2]));
break;
case "Received":
serverCmdData.setZkPacketsReceived(Long.valueOf(elem.getValue()));
break;
case "Sent":
serverCmdData.setZkPacketsSent(Long.valueOf(elem.getValue()));
break;
case "Connections":
serverCmdData.setZkNumAliveConnections(Long.valueOf(elem.getValue()));
break;
case "Outstanding":
serverCmdData.setZkOutstandingRequests(Long.valueOf(elem.getValue()));
break;
case "Mode":
serverCmdData.setZkServerState(elem.getValue());
break;
case "Node count":
serverCmdData.setZkZnodeCount(Long.valueOf(elem.getValue()));
break;
case "Zxid":
serverCmdData.setZkZxid(Long.parseUnsignedLong(elem.getValue().trim().substring(2), 16));
break;
default:
LOGGER.warn(
"class=ServerCmdDataParser||method=parseAndInitData||name={}||value={}||msg=data not parsed!",
elem.getKey(), elem.getValue()
);
}
} catch (Exception e) {
LOGGER.error(
"class=ServerCmdDataParser||method=parseAndInitData||clusterPhyId={}||host={}||port={}||name={}||value={}||errMsg=exception!",
clusterPhyId, host, port, elem.getKey(), elem.getValue(), e
);
}
});
return serverCmdData;
}
}
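The Zxid branch strips the "0x" prefix and parses the remainder as an unsigned base-16 long. A two-line check using the sample value from the javadoc above:

public class ZxidParseSketch {
    public static void main(String[] args) {
        String zxid = "0x20033fc12";                          // sample value from the srvr output above
        long parsed = Long.parseUnsignedLong(zxid.substring(2), 16);
        System.out.println(parsed);                           // 8593341458
        System.out.println("0x" + Long.toHexString(parsed));  // round-trips back to 0x20033fc12
    }
}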

View File

@@ -8,8 +8,6 @@ import org.springframework.context.ApplicationEvent;
*/ */
@Getter @Getter
public class BaseMetricEvent extends ApplicationEvent { public class BaseMetricEvent extends ApplicationEvent {
public BaseMetricEvent(Object source) { public BaseMetricEvent(Object source) {
super( source ); super( source );
} }

View File

@@ -0,0 +1,20 @@
package com.xiaojukeji.know.streaming.km.common.bean.event.metric;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.ZookeeperMetrics;
import lombok.Getter;
import java.util.List;
/**
* @author didi
*/
@Getter
public class ZookeeperMetricEvent extends BaseMetricEvent {
private List<ZookeeperMetrics> zookeeperMetrics;
public ZookeeperMetricEvent(Object source, List<ZookeeperMetrics> zookeeperMetrics) {
super( source );
this.zookeeperMetrics = zookeeperMetrics;
}
}

View File

@@ -3,7 +3,6 @@ package com.xiaojukeji.know.streaming.km.common.bean.po.group;
import com.baomidou.mybatisplus.annotation.TableName; import com.baomidou.mybatisplus.annotation.TableName;
import com.xiaojukeji.know.streaming.km.common.bean.po.BasePO; import com.xiaojukeji.know.streaming.km.common.bean.po.BasePO;
import com.xiaojukeji.know.streaming.km.common.constant.Constant; import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.enums.group.GroupStateEnum;
import lombok.Data; import lombok.Data;
import lombok.NoArgsConstructor; import lombok.NoArgsConstructor;
@@ -23,12 +22,19 @@ public class GroupMemberPO extends BasePO {
private Integer memberCount; private Integer memberCount;
public GroupMemberPO(Long clusterPhyId, String topicName, String groupName, Date updateTime) { public GroupMemberPO(Long clusterPhyId, String topicName, String groupName, String state, Integer memberCount) {
this.clusterPhyId = clusterPhyId; this.clusterPhyId = clusterPhyId;
this.topicName = topicName; this.topicName = topicName;
this.groupName = groupName; this.groupName = groupName;
this.state = GroupStateEnum.UNKNOWN.getState(); this.state = state;
this.memberCount = 0; this.memberCount = memberCount;
}
public GroupMemberPO(Long clusterPhyId, String topicName, String groupName, String state, Integer memberCount, Date updateTime) {
this.clusterPhyId = clusterPhyId;
this.topicName = topicName;
this.groupName = groupName;
this.state = state;
this.memberCount = memberCount;
this.updateTime = updateTime; this.updateTime = updateTime;
} }
} }

View File

@@ -0,0 +1,61 @@
package com.xiaojukeji.know.streaming.km.common.bean.po.group;
import com.baomidou.mybatisplus.annotation.TableName;
import com.xiaojukeji.know.streaming.km.common.bean.po.BasePO;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.enums.group.GroupStateEnum;
import com.xiaojukeji.know.streaming.km.common.enums.group.GroupTypeEnum;
import lombok.Data;
import lombok.NoArgsConstructor;
@Data
@NoArgsConstructor
@TableName(Constant.MYSQL_TABLE_NAME_PREFIX + "group")
public class GroupPO extends BasePO {
/**
* 集群id
*/
private Long clusterPhyId;
/**
* group类型
*
* @see GroupTypeEnum
*/
private Integer type;
/**
* group名称
*/
private String name;
/**
* group状态
*
* @see GroupStateEnum
*/
private String state;
/**
* group成员数量
*/
private Integer memberCount;
/**
* group消费的topic列表
*/
private String topicMembers;
/**
* group分配策略
*/
private String partitionAssignor;
/**
* group协调器brokerId
*/
private int coordinatorId;
}

View File

@@ -0,0 +1,24 @@
package com.xiaojukeji.know.streaming.km.common.bean.po.metrice;
import lombok.Data;
import lombok.NoArgsConstructor;
import static com.xiaojukeji.know.streaming.km.common.utils.CommonUtils.monitorTimestamp2min;
@Data
@NoArgsConstructor
public class ZookeeperMetricPO extends BaseMetricESPO {
public ZookeeperMetricPO(Long clusterPhyId){
super(clusterPhyId);
}
@Override
public String getKey() {
return "ZK@" + clusterPhyId + "@" + monitorTimestamp2min(timestamp);
}
@Override
public String getRoutingValue() {
return String.valueOf(clusterPhyId);
}
}

View File

@@ -0,0 +1,40 @@
package com.xiaojukeji.know.streaming.km.common.bean.po.zookeeper;
import com.baomidou.mybatisplus.annotation.TableName;
import com.xiaojukeji.know.streaming.km.common.bean.po.BasePO;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import lombok.Data;
@Data
@TableName(Constant.MYSQL_TABLE_NAME_PREFIX + "zookeeper")
public class ZookeeperInfoPO extends BasePO {
/**
* 集群Id
*/
private Long clusterPhyId;
/**
* 主机
*/
private String host;
/**
* 端口
*/
private Integer port;
/**
* 角色
*/
private String role;
/**
* 版本
*/
private String version;
/**
* ZK状态
*/
private Integer status;
}

View File

@@ -31,6 +31,9 @@ public class ClusterBrokersOverviewVO extends BrokerMetadataVO {
@ApiModelProperty(value = "jmx端口") @ApiModelProperty(value = "jmx端口")
private Integer jmxPort; private Integer jmxPort;
@ApiModelProperty(value = "jmx连接状态 true:连接成功 false:连接失败")
private Boolean jmxConnected;
@ApiModelProperty(value = "是否存活 true存活 false不存活") @ApiModelProperty(value = "是否存活 true存活 false不存活")
private Boolean alive; private Boolean alive;
} }

View File

@@ -0,0 +1,27 @@
package com.xiaojukeji.know.streaming.km.common.bean.vo.group;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import java.util.List;
/**
* @author wyb
* @date 2022/10/9
*/
@Data
@ApiModel(value = "Group信息")
public class GroupOverviewVO {
@ApiModelProperty(value = "Group名称", example = "group-know-streaming-test")
private String name;
@ApiModelProperty(value = "Group状态", example = "Empty")
private String state;
@ApiModelProperty(value = "group的成员数", example = "12")
private Integer memberCount;
@ApiModelProperty(value = "Topic列表", example = "[topic1,topic2]")
private List<String> topicNameList;
}

View File

@@ -10,7 +10,7 @@ import lombok.Data;
*/ */
@Data @Data
@ApiModel(value = "GroupTopic信息") @ApiModel(value = "GroupTopic信息")
public class GroupTopicOverviewVO extends GroupTopicBasicVO{ public class GroupTopicOverviewVO extends GroupTopicBasicVO {
@ApiModelProperty(value = "最大Lag", example = "12345678") @ApiModelProperty(value = "最大Lag", example = "12345678")
private Long maxLag; private Long maxLag;
} }

View File

@@ -1,16 +1,12 @@
package com.xiaojukeji.know.streaming.km.common.bean.vo.metrics.line; package com.xiaojukeji.know.streaming.km.common.bean.vo.metrics.line;
import com.xiaojukeji.know.streaming.km.common.bean.vo.metrics.point.MetricPointVO;
import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
import io.swagger.annotations.ApiModel; import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty; import io.swagger.annotations.ApiModelProperty;
import lombok.AllArgsConstructor; import lombok.AllArgsConstructor;
import lombok.Data; import lombok.Data;
import lombok.NoArgsConstructor; import lombok.NoArgsConstructor;
import java.util.ArrayList;
import java.util.List; import java.util.List;
import java.util.stream.Collectors;
/** /**
* @author didi * @author didi
@@ -26,19 +22,4 @@ public class MetricMultiLinesVO {
@ApiModelProperty(value = "指标名称对应的指标线") @ApiModelProperty(value = "指标名称对应的指标线")
private List<MetricLineVO> metricLines; private List<MetricLineVO> metricLines;
public List<MetricPointVO> getMetricPoints(String resName) {
if (ValidateUtils.isNull(metricLines)) {
return new ArrayList<>();
}
List<MetricLineVO> voList = metricLines.stream().filter(elem -> elem.getName().equals(resName)).collect(Collectors.toList());
if (ValidateUtils.isEmptyList(voList)) {
return new ArrayList<>();
}
// 仅获取idx=0的指标
return voList.get(0).getMetricPoints();
}
} }

View File

@@ -0,0 +1,29 @@
package com.xiaojukeji.know.streaming.km.common.bean.vo.zookeeper;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
/**
* @author wyc
* @date 2022/9/23
*/
@Data
@ApiModel(description = "Zookeeper信息概览")
public class ClusterZookeepersOverviewVO {
@ApiModelProperty(value = "主机ip", example = "121.0.0.1")
private String host;
@ApiModelProperty(value = "主机存活状态1Live0Down", example = "1")
private Integer status;
@ApiModelProperty(value = "端口号", example = "2416")
private Integer port;
@ApiModelProperty(value = "版本", example = "1.1.2")
private String version;
@ApiModelProperty(value = "角色", example = "Leader")
private String role;
}

View File

@@ -0,0 +1,47 @@
package com.xiaojukeji.know.streaming.km.common.bean.vo.zookeeper;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
/**
* @author wyc
* @date 2022/9/23
*/
@Data
@ApiModel(description = "ZK状态信息")
public class ClusterZookeepersStateVO {
@ApiModelProperty(value = "健康检查状态", example = "1")
private Integer healthState;
@ApiModelProperty(value = "健康检查通过数", example = "1")
private Integer healthCheckPassed;
@ApiModelProperty(value = "健康检查总数", example = "1")
private Integer healthCheckTotal;
@ApiModelProperty(value = "ZK的Leader机器", example = "127.0.0.1")
private String leaderNode;
@ApiModelProperty(value = "Watch数", example = "123456")
private Integer watchCount;
@ApiModelProperty(value = "节点存活数", example = "8")
private Integer aliveServerCount;
@ApiModelProperty(value = "总节点数", example = "10")
private Integer totalServerCount;
@ApiModelProperty(value = "Follower角色存活数", example = "8")
private Integer aliveFollowerCount;
@ApiModelProperty(value = "Follower角色总数", example = "10")
private Integer totalFollowerCount;
@ApiModelProperty(value = "Observer角色存活数", example = "3")
private Integer aliveObserverCount;
@ApiModelProperty(value = "Observer角色总数", example = "3")
private Integer totalObserverCount;
}

View File

@@ -0,0 +1,44 @@
package com.xiaojukeji.know.streaming.km.common.bean.vo.zookeeper;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
/**
* @author wyc
* @date 2022/9/23
*/
@Data
public class ZnodeStatVO {
@ApiModelProperty(value = "节点被创建时的事物的ID", example = "0x1f09")
private Long czxid;
@ApiModelProperty(value = "创建时间", example = "Sat Mar 16 15:38:34 CST 2019")
private Long ctime;
@ApiModelProperty(value = "节点最后一次被修改时的事物的ID", example = "0x1f09")
private Long mzxid;
@ApiModelProperty(value = "最后一次修改时间", example = "Sat Mar 16 15:38:34 CST 2019")
private Long mtime;
@ApiModelProperty(value = "子节点列表最近一次呗修改的事物ID", example = "0x31")
private Long pzxid;
@ApiModelProperty(value = "子节点版本号", example = "0")
private Integer cversion;
@ApiModelProperty(value = "数据版本号", example = "0")
private Integer version;
@ApiModelProperty(value = "ACL版本号", example = "0")
private Integer aversion;
@ApiModelProperty(value = "创建临时节点的事物ID持久节点事物为0", example = "0")
private Long ephemeralOwner;
@ApiModelProperty(value = "数据长度,每个节点都可保存数据", example = "22")
private Integer dataLength;
@ApiModelProperty(value = "子节点的个数", example = "6")
private Integer numChildren;
}

View File

@@ -0,0 +1,22 @@
package com.xiaojukeji.know.streaming.km.common.bean.vo.zookeeper;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
/**
* @author wyc
* @date 2022/9/23
*/
@Data
public class ZnodeVO {
@ApiModelProperty(value = "节点名称", example = "broker")
private String name;
@ApiModelProperty(value = "节点数据", example = "saassad")
private String data;
@ApiModelProperty(value = "节点属性", example = "")
private ZnodeStatVO stat;
}

View File

@@ -23,8 +23,8 @@ public class Constant {
public static final Integer YES = 1;
public static final Integer NO = 0;
public static final Integer ALIVE = 1;
public static final Integer DOWN = 0;
public static final Integer ONE_HUNDRED = 100;
@@ -33,6 +33,7 @@ public class Constant {
public static final Long B_TO_MB = 1024L * 1024L;
public static final Integer DEFAULT_SESSION_TIMEOUT_UNIT_MS = 15000;
+public static final Integer DEFAULT_REQUEST_TIMEOUT_UNIT_MS = 5000;
public static final Float MIN_HEALTH_SCORE = 10f;
@@ -42,6 +43,7 @@ public class Constant {
 */
public static final Integer DEFAULT_CLUSTER_HEALTH_SCORE = 90;
+public static final Integer PER_BATCH_MAX_VALUE = 100;
public static final String DEFAULT_USER_NAME = "know-streaming-app";
@@ -66,4 +68,5 @@ public class Constant {
public static final Integer DEFAULT_RETRY_TIME = 3;
+public static final Integer ZK_ALIVE_BUT_4_LETTER_FORBIDDEN = 11;
}

View File

@@ -34,6 +34,8 @@ public class ESConstant {
public static final String TOTAL = "total";
+public static final Integer DEFAULT_RETRY_TIME = 3;
private ESConstant() {
}
}

View File

@@ -558,7 +558,7 @@ public class ESIndexConstant {
public final static String REPLICATION_TEMPLATE = "{\n" +
" \"order\" : 10,\n" +
" \"index_patterns\" : [\n" +
-" \"ks_kafka_partition_metric*\"\n" +
+" \"ks_kafka_replication_metric*\"\n" +
" ],\n" +
" \"settings\" : {\n" +
" \"index\" : {\n" +
@@ -619,12 +619,13 @@ public class ESIndexConstant {
" }\n" +
" },\n" +
" \"aliases\" : { }\n" +
-" }[root@10-255-0-23 template]# cat ks_kafka_replication_metric\n" +
-"PUT _template/ks_kafka_replication_metric\n" +
-"{\n" +
+" }";
+public final static String ZOOKEEPER_INDEX = "ks_kafka_zookeeper_metric";
+public final static String ZOOKEEPER_TEMPLATE = "{\n" +
" \"order\" : 10,\n" +
" \"index_patterns\" : [\n" +
-" \"ks_kafka_replication_metric*\"\n" +
+" \"ks_kafka_zookeeper_metric*\"\n" +
" ],\n" +
" \"settings\" : {\n" +
" \"index\" : {\n" +
@@ -633,15 +634,76 @@ public class ESIndexConstant {
" },\n" +
" \"mappings\" : {\n" +
" \"properties\" : {\n" +
+" \"routingValue\" : {\n" +
+" \"type\" : \"text\",\n" +
+" \"fields\" : {\n" +
+" \"keyword\" : {\n" +
+" \"ignore_above\" : 256,\n" +
+" \"type\" : \"keyword\"\n" +
+" }\n" +
+" }\n" +
+" },\n" +
+" \"clusterPhyId\" : {\n" +
+" \"type\" : \"long\"\n" +
+" },\n" +
+" \"metrics\" : {\n" +
+" \"properties\" : {\n" +
+" \"AvgRequestLatency\" : {\n" +
+" \"type\" : \"double\"\n" +
+" },\n" +
+" \"MinRequestLatency\" : {\n" +
+" \"type\" : \"double\"\n" +
+" },\n" +
+" \"MaxRequestLatency\" : {\n" +
+" \"type\" : \"double\"\n" +
+" },\n" +
+" \"OutstandingRequests\" : {\n" +
+" \"type\" : \"double\"\n" +
+" },\n" +
+" \"NodeCount\" : {\n" +
+" \"type\" : \"double\"\n" +
+" },\n" +
+" \"WatchCount\" : {\n" +
+" \"type\" : \"double\"\n" +
+" },\n" +
+" \"NumAliveConnections\" : {\n" +
+" \"type\" : \"double\"\n" +
+" },\n" +
+" \"PacketsReceived\" : {\n" +
+" \"type\" : \"double\"\n" +
+" },\n" +
+" \"PacketsSent\" : {\n" +
+" \"type\" : \"double\"\n" +
+" },\n" +
+" \"EphemeralsCount\" : {\n" +
+" \"type\" : \"double\"\n" +
+" },\n" +
+" \"ApproximateDataSize\" : {\n" +
+" \"type\" : \"double\"\n" +
+" },\n" +
+" \"OpenFileDescriptorCount\" : {\n" +
+" \"type\" : \"double\"\n" +
+" },\n" +
+" \"MaxFileDescriptorCount\" : {\n" +
+" \"type\" : \"double\"\n" +
+" }\n" +
+" }\n" +
+" },\n" +
+" \"key\" : {\n" +
+" \"type\" : \"text\",\n" +
+" \"fields\" : {\n" +
+" \"keyword\" : {\n" +
+" \"ignore_above\" : 256,\n" +
+" \"type\" : \"keyword\"\n" +
+" }\n" +
+" }\n" +
+" },\n" +
" \"timestamp\" : {\n" +
" \"format\" : \"yyyy-MM-dd HH:mm:ss Z||yyyy-MM-dd HH:mm:ss||yyyy-MM-dd HH:mm:ss.SSS Z||yyyy-MM-dd HH:mm:ss.SSS||yyyy-MM-dd HH:mm:ss,SSS||yyyy/MM/dd HH:mm:ss||yyyy-MM-dd HH:mm:ss,SSS Z||yyyy/MM/dd HH:mm:ss,SSS Z||epoch_millis\",\n" +
-" \"index\" : true,\n" +
-" \"type\" : \"date\",\n" +
-" \"doc_values\" : true\n" +
+" \"type\" : \"date\"\n" +
" }\n" +
" }\n" +
" },\n" +
" \"aliases\" : { }\n" +
" }";
}
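For illustration, a minimal sketch of pushing the new ZOOKEEPER_TEMPLATE above to Elasticsearch with the low-level REST client. The standalone class name, the 127.0.0.1:9200 address and the direct PUT are assumptions made for the example, not how KnowStreaming necessarily registers its templates (in the project the ES address and credentials come from application.yml).

import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class ZookeeperTemplateRegistrar {
    public static void main(String[] args) throws Exception {
        // Assumed local ES address for the sketch only.
        try (RestClient client = RestClient.builder(new HttpHost("127.0.0.1", 9200, "http")).build()) {
            Request request = new Request("PUT", "/_template/" + ESIndexConstant.ZOOKEEPER_INDEX);
            request.setJsonEntity(ESIndexConstant.ZOOKEEPER_TEMPLATE); // the JSON body defined above
            Response response = client.performRequest(request);
            System.out.println(response.getStatusLine());
        }
    }
}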

View File

@@ -18,4 +18,14 @@ public class PaginationConstant {
 * 默认页大小
 */
public static final Integer DEFAULT_PAGE_SIZE = 10;
+/**
+ * group列表的默认排序规则
+ */
+public static final String DEFAULT_GROUP_SORTED_FIELD = "name";
+/**
+ * groupTopic列表的默认排序规则
+ */
+public static final String DEFAULT_GROUP_TOPIC_SORTED_FIELD = "topicName";
}

View File

@@ -0,0 +1,62 @@
package com.xiaojukeji.know.streaming.km.common.converter;
import com.xiaojukeji.know.streaming.km.common.bean.entity.group.Group;
import com.xiaojukeji.know.streaming.km.common.bean.entity.group.GroupTopicMember;
import com.xiaojukeji.know.streaming.km.common.bean.po.group.GroupPO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupOverviewVO;
import com.xiaojukeji.know.streaming.km.common.enums.group.GroupStateEnum;
import com.xiaojukeji.know.streaming.km.common.enums.group.GroupTypeEnum;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
import java.util.ArrayList;
import java.util.stream.Collectors;
/**
* @author wyb
* @date 2022/10/10
*/
public class GroupConverter {
private GroupConverter() {
}
public static GroupOverviewVO convert2GroupOverviewVO(Group group) {
GroupOverviewVO vo = ConvertUtil.obj2Obj(group, GroupOverviewVO.class);
vo.setState(group.getState().getState());
vo.setTopicNameList(group.getTopicMembers().stream().map(elem -> elem.getTopicName()).collect(Collectors.toList()));
return vo;
}
public static Group convert2Group(GroupPO po) {
if (po == null) {
return null;
}
Group group = ConvertUtil.obj2Obj(po, Group.class);
if (!ValidateUtils.isBlank(po.getTopicMembers())) {
group.setTopicMembers(ConvertUtil.str2ObjArrayByJson(po.getTopicMembers(), GroupTopicMember.class));
} else {
group.setTopicMembers(new ArrayList<>());
}
group.setType(GroupTypeEnum.getTypeByCode(po.getType()));
group.setState(GroupStateEnum.getByState(po.getState()));
return group;
}
public static GroupPO convert2GroupPO(Group group) {
if (group == null) {
return null;
}
GroupPO po = ConvertUtil.obj2Obj(group, GroupPO.class);
po.setTopicMembers(ConvertUtil.obj2Json(group.getTopicMembers()));
po.setType(group.getType().getCode());
po.setState(group.getState().getState());
return po;
}
}
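For illustration only (not part of this diff): a sketch of the PO/entity round-trip the converter above performs. It assumes GroupPO is a plain Lombok-style POJO with name/state/type/topicMembers setters and that GroupTopicMember carries a topicName field, which is how GroupConverter reads them; the field values are hypothetical.

// Hypothetical row as it would come from the database layer.
GroupPO po = new GroupPO();
po.setName("group-know-streaming-test");
po.setState("Empty");
po.setType(0);                                      // 0 => GroupTypeEnum.CONSUMER
po.setTopicMembers("[{\"topicName\":\"topic1\"}]"); // topic members stored as a JSON string

Group group = GroupConverter.convert2Group(po);              // JSON -> List<GroupTopicMember>, codes -> enums
GroupOverviewVO vo = GroupConverter.convert2GroupOverviewVO(group); // entity -> list-page VO
GroupPO back = GroupConverter.convert2GroupPO(group);        // enums -> codes, list -> JSON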

View File

@@ -0,0 +1,19 @@
package com.xiaojukeji.know.streaming.km.common.converter;
import com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.Znode;
import com.xiaojukeji.know.streaming.km.common.utils.Tuple;
import org.apache.zookeeper.data.Stat;
public class ZnodeConverter {
ZnodeConverter(){
}
public static Znode convert2Znode(Tuple<byte[], Stat> dataAndStat, String path) {
Znode znode = new Znode();
znode.setStat(dataAndStat.getV2());
znode.setData(dataAndStat.getV1() == null ? null : new String(dataAndStat.getV1()));
znode.setName(path.substring(path.lastIndexOf('/') + 1));
return znode;
}
}
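A small usage sketch of the converter above; the empty Stat is only a placeholder here — in practice the Tuple comes from a ZooKeeper getData()/exists() call, and the znode path is hypothetical.

// Hypothetical content for the znode /brokers/ids/0
Tuple<byte[], Stat> dataAndStat = new Tuple<>("{\"port\":9092}".getBytes(), new Stat());
Znode znode = ZnodeConverter.convert2Znode(dataAndStat, "/brokers/ids/0");
// znode.getName() -> "0", znode.getData() -> "{\"port\":9092}", znode.getStat() -> the raw Stat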

View File

@@ -0,0 +1,36 @@
package com.xiaojukeji.know.streaming.km.common.enums.group;
import lombok.Getter;
/**
* @author wyb
* @date 2022/10/11
*/
@Getter
public enum GroupTypeEnum {
UNKNOWN(-1, "Unknown"),
CONSUMER(0, "Consumer客户端的消费组"),
CONNECTOR(1, "Connector的消费组");
private final Integer code;
private final String msg;
GroupTypeEnum(Integer code, String msg) {
this.code = code;
this.msg = msg;
}
public static GroupTypeEnum getTypeByCode(Integer code) {
if (code == null) return UNKNOWN;
for (GroupTypeEnum groupTypeEnum : GroupTypeEnum.values()) {
if (groupTypeEnum.code.equals(code)) {
return groupTypeEnum;
}
}
return UNKNOWN;
}
}

View File

@@ -0,0 +1,31 @@
package com.xiaojukeji.know.streaming.km.common.enums.health;
import lombok.Getter;
/**
* 健康状态
*/
@Getter
public enum HealthStateEnum {
UNKNOWN(-1, "未知"),
GOOD(0, ""),
MEDIUM(1, ""),
POOR(2, ""),
DEAD(3, "宕机"),
;
private final int dimension;
private final String message;
HealthStateEnum(int dimension, String message) {
this.dimension = dimension;
this.message = message;
}
}

View File

@@ -9,7 +9,9 @@ public enum VersionItemTypeEnum {
METRIC_GROUP(102, "group_metric"),
METRIC_BROKER(103, "broker_metric"),
METRIC_PARTITION(104, "partition_metric"),
-METRIC_REPLICATION (105, "replication_metric"),
+METRIC_REPLICATION(105, "replication_metric"),
+METRIC_ZOOKEEPER(110, "zookeeper_metric"),
/**
 * 服务端查询

View File

@@ -0,0 +1,22 @@
package com.xiaojukeji.know.streaming.km.common.enums.zookeeper;
import lombok.Getter;
@Getter
public enum ZKRoleEnum {
LEADER("leader"),
FOLLOWER("follower"),
OBSERVER("observer"),
UNKNOWN("unknown"),
;
private final String role;
ZKRoleEnum(String role) {
this.role = role;
}
}

View File

@@ -22,6 +22,12 @@ public class JmxAttribute {
public static final String PERCENTILE_99 = "99thPercentile";
+public static final String MAX = "Max";
+public static final String MEAN = "Mean";
+public static final String MIN = "Min";
public static final String VALUE = "Value";
public static final String CONNECTION_COUNT = "connection-count";

View File

@@ -63,6 +63,12 @@ public class JmxName {
/*********************************************************** cluster ***********************************************************/
public static final String JMX_CLUSTER_PARTITION_UNDER_REPLICATED = "kafka.cluster:type=Partition,name=UnderReplicated";
+/*********************************************************** zookeeper ***********************************************************/
+public static final String JMX_ZK_REQUEST_LATENCY_MS = "kafka.server:type=ZooKeeperClientMetrics,name=ZooKeeperRequestLatencyMs";
+public static final String JMX_ZK_SYNC_CONNECTS_PER_SEC = "kafka.server:type=SessionExpireListener,name=ZooKeeperSyncConnectsPerSec";
+public static final String JMX_ZK_DISCONNECTORS_PER_SEC = "kafka.server:type=SessionExpireListener,name=ZooKeeperDisconnectsPerSec";
private JmxName() {
}
}

View File

@@ -389,4 +389,16 @@ public class ConvertUtil {
        }
        return null;
    }
+    public static Integer float2Integer(Float f) {
+        if (null == f) {
+            return null;
+        }
+        try {
+            return f.intValue();
+        } catch (Exception e) {
+            // ignore exception
+        }
+        return null;
+    }
}

View File

@@ -2,6 +2,7 @@ package com.xiaojukeji.know.streaming.km.common.utils;
import org.apache.commons.lang.StringUtils;
+import java.lang.reflect.Array;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
@@ -56,6 +57,18 @@ public class ValidateUtils {
        return false;
    }
+    public static <T> boolean isNotEmpty(T[] array) {
+        return !isEmpty(array);
+    }
+    public static boolean isEmpty(Object[] array) {
+        return getLength(array) == 0;
+    }
+    public static int getLength(Object array) {
+        return array == null ? 0 : Array.getLength(array);
+    }
    /**
     * 是空字符串
     */
@@ -65,7 +78,7 @@ public class ValidateUtils {
        } else if (isNull(seq1) || isNull(seq2) || seq1.size() != seq2.size()) {
            return false;
        }
-       for (Object elem: seq1) {
+       for (Object elem : seq1) {
            if (!seq2.contains(elem)) {
                return false;
            }

View File

@@ -0,0 +1,163 @@
package com.xiaojukeji.know.streaming.km.common.utils.zookeeper;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.ResultStatus;
import com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.fourletterword.parser.FourLetterWordDataParser;
import com.xiaojukeji.know.streaming.km.common.utils.BackoffUtils;
import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
import org.apache.zookeeper.common.ClientX509Util;
import org.apache.zookeeper.common.X509Exception;
import org.apache.zookeeper.common.X509Util;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.net.SocketTimeoutException;
import java.util.HashSet;
import java.util.Set;
public class FourLetterWordUtil {
private static final ILog LOGGER = LogFactory.getLog(FourLetterWordUtil.class);
public static final String MonitorCmd = "mntr";
public static final String ConfigCmd = "conf";
public static final String ServerCmd = "srvr";
private static final Set<String> supportedCommands = new HashSet<>();
public static <T> Result<T> executeFourLetterCmd(Long clusterPhyId,
String host,
int port,
boolean secure,
int timeout,
FourLetterWordDataParser<T> dataParser) {
try {
if (!supportedCommands.contains(dataParser.getCmd())) {
return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, String.format("ZK %s命令暂未进行支持", dataParser.getCmd()));
}
String cmdData = send4LetterWord(host, port, dataParser.getCmd(), secure, timeout);
if (cmdData.contains("not executed because it is not in the whitelist.")) {
return Result.buildFromRSAndMsg(ResultStatus.ZK_FOUR_LETTER_CMD_FORBIDDEN, cmdData);
}
if (ValidateUtils.isBlank(cmdData)) {
return Result.buildFromRSAndMsg(ResultStatus.ZK_OPERATE_FAILED, cmdData);
}
return Result.buildSuc(dataParser.parseAndInitData(clusterPhyId, host, port, cmdData));
} catch (Exception e) {
LOGGER.error(
"class=FourLetterWordUtil||method=executeFourLetterCmd||clusterPhyId={}||host={}||port={}||cmd={}||secure={}||timeout={}||errMsg=exception!",
clusterPhyId, host, port, dataParser.getCmd(), secure, timeout, e
);
return Result.buildFromRSAndMsg(ResultStatus.ZK_OPERATE_FAILED, e.getMessage());
}
}
/**************************************************** private method ****************************************************/
private static String send4LetterWord(
String host,
int port,
String cmd,
boolean secure,
int timeout) throws IOException, X509Exception.SSLContextException {
long startTime = System.currentTimeMillis();
LOGGER.info("connecting to {} {}", host, port);
Socket socket = null;
OutputStream outputStream = null;
BufferedReader bufferedReader = null;
try {
InetSocketAddress hostaddress = host != null
? new InetSocketAddress(host, port)
: new InetSocketAddress(InetAddress.getByName(null), port);
if (secure) {
LOGGER.info("using secure socket");
try (X509Util x509Util = new ClientX509Util()) {
SSLContext sslContext = x509Util.getDefaultSSLContext();
SSLSocketFactory socketFactory = sslContext.getSocketFactory();
SSLSocket sslSock = (SSLSocket) socketFactory.createSocket();
sslSock.connect(hostaddress, timeout);
sslSock.startHandshake();
socket = sslSock;
}
} else {
socket = new Socket();
socket.connect(hostaddress, timeout);
}
socket.setSoTimeout(timeout);
outputStream = socket.getOutputStream();
outputStream.write(cmd.getBytes());
outputStream.flush();
// 等待InputStream有数据
while (System.currentTimeMillis() - startTime <= timeout && socket.getInputStream().available() <= 0) {
BackoffUtils.backoff(10);
}
bufferedReader = new BufferedReader(new InputStreamReader(socket.getInputStream()));
StringBuilder sb = new StringBuilder();
String line;
while ((line = bufferedReader.readLine()) != null) {
sb.append(line).append("\n");
}
return sb.toString();
} catch (SocketTimeoutException e) {
throw new IOException("Exception while executing four letter word: " + cmd, e);
} finally {
if (outputStream != null) {
try {
outputStream.close();
} catch (IOException e) {
LOGGER.error(
"class=FourLetterWordUtil||method=send4LetterWord||host={}||port={}||cmd={}||secure={}||timeout={}||errMsg=exception!",
host, port, cmd, secure, timeout, e
);
}
}
if (bufferedReader != null) {
try {
bufferedReader.close();
} catch (IOException e) {
LOGGER.error(
"class=FourLetterWordUtil||method=send4LetterWord||host={}||port={}||cmd={}||secure={}||timeout={}||errMsg=exception!",
host, port, cmd, secure, timeout, e
);
}
}
if (socket != null) {
try {
socket.close();
} catch (IOException e) {
LOGGER.error(
"class=FourLetterWordUtil||method=send4LetterWord||host={}||port={}||cmd={}||secure={}||timeout={}||errMsg=exception!",
host, port, cmd, secure, timeout, e
);
}
}
}
}
static {
supportedCommands.add(MonitorCmd);
supportedCommands.add(ConfigCmd);
supportedCommands.add(ServerCmd);
}
}
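A hedged usage sketch of the utility above: a throw-away parser that returns the raw "mntr" output unchanged. It assumes FourLetterWordDataParser<T> exposes exactly the two methods the utility calls (getCmd and parseAndInitData) and can be subclassed anonymously; the project's real parsers turn the text into metric objects instead, and the cluster id, host and timeout below are placeholders.

FourLetterWordDataParser<String> rawParser = new FourLetterWordDataParser<String>() {
    @Override
    public String getCmd() {
        return FourLetterWordUtil.MonitorCmd;     // "mntr"
    }

    @Override
    public String parseAndInitData(Long clusterPhyId, String host, int port, String cmdData) {
        return cmdData;                           // keep the raw key/value lines
    }
};

// plain (non-TLS) socket, 5s timeout
Result<String> result = FourLetterWordUtil.executeFourLetterCmd(1L, "127.0.0.1", 2181, false, 5000, rawParser);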

View File

@@ -0,0 +1,59 @@
package com.xiaojukeji.know.streaming.km.common.utils.zookeeper;
import com.xiaojukeji.know.streaming.km.common.utils.Tuple;
import org.apache.zookeeper.client.ConnectStringParser;
import org.apache.zookeeper.common.NetUtils;
import java.util.ArrayList;
import java.util.List;
import static org.apache.zookeeper.common.StringUtils.split;
public class ZookeeperUtils {
private static final int DEFAULT_PORT = 2181;
/**
* 解析ZK地址
* @see ConnectStringParser
*/
public static List<Tuple<String, Integer>> connectStringParser(String connectString) throws Exception {
List<Tuple<String, Integer>> ipPortList = new ArrayList<>();
if (connectString == null) {
return ipPortList;
}
// parse out chroot, if any
int off = connectString.indexOf('/');
if (off >= 0) {
connectString = connectString.substring(0, off);
}
List<String> hostsList = split(connectString, ",");
for (String host : hostsList) {
int port = DEFAULT_PORT;
String[] hostAndPort = NetUtils.getIPV6HostAndPort(host);
if (hostAndPort.length != 0) {
host = hostAndPort[0];
if (hostAndPort.length == 2) {
port = Integer.parseInt(hostAndPort[1]);
}
} else {
int pidx = host.lastIndexOf(':');
if (pidx >= 0) {
// otherwise : is at the end of the string, ignore
if (pidx < host.length() - 1) {
port = Integer.parseInt(host.substring(pidx + 1));
}
host = host.substring(0, pidx);
}
}
ipPortList.add(new Tuple<>(host, port));
}
return ipPortList;
}
}
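Example of the parsing above with a made-up connect string: the chroot suffix is cut off at the first '/', and a host without an explicit port falls back to DEFAULT_PORT (2181).

List<Tuple<String, Integer>> servers =
        ZookeeperUtils.connectStringParser("zk-1:2181,zk-2:2182,zk-3/kafka");
// => [(zk-1, 2181), (zk-2, 2182), (zk-3, 2181)]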

View File

@@ -12241,9 +12241,9 @@
      "dev": true
    },
    "typescript": {
-     "version": "3.9.10",
-     "resolved": "https://registry.npmmirror.com/typescript/-/typescript-3.9.10.tgz",
-     "integrity": "sha512-w6fIxVE/H1PkLKcCPsFqKE7Kv7QUwhU8qQY2MueZXWx5cPZdwFupLgKK3vntcK98BtNHZtAF4LA/yl2a7k8R6Q==",
+     "version": "4.6.4",
+     "resolved": "https://registry.npmmirror.com/typescript/-/typescript-4.6.4.tgz",
+     "integrity": "sha512-9ia/jWHIEbo49HfjrLGfKbZSuWo9iTMwXO+Ca3pRsSpbsMbc7/IU8NKdCZVRRBafVPGnoJeFL76ZOAA84I9fEg==",
      "dev": true
    },
    "unbox-primitive": {

View File

@@ -95,7 +95,7 @@
    "react-router-dom": "5.2.1",
    "stats-webpack-plugin": "^0.7.0",
    "ts-loader": "^8.0.11",
-   "typescript": "^3.5.3",
+   "typescript": "4.6.4",
    "webpack": "^4.40.0",
    "webpack-bundle-analyzer": "^4.5.0",
    "webpack-cli": "^3.2.3",

View File

@@ -0,0 +1,54 @@
import React from 'react';
import { IconFont } from '@knowdesign/icons';
import { message } from 'knowdesign';
import { ArgsProps, ConfigOnClose } from 'knowdesign/es/basic/message';
type ConfigContent = React.ReactNode;
type ConfigDuration = number | (() => void);
type JointContent = ConfigContent | ArgsProps;
message.config({
top: 16,
});
function isArgsProps(content: JointContent): content is ArgsProps {
return Object.prototype.toString.call(content) === '[object Object]' && !!(content as ArgsProps).content;
}
const openMessage = (
type: 'info' | 'success' | 'warning' | 'error',
content: JointContent,
duration?: ConfigDuration,
onClose?: ConfigOnClose
) => {
if (isArgsProps(content)) {
message[type]({
icon: <IconFont type={`icon-${type}-circle`} />,
...content,
});
} else {
message[type]({
icon: <IconFont type={`icon-${type}-circle`} />,
content,
duration,
onClose,
});
}
};
const customMessage = {
info(content: JointContent, duration?: ConfigDuration, onClose?: ConfigOnClose) {
openMessage('info', content, duration, onClose);
},
success(content: JointContent, duration?: ConfigDuration, onClose?: ConfigOnClose) {
openMessage('success', content, duration, onClose);
},
warning(content: JointContent, duration?: ConfigDuration, onClose?: ConfigOnClose) {
openMessage('warning', content, duration, onClose);
},
error(content: JointContent, duration?: ConfigDuration, onClose?: ConfigOnClose) {
openMessage('error', content, duration, onClose);
},
};
export default customMessage;

View File

@@ -0,0 +1,33 @@
import React from 'react';
import { notification } from 'knowdesign';
import { ArgsProps } from 'knowdesign/es/basic/notification';
import { IconFont } from '@knowdesign/icons';
notification.config({
top: 16,
duration: 3,
});
const open = (type: 'info' | 'success' | 'warning' | 'error', content: ArgsProps) => {
notification[type]({
icon: <IconFont type={`icon-${type}-circle`} />,
...content,
});
};
const customNotification = {
info(content: ArgsProps) {
open('info', content);
},
success(content: ArgsProps) {
open('success', content);
},
warning(content: ArgsProps) {
open('warning', content);
},
error(content: ArgsProps) {
open('error', content);
},
};
export default customNotification;

View File

@@ -1,7 +1,8 @@
/* eslint-disable @typescript-eslint/ban-ts-comment */
// @ts-nocheck
-import { notification, Utils } from 'knowdesign';
+import { Utils } from 'knowdesign';
+import notification from '@src/components/Notification';
export const goLogin = () => {
  if (!window.location.pathname.toLowerCase().startsWith('/login')) {
@@ -37,10 +38,9 @@ serviceInstance.interceptors.response.use(
  (config: any) => {
    const res: { code: number; message: string; data: any } = config.data;
    if (res.code !== 0 && res.code !== 200) {
-     const desc = res.message;
      notification.error({
-       message: desc,
-       duration: 3,
+       message: '错误信息',
+       description: res.message,
      });
      throw res;
    }

View File

@@ -1,20 +1,6 @@
import React, { forwardRef, useCallback, useEffect, useImperativeHandle, useRef, useState } from 'react';
-import {
-  Button,
-  Form,
-  Input,
-  Select,
-  Switch,
-  Modal,
-  message,
-  ProTable,
-  Drawer,
-  Space,
-  Divider,
-  Tooltip,
-  AppContainer,
-  Utils,
-} from 'knowdesign';
+import { Button, Form, Input, Select, Switch, Modal, ProTable, Drawer, Space, Divider, Tooltip, AppContainer, Utils } from 'knowdesign';
+import message from '@src/components/Message';
import { IconFont } from '@knowdesign/icons';
import { PlusOutlined } from '@ant-design/icons';
import moment from 'moment';
@@ -81,7 +67,7 @@ const EditConfigDrawer = forwardRef((_, ref) => {
        // 如果内容可以格式化为 JSON进行处理
        config.value = JSON.stringify(JSON.parse(config.value), null, 2);
      } catch (_) {
-       return;
+       //
      }
    }
    form.setFieldsValue({ ...config, status: config.status === 1 });
@@ -476,7 +462,7 @@ export default () => {
    rowKey: 'id',
    dataSource: data,
    paginationProps: pagination,
-   columns,
+   columns: columns as any,
    lineFillColor: true,
    attrs: {
      onChange: onTableChange,

View File

@@ -11,7 +11,6 @@ import {
  Transfer,
  Row,
  Col,
- message,
  Tooltip,
  Spin,
  AppContainer,
@@ -19,6 +18,7 @@ import {
  Popover,
  IconFont,
} from 'knowdesign';
+import message from '@src/components/Message';
import moment from 'moment';
import { LoadingOutlined, PlusOutlined } from '@ant-design/icons';
import { defaultPagination } from '@src/constants/common';

View File

@@ -1,5 +1,6 @@
import React, { forwardRef, useCallback, useEffect, useImperativeHandle, useRef, useState } from 'react';
-import { Form, ProTable, Select, Button, Input, Modal, message, Drawer, Space, Divider, AppContainer, Utils } from 'knowdesign';
+import { Form, ProTable, Select, Button, Input, Modal, Drawer, Space, Divider, AppContainer, Utils } from 'knowdesign';
+import message from '@src/components/Message';
import { IconFont } from '@knowdesign/icons';
import { PlusOutlined, QuestionCircleOutlined } from '@ant-design/icons';
import moment from 'moment';

View File

@@ -13341,9 +13341,9 @@
      "dev": true
    },
    "typescript": {
-     "version": "3.9.10",
-     "resolved": "https://registry.npmmirror.com/typescript/-/typescript-3.9.10.tgz",
-     "integrity": "sha512-w6fIxVE/H1PkLKcCPsFqKE7Kv7QUwhU8qQY2MueZXWx5cPZdwFupLgKK3vntcK98BtNHZtAF4LA/yl2a7k8R6Q==",
+     "version": "4.6.4",
+     "resolved": "https://registry.npmmirror.com/typescript/-/typescript-4.6.4.tgz",
+     "integrity": "sha512-9ia/jWHIEbo49HfjrLGfKbZSuWo9iTMwXO+Ca3pRsSpbsMbc7/IU8NKdCZVRRBafVPGnoJeFL76ZOAA84I9fEg==",
      "dev": true
    },
    "ua-parser-js": {

View File

@@ -111,7 +111,7 @@
    "react-refresh": "^0.10.0",
    "react-router-dom": "5.2.1",
    "ts-loader": "^8.0.11",
-   "typescript": "^3.8.2",
+   "typescript": "4.6.4",
    "webpack": "^4.40.0",
    "webpack-cli": "^3.2.3",
    "webpack-dev-server": "^3.2.1",

View File

@@ -14,6 +14,7 @@ export enum MetricType {
  Broker = 103,
  Partition = 104,
  Replication = 105,
+ Zookeeper = 110,
  Controls = 901,
}
@@ -61,6 +62,8 @@ const api = {
  phyClusterState: getApi(`/physical-clusters/state`),
  getOperatingStateList: (clusterPhyId: number) => getApi(`/clusters/${clusterPhyId}/groups-overview`),
+ getGroupTopicList: (clusterPhyId: number, groupName: string) => getApi(`/clusters/${clusterPhyId}/groups/${groupName}/topics-overview`),
  // 物理集群接口
  phyCluster: getApi(`/physical-clusters`),
  getPhyClusterBasic: (clusterPhyId: number) => getApi(`/physical-clusters/${clusterPhyId}/basic`),
@@ -127,6 +130,7 @@ const api = {
    getApi(`/clusters/${clusterPhyId}/topics/${topicName}/brokers-partitions-summary`),
  getTopicPartitionsDetail: (clusterPhyId: string, topicName: string) => getApi(`/clusters/${clusterPhyId}/topics/${topicName}/partitions`),
  getTopicMessagesList: (topicName: string, clusterPhyId: number) => getApi(`/clusters/${clusterPhyId}/topics/${topicName}/records`), // Messages列表
+ getTopicGroupList: (topicName: string, clusterPhyId: number) => getApi(`/clusters/${clusterPhyId}/topics/${topicName}/groups-overview`), // Consumers列表
  getTopicMessagesMetadata: (topicName: string, clusterPhyId: number) => getApi(`/clusters//${clusterPhyId}/topics/${topicName}/metadata`), // Messages列表
  getTopicACLsList: (topicName: string, clusterPhyId: number) => getApi(`/clusters/${clusterPhyId}/topics/${topicName}/acl-Bindings`), // ACLs列表
  getTopicConfigs: (topicName: string, clusterPhyId: number) => getApi(`/clusters/${clusterPhyId}/config-topics/${topicName}/configs`), // Configuration列表

View File

@@ -138,7 +138,7 @@ const CardBar = (props: CardBarProps) => {
      dataIndex: 'updateTime',
      key: 'updateTime',
      render: (value: number) => {
-       return moment(value).format('YYYY-MM-DD hh:mm:ss');
+       return moment(value).format('YYYY-MM-DD HH:mm:ss');
      },
    },
    {

View File

@@ -1,15 +1,12 @@
-import React, { useState, useEffect } from 'react';
-import { Drawer, Button, Space, Divider, AppContainer, ProTable } from 'knowdesign';
+import React, { useState, useEffect, forwardRef, useImperativeHandle } from 'react';
+import { Drawer, Button, Space, Divider, AppContainer, ProTable, Utils } from 'knowdesign';
import { IconFont } from '@knowdesign/icons';
-import { IindicatorSelectModule } from './index';
+import { MetricSelect } from './index';
import './style/indicator-drawer.less';
import { useLocation } from 'react-router-dom';
interface PropsType extends React.HTMLAttributes<HTMLDivElement> {
- onClose: () => void;
- visible: boolean;
- isGroup?: boolean; // 是否分组
- indicatorSelectModule: IindicatorSelectModule;
+ metricSelect: MetricSelect;
}
interface MetricInfo {
@@ -27,25 +24,25 @@ type CategoryData = {
  metrics: MetricInfo[];
};
-const ExpandedRow = ({ metrics, category, selectedMetrics, selectedMetricChange }: any) => {
-  const innerColumns = [
-    {
-      title: '指标名称',
-      dataIndex: 'name',
-      key: 'name',
-    },
-    {
-      title: '单位',
-      dataIndex: 'unit',
-      key: 'unit',
-    },
-    {
-      title: '描述',
-      dataIndex: 'desc',
-      key: 'desc',
-    },
-  ];
+const expandedRowColumns = [
+  {
+    title: '指标名称',
+    dataIndex: 'name',
+    key: 'name',
+  },
+  {
+    title: '单位',
+    dataIndex: 'unit',
+    key: 'unit',
+  },
+  {
+    title: '描述',
+    dataIndex: 'desc',
+    key: 'desc',
+  },
+];
+const ExpandedRow = ({ metrics, category, selectedMetrics, selectedMetricChange }: any) => {
  return (
    <div
      style={{
@@ -62,7 +59,7 @@ const ExpandedRow = ({ metrics, category, selectedMetrics, selectedMetricChange
        showHeader: false,
        noPagination: true,
        rowKey: 'name',
-       columns: innerColumns,
+       columns: expandedRowColumns,
        dataSource: metrics,
        attrs: {
          rowSelection: {
@@ -79,13 +76,14 @@ const ExpandedRow = ({ metrics, category, selectedMetrics, selectedMetricChange
  );
};
-const IndicatorDrawer = ({ onClose, visible, indicatorSelectModule }: PropsType) => {
+const MetricSelect = forwardRef(({ metricSelect }: PropsType, ref) => {
  const [global] = AppContainer.useGlobalValue();
  const { pathname } = useLocation();
  const [confirmLoading, setConfirmLoading] = useState<boolean>(false);
  const [categoryData, setCategoryData] = useState<CategoryData[]>([]);
  const [selectedCategories, setSelectedCategories] = useState<string[]>([]);
  const [childrenSelectedRowKeys, setChildrenSelectedRowKeys] = useState<SelectedMetrics>({});
+ const [visible, setVisible] = useState<boolean>(false);
  const columns = [
    {
@@ -96,13 +94,13 @@ const IndicatorDrawer = ({ onClose, visible, indicatorSelectModule }: PropsType)
  ];
  const formateTableData = () => {
-   const tableData = indicatorSelectModule.tableData;
+   const tableData = metricSelect.tableData;
    const categoryData: {
      [category: string]: MetricInfo[];
    } = {};
    tableData.forEach(({ name, desc }) => {
-     const metricDefine = global.getMetricDefine(indicatorSelectModule?.metricType, name);
+     const metricDefine = global.getMetricDefine(metricSelect?.metricType, name);
      const returnData = {
        name,
        desc,
@@ -125,12 +123,12 @@ const IndicatorDrawer = ({ onClose, visible, indicatorSelectModule }: PropsType)
  };
  const formateSelectedKeys = () => {
-   const newKeys = indicatorSelectModule.selectedRows;
+   const newKeys = metricSelect.selectedRows;
    const result: SelectedMetrics = {};
    const selectedCategories: string[] = [];
    newKeys.forEach((name: string) => {
-     const metricDefine = global.getMetricDefine(indicatorSelectModule?.metricType, name);
+     const metricDefine = global.getMetricDefine(metricSelect?.metricType, name);
      if (metricDefine) {
        if (!result[metricDefine.category]) {
          result[metricDefine.category] = [name];
@@ -217,10 +215,10 @@ const IndicatorDrawer = ({ onClose, visible, indicatorSelectModule }: PropsType)
    const allRowKeys: string[] = [];
    Object.entries(childrenSelectedRowKeys).forEach(([, arr]) => allRowKeys.push(...arr));
-   indicatorSelectModule.submitCallback(allRowKeys).then(
+   metricSelect.submitCallback(allRowKeys).then(
      () => {
        setConfirmLoading(false);
-       onClose();
+       setVisible(false);
      },
      () => {
        setConfirmLoading(false);
@@ -231,7 +229,7 @@ const IndicatorDrawer = ({ onClose, visible, indicatorSelectModule }: PropsType)
  const rowSelection = {
    selectedRowKeys: selectedCategories,
    onChange: rowChange,
-   // getCheckboxProps: (record: any) => indicatorSelectModule.checkboxProps && indicatorSelectModule.checkboxProps(record),
+   // getCheckboxProps: (record: any) => metricSelect.checkboxProps && metricSelect.checkboxProps(record),
    getCheckboxProps: (record: CategoryData) => {
      const isAllSelected = record.metrics.length === childrenSelectedRowKeys[record.category]?.length;
      const isNotCheck = !childrenSelectedRowKeys[record.category] || childrenSelectedRowKeys[record.category]?.length === 0;
@@ -241,25 +239,33 @@ const IndicatorDrawer = ({ onClose, visible, indicatorSelectModule }: PropsType)
    },
  };
- useEffect(formateTableData, [indicatorSelectModule.tableData]);
+ useEffect(formateTableData, [metricSelect.tableData]);
  useEffect(() => {
    visible && formateSelectedKeys();
- }, [visible, indicatorSelectModule.selectedRows]);
+ }, [visible, metricSelect.selectedRows]);
+ useImperativeHandle(
+   ref,
+   () => ({
+     open: () => setVisible(true),
+   }),
+   []
+ );
  return (
    <>
      <Drawer
        className="indicator-drawer"
-       title={indicatorSelectModule.drawerTitle || '指标筛选'}
+       title={metricSelect.drawerTitle || '指标筛选'}
        width="868px"
        forceRender={true}
-       onClose={onClose}
+       onClose={() => setVisible(false)}
        visible={visible}
        maskClosable={false}
        extra={
          <Space>
-           <Button size="small" onClick={onClose}>
+           <Button size="small" onClick={() => setVisible(false)}>
            </Button>
            <Button
@@ -281,6 +287,7 @@ const IndicatorDrawer = ({ onClose, visible, indicatorSelectModule }: PropsType)
        rowKey: 'category',
        columns: columns,
        dataSource: categoryData,
+       noPagination: true,
        attrs: {
          rowSelection: rowSelection,
          expandable: {
@@ -319,6 +326,6 @@ const IndicatorDrawer = ({ onClose, visible, indicatorSelectModule }: PropsType)
      </Drawer>
    </>
  );
-};
-export default IndicatorDrawer;
+});
+export default MetricSelect;

View File

@@ -26,6 +26,7 @@ const OptionsDefault = [
const NodeScope = ({ nodeScopeModule, change }: propsType) => {
  const {
+   hasCustomScope,
    customScopeList: customList,
    scopeName = '',
    scopeLabel = '自定义范围',
@@ -128,51 +129,53 @@ const NodeScope = ({ nodeScopeModule, change }: propsType) => {
          </Space>
        </Radio.Group>
      </div>
+     {hasCustomScope && (
        <div className="flx_r">
          <h6 className="time_title">{scopeLabel}</h6>
          <div className="custom-scope">
            <div className="check-row">
              <Checkbox className="check-all" indeterminate={indeterminate} onChange={onCheckAllChange} checked={checkAll}>
              </Checkbox>
              <Input
                className="search-input"
                suffix={<IconFont type="icon-fangdajing" style={{ fontSize: '16px' }} />}
                size="small"
                placeholder={searchPlaceholder}
                onChange={(e) => setScopeSearchValue(e.target.value)}
              />
            </div>
            <div className="fixed-height">
              <Checkbox.Group style={{ width: '100%' }} onChange={checkChange} value={checkedListTemp}>
                <Row gutter={[10, 12]}>
                  {customList
                    .filter((item) => item.label.includes(scopeSearchValue))
                    .map((item) => (
                      <Col span={12} key={item.value}>
                        <Checkbox value={item.value}>{item.label}</Checkbox>
                      </Col>
                    ))}
                </Row>
              </Checkbox.Group>
            </div>
            <div className="btn-con">
              <Button
                type="primary"
                size="small"
                className="btn-sure"
                onClick={customSure}
                disabled={checkedListTemp?.length > 0 ? false : true}
              >
              </Button>
              <Button size="small" onClick={customCancel}>
              </Button>
            </div>
          </div>
        </div>
+     )}
    </div>
  </div>
);
@@ -185,7 +188,7 @@ const NodeScope = ({ nodeScopeModule, change }: propsType) => {
      visible={popVisible}
      content={clickContent}
      placement="bottomRight"
-     overlayClassName="d-node-scope-popover"
+     overlayClassName={`d-node-scope-popover ${hasCustomScope ? 'large-size' : ''}`}
      onVisibleChange={visibleChange}
    >
      <span className="input-span">

View File

@@ -1,9 +1,9 @@
-import React, { useEffect, useState } from 'react';
-import { Tooltip, Select, Utils, Divider, Button } from 'knowdesign';
+import React, { useEffect, useRef, useState } from 'react';
+import { Select, Divider, Button } from 'knowdesign';
import { IconFont } from '@knowdesign/icons';
import moment from 'moment';
import { DRangeTime } from 'knowdesign';
-import IndicatorDrawer from './IndicatorDrawer';
+import MetricSelect from './MetricSelect';
import NodeScope from './NodeScope';
import './style/index.less';
@@ -24,8 +24,8 @@ export interface KsHeaderOptions {
    data: number | number[];
  };
}
-export interface IindicatorSelectModule {
- metricType?: MetricType;
+export interface MetricSelect {
+ metricType: MetricType;
  hide?: boolean;
  drawerTitle?: string;
  selectedRows: (string | number)[];
@@ -47,20 +47,27 @@ export interface IcustomScope {
}
export interface InodeScopeModule {
+ hasCustomScope: boolean;
  customScopeList: IcustomScope[];
  scopeName?: string;
  scopeLabel?: string;
  searchPlaceholder?: string;
  change?: () => void;
}
interface PropsType {
- indicatorSelectModule?: IindicatorSelectModule;
+ metricSelect?: MetricSelect;
  hideNodeScope?: boolean;
  hideGridSelect?: boolean;
  nodeScopeModule?: InodeScopeModule;
  onChange: (options: KsHeaderOptions) => void;
}
+interface ScopeData {
+ isTop: boolean;
+ data: any;
+}
// 列布局选项
const GRID_SIZE_OPTIONS = [
  {
@@ -77,15 +84,17 @@
  },
];
-const SingleChartHeader = ({
- indicatorSelectModule,
+const MetricOperateBar = ({
+ metricSelect,
  nodeScopeModule = {
+   hasCustomScope: false,
    customScopeList: [],
  },
  hideNodeScope = false,
  hideGridSelect = false,
  onChange: onChangeCallback,
}: PropsType): JSX.Element => {
+ const metricSelectRef = useRef(null);
  const [gridNum, setGridNum] = useState<number>(GRID_SIZE_OPTIONS[1].value);
  const [rangeTime, setRangeTime] = useState<[number, number]>(() => {
    const curTimeStamp = moment().valueOf();
@@ -93,16 +102,35 @@ const SingleChartHeader = ({
  });
  const [isRelativeRangeTime, setIsRelativeRangeTime] = useState(true);
  const [isAutoReload, setIsAutoReload] = useState(false);
- const [indicatorDrawerVisible, setIndicatorDrawerVisible] = useState(false);
- const [scopeData, setScopeData] = useState<{
-   isTop: boolean;
-   data: any;
- }>({
+ const [scopeData, setScopeData] = useState<ScopeData>({
    isTop: true,
    data: 5,
  });
+ const sizeChange = (value: number) => setGridNum(value);
+ const timeChange = (curRangeTime: [number, number], isRelative: boolean) => {
+   setRangeTime([...curRangeTime]);
+   setIsRelativeRangeTime(isRelative);
+ };
+ const reloadRangeTime = () => {
+   if (isRelativeRangeTime) {
+     const timeLen = rangeTime[1] - rangeTime[0] || 0;
+     const curTimeStamp = moment().valueOf();
+     setRangeTime([curTimeStamp - timeLen, curTimeStamp]);
+   } else {
+     setRangeTime([...rangeTime]);
+   }
+ };
+ const nodeScopeChange = (data: any, isTop?: any) => {
+   setScopeData({
+     isTop,
+     data,
+   });
+ };
  useEffect(() => {
    onChangeCallback({
      rangeTime,
@@ -129,68 +157,37 @@ const SingleChartHeader = ({
    };
  }, [isRelativeRangeTime, rangeTime]);
- const sizeChange = (value: number) => {
-   setGridNum(value);
- };
- const timeChange = (curRangeTime: [number, number], isRelative: boolean) => {
-   setRangeTime([...curRangeTime]);
-   setIsRelativeRangeTime(isRelative);
- };
- const reloadRangeTime = () => {
-   if (isRelativeRangeTime) {
-     const timeLen = rangeTime[1] - rangeTime[0] || 0;
-     const curTimeStamp = moment().valueOf();
-     setRangeTime([curTimeStamp - timeLen, curTimeStamp]);
-   } else {
-     setRangeTime([...rangeTime]);
-   }
- };
- const openIndicatorDrawer = () => {
-   setIndicatorDrawerVisible(true);
- };
- const closeIndicatorDrawer = () => {
-   setIndicatorDrawerVisible(false);
- };
- const nodeScopeChange = (data: any, isTop?: any) => {
-   setScopeData({
-     isTop,
-     data,
-   });
- };
  return (
    <>
      <div className="ks-chart-container">
        <div className="ks-chart-container-header">
          <div className="header-left">
-           {/* 刷新 */}
            <div className="icon-box" onClick={reloadRangeTime}>
              <IconFont className="icon" type="icon-shuaxin1" />
            </div>
            <Divider type="vertical" style={{ height: 20, top: 0 }} />
-           {/* 时间选择 */}
            <DRangeTime timeChange={timeChange} rangeTimeArr={rangeTime} />
          </div>
          <div className="header-right">
-           {/* 节点范围 */}
            {!hideNodeScope && <NodeScope nodeScopeModule={nodeScopeModule} change={nodeScopeChange} />}
-           {/* 分栏 */}
            {!hideGridSelect && (
              <Select className="grid-select" style={{ width: 70 }} value={gridNum} options={GRID_SIZE_OPTIONS} onChange={sizeChange} />
            )}
            {(!hideNodeScope || !hideGridSelect) && <Divider type="vertical" style={{ height: 20, top: 0 }} />}
-           <Button type="primary" onClick={openIndicatorDrawer}>
+           <Button type="primary" onClick={() => metricSelectRef.current.open()}>
            </Button>
          </div>
        </div>
      </div>
-     {!indicatorSelectModule?.hide && (
-       <IndicatorDrawer visible={indicatorDrawerVisible} onClose={closeIndicatorDrawer} indicatorSelectModule={indicatorSelectModule} />
-     )}
+     {/* 指标筛选 */}
+     {!metricSelect?.hide && <MetricSelect ref={metricSelectRef} metricSelect={metricSelect} />}
    </>
  );
};
-export default SingleChartHeader;
+export default MetricOperateBar;

View File

@@ -1,13 +1,8 @@
-@root-entry-name: 'default';
-@import '~knowdesign/es/basic/style/themes/index';
-@import '~knowdesign/es/basic/style/mixins/index';
-.indicator-drawer{
-  .dcloud-drawer-body{
-    padding-top: 2px !important;
-  }
-}
+.indicator-drawer {
+  .dcloud-drawer-body {
+    padding-top: 2px !important;
+  }
+}
// .dd-indicator-drawer {
// @drawerItemH: 27px;
// @primary-color: #556ee6;

View File

@@ -63,9 +63,16 @@
}
.@{ant-prefix}-popover-inner-content {
  padding: 16px 24px;
- width: 479px;
+ width: 200px;
  box-sizing: border-box;
}
+&.large-size {
+  .@{ant-prefix}-popover-inner-content {
+    padding: 16px 24px;
+    width: 479px;
+    box-sizing: border-box;
+  }
+}
&.@{ant-prefix}-popover-placement-bottomRight {
  // padding-top: 0;
}

Some files were not shown because too many files have changed in this diff.