Compare commits


73 Commits
v3.2.0 ... v3.3

Author SHA1 Message Date
zengqiao
258385dc9a Upgrade to version 3.3.0 2023-02-24 11:12:31 +08:00
zengqiao
65238231f0 Add 3.3.0 upgrade notes 2023-02-24 11:11:12 +08:00
zengqiao
cb22e02fbe Add 3.3.0 changelog entries 2023-02-24 11:10:42 +08:00
erge
aa0bec1206 [Optimize] Pin the lerna version in package.json and update package-lock.json (#957) 2023-02-23 20:14:04 +08:00
wyb
793c780015 [Bugfix] Fix MM2 list request timeout (#949)
Restructure the code
2023-02-23 11:17:48 +08:00
erge
ec6f063450 [Optimize] Remove internal addresses from package.json (#939) 2023-02-22 17:08:21 +08:00
zengqiao
f25c65b98b [Doc] Add contributor information 2023-02-22 14:00:52 +08:00
Luckywustone
2d99aae779 [Bugfix] ZK health-check logs are unclear, making problems hard to locate (#904)
2023-02-22 13:41:02 +08:00
erge
a8847dc282 [Bugfix] Fix packaging failure (#940) 2023-02-22 11:58:33 +08:00
zengqiao
4852c01c88 [Feature] Add contribution documentation (#947)
1. Add the contributor list; please let us know if anyone is missing;
2. Add the contribution guide;
2023-02-22 11:53:00 +08:00
zengqiao
3d6f405b69 [Bugfix] Correct an outdated email address (#944)
[Bugfix] Correct wording (#944)
2023-02-22 11:52:40 +08:00
erge
18e3fbf41d [Optimize] Display the time and result of health-check items (didi#930) 2023-02-21 10:41:49 +08:00
erge
ae8cc3092b [Optimize] When creating/editing an MM2, fetch Topics from the corresponding sourceKafka cluster instead of the current cluster & optimize the create/edit MM2 input parameters (#894) 2023-02-21 10:41:44 +08:00
erge
5c26e8947b [Optimize] Update the Drawer title text for creating an MM2 via JSON (#894) 2023-02-21 10:41:37 +08:00
erge
fbe6945d3b [Bugfix] Fix abnormal display of the leader node on the zookeeper page (#873) 2023-02-21 10:41:25 +08:00
zengqiao
7dc8f2dc48 [Bugfix] Fix search not working on the Connector and MM2 lists (#928) 2023-02-21 10:40:05 +08:00
zengqiao
91c60ce72c [Bugfix] Fix Controller-Host not displayed for newly added clusters (#927)
Cause:
1. For a newly added cluster, Broker info is not yet stored in the DB, so the Broker lookup comes back empty when storing the Controller to the DB.

Fix:
1. Proactively fetch the Broker info once before storing the Controller to the DB.
2023-02-21 10:39:46 +08:00
zengqiao
687eea80c8 Add 3.3.0 changelog entries 2023-02-16 14:51:43 +08:00
zengqiao
9bfe3fd1db Switch the license to AGPL 2023-02-15 17:53:46 +08:00
shizeying
03f81bc6de [Bugfix] Drop the idx_cluster_phy_id index and add the idx_cluster_update_time index (#918) 2023-02-15 17:45:53 +08:00
slhu
eed9571ffa [Bugfix] Fix a data-type conversion error when parsing metric values returned by command execution, and an NPE when reporting metrics for storage (#912)
1. Change the zk_min_latency and zk_max_latency metric data types to float
2. Use ConvertUtil.string2Float() for the string-to-float conversion
2023-02-15 16:20:39 +08:00
edengyuan_v
e4651ef749 [Optimize] When creating a Topic, distinguish single vs. multiple selection for the cleanup policy (#770) 2023-02-15 11:18:33 +08:00
zengqiao
f715cf7a8d Add 3.3.0 changelog entries 2023-02-13 11:57:51 +08:00
wyb
fad9ddb9a1 fix: Update the login page copy 2023-02-13 11:49:00 +08:00
wyb
b6e4f50849 fix: Improve health-state details & Connector styling & add a fallback page when no MM2 task metrics exist 2023-02-13 11:49:00 +08:00
wyb
5c6911e398 [Optimize] Overview metric card display logic 2023-02-13 11:49:00 +08:00
wyb
a0371ab88b feat: Add the Topic replication feature 2023-02-13 11:49:00 +08:00
wyb
fa2abadc25 feat: Add Mirror Maker 2.0 (MM2) 2023-02-13 11:49:00 +08:00
zengqiao
f03460f3cd [Bugfix] Fix incorrect display of Broker Similar Config (#872) 2023-02-13 11:22:13 +08:00
zengqiao
b5683b73c2 [Optimize] Improve the initialization of the MySQL & ES test containers (#906)
Main changes:
1. The knowstreaming/knowstreaming-manager container;
2. Switch the knowstreaming/knowstreaming-mysql container to the mysql:5.7 container;
3. After initializing the mysql:5.7 container, add a step that initializes the MySQL tables and data;

Affected changes:
1. Move the MySQL init scripts under km-dist/init/sql to km-persistence/src/main/resource/sql so the required init SQL is loadable during project tests;
2. Remove the unused km-dist/init/template directory;
3. Update the file entries in ReleaseKnowStreaming.xml to reflect the km-dist/init/sql and km-dist/init/template changes;
2023-02-13 10:33:40 +08:00
zengqiao
c062586c7e [Optimize] Remove unused & redundant packaging config files 2023-02-10 16:51:32 +08:00
fengqiongfeng
98a5c7b776 [Optimize] Improve health-check logs (#869) 2023-02-10 11:02:24 +08:00
zengqiao
e204023b1f [Feature] Add an API that lists clusters supporting Topic replication (#899) 2023-02-09 17:03:28 +08:00
zengqiao
4c5ffccc45 [Optimize] Remove dead code 2023-02-09 17:00:50 +08:00
zengqiao
fbcf58e19c [Feature] MM2 management - improve Connector metadata management (#894) 2023-02-09 16:59:38 +08:00
zengqiao
e5c6d00438 [Feature] MM2 management - add cluster Group list info (#894) 2023-02-09 16:59:38 +08:00
zengqiao
ab6a4d7099 [Feature] MM2 management - MM2 interface classes (#894) 2023-02-09 16:59:38 +08:00
zengqiao
78b2b8a45e [Feature] MM2 management - MM2 business classes (#894) 2023-02-09 16:59:38 +08:00
zengqiao
add2af4f3f [Feature] MM2 management - MM2 service classes (#894) 2023-02-09 16:59:38 +08:00
zengqiao
235c0ed30e [Feature] MM2 management - MM2 entity classes (#894) 2023-02-09 16:59:38 +08:00
zengqiao
5bd93aa478 [Bugfix] Fix incorrect cluster-state statistics under normal conditions (#865) 2023-02-09 16:44:26 +08:00
zengqiao
f95be2c1b3 [Optimize] Include task group info in TaskResult responses 2023-02-09 16:36:19 +08:00
zengqiao
5110b30f62 [Feature] MM2 management - MM2 health checks (#894) 2023-02-09 15:36:35 +08:00
zengqiao
861faa5df5 [Feature] HA - mirror Topic management (#899)
1. The underlying Kafka must be Didi's Kafka distribution;
2. Add create/read/update/delete for mirror Topics;
3. Add metric viewing for mirror Topics;
2023-02-09 15:21:23 +08:00
zengqiao
efdf624c67 [Feature] HA - compatibility with Didi Kafka version info (#899) 2023-02-09 15:21:23 +08:00
zengqiao
caccf9cef5 [Feature] MM2 management - task that collects MM2 metrics (#894) 2023-02-09 14:58:34 +08:00
zengqiao
6ba3dceb84 [Feature] MM2 management - collect MM2 metrics (#894) 2023-02-09 14:58:34 +08:00
zengqiao
9b7c41e804 [Feature] MM2 management - read/write MM2 metrics in ES (#894) 2023-02-09 14:58:34 +08:00
zengqiao
346aee8fe7 [Bugfix] Fix errors when fetching TopN metrics for the Topic metrics dashboard (#896)
1. Switch from ES-based sorting to sorting based on the local cache;
2. Move the database local cache from the core module to the persistence module;
2023-02-09 14:20:02 +08:00
zengqiao
353d781bca [Feature] Add MM2-related indices and database tables (#894) 2023-02-09 13:44:40 +08:00
EricZeng
3ce4bf231a Fix an incorrect conditional check
Co-authored-by: haoqi123 <49672871+haoqi123@users.noreply.github.com>
2023-02-09 11:28:26 +08:00
EricZeng
d046cb8bf4 Fix an incorrect conditional check
Co-authored-by: haoqi123 <49672871+haoqi123@users.noreply.github.com>
2023-02-09 11:28:26 +08:00
zengqiao
da95c63503 [Optimize] Clean up the TestContainers-related dependencies (#892)
1. Remove the dependency on mysql-connector-j;
2. Tidy up the code;
2023-02-09 11:28:26 +08:00
haoqi
915e48de22 [Optimize] Add usage notes for Testcontainers (#890) 2023-02-09 11:05:44 +08:00
_haoqi
256f770971 [Feature]Support running tests with testcontainers(#870) 2023-02-08 14:56:44 +08:00
zengqiao
16e251cbe8 Adjust the open-source license 2023-02-08 14:10:37 +08:00
zengqiao
67743b859a [Optimize] Add configuration notes for LDAP login (#888) 2023-02-08 13:51:45 +08:00
congchen0321
c275b42632 Update faq.md 2023-02-08 13:41:08 +08:00
zengqiao
a02760417b [Optimize] Add default metrics displayed on the ZK Overview page (#874) 2023-01-30 13:18:06 +08:00
zengqiao
0e50bfc5d4 Improve the PR template 2023-01-13 16:04:25 +08:00
wuyouwuyoulian
eab988e18f For #781, Fix "The partition display is incomplete" bug 2023-01-12 11:03:30 +08:00
zengqiao
dd6004b9d4 [Bugfix] Fix wrong parameter passing when collecting replica metrics (#867) 2023-01-11 18:00:21 +08:00
zengqiao
ac7c32acd5 [Optimize] Improve the ES index & template initialization docs (#832)
1. Fix inconsistent shard counts across index templates in different places;
2. Remove the redundant template.sh and use init_es_template.sh uniformly;
3. In init_es_template.sh, add init scripts for the connect-related index templates and remove those for the replica and zookeeper index templates;
2023-01-09 15:18:41 +08:00
zengqiao
f4a219ceef [Optimize] Remove the code that reads/writes Replica metrics from ES (#862) 2023-01-09 14:57:38 +08:00
zengqiao
a8b56fb613 [Bugfix] Fix an NPE thrown by the user list after user info is modified (#860) 2023-01-09 14:57:23 +08:00
zengqiao
2925a20e8e [Bugfix] Fix partition selection not taking effect when viewing messages (#858) 2023-01-09 13:38:10 +08:00
zengqiao
6b3eb05735 [Bugfix] Fix ZK client configuration not taking effect (#694)
1. Fix configs written to the zk_properties field of the ks_km_physical_cluster table not taking effect for the ZK client.
2. Remove the currently unused jmxConfig field from zk_properties.
2023-01-09 10:44:35 +08:00
zengqiao
17e0c39f83 [Optimize] Improve the Topic health-check logs (#855) 2023-01-06 14:42:08 +08:00
zengqiao
4994639111 [Optimize] When there is no ZK module, hide ZK from the health-check details (#764) 2023-01-04 10:32:18 +08:00
wyb
c187b5246f [Bugfix] Fix missing metrics in the connector metric filter (#846) 2022-12-23 16:19:34 +08:00
wyb
6ed6d5ec8a [Bugfix] Fix user update failure (#840) 2022-12-22 15:56:48 +08:00
wyb
0735b332a8 [Bugfix] Fix a function mapping error (#842) 2022-12-22 08:48:59 +08:00
wyb
344cec19fe [Bugfix] Fix incorrect max-value calculation when collecting connector metrics (#836) 2022-12-20 09:50:42 +08:00
260 changed files with 13352 additions and 26314 deletions

View File

@@ -14,9 +14,10 @@ XXXX
Please follow this checklist to help us integrate your contribution quickly and easily:
- * [ ] Make sure there is a GitHub issue for the change (usually before you start working on it). Trivial changes such as typo fixes do not need a GitHub issue. Your Pull Request should address a single issue with no unrelated changes — one PR solves one issue.
- * [ ] Format the Pull Request title, e.g. [ISSUE #123] support Confluent Schema Registry. Every commit in the Pull Request should have a meaningful subject line and body.
- * [ ] Write a Pull Request description detailed enough to understand what the Pull Request does, how, and why.
- * [ ] Write the necessary unit tests to verify your logic fixes. If a new feature or a major change is submitted, remember to add an integration-test in the test module.
- * [ ] Make sure compilation passes and the integration tests pass.
+ * [ ] A PR (short for Pull Request) addresses only one issue; one PR solving multiple issues is not allowed;
+ * [ ] Make sure the PR has a corresponding Issue (usually created before you start working), except for trivial changes such as typos, which need no Issue;
+ * [ ] Format the title and body of the PR and its Commit-Log, e.g. #861. Note: the Commit-Log must be written when running git commit; it cannot be edited on GitHub;
+ * [ ] Write a PR description detailed enough to understand what the PR does, how, and why;
+ * [ ] Write the necessary unit tests to verify your logic fixes. If a new feature or a major change is submitted, remember to add an integration-test in the test module;
+ * [ ] Make sure compilation passes and the integration tests pass;

View File

@@ -4,7 +4,7 @@
## Our Pledge
In the interest of fostering an open and welcoming environment, we as
-contributors and maintainers pledge to making participation in our project and
+contributors and maintainers pledge to making participation in our project, and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, gender identity and expression, level of experience,
education, socio-economic status, nationality, personal appearance, race,
@@ -56,7 +56,7 @@ further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
-reported by contacting the project team at shirenchuang@didiglobal.com . All
+reported by contacting the project team at https://knowstreaming.com/support-center . All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.

View File

@@ -1,4 +1,55 @@
## v3.3.0
**Bug fixes**
- Fix the Connect JMX-Port configuration not taking effect;
- Fix the Overview page loading forever when no Connector exists;
- Fix Group partition info displaying incompletely across pages;
- Fix wrong parameter passing when collecting replica metrics;
- Fix an NPE thrown by the user list after user info is modified;
- Fix partition selection not taking effect when viewing messages on the Topic detail page;
- Fix ZK client configuration not taking effect;
- Fix the connect module's metrics missing the passed health-check count;
- Fix a mapping error in the connect module's metric getters;
- Fix wrong retrieval of max-dimension metrics in the connect module;
- Fix incorrect TopN metric info on the Topic metrics dashboard;
- Fix incorrect display of Broker Similar Config;
- Fix an NPE caused by a wrong data type when parsing ZK four-letter commands;
- Fix wrong version gating of the cleanup-policy options when creating a Topic;
- Fix Controller-Host not displayed for newly added clusters;
- Fix search not working on the Connector and MM2 lists;
- Fix abnormal Leader display on the Zookeeper page;
- Fix the frontend build failure;
**Improvements**
- Add default metrics displayed on the ZK Overview page;
- Unify the ES index-template initialization script as init_es_template.sh; add the missing connect index-template initialization and remove the redundant replica and zookeeper ones;
- On the metrics dashboard, after filtering, show metric cards that have no data instead of hiding them, with a no-data fallback;
- Remove the code that reads/writes replica metrics from ES;
- Improve the Topic health-check logs to make error causes explicit;
- When there is no ZK module, hide ZK from the health-check details;
- Make the local cache size configurable;
- Include task group info in the Task module's responses;
- Add LDAP configuration notes to the FAQ;
- Add FAQ notes on connecting to Kerberos-authenticated Kafka clusters;
- Add a time-dimension index to the ks_km_kafka_change_record table to improve query performance;
- Improve the ZK health-check logs to ease troubleshooting;
**New features**
- Add Topic replication based on Didi Kafka (requires Didi Kafka);
- Add Topic-replication metrics to the Topic metrics dashboard;
- Add unit tests based on TestContainers;
**Kafka MM2 Beta (first released in v3.3.0)**
- Create/read/update/delete MM2 tasks;
- Metrics dashboard for MM2 tasks;
- Health status for MM2 tasks;
---
## v3.2.0
**Bug fixes**

View File

@@ -13,7 +13,7 @@ curl -s --connect-timeout 10 -o /dev/null -X POST -H 'cache-control: no-cache' -
],
"settings" : {
"index" : {
"number_of_shards" : "10"
"number_of_shards" : "2"
}
},
"mappings" : {
@@ -115,7 +115,7 @@ curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: appl
],
"settings" : {
"index" : {
"number_of_shards" : "10"
"number_of_shards" : "2"
}
},
"mappings" : {
@@ -302,7 +302,7 @@ curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: appl
],
"settings" : {
"index" : {
"number_of_shards" : "10"
"number_of_shards" : "6"
}
},
"mappings" : {
@@ -377,73 +377,7 @@ curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: appl
],
"settings" : {
"index" : {
"number_of_shards" : "10"
}
},
"mappings" : {
"properties" : {
"brokerId" : {
"type" : "long"
},
"partitionId" : {
"type" : "long"
},
"routingValue" : {
"type" : "text",
"fields" : {
"keyword" : {
"ignore_above" : 256,
"type" : "keyword"
}
}
},
"clusterPhyId" : {
"type" : "long"
},
"topic" : {
"type" : "keyword"
},
"metrics" : {
"properties" : {
"LogStartOffset" : {
"type" : "float"
},
"Messages" : {
"type" : "float"
},
"LogEndOffset" : {
"type" : "float"
}
}
},
"key" : {
"type" : "text",
"fields" : {
"keyword" : {
"ignore_above" : 256,
"type" : "keyword"
}
}
},
"timestamp" : {
"format" : "yyyy-MM-dd HH:mm:ss Z||yyyy-MM-dd HH:mm:ss||yyyy-MM-dd HH:mm:ss.SSS Z||yyyy-MM-dd HH:mm:ss.SSS||yyyy-MM-dd HH:mm:ss,SSS||yyyy/MM/dd HH:mm:ss||yyyy-MM-dd HH:mm:ss,SSS Z||yyyy/MM/dd HH:mm:ss,SSS Z||epoch_millis",
"index" : true,
"type" : "date",
"doc_values" : true
}
}
},
"aliases" : { }
}'
curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: application/json' http://${esaddr}:${port}/_template/ks_kafka_replication_metric -d '{
"order" : 10,
"index_patterns" : [
"ks_kafka_replication_metric*"
],
"settings" : {
"index" : {
"number_of_shards" : "10"
"number_of_shards" : "6"
}
},
"mappings" : {
@@ -509,7 +443,7 @@ curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: appl
],
"settings" : {
"index" : {
"number_of_shards" : "10"
"number_of_shards" : "6"
}
},
"mappings" : {
@@ -626,7 +560,7 @@ curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: appl
],
"settings" : {
"index" : {
"number_of_shards" : "10"
"number_of_shards" : "2"
}
},
"mappings" : {
@@ -704,6 +638,388 @@ curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: appl
"aliases" : { }
}'
curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: application/json' http://${SERVER_ES_ADDRESS}/_template/ks_kafka_connect_cluster_metric -d '{
"order" : 10,
"index_patterns" : [
"ks_kafka_connect_cluster_metric*"
],
"settings" : {
"index" : {
"number_of_shards" : "2"
}
},
"mappings" : {
"properties" : {
"connectClusterId" : {
"type" : "long"
},
"routingValue" : {
"type" : "text",
"fields" : {
"keyword" : {
"ignore_above" : 256,
"type" : "keyword"
}
}
},
"clusterPhyId" : {
"type" : "long"
},
"metrics" : {
"properties" : {
"ConnectorCount" : {
"type" : "float"
},
"TaskCount" : {
"type" : "float"
},
"ConnectorStartupAttemptsTotal" : {
"type" : "float"
},
"ConnectorStartupFailurePercentage" : {
"type" : "float"
},
"ConnectorStartupFailureTotal" : {
"type" : "float"
},
"ConnectorStartupSuccessPercentage" : {
"type" : "float"
},
"ConnectorStartupSuccessTotal" : {
"type" : "float"
},
"TaskStartupAttemptsTotal" : {
"type" : "float"
},
"TaskStartupFailurePercentage" : {
"type" : "float"
},
"TaskStartupFailureTotal" : {
"type" : "float"
},
"TaskStartupSuccessPercentage" : {
"type" : "float"
},
"TaskStartupSuccessTotal" : {
"type" : "float"
}
}
},
"key" : {
"type" : "text",
"fields" : {
"keyword" : {
"ignore_above" : 256,
"type" : "keyword"
}
}
},
"timestamp" : {
"format" : "yyyy-MM-dd HH:mm:ss Z||yyyy-MM-dd HH:mm:ss||yyyy-MM-dd HH:mm:ss.SSS Z||yyyy-MM-dd HH:mm:ss.SSS||yyyy-MM-dd HH:mm:ss,SSS||yyyy/MM/dd HH:mm:ss||yyyy-MM-dd HH:mm:ss,SSS Z||yyyy/MM/dd HH:mm:ss,SSS Z||epoch_millis",
"index" : true,
"type" : "date",
"doc_values" : true
}
}
},
"aliases" : { }
}'
curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: application/json' http://${SERVER_ES_ADDRESS}/_template/ks_kafka_connect_connector_metric -d '{
"order" : 10,
"index_patterns" : [
"ks_kafka_connect_connector_metric*"
],
"settings" : {
"index" : {
"number_of_shards" : "2"
}
},
"mappings" : {
"properties" : {
"connectClusterId" : {
"type" : "long"
},
"routingValue" : {
"type" : "text",
"fields" : {
"keyword" : {
"ignore_above" : 256,
"type" : "keyword"
}
}
},
"connectorName" : {
"type" : "keyword"
},
"connectorNameAndClusterId" : {
"type" : "keyword"
},
"clusterPhyId" : {
"type" : "long"
},
"metrics" : {
"properties" : {
"HealthState" : {
"type" : "float"
},
"ConnectorTotalTaskCount" : {
"type" : "float"
},
"HealthCheckPassed" : {
"type" : "float"
},
"HealthCheckTotal" : {
"type" : "float"
},
"ConnectorRunningTaskCount" : {
"type" : "float"
},
"ConnectorPausedTaskCount" : {
"type" : "float"
},
"ConnectorFailedTaskCount" : {
"type" : "float"
},
"ConnectorUnassignedTaskCount" : {
"type" : "float"
},
"BatchSizeAvg" : {
"type" : "float"
},
"BatchSizeMax" : {
"type" : "float"
},
"OffsetCommitAvgTimeMs" : {
"type" : "float"
},
"OffsetCommitMaxTimeMs" : {
"type" : "float"
},
"OffsetCommitFailurePercentage" : {
"type" : "float"
},
"OffsetCommitSuccessPercentage" : {
"type" : "float"
},
"PollBatchAvgTimeMs" : {
"type" : "float"
},
"PollBatchMaxTimeMs" : {
"type" : "float"
},
"SourceRecordActiveCount" : {
"type" : "float"
},
"SourceRecordActiveCountAvg" : {
"type" : "float"
},
"SourceRecordActiveCountMax" : {
"type" : "float"
},
"SourceRecordPollRate" : {
"type" : "float"
},
"SourceRecordPollTotal" : {
"type" : "float"
},
"SourceRecordWriteRate" : {
"type" : "float"
},
"SourceRecordWriteTotal" : {
"type" : "float"
},
"OffsetCommitCompletionRate" : {
"type" : "float"
},
"OffsetCommitCompletionTotal" : {
"type" : "float"
},
"OffsetCommitSkipRate" : {
"type" : "float"
},
"OffsetCommitSkipTotal" : {
"type" : "float"
},
"PartitionCount" : {
"type" : "float"
},
"PutBatchAvgTimeMs" : {
"type" : "float"
},
"PutBatchMaxTimeMs" : {
"type" : "float"
},
"SinkRecordActiveCount" : {
"type" : "float"
},
"SinkRecordActiveCountAvg" : {
"type" : "float"
},
"SinkRecordActiveCountMax" : {
"type" : "float"
},
"SinkRecordLagMax" : {
"type" : "float"
},
"SinkRecordReadRate" : {
"type" : "float"
},
"SinkRecordReadTotal" : {
"type" : "float"
},
"SinkRecordSendRate" : {
"type" : "float"
},
"SinkRecordSendTotal" : {
"type" : "float"
},
"DeadletterqueueProduceFailures" : {
"type" : "float"
},
"DeadletterqueueProduceRequests" : {
"type" : "float"
},
"LastErrorTimestamp" : {
"type" : "float"
},
"TotalErrorsLogged" : {
"type" : "float"
},
"TotalRecordErrors" : {
"type" : "float"
},
"TotalRecordFailures" : {
"type" : "float"
},
"TotalRecordsSkipped" : {
"type" : "float"
},
"TotalRetries" : {
"type" : "float"
}
}
},
"key" : {
"type" : "text",
"fields" : {
"keyword" : {
"ignore_above" : 256,
"type" : "keyword"
}
}
},
"timestamp" : {
"format" : "yyyy-MM-dd HH:mm:ss Z||yyyy-MM-dd HH:mm:ss||yyyy-MM-dd HH:mm:ss.SSS Z||yyyy-MM-dd HH:mm:ss.SSS||yyyy-MM-dd HH:mm:ss,SSS||yyyy/MM/dd HH:mm:ss||yyyy-MM-dd HH:mm:ss,SSS Z||yyyy/MM/dd HH:mm:ss,SSS Z||epoch_millis",
"index" : true,
"type" : "date",
"doc_values" : true
}
}
},
"aliases" : { }
}'
curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: application/json' http://${SERVER_ES_ADDRESS}/_template/ks_kafka_connect_mirror_maker_metric -d '{
"order" : 10,
"index_patterns" : [
"ks_kafka_connect_mirror_maker_metric*"
],
"settings" : {
"index" : {
"number_of_shards" : "2"
}
},
"mappings" : {
"properties" : {
"connectClusterId" : {
"type" : "long"
},
"routingValue" : {
"type" : "text",
"fields" : {
"keyword" : {
"ignore_above" : 256,
"type" : "keyword"
}
}
},
"connectorName" : {
"type" : "keyword"
},
"connectorNameAndClusterId" : {
"type" : "keyword"
},
"clusterPhyId" : {
"type" : "long"
},
"metrics" : {
"properties" : {
"HealthState" : {
"type" : "float"
},
"HealthCheckTotal" : {
"type" : "float"
},
"ByteCount" : {
"type" : "float"
},
"ByteRate" : {
"type" : "float"
},
"RecordAgeMs" : {
"type" : "float"
},
"RecordAgeMsAvg" : {
"type" : "float"
},
"RecordAgeMsMax" : {
"type" : "float"
},
"RecordAgeMsMin" : {
"type" : "float"
},
"RecordCount" : {
"type" : "float"
},
"RecordRate" : {
"type" : "float"
},
"ReplicationLatencyMs" : {
"type" : "float"
},
"ReplicationLatencyMsAvg" : {
"type" : "float"
},
"ReplicationLatencyMsMax" : {
"type" : "float"
},
"ReplicationLatencyMsMin" : {
"type" : "float"
}
}
},
"key" : {
"type" : "text",
"fields" : {
"keyword" : {
"ignore_above" : 256,
"type" : "keyword"
}
}
},
"timestamp" : {
"format" : "yyyy-MM-dd HH:mm:ss Z||yyyy-MM-dd HH:mm:ss||yyyy-MM-dd HH:mm:ss.SSS Z||yyyy-MM-dd HH:mm:ss.SSS||yyyy-MM-dd HH:mm:ss,SSS||yyyy/MM/dd HH:mm:ss||yyyy-MM-dd HH:mm:ss,SSS Z||yyyy/MM/dd HH:mm:ss,SSS Z||epoch_millis",
"index" : true,
"type" : "date",
"doc_values" : true
}
}
},
"aliases" : { }
}'
for i in {0..6};
do
logdate=_$(date -d "${i} day ago" +%Y-%m-%d)
@@ -711,8 +1027,10 @@ do
curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_cluster_metric${logdate} && \
curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_group_metric${logdate} && \
curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_partition_metric${logdate} && \
-curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_replication_metric${logdate} && \
curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_zookeeper_metric${logdate} && \
+curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_connect_cluster_metric${logdate} && \
+curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_connect_connector_metric${logdate} && \
+curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_connect_mirror_maker_metric${logdate} && \
curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_topic_metric${logdate} || \
exit 2
done
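To spot-check the result of this script (a quick sanity check, reusing the same esaddr/port variables the script uses above):

```shell
# the new ks_kafka_connect_* templates should now be listed
curl -s "http://${esaddr}:${port}/_cat/templates/ks_kafka*?v"
# one index per metric type per day should exist for the last 7 days
curl -s "http://${esaddr}:${port}/_cat/indices/ks_kafka*?v"
```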

View File

@@ -0,0 +1,111 @@
<mxfile host="65bd71144e">
<diagram id="vxzhwhZdNVAY19FZ4dgb" name="Page-1">
<mxGraphModel dx="1194" dy="733" grid="0" gridSize="10" guides="1" tooltips="1" connect="1" arrows="1" fold="1" page="1" pageScale="1" pageWidth="1169" pageHeight="827" math="0" shadow="0">
<root>
<mxCell id="0"/>
<mxCell id="1" parent="0"/>
<mxCell id="4" style="edgeStyle=none;html=1;exitX=0.5;exitY=1;exitDx=0;exitDy=0;startArrow=none;strokeWidth=2;strokeColor=#6666FF;" edge="1" parent="1" source="16">
<mxGeometry relative="1" as="geometry">
<mxPoint x="200" y="540" as="targetPoint"/>
</mxGeometry>
</mxCell>
<mxCell id="7" style="edgeStyle=none;html=1;exitX=1;exitY=0.5;exitDx=0;exitDy=0;exitPerimeter=0;strokeColor=#33FF33;strokeWidth=2;" edge="1" parent="1" source="2">
<mxGeometry relative="1" as="geometry">
<mxPoint x="360" y="240" as="targetPoint"/>
</mxGeometry>
</mxCell>
<mxCell id="5" style="edgeStyle=none;html=1;startArrow=none;strokeColor=#33FF33;strokeWidth=2;" edge="1" parent="1">
<mxGeometry relative="1" as="geometry">
<mxPoint x="200" y="400" as="targetPoint"/>
<mxPoint x="360" y="360" as="sourcePoint"/>
</mxGeometry>
</mxCell>
<mxCell id="3" value="C3" style="verticalLabelPosition=middle;verticalAlign=middle;html=1;shape=mxgraph.flowchart.on-page_reference;labelPosition=center;align=center;strokeColor=#FF8000;strokeWidth=2;" vertex="1" parent="1">
<mxGeometry x="340" y="280" width="40" height="40" as="geometry"/>
</mxCell>
<mxCell id="18" style="edgeStyle=none;html=1;entryX=0.5;entryY=0;entryDx=0;entryDy=0;entryPerimeter=0;endArrow=none;endFill=0;strokeColor=#FF8000;strokeWidth=2;" edge="1" parent="1" source="8" target="3">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
<mxCell id="8" value="fix_928" style="rounded=1;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=0;" vertex="1" parent="1">
<mxGeometry x="320" y="40" width="80" height="40" as="geometry"/>
</mxCell>
<mxCell id="9" value="github_master" style="rounded=1;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=0;" vertex="1" parent="1">
<mxGeometry x="160" y="40" width="80" height="40" as="geometry"/>
</mxCell>
<mxCell id="10" value="" style="edgeStyle=none;html=1;exitX=0.5;exitY=1;exitDx=0;exitDy=0;endArrow=classic;startArrow=none;endFill=1;strokeWidth=2;strokeColor=#6666FF;" edge="1" parent="1" source="11" target="2">
<mxGeometry relative="1" as="geometry">
<mxPoint x="200" y="640" as="targetPoint"/>
<mxPoint x="200" y="80" as="sourcePoint"/>
</mxGeometry>
</mxCell>
<mxCell id="2" value="C2" style="verticalLabelPosition=middle;verticalAlign=middle;html=1;shape=mxgraph.flowchart.on-page_reference;labelPosition=center;align=center;strokeColor=#6666FF;strokeWidth=2;" vertex="1" parent="1">
<mxGeometry x="180" y="200" width="40" height="40" as="geometry"/>
</mxCell>
<mxCell id="12" value="" style="edgeStyle=none;html=1;exitX=0.5;exitY=1;exitDx=0;exitDy=0;endArrow=classic;endFill=1;strokeWidth=2;strokeColor=#6666FF;" edge="1" parent="1" source="9" target="11">
<mxGeometry relative="1" as="geometry">
<mxPoint x="200" y="200" as="targetPoint"/>
<mxPoint x="200" y="80" as="sourcePoint"/>
</mxGeometry>
</mxCell>
<mxCell id="11" value="C1" style="verticalLabelPosition=middle;verticalAlign=middle;html=1;shape=mxgraph.flowchart.on-page_reference;labelPosition=center;align=center;strokeColor=#6666FF;strokeWidth=2;" vertex="1" parent="1">
<mxGeometry x="180" y="120" width="40" height="40" as="geometry"/>
</mxCell>
<mxCell id="23" style="edgeStyle=none;html=1;exitX=0.5;exitY=1;exitDx=0;exitDy=0;exitPerimeter=0;endArrow=none;endFill=0;strokeColor=#FF8000;strokeWidth=2;" edge="1" parent="1" source="3">
<mxGeometry relative="1" as="geometry">
<mxPoint x="360" y="360" as="targetPoint"/>
<mxPoint x="360" y="400" as="sourcePoint"/>
</mxGeometry>
</mxCell>
<mxCell id="17" value="" style="edgeStyle=none;html=1;exitX=0.5;exitY=1;exitDx=0;exitDy=0;startArrow=none;endArrow=none;strokeWidth=2;strokeColor=#6666FF;" edge="1" parent="1" source="2" target="16">
<mxGeometry relative="1" as="geometry">
<mxPoint x="200" y="640" as="targetPoint"/>
<mxPoint x="200" y="240" as="sourcePoint"/>
</mxGeometry>
</mxCell>
<mxCell id="16" value="C4" style="verticalLabelPosition=middle;verticalAlign=middle;html=1;shape=mxgraph.flowchart.on-page_reference;labelPosition=center;align=center;strokeColor=#6666FF;strokeWidth=2;" vertex="1" parent="1">
<mxGeometry x="180" y="440" width="40" height="40" as="geometry"/>
</mxCell>
<mxCell id="22" value="Tag-v3.2.0" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=0;fillColor=none;strokeColor=none;" vertex="1" parent="1">
<mxGeometry x="100" y="120" width="80" height="40" as="geometry"/>
</mxCell>
<mxCell id="24" value="Tag-v3.2.1" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=0;fillColor=none;strokeColor=none;" vertex="1" parent="1">
<mxGeometry x="100" y="440" width="80" height="40" as="geometry"/>
</mxCell>
<mxCell id="27" value="切换到主分支git checkout github_master" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=0;labelPosition=center;verticalLabelPosition=middle;align=center;verticalAlign=middle;" vertex="1" parent="1">
<mxGeometry x="520" y="90" width="240" height="30" as="geometry"/>
</mxCell>
<mxCell id="34" style="edgeStyle=none;html=1;exitX=0;exitY=0;exitDx=0;exitDy=0;entryX=0.855;entryY=0.145;entryDx=0;entryDy=0;entryPerimeter=0;dashed=1;dashPattern=8 8;fontSize=18;endArrow=none;endFill=0;" edge="1" parent="1" source="28" target="2">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
<mxCell id="28" value="主分支拉最新代码git pull" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=0;labelPosition=center;verticalLabelPosition=middle;align=center;verticalAlign=middle;" vertex="1" parent="1">
<mxGeometry x="520" y="120" width="160" height="30" as="geometry"/>
</mxCell>
<mxCell id="35" style="edgeStyle=none;html=1;exitX=0;exitY=0.5;exitDx=0;exitDy=0;dashed=1;dashPattern=8 8;fontSize=18;endArrow=none;endFill=0;" edge="1" parent="1" source="29">
<mxGeometry relative="1" as="geometry">
<mxPoint x="270" y="225" as="targetPoint"/>
</mxGeometry>
</mxCell>
<mxCell id="29" value="基于主分支拉新分支git checkout -b fix_928" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=0;labelPosition=center;verticalLabelPosition=middle;align=center;verticalAlign=middle;" vertex="1" parent="1">
<mxGeometry x="520" y="210" width="250" height="30" as="geometry"/>
</mxCell>
<mxCell id="37" style="edgeStyle=none;html=1;exitX=0;exitY=1;exitDx=0;exitDy=0;entryX=1;entryY=0.5;entryDx=0;entryDy=0;entryPerimeter=0;dashed=1;dashPattern=8 8;fontSize=18;endArrow=none;endFill=0;" edge="1" parent="1" source="30" target="3">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
<mxCell id="30" value="提交代码git commit -m &quot;[Optimize]优化xxx问题(#928)&quot;" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=0;labelPosition=center;verticalLabelPosition=middle;align=center;verticalAlign=middle;" vertex="1" parent="1">
<mxGeometry x="520" y="270" width="320" height="30" as="geometry"/>
</mxCell>
<mxCell id="31" value="提交到自己远端仓库git push --set-upstream origin fix_928" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=0;labelPosition=center;verticalLabelPosition=middle;align=center;verticalAlign=middle;" vertex="1" parent="1">
<mxGeometry x="520" y="300" width="334" height="30" as="geometry"/>
</mxCell>
<mxCell id="38" style="edgeStyle=none;html=1;exitX=0;exitY=0.5;exitDx=0;exitDy=0;dashed=1;dashPattern=8 8;fontSize=18;endArrow=none;endFill=0;" edge="1" parent="1" source="32">
<mxGeometry relative="1" as="geometry">
<mxPoint x="280" y="380" as="targetPoint"/>
</mxGeometry>
</mxCell>
<mxCell id="32" value="GitHub页面发起Pull Request请求管理员合入主仓库" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=0;labelPosition=center;verticalLabelPosition=middle;align=center;verticalAlign=middle;" vertex="1" parent="1">
<mxGeometry x="520" y="360" width="300" height="30" as="geometry"/>
</mxCell>
</root>
</mxGraphModel>
</diagram>
</mxfile>

Binary file not shown.

After

Width:  |  Height:  |  Size: 64 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 180 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 80 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 631 KiB

View File

@@ -0,0 +1,100 @@
# Contributors
- [Contributors](#contributors)
  - [1. Contributor roles](#1-contributor-roles)
    - [1.1 Maintainer](#11-maintainer)
    - [1.2 Committer](#12-committer)
    - [1.3 Contributor](#13-contributor)
  - [2. Contributor list](#2-contributor-list)
## 1. Contributor roles
KnowStreaming developers fall into three roles: Maintainer, Committer, and Contributor. Each role is defined as follows.
### 1.1 Maintainer
A Maintainer is an individual who has made significant contributions to the evolution and development of the KnowStreaming project, specifically:
- Has designed and developed several key modules or projects and is a core developer;
- Shows sustained commitment and enthusiasm, actively maintaining the community, website, issues, PRs, and other project matters;
- Has visible influence in the community and can represent KnowStreaming at important community meetings and events;
- Has the awareness and ability to mentor Committers and Contributors;
### 1.2 Committer
A Committer is an individual with write access to the KnowStreaming repository, specifically:
- Contributes issues and PRs continuously over a long period;
- Participates in maintaining the issue list and in discussing important features;
- Participates in code review;
### 1.3 Contributor
A Contributor is an individual who has contributed to the KnowStreaming project; the bar is:
- Has submitted a PR that was merged;
---
## 2. Contributor list
Open-source contributor list (updated from time to time)
If you are on the list but have not received a contributor gift, contact szzdzhp001
| Name | GitHub | Role | Company |
| ------------------- | ---------------------------------------------------------- | ----------- | -------- |
| 张亮 | [@zhangliangboy](https://github.com/zhangliangboy) | Maintainer | 滴滴出行 |
| 谢鹏 | [@PenceXie](https://github.com/PenceXie) | Maintainer | 滴滴出行 |
| 赵情融 | [@zqrferrari](https://github.com/zqrferrari) | Maintainer | 滴滴出行 |
| 石臻臻 | [@shirenchuang](https://github.com/shirenchuang) | Maintainer | 滴滴出行 |
| 曾巧 | [@ZQKC](https://github.com/ZQKC) | Maintainer | 滴滴出行 |
| 孙超 | [@lucasun](https://github.com/lucasun) | Maintainer | 滴滴出行 |
| 洪华驰 | [@brodiehong](https://github.com/brodiehong) | Maintainer | 滴滴出行 |
| 许喆 | [@potaaaaaato](https://github.com/potaaaaaato) | Committer | 滴滴出行 |
| 郭宇航 | [@GraceWalk](https://github.com/GraceWalk) | Committer | 滴滴出行 |
| 李伟 | [@velee](https://github.com/velee) | Committer | 滴滴出行 |
| 张占昌 | [@zzccctv](https://github.com/zzccctv) | Committer | 滴滴出行 |
| 王东方 | [@wangdongfang-aden](https://github.com/wangdongfang-aden) | Committer | 滴滴出行 |
| 王耀波 | [@WYAOBO](https://github.com/WYAOBO) | Committer | 滴滴出行 |
| 赵寅锐 | [@ZHAOYINRUI](https://github.com/ZHAOYINRUI) | Maintainer | 字节跳动 |
| haoqi123 | [@haoqi123](https://github.com/haoqi123) | Contributor | 前程无忧 |
| chaixiaoxue | [@chaixiaoxue](https://github.com/chaixiaoxue) | Contributor | SYNNEX |
| 陆晗 | [@luhea](https://github.com/luhea) | Contributor | 竞技世界 |
| Mengqi777 | [@Mengqi777](https://github.com/Mengqi777) | Contributor | 腾讯 |
| ruanliang-hualun | [@ruanliang-hualun](https://github.com/ruanliang-hualun) | Contributor | 网易 |
| 17hao | [@17hao](https://github.com/17hao) | Contributor | |
| Huyueeer | [@Huyueeer](https://github.com/Huyueeer) | Contributor | INVENTEC |
| lomodays207 | [@lomodays207](https://github.com/lomodays207) | Contributor | 建信金科 |
| Super .Wein星痕 | [@superspeedone](https://github.com/superspeedone) | Contributor | 韵达 |
| Hongten | [@Hongten](https://github.com/Hongten) | Contributor | Shopee |
| 徐正熙 | [@hyper-xx](https://github.com/hyper-xx) | Contributor | 滴滴出行 |
| RichardZhengkay | [@RichardZhengkay](https://github.com/RichardZhengkay) | Contributor | 趣街 |
| 罐子里的茶 | [@gzldc](https://github.com/gzldc) | Contributor | 道富 |
| 陈忠玉 | [@chenzhongyu11](https://github.com/chenzhongyu11) | Contributor | 平安产险 |
| 杨光 | [@yangvipguang](https://github.com/yangvipguang) | Contributor | |
| 王亚聪 | [@wangyacongi](https://github.com/wangyacongi) | Contributor | |
| Yang Jing | [@yangbajing](https://github.com/yangbajing) | Contributor | |
| 刘新元 Liu XinYuan | [@Liu-XinYuan](https://github.com/Liu-XinYuan) | Contributor | |
| Joker | [@JokerQueue](https://github.com/JokerQueue) | Contributor | 丰巢 |
| Eason Lau | [@Liubey](https://github.com/Liubey) | Contributor | |
| hailanxin | [@hailanxin](https://github.com/hailanxin) | Contributor | |
| Qi Zhang | [@zzzhangqi](https://github.com/zzzhangqi) | Contributor | 好雨科技 |
| fengxsong | [@fengxsong](https://github.com/fengxsong) | Contributor | |
| 谢晓东 | [@Strangevy](https://github.com/Strangevy) | Contributor | 花生日记 |
| ZhaoXinlong | [@ZhaoXinlong](https://github.com/ZhaoXinlong) | Contributor | |
| xuehaipeng | [@xuehaipeng](https://github.com/xuehaipeng) | Contributor | |
| 孔令续 | [@mrazkong](https://github.com/mrazkong) | Contributor | |
| pierre xiong | [@pierre94](https://github.com/pierre94) | Contributor | |
| PengShuaixin | [@PengShuaixin](https://github.com/PengShuaixin) | Contributor | |
| 梁壮 | [@lz](https://github.com/silent-night-no-trace) | Contributor | |
| 张晓寅 | [@ahu0605](https://github.com/ahu0605) | Contributor | 电信数智 |
| 黄海婷 | [@Huanghaiting](https://github.com/Huanghaiting) | Contributor | 云徙科技 |
| 任祥德 | [@RenChauncy](https://github.com/RenChauncy) | Contributor | 探马企服 |
| 胡圣林 | [@slhu997](https://github.com/slhu997) | Contributor | |
| 史泽颖 | [@shizeying](https://github.com/shizeying) | Contributor | |
| 王玉博 | [@Wyb7290](https://github.com/Wyb7290) | Committer | |
| 伍璇 | [@Luckywustone](https://github.com/Luckywustone) | Contributor | |
| 邓苑 | [@CatherineDY](https://github.com/CatherineDY) | Contributor | |
| 封琼凤 | [@fengqiongfeng](https://github.com/fengqiongfeng) | Committer | |

View File

@@ -0,0 +1,167 @@
# Contribution Guide
- [Contribution Guide](#contribution-guide)
  - [1. Code of Conduct](#1-code-of-conduct)
  - [2. Repository conventions](#2-repository-conventions)
    - [2.1 Issue conventions](#21-issue-conventions)
    - [2.2 Commit-Log conventions](#22-commit-log-conventions)
    - [2.3 Pull-Request conventions](#23-pull-request-conventions)
  - [3. Walkthrough](#3-walkthrough)
    - [3.1 Setting up the environment](#31-setting-up-the-environment)
    - [3.2 Claiming an issue](#32-claiming-an-issue)
    - [3.3 Fixing the issue \& submitting the fix](#33-fixing-the-issue--submitting-the-fix)
    - [3.4 Requesting a merge](#34-requesting-a-merge)
  - [4. FAQ](#4-faq)
    - [4.1 How do I squash multiple Commit-Logs into one?](#41-how-do-i-squash-multiple-commit-logs-into-one)
---
Welcome 👏🏻 👏🏻 👏🏻 to `KnowStreaming`. This document is a guide on contributing to `KnowStreaming`. If you find anything incorrect or missing, please leave your comments/suggestions.
---
## 1. Code of Conduct
Please read and follow our [Code of Conduct](https://github.com/didi/KnowStreaming/blob/master/CODE_OF_CONDUCT.md).
## 2. Repository conventions
### 2.1 Issue conventions
Create an issue as instructed at [create an issue](https://github.com/didi/KnowStreaming/issues/new/choose).
Two points deserve emphasis:
- Provide the environment in which the problem occurred, including the OS and the KS version in use;
- Provide a way to reproduce the problem;
### 2.2 Commit-Log conventions
A `Commit-Log` has three parts: `Header`, `Body`, and `Footer`. The `Header` is mandatory and its format is fixed; use the `Body` when the change warrants a detailed explanation.
**1. `Header` conventions**
The `Header` format is `[Type]Message(#IssueID)`, made up of three parts: `Type`, `Message`, and `IssueID`:
- `Type`: the kind of commit, e.g. Bugfix, Feature, Optimize;
- `Message`: the commit message, e.g. "fix the xx problem";
- `IssueID`: the ID of the Issue this commit is associated with;
A real example: [`[Bugfix]修复新接入的集群Controller-Host不显示的问题(#927)`](https://github.com/didi/KnowStreaming/pull/933/commits)
**2. `Body` conventions**
Usually unnecessary. If the change solves a fairly complex problem or touches a lot of code, use the `Body` to explain what was solved and how.
---
**3. A real example**
```
[Optimize]Improve the initialization of the MySQL & ES test containers(#906)
Main changes:
1. The knowstreaming/knowstreaming-manager container;
2. Switch the knowstreaming/knowstreaming-mysql container to the mysql:5.7 container;
3. After initializing the mysql:5.7 container, add a step that initializes the MySQL tables and data;
Affected changes:
1. Move the MySQL init scripts under km-dist/init/sql to km-persistence/src/main/resource/sql so the required init SQL is loadable during project tests;
2. Remove the unused km-dist/init/template directory;
3. Update the file entries in ReleaseKnowStreaming.xml to reflect the km-dist/init/sql and km-dist/init/template changes;
```
**TODO: going forward, anyone interested could consider introducing Git hooks for better Commit-Log management.**
### 2.3 Pull-Request conventions
See the [PULL-REQUEST template](../../.github/PULL_REQUEST_TEMPLATE.md) for details.
Two points deserve emphasis:
- <font color=red>Any PR must be associated with a valid ISSUE. Otherwise, the PR will be rejected;</font>
- <font color=red>One branch changes one thing, and one PR changes one thing;</font>
---
## 3. Walkthrough
This section covers the operations and commands involved when contributing code to `KnowStreaming`.
Terminology:
- Main repository: https://github.com/didi/KnowStreaming is the main repository;
- Fork repository: the KnowStreaming repository forked under your own account;
### 3.1 Setting up the environment
1. Fork the `KnowStreaming` main repository into your own account using the `Fork` button at the top right of https://github.com/didi/KnowStreaming ;
2. Clone your fork locally: `git clone git@github.com:xxxxxxx/KnowStreaming.git`; this remote is conventionally named `origin`;
3. Add the main repository locally: `git remote add upstream https://github.com/didi/KnowStreaming`; `upstream` is the local shorthand for the main repository — any name works as long as it is used consistently;
4. Fetch the main repository: `git fetch upstream`;
5. Fetch your fork: `git fetch origin`;
6. Check out the main repository's `master` branch locally under the name `github_master`: `git checkout -b github_master upstream/master`;
Finally, here is roughly what things look like after initialization:
![环境初始化](./assets/环境初始化.jpg)
With that, the environment is ready. From now on, the `github_master` branch tracks the main repository's `master` branch: `git pull` pulls its latest code, and `git checkout -b xxx` creates whatever branch we need from it.
### 3.2 Claiming an issue
Comment at the end of the issue saying you want to work on it, as shown below:
![问题认领](./assets/问题认领.jpg)
### 3.3 Fixing the issue & submitting the fix
This section walks through branch management while fixing an issue and submitting the fix, as shown below (the full command sequence is also sketched after this list):
![分支管理](./assets/分支管理.png)
1. Switch to the main branch: `git checkout github_master`;
2. Pull the latest code on the main branch: `git pull`;
3. Create a new branch off the main branch: `git checkout -b fix_928`;
4. Commit your code following the commit conventions, e.g. `git commit -m "[Optimize]Improve the xxx problem(#928)"`;
5. Push to your own remote fork: `git push --set-upstream origin fix_928`;
6. Open a `Pull Request` on the `GitHub` page for a maintainer to merge into the main repository — covered in detail in the next section;
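Putting steps 1-5 together as one copy-paste sketch (fix_928 and issue #928 are just the example values from the diagram; substitute your own branch name and issue number):

```shell
git checkout github_master                         # local mirror of the main repo's master
git pull                                           # bring it up to date
git checkout -b fix_928                            # working branch for issue #928
# ...edit code...
git add .
git commit -m "[Bugfix]Fix the xxx problem(#928)"  # Header format from section 2.2
git push --set-upstream origin fix_928             # publish the branch to your fork
```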
### 3.4 Requesting a merge
Once the code is pushed to your `GitHub` fork, create a `Pull Request` on the `GitHub` site to ask for the code to be merged into the main repository, as shown below:
![申请合并](./assets/申请合并.jpg)
[An example of a created Pull Request](https://github.com/didi/KnowStreaming/pull/945)
---
## 4. FAQ
### 4.1 How do I squash multiple Commit-Logs into one?
Use the `git rebase -i` command, for example:
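A minimal sketch, assuming the last three commits on the branch should become one:

```shell
git rebase -i HEAD~3        # opens an editor listing the last 3 commits
# keep "pick" on the first line, change the other lines to "squash" (or "s"),
# save, then write the single combined Commit-Log in the follow-up editor
git push --force-with-lease origin fix_928   # history changed, so a (safe) force push is required
```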

View File

@@ -1,6 +0,0 @@
Open-source contributor certificate recipients (updated periodically)
For the contributor list, see: [Contributor List](https://doc.knowstreaming.com/product/10-contribution#106-贡献者名单)

View File

@@ -1,6 +0,0 @@
<br>
<br>
Please see: [Contribution process](https://doc.knowstreaming.com/product/10-contribution#102-贡献流程)

View File

@@ -216,7 +216,7 @@ curl http://{ES IP}:{ES port}/_cat/indices/ks_kafka* to view the KS indices
#### 3.1.2 Solution
-Run the [/km-dist/init/template/template.sh](https://github.com/didi/KnowStreaming/blob/master/km-dist/init/template/template.sh) script to create the indices.
+Run the [ES index & template initialization](https://github.com/didi/KnowStreaming/blob/master/bin/init_es_template.sh) script to create the indices and templates.
@@ -245,8 +245,7 @@ curl -XDELETE {ES IP}:{ES port}/ks_kafka*
curl -XDELETE {ES IP}:{ES port}/_template/ks_kafka*
```
-Run the [/km-dist/init/template/template.sh](https://github.com/didi/KnowStreaming/blob/master/km-dist/init/template/template.sh) script to initialize the indices and templates.
+Run the [ES index & template initialization](https://github.com/didi/KnowStreaming/blob/master/bin/init_es_template.sh) script to create the indices and templates.
### 3.3 Cause 3: the cluster's shards are exhausted
@@ -283,4 +282,4 @@ curl -XPUT -H"content-type:application/json" http://{ES IP}:{ES port}
}'
```
-Run the [/km-dist/init/template/template.sh](https://github.com/didi/KnowStreaming/blob/master/km-dist/init/template/template.sh) script to backfill the indices.
+Run the [ES index & template initialization](https://github.com/didi/KnowStreaming/blob/master/bin/init_es_template.sh) script to backfill the indices.

View File

@@ -6,7 +6,65 @@
### Upgrading to the `master` version
None yet
### Upgrading to version `3.3.0`
**SQL changes**
```sql
ALTER TABLE `logi_security_user`
CHANGE COLUMN `phone` `phone` VARCHAR(20) NOT NULL DEFAULT '' COMMENT 'mobile' ;
ALTER TABLE ks_kc_connector ADD `heartbeat_connector_name` varchar(512) DEFAULT '' COMMENT '心跳检测connector名称';
ALTER TABLE ks_kc_connector ADD `checkpoint_connector_name` varchar(512) DEFAULT '' COMMENT '进度确认connector名称';
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_MIRROR_MAKER_TOTAL_RECORD_ERRORS', '{\"value\" : 1}', 'MirrorMaker消息处理错误的次数', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_MIRROR_MAKER_REPLICATION_LATENCY_MS_MAX', '{\"value\" : 6000}', 'MirrorMaker消息复制最大延迟时间', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_MIRROR_MAKER_UNASSIGNED_TASK_COUNT', '{\"value\" : 20}', 'MirrorMaker未被分配的任务数量', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_MIRROR_MAKER_FAILED_TASK_COUNT', '{\"value\" : 10}', 'MirrorMaker失败状态的任务数量', 'admin');
-- Multi-cluster management permissions, added 2023-01-05
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2012', 'Topic-新增Topic复制', '1593', '1', '2', 'Topic-新增Topic复制', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2014', 'Topic-详情-取消Topic复制', '1593', '1', '2', 'Topic-详情-取消Topic复制', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2012', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2014', '0', 'know-streaming');
-- Multi-cluster management permissions, added 2023-01-18
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2016', 'MM2-新增', '1593', '1', '2', 'MM2-新增', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2018', 'MM2-编辑', '1593', '1', '2', 'MM2-编辑', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2020', 'MM2-删除', '1593', '1', '2', 'MM2-删除', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2022', 'MM2-重启', '1593', '1', '2', 'MM2-重启', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2024', 'MM2-暂停&恢复', '1593', '1', '2', 'MM2-暂停&恢复', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2016', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2018', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2020', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2022', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2024', '0', 'know-streaming');
DROP TABLE IF EXISTS `ks_ha_active_standby_relation`;
CREATE TABLE `ks_ha_active_standby_relation` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
`active_cluster_phy_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT '主集群ID',
`standby_cluster_phy_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT '备集群ID',
`res_name` varchar(192) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT '资源名称',
`res_type` int(11) NOT NULL DEFAULT '-1' COMMENT '资源类型0集群1镜像Topic2主备Topic',
`create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
`modify_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
`update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
PRIMARY KEY (`id`),
UNIQUE KEY `uniq_cluster_res` (`res_type`,`active_cluster_phy_id`,`standby_cluster_phy_id`,`res_name`),
UNIQUE KEY `uniq_res_type_standby_cluster_res_name` (`res_type`,`standby_cluster_phy_id`,`res_name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='HA主备关系表';
-- Drop the idx_cluster_phy_id index and add the idx_cluster_update_time index
ALTER TABLE `ks_km_kafka_change_record` DROP INDEX `idx_cluster_phy_id` ,
ADD INDEX `idx_cluster_update_time` (`cluster_phy_id` ASC, `update_time` ASC);
```
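After applying the SQL above, a quick sanity check (the connection flags and the know_streaming database name are assumptions; adjust them to your deployment):

```shell
# idx_cluster_update_time should be listed, idx_cluster_phy_id should be gone
mysql -h 127.0.0.1 -u root -p -D know_streaming -e "SHOW INDEX FROM ks_km_kafka_change_record;"
# the HA relation table should exist with both unique keys
mysql -h 127.0.0.1 -u root -p -D know_streaming -e "SHOW CREATE TABLE ks_ha_active_standby_relation\G"
```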
### Upgrading to version `3.2.0`

View File

@@ -182,3 +182,47 @@ Node version: v12.22.12
+ Cause: the database encoding did not match the script we provide, the data in the database became garbled, and permission recognition therefore failed.
+ Solution: clear the database data, set the database character set to utf8, then re-run the [dml-logi.sql](https://github.com/didi/KnowStreaming/blob/master/km-dist/init/sql/dml-logi.sql) script to import the data.
## 8.13 Connecting to a Kerberos-enabled Kafka cluster
1. Install the Kerberos client on the machine where KnowStreaming is deployed;
2. Replace the /etc/krb5.conf configuration file;
3. Copy the keytab file for Kafka to a directory on that machine;
4. Fill in the authentication configuration when adding the cluster, adjusting the values to your actual setup;
```json
{
"security.protocol": "SASL_PLAINTEXT",
"sasl.mechanism": "GSSAPI",
"sasl.jaas.config": "com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true keyTab=\"/etc/keytab/kafka.keytab\" storeKey=true useTicketCache=false principal=\"kafka/kafka@TEST.COM\";",
"sasl.kerberos.service.name": "kafka"
}
```
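Before adding the cluster in KS, the keytab can be sanity-checked on that machine (the path and principal are the example values from the JSON above):

```shell
kinit -kt /etc/keytab/kafka.keytab kafka/kafka@TEST.COM   # obtain a ticket with the keytab
klist                                                     # a valid ticket for the principal should be listed
```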
## 8.14 LDAP configuration
```yaml
# Add the following configuration to application.yml; adjust the values to your actual environment
account:
  ldap:
    url: ldap://127.0.0.1:8080/
    basedn: DC=senz,DC=local
    factory: com.sun.jndi.ldap.LdapCtxFactory
    filter: sAMAccountName
    security:
      authentication: simple
      principal: CN=search,DC=senz,DC=local
      credentials: xxxxxxx
    auth-user-registration: false # whether to register the user into MySQL; default false
    auth-user-registration-role: 1677 # 1677 is the super-admin role id; to grant an ordinary role by default, create one in KS first
# Modify the following configuration in application.yml
spring:
  logi-security:
    login-extend-bean-name: ksLdapLoginService # use the LDAP login service
```
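To verify the bind account and filter outside of KS first, a quick check with ldapsearch (values taken from the config above; the queried username is a placeholder):

```shell
ldapsearch -x -H ldap://127.0.0.1:8080/ \
  -D "CN=search,DC=senz,DC=local" -w xxxxxxx \
  -b "DC=senz,DC=local" "(sAMAccountName=someuser)"
```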
## 8.15 Notes on using Testcontainers for tests
1. A Docker runtime is required: [Testcontainers supported environments](https://www.testcontainers.org/supported_docker_environment/)
2. If there is no Docker on the local machine, a remote Docker daemon can be used: [remote Docker access](https://docs.docker.com/config/daemon/remote-access/), [Testcontainers configuration](https://www.testcontainers.org/features/configuration/#customizing-docker-host-detection); a minimal sketch follows.
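A minimal sketch of pointing the tests at a remote Docker daemon (the host and port are placeholders):

```shell
# either via the standard Docker environment variable
export DOCKER_HOST=tcp://192.168.1.100:2375
# or via the Testcontainers properties file (colons escaped, as its docs require)
echo 'docker.host=tcp\://192.168.1.100\:2375' >> ~/.testcontainers.properties
```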

View File

@@ -62,10 +62,6 @@
<groupId>commons-lang</groupId>
<artifactId>commons-lang</artifactId>
</dependency>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
</dependency>
<dependency>
<groupId>commons-codec</groupId>

View File

@@ -4,8 +4,12 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhysHe
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhysState;
import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.MultiClusterDashboardDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.ClusterPhyBaseVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.ClusterPhyDashboardVO;
import java.util.List;
/**
* Overall state of multiple clusters
*/
@@ -24,4 +28,6 @@ public interface MultiClusterPhyManager {
* @return
*/
PaginationResult<ClusterPhyDashboardVO> getClusterPhysDashboard(MultiClusterDashboardDTO dto);
Result<List<ClusterPhyBaseVO>> getClusterPhysBasic();
}

View File

@@ -140,7 +140,8 @@ public class ClusterBrokersManagerImpl implements ClusterBrokersManager {
clusterBrokersStateVO.setKafkaControllerAlive(true);
}
clusterBrokersStateVO.setConfigSimilar(brokerConfigService.countBrokerConfigDiffsFromDB(clusterPhyId, Arrays.asList("broker.id", "listeners", "name", "value")) <= 0);
clusterBrokersStateVO.setConfigSimilar(brokerConfigService.countBrokerConfigDiffsFromDB(clusterPhyId, KafkaConstant.CONFIG_SIMILAR_IGNORED_CONFIG_KEY_LIST) <= 0
);
return clusterBrokersStateVO;
}

View File

@@ -136,13 +136,13 @@ public class ClusterConnectorsManagerImpl implements ClusterConnectorsManager {
private PaginationResult<ClusterConnectorOverviewVO> pagingConnectorInLocal(List<ClusterConnectorOverviewVO> connectorVOList, ClusterConnectorsOverviewDTO dto) {
//fuzzy matching
-connectorVOList = PaginationUtil.pageByFuzzyFilter(connectorVOList, dto.getSearchKeywords(), Arrays.asList("connectClusterName"));
+connectorVOList = PaginationUtil.pageByFuzzyFilter(connectorVOList, dto.getSearchKeywords(), Arrays.asList("connectorName"));
//sorting
if (!dto.getLatestMetricNames().isEmpty()) {
-PaginationMetricsUtil.sortMetrics(connectorVOList, "latestMetrics", dto.getSortMetricNameList(), "connectClusterName", dto.getSortType());
+PaginationMetricsUtil.sortMetrics(connectorVOList, "latestMetrics", dto.getSortMetricNameList(), "connectorName", dto.getSortType());
} else {
-PaginationUtil.pageBySort(connectorVOList, dto.getSortField(), dto.getSortType(), "connectClusterName", dto.getSortType());
+PaginationUtil.pageBySort(connectorVOList, dto.getSortField(), dto.getSortType(), "connectorName", dto.getSortType());
}
//pagination

View File

@@ -14,10 +14,12 @@ import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.res.ClusterPhyTop
import com.xiaojukeji.know.streaming.km.common.bean.vo.metrics.line.MetricMultiLinesVO;
import com.xiaojukeji.know.streaming.km.common.constant.KafkaConstant;
import com.xiaojukeji.know.streaming.km.common.converter.TopicVOConverter;
import com.xiaojukeji.know.streaming.km.common.enums.ha.HaResTypeEnum;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.PaginationMetricsUtil;
import com.xiaojukeji.know.streaming.km.common.utils.PaginationUtil;
import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
import com.xiaojukeji.know.streaming.km.core.service.ha.HaActiveStandbyRelationService;
import com.xiaojukeji.know.streaming.km.core.service.topic.TopicMetricService;
import com.xiaojukeji.know.streaming.km.core.service.topic.TopicService;
import org.springframework.beans.factory.annotation.Autowired;
@@ -38,6 +40,9 @@ public class ClusterTopicsManagerImpl implements ClusterTopicsManager {
@Autowired
private TopicMetricService topicMetricService;
@Autowired
private HaActiveStandbyRelationService haActiveStandbyRelationService;
@Override
public PaginationResult<ClusterPhyTopicsOverviewVO> getClusterPhyTopicsOverview(Long clusterPhyId, ClusterTopicsOverviewDTO dto) {
// fetch all of the cluster's Topic info
@@ -46,8 +51,11 @@ public class ClusterTopicsManagerImpl implements ClusterTopicsManager {
// fetch the metrics of all the cluster's Topics
Map<String, TopicMetrics> metricsMap = topicMetricService.getLatestMetricsFromCache(clusterPhyId);
// fetch the HA info
Set<String> haTopicNameSet = haActiveStandbyRelationService.listByClusterAndType(clusterPhyId, HaResTypeEnum.MIRROR_TOPIC).stream().map(elem -> elem.getResName()).collect(Collectors.toSet());
// convert to VOs
-List<ClusterPhyTopicsOverviewVO> voList = TopicVOConverter.convert2ClusterPhyTopicsOverviewVOList(topicList, metricsMap);
+List<ClusterPhyTopicsOverviewVO> voList = TopicVOConverter.convert2ClusterPhyTopicsOverviewVOList(topicList, metricsMap, haTopicNameSet);
// paginate
PaginationResult<ClusterPhyTopicsOverviewVO> voPaginationResult = this.pagingTopicInLocal(voList, dto);

View File

@@ -9,13 +9,12 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhysHe
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhysState;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.MultiClusterDashboardDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.kafkacontroller.KafkaController;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.ClusterMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.ClusterPhyBaseVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.ClusterPhyDashboardVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.metrics.line.MetricMultiLinesVO;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.converter.ClusterVOConverter;
import com.xiaojukeji.know.streaming.km.common.enums.health.HealthStateEnum;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
@@ -24,7 +23,6 @@ import com.xiaojukeji.know.streaming.km.common.utils.PaginationMetricsUtil;
import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterMetricService;
import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
import com.xiaojukeji.know.streaming.km.core.service.kafkacontroller.KafkaControllerService;
import com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.ClusterMetricVersionItems;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
@@ -42,37 +40,26 @@ public class MultiClusterPhyManagerImpl implements MultiClusterPhyManager {
@Autowired
private ClusterMetricService clusterMetricService;
@Autowired
private KafkaControllerService kafkaControllerService;
@Override
public ClusterPhysState getClusterPhysState() {
List<ClusterPhy> clusterPhyList = clusterPhyService.listAllClusters();
ClusterPhysState physState = new ClusterPhysState(0, 0, 0, clusterPhyList.size());
Map<Long, KafkaController> controllerMap = kafkaControllerService.getKafkaControllersFromDB(
clusterPhyList.stream().map(elem -> elem.getId()).collect(Collectors.toList()),
false
);
ClusterPhysState physState = new ClusterPhysState(0, 0, clusterPhyList.size());
for (ClusterPhy clusterPhy: clusterPhyList) {
KafkaController kafkaController = controllerMap.get(clusterPhy.getId());
if (kafkaController != null && !kafkaController.alive()) {
// explicit information shows the controller is down
physState.setDownCount(physState.getDownCount() + 1);
} else if ((System.currentTimeMillis() - clusterPhy.getCreateTime().getTime() >= 5 * 60 * 1000) && kafkaController == null) {
// if the cluster was added more than 5 minutes ago and no kafkaController info exists, mark it down
for (ClusterPhy clusterPhy : clusterPhyList) {
ClusterMetrics metrics = clusterMetricService.getLatestMetricsFromCache(clusterPhy.getId());
Float state = metrics.getMetric(ClusterMetricVersionItems.CLUSTER_METRIC_HEALTH_STATE);
if (state == null) {
physState.setUnknownCount(physState.getUnknownCount() + 1);
} else if (state.intValue() == HealthStateEnum.DEAD.getDimension()) {
physState.setDownCount(physState.getDownCount() + 1);
} else {
// everything else counts as alive
physState.setLiveCount(physState.getLiveCount() + 1);
}
}
return physState;
}
@Override
public ClusterPhysHealthState getClusterPhysHealthState() {
List<ClusterPhy> clusterPhyList = clusterPhyService.listAllClusters();
@@ -107,23 +94,6 @@ public class MultiClusterPhyManagerImpl implements MultiClusterPhyManager {
// convert to VO format for later pagination and filtering
List<ClusterPhyDashboardVO> voList = ConvertUtil.list2List(clusterPhyList, ClusterPhyDashboardVO.class);
// fetch the clusters' controller info and fill it into the VOs
Map<Long, KafkaController> controllerMap = kafkaControllerService.getKafkaControllersFromDB(clusterPhyList.stream().map(elem -> elem.getId()).collect(Collectors.toList()), false);
for (ClusterPhyDashboardVO vo: voList) {
KafkaController kafkaController = controllerMap.get(vo.getId());
if (kafkaController != null && !kafkaController.alive()) {
// explicit information shows the controller is down
vo.setAlive(Constant.DOWN);
} else if ((System.currentTimeMillis() - vo.getCreateTime().getTime() >= 5 * 60L * 1000L) && kafkaController == null) {
// if the cluster was added more than 5 minutes ago and no kafkaController info exists, mark it down
vo.setAlive(Constant.DOWN);
} else {
// everything else counts as alive
vo.setAlive(Constant.ALIVE);
}
}
// local pagination and filtering
voList = this.getAndPagingDataInLocal(voList, dto);
@@ -148,6 +118,15 @@ public class MultiClusterPhyManagerImpl implements MultiClusterPhyManager {
);
}
@Override
public Result<List<ClusterPhyBaseVO>> getClusterPhysBasic() {
// fetch the clusters
List<ClusterPhy> clusterPhyList = clusterPhyService.listAllClusters();
// convert to VO format for later pagination and filtering
return Result.buildSuc(ConvertUtil.list2List(clusterPhyList, ClusterPhyBaseVO.class));
}
/**************************************************** private method ****************************************************/

View File

@@ -10,6 +10,7 @@ public interface ConnectorManager {
Result<Void> updateConnectorConfig(Long connectClusterId, String connectorName, Properties configs, String operator);
Result<Void> createConnector(ConnectorCreateDTO dto, String operator);
Result<Void> createConnector(ConnectorCreateDTO dto, String heartbeatName, String checkpointName, String operator);
Result<ConnectorStateVO> getConnectorStateVO(Long connectClusterId, String connectorName);
}

View File

@@ -1,7 +1,5 @@
package com.xiaojukeji.know.streaming.km.biz.connect.connector.impl;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.biz.connect.connector.ConnectorManager;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector.ConnectorCreateDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.WorkerConnector;
@@ -12,6 +10,7 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.ResultStatus;
import com.xiaojukeji.know.streaming.km.common.bean.po.connect.ConnectorPO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.connect.connector.ConnectorStateVO;
import com.xiaojukeji.know.streaming.km.common.constant.connect.KafkaConnectConstant;
import com.xiaojukeji.know.streaming.km.core.service.connect.connector.ConnectorService;
import com.xiaojukeji.know.streaming.km.core.service.connect.plugin.PluginService;
import com.xiaojukeji.know.streaming.km.core.service.connect.worker.WorkerConnectorService;
@@ -25,8 +24,6 @@ import java.util.stream.Collectors;
@Service
public class ConnectorManagerImpl implements ConnectorManager {
private static final ILog LOGGER = LogFactory.getLog(ConnectorManagerImpl.class);
@Autowired
private PluginService pluginService;
@@ -52,6 +49,8 @@ public class ConnectorManagerImpl implements ConnectorManager {
@Override
public Result<Void> createConnector(ConnectorCreateDTO dto, String operator) {
dto.getConfigs().put(KafkaConnectConstant.MIRROR_MAKER_NAME_FIELD_NAME, dto.getConnectorName());
Result<KSConnectorInfo> createResult = connectorService.createConnector(dto.getConnectClusterId(), dto.getConnectorName(), dto.getConfigs(), operator);
if (createResult.failed()) {
return Result.buildFromIgnoreData(createResult);
@@ -66,6 +65,29 @@ public class ConnectorManagerImpl implements ConnectorManager {
return Result.buildSuc();
}
@Override
public Result<Void> createConnector(ConnectorCreateDTO dto, String heartbeatName, String checkpointName, String operator) {
dto.getConfigs().put(KafkaConnectConstant.MIRROR_MAKER_NAME_FIELD_NAME, dto.getConnectorName());
Result<KSConnectorInfo> createResult = connectorService.createConnector(dto.getConnectClusterId(), dto.getConnectorName(), dto.getConfigs(), operator);
if (createResult.failed()) {
return Result.buildFromIgnoreData(createResult);
}
Result<KSConnector> ksConnectorResult = connectorService.getAllConnectorInfoFromCluster(dto.getConnectClusterId(), dto.getConnectorName());
if (ksConnectorResult.failed()) {
return Result.buildFromRSAndMsg(ResultStatus.SUCCESS, "Created successfully, but fetching the metadata failed; the page metadata may lag by about 1 minute");
}
KSConnector connector = ksConnectorResult.getData();
connector.setCheckpointConnectorName(checkpointName);
connector.setHeartbeatConnectorName(heartbeatName);
connectorService.addNewToDB(connector);
return Result.buildSuc();
}
@Override
public Result<ConnectorStateVO> getConnectorStateVO(Long connectClusterId, String connectorName) {
ConnectorPO connectorPO = connectorService.getConnectorFromDB(connectClusterId, connectorName);

View File

@@ -0,0 +1,43 @@
package com.xiaojukeji.know.streaming.km.biz.connect.mm2;
import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterMirrorMakersOverviewDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.mm2.MirrorMakerCreateDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.mm2.ClusterMirrorMakerOverviewVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.mm2.MirrorMakerBaseStateVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.mm2.MirrorMakerStateVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.connect.plugin.ConnectConfigInfosVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.connect.task.KCTaskOverviewVO;
import java.util.List;
import java.util.Map;
import java.util.Properties;
/**
* @author wyb
* @date 2022/12/26
*/
public interface MirrorMakerManager {
Result<Void> createMirrorMaker(MirrorMakerCreateDTO dto, String operator);
Result<Void> deleteMirrorMaker(Long connectClusterId, String sourceConnectorName, String operator);
Result<Void> modifyMirrorMakerConfig(MirrorMakerCreateDTO dto, String operator);
Result<Void> restartMirrorMaker(Long connectClusterId, String sourceConnectorName, String operator);
Result<Void> stopMirrorMaker(Long connectClusterId, String sourceConnectorName, String operator);
Result<Void> resumeMirrorMaker(Long connectClusterId, String sourceConnectorName, String operator);
Result<MirrorMakerStateVO> getMirrorMakerStateVO(Long clusterPhyId);
PaginationResult<ClusterMirrorMakerOverviewVO> getClusterMirrorMakersOverview(Long clusterPhyId, ClusterMirrorMakersOverviewDTO dto);
Result<MirrorMakerBaseStateVO> getMirrorMakerState(Long connectId, String connectName);
Result<Map<String, List<KCTaskOverviewVO>>> getTaskOverview(Long connectClusterId, String connectorName);
Result<List<Properties>> getMM2Configs(Long connectClusterId, String connectorName);
Result<List<ConnectConfigInfosVO>> validateConnectors(MirrorMakerCreateDTO dto);
}
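A hedged lifecycle sketch for this interface, assuming an injected MirrorMakerManager; the cluster ID, connector name, and operator are placeholders (see the DTO assembly example after MirrorMakerManagerImpl below for createDTO):

    Result<Void> created = mirrorMakerManager.createMirrorMaker(createDTO, "admin");
    if (created.successful()) {
        mirrorMakerManager.stopMirrorMaker(1L, "mm2-source", "admin");    // pauses source + checkpoint + heartbeat
        mirrorMakerManager.resumeMirrorMaker(1L, "mm2-source", "admin");
        mirrorMakerManager.deleteMirrorMaker(1L, "mm2-source", "admin");  // companion connectors are removed first
    }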

View File

@@ -0,0 +1,652 @@
package com.xiaojukeji.know.streaming.km.biz.connect.mm2.impl;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.biz.connect.connector.ConnectorManager;
import com.xiaojukeji.know.streaming.km.biz.connect.mm2.MirrorMakerManager;
import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterMirrorMakersOverviewDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.ClusterConnectorDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector.ConnectorCreateDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.mm2.MirrorMakerCreateDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.MetricDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.mm2.MetricsMirrorMakersDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.ConnectCluster;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.ConnectWorker;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.WorkerConnector;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.config.ConnectConfigInfos;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.connector.KSConnectorInfo;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.mm2.MirrorMakerMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.ResultStatus;
import com.xiaojukeji.know.streaming.km.common.bean.po.connect.ConnectorPO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.mm2.ClusterMirrorMakerOverviewVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.mm2.MirrorMakerBaseStateVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.mm2.MirrorMakerStateVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.connect.plugin.ConnectConfigInfosVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.metrics.line.MetricLineVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.metrics.line.MetricMultiLinesVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.connect.task.KCTaskOverviewVO;
import com.xiaojukeji.know.streaming.km.common.constant.MsgConstant;
import com.xiaojukeji.know.streaming.km.common.utils.*;
import com.xiaojukeji.know.streaming.km.common.constant.connect.KafkaConnectConstant;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.MirrorMakerUtil;
import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
import com.xiaojukeji.know.streaming.km.core.service.connect.cluster.ConnectClusterService;
import com.xiaojukeji.know.streaming.km.core.service.connect.connector.ConnectorService;
import com.xiaojukeji.know.streaming.km.core.service.connect.mm2.MirrorMakerMetricService;
import com.xiaojukeji.know.streaming.km.core.service.connect.plugin.PluginService;
import com.xiaojukeji.know.streaming.km.core.service.connect.worker.WorkerConnectorService;
import com.xiaojukeji.know.streaming.km.core.service.connect.worker.WorkerService;
import com.xiaojukeji.know.streaming.km.core.utils.ApiCallThreadPoolService;
import com.xiaojukeji.know.streaming.km.persistence.cache.LoadedClusterPhyCache;
import org.apache.commons.lang.StringUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import java.util.*;
import java.util.function.Function;
import java.util.stream.Collectors;
import static org.apache.kafka.connect.runtime.AbstractStatus.State.RUNNING;
import static com.xiaojukeji.know.streaming.km.common.constant.connect.KafkaConnectConstant.*;
/**
* @author wyb
* @date 2022/12/26
*/
@Service
public class MirrorMakerManagerImpl implements MirrorMakerManager {
private static final ILog LOGGER = LogFactory.getLog(MirrorMakerManagerImpl.class);
@Autowired
private ConnectorService connectorService;
@Autowired
private WorkerConnectorService workerConnectorService;
@Autowired
private WorkerService workerService;
@Autowired
private ConnectorManager connectorManager;
@Autowired
private ClusterPhyService clusterPhyService;
@Autowired
private MirrorMakerMetricService mirrorMakerMetricService;
@Autowired
private ConnectClusterService connectClusterService;
@Autowired
private PluginService pluginService;
@Override
public Result<Void> createMirrorMaker(MirrorMakerCreateDTO dto, String operator) {
// Check the basic parameters
Result<Void> rv = this.checkCreateMirrorMakerParamAndUnifyData(dto);
if (rv.failed()) {
return rv;
}
// Create the MirrorSourceConnector; the argument order is heartbeatName, then checkpointName, matching the createConnector declaration
Result<Void> sourceConnectResult = connectorManager.createConnector(
dto,
dto.getHeartbeatConnectorConfigs() != null? MirrorMakerUtil.genHeartbeatName(dto.getConnectorName()): "",
dto.getCheckpointConnectorConfigs() != null? MirrorMakerUtil.genCheckpointName(dto.getConnectorName()): "",
operator
);
if (sourceConnectResult.failed()) {
// Creation failed, return immediately
return Result.buildFromIgnoreData(sourceConnectResult);
}
// Create the checkpoint task
Result<Void> checkpointResult = Result.buildSuc();
if (dto.getCheckpointConnectorConfigs() != null) {
checkpointResult = connectorManager.createConnector(
new ConnectorCreateDTO(dto.getConnectClusterId(), MirrorMakerUtil.genCheckpointName(dto.getConnectorName()), dto.getCheckpointConnectorConfigs()),
operator
);
}
// Create the heartbeat task
Result<Void> heartbeatResult = Result.buildSuc();
if (dto.getHeartbeatConnectorConfigs() != null) {
heartbeatResult = connectorManager.createConnector(
new ConnectorCreateDTO(dto.getConnectClusterId(), MirrorMakerUtil.genHeartbeatName(dto.getConnectorName()), dto.getHeartbeatConnectorConfigs()),
operator
);
}
// Both succeeded
if (checkpointResult.successful() && heartbeatResult.successful()) {
return Result.buildSuc();
} else if (checkpointResult.failed() && heartbeatResult.failed()) {
return Result.buildFromRSAndMsg(
ResultStatus.KAFKA_CONNECTOR_OPERATE_FAILED,
String.format("Failed to create checkpoint & heartbeat.\nThe failure messages are, respectively:%s\n\n%s", checkpointResult.getMessage(), heartbeatResult.getMessage())
);
} else if (checkpointResult.failed()) {
return Result.buildFromRSAndMsg(
ResultStatus.KAFKA_CONNECTOR_OPERATE_FAILED,
String.format("Failed to create checkpoint.\nThe failure message is:%s", checkpointResult.getMessage())
);
} else {
return Result.buildFromRSAndMsg(
ResultStatus.KAFKA_CONNECTOR_OPERATE_FAILED,
String.format("Failed to create heartbeat.\nThe failure message is:%s", heartbeatResult.getMessage())
);
}
}
@Override
public Result<Void> deleteMirrorMaker(Long connectClusterId, String sourceConnectorName, String operator) {
ConnectorPO connectorPO = connectorService.getConnectorFromDB(connectClusterId, sourceConnectorName);
if (connectorPO == null) {
return Result.buildFromRSAndMsg(ResultStatus.NOT_EXIST, MsgConstant.getConnectorNotExist(connectClusterId, sourceConnectorName));
}
Result<Void> rv = Result.buildSuc();
if (!ValidateUtils.isBlank(connectorPO.getCheckpointConnectorName())) {
rv = connectorService.deleteConnector(connectClusterId, connectorPO.getCheckpointConnectorName(), operator);
}
if (rv.failed()) {
return rv;
}
if (!ValidateUtils.isBlank(connectorPO.getHeartbeatConnectorName())) {
rv = connectorService.deleteConnector(connectClusterId, connectorPO.getHeartbeatConnectorName(), operator);
}
if (rv.failed()) {
return rv;
}
return connectorService.deleteConnector(connectClusterId, sourceConnectorName, operator);
}
@Override
public Result<Void> modifyMirrorMakerConfig(MirrorMakerCreateDTO dto, String operator) {
ConnectorPO connectorPO = connectorService.getConnectorFromDB(dto.getConnectClusterId(), dto.getConnectorName());
if (connectorPO == null) {
return Result.buildFromRSAndMsg(ResultStatus.NOT_EXIST, MsgConstant.getConnectorNotExist(dto.getConnectClusterId(), dto.getConnectorName()));
}
Result<Void> rv = Result.buildSuc();
if (!ValidateUtils.isBlank(connectorPO.getCheckpointConnectorName()) && dto.getCheckpointConnectorConfigs() != null) {
rv = connectorService.updateConnectorConfig(dto.getConnectClusterId(), connectorPO.getCheckpointConnectorName(), dto.getCheckpointConnectorConfigs(), operator);
}
if (rv.failed()) {
return rv;
}
if (!ValidateUtils.isBlank(connectorPO.getHeartbeatConnectorName()) && dto.getHeartbeatConnectorConfigs() != null) {
rv = connectorService.updateConnectorConfig(dto.getConnectClusterId(), connectorPO.getHeartbeatConnectorName(), dto.getHeartbeatConnectorConfigs(), operator);
}
if (rv.failed()) {
return rv;
}
return connectorService.updateConnectorConfig(dto.getConnectClusterId(), dto.getConnectorName(), dto.getConfigs(), operator);
}
@Override
public Result<Void> restartMirrorMaker(Long connectClusterId, String sourceConnectorName, String operator) {
ConnectorPO connectorPO = connectorService.getConnectorFromDB(connectClusterId, sourceConnectorName);
if (connectorPO == null) {
return Result.buildFromRSAndMsg(ResultStatus.NOT_EXIST, MsgConstant.getConnectorNotExist(connectClusterId, sourceConnectorName));
}
Result<Void> rv = Result.buildSuc();
if (!ValidateUtils.isBlank(connectorPO.getCheckpointConnectorName())) {
rv = connectorService.restartConnector(connectClusterId, connectorPO.getCheckpointConnectorName(), operator);
}
if (rv.failed()) {
return rv;
}
if (!ValidateUtils.isBlank(connectorPO.getHeartbeatConnectorName())) {
rv = connectorService.restartConnector(connectClusterId, connectorPO.getHeartbeatConnectorName(), operator);
}
if (rv.failed()) {
return rv;
}
return connectorService.restartConnector(connectClusterId, sourceConnectorName, operator);
}
@Override
public Result<Void> stopMirrorMaker(Long connectClusterId, String sourceConnectorName, String operator) {
ConnectorPO connectorPO = connectorService.getConnectorFromDB(connectClusterId, sourceConnectorName);
if (connectorPO == null) {
return Result.buildFromRSAndMsg(ResultStatus.NOT_EXIST, MsgConstant.getConnectorNotExist(connectClusterId, sourceConnectorName));
}
Result<Void> rv = Result.buildSuc();
if (!ValidateUtils.isBlank(connectorPO.getCheckpointConnectorName())) {
rv = connectorService.stopConnector(connectClusterId, connectorPO.getCheckpointConnectorName(), operator);
}
if (rv.failed()) {
return rv;
}
if (!ValidateUtils.isBlank(connectorPO.getHeartbeatConnectorName())) {
rv = connectorService.stopConnector(connectClusterId, connectorPO.getHeartbeatConnectorName(), operator);
}
if (rv.failed()) {
return rv;
}
return connectorService.stopConnector(connectClusterId, sourceConnectorName, operator);
}
@Override
public Result<Void> resumeMirrorMaker(Long connectClusterId, String sourceConnectorName, String operator) {
ConnectorPO connectorPO = connectorService.getConnectorFromDB(connectClusterId, sourceConnectorName);
if (connectorPO == null) {
return Result.buildFromRSAndMsg(ResultStatus.NOT_EXIST, MsgConstant.getConnectorNotExist(connectClusterId, sourceConnectorName));
}
Result<Void> rv = Result.buildSuc();
if (!ValidateUtils.isBlank(connectorPO.getCheckpointConnectorName())) {
rv = connectorService.resumeConnector(connectClusterId, connectorPO.getCheckpointConnectorName(), operator);
}
if (rv.failed()) {
return rv;
}
if (!ValidateUtils.isBlank(connectorPO.getHeartbeatConnectorName())) {
rv = connectorService.resumeConnector(connectClusterId, connectorPO.getHeartbeatConnectorName(), operator);
}
if (rv.failed()) {
return rv;
}
return connectorService.resumeConnector(connectClusterId, sourceConnectorName, operator);
}
@Override
public Result<MirrorMakerStateVO> getMirrorMakerStateVO(Long clusterPhyId) {
List<ConnectorPO> connectorPOList = connectorService.listByKafkaClusterIdFromDB(clusterPhyId);
List<WorkerConnector> workerConnectorList = workerConnectorService.listByKafkaClusterIdFromDB(clusterPhyId);
List<ConnectWorker> workerList = workerService.listByKafkaClusterIdFromDB(clusterPhyId);
return Result.buildSuc(convert2MirrorMakerStateVO(connectorPOList, workerConnectorList, workerList));
}
@Override
public PaginationResult<ClusterMirrorMakerOverviewVO> getClusterMirrorMakersOverview(Long clusterPhyId, ClusterMirrorMakersOverviewDTO dto) {
List<ConnectorPO> mirrorMakerList = connectorService.listByKafkaClusterIdFromDB(clusterPhyId).stream().filter(elem -> elem.getConnectorClassName().equals(MIRROR_MAKER_SOURCE_CONNECTOR_TYPE)).collect(Collectors.toList());
List<ConnectCluster> connectClusterList = connectClusterService.listByKafkaCluster(clusterPhyId);
Result<List<MirrorMakerMetrics>> latestMetricsResult = mirrorMakerMetricService.getLatestMetricsFromES(clusterPhyId,
mirrorMakerList.stream().map(elem -> new Tuple<>(elem.getConnectClusterId(), elem.getConnectorName())).collect(Collectors.toList()),
dto.getLatestMetricNames());
if (latestMetricsResult.failed()) {
LOGGER.error("method=getClusterMirrorMakersOverview||clusterPhyId={}||result={}||errMsg=get latest metric failed", clusterPhyId, latestMetricsResult);
return PaginationResult.buildFailure(latestMetricsResult, dto);
}
List<ClusterMirrorMakerOverviewVO> mirrorMakerOverviewVOList = this.convert2ClusterMirrorMakerOverviewVO(mirrorMakerList, connectClusterList, latestMetricsResult.getData());
List<ClusterMirrorMakerOverviewVO> mirrorMakerVOList = this.completeClusterInfo(mirrorMakerOverviewVOList);
PaginationResult<ClusterMirrorMakerOverviewVO> voPaginationResult = this.pagingMirrorMakerInLocal(mirrorMakerVOList, dto);
if (voPaginationResult.failed()) {
LOGGER.error("method=ClusterMirrorMakerOverviewVO||clusterPhyId={}||result={}||errMsg=pagination in local failed", clusterPhyId, voPaginationResult);
return PaginationResult.buildFailure(voPaginationResult, dto);
}
// Query historical metrics
Result<List<MetricMultiLinesVO>> lineMetricsResult = mirrorMakerMetricService.listMirrorMakerClusterMetricsFromES(
clusterPhyId,
this.buildMetricsConnectorsDTO(
voPaginationResult.getData().getBizData().stream().map(elem -> new ClusterConnectorDTO(elem.getConnectClusterId(), elem.getConnectorName())).collect(Collectors.toList()),
dto.getMetricLines()
));
return PaginationResult.buildSuc(
this.supplyData2ClusterMirrorMakerOverviewVOList(
voPaginationResult.getData().getBizData(),
lineMetricsResult.getData()
),
voPaginationResult
);
}
@Override
public Result<MirrorMakerBaseStateVO> getMirrorMakerState(Long connectClusterId, String connectName) {
// MM2 task
ConnectorPO connectorPO = connectorService.getConnectorFromDB(connectClusterId, connectName);
if (connectorPO == null){
return Result.buildFrom(ResultStatus.NOT_EXIST);
}
List<WorkerConnector> workerConnectorList = workerConnectorService.listFromDB(connectClusterId).stream()
.filter(workerConnector -> workerConnector.getConnectorName().equals(connectorPO.getConnectorName())
|| (!StringUtils.isBlank(connectorPO.getCheckpointConnectorName()) && workerConnector.getConnectorName().equals(connectorPO.getCheckpointConnectorName()))
|| (!StringUtils.isBlank(connectorPO.getHeartbeatConnectorName()) && workerConnector.getConnectorName().equals(connectorPO.getHeartbeatConnectorName())))
.collect(Collectors.toList());
MirrorMakerBaseStateVO mirrorMakerBaseStateVO = new MirrorMakerBaseStateVO();
mirrorMakerBaseStateVO.setTotalTaskCount(workerConnectorList.size());
mirrorMakerBaseStateVO.setAliveTaskCount(workerConnectorList.stream().filter(elem -> elem.getState().equals(RUNNING.name())).collect(Collectors.toList()).size());
mirrorMakerBaseStateVO.setWorkerCount(workerConnectorList.stream().collect(Collectors.groupingBy(WorkerConnector::getWorkerId)).size());
return Result.buildSuc(mirrorMakerBaseStateVO);
}
@Override
public Result<Map<String, List<KCTaskOverviewVO>>> getTaskOverview(Long connectClusterId, String connectorName) {
ConnectorPO connectorPO = connectorService.getConnectorFromDB(connectClusterId, connectorName);
if (connectorPO == null){
return Result.buildFrom(ResultStatus.NOT_EXIST);
}
Map<String, List<KCTaskOverviewVO>> listMap = new HashMap<>();
List<WorkerConnector> workerConnectorList = workerConnectorService.listFromDB(connectClusterId);
if (workerConnectorList.isEmpty()){
return Result.buildSuc(listMap);
}
workerConnectorList.forEach(workerConnector -> {
if (workerConnector.getConnectorName().equals(connectorPO.getConnectorName())){
listMap.putIfAbsent(KafkaConnectConstant.MIRROR_MAKER_SOURCE_CONNECTOR_TYPE, new ArrayList<>());
listMap.get(MIRROR_MAKER_SOURCE_CONNECTOR_TYPE).add(ConvertUtil.obj2Obj(workerConnector, KCTaskOverviewVO.class));
} else if (workerConnector.getConnectorName().equals(connectorPO.getCheckpointConnectorName())) {
listMap.putIfAbsent(KafkaConnectConstant.MIRROR_MAKER_CHECKPOINT_CONNECTOR_TYPE, new ArrayList<>());
listMap.get(MIRROR_MAKER_CHECKPOINT_CONNECTOR_TYPE).add(ConvertUtil.obj2Obj(workerConnector, KCTaskOverviewVO.class));
} else if (workerConnector.getConnectorName().equals(connectorPO.getHeartbeatConnectorName())) {
listMap.putIfAbsent(KafkaConnectConstant.MIRROR_MAKER_HEARTBEAT_CONNECTOR_TYPE, new ArrayList<>());
listMap.get(MIRROR_MAKER_HEARTBEAT_CONNECTOR_TYPE).add(ConvertUtil.obj2Obj(workerConnector, KCTaskOverviewVO.class));
}
});
return Result.buildSuc(listMap);
}
@Override
public Result<List<Properties>> getMM2Configs(Long connectClusterId, String connectorName) {
ConnectorPO connectorPO = connectorService.getConnectorFromDB(connectClusterId, connectorName);
if (connectorPO == null){
return Result.buildFrom(ResultStatus.NOT_EXIST);
}
List<Properties> propList = new ArrayList<>();
// source
Result<KSConnectorInfo> connectorResult = connectorService.getConnectorInfoFromCluster(connectClusterId, connectorPO.getConnectorName());
if (connectorResult.failed()) {
return Result.buildFromIgnoreData(connectorResult);
}
Properties props = new Properties();
props.putAll(connectorResult.getData().getConfig());
propList.add(props);
// checkpoint
if (!ValidateUtils.isBlank(connectorPO.getCheckpointConnectorName())) {
connectorResult = connectorService.getConnectorInfoFromCluster(connectClusterId, connectorPO.getCheckpointConnectorName());
if (connectorResult.failed()) {
return Result.buildFromIgnoreData(connectorResult);
}
props = new Properties();
props.putAll(connectorResult.getData().getConfig());
propList.add(props);
}
// heartbeat
if (!ValidateUtils.isBlank(connectorPO.getHeartbeatConnectorName())) {
connectorResult = connectorService.getConnectorInfoFromCluster(connectClusterId, connectorPO.getHeartbeatConnectorName());
if (connectorResult.failed()) {
return Result.buildFromIgnoreData(connectorResult);
}
props = new Properties();
props.putAll(connectorResult.getData().getConfig());
propList.add(props);
}
return Result.buildSuc(propList);
}
@Override
public Result<List<ConnectConfigInfosVO>> validateConnectors(MirrorMakerCreateDTO dto) {
List<ConnectConfigInfosVO> voList = new ArrayList<>();
Result<ConnectConfigInfos> infoResult = pluginService.validateConfig(dto.getConnectClusterId(), dto.getConfigs());
if (infoResult.failed()) {
return Result.buildFromIgnoreData(infoResult);
}
voList.add(ConvertUtil.obj2Obj(infoResult.getData(), ConnectConfigInfosVO.class));
if (dto.getHeartbeatConnectorConfigs() != null) {
infoResult = pluginService.validateConfig(dto.getConnectClusterId(), dto.getHeartbeatConnectorConfigs());
if (infoResult.failed()) {
return Result.buildFromIgnoreData(infoResult);
}
voList.add(ConvertUtil.obj2Obj(infoResult.getData(), ConnectConfigInfosVO.class));
}
if (dto.getCheckpointConnectorConfigs() != null) {
infoResult = pluginService.validateConfig(dto.getConnectClusterId(), dto.getCheckpointConnectorConfigs());
if (infoResult.failed()) {
return Result.buildFromIgnoreData(infoResult);
}
voList.add(ConvertUtil.obj2Obj(infoResult.getData(), ConnectConfigInfosVO.class));
}
return Result.buildSuc(voList);
}
/**************************************************** private method ****************************************************/
private MetricsMirrorMakersDTO buildMetricsConnectorsDTO(List<ClusterConnectorDTO> connectorDTOList, MetricDTO metricDTO) {
MetricsMirrorMakersDTO dto = ConvertUtil.obj2Obj(metricDTO, MetricsMirrorMakersDTO.class);
dto.setConnectorNameList(connectorDTOList == null? new ArrayList<>(): connectorDTOList);
return dto;
}
public Result<Void> checkCreateMirrorMakerParamAndUnifyData(MirrorMakerCreateDTO dto) {
ClusterPhy sourceClusterPhy = clusterPhyService.getClusterByCluster(dto.getSourceKafkaClusterId());
if (sourceClusterPhy == null) {
return Result.buildFromRSAndMsg(ResultStatus.CLUSTER_NOT_EXIST, MsgConstant.getClusterPhyNotExist(dto.getSourceKafkaClusterId()));
}
ConnectCluster connectCluster = connectClusterService.getById(dto.getConnectClusterId());
if (connectCluster == null) {
return Result.buildFromRSAndMsg(ResultStatus.CLUSTER_NOT_EXIST, MsgConstant.getConnectClusterNotExist(dto.getConnectClusterId()));
}
ClusterPhy targetClusterPhy = clusterPhyService.getClusterByCluster(connectCluster.getKafkaClusterPhyId());
if (targetClusterPhy == null) {
return Result.buildFromRSAndMsg(ResultStatus.CLUSTER_NOT_EXIST, MsgConstant.getClusterPhyNotExist(connectCluster.getKafkaClusterPhyId()));
}
if (!dto.getConfigs().containsKey(CONNECTOR_CLASS_FILED_NAME)) {
return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, "SourceConnector is missing connector.class");
}
if (!MIRROR_MAKER_SOURCE_CONNECTOR_TYPE.equals(dto.getConfigs().getProperty(CONNECTOR_CLASS_FILED_NAME))) {
return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, "SourceConnector has an incorrect connector.class type");
}
if (dto.getCheckpointConnectorConfigs() != null) {
if (!dto.getCheckpointConnectorConfigs().containsKey(CONNECTOR_CLASS_FILED_NAME)) {
return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, "CheckpointConnector is missing connector.class");
}
if (!MIRROR_MAKER_CHECKPOINT_CONNECTOR_TYPE.equals(dto.getCheckpointConnectorConfigs().getProperty(CONNECTOR_CLASS_FILED_NAME))) {
return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, "CheckpointConnector has an incorrect connector.class type");
}
}
if (dto.getHeartbeatConnectorConfigs() != null) {
if (!dto.getHeartbeatConnectorConfigs().containsKey(CONNECTOR_CLASS_FILED_NAME)) {
return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, "HeartbeatConnector is missing connector.class");
}
if (!MIRROR_MAKER_HEARTBEAT_CONNECTOR_TYPE.equals(dto.getHeartbeatConnectorConfigs().getProperty(CONNECTOR_CLASS_FILED_NAME))) {
return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, "HeartbeatConnector has an incorrect connector.class type");
}
}
dto.unifyData(
sourceClusterPhy.getId(), sourceClusterPhy.getBootstrapServers(), ConvertUtil.str2ObjByJson(sourceClusterPhy.getClientProperties(), Properties.class),
targetClusterPhy.getId(), targetClusterPhy.getBootstrapServers(), ConvertUtil.str2ObjByJson(targetClusterPhy.getClientProperties(), Properties.class)
);
return Result.buildSuc();
}
private MirrorMakerStateVO convert2MirrorMakerStateVO(List<ConnectorPO> connectorPOList, List<WorkerConnector> workerConnectorList, List<ConnectWorker> workerList) {
MirrorMakerStateVO mirrorMakerStateVO = new MirrorMakerStateVO();
List<ConnectorPO> sourceSet = connectorPOList.stream().filter(elem -> elem.getConnectorClassName().equals(MIRROR_MAKER_SOURCE_CONNECTOR_TYPE)).collect(Collectors.toList());
mirrorMakerStateVO.setMirrorMakerCount(sourceSet.size());
Set<Long> connectClusterIdSet = sourceSet.stream().map(ConnectorPO::getConnectClusterId).collect(Collectors.toSet());
mirrorMakerStateVO.setWorkerCount(workerList.stream().filter(elem -> connectClusterIdSet.contains(elem.getConnectClusterId())).collect(Collectors.toList()).size());
List<ConnectorPO> mirrorMakerConnectorList = new ArrayList<>();
mirrorMakerConnectorList.addAll(sourceSet);
mirrorMakerConnectorList.addAll(connectorPOList.stream().filter(elem -> elem.getConnectorClassName().equals(MIRROR_MAKER_CHECKPOINT_CONNECTOR_TYPE)).collect(Collectors.toList()));
mirrorMakerConnectorList.addAll(connectorPOList.stream().filter(elem -> elem.getConnectorClassName().equals(MIRROR_MAKER_HEARTBEAT_CONNECTOR_TYPE)).collect(Collectors.toList()));
mirrorMakerStateVO.setTotalConnectorCount(mirrorMakerConnectorList.size());
mirrorMakerStateVO.setAliveConnectorCount(mirrorMakerConnectorList.stream().filter(elem -> elem.getState().equals(RUNNING.name())).collect(Collectors.toList()).size());
Set<String> connectorNameSet = mirrorMakerConnectorList.stream().map(elem -> elem.getConnectorName()).collect(Collectors.toSet());
List<WorkerConnector> taskList = workerConnectorList.stream().filter(elem -> connectorNameSet.contains(elem.getConnectorName())).collect(Collectors.toList());
mirrorMakerStateVO.setTotalTaskCount(taskList.size());
mirrorMakerStateVO.setAliveTaskCount(taskList.stream().filter(elem -> elem.getState().equals(RUNNING.name())).collect(Collectors.toList()).size());
return mirrorMakerStateVO;
}
private List<ClusterMirrorMakerOverviewVO> convert2ClusterMirrorMakerOverviewVO(List<ConnectorPO> mirrorMakerList, List<ConnectCluster> connectClusterList, List<MirrorMakerMetrics> latestMetric) {
List<ClusterMirrorMakerOverviewVO> clusterMirrorMakerOverviewVOList = new ArrayList<>();
Map<String, MirrorMakerMetrics> metricsMap = latestMetric.stream().collect(Collectors.toMap(elem -> elem.getConnectClusterId() + "@" + elem.getConnectorName(), Function.identity()));
Map<Long, ConnectCluster> connectClusterMap = connectClusterList.stream().collect(Collectors.toMap(elem -> elem.getId(), Function.identity()));
for (ConnectorPO mirrorMaker : mirrorMakerList) {
ClusterMirrorMakerOverviewVO clusterMirrorMakerOverviewVO = new ClusterMirrorMakerOverviewVO();
clusterMirrorMakerOverviewVO.setConnectClusterId(mirrorMaker.getConnectClusterId());
clusterMirrorMakerOverviewVO.setConnectClusterName(connectClusterMap.get(mirrorMaker.getConnectClusterId()).getName());
clusterMirrorMakerOverviewVO.setConnectorName(mirrorMaker.getConnectorName());
clusterMirrorMakerOverviewVO.setState(mirrorMaker.getState());
clusterMirrorMakerOverviewVO.setCheckpointConnector(mirrorMaker.getCheckpointConnectorName());
clusterMirrorMakerOverviewVO.setTaskCount(mirrorMaker.getTaskCount());
clusterMirrorMakerOverviewVO.setHeartbeatConnector(mirrorMaker.getHeartbeatConnectorName());
clusterMirrorMakerOverviewVO.setLatestMetrics(metricsMap.getOrDefault(mirrorMaker.getConnectClusterId() + "@" + mirrorMaker.getConnectorName(), new MirrorMakerMetrics(mirrorMaker.getConnectClusterId(), mirrorMaker.getConnectorName())));
clusterMirrorMakerOverviewVOList.add(clusterMirrorMakerOverviewVO);
}
return clusterMirrorMakerOverviewVOList;
}
PaginationResult<ClusterMirrorMakerOverviewVO> pagingMirrorMakerInLocal(List<ClusterMirrorMakerOverviewVO> mirrorMakerOverviewVOList, ClusterMirrorMakersOverviewDTO dto) {
List<ClusterMirrorMakerOverviewVO> mirrorMakerVOList = PaginationUtil.pageByFuzzyFilter(mirrorMakerOverviewVOList, dto.getSearchKeywords(), Arrays.asList("connectorName"));
// Sort
if (!dto.getLatestMetricNames().isEmpty()) {
PaginationMetricsUtil.sortMetrics(mirrorMakerVOList, "latestMetrics", dto.getSortMetricNameList(), "connectorName", dto.getSortType());
} else {
PaginationUtil.pageBySort(mirrorMakerVOList, dto.getSortField(), dto.getSortType(), "connectorName", dto.getSortType());
}
// Paginate
return PaginationUtil.pageBySubData(mirrorMakerVOList, dto);
}
public static List<ClusterMirrorMakerOverviewVO> supplyData2ClusterMirrorMakerOverviewVOList(List<ClusterMirrorMakerOverviewVO> voList,
List<MetricMultiLinesVO> metricLineVOList) {
Map<String, List<MetricLineVO>> metricLineMap = new HashMap<>();
if (metricLineVOList != null) {
for (MetricMultiLinesVO metricMultiLinesVO : metricLineVOList) {
metricMultiLinesVO.getMetricLines()
.forEach(metricLineVO -> {
String key = metricLineVO.getName();
List<MetricLineVO> metricLineVOS = metricLineMap.getOrDefault(key, new ArrayList<>());
metricLineVOS.add(metricLineVO);
metricLineMap.put(key, metricLineVOS);
});
}
}
voList.forEach(elem -> {
elem.setMetricLines(metricLineMap.get(elem.getConnectClusterId() + "#" + elem.getConnectorName()));
});
return voList;
}
private List<ClusterMirrorMakerOverviewVO> completeClusterInfo(List<ClusterMirrorMakerOverviewVO> mirrorMakerVOList) {
Map<String, KSConnectorInfo> connectorInfoMap = new HashMap<>();
for (ClusterMirrorMakerOverviewVO mirrorMakerVO : mirrorMakerVOList) {
ApiCallThreadPoolService.runnableTask(String.format("method=completeClusterInfo||connectClusterId=%d||connectorName=%s||getMirrorMakerInfo", mirrorMakerVO.getConnectClusterId(), mirrorMakerVO.getConnectorName()),
3000,
() -> {
Result<KSConnectorInfo> connectorInfoRet = connectorService.getConnectorInfoFromCluster(mirrorMakerVO.getConnectClusterId(), mirrorMakerVO.getConnectorName());
if (connectorInfoRet.hasData()) {
connectorInfoMap.put(mirrorMakerVO.getConnectClusterId() + mirrorMakerVO.getConnectorName(), connectorInfoRet.getData());
}
return connectorInfoRet.getData();
});
}
ApiCallThreadPoolService.waitResult(1000);
List<ClusterMirrorMakerOverviewVO> newMirrorMakerVOList = new ArrayList<>();
for (ClusterMirrorMakerOverviewVO mirrorMakerVO : mirrorMakerVOList) {
KSConnectorInfo connectorInfo = connectorInfoMap.get(mirrorMakerVO.getConnectClusterId() + mirrorMakerVO.getConnectorName());
if (connectorInfo == null) {
continue;
}
String sourceClusterAlias = connectorInfo.getConfig().get(MIRROR_MAKER_SOURCE_CLUSTER_ALIAS_FIELD_NAME);
String targetClusterAlias = connectorInfo.getConfig().get(MIRROR_MAKER_TARGET_CLUSTER_ALIAS_FIELD_NAME);
// Default to the cluster alias first
mirrorMakerVO.setSourceKafkaClusterName(sourceClusterAlias);
mirrorMakerVO.setDestKafkaClusterName(targetClusterAlias);
if (!ValidateUtils.isBlank(sourceClusterAlias) && CommonUtils.isNumeric(sourceClusterAlias)) {
ClusterPhy clusterPhy = LoadedClusterPhyCache.getByPhyId(Long.valueOf(sourceClusterAlias));
if (clusterPhy != null) {
mirrorMakerVO.setSourceKafkaClusterId(clusterPhy.getId());
mirrorMakerVO.setSourceKafkaClusterName(clusterPhy.getName());
}
}
if (!ValidateUtils.isBlank(targetClusterAlias) && CommonUtils.isNumeric(targetClusterAlias)) {
ClusterPhy clusterPhy = LoadedClusterPhyCache.getByPhyId(Long.valueOf(targetClusterAlias));
if (clusterPhy != null) {
mirrorMakerVO.setDestKafkaClusterId(clusterPhy.getId());
mirrorMakerVO.setDestKafkaClusterName(clusterPhy.getName());
}
}
newMirrorMakerVOList.add(mirrorMakerVO);
}
return newMirrorMakerVOList;
}
}
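To make the create path above concrete, a hedged example of assembling a MirrorMakerCreateDTO. The connector.class values are the standard Kafka MM2 connector classes, which the MIRROR_MAKER_*_CONNECTOR_TYPE constants are assumed to match; the cluster IDs and topics pattern are made up:

    Properties source = new Properties();
    source.put("connector.class", "org.apache.kafka.connect.mirror.MirrorSourceConnector");
    source.put("topics", "orders.*");

    Properties checkpoint = new Properties();
    checkpoint.put("connector.class", "org.apache.kafka.connect.mirror.MirrorCheckpointConnector");

    Properties heartbeat = new Properties();
    heartbeat.put("connector.class", "org.apache.kafka.connect.mirror.MirrorHeartbeatConnector");

    MirrorMakerCreateDTO dto = new MirrorMakerCreateDTO();
    dto.setConnectClusterId(1L);          // Connect cluster that will run the connectors
    dto.setConnectorName("mm2-source");   // name of the MirrorSourceConnector
    dto.setSourceKafkaClusterId(2L);      // Kafka cluster replicated from
    dto.setConfigs(source);
    dto.setCheckpointConnectorConfigs(checkpoint);  // optional, may be null
    dto.setHeartbeatConnectorConfigs(heartbeat);    // optional, may be null
    Result<Void> rv = mirrorMakerManager.createMirrorMaker(dto, "admin");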

View File

@@ -39,5 +39,5 @@ public interface GroupManager {
Result<Void> resetGroupOffsets(GroupOffsetResetDTO dto, String operator) throws Exception;
List<GroupTopicOverviewVO> getGroupTopicOverviewVOList (Long clusterPhyId, List<GroupMemberPO> groupMemberPOList);
List<GroupTopicOverviewVO> getGroupTopicOverviewVOList(Long clusterPhyId, List<GroupMemberPO> groupMemberPOList);
}

View File

@@ -34,6 +34,8 @@ import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafk
import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.ClusterMetricVersionItems.*;
import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.GroupMetricVersionItems.*;
import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.TopicMetricVersionItems.*;
import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.connect.MirrorMakerMetricVersionItems.*;
import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.ZookeeperMetricVersionItems.*;
@Service
public class VersionControlManagerImpl implements VersionControlManager {
@@ -48,6 +50,7 @@ public class VersionControlManagerImpl implements VersionControlManager {
@PostConstruct
public void init(){
// topic
defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_HEALTH_STATE, true));
defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_FAILED_FETCH_REQ, true));
defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_FAILED_PRODUCE_REQ, true));
@@ -58,6 +61,7 @@ public class VersionControlManagerImpl implements VersionControlManager {
defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_BYTES_REJECTED, true));
defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_MESSAGE_IN, true));
// cluster
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_HEALTH_STATE, true));
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_ACTIVE_CONTROLLER_COUNT, true));
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_BYTES_IN, true));
@@ -73,11 +77,13 @@ public class VersionControlManagerImpl implements VersionControlManager {
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_GROUP_REBALANCES, true));
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_JOB_RUNNING, true));
// group
defaultMetrics.add(new UserMetricConfig(METRIC_GROUP.getCode(), GROUP_METRIC_OFFSET_CONSUMED, true));
defaultMetrics.add(new UserMetricConfig(METRIC_GROUP.getCode(), GROUP_METRIC_LAG, true));
defaultMetrics.add(new UserMetricConfig(METRIC_GROUP.getCode(), GROUP_METRIC_STATE, true));
defaultMetrics.add(new UserMetricConfig(METRIC_GROUP.getCode(), GROUP_METRIC_HEALTH_STATE, true));
// broker
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_HEALTH_STATE, true));
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_CONNECTION_COUNT, true));
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_MESSAGE_IN, true));
@@ -91,6 +97,32 @@ public class VersionControlManagerImpl implements VersionControlManager {
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_PARTITIONS_SKEW, true));
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_BYTES_IN, true));
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_BYTES_OUT, true));
// zookeeper
defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_HEALTH_STATE, true));
defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_HEALTH_CHECK_PASSED, true));
defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_HEALTH_CHECK_TOTAL, true));
defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_MAX_REQUEST_LATENCY, true));
defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_OUTSTANDING_REQUESTS, true));
defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_NODE_COUNT, true));
defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_WATCH_COUNT, true));
defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_NUM_ALIVE_CONNECTIONS, true));
defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_PACKETS_RECEIVED, true));
defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_PACKETS_SENT, true));
defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_EPHEMERALS_COUNT, true));
defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_APPROXIMATE_DATA_SIZE, true));
defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_OPEN_FILE_DESCRIPTOR_COUNT, true));
defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_KAFKA_ZK_DISCONNECTS_PER_SEC, true));
defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_KAFKA_ZK_SYNC_CONNECTS_PER_SEC, true));
defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_KAFKA_ZK_REQUEST_LATENCY_99TH, true));
// mm2
defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_MIRROR_MAKER.getCode(), MIRROR_MAKER_METRIC_BYTE_COUNT, true));
defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_MIRROR_MAKER.getCode(), MIRROR_MAKER_METRIC_BYTE_RATE, true));
defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_MIRROR_MAKER.getCode(), MIRROR_MAKER_METRIC_RECORD_AGE_MS_MAX, true));
defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_MIRROR_MAKER.getCode(), MIRROR_MAKER_METRIC_RECORD_COUNT, true));
defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_MIRROR_MAKER.getCode(), MIRROR_MAKER_METRIC_RECORD_RATE, true));
defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_MIRROR_MAKER.getCode(), MIRROR_MAKER_METRIC_REPLICATION_LATENCY_MS_MAX, true));
}
@Autowired

View File

@@ -82,6 +82,11 @@ public class ConnectConnectorMetricCollector extends AbstractConnectMetricCollec
for (VersionControlItem v : items) {
try {
// Skip metrics that have already been collected
if (metrics.getMetrics().get(v.getName()) != null) {
continue;
}
Result<ConnectorMetrics> ret = connectorMetricService.collectConnectClusterMetricsFromKafka(connectClusterId, connectorName, v.getName(), connectorType);
if (null == ret || ret.failed() || null == ret.getData()) {
continue;

View File

@@ -0,0 +1,117 @@
package com.xiaojukeji.know.streaming.km.collector.metric.connect.mm2;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.collector.metric.connect.AbstractConnectMetricCollector;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.ConnectCluster;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.mm2.MirrorMakerTopic;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.mm2.MirrorMakerMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionControlItem;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.mm2.MirrorMakerMetricEvent;
import com.xiaojukeji.know.streaming.km.common.bean.po.connect.ConnectorPO;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
import com.xiaojukeji.know.streaming.km.core.service.connect.connector.ConnectorService;
import com.xiaojukeji.know.streaming.km.core.service.connect.mm2.MirrorMakerMetricService;
import com.xiaojukeji.know.streaming.km.core.service.connect.mm2.MirrorMakerService;
import com.xiaojukeji.know.streaming.km.core.service.version.VersionControlService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import static com.xiaojukeji.know.streaming.km.common.constant.connect.KafkaConnectConstant.MIRROR_MAKER_SOURCE_CONNECTOR_TYPE;
import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum.METRIC_CONNECT_MIRROR_MAKER;
/**
* @author wyb
* @date 2022/12/15
*/
@Component
public class MirrorMakerMetricCollector extends AbstractConnectMetricCollector<MirrorMakerMetrics> {
protected static final ILog LOGGER = LogFactory.getLog(MirrorMakerMetricCollector.class);
@Autowired
private VersionControlService versionControlService;
@Autowired
private MirrorMakerService mirrorMakerService;
@Autowired
private ConnectorService connectorService;
@Autowired
private MirrorMakerMetricService mirrorMakerMetricService;
@Override
public VersionItemTypeEnum collectorType() {
return METRIC_CONNECT_MIRROR_MAKER;
}
@Override
public List<MirrorMakerMetrics> collectConnectMetrics(ConnectCluster connectCluster) {
Long clusterPhyId = connectCluster.getKafkaClusterPhyId();
Long connectClusterId = connectCluster.getId();
List<ConnectorPO> mirrorMakerList = connectorService.listByConnectClusterIdFromDB(connectClusterId).stream().filter(elem -> elem.getConnectorClassName().equals(MIRROR_MAKER_SOURCE_CONNECTOR_TYPE)).collect(Collectors.toList());
Map<String, MirrorMakerTopic> mirrorMakerTopicMap = mirrorMakerService.getMirrorMakerTopicMap(connectClusterId).getData();
List<VersionControlItem> items = versionControlService.listVersionControlItem(this.getClusterVersion(connectCluster), collectorType().getCode());
FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(clusterPhyId);
List<MirrorMakerMetrics> metricsList = new ArrayList<>();
for (ConnectorPO mirrorMaker : mirrorMakerList) {
MirrorMakerMetrics metrics = new MirrorMakerMetrics(clusterPhyId, connectClusterId, mirrorMaker.getConnectorName());
metricsList.add(metrics);
List<MirrorMakerTopic> mirrorMakerTopicList = mirrorMakerService.getMirrorMakerTopicList(mirrorMaker, mirrorMakerTopicMap);
future.runnableTask(String.format("class=MirrorMakerMetricCollector||connectClusterId=%d||mirrorMakerName=%s", connectClusterId, mirrorMaker.getConnectorName()),
30000,
() -> collectMetrics(connectClusterId, mirrorMaker.getConnectorName(), metrics, items, mirrorMakerTopicList));
}
future.waitResult(30000);
this.publishMetric(new MirrorMakerMetricEvent(this, metricsList));
return metricsList;
}
/**************************************************** private method ****************************************************/
private void collectMetrics(Long connectClusterId, String mirrorMakerName, MirrorMakerMetrics metrics, List<VersionControlItem> items, List<MirrorMakerTopic> mirrorMakerTopicList) {
long startTime = System.currentTimeMillis();
metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
for (VersionControlItem v : items) {
try {
// Skip metrics that have already been collected
if (metrics.getMetrics().get(v.getName()) != null) {
continue;
}
Result<MirrorMakerMetrics> ret = mirrorMakerMetricService.collectMirrorMakerMetricsFromKafka(connectClusterId, mirrorMakerName, mirrorMakerTopicList, v.getName());
if (ret == null || !ret.hasData()) {
continue;
}
metrics.putMetric(ret.getData().getMetrics());
} catch (Exception e) {
LOGGER.error(
"method=collectMetrics||connectClusterId={}||mirrorMakerName={}||metric={}||errMsg=exception!",
connectClusterId, mirrorMakerName, v.getName(), e
);
}
}
metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (System.currentTimeMillis() - startTime) / 1000.0f);
}
}

View File

@@ -1,114 +0,0 @@
package com.xiaojukeji.know.streaming.km.collector.metric.kafka;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.ReplicationMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.partition.Partition;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionControlItem;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.ReplicaMetricEvent;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
import com.xiaojukeji.know.streaming.km.core.service.partition.PartitionService;
import com.xiaojukeji.know.streaming.km.core.service.replica.ReplicaMetricService;
import com.xiaojukeji.know.streaming.km.core.service.version.VersionControlService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import java.util.ArrayList;
import java.util.List;
import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum.METRIC_REPLICATION;
/**
* @author didi
*/
@Component
public class ReplicaMetricCollector extends AbstractKafkaMetricCollector<ReplicationMetrics> {
protected static final ILog LOGGER = LogFactory.getLog(ReplicaMetricCollector.class);
@Autowired
private VersionControlService versionControlService;
@Autowired
private ReplicaMetricService replicaMetricService;
@Autowired
private PartitionService partitionService;
@Override
public List<ReplicationMetrics> collectKafkaMetrics(ClusterPhy clusterPhy) {
Long clusterPhyId = clusterPhy.getId();
List<Partition> partitions = partitionService.listPartitionFromCacheFirst(clusterPhyId);
List<VersionControlItem> items = versionControlService.listVersionControlItem(this.getClusterVersion(clusterPhy), collectorType().getCode());
FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(clusterPhyId);
List<ReplicationMetrics> metricsList = new ArrayList<>();
for(Partition partition : partitions) {
for (Integer brokerId: partition.getAssignReplicaList()) {
ReplicationMetrics metrics = new ReplicationMetrics(clusterPhyId, partition.getTopicName(), brokerId, partition.getPartitionId());
metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
metricsList.add(metrics);
future.runnableTask(
String.format("class=ReplicaMetricCollector||clusterPhyId=%d||brokerId=%d||topicName=%s||partitionId=%d",
clusterPhyId, brokerId, partition.getTopicName(), partition.getPartitionId()),
30000,
() -> collectMetrics(clusterPhyId, metrics, items)
);
}
}
future.waitExecute(30000);
publishMetric(new ReplicaMetricEvent(this, metricsList));
return metricsList;
}
@Override
public VersionItemTypeEnum collectorType() {
return METRIC_REPLICATION;
}
/**************************************************** private method ****************************************************/
private ReplicationMetrics collectMetrics(Long clusterPhyId, ReplicationMetrics metrics, List<VersionControlItem> items) {
long startTime = System.currentTimeMillis();
for(VersionControlItem v : items) {
try {
if (metrics.getMetrics().containsKey(v.getName())) {
continue;
}
Result<ReplicationMetrics> ret = replicaMetricService.collectReplicaMetricsFromKafka(
clusterPhyId,
metrics.getTopic(),
metrics.getBrokerId(),
metrics.getPartitionId(),
v.getName()
);
if (null == ret || ret.failed() || null == ret.getData()) {
continue;
}
metrics.putMetric(ret.getData().getMetrics());
} catch (Exception e) {
LOGGER.error(
"method=collectMetrics||clusterPhyId={}||topicName={}||partition={}||metricName={}||errMsg=exception!",
clusterPhyId, metrics.getTopic(), metrics.getPartitionId(), v.getName(), e
);
}
}
// Record the collection cost time
metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (System.currentTimeMillis() - startTime) / 1000.0f);
return metrics;
}
}

View File

@@ -1,29 +0,0 @@
package com.xiaojukeji.know.streaming.km.collector.sink.kafka;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.ReplicaMetricEvent;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.ReplicationMetricPO;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;
import javax.annotation.PostConstruct;
import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.REPLICATION_INDEX;
@Component
public class ReplicaMetricESSender extends AbstractMetricESSender implements ApplicationListener<ReplicaMetricEvent> {
private static final ILog LOGGER = LogFactory.getLog(ReplicaMetricESSender.class);
@PostConstruct
public void init(){
LOGGER.info("method=init||msg=init finished");
}
@Override
public void onApplicationEvent(ReplicaMetricEvent event) {
send2es(REPLICATION_INDEX, ConvertUtil.list2List(event.getReplicationMetrics(), ReplicationMetricPO.class));
}
}

View File

@@ -0,0 +1,33 @@
package com.xiaojukeji.know.streaming.km.collector.sink.mm2;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.mm2.MirrorMakerMetricEvent;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.mm2.MirrorMakerMetricPO;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;
import javax.annotation.PostConstruct;
import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.CONNECT_MM2_INDEX;
/**
* @author zengqiao
* @date 2022/12/20
*/
@Component
public class MirrorMakerMetricESSender extends AbstractMetricESSender implements ApplicationListener<MirrorMakerMetricEvent> {
protected static final ILog LOGGER = LogFactory.getLog(MirrorMakerMetricESSender.class);
@PostConstruct
public void init(){
LOGGER.info("method=init||msg=init finished");
}
@Override
public void onApplicationEvent(MirrorMakerMetricEvent event) {
send2es(CONNECT_MM2_INDEX, ConvertUtil.list2List(event.getMetricsList(), MirrorMakerMetricPO.class));
}
}
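Read together with MirrorMakerMetricCollector above, the MM2 metric path is event-driven. A short sketch of the hand-off, assuming the collector's publishMetric helper delegates to Spring's ApplicationEventPublisher:

    // In the collector (sketch):
    this.publishMetric(new MirrorMakerMetricEvent(this, metricsList));
    // Spring then invokes onApplicationEvent above, which converts the payload
    // to MirrorMakerMetricPO documents and bulk-writes them into CONNECT_MM2_INDEX.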

View File

@@ -81,10 +81,6 @@
<version>3.0.2</version>
</dependency>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
</dependency>
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>

View File

@@ -1,19 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.cluster;
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationMulFuzzySearchDTO;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
/**
* @author zengqiao
* @date 22/02/24
*/
@Data
public class ClusterGroupsOverviewDTO extends PaginationMulFuzzySearchDTO {
@ApiModelProperty("查找该Topic")
private String topicName;
@ApiModelProperty("查找该Group")
private String groupName;
}

View File

@@ -0,0 +1,12 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.cluster;
import lombok.Data;
/**
* @author zengqiao
* @date 22/12/12
*/
@Data
public class ClusterMirrorMakersOverviewDTO extends ClusterConnectorsOverviewDTO {
}

View File

@@ -19,11 +19,11 @@ import javax.validation.constraints.NotNull;
public class ClusterConnectorDTO extends BaseDTO {
@NotNull(message = "connectClusterId不允许为空")
@ApiModelProperty(value = "Connector集群ID", example = "1")
private Long connectClusterId;
protected Long connectClusterId;
@NotBlank(message = "name不允许为空串")
@ApiModelProperty(value = "Connector名称", example = "know-streaming-connector")
private String connectorName;
protected String connectorName;
public ClusterConnectorDTO(Long connectClusterId, String connectorName) {
this.connectClusterId = connectClusterId;

View File

@@ -1,21 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.ClusterConnectorDTO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import javax.validation.constraints.NotNull;
import java.util.Properties;
/**
* @author zengqiao
* @date 2022-10-17
*/
@Data
@ApiModel(description = "修改Connector配置")
public class ConnectorConfigModifyDTO extends ClusterConnectorDTO {
@NotNull(message = "configs不允许为空")
@ApiModelProperty(value = "配置", example = "")
private Properties configs;
}

View File

@@ -4,6 +4,7 @@ import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.ClusterConnector
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import lombok.NoArgsConstructor;
import javax.validation.constraints.NotNull;
import java.util.Properties;
@@ -13,9 +14,15 @@ import java.util.Properties;
* @date 2022-10-17
*/
@Data
@NoArgsConstructor
@ApiModel(description = "创建Connector")
public class ConnectorCreateDTO extends ClusterConnectorDTO {
@NotNull(message = "configs不允许为空")
@ApiModelProperty(value = "配置", example = "")
private Properties configs;
protected Properties configs;
public ConnectorCreateDTO(Long connectClusterId, String connectorName, Properties configs) {
super(connectClusterId, connectorName);
this.configs = configs;
}
}

View File

@@ -0,0 +1,15 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.connect.mm2;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector.ConnectorActionDTO;
import io.swagger.annotations.ApiModel;
import lombok.Data;
/**
* @author zengqiao
* @date 2022-12-12
*/
@Data
@ApiModel(description = "操作MM2")
public class MirrorMaker2ActionDTO extends ConnectorActionDTO {
}

View File

@@ -0,0 +1,14 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.connect.mm2;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector.ConnectorDeleteDTO;
import io.swagger.annotations.ApiModel;
import lombok.Data;
/**
* @author zengqiao
* @date 2022-12-12
*/
@Data
@ApiModel(description = "删除MM2")
public class MirrorMaker2DeleteDTO extends ConnectorDeleteDTO {
}

View File

@@ -0,0 +1,69 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.connect.mm2;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector.ConnectorCreateDTO;
import com.xiaojukeji.know.streaming.km.common.constant.connect.KafkaConnectConstant;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import org.apache.kafka.clients.CommonClientConfigs;
import javax.validation.Valid;
import javax.validation.constraints.NotNull;
import java.util.Properties;
/**
* @author zengqiao
* @date 2022-12-12
*/
@Data
@ApiModel(description = "创建MM2")
public class MirrorMakerCreateDTO extends ConnectorCreateDTO {
@NotNull(message = "sourceKafkaClusterId不允许为空")
@ApiModelProperty(value = "源Kafka集群ID", example = "")
private Long sourceKafkaClusterId;
@Valid
@ApiModelProperty(value = "heartbeat-connector的信息", example = "")
private Properties heartbeatConnectorConfigs;
@Valid
@ApiModelProperty(value = "checkpoint-connector的信息", example = "")
private Properties checkpointConnectorConfigs;
public void unifyData(Long sourceKafkaClusterId, String sourceBootstrapServers, Properties sourceKafkaProps,
Long targetKafkaClusterId, String targetBootstrapServers, Properties targetKafkaProps) {
if (sourceKafkaProps == null) {
sourceKafkaProps = new Properties();
}
if (targetKafkaProps == null) {
targetKafkaProps = new Properties();
}
this.unifyData(this.configs, sourceKafkaClusterId, sourceBootstrapServers, sourceKafkaProps, targetKafkaClusterId, targetBootstrapServers, targetKafkaProps);
if (heartbeatConnectorConfigs != null) {
this.unifyData(this.heartbeatConnectorConfigs, sourceKafkaClusterId, sourceBootstrapServers, sourceKafkaProps, targetKafkaClusterId, targetBootstrapServers, targetKafkaProps);
}
if (checkpointConnectorConfigs != null) {
this.unifyData(this.checkpointConnectorConfigs, sourceKafkaClusterId, sourceBootstrapServers, sourceKafkaProps, targetKafkaClusterId, targetBootstrapServers, targetKafkaProps);
}
}
private void unifyData(Properties dataConfig,
Long sourceKafkaClusterId, String sourceBootstrapServers, Properties sourceKafkaProps,
Long targetKafkaClusterId, String targetBootstrapServers, Properties targetKafkaProps) {
dataConfig.put(KafkaConnectConstant.MIRROR_MAKER_SOURCE_CLUSTER_ALIAS_FIELD_NAME, sourceKafkaClusterId);
dataConfig.put(KafkaConnectConstant.MIRROR_MAKER_SOURCE_CLUSTER_FIELD_NAME + "." + CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, sourceBootstrapServers);
for (Object configKey: sourceKafkaProps.keySet()) {
dataConfig.put(KafkaConnectConstant.MIRROR_MAKER_SOURCE_CLUSTER_FIELD_NAME + "." + configKey, sourceKafkaProps.getProperty((String) configKey));
}
dataConfig.put(KafkaConnectConstant.MIRROR_MAKER_TARGET_CLUSTER_ALIAS_FIELD_NAME, targetKafkaClusterId);
dataConfig.put(KafkaConnectConstant.MIRROR_MAKER_TARGET_CLUSTER_FIELD_NAME + "." + CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, targetBootstrapServers);
for (Object configKey: targetKafkaProps.keySet()) {
dataConfig.put(KafkaConnectConstant.MIRROR_MAKER_TARGET_CLUSTER_FIELD_NAME + "." + configKey, targetKafkaProps.getProperty((String) configKey));
}
}
}
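`unifyData` flattens the per-cluster Kafka client properties into the single flat key space an MM2 connector config expects, using the `source.cluster`/`target.cluster` prefixes defined in `KafkaConnectConstant` further down in this diff; note that the numeric KnowStreaming cluster ID is stored as the MM2 cluster alias. A minimal standalone sketch of the resulting keys (cluster IDs, bootstrap servers, and the extra client property are illustrative values, not taken from the diff):

```java
import java.util.Properties;

public class UnifyDataSketch {
    public static void main(String[] args) {
        Properties configs = new Properties();

        Properties sourceKafkaProps = new Properties();
        sourceKafkaProps.put("security.protocol", "SASL_PLAINTEXT"); // illustrative client prop

        // source side: alias (= cluster ID), bootstrap servers, then every client prop, all prefixed
        configs.put("source.cluster.alias", 1L);
        configs.put("source.cluster.bootstrap.servers", "src-kafka:9092");
        for (Object configKey : sourceKafkaProps.keySet()) {
            configs.put("source.cluster." + configKey, sourceKafkaProps.getProperty((String) configKey));
        }

        // the target side gets the same treatment with the target.cluster prefix
        configs.put("target.cluster.alias", 2L);
        configs.put("target.cluster.bootstrap.servers", "dst-kafka:9092");

        // prints e.g. source.cluster.bootstrap.servers = src-kafka:9092
        configs.forEach((k, v) -> System.out.println(k + " = " + v));
    }
}
```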

View File

@@ -0,0 +1,38 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.ha.mirror;
import com.xiaojukeji.know.streaming.km.common.bean.dto.BaseDTO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import javax.validation.constraints.Min;
import javax.validation.constraints.NotBlank;
import javax.validation.constraints.NotNull;
/**
* @author zengqiao
* @date 20/4/23
*/
@Data
@ApiModel(description="Topic镜像信息")
public class MirrorTopicCreateDTO extends BaseDTO {
@Min(value = 0, message = "sourceClusterPhyId不允许为空且最小值为0")
@ApiModelProperty(value = "源集群ID", example = "3")
private Long sourceClusterPhyId;
@Min(value = 0, message = "destClusterPhyId不允许为空且最小值为0")
@ApiModelProperty(value = "目标集群ID", example = "3")
private Long destClusterPhyId;
@NotBlank(message = "topicName不允许为空串")
@ApiModelProperty(value = "Topic名称", example = "mirrorTopic")
private String topicName;
@NotNull(message = "syncData不允许为空")
@ApiModelProperty(value = "同步数据", example = "true")
private Boolean syncData;
@NotNull(message = "syncConfig不允许为空")
@ApiModelProperty(value = "同步配置", example = "false")
private Boolean syncConfig;
}
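One caveat on the validation above: under standard Bean Validation semantics `@Min` treats `null` as valid, so the "不允许为空" ("must not be null") part of the message only takes effect if the field also carries `@NotNull`. A minimal sketch (hypothetical reduced DTO; assumes a Bean Validation provider such as hibernate-validator on the classpath):

```java
import javax.validation.Validation;
import javax.validation.Validator;
import javax.validation.constraints.Min;

public class MinNullSketch {
    // Hypothetical stand-in for MirrorTopicCreateDTO, reduced to one field.
    static class MinOnlyDto {
        @Min(value = 0, message = "sourceClusterPhyId不允许为空且最小值为0")
        Long sourceClusterPhyId; // deliberately left null
    }

    public static void main(String[] args) {
        Validator validator = Validation.buildDefaultValidatorFactory().getValidator();
        // @Min considers null valid per the Bean Validation spec, so this prints 0 violations:
        System.out.println(validator.validate(new MinOnlyDto()).size());
    }
}
```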

View File

@@ -0,0 +1,29 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.ha.mirror;
import com.xiaojukeji.know.streaming.km.common.bean.dto.BaseDTO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import javax.validation.constraints.Min;
import javax.validation.constraints.NotBlank;
/**
* @author zengqiao
* @date 20/4/23
*/
@Data
@ApiModel(description="Topic镜像信息")
public class MirrorTopicDeleteDTO extends BaseDTO {
@Min(value = 0, message = "sourceClusterPhyId不允许为空且最小值为0")
@ApiModelProperty(value = "源集群ID", example = "3")
private Long sourceClusterPhyId;
@Min(value = 0, message = "destClusterPhyId不允许为空且最小值为0")
@ApiModelProperty(value = "目标集群ID", example = "3")
private Long destClusterPhyId;
@NotBlank(message = "topicName不允许为空串")
@ApiModelProperty(value = "Topic名称", example = "mirrorTopic")
private String topicName;
}

View File

@@ -0,0 +1,23 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.mm2;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.ClusterConnectorDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.MetricDTO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.util.List;
/**
* @author didi
*/
@Data
@NoArgsConstructor
@AllArgsConstructor
@ApiModel(description = "MirrorMaker指标查询信息")
public class MetricsMirrorMakersDTO extends MetricDTO {
@ApiModelProperty("MirrorMaker的SourceConnect列表")
private List<ClusterConnectorDTO> connectorNameList;
}

View File

@@ -18,5 +18,7 @@ public class ClusterPhysState {
private Integer downCount;
private Integer unknownCount;
private Integer total;
}

View File

@@ -13,9 +13,6 @@ import java.util.Properties;
*/
@ApiModel(description = "ZK配置")
public class ZKConfig implements Serializable {
@ApiModelProperty(value="ZK的jmx配置")
private JmxConfig jmxConfig;
@ApiModelProperty(value="ZK是否开启secure", example = "false")
private Boolean openSecure = false;
@@ -28,14 +25,6 @@ public class ZKConfig implements Serializable {
@ApiModelProperty(value="ZK的Request超时时间")
private Properties otherProps = new Properties();
- public JmxConfig getJmxConfig() {
- return jmxConfig == null? new JmxConfig(): jmxConfig;
- }
- public void setJmxConfig(JmxConfig jmxConfig) {
- this.jmxConfig = jmxConfig;
- }
public Boolean getOpenSecure() {
return openSecure != null && openSecure;
}
@@ -53,7 +42,7 @@ public class ZKConfig implements Serializable {
}
public Integer getRequestTimeoutUnitMs() {
- return requestTimeoutUnitMs == null? Constant.DEFAULT_REQUEST_TIMEOUT_UNIT_MS: requestTimeoutUnitMs;
+ return requestTimeoutUnitMs == null? Constant.DEFAULT_SESSION_TIMEOUT_UNIT_MS: requestTimeoutUnitMs;
}
public void setRequestTimeoutUnitMs(Integer requestTimeoutUnitMs) {

View File

@@ -7,7 +7,6 @@ import lombok.Data;
import lombok.NoArgsConstructor;
import java.io.Serializable;
import java.net.URI;
@Data
@NoArgsConstructor

View File

@@ -45,4 +45,14 @@ public class KSConnector implements Serializable {
* 状态
*/
private String state;
/**
* Heartbeat connector name
*/
private String heartbeatConnectorName;
/**
* Checkpoint connector name
*/
private String checkpointConnectorName;
}

View File

@@ -0,0 +1,33 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.connect.mm2;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.util.Map;
/**
* @author wyb
* @date 2022/12/14
*/
@Data
@AllArgsConstructor
@NoArgsConstructor
public class MirrorMakerTopic {
/**
* MM2 cluster alias
*/
private String clusterAlias;
/**
* Topic name
*/
private String topicName;
/**
* Distribution of partitions across Connect workers: Map<PartitionId, WorkerId>
*/
private Map<Integer,String> partitionMap;
}

View File

@@ -0,0 +1,23 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.ha;
import com.xiaojukeji.know.streaming.km.common.bean.po.BasePO;
import com.xiaojukeji.know.streaming.km.common.enums.ha.HaResTypeEnum;
import lombok.Data;
@Data
public class HaActiveStandbyRelation extends BasePO {
private Long activeClusterPhyId;
private Long standbyClusterPhyId;
/**
* Resource name
*/
private String resName;
/**
* Resource type: 0 = cluster, 1 = mirror Topic, 2 = active-standby Topic
* @see HaResTypeEnum
*/
private Integer resType;
}

View File

@@ -1,7 +1,6 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.connect;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.BaseMetrics;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import lombok.ToString;

View File

@@ -0,0 +1,46 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.mm2;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.BaseMetrics;
import lombok.Data;
import lombok.NoArgsConstructor;
import lombok.ToString;
/**
* @author zengqiao
* @date 20/6/17
*/
@Data
@NoArgsConstructor
@ToString
public class MirrorMakerMetrics extends BaseMetrics {
private Long connectClusterId;
private String connectorName;
private String connectorNameAndClusterId;
public MirrorMakerMetrics(Long connectClusterId, String connectorName) {
super(null);
this.connectClusterId = connectClusterId;
this.connectorName = connectorName;
this.connectorNameAndClusterId = connectorName + "#" + connectClusterId;
}
public MirrorMakerMetrics(Long clusterPhyId, Long connectClusterId, String connectorName) {
super(clusterPhyId);
this.connectClusterId = connectClusterId;
this.connectorName = connectorName;
this.connectorNameAndClusterId = connectorName + "#" + connectClusterId;
}
public static MirrorMakerMetrics initWithMetric(Long connectClusterId, String connectorName, String metricName, Float value) {
MirrorMakerMetrics metrics = new MirrorMakerMetrics(connectClusterId, connectorName);
metrics.putMetric(metricName, value);
return metrics;
}
@Override
public String unique() {
return "KCOR@" + connectClusterId + "@" + connectorName;
}
}
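For reference, the two composite keys built above, with illustrative IDs: `unique()` identifies the metrics object with `@` separators, while `connectorNameAndClusterId` uses `#` and apparently exists only as a sort field (see the same field on `MirrorMakerMetricPO` below):

```java
public class MirrorMakerKeySketch {
    public static void main(String[] args) {
        Long connectClusterId = 3L;
        String connectorName = "mm2-orders"; // illustrative connector name

        System.out.println("KCOR@" + connectClusterId + "@" + connectorName); // unique(): KCOR@3@mm2-orders
        System.out.println(connectorName + "#" + connectClusterId);           // sort key: mm2-orders#3
    }
}
```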

View File

@@ -0,0 +1,38 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.mm2;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.BaseMetrics;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
/**
* @author wyb
* @date 2022/12/16
*/
@Data
@AllArgsConstructor
@NoArgsConstructor
public class MirrorMakerTopicPartitionMetrics extends BaseMetrics {
private Long connectClusterId;
private String mirrorMakerName;
private String clusterAlias;
private String topicName;
private Integer partitionId;
private String workerId;
@Override
public String unique() {
return "KCOR@" + connectClusterId + "@" + mirrorMakerName + "@" + clusterAlias + "@" + workerId + "@" + topicName + "@" + partitionId;
}
public static MirrorMakerTopicPartitionMetrics initWithMetric(Long connectClusterId, String mirrorMakerName, String clusterAlias, String topicName, Integer partitionId, String workerId, String metricName, Float value) {
MirrorMakerTopicPartitionMetrics metrics = new MirrorMakerTopicPartitionMetrics(connectClusterId, mirrorMakerName, clusterAlias, topicName, partitionId, workerId);
metrics.putMetric(metricName, value);
return metrics;
}
}

View File

@@ -1,7 +1,5 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.param.connect;
import com.xiaojukeji.know.streaming.km.common.bean.entity.param.cluster.ClusterParam;
import com.xiaojukeji.know.streaming.km.common.bean.entity.param.cluster.ClusterPhyParam;
import com.xiaojukeji.know.streaming.km.common.bean.entity.param.cluster.ConnectClusterParam;
import lombok.AllArgsConstructor;
import lombok.Data;
@@ -18,9 +16,12 @@ public class ConnectorParam extends ConnectClusterParam {
private String connectorName;
- public ConnectorParam(Long connectClusterId, String connectorName) {
+ private String connectorType;
+ public ConnectorParam(Long connectClusterId, String connectorName, String connectorType) {
super(connectClusterId);
this.connectorName = connectorName;
this.connectorType = connectorType;
}
}

View File

@@ -0,0 +1,32 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.param.connect.mm2;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.mm2.MirrorMakerTopic;
import com.xiaojukeji.know.streaming.km.common.bean.entity.param.cluster.ConnectClusterParam;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.util.List;
/**
* @author wyb
* @date 2022/12/21
*/
@Data
@AllArgsConstructor
@NoArgsConstructor
public class MirrorMakerParam extends ConnectClusterParam {
private String mirrorMakerName;
private String connectorType;
List<MirrorMakerTopic> mirrorMakerTopicList;
public MirrorMakerParam(Long connectClusterId, String connectorType, String mirrorMakerName, List<MirrorMakerTopic> mirrorMakerTopicList) {
super(connectClusterId);
this.mirrorMakerName = mirrorMakerName;
this.connectorType = connectorType;
this.mirrorMakerTopicList = mirrorMakerTopicList;
}
}

View File

@@ -0,0 +1,26 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.param.metric.connect.mm2;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.mm2.MirrorMakerTopic;
import com.xiaojukeji.know.streaming.km.common.bean.entity.param.metric.MetricParam;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.util.List;
/**
* @author wyb
* @date 2022/12/15
*/
@Data
@AllArgsConstructor
@NoArgsConstructor
public class MirrorMakerMetricParam extends MetricParam {
private Long connectClusterId;
private String mirrorMakerName;
private List<MirrorMakerTopic> mirrorMakerTopicList;
private String metric;
}

View File

@@ -23,8 +23,8 @@ import lombok.Data;
public class MonitorCmdData extends BaseFourLetterWordCmdData {
private String zkVersion;
private Float zkAvgLatency;
- private Long zkMaxLatency;
- private Long zkMinLatency;
+ private Float zkMaxLatency;
+ private Float zkMinLatency;
private Long zkPacketsReceived;
private Long zkPacketsSent;
private Long zkNumAliveConnections;

View File

@@ -18,8 +18,8 @@ import lombok.Data;
public class ServerCmdData extends BaseFourLetterWordCmdData {
private String zkVersion;
private Float zkAvgLatency;
- private Long zkMaxLatency;
- private Long zkMinLatency;
+ private Float zkMaxLatency;
+ private Float zkMinLatency;
private Long zkPacketsReceived;
private Long zkPacketsSent;
private Long zkNumAliveConnections;

View File

@@ -3,6 +3,7 @@ package com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.fourletter
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.fourletterword.MonitorCmdData;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.zookeeper.FourLetterWordUtil;
import lombok.Data;
@@ -57,13 +58,13 @@ public class MonitorCmdDataParser implements FourLetterWordDataParser<MonitorCmd
monitorCmdData.setZkVersion(elem.getValue().split("-")[0]);
break;
case "zk_avg_latency":
- monitorCmdData.setZkAvgLatency(Float.valueOf(elem.getValue()));
+ monitorCmdData.setZkAvgLatency(ConvertUtil.string2Float(elem.getValue()));
break;
case "zk_max_latency":
- monitorCmdData.setZkMaxLatency(Long.valueOf(elem.getValue()));
+ monitorCmdData.setZkMaxLatency(ConvertUtil.string2Float(elem.getValue()));
break;
case "zk_min_latency":
- monitorCmdData.setZkMinLatency(Long.valueOf(elem.getValue()));
+ monitorCmdData.setZkMinLatency(ConvertUtil.string2Float(elem.getValue()));
break;
case "zk_packets_received":
monitorCmdData.setZkPacketsReceived(Long.valueOf(elem.getValue()));

View File

@@ -3,6 +3,7 @@ package com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.fourletter
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.fourletterword.ServerCmdData;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.zookeeper.FourLetterWordUtil;
import lombok.Data;
@@ -53,9 +54,9 @@ public class ServerCmdDataParser implements FourLetterWordDataParser<ServerCmdDa
break;
case "Latency min/avg/max":
String[] data = elem.getValue().split("/");
- serverCmdData.setZkMinLatency(Long.valueOf(data[0]));
- serverCmdData.setZkAvgLatency(Float.valueOf(data[1]));
- serverCmdData.setZkMaxLatency(Long.valueOf(data[2]));
+ serverCmdData.setZkMinLatency(ConvertUtil.string2Float(data[0]));
+ serverCmdData.setZkAvgLatency(ConvertUtil.string2Float(data[1]));
+ serverCmdData.setZkMaxLatency(ConvertUtil.string2Float(data[2]));
break;
case "Received":
serverCmdData.setZkPacketsReceived(Long.valueOf(elem.getValue()));
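The `Long.valueOf` → `ConvertUtil.string2Float` change in these two parsers fixes a real failure mode: ZooKeeper may report latency values with a decimal point, and `Long.valueOf("0.0")` throws `NumberFormatException`. A self-contained sketch of the failure and of a tolerant conversion in the spirit of `ConvertUtil.string2Float` (the real implementation is not shown in this diff, so its exact behavior is an assumption):

```java
public class LatencyParseSketch {
    // Assumed behavior of ConvertUtil.string2Float: returns null instead of throwing.
    static Float string2Float(String s) {
        try {
            return s == null ? null : Float.valueOf(s);
        } catch (NumberFormatException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        try {
            Long.valueOf("0.0"); // what the old parser did for zk_min_latency
        } catch (NumberFormatException e) {
            System.out.println("old parser fails on decimal latency: " + e.getMessage());
        }
        System.out.println(string2Float("0.0")); // 0.0
        System.out.println(string2Float("n/a")); // null
    }
}
```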

View File

@@ -1,20 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.event.metric;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.ReplicationMetrics;
import lombok.Getter;
import java.util.List;
/**
* @author didi
*/
@Getter
public class ReplicaMetricEvent extends BaseMetricEvent{
private final List<ReplicationMetrics> replicationMetrics;
public ReplicaMetricEvent(Object source, List<ReplicationMetrics> replicationMetrics) {
super( source );
this.replicationMetrics = replicationMetrics;
}
}

View File

@@ -0,0 +1,21 @@
package com.xiaojukeji.know.streaming.km.common.bean.event.metric.mm2;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.mm2.MirrorMakerMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.BaseMetricEvent;
import lombok.Getter;
import java.util.List;
/**
* @author zengqiao
* @date 2022/12/20
*/
@Getter
public class MirrorMakerMetricEvent extends BaseMetricEvent {
private final List<MirrorMakerMetrics> metricsList;
public MirrorMakerMetricEvent(Object source, List<MirrorMakerMetrics> metricsList) {
super(source);
this.metricsList = metricsList;
}
}

View File

@@ -47,4 +47,14 @@ public class ConnectorPO extends BasePO {
* 状态
*/
private String state;
/**
* Heartbeat connector name
*/
private String heartbeatConnectorName;
/**
* Checkpoint connector name
*/
private String checkpointConnectorName;
}

View File

@@ -0,0 +1,33 @@
package com.xiaojukeji.know.streaming.km.common.bean.po.ha;
import com.baomidou.mybatisplus.annotation.TableName;
import com.xiaojukeji.know.streaming.km.common.bean.po.BasePO;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import lombok.Data;
import lombok.NoArgsConstructor;
@Data
@NoArgsConstructor
@TableName(Constant.MYSQL_HA_TABLE_NAME_PREFIX + "active_standby_relation")
public class HaActiveStandbyRelationPO extends BasePO {
private Long activeClusterPhyId;
private Long standbyClusterPhyId;
/**
* Resource name
*/
private String resName;
/**
* Resource type: 0 = cluster, 1 = mirror Topic, 2 = active-standby Topic
*/
private Integer resType;
public HaActiveStandbyRelationPO(Long activeClusterPhyId, Long standbyClusterPhyId, String resName, Integer resType) {
this.activeClusterPhyId = activeClusterPhyId;
this.standbyClusterPhyId = standbyClusterPhyId;
this.resName = resName;
this.resType = resType;
}
}

View File

@@ -0,0 +1,39 @@
package com.xiaojukeji.know.streaming.km.common.bean.po.metrice.mm2;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.BaseMetricESPO;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import static com.xiaojukeji.know.streaming.km.common.utils.CommonUtils.monitorTimestamp2min;
@Data
@NoArgsConstructor
@AllArgsConstructor
public class MirrorMakerMetricPO extends BaseMetricESPO {
private Long connectClusterId;
private String connectorName;
/**
* Used for sorting inside ES
*/
private String connectorNameAndClusterId;
public MirrorMakerMetricPO(Long kafkaClusterPhyId, Long connectClusterId, String connectorName){
super(kafkaClusterPhyId);
this.connectClusterId = connectClusterId;
this.connectorName = connectorName;
this.connectorNameAndClusterId = connectorName + "#" + connectClusterId;
}
@Override
public String getKey() {
return "KCOR@" + clusterPhyId + "@" + connectClusterId + "@" + connectorName + "@" + monitorTimestamp2min(timestamp);
}
@Override
public String getRoutingValue() {
return String.valueOf(connectClusterId);
}
}
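`getKey()` therefore yields one ES document per connector per minute, and `getRoutingValue()` keeps all documents of one Connect cluster on the same shard. A sketch with illustrative values, assuming `monitorTimestamp2min` truncates an epoch-millis timestamp to the minute (its implementation is not part of this diff):

```java
public class MetricPoKeySketch {
    // Assumption: monitorTimestamp2min truncates epoch millis to the start of the minute.
    static long monitorTimestamp2min(long timestampMs) {
        return timestampMs - timestampMs % (60 * 1000L);
    }

    public static void main(String[] args) {
        long clusterPhyId = 1L, connectClusterId = 3L, timestamp = 1_676_976_000_123L;
        String connectorName = "mm2-orders"; // illustrative

        System.out.println("KCOR@" + clusterPhyId + "@" + connectClusterId + "@"
                + connectorName + "@" + monitorTimestamp2min(timestamp)); // one doc per connector per minute
        System.out.println(String.valueOf(connectClusterId));             // routing value
    }
}
```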

View File

@@ -18,6 +18,9 @@ public class ClusterPhysStateVO {
@ApiModelProperty(value = "挂掉集群数", example = "10")
private Integer downCount;
@ApiModelProperty(value = "未知状态集群数", example = "10")
private Integer unknownCount;
@ApiModelProperty(value = "集群总数", example = "40")
private Integer total;
}

View File

@@ -0,0 +1,52 @@
package com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.mm2;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.BaseMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.vo.metrics.line.MetricLineVO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import java.util.List;
/**
* Cluster MM2 info
* @author zengqiao
* @date 22/02/23
*/
@Data
@ApiModel(description = "MM2概览信息")
public class ClusterMirrorMakerOverviewVO extends MirrorMakerBasicVO {
@ApiModelProperty(value = "源Kafka集群Id", example = "1")
private Long sourceKafkaClusterId;
@ApiModelProperty(value = "源Kafka集群名称", example = "aaa")
private String sourceKafkaClusterName;
@ApiModelProperty(value = "目标Kafka集群Id", example = "1")
private Long destKafkaClusterId;
@ApiModelProperty(value = "目标Kafka集群名称", example = "aaa")
private String destKafkaClusterName;
/**
* @see org.apache.kafka.connect.runtime.AbstractStatus.State
*/
@ApiModelProperty(value = "状态", example = "RUNNING")
private String state;
@ApiModelProperty(value = "Task数", example = "100")
private Integer taskCount;
@ApiModelProperty(value = "心跳检测connector", example = "heartbeatConnector")
private String heartbeatConnector;
@ApiModelProperty(value = "进度确认connector", example = "checkpointConnector")
private String checkpointConnector;
@ApiModelProperty(value = "多个指标的当前值, 包括健康分/LogSize等")
private BaseMetrics latestMetrics;
@ApiModelProperty(value = "多个指标的历史曲线值包括LogSize/BytesIn等")
private List<MetricLineVO> metricLines;
}

View File

@@ -0,0 +1,25 @@
package com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.mm2;
import com.xiaojukeji.know.streaming.km.common.bean.vo.BaseVO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
/**
* Cluster MM2 state info
* @author fengqiongfeng
* @date 22/12/29
*/
@Data
@ApiModel(description = "集群MM2状态信息")
public class MirrorMakerBaseStateVO extends BaseVO {
@ApiModelProperty(value = "worker数", example = "1")
private Integer workerCount;
@ApiModelProperty(value = "总Task数", example = "1")
private Integer totalTaskCount;
@ApiModelProperty(value = "存活Task数", example = "1")
private Integer aliveTaskCount;
}

View File

@@ -0,0 +1,16 @@
package com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.mm2;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.connector.ConnectorBasicVO;
import io.swagger.annotations.ApiModel;
import lombok.Data;
/**
* Cluster MM2 info
* @author zengqiao
* @date 22/02/23
*/
@Data
@ApiModel(description = "MM2基本信息")
public class MirrorMakerBasicVO extends ConnectorBasicVO {
}

View File

@@ -0,0 +1,34 @@
package com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.mm2;
import com.xiaojukeji.know.streaming.km.common.bean.vo.BaseVO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
/**
* Cluster MM2 state info
* @author zengqiao
* @date 22/12/12
*/
@Data
@ApiModel(description = "集群MM2状态信息")
public class MirrorMakerStateVO extends BaseVO {
@ApiModelProperty(value = "MM2数", example = "1")
private Integer mirrorMakerCount;
@ApiModelProperty(value = "worker数", example = "1")
private Integer workerCount;
@ApiModelProperty(value = "总Connector数", example = "1")
private Integer totalConnectorCount;
@ApiModelProperty(value = "存活Connector数", example = "1")
private Integer aliveConnectorCount;
@ApiModelProperty(value = "总Task数", example = "1")
private Integer totalTaskCount;
@ApiModelProperty(value = "存活Task数", example = "1")
private Integer aliveTaskCount;
}

View File

@@ -32,6 +32,9 @@ public class ClusterPhyTopicsOverviewVO extends BaseTimeVO {
@ApiModelProperty(value = "副本数", example = "2")
private Integer replicaNum;
@ApiModelProperty(value = "处于镜像复制中", example = "true")
private Boolean inMirror;
@ApiModelProperty(value = "多个指标的当前值, 包括健康分/LogSize等")
private BaseMetrics latestMetrics;

View File

@@ -0,0 +1,37 @@
package com.xiaojukeji.know.streaming.km.common.bean.vo.ha.mirror;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
/**
* @author zengqiao
* @date 20/4/29
*/
@Data
@ApiModel(description="Topic复制信息")
public class TopicMirrorInfoVO {
@ApiModelProperty(value="源集群ID", example = "1")
private Long sourceClusterId;
@ApiModelProperty(value="源集群名称", example = "know-streaming-1")
private String sourceClusterName;
@ApiModelProperty(value="目标集群ID", example = "2")
private Long destClusterId;
@ApiModelProperty(value="目标集群名称", example = "know-streaming-2")
private String destClusterName;
@ApiModelProperty(value="Topic名称", example = "know-streaming")
private String topicName;
@ApiModelProperty(value="写入速率(bytes/s)", example = "100")
private Double bytesIn;
@ApiModelProperty(value="复制速率(bytes/s)", example = "100")
private Double replicationBytesIn;
@ApiModelProperty(value="延迟消息数", example = "100")
private Long lag;
}

View File

@@ -33,10 +33,6 @@ public class HealthScoreBaseResultVO extends BaseTimeVO {
@ApiModelProperty(value="检查说明", example = "Group延迟")
private String configDesc;
- @Deprecated
- @ApiModelProperty(value="得分", example = "100")
- private Integer score = 100;
@ApiModelProperty(value="结果", example = "true")
private Boolean passed;

View File

@@ -14,6 +14,10 @@ public class ApiPrefix {
public static final String API_V3_CONNECT_PREFIX = API_V3_PREFIX + "kafka-connect/";
public static final String API_V3_MM2_PREFIX = API_V3_PREFIX + "kafka-mm2/";
public static final String API_V3_HA_MIRROR_PREFIX = API_V3_PREFIX + "ha-mirror/";
public static final String API_V3_OPEN_PREFIX = API_V3_PREFIX + "open/";
private ApiPrefix() {

View File

@@ -46,6 +46,7 @@ public class Constant {
public static final String MYSQL_TABLE_NAME_PREFIX = "ks_km_";
public static final String MYSQL_KC_TABLE_NAME_PREFIX = "ks_kc_";
public static final String MYSQL_HA_TABLE_NAME_PREFIX = "ks_ha_";
public static final String SWAGGER_API_TAG_PREFIX = "KS-KM-";

View File

@@ -45,6 +45,8 @@ public class KafkaConstant {
public static final String DEFAULT_CONNECT_VERSION = "2.5.0";
public static final List<String> CONFIG_SIMILAR_IGNORED_CONFIG_KEY_LIST = Arrays.asList("broker.id", "listeners", "name", "value", "advertised.listeners", "node.id");
public static final Map<String, ConfigDef.ConfigKey> KAFKA_ALL_CONFIG_DEF_MAP = new ConcurrentHashMap<>();
static {

View File

@@ -110,4 +110,11 @@ public class MsgConstant {
public static String getConnectorBizStr(Long clusterPhyId, String topicName) {
return String.format("Connect集群ID:[%d] Connector名称:[%s]", clusterPhyId, topicName);
}
/**************************************************** Connector ****************************************************/
public static String getConnectorNotExist(Long connectClusterId, String connectorName) {
return String.format("Connect集群ID:[%d] Connector名称:[%s] 不存在", connectClusterId, connectorName);
}
}

View File

@@ -10,6 +10,23 @@ public class KafkaConnectConstant {
public static final String CONNECTOR_TOPICS_FILED_NAME = "topics";
public static final String CONNECTOR_TOPICS_FILED_ERROR_VALUE = "know-streaming-connect-illegal-value";
public static final String MIRROR_MAKER_TOPIC_PARTITION_PATTERN = "kafka.connect.mirror:type=MirrorSourceConnector,target=*,topic=*,partition=*";
public static final String MIRROR_MAKER_SOURCE_CONNECTOR_TYPE = "org.apache.kafka.connect.mirror.MirrorSourceConnector";
public static final String MIRROR_MAKER_HEARTBEAT_CONNECTOR_TYPE = "org.apache.kafka.connect.mirror.MirrorHeartbeatConnector";
public static final String MIRROR_MAKER_CHECKPOINT_CONNECTOR_TYPE = "org.apache.kafka.connect.mirror.MirrorCheckpointConnector";
public static final String MIRROR_MAKER_TARGET_CLUSTER_BOOTSTRAP_SERVERS_FIELD_NAME = "target.cluster.bootstrap.servers";
public static final String MIRROR_MAKER_TARGET_CLUSTER_ALIAS_FIELD_NAME = "target.cluster.alias";
public static final String MIRROR_MAKER_TARGET_CLUSTER_FIELD_NAME = "target.cluster";
public static final String MIRROR_MAKER_SOURCE_CLUSTER_BOOTSTRAP_SERVERS_FIELD_NAME = "source.cluster.bootstrap.servers";
public static final String MIRROR_MAKER_SOURCE_CLUSTER_ALIAS_FIELD_NAME = "source.cluster.alias";
public static final String MIRROR_MAKER_SOURCE_CLUSTER_FIELD_NAME = "source.cluster";
public static final String MIRROR_MAKER_NAME_FIELD_NAME = "name";
private KafkaConnectConstant() {
}
}

View File

@@ -10,6 +10,7 @@ import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.connect.ConnectCl
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.connector.ClusterConnectorOverviewVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.connector.ConnectorBasicCombineExistVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.connector.ConnectorBasicVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.mm2.MirrorMakerBasicVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.metrics.line.MetricLineVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.metrics.line.MetricMultiLinesVO;
import com.xiaojukeji.know.streaming.km.common.constant.connect.KafkaConnectConstant;
@@ -58,6 +59,25 @@ public class ConnectConverter {
return voList;
}
public static List<MirrorMakerBasicVO> convert2MirrorMakerBasicVOList(
List<ConnectCluster> clusterList,
List<ConnectorPO> poList) {
Map<Long, ConnectCluster> clusterMap = new HashMap<>();
clusterList.stream().forEach(elem -> clusterMap.put(elem.getId(), elem));
List<MirrorMakerBasicVO> voList = new ArrayList<>();
poList.stream().filter(item -> clusterMap.containsKey(item.getConnectClusterId())).forEach(elem -> {
MirrorMakerBasicVO vo = new MirrorMakerBasicVO();
vo.setConnectClusterId(elem.getConnectClusterId());
vo.setConnectClusterName(clusterMap.get(elem.getConnectClusterId()).getName());
vo.setConnectorName(elem.getConnectorName());
voList.add(vo);
});
return voList;
}
public static ConnectClusterBasicCombineExistVO convert2ConnectClusterBasicCombineExistVO(ConnectCluster connectCluster) {
if (connectCluster == null) {
ConnectClusterBasicCombineExistVO combineExistVO = new ConnectClusterBasicCombineExistVO();

View File

@@ -77,7 +77,7 @@ public class TopicVOConverter {
return vo;
}
- public static List<ClusterPhyTopicsOverviewVO> convert2ClusterPhyTopicsOverviewVOList(List<Topic> topicList, Map<String, TopicMetrics> metricsMap) {
+ public static List<ClusterPhyTopicsOverviewVO> convert2ClusterPhyTopicsOverviewVOList(List<Topic> topicList, Map<String, TopicMetrics> metricsMap, Set<String> haTopicNameSet) {
List<ClusterPhyTopicsOverviewVO> voList = new ArrayList<>();
for (Topic topic: topicList) {
ClusterPhyTopicsOverviewVO vo = new ClusterPhyTopicsOverviewVO();
@@ -92,6 +92,7 @@ public class TopicVOConverter {
vo.setLatestMetrics(metricsMap.getOrDefault(topic.getTopicName(), new TopicMetrics(topic.getTopicName(), topic.getClusterPhyId())));
vo.setInMirror(haTopicNameSet.contains(topic.getTopicName()));
voList.add(vo);
}

View File

@@ -0,0 +1,25 @@
package com.xiaojukeji.know.streaming.km.common.enums.ha;
import lombok.Getter;
/**
* @author zengqiao
* @date 20/7/28
*/
@Getter
public enum HaResTypeEnum {
CLUSTER(0, "Cluster"),
MIRROR_TOPIC(1, "镜像Topic"),
;
private final int code;
private final String msg;
HaResTypeEnum(int code, String msg) {
this.code = code;
this.msg = msg;
}
}
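Note the mismatch with the `resType` comments on `HaActiveStandbyRelation(PO)` above, which mention a third value (2, active-standby Topic) that this enum does not define. The enum also ships no reverse lookup, so code reading `res_type` from the DB presumably resolves it along these lines (the `of` helper is hypothetical, not part of the diff):

```java
import java.util.Arrays;
import java.util.Optional;

public class HaResTypeLookupSketch {
    enum HaResType {
        CLUSTER(0, "Cluster"),
        MIRROR_TOPIC(1, "Mirror Topic");

        final int code;
        final String msg;

        HaResType(int code, String msg) { this.code = code; this.msg = msg; }

        // Hypothetical reverse lookup for the code persisted in ks_ha_active_standby_relation
        static Optional<HaResType> of(int code) {
            return Arrays.stream(values()).filter(e -> e.code == code).findFirst();
        }
    }

    public static void main(String[] args) {
        System.out.println(HaResType.of(1)); // Optional[MIRROR_TOPIC]
        System.out.println(HaResType.of(2)); // Optional.empty
    }
}
```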

View File

@@ -24,6 +24,8 @@ public enum HealthCheckDimensionEnum {
CONNECTOR(6, "Connector", "Connect"),
MIRROR_MAKER(7,"MirrorMaker","MirrorMaker"),
MAX_VAL(100, "所有的dimension的值需要小于MAX_VAL", "Ignore")
;

View File

@@ -136,7 +136,7 @@ public enum HealthCheckNameEnum {
HealthCheckDimensionEnum.CONNECT_CLUSTER,
"TaskStartupFailurePercentage",
Constant.HC_CONFIG_NAME_PREFIX+"CONNECT_CLUSTER_TASK_STARTUP_FAILURE_PERCENTAGE",
"connect集群任务启动失败概率",
"Connect集群任务启动失败概率",
HealthCompareValueConfig.class,
false
),
@@ -145,7 +145,7 @@ public enum HealthCheckNameEnum {
HealthCheckDimensionEnum.CONNECTOR,
"ConnectorFailedTaskCount",
Constant.HC_CONFIG_NAME_PREFIX+"CONNECTOR_FAILED_TASK_COUNT",
"connector失败状态的任务数量",
"Connector失败状态的任务数量",
HealthCompareValueConfig.class,
false
),
@@ -154,13 +154,50 @@ public enum HealthCheckNameEnum {
HealthCheckDimensionEnum.CONNECTOR,
"ConnectorUnassignedTaskCount",
Constant.HC_CONFIG_NAME_PREFIX+"CONNECTOR_UNASSIGNED_TASK_COUNT",
"connector未被分配的任务数量",
"Connector未被分配的任务数量",
HealthCompareValueConfig.class,
false
),
MIRROR_MAKER_FAILED_TASK_COUNT(
HealthCheckDimensionEnum.MIRROR_MAKER,
"MirrorMakerFailedTaskCount",
Constant.HC_CONFIG_NAME_PREFIX+"MIRROR_MAKER_FAILED_TASK_COUNT",
"MirrorMaker失败状态的任务数量",
HealthCompareValueConfig.class,
false
),
MIRROR_MAKER_UNASSIGNED_TASK_COUNT(
HealthCheckDimensionEnum.MIRROR_MAKER,
"MirrorMakerUnassignedTaskCount",
Constant.HC_CONFIG_NAME_PREFIX+"MIRROR_MAKER_UNASSIGNED_TASK_COUNT",
"MirrorMaker未被分配的任务数量",
HealthCompareValueConfig.class,
false
),
MIRROR_MAKER_TOTAL_RECORD_ERRORS(
HealthCheckDimensionEnum.MIRROR_MAKER,
"TotalRecord-errors",
Constant.HC_CONFIG_NAME_PREFIX + "MIRROR_MAKER_TOTAL_RECORD_ERRORS",
"MirrorMaker消息处理错误的次数",
HealthCompareValueConfig.class,
false
),
MIRROR_MAKER_REPLICATION_LATENCY_MS_MAX(
HealthCheckDimensionEnum.MIRROR_MAKER,
"ReplicationLatencyMsMax",
Constant.HC_CONFIG_NAME_PREFIX + "MIRROR_MAKER_REPLICATION_LATENCY_MS_MAX",
"MirrorMaker消息复制最大延迟时间",
HealthCompareValueConfig.class,
false
)
;
/**

View File

@@ -53,7 +53,11 @@ public enum VersionEnum {
V_2_3_1("2.3.1", normailze("2.3.1")),
V_2_4_0("2.4.0", normailze("2.4.0")),
V_2_4_1("2.4.1", normailze("2.4.1")),
V_2_5_0("2.5.0", normailze("2.5.0")),
V_2_5_0_D_300("2.5.0-d-300", normailze("2.5.0-d-300")),
V_2_5_0_D_MAX("2.5.0-d-999", normailze("2.5.0-d-999")),
V_2_5_1("2.5.1", normailze("2.5.1")),
V_2_6_0("2.6.0", normailze("2.6.0")),
V_2_6_1("2.6.1", normailze("2.6.1")),
@@ -77,9 +81,9 @@ public enum VersionEnum {
;
- private String version;
+ private final String version;
- private Long versionL;
+ private final Long versionL;
VersionEnum(String version, Long versionL) {
this.version = version;

View File

@@ -144,6 +144,32 @@ public class JmxAttribute {
public static final String TOTAL_RETRIES = "total-retries";
/*********************************************************** mm2 ***********************************************************/
public static final String BYTE_COUNT = "byte-count";
public static final String BYTE_RATE = "byte-rate";
public static final String RECORD_AGE_MS = "record-age-ms";
public static final String RECORD_AGE_MS_AVG = "record-age-ms-avg";
public static final String RECORD_AGE_MS_MAX = "record-age-ms-max";
public static final String RECORD_AGE_MS_MIN = "record-age-ms-min";
public static final String RECORD_COUNT = "record-count";
public static final String RECORD_RATE = "record-rate";
public static final String REPLICATION_LATENCY_MS = "replication-latency-ms";
public static final String REPLICATION_LATENCY_MS_AVG = "replication-latency-ms-avg";
public static final String REPLICATION_LATENCY_MS_MAX = "replication-latency-ms-max";
public static final String REPLICATION_LATENCY_MS_MIN = "replication-latency-ms-min";
private JmxAttribute() {
}
}

View File

@@ -41,6 +41,8 @@ public class JmxName {
public static final String JMX_SERVER_APP_INFO ="kafka.server:type=app-info";
public static final String JMX_SERVER_TOPIC_MIRROR ="kafka.server:type=FetcherLagMetrics,name=ConsumerLag,clientId=*,topic=%s,partition=*";
/*********************************************************** controller ***********************************************************/
public static final String JMX_CONTROLLER_ACTIVE_COUNT = "kafka.controller:type=KafkaController,name=ActiveControllerCount";
@@ -82,6 +84,10 @@ public class JmxName {
public static final String JMX_CONNECTOR_TASK_ERROR_METRICS = "kafka.connect:type=task-error-metrics,connector=%s,task=%s";
/*********************************************************** mm2 ***********************************************************/
public static final String JMX_MIRROR_MAKER_SOURCE = "kafka.connect.mirror:type=MirrorSourceConnector,target=%s,topic=%s,partition=%s";
private JmxName() {
}
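The `%s` placeholders in `JMX_MIRROR_MAKER_SOURCE` correspond to the `target`, `topic` and `partition` keys of the wildcard pattern `MIRROR_MAKER_TOPIC_PARTITION_PATTERN` in `KafkaConnectConstant` above. A quick check that the formatted string is a valid JMX `ObjectName` (alias, topic and partition are illustrative values):

```java
import javax.management.ObjectName;

public class Mm2ObjectNameSketch {
    public static void main(String[] args) throws Exception {
        String pattern = "kafka.connect.mirror:type=MirrorSourceConnector,target=%s,topic=%s,partition=%s";
        ObjectName name = new ObjectName(String.format(pattern, "2", "orders", 0));

        System.out.println(name);                         // the fully resolved MBean name
        System.out.println(name.getKeyProperty("topic")); // orders
    }
}
```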

View File

@@ -0,0 +1,11 @@
package com.xiaojukeji.know.streaming.km.common.utils;
public class MirrorMakerUtil {
public static String genCheckpointName(String sourceName) {
return sourceName == null? "-checkpoint": sourceName + "-checkpoint";
}
public static String genHeartbeatName(String sourceName) {
return sourceName == null? "-heartbeat": sourceName + "-heartbeat";
}
}
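Usage of the helper above: given a MirrorSource connector name, it derives the sibling connector names that `KSConnector`/`ConnectorPO` now persist. Note the null branch returns a bare suffix (`-checkpoint`/`-heartbeat`) rather than null:

```java
public class MirrorMakerNameSketch {
    static String genCheckpointName(String sourceName) {
        return sourceName == null ? "-checkpoint" : sourceName + "-checkpoint";
    }

    static String genHeartbeatName(String sourceName) {
        return sourceName == null ? "-heartbeat" : sourceName + "-heartbeat";
    }

    public static void main(String[] args) {
        System.out.println(genCheckpointName("mm2-orders")); // mm2-orders-checkpoint
        System.out.println(genHeartbeatName("mm2-orders"));  // mm2-orders-heartbeat
    }
}
```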

View File

@@ -12,7 +12,6 @@ import lombok.Data;
@JsonIgnoreProperties(value = { "hibernateLazyInitializer", "handler" })
@Data
public class Tuple<T, V> {
private T v1;
private V v2;
@@ -58,4 +57,12 @@ public class Tuple<T, V> {
result = 31 * result + (v2 != null ? v2.hashCode() : 0);
return result;
}
@Override
public String toString() {
return "Tuple{" +
"v1=" + v1 +
", v2=" + v2 +
'}';
}
}

View File

@@ -3,29 +3,25 @@ package com.xiaojukeji.know.streaming.km.common.utils;
import com.xiaojukeji.know.streaming.km.common.enums.version.VersionEnum;
import org.apache.commons.lang.StringUtils;
public class VersionUtil {
/**
* Apache Kafka version constants
*/
private static final long BASE_VAL = 10000L;
private static final long APACHE_STEP_VAL = 100L;
public static final long APACHE_MAX_VAL = 100000000L;
private static final int MIN_VERSION_SECTIONS_3 = 3;
private static final int MIN_VERSION_SECTIONS_4 = 4;
private static final String VERSION_FORMAT_3 = "%d.%d.%d";
private static final String VERSION_FORMAT_4 = "%d.%d.%d.%d";
- public static boolean isValid(String version){
- if(StringUtils.isBlank(version)){return false;}
- String[] vers = version.split("\\.");
- if(null == vers){return false;}
- if(vers.length < MIN_VERSION_SECTIONS_3){return false;}
- for(String ver : vers){
- if(!ver.chars().allMatch(Character::isDigit)){
- return false;
- }
- }
- return true;
- }
/**
* XiaoJu Kafka version constants
*/
private static final String XIAO_JU_VERSION_FEATURE = "-d-";
private static final String XIAO_JU_VERSION_FORMAT_4 = "%d.%d.%d-d-%d";
/**
@@ -34,20 +30,64 @@ public class VersionUtil {
* @param version
* @return
*/
- public static long normailze(String version){
- if (!isValid(version)) {
+ public static long normailze(String version) {
+ if(StringUtils.isBlank(version)) {
return -1;
}
- String[] vers = version.split("\\.");
- if(MIN_VERSION_SECTIONS_3 == vers.length){
- return Long.parseLong(vers[0]) * 1000000 + Long.parseLong(vers[1]) * 10000 + Long.parseLong(vers[2]) * 100;
- }else if(MIN_VERSION_SECTIONS_4 == vers.length){
- return Long.parseLong(vers[0]) * 1000000 + Long.parseLong(vers[1]) * 10000 + Long.parseLong(vers[2]) * 100 + Long.parseLong(vers[3]);
+ if (version.contains(XIAO_JU_VERSION_FEATURE)) {
+ // XiaoJu Kafka build
+ return normalizeXiaoJuVersion(version);
}
- return -1;
+ // check that the version string is well-formed
+ String[] vers = version.split("\\.");
+ if(vers.length < MIN_VERSION_SECTIONS_3) {
+ return -1;
+ }
+ for(String ver : vers){
+ if(!ver.chars().allMatch(Character::isDigit)){
+ return -1;
+ }
+ }
+ // convert the sections to a number
+ long val = -1;
+ if(MIN_VERSION_SECTIONS_3 == vers.length) {
+ val = Long.parseLong(vers[0]) * APACHE_STEP_VAL * APACHE_STEP_VAL * APACHE_STEP_VAL + Long.parseLong(vers[1]) * APACHE_STEP_VAL * APACHE_STEP_VAL + Long.parseLong(vers[2]) * APACHE_STEP_VAL;
+ } else if(MIN_VERSION_SECTIONS_4 == vers.length) {
+ val = Long.parseLong(vers[0]) * APACHE_STEP_VAL * APACHE_STEP_VAL * APACHE_STEP_VAL + Long.parseLong(vers[1]) * APACHE_STEP_VAL * APACHE_STEP_VAL + Long.parseLong(vers[2]) * APACHE_STEP_VAL + Long.parseLong(vers[3]);
+ }
+ return val == -1? val: val * BASE_VAL;
}
+ public static long normalizeXiaoJuVersion(String version) {
+ if(StringUtils.isBlank(version)) {
+ return -1;
+ }
+ if (!version.contains(XIAO_JU_VERSION_FEATURE)) {
+ // not a XiaoJu build
+ return normailze(version);
+ }
+ String[] vers = version.split(XIAO_JU_VERSION_FEATURE);
+ if (vers.length < 2) {
+ return -1;
+ }
+ long apacheVal = normailze(vers[0]);
+ if (apacheVal == -1) {
+ return apacheVal;
+ }
+ Long xiaoJuVal = ConvertUtil.string2Long(vers[1]);
+ if (xiaoJuVal == null) {
+ return apacheVal;
+ }
+ return apacheVal + xiaoJuVal;
+ }
/**
@@ -55,15 +95,17 @@ public class VersionUtil {
* @param version
* @return
*/
- public static String dNormailze(long version){
- long version4 = version % 100;
- long version3 = (version / 100) % 100;
- long version2 = (version / 10000) % 100;
- long version1 = (version / 1000000) % 100;
+ public static String dNormailze(long version) {
+ long version4 = (version / BASE_VAL) % APACHE_STEP_VAL;
+ long version3 = (version / BASE_VAL / APACHE_STEP_VAL) % APACHE_STEP_VAL;
+ long version2 = (version / BASE_VAL / APACHE_STEP_VAL / APACHE_STEP_VAL) % APACHE_STEP_VAL;
+ long version1 = (version / BASE_VAL / APACHE_STEP_VAL / APACHE_STEP_VAL / APACHE_STEP_VAL) % APACHE_STEP_VAL;
- if(0 == version4){
+ if (version % BASE_VAL != 0) {
+ return String.format(XIAO_JU_VERSION_FORMAT_4, version1, version2, version3, version % BASE_VAL);
+ } else if (0 == version4) {
return String.format(VERSION_FORMAT_3, version1, version2, version3);
- }else {
+ } else {
return String.format(VERSION_FORMAT_4, version1, version2, version3, version4);
}
}
@@ -71,18 +113,24 @@ public class VersionUtil {
public static void main(String[] args){
long n1 = VersionUtil.normailze(VersionEnum.V_0_10_0_0.getVersion());
String v1 = VersionUtil.dNormailze(n1);
- System.out.println(VersionEnum.V_0_10_0_0.getVersion() + ":" + n1 + ":" + v1);
+ System.out.println(VersionEnum.V_0_10_0_0.getVersion() + "\t:\t" + n1 + "\t:\t" + v1);
long n2 = VersionUtil.normailze(VersionEnum.V_0_10_0_1.getVersion());
String v2 = VersionUtil.dNormailze(n2);
- System.out.println(VersionEnum.V_0_10_0_1.getVersion() + ":" + n2 + ":" + v2);
+ System.out.println(VersionEnum.V_0_10_0_1.getVersion() + "\t:\t" + n2 + "\t:\t" + v2);
long n3 = VersionUtil.normailze(VersionEnum.V_0_11_0_3.getVersion());
String v3 = VersionUtil.dNormailze(n3);
- System.out.println(VersionEnum.V_0_11_0_3.getVersion() + ":" + n3 + ":" + v3);
+ System.out.println(VersionEnum.V_0_11_0_3.getVersion() + "\t:\t" + n3 + "\t:\t" + v3);
long n4 = VersionUtil.normailze(VersionEnum.V_2_5_0.getVersion());
String v4 = VersionUtil.dNormailze(n4);
- System.out.println(VersionEnum.V_2_5_0.getVersion() + ":" + n4 + ":" + v4);
+ System.out.println(VersionEnum.V_2_5_0.getVersion() + "\t:\t" + n4 + "\t:\t" + v4);
+ long n5 = VersionUtil.normailze(VersionEnum.V_2_5_0_D_300.getVersion());
+ String v5 = VersionUtil.dNormailze(n5);
+ System.out.println(VersionEnum.V_2_5_0_D_300.getVersion() + "\t:\t" + n5 + "\t:\t" + v5);
+ System.out.println(Long.MAX_VALUE);
}
}
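To make the new encoding concrete: with `APACHE_STEP_VAL = 100` and `BASE_VAL = 10000`, each Apache version section gets two decimal digits and the whole Apache value is then shifted left by four digits, reserving the low four digits for the XiaoJu build number. Worked numbers matching the `main` above:

```java
public class VersionMathSketch {
    public static void main(String[] args) {
        // "2.5.0": 2*100^3 + 5*100^2 + 0*100 = 2_050_000, then shifted by BASE_VAL
        long apache = (2L * 100 * 100 * 100 + 5L * 100 * 100 + 0L * 100) * 10_000L;
        System.out.println(apache); // 20500000000

        // "2.5.0-d-300": Apache part plus the internal build number in the low four digits
        long xiaoJu = apache + 300L;
        System.out.println(xiaoJu);           // 20500000300
        System.out.println(xiaoJu % 10_000L); // 300, which dNormailze turns back into "-d-300"
    }
}
```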

View File

@@ -30,5 +30,6 @@ module.exports = {
'prettier/prettier': 2, // make ESLint report an error for code that does not follow the Prettier rules
'no-console': 1,
'react/display-name': 0,
'@typescript-eslint/explicit-module-boundary-types': 'off',
},
};

File diff suppressed because it is too large

View File

@@ -17,7 +17,7 @@
"eslint-plugin-react": "7.22.0",
"eslint-plugin-react-hooks": "^4.2.0",
"husky": "4.3.7",
"lerna": "^5.5.0",
"lerna": "5.5.0",
"lint-staged": "10.5.3",
"prettier": "2.3.2"
},

File diff suppressed because it is too large

View File

@@ -8,13 +8,6 @@
"ident": "config",
"homepage": "",
"license": "ISC",
"publishConfig": {
"registry": "http://registry.npm.xiaojukeji.com/"
},
"repository": {
"type": "git",
"url": "git@git.xiaojukeji.com:bigdata-cloud/d1.git"
},
"scripts": {
"test": "echo \"Error: run tests from root\" && exit 1",
"start": "cross-env NODE_ENV=development webpack-dev-server",

File diff suppressed because it is too large

View File

@@ -18,6 +18,7 @@ export enum MetricType {
Connect = 120,
Connectors = 121,
Controls = 901,
MM2 = 122,
}
const api = {
@@ -233,9 +234,9 @@ const api = {
getConnectors: (clusterPhyId: string) => getApi(`/clusters/${clusterPhyId}/connectors-basic`),
getConnectorMetrics: (clusterPhyId: string) => getApi(`/clusters/${clusterPhyId}/connectors-metrics`),
getConnectorPlugins: (connectClusterId: number) => getApi(`/kafka-connect/clusters/${connectClusterId}/connector-plugins`),
- getConnectorPluginConfig: (connectClusterId: number, pluginName: string) =>
+ getConnectorPluginConfig: (connectClusterId: number | string, pluginName: string) =>
getApi(`/kafka-connect/clusters/${connectClusterId}/connector-plugins/${pluginName}/config`),
- getCurPluginConfig: (connectClusterId: number, connectorName: string) =>
+ getCurPluginConfig: (connectClusterId: number | string, connectorName: string) =>
getApi(`/kafka-connect/clusters/${connectClusterId}/connectors/${connectorName}/config`),
isConnectorExist: (connectClusterId: number, connectorName: string) =>
getApi(`/kafka-connect/clusters/${connectClusterId}/connectors/${connectorName}/basic-combine-exist`),
@@ -251,6 +252,39 @@ const api = {
getConnectClusterBasicExit: (clusterPhyId: string, clusterPhyName: string) =>
getApi(`/kafka-clusters/${clusterPhyId}/connect-clusters/${clusterPhyName}/basic-combine-exist`),
// MM2 list
getMirrorMakerList: (clusterPhyId: number) => getApi(`/clusters/${clusterPhyId}/mirror-makers-overview`),
// MM2 state card
getMirrorMakerState: (clusterPhyId: string) => getApi(`/kafka-clusters/${clusterPhyId}/mirror-makers-state`),
// MM2 metrics card
getMirrorMakerMetrics: (clusterPhyId: string) => getApi(`/clusters/${clusterPhyId}/mirror-makers-metrics`),
// MM2 filter options
getMirrorMakerMetadata: (clusterPhyId: string) => getApi(`/clusters/${clusterPhyId}/mirror-makers-basic`),
// MM2 detail task list
getMM2DetailTasks: (connectorName: number | string, connectClusterId: number | string) =>
getApi(`/kafka-mm2/clusters/${connectClusterId}/connectors/${connectorName}/tasks`),
// MM2 detail state card
getMM2DetailState: (connectorName: number | string, connectClusterId: number | string) =>
getApi(`/kafka-mm2/clusters/${connectClusterId}/connectors/${connectorName}/state`),
// MM2 operations: create, pause, restart, delete
mirrorMakerOperates: getApi('/kafka-mm2/mirror-makers'),
// MM2 create/edit validation
validateMM2Config: getApi('/kafka-mm2/mirror-makers-config/validate'),
// update the MM2 config
updateMM2Config: getApi('/kafka-mm2/mirror-makers-config'),
// MM2 detail
getMirrorMakerMetricPoints: (mirrorMakerName: number | string, connectClusterId: number | string) =>
getApi(`/kafka-mm2/clusters/${connectClusterId}/connectors/${mirrorMakerName}/latest-metrics`),
getSourceKafkaClusterBasic: getApi(`/physical-clusters/basic`),
getGroupBasic: (clusterPhyId: string) => getApi(`/clusters/${clusterPhyId}/groups-basic`),
// Topic replication
getMirrorClusterList: () => getApi(`/ha-mirror/physical-clusters/basic`),
handleTopicMirror: () => getApi(`/ha-mirror/topics`),
getTopicMirrorList: (clusterPhyId: number, topicName: string) =>
getApi(`/ha-mirror/clusters/${clusterPhyId}/topics/${topicName}/mirror-info`),
getMirrorMakerConfig: (connectClusterId: number | string, connectorName: string) =>
getApi(`/kafka-mm2/clusters/${connectClusterId}/connectors/${connectorName}/config`),
getApi(`/kafka-mm2/clusters/${connectClusterId}/connectors/${connectorName}/config`),
};
export default api;

View File

@@ -0,0 +1,119 @@
import React, { useState, useEffect } from 'react';
import { useParams } from 'react-router-dom';
import CardBar, { healthDataProps } from './index';
import { Tooltip, Utils } from 'knowdesign';
import api from '@src/api';
import { HealthStateEnum } from '../HealthState';
import { InfoCircleOutlined } from '@ant-design/icons';
interface MM2State {
workerCount: number;
aliveConnectorCount: number;
aliveTaskCount: number;
healthCheckPassed: number;
healthCheckTotal: number;
healthState: number;
totalConnectorCount: string;
totalTaskCount: number;
totalServerCount: number;
mirrorMakerCount: number;
}
const getVal = (val: string | number | undefined | null) => {
return val === undefined || val === null || val === '' ? '0' : val;
};
const ConnectCard = ({ state }: { state?: boolean }) => {
const { clusterId } = useParams<{
clusterId: string;
}>();
const [loading, setLoading] = useState(false);
const [cardData, setCardData] = useState([]);
const [healthData, setHealthData] = useState<healthDataProps>({
state: HealthStateEnum.UNKNOWN,
passed: 0,
total: 0,
});
const getHealthData = () => {
return Utils.post(api.getMetricPointsLatest(Number(clusterId)), [
'HealthCheckPassed_MirrorMaker',
'HealthCheckTotal_MirrorMaker',
'HealthState_MirrorMaker',
]).then((data: any) => {
setHealthData({
state: data?.metrics?.['HealthState_MirrorMaker'],
passed: data?.metrics?.['HealthCheckPassed_MirrorMaker'] || 0,
total: data?.metrics?.['HealthCheckTotal_MirrorMaker'] || 0,
});
});
};
const getCardInfo = () => {
return Utils.request(api.getMirrorMakerState(clusterId)).then((res: MM2State) => {
const { mirrorMakerCount, aliveConnectorCount, aliveTaskCount, totalConnectorCount, totalTaskCount, workerCount } = res || {};
const cardMap = [
{
title: 'MM2s',
value: getVal(mirrorMakerCount),
customStyle: {
// custom CardBar style
marginLeft: 0,
},
},
{
title: 'Workers',
value: getVal(workerCount),
},
{
title() {
return (
<div>
<span style={{ display: 'inline-block', marginRight: '8px' }}>Connectors</span>
<Tooltip overlayClassName="rebalance-tooltip" title="conector运行数/总数">
<InfoCircleOutlined />
</Tooltip>
</div>
);
},
value() {
return (
<span>
{getVal(aliveConnectorCount)}/{getVal(totalConnectorCount)}
</span>
);
},
},
{
title() {
return (
<div>
<span style={{ display: 'inline-block', marginRight: '8px' }}>Tasks</span>
<Tooltip overlayClassName="rebalance-tooltip" title="Task运行数/总数">
<InfoCircleOutlined />
</Tooltip>
</div>
);
},
value() {
return (
<span>
{getVal(aliveTaskCount)}/{getVal(totalTaskCount)}
</span>
);
},
},
];
setCardData(cardMap);
});
};
useEffect(() => {
setLoading(true);
Promise.all([getHealthData(), getCardInfo()]).finally(() => {
setLoading(false);
});
}, [clusterId, state]);
return <CardBar scene="mm2" healthData={healthData} cardColumns={cardData} loading={loading}></CardBar>;
};
export default ConnectCard;

View File

@@ -0,0 +1,145 @@
/* eslint-disable react/display-name */
import React, { useState, useEffect } from 'react';
import { useLocation, useParams } from 'react-router-dom';
import CardBar from '@src/components/CardBar';
import { healthDataProps } from '.';
import { Tooltip, Utils } from 'knowdesign';
import Api from '@src/api';
import { hashDataParse } from '@src/constants/common';
import { HealthStateEnum } from '../HealthState';
import { InfoCircleOutlined } from '@ant-design/icons';
import { stateEnum } from '@src/pages/Connect/config';
const getVal = (val: string | number | undefined | null) => {
return val === undefined || val === null || val === '' ? '0' : val;
};
const ConnectDetailCard = (props: { record: any; tabSelectType: string }) => {
const { record, tabSelectType } = props;
const urlParams = useParams<{ clusterId: string; brokerId: string }>();
const urlLocation = useLocation<any>();
const [loading, setLoading] = useState(false);
const [cardData, setCardData] = useState([]);
const [healthData, setHealthData] = useState<healthDataProps>({
state: HealthStateEnum.UNKNOWN,
passed: 0,
total: 0,
});
const getHealthData = (tabSelectTypeName: string) => {
return Utils.post(Api.getMirrorMakerMetricPoints(tabSelectTypeName, record?.connectClusterId), [
'HealthState',
'HealthCheckPassed',
'HealthCheckTotal',
]).then((data: any) => {
setHealthData({
state: data?.metrics?.['HealthState'],
passed: data?.metrics?.['HealthCheckPassed'] || 0,
total: data?.metrics?.['HealthCheckTotal'] || 0,
});
});
};
const getCardInfo = (tabSelectTypeName: string) => {
return Utils.request(Api.getConnectDetailState(tabSelectTypeName, record?.connectClusterId)).then((res: any) => {
const { type, aliveTaskCount, state, totalTaskCount, totalWorkerCount } = res || {};
const cordRightMap = [
{
title: 'Status',
// value: Utils.firstCharUppercase(state) || '-',
value: () => {
return (
<>
{
<span style={{ fontFamily: 'HelveticaNeue-Medium', fontSize: 32, color: stateEnum[state].color }}>
{Utils.firstCharUppercase(state) || '-'}
</span>
}
</>
);
},
},
{
title() {
return (
<div>
<span style={{ display: 'inline-block', marginRight: '8px' }}>Tasks</span>
<Tooltip overlayClassName="rebalance-tooltip" title="Task运行数/总数">
<InfoCircleOutlined />
</Tooltip>
</div>
);
},
value() {
return (
<span>
{getVal(aliveTaskCount)}/{getVal(totalTaskCount)}
</span>
);
},
},
{
title: 'Workers',
value: getVal(totalWorkerCount),
},
];
setCardData(cordRightMap);
});
};
const noDataCardInfo = () => {
const cordRightMap = [
{
title: 'Status',
// value: Utils.firstCharUppercase(state) || '-',
value() {
return <span>-</span>;
},
},
{
title() {
return (
<div>
<span style={{ display: 'inline-block', marginRight: '8px' }}>Tasks</span>
<Tooltip overlayClassName="rebalance-tooltip" title="Task运行数/总数">
<InfoCircleOutlined />
</Tooltip>
</div>
);
},
value() {
return <span>-/-</span>;
},
},
{
title: 'Workers',
value() {
return <span>-</span>;
},
},
];
setCardData(cordRightMap);
};
useEffect(() => {
setLoading(true);
const filterCardInfo =
tabSelectType === 'MirrorCheckpoint' && record.checkpointConnector
? getCardInfo(record.checkpointConnector)
: tabSelectType === 'MirrorHeatbeat' && record.heartbeatConnector
? getCardInfo(record.heartbeatConnector)
: tabSelectType === 'MirrorSource' && record.connectorName
? getCardInfo(record.connectorName)
: noDataCardInfo();
Promise.all([getHealthData(record.connectorName), filterCardInfo]).finally(() => {
setLoading(false);
});
}, [record, tabSelectType]);
return (
<CardBar record={record} scene="mm2" healthData={healthData} cardColumns={cardData} showCardBg={false} loading={loading}></CardBar>
);
};
export default ConnectDetailCard;

Some files were not shown because too many files have changed in this diff