Compare commits

...

172 Commits
v3.0.1 ... v3.3

Author SHA1 Message Date
zengqiao
258385dc9a Bump to version 3.3.0 2023-02-24 11:12:31 +08:00
zengqiao
65238231f0 Add 3.3.0 upgrade notes 2023-02-24 11:11:12 +08:00
zengqiao
cb22e02fbe Add 3.3.0 changelog 2023-02-24 11:10:42 +08:00
erge
aa0bec1206 [Optimize] Pin the lerna version in package.json and update package-lock.json (#957) 2023-02-23 20:14:04 +08:00
wyb
793c780015 [Bugfix] Fix MM2 list request timeout (#949)
Adjust code structure
2023-02-23 11:17:48 +08:00
erge
ec6f063450 [Optimize] Remove internal addresses from package.json (#939) 2023-02-22 17:08:21 +08:00
zengqiao
f25c65b98b [Doc] Add contributor information 2023-02-22 14:00:52 +08:00
Luckywustone
2d99aae779 [Bugfix] Unclear ZK health-check logs make problems hard to locate #904
[Bugfix] Unclear ZK health-check logs make problems hard to locate #904
2023-02-22 13:41:02 +08:00
erge
a8847dc282 [Bugfix] Fix packaging failure (#940) 2023-02-22 11:58:33 +08:00
zengqiao
4852c01c88 [Feature] Add code-contribution docs (#947)
1. Add the contributor list; if anyone is missing, please let us know;
2. Add the contribution guide;
2023-02-22 11:53:00 +08:00
zengqiao
3d6f405b69 [Bugfix] Correct a defunct email address (#944)
[Bugfix] Correct wording (#944)
2023-02-22 11:52:40 +08:00
erge
18e3fbf41d [Optimize] Show time and result of health-check items (didi#930) 2023-02-21 10:41:49 +08:00
erge
ae8cc3092b [Optimize] When creating/editing an MM2 Topic, fetch from the corresponding sourceKafka cluster instead of the current cluster & improve create/edit MM2 input parameters (#894) 2023-02-21 10:41:44 +08:00
erge
5c26e8947b [Optimize] Change the Drawer title text for creating MM2 via JSON (#894) 2023-02-21 10:41:37 +08:00
erge
fbe6945d3b [Bugfix] Leader node displayed incorrectly on the zookeeper page (#873) 2023-02-21 10:41:25 +08:00
zengqiao
7dc8f2dc48 [Bugfix] Fix search not working on the Connector and MM2 lists (#928) 2023-02-21 10:40:05 +08:00
zengqiao
91c60ce72c [Bugfix] Fix Controller-Host not displayed for newly connected clusters (#927)
Root cause:
1. For a newly connected cluster, Broker info is not yet stored in the DB, so when storing the Controller to the DB, the Broker lookup in the DB returns empty.

Fix:
1. Proactively fetch Broker info once before storing the Controller to the DB.
2023-02-21 10:39:46 +08:00
zengqiao
687eea80c8 Add 3.3.0 changelog 2023-02-16 14:51:43 +08:00
zengqiao
9bfe3fd1db Set license to AGPL 2023-02-15 17:53:46 +08:00
shizeying
03f81bc6de [Bugfix] Drop the idx_cluster_phy_id index and add the idx_cluster_update_time index (#918) 2023-02-15 17:45:53 +08:00
slhu
eed9571ffa [Bugfix] Fix a data-type conversion error when parsing metric values returned by command execution, and an NPE when storing/reporting metrics (#912)
1. Change the zk_min_latency and zk_max_latency metric data types to float
2. Use ConvertUtil.string2Float() for string-to-float conversion
2023-02-15 16:20:39 +08:00
edengyuan_v
e4651ef749 [Optimize] Distinguish single vs. multiple selection for the cleanup policy when creating a Topic (#770) 2023-02-15 11:18:33 +08:00
zengqiao
f715cf7a8d Add 3.3.0 changelog 2023-02-13 11:57:51 +08:00
wyb
fad9ddb9a1 fix: update login page copy 2023-02-13 11:49:00 +08:00
wyb
b6e4f50849 fix: improve health-state details & Connector styles & add a fallback page when there are no MM2 task metrics 2023-02-13 11:49:00 +08:00
wyb
5c6911e398 [Optimize] Overview metric card display logic 2023-02-13 11:49:00 +08:00
wyb
a0371ab88b feat: add Topic replication 2023-02-13 11:49:00 +08:00
wyb
fa2abadc25 feat: add Mirror Maker 2.0 (MM2) 2023-02-13 11:49:00 +08:00
zengqiao
f03460f3cd [Bugfix] Fix incorrect display of Broker Similar Config (#872) 2023-02-13 11:22:13 +08:00
zengqiao
b5683b73c2 [Optimize] Improve MySQL & ES test container initialization (#906)
Main changes:
1. knowstreaming/knowstreaming-manager container;
2. Switch the knowstreaming/knowstreaming-mysql container to the mysql:5.7 container;
3. After initializing the mysql:5.7 container, add a step to initialize MySQL tables and data;

Affected changes:
1. Move the MySQL init scripts from km-dist/init/sql to km-persistence/src/main/resource/sql so the required init SQL is loaded during project tests;
2. Delete the unused km-dist/init/template directory;
3. Because of the km-dist/init/sql and km-dist/init/template changes, also adjust the file contents of ReleaseKnowStreaming.xml;
2023-02-13 10:33:40 +08:00
zengqiao
c062586c7e [Optimize] Remove unused & redundant packaging config files 2023-02-10 16:51:32 +08:00
fengqiongfeng
98a5c7b776 [Optimize] Improve health-check logs (#869) 2023-02-10 11:02:24 +08:00
zengqiao
e204023b1f [Feature] Add an API listing clusters that support Topic replication (#899) 2023-02-09 17:03:28 +08:00
zengqiao
4c5ffccc45 [Optimize] Remove dead code 2023-02-09 17:00:50 +08:00
zengqiao
fbcf58e19c [Feature] MM2 management: improve Connector metadata management (#894) 2023-02-09 16:59:38 +08:00
zengqiao
e5c6d00438 [Feature] MM2 management: add cluster Group list info (#894) 2023-02-09 16:59:38 +08:00
zengqiao
ab6a4d7099 [Feature] MM2 management: MM2 API classes (#894) 2023-02-09 16:59:38 +08:00
zengqiao
78b2b8a45e [Feature] MM2 management: MM2 business classes (#894) 2023-02-09 16:59:38 +08:00
zengqiao
add2af4f3f [Feature] MM2 management: MM2 service classes (#894) 2023-02-09 16:59:38 +08:00
zengqiao
235c0ed30e [Feature] MM2 management: MM2 entity classes (#894) 2023-02-09 16:59:38 +08:00
zengqiao
5bd93aa478 [Bugfix] Fix incorrect cluster-state statistics under normal conditions (#865) 2023-02-09 16:44:26 +08:00
zengqiao
f95be2c1b3 [Optimize] Include task group info in TaskResult 2023-02-09 16:36:19 +08:00
zengqiao
5110b30f62 [Feature] MM2 management: MM2 health checks (#894) 2023-02-09 15:36:35 +08:00
zengqiao
861faa5df5 [Feature] HA: mirror Topic management (#899)
1. The underlying Kafka must be the Didi Kafka distribution;
2. Add CRUD for mirror Topics;
3. Add metric viewing for mirror Topics;
2023-02-09 15:21:23 +08:00
zengqiao
efdf624c67 [Feature] HA: compatibility with Didi Kafka version info (#899) 2023-02-09 15:21:23 +08:00
zengqiao
caccf9cef5 [Feature] MM2 management: MM2 metric collection task (#894) 2023-02-09 14:58:34 +08:00
zengqiao
6ba3dceb84 [Feature] MM2 management: collect MM2 metrics (#894) 2023-02-09 14:58:34 +08:00
zengqiao
9b7c41e804 [Feature] MM2 management: read/write MM2 metrics in ES (#894) 2023-02-09 14:58:34 +08:00
zengqiao
346aee8fe7 [Bugfix] Fix errors when fetching TopN metrics on the Topic metric dashboard (#896)
1. Replace ES-based sorting with sorting based on the local cache;
2. Move the database local cache from the core module to the persistence module;
2023-02-09 14:20:02 +08:00
zengqiao
353d781bca [Feature] Add MM2-related indexes and database table info (#894) 2023-02-09 13:44:40 +08:00
EricZeng
3ce4bf231a Fix an incorrect conditional check
Co-authored-by: haoqi123 <49672871+haoqi123@users.noreply.github.com>
2023-02-09 11:28:26 +08:00
EricZeng
d046cb8bf4 Fix an incorrect conditional check
Co-authored-by: haoqi123 <49672871+haoqi123@users.noreply.github.com>
2023-02-09 11:28:26 +08:00
zengqiao
da95c63503 [Optimize] Clean up TestContainers dependencies (#892)
1. Remove the mysql-connector-j dependency;
2. Tidy up code;
2023-02-09 11:28:26 +08:00
haoqi
915e48de22 [Optimize] Add usage notes for Testcontainers (#890) 2023-02-09 11:05:44 +08:00
_haoqi
256f770971 [Feature]Support running tests with testcontainers(#870) 2023-02-08 14:56:44 +08:00
zengqiao
16e251cbe8 Change the open-source license 2023-02-08 14:10:37 +08:00
zengqiao
67743b859a [Optimize] Add configuration notes for Ldap login (#888) 2023-02-08 13:51:45 +08:00
congchen0321
c275b42632 Update faq.md 2023-02-08 13:41:08 +08:00
zengqiao
a02760417b [Optimize] Add default metrics shown on the ZK Overview page (#874) 2023-01-30 13:18:06 +08:00
zengqiao
0e50bfc5d4 Improve the PR template 2023-01-13 16:04:25 +08:00
wuyouwuyoulian
eab988e18f For #781, Fix "The partition display is incomplete" bug 2023-01-12 11:03:30 +08:00
zengqiao
dd6004b9d4 [Bugfix] Fix wrong parameters passed when collecting replica metrics (#867) 2023-01-11 18:00:21 +08:00
zengqiao
ac7c32acd5 [Optimize] Improve the ES index & template initialization docs (#832)
1. Fix inconsistent shard counts of index templates across different places;
2. Delete the redundant template.sh and standardize on init_es_template.sh;
3. In init_es_template.sh, add init scripts for the connect index templates and remove those for the replica and zookeeper index templates;
2023-01-09 15:18:41 +08:00
zengqiao
f4a219ceef [Optimize] Remove code that reads/writes Replica metrics from ES (#862) 2023-01-09 14:57:38 +08:00
zengqiao
a8b56fb613 [Bugfix] Fix an NPE in the user list after user info is modified (#860) 2023-01-09 14:57:23 +08:00
zengqiao
2925a20e8e [Bugfix] Fix partition selection not taking effect when viewing messages (#858) 2023-01-09 13:38:10 +08:00
zengqiao
6b3eb05735 [Bugfix] Fix ZK client configuration not taking effect (#694)
1. Fix ZK client configs written to the zk_properties field of the ks_km_physical_cluster table not taking effect.
2. Remove the currently unused jmxConfig field from zk_properties.
2023-01-09 10:44:35 +08:00
zengqiao
17e0c39f83 [Optimize] Improve Topic health-check logs (#855) 2023-01-06 14:42:08 +08:00
zengqiao
4994639111 [Optimize] Hide ZK in health-check details when there is no ZK module (#764) 2023-01-04 10:32:18 +08:00
wyb
c187b5246f [Bugfix] Fix missing metrics in connector metric filtering (#846) 2022-12-23 16:19:34 +08:00
wyb
6ed6d5ec8a [Bugfix] Fix user update failure (#840) 2022-12-22 15:56:48 +08:00
wyb
0735b332a8 [Bugfix] Fix incorrect function mapping (#842) 2022-12-22 08:48:59 +08:00
wyb
344cec19fe [Bugfix] Fix wrong max-value calculation in connector metric collection (#836) 2022-12-20 09:50:42 +08:00
zengqiao
6ef365e201 bump version to 3.2.0 2022-12-16 13:58:40 +08:00
zengqiao
edfa6a9f71 Adjust v3.2 containerized deployment info 2022-12-16 13:39:51 +08:00
孙超
860d0b92e2 V3.2 2022-12-16 13:27:09 +08:00
zengqiao
5bceed7105 [Optimize] Reduce the default shard count of ES indexes 2022-12-15 14:44:18 +08:00
zengqiao
44a2fe0398 Add 3.2.0 upgrade notes 2022-12-14 14:14:35 +08:00
zengqiao
218459ad1b Add 3.2.0 changelog 2022-12-14 14:14:20 +08:00
zengqiao
7db757bc12 [Optimize] Improve input parameters for Connector creation
1. Add a default value for config.action.reload;
2. Add a default value for errors.tolerance;
2022-12-14 14:12:32 +08:00
zengqiao
896a943587 [Optimize] Shorten the default ES index retention to 15 days 2022-12-14 14:10:46 +08:00
zengqiao
cd2c388e68 [Optimize] Address Sonar code-scan findings 2022-12-14 14:07:30 +08:00
wyb
4543a339b7 [Bugfix] Fix array index out of bounds during job update (#744) 2022-12-14 13:56:29 +08:00
zengqiao
1c4fbef9f2 [Feature] Support deploying the API service and Job service separately (#829)
1. JMX checking is required by every KS instance, so move it from the Task module to the Core module;
2. Add a global switch for Task-module jobs in application.yml;
2022-12-09 16:11:03 +08:00
zengqiao
b2f0f69365 [Optimize] Improve the TopN ES query flow on the Overview page (#823)
1. Reuse the thread pool and make its thread count configurable;
2. Avoid possible duplicate queries when fetching TopN metrics;
3. Address issues reported by code scanning (SonarLint);
2022-12-09 14:39:17 +08:00
wyb
c4fb18a73c [Bugfix] Fix inconsistent migration task states (#815) 2022-12-08 17:13:14 +08:00
zengqiao
5cad7b4106 [Bugfix] Fix blank page on the cluster Topic list (#819)
The health-state mapping of the cluster Topic list was wrong, causing a blank page when health-state metrics exist.
2022-12-07 16:27:27 +08:00
zengqiao
f3c4133cd2 [Bugfix] Query the latest Topic metric from ES in batches (#817) 2022-12-07 16:15:01 +08:00
zengqiao
d9c59cb3d3 Add Connect REST APIs 2022-12-07 10:20:02 +08:00
zengqiao
7a0db7161b Add Connect business-layer methods 2022-12-07 10:20:02 +08:00
zengqiao
6aefc16fa0 Add Connect-related tasks 2022-12-07 10:20:02 +08:00
zengqiao
186dcd07e0 Add 3.2 upgrade notes 2022-12-07 10:20:02 +08:00
zengqiao
e8652d5db5 Connect-related code 2022-12-07 10:20:02 +08:00
zengqiao
fb5964af84 Add kafka-connect related packages 2022-12-07 10:20:02 +08:00
zengqiao
249fe7c700 Move ES-related files & add connect ES DAO classes 2022-12-07 10:20:02 +08:00
zengqiao
cc2a590b33 Add a custom KSPartialKafkaAdminClient
The native KafkaAdminClient filters out Connect-cluster Groups when parsing Groups, so KSPartialKafkaAdminClient is customized to be able to fetch Connect Groups
2022-12-07 10:20:02 +08:00
zengqiao
5b3f3e5575 Move the code that writes metrics into ES 2022-12-07 10:20:02 +08:00
wyb
36cf285397 [Bug] Fix wrong database selection in the logi-security module (#808) 2022-12-06 20:02:49 +08:00
zengqiao
4386563c2c Adjust the default cost-time value for metric collection so it is visible when viewing Top metrics 2022-12-06 16:47:53 +08:00
zengqiao
0123ce4a5a Improve the JMX port value returned in the Broker list 2022-12-06 16:47:07 +08:00
zengqiao
c3d47d3093 Pool KafkaAdminClient to avoid KafkaAdminClient performance problems 2022-12-06 16:46:11 +08:00
zengqiao
9735c4f885 Remove duplicately collected metrics 2022-12-06 16:41:27 +08:00
zengqiao
3a3141a361 Adjust ZK metric collection times 2022-12-06 16:40:52 +08:00
zengqiao
ac30436324 [Bugfix] Fix deadlock when updating health-check results (#728) 2022-12-05 16:30:37 +08:00
zengqiao
7176e418f5 [Optimize] Improve health-check metric computation (#726)
1. Add caching to reduce IO when computing health-state metrics;
2. Run health checks concurrently per resource dimension;
3. Clarify the functional boundary between HealthCheckResultService and HealthStateService;
2022-12-05 16:26:31 +08:00
zengqiao
ca794f507e [Optimize] Standardize the log output format (#800)
Change the log output config so that logs automatically carry class={className}; future code no longer needs to write that part.
2022-12-05 14:27:02 +08:00
zengqiao
0f8be4fadc [Optimize] Improve log output & unify local cache management (#800) 2022-12-05 14:04:19 +08:00
zengqiao
7066246e8f [Optimize] Stagger collection task trigger times to reduce timeouts when fetching Offset info (#726)
All metric collection tasks currently fire exactly on the minute, so they request partition Offset info from Kafka at the same time, which:
1. Produces too many requests, causing timeouts;
2. Running simultaneously may fetch partition Offset info redundantly;

So stagger them.
2022-12-05 13:49:35 +08:00
zengqiao
7d1bb48b59 [Optimize] Improve ZK four-letter-command parsing logs (#805)
Handle previously missed metric names to reduce that part of the warn logs
2022-12-05 13:39:26 +08:00
limaiwang
dd0d519677 [Optimize] Update the directory-tree search copy in Zookeeper details (#793) 2022-12-05 12:15:03 +08:00
zengqiao
4293d05fca [Optimize] Improve the Topic metadata update strategy (#806) 2022-12-04 17:55:27 +08:00
zengqiao
2c82baf9fc [Optimize] Metric collection performance optimization, part 1 (#726) 2022-12-04 15:41:48 +08:00
zengqiao
921161d6d0 [Bugfix] Fix ReplicaMetricCollector compilation failure (#802) 2022-12-03 14:34:38 +08:00
zengqiao
e632c6c13f [Optimize] Address Sonar scan findings 2022-12-02 15:34:28 +08:00
zengqiao
5833a8644c [Optimize] Disable errorLogger and remove useless output (#801) 2022-12-02 15:29:17 +08:00
zengqiao
fab41e892f [Optimize] Unify log format & improve output, part 3 (#800) 2022-12-02 15:14:21 +08:00
zengqiao
7a52cf67b0 [Optimize] Unify log format & improve output, part 2 (#800) 2022-12-02 15:01:24 +08:00
zengqiao
175b8d643a [Optimize] Unify log format, part 1 (#800) 2022-12-02 14:39:57 +08:00
zengqiao
6241eb052a [Bugfix] Fix wrong logger in the KafkaJMXClient class (#794) 2022-11-30 11:15:00 +08:00
zengqiao
c2fd0a8410 [Optimize] Clean up non-standard code flagged by Sonar 2022-11-29 20:54:41 +08:00
zengqiao
5127b600ec [Optimize] Improve ESClient concurrency control (#787) 2022-11-29 10:47:57 +08:00
zengqiao
feb03aede6 [Optimize] Improve thread pool names (#789) 2022-11-28 15:11:54 +08:00
duanxiaoqiu
47b6c5d86a [Bugfix] Fix the cleanup policy when creating a topic (Kafka versions before 0.10.1.0): compact and delete must be mutually exclusive (didi#770) 2022-11-27 14:18:50 +08:00
SimonTeo58
c4a81613f4 [Optimize] Update Topic-Messages drawer copy (#771) 2022-11-24 21:54:29 +08:00
limaiwang
daeb5c4cec [Bugfix] Fix parameter validation errors when cluster config is left empty 2022-11-24 15:30:01 +08:00
WangYaobo
38def45ad6 [Doc] Add a no-data troubleshooting doc (#773) 2022-11-24 10:44:37 +08:00
pen4
4b29a2fdfd update org.springframework:spring-context 5.3.18 to 5.3.19 2022-11-23 11:38:11 +08:00
zengqiao
a165ecaeef [Bugfix] Fix wrong version gating when modifying Broker & Topic configs (#762)
Kafka v2.3 added incremental config updates, but KS wrongly treated 0.11.0 as already having that capability, so adjust it.
2022-11-21 15:56:33 +08:00
night.liang
6637ba4ccc [Optimize] optimize zk OutstandingRequests checker’s exception log (#738) 2022-11-18 17:12:07 +08:00
duanxiaoqiu
2f807eec2b [Feat] Change health score to health state in the Topic list (#758) 2022-11-18 13:56:27 +08:00
石臻臻的杂货铺
636c2c6a83 Update README.md 2022-11-17 13:33:40 +08:00
zengqiao
898a55c703 [Bugfix] Fix wrong name when storing the Broker-list LogSize metric (#759) 2022-11-17 13:27:45 +08:00
zengqiao
8ffe7e7101 [Bugfix] Fix missing Group metrics in Prometheus (#756) 2022-11-14 13:33:16 +08:00
zengqiao
7661826ea5 [Optimize] Add ClusterParam to health checks to split the Kafka and Connect check tasks 2022-11-10 16:24:39 +08:00
zengqiao
e456be91ef [Bugfix] Reload JMX when a cluster's JMX config changes 2022-11-10 16:04:40 +08:00
zengqiao
da0a97cabf [Optimize] Restructure Task code in preparation for the Connector feature 2022-11-09 10:28:52 +08:00
zengqiao
c1031a492a [Optimize] Add ES index deletion 2022-11-09 10:28:52 +08:00
zengqiao
3c8aaf528c [Bugfix] Fix wrong cluster count returned due to missing metrics (#741) 2022-11-09 10:28:52 +08:00
黄海婷
70ff20a2b0 styles: cardBar card title icon hover style 2022-11-07 10:38:28 +08:00
黄海婷
6918f4babe styles: add hover background to the custom-column button in the job list 2022-11-07 10:38:28 +08:00
黄海婷
805a704d34 styles: some icons need a background color on hover 2022-11-07 10:38:28 +08:00
黄海婷
c69c289bc4 styles: some icons need a background color on hover 2022-11-07 10:38:28 +08:00
zengqiao
dd5869e246 [Optimize] Restructure code in preparation for the Connect feature 2022-11-07 10:13:26 +08:00
Richard
b51ffb81a3 [Bugfix] No thread-bound request found. (#743) 2022-11-07 10:06:54 +08:00
黄海婷
ed0efd6bd2 styles: change font color #adb5bc to #74788D 2022-11-03 16:49:35 +08:00
黄海婷
39d2fe6195 styles: bold the hint below the message-size test dialog 2022-11-03 16:49:35 +08:00
黄海婷
7471d05c20 styles: adjust the character-count font in the message-size test dialog 2022-11-03 16:49:35 +08:00
黄海婷
3492688733 feat: add a hover tooltip to the Consumer list refresh button 2022-11-01 17:37:37 +08:00
Sean
a603783615 [Optimize] Add flatten.xml filtering to .gitignore in preparation for introducing flatten (#732) 2022-11-01 14:16:53 +08:00
night.liang
5c9096d564 [Bugfix] fix replica dsl (#708) 2022-11-01 10:45:59 +08:00
zengqiao
c27786a257 bump version to 3.1.0 2022-10-31 14:55:50 +08:00
zengqiao
81910d1958 [Hotfix] Fix an NPE on the health-state info page for newly connected clusters 2022-10-31 14:55:22 +08:00
zengqiao
55d5fc4bde Add v3.1.0 changelog entries 2022-10-31 14:05:42 +08:00
GraceWalk
f30586b150 fix: use the taobao mirror by default for dependency installation 2022-10-29 13:55:36 +08:00
GraceWalk
37037c19f0 fix: update how version info is fetched 2022-10-29 13:55:36 +08:00
GraceWalk
1a5e2c7309 fix: improve error pages 2022-10-29 13:55:36 +08:00
GraceWalk
941dd4fd65 feat: support the Zookeeper module 2022-10-29 13:55:36 +08:00
GraceWalk
5f6df3681c feat: improve health-state display 2022-10-29 13:55:36 +08:00
zengqiao
7d045dbf05 Add ZK health-check tasks 2022-10-29 13:55:07 +08:00
zengqiao
4ff4accdc3 Add 3.1.0 upgrade notes 2022-10-29 13:55:07 +08:00
zengqiao
bbe967c4a8 Add multi-cluster health-state overview info 2022-10-29 13:55:07 +08:00
zengqiao
b101cec6fa Change health score to health state 2022-10-29 13:55:07 +08:00
zengqiao
e98ec562a2 Add the current node path to Znode info 2022-10-29 13:55:07 +08:00
zengqiao
0e71ecc587 Extend the health-check result expiry time 2022-10-29 13:55:07 +08:00
zengqiao
0f11a65df8 Add a method to get the ZK namespace 2022-10-29 13:55:07 +08:00
zengqiao
da00c8c877 Restore the error message for consumer group reset failure 2022-10-29 13:55:07 +08:00
hongtenzone@foxmail.com
8b177877bb Add release notes 2022-10-28 15:35:26 +08:00
hongtenzone@foxmail.com
ea199dca8d Add release notes 2022-10-28 15:35:26 +08:00
renxiangde
88b5833f77 [Bugfix] Fix "Topic does not exist" when viewing Topic-Messages right after creating a Topic (#697) 2022-10-27 11:04:26 +08:00
zwen
127b5be651 [fix] Fix preferredReplicaElection not being called as expected 2022-10-27 10:15:15 +08:00
Mengqi777
80f001cdd5 [ISSUE #723]Ignore error and continue to package km-rest if no git directory 2022-10-26 10:14:14 +08:00
zengqiao
30d297cae1 bump version to 3.1.0-SNAPSHOT 2022-10-21 17:13:02 +08:00
661 changed files with 35885 additions and 9462 deletions

View File

@@ -14,9 +14,10 @@ XXXX
Please follow this checklist to help us integrate your contribution quickly and easily:
-* [ ] Make sure there is a Github issue filed for the change (usually before you start working on it). Trivial changes like typos do not require a Github issue. Your Pull Request should address just one issue, without pulling in other changes: one PR resolves one issue.
-* [ ] Format the Pull Request title, e.g. [ISSUE #123] support Confluent Schema Registry. Each commit in the Pull Request should have a meaningful subject line and body.
-* [ ] Write a Pull Request description detailed enough to understand what the Pull Request does, how, and why.
-* [ ] Write necessary unit tests to verify your logic correction. If a new feature or significant change is submitted, remember to add integration-test in the test module.
-* [ ] Make sure compilation passes and integration tests pass.
+* [ ] One PR (short for Pull Request) resolves exactly one issue; a single PR that resolves multiple issues is not allowed;
+* [ ] Make sure the PR has a corresponding Issue (usually created before you start working), except for trivial changes such as typos, which need no Issue;
+* [ ] Format the titles and contents of the PR and the Commit-Log, e.g. #861. PS: the Commit-Log must be written when you git-commit the code; it cannot be changed on GitHub;
+* [ ] Write a PR description detailed enough to understand what the PR does, how, and why;
+* [ ] Write necessary unit tests to verify your logic correction. If a new feature or significant change is submitted, remember to add integration-test in the test module;
+* [ ] Make sure compilation passes and integration tests pass;

6
.gitignore vendored
View File

@@ -109,4 +109,8 @@ out/*
dist/
dist/*
km-rest/src/main/resources/templates/
*dependency-reduced-pom*
+#filter flattened xml
+*/.flattened-pom.xml
+.flattened-pom.xml
+*/*/.flattened-pom.xml

View File

@@ -4,7 +4,7 @@
## Our Pledge
In the interest of fostering an open and welcoming environment, we as
-contributors and maintainers pledge to making participation in our project and
+contributors and maintainers pledge to making participation in our project, and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, gender identity and expression, level of experience,
education, socio-economic status, nationality, personal appearance, race,
@@ -56,7 +56,7 @@ further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
-reported by contacting the project team at shirenchuang@didiglobal.com . All
+reported by contacting the project team at https://knowstreaming.com/support-center . All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.

View File

@@ -143,7 +143,7 @@ PS: When asking, please describe the problem fully in one message and include your environment info
**`2. WeChat Group`**
-To join the WeChat group: add `mike_zhangliang` or `PenceXie` on WeChat with the note "KnowStreaming" to be invited to the group.
+To join the WeChat group: add `mike_zhangliang`, `PenceXie`, or `szzdzhp001` on WeChat with the note "KnowStreaming" to be invited to the group.
<br/>
Before joining, please give the project a star first; a small star is what motivates the KnowStreaming authors to keep building the community.

View File

@@ -1,4 +1,148 @@
## v3.3.0
**Bug Fixes**
- Fix the Connect JMX-Port configuration not taking effect;
- Fix the Overview page data loading forever when no Connector exists;
- Fix Group partition info being displayed incompletely across pages;
- Fix wrong parameters passed when collecting replica metrics;
- Fix an NPE in the user list after user info is modified;
- Fix partition selection not taking effect when viewing messages on the Topic detail page;
- Fix ZK client configuration not taking effect;
- Fix the connect module's metrics missing the passed health-check count;
- Fix a mapping error in the connect module's metric getters;
- Fix wrong retrieval of max-dimension metrics in the connect module;
- Fix wrong TopN metric info on the Topic metric dashboard;
- Fix incorrect display of Broker Similar Config;
- Fix an NPE caused by a wrong data type when parsing ZK four-letter commands;
- Fix wrong version gating of cleanup-policy options when creating a Topic;
- Fix Controller-Host info not displayed for newly connected clusters;
- Fix search not working on the Connector and MM2 lists;
- Fix abnormal Leader display on the Zookeeper page;
- Fix frontend packaging failure;
**Product Improvements**
- Add default metrics shown on the ZK Overview page;
- Standardize ES index template initialization on init_es_template.sh; add the missing connect index template init scripts and remove the redundant replica and zookeeper ones;
- On the metric dashboard, after metric filtering, show metric cards that have no data (previously hidden) and add a no-data fallback;
- Remove code that reads/writes replica metrics from ES;
- Improve Topic health-check logs to make error causes explicit;
- Hide ZK in health-check details when there is no ZK module;
- Make the local cache size configurable;
- Include task group info in Task module responses;
- FAQ: add Ldap configuration notes;
- FAQ: add configuration notes for connecting Kafka clusters with Kerberos authentication;
- Add a time-based index to the ks_km_kafka_change_record table to improve query performance;
- Improve ZK health-check logs to ease troubleshooting;
**New Features**
- Add Topic replication based on Didi Kafka (requires the Didi Kafka distribution);
- Add Topic-replication metrics to the Topic metric dashboard;
- Add unit tests based on TestContainers;
**Kafka MM2 Beta (newly released in v3.3.0)**
- CRUD for MM2 tasks;
- Metric dashboard for MM2 tasks;
- Health state for MM2 tasks;
---
## v3.2.0
**Bug Fixes**
- Fix deadlock when updating health-check results to the DB;
- Fix wrong logger in the KafkaJMXClient class;
- Backend: fix the Topic cleanup policy allowing multiple selections for Kafka versions before 0.10.1.0, where only one of the two may be chosen;
- Fix an error when connecting a cluster without filling in the cluster config;
- Upgrade spring-context to 5.3.19 to fix a security vulnerability;
- Fix wrong version info in the multi-version compatibility config when modifying Broker & Topic configs;
- Change the Topic list's health score to health state;
- Fix the Broker LogSize metric being unqueryable due to a wrong storage name;
- Fix missing Group metrics in Prometheus;
- Fix wrong cluster counts caused by missing health-state metrics;
- Fix an exception when background tasks record operation logs without operating-user info;
- Fix wrong DSL in Replica metric queries;
- Disable errorLogger to fix duplicated error-log output;
- Fix failure to update user info in system administration;
- Fix migration tasks stuck in running state due to loss of the original AR info;
- Fix failures when querying real-time data for the cluster Topic list;
- Fix blank page on the cluster Topic list;
- Fix array index out of bounds during replica changes caused by abnormal AR data;
**Product Improvements**
- Run health checks concurrently (multi-threaded) per resource dimension;
- Unify the log output format and improve some of the logs;
- Improve misleading WARN logs during ZK four-letter-command result parsing;
- Improve the directory-tree search copy in Zookeeper details;
- Improve thread pool names so third-party systems can analyze related problems more easily;
- Remove ESClient concurrency control to reduce the number of ESClients created and improve utilization;
- Improve Topic Messages drawer copy;
- Improve error logs when ZK health checks fail;
- Raise the timeout for fetching Offset info to reduce request timeouts under high concurrency;
- Improve the Topic & Partition metadata update strategy to reduce DB connection usage;
- Address Sonar code-scan issues;
- Improve partition Offset metric collection;
- Improve frontend chart component logic;
- Improve the product theme colors;
- Add a hover tooltip to the Consumer list refresh button;
- Improve the test dialog experience when configuring Topic message size;
- Improve the TopN query flow on the Overview page;
**New Features**
- Add a troubleshooting doc for pages showing no data;
- Add ES index deletion;
- Support deploying the API service and Job service separately;
**Kafka Connect Beta (newly released in v3.2.0)**
- Onboarding of Connect clusters;
- CRUD for Connectors;
- Metric dashboards for Connect clusters & Connectors;
---
## v3.1.0
**Bug Fixes**
- Fix the Group Offset reset hint missing the note that groups in Dead state can also be reset;
- Fix "Topic does not exist" when viewing Topic Messages right after creating a Topic;
- Fix preferred replica election not being triggered properly during replica changes;
- Fix packaging failure when the git directory does not exist;
- Fix JMX PORT showing -1 for Kafka clusters in KRaft mode;
**Experience Improvements**
- Change the health score of Cluster, Broker, Topic, and Group to health state;
- Remove weights from the health-check configuration;
- Improve the error page display;
- Use the taobao mirror by default for frontend build dependencies;
- Redesign and improve the navigation bar icons;
**New**
- Add product version info to the avatar dropdown;
- Add cluster health-state distribution to the multi-cluster list page;
**Kafka ZK Section (officially released in v3.1.0)**
- Add a metric dashboard for ZK clusters;
- Add a service-state overview for ZK clusters;
- Add a service node list for ZK clusters;
- Add viewing of Kafka's data stored in ZK;
- Add ZK health checks and health-state computation;
---
## v3.0.1
**Bug Fixes**

View File

@@ -13,7 +13,7 @@ curl -s --connect-timeout 10 -o /dev/null -X POST -H 'cache-control: no-cache' -
],
"settings" : {
"index" : {
-"number_of_shards" : "10"
+"number_of_shards" : "2"
}
},
"mappings" : {
@@ -115,7 +115,7 @@ curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: appl
],
"settings" : {
"index" : {
-"number_of_shards" : "10"
+"number_of_shards" : "2"
}
},
"mappings" : {
@@ -302,7 +302,7 @@ curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: appl
],
"settings" : {
"index" : {
-"number_of_shards" : "10"
+"number_of_shards" : "6"
}
},
"mappings" : {
@@ -377,7 +377,7 @@ curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: appl
],
"settings" : {
"index" : {
-"number_of_shards" : "10"
+"number_of_shards" : "6"
}
},
"mappings" : {
@@ -436,72 +436,6 @@ curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: appl
"aliases" : { }
}'
curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: application/json' http://${esaddr}:${port}/_template/ks_kafka_replication_metric -d '{
"order" : 10,
"index_patterns" : [
"ks_kafka_replication_metric*"
],
"settings" : {
"index" : {
"number_of_shards" : "10"
}
},
"mappings" : {
"properties" : {
"brokerId" : {
"type" : "long"
},
"partitionId" : {
"type" : "long"
},
"routingValue" : {
"type" : "text",
"fields" : {
"keyword" : {
"ignore_above" : 256,
"type" : "keyword"
}
}
},
"clusterPhyId" : {
"type" : "long"
},
"topic" : {
"type" : "keyword"
},
"metrics" : {
"properties" : {
"LogStartOffset" : {
"type" : "float"
},
"Messages" : {
"type" : "float"
},
"LogEndOffset" : {
"type" : "float"
}
}
},
"key" : {
"type" : "text",
"fields" : {
"keyword" : {
"ignore_above" : 256,
"type" : "keyword"
}
}
},
"timestamp" : {
"format" : "yyyy-MM-dd HH:mm:ss Z||yyyy-MM-dd HH:mm:ss||yyyy-MM-dd HH:mm:ss.SSS Z||yyyy-MM-dd HH:mm:ss.SSS||yyyy-MM-dd HH:mm:ss,SSS||yyyy/MM/dd HH:mm:ss||yyyy-MM-dd HH:mm:ss,SSS Z||yyyy/MM/dd HH:mm:ss,SSS Z||epoch_millis",
"index" : true,
"type" : "date",
"doc_values" : true
}
}
},
"aliases" : { }
}'
curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: application/json' http://${esaddr}:${port}/_template/ks_kafka_topic_metric -d '{
"order" : 10,
"index_patterns" : [
@@ -509,7 +443,7 @@ curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: appl
],
"settings" : {
"index" : {
-"number_of_shards" : "10"
+"number_of_shards" : "6"
}
},
"mappings" : {
@@ -626,7 +560,7 @@ curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: appl
],
"settings" : {
"index" : {
-"number_of_shards" : "10"
+"number_of_shards" : "2"
}
},
"mappings" : {
@@ -704,6 +638,388 @@ curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: appl
"aliases" : { } "aliases" : { }
}' }'
curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: application/json' http://${SERVER_ES_ADDRESS}/_template/ks_kafka_connect_cluster_metric -d '{
"order" : 10,
"index_patterns" : [
"ks_kafka_connect_cluster_metric*"
],
"settings" : {
"index" : {
"number_of_shards" : "2"
}
},
"mappings" : {
"properties" : {
"connectClusterId" : {
"type" : "long"
},
"routingValue" : {
"type" : "text",
"fields" : {
"keyword" : {
"ignore_above" : 256,
"type" : "keyword"
}
}
},
"clusterPhyId" : {
"type" : "long"
},
"metrics" : {
"properties" : {
"ConnectorCount" : {
"type" : "float"
},
"TaskCount" : {
"type" : "float"
},
"ConnectorStartupAttemptsTotal" : {
"type" : "float"
},
"ConnectorStartupFailurePercentage" : {
"type" : "float"
},
"ConnectorStartupFailureTotal" : {
"type" : "float"
},
"ConnectorStartupSuccessPercentage" : {
"type" : "float"
},
"ConnectorStartupSuccessTotal" : {
"type" : "float"
},
"TaskStartupAttemptsTotal" : {
"type" : "float"
},
"TaskStartupFailurePercentage" : {
"type" : "float"
},
"TaskStartupFailureTotal" : {
"type" : "float"
},
"TaskStartupSuccessPercentage" : {
"type" : "float"
},
"TaskStartupSuccessTotal" : {
"type" : "float"
}
}
},
"key" : {
"type" : "text",
"fields" : {
"keyword" : {
"ignore_above" : 256,
"type" : "keyword"
}
}
},
"timestamp" : {
"format" : "yyyy-MM-dd HH:mm:ss Z||yyyy-MM-dd HH:mm:ss||yyyy-MM-dd HH:mm:ss.SSS Z||yyyy-MM-dd HH:mm:ss.SSS||yyyy-MM-dd HH:mm:ss,SSS||yyyy/MM/dd HH:mm:ss||yyyy-MM-dd HH:mm:ss,SSS Z||yyyy/MM/dd HH:mm:ss,SSS Z||epoch_millis",
"index" : true,
"type" : "date",
"doc_values" : true
}
}
},
"aliases" : { }
}'
curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: application/json' http://${SERVER_ES_ADDRESS}/_template/ks_kafka_connect_connector_metric -d '{
"order" : 10,
"index_patterns" : [
"ks_kafka_connect_connector_metric*"
],
"settings" : {
"index" : {
"number_of_shards" : "2"
}
},
"mappings" : {
"properties" : {
"connectClusterId" : {
"type" : "long"
},
"routingValue" : {
"type" : "text",
"fields" : {
"keyword" : {
"ignore_above" : 256,
"type" : "keyword"
}
}
},
"connectorName" : {
"type" : "keyword"
},
"connectorNameAndClusterId" : {
"type" : "keyword"
},
"clusterPhyId" : {
"type" : "long"
},
"metrics" : {
"properties" : {
"HealthState" : {
"type" : "float"
},
"ConnectorTotalTaskCount" : {
"type" : "float"
},
"HealthCheckPassed" : {
"type" : "float"
},
"HealthCheckTotal" : {
"type" : "float"
},
"ConnectorRunningTaskCount" : {
"type" : "float"
},
"ConnectorPausedTaskCount" : {
"type" : "float"
},
"ConnectorFailedTaskCount" : {
"type" : "float"
},
"ConnectorUnassignedTaskCount" : {
"type" : "float"
},
"BatchSizeAvg" : {
"type" : "float"
},
"BatchSizeMax" : {
"type" : "float"
},
"OffsetCommitAvgTimeMs" : {
"type" : "float"
},
"OffsetCommitMaxTimeMs" : {
"type" : "float"
},
"OffsetCommitFailurePercentage" : {
"type" : "float"
},
"OffsetCommitSuccessPercentage" : {
"type" : "float"
},
"PollBatchAvgTimeMs" : {
"type" : "float"
},
"PollBatchMaxTimeMs" : {
"type" : "float"
},
"SourceRecordActiveCount" : {
"type" : "float"
},
"SourceRecordActiveCountAvg" : {
"type" : "float"
},
"SourceRecordActiveCountMax" : {
"type" : "float"
},
"SourceRecordPollRate" : {
"type" : "float"
},
"SourceRecordPollTotal" : {
"type" : "float"
},
"SourceRecordWriteRate" : {
"type" : "float"
},
"SourceRecordWriteTotal" : {
"type" : "float"
},
"OffsetCommitCompletionRate" : {
"type" : "float"
},
"OffsetCommitCompletionTotal" : {
"type" : "float"
},
"OffsetCommitSkipRate" : {
"type" : "float"
},
"OffsetCommitSkipTotal" : {
"type" : "float"
},
"PartitionCount" : {
"type" : "float"
},
"PutBatchAvgTimeMs" : {
"type" : "float"
},
"PutBatchMaxTimeMs" : {
"type" : "float"
},
"SinkRecordActiveCount" : {
"type" : "float"
},
"SinkRecordActiveCountAvg" : {
"type" : "float"
},
"SinkRecordActiveCountMax" : {
"type" : "float"
},
"SinkRecordLagMax" : {
"type" : "float"
},
"SinkRecordReadRate" : {
"type" : "float"
},
"SinkRecordReadTotal" : {
"type" : "float"
},
"SinkRecordSendRate" : {
"type" : "float"
},
"SinkRecordSendTotal" : {
"type" : "float"
},
"DeadletterqueueProduceFailures" : {
"type" : "float"
},
"DeadletterqueueProduceRequests" : {
"type" : "float"
},
"LastErrorTimestamp" : {
"type" : "float"
},
"TotalErrorsLogged" : {
"type" : "float"
},
"TotalRecordErrors" : {
"type" : "float"
},
"TotalRecordFailures" : {
"type" : "float"
},
"TotalRecordsSkipped" : {
"type" : "float"
},
"TotalRetries" : {
"type" : "float"
}
}
},
"key" : {
"type" : "text",
"fields" : {
"keyword" : {
"ignore_above" : 256,
"type" : "keyword"
}
}
},
"timestamp" : {
"format" : "yyyy-MM-dd HH:mm:ss Z||yyyy-MM-dd HH:mm:ss||yyyy-MM-dd HH:mm:ss.SSS Z||yyyy-MM-dd HH:mm:ss.SSS||yyyy-MM-dd HH:mm:ss,SSS||yyyy/MM/dd HH:mm:ss||yyyy-MM-dd HH:mm:ss,SSS Z||yyyy/MM/dd HH:mm:ss,SSS Z||epoch_millis",
"index" : true,
"type" : "date",
"doc_values" : true
}
}
},
"aliases" : { }
}'
curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: application/json' http://${SERVER_ES_ADDRESS}/_template/ks_kafka_connect_mirror_maker_metric -d '{
"order" : 10,
"index_patterns" : [
"ks_kafka_connect_mirror_maker_metric*"
],
"settings" : {
"index" : {
"number_of_shards" : "2"
}
},
"mappings" : {
"properties" : {
"connectClusterId" : {
"type" : "long"
},
"routingValue" : {
"type" : "text",
"fields" : {
"keyword" : {
"ignore_above" : 256,
"type" : "keyword"
}
}
},
"connectorName" : {
"type" : "keyword"
},
"connectorNameAndClusterId" : {
"type" : "keyword"
},
"clusterPhyId" : {
"type" : "long"
},
"metrics" : {
"properties" : {
"HealthState" : {
"type" : "float"
},
"HealthCheckTotal" : {
"type" : "float"
},
"ByteCount" : {
"type" : "float"
},
"ByteRate" : {
"type" : "float"
},
"RecordAgeMs" : {
"type" : "float"
},
"RecordAgeMsAvg" : {
"type" : "float"
},
"RecordAgeMsMax" : {
"type" : "float"
},
"RecordAgeMsMin" : {
"type" : "float"
},
"RecordCount" : {
"type" : "float"
},
"RecordRate" : {
"type" : "float"
},
"ReplicationLatencyMs" : {
"type" : "float"
},
"ReplicationLatencyMsAvg" : {
"type" : "float"
},
"ReplicationLatencyMsMax" : {
"type" : "float"
},
"ReplicationLatencyMsMin" : {
"type" : "float"
}
}
},
"key" : {
"type" : "text",
"fields" : {
"keyword" : {
"ignore_above" : 256,
"type" : "keyword"
}
}
},
"timestamp" : {
"format" : "yyyy-MM-dd HH:mm:ss Z||yyyy-MM-dd HH:mm:ss||yyyy-MM-dd HH:mm:ss.SSS Z||yyyy-MM-dd HH:mm:ss.SSS||yyyy-MM-dd HH:mm:ss,SSS||yyyy/MM/dd HH:mm:ss||yyyy-MM-dd HH:mm:ss,SSS Z||yyyy/MM/dd HH:mm:ss,SSS Z||epoch_millis",
"index" : true,
"type" : "date",
"doc_values" : true
}
}
},
"aliases" : { }
}'
for i in {0..6};
do
logdate=_$(date -d "${i} day ago" +%Y-%m-%d)
@@ -711,8 +1027,10 @@
curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_cluster_metric${logdate} && \
curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_group_metric${logdate} && \
curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_partition_metric${logdate} && \
-curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_replication_metric${logdate} && \
curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_zookeeper_metric${logdate} && \
+curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_connect_cluster_metric${logdate} && \
+curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_connect_connector_metric${logdate} && \
+curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_connect_mirror_maker_metric${logdate} && \
curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_topic_metric${logdate} || \
exit 2
done

View File

@@ -0,0 +1,111 @@
<mxfile host="65bd71144e">
<diagram id="vxzhwhZdNVAY19FZ4dgb" name="Page-1">
<mxGraphModel dx="1194" dy="733" grid="0" gridSize="10" guides="1" tooltips="1" connect="1" arrows="1" fold="1" page="1" pageScale="1" pageWidth="1169" pageHeight="827" math="0" shadow="0">
<root>
<mxCell id="0"/>
<mxCell id="1" parent="0"/>
<mxCell id="4" style="edgeStyle=none;html=1;exitX=0.5;exitY=1;exitDx=0;exitDy=0;startArrow=none;strokeWidth=2;strokeColor=#6666FF;" edge="1" parent="1" source="16">
<mxGeometry relative="1" as="geometry">
<mxPoint x="200" y="540" as="targetPoint"/>
</mxGeometry>
</mxCell>
<mxCell id="7" style="edgeStyle=none;html=1;exitX=1;exitY=0.5;exitDx=0;exitDy=0;exitPerimeter=0;strokeColor=#33FF33;strokeWidth=2;" edge="1" parent="1" source="2">
<mxGeometry relative="1" as="geometry">
<mxPoint x="360" y="240" as="targetPoint"/>
</mxGeometry>
</mxCell>
<mxCell id="5" style="edgeStyle=none;html=1;startArrow=none;strokeColor=#33FF33;strokeWidth=2;" edge="1" parent="1">
<mxGeometry relative="1" as="geometry">
<mxPoint x="200" y="400" as="targetPoint"/>
<mxPoint x="360" y="360" as="sourcePoint"/>
</mxGeometry>
</mxCell>
<mxCell id="3" value="C3" style="verticalLabelPosition=middle;verticalAlign=middle;html=1;shape=mxgraph.flowchart.on-page_reference;labelPosition=center;align=center;strokeColor=#FF8000;strokeWidth=2;" vertex="1" parent="1">
<mxGeometry x="340" y="280" width="40" height="40" as="geometry"/>
</mxCell>
<mxCell id="18" style="edgeStyle=none;html=1;entryX=0.5;entryY=0;entryDx=0;entryDy=0;entryPerimeter=0;endArrow=none;endFill=0;strokeColor=#FF8000;strokeWidth=2;" edge="1" parent="1" source="8" target="3">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
<mxCell id="8" value="fix_928" style="rounded=1;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=0;" vertex="1" parent="1">
<mxGeometry x="320" y="40" width="80" height="40" as="geometry"/>
</mxCell>
<mxCell id="9" value="github_master" style="rounded=1;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=0;" vertex="1" parent="1">
<mxGeometry x="160" y="40" width="80" height="40" as="geometry"/>
</mxCell>
<mxCell id="10" value="" style="edgeStyle=none;html=1;exitX=0.5;exitY=1;exitDx=0;exitDy=0;endArrow=classic;startArrow=none;endFill=1;strokeWidth=2;strokeColor=#6666FF;" edge="1" parent="1" source="11" target="2">
<mxGeometry relative="1" as="geometry">
<mxPoint x="200" y="640" as="targetPoint"/>
<mxPoint x="200" y="80" as="sourcePoint"/>
</mxGeometry>
</mxCell>
<mxCell id="2" value="C2" style="verticalLabelPosition=middle;verticalAlign=middle;html=1;shape=mxgraph.flowchart.on-page_reference;labelPosition=center;align=center;strokeColor=#6666FF;strokeWidth=2;" vertex="1" parent="1">
<mxGeometry x="180" y="200" width="40" height="40" as="geometry"/>
</mxCell>
<mxCell id="12" value="" style="edgeStyle=none;html=1;exitX=0.5;exitY=1;exitDx=0;exitDy=0;endArrow=classic;endFill=1;strokeWidth=2;strokeColor=#6666FF;" edge="1" parent="1" source="9" target="11">
<mxGeometry relative="1" as="geometry">
<mxPoint x="200" y="200" as="targetPoint"/>
<mxPoint x="200" y="80" as="sourcePoint"/>
</mxGeometry>
</mxCell>
<mxCell id="11" value="C1" style="verticalLabelPosition=middle;verticalAlign=middle;html=1;shape=mxgraph.flowchart.on-page_reference;labelPosition=center;align=center;strokeColor=#6666FF;strokeWidth=2;" vertex="1" parent="1">
<mxGeometry x="180" y="120" width="40" height="40" as="geometry"/>
</mxCell>
<mxCell id="23" style="edgeStyle=none;html=1;exitX=0.5;exitY=1;exitDx=0;exitDy=0;exitPerimeter=0;endArrow=none;endFill=0;strokeColor=#FF8000;strokeWidth=2;" edge="1" parent="1" source="3">
<mxGeometry relative="1" as="geometry">
<mxPoint x="360" y="360" as="targetPoint"/>
<mxPoint x="360" y="400" as="sourcePoint"/>
</mxGeometry>
</mxCell>
<mxCell id="17" value="" style="edgeStyle=none;html=1;exitX=0.5;exitY=1;exitDx=0;exitDy=0;startArrow=none;endArrow=none;strokeWidth=2;strokeColor=#6666FF;" edge="1" parent="1" source="2" target="16">
<mxGeometry relative="1" as="geometry">
<mxPoint x="200" y="640" as="targetPoint"/>
<mxPoint x="200" y="240" as="sourcePoint"/>
</mxGeometry>
</mxCell>
<mxCell id="16" value="C4" style="verticalLabelPosition=middle;verticalAlign=middle;html=1;shape=mxgraph.flowchart.on-page_reference;labelPosition=center;align=center;strokeColor=#6666FF;strokeWidth=2;" vertex="1" parent="1">
<mxGeometry x="180" y="440" width="40" height="40" as="geometry"/>
</mxCell>
<mxCell id="22" value="Tag-v3.2.0" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=0;fillColor=none;strokeColor=none;" vertex="1" parent="1">
<mxGeometry x="100" y="120" width="80" height="40" as="geometry"/>
</mxCell>
<mxCell id="24" value="Tag-v3.2.1" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=0;fillColor=none;strokeColor=none;" vertex="1" parent="1">
<mxGeometry x="100" y="440" width="80" height="40" as="geometry"/>
</mxCell>
<mxCell id="27" value="切换到主分支git checkout github_master" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=0;labelPosition=center;verticalLabelPosition=middle;align=center;verticalAlign=middle;" vertex="1" parent="1">
<mxGeometry x="520" y="90" width="240" height="30" as="geometry"/>
</mxCell>
<mxCell id="34" style="edgeStyle=none;html=1;exitX=0;exitY=0;exitDx=0;exitDy=0;entryX=0.855;entryY=0.145;entryDx=0;entryDy=0;entryPerimeter=0;dashed=1;dashPattern=8 8;fontSize=18;endArrow=none;endFill=0;" edge="1" parent="1" source="28" target="2">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
<mxCell id="28" value="主分支拉最新代码git pull" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=0;labelPosition=center;verticalLabelPosition=middle;align=center;verticalAlign=middle;" vertex="1" parent="1">
<mxGeometry x="520" y="120" width="160" height="30" as="geometry"/>
</mxCell>
<mxCell id="35" style="edgeStyle=none;html=1;exitX=0;exitY=0.5;exitDx=0;exitDy=0;dashed=1;dashPattern=8 8;fontSize=18;endArrow=none;endFill=0;" edge="1" parent="1" source="29">
<mxGeometry relative="1" as="geometry">
<mxPoint x="270" y="225" as="targetPoint"/>
</mxGeometry>
</mxCell>
<mxCell id="29" value="基于主分支拉新分支git checkout -b fix_928" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=0;labelPosition=center;verticalLabelPosition=middle;align=center;verticalAlign=middle;" vertex="1" parent="1">
<mxGeometry x="520" y="210" width="250" height="30" as="geometry"/>
</mxCell>
<mxCell id="37" style="edgeStyle=none;html=1;exitX=0;exitY=1;exitDx=0;exitDy=0;entryX=1;entryY=0.5;entryDx=0;entryDy=0;entryPerimeter=0;dashed=1;dashPattern=8 8;fontSize=18;endArrow=none;endFill=0;" edge="1" parent="1" source="30" target="3">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
<mxCell id="30" value="提交代码git commit -m &quot;[Optimize]优化xxx问题(#928)&quot;" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=0;labelPosition=center;verticalLabelPosition=middle;align=center;verticalAlign=middle;" vertex="1" parent="1">
<mxGeometry x="520" y="270" width="320" height="30" as="geometry"/>
</mxCell>
<mxCell id="31" value="提交到自己远端仓库git push --set-upstream origin fix_928" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=0;labelPosition=center;verticalLabelPosition=middle;align=center;verticalAlign=middle;" vertex="1" parent="1">
<mxGeometry x="520" y="300" width="334" height="30" as="geometry"/>
</mxCell>
<mxCell id="38" style="edgeStyle=none;html=1;exitX=0;exitY=0.5;exitDx=0;exitDy=0;dashed=1;dashPattern=8 8;fontSize=18;endArrow=none;endFill=0;" edge="1" parent="1" source="32">
<mxGeometry relative="1" as="geometry">
<mxPoint x="280" y="380" as="targetPoint"/>
</mxGeometry>
</mxCell>
<mxCell id="32" value="GitHub页面发起Pull Request请求管理员合入主仓库" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=0;labelPosition=center;verticalLabelPosition=middle;align=center;verticalAlign=middle;" vertex="1" parent="1">
<mxGeometry x="520" y="360" width="300" height="30" as="geometry"/>
</mxCell>
</root>
</mxGraphModel>
</diagram>
</mxfile>

Binary file not shown.

After

Width:  |  Height:  |  Size: 64 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 180 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 80 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 631 KiB

View File

@@ -0,0 +1,100 @@
# Contributor List
- [Contributor List](#contributor-list)
- [1. Contributor Roles](#1-contributor-roles)
- [1.1 Maintainer](#11-maintainer)
- [1.2 Committer](#12-committer)
- [1.3 Contributor](#13-contributor)
- [2. Contributor List](#2-contributor-list)
## 1. Contributor Roles
KnowStreaming developers fall into three roles: Maintainer, Committer, and Contributor. Each role is defined by the criteria below.
### 1.1 Maintainer
A Maintainer is an individual who has made significant contributions to the evolution and development of the KnowStreaming project, specifically:
- Has designed and developed several key modules or projects, and is a core developer of the project;
- Shows sustained investment and passion, and actively helps maintain the community, website, issues, PRs, and other project matters;
- Has recognized influence in the community and can represent KnowStreaming at important community meetings and events;
- Has the awareness and ability to mentor Committers and Contributors;
### 1.2 Committer
A Committer is an individual with write access to the KnowStreaming repository, specifically:
- Contributes issues and PRs continuously over a long period;
- Participates in maintaining the issue list and in discussions of important features;
- Participates in code review;
### 1.3 Contributor
A Contributor is an individual who has contributed to the KnowStreaming project; the criterion is:
- Has submitted a PR that was merged;
---
## 2. Contributor List
Open-source contributor list (updated from time to time)
If you are on the list but have not received a contributor gift, please contact szzdzhp001.
| Name | Github | Role | Company |
| ------------------- | ---------------------------------------------------------- | ----------- | -------- |
| 张亮 | [@zhangliangboy](https://github.com/zhangliangboy) | Maintainer | 滴滴出行 |
| 谢鹏 | [@PenceXie](https://github.com/PenceXie) | Maintainer | 滴滴出行 |
| 赵情融 | [@zqrferrari](https://github.com/zqrferrari) | Maintainer | 滴滴出行 |
| 石臻臻 | [@shirenchuang](https://github.com/shirenchuang) | Maintainer | 滴滴出行 |
| 曾巧 | [@ZQKC](https://github.com/ZQKC) | Maintainer | 滴滴出行 |
| 孙超 | [@lucasun](https://github.com/lucasun) | Maintainer | 滴滴出行 |
| 洪华驰 | [@brodiehong](https://github.com/brodiehong) | Maintainer | 滴滴出行 |
| 许喆 | [@potaaaaaato](https://github.com/potaaaaaato) | Committer | 滴滴出行 |
| 郭宇航 | [@GraceWalk](https://github.com/GraceWalk) | Committer | 滴滴出行 |
| 李伟 | [@velee](https://github.com/velee) | Committer | 滴滴出行 |
| 张占昌 | [@zzccctv](https://github.com/zzccctv) | Committer | 滴滴出行 |
| 王东方 | [@wangdongfang-aden](https://github.com/wangdongfang-aden) | Committer | 滴滴出行 |
| 王耀波 | [@WYAOBO](https://github.com/WYAOBO) | Committer | 滴滴出行 |
| 赵寅锐 | [@ZHAOYINRUI](https://github.com/ZHAOYINRUI) | Maintainer | 字节跳动 |
| haoqi123 | [@haoqi123](https://github.com/haoqi123) | Contributor | 前程无忧 |
| chaixiaoxue | [@chaixiaoxue](https://github.com/chaixiaoxue) | Contributor | SYNNEX |
| 陆晗 | [@luhea](https://github.com/luhea) | Contributor | 竞技世界 |
| Mengqi777 | [@Mengqi777](https://github.com/Mengqi777) | Contributor | 腾讯 |
| ruanliang-hualun | [@ruanliang-hualun](https://github.com/ruanliang-hualun) | Contributor | 网易 |
| 17hao | [@17hao](https://github.com/17hao) | Contributor | |
| Huyueeer | [@Huyueeer](https://github.com/Huyueeer) | Contributor | INVENTEC |
| lomodays207 | [@lomodays207](https://github.com/lomodays207) | Contributor | 建信金科 |
| Super .Wein星痕 | [@superspeedone](https://github.com/superspeedone) | Contributor | 韵达 |
| Hongten | [@Hongten](https://github.com/Hongten) | Contributor | Shopee |
| 徐正熙 | [@hyper-xx](https://github.com/hyper-xx) | Contributor | 滴滴出行 |
| RichardZhengkay | [@RichardZhengkay](https://github.com/RichardZhengkay) | Contributor | 趣街 |
| 罐子里的茶 | [@gzldc](https://github.com/gzldc) | Contributor | 道富 |
| 陈忠玉 | [@paula](https://github.com/chenzhongyu11) | Contributor | 平安产险 |
| 杨光 | [@yangvipguang](https://github.com/yangvipguang) | Contributor | |
| 王亚聪 | [@wangyacongi](https://github.com/wangyacongi) | Contributor | |
| Yang Jing | [@yangbajing](https://github.com/yangbajing) | Contributor | |
| 刘新元 Liu XinYuan | [@Liu-XinYuan](https://github.com/Liu-XinYuan) | Contributor | |
| Joker | [@JokerQueue](https://github.com/JokerQueue) | Contributor | 丰巢 |
| Eason Lau | [@Liubey](https://github.com/Liubey) | Contributor | |
| hailanxin | [@hailanxin](https://github.com/hailanxin) | Contributor | |
| Qi Zhang | [@zzzhangqi](https://github.com/zzzhangqi) | Contributor | 好雨科技 |
| fengxsong | [@fengxsong](https://github.com/fengxsong) | Contributor | |
| 谢晓东 | [@Strangevy](https://github.com/Strangevy) | Contributor | 花生日记 |
| ZhaoXinlong | [@ZhaoXinlong](https://github.com/ZhaoXinlong) | Contributor | |
| xuehaipeng | [@xuehaipeng](https://github.com/xuehaipeng) | Contributor | |
| 孔令续 | [@mrazkong](https://github.com/mrazkong) | Contributor | |
| pierre xiong | [@pierre94](https://github.com/pierre94) | Contributor | |
| PengShuaixin | [@PengShuaixin](https://github.com/PengShuaixin) | Contributor | |
| 梁壮 | [@lz](https://github.com/silent-night-no-trace) | Contributor | |
| 张晓寅 | [@ahu0605](https://github.com/ahu0605) | Contributor | 电信数智 |
| 黄海婷 | [@Huanghaiting](https://github.com/Huanghaiting) | Contributor | 云徙科技 |
| 任祥德 | [@RenChauncy](https://github.com/RenChauncy) | Contributor | 探马企服 |
| 胡圣林 | [@slhu997](https://github.com/slhu997) | Contributor | |
| 史泽颖 | [@shizeying](https://github.com/shizeying) | Contributor | |
| 王玉博 | [@Wyb7290](https://github.com/Wyb7290) | Committer | |
| 伍璇 | [@Luckywustone](https://github.com/Luckywustone) | Contributor | |
| 邓苑 | [@CatherineDY](https://github.com/CatherineDY) | Contributor | |
| 封琼凤 | [@fengqiongfeng](https://github.com/fengqiongfeng) | Committer | |

View File

@@ -0,0 +1,167 @@
# Contribution Guide
- [Contribution Guide](#contribution-guide)
- [1. Code of Conduct](#1-code-of-conduct)
- [2. Repository Conventions](#2-repository-conventions)
- [2.1 Issue Conventions](#21-issue-conventions)
- [2.2 Commit-Log Conventions](#22-commit-log-conventions)
- [2.3 Pull-Request Conventions](#23-pull-request-conventions)
- [3. Walkthrough](#3-walkthrough)
- [3.1 Environment Setup](#31-environment-setup)
- [3.2 Claiming an Issue](#32-claiming-an-issue)
- [3.3 Fixing the Issue \& Committing](#33-fixing-the-issue--committing)
- [3.4 Requesting a Merge](#34-requesting-a-merge)
- [4. FAQ](#4-faq)
- [4.1 How to squash multiple Commit-Logs into one?](#41-how-to-squash-multiple-commit-logs-into-one)
---
Welcome 👏🏻 👏🏻 👏🏻 to `KnowStreaming`. This document is a guide to contributing to `KnowStreaming`. If you find anything incorrect or missing, please leave your comments/suggestions.
---
## 1. Code of Conduct
Please be sure to read and follow our [Code of Conduct](https://github.com/didi/KnowStreaming/blob/master/CODE_OF_CONDUCT.md).
## 2. Repository Conventions
### 2.1 Issue Conventions
Create an issue as prompted at [Create Issue](https://github.com/didi/KnowStreaming/issues/new/choose).
Key points:
- Provide the environment in which the problem occurs, including the OS and the KS version in use;
- Provide steps to reproduce the problem;
### 2.2 Commit-Log Conventions
A `Commit-Log` has three parts: `Header`, `Body`, and `Footer`. The `Header` is required and has a fixed format; the `Body` is used when the change warrants a detailed explanation.
**1. `Header` format**
The `Header` format is `[Type]Message(#IssueID)`, composed of three parts: `Type`, `Message`, and `IssueID`:
- `Type`: the kind of commit, e.g. Bugfix, Feature, Optimize;
- `Message`: what the commit does, e.g. "fix xx problem";
- `IssueID`: the number of the Issue this commit relates to;
A real example: [`[Bugfix]修复新接入的集群Controller-Host不显示的问题(#927)`](https://github.com/didi/KnowStreaming/pull/933/commits)
**2. `Body` format**
Usually not needed; but for a relatively complex problem or a large change, use the `Body` to explain the problem solved, the approach taken, and so on.
---
**3. A real example**
```
[Optimize]优化 MySQL & ES 测试容器的初始化(#906)
主要的变更
1、knowstreaming/knowstreaming-manager 容器;
2、knowstreaming/knowstreaming-mysql 容器调整为使用 mysql:5.7 容器;
3、初始化 mysql:5.7 容器后,增加初始化 MySQL 表及数据的动作;
被影响的变更:
1、移动 km-dist/init/sql 下的MySQL初始化脚本至 km-persistence/src/main/resource/sql 下,以便项目测试时加载到所需的初始化 SQL
2、删除无用的 km-dist/init/template 目录;
3、因为 km-dist/init/sql 和 km-dist/init/template 目录的调整,因此也调整 ReleaseKnowStreaming.xml 内的文件内容;
```
**TODO: anyone interested could later introduce a Git hook for better Commit-Log management.**
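As a concrete starting point for that TODO, here is a minimal sketch of a `commit-msg` hook. The hook path is standard Git, but the accepted type list and the regex are illustrative assumptions based on the convention above, not an existing project script:
```bash
#!/usr/bin/env bash
# Illustrative sketch: save as .git/hooks/commit-msg and make it executable.
# Checks only the first line (the Header) against [Type]Message(#IssueID);
# the type list below is an assumption and may need extending.
header=$(head -n 1 "$1")
if ! grep -Eq '^\[(Bugfix|Feature|Optimize|Doc|Hotfix)\].+\(#[0-9]+\)$' <<< "$header"; then
    echo "Commit-Log Header should look like: [Bugfix]Fix xxx problem(#123)" >&2
    exit 1
fi
```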
### 2.3 Pull-Request Conventions
See the [PULL-REQUEST template](../../.github/PULL_REQUEST_TEMPLATE.md) for details.
Key points:
- <font color=red>Every PR must be associated with a valid ISSUE; otherwise the PR will be rejected;</font>
- <font color=red>One branch changes one thing, and one PR changes one thing;</font>
---
## 3. Walkthrough
This section covers the operations and commands involved in contributing code to `KnowStreaming`.
Terminology:
- Main repo: the repository at https://github.com/didi/KnowStreaming ;
- Fork repo: the KnowStreaming repository forked into your own account;
### 3.1 Environment Setup
1. Fork the `KnowStreaming` main repo to your own account via the `Fork` button at the top right of https://github.com/didi/KnowStreaming ;
2. Clone the fork locally: `git clone git@github.com:xxxxxxx/KnowStreaming.git`; this remote's short name is usually `origin`;
3. Add the main repo: `git remote add upstream https://github.com/didi/KnowStreaming`; `upstream` is the local short name for the main repo; you can name it anything, just use it consistently;
4. Fetch the main repo: `git fetch upstream`;
5. Fetch the fork: `git fetch origin`;
6. Check out the main repo's `master` branch locally as `github_master`: `git checkout -b github_master upstream/master`;
Finally, here is roughly what things look like once initialization is done:
![环境初始化](./assets/环境初始化.jpg)
At this point the environment is ready. From now on, the `github_master` branch tracks the main repo's `master` branch: use `git pull` to fetch its latest code, and `git checkout -b xxx` to create whatever branch you need. The full command sequence is sketched below.
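Putting the steps above together (replace `xxxxxxx` with your own GitHub account; `upstream` is just the local alias chosen in step 3):
```bash
git clone git@github.com:xxxxxxx/KnowStreaming.git   # your fork, remote name: origin
cd KnowStreaming
git remote add upstream https://github.com/didi/KnowStreaming
git fetch upstream
git fetch origin
git checkout -b github_master upstream/master        # local branch tracking the main repo's master
```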
### 3.2 Claiming an Issue
Comment on the issue saying you want to take it, as shown below:
![问题认领](./assets/问题认领.jpg)
### 3.3 Fixing the Issue & Committing
This section covers branch management while fixing an issue and committing the fix, as shown below:
![分支管理](./assets/分支管理.png)
1. Switch to the main branch: `git checkout github_master`;
2. Pull the latest code on the main branch: `git pull`;
3. Create a new branch off the main branch: `git checkout -b fix_928`;
4. Commit following the commit conventions, e.g. `git commit -m "[Optimize]优化xxx问题(#928)"`;
5. Push to your own remote: `git push --set-upstream origin fix_928`;
6. Open a `Pull Request` on the `GitHub` page for a maintainer to merge into the main repo; see the next section for details;
### 3.4 Requesting a Merge
After the code is pushed to your `GitHub` fork, create a `Pull Request` on the `GitHub` site to request merging it into the main repo, as shown below:
![申请合并](./assets/申请合并.jpg)
[Example of a Pull Request](https://github.com/didi/KnowStreaming/pull/945)
---
## 4. FAQ
### 4.1 How to squash multiple Commit-Logs into one?
Use the `git rebase -i` command. For example:
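A minimal sketch, assuming you want to squash the last three commits on your `fix_928` branch (the branch name and commit count are placeholders):
```bash
git rebase -i HEAD~3
# In the editor, keep "pick" on the first commit, change the other lines
# to "squash" (or "s"), save, then edit the combined Commit-Log.
# If the branch was already pushed, update the remote carefully:
git push --force-with-lease origin fix_928
```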

View File

@@ -1,6 +0,0 @@
List of open-source contributor certificate recipients (updated periodically)
For the contributor list, see: [Contributor List](https://doc.knowstreaming.com/product/10-contribution#106-贡献者名单)

View File

@@ -1,6 +0,0 @@
<br>
<br>
Please see: [Contribution Process](https://doc.knowstreaming.com/product/10-contribution#102-贡献流程)

View File

@@ -0,0 +1,285 @@
## 1. Cluster Connection Errors
### 1.1 Symptom
As shown below; when the cluster is not empty, this is most likely caused by a misconfigured address.
<img src=http://img-ys011.didistatic.com/static/dc2img/do1_BRiXBvqYFK2dxSF1aqgZ width="80%">
### 1.2 Solution
When connecting a cluster, resolve the issue according to the reported error. For example:
<img src=http://img-ys011.didistatic.com/static/dc2img/do1_Yn4LhV8aeSEKX1zrrkUi width="50%">
### 1.3 Expected Behavior
When connecting a cluster, the page info all appears automatically and no error is reported.
## 2. JMX Connection Failure (requires version 3.0.1 or above)
### 2.1 Symptom
A red exclamation mark in the JMX Port column of the Broker list means that Broker's JMX connection is broken.
<img src=http://img-ys011.didistatic.com/static/dc2img/do1_MLlLCfAktne4X6MBtBUd width="90%">
#### 2.1.1 Cause 1: JMX not enabled
##### 2.1.1.1 Symptom
A JMX Port value of -1 in the broker list means JMX is not enabled on that Broker.
<img src=http://img-ys011.didistatic.com/static/dc2img/do1_E1PD8tPsMeR2zYLFBFAu width="90%">
##### 2.1.1.2 Solution
Enable JMX as follows:
1. Edit the `kafka-server-start.sh` file in Kafka's bin directory:
```
# Add the JMX port configuration below this line
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
    export JMX_PORT=9999 # Add this setting; the value does not have to be 9999
fi
```
2. Edit the `kafka-run-class.sh` file in Kafka's bin directory:
```
# JMX settings
if [ -z "$KAFKA_JMX_OPTS" ]; then
    KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false
    -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=${IP of this machine}"
fi
# JMX port to use
if [ $JMX_PORT ]; then
    KAFKA_JMX_OPTS="$KAFKA_JMX_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT -Dcom.sun.management.jmxremote.rmi.port=$JMX_PORT"
fi
```
3. Restart the Kafka Broker.
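To verify the change took effect, you can check on the Broker machine that the port is listening; the port number 9999 below is the example value from the snippet above:
```bash
ss -tlnp | grep 9999        # or: netstat -tlnp | grep 9999
# Optionally, connect with jconsole from another machine:
# jconsole <broker-ip>:9999
```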
#### 2.1.2 Cause 2: JMX misconfigured
##### 2.1.2.1 Symptom
Error logs:
```
# Error 1: the reported host is a real IP; in this case the JMX configuration itself is most likely wrong.
2021-01-27 10:06:20.730 ERROR 50901 --- [ics-Thread-1-62] c.x.k.m.c.utils.jmx.JmxConnectorWrap : JMX connect exception, host:192.168.0.1 port:9999. java.rmi.ConnectException: Connection refused to host: 192.168.0.1; nested exception is:
# Error 2: the reported host is 127.0.0.1; in this case the machine's hostname configuration is most likely wrong.
2021-01-27 10:06:20.730 ERROR 50901 --- [ics-Thread-1-62] c.x.k.m.c.utils.jmx.JmxConnectorWrap : JMX connect exception, host:127.0.0.1 port:9999. java.rmi.ConnectException: Connection refused to host: 127.0.0.1;; nested exception is:
```
##### 2.1.2.2 Solution
Enable and configure JMX as follows:
1. Edit the `kafka-server-start.sh` file in Kafka's bin directory:
```
# Add the JMX port configuration below this line
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
    export JMX_PORT=9999 # Add this setting; the value does not have to be 9999
fi
```
2. Edit the `kafka-run-class.sh` file in Kafka's bin directory:
```
# JMX settings
if [ -z "$KAFKA_JMX_OPTS" ]; then
    KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false
    -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=${IP of this machine}"
fi
# JMX port to use
if [ $JMX_PORT ]; then
    KAFKA_JMX_OPTS="$KAFKA_JMX_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT -Dcom.sun.management.jmxremote.rmi.port=$JMX_PORT"
fi
```
3. Restart the Kafka Broker.
#### 2.1.3 Cause 3: JMX with SSL enabled
##### 2.1.3.1 Solution
<img src=http://img-ys011.didistatic.com/static/dc2img/do1_kNyCi8H9wtHSRkWurB6S width="50%">
#### 2.1.4 Cause 4: connected to the wrong IP
##### 2.1.4.1 Symptom
The Broker is configured with both internal and external networks, and JMX may be bound to either the internal or the external IP; `KnowStreaming` must connect to the IP on the matching network to gain access.
For example, given the Broker's record in ZK below, we expect to connect to the address marked `INTERNAL` in `endpoints`, but `KnowStreaming` connected to the `EXTERNAL` address instead.
```json
{
"listener_security_protocol_map": {
"EXTERNAL": "SASL_PLAINTEXT",
"INTERNAL": "SASL_PLAINTEXT"
},
"endpoints": [
"EXTERNAL://192.168.0.1:7092",
"INTERNAL://192.168.0.2:7093"
],
"jmx_port": 8099,
"host": "192.168.0.1",
"timestamp": "1627289710439",
"port": -1,
"version": 4
}
```
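To inspect this record yourself, you can read the broker znode with the ZooKeeper CLI; broker id `0` and the address below are placeholders, and if your cluster uses a ZK chroot, prefix the path accordingly:
```bash
bin/zkCli.sh -server <zk-host>:2181
# then, inside the ZK shell:
get /brokers/ids/0
```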
##### 2.1.4.2 Solution
You can manually add a `useWhichEndpoint` field to the `jmx_properties` column of the `ks_km_physical_cluster` table to make `KnowStreaming` connect to the JMX IP and PORT of a specific network.
`jmx_properties` format:
```json
{
"maxConn": 100, // KM's maximum number of JMX connections per Broker
"username": "xxxxx", // username, optional
"password": "xxxx", // password, optional
"openSSL": true, // whether to enable SSL: true enables it, false disables it
"useWhichEndpoint": "EXTERNAL" // name of the network to connect to; EXTERNAL means the EXTERNAL address in endpoints
}
```
SQL example:
```sql
UPDATE ks_km_physical_cluster SET jmx_properties='{ "maxConn": 10, "username": "xxxxx", "password": "xxxx", "openSSL": false , "useWhichEndpoint": "xxx"}' where id={xxx};
```
### 2.2 Expected Behavior
After the changes, if the whole JMX PORT column shows green, JMX is healthy.
<img src=http://img-ys011.didistatic.com/static/dc2img/do1_ymtDTCiDlzfrmSCez2lx width="90%">
## 3. Elasticsearch Issues
Note: on macOS, running curl commands may produce a zsh error. The following steps may help.
```
1. Open ~/.zshrc: vim ~/.zshrc
2. Add: setopt no_nomatch
3. Reload the config: source ~/.zshrc
```
### 3.1 Cause 1: missing indexes
#### 3.1.1 Symptom
Error message:
```
com.didiglobal.logi.elasticsearch.client.model.exception.ESIndexNotFoundException: method [GET], host[http://127.0.0.1:9200], URI [/ks_kafka_broker_metric_2022-10-21,ks_kafka_broker_metric_2022-10-22/_search], status line [HTTP/1.1 404 Not Found]
```
Run curl http://{ES IP address}:{ES port}/_cat/indices/ks_kafka* to list the KS indexes and find that none exist.
#### 3.1.2 Solution
Run the [ES index & template init](https://github.com/didi/KnowStreaming/blob/master/bin/init_es_template.sh) script to create the indexes and templates.
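A minimal sketch of running it; the script body references `${esaddr}` and `${port}`, so check your copy of init_es_template.sh for how it expects them to be supplied (the environment-variable style below is an assumption):
```bash
# Assumption: the script picks up esaddr/port from the environment.
export esaddr=127.0.0.1
export port=9200
bash init_es_template.sh
# Verify the templates were created:
curl -s "http://${esaddr}:${port}/_cat/templates/ks_kafka*?v"
```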
### 3.2 Cause 2: wrong index templates
#### 3.2.1 Symptom
The multi-cluster list has data, but the charts on the cluster detail page have none. Query the KS index template list and find it does not exist:
```
curl {ES IP address}:{ES port}/_cat/templates/ks_kafka*?v&h=name
```
Normal KS templates look like the figure below.
<img src=http://img-ys011.didistatic.com/static/dc2img/do1_l79bPYSci9wr6KFwZDA6 width="90%">
#### 3.2.2 Solution
Delete the KS index templates and indexes:
```
curl -XDELETE {ES IP address}:{ES port}/ks_kafka*
curl -XDELETE {ES IP address}:{ES port}/_template/ks_kafka*
```
Run the [ES index & template init](https://github.com/didi/KnowStreaming/blob/master/bin/init_es_template.sh) script to create the indexes and templates.
### 3.3 Cause 3: cluster shards exhausted
#### 3.3.1 Symptom
Error message:
```
com.didiglobal.logi.elasticsearch.client.model.exception.ESIndexNotFoundException: method [GET], host[http://127.0.0.1:9200], URI [/ks_kafka_broker_metric_2022-10-21,ks_kafka_broker_metric_2022-10-22/_search], status line [HTTP/1.1 404 Not Found]
```
Manually creating an index also fails:
```
# Command to create the ks_kafka_cluster_metric_test index
curl -s -XPUT http://{ES IP address}:{ES port}/ks_kafka_cluster_metric_test
```
#### 3.3.2 Solution
By default ES allows 1000 shards; once the limit is reached, index creation fails.
+ Raise the ES shard limit by running:
```
curl -XPUT -H"content-type:application/json" http://{ES IP address}:{ES port}/_cluster/settings -d '
{
"persistent": {
"cluster": {
"max_shards_per_node": {new limit; default is 1000}
}
}
}'
```
Run the [ES index & template init](https://github.com/didi/KnowStreaming/blob/master/bin/init_es_template.sh) script to create the missing indexes.

View File

@@ -4,11 +4,192 @@
- To upgrade to a specific version, apply all the changes from your current version up to the target version; only then will it work properly.
- If an intermediate version has no upgrade notes, you can upgrade to it from the previous version by simply replacing the package.
-### 6.2.0 Upgrading to `master`
+### Upgrading to `master`
None yet
-### 6.2.1 Upgrading to `v3.0.1`
+### Upgrading to `3.3.0`
**SQL changes**
```sql
ALTER TABLE `logi_security_user`
CHANGE COLUMN `phone` `phone` VARCHAR(20) NOT NULL DEFAULT '' COMMENT 'mobile' ;
ALTER TABLE ks_kc_connector ADD `heartbeat_connector_name` varchar(512) DEFAULT '' COMMENT '心跳检测connector名称';
ALTER TABLE ks_kc_connector ADD `checkpoint_connector_name` varchar(512) DEFAULT '' COMMENT '进度确认connector名称';
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_MIRROR_MAKER_TOTAL_RECORD_ERRORS', '{\"value\" : 1}', 'MirrorMaker消息处理错误的次数', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_MIRROR_MAKER_REPLICATION_LATENCY_MS_MAX', '{\"value\" : 6000}', 'MirrorMaker消息复制最大延迟时间', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_MIRROR_MAKER_UNASSIGNED_TASK_COUNT', '{\"value\" : 20}', 'MirrorMaker未被分配的任务数量', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_MIRROR_MAKER_FAILED_TASK_COUNT', '{\"value\" : 10}', 'MirrorMaker失败状态的任务数量', 'admin');
-- Multi-cluster management permissions, added 2023-01-05
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2012', 'Topic-新增Topic复制', '1593', '1', '2', 'Topic-新增Topic复制', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2014', 'Topic-详情-取消Topic复制', '1593', '1', '2', 'Topic-详情-取消Topic复制', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2012', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2014', '0', 'know-streaming');
-- Multi-cluster management permissions, added 2023-01-18
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2016', 'MM2-新增', '1593', '1', '2', 'MM2-新增', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2018', 'MM2-编辑', '1593', '1', '2', 'MM2-编辑', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2020', 'MM2-删除', '1593', '1', '2', 'MM2-删除', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2022', 'MM2-重启', '1593', '1', '2', 'MM2-重启', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2024', 'MM2-暂停&恢复', '1593', '1', '2', 'MM2-暂停&恢复', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2016', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2018', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2020', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2022', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2024', '0', 'know-streaming');
DROP TABLE IF EXISTS `ks_ha_active_standby_relation`;
CREATE TABLE `ks_ha_active_standby_relation` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
`active_cluster_phy_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT '主集群ID',
`standby_cluster_phy_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT '备集群ID',
`res_name` varchar(192) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT '资源名称',
`res_type` int(11) NOT NULL DEFAULT '-1' COMMENT '资源类型0集群1镜像Topic2主备Topic',
`create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
`modify_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
`update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
PRIMARY KEY (`id`),
UNIQUE KEY `uniq_cluster_res` (`res_type`,`active_cluster_phy_id`,`standby_cluster_phy_id`,`res_name`),
UNIQUE KEY `uniq_res_type_standby_cluster_res_name` (`res_type`,`standby_cluster_phy_id`,`res_name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='HA主备关系表';
-- Drop the idx_cluster_phy_id index and add the idx_cluster_update_time index
ALTER TABLE `ks_km_kafka_change_record` DROP INDEX `idx_cluster_phy_id` ,
ADD INDEX `idx_cluster_update_time` (`cluster_phy_id` ASC, `update_time` ASC);
```
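After applying the statements, you can query the new health-check configs back as a sanity check. A sketch, assuming a local MySQL and a database named `know_streaming` (both are placeholders for your actual connection details):
```bash
# Expect the four MM2 health-check rows inserted above.
mysql -h127.0.0.1 -uroot -p know_streaming -e \
  "SELECT value_name FROM ks_km_platform_cluster_config WHERE value_name LIKE 'HC_MIRROR_MAKER%';"
```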
### Upgrade to `3.2.0`
**Configuration changes**
```yaml
# Add the following configuration
spring:
  logi-job: # database config for the logi-job module that know-streaming depends on; keeping it identical to the know-streaming database config is fine
    enable: true # true enables job tasks, false disables them. KS can be deployed as two sets of services, one serving frontend requests and one running jobs; this flag controls which role an instance plays
# thread pool sizes
thread-pool:
  es:
    search: # ES query thread pool
      thread-num: 20 # pool size
      queue-size: 10000 # queue size
# client pool sizes
client-pool:
  kafka-admin:
    client-cnt: 1 # number of KafkaAdminClient instances created per Kafka cluster
# ES client configuration
es:
  index:
    expire: 15 # index retention in days; indices older than 15 days are deleted by KS
```
**SQL changes**
```sql
DROP TABLE IF EXISTS `ks_kc_connect_cluster`;
CREATE TABLE `ks_kc_connect_cluster` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'Connect集群ID',
`kafka_cluster_phy_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'Kafka集群ID',
`name` varchar(128) NOT NULL DEFAULT '' COMMENT '集群名称',
`group_name` varchar(128) NOT NULL DEFAULT '' COMMENT '集群Group名称',
`cluster_url` varchar(1024) NOT NULL DEFAULT '' COMMENT '集群地址',
`member_leader_url` varchar(1024) NOT NULL DEFAULT '' COMMENT 'URL地址',
`version` varchar(64) NOT NULL DEFAULT '' COMMENT 'connect版本',
`jmx_properties` text COMMENT 'JMX配置',
`state` tinyint(4) NOT NULL DEFAULT '1' COMMENT '集群使用的消费组状态,也表示集群状态:-1 Unknown,0 ReBalance,1 Active,2 Dead,3 Empty',
`create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '接入时间',
`update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
PRIMARY KEY (`id`),
UNIQUE KEY `uniq_id_group_name` (`id`,`group_name`),
UNIQUE KEY `uniq_name_kafka_cluster` (`name`,`kafka_cluster_phy_id`),
KEY `idx_kafka_cluster_phy_id` (`kafka_cluster_phy_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='Connect集群信息表';
DROP TABLE IF EXISTS `ks_kc_connector`;
CREATE TABLE `ks_kc_connector` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
`kafka_cluster_phy_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'Kafka集群ID',
`connect_cluster_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'Connect集群ID',
`connector_name` varchar(512) NOT NULL DEFAULT '' COMMENT 'Connector名称',
`connector_class_name` varchar(512) NOT NULL DEFAULT '' COMMENT 'Connector类',
`connector_type` varchar(32) NOT NULL DEFAULT '' COMMENT 'Connector类型',
`state` varchar(45) NOT NULL DEFAULT '' COMMENT '状态',
`topics` text COMMENT '访问过的Topics',
`task_count` int(11) NOT NULL DEFAULT '0' COMMENT '任务数',
`create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
`update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
PRIMARY KEY (`id`),
UNIQUE KEY `uniq_connect_cluster_id_connector_name` (`connect_cluster_id`,`connector_name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='Connector信息表';
DROP TABLE IF EXISTS `ks_kc_worker`;
CREATE TABLE `ks_kc_worker` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
`kafka_cluster_phy_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'Kafka集群ID',
`connect_cluster_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'Connect集群ID',
`member_id` varchar(512) NOT NULL DEFAULT '' COMMENT '成员ID',
`host` varchar(128) NOT NULL DEFAULT '' COMMENT '主机名',
`jmx_port` int(16) NOT NULL DEFAULT '-1' COMMENT 'Jmx端口',
`url` varchar(1024) NOT NULL DEFAULT '' COMMENT 'URL信息',
`leader_url` varchar(1024) NOT NULL DEFAULT '' COMMENT 'leaderURL信息',
`leader` int(16) NOT NULL DEFAULT '0' COMMENT '状态: 1是leader0不是leader',
`worker_id` varchar(128) NOT NULL COMMENT 'worker地址',
`create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
`update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
PRIMARY KEY (`id`),
UNIQUE KEY `uniq_cluster_id_member_id` (`connect_cluster_id`,`member_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='worker信息表';
DROP TABLE IF EXISTS `ks_kc_worker_connector`;
CREATE TABLE `ks_kc_worker_connector` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
`kafka_cluster_phy_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'Kafka集群ID',
`connect_cluster_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'Connect集群ID',
`connector_name` varchar(512) NOT NULL DEFAULT '' COMMENT 'Connector名称',
`worker_member_id` varchar(256) NOT NULL DEFAULT '',
`task_id` int(16) NOT NULL DEFAULT '-1' COMMENT 'Task的ID',
`state` varchar(128) DEFAULT NULL COMMENT '任务状态',
`worker_id` varchar(128) DEFAULT NULL COMMENT 'worker信息',
`create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
`update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
PRIMARY KEY (`id`),
UNIQUE KEY `uniq_relation` (`connect_cluster_id`,`connector_name`,`task_id`,`worker_member_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='Worker和Connector关系表';
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_CONNECTOR_FAILED_TASK_COUNT', '{\"value\" : 1}', 'connector失败状态的任务数量', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_CONNECTOR_UNASSIGNED_TASK_COUNT', '{\"value\" : 1}', 'connector未被分配的任务数量', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_CONNECT_CLUSTER_TASK_STARTUP_FAILURE_PERCENTAGE', '{\"value\" : 0.05}', 'Connect集群任务启动失败概率', 'admin');
```
---
### Upgrade to `v3.1.0`
```sql
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_ZK_BRAIN_SPLIT', '{ \"value\": 1} ', 'ZK 脑裂', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_ZK_OUTSTANDING_REQUESTS', '{ \"amount\": 100, \"ratio\":0.8} ', 'ZK Outstanding 请求堆积数', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_ZK_WATCH_COUNT', '{ \"amount\": 100000, \"ratio\": 0.8 } ', 'ZK WatchCount 数', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_ZK_ALIVE_CONNECTIONS', '{ \"amount\": 10000, \"ratio\": 0.8 } ', 'ZK 连接数', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_ZK_APPROXIMATE_DATA_SIZE', '{ \"amount\": 524288000, \"ratio\": 0.8 } ', 'ZK 数据大小(Byte)', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_ZK_SENT_RATE', '{ \"amount\": 500000, \"ratio\": 0.8 } ', 'ZK 发包数', 'admin');
```
### Upgrade to `v3.0.1`
**ES index templates**
```bash
@@ -142,10 +323,8 @@ CREATE TABLE `ks_km_group` (
```
---
### Upgrade to `v3.0.0`
**SQL changes**
@@ -157,7 +336,7 @@ ADD COLUMN `zk_properties` TEXT NULL COMMENT 'ZK配置' AFTER `jmx_properties`;
---
### Upgrade to `v3.0.0-beta.2`
**Configuration changes**
@@ -228,7 +407,7 @@ ALTER TABLE `logi_security_oplog`
---
### Upgrade to `v3.0.0-beta.1`
**SQL changes**
@@ -247,7 +426,7 @@ ALTER COLUMN `operation_methods` set default '';
---
### Upgrade from `2.x` to `v3.0.0-beta.0`
**Upgrade steps:**

View File

@@ -182,3 +182,47 @@ Node version: v12.22.12
+ Cause: the database encoding does not match the script we provide, so the data in the database became garbled and permission checks fail.
+ Solution: clear the database data, change the database character set to utf8, and then re-run the [dml-logi.sql](https://github.com/didi/KnowStreaming/blob/master/km-dist/init/sql/dml-logi.sql) script to import the data.
## 8.13. Connecting a Kafka cluster with Kerberos authentication enabled
1. Install the Kerberos client on the machine where KnowStreaming is deployed.
2. Replace the /etc/krb5.conf configuration file.
3. Copy the Kafka keytab to a directory on that machine.
4. When connecting the cluster, fill in the authentication configuration below, adjusting the values to your environment;
```json
{
"security.protocol": "SASL_PLAINTEXT",
"sasl.mechanism": "GSSAPI",
"sasl.jaas.config": "com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true keyTab=\"/etc/keytab/kafka.keytab\" storeKey=true useTicketCache=false principal=\"kafka/kafka@TEST.COM\";",
"sasl.kerberos.service.name": "kafka"
}
```
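To rule out keytab problems before touching KnowStreaming, the credentials can be tested directly on that machine; a sketch using the example keytab path and principal from the configuration above:
```bash
# Obtain a ticket with the keytab and list it; a failure here points to a
# krb5.conf or keytab issue rather than a KnowStreaming one.
kinit -kt /etc/keytab/kafka.keytab kafka/kafka@TEST.COM
klist
```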
## 8.14. Configuring LDAP integration
```yaml
# Add the following to application.yml; adjust the values to your environment
account:
  ldap:
    url: ldap://127.0.0.1:8080/
    basedn: DC=senz,DC=local
    factory: com.sun.jndi.ldap.LdapCtxFactory
    filter: sAMAccountName
    security:
      authentication: simple
      principal: CN=search,DC=senz,DC=local
      credentials: xxxxxxx
    auth-user-registration: false # whether to register the user into MySQL; defaults to false
    auth-user-registration-role: 1677 # 1677 is the id of the super-admin role; to grant an ordinary role by default, create one in KS and use its id
# Modify the following in application.yml
spring:
  logi-security:
    login-extend-bean-name: ksLdapLoginService # use the LDAP login service
```
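A quick way to validate the bind credentials and the filter outside of KS is `ldapsearch`; a sketch reusing the example values above (`someuser` is a placeholder account name):
```bash
# Bind as the search principal and look a user up by sAMAccountName.
ldapsearch -x -H ldap://127.0.0.1:8080/ \
  -D "CN=search,DC=senz,DC=local" -w xxxxxxx \
  -b "DC=senz,DC=local" "(sAMAccountName=someuser)"
```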
## 8.15. Notes on using Testcontainers for tests
1. A Docker runtime is required; see the [Testcontainers supported environments](https://www.testcontainers.org/supported_docker_environment/).
2. If Docker is not available locally, you can use [remote access to Docker](https://docs.docker.com/config/daemon/remote-access/); see the [Testcontainers configuration notes](https://www.testcontainers.org/features/configuration/#customizing-docker-host-detection).
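A minimal sketch of pointing the tests at a remote daemon via the standard `DOCKER_HOST` variable (address and port are examples; see the configuration notes linked above for other overrides):
```bash
# Run the test suite against a remote Docker daemon instead of a local one.
export DOCKER_HOST=tcp://192.168.1.50:2375
mvn test
```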

View File

@@ -62,10 +62,6 @@
<groupId>commons-lang</groupId>
<artifactId>commons-lang</artifactId>
</dependency>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
</dependency>
<dependency>
<groupId>commons-codec</groupId>

View File

@@ -0,0 +1,15 @@
package com.xiaojukeji.know.streaming.km.biz.cluster;
import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterConnectorsOverviewDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.connect.ConnectStateVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.connector.ClusterConnectorOverviewVO;
/**
* Overview of the Connectors in a Kafka cluster
*/
public interface ClusterConnectorsManager {
PaginationResult<ClusterConnectorOverviewVO> getClusterConnectorsOverview(Long clusterPhyId, ClusterConnectorsOverviewDTO dto);
ConnectStateVO getClusterConnectorsState(Long clusterPhyId);
}

View File

@@ -1,10 +1,15 @@
package com.xiaojukeji.know.streaming.km.biz.cluster;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhysHealthState;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhysState;
import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.MultiClusterDashboardDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.ClusterPhyBaseVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.ClusterPhyDashboardVO;
import java.util.List;
/**
 * Overall state across multiple clusters
 */
@@ -15,10 +20,14 @@ public interface MultiClusterPhyManager {
 */
ClusterPhysState getClusterPhysState();
ClusterPhysHealthState getClusterPhysHealthState();
/**
 * Query the multi-cluster dashboard
 * @param dto pagination info
 * @return
 */
PaginationResult<ClusterPhyDashboardVO> getClusterPhysDashboard(MultiClusterDashboardDTO dto);
Result<List<ClusterPhyBaseVO>> getClusterPhysBasic();
}

View File

@@ -6,6 +6,8 @@ import com.xiaojukeji.know.streaming.km.biz.cluster.ClusterBrokersManager;
import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterBrokersOverviewDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationSortDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.broker.Broker;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
import com.xiaojukeji.know.streaming.km.common.bean.entity.config.JmxConfig;
import com.xiaojukeji.know.streaming.km.common.bean.entity.kafkacontroller.KafkaController;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.BrokerMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
@@ -16,6 +18,8 @@ import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.res.ClusterBroker
import com.xiaojukeji.know.streaming.km.common.bean.vo.kafkacontroller.KafkaControllerVO;
import com.xiaojukeji.know.streaming.km.common.constant.KafkaConstant;
import com.xiaojukeji.know.streaming.km.common.enums.SortTypeEnum;
import com.xiaojukeji.know.streaming.km.common.enums.cluster.ClusterRunStateEnum;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.PaginationMetricsUtil;
import com.xiaojukeji.know.streaming.km.common.utils.PaginationUtil;
import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
@@ -24,6 +28,7 @@ import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerMetricService;
import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerService;
import com.xiaojukeji.know.streaming.km.core.service.kafkacontroller.KafkaControllerService;
import com.xiaojukeji.know.streaming.km.core.service.topic.TopicService;
import com.xiaojukeji.know.streaming.km.persistence.cache.LoadedClusterPhyCache;
import com.xiaojukeji.know.streaming.km.persistence.kafka.KafkaJMXClient;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
@@ -83,9 +88,13 @@ public class ClusterBrokersManagerImpl implements ClusterBrokersManager {
Map<Integer, Boolean> jmxConnectedMap = new HashMap<>();
brokerList.forEach(elem -> jmxConnectedMap.put(elem.getBrokerId(), kafkaJMXClient.getClientWithCheck(clusterPhyId, elem.getBrokerId()) != null));
ClusterPhy clusterPhy = LoadedClusterPhyCache.getByPhyId(clusterPhyId);
// convert format
return PaginationResult.buildSuc(
this.convert2ClusterBrokersOverviewVOList(
clusterPhy,
paginationResult.getData().getBizData(),
brokerList,
metricsResult.getData(),
@@ -131,7 +140,8 @@ public class ClusterBrokersManagerImpl implements ClusterBrokersManager {
clusterBrokersStateVO.setKafkaControllerAlive(true);
}
clusterBrokersStateVO.setConfigSimilar(brokerConfigService.countBrokerConfigDiffsFromDB(clusterPhyId, KafkaConstant.CONFIG_SIMILAR_IGNORED_CONFIG_KEY_LIST) <= 0
);
return clusterBrokersStateVO;
}
@@ -169,7 +179,8 @@ public class ClusterBrokersManagerImpl implements ClusterBrokersManager {
);
}
private List<ClusterBrokersOverviewVO> convert2ClusterBrokersOverviewVOList(ClusterPhy clusterPhy,
List<Integer> pagedBrokerIdList,
List<Broker> brokerList,
List<BrokerMetrics> metricsList,
Topic groupTopic,
@@ -185,9 +196,15 @@ public class ClusterBrokersManagerImpl implements ClusterBrokersManager {
Broker broker = brokerMap.get(brokerId);
BrokerMetrics brokerMetrics = metricsMap.get(brokerId);
Boolean jmxConnected = jmxConnectedMap.get(brokerId);
voList.add(this.convert2ClusterBrokersOverviewVO(brokerId, broker, brokerMetrics, groupTopic, transactionTopic, kafkaController, jmxConnected));
}
// supplement JMX port info for clusters not running in ZK mode
if (!clusterPhy.getRunState().equals(ClusterRunStateEnum.RUN_ZK.getRunState())) {
JmxConfig jmxConfig = ConvertUtil.str2ObjByJson(clusterPhy.getJmxProperties(), JmxConfig.class);
voList.forEach(elem -> elem.setJmxPort(jmxConfig.getJmxPort() == null ? -1 : jmxConfig.getJmxPort()));
}
return voList;
}

View File

@@ -0,0 +1,152 @@
package com.xiaojukeji.know.streaming.km.biz.cluster.impl;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.biz.cluster.ClusterConnectorsManager;
import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterConnectorsOverviewDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.ClusterConnectorDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.MetricDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.connect.MetricsConnectorsDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.ConnectCluster;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.ConnectWorker;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.WorkerConnector;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.connect.ConnectorMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.po.connect.ConnectorPO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.connect.ConnectStateVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.connector.ClusterConnectorOverviewVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.metrics.line.MetricMultiLinesVO;
import com.xiaojukeji.know.streaming.km.common.converter.ConnectConverter;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.PaginationMetricsUtil;
import com.xiaojukeji.know.streaming.km.common.utils.PaginationUtil;
import com.xiaojukeji.know.streaming.km.core.service.connect.cluster.ConnectClusterService;
import com.xiaojukeji.know.streaming.km.core.service.connect.connector.ConnectorMetricService;
import com.xiaojukeji.know.streaming.km.core.service.connect.connector.ConnectorService;
import com.xiaojukeji.know.streaming.km.core.service.connect.worker.WorkerConnectorService;
import com.xiaojukeji.know.streaming.km.core.service.connect.worker.WorkerService;
import org.apache.kafka.connect.runtime.AbstractStatus;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
@Service
public class ClusterConnectorsManagerImpl implements ClusterConnectorsManager {
private static final ILog LOGGER = LogFactory.getLog(ClusterConnectorsManagerImpl.class);
@Autowired
private ConnectorService connectorService;
@Autowired
private ConnectClusterService connectClusterService;
@Autowired
private ConnectorMetricService connectorMetricService;
@Autowired
private WorkerService workerService;
@Autowired
private WorkerConnectorService workerConnectorService;
@Override
public PaginationResult<ClusterConnectorOverviewVO> getClusterConnectorsOverview(Long clusterPhyId, ClusterConnectorsOverviewDTO dto) {
List<ConnectCluster> clusterList = connectClusterService.listByKafkaCluster(clusterPhyId);
List<ConnectorPO> poList = connectorService.listByKafkaClusterIdFromDB(clusterPhyId);
// query the latest metrics
Result<List<ConnectorMetrics>> latestMetricsResult = connectorMetricService.getLatestMetricsFromES(
clusterPhyId,
poList.stream().map(elem -> new ClusterConnectorDTO(elem.getConnectClusterId(), elem.getConnectorName())).collect(Collectors.toList()),
dto.getLatestMetricNames()
);
if (latestMetricsResult.failed()) {
LOGGER.error("method=getClusterConnectorsOverview||clusterPhyId={}||result={}||errMsg=get latest metric failed", clusterPhyId, latestMetricsResult);
return PaginationResult.buildFailure(latestMetricsResult, dto);
}
// convert to VOs
List<ClusterConnectorOverviewVO> voList = ConnectConverter.convert2ClusterConnectorOverviewVOList(clusterList, poList, latestMetricsResult.getData());
// apply pagination
PaginationResult<ClusterConnectorOverviewVO> voPaginationResult = this.pagingConnectorInLocal(voList, dto);
if (voPaginationResult.failed()) {
LOGGER.error("method=getClusterConnectorsOverview||clusterPhyId={}||result={}||errMsg=pagination in local failed", clusterPhyId, voPaginationResult);
return PaginationResult.buildFailure(voPaginationResult, dto);
}
// query historical metrics
Result<List<MetricMultiLinesVO>> lineMetricsResult = connectorMetricService.listConnectClusterMetricsFromES(
clusterPhyId,
this.buildMetricsConnectorsDTO(
voPaginationResult.getData().getBizData().stream().map(elem -> new ClusterConnectorDTO(elem.getConnectClusterId(), elem.getConnectorName())).collect(Collectors.toList()),
dto.getMetricLines()
)
);
return PaginationResult.buildSuc(
ConnectConverter.supplyData2ClusterConnectorOverviewVOList(
voPaginationResult.getData().getBizData(),
lineMetricsResult.getData()
),
voPaginationResult
);
}
@Override
public ConnectStateVO getClusterConnectorsState(Long clusterPhyId) {
// fetch the list of Connect cluster ids
List<ConnectCluster> connectClusterList = connectClusterService.listByKafkaCluster(clusterPhyId);
List<ConnectorPO> connectorPOList = connectorService.listByKafkaClusterIdFromDB(clusterPhyId);
List<WorkerConnector> workerConnectorList = workerConnectorService.listByKafkaClusterIdFromDB(clusterPhyId);
List<ConnectWorker> connectWorkerList = workerService.listByKafkaClusterIdFromDB(clusterPhyId);
return convert2ConnectStateVO(connectClusterList, connectorPOList, workerConnectorList, connectWorkerList);
}
/**************************************************** private method ****************************************************/
private MetricsConnectorsDTO buildMetricsConnectorsDTO(List<ClusterConnectorDTO> connectorDTOList, MetricDTO metricDTO) {
MetricsConnectorsDTO dto = ConvertUtil.obj2Obj(metricDTO, MetricsConnectorsDTO.class);
dto.setConnectorNameList(connectorDTOList == null? new ArrayList<>(): connectorDTOList);
return dto;
}
private ConnectStateVO convert2ConnectStateVO(List<ConnectCluster> connectClusterList, List<ConnectorPO> connectorPOList, List<WorkerConnector> workerConnectorList, List<ConnectWorker> connectWorkerList) {
ConnectStateVO connectStateVO = new ConnectStateVO();
connectStateVO.setConnectClusterCount(connectClusterList.size());
connectStateVO.setTotalConnectorCount(connectorPOList.size());
connectStateVO.setAliveConnectorCount(connectorPOList.stream().filter(elem -> elem.getState().equals(AbstractStatus.State.RUNNING.name())).collect(Collectors.toList()).size());
connectStateVO.setWorkerCount(connectWorkerList.size());
connectStateVO.setTotalTaskCount(workerConnectorList.size());
connectStateVO.setAliveTaskCount(workerConnectorList.stream().filter(elem -> elem.getState().equals(AbstractStatus.State.RUNNING.name())).collect(Collectors.toList()).size());
return connectStateVO;
}
private PaginationResult<ClusterConnectorOverviewVO> pagingConnectorInLocal(List<ClusterConnectorOverviewVO> connectorVOList, ClusterConnectorsOverviewDTO dto) {
// fuzzy match
connectorVOList = PaginationUtil.pageByFuzzyFilter(connectorVOList, dto.getSearchKeywords(), Arrays.asList("connectorName"));
// sort
if (!dto.getLatestMetricNames().isEmpty()) {
PaginationMetricsUtil.sortMetrics(connectorVOList, "latestMetrics", dto.getSortMetricNameList(), "connectorName", dto.getSortType());
} else {
PaginationUtil.pageBySort(connectorVOList, dto.getSortField(), dto.getSortType(), "connectorName", dto.getSortType());
}
// paginate
return PaginationUtil.pageBySubData(connectorVOList, dto);
}
}

View File

@@ -14,10 +14,12 @@ import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.res.ClusterPhyTop
import com.xiaojukeji.know.streaming.km.common.bean.vo.metrics.line.MetricMultiLinesVO;
import com.xiaojukeji.know.streaming.km.common.constant.KafkaConstant;
import com.xiaojukeji.know.streaming.km.common.converter.TopicVOConverter;
import com.xiaojukeji.know.streaming.km.common.enums.ha.HaResTypeEnum;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.PaginationMetricsUtil;
import com.xiaojukeji.know.streaming.km.common.utils.PaginationUtil;
import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
import com.xiaojukeji.know.streaming.km.core.service.ha.HaActiveStandbyRelationService;
import com.xiaojukeji.know.streaming.km.core.service.topic.TopicMetricService;
import com.xiaojukeji.know.streaming.km.core.service.topic.TopicService;
import org.springframework.beans.factory.annotation.Autowired;
@@ -38,16 +40,22 @@ public class ClusterTopicsManagerImpl implements ClusterTopicsManager {
@Autowired
private TopicMetricService topicMetricService;
@Autowired
private HaActiveStandbyRelationService haActiveStandbyRelationService;
@Override
public PaginationResult<ClusterPhyTopicsOverviewVO> getClusterPhyTopicsOverview(Long clusterPhyId, ClusterTopicsOverviewDTO dto) {
// fetch all Topic info for the cluster
List<Topic> topicList = topicService.listTopicsFromDB(clusterPhyId);
// fetch the metrics of all Topics in the cluster
Map<String, TopicMetrics> metricsMap = topicMetricService.getLatestMetricsFromCache(clusterPhyId);
// fetch HA info
Set<String> haTopicNameSet = haActiveStandbyRelationService.listByClusterAndType(clusterPhyId, HaResTypeEnum.MIRROR_TOPIC).stream().map(elem -> elem.getResName()).collect(Collectors.toSet());
// convert to VOs
List<ClusterPhyTopicsOverviewVO> voList = TopicVOConverter.convert2ClusterPhyTopicsOverviewVOList(topicList, metricsMap, haTopicNameSet);
// apply pagination
PaginationResult<ClusterPhyTopicsOverviewVO> voPaginationResult = this.pagingTopicInLocal(voList, dto);

View File

@@ -5,9 +5,7 @@ import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.biz.cluster.ClusterZookeepersManager;
import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterZookeepersOverviewDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.ZookeeperMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.ResultStatus;
@@ -20,9 +18,8 @@ import com.xiaojukeji.know.streaming.km.common.constant.MsgConstant;
import com.xiaojukeji.know.streaming.km.common.enums.zookeeper.ZKRoleEnum;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.PaginationUtil;
import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
import com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.ZookeeperMetricVersionItems;
import com.xiaojukeji.know.streaming.km.core.service.zookeeper.ZnodeService;
import com.xiaojukeji.know.streaming.km.core.service.zookeeper.ZookeeperMetricService;
import com.xiaojukeji.know.streaming.km.core.service.zookeeper.ZookeeperService;
@@ -30,7 +27,6 @@ import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import java.util.Arrays;
import java.util.List;
@Service
@@ -56,11 +52,6 @@ public class ClusterZookeepersManagerImpl implements ClusterZookeepersManager {
return Result.buildFromRSAndMsg(ResultStatus.CLUSTER_NOT_EXIST, MsgConstant.getClusterPhyNotExist(clusterPhyId));
}
List<ZookeeperInfo> infoList = zookeeperService.listFromDBByCluster(clusterPhyId);
ClusterZookeepersStateVO vo = new ClusterZookeepersStateVO();
@@ -90,21 +81,30 @@ public class ClusterZookeepersManagerImpl implements ClusterZookeepersManager {
}
}
// fetch metrics
Result<ZookeeperMetrics> metricsResult = zookeeperMetricService.batchCollectMetricsFromZookeeper(
clusterPhyId,
Arrays.asList(
ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_WATCH_COUNT,
ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_HEALTH_STATE,
ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_HEALTH_CHECK_PASSED,
ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_HEALTH_CHECK_TOTAL
)
);
if (metricsResult.failed()) {
LOGGER.error(
"method=getClusterPhyZookeepersState||clusterPhyId={}||errMsg={}",
clusterPhyId, metricsResult.getMessage()
);
return Result.buildSuc(vo);
}
ZookeeperMetrics metrics = metricsResult.getData();
vo.setWatchCount(ConvertUtil.float2Integer(metrics.getMetrics().get(ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_WATCH_COUNT)));
vo.setHealthState(ConvertUtil.float2Integer(metrics.getMetrics().get(ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_HEALTH_STATE)));
vo.setHealthCheckPassed(ConvertUtil.float2Integer(metrics.getMetrics().get(ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_HEALTH_CHECK_PASSED)));
vo.setHealthCheckTotal(ConvertUtil.float2Integer(metrics.getMetrics().get(ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_HEALTH_CHECK_TOTAL)));
return Result.buildSuc(vo);
}

View File

@@ -5,32 +5,29 @@ import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.biz.cluster.MultiClusterPhyManager;
import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.MetricDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.MetricsClusterPhyDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhysHealthState;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhysState;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.MultiClusterDashboardDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.ClusterMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.ClusterPhyBaseVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.ClusterPhyDashboardVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.metrics.line.MetricMultiLinesVO;
import com.xiaojukeji.know.streaming.km.common.converter.ClusterVOConverter;
import com.xiaojukeji.know.streaming.km.common.enums.health.HealthStateEnum;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.PaginationUtil;
import com.xiaojukeji.know.streaming.km.common.utils.PaginationMetricsUtil;
import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterMetricService;
import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
import com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.ClusterMetricVersionItems;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import java.util.*;
import java.util.stream.Collectors;
@Service
@@ -43,34 +40,48 @@ public class MultiClusterPhyManagerImpl implements MultiClusterPhyManager {
@Autowired
private ClusterMetricService clusterMetricService;
@Override
public ClusterPhysState getClusterPhysState() {
List<ClusterPhy> clusterPhyList = clusterPhyService.listAllClusters();
ClusterPhysState physState = new ClusterPhysState(0, 0, 0, clusterPhyList.size());
for (ClusterPhy clusterPhy : clusterPhyList) {
ClusterMetrics metrics = clusterMetricService.getLatestMetricsFromCache(clusterPhy.getId());
Float state = metrics.getMetric(ClusterMetricVersionItems.CLUSTER_METRIC_HEALTH_STATE);
if (state == null) {
physState.setUnknownCount(physState.getUnknownCount() + 1);
} else if (state.intValue() == HealthStateEnum.DEAD.getDimension()) {
physState.setDownCount(physState.getDownCount() + 1);
} else {
physState.setLiveCount(physState.getLiveCount() + 1);
}
}
return physState;
}
@Override
public ClusterPhysHealthState getClusterPhysHealthState() {
List<ClusterPhy> clusterPhyList = clusterPhyService.listAllClusters();
ClusterPhysHealthState physState = new ClusterPhysHealthState(clusterPhyList.size());
for (ClusterPhy clusterPhy: clusterPhyList) {
ClusterMetrics metrics = clusterMetricService.getLatestMetricsFromCache(clusterPhy.getId());
Float state = metrics.getMetric(ClusterMetricVersionItems.CLUSTER_METRIC_HEALTH_STATE);
if (state == null) {
physState.setUnknownCount(physState.getUnknownCount() + 1);
} else if (state.intValue() == HealthStateEnum.GOOD.getDimension()) {
physState.setGoodCount(physState.getGoodCount() + 1);
} else if (state.intValue() == HealthStateEnum.MEDIUM.getDimension()) {
physState.setMediumCount(physState.getMediumCount() + 1);
} else if (state.intValue() == HealthStateEnum.POOR.getDimension()) {
physState.setPoorCount(physState.getPoorCount() + 1);
} else if (state.intValue() == HealthStateEnum.DEAD.getDimension()) {
physState.setDeadCount(physState.getDeadCount() + 1);
} else {
physState.setUnknownCount(physState.getUnknownCount() + 1);
}
}
return physState;
}
@@ -83,24 +94,6 @@ public class MultiClusterPhyManagerImpl implements MultiClusterPhyManager {
// convert to VO form for later pagination and filtering
List<ClusterPhyDashboardVO> voList = ConvertUtil.list2List(clusterPhyList, ClusterPhyDashboardVO.class);
// paginate and filter locally
voList = this.getAndPagingDataInLocal(voList, dto);
@@ -125,6 +118,15 @@ public class MultiClusterPhyManagerImpl implements MultiClusterPhyManager {
);
}
@Override
public Result<List<ClusterPhyBaseVO>> getClusterPhysBasic() {
// fetch the clusters
List<ClusterPhy> clusterPhyList = clusterPhyService.listAllClusters();
// convert to VO form for later pagination and filtering
return Result.buildSuc(ConvertUtil.list2List(clusterPhyList, ClusterPhyBaseVO.class));
}
/**************************************************** private method ****************************************************/
@@ -149,13 +151,7 @@ public class MultiClusterPhyManagerImpl implements MultiClusterPhyManager {
List<ClusterMetrics> metricsList = new ArrayList<>();
for (ClusterPhyDashboardVO vo: voList) {
ClusterMetrics clusterMetrics = clusterMetricService.getLatestMetricsFromCache(vo.getId());
clusterMetrics.getMetrics().putIfAbsent(ClusterMetricVersionItems.CLUSTER_METRIC_HEALTH_STATE, (float) HealthStateEnum.UNKNOWN.getDimension());
metricsList.add(clusterMetrics);
}

View File

@@ -0,0 +1,16 @@
package com.xiaojukeji.know.streaming.km.biz.connect.connector;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector.ConnectorCreateDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.vo.connect.connector.ConnectorStateVO;
import java.util.Properties;
public interface ConnectorManager {
Result<Void> updateConnectorConfig(Long connectClusterId, String connectorName, Properties configs, String operator);
Result<Void> createConnector(ConnectorCreateDTO dto, String operator);
Result<Void> createConnector(ConnectorCreateDTO dto, String heartbeatName, String checkpointName, String operator);
Result<ConnectorStateVO> getConnectorStateVO(Long connectClusterId, String connectorName);
}

View File

@@ -0,0 +1,16 @@
package com.xiaojukeji.know.streaming.km.biz.connect.connector;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.vo.connect.task.KCTaskOverviewVO;
import java.util.List;
/**
* @author wyb
* @date 2022/11/14
*/
public interface WorkerConnectorManager {
Result<List<KCTaskOverviewVO>> getTaskOverview(Long connectClusterId, String connectorName);
}

View File

@@ -0,0 +1,115 @@
package com.xiaojukeji.know.streaming.km.biz.connect.connector.impl;
import com.xiaojukeji.know.streaming.km.biz.connect.connector.ConnectorManager;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector.ConnectorCreateDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.WorkerConnector;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.config.ConnectConfigInfos;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.connector.KSConnector;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.connector.KSConnectorInfo;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.ResultStatus;
import com.xiaojukeji.know.streaming.km.common.bean.po.connect.ConnectorPO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.connect.connector.ConnectorStateVO;
import com.xiaojukeji.know.streaming.km.common.constant.connect.KafkaConnectConstant;
import com.xiaojukeji.know.streaming.km.core.service.connect.connector.ConnectorService;
import com.xiaojukeji.know.streaming.km.core.service.connect.plugin.PluginService;
import com.xiaojukeji.know.streaming.km.core.service.connect.worker.WorkerConnectorService;
import org.apache.kafka.connect.runtime.AbstractStatus;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import java.util.List;
import java.util.Properties;
import java.util.stream.Collectors;
@Service
public class ConnectorManagerImpl implements ConnectorManager {
@Autowired
private PluginService pluginService;
@Autowired
private ConnectorService connectorService;
@Autowired
private WorkerConnectorService workerConnectorService;
@Override
public Result<Void> updateConnectorConfig(Long connectClusterId, String connectorName, Properties configs, String operator) {
Result<ConnectConfigInfos> infosResult = pluginService.validateConfig(connectClusterId, configs);
if (infosResult.failed()) {
return Result.buildFromIgnoreData(infosResult);
}
if (infosResult.getData().getErrorCount() > 0) {
return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, "Connector参数错误");
}
return connectorService.updateConnectorConfig(connectClusterId, connectorName, configs, operator);
}
@Override
public Result<Void> createConnector(ConnectorCreateDTO dto, String operator) {
dto.getConfigs().put(KafkaConnectConstant.MIRROR_MAKER_NAME_FIELD_NAME, dto.getConnectorName());
Result<KSConnectorInfo> createResult = connectorService.createConnector(dto.getConnectClusterId(), dto.getConnectorName(), dto.getConfigs(), operator);
if (createResult.failed()) {
return Result.buildFromIgnoreData(createResult);
}
Result<KSConnector> ksConnectorResult = connectorService.getAllConnectorInfoFromCluster(dto.getConnectClusterId(), dto.getConnectorName());
if (ksConnectorResult.failed()) {
return Result.buildFromRSAndMsg(ResultStatus.SUCCESS, "创建成功但是获取元信息失败页面元信息会存在1分钟延迟");
}
connectorService.addNewToDB(ksConnectorResult.getData());
return Result.buildSuc();
}
@Override
public Result<Void> createConnector(ConnectorCreateDTO dto, String heartbeatName, String checkpointName, String operator) {
dto.getConfigs().put(KafkaConnectConstant.MIRROR_MAKER_NAME_FIELD_NAME, dto.getConnectorName());
Result<KSConnectorInfo> createResult = connectorService.createConnector(dto.getConnectClusterId(), dto.getConnectorName(), dto.getConfigs(), operator);
if (createResult.failed()) {
return Result.buildFromIgnoreData(createResult);
}
Result<KSConnector> ksConnectorResult = connectorService.getAllConnectorInfoFromCluster(dto.getConnectClusterId(), dto.getConnectorName());
if (ksConnectorResult.failed()) {
return Result.buildFromRSAndMsg(ResultStatus.SUCCESS, "创建成功但是获取元信息失败页面元信息会存在1分钟延迟");
}
KSConnector connector = ksConnectorResult.getData();
connector.setCheckpointConnectorName(checkpointName);
connector.setHeartbeatConnectorName(heartbeatName);
connectorService.addNewToDB(connector);
return Result.buildSuc();
}
@Override
public Result<ConnectorStateVO> getConnectorStateVO(Long connectClusterId, String connectorName) {
ConnectorPO connectorPO = connectorService.getConnectorFromDB(connectClusterId, connectorName);
if (connectorPO == null) {
return Result.buildFailure(ResultStatus.NOT_EXIST);
}
List<WorkerConnector> workerConnectorList = workerConnectorService.listFromDB(connectClusterId).stream().filter(elem -> elem.getConnectorName().equals(connectorName)).collect(Collectors.toList());
return Result.buildSuc(convert2ConnectorOverviewVO(connectorPO, workerConnectorList));
}
private ConnectorStateVO convert2ConnectorOverviewVO(ConnectorPO connectorPO, List<WorkerConnector> workerConnectorList) {
ConnectorStateVO connectorStateVO = new ConnectorStateVO();
connectorStateVO.setConnectClusterId(connectorPO.getConnectClusterId());
connectorStateVO.setName(connectorPO.getConnectorName());
connectorStateVO.setType(connectorPO.getConnectorType());
connectorStateVO.setState(connectorPO.getState());
connectorStateVO.setTotalTaskCount(workerConnectorList.size());
connectorStateVO.setAliveTaskCount(workerConnectorList.stream().filter(elem -> elem.getState().equals(AbstractStatus.State.RUNNING.name())).collect(Collectors.toList()).size());
connectorStateVO.setTotalWorkerCount(workerConnectorList.stream().map(elem -> elem.getWorkerId()).collect(Collectors.toSet()).size());
return connectorStateVO;
}
}

View File

@@ -0,0 +1,37 @@
package com.xiaojukeji.know.streaming.km.biz.connect.connector.impl;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.biz.connect.connector.WorkerConnectorManager;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.ConnectCluster;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.WorkerConnector;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.vo.connect.task.KCTaskOverviewVO;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.core.service.connect.worker.WorkerConnectorService;
import com.xiaojukeji.know.streaming.km.persistence.connect.cache.LoadedConnectClusterCache;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import java.util.List;
/**
* @author wyb
* @date 2022/11/14
*/
@Service
public class WorkerConnectorManageImpl implements WorkerConnectorManager {
private static final ILog LOGGER = LogFactory.getLog(WorkerConnectorManageImpl.class);
@Autowired
private WorkerConnectorService workerConnectorService;
@Override
public Result<List<KCTaskOverviewVO>> getTaskOverview(Long connectClusterId, String connectorName) {
ConnectCluster connectCluster = LoadedConnectClusterCache.getByPhyId(connectClusterId);
List<WorkerConnector> workerConnectorList = workerConnectorService.getWorkerConnectorListFromCluster(connectCluster, connectorName);
return Result.buildSuc(ConvertUtil.list2List(workerConnectorList, KCTaskOverviewVO.class));
}
}

View File

@@ -0,0 +1,43 @@
package com.xiaojukeji.know.streaming.km.biz.connect.mm2;
import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterMirrorMakersOverviewDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.mm2.MirrorMakerCreateDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.mm2.ClusterMirrorMakerOverviewVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.mm2.MirrorMakerBaseStateVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.mm2.MirrorMakerStateVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.connect.plugin.ConnectConfigInfosVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.connect.task.KCTaskOverviewVO;
import java.util.List;
import java.util.Map;
import java.util.Properties;
/**
* @author wyb
* @date 2022/12/26
*/
public interface MirrorMakerManager {
Result<Void> createMirrorMaker(MirrorMakerCreateDTO dto, String operator);
Result<Void> deleteMirrorMaker(Long connectClusterId, String sourceConnectorName, String operator);
Result<Void> modifyMirrorMakerConfig(MirrorMakerCreateDTO dto, String operator);
Result<Void> restartMirrorMaker(Long connectClusterId, String sourceConnectorName, String operator);
Result<Void> stopMirrorMaker(Long connectClusterId, String sourceConnectorName, String operator);
Result<Void> resumeMirrorMaker(Long connectClusterId, String sourceConnectorName, String operator);
Result<MirrorMakerStateVO> getMirrorMakerStateVO(Long clusterPhyId);
PaginationResult<ClusterMirrorMakerOverviewVO> getClusterMirrorMakersOverview(Long clusterPhyId, ClusterMirrorMakersOverviewDTO dto);
Result<MirrorMakerBaseStateVO> getMirrorMakerState(Long connectClusterId, String connectName);
Result<Map<String, List<KCTaskOverviewVO>>> getTaskOverview(Long connectClusterId, String connectorName);
Result<List<Properties>> getMM2Configs(Long connectClusterId, String connectorName);
Result<List<ConnectConfigInfosVO>> validateConnectors(MirrorMakerCreateDTO dto);
}
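A hedged usage sketch of this interface; the setter names are assumptions inferred from the getters the implementation below relies on, and the ids and config objects are hypothetical.

MirrorMakerCreateDTO dto = new MirrorMakerCreateDTO();
dto.setConnectClusterId(1L);                   // Connect cluster that hosts the MM2 connectors
dto.setSourceKafkaClusterId(2L);               // Kafka cluster to replicate from
dto.setConnectorName("mm2-source");            // MirrorSourceConnector name
dto.setConfigs(sourceConfigs);                 // required MirrorSourceConnector configs
dto.setCheckpointConnectorConfigs(cpConfigs);  // optional; null skips the checkpoint connector
dto.setHeartbeatConnectorConfigs(hbConfigs);   // optional; null skips the heartbeat connector
Result<Void> rv = mirrorMakerManager.createMirrorMaker(dto, "admin");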

View File

@@ -0,0 +1,652 @@
package com.xiaojukeji.know.streaming.km.biz.connect.mm2.impl;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.biz.connect.connector.ConnectorManager;
import com.xiaojukeji.know.streaming.km.biz.connect.mm2.MirrorMakerManager;
import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterMirrorMakersOverviewDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.ClusterConnectorDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector.ConnectorCreateDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.mm2.MirrorMakerCreateDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.MetricDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.mm2.MetricsMirrorMakersDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.ConnectCluster;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.ConnectWorker;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.WorkerConnector;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.config.ConnectConfigInfos;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.connector.KSConnectorInfo;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.mm2.MirrorMakerMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.ResultStatus;
import com.xiaojukeji.know.streaming.km.common.bean.po.connect.ConnectorPO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.mm2.ClusterMirrorMakerOverviewVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.mm2.MirrorMakerBaseStateVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.mm2.MirrorMakerStateVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.connect.plugin.ConnectConfigInfosVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.metrics.line.MetricLineVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.metrics.line.MetricMultiLinesVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.connect.task.KCTaskOverviewVO;
import com.xiaojukeji.know.streaming.km.common.constant.MsgConstant;
import com.xiaojukeji.know.streaming.km.common.utils.*;
import com.xiaojukeji.know.streaming.km.common.constant.connect.KafkaConnectConstant;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.MirrorMakerUtil;
import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
import com.xiaojukeji.know.streaming.km.core.service.connect.cluster.ConnectClusterService;
import com.xiaojukeji.know.streaming.km.core.service.connect.connector.ConnectorService;
import com.xiaojukeji.know.streaming.km.core.service.connect.mm2.MirrorMakerMetricService;
import com.xiaojukeji.know.streaming.km.core.service.connect.plugin.PluginService;
import com.xiaojukeji.know.streaming.km.core.service.connect.worker.WorkerConnectorService;
import com.xiaojukeji.know.streaming.km.core.service.connect.worker.WorkerService;
import com.xiaojukeji.know.streaming.km.core.utils.ApiCallThreadPoolService;
import com.xiaojukeji.know.streaming.km.persistence.cache.LoadedClusterPhyCache;
import org.apache.commons.lang.StringUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import java.util.*;
import java.util.function.Function;
import java.util.stream.Collectors;
import static org.apache.kafka.connect.runtime.AbstractStatus.State.RUNNING;
import static com.xiaojukeji.know.streaming.km.common.constant.connect.KafkaConnectConstant.*;
/**
* @author wyb
* @date 2022/12/26
*/
@Service
public class MirrorMakerManagerImpl implements MirrorMakerManager {
private static final ILog LOGGER = LogFactory.getLog(MirrorMakerManagerImpl.class);
@Autowired
private ConnectorService connectorService;
@Autowired
private WorkerConnectorService workerConnectorService;
@Autowired
private WorkerService workerService;
@Autowired
private ConnectorManager connectorManager;
@Autowired
private ClusterPhyService clusterPhyService;
@Autowired
private MirrorMakerMetricService mirrorMakerMetricService;
@Autowired
private ConnectClusterService connectClusterService;
@Autowired
private PluginService pluginService;
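/*
 * Note: one MM2 instance is materialized as up to three Kafka Connect connectors:
 * the required MirrorSourceConnector plus optional checkpoint and heartbeat
 * connectors. The lifecycle methods below (create/delete/restart/stop/resume)
 * therefore fan each operation out to every connector recorded for the source one.
 */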
@Override
public Result<Void> createMirrorMaker(MirrorMakerCreateDTO dto, String operator) {
// Check basic parameters
Result<Void> rv = this.checkCreateMirrorMakerParamAndUnifyData(dto);
if (rv.failed()) {
return rv;
}
// Create the MirrorSourceConnector
Result<Void> sourceConnectResult = connectorManager.createConnector(
dto,
dto.getHeartbeatConnectorConfigs() != null? MirrorMakerUtil.genHeartbeatName(dto.getConnectorName()): "",
dto.getCheckpointConnectorConfigs() != null? MirrorMakerUtil.genCheckpointName(dto.getConnectorName()): "",
operator
);
if (sourceConnectResult.failed()) {
// Creation failed, return directly
return Result.buildFromIgnoreData(sourceConnectResult);
}
// Create the checkpoint connector
Result<Void> checkpointResult = Result.buildSuc();
if (dto.getCheckpointConnectorConfigs() != null) {
checkpointResult = connectorManager.createConnector(
new ConnectorCreateDTO(dto.getConnectClusterId(), MirrorMakerUtil.genCheckpointName(dto.getConnectorName()), dto.getCheckpointConnectorConfigs()),
operator
);
}
// Create the heartbeat connector
Result<Void> heartbeatResult = Result.buildSuc();
if (dto.getHeartbeatConnectorConfigs() != null) {
heartbeatResult = connectorManager.createConnector(
new ConnectorCreateDTO(dto.getConnectClusterId(), MirrorMakerUtil.genHeartbeatName(dto.getConnectorName()), dto.getHeartbeatConnectorConfigs()),
operator
);
}
// Both succeeded
if (checkpointResult.successful() && heartbeatResult.successful()) {
return Result.buildSuc();
} else if (checkpointResult.failed() && heartbeatResult.failed()) {
return Result.buildFromRSAndMsg(
ResultStatus.KAFKA_CONNECTOR_OPERATE_FAILED,
String.format("Failed to create the checkpoint & heartbeat connectors.\nErrors: %s\n\n%s", checkpointResult.getMessage(), heartbeatResult.getMessage())
);
} else if (checkpointResult.failed()) {
return Result.buildFromRSAndMsg(
ResultStatus.KAFKA_CONNECTOR_OPERATE_FAILED,
String.format("Failed to create the checkpoint connector.\nError: %s", checkpointResult.getMessage())
);
} else {
return Result.buildFromRSAndMsg(
ResultStatus.KAFKA_CONNECTOR_OPERATE_FAILED,
String.format("Failed to create the heartbeat connector.\nError: %s", heartbeatResult.getMessage())
);
}
}
@Override
public Result<Void> deleteMirrorMaker(Long connectClusterId, String sourceConnectorName, String operator) {
ConnectorPO connectorPO = connectorService.getConnectorFromDB(connectClusterId, sourceConnectorName);
if (connectorPO == null) {
return Result.buildFromRSAndMsg(ResultStatus.NOT_EXIST, MsgConstant.getConnectorNotExist(connectClusterId, sourceConnectorName));
}
Result<Void> rv = Result.buildSuc();
if (!ValidateUtils.isBlank(connectorPO.getCheckpointConnectorName())) {
rv = connectorService.deleteConnector(connectClusterId, connectorPO.getCheckpointConnectorName(), operator);
}
if (rv.failed()) {
return rv;
}
if (!ValidateUtils.isBlank(connectorPO.getHeartbeatConnectorName())) {
rv = connectorService.deleteConnector(connectClusterId, connectorPO.getHeartbeatConnectorName(), operator);
}
if (rv.failed()) {
return rv;
}
return connectorService.deleteConnector(connectClusterId, sourceConnectorName, operator);
}
@Override
public Result<Void> modifyMirrorMakerConfig(MirrorMakerCreateDTO dto, String operator) {
ConnectorPO connectorPO = connectorService.getConnectorFromDB(dto.getConnectClusterId(), dto.getConnectorName());
if (connectorPO == null) {
return Result.buildFromRSAndMsg(ResultStatus.NOT_EXIST, MsgConstant.getConnectorNotExist(dto.getConnectClusterId(), dto.getConnectorName()));
}
Result<Void> rv = Result.buildSuc();
if (!ValidateUtils.isBlank(connectorPO.getCheckpointConnectorName()) && dto.getCheckpointConnectorConfigs() != null) {
rv = connectorService.updateConnectorConfig(dto.getConnectClusterId(), connectorPO.getCheckpointConnectorName(), dto.getCheckpointConnectorConfigs(), operator);
}
if (rv.failed()) {
return rv;
}
if (!ValidateUtils.isBlank(connectorPO.getHeartbeatConnectorName()) && dto.getHeartbeatConnectorConfigs() != null) {
rv = connectorService.updateConnectorConfig(dto.getConnectClusterId(), connectorPO.getHeartbeatConnectorName(), dto.getHeartbeatConnectorConfigs(), operator);
}
if (rv.failed()) {
return rv;
}
return connectorService.updateConnectorConfig(dto.getConnectClusterId(), dto.getConnectorName(), dto.getConfigs(), operator);
}
@Override
public Result<Void> restartMirrorMaker(Long connectClusterId, String sourceConnectorName, String operator) {
ConnectorPO connectorPO = connectorService.getConnectorFromDB(connectClusterId, sourceConnectorName);
if (connectorPO == null) {
return Result.buildFromRSAndMsg(ResultStatus.NOT_EXIST, MsgConstant.getConnectorNotExist(connectClusterId, sourceConnectorName));
}
Result<Void> rv = Result.buildSuc();
if (!ValidateUtils.isBlank(connectorPO.getCheckpointConnectorName())) {
rv = connectorService.restartConnector(connectClusterId, connectorPO.getCheckpointConnectorName(), operator);
}
if (rv.failed()) {
return rv;
}
if (!ValidateUtils.isBlank(connectorPO.getHeartbeatConnectorName())) {
rv = connectorService.restartConnector(connectClusterId, connectorPO.getHeartbeatConnectorName(), operator);
}
if (rv.failed()) {
return rv;
}
return connectorService.restartConnector(connectClusterId, sourceConnectorName, operator);
}
@Override
public Result<Void> stopMirrorMaker(Long connectClusterId, String sourceConnectorName, String operator) {
ConnectorPO connectorPO = connectorService.getConnectorFromDB(connectClusterId, sourceConnectorName);
if (connectorPO == null) {
return Result.buildFromRSAndMsg(ResultStatus.NOT_EXIST, MsgConstant.getConnectorNotExist(connectClusterId, sourceConnectorName));
}
Result<Void> rv = Result.buildSuc();
if (!ValidateUtils.isBlank(connectorPO.getCheckpointConnectorName())) {
rv = connectorService.stopConnector(connectClusterId, connectorPO.getCheckpointConnectorName(), operator);
}
if (rv.failed()) {
return rv;
}
if (!ValidateUtils.isBlank(connectorPO.getHeartbeatConnectorName())) {
rv = connectorService.stopConnector(connectClusterId, connectorPO.getHeartbeatConnectorName(), operator);
}
if (rv.failed()) {
return rv;
}
return connectorService.stopConnector(connectClusterId, sourceConnectorName, operator);
}
@Override
public Result<Void> resumeMirrorMaker(Long connectClusterId, String sourceConnectorName, String operator) {
ConnectorPO connectorPO = connectorService.getConnectorFromDB(connectClusterId, sourceConnectorName);
if (connectorPO == null) {
return Result.buildFromRSAndMsg(ResultStatus.NOT_EXIST, MsgConstant.getConnectorNotExist(connectClusterId, sourceConnectorName));
}
Result<Void> rv = Result.buildSuc();
if (!ValidateUtils.isBlank(connectorPO.getCheckpointConnectorName())) {
rv = connectorService.resumeConnector(connectClusterId, connectorPO.getCheckpointConnectorName(), operator);
}
if (rv.failed()) {
return rv;
}
if (!ValidateUtils.isBlank(connectorPO.getHeartbeatConnectorName())) {
rv = connectorService.resumeConnector(connectClusterId, connectorPO.getHeartbeatConnectorName(), operator);
}
if (rv.failed()) {
return rv;
}
return connectorService.resumeConnector(connectClusterId, sourceConnectorName, operator);
}
@Override
public Result<MirrorMakerStateVO> getMirrorMakerStateVO(Long clusterPhyId) {
List<ConnectorPO> connectorPOList = connectorService.listByKafkaClusterIdFromDB(clusterPhyId);
List<WorkerConnector> workerConnectorList = workerConnectorService.listByKafkaClusterIdFromDB(clusterPhyId);
List<ConnectWorker> workerList = workerService.listByKafkaClusterIdFromDB(clusterPhyId);
return Result.buildSuc(convert2MirrorMakerStateVO(connectorPOList, workerConnectorList, workerList));
}
@Override
public PaginationResult<ClusterMirrorMakerOverviewVO> getClusterMirrorMakersOverview(Long clusterPhyId, ClusterMirrorMakersOverviewDTO dto) {
List<ConnectorPO> mirrorMakerList = connectorService.listByKafkaClusterIdFromDB(clusterPhyId).stream().filter(elem -> elem.getConnectorClassName().equals(MIRROR_MAKER_SOURCE_CONNECTOR_TYPE)).collect(Collectors.toList());
List<ConnectCluster> connectClusterList = connectClusterService.listByKafkaCluster(clusterPhyId);
Result<List<MirrorMakerMetrics>> latestMetricsResult = mirrorMakerMetricService.getLatestMetricsFromES(clusterPhyId,
mirrorMakerList.stream().map(elem -> new Tuple<>(elem.getConnectClusterId(), elem.getConnectorName())).collect(Collectors.toList()),
dto.getLatestMetricNames());
if (latestMetricsResult.failed()) {
LOGGER.error("method=getClusterMirrorMakersOverview||clusterPhyId={}||result={}||errMsg=get latest metric failed", clusterPhyId, latestMetricsResult);
return PaginationResult.buildFailure(latestMetricsResult, dto);
}
List<ClusterMirrorMakerOverviewVO> mirrorMakerOverviewVOList = this.convert2ClusterMirrorMakerOverviewVO(mirrorMakerList, connectClusterList, latestMetricsResult.getData());
List<ClusterMirrorMakerOverviewVO> mirrorMakerVOList = this.completeClusterInfo(mirrorMakerOverviewVOList);
PaginationResult<ClusterMirrorMakerOverviewVO> voPaginationResult = this.pagingMirrorMakerInLocal(mirrorMakerVOList, dto);
if (voPaginationResult.failed()) {
LOGGER.error("method=ClusterMirrorMakerOverviewVO||clusterPhyId={}||result={}||errMsg=pagination in local failed", clusterPhyId, voPaginationResult);
return PaginationResult.buildFailure(voPaginationResult, dto);
}
// Query historical metrics
Result<List<MetricMultiLinesVO>> lineMetricsResult = mirrorMakerMetricService.listMirrorMakerClusterMetricsFromES(
clusterPhyId,
this.buildMetricsConnectorsDTO(
voPaginationResult.getData().getBizData().stream().map(elem -> new ClusterConnectorDTO(elem.getConnectClusterId(), elem.getConnectorName())).collect(Collectors.toList()),
dto.getMetricLines()
));
return PaginationResult.buildSuc(
this.supplyData2ClusterMirrorMakerOverviewVOList(
voPaginationResult.getData().getBizData(),
lineMetricsResult.getData()
),
voPaginationResult
);
}
@Override
public Result<MirrorMakerBaseStateVO> getMirrorMakerState(Long connectClusterId, String connectName) {
// MM2 task
ConnectorPO connectorPO = connectorService.getConnectorFromDB(connectClusterId, connectName);
if (connectorPO == null){
return Result.buildFrom(ResultStatus.NOT_EXIST);
}
List<WorkerConnector> workerConnectorList = workerConnectorService.listFromDB(connectClusterId).stream()
.filter(workerConnector -> workerConnector.getConnectorName().equals(connectorPO.getConnectorName())
|| (!StringUtils.isBlank(connectorPO.getCheckpointConnectorName()) && workerConnector.getConnectorName().equals(connectorPO.getCheckpointConnectorName()))
|| (!StringUtils.isBlank(connectorPO.getHeartbeatConnectorName()) && workerConnector.getConnectorName().equals(connectorPO.getHeartbeatConnectorName())))
.collect(Collectors.toList());
MirrorMakerBaseStateVO mirrorMakerBaseStateVO = new MirrorMakerBaseStateVO();
mirrorMakerBaseStateVO.setTotalTaskCount(workerConnectorList.size());
mirrorMakerBaseStateVO.setAliveTaskCount(workerConnectorList.stream().filter(elem -> elem.getState().equals(RUNNING.name())).collect(Collectors.toList()).size());
mirrorMakerBaseStateVO.setWorkerCount(workerConnectorList.stream().collect(Collectors.groupingBy(WorkerConnector::getWorkerId)).size());
return Result.buildSuc(mirrorMakerBaseStateVO);
}
@Override
public Result<Map<String, List<KCTaskOverviewVO>>> getTaskOverview(Long connectClusterId, String connectorName) {
ConnectorPO connectorPO = connectorService.getConnectorFromDB(connectClusterId, connectorName);
if (connectorPO == null){
return Result.buildFrom(ResultStatus.NOT_EXIST);
}
Map<String, List<KCTaskOverviewVO>> listMap = new HashMap<>();
List<WorkerConnector> workerConnectorList = workerConnectorService.listFromDB(connectClusterId);
if (workerConnectorList.isEmpty()){
return Result.buildSuc(listMap);
}
workerConnectorList.forEach(workerConnector -> {
if (workerConnector.getConnectorName().equals(connectorPO.getConnectorName())){
listMap.putIfAbsent(MIRROR_MAKER_SOURCE_CONNECTOR_TYPE, new ArrayList<>());
listMap.get(MIRROR_MAKER_SOURCE_CONNECTOR_TYPE).add(ConvertUtil.obj2Obj(workerConnector, KCTaskOverviewVO.class));
} else if (workerConnector.getConnectorName().equals(connectorPO.getCheckpointConnectorName())) {
listMap.putIfAbsent(MIRROR_MAKER_CHECKPOINT_CONNECTOR_TYPE, new ArrayList<>());
listMap.get(MIRROR_MAKER_CHECKPOINT_CONNECTOR_TYPE).add(ConvertUtil.obj2Obj(workerConnector, KCTaskOverviewVO.class));
} else if (workerConnector.getConnectorName().equals(connectorPO.getHeartbeatConnectorName())) {
listMap.putIfAbsent(MIRROR_MAKER_HEARTBEAT_CONNECTOR_TYPE, new ArrayList<>());
listMap.get(MIRROR_MAKER_HEARTBEAT_CONNECTOR_TYPE).add(ConvertUtil.obj2Obj(workerConnector, KCTaskOverviewVO.class));
}
});
return Result.buildSuc(listMap);
}
@Override
public Result<List<Properties>> getMM2Configs(Long connectClusterId, String connectorName) {
ConnectorPO connectorPO = connectorService.getConnectorFromDB(connectClusterId, connectorName);
if (connectorPO == null){
return Result.buildFrom(ResultStatus.NOT_EXIST);
}
List<Properties> propList = new ArrayList<>();
// source
Result<KSConnectorInfo> connectorResult = connectorService.getConnectorInfoFromCluster(connectClusterId, connectorPO.getConnectorName());
if (connectorResult.failed()) {
return Result.buildFromIgnoreData(connectorResult);
}
Properties props = new Properties();
props.putAll(connectorResult.getData().getConfig());
propList.add(props);
// checkpoint
if (!ValidateUtils.isBlank(connectorPO.getCheckpointConnectorName())) {
connectorResult = connectorService.getConnectorInfoFromCluster(connectClusterId, connectorPO.getCheckpointConnectorName());
if (connectorResult.failed()) {
return Result.buildFromIgnoreData(connectorResult);
}
props = new Properties();
props.putAll(connectorResult.getData().getConfig());
propList.add(props);
}
// heartbeat
if (!ValidateUtils.isBlank(connectorPO.getHeartbeatConnectorName())) {
connectorResult = connectorService.getConnectorInfoFromCluster(connectClusterId, connectorPO.getHeartbeatConnectorName());
if (connectorResult.failed()) {
return Result.buildFromIgnoreData(connectorResult);
}
props = new Properties();
props.putAll(connectorResult.getData().getConfig());
propList.add(props);
}
return Result.buildSuc(propList);
}
@Override
public Result<List<ConnectConfigInfosVO>> validateConnectors(MirrorMakerCreateDTO dto) {
List<ConnectConfigInfosVO> voList = new ArrayList<>();
Result<ConnectConfigInfos> infoResult = pluginService.validateConfig(dto.getConnectClusterId(), dto.getConfigs());
if (infoResult.failed()) {
return Result.buildFromIgnoreData(infoResult);
}
voList.add(ConvertUtil.obj2Obj(infoResult.getData(), ConnectConfigInfosVO.class));
if (dto.getHeartbeatConnectorConfigs() != null) {
infoResult = pluginService.validateConfig(dto.getConnectClusterId(), dto.getHeartbeatConnectorConfigs());
if (infoResult.failed()) {
return Result.buildFromIgnoreData(infoResult);
}
voList.add(ConvertUtil.obj2Obj(infoResult.getData(), ConnectConfigInfosVO.class));
}
if (dto.getCheckpointConnectorConfigs() != null) {
infoResult = pluginService.validateConfig(dto.getConnectClusterId(), dto.getCheckpointConnectorConfigs());
if (infoResult.failed()) {
return Result.buildFromIgnoreData(infoResult);
}
voList.add(ConvertUtil.obj2Obj(infoResult.getData(), ConnectConfigInfosVO.class));
}
return Result.buildSuc(voList);
}
/**************************************************** private method ****************************************************/
private MetricsMirrorMakersDTO buildMetricsConnectorsDTO(List<ClusterConnectorDTO> connectorDTOList, MetricDTO metricDTO) {
MetricsMirrorMakersDTO dto = ConvertUtil.obj2Obj(metricDTO, MetricsMirrorMakersDTO.class);
dto.setConnectorNameList(connectorDTOList == null? new ArrayList<>(): connectorDTOList);
return dto;
}
public Result<Void> checkCreateMirrorMakerParamAndUnifyData(MirrorMakerCreateDTO dto) {
ClusterPhy sourceClusterPhy = clusterPhyService.getClusterByCluster(dto.getSourceKafkaClusterId());
if (sourceClusterPhy == null) {
return Result.buildFromRSAndMsg(ResultStatus.CLUSTER_NOT_EXIST, MsgConstant.getClusterPhyNotExist(dto.getSourceKafkaClusterId()));
}
ConnectCluster connectCluster = connectClusterService.getById(dto.getConnectClusterId());
if (connectCluster == null) {
return Result.buildFromRSAndMsg(ResultStatus.CLUSTER_NOT_EXIST, MsgConstant.getConnectClusterNotExist(dto.getConnectClusterId()));
}
ClusterPhy targetClusterPhy = clusterPhyService.getClusterByCluster(connectCluster.getKafkaClusterPhyId());
if (targetClusterPhy == null) {
return Result.buildFromRSAndMsg(ResultStatus.CLUSTER_NOT_EXIST, MsgConstant.getClusterPhyNotExist(connectCluster.getKafkaClusterPhyId()));
}
if (!dto.getConfigs().containsKey(CONNECTOR_CLASS_FILED_NAME)) {
return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, "SourceConnector is missing connector.class");
}
if (!MIRROR_MAKER_SOURCE_CONNECTOR_TYPE.equals(dto.getConfigs().getProperty(CONNECTOR_CLASS_FILED_NAME))) {
return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, "SourceConnector has an incorrect connector.class type");
}
if (dto.getCheckpointConnectorConfigs() != null) {
if (!dto.getCheckpointConnectorConfigs().containsKey(CONNECTOR_CLASS_FILED_NAME)) {
return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, "CheckpointConnector is missing connector.class");
}
if (!MIRROR_MAKER_CHECKPOINT_CONNECTOR_TYPE.equals(dto.getCheckpointConnectorConfigs().getProperty(CONNECTOR_CLASS_FILED_NAME))) {
return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, "CheckpointConnector has an incorrect connector.class type");
}
}
if (dto.getHeartbeatConnectorConfigs() != null) {
if (!dto.getHeartbeatConnectorConfigs().containsKey(CONNECTOR_CLASS_FILED_NAME)) {
return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, "HeartbeatConnector is missing connector.class");
}
if (!MIRROR_MAKER_HEARTBEAT_CONNECTOR_TYPE.equals(dto.getHeartbeatConnectorConfigs().getProperty(CONNECTOR_CLASS_FILED_NAME))) {
return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, "HeartbeatConnector has an incorrect connector.class type");
}
}
dto.unifyData(
sourceClusterPhy.getId(), sourceClusterPhy.getBootstrapServers(), ConvertUtil.str2ObjByJson(sourceClusterPhy.getClientProperties(), Properties.class),
targetClusterPhy.getId(), targetClusterPhy.getBootstrapServers(), ConvertUtil.str2ObjByJson(targetClusterPhy.getClientProperties(), Properties.class)
);
return Result.buildSuc();
}
private MirrorMakerStateVO convert2MirrorMakerStateVO(List<ConnectorPO> connectorPOList, List<WorkerConnector> workerConnectorList, List<ConnectWorker> workerList) {
MirrorMakerStateVO mirrorMakerStateVO = new MirrorMakerStateVO();
List<ConnectorPO> sourceSet = connectorPOList.stream().filter(elem -> elem.getConnectorClassName().equals(MIRROR_MAKER_SOURCE_CONNECTOR_TYPE)).collect(Collectors.toList());
mirrorMakerStateVO.setMirrorMakerCount(sourceSet.size());
Set<Long> connectClusterIdSet = sourceSet.stream().map(ConnectorPO::getConnectClusterId).collect(Collectors.toSet());
mirrorMakerStateVO.setWorkerCount(workerList.stream().filter(elem -> connectClusterIdSet.contains(elem.getConnectClusterId())).collect(Collectors.toList()).size());
List<ConnectorPO> mirrorMakerConnectorList = new ArrayList<>();
mirrorMakerConnectorList.addAll(sourceSet);
mirrorMakerConnectorList.addAll(connectorPOList.stream().filter(elem -> elem.getConnectorClassName().equals(MIRROR_MAKER_CHECKPOINT_CONNECTOR_TYPE)).collect(Collectors.toList()));
mirrorMakerConnectorList.addAll(connectorPOList.stream().filter(elem -> elem.getConnectorClassName().equals(MIRROR_MAKER_HEARTBEAT_CONNECTOR_TYPE)).collect(Collectors.toList()));
mirrorMakerStateVO.setTotalConnectorCount(mirrorMakerConnectorList.size());
mirrorMakerStateVO.setAliveConnectorCount(mirrorMakerConnectorList.stream().filter(elem -> elem.getState().equals(RUNNING.name())).collect(Collectors.toList()).size());
Set<String> connectorNameSet = mirrorMakerConnectorList.stream().map(elem -> elem.getConnectorName()).collect(Collectors.toSet());
List<WorkerConnector> taskList = workerConnectorList.stream().filter(elem -> connectorNameSet.contains(elem.getConnectorName())).collect(Collectors.toList());
mirrorMakerStateVO.setTotalTaskCount(taskList.size());
mirrorMakerStateVO.setAliveTaskCount(taskList.stream().filter(elem -> elem.getState().equals(RUNNING.name())).collect(Collectors.toList()).size());
return mirrorMakerStateVO;
}
private List<ClusterMirrorMakerOverviewVO> convert2ClusterMirrorMakerOverviewVO(List<ConnectorPO> mirrorMakerList, List<ConnectCluster> connectClusterList, List<MirrorMakerMetrics> latestMetric) {
List<ClusterMirrorMakerOverviewVO> clusterMirrorMakerOverviewVOList = new ArrayList<>();
Map<String, MirrorMakerMetrics> metricsMap = latestMetric.stream().collect(Collectors.toMap(elem -> elem.getConnectClusterId() + "@" + elem.getConnectorName(), Function.identity()));
Map<Long, ConnectCluster> connectClusterMap = connectClusterList.stream().collect(Collectors.toMap(elem -> elem.getId(), Function.identity()));
for (ConnectorPO mirrorMaker : mirrorMakerList) {
ClusterMirrorMakerOverviewVO clusterMirrorMakerOverviewVO = new ClusterMirrorMakerOverviewVO();
clusterMirrorMakerOverviewVO.setConnectClusterId(mirrorMaker.getConnectClusterId());
clusterMirrorMakerOverviewVO.setConnectClusterName(connectClusterMap.get(mirrorMaker.getConnectClusterId()).getName());
clusterMirrorMakerOverviewVO.setConnectorName(mirrorMaker.getConnectorName());
clusterMirrorMakerOverviewVO.setState(mirrorMaker.getState());
clusterMirrorMakerOverviewVO.setCheckpointConnector(mirrorMaker.getCheckpointConnectorName());
clusterMirrorMakerOverviewVO.setTaskCount(mirrorMaker.getTaskCount());
clusterMirrorMakerOverviewVO.setHeartbeatConnector(mirrorMaker.getHeartbeatConnectorName());
clusterMirrorMakerOverviewVO.setLatestMetrics(metricsMap.getOrDefault(mirrorMaker.getConnectClusterId() + "@" + mirrorMaker.getConnectorName(), new MirrorMakerMetrics(mirrorMaker.getConnectClusterId(), mirrorMaker.getConnectorName())));
clusterMirrorMakerOverviewVOList.add(clusterMirrorMakerOverviewVO);
}
return clusterMirrorMakerOverviewVOList;
}
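/**
 * Paging happens in memory: fuzzy-filter by connectorName, sort by the latest metric
 * values when metric names are requested (otherwise by a plain field), then slice out
 * the requested page.
 */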
PaginationResult<ClusterMirrorMakerOverviewVO> pagingMirrorMakerInLocal(List<ClusterMirrorMakerOverviewVO> mirrorMakerOverviewVOList, ClusterMirrorMakersOverviewDTO dto) {
List<ClusterMirrorMakerOverviewVO> mirrorMakerVOList = PaginationUtil.pageByFuzzyFilter(mirrorMakerOverviewVOList, dto.getSearchKeywords(), Arrays.asList("connectorName"));
// Sort
if (!dto.getLatestMetricNames().isEmpty()) {
PaginationMetricsUtil.sortMetrics(mirrorMakerVOList, "latestMetrics", dto.getSortMetricNameList(), "connectorName", dto.getSortType());
} else {
PaginationUtil.pageBySort(mirrorMakerVOList, dto.getSortField(), dto.getSortType(), "connectorName", dto.getSortType());
}
// Paginate
return PaginationUtil.pageBySubData(mirrorMakerVOList, dto);
}
public static List<ClusterMirrorMakerOverviewVO> supplyData2ClusterMirrorMakerOverviewVOList(List<ClusterMirrorMakerOverviewVO> voList,
List<MetricMultiLinesVO> metricLineVOList) {
Map<String, List<MetricLineVO>> metricLineMap = new HashMap<>();
if (metricLineVOList != null) {
for (MetricMultiLinesVO metricMultiLinesVO : metricLineVOList) {
metricMultiLinesVO.getMetricLines()
.forEach(metricLineVO -> {
String key = metricLineVO.getName();
List<MetricLineVO> metricLineVOS = metricLineMap.getOrDefault(key, new ArrayList<>());
metricLineVOS.add(metricLineVO);
metricLineMap.put(key, metricLineVOS);
});
}
}
voList.forEach(elem -> {
elem.setMetricLines(metricLineMap.get(elem.getConnectClusterId() + "#" + elem.getConnectorName()));
});
return voList;
}
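/**
 * Completes the source/target Kafka cluster info for each MirrorMaker: connector configs
 * are fetched concurrently via ApiCallThreadPoolService, the source/target cluster alias
 * config values are used as display names, and an alias that is the numeric ID of a cluster
 * known to LoadedClusterPhyCache is resolved to the real cluster id and name. Entries whose
 * config could not be fetched in time are dropped from the returned list.
 */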
private List<ClusterMirrorMakerOverviewVO> completeClusterInfo(List<ClusterMirrorMakerOverviewVO> mirrorMakerVOList) {
Map<String, KSConnectorInfo> connectorInfoMap = new HashMap<>();
for (ClusterMirrorMakerOverviewVO mirrorMakerVO : mirrorMakerVOList) {
ApiCallThreadPoolService.runnableTask(String.format("method=completeClusterInfo||connectClusterId=%d||connectorName=%s||getMirrorMakerInfo", mirrorMakerVO.getConnectClusterId(), mirrorMakerVO.getConnectorName()),
3000
, () -> {
Result<KSConnectorInfo> connectorInfoRet = connectorService.getConnectorInfoFromCluster(mirrorMakerVO.getConnectClusterId(), mirrorMakerVO.getConnectorName());
if (connectorInfoRet.hasData()) {
connectorInfoMap.put(mirrorMakerVO.getConnectClusterId() + mirrorMakerVO.getConnectorName(), connectorInfoRet.getData());
}
return connectorInfoRet.getData();
});
}
ApiCallThreadPoolService.waitResult(1000);
List<ClusterMirrorMakerOverviewVO> newMirrorMakerVOList = new ArrayList<>();
for (ClusterMirrorMakerOverviewVO mirrorMakerVO : mirrorMakerVOList) {
KSConnectorInfo connectorInfo = connectorInfoMap.get(mirrorMakerVO.getConnectClusterId() + mirrorMakerVO.getConnectorName());
if (connectorInfo == null) {
continue;
}
String sourceClusterAlias = connectorInfo.getConfig().get(MIRROR_MAKER_SOURCE_CLUSTER_ALIAS_FIELD_NAME);
String targetClusterAlias = connectorInfo.getConfig().get(MIRROR_MAKER_TARGET_CLUSTER_ALIAS_FIELD_NAME);
// Default to the cluster alias first
mirrorMakerVO.setSourceKafkaClusterName(sourceClusterAlias);
mirrorMakerVO.setDestKafkaClusterName(targetClusterAlias);
if (!ValidateUtils.isBlank(sourceClusterAlias) && CommonUtils.isNumeric(sourceClusterAlias)) {
ClusterPhy clusterPhy = LoadedClusterPhyCache.getByPhyId(Long.valueOf(sourceClusterAlias));
if (clusterPhy != null) {
mirrorMakerVO.setSourceKafkaClusterId(clusterPhy.getId());
mirrorMakerVO.setSourceKafkaClusterName(clusterPhy.getName());
}
}
if (!ValidateUtils.isBlank(targetClusterAlias) && CommonUtils.isNumeric(targetClusterAlias)) {
ClusterPhy clusterPhy = LoadedClusterPhyCache.getByPhyId(Long.valueOf(targetClusterAlias));
if (clusterPhy != null) {
mirrorMakerVO.setDestKafkaClusterId(clusterPhy.getId());
mirrorMakerVO.setDestKafkaClusterName(clusterPhy.getName());
}
}
newMirrorMakerVOList.add(mirrorMakerVO);
}
return newMirrorMakerVOList;
}
}
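To make the connector.class validation above concrete, a hedged sketch of the three config sets a createMirrorMaker call would carry. The MIRROR_MAKER_*_CONNECTOR_TYPE constants presumably hold the stock Kafka MM2 class names shown here; all other keys are validated later through pluginService.validateConfig in validateConnectors.

Properties sourceConfigs = new Properties();
sourceConfigs.put("connector.class", "org.apache.kafka.connect.mirror.MirrorSourceConnector");
Properties checkpointConfigs = new Properties();
checkpointConfigs.put("connector.class", "org.apache.kafka.connect.mirror.MirrorCheckpointConnector");
Properties heartbeatConfigs = new Properties();
heartbeatConfigs.put("connector.class", "org.apache.kafka.connect.mirror.MirrorHeartbeatConnector");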

View File

@@ -39,5 +39,5 @@ public interface GroupManager {
 Result<Void> resetGroupOffsets(GroupOffsetResetDTO dto, String operator) throws Exception;
-List<GroupTopicOverviewVO> getGroupTopicOverviewVOList (Long clusterPhyId, List<GroupMemberPO> groupMemberPOList);
+List<GroupTopicOverviewVO> getGroupTopicOverviewVOList(Long clusterPhyId, List<GroupMemberPO> groupMemberPOList);
 }

View File

@@ -8,10 +8,15 @@ import com.xiaojukeji.know.streaming.km.common.bean.dto.group.GroupOffsetResetDT
 import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationBaseDTO;
 import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationSortDTO;
 import com.xiaojukeji.know.streaming.km.common.bean.dto.partition.PartitionOffsetDTO;
+import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.group.Group;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.group.GroupTopic;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.group.GroupTopicMember;
+import com.xiaojukeji.know.streaming.km.common.bean.entity.kafka.KSGroupDescription;
+import com.xiaojukeji.know.streaming.km.common.bean.entity.kafka.KSMemberConsumerAssignment;
+import com.xiaojukeji.know.streaming.km.common.bean.entity.kafka.KSMemberDescription;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.GroupMetrics;
+import com.xiaojukeji.know.streaming.km.common.bean.entity.offset.KSOffsetSpec;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.topic.Topic;
@@ -34,15 +39,13 @@ import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.PaginationMetricsUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.PaginationUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
+import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
 import com.xiaojukeji.know.streaming.km.core.service.group.GroupMetricService;
 import com.xiaojukeji.know.streaming.km.core.service.group.GroupService;
 import com.xiaojukeji.know.streaming.km.core.service.partition.PartitionService;
 import com.xiaojukeji.know.streaming.km.core.service.topic.TopicService;
-import com.xiaojukeji.know.streaming.km.core.service.version.metrics.GroupMetricVersionItems;
+import com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.GroupMetricVersionItems;
 import com.xiaojukeji.know.streaming.km.persistence.es.dao.GroupMetricESDAO;
-import org.apache.kafka.clients.admin.ConsumerGroupDescription;
-import org.apache.kafka.clients.admin.MemberDescription;
-import org.apache.kafka.clients.admin.OffsetSpec;
 import org.apache.kafka.common.ConsumerGroupState;
 import org.apache.kafka.common.TopicPartition;
 import org.springframework.beans.factory.annotation.Autowired;
@@ -51,6 +54,8 @@ import org.springframework.stereotype.Component;
 import java.util.*;
 import java.util.stream.Collectors;
+import static com.xiaojukeji.know.streaming.km.common.enums.group.GroupTypeEnum.CONNECT_CLUSTER_PROTOCOL_TYPE;
 @Component
 public class GroupManagerImpl implements GroupManager {
     private static final ILog log = LogFactory.getLog(GroupManagerImpl.class);
@@ -70,6 +75,9 @@ public class GroupManagerImpl implements GroupManager {
 @Autowired
 private GroupMetricESDAO groupMetricESDAO;
+@Autowired
+private ClusterPhyService clusterPhyService;
 @Override
 public PaginationResult<GroupTopicOverviewVO> pagingGroupMembers(Long clusterPhyId,
                                                                  String topicName,
@@ -140,6 +148,11 @@ public class GroupManagerImpl implements GroupManager {
                                                                  String groupName,
                                                                  List<String> latestMetricNames,
                                                                  PaginationSortDTO dto) throws NotExistException, AdminOperateException {
+    ClusterPhy clusterPhy = clusterPhyService.getClusterByCluster(clusterPhyId);
+    if (clusterPhy == null) {
+        return PaginationResult.buildFailure(MsgConstant.getClusterPhyNotExist(clusterPhyId), dto);
+    }
     // Get the list of TopicPartitions consumed by the consumer group
     Map<TopicPartition, Long> consumedOffsetMap = groupService.getGroupOffsetFromKafka(clusterPhyId, groupName);
     List<Integer> partitionList = consumedOffsetMap.keySet()
@@ -150,13 +163,18 @@ public class GroupManagerImpl implements GroupManager {
     Collections.sort(partitionList);
     // Get the consumer group's current runtime information
-    ConsumerGroupDescription groupDescription = groupService.getGroupDescriptionFromKafka(clusterPhyId, groupName);
+    KSGroupDescription groupDescription = groupService.getGroupDescriptionFromKafka(clusterPhy, groupName);
     // Convert to the storage format
-    Map<TopicPartition, MemberDescription> tpMemberMap = new HashMap<>();
-    for (MemberDescription description: groupDescription.members()) {
-        for (TopicPartition tp: description.assignment().topicPartitions()) {
-            tpMemberMap.put(tp, description);
-        }
-    }
+    Map<TopicPartition, KSMemberDescription> tpMemberMap = new HashMap<>();
+    // If this is not a connect cluster
+    if (!groupDescription.protocolType().equals(CONNECT_CLUSTER_PROTOCOL_TYPE)) {
+        for (KSMemberDescription description : groupDescription.members()) {
+            KSMemberConsumerAssignment assignment = (KSMemberConsumerAssignment) description.assignment();
+            for (TopicPartition tp : assignment.topicPartitions()) {
+                tpMemberMap.put(tp, description);
+            }
+        }
+    }
@@ -173,11 +191,11 @@ public class GroupManagerImpl implements GroupManager {
     vo.setTopicName(topicName);
     vo.setPartitionId(groupMetrics.getPartitionId());
-    MemberDescription memberDescription = tpMemberMap.get(new TopicPartition(topicName, groupMetrics.getPartitionId()));
-    if (memberDescription != null) {
-        vo.setMemberId(memberDescription.consumerId());
-        vo.setHost(memberDescription.host());
-        vo.setClientId(memberDescription.clientId());
-    }
+    KSMemberDescription ksMemberDescription = tpMemberMap.get(new TopicPartition(topicName, groupMetrics.getPartitionId()));
+    if (ksMemberDescription != null) {
+        vo.setMemberId(ksMemberDescription.consumerId());
+        vo.setHost(ksMemberDescription.host());
+        vo.setClientId(ksMemberDescription.clientId());
+    }
     vo.setLatestMetrics(groupMetrics);
@@ -203,13 +221,18 @@ public class GroupManagerImpl implements GroupManager {
     return rv;
 }
-ConsumerGroupDescription description = groupService.getGroupDescriptionFromKafka(dto.getClusterId(), dto.getGroupName());
+ClusterPhy clusterPhy = clusterPhyService.getClusterByCluster(dto.getClusterId());
+if (clusterPhy == null) {
+    return Result.buildFromRSAndMsg(ResultStatus.CLUSTER_NOT_EXIST, MsgConstant.getClusterPhyNotExist(dto.getClusterId()));
+}
+KSGroupDescription description = groupService.getGroupDescriptionFromKafka(clusterPhy, dto.getGroupName());
 if (ConsumerGroupState.DEAD.equals(description.state()) && !dto.isCreateIfNotExist()) {
     return Result.buildFromRSAndMsg(ResultStatus.KAFKA_OPERATE_FAILED, "group does not exist, reset failed");
 }
 if (!ConsumerGroupState.EMPTY.equals(description.state()) && !ConsumerGroupState.DEAD.equals(description.state())) {
-    return Result.buildFromRSAndMsg(ResultStatus.KAFKA_OPERATE_FAILED, String.format("group is in %s state, reset failed (only resettable when Empty)", GroupStateEnum.getByRawState(description.state()).getState()));
+    return Result.buildFromRSAndMsg(ResultStatus.KAFKA_OPERATE_FAILED, String.format("group is in %s state, reset failed (only resettable when Empty | Dead)", GroupStateEnum.getByRawState(description.state()).getState()));
 }
 // Get offsets
@@ -274,16 +297,16 @@ public class GroupManagerImpl implements GroupManager {
 )));
 }
-OffsetSpec offsetSpec = null;
+KSOffsetSpec offsetSpec = null;
 if (OffsetTypeEnum.PRECISE_TIMESTAMP.getResetType() == dto.getResetType()) {
-    offsetSpec = OffsetSpec.forTimestamp(dto.getTimestamp());
+    offsetSpec = KSOffsetSpec.forTimestamp(dto.getTimestamp());
 } else if (OffsetTypeEnum.EARLIEST.getResetType() == dto.getResetType()) {
-    offsetSpec = OffsetSpec.earliest();
+    offsetSpec = KSOffsetSpec.earliest();
 } else {
-    offsetSpec = OffsetSpec.latest();
+    offsetSpec = KSOffsetSpec.latest();
 }
-return partitionService.getPartitionOffsetFromKafka(dto.getClusterId(), dto.getTopicName(), offsetSpec, dto.getTimestamp());
+return partitionService.getPartitionOffsetFromKafka(dto.getClusterId(), dto.getTopicName(), offsetSpec);
 }
 private List<GroupTopicOverviewVO> convert2GroupTopicOverviewVOList(List<GroupMemberPO> poList, List<GroupMetrics> metricsList) {
@@ -345,32 +368,4 @@ public class GroupManagerImpl implements GroupManager {
     dto
 );
 }
-private List<GroupTopicOverviewVO> convert2GroupTopicOverviewVOList(String groupName, String state, List<GroupTopicMember> groupTopicList, List<GroupMetrics> metricsList) {
-    if (metricsList == null) {
-        metricsList = new ArrayList<>();
-    }
-    // <TopicName, GroupMetrics>
-    Map<String, GroupMetrics> metricsMap = new HashMap<>();
-    for (GroupMetrics metrics : metricsList) {
-        if (!groupName.equals(metrics.getGroup())) continue;
-        metricsMap.put(metrics.getTopic(), metrics);
-    }
-    List<GroupTopicOverviewVO> voList = new ArrayList<>();
-    for (GroupTopicMember po : groupTopicList) {
-        GroupTopicOverviewVO vo = ConvertUtil.obj2Obj(po, GroupTopicOverviewVO.class);
-        vo.setGroupName(groupName);
-        vo.setState(state);
-        GroupMetrics metrics = metricsMap.get(po.getTopicName());
-        if (metrics != null) {
-            vo.setMaxLag(ConvertUtil.Float2Long(metrics.getMetrics().get(GroupMetricVersionItems.GROUP_METRIC_LAG)));
-        }
-        voList.add(vo);
-    }
-    return voList;
-}
 }

View File

@@ -22,7 +22,7 @@ import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
 import com.xiaojukeji.know.streaming.km.core.service.reassign.ReassignService;
 import com.xiaojukeji.know.streaming.km.core.service.topic.TopicMetricService;
 import com.xiaojukeji.know.streaming.km.core.service.topic.TopicService;
-import com.xiaojukeji.know.streaming.km.core.service.version.metrics.TopicMetricVersionItems;
+import com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.TopicMetricVersionItems;
 import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.stereotype.Component;

View File

@@ -10,14 +10,18 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.param.topic.TopicCreateParam;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.param.topic.TopicParam;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.param.topic.TopicPartitionExpandParam;
+import com.xiaojukeji.know.streaming.km.common.bean.entity.partition.Partition;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.result.ResultStatus;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.topic.Topic;
 import com.xiaojukeji.know.streaming.km.common.constant.MsgConstant;
+import com.xiaojukeji.know.streaming.km.common.utils.BackoffUtils;
+import com.xiaojukeji.know.streaming.km.common.utils.FutureUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
 import com.xiaojukeji.know.streaming.km.common.utils.kafka.KafkaReplicaAssignUtil;
 import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerService;
 import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
+import com.xiaojukeji.know.streaming.km.core.service.partition.PartitionService;
 import com.xiaojukeji.know.streaming.km.core.service.topic.OpTopicService;
 import com.xiaojukeji.know.streaming.km.core.service.topic.TopicService;
 import kafka.admin.AdminUtils;
@@ -52,6 +56,9 @@ public class OpTopicManagerImpl implements OpTopicManager {
 @Autowired
 private ClusterPhyService clusterPhyService;
+@Autowired
+private PartitionService partitionService;
 @Override
 public Result<Void> createTopic(TopicCreateDTO dto, String operator) {
     log.info("method=createTopic||param={}||operator={}.", dto, operator);
@@ -80,7 +87,7 @@ public class OpTopicManagerImpl implements OpTopicManager {
 );
 // Create the Topic
-return opTopicService.createTopic(
+Result<Void> createTopicRes = opTopicService.createTopic(
     new TopicCreateParam(
         dto.getClusterId(),
         dto.getTopicName(),
@@ -90,6 +97,21 @@ public class OpTopicManagerImpl implements OpTopicManager {
     ),
     operator
 );
+if (createTopicRes.successful()){
+    try {
+        FutureUtil.quickStartupFutureUtil.submitTask(() -> {
+            BackoffUtils.backoff(3000);
+            Result<List<Partition>> partitionsResult = partitionService.listPartitionsFromKafka(clusterPhy, dto.getTopicName());
+            if (partitionsResult.successful()){
+                partitionService.updatePartitions(clusterPhy.getId(), dto.getTopicName(), partitionsResult.getData(), new ArrayList<>());
+            }
+        });
+    } catch (Exception e) {
+        log.error("method=createTopic||param={}||operator={}||msg=add partition to db failed||errMsg=exception", dto, operator, e);
+        return Result.buildFromRSAndMsg(ResultStatus.MYSQL_OPERATE_FAILED, "Topic created successfully, but recording partitions to the DB failed; the scheduled task will sync partition info later");
+    }
+}
+return createTopicRes;
 }
 @Override

View File

@@ -16,7 +16,7 @@ import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
 import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerConfigService;
 import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerService;
 import com.xiaojukeji.know.streaming.km.core.service.topic.TopicConfigService;
-import com.xiaojukeji.know.streaming.km.core.service.version.BaseVersionControlService;
+import com.xiaojukeji.know.streaming.km.core.service.version.BaseKafkaVersionControlService;
 import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.stereotype.Component;
@@ -27,7 +27,7 @@ import java.util.stream.Collectors;
 import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionEnum.*;
 @Component
-public class TopicConfigManagerImpl extends BaseVersionControlService implements TopicConfigManager {
+public class TopicConfigManagerImpl extends BaseKafkaVersionControlService implements TopicConfigManager {
     private static final ILog log = LogFactory.getLog(TopicConfigManagerImpl.class);
     private static final String GET_DEFAULT_TOPIC_CONFIG = "getDefaultTopicConfig";

View File

@@ -10,6 +10,7 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.broker.Broker;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.PartitionMetrics;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.TopicMetrics;
+import com.xiaojukeji.know.streaming.km.common.bean.entity.offset.KSOffsetSpec;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.partition.Partition;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
@@ -43,10 +44,9 @@ import com.xiaojukeji.know.streaming.km.core.service.partition.PartitionService;
 import com.xiaojukeji.know.streaming.km.core.service.topic.TopicConfigService;
 import com.xiaojukeji.know.streaming.km.core.service.topic.TopicMetricService;
 import com.xiaojukeji.know.streaming.km.core.service.topic.TopicService;
-import com.xiaojukeji.know.streaming.km.core.service.version.metrics.TopicMetricVersionItems;
+import com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.TopicMetricVersionItems;
 import org.apache.commons.lang3.ObjectUtils;
 import org.apache.commons.lang3.StringUtils;
-import org.apache.kafka.clients.admin.OffsetSpec;
 import org.apache.kafka.clients.consumer.*;
 import org.apache.kafka.common.TopicPartition;
 import org.apache.kafka.common.config.TopicConfig;
@@ -143,12 +143,12 @@ public class TopicStateManagerImpl implements TopicStateManager {
 }
 // Get the partition beginOffsets
-Result<Map<TopicPartition, Long>> beginOffsetsMapResult = partitionService.getPartitionOffsetFromKafka(clusterPhyId, topicName, dto.getFilterPartitionId(), OffsetSpec.earliest(), null);
+Result<Map<TopicPartition, Long>> beginOffsetsMapResult = partitionService.getPartitionOffsetFromKafka(clusterPhyId, topicName, dto.getFilterPartitionId(), KSOffsetSpec.earliest());
 if (beginOffsetsMapResult.failed()) {
     return Result.buildFromIgnoreData(beginOffsetsMapResult);
 }
 // Get the partition endOffsets
-Result<Map<TopicPartition, Long>> endOffsetsMapResult = partitionService.getPartitionOffsetFromKafka(clusterPhyId, topicName, dto.getFilterPartitionId(), OffsetSpec.latest(), null);
+Result<Map<TopicPartition, Long>> endOffsetsMapResult = partitionService.getPartitionOffsetFromKafka(clusterPhyId, topicName, dto.getFilterPartitionId(), KSOffsetSpec.latest());
 if (endOffsetsMapResult.failed()) {
     return Result.buildFromIgnoreData(endOffsetsMapResult);
 }
@@ -307,7 +307,7 @@ public class TopicStateManagerImpl implements TopicStateManager {
         if (metricsResult.failed()) {
             // only log the error; do not return a failure directly
             log.error(
-                    "class=TopicStateManagerImpl||method=getTopicPartitions||clusterPhyId={}||topicName={}||result={}||msg=get metrics from es failed",
+                    "method=getTopicPartitions||clusterPhyId={}||topicName={}||result={}||msg=get metrics from es failed",
                     clusterPhyId, topicName, metricsResult
             );
         }
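The call sites above drop Kafka's OffsetSpec plus a trailing null timestamp argument in favor of a single KSOffsetSpec parameter. A minimal sketch of such a wrapper, assuming it only folds the old (OffsetSpec, timestamp) pair into one small hierarchy; everything here other than the earliest()/latest() factories is illustrative, not the project's actual API:

// Hypothetical sketch of a KSOffsetSpec-style wrapper. The timestamp variant
// replaces the nullable Long argument the old signature carried separately.
public abstract class KSOffsetSpec {
    public static KSOffsetSpec earliest() { return new KSEarliestSpec(); }
    public static KSOffsetSpec latest()   { return new KSLatestSpec(); }
    public static KSOffsetSpec forTimestamp(long ts) { return new KSTimestampSpec(ts); }
    public static class KSEarliestSpec extends KSOffsetSpec { }
    public static class KSLatestSpec   extends KSOffsetSpec { }
    public static class KSTimestampSpec extends KSOffsetSpec {
        private final long timestamp;
        KSTimestampSpec(long timestamp) { this.timestamp = timestamp; }
        public long timestamp() { return timestamp; }
    }
}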

View File

@@ -20,7 +20,7 @@ public interface VersionControlManager {
      * get all Kafka versions supported by the current KS
      * @return
      */
-    Result<Map<String, Long>> listAllVersions();
+    Result<Map<String, Long>> listAllKafkaVersions();
     /**
      * get all metrics of type "type" in cluster "clusterId", whether supported or not
@@ -28,7 +28,7 @@ public interface VersionControlManager {
      * @param type
      * @return
      */
-    Result<List<VersionItemVO>> listClusterVersionControlItem(Long clusterId, Integer type);
+    Result<List<VersionItemVO>> listKafkaClusterVersionControlItem(Long clusterId, Integer type);
     /**
      * get the metric display configuration set by the current user

View File

@@ -17,6 +17,7 @@ import com.xiaojukeji.know.streaming.km.common.bean.vo.version.VersionItemVO;
 import com.xiaojukeji.know.streaming.km.common.enums.version.VersionEnum;
 import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.VersionUtil;
+import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
 import com.xiaojukeji.know.streaming.km.core.service.version.VersionControlService;
 import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.stereotype.Service;
@@ -29,10 +30,12 @@ import java.util.stream.Collectors;
 import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionEnum.V_MAX;
 import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum.*;
-import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.BrokerMetricVersionItems.*;
-import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.ClusterMetricVersionItems.*;
-import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.GroupMetricVersionItems.*;
-import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.TopicMetricVersionItems.*;
+import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.BrokerMetricVersionItems.*;
+import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.ClusterMetricVersionItems.*;
+import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.GroupMetricVersionItems.*;
+import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.TopicMetricVersionItems.*;
+import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.connect.MirrorMakerMetricVersionItems.*;
+import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.ZookeeperMetricVersionItems.*;
 @Service
 public class VersionControlManagerImpl implements VersionControlManager {
@@ -47,7 +50,8 @@ public class VersionControlManagerImpl implements VersionControlManager {
     @PostConstruct
     public void init(){
-        defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_HEALTH_SCORE, true));
+        // topic
+        defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_HEALTH_STATE, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_FAILED_FETCH_REQ, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_FAILED_PRODUCE_REQ, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_UNDER_REPLICA_PARTITIONS, true));
@@ -57,7 +61,8 @@ public class VersionControlManagerImpl implements VersionControlManager {
         defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_BYTES_REJECTED, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_MESSAGE_IN, true));
-        defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_HEALTH_SCORE, true));
+        // cluster
+        defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_HEALTH_STATE, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_ACTIVE_CONTROLLER_COUNT, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_BYTES_IN, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_BYTES_OUT, true));
@@ -72,12 +77,14 @@ public class VersionControlManagerImpl implements VersionControlManager {
         defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_GROUP_REBALANCES, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_JOB_RUNNING, true));
+        // group
         defaultMetrics.add(new UserMetricConfig(METRIC_GROUP.getCode(), GROUP_METRIC_OFFSET_CONSUMED, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_GROUP.getCode(), GROUP_METRIC_LAG, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_GROUP.getCode(), GROUP_METRIC_STATE, true));
-        defaultMetrics.add(new UserMetricConfig(METRIC_GROUP.getCode(), GROUP_METRIC_HEALTH_SCORE, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_GROUP.getCode(), GROUP_METRIC_HEALTH_STATE, true));
-        defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_HEALTH_SCORE, true));
+        // broker
+        defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_HEALTH_STATE, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_CONNECTION_COUNT, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_MESSAGE_IN, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_NETWORK_RPO_AVG_IDLE, true));
@@ -90,8 +97,37 @@ public class VersionControlManagerImpl implements VersionControlManager {
         defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_PARTITIONS_SKEW, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_BYTES_IN, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_BYTES_OUT, true));
+        // zookeeper
+        defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_HEALTH_STATE, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_HEALTH_CHECK_PASSED, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_HEALTH_CHECK_TOTAL, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_MAX_REQUEST_LATENCY, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_OUTSTANDING_REQUESTS, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_NODE_COUNT, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_WATCH_COUNT, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_NUM_ALIVE_CONNECTIONS, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_PACKETS_RECEIVED, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_PACKETS_SENT, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_EPHEMERALS_COUNT, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_APPROXIMATE_DATA_SIZE, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_OPEN_FILE_DESCRIPTOR_COUNT, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_KAFKA_ZK_DISCONNECTS_PER_SEC, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_KAFKA_ZK_SYNC_CONNECTS_PER_SEC, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_KAFKA_ZK_REQUEST_LATENCY_99TH, true));
+        // mm2
+        defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_MIRROR_MAKER.getCode(), MIRROR_MAKER_METRIC_BYTE_COUNT, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_MIRROR_MAKER.getCode(), MIRROR_MAKER_METRIC_BYTE_RATE, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_MIRROR_MAKER.getCode(), MIRROR_MAKER_METRIC_RECORD_AGE_MS_MAX, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_MIRROR_MAKER.getCode(), MIRROR_MAKER_METRIC_RECORD_COUNT, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_MIRROR_MAKER.getCode(), MIRROR_MAKER_METRIC_RECORD_RATE, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_MIRROR_MAKER.getCode(), MIRROR_MAKER_METRIC_REPLICATION_LATENCY_MS_MAX, true));
     }
+    @Autowired
+    private ClusterPhyService clusterPhyService;
     @Autowired
     private VersionControlService versionControlService;
@@ -107,7 +143,13 @@ public class VersionControlManagerImpl implements VersionControlManager {
         allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_BROKER.getCode()), VersionItemVO.class));
         allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_PARTITION.getCode()), VersionItemVO.class));
         allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_REPLICATION.getCode()), VersionItemVO.class));
         allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_ZOOKEEPER.getCode()), VersionItemVO.class));
+        allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_CONNECT_CLUSTER.getCode()), VersionItemVO.class));
+        allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_CONNECT_CONNECTOR.getCode()), VersionItemVO.class));
+        allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_CONNECT_MIRROR_MAKER.getCode()), VersionItemVO.class));
         allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(WEB_OP.getCode()), VersionItemVO.class));
         Map<String, VersionItemVO> map = allVersionItemVO.stream().collect(
@@ -121,18 +163,20 @@ public class VersionControlManagerImpl implements VersionControlManager {
     }
     @Override
-    public Result<Map<String, Long>> listAllVersions() {
+    public Result<Map<String, Long>> listAllKafkaVersions() {
         return Result.buildSuc(VersionEnum.allVersionsWithOutMax());
     }
     @Override
-    public Result<List<VersionItemVO>> listClusterVersionControlItem(Long clusterId, Integer type) {
+    public Result<List<VersionItemVO>> listKafkaClusterVersionControlItem(Long clusterId, Integer type) {
         List<VersionControlItem> allItem = versionControlService.listVersionControlItem(type);
         List<VersionItemVO> versionItemVOS = new ArrayList<>();
+        String versionStr = clusterPhyService.getVersionFromCacheFirst(clusterId);
         for (VersionControlItem item : allItem){
             VersionItemVO itemVO = ConvertUtil.obj2Obj(item, VersionItemVO.class);
-            boolean support = versionControlService.isClusterSupport(clusterId, item);
+            boolean support = versionControlService.isClusterSupport(versionStr, item);
             itemVO.setSupport(support);
             itemVO.setDesc(itemSupportDesc(item, support));
@@ -145,7 +189,7 @@ public class VersionControlManagerImpl implements VersionControlManager {
     @Override
     public Result<List<UserMetricConfigVO>> listUserMetricItem(Long clusterId, Integer type, String operator) {
-        Result<List<VersionItemVO>> ret = listClusterVersionControlItem(clusterId, type);
+        Result<List<VersionItemVO>> ret = listKafkaClusterVersionControlItem(clusterId, type);
         if(null == ret || ret.failed()){
             return Result.buildFail();
         }
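The rewritten method resolves the cluster's version string once, via getVersionFromCacheFirst, and hands it to isClusterSupport instead of the cluster id, so the support check no longer needs a per-item lookup. A sketch of the kind of range check this implies, assuming plain numeric versions and a [min, max) range per item; the normalization scheme and all names here are assumptions, not the project's code:

// Hypothetical version-gating helper. Versions with suffixes such as
// "-SNAPSHOT" are out of scope for this sketch.
public final class VersionGateSketch {
    // Normalize "major.minor.patch" into a comparable long, e.g. "2.5.1" -> 2_005_001.
    public static long normalize(String version) {
        String[] parts = version.split("\\.");
        long major = Long.parseLong(parts[0]);
        long minor = parts.length > 1 ? Long.parseLong(parts[1]) : 0L;
        long patch = parts.length > 2 ? Long.parseLong(parts[2]) : 0L;
        return major * 1_000_000L + minor * 1_000L + patch;
    }
    // An item is supported when the cluster version falls inside [min, max).
    public static boolean isSupported(String clusterVersion, long minVersion, long maxVersion) {
        long v = normalize(clusterVersion);
        return v >= minVersion && v < maxVersion;
    }
}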

View File

@@ -1,7 +1,6 @@
 package com.xiaojukeji.know.streaming.km.collector.metric;
 import com.xiaojukeji.know.streaming.km.collector.service.CollectThreadPoolService;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
 import com.xiaojukeji.know.streaming.km.common.bean.event.metric.BaseMetricEvent;
 import com.xiaojukeji.know.streaming.km.common.component.SpringTool;
 import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
@@ -9,17 +8,20 @@ import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
 import org.springframework.beans.factory.annotation.Autowired;
 /**
  * @author didi
  */
-public abstract class AbstractMetricCollector<T> {
-    public abstract void collectMetrics(ClusterPhy clusterPhy);
+public abstract class AbstractMetricCollector<M, C> {
+    public abstract String getClusterVersion(C c);
     public abstract VersionItemTypeEnum collectorType();
     @Autowired
     private CollectThreadPoolService collectThreadPoolService;
+    public abstract void collectMetrics(C c);
     protected FutureWaitUtil<Void> getFutureUtilByClusterPhyId(Long clusterPhyId) {
         return collectThreadPoolService.selectSuitableFutureUtil(clusterPhyId * 1000L + this.collectorType().getCode());
    }
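The base collector is now generic over both the metric bean M and the cluster type C, which lets Kafka collectors (C = ClusterPhy) and Connect collectors (C = ConnectCluster) share one contract. A stripped-down sketch of that shape, with stand-in types in place of the real beans:

// Stand-in cluster types; the real ones are ClusterPhy and ConnectCluster.
class KafkaClusterStub   { Long id; String version; }
class ConnectClusterStub { Long id; String version; }

abstract class CollectorSketch<M, C> {
    // each cluster family resolves its version its own way
    abstract String getClusterVersion(C cluster);
    // each concrete collector gathers its own metric beans
    abstract java.util.List<M> collect(C cluster);
}

class BrokerCollectorSketch extends CollectorSketch<String, KafkaClusterStub> {
    @Override String getClusterVersion(KafkaClusterStub c) { return c.version; }
    @Override java.util.List<String> collect(KafkaClusterStub c) {
        return java.util.Collections.singletonList("broker-metrics-of-cluster-" + c.id);
    }
}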

View File

@@ -1,124 +0,0 @@
package com.xiaojukeji.know.streaming.km.collector.metric;
import com.alibaba.fastjson.JSON;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.ReplicationMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.partition.Partition;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionControlItem;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.ReplicaMetricEvent;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
import com.xiaojukeji.know.streaming.km.common.utils.EnvUtil;
import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
import com.xiaojukeji.know.streaming.km.core.service.partition.PartitionService;
import com.xiaojukeji.know.streaming.km.core.service.replica.ReplicaMetricService;
import com.xiaojukeji.know.streaming.km.core.service.version.VersionControlService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import java.util.ArrayList;
import java.util.List;
import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum.METRIC_REPLICATION;
/**
* @author didi
*/
@Component
public class ReplicaMetricCollector extends AbstractMetricCollector<ReplicationMetrics> {
protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
@Autowired
private VersionControlService versionControlService;
@Autowired
private ReplicaMetricService replicaMetricService;
@Autowired
private PartitionService partitionService;
@Override
public void collectMetrics(ClusterPhy clusterPhy) {
Long startTime = System.currentTimeMillis();
Long clusterPhyId = clusterPhy.getId();
List<VersionControlItem> items = versionControlService.listVersionControlItem(clusterPhyId, collectorType().getCode());
List<Partition> partitions = partitionService.listPartitionByCluster(clusterPhyId);
FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(clusterPhyId);
List<ReplicationMetrics> metricsList = new ArrayList<>();
for(Partition partition : partitions) {
for (Integer brokerId: partition.getAssignReplicaList()) {
ReplicationMetrics metrics = new ReplicationMetrics(clusterPhyId, partition.getTopicName(), brokerId, partition.getPartitionId());
metricsList.add(metrics);
future.runnableTask(
String.format("method=ReplicaMetricCollector||clusterPhyId=%d||brokerId=%d||topicName=%s||partitionId=%d",
clusterPhyId, brokerId, partition.getTopicName(), partition.getPartitionId()),
30000,
() -> collectMetrics(clusterPhyId, metrics, items)
);
}
}
future.waitExecute(30000);
publishMetric(new ReplicaMetricEvent(this, metricsList));
LOGGER.info("method=ReplicaMetricCollector||clusterPhyId={}||startTime={}||costTime={}||msg=collect finished.",
clusterPhyId, startTime, System.currentTimeMillis() - startTime);
}
@Override
public VersionItemTypeEnum collectorType() {
return METRIC_REPLICATION;
}
/**************************************************** private method ****************************************************/
private ReplicationMetrics collectMetrics(Long clusterPhyId, ReplicationMetrics metrics, List<VersionControlItem> items) {
long startTime = System.currentTimeMillis();
metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
for(VersionControlItem v : items) {
try {
if (metrics.getMetrics().containsKey(v.getName())) {
continue;
}
Result<ReplicationMetrics> ret = replicaMetricService.collectReplicaMetricsFromKafka(
clusterPhyId,
metrics.getTopic(),
metrics.getBrokerId(),
metrics.getPartitionId(),
v.getName()
);
if (null == ret || ret.failed() || null == ret.getData()) {
continue;
}
metrics.putMetric(ret.getData().getMetrics());
if (!EnvUtil.isOnline()) {
LOGGER.info("method=ReplicaMetricCollector||clusterPhyId={}||topicName={}||partitionId={}||metricName={}||metricValue={}",
clusterPhyId, metrics.getTopic(), metrics.getPartitionId(), v.getName(), JSON.toJSONString(ret.getData().getMetrics()));
}
} catch (Exception e) {
LOGGER.error("method=ReplicaMetricCollector||clusterPhyId={}||topicName={}||partition={}||metricName={}||errMsg=exception!",
clusterPhyId, metrics.getTopic(), metrics.getPartitionId(), v.getName(), e);
}
}
// record collection elapsed time
metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (System.currentTimeMillis() - startTime) / 1000.0f);
return metrics;
}
}

View File

@@ -0,0 +1,50 @@
package com.xiaojukeji.know.streaming.km.collector.metric.connect;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.collector.metric.AbstractMetricCollector;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.ConnectCluster;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.LoggerUtil;
import com.xiaojukeji.know.streaming.km.core.service.connect.cluster.ConnectClusterService;
import org.springframework.beans.factory.annotation.Autowired;
import java.util.List;
/**
* @author didi
*/
public abstract class AbstractConnectMetricCollector<M> extends AbstractMetricCollector<M, ConnectCluster> {
private static final ILog LOGGER = LogFactory.getLog(AbstractConnectMetricCollector.class);
protected static final ILog METRIC_COLLECTED_LOGGER = LoggerUtil.getMetricCollectedLogger();
@Autowired
private ConnectClusterService connectClusterService;
public abstract List<M> collectConnectMetrics(ConnectCluster connectCluster);
@Override
public String getClusterVersion(ConnectCluster connectCluster){
return connectClusterService.getClusterVersion(connectCluster.getId());
}
@Override
public void collectMetrics(ConnectCluster connectCluster) {
long startTime = System.currentTimeMillis();
// collect the metrics
List<M> metricsList = this.collectConnectMetrics(connectCluster);
// log the time cost
LOGGER.info(
"metricType={}||connectClusterId={}||costTimeUnitMs={}",
this.collectorType().getMessage(), connectCluster.getId(), System.currentTimeMillis() - startTime
);
// log the collected metrics
METRIC_COLLECTED_LOGGER.debug("metricType={}||connectClusterId={}||metrics={}!",
this.collectorType().getMessage(), connectCluster.getId(), ConvertUtil.obj2Json(metricsList)
);
}
}
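AbstractConnectMetricCollector is a template method: the base class owns timing and logging, and subclasses supply only collectConnectMetrics. The same pattern in miniature, under assumed names:

// Minimal template-method sketch: the base measures and logs, the subclass
// hook does the actual work.
abstract class TimedCollectorSketch<M, C> {
    protected abstract java.util.List<M> doCollect(C cluster);
    public final java.util.List<M> collect(C cluster) {
        long start = System.currentTimeMillis();
        java.util.List<M> result = doCollect(cluster);   // subclass hook
        System.out.printf("costTimeUnitMs=%d%n", System.currentTimeMillis() - start);
        return result;
    }
}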

View File

@@ -0,0 +1,83 @@
package com.xiaojukeji.know.streaming.km.collector.metric.connect;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.ConnectCluster;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.connect.ConnectClusterMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionControlItem;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.connect.ConnectClusterMetricEvent;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
import com.xiaojukeji.know.streaming.km.core.service.connect.cluster.ConnectClusterMetricService;
import com.xiaojukeji.know.streaming.km.core.service.version.VersionControlService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import java.util.Collections;
import java.util.List;
import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum.METRIC_CONNECT_CLUSTER;
/**
* @author didi
*/
@Component
public class ConnectClusterMetricCollector extends AbstractConnectMetricCollector<ConnectClusterMetrics> {
protected static final ILog LOGGER = LogFactory.getLog(ConnectClusterMetricCollector.class);
@Autowired
private VersionControlService versionControlService;
@Autowired
private ConnectClusterMetricService connectClusterMetricService;
@Override
public List<ConnectClusterMetrics> collectConnectMetrics(ConnectCluster connectCluster) {
Long startTime = System.currentTimeMillis();
Long clusterPhyId = connectCluster.getKafkaClusterPhyId();
Long connectClusterId = connectCluster.getId();
ConnectClusterMetrics metrics = new ConnectClusterMetrics(clusterPhyId, connectClusterId);
metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
List<VersionControlItem> items = versionControlService.listVersionControlItem(getClusterVersion(connectCluster), collectorType().getCode());
FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(connectClusterId);
for (VersionControlItem item : items) {
future.runnableTask(
String.format("class=ConnectClusterMetricCollector||connectClusterId=%d||metricName=%s", connectClusterId, item.getName()),
30000,
() -> {
try {
Result<ConnectClusterMetrics> ret = connectClusterMetricService.collectConnectClusterMetricsFromKafka(connectClusterId, item.getName());
if (null == ret || !ret.hasData()) {
return null;
}
metrics.putMetric(ret.getData().getMetrics());
} catch (Exception e) {
LOGGER.error(
"method=collectConnectMetrics||connectClusterId={}||metricName={}||errMsg=exception!",
connectClusterId, item.getName(), e
);
}
return null;
}
);
}
future.waitExecute(30000);
metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (System.currentTimeMillis() - startTime) / 1000.0f);
this.publishMetric(new ConnectClusterMetricEvent(this, Collections.singletonList(metrics)));
return Collections.singletonList(metrics);
}
@Override
public VersionItemTypeEnum collectorType() {
return METRIC_CONNECT_CLUSTER;
}
}
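The collector fans out one task per metric item through FutureWaitUtil, bounds the whole batch with a 30-second wait, and seeds the cost metric with an error sentinel before starting. FutureWaitUtil's internals are not part of this diff; a rough equivalent of the fan-out using a plain ExecutorService, with the metric fetch stubbed out:

import java.util.List;
import java.util.Map;
import java.util.concurrent.*;

class FanOutSketch {
    static Map<String, Float> collect(List<String> metricNames) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        Map<String, Float> metrics = new ConcurrentHashMap<>();
        metrics.put("CollectCostTime", -1.0f);            // sentinel, overwritten on success
        long start = System.currentTimeMillis();
        for (String name : metricNames) {
            pool.submit(() -> metrics.put(name, 1.0f));   // stand-in for the real fetch
        }
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);      // analogous to waitExecute(30000)
        metrics.put("CollectCostTime", (System.currentTimeMillis() - start) / 1000.0f);
        return metrics;
    }
}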

View File

@@ -0,0 +1,107 @@
package com.xiaojukeji.know.streaming.km.collector.metric.connect;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.ConnectCluster;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.connect.ConnectorMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionControlItem;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.connect.ConnectorMetricEvent;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.enums.connect.ConnectorTypeEnum;
import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
import com.xiaojukeji.know.streaming.km.core.service.connect.connector.ConnectorMetricService;
import com.xiaojukeji.know.streaming.km.core.service.connect.connector.ConnectorService;
import com.xiaojukeji.know.streaming.km.core.service.version.VersionControlService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import java.util.ArrayList;
import java.util.List;
import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum.METRIC_CONNECT_CONNECTOR;
/**
* @author didi
*/
@Component
public class ConnectConnectorMetricCollector extends AbstractConnectMetricCollector<ConnectorMetrics> {
protected static final ILog LOGGER = LogFactory.getLog(ConnectConnectorMetricCollector.class);
@Autowired
private VersionControlService versionControlService;
@Autowired
private ConnectorService connectorService;
@Autowired
private ConnectorMetricService connectorMetricService;
@Override
public List<ConnectorMetrics> collectConnectMetrics(ConnectCluster connectCluster) {
Long clusterPhyId = connectCluster.getKafkaClusterPhyId();
Long connectClusterId = connectCluster.getId();
List<VersionControlItem> items = versionControlService.listVersionControlItem(this.getClusterVersion(connectCluster), collectorType().getCode());
Result<List<String>> connectorList = connectorService.listConnectorsFromCluster(connectClusterId);
FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(connectClusterId);
List<ConnectorMetrics> metricsList = new ArrayList<>();
for (String connectorName : connectorList.getData()) {
ConnectorMetrics metrics = new ConnectorMetrics(connectClusterId, connectorName);
metrics.setClusterPhyId(clusterPhyId);
metricsList.add(metrics);
future.runnableTask(
String.format("class=ConnectConnectorMetricCollector||connectClusterId=%d||connectorName=%s", connectClusterId, connectorName),
30000,
() -> collectMetrics(connectClusterId, connectorName, metrics, items)
);
}
future.waitResult(30000);
this.publishMetric(new ConnectorMetricEvent(this, metricsList));
return metricsList;
}
@Override
public VersionItemTypeEnum collectorType() {
return METRIC_CONNECT_CONNECTOR;
}
/**************************************************** private method ****************************************************/
private void collectMetrics(Long connectClusterId, String connectorName, ConnectorMetrics metrics, List<VersionControlItem> items) {
long startTime = System.currentTimeMillis();
ConnectorTypeEnum connectorType = connectorService.getConnectorType(connectClusterId, connectorName);
metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
for (VersionControlItem v : items) {
try {
// skip metrics that were already collected
if (metrics.getMetrics().get(v.getName()) != null) {
continue;
}
Result<ConnectorMetrics> ret = connectorMetricService.collectConnectClusterMetricsFromKafka(connectClusterId, connectorName, v.getName(), connectorType);
if (null == ret || ret.failed() || null == ret.getData()) {
continue;
}
metrics.putMetric(ret.getData().getMetrics());
} catch (Exception e) {
LOGGER.error(
"method=collectMetrics||connectClusterId={}||connectorName={}||metric={}||errMsg=exception!",
connectClusterId, connectorName, v.getName(), e
);
}
}
// record collection elapsed time
metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (System.currentTimeMillis() - startTime) / 1000.0f);
}
}

View File

@@ -0,0 +1,117 @@
package com.xiaojukeji.know.streaming.km.collector.metric.connect.mm2;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.collector.metric.connect.AbstractConnectMetricCollector;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.ConnectCluster;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.mm2.MirrorMakerTopic;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.mm2.MirrorMakerMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionControlItem;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.mm2.MirrorMakerMetricEvent;
import com.xiaojukeji.know.streaming.km.common.bean.po.connect.ConnectorPO;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
import com.xiaojukeji.know.streaming.km.core.service.connect.connector.ConnectorService;
import com.xiaojukeji.know.streaming.km.core.service.connect.mm2.MirrorMakerMetricService;
import com.xiaojukeji.know.streaming.km.core.service.connect.mm2.MirrorMakerService;
import com.xiaojukeji.know.streaming.km.core.service.version.VersionControlService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import static com.xiaojukeji.know.streaming.km.common.constant.connect.KafkaConnectConstant.MIRROR_MAKER_SOURCE_CONNECTOR_TYPE;
import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum.METRIC_CONNECT_MIRROR_MAKER;
/**
* @author wyb
* @date 2022/12/15
*/
@Component
public class MirrorMakerMetricCollector extends AbstractConnectMetricCollector<MirrorMakerMetrics> {
protected static final ILog LOGGER = LogFactory.getLog(MirrorMakerMetricCollector.class);
@Autowired
private VersionControlService versionControlService;
@Autowired
private MirrorMakerService mirrorMakerService;
@Autowired
private ConnectorService connectorService;
@Autowired
private MirrorMakerMetricService mirrorMakerMetricService;
@Override
public VersionItemTypeEnum collectorType() {
return METRIC_CONNECT_MIRROR_MAKER;
}
@Override
public List<MirrorMakerMetrics> collectConnectMetrics(ConnectCluster connectCluster) {
Long clusterPhyId = connectCluster.getKafkaClusterPhyId();
Long connectClusterId = connectCluster.getId();
List<ConnectorPO> mirrorMakerList = connectorService.listByConnectClusterIdFromDB(connectClusterId).stream().filter(elem -> elem.getConnectorClassName().equals(MIRROR_MAKER_SOURCE_CONNECTOR_TYPE)).collect(Collectors.toList());
Map<String, MirrorMakerTopic> mirrorMakerTopicMap = mirrorMakerService.getMirrorMakerTopicMap(connectClusterId).getData();
List<VersionControlItem> items = versionControlService.listVersionControlItem(this.getClusterVersion(connectCluster), collectorType().getCode());
FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(clusterPhyId);
List<MirrorMakerMetrics> metricsList = new ArrayList<>();
for (ConnectorPO mirrorMaker : mirrorMakerList) {
MirrorMakerMetrics metrics = new MirrorMakerMetrics(clusterPhyId, connectClusterId, mirrorMaker.getConnectorName());
metricsList.add(metrics);
List<MirrorMakerTopic> mirrorMakerTopicList = mirrorMakerService.getMirrorMakerTopicList(mirrorMaker, mirrorMakerTopicMap);
future.runnableTask(String.format("class=MirrorMakerMetricCollector||connectClusterId=%d||mirrorMakerName=%s", connectClusterId, mirrorMaker.getConnectorName()),
30000,
() -> collectMetrics(connectClusterId, mirrorMaker.getConnectorName(), metrics, items, mirrorMakerTopicList));
}
future.waitResult(30000);
this.publishMetric(new MirrorMakerMetricEvent(this,metricsList));
return metricsList;
}
/**************************************************** private method ****************************************************/
private void collectMetrics(Long connectClusterId, String mirrorMakerName, MirrorMakerMetrics metrics, List<VersionControlItem> items, List<MirrorMakerTopic> mirrorMakerTopicList) {
long startTime = System.currentTimeMillis();
metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
for (VersionControlItem v : items) {
try {
// skip metrics that were already collected
if (metrics.getMetrics().get(v.getName()) != null) {
continue;
}
Result<MirrorMakerMetrics> ret = mirrorMakerMetricService.collectMirrorMakerMetricsFromKafka(connectClusterId, mirrorMakerName, mirrorMakerTopicList, v.getName());
if (ret == null || !ret.hasData()) {
continue;
}
metrics.putMetric(ret.getData().getMetrics());
} catch (Exception e) {
LOGGER.error(
"method=collectMetrics||connectClusterId={}||mirrorMakerName={}||metric={}||errMsg=exception!",
connectClusterId, mirrorMakerName, v.getName(), e
);
}
}
metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (System.currentTimeMillis() - startTime) / 1000.0f);
}
}
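MirrorMaker 2 tasks are discovered by filtering the Connect cluster's connectors on the MirrorMaker source connector class name. The selection step in isolation; note that putting the constant first in equals() also tolerates a null class name, which the element-first call in the stream above would not:

import java.util.List;
import java.util.stream.Collectors;

class Mm2FilterSketch {
    // FQCN of Kafka's MirrorMaker 2 source connector
    static final String MIRROR_SOURCE = "org.apache.kafka.connect.mirror.MirrorSourceConnector";

    static List<String> selectMm2(List<String> connectorClassNames) {
        // constant-first equals() never throws on a null element
        return connectorClassNames.stream()
                .filter(MIRROR_SOURCE::equals)
                .collect(Collectors.toList());
    }
}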

View File

@@ -0,0 +1,50 @@
package com.xiaojukeji.know.streaming.km.collector.metric.kafka;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.collector.metric.AbstractMetricCollector;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.LoggerUtil;
import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
import org.springframework.beans.factory.annotation.Autowired;
import java.util.List;
/**
* @author didi
*/
public abstract class AbstractKafkaMetricCollector<M> extends AbstractMetricCollector<M, ClusterPhy> {
private static final ILog LOGGER = LogFactory.getLog(AbstractMetricCollector.class);
protected static final ILog METRIC_COLLECTED_LOGGER = LoggerUtil.getMetricCollectedLogger();
@Autowired
private ClusterPhyService clusterPhyService;
public abstract List<M> collectKafkaMetrics(ClusterPhy clusterPhy);
@Override
public String getClusterVersion(ClusterPhy clusterPhy){
return clusterPhyService.getVersionFromCacheFirst(clusterPhy.getId());
}
@Override
public void collectMetrics(ClusterPhy clusterPhy) {
long startTime = System.currentTimeMillis();
// collect the metrics
List<M> metricsList = this.collectKafkaMetrics(clusterPhy);
// log the time cost
LOGGER.info(
"metricType={}||clusterPhyId={}||costTimeUnitMs={}",
this.collectorType().getMessage(), clusterPhy.getId(), System.currentTimeMillis() - startTime
);
// log the collected metrics
METRIC_COLLECTED_LOGGER.debug("metricType={}||clusterPhyId={}||metrics={}!",
this.collectorType().getMessage(), clusterPhy.getId(), ConvertUtil.obj2Json(metricsList)
);
}
}
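The Kafka-side base class resolves a cluster's version with getVersionFromCacheFirst. That service sits outside this diff; a sketch of the assumed lookup order, in-memory cache first and the database only on a miss:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class VersionCacheSketch {
    private final Map<Long, String> cache = new ConcurrentHashMap<>();

    String getVersionFromCacheFirst(long clusterPhyId) {
        // fall through to the slow path only on a cache miss
        return cache.computeIfAbsent(clusterPhyId, this::loadVersionFromDB);
    }

    private String loadVersionFromDB(long clusterPhyId) {
        return "2.5.1";   // placeholder for the real DB lookup
    }
}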

View File

@@ -1,6 +1,5 @@
-package com.xiaojukeji.know.streaming.km.collector.metric;
+package com.xiaojukeji.know.streaming.km.collector.metric.kafka;
-import com.alibaba.fastjson.JSON;
 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.broker.Broker;
@@ -11,7 +10,6 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionContro
 import com.xiaojukeji.know.streaming.km.common.bean.event.metric.BrokerMetricEvent;
 import com.xiaojukeji.know.streaming.km.common.constant.Constant;
 import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
-import com.xiaojukeji.know.streaming.km.common.utils.EnvUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
 import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerMetricService;
 import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerService;
@@ -28,8 +26,8 @@ import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemT
  * @author didi
  */
 @Component
-public class BrokerMetricCollector extends AbstractMetricCollector<BrokerMetrics> {
-    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
+public class BrokerMetricCollector extends AbstractKafkaMetricCollector<BrokerMetrics> {
+    private static final ILog LOGGER = LogFactory.getLog(BrokerMetricCollector.class);
     @Autowired
     private VersionControlService versionControlService;
@@ -41,32 +39,31 @@ public class BrokerMetricCollector extends AbstractMetricCollector<BrokerMetrics
     private BrokerService brokerService;
     @Override
-    public void collectMetrics(ClusterPhy clusterPhy) {
-        Long startTime = System.currentTimeMillis();
+    public List<BrokerMetrics> collectKafkaMetrics(ClusterPhy clusterPhy) {
         Long clusterPhyId = clusterPhy.getId();
         List<Broker> brokers = brokerService.listAliveBrokersFromDB(clusterPhy.getId());
-        List<VersionControlItem> items = versionControlService.listVersionControlItem(clusterPhyId, collectorType().getCode());
+        List<VersionControlItem> items = versionControlService.listVersionControlItem(this.getClusterVersion(clusterPhy), collectorType().getCode());
         FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(clusterPhyId);
-        List<BrokerMetrics> brokerMetrics = new ArrayList<>();
+        List<BrokerMetrics> metricsList = new ArrayList<>();
         for(Broker broker : brokers) {
             BrokerMetrics metrics = new BrokerMetrics(clusterPhyId, broker.getBrokerId(), broker.getHost(), broker.getPort());
-            brokerMetrics.add(metrics);
+            metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
+            metricsList.add(metrics);
             future.runnableTask(
-                    String.format("method=BrokerMetricCollector||clusterPhyId=%d||brokerId=%d", clusterPhyId, broker.getBrokerId()),
+                    String.format("class=BrokerMetricCollector||clusterPhyId=%d||brokerId=%d", clusterPhyId, broker.getBrokerId()),
                     30000,
                     () -> collectMetrics(clusterPhyId, metrics, items)
             );
         }
         future.waitExecute(30000);
-        this.publishMetric(new BrokerMetricEvent(this, brokerMetrics));
-        LOGGER.info("method=BrokerMetricCollector||clusterPhyId={}||startTime={}||costTime={}||msg=collect finished.",
-                clusterPhyId, startTime, System.currentTimeMillis() - startTime);
+        this.publishMetric(new BrokerMetricEvent(this, metricsList));
+        return metricsList;
     }
     @Override
@@ -78,7 +75,6 @@ public class BrokerMetricCollector extends AbstractMetricCollector<BrokerMetrics
     private void collectMetrics(Long clusterPhyId, BrokerMetrics metrics, List<VersionControlItem> items) {
         long startTime = System.currentTimeMillis();
-        metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
         for(VersionControlItem v : items) {
             try {
@@ -92,14 +88,11 @@ public class BrokerMetricCollector extends AbstractMetricCollector<BrokerMetrics
                 }
                 metrics.putMetric(ret.getData().getMetrics());
-                if(!EnvUtil.isOnline()){
-                    LOGGER.info("method=BrokerMetricCollector||clusterId={}||brokerId={}||metric={}||metric={}!",
-                            clusterPhyId, metrics.getBrokerId(), v.getName(), JSON.toJSONString(ret.getData().getMetrics()));
-                }
             } catch (Exception e){
-                LOGGER.error("method=BrokerMetricCollector||clusterId={}||brokerId={}||metric={}||errMsg=exception!",
-                        clusterPhyId, metrics.getBrokerId(), v.getName(), e);
+                LOGGER.error(
+                        "method=collectMetrics||clusterPhyId={}||brokerId={}||metricName={}||errMsg=exception!",
+                        clusterPhyId, metrics.getBrokerId(), v.getName(), e
+                );
             }
         }
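Note where the error-cost sentinel moved: it is now written when each BrokerMetrics bean is created, before its task is submitted, so a broker whose collection times out or throws still reports the error cost rather than nothing. In miniature, with an assumed sentinel value standing in for Constant.COLLECT_METRICS_ERROR_COST_TIME:

import java.util.HashMap;
import java.util.Map;

class SentinelSketch {
    static Map<String, Float> collectOne(boolean succeeds) {
        Map<String, Float> metrics = new HashMap<>();
        metrics.put("CollectCostTime", -1.0f);   // sentinel written up front
        long start = System.currentTimeMillis();
        if (succeeds) {
            // only a completed collection overwrites the sentinel
            metrics.put("CollectCostTime", (System.currentTimeMillis() - start) / 1000.0f);
        }
        return metrics;
    }
}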

View File

@@ -1,4 +1,4 @@
-package com.xiaojukeji.know.streaming.km.collector.metric;
+package com.xiaojukeji.know.streaming.km.collector.metric.kafka;
 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
@@ -7,18 +7,15 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.ClusterMetric
 import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionControlItem;
 import com.xiaojukeji.know.streaming.km.common.bean.event.metric.ClusterMetricEvent;
-import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.ClusterMetricPO;
 import com.xiaojukeji.know.streaming.km.common.constant.Constant;
 import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
-import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
-import com.xiaojukeji.know.streaming.km.common.utils.EnvUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
 import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterMetricService;
 import com.xiaojukeji.know.streaming.km.core.service.version.VersionControlService;
 import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.stereotype.Component;
-import java.util.Arrays;
+import java.util.Collections;
 import java.util.List;
 import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum.METRIC_CLUSTER;
@@ -27,8 +24,8 @@ import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemT
  * @author didi
 */
 @Component
-public class ClusterMetricCollector extends AbstractMetricCollector<ClusterMetricPO> {
-    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
+public class ClusterMetricCollector extends AbstractKafkaMetricCollector<ClusterMetrics> {
+    protected static final ILog LOGGER = LogFactory.getLog(ClusterMetricCollector.class);
     @Autowired
     private VersionControlService versionControlService;
@@ -37,35 +34,37 @@
     private ClusterMetricService clusterMetricService;
     @Override
-    public void collectMetrics(ClusterPhy clusterPhy) {
+    public List<ClusterMetrics> collectKafkaMetrics(ClusterPhy clusterPhy) {
         Long startTime = System.currentTimeMillis();
         Long clusterPhyId = clusterPhy.getId();
-        List<VersionControlItem> items = versionControlService.listVersionControlItem(clusterPhyId, collectorType().getCode());
+        List<VersionControlItem> items = versionControlService.listVersionControlItem(this.getClusterVersion(clusterPhy), collectorType().getCode());
         ClusterMetrics metrics = new ClusterMetrics(clusterPhyId, clusterPhy.getKafkaVersion());
+        metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
         FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(clusterPhyId);
         for(VersionControlItem v : items) {
             future.runnableTask(
-                    String.format("method=ClusterMetricCollector||clusterPhyId=%d||metricName=%s", clusterPhyId, v.getName()),
+                    String.format("class=ClusterMetricCollector||clusterPhyId=%d||metricName=%s", clusterPhyId, v.getName()),
                     30000,
                     () -> {
                         try {
-                            if(null != metrics.getMetrics().get(v.getName())){return null;}
+                            if(null != metrics.getMetrics().get(v.getName())){
+                                return null;
+                            }
                             Result<ClusterMetrics> ret = clusterMetricService.collectClusterMetricsFromKafka(clusterPhyId, v.getName());
-                            if(null == ret || ret.failed() || null == ret.getData()){return null;}
+                            if(null == ret || ret.failed() || null == ret.getData()){
+                                return null;
+                            }
                             metrics.putMetric(ret.getData().getMetrics());
-                            if(!EnvUtil.isOnline()){
-                                LOGGER.info("method=ClusterMetricCollector||clusterPhyId={}||metricName={}||metricValue={}",
-                                        clusterPhyId, v.getName(), ConvertUtil.obj2Json(ret.getData().getMetrics()));
-                            }
                         } catch (Exception e){
-                            LOGGER.error("method=ClusterMetricCollector||clusterPhyId={}||metricName={}||errMsg=exception!",
-                                    clusterPhyId, v.getName(), e);
+                            LOGGER.error(
+                                    "method=collectKafkaMetrics||clusterPhyId={}||metricName={}||errMsg=exception!",
+                                    clusterPhyId, v.getName(), e
+                            );
                         }
                         return null;
@@ -76,10 +75,9 @@ public class ClusterMetricCollector extends AbstractMetricCollector<ClusterMetri
         metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (System.currentTimeMillis() - startTime) / 1000.0f);
-        publishMetric(new ClusterMetricEvent(this, Arrays.asList(metrics)));
-        LOGGER.info("method=ClusterMetricCollector||clusterPhyId={}||startTime={}||costTime={}||msg=msg=collect finished.",
-                clusterPhyId, startTime, System.currentTimeMillis() - startTime);
+        publishMetric(new ClusterMetricEvent(this, Collections.singletonList(metrics)));
+        return Collections.singletonList(metrics);
     }
     @Override

View File

@@ -1,6 +1,5 @@
package com.xiaojukeji.know.streaming.km.collector.metric; package com.xiaojukeji.know.streaming.km.collector.metric.kafka;
import com.alibaba.fastjson.JSON;
import com.didiglobal.logi.log.ILog; import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory; import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy; import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
@@ -10,20 +9,16 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionContro
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.GroupMetricEvent; import com.xiaojukeji.know.streaming.km.common.bean.event.metric.GroupMetricEvent;
import com.xiaojukeji.know.streaming.km.common.constant.Constant; import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum; import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
import com.xiaojukeji.know.streaming.km.common.utils.EnvUtil;
import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil; import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils; import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
import com.xiaojukeji.know.streaming.km.core.service.group.GroupMetricService; import com.xiaojukeji.know.streaming.km.core.service.group.GroupMetricService;
import com.xiaojukeji.know.streaming.km.core.service.group.GroupService; import com.xiaojukeji.know.streaming.km.core.service.group.GroupService;
import com.xiaojukeji.know.streaming.km.core.service.version.VersionControlService; import com.xiaojukeji.know.streaming.km.core.service.version.VersionControlService;
import org.apache.commons.collections.CollectionUtils; import org.apache.kafka.common.TopicPartition;
 import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.stereotype.Component;
-import java.util.ArrayList;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
+import java.util.*;
 import java.util.concurrent.ConcurrentHashMap;
 import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum.METRIC_GROUP;
@@ -32,8 +27,8 @@ import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemT
  * @author didi
  */
 @Component
-public class GroupMetricCollector extends AbstractMetricCollector<List<GroupMetrics>> {
-    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
+public class GroupMetricCollector extends AbstractKafkaMetricCollector<GroupMetrics> {
+    protected static final ILog LOGGER = LogFactory.getLog(GroupMetricCollector.class);
     @Autowired
     private VersionControlService versionControlService;
@@ -45,40 +40,38 @@ public class GroupMetricCollector extends AbstractMetricCollector<List<GroupMetr
     private GroupService groupService;
     @Override
-    public void collectMetrics(ClusterPhy clusterPhy) {
-        Long startTime = System.currentTimeMillis();
+    public List<GroupMetrics> collectKafkaMetrics(ClusterPhy clusterPhy) {
         Long clusterPhyId = clusterPhy.getId();
-        List<String> groups = new ArrayList<>();
+        List<String> groupNameList = new ArrayList<>();
         try {
-            groups = groupService.listGroupsFromKafka(clusterPhyId);
+            groupNameList = groupService.listGroupsFromKafka(clusterPhy);
         } catch (Exception e) {
-            LOGGER.error("method=GroupMetricCollector||clusterPhyId={}||msg=exception!", clusterPhyId, e);
+            LOGGER.error("method=collectKafkaMetrics||clusterPhyId={}||msg=exception!", clusterPhyId, e);
         }
-        if(CollectionUtils.isEmpty(groups)){return;}
+        if(ValidateUtils.isEmptyList(groupNameList)) {
+            return Collections.emptyList();
+        }
-        List<VersionControlItem> items = versionControlService.listVersionControlItem(clusterPhyId, collectorType().getCode());
-        FutureWaitUtil<Void> future = getFutureUtilByClusterPhyId(clusterPhyId);
+        List<VersionControlItem> items = versionControlService.listVersionControlItem(this.getClusterVersion(clusterPhy), collectorType().getCode());
+        FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(clusterPhyId);
         Map<String, List<GroupMetrics>> metricsMap = new ConcurrentHashMap<>();
-        for(String groupName : groups) {
+        for(String groupName : groupNameList) {
             future.runnableTask(
-                    String.format("method=GroupMetricCollector||clusterPhyId=%d||groupName=%s", clusterPhyId, groupName),
+                    String.format("class=GroupMetricCollector||clusterPhyId=%d||groupName=%s", clusterPhyId, groupName),
                     30000,
                     () -> collectMetrics(clusterPhyId, groupName, metricsMap, items));
         }
         future.waitResult(30000);
-        List<GroupMetrics> metricsList = new ArrayList<>();
-        metricsMap.values().forEach(elem -> metricsList.addAll(elem));
+        List<GroupMetrics> metricsList = metricsMap.values().stream().collect(ArrayList::new, ArrayList::addAll, ArrayList::addAll);
         publishMetric(new GroupMetricEvent(this, metricsList));
-        LOGGER.info("method=GroupMetricCollector||clusterPhyId={}||startTime={}||cost={}||msg=collect finished.",
-                clusterPhyId, startTime, System.currentTimeMillis() - startTime);
+        return metricsList;
     }
     @Override
@@ -91,9 +84,7 @@ public class GroupMetricCollector extends AbstractMetricCollector<List<GroupMetr
     private void collectMetrics(Long clusterPhyId, String groupName, Map<String, List<GroupMetrics>> metricsMap, List<VersionControlItem> items) {
         long startTime = System.currentTimeMillis();
-        List<GroupMetrics> groupMetricsList = new ArrayList<>();
-        Map<String, GroupMetrics> tpGroupPOMap = new HashMap<>();
+        Map<TopicPartition, GroupMetrics> subMetricMap = new HashMap<>();
         GroupMetrics groupMetrics = new GroupMetrics(clusterPhyId, groupName, true);
         groupMetrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
@@ -107,38 +98,31 @@ public class GroupMetricCollector extends AbstractMetricCollector<List<GroupMetr
                 continue;
             }
-            ret.getData().stream().forEach(metrics -> {
+            ret.getData().forEach(metrics -> {
                 if (metrics.isBGroupMetric()) {
                     groupMetrics.putMetric(metrics.getMetrics());
-                } else {
-                    String topicName = metrics.getTopic();
-                    Integer partitionId = metrics.getPartitionId();
-                    String tpGroupKey = genTopicPartitionGroupKey(topicName, partitionId);
-                    tpGroupPOMap.putIfAbsent(tpGroupKey, new GroupMetrics(clusterPhyId, partitionId, topicName, groupName, false));
-                    tpGroupPOMap.get(tpGroupKey).putMetric(metrics.getMetrics());
+                    return;
                 }
-            });
-            if(!EnvUtil.isOnline()){
-                LOGGER.info("method=GroupMetricCollector||clusterPhyId={}||groupName={}||metricName={}||metricValue={}",
-                        clusterPhyId, groupName, metricName, JSON.toJSONString(ret.getData()));
-            }
-        }catch (Exception e){
-            LOGGER.error("method=GroupMetricCollector||clusterPhyId={}||groupName={}||errMsg=exception!", clusterPhyId, groupName, e);
+                TopicPartition tp = new TopicPartition(metrics.getTopic(), metrics.getPartitionId());
+                subMetricMap.putIfAbsent(tp, new GroupMetrics(clusterPhyId, metrics.getPartitionId(), metrics.getTopic(), groupName, false));
+                subMetricMap.get(tp).putMetric(metrics.getMetrics());
+            });
+        } catch (Exception e) {
+            LOGGER.error(
+                    "method=collectMetrics||clusterPhyId={}||groupName={}||errMsg=exception!",
+                    clusterPhyId, groupName, e
+            );
         }
     }
-        groupMetricsList.add(groupMetrics);
-        groupMetricsList.addAll(tpGroupPOMap.values());
+        List<GroupMetrics> metricsList = new ArrayList<>();
+        metricsList.add(groupMetrics);
+        metricsList.addAll(subMetricMap.values());
         // record collection cost
         groupMetrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (System.currentTimeMillis() - startTime) / 1000.0f);
-        metricsMap.put(groupName, groupMetricsList);
+        metricsMap.put(groupName, metricsList);
     }
-    private String genTopicPartitionGroupKey(String topic, Integer partitionId){
-        return topic + "@" + partitionId;
-    }
 }
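
The one-liner that replaces the manual copy loop above is the three-argument Stream.collect(supplier, accumulator, combiner) overload. A minimal self-contained sketch of just that flattening step, with plain String lists standing in for GroupMetrics:

import java.util.*;
import java.util.concurrent.ConcurrentHashMap;

public class FlattenSketch {
    public static void main(String[] args) {
        Map<String, List<String>> metricsMap = new ConcurrentHashMap<>();
        metricsMap.put("group-a", Arrays.asList("m1", "m2"));
        metricsMap.put("group-b", Collections.singletonList("m3"));

        // supplier creates the result list, accumulator appends each group's list,
        // combiner merges partial results; equivalent to flatMap(...).collect(toList())
        List<String> flat = metricsMap.values().stream()
                .collect(ArrayList::new, ArrayList::addAll, ArrayList::addAll);

        System.out.println(flat); // e.g. [m1, m2, m3] (map iteration order is not guaranteed)
    }
}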


@@ -1,4 +1,4 @@
-package com.xiaojukeji.know.streaming.km.collector.metric;
+package com.xiaojukeji.know.streaming.km.collector.metric.kafka;
 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
@@ -9,8 +9,6 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.topic.Topic;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionControlItem;
 import com.xiaojukeji.know.streaming.km.common.bean.event.metric.PartitionMetricEvent;
 import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
-import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
-import com.xiaojukeji.know.streaming.km.common.utils.EnvUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
 import com.xiaojukeji.know.streaming.km.core.service.partition.PartitionMetricService;
 import com.xiaojukeji.know.streaming.km.core.service.topic.TopicService;
@@ -27,8 +25,8 @@ import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemT
  * @author didi
  */
 @Component
-public class PartitionMetricCollector extends AbstractMetricCollector<PartitionMetrics> {
-    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
+public class PartitionMetricCollector extends AbstractKafkaMetricCollector<PartitionMetrics> {
+    protected static final ILog LOGGER = LogFactory.getLog(PartitionMetricCollector.class);
     @Autowired
     private VersionControlService versionControlService;
@@ -40,13 +38,10 @@ public class PartitionMetricCollector extends AbstractMetricCollector<PartitionM
     private TopicService topicService;
     @Override
-    public void collectMetrics(ClusterPhy clusterPhy) {
-        Long startTime = System.currentTimeMillis();
+    public List<PartitionMetrics> collectKafkaMetrics(ClusterPhy clusterPhy) {
         Long clusterPhyId = clusterPhy.getId();
         List<Topic> topicList = topicService.listTopicsFromCacheFirst(clusterPhyId);
-        List<VersionControlItem> items = versionControlService.listVersionControlItem(clusterPhyId, collectorType().getCode());
+        List<VersionControlItem> items = versionControlService.listVersionControlItem(this.getClusterVersion(clusterPhy), collectorType().getCode());
-        // fetch all partitions in the cluster
         FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(clusterPhyId);
@@ -55,9 +50,9 @@ public class PartitionMetricCollector extends AbstractMetricCollector<PartitionM
         metricsMap.put(topic.getTopicName(), new ConcurrentHashMap<>());
         future.runnableTask(
-                String.format("method=PartitionMetricCollector||clusterPhyId=%d||topicName=%s", clusterPhyId, topic.getTopicName()),
+                String.format("class=PartitionMetricCollector||clusterPhyId=%d||topicName=%s", clusterPhyId, topic.getTopicName()),
                 30000,
-                () -> collectMetrics(clusterPhyId, topic.getTopicName(), metricsMap.get(topic.getTopicName()), items)
+                () -> this.collectMetrics(clusterPhyId, topic.getTopicName(), metricsMap.get(topic.getTopicName()), items)
         );
     }
@@ -68,10 +63,7 @@ public class PartitionMetricCollector extends AbstractMetricCollector<PartitionM
     this.publishMetric(new PartitionMetricEvent(this, metricsList));
-    LOGGER.info(
-            "method=PartitionMetricCollector||clusterPhyId={}||startTime={}||costTime={}||msg=collect finished.",
-            clusterPhyId, startTime, System.currentTimeMillis() - startTime
-    );
+    return metricsList;
 }
 @Override
@@ -109,17 +101,9 @@ public class PartitionMetricCollector extends AbstractMetricCollector<PartitionM
     PartitionMetrics allMetrics = metricsMap.get(subMetrics.getPartitionId());
     allMetrics.putMetric(subMetrics.getMetrics());
 }
-if (!EnvUtil.isOnline()) {
-    LOGGER.info(
-            "class=PartitionMetricCollector||method=collectMetrics||clusterPhyId={}||topicName={}||metricName={}||metricValue={}!",
-            clusterPhyId, topicName, v.getName(), ConvertUtil.obj2Json(ret.getData())
-    );
-}
 } catch (Exception e) {
     LOGGER.info(
-            "class=PartitionMetricCollector||method=collectMetrics||clusterPhyId={}||topicName={}||metricName={}||errMsg=exception",
+            "method=collectMetrics||clusterPhyId={}||topicName={}||metricName={}||errMsg=exception",
             clusterPhyId, topicName, v.getName(), e
     );
 }


@@ -1,4 +1,4 @@
-package com.xiaojukeji.know.streaming.km.collector.metric;
+package com.xiaojukeji.know.streaming.km.collector.metric.kafka;
 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
@@ -10,8 +10,6 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionContro
 import com.xiaojukeji.know.streaming.km.common.bean.event.metric.TopicMetricEvent;
 import com.xiaojukeji.know.streaming.km.common.constant.Constant;
 import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
-import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
-import com.xiaojukeji.know.streaming.km.common.utils.EnvUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
 import com.xiaojukeji.know.streaming.km.core.service.topic.TopicMetricService;
@@ -31,8 +29,8 @@ import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemT
  * @author didi
  */
 @Component
-public class TopicMetricCollector extends AbstractMetricCollector<List<TopicMetrics>> {
-    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
+public class TopicMetricCollector extends AbstractKafkaMetricCollector<TopicMetrics> {
+    protected static final ILog LOGGER = LogFactory.getLog(TopicMetricCollector.class);
     @Autowired
     private VersionControlService versionControlService;
@@ -46,11 +44,10 @@ public class TopicMetricCollector extends AbstractMetricCollector<List<TopicMetr
     private static final Integer AGG_METRICS_BROKER_ID = -10000;
     @Override
-    public void collectMetrics(ClusterPhy clusterPhy) {
-        Long startTime = System.currentTimeMillis();
+    public List<TopicMetrics> collectKafkaMetrics(ClusterPhy clusterPhy) {
         Long clusterPhyId = clusterPhy.getId();
         List<Topic> topics = topicService.listTopicsFromCacheFirst(clusterPhyId);
-        List<VersionControlItem> items = versionControlService.listVersionControlItem(clusterPhyId, collectorType().getCode());
+        List<VersionControlItem> items = versionControlService.listVersionControlItem(this.getClusterVersion(clusterPhy), collectorType().getCode());
         FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(clusterPhyId);
@@ -64,7 +61,7 @@ public class TopicMetricCollector extends AbstractMetricCollector<List<TopicMetr
         allMetricsMap.put(topic.getTopicName(), metricsMap);
         future.runnableTask(
-                String.format("method=TopicMetricCollector||clusterPhyId=%d||topicName=%s", clusterPhyId, topic.getTopicName()),
+                String.format("class=TopicMetricCollector||clusterPhyId=%d||topicName=%s", clusterPhyId, topic.getTopicName()),
                 30000,
                 () -> collectMetrics(clusterPhyId, topic.getTopicName(), metricsMap, items)
         );
@@ -77,8 +74,7 @@ public class TopicMetricCollector extends AbstractMetricCollector<List<TopicMetr
     this.publishMetric(new TopicMetricEvent(this, metricsList));
-    LOGGER.info("method=TopicMetricCollector||clusterPhyId={}||startTime={}||costTime={}||msg=collect finished.",
-            clusterPhyId, startTime, System.currentTimeMillis() - startTime);
+    return metricsList;
 }
 @Override
@@ -118,14 +114,9 @@ public class TopicMetricCollector extends AbstractMetricCollector<List<TopicMetr
     metricsMap.get(metrics.getBrokerId()).putMetric(metrics.getMetrics());
     }
 });
-if (!EnvUtil.isOnline()) {
-    LOGGER.info("method=TopicMetricCollector||clusterPhyId={}||topicName={}||metricName={}||metricValue={}.",
-            clusterPhyId, topicName, v.getName(), ConvertUtil.obj2Json(ret.getData())
-    );
-}
 } catch (Exception e) {
-    LOGGER.error("method=TopicMetricCollector||clusterPhyId={}||topicName={}||metricName={}||errMsg=exception!",
+    LOGGER.error(
+            "method=collectMetrics||clusterPhyId={}||topicName={}||metricName={}||errMsg=exception!",
             clusterPhyId, topicName, v.getName(), e
     );
 }


@@ -1,4 +1,4 @@
-package com.xiaojukeji.know.streaming.km.collector.metric;
+package com.xiaojukeji.know.streaming.km.collector.metric.kafka;
 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
@@ -14,10 +14,8 @@ import com.xiaojukeji.know.streaming.km.common.bean.event.metric.ZookeeperMetric
 import com.xiaojukeji.know.streaming.km.common.constant.Constant;
 import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
 import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
-import com.xiaojukeji.know.streaming.km.common.utils.EnvUtil;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.ZookeeperInfo;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.ZookeeperMetrics;
-import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.ZookeeperMetricPO;
 import com.xiaojukeji.know.streaming.km.core.service.kafkacontroller.KafkaControllerService;
 import com.xiaojukeji.know.streaming.km.core.service.version.VersionControlService;
 import com.xiaojukeji.know.streaming.km.core.service.zookeeper.ZookeeperMetricService;
@@ -25,7 +23,7 @@ import com.xiaojukeji.know.streaming.km.core.service.zookeeper.ZookeeperService;
 import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.stereotype.Component;
-import java.util.Arrays;
+import java.util.Collections;
 import java.util.List;
 import java.util.stream.Collectors;
@@ -35,8 +33,8 @@ import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemT
  * @author didi
 */
 @Component
-public class ZookeeperMetricCollector extends AbstractMetricCollector<ZookeeperMetricPO> {
-    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
+public class ZookeeperMetricCollector extends AbstractKafkaMetricCollector<ZookeeperMetrics> {
+    protected static final ILog LOGGER = LogFactory.getLog(ZookeeperMetricCollector.class);
     @Autowired
     private VersionControlService versionControlService;
@@ -51,21 +49,21 @@ public class ZookeeperMetricCollector extends AbstractMetricCollector<ZookeeperM
     private KafkaControllerService kafkaControllerService;
     @Override
-    public void collectMetrics(ClusterPhy clusterPhy) {
+    public List<ZookeeperMetrics> collectKafkaMetrics(ClusterPhy clusterPhy) {
         Long startTime = System.currentTimeMillis();
         Long clusterPhyId = clusterPhy.getId();
-        List<VersionControlItem> items = versionControlService.listVersionControlItem(clusterPhyId, collectorType().getCode());
+        List<VersionControlItem> items = versionControlService.listVersionControlItem(this.getClusterVersion(clusterPhy), collectorType().getCode());
         List<ZookeeperInfo> aliveZKList = zookeeperService.listFromDBByCluster(clusterPhyId)
                 .stream()
                 .filter(elem -> Constant.ALIVE.equals(elem.getStatus()))
                 .collect(Collectors.toList());
         KafkaController kafkaController = kafkaControllerService.getKafkaControllerFromDB(clusterPhyId);
-        ZookeeperMetrics metrics = ZookeeperMetrics.initWithMetric(clusterPhyId, Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (float)Constant.INVALID_CODE);
+        ZookeeperMetrics metrics = ZookeeperMetrics.initWithMetric(clusterPhyId, Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
         if (ValidateUtils.isEmptyList(aliveZKList)) {
             // no alive ZK node: publish the event, then return directly
-            publishMetric(new ZookeeperMetricEvent(this, Arrays.asList(metrics)));
-            return;
+            publishMetric(new ZookeeperMetricEvent(this, Collections.singletonList(metrics)));
+            return Collections.singletonList(metrics);
         }
         // build the request parameters
@@ -82,6 +80,7 @@ public class ZookeeperMetricCollector extends AbstractMetricCollector<ZookeeperM
         if(null != metrics.getMetrics().get(v.getName())) {
             continue;
         }
         param.setMetricName(v.getName());
         Result<ZookeeperMetrics> ret = zookeeperMetricService.collectMetricsFromZookeeper(param);
@@ -90,16 +89,9 @@ public class ZookeeperMetricCollector extends AbstractMetricCollector<ZookeeperM
         }
         metrics.putMetric(ret.getData().getMetrics());
-        if(!EnvUtil.isOnline()){
-            LOGGER.info(
-                    "class=ZookeeperMetricCollector||method=collectMetrics||clusterPhyId={}||metricName={}||metricValue={}",
-                    clusterPhyId, v.getName(), ConvertUtil.obj2Json(ret.getData().getMetrics())
-            );
-        }
     } catch (Exception e){
         LOGGER.error(
-                "class=ZookeeperMetricCollector||method=collectMetrics||clusterPhyId={}||metricName={}||errMsg=exception!",
+                "method=collectMetrics||clusterPhyId={}||metricName={}||errMsg=exception!",
                 clusterPhyId, v.getName(), e
         );
     }
@@ -107,12 +99,9 @@ public class ZookeeperMetricCollector extends AbstractMetricCollector<ZookeeperM
     metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (System.currentTimeMillis() - startTime) / 1000.0f);
-    publishMetric(new ZookeeperMetricEvent(this, Arrays.asList(metrics)));
-    LOGGER.info(
-            "class=ZookeeperMetricCollector||method=collectMetrics||clusterPhyId={}||startTime={}||costTime={}||msg=msg=collect finished.",
-            clusterPhyId, startTime, System.currentTimeMillis() - startTime
-    );
+    this.publishMetric(new ZookeeperMetricEvent(this, Collections.singletonList(metrics)));
+    return Collections.singletonList(metrics);
 }
 @Override


@@ -237,7 +237,7 @@ public class CollectThreadPoolService {
     private synchronized FutureWaitUtil<Void> closeOldAndCreateNew(Long shardId) {
         // the new pool
         FutureWaitUtil<Void> newFutureUtil = FutureWaitUtil.init(
-                "CollectorMetricsFutureUtil-Shard-" + shardId,
+                "MetricCollect-Shard-" + shardId,
                 this.futureUtilThreadNum,
                 this.futureUtilThreadNum,
                 this.futureUtilQueueSize


@@ -3,67 +3,47 @@ package com.xiaojukeji.know.streaming.km.collector.sink;
 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
 import com.xiaojukeji.know.streaming.km.common.bean.po.BaseESPO;
-import com.xiaojukeji.know.streaming.km.common.utils.EnvUtil;
-import com.xiaojukeji.know.streaming.km.common.utils.NamedThreadFactory;
+import com.xiaojukeji.know.streaming.km.common.utils.FutureUtil;
 import com.xiaojukeji.know.streaming.km.persistence.es.dao.BaseMetricESDAO;
 import org.apache.commons.collections.CollectionUtils;
 import java.util.List;
 import java.util.Objects;
-import java.util.concurrent.LinkedBlockingDeque;
-import java.util.concurrent.ThreadPoolExecutor;
-import java.util.concurrent.TimeUnit;
 public abstract class AbstractMetricESSender {
-    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
+    private static final ILog LOGGER = LogFactory.getLog(AbstractMetricESSender.class);
     private static final int THRESHOLD = 100;
-    private static final ThreadPoolExecutor esExecutor = new ThreadPoolExecutor(
-            10,
-            20,
-            6000,
-            TimeUnit.MILLISECONDS,
-            new LinkedBlockingDeque<>(1000),
-            new NamedThreadFactory("KM-Collect-MetricESSender-ES"),
-            (r, e) -> LOGGER.warn("class=MetricESSender||msg=KM-Collect-MetricESSender-ES Deque is blocked, taskCount:{}" + e.getTaskCount())
+    private static final FutureUtil<Void> esExecutor = FutureUtil.init(
+            "MetricsESSender",
+            10,
+            20,
+            10000
     );
     /**
     * send according to the monitoring dimension
     */
-    protected boolean send2es(String index, List<? extends BaseESPO> statsList){
+    protected boolean send2es(String index, List<? extends BaseESPO> statsList) {
+        LOGGER.info("method=send2es||indexName={}||metricsSize={}||msg=send metrics to es", index, statsList.size());
         if (CollectionUtils.isEmpty(statsList)) {
             return true;
         }
-        if (!EnvUtil.isOnline()) {
-            LOGGER.info("class=MetricESSender||method=send2es||ariusStats={}||size={}",
-                    index, statsList.size());
-        }
         BaseMetricESDAO baseMetricESDao = BaseMetricESDAO.getByStatsType(index);
-        if (Objects.isNull( baseMetricESDao )) {
-            LOGGER.error("class=MetricESSender||method=send2es||errMsg=fail to find {}", index);
+        if (Objects.isNull(baseMetricESDao)) {
+            LOGGER.error("method=send2es||indexName={}||errMsg=find dao failed", index);
             return false;
         }
-        int size = statsList.size();
-        int num = (size) % THRESHOLD == 0 ? (size / THRESHOLD) : (size / THRESHOLD + 1);
-        if (size < THRESHOLD) {
-            esExecutor.execute(
-                    () -> baseMetricESDao.batchInsertStats(statsList)
-            );
-            return true;
-        }
-        for (int i = 1; i < num + 1; i++) {
-            int end = (i * THRESHOLD) > size ? size : (i * THRESHOLD);
-            int start = (i - 1) * THRESHOLD;
-            esExecutor.execute(
-                    () -> baseMetricESDao.batchInsertStats(statsList.subList(start, end))
+        for (int i = 0; i < statsList.size(); i += THRESHOLD) {
+            final int idxStart = i;
+            // send asynchronously
+            esExecutor.submitTask(
+                    () -> baseMetricESDao.batchInsertStats(statsList.subList(idxStart, Math.min(idxStart + THRESHOLD, statsList.size())))
             );
         }
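
The rewritten send2es collapses the old two-branch batching into a single stride loop. A self-contained sketch of the same slicing logic, using a plain ExecutorService in place of the project's FutureUtil wrapper (an assumption for illustration):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ChunkedSubmitSketch {
    private static final int THRESHOLD = 100;

    public static void main(String[] args) {
        ExecutorService executor = Executors.newFixedThreadPool(4);
        List<Integer> statsList = new ArrayList<>();
        for (int n = 0; n < 250; n++) statsList.add(n);

        // one async task per THRESHOLD-sized slice; the last slice may be shorter
        for (int i = 0; i < statsList.size(); i += THRESHOLD) {
            final int idxStart = i; // lambdas need an effectively final copy of the index
            executor.submit(() -> {
                List<Integer> batch = statsList.subList(idxStart, Math.min(idxStart + THRESHOLD, statsList.size()));
                System.out.println("inserting batch of " + batch.size()); // stands in for batchInsertStats
            });
        }
        executor.shutdown();
    }
}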


@@ -1,28 +0,0 @@
package com.xiaojukeji.know.streaming.km.collector.sink;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.ReplicaMetricEvent;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.ReplicationMetricPO;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;
import javax.annotation.PostConstruct;
import static com.xiaojukeji.know.streaming.km.common.constant.ESIndexConstant.REPLICATION_INDEX;
@Component
public class ReplicaMetricESSender extends AbstractMetricESSender implements ApplicationListener<ReplicaMetricEvent> {
protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
@PostConstruct
public void init(){
LOGGER.info("class=GroupMetricESSender||method=init||msg=init finished");
}
@Override
public void onApplicationEvent(ReplicaMetricEvent event) {
send2es(REPLICATION_INDEX, ConvertUtil.list2List(event.getReplicationMetrics(), ReplicationMetricPO.class));
}
}


@@ -0,0 +1,33 @@
package com.xiaojukeji.know.streaming.km.collector.sink.connect;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.connect.ConnectClusterMetricEvent;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.connect.ConnectClusterMetricPO;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;
import javax.annotation.PostConstruct;
import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.CONNECT_CLUSTER_INDEX;
/**
* @author wyb
* @date 2022/11/7
*/
@Component
public class ConnectClusterMetricESSender extends AbstractMetricESSender implements ApplicationListener<ConnectClusterMetricEvent> {
protected static final ILog LOGGER = LogFactory.getLog(ConnectClusterMetricESSender.class);
@PostConstruct
public void init(){
LOGGER.info("class=ConnectClusterMetricESSender||method=init||msg=init finished");
}
@Override
public void onApplicationEvent(ConnectClusterMetricEvent event) {
send2es(CONNECT_CLUSTER_INDEX, ConvertUtil.list2List(event.getConnectClusterMetrics(), ConnectClusterMetricPO.class));
}
}
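
All of these sender classes share one shape: a collector publishes a typed ApplicationEvent, and a @Component sink implementing ApplicationListener converts the payload to POs and forwards it via send2es. A minimal sketch of that Spring wiring, with an invented event type standing in for the metric events:

import org.springframework.context.ApplicationEvent;
import org.springframework.context.ApplicationEventPublisher;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;

class DemoMetricEvent extends ApplicationEvent {
    private final String payload;
    DemoMetricEvent(Object source, String payload) { super(source); this.payload = payload; }
    String getPayload() { return payload; }
}

@Component
class DemoCollector {
    private final ApplicationEventPublisher publisher;
    DemoCollector(ApplicationEventPublisher publisher) { this.publisher = publisher; }
    void collectAndPublish() { publisher.publishEvent(new DemoMetricEvent(this, "metrics")); }
}

@Component
class DemoSink implements ApplicationListener<DemoMetricEvent> {
    @Override
    public void onApplicationEvent(DemoMetricEvent event) {
        // in the real senders this is send2es(INDEX, ConvertUtil.list2List(...))
        System.out.println("sink received: " + event.getPayload());
    }
}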


@@ -0,0 +1,33 @@
package com.xiaojukeji.know.streaming.km.collector.sink.connect;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.connect.ConnectorMetricEvent;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.connect.ConnectorMetricPO;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;
import javax.annotation.PostConstruct;
import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.CONNECT_CONNECTOR_INDEX;
/**
* @author wyb
* @date 2022/11/7
*/
@Component
public class ConnectorMetricESSender extends AbstractMetricESSender implements ApplicationListener<ConnectorMetricEvent> {
protected static final ILog LOGGER = LogFactory.getLog(ConnectorMetricESSender.class);
@PostConstruct
public void init(){
LOGGER.info("class=ConnectorMetricESSender||method=init||msg=init finished");
}
@Override
public void onApplicationEvent(ConnectorMetricEvent event) {
send2es(CONNECT_CONNECTOR_INDEX, ConvertUtil.list2List(event.getConnectorMetricsList(), ConnectorMetricPO.class));
}
}


@@ -1,7 +1,8 @@
-package com.xiaojukeji.know.streaming.km.collector.sink;
+package com.xiaojukeji.know.streaming.km.collector.sink.kafka;
 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
+import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
 import com.xiaojukeji.know.streaming.km.common.bean.event.metric.BrokerMetricEvent;
 import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.BrokerMetricPO;
 import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
@@ -10,15 +11,15 @@ import org.springframework.stereotype.Component;
 import javax.annotation.PostConstruct;
-import static com.xiaojukeji.know.streaming.km.common.constant.ESIndexConstant.BROKER_INDEX;
+import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.BROKER_INDEX;
 @Component
 public class BrokerMetricESSender extends AbstractMetricESSender implements ApplicationListener<BrokerMetricEvent> {
-    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
+    private static final ILog LOGGER = LogFactory.getLog(BrokerMetricESSender.class);
     @PostConstruct
     public void init(){
-        LOGGER.info("class=BrokerMetricESSender||method=init||msg=init finished");
+        LOGGER.info("method=init||msg=init finished");
     }
 @Override


@@ -1,7 +1,8 @@
-package com.xiaojukeji.know.streaming.km.collector.sink;
+package com.xiaojukeji.know.streaming.km.collector.sink.kafka;
 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
+import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
 import com.xiaojukeji.know.streaming.km.common.bean.event.metric.ClusterMetricEvent;
 import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.ClusterMetricPO;
 import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
@@ -10,16 +11,15 @@ import org.springframework.stereotype.Component;
 import javax.annotation.PostConstruct;
-import static com.xiaojukeji.know.streaming.km.common.constant.ESIndexConstant.CLUSTER_INDEX;
+import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.CLUSTER_INDEX;
 @Component
 public class ClusterMetricESSender extends AbstractMetricESSender implements ApplicationListener<ClusterMetricEvent> {
-    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
+    private static final ILog LOGGER = LogFactory.getLog(ClusterMetricESSender.class);
     @PostConstruct
     public void init(){
-        LOGGER.info("class=ClusterMetricESSender||method=init||msg=init finished");
+        LOGGER.info("method=init||msg=init finished");
     }
 @Override


@@ -1,7 +1,8 @@
-package com.xiaojukeji.know.streaming.km.collector.sink;
+package com.xiaojukeji.know.streaming.km.collector.sink.kafka;
 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
+import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
 import com.xiaojukeji.know.streaming.km.common.bean.event.metric.GroupMetricEvent;
 import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.GroupMetricPO;
 import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
@@ -10,16 +11,15 @@ import org.springframework.stereotype.Component;
 import javax.annotation.PostConstruct;
-import static com.xiaojukeji.know.streaming.km.common.constant.ESIndexConstant.GROUP_INDEX;
+import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.GROUP_INDEX;
 @Component
 public class GroupMetricESSender extends AbstractMetricESSender implements ApplicationListener<GroupMetricEvent> {
-    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
+    private static final ILog LOGGER = LogFactory.getLog(GroupMetricESSender.class);
     @PostConstruct
     public void init(){
-        LOGGER.info("class=GroupMetricESSender||method=init||msg=init finished");
+        LOGGER.info("method=init||msg=init finished");
     }
 @Override


@@ -1,7 +1,8 @@
-package com.xiaojukeji.know.streaming.km.collector.sink;
+package com.xiaojukeji.know.streaming.km.collector.sink.kafka;
 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
+import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
 import com.xiaojukeji.know.streaming.km.common.bean.event.metric.PartitionMetricEvent;
 import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.PartitionMetricPO;
 import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
@@ -10,15 +11,15 @@ import org.springframework.stereotype.Component;
 import javax.annotation.PostConstruct;
-import static com.xiaojukeji.know.streaming.km.common.constant.ESIndexConstant.PARTITION_INDEX;
+import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.PARTITION_INDEX;
 @Component
 public class PartitionMetricESSender extends AbstractMetricESSender implements ApplicationListener<PartitionMetricEvent> {
-    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
+    private static final ILog LOGGER = LogFactory.getLog(PartitionMetricESSender.class);
     @PostConstruct
     public void init(){
-        LOGGER.info("class=PartitionMetricESSender||method=init||msg=init finished");
+        LOGGER.info("method=init||msg=init finished");
     }
 @Override


@@ -1,7 +1,8 @@
-package com.xiaojukeji.know.streaming.km.collector.sink;
+package com.xiaojukeji.know.streaming.km.collector.sink.kafka;
 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
+import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
 import com.xiaojukeji.know.streaming.km.common.bean.event.metric.*;
 import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.*;
 import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
@@ -10,16 +11,15 @@ import org.springframework.stereotype.Component;
 import javax.annotation.PostConstruct;
-import static com.xiaojukeji.know.streaming.km.common.constant.ESIndexConstant.TOPIC_INDEX;
+import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.TOPIC_INDEX;
 @Component
 public class TopicMetricESSender extends AbstractMetricESSender implements ApplicationListener<TopicMetricEvent> {
-    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
+    private static final ILog LOGGER = LogFactory.getLog(TopicMetricESSender.class);
     @PostConstruct
     public void init(){
-        LOGGER.info("class=TopicMetricESSender||method=init||msg=init finished");
+        LOGGER.info("method=init||msg=init finished");
     }
 @Override


@@ -1,7 +1,8 @@
-package com.xiaojukeji.know.streaming.km.collector.sink;
+package com.xiaojukeji.know.streaming.km.collector.sink.kafka;
 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
+import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
 import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
 import com.xiaojukeji.know.streaming.km.common.bean.event.metric.ZookeeperMetricEvent;
 import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.ZookeeperMetricPO;
@@ -10,15 +11,15 @@ import org.springframework.stereotype.Component;
 import javax.annotation.PostConstruct;
-import static com.xiaojukeji.know.streaming.km.common.constant.ESIndexConstant.ZOOKEEPER_INDEX;
+import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.ZOOKEEPER_INDEX;
 @Component
 public class ZookeeperMetricESSender extends AbstractMetricESSender implements ApplicationListener<ZookeeperMetricEvent> {
-    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
+    private static final ILog LOGGER = LogFactory.getLog(ZookeeperMetricESSender.class);
     @PostConstruct
     public void init(){
-        LOGGER.info("class=ZookeeperMetricESSender||method=init||msg=init finished");
+        LOGGER.info("method=init||msg=init finished");
     }
 @Override


@@ -0,0 +1,33 @@
package com.xiaojukeji.know.streaming.km.collector.sink.mm2;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.mm2.MirrorMakerMetricEvent;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.mm2.MirrorMakerMetricPO;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;
import javax.annotation.PostConstruct;
import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.CONNECT_MM2_INDEX;
/**
* @author zengqiao
* @date 2022/12/20
*/
@Component
public class MirrorMakerMetricESSender extends AbstractMetricESSender implements ApplicationListener<MirrorMakerMetricEvent> {
protected static final ILog LOGGER = LogFactory.getLog(MirrorMakerMetricESSender.class);
@PostConstruct
public void init(){
LOGGER.info("method=init||msg=init finished");
}
@Override
public void onApplicationEvent(MirrorMakerMetricEvent event) {
send2es(CONNECT_MM2_INDEX, ConvertUtil.list2List(event.getMetricsList(), MirrorMakerMetricPO.class));
}
}


@@ -81,10 +81,6 @@
         <version>3.0.2</version>
     </dependency>
-    <dependency>
-        <groupId>junit</groupId>
-        <artifactId>junit</artifactId>
-    </dependency>
     <dependency>
         <groupId>org.projectlombok</groupId>
         <artifactId>lombok</artifactId>
@@ -127,5 +123,9 @@
         <groupId>org.apache.kafka</groupId>
         <artifactId>kafka_2.13</artifactId>
     </dependency>
+    <dependency>
+        <groupId>org.apache.kafka</groupId>
+        <artifactId>connect-runtime</artifactId>
+    </dependency>
 </dependencies>
 </project>


@@ -0,0 +1,28 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.cluster;
import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.MetricDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationSortDTO;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import javax.validation.constraints.NotNull;
import java.util.List;
/**
* @author zengqiao
* @date 22/02/24
*/
@Data
public class ClusterConnectorsOverviewDTO extends PaginationSortDTO {
@NotNull(message = "latestMetricNames不允许为空")
@ApiModelProperty("需要指标点的信息")
private List<String> latestMetricNames;
@NotNull(message = "metricLines不允许为空")
@ApiModelProperty("需要指标曲线的信息")
private MetricDTO metricLines;
@ApiModelProperty("需要排序的指标名称列表,比较第一个不为空的metric")
private List<String> sortMetricNameList;
}


@@ -1,19 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.cluster;
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationMulFuzzySearchDTO;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
/**
* @author zengqiao
* @date 22/02/24
*/
@Data
public class ClusterGroupsOverviewDTO extends PaginationMulFuzzySearchDTO {
@ApiModelProperty("查找该Topic")
private String topicName;
@ApiModelProperty("查找该Group")
private String groupName;
}


@@ -0,0 +1,12 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.cluster;
import lombok.Data;
/**
* @author zengqiao
* @date 22/12/12
*/
@Data
public class ClusterMirrorMakersOverviewDTO extends ClusterConnectorsOverviewDTO {
}


@@ -0,0 +1,32 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.connect;
import com.xiaojukeji.know.streaming.km.common.bean.dto.BaseDTO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import lombok.NoArgsConstructor;
import javax.validation.constraints.NotBlank;
import javax.validation.constraints.NotNull;
/**
* @author zengqiao
* @date 2022-10-17
*/
@Data
@NoArgsConstructor
@ApiModel(description = "集群Connector")
public class ClusterConnectorDTO extends BaseDTO {
@NotNull(message = "connectClusterId不允许为空")
@ApiModelProperty(value = "Connector集群ID", example = "1")
protected Long connectClusterId;
@NotBlank(message = "name不允许为空串")
@ApiModelProperty(value = "Connector名称", example = "know-streaming-connector")
protected String connectorName;
public ClusterConnectorDTO(Long connectClusterId, String connectorName) {
this.connectClusterId = connectClusterId;
this.connectorName = connectorName;
}
}


@@ -0,0 +1,29 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.connect.cluster;
import com.xiaojukeji.know.streaming.km.common.bean.dto.BaseDTO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
/**
* @author zengqiao
* @date 2022-10-17
*/
@Data
@ApiModel(description = "集群Connector")
public class ConnectClusterDTO extends BaseDTO {
@ApiModelProperty(value = "Connect集群ID", example = "1")
private Long id;
@ApiModelProperty(value = "Connect集群名称", example = "know-streaming")
private String name;
@ApiModelProperty(value = "Connect集群URL", example = "http://127.0.0.1:8080")
private String clusterUrl;
@ApiModelProperty(value = "Connect集群版本", example = "2.5.1")
private String version;
@ApiModelProperty(value = "JMX配置", example = "")
private String jmxProperties;
}


@@ -0,0 +1,20 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.ClusterConnectorDTO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import javax.validation.constraints.NotBlank;
/**
* @author zengqiao
* @date 2022-10-17
*/
@Data
@ApiModel(description = "操作Connector")
public class ConnectorActionDTO extends ClusterConnectorDTO {
@NotBlank(message = "action不允许为空串")
@ApiModelProperty(value = "Connector名称", example = "stop|restart|resume")
private String action;
}


@@ -0,0 +1,28 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.ClusterConnectorDTO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import lombok.NoArgsConstructor;
import javax.validation.constraints.NotNull;
import java.util.Properties;
/**
* @author zengqiao
* @date 2022-10-17
*/
@Data
@NoArgsConstructor
@ApiModel(description = "创建Connector")
public class ConnectorCreateDTO extends ClusterConnectorDTO {
@NotNull(message = "configs不允许为空")
@ApiModelProperty(value = "配置", example = "")
protected Properties configs;
public ConnectorCreateDTO(Long connectClusterId, String connectorName, Properties configs) {
super(connectClusterId, connectorName);
this.configs = configs;
}
}


@@ -0,0 +1,14 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.ClusterConnectorDTO;
import io.swagger.annotations.ApiModel;
import lombok.Data;
/**
* @author zengqiao
* @date 2022-10-17
*/
@Data
@ApiModel(description = "删除Connector")
public class ConnectorDeleteDTO extends ClusterConnectorDTO {
}


@@ -0,0 +1,15 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.connect.mm2;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector.ConnectorActionDTO;
import io.swagger.annotations.ApiModel;
import lombok.Data;
/**
* @author zengqiao
* @date 2022-12-12
*/
@Data
@ApiModel(description = "操作MM2")
public class MirrorMaker2ActionDTO extends ConnectorActionDTO {
}


@@ -0,0 +1,14 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.connect.mm2;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector.ConnectorDeleteDTO;
import io.swagger.annotations.ApiModel;
import lombok.Data;
/**
* @author zengqiao
* @date 2022-12-12
*/
@Data
@ApiModel(description = "删除MM2")
public class MirrorMaker2DeleteDTO extends ConnectorDeleteDTO {
}


@@ -0,0 +1,69 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.connect.mm2;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector.ConnectorCreateDTO;
import com.xiaojukeji.know.streaming.km.common.constant.connect.KafkaConnectConstant;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import org.apache.kafka.clients.CommonClientConfigs;
import javax.validation.Valid;
import javax.validation.constraints.NotNull;
import java.util.Properties;
/**
* @author zengqiao
* @date 2022-12-12
*/
@Data
@ApiModel(description = "创建MM2")
public class MirrorMakerCreateDTO extends ConnectorCreateDTO {
@NotNull(message = "sourceKafkaClusterId不允许为空")
@ApiModelProperty(value = "源Kafka集群ID", example = "")
private Long sourceKafkaClusterId;
@Valid
@ApiModelProperty(value = "heartbeat-connector的信息", example = "")
private Properties heartbeatConnectorConfigs;
@Valid
@ApiModelProperty(value = "checkpoint-connector的信息", example = "")
private Properties checkpointConnectorConfigs;
public void unifyData(Long sourceKafkaClusterId, String sourceBootstrapServers, Properties sourceKafkaProps,
Long targetKafkaClusterId, String targetBootstrapServers, Properties targetKafkaProps) {
if (sourceKafkaProps == null) {
sourceKafkaProps = new Properties();
}
if (targetKafkaProps == null) {
targetKafkaProps = new Properties();
}
this.unifyData(this.configs, sourceKafkaClusterId, sourceBootstrapServers, sourceKafkaProps, targetKafkaClusterId, targetBootstrapServers, targetKafkaProps);
if (heartbeatConnectorConfigs != null) {
this.unifyData(this.heartbeatConnectorConfigs, sourceKafkaClusterId, sourceBootstrapServers, sourceKafkaProps, targetKafkaClusterId, targetBootstrapServers, targetKafkaProps);
}
if (checkpointConnectorConfigs != null) {
this.unifyData(this.checkpointConnectorConfigs, sourceKafkaClusterId, sourceBootstrapServers, sourceKafkaProps, targetKafkaClusterId, targetBootstrapServers, targetKafkaProps);
}
}
private void unifyData(Properties dataConfig,
Long sourceKafkaClusterId, String sourceBootstrapServers, Properties sourceKafkaProps,
Long targetKafkaClusterId, String targetBootstrapServers, Properties targetKafkaProps) {
dataConfig.put(KafkaConnectConstant.MIRROR_MAKER_SOURCE_CLUSTER_ALIAS_FIELD_NAME, sourceKafkaClusterId);
dataConfig.put(KafkaConnectConstant.MIRROR_MAKER_SOURCE_CLUSTER_FIELD_NAME + "." + CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, sourceBootstrapServers);
for (Object configKey: sourceKafkaProps.keySet()) {
dataConfig.put(KafkaConnectConstant.MIRROR_MAKER_SOURCE_CLUSTER_FIELD_NAME + "." + configKey, sourceKafkaProps.getProperty((String) configKey));
}
dataConfig.put(KafkaConnectConstant.MIRROR_MAKER_TARGET_CLUSTER_ALIAS_FIELD_NAME, targetKafkaClusterId);
dataConfig.put(KafkaConnectConstant.MIRROR_MAKER_TARGET_CLUSTER_FIELD_NAME + "." + CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, targetBootstrapServers);
for (Object configKey: targetKafkaProps.keySet()) {
dataConfig.put(KafkaConnectConstant.MIRROR_MAKER_TARGET_CLUSTER_FIELD_NAME + "." + configKey, targetKafkaProps.getProperty((String) configKey));
}
}
}
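
unifyData copies the source/target bootstrap servers and any extra client properties into each connector config under cluster-alias prefixes. Assuming the KafkaConnectConstant fields resolve to MM2's standard source.cluster / target.cluster key names (an assumption, not verified against the repo), the resulting Properties would look like this sketch:

import java.util.Properties;

public class Mm2ConfigSketch {
    public static void main(String[] args) {
        Properties cfg = new Properties();
        // aliases: the DTO uses the Kafka cluster IDs as aliases (key names assumed)
        cfg.put("source.cluster.alias", "1");
        cfg.put("target.cluster.alias", "2");
        // bootstrap servers, re-keyed under each side's prefix
        cfg.put("source.cluster.bootstrap.servers", "src-kafka:9092");
        cfg.put("target.cluster.bootstrap.servers", "dst-kafka:9092");
        // every extra client property is likewise prefixed per side
        cfg.put("source.cluster.security.protocol", "PLAINTEXT");
        cfg.forEach((k, v) -> System.out.println(k + " = " + v));
    }
}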


@@ -0,0 +1,20 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.connect.task;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector.ConnectorActionDTO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import javax.validation.constraints.NotNull;
/**
* @author zengqiao
* @date 2022-10-17
*/
@Data
@ApiModel(description = "操作Task")
public class TaskActionDTO extends ConnectorActionDTO {
@NotNull(message = "taskId不允许为NULL")
@ApiModelProperty(value = "taskId", example = "123")
private Long taskId;
}


@@ -0,0 +1,38 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.ha.mirror;
import com.xiaojukeji.know.streaming.km.common.bean.dto.BaseDTO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import javax.validation.constraints.Min;
import javax.validation.constraints.NotBlank;
import javax.validation.constraints.NotNull;
/**
* @author zengqiao
* @date 20/4/23
*/
@Data
@ApiModel(description="Topic镜像信息")
public class MirrorTopicCreateDTO extends BaseDTO {
@Min(value = 0, message = "sourceClusterPhyId不允许为空且最小值为0")
@ApiModelProperty(value = "源集群ID", example = "3")
private Long sourceClusterPhyId;
@Min(value = 0, message = "destClusterPhyId不允许为空且最小值为0")
@ApiModelProperty(value = "目标集群ID", example = "3")
private Long destClusterPhyId;
@NotBlank(message = "topicName不允许为空串")
@ApiModelProperty(value = "Topic名称", example = "mirrorTopic")
private String topicName;
@NotNull(message = "syncData不允许为空")
@ApiModelProperty(value = "同步数据", example = "true")
private Boolean syncData;
@NotNull(message = "syncConfig不允许为空")
@ApiModelProperty(value = "同步配置", example = "false")
private Boolean syncConfig;
}


@@ -0,0 +1,29 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.ha.mirror;
import com.xiaojukeji.know.streaming.km.common.bean.dto.BaseDTO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import javax.validation.constraints.Min;
import javax.validation.constraints.NotBlank;
/**
* @author zengqiao
* @date 20/4/23
*/
@Data
@ApiModel(description="Topic镜像信息")
public class MirrorTopicDeleteDTO extends BaseDTO {
@Min(value = 0, message = "sourceClusterPhyId不允许为空且最小值为0")
@ApiModelProperty(value = "源集群ID", example = "3")
private Long sourceClusterPhyId;
@Min(value = 0, message = "destClusterPhyId不允许为空且最小值为0")
@ApiModelProperty(value = "目标集群ID", example = "3")
private Long destClusterPhyId;
@NotBlank(message = "topicName不允许为空串")
@ApiModelProperty(value = "Topic名称", example = "mirrorTopic")
private String topicName;
}


@@ -0,0 +1,22 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.connect;
import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.MetricDTO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.util.List;
/**
* @author didi
*/
@Data
@NoArgsConstructor
@AllArgsConstructor
@ApiModel(description = "Connect集群指标查询信息")
public class MetricsConnectClustersDTO extends MetricDTO {
@ApiModelProperty("Connect集群ID")
private List<Long> connectClusterIdList;
}


@@ -0,0 +1,23 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.connect;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.ClusterConnectorDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.MetricDTO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.util.List;
/**
* @author didi
*/
@Data
@NoArgsConstructor
@AllArgsConstructor
@ApiModel(description = "Connector指标查询信息")
public class MetricsConnectorsDTO extends MetricDTO {
@ApiModelProperty("Connector列表")
private List<ClusterConnectorDTO> connectorNameList;
}


@@ -0,0 +1,23 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.mm2;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.ClusterConnectorDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.MetricDTO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.util.List;
/**
* @author didi
*/
@Data
@NoArgsConstructor
@AllArgsConstructor
@ApiModel(description = "MirrorMaker指标查询信息")
public class MetricsMirrorMakersDTO extends MetricDTO {
@ApiModelProperty("MirrorMaker的SourceConnect列表")
private List<ClusterConnectorDTO> connectorNameList;
}


@@ -3,7 +3,7 @@ package com.xiaojukeji.know.streaming.km.common.bean.entity;
 /**
 * @author didi
 */
-public interface EntifyIdInterface {
+public interface EntityIdInterface {
     /**
     * get the id
     * @return


@@ -1,6 +1,6 @@
 package com.xiaojukeji.know.streaming.km.common.bean.entity.cluster;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.EntifyIdInterface;
+import com.xiaojukeji.know.streaming.km.common.bean.entity.EntityIdInterface;
 import lombok.AllArgsConstructor;
 import lombok.Data;
 import lombok.NoArgsConstructor;
@@ -10,7 +10,7 @@ import java.util.Date;
 @Data
 @NoArgsConstructor
 @AllArgsConstructor
-public class ClusterPhy implements Comparable<ClusterPhy>, EntifyIdInterface {
+public class ClusterPhy implements Comparable<ClusterPhy>, EntityIdInterface {
     /**
     * primary key
     */


@@ -0,0 +1,37 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.cluster;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

/**
 * Cluster health-state summary
 * @author zengqiao
 * @date 22/02/24
 */
@Data
@NoArgsConstructor
@AllArgsConstructor
public class ClusterPhysHealthState {
    private Integer unknownCount;

    private Integer goodCount;

    private Integer mediumCount;

    private Integer poorCount;

    private Integer deadCount;

    private Integer total;

    public ClusterPhysHealthState(Integer total) {
        this.unknownCount = 0;
        this.goodCount = 0;
        this.mediumCount = 0;
        this.poorCount = 0;
        this.deadCount = 0;
        this.total = total;
    }
}
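
The single-argument constructor zeroes every counter, which suggests the bean is filled incrementally while tallying per-cluster results. A minimal sketch of that aggregation follows; the integer health-level codes are assumed for illustration, since this diff does not show the real enum.

import java.util.List;

public class HealthStateTally {
    // Assumed codes: 1=good, 2=medium, 3=poor, 4=dead, anything else=unknown.
    public static ClusterPhysHealthState tally(List<Integer> healthLevels) {
        ClusterPhysHealthState state = new ClusterPhysHealthState(healthLevels.size());
        for (Integer level : healthLevels) {
            switch (level == null ? 0 : level) {
                case 1: state.setGoodCount(state.getGoodCount() + 1); break;
                case 2: state.setMediumCount(state.getMediumCount() + 1); break;
                case 3: state.setPoorCount(state.getPoorCount() + 1); break;
                case 4: state.setDeadCount(state.getDeadCount() + 1); break;
                default: state.setUnknownCount(state.getUnknownCount() + 1);
            }
        }
        return state;
    }
}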

View File

@@ -18,5 +18,7 @@ public class ClusterPhysState {
     private Integer downCount;
+    private Integer unknownCount;
     private Integer total;
 }

View File

@@ -13,9 +13,6 @@ import java.util.Properties;
  */
 @ApiModel(description = "ZK config")
 public class ZKConfig implements Serializable {
-    @ApiModelProperty(value = "ZK JMX config")
-    private JmxConfig jmxConfig;
-
     @ApiModelProperty(value = "whether ZK secure is enabled", example = "false")
     private Boolean openSecure = false;
@@ -28,14 +25,6 @@ public class ZKConfig implements Serializable {
     @ApiModelProperty(value = "ZK request timeout")
     private Properties otherProps = new Properties();

-    public JmxConfig getJmxConfig() {
-        return jmxConfig == null ? new JmxConfig() : jmxConfig;
-    }
-
-    public void setJmxConfig(JmxConfig jmxConfig) {
-        this.jmxConfig = jmxConfig;
-    }
-
     public Boolean getOpenSecure() {
         return openSecure != null && openSecure;
     }
@@ -53,7 +42,7 @@ public class ZKConfig implements Serializable {
     }

     public Integer getRequestTimeoutUnitMs() {
-        return requestTimeoutUnitMs == null ? Constant.DEFAULT_REQUEST_TIMEOUT_UNIT_MS : requestTimeoutUnitMs;
+        return requestTimeoutUnitMs == null ? Constant.DEFAULT_SESSION_TIMEOUT_UNIT_MS : requestTimeoutUnitMs;
     }

     public void setRequestTimeoutUnitMs(Integer requestTimeoutUnitMs) {

View File

@@ -13,9 +13,4 @@ public class BaseClusterHealthConfig extends BaseClusterConfigValue {
      * Health check name
      */
     protected HealthCheckNameEnum checkNameEnum;
-
-    /**
-     * Weight
-     */
-    protected Float weight;
 }

View File

@@ -0,0 +1,19 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.config.healthcheck;

import lombok.Data;

/**
 * @author wyb
 * @date 2022/10/26
 */
@Data
public class HealthAmountRatioConfig extends BaseClusterHealthConfig {
    /**
     * Total amount
     */
    private Integer amount;

    /**
     * Ratio
     */
    private Double ratio;
}
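
This config pairs an absolute cap with a proportional one. Below is a sketch of the evaluation such a pair usually drives; the pass/fail semantics are an assumption, since the diff only shows the bean.

public class HealthThresholdDemo {
    // Assumed semantics: a check passes while the failed count stays at or below
    // the configured amount AND the failed fraction stays at or below the ratio.
    public static boolean withinThreshold(HealthAmountRatioConfig config, int failed, int total) {
        if (total <= 0) {
            return true; // nothing to evaluate
        }
        boolean amountOk = config.getAmount() == null || failed <= config.getAmount();
        boolean ratioOk  = config.getRatio() == null || (double) failed / total <= config.getRatio();
        return amountOk && ratioOk;
    }
}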

View File

@@ -1,7 +1,5 @@
 package com.xiaojukeji.know.streaming.km.common.bean.entity.config.metric;

-import com.xiaojukeji.know.streaming.km.common.constant.Constant;
-import lombok.AllArgsConstructor;
 import lombok.Data;
 import lombok.NoArgsConstructor;

View File

@@ -0,0 +1,61 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.connect;

import com.xiaojukeji.know.streaming.km.common.bean.entity.EntityIdInterface;
import lombok.Data;

import java.io.Serializable;

@Data
public class ConnectCluster implements Serializable, Comparable<ConnectCluster>, EntityIdInterface {
    /**
     * Cluster ID
     */
    private Long id;

    /**
     * Cluster name
     */
    private String name;

    /**
     * Consumer group used by the cluster
     */
    private String groupName;

    /**
     * State of the cluster's consumer group, which also represents the cluster state
     * @see com.xiaojukeji.know.streaming.km.common.enums.group.GroupStateEnum
     */
    private Integer state;

    /**
     * Leader URL reported by the workers
     */
    private String memberLeaderUrl;

    /**
     * Version info
     */
    private String version;

    /**
     * JMX config
     * @see com.xiaojukeji.know.streaming.km.common.bean.entity.config.JmxConfig
     */
    private String jmxProperties;

    /**
     * Kafka cluster ID
     */
    private Long kafkaClusterPhyId;

    /**
     * Cluster URL
     */
    private String clusterUrl;

    @Override
    public int compareTo(ConnectCluster connectCluster) {
        return this.id.compareTo(connectCluster.getId());
    }
}
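
Because ConnectCluster implements Comparable by id (note the compareTo above assumes a non-null id), collections of clusters sort with the JDK's natural ordering, for example:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class ConnectClusterSortDemo {
    public static void main(String[] args) {
        ConnectCluster a = new ConnectCluster();
        a.setId(2L);
        ConnectCluster b = new ConnectCluster();
        b.setId(1L);

        List<ConnectCluster> clusters = new ArrayList<>(Arrays.asList(a, b));
        Collections.sort(clusters); // ascending by id: b (1), then a (2)
    }
}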

View File

@@ -0,0 +1,38 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.connect;

import com.xiaojukeji.know.streaming.km.common.enums.group.GroupStateEnum;
import lombok.Data;
import lombok.NoArgsConstructor;

import java.io.Serializable;

@Data
@NoArgsConstructor
public class ConnectClusterMetadata implements Serializable {
    /**
     * Kafka cluster ID
     */
    private Long kafkaClusterPhyId;

    /**
     * Consumer group used by the cluster
     */
    private String groupName;

    /**
     * State of the cluster's consumer group, which also represents the cluster state
     */
    private GroupStateEnum state;

    /**
     * Leader URL reported by the workers
     */
    private String memberLeaderUrl;

    public ConnectClusterMetadata(Long kafkaClusterPhyId, String groupName, GroupStateEnum state, String memberLeaderUrl) {
        this.kafkaClusterPhyId = kafkaClusterPhyId;
        this.groupName = groupName;
        this.state = state;
        this.memberLeaderUrl = memberLeaderUrl;
    }
}

View File

@@ -0,0 +1,86 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.connect;

import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.utils.CommonUtils;
import lombok.Data;
import lombok.NoArgsConstructor;

import java.io.Serializable;

@Data
@NoArgsConstructor
public class ConnectWorker implements Serializable {
    protected static final ILog LOGGER = LogFactory.getLog(ConnectWorker.class);

    /**
     * Kafka cluster ID
     */
    private Long kafkaClusterPhyId;

    /**
     * Connect cluster ID
     */
    private Long connectClusterId;

    /**
     * Member ID
     */
    private String memberId;

    /**
     * Host
     */
    private String host;

    /**
     * JMX port
     */
    private Integer jmxPort;

    /**
     * URL
     */
    private String url;

    /**
     * Leader URL
     */
    private String leaderUrl;

    /**
     * 1 if this worker is the leader, 0 if not
     */
    private Integer leader;

    /**
     * Worker address
     */
    private String workerId;

    public ConnectWorker(Long kafkaClusterPhyId,
                         Long connectClusterId,
                         String memberId,
                         String host,
                         Integer jmxPort,
                         String url,
                         String leaderUrl,
                         Integer leader) {
        this.kafkaClusterPhyId = kafkaClusterPhyId;
        this.connectClusterId = connectClusterId;
        this.memberId = memberId;
        this.host = host;
        this.jmxPort = jmxPort;
        this.url = url;
        this.leaderUrl = leaderUrl;
        this.leader = leader;

        String workerId = CommonUtils.getWorkerId(url);
        if (workerId == null) {
            // Fall back to the member ID when the URL cannot be parsed.
            workerId = memberId;
            LOGGER.error("class=ConnectWorker||connectClusterId={}||memberId={}||url={}||msg=analysis url fail",
                    connectClusterId, memberId, url);
        }
        this.workerId = workerId;
    }
}
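
The constructor falls back to memberId whenever CommonUtils.getWorkerId(url) returns null. That helper is not part of this diff; purely as an assumption, a plausible shape for it would be extracting "host:port" from the worker's REST URL:

import java.net.URI;

public class WorkerIdParseDemo {
    // Hypothetical re-implementation for illustration only; the repository's
    // CommonUtils.getWorkerId may differ. Returning null triggers the memberId
    // fallback (and the error log) in the constructor above.
    public static String getWorkerId(String url) {
        try {
            URI uri = URI.create(url); // e.g. "http://10.0.0.1:8083"
            if (uri.getHost() == null || uri.getPort() == -1) {
                return null;
            }
            return uri.getHost() + ":" + uri.getPort();
        } catch (IllegalArgumentException e) {
            return null;
        }
    }
}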

View File

@@ -0,0 +1,58 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.connect;

import lombok.Data;
import lombok.NoArgsConstructor;

import java.io.Serializable;

@Data
@NoArgsConstructor
public class WorkerConnector implements Serializable {
    /**
     * Connect cluster ID
     */
    private Long connectClusterId;

    /**
     * Kafka cluster ID
     */
    private Long kafkaClusterPhyId;

    /**
     * Connector name
     */
    private String connectorName;

    private String workerMemberId;

    /**
     * Task state
     */
    private String state;

    /**
     * Task ID
     */
    private Integer taskId;

    /**
     * Worker info
     */
    private String workerId;

    /**
     * Error trace
     */
    private String trace;

    public WorkerConnector(Long kafkaClusterPhyId, Long connectClusterId, String connectorName, String workerMemberId, Integer taskId, String state, String workerId, String trace) {
        this.kafkaClusterPhyId = kafkaClusterPhyId;
        this.connectClusterId = connectClusterId;
        this.connectorName = connectorName;
        this.workerMemberId = workerMemberId;
        this.taskId = taskId;
        this.state = state;
        this.workerId = workerId;
        this.trace = trace;
    }
}

View File

@@ -0,0 +1,19 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.connect.config;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import org.apache.kafka.connect.runtime.rest.entities.ConfigInfo;

/**
 * @see ConfigInfo
 */
@Data
@NoArgsConstructor
@AllArgsConstructor
public class ConnectConfigInfo {
    private ConnectConfigKeyInfo definition;

    private ConnectConfigValueInfo value;
}

View File

@@ -0,0 +1,71 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.connect.config;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import org.apache.kafka.connect.runtime.rest.entities.ConfigInfo;
import org.apache.kafka.connect.runtime.rest.entities.ConfigInfos;

import java.util.*;

import static com.xiaojukeji.know.streaming.km.common.constant.Constant.CONNECTOR_CONFIG_ACTION_RELOAD_NAME;
import static com.xiaojukeji.know.streaming.km.common.constant.Constant.CONNECTOR_CONFIG_ERRORS_TOLERANCE_NAME;

/**
 * @see ConfigInfos
 */
@Data
@NoArgsConstructor
@AllArgsConstructor
public class ConnectConfigInfos {
    private static final Map<String, List<String>> recommendValuesMap = new HashMap<>();

    static {
        recommendValuesMap.put(CONNECTOR_CONFIG_ACTION_RELOAD_NAME, Arrays.asList("none", "restart"));
        recommendValuesMap.put(CONNECTOR_CONFIG_ERRORS_TOLERANCE_NAME, Arrays.asList("none", "all"));
    }

    private String name;

    private int errorCount;

    private List<String> groups;

    private List<ConnectConfigInfo> configs;

    public ConnectConfigInfos(ConfigInfos configInfos) {
        this.name = configInfos.name();
        this.errorCount = configInfos.errorCount();
        this.groups = configInfos.groups();
        this.configs = new ArrayList<>();

        for (ConfigInfo configInfo : configInfos.values()) {
            ConnectConfigKeyInfo definition = new ConnectConfigKeyInfo();
            definition.setName(configInfo.configKey().name());
            definition.setType(configInfo.configKey().type());
            definition.setRequired(configInfo.configKey().required());
            definition.setDefaultValue(configInfo.configKey().defaultValue());
            definition.setImportance(configInfo.configKey().importance());
            definition.setDocumentation(configInfo.configKey().documentation());
            definition.setGroup(configInfo.configKey().group());
            definition.setOrderInGroup(configInfo.configKey().orderInGroup());
            definition.setWidth(configInfo.configKey().width());
            definition.setDisplayName(configInfo.configKey().displayName());
            definition.setDependents(configInfo.configKey().dependents());

            ConnectConfigValueInfo value = new ConnectConfigValueInfo();
            value.setName(configInfo.configValue().name());
            value.setValue(configInfo.configValue().value());
            value.setRecommendedValues(recommendValuesMap.getOrDefault(configInfo.configValue().name(), configInfo.configValue().recommendedValues()));
            value.setErrors(configInfo.configValue().errors());
            value.setVisible(configInfo.configValue().visible());

            ConnectConfigInfo connectConfigInfo = new ConnectConfigInfo();
            connectConfigInfo.setDefinition(definition);
            connectConfigInfo.setValue(value);
            this.configs.add(connectConfigInfo);
        }
    }
}
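
ConnectConfigInfos mirrors Kafka Connect's ConfigInfos but substitutes curated recommended values for two keys (presumably config.action.reload and errors.tolerance, judging by the none/restart and none/all value sets). A sketch of where the conversion sits follows; obtaining the ConfigInfos, for example by deserializing the response of Connect's PUT /connector-plugins/{plugin}/config/validate endpoint, is assumed.

import org.apache.kafka.connect.runtime.rest.entities.ConfigInfos;

public class ConfigValidateDemo {
    public static void printRecommendations(ConfigInfos configInfos) {
        // Wrap the raw validate result; the constructor copies every key/value
        // pair and overrides the recommended values for the two curated keys.
        ConnectConfigInfos normalized = new ConnectConfigInfos(configInfos);
        normalized.getConfigs().forEach(c ->
                System.out.println(c.getDefinition().getName() + " -> " + c.getValue().getRecommendedValues()));
    }
}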

View File

@@ -0,0 +1,38 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.connect.config;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import org.apache.kafka.connect.runtime.rest.entities.ConfigKeyInfo;

import java.util.List;

/**
 * @see ConfigKeyInfo
 */
@Data
@NoArgsConstructor
@AllArgsConstructor
public class ConnectConfigKeyInfo {
    private String name;

    private String type;

    private boolean required;

    private String defaultValue;

    private String importance;

    private String documentation;

    private String group;

    private int orderInGroup;

    private String width;

    private String displayName;

    private List<String> dependents;
}

View File

@@ -0,0 +1,27 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.connect.config;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import org.apache.kafka.connect.runtime.rest.entities.ConfigValueInfo;

import java.util.List;

/**
 * @see ConfigValueInfo
 */
@Data
@NoArgsConstructor
@AllArgsConstructor
public class ConnectConfigValueInfo {
    private String name;

    private String value;

    private List<String> recommendedValues;

    private List<String> errors;

    private boolean visible;
}

Some files were not shown because too many files have changed in this diff.