Compare commits


77 Commits
v3.1 ... v3.2.0

Author SHA1 Message Date
zengqiao
6ef365e201 bump version to 3.2.0 2022-12-16 13:58:40 +08:00
zengqiao
edfa6a9f71 Update v3.2 containerized deployment info 2022-12-16 13:39:51 +08:00
孙超
860d0b92e2 V3.2 2022-12-16 13:27:09 +08:00
zengqiao
5bceed7105 [Optimize] Reduce the default number of shards for ES indices 2022-12-15 14:44:18 +08:00
zengqiao
44a2fe0398 Add v3.2.0 upgrade notes 2022-12-14 14:14:35 +08:00
zengqiao
218459ad1b Add v3.2.0 changelog 2022-12-14 14:14:20 +08:00
zengqiao
7db757bc12 [Optimize] Improve input parameters for Connector creation (see the sketch below)
1. Add a default value for config.action.reload;
2. Add a default value for errors.tolerance;
2022-12-14 14:12:32 +08:00
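The two keys named in this commit are standard Kafka Connect connector configs: `config.action.reload` controls how the worker reacts when externally referenced config values change, and `errors.tolerance` controls whether records that fail during conversion are skipped. A minimal sketch of what "adding default values" can look like; the class name and the chosen defaults ("restart", "none") are illustrative assumptions, not KnowStreaming's actual code:

```java
import java.util.Properties;

// Hypothetical helper: merge user-supplied connector configs with defaults
// before submitting them to the Connect REST API.
public final class ConnectorConfigDefaults {
    private ConnectorConfigDefaults() {}

    public static Properties withDefaults(Properties userConfigs) {
        Properties merged = new Properties();
        merged.putAll(userConfigs);
        // react to reloaded external config values by restarting the connector
        merged.putIfAbsent("config.action.reload", "restart");
        // fail fast on conversion errors instead of silently skipping records
        merged.putIfAbsent("errors.tolerance", "none");
        return merged;
    }
}
```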
zengqiao
896a943587 [Optimize] Shorten the default ES index retention to 15 days 2022-12-14 14:10:46 +08:00
zengqiao
cd2c388e68 [Optimize] Address Sonar code-scan findings 2022-12-14 14:07:30 +08:00
wyb
4543a339b7 [Bugfix] Fix an array out-of-bounds error during job updates (#744) 2022-12-14 13:56:29 +08:00
zengqiao
1c4fbef9f2 [Feature] Support deploying the API service and Job service separately (#829)
1. The JMX check is needed by every KS instance, so it moves from the Task module to the Core module;
2. Add a global switch for Task-module jobs to application.yml;
2022-12-09 16:11:03 +08:00
zengqiao
b2f0f69365 [Optimize] Improve the Overview page's TopN ES query flow (#823, sketched below)
1. Reuse the thread pool, and make its thread count configurable;
2. Eliminate duplicate queries that could occur when fetching TopN metrics;
3. Address issues reported by code scanning (SonarLint);
2022-12-09 14:39:17 +08:00
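Thread-pool reuse plus de-duplication of identical in-flight queries is a common pattern; a compact sketch of the idea under stated assumptions (a string cache key per TopN query, string results), not KnowStreaming's actual implementation:

```java
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch: one shared, configurably sized pool for TopN ES queries,
// with identical in-flight queries collapsed onto a single Future.
public class TopNQueryRunner {
    private final ExecutorService pool;
    private final Map<String, Future<String>> inFlight = new ConcurrentHashMap<>();

    public TopNQueryRunner(int threadNum) {   // thread count comes from configuration
        this.pool = Executors.newFixedThreadPool(threadNum);
    }

    public Future<String> query(String cacheKey, Callable<String> esCall) {
        // callers asking for the same key share one Future instead of
        // firing duplicate ES queries
        return inFlight.computeIfAbsent(cacheKey, k -> pool.submit(() -> {
            try {
                return esCall.call();
            } finally {
                inFlight.remove(k);           // allow later refreshes of this key
            }
        }));
    }
}
```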
wyb
c4fb18a73c [Bugfix] Fix inconsistent state of reassignment tasks (#815) 2022-12-08 17:13:14 +08:00
zengqiao
5cad7b4106 [Bugfix] Fix the blank-screen issue on the cluster Topic list page (#819)
The cluster Topic list's health-state mapping was wrong, so the page went blank whenever the health-state metric was present.
2022-12-07 16:27:27 +08:00
zengqiao
f3c4133cd2 [Bugfix] Query each Topic's latest metric from ES in batches (#817) 2022-12-07 16:15:01 +08:00
zengqiao
d9c59cb3d3 Add Connect REST endpoints 2022-12-07 10:20:02 +08:00
zengqiao
7a0db7161b Add Connect business-layer methods 2022-12-07 10:20:02 +08:00
zengqiao
6aefc16fa0 Add Connect-related tasks 2022-12-07 10:20:02 +08:00
zengqiao
186dcd07e0 Add v3.2 upgrade notes 2022-12-07 10:20:02 +08:00
zengqiao
e8652d5db5 Connect-related code 2022-12-07 10:20:02 +08:00
zengqiao
fb5964af84 Add kafka-connect-related packages 2022-12-07 10:20:02 +08:00
zengqiao
249fe7c700 Move ES-related files & add connect ES DAO classes 2022-12-07 10:20:02 +08:00
zengqiao
cc2a590b33 Add a custom KSPartialKafkaAdminClient (illustrated below)
The native KafkaAdminClient filters out Connect-cluster groups when parsing groups, so KSPartialKafkaAdminClient is added to make Connect groups retrievable.
2022-12-07 10:20:02 +08:00
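Kafka Connect worker groups register with the coordinator using protocol type "connect", while classic consumers use "consumer". A conceptual illustration of the widened filter this commit describes; the class and method are hypothetical, since the real KSPartialKafkaAdminClient re-implements part of KafkaAdminClient:

```java
// Hypothetical illustration of the group filter being widened so that
// Connect worker groups survive group listing, per the commit message.
public final class GroupProtocolFilter {
    private GroupProtocolFilter() {}

    /** Keep classic consumer groups and, additionally, Connect worker groups. */
    public static boolean keepGroup(String protocolType) {
        return protocolType.isEmpty()                  // simple/unset protocol
                || "consumer".equals(protocolType)     // classic consumers
                || "connect".equals(protocolType);     // the added capability
    }
}
```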
zengqiao
5b3f3e5575 Move the code that writes metrics into ES 2022-12-07 10:20:02 +08:00
wyb
36cf285397 [Bug] Fix wrong database selection in the logi-security module (#808) 2022-12-06 20:02:49 +08:00
zengqiao
4386563c2c Adjust the default elapsed-time value for metric collection so it is visible right away when viewing Top metrics 2022-12-06 16:47:53 +08:00
zengqiao
0123ce4a5a Improve the JMX port value returned for the Broker list 2022-12-06 16:47:07 +08:00
zengqiao
c3d47d3093 Pool KafkaAdminClient instances to avoid KafkaAdminClient performance problems 2022-12-06 16:46:11 +08:00
zengqiao
9735c4f885 Remove duplicately collected metrics 2022-12-06 16:41:27 +08:00
zengqiao
3a3141a361 Adjust the collection time of ZK metrics 2022-12-06 16:40:52 +08:00
zengqiao
ac30436324 [Bugfix] Fix a deadlock when updating health-check results (#728) 2022-12-05 16:30:37 +08:00
zengqiao
7176e418f5 [Optimize] Improve the calculation of health-check metrics (#726)
1. Add caching to reduce IO when computing health-state metrics;
2. Run health checks concurrently per resource dimension;
3. Clarify the functional boundary between HealthCheckResultService and HealthStateService;
2022-12-05 16:26:31 +08:00
zengqiao
ca794f507e [Optimize] Standardize the log output format (#800)
Change the log configuration so that every log line automatically carries class={className}; code no longer needs to write that part by hand.
2022-12-05 14:27:02 +08:00
zengqiao
0f8be4fadc [Optimize] Improve log output & unify local cache management (#800) 2022-12-05 14:04:19 +08:00
zengqiao
7066246e8f [Optimize] Stagger collection-task trigger times to reduce timeouts when fetching offset info (#726)
All metric-collection tasks currently fire on the exact minute, so they request partition offsets from Kafka at the same time, which causes:
1. Too many simultaneous requests, leading to timeouts;
2. Concurrent runs that may fetch a partition's offsets more than once;

The triggers are therefore staggered, as sketched below.
2022-12-05 13:49:35 +08:00
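A minimal sketch of the staggering idea, assuming a numeric per-cluster id and a scheduler that accepts an initial delay; this is illustrative, not the project's scheduler code:

```java
// Hypothetical helper: derive a stable offset in [0, 60s) from the cluster id,
// so collection tasks spread across the minute instead of all firing at second 0.
public final class TriggerStagger {
    private TriggerStagger() {}

    public static long staggerDelayMs(long clusterPhyId) {
        return Math.floorMod(Long.hashCode(clusterPhyId), 60) * 1000L;
    }
}
```

Passing this value as the initial delay of a fixed-rate schedule keeps each cluster's cadence at one minute while desynchronizing clusters from one another.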
zengqiao
7d1bb48b59 [Optimize] Improve logging of ZK four-letter-word command parsing (#805)
Handle previously missed metric names to cut down that portion of the WARN logs.
2022-12-05 13:39:26 +08:00
limaiwang
dd0d519677 [Optimize] Update the search copy for the directory tree in Zookeeper details (#793) 2022-12-05 12:15:03 +08:00
zengqiao
4293d05fca [Optimize] Improve the Topic metadata update strategy (#806) 2022-12-04 17:55:27 +08:00
zengqiao
2c82baf9fc [Optimize] Metric-collection performance optimization, part 1 (#726) 2022-12-04 15:41:48 +08:00
zengqiao
921161d6d0 [Bugfix] Fix the ReplicaMetricCollector compilation failure (#802) 2022-12-03 14:34:38 +08:00
zengqiao
e632c6c13f [Optimize] Address Sonar scan findings 2022-12-02 15:34:28 +08:00
zengqiao
5833a8644c [Optimize] Disable errorLogger and remove useless output (#801) 2022-12-02 15:29:17 +08:00
zengqiao
fab41e892f [Optimize] Unify log format & improve output, part 3 (#800) 2022-12-02 15:14:21 +08:00
zengqiao
7a52cf67b0 [Optimize] Unify log format & improve output, part 2 (#800) 2022-12-02 15:01:24 +08:00
zengqiao
175b8d643a [Optimize] Unify log format, part 1 (#800) 2022-12-02 14:39:57 +08:00
zengqiao
6241eb052a [Bugfix] Fix the wrong logger in the KafkaJMXClient class (#794) 2022-11-30 11:15:00 +08:00
zengqiao
c2fd0a8410 [Optimize] Clean up non-compliant code flagged by Sonar 2022-11-29 20:54:41 +08:00
zengqiao
5127b600ec [Optimize] Improve ESClient concurrency control (#787) 2022-11-29 10:47:57 +08:00
zengqiao
feb03aede6 [Optimize] Improve thread-pool names (#789) 2022-11-28 15:11:54 +08:00
duanxiaoqiu
47b6c5d86a [Bugfix] Fix retention-policy selection when creating a topic: before Kafka 0.10.1.0, compact and delete are mutually exclusive (didi#770) 2022-11-27 14:18:50 +08:00
SimonTeo58
c4a81613f4 [Optimize] Update copy in the Topic-Messages drawer (#771) 2022-11-24 21:54:29 +08:00
limaiwang
daeb5c4cec [Bugfix] Fix the parameter-validation error when the cluster config is left empty 2022-11-24 15:30:01 +08:00
WangYaobo
38def45ad6 [Doc] Add a no-data troubleshooting guide (#773) 2022-11-24 10:44:37 +08:00
pen4
4b29a2fdfd update org.springframework:spring-context 5.3.18 to 5.3.19 2022-11-23 11:38:11 +08:00
zengqiao
a165ecaeef [Bugfix] Fix the wrong version gate for Broker & Topic config changes (#762)
Kafka v2.3 added incremental config updates, but KS wrongly treated the capability as available from 0.11.0, so the gate is adjusted (see the sketch below).
2022-11-21 15:56:33 +08:00
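Incremental config updates arrived with `Admin#incrementalAlterConfigs` (KIP-339) in Kafka 2.3. A sketch of the corrected gate, with deliberately simplified "major.minor" version parsing; the helper is hypothetical, not the project's version-control code:

```java
// Hypothetical version gate: only brokers at 2.3+ support
// incrementalAlterConfigs; older brokers need the legacy alterConfigs path.
public final class ConfigApiVersionGate {
    private ConfigApiVersionGate() {}

    public static boolean supportsIncrementalAlterConfigs(String brokerVersion) {
        String[] parts = brokerVersion.split("\\.");
        int major = Integer.parseInt(parts[0]);
        int minor = Integer.parseInt(parts[1]);
        return major > 2 || (major == 2 && minor >= 3); // 2.3+, not 0.11.0
    }
}
```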
night.liang
6637ba4ccc [Optimize] optimize zk OutstandingRequests checker’s exception log (#738) 2022-11-18 17:12:07 +08:00
duanxiaoqiu
2f807eec2b [Feat] Change the Topic list's health score to a health state (#758) 2022-11-18 13:56:27 +08:00
石臻臻的杂货铺
636c2c6a83 Update README.md 2022-11-17 13:33:40 +08:00
zengqiao
898a55c703 [Bugfix] Fix the wrong name used when storing the Broker list's LogSize metric (#759) 2022-11-17 13:27:45 +08:00
zengqiao
8ffe7e7101 [Bugfix] Fix missing Group metrics in Prometheus (#756) 2022-11-14 13:33:16 +08:00
zengqiao
7661826ea5 [Optimize] Add ClusterParam to health checks so Kafka- and Connect-related check tasks can be separated 2022-11-10 16:24:39 +08:00
zengqiao
e456be91ef [Bugfix] Reload JMX when a cluster's JMX config changes 2022-11-10 16:04:40 +08:00
zengqiao
da0a97cabf [Optimize] Restructure Task code in preparation for the Connector feature 2022-11-09 10:28:52 +08:00
zengqiao
c1031a492a [Optimize] Add ES index deletion 2022-11-09 10:28:52 +08:00
zengqiao
3c8aaf528c [Bugfix] Fix the wrong cluster count returned when metrics are missing (#741) 2022-11-09 10:28:52 +08:00
黄海婷
70ff20a2b0 styles: hover style for cardBar card-title icons 2022-11-07 10:38:28 +08:00
黄海婷
6918f4babe styles: add a hover background color to the job list's custom-column button 2022-11-07 10:38:28 +08:00
黄海婷
805a704d34 styles: some icons need a background color on hover 2022-11-07 10:38:28 +08:00
黄海婷
c69c289bc4 styles: some icons need a background color on hover 2022-11-07 10:38:28 +08:00
zengqiao
dd5869e246 [Optimize] Restructure code in preparation for the Connect feature 2022-11-07 10:13:26 +08:00
Richard
b51ffb81a3 [Bugfix] No thread-bound request found. (#743) 2022-11-07 10:06:54 +08:00
黄海婷
ed0efd6bd2 styles: change font color #adb5bc to #74788D 2022-11-03 16:49:35 +08:00
黄海婷
39d2fe6195 styles: bold the hint text below the message-size test dialog 2022-11-03 16:49:35 +08:00
黄海婷
7471d05c20 styles: adjust the character-count font in the message-size test dialog 2022-11-03 16:49:35 +08:00
黄海婷
3492688733 feat: add a hover tooltip to the Consumer list's refresh button 2022-11-01 17:37:37 +08:00
Sean
a603783615 [Optimize] Add flatten.xml filtering to .gitignore in preparation for introducing flatten (#732) 2022-11-01 14:16:53 +08:00
night.liang
5c9096d564 [Bugfix] fix replica dsl (#708) 2022-11-01 10:45:59 +08:00
490 changed files with 42072 additions and 4640 deletions

.gitignore
View File

@@ -109,4 +109,8 @@ out/*
dist/
dist/*
km-rest/src/main/resources/templates/
*dependency-reduced-pom*
#filter flattened xml
*/.flattened-pom.xml
.flattened-pom.xml
*/*/.flattened-pom.xml

View File

@@ -143,7 +143,7 @@ PS: When asking a question, please describe the problem fully in one message and include environment details
**`2. WeChat group`**
WeChat group: add the WeChat ID of `mike_zhangliang` or `PenceXie` with the note "KnowStreaming" to join.
WeChat group: add the WeChat ID of `mike_zhangliang`, `PenceXie`, or `szzdzhp001` with the note "KnowStreaming" to join.
<br/>
Before joining, please take a moment to star the project; one small star motivates the KnowStreaming authors to keep building the community.

View File

@@ -1,4 +1,62 @@
## v3.2.0
**Bug fixes**
- Fixed a deadlock when writing health-check results to the DB;
- Fixed the wrong logger in the KafkaJMXClient class;
- Backend: fixed the Topic retention policy being multi-selectable on Kafka versions before 0.10.1.0, where compact and delete should be an either-or choice;
- Fixed an error when accessing a cluster without filling in the cluster config;
- Upgraded spring-context to 5.3.19 to fix a security vulnerability;
- Fixed wrong version info in the multi-version compatibility config for Broker & Topic config changes;
- Changed the Topic list's health score to a health state;
- Fixed the Broker LogSize metric being unqueryable due to a wrong storage name;
- Fixed missing Group metrics in Prometheus;
- Fixed a wrong cluster count caused by missing health-state metrics;
- Fixed an exception when background tasks record operation logs without operator-user info;
- Fixed a wrong DSL in Replica metric queries;
- Disabled errorLogger to fix duplicated error-log output;
- Fixed a failure to update user info in system administration;
- Fixed reassignment tasks staying in the running state after the original AR info was lost;
- Fixed failures when querying real-time data for the cluster Topic list;
- Fixed the blank-screen issue on the cluster Topic list page;
- Fixed an array out-of-bounds access during replica changes caused by abnormal AR data;
**Product improvements**
- Health checks now run concurrently per resource dimension;
- Unified the log output format and improved some log messages;
- Toned down misleading WARN logs when parsing ZK four-letter-word command results;
- Improved the search copy for the directory tree in Zookeeper details;
- Improved thread-pool names to ease problem analysis in third-party systems;
- Removed ESClient concurrency control, reducing the number of ESClients created and improving utilization;
- Improved copy in the Topic Messages drawer;
- Improved the error logs written when ZK health checks fail;
- Raised the timeout for fetching offset info, lowering the chance of request timeouts under high concurrency;
- Improved the Topic & Partition metadata update strategy to reduce DB connection usage;
- Addressed Sonar code-scan issues;
- Improved the collection of partition offset metrics;
- Improved frontend chart component logic;
- Improved the product theme colors;
- Added a hover tooltip to the Consumer list's refresh button;
- Improved the test dialog UX when configuring a Topic's message size;
- Improved the Overview page's TopN query flow;
**New features**
- Added a troubleshooting guide for pages showing no data;
- Added ES index deletion;
- Support for deploying the API service and Job service separately;
**Kafka Connect beta (newly released in v3.2.0)**
- Management of Connect clusters;
- CRUD for Connectors;
- Metric dashboards for Connect clusters & Connectors;
---
## v3.1.0
**Bug fixes**

View File

@@ -0,0 +1,286 @@
## 1. Cluster access errors
### 1.1 Symptom
As shown below, when the cluster is not empty, the cause is most likely a misconfigured address.
<img src=http://img-ys011.didistatic.com/static/dc2img/do1_BRiXBvqYFK2dxSF1aqgZ width="80%">
### 1.2 Solution
When accessing the cluster, resolve the issue according to the error shown. For example:
<img src=http://img-ys011.didistatic.com/static/dc2img/do1_Yn4LhV8aeSEKX1zrrkUi width="50%">
### 1.3 Normal case
When the cluster is accessed successfully, the page fills in automatically and no error is shown.
## 2. JMX connection failure (requires v3.0.1 or later)
### 2.1 Symptom
A red exclamation mark in the JMX Port column of the Broker list means that Broker's JMX connection is broken.
<img src=http://img-ys011.didistatic.com/static/dc2img/do1_MLlLCfAktne4X6MBtBUd width="90%">
#### 2.1.1 Cause 1: JMX not enabled
##### 2.1.1.1 Symptom
A JMX Port value of -1 in the Broker list means JMX is not enabled on that Broker.
<img src=http://img-ys011.didistatic.com/static/dc2img/do1_E1PD8tPsMeR2zYLFBFAu width="90%">
##### 2.1.1.2 Solution
Enable JMX as follows:
1. Edit `kafka-server-start.sh` in Kafka's bin directory:
```
# add the JMX port below this block
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
export JMX_PORT=9999 # add this line; the value does not have to be 9999
fi
```
2. Edit `kafka-run-class.sh` in Kafka's bin directory:
```
# JMX settings
if [ -z "$KAFKA_JMX_OPTS" ]; then
KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=${IP of this machine}"
fi
# JMX port to use
if [ $JMX_PORT ]; then
KAFKA_JMX_OPTS="$KAFKA_JMX_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT -Dcom.sun.management.jmxremote.rmi.port=$JMX_PORT"
fi
```
3. Restart the Kafka Broker.
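Before configuring the port in KnowStreaming, you can verify reachability with a small standalone check. This sketch uses the standard `javax.management.remote` API and assumes an unauthenticated, non-SSL JMX endpoint; replace the host and port with your broker's:

```java
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Standalone sanity check (not part of KnowStreaming): open a JMX connection
// to the broker and print the MBean count if it succeeds.
public class JmxCheck {
    public static void main(String[] args) throws Exception {
        String url = "service:jmx:rmi:///jndi/rmi://192.168.0.1:9999/jmxrmi";
        try (JMXConnector connector = JMXConnectorFactory.connect(new JMXServiceURL(url))) {
            System.out.println("JMX OK, MBean count: "
                    + connector.getMBeanServerConnection().getMBeanCount());
        }
    }
}
```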
#### 2.1.2 Cause 2: JMX misconfigured
##### 2.1.2.1 Symptom
Error logs:
```
# Error 1: the message shows the real IP; in this case the JMX configuration itself is almost certainly wrong.
2021-01-27 10:06:20.730 ERROR 50901 --- [ics-Thread-1-62] c.x.k.m.c.utils.jmx.JmxConnectorWrap : JMX connect exception, host:192.168.0.1 port:9999. java.rmi.ConnectException: Connection refused to host: 192.168.0.1; nested exception is:
# Error 2: the message shows the IP 127.0.0.1; the machine's hostname configuration is probably wrong.
2021-01-27 10:06:20.730 ERROR 50901 --- [ics-Thread-1-62] c.x.k.m.c.utils.jmx.JmxConnectorWrap : JMX connect exception, host:127.0.0.1 port:9999. java.rmi.ConnectException: Connection refused to host: 127.0.0.1;; nested exception is:
```
##### 2.1.2.2 Solution
Enable JMX as follows:
1. Edit `kafka-server-start.sh` in Kafka's bin directory:
```
# add the JMX port below this block
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
export JMX_PORT=9999 # add this line; the value does not have to be 9999
fi
```
2. Edit `kafka-run-class.sh` in Kafka's bin directory:
```
# JMX settings
if [ -z "$KAFKA_JMX_OPTS" ]; then
KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=${IP of this machine}"
fi
# JMX port to use
if [ $JMX_PORT ]; then
KAFKA_JMX_OPTS="$KAFKA_JMX_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT -Dcom.sun.management.jmxremote.rmi.port=$JMX_PORT"
fi
```
3. Restart the Kafka Broker.
#### 2.1.3 Cause 3: JMX with SSL enabled
##### 2.1.3.1 Solution
<img src=http://img-ys011.didistatic.com/static/dc2img/do1_kNyCi8H9wtHSRkWurB6S width="50%">
#### 2.1.4 Cause 4: connecting to the wrong IP
##### 2.1.4.1 Symptom
The Broker is configured with both internal and external networks, and JMX may be bound to either the internal or the external IP; `KnowStreaming` must connect to the IP on the right network to gain access.
For example, given the Broker's ZK registration below, we expect to connect to the address marked `INTERNAL` in `endpoints`, but `KnowStreaming` connects to the `EXTERNAL` one instead.
```json
{
  "listener_security_protocol_map": {
    "EXTERNAL": "SASL_PLAINTEXT",
    "INTERNAL": "SASL_PLAINTEXT"
  },
  "endpoints": [
    "EXTERNAL://192.168.0.1:7092",
    "INTERNAL://192.168.0.2:7093"
  ],
  "jmx_port": 8099,
  "host": "192.168.0.1",
  "timestamp": "1627289710439",
  "port": -1,
  "version": 4
}
```
##### 2.1.4.2 Solution
Manually add a `useWhichEndpoint` field to the `jmx_properties` column of the `ks_km_physical_cluster` table; this controls which JMX IP and port `KnowStreaming` connects to.
`jmx_properties` format:
```json
{
  "maxConn": 100, // max JMX connections from KM to a single Broker
  "username": "xxxxx", // username, optional
  "password": "xxxx", // password, optional
  "openSSL": true, // SSL switch: true enables SSL, false disables it
  "useWhichEndpoint": "EXTERNAL" // name of the network to connect to; EXTERNAL means use the EXTERNAL address in endpoints
}
```
SQL example:
```sql
UPDATE ks_km_physical_cluster SET jmx_properties='{ "maxConn": 10, "username": "xxxxx", "password": "xxxx", "openSSL": false , "useWhichEndpoint": "xxx"}' where id={xxx};
```
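For illustration, the effect of `useWhichEndpoint` boils down to selecting the endpoint whose listener name matches the configured value. A sketch with hypothetical names; the real selection logic lives inside KnowStreaming's JMX client:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical endpoint picker: given the broker's endpoint list from ZK,
// return the host of the endpoint whose listener name matches.
public final class EndpointPicker {
    private EndpointPicker() {}

    public static String pickHost(List<String> endpoints, String useWhichEndpoint) {
        return endpoints.stream()
                .filter(e -> e.startsWith(useWhichEndpoint + "://"))
                .map(e -> e.substring(e.indexOf("://") + 3, e.lastIndexOf(':')))
                .findFirst()
                .orElse(null);
    }

    public static void main(String[] args) {
        List<String> endpoints = Arrays.asList(
                "EXTERNAL://192.168.0.1:7092", "INTERNAL://192.168.0.2:7093");
        System.out.println(pickHost(endpoints, "INTERNAL")); // prints 192.168.0.2
    }
}
```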
### 2.2 Normal case
After the change, if every row in the JMX PORT column shows green, JMX is working.
<img src=http://img-ys011.didistatic.com/static/dc2img/do1_ymtDTCiDlzfrmSCez2lx width="90%">
## 3. Elasticsearch problems
Note: on macOS, running curl commands may trigger a zsh error. The following steps work around it.
```
# 1. Open .zshrc: vim ~/.zshrc
# 2. Add this line to .zshrc: setopt no_nomatch
# 3. Reload the config: source ~/.zshrc
```
### 3.1 Cause 1: missing indices
#### 3.1.1 Symptom
Error message:
```
com.didiglobal.logi.elasticsearch.client.model.exception.ESIndexNotFoundException: method [GET], host[http://127.0.0.1:9200], URI [/ks_kafka_broker_metric_2022-10-21,ks_kafka_broker_metric_2022-10-22/_search], status line [HTTP/1.1 404 Not Found]
```
Run `curl http://{ES IP}:{ES port}/_cat/indices/ks_kafka*` to list the KS indices; none are found.
#### 3.1.2 Solution
Run the [/km-dist/init/template/template.sh](https://github.com/didi/KnowStreaming/blob/master/km-dist/init/template/template.sh) script to create the indices.
### 3.2 Cause 2: wrong index template
#### 3.2.1 Symptom
The multi-cluster list shows data, but the charts on the cluster detail page show none. Querying the KS index-template list shows that the templates do not exist.
```
curl {ES IP}:{ES port}/_cat/templates/ks_kafka*?v&h=name
```
A normal set of KS templates looks like the following.
<img src=http://img-ys011.didistatic.com/static/dc2img/do1_l79bPYSci9wr6KFwZDA6 width="90%">
#### 3.2.2 Solution
Delete the KS index templates and indices:
```
curl -XDELETE {ES IP}:{ES port}/ks_kafka*
curl -XDELETE {ES IP}:{ES port}/_template/ks_kafka*
```
Run the [/km-dist/init/template/template.sh](https://github.com/didi/KnowStreaming/blob/master/km-dist/init/template/template.sh) script to initialize the indices and templates.
### 3.3 Cause 3: cluster shard limit reached
#### 3.3.1 Symptom
Error message:
```
com.didiglobal.logi.elasticsearch.client.model.exception.ESIndexNotFoundException: method [GET], host[http://127.0.0.1:9200], URI [/ks_kafka_broker_metric_2022-10-21,ks_kafka_broker_metric_2022-10-22/_search], status line [HTTP/1.1 404 Not Found]
```
Manually creating an index also fails.
```
# command to create the ks_kafka_cluster_metric_test index
curl -s -XPUT http://{ES IP}:{ES port}/ks_kafka_cluster_metric_test
```
#### 3.3.2 Solution
ES limits the number of shards to 1000 per node by default (`cluster.max_shards_per_node`); once the limit is reached, index creation fails.
+ Raise the shard limit:
```
curl -XPUT -H"content-type:application/json" http://{ES IP}:{ES port}/_cluster/settings -d '
{
"persistent": {
"cluster": {
"max_shards_per_node":{shard limit, default 1000}
}
}
}'
```
Run the [/km-dist/init/template/template.sh](https://github.com/didi/KnowStreaming/blob/master/km-dist/init/template/template.sh) script to recreate the missing indices.

View File

@@ -4,11 +4,122 @@
- To upgrade to a specific version, you must apply, in order, every change from your current version up to the target version before the system will work properly.
- If an intermediate version has no upgrade notes, simply replacing the package is enough to upgrade from the previous version to that one.
### 6.2.0 Upgrade to the `master` version
### Upgrade to the `master` version
None yet
### 6.2.1 Upgrade to `v3.1.0`
### Upgrade to `3.2.0`
**Config changes**
```yaml
# add the following config (indentation reconstructed; the flattened original lost it)
spring:
  logi-job: # DB config of the logi-job module that know-streaming depends on; keeping it identical to know-streaming's DB config is fine by default
    enable: true # true enables job tasks, false disables them. KS can be deployed as two sets of services, one serving frontend requests and one running job tasks; this field controls the split
# thread-pool sizing
thread-pool:
  es:
    search: # ES query thread pool
      thread-num: 20 # pool size
      queue-size: 10000 # queue size
# client-pool sizing
client-pool:
  kafka-admin:
    client-cnt: 1 # number of KafkaAdminClients created per Kafka cluster
# ES client config
es:
  index:
    expire: 15 # index retention in days; 15 means indices older than 15 days are expired and deleted by KS
```
**SQL 变更**
```sql
DROP TABLE IF EXISTS `ks_kc_connect_cluster`;
CREATE TABLE `ks_kc_connect_cluster` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'Connect cluster ID',
  `kafka_cluster_phy_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'Kafka cluster ID',
  `name` varchar(128) NOT NULL DEFAULT '' COMMENT 'cluster name',
  `group_name` varchar(128) NOT NULL DEFAULT '' COMMENT 'cluster group name',
  `cluster_url` varchar(1024) NOT NULL DEFAULT '' COMMENT 'cluster URL',
  `member_leader_url` varchar(1024) NOT NULL DEFAULT '' COMMENT 'leader member URL',
  `version` varchar(64) NOT NULL DEFAULT '' COMMENT 'connect version',
  `jmx_properties` text COMMENT 'JMX config',
  `state` tinyint(4) NOT NULL DEFAULT '1' COMMENT 'state of the consumer group used by the cluster, which also represents the cluster state: -1 Unknown, 0 ReBalance, 1 Active, 2 Dead, 3 Empty',
  `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'access time',
  `update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'update time',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_id_group_name` (`id`,`group_name`),
  UNIQUE KEY `uniq_name_kafka_cluster` (`name`,`kafka_cluster_phy_id`),
  KEY `idx_kafka_cluster_phy_id` (`kafka_cluster_phy_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='Connect cluster info table';
DROP TABLE IF EXISTS `ks_kc_connector`;
CREATE TABLE `ks_kc_connector` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
  `kafka_cluster_phy_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'Kafka cluster ID',
  `connect_cluster_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'Connect cluster ID',
  `connector_name` varchar(512) NOT NULL DEFAULT '' COMMENT 'Connector name',
  `connector_class_name` varchar(512) NOT NULL DEFAULT '' COMMENT 'Connector class',
  `connector_type` varchar(32) NOT NULL DEFAULT '' COMMENT 'Connector type',
  `state` varchar(45) NOT NULL DEFAULT '' COMMENT 'state',
  `topics` text COMMENT 'accessed Topics',
  `task_count` int(11) NOT NULL DEFAULT '0' COMMENT 'task count',
  `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'creation time',
  `update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'update time',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_connect_cluster_id_connector_name` (`connect_cluster_id`,`connector_name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='Connector info table';
DROP TABLE IF EXISTS `ks_kc_worker`;
CREATE TABLE `ks_kc_worker` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
  `kafka_cluster_phy_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'Kafka cluster ID',
  `connect_cluster_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'Connect cluster ID',
  `member_id` varchar(512) NOT NULL DEFAULT '' COMMENT 'member ID',
  `host` varchar(128) NOT NULL DEFAULT '' COMMENT 'hostname',
  `jmx_port` int(16) NOT NULL DEFAULT '-1' COMMENT 'JMX port',
  `url` varchar(1024) NOT NULL DEFAULT '' COMMENT 'URL',
  `leader_url` varchar(1024) NOT NULL DEFAULT '' COMMENT 'leader URL',
  `leader` int(16) NOT NULL DEFAULT '0' COMMENT 'state: 1 = leader, 0 = not leader',
  `worker_id` varchar(128) NOT NULL COMMENT 'worker address',
  `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'creation time',
  `update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'update time',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_cluster_id_member_id` (`connect_cluster_id`,`member_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='worker info table';
DROP TABLE IF EXISTS `ks_kc_worker_connector`;
CREATE TABLE `ks_kc_worker_connector` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
  `kafka_cluster_phy_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'Kafka cluster ID',
  `connect_cluster_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'Connect cluster ID',
  `connector_name` varchar(512) NOT NULL DEFAULT '' COMMENT 'Connector name',
  `worker_member_id` varchar(256) NOT NULL DEFAULT '',
  `task_id` int(16) NOT NULL DEFAULT '-1' COMMENT 'Task ID',
  `state` varchar(128) DEFAULT NULL COMMENT 'task state',
  `worker_id` varchar(128) DEFAULT NULL COMMENT 'worker info',
  `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'creation time',
  `update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'update time',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_relation` (`connect_cluster_id`,`connector_name`,`task_id`,`worker_member_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='Worker-Connector relation table';
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_CONNECTOR_FAILED_TASK_COUNT', '{\"value\" : 1}', 'number of connector tasks in Failed state', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_CONNECTOR_UNASSIGNED_TASK_COUNT', '{\"value\" : 1}', 'number of unassigned connector tasks', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_CONNECT_CLUSTER_TASK_STARTUP_FAILURE_PERCENTAGE', '{\"value\" : 0.05}', 'Connect cluster task startup failure rate', 'admin');
```
---
### Upgrade to `v3.1.0`
```sql
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_ZK_BRAIN_SPLIT', '{ \"value\": 1} ', 'ZK split-brain', 'admin');
@@ -20,7 +131,7 @@ INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value
```
### 6.2.2 Upgrade to `v3.0.1`
### Upgrade to `v3.0.1`
**ES index templates**
```bash
@@ -155,7 +266,7 @@ CREATE TABLE `ks_km_group` (
```
### 6.2.3 Upgrade to `v3.0.0`
### Upgrade to `v3.0.0`
**SQL changes**
@@ -167,7 +278,7 @@ ADD COLUMN `zk_properties` TEXT NULL COMMENT 'ZK config' AFTER `jmx_properties`;
---
### 6.2.4 Upgrade to `v3.0.0-beta.2`
### Upgrade to `v3.0.0-beta.2`
**Config changes**
@@ -238,7 +349,7 @@ ALTER TABLE `logi_security_oplog`
---
### 6.2.5 Upgrade to `v3.0.0-beta.1`
### Upgrade to `v3.0.0-beta.1`
**SQL changes**
@@ -257,7 +368,7 @@ ALTER COLUMN `operation_methods` set default '';
---
### 6.2.6 Upgrade from `2.x` to `v3.0.0-beta.0`
### Upgrade from `2.x` to `v3.0.0-beta.0`
**Upgrade steps:**

View File

@@ -0,0 +1,15 @@
package com.xiaojukeji.know.streaming.km.biz.cluster;
import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterConnectorsOverviewDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.connect.ConnectStateVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.connector.ClusterConnectorOverviewVO;
/**
* Overview of Connectors in a Kafka cluster
*/
public interface ClusterConnectorsManager {
PaginationResult<ClusterConnectorOverviewVO> getClusterConnectorsOverview(Long clusterPhyId, ClusterConnectorsOverviewDTO dto);
ConnectStateVO getClusterConnectorsState(Long clusterPhyId);
}

View File

@@ -6,6 +6,8 @@ import com.xiaojukeji.know.streaming.km.biz.cluster.ClusterBrokersManager;
import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterBrokersOverviewDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationSortDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.broker.Broker;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
import com.xiaojukeji.know.streaming.km.common.bean.entity.config.JmxConfig;
import com.xiaojukeji.know.streaming.km.common.bean.entity.kafkacontroller.KafkaController;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.BrokerMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
@@ -16,6 +18,8 @@ import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.res.ClusterBroker
import com.xiaojukeji.know.streaming.km.common.bean.vo.kafkacontroller.KafkaControllerVO;
import com.xiaojukeji.know.streaming.km.common.constant.KafkaConstant;
import com.xiaojukeji.know.streaming.km.common.enums.SortTypeEnum;
import com.xiaojukeji.know.streaming.km.common.enums.cluster.ClusterRunStateEnum;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.PaginationMetricsUtil;
import com.xiaojukeji.know.streaming.km.common.utils.PaginationUtil;
import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
@@ -24,6 +28,7 @@ import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerMetricService;
import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerService;
import com.xiaojukeji.know.streaming.km.core.service.kafkacontroller.KafkaControllerService;
import com.xiaojukeji.know.streaming.km.core.service.topic.TopicService;
import com.xiaojukeji.know.streaming.km.persistence.cache.LoadedClusterPhyCache;
import com.xiaojukeji.know.streaming.km.persistence.kafka.KafkaJMXClient;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
@@ -83,9 +88,13 @@ public class ClusterBrokersManagerImpl implements ClusterBrokersManager {
Map<Integer, Boolean> jmxConnectedMap = new HashMap<>();
brokerList.forEach(elem -> jmxConnectedMap.put(elem.getBrokerId(), kafkaJMXClient.getClientWithCheck(clusterPhyId, elem.getBrokerId()) != null));
ClusterPhy clusterPhy = LoadedClusterPhyCache.getByPhyId(clusterPhyId);
// convert format
return PaginationResult.buildSuc(
this.convert2ClusterBrokersOverviewVOList(
clusterPhy,
paginationResult.getData().getBizData(),
brokerList,
metricsResult.getData(),
@@ -169,7 +178,8 @@ public class ClusterBrokersManagerImpl implements ClusterBrokersManager {
);
}
private List<ClusterBrokersOverviewVO> convert2ClusterBrokersOverviewVOList(List<Integer> pagedBrokerIdList,
private List<ClusterBrokersOverviewVO> convert2ClusterBrokersOverviewVOList(ClusterPhy clusterPhy,
List<Integer> pagedBrokerIdList,
List<Broker> brokerList,
List<BrokerMetrics> metricsList,
Topic groupTopic,
@@ -185,9 +195,15 @@ public class ClusterBrokersManagerImpl implements ClusterBrokersManager {
Broker broker = brokerMap.get(brokerId);
BrokerMetrics brokerMetrics = metricsMap.get(brokerId);
Boolean jmxConnected = jmxConnectedMap.get(brokerId);
voList.add(this.convert2ClusterBrokersOverviewVO(brokerId, broker, brokerMetrics, groupTopic, transactionTopic, kafkaController, jmxConnected));
}
// fill in the JMX port for non-ZK-mode clusters
if (!clusterPhy.getRunState().equals(ClusterRunStateEnum.RUN_ZK.getRunState())) {
JmxConfig jmxConfig = ConvertUtil.str2ObjByJson(clusterPhy.getJmxProperties(), JmxConfig.class);
voList.forEach(elem -> elem.setJmxPort(jmxConfig.getJmxPort() == null ? -1 : jmxConfig.getJmxPort()));
}
return voList;
}

View File

@@ -0,0 +1,152 @@
package com.xiaojukeji.know.streaming.km.biz.cluster.impl;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.biz.cluster.ClusterConnectorsManager;
import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterConnectorsOverviewDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.ClusterConnectorDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.MetricDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.connect.MetricsConnectorsDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.ConnectCluster;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.ConnectWorker;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.WorkerConnector;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.connect.ConnectorMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.po.connect.ConnectorPO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.connect.ConnectStateVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.connector.ClusterConnectorOverviewVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.metrics.line.MetricMultiLinesVO;
import com.xiaojukeji.know.streaming.km.common.converter.ConnectConverter;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.PaginationMetricsUtil;
import com.xiaojukeji.know.streaming.km.common.utils.PaginationUtil;
import com.xiaojukeji.know.streaming.km.core.service.connect.cluster.ConnectClusterService;
import com.xiaojukeji.know.streaming.km.core.service.connect.connector.ConnectorMetricService;
import com.xiaojukeji.know.streaming.km.core.service.connect.connector.ConnectorService;
import com.xiaojukeji.know.streaming.km.core.service.connect.worker.WorkerConnectorService;
import com.xiaojukeji.know.streaming.km.core.service.connect.worker.WorkerService;
import org.apache.kafka.connect.runtime.AbstractStatus;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
@Service
public class ClusterConnectorsManagerImpl implements ClusterConnectorsManager {
private static final ILog LOGGER = LogFactory.getLog(ClusterConnectorsManagerImpl.class);
@Autowired
private ConnectorService connectorService;
@Autowired
private ConnectClusterService connectClusterService;
@Autowired
private ConnectorMetricService connectorMetricService;
@Autowired
private WorkerService workerService;
@Autowired
private WorkerConnectorService workerConnectorService;
@Override
public PaginationResult<ClusterConnectorOverviewVO> getClusterConnectorsOverview(Long clusterPhyId, ClusterConnectorsOverviewDTO dto) {
List<ConnectCluster> clusterList = connectClusterService.listByKafkaCluster(clusterPhyId);
List<ConnectorPO> poList = connectorService.listByKafkaClusterIdFromDB(clusterPhyId);
// query the latest metrics
Result<List<ConnectorMetrics>> latestMetricsResult = connectorMetricService.getLatestMetricsFromES(
clusterPhyId,
poList.stream().map(elem -> new ClusterConnectorDTO(elem.getConnectClusterId(), elem.getConnectorName())).collect(Collectors.toList()),
dto.getLatestMetricNames()
);
if (latestMetricsResult.failed()) {
LOGGER.error("method=getClusterConnectorsOverview||clusterPhyId={}||result={}||errMsg=get latest metric failed", clusterPhyId, latestMetricsResult);
return PaginationResult.buildFailure(latestMetricsResult, dto);
}
// convert to VOs
List<ClusterConnectorOverviewVO> voList = ConnectConverter.convert2ClusterConnectorOverviewVOList(clusterList, poList,latestMetricsResult.getData());
// apply pagination
PaginationResult<ClusterConnectorOverviewVO> voPaginationResult = this.pagingConnectorInLocal(voList, dto);
if (voPaginationResult.failed()) {
LOGGER.error("method=getClusterConnectorsOverview||clusterPhyId={}||result={}||errMsg=pagination in local failed", clusterPhyId, voPaginationResult);
return PaginationResult.buildFailure(voPaginationResult, dto);
}
// query historical metrics
Result<List<MetricMultiLinesVO>> lineMetricsResult = connectorMetricService.listConnectClusterMetricsFromES(
clusterPhyId,
this.buildMetricsConnectorsDTO(
voPaginationResult.getData().getBizData().stream().map(elem -> new ClusterConnectorDTO(elem.getConnectClusterId(), elem.getConnectorName())).collect(Collectors.toList()),
dto.getMetricLines()
)
);
return PaginationResult.buildSuc(
ConnectConverter.supplyData2ClusterConnectorOverviewVOList(
voPaginationResult.getData().getBizData(),
lineMetricsResult.getData()
),
voPaginationResult
);
}
@Override
public ConnectStateVO getClusterConnectorsState(Long clusterPhyId) {
// get the list of Connect cluster IDs
List<ConnectCluster> connectClusterList = connectClusterService.listByKafkaCluster(clusterPhyId);
List<ConnectorPO> connectorPOList = connectorService.listByKafkaClusterIdFromDB(clusterPhyId);
List<WorkerConnector> workerConnectorList = workerConnectorService.listByKafkaClusterIdFromDB(clusterPhyId);
List<ConnectWorker> connectWorkerList = workerService.listByKafkaClusterIdFromDB(clusterPhyId);
return convert2ConnectStateVO(connectClusterList, connectorPOList, workerConnectorList, connectWorkerList);
}
/**************************************************** private method ****************************************************/
private MetricsConnectorsDTO buildMetricsConnectorsDTO(List<ClusterConnectorDTO> connectorDTOList, MetricDTO metricDTO) {
MetricsConnectorsDTO dto = ConvertUtil.obj2Obj(metricDTO, MetricsConnectorsDTO.class);
dto.setConnectorNameList(connectorDTOList == null? new ArrayList<>(): connectorDTOList);
return dto;
}
private ConnectStateVO convert2ConnectStateVO(List<ConnectCluster> connectClusterList, List<ConnectorPO> connectorPOList, List<WorkerConnector> workerConnectorList, List<ConnectWorker> connectWorkerList) {
ConnectStateVO connectStateVO = new ConnectStateVO();
connectStateVO.setConnectClusterCount(connectClusterList.size());
connectStateVO.setTotalConnectorCount(connectorPOList.size());
connectStateVO.setAliveConnectorCount(connectorPOList.stream().filter(elem -> elem.getState().equals(AbstractStatus.State.RUNNING.name())).collect(Collectors.toList()).size());
connectStateVO.setWorkerCount(connectWorkerList.size());
connectStateVO.setTotalTaskCount(workerConnectorList.size());
connectStateVO.setAliveTaskCount(workerConnectorList.stream().filter(elem -> elem.getState().equals(AbstractStatus.State.RUNNING.name())).collect(Collectors.toList()).size());
return connectStateVO;
}
private PaginationResult<ClusterConnectorOverviewVO> pagingConnectorInLocal(List<ClusterConnectorOverviewVO> connectorVOList, ClusterConnectorsOverviewDTO dto) {
// fuzzy match
connectorVOList = PaginationUtil.pageByFuzzyFilter(connectorVOList, dto.getSearchKeywords(), Arrays.asList("connectClusterName"));
// sort
if (!dto.getLatestMetricNames().isEmpty()) {
PaginationMetricsUtil.sortMetrics(connectorVOList, "latestMetrics", dto.getSortMetricNameList(), "connectClusterName", dto.getSortType());
} else {
PaginationUtil.pageBySort(connectorVOList, dto.getSortField(), dto.getSortType(), "connectClusterName", dto.getSortType());
}
// paginate
return PaginationUtil.pageBySubData(connectorVOList, dto);
}
}

View File

@@ -44,7 +44,7 @@ public class ClusterTopicsManagerImpl implements ClusterTopicsManager {
List<Topic> topicList = topicService.listTopicsFromDB(clusterPhyId);
// get the metrics of all Topics in the cluster
Map<String, TopicMetrics> metricsMap = topicMetricService.getLatestMetricsFromCacheFirst(clusterPhyId);
Map<String, TopicMetrics> metricsMap = topicMetricService.getLatestMetricsFromCache(clusterPhyId);
// convert to VOs
List<ClusterPhyTopicsOverviewVO> voList = TopicVOConverter.convert2ClusterPhyTopicsOverviewVOList(topicList, metricsMap);

View File

@@ -19,7 +19,7 @@ import com.xiaojukeji.know.streaming.km.common.enums.zookeeper.ZKRoleEnum;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.PaginationUtil;
import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
import com.xiaojukeji.know.streaming.km.core.service.version.metrics.ZookeeperMetricVersionItems;
import com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.ZookeeperMetricVersionItems;
import com.xiaojukeji.know.streaming.km.core.service.zookeeper.ZnodeService;
import com.xiaojukeji.know.streaming.km.core.service.zookeeper.ZookeeperMetricService;
import com.xiaojukeji.know.streaming.km.core.service.zookeeper.ZookeeperService;
@@ -94,7 +94,7 @@ public class ClusterZookeepersManagerImpl implements ClusterZookeepersManager {
);
if (metricsResult.failed()) {
LOGGER.error(
"class=ClusterZookeepersManagerImpl||method=getClusterPhyZookeepersState||clusterPhyId={}||errMsg={}",
"method=getClusterPhyZookeepersState||clusterPhyId={}||errMsg={}",
clusterPhyId, metricsResult.getMessage()
);
return Result.buildSuc(vo);

View File

@@ -25,14 +25,11 @@ import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterMetricService;
import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
import com.xiaojukeji.know.streaming.km.core.service.kafkacontroller.KafkaControllerService;
import com.xiaojukeji.know.streaming.km.core.service.version.metrics.ClusterMetricVersionItems;
import com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.ClusterMetricVersionItems;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.*;
import java.util.stream.Collectors;
@Service
@@ -57,7 +54,6 @@ public class MultiClusterPhyManagerImpl implements MultiClusterPhyManager {
false
);
// TODO: on the product side, consider adding an Unknown state; otherwise newly accessed clusters look wrong because their data lags
ClusterPhysState physState = new ClusterPhysState(0, 0, clusterPhyList.size());
for (ClusterPhy clusterPhy: clusterPhyList) {
KafkaController kafkaController = controllerMap.get(clusterPhy.getId());
@@ -111,7 +107,6 @@ public class MultiClusterPhyManagerImpl implements MultiClusterPhyManager {
// convert to VO format to ease later pagination and filtering
List<ClusterPhyDashboardVO> voList = ConvertUtil.list2List(clusterPhyList, ClusterPhyDashboardVO.class);
// TODO: on the product side, consider adding an Unknown state; otherwise newly accessed clusters look wrong because their data lags
// get cluster controller info and fill it into the VOs
Map<Long, KafkaController> controllerMap = kafkaControllerService.getKafkaControllersFromDB(clusterPhyList.stream().map(elem -> elem.getId()).collect(Collectors.toList()), false);
for (ClusterPhyDashboardVO vo: voList) {
@@ -176,7 +171,10 @@ public class MultiClusterPhyManagerImpl implements MultiClusterPhyManager {
// get all metrics
List<ClusterMetrics> metricsList = new ArrayList<>();
for (ClusterPhyDashboardVO vo: voList) {
metricsList.add(clusterMetricService.getLatestMetricsFromCache(vo.getId()));
ClusterMetrics clusterMetrics = clusterMetricService.getLatestMetricsFromCache(vo.getId());
clusterMetrics.getMetrics().putIfAbsent(ClusterMetricVersionItems.CLUSTER_METRIC_HEALTH_STATE, (float) HealthStateEnum.UNKNOWN.getDimension());
metricsList.add(clusterMetrics);
}
// range search

View File

@@ -0,0 +1,15 @@
package com.xiaojukeji.know.streaming.km.biz.connect.connector;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector.ConnectorCreateDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.vo.connect.connector.ConnectorStateVO;
import java.util.Properties;
public interface ConnectorManager {
Result<Void> updateConnectorConfig(Long connectClusterId, String connectorName, Properties configs, String operator);
Result<Void> createConnector(ConnectorCreateDTO dto, String operator);
Result<ConnectorStateVO> getConnectorStateVO(Long connectClusterId, String connectorName);
}

View File

@@ -0,0 +1,16 @@
package com.xiaojukeji.know.streaming.km.biz.connect.connector;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.vo.connect.task.KCTaskOverviewVO;
import java.util.List;
/**
* @author wyb
* @date 2022/11/14
*/
public interface WorkerConnectorManager {
Result<List<KCTaskOverviewVO>> getTaskOverview(Long connectClusterId, String connectorName);
}

View File

@@ -0,0 +1,93 @@
package com.xiaojukeji.know.streaming.km.biz.connect.connector.impl;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.biz.connect.connector.ConnectorManager;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector.ConnectorCreateDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.WorkerConnector;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.config.ConnectConfigInfos;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.connector.KSConnector;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.connector.KSConnectorInfo;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.ResultStatus;
import com.xiaojukeji.know.streaming.km.common.bean.po.connect.ConnectorPO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.connect.connector.ConnectorStateVO;
import com.xiaojukeji.know.streaming.km.core.service.connect.connector.ConnectorService;
import com.xiaojukeji.know.streaming.km.core.service.connect.plugin.PluginService;
import com.xiaojukeji.know.streaming.km.core.service.connect.worker.WorkerConnectorService;
import org.apache.kafka.connect.runtime.AbstractStatus;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import java.util.List;
import java.util.Properties;
import java.util.stream.Collectors;
@Service
public class ConnectorManagerImpl implements ConnectorManager {
private static final ILog LOGGER = LogFactory.getLog(ConnectorManagerImpl.class);
@Autowired
private PluginService pluginService;
@Autowired
private ConnectorService connectorService;
@Autowired
private WorkerConnectorService workerConnectorService;
@Override
public Result<Void> updateConnectorConfig(Long connectClusterId, String connectorName, Properties configs, String operator) {
Result<ConnectConfigInfos> infosResult = pluginService.validateConfig(connectClusterId, configs);
if (infosResult.failed()) {
return Result.buildFromIgnoreData(infosResult);
}
if (infosResult.getData().getErrorCount() > 0) {
return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, "Connector参数错误");
}
return connectorService.updateConnectorConfig(connectClusterId, connectorName, configs, operator);
}
@Override
public Result<Void> createConnector(ConnectorCreateDTO dto, String operator) {
Result<KSConnectorInfo> createResult = connectorService.createConnector(dto.getConnectClusterId(), dto.getConnectorName(), dto.getConfigs(), operator);
if (createResult.failed()) {
return Result.buildFromIgnoreData(createResult);
}
Result<KSConnector> ksConnectorResult = connectorService.getAllConnectorInfoFromCluster(dto.getConnectClusterId(), dto.getConnectorName());
if (ksConnectorResult.failed()) {
return Result.buildFromRSAndMsg(ResultStatus.SUCCESS, "创建成功但是获取元信息失败页面元信息会存在1分钟延迟");
}
connectorService.addNewToDB(ksConnectorResult.getData());
return Result.buildSuc();
}
@Override
public Result<ConnectorStateVO> getConnectorStateVO(Long connectClusterId, String connectorName) {
ConnectorPO connectorPO = connectorService.getConnectorFromDB(connectClusterId, connectorName);
if (connectorPO == null) {
return Result.buildFailure(ResultStatus.NOT_EXIST);
}
List<WorkerConnector> workerConnectorList = workerConnectorService.listFromDB(connectClusterId).stream().filter(elem -> elem.getConnectorName().equals(connectorName)).collect(Collectors.toList());
return Result.buildSuc(convert2ConnectorOverviewVO(connectorPO, workerConnectorList));
}
private ConnectorStateVO convert2ConnectorOverviewVO(ConnectorPO connectorPO, List<WorkerConnector> workerConnectorList) {
ConnectorStateVO connectorStateVO = new ConnectorStateVO();
connectorStateVO.setConnectClusterId(connectorPO.getConnectClusterId());
connectorStateVO.setName(connectorPO.getConnectorName());
connectorStateVO.setType(connectorPO.getConnectorType());
connectorStateVO.setState(connectorPO.getState());
connectorStateVO.setTotalTaskCount(workerConnectorList.size());
connectorStateVO.setAliveTaskCount(workerConnectorList.stream().filter(elem -> elem.getState().equals(AbstractStatus.State.RUNNING.name())).collect(Collectors.toList()).size());
connectorStateVO.setTotalWorkerCount(workerConnectorList.stream().map(elem -> elem.getWorkerId()).collect(Collectors.toSet()).size());
return connectorStateVO;
}
}

View File

@@ -0,0 +1,37 @@
package com.xiaojukeji.know.streaming.km.biz.connect.connector.impl;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.biz.connect.connector.WorkerConnectorManager;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.ConnectCluster;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.WorkerConnector;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.vo.connect.task.KCTaskOverviewVO;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.core.service.connect.worker.WorkerConnectorService;
import com.xiaojukeji.know.streaming.km.persistence.connect.cache.LoadedConnectClusterCache;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import java.util.List;
/**
* @author wyb
* @date 2022/11/14
*/
@Service
public class WorkerConnectorManageImpl implements WorkerConnectorManager {
private static final ILog LOGGER = LogFactory.getLog(WorkerConnectorManageImpl.class);
@Autowired
private WorkerConnectorService workerConnectorService;
@Override
public Result<List<KCTaskOverviewVO>> getTaskOverview(Long connectClusterId, String connectorName) {
ConnectCluster connectCluster = LoadedConnectClusterCache.getByPhyId(connectClusterId);
List<WorkerConnector> workerConnectorList = workerConnectorService.getWorkerConnectorListFromCluster(connectCluster, connectorName);
return Result.buildSuc(ConvertUtil.list2List(workerConnectorList, KCTaskOverviewVO.class));
}
}

View File

@@ -8,10 +8,15 @@ import com.xiaojukeji.know.streaming.km.common.bean.dto.group.GroupOffsetResetDT
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationBaseDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationSortDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.partition.PartitionOffsetDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
import com.xiaojukeji.know.streaming.km.common.bean.entity.group.Group;
import com.xiaojukeji.know.streaming.km.common.bean.entity.group.GroupTopic;
import com.xiaojukeji.know.streaming.km.common.bean.entity.group.GroupTopicMember;
import com.xiaojukeji.know.streaming.km.common.bean.entity.kafka.KSGroupDescription;
import com.xiaojukeji.know.streaming.km.common.bean.entity.kafka.KSMemberConsumerAssignment;
import com.xiaojukeji.know.streaming.km.common.bean.entity.kafka.KSMemberDescription;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.GroupMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.offset.KSOffsetSpec;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.topic.Topic;
@@ -34,15 +39,13 @@ import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.PaginationMetricsUtil;
import com.xiaojukeji.know.streaming.km.common.utils.PaginationUtil;
import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
import com.xiaojukeji.know.streaming.km.core.service.group.GroupMetricService;
import com.xiaojukeji.know.streaming.km.core.service.group.GroupService;
import com.xiaojukeji.know.streaming.km.core.service.partition.PartitionService;
import com.xiaojukeji.know.streaming.km.core.service.topic.TopicService;
import com.xiaojukeji.know.streaming.km.core.service.version.metrics.GroupMetricVersionItems;
import com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.GroupMetricVersionItems;
import com.xiaojukeji.know.streaming.km.persistence.es.dao.GroupMetricESDAO;
import org.apache.kafka.clients.admin.ConsumerGroupDescription;
import org.apache.kafka.clients.admin.MemberDescription;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.common.ConsumerGroupState;
import org.apache.kafka.common.TopicPartition;
import org.springframework.beans.factory.annotation.Autowired;
@@ -51,6 +54,8 @@ import org.springframework.stereotype.Component;
import java.util.*;
import java.util.stream.Collectors;
import static com.xiaojukeji.know.streaming.km.common.enums.group.GroupTypeEnum.CONNECT_CLUSTER_PROTOCOL_TYPE;
@Component
public class GroupManagerImpl implements GroupManager {
private static final ILog log = LogFactory.getLog(GroupManagerImpl.class);
@@ -70,6 +75,9 @@ public class GroupManagerImpl implements GroupManager {
@Autowired
private GroupMetricESDAO groupMetricESDAO;
@Autowired
private ClusterPhyService clusterPhyService;
@Override
public PaginationResult<GroupTopicOverviewVO> pagingGroupMembers(Long clusterPhyId,
String topicName,
@@ -140,6 +148,11 @@ public class GroupManagerImpl implements GroupManager {
String groupName,
List<String> latestMetricNames,
PaginationSortDTO dto) throws NotExistException, AdminOperateException {
ClusterPhy clusterPhy = clusterPhyService.getClusterByCluster(clusterPhyId);
if (clusterPhy == null) {
return PaginationResult.buildFailure(MsgConstant.getClusterPhyNotExist(clusterPhyId), dto);
}
// get the TopicPartition list consumed by the group
Map<TopicPartition, Long> consumedOffsetMap = groupService.getGroupOffsetFromKafka(clusterPhyId, groupName);
List<Integer> partitionList = consumedOffsetMap.keySet()
@@ -150,13 +163,18 @@ public class GroupManagerImpl implements GroupManager {
Collections.sort(partitionList);
// get the group's current runtime info
ConsumerGroupDescription groupDescription = groupService.getGroupDescriptionFromKafka(clusterPhyId, groupName);
KSGroupDescription groupDescription = groupService.getGroupDescriptionFromKafka(clusterPhy, groupName);
// convert the storage format
Map<TopicPartition, MemberDescription> tpMemberMap = new HashMap<>();
for (MemberDescription description: groupDescription.members()) {
for (TopicPartition tp: description.assignment().topicPartitions()) {
tpMemberMap.put(tp, description);
Map<TopicPartition, KSMemberDescription> tpMemberMap = new HashMap<>();
// if this is not a connect cluster
if (!groupDescription.protocolType().equals(CONNECT_CLUSTER_PROTOCOL_TYPE)) {
for (KSMemberDescription description : groupDescription.members()) {
KSMemberConsumerAssignment assignment = (KSMemberConsumerAssignment) description.assignment();
for (TopicPartition tp : assignment.topicPartitions()) {
tpMemberMap.put(tp, description);
}
}
}
@@ -173,11 +191,11 @@ public class GroupManagerImpl implements GroupManager {
vo.setTopicName(topicName);
vo.setPartitionId(groupMetrics.getPartitionId());
MemberDescription memberDescription = tpMemberMap.get(new TopicPartition(topicName, groupMetrics.getPartitionId()));
if (memberDescription != null) {
vo.setMemberId(memberDescription.consumerId());
vo.setHost(memberDescription.host());
vo.setClientId(memberDescription.clientId());
KSMemberDescription ksMemberDescription = tpMemberMap.get(new TopicPartition(topicName, groupMetrics.getPartitionId()));
if (ksMemberDescription != null) {
vo.setMemberId(ksMemberDescription.consumerId());
vo.setHost(ksMemberDescription.host());
vo.setClientId(ksMemberDescription.clientId());
}
vo.setLatestMetrics(groupMetrics);
@@ -203,7 +221,12 @@ public class GroupManagerImpl implements GroupManager {
return rv;
}
ConsumerGroupDescription description = groupService.getGroupDescriptionFromKafka(dto.getClusterId(), dto.getGroupName());
ClusterPhy clusterPhy = clusterPhyService.getClusterByCluster(dto.getClusterId());
if (clusterPhy == null) {
return Result.buildFromRSAndMsg(ResultStatus.CLUSTER_NOT_EXIST, MsgConstant.getClusterPhyNotExist(dto.getClusterId()));
}
KSGroupDescription description = groupService.getGroupDescriptionFromKafka(clusterPhy, dto.getGroupName());
if (ConsumerGroupState.DEAD.equals(description.state()) && !dto.isCreateIfNotExist()) {
return Result.buildFromRSAndMsg(ResultStatus.KAFKA_OPERATE_FAILED, "group不存在, 重置失败");
}
@@ -274,16 +297,16 @@ public class GroupManagerImpl implements GroupManager {
)));
}
OffsetSpec offsetSpec = null;
KSOffsetSpec offsetSpec = null;
if (OffsetTypeEnum.PRECISE_TIMESTAMP.getResetType() == dto.getResetType()) {
offsetSpec = OffsetSpec.forTimestamp(dto.getTimestamp());
offsetSpec = KSOffsetSpec.forTimestamp(dto.getTimestamp());
} else if (OffsetTypeEnum.EARLIEST.getResetType() == dto.getResetType()) {
offsetSpec = OffsetSpec.earliest();
offsetSpec = KSOffsetSpec.earliest();
} else {
offsetSpec = OffsetSpec.latest();
offsetSpec = KSOffsetSpec.latest();
}
return partitionService.getPartitionOffsetFromKafka(dto.getClusterId(), dto.getTopicName(), offsetSpec, dto.getTimestamp());
return partitionService.getPartitionOffsetFromKafka(dto.getClusterId(), dto.getTopicName(), offsetSpec);
}
private List<GroupTopicOverviewVO> convert2GroupTopicOverviewVOList(List<GroupMemberPO> poList, List<GroupMetrics> metricsList) {
@@ -345,32 +368,4 @@ public class GroupManagerImpl implements GroupManager {
dto
);
}
private List<GroupTopicOverviewVO> convert2GroupTopicOverviewVOList(String groupName, String state, List<GroupTopicMember> groupTopicList, List<GroupMetrics> metricsList) {
if (metricsList == null) {
metricsList = new ArrayList<>();
}
// <TopicName, GroupMetrics>
Map<String, GroupMetrics> metricsMap = new HashMap<>();
for (GroupMetrics metrics : metricsList) {
if (!groupName.equals(metrics.getGroup())) continue;
metricsMap.put(metrics.getTopic(), metrics);
}
List<GroupTopicOverviewVO> voList = new ArrayList<>();
for (GroupTopicMember po : groupTopicList) {
GroupTopicOverviewVO vo = ConvertUtil.obj2Obj(po, GroupTopicOverviewVO.class);
vo.setGroupName(groupName);
vo.setState(state);
GroupMetrics metrics = metricsMap.get(po.getTopicName());
if (metrics != null) {
vo.setMaxLag(ConvertUtil.Float2Long(metrics.getMetrics().get(GroupMetricVersionItems.GROUP_METRIC_LAG)));
}
voList.add(vo);
}
return voList;
}
}

View File

@@ -22,7 +22,7 @@ import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
import com.xiaojukeji.know.streaming.km.core.service.reassign.ReassignService;
import com.xiaojukeji.know.streaming.km.core.service.topic.TopicMetricService;
import com.xiaojukeji.know.streaming.km.core.service.topic.TopicService;
import com.xiaojukeji.know.streaming.km.core.service.version.metrics.TopicMetricVersionItems;
import com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.TopicMetricVersionItems;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

View File

@@ -16,7 +16,7 @@ import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerConfigService;
import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerService;
import com.xiaojukeji.know.streaming.km.core.service.topic.TopicConfigService;
import com.xiaojukeji.know.streaming.km.core.service.version.BaseVersionControlService;
import com.xiaojukeji.know.streaming.km.core.service.version.BaseKafkaVersionControlService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
@@ -27,7 +27,7 @@ import java.util.stream.Collectors;
import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionEnum.*;
@Component
public class TopicConfigManagerImpl extends BaseVersionControlService implements TopicConfigManager {
public class TopicConfigManagerImpl extends BaseKafkaVersionControlService implements TopicConfigManager {
private static final ILog log = LogFactory.getLog(TopicConfigManagerImpl.class);
private static final String GET_DEFAULT_TOPIC_CONFIG = "getDefaultTopicConfig";

View File

@@ -10,6 +10,7 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.broker.Broker;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.PartitionMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.TopicMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.offset.KSOffsetSpec;
import com.xiaojukeji.know.streaming.km.common.bean.entity.partition.Partition;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
@@ -43,10 +44,9 @@ import com.xiaojukeji.know.streaming.km.core.service.partition.PartitionService;
import com.xiaojukeji.know.streaming.km.core.service.topic.TopicConfigService;
import com.xiaojukeji.know.streaming.km.core.service.topic.TopicMetricService;
import com.xiaojukeji.know.streaming.km.core.service.topic.TopicService;
import com.xiaojukeji.know.streaming.km.core.service.version.metrics.TopicMetricVersionItems;
import com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.TopicMetricVersionItems;
import org.apache.commons.lang3.ObjectUtils;
import org.apache.commons.lang3.StringUtils;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.config.TopicConfig;
@@ -143,12 +143,12 @@ public class TopicStateManagerImpl implements TopicStateManager {
}
// get the partition beginOffset
Result<Map<TopicPartition, Long>> beginOffsetsMapResult = partitionService.getPartitionOffsetFromKafka(clusterPhyId, topicName, dto.getFilterPartitionId(), OffsetSpec.earliest(), null);
Result<Map<TopicPartition, Long>> beginOffsetsMapResult = partitionService.getPartitionOffsetFromKafka(clusterPhyId, topicName, dto.getFilterPartitionId(), KSOffsetSpec.earliest());
if (beginOffsetsMapResult.failed()) {
return Result.buildFromIgnoreData(beginOffsetsMapResult);
}
// get the partition endOffset
Result<Map<TopicPartition, Long>> endOffsetsMapResult = partitionService.getPartitionOffsetFromKafka(clusterPhyId, topicName, dto.getFilterPartitionId(), OffsetSpec.latest(), null);
Result<Map<TopicPartition, Long>> endOffsetsMapResult = partitionService.getPartitionOffsetFromKafka(clusterPhyId, topicName, dto.getFilterPartitionId(), KSOffsetSpec.latest());
if (endOffsetsMapResult.failed()) {
return Result.buildFromIgnoreData(endOffsetsMapResult);
}
@@ -307,7 +307,7 @@ public class TopicStateManagerImpl implements TopicStateManager {
if (metricsResult.failed()) {
// Only log the error; do not return a failure directly
log.error(
"class=TopicStateManagerImpl||method=getTopicPartitions||clusterPhyId={}||topicName={}||result={}||msg=get metrics from es failed",
"method=getTopicPartitions||clusterPhyId={}||topicName={}||result={}||msg=get metrics from es failed",
clusterPhyId, topicName, metricsResult
);
}
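Note on the change above: the begin/end offset lookups now pass a KSOffsetSpec instead of a raw OffsetSpec plus a trailing null argument. A minimal, hedged sketch of what such a wrapper could look like (the real class lives in common.bean.entity.offset; its shape here is an assumption, not the project's actual code):

import org.apache.kafka.clients.admin.OffsetSpec;

// Hedged sketch: carry the raw OffsetSpec so callers no longer need
// the extra null timestamp argument of the old signature.
public class KSOffsetSpecSketch {
    private final OffsetSpec rawSpec;

    private KSOffsetSpecSketch(OffsetSpec rawSpec) { this.rawSpec = rawSpec; }

    public static KSOffsetSpecSketch earliest() { return new KSOffsetSpecSketch(OffsetSpec.earliest()); }
    public static KSOffsetSpecSketch latest()   { return new KSOffsetSpecSketch(OffsetSpec.latest()); }

    public OffsetSpec toRawSpec() { return rawSpec; }
}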

View File

@@ -20,7 +20,7 @@ public interface VersionControlManager {
* Get all Kafka versions supported by the current KS
* @return
*/
Result<Map<String, Long>> listAllVersions();
Result<Map<String, Long>> listAllKafkaVersions();
/**
* Get all metrics of type "type" for cluster "clusterId", whether supported or not
@@ -28,7 +28,7 @@ public interface VersionControlManager {
* @param type
* @return
*/
Result<List<VersionItemVO>> listClusterVersionControlItem(Long clusterId, Integer type);
Result<List<VersionItemVO>> listKafkaClusterVersionControlItem(Long clusterId, Integer type);
/**
* Get the metric display configuration set by the current user

View File

@@ -17,6 +17,7 @@ import com.xiaojukeji.know.streaming.km.common.bean.vo.version.VersionItemVO;
import com.xiaojukeji.know.streaming.km.common.enums.version.VersionEnum;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.VersionUtil;
import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
import com.xiaojukeji.know.streaming.km.core.service.version.VersionControlService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
@@ -29,10 +30,10 @@ import java.util.stream.Collectors;
import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionEnum.V_MAX;
import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum.*;
import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.BrokerMetricVersionItems.*;
import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.ClusterMetricVersionItems.*;
import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.GroupMetricVersionItems.*;
import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.TopicMetricVersionItems.*;
import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.BrokerMetricVersionItems.*;
import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.ClusterMetricVersionItems.*;
import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.GroupMetricVersionItems.*;
import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.TopicMetricVersionItems.*;
@Service
public class VersionControlManagerImpl implements VersionControlManager {
@@ -92,6 +93,9 @@ public class VersionControlManagerImpl implements VersionControlManager {
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_BYTES_OUT, true));
}
@Autowired
private ClusterPhyService clusterPhyService;
@Autowired
private VersionControlService versionControlService;
@@ -107,7 +111,13 @@ public class VersionControlManagerImpl implements VersionControlManager {
allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_BROKER.getCode()), VersionItemVO.class));
allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_PARTITION.getCode()), VersionItemVO.class));
allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_REPLICATION.getCode()), VersionItemVO.class));
allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_ZOOKEEPER.getCode()), VersionItemVO.class));
allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_CONNECT_CLUSTER.getCode()), VersionItemVO.class));
allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_CONNECT_CONNECTOR.getCode()), VersionItemVO.class));
allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_CONNECT_MIRROR_MAKER.getCode()), VersionItemVO.class));
allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(WEB_OP.getCode()), VersionItemVO.class));
Map<String, VersionItemVO> map = allVersionItemVO.stream().collect(
@@ -121,18 +131,20 @@ public class VersionControlManagerImpl implements VersionControlManager {
}
@Override
public Result<Map<String, Long>> listAllVersions() {
public Result<Map<String, Long>> listAllKafkaVersions() {
return Result.buildSuc(VersionEnum.allVersionsWithOutMax());
}
@Override
public Result<List<VersionItemVO>> listClusterVersionControlItem(Long clusterId, Integer type) {
public Result<List<VersionItemVO>> listKafkaClusterVersionControlItem(Long clusterId, Integer type) {
List<VersionControlItem> allItem = versionControlService.listVersionControlItem(type);
List<VersionItemVO> versionItemVOS = new ArrayList<>();
String versionStr = clusterPhyService.getVersionFromCacheFirst(clusterId);
for (VersionControlItem item : allItem){
VersionItemVO itemVO = ConvertUtil.obj2Obj(item, VersionItemVO.class);
boolean support = versionControlService.isClusterSupport(clusterId, item);
boolean support = versionControlService.isClusterSupport(versionStr, item);
itemVO.setSupport(support);
itemVO.setDesc(itemSupportDesc(item, support));
@@ -145,7 +157,7 @@ public class VersionControlManagerImpl implements VersionControlManager {
@Override
public Result<List<UserMetricConfigVO>> listUserMetricItem(Long clusterId, Integer type, String operator) {
Result<List<VersionItemVO>> ret = listClusterVersionControlItem(clusterId, type);
Result<List<VersionItemVO>> ret = listKafkaClusterVersionControlItem(clusterId, type);
if(null == ret || ret.failed()){
return Result.buildFail();
}
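For reference, the support check above now resolves the cluster's Kafka version string once via getVersionFromCacheFirst and reuses it for every item, instead of letting isClusterSupport re-resolve the version on each call. A hedged sketch of the resolve-once pattern in isolation (signatures taken from this diff):

// Resolve the version once, then reuse it across all version-control items.
String versionStr = clusterPhyService.getVersionFromCacheFirst(clusterId);
for (VersionControlItem item : versionControlService.listVersionControlItem(type)) {
    boolean support = versionControlService.isClusterSupport(versionStr, item);
    // ... build the VO with the support flag ...
}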

View File

@@ -1,7 +1,6 @@
package com.xiaojukeji.know.streaming.km.collector.metric;
import com.xiaojukeji.know.streaming.km.collector.service.CollectThreadPoolService;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.BaseMetricEvent;
import com.xiaojukeji.know.streaming.km.common.component.SpringTool;
import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
@@ -9,17 +8,20 @@ import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
import org.springframework.beans.factory.annotation.Autowired;
/**
* @author didi
*/
public abstract class AbstractMetricCollector<T> {
public abstract void collectMetrics(ClusterPhy clusterPhy);
public abstract class AbstractMetricCollector<M, C> {
public abstract String getClusterVersion(C c);
public abstract VersionItemTypeEnum collectorType();
@Autowired
private CollectThreadPoolService collectThreadPoolService;
public abstract void collectMetrics(C c);
protected FutureWaitUtil<Void> getFutureUtilByClusterPhyId(Long clusterPhyId) {
return collectThreadPoolService.selectSuitableFutureUtil(clusterPhyId * 1000L + this.collectorType().getCode());
}
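getFutureUtilByClusterPhyId derives a shard key as clusterPhyId * 1000 + collector-type code, so each (cluster, collector-type) pair lands on a stable thread-pool shard. A hedged sketch of what a selector like CollectThreadPoolService.selectSuitableFutureUtil might do (field names here are assumptions; the FutureWaitUtil.init signature matches the one used later in this diff):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hedged sketch, not the project's actual implementation.
public class ShardedFutureUtilSelectorSketch {
    private final int shardNum = 4;                 // assumed: configurable shard count
    private final int futureUtilThreadNum = 4;      // assumed: threads per shard
    private final int futureUtilQueueSize = 10000;  // assumed: queue size per shard
    private final Map<Long, FutureWaitUtil<Void>> shards = new ConcurrentHashMap<>();

    public FutureWaitUtil<Void> selectSuitableFutureUtil(long shardKey) {
        long shardId = Math.abs(shardKey) % shardNum;
        // lazily create one FutureWaitUtil per shard and reuse it afterwards
        return shards.computeIfAbsent(shardId, id -> FutureWaitUtil.init(
                "MetricCollect-Shard-" + id,
                futureUtilThreadNum, futureUtilThreadNum, futureUtilQueueSize));
    }
}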

View File

@@ -0,0 +1,50 @@
package com.xiaojukeji.know.streaming.km.collector.metric.connect;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.collector.metric.AbstractMetricCollector;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.ConnectCluster;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.LoggerUtil;
import com.xiaojukeji.know.streaming.km.core.service.connect.cluster.ConnectClusterService;
import org.springframework.beans.factory.annotation.Autowired;
import java.util.List;
/**
* @author didi
*/
public abstract class AbstractConnectMetricCollector<M> extends AbstractMetricCollector<M, ConnectCluster> {
private static final ILog LOGGER = LogFactory.getLog(AbstractConnectMetricCollector.class);
protected static final ILog METRIC_COLLECTED_LOGGER = LoggerUtil.getMetricCollectedLogger();
@Autowired
private ConnectClusterService connectClusterService;
public abstract List<M> collectConnectMetrics(ConnectCluster connectCluster);
@Override
public String getClusterVersion(ConnectCluster connectCluster){
return connectClusterService.getClusterVersion(connectCluster.getId());
}
@Override
public void collectMetrics(ConnectCluster connectCluster) {
long startTime = System.currentTimeMillis();
// Collect the metrics
List<M> metricsList = this.collectConnectMetrics(connectCluster);
// Log the time cost
LOGGER.info(
"metricType={}||connectClusterId={}||costTimeUnitMs={}",
this.collectorType().getMessage(), connectCluster.getId(), System.currentTimeMillis() - startTime
);
// Log the collected metrics
METRIC_COLLECTED_LOGGER.debug("metricType={}||connectClusterId={}||metrics={}!",
this.collectorType().getMessage(), connectCluster.getId(), ConvertUtil.obj2Json(metricsList)
);
}
}

View File

@@ -0,0 +1,83 @@
package com.xiaojukeji.know.streaming.km.collector.metric.connect;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.ConnectCluster;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.connect.ConnectClusterMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionControlItem;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.connect.ConnectClusterMetricEvent;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
import com.xiaojukeji.know.streaming.km.core.service.connect.cluster.ConnectClusterMetricService;
import com.xiaojukeji.know.streaming.km.core.service.version.VersionControlService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import java.util.Collections;
import java.util.List;
import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum.METRIC_CONNECT_CLUSTER;
/**
* @author didi
*/
@Component
public class ConnectClusterMetricCollector extends AbstractConnectMetricCollector<ConnectClusterMetrics> {
protected static final ILog LOGGER = LogFactory.getLog(ConnectClusterMetricCollector.class);
@Autowired
private VersionControlService versionControlService;
@Autowired
private ConnectClusterMetricService connectClusterMetricService;
@Override
public List<ConnectClusterMetrics> collectConnectMetrics(ConnectCluster connectCluster) {
Long startTime = System.currentTimeMillis();
Long clusterPhyId = connectCluster.getKafkaClusterPhyId();
Long connectClusterId = connectCluster.getId();
ConnectClusterMetrics metrics = new ConnectClusterMetrics(clusterPhyId, connectClusterId);
metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
List<VersionControlItem> items = versionControlService.listVersionControlItem(getClusterVersion(connectCluster), collectorType().getCode());
FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(connectClusterId);
for (VersionControlItem item : items) {
future.runnableTask(
String.format("class=ConnectClusterMetricCollector||connectClusterId=%d||metricName=%s", connectClusterId, item.getName()),
30000,
() -> {
try {
Result<ConnectClusterMetrics> ret = connectClusterMetricService.collectConnectClusterMetricsFromKafka(connectClusterId, item.getName());
if (null == ret || !ret.hasData()) {
return null;
}
metrics.putMetric(ret.getData().getMetrics());
} catch (Exception e) {
LOGGER.error(
"method=collectConnectMetrics||connectClusterId={}||metricName={}||errMsg=exception!",
connectClusterId, item.getName(), e
);
}
return null;
}
);
}
future.waitExecute(30000);
metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (System.currentTimeMillis() - startTime) / 1000.0f);
this.publishMetric(new ConnectClusterMetricEvent(this, Collections.singletonList(metrics)));
return Collections.singletonList(metrics);
}
@Override
public VersionItemTypeEnum collectorType() {
return METRIC_CONNECT_CLUSTER;
}
}
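The collector above writes Constant.COLLECT_METRICS_ERROR_COST_TIME into the cost-time metric before collecting and only overwrites it with the real cost once the round completes, so an aborted run stays visibly marked. The same idea in a minimal, hedged fragment:

// Sentinel-then-overwrite pattern used by the collectors in this diff.
metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
long startTime = System.currentTimeMillis();
// ... submit the per-metric tasks and wait for them ...
// reached only on a completed round: replace the sentinel with the real cost in seconds
metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (System.currentTimeMillis() - startTime) / 1000.0f);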

View File

@@ -0,0 +1,102 @@
package com.xiaojukeji.know.streaming.km.collector.metric.connect;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.ConnectCluster;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.connect.ConnectorMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionControlItem;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.connect.ConnectorMetricEvent;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.enums.connect.ConnectorTypeEnum;
import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
import com.xiaojukeji.know.streaming.km.core.service.connect.connector.ConnectorMetricService;
import com.xiaojukeji.know.streaming.km.core.service.connect.connector.ConnectorService;
import com.xiaojukeji.know.streaming.km.core.service.version.VersionControlService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import java.util.ArrayList;
import java.util.List;
import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum.METRIC_CONNECT_CONNECTOR;
/**
* @author didi
*/
@Component
public class ConnectConnectorMetricCollector extends AbstractConnectMetricCollector<ConnectorMetrics> {
protected static final ILog LOGGER = LogFactory.getLog(ConnectConnectorMetricCollector.class);
@Autowired
private VersionControlService versionControlService;
@Autowired
private ConnectorService connectorService;
@Autowired
private ConnectorMetricService connectorMetricService;
@Override
public List<ConnectorMetrics> collectConnectMetrics(ConnectCluster connectCluster) {
Long clusterPhyId = connectCluster.getKafkaClusterPhyId();
Long connectClusterId = connectCluster.getId();
List<VersionControlItem> items = versionControlService.listVersionControlItem(this.getClusterVersion(connectCluster), collectorType().getCode());
Result<List<String>> connectorList = connectorService.listConnectorsFromCluster(connectClusterId);
FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(connectClusterId);
List<ConnectorMetrics> metricsList = new ArrayList<>();
for (String connectorName : connectorList.getData()) {
ConnectorMetrics metrics = new ConnectorMetrics(connectClusterId, connectorName);
metrics.setClusterPhyId(clusterPhyId);
metricsList.add(metrics);
future.runnableTask(
String.format("class=ConnectConnectorMetricCollector||connectClusterId=%d||connectorName=%s", connectClusterId, connectorName),
30000,
() -> collectMetrics(connectClusterId, connectorName, metrics, items)
);
}
future.waitResult(30000);
this.publishMetric(new ConnectorMetricEvent(this, metricsList));
return metricsList;
}
@Override
public VersionItemTypeEnum collectorType() {
return METRIC_CONNECT_CONNECTOR;
}
/**************************************************** private method ****************************************************/
private void collectMetrics(Long connectClusterId, String connectorName, ConnectorMetrics metrics, List<VersionControlItem> items) {
long startTime = System.currentTimeMillis();
ConnectorTypeEnum connectorType = connectorService.getConnectorType(connectClusterId, connectorName);
metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
for (VersionControlItem v : items) {
try {
Result<ConnectorMetrics> ret = connectorMetricService.collectConnectClusterMetricsFromKafka(connectClusterId, connectorName, v.getName(), connectorType);
if (null == ret || ret.failed() || null == ret.getData()) {
continue;
}
metrics.putMetric(ret.getData().getMetrics());
} catch (Exception e) {
LOGGER.error(
"method=collectMetrics||connectClusterId={}||connectorName={}||metric={}||errMsg=exception!",
connectClusterId, connectorName, v.getName(), e
);
}
}
// Record the collection cost
metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (System.currentTimeMillis() - startTime) / 1000.0f);
}
}
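One hedged hardening note: listConnectorsFromCluster returns a Result, and the loop above iterates connectorList.getData() directly. A guard like the following (not part of this diff, purely illustrative) would avoid a NullPointerException when the lookup fails:

// Illustrative guard, assuming Result exposes failed() and getData() as used above.
if (connectorList == null || connectorList.failed() || connectorList.getData() == null) {
    return Collections.emptyList(); // nothing to collect for this Connect cluster
}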

View File

@@ -0,0 +1,50 @@
package com.xiaojukeji.know.streaming.km.collector.metric.kafka;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.collector.metric.AbstractMetricCollector;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.LoggerUtil;
import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
import org.springframework.beans.factory.annotation.Autowired;
import java.util.List;
/**
* @author didi
*/
public abstract class AbstractKafkaMetricCollector<M> extends AbstractMetricCollector<M, ClusterPhy> {
private static final ILog LOGGER = LogFactory.getLog(AbstractKafkaMetricCollector.class);
protected static final ILog METRIC_COLLECTED_LOGGER = LoggerUtil.getMetricCollectedLogger();
@Autowired
private ClusterPhyService clusterPhyService;
public abstract List<M> collectKafkaMetrics(ClusterPhy clusterPhy);
@Override
public String getClusterVersion(ClusterPhy clusterPhy){
return clusterPhyService.getVersionFromCacheFirst(clusterPhy.getId());
}
@Override
public void collectMetrics(ClusterPhy clusterPhy) {
long startTime = System.currentTimeMillis();
// Collect the metrics
List<M> metricsList = this.collectKafkaMetrics(clusterPhy);
// Log the time cost
LOGGER.info(
"metricType={}||clusterPhyId={}||costTimeUnitMs={}",
this.collectorType().getMessage(), clusterPhy.getId(), System.currentTimeMillis() - startTime
);
// Log the collected metrics
METRIC_COLLECTED_LOGGER.debug("metricType={}||clusterPhyId={}||metrics={}!",
this.collectorType().getMessage(), clusterPhy.getId(), ConvertUtil.obj2Json(metricsList)
);
}
}
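Both abstract collectors expose the same collectMetrics(C) template: time the run, delegate to the subclass hook, log the cost, and dump the collected metrics at debug level. A hedged sketch of how a scheduling task might drive them (listAllClusters and the kafkaCollectors list are assumptions, not names from this diff; the real dispatch lives in the Task module):

// Hedged driver sketch.
for (ClusterPhy clusterPhy : clusterPhyService.listAllClusters()) {      // assumed method
    for (AbstractKafkaMetricCollector<?> collector : kafkaCollectors) {  // assumed list
        collector.collectMetrics(clusterPhy); // template method from the base class
    }
}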

View File

@@ -1,6 +1,5 @@
package com.xiaojukeji.know.streaming.km.collector.metric;
package com.xiaojukeji.know.streaming.km.collector.metric.kafka;
import com.alibaba.fastjson.JSON;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.entity.broker.Broker;
@@ -11,7 +10,6 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionContro
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.BrokerMetricEvent;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
import com.xiaojukeji.know.streaming.km.common.utils.EnvUtil;
import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerMetricService;
import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerService;
@@ -28,8 +26,8 @@ import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemT
* @author didi
*/
@Component
public class BrokerMetricCollector extends AbstractMetricCollector<BrokerMetrics> {
protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
public class BrokerMetricCollector extends AbstractKafkaMetricCollector<BrokerMetrics> {
private static final ILog LOGGER = LogFactory.getLog(BrokerMetricCollector.class);
@Autowired
private VersionControlService versionControlService;
@@ -41,32 +39,31 @@ public class BrokerMetricCollector extends AbstractMetricCollector<BrokerMetrics
private BrokerService brokerService;
@Override
public void collectMetrics(ClusterPhy clusterPhy) {
Long startTime = System.currentTimeMillis();
public List<BrokerMetrics> collectKafkaMetrics(ClusterPhy clusterPhy) {
Long clusterPhyId = clusterPhy.getId();
List<Broker> brokers = brokerService.listAliveBrokersFromDB(clusterPhy.getId());
List<VersionControlItem> items = versionControlService.listVersionControlItem(clusterPhyId, collectorType().getCode());
List<VersionControlItem> items = versionControlService.listVersionControlItem(this.getClusterVersion(clusterPhy), collectorType().getCode());
FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(clusterPhyId);
List<BrokerMetrics> brokerMetrics = new ArrayList<>();
List<BrokerMetrics> metricsList = new ArrayList<>();
for(Broker broker : brokers) {
BrokerMetrics metrics = new BrokerMetrics(clusterPhyId, broker.getBrokerId(), broker.getHost(), broker.getPort());
brokerMetrics.add(metrics);
metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
metricsList.add(metrics);
future.runnableTask(
String.format("method=BrokerMetricCollector||clusterPhyId=%d||brokerId=%d", clusterPhyId, broker.getBrokerId()),
String.format("class=BrokerMetricCollector||clusterPhyId=%d||brokerId=%d", clusterPhyId, broker.getBrokerId()),
30000,
() -> collectMetrics(clusterPhyId, metrics, items)
);
}
future.waitExecute(30000);
this.publishMetric(new BrokerMetricEvent(this, brokerMetrics));
this.publishMetric(new BrokerMetricEvent(this, metricsList));
LOGGER.info("method=BrokerMetricCollector||clusterPhyId={}||startTime={}||costTime={}||msg=collect finished.",
clusterPhyId, startTime, System.currentTimeMillis() - startTime);
return metricsList;
}
@Override
@@ -78,7 +75,6 @@ public class BrokerMetricCollector extends AbstractMetricCollector<BrokerMetrics
private void collectMetrics(Long clusterPhyId, BrokerMetrics metrics, List<VersionControlItem> items) {
long startTime = System.currentTimeMillis();
metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
for(VersionControlItem v : items) {
try {
@@ -92,14 +88,11 @@ public class BrokerMetricCollector extends AbstractMetricCollector<BrokerMetrics
}
metrics.putMetric(ret.getData().getMetrics());
if(!EnvUtil.isOnline()){
LOGGER.info("method=BrokerMetricCollector||clusterId={}||brokerId={}||metric={}||metric={}!",
clusterPhyId, metrics.getBrokerId(), v.getName(), JSON.toJSONString(ret.getData().getMetrics()));
}
} catch (Exception e){
LOGGER.error("method=BrokerMetricCollector||clusterId={}||brokerId={}||metric={}||errMsg=exception!",
clusterPhyId, metrics.getBrokerId(), v.getName(), e);
LOGGER.error(
"method=collectMetrics||clusterPhyId={}||brokerId={}||metricName={}||errMsg=exception!",
clusterPhyId, metrics.getBrokerId(), v.getName(), e
);
}
}

View File

@@ -1,4 +1,4 @@
package com.xiaojukeji.know.streaming.km.collector.metric;
package com.xiaojukeji.know.streaming.km.collector.metric.kafka;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
@@ -7,18 +7,15 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.ClusterMetric
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionControlItem;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.ClusterMetricEvent;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.ClusterMetricPO;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.EnvUtil;
import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterMetricService;
import com.xiaojukeji.know.streaming.km.core.service.version.VersionControlService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum.METRIC_CLUSTER;
@@ -27,8 +24,8 @@ import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemT
* @author didi
*/
@Component
public class ClusterMetricCollector extends AbstractMetricCollector<ClusterMetricPO> {
protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
public class ClusterMetricCollector extends AbstractKafkaMetricCollector<ClusterMetrics> {
protected static final ILog LOGGER = LogFactory.getLog(ClusterMetricCollector.class);
@Autowired
private VersionControlService versionControlService;
@@ -37,35 +34,37 @@ public class ClusterMetricCollector extends AbstractMetricCollector<ClusterMetri
private ClusterMetricService clusterMetricService;
@Override
public void collectMetrics(ClusterPhy clusterPhy) {
public List<ClusterMetrics> collectKafkaMetrics(ClusterPhy clusterPhy) {
Long startTime = System.currentTimeMillis();
Long clusterPhyId = clusterPhy.getId();
List<VersionControlItem> items = versionControlService.listVersionControlItem(clusterPhyId, collectorType().getCode());
List<VersionControlItem> items = versionControlService.listVersionControlItem(this.getClusterVersion(clusterPhy), collectorType().getCode());
ClusterMetrics metrics = new ClusterMetrics(clusterPhyId, clusterPhy.getKafkaVersion());
metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(clusterPhyId);
for(VersionControlItem v : items) {
future.runnableTask(
String.format("method=ClusterMetricCollector||clusterPhyId=%d||metricName=%s", clusterPhyId, v.getName()),
String.format("class=ClusterMetricCollector||clusterPhyId=%d||metricName=%s", clusterPhyId, v.getName()),
30000,
() -> {
try {
if(null != metrics.getMetrics().get(v.getName())){return null;}
if(null != metrics.getMetrics().get(v.getName())){
return null;
}
Result<ClusterMetrics> ret = clusterMetricService.collectClusterMetricsFromKafka(clusterPhyId, v.getName());
if(null == ret || ret.failed() || null == ret.getData()){return null;}
if(null == ret || ret.failed() || null == ret.getData()){
return null;
}
metrics.putMetric(ret.getData().getMetrics());
if(!EnvUtil.isOnline()){
LOGGER.info("method=ClusterMetricCollector||clusterPhyId={}||metricName={}||metricValue={}",
clusterPhyId, v.getName(), ConvertUtil.obj2Json(ret.getData().getMetrics()));
}
} catch (Exception e){
LOGGER.error("method=ClusterMetricCollector||clusterPhyId={}||metricName={}||errMsg=exception!",
clusterPhyId, v.getName(), e);
LOGGER.error(
"method=collectKafkaMetrics||clusterPhyId={}||metricName={}||errMsg=exception!",
clusterPhyId, v.getName(), e
);
}
return null;
@@ -76,10 +75,9 @@ public class ClusterMetricCollector extends AbstractMetricCollector<ClusterMetri
metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (System.currentTimeMillis() - startTime) / 1000.0f);
publishMetric(new ClusterMetricEvent(this, Arrays.asList(metrics)));
publishMetric(new ClusterMetricEvent(this, Collections.singletonList(metrics)));
LOGGER.info("method=ClusterMetricCollector||clusterPhyId={}||startTime={}||costTime={}||msg=msg=collect finished.",
clusterPhyId, startTime, System.currentTimeMillis() - startTime);
return Collections.singletonList(metrics);
}
@Override

View File

@@ -1,6 +1,5 @@
package com.xiaojukeji.know.streaming.km.collector.metric;
package com.xiaojukeji.know.streaming.km.collector.metric.kafka;
import com.alibaba.fastjson.JSON;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
@@ -10,20 +9,16 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionContro
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.GroupMetricEvent;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
import com.xiaojukeji.know.streaming.km.common.utils.EnvUtil;
import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
import com.xiaojukeji.know.streaming.km.core.service.group.GroupMetricService;
import com.xiaojukeji.know.streaming.km.core.service.group.GroupService;
import com.xiaojukeji.know.streaming.km.core.service.version.VersionControlService;
import org.apache.commons.collections.CollectionUtils;
import org.apache.kafka.common.TopicPartition;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.*;
import java.util.concurrent.ConcurrentHashMap;
import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum.METRIC_GROUP;
@@ -32,8 +27,8 @@ import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemT
* @author didi
*/
@Component
public class GroupMetricCollector extends AbstractMetricCollector<List<GroupMetrics>> {
protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
public class GroupMetricCollector extends AbstractKafkaMetricCollector<GroupMetrics> {
protected static final ILog LOGGER = LogFactory.getLog(GroupMetricCollector.class);
@Autowired
private VersionControlService versionControlService;
@@ -45,40 +40,38 @@ public class GroupMetricCollector extends AbstractMetricCollector<List<GroupMetr
private GroupService groupService;
@Override
public void collectMetrics(ClusterPhy clusterPhy) {
Long startTime = System.currentTimeMillis();
public List<GroupMetrics> collectKafkaMetrics(ClusterPhy clusterPhy) {
Long clusterPhyId = clusterPhy.getId();
List<String> groups = new ArrayList<>();
List<String> groupNameList = new ArrayList<>();
try {
groups = groupService.listGroupsFromKafka(clusterPhyId);
groupNameList = groupService.listGroupsFromKafka(clusterPhy);
} catch (Exception e) {
LOGGER.error("method=GroupMetricCollector||clusterPhyId={}||msg=exception!", clusterPhyId, e);
LOGGER.error("method=collectKafkaMetrics||clusterPhyId={}||msg=exception!", clusterPhyId, e);
}
if(CollectionUtils.isEmpty(groups)){return;}
if(ValidateUtils.isEmptyList(groupNameList)) {
return Collections.emptyList();
}
List<VersionControlItem> items = versionControlService.listVersionControlItem(clusterPhyId, collectorType().getCode());
List<VersionControlItem> items = versionControlService.listVersionControlItem(this.getClusterVersion(clusterPhy), collectorType().getCode());
FutureWaitUtil<Void> future = getFutureUtilByClusterPhyId(clusterPhyId);
FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(clusterPhyId);
Map<String, List<GroupMetrics>> metricsMap = new ConcurrentHashMap<>();
for(String groupName : groups) {
for(String groupName : groupNameList) {
future.runnableTask(
String.format("method=GroupMetricCollector||clusterPhyId=%d||groupName=%s", clusterPhyId, groupName),
String.format("class=GroupMetricCollector||clusterPhyId=%d||groupName=%s", clusterPhyId, groupName),
30000,
() -> collectMetrics(clusterPhyId, groupName, metricsMap, items));
}
future.waitResult(30000);
List<GroupMetrics> metricsList = new ArrayList<>();
metricsMap.values().forEach(elem -> metricsList.addAll(elem));
List<GroupMetrics> metricsList = metricsMap.values().stream().collect(ArrayList::new, ArrayList::addAll, ArrayList::addAll);
publishMetric(new GroupMetricEvent(this, metricsList));
LOGGER.info("method=GroupMetricCollector||clusterPhyId={}||startTime={}||cost={}||msg=collect finished.",
clusterPhyId, startTime, System.currentTimeMillis() - startTime);
return metricsList;
}
@Override
@@ -91,9 +84,7 @@ public class GroupMetricCollector extends AbstractMetricCollector<List<GroupMetr
private void collectMetrics(Long clusterPhyId, String groupName, Map<String, List<GroupMetrics>> metricsMap, List<VersionControlItem> items) {
long startTime = System.currentTimeMillis();
List<GroupMetrics> groupMetricsList = new ArrayList<>();
Map<String, GroupMetrics> tpGroupPOMap = new HashMap<>();
Map<TopicPartition, GroupMetrics> subMetricMap = new HashMap<>();
GroupMetrics groupMetrics = new GroupMetrics(clusterPhyId, groupName, true);
groupMetrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
@@ -107,38 +98,31 @@ public class GroupMetricCollector extends AbstractMetricCollector<List<GroupMetr
continue;
}
ret.getData().stream().forEach(metrics -> {
ret.getData().forEach(metrics -> {
if (metrics.isBGroupMetric()) {
groupMetrics.putMetric(metrics.getMetrics());
} else {
String topicName = metrics.getTopic();
Integer partitionId = metrics.getPartitionId();
String tpGroupKey = genTopicPartitionGroupKey(topicName, partitionId);
tpGroupPOMap.putIfAbsent(tpGroupKey, new GroupMetrics(clusterPhyId, partitionId, topicName, groupName, false));
tpGroupPOMap.get(tpGroupKey).putMetric(metrics.getMetrics());
return;
}
});
if(!EnvUtil.isOnline()){
LOGGER.info("method=GroupMetricCollector||clusterPhyId={}||groupName={}||metricName={}||metricValue={}",
clusterPhyId, groupName, metricName, JSON.toJSONString(ret.getData()));
}
}catch (Exception e){
LOGGER.error("method=GroupMetricCollector||clusterPhyId={}||groupName={}||errMsg=exception!", clusterPhyId, groupName, e);
TopicPartition tp = new TopicPartition(metrics.getTopic(), metrics.getPartitionId());
subMetricMap.putIfAbsent(tp, new GroupMetrics(clusterPhyId, metrics.getPartitionId(), metrics.getTopic(), groupName, false));
subMetricMap.get(tp).putMetric(metrics.getMetrics());
});
} catch (Exception e) {
LOGGER.error(
"method=collectMetrics||clusterPhyId={}||groupName={}||errMsg=exception!",
clusterPhyId, groupName, e
);
}
}
groupMetricsList.add(groupMetrics);
groupMetricsList.addAll(tpGroupPOMap.values());
List<GroupMetrics> metricsList = new ArrayList<>();
metricsList.add(groupMetrics);
metricsList.addAll(subMetricMap.values());
// Record the collection cost
groupMetrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (System.currentTimeMillis() - startTime) / 1000.0f);
metricsMap.put(groupName, groupMetricsList);
}
private String genTopicPartitionGroupKey(String topic, Integer partitionId){
return topic + "@" + partitionId;
metricsMap.put(groupName, metricsList);
}
}
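The three-argument collect used above (ArrayList::new, ArrayList::addAll, ArrayList::addAll) flattens the ConcurrentHashMap values into a single list. An equivalent, arguably more familiar formulation:

// Same flattening expressed with flatMap.
List<GroupMetrics> metricsList = metricsMap.values().stream()
        .flatMap(List::stream)
        .collect(java.util.stream.Collectors.toList());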

View File

@@ -1,4 +1,4 @@
package com.xiaojukeji.know.streaming.km.collector.metric;
package com.xiaojukeji.know.streaming.km.collector.metric.kafka;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
@@ -9,8 +9,6 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.topic.Topic;
import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionControlItem;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.PartitionMetricEvent;
import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.EnvUtil;
import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
import com.xiaojukeji.know.streaming.km.core.service.partition.PartitionMetricService;
import com.xiaojukeji.know.streaming.km.core.service.topic.TopicService;
@@ -27,8 +25,8 @@ import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemT
* @author didi
*/
@Component
public class PartitionMetricCollector extends AbstractMetricCollector<PartitionMetrics> {
protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
public class PartitionMetricCollector extends AbstractKafkaMetricCollector<PartitionMetrics> {
protected static final ILog LOGGER = LogFactory.getLog(PartitionMetricCollector.class);
@Autowired
private VersionControlService versionControlService;
@@ -40,13 +38,10 @@ public class PartitionMetricCollector extends AbstractMetricCollector<PartitionM
private TopicService topicService;
@Override
public void collectMetrics(ClusterPhy clusterPhy) {
Long startTime = System.currentTimeMillis();
public List<PartitionMetrics> collectKafkaMetrics(ClusterPhy clusterPhy) {
Long clusterPhyId = clusterPhy.getId();
List<Topic> topicList = topicService.listTopicsFromCacheFirst(clusterPhyId);
List<VersionControlItem> items = versionControlService.listVersionControlItem(clusterPhyId, collectorType().getCode());
// Get all partitions in the cluster
List<VersionControlItem> items = versionControlService.listVersionControlItem(this.getClusterVersion(clusterPhy), collectorType().getCode());
FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(clusterPhyId);
@@ -55,9 +50,9 @@ public class PartitionMetricCollector extends AbstractMetricCollector<PartitionM
metricsMap.put(topic.getTopicName(), new ConcurrentHashMap<>());
future.runnableTask(
String.format("method=PartitionMetricCollector||clusterPhyId=%d||topicName=%s", clusterPhyId, topic.getTopicName()),
String.format("class=PartitionMetricCollector||clusterPhyId=%d||topicName=%s", clusterPhyId, topic.getTopicName()),
30000,
() -> collectMetrics(clusterPhyId, topic.getTopicName(), metricsMap.get(topic.getTopicName()), items)
() -> this.collectMetrics(clusterPhyId, topic.getTopicName(), metricsMap.get(topic.getTopicName()), items)
);
}
@@ -68,10 +63,7 @@ public class PartitionMetricCollector extends AbstractMetricCollector<PartitionM
this.publishMetric(new PartitionMetricEvent(this, metricsList));
LOGGER.info(
"method=PartitionMetricCollector||clusterPhyId={}||startTime={}||costTime={}||msg=collect finished.",
clusterPhyId, startTime, System.currentTimeMillis() - startTime
);
return metricsList;
}
@Override
@@ -109,17 +101,9 @@ public class PartitionMetricCollector extends AbstractMetricCollector<PartitionM
PartitionMetrics allMetrics = metricsMap.get(subMetrics.getPartitionId());
allMetrics.putMetric(subMetrics.getMetrics());
}
if (!EnvUtil.isOnline()) {
LOGGER.info(
"class=PartitionMetricCollector||method=collectMetrics||clusterPhyId={}||topicName={}||metricName={}||metricValue={}!",
clusterPhyId, topicName, v.getName(), ConvertUtil.obj2Json(ret.getData())
);
}
} catch (Exception e) {
LOGGER.info(
"class=PartitionMetricCollector||method=collectMetrics||clusterPhyId={}||topicName={}||metricName={}||errMsg=exception",
"method=collectMetrics||clusterPhyId={}||topicName={}||metricName={}||errMsg=exception",
clusterPhyId, topicName, v.getName(), e
);
}

View File

@@ -1,6 +1,5 @@
package com.xiaojukeji.know.streaming.km.collector.metric;
package com.xiaojukeji.know.streaming.km.collector.metric.kafka;
import com.alibaba.fastjson.JSON;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
@@ -11,7 +10,6 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionContro
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.ReplicaMetricEvent;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
import com.xiaojukeji.know.streaming.km.common.utils.EnvUtil;
import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
import com.xiaojukeji.know.streaming.km.core.service.partition.PartitionService;
import com.xiaojukeji.know.streaming.km.core.service.replica.ReplicaMetricService;
@@ -28,8 +26,8 @@ import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemT
* @author didi
*/
@Component
public class ReplicaMetricCollector extends AbstractMetricCollector<ReplicationMetrics> {
protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
public class ReplicaMetricCollector extends AbstractKafkaMetricCollector<ReplicationMetrics> {
protected static final ILog LOGGER = LogFactory.getLog(ReplicaMetricCollector.class);
@Autowired
private VersionControlService versionControlService;
@@ -41,12 +39,10 @@ public class ReplicaMetricCollector extends AbstractMetricCollector<ReplicationM
private PartitionService partitionService;
@Override
public void collectMetrics(ClusterPhy clusterPhy) {
Long startTime = System.currentTimeMillis();
public List<ReplicationMetrics> collectKafkaMetrics(ClusterPhy clusterPhy) {
Long clusterPhyId = clusterPhy.getId();
List<VersionControlItem> items = versionControlService.listVersionControlItem(clusterPhyId, collectorType().getCode());
List<Partition> partitions = partitionService.listPartitionByCluster(clusterPhyId);
List<Partition> partitions = partitionService.listPartitionFromCacheFirst(clusterPhyId);
List<VersionControlItem> items = versionControlService.listVersionControlItem(this.getClusterVersion(clusterPhy), collectorType().getCode());
FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(clusterPhyId);
@@ -54,10 +50,11 @@ public class ReplicaMetricCollector extends AbstractMetricCollector<ReplicationM
for(Partition partition : partitions) {
for (Integer brokerId: partition.getAssignReplicaList()) {
ReplicationMetrics metrics = new ReplicationMetrics(clusterPhyId, partition.getTopicName(), brokerId, partition.getPartitionId());
metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
metricsList.add(metrics);
future.runnableTask(
String.format("method=ReplicaMetricCollector||clusterPhyId=%d||brokerId=%d||topicName=%s||partitionId=%d",
String.format("class=ReplicaMetricCollector||clusterPhyId=%d||brokerId=%d||topicName=%s||partitionId=%d",
clusterPhyId, brokerId, partition.getTopicName(), partition.getPartitionId()),
30000,
() -> collectMetrics(clusterPhyId, metrics, items)
@@ -69,8 +66,7 @@ public class ReplicaMetricCollector extends AbstractMetricCollector<ReplicationM
publishMetric(new ReplicaMetricEvent(this, metricsList));
LOGGER.info("method=ReplicaMetricCollector||clusterPhyId={}||startTime={}||costTime={}||msg=collect finished.",
clusterPhyId, startTime, System.currentTimeMillis() - startTime);
return metricsList;
}
@Override
@@ -83,8 +79,6 @@ public class ReplicaMetricCollector extends AbstractMetricCollector<ReplicationM
private ReplicationMetrics collectMetrics(Long clusterPhyId, ReplicationMetrics metrics, List<VersionControlItem> items) {
long startTime = System.currentTimeMillis();
metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
for(VersionControlItem v : items) {
try {
if (metrics.getMetrics().containsKey(v.getName())) {
@@ -104,15 +98,11 @@ public class ReplicaMetricCollector extends AbstractMetricCollector<ReplicationM
}
metrics.putMetric(ret.getData().getMetrics());
if (!EnvUtil.isOnline()) {
LOGGER.info("method=ReplicaMetricCollector||clusterPhyId={}||topicName={}||partitionId={}||metricName={}||metricValue={}",
clusterPhyId, metrics.getTopic(), metrics.getPartitionId(), v.getName(), JSON.toJSONString(ret.getData().getMetrics()));
}
} catch (Exception e) {
LOGGER.error("method=ReplicaMetricCollector||clusterPhyId={}||topicName={}||partition={}||metricName={}||errMsg=exception!",
clusterPhyId, metrics.getTopic(), metrics.getPartitionId(), v.getName(), e);
LOGGER.error(
"method=collectMetrics||clusterPhyId={}||topicName={}||partition={}||metricName={}||errMsg=exception!",
clusterPhyId, metrics.getTopic(), metrics.getPartitionId(), v.getName(), e
);
}
}

View File

@@ -1,4 +1,4 @@
package com.xiaojukeji.know.streaming.km.collector.metric;
package com.xiaojukeji.know.streaming.km.collector.metric.kafka;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
@@ -10,8 +10,6 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionContro
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.TopicMetricEvent;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.EnvUtil;
import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
import com.xiaojukeji.know.streaming.km.core.service.topic.TopicMetricService;
@@ -31,8 +29,8 @@ import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemT
* @author didi
*/
@Component
public class TopicMetricCollector extends AbstractMetricCollector<List<TopicMetrics>> {
protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
public class TopicMetricCollector extends AbstractKafkaMetricCollector<TopicMetrics> {
protected static final ILog LOGGER = LogFactory.getLog(TopicMetricCollector.class);
@Autowired
private VersionControlService versionControlService;
@@ -46,11 +44,10 @@ public class TopicMetricCollector extends AbstractMetricCollector<List<TopicMetr
private static final Integer AGG_METRICS_BROKER_ID = -10000;
@Override
public void collectMetrics(ClusterPhy clusterPhy) {
Long startTime = System.currentTimeMillis();
public List<TopicMetrics> collectKafkaMetrics(ClusterPhy clusterPhy) {
Long clusterPhyId = clusterPhy.getId();
List<Topic> topics = topicService.listTopicsFromCacheFirst(clusterPhyId);
List<VersionControlItem> items = versionControlService.listVersionControlItem(clusterPhyId, collectorType().getCode());
List<VersionControlItem> items = versionControlService.listVersionControlItem(this.getClusterVersion(clusterPhy), collectorType().getCode());
FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(clusterPhyId);
@@ -64,7 +61,7 @@ public class TopicMetricCollector extends AbstractMetricCollector<List<TopicMetr
allMetricsMap.put(topic.getTopicName(), metricsMap);
future.runnableTask(
String.format("method=TopicMetricCollector||clusterPhyId=%d||topicName=%s", clusterPhyId, topic.getTopicName()),
String.format("class=TopicMetricCollector||clusterPhyId=%d||topicName=%s", clusterPhyId, topic.getTopicName()),
30000,
() -> collectMetrics(clusterPhyId, topic.getTopicName(), metricsMap, items)
);
@@ -77,8 +74,7 @@ public class TopicMetricCollector extends AbstractMetricCollector<List<TopicMetr
this.publishMetric(new TopicMetricEvent(this, metricsList));
LOGGER.info("method=TopicMetricCollector||clusterPhyId={}||startTime={}||costTime={}||msg=collect finished.",
clusterPhyId, startTime, System.currentTimeMillis() - startTime);
return metricsList;
}
@Override
@@ -118,14 +114,9 @@ public class TopicMetricCollector extends AbstractMetricCollector<List<TopicMetr
metricsMap.get(metrics.getBrokerId()).putMetric(metrics.getMetrics());
}
});
if (!EnvUtil.isOnline()) {
LOGGER.info("method=TopicMetricCollector||clusterPhyId={}||topicName={}||metricName={}||metricValue={}.",
clusterPhyId, topicName, v.getName(), ConvertUtil.obj2Json(ret.getData())
);
}
} catch (Exception e) {
LOGGER.error("method=TopicMetricCollector||clusterPhyId={}||topicName={}||metricName={}||errMsg=exception!",
LOGGER.error(
"method=collectMetrics||clusterPhyId={}||topicName={}||metricName={}||errMsg=exception!",
clusterPhyId, topicName, v.getName(), e
);
}

View File

@@ -1,4 +1,4 @@
package com.xiaojukeji.know.streaming.km.collector.metric;
package com.xiaojukeji.know.streaming.km.collector.metric.kafka;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
@@ -14,10 +14,8 @@ import com.xiaojukeji.know.streaming.km.common.bean.event.metric.ZookeeperMetric
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.EnvUtil;
import com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.ZookeeperInfo;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.ZookeeperMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.ZookeeperMetricPO;
import com.xiaojukeji.know.streaming.km.core.service.kafkacontroller.KafkaControllerService;
import com.xiaojukeji.know.streaming.km.core.service.version.VersionControlService;
import com.xiaojukeji.know.streaming.km.core.service.zookeeper.ZookeeperMetricService;
@@ -25,7 +23,7 @@ import com.xiaojukeji.know.streaming.km.core.service.zookeeper.ZookeeperService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.stream.Collectors;
@@ -35,8 +33,8 @@ import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemT
* @author didi
*/
@Component
public class ZookeeperMetricCollector extends AbstractMetricCollector<ZookeeperMetricPO> {
protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
public class ZookeeperMetricCollector extends AbstractKafkaMetricCollector<ZookeeperMetrics> {
protected static final ILog LOGGER = LogFactory.getLog(ZookeeperMetricCollector.class);
@Autowired
private VersionControlService versionControlService;
@@ -51,21 +49,21 @@ public class ZookeeperMetricCollector extends AbstractMetricCollector<ZookeeperM
private KafkaControllerService kafkaControllerService;
@Override
public void collectMetrics(ClusterPhy clusterPhy) {
public List<ZookeeperMetrics> collectKafkaMetrics(ClusterPhy clusterPhy) {
Long startTime = System.currentTimeMillis();
Long clusterPhyId = clusterPhy.getId();
List<VersionControlItem> items = versionControlService.listVersionControlItem(clusterPhyId, collectorType().getCode());
List<VersionControlItem> items = versionControlService.listVersionControlItem(this.getClusterVersion(clusterPhy), collectorType().getCode());
List<ZookeeperInfo> aliveZKList = zookeeperService.listFromDBByCluster(clusterPhyId)
.stream()
.filter(elem -> Constant.ALIVE.equals(elem.getStatus()))
.collect(Collectors.toList());
KafkaController kafkaController = kafkaControllerService.getKafkaControllerFromDB(clusterPhyId);
ZookeeperMetrics metrics = ZookeeperMetrics.initWithMetric(clusterPhyId, Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (float)Constant.INVALID_CODE);
ZookeeperMetrics metrics = ZookeeperMetrics.initWithMetric(clusterPhyId, Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
if (ValidateUtils.isEmptyList(aliveZKList)) {
// If no ZK node is alive, publish the event and return immediately
publishMetric(new ZookeeperMetricEvent(this, Arrays.asList(metrics)));
return;
publishMetric(new ZookeeperMetricEvent(this, Collections.singletonList(metrics)));
return Collections.singletonList(metrics);
}
// Build the parameters
@@ -82,6 +80,7 @@ public class ZookeeperMetricCollector extends AbstractMetricCollector<ZookeeperM
if(null != metrics.getMetrics().get(v.getName())) {
continue;
}
param.setMetricName(v.getName());
Result<ZookeeperMetrics> ret = zookeeperMetricService.collectMetricsFromZookeeper(param);
@@ -90,16 +89,9 @@ public class ZookeeperMetricCollector extends AbstractMetricCollector<ZookeeperM
}
metrics.putMetric(ret.getData().getMetrics());
if(!EnvUtil.isOnline()){
LOGGER.info(
"class=ZookeeperMetricCollector||method=collectMetrics||clusterPhyId={}||metricName={}||metricValue={}",
clusterPhyId, v.getName(), ConvertUtil.obj2Json(ret.getData().getMetrics())
);
}
} catch (Exception e){
LOGGER.error(
"class=ZookeeperMetricCollector||method=collectMetrics||clusterPhyId={}||metricName={}||errMsg=exception!",
"method=collectMetrics||clusterPhyId={}||metricName={}||errMsg=exception!",
clusterPhyId, v.getName(), e
);
}
@@ -107,12 +99,9 @@ public class ZookeeperMetricCollector extends AbstractMetricCollector<ZookeeperM
metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (System.currentTimeMillis() - startTime) / 1000.0f);
publishMetric(new ZookeeperMetricEvent(this, Arrays.asList(metrics)));
this.publishMetric(new ZookeeperMetricEvent(this, Collections.singletonList(metrics)));
LOGGER.info(
"class=ZookeeperMetricCollector||method=collectMetrics||clusterPhyId={}||startTime={}||costTime={}||msg=msg=collect finished.",
clusterPhyId, startTime, System.currentTimeMillis() - startTime
);
return Collections.singletonList(metrics);
}
@Override

View File

@@ -237,7 +237,7 @@ public class CollectThreadPoolService {
private synchronized FutureWaitUtil<Void> closeOldAndCreateNew(Long shardId) {
// The new one
FutureWaitUtil<Void> newFutureUtil = FutureWaitUtil.init(
"CollectorMetricsFutureUtil-Shard-" + shardId,
"MetricCollect-Shard-" + shardId,
this.futureUtilThreadNum,
this.futureUtilThreadNum,
this.futureUtilQueueSize

View File

@@ -3,67 +3,47 @@ package com.xiaojukeji.know.streaming.km.collector.sink;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.po.BaseESPO;
import com.xiaojukeji.know.streaming.km.common.utils.EnvUtil;
import com.xiaojukeji.know.streaming.km.common.utils.NamedThreadFactory;
import com.xiaojukeji.know.streaming.km.common.utils.FutureUtil;
import com.xiaojukeji.know.streaming.km.persistence.es.dao.BaseMetricESDAO;
import org.apache.commons.collections.CollectionUtils;
import java.util.List;
import java.util.Objects;
import java.util.concurrent.LinkedBlockingDeque;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
public abstract class AbstractMetricESSender {
protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
private static final ILog LOGGER = LogFactory.getLog(AbstractMetricESSender.class);
private static final int THRESHOLD = 100;
private static final ThreadPoolExecutor esExecutor = new ThreadPoolExecutor(
private static final FutureUtil<Void> esExecutor = FutureUtil.init(
"MetricsESSender",
10,
20,
6000,
TimeUnit.MILLISECONDS,
new LinkedBlockingDeque<>(1000),
new NamedThreadFactory("KM-Collect-MetricESSender-ES"),
(r, e) -> LOGGER.warn("class=MetricESSender||msg=KM-Collect-MetricESSender-ES Deque is blocked, taskCount:{}" + e.getTaskCount())
10000
);
/**
* Send to ES according to the monitoring dimension
*/
protected boolean send2es(String index, List<? extends BaseESPO> statsList){
protected boolean send2es(String index, List<? extends BaseESPO> statsList) {
LOGGER.info("method=send2es||indexName={}||metricsSize={}||msg=send metrics to es", index, statsList.size());
if (CollectionUtils.isEmpty(statsList)) {
return true;
}
if (!EnvUtil.isOnline()) {
LOGGER.info("class=MetricESSender||method=send2es||ariusStats={}||size={}",
index, statsList.size());
}
BaseMetricESDAO baseMetricESDao = BaseMetricESDAO.getByStatsType(index);
if (Objects.isNull( baseMetricESDao )) {
LOGGER.error("class=MetricESSender||method=send2es||errMsg=fail to find {}", index);
if (Objects.isNull(baseMetricESDao)) {
LOGGER.error("method=send2es||indexName={}||errMsg=find dao failed", index);
return false;
}
int size = statsList.size();
int num = (size) % THRESHOLD == 0 ? (size / THRESHOLD) : (size / THRESHOLD + 1);
for (int i = 0; i < statsList.size(); i += THRESHOLD) {
final int idxStart = i;
if (size < THRESHOLD) {
esExecutor.execute(
() -> baseMetricESDao.batchInsertStats(statsList)
);
return true;
}
for (int i = 1; i < num + 1; i++) {
int end = (i * THRESHOLD) > size ? size : (i * THRESHOLD);
int start = (i - 1) * THRESHOLD;
esExecutor.execute(
() -> baseMetricESDao.batchInsertStats(statsList.subList(start, end))
// Send asynchronously
esExecutor.submitTask(
() -> baseMetricESDao.batchInsertStats(statsList.subList(idxStart, Math.min(idxStart + THRESHOLD, statsList.size())))
);
}
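The rewritten loop advances by THRESHOLD and clamps the upper bound with Math.min, replacing the old size/num arithmetic. A tiny self-contained check of the chunk boundaries it produces:

public class ChunkBoundsSketch {
    public static void main(String[] args) {
        int size = 250, threshold = 100;
        // prints [0, 100) [100, 200) [200, 250) — the tail chunk is clamped, no off-by-one
        for (int i = 0; i < size; i += threshold) {
            System.out.printf("[%d, %d) ", i, Math.min(i + threshold, size));
        }
    }
}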

View File

@@ -0,0 +1,33 @@
package com.xiaojukeji.know.streaming.km.collector.sink.connect;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.connect.ConnectClusterMetricEvent;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.connect.ConnectClusterMetricPO;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;
import javax.annotation.PostConstruct;
import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.CONNECT_CLUSTER_INDEX;
/**
* @author wyb
* @date 2022/11/7
*/
@Component
public class ConnectClusterMetricESSender extends AbstractMetricESSender implements ApplicationListener<ConnectClusterMetricEvent> {
protected static final ILog LOGGER = LogFactory.getLog(ConnectClusterMetricESSender.class);
@PostConstruct
public void init(){
LOGGER.info("class=ConnectClusterMetricESSender||method=init||msg=init finished");
}
@Override
public void onApplicationEvent(ConnectClusterMetricEvent event) {
send2es(CONNECT_CLUSTER_INDEX, ConvertUtil.list2List(event.getConnectClusterMetrics(), ConnectClusterMetricPO.class));
}
}

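This sender, the ConnectorMetricESSender below, and the kafka senders further down all share one event-driven shape: a collector publishes a metrics event, and each @Component listener converts the beans to POs and hands them to send2es. A sketch of the publishing side, with illustrative names only (the actual collectors are not part of this diff):

import org.springframework.context.ApplicationEventPublisher;
import org.springframework.stereotype.Component;

@Component
class ExampleConnectMetricsCollector {
    private final ApplicationEventPublisher publisher;

    ExampleConnectMetricsCollector(ApplicationEventPublisher publisher) {
        this.publisher = publisher;
    }

    void afterCollecting(ConnectClusterMetricEvent event) {
        // Spring routes the event to every ApplicationListener<ConnectClusterMetricEvent>,
        // i.e. ConnectClusterMetricESSender above, which forwards the POs to Elasticsearch
        publisher.publishEvent(event);
    }
}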
View File

@@ -0,0 +1,33 @@
package com.xiaojukeji.know.streaming.km.collector.sink.connect;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.connect.ConnectorMetricEvent;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.connect.ConnectorMetricPO;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;
import javax.annotation.PostConstruct;
import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.CONNECT_CONNECTOR_INDEX;
/**
* @author wyb
* @date 2022/11/7
*/
@Component
public class ConnectorMetricESSender extends AbstractMetricESSender implements ApplicationListener<ConnectorMetricEvent> {
protected static final ILog LOGGER = LogFactory.getLog(ConnectorMetricESSender.class);
@PostConstruct
public void init(){
LOGGER.info("class=ConnectorMetricESSender||method=init||msg=init finished");
}
@Override
public void onApplicationEvent(ConnectorMetricEvent event) {
send2es(CONNECT_CONNECTOR_INDEX, ConvertUtil.list2List(event.getConnectorMetricsList(), ConnectorMetricPO.class));
}
}

View File

@@ -1,7 +1,8 @@
-package com.xiaojukeji.know.streaming.km.collector.sink;
+package com.xiaojukeji.know.streaming.km.collector.sink.kafka;
 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
+import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
 import com.xiaojukeji.know.streaming.km.common.bean.event.metric.BrokerMetricEvent;
 import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.BrokerMetricPO;
 import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
@@ -10,15 +11,15 @@ import org.springframework.stereotype.Component;
 import javax.annotation.PostConstruct;
-import static com.xiaojukeji.know.streaming.km.common.constant.ESIndexConstant.BROKER_INDEX;
+import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.BROKER_INDEX;
 @Component
 public class BrokerMetricESSender extends AbstractMetricESSender implements ApplicationListener<BrokerMetricEvent> {
-    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
+    private static final ILog LOGGER = LogFactory.getLog(BrokerMetricESSender.class);
     @PostConstruct
     public void init(){
-        LOGGER.info("class=BrokerMetricESSender||method=init||msg=init finished");
+        LOGGER.info("method=init||msg=init finished");
     }
     @Override

View File

@@ -1,7 +1,8 @@
-package com.xiaojukeji.know.streaming.km.collector.sink;
+package com.xiaojukeji.know.streaming.km.collector.sink.kafka;
 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
+import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
 import com.xiaojukeji.know.streaming.km.common.bean.event.metric.ClusterMetricEvent;
 import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.ClusterMetricPO;
 import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
@@ -10,16 +11,15 @@ import org.springframework.stereotype.Component;
 import javax.annotation.PostConstruct;
-import static com.xiaojukeji.know.streaming.km.common.constant.ESIndexConstant.CLUSTER_INDEX;
+import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.CLUSTER_INDEX;
 @Component
 public class ClusterMetricESSender extends AbstractMetricESSender implements ApplicationListener<ClusterMetricEvent> {
-    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
+    private static final ILog LOGGER = LogFactory.getLog(ClusterMetricESSender.class);
     @PostConstruct
     public void init(){
-        LOGGER.info("class=ClusterMetricESSender||method=init||msg=init finished");
+        LOGGER.info("method=init||msg=init finished");
     }
     @Override

View File

@@ -1,7 +1,8 @@
-package com.xiaojukeji.know.streaming.km.collector.sink;
+package com.xiaojukeji.know.streaming.km.collector.sink.kafka;
 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
+import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
 import com.xiaojukeji.know.streaming.km.common.bean.event.metric.GroupMetricEvent;
 import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.GroupMetricPO;
 import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
@@ -10,16 +11,15 @@ import org.springframework.stereotype.Component;
 import javax.annotation.PostConstruct;
-import static com.xiaojukeji.know.streaming.km.common.constant.ESIndexConstant.GROUP_INDEX;
+import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.GROUP_INDEX;
 @Component
 public class GroupMetricESSender extends AbstractMetricESSender implements ApplicationListener<GroupMetricEvent> {
-    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
+    private static final ILog LOGGER = LogFactory.getLog(GroupMetricESSender.class);
     @PostConstruct
     public void init(){
-        LOGGER.info("class=GroupMetricESSender||method=init||msg=init finished");
+        LOGGER.info("method=init||msg=init finished");
     }
     @Override

View File

@@ -1,7 +1,8 @@
-package com.xiaojukeji.know.streaming.km.collector.sink;
+package com.xiaojukeji.know.streaming.km.collector.sink.kafka;
 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
+import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
 import com.xiaojukeji.know.streaming.km.common.bean.event.metric.PartitionMetricEvent;
 import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.PartitionMetricPO;
 import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
@@ -10,15 +11,15 @@ import org.springframework.stereotype.Component;
 import javax.annotation.PostConstruct;
-import static com.xiaojukeji.know.streaming.km.common.constant.ESIndexConstant.PARTITION_INDEX;
+import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.PARTITION_INDEX;
 @Component
 public class PartitionMetricESSender extends AbstractMetricESSender implements ApplicationListener<PartitionMetricEvent> {
-    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
+    private static final ILog LOGGER = LogFactory.getLog(PartitionMetricESSender.class);
     @PostConstruct
     public void init(){
-        LOGGER.info("class=PartitionMetricESSender||method=init||msg=init finished");
+        LOGGER.info("method=init||msg=init finished");
     }
     @Override

View File

@@ -1,7 +1,8 @@
-package com.xiaojukeji.know.streaming.km.collector.sink;
+package com.xiaojukeji.know.streaming.km.collector.sink.kafka;
 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
+import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
 import com.xiaojukeji.know.streaming.km.common.bean.event.metric.ReplicaMetricEvent;
 import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.ReplicationMetricPO;
 import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
@@ -10,15 +11,15 @@ import org.springframework.stereotype.Component;
 import javax.annotation.PostConstruct;
-import static com.xiaojukeji.know.streaming.km.common.constant.ESIndexConstant.REPLICATION_INDEX;
+import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.REPLICATION_INDEX;
 @Component
 public class ReplicaMetricESSender extends AbstractMetricESSender implements ApplicationListener<ReplicaMetricEvent> {
-    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
+    private static final ILog LOGGER = LogFactory.getLog(ReplicaMetricESSender.class);
     @PostConstruct
     public void init(){
-        LOGGER.info("class=GroupMetricESSender||method=init||msg=init finished");
+        LOGGER.info("method=init||msg=init finished");
     }
     @Override

View File

@@ -1,7 +1,8 @@
-package com.xiaojukeji.know.streaming.km.collector.sink;
+package com.xiaojukeji.know.streaming.km.collector.sink.kafka;
 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
+import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
 import com.xiaojukeji.know.streaming.km.common.bean.event.metric.*;
 import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.*;
 import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
@@ -10,16 +11,15 @@ import org.springframework.stereotype.Component;
 import javax.annotation.PostConstruct;
-import static com.xiaojukeji.know.streaming.km.common.constant.ESIndexConstant.TOPIC_INDEX;
+import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.TOPIC_INDEX;
 @Component
 public class TopicMetricESSender extends AbstractMetricESSender implements ApplicationListener<TopicMetricEvent> {
-    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
+    private static final ILog LOGGER = LogFactory.getLog(TopicMetricESSender.class);
     @PostConstruct
    public void init(){
-        LOGGER.info("class=TopicMetricESSender||method=init||msg=init finished");
+        LOGGER.info("method=init||msg=init finished");
     }
     @Override

View File

@@ -1,7 +1,8 @@
-package com.xiaojukeji.know.streaming.km.collector.sink;
+package com.xiaojukeji.know.streaming.km.collector.sink.kafka;
 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
+import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
 import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
 import com.xiaojukeji.know.streaming.km.common.bean.event.metric.ZookeeperMetricEvent;
 import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.ZookeeperMetricPO;
@@ -10,15 +11,15 @@ import org.springframework.stereotype.Component;
 import javax.annotation.PostConstruct;
-import static com.xiaojukeji.know.streaming.km.common.constant.ESIndexConstant.ZOOKEEPER_INDEX;
+import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.ZOOKEEPER_INDEX;
 @Component
 public class ZookeeperMetricESSender extends AbstractMetricESSender implements ApplicationListener<ZookeeperMetricEvent> {
-    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
+    private static final ILog LOGGER = LogFactory.getLog(ZookeeperMetricESSender.class);
     @PostConstruct
     public void init(){
-        LOGGER.info("class=ZookeeperMetricESSender||method=init||msg=init finished");
+        LOGGER.info("method=init||msg=init finished");
     }
     @Override

View File

@@ -127,5 +127,9 @@
         <groupId>org.apache.kafka</groupId>
         <artifactId>kafka_2.13</artifactId>
     </dependency>
+    <dependency>
+        <groupId>org.apache.kafka</groupId>
+        <artifactId>connect-runtime</artifactId>
+    </dependency>
 </dependencies>
</project>

View File

@@ -0,0 +1,28 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.cluster;
import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.MetricDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationSortDTO;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import javax.validation.constraints.NotNull;
import java.util.List;
/**
* @author zengqiao
* @date 22/02/24
*/
@Data
public class ClusterConnectorsOverviewDTO extends PaginationSortDTO {
@NotNull(message = "latestMetricNames不允许为空")
@ApiModelProperty("需要指标点的信息")
private List<String> latestMetricNames;
@NotNull(message = "metricLines不允许为空")
@ApiModelProperty("需要指标曲线的信息")
private MetricDTO metricLines;
@ApiModelProperty("需要排序的指标名称列表,比较第一个不为空的metric")
private List<String> sortMetricNameList;
}

View File

@@ -0,0 +1,32 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.connect;
import com.xiaojukeji.know.streaming.km.common.bean.dto.BaseDTO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import lombok.NoArgsConstructor;
import javax.validation.constraints.NotBlank;
import javax.validation.constraints.NotNull;
/**
* @author zengqiao
* @date 2022-10-17
*/
@Data
@NoArgsConstructor
@ApiModel(description = "集群Connector")
public class ClusterConnectorDTO extends BaseDTO {
@NotNull(message = "connectClusterId不允许为空")
@ApiModelProperty(value = "Connector集群ID", example = "1")
private Long connectClusterId;
@NotBlank(message = "name不允许为空串")
@ApiModelProperty(value = "Connector名称", example = "know-streaming-connector")
private String connectorName;
public ClusterConnectorDTO(Long connectClusterId, String connectorName) {
this.connectClusterId = connectClusterId;
this.connectorName = connectorName;
}
}

View File

@@ -0,0 +1,29 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.connect.cluster;
import com.xiaojukeji.know.streaming.km.common.bean.dto.BaseDTO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
/**
* @author zengqiao
* @date 2022-10-17
*/
@Data
@ApiModel(description = "集群Connector")
public class ConnectClusterDTO extends BaseDTO {
@ApiModelProperty(value = "Connect集群ID", example = "1")
private Long id;
@ApiModelProperty(value = "Connect集群名称", example = "know-streaming")
private String name;
@ApiModelProperty(value = "Connect集群URL", example = "http://127.0.0.1:8080")
private String clusterUrl;
@ApiModelProperty(value = "Connect集群版本", example = "2.5.1")
private String version;
@ApiModelProperty(value = "JMX配置", example = "")
private String jmxProperties;
}

View File

@@ -0,0 +1,20 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.ClusterConnectorDTO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import javax.validation.constraints.NotBlank;
/**
* @author zengqiao
* @date 2022-10-17
*/
@Data
@ApiModel(description = "操作Connector")
public class ConnectorActionDTO extends ClusterConnectorDTO {
@NotBlank(message = "action不允许为空串")
@ApiModelProperty(value = "Connector名称", example = "stop|restart|resume")
private String action;
}

View File

@@ -0,0 +1,21 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.ClusterConnectorDTO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import javax.validation.constraints.NotNull;
import java.util.Properties;
/**
* @author zengqiao
* @date 2022-10-17
*/
@Data
@ApiModel(description = "修改Connector配置")
public class ConnectorConfigModifyDTO extends ClusterConnectorDTO {
@NotNull(message = "configs不允许为空")
@ApiModelProperty(value = "配置", example = "")
private Properties configs;
}

View File

@@ -0,0 +1,21 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.ClusterConnectorDTO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import javax.validation.constraints.NotNull;
import java.util.Properties;
/**
* @author zengqiao
* @date 2022-10-17
*/
@Data
@ApiModel(description = "创建Connector")
public class ConnectorCreateDTO extends ClusterConnectorDTO {
@NotNull(message = "configs不允许为空")
@ApiModelProperty(value = "配置", example = "")
private Properties configs;
}

View File

@@ -0,0 +1,14 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.ClusterConnectorDTO;
import io.swagger.annotations.ApiModel;
import lombok.Data;
/**
* @author zengqiao
* @date 2022-10-17
*/
@Data
@ApiModel(description = "删除Connector")
public class ConnectorDeleteDTO extends ClusterConnectorDTO {
}

View File

@@ -0,0 +1,20 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.connect.task;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector.ConnectorActionDTO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import javax.validation.constraints.NotNull;
/**
* @author zengqiao
* @date 2022-10-17
*/
@Data
@ApiModel(description = "操作Task")
public class TaskActionDTO extends ConnectorActionDTO {
@NotNull(message = "taskId不允许为NULL")
@ApiModelProperty(value = "taskId", example = "123")
private Long taskId;
}

View File

@@ -0,0 +1,22 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.connect;
import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.MetricDTO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.util.List;
/**
* @author didi
*/
@Data
@NoArgsConstructor
@AllArgsConstructor
@ApiModel(description = "Connect集群指标查询信息")
public class MetricsConnectClustersDTO extends MetricDTO {
@ApiModelProperty("Connect集群ID")
private List<Long> connectClusterIdList;
}

View File

@@ -0,0 +1,23 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.connect;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.ClusterConnectorDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.MetricDTO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.util.List;
/**
* @author didi
*/
@Data
@NoArgsConstructor
@AllArgsConstructor
@ApiModel(description = "Connector指标查询信息")
public class MetricsConnectorsDTO extends MetricDTO {
@ApiModelProperty("Connector列表")
private List<ClusterConnectorDTO> connectorNameList;
}

View File

@@ -3,7 +3,7 @@ package com.xiaojukeji.know.streaming.km.common.bean.entity;
 /**
  * @author didi
  */
-public interface EntifyIdInterface {
+public interface EntityIdInterface {
     /**
      * Get the id
      * @return

View File

@@ -3,7 +3,6 @@ package com.xiaojukeji.know.streaming.km.common.bean.entity.broker;
 import com.alibaba.fastjson.TypeReference;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.common.IpPortData;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.config.JmxConfig;
 import com.xiaojukeji.know.streaming.km.common.bean.po.broker.BrokerPO;
 import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
 import lombok.AllArgsConstructor;
@@ -66,13 +65,13 @@ public class Broker implements Serializable {
      */
     private Map<String, IpPortData> endpointMap;
-    public static Broker buildFrom(Long clusterPhyId, Node node, Long startTimestamp, JmxConfig jmxConfig) {
+    public static Broker buildFrom(Long clusterPhyId, Node node, Long startTimestamp) {
         Broker metadata = new Broker();
         metadata.setClusterPhyId(clusterPhyId);
         metadata.setBrokerId(node.id());
         metadata.setHost(node.host());
         metadata.setPort(node.port());
-        metadata.setJmxPort(jmxConfig != null ? jmxConfig.getJmxPort() : -1);
+        metadata.setJmxPort(-1);
         metadata.setStartTimestamp(startTimestamp);
         metadata.setRack(node.rack());
         metadata.setStatus(1);

View File

@@ -1,6 +1,6 @@
 package com.xiaojukeji.know.streaming.km.common.bean.entity.cluster;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.EntifyIdInterface;
+import com.xiaojukeji.know.streaming.km.common.bean.entity.EntityIdInterface;
 import lombok.AllArgsConstructor;
 import lombok.Data;
 import lombok.NoArgsConstructor;
@@ -10,7 +10,7 @@ import java.util.Date;
 @Data
 @NoArgsConstructor
 @AllArgsConstructor
-public class ClusterPhy implements Comparable<ClusterPhy>, EntifyIdInterface {
+public class ClusterPhy implements Comparable<ClusterPhy>, EntityIdInterface {
     /**
      * Primary key
     */

View File

@@ -1,7 +1,5 @@
 package com.xiaojukeji.know.streaming.km.common.bean.entity.config.metric;
-import com.xiaojukeji.know.streaming.km.common.constant.Constant;
-import lombok.AllArgsConstructor;
 import lombok.Data;
 import lombok.NoArgsConstructor;

View File

@@ -0,0 +1,61 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.connect;
import com.xiaojukeji.know.streaming.km.common.bean.entity.EntityIdInterface;
import lombok.Data;
import java.io.Serializable;
@Data
public class ConnectCluster implements Serializable, Comparable<ConnectCluster>, EntityIdInterface {
    /**
     * Cluster ID
     */
    private Long id;
    /**
     * Cluster name
     */
    private String name;
    /**
     * Consumer group used by the cluster
     */
    private String groupName;
    /**
     * State of the consumer group used by the cluster, which also represents the cluster state
     * @see com.xiaojukeji.know.streaming.km.common.enums.group.GroupStateEnum
     */
    private Integer state;
    /**
     * Leader URL reported by the worker
     */
    private String memberLeaderUrl;
    /**
     * Version information
     */
    private String version;
    /**
     * JMX configuration
     * @see com.xiaojukeji.know.streaming.km.common.bean.entity.config.JmxConfig
     */
    private String jmxProperties;
    /**
     * Kafka cluster ID
     */
    private Long kafkaClusterPhyId;
    /**
     * Cluster URL
     */
    private String clusterUrl;
@Override
public int compareTo(ConnectCluster connectCluster) {
return this.id.compareTo(connectCluster.getId());
}
}

View File

@@ -0,0 +1,38 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.connect;
import com.xiaojukeji.know.streaming.km.common.enums.group.GroupStateEnum;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.io.Serializable;
@Data
@NoArgsConstructor
public class ConnectClusterMetadata implements Serializable {
    /**
     * Kafka cluster ID
     */
    private Long kafkaClusterPhyId;
    /**
     * Consumer group used by the cluster
     */
    private String groupName;
    /**
     * State of the consumer group used by the cluster, which also represents the cluster state
     */
    private GroupStateEnum state;
    /**
     * Leader URL reported by the worker
     */
    private String memberLeaderUrl;
public ConnectClusterMetadata(Long kafkaClusterPhyId, String groupName, GroupStateEnum state, String memberLeaderUrl) {
this.kafkaClusterPhyId = kafkaClusterPhyId;
this.groupName = groupName;
this.state = state;
this.memberLeaderUrl = memberLeaderUrl;
}
}

View File

@@ -0,0 +1,87 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.connect;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.utils.CommonUtils;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.io.Serializable;
import java.net.URI;
@Data
@NoArgsConstructor
public class ConnectWorker implements Serializable {
protected static final ILog LOGGER = LogFactory.getLog(ConnectWorker.class);
    /**
     * Kafka cluster ID
     */
    private Long kafkaClusterPhyId;
    /**
     * Connect cluster ID
     */
    private Long connectClusterId;
    /**
     * Member ID
     */
    private String memberId;
    /**
     * Host
     */
    private String host;
    /**
     * JMX port
     */
    private Integer jmxPort;
    /**
     * URL
     */
    private String url;
    /**
     * URL of the leader
     */
    private String leaderUrl;
    /**
     * 1 if this worker is the leader, 0 otherwise
     */
    private Integer leader;
    /**
     * Worker address
     */
    private String workerId;
public ConnectWorker(Long kafkaClusterPhyId,
Long connectClusterId,
String memberId,
String host,
Integer jmxPort,
String url,
String leaderUrl,
Integer leader) {
this.kafkaClusterPhyId = kafkaClusterPhyId;
this.connectClusterId = connectClusterId;
this.memberId = memberId;
this.host = host;
this.jmxPort = jmxPort;
this.url = url;
this.leaderUrl = leaderUrl;
this.leader = leader;
String workerId = CommonUtils.getWorkerId(url);
if (workerId == null) {
workerId = memberId;
LOGGER.error("class=ConnectWorker||connectClusterId={}||memberId={}||url={}||msg=analysis url fail"
, connectClusterId, memberId, url);
}
this.workerId = workerId;
}
}

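The constructor above derives workerId from the worker URL and falls back to memberId when parsing fails, logging the failure. CommonUtils.getWorkerId itself is not part of this diff; a hypothetical equivalent, assuming it reduces a URL to host:port and returns null on failure, might look like:

import java.net.URI;

final class WorkerIdParser {
    // Hypothetical stand-in for CommonUtils.getWorkerId(url); returns null on failure,
    // which is exactly the case the ConnectWorker constructor logs and falls back on.
    static String parse(String url) {
        try {
            URI uri = URI.create(url);
            return (uri.getHost() == null || uri.getPort() == -1)
                    ? null
                    : uri.getHost() + ":" + uri.getPort();
        } catch (IllegalArgumentException e) {
            return null;
        }
    }
}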
View File

@@ -0,0 +1,58 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.connect;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.io.Serializable;
@Data
@NoArgsConstructor
public class WorkerConnector implements Serializable {
    /**
     * Connect cluster ID
     */
    private Long connectClusterId;
    /**
     * Kafka cluster ID
     */
    private Long kafkaClusterPhyId;
    /**
     * Connector name
     */
    private String connectorName;
    private String workerMemberId;
    /**
     * Task state
     */
    private String state;
    /**
     * Task ID
     */
    private Integer taskId;
    /**
     * Worker information
     */
    private String workerId;
    /**
     * Error trace
     */
    private String trace;
public WorkerConnector(Long kafkaClusterPhyId, Long connectClusterId, String connectorName, String workerMemberId, Integer taskId, String state, String workerId, String trace) {
this.kafkaClusterPhyId = kafkaClusterPhyId;
this.connectClusterId = connectClusterId;
this.connectorName = connectorName;
this.workerMemberId = workerMemberId;
this.taskId = taskId;
this.state = state;
this.workerId = workerId;
this.trace = trace;
}
}

View File

@@ -0,0 +1,19 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.connect.config;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import org.apache.kafka.connect.runtime.rest.entities.ConfigInfo;
/**
* @see ConfigInfo
*/
@Data
@NoArgsConstructor
@AllArgsConstructor
public class ConnectConfigInfo {
private ConnectConfigKeyInfo definition;
private ConnectConfigValueInfo value;
}

View File

@@ -0,0 +1,71 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.connect.config;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import org.apache.kafka.connect.runtime.rest.entities.ConfigInfo;
import org.apache.kafka.connect.runtime.rest.entities.ConfigInfos;
import java.util.*;
import static com.xiaojukeji.know.streaming.km.common.constant.Constant.CONNECTOR_CONFIG_ACTION_RELOAD_NAME;
import static com.xiaojukeji.know.streaming.km.common.constant.Constant.CONNECTOR_CONFIG_ERRORS_TOLERANCE_NAME;
/**
* @see ConfigInfos
*/
@Data
@NoArgsConstructor
@AllArgsConstructor
public class ConnectConfigInfos {
private static final Map<String, List<String>> recommendValuesMap = new HashMap<>();
static {
recommendValuesMap.put(CONNECTOR_CONFIG_ACTION_RELOAD_NAME, Arrays.asList("none", "restart"));
recommendValuesMap.put(CONNECTOR_CONFIG_ERRORS_TOLERANCE_NAME, Arrays.asList("none", "all"));
}
private String name;
private int errorCount;
private List<String> groups;
private List<ConnectConfigInfo> configs;
public ConnectConfigInfos(ConfigInfos configInfos) {
this.name = configInfos.name();
this.errorCount = configInfos.errorCount();
this.groups = configInfos.groups();
this.configs = new ArrayList<>();
for (ConfigInfo configInfo: configInfos.values()) {
ConnectConfigKeyInfo definition = new ConnectConfigKeyInfo();
definition.setName(configInfo.configKey().name());
definition.setType(configInfo.configKey().type());
definition.setRequired(configInfo.configKey().required());
definition.setDefaultValue(configInfo.configKey().defaultValue());
definition.setImportance(configInfo.configKey().importance());
definition.setDocumentation(configInfo.configKey().documentation());
definition.setGroup(configInfo.configKey().group());
definition.setOrderInGroup(configInfo.configKey().orderInGroup());
definition.setWidth(configInfo.configKey().width());
definition.setDisplayName(configInfo.configKey().displayName());
definition.setDependents(configInfo.configKey().dependents());
ConnectConfigValueInfo value = new ConnectConfigValueInfo();
value.setName(configInfo.configValue().name());
value.setValue(configInfo.configValue().value());
value.setRecommendedValues(recommendValuesMap.getOrDefault(configInfo.configValue().name(), configInfo.configValue().recommendedValues()));
value.setErrors(configInfo.configValue().errors());
value.setVisible(configInfo.configValue().visible());
ConnectConfigInfo connectConfigInfo = new ConnectConfigInfo();
connectConfigInfo.setDefinition(definition);
connectConfigInfo.setValue(value);
this.configs.add(connectConfigInfo);
}
}
}

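The static map at the top of ConnectConfigInfos is the mechanism behind the config.action.reload and errors.tolerance defaults added in this release: for those two keys the UI gets a fixed candidate list, while every other key falls back to whatever the Connect REST validation returned. Reduced to its core, with the literal key strings assumed to match the two constants imported above:

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

final class RecommendedValues {
    private static final Map<String, List<String>> OVERRIDES = new HashMap<>();
    static {
        // assumed values of CONNECTOR_CONFIG_ACTION_RELOAD_NAME / CONNECTOR_CONFIG_ERRORS_TOLERANCE_NAME
        OVERRIDES.put("config.action.reload", Arrays.asList("none", "restart"));
        OVERRIDES.put("errors.tolerance", Arrays.asList("none", "all"));
    }

    // project override first, Connect-suggested values otherwise
    static List<String> resolve(String configName, List<String> fromConnect) {
        return OVERRIDES.getOrDefault(configName, fromConnect);
    }
}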
View File

@@ -0,0 +1,38 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.connect.config;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import org.apache.kafka.connect.runtime.rest.entities.ConfigKeyInfo;
import java.util.List;
/**
* @see ConfigKeyInfo
*/
@Data
@NoArgsConstructor
@AllArgsConstructor
public class ConnectConfigKeyInfo {
private String name;
private String type;
private boolean required;
private String defaultValue;
private String importance;
private String documentation;
private String group;
private int orderInGroup;
private String width;
private String displayName;
private List<String> dependents;
}

View File

@@ -0,0 +1,27 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.connect.config;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import org.apache.kafka.connect.runtime.rest.entities.ConfigValueInfo;
import java.util.List;
/**
* @see ConfigValueInfo
*/
@Data
@NoArgsConstructor
@AllArgsConstructor
public class ConnectConfigValueInfo {
private String name;
private String value;
private List<String> recommendedValues;
private List<String> errors;
private boolean visible;
}

View File

@@ -0,0 +1,20 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.connect.connector;
import com.alibaba.fastjson.annotation.JSONField;
import com.fasterxml.jackson.annotation.JsonProperty;
import lombok.Data;
import org.apache.kafka.connect.runtime.rest.entities.ConnectorStateInfo;
/**
* @see ConnectorStateInfo.AbstractState
*/
@Data
public abstract class KSAbstractConnectState {
private String state;
private String trace;
@JSONField(name="worker_id")
@JsonProperty("worker_id")
private String workerId;
}

View File

@@ -0,0 +1,48 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.connect.connector;
import lombok.Data;
import java.io.Serializable;
@Data
public class KSConnector implements Serializable {
    /**
     * Kafka cluster ID
     */
    private Long kafkaClusterPhyId;
    /**
     * Connect cluster ID
     */
    private Long connectClusterId;
    /**
     * Connector name
     */
    private String connectorName;
    /**
     * Connector class name
     */
    private String connectorClassName;
    /**
     * Connector type
     */
    private String connectorType;
    /**
     * Topics accessed by the connector
     */
    private String topics;
    /**
     * Number of tasks
     */
    private Integer taskCount;
    /**
     * State
     */
    private String state;
}

View File

@@ -0,0 +1,26 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.connect.connector;
import lombok.Data;
import org.apache.kafka.connect.runtime.rest.entities.ConnectorType;
import org.apache.kafka.connect.util.ConnectorTaskId;
import java.io.Serializable;
import java.util.List;
import java.util.Map;
/**
* copy from:
* @see org.apache.kafka.connect.runtime.rest.entities.ConnectorInfo
*/
@Data
public class KSConnectorInfo implements Serializable {
private Long connectClusterId;
private String name;
private Map<String, String> config;
private List<ConnectorTaskId> tasks;
private ConnectorType type;
}

View File

@@ -0,0 +1,11 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.connect.connector;
import lombok.Data;
import org.apache.kafka.connect.runtime.rest.entities.ConnectorStateInfo;
/**
* @see ConnectorStateInfo.ConnectorState
*/
@Data
public class KSConnectorState extends KSAbstractConnectState {
}

View File

@@ -0,0 +1,21 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.connect.connector;
import lombok.Data;
import org.apache.kafka.connect.runtime.rest.entities.ConnectorStateInfo;
import org.apache.kafka.connect.runtime.rest.entities.ConnectorType;
import java.util.List;
/**
* @see ConnectorStateInfo
*/
@Data
public class KSConnectorStateInfo {
private String name;
private KSConnectorState connector;
private List<KSTaskState> tasks;
private ConnectorType type;
}

View File

@@ -0,0 +1,12 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.connect.connector;
import lombok.Data;
import org.apache.kafka.connect.runtime.rest.entities.ConnectorStateInfo;
/**
* @see ConnectorStateInfo.TaskState
*/
@Data
public class KSTaskState extends KSAbstractConnectState {
private int id;
}

View File

@@ -0,0 +1,38 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.connect.plugin;
import com.alibaba.fastjson.annotation.JSONField;
import com.fasterxml.jackson.annotation.JsonProperty;
import io.swagger.annotations.ApiModel;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.io.Serializable;
/**
* @author zengqiao
* @date 22/10/17
*/
@Data
@ApiModel(description = "Connect插件信息")
@NoArgsConstructor
public class ConnectPluginBasic implements Serializable {
    /**
     * Field name used for JSON serialization
     */
@JSONField(name="class")
@JsonProperty("class")
private String className;
private String type;
private String version;
private String helpDocLink;
public ConnectPluginBasic(String className, String type, String version, String helpDocLink) {
this.className = className;
this.type = type;
this.version = version;
this.helpDocLink = helpDocLink;
}
}

View File

@@ -1,12 +1,12 @@
 package com.xiaojukeji.know.streaming.km.common.bean.entity.group;
+import com.xiaojukeji.know.streaming.km.common.bean.entity.kafka.KSGroupDescription;
 import com.xiaojukeji.know.streaming.km.common.constant.Constant;
 import com.xiaojukeji.know.streaming.km.common.enums.group.GroupStateEnum;
 import com.xiaojukeji.know.streaming.km.common.enums.group.GroupTypeEnum;
 import lombok.AllArgsConstructor;
 import lombok.Data;
 import lombok.NoArgsConstructor;
-import org.apache.kafka.clients.admin.ConsumerGroupDescription;
 import java.util.ArrayList;
 import java.util.List;
@@ -61,14 +61,14 @@ public class Group {
      */
     private int coordinatorId;
-    public Group(Long clusterPhyId, String groupName, ConsumerGroupDescription groupDescription) {
+    public Group(Long clusterPhyId, String groupName, KSGroupDescription groupDescription) {
         this.clusterPhyId = clusterPhyId;
-        this.type = groupDescription.isSimpleConsumerGroup()? GroupTypeEnum.CONSUMER: GroupTypeEnum.CONNECTOR;
+        this.type = GroupTypeEnum.getTypeByProtocolType(groupDescription.protocolType());
         this.name = groupName;
         this.state = GroupStateEnum.getByRawState(groupDescription.state());
-        this.memberCount = groupDescription.members() == null? 0: groupDescription.members().size();
+        this.memberCount = groupDescription.members() == null ? 0 : groupDescription.members().size();
         this.topicMembers = new ArrayList<>();
         this.partitionAssignor = groupDescription.partitionAssignor();
-        this.coordinatorId = groupDescription.coordinator() == null? Constant.INVALID_CODE: groupDescription.coordinator().id();
+        this.coordinatorId = groupDescription.coordinator() == null ? Constant.INVALID_CODE : groupDescription.coordinator().id();
     }
 }

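The behavioral change in this hunk: group type is no longer inferred from isSimpleConsumerGroup() but from the group's protocol type, which is what lets Connect worker groups be told apart from consumer groups. GroupTypeEnum.getTypeByProtocolType is not shown in this diff; a sketch of the assumed mapping:

final class GroupTypes {
    // Assumed mapping; the real rule lives in GroupTypeEnum.getTypeByProtocolType.
    static String typeOf(String protocolType) {
        if ("connect".equals(protocolType)) {
            return "CONNECTOR";   // Kafka Connect worker groups
        }
        return "CONSUMER";        // classic consumer groups ("consumer", or empty for simple groups)
    }
}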
View File

@@ -14,16 +14,16 @@ import java.util.stream.Collectors;
 @Data
 @NoArgsConstructor
 public class HealthCheckAggResult {
-    private HealthCheckNameEnum checkNameEnum;
+    protected HealthCheckNameEnum checkNameEnum;
-    private List<HealthCheckResultPO> poList;
+    protected List<HealthCheckResultPO> poList;
-    private Boolean passed;
+    protected Boolean passed;
     public HealthCheckAggResult(HealthCheckNameEnum checkNameEnum, List<HealthCheckResultPO> poList) {
         this.checkNameEnum = checkNameEnum;
         this.poList = poList;
-        if (!ValidateUtils.isEmptyList(poList) && poList.stream().filter(elem -> elem.getPassed() <= 0).count() <= 0) {
+        if (ValidateUtils.isEmptyList(poList) || poList.stream().filter(elem -> elem.getPassed() <= 0).count() <= 0) {
             passed = true;
         } else {
             passed = false;
@@ -45,24 +45,12 @@
         return (int) (poList.stream().filter(elem -> elem.getPassed() > 0).count());
     }
-    /**
-     * Calculate the health score of the current check,
-     * e.g. the score of a single item in the cluster Broker health check
-     */
-    public Integer calRawHealthScore() {
-        if (poList == null || poList.isEmpty()) {
-            return 100;
-        }
-        return 100 * this.getPassedCount() / this.getTotalCount();
-    }
     public List<String> getNotPassedResNameList() {
         if (poList == null) {
             return new ArrayList<>();
         }
-        return poList.stream().filter(elem -> elem.getPassed() <= 0).map(elem -> elem.getResName()).collect(Collectors.toList());
+        return poList.stream().filter(elem -> elem.getPassed() <= 0 && !ValidateUtils.isBlank(elem.getResName())).map(elem -> elem.getResName()).collect(Collectors.toList());
     }
     public Date getCreateTime() {

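Two things change here: the fields become protected so HealthScoreResult (next file) can inherit them, and the passed condition is corrected so that an empty result list now counts as passed (previously "not empty AND no failures", now "empty OR no failures"). The removed calRawHealthScore was plain integer arithmetic: with 7 of 8 checks passed, 100 * 7 / 8 = 87 after integer division. The corrected pass condition in isolation:

import java.util.List;

final class PassedRule {
    // mirrors the new condition: no results at all, or no result with passed <= 0
    static boolean aggregatePassed(List<Integer> passedFlags) {
        return passedFlags.isEmpty() || passedFlags.stream().noneMatch(p -> p <= 0);
    }
}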
View File

@@ -3,87 +3,20 @@ package com.xiaojukeji.know.streaming.km.common.bean.entity.health;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.config.healthcheck.BaseClusterHealthConfig;
 import com.xiaojukeji.know.streaming.km.common.bean.po.health.HealthCheckResultPO;
 import com.xiaojukeji.know.streaming.km.common.enums.health.HealthCheckNameEnum;
-import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
 import lombok.Data;
 import lombok.NoArgsConstructor;
-import java.util.ArrayList;
-import java.util.Date;
 import java.util.List;
-import java.util.stream.Collectors;
 @Data
 @NoArgsConstructor
-public class HealthScoreResult {
-    private HealthCheckNameEnum checkNameEnum;
+public class HealthScoreResult extends HealthCheckAggResult {
     private BaseClusterHealthConfig baseConfig;
-    private List<HealthCheckResultPO> poList;
-    private Boolean passed;
     public HealthScoreResult(HealthCheckNameEnum checkNameEnum,
                              BaseClusterHealthConfig baseConfig,
                              List<HealthCheckResultPO> poList) {
-        this.checkNameEnum = checkNameEnum;
+        super(checkNameEnum, poList);
         this.baseConfig = baseConfig;
-        this.poList = poList;
-        if (!ValidateUtils.isEmptyList(poList) && poList.stream().filter(elem -> elem.getPassed() <= 0).count() <= 0) {
-            passed = true;
-        } else {
-            passed = false;
-        }
     }
-    public Integer getTotalCount() {
-        if (poList == null) {
-            return 0;
-        }
-        return poList.size();
-    }
-    public Integer getPassedCount() {
-        if (poList == null) {
-            return 0;
-        }
-        return (int) (poList.stream().filter(elem -> elem.getPassed() > 0).count());
-    }
-    /**
-     * Calculate the health score of the current check,
-     * e.g. the score of a single item in the cluster Broker health check
-     */
-    public Integer calRawHealthScore() {
-        if (poList == null || poList.isEmpty()) {
-            return 100;
-        }
-        return 100 * this.getPassedCount() / this.getTotalCount();
-    }
-    public List<String> getNotPassedResNameList() {
-        if (poList == null) {
-            return new ArrayList<>();
-        }
-        return poList.stream().filter(elem -> elem.getPassed() <= 0 && !ValidateUtils.isBlank(elem.getResName())).map(elem -> elem.getResName()).collect(Collectors.toList());
-    }
-    public Date getCreateTime() {
-        if (ValidateUtils.isEmptyList(poList)) {
-            return null;
-        }
-        return poList.get(0).getCreateTime();
-    }
-    public Date getUpdateTime() {
-        if (ValidateUtils.isEmptyList(poList)) {
-            return null;
-        }
-        return poList.get(0).getUpdateTime();
-    }
 }

View File

@@ -0,0 +1,45 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.kafka;
import org.apache.kafka.common.KafkaFuture;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutionException;
public class KSDescribeGroupsResult {
private final Map<String, KafkaFuture<KSGroupDescription>> futures;
public KSDescribeGroupsResult(final Map<String, KafkaFuture<KSGroupDescription>> futures) {
this.futures = futures;
}
/**
* Return a map from group id to futures which yield group descriptions.
*/
public Map<String, KafkaFuture<KSGroupDescription>> describedGroups() {
return futures;
}
/**
* Return a future which yields all ConsumerGroupDescription objects, if all the describes succeed.
*/
public KafkaFuture<Map<String, KSGroupDescription>> all() {
return KafkaFuture.allOf(futures.values().toArray(new KafkaFuture[0])).thenApply(
new KafkaFuture.BaseFunction<Void, Map<String, KSGroupDescription>>() {
@Override
public Map<String, KSGroupDescription> apply(Void v) {
try {
Map<String, KSGroupDescription> descriptions = new HashMap<>(futures.size());
for (Map.Entry<String, KafkaFuture<KSGroupDescription>> entry : futures.entrySet()) {
descriptions.put(entry.getKey(), entry.getValue().get());
}
return descriptions;
} catch (InterruptedException | ExecutionException e) {
// This should be unreachable, since the KafkaFuture#allOf already ensured
// that all of the futures completed successfully.
throw new RuntimeException(e);
}
}
});
}
}

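Usage follows the stock DescribeGroupsResult this class mirrors: describedGroups() for per-group error handling, all() for fail-fast aggregation. A sketch, with the wiring that produces the result assumed:

import java.util.Map;

final class DescribeGroupsExample {
    static void printStates(KSDescribeGroupsResult result) throws Exception {
        // all() completes only if every per-group future succeeded
        Map<String, KSGroupDescription> byGroupId = result.all().get();
        byGroupId.forEach((groupId, description) ->
                System.out.println(groupId + " -> " + description.state()));
    }
}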
View File

@@ -0,0 +1,124 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.kafka;
import org.apache.kafka.common.ConsumerGroupState;
import org.apache.kafka.common.Node;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.utils.Utils;
import java.util.*;
public class KSGroupDescription {
private final String groupId;
private final String protocolType;
private final Collection<KSMemberDescription> members;
private final String partitionAssignor;
private final ConsumerGroupState state;
private final Node coordinator;
private final Set<AclOperation> authorizedOperations;
public KSGroupDescription(String groupId,
String protocolType,
Collection<KSMemberDescription> members,
String partitionAssignor,
ConsumerGroupState state,
Node coordinator) {
this(groupId, protocolType, members, partitionAssignor, state, coordinator, Collections.emptySet());
}
public KSGroupDescription(String groupId,
String protocolType,
Collection<KSMemberDescription> members,
String partitionAssignor,
ConsumerGroupState state,
Node coordinator,
Set<AclOperation> authorizedOperations) {
this.groupId = groupId == null ? "" : groupId;
this.protocolType = protocolType;
this.members = members == null ? Collections.emptyList() :
Collections.unmodifiableList(new ArrayList<>(members));
this.partitionAssignor = partitionAssignor == null ? "" : partitionAssignor;
this.state = state;
this.coordinator = coordinator;
this.authorizedOperations = authorizedOperations;
}
@Override
public boolean equals(final Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
final KSGroupDescription that = (KSGroupDescription) o;
return Objects.equals(protocolType, that.protocolType) &&
Objects.equals(groupId, that.groupId) &&
Objects.equals(members, that.members) &&
Objects.equals(partitionAssignor, that.partitionAssignor) &&
state == that.state &&
Objects.equals(coordinator, that.coordinator) &&
Objects.equals(authorizedOperations, that.authorizedOperations);
}
@Override
public int hashCode() {
return Objects.hash(groupId, protocolType, members, partitionAssignor, state, coordinator, authorizedOperations);
}
/**
* The id of the consumer group.
*/
public String groupId() {
return groupId;
}
/**
* The protocol type of the group (e.g. "consumer" for consumer groups, "connect" for Connect worker groups).
*/
public String protocolType() {
return protocolType;
}
/**
* A list of the members of the consumer group.
*/
public Collection<KSMemberDescription> members() {
return members;
}
/**
* The consumer group partition assignor.
*/
public String partitionAssignor() {
return partitionAssignor;
}
/**
* The consumer group state, or UNKNOWN if the state is too new for us to parse.
*/
public ConsumerGroupState state() {
return state;
}
/**
* The consumer group coordinator, or null if the coordinator is not known.
*/
public Node coordinator() {
return coordinator;
}
/**
* authorizedOperations for this group, or null if that information is not known.
*/
public Set<AclOperation> authorizedOperations() {
return authorizedOperations;
}
@Override
public String toString() {
return "(groupId=" + groupId +
", protocolType=" + protocolType +
", members=" + Utils.join(members, ",") +
", partitionAssignor=" + partitionAssignor +
", state=" + state +
", coordinator=" + coordinator +
", authorizedOperations=" + authorizedOperations +
")";
}
}

View File

@@ -0,0 +1,79 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.kafka;
import org.apache.kafka.clients.admin.ConsumerGroupListing;
import org.apache.kafka.common.KafkaFuture;
import org.apache.kafka.common.internals.KafkaFutureImpl;
import java.util.ArrayList;
import java.util.Collection;
public class KSListGroupsResult {
private final KafkaFutureImpl<Collection<ConsumerGroupListing>> all;
private final KafkaFutureImpl<Collection<ConsumerGroupListing>> valid;
private final KafkaFutureImpl<Collection<Throwable>> errors;
public KSListGroupsResult(KafkaFutureImpl<Collection<Object>> future) {
this.all = new KafkaFutureImpl<>();
this.valid = new KafkaFutureImpl<>();
this.errors = new KafkaFutureImpl<>();
future.thenApply(new KafkaFuture.BaseFunction<Collection<Object>, Void>() {
@Override
public Void apply(Collection<Object> results) {
ArrayList<Throwable> curErrors = new ArrayList<>();
ArrayList<ConsumerGroupListing> curValid = new ArrayList<>();
for (Object resultObject : results) {
if (resultObject instanceof Throwable) {
curErrors.add((Throwable) resultObject);
} else {
curValid.add((ConsumerGroupListing) resultObject);
}
}
if (!curErrors.isEmpty()) {
all.completeExceptionally(curErrors.get(0));
} else {
all.complete(curValid);
}
valid.complete(curValid);
errors.complete(curErrors);
return null;
}
});
}
/**
* Returns a future that yields either an exception, or the full set of consumer group
* listings.
*
* In the event of a failure, the future yields nothing but the first exception which
* occurred.
*/
public KafkaFuture<Collection<ConsumerGroupListing>> all() {
return all;
}
/**
* Returns a future which yields just the valid listings.
*
* This future never fails with an error, no matter what happens. Errors are completely
* ignored. If nothing can be fetched, an empty collection is yielded.
* If there is an error, but some results can be returned, this future will yield
* those partial results. When using this future, it is a good idea to also check
* the errors future so that errors can be displayed and handled.
*/
public KafkaFuture<Collection<ConsumerGroupListing>> valid() {
return valid;
}
/**
* Returns a future which yields just the errors which occurred.
*
* If this future yields a non-empty collection, it is very likely that elements are
* missing from the valid() set.
*
* This future itself never fails with an error. In the event of an error, this future
* will successfully yield a collection containing at least one exception.
*/
public KafkaFuture<Collection<Throwable>> errors() {
return errors;
}
}

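The three futures give callers a choice between fail-fast and best-effort listing; best-effort matters here, presumably because a single broker-side error should not hide the groups that could still be fetched. A usage sketch:

import java.util.Collection;
import org.apache.kafka.clients.admin.ConsumerGroupListing;

final class ListGroupsExample {
    static Collection<ConsumerGroupListing> bestEffort(KSListGroupsResult result) throws Exception {
        Collection<Throwable> errors = result.errors().get();  // never fails itself
        if (!errors.isEmpty()) {
            System.err.println("partial listing, first error: " + errors.iterator().next());
        }
        return result.valid().get();  // whatever could be fetched
    }
}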
View File

@@ -0,0 +1,4 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.kafka;
public class KSMemberBaseAssignment {
}

View File

@@ -0,0 +1,25 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.kafka;
import lombok.Getter;
import org.apache.kafka.connect.runtime.distributed.ConnectProtocol;
@Getter
public class KSMemberConnectAssignment extends KSMemberBaseAssignment {
private final ConnectProtocol.Assignment assignment;
private final ConnectProtocol.WorkerState workerState;
public KSMemberConnectAssignment(ConnectProtocol.Assignment assignment, ConnectProtocol.WorkerState workerState) {
this.assignment = assignment;
this.workerState = workerState;
}
@Override
public String toString() {
return "KSMemberConnectAssignment{" +
"assignment=" + assignment +
", workerState=" + workerState +
'}';
}
}

View File

@@ -0,0 +1,50 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.kafka;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.utils.Utils;
import java.util.Collections;
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;
public class KSMemberConsumerAssignment extends KSMemberBaseAssignment {
private final Set<TopicPartition> topicPartitions;
/**
* Creates an instance with the specified parameters.
*
* @param topicPartitions List of topic partitions
*/
public KSMemberConsumerAssignment(Set<TopicPartition> topicPartitions) {
this.topicPartitions = topicPartitions == null ? Collections.<TopicPartition>emptySet() :
Collections.unmodifiableSet(new HashSet<>(topicPartitions));
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
KSMemberConsumerAssignment that = (KSMemberConsumerAssignment) o;
return Objects.equals(topicPartitions, that.topicPartitions);
}
@Override
public int hashCode() {
return topicPartitions != null ? topicPartitions.hashCode() : 0;
}
/**
* The topic partitions assigned to a group member.
*/
public Set<TopicPartition> topicPartitions() {
return topicPartitions;
}
@Override
public String toString() {
return "(topicPartitions=" + Utils.join(topicPartitions, ",") + ")";
}
}

View File

@@ -0,0 +1,93 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.kafka;
import java.util.Objects;
import java.util.Optional;
public class KSMemberDescription {
private final String memberId;
private final Optional<String> groupInstanceId;
private final String clientId;
private final String host;
private final KSMemberBaseAssignment assignment;
public KSMemberDescription(String memberId,
Optional<String> groupInstanceId,
String clientId,
String host,
KSMemberBaseAssignment assignment) {
this.memberId = memberId == null ? "" : memberId;
this.groupInstanceId = groupInstanceId;
this.clientId = clientId == null ? "" : clientId;
this.host = host == null ? "" : host;
this.assignment = assignment == null ?
new KSMemberBaseAssignment() : assignment;
}
public KSMemberDescription(String memberId,
String clientId,
String host,
KSMemberBaseAssignment assignment) {
this(memberId, Optional.empty(), clientId, host, assignment);
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
KSMemberDescription that = (KSMemberDescription) o;
return memberId.equals(that.memberId) &&
groupInstanceId.equals(that.groupInstanceId) &&
clientId.equals(that.clientId) &&
host.equals(that.host) &&
assignment.equals(that.assignment);
}
@Override
public int hashCode() {
return Objects.hash(memberId, groupInstanceId, clientId, host, assignment);
}
/**
* The consumer id of the group member.
*/
public String consumerId() {
return memberId;
}
/**
* The instance id of the group member.
*/
public Optional<String> groupInstanceId() {
return groupInstanceId;
}
/**
* The client id of the group member.
*/
public String clientId() {
return clientId;
}
/**
* The host where the group member is running.
*/
public String host() {
return host;
}
/**
* The assignment of the group member.
*/
public KSMemberBaseAssignment assignment() {
return assignment;
}
@Override
public String toString() {
return "(memberId=" + memberId +
", groupInstanceId=" + groupInstanceId.orElse("null") +
", clientId=" + clientId +
", host=" + host +
", assignment=" + assignment + ")";
}
}

View File

@@ -36,7 +36,7 @@ public abstract class BaseMetrics implements Serializable {
         return metrics.get(key);
     }
-    public BaseMetrics(Long clusterPhyId){
+    protected BaseMetrics(Long clusterPhyId) {
         this.clusterPhyId = clusterPhyId;
     }

View File

@@ -0,0 +1,35 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.connect;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.BaseMetrics;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import lombok.ToString;
/**
* @author zengqiao
* @date 20/6/17
*/
@Data
@NoArgsConstructor
@AllArgsConstructor
@ToString
public class ConnectClusterMetrics extends BaseMetrics {
private Long connectClusterId;
public ConnectClusterMetrics(Long clusterPhyId, Long connectClusterId){
super(clusterPhyId);
this.connectClusterId = connectClusterId;
}
public static ConnectClusterMetrics initWithMetric(Long connectClusterId, String metric, Float value) {
        ConnectClusterMetrics connectClusterMetrics = new ConnectClusterMetrics(connectClusterId, connectClusterId);
        connectClusterMetrics.putMetric(metric, value);
        return connectClusterMetrics;
}
@Override
public String unique() {
return "KCC@" + clusterPhyId + "@" + connectClusterId;
}
}

View File

@@ -0,0 +1,35 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.connect;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.BaseMetrics;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import lombok.ToString;
/**
* @author wyb
* @date 2022/11/2
*/
@Data
@AllArgsConstructor
@NoArgsConstructor
@ToString
public class ConnectWorkerMetrics extends BaseMetrics {
private Long connectClusterId;
private String workerId;
public static ConnectWorkerMetrics initWithMetric(Long connectClusterId, String workerId, String metric, Float value) {
ConnectWorkerMetrics connectWorkerMetrics = new ConnectWorkerMetrics();
connectWorkerMetrics.setConnectClusterId(connectClusterId);
connectWorkerMetrics.setWorkerId(workerId);
connectWorkerMetrics.putMetric(metric, value);
return connectWorkerMetrics;
}
@Override
public String unique() {
return "KCC@" + clusterPhyId + "@" + connectClusterId + "@" + workerId;
}
}

View File

@@ -0,0 +1,39 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.connect;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.BaseMetrics;
import lombok.Data;
import lombok.NoArgsConstructor;
import lombok.ToString;
/**
* @author zengqiao
* @date 20/6/17
*/
@Data
@NoArgsConstructor
@ToString
public class ConnectorMetrics extends BaseMetrics {
private Long connectClusterId;
private String connectorName;
private String connectorNameAndClusterId;
public ConnectorMetrics(Long connectClusterId, String connectorName) {
super(null);
this.connectClusterId = connectClusterId;
this.connectorName = connectorName;
this.connectorNameAndClusterId = connectorName + "#" + connectClusterId;
}
public static ConnectorMetrics initWithMetric(Long connectClusterId, String connectorName, String metricName, Float value) {
ConnectorMetrics metrics = new ConnectorMetrics(connectClusterId, connectorName);
metrics.putMetric(metricName, value);
return metrics;
}
@Override
public String unique() {
return "KCOR@" + connectClusterId + "@" + connectorName;
}
}

View File

@@ -0,0 +1,39 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.connect;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.BaseMetrics;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import lombok.ToString;
/**
* @author wyb
* @date 2022/11/4
*/
@Data
@NoArgsConstructor
@ToString
public class ConnectorTaskMetrics extends BaseMetrics {
private Long connectClusterId;
private String connectorName;
private Integer taskId;
public ConnectorTaskMetrics(Long connectClusterId, String connectorName, Integer taskId) {
this.connectClusterId = connectClusterId;
this.connectorName = connectorName;
this.taskId = taskId;
}
public static ConnectorTaskMetrics initWithMetric(Long connectClusterId, String connectorName, Integer taskId, String metricName, Float value) {
ConnectorTaskMetrics metrics = new ConnectorTaskMetrics(connectClusterId, connectorName, taskId);
metrics.putMetric(metricName,value);
return metrics;
}
@Override
public String unique() {
return "KCOR@" + connectClusterId + "@" + connectorName + "@" + taskId;
}
}

View File

@@ -0,0 +1,50 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.offset;
import org.apache.kafka.clients.admin.OffsetSpec;
/**
* @see OffsetSpec
*/
public class KSOffsetSpec {
public static class KSEarliestSpec extends KSOffsetSpec { }
public static class KSLatestSpec extends KSOffsetSpec { }
public static class KSTimestampSpec extends KSOffsetSpec {
private final long timestamp;
public KSTimestampSpec(long timestamp) {
this.timestamp = timestamp;
}
public long timestamp() {
return timestamp;
}
}
/**
* Used to retrieve the latest offset of a partition
*/
public static KSOffsetSpec latest() {
return new KSOffsetSpec.KSLatestSpec();
}
/**
* Used to retrieve the earliest offset of a partition
*/
public static KSOffsetSpec earliest() {
return new KSOffsetSpec.KSEarliestSpec();
}
/**
* Used to retrieve the earliest offset whose timestamp is greater than
* or equal to the given timestamp in the corresponding partition
* @param timestamp in milliseconds
*/
public static KSOffsetSpec forTimestamp(long timestamp) {
return new KSOffsetSpec.KSTimestampSpec(timestamp);
}
private KSOffsetSpec() {
}
}

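The factory methods mirror org.apache.kafka.clients.admin.OffsetSpec one-to-one; a short usage sketch:

final class OffsetSpecExamples {
    static void examples() {
        KSOffsetSpec earliest = KSOffsetSpec.earliest();
        KSOffsetSpec latest = KSOffsetSpec.latest();
        // earliest offset whose timestamp is >= one hour ago
        KSOffsetSpec hourAgo = KSOffsetSpec.forTimestamp(System.currentTimeMillis() - 3_600_000L);
    }
}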
View File

@@ -0,0 +1,10 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.param.cluster;
import com.xiaojukeji.know.streaming.km.common.bean.entity.param.VersionItemParam;
/**
* @author wyc
* @date 2022/11/9
*/
public class ClusterParam extends VersionItemParam {
}

View File

@@ -1,6 +1,5 @@
 package com.xiaojukeji.know.streaming.km.common.bean.entity.param.cluster;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.param.VersionItemParam;
 import lombok.AllArgsConstructor;
 import lombok.Data;
 import lombok.NoArgsConstructor;
@@ -8,6 +7,6 @@ import lombok.NoArgsConstructor;
 @Data
 @NoArgsConstructor
 @AllArgsConstructor
-public class ClusterPhyParam extends VersionItemParam {
+public class ClusterPhyParam extends ClusterParam {
     protected Long clusterPhyId;
 }

View File

@@ -0,0 +1,16 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.param.cluster;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
/**
* @author wyb
* @date 2022/11/9
*/
@Data
@NoArgsConstructor
@AllArgsConstructor
public class ConnectClusterParam extends ClusterParam{
protected Long connectClusterId;
}

View File

@@ -0,0 +1,26 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.param.connect;
import com.xiaojukeji.know.streaming.km.common.bean.entity.param.cluster.ClusterParam;
import com.xiaojukeji.know.streaming.km.common.bean.entity.param.cluster.ClusterPhyParam;
import com.xiaojukeji.know.streaming.km.common.bean.entity.param.cluster.ConnectClusterParam;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
/**
* @author wyb
* @date 2022/11/8
*/
@Data
@NoArgsConstructor
@AllArgsConstructor
public class ConnectorParam extends ConnectClusterParam {
private String connectorName;
public ConnectorParam(Long connectClusterId, String connectorName) {
super(connectClusterId);
this.connectorName = connectorName;
}
}

View File

@@ -0,0 +1,21 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.param.metric.connect;

import com.xiaojukeji.know.streaming.km.common.bean.entity.param.metric.MetricParam;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

/**
 * @author wyb
 * @date 2022/11/1
 */
@Data
@NoArgsConstructor
@AllArgsConstructor
public class ConnectClusterMetricParam extends MetricParam {
    private Long connectClusterId;
    private String metric;
}

View File

@@ -0,0 +1,29 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.param.metric.connect;

import com.xiaojukeji.know.streaming.km.common.bean.entity.param.metric.MetricParam;
import com.xiaojukeji.know.streaming.km.common.enums.connect.ConnectorTypeEnum;
import lombok.Data;
import lombok.NoArgsConstructor;

/**
 * @author wyb
 * @date 2022/11/2
 */
@Data
@NoArgsConstructor
public class ConnectorMetricParam extends MetricParam {
    private Long connectClusterId;
    private String connectorName;
    private String metricName;
    private ConnectorTypeEnum connectorType;

    public ConnectorMetricParam(Long connectClusterId, String connectorName, String metricName, ConnectorTypeEnum connectorType) {
        this.connectClusterId = connectClusterId;
        this.connectorName = connectorName;
        this.metricName = metricName;
        this.connectorType = connectorType;
    }
}
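A hedged construction sketch for the connector-level metric param; the literal values and the SOURCE constant on ConnectorTypeEnum are assumptions, not confirmed by this diff:

    // All values and the enum constant below are assumptions for illustration.
    ConnectorMetricParam param = new ConnectorMetricParam(
            10L,                       // connectClusterId (assumed)
            "file-source",             // connectorName (assumed)
            "HealthState",             // metricName (assumed)
            ConnectorTypeEnum.SOURCE); // assumed enum constant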

View File

@@ -1,23 +1,39 @@
 package com.xiaojukeji.know.streaming.km.common.bean.entity.param.partition;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.param.topic.TopicParam;
-import lombok.Data;
+import com.xiaojukeji.know.streaming.km.common.bean.entity.offset.KSOffsetSpec;
+import com.xiaojukeji.know.streaming.km.common.bean.entity.param.cluster.ClusterPhyParam;
+import com.xiaojukeji.know.streaming.km.common.utils.Triple;
+import lombok.Getter;
 import lombok.NoArgsConstructor;
-import org.apache.kafka.clients.admin.OffsetSpec;
 import org.apache.kafka.common.TopicPartition;
-import java.util.Map;
+import java.util.*;
+import java.util.stream.Collectors;

-@Data
+@Getter
 @NoArgsConstructor
-public class PartitionOffsetParam extends TopicParam {
-    private Map<TopicPartition, OffsetSpec> topicPartitionOffsets;
-    private Long timestamp;
-
-    public PartitionOffsetParam(Long clusterPhyId, String topicName, Map<TopicPartition, OffsetSpec> topicPartitionOffsets, Long timestamp) {
-        super(clusterPhyId, topicName);
-        this.topicPartitionOffsets = topicPartitionOffsets;
-        this.timestamp = timestamp;
-    }
+public class PartitionOffsetParam extends ClusterPhyParam {
+    private List<Triple<String, KSOffsetSpec, List<TopicPartition>>> offsetSpecList;
+
+    public PartitionOffsetParam(Long clusterPhyId, String topicName, KSOffsetSpec ksOffsetSpec, List<TopicPartition> partitionList) {
+        super(clusterPhyId);
+        this.offsetSpecList = Collections.singletonList(new Triple<>(topicName, ksOffsetSpec, partitionList));
+    }
+
+    public PartitionOffsetParam(Long clusterPhyId, String topicName, List<KSOffsetSpec> specList, List<TopicPartition> partitionList) {
+        super(clusterPhyId);
+        this.offsetSpecList = new ArrayList<>();
+        specList.forEach(elem -> offsetSpecList.add(new Triple<>(topicName, elem, partitionList)));
+    }
+
+    public PartitionOffsetParam(Long clusterPhyId, KSOffsetSpec offsetSpec, List<TopicPartition> partitionList) {
+        super(clusterPhyId);
+        Map<String, List<TopicPartition>> tpMap = new HashMap<>();
+        partitionList.forEach(elem -> {
+            tpMap.putIfAbsent(elem.topic(), new ArrayList<>());
+            tpMap.get(elem.topic()).add(elem);
+        });
+        this.offsetSpecList = tpMap.entrySet().stream().map(elem -> new Triple<>(elem.getKey(), offsetSpec, elem.getValue())).collect(Collectors.toList());
+    }
 }
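The third constructor above takes a mixed partition list and groups it by topic, yielding one Triple per topic. A hedged sketch of the resulting shape; the cluster id and topic names are invented:

    import java.util.Arrays;
    import java.util.List;
    import org.apache.kafka.common.TopicPartition;

    List<TopicPartition> partitions = Arrays.asList(
            new TopicPartition("topicA", 0),
            new TopicPartition("topicA", 1),
            new TopicPartition("topicB", 0));

    PartitionOffsetParam param = new PartitionOffsetParam(1L, KSOffsetSpec.latest(), partitions);
    // getOffsetSpecList() now holds two Triples:
    //   ("topicA", latestSpec, [topicA-0, topicA-1]) and ("topicB", latestSpec, [topicB-0])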

View File

@@ -1,6 +1,5 @@
 package com.xiaojukeji.know.streaming.km.common.bean.entity.reassign;

-import com.xiaojukeji.know.streaming.km.common.utils.CommonUtils;
 import lombok.Data;
 import org.apache.kafka.common.TopicPartition;
@@ -20,10 +19,4 @@ public class ReassignResult {
         return state.isDone();
     }

-    public boolean checkPreferredReplicaElectionUnNeed(String reassignBrokerIds, String originalBrokerIds) {
-        Integer targetLeader = CommonUtils.string2IntList(reassignBrokerIds).get(0);
-        Integer originalLeader = CommonUtils.string2IntList(originalBrokerIds).get(0);
-        return originalLeader.equals(targetLeader);
-    }
-
 }

View File

@@ -100,6 +100,13 @@ public class Result<T> extends BaseResult {
         return result;
     }

+    public static <T> Result<T> buildFrom(Result ret) {
+        Result<T> result = new Result<>();
+        result.setCode(ret.getCode());
+        result.setMessage(ret.getMessage());
+        return result;
+    }
+
     public static <T> Result<T> buildFrom(ValidateKafkaAddressErrorEnum errorEnum, String msg) {
         Result<T> result = new Result<>();
         result.setCode(errorEnum.getCode());
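The new buildFrom(Result) overload re-types a failed lower-layer Result while carrying its code and message forward and leaving the payload empty. A hedged call-site sketch; the surrounding method name is invented:

    // Hypothetical call site: propagate a failure across layers under a new payload type.
    Result rawResult = someLowerLayerCall();            // assumed upstream call
    Result<String> typed = Result.buildFrom(rawResult); // code + message copied, data left null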

View File

@@ -54,6 +54,8 @@
      * 调用错误, [8000, 9000)
      */
     KAFKA_OPERATE_FAILED(8010, "Kafka操作失败"),
+    KAFKA_CONNECTOR_OPERATE_FAILED(8011, "KafkaConnect操作失败"),
+    KAFKA_CONNECTOR_READ_FAILED(8012, "KafkaConnect读失败"),
     MYSQL_OPERATE_FAILED(8020, "MySQL操作失败"),
     ZK_OPERATE_FAILED(8030, "ZK操作失败"),
     ZK_FOUR_LETTER_CMD_FORBIDDEN(8031, "ZK四字命令被禁止"),

Some files were not shown because too many files have changed in this diff.