Compare commits


71 Commits
v3.2 ... v3.0

Author SHA1 Message Date
EricZeng
508402d8ec Merge pull request #717 from didi/master
Merge the main branch
2022-10-21 15:09:52 +08:00
EricZeng
eb3e573b22 Merge branch 'v3.0' into master 2022-10-21 15:07:31 +08:00
zengqiao
5e7fbcf078 Add v3.0.1 changelog entries 2022-10-21 14:46:41 +08:00
zengqiao
3fb35d1fcc Add v3.0.1 upgrade notes 2022-10-21 14:46:41 +08:00
zengqiao
538d54cae0 Remove the docs files from the installation package 2022-10-21 14:46:41 +08:00
zengqiao
78b02f80ba [Bugfix] Fix the exception caused by key conflicts when converting the metric version info list to a map 2022-10-21 14:46:41 +08:00
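Background on this class of bug: `Collectors.toMap` throws an `IllegalStateException` as soon as two entries map to the same key, unless a merge function is supplied. A minimal illustrative sketch (the `VersionItem` record and its fields are hypothetical, not the project's actual types):
```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class VersionItemMapDemo {
    // Hypothetical stand-in for a metric version info entry.
    record VersionItem(String metricName, String versionRange) {}

    public static void main(String[] args) {
        // Two entries share the same key; a plain toMap(keyFn, valueFn) would throw here.
        List<VersionItem> items = List.of(
                new VersionItem("HealthScore", "0.10.x"),
                new VersionItem("HealthScore", "2.x"));

        // The third argument resolves key conflicts instead of throwing.
        Map<String, VersionItem> byName = items.stream()
                .collect(Collectors.toMap(VersionItem::metricName,
                        item -> item,
                        (first, second) -> second)); // keep the later entry on conflict

        System.out.println(byName.get("HealthScore").versionRange()); // prints: 2.x
    }
}
```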
zengqiao
f9ec890e1d [Optimize] Add JMX connection status to the cluster Broker list
1. When a page shows no data, a failed JMX connection is often part of the cause;
2. Showing whether the connection succeeded in the Broker list makes troubleshooting easier;
2022-10-21 14:46:41 +08:00
zengqiao
af1bb2ccbd [Optimize] Remove the Replica metric collection task
1. When a cluster has many replicas, metric collection performance degrades badly;
2. Replica metrics are mostly needed only for real-time queries, so the collection task is disabled for now and may be re-enabled later as the product requires;
2022-10-21 14:46:41 +08:00
zengqiao
714e9a56a3 [Optimize] Optimize ZK metric fetching to reduce duplicate collection (#709)
1. Avoid fetching the same metrics twice when different clusters share the same ZK address;
2. Avoid retrying a ZK address in the next cycle after fetching metrics from it has already failed;
2022-10-21 14:46:41 +08:00
_haoqi
88d0a60182 [ISSUE #677] Fix the null pointer exception thrown by some info collection after a restart 2022-10-21 14:46:41 +08:00
zengqiao
05c52cd672 [Feature] Display the cluster Group list by Group dimension (#580) 2022-10-21 14:46:41 +08:00
Richard
586b37caa0 fix issue:
* [issue #700] Adjust the prompt and replace the Arrays.asList() with the Collections.singletonList()
2022-10-21 14:46:41 +08:00
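For reference, a minimal sketch of the swap mentioned in this commit (the `"my-topic"` value is only a placeholder):
```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class SingleElementListDemo {
    public static void main(String[] args) {
        // Arrays.asList allocates a backing array plus a wrapper even for a single element.
        List<String> viaArrays = Arrays.asList("my-topic");

        // Collections.singletonList returns a lighter, immutable single-element list,
        // which is the idiomatic choice when the list never grows.
        List<String> viaSingleton = Collections.singletonList("my-topic");

        System.out.println(viaArrays.equals(viaSingleton)); // true: same contents
    }
}
```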
dianyang12138
d8aa3d64df fix: fix the broken ES template 2022-10-21 14:46:41 +08:00
night.liang
13d8fd55c8 fix ldap bug 2022-10-21 14:46:41 +08:00
zengqiao
4133981048 Add the Kafka-Group table 2022-10-21 14:46:41 +08:00
chenzy
2f0b18b005 Fix wrong time display by switching from the 12-hour to the 24-hour clock 2022-10-21 14:46:41 +08:00
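The fix comes down to the pattern letters in Java's time formatting: `hh` is the 12-hour clock and `HH` the 24-hour clock. A small sketch (not the project's actual formatting code):
```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class HourFormatDemo {
    public static void main(String[] args) {
        LocalDateTime afternoon = LocalDateTime.of(2022, 10, 21, 15, 9, 52);

        // "hh" renders 15:09 as 03:09, which is ambiguous without an AM/PM marker.
        System.out.println(afternoon.format(DateTimeFormatter.ofPattern("yyyy-MM-dd hh:mm:ss")));

        // "HH" is the 24-hour clock, matching the timestamps in this list.
        System.out.println(afternoon.format(DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss")));
    }
}
```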
Richard
44134ce0d6 fix issue:
* [issue #662] Fix deadlocks caused by adding data using MySQL's REPLACE method
2022-10-21 14:46:41 +08:00
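Background on the deadlock: MySQL's `REPLACE` is a delete-then-insert, and under concurrent writes to a table with unique keys the extra locks it takes can deadlock; an upsert touches only the conflicting row instead. A hedged JDBC sketch with a placeholder connection string and an illustrative table layout (not the project's actual fix):
```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class UpsertDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder credentials; requires a MySQL JDBC driver on the classpath.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/know_streaming", "user", "password")) {
            // Instead of "REPLACE INTO ...", update the conflicting row in place.
            // Table and column names here are illustrative only.
            String upsert = "INSERT INTO ks_km_group (cluster_phy_id, name, state) "
                    + "VALUES (?, ?, ?) "
                    + "ON DUPLICATE KEY UPDATE state = VALUES(state)";
            try (PreparedStatement ps = conn.prepareStatement(upsert)) {
                ps.setLong(1, 1L);
                ps.setString(2, "my-group");
                ps.setInt(3, 1);
                ps.executeUpdate();
            }
        }
    }
}
```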
_haoqi
5f21e5a728 Fix the numeric conversion exception when the zk Latency avg value is a decimal 2022-10-21 14:46:41 +08:00
zengqiao
d5079a1b75 Fix the wrong type of the role field in the ZK metadata table 2022-10-21 14:46:41 +08:00
shirenchuang
656dfc2285 update readme 2022-10-21 14:46:41 +08:00
shirenchuang
99be2d704f update readme 2022-10-21 14:46:41 +08:00
Richard
d071e31106 fix issue:
* [issue #666] Fix the type of role phase in ks_km_zookeeper table
2022-10-21 14:46:41 +08:00
shirenchuang
55b34d08dd update readme 2022-10-21 14:46:41 +08:00
赤月
7a29e58453 Update faq.md 2022-10-21 14:46:41 +08:00
shirenchuang
8892b5250e update readme: add who's using Know Streaming 2022-10-21 14:46:41 +08:00
zengqiao
75e53a9617 Fix the missing service status field in the cluster ZK list response 2022-10-21 14:46:41 +08:00
zengqiao
7294aba59f Add ZK metrics to the returned metric information 2022-10-21 14:46:41 +08:00
zengqiao
a8c779675a Remove unused imports 2022-10-21 14:46:41 +08:00
zengqiao
facae65f61 Optimize the health check task 2022-10-21 14:46:41 +08:00
zengqiao
0c6475b063 Add ES username/password settings to application.yml 2022-10-21 14:46:41 +08:00
zengqiao
92d6214f4f Report ZK metrics to Prometheus 2022-10-21 14:46:41 +08:00
zengqiao
6ad29b9565 Add a service liveness statistics method to ZookeeperService 2022-10-21 14:46:41 +08:00
zengqiao
f3b64ca463 Add a float-to-integer conversion method 2022-10-21 14:46:41 +08:00
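A null-safe helper of the kind this commit describes might look like the following; the name matches `ConvertUtil.float2Integer` seen elsewhere in this diff, but the exact signature and rounding behavior are assumptions:
```java
public class FloatConvertDemo {
    // Sketch of a null-safe float-to-integer conversion.
    public static Integer float2Integer(Float value) {
        return value == null ? null : value.intValue();
    }

    public static void main(String[] args) {
        System.out.println(float2Integer(3.7f)); // 3 (truncates toward zero)
        System.out.println(float2Integer(null)); // null rather than a NullPointerException
    }
}
```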
shirenchuang
9340e07662 update contributor document 2022-10-21 14:46:41 +08:00
zengqiao
50482c40d5 Fix some metrics missing when fetching TopN Broker metrics 2022-10-21 14:46:41 +08:00
zengqiao
12ebc32cec Add a Broker service liveness API 2022-10-21 14:46:41 +08:00
zengqiao
215602bb84 Update the contributor list 2022-10-21 14:46:41 +08:00
zengqiao
5355c5c1f3 Fix ZK metric queries failing because of a DSL error 2022-10-21 14:46:41 +08:00
shirenchuang
e13d77c81d Contributor documentation 2022-10-21 14:46:41 +08:00
shirenchuang
103db39460 Contributor documentation 2022-10-21 14:46:41 +08:00
shirenchuang
750da7c9d7 Contributor documentation 2022-10-21 14:46:41 +08:00
shirenchuang
0fea002142 Contributor documentation 2022-10-21 14:46:41 +08:00
shirenchuang
7163c74cba Contributor documentation 2022-10-21 14:46:41 +08:00
石臻臻的杂货铺
2fb3aa1c14 Update CONTRIBUTING.md 2022-10-21 14:46:41 +08:00
石臻臻的杂货铺
dc8604ad81 Update CONTRIBUTING.md 2022-10-21 14:46:41 +08:00
石臻臻的杂货铺
9c67afd170 Update CONTRIBUTING.md 2022-10-21 14:46:41 +08:00
shirenchuang
bd48bc6a3d readme 2022-10-21 14:46:41 +08:00
shirenchuang
b75e630bac Issue template 2022-10-21 14:46:41 +08:00
shirenchuang
ebd4e4735d PR template 2022-10-21 14:46:41 +08:00
shirenchuang
b3ad6a71ca Contributor guidelines 2022-10-21 14:46:41 +08:00
shirenchuang
91e2189864 issue template 2022-10-21 14:46:41 +08:00
shirenchuang
ddd5d1b892 issue template 2022-10-21 14:46:41 +08:00
shirenchuang
8aa877071c issue template 2022-10-21 14:46:41 +08:00
shirenchuang
efa253fac8 issue template 2022-10-21 14:46:41 +08:00
shirenchuang
3744c0e97d issue template 2022-10-21 14:46:41 +08:00
shirenchuang
d510640e43 issue template 2022-10-21 14:46:41 +08:00
EricZeng
d7986ad8dd Revert to the original code
2022-10-21 14:46:41 +08:00
zengqiao
fbc4d4a540 Update the doc for connecting ZK clusters that use Kerberos authentication 2022-10-21 14:46:41 +08:00
zengqiao
bc32c71048 ZK: add a ZK info query API 2022-10-21 14:46:41 +08:00
zengqiao
c4910964db ZK: collect metrics into ES 2022-10-21 14:46:41 +08:00
zengqiao
1bc725bd62 ZK: sync ZK metadata to the DB 2022-10-21 14:46:41 +08:00
zengqiao
34b7c6746b ZK: add default values for the configuration 2022-10-21 14:46:41 +08:00
zengqiao
20d5b27bb6 ZK: add fetching of four-letter-word command info 2022-10-21 14:46:41 +08:00
zengqiao
a4abb4069d Remove dead health-score calculation code 2022-10-21 14:46:41 +08:00
zengqiao
c73cfce780 bump version to 3.1.0 2022-10-21 14:46:41 +08:00
luhe
dfb9b6136b Support ZK Kerberos authentication and add the configuration doc 2022-10-21 14:46:41 +08:00
luhe
341bd58d51 Support ZK Kerberos authentication and add the configuration doc 2022-10-21 14:46:41 +08:00
luhe
4386181304 Support ZK Kerberos authentication and add the configuration doc 2022-10-21 14:46:41 +08:00
luhe
fb21d8135c Support ZK Kerberos authentication 2022-10-21 14:46:41 +08:00
luhe
b4580277a9 Support ZK Kerberos authentication 2022-10-21 14:46:41 +08:00
EricZeng
045f65204b Merge pull request #633 from didi/master
Merge the main branch
2022-09-29 13:09:19 +08:00
547 changed files with 5660 additions and 45045 deletions

.gitignore vendored
View File

@@ -109,8 +109,4 @@ out/*
 dist/
 dist/*
 km-rest/src/main/resources/templates/
 *dependency-reduced-pom*
-#filter flattened xml
-*/.flattened-pom.xml
-.flattened-pom.xml
-*/*/.flattened-pom.xml

View File

@@ -143,7 +143,7 @@ PS: Please describe your question fully in one message and include your environment details
 **`2. WeChat group`**
-To join the WeChat group: add the WeChat ID `mike_zhangliang`, `PenceXie`, or `szzdzhp001` with the note "KnowStreaming".
+To join the WeChat group: add the WeChat ID `mike_zhangliang` or `PenceXie` with the note "KnowStreaming".
 <br/>
 Before joining, please give the project a star; even a small star motivates the KnowStreaming authors to keep building the community.

View File

@@ -1,97 +1,4 @@
## v3.2.0
**Bug fixes**
- Fix the deadlock when health-inspection results are written to the DB;
- Fix the wrong logger in the KafkaJMXClient class;
- Backend: fix the Topic retention policy allowing both options to be selected on version 0.10.1.0, where only one of the two should be selectable;
- Fix the error reported when a cluster is connected without filling in the cluster configuration;
- Upgrade spring-context to 5.3.19 to fix a security vulnerability;
- Fix wrong version info in the multi-version-compatible configs when modifying Broker & Topic configuration;
- Change the health score in the Topic list to a health state;
- Fix the Broker LogSize metric being unqueryable because of a wrong storage name;
- Fix some Group metrics missing from Prometheus;
- Fix the wrong cluster count caused by the missing health-state metric;
- Fix the exception when background tasks record operation logs without operating-user info;
- Fix the wrong DSL in Replica metric queries;
- Disable errorLogger to fix duplicated error-log output;
- Fix the failure to update user info in system administration;
- Fix migration tasks stuck in the running state when the original AR info is lost;
- Fix failures when querying real-time data for the cluster Topic list;
- Fix the blank page in the cluster Topic list;
- Fix the array access out of bounds caused by abnormal AR data during replica changes;
**Product improvements**
- Run health inspection concurrently, multi-threaded by resource dimension;
- Unify the log output format and improve some log messages;
- Improve the easily misread WARN log emitted while parsing ZK four-letter-word command results;
- Improve the search copy for the directory tree in the Zookeeper detail page;
- Improve thread-pool names so third-party systems can analyze related problems more easily;
- Remove the ESClient concurrency control to reduce the number of ESClients created and improve utilization;
- Improve the Topic Messages drawer copy;
- Improve the error log emitted when ZK health inspection fails;
- Raise the timeout for fetching offset info to reduce request timeouts under high concurrency;
- Improve the Topic & Partition metadata update strategy to reduce DB connection usage;
- Fix issues reported by Sonar code scanning;
- Improve partition offset metric collection;
- Improve the frontend chart component logic;
- Improve the product theme colors;
- Add a hover tooltip to the refresh button of the Consumer list;
- Improve the test dialog shown when configuring a Topic's message size;
- Improve the TopN query flow on the Overview page;
**New features**
- Add a troubleshooting doc for pages that show no data;
- Add ES index deletion;
- Support deploying the API service and the Job service separately;
**Kafka Connect beta (newly released in v3.2.0)**
- Management of Connect clusters;
- Connector create/read/update/delete;
- Metric dashboards for Connect clusters & Connectors;
---
## v3.1.0
**Bug fixes**
- Fix the reset Group Offset prompt missing the note that groups in Dead state can also be reset;
- Fix the "Topic does not exist" prompt when viewing Topic Messages right after creating a Topic;
- Fix preferred-replica election not being triggered properly during replica changes;
- Fix packaging failing when the git directory does not exist;
- Fix the JMX PORT showing -1 for Kafka clusters in KRaft mode;
**UX improvements**
- Change the Cluster, Broker, Topic, and Group health scores to health states;
- Remove the weights from the health-inspection configuration;
- Improve the error-prompt pages;
- Use the taobao mirror by default for frontend build dependencies;
- Redesign the navigation bar icons;
**New**
- Add the product version to the avatar dropdown;
- Add the cluster health-state distribution to the multi-cluster list page;
**Kafka ZK support (officially released in v3.1.0)**
- Add the ZK cluster metric dashboard;
- Add the ZK cluster service-state overview;
- Add the ZK cluster node list;
- Add viewing of the Kafka data stored in ZK;
- Add ZK health inspection and health-state computation;
---
## v3.0.1
**Bug fixes**

View File

@@ -1,286 +0,0 @@
## 1. Cluster access errors
### 1.1. Symptom
As shown in the figure below; when the cluster field is non-empty, the cause is most likely a misconfigured address.
<img src=http://img-ys011.didistatic.com/static/dc2img/do1_BRiXBvqYFK2dxSF1aqgZ width="80%">
### 1.2. Solution
When connecting a cluster, resolve the reported error accordingly. For example:
<img src=http://img-ys011.didistatic.com/static/dc2img/do1_Yn4LhV8aeSEKX1zrrkUi width="50%">
### 1.3. Normal behavior
When connecting a cluster, the page information appears automatically and no error is shown.
## 2. JMX connection failure (requires version 3.0.1 or later)
### 2.1. Symptom
A red exclamation mark in the JMX Port column of the Broker list means that Broker's JMX connection is broken.
<img src=http://img-ys011.didistatic.com/static/dc2img/do1_MLlLCfAktne4X6MBtBUd width="90%">
#### 2.1.1. Cause 1: JMX not enabled
##### 2.1.1.1. Symptom
A JMX Port value of -1 in the Broker list means JMX is not enabled on that Broker.
<img src=http://img-ys011.didistatic.com/static/dc2img/do1_E1PD8tPsMeR2zYLFBFAu width="90%">
##### 2.1.1.2. Solution
Enable JMX as follows:
1. Edit the `kafka-server-start.sh` file in Kafka's bin directory:
```
# Add the JMX port configuration below this block
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
export JMX_PORT=9999 # add this line; the value does not have to be 9999
fi
```
2. Edit the `kafka-run-class.sh` file in Kafka's bin directory:
```
# JMX settings
if [ -z "$KAFKA_JMX_OPTS" ]; then
KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=${LOCAL_IP}" # replace LOCAL_IP with this machine's IP
fi
# JMX port to use
if [ $JMX_PORT ]; then
KAFKA_JMX_OPTS="$KAFKA_JMX_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT -Dcom.sun.management.jmxremote.rmi.port=$JMX_PORT"
fi
```
3. Restart the Kafka broker. Optionally, verify connectivity with the probe sketched below.
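An optional way to confirm the JMX port is reachable is a short probe using only JDK classes; the host and port below are placeholders for the broker's IP and the `JMX_PORT` configured above (a minimal sketch, not part of KnowStreaming):
```java
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxPortCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder address: replace with the broker's IP and its JMX_PORT.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://192.168.0.1:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            // A successful connection proves the port is reachable; count MBeans as a sanity check.
            Integer count = connector.getMBeanServerConnection().getMBeanCount();
            System.out.println("JMX reachable, MBean count = " + count);
        }
    }
}
```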
#### 2.1.2. Cause 2: JMX misconfigured
##### 2.1.2.1. Symptom
Error logs:
```
# Error 1: the reported host is the real IP, which usually means the JMX configuration itself is wrong.
2021-01-27 10:06:20.730 ERROR 50901 --- [ics-Thread-1-62] c.x.k.m.c.utils.jmx.JmxConnectorWrap : JMX connect exception, host:192.168.0.1 port:9999. java.rmi.ConnectException: Connection refused to host: 192.168.0.1; nested exception is:
# Error 2: the reported host is 127.0.0.1, which usually means the machine's hostname configuration is wrong.
2021-01-27 10:06:20.730 ERROR 50901 --- [ics-Thread-1-62] c.x.k.m.c.utils.jmx.JmxConnectorWrap : JMX connect exception, host:127.0.0.1 port:9999. java.rmi.ConnectException: Connection refused to host: 127.0.0.1;; nested exception is:
```
##### 2.1.2.2. Solution
Enable JMX as follows:
1. Edit the `kafka-server-start.sh` file in Kafka's bin directory:
```
# Add the JMX port configuration below this block
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
export JMX_PORT=9999 # add this line; the value does not have to be 9999
fi
```
2. Edit the `kafka-run-class.sh` file in Kafka's bin directory:
```
# JMX settings
if [ -z "$KAFKA_JMX_OPTS" ]; then
KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=${LOCAL_IP}" # replace LOCAL_IP with this machine's IP
fi
# JMX port to use
if [ $JMX_PORT ]; then
KAFKA_JMX_OPTS="$KAFKA_JMX_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT -Dcom.sun.management.jmxremote.rmi.port=$JMX_PORT"
fi
```
3. Restart the Kafka broker.
#### 2.1.3. Cause 3: JMX with SSL enabled
##### 2.1.3.1. Solution
<img src=http://img-ys011.didistatic.com/static/dc2img/do1_kNyCi8H9wtHSRkWurB6S width="50%">
#### 2.1.4. Cause 4: connected to the wrong IP
##### 2.1.4.1. Symptom
The Broker is configured with both internal and external networks, and JMX may have been bound to either the internal or the external IP; `KnowStreaming` must connect to the IP on the matching network to reach it.
For example, with the Broker registration stored in ZK as shown below, we expect to connect to the address marked `INTERNAL` in `endpoints`, but `KnowStreaming` connects to the `EXTERNAL` one instead.
```json
{
"listener_security_protocol_map": {
"EXTERNAL": "SASL_PLAINTEXT",
"INTERNAL": "SASL_PLAINTEXT"
},
"endpoints": [
"EXTERNAL://192.168.0.1:7092",
"INTERNAL://192.168.0.2:7093"
],
"jmx_port": 8099,
"host": "192.168.0.1",
"timestamp": "1627289710439",
"port": -1,
"version": 4
}
```
##### 2.1.4.2. Solution
You can manually add a `useWhichEndpoint` field to the `jmx_properties` column of the `ks_km_physical_cluster` table, which makes `KnowStreaming` connect to a specific JMX IP and port.
`jmx_properties` format:
```json
{
"maxConn": 100, // KM对单台Broker的最大JMX连接数
"username": "xxxxx", //用户名,可以不填写
"password": "xxxx", // 密码,可以不填写
"openSSL": true, //开启SSL, true表示开启ssl, false表示关闭
"useWhichEndpoint": "EXTERNAL" //指定要连接的网络名称填写EXTERNAL就是连接endpoints里面的EXTERNAL地址
}
```
SQL example:
```sql
UPDATE ks_km_physical_cluster SET jmx_properties='{ "maxConn": 10, "username": "xxxxx", "password": "xxxx", "openSSL": false , "useWhichEndpoint": "xxx"}' where id={xxx};
```
### 2.2. Normal behavior
After the change, if every cell in the JMX PORT column is green, JMX is working correctly.
<img src=http://img-ys011.didistatic.com/static/dc2img/do1_ymtDTCiDlzfrmSCez2lx width="90%">
## 3. Elasticsearch problems
Note: on macOS, running curl commands may trigger a zsh error. The following steps work around it.
```
# 1. Open ~/.zshrc:       vim ~/.zshrc
# 2. Add the line:        setopt no_nomatch
# 3. Reload the config:   source ~/.zshrc
```
### 3.1. Cause 1: missing indices
#### 3.1.1. Symptom
Error message:
```
com.didiglobal.logi.elasticsearch.client.model.exception.ESIndexNotFoundException: method [GET], host[http://127.0.0.1:9200], URI [/ks_kafka_broker_metric_2022-10-21,ks_kafka_broker_metric_2022-10-22/_search], status line [HTTP/1.1 404 Not Found]
```
Run `curl http://{ES IP}:{ES port}/_cat/indices/ks_kafka*` to list the KS indices; none exist.
#### 3.1.2. Solution
Run the [/km-dist/init/template/template.sh](https://github.com/didi/KnowStreaming/blob/master/km-dist/init/template/template.sh) script to create the indices.
### 3.2. Cause 2: broken index template
#### 3.2.1. Symptom
The multi-cluster list shows data, but the charts on the cluster detail page do not. Querying the KS index template list shows the templates are missing.
```
curl {ES IP}:{ES port}/_cat/templates/ks_kafka*?v&h=name
```
A healthy set of KS templates looks like the figure below.
<img src=http://img-ys011.didistatic.com/static/dc2img/do1_l79bPYSci9wr6KFwZDA6 width="90%">
#### 3.2.2. Solution
Delete the KS index templates and indices:
```
curl -XDELETE {ES IP}:{ES port}/ks_kafka*
curl -XDELETE {ES IP}:{ES port}/_template/ks_kafka*
```
Run the [/km-dist/init/template/template.sh](https://github.com/didi/KnowStreaming/blob/master/km-dist/init/template/template.sh) script to initialize the indices and templates.
### 3.3. Cause 3: cluster shard limit reached
#### 3.3.1. Symptom
Error message:
```
com.didiglobal.logi.elasticsearch.client.model.exception.ESIndexNotFoundException: method [GET], host[http://127.0.0.1:9200], URI [/ks_kafka_broker_metric_2022-10-21,ks_kafka_broker_metric_2022-10-22/_search], status line [HTTP/1.1 404 Not Found]
```
Manually creating an index also fails.
```
# command to create the ks_kafka_cluster_metric_test index
curl -s -XPUT http://{ES IP}:{ES port}/ks_kafka_cluster_metric_test
```
#### 3.3.2. Solution
ES limits the number of shards to 1000 per node by default; once the limit is reached, index creation fails.
+ Raise the shard limit by running:
```
curl -XPUT -H"content-type:application/json" http://{ES的IP地址}:{ES的端口号}/_cluster/settings -d '
{
"persistent": {
"cluster": {
"max_shards_per_node":{索引上限默认为1000}
}
}
}'
```
Run the [/km-dist/init/template/template.sh](https://github.com/didi/KnowStreaming/blob/master/km-dist/init/template/template.sh) script to recreate the missing indices.

View File

@@ -4,134 +4,11 @@
- To upgrade to a specific version, you must apply every change from your current version up to the target version before the system can work properly.
- If a version in between has no upgrade notes, you can move from the previous version to it by simply replacing the installation package.
-### Upgrade to the `master` version
+### 6.2.0. Upgrade to the `master` version
None yet
-### Upgrade to version `3.2.0`
+### 6.2.1. Upgrade to version `v3.0.1`
**Configuration changes**
```yaml
# Add the following configuration
spring:
  logi-job: # database config for the logi-job module that know-streaming depends on; keeping it identical to the know-streaming database config is fine
    enable: true # true enables job tasks, false disables them. KS can be deployed as two services, one serving frontend requests and one running job tasks; this field controls which role an instance plays
# thread pool sizes
thread-pool:
  es:
    search: # ES query thread pool
      thread-num: 20 # pool size
      queue-size: 10000 # queue size
# client pool sizes
client-pool:
  kafka-admin:
    client-cnt: 1 # number of KafkaAdminClients created per Kafka cluster
# ES client configuration
es:
  index:
    expire: 15 # index expiry in days; 15 means indices older than 15 days are deleted by KS
```
**SQL changes**
```sql
DROP TABLE IF EXISTS `ks_kc_connect_cluster`;
CREATE TABLE `ks_kc_connect_cluster` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'Connect cluster ID',
`kafka_cluster_phy_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'Kafka cluster ID',
`name` varchar(128) NOT NULL DEFAULT '' COMMENT 'cluster name',
`group_name` varchar(128) NOT NULL DEFAULT '' COMMENT 'cluster group name',
`cluster_url` varchar(1024) NOT NULL DEFAULT '' COMMENT 'cluster address',
`member_leader_url` varchar(1024) NOT NULL DEFAULT '' COMMENT 'URL address',
`version` varchar(64) NOT NULL DEFAULT '' COMMENT 'connect version',
`jmx_properties` text COMMENT 'JMX configuration',
`state` tinyint(4) NOT NULL DEFAULT '1' COMMENT 'state of the consumer group used by the cluster, which also represents the cluster state: -1 Unknown, 0 ReBalance, 1 Active, 2 Dead, 3 Empty',
`create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'connect time',
`update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'update time',
PRIMARY KEY (`id`),
UNIQUE KEY `uniq_id_group_name` (`id`,`group_name`),
UNIQUE KEY `uniq_name_kafka_cluster` (`name`,`kafka_cluster_phy_id`),
KEY `idx_kafka_cluster_phy_id` (`kafka_cluster_phy_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='Connect cluster info table';
DROP TABLE IF EXISTS `ks_kc_connector`;
CREATE TABLE `ks_kc_connector` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
`kafka_cluster_phy_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'Kafka cluster ID',
`connect_cluster_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'Connect cluster ID',
`connector_name` varchar(512) NOT NULL DEFAULT '' COMMENT 'Connector name',
`connector_class_name` varchar(512) NOT NULL DEFAULT '' COMMENT 'Connector class',
`connector_type` varchar(32) NOT NULL DEFAULT '' COMMENT 'Connector type',
`state` varchar(45) NOT NULL DEFAULT '' COMMENT 'state',
`topics` text COMMENT 'Topics accessed',
`task_count` int(11) NOT NULL DEFAULT '0' COMMENT 'task count',
`create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'creation time',
`update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'update time',
PRIMARY KEY (`id`),
UNIQUE KEY `uniq_connect_cluster_id_connector_name` (`connect_cluster_id`,`connector_name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='Connector info table';
DROP TABLE IF EXISTS `ks_kc_worker`;
CREATE TABLE `ks_kc_worker` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
`kafka_cluster_phy_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'Kafka cluster ID',
`connect_cluster_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'Connect cluster ID',
`member_id` varchar(512) NOT NULL DEFAULT '' COMMENT 'member ID',
`host` varchar(128) NOT NULL DEFAULT '' COMMENT 'hostname',
`jmx_port` int(16) NOT NULL DEFAULT '-1' COMMENT 'JMX port',
`url` varchar(1024) NOT NULL DEFAULT '' COMMENT 'URL info',
`leader_url` varchar(1024) NOT NULL DEFAULT '' COMMENT 'leader URL info',
`leader` int(16) NOT NULL DEFAULT '0' COMMENT 'state: 1 = leader, 0 = not leader',
`worker_id` varchar(128) NOT NULL COMMENT 'worker address',
`create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'creation time',
`update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'update time',
PRIMARY KEY (`id`),
UNIQUE KEY `uniq_cluster_id_member_id` (`connect_cluster_id`,`member_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='worker info table';
DROP TABLE IF EXISTS `ks_kc_worker_connector`;
CREATE TABLE `ks_kc_worker_connector` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
`kafka_cluster_phy_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'Kafka cluster ID',
`connect_cluster_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'Connect cluster ID',
`connector_name` varchar(512) NOT NULL DEFAULT '' COMMENT 'Connector name',
`worker_member_id` varchar(256) NOT NULL DEFAULT '',
`task_id` int(16) NOT NULL DEFAULT '-1' COMMENT 'task ID',
`state` varchar(128) DEFAULT NULL COMMENT 'task state',
`worker_id` varchar(128) DEFAULT NULL COMMENT 'worker info',
`create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'creation time',
`update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'update time',
PRIMARY KEY (`id`),
UNIQUE KEY `uniq_relation` (`connect_cluster_id`,`connector_name`,`task_id`,`worker_member_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='Worker-Connector relation table';
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_CONNECTOR_FAILED_TASK_COUNT', '{\"value\" : 1}', 'number of connector tasks in failed state', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_CONNECTOR_UNASSIGNED_TASK_COUNT', '{\"value\" : 1}', 'number of unassigned connector tasks', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_CONNECT_CLUSTER_TASK_STARTUP_FAILURE_PERCENTAGE', '{\"value\" : 0.05}', 'Connect cluster task startup failure rate', 'admin');
```
---
### Upgrade to version `v3.1.0`
```sql
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_ZK_BRAIN_SPLIT', '{ \"value\": 1} ', 'ZK split brain', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_ZK_OUTSTANDING_REQUESTS', '{ \"amount\": 100, \"ratio\":0.8} ', 'ZK outstanding request backlog', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_ZK_WATCH_COUNT', '{ \"amount\": 100000, \"ratio\": 0.8 } ', 'ZK watch count', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_ZK_ALIVE_CONNECTIONS', '{ \"amount\": 10000, \"ratio\": 0.8 } ', 'ZK alive connections', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_ZK_APPROXIMATE_DATA_SIZE', '{ \"amount\": 524288000, \"ratio\": 0.8 } ', 'ZK data size (bytes)', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_ZK_SENT_RATE', '{ \"amount\": 500000, \"ratio\": 0.8 } ', 'ZK packets sent', 'admin');
```
-### Upgrade to version `v3.0.1`
**ES index templates**
```bash
@@ -265,8 +142,10 @@ CREATE TABLE `ks_km_group` (
```
---
-### Upgrade to version `v3.0.0`
+### 6.2.2. Upgrade to version `v3.0.0`
**SQL changes**
@@ -278,7 +157,7 @@ ADD COLUMN `zk_properties` TEXT NULL COMMENT 'ZK config' AFTER `jmx_properties`;
---
-### Upgrade to version `v3.0.0-beta.2`
+### 6.2.3. Upgrade to version `v3.0.0-beta.2`
**Configuration changes**
@@ -349,7 +228,7 @@ ALTER TABLE `logi_security_oplog`
---
-### Upgrade to version `v3.0.0-beta.1`
+### 6.2.4. Upgrade to version `v3.0.0-beta.1`
**SQL changes**
@@ -368,7 +247,7 @@ ALTER COLUMN `operation_methods` set default '';
---
-### Upgrade from version `2.x` to `v3.0.0-beta.0`
+### 6.2.5. Upgrade from version `2.x` to `v3.0.0-beta.0`
**Upgrade steps:**

View File

@@ -1,15 +0,0 @@
package com.xiaojukeji.know.streaming.km.biz.cluster;
import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterConnectorsOverviewDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.connect.ConnectStateVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.connector.ClusterConnectorOverviewVO;
/**
* Kafka cluster Connector overview
*/
public interface ClusterConnectorsManager {
PaginationResult<ClusterConnectorOverviewVO> getClusterConnectorsOverview(Long clusterPhyId, ClusterConnectorsOverviewDTO dto);
ConnectStateVO getClusterConnectorsState(Long clusterPhyId);
}

View File

@@ -1,6 +1,5 @@
 package com.xiaojukeji.know.streaming.km.biz.cluster;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhysHealthState;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhysState;
 import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.MultiClusterDashboardDTO;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
@@ -16,8 +15,6 @@ public interface MultiClusterPhyManager {
 */
 ClusterPhysState getClusterPhysState();
-ClusterPhysHealthState getClusterPhysHealthState();
 /**
 * Query the multi-cluster dashboard
 * @param dto pagination info

View File

@@ -6,8 +6,6 @@ import com.xiaojukeji.know.streaming.km.biz.cluster.ClusterBrokersManager;
 import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterBrokersOverviewDTO;
 import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationSortDTO;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.broker.Broker;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.config.JmxConfig;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.kafkacontroller.KafkaController;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.BrokerMetrics;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
@@ -18,8 +16,6 @@ import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.res.ClusterBroker
 import com.xiaojukeji.know.streaming.km.common.bean.vo.kafkacontroller.KafkaControllerVO;
 import com.xiaojukeji.know.streaming.km.common.constant.KafkaConstant;
 import com.xiaojukeji.know.streaming.km.common.enums.SortTypeEnum;
-import com.xiaojukeji.know.streaming.km.common.enums.cluster.ClusterRunStateEnum;
-import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.PaginationMetricsUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.PaginationUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
@@ -28,7 +24,6 @@ import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerMetricService;
 import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerService;
 import com.xiaojukeji.know.streaming.km.core.service.kafkacontroller.KafkaControllerService;
 import com.xiaojukeji.know.streaming.km.core.service.topic.TopicService;
-import com.xiaojukeji.know.streaming.km.persistence.cache.LoadedClusterPhyCache;
 import com.xiaojukeji.know.streaming.km.persistence.kafka.KafkaJMXClient;
 import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.stereotype.Service;
@@ -88,13 +83,9 @@ public class ClusterBrokersManagerImpl implements ClusterBrokersManager {
 Map<Integer, Boolean> jmxConnectedMap = new HashMap<>();
 brokerList.forEach(elem -> jmxConnectedMap.put(elem.getBrokerId(), kafkaJMXClient.getClientWithCheck(clusterPhyId, elem.getBrokerId()) != null));
-ClusterPhy clusterPhy = LoadedClusterPhyCache.getByPhyId(clusterPhyId);
 // format conversion
 return PaginationResult.buildSuc(
 this.convert2ClusterBrokersOverviewVOList(
-clusterPhy,
 paginationResult.getData().getBizData(),
 brokerList,
 metricsResult.getData(),
@@ -178,8 +169,7 @@ public class ClusterBrokersManagerImpl implements ClusterBrokersManager {
 );
 }
-private List<ClusterBrokersOverviewVO> convert2ClusterBrokersOverviewVOList(ClusterPhy clusterPhy,
-List<Integer> pagedBrokerIdList,
+private List<ClusterBrokersOverviewVO> convert2ClusterBrokersOverviewVOList(List<Integer> pagedBrokerIdList,
 List<Broker> brokerList,
 List<BrokerMetrics> metricsList,
 Topic groupTopic,
@@ -195,15 +185,9 @@ public class ClusterBrokersManagerImpl implements ClusterBrokersManager {
 Broker broker = brokerMap.get(brokerId);
 BrokerMetrics brokerMetrics = metricsMap.get(brokerId);
 Boolean jmxConnected = jmxConnectedMap.get(brokerId);
 voList.add(this.convert2ClusterBrokersOverviewVO(brokerId, broker, brokerMetrics, groupTopic, transactionTopic, kafkaController, jmxConnected));
 }
-// supplement the JMX port info for non-ZK mode
-if (!clusterPhy.getRunState().equals(ClusterRunStateEnum.RUN_ZK.getRunState())) {
-JmxConfig jmxConfig = ConvertUtil.str2ObjByJson(clusterPhy.getJmxProperties(), JmxConfig.class);
-voList.forEach(elem -> elem.setJmxPort(jmxConfig.getJmxPort() == null ? -1 : jmxConfig.getJmxPort()));
-}
 return voList;
 }

View File

@@ -1,152 +0,0 @@
package com.xiaojukeji.know.streaming.km.biz.cluster.impl;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.biz.cluster.ClusterConnectorsManager;
import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterConnectorsOverviewDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.ClusterConnectorDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.MetricDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.connect.MetricsConnectorsDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.ConnectCluster;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.ConnectWorker;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.WorkerConnector;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.connect.ConnectorMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.po.connect.ConnectorPO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.connect.ConnectStateVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.connector.ClusterConnectorOverviewVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.metrics.line.MetricMultiLinesVO;
import com.xiaojukeji.know.streaming.km.common.converter.ConnectConverter;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.PaginationMetricsUtil;
import com.xiaojukeji.know.streaming.km.common.utils.PaginationUtil;
import com.xiaojukeji.know.streaming.km.core.service.connect.cluster.ConnectClusterService;
import com.xiaojukeji.know.streaming.km.core.service.connect.connector.ConnectorMetricService;
import com.xiaojukeji.know.streaming.km.core.service.connect.connector.ConnectorService;
import com.xiaojukeji.know.streaming.km.core.service.connect.worker.WorkerConnectorService;
import com.xiaojukeji.know.streaming.km.core.service.connect.worker.WorkerService;
import org.apache.kafka.connect.runtime.AbstractStatus;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
@Service
public class ClusterConnectorsManagerImpl implements ClusterConnectorsManager {
private static final ILog LOGGER = LogFactory.getLog(ClusterConnectorsManagerImpl.class);
@Autowired
private ConnectorService connectorService;
@Autowired
private ConnectClusterService connectClusterService;
@Autowired
private ConnectorMetricService connectorMetricService;
@Autowired
private WorkerService workerService;
@Autowired
private WorkerConnectorService workerConnectorService;
@Override
public PaginationResult<ClusterConnectorOverviewVO> getClusterConnectorsOverview(Long clusterPhyId, ClusterConnectorsOverviewDTO dto) {
List<ConnectCluster> clusterList = connectClusterService.listByKafkaCluster(clusterPhyId);
List<ConnectorPO> poList = connectorService.listByKafkaClusterIdFromDB(clusterPhyId);
// query the latest metrics
Result<List<ConnectorMetrics>> latestMetricsResult = connectorMetricService.getLatestMetricsFromES(
clusterPhyId,
poList.stream().map(elem -> new ClusterConnectorDTO(elem.getConnectClusterId(), elem.getConnectorName())).collect(Collectors.toList()),
dto.getLatestMetricNames()
);
if (latestMetricsResult.failed()) {
LOGGER.error("method=getClusterConnectorsOverview||clusterPhyId={}||result={}||errMsg=get latest metric failed", clusterPhyId, latestMetricsResult);
return PaginationResult.buildFailure(latestMetricsResult, dto);
}
// convert to VOs
List<ClusterConnectorOverviewVO> voList = ConnectConverter.convert2ClusterConnectorOverviewVOList(clusterList, poList,latestMetricsResult.getData());
// apply pagination
PaginationResult<ClusterConnectorOverviewVO> voPaginationResult = this.pagingConnectorInLocal(voList, dto);
if (voPaginationResult.failed()) {
LOGGER.error("method=getClusterConnectorsOverview||clusterPhyId={}||result={}||errMsg=pagination in local failed", clusterPhyId, voPaginationResult);
return PaginationResult.buildFailure(voPaginationResult, dto);
}
// query historical metrics
Result<List<MetricMultiLinesVO>> lineMetricsResult = connectorMetricService.listConnectClusterMetricsFromES(
clusterPhyId,
this.buildMetricsConnectorsDTO(
voPaginationResult.getData().getBizData().stream().map(elem -> new ClusterConnectorDTO(elem.getConnectClusterId(), elem.getConnectorName())).collect(Collectors.toList()),
dto.getMetricLines()
)
);
return PaginationResult.buildSuc(
ConnectConverter.supplyData2ClusterConnectorOverviewVOList(
voPaginationResult.getData().getBizData(),
lineMetricsResult.getData()
),
voPaginationResult
);
}
@Override
public ConnectStateVO getClusterConnectorsState(Long clusterPhyId) {
// get the list of Connect cluster IDs
List<ConnectCluster> connectClusterList = connectClusterService.listByKafkaCluster(clusterPhyId);
List<ConnectorPO> connectorPOList = connectorService.listByKafkaClusterIdFromDB(clusterPhyId);
List<WorkerConnector> workerConnectorList = workerConnectorService.listByKafkaClusterIdFromDB(clusterPhyId);
List<ConnectWorker> connectWorkerList = workerService.listByKafkaClusterIdFromDB(clusterPhyId);
return convert2ConnectStateVO(connectClusterList, connectorPOList, workerConnectorList, connectWorkerList);
}
/**************************************************** private method ****************************************************/
private MetricsConnectorsDTO buildMetricsConnectorsDTO(List<ClusterConnectorDTO> connectorDTOList, MetricDTO metricDTO) {
MetricsConnectorsDTO dto = ConvertUtil.obj2Obj(metricDTO, MetricsConnectorsDTO.class);
dto.setConnectorNameList(connectorDTOList == null? new ArrayList<>(): connectorDTOList);
return dto;
}
private ConnectStateVO convert2ConnectStateVO(List<ConnectCluster> connectClusterList, List<ConnectorPO> connectorPOList, List<WorkerConnector> workerConnectorList, List<ConnectWorker> connectWorkerList) {
ConnectStateVO connectStateVO = new ConnectStateVO();
connectStateVO.setConnectClusterCount(connectClusterList.size());
connectStateVO.setTotalConnectorCount(connectorPOList.size());
connectStateVO.setAliveConnectorCount(connectorPOList.stream().filter(elem -> elem.getState().equals(AbstractStatus.State.RUNNING.name())).collect(Collectors.toList()).size());
connectStateVO.setWorkerCount(connectWorkerList.size());
connectStateVO.setTotalTaskCount(workerConnectorList.size());
connectStateVO.setAliveTaskCount(workerConnectorList.stream().filter(elem -> elem.getState().equals(AbstractStatus.State.RUNNING.name())).collect(Collectors.toList()).size());
return connectStateVO;
}
private PaginationResult<ClusterConnectorOverviewVO> pagingConnectorInLocal(List<ClusterConnectorOverviewVO> connectorVOList, ClusterConnectorsOverviewDTO dto) {
// fuzzy match
connectorVOList = PaginationUtil.pageByFuzzyFilter(connectorVOList, dto.getSearchKeywords(), Arrays.asList("connectClusterName"));
// sort
if (!dto.getLatestMetricNames().isEmpty()) {
PaginationMetricsUtil.sortMetrics(connectorVOList, "latestMetrics", dto.getSortMetricNameList(), "connectClusterName", dto.getSortType());
} else {
PaginationUtil.pageBySort(connectorVOList, dto.getSortField(), dto.getSortType(), "connectClusterName", dto.getSortType());
}
// paginate
return PaginationUtil.pageBySubData(connectorVOList, dto);
}
}

View File

@@ -44,7 +44,7 @@ public class ClusterTopicsManagerImpl implements ClusterTopicsManager {
 List<Topic> topicList = topicService.listTopicsFromDB(clusterPhyId);
 // get the metrics of all Topics in the cluster
-Map<String, TopicMetrics> metricsMap = topicMetricService.getLatestMetricsFromCache(clusterPhyId);
+Map<String, TopicMetrics> metricsMap = topicMetricService.getLatestMetricsFromCacheFirst(clusterPhyId);
 // convert to VOs
 List<ClusterPhyTopicsOverviewVO> voList = TopicVOConverter.convert2ClusterPhyTopicsOverviewVOList(topicList, metricsMap);

View File

@@ -5,7 +5,9 @@ import com.didiglobal.logi.log.LogFactory;
 import com.xiaojukeji.know.streaming.km.biz.cluster.ClusterZookeepersManager;
 import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterZookeepersOverviewDTO;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
+import com.xiaojukeji.know.streaming.km.common.bean.entity.config.ZKConfig;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.ZookeeperMetrics;
+import com.xiaojukeji.know.streaming.km.common.bean.entity.param.metric.ZookeeperMetricParam;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.result.ResultStatus;
@@ -18,8 +20,9 @@ import com.xiaojukeji.know.streaming.km.common.constant.MsgConstant;
 import com.xiaojukeji.know.streaming.km.common.enums.zookeeper.ZKRoleEnum;
 import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.PaginationUtil;
+import com.xiaojukeji.know.streaming.km.common.utils.Tuple;
 import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
-import com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.ZookeeperMetricVersionItems;
+import com.xiaojukeji.know.streaming.km.core.service.version.metrics.ZookeeperMetricVersionItems;
 import com.xiaojukeji.know.streaming.km.core.service.zookeeper.ZnodeService;
 import com.xiaojukeji.know.streaming.km.core.service.zookeeper.ZookeeperMetricService;
 import com.xiaojukeji.know.streaming.km.core.service.zookeeper.ZookeeperService;
@@ -27,6 +30,7 @@ import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.stereotype.Service;
 import java.util.Arrays;
 import java.util.List;
+import java.util.stream.Collectors;
 @Service
@@ -52,6 +56,11 @@ public class ClusterZookeepersManagerImpl implements ClusterZookeepersManager {
 return Result.buildFromRSAndMsg(ResultStatus.CLUSTER_NOT_EXIST, MsgConstant.getClusterPhyNotExist(clusterPhyId));
 }
+// TODO
+// private Integer healthState;
+// private Integer healthCheckPassed;
+// private Integer healthCheckTotal;
 List<ZookeeperInfo> infoList = zookeeperService.listFromDBByCluster(clusterPhyId);
 ClusterZookeepersStateVO vo = new ClusterZookeepersStateVO();
@@ -81,30 +90,21 @@ public class ClusterZookeepersManagerImpl implements ClusterZookeepersManager {
 }
 }
-// fetch metrics
-Result<ZookeeperMetrics> metricsResult = zookeeperMetricService.batchCollectMetricsFromZookeeper(
+Result<ZookeeperMetrics> metricsResult = zookeeperMetricService.collectMetricsFromZookeeper(new ZookeeperMetricParam(
 clusterPhyId,
-Arrays.asList(
-ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_WATCH_COUNT,
-ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_HEALTH_STATE,
-ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_HEALTH_CHECK_PASSED,
-ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_HEALTH_CHECK_TOTAL
-)
-);
+infoList.stream().filter(elem -> elem.alive()).map(item -> new Tuple<String, Integer>(item.getHost(), item.getPort())).collect(Collectors.toList()),
+ConvertUtil.str2ObjByJson(clusterPhy.getZkProperties(), ZKConfig.class),
+ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_WATCH_COUNT
+));
 if (metricsResult.failed()) {
 LOGGER.error(
-"method=getClusterPhyZookeepersState||clusterPhyId={}||errMsg={}",
+"class=ClusterZookeepersManagerImpl||method=getClusterPhyZookeepersState||clusterPhyId={}||errMsg={}",
 clusterPhyId, metricsResult.getMessage()
 );
 return Result.buildSuc(vo);
 }
-ZookeeperMetrics metrics = metricsResult.getData();
-vo.setWatchCount(ConvertUtil.float2Integer(metrics.getMetrics().get(ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_WATCH_COUNT)));
-vo.setHealthState(ConvertUtil.float2Integer(metrics.getMetrics().get(ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_HEALTH_STATE)));
-vo.setHealthCheckPassed(ConvertUtil.float2Integer(metrics.getMetrics().get(ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_HEALTH_CHECK_PASSED)));
-vo.setHealthCheckTotal(ConvertUtil.float2Integer(metrics.getMetrics().get(ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_HEALTH_CHECK_TOTAL)));
+Float watchCount = metricsResult.getData().getMetric(ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_WATCH_COUNT);
+vo.setWatchCount(watchCount != null? watchCount.intValue(): null);
 return Result.buildSuc(vo);
 }

View File

@@ -5,7 +5,6 @@ import com.didiglobal.logi.log.LogFactory;
 import com.xiaojukeji.know.streaming.km.biz.cluster.MultiClusterPhyManager;
 import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.MetricDTO;
 import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.MetricsClusterPhyDTO;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhysHealthState;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhysState;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
 import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.MultiClusterDashboardDTO;
@@ -17,7 +16,6 @@ import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.ClusterPhyDashboa
 import com.xiaojukeji.know.streaming.km.common.bean.vo.metrics.line.MetricMultiLinesVO;
 import com.xiaojukeji.know.streaming.km.common.constant.Constant;
 import com.xiaojukeji.know.streaming.km.common.converter.ClusterVOConverter;
-import com.xiaojukeji.know.streaming.km.common.enums.health.HealthStateEnum;
 import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.PaginationUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.PaginationMetricsUtil;
@@ -25,11 +23,14 @@ import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
 import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterMetricService;
 import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
 import com.xiaojukeji.know.streaming.km.core.service.kafkacontroller.KafkaControllerService;
-import com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.ClusterMetricVersionItems;
+import com.xiaojukeji.know.streaming.km.core.service.version.metrics.ClusterMetricVersionItems;
 import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.stereotype.Service;
-import java.util.*;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
 import java.util.stream.Collectors;
 @Service
@@ -54,6 +55,7 @@ public class MultiClusterPhyManagerImpl implements MultiClusterPhyManager {
 false
 );
+// TODO: product-wise, consider adding an Unknown state later; data for newly connected clusters is delayed
 ClusterPhysState physState = new ClusterPhysState(0, 0, clusterPhyList.size());
 for (ClusterPhy clusterPhy: clusterPhyList) {
 KafkaController kafkaController = controllerMap.get(clusterPhy.getId());
@@ -73,32 +75,6 @@ public class MultiClusterPhyManagerImpl implements MultiClusterPhyManager {
 return physState;
 }
-@Override
-public ClusterPhysHealthState getClusterPhysHealthState() {
-List<ClusterPhy> clusterPhyList = clusterPhyService.listAllClusters();
-ClusterPhysHealthState physState = new ClusterPhysHealthState(clusterPhyList.size());
-for (ClusterPhy clusterPhy: clusterPhyList) {
-ClusterMetrics metrics = clusterMetricService.getLatestMetricsFromCache(clusterPhy.getId());
-Float state = metrics.getMetric(ClusterMetricVersionItems.CLUSTER_METRIC_HEALTH_STATE);
-if (state == null) {
-physState.setUnknownCount(physState.getUnknownCount() + 1);
-} else if (state.intValue() == HealthStateEnum.GOOD.getDimension()) {
-physState.setGoodCount(physState.getGoodCount() + 1);
-} else if (state.intValue() == HealthStateEnum.MEDIUM.getDimension()) {
-physState.setMediumCount(physState.getMediumCount() + 1);
-} else if (state.intValue() == HealthStateEnum.POOR.getDimension()) {
-physState.setPoorCount(physState.getPoorCount() + 1);
-} else if (state.intValue() == HealthStateEnum.DEAD.getDimension()) {
-physState.setDeadCount(physState.getDeadCount() + 1);
-} else {
-physState.setUnknownCount(physState.getUnknownCount() + 1);
-}
-}
-return physState;
-}
 @Override
 public PaginationResult<ClusterPhyDashboardVO> getClusterPhysDashboard(MultiClusterDashboardDTO dto) {
 // get the clusters
@@ -107,6 +83,7 @@ public class MultiClusterPhyManagerImpl implements MultiClusterPhyManager {
 // convert to VO format for later pagination and filtering
 List<ClusterPhyDashboardVO> voList = ConvertUtil.list2List(clusterPhyList, ClusterPhyDashboardVO.class);
+// TODO: product-wise, consider adding an Unknown state later; data for newly connected clusters is delayed
 // get the cluster controller info and add it to the VOs
 Map<Long, KafkaController> controllerMap = kafkaControllerService.getKafkaControllersFromDB(clusterPhyList.stream().map(elem -> elem.getId()).collect(Collectors.toList()), false);
 for (ClusterPhyDashboardVO vo: voList) {
@@ -172,7 +149,13 @@ public class MultiClusterPhyManagerImpl implements MultiClusterPhyManager {
 List<ClusterMetrics> metricsList = new ArrayList<>();
 for (ClusterPhyDashboardVO vo: voList) {
 ClusterMetrics clusterMetrics = clusterMetricService.getLatestMetricsFromCache(vo.getId());
-clusterMetrics.getMetrics().putIfAbsent(ClusterMetricVersionItems.CLUSTER_METRIC_HEALTH_STATE, (float) HealthStateEnum.UNKNOWN.getDimension());
+if (!clusterMetrics.getMetrics().containsKey(ClusterMetricVersionItems.CLUSTER_METRIC_HEALTH_SCORE)) {
+Float alive = clusterMetrics.getMetrics().get(ClusterMetricVersionItems.CLUSTER_METRIC_ALIVE);
+// if the cluster has no health score, set a default value
+clusterMetrics.putMetric(ClusterMetricVersionItems.CLUSTER_METRIC_HEALTH_SCORE,
+(alive != null && alive <= 0)? 0.0f: Constant.DEFAULT_CLUSTER_HEALTH_SCORE.floatValue()
+);
+}
 metricsList.add(clusterMetrics);
 }

View File

@@ -1,15 +0,0 @@
package com.xiaojukeji.know.streaming.km.biz.connect.connector;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector.ConnectorCreateDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.vo.connect.connector.ConnectorStateVO;
import java.util.Properties;
public interface ConnectorManager {
Result<Void> updateConnectorConfig(Long connectClusterId, String connectorName, Properties configs, String operator);
Result<Void> createConnector(ConnectorCreateDTO dto, String operator);
Result<ConnectorStateVO> getConnectorStateVO(Long connectClusterId, String connectorName);
}

View File

@@ -1,16 +0,0 @@
package com.xiaojukeji.know.streaming.km.biz.connect.connector;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.vo.connect.task.KCTaskOverviewVO;
import java.util.List;
/**
* @author wyb
* @date 2022/11/14
*/
public interface WorkerConnectorManager {
Result<List<KCTaskOverviewVO>> getTaskOverview(Long connectClusterId, String connectorName);
}

View File

@@ -1,93 +0,0 @@
package com.xiaojukeji.know.streaming.km.biz.connect.connector.impl;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.biz.connect.connector.ConnectorManager;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector.ConnectorCreateDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.WorkerConnector;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.config.ConnectConfigInfos;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.connector.KSConnector;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.connector.KSConnectorInfo;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.ResultStatus;
import com.xiaojukeji.know.streaming.km.common.bean.po.connect.ConnectorPO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.connect.connector.ConnectorStateVO;
import com.xiaojukeji.know.streaming.km.core.service.connect.connector.ConnectorService;
import com.xiaojukeji.know.streaming.km.core.service.connect.plugin.PluginService;
import com.xiaojukeji.know.streaming.km.core.service.connect.worker.WorkerConnectorService;
import org.apache.kafka.connect.runtime.AbstractStatus;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import java.util.List;
import java.util.Properties;
import java.util.stream.Collectors;
@Service
public class ConnectorManagerImpl implements ConnectorManager {
private static final ILog LOGGER = LogFactory.getLog(ConnectorManagerImpl.class);
@Autowired
private PluginService pluginService;
@Autowired
private ConnectorService connectorService;
@Autowired
private WorkerConnectorService workerConnectorService;
@Override
public Result<Void> updateConnectorConfig(Long connectClusterId, String connectorName, Properties configs, String operator) {
Result<ConnectConfigInfos> infosResult = pluginService.validateConfig(connectClusterId, configs);
if (infosResult.failed()) {
return Result.buildFromIgnoreData(infosResult);
}
if (infosResult.getData().getErrorCount() > 0) {
return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, "Connector parameter error");
}
return connectorService.updateConnectorConfig(connectClusterId, connectorName, configs, operator);
}
@Override
public Result<Void> createConnector(ConnectorCreateDTO dto, String operator) {
Result<KSConnectorInfo> createResult = connectorService.createConnector(dto.getConnectClusterId(), dto.getConnectorName(), dto.getConfigs(), operator);
if (createResult.failed()) {
return Result.buildFromIgnoreData(createResult);
}
Result<KSConnector> ksConnectorResult = connectorService.getAllConnectorInfoFromCluster(dto.getConnectClusterId(), dto.getConnectorName());
if (ksConnectorResult.failed()) {
return Result.buildFromRSAndMsg(ResultStatus.SUCCESS, "created successfully, but fetching metadata failed; the page metadata may lag by about one minute");
}
connectorService.addNewToDB(ksConnectorResult.getData());
return Result.buildSuc();
}
@Override
public Result<ConnectorStateVO> getConnectorStateVO(Long connectClusterId, String connectorName) {
ConnectorPO connectorPO = connectorService.getConnectorFromDB(connectClusterId, connectorName);
if (connectorPO == null) {
return Result.buildFailure(ResultStatus.NOT_EXIST);
}
List<WorkerConnector> workerConnectorList = workerConnectorService.listFromDB(connectClusterId).stream().filter(elem -> elem.getConnectorName().equals(connectorName)).collect(Collectors.toList());
return Result.buildSuc(convert2ConnectorOverviewVO(connectorPO, workerConnectorList));
}
private ConnectorStateVO convert2ConnectorOverviewVO(ConnectorPO connectorPO, List<WorkerConnector> workerConnectorList) {
ConnectorStateVO connectorStateVO = new ConnectorStateVO();
connectorStateVO.setConnectClusterId(connectorPO.getConnectClusterId());
connectorStateVO.setName(connectorPO.getConnectorName());
connectorStateVO.setType(connectorPO.getConnectorType());
connectorStateVO.setState(connectorPO.getState());
connectorStateVO.setTotalTaskCount(workerConnectorList.size());
connectorStateVO.setAliveTaskCount(workerConnectorList.stream().filter(elem -> elem.getState().equals(AbstractStatus.State.RUNNING.name())).collect(Collectors.toList()).size());
connectorStateVO.setTotalWorkerCount(workerConnectorList.stream().map(elem -> elem.getWorkerId()).collect(Collectors.toSet()).size());
return connectorStateVO;
}
}

View File

@@ -1,37 +0,0 @@
-package com.xiaojukeji.know.streaming.km.biz.connect.connector.impl;
-import com.didiglobal.logi.log.ILog;
-import com.didiglobal.logi.log.LogFactory;
-import com.xiaojukeji.know.streaming.km.biz.connect.connector.WorkerConnectorManager;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.ConnectCluster;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.WorkerConnector;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
-import com.xiaojukeji.know.streaming.km.common.bean.vo.connect.task.KCTaskOverviewVO;
-import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
-import com.xiaojukeji.know.streaming.km.core.service.connect.worker.WorkerConnectorService;
-import com.xiaojukeji.know.streaming.km.persistence.connect.cache.LoadedConnectClusterCache;
-import org.springframework.beans.factory.annotation.Autowired;
-import org.springframework.stereotype.Service;
-import java.util.List;
-/**
- * @author wyb
- * @date 2022/11/14
- */
-@Service
-public class WorkerConnectorManageImpl implements WorkerConnectorManager {
-    private static final ILog LOGGER = LogFactory.getLog(WorkerConnectorManageImpl.class);
-    @Autowired
-    private WorkerConnectorService workerConnectorService;
-    @Override
-    public Result<List<KCTaskOverviewVO>> getTaskOverview(Long connectClusterId, String connectorName) {
-        ConnectCluster connectCluster = LoadedConnectClusterCache.getByPhyId(connectClusterId);
-        List<WorkerConnector> workerConnectorList = workerConnectorService.getWorkerConnectorListFromCluster(connectCluster, connectorName);
-        return Result.buildSuc(ConvertUtil.list2List(workerConnectorList, KCTaskOverviewVO.class));
-    }
-}

View File

@@ -8,15 +8,10 @@ import com.xiaojukeji.know.streaming.km.common.bean.dto.group.GroupOffsetResetDT
 import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationBaseDTO;
 import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationSortDTO;
 import com.xiaojukeji.know.streaming.km.common.bean.dto.partition.PartitionOffsetDTO;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.group.Group;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.group.GroupTopic;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.group.GroupTopicMember;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.kafka.KSGroupDescription;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.kafka.KSMemberConsumerAssignment;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.kafka.KSMemberDescription;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.GroupMetrics;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.offset.KSOffsetSpec;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.topic.Topic;
@@ -39,13 +34,15 @@ import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.PaginationMetricsUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.PaginationUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
-import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
 import com.xiaojukeji.know.streaming.km.core.service.group.GroupMetricService;
 import com.xiaojukeji.know.streaming.km.core.service.group.GroupService;
 import com.xiaojukeji.know.streaming.km.core.service.partition.PartitionService;
 import com.xiaojukeji.know.streaming.km.core.service.topic.TopicService;
-import com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.GroupMetricVersionItems;
+import com.xiaojukeji.know.streaming.km.core.service.version.metrics.GroupMetricVersionItems;
 import com.xiaojukeji.know.streaming.km.persistence.es.dao.GroupMetricESDAO;
+import org.apache.kafka.clients.admin.ConsumerGroupDescription;
+import org.apache.kafka.clients.admin.MemberDescription;
+import org.apache.kafka.clients.admin.OffsetSpec;
 import org.apache.kafka.common.ConsumerGroupState;
 import org.apache.kafka.common.TopicPartition;
 import org.springframework.beans.factory.annotation.Autowired;
@@ -54,8 +51,6 @@ import org.springframework.stereotype.Component;
 import java.util.*;
 import java.util.stream.Collectors;
-import static com.xiaojukeji.know.streaming.km.common.enums.group.GroupTypeEnum.CONNECT_CLUSTER_PROTOCOL_TYPE;
 @Component
 public class GroupManagerImpl implements GroupManager {
     private static final ILog log = LogFactory.getLog(GroupManagerImpl.class);
@@ -75,9 +70,6 @@ public class GroupManagerImpl implements GroupManager {
     @Autowired
     private GroupMetricESDAO groupMetricESDAO;
-    @Autowired
-    private ClusterPhyService clusterPhyService;
     @Override
     public PaginationResult<GroupTopicOverviewVO> pagingGroupMembers(Long clusterPhyId,
                                                                      String topicName,
@@ -148,11 +140,6 @@ public class GroupManagerImpl implements GroupManager {
                                                                      String groupName,
                                                                      List<String> latestMetricNames,
                                                                      PaginationSortDTO dto) throws NotExistException, AdminOperateException {
-        ClusterPhy clusterPhy = clusterPhyService.getClusterByCluster(clusterPhyId);
-        if (clusterPhy == null) {
-            return PaginationResult.buildFailure(MsgConstant.getClusterPhyNotExist(clusterPhyId), dto);
-        }
         // Fetch the list of TopicPartitions consumed by the group
         Map<TopicPartition, Long> consumedOffsetMap = groupService.getGroupOffsetFromKafka(clusterPhyId, groupName);
         List<Integer> partitionList = consumedOffsetMap.keySet()
@@ -163,18 +150,13 @@ public class GroupManagerImpl implements GroupManager {
         Collections.sort(partitionList);
         // Fetch the group's current runtime state
-        KSGroupDescription groupDescription = groupService.getGroupDescriptionFromKafka(clusterPhy, groupName);
+        ConsumerGroupDescription groupDescription = groupService.getGroupDescriptionFromKafka(clusterPhyId, groupName);
         // Convert to storage format
-        Map<TopicPartition, KSMemberDescription> tpMemberMap = new HashMap<>();
+        Map<TopicPartition, MemberDescription> tpMemberMap = new HashMap<>();
-        // if this is not a connect cluster
-        if (!groupDescription.protocolType().equals(CONNECT_CLUSTER_PROTOCOL_TYPE)) {
-            for (KSMemberDescription description : groupDescription.members()) {
-                KSMemberConsumerAssignment assignment = (KSMemberConsumerAssignment) description.assignment();
-                for (TopicPartition tp : assignment.topicPartitions()) {
-                    tpMemberMap.put(tp, description);
-                }
+        for (MemberDescription description: groupDescription.members()) {
+            for (TopicPartition tp: description.assignment().topicPartitions()) {
+                tpMemberMap.put(tp, description);
             }
         }
@@ -191,11 +173,11 @@ public class GroupManagerImpl implements GroupManager {
             vo.setTopicName(topicName);
             vo.setPartitionId(groupMetrics.getPartitionId());
-            KSMemberDescription ksMemberDescription = tpMemberMap.get(new TopicPartition(topicName, groupMetrics.getPartitionId()));
-            if (ksMemberDescription != null) {
-                vo.setMemberId(ksMemberDescription.consumerId());
-                vo.setHost(ksMemberDescription.host());
-                vo.setClientId(ksMemberDescription.clientId());
+            MemberDescription memberDescription = tpMemberMap.get(new TopicPartition(topicName, groupMetrics.getPartitionId()));
+            if (memberDescription != null) {
+                vo.setMemberId(memberDescription.consumerId());
+                vo.setHost(memberDescription.host());
+                vo.setClientId(memberDescription.clientId());
             }
             vo.setLatestMetrics(groupMetrics);
@@ -221,18 +203,13 @@ public class GroupManagerImpl implements GroupManager {
             return rv;
         }
-        ClusterPhy clusterPhy = clusterPhyService.getClusterByCluster(dto.getClusterId());
-        if (clusterPhy == null) {
-            return Result.buildFromRSAndMsg(ResultStatus.CLUSTER_NOT_EXIST, MsgConstant.getClusterPhyNotExist(dto.getClusterId()));
-        }
-        KSGroupDescription description = groupService.getGroupDescriptionFromKafka(clusterPhy, dto.getGroupName());
+        ConsumerGroupDescription description = groupService.getGroupDescriptionFromKafka(dto.getClusterId(), dto.getGroupName());
         if (ConsumerGroupState.DEAD.equals(description.state()) && !dto.isCreateIfNotExist()) {
             return Result.buildFromRSAndMsg(ResultStatus.KAFKA_OPERATE_FAILED, "group不存在, 重置失败");
         }
         if (!ConsumerGroupState.EMPTY.equals(description.state()) && !ConsumerGroupState.DEAD.equals(description.state())) {
-            return Result.buildFromRSAndMsg(ResultStatus.KAFKA_OPERATE_FAILED, String.format("group处于%s, 重置失败(仅Empty | Dead 情况可重置)", GroupStateEnum.getByRawState(description.state()).getState()));
+            return Result.buildFromRSAndMsg(ResultStatus.KAFKA_OPERATE_FAILED, String.format("group处于%s, 重置失败(仅Empty情况可重置)", GroupStateEnum.getByRawState(description.state()).getState()));
         }
         // Fetch offsets
@@ -297,16 +274,16 @@ public class GroupManagerImpl implements GroupManager {
             )));
         }
-        KSOffsetSpec offsetSpec = null;
+        OffsetSpec offsetSpec = null;
         if (OffsetTypeEnum.PRECISE_TIMESTAMP.getResetType() == dto.getResetType()) {
-            offsetSpec = KSOffsetSpec.forTimestamp(dto.getTimestamp());
+            offsetSpec = OffsetSpec.forTimestamp(dto.getTimestamp());
         } else if (OffsetTypeEnum.EARLIEST.getResetType() == dto.getResetType()) {
-            offsetSpec = KSOffsetSpec.earliest();
+            offsetSpec = OffsetSpec.earliest();
         } else {
-            offsetSpec = KSOffsetSpec.latest();
+            offsetSpec = OffsetSpec.latest();
        }
-        return partitionService.getPartitionOffsetFromKafka(dto.getClusterId(), dto.getTopicName(), offsetSpec);
+        return partitionService.getPartitionOffsetFromKafka(dto.getClusterId(), dto.getTopicName(), offsetSpec, dto.getTimestamp());
     }
     private List<GroupTopicOverviewVO> convert2GroupTopicOverviewVOList(List<GroupMemberPO> poList, List<GroupMetrics> metricsList) {
@@ -368,4 +345,32 @@ public class GroupManagerImpl implements GroupManager {
             dto
         );
     }
+    private List<GroupTopicOverviewVO> convert2GroupTopicOverviewVOList(String groupName, String state, List<GroupTopicMember> groupTopicList, List<GroupMetrics> metricsList) {
+        if (metricsList == null) {
+            metricsList = new ArrayList<>();
+        }
+        // <TopicName, GroupMetrics>
+        Map<String, GroupMetrics> metricsMap = new HashMap<>();
+        for (GroupMetrics metrics : metricsList) {
+            if (!groupName.equals(metrics.getGroup())) continue;
+            metricsMap.put(metrics.getTopic(), metrics);
+        }
+        List<GroupTopicOverviewVO> voList = new ArrayList<>();
+        for (GroupTopicMember po : groupTopicList) {
+            GroupTopicOverviewVO vo = ConvertUtil.obj2Obj(po, GroupTopicOverviewVO.class);
+            vo.setGroupName(groupName);
+            vo.setState(state);
+            GroupMetrics metrics = metricsMap.get(po.getTopicName());
+            if (metrics != null) {
+                vo.setMaxLag(ConvertUtil.Float2Long(metrics.getMetrics().get(GroupMetricVersionItems.GROUP_METRIC_LAG)));
+            }
+            voList.add(vo);
+        }
+        return voList;
+    }
 }
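For reference, the `OffsetSpec` type this file switches to (replacing the `KSOffsetSpec` wrapper) is the stock `org.apache.kafka.clients.admin` API, and `Admin#listOffsets` resolves each spec to a concrete offset. A minimal standalone sketch; the bootstrap address, topic, and partition are placeholders:

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ListOffsetsResult;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.common.TopicPartition;

import java.util.Map;
import java.util.Properties;

public class OffsetSpecSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder address
        try (Admin admin = Admin.create(props)) {
            TopicPartition tp = new TopicPartition("demo-topic", 0); // placeholder partition
            // The three variants map onto the three reset branches above:
            // OffsetSpec.earliest(), OffsetSpec.latest(), OffsetSpec.forTimestamp(ts)
            ListOffsetsResult result = admin.listOffsets(
                    Map.of(tp, OffsetSpec.forTimestamp(System.currentTimeMillis() - 3_600_000L)));
            // offset() is -1 when no record exists at or after the timestamp
            System.out.println("resolved offset: " + result.partitionResult(tp).get().offset());
        }
    }
}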

View File

@@ -22,7 +22,7 @@ import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
 import com.xiaojukeji.know.streaming.km.core.service.reassign.ReassignService;
 import com.xiaojukeji.know.streaming.km.core.service.topic.TopicMetricService;
 import com.xiaojukeji.know.streaming.km.core.service.topic.TopicService;
-import com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.TopicMetricVersionItems;
+import com.xiaojukeji.know.streaming.km.core.service.version.metrics.TopicMetricVersionItems;
 import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.stereotype.Component;

View File

@@ -10,18 +10,14 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.param.topic.TopicCreateParam;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.param.topic.TopicParam;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.param.topic.TopicPartitionExpandParam;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.partition.Partition;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.result.ResultStatus;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.topic.Topic;
 import com.xiaojukeji.know.streaming.km.common.constant.MsgConstant;
-import com.xiaojukeji.know.streaming.km.common.utils.BackoffUtils;
-import com.xiaojukeji.know.streaming.km.common.utils.FutureUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
 import com.xiaojukeji.know.streaming.km.common.utils.kafka.KafkaReplicaAssignUtil;
 import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerService;
 import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
-import com.xiaojukeji.know.streaming.km.core.service.partition.PartitionService;
 import com.xiaojukeji.know.streaming.km.core.service.topic.OpTopicService;
 import com.xiaojukeji.know.streaming.km.core.service.topic.TopicService;
 import kafka.admin.AdminUtils;
@@ -56,9 +52,6 @@ public class OpTopicManagerImpl implements OpTopicManager {
     @Autowired
     private ClusterPhyService clusterPhyService;
-    @Autowired
-    private PartitionService partitionService;
     @Override
     public Result<Void> createTopic(TopicCreateDTO dto, String operator) {
         log.info("method=createTopic||param={}||operator={}.", dto, operator);
@@ -87,7 +80,7 @@ public class OpTopicManagerImpl implements OpTopicManager {
         );
         // Create the Topic
-        Result<Void> createTopicRes = opTopicService.createTopic(
+        return opTopicService.createTopic(
             new TopicCreateParam(
                 dto.getClusterId(),
                 dto.getTopicName(),
@@ -97,21 +90,6 @@ public class OpTopicManagerImpl implements OpTopicManager {
             ),
             operator
         );
-        if (createTopicRes.successful()){
-            try{
-                FutureUtil.quickStartupFutureUtil.submitTask(() -> {
-                    BackoffUtils.backoff(3000);
-                    Result<List<Partition>> partitionsResult = partitionService.listPartitionsFromKafka(clusterPhy, dto.getTopicName());
-                    if (partitionsResult.successful()){
-                        partitionService.updatePartitions(clusterPhy.getId(), dto.getTopicName(), partitionsResult.getData(), new ArrayList<>());
-                    }
-                });
-            }catch (Exception e) {
-                log.error("method=createTopic||param={}||operator={}||msg=add partition to db failed||errMsg=exception", dto, operator, e);
-                return Result.buildFromRSAndMsg(ResultStatus.MYSQL_OPERATE_FAILED, "Topic创建成功但记录Partition到DB中失败等待定时任务同步partition信息");
-            }
-        }
-        return createTopicRes;
     }
     @Override
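The hunk above also drops the post-create partition backfill, leaving topic creation itself to `opTopicService`. For reference, plain Kafka topic creation through the public admin API looks like the sketch below — this is the generic `Admin#createTopics` call, not this repo's `OpTopicService`, and the names and sizing are placeholders:

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.Collections;
import java.util.Properties;

public class CreateTopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder address
        try (Admin admin = Admin.create(props)) {
            // 3 partitions, replication factor 2 -- stand-ins for the DTO fields above
            NewTopic topic = new NewTopic("demo-topic", 3, (short) 2);
            admin.createTopics(Collections.singleton(topic)).all().get(); // throws on failure
        }
    }
}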

View File

@@ -16,7 +16,7 @@ import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
 import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerConfigService;
 import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerService;
 import com.xiaojukeji.know.streaming.km.core.service.topic.TopicConfigService;
-import com.xiaojukeji.know.streaming.km.core.service.version.BaseKafkaVersionControlService;
+import com.xiaojukeji.know.streaming.km.core.service.version.BaseVersionControlService;
 import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.stereotype.Component;
@@ -27,7 +27,7 @@ import java.util.stream.Collectors;
 import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionEnum.*;
 @Component
-public class TopicConfigManagerImpl extends BaseKafkaVersionControlService implements TopicConfigManager {
+public class TopicConfigManagerImpl extends BaseVersionControlService implements TopicConfigManager {
     private static final ILog log = LogFactory.getLog(TopicConfigManagerImpl.class);
     private static final String GET_DEFAULT_TOPIC_CONFIG = "getDefaultTopicConfig";

View File

@@ -10,7 +10,6 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.broker.Broker;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.PartitionMetrics;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.TopicMetrics;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.offset.KSOffsetSpec;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.partition.Partition;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
@@ -44,9 +43,10 @@ import com.xiaojukeji.know.streaming.km.core.service.partition.PartitionService;
 import com.xiaojukeji.know.streaming.km.core.service.topic.TopicConfigService;
 import com.xiaojukeji.know.streaming.km.core.service.topic.TopicMetricService;
 import com.xiaojukeji.know.streaming.km.core.service.topic.TopicService;
-import com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.TopicMetricVersionItems;
+import com.xiaojukeji.know.streaming.km.core.service.version.metrics.TopicMetricVersionItems;
 import org.apache.commons.lang3.ObjectUtils;
 import org.apache.commons.lang3.StringUtils;
+import org.apache.kafka.clients.admin.OffsetSpec;
 import org.apache.kafka.clients.consumer.*;
 import org.apache.kafka.common.TopicPartition;
 import org.apache.kafka.common.config.TopicConfig;
@@ -143,12 +143,12 @@ public class TopicStateManagerImpl implements TopicStateManager {
         }
         // Fetch the partition beginOffset
-        Result<Map<TopicPartition, Long>> beginOffsetsMapResult = partitionService.getPartitionOffsetFromKafka(clusterPhyId, topicName, dto.getFilterPartitionId(), KSOffsetSpec.earliest());
+        Result<Map<TopicPartition, Long>> beginOffsetsMapResult = partitionService.getPartitionOffsetFromKafka(clusterPhyId, topicName, dto.getFilterPartitionId(), OffsetSpec.earliest(), null);
         if (beginOffsetsMapResult.failed()) {
             return Result.buildFromIgnoreData(beginOffsetsMapResult);
         }
         // Fetch the partition endOffset
-        Result<Map<TopicPartition, Long>> endOffsetsMapResult = partitionService.getPartitionOffsetFromKafka(clusterPhyId, topicName, dto.getFilterPartitionId(), KSOffsetSpec.latest());
+        Result<Map<TopicPartition, Long>> endOffsetsMapResult = partitionService.getPartitionOffsetFromKafka(clusterPhyId, topicName, dto.getFilterPartitionId(), OffsetSpec.latest(), null);
         if (endOffsetsMapResult.failed()) {
             return Result.buildFromIgnoreData(endOffsetsMapResult);
         }
@@ -307,7 +307,7 @@ public class TopicStateManagerImpl implements TopicStateManager {
         if (metricsResult.failed()) {
             // Only log the error, do not return it directly
             log.error(
-                    "method=getTopicPartitions||clusterPhyId={}||topicName={}||result={}||msg=get metrics from es failed",
+                    "class=TopicStateManagerImpl||method=getTopicPartitions||clusterPhyId={}||topicName={}||result={}||msg=get metrics from es failed",
                     clusterPhyId, topicName, metricsResult
             );
         }

View File

@@ -20,7 +20,7 @@ public interface VersionControlManager {
      * Get all Kafka versions currently supported by KS
      * @return
      */
-    Result<Map<String, Long>> listAllKafkaVersions();
+    Result<Map<String, Long>> listAllVersions();

     /**
      * Get all metrics of type `type` for cluster `clusterId`, whether supported or not
@@ -28,7 +28,7 @@ public interface VersionControlManager {
      * @param type
      * @return
      */
-    Result<List<VersionItemVO>> listKafkaClusterVersionControlItem(Long clusterId, Integer type);
+    Result<List<VersionItemVO>> listClusterVersionControlItem(Long clusterId, Integer type);

     /**
      * Get the metric display configuration set by the current user

View File

@@ -17,7 +17,6 @@ import com.xiaojukeji.know.streaming.km.common.bean.vo.version.VersionItemVO;
 import com.xiaojukeji.know.streaming.km.common.enums.version.VersionEnum;
 import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.VersionUtil;
-import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
 import com.xiaojukeji.know.streaming.km.core.service.version.VersionControlService;
 import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.stereotype.Service;
@@ -30,10 +29,10 @@ import java.util.stream.Collectors;
 import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionEnum.V_MAX;
 import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum.*;
-import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.BrokerMetricVersionItems.*;
-import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.ClusterMetricVersionItems.*;
-import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.GroupMetricVersionItems.*;
-import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.TopicMetricVersionItems.*;
+import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.BrokerMetricVersionItems.*;
+import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.ClusterMetricVersionItems.*;
+import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.GroupMetricVersionItems.*;
+import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.TopicMetricVersionItems.*;
 @Service
 public class VersionControlManagerImpl implements VersionControlManager {
@@ -48,7 +47,7 @@ public class VersionControlManagerImpl implements VersionControlManager {
     @PostConstruct
     public void init(){
-        defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_HEALTH_STATE, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_HEALTH_SCORE, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_FAILED_FETCH_REQ, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_FAILED_PRODUCE_REQ, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_UNDER_REPLICA_PARTITIONS, true));
@@ -58,7 +57,7 @@ public class VersionControlManagerImpl implements VersionControlManager {
         defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_BYTES_REJECTED, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_MESSAGE_IN, true));
-        defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_HEALTH_STATE, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_HEALTH_SCORE, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_ACTIVE_CONTROLLER_COUNT, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_BYTES_IN, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_BYTES_OUT, true));
@@ -76,9 +75,9 @@ public class VersionControlManagerImpl implements VersionControlManager {
         defaultMetrics.add(new UserMetricConfig(METRIC_GROUP.getCode(), GROUP_METRIC_OFFSET_CONSUMED, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_GROUP.getCode(), GROUP_METRIC_LAG, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_GROUP.getCode(), GROUP_METRIC_STATE, true));
-        defaultMetrics.add(new UserMetricConfig(METRIC_GROUP.getCode(), GROUP_METRIC_HEALTH_STATE, true));
-        defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_HEALTH_STATE, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_GROUP.getCode(), GROUP_METRIC_HEALTH_SCORE, true));
+        defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_HEALTH_SCORE, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_CONNECTION_COUNT, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_MESSAGE_IN, true));
         defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_NETWORK_RPO_AVG_IDLE, true));
@@ -93,9 +92,6 @@ public class VersionControlManagerImpl implements VersionControlManager {
         defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_BYTES_OUT, true));
     }
-    @Autowired
-    private ClusterPhyService clusterPhyService;
     @Autowired
     private VersionControlService versionControlService;
@@ -111,13 +107,7 @@ public class VersionControlManagerImpl implements VersionControlManager {
         allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_BROKER.getCode()), VersionItemVO.class));
         allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_PARTITION.getCode()), VersionItemVO.class));
         allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_REPLICATION.getCode()), VersionItemVO.class));
         allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_ZOOKEEPER.getCode()), VersionItemVO.class));
-        allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_CONNECT_CLUSTER.getCode()), VersionItemVO.class));
-        allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_CONNECT_CONNECTOR.getCode()), VersionItemVO.class));
-        allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_CONNECT_MIRROR_MAKER.getCode()), VersionItemVO.class));
         allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(WEB_OP.getCode()), VersionItemVO.class));
         Map<String, VersionItemVO> map = allVersionItemVO.stream().collect(
@@ -131,20 +121,18 @@ public class VersionControlManagerImpl implements VersionControlManager {
     }
     @Override
-    public Result<Map<String, Long>> listAllKafkaVersions() {
+    public Result<Map<String, Long>> listAllVersions() {
         return Result.buildSuc(VersionEnum.allVersionsWithOutMax());
     }
     @Override
-    public Result<List<VersionItemVO>> listKafkaClusterVersionControlItem(Long clusterId, Integer type) {
+    public Result<List<VersionItemVO>> listClusterVersionControlItem(Long clusterId, Integer type) {
         List<VersionControlItem> allItem = versionControlService.listVersionControlItem(type);
         List<VersionItemVO> versionItemVOS = new ArrayList<>();
-        String versionStr = clusterPhyService.getVersionFromCacheFirst(clusterId);
         for (VersionControlItem item : allItem){
             VersionItemVO itemVO = ConvertUtil.obj2Obj(item, VersionItemVO.class);
-            boolean support = versionControlService.isClusterSupport(versionStr, item);
+            boolean support = versionControlService.isClusterSupport(clusterId, item);
             itemVO.setSupport(support);
             itemVO.setDesc(itemSupportDesc(item, support));
@@ -157,7 +145,7 @@ public class VersionControlManagerImpl implements VersionControlManager {
     @Override
     public Result<List<UserMetricConfigVO>> listUserMetricItem(Long clusterId, Integer type, String operator) {
-        Result<List<VersionItemVO>> ret = listKafkaClusterVersionControlItem(clusterId, type);
+        Result<List<VersionItemVO>> ret = listClusterVersionControlItem(clusterId, type);
         if(null == ret || ret.failed()){
             return Result.buildFail();
         }
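For reference, the `isClusterSupport(clusterId, item)` gate above boils down to a version-range check per metric item. A rough sketch of that idea, assuming a min-inclusive/max-exclusive range and a packed numeric version — the packing scheme below is an assumption for illustration, not this repo's `VersionUtil`:

public class VersionGateSketch {
    // mirrors the idea of a VersionControlItem gate: an item is supported when
    // minVersion <= clusterVersion < maxVersion (all normalized to longs)
    static boolean isSupported(long clusterVersion, long minVersion, long maxVersion) {
        return clusterVersion >= minVersion && clusterVersion < maxVersion;
    }

    // pack major/minor/patch into one comparable number (assumed scheme)
    static long version(int major, int minor, int patch) {
        return (long) major * 1_000_000 + minor * 1_000 + patch;
    }

    public static void main(String[] args) {
        long v2_5 = version(2, 5, 0);
        // e.g. an item available from 0.10.0 up to (but not including) the max sentinel
        System.out.println(isSupported(v2_5, version(0, 10, 0), Long.MAX_VALUE));
    }
}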

View File

@@ -1,6 +1,7 @@
 package com.xiaojukeji.know.streaming.km.collector.metric;
 import com.xiaojukeji.know.streaming.km.collector.service.CollectThreadPoolService;
+import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
 import com.xiaojukeji.know.streaming.km.common.bean.event.metric.BaseMetricEvent;
 import com.xiaojukeji.know.streaming.km.common.component.SpringTool;
 import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
@@ -8,20 +9,17 @@ import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
 import org.springframework.beans.factory.annotation.Autowired;
 /**
  * @author didi
  */
-public abstract class AbstractMetricCollector<M, C> {
+public abstract class AbstractMetricCollector<T> {
-    public abstract String getClusterVersion(C c);
+    public abstract void collectMetrics(ClusterPhy clusterPhy);
     public abstract VersionItemTypeEnum collectorType();
     @Autowired
     private CollectThreadPoolService collectThreadPoolService;
-    public abstract void collectMetrics(C c);
     protected FutureWaitUtil<Void> getFutureUtilByClusterPhyId(Long clusterPhyId) {
         return collectThreadPoolService.selectSuitableFutureUtil(clusterPhyId * 1000L + this.collectorType().getCode());
     }

View File

@@ -1,5 +1,6 @@
-package com.xiaojukeji.know.streaming.km.collector.metric.kafka;
+package com.xiaojukeji.know.streaming.km.collector.metric;
+import com.alibaba.fastjson.JSON;
 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.broker.Broker;
@@ -10,6 +11,7 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionContro
 import com.xiaojukeji.know.streaming.km.common.bean.event.metric.BrokerMetricEvent;
 import com.xiaojukeji.know.streaming.km.common.constant.Constant;
 import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
+import com.xiaojukeji.know.streaming.km.common.utils.EnvUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
 import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerMetricService;
 import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerService;
@@ -26,8 +28,8 @@ import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemT
  * @author didi
  */
 @Component
-public class BrokerMetricCollector extends AbstractKafkaMetricCollector<BrokerMetrics> {
+public class BrokerMetricCollector extends AbstractMetricCollector<BrokerMetrics> {
-    private static final ILog LOGGER = LogFactory.getLog(BrokerMetricCollector.class);
+    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
     @Autowired
     private VersionControlService versionControlService;
@@ -39,31 +41,32 @@ public class BrokerMetricCollector extends AbstractKafkaMetricCollector<BrokerMe
     private BrokerService brokerService;
     @Override
-    public List<BrokerMetrics> collectKafkaMetrics(ClusterPhy clusterPhy) {
+    public void collectMetrics(ClusterPhy clusterPhy) {
+        Long startTime = System.currentTimeMillis();
         Long clusterPhyId = clusterPhy.getId();
         List<Broker> brokers = brokerService.listAliveBrokersFromDB(clusterPhy.getId());
-        List<VersionControlItem> items = versionControlService.listVersionControlItem(this.getClusterVersion(clusterPhy), collectorType().getCode());
+        List<VersionControlItem> items = versionControlService.listVersionControlItem(clusterPhyId, collectorType().getCode());
         FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(clusterPhyId);
-        List<BrokerMetrics> metricsList = new ArrayList<>();
+        List<BrokerMetrics> brokerMetrics = new ArrayList<>();
         for(Broker broker : brokers) {
             BrokerMetrics metrics = new BrokerMetrics(clusterPhyId, broker.getBrokerId(), broker.getHost(), broker.getPort());
-            metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
-            metricsList.add(metrics);
+            brokerMetrics.add(metrics);
             future.runnableTask(
-                String.format("class=BrokerMetricCollector||clusterPhyId=%d||brokerId=%d", clusterPhyId, broker.getBrokerId()),
+                String.format("method=BrokerMetricCollector||clusterPhyId=%d||brokerId=%d", clusterPhyId, broker.getBrokerId()),
                 30000,
                 () -> collectMetrics(clusterPhyId, metrics, items)
             );
         }
         future.waitExecute(30000);
-        this.publishMetric(new BrokerMetricEvent(this, metricsList));
-        return metricsList;
+        this.publishMetric(new BrokerMetricEvent(this, brokerMetrics));
+        LOGGER.info("method=BrokerMetricCollector||clusterPhyId={}||startTime={}||costTime={}||msg=collect finished.",
+                clusterPhyId, startTime, System.currentTimeMillis() - startTime);
     }
     @Override
@@ -75,6 +78,7 @@ public class BrokerMetricCollector extends AbstractKafkaMetricCollector<BrokerMe
     private void collectMetrics(Long clusterPhyId, BrokerMetrics metrics, List<VersionControlItem> items) {
         long startTime = System.currentTimeMillis();
+        metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
         for(VersionControlItem v : items) {
             try {
@@ -88,11 +92,14 @@ public class BrokerMetricCollector extends AbstractKafkaMetricCollector<BrokerMe
                 }
                 metrics.putMetric(ret.getData().getMetrics());
+                if(!EnvUtil.isOnline()){
+                    LOGGER.info("method=BrokerMetricCollector||clusterId={}||brokerId={}||metric={}||metric={}!",
+                            clusterPhyId, metrics.getBrokerId(), v.getName(), JSON.toJSONString(ret.getData().getMetrics()));
+                }
             } catch (Exception e){
-                LOGGER.error(
-                        "method=collectMetrics||clusterPhyId={}||brokerId={}||metricName={}||errMsg=exception!",
-                        clusterPhyId, metrics.getBrokerId(), v.getName(), e
-                );
+                LOGGER.error("method=BrokerMetricCollector||clusterId={}||brokerId={}||metric={}||errMsg=exception!",
+                        clusterPhyId, metrics.getBrokerId(), v.getName(), e);
             }
         }
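The collector above fans out one task per broker through `FutureWaitUtil` and then waits with a bound so a slow broker cannot stall the whole round. The same pattern in plain `java.util.concurrent`, as a rough standalone sketch — broker ids and the fetch body are placeholders:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class ParallelCollectSketch {
    public static void main(String[] args) throws Exception {
        List<Integer> brokerIds = List.of(1, 2, 3); // placeholder broker ids
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<?>> futures = new ArrayList<>();
        for (Integer brokerId : brokerIds) {
            // one task per broker, like future.runnableTask(...) above
            futures.add(pool.submit(() -> collectOne(brokerId)));
        }
        for (Future<?> f : futures) {
            try {
                f.get(30, TimeUnit.SECONDS); // bounded wait, like waitExecute(30000)
            } catch (TimeoutException e) {
                // a slow broker must not stall the whole collection round
            }
        }
        pool.shutdown();
    }

    private static void collectOne(int brokerId) {
        // stand-in for the per-broker JMX metric fetch
        System.out.println("collected broker " + brokerId);
    }
}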

View File

@@ -1,4 +1,4 @@
-package com.xiaojukeji.know.streaming.km.collector.metric.kafka;
+package com.xiaojukeji.know.streaming.km.collector.metric;
 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
@@ -7,15 +7,18 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.ClusterMetric
 import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionControlItem;
 import com.xiaojukeji.know.streaming.km.common.bean.event.metric.ClusterMetricEvent;
+import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.ClusterMetricPO;
 import com.xiaojukeji.know.streaming.km.common.constant.Constant;
 import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
+import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
+import com.xiaojukeji.know.streaming.km.common.utils.EnvUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
 import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterMetricService;
 import com.xiaojukeji.know.streaming.km.core.service.version.VersionControlService;
 import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.stereotype.Component;
-import java.util.Collections;
+import java.util.Arrays;
 import java.util.List;
 import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum.METRIC_CLUSTER;
@@ -24,8 +27,8 @@ import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemT
  * @author didi
 */
 @Component
-public class ClusterMetricCollector extends AbstractKafkaMetricCollector<ClusterMetrics> {
+public class ClusterMetricCollector extends AbstractMetricCollector<ClusterMetricPO> {
-    protected static final ILog LOGGER = LogFactory.getLog(ClusterMetricCollector.class);
+    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
     @Autowired
     private VersionControlService versionControlService;
@@ -34,37 +37,35 @@ public class ClusterMetricCollector extends AbstractKafkaMetricCollector<Cluster
     private ClusterMetricService clusterMetricService;
     @Override
-    public List<ClusterMetrics> collectKafkaMetrics(ClusterPhy clusterPhy) {
+    public void collectMetrics(ClusterPhy clusterPhy) {
         Long startTime = System.currentTimeMillis();
         Long clusterPhyId = clusterPhy.getId();
-        List<VersionControlItem> items = versionControlService.listVersionControlItem(this.getClusterVersion(clusterPhy), collectorType().getCode());
+        List<VersionControlItem> items = versionControlService.listVersionControlItem(clusterPhyId, collectorType().getCode());
         ClusterMetrics metrics = new ClusterMetrics(clusterPhyId, clusterPhy.getKafkaVersion());
-        metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
         FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(clusterPhyId);
         for(VersionControlItem v : items) {
             future.runnableTask(
-                String.format("class=ClusterMetricCollector||clusterPhyId=%d||metricName=%s", clusterPhyId, v.getName()),
+                String.format("method=ClusterMetricCollector||clusterPhyId=%d||metricName=%s", clusterPhyId, v.getName()),
                 30000,
                 () -> {
                     try {
-                        if(null != metrics.getMetrics().get(v.getName())){
-                            return null;
-                        }
+                        if(null != metrics.getMetrics().get(v.getName())){return null;}
                         Result<ClusterMetrics> ret = clusterMetricService.collectClusterMetricsFromKafka(clusterPhyId, v.getName());
-                        if(null == ret || ret.failed() || null == ret.getData()){
-                            return null;
-                        }
+                        if(null == ret || ret.failed() || null == ret.getData()){return null;}
                        metrics.putMetric(ret.getData().getMetrics());
+                        if(!EnvUtil.isOnline()){
+                            LOGGER.info("method=ClusterMetricCollector||clusterPhyId={}||metricName={}||metricValue={}",
+                                    clusterPhyId, v.getName(), ConvertUtil.obj2Json(ret.getData().getMetrics()));
+                        }
                     } catch (Exception e){
-                        LOGGER.error(
-                                "method=collectKafkaMetrics||clusterPhyId={}||metricName={}||errMsg=exception!",
-                                clusterPhyId, v.getName(), e
-                        );
+                        LOGGER.error("method=ClusterMetricCollector||clusterPhyId={}||metricName={}||errMsg=exception!",
+                                clusterPhyId, v.getName(), e);
                    }
                    return null;
@@ -75,9 +76,10 @@ public class ClusterMetricCollector extends AbstractKafkaMetricCollector<Cluster
         metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (System.currentTimeMillis() - startTime) / 1000.0f);
-        publishMetric(new ClusterMetricEvent(this, Collections.singletonList(metrics)));
-        return Collections.singletonList(metrics);
+        publishMetric(new ClusterMetricEvent(this, Arrays.asList(metrics)));
+        LOGGER.info("method=ClusterMetricCollector||clusterPhyId={}||startTime={}||costTime={}||msg=msg=collect finished.",
+                clusterPhyId, startTime, System.currentTimeMillis() - startTime);
     }
     @Override

View File

@@ -1,5 +1,6 @@
package com.xiaojukeji.know.streaming.km.collector.metric.kafka; package com.xiaojukeji.know.streaming.km.collector.metric;
import com.alibaba.fastjson.JSON;
import com.didiglobal.logi.log.ILog; import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory; import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy; import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
@@ -9,16 +10,20 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionContro
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.GroupMetricEvent; import com.xiaojukeji.know.streaming.km.common.bean.event.metric.GroupMetricEvent;
import com.xiaojukeji.know.streaming.km.common.constant.Constant; import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum; import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
import com.xiaojukeji.know.streaming.km.common.utils.EnvUtil;
import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil; import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils; import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
import com.xiaojukeji.know.streaming.km.core.service.group.GroupMetricService; import com.xiaojukeji.know.streaming.km.core.service.group.GroupMetricService;
import com.xiaojukeji.know.streaming.km.core.service.group.GroupService; import com.xiaojukeji.know.streaming.km.core.service.group.GroupService;
import com.xiaojukeji.know.streaming.km.core.service.version.VersionControlService; import com.xiaojukeji.know.streaming.km.core.service.version.VersionControlService;
import org.apache.kafka.common.TopicPartition; import org.apache.commons.collections.CollectionUtils;
import org.springframework.beans.factory.annotation.Autowired; import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component; import org.springframework.stereotype.Component;
import java.util.*; import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ConcurrentHashMap;
import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum.METRIC_GROUP; import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum.METRIC_GROUP;
@@ -27,8 +32,8 @@ import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemT
  * @author didi
  */
 @Component
-public class GroupMetricCollector extends AbstractKafkaMetricCollector<GroupMetrics> {
-    protected static final ILog LOGGER = LogFactory.getLog(GroupMetricCollector.class);
+public class GroupMetricCollector extends AbstractMetricCollector<List<GroupMetrics>> {
+    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");

     @Autowired
     private VersionControlService versionControlService;
@@ -40,38 +45,40 @@ public class GroupMetricCollector extends AbstractKafkaMetricCollector<GroupMetr
     private GroupService groupService;

     @Override
-    public List<GroupMetrics> collectKafkaMetrics(ClusterPhy clusterPhy) {
+    public void collectMetrics(ClusterPhy clusterPhy) {
+        Long startTime = System.currentTimeMillis();
         Long clusterPhyId = clusterPhy.getId();
-        List<String> groupNameList = new ArrayList<>();
+        List<String> groups = new ArrayList<>();
         try {
-            groupNameList = groupService.listGroupsFromKafka(clusterPhy);
+            groups = groupService.listGroupsFromKafka(clusterPhyId);
         } catch (Exception e) {
-            LOGGER.error("method=collectKafkaMetrics||clusterPhyId={}||msg=exception!", clusterPhyId, e);
+            LOGGER.error("method=GroupMetricCollector||clusterPhyId={}||msg=exception!", clusterPhyId, e);
         }
-        if(ValidateUtils.isEmptyList(groupNameList)) {
-            return Collections.emptyList();
-        }
+        if(CollectionUtils.isEmpty(groups)){return;}

-        List<VersionControlItem> items = versionControlService.listVersionControlItem(this.getClusterVersion(clusterPhy), collectorType().getCode());
-        FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(clusterPhyId);
+        List<VersionControlItem> items = versionControlService.listVersionControlItem(clusterPhyId, collectorType().getCode());
+        FutureWaitUtil<Void> future = getFutureUtilByClusterPhyId(clusterPhyId);

         Map<String, List<GroupMetrics>> metricsMap = new ConcurrentHashMap<>();
-        for(String groupName : groupNameList) {
+        for(String groupName : groups) {
             future.runnableTask(
-                    String.format("class=GroupMetricCollector||clusterPhyId=%d||groupName=%s", clusterPhyId, groupName),
+                    String.format("method=GroupMetricCollector||clusterPhyId=%d||groupName=%s", clusterPhyId, groupName),
                     30000,
                     () -> collectMetrics(clusterPhyId, groupName, metricsMap, items));
         }
         future.waitResult(30000);

-        List<GroupMetrics> metricsList = metricsMap.values().stream().collect(ArrayList::new, ArrayList::addAll, ArrayList::addAll);
+        List<GroupMetrics> metricsList = new ArrayList<>();
+        metricsMap.values().forEach(elem -> metricsList.addAll(elem));

         publishMetric(new GroupMetricEvent(this, metricsList));
-        return metricsList;
+
+        LOGGER.info("method=GroupMetricCollector||clusterPhyId={}||startTime={}||cost={}||msg=collect finished.",
+                clusterPhyId, startTime, System.currentTimeMillis() - startTime);
     }

     @Override
@@ -84,7 +91,9 @@ public class GroupMetricCollector extends AbstractKafkaMetricCollector<GroupMetr
     private void collectMetrics(Long clusterPhyId, String groupName, Map<String, List<GroupMetrics>> metricsMap, List<VersionControlItem> items) {
         long startTime = System.currentTimeMillis();
-        Map<TopicPartition, GroupMetrics> subMetricMap = new HashMap<>();
+        List<GroupMetrics> groupMetricsList = new ArrayList<>();
+        Map<String, GroupMetrics> tpGroupPOMap = new HashMap<>();

         GroupMetrics groupMetrics = new GroupMetrics(clusterPhyId, groupName, true);
         groupMetrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
@@ -98,31 +107,38 @@ public class GroupMetricCollector extends AbstractKafkaMetricCollector<GroupMetr
                     continue;
                 }

-            ret.getData().forEach(metrics -> {
+            ret.getData().stream().forEach(metrics -> {
                 if (metrics.isBGroupMetric()) {
                     groupMetrics.putMetric(metrics.getMetrics());
-                    return;
-                }
-
-                TopicPartition tp = new TopicPartition(metrics.getTopic(), metrics.getPartitionId());
-                subMetricMap.putIfAbsent(tp, new GroupMetrics(clusterPhyId, metrics.getPartitionId(), metrics.getTopic(), groupName, false));
-                subMetricMap.get(tp).putMetric(metrics.getMetrics());
+                } else {
+                    String topicName = metrics.getTopic();
+                    Integer partitionId = metrics.getPartitionId();
+                    String tpGroupKey = genTopicPartitionGroupKey(topicName, partitionId);
+                    tpGroupPOMap.putIfAbsent(tpGroupKey, new GroupMetrics(clusterPhyId, partitionId, topicName, groupName, false));
+                    tpGroupPOMap.get(tpGroupKey).putMetric(metrics.getMetrics());
+                }
             });
-        } catch (Exception e) {
-            LOGGER.error(
-                    "method=collectMetrics||clusterPhyId={}||groupName={}||errMsg=exception!",
-                    clusterPhyId, groupName, e
-            );
+
+            if(!EnvUtil.isOnline()){
+                LOGGER.info("method=GroupMetricCollector||clusterPhyId={}||groupName={}||metricName={}||metricValue={}",
+                        clusterPhyId, groupName, metricName, JSON.toJSONString(ret.getData()));
+            }
+        }catch (Exception e){
+            LOGGER.error("method=GroupMetricCollector||clusterPhyId={}||groupName={}||errMsg=exception!", clusterPhyId, groupName, e);
         }
     }

-        List<GroupMetrics> metricsList = new ArrayList<>();
-        metricsList.add(groupMetrics);
-        metricsList.addAll(subMetricMap.values());
+        groupMetricsList.add(groupMetrics);
+        groupMetricsList.addAll(tpGroupPOMap.values());

         // record collection cost
         groupMetrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (System.currentTimeMillis() - startTime) / 1000.0f);
-        metricsMap.put(groupName, metricsList);
+        metricsMap.put(groupName, groupMetricsList);
     }
+
+    private String genTopicPartitionGroupKey(String topic, Integer partitionId){
+        return topic + "@" + partitionId;
+    }
 }
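The reworked collector above replaces the `TopicPartition`-keyed map with a plain `topic + "@" + partitionId` string key built by `genTopicPartitionGroupKey`. A minimal, self-contained sketch of that bucketing, with the metric payload simplified to `Map<String, Float>` and all names illustrative rather than the project's:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the "topic@partition" composite-key bucketing used above.
public class TopicPartitionKeyDemo {
    // same scheme as genTopicPartitionGroupKey in the diff
    private static String genKey(String topic, int partitionId) {
        return topic + "@" + partitionId;
    }

    public static void main(String[] args) {
        Map<String, Map<String, Float>> byPartition = new HashMap<>();
        // two samples for the same partition land in the same bucket
        byPartition.computeIfAbsent(genKey("order-events", 0), k -> new HashMap<>()).put("Lag", 42f);
        byPartition.computeIfAbsent(genKey("order-events", 0), k -> new HashMap<>()).put("OffsetConsumed", 100f);
        System.out.println(byPartition); // {order-events@0={OffsetConsumed=100.0, Lag=42.0}}
    }
}
```

Since Kafka topic names may only contain letters, digits, `.`, `_` and `-`, the `@` separator cannot occur inside a topic name, so the composite key stays unambiguous.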

View File

@@ -1,4 +1,4 @@
-package com.xiaojukeji.know.streaming.km.collector.metric.kafka;
+package com.xiaojukeji.know.streaming.km.collector.metric;

 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
@@ -9,6 +9,8 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.topic.Topic;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionControlItem;
 import com.xiaojukeji.know.streaming.km.common.bean.event.metric.PartitionMetricEvent;
 import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
+import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
+import com.xiaojukeji.know.streaming.km.common.utils.EnvUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
 import com.xiaojukeji.know.streaming.km.core.service.partition.PartitionMetricService;
 import com.xiaojukeji.know.streaming.km.core.service.topic.TopicService;
@@ -25,8 +27,8 @@ import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemT
  * @author didi
  */
 @Component
-public class PartitionMetricCollector extends AbstractKafkaMetricCollector<PartitionMetrics> {
-    protected static final ILog LOGGER = LogFactory.getLog(PartitionMetricCollector.class);
+public class PartitionMetricCollector extends AbstractMetricCollector<PartitionMetrics> {
+    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");

     @Autowired
     private VersionControlService versionControlService;
@@ -38,10 +40,13 @@ public class PartitionMetricCollector extends AbstractKafkaMetricCollector<Parti
     private TopicService topicService;

     @Override
-    public List<PartitionMetrics> collectKafkaMetrics(ClusterPhy clusterPhy) {
+    public void collectMetrics(ClusterPhy clusterPhy) {
+        Long startTime = System.currentTimeMillis();
         Long clusterPhyId = clusterPhy.getId();

         List<Topic> topicList = topicService.listTopicsFromCacheFirst(clusterPhyId);
-        List<VersionControlItem> items = versionControlService.listVersionControlItem(this.getClusterVersion(clusterPhy), collectorType().getCode());
+        List<VersionControlItem> items = versionControlService.listVersionControlItem(clusterPhyId, collectorType().getCode());

+        // fetch all partitions of the cluster
         FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(clusterPhyId);
@@ -50,9 +55,9 @@ public class PartitionMetricCollector extends AbstractKafkaMetricCollector<Parti
             metricsMap.put(topic.getTopicName(), new ConcurrentHashMap<>());
             future.runnableTask(
-                    String.format("class=PartitionMetricCollector||clusterPhyId=%d||topicName=%s", clusterPhyId, topic.getTopicName()),
+                    String.format("method=PartitionMetricCollector||clusterPhyId=%d||topicName=%s", clusterPhyId, topic.getTopicName()),
                     30000,
-                    () -> this.collectMetrics(clusterPhyId, topic.getTopicName(), metricsMap.get(topic.getTopicName()), items)
+                    () -> collectMetrics(clusterPhyId, topic.getTopicName(), metricsMap.get(topic.getTopicName()), items)
             );
         }
@@ -63,7 +68,10 @@ public class PartitionMetricCollector extends AbstractKafkaMetricCollector<Parti
         this.publishMetric(new PartitionMetricEvent(this, metricsList));
-        return metricsList;
+
+        LOGGER.info(
+                "method=PartitionMetricCollector||clusterPhyId={}||startTime={}||costTime={}||msg=collect finished.",
+                clusterPhyId, startTime, System.currentTimeMillis() - startTime
+        );
     }

     @Override
@@ -101,9 +109,17 @@ public class PartitionMetricCollector extends AbstractKafkaMetricCollector<Parti
                 PartitionMetrics allMetrics = metricsMap.get(subMetrics.getPartitionId());
                 allMetrics.putMetric(subMetrics.getMetrics());
             }
+
+            if (!EnvUtil.isOnline()) {
+                LOGGER.info(
+                        "class=PartitionMetricCollector||method=collectMetrics||clusterPhyId={}||topicName={}||metricName={}||metricValue={}!",
+                        clusterPhyId, topicName, v.getName(), ConvertUtil.obj2Json(ret.getData())
+                );
+            }
         } catch (Exception e) {
             LOGGER.info(
-                    "method=collectMetrics||clusterPhyId={}||topicName={}||metricName={}||errMsg=exception",
+                    "class=PartitionMetricCollector||method=collectMetrics||clusterPhyId={}||topicName={}||metricName={}||errMsg=exception",
                     clusterPhyId, topicName, v.getName(), e
             );
         }
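Worth noting in the hunk above: each topic's inner `ConcurrentHashMap` is registered on the submitting thread, before the parallel task is queued, so workers only ever write into buckets that already exist. A rough equivalent of that pre-registration shape on plain `java.util.concurrent` (all names hypothetical):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch: pre-register one bucket per topic, then let workers fill them in parallel.
public class PreRegisterDemo {
    public static void main(String[] args) throws InterruptedException {
        List<String> topics = Arrays.asList("logs", "orders", "payments");
        Map<String, Map<Integer, Float>> metricsByTopic = new ConcurrentHashMap<>();

        ExecutorService pool = Executors.newFixedThreadPool(3);
        for (String topic : topics) {
            metricsByTopic.put(topic, new ConcurrentHashMap<>());        // register first, single-threaded
            pool.execute(() -> metricsByTopic.get(topic).put(0, 1.0f));  // workers fill in parallel
        }

        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);  // analogous to future.waitResult(30000)
        System.out.println(metricsByTopic);
    }
}
```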

View File

@@ -1,5 +1,6 @@
-package com.xiaojukeji.know.streaming.km.collector.metric.kafka;
+package com.xiaojukeji.know.streaming.km.collector.metric;

+import com.alibaba.fastjson.JSON;
 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
@@ -10,6 +11,7 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionContro
 import com.xiaojukeji.know.streaming.km.common.bean.event.metric.ReplicaMetricEvent;
 import com.xiaojukeji.know.streaming.km.common.constant.Constant;
 import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
+import com.xiaojukeji.know.streaming.km.common.utils.EnvUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
 import com.xiaojukeji.know.streaming.km.core.service.partition.PartitionService;
 import com.xiaojukeji.know.streaming.km.core.service.replica.ReplicaMetricService;
@@ -26,8 +28,8 @@ import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemT
  * @author didi
  */
 @Component
-public class ReplicaMetricCollector extends AbstractKafkaMetricCollector<ReplicationMetrics> {
-    protected static final ILog LOGGER = LogFactory.getLog(ReplicaMetricCollector.class);
+public class ReplicaMetricCollector extends AbstractMetricCollector<ReplicationMetrics> {
+    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");

     @Autowired
     private VersionControlService versionControlService;
@@ -39,10 +41,12 @@ public class ReplicaMetricCollector extends AbstractKafkaMetricCollector<Replica
     private PartitionService partitionService;

     @Override
-    public List<ReplicationMetrics> collectKafkaMetrics(ClusterPhy clusterPhy) {
+    public void collectMetrics(ClusterPhy clusterPhy) {
+        Long startTime = System.currentTimeMillis();
         Long clusterPhyId = clusterPhy.getId();

-        List<Partition> partitions = partitionService.listPartitionFromCacheFirst(clusterPhyId);
-        List<VersionControlItem> items = versionControlService.listVersionControlItem(this.getClusterVersion(clusterPhy), collectorType().getCode());
+        List<VersionControlItem> items = versionControlService.listVersionControlItem(clusterPhyId, collectorType().getCode());
+        List<Partition> partitions = partitionService.listPartitionByCluster(clusterPhyId);

         FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(clusterPhyId);
@@ -50,11 +54,10 @@ public class ReplicaMetricCollector extends AbstractKafkaMetricCollector<Replica
         for(Partition partition : partitions) {
             for (Integer brokerId: partition.getAssignReplicaList()) {
                 ReplicationMetrics metrics = new ReplicationMetrics(clusterPhyId, partition.getTopicName(), brokerId, partition.getPartitionId());
-                metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
                 metricsList.add(metrics);
                 future.runnableTask(
-                        String.format("class=ReplicaMetricCollector||clusterPhyId=%d||brokerId=%d||topicName=%s||partitionId=%d",
+                        String.format("method=ReplicaMetricCollector||clusterPhyId=%d||brokerId=%d||topicName=%s||partitionId=%d",
                                 clusterPhyId, brokerId, partition.getTopicName(), partition.getPartitionId()),
                         30000,
                         () -> collectMetrics(clusterPhyId, metrics, items)
@@ -66,7 +69,8 @@ public class ReplicaMetricCollector extends AbstractKafkaMetricCollector<Replica
         publishMetric(new ReplicaMetricEvent(this, metricsList));
-        return metricsList;
+        LOGGER.info("method=ReplicaMetricCollector||clusterPhyId={}||startTime={}||costTime={}||msg=collect finished.",
+                clusterPhyId, startTime, System.currentTimeMillis() - startTime);
     }

     @Override
@@ -79,6 +83,8 @@ public class ReplicaMetricCollector extends AbstractKafkaMetricCollector<Replica
     private ReplicationMetrics collectMetrics(Long clusterPhyId, ReplicationMetrics metrics, List<VersionControlItem> items) {
         long startTime = System.currentTimeMillis();
+        metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
+
         for(VersionControlItem v : items) {
             try {
                 if (metrics.getMetrics().containsKey(v.getName())) {
@@ -98,11 +104,15 @@ public class ReplicaMetricCollector extends AbstractKafkaMetricCollector<Replica
                 }

                 metrics.putMetric(ret.getData().getMetrics());
+
+                if (!EnvUtil.isOnline()) {
+                    LOGGER.info("method=ReplicaMetricCollector||clusterPhyId={}||topicName={}||partitionId={}||metricName={}||metricValue={}",
+                            clusterPhyId, metrics.getTopic(), metrics.getPartitionId(), v.getName(), JSON.toJSONString(ret.getData().getMetrics()));
+                }
             } catch (Exception e) {
-                LOGGER.error(
-                        "method=collectMetrics||clusterPhyId={}||topicName={}||partition={}||metricName={}||errMsg=exception!",
-                        clusterPhyId, metrics.getTopic(), metrics.getPartitionId(), v.getName(), e
-                );
+                LOGGER.error("method=ReplicaMetricCollector||clusterPhyId={}||topicName={}||partition={}||metricName={}||errMsg=exception!",
+                        clusterPhyId, metrics.getTopic(), metrics.getPartitionId(), v.getName(), e);
             }
         }
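The diff also moves the cost-time seeding out of the submit loop into `collectMetrics` itself: the metric starts out as an error sentinel and is overwritten with the real elapsed seconds only when collection completes, so a task that dies midway still reports a failed collection. A small sketch of the pattern; `COST_KEY` and `ERROR_COST` are hypothetical stand-ins for the project's `Constant` fields:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the cost-time sentinel pattern from the diff above.
public class CostSentinelDemo {
    static final String COST_KEY = "CollectCostTimeUnitSec";  // stand-in for COLLECT_METRICS_COST_TIME_METRICS_NAME
    static final float ERROR_COST = -1.0f;                    // stand-in for COLLECT_METRICS_ERROR_COST_TIME

    public static void main(String[] args) {
        Map<String, Float> metrics = new HashMap<>();
        long startTime = System.currentTimeMillis();
        metrics.put(COST_KEY, ERROR_COST);  // assume failure until proven otherwise
        try {
            // ... collect the individual metrics here ...
            metrics.put(COST_KEY, (System.currentTimeMillis() - startTime) / 1000.0f);  // success: real cost
        } catch (Exception e) {
            // sentinel stays in place, so downstream sees the failed collection
        }
        System.out.println(metrics);
    }
}
```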

View File

@@ -1,4 +1,4 @@
-package com.xiaojukeji.know.streaming.km.collector.metric.kafka;
+package com.xiaojukeji.know.streaming.km.collector.metric;

 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
@@ -10,6 +10,8 @@
 import com.xiaojukeji.know.streaming.km.common.bean.event.metric.TopicMetricEvent;
 import com.xiaojukeji.know.streaming.km.common.constant.Constant;
 import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
+import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
+import com.xiaojukeji.know.streaming.km.common.utils.EnvUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
 import com.xiaojukeji.know.streaming.km.core.service.topic.TopicMetricService;
@@ -29,8 +31,8 @@ import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemT
  * @author didi
  */
 @Component
-public class TopicMetricCollector extends AbstractKafkaMetricCollector<TopicMetrics> {
-    protected static final ILog LOGGER = LogFactory.getLog(TopicMetricCollector.class);
+public class TopicMetricCollector extends AbstractMetricCollector<List<TopicMetrics>> {
+    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");

     @Autowired
     private VersionControlService versionControlService;
@@ -44,10 +46,11 @@ public class TopicMetricCollector extends AbstractKafkaMetricCollector<TopicMetr
     private static final Integer AGG_METRICS_BROKER_ID = -10000;

     @Override
-    public List<TopicMetrics> collectKafkaMetrics(ClusterPhy clusterPhy) {
+    public void collectMetrics(ClusterPhy clusterPhy) {
+        Long startTime = System.currentTimeMillis();
         Long clusterPhyId = clusterPhy.getId();

         List<Topic> topics = topicService.listTopicsFromCacheFirst(clusterPhyId);
-        List<VersionControlItem> items = versionControlService.listVersionControlItem(this.getClusterVersion(clusterPhy), collectorType().getCode());
+        List<VersionControlItem> items = versionControlService.listVersionControlItem(clusterPhyId, collectorType().getCode());

         FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(clusterPhyId);
@@ -61,7 +64,7 @@ public class TopicMetricCollector extends AbstractKafkaMetricCollector<TopicMetr
             allMetricsMap.put(topic.getTopicName(), metricsMap);
             future.runnableTask(
-                    String.format("class=TopicMetricCollector||clusterPhyId=%d||topicName=%s", clusterPhyId, topic.getTopicName()),
+                    String.format("method=TopicMetricCollector||clusterPhyId=%d||topicName=%s", clusterPhyId, topic.getTopicName()),
                     30000,
                     () -> collectMetrics(clusterPhyId, topic.getTopicName(), metricsMap, items)
             );
@@ -74,7 +77,8 @@ public class TopicMetricCollector extends AbstractKafkaMetricCollector<TopicMetr
         this.publishMetric(new TopicMetricEvent(this, metricsList));
-        return metricsList;
+        LOGGER.info("method=TopicMetricCollector||clusterPhyId={}||startTime={}||costTime={}||msg=collect finished.",
+                clusterPhyId, startTime, System.currentTimeMillis() - startTime);
     }

     @Override
@@ -114,9 +118,14 @@ public class TopicMetricCollector extends AbstractKafkaMetricCollector<TopicMetr
                     metricsMap.get(metrics.getBrokerId()).putMetric(metrics.getMetrics());
                 }
             });
+
+            if (!EnvUtil.isOnline()) {
+                LOGGER.info("method=TopicMetricCollector||clusterPhyId={}||topicName={}||metricName={}||metricValue={}.",
+                        clusterPhyId, topicName, v.getName(), ConvertUtil.obj2Json(ret.getData())
+                );
+            }
         } catch (Exception e) {
-            LOGGER.error(
-                    "method=collectMetrics||clusterPhyId={}||topicName={}||metricName={}||errMsg=exception!",
+            LOGGER.error("method=TopicMetricCollector||clusterPhyId={}||topicName={}||metricName={}||errMsg=exception!",
                     clusterPhyId, topicName, v.getName(), e
             );
         }

View File

@@ -1,4 +1,4 @@
-package com.xiaojukeji.know.streaming.km.collector.metric.kafka;
+package com.xiaojukeji.know.streaming.km.collector.metric;

 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
@@ -14,8 +14,10 @@ import com.xiaojukeji.know.streaming.km.common.bean.event.metric.ZookeeperMetric
 import com.xiaojukeji.know.streaming.km.common.constant.Constant;
 import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
 import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
+import com.xiaojukeji.know.streaming.km.common.utils.EnvUtil;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.ZookeeperInfo;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.ZookeeperMetrics;
+import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.ZookeeperMetricPO;
 import com.xiaojukeji.know.streaming.km.core.service.kafkacontroller.KafkaControllerService;
 import com.xiaojukeji.know.streaming.km.core.service.version.VersionControlService;
 import com.xiaojukeji.know.streaming.km.core.service.zookeeper.ZookeeperMetricService;
@@ -23,7 +25,7 @@ import com.xiaojukeji.know.streaming.km.core.service.zookeeper.ZookeeperService;
 import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.stereotype.Component;

-import java.util.Collections;
+import java.util.Arrays;
 import java.util.List;
 import java.util.stream.Collectors;
@@ -33,8 +35,8 @@ import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemT
  * @author didi
  */
 @Component
-public class ZookeeperMetricCollector extends AbstractKafkaMetricCollector<ZookeeperMetrics> {
-    protected static final ILog LOGGER = LogFactory.getLog(ZookeeperMetricCollector.class);
+public class ZookeeperMetricCollector extends AbstractMetricCollector<ZookeeperMetricPO> {
+    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");

     @Autowired
     private VersionControlService versionControlService;
@@ -49,21 +51,21 @@ public class ZookeeperMetricCollector extends AbstractKafkaMetricCollector<Zooke
     private KafkaControllerService kafkaControllerService;

     @Override
-    public List<ZookeeperMetrics> collectKafkaMetrics(ClusterPhy clusterPhy) {
+    public void collectMetrics(ClusterPhy clusterPhy) {
         Long startTime = System.currentTimeMillis();
         Long clusterPhyId = clusterPhy.getId();

-        List<VersionControlItem> items = versionControlService.listVersionControlItem(this.getClusterVersion(clusterPhy), collectorType().getCode());
+        List<VersionControlItem> items = versionControlService.listVersionControlItem(clusterPhyId, collectorType().getCode());

         List<ZookeeperInfo> aliveZKList = zookeeperService.listFromDBByCluster(clusterPhyId)
                 .stream()
                 .filter(elem -> Constant.ALIVE.equals(elem.getStatus()))
                 .collect(Collectors.toList());

         KafkaController kafkaController = kafkaControllerService.getKafkaControllerFromDB(clusterPhyId);

-        ZookeeperMetrics metrics = ZookeeperMetrics.initWithMetric(clusterPhyId, Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
+        ZookeeperMetrics metrics = ZookeeperMetrics.initWithMetric(clusterPhyId, Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (float)Constant.INVALID_CODE);
         if (ValidateUtils.isEmptyList(aliveZKList)) {
             // no alive ZK: publish the event, then return directly
-            publishMetric(new ZookeeperMetricEvent(this, Collections.singletonList(metrics)));
-            return Collections.singletonList(metrics);
+            publishMetric(new ZookeeperMetricEvent(this, Arrays.asList(metrics)));
+            return;
         }

         // build the parameters
@@ -80,7 +82,6 @@ public class ZookeeperMetricCollector extends AbstractKafkaMetricCollector<Zooke
             if(null != metrics.getMetrics().get(v.getName())) {
                 continue;
             }
-
             param.setMetricName(v.getName());
             Result<ZookeeperMetrics> ret = zookeeperMetricService.collectMetricsFromZookeeper(param);
@@ -89,9 +90,16 @@ public class ZookeeperMetricCollector extends AbstractKafkaMetricCollector<Zooke
             }

             metrics.putMetric(ret.getData().getMetrics());
+
+            if(!EnvUtil.isOnline()){
+                LOGGER.info(
+                        "class=ZookeeperMetricCollector||method=collectMetrics||clusterPhyId={}||metricName={}||metricValue={}",
+                        clusterPhyId, v.getName(), ConvertUtil.obj2Json(ret.getData().getMetrics())
+                );
+            }
         } catch (Exception e){
             LOGGER.error(
-                    "method=collectMetrics||clusterPhyId={}||metricName={}||errMsg=exception!",
+                    "class=ZookeeperMetricCollector||method=collectMetrics||clusterPhyId={}||metricName={}||errMsg=exception!",
                     clusterPhyId, v.getName(), e
             );
         }
@@ -99,9 +107,12 @@ public class ZookeeperMetricCollector extends AbstractKafkaMetricCollector<Zooke
         metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (System.currentTimeMillis() - startTime) / 1000.0f);

-        this.publishMetric(new ZookeeperMetricEvent(this, Collections.singletonList(metrics)));
-        return Collections.singletonList(metrics);
+        publishMetric(new ZookeeperMetricEvent(this, Arrays.asList(metrics)));
+        LOGGER.info(
+                "class=ZookeeperMetricCollector||method=collectMetrics||clusterPhyId={}||startTime={}||costTime={}||msg=msg=collect finished.",
+                clusterPhyId, startTime, System.currentTimeMillis() - startTime
+        );
     }

     @Override
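Every collector in this changeset gains the same `EnvUtil.isOnline()` guard: full per-metric payloads are logged only outside production. `EnvUtil` is project-internal, so the sketch below fakes the switch with a system property; the assumption that `isOnline()` means "running in production" is mine, not confirmed by the diff:

```java
// Sketch of the environment-gated verbose logging added above.
public class EnvGatedLoggingDemo {
    // hypothetical stand-in for EnvUtil.isOnline()
    static boolean isOnline() {
        return Boolean.parseBoolean(System.getProperty("app.online", "false"));
    }

    public static void main(String[] args) {
        String metricName = "AvgRequestLatency";
        double metricValue = 1.7;
        if (!isOnline()) {
            // dump full payloads only off-line, keeping production logs lean
            System.out.printf("metricName=%s||metricValue=%s%n", metricName, metricValue);
        }
    }
}
```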

View File

@@ -1,50 +0,0 @@
package com.xiaojukeji.know.streaming.km.collector.metric.connect;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.collector.metric.AbstractMetricCollector;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.ConnectCluster;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.LoggerUtil;
import com.xiaojukeji.know.streaming.km.core.service.connect.cluster.ConnectClusterService;
import org.springframework.beans.factory.annotation.Autowired;
import java.util.List;
/**
* @author didi
*/
public abstract class AbstractConnectMetricCollector<M> extends AbstractMetricCollector<M, ConnectCluster> {
private static final ILog LOGGER = LogFactory.getLog(AbstractConnectMetricCollector.class);
protected static final ILog METRIC_COLLECTED_LOGGER = LoggerUtil.getMetricCollectedLogger();
@Autowired
private ConnectClusterService connectClusterService;
public abstract List<M> collectConnectMetrics(ConnectCluster connectCluster);
@Override
public String getClusterVersion(ConnectCluster connectCluster){
return connectClusterService.getClusterVersion(connectCluster.getId());
}
@Override
public void collectMetrics(ConnectCluster connectCluster) {
long startTime = System.currentTimeMillis();
// collect metrics
List<M> metricsList = this.collectConnectMetrics(connectCluster);
// log the time cost
LOGGER.info(
"metricType={}||connectClusterId={}||costTimeUnitMs={}",
this.collectorType().getMessage(), connectCluster.getId(), System.currentTimeMillis() - startTime
);
// log the collected metrics
METRIC_COLLECTED_LOGGER.debug("metricType={}||connectClusterId={}||metrics={}!",
this.collectorType().getMessage(), connectCluster.getId(), ConvertUtil.obj2Json(metricsList)
);
}
}

View File

@@ -1,83 +0,0 @@
package com.xiaojukeji.know.streaming.km.collector.metric.connect;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.ConnectCluster;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.connect.ConnectClusterMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionControlItem;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.connect.ConnectClusterMetricEvent;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
import com.xiaojukeji.know.streaming.km.core.service.connect.cluster.ConnectClusterMetricService;
import com.xiaojukeji.know.streaming.km.core.service.version.VersionControlService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import java.util.Collections;
import java.util.List;
import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum.METRIC_CONNECT_CLUSTER;
/**
* @author didi
*/
@Component
public class ConnectClusterMetricCollector extends AbstractConnectMetricCollector<ConnectClusterMetrics> {
protected static final ILog LOGGER = LogFactory.getLog(ConnectClusterMetricCollector.class);
@Autowired
private VersionControlService versionControlService;
@Autowired
private ConnectClusterMetricService connectClusterMetricService;
@Override
public List<ConnectClusterMetrics> collectConnectMetrics(ConnectCluster connectCluster) {
Long startTime = System.currentTimeMillis();
Long clusterPhyId = connectCluster.getKafkaClusterPhyId();
Long connectClusterId = connectCluster.getId();
ConnectClusterMetrics metrics = new ConnectClusterMetrics(clusterPhyId, connectClusterId);
metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
List<VersionControlItem> items = versionControlService.listVersionControlItem(getClusterVersion(connectCluster), collectorType().getCode());
FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(connectClusterId);
for (VersionControlItem item : items) {
future.runnableTask(
String.format("class=ConnectClusterMetricCollector||connectClusterId=%d||metricName=%s", connectClusterId, item.getName()),
30000,
() -> {
try {
Result<ConnectClusterMetrics> ret = connectClusterMetricService.collectConnectClusterMetricsFromKafka(connectClusterId, item.getName());
if (null == ret || !ret.hasData()) {
return null;
}
metrics.putMetric(ret.getData().getMetrics());
} catch (Exception e) {
LOGGER.error(
"method=collectConnectMetrics||connectClusterId={}||metricName={}||errMsg=exception!",
connectClusterId, item.getName(), e
);
}
return null;
}
);
}
future.waitExecute(30000);
metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (System.currentTimeMillis() - startTime) / 1000.0f);
this.publishMetric(new ConnectClusterMetricEvent(this, Collections.singletonList(metrics)));
return Collections.singletonList(metrics);
}
@Override
public VersionItemTypeEnum collectorType() {
return METRIC_CONNECT_CLUSTER;
}
}

View File

@@ -1,102 +0,0 @@
package com.xiaojukeji.know.streaming.km.collector.metric.connect;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.ConnectCluster;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.connect.ConnectorMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionControlItem;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.connect.ConnectorMetricEvent;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.enums.connect.ConnectorTypeEnum;
import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
import com.xiaojukeji.know.streaming.km.core.service.connect.connector.ConnectorMetricService;
import com.xiaojukeji.know.streaming.km.core.service.connect.connector.ConnectorService;
import com.xiaojukeji.know.streaming.km.core.service.version.VersionControlService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import java.util.ArrayList;
import java.util.List;
import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum.METRIC_CONNECT_CONNECTOR;
/**
* @author didi
*/
@Component
public class ConnectConnectorMetricCollector extends AbstractConnectMetricCollector<ConnectorMetrics> {
protected static final ILog LOGGER = LogFactory.getLog(ConnectConnectorMetricCollector.class);
@Autowired
private VersionControlService versionControlService;
@Autowired
private ConnectorService connectorService;
@Autowired
private ConnectorMetricService connectorMetricService;
@Override
public List<ConnectorMetrics> collectConnectMetrics(ConnectCluster connectCluster) {
Long clusterPhyId = connectCluster.getKafkaClusterPhyId();
Long connectClusterId = connectCluster.getId();
List<VersionControlItem> items = versionControlService.listVersionControlItem(this.getClusterVersion(connectCluster), collectorType().getCode());
Result<List<String>> connectorList = connectorService.listConnectorsFromCluster(connectClusterId);
FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(connectClusterId);
List<ConnectorMetrics> metricsList = new ArrayList<>();
for (String connectorName : connectorList.getData()) {
ConnectorMetrics metrics = new ConnectorMetrics(connectClusterId, connectorName);
metrics.setClusterPhyId(clusterPhyId);
metricsList.add(metrics);
future.runnableTask(
String.format("class=ConnectConnectorMetricCollector||connectClusterId=%d||connectorName=%s", connectClusterId, connectorName),
30000,
() -> collectMetrics(connectClusterId, connectorName, metrics, items)
);
}
future.waitResult(30000);
this.publishMetric(new ConnectorMetricEvent(this, metricsList));
return metricsList;
}
@Override
public VersionItemTypeEnum collectorType() {
return METRIC_CONNECT_CONNECTOR;
}
/**************************************************** private method ****************************************************/
private void collectMetrics(Long connectClusterId, String connectorName, ConnectorMetrics metrics, List<VersionControlItem> items) {
long startTime = System.currentTimeMillis();
ConnectorTypeEnum connectorType = connectorService.getConnectorType(connectClusterId, connectorName);
metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
for (VersionControlItem v : items) {
try {
Result<ConnectorMetrics> ret = connectorMetricService.collectConnectClusterMetricsFromKafka(connectClusterId, connectorName, v.getName(), connectorType);
if (null == ret || ret.failed() || null == ret.getData()) {
continue;
}
metrics.putMetric(ret.getData().getMetrics());
} catch (Exception e) {
LOGGER.error(
"method=collectMetrics||connectClusterId={}||connectorName={}||metric={}||errMsg=exception!",
connectClusterId, connectorName, v.getName(), e
);
}
}
// record collection cost
metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (System.currentTimeMillis() - startTime) / 1000.0f);
}
}
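The Connect collectors share the fan-out shape used throughout this changeset: one task per connector submitted through `FutureWaitUtil` with a 30-second budget, then one bounded wait. `FutureWaitUtil` is project-internal; a rough equivalent on plain `java.util.concurrent`, with every name hypothetical:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch: fan out one collection task per connector, wait at most 30s for the batch.
public class FanOutDemo {
    public static void main(String[] args) throws InterruptedException {
        List<String> connectors = Arrays.asList("jdbc-source", "es-sink");
        Map<String, Float> metrics = new ConcurrentHashMap<>();

        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Callable<Void>> tasks = new ArrayList<>();
        for (String name : connectors) {
            tasks.add(() -> { metrics.put(name, collect(name)); return null; });
        }
        // plays the role of runnableTask(..., 30000) + waitResult(30000)
        pool.invokeAll(tasks, 30, TimeUnit.SECONDS);
        pool.shutdown();
        System.out.println(metrics);
    }

    private static float collect(String connectorName) {
        return connectorName.length();  // placeholder for a real JMX/REST fetch
    }
}
```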

View File

@@ -1,50 +0,0 @@
package com.xiaojukeji.know.streaming.km.collector.metric.kafka;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.collector.metric.AbstractMetricCollector;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.LoggerUtil;
import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
import org.springframework.beans.factory.annotation.Autowired;
import java.util.List;
/**
* @author didi
*/
public abstract class AbstractKafkaMetricCollector<M> extends AbstractMetricCollector<M, ClusterPhy> {
private static final ILog LOGGER = LogFactory.getLog(AbstractMetricCollector.class);
protected static final ILog METRIC_COLLECTED_LOGGER = LoggerUtil.getMetricCollectedLogger();
@Autowired
private ClusterPhyService clusterPhyService;
public abstract List<M> collectKafkaMetrics(ClusterPhy clusterPhy);
@Override
public String getClusterVersion(ClusterPhy clusterPhy){
return clusterPhyService.getVersionFromCacheFirst(clusterPhy.getId());
}
@Override
public void collectMetrics(ClusterPhy clusterPhy) {
long startTime = System.currentTimeMillis();
// collect metrics
List<M> metricsList = this.collectKafkaMetrics(clusterPhy);
// log the time cost
LOGGER.info(
"metricType={}||clusterPhyId={}||costTimeUnitMs={}",
this.collectorType().getMessage(), clusterPhy.getId(), System.currentTimeMillis() - startTime
);
// log the collected metrics
METRIC_COLLECTED_LOGGER.debug("metricType={}||clusterPhyId={}||metrics={}!",
this.collectorType().getMessage(), clusterPhy.getId(), ConvertUtil.obj2Json(metricsList)
);
}
}
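Both removed bases (`AbstractKafkaMetricCollector` and `AbstractConnectMetricCollector`) are the template-method pattern: the base owns the timing-and-logging envelope, the subclass supplies only the fetch. A stripped-down sketch of that split, with simplified types rather than the project's signatures:

```java
import java.util.Collections;
import java.util.List;

// Sketch of the template-method split used by the removed abstract collectors.
abstract class MetricCollectorBase<M, C> {
    // subclass hook, mirrors collectKafkaMetrics / collectConnectMetrics
    public abstract List<M> collect(C cluster);

    // fixed envelope, mirrors collectMetrics in the base class
    public final void run(C cluster) {
        long startTime = System.currentTimeMillis();
        List<M> metrics = collect(cluster);
        System.out.printf("costTimeUnitMs=%d||size=%d%n",
                System.currentTimeMillis() - startTime, metrics.size());
    }
}

class DemoBrokerCollector extends MetricCollectorBase<String, Long> {
    @Override
    public List<String> collect(Long clusterPhyId) {
        return Collections.singletonList("BytesIn=1024");  // placeholder fetch
    }
}

public class TemplateMethodDemo {
    public static void main(String[] args) {
        new DemoBrokerCollector().run(1L);
    }
}
```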

View File

@@ -237,7 +237,7 @@ public class CollectThreadPoolService {
     private synchronized FutureWaitUtil<Void> closeOldAndCreateNew(Long shardId) {
         // the new one
         FutureWaitUtil<Void> newFutureUtil = FutureWaitUtil.init(
-                "MetricCollect-Shard-" + shardId,
+                "CollectorMetricsFutureUtil-Shard-" + shardId,
                 this.futureUtilThreadNum,
                 this.futureUtilThreadNum,
                 this.futureUtilQueueSize

View File

@@ -3,47 +3,67 @@ package com.xiaojukeji.know.streaming.km.collector.sink;
 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
 import com.xiaojukeji.know.streaming.km.common.bean.po.BaseESPO;
-import com.xiaojukeji.know.streaming.km.common.utils.FutureUtil;
+import com.xiaojukeji.know.streaming.km.common.utils.EnvUtil;
+import com.xiaojukeji.know.streaming.km.common.utils.NamedThreadFactory;
 import com.xiaojukeji.know.streaming.km.persistence.es.dao.BaseMetricESDAO;
 import org.apache.commons.collections.CollectionUtils;

 import java.util.List;
 import java.util.Objects;
+import java.util.concurrent.LinkedBlockingDeque;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;

 public abstract class AbstractMetricESSender {
-    private static final ILog LOGGER = LogFactory.getLog(AbstractMetricESSender.class);
+    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");

     private static final int THRESHOLD = 100;

-    private static final FutureUtil<Void> esExecutor = FutureUtil.init(
-            "MetricsESSender",
+    private static final ThreadPoolExecutor esExecutor = new ThreadPoolExecutor(
             10,
             20,
-            10000
+            6000,
+            TimeUnit.MILLISECONDS,
+            new LinkedBlockingDeque<>(1000),
+            new NamedThreadFactory("KM-Collect-MetricESSender-ES"),
+            (r, e) -> LOGGER.warn("class=MetricESSender||msg=KM-Collect-MetricESSender-ES Deque is blocked, taskCount:{}" + e.getTaskCount())
     );

     /**
      * send according to the monitoring dimension
      */
-    protected boolean send2es(String index, List<? extends BaseESPO> statsList) {
-        LOGGER.info("method=send2es||indexName={}||metricsSize={}||msg=send metrics to es", index, statsList.size());
+    protected boolean send2es(String index, List<? extends BaseESPO> statsList){
         if (CollectionUtils.isEmpty(statsList)) {
             return true;
         }

+        if (!EnvUtil.isOnline()) {
+            LOGGER.info("class=MetricESSender||method=send2es||ariusStats={}||size={}",
+                    index, statsList.size());
+        }

         BaseMetricESDAO baseMetricESDao = BaseMetricESDAO.getByStatsType(index);
-        if (Objects.isNull(baseMetricESDao)) {
-            LOGGER.error("method=send2es||indexName={}||errMsg=find dao failed", index);
+        if (Objects.isNull( baseMetricESDao )) {
+            LOGGER.error("class=MetricESSender||method=send2es||errMsg=fail to find {}", index);
             return false;
         }

-        for (int i = 0; i < statsList.size(); i += THRESHOLD) {
-            final int idxStart = i;
-            // async send
-            esExecutor.submitTask(
-                    () -> baseMetricESDao.batchInsertStats(statsList.subList(idxStart, Math.min(idxStart + THRESHOLD, statsList.size())))
+        int size = statsList.size();
+        int num = (size) % THRESHOLD == 0 ? (size / THRESHOLD) : (size / THRESHOLD + 1);
+        if (size < THRESHOLD) {
+            esExecutor.execute(
+                    () -> baseMetricESDao.batchInsertStats(statsList)
+            );
+            return true;
+        }
+
+        for (int i = 1; i < num + 1; i++) {
+            int end = (i * THRESHOLD) > size ? size : (i * THRESHOLD);
+            int start = (i - 1) * THRESHOLD;
+            esExecutor.execute(
+                    () -> baseMetricESDao.batchInsertStats(statsList.subList(start, end))
             );
         }
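The rewritten `send2es` above cuts the stats list into `THRESHOLD`-sized slices and hands each slice to a small bounded pool; the `num`/`start`/`end` arithmetic is a ceiling division (250 docs become batches of 100, 100 and 50). The same chunking can also be written with the index stride the removed code used; a runnable sketch, where `batchInsert` is a placeholder for `batchInsertStats` and the pool parameters mirror the new code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.LinkedBlockingDeque;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Sketch of the chunked, asynchronous ES write from the diff above.
public class ChunkedSendDemo {
    private static final int THRESHOLD = 100;
    private static final ThreadPoolExecutor ES_EXECUTOR = new ThreadPoolExecutor(
            10, 20, 6000, TimeUnit.MILLISECONDS,
            new LinkedBlockingDeque<>(1000),
            (r, e) -> System.err.println("queue full, task rejected"));  // rejections are only logged

    static void send2es(List<String> statsList) {
        for (int start = 0; start < statsList.size(); start += THRESHOLD) {
            List<String> slice = statsList.subList(start, Math.min(start + THRESHOLD, statsList.size()));
            ES_EXECUTOR.execute(() -> batchInsert(slice));  // one async batch per slice
        }
    }

    private static void batchInsert(List<String> slice) {
        System.out.println("inserting " + slice.size() + " docs");
    }

    public static void main(String[] args) {
        List<String> stats = new ArrayList<>();
        for (int i = 0; i < 250; i++) stats.add("doc-" + i);
        send2es(stats);  // 250 docs -> batches of 100, 100, 50
        ES_EXECUTOR.shutdown();
    }
}
```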

View File

@@ -1,8 +1,7 @@
-package com.xiaojukeji.know.streaming.km.collector.sink.kafka;
+package com.xiaojukeji.know.streaming.km.collector.sink;

 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
-import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
 import com.xiaojukeji.know.streaming.km.common.bean.event.metric.BrokerMetricEvent;
 import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.BrokerMetricPO;
 import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
@@ -11,15 +10,15 @@ import org.springframework.stereotype.Component;

 import javax.annotation.PostConstruct;

-import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.BROKER_INDEX;
+import static com.xiaojukeji.know.streaming.km.common.constant.ESIndexConstant.BROKER_INDEX;

 @Component
 public class BrokerMetricESSender extends AbstractMetricESSender implements ApplicationListener<BrokerMetricEvent> {
-    private static final ILog LOGGER = LogFactory.getLog(BrokerMetricESSender.class);
+    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");

     @PostConstruct
     public void init(){
-        LOGGER.info("method=init||msg=init finished");
+        LOGGER.info("class=BrokerMetricESSender||method=init||msg=init finished");
     }

     @Override

View File

@@ -1,8 +1,7 @@
-package com.xiaojukeji.know.streaming.km.collector.sink.kafka;
+package com.xiaojukeji.know.streaming.km.collector.sink;

 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
-import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
 import com.xiaojukeji.know.streaming.km.common.bean.event.metric.ClusterMetricEvent;
 import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.ClusterMetricPO;
 import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
@@ -11,15 +10,16 @@ import org.springframework.stereotype.Component;

 import javax.annotation.PostConstruct;

-import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.CLUSTER_INDEX;
+
+import static com.xiaojukeji.know.streaming.km.common.constant.ESIndexConstant.CLUSTER_INDEX;

 @Component
 public class ClusterMetricESSender extends AbstractMetricESSender implements ApplicationListener<ClusterMetricEvent> {
-    private static final ILog LOGGER = LogFactory.getLog(ClusterMetricESSender.class);
+    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");

     @PostConstruct
     public void init(){
-        LOGGER.info("method=init||msg=init finished");
+        LOGGER.info("class=ClusterMetricESSender||method=init||msg=init finished");
     }

     @Override

View File

@@ -1,8 +1,7 @@
-package com.xiaojukeji.know.streaming.km.collector.sink.kafka;
+package com.xiaojukeji.know.streaming.km.collector.sink;

 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
-import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
 import com.xiaojukeji.know.streaming.km.common.bean.event.metric.GroupMetricEvent;
 import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.GroupMetricPO;
 import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
@@ -11,15 +10,16 @@ import org.springframework.stereotype.Component;

 import javax.annotation.PostConstruct;

-import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.GROUP_INDEX;
+
+import static com.xiaojukeji.know.streaming.km.common.constant.ESIndexConstant.GROUP_INDEX;

 @Component
 public class GroupMetricESSender extends AbstractMetricESSender implements ApplicationListener<GroupMetricEvent> {
-    private static final ILog LOGGER = LogFactory.getLog(GroupMetricESSender.class);
+    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");

     @PostConstruct
     public void init(){
-        LOGGER.info("method=init||msg=init finished");
+        LOGGER.info("class=GroupMetricESSender||method=init||msg=init finished");
     }

     @Override

View File

@@ -1,8 +1,7 @@
-package com.xiaojukeji.know.streaming.km.collector.sink.kafka;
+package com.xiaojukeji.know.streaming.km.collector.sink;

 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
-import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
 import com.xiaojukeji.know.streaming.km.common.bean.event.metric.PartitionMetricEvent;
 import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.PartitionMetricPO;
 import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
@@ -11,15 +10,15 @@ import org.springframework.stereotype.Component;

 import javax.annotation.PostConstruct;

-import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.PARTITION_INDEX;
+import static com.xiaojukeji.know.streaming.km.common.constant.ESIndexConstant.PARTITION_INDEX;

 @Component
 public class PartitionMetricESSender extends AbstractMetricESSender implements ApplicationListener<PartitionMetricEvent> {
-    private static final ILog LOGGER = LogFactory.getLog(PartitionMetricESSender.class);
+    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");

     @PostConstruct
     public void init(){
-        LOGGER.info("method=init||msg=init finished");
+        LOGGER.info("class=PartitionMetricESSender||method=init||msg=init finished");
     }

     @Override

View File

@@ -1,8 +1,7 @@
-package com.xiaojukeji.know.streaming.km.collector.sink.kafka;
+package com.xiaojukeji.know.streaming.km.collector.sink;

 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
-import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
 import com.xiaojukeji.know.streaming.km.common.bean.event.metric.ReplicaMetricEvent;
 import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.ReplicationMetricPO;
 import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
@@ -11,15 +10,15 @@ import org.springframework.stereotype.Component;

 import javax.annotation.PostConstruct;

-import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.REPLICATION_INDEX;
+import static com.xiaojukeji.know.streaming.km.common.constant.ESIndexConstant.REPLICATION_INDEX;

 @Component
 public class ReplicaMetricESSender extends AbstractMetricESSender implements ApplicationListener<ReplicaMetricEvent> {
-    private static final ILog LOGGER = LogFactory.getLog(ReplicaMetricESSender.class);
+    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");

     @PostConstruct
     public void init(){
-        LOGGER.info("method=init||msg=init finished");
+        LOGGER.info("class=GroupMetricESSender||method=init||msg=init finished");
     }

     @Override

View File

@@ -1,8 +1,7 @@
-package com.xiaojukeji.know.streaming.km.collector.sink.kafka;
+package com.xiaojukeji.know.streaming.km.collector.sink;

 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
-import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
 import com.xiaojukeji.know.streaming.km.common.bean.event.metric.*;
 import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.*;
 import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
@@ -11,15 +10,16 @@ import org.springframework.stereotype.Component;

 import javax.annotation.PostConstruct;

-import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.TOPIC_INDEX;
+
+import static com.xiaojukeji.know.streaming.km.common.constant.ESIndexConstant.TOPIC_INDEX;

 @Component
 public class TopicMetricESSender extends AbstractMetricESSender implements ApplicationListener<TopicMetricEvent> {
-    private static final ILog LOGGER = LogFactory.getLog(TopicMetricESSender.class);
+    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");

     @PostConstruct
     public void init(){
-        LOGGER.info("method=init||msg=init finished");
+        LOGGER.info("class=TopicMetricESSender||method=init||msg=init finished");
     }

     @Override

View File

@@ -1,8 +1,7 @@
-package com.xiaojukeji.know.streaming.km.collector.sink.kafka;
+package com.xiaojukeji.know.streaming.km.collector.sink;

 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
-import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
 import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
 import com.xiaojukeji.know.streaming.km.common.bean.event.metric.ZookeeperMetricEvent;
 import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.ZookeeperMetricPO;
@@ -11,15 +10,15 @@ import org.springframework.stereotype.Component;

 import javax.annotation.PostConstruct;

-import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.ZOOKEEPER_INDEX;
+import static com.xiaojukeji.know.streaming.km.common.constant.ESIndexConstant.ZOOKEEPER_INDEX;

 @Component
 public class ZookeeperMetricESSender extends AbstractMetricESSender implements ApplicationListener<ZookeeperMetricEvent> {
-    private static final ILog LOGGER = LogFactory.getLog(ZookeeperMetricESSender.class);
+    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");

     @PostConstruct
     public void init(){
-        LOGGER.info("method=init||msg=init finished");
+        LOGGER.info("class=ZookeeperMetricESSender||method=init||msg=init finished");
     }

     @Override
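Each `*MetricESSender` above is the same three-line sink: listen for one metric-event type, convert the payload to POs, call `send2es(INDEX, ...)`. Reduced to plain Java, with Spring's event wiring replaced by a `Consumer` and every name hypothetical:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Consumer;

// Sketch of the listener-per-metric-type sink pattern shared by the senders above.
public class MetricSinkDemo {
    static void send2es(String index, List<String> pos) {
        System.out.println("bulk insert " + pos.size() + " docs into " + index);
    }

    public static void main(String[] args) {
        // stands in for onApplicationEvent(ZookeeperMetricEvent event)
        Consumer<List<String>> zookeeperSender = pos -> send2es("zookeeper_index", pos);
        zookeeperSender.accept(Arrays.asList("po-1", "po-2"));  // publishMetric(...) ends up here
    }
}
```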

View File

@@ -1,33 +0,0 @@
package com.xiaojukeji.know.streaming.km.collector.sink.connect;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.connect.ConnectClusterMetricEvent;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.connect.ConnectClusterMetricPO;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;
import javax.annotation.PostConstruct;
import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.CONNECT_CLUSTER_INDEX;
/**
* @author wyb
* @date 2022/11/7
*/
@Component
public class ConnectClusterMetricESSender extends AbstractMetricESSender implements ApplicationListener<ConnectClusterMetricEvent> {
protected static final ILog LOGGER = LogFactory.getLog(ConnectClusterMetricESSender.class);
@PostConstruct
public void init(){
LOGGER.info("class=ConnectClusterMetricESSender||method=init||msg=init finished");
}
@Override
public void onApplicationEvent(ConnectClusterMetricEvent event) {
send2es(CONNECT_CLUSTER_INDEX, ConvertUtil.list2List(event.getConnectClusterMetrics(), ConnectClusterMetricPO.class));
}
}

View File

@@ -1,33 +0,0 @@
package com.xiaojukeji.know.streaming.km.collector.sink.connect;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.connect.ConnectorMetricEvent;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.connect.ConnectorMetricPO;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;
import javax.annotation.PostConstruct;
import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.CONNECT_CONNECTOR_INDEX;
/**
* @author wyb
* @date 2022/11/7
*/
@Component
public class ConnectorMetricESSender extends AbstractMetricESSender implements ApplicationListener<ConnectorMetricEvent> {
protected static final ILog LOGGER = LogFactory.getLog(ConnectorMetricESSender.class);
@PostConstruct
public void init(){
LOGGER.info("class=ConnectorMetricESSender||method=init||msg=init finished");
}
@Override
public void onApplicationEvent(ConnectorMetricEvent event) {
send2es(CONNECT_CONNECTOR_INDEX, ConvertUtil.list2List(event.getConnectorMetricsList(), ConnectorMetricPO.class));
}
}


@@ -127,9 +127,5 @@
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.13</artifactId>
</dependency>
-<dependency>
-<groupId>org.apache.kafka</groupId>
-<artifactId>connect-runtime</artifactId>
-</dependency>
</dependencies>
</project>


@@ -1,28 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.cluster;
import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.MetricDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationSortDTO;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import javax.validation.constraints.NotNull;
import java.util.List;
/**
* @author zengqiao
* @date 22/02/24
*/
@Data
public class ClusterConnectorsOverviewDTO extends PaginationSortDTO {
@NotNull(message = "latestMetricNames不允许为空")
@ApiModelProperty("需要指标点的信息")
private List<String> latestMetricNames;
@NotNull(message = "metricLines不允许为空")
@ApiModelProperty("需要指标曲线的信息")
private MetricDTO metricLines;
@ApiModelProperty("需要排序的指标名称列表,比较第一个不为空的metric")
private List<String> sortMetricNameList;
}


@@ -1,32 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.connect;
import com.xiaojukeji.know.streaming.km.common.bean.dto.BaseDTO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import lombok.NoArgsConstructor;
import javax.validation.constraints.NotBlank;
import javax.validation.constraints.NotNull;
/**
* @author zengqiao
* @date 2022-10-17
*/
@Data
@NoArgsConstructor
@ApiModel(description = "集群Connector")
public class ClusterConnectorDTO extends BaseDTO {
@NotNull(message = "connectClusterId不允许为空")
@ApiModelProperty(value = "Connector集群ID", example = "1")
private Long connectClusterId;
@NotBlank(message = "name不允许为空串")
@ApiModelProperty(value = "Connector名称", example = "know-streaming-connector")
private String connectorName;
public ClusterConnectorDTO(Long connectClusterId, String connectorName) {
this.connectClusterId = connectClusterId;
this.connectorName = connectorName;
}
}


@@ -1,29 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.connect.cluster;
import com.xiaojukeji.know.streaming.km.common.bean.dto.BaseDTO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
/**
* @author zengqiao
* @date 2022-10-17
*/
@Data
@ApiModel(description = "Connect cluster")
public class ConnectClusterDTO extends BaseDTO {
@ApiModelProperty(value = "Connect cluster ID", example = "1")
private Long id;
@ApiModelProperty(value = "Connect cluster name", example = "know-streaming")
private String name;
@ApiModelProperty(value = "Connect cluster URL", example = "http://127.0.0.1:8080")
private String clusterUrl;
@ApiModelProperty(value = "Connect cluster version", example = "2.5.1")
private String version;
@ApiModelProperty(value = "JMX configuration", example = "")
private String jmxProperties;
}


@@ -1,20 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.ClusterConnectorDTO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import javax.validation.constraints.NotBlank;
/**
* @author zengqiao
* @date 2022-10-17
*/
@Data
@ApiModel(description = "操作Connector")
public class ConnectorActionDTO extends ClusterConnectorDTO {
@NotBlank(message = "action不允许为空串")
@ApiModelProperty(value = "Connector名称", example = "stop|restart|resume")
private String action;
}


@@ -1,21 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.ClusterConnectorDTO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import javax.validation.constraints.NotNull;
import java.util.Properties;
/**
* @author zengqiao
* @date 2022-10-17
*/
@Data
@ApiModel(description = "修改Connector配置")
public class ConnectorConfigModifyDTO extends ClusterConnectorDTO {
@NotNull(message = "configs不允许为空")
@ApiModelProperty(value = "配置", example = "")
private Properties configs;
}


@@ -1,21 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.ClusterConnectorDTO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import javax.validation.constraints.NotNull;
import java.util.Properties;
/**
* @author zengqiao
* @date 2022-10-17
*/
@Data
@ApiModel(description = "创建Connector")
public class ConnectorCreateDTO extends ClusterConnectorDTO {
@NotNull(message = "configs不允许为空")
@ApiModelProperty(value = "配置", example = "")
private Properties configs;
}


@@ -1,14 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.ClusterConnectorDTO;
import io.swagger.annotations.ApiModel;
import lombok.Data;
/**
* @author zengqiao
* @date 2022-10-17
*/
@Data
@ApiModel(description = "删除Connector")
public class ConnectorDeleteDTO extends ClusterConnectorDTO {
}


@@ -1,20 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.connect.task;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector.ConnectorActionDTO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import javax.validation.constraints.NotNull;
/**
* @author zengqiao
* @date 2022-10-17
*/
@Data
@ApiModel(description = "操作Task")
public class TaskActionDTO extends ConnectorActionDTO {
@NotNull(message = "taskId不允许为NULL")
@ApiModelProperty(value = "taskId", example = "123")
private Long taskId;
}


@@ -1,22 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.connect;
import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.MetricDTO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.util.List;
/**
* @author didi
*/
@Data
@NoArgsConstructor
@AllArgsConstructor
@ApiModel(description = "Connect集群指标查询信息")
public class MetricsConnectClustersDTO extends MetricDTO {
@ApiModelProperty("Connect集群ID")
private List<Long> connectClusterIdList;
}


@@ -1,23 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.connect;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.ClusterConnectorDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.MetricDTO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.util.List;
/**
* @author didi
*/
@Data
@NoArgsConstructor
@AllArgsConstructor
@ApiModel(description = "Connector指标查询信息")
public class MetricsConnectorsDTO extends MetricDTO {
@ApiModelProperty("Connector列表")
private List<ClusterConnectorDTO> connectorNameList;
}


@@ -3,7 +3,7 @@ package com.xiaojukeji.know.streaming.km.common.bean.entity;
/**
* @author didi
*/
-public interface EntityIdInterface {
+public interface EntifyIdInterface {
/**
* Get the id
* @return


@@ -1,6 +1,6 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.cluster;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.EntityIdInterface;
+import com.xiaojukeji.know.streaming.km.common.bean.entity.EntifyIdInterface;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
@@ -10,7 +10,7 @@ import java.util.Date;
@Data
@NoArgsConstructor
@AllArgsConstructor
-public class ClusterPhy implements Comparable<ClusterPhy>, EntityIdInterface {
+public class ClusterPhy implements Comparable<ClusterPhy>, EntifyIdInterface {
/**
* Primary key
*/


@@ -1,37 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.cluster;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
/**
* Cluster health state information
* @author zengqiao
* @date 22/02/24
*/
@Data
@NoArgsConstructor
@AllArgsConstructor
public class ClusterPhysHealthState {
private Integer unknownCount;
private Integer goodCount;
private Integer mediumCount;
private Integer poorCount;
private Integer deadCount;
private Integer total;
public ClusterPhysHealthState(Integer total) {
this.unknownCount = 0;
this.goodCount = 0;
this.mediumCount = 0;
this.poorCount = 0;
this.deadCount = 0;
this.total = total;
}
}


@@ -13,4 +13,9 @@ public class BaseClusterHealthConfig extends BaseClusterConfigValue {
* Health check name
*/
protected HealthCheckNameEnum checkNameEnum;
+/**
+* Weight
+*/
+protected Float weight;
}


@@ -1,19 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.config.healthcheck;
import lombok.Data;
/**
* @author wyb
* @date 2022/10/26
*/
@Data
public class HealthAmountRatioConfig extends BaseClusterHealthConfig {
/**
* Total amount
*/
private Integer amount;
/**
* Ratio
*/
private Double ratio;
}


@@ -1,5 +1,7 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.config.metric;
+import com.xiaojukeji.know.streaming.km.common.constant.Constant;
+import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;


@@ -1,61 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.connect;
import com.xiaojukeji.know.streaming.km.common.bean.entity.EntityIdInterface;
import lombok.Data;
import java.io.Serializable;
@Data
public class ConnectCluster implements Serializable, Comparable<ConnectCluster>, EntityIdInterface {
/**
* Cluster ID
*/
private Long id;
/**
* Cluster name
*/
private String name;
/**
* Consumer group used by the cluster
*/
private String groupName;
/**
* State of the cluster's consumer group, which also represents the cluster state
* @see com.xiaojukeji.know.streaming.km.common.enums.group.GroupStateEnum
*/
private Integer state;
/**
* Leader URL reported by the workers
*/
private String memberLeaderUrl;
/**
* Version information
*/
private String version;
/**
* JMX configuration
* @see com.xiaojukeji.know.streaming.km.common.bean.entity.config.JmxConfig
*/
private String jmxProperties;
/**
* Kafka cluster ID
*/
private Long kafkaClusterPhyId;
/**
* Cluster URL
*/
private String clusterUrl;
@Override
public int compareTo(ConnectCluster connectCluster) {
return this.id.compareTo(connectCluster.getId());
}
}


@@ -1,38 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.connect;
import com.xiaojukeji.know.streaming.km.common.enums.group.GroupStateEnum;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.io.Serializable;
@Data
@NoArgsConstructor
public class ConnectClusterMetadata implements Serializable {
/**
* Kafka cluster ID
*/
private Long kafkaClusterPhyId;
/**
* Consumer group used by the cluster
*/
private String groupName;
/**
* State of the cluster's consumer group, which also represents the cluster state
*/
private GroupStateEnum state;
/**
* Leader URL reported by the workers
*/
private String memberLeaderUrl;
public ConnectClusterMetadata(Long kafkaClusterPhyId, String groupName, GroupStateEnum state, String memberLeaderUrl) {
this.kafkaClusterPhyId = kafkaClusterPhyId;
this.groupName = groupName;
this.state = state;
this.memberLeaderUrl = memberLeaderUrl;
}
}


@@ -1,87 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.connect;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.utils.CommonUtils;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.io.Serializable;
import java.net.URI;
@Data
@NoArgsConstructor
public class ConnectWorker implements Serializable {
protected static final ILog LOGGER = LogFactory.getLog(ConnectWorker.class);
/**
* Kafka cluster ID
*/
private Long kafkaClusterPhyId;
/**
* Connect cluster ID
*/
private Long connectClusterId;
/**
* Member ID
*/
private String memberId;
/**
* Host
*/
private String host;
/**
* JMX port
*/
private Integer jmxPort;
/**
* URL
*/
private String url;
/**
* Leader URL
*/
private String leaderUrl;
/**
* 1 if this worker is the leader, 0 if not
*/
private Integer leader;
/**
* Worker address
*/
private String workerId;
public ConnectWorker(Long kafkaClusterPhyId,
Long connectClusterId,
String memberId,
String host,
Integer jmxPort,
String url,
String leaderUrl,
Integer leader) {
this.kafkaClusterPhyId = kafkaClusterPhyId;
this.connectClusterId = connectClusterId;
this.memberId = memberId;
this.host = host;
this.jmxPort = jmxPort;
this.url = url;
this.leaderUrl = leaderUrl;
this.leader = leader;
String workerId = CommonUtils.getWorkerId(url);
if (workerId == null) {
workerId = memberId;
LOGGER.error("class=ConnectWorker||connectClusterId={}||memberId={}||url={}||msg=analysis url fail"
, connectClusterId, memberId, url);
}
this.workerId = workerId;
}
}


@@ -1,58 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.connect;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.io.Serializable;
@Data
@NoArgsConstructor
public class WorkerConnector implements Serializable {
/**
* Connect cluster ID
*/
private Long connectClusterId;
/**
* Kafka cluster ID
*/
private Long kafkaClusterPhyId;
/**
* Connector name
*/
private String connectorName;
private String workerMemberId;
/**
* Task state
*/
private String state;
/**
* Task ID
*/
private Integer taskId;
/**
* Worker information
*/
private String workerId;
/**
* Error trace
*/
private String trace;
public WorkerConnector(Long kafkaClusterPhyId, Long connectClusterId, String connectorName, String workerMemberId, Integer taskId, String state, String workerId, String trace) {
this.kafkaClusterPhyId = kafkaClusterPhyId;
this.connectClusterId = connectClusterId;
this.connectorName = connectorName;
this.workerMemberId = workerMemberId;
this.taskId = taskId;
this.state = state;
this.workerId = workerId;
this.trace = trace;
}
}


@@ -1,19 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.connect.config;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import org.apache.kafka.connect.runtime.rest.entities.ConfigInfo;
/**
* @see ConfigInfo
*/
@Data
@NoArgsConstructor
@AllArgsConstructor
public class ConnectConfigInfo {
private ConnectConfigKeyInfo definition;
private ConnectConfigValueInfo value;
}


@@ -1,71 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.connect.config;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import org.apache.kafka.connect.runtime.rest.entities.ConfigInfo;
import org.apache.kafka.connect.runtime.rest.entities.ConfigInfos;
import java.util.*;
import static com.xiaojukeji.know.streaming.km.common.constant.Constant.CONNECTOR_CONFIG_ACTION_RELOAD_NAME;
import static com.xiaojukeji.know.streaming.km.common.constant.Constant.CONNECTOR_CONFIG_ERRORS_TOLERANCE_NAME;
/**
* @see ConfigInfos
*/
@Data
@NoArgsConstructor
@AllArgsConstructor
public class ConnectConfigInfos {
private static final Map<String, List<String>> recommendValuesMap = new HashMap<>();
static {
recommendValuesMap.put(CONNECTOR_CONFIG_ACTION_RELOAD_NAME, Arrays.asList("none", "restart"));
recommendValuesMap.put(CONNECTOR_CONFIG_ERRORS_TOLERANCE_NAME, Arrays.asList("none", "all"));
}
private String name;
private int errorCount;
private List<String> groups;
private List<ConnectConfigInfo> configs;
public ConnectConfigInfos(ConfigInfos configInfos) {
this.name = configInfos.name();
this.errorCount = configInfos.errorCount();
this.groups = configInfos.groups();
this.configs = new ArrayList<>();
for (ConfigInfo configInfo: configInfos.values()) {
ConnectConfigKeyInfo definition = new ConnectConfigKeyInfo();
definition.setName(configInfo.configKey().name());
definition.setType(configInfo.configKey().type());
definition.setRequired(configInfo.configKey().required());
definition.setDefaultValue(configInfo.configKey().defaultValue());
definition.setImportance(configInfo.configKey().importance());
definition.setDocumentation(configInfo.configKey().documentation());
definition.setGroup(configInfo.configKey().group());
definition.setOrderInGroup(configInfo.configKey().orderInGroup());
definition.setWidth(configInfo.configKey().width());
definition.setDisplayName(configInfo.configKey().displayName());
definition.setDependents(configInfo.configKey().dependents());
ConnectConfigValueInfo value = new ConnectConfigValueInfo();
value.setName(configInfo.configValue().name());
value.setValue(configInfo.configValue().value());
value.setRecommendedValues(recommendValuesMap.getOrDefault(configInfo.configValue().name(), configInfo.configValue().recommendedValues()));
value.setErrors(configInfo.configValue().errors());
value.setVisible(configInfo.configValue().visible());
ConnectConfigInfo connectConfigInfo = new ConnectConfigInfo();
connectConfigInfo.setDefinition(definition);
connectConfigInfo.setValue(value);
this.configs.add(connectConfigInfo);
}
}
}


@@ -1,38 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.connect.config;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import org.apache.kafka.connect.runtime.rest.entities.ConfigKeyInfo;
import java.util.List;
/**
* @see ConfigKeyInfo
*/
@Data
@NoArgsConstructor
@AllArgsConstructor
public class ConnectConfigKeyInfo {
private String name;
private String type;
private boolean required;
private String defaultValue;
private String importance;
private String documentation;
private String group;
private int orderInGroup;
private String width;
private String displayName;
private List<String> dependents;
}


@@ -1,27 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.connect.config;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import org.apache.kafka.connect.runtime.rest.entities.ConfigValueInfo;
import java.util.List;
/**
* @see ConfigValueInfo
*/
@Data
@NoArgsConstructor
@AllArgsConstructor
public class ConnectConfigValueInfo {
private String name;
private String value;
private List<String> recommendedValues;
private List<String> errors;
private boolean visible;
}


@@ -1,20 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.connect.connector;
import com.alibaba.fastjson.annotation.JSONField;
import com.fasterxml.jackson.annotation.JsonProperty;
import lombok.Data;
import org.apache.kafka.connect.runtime.rest.entities.ConnectorStateInfo;
/**
* @see ConnectorStateInfo.AbstractState
*/
@Data
public abstract class KSAbstractConnectState {
private String state;
private String trace;
@JSONField(name="worker_id")
@JsonProperty("worker_id")
private String workerId;
}


@@ -1,48 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.connect.connector;
import lombok.Data;
import java.io.Serializable;
@Data
public class KSConnector implements Serializable {
/**
* Kafka cluster ID
*/
private Long kafkaClusterPhyId;
/**
* Connect cluster ID
*/
private Long connectClusterId;
/**
* Connector name
*/
private String connectorName;
/**
* Connector class name
*/
private String connectorClassName;
/**
* Connector type
*/
private String connectorType;
/**
* Topics accessed by the connector
*/
private String topics;
/**
* Task count
*/
private Integer taskCount;
/**
* State
*/
private String state;
}


@@ -1,26 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.connect.connector;
import lombok.Data;
import org.apache.kafka.connect.runtime.rest.entities.ConnectorType;
import org.apache.kafka.connect.util.ConnectorTaskId;
import java.io.Serializable;
import java.util.List;
import java.util.Map;
/**
* copy from:
* @see org.apache.kafka.connect.runtime.rest.entities.ConnectorInfo
*/
@Data
public class KSConnectorInfo implements Serializable {
private Long connectClusterId;
private String name;
private Map<String, String> config;
private List<ConnectorTaskId> tasks;
private ConnectorType type;
}


@@ -1,11 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.connect.connector;
import lombok.Data;
import org.apache.kafka.connect.runtime.rest.entities.ConnectorStateInfo;
/**
* @see ConnectorStateInfo.ConnectorState
*/
@Data
public class KSConnectorState extends KSAbstractConnectState {
}


@@ -1,21 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.connect.connector;
import lombok.Data;
import org.apache.kafka.connect.runtime.rest.entities.ConnectorStateInfo;
import org.apache.kafka.connect.runtime.rest.entities.ConnectorType;
import java.util.List;
/**
* @see ConnectorStateInfo
*/
@Data
public class KSConnectorStateInfo {
private String name;
private KSConnectorState connector;
private List<KSTaskState> tasks;
private ConnectorType type;
}


@@ -1,12 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.connect.connector;
import lombok.Data;
import org.apache.kafka.connect.runtime.rest.entities.ConnectorStateInfo;
/**
* @see ConnectorStateInfo.TaskState
*/
@Data
public class KSTaskState extends KSAbstractConnectState {
private int id;
}


@@ -1,38 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.connect.plugin;
import com.alibaba.fastjson.annotation.JSONField;
import com.fasterxml.jackson.annotation.JsonProperty;
import io.swagger.annotations.ApiModel;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.io.Serializable;
/**
* @author zengqiao
* @date 22/10/17
*/
@Data
@ApiModel(description = "Connect插件信息")
@NoArgsConstructor
public class ConnectPluginBasic implements Serializable {
/**
* Json序列化时对应的字段
*/
@JSONField(name="class")
@JsonProperty("class")
private String className;
private String type;
private String version;
private String helpDocLink;
public ConnectPluginBasic(String className, String type, String version, String helpDocLink) {
this.className = className;
this.type = type;
this.version = version;
this.helpDocLink = helpDocLink;
}
}
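The JSONField/JsonProperty pair above maps className to the JSON key "class", which cannot be used as a Java field name. A minimal serialization sketch, assuming fastjson is on the classpath; the plugin values are illustrative, and the exact output ordering may vary by fastjson version:

import com.alibaba.fastjson.JSON;

public class ConnectPluginBasicDemo {
    public static void main(String[] args) {
        // Illustrative values only; any real plugin class name works the same way.
        ConnectPluginBasic plugin = new ConnectPluginBasic(
                "org.apache.kafka.connect.file.FileStreamSourceConnector",
                "source", "2.5.1", null);
        // The JSONField annotation makes fastjson emit "class" instead of "className";
        // null fields (helpDocLink here) are omitted by default.
        System.out.println(JSON.toJSONString(plugin));
        // -> {"class":"org.apache.kafka.connect.file.FileStreamSourceConnector","type":"source","version":"2.5.1"}
    }
}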


@@ -1,12 +1,12 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.group;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.kafka.KSGroupDescription;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.enums.group.GroupStateEnum;
import com.xiaojukeji.know.streaming.km.common.enums.group.GroupTypeEnum;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
+import org.apache.kafka.clients.admin.ConsumerGroupDescription;
import java.util.ArrayList;
import java.util.List;
@@ -61,14 +61,14 @@ public class Group {
*/
private int coordinatorId;
-public Group(Long clusterPhyId, String groupName, KSGroupDescription groupDescription) {
+public Group(Long clusterPhyId, String groupName, ConsumerGroupDescription groupDescription) {
this.clusterPhyId = clusterPhyId;
-this.type = GroupTypeEnum.getTypeByProtocolType(groupDescription.protocolType());
+this.type = groupDescription.isSimpleConsumerGroup()? GroupTypeEnum.CONSUMER: GroupTypeEnum.CONNECTOR;
this.name = groupName;
this.state = GroupStateEnum.getByRawState(groupDescription.state());
-this.memberCount = groupDescription.members() == null ? 0 : groupDescription.members().size();
+this.memberCount = groupDescription.members() == null? 0: groupDescription.members().size();
this.topicMembers = new ArrayList<>();
this.partitionAssignor = groupDescription.partitionAssignor();
-this.coordinatorId = groupDescription.coordinator() == null ? Constant.INVALID_CODE : groupDescription.coordinator().id();
+this.coordinatorId = groupDescription.coordinator() == null? Constant.INVALID_CODE: groupDescription.coordinator().id();
}
}


@@ -1,71 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.health;
import com.xiaojukeji.know.streaming.km.common.bean.po.health.HealthCheckResultPO;
import com.xiaojukeji.know.streaming.km.common.enums.health.HealthCheckNameEnum;
import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;
import java.util.stream.Collectors;
@Data
@NoArgsConstructor
public class HealthCheckAggResult {
protected HealthCheckNameEnum checkNameEnum;
protected List<HealthCheckResultPO> poList;
protected Boolean passed;
public HealthCheckAggResult(HealthCheckNameEnum checkNameEnum, List<HealthCheckResultPO> poList) {
this.checkNameEnum = checkNameEnum;
this.poList = poList;
if (ValidateUtils.isEmptyList(poList) || poList.stream().filter(elem -> elem.getPassed() <= 0).count() <= 0) {
passed = true;
} else {
passed = false;
}
}
public Integer getTotalCount() {
if (poList == null) {
return 0;
}
return poList.size();
}
public Integer getPassedCount() {
if (poList == null) {
return 0;
}
return (int) (poList.stream().filter(elem -> elem.getPassed() > 0).count());
}
public List<String> getNotPassedResNameList() {
if (poList == null) {
return new ArrayList<>();
}
return poList.stream().filter(elem -> elem.getPassed() <= 0 && !ValidateUtils.isBlank(elem.getResName())).map(elem -> elem.getResName()).collect(Collectors.toList());
}
public Date getCreateTime() {
if (ValidateUtils.isEmptyList(poList)) {
return null;
}
return poList.get(0).getCreateTime();
}
public Date getUpdateTime() {
if (ValidateUtils.isEmptyList(poList)) {
return null;
}
return poList.get(0).getUpdateTime();
}
}


@@ -3,20 +3,121 @@ package com.xiaojukeji.know.streaming.km.common.bean.entity.health;
import com.xiaojukeji.know.streaming.km.common.bean.entity.config.healthcheck.BaseClusterHealthConfig;
import com.xiaojukeji.know.streaming.km.common.bean.po.health.HealthCheckResultPO;
import com.xiaojukeji.know.streaming.km.common.enums.health.HealthCheckNameEnum;
+import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
import lombok.Data;
import lombok.NoArgsConstructor;
+import java.util.ArrayList;
+import java.util.Date;
import java.util.List;
+import java.util.stream.Collectors;
@Data
@NoArgsConstructor
-public class HealthScoreResult extends HealthCheckAggResult {
+public class HealthScoreResult {
+private HealthCheckNameEnum checkNameEnum;
+private Float presentDimensionTotalWeight;
+private Float allDimensionTotalWeight;
private BaseClusterHealthConfig baseConfig;
+private List<HealthCheckResultPO> poList;
+private Boolean passed;
public HealthScoreResult(HealthCheckNameEnum checkNameEnum,
+Float presentDimensionTotalWeight,
+Float allDimensionTotalWeight,
BaseClusterHealthConfig baseConfig,
List<HealthCheckResultPO> poList) {
-super(checkNameEnum, poList);
+this.checkNameEnum = checkNameEnum;
this.baseConfig = baseConfig;
+this.poList = poList;
+this.presentDimensionTotalWeight = presentDimensionTotalWeight;
+this.allDimensionTotalWeight = allDimensionTotalWeight;
+if (!ValidateUtils.isEmptyList(poList) && poList.stream().filter(elem -> elem.getPassed() <= 0).count() <= 0) {
+passed = true;
+} else {
+passed = false;
+}
}
+public Integer getTotalCount() {
+if (poList == null) {
+return 0;
+}
+return poList.size();
+}
+public Integer getPassedCount() {
+if (poList == null) {
+return 0;
+}
+return (int) (poList.stream().filter(elem -> elem.getPassed() > 0).count());
+}
+/**
+* Calculate the health score over all check results,
+* e.g. the overall cluster health score
+*/
+public Float calAllWeightHealthScore() {
+Float healthScore = 100 * baseConfig.getWeight() / allDimensionTotalWeight;
+if (poList == null || poList.isEmpty()) {
+return 0.0f;
+}
+return healthScore * this.getPassedCount() / this.getTotalCount();
+}
+/**
+* Calculate the health score of the current dimension,
+* e.g. the cluster's Broker health score
+*/
+public Float calDimensionWeightHealthScore() {
+Float healthScore = 100 * baseConfig.getWeight() / presentDimensionTotalWeight;
+if (poList == null || poList.isEmpty()) {
+return 0.0f;
+}
+return healthScore * this.getPassedCount() / this.getTotalCount();
+}
+/**
+* Calculate the health score of a single check,
+* e.g. one item within the Broker health checks
+*/
+public Integer calRawHealthScore() {
+if (poList == null || poList.isEmpty()) {
+return 100;
+}
+return 100 * this.getPassedCount() / this.getTotalCount();
+}
+public List<String> getNotPassedResNameList() {
+if (poList == null) {
+return new ArrayList<>();
+}
+return poList.stream().filter(elem -> elem.getPassed() <= 0).map(elem -> elem.getResName()).collect(Collectors.toList());
+}
+public Date getCreateTime() {
+if (ValidateUtils.isEmptyList(poList)) {
+return null;
+}
+return poList.get(0).getCreateTime();
+}
+public Date getUpdateTime() {
+if (ValidateUtils.isEmptyList(poList)) {
+return null;
+}
+return poList.get(0).getUpdateTime();
}
}
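To make the weighting arithmetic above concrete, a worked example with assumed numbers (not taken from this changeset): suppose a Broker-dimension check has weight 3.0, the Broker dimension's check weights sum to 5.0, all dimensions together sum to 10.0, and 4 of the check's 5 inspected resources passed. Then:

calRawHealthScore()             = 100 * 4 / 5                = 80
calDimensionWeightHealthScore() = (100 * 3.0 / 5.0) * 4 / 5  = 48.0
calAllWeightHealthScore()       = (100 * 3.0 / 10.0) * 4 / 5 = 24.0

Summing calAllWeightHealthScore() over every check then yields the overall cluster health score.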


@@ -1,45 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.kafka;
import org.apache.kafka.common.KafkaFuture;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutionException;
public class KSDescribeGroupsResult {
private final Map<String, KafkaFuture<KSGroupDescription>> futures;
public KSDescribeGroupsResult(final Map<String, KafkaFuture<KSGroupDescription>> futures) {
this.futures = futures;
}
/**
* Return a map from group id to futures which yield group descriptions.
*/
public Map<String, KafkaFuture<KSGroupDescription>> describedGroups() {
return futures;
}
/**
* Return a future which yields all ConsumerGroupDescription objects, if all the describes succeed.
*/
public KafkaFuture<Map<String, KSGroupDescription>> all() {
return KafkaFuture.allOf(futures.values().toArray(new KafkaFuture[0])).thenApply(
new KafkaFuture.BaseFunction<Void, Map<String, KSGroupDescription>>() {
@Override
public Map<String, KSGroupDescription> apply(Void v) {
try {
Map<String, KSGroupDescription> descriptions = new HashMap<>(futures.size());
for (Map.Entry<String, KafkaFuture<KSGroupDescription>> entry : futures.entrySet()) {
descriptions.put(entry.getKey(), entry.getValue().get());
}
return descriptions;
} catch (InterruptedException | ExecutionException e) {
// This should be unreachable, since the KafkaFuture#allOf already ensured
// that all of the futures completed successfully.
throw new RuntimeException(e);
}
}
});
}
}
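A minimal hand-driven usage sketch; the group name is made up, and the KSGroupDescription constructor used is the one shown later in this diff:

import org.apache.kafka.common.ConsumerGroupState;
import org.apache.kafka.common.KafkaFuture;
import org.apache.kafka.common.Node;
import org.apache.kafka.common.internals.KafkaFutureImpl;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class KSDescribeGroupsResultDemo {
    public static void main(String[] args) throws Exception {
        Map<String, KafkaFuture<KSGroupDescription>> futures = new HashMap<>();
        KafkaFutureImpl<KSGroupDescription> future = new KafkaFutureImpl<>();
        futures.put("ks-demo-group", future);

        KSDescribeGroupsResult result = new KSDescribeGroupsResult(futures);
        // Normally the admin client completes the future; done by hand here.
        future.complete(new KSGroupDescription("ks-demo-group", "consumer",
                Collections.emptyList(), "range", ConsumerGroupState.STABLE, Node.noNode()));
        // all() resolves only after every per-group future has completed.
        Map<String, KSGroupDescription> described = result.all().get();
        System.out.println(described.keySet()); // -> [ks-demo-group]
    }
}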


@@ -1,124 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.kafka;
import org.apache.kafka.common.ConsumerGroupState;
import org.apache.kafka.common.Node;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.utils.Utils;
import java.util.*;
public class KSGroupDescription {
private final String groupId;
private final String protocolType;
private final Collection<KSMemberDescription> members;
private final String partitionAssignor;
private final ConsumerGroupState state;
private final Node coordinator;
private final Set<AclOperation> authorizedOperations;
public KSGroupDescription(String groupId,
String protocolType,
Collection<KSMemberDescription> members,
String partitionAssignor,
ConsumerGroupState state,
Node coordinator) {
this(groupId, protocolType, members, partitionAssignor, state, coordinator, Collections.emptySet());
}
public KSGroupDescription(String groupId,
String protocolType,
Collection<KSMemberDescription> members,
String partitionAssignor,
ConsumerGroupState state,
Node coordinator,
Set<AclOperation> authorizedOperations) {
this.groupId = groupId == null ? "" : groupId;
this.protocolType = protocolType;
this.members = members == null ? Collections.emptyList() :
Collections.unmodifiableList(new ArrayList<>(members));
this.partitionAssignor = partitionAssignor == null ? "" : partitionAssignor;
this.state = state;
this.coordinator = coordinator;
this.authorizedOperations = authorizedOperations;
}
@Override
public boolean equals(final Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
final KSGroupDescription that = (KSGroupDescription) o;
return protocolType == that.protocolType &&
Objects.equals(groupId, that.groupId) &&
Objects.equals(members, that.members) &&
Objects.equals(partitionAssignor, that.partitionAssignor) &&
state == that.state &&
Objects.equals(coordinator, that.coordinator) &&
Objects.equals(authorizedOperations, that.authorizedOperations);
}
@Override
public int hashCode() {
return Objects.hash(groupId, protocolType, members, partitionAssignor, state, coordinator, authorizedOperations);
}
/**
* The id of the consumer group.
*/
public String groupId() {
return groupId;
}
/**
* If consumer group is simple or not.
*/
public String protocolType() {
return protocolType;
}
/**
* A list of the members of the consumer group.
*/
public Collection<KSMemberDescription> members() {
return members;
}
/**
* The consumer group partition assignor.
*/
public String partitionAssignor() {
return partitionAssignor;
}
/**
* The consumer group state, or UNKNOWN if the state is too new for us to parse.
*/
public ConsumerGroupState state() {
return state;
}
/**
* The consumer group coordinator, or null if the coordinator is not known.
*/
public Node coordinator() {
return coordinator;
}
/**
* authorizedOperations for this group, or null if that information is not known.
*/
public Set<AclOperation> authorizedOperations() {
return authorizedOperations;
}
@Override
public String toString() {
return "(groupId=" + groupId +
", protocolType=" + protocolType +
", members=" + Utils.join(members, ",") +
", partitionAssignor=" + partitionAssignor +
", state=" + state +
", coordinator=" + coordinator +
", authorizedOperations=" + authorizedOperations +
")";
}
}


@@ -1,79 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.kafka;
import org.apache.kafka.clients.admin.ConsumerGroupListing;
import org.apache.kafka.common.KafkaFuture;
import org.apache.kafka.common.internals.KafkaFutureImpl;
import java.util.ArrayList;
import java.util.Collection;
public class KSListGroupsResult {
private final KafkaFutureImpl<Collection<ConsumerGroupListing>> all;
private final KafkaFutureImpl<Collection<ConsumerGroupListing>> valid;
private final KafkaFutureImpl<Collection<Throwable>> errors;
public KSListGroupsResult(KafkaFutureImpl<Collection<Object>> future) {
this.all = new KafkaFutureImpl<>();
this.valid = new KafkaFutureImpl<>();
this.errors = new KafkaFutureImpl<>();
future.thenApply(new KafkaFuture.BaseFunction<Collection<Object>, Void>() {
@Override
public Void apply(Collection<Object> results) {
ArrayList<Throwable> curErrors = new ArrayList<>();
ArrayList<ConsumerGroupListing> curValid = new ArrayList<>();
for (Object resultObject : results) {
if (resultObject instanceof Throwable) {
curErrors.add((Throwable) resultObject);
} else {
curValid.add((ConsumerGroupListing) resultObject);
}
}
if (!curErrors.isEmpty()) {
all.completeExceptionally(curErrors.get(0));
} else {
all.complete(curValid);
}
valid.complete(curValid);
errors.complete(curErrors);
return null;
}
});
}
/**
* Returns a future that yields either an exception, or the full set of consumer group
* listings.
*
* In the event of a failure, the future yields nothing but the first exception which
* occurred.
*/
public KafkaFuture<Collection<ConsumerGroupListing>> all() {
return all;
}
/**
* Returns a future which yields just the valid listings.
*
* This future never fails with an error, no matter what happens. Errors are completely
* ignored. If nothing can be fetched, an empty collection is yielded.
* If there is an error, but some results can be returned, this future will yield
* those partial results. When using this future, it is a good idea to also check
* the errors future so that errors can be displayed and handled.
*/
public KafkaFuture<Collection<ConsumerGroupListing>> valid() {
return valid;
}
/**
* Returns a future which yields just the errors which occurred.
*
* If this future yields a non-empty collection, it is very likely that elements are
* missing from the valid() set.
*
* This future itself never fails with an error. In the event of an error, this future
* will successfully yield a collection containing at least one exception.
*/
public KafkaFuture<Collection<Throwable>> errors() {
return errors;
}
}
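The three futures above partition a mixed listing outcome. A hand-driven sketch (group name and error are illustrative; assumes the two-argument ConsumerGroupListing constructor from the Kafka clients library):

import org.apache.kafka.clients.admin.ConsumerGroupListing;
import org.apache.kafka.common.errors.TimeoutException;
import org.apache.kafka.common.internals.KafkaFutureImpl;
import java.util.Arrays;
import java.util.Collection;

public class KSListGroupsResultDemo {
    public static void main(String[] args) throws Exception {
        KafkaFutureImpl<Collection<Object>> raw = new KafkaFutureImpl<>();
        KSListGroupsResult result = new KSListGroupsResult(raw);
        // One successful listing plus one node failure in the same batch.
        raw.complete(Arrays.asList(
                new ConsumerGroupListing("ks-demo-group", false),
                new TimeoutException("node unreachable")));
        System.out.println(result.valid().get());  // the one listing; errors ignored
        System.out.println(result.errors().get()); // [TimeoutException]
        // result.all().get() would throw here: it propagates the first error.
    }
}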


@@ -1,4 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.kafka;
public class KSMemberBaseAssignment {
}


@@ -1,25 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.kafka;
import lombok.Getter;
import org.apache.kafka.connect.runtime.distributed.ConnectProtocol;
@Getter
public class KSMemberConnectAssignment extends KSMemberBaseAssignment {
private final ConnectProtocol.Assignment assignment;
private final ConnectProtocol.WorkerState workerState;
public KSMemberConnectAssignment(ConnectProtocol.Assignment assignment, ConnectProtocol.WorkerState workerState) {
this.assignment = assignment;
this.workerState = workerState;
}
@Override
public String toString() {
return "KSMemberConnectAssignment{" +
"assignment=" + assignment +
", workerState=" + workerState +
'}';
}
}


@@ -1,50 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.kafka;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.utils.Utils;
import java.util.Collections;
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;
public class KSMemberConsumerAssignment extends KSMemberBaseAssignment {
private final Set<TopicPartition> topicPartitions;
/**
* Creates an instance with the specified parameters.
*
* @param topicPartitions List of topic partitions
*/
public KSMemberConsumerAssignment(Set<TopicPartition> topicPartitions) {
this.topicPartitions = topicPartitions == null ? Collections.<TopicPartition>emptySet() :
Collections.unmodifiableSet(new HashSet<>(topicPartitions));
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
KSMemberConsumerAssignment that = (KSMemberConsumerAssignment) o;
return Objects.equals(topicPartitions, that.topicPartitions);
}
@Override
public int hashCode() {
return topicPartitions != null ? topicPartitions.hashCode() : 0;
}
/**
* The topic partitions assigned to a group member.
*/
public Set<TopicPartition> topicPartitions() {
return topicPartitions;
}
@Override
public String toString() {
return "(topicPartitions=" + Utils.join(topicPartitions, ",") + ")";
}
}


@@ -1,93 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.kafka;
import java.util.Objects;
import java.util.Optional;
public class KSMemberDescription {
private final String memberId;
private final Optional<String> groupInstanceId;
private final String clientId;
private final String host;
private final KSMemberBaseAssignment assignment;
public KSMemberDescription(String memberId,
Optional<String> groupInstanceId,
String clientId,
String host,
KSMemberBaseAssignment assignment) {
this.memberId = memberId == null ? "" : memberId;
this.groupInstanceId = groupInstanceId;
this.clientId = clientId == null ? "" : clientId;
this.host = host == null ? "" : host;
this.assignment = assignment == null ?
new KSMemberBaseAssignment() : assignment;
}
public KSMemberDescription(String memberId,
String clientId,
String host,
KSMemberBaseAssignment assignment) {
this(memberId, Optional.empty(), clientId, host, assignment);
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
KSMemberDescription that = (KSMemberDescription) o;
return memberId.equals(that.memberId) &&
groupInstanceId.equals(that.groupInstanceId) &&
clientId.equals(that.clientId) &&
host.equals(that.host) &&
assignment.equals(that.assignment);
}
@Override
public int hashCode() {
return Objects.hash(memberId, groupInstanceId, clientId, host, assignment);
}
/**
* The consumer id of the group member.
*/
public String consumerId() {
return memberId;
}
/**
* The instance id of the group member.
*/
public Optional<String> groupInstanceId() {
return groupInstanceId;
}
/**
* The client id of the group member.
*/
public String clientId() {
return clientId;
}
/**
* The host where the group member is running.
*/
public String host() {
return host;
}
/**
* The assignment of the group member.
*/
public KSMemberBaseAssignment assignment() {
return assignment;
}
@Override
public String toString() {
return "(memberId=" + memberId +
", groupInstanceId=" + groupInstanceId.orElse("null") +
", clientId=" + clientId +
", host=" + host +
", assignment=" + assignment + ")";
}
}
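A small construction sketch (topic, host, and ids are made up) showing how a consumer member carries its partition assignment:

import org.apache.kafka.common.TopicPartition;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class KSMemberDescriptionDemo {
    public static void main(String[] args) {
        Set<TopicPartition> owned = new HashSet<>(Arrays.asList(
                new TopicPartition("ks-demo-topic", 0),
                new TopicPartition("ks-demo-topic", 1)));
        KSMemberDescription member = new KSMemberDescription(
                "member-1", "client-1", "/10.0.0.1",
                new KSMemberConsumerAssignment(owned));
        // A Connect worker would carry a KSMemberConnectAssignment instead.
        System.out.println(member.assignment());
    }
}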


@@ -36,7 +36,7 @@ public abstract class BaseMetrics implements Serializable {
return metrics.get(key);
}
-protected BaseMetrics(Long clusterPhyId) {
+public BaseMetrics(Long clusterPhyId){
this.clusterPhyId = clusterPhyId;
}


@@ -1,35 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.connect;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.BaseMetrics;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import lombok.ToString;
/**
* @author zengqiao
* @date 20/6/17
*/
@Data
@NoArgsConstructor
@AllArgsConstructor
@ToString
public class ConnectClusterMetrics extends BaseMetrics {
private Long connectClusterId;
public ConnectClusterMetrics(Long clusterPhyId, Long connectClusterId){
super(clusterPhyId);
this.connectClusterId = connectClusterId;
}
public static ConnectClusterMetrics initWithMetric(Long connectClusterId, String metric, Float value) {
ConnectClusterMetrics brokerMetrics = new ConnectClusterMetrics(connectClusterId, connectClusterId);
brokerMetrics.putMetric(metric, value);
return brokerMetrics;
}
@Override
public String unique() {
return "KCC@" + clusterPhyId + "@" + connectClusterId;
}
}


@@ -1,35 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.connect;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.BaseMetrics;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import lombok.ToString;
/**
* @author wyb
* @date 2022/11/2
*/
@Data
@AllArgsConstructor
@NoArgsConstructor
@ToString
public class ConnectWorkerMetrics extends BaseMetrics {
private Long connectClusterId;
private String workerId;
public static ConnectWorkerMetrics initWithMetric(Long connectClusterId, String workerId, String metric, Float value) {
ConnectWorkerMetrics connectWorkerMetrics = new ConnectWorkerMetrics();
connectWorkerMetrics.setConnectClusterId(connectClusterId);
connectWorkerMetrics.setWorkerId(workerId);
connectWorkerMetrics.putMetric(metric, value);
return connectWorkerMetrics;
}
@Override
public String unique() {
return "KCC@" + clusterPhyId + "@" + connectClusterId + "@" + workerId;
}
}


@@ -1,39 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.connect;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.BaseMetrics;
import lombok.Data;
import lombok.NoArgsConstructor;
import lombok.ToString;
/**
* @author zengqiao
* @date 20/6/17
*/
@Data
@NoArgsConstructor
@ToString
public class ConnectorMetrics extends BaseMetrics {
private Long connectClusterId;
private String connectorName;
private String connectorNameAndClusterId;
public ConnectorMetrics(Long connectClusterId, String connectorName) {
super(null);
this.connectClusterId = connectClusterId;
this.connectorName = connectorName;
this.connectorNameAndClusterId = connectorName + "#" + connectClusterId;
}
public static ConnectorMetrics initWithMetric(Long connectClusterId, String connectorName, String metricName, Float value) {
ConnectorMetrics metrics = new ConnectorMetrics(connectClusterId, connectorName);
metrics.putMetric(metricName, value);
return metrics;
}
@Override
public String unique() {
return "KCOR@" + connectClusterId + "@" + connectorName;
}
}


@@ -1,39 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.connect;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.BaseMetrics;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import lombok.ToString;
/**
* @author wyb
* @date 2022/11/4
*/
@Data
@NoArgsConstructor
@ToString
public class ConnectorTaskMetrics extends BaseMetrics {
private Long connectClusterId;
private String connectorName;
private Integer taskId;
public ConnectorTaskMetrics(Long connectClusterId, String connectorName, Integer taskId) {
this.connectClusterId = connectClusterId;
this.connectorName = connectorName;
this.taskId = taskId;
}
public static ConnectorTaskMetrics initWithMetric(Long connectClusterId, String connectorName, Integer taskId, String metricName, Float value) {
ConnectorTaskMetrics metrics = new ConnectorTaskMetrics(connectClusterId, connectorName, taskId);
metrics.putMetric(metricName,value);
return metrics;
}
@Override
public String unique() {
return "KCOR@" + connectClusterId + "@" + connectorName + "@" + taskId;
}
}
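The unique() overrides above define the de-duplication keys for collected metric points. A short sketch with made-up ids and metric name, showing the resulting keys:

public class MetricsUniqueDemo {
    public static void main(String[] args) {
        // Connector-level key: KCOR@<connectClusterId>@<connectorName>
        ConnectorMetrics cm = ConnectorMetrics.initWithMetric(1L, "file-source", "HealthScore", 90.0f);
        System.out.println(cm.unique());  // -> KCOR@1@file-source
        // Task-level key appends the task id.
        ConnectorTaskMetrics tm = ConnectorTaskMetrics.initWithMetric(1L, "file-source", 0, "HealthScore", 90.0f);
        System.out.println(tm.unique());  // -> KCOR@1@file-source@0
    }
}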


@@ -1,50 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.offset;
import org.apache.kafka.clients.admin.OffsetSpec;
/**
* @see OffsetSpec
*/
public class KSOffsetSpec {
public static class KSEarliestSpec extends KSOffsetSpec { }
public static class KSLatestSpec extends KSOffsetSpec { }
public static class KSTimestampSpec extends KSOffsetSpec {
private final long timestamp;
public KSTimestampSpec(long timestamp) {
this.timestamp = timestamp;
}
public long timestamp() {
return timestamp;
}
}
/**
* Used to retrieve the latest offset of a partition
*/
public static KSOffsetSpec latest() {
return new KSOffsetSpec.KSLatestSpec();
}
/**
* Used to retrieve the earliest offset of a partition
*/
public static KSOffsetSpec earliest() {
return new KSOffsetSpec.KSEarliestSpec();
}
/**
* Used to retrieve the earliest offset whose timestamp is greater than
* or equal to the given timestamp in the corresponding partition
* @param timestamp in milliseconds
*/
public static KSOffsetSpec forTimestamp(long timestamp) {
return new KSOffsetSpec.KSTimestampSpec(timestamp);
}
private KSOffsetSpec() {
}
}
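For reference, a brief usage sketch of the factory methods above (the timestamp value is arbitrary):

public class KSOffsetSpecDemo {
    public static void main(String[] args) {
        KSOffsetSpec latest = KSOffsetSpec.latest();
        KSOffsetSpec earliest = KSOffsetSpec.earliest();
        KSOffsetSpec atTime = KSOffsetSpec.forTimestamp(1666310400000L); // ms since epoch
        // Callers dispatch on the concrete subclass, mirroring OffsetSpec.
        if (atTime instanceof KSOffsetSpec.KSTimestampSpec) {
            System.out.println(((KSOffsetSpec.KSTimestampSpec) atTime).timestamp());
        }
    }
}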


@@ -1,10 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.param.cluster;
import com.xiaojukeji.know.streaming.km.common.bean.entity.param.VersionItemParam;
/**
* @author wyc
* @date 2022/11/9
*/
public class ClusterParam extends VersionItemParam {
}


@@ -1,5 +1,6 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.param.cluster;
+import com.xiaojukeji.know.streaming.km.common.bean.entity.param.VersionItemParam;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
@@ -7,6 +8,6 @@ import lombok.NoArgsConstructor;
@Data
@NoArgsConstructor
@AllArgsConstructor
-public class ClusterPhyParam extends ClusterParam {
+public class ClusterPhyParam extends VersionItemParam {
protected Long clusterPhyId;
}


@@ -1,16 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.param.cluster;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
/**
* @author wyb
* @date 2022/11/9
*/
@Data
@NoArgsConstructor
@AllArgsConstructor
public class ConnectClusterParam extends ClusterParam{
protected Long connectClusterId;
}


@@ -1,26 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.param.connect;
import com.xiaojukeji.know.streaming.km.common.bean.entity.param.cluster.ClusterParam;
import com.xiaojukeji.know.streaming.km.common.bean.entity.param.cluster.ClusterPhyParam;
import com.xiaojukeji.know.streaming.km.common.bean.entity.param.cluster.ConnectClusterParam;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
/**
* @author wyb
* @date 2022/11/8
*/
@Data
@NoArgsConstructor
@AllArgsConstructor
public class ConnectorParam extends ConnectClusterParam {
private String connectorName;
public ConnectorParam(Long connectClusterId, String connectorName) {
super(connectClusterId);
this.connectorName = connectorName;
}
}


@@ -1,21 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.param.metric.connect;
import com.xiaojukeji.know.streaming.km.common.bean.entity.param.metric.MetricParam;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
/**
* @author wyb
* @date 2022/11/1
*/
@Data
@NoArgsConstructor
@AllArgsConstructor
public class ConnectClusterMetricParam extends MetricParam {
private Long connectClusterId;
private String metric;
}


@@ -1,29 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.param.metric.connect;
import com.xiaojukeji.know.streaming.km.common.bean.entity.param.metric.MetricParam;
import com.xiaojukeji.know.streaming.km.common.enums.connect.ConnectorTypeEnum;
import lombok.Data;
import lombok.NoArgsConstructor;
/**
* @author wyb
* @date 2022/11/2
*/
@Data
@NoArgsConstructor
public class ConnectorMetricParam extends MetricParam {
private Long connectClusterId;
private String connectorName;
private String metricName;
private ConnectorTypeEnum connectorType;
public ConnectorMetricParam(Long connectClusterId, String connectorName, String metricName, ConnectorTypeEnum connectorType) {
this.connectClusterId = connectClusterId;
this.connectorName = connectorName;
this.metricName = metricName;
this.connectorType = connectorType;
}
}

Some files were not shown because too many files have changed in this diff.