Compare commits

..

113 Commits

Author SHA1 Message Date
EricZeng
89405fe003 Merge pull request #434 from didi/fix_2.5.0
Fix console module shutdown issue and frontend filename errors
2022-01-13 14:00:01 +08:00
shirenchuang
b9ea3865a5 Upgrade to version 2.5
(cherry picked from commit 5bc6eb6774)
2022-01-13 13:47:21 +08:00
孙超
b5bd643814 Fix image filename casing issue
(cherry picked from commit ada2718b5e)
2022-01-13 13:46:06 +08:00
shirenchuang
5bc6eb6774 Upgrade to version 2.5 2021-12-16 18:28:51 +08:00
石臻臻的杂货铺
3ba81e9aaa Merge pull request #411 from didi/v2.5.0
Upgrade to version 2.5
2021-12-16 15:28:34 +08:00
shirenchuang
329a9b59c1 Upgrade to version 2.5 2021-12-16 15:08:54 +08:00
EricZeng
22c26e24b1 Merge pull request #410 from lucasun/hotfix/2.5.0_fe
Fix top navigation for V2.5.0
2021-12-11 14:30:22 +08:00
孙超
396045177c Fix top navigation for V2.5.0 2021-12-11 14:25:43 +08:00
EricZeng
e311d3767c Merge pull request #407 from didi/dev_v2.5.0
merge dev_v2.5.0 to master
2021-12-01 19:42:03 +08:00
EricZeng
24d7b80244 Merge pull request #406 from kingdomrushing/dev_v2.5.0
My Applications: set approval time to empty while an order is still in review
2021-12-01 13:31:08 +08:00
xuguang
61f99e4d2e My Applications: set approval time to empty while an order is still in review 2021-12-01 12:54:02 +08:00
EricZeng
d5348bcf49 Merge pull request #405 from lucasun/dev_v2.5.0_fe
My Applications approval list: add empty-data checks for the application time and approval time columns
2021-12-01 11:27:58 +08:00
孙超
5d31d66365 My Applications approval list: add empty-data checks for the application time and approval time columns 2021-12-01 11:17:12 +08:00
EricZeng
29778a0154 Merge pull request #400 from lucasun/dev_v2.5.0_fe
Dev v2.5.0 fe
2021-11-30 15:01:49 +08:00
Peng
165c0a5866 Update README.md 2021-11-29 19:32:54 +08:00
EricZeng
588323961e Merge pull request #401 from didi/master
Merge the master branch into the 2.5 development branch
2021-11-23 18:50:27 +08:00
孙超
fd1c0b71c5 V2.5.0 frontend: replace QR code & frontend bugfixes 2021-11-23 17:48:10 +08:00
lucasun
54fbdcadf9 Merge branch 'didi:master' into master 2021-11-23 17:35:27 +08:00
石臻臻的杂货铺
69a30d0cf0 Merge pull request #399 from kingdomrushing/dev_v2.5.0
修复"新添加集群的时候,报watch的空指针异常"问题 & 修复"删除废弃Topic之后,Topic资源治理没有同步删除"问题
2021-11-22 17:35:36 +08:00
xuguang
b8f9b44f38 Fix topic traffic metrics not being sorted by time 2021-11-20 13:30:34 +08:00
xuguang
cbf17d4eb5 Fix "NPE from watch when adding a new cluster" & fix "Topic resource governance not deleted in sync after a deprecated Topic is removed" 2021-11-19 19:27:19 +08:00
石臻臻的杂货铺
327e025262 Merge pull request #397 from kingdomrushing/dev_v2.5.0
Dev v2.5.0
2021-11-19 14:20:20 +08:00
xuguang
6b1e944bba Fix topic remark edits in topic management not showing the existing data 2021-11-19 13:23:52 +08:00
EricZeng
668ed4d61b Merge pull request #396 from didi/dev_inc_monitor_indicators
Add documentation for newly added metrics reported to the monitoring system
2021-11-16 22:32:20 +08:00
zengqiao
312c0584ed Add documentation for newly added metrics reported to the monitoring system 2021-11-16 22:20:35 +08:00
zengqiao
110d3acb58 Add documentation for newly added metrics reported to the monitoring system 2021-11-16 22:16:35 +08:00
xuguang
ddbc60283b Bump Tomcat to 8.5.72 & add an "approval time" column to the "My Approvals" list with sorting by that column & fix JMX connection close issue 2021-11-16 17:15:58 +08:00
shirenchuang
471bcecfd6 Merge branch 'v2.4.3' into dev_v2.5.0 2021-11-15 12:45:01 +08:00
shirenchuang
0245791b13 Merge remote-tracking branch 'origin/master' into dev_v2.5.0 2021-11-15 11:45:22 +08:00
shirenchuang
4794396ce8 Merge remote-tracking branch 'origin/master' into v2.4.3 2021-11-15 10:49:22 +08:00
EricZeng
c7088779d6 Merge pull request #395 from ZHAOYINRUI/patch-5
Update README.md
2021-11-11 15:21:11 +08:00
ZHAOYINRUI
672905da12 Update README.md 2021-11-11 15:19:29 +08:00
EricZeng
47172b13be Merge pull request #394 from ZHAOYINRUI/patch-4
Update README.md
2021-11-09 14:18:34 +08:00
ZHAOYINRUI
3668a10af6 Update README.md 2021-11-09 12:43:09 +08:00
EricZeng
a4e294c03f Merge pull request #393 from ZHAOYINRUI/patch-3
Add the【Kafka中文社区】(Kafka Chinese community) Knowledge Planet QR code
2021-11-08 17:44:15 +08:00
ZHAOYINRUI
3fd6f4003f Add the【Kafka中文社区】(Kafka Chinese community) Knowledge Planet QR code 2021-11-08 17:39:08 +08:00
EricZeng
3eaf5cd530 Merge pull request #378 from didi/dev
Keep only the login endpoint in the API whitelist
2021-09-21 11:09:36 +08:00
zengqiao
c344fd8ca4 Keep only the login endpoint in the API whitelist 2021-09-21 11:00:33 +08:00
EricZeng
09639ca294 Merge pull request #377 from didi/dev
Fix Sonar scan issues
2021-09-21 10:58:36 +08:00
EricZeng
a81b6dca83 Merge pull request #376 from didi/master
merge master
2021-09-21 10:47:55 +08:00
mike.zhangliang
b74aefb08f Update README.md 2021-08-15 15:14:25 +08:00
mike.zhangliang
757f90aa7a Update README.md 2021-08-11 09:08:33 +08:00
EricZeng
7e1b3c552b Merge pull request #360 from ZHAOYINRUI/patch-1
Update 开源版与商业版特性对比.md
2021-08-03 10:02:02 +08:00
ZHAOYINRUI
69736a63b6 Update 开源版与商业版特性对比.md
Additional improvements
2021-08-02 22:10:15 +08:00
EricZeng
fb4a9f9056 Remove the redundant word '在'
Remove the redundant word '在'
2021-07-26 09:28:16 +08:00
zengqiao
387d89d3af optimize code format by sonar-lint 2021-07-13 10:39:28 +08:00
EricZeng
65d9ca9d39 Merge pull request #336 from fengxsong/master
feat: update dockerfile and helm chart
2021-07-10 10:47:57 +08:00
Peng
8c842af4ba Update README.md
Update the small-size logo
2021-07-09 12:46:18 +08:00
shirenchuang
4faf9262c9 Add the missing config file 2021-07-09 11:55:14 +08:00
shirenchuang
be7724c67d 2021-07-09 11:21:20 +08:00
Peng
48d26347f7 Update README.md
Replace the logo
2021-07-09 11:01:18 +08:00
shirenchuang
bdb01ec8b5 2021-07-07 13:28:55 +08:00
mike.zhangliang
9047815799 Update README.md 2021-07-06 17:36:23 +08:00
EricZeng
05bd94a2cc Merge pull request #344 from didi/dev
Remove the DingTalk group QR code
2021-07-05 12:15:52 +08:00
zengqiao
c9f7da84d0 Remove the DingTalk group QR code 2021-07-05 12:14:37 +08:00
EricZeng
bcc124e86a Merge pull request #343 from Hongten/master
Fix duplicate assignment in Converts#convert2OrderDO()
2021-07-04 21:35:34 +08:00
Hongten
48d2733403 Merge pull request #2 from didi/master
sync code
2021-07-04 18:04:55 +08:00
hongtenzone@foxmail.com
31fc6e4e56 remove duplicate operation 2021-07-04 17:59:36 +08:00
hongtenzone@foxmail.com
fcdeef0146 remove duplicate operation 2021-07-04 17:55:54 +08:00
EricZeng
1cd524c0cc Merge pull request #341 from didi/dev
Add retention.bytes to Topic basic information
2021-07-02 18:34:56 +08:00
zengqiao
0f746917a7 Add retention.bytes to Topic basic information 2021-07-02 16:41:57 +08:00
EricZeng
a2228d0169 Merge pull request #335 from didi/dev
bump jackson-databind version to 2.9.10.8
2021-06-24 18:04:57 +08:00
shirenchuang
e8a679d34b Merge branch 'master' into v2.4.3 2021-06-24 17:18:50 +08:00
fengxusong
1912a42091 fix: default config 2021-06-24 14:00:29 +08:00
fengxusong
ca81f96635 feat: update dockerfile and charts 2021-06-24 12:13:29 +08:00
zengqiao
eb3b8c4b31 bump jackson-databind version to 2.9.10.8 2021-06-23 21:31:43 +08:00
EricZeng
6740d6d60b Merge pull request #332 from didi/dev
Fix timeout not taking effect when poll throws an exception
2021-06-23 20:24:50 +08:00
zengqiao
c46c35b248 Fix timeout not taking effect when poll throws an exception 2021-06-23 10:11:38 +08:00
EricZeng
0b2dcec4bc Merge pull request #323 from didi/dev
fix jmx credentials
2021-06-03 10:22:12 +08:00
shirenchuang
f8e2a4aff4 Change KM packaging
Add startup/shutdown scripts
2021-06-02 18:13:58 +08:00
zengqiao
7256db8c4e fix jmx credentials 2021-06-02 13:59:18 +08:00
shirenchuang
b14d5d9bee Change KM packaging
Add startup/shutdown scripts
2021-06-01 20:20:40 +08:00
shirenchuang
12e15c3e4b Merge branch 'shirc_dev' into dev_v2.5.0 2021-06-01 20:19:53 +08:00
shirenchuang
51911bf272 add distribution 2021-06-01 20:17:54 +08:00
shirenchuang
6dc8061401 add distribution 2021-06-01 16:32:16 +08:00
EricZeng
b8fa4f8797 Merge pull request #319 from didi/dev
optimize n9e's default port
2021-05-31 19:44:38 +08:00
zengqiao
cc0bea7f45 optimize n9e's default port 2021-05-31 19:43:03 +08:00
EricZeng
4e9124b244 Merge pull request #316 from didi/dev
Topic billing configuration docs
2021-05-29 13:46:18 +08:00
zengqiao
f0eabef7b0 Topic billing configuration docs 2021-05-28 17:36:36 +08:00
EricZeng
23e5557958 Merge pull request #315 from didi/master
Docs for kafka-gateway related features
2021-05-28 17:13:29 +08:00
EricZeng
b1d02afa85 Merge pull request #312 from lucasun/master
Fix clipboard 2.0.6 packaging issue
2021-05-28 11:34:01 +08:00
孙超
2edc380f47 Update package.json: add memory fix and bump the clipboard version 2021-05-28 11:30:33 +08:00
孙超
cea8295c09 Bump the clipboard version 2021-05-28 11:21:12 +08:00
EricZeng
244bfc993a Merge pull request #310 from ZHAOYINRUI/master
Add FAQ comparison of open-source and commercial edition features
2021-05-27 14:50:52 +08:00
ZHAOYINRUI
3a272a4493 Update faq.md 2021-05-27 14:45:51 +08:00
ZHAOYINRUI
a3300db770 Update faq.md 2021-05-27 14:22:59 +08:00
ZHAOYINRUI
b0394ce261 Delete Logi-KafkaManager开源版和商业版特性对比总结.pdf 2021-05-27 14:22:03 +08:00
ZHAOYINRUI
3123089790 Create 开源版与商业版特性对比.md 2021-05-27 14:21:48 +08:00
ZHAOYINRUI
f13cf66676 Delete 开源版与商业版特性对比.md 2021-05-27 12:08:15 +08:00
ZHAOYINRUI
0c8c4d87fb Delete 开源版与商业版特性对比.md 2021-05-27 12:08:01 +08:00
ZHAOYINRUI
066088fdeb Update faq.md 2021-05-27 12:06:51 +08:00
ZHAOYINRUI
cf641e41c7 Update faq.md 2021-05-27 12:05:41 +08:00
ZHAOYINRUI
5b48322e1b Update faq.md 2021-05-27 12:04:58 +08:00
ZHAOYINRUI
9d3f680d58 Update faq.md 2021-05-27 12:04:04 +08:00
ZHAOYINRUI
bed28d57e6 Update 开源版与商业版特性对比.md 2021-05-27 12:02:09 +08:00
ZHAOYINRUI
2538525103 Update 开源版与商业版特性对比.md 2021-05-27 12:01:51 +08:00
ZHAOYINRUI
6ed798db8c Create 开源版与商业版特性对比.md 2021-05-27 12:01:15 +08:00
ZHAOYINRUI
8e9d966829 Update 开源版与商业版特性对比.md 2021-05-27 12:00:26 +08:00
ZHAOYINRUI
be16640f92 Update 开源版与商业版特性对比.md 2021-05-27 11:59:41 +08:00
ZHAOYINRUI
0e1376dd2e Create 开源版与商业版特性对比.md 2021-05-27 11:57:56 +08:00
ZHAOYINRUI
0494575aa7 Update faq.md 2021-05-27 11:32:08 +08:00
ZHAOYINRUI
bed57534e0 Add files via upload 2021-05-27 11:22:47 +08:00
EricZeng
1862d631d1 Merge pull request #305 from didi/dev
Use Logi-KM time instead of MySQL auto-generated time for heartbeat table update timestamps
2021-05-25 13:44:04 +08:00
zengqiao
c977ce5690 Use Logi-KM time instead of MySQL auto-generated time for heartbeat table update timestamps 2021-05-25 10:27:27 +08:00
zengqiao
84df377516 bump version to v2.4.2 and add release notes 2021-05-21 10:45:10 +08:00
EricZeng
4d9a284f6e Merge pull request #303 from didi/dev
bump tomcat version to 8.5.66
2021-05-21 10:21:16 +08:00
zengqiao
da7ad8b44a bump tomcat version to 8.5.66 2021-05-21 10:20:09 +08:00
EricZeng
4164046323 Merge pull request #301 from didi/dev
fix title version
2021-05-20 20:36:14 +08:00
zengqiao
72e743dfd1 fix title version 2021-05-20 20:35:23 +08:00
EricZeng
7eb7edaf0a Merge pull request #300 from didi/dev
bump tomcat version to 8.5.56
2021-05-20 20:33:59 +08:00
zengqiao
49368aaf76 bump tomcat version to 8.5.56 2021-05-20 18:17:30 +08:00
Hongten
25c3aeaa5f Merge pull request #1 from Hongten/optimize/migration_task_name
optimize the migration task name
2021-05-07 19:26:54 +08:00
Xiang Hong Wei
736d5a00b7 optimize the migration task name 2021-05-07 19:06:47 +08:00
74 changed files with 1322 additions and 324 deletions


@@ -1,13 +1,13 @@
 ---
-![kafka-manager-logo](./docs/assets/images/common/logo_name.png)
+![logikm_logo](https://user-images.githubusercontent.com/71620349/125024570-9e07a100-e0b3-11eb-8ebc-22e73e056771.png)
 **A one-stop `Apache Kafka` cluster metrics monitoring and operations management platform**
-This README introduces DiDi Logi-KafkaManager's user base, product positioning and more; via the demo address you can quickly experience the full workflow of Kafka cluster metrics monitoring and operations management.<br>If DiDi Logi-KafkaManager already runs in your production environment and you would like better official support and guidance, you can join the official exchange platform through [`OCE certification`](http://obsuite.didiyun.com/open/openAuth).
+This README introduces DiDi Logi-KafkaManager's user base, product positioning and more; via the demo address you can quickly experience the full workflow of Kafka cluster metrics monitoring and operations management.
 ## 1 Product Overview
@@ -73,15 +73,17 @@
 ![image](https://user-images.githubusercontent.com/5287750/111266722-e531d800-8665-11eb-9242-3484da5a3099.png)
-WeChat group: follow the official account Obsuite and reply "Logi加群"
+WeChat group: add mike_zhangliang on WeChat with the note "Logi加群", or follow the official account 云原生可观测性 and reply "Logi加群"
-![dingding_group](./docs/assets/images/common/dingding_group.jpg)
+## 4 Knowledge Planet
-DingTalk group ID: 32821440
+![image](https://user-images.githubusercontent.com/51046167/140718512-5ab1b336-5c48-46c0-90bd-44b5c7e168c8.png)
-## 4 OCE certification
+✅ The first【Kafka中文社区】(Kafka Chinese community) on Knowledge Planet, free to join during the beta: https://z.didi.cn/5gSF9
-OCE is a certification mechanism and exchange platform tailored for DiDi Logi-KafkaManager production users. We give OCE companies better technical support, such as dedicated technical salons, one-on-one company exchanges and a dedicated Q&A group. If your company runs Logi-KafkaManager in production, [come and join](http://obsuite.didiyun.com/open/openAuth)
+Every question gets answered~
+Rewards for participating~
+1600+ members jointly building the most professional Kafka Chinese community in China
+PS: When asking a question, please describe the problem completely in one go and include environment details (version, steps taken, error/warning messages, etc.) so the experts can answer quickly~!
 ## 5 Project Members
@@ -97,4 +99,4 @@ OCE is a certification mechanism and exchange platform for DiDi Logi-KafkaManager production users
 ## 6 License
-`kafka-manager` is distributed and used under the `Apache-2.0` license; see the [license file](./LICENSE) for more information
+`LogiKM` is distributed and used under the `Apache-2.0` license; see the [license file](./LICENSE) for more information


@@ -7,9 +7,26 @@
 ---
+## v2.4.1+
+Release date: 2021-05-21
+### Capability improvements
+- Add APIs for directly granting permissions and quotas (v2.4.1)
+- Add the ability to call APIs bypassing login (v2.4.1)
+### Experience improvements
+- Bump Tomcat to 8.5.66 (v2.4.2)
+- Optimize op APIs: split the util API into topic and leader APIs (v2.4.1)
+- Simplify (shorten) the Gateway config keys (v2.4.1)
+### Bug fixes
+- Fix wrong version displayed on the page (v2.4.2)
 ## v2.4.0
-Release date: 2021-04-26
+Release date: 2021-05-18
 ### Capability improvements


@@ -1,72 +0,0 @@
#!/bin/bash
workspace=$(cd $(dirname $0) && pwd -P)
cd $workspace

## constants
OUTPUT_DIR=./output
KM_VERSION=2.4.1
APP_NAME=kafka-manager
APP_DIR=${APP_NAME}-${KM_VERSION}
MYSQL_TABLE_SQL_FILE=./docs/install_guide/create_mysql_table.sql
CONFIG_FILE=./kafka-manager-web/src/main/resources/application.yml

## functions
function build() {
    # build command
    mvn -U clean package -Dmaven.test.skip=true
    local sc=$?
    if [ $sc -ne 0 ];then
        ## build failed, exit with a non-zero code
        echo "$APP_NAME build error"
        exit $sc
    else
        echo "$APP_NAME build ok"
    fi
}

function make_output() {
    # create the output directory
    rm -rf ${OUTPUT_DIR} &>/dev/null
    mkdir -p ${OUTPUT_DIR}/${APP_DIR} &>/dev/null
    # populate the output directory
    (
        cp -rf ${MYSQL_TABLE_SQL_FILE} ${OUTPUT_DIR}/${APP_DIR} && # copy the sql init script into output
        cp -rf ${CONFIG_FILE} ${OUTPUT_DIR}/${APP_DIR} && # copy application.yml into output
        # copy the program package into output
        cp kafka-manager-web/target/kafka-manager-web-${KM_VERSION}-SNAPSHOT.jar ${OUTPUT_DIR}/${APP_DIR}/${APP_NAME}.jar
        echo -e "make output ok."
    ) || { echo -e "make output error"; exit 2; } # exit with a non-zero code if populating output fails

function make_package() {
    # compress the output directory
    (
        cd ${OUTPUT_DIR} && tar cvzf ${APP_DIR}.tar.gz ${APP_DIR}
        echo -e "make package ok."
    ) || { echo -e "make package error"; exit 2; } # exit with a non-zero code if compressing fails
}

##########################################
## main
## steps:
## 1. build
## 2. generate the output deployment directory
## 3. generate the tar.gz package
##########################################
# 1. build
build
# 2. generate the output deployment directory
make_output
# 3. generate the tar.gz package
make_package
# build done
echo -e "build done"
exit 0


@@ -1,43 +1,28 @@
 FROM openjdk:16-jdk-alpine3.13
-LABEL author="yangvipguang"
+LABEL author="fengxsong"
+RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/g' /etc/apk/repositories && apk add --no-cache tini
-ENV VERSION 2.3.1
-RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/g' /etc/apk/repositories
-RUN apk add --no-cache --virtual .build-deps \
-    font-adobe-100dpi \
-    ttf-dejavu \
-    fontconfig \
-    curl \
-    apr \
-    apr-util \
-    apr-dev \
-    tomcat-native \
-    && apk del .build-deps
-RUN apk add --no-cache tini
+ENV VERSION 2.4.2
+WORKDIR /opt/
 ENV AGENT_HOME /opt/agent/
-WORKDIR /tmp
-COPY $JAR_PATH/kafka-manager.jar app.jar
+# COPY application.yml application.yml  ## mounted via helm by default, to avoid leaking sensitive config
 COPY docker-depends/config.yaml $AGENT_HOME
 COPY docker-depends/jmx_prometheus_javaagent-0.15.0.jar $AGENT_HOME
 ENV JAVA_AGENT="-javaagent:$AGENT_HOME/jmx_prometheus_javaagent-0.15.0.jar=9999:$AGENT_HOME/config.yaml"
 ENV JAVA_HEAP_OPTS="-Xms1024M -Xmx1024M -Xmn100M "
 ENV JAVA_OPTS="-verbose:gc \
   -XX:MaxMetaspaceSize=256M -XX:+DisableExplicitGC -XX:+UseStringDeduplication \
   -XX:+UseG1GC -XX:+HeapDumpOnOutOfMemoryError -XX:-UseContainerSupport"
+RUN wget https://github.com/didi/Logi-KafkaManager/releases/download/v${VERSION}/kafka-manager-${VERSION}.tar.gz && \
+    tar xvf kafka-manager-${VERSION}.tar.gz && \
+    mv kafka-manager-${VERSION}/kafka-manager.jar /opt/app.jar && \
+    rm -rf kafka-manager-${VERSION}*
 EXPOSE 8080 9999
 ENTRYPOINT ["tini", "--"]
-CMD ["sh","-c","java -jar $JAVA_AGENT $JAVA_HEAP_OPTS $JAVA_OPTS app.jar --spring.config.location=application.yml"]
+CMD [ "sh", "-c", "java -jar $JAVA_AGENT $JAVA_HEAP_OPTS $JAVA_OPTS app.jar --spring.config.location=application.yml"]


@@ -0,0 +1,6 @@
dependencies:
- name: mysql
  repository: https://charts.bitnami.com/bitnami
  version: 8.6.3
digest: sha256:d250c463c1d78ba30a24a338a06a551503c7a736621d974fe4999d2db7f6143e
generated: "2021-06-24T11:34:54.625217+08:00"


@@ -1,6 +1,6 @@
 apiVersion: v2
 name: didi-km
-description: A Helm chart for Kubernetes
+description: Logi-KafkaManager
 # A chart can be either an 'application' or a 'library' chart.
 #
@@ -21,4 +21,9 @@ version: 0.1.0
 # incremented each time you make changes to the application. Versions are not expected to
 # follow Semantic Versioning. They should reflect the version the application is using.
 # It is recommended to use it with quotes.
-appVersion: "1.16.0"
+appVersion: "2.4.2"
+dependencies:
+  - condition: mysql.enabled
+    name: mysql
+    repository: https://charts.bitnami.com/bitnami
+    version: 8.x.x
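Because the chart now declares the Bitnami MySQL dependency above, the subchart has to be pulled before installing; a sketch, assuming it is run from the chart directory:

```bash
# Downloads the mysql subchart into charts/ and writes Chart.lock
helm dependency update .
```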

Binary file not shown.


@@ -1,7 +1,17 @@
+{{- define "datasource.mysql" -}}
+{{- if .Values.mysql.enabled }}
+{{- printf "%s-mysql" (include "didi-km.fullname" .) -}}
+{{- else -}}
+{{- printf "%s" .Values.externalDatabase.host -}}
+{{- end -}}
+{{- end -}}
 apiVersion: v1
 kind: ConfigMap
 metadata:
-  name: km-cm
+  name: {{ include "didi-km.fullname" . }}-configs
+  labels:
+    {{- include "didi-km.labels" . | nindent 4 }}
 data:
   application.yml: |
     server:
@@ -17,9 +27,9 @@ data:
         name: kafkamanager
       datasource:
         kafka-manager:
-          jdbc-url: jdbc:mysql://xxxxx:3306/kafka-manager?characterEncoding=UTF-8&serverTimezone=GMT%2B8&useSSL=false
-          username: admin
-          password: admin
+          jdbc-url: jdbc:mysql://{{ include "datasource.mysql" . }}:3306/{{ .Values.mysql.auth.database }}?characterEncoding=UTF-8&serverTimezone=GMT%2B8&useSSL=false
+          username: {{ .Values.mysql.auth.username }}
+          password: {{ .Values.mysql.auth.password }}
           driver-class-name: com.mysql.jdbc.Driver
       main:
         allow-bean-definition-overriding: true
@@ -54,7 +64,10 @@ data:
           sync-topic-enabled: false # periodically sync un-persisted Topics to the DB
       account:
+        # ldap settings
         ldap:
+          enabled: false
+          authUserRegistration: false
       kcm:
         enabled: false
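To see what the `datasource.mysql` helper renders into the jdbc-url, the ConfigMap can be previewed before installing; a sketch assuming this template is saved as `templates/configmap.yaml` and the release is named `km` (the chart's values pin `fullnameOverride: "km"` to the release name):

```bash
# Render only the ConfigMap and inspect the generated application.yml
helm template km . --show-only templates/configmap.yaml
```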


@@ -42,6 +42,10 @@ spec:
           protocol: TCP
         resources:
           {{- toYaml .Values.resources | nindent 12 }}
+        volumeMounts:
+          - name: configs
+            mountPath: /tmp/application.yml
+            subPath: application.yml
       {{- with .Values.nodeSelector }}
       nodeSelector:
         {{- toYaml . | nindent 8 }}
@@ -54,3 +58,7 @@ spec:
       tolerations:
         {{- toYaml . | nindent 8 }}
       {{- end }}
+      volumes:
+        - name: configs
+          configMap:
+            name: {{ include "didi-km.fullname" . }}-configs


@@ -5,13 +5,14 @@
 replicaCount: 1
 image:
-  repository: docker.io/yangvipguang/km
+  repository: docker.io/fengxsong/logi-kafka-manager
   pullPolicy: IfNotPresent
   # Overrides the image tag whose default is the chart appVersion.
-  tag: "v18"
+  tag: "v2.4.2"
 imagePullSecrets: []
 nameOverride: ""
+# fullnameOverride must be set to the same value as the release name
 fullnameOverride: "km"
 serviceAccount:
@@ -59,10 +60,10 @@ resources:
   # resources, such as Minikube. If you do want to specify resources, uncomment the following
   # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
   limits:
-    cpu: 50m
+    cpu: 500m
     memory: 2048Mi
   requests:
-    cpu: 10m
+    cpu: 100m
     memory: 200Mi
 autoscaling:
@@ -77,3 +78,16 @@ nodeSelector: {}
 tolerations: []
 affinity: {}
+# more configuration is set via the configmap in templates/configmap.yaml
+externalDatabase:
+  host: ""
+mysql:
+  # if enabled is set to false, you must manually specify externalDatabase.host
+  enabled: true
+  architecture: standalone
+  auth:
+    rootPassword: "s3cretR00t"
+    database: "logi_kafka_manager"
+    username: "logi_kafka_manager"
+    password: "n0tp@55w0rd"
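A hedged install sketch built from these values: with the bundled MySQL enabled, the `datasource.mysql` helper resolves the DB host to `km-mysql`; disabling it switches the jdbc-url to `externalDatabase.host` (the hostname below is a placeholder):

```bash
# Option 1: bundled Bitnami MySQL, using the defaults above
helm install km .

# Option 2: external database (placeholder host)
helm install km . \
  --set mysql.enabled=false \
  --set externalDatabase.host=mysql.example.internal
```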


@@ -0,0 +1,16 @@
#!/bin/bash
cd `dirname $0`/../target
target_dir=`pwd`
pid=`ps ax | grep -i 'kafka-manager' | grep ${target_dir} | grep java | grep -v grep | awk '{print $1}'`
if [ -z "$pid" ] ; then
echo "No kafka-manager running."
exit -1;
fi
echo "The kafka-manager (${pid}) is running..."
kill ${pid}
echo "Send shutdown request to kafka-manager (${pid}) OK"


@@ -0,0 +1,81 @@
error_exit ()
{
echo "ERROR: $1 !!"
exit 1
}
[ ! -e "$JAVA_HOME/bin/java" ] && JAVA_HOME=$HOME/jdk/java
[ ! -e "$JAVA_HOME/bin/java" ] && JAVA_HOME=/usr/java
[ ! -e "$JAVA_HOME/bin/java" ] && unset JAVA_HOME
# detect macOS
darwin=false
case "`uname`" in
Darwin*) darwin=true;;
esac
if [ -z "$JAVA_HOME" ]; then
  if $darwin; then
if [ -x '/usr/libexec/java_home' ] ; then
export JAVA_HOME=`/usr/libexec/java_home`
elif [ -d "/System/Library/Frameworks/JavaVM.framework/Versions/CurrentJDK/Home" ]; then
export JAVA_HOME="/System/Library/Frameworks/JavaVM.framework/Versions/CurrentJDK/Home"
fi
else
JAVA_PATH=`dirname $(readlink -f $(which javac))`
if [ "x$JAVA_PATH" != "x" ]; then
export JAVA_HOME=`dirname $JAVA_PATH 2>/dev/null`
fi
fi
if [ -z "$JAVA_HOME" ]; then
error_exit "Please set the JAVA_HOME variable in your environment, We need java(x64)! jdk8 or later is better!"
fi
fi
export WEB_SERVER="kafka-manager"
export JAVA_HOME
export JAVA="$JAVA_HOME/bin/java"
export BASE_DIR=`cd $(dirname $0)/..; pwd`
export CUSTOM_SEARCH_LOCATIONS=file:${BASE_DIR}/conf/
#===========================================================================================
# JVM Configuration
#===========================================================================================
JAVA_OPT="${JAVA_OPT} -server -Xms2g -Xmx2g -Xmn1g -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=320m"
JAVA_OPT="${JAVA_OPT} -XX:-OmitStackTraceInFastThrow -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=${BASE_DIR}/logs/java_heapdump.hprof"
## some of these flags are deprecated on newer JDKs
JAVA_MAJOR_VERSION=$($JAVA -version 2>&1 | sed -E -n 's/.* version "([0-9]*).*$/\1/p')
if [[ "$JAVA_MAJOR_VERSION" -ge "9" ]] ; then
JAVA_OPT="${JAVA_OPT} -Xlog:gc*:file=${BASE_DIR}/logs/km_gc.log:time,tags:filecount=10,filesize=102400"
else
JAVA_OPT="${JAVA_OPT} -Djava.ext.dirs=${JAVA_HOME}/jre/lib/ext:${JAVA_HOME}/lib/ext"
JAVA_OPT="${JAVA_OPT} -Xloggc:${BASE_DIR}/logs/km_gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M"
fi
JAVA_OPT="${JAVA_OPT} -jar ${BASE_DIR}/target/${WEB_SERVER}.jar"
JAVA_OPT="${JAVA_OPT} --spring.config.additional-location=${CUSTOM_SEARCH_LOCATIONS}"
JAVA_OPT="${JAVA_OPT} --logging.config=${BASE_DIR}/conf/logback-spring.xml"
JAVA_OPT="${JAVA_OPT} --server.max-http-header-size=524288"
if [ ! -d "${BASE_DIR}/logs" ]; then
mkdir ${BASE_DIR}/logs
fi
echo "$JAVA ${JAVA_OPT}"
# check the start.out log output file
if [ ! -f "${BASE_DIR}/logs/start.out" ]; then
touch "${BASE_DIR}/logs/start.out"
fi
# start
echo -e "---- 启动脚本 ------\n $JAVA ${JAVA_OPT}" > ${BASE_DIR}/logs/start.out 2>&1 &
nohup $JAVA ${JAVA_OPT} >> ${BASE_DIR}/logs/start.out 2>&1 &
echo "${WEB_SERVER} is startingyou can check the ${BASE_DIR}/logs/start.out"


@@ -0,0 +1,28 @@
## kafka-manager config file; settings here override the built-in defaults
## the settings below basically mirror the default application.yml inside the jar;
## keep only what you change and delete the rest, e.g. configure just mysql
server:
  port: 8080
  tomcat:
    accept-count: 1000
    max-connections: 10000
    max-threads: 800
    min-spare-threads: 100

spring:
  application:
    name: kafkamanager
  profiles:
    active: dev
  datasource:
    kafka-manager:
      jdbc-url: jdbc:mysql://localhost:3306/logi_kafka_manager?characterEncoding=UTF-8&useSSL=false&serverTimezone=GMT%2B8
      username: root
      password: 123456
      driver-class-name: com.mysql.cj.jdbc.Driver
  main:
    allow-bean-definition-overriding: true


@@ -0,0 +1,116 @@
## kafka-manager config file; settings here override the built-in defaults
## the settings below basically mirror the default application.yml inside the jar;
## keep only what you change and delete the rest, e.g. configure just mysql
server:
  port: 8080
  tomcat:
    accept-count: 1000
    max-connections: 10000
    max-threads: 800
    min-spare-threads: 100

spring:
  application:
    name: kafkamanager
  profiles:
    active: dev
  datasource:
    kafka-manager:
      jdbc-url: jdbc:mysql://localhost:3306/logi_kafka_manager?characterEncoding=UTF-8&useSSL=false&serverTimezone=GMT%2B8
      username: root
      password: 123456
      driver-class-name: com.mysql.cj.jdbc.Driver
  main:
    allow-bean-definition-overriding: true
  servlet:
    multipart:
      max-file-size: 100MB
      max-request-size: 100MB

logging:
  config: classpath:logback-spring.xml

custom:
  idc: cn # data center of the deployment; ignore this setting, it will be removed later
  jmx:
    max-conn: 10 # since version 2.3 this setting no longer takes effect here
  store-metrics-task:
    community:
      broker-metrics-enabled: true # collect community broker metrics; when off, they are neither collected nor written to the DB
      topic-metrics-enabled: true # collect community topic metrics; when off, they are neither collected nor written to the DB
    didi:
      app-topic-metrics-enabled: false # DiDi-instrumented metric; absent in community Apache Kafka, so off by default
      topic-request-time-metrics-enabled: false # DiDi-instrumented metric; absent in community Apache Kafka, so off by default
      topic-throttled-metrics: false # DiDi-instrumented metric; absent in community Apache Kafka, so off by default
    save-days: 7 # days metrics are kept in the DB; -1 keeps them forever, 7 keeps the last 7 days

# task-related switches
task:
  op:
    sync-topic-enabled: false # periodically sync un-persisted Topics to the DB
  order-auto-exec: # switches for the automatic order-approval thread
    topic-enabled: false # auto-approval of Topic orders; false: off, true: on
    app-enabled: false # auto-approval of App orders; false: off, true: on

# LDAP-related settings
account:
  ldap:
    enabled: false
    url: ldap://127.0.0.1:389/
    basedn: dc=tsign,dc=cn
    factory: com.sun.jndi.ldap.LdapCtxFactory
    filter: sAMAccountName
    security:
      authentication: simple
      principal: cn=admin,dc=tsign,dc=cn
      credentials: admin
    auth-user-registration: true
    auth-user-registration-role: normal

# cluster upgrade/deployment features; require Nightingale (n9e) and S3
kcm:
  enabled: false
  s3:
    endpoint: s3.didiyunapi.com
    access-key: 1234567890
    secret-key: 0987654321
    bucket: logi-kafka
  n9e:
    base-url: http://127.0.0.1:8004
    user-token: 12345678
    timeout: 300
    account: root
    script-file: kcm_script.sh

# monitoring/alerting features; require Nightingale
# enabled: whether monitoring/alerting is on, true: on, false: off
# n9e.nid: Nightingale node ID
# n9e.user-token: the user's token, found in Nightingale personal settings
# n9e.mon.base-url: monitoring address
# n9e.sink.base-url: metrics reporting address
# n9e.rdb.base-url: user resource center address
monitor:
  enabled: false
  n9e:
    nid: 2
    user-token: 1234567890
    mon:
      base-url: http://127.0.0.1:8000 # Nightingale v4 unified the default port to 8000
    sink:
      base-url: http://127.0.0.1:8000 # Nightingale v4 unified the default port to 8000
    rdb:
      base-url: http://127.0.0.1:8000 # Nightingale v4 unified the default port to 8000

notify: # notification feature
  kafka: # notifications are sent to a given Kafka Topic by default
    cluster-id: 95 # cluster ID hosting the Topic
    topic-name: didi-kafka-notify # Topic name
  order: # address of the deployed KM
    detail-url: http://127.0.0.1


@@ -0,0 +1,215 @@
<?xml version="1.0" encoding="UTF-8"?>
<configuration scan="true" scanPeriod="10 seconds">
<contextName>logback</contextName>
<property name="log.path" value="./logs" />
<!-- colored logs -->
<!-- converter classes that colored logs depend on -->
<conversionRule conversionWord="clr" converterClass="org.springframework.boot.logging.logback.ColorConverter" />
<conversionRule conversionWord="wex" converterClass="org.springframework.boot.logging.logback.WhitespaceThrowableProxyConverter" />
<conversionRule conversionWord="wEx" converterClass="org.springframework.boot.logging.logback.ExtendedWhitespaceThrowableProxyConverter" />
<!-- colored log pattern -->
<property name="CONSOLE_LOG_PATTERN" value="${CONSOLE_LOG_PATTERN:-%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %clr(%-40.40logger{39}){cyan} %clr(:){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}}"/>
<!-- console output -->
<appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
<filter class="ch.qos.logback.classic.filter.ThresholdFilter">
<level>info</level>
</filter>
<encoder>
<Pattern>${CONSOLE_LOG_PATTERN}</Pattern>
<charset>UTF-8</charset>
</encoder>
</appender>
<!-- file outputs -->
<!-- time-rolling appender for DEBUG level logs -->
<appender name="DEBUG_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>${log.path}/log_debug.log</file>
<!-- log file output pattern -->
<encoder>
<pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern>
<charset>UTF-8</charset> <!-- character set -->
</encoder>
<!-- rolling policy: roll by date and by size -->
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<!-- log archiving -->
<fileNamePattern>${log.path}/log_debug_%d{yyyy-MM-dd}.%i.log</fileNamePattern>
<timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
<maxFileSize>100MB</maxFileSize>
</timeBasedFileNamingAndTriggeringPolicy>
<!-- days to keep log files -->
<maxHistory>7</maxHistory>
</rollingPolicy>
<!-- this file records only the debug level -->
<filter class="ch.qos.logback.classic.filter.LevelFilter">
<level>debug</level>
<onMatch>ACCEPT</onMatch>
<onMismatch>DENY</onMismatch>
</filter>
</appender>
<!-- time-rolling appender for INFO level logs -->
<appender name="INFO_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
<!-- path and name of the active log file -->
<file>${log.path}/log_info.log</file>
<!-- log file output pattern -->
<encoder>
<pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern>
<charset>UTF-8</charset>
</encoder>
<!-- rolling policy: roll by date and by size -->
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<!-- daily archive path and naming pattern -->
<fileNamePattern>${log.path}/log_info_%d{yyyy-MM-dd}.%i.log</fileNamePattern>
<timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
<maxFileSize>100MB</maxFileSize>
</timeBasedFileNamingAndTriggeringPolicy>
<!-- days to keep log files -->
<maxHistory>7</maxHistory>
</rollingPolicy>
<!-- this file records only the info level -->
<filter class="ch.qos.logback.classic.filter.LevelFilter">
<level>info</level>
<onMatch>ACCEPT</onMatch>
<onMismatch>DENY</onMismatch>
</filter>
</appender>
<!-- time-rolling appender for WARN level logs -->
<appender name="WARN_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
<!-- path and name of the active log file -->
<file>${log.path}/log_warn.log</file>
<!-- log file output pattern -->
<encoder>
<pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern>
<charset>UTF-8</charset> <!-- character set -->
</encoder>
<!-- rolling policy: roll by date and by size -->
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<fileNamePattern>${log.path}/log_warn_%d{yyyy-MM-dd}.%i.log</fileNamePattern>
<timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
<maxFileSize>100MB</maxFileSize>
</timeBasedFileNamingAndTriggeringPolicy>
<!-- days to keep log files -->
<maxHistory>7</maxHistory>
</rollingPolicy>
<!-- this file records only the warn level -->
<filter class="ch.qos.logback.classic.filter.LevelFilter">
<level>warn</level>
<onMatch>ACCEPT</onMatch>
<onMismatch>DENY</onMismatch>
</filter>
</appender>
<!-- time-rolling appender for ERROR level logs -->
<appender name="ERROR_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
<!-- path and name of the active log file -->
<file>${log.path}/log_error.log</file>
<!-- log file output pattern -->
<encoder>
<pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern>
<charset>UTF-8</charset> <!-- character set -->
</encoder>
<!-- rolling policy: roll by date and by size -->
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<fileNamePattern>${log.path}/log_error_%d{yyyy-MM-dd}.%i.log</fileNamePattern>
<timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
<maxFileSize>100MB</maxFileSize>
</timeBasedFileNamingAndTriggeringPolicy>
<!-- days to keep log files -->
<maxHistory>7</maxHistory>
</rollingPolicy>
<!-- this file records only the ERROR level -->
<filter class="ch.qos.logback.classic.filter.LevelFilter">
<level>ERROR</level>
<onMatch>ACCEPT</onMatch>
<onMismatch>DENY</onMismatch>
</filter>
</appender>
<!-- metrics collection log -->
<appender name="COLLECTOR_METRICS_LOGGER" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>${log.path}/metrics/collector_metrics.log</file>
<encoder>
<pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern>
<charset>UTF-8</charset>
</encoder>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<fileNamePattern>${log.path}/metrics/collector_metrics_%d{yyyy-MM-dd}.%i.log</fileNamePattern>
<timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
<maxFileSize>100MB</maxFileSize>
</timeBasedFileNamingAndTriggeringPolicy>
<maxHistory>3</maxHistory>
</rollingPolicy>
</appender>
<!-- metrics collection log -->
<appender name="API_METRICS_LOGGER" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>${log.path}/metrics/api_metrics.log</file>
<encoder>
<pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern>
<charset>UTF-8</charset>
</encoder>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<fileNamePattern>${log.path}/metrics/api_metrics_%d{yyyy-MM-dd}.%i.log</fileNamePattern>
<timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
<maxFileSize>100MB</maxFileSize>
</timeBasedFileNamingAndTriggeringPolicy>
<maxHistory>3</maxHistory>
</rollingPolicy>
</appender>
<!-- scheduled task log -->
<appender name="SCHEDULED_TASK_LOGGER" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>${log.path}/metrics/scheduled_tasks.log</file>
<encoder>
<pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern>
<charset>UTF-8</charset>
</encoder>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<fileNamePattern>${log.path}/metrics/scheduled_tasks_%d{yyyy-MM-dd}.%i.log</fileNamePattern>
<timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
<maxFileSize>100MB</maxFileSize>
</timeBasedFileNamingAndTriggeringPolicy>
<maxHistory>5</maxHistory>
</rollingPolicy>
</appender>
<logger name="COLLECTOR_METRICS_LOGGER" level="DEBUG" additivity="false">
<appender-ref ref="COLLECTOR_METRICS_LOGGER"/>
</logger>
<logger name="API_METRICS_LOGGER" level="DEBUG" additivity="false">
<appender-ref ref="API_METRICS_LOGGER"/>
</logger>
<logger name="SCHEDULED_TASK_LOGGER" level="DEBUG" additivity="false">
<appender-ref ref="SCHEDULED_TASK_LOGGER"/>
</logger>
<logger name="org.apache.ibatis" level="INFO" additivity="false" />
<logger name="org.mybatis.spring" level="INFO" additivity="false" />
<logger name="com.github.miemiedev.mybatis.paginator" level="INFO" additivity="false" />
<root level="info">
<appender-ref ref="CONSOLE" />
<appender-ref ref="DEBUG_FILE" />
<appender-ref ref="INFO_FILE" />
<appender-ref ref="WARN_FILE" />
<appender-ref ref="ERROR_FILE" />
<!--<appender-ref ref="METRICS_LOG" />-->
</root>
<!-- production: output to files -->
<!--<springProfile name="pro">-->
<!--<root level="info">-->
<!--<appender-ref ref="CONSOLE" />-->
<!--<appender-ref ref="DEBUG_FILE" />-->
<!--<appender-ref ref="INFO_FILE" />-->
<!--<appender-ref ref="ERROR_FILE" />-->
<!--<appender-ref ref="WARN_FILE" />-->
<!--</root>-->
<!--</springProfile>-->
</configuration>

distribution/pom.xml Normal file

@@ -0,0 +1,64 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<parent>
<artifactId>kafka-manager</artifactId>
<groupId>com.xiaojukeji.kafka</groupId>
<version>${kafka-manager.revision}</version>
</parent>
<modelVersion>4.0.0</modelVersion>
<artifactId>distribution</artifactId>
<name>distribution</name>
<packaging>pom</packaging>
<dependencies>
<dependency>
<groupId>${project.groupId}</groupId>
<artifactId>kafka-manager-web</artifactId>
<version>${kafka-manager.revision}</version>
</dependency>
</dependencies>
<profiles>
<profile>
<id>release-kafka-manager</id>
<dependencies>
<dependency>
<groupId>${project.groupId}</groupId>
<artifactId>kafka-manager-web</artifactId>
<version>${kafka-manager.revision}</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-assembly-plugin</artifactId>
<configuration>
<descriptors>
<descriptor>release-km.xml</descriptor>
</descriptors>
<tarLongFileMode>posix</tarLongFileMode>
</configuration>
<executions>
<execution>
<id>make-assembly</id>
<phase>install</phase>
<goals>
<goal>single</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
<finalName>kafka-manager</finalName>
</build>
</profile>
</profiles>
</project>
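This module only assembles artifacts; the `release-kafka-manager` profile binds the maven-assembly-plugin to the `install` phase, so the distribution packages are produced with the same command the install guide below uses:

```bash
# Run from the repo root; the tar.gz/zip land in distribution/target/
mvn -Prelease-kafka-manager -Dmaven.test.skip=true clean install -U
```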

distribution/readme.md Normal file

@@ -0,0 +1,22 @@
## Notes
### 1. Create the MySQL database
> conf/create_mysql_table.sql
### 2. Modify the config file
> conf/application.yml.example
> Copy application.yml.example to a file named application.yml
> in the same directory (conf/), and adjust it to your own settings.
> Settings here take precedence over the defaults inside the jar.
>
### 3. Start/stop kafka-manager
> sh bin/startup.sh to start
>
> sh bin/shutdown.sh to stop
>
### 4. Upgrading the jar
> When upgrading, check the config change history in `upgrade_config.md`.
>

distribution/release-km.xml Executable file

@@ -0,0 +1,51 @@
<?xml version="1.0" encoding="UTF-8"?>
<assembly>
<id>${project.version}</id>
<includeBaseDirectory>true</includeBaseDirectory>
<formats>
<format>dir</format>
<format>tar.gz</format>
<format>zip</format>
</formats>
<fileSets>
<fileSet>
<includes>
<include>conf/**</include>
</includes>
</fileSet>
<fileSet>
<includes>
<include>bin/*</include>
</includes>
<fileMode>0755</fileMode>
</fileSet>
</fileSets>
<files>
<file>
<source>readme.md</source>
<destName>readme.md</destName>
</file>
<file>
<source>upgrade_config.md</source>
<destName>upgrade_config.md</destName>
</file>
<file>
<!-- name and output location of the built jar -->
<source>../kafka-manager-web/target/kafka-manager.jar</source>
<outputDirectory>target/</outputDirectory>
</file>
</files>
<moduleSets>
<moduleSet>
<useAllReactorProjects>true</useAllReactorProjects>
<includes>
<include>com.xiaojukeji.kafka:kafka-manager-web</include>
</includes>
</moduleSet>
</moduleSets>
</assembly>


@@ -0,0 +1,42 @@
## Version upgrade config changes
> This file is maintained starting from V2.2.0; config changes are recorded below; if a version is absent, nothing changed.
> When upgrading from a much older version, run the SQL scripts of every intermediate version that had changes, in order.
**A one-stop `Apache Kafka` cluster metrics monitoring and operations management platform**
---
### 1. Upgrading to `V2.2.0`
#### 1. MySQL changes
`2.2.0` adds one column to each of the `cluster` and `logical_cluster` tables, so run the SQL below to add them.
```sql
# add the jmx_properties column to cluster; it stores JMX auth and related config
ALTER TABLE `cluster` ADD COLUMN `jmx_properties` TEXT NULL COMMENT 'JMX配置' AFTER `security_properties`;
# add the identification column to logical_cluster, initialize it from name, then add a unique index.
# from now on, name remains the cluster display name while identification is the cluster identifier,
# composed only of letters, digits and underscores.
# when reporting data to the monitoring system, the cluster identifier now uses identification instead of name.
ALTER TABLE `logical_cluster` ADD COLUMN `identification` VARCHAR(192) NOT NULL DEFAULT '' COMMENT '逻辑集群标识' AFTER `name`;
UPDATE `logical_cluster` SET `identification`=`name` WHERE id>=0;
ALTER TABLE `logical_cluster` ADD INDEX `uniq_identification` (`identification` ASC);
```
### 2. Upgrading to `2.3.0`
#### 1. MySQL changes
`2.3.0` adds a description column to the `gateway_config` table, so run the SQL below.
```sql
ALTER TABLE `gateway_config`
ADD COLUMN `description` TEXT NULL COMMENT '描述信息' AFTER `version`;
```
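A hedged application sketch: assuming the statements for your target version are saved to a local file (the filename below is illustrative), they can be fed to the mysql client the same way the install guide applies create_mysql_table.sql:

```bash
mysql -uXXXX -pXXX -h XXX.XXX.XXX.XXX -PXXXX logi_kafka_manager < ./upgrade_v2.2.0.sql
```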

Binary file not shown. (before: 20 KiB)

Binary file not shown. (after: 785 KiB)

Binary file not shown. (after: 2.5 MiB)


@@ -14,6 +14,8 @@
 - 1. Scheduled Topic sync task
 - 2. Expert service: hot Topic partitions
 - 3. Expert service: insufficient Topic partitions
+- 4. Expert service: Topic resource governance
+- 5. Billing configuration
 ## 1. Scheduled Topic sync task
@@ -140,3 +142,27 @@ EXPIRED_TOPIC_CONFIG
     ]
 }
 ```
+## 5. Billing configuration
+Besides serving as a Kafka operations and management platform, Logi-KafkaManager also carries some resource-pricing features.
+Current pricing: the mean of the daily peak traffic over maxAvgDay days of the month is taken as the Topic's usage quota; usage quota * unit price * premium (reserved buffer) equals the month's bill.
+For the detailed calculation, see com.xiaojukeji.kafka.manager.task.dispatch.biz.CalKafkaTopicBill and com.xiaojukeji.kafka.manager.task.dispatch.biz.CalTopicStatistics.
+The billing configuration looks like this:
+Config key:
+```
+KAFKA_TOPIC_BILL_CONFIG
+```
+Config value:
+```json
+{
+    "maxAvgDay": 10,    # rule for computing the usage quota
+    "quotaRatio": 1.5,  # premium ratio
+    "priseUnitMB": 100  # unit price: how much a single MB/s of traffic costs
+}
+```
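As a worked example of the formula above (all numbers illustrative): with this config, the usage quota is the mean of the daily traffic peaks across maxAvgDay (here 10) days; if that mean comes out to 20 MB/s, the month's bill is 20 * 100 (priseUnitMB) * 1.5 (quotaRatio) = 3000.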


@@ -0,0 +1,53 @@
---
![kafka-manager-logo](../assets/images/common/logo_name.png)
**A one-stop `Apache Kafka` cluster metrics monitoring and operations management platform**
---
# How to add metrics reported to the monitoring system
## 0. Preface
LogiKM, the one-stop `Apache Kafka` cluster metrics monitoring and operations management platform, currently reports metrics such as consumer lag and Topic traffic to the monitoring system, so that users can configure alert rules on them there and thereby monitor whether their own clients behave normally.
So how do we add a new reported metric? Say we want to monitor broker traffic, broker liveness, or the cluster controller count.
Before the details: Kafka monitoring data essentially lives in the brokers, in JMX, and in ZooKeeper. LogiKM already has the basic ability to fetch data from all three, so building another metric on top of LogiKM is generally quite convenient.
Here we take the already-collected Topic traffic metric as an example of how LogiKM fetches a Topic metric and reports it.
---
## 1. Locate the metric
From our knowledge of Kafka, the Topic traffic metric is stored in JMX, so we fetch it from JMX. If you are not sure where the metric you need lives, join the Kafka Chinese community we maintain (QR code in the README) and discuss it there.
---
## 2. Fetch the metric
See the figure below for the details of fetching the Topic traffic metric.
![Topic traffic metric collection](./assets/increase_the_indicators_reported_to_monitor_system/collect_topic_metrics.jpg)
---
## 3. Report the metric
The previous step collected the Topic traffic metric; the next step is to report it to the monitoring system, which only requires formatting the data the way that system expects.
LogiKM has a monitor module for this, shown below:
![Metric reporting](./assets/increase_the_indicators_reported_to_monitor_system/sink_metrcis.png)
## 4. Additional notes
For monitoring-system integration, see:
[Monitoring system integration](./monitor_system_integrate_with_self.md)
[Monitoring system integration example: Nightingale](./monitor_system_integrate_with_n9e.md)
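As a quick way to inspect such a JMX metric outside LogiKM, the JmxTool shipped with Apache Kafka can read the Topic traffic MBean directly; a sketch, assuming a broker with JMX enabled on port 9999 and a topic named test-topic (both placeholders):

```bash
bin/kafka-run-class.sh kafka.tools.JmxTool \
  --object-name 'kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=test-topic' \
  --jmx-url service:jmx:rmi:///jndi/rmi://127.0.0.1:9999/jmxrmi \
  --reporting-interval 1000
```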


@@ -58,6 +58,9 @@ custom:
   task:
     op:
       sync-topic-enabled: false # periodically sync un-persisted Topics to the DB
+    order-auto-exec: # switches for the automatic order-approval thread
+      topic-enabled: false # auto-approval of Topic orders; false: off, true: on
+      app-enabled: false # auto-approval of App orders; false: off, true: on
 account: # LDAP-related settings; community support is still incomplete and can be ignored for now; contributions to improve it are welcome
   ldap:


@@ -31,17 +31,23 @@
 **2. Package the source code**
-After downloading the code, enter the Logi-KafkaManager main directory and run `sh build.sh`; when it finishes, a jar is generated under the `output/kafka-manager-xxx` directory.
+After downloading the code, enter the Logi-KafkaManager main directory and run `mvn -Prelease-kafka-manager -Dmaven.test.skip=true clean install -U `;
+when it finishes, a `kafka-manager-*.tar.gz` and a `kafka-manager-*.zip` file are generated under `distribution/target`;
+either archive will do;
+an already-unpacked directory is also left alongside them.
-Users on `windows` probably cannot run `sh build.sh`; instead run `mvn install` directly, which generates a kafka-manager-web-xxx.jar under `kafka-manager-web/target`.
-Once we have the jar, we continue with the steps below.
 ---
-## 3. MySQL DB initialization
+## 3. Unpack the package
+After unpacking, you can find the MySQL init file `kafka-manager/conf/create_mysql_table.sql` in the directory tree;
+initialize the DB first.
-Run the SQL in [create_mysql_table.sql](create_mysql_table.sql) to create the required MySQL database and tables; the default database name is `logi_kafka_manager`.
+## 4. MySQL DB initialization
+Run the SQL in [create_mysql_table.sql](../../distribution/conf/create_mysql_table.sql) to create the required MySQL database and tables; the default database name is `logi_kafka_manager`.
 ```
 # Example:
@@ -50,15 +56,38 @@ mysql -uXXXX -pXXX -h XXX.XXX.XXX.XXX -PXXXX < ./create_mysql_table.sql
 ---
-## 4. Start
+## 5. Modify the config
+Copy `conf/application.yml.example` to a file named `application.yml` in the same directory (conf/application.yml),
+and adjust the settings; anything you leave unchanged keeps its default;
+at the very least, point MySQL at your own instance.
-```
-# application.yml is the config file; at minimum, changing only the MySQL-related settings is enough to start
-nohup java -jar kafka-manager.jar --spring.config.location=./application.yml > /dev/null 2>&1 &
-```
+## 6. Start/stop
+The unpacked package ships startup and shutdown scripts:
+`kafka-manager/bin/shutdown.sh`
+`kafka-manager/bin/startup.sh`
-### 5. Usage
+Run sh startup.sh to start
+Run sh shutdown.sh to stop
+### 6. Usage
 For a local start, visit `http://localhost:8080` and log in with the default account and password (`admin/admin`). More: [kafka-manager user guide](../user_guide/user_guide_cn.md)
+### 7. Upgrading
+When upgrading across versions, check the config change history in the [kafka-manager upgrade notes](../../distribution/upgrade_config.md);
+packages from V2.5 onwards also ship a copy at kafka-manager/upgrade_config.md
+### 8. Launching from an IDE
+> If you want to contribute, or to launch it from an IDE,
+> first run `mvn -Dmaven.test.skip=true clean install -U `
+>
+> at this point you may comment out the `kafka-manager-console` module in [pom.xml](../../pom.xml);
+> commenting it out avoids repackaging the frontend `kafka-manager-console` into `kafka-manager-web` on every install;
+>
+> after that, simply run the main method of
+> com.xiaojukeji.kafka.manager.web.MainApplication in the `kafka-manager-web` module from your IDE


@@ -28,6 +28,8 @@
 - 16. Why does taking an app offline report operation forbidden?
 - 17. Why does a Topic that was deleted successfully reappear after a while?
 - 18. How to call login-required APIs without logging in?
+- 19. Why can't I see connection info, latency metrics, etc.?
+- 20. AppID authentication and produce/consume quotas don't take effect
 ---
@@ -200,3 +202,14 @@ for (int i= 0; i < 100000; ++i) {
 ### 18. How to call login-required APIs without logging in?
 See: [login bypass](./call_api_bypass_login.md)
+### 19. Why can't I see connection info, latency metrics, etc.?
+Connection info, latency metrics and the like depend on the DiDi kafka-gateway and the DiDi Kafka engine. Through the gateway you can see which applications are connected to a Topic, strengthening control over the Topic; the DiDi Kafka engine's built-in instrumentation exposes latency, improving observability of Topic produce/consume. This belongs to the commercial edition and is not open-sourced yet; commercial cooperation is available if needed.
+See: [Logi-KafkaManager open-source vs. commercial feature comparison](../开源版与商业版特性对比.md)
+### 20. AppID authentication and produce/consume quotas don't take effect
+AppID authentication and produce/consume quotas depend on the DiDi kafka-gateway, which authenticates clients and rate-limits produce/consume so that users cannot consume unlimited cluster bandwidth; heavy users would exhaust system resources, affect other users, and cause broker failures. This belongs to the commercial edition and is not open-sourced yet; commercial cooperation is available if needed.
+See: [Logi-KafkaManager open-source vs. commercial feature comparison](../开源版与商业版特性对比.md)


@@ -0,0 +1,55 @@
---
![kafka-manager-logo](assets/images/common/logo_name.png)
**A one-stop `Apache Kafka` cluster metrics monitoring and operations management platform**
---
**Open-source vs. commercial edition comparison**
Outline: Logi-KafkaManager's commercial features depend heavily on the DiDi Kafka Gateway and the DiDi Kafka engine.
The DiDi Kafka Gateway mainly provides service discovery, security control (authentication, produce/consume authorization, etc.) and traffic control (application quotas, etc.).
The DiDi Kafka engine mainly provides richer monitoring metrics (real-time broker latency, compression metrics, partition flush, etc.) and disk overload protection.
Note: both editions share the same product pages; the difference is that in the open-source edition, features backed by the DiDi Kafka Gateway or DiDi Kafka engine either do nothing or show no data.
| Module | Feature | Depends on | Open-source | Commercial | Notes |
| --- | --- | --- | --- | --- | --- |
| Service discovery | bootstrap address changes are transparent to clients | Gateway | | Yes | |
| Security control | Authentication (appID + password) | Gateway | | Yes | |
| | Authorization (Topic + appID) | Gateway | | Yes | |
| Metrics monitoring | Topic real-time and historical traffic | | Yes | Yes | |
| | Broker real-time and historical latency | Engine | | Yes | |
| | Partition flush | Engine | | Yes | |
| | Compression format of Topic data | Engine | | Yes | |
| | Connection info (which applications connect to a Topic) | Gateway | | Yes | |
| | Traffic control (application quotas, produce/consume rate limiting, etc.) | Gateway | | Yes | |
| Monitoring & alerting | | | Yes | Yes | Reporting metrics requires an external monitoring system (Nightingale or an in-house system) |
| Topic operations | Request partitions | | Yes | Yes | |
| | Adjust quotas | Gateway | | | Yes |
| | Topic data sampling | | Yes | Yes | |
| | Consumer group management (reset offsets, etc.) | | Yes | Yes | |
| Cluster management | Cluster onboarding (deployment) | | Yes | Yes | Clusters must be deployed manually or via an external automated deployment system (Nightingale) |
| | Cluster metrics monitoring | | Yes | Yes | |
| | Management by Region and logical cluster | | Yes | Yes | |
| | Topic migration | | Yes | Yes | |
| | Cluster tasks (version management, upgrade, scale in/out, rollback, etc.) | | Yes | Yes | Requires Nightingale or an automated deployment system |
| | Disk overload protection | Engine | | Yes | |
| | Pin a broker as the preferred controller | Gateway | | Yes | |
| Gateway management | Manage Gateway config files | Gateway | | Yes | |
| Resource governance | Expert services (hot Topic partitions, insufficient partitions, long-unused Topics, abnormal traffic) | | Yes | Yes | Open-source: problem detection plus basic remediation; commercial: adds DiDi's internal governance experience for more expert remediation |
| | Health score | | Yes | Yes | Open-source: a basic health-score algorithm; commercial: more metrics and a customizable algorithm |
| Operations management | Resource approval (application, Topic, partition, quota and cluster requests all go through order approval) | | Yes | Yes | |
| | Billing (Topic and cluster cost accounting based on traffic) | | Yes | Yes | |
**Summary**
DiDi LogiKM's commercial features come from the DiDi Kafka Gateway, the DiDi Kafka engine, internally accumulated resource-governance expertise, and customizable health-score algorithms.
Scenario-wise, the open-source edition of DiDi Logi-KafkaManager fully covers users' needs in the core Kafka scenarios (cluster operations, Topic management, monitoring & alerting, resource governance) and performs very well there. The commercial edition clearly improves on it in security control, traffic control, richer metrics monitoring, and resource-governance expertise, better matching enterprise business needs.
Beyond that, the commercial edition can also be customized at the source level to actual enterprise requirements, with operations support, stability guarantees, and operational services.


@@ -25,6 +25,8 @@ public class TopicCreationConstant {
     public static final String TOPIC_RETENTION_TIME_KEY_NAME = "retention.ms";
+    public static final String TOPIC_RETENTION_BYTES_KEY_NAME = "retention.bytes";
     public static final Long DEFAULT_QUOTA = 3 * 1024 * 1024L;
     public static Properties createNewProperties(Long retentionTime) {

@@ -25,6 +25,8 @@ public class MineTopicSummary {
     private Integer access;
+    private String description;
     public Long getLogicalClusterId() {
         return logicalClusterId;
     }
@@ -105,6 +107,14 @@ public class MineTopicSummary {
         this.access = access;
     }
+    public String getDescription() {
+        return description;
+    }
+    public void setDescription(String description) {
+        this.description = description;
+    }
     @Override
     public String toString() {
         return "MineTopicSummary{" +


@@ -37,6 +37,8 @@ public class TopicBasicDTO {
     private Long retentionTime;
+    private Long retentionBytes;
     public Long getClusterId() {
         return clusterId;
     }
@@ -157,6 +159,14 @@ public class TopicBasicDTO {
         this.retentionTime = retentionTime;
     }
+    public Long getRetentionBytes() {
+        return retentionBytes;
+    }
+    public void setRetentionBytes(Long retentionBytes) {
+        this.retentionBytes = retentionBytes;
+    }
     @Override
     public String toString() {
         return "TopicBasicDTO{" +
@@ -166,7 +176,7 @@ public class TopicBasicDTO {
             ", principals='" + principals + '\'' +
             ", topicName='" + topicName + '\'' +
             ", description='" + description + '\'' +
-            ", regionNameList='" + regionNameList + '\'' +
+            ", regionNameList=" + regionNameList +
             ", score=" + score +
             ", topicCodeC='" + topicCodeC + '\'' +
             ", partitionNum=" + partitionNum +
@@ -175,6 +185,7 @@ public class TopicBasicDTO {
             ", modifyTime=" + modifyTime +
             ", createTime=" + createTime +
             ", retentionTime=" + retentionTime +
+            ", retentionBytes=" + retentionBytes +
             '}';
     }
 }


@@ -27,8 +27,11 @@ public class OrderVO {
@ApiModelProperty(value = "工单状态, 0:待审批, 1:通过, 2:拒绝, 3:取消") @ApiModelProperty(value = "工单状态, 0:待审批, 1:通过, 2:拒绝, 3:取消")
private Integer status; private Integer status;
@ApiModelProperty(value = "申请/审核时间") @ApiModelProperty(value = "申请时间")
private Date gmtTime; private Date gmtCreate;
@ApiModelProperty(value = "审核时间")
private Date gmtHandle;
public Long getId() { public Long getId() {
return id; return id;
@@ -70,12 +73,20 @@ public class OrderVO {
this.status = status; this.status = status;
} }
public Date getGmtTime() { public Date getGmtCreate() {
return gmtTime; return gmtCreate;
} }
public void setGmtTime(Date gmtTime) { public void setGmtCreate(Date gmtCreate) {
this.gmtTime = gmtTime; this.gmtCreate = gmtCreate;
}
public Date getGmtHandle() {
return gmtHandle;
}
public void setGmtHandle(Date gmtHandle) {
this.gmtHandle = gmtHandle;
} }
public String getApplicant() { public String getApplicant() {
@@ -95,7 +106,7 @@ public class OrderVO {
", applicant='" + applicant + '\'' + ", applicant='" + applicant + '\'' +
", description='" + description + '\'' + ", description='" + description + '\'' +
", status=" + status + ", status=" + status +
", gmtTime=" + gmtTime + ", gmtTime=" + gmtCreate +
'}'; '}';
} }
} }


@@ -33,6 +33,9 @@ public class TopicBasicVO {
@ApiModelProperty(value = "存储时间(ms)") @ApiModelProperty(value = "存储时间(ms)")
private Long retentionTime; private Long retentionTime;
@ApiModelProperty(value = "单分区数据保存大小(Byte)")
private Long retentionBytes;
@ApiModelProperty(value = "创建时间") @ApiModelProperty(value = "创建时间")
private Long createTime; private Long createTime;
@@ -62,12 +65,20 @@ public class TopicBasicVO {
this.clusterId = clusterId; this.clusterId = clusterId;
} }
public String getTopicCodeC() { public String getAppId() {
return topicCodeC; return appId;
} }
public void setTopicCodeC(String topicCodeC) { public void setAppId(String appId) {
this.topicCodeC = topicCodeC; this.appId = appId;
}
public String getAppName() {
return appName;
}
public void setAppName(String appName) {
this.appName = appName;
} }
public Integer getPartitionNum() { public Integer getPartitionNum() {
@@ -86,22 +97,6 @@ public class TopicBasicVO {
this.replicaNum = replicaNum; this.replicaNum = replicaNum;
} }
public Long getModifyTime() {
return modifyTime;
}
public void setModifyTime(Long modifyTime) {
this.modifyTime = modifyTime;
}
public Long getCreateTime() {
return createTime;
}
public void setCreateTime(Long createTime) {
this.createTime = createTime;
}
public String getPrincipals() { public String getPrincipals() {
return principals; return principals;
} }
@@ -110,30 +105,6 @@ public class TopicBasicVO {
this.principals = principals; this.principals = principals;
} }
public String getDescription() {
return description;
}
public void setDescription(String description) {
this.description = description;
}
public void setAppId(String appId) {
this.appId = appId;
}
public void setBootstrapServers(String bootstrapServers) {
this.bootstrapServers = bootstrapServers;
}
public String getAppId() {
return appId;
}
public String getBootstrapServers() {
return bootstrapServers;
}
public Long getRetentionTime() { public Long getRetentionTime() {
return retentionTime; return retentionTime;
} }
@@ -142,12 +113,28 @@ public class TopicBasicVO {
this.retentionTime = retentionTime; this.retentionTime = retentionTime;
} }
public String getAppName() { public Long getRetentionBytes() {
return appName; return retentionBytes;
} }
public void setAppName(String appName) { public void setRetentionBytes(Long retentionBytes) {
this.appName = appName; this.retentionBytes = retentionBytes;
}
public Long getCreateTime() {
return createTime;
}
public void setCreateTime(Long createTime) {
this.createTime = createTime;
}
public Long getModifyTime() {
return modifyTime;
}
public void setModifyTime(Long modifyTime) {
this.modifyTime = modifyTime;
} }
public Integer getScore() { public Integer getScore() {
@@ -158,6 +145,30 @@ public class TopicBasicVO {
this.score = score; this.score = score;
} }
public String getTopicCodeC() {
return topicCodeC;
}
public void setTopicCodeC(String topicCodeC) {
this.topicCodeC = topicCodeC;
}
public String getDescription() {
return description;
}
public void setDescription(String description) {
this.description = description;
}
public String getBootstrapServers() {
return bootstrapServers;
}
public void setBootstrapServers(String bootstrapServers) {
this.bootstrapServers = bootstrapServers;
}
public List<String> getRegionNameList() { public List<String> getRegionNameList() {
return regionNameList; return regionNameList;
} }
@@ -176,6 +187,7 @@ public class TopicBasicVO {
", replicaNum=" + replicaNum + ", replicaNum=" + replicaNum +
", principals='" + principals + '\'' + ", principals='" + principals + '\'' +
", retentionTime=" + retentionTime + ", retentionTime=" + retentionTime +
", retentionBytes=" + retentionBytes +
", createTime=" + createTime + ", createTime=" + createTime +
", modifyTime=" + modifyTime + ", modifyTime=" + modifyTime +
", score=" + score + ", score=" + score +


@@ -36,6 +36,9 @@ public class TopicMineVO {
@ApiModelProperty(value = "状态, 0:无权限, 1:可消费 2:可发送 3:可消费发送 4:可管理") @ApiModelProperty(value = "状态, 0:无权限, 1:可消费 2:可发送 3:可消费发送 4:可管理")
private Integer access; private Integer access;
@ApiModelProperty(value = "备注")
private String description;
public Long getClusterId() { public Long getClusterId() {
return clusterId; return clusterId;
} }
@@ -108,6 +111,14 @@ public class TopicMineVO {
this.access = access; this.access = access;
} }
public String getDescription() {
return description;
}
public void setDescription(String description) {
this.description = description;
}
@Override @Override
public String toString() { public String toString() {
return "TopicMineVO{" + return "TopicMineVO{" +


@@ -0,0 +1,20 @@
package com.xiaojukeji.kafka.manager.common.utils;
public class BackoffUtils {
private BackoffUtils() {
}
public static void backoff(long timeUnitMs) {
if (timeUnitMs <= 0) {
return;
}
try {
Thread.sleep(timeUnitMs);
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
} catch (Exception e) {
// ignore
}
}
}


@@ -1,7 +1,7 @@
 package com.xiaojukeji.kafka.manager.common.utils.factory;
-import com.alibaba.fastjson.JSONObject;
 import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO;
+import com.xiaojukeji.kafka.manager.common.utils.JsonUtils;
 import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
 import org.apache.commons.pool2.BasePooledObjectFactory;
 import org.apache.commons.pool2.PooledObject;
@@ -16,7 +16,7 @@ import java.util.Properties;
  * @author zengqiao
  * @date 20/8/24
  */
-public class KafkaConsumerFactory extends BasePooledObjectFactory<KafkaConsumer> {
+public class KafkaConsumerFactory extends BasePooledObjectFactory<KafkaConsumer<String, String>> {
     private ClusterDO clusterDO;

     public KafkaConsumerFactory(ClusterDO clusterDO) {
@@ -25,17 +25,17 @@ public class KafkaConsumerFactory extends BasePooledObjectFactory<KafkaConsumer>
     @Override
     public KafkaConsumer create() {
-        return new KafkaConsumer(createKafkaConsumerProperties(clusterDO));
+        return new KafkaConsumer<String, String>(createKafkaConsumerProperties(clusterDO));
     }

     @Override
-    public PooledObject<KafkaConsumer> wrap(KafkaConsumer obj) {
-        return new DefaultPooledObject<KafkaConsumer>(obj);
+    public PooledObject<KafkaConsumer<String, String>> wrap(KafkaConsumer<String, String> obj) {
+        return new DefaultPooledObject<>(obj);
     }

     @Override
-    public void destroyObject(final PooledObject<KafkaConsumer> p) throws Exception {
-        KafkaConsumer kafkaConsumer = p.getObject();
+    public void destroyObject(final PooledObject<KafkaConsumer<String, String>> p) throws Exception {
+        KafkaConsumer<String, String> kafkaConsumer = p.getObject();
         if (ValidateUtils.isNull(kafkaConsumer)) {
             return;
         }
@@ -57,7 +57,7 @@ public class KafkaConsumerFactory extends BasePooledObjectFactory<KafkaConsumer>
         if (ValidateUtils.isBlank(clusterDO.getSecurityProperties())) {
             return properties;
         }
-        properties.putAll(JSONObject.parseObject(clusterDO.getSecurityProperties(), Properties.class));
+        properties.putAll(JsonUtils.stringToObj(clusterDO.getSecurityProperties(), Properties.class));
         return properties;
     }
 }
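
Typing the factory as BasePooledObjectFactory<KafkaConsumer<String, String>> lets the pool and its call sites stay generic end to end, with no raw-type casts at borrow time. A minimal wiring sketch under that assumption (ClusterDO and KafkaConsumerFactory are the project's own classes; the pool size is illustrative):

    import java.time.Duration;
    import org.apache.commons.pool2.impl.GenericObjectPool;
    import org.apache.commons.pool2.impl.GenericObjectPoolConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class ConsumerPoolWiringDemo {
        // Builds a typed pool around the factory from the diff above.
        public static GenericObjectPool<KafkaConsumer<String, String>> buildPool(ClusterDO clusterDO) {
            GenericObjectPoolConfig<KafkaConsumer<String, String>> config = new GenericObjectPoolConfig<>();
            config.setMaxTotal(24);
            return new GenericObjectPool<>(new KafkaConsumerFactory(clusterDO), config);
        }

        public static void readOnce(GenericObjectPool<KafkaConsumer<String, String>> pool) throws Exception {
            KafkaConsumer<String, String> consumer = pool.borrowObject();  // typed, no cast needed
            try {
                consumer.poll(Duration.ofMillis(100));
            } finally {
                pool.returnObject(consumer);  // always hand the instance back
            }
        }
    }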


@@ -1,5 +1,6 @@
 package com.xiaojukeji.kafka.manager.common.utils.jmx;
+import com.xiaojukeji.kafka.manager.common.utils.BackoffUtils;
 import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -79,7 +80,8 @@ public class JmxConnectorWrap {
         try {
             Map<String, Object> environment = new HashMap<String, Object>();
             if (!ValidateUtils.isBlank(this.jmxConfig.getUsername()) && !ValidateUtils.isBlank(this.jmxConfig.getPassword())) {
-                environment.put(JMXConnector.CREDENTIALS, Arrays.asList(this.jmxConfig.getUsername(), this.jmxConfig.getPassword()));
+                // fixed by riyuetianmu
+                environment.put(JMXConnector.CREDENTIALS, new String[]{this.jmxConfig.getUsername(), this.jmxConfig.getPassword()});
             }
             if (jmxConfig.isOpenSSL() != null && this.jmxConfig.isOpenSSL()) {
                 environment.put(Context.SECURITY_PROTOCOL, "ssl");
@@ -145,18 +147,16 @@ public class JmxConnectorWrap {
         long now = System.currentTimeMillis();
         while (true) {
             try {
-                if (System.currentTimeMillis() - now > 60000) {
-                    break;
-                }
                 int num = atomicInteger.get();
                 if (num <= 0) {
-                    Thread.sleep(2);
-                    continue;
+                    BackoffUtils.backoff(2);
                 }
-                if (atomicInteger.compareAndSet(num, num - 1)) {
+                if (atomicInteger.compareAndSet(num, num - 1) || System.currentTimeMillis() - now > 6000) {
                     break;
                 }
             } catch (Exception e) {
+                // ignore
             }
         }
     }
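
The credentials fix matters because the JMX RMI connector's default authenticator expects the JMXConnector.CREDENTIALS value to be a String[] of user and password; a List fails authentication, so secured brokers could not be queried. A standalone sketch of the corrected contract (URL, user, and password are placeholders):

    import java.util.HashMap;
    import java.util.Map;
    import javax.management.MBeanServerConnection;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class JmxCredentialsDemo {
        public static void main(String[] args) throws Exception {
            JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://broker-host:9999/jmxrmi");
            Map<String, Object> env = new HashMap<>();
            // The JMX remote API requires a String[]{user, password}, not a List.
            env.put(JMXConnector.CREDENTIALS, new String[]{"admin", "admin-password"});
            try (JMXConnector connector = JMXConnectorFactory.connect(url, env)) {
                MBeanServerConnection connection = connector.getMBeanServerConnection();
                System.out.println("MBean count: " + connection.getMBeanCount());
            }
        }
    }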


@@ -1,18 +1,18 @@
 {
   "name": "logi-kafka",
-  "version": "2.4.1",
+  "version": "2.5.0",
   "description": "",
   "scripts": {
     "start": "webpack-dev-server",
     "daily-build": "cross-env NODE_ENV=production webpack",
     "pre-build": "cross-env NODE_ENV=production webpack",
-    "prod-build": "cross-env NODE_ENV=production webpack"
+    "prod-build": "cross-env NODE_ENV=production webpack",
+    "fix-memory": "cross-env LIMIT=4096 increase-memory-limit"
   },
   "author": "",
   "license": "ISC",
   "devDependencies": {
     "@hot-loader/react-dom": "^16.8.6",
+    "@types/clipboard": "^2.0.1",
     "@types/echarts": "^4.4.1",
     "@types/lodash.debounce": "^4.0.6",
     "@types/react": "^16.8.8",
@@ -21,12 +21,13 @@
     "@types/spark-md5": "^3.0.2",
     "antd": "^3.26.15",
     "clean-webpack-plugin": "^3.0.0",
-    "clipboard": "2.0.6",
+    "clipboard": "^2.0.8",
     "cross-env": "^7.0.2",
     "css-loader": "^2.1.0",
     "echarts": "^4.5.0",
     "file-loader": "^5.0.2",
     "html-webpack-plugin": "^3.2.0",
+    "increase-memory-limit": "^1.0.7",
     "less": "^3.9.0",
     "less-loader": "^4.1.0",
     "mini-css-extract-plugin": "^0.6.0",

(binary image file changed, not shown; after: 36 KiB)

(binary image file changed, not shown; before: 125 KiB, after: 125 KiB)


@@ -60,6 +60,22 @@ export class ChartWithDatePicker extends React.Component<IChartProps> {
   public changeChartOptions(options: any) {
     const noData = options.series.length ? false : true;
     this.setState({ noData });
+    options.tooltip.formatter = (params: any) => {
+      var res =
+        "<div style='margin-bottom:5px;padding:0 12px;width:100%;height:24px;line-height:24px;border-radius:3px;'><p>" +
+        params[0].data.time +
+        " </p></div>";
+      for (var i = 0; i < params.length; i++) {
+        res += `<div key=${params[i].seriesName} style="color: #fff;padding:0 12px;line-height: 24px">
+          <span style="display:inline-block;margin-right:5px;border-radius:50%;width:10px;height:10px;background-color:${[
+            params[i].color,
+          ]};"></span>
+          ${params[i].seriesName}
+          ${params[i].data[params[i].seriesName]}
+        </div>`;
+      }
+      return res;
+    }
     this.chart.setOption(options, true);
   }
@@ -79,7 +95,7 @@ export class ChartWithDatePicker extends React.Component<IChartProps> {
   public render() {
     const { customerNode } = this.props;
     return (
-      <div className="status-box" style={{minWidth: '930px'}}>
+      <div className="status-box" style={{ minWidth: '930px' }}>
         <div className="status-graph">
           <div className="k-toolbar">
             {customerNode}


@@ -54,7 +54,7 @@ export class AlarmSelect extends React.Component<IAlarmSelectProps> {
   <a
     className="icon-color"
     target="_blank"
-    href="https://github.com/didi/kafka-manager"
+    href="https://github.com/didi/Logi-KafkaManager/blob/master/docs/user_guide/faq.md"
   >
   </a>


@@ -7,7 +7,7 @@ import { urlPrefix } from 'constants/left-menu';
 import { region, IRegionIdcs } from 'store/region';
 import logoUrl from '../../assets/image/kafka-logo.png';
 import userIcon from '../../assets/image/normal.png';
-import weChat from '../../assets/image/wechat.jpeg';
+import weChat from '../../assets/image/weChat.png';
 import { users } from 'store/users';
 import { observer } from 'mobx-react';
 import { Link } from 'react-router-dom';
@@ -60,8 +60,8 @@ export const Header = observer((props: IHeader) => {
     });
   };
   const content = (
-    <div style={{ height: '250px', padding: '5px' }} className="kafka-avatar-img">
-      <img style={{ width: '190px', height: '246px' }} src={weChat} alt="" />
+    <div style={{ height: '200px', padding: '5px' }} className="kafka-avatar-img">
+      <img style={{ width: '190px', height: '190px' }} src={weChat} alt="" />
     </div>
   );
   const helpCenter = (
@@ -144,8 +144,8 @@ export const Header = observer((props: IHeader) => {
     <div className="kafka-header-container">
       <div className="left-content">
         <img className="kafka-header-icon" src={logoUrl} alt="" />
-        <span className="kafka-header-text">Kafka Manager</span>
-        <a className='kafka-header-version' href="https://github.com/didi/Logi-KafkaManager/releases" target='_blank'>v2.4.0</a>
+        <span className="kafka-header-text">LogiKM</span>
+        <a className='kafka-header-version' href="https://github.com/didi/Logi-KafkaManager/releases" target='_blank'>v2.5.0</a>
         {/* version link */}
       </div>
       <div className="mid-content">


@@ -115,11 +115,19 @@ export class OrderList extends SearchAndFilterContainer {
       status,
       {
         title: '申请时间',
-        dataIndex: 'gmtTime',
-        key: 'gmtTime',
-        sorter: (a: IBaseOrder, b: IBaseOrder) => b.gmtTime - a.gmtTime,
-        render: (t: number) => moment(t).format(timeFormat),
-      }, {
+        dataIndex: 'gmtCreate',
+        key: 'gmtCreate',
+        sorter: (a: IBaseOrder, b: IBaseOrder) => b.gmtCreate - a.gmtCreate,
+        render: (t: number) => t ? moment(t).format(timeFormat) : '-',
+      },
+      {
+        title: '审批时间',
+        dataIndex: 'gmtHandle',
+        key: 'gmtHandle',
+        sorter: (a: IBaseOrder, b: IBaseOrder) => b.gmtHandle - a.gmtHandle,
+        render: (t: number) => t ? moment(t).format(timeFormat) : '-',
+      },
+      {
         title: '操作',
         key: 'operation',
         dataIndex: 'operation',


@@ -1,12 +1,15 @@
 <!DOCTYPE html>
 <html lang="en">
 <head>
     <meta charset="UTF-8">
     <meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=2">
-    <title>KafkaManager</title>
+    <title>LogiKM</title>
 </head>
 <body>
     <div id="root"></div>
     <div id="modal"></div>
 </body>
 </html>


@@ -17,6 +17,9 @@ public class ConsumerMetadataCache {
     private static final Map<Long, ConsumerMetadata> CG_METADATA_IN_BK_MAP = new ConcurrentHashMap<>();

+    private ConsumerMetadataCache() {
+    }
+
     public static void putConsumerMetadataInZK(Long clusterId, ConsumerMetadata consumerMetadata) {
         if (clusterId == null || consumerMetadata == null) {
             return;


@@ -1,7 +1,7 @@
 package com.xiaojukeji.kafka.manager.service.cache;
-import com.alibaba.fastjson.JSONObject;
 import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO;
+import com.xiaojukeji.kafka.manager.common.utils.JsonUtils;
 import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
 import com.xiaojukeji.kafka.manager.common.utils.factory.KafkaConsumerFactory;
 import kafka.admin.AdminClient;
@@ -26,19 +26,22 @@ import java.util.concurrent.locks.ReentrantLock;
  * @date 19/12/24
  */
 public class KafkaClientPool {
-    private final static Logger LOGGER = LoggerFactory.getLogger(KafkaClientPool.class);
+    private static final Logger LOGGER = LoggerFactory.getLogger(KafkaClientPool.class);

     /**
      * AdminClient
      */
-    private static Map<Long, AdminClient> AdminClientMap = new ConcurrentHashMap<>();
-    private static Map<Long, KafkaProducer<String, String>> KAFKA_PRODUCER_MAP = new ConcurrentHashMap<>();
-    private static Map<Long, GenericObjectPool<KafkaConsumer>> KAFKA_CONSUMER_POOL = new ConcurrentHashMap<>();
+    private static final Map<Long, AdminClient> ADMIN_CLIENT_MAP = new ConcurrentHashMap<>();
+    private static final Map<Long, KafkaProducer<String, String>> KAFKA_PRODUCER_MAP = new ConcurrentHashMap<>();
+    private static final Map<Long, GenericObjectPool<KafkaConsumer<String, String>>> KAFKA_CONSUMER_POOL = new ConcurrentHashMap<>();
     private static ReentrantLock lock = new ReentrantLock();

+    private KafkaClientPool() {
+    }
+
     private static void initKafkaProducerMap(Long clusterId) {
         ClusterDO clusterDO = PhysicalClusterMetadataManager.getClusterFromCache(clusterId);
         if (clusterDO == null) {
@@ -55,7 +58,7 @@ public class KafkaClientPool {
             properties.setProperty(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");
             properties.setProperty(ProducerConfig.LINGER_MS_CONFIG, "10");
             properties.setProperty(ProducerConfig.RETRIES_CONFIG, "3");
-            KAFKA_PRODUCER_MAP.put(clusterId, new KafkaProducer<String, String>(properties));
+            KAFKA_PRODUCER_MAP.put(clusterId, new KafkaProducer<>(properties));
         } catch (Exception e) {
             LOGGER.error("create kafka producer failed, clusterDO:{}.", clusterDO, e);
         } finally {
@@ -77,25 +80,22 @@ public class KafkaClientPool {
         if (ValidateUtils.isNull(kafkaProducer)) {
             return false;
         }
-        kafkaProducer.send(new ProducerRecord<String, String>(topicName, data));
+        kafkaProducer.send(new ProducerRecord<>(topicName, data));
         return true;
     }

     private static void initKafkaConsumerPool(ClusterDO clusterDO) {
         lock.lock();
         try {
-            GenericObjectPool<KafkaConsumer> objectPool = KAFKA_CONSUMER_POOL.get(clusterDO.getId());
+            GenericObjectPool<KafkaConsumer<String, String>> objectPool = KAFKA_CONSUMER_POOL.get(clusterDO.getId());
             if (objectPool != null) {
                 return;
             }
-            GenericObjectPoolConfig config = new GenericObjectPoolConfig();
+            GenericObjectPoolConfig<KafkaConsumer<String, String>> config = new GenericObjectPoolConfig<>();
             config.setMaxIdle(24);
             config.setMinIdle(24);
             config.setMaxTotal(24);
-            KAFKA_CONSUMER_POOL.put(
-                    clusterDO.getId(),
-                    new GenericObjectPool<KafkaConsumer>(new KafkaConsumerFactory(clusterDO), config)
-            );
+            KAFKA_CONSUMER_POOL.put(clusterDO.getId(), new GenericObjectPool<>(new KafkaConsumerFactory(clusterDO), config));
         } catch (Exception e) {
             LOGGER.error("create kafka consumer pool failed, clusterDO:{}.", clusterDO, e);
         } finally {
@@ -106,7 +106,7 @@ public class KafkaClientPool {
     public static void closeKafkaConsumerPool(Long clusterId) {
         lock.lock();
         try {
-            GenericObjectPool<KafkaConsumer> objectPool = KAFKA_CONSUMER_POOL.remove(clusterId);
+            GenericObjectPool<KafkaConsumer<String, String>> objectPool = KAFKA_CONSUMER_POOL.remove(clusterId);
             if (objectPool == null) {
                 return;
             }
@@ -118,11 +118,11 @@ public class KafkaClientPool {
         }
     }

-    public static KafkaConsumer borrowKafkaConsumerClient(ClusterDO clusterDO) {
+    public static KafkaConsumer<String, String> borrowKafkaConsumerClient(ClusterDO clusterDO) {
         if (ValidateUtils.isNull(clusterDO)) {
             return null;
         }
-        GenericObjectPool<KafkaConsumer> objectPool = KAFKA_CONSUMER_POOL.get(clusterDO.getId());
+        GenericObjectPool<KafkaConsumer<String, String>> objectPool = KAFKA_CONSUMER_POOL.get(clusterDO.getId());
         if (ValidateUtils.isNull(objectPool)) {
             initKafkaConsumerPool(clusterDO);
             objectPool = KAFKA_CONSUMER_POOL.get(clusterDO.getId());
@@ -139,11 +139,11 @@ public class KafkaClientPool {
         return null;
     }

-    public static void returnKafkaConsumerClient(Long physicalClusterId, KafkaConsumer kafkaConsumer) {
+    public static void returnKafkaConsumerClient(Long physicalClusterId, KafkaConsumer<String, String> kafkaConsumer) {
         if (ValidateUtils.isNull(physicalClusterId) || ValidateUtils.isNull(kafkaConsumer)) {
             return;
         }
-        GenericObjectPool<KafkaConsumer> objectPool = KAFKA_CONSUMER_POOL.get(physicalClusterId);
+        GenericObjectPool<KafkaConsumer<String, String>> objectPool = KAFKA_CONSUMER_POOL.get(physicalClusterId);
         if (ValidateUtils.isNull(objectPool)) {
             return;
         }
@@ -155,7 +155,7 @@ public class KafkaClientPool {
     }

     public static AdminClient getAdminClient(Long clusterId) {
-        AdminClient adminClient = AdminClientMap.get(clusterId);
+        AdminClient adminClient = ADMIN_CLIENT_MAP.get(clusterId);
         if (adminClient != null) {
             return adminClient;
         }
@@ -166,26 +166,26 @@ public class KafkaClientPool {
         Properties properties = createProperties(clusterDO, false);
         lock.lock();
         try {
-            adminClient = AdminClientMap.get(clusterId);
+            adminClient = ADMIN_CLIENT_MAP.get(clusterId);
             if (adminClient != null) {
                 return adminClient;
             }
-            AdminClientMap.put(clusterId, AdminClient.create(properties));
+            ADMIN_CLIENT_MAP.put(clusterId, AdminClient.create(properties));
         } catch (Exception e) {
             LOGGER.error("create kafka admin client failed, clusterId:{}.", clusterId, e);
         } finally {
             lock.unlock();
         }
-        return AdminClientMap.get(clusterId);
+        return ADMIN_CLIENT_MAP.get(clusterId);
     }

     public static void closeAdminClient(ClusterDO cluster) {
-        if (AdminClientMap.containsKey(cluster.getId())) {
-            AdminClientMap.get(cluster.getId()).close();
+        if (ADMIN_CLIENT_MAP.containsKey(cluster.getId())) {
+            ADMIN_CLIENT_MAP.get(cluster.getId()).close();
         }
     }

-    public static Properties createProperties(ClusterDO clusterDO, Boolean serialize) {
+    public static Properties createProperties(ClusterDO clusterDO, boolean serialize) {
         Properties properties = new Properties();
         properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, clusterDO.getBootstrapServers());
         if (serialize) {
@@ -198,8 +198,7 @@ public class KafkaClientPool {
         if (ValidateUtils.isBlank(clusterDO.getSecurityProperties())) {
             return properties;
         }
-        Properties securityProperties = JSONObject.parseObject(clusterDO.getSecurityProperties(), Properties.class);
-        properties.putAll(securityProperties);
+        properties.putAll(JsonUtils.stringToObj(clusterDO.getSecurityProperties(), Properties.class));
         return properties;
     }
 }
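
Because the consumer pool is capped at 24 instances per cluster, every borrow must be paired with a return or the pool eventually drains. A sketch of the intended call-site pattern, using only the helper names from the diff above (clusterDO is assumed to be a configured cluster):

    // Hypothetical call site; KafkaClientPool and ClusterDO are the project's own classes.
    public static void readWithPooledConsumer(ClusterDO clusterDO) {
        KafkaConsumer<String, String> consumer = KafkaClientPool.borrowKafkaConsumerClient(clusterDO);
        if (consumer == null) {
            return; // pool could not be initialized for this cluster
        }
        try {
            // ... poll records or look up offsets here ...
        } finally {
            // return the consumer even on failure so the fixed-size pool does not leak
            KafkaClientPool.returnKafkaConsumerClient(clusterDO.getId(), consumer);
        }
    }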


@@ -14,7 +14,10 @@ public class KafkaMetricsCache {
     /**
      * <clusterId, Metrics List>
      */
-    private static Map<Long, Map<String, TopicMetrics>> TopicMetricsMap = new ConcurrentHashMap<>();
+    private static final Map<Long, Map<String, TopicMetrics>> TOPIC_METRICS_MAP = new ConcurrentHashMap<>();
+
+    private KafkaMetricsCache() {
+    }

     public static void putTopicMetricsToCache(Long clusterId, List<TopicMetrics> dataList) {
         if (clusterId == null || dataList == null) {
@@ -24,22 +27,22 @@ public class KafkaMetricsCache {
         for (TopicMetrics topicMetrics : dataList) {
             subMetricsMap.put(topicMetrics.getTopicName(), topicMetrics);
         }
-        TopicMetricsMap.put(clusterId, subMetricsMap);
+        TOPIC_METRICS_MAP.put(clusterId, subMetricsMap);
     }

     public static Map<String, TopicMetrics> getTopicMetricsFromCache(Long clusterId) {
-        return TopicMetricsMap.getOrDefault(clusterId, Collections.emptyMap());
+        return TOPIC_METRICS_MAP.getOrDefault(clusterId, Collections.emptyMap());
     }

     public static Map<Long, Map<String, TopicMetrics>> getAllTopicMetricsFromCache() {
-        return TopicMetricsMap;
+        return TOPIC_METRICS_MAP;
     }

     public static TopicMetrics getTopicMetricsFromCache(Long clusterId, String topicName) {
         if (clusterId == null || topicName == null) {
             return null;
         }
-        Map<String, TopicMetrics> subMap = TopicMetricsMap.getOrDefault(clusterId, Collections.emptyMap());
+        Map<String, TopicMetrics> subMap = TOPIC_METRICS_MAP.getOrDefault(clusterId, Collections.emptyMap());
         return subMap.get(topicName);
     }
 }


@@ -160,7 +160,7 @@ public class LogicalClusterMetadataManager {
     public void flush() {
         List<LogicalClusterDO> logicalClusterDOList = logicalClusterService.listAll();
         if (ValidateUtils.isNull(logicalClusterDOList)) {
-            logicalClusterDOList = Collections.EMPTY_LIST;
+            logicalClusterDOList = Collections.emptyList();
         }
         Set<Long> inDbLogicalClusterIds = logicalClusterDOList.stream()
                 .map(LogicalClusterDO::getId)
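
Collections.EMPTY_LIST is a raw List, so assigning it to List<LogicalClusterDO> compiles only with an unchecked warning, while Collections.emptyList() infers the element type. A minimal illustration (hypothetical, not from this patch):

    import java.util.Collections;
    import java.util.List;

    public class EmptyListDemo {
        public static void main(String[] args) {
            @SuppressWarnings("unchecked")
            List<String> raw = Collections.EMPTY_LIST;    // raw type: needs the suppression to compile cleanly

            List<String> typed = Collections.emptyList(); // type-safe: T is inferred as String
            System.out.println(raw.size() + " " + typed.size());
        }
    }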


@@ -3,10 +3,12 @@ package com.xiaojukeji.kafka.manager.service.cache;
 import com.xiaojukeji.kafka.manager.common.bizenum.KafkaBrokerRoleEnum;
 import com.xiaojukeji.kafka.manager.common.constant.Constant;
 import com.xiaojukeji.kafka.manager.common.constant.KafkaConstant;
+import com.xiaojukeji.kafka.manager.common.constant.TopicCreationConstant;
 import com.xiaojukeji.kafka.manager.common.entity.KafkaVersion;
 import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO;
 import com.xiaojukeji.kafka.manager.common.utils.JsonUtils;
 import com.xiaojukeji.kafka.manager.common.utils.ListUtils;
+import com.xiaojukeji.kafka.manager.common.utils.NumberUtils;
 import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
 import com.xiaojukeji.kafka.manager.common.utils.jmx.JmxConfig;
 import com.xiaojukeji.kafka.manager.common.utils.jmx.JmxConnectorWrap;
@@ -37,7 +39,7 @@ import java.util.concurrent.ConcurrentHashMap;
  */
 @Service
 public class PhysicalClusterMetadataManager {
-    private final static Logger LOGGER = LoggerFactory.getLogger(PhysicalClusterMetadataManager.class);
+    private static final Logger LOGGER = LoggerFactory.getLogger(PhysicalClusterMetadataManager.class);

     @Autowired
     private ControllerDao controllerDao;
@@ -48,22 +50,22 @@ public class PhysicalClusterMetadataManager {
     @Autowired
     private ClusterService clusterService;

-    private final static Map<Long, ClusterDO> CLUSTER_MAP = new ConcurrentHashMap<>();
-    private final static Map<Long, ControllerData> CONTROLLER_DATA_MAP = new ConcurrentHashMap<>();
-    private final static Map<Long, ZkConfigImpl> ZK_CONFIG_MAP = new ConcurrentHashMap<>();
-    private final static Map<Long, Map<String, TopicMetadata>> TOPIC_METADATA_MAP = new ConcurrentHashMap<>();
-    private final static Map<Long, Map<String, Long>> TOPIC_RETENTION_TIME_MAP = new ConcurrentHashMap<>();
-    private final static Map<Long, Map<Integer, BrokerMetadata>> BROKER_METADATA_MAP = new ConcurrentHashMap<>();
+    private static final Map<Long, ClusterDO> CLUSTER_MAP = new ConcurrentHashMap<>();
+    private static final Map<Long, ControllerData> CONTROLLER_DATA_MAP = new ConcurrentHashMap<>();
+    private static final Map<Long, ZkConfigImpl> ZK_CONFIG_MAP = new ConcurrentHashMap<>();
+    private static final Map<Long, Map<String, TopicMetadata>> TOPIC_METADATA_MAP = new ConcurrentHashMap<>();
+    private static final Map<Long, Map<String, Properties>> TOPIC_PROPERTIES_MAP = new ConcurrentHashMap<>();
+    private static final Map<Long, Map<Integer, BrokerMetadata>> BROKER_METADATA_MAP = new ConcurrentHashMap<>();

     /**
      * JMX connections, connected lazily
      */
-    private final static Map<Long, Map<Integer, JmxConnectorWrap>> JMX_CONNECTOR_MAP = new ConcurrentHashMap<>();
+    private static final Map<Long, Map<Integer, JmxConnectorWrap>> JMX_CONNECTOR_MAP = new ConcurrentHashMap<>();

     /**
      * KafkaBroker version, fetched lazily
@@ -95,7 +97,7 @@ public class PhysicalClusterMetadataManager {
         // init the topic map
         TOPIC_METADATA_MAP.put(clusterDO.getId(), new ConcurrentHashMap<>());
-        TOPIC_RETENTION_TIME_MAP.put(clusterDO.getId(), new ConcurrentHashMap<>());
+        TOPIC_PROPERTIES_MAP.put(clusterDO.getId(), new ConcurrentHashMap<>());

         // init the cluster map
         CLUSTER_MAP.put(clusterDO.getId(), clusterDO);
@@ -158,7 +160,7 @@ public class PhysicalClusterMetadataManager {
         KAFKA_VERSION_MAP.remove(clusterId);
         TOPIC_METADATA_MAP.remove(clusterId);
-        TOPIC_RETENTION_TIME_MAP.remove(clusterId);
+        TOPIC_PROPERTIES_MAP.remove(clusterId);
         CLUSTER_MAP.remove(clusterId);
     }
@@ -262,24 +264,45 @@ public class PhysicalClusterMetadataManager {
     //--------------------------- topic config metadata --------------

-    public static void putTopicRetentionTime(Long clusterId, String topicName, Long retentionTime) {
-        Map<String, Long> timeMap = TOPIC_RETENTION_TIME_MAP.get(clusterId);
-        if (timeMap == null) {
+    public static void putTopicProperties(Long clusterId, String topicName, Properties properties) {
+        if (ValidateUtils.isNull(clusterId) || ValidateUtils.isBlank(topicName) || ValidateUtils.isNull(properties)) {
             return;
         }
-        timeMap.put(topicName, retentionTime);
+
+        Map<String, Properties> propertiesMap = TOPIC_PROPERTIES_MAP.get(clusterId);
+        if (ValidateUtils.isNull(propertiesMap)) {
+            return;
+        }
+        propertiesMap.put(topicName, properties);
     }

     public static Long getTopicRetentionTime(Long clusterId, String topicName) {
-        Map<String, Long> timeMap = TOPIC_RETENTION_TIME_MAP.get(clusterId);
-        if (timeMap == null) {
+        Map<String, Properties> propertiesMap = TOPIC_PROPERTIES_MAP.get(clusterId);
+        if (ValidateUtils.isNull(propertiesMap)) {
             return null;
         }
-        return timeMap.get(topicName);
+
+        Properties properties = propertiesMap.get(topicName);
+        if (ValidateUtils.isNull(properties)) {
+            return null;
+        }
+        return NumberUtils.string2Long(properties.getProperty(TopicCreationConstant.TOPIC_RETENTION_TIME_KEY_NAME));
     }

+    public static Long getTopicRetentionBytes(Long clusterId, String topicName) {
+        Map<String, Properties> propertiesMap = TOPIC_PROPERTIES_MAP.get(clusterId);
+        if (ValidateUtils.isNull(propertiesMap)) {
+            return null;
+        }
+
+        Properties properties = propertiesMap.get(topicName);
+        if (ValidateUtils.isNull(properties)) {
+            return null;
+        }
+        return NumberUtils.string2Long(properties.getProperty(TopicCreationConstant.TOPIC_RETENTION_BYTES_KEY_NAME));
+    }
+
     //--------------------------- broker metadata --------------
@@ -375,7 +398,7 @@ public class PhysicalClusterMetadataManager {
                                            KafkaBrokerRoleEnum roleEnum) {
         BrokerMetadata brokerMetadata =
                 PhysicalClusterMetadataManager.getBrokerMetadata(clusterId, brokerId);
-        if (ValidateUtils.isNull(brokerMetadata)) {
+        if (brokerMetadata == null) {
             return;
         }
         String hostname = brokerMetadata.getHost().replace(KafkaConstant.BROKER_HOST_NAME_SUFFIX, "");
@@ -415,7 +438,7 @@ public class PhysicalClusterMetadataManager {
                                            KafkaBrokerRoleEnum roleEnum) {
         BrokerMetadata brokerMetadata =
                 PhysicalClusterMetadataManager.getBrokerMetadata(clusterId, brokerId);
-        if (ValidateUtils.isNull(brokerMetadata)) {
+        if (brokerMetadata == null) {
             return;
         }
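
Both getters now read the same cached per-topic Properties. A small standalone sketch of the lookup pattern, assuming the TopicCreationConstant keys resolve to Kafka's standard retention.ms and retention.bytes config names (the parse helper stands in for NumberUtils.string2Long):

    import java.util.Properties;

    public class RetentionLookupDemo {
        public static void main(String[] args) {
            // Simulates the cached per-topic config; keys assumed to be Kafka's standard names.
            Properties topicConfig = new Properties();
            topicConfig.setProperty("retention.ms", "604800000");      // 7 days
            topicConfig.setProperty("retention.bytes", "1073741824");  // 1 GiB

            Long retentionMs = parseLongOrNull(topicConfig.getProperty("retention.ms"));
            Long retentionBytes = parseLongOrNull(topicConfig.getProperty("retention.bytes"));
            System.out.println("retention.ms=" + retentionMs + ", retention.bytes=" + retentionBytes);
        }

        // Stand-in for NumberUtils.string2Long: null-safe parse.
        private static Long parseLongOrNull(String value) {
            try {
                return value == null ? null : Long.valueOf(value.trim());
            } catch (NumberFormatException e) {
                return null;
            }
        }
    }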


@@ -13,4 +13,12 @@ public interface TopicExpiredService {
     List<TopicExpiredData> getExpiredTopicDataList(String username);

     ResultStatus retainExpiredTopic(Long physicalClusterId, String topicName, Integer retainDays);
+
+    /**
+     * Delete expired-topic records by topic name
+     * @param clusterId cluster ID
+     * @param topicName topic name
+     * @return number of rows deleted
+     */
+    int deleteByTopicName(Long clusterId, String topicName);
 }


@@ -43,6 +43,9 @@ public class AdminServiceImpl implements AdminService {
     @Autowired
     private TopicManagerService topicManagerService;

+    @Autowired
+    private TopicExpiredService topicExpiredService;
+
     @Autowired
     private TopicService topicService;
@@ -143,6 +146,7 @@ public class AdminServiceImpl implements AdminService {
         // 3. delete the topic from the database
         topicManagerService.deleteByTopicName(clusterDO.getId(), topicName);
+        topicExpiredService.deleteByTopicName(clusterDO.getId(), topicName);

         // 4. delete the topic's authority records from the database
         authorityService.deleteAuthorityByTopic(clusterDO.getId(), topicName);


@@ -19,6 +19,8 @@ import com.xiaojukeji.kafka.manager.service.cache.LogicalClusterMetadataManager;
 import com.xiaojukeji.kafka.manager.service.cache.PhysicalClusterMetadataManager;
 import com.xiaojukeji.kafka.manager.service.service.*;
 import com.xiaojukeji.kafka.manager.service.utils.ConfigUtils;
+import org.apache.zookeeper.WatchedEvent;
+import org.apache.zookeeper.Watcher;
 import org.apache.zookeeper.ZooKeeper;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -210,7 +212,7 @@ public class ClusterServiceImpl implements ClusterService {
         ZooKeeper zk = null;
         try {
-            zk = new ZooKeeper(zookeeper, 1000, null);
+            zk = new ZooKeeper(zookeeper, 1000, watchedEvent -> LOGGER.info(" receive event : " + watchedEvent.getType().name()));
             for (int i = 0; i < 15; ++i) {
                 if (zk.getState().isConnected()) {
                     // only a connected state proves the address is valid
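
Passing null as the watcher compiles, but the client is then left without a default watcher, which is presumably the source of the watch NullPointerException this release fixes for newly added clusters; any non-null watcher, even a logging no-op, avoids it. A standalone sketch of the same address check (connect string and timings are placeholders):

    import java.util.concurrent.TimeUnit;
    import org.apache.zookeeper.ZooKeeper;

    public class ZkAddressCheckDemo {
        public static void main(String[] args) throws Exception {
            // A no-op but non-null watcher avoids the NPE a null default watcher can trigger.
            ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 1000,
                    event -> System.out.println("receive event: " + event.getType().name()));
            try {
                for (int i = 0; i < 15; ++i) {
                    if (zk.getState().isConnected()) {
                        System.out.println("address is reachable");
                        break;
                    }
                    TimeUnit.MILLISECONDS.sleep(100);
                }
            } finally {
                zk.close();
            }
        }
    }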


@@ -75,4 +75,14 @@ public class TopicExpiredServiceImpl implements TopicExpiredService {
         }
         return ResultStatus.MYSQL_ERROR;
     }
+
+    @Override
+    public int deleteByTopicName(Long clusterId, String topicName) {
+        try {
+            return topicExpiredDao.deleteByName(clusterId, topicName);
+        } catch (Exception e) {
+            LOGGER.error("delete topic failed, clusterId:{} topicName:{}", clusterId, topicName, e);
+        }
+        return 0;
+    }
 }


@@ -210,7 +210,7 @@ public class TopicManagerServiceImpl implements TopicManagerService {
             }
         }

-        // add traffic information
+        // add traffic and description information
         Map<Long, Map<String, TopicMetrics>> metricMap = KafkaMetricsCache.getAllTopicMetricsFromCache();
         for (MineTopicSummary mineTopicSummary : summaryList) {
             TopicMetrics topicMetrics = getTopicMetricsFromCacheOrJmx(
@@ -219,6 +219,10 @@ public class TopicManagerServiceImpl implements TopicManagerService {
                     metricMap);
             mineTopicSummary.setBytesIn(topicMetrics.getSpecifiedMetrics("BytesInPerSecOneMinuteRate"));
             mineTopicSummary.setBytesOut(topicMetrics.getSpecifiedMetrics("BytesOutPerSecOneMinuteRate"));
+
+            // attach the topic's description
+            TopicDO topicDO = topicDao.getByTopicName(mineTopicSummary.getPhysicalClusterId(), mineTopicSummary.getTopicName());
+            mineTopicSummary.setDescription(topicDO.getDescription());
         }
         return summaryList;
     }


@@ -223,6 +223,7 @@ public class TopicServiceImpl implements TopicService {
         basicDTO.setCreateTime(topicMetadata.getCreateTime());
         basicDTO.setModifyTime(topicMetadata.getModifyTime());
         basicDTO.setRetentionTime(PhysicalClusterMetadataManager.getTopicRetentionTime(clusterId, topicName));
+        basicDTO.setRetentionBytes(PhysicalClusterMetadataManager.getTopicRetentionBytes(clusterId, topicName));

         TopicDO topicDO = topicManagerService.getByTopicName(clusterId, topicName);
         if (!ValidateUtils.isNull(topicDO)) {
@@ -648,10 +649,11 @@ public class TopicServiceImpl implements TopicService {
         List<String> dataList = new ArrayList<>();
         int currentSize = dataList.size();
         while (dataList.size() < maxMsgNum) {
-            if (remainingWaitMs <= 0) {
-                break;
-            }
             try {
+                if (remainingWaitMs <= 0) {
+                    break;
+                }
                 ConsumerRecords<String, String> records = kafkaConsumer.poll(TopicSampleConstant.POLL_TIME_OUT_UNIT_MS);
                 for (ConsumerRecord record : records) {
                     String value = (String) record.value();
@@ -661,20 +663,22 @@ public class TopicServiceImpl implements TopicService {
                             : value
                     );
                 }
+                // if this batch fetched no data at all, stop pulling
+                if (dataList.size() - currentSize == 0) {
+                    break;
+                }
+                currentSize = dataList.size();
+
+                // check whether we have timed out
+                long elapsed = System.currentTimeMillis() - begin;
+                if (elapsed >= maxWaitMs) {
+                    break;
+                }
+                remainingWaitMs = maxWaitMs - elapsed;
             } catch (Exception e) {
                 LOGGER.error("fetch topic data failed, TopicPartitions:{}.", kafkaConsumer.assignment(), e);
             }
-            // if this batch fetched no data at all, stop pulling
-            if (dataList.size() - currentSize == 0) {
-                break;
-            }
-            currentSize = dataList.size();
-            // check whether we have timed out
-            long elapsed = System.currentTimeMillis() - begin;
-            if (elapsed >= maxWaitMs) {
-                break;
-            }
-            remainingWaitMs = maxWaitMs - elapsed;
         }
         return dataList.subList(0, Math.min(dataList.size(), maxMsgNum));
     }
@@ -698,14 +702,15 @@ public class TopicServiceImpl implements TopicService {
                             : value
                     );
                 }
-                if (System.currentTimeMillis() - timestamp > timeout
-                        || dataList.size() >= maxMsgNum) {
-                    break;
-                }
                 Thread.sleep(10);
             } catch (Exception e) {
                 LOGGER.error("fetch topic data failed, TopicPartitions:{}.", kafkaConsumer.assignment(), e);
             }
+            if (System.currentTimeMillis() - timestamp > timeout || dataList.size() >= maxMsgNum) {
+                // timed out or enough data collected: return immediately
+                break;
+            }
         }
         return dataList.subList(0, Math.min(dataList.size(), maxMsgNum));
     }
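
Both sampling loops follow one pattern: poll in short slices and re-check a deadline and a size cap between slices. A compact standalone version of that pattern (poll(Duration) needs kafka-clients 2.0+; the slice length and caps are placeholders):

    import java.time.Duration;
    import java.util.ArrayList;
    import java.util.List;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class BoundedFetchDemo {
        // Polls until maxMsgNum records are collected or maxWaitMs elapses, whichever comes first.
        public static List<String> fetch(KafkaConsumer<String, String> consumer, int maxMsgNum, long maxWaitMs) {
            List<String> dataList = new ArrayList<>();
            long begin = System.currentTimeMillis();
            while (dataList.size() < maxMsgNum) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(100))) {
                    dataList.add(record.value());
                }
                if (System.currentTimeMillis() - begin >= maxWaitMs) {
                    break; // deadline reached; return what we have
                }
            }
            return dataList.subList(0, Math.min(dataList.size(), maxMsgNum));
        }
    }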


@@ -17,4 +17,6 @@ public interface TopicExpiredDao {
     int replace(TopicExpiredDO expiredDO);

     TopicExpiredDO getByTopic(Long clusterId, String topicName);
+
+    int deleteByName(Long clusterId, String topicName);
 }


@@ -50,4 +50,12 @@ public class TopicExpiredDaoImpl implements TopicExpiredDao {
         params.put("topicName", topicName);
         return sqlSession.selectOne("TopicExpiredDao.getByTopic", params);
     }
+
+    @Override
+    public int deleteByName(Long clusterId, String topicName) {
+        Map<String, Object> params = new HashMap<>(2);
+        params.put("clusterId", clusterId);
+        params.put("topicName", topicName);
+        return sqlSession.delete("TopicExpiredDao.deleteByName", params);
+    }
 }


@@ -11,7 +11,7 @@
     </resultMap>

     <insert id="replace" parameterType="com.xiaojukeji.kafka.manager.common.entity.pojo.HeartbeatDO">
-        REPLACE heartbeat (ip, hostname) VALUES (#{ip}, #{hostname})
+        REPLACE heartbeat (ip, hostname, modify_time) VALUES (#{ip}, #{hostname}, #{modifyTime})
     </insert>

     <select id="selectActiveHosts" parameterType="java.util.Date" resultMap="HeartbeatMap">


@@ -36,4 +36,8 @@
     <select id="getByTopic" parameterType="java.util.Map" resultMap="TopicExpiredMap">
         SELECT * FROM topic_expired WHERE cluster_id = #{clusterId} AND topic_name = #{topicName}
     </select>
+
+    <delete id="deleteByName" parameterType="java.util.Map">
+        DELETE FROM topic_expired WHERE cluster_id=#{clusterId} AND topic_name=#{topicName}
+    </delete>
 </mapper>


@@ -25,6 +25,7 @@
             WHERE cluster_id = #{clusterId}
             AND topic_name = #{topicName}
             AND gmt_create BETWEEN #{startTime} AND #{endTime}
+            ORDER BY gmt_create
         ]]>
     </select>
@@ -32,6 +33,7 @@
         <![CDATA[
             SELECT * FROM topic_metrics
             WHERE cluster_id = #{clusterId} AND #{afterTime} <= gmt_create
+            ORDER BY gmt_create
         ]]>
     </select>


@@ -75,11 +75,7 @@ public class LoginServiceImpl implements LoginService {
             return false;
         }

-        if (classRequestMappingValue.equals(ApiPrefix.API_V1_SSO_PREFIX)
-                || classRequestMappingValue.equals(ApiPrefix.API_V1_THIRD_PART_PREFIX)
-                || classRequestMappingValue.equals(ApiPrefix.API_V1_THIRD_PART_OP_PREFIX)
-                || classRequestMappingValue.equals(ApiPrefix.API_V1_THIRD_PART_NORMAL_PREFIX)
-                || classRequestMappingValue.equals(ApiPrefix.GATEWAY_API_V1_PREFIX)) {
+        if (classRequestMappingValue.equals(ApiPrefix.API_V1_SSO_PREFIX)) {
             // whitelisted endpoints return true directly
             return true;
         }


@@ -19,7 +19,6 @@ public class Converts {
         orderDO.setApprover("");
         orderDO.setOpinion("");
         orderDO.setExtensions(orderDTO.getExtensions());
-        orderDO.setType(orderDTO.getType());
         return orderDO;
     }
 }


@@ -10,6 +10,8 @@ import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.scheduling.annotation.Scheduled;
 import org.springframework.stereotype.Component;

+import java.util.Date;
+
 /**
  * @author limeng
  * @date 20/8/10
@@ -30,6 +32,7 @@ public class Heartbeat {
             HeartbeatDO heartbeatDO = new HeartbeatDO();
             heartbeatDO.setIp(NetUtils.localIp());
             heartbeatDO.setHostname(NetUtils.localHostname());
+            heartbeatDO.setModifyTime(new Date());
             heartbeatDao.replace(heartbeatDO);
         } catch (Exception e) {
             LOGGER.error("flush heartbeat failed.", e);


@@ -30,16 +30,23 @@ public class CollectAndPublishCommunityTopicMetrics extends AbstractScheduledTas
     @Override
     protected List<ClusterDO> listAllTasks() {
+        // list the clusters whose metrics need collecting; they are sharded across multiple KM instances
         return clusterService.list();
     }

     @Override
     public void processTask(ClusterDO clusterDO) {
+        // implement the topic-metrics collection for the cluster clusterDO here
+        // fetch the topic metrics
         List<TopicMetrics> metricsList = getTopicMetrics(clusterDO.getId());
+        // publish an event once the topic traffic metrics have been collected
         SpringTool.publish(new TopicMetricsCollectedEvent(this, clusterDO.getId(), metricsList));
     }

     private List<TopicMetrics> getTopicMetrics(Long clusterId) {
+        // entry point that actually fetches the topic traffic metrics
         List<TopicMetrics> metricsList =
                 jmxService.getTopicMetrics(clusterId, KafkaMetricsCollections.TOPIC_METRICS_TO_DB, true);
         if (ValidateUtils.isEmptyList(metricsList)) {


@@ -14,13 +14,14 @@ import org.springframework.scheduling.annotation.Scheduled;
 import org.springframework.stereotype.Component;

 import java.util.List;
+import java.util.Properties;

 /**
  * @author zengqiao
  * @date 20/7/23
  */
 @Component
-public class FlushTopicRetentionTime {
+public class FlushTopicProperties {
     private final static Logger LOGGER = LoggerFactory.getLogger(LogConstant.SCHEDULED_TASK_LOGGER);

     @Autowired
@@ -33,7 +34,7 @@ public class FlushTopicProperties {
             try {
                 flush(clusterDO);
             } catch (Exception e) {
-                LOGGER.error("flush topic retention time failed, clusterId:{}.", clusterDO.getId(), e);
+                LOGGER.error("flush topic properties failed, clusterId:{}.", clusterDO.getId(), e);
             }
         }
     }
@@ -41,22 +42,20 @@ public class FlushTopicProperties {
     private void flush(ClusterDO clusterDO) {
         ZkConfigImpl zkConfig = PhysicalClusterMetadataManager.getZKConfig(clusterDO.getId());
         if (ValidateUtils.isNull(zkConfig)) {
-            LOGGER.error("flush topic retention time, get zk config failed, clusterId:{}.", clusterDO.getId());
+            LOGGER.error("flush topic properties, get zk config failed, clusterId:{}.", clusterDO.getId());
             return;
         }
         for (String topicName: PhysicalClusterMetadataManager.getTopicNameList(clusterDO.getId())) {
             try {
-                Long retentionTime = KafkaZookeeperUtils.getTopicRetentionTime(zkConfig, topicName);
-                if (retentionTime == null) {
-                    LOGGER.warn("get topic retentionTime failed, clusterId:{} topicName:{}.",
-                            clusterDO.getId(), topicName);
+                Properties properties = KafkaZookeeperUtils.getTopicProperties(zkConfig, topicName);
+                if (ValidateUtils.isNull(properties)) {
+                    LOGGER.warn("get topic properties failed, clusterId:{} topicName:{}.", clusterDO.getId(), topicName);
                     continue;
                 }
-                PhysicalClusterMetadataManager.putTopicRetentionTime(clusterDO.getId(), topicName, retentionTime);
+                PhysicalClusterMetadataManager.putTopicProperties(clusterDO.getId(), topicName, properties);
             } catch (Exception e) {
-                LOGGER.error("get topic retentionTime failed, clusterId:{} topicName:{}.",
-                        clusterDO.getId(), topicName, e);
+                LOGGER.error("get topic properties failed, clusterId:{} topicName:{}.", clusterDO.getId(), topicName, e);
             }
         }
     }
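
Kafka keeps per-topic config overrides in ZooKeeper under /config/topics/<topic> as JSON of the form {"version":1,"config":{"retention.ms":"86400000"}}. A rough sketch of what a helper like KafkaZookeeperUtils.getTopicProperties presumably does, written here against the raw ZooKeeper client and fastjson for illustration only:

    import java.nio.charset.StandardCharsets;
    import java.util.Map;
    import java.util.Properties;
    import com.alibaba.fastjson.JSON;
    import com.alibaba.fastjson.JSONObject;
    import org.apache.zookeeper.ZooKeeper;

    public class TopicConfigFromZkDemo {
        // Reads /config/topics/<topic> and flattens the "config" object into Properties.
        public static Properties readTopicProperties(ZooKeeper zk, String topicName) throws Exception {
            byte[] data = zk.getData("/config/topics/" + topicName, false, null);
            JSONObject node = JSON.parseObject(new String(data, StandardCharsets.UTF_8));
            JSONObject config = node.getJSONObject("config");

            Properties properties = new Properties();
            for (Map.Entry<String, Object> entry : config.entrySet()) {
                properties.setProperty(entry.getKey(), String.valueOf(entry.getValue()));
            }
            return properties;
        }
    }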


@@ -19,7 +19,7 @@
         <springframework.boot.version>2.1.1.RELEASE</springframework.boot.version>
         <spring-version>5.1.3.RELEASE</spring-version>
         <failOnMissingWebXml>false</failOnMissingWebXml>
-        <tomcat.version>8.5.37</tomcat.version>
+        <tomcat.version>8.5.72</tomcat.version>
     </properties>

     <dependencies>
@@ -109,8 +109,10 @@
     </dependencies>

     <build>
+        <finalName>kafka-manager</finalName>
         <plugins>
             <plugin>
                 <groupId>org.springframework.boot</groupId>
                 <artifactId>spring-boot-maven-plugin</artifactId>
                 <version>${springframework.boot.version}</version>
@@ -121,6 +123,7 @@
                         </goals>
                     </execution>
                 </executions>
             </plugin>
         </plugins>
     </build>


@@ -61,10 +61,7 @@ public class NormalTopicController {
     @ApiOperation(value = "Topic基本信息", notes = "")
     @RequestMapping(value = "{clusterId}/topics/{topicName}/basic-info", method = RequestMethod.GET)
     @ResponseBody
-    public Result<TopicBasicVO> getTopicBasic(
-            @PathVariable Long clusterId,
-            @PathVariable String topicName,
-            @RequestParam(value = "isPhysicalClusterId", required = false) Boolean isPhysicalClusterId) {
+    public Result<TopicBasicVO> getTopicBasic(@PathVariable Long clusterId, @PathVariable String topicName, @RequestParam(value = "isPhysicalClusterId", required = false) Boolean isPhysicalClusterId) {
         Long physicalClusterId = logicalClusterMetadataManager.getPhysicalClusterId(clusterId, isPhysicalClusterId);
         if (ValidateUtils.isNull(physicalClusterId)) {
             return Result.buildFrom(ResultStatus.CLUSTER_NOT_EXIST);


@@ -1,15 +1,16 @@
 package com.xiaojukeji.kafka.manager.web.converters;

-import com.xiaojukeji.kafka.manager.common.entity.ao.account.Account;
 import com.xiaojukeji.kafka.manager.bpm.common.OrderResult;
+import com.xiaojukeji.kafka.manager.bpm.common.OrderStatusEnum;
 import com.xiaojukeji.kafka.manager.bpm.common.entry.BaseOrderDetailData;
+import com.xiaojukeji.kafka.manager.common.entity.ao.account.Account;
+import com.xiaojukeji.kafka.manager.common.entity.pojo.OrderDO;
 import com.xiaojukeji.kafka.manager.common.entity.vo.common.AccountVO;
 import com.xiaojukeji.kafka.manager.common.entity.vo.normal.order.OrderResultVO;
 import com.xiaojukeji.kafka.manager.common.entity.vo.normal.order.OrderVO;
 import com.xiaojukeji.kafka.manager.common.entity.vo.normal.order.detail.OrderDetailBaseVO;
 import com.xiaojukeji.kafka.manager.common.utils.CopyUtils;
 import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
-import com.xiaojukeji.kafka.manager.common.entity.pojo.OrderDO;

 import java.util.ArrayList;
 import java.util.Collections;
@@ -41,7 +42,9 @@ public class OrderConverter {
         }
         OrderVO orderVO = new OrderVO();
         CopyUtils.copyProperties(orderVO, orderDO);
-        orderVO.setGmtTime(orderDO.getGmtCreate());
+        if (OrderStatusEnum.WAIT_DEAL.getCode().equals(orderDO.getStatus())) {
+            orderVO.setGmtHandle(null);
+        }
         return orderVO;
     }


@@ -29,6 +29,7 @@ public class TopicMineConverter {
             vo.setClusterName(data.getLogicalClusterName());
             vo.setBytesIn(data.getBytesIn());
             vo.setBytesOut(data.getBytesOut());
+            vo.setDescription(data.getDescription());
             voList.add(vo);
         }
         return voList;


@@ -31,6 +31,7 @@ public class TopicModelConverter {
         vo.setReplicaNum(dto.getReplicaNum());
         vo.setPrincipals(dto.getPrincipals());
         vo.setRetentionTime(dto.getRetentionTime());
+        vo.setRetentionBytes(dto.getRetentionBytes());
         vo.setCreateTime(dto.getCreateTime());
         vo.setModifyTime(dto.getModifyTime());
         vo.setScore(dto.getScore());


@@ -9,6 +9,8 @@ server:
 spring:
   application:
     name: kafkamanager
+  profiles:
+    active: dev
   datasource:
     kafka-manager:
       jdbc-url: jdbc:mysql://localhost:3306/logi_kafka_manager?characterEncoding=UTF-8&useSSL=false&serverTimezone=GMT%2B8
@@ -18,8 +20,6 @@ spring:
   main:
     allow-bean-definition-overriding: true
-  profiles:
-    active: dev
   servlet:
     multipart:
       max-file-size: 100MB
@@ -84,11 +84,11 @@ monitor:
     nid: 2
     user-token: 1234567890
     mon:
-      base-url: http://127.0.0.1:8032
+      base-url: http://127.0.0.1:8000 # Nightingale (n9e) v4 unified its default port to 8000
     sink:
-      base-url: http://127.0.0.1:8006
+      base-url: http://127.0.0.1:8000 # Nightingale (n9e) v4 unified its default port to 8000
     rdb:
-      base-url: http://127.0.0.1:80
+      base-url: http://127.0.0.1:8000 # Nightingale (n9e) v4 unified its default port to 8000
   notify:
     kafka:

pom.xml

@@ -16,7 +16,7 @@
     </parent>

     <properties>
-        <kafka-manager.revision>2.4.1-SNAPSHOT</kafka-manager.revision>
+        <kafka-manager.revision>2.5</kafka-manager.revision>
         <swagger2.version>2.7.0</swagger2.version>
         <swagger.version>1.5.13</swagger.version>
@@ -26,7 +26,9 @@
         <java_target_version>1.8</java_target_version>
         <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
         <file_encoding>UTF-8</file_encoding>
-        <tomcat.version>8.5.37</tomcat.version>
+        <tomcat.version>8.5.72</tomcat.version>
+        <maven-assembly-plugin.version>3.0.0</maven-assembly-plugin.version>
     </properties>

     <modules>
@@ -42,6 +44,7 @@
         <module>kafka-manager-extends/kafka-manager-openapi</module>
         <module>kafka-manager-task</module>
         <module>kafka-manager-web</module>
+        <module>distribution</module>
     </modules>

     <dependencyManagement>
@@ -147,7 +150,7 @@
         <dependency>
             <groupId>com.fasterxml.jackson.core</groupId>
             <artifactId>jackson-databind</artifactId>
-            <version>2.9.10.5</version>
+            <version>2.9.10.8</version>
         </dependency>

         <!-- commons -->
@@ -231,4 +234,16 @@
             </dependency>
         </dependencies>
     </dependencyManagement>
+
+    <build>
+        <plugins>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-assembly-plugin</artifactId>
+                <version>${maven-assembly-plugin.version}</version>
+            </plugin>
+        </plugins>
+    </build>
 </project>