Compare commits

...

81 Commits

Author SHA1 Message Date
孙超
0f15c773ef build dependencies version lock 2023-02-23 14:43:33 +08:00
fengqiongfeng
02b05fc7c8 HA - add high-availability related table schemas 2023-02-23 11:31:33 +08:00
zengqiao
b16a7b9bff Initialize v2.8.1_e
1. Test code; open-source users should avoid using it;
2. Includes the Kafka-HA features; on top of v2.8.0_e, adds switching by clientId;
3. Branched from v2.8.0_e;
2023-02-13 16:48:59 +08:00
zengqiao
e81c0f3040 Initialize v2.8.0_e
1. Test code; open-source users should avoid using it;
2. Includes the Kafka-HA features;
3. Not branched from 2.6.0; the 2.8.0_e branch was cut from commit-id 462303fca0 on the master branch. This is because the v2.6.0 code is not the latest; the latest 2.x code is at commit 462303fca0;
2023-02-13 16:35:43 +08:00
Peng
462303fca0 Update README.md 2022-08-11 14:49:04 +08:00
Peng
4405703e42 Update README.md 2022-08-09 11:11:03 +08:00
Peng
23e398e121 Update README.md 2022-08-09 10:05:24 +08:00
Peng
b17bb89d04 Update README.md 2022-08-09 09:56:35 +08:00
Peng
5590cebf8f Update README.md 2022-08-09 09:54:44 +08:00
Peng
1fa043f09d Update README.md 2022-08-09 09:52:30 +08:00
Peng
3bd0af1451 Update README.md 2022-08-09 09:49:09 +08:00
Peng
1545962745 Update README.md
Add star trend
2022-07-29 16:05:16 +08:00
EricZeng
d032571681 Merge pull request #503 from didi/dev
Add FutureUtil class
2022-07-06 16:21:24 +08:00
zengqiao
33fb0acc7e Add FutureUtil class 2022-07-06 15:18:53 +08:00
EricZeng
1ec68a91e2 Merge pull request #499 from gzldc/master
Fix logback version vulnerability #488
2022-07-01 08:57:11 +08:00
shishuai
a23c113a46 Fix logback version vulnerability #488 2022-06-29 22:06:33 +08:00
Peng
371ae2c0a5 Update README.md 2022-06-28 10:19:18 +08:00
Peng
8f8f6ffa27 Update README.md 2022-06-28 10:18:03 +08:00
EricZeng
475fe0d91f Merge pull request #496 from didi/dev
Remove invalid version info config from application
2022-06-23 10:31:53 +08:00
zengqiao
3d74e60d03 Remove invalid version info config from application 2022-06-23 10:31:07 +08:00
EricZeng
83ac83bb28 Merge pull request #495 from didi/master
Merge master branch
2022-06-23 10:29:22 +08:00
EricZeng
8478fb857c Merge pull request #494 from didi/dev
1. Auto-generate version info and git commit info at build time; 2. Improve how swagger obtains the version info;
2022-06-23 10:23:10 +08:00
zengqiao
7074bdaa9f 1. Auto-generate version info and git commit info at build time; 2. Improve how swagger obtains the version info 2022-06-23 10:17:36 +08:00
EricZeng
58164294cc Update README.md 2022-03-17 10:02:10 +08:00
EricZeng
7c0e9df156 Merge pull request #479 from didi/dev
Add integration & unit tests
2022-03-15 13:49:33 +08:00
EricZeng
bd62212ecb Merge pull request #472 from didi/dev_v2.5.0_addtest
Add unit and integration tests to LogiKM
2022-03-07 17:29:20 +08:00
EricZeng
2292039b42 Merge pull request #474 from houxiufeng/dev_v2.5.0_addtest
modify AbstractSingleSignOnTest error
2022-03-07 17:28:39 +08:00
houxiufeng
73f8da8d5a modify AbstractSingleSignOnTest error 2022-03-07 17:10:15 +08:00
EricZeng
e51dbe0ca7 Merge pull request #473 from didi/dev
Dev
2022-03-07 14:49:52 +08:00
xuguang
482a375e31 Merge branch 'v2.5.0' of github.com:didi/LogiKM into dev_v2.5.0_addtest 2022-03-04 16:13:54 +08:00
xuguang
689c5ce455 Add unit & integration test docs & fixes 2022-03-04 16:04:36 +08:00
EricZeng
734a020ecc Merge pull request #470 from didi/dev
Fix array out-of-bounds when deleting metrics
2022-03-03 19:04:03 +08:00
zengqiao
44d537f78c Fix array out-of-bounds when deleting metrics 2022-03-03 11:26:50 +08:00
zengqiao
b4c60eb910 Add request-type check when filtering out Broker connection info 2022-02-28 12:07:50 +08:00
xuguang
e120b32375 Integration tests for the remaining controller APIs 2022-02-25 17:04:46 +08:00
xuguang
de54966d30 Update NormalJmxController comments 2022-02-24 16:20:37 +08:00
xuguang
39a6302c18 Integration tests for some controller APIs 2022-02-21 10:43:54 +08:00
xuguang
05ceeea4b0 bugfix: redundant validation params in LogicalClusterDTO and RegionDTO & fix RdLogicalClusterController comments 2022-02-21 10:41:50 +08:00
EricZeng
9f8e3373a8 Merge pull request #465 from hailanxin/master
Add a JMX connection-failure case and its solution
2022-02-17 21:08:21 +08:00
hailanxin
42521cbae4 Update connect_jmx_failed.md 2022-02-17 14:02:43 +08:00
hailanxin
b23c35197e Add a JMX connection-failure case and its solution 2022-02-17 14:02:13 +08:00
EricZeng
70f28d9ac4 Merge pull request #461 from zzzhangqi/rainbond
add rainbond installation
2022-02-16 10:18:15 +08:00
zhangqi
912d73d98a add rainbond installation 2022-02-15 18:34:49 +08:00
EricZeng
2a720fce6f Merge pull request #451 from zzzhangqi/master
Support docker source code construction
2022-02-15 18:05:46 +08:00
zhangqi
e4534c359f Support docker source code construction 2022-02-15 10:43:11 +08:00
zengqiao
b91bec15f2 bump version to v2.6.1 2022-01-26 15:33:08 +08:00
xuguang
4d5e4d0f00 Remove sensitive info from unit tests 2022-01-21 10:08:15 +08:00
didi
82c9b6481e Define real-environment config in config files 2022-01-20 22:33:52 +08:00
xuguang
f195847c68 Merge branch 'dev_v2.5.0_addtest' of github.com:didi/LogiKM into dev_v2.5.0_addtest 2022-01-20 10:24:10 +08:00
xuguang
5beb13b17e NormalAppControllerTest,OpAuthorityControllerTest 2022-01-20 10:22:20 +08:00
xuguang
fc604a9eaf Integration tests: physical cluster CRUD 2022-01-20 10:15:42 +08:00
didi
1afb633b4f Merge branch 'dev_v2.5.0_addtest' of github.com:didi/LogiKM into dev_v2.5.0_addtest 2022-01-18 17:08:49 +08:00
didi
34d9f9174b Re-run all unit tests 2022-01-18 17:07:21 +08:00
EricZeng
89405fe003 Merge pull request #434 from didi/fix_2.5.0
Fix console module shutdown issue and frontend filename errors
2022-01-13 14:00:01 +08:00
shirenchuang
b9ea3865a5 Upgrade to version 2.5
(cherry picked from commit 5bc6eb6774)
2022-01-13 13:47:21 +08:00
孙超
b5bd643814 Fix image filename case issue
(cherry picked from commit ada2718b5e)
2022-01-13 13:46:06 +08:00
xuguang
1f353e10ce Update application.yml 2022-01-07 15:58:12 +08:00
didi
055ba9bda6 Merge branch 'dev_v2.5.0_addtest' of github.com:didi/LogiKM into dev_v2.5.0_addtest
 Conflicts:
	kafka-manager-core/src/test/java/com/xiaojukeji/kafka/manager/service/service/ExpertServiceTest.java
2022-01-07 11:50:15 +08:00
didi
ec19c3b4dd Unit tests for the monitor, openapi, and account modules 2022-01-07 11:43:31 +08:00
xuguang
37aa526404 Unit tests: ClusterHostTaskServiceTest, ClusterRoleTaskServiceTest, ClusterTaskServiceTest, KafkaFileServiceTest, TopicCommandsTest, TopicReassignUtilsTest 2022-01-06 20:00:25 +08:00
xuguang
86c1faa40f bugfix: fix issues in the Result class 2022-01-06 11:33:17 +08:00
xuguang
8dcf15d0f9 Write unit tests for the kafka-manager-bpm package & bugfix 2022-01-04 17:29:28 +08:00
xuguang
4f317b76fa Add null check: requestAttributes obtained in SpringTool.getUserName() may be null 2021-12-27 16:35:22 +08:00
didi
61672637dc Merge branch 'dev_v2.5.0_addtest' of github.com:didi/LogiKM into dev_v2.5.0_addtest 2021-12-27 14:56:48 +08:00
didi
ecf6e8f664 Unit tests for the ConfigService, OperateRecordService, RegionService, ThrottleService, TopicExpiredService, and TopicManagerService interfaces 2021-12-27 14:55:35 +08:00
xuguang
4115975320 Add unit test modules for kafka-manager-account, kafka-manager-bpm, kafka-manager-kcm, kafka-manager-monitor 2021-12-27 10:28:08 +08:00
didi
21904a8609 addAuthority in TopicManagerServiceImpl used getId; it should be getAppId 2021-12-24 14:41:39 +08:00
xuguang
b2091e9aed Unit tests: AnalysisServiceTest && ConsumerServiceTest && JmxServiceTest &&
LogicalClusterServiceTest && ReassignServiceTest && TopicServiceTest
2021-12-23 18:17:47 +08:00
xuguang
f2cb5bd77c bugfix: TopicServiceImpl && JmxServiceImpl && ConsumerService && ConsumerServiceImpl 2021-12-23 18:15:40 +08:00
xuguang
19c61c52e6 bugfix: TopicService && TopicServiceImpl && ZookeeperServiceImpl 2021-12-22 16:04:06 +08:00
didi
b327359183 modifyTopicByOp in TopicManagerServiceImpl was missing return ResultStatus.SUCCESS; 2021-12-21 18:06:23 +08:00
xuguang
9e9bb72e17 Unit tests for BrokerServiceTest && KafkaBillService && LogicalClusterServiceTest && AbstractAllocateQuotaStrategy && AbstractHealthScoreStrategy 2021-12-20 10:26:43 +08:00
xuguang
ad131f5a2c bugfix: exception fetching the topic's brokerId when DidiHealthScoreStrategy.calTopicHealthScore computes topic health scores 2021-12-17 14:41:04 +08:00
xuguang
39cccd568e Lowercase the first letter of some variables in DiDiHealthScoreStrategy 2021-12-16 14:17:45 +08:00
didi
19b7f6ad8c The overloaded updateRegion in RegionService has the wrong semantics; it should update a region by regionId, so the parameter should be regionId, not clusterId 2021-12-14 18:32:14 +08:00
xuguang
41c000cf47 AuthorityServiceTest && SecurityServiceTest && TopicReportServiceTest && ClusterServiceTest && ZookeeperServiceTest unit tests 2021-12-14 18:30:12 +08:00
xuguang
1b8ea61e87 The jmxSwitch parameter of openTopicJmx needs a null check 2021-12-13 18:16:23 +08:00
xuguang
4538593236 Implement unit tests for the core package's TopicReportService interface & fix field and keyword errors in TopicReportDao.xml 2021-12-08 15:50:53 +08:00
xuguang
8086ef355b Implement unit tests for the core package's AppService, AuthorityService, and QuotaService interfaces & fix a bug in TopicQuotaData 2021-12-07 14:08:09 +08:00
xuguang
60d038fe46 Implement unit tests for the core package's AppService interface 2021-12-06 14:46:11 +08:00
huqidong
ff0f4463be Set up the Logi-KM TestNG test environment & Spring Boot integration & write mock test cases. 2021-12-03 18:04:20 +08:00
371 changed files with 33518 additions and 2067 deletions

.gitignore (1 change)

@@ -111,3 +111,4 @@ dist/
dist/*
kafka-manager-web/src/main/resources/templates/
.DS_Store
kafka-manager-console/package-lock.json

Dockerfile (new file, 41 lines)

@@ -0,0 +1,41 @@
ARG MAVEN_VERSION=3.8.4-openjdk-8-slim
ARG JAVA_VERSION=8-jdk-alpine3.9
FROM maven:${MAVEN_VERSION} AS builder
ARG CONSOLE_ENABLE=true
WORKDIR /opt
COPY . .
COPY distribution/conf/settings.xml /root/.m2/settings.xml
# whether to build console
RUN set -eux; \
if [ "$CONSOLE_ENABLE" = 'false' ]; then \
sed -i "/kafka-manager-console/d" pom.xml; \
fi \
&& mvn -Dmaven.test.skip=true clean install -U
FROM openjdk:${JAVA_VERSION}
RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/g' /etc/apk/repositories && apk add --no-cache tini
ENV TZ=Asia/Shanghai
ENV AGENT_HOME=/opt/agent/
COPY --from=builder /opt/kafka-manager-web/target/kafka-manager.jar /opt
COPY --from=builder /opt/container/dockerfiles/docker-depends/config.yaml $AGENT_HOME
COPY --from=builder /opt/container/dockerfiles/docker-depends/jmx_prometheus_javaagent-0.15.0.jar $AGENT_HOME
COPY --from=builder /opt/distribution/conf/application-docker.yml /opt
WORKDIR /opt
ENV JAVA_AGENT="-javaagent:$AGENT_HOME/jmx_prometheus_javaagent-0.15.0.jar=9999:$AGENT_HOME/config.yaml"
ENV JAVA_HEAP_OPTS="-Xms1024M -Xmx1024M -Xmn100M "
ENV JAVA_OPTS="-verbose:gc \
-XX:MaxMetaspaceSize=256M -XX:+DisableExplicitGC -XX:+UseStringDeduplication \
-XX:+UseG1GC -XX:+HeapDumpOnOutOfMemoryError -XX:-UseContainerSupport"
EXPOSE 8080 9999
ENTRYPOINT ["tini", "--"]
CMD [ "sh", "-c", "java -jar $JAVA_AGENT $JAVA_HEAP_OPTS $JAVA_OPTS kafka-manager.jar --spring.config.location=application-docker.yml"]
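The `CONSOLE_ENABLE` build argument above works by deleting the console module from `pom.xml` before Maven runs. A minimal sketch of that mechanism (the `pom-demo.xml` file and its module list are made up for illustration):

```shell
# Hypothetical build without the console frontend:
#   docker build --build-arg CONSOLE_ENABLE=false -t kafka-manager .
# The conditional inside the Dockerfile's RUN step boils down to:
printf '<modules>\n  <module>kafka-manager-console</module>\n  <module>kafka-manager-web</module>\n</modules>\n' > pom-demo.xml
CONSOLE_ENABLE=false
if [ "$CONSOLE_ENABLE" = 'false' ]; then
  # drop every line mentioning the console module, exactly as the RUN step does
  sed -i '/kafka-manager-console/d' pom-demo.xml
fi
grep -c '<module>' pom-demo.xml   # only the web module line remains
```

With the console line gone, the subsequent `mvn -Dmaven.test.skip=true clean install` simply never builds that module.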

README.md

@@ -5,7 +5,7 @@
**A one-stop `Apache Kafka` cluster metrics monitoring and operations platform**
`LogiKM has drawn wide attention since it was open-sourced. To better match the future direction of Apache Kafka, after careful consideration the project team plans to rebrand it as Know Streaming around May 2022, when the project name and logo will be updated as well. Thanks for your continued support; stay tuned!`
`LogiKM has drawn wide attention since it was open-sourced. To better match the future direction of Apache Kafka, after careful consideration the project team plans to rebrand it as Know Streaming in the second half of 2022, when the project name and logo will be updated as well. Thanks for your continued support; stay tuned!`
By reading this README you can learn about the user base and product positioning of DiDi Logi-KafkaManager, and use the demo address to quickly try the full workflow of Kafka cluster metrics monitoring and operations.
@@ -69,7 +69,7 @@
- [The most complete Kafka knowledge map](https://www.szzdzhp.com/kafka/)
- [DiDi LogiKM beginner article series -- Shi Zhenzhen](https://www.szzdzhp.com/categories/LogIKM/)
- [Kafka in practice (15): a study of DiDi's open-source Kafka management platform LogiKM -- A Yezi Yelai](https://blog.csdn.net/yezonggang/article/details/113106244)
- [Install DiDi LogiKM on the cloud-native application platform Rainbond](https://www.rainbond.com/docs/opensource-app/logikm/?channel=logikm)
## 3 DiDi Logi Open-Source User Group
@@ -77,7 +77,7 @@
To discuss Kafka, ES, and other middleware/big-data topics with the experts, add us on WeChat to join the group.
WeChat: add <font color=red>mike_zhangliang</font> or <font color=red>danke-xie</font> with the note "Logi加群", or follow the official account 云原生可观测性 and reply "Logi加群"
WeChat: add <font color=red>mike_zhangliang</font> or <font color=red>danke-x</font> with the note "Logi加群", or follow the official account 云原生可观测性 and reply "Logi加群"
## 4 Knowledge Planet
@@ -92,7 +92,7 @@
<font color=red size=5><b>[Kafka Chinese Community]</b></font>
</center>
Here you can meet Kafka experts from major internet companies and nearly 2000+ Kafka enthusiasts, share knowledge, and stay on top of the latest industry news. We look forward to you joining: https://z.didi.cn/5gSF9
Here you can meet Kafka experts from major internet companies and 3000+ Kafka enthusiasts, share knowledge, and stay on top of the latest industry news. We look forward to you joining: https://z.didi.cn/5gSF9
<font color=red size=5>Every question gets answered~ </font>
@@ -104,7 +104,7 @@ PS: when asking, please describe the problem fully in one message and include your environment details
### 5.1 Core Internal Members
`iceyuhui`, `liuyaguang`, `limengmonty`, `zhangliangmike`, `xiepeng`, `nullhuangyiming`, `zengqiao`, `eilenexuzhe`, `huangjiaweihjw`, `zhaoyinrui`, `marzkonglingxu`, `joysunchao`, `石臻臻`
`iceyuhui`, `liuyaguang`, `limengmonty`, `zhangliangmike`, `zhaoqingrong`, `xiepeng`, `nullhuangyiming`, `zengqiao`, `eilenexuzhe`, `huangjiaweihjw`, `zhaoyinrui`, `marzkonglingxu`, `joysunchao`, `石臻臻`
### 5.2 External Contributors


@@ -1,29 +0,0 @@
FROM openjdk:16-jdk-alpine3.13
LABEL author="fengxsong"
RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/g' /etc/apk/repositories && apk add --no-cache tini
ENV VERSION 2.4.2
WORKDIR /opt/
ENV AGENT_HOME /opt/agent/
COPY docker-depends/config.yaml $AGENT_HOME
COPY docker-depends/jmx_prometheus_javaagent-0.15.0.jar $AGENT_HOME
ENV JAVA_AGENT="-javaagent:$AGENT_HOME/jmx_prometheus_javaagent-0.15.0.jar=9999:$AGENT_HOME/config.yaml"
ENV JAVA_HEAP_OPTS="-Xms1024M -Xmx1024M -Xmn100M "
ENV JAVA_OPTS="-verbose:gc \
-XX:MaxMetaspaceSize=256M -XX:+DisableExplicitGC -XX:+UseStringDeduplication \
-XX:+UseG1GC -XX:+HeapDumpOnOutOfMemoryError -XX:-UseContainerSupport"
RUN wget https://github.com/didi/Logi-KafkaManager/releases/download/v${VERSION}/kafka-manager-${VERSION}.tar.gz && \
tar xvf kafka-manager-${VERSION}.tar.gz && \
mv kafka-manager-${VERSION}/kafka-manager.jar /opt/app.jar && \
mv kafka-manager-${VERSION}/application.yml /opt/application.yml && \
rm -rf kafka-manager-${VERSION}*
EXPOSE 8080 9999
ENTRYPOINT ["tini", "--"]
CMD [ "sh", "-c", "java -jar $JAVA_AGENT $JAVA_HEAP_OPTS $JAVA_OPTS app.jar --spring.config.location=application.yml"]


@@ -0,0 +1,13 @@
FROM mysql:5.7.37
COPY mysqld.cnf /etc/mysql/mysql.conf.d/
ENV TZ=Asia/Shanghai
ENV MYSQL_ROOT_PASSWORD=root
RUN apt-get update \
&& apt -y install wget \
&& wget https://ghproxy.com/https://raw.githubusercontent.com/didi/LogiKM/master/distribution/conf/create_mysql_table.sql -O /docker-entrypoint-initdb.d/create_mysql_table.sql
EXPOSE 3306
VOLUME ["/var/lib/mysql"]


@@ -0,0 +1,24 @@
[client]
default-character-set = utf8
[mysqld]
character_set_server = utf8
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
datadir = /var/lib/mysql
symbolic-links=0
max_allowed_packet = 10M
sort_buffer_size = 1M
read_rnd_buffer_size = 2M
max_connections=2000
lower_case_table_names=1
character-set-server=utf8
max_allowed_packet = 1G
sql_mode=STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
group_concat_max_len = 102400
default-time-zone = '+08:00'
[mysql]
default-character-set = utf8


@@ -0,0 +1,28 @@
## kafka-manager config file; settings here override the defaults
## The lines below are essentially the default application.yml shipped inside the jar;
## keep only what you change and delete the rest, e.g. configure only mysql
server:
port: 8080
tomcat:
accept-count: 1000
max-connections: 10000
max-threads: 800
min-spare-threads: 100
spring:
application:
name: kafkamanager
version: 2.6.0
profiles:
active: dev
datasource:
kafka-manager:
jdbc-url: jdbc:mysql://${LOGI_MYSQL_HOST:mysql}:${LOGI_MYSQL_PORT:3306}/${LOGI_MYSQL_DATABASE:logi_kafka_manager}?characterEncoding=UTF-8&useSSL=false&serverTimezone=GMT%2B8
username: ${LOGI_MYSQL_USER:root}
password: ${LOGI_MYSQL_PASSWORD:root}
driver-class-name: com.mysql.cj.jdbc.Driver
main:
allow-bean-definition-overriding: true
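The `${LOGI_MYSQL_HOST:mysql}`-style placeholders above let the Docker image be pointed at a different database purely through environment variables. A hedged sketch (the host name and password are placeholders; Spring's `${VAR:default}` syntax behaves like shell's `${VAR:-default}`):

```shell
# Hypothetical container start overriding the MySQL settings above:
#   docker run -e LOGI_MYSQL_HOST=db.internal -e LOGI_MYSQL_PASSWORD=secret -p 8080:8080 kafka-manager
# Spring's ${LOGI_MYSQL_HOST:mysql} placeholder resolves like shell's ${VAR:-default}:
unset LOGI_MYSQL_HOST
echo "host=${LOGI_MYSQL_HOST:-mysql}"        # unset -> falls back to the default "mysql"
LOGI_MYSQL_HOST=db.internal
echo "host=${LOGI_MYSQL_HOST:-mysql}"        # set -> the override wins
```

Note the syntax difference: Spring uses a bare `:` before the default, shell uses `:-`.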


@@ -15,7 +15,6 @@ server:
spring:
application:
name: kafkamanager
version: 2.6.0
profiles:
active: dev
datasource:


@@ -15,7 +15,6 @@ server:
spring:
application:
name: kafkamanager
version: 2.6.0
profiles:
active: dev
datasource:


@@ -592,3 +592,66 @@ CREATE TABLE `work_order` (
`gmt_modify` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='工单表';
ALTER TABLE `topic_connections` ADD COLUMN `client_id` VARCHAR(1024) NOT NULL DEFAULT '' COMMENT '客户端ID' AFTER `client_version`;
create table ha_active_standby_relation
(
id bigint unsigned auto_increment comment 'id'
primary key,
active_cluster_phy_id bigint default -1 not null comment '主集群ID',
active_res_name varchar(192) collate utf8_bin default '' not null comment '主资源名称',
standby_cluster_phy_id bigint default -1 not null comment '备集群ID',
standby_res_name varchar(192) collate utf8_bin default '' not null comment '备资源名称',
res_type int default -1 not null comment '资源类型',
status int default -1 not null comment '关系状态',
unique_field varchar(1024) default '' not null comment '唯一字段',
create_time timestamp default CURRENT_TIMESTAMP not null comment '创建时间',
modify_time timestamp default CURRENT_TIMESTAMP not null on update CURRENT_TIMESTAMP comment '修改时间',
kafka_status int default 0 null comment '高可用配置是否完全建立 1:Kafka上该主备关系正常0:Kafka上该主备关系异常',
constraint uniq_unique_field
unique (unique_field)
)
comment 'HA主备关系表' charset = utf8;
create index idx_type_active
on ha_active_standby_relation (res_type, active_cluster_phy_id);
create index idx_type_standby
on ha_active_standby_relation (res_type, standby_cluster_phy_id);
create table ha_active_standby_switch_job
(
id bigint unsigned auto_increment comment 'id'
primary key,
active_cluster_phy_id bigint default -1 not null comment '主集群ID',
standby_cluster_phy_id bigint default -1 not null comment '备集群ID',
job_status int default -1 not null comment '任务状态',
operator varchar(256) default '' not null comment '操作人',
create_time timestamp default CURRENT_TIMESTAMP not null comment '创建时间',
modify_time timestamp default CURRENT_TIMESTAMP not null on update CURRENT_TIMESTAMP comment '修改时间',
type int default 5 not null comment '1:topic 2:实例 3逻辑集群 4物理集群',
active_business_id varchar(100) default '-1' not null comment '主业务id(topicName,实例id,逻辑集群id,物理集群id)',
standby_business_id varchar(100) default '-1' not null comment '备业务id(topicName,实例id,逻辑集群id,物理集群id)'
)
comment 'HA主备关系切换-子任务表' charset = utf8;
create table ha_active_standby_switch_sub_job
(
id bigint unsigned auto_increment comment 'id'
primary key,
job_id bigint default -1 not null comment '任务ID',
active_cluster_phy_id bigint default -1 not null comment '主集群ID',
active_res_name varchar(192) collate utf8_bin default '' not null comment '主资源名称',
standby_cluster_phy_id bigint default -1 not null comment '备集群ID',
standby_res_name varchar(192) collate utf8_bin default '' not null comment '备资源名称',
res_type int default -1 not null comment '资源类型',
job_status int default -1 not null comment '任务状态',
extend_data text null comment '扩展数据',
create_time timestamp default CURRENT_TIMESTAMP not null comment '创建时间',
modify_time timestamp default CURRENT_TIMESTAMP not null on update CURRENT_TIMESTAMP comment '修改时间'
)
comment 'HA主备关系-切换任务表' charset = utf8;


@@ -0,0 +1,10 @@
<settings>
<mirrors>
<mirror>
<id>aliyunmaven</id>
<mirrorOf>*</mirrorOf>
<name>阿里云公共仓库</name>
<url>https://maven.aliyun.com/repository/public</url>
</mirror>
</mirrors>
</settings>


@@ -0,0 +1,47 @@
---
![kafka-manager-logo](../assets/images/common/logo_name.png)
**一站式`Apache Kafka`集群指标监控与运维管控平台**
---
# LogiKM Unit and Integration Tests
## 1. Unit Tests
### 1.1 About Unit Testing
Unit testing (module testing) verifies the correctness of the smallest unit of software design: the program module.
It checks whether each unit correctly implements the functionality, performance, interfaces, and design constraints from the detailed design spec,
and uncovers the errors hiding inside each module. Unit test cases are designed from the program's internal structure,
and multiple modules can be unit-tested independently in parallel.
### 1.2 LogiKM Unit Test Approach
LogiKM's unit tests target the Service-layer methods: enumerate the various parameters of each method
and check whether the returned result matches expectations. The unit test base class carries the @SpringBootTest annotation, so the container starts on every test run
### 1.3 LogiKM Unit Test Notes
1. Unit test cases live in the test packages under kafka-manager-core and kafka-manager-extends
2. Configuration is in resources/application.yml, including the database settings used when running the unit tests
3. When building, add the -DskipTests parameter to skip test cases, e.g. package with mvn -DskipTests
## 2. Integration Tests
### 2.1 About Integration Testing
Integration testing (assembly testing) is a form of black-box testing. It usually builds on unit testing and tests all program modules in an ordered, incremental way.
Integration testing verifies the interfaces between program units or components as they are progressively assembled into components or a whole system that meets the high-level design.
### 2.2 LogiKM Integration Test Approach
LogiKM's integration tests send HTTP requests to the Controller-layer APIs.
By enumerating test cases that simulate user operations, HTTP requests are sent to the APIs and the results are checked against expectations.
When running integration test cases locally, the @SpringBootTest annotation is not needed (i.e. the container does not have to start on every run)
### 2.3 LogiKM Integration Test Notes
1. Integration test cases are under the test package of kafka-manager-web
2. Some APIs require logging in first, which is cumbersome; login can be bypassed, see docs -> user_guide -> call_api_bypass_login
3. Integration test configuration is in resources/integrationTest-settings.properties, including cluster addresses, ZK addresses, etc.
4. Running the integration test cases requires a locally started LogiKM instance
5. When building, add the -DskipTests parameter to skip test cases, e.g. package with mvn -DskipTests


@@ -29,6 +29,7 @@
- `JMX` misconfigured: see `2. Solution`
- A firewall or network restriction is in place: `telnet` from another networked machine to see whether it can connect.
- Username/password authentication is required: see `3. Solution -- authenticated JMX`
- When logikm and kafka are on different machines, kafka's JMX port may not allow access from other machines: see `4. Solution`
Example error log:
@@ -99,3 +100,8 @@ SQL example
```sql
UPDATE cluster SET jmx_properties='{ "maxConn": 10, "username": "xxxxx", "password": "xxxx", "openSSL": false }' where id={xxx};
```
### 4. Solution -- access from other machines not allowed
![1971b46243fe1d547063ee55b1505ed](https://user-images.githubusercontent.com/2869938/154413486-f6531946-8c4c-447e-aa2e-b112e5e623d6.png)
The 127.0.0.1 in this image means the port only accepts connections from the local machine.
In CDH you can go to Configuration -> search for jmx -> find broker_java_opts, and set com.sun.management.jmxremote.host and java.rmi.server.hostname to the machine's own IP
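Before digging into JMX settings, it helps to confirm basic TCP reachability from the LogiKM host. A minimal probe sketch (the host `127.0.0.1` and port `9999` are placeholders for your broker's address and JMX port):

```shell
# Minimal reachability probe for a broker's JMX port.
# Uses bash's /dev/tcp so it works even where telnet/nc are not installed.
check_jmx() {
  # succeeds iff a TCP connection to $1:$2 opens within 3 seconds
  timeout 3 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null
}

if check_jmx 127.0.0.1 9999; then
  echo "JMX port reachable"
else
  echo "JMX port unreachable: check the firewall and the jmxremote.host / java.rmi.server.hostname binding"
fi
```

If the probe fails from a remote machine but succeeds locally, that points at the 127.0.0.1 binding described above rather than at authentication.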


@@ -0,0 +1,97 @@
---
![kafka-manager-logo](../assets/images/common/logo_name.png)
**一站式`Apache Kafka`集群指标监控与运维管控平台**
---
# Kafka Active/Standby Switchover Overview
## 1. Client Read/Write Flow
Before walking through the switchover flow, let's look at roughly how clients read and write through our in-house gateway.
![基于网关的生产消费流程](./assets/Kafka基于网关的生产消费流程.png)
As shown above, the client read/write flow is roughly:
1. The client requests Topic metadata from the gateway
2. The gateway sees that the client's KafkaUser belongs to cluster A, so it forwards the Topic metadata request to cluster A
3. Cluster A handles the gateway's Topic metadata request and returns the result to the gateway
4. The gateway returns cluster A's result to the client
5. The client learns from the Topic metadata that the Topic actually lives on cluster A, then connects to cluster A to produce and consume
**Note: the client is a stock Kafka client with no customization.**
---
## 2. Switchover Flow
Having covered the gateway-based client read/write flow, let's look at how the HA version of Kafka performs an active/standby switchover.
### 2.1 Overall Flow
![Kafka主备切换流程](./assets/Kafka主备切换流程.png)
There are quite a few diagrams; in short:
1. First block client reads and writes;
2. Wait for active-to-standby data sync to complete;
3. Reverse the data-sync direction between the active and standby clusters;
4. Update configs to steer clients to the standby cluster for reads and writes;
### 2.2 Detailed Commands
Having seen the overall flow, here are the actual commands.
```bash
# 1. Block the user's produce and consume
bin/kafka-configs.sh --zookeeper ${ZK_of_active_cluster_A} --entity-type users --entity-name ${client_kafkaUser} --add-config didi.ha.active.cluster=None --alter

# 2. Wait for FetcherLag and Offset to sync
#    No command needed; just check that the active and standby Topic offsets match.

# 3. Remove the config that has standby cluster B syncing data from active cluster A
bin/kafka-configs.sh --zookeeper ${ZK_of_standby_cluster_B} --entity-type ha-topics --entity-name ${topic_name} --delete-config didi.ha.remote.cluster --alter

# 4. Add the config that has active cluster A syncing data from standby cluster B
bin/kafka-configs.sh --zookeeper ${ZK_of_active_cluster_A} --entity-type ha-topics --entity-name ${topic_name} --add-config didi.ha.remote.cluster=${cluster_ID_of_standby_B} --alter

# 5. Update the cluster mapped to the kafkaUser in active A, standby B, and the gateway, steering requests to the standby cluster
bin/kafka-configs.sh --zookeeper ${ZK_of_active_cluster_A} --entity-type users --entity-name ${client_kafkaUser} --add-config didi.ha.active.cluster=${cluster_ID_of_standby_B} --alter
bin/kafka-configs.sh --zookeeper ${ZK_of_standby_cluster_B} --entity-type users --entity-name ${client_kafkaUser} --add-config didi.ha.active.cluster=${cluster_ID_of_standby_B} --alter
bin/kafka-configs.sh --zookeeper ${ZK_of_gateway} --entity-type users --entity-name ${client_kafkaUser} --add-config didi.ha.active.cluster=${cluster_ID_of_standby_B} --alter
```
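The switchover steps above can be wrapped into a small dry-run script for review before execution. Everything below is a sketch: the ZK addresses, user, and cluster ID are placeholders, and the `didi.ha.*` configs exist only in DiDi's custom Kafka build, not in Apache Kafka:

```shell
# Dry-run sketch: print (do not execute) the switchover commands.
KAFKA_USER="example-user"
ACTIVE_ZK="zk-a:2181"; STANDBY_ZK="zk-b:2181"; STANDBY_ID="clusterB"

ha_config() {
  # print a kafka-configs.sh invocation: $1=ZK, $2=entity type, $3=entity name, $4=config flags
  echo "bin/kafka-configs.sh --zookeeper $1 --entity-type $2 --entity-name $3 $4 --alter"
}

# step 1: block the user's reads and writes on the active cluster
ha_config "$ACTIVE_ZK" users "$KAFKA_USER" "--add-config didi.ha.active.cluster=None"
# step 5: steer the user to the standby cluster (repeat for the standby and gateway ZKs)
ha_config "$ACTIVE_ZK" users "$KAFKA_USER" "--add-config didi.ha.active.cluster=$STANDBY_ID"
```

Reviewing the printed commands before piping them to a shell keeps a fat-fingered ZK address from partially switching a user.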
---
## 3. FAQ
**Q1: Anything to watch out for in practice?**
1. Switchover happens per KafkaUser, so it is recommended that **different services use different KafkaUsers**. This helps not only with switchover but also with access control and more.
2. When establishing active/standby relations, if the active Topics hold a lot of data, establish the relations gradually; creating too many Topic relations at once forces the active cluster to serve a large amount of sync traffic, putting it under pressure.
&nbsp;
**Q2: If a consumer client restarts, will it start consuming from the earliest or the latest data?**
No. The active and standby clusters sync the __consumer_offsets Topic to each other, so the client's consumption progress on the active cluster is also synced to the standby; when the client consumes on the standby cluster, it resumes from the position last committed on the active cluster's Topic.
&nbsp;
**Q3: For programs that manage their own offsets, such as Flink jobs, can a switchover cause data loss or duplicate consumption?**
Not if Flink manages its offsets properly. Data sync between the active and standby clusters works just like replica sync within a single cluster: the standby also syncs over the Offset information of the active cluster's Topics, so no.
&nbsp;
**Q4: Can this be done without restarting the client?**
The soon-to-be-finished HA Kafka phase two will have this capability; stay tuned.
&nbsp;



@@ -0,0 +1,367 @@
<mxfile host="65bd71144e">
<diagram id="bhaMuW99Q1BzDTtcfRXp" name="Page-1">
<mxGraphModel dx="1384" dy="785" grid="1" gridSize="10" guides="1" tooltips="1" connect="1" arrows="1" fold="1" page="1" pageScale="1" pageWidth="1169" pageHeight="827" math="0" shadow="0">
<root>
<mxCell id="0"/>
<mxCell id="1" parent="0"/>
<mxCell id="81" value="1、主集群拒绝客户端的写入" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#FFFFFF;strokeColor=#d79b00;labelPosition=center;verticalLabelPosition=top;align=center;verticalAlign=bottom;fontSize=16;" vertex="1" parent="1">
<mxGeometry x="630" y="70" width="490" height="380" as="geometry"/>
</mxCell>
<mxCell id="79" value="主备高可用集群稳定时的状态" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#FFFFFF;strokeColor=#d79b00;labelPosition=center;verticalLabelPosition=top;align=center;verticalAlign=bottom;fontSize=16;" vertex="1" parent="1">
<mxGeometry x="30" y="70" width="490" height="380" as="geometry"/>
</mxCell>
<mxCell id="27" value="Kafka——主集群A" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#cdeb8b;strokeColor=#36393d;labelPosition=center;verticalLabelPosition=top;align=center;verticalAlign=bottom;" parent="1" vertex="1">
<mxGeometry x="200" y="100" width="160" height="80" as="geometry"/>
</mxCell>
<mxCell id="32" value="Zookeeper" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#dae8fc;strokeColor=#6c8ebf;" parent="1" vertex="1">
<mxGeometry x="210" y="110" width="140" height="20" as="geometry"/>
</mxCell>
<mxCell id="33" value="Kafka-Brokers" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#ffe6cc;strokeColor=#d79b00;" parent="1" vertex="1">
<mxGeometry x="210" y="150" width="140" height="20" as="geometry"/>
</mxCell>
<mxCell id="36" value="Kafka网关" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#cdeb8b;strokeColor=#36393d;labelPosition=center;verticalLabelPosition=bottom;align=center;verticalAlign=top;" parent="1" vertex="1">
<mxGeometry x="200" y="220" width="160" height="80" as="geometry"/>
</mxCell>
<mxCell id="37" value="Zookeeper" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#dae8fc;strokeColor=#6c8ebf;" parent="1" vertex="1">
<mxGeometry x="210" y="230" width="140" height="20" as="geometry"/>
</mxCell>
<mxCell id="38" value="Kafka-Gateways" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#ffe6cc;strokeColor=#d79b00;" parent="1" vertex="1">
<mxGeometry x="210" y="270" width="140" height="20" as="geometry"/>
</mxCell>
<mxCell id="63" style="edgeStyle=orthogonalEdgeStyle;html=1;exitX=1;exitY=0.5;exitDx=0;exitDy=0;entryX=1;entryY=0.5;entryDx=0;entryDy=0;" edge="1" parent="1" source="39" target="27">
<mxGeometry relative="1" as="geometry">
<Array as="points">
<mxPoint x="440" y="380"/>
<mxPoint x="440" y="140"/>
</Array>
</mxGeometry>
</mxCell>
<mxCell id="64" value="备集群B 不断向 主集群A &lt;br&gt;发送Fetch请求&lt;br&gt;从而同步主集群A的数据" style="edgeLabel;html=1;align=center;verticalAlign=middle;resizable=0;points=[];" vertex="1" connectable="0" parent="63">
<mxGeometry x="-0.05" y="-4" relative="1" as="geometry">
<mxPoint x="6" y="-10" as="offset"/>
</mxGeometry>
</mxCell>
<mxCell id="39" value="Kafka——备集群B" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#cdeb8b;strokeColor=#36393d;labelPosition=center;verticalLabelPosition=bottom;align=center;verticalAlign=top;" parent="1" vertex="1">
<mxGeometry x="200" y="340" width="160" height="80" as="geometry"/>
</mxCell>
<mxCell id="40" value="Zookeeper" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#dae8fc;strokeColor=#6c8ebf;" parent="1" vertex="1">
<mxGeometry x="210" y="350" width="140" height="20" as="geometry"/>
</mxCell>
<mxCell id="41" value="Kafka-Brokers" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#ffe6cc;strokeColor=#d79b00;" parent="1" vertex="1">
<mxGeometry x="210" y="390" width="140" height="20" as="geometry"/>
</mxCell>
<mxCell id="57" style="html=1;entryX=0;entryY=0.5;entryDx=0;entryDy=0;strokeColor=default;startArrow=classic;startFill=1;" parent="1" source="42" target="27" edge="1">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
<mxCell id="58" value="对主集群进行读写" style="edgeLabel;html=1;align=center;verticalAlign=middle;resizable=0;points=[];" parent="57" vertex="1" connectable="0">
<mxGeometry x="-0.0724" y="1" relative="1" as="geometry">
<mxPoint x="-6" as="offset"/>
</mxGeometry>
</mxCell>
<mxCell id="42" value="Kafka-Client" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#ffe6cc;strokeColor=#d79b00;" parent="1" vertex="1">
<mxGeometry x="40" y="240" width="120" height="40" as="geometry"/>
</mxCell>
<mxCell id="65" value="Kafka——主集群A" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#cdeb8b;strokeColor=#36393d;labelPosition=center;verticalLabelPosition=top;align=center;verticalAlign=bottom;" vertex="1" parent="1">
<mxGeometry x="800" y="100" width="160" height="80" as="geometry"/>
</mxCell>
<mxCell id="66" value="Zookeeper(修改ZK)" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#FF3333;strokeColor=#6c8ebf;" vertex="1" parent="1">
<mxGeometry x="810" y="110" width="140" height="20" as="geometry"/>
</mxCell>
<mxCell id="67" value="Kafka-Brokers" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#ffe6cc;strokeColor=#d79b00;" vertex="1" parent="1">
<mxGeometry x="810" y="150" width="140" height="20" as="geometry"/>
</mxCell>
<mxCell id="68" value="Kafka网关" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#cdeb8b;strokeColor=#36393d;labelPosition=center;verticalLabelPosition=bottom;align=center;verticalAlign=top;" vertex="1" parent="1">
<mxGeometry x="800" y="220" width="160" height="80" as="geometry"/>
</mxCell>
<mxCell id="69" value="Zookeeper" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#dae8fc;strokeColor=#6c8ebf;" vertex="1" parent="1">
<mxGeometry x="810" y="230" width="140" height="20" as="geometry"/>
</mxCell>
<mxCell id="70" value="Kafka-Gateways" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#ffe6cc;strokeColor=#d79b00;" vertex="1" parent="1">
<mxGeometry x="810" y="270" width="140" height="20" as="geometry"/>
</mxCell>
<mxCell id="71" style="edgeStyle=orthogonalEdgeStyle;html=1;exitX=1;exitY=0.5;exitDx=0;exitDy=0;entryX=1;entryY=0.5;entryDx=0;entryDy=0;" edge="1" parent="1" source="73" target="65">
<mxGeometry relative="1" as="geometry">
<Array as="points">
<mxPoint x="1040" y="380"/>
<mxPoint x="1040" y="140"/>
</Array>
</mxGeometry>
</mxCell>
<mxCell id="72" value="备集群B 不断向 主集群A&lt;br&gt;发送Fetch请求&lt;br&gt;从而同步主集群A的数据" style="edgeLabel;html=1;align=center;verticalAlign=middle;resizable=0;points=[];" vertex="1" connectable="0" parent="71">
<mxGeometry x="-0.05" y="-4" relative="1" as="geometry">
<mxPoint x="6" y="-10" as="offset"/>
</mxGeometry>
</mxCell>
<mxCell id="73" value="Kafka——备集群B" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#cdeb8b;strokeColor=#36393d;labelPosition=center;verticalLabelPosition=bottom;align=center;verticalAlign=top;" vertex="1" parent="1">
<mxGeometry x="800" y="340" width="160" height="80" as="geometry"/>
</mxCell>
<mxCell id="74" value="Zookeeper" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#dae8fc;strokeColor=#6c8ebf;" vertex="1" parent="1">
<mxGeometry x="810" y="350" width="140" height="20" as="geometry"/>
</mxCell>
<mxCell id="75" value="Kafka-Brokers" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#ffe6cc;strokeColor=#d79b00;" vertex="1" parent="1">
<mxGeometry x="810" y="390" width="140" height="20" as="geometry"/>
</mxCell>
<mxCell id="76" style="html=1;entryX=0;entryY=0.5;entryDx=0;entryDy=0;strokeColor=#FF3333;startArrow=none;startFill=0;strokeWidth=3;endArrow=none;endFill=0;dashed=1;" edge="1" parent="1" source="78" target="65">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
<mxCell id="77" value="对主集群进行读写会出现失败" style="edgeLabel;html=1;align=center;verticalAlign=middle;resizable=0;points=[];fontColor=#FF3333;fontSize=13;" vertex="1" connectable="0" parent="76">
<mxGeometry x="-0.0724" y="1" relative="1" as="geometry">
<mxPoint x="-6" as="offset"/>
</mxGeometry>
</mxCell>
<mxCell id="78" value="Kafka-Client" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#ffe6cc;strokeColor=#d79b00;" vertex="1" parent="1">
<mxGeometry x="640" y="240" width="120" height="40" as="geometry"/>
</mxCell>
<mxCell id="82" value="2、等待主备同步完成避免丢数据" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#FFFFFF;strokeColor=#d79b00;labelPosition=center;verticalLabelPosition=top;align=center;verticalAlign=bottom;fontSize=16;" vertex="1" parent="1">
<mxGeometry x="630" y="590" width="490" height="380" as="geometry"/>
</mxCell>
<mxCell id="83" value="Kafka——主集群A" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#cdeb8b;strokeColor=#36393d;labelPosition=center;verticalLabelPosition=top;align=center;verticalAlign=bottom;" vertex="1" parent="1">
<mxGeometry x="800" y="620" width="160" height="80" as="geometry"/>
</mxCell>
<mxCell id="84" value="Zookeeper" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#dae8fc;strokeColor=#6c8ebf;" vertex="1" parent="1">
<mxGeometry x="810" y="630" width="140" height="20" as="geometry"/>
</mxCell>
<mxCell id="85" value="Kafka-Brokers" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#ffe6cc;strokeColor=#d79b00;" vertex="1" parent="1">
<mxGeometry x="810" y="670" width="140" height="20" as="geometry"/>
</mxCell>
<mxCell id="86" value="Kafka网关" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#cdeb8b;strokeColor=#36393d;labelPosition=center;verticalLabelPosition=bottom;align=center;verticalAlign=top;" vertex="1" parent="1">
<mxGeometry x="800" y="740" width="160" height="80" as="geometry"/>
</mxCell>
<mxCell id="87" value="Zookeeper" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#dae8fc;strokeColor=#6c8ebf;" vertex="1" parent="1">
<mxGeometry x="810" y="750" width="140" height="20" as="geometry"/>
</mxCell>
<mxCell id="88" value="Kafka-Gateways" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#ffe6cc;strokeColor=#d79b00;" vertex="1" parent="1">
<mxGeometry x="810" y="790" width="140" height="20" as="geometry"/>
</mxCell>
<mxCell id="89" style="edgeStyle=orthogonalEdgeStyle;html=1;exitX=1;exitY=0.5;exitDx=0;exitDy=0;entryX=1;entryY=0.5;entryDx=0;entryDy=0;" edge="1" parent="1" source="91" target="83">
<mxGeometry relative="1" as="geometry">
<Array as="points">
<mxPoint x="1040" y="900"/>
<mxPoint x="1040" y="660"/>
</Array>
</mxGeometry>
</mxCell>
<mxCell id="90" value="备集群B 不断向 主集群A&lt;br&gt;发送Fetch请求&lt;br&gt;从而同步主集群A的&lt;br&gt;指定Topic的数据" style="edgeLabel;html=1;align=center;verticalAlign=middle;resizable=0;points=[];" vertex="1" connectable="0" parent="89">
<mxGeometry x="-0.05" y="-4" relative="1" as="geometry">
<mxPoint x="6" y="-10" as="offset"/>
</mxGeometry>
</mxCell>
<mxCell id="91" value="Kafka——备集群B" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#cdeb8b;strokeColor=#36393d;labelPosition=center;verticalLabelPosition=bottom;align=center;verticalAlign=top;" vertex="1" parent="1">
<mxGeometry x="800" y="860" width="160" height="80" as="geometry"/>
</mxCell>
<mxCell id="92" value="Zookeeper" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#dae8fc;strokeColor=#6c8ebf;" vertex="1" parent="1">
<mxGeometry x="810" y="870" width="140" height="20" as="geometry"/>
</mxCell>
<mxCell id="93" value="Kafka-Brokers" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#ffe6cc;strokeColor=#d79b00;" vertex="1" parent="1">
<mxGeometry x="810" y="910" width="140" height="20" as="geometry"/>
</mxCell>
<mxCell id="94" style="html=1;entryX=0;entryY=0.5;entryDx=0;entryDy=0;strokeColor=#FF3333;startArrow=none;startFill=0;strokeWidth=3;endArrow=none;endFill=0;dashed=1;" edge="1" parent="1" source="96" target="83">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
<mxCell id="95" value="对主集群进行读写会出现失败" style="edgeLabel;html=1;align=center;verticalAlign=middle;resizable=0;points=[];fontColor=#FF3333;fontSize=13;" vertex="1" connectable="0" parent="94">
<mxGeometry x="-0.0724" y="1" relative="1" as="geometry">
<mxPoint x="-6" as="offset"/>
</mxGeometry>
</mxCell>
<mxCell id="96" value="Kafka-Client" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#ffe6cc;strokeColor=#d79b00;" vertex="1" parent="1">
<mxGeometry x="640" y="760" width="120" height="40" as="geometry"/>
</mxCell>
<mxCell id="97" value="3、Topic粒度数据同步方向调整由主集群A向备集群B同步数据" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#FFFFFF;strokeColor=#d79b00;labelPosition=center;verticalLabelPosition=top;align=center;verticalAlign=bottom;fontSize=16;" vertex="1" parent="1">
<mxGeometry x="30" y="590" width="490" height="380" as="geometry"/>
</mxCell>
<mxCell id="98" value="Kafka——主集群A" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#cdeb8b;strokeColor=#36393d;labelPosition=center;verticalLabelPosition=top;align=center;verticalAlign=bottom;" vertex="1" parent="1">
<mxGeometry x="200" y="620" width="160" height="80" as="geometry"/>
</mxCell>
<mxCell id="99" value="Zookeeper" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#dae8fc;strokeColor=#6c8ebf;" vertex="1" parent="1">
<mxGeometry x="210" y="630" width="140" height="20" as="geometry"/>
</mxCell>
<mxCell id="100" value="Kafka-Brokers" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#ffe6cc;strokeColor=#d79b00;" vertex="1" parent="1">
<mxGeometry x="210" y="670" width="140" height="20" as="geometry"/>
</mxCell>
<mxCell id="101" value="Kafka网关" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#cdeb8b;strokeColor=#36393d;labelPosition=center;verticalLabelPosition=bottom;align=center;verticalAlign=top;" vertex="1" parent="1">
<mxGeometry x="200" y="740" width="160" height="80" as="geometry"/>
</mxCell>
<mxCell id="102" value="Zookeeper" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#dae8fc;strokeColor=#6c8ebf;" vertex="1" parent="1">
<mxGeometry x="210" y="750" width="140" height="20" as="geometry"/>
</mxCell>
<mxCell id="103" value="Kafka-Gateways" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#ffe6cc;strokeColor=#d79b00;" vertex="1" parent="1">
<mxGeometry x="210" y="790" width="140" height="20" as="geometry"/>
</mxCell>
<mxCell id="104" style="edgeStyle=orthogonalEdgeStyle;html=1;exitX=1;exitY=0.5;exitDx=0;exitDy=0;entryX=1;entryY=0.5;entryDx=0;entryDy=0;endArrow=none;endFill=0;strokeColor=#FF3333;strokeWidth=1;startArrow=classic;startFill=1;" edge="1" parent="1" source="106" target="98">
<mxGeometry relative="1" as="geometry">
<Array as="points">
<mxPoint x="440" y="900"/>
<mxPoint x="440" y="660"/>
</Array>
</mxGeometry>
</mxCell>
<mxCell id="105" value="&lt;span style=&quot;font-size: 11px;&quot;&gt;主集群A 不断向 备集群B&lt;/span&gt;&lt;br style=&quot;font-size: 11px;&quot;&gt;&lt;span style=&quot;font-size: 11px;&quot;&gt;发送Fetch请求&lt;/span&gt;&lt;br style=&quot;font-size: 11px;&quot;&gt;&lt;span style=&quot;font-size: 11px;&quot;&gt;从而同步备集群B的&lt;br&gt;指定Topic的数据&lt;/span&gt;" style="edgeLabel;html=1;align=center;verticalAlign=middle;resizable=0;points=[];fontColor=#FF3333;fontSize=13;" vertex="1" connectable="0" parent="104">
<mxGeometry x="-0.05" y="-4" relative="1" as="geometry">
<mxPoint x="-4" y="-10" as="offset"/>
</mxGeometry>
</mxCell>
<mxCell id="106" value="Kafka——备集群B" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#cdeb8b;strokeColor=#36393d;labelPosition=center;verticalLabelPosition=bottom;align=center;verticalAlign=top;" vertex="1" parent="1">
<mxGeometry x="200" y="860" width="160" height="80" as="geometry"/>
</mxCell>
<mxCell id="107" value="Zookeeper" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#dae8fc;strokeColor=#6c8ebf;" vertex="1" parent="1">
<mxGeometry x="210" y="870" width="140" height="20" as="geometry"/>
</mxCell>
<mxCell id="108" value="Kafka-Brokers" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#ffe6cc;strokeColor=#d79b00;" vertex="1" parent="1">
<mxGeometry x="210" y="910" width="140" height="20" as="geometry"/>
</mxCell>
<mxCell id="109" style="html=1;entryX=0;entryY=0.5;entryDx=0;entryDy=0;strokeColor=#FF3333;startArrow=none;startFill=0;strokeWidth=3;endArrow=none;endFill=0;dashed=1;" edge="1" parent="1" source="111" target="98">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
<mxCell id="110" value="对主集群进行读写会出现失败" style="edgeLabel;html=1;align=center;verticalAlign=middle;resizable=0;points=[];fontColor=#FF3333;fontSize=13;" vertex="1" connectable="0" parent="109">
<mxGeometry x="-0.0724" y="1" relative="1" as="geometry">
<mxPoint x="-6" as="offset"/>
</mxGeometry>
</mxCell>
<mxCell id="111" value="Kafka-Client" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#ffe6cc;strokeColor=#d79b00;" vertex="1" parent="1">
<mxGeometry x="40" y="760" width="120" height="40" as="geometry"/>
</mxCell>
<mxCell id="127" value="4、修改ZK使得客户端使用的KafkaUser对应的集群为备集群B" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#FFFFFF;strokeColor=#d79b00;labelPosition=center;verticalLabelPosition=top;align=center;verticalAlign=bottom;fontSize=16;" vertex="1" parent="1">
<mxGeometry x="30" y="1110" width="490" height="380" as="geometry"/>
</mxCell>
<mxCell id="128" value="Kafka——主集群A" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#cdeb8b;strokeColor=#36393d;labelPosition=center;verticalLabelPosition=top;align=center;verticalAlign=bottom;" vertex="1" parent="1">
<mxGeometry x="200" y="1140" width="160" height="80" as="geometry"/>
</mxCell>
<mxCell id="130" value="Kafka-Brokers" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#ffe6cc;strokeColor=#d79b00;" vertex="1" parent="1">
<mxGeometry x="210" y="1190" width="140" height="20" as="geometry"/>
</mxCell>
<mxCell id="131" value="Kafka网关" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#cdeb8b;strokeColor=#36393d;labelPosition=center;verticalLabelPosition=bottom;align=center;verticalAlign=top;" vertex="1" parent="1">
<mxGeometry x="200" y="1260" width="160" height="80" as="geometry"/>
</mxCell>
<mxCell id="132" value="Zookeeper(修改ZK)" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#FF3333;strokeColor=#6c8ebf;" vertex="1" parent="1">
<mxGeometry x="210" y="1270" width="140" height="20" as="geometry"/>
</mxCell>
<mxCell id="133" value="Kafka-Gateways" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#ffe6cc;strokeColor=#d79b00;" vertex="1" parent="1">
<mxGeometry x="210" y="1310" width="140" height="20" as="geometry"/>
</mxCell>
<mxCell id="134" style="edgeStyle=orthogonalEdgeStyle;html=1;exitX=1;exitY=0.5;exitDx=0;exitDy=0;entryX=1;entryY=0.5;entryDx=0;entryDy=0;endArrow=none;endFill=0;strokeColor=#000000;strokeWidth=1;startArrow=classic;startFill=1;" edge="1" parent="1" source="136" target="128">
<mxGeometry relative="1" as="geometry">
<Array as="points">
<mxPoint x="440" y="1420"/>
<mxPoint x="440" y="1180"/>
</Array>
</mxGeometry>
</mxCell>
<mxCell id="135" value="&lt;span style=&quot;color: rgb(0 , 0 , 0) ; font-size: 11px&quot;&gt;主集群A 不断向 备集群B&lt;/span&gt;&lt;br style=&quot;color: rgb(0 , 0 , 0) ; font-size: 11px&quot;&gt;&lt;span style=&quot;color: rgb(0 , 0 , 0) ; font-size: 11px&quot;&gt;发送Fetch请求&lt;/span&gt;&lt;br style=&quot;color: rgb(0 , 0 , 0) ; font-size: 11px&quot;&gt;&lt;span style=&quot;color: rgb(0 , 0 , 0) ; font-size: 11px&quot;&gt;从而同步备集群B的&lt;br&gt;指定Topic的数据&lt;/span&gt;" style="edgeLabel;html=1;align=center;verticalAlign=middle;resizable=0;points=[];fontColor=#FF3333;fontSize=13;" vertex="1" connectable="0" parent="134">
<mxGeometry x="-0.05" y="-4" relative="1" as="geometry">
<mxPoint x="-4" y="-10" as="offset"/>
</mxGeometry>
</mxCell>
<mxCell id="136" value="Kafka——备集群B" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#cdeb8b;strokeColor=#36393d;labelPosition=center;verticalLabelPosition=bottom;align=center;verticalAlign=top;" vertex="1" parent="1">
<mxGeometry x="200" y="1380" width="160" height="80" as="geometry"/>
</mxCell>
<mxCell id="138" value="Kafka-Brokers" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#ffe6cc;strokeColor=#d79b00;" vertex="1" parent="1">
<mxGeometry x="210" y="1430" width="140" height="20" as="geometry"/>
</mxCell>
<mxCell id="139" style="html=1;entryX=0;entryY=0.5;entryDx=0;entryDy=0;strokeColor=#FF3333;startArrow=none;startFill=0;strokeWidth=3;endArrow=none;endFill=0;dashed=1;" edge="1" parent="1" source="141" target="128">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
<mxCell id="140" value="对主集群进行读写会出现失败" style="edgeLabel;html=1;align=center;verticalAlign=middle;resizable=0;points=[];fontColor=#FF3333;fontSize=13;" vertex="1" connectable="0" parent="139">
<mxGeometry x="-0.0724" y="1" relative="1" as="geometry">
<mxPoint x="-6" as="offset"/>
</mxGeometry>
</mxCell>
<mxCell id="141" value="Kafka-Client" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#ffe6cc;strokeColor=#d79b00;" vertex="1" parent="1">
<mxGeometry x="40" y="1280" width="120" height="40" as="geometry"/>
</mxCell>
<mxCell id="142" value="5、重启客户端网关将请求转向集群B" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#FFFFFF;strokeColor=#d79b00;labelPosition=center;verticalLabelPosition=top;align=center;verticalAlign=bottom;fontSize=16;" vertex="1" parent="1">
<mxGeometry x="630" y="1110" width="490" height="380" as="geometry"/>
</mxCell>
<mxCell id="143" value="Kafka——主集群A" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#cdeb8b;strokeColor=#36393d;labelPosition=center;verticalLabelPosition=top;align=center;verticalAlign=bottom;" vertex="1" parent="1">
<mxGeometry x="800" y="1140" width="160" height="80" as="geometry"/>
</mxCell>
<mxCell id="144" value="Zookeeper" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#dae8fc;strokeColor=#6c8ebf;" vertex="1" parent="1">
<mxGeometry x="810" y="1150" width="140" height="20" as="geometry"/>
</mxCell>
<mxCell id="145" value="Kafka-Brokers" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#ffe6cc;strokeColor=#d79b00;" vertex="1" parent="1">
<mxGeometry x="810" y="1190" width="140" height="20" as="geometry"/>
</mxCell>
<mxCell id="146" value="Kafka网关" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#cdeb8b;strokeColor=#36393d;labelPosition=center;verticalLabelPosition=bottom;align=center;verticalAlign=top;" vertex="1" parent="1">
<mxGeometry x="800" y="1260" width="160" height="80" as="geometry"/>
</mxCell>
<mxCell id="148" value="Kafka-Gateways" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#ffe6cc;strokeColor=#d79b00;" vertex="1" parent="1">
<mxGeometry x="810" y="1310" width="140" height="20" as="geometry"/>
</mxCell>
<mxCell id="149" style="edgeStyle=orthogonalEdgeStyle;html=1;exitX=1;exitY=0.5;exitDx=0;exitDy=0;entryX=1;entryY=0.5;entryDx=0;entryDy=0;endArrow=none;endFill=0;strokeColor=#000000;strokeWidth=1;startArrow=classic;startFill=1;" edge="1" parent="1" source="151" target="143">
<mxGeometry relative="1" as="geometry">
<Array as="points">
<mxPoint x="1040" y="1420"/>
<mxPoint x="1040" y="1180"/>
</Array>
</mxGeometry>
</mxCell>
<mxCell id="150" value="&lt;span style=&quot;color: rgb(0 , 0 , 0) ; font-size: 11px&quot;&gt;主集群A 不断向 备集群B&lt;/span&gt;&lt;br style=&quot;color: rgb(0 , 0 , 0) ; font-size: 11px&quot;&gt;&lt;span style=&quot;color: rgb(0 , 0 , 0) ; font-size: 11px&quot;&gt;发送Fetch请求&lt;/span&gt;&lt;br style=&quot;color: rgb(0 , 0 , 0) ; font-size: 11px&quot;&gt;&lt;span style=&quot;color: rgb(0 , 0 , 0) ; font-size: 11px&quot;&gt;从而同步备集群B的&lt;br&gt;指定Topic的数据&lt;/span&gt;" style="edgeLabel;html=1;align=center;verticalAlign=middle;resizable=0;points=[];fontColor=#FF3333;fontSize=13;" vertex="1" connectable="0" parent="149">
<mxGeometry x="-0.05" y="-4" relative="1" as="geometry">
<mxPoint x="-4" y="-10" as="offset"/>
</mxGeometry>
</mxCell>
<mxCell id="151" value="Kafka——备集群B" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#cdeb8b;strokeColor=#36393d;labelPosition=center;verticalLabelPosition=bottom;align=center;verticalAlign=top;" vertex="1" parent="1">
<mxGeometry x="800" y="1380" width="160" height="80" as="geometry"/>
</mxCell>
<mxCell id="152" value="Zookeeper" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#dae8fc;strokeColor=#6c8ebf;" vertex="1" parent="1">
<mxGeometry x="810" y="1390" width="140" height="20" as="geometry"/>
</mxCell>
<mxCell id="153" value="Kafka-Brokers" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#ffe6cc;strokeColor=#d79b00;" vertex="1" parent="1">
<mxGeometry x="810" y="1430" width="140" height="20" as="geometry"/>
</mxCell>
<mxCell id="156" value="Kafka-Client" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#ffe6cc;strokeColor=#d79b00;" vertex="1" parent="1">
<mxGeometry x="640" y="1280" width="120" height="40" as="geometry"/>
</mxCell>
<mxCell id="157" style="html=1;entryX=0;entryY=0.5;entryDx=0;entryDy=0;strokeColor=default;startArrow=classic;startFill=1;exitX=0.5;exitY=1;exitDx=0;exitDy=0;" edge="1" parent="1" source="156" target="151">
<mxGeometry relative="1" as="geometry">
<mxPoint x="529.9966666666667" y="1400" as="sourcePoint"/>
<mxPoint x="613.3299999999999" y="1300" as="targetPoint"/>
</mxGeometry>
</mxCell>
<mxCell id="158" value="对B集群进行读写" style="edgeLabel;html=1;align=center;verticalAlign=middle;resizable=0;points=[];" vertex="1" connectable="0" parent="157">
<mxGeometry x="-0.0724" y="1" relative="1" as="geometry">
<mxPoint x="-6" as="offset"/>
</mxGeometry>
</mxCell>
<mxCell id="159" value="Zookeeper(修改ZK)" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#FF3333;strokeColor=#6c8ebf;" vertex="1" parent="1">
<mxGeometry x="210" y="1150" width="140" height="20" as="geometry"/>
</mxCell>
<mxCell id="160" value="Zookeeper(修改ZK)" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#FF3333;strokeColor=#6c8ebf;" vertex="1" parent="1">
<mxGeometry x="210" y="1390" width="140" height="20" as="geometry"/>
</mxCell>
<mxCell id="161" value="Zookeeper" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#dae8fc;strokeColor=#6c8ebf;" vertex="1" parent="1">
<mxGeometry x="810" y="1270" width="140" height="20" as="geometry"/>
</mxCell>
<mxCell id="162" value="" style="shape=flexArrow;endArrow=classic;html=1;fontSize=13;fontColor=#FF3333;strokeColor=#000000;strokeWidth=1;fillColor=#9999FF;" edge="1" parent="1">
<mxGeometry width="50" height="50" relative="1" as="geometry">
<mxPoint x="550" y="259.5" as="sourcePoint"/>
<mxPoint x="600" y="259.5" as="targetPoint"/>
</mxGeometry>
</mxCell>
<mxCell id="163" value="" style="shape=flexArrow;endArrow=classic;html=1;fontSize=13;fontColor=#FF3333;strokeColor=#000000;strokeWidth=1;fillColor=#9999FF;" edge="1" parent="1">
<mxGeometry width="50" height="50" relative="1" as="geometry">
<mxPoint x="879.5" y="490" as="sourcePoint"/>
<mxPoint x="879.5" y="540" as="targetPoint"/>
</mxGeometry>
</mxCell>
<mxCell id="164" value="" style="shape=flexArrow;endArrow=classic;html=1;fontSize=13;fontColor=#FF3333;strokeColor=#000000;strokeWidth=1;fillColor=#9999FF;" edge="1" parent="1">
<mxGeometry width="50" height="50" relative="1" as="geometry">
<mxPoint x="274.5" y="1010" as="sourcePoint"/>
<mxPoint x="274.5" y="1060" as="targetPoint"/>
</mxGeometry>
</mxCell>
<mxCell id="165" value="" style="shape=flexArrow;endArrow=classic;html=1;fontSize=13;fontColor=#FF3333;strokeColor=#000000;strokeWidth=1;fillColor=#9999FF;" edge="1" parent="1">
<mxGeometry width="50" height="50" relative="1" as="geometry">
<mxPoint x="550" y="1309" as="sourcePoint"/>
<mxPoint x="600" y="1309" as="targetPoint"/>
</mxGeometry>
</mxCell>
<mxCell id="167" value="" style="shape=flexArrow;endArrow=classic;html=1;fontSize=13;fontColor=#FF3333;strokeColor=#000000;strokeWidth=1;fillColor=#9999FF;" edge="1" parent="1">
<mxGeometry width="50" height="50" relative="1" as="geometry">
<mxPoint x="606" y="779.5" as="sourcePoint"/>
<mxPoint x="550" y="779.5" as="targetPoint"/>
</mxGeometry>
</mxCell>
</root>
</mxGraphModel>
</diagram>
</mxfile>


@@ -0,0 +1,95 @@
<mxfile host="65bd71144e">
<diagram id="bhaMuW99Q1BzDTtcfRXp" name="Page-1">
<mxGraphModel dx="1344" dy="785" grid="1" gridSize="10" guides="1" tooltips="1" connect="1" arrows="1" fold="1" page="1" pageScale="1" pageWidth="1169" pageHeight="827" math="0" shadow="0">
<root>
<mxCell id="0"/>
<mxCell id="1" parent="0"/>
<mxCell id="27" value="Kafka集群--A" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#cdeb8b;strokeColor=#36393d;labelPosition=center;verticalLabelPosition=top;align=center;verticalAlign=bottom;" vertex="1" parent="1">
<mxGeometry x="320" y="40" width="160" height="80" as="geometry"/>
</mxCell>
<mxCell id="32" value="Zookeeper" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#dae8fc;strokeColor=#6c8ebf;" vertex="1" parent="1">
<mxGeometry x="330" y="50" width="140" height="20" as="geometry"/>
</mxCell>
<mxCell id="33" value="Kafka-Brokers" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#ffe6cc;strokeColor=#d79b00;" vertex="1" parent="1">
<mxGeometry x="330" y="90" width="140" height="20" as="geometry"/>
</mxCell>
<mxCell id="47" style="edgeStyle=orthogonalEdgeStyle;html=1;entryX=1;entryY=0.25;entryDx=0;entryDy=0;exitX=1;exitY=0.75;exitDx=0;exitDy=0;" edge="1" parent="1" source="36" target="27">
<mxGeometry relative="1" as="geometry">
<Array as="points">
<mxPoint x="560" y="260"/>
<mxPoint x="560" y="60"/>
</Array>
</mxGeometry>
</mxCell>
<mxCell id="51" value="2、网关发现是A集群的KafkaUser&lt;br&gt;网关将请求转发到A集群" style="edgeLabel;html=1;align=center;verticalAlign=middle;resizable=0;points=[];" vertex="1" connectable="0" parent="47">
<mxGeometry x="-0.0444" y="-1" relative="1" as="geometry">
<mxPoint x="49" y="72" as="offset"/>
</mxGeometry>
</mxCell>
<mxCell id="55" style="edgeStyle=orthogonalEdgeStyle;html=1;exitX=0;exitY=0.5;exitDx=0;exitDy=0;entryX=1;entryY=0.5;entryDx=0;entryDy=0;" edge="1" parent="1" source="36" target="42">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
<mxCell id="56" value="4、网关返回Topic元信息" style="edgeLabel;html=1;align=center;verticalAlign=middle;resizable=0;points=[];" vertex="1" connectable="0" parent="55">
<mxGeometry x="0.2125" relative="1" as="geometry">
<mxPoint x="17" y="-10" as="offset"/>
</mxGeometry>
</mxCell>
<mxCell id="36" value="Kafka网关" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#cdeb8b;strokeColor=#36393d;labelPosition=center;verticalLabelPosition=bottom;align=center;verticalAlign=top;" vertex="1" parent="1">
<mxGeometry x="320" y="200" width="160" height="80" as="geometry"/>
</mxCell>
<mxCell id="37" value="Zookeeper" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#dae8fc;strokeColor=#6c8ebf;" vertex="1" parent="1">
<mxGeometry x="330" y="210" width="140" height="20" as="geometry"/>
</mxCell>
<mxCell id="38" value="Kafka-Gateways" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#ffe6cc;strokeColor=#d79b00;" vertex="1" parent="1">
<mxGeometry x="330" y="250" width="140" height="20" as="geometry"/>
</mxCell>
<mxCell id="39" value="Kafka集群--B" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#cdeb8b;strokeColor=#36393d;labelPosition=center;verticalLabelPosition=bottom;align=center;verticalAlign=top;" vertex="1" parent="1">
<mxGeometry x="320" y="360" width="160" height="80" as="geometry"/>
</mxCell>
<mxCell id="40" value="Zookeeper" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#dae8fc;strokeColor=#6c8ebf;" vertex="1" parent="1">
<mxGeometry x="330" y="370" width="140" height="20" as="geometry"/>
</mxCell>
<mxCell id="41" value="Kafka-Brokers" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#ffe6cc;strokeColor=#d79b00;" vertex="1" parent="1">
<mxGeometry x="330" y="410" width="140" height="20" as="geometry"/>
</mxCell>
<mxCell id="57" style="html=1;entryX=0;entryY=0.5;entryDx=0;entryDy=0;strokeColor=default;startArrow=classic;startFill=1;" edge="1" parent="1" source="42" target="27">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
<mxCell id="58" value="5、通过Topic元信息&lt;br&gt;客户端直接访问A集群进行生产消费" style="edgeLabel;html=1;align=center;verticalAlign=middle;resizable=0;points=[];" vertex="1" connectable="0" parent="57">
<mxGeometry x="-0.0724" y="1" relative="1" as="geometry">
<mxPoint x="-6" as="offset"/>
</mxGeometry>
</mxCell>
<mxCell id="42" value="Kafka-Client" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#ffe6cc;strokeColor=#d79b00;" vertex="1" parent="1">
<mxGeometry x="40" y="220" width="120" height="40" as="geometry"/>
</mxCell>
<mxCell id="48" style="html=1;entryX=0;entryY=0.75;entryDx=0;entryDy=0;exitX=0.5;exitY=1;exitDx=0;exitDy=0;edgeStyle=orthogonalEdgeStyle;" edge="1" parent="1" source="42" target="36">
<mxGeometry relative="1" as="geometry">
<mxPoint x="490" y="250" as="sourcePoint"/>
<mxPoint x="490" y="90" as="targetPoint"/>
</mxGeometry>
</mxCell>
<mxCell id="50" value="1、请求Topic元信息" style="edgeLabel;html=1;align=center;verticalAlign=middle;resizable=0;points=[];" vertex="1" connectable="0" parent="48">
<mxGeometry x="-0.3373" y="-1" relative="1" as="geometry">
<mxPoint x="17" y="7" as="offset"/>
</mxGeometry>
</mxCell>
<mxCell id="49" style="edgeStyle=orthogonalEdgeStyle;html=1;entryX=1;entryY=0.25;entryDx=0;entryDy=0;exitX=1;exitY=0.75;exitDx=0;exitDy=0;" edge="1" parent="1" source="27" target="36">
<mxGeometry relative="1" as="geometry">
<mxPoint x="640" y="60" as="sourcePoint"/>
<mxPoint x="490" y="70" as="targetPoint"/>
<Array as="points">
<mxPoint x="520" y="100"/>
<mxPoint x="520" y="220"/>
</Array>
</mxGeometry>
</mxCell>
<mxCell id="52" value="3、A集群返回&lt;br&gt;Topic元信息给网关" style="edgeLabel;html=1;align=center;verticalAlign=middle;resizable=0;points=[];" vertex="1" connectable="0" parent="49">
<mxGeometry x="-0.03" y="-1" relative="1" as="geometry">
<mxPoint x="-19" y="3" as="offset"/>
</mxGeometry>
</mxCell>
</root>
</mxGraphModel>
</diagram>
</mxfile>


@@ -0,0 +1,132 @@
---
![kafka-manager-logo](../assets/images/common/logo_name.png)
**A one-stop platform for `Apache Kafka` cluster metrics monitoring and operations management**
---
## Deploying LogiKM with Docker
To help users stand up LogiKM quickly in their own environment, it can be deployed with Docker.
### Deploy MySQL
```shell
docker run --name mysql -p 3306:3306 -d registry.cn-hangzhou.aliyuncs.com/zqqq/logikm-mysql:5.7.37
```
See the [MySQL image documentation](https://hub.docker.com/_/mysql) for optional variables.
Default parameters:
* MYSQL_ROOT_PASSWORD: root
### Deploy LogiKM all-in-one
> Front end and back end deployed in one container
```shell
docker run --name logikm -p 8080:8080 --link mysql -d registry.cn-hangzhou.aliyuncs.com/zqqq/logikm:2.6.0
```
Parameter details:
* -p maps container port 8080 to port 8080 on the host
* --link connects to the mysql container
### Deploy the front end and back end separately
#### Deploy the back end (logikm-backend)
```shell
docker run --name logikm-backend --link mysql -d registry.cn-hangzhou.aliyuncs.com/zqqq/logikm-backend:2.6.0
```
Optional parameters:
* -e LOGI_MYSQL_HOST: MySQL host (default: mysql)
* -e LOGI_MYSQL_PORT: MySQL port (default: 3306)
* -e LOGI_MYSQL_DATABASE: database name (default: logi_kafka_manager)
* -e LOGI_MYSQL_USER: MySQL username (default: root)
* -e LOGI_MYSQL_PASSWORD: MySQL password (default: root)
#### Deploy the front end (logikm-front)
```shell
docker run --name logikm-front -p 8088:80 --link logikm-backend -d registry.cn-hangzhou.aliyuncs.com/zqqq/logikm-front:2.6.0
```
### Configurable back-end parameters
The following environment variables can be passed to docker run with -e:
| Environment variable | Description | Default |
| ------------------- | ------------- | ------------------ |
| LOGI_MYSQL_HOST | MySQL host | mysql |
| LOGI_MYSQL_PORT | MySQL port | 3306 |
| LOGI_MYSQL_DATABASE | database name | logi_kafka_manager |
| LOGI_MYSQL_USER | MySQL username | root |
| LOGI_MYSQL_PASSWORD | MySQL password | root |
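Putting the variables in the table together, a back-end container pointed at an external MySQL instance could be started as below. This is a sketch: the host and credentials are placeholder values, and the command is echoed so it can be reviewed before running.

```shell
# Placeholder values -- substitute your own MySQL endpoint and credentials.
LOGI_MYSQL_HOST=192.168.1.10
LOGI_MYSQL_PASSWORD=secret

# Echo the assembled command for review; drop the leading "echo" to run it.
echo docker run --name logikm-backend -d \
  -e LOGI_MYSQL_HOST="$LOGI_MYSQL_HOST" \
  -e LOGI_MYSQL_PORT=3306 \
  -e LOGI_MYSQL_DATABASE=logi_kafka_manager \
  -e LOGI_MYSQL_USER=root \
  -e LOGI_MYSQL_PASSWORD="$LOGI_MYSQL_PASSWORD" \
  registry.cn-hangzhou.aliyuncs.com/zqqq/logikm-backend:2.6.0
```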
## Building from source with Docker
This section describes how to build LogiKM from source using Docker.
### Build the MySQL image
```shell
docker build -t mysql:{TAG} -f container/dockerfiles/mysql/Dockerfile container/dockerfiles/mysql
```
### Build the all-in-one image
Packages the front end and back end into a single image.
```shell
docker build -t logikm:{TAG} .
```
Optional --build-arg parameters:
* MAVEN_VERSION: tag of the Maven base image
* JAVA_VERSION: tag of the Java base image
### Build the front end and back end separately
Builds the front end and back end as separate images.
#### Build the back end
```shell
docker build --build-arg CONSOLE_ENABLE=false -t logikm-backend:{TAG} .
```
Parameters:
* MAVEN_VERSION: tag of the Maven base image
* JAVA_VERSION: tag of the Java base image
* CONSOLE_ENABLE=false: skips building the console module
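For example, pinning the base-image tags while skipping the console module might look like this. The tag values shown are placeholders, not requirements, and the command is echoed so it can be inspected first.

```shell
# Placeholder tags -- pick Maven/JDK image tags that match your toolchain.
MAVEN_VERSION=3.8-jdk-8
JAVA_VERSION=8-jre

# Echo the assembled build command; drop the leading "echo" to run it.
echo docker build \
  --build-arg MAVEN_VERSION="$MAVEN_VERSION" \
  --build-arg JAVA_VERSION="$JAVA_VERSION" \
  --build-arg CONSOLE_ENABLE=false \
  -t logikm-backend:2.6.0 .
```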
#### Build the front end
```shell
docker build -t logikm-front:{TAG} -f kafka-manager-console/Dockerfile kafka-manager-console
```
Optional parameters:
* --build-arg OUTPUT_PATH: overrides the build output path (default: dist under the current directory)


@@ -112,5 +112,15 @@
<artifactId>lombok</artifactId>
<scope>compile</scope>
</dependency>
<dependency>
<groupId>com.baomidou</groupId>
<artifactId>mybatis-plus-boot-starter</artifactId>
</dependency>
<dependency>
<groupId>org.hibernate.validator</groupId>
<artifactId>hibernate-validator</artifactId>
</dependency>
</dependencies>
</project>


@@ -0,0 +1,21 @@
package com.xiaojukeji.kafka.manager.common.bizenum;
import lombok.Getter;
@Getter
public enum JobLogBizTypEnum {
HA_SWITCH_JOB_LOG(100, "HA-主备切换日志"),
UNKNOWN(-1, "unknown"),
;
JobLogBizTypEnum(int code, String msg) {
this.code = code;
this.msg = msg;
}
private final int code;
private final String msg;
}


@@ -1,11 +1,11 @@
package com.xiaojukeji.kafka.manager.kcm.common.bizenum;
package com.xiaojukeji.kafka.manager.common.bizenum;
/**
* Task action
* @author zengqiao
* @date 20/4/26
*/
public enum ClusterTaskActionEnum {
public enum TaskActionEnum {
UNKNOWN("unknown"),
START("start"),
@@ -17,13 +17,15 @@ public enum ClusterTaskActionEnum {
REDO("redo"),
KILL("kill"),
FORCE("force"),
ROLLBACK("rollback"),
;
private String action;
private final String action;
ClusterTaskActionEnum(String action) {
TaskActionEnum(String action) {
this.action = action;
}


@@ -1,10 +1,13 @@
package com.xiaojukeji.kafka.manager.common.bizenum;
import lombok.Getter;
/**
* Task status
* @author zengqiao
* @date 2017/6/29.
*/
@Getter
public enum TaskStatusEnum {
UNKNOWN( -1, "未知"),
@@ -15,6 +18,7 @@ public enum TaskStatusEnum {
RUNNING( 30, "运行中"),
KILLING( 31, "杀死中"),
RUNNING_IN_TIMEOUT( 32, "超时运行中"),
BLOCKED( 40, "暂停"),
@@ -30,31 +34,15 @@ public enum TaskStatusEnum {
;
private Integer code;
private final Integer code;
private String message;
private final String message;
TaskStatusEnum(Integer code, String message) {
this.code = code;
this.message = message;
}
public Integer getCode() {
return code;
}
public String getMessage() {
return message;
}
@Override
public String toString() {
return "TaskStatusEnum{" +
"code=" + code +
", message='" + message + '\'' +
'}';
}
public static Boolean isFinished(Integer code) {
return code >= FINISHED.getCode();
}


@@ -17,9 +17,9 @@ public enum TopicAuthorityEnum {
OWNER(4, "可管理"),
;
private Integer code;
private final Integer code;
private String message;
private final String message;
TopicAuthorityEnum(Integer code, String message) {
this.code = code;
@@ -34,6 +34,16 @@ public enum TopicAuthorityEnum {
return message;
}
public static String getMsgByCode(Integer code) {
for (TopicAuthorityEnum authorityEnum: TopicAuthorityEnum.values()) {
if (authorityEnum.getCode().equals(code)) {
return authorityEnum.message;
}
}
return DENY.message;
}
@Override
public String toString() {
return "TopicAuthorityEnum{" +


@@ -10,12 +10,11 @@ public enum GatewayConfigKeyEnum {
SD_APP_RATE("SD_APP_RATE", "SD_APP_RATE"),
SD_IP_RATE("SD_IP_RATE", "SD_IP_RATE"),
SD_SP_RATE("SD_SP_RATE", "SD_SP_RATE"),
;
private String configType;
private final String configType;
private String configName;
private final String configName;
GatewayConfigKeyEnum(String configType, String configName) {
this.configType = configType;

View File

@@ -0,0 +1,27 @@
package com.xiaojukeji.kafka.manager.common.bizenum.ha;
import lombok.Getter;
/**
* @author zengqiao
* @date 20/7/28
*/
@Getter
public enum HaRelationTypeEnum {
UNKNOWN(-1, "非高可用"),
STANDBY(0, ""),
ACTIVE(1, ""),
MUTUAL_BACKUP(2, "互备");
private final int code;
private final String msg;
HaRelationTypeEnum(int code, String msg) {
this.code = code;
this.msg = msg;
}
}

View File

@@ -0,0 +1,29 @@
package com.xiaojukeji.kafka.manager.common.bizenum.ha;
import lombok.Getter;
/**
* @author zengqiao
* @date 20/7/28
*/
@Getter
public enum HaResTypeEnum {
CLUSTER(0, "Cluster"),
TOPIC(1, "Topic"),
KAFKA_USER(2, "KafkaUser"),
KAFKA_USER_AND_CLIENT(3, "KafkaUserAndClient"),
;
private final int code;
private final String msg;
HaResTypeEnum(int code, String msg) {
this.code = code;
this.msg = msg;
}
}

View File

@@ -0,0 +1,75 @@
package com.xiaojukeji.kafka.manager.common.bizenum.ha;
/**
* @author zengqiao
* @date 20/7/28
*/
public enum HaStatusEnum {
UNKNOWN(-1, "未知状态"),
STABLE(HaStatusEnum.STABLE_CODE, "稳定状态"),
// SWITCHING(HaStatusEnum.SWITCHING_CODE, "切换中"),
SWITCHING_PREPARE(
HaStatusEnum.SWITCHING_PREPARE_CODE,
"主备切换--源集群[%s]--预处理(阻止当前主Topic写入)"),
SWITCHING_WAITING_IN_SYNC(
HaStatusEnum.SWITCHING_WAITING_IN_SYNC_CODE,
"主备切换--目标集群[%s]--等待主与备Topic数据同步完成"),
SWITCHING_CLOSE_OLD_STANDBY_TOPIC_FETCH(
HaStatusEnum.SWITCHING_CLOSE_OLD_STANDBY_TOPIC_FETCH_CODE,
"主备切换--目标集群[%s]--关闭旧的备Topic的副本同步"),
SWITCHING_OPEN_NEW_STANDBY_TOPIC_FETCH(
HaStatusEnum.SWITCHING_OPEN_NEW_STANDBY_TOPIC_FETCH_CODE,
"主备切换--源集群[%s]--开启新的备Topic的副本同步"),
SWITCHING_CLOSEOUT(
HaStatusEnum.SWITCHING_CLOSEOUT_CODE,
"主备切换--目标集群[%s]--收尾(允许新的主Topic写入)"),
;
public static final int UNKNOWN_CODE = -1;
public static final int STABLE_CODE = 0;
public static final int SWITCHING_CODE = 100;
public static final int SWITCHING_PREPARE_CODE = 101;
public static final int SWITCHING_WAITING_IN_SYNC_CODE = 102;
public static final int SWITCHING_CLOSE_OLD_STANDBY_TOPIC_FETCH_CODE = 103;
public static final int SWITCHING_OPEN_NEW_STANDBY_TOPIC_FETCH_CODE = 104;
public static final int SWITCHING_CLOSEOUT_CODE = 105;
private final int code;
private final String msg;
public int getCode() {
return code;
}
public String getMsg(String clusterName) {
if (this.code == UNKNOWN_CODE || this.code == STABLE_CODE) {
return this.msg;
}
return String.format(msg, clusterName);
}
HaStatusEnum(int code, String msg) {
this.code = code;
this.msg = msg;
}
public static Integer calProgress(Integer status) {
if (status == null || status == HaStatusEnum.STABLE_CODE || status == UNKNOWN_CODE) {
return 100;
}
// minimum progress is 1%
return Math.max(1, (status - 101) * 100 / 5);
}
}
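The `calProgress` arithmetic above maps the five switching codes (101–105) onto 0–80% with a 1% floor, while stable/unknown states report 100%. A minimal standalone sketch (the class name `CalProgressDemo` is illustrative; the method body is copied from the enum):

```java
public class CalProgressDemo {
    static final int UNKNOWN_CODE = -1;
    static final int STABLE_CODE = 0;

    // Same arithmetic as HaStatusEnum.calProgress: stable/unknown report 100%,
    // switching codes 101..105 map to 1%, 20%, 40%, 60%, 80%.
    static Integer calProgress(Integer status) {
        if (status == null || status == STABLE_CODE || status == UNKNOWN_CODE) {
            return 100;
        }
        // floor of 1% so a just-started switch never shows 0%
        return Math.max(1, (status - 101) * 100 / 5);
    }

    public static void main(String[] args) {
        System.out.println(calProgress(101)); // 1 (the floor: (101-101)*100/5 = 0)
        System.out.println(calProgress(103)); // 40
        System.out.println(calProgress(105)); // 80
        System.out.println(calProgress(0));   // 100
    }
}
```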

View File

@@ -0,0 +1,44 @@
package com.xiaojukeji.kafka.manager.common.bizenum.ha.job;
public enum HaJobActionEnum {
/**
*
*/
START(1, "start"),
STOP(2, "stop"),
CANCEL(3, "cancel"),
CONTINUE(4, "continue"),
UNKNOWN(-1, "unknown");
HaJobActionEnum(int status, String value) {
this.status = status;
this.value = value;
}
private final int status;
private final String value;
public int getStatus() {
return status;
}
public String getValue() {
return value;
}
public static HaJobActionEnum valueOfStatus(int status) {
for (HaJobActionEnum statusEnum : HaJobActionEnum.values()) {
if (status == statusEnum.getStatus()) {
return statusEnum;
}
}
return HaJobActionEnum.UNKNOWN;
}
}
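`valueOfStatus` is the usual linear-scan-with-`UNKNOWN`-fallback lookup. The same pattern, reduced to a self-contained sketch (the enum and class names here are illustrative):

```java
public class ActionLookupDemo {
    enum HaJobAction {
        START(1), STOP(2), CANCEL(3), CONTINUE(4), UNKNOWN(-1);

        final int status;

        HaJobAction(int status) { this.status = status; }

        // Same pattern as HaJobActionEnum.valueOfStatus:
        // scan values(), fall back to UNKNOWN for unmapped codes.
        static HaJobAction valueOfStatus(int status) {
            for (HaJobAction a : values()) {
                if (a.status == status) {
                    return a;
                }
            }
            return UNKNOWN;
        }
    }

    public static void main(String[] args) {
        System.out.println(HaJobAction.valueOfStatus(2));  // STOP
        System.out.println(HaJobAction.valueOfStatus(99)); // UNKNOWN
    }
}
```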

View File

@@ -0,0 +1,75 @@
package com.xiaojukeji.kafka.manager.common.bizenum.ha.job;
import com.xiaojukeji.kafka.manager.common.bizenum.TaskStatusEnum;
public enum HaJobStatusEnum {
/** Running */
RUNNING(TaskStatusEnum.RUNNING),
RUNNING_IN_TIMEOUT(TaskStatusEnum.RUNNING_IN_TIMEOUT),
SUCCESS(TaskStatusEnum.SUCCEED),
FAILED(TaskStatusEnum.FAILED),
UNKNOWN(TaskStatusEnum.UNKNOWN);
HaJobStatusEnum(TaskStatusEnum taskStatusEnum) {
this.status = taskStatusEnum.getCode();
this.value = taskStatusEnum.getMessage();
}
private final int status;
private final String value;
public int getStatus() {
return status;
}
public String getValue() {
return value;
}
public static HaJobStatusEnum valueOfStatus(int status) {
for (HaJobStatusEnum statusEnum : HaJobStatusEnum.values()) {
if (status == statusEnum.getStatus()) {
return statusEnum;
}
}
return HaJobStatusEnum.UNKNOWN;
}
public static HaJobStatusEnum getStatusBySubStatus(int totalJobNum,
int successJobNu,
int failedJobNu,
int runningJobNu,
int runningInTimeoutJobNu,
int unknownJobNu) {
if (unknownJobNu > 0) {
return UNKNOWN;
}
if((failedJobNu + runningJobNu + runningInTimeoutJobNu + unknownJobNu) == 0) {
return SUCCESS;
}
if((runningJobNu + runningInTimeoutJobNu + unknownJobNu) == 0 && failedJobNu > 0) {
return FAILED;
}
if (runningInTimeoutJobNu > 0) {
return RUNNING_IN_TIMEOUT;
}
return RUNNING;
}
public static boolean isRunning(Integer jobStatus) {
return jobStatus != null && (RUNNING.status == jobStatus || RUNNING_IN_TIMEOUT.status == jobStatus);
}
public static boolean isFinished(Integer jobStatus) {
return jobStatus != null && (SUCCESS.status == jobStatus || FAILED.status == jobStatus);
}
}
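The aggregation precedence in `getStatusBySubStatus` is: any unknown sub-job wins, then all-terminal-success, then all-terminal-with-failures, then timeout, then plain running (note that `totalJobNum` is accepted but unused). A condensed sketch of the same branch order, with illustrative names:

```java
public class JobAggregationDemo {
    enum Agg { RUNNING, RUNNING_IN_TIMEOUT, SUCCESS, FAILED, UNKNOWN }

    // Same precedence as HaJobStatusEnum.getStatusBySubStatus.
    static Agg aggregate(int failed, int running, int runningInTimeout, int unknown) {
        if (unknown > 0) return Agg.UNKNOWN;                     // any unknown poisons the batch
        if (failed + running + runningInTimeout == 0) return Agg.SUCCESS;
        if (running + runningInTimeout == 0 && failed > 0) return Agg.FAILED;
        if (runningInTimeout > 0) return Agg.RUNNING_IN_TIMEOUT;
        return Agg.RUNNING;
    }

    public static void main(String[] args) {
        System.out.println(aggregate(0, 0, 0, 0)); // SUCCESS (nothing pending or failed)
        System.out.println(aggregate(2, 1, 0, 0)); // RUNNING (failures don't end a batch while jobs still run)
        System.out.println(aggregate(2, 0, 0, 0)); // FAILED (all terminal, some failed)
    }
}
```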

View File

@@ -31,6 +31,10 @@ public class ConfigConstant {
public static final String KAFKA_CLUSTER_DO_CONFIG_KEY = "KAFKA_CLUSTER_DO_CONFIG";
public static final String HA_SWITCH_JOB_TIMEOUT_UNIT_SEC_CONFIG_PREFIX = "HA_SWITCH_JOB_TIMEOUT_UNIT_SEC_CONFIG_CLUSTER";
public static final String HA_CONNECTION_ACTIVE_TIME_UNIT_MIN = "HA_CONNECTION_ACTIVE_TIME_UNIT_MIN";
private ConfigConstant() {
}
}

View File

@@ -21,6 +21,32 @@ public class KafkaConstant {
public static final String INTERNAL_KEY = "INTERNAL";
public static final String BOOTSTRAP_SERVERS = "bootstrap.servers";
/**
* HA
*/
public static final String DIDI_KAFKA_ENABLE = "didi.kafka.enable";
public static final String DIDI_HA_REMOTE_CLUSTER = "didi.ha.remote.cluster";
// TODO The platform manages this config; the broker layer does not need to, so this config can be removed
public static final String DIDI_HA_SYNC_TOPIC_CONFIGS_ENABLED = "didi.ha.sync.topic.configs.enabled";
public static final String DIDI_HA_ACTIVE_CLUSTER = "didi.ha.active.cluster";
public static final String DIDI_HA_REMOTE_TOPIC = "didi.ha.remote.topic";
public static final String SECURITY_PROTOCOL = "security.protocol";
public static final String SASL_MECHANISM = "sasl.mechanism";
public static final String SASL_JAAS_CONFIG = "sasl.jaas.config";
public static final String NONE = "None";
private KafkaConstant() {
}
}

View File

@@ -0,0 +1,96 @@
package com.xiaojukeji.kafka.manager.common.constant;
/**
* Message template constants
* @author zengqiao
* @date 22/03/03
*/
public class MsgConstant {
private MsgConstant() {
}
/**************************************************** Cluster ****************************************************/
public static String getClusterBizStr(Long clusterPhyId, String clusterName){
return String.format("集群ID:[%d] 集群名称:[%s]", clusterPhyId, clusterName);
}
public static String getClusterPhyNotExist(Long clusterPhyId) {
return String.format("集群ID:[%d] 不存在或者未加载", clusterPhyId);
}
/**************************************************** Broker ****************************************************/
public static String getBrokerNotExist(Long clusterPhyId, Integer brokerId) {
return String.format("集群ID:[%d] brokerId:[%d] 不存在或未存活", clusterPhyId, brokerId);
}
public static String getBrokerBizStr(Long clusterPhyId, Integer brokerId) {
return String.format("集群ID:[%d] brokerId:[%d]", clusterPhyId, brokerId);
}
/**************************************************** Topic ****************************************************/
public static String getTopicNotExist(Long clusterPhyId, String topicName) {
return String.format("集群ID:[%d] Topic名称:[%s] 不存在", clusterPhyId, topicName);
}
public static String getTopicBizStr(Long clusterPhyId, String topicName) {
return String.format("集群ID:[%d] Topic名称:[%s]", clusterPhyId, topicName);
}
public static String getTopicExtend(Long existPartitionNum, Long totalPartitionNum,String expandParam){
return String.format("新增分区, 从:[%d] 增加到:[%d], 详细参数信息:[%s]", existPartitionNum,totalPartitionNum,expandParam);
}
public static String getClusterTopicKey(Long clusterPhyId, String topicName) {
return String.format("%d@%s", clusterPhyId, topicName);
}
/**************************************************** Partition ****************************************************/
public static String getPartitionNotExist(Long clusterPhyId, String topicName) {
return String.format("集群ID:[%d] Topic名称:[%s] 存在非法的分区ID", clusterPhyId, topicName);
}
public static String getPartitionNotExist(Long clusterPhyId, String topicName, Integer partitionId) {
return String.format("集群ID:[%d] Topic名称:[%s] 分区Id:[%d] 不存在", clusterPhyId, topicName, partitionId);
}
/**************************************************** KafkaUser ****************************************************/
public static String getKafkaUserBizStr(Long clusterPhyId, String kafkaUser) {
return String.format("集群ID:[%d] kafkaUser:[%s]", clusterPhyId, kafkaUser);
}
public static String getKafkaUserNotExist(Long clusterPhyId, String kafkaUser) {
return String.format("集群ID:[%d] kafkaUser:[%s] 不存在", clusterPhyId, kafkaUser);
}
public static String getKafkaUserDuplicate(Long clusterPhyId, String kafkaUser) {
return String.format("集群ID:[%d] kafkaUser:[%s] 已存在", clusterPhyId, kafkaUser);
}
/**************************************************** ha-Cluster ****************************************************/
public static String getActiveClusterDuplicate(Long clusterPhyId, String clusterName) {
return String.format("集群ID:[%d] 主集群:[%s] 已存在", clusterPhyId, clusterName);
}
/**************************************************** reassign ****************************************************/
public static String getReassignJobBizStr(Long jobId, Long clusterPhyId) {
return String.format("任务Id:[%d] 集群ID:[%s]", jobId, clusterPhyId);
}
public static String getJobIdCanNotNull() {
return "jobId不允许为空";
}
public static String getJobNotExist(Long jobId) {
return String.format("jobId:[%d] 不存在", jobId);
}
}
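All of these helpers are thin `String.format` wrappers; for instance the composite cluster-topic key builder, reproduced as a runnable sketch (the topic name in the example is illustrative):

```java
public class MsgDemo {
    // Mirrors MsgConstant.getClusterTopicKey: "%d" for the cluster id, "%s" for the topic name.
    static String getClusterTopicKey(Long clusterPhyId, String topicName) {
        return String.format("%d@%s", clusterPhyId, topicName);
    }

    public static void main(String[] args) {
        System.out.println(getClusterTopicKey(1L, "order-events")); // 1@order-events
    }
}
```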

View File

@@ -0,0 +1,28 @@
package com.xiaojukeji.kafka.manager.common.entity;
import com.xiaojukeji.kafka.manager.common.constant.Constant;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import lombok.ToString;
import java.io.Serializable;
@Data
@ToString
public class BaseResult implements Serializable {
private static final long serialVersionUID = -5771016784021901099L;
@ApiModelProperty(value = "信息", example = "成功")
protected String message;
@ApiModelProperty(value = "状态", example = "0")
protected int code;
public boolean successful() {
return !this.failed();
}
public boolean failed() {
return !Constant.SUCCESS.equals(code);
}
}

View File

@@ -1,21 +1,23 @@
package com.xiaojukeji.kafka.manager.common.entity;
import com.alibaba.fastjson.JSON;
import com.xiaojukeji.kafka.manager.common.constant.Constant;
import java.io.Serializable;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
/**
* @author huangyiminghappy@163.com
* @date 2019-07-08
*/
public class Result<T> implements Serializable {
private static final long serialVersionUID = -2772975319944108658L;
@Data
@ApiModel(description = "调用结果")
public class Result<T> extends BaseResult {
@ApiModelProperty(value = "数据")
protected T data;
private T data;
private String message;
private String tips;
private int code;
public Result() {
this.code = ResultStatus.SUCCESS.getCode();
this.message = ResultStatus.SUCCESS.getMessage();
}
public Result(T data) {
this.data = data;
@@ -23,10 +25,6 @@ public class Result<T> implements Serializable {
this.message = ResultStatus.SUCCESS.getMessage();
}
public Result() {
this(null);
}
public Result(Integer code, String message) {
this.message = message;
this.code = code;
@@ -38,98 +36,135 @@ public class Result<T> implements Serializable {
this.code = code;
}
public T getData()
{
return (T)this.data;
public static <T> Result<T> build(boolean succ) {
if (succ) {
return buildSuc();
}
return buildFail();
}
public void setData(T data)
{
this.data = data;
public static <T> Result<T> buildFail() {
Result<T> result = new Result<>();
result.setCode(ResultStatus.FAIL.getCode());
result.setMessage(ResultStatus.FAIL.getMessage());
return result;
}
public String getMessage()
{
return this.message;
public static <T> Result<T> build(boolean succ, T data) {
Result<T> result = new Result<>();
if (succ) {
result.setCode(ResultStatus.SUCCESS.getCode());
result.setMessage(ResultStatus.SUCCESS.getMessage());
result.setData(data);
} else {
result.setCode(ResultStatus.FAIL.getCode());
result.setMessage(ResultStatus.FAIL.getMessage());
}
return result;
}
public void setMessage(String message)
{
this.message = message;
}
public String getTips() {
return tips;
}
public void setTips(String tips) {
this.tips = tips;
}
public int getCode()
{
return this.code;
}
public void setCode(int code)
{
this.code = code;
}
@Override
public String toString()
{
return JSON.toJSONString(this);
}
public static Result buildSuc() {
Result result = new Result();
public static <T> Result<T> buildSuc() {
Result<T> result = new Result<>();
result.setCode(ResultStatus.SUCCESS.getCode());
result.setMessage(ResultStatus.SUCCESS.getMessage());
return result;
}
public static <T> Result<T> buildSuc(T data) {
Result<T> result = new Result<T>();
Result<T> result = new Result<>();
result.setCode(ResultStatus.SUCCESS.getCode());
result.setMessage(ResultStatus.SUCCESS.getMessage());
result.setData(data);
return result;
}
public static <T> Result<T> buildGatewayFailure(String message) {
Result<T> result = new Result<T>();
result.setCode(ResultStatus.GATEWAY_INVALID_REQUEST.getCode());
result.setMessage(message);
result.setData(null);
return result;
}
public static <T> Result<T> buildFailure(String message) {
Result<T> result = new Result<T>();
Result<T> result = new Result<>();
result.setCode(ResultStatus.FAIL.getCode());
result.setMessage(message);
result.setData(null);
return result;
}
public static Result buildFrom(ResultStatus resultStatus) {
Result result = new Result();
result.setCode(resultStatus.getCode());
result.setMessage(resultStatus.getMessage());
public static <T> Result<T> buildFailure(String message, T data) {
Result<T> result = new Result<>();
result.setCode(ResultStatus.FAIL.getCode());
result.setMessage(message);
result.setData(data);
return result;
}
public static Result buildFrom(ResultStatus resultStatus, Object data) {
Result result = new Result();
public static <T> Result<T> buildFailure(ResultStatus rs) {
Result<T> result = new Result<>();
result.setCode(rs.getCode());
result.setMessage(rs.getMessage());
result.setData(null);
return result;
}
public static <T> Result<T> buildGatewayFailure(String message) {
Result<T> result = new Result<>();
result.setCode(ResultStatus.GATEWAY_INVALID_REQUEST.getCode());
result.setMessage(message);
result.setData(null);
return result;
}
public static <T> Result<T> buildFrom(ResultStatus rs) {
Result<T> result = new Result<>();
result.setCode(rs.getCode());
result.setMessage(rs.getMessage());
return result;
}
public static <T> Result<T> buildFrom(ResultStatus resultStatus, T data) {
Result<T> result = new Result<>();
result.setCode(resultStatus.getCode());
result.setMessage(resultStatus.getMessage());
result.setData(data);
return result;
}
public boolean failed() {
return !Constant.SUCCESS.equals(code);
public static <T> Result<T> buildFromRSAndMsg(ResultStatus resultStatus, String message) {
Result<T> result = new Result<>();
result.setCode(resultStatus.getCode());
result.setMessage(message);
result.setData(null);
return result;
}
public static <T> Result<T> buildFromRSAndData(ResultStatus rs, T data) {
Result<T> result = new Result<>();
result.setCode(rs.getCode());
result.setMessage(rs.getMessage());
result.setData(data);
return result;
}
public static <T, U> Result<T> buildFromIgnoreData(Result<U> anotherResult) {
Result<T> result = new Result<>();
result.setCode(anotherResult.getCode());
result.setMessage(anotherResult.getMessage());
return result;
}
public static <T> Result<T> buildParamIllegal(String msg) {
Result<T> result = new Result<>();
result.setCode(ResultStatus.PARAM_ILLEGAL.getCode());
result.setMessage(ResultStatus.PARAM_ILLEGAL.getMessage() + ":" + msg + ",请检查后再提交!");
return result;
}
public boolean hasData(){
return !failed() && this.data != null;
}
@Override
public String toString() {
return "Result{" +
"message='" + message + '\'' +
", code=" + code +
", data=" + data +
'}';
}
}
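The refactor above replaces the raw `Result` with generic `Result<T>` builders. A simplified, self-contained sketch of how the builders compose (the `0`/`-1` codes stand in for `Constant.SUCCESS` and `ResultStatus.FAIL`, which are not shown in this diff):

```java
public class ResultDemo {
    static class Result<T> {
        int code;
        String message;
        T data;

        // Generic builder, as in the refactored Result.buildSuc(T).
        static <T> Result<T> buildSuc(T data) {
            Result<T> r = new Result<>();
            r.code = 0;             // placeholder for Constant.SUCCESS
            r.message = "success";
            r.data = data;
            return r;
        }

        static <T> Result<T> buildFailure(String message) {
            Result<T> r = new Result<>();
            r.code = -1;            // placeholder for ResultStatus.FAIL
            r.message = message;
            return r;
        }

        boolean failed() { return code != 0; }
        boolean hasData() { return !failed() && data != null; }
    }

    public static void main(String[] args) {
        Result<String> ok = Result.buildSuc("payload");
        Result<String> bad = Result.buildFailure("boom");
        System.out.println(ok.hasData());  // true
        System.out.println(bad.failed());  // true
    }
}
```

Making the static factories generic (`<T> Result<T> buildSuc()`) removes the raw-type warnings the pre-refactor `Result buildSuc()` produced at call sites.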

View File

@@ -23,6 +23,8 @@ public enum ResultStatus {
API_CALL_EXCEED_LIMIT(1403, "api call exceed limit"),
USER_WITHOUT_AUTHORITY(1404, "user without authority"),
CHANGE_ZOOKEEPER_FORBIDDEN(1405, "change zookeeper forbidden"),
HA_CLUSTER_DELETE_FORBIDDEN(1409, "先删除主topic才能删除该集群"),
HA_TOPIC_DELETE_FORBIDDEN(1410, "先解除高可用关系才能删除该topic"),
APP_OFFLINE_FORBIDDEN(1406, "先下线topic才能下线应用"),
@@ -76,6 +78,8 @@ public enum ResultStatus {
QUOTA_NOT_EXIST(7113, "quota not exist, please check clusterId, topicName and appId"),
CONSUMER_GROUP_NOT_EXIST(7114, "consumerGroup not exist"),
TOPIC_BIZ_DATA_NOT_EXIST(7115, "topic biz data not exist, please sync topic to db"),
SD_ZK_NOT_EXIST(7116, "SD_ZK未配置"),
// resource already exists
RESOURCE_ALREADY_EXISTED(7200, "资源已经存在"),
@@ -88,6 +92,7 @@ public enum ResultStatus {
RESOURCE_ALREADY_USED(7400, "资源早已被使用"),
/**
* Errors during operations caused by external-system issues, [8000, 9000)
* ------------------------------------------------------------------------------------------
@@ -98,6 +103,7 @@ public enum ResultStatus {
ZOOKEEPER_READ_FAILED(8021, "zookeeper read failed"),
ZOOKEEPER_WRITE_FAILED(8022, "zookeeper write failed"),
ZOOKEEPER_DELETE_FAILED(8023, "zookeeper delete failed"),
ZOOKEEPER_OPERATE_FAILED(8024, "zookeeper operate failed"),
// failed to call the agent inside the cluster task
CALL_CLUSTER_TASK_AGENT_FAILED(8030, " call cluster task agent failed"),

View File

@@ -1,11 +1,14 @@
package com.xiaojukeji.kafka.manager.common.entity.ao;
import lombok.Data;
import java.util.Date;
/**
* @author zengqiao
* @date 20/4/23
*/
@Data
public class ClusterDetailDTO {
private Long clusterId;
@@ -41,141 +44,9 @@ public class ClusterDetailDTO {
private Integer regionNum;
public Long getClusterId() {
return clusterId;
}
private Integer haRelation;
public void setClusterId(Long clusterId) {
this.clusterId = clusterId;
}
public String getClusterName() {
return clusterName;
}
public void setClusterName(String clusterName) {
this.clusterName = clusterName;
}
public String getZookeeper() {
return zookeeper;
}
public void setZookeeper(String zookeeper) {
this.zookeeper = zookeeper;
}
public String getBootstrapServers() {
return bootstrapServers;
}
public void setBootstrapServers(String bootstrapServers) {
this.bootstrapServers = bootstrapServers;
}
public String getKafkaVersion() {
return kafkaVersion;
}
public void setKafkaVersion(String kafkaVersion) {
this.kafkaVersion = kafkaVersion;
}
public String getIdc() {
return idc;
}
public void setIdc(String idc) {
this.idc = idc;
}
public Integer getMode() {
return mode;
}
public void setMode(Integer mode) {
this.mode = mode;
}
public String getSecurityProperties() {
return securityProperties;
}
public void setSecurityProperties(String securityProperties) {
this.securityProperties = securityProperties;
}
public String getJmxProperties() {
return jmxProperties;
}
public void setJmxProperties(String jmxProperties) {
this.jmxProperties = jmxProperties;
}
public Integer getStatus() {
return status;
}
public void setStatus(Integer status) {
this.status = status;
}
public Date getGmtCreate() {
return gmtCreate;
}
public void setGmtCreate(Date gmtCreate) {
this.gmtCreate = gmtCreate;
}
public Date getGmtModify() {
return gmtModify;
}
public void setGmtModify(Date gmtModify) {
this.gmtModify = gmtModify;
}
public Integer getBrokerNum() {
return brokerNum;
}
public void setBrokerNum(Integer brokerNum) {
this.brokerNum = brokerNum;
}
public Integer getTopicNum() {
return topicNum;
}
public void setTopicNum(Integer topicNum) {
this.topicNum = topicNum;
}
public Integer getConsumerGroupNum() {
return consumerGroupNum;
}
public void setConsumerGroupNum(Integer consumerGroupNum) {
this.consumerGroupNum = consumerGroupNum;
}
public Integer getControllerId() {
return controllerId;
}
public void setControllerId(Integer controllerId) {
this.controllerId = controllerId;
}
public Integer getRegionNum() {
return regionNum;
}
public void setRegionNum(Integer regionNum) {
this.regionNum = regionNum;
}
private String mutualBackupClusterName;
@Override
public String toString() {
@@ -197,6 +68,8 @@ public class ClusterDetailDTO {
", consumerGroupNum=" + consumerGroupNum +
", controllerId=" + controllerId +
", regionNum=" + regionNum +
", haRelation=" + haRelation +
", mutualBackupClusterName='" + mutualBackupClusterName + '\'' +
'}';
}
}

View File

@@ -1,5 +1,7 @@
package com.xiaojukeji.kafka.manager.common.entity.ao;
import lombok.Data;
import java.util.List;
import java.util.Properties;
@@ -7,6 +9,7 @@ import java.util.Properties;
* @author zengqiao
* @date 20/6/10
*/
@Data
public class RdTopicBasic {
private Long clusterId;
@@ -26,77 +29,7 @@ public class RdTopicBasic {
private List<String> regionNameList;
public Long getClusterId() {
return clusterId;
}
public void setClusterId(Long clusterId) {
this.clusterId = clusterId;
}
public String getClusterName() {
return clusterName;
}
public void setClusterName(String clusterName) {
this.clusterName = clusterName;
}
public String getTopicName() {
return topicName;
}
public void setTopicName(String topicName) {
this.topicName = topicName;
}
public Long getRetentionTime() {
return retentionTime;
}
public void setRetentionTime(Long retentionTime) {
this.retentionTime = retentionTime;
}
public String getAppId() {
return appId;
}
public void setAppId(String appId) {
this.appId = appId;
}
public String getAppName() {
return appName;
}
public void setAppName(String appName) {
this.appName = appName;
}
public Properties getProperties() {
return properties;
}
public void setProperties(Properties properties) {
this.properties = properties;
}
public String getDescription() {
return description;
}
public void setDescription(String description) {
this.description = description;
}
public List<String> getRegionNameList() {
return regionNameList;
}
public void setRegionNameList(List<String> regionNameList) {
this.regionNameList = regionNameList;
}
private Integer haRelation;
@Override
public String toString() {
@@ -109,7 +42,8 @@ public class RdTopicBasic {
", appName='" + appName + '\'' +
", properties=" + properties +
", description='" + description + '\'' +
", regionNameList='" + regionNameList + '\'' +
", regionNameList=" + regionNameList +
", haRelation=" + haRelation +
'}';
}
}

View File

@@ -0,0 +1,40 @@
package com.xiaojukeji.kafka.manager.common.entity.ao.common;
import lombok.Getter;
import java.util.concurrent.Delayed;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
@Getter
public class FutureTaskDelayQueueData<T> implements Delayed {
private final String taskName;
private final Future<T> futureTask;
private final long timeoutTimeUnitMs;
private final long createTimeUnitMs;
public FutureTaskDelayQueueData(String taskName, Future<T> futureTask, long timeoutTimeUnitMs) {
this.taskName = taskName;
this.futureTask = futureTask;
this.timeoutTimeUnitMs = timeoutTimeUnitMs;
this.createTimeUnitMs = System.currentTimeMillis();
}
@Override
public long getDelay(TimeUnit unit) {
return unit.convert(timeoutTimeUnitMs - System.currentTimeMillis(), TimeUnit.MILLISECONDS);
}
@Override
public int compareTo(Delayed delayed) {
FutureTaskDelayQueueData<T> other = (FutureTaskDelayQueueData<T>) delayed;
if (this.timeoutTimeUnitMs == other.timeoutTimeUnitMs) {
return (this.timeoutTimeUnitMs + "_" + this.createTimeUnitMs).compareTo((other.timeoutTimeUnitMs + "_" + other.createTimeUnitMs));
}
return (this.timeoutTimeUnitMs - other.timeoutTimeUnitMs) <= 0 ? -1 : 1;
}
}
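Since `getDelay` subtracts `System.currentTimeMillis()` from `timeoutTimeUnitMs`, callers must pass an absolute epoch-millis deadline, not a relative duration. A hedged usage sketch with `java.util.concurrent.DelayQueue` (the `DelayedFuture` wrapper and `"task-1"` name are illustrative, not the project's API):

```java
import java.util.concurrent.*;

public class DelayedFutureDemo {
    // Minimal Delayed wrapper in the same shape as FutureTaskDelayQueueData:
    // expireAtMs is an absolute epoch-millis deadline, mirroring how getDelay() subtracts "now".
    static class DelayedFuture<T> implements Delayed {
        final String name;
        final Future<T> future;
        final long expireAtMs;

        DelayedFuture(String name, Future<T> future, long expireAtMs) {
            this.name = name;
            this.future = future;
            this.expireAtMs = expireAtMs;
        }

        @Override
        public long getDelay(TimeUnit unit) {
            return unit.convert(expireAtMs - System.currentTimeMillis(), TimeUnit.MILLISECONDS);
        }

        @Override
        public int compareTo(Delayed o) {
            return Long.compare(getDelay(TimeUnit.MILLISECONDS), o.getDelay(TimeUnit.MILLISECONDS));
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        DelayQueue<DelayedFuture<String>> queue = new DelayQueue<>();

        Future<String> f = pool.submit(() -> "done");
        queue.put(new DelayedFuture<>("task-1", f, System.currentTimeMillis() + 50));

        DelayedFuture<String> expired = queue.take(); // blocks ~50ms until the deadline passes
        System.out.println(expired.name + ": " + expired.future.get());
        pool.shutdown();
    }
}
```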

View File

@@ -0,0 +1,54 @@
package com.xiaojukeji.kafka.manager.common.entity.ao.ha;
import com.xiaojukeji.kafka.manager.common.bizenum.ha.HaStatusEnum;
import lombok.Data;
import java.util.HashMap;
import java.util.Map;
@Data
public class HaSwitchTopic {
/**
* Whether the switch has finished
*/
private boolean finished;
/**
* Status of each Topic
*/
private Map<String, Integer> activeTopicSwitchStatusMap;
public HaSwitchTopic(boolean finished) {
this.finished = finished;
this.activeTopicSwitchStatusMap = new HashMap<>();
}
public void addHaSwitchTopic(HaSwitchTopic haSwitchTopic) {
this.finished &= haSwitchTopic.finished;
}
public boolean isFinished() {
return this.finished;
}
public void addActiveTopicStatus(String activeTopicName, Integer status) {
activeTopicSwitchStatusMap.put(activeTopicName, status);
}
public boolean isActiveTopicSwitchFinished(String activeTopicName) {
Integer status = activeTopicSwitchStatusMap.get(activeTopicName);
if (status == null) {
return false;
}
return status.equals(HaStatusEnum.STABLE.getCode());
}
@Override
public String toString() {
return "HaSwitchTopic{" +
"finished=" + finished +
", activeTopicSwitchStatusMap=" + activeTopicSwitchStatusMap +
'}';
}
}
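The overall switch state is simply the AND of its sub-switches (`finished &= haSwitchTopic.finished`), so one unfinished topic flips the whole job back to unfinished. Reduced to a sketch (class and field names are illustrative):

```java
public class SwitchAggDemo {
    // Same idea as HaSwitchTopic: overall "finished" is the AND of all sub-switches.
    static class SwitchState {
        boolean finished;

        SwitchState(boolean finished) { this.finished = finished; }

        // Mirrors HaSwitchTopic.addHaSwitchTopic.
        void merge(SwitchState other) { this.finished &= other.finished; }
    }

    public static void main(String[] args) {
        SwitchState all = new SwitchState(true);
        all.merge(new SwitchState(true));
        all.merge(new SwitchState(false)); // one unfinished sub-switch flips the whole job
        System.out.println(all.finished);  // false
    }
}
```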

View File

@@ -0,0 +1,28 @@
package com.xiaojukeji.kafka.manager.common.entity.ao.ha.job;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
@Data
@NoArgsConstructor
@AllArgsConstructor
@ApiModel(description = "Job详情")
public class HaJobDetail {
@ApiModelProperty(value = "Topic名称")
private String topicName;
@ApiModelProperty(value="主集群ID")
private Long activeClusterPhyId;
@ApiModelProperty(value="备集群ID")
private Long standbyClusterPhyId;
@ApiModelProperty(value="Lag和")
private Long sumLag;
@ApiModelProperty(value="状态")
private Integer status;
}

View File

@@ -0,0 +1,16 @@
package com.xiaojukeji.kafka.manager.common.entity.ao.ha.job;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
@Data
@NoArgsConstructor
@AllArgsConstructor
@ApiModel(description = "Job日志")
public class HaJobLog {
@ApiModelProperty(value = "日志信息")
private String log;
}

View File

@@ -0,0 +1,70 @@
package com.xiaojukeji.kafka.manager.common.entity.ao.ha.job;
import com.xiaojukeji.kafka.manager.common.bizenum.ha.job.HaJobStatusEnum;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.util.List;
@Data
@NoArgsConstructor
public class HaJobState {
/**
* @see com.xiaojukeji.kafka.manager.common.bizenum.ha.job.HaJobStatusEnum
*/
private int status;
private int total;
private int success;
private int failed;
private int doing;
private int doingInTimeout;
private int unknown;
private Integer progress;
/**
* Aggregate directly by status
*/
public HaJobState(List<Integer> jobStatusList, Integer progress) {
this.total = jobStatusList.size();
this.success = 0;
this.failed = 0;
this.doing = 0;
this.doingInTimeout = 0;
this.unknown = 0;
for (Integer jobStatus: jobStatusList) {
if (HaJobStatusEnum.SUCCESS.getStatus() == jobStatus) {
success += 1;
} else if (HaJobStatusEnum.FAILED.getStatus() == jobStatus) {
failed += 1;
} else if (HaJobStatusEnum.RUNNING.getStatus() == jobStatus) {
doing += 1;
} else if (HaJobStatusEnum.RUNNING_IN_TIMEOUT.getStatus() == jobStatus) {
doingInTimeout += 1;
} else {
unknown += 1;
}
}
this.status = HaJobStatusEnum.getStatusBySubStatus(this.total, this.success, this.failed, this.doing, this.doingInTimeout, this.unknown).getStatus();
this.progress = progress;
}
public HaJobState(Integer doingSize, Integer progress) {
this.total = doingSize;
this.success = 0;
this.failed = 0;
this.doing = doingSize;
this.doingInTimeout = 0;
this.unknown = 0;
this.progress = progress;
}
}
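The list-based constructor is a single bucketing pass over the sub-job statuses before delegating to `getStatusBySubStatus`. The same loop, extracted into a testable sketch (the numeric status codes here are illustrative placeholders; in `HaJobState` they come from `HaJobStatusEnum`, backed by `TaskStatusEnum`):

```java
import java.util.*;

public class JobStateDemo {
    // Illustrative codes standing in for HaJobStatusEnum statuses.
    static final int SUCCESS = 100, FAILED = 101, RUNNING = 30, RUNNING_IN_TIMEOUT = 32;

    // Same bucketing as the HaJobState(List, Integer) constructor:
    // returns {total, success, failed, doing, doingInTimeout, unknown}.
    static int[] bucket(List<Integer> statuses) {
        int success = 0, failed = 0, doing = 0, timeout = 0, unknown = 0;
        for (int s : statuses) {
            if (s == SUCCESS) success++;
            else if (s == FAILED) failed++;
            else if (s == RUNNING) doing++;
            else if (s == RUNNING_IN_TIMEOUT) timeout++;
            else unknown++;
        }
        return new int[]{statuses.size(), success, failed, doing, timeout, unknown};
    }

    public static void main(String[] args) {
        int[] c = bucket(Arrays.asList(SUCCESS, SUCCESS, FAILED, RUNNING));
        System.out.println(Arrays.toString(c)); // [4, 2, 1, 1, 0, 0]
    }
}
```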

View File

@@ -0,0 +1,12 @@
package com.xiaojukeji.kafka.manager.common.entity.ao.ha.job;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
@Data
@NoArgsConstructor
@AllArgsConstructor
public class HaSubJobExtendData {
private Long sumLag;
}

View File

@@ -1,11 +1,14 @@
package com.xiaojukeji.kafka.manager.common.entity.ao.topic;
import lombok.Data;
import java.util.List;
/**
* @author arthur
* @date 2018/09/03
*/
@Data
public class TopicBasicDTO {
private Long clusterId;
@@ -39,133 +42,7 @@ public class TopicBasicDTO {
private Long retentionBytes;
public Long getClusterId() {
return clusterId;
}
public void setClusterId(Long clusterId) {
this.clusterId = clusterId;
}
public String getAppId() {
return appId;
}
public void setAppId(String appId) {
this.appId = appId;
}
public String getAppName() {
return appName;
}
public void setAppName(String appName) {
this.appName = appName;
}
public String getPrincipals() {
return principals;
}
public void setPrincipals(String principals) {
this.principals = principals;
}
public String getTopicName() {
return topicName;
}
public void setTopicName(String topicName) {
this.topicName = topicName;
}
public String getDescription() {
return description;
}
public void setDescription(String description) {
this.description = description;
}
public List<String> getRegionNameList() {
return regionNameList;
}
public void setRegionNameList(List<String> regionNameList) {
this.regionNameList = regionNameList;
}
public Integer getScore() {
return score;
}
public void setScore(Integer score) {
this.score = score;
}
public String getTopicCodeC() {
return topicCodeC;
}
public void setTopicCodeC(String topicCodeC) {
this.topicCodeC = topicCodeC;
}
public Integer getPartitionNum() {
return partitionNum;
}
public void setPartitionNum(Integer partitionNum) {
this.partitionNum = partitionNum;
}
public Integer getReplicaNum() {
return replicaNum;
}
public void setReplicaNum(Integer replicaNum) {
this.replicaNum = replicaNum;
}
public Integer getBrokerNum() {
return brokerNum;
}
public void setBrokerNum(Integer brokerNum) {
this.brokerNum = brokerNum;
}
public Long getModifyTime() {
return modifyTime;
}
public void setModifyTime(Long modifyTime) {
this.modifyTime = modifyTime;
}
public Long getCreateTime() {
return createTime;
}
public void setCreateTime(Long createTime) {
this.createTime = createTime;
}
public Long getRetentionTime() {
return retentionTime;
}
public void setRetentionTime(Long retentionTime) {
this.retentionTime = retentionTime;
}
public Long getRetentionBytes() {
return retentionBytes;
}
public void setRetentionBytes(Long retentionBytes) {
this.retentionBytes = retentionBytes;
}
private Integer haRelation;
@Override
public String toString() {
@@ -186,6 +63,7 @@ public class TopicBasicDTO {
", createTime=" + createTime +
", retentionTime=" + retentionTime +
", retentionBytes=" + retentionBytes +
", haRelation=" + haRelation +
'}';
}
}

View File

@@ -1,9 +1,12 @@
package com.xiaojukeji.kafka.manager.common.entity.ao.topic;
import lombok.Data;
/**
* @author zengqiao
* @date 20/4/20
*/
@Data
public class TopicConnection {
private Long clusterId;
@@ -19,72 +22,9 @@ public class TopicConnection {
private String clientVersion;
- public Long getClusterId() {
- return clusterId;
- }
+ private String clientId;
- public void setClusterId(Long clusterId) {
- this.clusterId = clusterId;
- }
+ private Long realConnectTime;
- public String getTopicName() {
- return topicName;
- }
- public void setTopicName(String topicName) {
- this.topicName = topicName;
- }
- public String getAppId() {
- return appId;
- }
- public void setAppId(String appId) {
- this.appId = appId;
- }
- public String getIp() {
- return ip;
- }
- public void setIp(String ip) {
- this.ip = ip;
- }
- public String getHostname() {
- return hostname;
- }
- public void setHostname(String hostname) {
- this.hostname = hostname;
- }
- public String getClientType() {
- return clientType;
- }
- public void setClientType(String clientType) {
- this.clientType = clientType;
- }
- public String getClientVersion() {
- return clientVersion;
- }
- public void setClientVersion(String clientVersion) {
- this.clientVersion = clientVersion;
- }
- @Override
- public String toString() {
- return "TopicConnectionDTO{" +
- "clusterId=" + clusterId +
- ", topicName='" + topicName + '\'' +
- ", appId='" + appId + '\'' +
- ", ip='" + ip + '\'' +
- ", hostname='" + hostname + '\'' +
- ", clientType='" + clientType + '\'' +
- ", clientVersion='" + clientVersion + '\'' +
- '}';
- }
+ private Long createTime;
}
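Most hunks in this compare view delete hand-written accessors in favor of Lombok's `@Data`, which generates equivalent getters, setters, `equals`/`hashCode`, and `toString` at compile time. As a rough sketch, hand-expanded here in plain Java so it runs without the Lombok annotation processor (class and field names are illustrative, not project code):

```java
// Hand-expanded equivalent of what Lombok's @Data generates for a
// hypothetical two-field class (illustrative, not project code).
public class TopicConnectionSketch {
    private Long clusterId;
    private String clientId;

    // @Data-style accessors
    public Long getClusterId() { return clusterId; }
    public void setClusterId(Long clusterId) { this.clusterId = clusterId; }
    public String getClientId() { return clientId; }
    public void setClientId(String clientId) { this.clientId = clientId; }

    // @Data-style toString: ClassName(field=value, ...)
    @Override
    public String toString() {
        return "TopicConnectionSketch(clusterId=" + clusterId + ", clientId=" + clientId + ")";
    }

    public static void main(String[] args) {
        TopicConnectionSketch c = new TopicConnectionSketch();
        c.setClusterId(1L);
        c.setClientId("my-client");
        System.out.println(c);
    }
}
```

This is why the hunks above can delete dozens of lines per class while the `@@` headers show only a handful of added lines.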


@@ -1,10 +1,13 @@
package com.xiaojukeji.kafka.manager.common.entity.ao.topic;
+ import lombok.Data;
/**
* Topic概览信息
* @author zengqiao
* @date 20/5/14
*/
+ @Data
public class TopicOverview {
private Long clusterId;
@@ -32,109 +35,7 @@ public class TopicOverview {
private Long logicalClusterId;
- public Long getClusterId() {
- return clusterId;
- }
- public void setClusterId(Long clusterId) {
- this.clusterId = clusterId;
- }
- public String getTopicName() {
- return topicName;
- }
- public void setTopicName(String topicName) {
- this.topicName = topicName;
- }
- public Integer getReplicaNum() {
- return replicaNum;
- }
- public void setReplicaNum(Integer replicaNum) {
- this.replicaNum = replicaNum;
- }
- public Integer getPartitionNum() {
- return partitionNum;
- }
- public void setPartitionNum(Integer partitionNum) {
- this.partitionNum = partitionNum;
- }
- public Long getRetentionTime() {
- return retentionTime;
- }
- public void setRetentionTime(Long retentionTime) {
- this.retentionTime = retentionTime;
- }
- public Object getByteIn() {
- return byteIn;
- }
- public void setByteIn(Object byteIn) {
- this.byteIn = byteIn;
- }
- public Object getByteOut() {
- return byteOut;
- }
- public void setByteOut(Object byteOut) {
- this.byteOut = byteOut;
- }
- public Object getProduceRequest() {
- return produceRequest;
- }
- public void setProduceRequest(Object produceRequest) {
- this.produceRequest = produceRequest;
- }
- public String getAppName() {
- return appName;
- }
- public void setAppName(String appName) {
- this.appName = appName;
- }
- public String getAppId() {
- return appId;
- }
- public void setAppId(String appId) {
- this.appId = appId;
- }
- public String getDescription() {
- return description;
- }
- public void setDescription(String description) {
- this.description = description;
- }
- public Long getUpdateTime() {
- return updateTime;
- }
- public void setUpdateTime(Long updateTime) {
- this.updateTime = updateTime;
- }
- public Long getLogicalClusterId() {
- return logicalClusterId;
- }
- public void setLogicalClusterId(Long logicalClusterId) {
- this.logicalClusterId = logicalClusterId;
- }
+ private Integer haRelation;
@Override
public String toString() {
@@ -152,6 +53,7 @@ public class TopicOverview {
", description='" + description + '\'' +
", updateTime=" + updateTime +
", logicalClusterId=" + logicalClusterId +
+ ", haRelation=" + haRelation +
'}';
}
}


@@ -0,0 +1,18 @@
package com.xiaojukeji.kafka.manager.common.entity.dto.ha;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import javax.validation.constraints.NotBlank;
@Data
@ApiModel(description="主备切换任务操作")
public class ASSwitchJobActionDTO {
/**
* @see com.xiaojukeji.kafka.manager.common.bizenum.TaskActionEnum
*/
@NotBlank(message = "action不允许为空")
@ApiModelProperty(value = "动作, force")
private String action;
}


@@ -0,0 +1,40 @@
package com.xiaojukeji.kafka.manager.common.entity.dto.ha;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import javax.validation.Valid;
import javax.validation.constraints.NotNull;
import java.util.List;
@Data
@ApiModel(description="主备切换任务")
public class ASSwitchJobDTO {
@NotNull(message = "all不允许为NULL")
@ApiModelProperty(value = "所有Topic")
private Boolean all;
@NotNull(message = "mustContainAllKafkaUserTopics不允许为NULL")
@ApiModelProperty(value = "是否需要包含KafkaUser关联的所有Topic")
private Boolean mustContainAllKafkaUserTopics;
@NotNull(message = "activeClusterPhyId不允许为NULL")
@ApiModelProperty(value="主集群ID")
private Long activeClusterPhyId;
@NotNull(message = "standbyClusterPhyId不允许为NULL")
@ApiModelProperty(value="备集群ID")
private Long standbyClusterPhyId;
@NotNull(message = "topicNameList不允许为NULL")
@ApiModelProperty(value="切换的Topic名称列表")
private List<String> topicNameList;
/**
* kafkaUser+Client列表
*/
@Valid
@ApiModelProperty(value="切换的KafkaUser&ClientId列表,Client可以为空串")
private List<KafkaUserAndClientDTO> kafkaUserAndClientIdList;
}


@@ -0,0 +1,18 @@
package com.xiaojukeji.kafka.manager.common.entity.dto.ha;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import javax.validation.constraints.NotBlank;
@Data
@ApiModel(description="KafkaUser和ClientId信息")
public class KafkaUserAndClientDTO {
@NotBlank(message = "kafkaUser不允许为空串")
@ApiModelProperty(value = "kafkaUser")
private String kafkaUser;
@ApiModelProperty(value = "clientId")
private String clientId;
}


@@ -0,0 +1,55 @@
package com.xiaojukeji.kafka.manager.common.entity.dto.op.topic;
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import javax.validation.constraints.NotNull;
import java.util.List;
/**
* @author huangyiminghappy@163.com, zengqiao
* @date 2022-06-29
*/
@Data
@JsonIgnoreProperties(ignoreUnknown = true)
@ApiModel(description = "Topic高可用关联|解绑")
public class HaTopicRelationDTO {
@NotNull(message = "主集群id不能为空")
@ApiModelProperty(value = "主集群id")
private Long activeClusterId;
@NotNull(message = "备集群id不能为空")
@ApiModelProperty(value = "备集群id")
private Long standbyClusterId;
@NotNull(message = "all不能为空")
@ApiModelProperty(value = "是否应用于所有topic")
private Boolean all;
@ApiModelProperty(value = "需要关联|解绑的topic名称列表")
private List<String> topicNames;
@ApiModelProperty(value = "解绑是否保留备集群资源topic,kafkaUser,group")
private Boolean retainStandbyResource;
@Override
public String toString() {
return "HaTopicRelationDTO{" +
"activeClusterId=" + activeClusterId +
", standbyClusterId=" + standbyClusterId +
", all=" + all +
", topicNames=" + topicNames +
", retainStandbyResource=" + retainStandbyResource +
'}';
}
public boolean paramLegal() {
if(!all && ValidateUtils.isEmptyList(topicNames)) {
return false;
}
return true;
}
}
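`HaTopicRelationDTO#paramLegal` encodes the request invariant: either the operation targets all topics, or it must name at least one topic. A minimal runnable sketch of the same rule, with plain `java.util` checks standing in for the project's `ValidateUtils`:

```java
import java.util.Collections;
import java.util.List;

// Sketch of HaTopicRelationDTO#paramLegal: a bind/unbind request is valid
// only if it applies to all topics, or explicitly names at least one topic.
// ValidateUtils is project code; java.util checks stand in for it here.
public class ParamLegalSketch {
    static boolean paramLegal(boolean all, List<String> topicNames) {
        // mirrors: if (!all && ValidateUtils.isEmptyList(topicNames)) return false;
        return all || (topicNames != null && !topicNames.isEmpty());
    }

    public static void main(String[] args) {
        System.out.println(paramLegal(true, null));                      // all topics -> valid
        System.out.println(paramLegal(false, Collections.emptyList()));  // nothing named -> invalid
    }
}
```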


@@ -0,0 +1,31 @@
package com.xiaojukeji.kafka.manager.common.entity.dto.rd;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import javax.validation.constraints.NotNull;
import java.util.List;
/**
* @author zengqiao
* @date 20/5/4
*/
@Data
@ApiModel(description="App关联Topic信息")
public class AppRelateTopicsDTO {
@NotNull(message = "clusterPhyId不允许为NULL")
@ApiModelProperty(value="物理集群ID")
private Long clusterPhyId;
@NotNull(message = "filterTopicNameList不允许为NULL")
@ApiModelProperty(value="过滤的Topic列表")
private List<String> filterTopicNameList;
@ApiModelProperty(value="使用KafkaUser+Client维度的数据,默认是kafkaUser维度")
private Boolean useKafkaUserAndClientId;
@NotNull(message = "ha不允许为NULL")
@ApiModelProperty(value="查询是否高可用topic")
private Boolean ha;
}


@@ -4,11 +4,13 @@ import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
+ import lombok.Data;
/**
* @author zengqiao
* @date 20/4/23
*/
+ @Data
@ApiModel(description = "集群接入&修改")
@JsonIgnoreProperties(ignoreUnknown = true)
public class ClusterDTO {
@@ -33,60 +35,21 @@ public class ClusterDTO {
@ApiModelProperty(value="Jmx配置")
private String jmxProperties;
- public Long getClusterId() {
- return clusterId;
- }
+ @ApiModelProperty(value="主集群Id")
+ private Long activeClusterId;
- public void setClusterId(Long clusterId) {
- this.clusterId = clusterId;
- }
+ @ApiModelProperty(value="是否高可用")
+ private boolean isHa;
- public String getClusterName() {
- return clusterName;
- }
- public void setClusterName(String clusterName) {
- this.clusterName = clusterName;
- }
- public String getZookeeper() {
- return zookeeper;
- }
- public void setZookeeper(String zookeeper) {
- this.zookeeper = zookeeper;
- }
- public String getBootstrapServers() {
- return bootstrapServers;
- }
- public void setBootstrapServers(String bootstrapServers) {
- this.bootstrapServers = bootstrapServers;
- }
- public String getIdc() {
- return idc;
- }
- public void setIdc(String idc) {
- this.idc = idc;
- }
- public String getSecurityProperties() {
- return securityProperties;
- }
- public void setSecurityProperties(String securityProperties) {
- this.securityProperties = securityProperties;
- }
- public String getJmxProperties() {
- return jmxProperties;
- }
- public void setJmxProperties(String jmxProperties) {
- this.jmxProperties = jmxProperties;
- }
+ public boolean legal() {
+ if (ValidateUtils.isNull(clusterName)
+ || ValidateUtils.isNull(zookeeper)
+ || ValidateUtils.isNull(idc)
+ || ValidateUtils.isNull(bootstrapServers)
+ || (isHa && ValidateUtils.isNull(activeClusterId))) {
+ return false;
+ }
+ return true;
+ }
@Override
@@ -99,16 +62,8 @@ public class ClusterDTO {
", idc='" + idc + '\'' +
", securityProperties='" + securityProperties + '\'' +
", jmxProperties='" + jmxProperties + '\'' +
+ ", activeClusterId=" + activeClusterId +
+ ", isHa=" + isHa +
'}';
}
- public boolean legal() {
- if (ValidateUtils.isNull(clusterName)
- || ValidateUtils.isNull(zookeeper)
- || ValidateUtils.isNull(idc)
- || ValidateUtils.isNull(bootstrapServers)) {
- return false;
- }
- return true;
- }
}


@@ -118,10 +118,7 @@ public class LogicalClusterDTO {
}
public boolean legal() {
- if (ValidateUtils.isNull(clusterId)
- || ValidateUtils.isNull(clusterId)
- || ValidateUtils.isEmptyList(regionIdList)
- || ValidateUtils.isNull(mode)) {
+ if (ValidateUtils.isNull(clusterId) || ValidateUtils.isEmptyList(regionIdList) || ValidateUtils.isNull(mode)) {
return false;
}
if (!ClusterModeEnum.SHARED_MODE.getCode().equals(mode) && ValidateUtils.isNull(appId)) {


@@ -94,10 +94,7 @@ public class RegionDTO {
}
public boolean legal() {
- if (ValidateUtils.isNull(clusterId)
- || ValidateUtils.isNull(clusterId)
- || ValidateUtils.isEmptyList(brokerIdList)
- || ValidateUtils.isNull(status)) {
+ if (ValidateUtils.isNull(clusterId) || ValidateUtils.isEmptyList(brokerIdList) || ValidateUtils.isNull(status)) {
return false;
}
description = ValidateUtils.isNull(description)? "": description;


@@ -0,0 +1,24 @@
package com.xiaojukeji.kafka.manager.common.entity.pagination;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
@Data
@ApiModel(description = "分页信息")
public class Pagination {
@ApiModelProperty(value = "总记录数", example = "100")
private long total;
@ApiModelProperty(value = "当前页码", example = "0")
private long pageNo;
@ApiModelProperty(value = "单页大小", example = "10")
private long pageSize;
public Pagination(long total, long pageNo, long pageSize) {
this.total = total;
this.pageNo = pageNo;
this.pageSize = pageSize;
}
}
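`Pagination` carries only `total`, `pageNo`, and `pageSize`; a page count is left to callers. A plausible derivation via ceiling division (an assumption — the diff does not show where the project computes this):

```java
public class PaginationSketch {
    // Hypothetical helper: total pages from a Pagination-style (total, pageSize)
    // pair, using integer ceiling division. The real project may compute this
    // elsewhere; this only illustrates the arithmetic.
    static long totalPages(long total, long pageSize) {
        return (total + pageSize - 1) / pageSize;
    }

    public static void main(String[] args) {
        System.out.println(totalPages(100, 10)); // 10
        System.out.println(totalPages(101, 10)); // 11
    }
}
```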


@@ -0,0 +1,17 @@
package com.xiaojukeji.kafka.manager.common.entity.pagination;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import java.util.List;
@Data
@ApiModel(description = "分页数据")
public class PaginationData<T> {
@ApiModelProperty(value = "业务数据")
private List<T> bizData;
@ApiModelProperty(value = "分页信息")
private Pagination pagination;
}


@@ -0,0 +1,30 @@
package com.xiaojukeji.kafka.manager.common.entity.pojo;
import lombok.Data;
import java.io.Serializable;
import java.util.Date;
/**
* @author zengqiao
* @date 21/07/19
*/
@Data
public class BaseDO implements Serializable {
private static final long serialVersionUID = 8782560709154468485L;
/**
* 主键ID
*/
protected Long id;
/**
* 创建时间
*/
protected Date createTime;
/**
* 更新时间
*/
protected Date modifyTime;
}


@@ -1,11 +1,18 @@
package com.xiaojukeji.kafka.manager.common.entity.pojo;
+ import lombok.Data;
+ import lombok.NoArgsConstructor;
+ import lombok.ToString;
import java.util.Date;
/**
* @author zengqiao
* @date 20/6/29
*/
+ @Data
+ @ToString
+ @NoArgsConstructor
public class LogicalClusterDO {
private Long id;
@@ -27,99 +34,17 @@ public class LogicalClusterDO {
private Date gmtModify;
- public Long getId() {
- return id;
- }
- public void setId(Long id) {
- this.id = id;
- }
- public String getName() {
- return name;
- }
- public void setName(String name) {
+ public LogicalClusterDO(String name,
+ String identification,
+ Integer mode,
+ String appId,
+ Long clusterId,
+ String regionList) {
this.name = name;
- }
- public String getIdentification() {
- return identification;
- }
- public void setIdentification(String identification) {
this.identification = identification;
- }
- public Integer getMode() {
- return mode;
- }
- public void setMode(Integer mode) {
this.mode = mode;
- }
- public String getAppId() {
- return appId;
- }
- public void setAppId(String appId) {
this.appId = appId;
- }
- public Long getClusterId() {
- return clusterId;
- }
- public void setClusterId(Long clusterId) {
this.clusterId = clusterId;
- }
- public String getRegionList() {
- return regionList;
- }
- public void setRegionList(String regionList) {
this.regionList = regionList;
}
- public String getDescription() {
- return description;
- }
- public void setDescription(String description) {
- this.description = description;
- }
- public Date getGmtCreate() {
- return gmtCreate;
- }
- public void setGmtCreate(Date gmtCreate) {
- this.gmtCreate = gmtCreate;
- }
- public Date getGmtModify() {
- return gmtModify;
- }
- public void setGmtModify(Date gmtModify) {
- this.gmtModify = gmtModify;
- }
- @Override
- public String toString() {
- return "LogicalClusterDO{" +
- "id=" + id +
- ", name='" + name + '\'' +
- ", identification='" + identification + '\'' +
- ", mode=" + mode +
- ", appId='" + appId + '\'' +
- ", clusterId=" + clusterId +
- ", regionList='" + regionList + '\'' +
- ", description='" + description + '\'' +
- ", gmtCreate=" + gmtCreate +
- ", gmtModify=" + gmtModify +
- '}';
- }
}


@@ -1,7 +1,14 @@
package com.xiaojukeji.kafka.manager.common.entity.pojo;
+ import lombok.Data;
+ import lombok.NoArgsConstructor;
+ import lombok.ToString;
import java.util.Date;
+ @Data
+ @ToString
+ @NoArgsConstructor
public class RegionDO implements Comparable<RegionDO> {
private Long id;
@@ -25,111 +32,13 @@ public class RegionDO implements Comparable<RegionDO> {
private String description;
- public Long getId() {
- return id;
- }
- public void setId(Long id) {
- this.id = id;
- }
- public Integer getStatus() {
- return status;
- }
- public void setStatus(Integer status) {
+ public RegionDO(Integer status, String name, Long clusterId, String brokerList) {
this.status = status;
- }
- public Date getGmtCreate() {
- return gmtCreate;
- }
- public void setGmtCreate(Date gmtCreate) {
- this.gmtCreate = gmtCreate;
- }
- public Date getGmtModify() {
- return gmtModify;
- }
- public void setGmtModify(Date gmtModify) {
- this.gmtModify = gmtModify;
- }
- public String getName() {
- return name;
- }
- public void setName(String name) {
this.name = name;
- }
- public Long getClusterId() {
- return clusterId;
- }
- public void setClusterId(Long clusterId) {
this.clusterId = clusterId;
- }
- public String getBrokerList() {
- return brokerList;
- }
- public void setBrokerList(String brokerList) {
this.brokerList = brokerList;
}
- public Long getCapacity() {
- return capacity;
- }
- public void setCapacity(Long capacity) {
- this.capacity = capacity;
- }
- public Long getRealUsed() {
- return realUsed;
- }
- public void setRealUsed(Long realUsed) {
- this.realUsed = realUsed;
- }
- public Long getEstimateUsed() {
- return estimateUsed;
- }
- public void setEstimateUsed(Long estimateUsed) {
- this.estimateUsed = estimateUsed;
- }
- public String getDescription() {
- return description;
- }
- public void setDescription(String description) {
- this.description = description;
- }
- @Override
- public String toString() {
- return "RegionDO{" +
- "id=" + id +
- ", status=" + status +
- ", gmtCreate=" + gmtCreate +
- ", gmtModify=" + gmtModify +
- ", name='" + name + '\'' +
- ", clusterId=" + clusterId +
- ", brokerList='" + brokerList + '\'' +
- ", capacity=" + capacity +
- ", realUsed=" + realUsed +
- ", estimateUsed=" + estimateUsed +
- ", description='" + description + '\'' +
- '}';
- }
@Override
public int compareTo(RegionDO regionDO) {
return this.id.compareTo(regionDO.id);


@@ -2,6 +2,8 @@ package com.xiaojukeji.kafka.manager.common.entity.pojo;
import com.xiaojukeji.kafka.manager.common.entity.dto.op.topic.TopicCreationDTO;
import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
+ import lombok.Data;
+ import lombok.NoArgsConstructor;
import java.util.Date;
@@ -9,6 +11,8 @@ import java.util.Date;
* @author zengqiao
* @date 20/4/24
*/
+ @Data
+ @NoArgsConstructor
public class TopicDO {
private Long id;
@@ -26,70 +30,14 @@ public class TopicDO {
private Long peakBytesIn;
- public String getAppId() {
- return appId;
- }
- public void setAppId(String appId) {
+ public TopicDO(String appId, Long clusterId, String topicName, String description, Long peakBytesIn) {
this.appId = appId;
- }
- public Long getClusterId() {
- return clusterId;
- }
- public void setClusterId(Long clusterId) {
this.clusterId = clusterId;
- }
- public String getTopicName() {
- return topicName;
- }
- public void setTopicName(String topicName) {
this.topicName = topicName;
- }
- public String getDescription() {
- return description;
- }
- public void setDescription(String description) {
this.description = description;
- }
- public Long getPeakBytesIn() {
- return peakBytesIn;
- }
- public void setPeakBytesIn(Long peakBytesIn) {
this.peakBytesIn = peakBytesIn;
}
- public Long getId() {
- return id;
- }
- public void setId(Long id) {
- this.id = id;
- }
- public Date getGmtCreate() {
- return gmtCreate;
- }
- public void setGmtCreate(Date gmtCreate) {
- this.gmtCreate = gmtCreate;
- }
- public Date getGmtModify() {
- return gmtModify;
- }
- public void setGmtModify(Date gmtModify) {
- this.gmtModify = gmtModify;
- }
public static TopicDO buildFrom(TopicCreationDTO dto) {
TopicDO topicDO = new TopicDO();
topicDO.setAppId(dto.getAppId());


@@ -1,5 +1,7 @@
package com.xiaojukeji.kafka.manager.common.entity.pojo.gateway;
+ import lombok.Data;
import java.util.Date;
/**
@@ -7,6 +9,7 @@ import java.util.Date;
* @author zengqiao
* @date 20/7/6
*/
+ @Data
public class TopicConnectionDO {
private Long id;
@@ -22,87 +25,13 @@ public class TopicConnectionDO {
private String clientVersion;
+ private String clientId;
+ private Long realConnectTime;
private Date createTime;
- public Long getId() {
- return id;
- }
- public void setId(Long id) {
- this.id = id;
- }
- public Long getClusterId() {
- return clusterId;
- }
- public void setClusterId(Long clusterId) {
- this.clusterId = clusterId;
- }
- public String getTopicName() {
- return topicName;
- }
- public void setTopicName(String topicName) {
- this.topicName = topicName;
- }
- public String getType() {
- return type;
- }
- public void setType(String type) {
- this.type = type;
- }
- public String getAppId() {
- return appId;
- }
- public void setAppId(String appId) {
- this.appId = appId;
- }
- public String getIp() {
- return ip;
- }
- public void setIp(String ip) {
- this.ip = ip;
- }
- public String getClientVersion() {
- return clientVersion;
- }
- public void setClientVersion(String clientVersion) {
- this.clientVersion = clientVersion;
- }
- public Date getCreateTime() {
- return createTime;
- }
- public void setCreateTime(Date createTime) {
- this.createTime = createTime;
- }
- @Override
- public String toString() {
- return "TopicConnectionDO{" +
- "id=" + id +
- ", clusterId=" + clusterId +
- ", topicName='" + topicName + '\'' +
- ", type='" + type + '\'' +
- ", appId='" + appId + '\'' +
- ", ip='" + ip + '\'' +
- ", clientVersion='" + clientVersion + '\'' +
- ", createTime=" + createTime +
- '}';
- }
public String uniqueKey() {
- return appId + clusterId + topicName + type + ip;
+ return appId + clusterId + topicName + type + ip + clientId;
}
}
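The one-line change to `uniqueKey()` matters for deduplication: before it, two connections from the same app, cluster, topic, type, and IP but different clients collapsed into one key. A runnable sketch of the collision (values are illustrative):

```java
// Why the diff appends clientId to TopicConnectionDO#uniqueKey: without it,
// distinct clients on the same host produce identical keys and get merged.
public class UniqueKeySketch {
    static String oldKey(String appId, long clusterId, String topic, String type, String ip) {
        return appId + clusterId + topic + type + ip;
    }
    static String newKey(String appId, long clusterId, String topic, String type, String ip, String clientId) {
        return appId + clusterId + topic + type + ip + clientId;
    }

    public static void main(String[] args) {
        // Two different clients, same everything else:
        System.out.println(oldKey("app", 1L, "t", "produce", "10.0.0.1")
                .equals(oldKey("app", 1L, "t", "produce", "10.0.0.1"))); // true: they collide
        System.out.println(newKey("app", 1L, "t", "produce", "10.0.0.1", "c1")
                .equals(newKey("app", 1L, "t", "produce", "10.0.0.1", "c2"))); // false
    }
}
```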


@@ -0,0 +1,71 @@
package com.xiaojukeji.kafka.manager.common.entity.pojo.ha;
import com.baomidou.mybatisplus.annotation.TableName;
import com.xiaojukeji.kafka.manager.common.bizenum.ha.HaResTypeEnum;
import com.xiaojukeji.kafka.manager.common.entity.pojo.BaseDO;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
/**
* HA-主备关系表
*/
@Data
@NoArgsConstructor
@AllArgsConstructor
@TableName("ha_active_standby_relation")
public class HaASRelationDO extends BaseDO {
/**
* 主集群ID
*/
private Long activeClusterPhyId;
/**
* 主集群资源名称
*/
private String activeResName;
/**
* 备集群ID
*/
private Long standbyClusterPhyId;
/**
* 备集群资源名称
*/
private String standbyResName;
/**
* 资源类型
* @see HaResTypeEnum
*/
private Integer resType;
/**
* 主备状态
*/
private Integer status;
/**
* 主备关系中的唯一性字段
*/
private String uniqueField;
public HaASRelationDO(Long id, Integer status) {
this.id = id;
this.status = status;
}
public HaASRelationDO(Long activeClusterPhyId, String activeResName, Long standbyClusterPhyId, String standbyResName, Integer resType, Integer status) {
this.activeClusterPhyId = activeClusterPhyId;
this.activeResName = activeResName;
this.standbyClusterPhyId = standbyClusterPhyId;
this.standbyResName = standbyResName;
this.resType = resType;
this.status = status;
// 主备两个资源之间唯一,但是不保证两个资源之间只存在主备关系,也可能存在双活关系,即各自都为对方的主备
this.uniqueField = String.format("%d_%s||%d_%s||%d", activeClusterPhyId, activeResName, standbyClusterPhyId, standbyResName, resType);
}
}
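The `uniqueField` built in the constructor above keys on the *directed* active→standby pair, so `A→B` and `B→A` (dual-active) remain distinct rows. The same composition as a runnable sketch:

```java
// Sketch of HaASRelationDO's uniqueField: one key per directed
// active->standby pair plus resource type, so A->B and B->A stay distinct.
public class UniqueFieldSketch {
    static String uniqueField(long activeId, String activeRes, long standbyId, String standbyRes, int resType) {
        return String.format("%d_%s||%d_%s||%d", activeId, activeRes, standbyId, standbyRes, resType);
    }

    public static void main(String[] args) {
        System.out.println(uniqueField(1L, "orders", 2L, "orders", 0)); // 1_orders||2_orders||0
        // The reversed (dual-active) relation yields a different key:
        System.out.println(uniqueField(2L, "orders", 1L, "orders", 0)); // 2_orders||1_orders||0
    }
}
```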


@@ -0,0 +1,68 @@
package com.xiaojukeji.kafka.manager.common.entity.pojo.ha;
import com.baomidou.mybatisplus.annotation.TableName;
import com.xiaojukeji.kafka.manager.common.entity.dto.ha.KafkaUserAndClientDTO;
import com.xiaojukeji.kafka.manager.common.entity.pojo.BaseDO;
import com.xiaojukeji.kafka.manager.common.utils.ConvertUtil;
import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.util.ArrayList;
import java.util.List;
/**
* HA-主备关系切换任务表
*/
@Data
@NoArgsConstructor
@TableName("ha_active_standby_switch_job")
public class HaASSwitchJobDO extends BaseDO {
/**
* 主集群ID
*/
private Long activeClusterPhyId;
/**
* 备集群ID
*/
private Long standbyClusterPhyId;
/**
* 主备状态
*/
private Integer jobStatus;
/**
* 类型:0:kafkaUser,1:kafkaUser+Client
*/
private Integer type;
/**
* 扩展数据
*/
private String extendData;
/**
* 操作人
*/
private String operator;
public HaASSwitchJobDO(Long activeClusterPhyId, Long standbyClusterPhyId, Integer type, List<KafkaUserAndClientDTO> extendDataObj, Integer jobStatus, String operator) {
this.activeClusterPhyId = activeClusterPhyId;
this.standbyClusterPhyId = standbyClusterPhyId;
this.type = type;
this.extendData = ValidateUtils.isEmptyList(extendDataObj)? "": ConvertUtil.obj2Json(extendDataObj);
this.jobStatus = jobStatus;
this.operator = operator;
}
public List<KafkaUserAndClientDTO> getExtendRawData() {
if (ValidateUtils.isBlank(extendData)) {
return new ArrayList<>();
}
return ConvertUtil.str2ObjArrayByJson(extendData, KafkaUserAndClientDTO.class);
}
}


@@ -0,0 +1,67 @@
package com.xiaojukeji.kafka.manager.common.entity.pojo.ha;
import com.baomidou.mybatisplus.annotation.TableName;
import com.xiaojukeji.kafka.manager.common.entity.pojo.BaseDO;
import lombok.Data;
import lombok.NoArgsConstructor;
/**
* HA-主备关系切换子任务表
*/
@Data
@NoArgsConstructor
@TableName("ha_active_standby_switch_sub_job")
public class HaASSwitchSubJobDO extends BaseDO {
/**
* 任务ID
*/
private Long jobId;
/**
* 主集群ID
*/
private Long activeClusterPhyId;
/**
* 主集群资源名称
*/
private String activeResName;
/**
* 备集群ID
*/
private Long standbyClusterPhyId;
/**
* 备集群资源名称
*/
private String standbyResName;
/**
* 资源类型
*/
private Integer resType;
/**
* 任务状态
*/
private Integer jobStatus;
/**
* 扩展数据
* @see com.xiaojukeji.kafka.manager.common.entity.ao.ha.job.HaSubJobExtendData
*/
private String extendData;
public HaASSwitchSubJobDO(Long jobId, Long activeClusterPhyId, String activeResName, Long standbyClusterPhyId, String standbyResName, Integer resType, Integer jobStatus, String extendData) {
this.jobId = jobId;
this.activeClusterPhyId = activeClusterPhyId;
this.activeResName = activeResName;
this.standbyClusterPhyId = standbyClusterPhyId;
this.standbyResName = standbyResName;
this.resType = resType;
this.jobStatus = jobStatus;
this.extendData = extendData;
}
}


@@ -0,0 +1,50 @@
package com.xiaojukeji.kafka.manager.common.entity.pojo.ha;
import com.baomidou.mybatisplus.annotation.TableName;
import com.xiaojukeji.kafka.manager.common.entity.pojo.BaseDO;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.util.Date;
@Data
@NoArgsConstructor
@TableName("job_log")
public class JobLogDO extends BaseDO {
/**
* 业务类型
*/
private Integer bizType;
/**
* 业务关键字
*/
private String bizKeyword;
/**
* 打印时间
*/
private Date printTime;
/**
* 内容
*/
private String content;
public JobLogDO(Integer bizType, String bizKeyword) {
this.bizType = bizType;
this.bizKeyword = bizKeyword;
}
public JobLogDO(Integer bizType, String bizKeyword, Date printTime, String content) {
this.bizType = bizType;
this.bizKeyword = bizKeyword;
this.printTime = printTime;
this.content = content;
}
public JobLogDO setAndCopyNew(Date printTime, String content) {
return new JobLogDO(this.bizType, this.bizKeyword, printTime, content);
}
}
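`JobLogDO#setAndCopyNew` treats a partially-filled log row as a template: the business type and keyword are fixed once, then each message stamps out a fresh immutable-ish entry. A self-contained sketch of the pattern (names are illustrative):

```java
import java.util.Date;

// Sketch of the JobLogDO#setAndCopyNew pattern: a log "template" keeps the
// business type/keyword and produces a new entry per message (illustrative).
public class JobLogSketch {
    final int bizType;
    final String bizKeyword;
    final Date printTime;
    final String content;

    JobLogSketch(int bizType, String bizKeyword, Date printTime, String content) {
        this.bizType = bizType;
        this.bizKeyword = bizKeyword;
        this.printTime = printTime;
        this.content = content;
    }

    // mirrors setAndCopyNew: same biz identity, new timestamp + content
    JobLogSketch setAndCopyNew(Date printTime, String content) {
        return new JobLogSketch(this.bizType, this.bizKeyword, printTime, content);
    }

    public static void main(String[] args) {
        JobLogSketch template = new JobLogSketch(1, "switch-job-42", null, null);
        JobLogSketch entry = template.setAndCopyNew(new Date(), "sub job started");
        System.out.println(entry.bizKeyword + ": " + entry.content);
    }
}
```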


@@ -2,12 +2,14 @@ package com.xiaojukeji.kafka.manager.common.entity.vo.common;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
+ import lombok.Data;
/**
* Topic信息
* @author zengqiao
* @date 19/4/1
*/
+ @Data
@ApiModel(description = "Topic信息概览")
public class TopicOverviewVO {
@ApiModelProperty(value = "集群ID")
@@ -49,109 +51,8 @@ public class TopicOverviewVO {
@ApiModelProperty(value = "逻辑集群id")
private Long logicalClusterId;
- public Long getClusterId() {
- return clusterId;
- }
- public void setClusterId(Long clusterId) {
- this.clusterId = clusterId;
- }
- public String getTopicName() {
- return topicName;
- }
- public void setTopicName(String topicName) {
- this.topicName = topicName;
- }
- public Integer getReplicaNum() {
- return replicaNum;
- }
- public void setReplicaNum(Integer replicaNum) {
- this.replicaNum = replicaNum;
- }
- public Integer getPartitionNum() {
- return partitionNum;
- }
- public void setPartitionNum(Integer partitionNum) {
- this.partitionNum = partitionNum;
- }
- public Long getRetentionTime() {
- return retentionTime;
- }
- public void setRetentionTime(Long retentionTime) {
- this.retentionTime = retentionTime;
- }
- public Object getByteIn() {
- return byteIn;
- }
- public void setByteIn(Object byteIn) {
- this.byteIn = byteIn;
- }
- public Object getByteOut() {
- return byteOut;
- }
- public void setByteOut(Object byteOut) {
- this.byteOut = byteOut;
- }
- public Object getProduceRequest() {
- return produceRequest;
- }
- public void setProduceRequest(Object produceRequest) {
- this.produceRequest = produceRequest;
- }
- public String getAppName() {
- return appName;
- }
- public void setAppName(String appName) {
- this.appName = appName;
- }
- public String getAppId() {
- return appId;
- }
- public void setAppId(String appId) {
- this.appId = appId;
- }
- public String getDescription() {
- return description;
- }
- public void setDescription(String description) {
- this.description = description;
- }
- public Long getUpdateTime() {
- return updateTime;
- }
- public void setUpdateTime(Long updateTime) {
- this.updateTime = updateTime;
- }
- public Long getLogicalClusterId() {
- return logicalClusterId;
- }
- public void setLogicalClusterId(Long logicalClusterId) {
- this.logicalClusterId = logicalClusterId;
- }
+ @ApiModelProperty(value = "高可用关系1:主topic, 0:备topic , 其他:非高可用topic")
+ private Integer haRelation;
@Override
public String toString() {
@@ -169,6 +70,7 @@ public class TopicOverviewVO {
", description='" + description + '\'' +
", updateTime=" + updateTime +
", logicalClusterId=" + logicalClusterId +
+ ", haRelation=" + haRelation +
'}';
}
}


@@ -0,0 +1,34 @@
package com.xiaojukeji.kafka.manager.common.entity.vo.ha;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
/**
* @author zengqiao
* @date 20/4/29
*/
@Data
@ApiModel(description="HA集群-Topic信息")
public class HaClusterTopicVO {
@ApiModelProperty(value="当前查询的集群ID")
private Long clusterId;
@ApiModelProperty(value="Topic名称")
private String topicName;
@ApiModelProperty(value="生产Acl数量")
private Integer produceAclNum;
@ApiModelProperty(value="消费Acl数量")
private Integer consumeAclNum;
@ApiModelProperty(value="主集群ID")
private Long activeClusterId;
@ApiModelProperty(value="备集群ID")
private Long standbyClusterId;
@ApiModelProperty(value="主备状态")
private Integer status;
}


@@ -0,0 +1,48 @@
package com.xiaojukeji.kafka.manager.common.entity.vo.ha;
import com.xiaojukeji.kafka.manager.common.entity.vo.rd.cluster.ClusterBaseVO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
/**
* @author zengqiao
* @date 20/4/29
*/
@Data
@ApiModel(description="HA集群-集群信息")
public class HaClusterVO extends ClusterBaseVO {
@ApiModelProperty(value="broker数量")
private Integer brokerNum;
@ApiModelProperty(value="topic数量")
private Integer topicNum;
@ApiModelProperty(value="消费组数")
private Integer consumerGroupNum;
@ApiModelProperty(value="region数")
private Integer regionNum;
@ApiModelProperty(value="ControllerID")
private Integer controllerId;
/**
* @see com.xiaojukeji.kafka.manager.common.bizenum.ha.HaStatusEnum
*/
@ApiModelProperty(value="主备状态")
private Integer haStatus;
@ApiModelProperty(value="主topic数")
private Long activeTopicCount;
@ApiModelProperty(value="备topic数")
private Long standbyTopicCount;
@ApiModelProperty(value="备集群信息")
private HaClusterVO haClusterVO;
@ApiModelProperty(value="切换任务id")
private Long haASSwitchJobId;
}


@@ -0,0 +1,37 @@
package com.xiaojukeji.kafka.manager.common.entity.vo.ha.job;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
@Data
@NoArgsConstructor
@AllArgsConstructor
@ApiModel(description = "Job详情")
public class HaJobDetailVO {
@ApiModelProperty(value = "Topic名称")
private String topicName;
@ApiModelProperty(value="主物理集群ID")
private Long activeClusterPhyId;
@ApiModelProperty(value="主物理集群名称")
private String activeClusterPhyName;
@ApiModelProperty(value="备物理集群ID")
private Long standbyClusterPhyId;
@ApiModelProperty(value="备物理集群名称")
private String standbyClusterPhyName;
@ApiModelProperty(value="Lag和")
private Long sumLag;
@ApiModelProperty(value="状态")
private Integer status;
@ApiModelProperty(value="超时时间配置")
private Long timeoutUnitSecConfig;
}


@@ -0,0 +1,46 @@
package com.xiaojukeji.kafka.manager.common.entity.vo.ha.job;
import com.xiaojukeji.kafka.manager.common.entity.ao.ha.job.HaJobState;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
@Data
@NoArgsConstructor
@AllArgsConstructor
@ApiModel(description = "Job状态")
public class HaJobStateVO {
@ApiModelProperty(value = "任务总数")
private Integer jobNu;
@ApiModelProperty(value = "运行中的任务数")
private Integer runningNu;
@ApiModelProperty(value = "超时运行中的任务数")
private Integer runningInTimeoutNu;
@ApiModelProperty(value = "准备好待运行的任务数")
private Integer waitingNu;
@ApiModelProperty(value = "运行成功的任务数")
private Integer successNu;
@ApiModelProperty(value = "运行失败的任务数")
private Integer failedNu;
@ApiModelProperty(value = "进度,[0 - 100]")
private Integer progress;
public HaJobStateVO(HaJobState jobState) {
this.jobNu = jobState.getTotal();
this.runningNu = jobState.getDoing();
this.runningInTimeoutNu = jobState.getDoingInTimeout();
this.waitingNu = 0;
this.successNu = jobState.getSuccess();
this.failedNu = jobState.getFailed();
this.progress = jobState.getProgress();
}
}
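The constructor above copies counters from `HaJobState`, whose internals are not shown in this diff. A plausible reading of the `progress` field — percent of sub-jobs that have finished, successfully or not — sketched under that assumption:

```java
// Hypothetical progress computation in the spirit of HaJobStateVO:
// percent of sub-jobs finished (success + failed) out of the total.
// HaJobState's real formula is not shown in this diff; this is an assumption.
public class JobProgressSketch {
    static int progress(int success, int failed, int total) {
        if (total <= 0) {
            return 0; // avoid division by zero for empty jobs
        }
        return (success + failed) * 100 / total;
    }

    public static void main(String[] args) {
        System.out.println(progress(3, 1, 8)); // 50
    }
}
```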


@@ -0,0 +1,26 @@
package com.xiaojukeji.kafka.manager.common.entity.vo.normal.topic;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
/**
* @author zengqiao
* @date 20/4/8
*/
@Data
@ApiModel(value = "集群的topic高可用状态")
public class HaClusterTopicHaStatusVO {
@ApiModelProperty(value = "物理集群ID")
private Long clusterId;
@ApiModelProperty(value = "物理集群名称")
private String clusterName;
@ApiModelProperty(value = "Topic名称")
private String topicName;
@ApiModelProperty(value = "高可用关系1:主topic, 0:备topic , 其他:非高可用topic")
private Integer haRelation;
}


@@ -2,6 +2,7 @@ package com.xiaojukeji.kafka.manager.common.entity.vo.normal.topic;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
+ import lombok.Data;
import java.util.List;
@@ -10,6 +11,7 @@ import java.util.List;
* @author zengqiao
* @date 19/4/1
*/
+ @Data
@ApiModel(description = "Basic topic information")
public class TopicBasicVO {
@ApiModelProperty(value = "Cluster ID")
@@ -57,125 +59,8 @@ public class TopicBasicVO {
@ApiModelProperty(value = "Region(s) the topic belongs to")
private List<String> regionNameList;
public Long getClusterId() {
return clusterId;
}
public void setClusterId(Long clusterId) {
this.clusterId = clusterId;
}
public String getAppId() {
return appId;
}
public void setAppId(String appId) {
this.appId = appId;
}
public String getAppName() {
return appName;
}
public void setAppName(String appName) {
this.appName = appName;
}
public Integer getPartitionNum() {
return partitionNum;
}
public void setPartitionNum(Integer partitionNum) {
this.partitionNum = partitionNum;
}
public Integer getReplicaNum() {
return replicaNum;
}
public void setReplicaNum(Integer replicaNum) {
this.replicaNum = replicaNum;
}
public String getPrincipals() {
return principals;
}
public void setPrincipals(String principals) {
this.principals = principals;
}
public Long getRetentionTime() {
return retentionTime;
}
public void setRetentionTime(Long retentionTime) {
this.retentionTime = retentionTime;
}
public Long getRetentionBytes() {
return retentionBytes;
}
public void setRetentionBytes(Long retentionBytes) {
this.retentionBytes = retentionBytes;
}
public Long getCreateTime() {
return createTime;
}
public void setCreateTime(Long createTime) {
this.createTime = createTime;
}
public Long getModifyTime() {
return modifyTime;
}
public void setModifyTime(Long modifyTime) {
this.modifyTime = modifyTime;
}
public Integer getScore() {
return score;
}
public void setScore(Integer score) {
this.score = score;
}
public String getTopicCodeC() {
return topicCodeC;
}
public void setTopicCodeC(String topicCodeC) {
this.topicCodeC = topicCodeC;
}
public String getDescription() {
return description;
}
public void setDescription(String description) {
this.description = description;
}
public String getBootstrapServers() {
return bootstrapServers;
}
public void setBootstrapServers(String bootstrapServers) {
this.bootstrapServers = bootstrapServers;
}
public List<String> getRegionNameList() {
return regionNameList;
}
public void setRegionNameList(List<String> regionNameList) {
this.regionNameList = regionNameList;
}
@ApiModelProperty(value = "HA relation. 1: primary topic, 0: standby topic, other: non-primary/standby topic")
private Integer haRelation;
@Override
public String toString() {
@@ -195,6 +80,7 @@ public class TopicBasicVO {
", description='" + description + '\'' +
", bootstrapServers='" + bootstrapServers + '\'' +
", regionNameList=" + regionNameList +
", haRelation=" + haRelation +
'}';
}
}


@@ -2,11 +2,13 @@ package com.xiaojukeji.kafka.manager.common.entity.vo.normal.topic;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
/**
* @author zhongyuankai,zengqiao
* @date 20/4/8
*/
@Data
@ApiModel(value = "Topic connection information")
public class TopicConnectionVO {
@ApiModelProperty(value = "Cluster ID")
@@ -30,72 +32,12 @@ public class TopicConnectionVO {
@ApiModelProperty(value = "Client version")
private String clientVersion;
public Long getClusterId() {
return clusterId;
}
@ApiModelProperty(value = "Client ID")
private String clientId;
public void setClusterId(Long clusterId) {
this.clusterId = clusterId;
}
@ApiModelProperty(value = "Time of connection to the broker")
private Long realConnectTime;
public String getTopicName() {
return topicName;
}
public void setTopicName(String topicName) {
this.topicName = topicName;
}
public String getAppId() {
return appId;
}
public void setAppId(String appId) {
this.appId = appId;
}
public String getIp() {
return ip;
}
public void setIp(String ip) {
this.ip = ip;
}
public String getHostname() {
return hostname;
}
public void setHostname(String hostname) {
this.hostname = hostname;
}
public String getClientType() {
return clientType;
}
public void setClientType(String clientType) {
this.clientType = clientType;
}
public String getClientVersion() {
return clientVersion;
}
public void setClientVersion(String clientVersion) {
this.clientVersion = clientVersion;
}
@Override
public String toString() {
return "TopicConnectionVO{" +
"clusterId=" + clusterId +
", topicName='" + topicName + '\'' +
", appId='" + appId + '\'' +
", ip='" + ip + '\'' +
", hostname='" + hostname + '\'' +
", clientType='" + clientType + '\'' +
", clientVersion='" + clientVersion + '\'' +
'}';
}
@ApiModelProperty(value = "Creation time")
private Long createTime;
}


@@ -0,0 +1,26 @@
package com.xiaojukeji.kafka.manager.common.entity.vo.normal.topic;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
/**
* @author zengqiao
* @date 20/4/8
*/
@Data
@ApiModel(value = "Topic information")
public class TopicHaVO {
@ApiModelProperty(value = "Physical cluster ID")
private Long clusterId;
@ApiModelProperty(value = "Physical cluster name")
private String clusterName;
@ApiModelProperty(value = "Topic name")
private String topicName;
@ApiModelProperty(value = "HA relation. 1: primary topic, 0: standby topic, other: non-HA topic")
private Integer haRelation;
}


@@ -2,6 +2,7 @@ package com.xiaojukeji.kafka.manager.common.entity.vo.rd;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import java.util.List;
import java.util.Properties;
@@ -10,6 +11,7 @@ import java.util.Properties;
* @author zengqiao
* @date 20/6/10
*/
@Data
@ApiModel(description = "Basic topic information (RD view)")
public class RdTopicBasicVO {
@ApiModelProperty(value = "Cluster ID")
@@ -39,77 +41,8 @@ public class RdTopicBasicVO {
@ApiModelProperty(value = "Region(s) the topic belongs to")
private List<String> regionNameList;
public Long getClusterId() {
return clusterId;
}
public void setClusterId(Long clusterId) {
this.clusterId = clusterId;
}
public String getClusterName() {
return clusterName;
}
public void setClusterName(String clusterName) {
this.clusterName = clusterName;
}
public String getTopicName() {
return topicName;
}
public void setTopicName(String topicName) {
this.topicName = topicName;
}
public Long getRetentionTime() {
return retentionTime;
}
public void setRetentionTime(Long retentionTime) {
this.retentionTime = retentionTime;
}
public String getAppId() {
return appId;
}
public void setAppId(String appId) {
this.appId = appId;
}
public String getAppName() {
return appName;
}
public void setAppName(String appName) {
this.appName = appName;
}
public Properties getProperties() {
return properties;
}
public void setProperties(Properties properties) {
this.properties = properties;
}
public String getDescription() {
return description;
}
public void setDescription(String description) {
this.description = description;
}
public List<String> getRegionNameList() {
return regionNameList;
}
public void setRegionNameList(List<String> regionNameList) {
this.regionNameList = regionNameList;
}
@ApiModelProperty(value = "HA relation. 1: primary topic, 0: standby topic, other: non-primary/standby topic")
private Integer haRelation;
@Override
public String toString() {
@@ -122,7 +55,8 @@ public class RdTopicBasicVO {
", appName='" + appName + '\'' +
", properties=" + properties +
", description='" + description + '\'' +
", regionNameList='" + regionNameList + '\'' +
", regionNameList=" + regionNameList +
", haRelation=" + haRelation +
'}';
}
}


@@ -0,0 +1,72 @@
package com.xiaojukeji.kafka.manager.common.entity.vo.rd.app;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.util.ArrayList;
import java.util.List;
/**
* @author zengqiao
* @date 20/5/4
*/
@Data
@NoArgsConstructor
@ApiModel(description="Topics associated with the App")
public class AppRelateTopicsVO {
@ApiModelProperty(value="Physical cluster ID")
private Long clusterPhyId;
@ApiModelProperty(value="kafkaUser")
private String kafkaUser;
@ApiModelProperty(value="clientId")
private String clientId;
@ApiModelProperty(value="Clients with HA already established")
private List<String> haClientIdList;
@ApiModelProperty(value="Selected topic names")
private List<String> selectedTopicNameList;
@ApiModelProperty(value="Unselected topic names")
private List<String> notSelectTopicNameList;
@ApiModelProperty(value="Topic names without HA established")
private List<String> notHaTopicNameList;
public AppRelateTopicsVO(Long clusterPhyId, String kafkaUser, String clientId) {
this.clusterPhyId = clusterPhyId;
this.kafkaUser = kafkaUser;
this.clientId = clientId;
this.selectedTopicNameList = new ArrayList<>();
this.notSelectTopicNameList = new ArrayList<>();
this.notHaTopicNameList = new ArrayList<>();
}
public void addSelectedIfNotExist(String topicName) {
if (selectedTopicNameList.contains(topicName)) {
return;
}
selectedTopicNameList.add(topicName);
}
public void addNotSelectedIfNotExist(String topicName) {
if (notSelectTopicNameList.contains(topicName)) {
return;
}
notSelectTopicNameList.add(topicName);
}
public void addNotHaIfNotExist(String topicName) {
if (notHaTopicNameList.contains(topicName)) {
return;
}
notHaTopicNameList.add(topicName);
}
}
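The add-if-absent helpers above guard against duplicates with `List.contains`, a linear scan per insert. For large topic lists, a `LinkedHashSet` keeps the same insertion order with constant-time membership checks. A standalone sketch of that alternative (not the class's actual implementation; names are illustrative):

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class DedupSketch {
    // Insertion-ordered set: duplicates are ignored, first-occurrence
    // order is preserved, and membership checks are O(1).
    private final Set<String> selectedTopicNames = new LinkedHashSet<>();

    public void addSelectedIfNotExist(String topicName) {
        selectedTopicNames.add(topicName); // no-op if already present
    }

    public List<String> selectedTopicNameList() {
        return new ArrayList<>(selectedTopicNames);
    }

    public static void main(String[] args) {
        DedupSketch s = new DedupSketch();
        s.addSelectedIfNotExist("topic-a");
        s.addSelectedIfNotExist("topic-b");
        s.addSelectedIfNotExist("topic-a"); // duplicate, ignored
        System.out.println(s.selectedTopicNameList()); // prints [topic-a, topic-b]
    }
}
```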


@@ -2,11 +2,13 @@ package com.xiaojukeji.kafka.manager.common.entity.vo.rd.cluster;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
/**
* @author zengqiao
* @date 20/4/23
*/
@Data
@ApiModel(description="Cluster information")
public class ClusterDetailVO extends ClusterBaseVO {
@ApiModelProperty(value="Number of brokers")
@@ -24,45 +26,11 @@ public class ClusterDetailVO extends ClusterBaseVO {
@ApiModelProperty(value="Number of regions")
private Integer regionNum;
public Integer getBrokerNum() {
return brokerNum;
}
@ApiModelProperty(value = "HA relation. 1: primary, 0: standby, other: non-HA")
private Integer haRelation;
public void setBrokerNum(Integer brokerNum) {
this.brokerNum = brokerNum;
}
public Integer getTopicNum() {
return topicNum;
}
public void setTopicNum(Integer topicNum) {
this.topicNum = topicNum;
}
public Integer getConsumerGroupNum() {
return consumerGroupNum;
}
public void setConsumerGroupNum(Integer consumerGroupNum) {
this.consumerGroupNum = consumerGroupNum;
}
public Integer getControllerId() {
return controllerId;
}
public void setControllerId(Integer controllerId) {
this.controllerId = controllerId;
}
public Integer getRegionNum() {
return regionNum;
}
public void setRegionNum(Integer regionNum) {
this.regionNum = regionNum;
}
@ApiModelProperty(value = "Name of the mutual-backup cluster")
private String mutualBackupClusterName;
@Override
public String toString() {
@@ -72,6 +40,8 @@ public class ClusterDetailVO extends ClusterBaseVO {
", consumerGroupNum=" + consumerGroupNum +
", controllerId=" + controllerId +
", regionNum=" + regionNum +
"} " + super.toString();
", haRelation=" + haRelation +
", mutualBackupClusterName='" + mutualBackupClusterName + '\'' +
'}';
}
}


@@ -0,0 +1,30 @@
package com.xiaojukeji.kafka.manager.common.entity.vo.rd.job;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.util.Date;
@Data
@NoArgsConstructor
@AllArgsConstructor
@ApiModel(description = "Job log")
public class JobLogVO {
@ApiModelProperty(value = "Log ID")
protected Long id;
@ApiModelProperty(value = "Business type")
private Integer bizType;
@ApiModelProperty(value = "Business keyword")
private String bizKeyword;
@ApiModelProperty(value = "Print time")
private Date printTime;
@ApiModelProperty(value = "Content")
private String content;
}


@@ -0,0 +1,31 @@
package com.xiaojukeji.kafka.manager.common.entity.vo.rd.job;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.util.ArrayList;
import java.util.List;
@Data
@NoArgsConstructor
@AllArgsConstructor
@ApiModel(description = "Job logs")
public class JobMulLogVO {
@ApiModelProperty(value = "Last log ID")
private Long endLogId;
@ApiModelProperty(value = "Log entries")
private List<JobLogVO> logList;
public JobMulLogVO(List<JobLogVO> logList, Long startLogId) {
this.logList = logList == null? new ArrayList<>(): logList;
if (!this.logList.isEmpty()) {
this.endLogId = this.logList.stream().map(JobLogVO::getId).reduce(Long::max).get() + 1;
} else {
this.endLogId = startLogId;
}
}
}
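The constructor above derives `endLogId` as max(log id) + 1 so the caller can use it as the cursor for the next poll, falling back to the previous `startLogId` when no logs arrived. A self-contained sketch of that cursor rule (names are illustrative):

```java
import java.util.Arrays;
import java.util.List;

public class EndLogIdSketch {
    // Mirror of JobMulLogVO's rule: the next cursor is max(id) + 1,
    // or the previous start cursor when no new logs arrived.
    static long nextCursor(List<Long> logIds, long startLogId) {
        if (logIds == null || logIds.isEmpty()) {
            return startLogId;
        }
        return logIds.stream().reduce(Long::max).get() + 1;
    }

    public static void main(String[] args) {
        // New logs 7, 9, 8 arrived: next poll starts at 10
        System.out.println(nextCursor(Arrays.asList(7L, 9L, 8L), 5L)); // prints 10
        // No new logs: keep polling from the previous cursor
        System.out.println(nextCursor(Arrays.asList(), 5L)); // prints 5
    }
}
```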


@@ -0,0 +1,404 @@
package com.xiaojukeji.kafka.manager.common.utils;
import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.JSONObject;
import com.alibaba.fastjson.TypeReference;
import com.alibaba.fastjson.serializer.SerializerFeature;
import com.google.common.collect.*;
import org.apache.commons.collections.CollectionUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.BeanUtils;
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.lang.reflect.Type;
import java.util.*;
import java.util.Map.Entry;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;
import java.util.function.Function;
public class ConvertUtil {
private static final Logger LOGGER = LoggerFactory.getLogger(ConvertUtil.class);
private ConvertUtil(){}
public static <T> T toObj(String json, Type resultType) {
if (resultType instanceof Class) {
Class<T> clazz = (Class<T>) resultType;
return str2ObjByJson(json, clazz);
}
return JSON.parseObject(json, resultType);
}
public static <T> T str2ObjByJson(String srcStr, Class<T> tgtClass) {
return JSON.parseObject(srcStr, tgtClass);
}
public static <T> T str2ObjByJson(String srcStr, TypeReference<T> tt) {
return JSON.parseObject(srcStr, tt);
}
public static String obj2Json(Object srcObj) {
if (srcObj == null) {
return null;
}
if (srcObj instanceof String) {
return (String) srcObj;
} else {
return JSON.toJSONString(srcObj);
}
}
public static String obj2JsonWithIgnoreCircularReferenceDetect(Object srcObj) {
return JSON.toJSONString(srcObj, SerializerFeature.DisableCircularReferenceDetect);
}
public static <T> List<T> str2ObjArrayByJson(String srcStr, Class<T> tgtClass) {
return JSON.parseArray(srcStr, tgtClass);
}
public static <T> T obj2ObjByJSON(Object srcObj, Class<T> tgtClass) {
return JSON.parseObject( JSON.toJSONString(srcObj), tgtClass);
}
public static String list2String(List<?> list, String separator) {
if (list == null || list.isEmpty()) {
return "";
}
StringBuilder sb = new StringBuilder();
for (Object item : list) {
sb.append(item).append(separator);
}
return sb.deleteCharAt(sb.length() - 1).toString();
}
public static <K, V> Map<K, V> list2Map(List<V> list, Function<? super V, ? extends K> mapper) {
Map<K, V> map = Maps.newHashMap();
if (CollectionUtils.isNotEmpty(list)) {
for (V v : list) {
map.put(mapper.apply(v), v);
}
}
return map;
}
public static <K, V> Map<K, V> list2MapParallel(List<V> list, Function<? super V, ? extends K> mapper) {
Map<K, V> map = new ConcurrentHashMap<>();
if (CollectionUtils.isNotEmpty(list)) {
list.parallelStream().forEach(v -> map.put(mapper.apply(v), v));
}
return map;
}
public static <K, V, O> Map<K, V> list2Map(List<O> list, Function<? super O, ? extends K> keyMapper,
Function<? super O, ? extends V> valueMapper) {
Map<K, V> map = Maps.newHashMap();
if (CollectionUtils.isNotEmpty(list)) {
for (O o : list) {
map.put(keyMapper.apply(o), valueMapper.apply(o));
}
}
return map;
}
public static <K, V> Multimap<K, V> list2MulMap(List<V> list, Function<? super V, ? extends K> mapper) {
Multimap<K, V> multimap = ArrayListMultimap.create();
if (CollectionUtils.isNotEmpty(list)) {
for (V v : list) {
multimap.put(mapper.apply(v), v);
}
}
return multimap;
}
public static <K, V, O> Multimap<K, V> list2MulMap(List<O> list, Function<? super O, ? extends K> keyMapper,
Function<? super O, ? extends V> valueMapper) {
Multimap<K, V> multimap = ArrayListMultimap.create();
if (CollectionUtils.isNotEmpty(list)) {
for (O o : list) {
multimap.put(keyMapper.apply(o), valueMapper.apply(o));
}
}
return multimap;
}
public static <K, V, O> Map<K, List<V>> list2MapOfList(List<O> list, Function<? super O, ? extends K> keyMapper,
Function<? super O, ? extends V> valueMapper) {
ArrayListMultimap<K, V> multimap = ArrayListMultimap.create();
if (CollectionUtils.isNotEmpty(list)) {
for (O o : list) {
multimap.put(keyMapper.apply(o), valueMapper.apply(o));
}
}
return Multimaps.asMap(multimap);
}
public static <K, V> Set<K> list2Set(List<V> list, Function<? super V, ? extends K> mapper) {
Set<K> set = Sets.newHashSet();
if (CollectionUtils.isNotEmpty(list)) {
for (V v : list) {
set.add(mapper.apply(v));
}
}
return set;
}
public static <T> Set<T> set2Set(Set<? extends Object> set, Class<T> tClass) {
if (CollectionUtils.isEmpty(set)) {
return new HashSet<>();
}
Set<T> result = new HashSet<>();
for (Object o : set) {
T t = obj2Obj(o, tClass);
if (t != null) {
result.add(t);
}
}
return result;
}
public static <T> List<T> list2List(List<? extends Object> list, Class<T> tClass) {
return list2List(list, tClass, (t) -> {
});
}
public static <T> List<T> list2List(List<? extends Object> list, Class<T> tClass, Consumer<T> consumer) {
if (CollectionUtils.isEmpty(list)) {
return Lists.newArrayList();
}
List<T> result = Lists.newArrayList();
for (Object object : list) {
T t = obj2Obj(object, tClass, consumer);
if (t != null) {
result.add(t);
}
}
return result;
}
/**
 * Object conversion helper
 * @param srcObj source object
 * @param tgtClass target class
 * @param <T> generic type
 * @return target object
 */
public static <T> T obj2Obj(final Object srcObj, Class<T> tgtClass) {
return obj2Obj(srcObj, tgtClass, (t) -> {
});
}
public static <T> T obj2Obj(final Object srcObj, Class<T> tgtClass, Consumer<T> consumer) {
if (srcObj == null) {
return null;
}
T tgt = null;
try {
tgt = tgtClass.newInstance();
BeanUtils.copyProperties(srcObj, tgt);
consumer.accept(tgt);
} catch (Exception e) {
LOGGER.warn("class=ConvertUtil||method=obj2Obj||msg={}", e.getMessage());
}
return tgt;
}
public static <K, V> Map<K, V> mergeMapList(List<Map<K, V>> mapList) {
Map<K, V> result = Maps.newHashMap();
for (Map<K, V> map : mapList) {
result.putAll(map);
}
return result;
}
public static Map<String, Object> Obj2Map(Object obj) {
if (null == obj) {
return null;
}
Map<String, Object> map = new HashMap<>();
Field[] fields = obj.getClass().getDeclaredFields();
for (Field field : fields) {
field.setAccessible(true);
try {
map.put(field.getName(), field.get(obj));
} catch (IllegalAccessException e) {
LOGGER.warn("class=ConvertUtil||method=Obj2Map||msg={}", e.getMessage(), e);
}
}
return map;
}
public static Object map2Obj(Map<String, Object> map, Class<?> clz) {
Object obj = null;
try {
obj = clz.newInstance();
Field[] declaredFields = obj.getClass().getDeclaredFields();
for (Field field : declaredFields) {
int mod = field.getModifiers();
if (Modifier.isStatic(mod) || Modifier.isFinal(mod)) {
continue;
}
field.setAccessible(true);
field.set(obj, map.get(field.getName()));
}
} catch (Exception e) {
LOGGER.warn("class=ConvertUtil||method=map2Obj||msg={}", e.getMessage(), e);
}
return obj;
}
public static Map<String, Double> sortMapByValue(Map<String, Double> map) {
List<Entry<String, Double>> data = new ArrayList<>(map.entrySet());
data.sort((o1, o2) -> Double.compare(o2.getValue(), o1.getValue()));
Map<String, Double> result = Maps.newLinkedHashMap();
for (Entry<String, Double> next : data) {
result.put(next.getKey(), next.getValue());
}
return result;
}
public static Map<String, Object> directFlatObject(JSONObject obj) {
Map<String, Object> ret = new HashMap<>();
if(obj==null) {
return ret;
}
for (Entry<String, Object> entry : obj.entrySet()) {
String key = entry.getKey();
Object o = entry.getValue();
if (o instanceof JSONObject) {
Map<String, Object> m = directFlatObject((JSONObject) o);
for (Entry<String, Object> e : m.entrySet()) {
ret.put(key + "." + e.getKey(), e.getValue());
}
} else {
ret.put(key, o);
}
}
return ret;
}
public static Long string2Long(String s) {
if (ValidateUtils.isNull(s)) {
return null;
}
try {
return Long.parseLong(s);
} catch (Exception e) {
// ignore exception
}
return null;
}
public static Float string2Float(String s) {
if (ValidateUtils.isNull(s)) {
return null;
}
try {
return Float.parseFloat(s);
} catch (Exception e) {
// ignore exception
}
return null;
}
public static String float2String(Float f) {
if (ValidateUtils.isNull(f)) {
return null;
}
try {
return String.valueOf(f);
} catch (Exception e) {
// ignore exception
}
return null;
}
public static Integer string2Integer(String s) {
if (null == s) {
return null;
}
try {
return Integer.parseInt(s);
} catch (Exception e) {
// ignore exception
}
return null;
}
public static Double string2Double(String s) {
if (null == s) {
return null;
}
try {
return Double.parseDouble(s);
} catch (Exception e) {
// ignore exception
}
return null;
}
public static Long double2Long(Double d) {
if (null == d) {
return null;
}
try {
return d.longValue();
} catch (Exception e) {
// ignore exception
}
return null;
}
public static Integer double2Int(Double d) {
if (null == d) {
return null;
}
try {
return d.intValue();
} catch (Exception e) {
// ignore exception
}
return null;
}
public static Long Float2Long(Float f) {
if (null == f) {
return null;
}
try {
return f.longValue();
} catch (Exception e) {
// ignore exception
}
return null;
}
}
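`directFlatObject` above recursively flattens nested JSON objects into dot-separated keys. The same idea works on plain `Map`s without fastjson, as in this standalone sketch:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FlattenSketch {
    // Recursively flatten nested maps, joining key paths with '.'
    static Map<String, Object> flatten(Map<String, Object> obj) {
        Map<String, Object> ret = new LinkedHashMap<>();
        if (obj == null) {
            return ret;
        }
        for (Map.Entry<String, Object> e : obj.entrySet()) {
            if (e.getValue() instanceof Map) {
                @SuppressWarnings("unchecked")
                Map<String, Object> nested = (Map<String, Object>) e.getValue();
                for (Map.Entry<String, Object> n : flatten(nested).entrySet()) {
                    ret.put(e.getKey() + "." + n.getKey(), n.getValue());
                }
            } else {
                ret.put(e.getKey(), e.getValue());
            }
        }
        return ret;
    }

    public static void main(String[] args) {
        Map<String, Object> inner = new LinkedHashMap<>();
        inner.put("id", 1);
        Map<String, Object> outer = new LinkedHashMap<>();
        outer.put("cluster", inner);
        outer.put("name", "km");
        System.out.println(flatten(outer)); // prints {cluster.id=1, name=km}
    }
}
```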


@@ -15,6 +15,7 @@ import java.util.concurrent.ConcurrentHashMap;
* @author huangyiminghappy@163.com
* @date 2019/3/15
*/
@Deprecated
public class CopyUtils {
@SuppressWarnings({"unchecked", "rawtypes"})


@@ -0,0 +1,158 @@
package com.xiaojukeji.kafka.manager.common.utils;
import com.xiaojukeji.kafka.manager.common.entity.ao.common.FutureTaskDelayQueueData;
import com.xiaojukeji.kafka.manager.common.utils.factory.DefaultThreadFactory;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.*;
/**
 * Utility class for working with Futures
 */
public class FutureUtil<T> {
private static final Logger LOGGER = LoggerFactory.getLogger(FutureUtil.class);
private ThreadPoolExecutor executor;
private Map<Long/*currentThreadId*/, DelayQueue<FutureTaskDelayQueueData<T>>> futuresMap;
private FutureUtil() {
}
public static <T> FutureUtil<T> init(String name, int corePoolSize, int maxPoolSize, int queueSize) {
FutureUtil<T> futureUtil = new FutureUtil<>();
futureUtil.executor = new ThreadPoolExecutor(
corePoolSize,
maxPoolSize,
3000,
TimeUnit.MILLISECONDS,
new LinkedBlockingDeque<>(queueSize),
new DefaultThreadFactory("KM-FutureUtil-" + name),
new ThreadPoolExecutor.DiscardOldestPolicy() // On rejection, instead of dropping the new task, drop the longest-waiting task in the queue and enqueue the rejected one.
);
futureUtil.futuresMap = new ConcurrentHashMap<>();
return futureUtil;
}
public Future<T> directSubmitTask(Callable<T> callable) {
return executor.submit(callable);
}
public Future<T> directSubmitTask(Runnable runnable) {
return (Future<T>) executor.submit(runnable);
}
/**
 * Must be used together with waitExecute; otherwise memory can easily be exhausted
 */
public FutureUtil<T> runnableTask(String taskName, Integer timeoutUnisMs, Callable<T> callable) {
Long currentThreadId = Thread.currentThread().getId();
futuresMap.putIfAbsent(currentThreadId, new DelayQueue<>());
DelayQueue<FutureTaskDelayQueueData<T>> delayQueueData = futuresMap.get(currentThreadId);
delayQueueData.put(new FutureTaskDelayQueueData<>(taskName, executor.submit(callable), timeoutUnisMs + System.currentTimeMillis()));
return this;
}
public FutureUtil<T> runnableTask(String taskName, Integer timeoutUnisMs, Runnable runnable) {
Long currentThreadId = Thread.currentThread().getId();
futuresMap.putIfAbsent(currentThreadId, new DelayQueue<>());
DelayQueue<FutureTaskDelayQueueData<T>> delayQueueData = futuresMap.get(currentThreadId);
delayQueueData.put(new FutureTaskDelayQueueData<T>(taskName, (Future<T>) executor.submit(runnable), timeoutUnisMs + System.currentTimeMillis()));
return this;
}
public void waitExecute() {
this.waitResult();
}
public void waitExecute(Integer stepWaitTimeUnitMs) {
this.waitResult(stepWaitTimeUnitMs);
}
public List<T> waitResult() {
return waitResult(null);
}
/**
 * Wait for the results
 * @param stepWaitTimeUnitMs additional time to keep waiting when a task has not finished by its deadline
 */
public List<T> waitResult(Integer stepWaitTimeUnitMs) {
Long currentThreadId = Thread.currentThread().getId();
DelayQueue<FutureTaskDelayQueueData<T>> delayQueueData = futuresMap.remove(currentThreadId);
if(delayQueueData == null || delayQueueData.isEmpty()) {
return new ArrayList<>();
}
List<T> resultList = new ArrayList<>();
while (!delayQueueData.isEmpty()) {
try {
// Do not block; just peek at the head task
FutureTaskDelayQueueData<T> queueData = delayQueueData.peek();
if (queueData.getFutureTask().isDone()) {
// If the head task is done, remove it and collect its result
delayQueueData.remove(queueData);
resultList.add(queueData.getFutureTask().get());
continue;
}
// If the head task is not done, block for up to 10 ms to check whether its deadline has been reached.
// Do not make this 10 ms much larger: the task may finish within this window, and a larger value would delay the return and hurt the latency of the calling interface.
queueData = delayQueueData.poll(10, TimeUnit.MILLISECONDS);
if (queueData == null) {
continue;
}
// When the deadline passes with the task unfinished, it may merely have been stuck waiting,
// so grant it a supplementary wait to see whether it can still complete.
stepWaitResult(queueData, stepWaitTimeUnitMs);
// Deadline reached
if (queueData.getFutureTask().isDone()) {
// Task finished in the meantime
resultList.add(queueData.getFutureTask().get());
continue;
}
// Deadline reached with the task unfinished: log it and force-cancel
LOGGER.error("class=FutureUtil||method=waitExecute||taskName={}||msg=cancel task", queueData.getTaskName());
queueData.getFutureTask().cancel(true);
} catch (Exception e) {
LOGGER.error("class=FutureUtil||method=waitExecute||msg=exception", e);
}
}
return resultList;
}
private T stepWaitResult(FutureTaskDelayQueueData<T> queueData, Integer stepWaitTimeUnitMs) {
if (stepWaitTimeUnitMs == null) {
return null;
}
try {
return queueData.getFutureTask().get(stepWaitTimeUnitMs, TimeUnit.MILLISECONDS);
} catch (Exception e) {
// Deadline reached with the task unfinished: log it and force-cancel
LOGGER.error("class=FutureUtil||method=stepWaitResult||taskName={}||errMsg=exception", queueData.getTaskName(), e);
}
return null;
}
}
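FutureUtil's wait loop boils down to: submit tasks, collect the results that finish within a deadline, and force-cancel the rest. The JDK pattern underneath can be sketched with a bare `ExecutorService` (a simplified analogue, not the class above):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class TimeoutWaitSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        List<Future<String>> futures = new ArrayList<>();
        futures.add(pool.submit(() -> "fast")); // finishes immediately
        futures.add(pool.submit(() -> { Thread.sleep(5_000); return "slow"; }));

        List<String> results = new ArrayList<>();
        for (Future<String> f : futures) {
            try {
                // Per-task deadline; tasks that overrun are cancelled,
                // mirroring FutureUtil's force-cancel on timeout.
                results.add(f.get(200, TimeUnit.MILLISECONDS));
            } catch (TimeoutException e) {
                f.cancel(true);
            }
        }
        pool.shutdownNow();
        System.out.println(results); // prints [fast]
    }
}
```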


@@ -0,0 +1,67 @@
package com.xiaojukeji.kafka.manager.common.utils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.Properties;
public class GitPropUtil {
private static final Logger log = LoggerFactory.getLogger(GitPropUtil.class);
private static Properties props = null;
public static final String VERSION_FIELD_NAME = "git.build.version";
public static final String COMMIT_ID_FIELD_NAME = "git.commit.id.abbrev";
public static String getProps(String fieldName) {
if (props == null) {
props = JsonUtils.stringToObj(readGitPropertiesInJarFile(), Properties.class);
}
return props.getProperty(fieldName);
}
public static Properties getProps() {
if (props == null) {
props = JsonUtils.stringToObj(readGitPropertiesInJarFile(), Properties.class);
}
return props;
}
private static String readGitPropertiesInJarFile() {
InputStream inputStream = null;
try {
inputStream = GitPropUtil.class.getClassLoader().getResourceAsStream("git.properties");
BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(inputStream));
String line = null;
StringBuilder sb = new StringBuilder();
while ((line = bufferedReader.readLine()) != null) {
sb.append(line).append("\n");
}
return sb.toString();
} catch (Exception e) {
log.error("method=readGitPropertiesInJarFile||errMsg=exception.", e);
} finally {
try {
if (inputStream != null) {
inputStream.close();
}
} catch (Exception e) {
log.error("method=readGitPropertiesInJarFile||msg=close failed||errMsg=exception.", e);
}
}
return "{}";
}
private GitPropUtil() {
}
}
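GitPropUtil reads `git.properties` as raw text and parses it through the JSON helper, which implies the JSON-format variant of the file. When the file is in standard `key=value` form instead, `java.util.Properties.load` handles the parsing directly; a hedged sketch of that alternative (sample keys match the constants above, values are made up):

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class GitPropSketch {
    // Parse key=value content the way Properties.load would parse a
    // git.properties resource read from the classpath.
    static Properties parse(String content) throws IOException {
        Properties props = new Properties();
        props.load(new StringReader(content));
        return props;
    }

    public static void main(String[] args) throws IOException {
        String content = "git.build.version=2.8.0\ngit.commit.id.abbrev=462303f\n";
        Properties props = parse(content);
        System.out.println(props.getProperty("git.build.version"));    // prints 2.8.0
        System.out.println(props.getProperty("git.commit.id.abbrev")); // prints 462303f
    }
}
```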


@@ -0,0 +1,29 @@
package com.xiaojukeji.kafka.manager.common.utils;
public class HAUtils {
public static String mergeKafkaUserAndClient(String kafkaUser, String clientId) {
if (ValidateUtils.isBlank(clientId)) {
return kafkaUser;
}
return String.format("%s#%s", kafkaUser, clientId);
}
public static Tuple<String, String> splitKafkaUserAndClient(String kafkaUserAndClientId) {
if (ValidateUtils.isBlank(kafkaUserAndClientId)) {
return null;
}
int idx = kafkaUserAndClientId.indexOf('#');
if (idx == -1) {
return null;
} else if (idx == kafkaUserAndClientId.length() - 1) {
return new Tuple<>(kafkaUserAndClientId.substring(0, idx), "");
}
return new Tuple<>(kafkaUserAndClientId.substring(0, idx), kafkaUserAndClientId.substring(idx + 1));
}
private HAUtils() {
}
}
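The merge/split pair above encodes a kafkaUser and clientId into a single `user#clientId` token. A self-contained round-trip check of that scheme (re-implemented inline for the sketch, so it compiles without ValidateUtils or Tuple):

```java
public class HaKeySketch {
    static String merge(String kafkaUser, String clientId) {
        if (clientId == null || clientId.trim().isEmpty()) {
            return kafkaUser; // no '#' suffix when there is no clientId
        }
        return kafkaUser + "#" + clientId;
    }

    // Returns {user, clientId}, or null when the token carries no '#'
    static String[] split(String merged) {
        if (merged == null) {
            return null;
        }
        int idx = merged.indexOf('#');
        if (idx == -1) {
            return null;
        }
        return new String[]{merged.substring(0, idx), merged.substring(idx + 1)};
    }

    public static void main(String[] args) {
        String token = merge("userA", "client1");
        System.out.println(token);                     // prints userA#client1
        String[] parts = split(token);
        System.out.println(parts[0] + "|" + parts[1]); // prints userA|client1
        System.out.println(split("userA"));            // prints null (plain user, no clientId)
    }
}
```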


@@ -79,10 +79,27 @@ public class JsonUtils {
TopicConnectionDO connectionDO = new TopicConnectionDO();
String[] appIdDetailArray = appIdDetail.toString().split("#");
if (appIdDetailArray.length >= 3) {
connectionDO.setAppId(appIdDetailArray[0]);
connectionDO.setIp(appIdDetailArray[1]);
connectionDO.setClientVersion(appIdDetailArray[2]);
if (appIdDetailArray == null) {
appIdDetailArray = new String[0];
}
connectionDO.setAppId(parseTopicConnections(appIdDetailArray, 0));
connectionDO.setIp(parseTopicConnections(appIdDetailArray, 1));
connectionDO.setClientVersion(parseTopicConnections(appIdDetailArray, 2));
// Parse the clientId (fields 3 .. length-2, re-joined with '#')
StringBuilder sb = new StringBuilder();
for (int i = 3; i < appIdDetailArray.length - 1; ++i) {
sb.append(parseTopicConnections(appIdDetailArray, i)).append("#");
}
connectionDO.setClientId(sb.length() > 0 ? sb.substring(0, sb.length() - 1) : "");
// Parse the receive timestamp
Long receiveTime = ConvertUtil.string2Long(parseTopicConnections(appIdDetailArray, appIdDetailArray.length - 1));
if (receiveTime == null) {
connectionDO.setRealConnectTime(-1L);
} else {
connectionDO.setRealConnectTime(receiveTime);
}
connectionDO.setClusterId(clusterId);
@@ -95,4 +112,8 @@ public class JsonUtils {
}
return connectionDOList;
}
private static String parseTopicConnections(String[] appIdDetailArray, int idx) {
return (appIdDetailArray != null && appIdDetailArray.length >= idx + 1)? appIdDetailArray[idx]: "";
}
}
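Judging from the parsing code above, the connection detail string appears to be laid out as `appId#ip#clientVersion#<clientId parts...>#timestamp`, where the clientId itself may contain `#`. A standalone sketch of that split (the field layout is inferred from the code, not documented in the source; the sample values are made up):

```java
public class ConnDetailSketch {
    public static void main(String[] args) {
        // clientId "my#client" spans two '#'-separated fields
        String detail = "appA#10.0.0.1#2.4.1#my#client#1687000000000";
        String[] parts = detail.split("#");

        String appId = parts[0];
        String ip = parts[1];
        String clientVersion = parts[2];
        // Fields 3 .. length-2 re-joined with '#' form the clientId
        StringBuilder clientId = new StringBuilder();
        for (int i = 3; i < parts.length - 1; i++) {
            if (clientId.length() > 0) {
                clientId.append('#');
            }
            clientId.append(parts[i]);
        }
        long timestamp = Long.parseLong(parts[parts.length - 1]);

        System.out.println(appId + "|" + ip + "|" + clientVersion
                + "|" + clientId + "|" + timestamp);
        // prints appA|10.0.0.1|2.4.1|my#client|1687000000000
    }
}
```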


@@ -13,6 +13,7 @@ import org.springframework.context.ApplicationEvent;
import org.springframework.context.annotation.Lazy;
import org.springframework.core.annotation.Order;
import org.springframework.stereotype.Service;
import org.springframework.web.context.request.RequestAttributes;
import org.springframework.web.context.request.RequestContextHolder;
import org.springframework.web.context.request.ServletRequestAttributes;
@@ -81,16 +82,19 @@ public class SpringTool implements ApplicationContextAware, DisposableBean {
}
public static String getUserName(){
HttpServletRequest request = ((ServletRequestAttributes) RequestContextHolder.getRequestAttributes()).getRequest();
String username = null;
if (TrickLoginConstant.TRICK_LOGIN_SWITCH_ON.equals(request.getHeader(TrickLoginConstant.TRICK_LOGIN_SWITCH))) {
// Trick login: read the user from the request header
username = request.getHeader(TrickLoginConstant.TRICK_LOGIN_USER);
} else {
// Page login: read the user from the session
HttpSession session = request.getSession();
username = (String) session.getAttribute(LoginConstant.SESSION_USERNAME_KEY);
RequestAttributes requestAttributes = RequestContextHolder.getRequestAttributes();
if (!ValidateUtils.isNull(requestAttributes)) {
HttpServletRequest request = ((ServletRequestAttributes) requestAttributes).getRequest();
if (TrickLoginConstant.TRICK_LOGIN_SWITCH_ON.equals(request.getHeader(TrickLoginConstant.TRICK_LOGIN_SWITCH))) {
// Trick login: read the user from the request header
username = request.getHeader(TrickLoginConstant.TRICK_LOGIN_USER);
} else {
// Page login: read the user from the session
HttpSession session = request.getSession();
username = (String) session.getAttribute(LoginConstant.SESSION_USERNAME_KEY);
}
}
if (ValidateUtils.isNull(username)) {


@@ -0,0 +1,61 @@
package com.xiaojukeji.kafka.manager.common.utils;
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import lombok.Data;
/**
 * @Author: D10865
 * @Description:
 * @Date: Created on 2018/5/29 at 4:08 PM
 * @Modified By
 */
@JsonIgnoreProperties(value = { "hibernateLazyInitializer", "handler" })
@Data
public class Tuple<T, V> {
private T v1;
private V v2;
public Tuple(){}
public Tuple(T v1, V v2) {
this.v1 = v1;
this.v2 = v2;
}
public T v1() {
return v1;
}
public Tuple<T, V> setV1(T v1) {
this.v1 = v1;
return this;
}
public V v2() {
return v2;
}
public Tuple<T, V> setV2(V v2) {
this.v2 = v2;
return this;
}
@Override
public boolean equals(Object o) {
if (this == o) {return true;}
if (o == null || getClass() != o.getClass()) {return false;}
Tuple<?, ?> tuple = (Tuple<?, ?>) o;
if (v1 != null ? !v1.equals(tuple.v1) : tuple.v1 != null) {return false;}
return v2 != null ? v2.equals(tuple.v2) : tuple.v2 == null;
}
@Override
public int hashCode() {
int result = v1 != null ? v1.hashCode() : 0;
result = 31 * result + (v2 != null ? v2.hashCode() : 0);
return result;
}
}


@@ -8,6 +8,8 @@ package com.xiaojukeji.kafka.manager.common.zookeeper;
public class ZkPathUtil {
private static final String ZOOKEEPER_SEPARATOR = "/";
public static final String CLUSTER_ID_NODE = ZOOKEEPER_SEPARATOR + "cluster/id";
public static final String BROKER_ROOT_NODE = ZOOKEEPER_SEPARATOR + "brokers";
public static final String CONTROLLER_ROOT_NODE = ZOOKEEPER_SEPARATOR + "controller";


@@ -29,10 +29,10 @@ public class TopicQuotaData {
public static TopicQuotaData getClientData(Long producerByteRate, Long consumerByteRate) {
TopicQuotaData clientData = new TopicQuotaData();
if (!ValidateUtils.isNull(producerByteRate) && consumerByteRate != -1) {
if (!ValidateUtils.isNull(consumerByteRate) && consumerByteRate != -1) {
clientData.setConsumer_byte_rate(consumerByteRate.toString());
}
if (!ValidateUtils.isNull(consumerByteRate) && producerByteRate != -1) {
if (!ValidateUtils.isNull(producerByteRate) && producerByteRate != -1) {
clientData.setProducer_byte_rate(producerByteRate.toString());
}
return clientData;


@@ -0,0 +1,20 @@
ARG NODE_VERSION=12.20.0
ARG NGINX_VERSION=1.21.5-alpine
FROM node:${NODE_VERSION} AS builder
ARG OUTPUT_PATH=dist
ENV TZ Asia/Shanghai
WORKDIR /opt
COPY . .
RUN npm config set registry https://registry.npm.taobao.org \
&& npm install \
# Change the output directory to dist
&& sed -i "s#../kafka-manager-web/src/main/resources/templates#$OUTPUT_PATH#g" webpack.config.js \
&& npm run prod-build
FROM nginx:${NGINX_VERSION}
ENV TZ=Asia/Shanghai
COPY --from=builder /opt/dist /opt/dist
COPY --from=builder /opt/web.conf /etc/nginx/conf.d/default.conf
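The final stage copies `web.conf` into nginx as the default server config, but that file is not shown in this diff. A typical single-page-app config it might contain (purely an assumption, not the actual file):

```nginx
server {
    listen       80;
    # Serve the webpack output copied to /opt/dist in the final stage
    root         /opt/dist;
    index        index.html;

    location / {
        # Fall back to index.html so client-side routes resolve
        try_files $uri $uri/ /index.html;
    }
}
```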


@@ -1,13 +1,13 @@
{
"name": "logi-kafka",
"version": "2.6.0",
"version": "2.8.0",
"description": "",
"scripts": {
"prestart": "npm install --save-dev webpack-dev-server",
"start": "webpack serve",
"start": "webpack-dev-server",
"daily-build": "cross-env NODE_ENV=production webpack",
"pre-build": "cross-env NODE_ENV=production webpack",
"prod-build": "cross-env NODE_ENV=production webpack",
"prod-build": "cross-env NODE_OPTIONS=--max-old-space-size=8000 NODE_ENV=production webpack",
"fix-memory": "cross-env LIMIT=4096 increase-memory-limit"
},
"author": "",
@@ -16,8 +16,10 @@
"@hot-loader/react-dom": "^16.8.6",
"@types/events": "^3.0.0",
"@types/lodash.debounce": "^4.0.6",
"@types/node": "18.7.13",
"@types/react": "^16.8.8",
"@types/react-dom": "^16.8.2",
"@types/react-router": "4.4.5",
"@types/react-router-dom": "^4.3.1",
"@types/spark-md5": "^3.0.2",
"@webpack-cli/serve": "^1.6.0",
@@ -52,7 +54,8 @@
"typescript": "^3.3.3333",
"url-loader": "^4.1.1",
"webpack": "^4.29.6",
"webpack-cli": "^4.9.1",
"webpack-cli": "^3.2.3",
"webpack-dev-server": "^3.11.3",
"xlsx": "^0.16.1"
},
"dependencies": {


@@ -8,7 +8,7 @@ export class XFormWrapper extends React.Component<IXFormWrapper> {
public state = {
confirmLoading: false,
formMap: this.props.formMap || [] as any,
formData: this.props.formData || {}
formData: this.props.formData || {},
};
private $formRef: any;
@@ -121,7 +121,8 @@ export class XFormWrapper extends React.Component<IXFormWrapper> {
this.closeModalWrapper();
}).catch((err: any) => {
const { formMap, formData } = wrapper.xFormWrapper;
onSubmitFaild(err, this.$formRef, formData, formMap);
// tslint:disable-next-line:no-unused-expression
onSubmitFaild && onSubmitFaild(err, this.$formRef, formData, formMap);
}).finally(() => {
this.setState({
confirmLoading: false,


@@ -1,4 +1,5 @@
.ant-input-number, .ant-form-item-children .ant-select {
.ant-input-number,
.ant-form-item-children .ant-select {
width: 314px;
}
@@ -9,3 +10,35 @@
margin-right: 16px;
}
}
.x-form {
.ant-form-item-label {
line-height: 32px;
}
.ant-form-item-control {
line-height: 32px;
}
}
.prompt-info {
color: #ccc;
font-size: 12px;
line-height: 20px;
display: block;
&.inline {
margin-left: 16px;
display: inline-block;
font-family: PingFangSC-Regular;
font-size: 12px;
color: #042866;
letter-spacing: 0;
text-align: justify;
.anticon {
margin-right: 6px;
}
}
}


@@ -85,6 +85,10 @@ class XForm extends React.Component<IXFormProps> {
initialValue = false;
}
if (formItem.type === FormItemType.select) {
initialValue = initialValue || undefined;
}
// if (formItem.type === FormItemType.select && formItem.attrs
// && ['tags'].includes(formItem.attrs.mode)) {
// initialValue = formItem.defaultValue ? [formItem.defaultValue] : [];
@@ -105,7 +109,7 @@ class XForm extends React.Component<IXFormProps> {
const { form, formData, formMap, formLayout, layout } = this.props;
const { getFieldDecorator } = form;
return (
<Form layout={layout || 'horizontal'} onSubmit={() => ({})}>
<Form className="x-form" layout={layout || 'horizontal'} onSubmit={() => ({})}>
{formMap.map(formItem => {
const { initialValue, valuePropName } = this.handleFormItem(formItem, formData);
const getFieldValue = {
@@ -131,7 +135,13 @@ class XForm extends React.Component<IXFormProps> {
)}
{formItem.renderExtraElement ? formItem.renderExtraElement() : null}
{/* Add the retention-time prompt text */}
{formItem.attrs?.prompttype ? <span style={{ color: "#cccccc", fontSize: '12px', lineHeight: '20px', display: 'block' }}>{formItem.attrs.prompttype}</span> : null}
{formItem.attrs?.prompttype ?
<span className={`prompt-info ${formItem.attrs?.promptclass || ''}`}>
{formItem.attrs?.prompticon ?
<Icon type="info-circle" theme="twoTone" twoToneColor="#0A70F5" className={formItem.attrs?.prompticomclass} /> : null}
{formItem.attrs.prompttype}
</span>
: null}
</Form.Item>
);
})}


@@ -30,7 +30,7 @@ export class ClusterOverview extends React.Component<IOverview> {
const content = this.props.basicInfo as IMetaData;
const gmtCreate = moment(content.gmtCreate).format(timeFormat);
const clusterContent = [{
value: content.clusterName,
value: `${content.clusterName}${content.haRelation === 0 ? '(备)' : content.haRelation === 1 ? '(主)' : content.haRelation === 2 ? '(主&备)' : ''}`,
label: '集群名称',
},
// {
@@ -50,6 +50,9 @@ export class ClusterOverview extends React.Component<IOverview> {
}, {
value: content.zookeeper,
label: 'Zookeeper',
}, {
value: `${content.mutualBackupClusterName || '-'}${content.haRelation === 0 ? '(主)' : content.haRelation === 1 ? '(备)' : content.haRelation === 2 ? '(主&备)' : ''}`,
label: '互备集群',
}];
return (
<>
@@ -64,18 +67,18 @@ export class ClusterOverview extends React.Component<IOverview> {
</Descriptions.Item>
))}
{clusterInfo.map((item: ILabelValue, index: number) => (
<Descriptions.Item key={index} label={item.label}>
<Tooltip placement="bottomLeft" title={item.value}>
<span className="overview-bootstrap">
<Icon
onClick={() => copyString(item.value)}
type="copy"
className="didi-theme overview-theme"
/>
<i className="overview-boot">{item.value}</i>
</span>
</Tooltip>
</Descriptions.Item>
<Descriptions.Item key={index} label={item.label}>
<Tooltip placement="bottomLeft" title={item.value}>
<span className="overview-bootstrap">
<Icon
onClick={() => copyString(item.value)}
type="copy"
className="didi-theme overview-theme"
/>
<i className="overview-boot">{item.value}</i>
</span>
</Tooltip>
</Descriptions.Item>
))}
</Descriptions>
</PageHeader>


@@ -118,10 +118,10 @@ export class ClusterTopic extends SearchAndFilterContainer {
public renderClusterTopicList() {
const clusterColumns = [
{
title: 'Topic名称',
title: `Topic名称`,
dataIndex: 'topicName',
key: 'topicName',
width: '120px',
width: '140px',
sorter: (a: IClusterTopics, b: IClusterTopics) => a.topicName.charCodeAt(0) - b.topicName.charCodeAt(0),
render: (text: string, record: IClusterTopics) => {
return (
@@ -130,7 +130,7 @@ export class ClusterTopic extends SearchAndFilterContainer {
// tslint:disable-next-line:max-line-length
href={`${urlPrefix}/topic/topic-detail?clusterId=${record.clusterId || ''}&topic=${record.topicName || ''}&isPhysicalClusterId=true&region=${region.currentRegion}`}
>
{text}
{text}{record.haRelation === 0 ? '(备)' : record.haRelation === 1 ? '(主)' : record.haRelation === 2 ? '(主&备)' : ''}
</a>
</Tooltip>);
},
@@ -208,23 +208,27 @@ export class ClusterTopic extends SearchAndFilterContainer {
{
title: '操作',
width: '120px',
render: (value: string, item: IClusterTopics) => (
<>
<a onClick={() => this.getBaseInfo(item)} className="action-button"></a>
<a onClick={() => this.expandPartition(item)} className="action-button"></a>
{/* <a onClick={() => this.expandPartition(item)} className="action-button">删除</a> */}
<Popconfirm
title="确定删除?"
// Ops-control cluster Topic list: delete now goes through the revised business logic
onConfirm={() => this.confirmDetailTopic(item)}
// onConfirm={() => this.deleteTopic(item)}
cancelText="取消"
okText="确认"
>
<a></a>
</Popconfirm>
</>
),
render: (value: string, item: IClusterTopics) => {
if (item.haRelation === 0) return '-';
return (
<>
<a onClick={() => this.getBaseInfo(item)} className="action-button"></a>
<a onClick={() => this.expandPartition(item)} className="action-button"></a>
{/* <a onClick={() => this.expandPartition(item)} className="action-button">删除</a> */}
<Popconfirm
title="确定删除?"
// Ops-control cluster Topic list: delete now goes through the revised business logic
onConfirm={() => this.confirmDetailTopic(item)}
// onConfirm={() => this.deleteTopic(item)}
cancelText="取消"
okText="确认"
>
<a></a>
</Popconfirm>
</>
);
},
},
];
if (users.currentUser.role !== 2) {


@@ -73,6 +73,7 @@ export class LogicalCluster extends SearchAndFilterContainer {
key: 'mode',
render: (value: number) => {
let val = '';
// tslint:disable-next-line:no-unused-expression
cluster.clusterModes && cluster.clusterModes.forEach((ele: any) => {
if (value === ele.code) {
val = ele.message;
@@ -206,6 +207,7 @@ export class LogicalCluster extends SearchAndFilterContainer {
}
public render() {
const clusterModes = cluster.clusterModes;
return (
<div className="k-row">
<ul className="k-tab">


@@ -0,0 +1,381 @@
.switch-style {
&.ant-switch {
min-width: 32px;
height: 20px;
line-height: 18px;
::after {
height: 16px;
width: 16px;
}
}
&.ant-switch-loading-icon,
&.ant-switch::after {
height: 16px;
width: 16px;
}
}
.expanded-table {
width: auto !important;
.ant-table-thead {
// visibility: hidden;
display: none;
}
.ant-table-tbody>tr>td {
background-color: #FAFAFA;
border-bottom: none;
}
}
tr.ant-table-expanded-row td>.expanded-table {
padding: 10px;
// margin: -13px 0px -14px ! important;
border: none;
}
.cluster-tag {
background: #27D687;
border-radius: 2px;
font-family: PingFangSC-Medium;
color: #FFFFFF;
letter-spacing: 0;
text-align: justify;
-webkit-transform: scale(0.5);
margin-right: 0px;
}
.no-padding {
.ant-modal-body {
padding: 0;
.attribute-content {
.tag-gray {
font-family: PingFangSC-Regular;
font-size: 12px;
color: #575757;
text-align: center;
line-height: 18px;
padding: 0 4px;
margin: 3px;
height: 20px;
background: #EEEEEE;
border-radius: 5px;
}
.icon {
zoom: 0.8;
}
.tag-num {
font-family: PingFangSC-Medium;
text-align: right;
line-height: 13px;
margin-left: 6px;
transform: scale(0.8333);
}
}
.attribute-tag {
.ant-popover-inner-content {
padding: 12px;
max-width: 480px;
}
.ant-popover-arrow {
display: none;
}
.ant-popover-placement-bottom,
.ant-popover-placement-bottomLeft,
.ant-popover-placement-bottomRight {
top: 23px !important;
border-radius: 2px;
}
.tag-gray {
font-family: PingFangSC-Regular;
font-size: 12px;
color: #575757;
text-align: center;
line-height: 12px;
padding: 0 4px;
margin: 3px;
height: 20px;
background: #EEEEEE;
border-radius: 5px;
}
}
.col-status {
font-family: PingFangSC-Regular;
font-size: 12px;
letter-spacing: 0;
text-align: justify;
&.green {
.ant-badge-status-text {
color: #2FC25B;
}
}
&.black {
.ant-badge-status-text {
color: #575757;
}
}
&.red {
.ant-badge-status-text {
color: #F5202E;
}
}
}
.ant-alert-message {
font-family: PingFangSC-Regular;
font-size: 12px;
letter-spacing: 0;
text-align: justify;
}
.ant-alert-warning {
border: none;
color: #592D00;
padding: 7px 15px 7px 41px;
background: #FFFAE0;
.ant-alert-message {
color: #592D00;
}
}
.ant-alert-info {
border: none;
padding: 7px 15px 7px 41px;
color: #042866;
background: #EFF8FF;
.ant-alert-message {
color: #042866;
}
}
.ant-alert-icon {
left: 24px;
top: 10px;
}
.switch-warning {
.btn {
position: absolute;
top: 60px;
right: 24px;
height: 22px;
width: 64px;
padding: 0px;
&.disabled {
top: 77px;
}
button {
height: 22px;
width: 64px;
padding: 0px;
}
&.loading {
width: 80px;
button {
height: 22px;
width: 88px;
padding: 0px 0px 0px 12px;
}
}
}
}
.modal-table-content {
padding: 0px 24px 16px;
.ant-table-small {
border: none;
border-top: 1px solid #e8e8e8;
.ant-table-thead {
background: #FAFAFA;
}
}
}
.modal-table-download {
height: 40px;
line-height: 40px;
text-align: center;
border-top: 1px solid #e8e8e8;
}
.ant-form {
padding: 18px 24px 0px;
.ant-col-3 {
width: 9.5%;
}
.ant-form-item-label {
text-align: left;
}
.no-label {
.ant-col-21 {
width: 100%;
}
.transfe-list {
.ant-transfer-list {
height: 359px;
}
}
.ant-transfer-list {
width: 249px;
border: 1px solid #E8E8E8;
border-radius: 8px;
.ant-transfer-list-header-title {
font-family: PingFangSC-Regular;
font-size: 12px;
color: #252525;
letter-spacing: 0;
text-align: right;
}
.ant-transfer-list-body-search-wrapper {
padding: 19px 16px 6px;
input {
height: 27px;
background: #FAFAFA;
border-radius: 8px;
border: none;
}
.ant-transfer-list-search-action {
line-height: 27px;
height: 27px;
top: 19px;
}
}
}
.ant-transfer-list-header {
border-radius: 8px 8px 0px 0px;
padding: 16px;
}
}
.ant-transfer-customize-list .ant-transfer-list-body-customize-wrapper {
padding: 0px;
margin: 0px 16px;
background: #FAFAFA;
border-radius: 8px;
.ant-table-header-column {
font-family: PingFangSC-Regular;
font-size: 12px;
color: #575757;
letter-spacing: 0;
text-align: justify;
}
.ant-table-thead>tr {
border: none;
background: #FAFAFA;
}
.ant-table-tbody>tr>td {
border: none;
background: #FAFAFA;
}
.ant-table-body {
background: #FAFAFA;
}
}
.ant-table-selection-column {
.ant-table-header-column {
opacity: 0;
}
}
}
.log-process {
height: 56px;
background: #FAFAFA;
padding: 6px 8px;
margin-bottom: 15px;
.name {
display: flex;
color: #575757;
justify-content: space-between;
}
}
.log-panel {
padding: 24px;
font-family: PingFangSC-Regular;
font-size: 12px;
.title {
color: #252525;
letter-spacing: 0;
text-align: justify;
margin-bottom: 15px;
.divider {
display: inline-block;
border-left: 2px solid #F38031;
height: 9px;
margin-right: 6px;
}
}
.log-info {
color: #575757;
letter-spacing: 0;
text-align: justify;
margin-bottom: 10px;
.text-num {
font-size: 14px;
}
.warning-num {
color: #F38031;
font-size: 14px;
}
}
.log-table {
margin-bottom: 24px;
.ant-table-small {
border: none;
border-top: 1px solid #e8e8e8;
.ant-table-thead {
background: #FAFAFA;
}
}
}
}
}
}


@@ -1,8 +1,8 @@
import * as React from 'react';
import { Modal, Table, Button, notification, message, Tooltip, Icon, Popconfirm, Alert, Popover } from 'component/antd';
import { Modal, Table, Button, notification, message, Tooltip, Icon, Popconfirm, Alert, Dropdown } from 'component/antd';
import { wrapper } from 'store';
import { observer } from 'mobx-react';
import { IXFormWrapper, IMetaData, IRegister } from 'types/base-type';
import { IXFormWrapper, IMetaData, IRegister, ILabelValue } from 'types/base-type';
import { admin } from 'store/admin';
import { users } from 'store/users';
import { registerCluster, createCluster, pauseMonitoring } from 'lib/api';
@@ -10,11 +10,14 @@ import { SearchAndFilterContainer } from 'container/search-filter';
import { cluster } from 'store/cluster';
import { customPagination } from 'constants/table';
import { urlPrefix } from 'constants/left-menu';
import { indexUrl } from 'constants/strategy'
import { indexUrl } from 'constants/strategy';
import { region } from 'store';
import './index.less';
import Monacoeditor from 'component/editor/monacoEditor';
import { getAdminClusterColumns } from '../config';
import { FormItemType } from 'component/x-form';
import { TopicHaRelationWrapper } from 'container/modal/admin/TopicHaRelation';
import { TopicSwitchWrapper } from 'container/modal/admin/TopicHaSwitch';
import { TopicSwitchLog } from 'container/modal/admin/SwitchTaskLog';
const { confirm } = Modal;
@@ -22,6 +25,10 @@ const { confirm } = Modal;
export class ClusterList extends SearchAndFilterContainer {
public state = {
searchKey: '',
haVisible: false,
switchVisible: false,
logVisible: false,
currentCluster: {} as IMetaData,
};
private xFormModal: IXFormWrapper;
@@ -36,7 +43,26 @@ export class ClusterList extends SearchAndFilterContainer {
);
}
public updateFormModal(value: boolean, metaList: ILabelValue[]) {
const formMap = wrapper.xFormWrapper.formMap;
formMap[1].attrs.prompttype = !value ? '' : metaList.length ? '已设置为高可用集群,请选择所关联的主集群' : '当前暂无可用集群进行关联高可用关系,请先添加集群';
formMap[1].attrs.prompticon = 'true';
formMap[2].invisible = !value;
formMap[2].attrs.disabled = !metaList.length;
formMap[6].rules[0].required = value;
// tslint:disable-next-line:no-unused-expression
wrapper.ref && wrapper.ref.updateFormMap$(formMap, wrapper.xFormWrapper.formData);
}
public createOrRegisterCluster(item: IMetaData) {
const self = this;
const metaList = Array.from(admin.metaList).filter(item => item.haRelation === null).map(item => ({
label: item.clusterName,
value: item.clusterId,
}));
this.xFormModal = {
formMap: [
{
@@ -51,6 +77,38 @@ export class ClusterList extends SearchAndFilterContainer {
disabled: item ? true : false,
},
},
{
key: 'ha',
label: '高可用',
type: FormItemType._switch,
invisible: item ? true : false,
rules: [{
required: false,
}],
attrs: {
className: 'switch-style',
prompttype: '',
prompticon: '',
prompticomclass: '',
promptclass: 'inline',
onChange(value: boolean) {
self.updateFormModal(value, metaList);
},
},
},
{
key: 'activeClusterId',
label: '主集群',
type: FormItemType.select,
options: metaList,
invisible: true,
rules: [{
required: false,
}],
attrs: {
placeholder: '请选择主集群',
},
},
{
key: 'zookeeper',
label: 'zookeeper地址',
@@ -162,17 +220,18 @@ export class ClusterList extends SearchAndFilterContainer {
visible: true,
width: 590,
title: item ? '编辑' : '接入集群',
isWaitting: true,
onSubmit: (value: IRegister) => {
value.idc = region.currentRegion;
if (item) {
value.clusterId = item.clusterId;
registerCluster(value).then(data => {
admin.getMetaData(true);
return registerCluster(value).then(data => {
admin.getHaMetaData();
notification.success({ message: '编辑集群成功' });
});
} else {
createCluster(value).then(data => {
admin.getMetaData(true);
return createCluster(value).then(data => {
admin.getHaMetaData();
notification.success({ message: '接入集群成功' });
});
}
@@ -186,7 +245,7 @@ export class ClusterList extends SearchAndFilterContainer {
const info = item.status === 1 ? '暂停监控' : '开始监控';
const status = item.status === 1 ? 0 : 1;
pauseMonitoring(item.clusterId, status).then(data => {
admin.getMetaData(true);
admin.getHaMetaData();
notification.success({ message: `${info}成功` });
});
}
@@ -198,7 +257,7 @@ export class ClusterList extends SearchAndFilterContainer {
title: <>
<span className="offline_span">
&nbsp;
<a>
<a>
<Tooltip placement="right" title={'若当前集群存在逻辑集群,则无法删除'} >
<Icon type="question-circle" />
</Tooltip>
@@ -216,12 +275,34 @@ export class ClusterList extends SearchAndFilterContainer {
}
admin.deleteCluster(record.clusterId).then(data => {
notification.success({ message: '删除成功' });
admin.getHaMetaData();
});
},
});
});
}
public showDelStandModal = (record: IMetaData) => {
confirm({
// tslint:disable-next-line:jsx-wrap-multiline
title: '删除集群',
// icon: 'none',
content: <>{record.activeTopicCount ? `当前集群含有主topic无法删除` : record.haStatus !== 0 ? `当前集群正在进行主备切换,无法删除!` : `确认删除集群${record.clusterName}吗?`}</>,
width: 500,
okText: '确认',
cancelText: '取消',
onOk() {
if (record.activeTopicCount || record.haStatus !== 0) {
return;
}
admin.deleteCluster(record.clusterId).then(data => {
notification.success({ message: '删除成功' });
admin.getHaMetaData();
});
},
});
}
public deleteMonitorModal = (source: any) => {
const cellStyle = {
overflow: 'hidden',
@@ -275,11 +356,105 @@ export class ClusterList extends SearchAndFilterContainer {
return data;
}
public expandedRowRender = (record: IMetaData) => {
const dataSource: any = record.haClusterVO ? [record.haClusterVO] : [];
const cols = getAdminClusterColumns(false);
const role = users.currentUser.role;
if (!record.haClusterVO) return null;
const haRecord = record.haClusterVO;
const btnsMenu = (
<>
<ul className="dropdown-menu">
<li>
<a onClick={this.createOrRegisterCluster.bind(this, haRecord)} className="action-button">
</a>
</li>
<li>
<Popconfirm
title={`确定${haRecord.status === 1 ? '暂停' : '开始'}${haRecord.clusterName}监控?`}
onConfirm={() => this.pauseMonitor(haRecord)}
cancelText="取消"
okText="确认"
>
<Tooltip placement="left" title="暂停监控将无法正常监控指标信息,建议开启监控">
<a
className="action-button"
>
{haRecord.status === 1 ? '暂停监控' : '开始监控'}
</a>
</Tooltip>
</Popconfirm>
</li>
<li>
<a onClick={this.showDelStandModal.bind(this, haRecord)}>
</a>
</li>
</ul>
</>);
const noAuthMenu = (
<ul className="dropdown-menu">
<Tooltip placement="left" title="该功能只对运维人员开放">
<li><a style={{ color: '#a0a0a0' }} className="action-button"></a></li>
<li><a className="action-button" style={{ color: '#a0a0a0' }}>{record.status === 1 ? '暂停监控' : '开始监控'}</a></li>
<li><a style={{ color: '#a0a0a0' }}></a></li>
</Tooltip>
</ul>
);
const col = {
title: '操作',
width: 270,
render: (value: string, item: IMetaData) => (
<>
<a
onClick={this.openModal.bind(this, 'haVisible', record)}
className="action-button"
>
Topic高可用关联
</a>
{item.haStatus !== 0 ? null : <a onClick={this.openModal.bind(this, 'switchVisible', record)} className="action-button">
Topic主备切换
</a>}
{item.haASSwitchJobId ? <a className="action-button" onClick={this.openModal.bind(this, 'logVisible', record)}>
</a> : null}
<Dropdown
overlay={role === 2 ? btnsMenu : noAuthMenu}
trigger={['click', 'hover']}
placement="bottomLeft"
>
<span className="didi-theme ml-10">
···
</span>
</Dropdown>
</>
),
};
cols.push(col as any);
return (
<Table
className="expanded-table"
rowKey="clusterId"
style={{ width: '500px' }}
columns={cols}
dataSource={dataSource}
pagination={false}
/>
);
}
public getColumns = () => {
const cols = getAdminClusterColumns();
const role = users.currentUser.role;
const col = {
title: '操作',
width: 270,
render: (value: string, item: IMetaData) => (
<>
{
@@ -307,10 +482,10 @@ export class ClusterList extends SearchAndFilterContainer {
</a>
</> : <Tooltip placement="left" title="该功能只对运维人员开放">
<a style={{ color: '#a0a0a0' }} className="action-button"></a>
<a className="action-button" style={{ color: '#a0a0a0' }}>{item.status === 1 ? '暂停监控' : '开始监控'}</a>
<a style={{ color: '#a0a0a0' }}></a>
</Tooltip>
<a style={{ color: '#a0a0a0' }} className="action-button"></a>
<a className="action-button" style={{ color: '#a0a0a0' }}>{item.status === 1 ? '暂停监控' : '开始监控'}</a>
<a style={{ color: '#a0a0a0' }}></a>
</Tooltip>
}
</>
),
@@ -319,6 +494,20 @@ export class ClusterList extends SearchAndFilterContainer {
return cols;
}
public openModal(type: string, record: IMetaData) {
this.setState({
currentCluster: record,
}, () => {
this.handleVisible(type, true);
});
}
public handleVisible(type: string, visible: boolean) {
this.setState({
[type]: visible,
});
}
public renderClusterList() {
const role = users.currentUser.role;
return (
@@ -333,8 +522,8 @@ export class ClusterList extends SearchAndFilterContainer {
role && role === 2 ?
<Button type="primary" onClick={this.createOrRegisterCluster.bind(this, null)}></Button>
:
<Tooltip placement="left" title="该功能只对运维人员开放" trigger='hover'>
<Button disabled type="primary"></Button>
<Tooltip placement="left" title="该功能只对运维人员开放" trigger="hover">
<Button disabled={true} type="primary"></Button>
</Tooltip>
}
</li>
@@ -343,26 +532,63 @@ export class ClusterList extends SearchAndFilterContainer {
<div className="table-wrapper">
<Table
rowKey="key"
expandIcon={({ expanded, onExpand, record }) => (
record.haClusterVO ?
<Icon style={{ fontSize: 10 }} type={expanded ? 'down' : 'right'} onClick={e => onExpand(record, e)} />
: null
)}
loading={admin.loading}
dataSource={this.getData(admin.metaList)}
expandedRowRender={this.expandedRowRender}
dataSource={this.getData(admin.haMetaList)}
columns={this.getColumns()}
pagination={customPagination}
/>
</div>
</div>
{this.state.haVisible && <TopicHaRelationWrapper
handleVisible={(val: boolean) => this.handleVisible('haVisible', val)}
visible={this.state.haVisible}
currentCluster={this.state.currentCluster}
reload={() => admin.getHaMetaData()}
formData={{}}
/>}
{this.state.switchVisible &&
<TopicSwitchWrapper
reload={(jobId: number) => {
admin.getHaMetaData().then((res) => {
const currentRecord = res.find(item => item.clusterId === this.state.currentCluster.clusterId);
currentRecord.haClusterVO.haASSwitchJobId = jobId;
this.openModal('logVisible', currentRecord);
});
}}
handleVisible={(val: boolean) => this.handleVisible('switchVisible', val)}
visible={this.state.switchVisible}
currentCluster={this.state.currentCluster}
formData={{}}
/>
}
{this.state.logVisible &&
<TopicSwitchLog
reload={() => admin.getHaMetaData()}
handleVisible={(val: boolean) => this.handleVisible('logVisible', val)}
visible={this.state.logVisible}
currentCluster={this.state.currentCluster}
/>
}
</>
);
}
public componentDidMount() {
admin.getMetaData(true);
admin.getHaMetaData();
cluster.getClusterModes();
admin.getDataCenter();
}
public render() {
return (
admin.metaList ? <> {this.renderClusterList()} </> : null
admin.haMetaList ? <> {this.renderClusterList()} </> : null
);
}
}

Some files were not shown because too many files have changed in this diff.