diff --git a/README.md b/README.md index f685b311..8f874c9b 100644 --- a/README.md +++ b/README.md @@ -1,20 +1,21 @@ --- -![KnowStreaing](https://user-images.githubusercontent.com/71620349/183546097-71451983-d00e-4ad4-afb0-43fb597c69a9.png) -**一站式`Apache Kafka`管控平台** +![logikm_logo](https://user-images.githubusercontent.com/71620349/125024570-9e07a100-e0b3-11eb-8ebc-22e73e056771.png) -`LogiKM开源至今备受关注,考虑到开源项目应该更贴合Apache Kafka未来发展方向,经项目组慎重考虑,我们将其品牌升级成Know Streaming,新的大版本更新马上就绪,感谢大家一如既往的支持!也欢迎Kafka爱好者一起共建社区` +**一站式`Apache Kafka`集群指标监控与运维管控平台** -阅读本README文档,您可以了解到滴滴Know Streaming的用户群体、产品定位等信息,并通过体验地址,快速体验Kafka集群指标监控与运维管控的全流程。 +`LogiKM开源至今备受关注,考虑到开源项目应该更贴合Apache Kafka未来发展方向,经项目组慎重考虑,预计22年下半年将其品牌升级成Know Streaming,届时项目名称和Logo也将统一更新,感谢大家一如既往的支持,敬请期待!` + +阅读本README文档,您可以了解到滴滴Logi-KafkaManager的用户群体、产品定位等信息,并通过体验地址,快速体验Kafka集群指标监控与运维管控的全流程。 ## 1 产品简介 -滴滴Know Streaming脱胎于滴滴内部多年的Kafka运营实践经验,是面向Kafka用户、Kafka运维人员打造的共享多租户Kafka云平台。专注于Kafka运维管控、监控告警、资源治理等核心场景,经历过大规模集群、海量大数据的考验。内部满意度高达90%的同时,还与多家知名企业达成商业化合作。 +滴滴Logi-KafkaManager脱胎于滴滴内部多年的Kafka运营实践经验,是面向Kafka用户、Kafka运维人员打造的共享多租户Kafka云平台。专注于Kafka运维管控、监控告警、资源治理等核心场景,经历过大规模集群、海量大数据的考验。内部满意度高达90%的同时,还与多家知名企业达成商业化合作。 ### 1.1 快速体验地址 -- 体验地址(新的体验地址马上就来) http://117.51.150.133:8080 账号密码 admin/admin +- 体验地址 http://117.51.150.133:8080 账号密码 admin/admin ### 1.2 体验地图 相比较于同类产品的用户视角单一(大多为管理员视角),滴滴Logi-KafkaManager建立了基于分角色、多场景视角的体验地图。分别是:**用户体验地图、运维体验地图、运营体验地图** @@ -44,7 +45,7 @@ - 高 效 的 问 题 定 位  :监控多项核心指标,统计不同分位数据,提供种类丰富的指标监控报表,帮助用户、运维人员快速高效定位问题 - 便 捷 的 集 群 运 维  :按照Region定义集群资源划分单位,将逻辑集群根据保障等级划分。在方便资源隔离、提高扩展能力的同时,实现对服务端的强管控 - 专 业 的 资 源 治 理  :基于滴滴内部多年运营实践,沉淀资源治理方法,建立健康分体系。针对Topic分区热点、分区不足等高频常见问题,实现资源治理专家化 -- 友 好 的 运 维 生 态  :与Prometheus、Grafana、滴滴夜莺监控告警系统打通,集成指标分析、监控告警、集群部署、集群升级等能力。形成运维生态,凝练专家服务,使运维更高效 +- 友 好 的 运 维 生 态  :与滴滴夜莺监控告警系统打通,集成监控告警、集群部署、集群升级等能力。形成运维生态,凝练专家服务,使运维更高效 ### 1.4 滴滴Logi-KafkaManager架构图 @@ -54,29 +55,29 @@ ## 2 相关文档 ### 2.1 产品文档 -- [滴滴Know Streaming 安装手册](docs/install_guide/install_guide_cn.md) -- [滴滴Know Streaming 接入集群](docs/user_guide/add_cluster/add_cluster.md) -- [滴滴Know Streaming 用户使用手册](docs/user_guide/user_guide_cn.md) -- [滴滴Know Streaming FAQ](docs/user_guide/faq.md) +- [滴滴LogiKM 安装手册](docs/install_guide/install_guide_cn.md) +- [滴滴LogiKM 接入集群](docs/user_guide/add_cluster/add_cluster.md) +- [滴滴LogiKM 用户使用手册](docs/user_guide/user_guide_cn.md) +- [滴滴LogiKM FAQ](docs/user_guide/faq.md) ### 2.2 社区文章 - [滴滴云官网产品介绍](https://www.didiyun.com/production/logi-KafkaManager.html) - [7年沉淀之作--滴滴Logi日志服务套件](https://mp.weixin.qq.com/s/-KQp-Qo3WKEOc9wIR2iFnw) -- [滴滴Know Streaming 一站式Kafka管控平台](https://mp.weixin.qq.com/s/9qSZIkqCnU6u9nLMvOOjIQ) -- [滴滴Know Streaming 开源之路](https://xie.infoq.cn/article/0223091a99e697412073c0d64) -- [滴滴Know Streaming 系列视频教程](https://space.bilibili.com/442531657/channel/seriesdetail?sid=571649) +- [滴滴LogiKM 一站式Kafka监控与管控平台](https://mp.weixin.qq.com/s/9qSZIkqCnU6u9nLMvOOjIQ) +- [滴滴LogiKM 开源之路](https://xie.infoq.cn/article/0223091a99e697412073c0d64) +- [滴滴LogiKM 系列视频教程](https://space.bilibili.com/442531657/channel/seriesdetail?sid=571649) - [kafka最强最全知识图谱](https://www.szzdzhp.com/kafka/) -- [滴滴Know Streaming新用户入门系列文章专栏 --石臻臻](https://www.szzdzhp.com/categories/LogIKM/) -- [kafka实践(十五):滴滴开源Kafka管控平台 Know Streaming研究--A叶子叶来](https://blog.csdn.net/yezonggang/article/details/113106244) -- [基于云原生应用管理平台Rainbond安装 滴滴Know Streaming](https://www.rainbond.com/docs/opensource-app/logikm/?channel=logikm) +- [滴滴LogiKM新用户入门系列文章专栏 --石臻臻](https://www.szzdzhp.com/categories/LogIKM/) +- [kafka实践(十五):滴滴开源Kafka管控平台 
LogiKM研究--A叶子叶来](https://blog.csdn.net/yezonggang/article/details/113106244) +- [基于云原生应用管理平台Rainbond安装 滴滴LogiKM](https://www.rainbond.com/docs/opensource-app/logikm/?channel=logikm) -## 3 Know Streaming开源用户交流群 +## 3 滴滴Logi开源用户交流群 ![image](https://user-images.githubusercontent.com/5287750/111266722-e531d800-8665-11eb-9242-3484da5a3099.png) 想跟各个大佬交流Kafka Es 等中间件/大数据相关技术请 加微信进群。 -微信加群:添加mike_zhangliangPenceXie的微信号备注Know Streaming加群或关注公众号 云原生可观测性 回复 "Know Streaming加群" +微信加群:添加mike_zhangliangdanke-x的微信号备注Logi加群或关注公众号 云原生可观测性 回复 "Logi加群" ## 4 知识星球 @@ -113,9 +114,4 @@ PS:提问请尽量把问题一次性描述清楚,并告知环境信息情况 ## 6 协议 -`Know Streaming`基于`Apache-2.0`协议进行分发和使用,更多信息参见[协议文件](./LICENSE) - -## 7 Star History - -[![Star History Chart](https://api.star-history.com/svg?repos=didi/KnowStreaming&type=Date)](https://star-history.com/#didi/KnowStreaming&Date) - +`LogiKM`基于`Apache-2.0`协议进行分发和使用,更多信息参见[协议文件](./LICENSE) diff --git a/docs/didi/Kafka主备切换流程简介.md b/docs/didi/Kafka主备切换流程简介.md new file mode 100644 index 00000000..279ae242 --- /dev/null +++ b/docs/didi/Kafka主备切换流程简介.md @@ -0,0 +1,97 @@ + +--- + +![kafka-manager-logo](../assets/images/common/logo_name.png) + +**一站式`Apache Kafka`集群指标监控与运维管控平台** + +--- + +# Kafka主备切换流程简介 + +## 1、客户端读写流程 + +在介绍Kafka主备切换流程之前,我们先来了解一下客户端通过我们自研的网关的大致读写流程。 + +![基于网关的生产消费流程](./assets/Kafka基于网关的生产消费流程.png) + + +如上图所示,客户端读写流程大致为: +1. 客户端:向网关请求Topic元信息; +2. 网关:发现客户端使用的KafkaUser是A集群的KafkaUser,因此将Topic元信息请求转发到A集群; +3. A集群:收到网关的Topic元信息,处理并返回给网关; +4. 网关:将集群A返回的结果,返回给客户端; +5. 客户端:从Topic元信息中,获取到Topic实际位于集群A,然后客户端会连接集群A进行生产消费; + +**备注:客户端为Kafka原生客户端,无任何定制。** + +--- + +## 2、主备切换流程 + +介绍完基于网关的客户端读写流程之后,我们再来看一下主备高可用版的Kafka,需要如何进行主备切换。 + +### 2.1、大体流程 + +![Kafka主备切换流程](./assets/Kafka主备切换流程.png) + +图有点多,总结起来就是: +1. 先阻止客户端数据的读写; +2. 等待主备数据同步完成; +3. 调整主备集群数据同步方向; +4. 调整配置,引导客户端到备集群进行读写; + + +### 2.2、详细操作 + +看完大体流程,我们再来看一下实际操作的命令。 + +```bash +1. 阻止用户生产和消费 +bin/kafka-configs.sh --zookeeper ${主集群A的ZK地址} --entity-type users --entity-name ${客户端使用的kafkaUser} --add-config didi.ha.active.cluster=None --alter + + +2. 等待FetcherLag 和 Offset 同步 +无需操作,仅需检查主备Topic的Offset是否一致了。 + + +3. 取消备集群B向主集群A进行同步数据的配置 +bin/kafka-configs.sh --zookeeper ${备集群B的ZK地址} --entity-type ha-topics --entity-name ${Topic名称} --delete-config didi.ha.remote.cluster --alter + + +4. 增加主集群A向备集群B进行同步数据的配置 +bin/kafka-configs.sh --zookeeper ${主集群A的ZK地址} --entity-type ha-topics --entity-name ${Topic名称} --add-config didi.ha.remote.cluster=${备集群B的集群ID} --alter + + +5. 修改主集群A,备集群B,网关中,kafkaUser对应的集群,从而引导请求走向备集群 +bin/kafka-configs.sh --zookeeper ${主集群A的ZK地址} --entity-type users --entity-name ${客户端使用的kafkaUser} --add-config didi.ha.active.cluster=${备集群B的集群ID} --alter + +bin/kafka-configs.sh --zookeeper ${备集群B的ZK地址} --entity-type users --entity-name ${客户端使用的kafkaUser} --add-config didi.ha.active.cluster=${备集群B的集群ID} --alter + +bin/kafka-configs.sh --zookeeper ${网关的ZK地址} --entity-type users --entity-name ${客户端使用的kafkaUser} --add-config didi.ha.active.cluster=${备集群B的集群ID} --alter +``` + +--- + +## 3、FAQ + +**问题一:使用中,有没有什么需要注意的地方?** + +1. 主备切换是按照KafkaUser维度进行切换的,因此建议**不同服务之间,使用不同的KafkaUser**。这不仅有助于主备切换,也有助于做权限管控等。 +2. 
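等待Offset同步时,可结合下方的代码示意,校验主备Topic的Offset是否已同步一致(对应上文切换流程的第2步)。该示例为假设性补充,非平台自带工具:集群地址`active-cluster:9092`、`standby-cluster:9092`与Topic名`demo-topic`均为占位参数;通过Kafka原生AdminClient(2.5+)的`listOffsets`比对主备集群各分区的EndOffset:

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.common.TopicPartition;

import java.util.*;
import java.util.stream.Collectors;

public class CheckHaOffsetSync {
    // 查询指定集群上某Topic各分区的EndOffset
    static Map<TopicPartition, Long> endOffsets(String bootstrapServers, String topic) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        try (AdminClient adminClient = AdminClient.create(props)) {
            int partitionNum = adminClient.describeTopics(Collections.singleton(topic))
                    .all().get().get(topic).partitions().size();
            Map<TopicPartition, OffsetSpec> request = new HashMap<>();
            for (int partitionId = 0; partitionId < partitionNum; partitionId++) {
                request.put(new TopicPartition(topic, partitionId), OffsetSpec.latest());
            }
            return adminClient.listOffsets(request).all().get().entrySet().stream()
                    .collect(Collectors.toMap(Map.Entry::getKey, e -> e.getValue().offset()));
        }
    }

    public static void main(String[] args) throws Exception {
        Map<TopicPartition, Long> activeOffsets = endOffsets("active-cluster:9092", "demo-topic");
        Map<TopicPartition, Long> standbyOffsets = endOffsets("standby-cluster:9092", "demo-topic");
        System.out.println(activeOffsets.equals(standbyOffsets) ? "主备Offset已一致" : "主备Offset仍在同步中");
    }
}
```

2. 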
在建立主备关系的过程中,如果主Topic的数据量比较大,建议分批建立主备关系,避免一次性建立过多Topic的主备关系,导致主集群被同步大量数据而产生压力。
+  
+
+**问题二:消费客户端如果重启之后,会不会导致变成从最旧或者最新的数据开始消费?**
+
+不会。主备集群会相互同步__consumer_offsets这个Topic的数据,因此客户端在主集群的消费进度信息,也会被同步到备集群;客户端在备集群进行消费时,会从上次提交在主集群Topic的位置开始消费。
+  
+
+**问题三:如果是类似Flink这种自己维护消费进度的程序,在主备切换之后,会不会存在数据丢失或者重复消费的情况?**
+
+如果Flink自己管理好了消费进度,那么就不会。主备集群之间的数据同步与单集群内的副本同步机制一致,备集群会将主集群Topic中的Offset等信息都同步过来,因此不会出现丢失或重复消费。
+  
+
+**问题四:可否做到不重启客户端?**
+
+即将开发完成的高可用版Kafka二期将具备该能力,敬请期待。
+  
\ No newline at end of file
diff --git a/docs/didi/assets/Kafka主备切换流程.png b/docs/didi/assets/Kafka主备切换流程.png
new file mode 100644
index 00000000..199b72f4
Binary files /dev/null and b/docs/didi/assets/Kafka主备切换流程.png differ
diff --git a/docs/didi/assets/Kafka基于网关的生产消费流程.png b/docs/didi/assets/Kafka基于网关的生产消费流程.png
new file mode 100644
index 00000000..e293cd5b
Binary files /dev/null and b/docs/didi/assets/Kafka基于网关的生产消费流程.png differ
diff --git a/docs/didi/drawio/Kafka主备切换流程.drawio b/docs/didi/drawio/Kafka主备切换流程.drawio
new file mode 100644
index 00000000..0933f5cd
--- /dev/null
+++ b/docs/didi/drawio/Kafka主备切换流程.drawio
@@ -0,0 +1,367 @@
+<!-- drawio流程图源文件,367行XML内容从略 -->
\ No newline at end of file
diff --git a/docs/didi/drawio/Kafka基于网关的生产消费流程.drawio b/docs/didi/drawio/Kafka基于网关的生产消费流程.drawio
new file mode 100644
index 00000000..24477ff5
--- /dev/null
+++ b/docs/didi/drawio/Kafka基于网关的生产消费流程.drawio
@@ -0,0 +1,95 @@
+<!-- drawio流程图源文件,95行XML内容从略 -->
\ No newline at end of file
diff --git a/kafka-manager-common/pom.xml b/kafka-manager-common/pom.xml
index f784bf8d..d319fb24 100644
--- a/kafka-manager-common/pom.xml
+++ b/kafka-manager-common/pom.xml
@@ -112,5 +112,15 @@
             <artifactId>lombok</artifactId>
             <scope>compile</scope>
         </dependency>
+
+        <dependency>
+            <groupId>com.baomidou</groupId>
+            <artifactId>mybatis-plus-boot-starter</artifactId>
+        </dependency>
+
+        <dependency>
+            <groupId>org.hibernate.validator</groupId>
+            <artifactId>hibernate-validator</artifactId>
+        </dependency>
     </dependencies>
 </project>
\ No newline at end of file
diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/bizenum/JobLogBizTypEnum.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/bizenum/JobLogBizTypEnum.java
new file mode 100644
index 00000000..13491dc9
--- /dev/null
+++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/bizenum/JobLogBizTypEnum.java
@@ -0,0 +1,21 @@
+package com.xiaojukeji.kafka.manager.common.bizenum;
+
+import lombok.Getter;
+
+@Getter
+public enum JobLogBizTypEnum {
+    HA_SWITCH_JOB_LOG(100, "HA-主备切换日志"),
+
+    UNKNOWN(-1, "unknown"),
+
+    ;
+
+    JobLogBizTypEnum(int code, String msg) {
+        this.code = code;
+        this.msg = msg;
+    }
+
+    private final int code;
+
+    private final String msg;
+}
diff --git a/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/common/bizenum/ClusterTaskActionEnum.java
b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/bizenum/TaskActionEnum.java similarity index 74% rename from kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/common/bizenum/ClusterTaskActionEnum.java rename to kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/bizenum/TaskActionEnum.java index a51e2c68..293ddfde 100644 --- a/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/common/bizenum/ClusterTaskActionEnum.java +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/bizenum/TaskActionEnum.java @@ -1,11 +1,11 @@ -package com.xiaojukeji.kafka.manager.kcm.common.bizenum; +package com.xiaojukeji.kafka.manager.common.bizenum; /** * 任务动作 * @author zengqiao * @date 20/4/26 */ -public enum ClusterTaskActionEnum { +public enum TaskActionEnum { UNKNOWN("unknown"), START("start"), @@ -17,13 +17,15 @@ public enum ClusterTaskActionEnum { REDO("redo"), KILL("kill"), + FORCE("force"), + ROLLBACK("rollback"), ; - private String action; + private final String action; - ClusterTaskActionEnum(String action) { + TaskActionEnum(String action) { this.action = action; } diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/bizenum/TaskStatusEnum.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/bizenum/TaskStatusEnum.java index a478eafe..08045ae2 100644 --- a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/bizenum/TaskStatusEnum.java +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/bizenum/TaskStatusEnum.java @@ -1,10 +1,13 @@ package com.xiaojukeji.kafka.manager.common.bizenum; +import lombok.Getter; + /** * 任务状态 * @author zengqiao * @date 2017/6/29. 
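+ * <p>补充说明:结合下方isFinished的实现,code >= FINISHED.getCode()即视为任务已结束;
+ * 本次新增的RUNNING_IN_TIMEOUT(32)仍属运行中状态,不会被判定为已结束。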
*/ +@Getter public enum TaskStatusEnum { UNKNOWN( -1, "未知"), @@ -15,6 +18,7 @@ public enum TaskStatusEnum { RUNNING( 30, "运行中"), KILLING( 31, "杀死中"), + RUNNING_IN_TIMEOUT( 32, "超时运行中"), BLOCKED( 40, "暂停"), @@ -30,31 +34,15 @@ public enum TaskStatusEnum { ; - private Integer code; + private final Integer code; - private String message; + private final String message; TaskStatusEnum(Integer code, String message) { this.code = code; this.message = message; } - public Integer getCode() { - return code; - } - - public String getMessage() { - return message; - } - - @Override - public String toString() { - return "TaskStatusEnum{" + - "code=" + code + - ", message='" + message + '\'' + - '}'; - } - public static Boolean isFinished(Integer code) { return code >= FINISHED.getCode(); } diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/bizenum/TopicAuthorityEnum.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/bizenum/TopicAuthorityEnum.java index 7abafb8c..30f2b048 100644 --- a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/bizenum/TopicAuthorityEnum.java +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/bizenum/TopicAuthorityEnum.java @@ -17,9 +17,9 @@ public enum TopicAuthorityEnum { OWNER(4, "可管理"), ; - private Integer code; + private final Integer code; - private String message; + private final String message; TopicAuthorityEnum(Integer code, String message) { this.code = code; @@ -34,6 +34,16 @@ public enum TopicAuthorityEnum { return message; } + public static String getMsgByCode(Integer code) { + for (TopicAuthorityEnum authorityEnum: TopicAuthorityEnum.values()) { + if (authorityEnum.getCode().equals(code)) { + return authorityEnum.message; + } + } + + return DENY.message; + } + @Override public String toString() { return "TopicAuthorityEnum{" + diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/bizenum/gateway/GatewayConfigKeyEnum.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/bizenum/gateway/GatewayConfigKeyEnum.java index b3403e69..c1b9fdca 100644 --- a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/bizenum/gateway/GatewayConfigKeyEnum.java +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/bizenum/gateway/GatewayConfigKeyEnum.java @@ -10,12 +10,11 @@ public enum GatewayConfigKeyEnum { SD_APP_RATE("SD_APP_RATE", "SD_APP_RATE"), SD_IP_RATE("SD_IP_RATE", "SD_IP_RATE"), SD_SP_RATE("SD_SP_RATE", "SD_SP_RATE"), - ; - private String configType; + private final String configType; - private String configName; + private final String configName; GatewayConfigKeyEnum(String configType, String configName) { this.configType = configType; diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/bizenum/ha/HaRelationTypeEnum.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/bizenum/ha/HaRelationTypeEnum.java new file mode 100644 index 00000000..3e8a1091 --- /dev/null +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/bizenum/ha/HaRelationTypeEnum.java @@ -0,0 +1,27 @@ +package com.xiaojukeji.kafka.manager.common.bizenum.ha; + +import lombok.Getter; + +/** + * @author zengqiao + * @date 20/7/28 + */ +@Getter +public enum HaRelationTypeEnum { + UNKNOWN(-1, "非高可用"), + + STANDBY(0, "备"), + + ACTIVE(1, "主"), + + MUTUAL_BACKUP(2 , "互备"); + + private final int code; + + private final String msg; + + 
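+    /**
+     * 按code反查枚举(补充示例,非本次diff自带:假设DB中以整型haRelation字段存储主备关系,
+     * 可用类似方法恢复为枚举,未匹配时回退UNKNOWN;写法与本次新增的HaJobActionEnum#valueOfStatus一致)
+     */
+    public static HaRelationTypeEnum valueOfCode(int code) {
+        for (HaRelationTypeEnum typeEnum : HaRelationTypeEnum.values()) {
+            if (typeEnum.getCode() == code) {
+                return typeEnum;
+            }
+        }
+        return UNKNOWN;
+    }
+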
HaRelationTypeEnum(int code, String msg) { + this.code = code; + this.msg = msg; + } +} diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/bizenum/ha/HaResTypeEnum.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/bizenum/ha/HaResTypeEnum.java new file mode 100644 index 00000000..409758c2 --- /dev/null +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/bizenum/ha/HaResTypeEnum.java @@ -0,0 +1,25 @@ +package com.xiaojukeji.kafka.manager.common.bizenum.ha; + +import lombok.Getter; + +/** + * @author zengqiao + * @date 20/7/28 + */ +@Getter +public enum HaResTypeEnum { + CLUSTER(0, "Cluster"), + TOPIC(1, "Topic"), + KAFKA_USER(2, "KafkaUser"), + + ; + + private final int code; + + private final String msg; + + HaResTypeEnum(int code, String msg) { + this.code = code; + this.msg = msg; + } +} \ No newline at end of file diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/bizenum/ha/HaStatusEnum.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/bizenum/ha/HaStatusEnum.java new file mode 100644 index 00000000..1ef138f7 --- /dev/null +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/bizenum/ha/HaStatusEnum.java @@ -0,0 +1,75 @@ +package com.xiaojukeji.kafka.manager.common.bizenum.ha; + +/** + * @author zengqiao + * @date 20/7/28 + */ +public enum HaStatusEnum { + UNKNOWN(-1, "未知状态"), + + STABLE(HaStatusEnum.STABLE_CODE, "稳定状态"), + +// SWITCHING(HaStatusEnum.SWITCHING_CODE, "切换中"), + SWITCHING_PREPARE( + HaStatusEnum.SWITCHING_PREPARE_CODE, + "主备切换--源集群[%s]--预处理(阻止当前主Topic写入)"), + + SWITCHING_WAITING_IN_SYNC( + HaStatusEnum.SWITCHING_WAITING_IN_SYNC_CODE, + "主备切换--目标集群[%s]--等待主与备Topic数据同步完成"), + + SWITCHING_CLOSE_OLD_STANDBY_TOPIC_FETCH( + HaStatusEnum.SWITCHING_CLOSE_OLD_STANDBY_TOPIC_FETCH_CODE, + "主备切换--目标集群[%s]--关闭旧的备Topic的副本同步"), + SWITCHING_OPEN_NEW_STANDBY_TOPIC_FETCH( + HaStatusEnum.SWITCHING_OPEN_NEW_STANDBY_TOPIC_FETCH_CODE, + "主备切换--源集群[%s]--开启新的备Topic的副本同步"), + + SWITCHING_CLOSEOUT( + HaStatusEnum.SWITCHING_CLOSEOUT_CODE, + "主备切换--目标集群[%s]--收尾(允许新的主Topic写入)"), + + ; + + public static final int UNKNOWN_CODE = -1; + public static final int STABLE_CODE = 0; + + public static final int SWITCHING_CODE = 100; + public static final int SWITCHING_PREPARE_CODE = 101; + + public static final int SWITCHING_WAITING_IN_SYNC_CODE = 102; + public static final int SWITCHING_CLOSE_OLD_STANDBY_TOPIC_FETCH_CODE = 103; + public static final int SWITCHING_OPEN_NEW_STANDBY_TOPIC_FETCH_CODE = 104; + + public static final int SWITCHING_CLOSEOUT_CODE = 105; + + + private final int code; + + private final String msg; + + public int getCode() { + return code; + } + + public String getMsg(String clusterName) { + if (this.code == UNKNOWN_CODE || this.code == STABLE_CODE) { + return this.msg; + } + return String.format(msg, clusterName); + } + + HaStatusEnum(int code, String msg) { + this.code = code; + this.msg = msg; + } + + public static Integer calProgress(Integer status) { + if (status == null || status == HaStatusEnum.STABLE_CODE || status == UNKNOWN_CODE) { + return 100; + } + + // 最小进度为 1% + return Math.max(1, (status - 101) * 100 / 5); + } +} diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/bizenum/ha/job/HaJobActionEnum.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/bizenum/ha/job/HaJobActionEnum.java new file mode 100644 index 00000000..e5da7391 --- /dev/null +++ 
b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/bizenum/ha/job/HaJobActionEnum.java @@ -0,0 +1,44 @@ +package com.xiaojukeji.kafka.manager.common.bizenum.ha.job; + +public enum HaJobActionEnum { + /** + * + */ + START(1,"start"), + + STOP(2, "stop"), + + CANCEL(3,"cancel"), + + CONTINUE(4,"continue"), + + UNKNOWN(-1, "unknown"); + + HaJobActionEnum(int status, String value) { + this.status = status; + this.value = value; + } + + private final int status; + + private final String value; + + public int getStatus() { + return status; + } + + public String getValue() { + return value; + } + + public static HaJobActionEnum valueOfStatus(int status) { + for (HaJobActionEnum statusEnum : HaJobActionEnum.values()) { + if (status == statusEnum.getStatus()) { + return statusEnum; + } + } + + return HaJobActionEnum.UNKNOWN; + } + +} diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/bizenum/ha/job/HaJobStatusEnum.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/bizenum/ha/job/HaJobStatusEnum.java new file mode 100644 index 00000000..d19e0213 --- /dev/null +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/bizenum/ha/job/HaJobStatusEnum.java @@ -0,0 +1,75 @@ +package com.xiaojukeji.kafka.manager.common.bizenum.ha.job; + +import com.xiaojukeji.kafka.manager.common.bizenum.TaskStatusEnum; + +public enum HaJobStatusEnum { + /**执行中*/ + RUNNING(TaskStatusEnum.RUNNING), + RUNNING_IN_TIMEOUT(TaskStatusEnum.RUNNING_IN_TIMEOUT), + + SUCCESS(TaskStatusEnum.SUCCEED), + + FAILED(TaskStatusEnum.FAILED), + + UNKNOWN(TaskStatusEnum.UNKNOWN); + + HaJobStatusEnum(TaskStatusEnum taskStatusEnum) { + this.status = taskStatusEnum.getCode(); + this.value = taskStatusEnum.getMessage(); + } + + private final int status; + + private final String value; + + public int getStatus() { + return status; + } + + public String getValue() { + return value; + } + + public static HaJobStatusEnum valueOfStatus(int status) { + for (HaJobStatusEnum statusEnum : HaJobStatusEnum.values()) { + if (status == statusEnum.getStatus()) { + return statusEnum; + } + } + + return HaJobStatusEnum.UNKNOWN; + } + + public static HaJobStatusEnum getStatusBySubStatus(int totalJobNum, + int successJobNu, + int failedJobNu, + int runningJobNu, + int runningInTimeoutJobNu, + int unknownJobNu) { + if (unknownJobNu > 0) { + return UNKNOWN; + } + + if((failedJobNu + runningJobNu + runningInTimeoutJobNu + unknownJobNu) == 0) { + return SUCCESS; + } + + if((runningJobNu + runningInTimeoutJobNu + unknownJobNu) == 0 && failedJobNu > 0) { + return FAILED; + } + + if (runningInTimeoutJobNu > 0) { + return RUNNING_IN_TIMEOUT; + } + + return RUNNING; + } + + public static boolean isRunning(Integer jobStatus) { + return jobStatus != null && (RUNNING.status == jobStatus || RUNNING_IN_TIMEOUT.status == jobStatus); + } + + public static boolean isFinished(Integer jobStatus) { + return jobStatus != null && (SUCCESS.status == jobStatus || FAILED.status == jobStatus); + } +} diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/constant/ConfigConstant.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/constant/ConfigConstant.java index 361c841f..17f20223 100644 --- a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/constant/ConfigConstant.java +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/constant/ConfigConstant.java @@ -31,6 +31,8 @@ public class 
ConfigConstant { public static final String KAFKA_CLUSTER_DO_CONFIG_KEY = "KAFKA_CLUSTER_DO_CONFIG"; + public static final String HA_SWITCH_JOB_TIMEOUT_UNIT_SEC_CONFIG_PREFIX = "HA_SWITCH_JOB_TIMEOUT_UNIT_SEC_CONFIG_CLUSTER"; + private ConfigConstant() { } } diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/constant/KafkaConstant.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/constant/KafkaConstant.java index 463e9b1a..b1f15bee 100644 --- a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/constant/KafkaConstant.java +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/constant/KafkaConstant.java @@ -21,6 +21,32 @@ public class KafkaConstant { public static final String INTERNAL_KEY = "INTERNAL"; + public static final String BOOTSTRAP_SERVERS = "bootstrap.servers"; + + + /** + * HA + */ + + public static final String DIDI_KAFKA_ENABLE = "didi.kafka.enable"; + + public static final String DIDI_HA_REMOTE_CLUSTER = "didi.ha.remote.cluster"; + + // TODO 平台来管理配置,不需要底层来管理,因此可以删除该配置 + public static final String DIDI_HA_SYNC_TOPIC_CONFIGS_ENABLED = "didi.ha.sync.topic.configs.enabled"; + + public static final String DIDI_HA_ACTIVE_CLUSTER = "didi.ha.active.cluster"; + + public static final String DIDI_HA_REMOTE_TOPIC = "didi.ha.remote.topic"; + + public static final String SECURITY_PROTOCOL = "security.protocol"; + + public static final String SASL_MECHANISM = "sasl.mechanism"; + + public static final String SASL_JAAS_CONFIG = "sasl.jaas.config"; + + public static final String NONE = "None"; + private KafkaConstant() { } } \ No newline at end of file diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/constant/MsgConstant.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/constant/MsgConstant.java new file mode 100644 index 00000000..d1c9a1d2 --- /dev/null +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/constant/MsgConstant.java @@ -0,0 +1,96 @@ +package com.xiaojukeji.kafka.manager.common.constant; + +/** + * 信息模版Constant + * @author zengqiao + * @date 22/03/03 + */ +public class MsgConstant { + private MsgConstant() { + } + + /**************************************************** Cluster ****************************************************/ + + public static String getClusterBizStr(Long clusterPhyId, String clusterName){ + return String.format("集群ID:[%d] 集群名称:[%s]", clusterPhyId, clusterName); + } + + public static String getClusterPhyNotExist(Long clusterPhyId) { + return String.format("集群ID:[%d] 不存在或者未加载", clusterPhyId); + } + + + + /**************************************************** Broker ****************************************************/ + + public static String getBrokerNotExist(Long clusterPhyId, Integer brokerId) { + return String.format("集群ID:[%d] brokerId:[%d] 不存在或未存活", clusterPhyId, brokerId); + } + + public static String getBrokerBizStr(Long clusterPhyId, Integer brokerId) { + return String.format("集群ID:[%d] brokerId:[%d]", clusterPhyId, brokerId); + } + + + /**************************************************** Topic ****************************************************/ + + public static String getTopicNotExist(Long clusterPhyId, String topicName) { + return String.format("集群ID:[%d] Topic名称:[%s] 不存在", clusterPhyId, topicName); + } + + public static String getTopicBizStr(Long clusterPhyId, String topicName) { + return String.format("集群ID:[%d] Topic名称:[%s]", clusterPhyId, topicName); + } 
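+
+    // 使用示意(补充注释,非本次diff自带;参数为占位值):
+    // MsgConstant.getTopicBizStr(1L, "demo-topic")   返回 "集群ID:[1] Topic名称:[demo-topic]"
+    // MsgConstant.getTopicNotExist(1L, "demo-topic") 返回 "集群ID:[1] Topic名称:[demo-topic] 不存在"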
+ + public static String getTopicExtend(Long existPartitionNum, Long totalPartitionNum,String expandParam){ + return String.format("新增分区, 从:[%d] 增加到:[%d], 详细参数信息:[%s]", existPartitionNum,totalPartitionNum,expandParam); + } + + public static String getClusterTopicKey(Long clusterPhyId, String topicName) { + return String.format("%d@%s", clusterPhyId, topicName); + } + + /**************************************************** Partition ****************************************************/ + + public static String getPartitionNotExist(Long clusterPhyId, String topicName) { + return String.format("集群ID:[%d] Topic名称:[%s] 存在非法的分区ID", clusterPhyId, topicName); + } + + public static String getPartitionNotExist(Long clusterPhyId, String topicName, Integer partitionId) { + return String.format("集群ID:[%d] Topic名称:[%s] 分区Id:[%d] 不存在", clusterPhyId, topicName, partitionId); + } + + /**************************************************** KafkaUser ****************************************************/ + + public static String getKafkaUserBizStr(Long clusterPhyId, String kafkaUser) { + return String.format("集群ID:[%d] kafkaUser:[%s]", clusterPhyId, kafkaUser); + } + + public static String getKafkaUserNotExist(Long clusterPhyId, String kafkaUser) { + return String.format("集群ID:[%d] kafkaUser:[%s] 不存在", clusterPhyId, kafkaUser); + } + + public static String getKafkaUserDuplicate(Long clusterPhyId, String kafkaUser) { + return String.format("集群ID:[%d] kafkaUser:[%s] 已存在", clusterPhyId, kafkaUser); + } + + /**************************************************** ha-Cluster ****************************************************/ + + public static String getActiveClusterDuplicate(Long clusterPhyId, String clusterName) { + return String.format("集群ID:[%d] 主集群:[%s] 已存在", clusterPhyId, clusterName); + } + + /**************************************************** reassign ****************************************************/ + + public static String getReassignJobBizStr(Long jobId, Long clusterPhyId) { + return String.format("任务Id:[%d] 集群ID:[%s]", jobId, clusterPhyId); + } + + public static String getJobIdCanNotNull() { + return "jobId不允许为空"; + } + + public static String getJobNotExist(Long jobId) { + return String.format("jobId:[%d] 不存在", jobId); + } +} diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/BaseResult.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/BaseResult.java new file mode 100644 index 00000000..05eb6440 --- /dev/null +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/BaseResult.java @@ -0,0 +1,28 @@ +package com.xiaojukeji.kafka.manager.common.entity; + +import com.xiaojukeji.kafka.manager.common.constant.Constant; +import io.swagger.annotations.ApiModelProperty; +import lombok.Data; +import lombok.ToString; + +import java.io.Serializable; + +@Data +@ToString +public class BaseResult implements Serializable { + private static final long serialVersionUID = -5771016784021901099L; + + @ApiModelProperty(value = "信息", example = "成功") + protected String message; + + @ApiModelProperty(value = "状态", example = "0") + protected int code; + + public boolean successful() { + return !this.failed(); + } + + public boolean failed() { + return !Constant.SUCCESS.equals(code); + } +} diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/Result.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/Result.java index 471a3d07..56416372 100644 --- 
a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/Result.java +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/Result.java @@ -1,21 +1,23 @@ package com.xiaojukeji.kafka.manager.common.entity; -import com.alibaba.fastjson.JSON; -import com.xiaojukeji.kafka.manager.common.constant.Constant; - -import java.io.Serializable; +import io.swagger.annotations.ApiModel; +import io.swagger.annotations.ApiModelProperty; +import lombok.Data; /** * @author huangyiminghappy@163.com * @date 2019-07-08 */ -public class Result implements Serializable { - private static final long serialVersionUID = -2772975319944108658L; +@Data +@ApiModel(description = "调用结果") +public class Result extends BaseResult { + @ApiModelProperty(value = "数据") + protected T data; - private T data; - private String message; - private String tips; - private int code; + public Result() { + this.code = ResultStatus.SUCCESS.getCode(); + this.message = ResultStatus.SUCCESS.getMessage(); + } public Result(T data) { this.data = data; @@ -23,10 +25,6 @@ public class Result implements Serializable { this.message = ResultStatus.SUCCESS.getMessage(); } - public Result() { - this(null); - } - public Result(Integer code, String message) { this.message = message; this.code = code; @@ -38,48 +36,31 @@ public class Result implements Serializable { this.code = code; } - public T getData() - { - return (T)this.data; + public static Result build(boolean succ) { + if (succ) { + return buildSuc(); + } + return buildFail(); } - public void setData(T data) - { - this.data = data; + public static Result buildFail() { + Result result = new Result<>(); + result.setCode(ResultStatus.FAIL.getCode()); + result.setMessage(ResultStatus.FAIL.getMessage()); + return result; } - public String getMessage() - { - return this.message; - } - - public void setMessage(String message) - { - this.message = message; - } - - public String getTips() { - return tips; - } - - public void setTips(String tips) { - this.tips = tips; - } - - public int getCode() - { - return this.code; - } - - public void setCode(int code) - { - this.code = code; - } - - @Override - public String toString() - { - return JSON.toJSONString(this); + public static Result build(boolean succ, T data) { + Result result = new Result<>(); + if (succ) { + result.setCode(ResultStatus.SUCCESS.getCode()); + result.setMessage(ResultStatus.SUCCESS.getMessage()); + result.setData(data); + } else { + result.setCode(ResultStatus.FAIL.getCode()); + result.setMessage(ResultStatus.FAIL.getMessage()); + } + return result; } public static Result buildSuc() { @@ -97,14 +78,6 @@ public class Result implements Serializable { return result; } - public static Result buildGatewayFailure(String message) { - Result result = new Result<>(); - result.setCode(ResultStatus.GATEWAY_INVALID_REQUEST.getCode()); - result.setMessage(message); - result.setData(null); - return result; - } - public static Result buildFailure(String message) { Result result = new Result<>(); result.setCode(ResultStatus.FAIL.getCode()); @@ -113,10 +86,34 @@ public class Result implements Serializable { return result; } - public static Result buildFrom(ResultStatus resultStatus) { + public static Result buildFailure(String message, T data) { Result result = new Result<>(); - result.setCode(resultStatus.getCode()); - result.setMessage(resultStatus.getMessage()); + result.setCode(ResultStatus.FAIL.getCode()); + result.setMessage(message); + result.setData(data); + return result; + } + + public 
static Result buildFailure(ResultStatus rs) { + Result result = new Result<>(); + result.setCode(rs.getCode()); + result.setMessage(rs.getMessage()); + result.setData(null); + return result; + } + + public static Result buildGatewayFailure(String message) { + Result result = new Result<>(); + result.setCode(ResultStatus.GATEWAY_INVALID_REQUEST.getCode()); + result.setMessage(message); + result.setData(null); + return result; + } + + public static Result buildFrom(ResultStatus rs) { + Result result = new Result<>(); + result.setCode(rs.getCode()); + result.setMessage(rs.getMessage()); return result; } @@ -128,8 +125,46 @@ public class Result implements Serializable { return result; } - public boolean failed() { - return !Constant.SUCCESS.equals(code); + public static Result buildFromRSAndMsg(ResultStatus resultStatus, String message) { + Result result = new Result<>(); + result.setCode(resultStatus.getCode()); + result.setMessage(message); + result.setData(null); + return result; } + public static Result buildFromRSAndData(ResultStatus rs, T data) { + Result result = new Result<>(); + result.setCode(rs.getCode()); + result.setMessage(rs.getMessage()); + result.setData(data); + return result; + } + + public static Result buildFromIgnoreData(Result anotherResult) { + Result result = new Result<>(); + result.setCode(anotherResult.getCode()); + result.setMessage(anotherResult.getMessage()); + return result; + } + + public static Result buildParamIllegal(String msg) { + Result result = new Result<>(); + result.setCode(ResultStatus.PARAM_ILLEGAL.getCode()); + result.setMessage(ResultStatus.PARAM_ILLEGAL.getMessage() + ":" + msg + ",请检查后再提交!"); + return result; + } + + public boolean hasData(){ + return !failed() && this.data != null; + } + + @Override + public String toString() { + return "Result{" + + "message='" + message + '\'' + + ", code=" + code + + ", data=" + data + + '}'; + } } diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ResultStatus.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ResultStatus.java index 0f8aebd6..d385cf0c 100644 --- a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ResultStatus.java +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ResultStatus.java @@ -23,6 +23,8 @@ public enum ResultStatus { API_CALL_EXCEED_LIMIT(1403, "api call exceed limit"), USER_WITHOUT_AUTHORITY(1404, "user without authority"), CHANGE_ZOOKEEPER_FORBIDDEN(1405, "change zookeeper forbidden"), + HA_CLUSTER_DELETE_FORBIDDEN(1409, "先删除主topic,才能删除该集群"), + HA_TOPIC_DELETE_FORBIDDEN(1410, "先解除高可用关系,才能删除该topic"), APP_OFFLINE_FORBIDDEN(1406, "先下线topic,才能下线应用~"), @@ -76,6 +78,8 @@ public enum ResultStatus { QUOTA_NOT_EXIST(7113, "quota not exist, please check clusterId, topicName and appId"), CONSUMER_GROUP_NOT_EXIST(7114, "consumerGroup not exist"), TOPIC_BIZ_DATA_NOT_EXIST(7115, "topic biz data not exist, please sync topic to db"), + SD_ZK_NOT_EXIST(7116, "SD_ZK未配置"), + // 资源已存在 RESOURCE_ALREADY_EXISTED(7200, "资源已经存在"), @@ -88,6 +92,7 @@ public enum ResultStatus { RESOURCE_ALREADY_USED(7400, "资源早已被使用"), + /** * 因为外部系统的问题, 操作时引起的错误, [8000, 9000) * ------------------------------------------------------------------------------------------ @@ -98,6 +103,7 @@ public enum ResultStatus { ZOOKEEPER_READ_FAILED(8021, "zookeeper read failed"), ZOOKEEPER_WRITE_FAILED(8022, "zookeeper write failed"), ZOOKEEPER_DELETE_FAILED(8023, "zookeeper delete failed"), + 
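+    // 8021~8023分别对应ZK的读/写/删除失败;本次新增8024,用作更通用的ZK操作失败状态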
ZOOKEEPER_OPERATE_FAILED(8024, "zookeeper operate failed"), // 调用集群任务里面的agent失败 CALL_CLUSTER_TASK_AGENT_FAILED(8030, " call cluster task agent failed"), diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ao/ClusterDetailDTO.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ao/ClusterDetailDTO.java index 2e903485..6fb8ad24 100644 --- a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ao/ClusterDetailDTO.java +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ao/ClusterDetailDTO.java @@ -1,11 +1,14 @@ package com.xiaojukeji.kafka.manager.common.entity.ao; +import lombok.Data; + import java.util.Date; /** * @author zengqiao * @date 20/4/23 */ +@Data public class ClusterDetailDTO { private Long clusterId; @@ -41,141 +44,9 @@ public class ClusterDetailDTO { private Integer regionNum; - public Long getClusterId() { - return clusterId; - } + private Integer haRelation; - public void setClusterId(Long clusterId) { - this.clusterId = clusterId; - } - - public String getClusterName() { - return clusterName; - } - - public void setClusterName(String clusterName) { - this.clusterName = clusterName; - } - - public String getZookeeper() { - return zookeeper; - } - - public void setZookeeper(String zookeeper) { - this.zookeeper = zookeeper; - } - - public String getBootstrapServers() { - return bootstrapServers; - } - - public void setBootstrapServers(String bootstrapServers) { - this.bootstrapServers = bootstrapServers; - } - - public String getKafkaVersion() { - return kafkaVersion; - } - - public void setKafkaVersion(String kafkaVersion) { - this.kafkaVersion = kafkaVersion; - } - - public String getIdc() { - return idc; - } - - public void setIdc(String idc) { - this.idc = idc; - } - - public Integer getMode() { - return mode; - } - - public void setMode(Integer mode) { - this.mode = mode; - } - - public String getSecurityProperties() { - return securityProperties; - } - - public void setSecurityProperties(String securityProperties) { - this.securityProperties = securityProperties; - } - - public String getJmxProperties() { - return jmxProperties; - } - - public void setJmxProperties(String jmxProperties) { - this.jmxProperties = jmxProperties; - } - - public Integer getStatus() { - return status; - } - - public void setStatus(Integer status) { - this.status = status; - } - - public Date getGmtCreate() { - return gmtCreate; - } - - public void setGmtCreate(Date gmtCreate) { - this.gmtCreate = gmtCreate; - } - - public Date getGmtModify() { - return gmtModify; - } - - public void setGmtModify(Date gmtModify) { - this.gmtModify = gmtModify; - } - - public Integer getBrokerNum() { - return brokerNum; - } - - public void setBrokerNum(Integer brokerNum) { - this.brokerNum = brokerNum; - } - - public Integer getTopicNum() { - return topicNum; - } - - public void setTopicNum(Integer topicNum) { - this.topicNum = topicNum; - } - - public Integer getConsumerGroupNum() { - return consumerGroupNum; - } - - public void setConsumerGroupNum(Integer consumerGroupNum) { - this.consumerGroupNum = consumerGroupNum; - } - - public Integer getControllerId() { - return controllerId; - } - - public void setControllerId(Integer controllerId) { - this.controllerId = controllerId; - } - - public Integer getRegionNum() { - return regionNum; - } - - public void setRegionNum(Integer regionNum) { - this.regionNum = regionNum; - } + private String mutualBackupClusterName; @Override public String 
toString() { @@ -197,6 +68,8 @@ public class ClusterDetailDTO { ", consumerGroupNum=" + consumerGroupNum + ", controllerId=" + controllerId + ", regionNum=" + regionNum + + ", haRelation=" + haRelation + + ", mutualBackupClusterName='" + mutualBackupClusterName + '\'' + '}'; } } \ No newline at end of file diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ao/RdTopicBasic.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ao/RdTopicBasic.java index bf57a800..97367cfc 100644 --- a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ao/RdTopicBasic.java +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ao/RdTopicBasic.java @@ -1,5 +1,7 @@ package com.xiaojukeji.kafka.manager.common.entity.ao; +import lombok.Data; + import java.util.List; import java.util.Properties; @@ -7,6 +9,7 @@ import java.util.Properties; * @author zengqiao * @date 20/6/10 */ +@Data public class RdTopicBasic { private Long clusterId; @@ -26,77 +29,7 @@ public class RdTopicBasic { private List regionNameList; - public Long getClusterId() { - return clusterId; - } - - public void setClusterId(Long clusterId) { - this.clusterId = clusterId; - } - - public String getClusterName() { - return clusterName; - } - - public void setClusterName(String clusterName) { - this.clusterName = clusterName; - } - - public String getTopicName() { - return topicName; - } - - public void setTopicName(String topicName) { - this.topicName = topicName; - } - - public Long getRetentionTime() { - return retentionTime; - } - - public void setRetentionTime(Long retentionTime) { - this.retentionTime = retentionTime; - } - - public String getAppId() { - return appId; - } - - public void setAppId(String appId) { - this.appId = appId; - } - - public String getAppName() { - return appName; - } - - public void setAppName(String appName) { - this.appName = appName; - } - - public Properties getProperties() { - return properties; - } - - public void setProperties(Properties properties) { - this.properties = properties; - } - - public String getDescription() { - return description; - } - - public void setDescription(String description) { - this.description = description; - } - - public List getRegionNameList() { - return regionNameList; - } - - public void setRegionNameList(List regionNameList) { - this.regionNameList = regionNameList; - } + private Integer haRelation; @Override public String toString() { @@ -109,7 +42,8 @@ public class RdTopicBasic { ", appName='" + appName + '\'' + ", properties=" + properties + ", description='" + description + '\'' + - ", regionNameList='" + regionNameList + '\'' + + ", regionNameList=" + regionNameList + + ", haRelation=" + haRelation + '}'; } } \ No newline at end of file diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ao/ha/HaSwitchTopic.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ao/ha/HaSwitchTopic.java new file mode 100644 index 00000000..b1f63dfa --- /dev/null +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ao/ha/HaSwitchTopic.java @@ -0,0 +1,54 @@ +package com.xiaojukeji.kafka.manager.common.entity.ao.ha; + +import com.xiaojukeji.kafka.manager.common.bizenum.ha.HaStatusEnum; +import lombok.Data; + +import java.util.HashMap; +import java.util.Map; + +@Data +public class HaSwitchTopic { + /** + * 是否完成 + */ + private boolean finished; + + /** + * 每一个Topic的状态 + */ + 
private Map<String, Integer> activeTopicSwitchStatusMap;
+
+    public HaSwitchTopic(boolean finished) {
+        this.finished = finished;
+        this.activeTopicSwitchStatusMap = new HashMap<>();
+    }
+
+    public void addHaSwitchTopic(HaSwitchTopic haSwitchTopic) {
+        this.finished &= haSwitchTopic.finished;
+    }
+
+    public boolean isFinished() {
+        return this.finished;
+    }
+
+    public void addActiveTopicStatus(String activeTopicName, Integer status) {
+        activeTopicSwitchStatusMap.put(activeTopicName, status);
+    }
+
+    public boolean isActiveTopicSwitchFinished(String activeTopicName) {
+        Integer status = activeTopicSwitchStatusMap.get(activeTopicName);
+        if (status == null) {
+            return false;
+        }
+
+        return status.equals(HaStatusEnum.STABLE.getCode());
+    }
+
+    @Override
+    public String toString() {
+        return "HaSwitchTopic{" +
+                "finished=" + finished +
+                ", activeTopicSwitchStatusMap=" + activeTopicSwitchStatusMap +
+                '}';
+    }
+}
diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ao/ha/job/HaJobDetail.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ao/ha/job/HaJobDetail.java
new file mode 100644
index 00000000..5dedd3ce
--- /dev/null
+++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ao/ha/job/HaJobDetail.java
@@ -0,0 +1,28 @@
+package com.xiaojukeji.kafka.manager.common.entity.ao.ha.job;
+
+import io.swagger.annotations.ApiModel;
+import io.swagger.annotations.ApiModelProperty;
+import lombok.AllArgsConstructor;
+import lombok.Data;
+import lombok.NoArgsConstructor;
+
+@Data
+@NoArgsConstructor
+@AllArgsConstructor
+@ApiModel(description = "Job详情")
+public class HaJobDetail {
+    @ApiModelProperty(value = "Topic名称")
+    private String topicName;
+
+    @ApiModelProperty(value="主集群ID")
+    private Long activeClusterPhyId;
+
+    @ApiModelProperty(value="备集群ID")
+    private Long standbyClusterPhyId;
+
+    @ApiModelProperty(value="Lag和")
+    private Long sumLag;
+
+    @ApiModelProperty(value="状态")
+    private Integer status;
+}
diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ao/ha/job/HaJobLog.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ao/ha/job/HaJobLog.java
new file mode 100644
index 00000000..dbed3369
--- /dev/null
+++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ao/ha/job/HaJobLog.java
@@ -0,0 +1,16 @@
+package com.xiaojukeji.kafka.manager.common.entity.ao.ha.job;
+
+import io.swagger.annotations.ApiModel;
+import io.swagger.annotations.ApiModelProperty;
+import lombok.AllArgsConstructor;
+import lombok.Data;
+import lombok.NoArgsConstructor;
+
+@Data
+@NoArgsConstructor
+@AllArgsConstructor
+@ApiModel(description = "Job日志")
+public class HaJobLog {
+    @ApiModelProperty(value = "日志信息")
+    private String log;
+}
diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ao/ha/job/HaJobState.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ao/ha/job/HaJobState.java
new file mode 100644
index 00000000..ce8dd2b9
--- /dev/null
+++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ao/ha/job/HaJobState.java
@@ -0,0 +1,70 @@
+package com.xiaojukeji.kafka.manager.common.entity.ao.ha.job;
+
+import com.xiaojukeji.kafka.manager.common.bizenum.ha.job.HaJobStatusEnum;
+import lombok.Data;
+import lombok.NoArgsConstructor;
+
+import java.util.List;
+
+@Data
+@NoArgsConstructor
+public class HaJobState {
+
+    /**
+     * @see
com.xiaojukeji.kafka.manager.common.bizenum.ha.job.HaJobStatusEnum + */ + private int status; + + private int total; + + private int success; + + private int failed; + + private int doing; + private int doingInTimeout; + + private int unknown; + + private Integer progress; + + /** + * 按照状态,直接进行聚合 + */ + public HaJobState(List jobStatusList, Integer progress) { + this.total = jobStatusList.size(); + this.success = 0; + this.failed = 0; + this.doing = 0; + this.doingInTimeout = 0; + this.unknown = 0; + for (Integer jobStatus: jobStatusList) { + if (HaJobStatusEnum.SUCCESS.getStatus() == jobStatus) { + success += 1; + } else if (HaJobStatusEnum.FAILED.getStatus() == jobStatus) { + failed += 1; + } else if (HaJobStatusEnum.RUNNING.getStatus() == jobStatus) { + doing += 1; + } else if (HaJobStatusEnum.RUNNING_IN_TIMEOUT.getStatus() == jobStatus) { + doingInTimeout += 1; + } else { + unknown += 1; + } + } + + this.status = HaJobStatusEnum.getStatusBySubStatus(this.total, this.success, this.failed, this.doing, this.doingInTimeout, this.unknown).getStatus(); + + this.progress = progress; + } + + public HaJobState(Integer doingSize, Integer progress) { + this.total = doingSize; + this.success = 0; + this.failed = 0; + this.doing = doingSize; + this.doingInTimeout = 0; + this.unknown = 0; + + this.progress = progress; + } +} diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ao/ha/job/HaSubJobExtendData.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ao/ha/job/HaSubJobExtendData.java new file mode 100644 index 00000000..dbb82265 --- /dev/null +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ao/ha/job/HaSubJobExtendData.java @@ -0,0 +1,12 @@ +package com.xiaojukeji.kafka.manager.common.entity.ao.ha.job; + +import lombok.AllArgsConstructor; +import lombok.Data; +import lombok.NoArgsConstructor; + +@Data +@NoArgsConstructor +@AllArgsConstructor +public class HaSubJobExtendData { + private Long sumLag; +} diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ao/topic/TopicBasicDTO.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ao/topic/TopicBasicDTO.java index 9150569b..e1d0124d 100644 --- a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ao/topic/TopicBasicDTO.java +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ao/topic/TopicBasicDTO.java @@ -1,11 +1,14 @@ package com.xiaojukeji.kafka.manager.common.entity.ao.topic; +import lombok.Data; + import java.util.List; /** * @author arthur * @date 2018/09/03 */ +@Data public class TopicBasicDTO { private Long clusterId; @@ -39,133 +42,7 @@ public class TopicBasicDTO { private Long retentionBytes; - public Long getClusterId() { - return clusterId; - } - - public void setClusterId(Long clusterId) { - this.clusterId = clusterId; - } - - public String getAppId() { - return appId; - } - - public void setAppId(String appId) { - this.appId = appId; - } - - public String getAppName() { - return appName; - } - - public void setAppName(String appName) { - this.appName = appName; - } - - public String getPrincipals() { - return principals; - } - - public void setPrincipals(String principals) { - this.principals = principals; - } - - public String getTopicName() { - return topicName; - } - - public void setTopicName(String topicName) { - this.topicName = topicName; - } - - public String getDescription() { - return 
description; - } - - public void setDescription(String description) { - this.description = description; - } - - public List getRegionNameList() { - return regionNameList; - } - - public void setRegionNameList(List regionNameList) { - this.regionNameList = regionNameList; - } - - public Integer getScore() { - return score; - } - - public void setScore(Integer score) { - this.score = score; - } - - public String getTopicCodeC() { - return topicCodeC; - } - - public void setTopicCodeC(String topicCodeC) { - this.topicCodeC = topicCodeC; - } - - public Integer getPartitionNum() { - return partitionNum; - } - - public void setPartitionNum(Integer partitionNum) { - this.partitionNum = partitionNum; - } - - public Integer getReplicaNum() { - return replicaNum; - } - - public void setReplicaNum(Integer replicaNum) { - this.replicaNum = replicaNum; - } - - public Integer getBrokerNum() { - return brokerNum; - } - - public void setBrokerNum(Integer brokerNum) { - this.brokerNum = brokerNum; - } - - public Long getModifyTime() { - return modifyTime; - } - - public void setModifyTime(Long modifyTime) { - this.modifyTime = modifyTime; - } - - public Long getCreateTime() { - return createTime; - } - - public void setCreateTime(Long createTime) { - this.createTime = createTime; - } - - public Long getRetentionTime() { - return retentionTime; - } - - public void setRetentionTime(Long retentionTime) { - this.retentionTime = retentionTime; - } - - public Long getRetentionBytes() { - return retentionBytes; - } - - public void setRetentionBytes(Long retentionBytes) { - this.retentionBytes = retentionBytes; - } + private Integer haRelation; @Override public String toString() { @@ -186,6 +63,7 @@ public class TopicBasicDTO { ", createTime=" + createTime + ", retentionTime=" + retentionTime + ", retentionBytes=" + retentionBytes + + ", haRelation=" + haRelation + '}'; } } diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ao/topic/TopicOverview.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ao/topic/TopicOverview.java index fe02fe94..c9666dc1 100644 --- a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ao/topic/TopicOverview.java +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ao/topic/TopicOverview.java @@ -1,10 +1,13 @@ package com.xiaojukeji.kafka.manager.common.entity.ao.topic; +import lombok.Data; + /** * Topic概览信息 * @author zengqiao * @date 20/5/14 */ +@Data public class TopicOverview { private Long clusterId; @@ -32,109 +35,7 @@ public class TopicOverview { private Long logicalClusterId; - public Long getClusterId() { - return clusterId; - } - - public void setClusterId(Long clusterId) { - this.clusterId = clusterId; - } - - public String getTopicName() { - return topicName; - } - - public void setTopicName(String topicName) { - this.topicName = topicName; - } - - public Integer getReplicaNum() { - return replicaNum; - } - - public void setReplicaNum(Integer replicaNum) { - this.replicaNum = replicaNum; - } - - public Integer getPartitionNum() { - return partitionNum; - } - - public void setPartitionNum(Integer partitionNum) { - this.partitionNum = partitionNum; - } - - public Long getRetentionTime() { - return retentionTime; - } - - public void setRetentionTime(Long retentionTime) { - this.retentionTime = retentionTime; - } - - public Object getByteIn() { - return byteIn; - } - - public void setByteIn(Object byteIn) { - this.byteIn = byteIn; - } - - public Object 
getByteOut() { - return byteOut; - } - - public void setByteOut(Object byteOut) { - this.byteOut = byteOut; - } - - public Object getProduceRequest() { - return produceRequest; - } - - public void setProduceRequest(Object produceRequest) { - this.produceRequest = produceRequest; - } - - public String getAppName() { - return appName; - } - - public void setAppName(String appName) { - this.appName = appName; - } - - public String getAppId() { - return appId; - } - - public void setAppId(String appId) { - this.appId = appId; - } - - public String getDescription() { - return description; - } - - public void setDescription(String description) { - this.description = description; - } - - public Long getUpdateTime() { - return updateTime; - } - - public void setUpdateTime(Long updateTime) { - this.updateTime = updateTime; - } - - public Long getLogicalClusterId() { - return logicalClusterId; - } - - public void setLogicalClusterId(Long logicalClusterId) { - this.logicalClusterId = logicalClusterId; - } + private Integer haRelation; @Override public String toString() { @@ -152,6 +53,7 @@ public class TopicOverview { ", description='" + description + '\'' + ", updateTime=" + updateTime + ", logicalClusterId=" + logicalClusterId + + ", haRelation=" + haRelation + '}'; } } diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/dto/ha/ASSwitchJobActionDTO.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/dto/ha/ASSwitchJobActionDTO.java new file mode 100644 index 00000000..1f1d41c6 --- /dev/null +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/dto/ha/ASSwitchJobActionDTO.java @@ -0,0 +1,26 @@ +package com.xiaojukeji.kafka.manager.common.entity.dto.ha; + +import io.swagger.annotations.ApiModel; +import io.swagger.annotations.ApiModelProperty; +import lombok.Data; + +import javax.validation.constraints.NotBlank; + +@Data +@ApiModel(description="Topic信息") +public class ASSwitchJobActionDTO { + /** + * @see com.xiaojukeji.kafka.manager.common.bizenum.TaskActionEnum + */ + @NotBlank(message = "action不允许为空") + @ApiModelProperty(value = "动作, force") + private String action; + +// @NotNull(message = "all不允许为NULL") +// @ApiModelProperty(value = "所有的Topic") +// private Boolean allJumpWaitInSync; +// +// @NotNull(message = "jumpWaitInSyncActiveTopicList不允许为NULL") +// @ApiModelProperty(value = "操作的Topic") +// private List jumpWaitInSyncActiveTopicList; +} diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/dto/ha/ASSwitchJobDTO.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/dto/ha/ASSwitchJobDTO.java new file mode 100644 index 00000000..8c4ae0dc --- /dev/null +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/dto/ha/ASSwitchJobDTO.java @@ -0,0 +1,31 @@ +package com.xiaojukeji.kafka.manager.common.entity.dto.ha; + +import io.swagger.annotations.ApiModel; +import io.swagger.annotations.ApiModelProperty; +import lombok.Data; + +import javax.validation.constraints.NotNull; +import java.util.List; + +@Data +@ApiModel(description="主备切换任务") +public class ASSwitchJobDTO { + @NotNull(message = "all不允许为NULL") + @ApiModelProperty(value = "所有Topic") + private Boolean all; + + @NotNull(message = "mustContainAllKafkaUserTopics不允许为NULL") + @ApiModelProperty(value = "是否需要包含KafkaUser关联的所有Topic") + private Boolean mustContainAllKafkaUserTopics; + + @NotNull(message = "activeClusterPhyId不允许为NULL") + 
diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/dto/ha/ASSwitchJobDTO.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/dto/ha/ASSwitchJobDTO.java
new file mode 100644
index 00000000..8c4ae0dc
--- /dev/null
+++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/dto/ha/ASSwitchJobDTO.java
@@ -0,0 +1,31 @@
+package com.xiaojukeji.kafka.manager.common.entity.dto.ha;
+
+import io.swagger.annotations.ApiModel;
+import io.swagger.annotations.ApiModelProperty;
+import lombok.Data;
+
+import javax.validation.constraints.NotNull;
+import java.util.List;
+
+@Data
+@ApiModel(description="主备切换任务")
+public class ASSwitchJobDTO {
+    @NotNull(message = "all不允许为NULL")
+    @ApiModelProperty(value = "所有Topic")
+    private Boolean all;
+
+    @NotNull(message = "mustContainAllKafkaUserTopics不允许为NULL")
+    @ApiModelProperty(value = "是否需要包含KafkaUser关联的所有Topic")
+    private Boolean mustContainAllKafkaUserTopics;
+
+    @NotNull(message = "activeClusterPhyId不允许为NULL")
+    @ApiModelProperty(value="主集群ID")
+    private Long activeClusterPhyId;
+
+    @NotNull(message = "standbyClusterPhyId不允许为NULL")
+    @ApiModelProperty(value="备集群ID")
+    private Long standbyClusterPhyId;
+
+    @NotNull(message = "topicNameList不允许为NULL")
+    private List<String> topicNameList;
+}
diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/dto/op/topic/HaTopicRelationDTO.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/dto/op/topic/HaTopicRelationDTO.java
new file mode 100644
index 00000000..d6aea1e5
--- /dev/null
+++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/dto/op/topic/HaTopicRelationDTO.java
@@ -0,0 +1,51 @@
+package com.xiaojukeji.kafka.manager.common.entity.dto.op.topic;
+
+import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
+import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
+import io.swagger.annotations.ApiModel;
+import io.swagger.annotations.ApiModelProperty;
+import lombok.Data;
+
+import javax.validation.constraints.NotNull;
+import java.util.List;
+
+/**
+ * @author huangyiminghappy@163.com, zengqiao
+ * @date 2022-06-29
+ */
+@Data
+@JsonIgnoreProperties(ignoreUnknown = true)
+@ApiModel(description = "Topic高可用关联|解绑")
+public class HaTopicRelationDTO {
+    @NotNull(message = "主集群id不能为空")
+    @ApiModelProperty(value = "主集群id")
+    private Long activeClusterId;
+
+    @NotNull(message = "备集群id不能为空")
+    @ApiModelProperty(value = "备集群id")
+    private Long standbyClusterId;
+
+    @NotNull(message = "是否应用于所有topic")
+    @ApiModelProperty(value = "是否应用于所有topic")
+    private Boolean all;
+
+    @ApiModelProperty(value = "需要关联|解绑的topic名称列表")
+    private List<String> topicNames;
+
+    @Override
+    public String toString() {
+        return "HaTopicRelationDTO{" +
+                "activeClusterId=" + activeClusterId +
+                ", standbyClusterId=" + standbyClusterId +
+                ", all=" + all +
+                ", topicNames=" + topicNames +
+                '}';
+    }
+
+    public boolean paramLegal() {
+        if (!all && ValidateUtils.isEmptyList(topicNames)) {
+            return false;
+        }
+        return true;
+    }
+}
diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/dto/rd/AppRelateTopicsDTO.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/dto/rd/AppRelateTopicsDTO.java
new file mode 100644
index 00000000..bc49f136
--- /dev/null
+++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/dto/rd/AppRelateTopicsDTO.java
@@ -0,0 +1,24 @@
+package com.xiaojukeji.kafka.manager.common.entity.dto.rd;
+
+import io.swagger.annotations.ApiModel;
+import io.swagger.annotations.ApiModelProperty;
+import lombok.Data;
+
+import javax.validation.constraints.NotNull;
+import java.util.List;
+
+/**
+ * @author zengqiao
+ * @date 20/5/4
+ */
+@Data
+@ApiModel(description="App关联Topic信息")
+public class AppRelateTopicsDTO {
+    @NotNull(message = "clusterPhyId不允许为NULL")
+    @ApiModelProperty(value="物理集群ID")
+    private Long clusterPhyId;
+
+    @NotNull(message = "filterTopicNameList不允许为NULL")
+    @ApiModelProperty(value="过滤的Topic列表")
+    private List<String> filterTopicNameList;
+}
\ No newline at end of file
diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/dto/rd/ClusterDTO.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/dto/rd/ClusterDTO.java
index 7afc09c6..9b913539 100644
--- a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/dto/rd/ClusterDTO.java
+++
b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/dto/rd/ClusterDTO.java @@ -4,11 +4,13 @@ import com.fasterxml.jackson.annotation.JsonIgnoreProperties; import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils; import io.swagger.annotations.ApiModel; import io.swagger.annotations.ApiModelProperty; +import lombok.Data; /** * @author zengqiao * @date 20/4/23 */ +@Data @ApiModel(description = "集群接入&修改") @JsonIgnoreProperties(ignoreUnknown = true) public class ClusterDTO { @@ -33,60 +35,21 @@ public class ClusterDTO { @ApiModelProperty(value="Jmx配置") private String jmxProperties; - public Long getClusterId() { - return clusterId; - } + @ApiModelProperty(value="主集群Id") + private Long activeClusterId; - public void setClusterId(Long clusterId) { - this.clusterId = clusterId; - } + @ApiModelProperty(value="是否高可用") + private boolean isHa; - public String getClusterName() { - return clusterName; - } - - public void setClusterName(String clusterName) { - this.clusterName = clusterName; - } - - public String getZookeeper() { - return zookeeper; - } - - public void setZookeeper(String zookeeper) { - this.zookeeper = zookeeper; - } - - public String getBootstrapServers() { - return bootstrapServers; - } - - public void setBootstrapServers(String bootstrapServers) { - this.bootstrapServers = bootstrapServers; - } - - public String getIdc() { - return idc; - } - - public void setIdc(String idc) { - this.idc = idc; - } - - public String getSecurityProperties() { - return securityProperties; - } - - public void setSecurityProperties(String securityProperties) { - this.securityProperties = securityProperties; - } - - public String getJmxProperties() { - return jmxProperties; - } - - public void setJmxProperties(String jmxProperties) { - this.jmxProperties = jmxProperties; + public boolean legal() { + if (ValidateUtils.isNull(clusterName) + || ValidateUtils.isNull(zookeeper) + || ValidateUtils.isNull(idc) + || ValidateUtils.isNull(bootstrapServers) + || (isHa && ValidateUtils.isNull(activeClusterId))) { + return false; + } + return true; } @Override @@ -99,16 +62,8 @@ public class ClusterDTO { ", idc='" + idc + '\'' + ", securityProperties='" + securityProperties + '\'' + ", jmxProperties='" + jmxProperties + '\'' + + ", activeClusterId=" + activeClusterId + + ", isHa=" + isHa + '}'; } - - public boolean legal() { - if (ValidateUtils.isNull(clusterName) - || ValidateUtils.isNull(zookeeper) - || ValidateUtils.isNull(idc) - || ValidateUtils.isNull(bootstrapServers)) { - return false; - } - return true; - } } \ No newline at end of file diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pagination/Pagination.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pagination/Pagination.java new file mode 100644 index 00000000..cb0faf84 --- /dev/null +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pagination/Pagination.java @@ -0,0 +1,24 @@ +package com.xiaojukeji.kafka.manager.common.entity.pagination; + +import io.swagger.annotations.ApiModel; +import io.swagger.annotations.ApiModelProperty; +import lombok.Data; + +@Data +@ApiModel(description = "分页信息") +public class Pagination { + @ApiModelProperty(value = "总记录数", example = "100") + private long total; + + @ApiModelProperty(value = "当前页码", example = "0") + private long pageNo; + + @ApiModelProperty(value = "单页大小", example = "10") + private long pageSize; + + public Pagination(long total, long pageNo, long pageSize) { + 
this.total = total; + this.pageNo = pageNo; + this.pageSize = pageSize; + } +} diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pagination/PaginationData.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pagination/PaginationData.java new file mode 100644 index 00000000..04a90b86 --- /dev/null +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pagination/PaginationData.java @@ -0,0 +1,17 @@ +package com.xiaojukeji.kafka.manager.common.entity.pagination; + +import io.swagger.annotations.ApiModel; +import io.swagger.annotations.ApiModelProperty; +import lombok.Data; + +import java.util.List; + +@Data +@ApiModel(description = "分页数据") +public class PaginationData { + @ApiModelProperty(value = "业务数据") + private List bizData; + + @ApiModelProperty(value = "分页信息") + private Pagination pagination; +} diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pojo/BaseDO.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pojo/BaseDO.java new file mode 100644 index 00000000..63113694 --- /dev/null +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pojo/BaseDO.java @@ -0,0 +1,30 @@ +package com.xiaojukeji.kafka.manager.common.entity.pojo; + +import lombok.Data; + +import java.io.Serializable; +import java.util.Date; + +/** + * @author zengqiao + * @date 21/07/19 + */ +@Data +public class BaseDO implements Serializable { + private static final long serialVersionUID = 8782560709154468485L; + + /** + * 主键ID + */ + protected Long id; + + /** + * 创建时间 + */ + protected Date createTime; + + /** + * 更新时间 + */ + protected Date modifyTime; +} diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pojo/LogicalClusterDO.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pojo/LogicalClusterDO.java index db81c1c9..50362fe4 100644 --- a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pojo/LogicalClusterDO.java +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pojo/LogicalClusterDO.java @@ -1,11 +1,18 @@ package com.xiaojukeji.kafka.manager.common.entity.pojo; +import lombok.Data; +import lombok.NoArgsConstructor; +import lombok.ToString; + import java.util.Date; /** * @author zengqiao * @date 20/6/29 */ +@Data +@ToString +@NoArgsConstructor public class LogicalClusterDO { private Long id; @@ -27,99 +34,17 @@ public class LogicalClusterDO { private Date gmtModify; - public Long getId() { - return id; - } - - public void setId(Long id) { - this.id = id; - } - - public String getName() { - return name; - } - - public void setName(String name) { + public LogicalClusterDO(String name, + String identification, + Integer mode, + String appId, + Long clusterId, + String regionList) { this.name = name; - } - - public String getIdentification() { - return identification; - } - - public void setIdentification(String identification) { this.identification = identification; - } - - public Integer getMode() { - return mode; - } - - public void setMode(Integer mode) { this.mode = mode; - } - - public String getAppId() { - return appId; - } - - public void setAppId(String appId) { this.appId = appId; - } - - public Long getClusterId() { - return clusterId; - } - - public void setClusterId(Long clusterId) { this.clusterId = clusterId; - } - - public String getRegionList() { - return regionList; - 
} - - public void setRegionList(String regionList) { this.regionList = regionList; } - - public String getDescription() { - return description; - } - - public void setDescription(String description) { - this.description = description; - } - - public Date getGmtCreate() { - return gmtCreate; - } - - public void setGmtCreate(Date gmtCreate) { - this.gmtCreate = gmtCreate; - } - - public Date getGmtModify() { - return gmtModify; - } - - public void setGmtModify(Date gmtModify) { - this.gmtModify = gmtModify; - } - - @Override - public String toString() { - return "LogicalClusterDO{" + - "id=" + id + - ", name='" + name + '\'' + - ", identification='" + identification + '\'' + - ", mode=" + mode + - ", appId='" + appId + '\'' + - ", clusterId=" + clusterId + - ", regionList='" + regionList + '\'' + - ", description='" + description + '\'' + - ", gmtCreate=" + gmtCreate + - ", gmtModify=" + gmtModify + - '}'; - } } \ No newline at end of file diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pojo/RegionDO.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pojo/RegionDO.java index 1f948510..e300e9ce 100644 --- a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pojo/RegionDO.java +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pojo/RegionDO.java @@ -1,7 +1,14 @@ package com.xiaojukeji.kafka.manager.common.entity.pojo; +import lombok.Data; +import lombok.NoArgsConstructor; +import lombok.ToString; + import java.util.Date; +@Data +@ToString +@NoArgsConstructor public class RegionDO implements Comparable { private Long id; @@ -25,111 +32,13 @@ public class RegionDO implements Comparable { private String description; - public Long getId() { - return id; - } - - public void setId(Long id) { - this.id = id; - } - - public Integer getStatus() { - return status; - } - - public void setStatus(Integer status) { + public RegionDO(Integer status, String name, Long clusterId, String brokerList) { this.status = status; - } - - public Date getGmtCreate() { - return gmtCreate; - } - - public void setGmtCreate(Date gmtCreate) { - this.gmtCreate = gmtCreate; - } - - public Date getGmtModify() { - return gmtModify; - } - - public void setGmtModify(Date gmtModify) { - this.gmtModify = gmtModify; - } - - public String getName() { - return name; - } - - public void setName(String name) { this.name = name; - } - - public Long getClusterId() { - return clusterId; - } - - public void setClusterId(Long clusterId) { this.clusterId = clusterId; - } - - public String getBrokerList() { - return brokerList; - } - - public void setBrokerList(String brokerList) { this.brokerList = brokerList; } - public Long getCapacity() { - return capacity; - } - - public void setCapacity(Long capacity) { - this.capacity = capacity; - } - - public Long getRealUsed() { - return realUsed; - } - - public void setRealUsed(Long realUsed) { - this.realUsed = realUsed; - } - - public Long getEstimateUsed() { - return estimateUsed; - } - - public void setEstimateUsed(Long estimateUsed) { - this.estimateUsed = estimateUsed; - } - - public String getDescription() { - return description; - } - - public void setDescription(String description) { - this.description = description; - } - - @Override - public String toString() { - return "RegionDO{" + - "id=" + id + - ", status=" + status + - ", gmtCreate=" + gmtCreate + - ", gmtModify=" + gmtModify + - ", name='" + name + '\'' + - ", clusterId=" + clusterId + - ", 
brokerList='" + brokerList + '\'' + - ", capacity=" + capacity + - ", realUsed=" + realUsed + - ", estimateUsed=" + estimateUsed + - ", description='" + description + '\'' + - '}'; - } - @Override public int compareTo(RegionDO regionDO) { return this.id.compareTo(regionDO.id); diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pojo/TopicDO.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pojo/TopicDO.java index ecb97e47..e44e58b3 100644 --- a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pojo/TopicDO.java +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pojo/TopicDO.java @@ -2,6 +2,8 @@ package com.xiaojukeji.kafka.manager.common.entity.pojo; import com.xiaojukeji.kafka.manager.common.entity.dto.op.topic.TopicCreationDTO; import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils; +import lombok.Data; +import lombok.NoArgsConstructor; import java.util.Date; @@ -9,6 +11,8 @@ import java.util.Date; * @author zengqiao * @date 20/4/24 */ +@Data +@NoArgsConstructor public class TopicDO { private Long id; @@ -26,70 +30,14 @@ public class TopicDO { private Long peakBytesIn; - public String getAppId() { - return appId; - } - - public void setAppId(String appId) { + public TopicDO(String appId, Long clusterId, String topicName, String description, Long peakBytesIn) { this.appId = appId; - } - - public Long getClusterId() { - return clusterId; - } - - public void setClusterId(Long clusterId) { this.clusterId = clusterId; - } - - public String getTopicName() { - return topicName; - } - - public void setTopicName(String topicName) { this.topicName = topicName; - } - - public String getDescription() { - return description; - } - - public void setDescription(String description) { this.description = description; - } - - public Long getPeakBytesIn() { - return peakBytesIn; - } - - public void setPeakBytesIn(Long peakBytesIn) { this.peakBytesIn = peakBytesIn; } - public Long getId() { - return id; - } - - public void setId(Long id) { - this.id = id; - } - - public Date getGmtCreate() { - return gmtCreate; - } - - public void setGmtCreate(Date gmtCreate) { - this.gmtCreate = gmtCreate; - } - - public Date getGmtModify() { - return gmtModify; - } - - public void setGmtModify(Date gmtModify) { - this.gmtModify = gmtModify; - } - public static TopicDO buildFrom(TopicCreationDTO dto) { TopicDO topicDO = new TopicDO(); topicDO.setAppId(dto.getAppId()); diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pojo/ha/HaASRelationDO.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pojo/ha/HaASRelationDO.java new file mode 100644 index 00000000..a55d5d00 --- /dev/null +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pojo/ha/HaASRelationDO.java @@ -0,0 +1,69 @@ +package com.xiaojukeji.kafka.manager.common.entity.pojo.ha; + +import com.baomidou.mybatisplus.annotation.TableName; +import com.xiaojukeji.kafka.manager.common.entity.pojo.BaseDO; +import lombok.AllArgsConstructor; +import lombok.Data; +import lombok.NoArgsConstructor; + + +/** + * HA-主备关系表 + */ +@Data +@NoArgsConstructor +@AllArgsConstructor +@TableName("ha_active_standby_relation") +public class HaASRelationDO extends BaseDO { + /** + * 主集群ID + */ + private Long activeClusterPhyId; + + /** + * 主集群资源名称 + */ + private String activeResName; + + /** + * 备集群ID + */ + private Long standbyClusterPhyId; + + /** + * 
备集群资源名称
+     */
+    private String standbyResName;
+
+    /**
+     * 资源类型
+     */
+    private Integer resType;
+
+    /**
+     * 主备状态
+     */
+    private Integer status;
+
+    /**
+     * 主备关系中的唯一性字段
+     */
+    private String uniqueField;
+
+    public HaASRelationDO(Long id, Integer status) {
+        this.id = id;
+        this.status = status;
+    }
+
+    public HaASRelationDO(Long activeClusterPhyId, String activeResName, Long standbyClusterPhyId, String standbyResName, Integer resType, Integer status) {
+        this.activeClusterPhyId = activeClusterPhyId;
+        this.activeResName = activeResName;
+        this.standbyClusterPhyId = standbyClusterPhyId;
+        this.standbyResName = standbyResName;
+        this.resType = resType;
+        this.status = status;
+
+        // 主备两个资源之间关系唯一,但不保证两个资源之间只存在主备关系,也可能存在双活关系,即各自互为对方的主备
+        this.uniqueField = String.format("%d_%s||%d_%s||%d", activeClusterPhyId, activeResName, standbyClusterPhyId, standbyResName, resType);
+    }
+}
diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pojo/ha/HaASSwitchJobDO.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pojo/ha/HaASSwitchJobDO.java
new file mode 100644
index 00000000..d68c4f88
--- /dev/null
+++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pojo/ha/HaASSwitchJobDO.java
@@ -0,0 +1,42 @@
+package com.xiaojukeji.kafka.manager.common.entity.pojo.ha;
+
+import com.baomidou.mybatisplus.annotation.TableName;
+import com.xiaojukeji.kafka.manager.common.entity.pojo.BaseDO;
+import lombok.Data;
+import lombok.NoArgsConstructor;
+
+
+/**
+ * HA-主备关系切换任务表
+ */
+@Data
+@NoArgsConstructor
+@TableName("ha_active_standby_switch_job")
+public class HaASSwitchJobDO extends BaseDO {
+    /**
+     * 主集群ID
+     */
+    private Long activeClusterPhyId;
+
+    /**
+     * 备集群ID
+     */
+    private Long standbyClusterPhyId;
+
+    /**
+     * 任务状态
+     */
+    private Integer jobStatus;
+
+    /**
+     * 操作人
+     */
+    private String operator;
+
+    public HaASSwitchJobDO(Long activeClusterPhyId, Long standbyClusterPhyId, Integer jobStatus, String operator) {
+        this.activeClusterPhyId = activeClusterPhyId;
+        this.standbyClusterPhyId = standbyClusterPhyId;
+        this.jobStatus = jobStatus;
+        this.operator = operator;
+    }
+}
diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pojo/ha/HaASSwitchSubJobDO.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pojo/ha/HaASSwitchSubJobDO.java
new file mode 100644
index 00000000..c62c8834
--- /dev/null
+++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pojo/ha/HaASSwitchSubJobDO.java
@@ -0,0 +1,67 @@
+package com.xiaojukeji.kafka.manager.common.entity.pojo.ha;
+
+import com.baomidou.mybatisplus.annotation.TableName;
+import com.xiaojukeji.kafka.manager.common.entity.pojo.BaseDO;
+import lombok.Data;
+import lombok.NoArgsConstructor;
+
+
+/**
+ * HA-主备关系切换子任务表
+ */
+@Data
+@NoArgsConstructor
+@TableName("ha_active_standby_switch_sub_job")
+public class HaASSwitchSubJobDO extends BaseDO {
+    /**
+     * 任务ID
+     */
+    private Long jobId;
+
+    /**
+     * 主集群ID
+     */
+    private Long activeClusterPhyId;
+
+    /**
+     * 主集群资源名称
+     */
+    private String activeResName;
+
+    /**
+     * 备集群ID
+     */
+    private Long standbyClusterPhyId;
+
+    /**
+     * 备集群资源名称
+     */
+    private String standbyResName;
+
+    /**
+     * 资源类型
+     */
+    private Integer resType;
+
+    /**
+     * 任务状态
+     */
+    private Integer jobStatus;
+
+    /**
+     * 扩展数据
+     * @see com.xiaojukeji.kafka.manager.common.entity.ao.ha.job.HaSubJobExtendData
+     */
+    private String extendData;
+
+    public HaASSwitchSubJobDO(Long jobId,
Long activeClusterPhyId, String activeResName, Long standbyClusterPhyId, String standbyResName, Integer resType, Integer jobStatus, String extendData) { + this.jobId = jobId; + this.activeClusterPhyId = activeClusterPhyId; + this.activeResName = activeResName; + this.standbyClusterPhyId = standbyClusterPhyId; + this.standbyResName = standbyResName; + this.resType = resType; + this.jobStatus = jobStatus; + this.extendData = extendData; + } +} diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pojo/ha/JobLogDO.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pojo/ha/JobLogDO.java new file mode 100644 index 00000000..ea5b4e57 --- /dev/null +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pojo/ha/JobLogDO.java @@ -0,0 +1,50 @@ +package com.xiaojukeji.kafka.manager.common.entity.pojo.ha; + +import com.baomidou.mybatisplus.annotation.TableName; +import com.xiaojukeji.kafka.manager.common.entity.pojo.BaseDO; +import lombok.Data; +import lombok.NoArgsConstructor; + +import java.util.Date; + + +@Data +@NoArgsConstructor +@TableName("job_log") +public class JobLogDO extends BaseDO { + /** + * 业务类型 + */ + private Integer bizType; + + /** + * 业务关键字 + */ + private String bizKeyword; + + /** + * 打印时间 + */ + private Date printTime; + + /** + * 内容 + */ + private String content; + + public JobLogDO(Integer bizType, String bizKeyword) { + this.bizType = bizType; + this.bizKeyword = bizKeyword; + } + + public JobLogDO(Integer bizType, String bizKeyword, Date printTime, String content) { + this.bizType = bizType; + this.bizKeyword = bizKeyword; + this.printTime = printTime; + this.content = content; + } + + public JobLogDO setAndCopyNew(Date printTime, String content) { + return new JobLogDO(this.bizType, this.bizKeyword, printTime, content); + } +} diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/common/TopicOverviewVO.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/common/TopicOverviewVO.java index 724e31b2..9b0d94fd 100644 --- a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/common/TopicOverviewVO.java +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/common/TopicOverviewVO.java @@ -2,12 +2,14 @@ package com.xiaojukeji.kafka.manager.common.entity.vo.common; import io.swagger.annotations.ApiModel; import io.swagger.annotations.ApiModelProperty; +import lombok.Data; /** * Topic信息 * @author zengqiao * @date 19/4/1 */ +@Data @ApiModel(description = "Topic信息概览") public class TopicOverviewVO { @ApiModelProperty(value = "集群ID") @@ -49,109 +51,8 @@ public class TopicOverviewVO { @ApiModelProperty(value = "逻辑集群id") private Long logicalClusterId; - public Long getClusterId() { - return clusterId; - } - - public void setClusterId(Long clusterId) { - this.clusterId = clusterId; - } - - public String getTopicName() { - return topicName; - } - - public void setTopicName(String topicName) { - this.topicName = topicName; - } - - public Integer getReplicaNum() { - return replicaNum; - } - - public void setReplicaNum(Integer replicaNum) { - this.replicaNum = replicaNum; - } - - public Integer getPartitionNum() { - return partitionNum; - } - - public void setPartitionNum(Integer partitionNum) { - this.partitionNum = partitionNum; - } - - public Long getRetentionTime() { - return retentionTime; - } - - public void setRetentionTime(Long retentionTime) { - 
this.retentionTime = retentionTime; - } - - public Object getByteIn() { - return byteIn; - } - - public void setByteIn(Object byteIn) { - this.byteIn = byteIn; - } - - public Object getByteOut() { - return byteOut; - } - - public void setByteOut(Object byteOut) { - this.byteOut = byteOut; - } - - public Object getProduceRequest() { - return produceRequest; - } - - public void setProduceRequest(Object produceRequest) { - this.produceRequest = produceRequest; - } - - public String getAppName() { - return appName; - } - - public void setAppName(String appName) { - this.appName = appName; - } - - public String getAppId() { - return appId; - } - - public void setAppId(String appId) { - this.appId = appId; - } - - public String getDescription() { - return description; - } - - public void setDescription(String description) { - this.description = description; - } - - public Long getUpdateTime() { - return updateTime; - } - - public void setUpdateTime(Long updateTime) { - this.updateTime = updateTime; - } - - public Long getLogicalClusterId() { - return logicalClusterId; - } - - public void setLogicalClusterId(Long logicalClusterId) { - this.logicalClusterId = logicalClusterId; - } + @ApiModelProperty(value = "高可用关系:1:主topic, 0:备topic , 其他:非高可用topic") + private Integer haRelation; @Override public String toString() { @@ -169,6 +70,7 @@ public class TopicOverviewVO { ", description='" + description + '\'' + ", updateTime=" + updateTime + ", logicalClusterId=" + logicalClusterId + + ", haRelation=" + haRelation + '}'; } } diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/ha/HaClusterTopicVO.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/ha/HaClusterTopicVO.java new file mode 100644 index 00000000..ddd8e6f5 --- /dev/null +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/ha/HaClusterTopicVO.java @@ -0,0 +1,34 @@ +package com.xiaojukeji.kafka.manager.common.entity.vo.ha; + +import io.swagger.annotations.ApiModel; +import io.swagger.annotations.ApiModelProperty; +import lombok.Data; + +/** + * @author zengqiao + * @date 20/4/29 + */ +@Data +@ApiModel(description="HA集群-Topic信息") +public class HaClusterTopicVO { + @ApiModelProperty(value="当前查询的集群ID") + private Long clusterId; + + @ApiModelProperty(value="Topic名称") + private String topicName; + + @ApiModelProperty(value="生产Acl数量") + private Integer produceAclNum; + + @ApiModelProperty(value="消费Acl数量") + private Integer consumeAclNum; + + @ApiModelProperty(value="主集群ID") + private Long activeClusterId; + + @ApiModelProperty(value="备集群ID") + private Long standbyClusterId; + + @ApiModelProperty(value="主备状态") + private Integer status; +} \ No newline at end of file diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/ha/HaClusterVO.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/ha/HaClusterVO.java new file mode 100644 index 00000000..765da022 --- /dev/null +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/ha/HaClusterVO.java @@ -0,0 +1,48 @@ +package com.xiaojukeji.kafka.manager.common.entity.vo.ha; + +import com.xiaojukeji.kafka.manager.common.entity.vo.rd.cluster.ClusterBaseVO; +import io.swagger.annotations.ApiModel; +import io.swagger.annotations.ApiModelProperty; +import lombok.Data; + +/** + * @author zengqiao + * @date 20/4/29 + */ +@Data +@ApiModel(description="HA集群-集群信息") +public class HaClusterVO extends 
ClusterBaseVO { + @ApiModelProperty(value="broker数量") + private Integer brokerNum; + + @ApiModelProperty(value="topic数量") + private Integer topicNum; + + @ApiModelProperty(value="消费组数") + private Integer consumerGroupNum; + + @ApiModelProperty(value="region数") + private Integer regionNum; + + @ApiModelProperty(value="ControllerID") + private Integer controllerId; + + /** + * @see com.xiaojukeji.kafka.manager.common.bizenum.ha.HaStatusEnum + */ + @ApiModelProperty(value="主备状态") + private Integer haStatus; + + @ApiModelProperty(value="主topic数") + private Long activeTopicCount; + + @ApiModelProperty(value="备topic数") + private Long standbyTopicCount; + + @ApiModelProperty(value="备集群信息") + private HaClusterVO haClusterVO; + + @ApiModelProperty(value="切换任务id") + private Long haASSwitchJobId; + +} \ No newline at end of file diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/ha/job/HaJobDetailVO.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/ha/job/HaJobDetailVO.java new file mode 100644 index 00000000..871e5f77 --- /dev/null +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/ha/job/HaJobDetailVO.java @@ -0,0 +1,37 @@ +package com.xiaojukeji.kafka.manager.common.entity.vo.ha.job; + +import io.swagger.annotations.ApiModel; +import io.swagger.annotations.ApiModelProperty; +import lombok.AllArgsConstructor; +import lombok.Data; +import lombok.NoArgsConstructor; + +@Data +@NoArgsConstructor +@AllArgsConstructor +@ApiModel(description = "Job详情") +public class HaJobDetailVO { + @ApiModelProperty(value = "Topic名称") + private String topicName; + + @ApiModelProperty(value="主物理集群ID") + private Long activeClusterPhyId; + + @ApiModelProperty(value="主物理集群名称") + private String activeClusterPhyName; + + @ApiModelProperty(value="备物理集群ID") + private Long standbyClusterPhyId; + + @ApiModelProperty(value="备物理集群名称") + private String standbyClusterPhyName; + + @ApiModelProperty(value="Lag和") + private Long sumLag; + + @ApiModelProperty(value="状态") + private Integer status; + + @ApiModelProperty(value="超时时间配置") + private Long timeoutUnitSecConfig; +} diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/ha/job/HaJobStateVO.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/ha/job/HaJobStateVO.java new file mode 100644 index 00000000..0850e86e --- /dev/null +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/ha/job/HaJobStateVO.java @@ -0,0 +1,46 @@ +package com.xiaojukeji.kafka.manager.common.entity.vo.ha.job; + +import com.xiaojukeji.kafka.manager.common.entity.ao.ha.job.HaJobState; +import io.swagger.annotations.ApiModel; +import io.swagger.annotations.ApiModelProperty; +import lombok.AllArgsConstructor; +import lombok.Data; +import lombok.NoArgsConstructor; + +@Data +@NoArgsConstructor +@AllArgsConstructor +@ApiModel(description = "Job状态") +public class HaJobStateVO { + @ApiModelProperty(value = "任务总数") + private Integer jobNu; + + @ApiModelProperty(value = "运行中的任务数") + private Integer runningNu; + + @ApiModelProperty(value = "超时运行中的任务数") + private Integer runningInTimeoutNu; + + @ApiModelProperty(value = "准备好待运行的任务数") + private Integer waitingNu; + + @ApiModelProperty(value = "运行成功的任务数") + private Integer successNu; + + @ApiModelProperty(value = "运行失败的任务数") + private Integer failedNu; + + @ApiModelProperty(value = "进度,[0 - 100]") + private Integer progress; + + public 
HaJobStateVO(HaJobState jobState) { + this.jobNu = jobState.getTotal(); + this.runningNu = jobState.getDoing(); + this.runningInTimeoutNu = jobState.getDoingInTimeout(); + this.waitingNu = 0; + this.successNu = jobState.getSuccess(); + this.failedNu = jobState.getFailed(); + + this.progress = jobState.getProgress(); + } +} diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/normal/topic/HaClusterTopicHaStatusVO.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/normal/topic/HaClusterTopicHaStatusVO.java new file mode 100644 index 00000000..8681e66a --- /dev/null +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/normal/topic/HaClusterTopicHaStatusVO.java @@ -0,0 +1,26 @@ +package com.xiaojukeji.kafka.manager.common.entity.vo.normal.topic; + +import io.swagger.annotations.ApiModel; +import io.swagger.annotations.ApiModelProperty; +import lombok.Data; + +/** + * @author zengqiao + * @date 20/4/8 + */ +@Data +@ApiModel(value = "集群的topic高可用状态") +public class HaClusterTopicHaStatusVO { + @ApiModelProperty(value = "物理集群ID") + private Long clusterId; + + @ApiModelProperty(value = "物理集群名称") + private String clusterName; + + @ApiModelProperty(value = "Topic名称") + private String topicName; + + @ApiModelProperty(value = "高可用关系:1:主topic, 0:备topic , 其他:非高可用topic") + private Integer haRelation; + +} \ No newline at end of file diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/normal/topic/TopicBasicVO.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/normal/topic/TopicBasicVO.java index b200a150..ddaf8dca 100644 --- a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/normal/topic/TopicBasicVO.java +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/normal/topic/TopicBasicVO.java @@ -2,6 +2,7 @@ package com.xiaojukeji.kafka.manager.common.entity.vo.normal.topic; import io.swagger.annotations.ApiModel; import io.swagger.annotations.ApiModelProperty; +import lombok.Data; import java.util.List; @@ -10,6 +11,7 @@ import java.util.List; * @author zengqiao * @date 19/4/1 */ +@Data @ApiModel(description = "Topic基本信息") public class TopicBasicVO { @ApiModelProperty(value = "集群id") @@ -57,125 +59,8 @@ public class TopicBasicVO { @ApiModelProperty(value = "所属region") private List regionNameList; - public Long getClusterId() { - return clusterId; - } - - public void setClusterId(Long clusterId) { - this.clusterId = clusterId; - } - - public String getAppId() { - return appId; - } - - public void setAppId(String appId) { - this.appId = appId; - } - - public String getAppName() { - return appName; - } - - public void setAppName(String appName) { - this.appName = appName; - } - - public Integer getPartitionNum() { - return partitionNum; - } - - public void setPartitionNum(Integer partitionNum) { - this.partitionNum = partitionNum; - } - - public Integer getReplicaNum() { - return replicaNum; - } - - public void setReplicaNum(Integer replicaNum) { - this.replicaNum = replicaNum; - } - - public String getPrincipals() { - return principals; - } - - public void setPrincipals(String principals) { - this.principals = principals; - } - - public Long getRetentionTime() { - return retentionTime; - } - - public void setRetentionTime(Long retentionTime) { - this.retentionTime = retentionTime; - } - - public Long getRetentionBytes() { - return retentionBytes; - } - - 
public void setRetentionBytes(Long retentionBytes) { - this.retentionBytes = retentionBytes; - } - - public Long getCreateTime() { - return createTime; - } - - public void setCreateTime(Long createTime) { - this.createTime = createTime; - } - - public Long getModifyTime() { - return modifyTime; - } - - public void setModifyTime(Long modifyTime) { - this.modifyTime = modifyTime; - } - - public Integer getScore() { - return score; - } - - public void setScore(Integer score) { - this.score = score; - } - - public String getTopicCodeC() { - return topicCodeC; - } - - public void setTopicCodeC(String topicCodeC) { - this.topicCodeC = topicCodeC; - } - - public String getDescription() { - return description; - } - - public void setDescription(String description) { - this.description = description; - } - - public String getBootstrapServers() { - return bootstrapServers; - } - - public void setBootstrapServers(String bootstrapServers) { - this.bootstrapServers = bootstrapServers; - } - - public List getRegionNameList() { - return regionNameList; - } - - public void setRegionNameList(List regionNameList) { - this.regionNameList = regionNameList; - } + @ApiModelProperty(value = "高可用关系:1:主topic, 0:备topic , 其他:非主备topic") + private Integer haRelation; @Override public String toString() { @@ -195,6 +80,7 @@ public class TopicBasicVO { ", description='" + description + '\'' + ", bootstrapServers='" + bootstrapServers + '\'' + ", regionNameList=" + regionNameList + + ", haRelation=" + haRelation + '}'; } } diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/normal/topic/TopicHaVO.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/normal/topic/TopicHaVO.java new file mode 100644 index 00000000..9f65f15a --- /dev/null +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/normal/topic/TopicHaVO.java @@ -0,0 +1,26 @@ +package com.xiaojukeji.kafka.manager.common.entity.vo.normal.topic; + +import io.swagger.annotations.ApiModel; +import io.swagger.annotations.ApiModelProperty; +import lombok.Data; + +/** + * @author zengqiao + * @date 20/4/8 + */ +@Data +@ApiModel(value = "Topic信息") +public class TopicHaVO { + @ApiModelProperty(value = "物理集群ID") + private Long clusterId; + + @ApiModelProperty(value = "物理集群名称") + private String clusterName; + + @ApiModelProperty(value = "Topic名称") + private String topicName; + + @ApiModelProperty(value = "高可用关系:1:主topic, 0:备topic , 其他:非高可用topic") + private Integer haRelation; + +} \ No newline at end of file diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/rd/RdTopicBasicVO.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/rd/RdTopicBasicVO.java index 75d50f05..49074d94 100644 --- a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/rd/RdTopicBasicVO.java +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/rd/RdTopicBasicVO.java @@ -2,6 +2,7 @@ package com.xiaojukeji.kafka.manager.common.entity.vo.rd; import io.swagger.annotations.ApiModel; import io.swagger.annotations.ApiModelProperty; +import lombok.Data; import java.util.List; import java.util.Properties; @@ -10,6 +11,7 @@ import java.util.Properties; * @author zengqiao * @date 20/6/10 */ +@Data @ApiModel(description = "Topic基本信息(RD视角)") public class RdTopicBasicVO { @ApiModelProperty(value = "集群ID") @@ -39,77 +41,8 @@ public class RdTopicBasicVO { 
@ApiModelProperty(value = "所属region") private List regionNameList; - public Long getClusterId() { - return clusterId; - } - - public void setClusterId(Long clusterId) { - this.clusterId = clusterId; - } - - public String getClusterName() { - return clusterName; - } - - public void setClusterName(String clusterName) { - this.clusterName = clusterName; - } - - public String getTopicName() { - return topicName; - } - - public void setTopicName(String topicName) { - this.topicName = topicName; - } - - public Long getRetentionTime() { - return retentionTime; - } - - public void setRetentionTime(Long retentionTime) { - this.retentionTime = retentionTime; - } - - public String getAppId() { - return appId; - } - - public void setAppId(String appId) { - this.appId = appId; - } - - public String getAppName() { - return appName; - } - - public void setAppName(String appName) { - this.appName = appName; - } - - public Properties getProperties() { - return properties; - } - - public void setProperties(Properties properties) { - this.properties = properties; - } - - public String getDescription() { - return description; - } - - public void setDescription(String description) { - this.description = description; - } - - public List getRegionNameList() { - return regionNameList; - } - - public void setRegionNameList(List regionNameList) { - this.regionNameList = regionNameList; - } + @ApiModelProperty(value = "高可用关系:1:主topic, 0:备topic , 其他:非主备topic") + private Integer haRelation; @Override public String toString() { @@ -122,7 +55,8 @@ public class RdTopicBasicVO { ", appName='" + appName + '\'' + ", properties=" + properties + ", description='" + description + '\'' + - ", regionNameList='" + regionNameList + '\'' + + ", regionNameList=" + regionNameList + + ", haRelation=" + haRelation + '}'; } } \ No newline at end of file diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/rd/app/AppRelateTopicsVO.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/rd/app/AppRelateTopicsVO.java new file mode 100644 index 00000000..1ebe57a4 --- /dev/null +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/rd/app/AppRelateTopicsVO.java @@ -0,0 +1,30 @@ +package com.xiaojukeji.kafka.manager.common.entity.vo.rd.app; + +import io.swagger.annotations.ApiModel; +import io.swagger.annotations.ApiModelProperty; +import lombok.Data; + +import java.util.List; + +/** + * @author zengqiao + * @date 20/5/4 + */ +@Data +@ApiModel(description="App关联Topic信息") +public class AppRelateTopicsVO { + @ApiModelProperty(value="物理集群ID") + private Long clusterPhyId; + + @ApiModelProperty(value="kafkaUser") + private String kafkaUser; + + @ApiModelProperty(value="选中的Topic列表") + private List selectedTopicNameList; + + @ApiModelProperty(value="未选中的Topic列表") + private List notSelectTopicNameList; + + @ApiModelProperty(value="未建立HA的Topic列表") + private List notHaTopicNameList; +} \ No newline at end of file diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/rd/cluster/ClusterDetailVO.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/rd/cluster/ClusterDetailVO.java index cdeb7da7..8ac7a28b 100644 --- a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/rd/cluster/ClusterDetailVO.java +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/rd/cluster/ClusterDetailVO.java @@ -2,11 +2,13 @@ package 
com.xiaojukeji.kafka.manager.common.entity.vo.rd.cluster; import io.swagger.annotations.ApiModel; import io.swagger.annotations.ApiModelProperty; +import lombok.Data; /** * @author zengqiao * @date 20/4/23 */ +@Data @ApiModel(description="集群信息") public class ClusterDetailVO extends ClusterBaseVO { @ApiModelProperty(value="Broker数") @@ -24,45 +26,11 @@ public class ClusterDetailVO extends ClusterBaseVO { @ApiModelProperty(value="Region数") private Integer regionNum; - public Integer getBrokerNum() { - return brokerNum; - } + @ApiModelProperty(value = "高可用关系:1:主, 0:备 , 其他:非高可用") + private Integer haRelation; - public void setBrokerNum(Integer brokerNum) { - this.brokerNum = brokerNum; - } - - public Integer getTopicNum() { - return topicNum; - } - - public void setTopicNum(Integer topicNum) { - this.topicNum = topicNum; - } - - public Integer getConsumerGroupNum() { - return consumerGroupNum; - } - - public void setConsumerGroupNum(Integer consumerGroupNum) { - this.consumerGroupNum = consumerGroupNum; - } - - public Integer getControllerId() { - return controllerId; - } - - public void setControllerId(Integer controllerId) { - this.controllerId = controllerId; - } - - public Integer getRegionNum() { - return regionNum; - } - - public void setRegionNum(Integer regionNum) { - this.regionNum = regionNum; - } + @ApiModelProperty(value = "互备集群名称") + private String mutualBackupClusterName; @Override public String toString() { @@ -72,6 +40,8 @@ public class ClusterDetailVO extends ClusterBaseVO { ", consumerGroupNum=" + consumerGroupNum + ", controllerId=" + controllerId + ", regionNum=" + regionNum + - "} " + super.toString(); + ", haRelation=" + haRelation + + ", mutualBackupClusterName='" + mutualBackupClusterName + '\'' + + '}'; } } \ No newline at end of file diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/rd/job/JobLogVO.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/rd/job/JobLogVO.java new file mode 100644 index 00000000..8bae7454 --- /dev/null +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/rd/job/JobLogVO.java @@ -0,0 +1,30 @@ +package com.xiaojukeji.kafka.manager.common.entity.vo.rd.job; + +import io.swagger.annotations.ApiModel; +import io.swagger.annotations.ApiModelProperty; +import lombok.AllArgsConstructor; +import lombok.Data; +import lombok.NoArgsConstructor; + +import java.util.Date; + +@Data +@NoArgsConstructor +@AllArgsConstructor +@ApiModel(description = "Job日志") +public class JobLogVO { + @ApiModelProperty(value = "日志ID") + protected Long id; + + @ApiModelProperty(value = "业务类型") + private Integer bizType; + + @ApiModelProperty(value = "业务关键字") + private String bizKeyword; + + @ApiModelProperty(value = "打印时间") + private Date printTime; + + @ApiModelProperty(value = "内容") + private String content; +} diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/rd/job/JobMulLogVO.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/rd/job/JobMulLogVO.java new file mode 100644 index 00000000..d2cb67a0 --- /dev/null +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/rd/job/JobMulLogVO.java @@ -0,0 +1,31 @@ +package com.xiaojukeji.kafka.manager.common.entity.vo.rd.job; + +import io.swagger.annotations.ApiModel; +import io.swagger.annotations.ApiModelProperty; +import lombok.AllArgsConstructor; +import lombok.Data; +import lombok.NoArgsConstructor; + 
+import java.util.ArrayList;
+import java.util.List;
+
+@Data
+@NoArgsConstructor
+@AllArgsConstructor
+@ApiModel(description = "Job日志")
+public class JobMulLogVO {
+    @ApiModelProperty(value = "末尾日志ID")
+    private Long endLogId;
+
+    @ApiModelProperty(value = "日志信息")
+    private List<JobLogVO> logList;
+
+    public JobMulLogVO(List<JobLogVO> logList, Long startLogId) {
+        this.logList = logList == null ? new ArrayList<>() : logList;
+        if (!this.logList.isEmpty()) {
+            this.endLogId = this.logList.stream().map(elem -> elem.id).reduce(Long::max).get() + 1;
+        } else {
+            this.endLogId = startLogId;
+        }
+    }
+}
diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/utils/ConvertUtil.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/utils/ConvertUtil.java
new file mode 100644
index 00000000..a6fbcef2
--- /dev/null
+++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/utils/ConvertUtil.java
@@ -0,0 +1,404 @@
+package com.xiaojukeji.kafka.manager.common.utils;
+
+import com.alibaba.fastjson.JSON;
+import com.alibaba.fastjson.JSONObject;
+import com.alibaba.fastjson.TypeReference;
+import com.alibaba.fastjson.serializer.SerializerFeature;
+import com.google.common.collect.*;
+import org.apache.commons.collections.CollectionUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.springframework.beans.BeanUtils;
+
+import java.lang.reflect.Field;
+import java.lang.reflect.Modifier;
+import java.lang.reflect.Type;
+import java.util.*;
+import java.util.Map.Entry;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.function.Consumer;
+import java.util.function.Function;
+
+public class ConvertUtil {
+    private static final Logger LOGGER = LoggerFactory.getLogger(ConvertUtil.class);
+
+    private ConvertUtil(){}
+
+    public static <T> T toObj(String json, Type resultType) {
+        if (resultType instanceof Class) {
+            Class<T> clazz = (Class<T>) resultType;
+            return str2ObjByJson(json, clazz);
+        }
+
+        return JSON.parseObject(json, resultType);
+    }
+
+    public static <T> T str2ObjByJson(String srcStr, Class<T> tgtClass) {
+        return JSON.parseObject(srcStr, tgtClass);
+    }
+
+    public static <T> T str2ObjByJson(String srcStr, TypeReference<T> tt) {
+        return JSON.parseObject(srcStr, tt);
+    }
+
+    public static String obj2Json(Object srcObj) {
+        if (srcObj == null) {
+            return null;
+        }
+        if (srcObj instanceof String) {
+            return (String) srcObj;
+        } else {
+            return JSON.toJSONString(srcObj);
+        }
+    }
+
+    public static String obj2JsonWithIgnoreCircularReferenceDetect(Object srcObj) {
+        return JSON.toJSONString(srcObj, SerializerFeature.DisableCircularReferenceDetect);
+    }
+
+    public static <T> List<T> str2ObjArrayByJson(String srcStr, Class<T> tgtClass) {
+        return JSON.parseArray(srcStr, tgtClass);
+    }
+
+    public static <T> T obj2ObjByJSON(Object srcObj, Class<T> tgtClass) {
+        return JSON.parseObject(JSON.toJSONString(srcObj), tgtClass);
+    }
+
+    public static String list2String(List<?> list, String separator) {
+        if (list == null || list.isEmpty()) {
+            return "";
+        }
+
+        StringBuilder sb = new StringBuilder();
+        for (Object item : list) {
+            sb.append(item).append(separator);
+        }
+        return sb.deleteCharAt(sb.length() - 1).toString();
+    }
+
+    public static <K, V> Map<K, V> list2Map(List<V> list, Function<V, K> mapper) {
+        Map<K, V> map = Maps.newHashMap();
+        if (CollectionUtils.isNotEmpty(list)) {
+            for (V v : list) {
+                map.put(mapper.apply(v), v);
+            }
+        }
+        return map;
+    }
+
+    public static <K, V> Map<K, V> list2MapParallel(List<V> list, Function<V, K> mapper) {
+        Map<K, V> map = new ConcurrentHashMap<>();
+        if (CollectionUtils.isNotEmpty(list)) {
+            list.parallelStream().forEach(v -> map.put(mapper.apply(v), v));
+        }
+        return map;
+    }
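+
+    // 使用示意:list2Map 将列表按给定键函数建立索引,例如按 Topic 名称索引 TopicDO 列表:
+    //   Map<String, TopicDO> topicMap = ConvertUtil.list2Map(topicDOList, TopicDO::getTopicName);
+    // list2MapParallel 语义相同,但通过 parallelStream 并发写入 ConcurrentHashMap,适合较大的列表;
+    // 两者在 key 重复时均会发生覆盖,且并发版本的覆盖结果不保证顺序。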
+
+    public static <K, V, O> Map<K, V> list2Map(List<O> list, Function<O, K> keyMapper,
+                                               Function<O, V> valueMapper) {
+        Map<K, V> map = Maps.newHashMap();
+        if (CollectionUtils.isNotEmpty(list)) {
+            for (O o : list) {
+                map.put(keyMapper.apply(o), valueMapper.apply(o));
+            }
+        }
+        return map;
+    }
+
+    public static <K, V> Multimap<K, V> list2MulMap(List<V> list, Function<V, K> mapper) {
+        Multimap<K, V> multimap = ArrayListMultimap.create();
+        if (CollectionUtils.isNotEmpty(list)) {
+            for (V v : list) {
+                multimap.put(mapper.apply(v), v);
+            }
+        }
+        return multimap;
+    }
+
+    public static <K, V, O> Multimap<K, V> list2MulMap(List<O> list, Function<O, K> keyMapper,
+                                                       Function<O, V> valueMapper) {
+        Multimap<K, V> multimap = ArrayListMultimap.create();
+        if (CollectionUtils.isNotEmpty(list)) {
+            for (O o : list) {
+                multimap.put(keyMapper.apply(o), valueMapper.apply(o));
+            }
+        }
+        return multimap;
+    }
+
+    public static <K, V, O> Map<K, List<V>> list2MapOfList(List<O> list, Function<O, K> keyMapper,
+                                                           Function<O, V> valueMapper) {
+        ArrayListMultimap<K, V> multimap = ArrayListMultimap.create();
+        if (CollectionUtils.isNotEmpty(list)) {
+            for (O o : list) {
+                multimap.put(keyMapper.apply(o), valueMapper.apply(o));
+            }
+        }
+
+        return Multimaps.asMap(multimap);
+    }
+
+    public static <T, V> Set<T> list2Set(List<V> list, Function<V, T> mapper) {
+        Set<T> set = Sets.newHashSet();
+        if (CollectionUtils.isNotEmpty(list)) {
+            for (V v : list) {
+                set.add(mapper.apply(v));
+            }
+        }
+        return set;
+    }
+
+    public static <T> Set<T> set2Set(Set<?> set, Class<T> tClass) {
+        if (CollectionUtils.isEmpty(set)) {
+            return new HashSet<>();
+        }
+
+        Set<T> result = new HashSet<>();
+
+        for (Object o : set) {
+            T t = obj2Obj(o, tClass);
+            if (t != null) {
+                result.add(t);
+            }
+        }
+
+        return result;
+    }
+
+    public static <T> List<T> list2List(List<?> list, Class<T> tClass) {
+        return list2List(list, tClass, (t) -> {});
+    }
+
+    public static <T> List<T> list2List(List<?> list, Class<T> tClass, Consumer<T> consumer) {
+        if (CollectionUtils.isEmpty(list)) {
+            return Lists.newArrayList();
+        }
+
+        List<T> result = Lists.newArrayList();
+
+        for (Object object : list) {
+            T t = obj2Obj(object, tClass, consumer);
+            if (t != null) {
+                result.add(t);
+            }
+        }
+
+        return result;
+    }
+
+    /**
+     * 对象转换工具
+     * @param srcObj 元对象
+     * @param tgtClass 目标对象类
+     * @param <T> 泛型
+     * @return 目标对象
+     */
+    public static <T> T obj2Obj(final Object srcObj, Class<T> tgtClass) {
+        return obj2Obj(srcObj, tgtClass, (t) -> {});
+    }
+
+    public static <T> T obj2Obj(final Object srcObj, Class<T> tgtClass, Consumer<T> consumer) {
+        if (srcObj == null) {
+            return null;
+        }
+
+        T tgt = null;
+        try {
+            tgt = tgtClass.newInstance();
+            BeanUtils.copyProperties(srcObj, tgt);
+            consumer.accept(tgt);
+        } catch (Exception e) {
+            LOGGER.warn("class=ConvertUtil||method=obj2Obj||msg={}", e.getMessage());
+        }
+
+        return tgt;
+    }
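+
+    // 使用示意:obj2Obj 基于 Spring BeanUtils 按同名同类型属性进行浅拷贝,常用于 DO/VO 互转,
+    // 例如:TopicOverviewVO vo = ConvertUtil.obj2Obj(topicOverview, TopicOverviewVO.class);
+    // 注意:目标类需要有无参构造;拷贝失败时仅打 warn 日志,可能返回 null 或未拷贝完整的对象。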
+
+    public static <K, V> Map<K, V> mergeMapList(List<Map<K, V>> mapList) {
+        Map<K, V> result = Maps.newHashMap();
+        for (Map<K, V> map : mapList) {
+            result.putAll(map);
+        }
+        return result;
+    }
+
+    public static Map<String, Object> Obj2Map(Object obj) {
+        if (null == obj) {
+            return null;
+        }
+
+        Map<String, Object> map = new HashMap<>();
+        Field[] fields = obj.getClass().getDeclaredFields();
+        for (Field field : fields) {
+            field.setAccessible(true);
+            try {
+                map.put(field.getName(), field.get(obj));
+            } catch (IllegalAccessException e) {
+                LOGGER.warn("class=ConvertUtil||method=Obj2Map||msg={}", e.getMessage(), e);
+            }
+        }
+        return map;
+    }
+
+    public static Object map2Obj(Map<String, Object> map, Class<?> clz) {
+        Object obj = null;
+        try {
+            obj = clz.newInstance();
+            Field[] declaredFields = obj.getClass().getDeclaredFields();
+            for (Field field : declaredFields) {
+                int mod = field.getModifiers();
+                if (Modifier.isStatic(mod) || Modifier.isFinal(mod)) {
+                    continue;
+                }
+                field.setAccessible(true);
+                field.set(obj, map.get(field.getName()));
+            }
+        } catch (Exception e) {
+            LOGGER.warn("class=ConvertUtil||method=map2Obj||msg={}", e.getMessage(), e);
+        }
+
+        return obj;
+    }
+
+    public static Map<String, Integer> sortMapByValue(Map<String, Integer> map) {
+        List<Entry<String, Integer>> data = new ArrayList<>(map.entrySet());
+        data.sort((o1, o2) -> {
+            if ((o2.getValue() - o1.getValue()) > 0) {
+                return 1;
+            } else if ((o2.getValue() - o1.getValue()) == 0) {
+                return 0;
+            } else {
+                return -1;
+            }
+        });
+
+        Map<String, Integer> result = Maps.newLinkedHashMap();
+
+        for (Entry<String, Integer> next : data) {
+            result.put(next.getKey(), next.getValue());
+        }
+        return result;
+    }
+
+    public static Map<String, Object> directFlatObject(JSONObject obj) {
+        Map<String, Object> ret = new HashMap<>();
+
+        if (obj == null) {
+            return ret;
+        }
+
+        for (Entry<String, Object> entry : obj.entrySet()) {
+            String key = entry.getKey();
+            Object o = entry.getValue();
+
+            if (o instanceof JSONObject) {
+                Map<String, Object> m = directFlatObject((JSONObject) o);
+                for (Entry<String, Object> e : m.entrySet()) {
+                    ret.put(key + "." + e.getKey(), e.getValue());
+                }
+            } else {
+                ret.put(key, o);
+            }
+        }
+
+        return ret;
+    }
+
+    public static Long string2Long(String s) {
+        if (ValidateUtils.isNull(s)) {
+            return null;
+        }
+        try {
+            return Long.parseLong(s);
+        } catch (Exception e) {
+            // ignore exception
+        }
+        return null;
+    }
+
+    public static Float string2Float(String s) {
+        if (ValidateUtils.isNull(s)) {
+            return null;
+        }
+        try {
+            return Float.parseFloat(s);
+        } catch (Exception e) {
+            // ignore exception
+        }
+        return null;
+    }
+
+    public static String float2String(Float f) {
+        if (ValidateUtils.isNull(f)) {
+            return null;
+        }
+        try {
+            return String.valueOf(f);
+        } catch (Exception e) {
+            // ignore exception
+        }
+        return null;
+    }
+
+    public static Integer string2Integer(String s) {
+        if (null == s) {
+            return null;
+        }
+        try {
+            return Integer.parseInt(s);
+        } catch (Exception e) {
+            // ignore exception
+        }
+        return null;
+    }
+
+    public static Double string2Double(String s) {
+        if (null == s) {
+            return null;
+        }
+        try {
+            return Double.parseDouble(s);
+        } catch (Exception e) {
+            // ignore exception
+        }
+        return null;
+    }
+
+    public static Long double2Long(Double d) {
+        if (null == d) {
+            return null;
+        }
+        try {
+            return d.longValue();
+        } catch (Exception e) {
+            // ignore exception
+        }
+        return null;
+    }
+
+    public static Integer double2Int(Double d) {
+        if (null == d) {
+            return null;
+        }
+        try {
+            return d.intValue();
+        } catch (Exception e) {
+            // ignore exception
+        }
+        return null;
+    }
+
+    public static Long Float2Long(Float f) {
+        if (null == f) {
+            return null;
+        }
+        try {
+            return f.longValue();
+        } catch (Exception e) {
+            // ignore exception
+        }
+        return null;
+    }
+}
diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/utils/CopyUtils.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/utils/CopyUtils.java
index bef175e4..ea265d47 100644
--- a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/utils/CopyUtils.java
+++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/utils/CopyUtils.java
@@ -15,6 +15,7 @@ import java.util.concurrent.ConcurrentHashMap;
  * @author huangyiminghappy@163.com
  * @date 2019/3/15
  */
+@Deprecated
 public class CopyUtils {
 
     @SuppressWarnings({"unchecked", "rawtypes"})
diff --git
a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/utils/FutureUtil.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/utils/FutureUtil.java index b061ebed..6830c915 100644 --- a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/utils/FutureUtil.java +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/utils/FutureUtil.java @@ -40,6 +40,14 @@ public class FutureUtil { return futureUtil; } + public Future directSubmitTask(Callable callable) { + return executor.submit(callable); + } + + public Future directSubmitTask(Runnable runnable) { + return (Future) executor.submit(runnable); + } + /** * 必须配合 waitExecute使用 否则容易会撑爆内存 */ diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/zookeeper/ZkPathUtil.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/zookeeper/ZkPathUtil.java index 0410a553..4e909528 100644 --- a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/zookeeper/ZkPathUtil.java +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/zookeeper/ZkPathUtil.java @@ -8,6 +8,8 @@ package com.xiaojukeji.kafka.manager.common.zookeeper; public class ZkPathUtil { private static final String ZOOKEEPER_SEPARATOR = "/"; + public static final String CLUSTER_ID_NODE = ZOOKEEPER_SEPARATOR + "cluster/id"; + public static final String BROKER_ROOT_NODE = ZOOKEEPER_SEPARATOR + "brokers"; public static final String CONTROLLER_ROOT_NODE = ZOOKEEPER_SEPARATOR + "controller"; diff --git a/kafka-manager-console/package.json b/kafka-manager-console/package.json index 5d33a320..1362e7f6 100644 --- a/kafka-manager-console/package.json +++ b/kafka-manager-console/package.json @@ -1,6 +1,6 @@ { "name": "logi-kafka", - "version": "2.6.1", + "version": "2.8.0", "description": "", "scripts": { "prestart": "npm install --save-dev webpack-dev-server", @@ -58,4 +58,4 @@ "dependencies": { "format-to-json": "^1.0.4" } -} +} \ No newline at end of file diff --git a/kafka-manager-console/src/component/x-form-wrapper/index.tsx b/kafka-manager-console/src/component/x-form-wrapper/index.tsx index e39f3ef4..bd068e95 100755 --- a/kafka-manager-console/src/component/x-form-wrapper/index.tsx +++ b/kafka-manager-console/src/component/x-form-wrapper/index.tsx @@ -8,7 +8,7 @@ export class XFormWrapper extends React.Component { public state = { confirmLoading: false, formMap: this.props.formMap || [] as any, - formData: this.props.formData || {} + formData: this.props.formData || {}, }; private $formRef: any; @@ -121,7 +121,8 @@ export class XFormWrapper extends React.Component { this.closeModalWrapper(); }).catch((err: any) => { const { formMap, formData } = wrapper.xFormWrapper; - onSubmitFaild(err, this.$formRef, formData, formMap); + // tslint:disable-next-line:no-unused-expression + onSubmitFaild && onSubmitFaild(err, this.$formRef, formData, formMap); }).finally(() => { this.setState({ confirmLoading: false, diff --git a/kafka-manager-console/src/component/x-form/index.less b/kafka-manager-console/src/component/x-form/index.less index a08230a6..ed06afaa 100644 --- a/kafka-manager-console/src/component/x-form/index.less +++ b/kafka-manager-console/src/component/x-form/index.less @@ -1,4 +1,5 @@ -.ant-input-number, .ant-form-item-children .ant-select { +.ant-input-number, +.ant-form-item-children .ant-select { width: 314px } @@ -8,4 +9,36 @@ Button:first-child { margin-right: 16px; } +} + +.x-form { + .ant-form-item-label { + line-height: 
diff --git a/kafka-manager-console/src/component/x-form/index.less b/kafka-manager-console/src/component/x-form/index.less
index a08230a6..ed06afaa 100644
--- a/kafka-manager-console/src/component/x-form/index.less
+++ b/kafka-manager-console/src/component/x-form/index.less
@@ -1,4 +1,5 @@
-.ant-input-number, .ant-form-item-children .ant-select {
+.ant-input-number,
+.ant-form-item-children .ant-select {
   width: 314px
 }

@@ -8,4 +9,36 @@
   Button:first-child {
     margin-right: 16px;
   }
+}
+
+.x-form {
+  .ant-form-item-label {
+    line-height: 32px;
+  }
+
+  .ant-form-item-control {
+    line-height: 32px;
+  }
+}
+
+.prompt-info {
+  color: #ccc;
+  font-size: 12px;
+  line-height: 20px;
+  display: block;
+
+  &.inline {
+    margin-left: 16px;
+    display: inline-block;
+
+    font-family: PingFangSC-Regular;
+    font-size: 12px;
+    color: #042866;
+    letter-spacing: 0;
+    text-align: justify;
+
+    .anticon {
+      margin-right: 6px;
+    }
+  }
+}
\ No newline at end of file
diff --git a/kafka-manager-console/src/component/x-form/index.tsx b/kafka-manager-console/src/component/x-form/index.tsx
index dc435d0f..cd65366b 100755
--- a/kafka-manager-console/src/component/x-form/index.tsx
+++ b/kafka-manager-console/src/component/x-form/index.tsx
@@ -85,6 +85,10 @@ class XForm extends React.Component {
       initialValue = false;
     }

+    if (formItem.type === FormItemType.select) {
+      initialValue = initialValue || undefined;
+    }
+
     // if (formItem.type === FormItemType.select && formItem.attrs
     //   && ['tags'].includes(formItem.attrs.mode)) {
     //   initialValue = formItem.defaultValue ? [formItem.defaultValue] : [];
@@ -105,7 +109,7 @@
     const { form, formData, formMap, formLayout, layout } = this.props;
     const { getFieldDecorator } = form;
     return (
-
({})}> + ({})}> {formMap.map(formItem => { const { initialValue, valuePropName } = this.handleFormItem(formItem, formData); const getFieldValue = { @@ -131,7 +135,13 @@ class XForm extends React.Component { )} {formItem.renderExtraElement ? formItem.renderExtraElement() : null} {/* 添加保存时间提示文案 */} - {formItem.attrs?.prompttype ? {formItem.attrs.prompttype} : null} + {formItem.attrs?.prompttype ? + + {formItem.attrs?.prompticon ? + : null} + {formItem.attrs.prompttype} + + : null} ); })} diff --git a/kafka-manager-console/src/container/admin/cluster-detail/cluster-overview.tsx b/kafka-manager-console/src/container/admin/cluster-detail/cluster-overview.tsx index 86b0b67b..5391d9f2 100644 --- a/kafka-manager-console/src/container/admin/cluster-detail/cluster-overview.tsx +++ b/kafka-manager-console/src/container/admin/cluster-detail/cluster-overview.tsx @@ -30,13 +30,13 @@ export class ClusterOverview extends React.Component { const content = this.props.basicInfo as IMetaData; const gmtCreate = moment(content.gmtCreate).format(timeFormat); const clusterContent = [{ - value: content.clusterName, + value: `${content.clusterName}${content.haRelation === 0 ? '(备)' : content.haRelation === 1 ? '(主)' : content.haRelation === 2 ? '(主&备)' : ''}`, label: '集群名称', - }, + }, // { // value: clusterTypeMap[content.mode], // label: '集群类型', - // }, + // }, { value: gmtCreate, label: '接入时间', @@ -50,6 +50,9 @@ export class ClusterOverview extends React.Component { }, { value: content.zookeeper, label: 'Zookeeper', + }, { + value: `${content.mutualBackupClusterName || '-'}${content.haRelation === 0 ? '(主)' : content.haRelation === 1 ? '(备)' : content.haRelation === 2 ? '(主&备)' : ''}`, + label: '互备集群', }]; return ( <> @@ -64,18 +67,18 @@ export class ClusterOverview extends React.Component { ))} {clusterInfo.map((item: ILabelValue, index: number) => ( - - - - copyString(item.value)} - type="copy" - className="didi-theme overview-theme" - /> - {item.value} - - - + + + + copyString(item.value)} + type="copy" + className="didi-theme overview-theme" + /> + {item.value} + + + ))} diff --git a/kafka-manager-console/src/container/admin/cluster-detail/cluster-topic.tsx b/kafka-manager-console/src/container/admin/cluster-detail/cluster-topic.tsx index c2d3aa54..e66a0a73 100644 --- a/kafka-manager-console/src/container/admin/cluster-detail/cluster-topic.tsx +++ b/kafka-manager-console/src/container/admin/cluster-detail/cluster-topic.tsx @@ -118,10 +118,10 @@ export class ClusterTopic extends SearchAndFilterContainer { public renderClusterTopicList() { const clusterColumns = [ { - title: 'Topic名称', + title: `Topic名称`, dataIndex: 'topicName', key: 'topicName', - width: '120px', + width: '140px', sorter: (a: IClusterTopics, b: IClusterTopics) => a.topicName.charCodeAt(0) - b.topicName.charCodeAt(0), render: (text: string, record: IClusterTopics) => { return ( @@ -130,7 +130,7 @@ export class ClusterTopic extends SearchAndFilterContainer { // tslint:disable-next-line:max-line-length href={`${urlPrefix}/topic/topic-detail?clusterId=${record.clusterId || ''}&topic=${record.topicName || ''}&isPhysicalClusterId=true®ion=${region.currentRegion}`} > - {text} + {text}{record.haRelation === 0 ? '(备)' : record.haRelation === 1 ? '(主)' : record.haRelation === 2 ? 
'(主&备)' : ''} ); }, @@ -208,23 +208,27 @@ export class ClusterTopic extends SearchAndFilterContainer { { title: '操作', width: '120px', - render: (value: string, item: IClusterTopics) => ( - <> - this.getBaseInfo(item)} className="action-button">编辑 - this.expandPartition(item)} className="action-button">扩分区 - {/* this.expandPartition(item)} className="action-button">删除 */} - this.confirmDetailTopic(item)} - // onConfirm={() => this.deleteTopic(item)} - cancelText="取消" - okText="确认" - > - 删除 - - - ), + render: (value: string, item: IClusterTopics) => { + if (item.haRelation === 0) return '-'; + + return ( + <> + this.getBaseInfo(item)} className="action-button">编辑 + this.expandPartition(item)} className="action-button">扩分区 + {/* this.expandPartition(item)} className="action-button">删除 */} + this.confirmDetailTopic(item)} + // onConfirm={() => this.deleteTopic(item)} + cancelText="取消" + okText="确认" + > + 删除 + + + ); + }, }, ]; if (users.currentUser.role !== 2) { diff --git a/kafka-manager-console/src/container/admin/cluster-detail/logical-cluster.tsx b/kafka-manager-console/src/container/admin/cluster-detail/logical-cluster.tsx index b0ae63f4..25688649 100644 --- a/kafka-manager-console/src/container/admin/cluster-detail/logical-cluster.tsx +++ b/kafka-manager-console/src/container/admin/cluster-detail/logical-cluster.tsx @@ -73,6 +73,7 @@ export class LogicalCluster extends SearchAndFilterContainer { key: 'mode', render: (value: number) => { let val = ''; + // tslint:disable-next-line:no-unused-expression cluster.clusterModes && cluster.clusterModes.forEach((ele: any) => { if (value === ele.code) { val = ele.message; @@ -206,6 +207,7 @@ export class LogicalCluster extends SearchAndFilterContainer { } public render() { + const clusterModes = cluster.clusterModes; return (
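The `haRelation` code is mapped to a role suffix in several renderers above (cluster name, mutual-backup cluster, and topic name), each time as an inline ternary chain. A hypothetical helper, not in this patch, that would centralize the 0/1/2 mapping; the `invert` flag reproduces the mutual-backup column, which displays the opposite role of the current cluster:

```typescript
// Hypothetical helper consolidating the repeated haRelation ternaries (not in this patch).
// haRelation: 0 = standby (备), 1 = active (主), 2 = both (主&备); anything else -> no suffix.
function getHaRelationSuffix(haRelation: number | null | undefined, invert = false): string {
  switch (haRelation) {
    case 0: return invert ? '(主)' : '(备)';
    case 1: return invert ? '(备)' : '(主)';
    case 2: return '(主&备)';
    default: return '';
  }
}

// cluster-overview.tsx:  `${content.clusterName}${getHaRelationSuffix(content.haRelation)}`
// mutual-backup column:  `${content.mutualBackupClusterName || '-'}${getHaRelationSuffix(content.haRelation, true)}`
```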
    diff --git a/kafka-manager-console/src/container/admin/cluster-list/index.less b/kafka-manager-console/src/container/admin/cluster-list/index.less index e69de29b..54bae32c 100644 --- a/kafka-manager-console/src/container/admin/cluster-list/index.less +++ b/kafka-manager-console/src/container/admin/cluster-list/index.less @@ -0,0 +1,381 @@ +.switch-style { + &.ant-switch { + min-width: 32px; + height: 20px; + line-height: 18px; + + ::after { + height: 16px; + width: 16px; + } + } + + &.ant-switch-loading-icon, + &.ant-switch::after { + height: 16px; + width: 16px; + } +} + +.expanded-table { + width: auto ! important; + + .ant-table-thead { + // visibility: hidden; + display: none; + } + + .ant-table-tbody>tr>td { + background-color: #FAFAFA; + border-bottom: none; + } +} + +tr.ant-table-expanded-row td>.expanded-table { + padding: 10px; + // margin: -13px 0px -14px ! important; + border: none; +} + +.cluster-tag { + background: #27D687; + border-radius: 2px; + font-family: PingFangSC-Medium; + color: #FFFFFF; + letter-spacing: 0; + text-align: justify; + -webkit-transform: scale(0.5); + margin-right: 0px; +} + +.no-padding { + .ant-modal-body { + padding: 0; + + .attribute-content { + .tag-gray { + font-family: PingFangSC-Regular; + font-size: 12px; + color: #575757; + text-align: center; + line-height: 18px; + padding: 0 4px; + margin: 3px; + height: 20px; + background: #EEEEEE; + border-radius: 5px; + } + + .icon { + zoom: 0.8; + } + + .tag-num { + font-family: PingFangSC-Medium; + text-align: right; + line-height: 13px; + margin-left: 6px; + transform: scale(0.8333); + } + } + + .attribute-tag { + .ant-popover-inner-content { + padding: 12px; + max-width: 480px; + } + + .ant-popover-arrow { + display: none; + } + + .ant-popover-placement-bottom, + .ant-popover-placement-bottomLeft, + .ant-popover-placement-bottomRight { + top: 23px !important; + border-radius: 2px; + } + + .tag-gray { + font-family: PingFangSC-Regular; + font-size: 12px; + color: #575757; + text-align: center; + line-height: 12px; + padding: 0 4px; + margin: 3px; + height: 20px; + background: #EEEEEE; + border-radius: 5px; + } + } + + .col-status { + font-family: PingFangSC-Regular; + font-size: 12px; + letter-spacing: 0; + text-align: justify; + + &.green { + .ant-badge-status-text { + color: #2FC25B; + } + } + + &.black { + .ant-badge-status-text { + color: #575757; + } + } + + &.red { + .ant-badge-status-text { + color: #F5202E; + } + } + } + + .ant-alert-message { + font-family: PingFangSC-Regular; + font-size: 12px; + letter-spacing: 0; + text-align: justify; + } + + .ant-alert-warning { + border: none; + color: #592D00; + padding: 7px 15px 7px 41px; + background: #FFFAE0; + + .ant-alert-message { + color: #592D00 + } + } + + .ant-alert-info { + border: none; + padding: 7px 15px 7px 41px; + color: #042866; + background: #EFF8FF; + + .ant-alert-message { + color: #042866; + } + } + + .ant-alert-icon { + left: 24px; + top: 10px; + } + + .switch-warning { + .btn { + position: absolute; + top: 60px; + right: 24px; + height: 22px; + width: 64px; + padding: 0px; + + &.disabled { + top: 77px; + } + + button { + height: 22px; + width: 64px; + padding: 0px; + } + + &.loading { + width: 80px; + + button { + height: 22px; + width: 88px; + padding: 0px 0px 0px 12px; + } + } + } + } + + .modal-table-content { + padding: 0px 24px 16px; + + .ant-table-small { + border: none; + border-top: 1px solid #e8e8e8; + + .ant-table-thead { + background: #FAFAFA; + } + } + } + + .modal-table-download { + height: 40px; + line-height: 
40px; + text-align: center; + border-top: 1px solid #e8e8e8; + } + + .ant-form { + padding: 18px 24px 0px; + + .ant-col-3 { + width: 9.5%; + } + + .ant-form-item-label { + text-align: left; + } + + .no-label { + .ant-col-21 { + width: 100%; + } + + .transfe-list { + .ant-transfer-list { + height: 359px; + } + } + + .ant-transfer-list { + width: 249px; + border: 1px solid #E8E8E8; + border-radius: 8px; + + .ant-transfer-list-header-title { + font-family: PingFangSC-Regular; + font-size: 12px; + color: #252525; + letter-spacing: 0; + text-align: right; + } + + .ant-transfer-list-body-search-wrapper { + padding: 19px 16px 6px; + + input { + height: 27px; + background: #FAFAFA; + border-radius: 8px; + border: none; + } + + .ant-transfer-list-search-action { + line-height: 27px; + height: 27px; + top: 19px; + } + } + } + + .ant-transfer-list-header { + border-radius: 8px 8px 0px 0px; + padding: 16px; + } + } + + .ant-transfer-customize-list .ant-transfer-list-body-customize-wrapper { + padding: 0px; + margin: 0px 16px; + background: #FAFAFA; + border-radius: 8px; + + .ant-table-header-column { + font-family: PingFangSC-Regular; + font-size: 12px; + color: #575757; + letter-spacing: 0; + text-align: justify; + } + + .ant-table-thead>tr { + border: none; + background: #FAFAFA; + } + + .ant-table-tbody>tr>td { + border: none; + background: #FAFAFA; + } + + .ant-table-body { + background: #FAFAFA; + } + } + + .ant-table-selection-column { + + .ant-table-header-column { + opacity: 0; + } + } + } + + .log-process { + height: 56px; + background: #FAFAFA; + padding: 6px 8px; + margin-bottom: 15px; + + .name { + display: flex; + color: #575757; + justify-content: space-between; + } + } + + .log-panel { + padding: 24px; + font-family: PingFangSC-Regular; + font-size: 12px; + + .title { + color: #252525; + letter-spacing: 0; + text-align: justify; + margin-bottom: 15px; + + .divider { + display: inline-block; + border-left: 2px solid #F38031; + height: 9px; + margin-right: 6px; + } + } + + .log-info { + color: #575757; + letter-spacing: 0; + text-align: justify; + margin-bottom: 10px; + + .text-num { + font-size: 14px; + } + + .warning-num { + color: #F38031; + font-size: 14px; + } + } + + .log-table { + margin-bottom: 24px; + + .ant-table-small { + border: none; + border-top: 1px solid #e8e8e8; + + .ant-table-thead { + background: #FAFAFA; + } + } + } + } + } +} \ No newline at end of file diff --git a/kafka-manager-console/src/container/admin/cluster-list/index.tsx b/kafka-manager-console/src/container/admin/cluster-list/index.tsx index cdd197fa..08769507 100644 --- a/kafka-manager-console/src/container/admin/cluster-list/index.tsx +++ b/kafka-manager-console/src/container/admin/cluster-list/index.tsx @@ -1,8 +1,8 @@ import * as React from 'react'; -import { Modal, Table, Button, notification, message, Tooltip, Icon, Popconfirm, Alert, Popover } from 'component/antd'; +import { Modal, Table, Button, notification, message, Tooltip, Icon, Popconfirm, Alert, Dropdown } from 'component/antd'; import { wrapper } from 'store'; import { observer } from 'mobx-react'; -import { IXFormWrapper, IMetaData, IRegister } from 'types/base-type'; +import { IXFormWrapper, IMetaData, IRegister, ILabelValue } from 'types/base-type'; import { admin } from 'store/admin'; import { users } from 'store/users'; import { registerCluster, createCluster, pauseMonitoring } from 'lib/api'; @@ -10,11 +10,14 @@ import { SearchAndFilterContainer } from 'container/search-filter'; import { cluster } from 'store/cluster'; import { 
customPagination } from 'constants/table'; import { urlPrefix } from 'constants/left-menu'; -import { indexUrl } from 'constants/strategy' +import { indexUrl } from 'constants/strategy'; import { region } from 'store'; import './index.less'; -import Monacoeditor from 'component/editor/monacoEditor'; import { getAdminClusterColumns } from '../config'; +import { FormItemType } from 'component/x-form'; +import { TopicHaRelationWrapper } from 'container/modal/admin/TopicHaRelation'; +import { TopicSwitchWrapper } from 'container/modal/admin/TopicHaSwitch'; +import { TopicSwitchLog } from 'container/modal/admin/SwitchTaskLog'; const { confirm } = Modal; @@ -22,6 +25,10 @@ const { confirm } = Modal; export class ClusterList extends SearchAndFilterContainer { public state = { searchKey: '', + haVisible: false, + switchVisible: false, + logVisible: false, + currentCluster: {} as IMetaData, }; private xFormModal: IXFormWrapper; @@ -36,7 +43,26 @@ export class ClusterList extends SearchAndFilterContainer { ); } + public updateFormModal(value: boolean, metaList: ILabelValue[]) { + const formMap = wrapper.xFormWrapper.formMap; + formMap[1].attrs.prompttype = !value ? '' : metaList.length ? '已设置为高可用集群,请选择所关联的主集群' : '当前暂无可用集群进行关联高可用关系,请先添加集群'; + formMap[1].attrs.prompticon = 'true'; + formMap[2].invisible = !value; + formMap[2].attrs.disabled = !metaList.length; + formMap[6].rules[0].required = value; + + // tslint:disable-next-line:no-unused-expression + wrapper.ref && wrapper.ref.updateFormMap$(formMap, wrapper.xFormWrapper.formData); + + } + public createOrRegisterCluster(item: IMetaData) { + const self = this; + const metaList = Array.from(admin.metaList).filter(item => item.haRelation === null).map(item => ({ + label: item.clusterName, + value: item.clusterId, + })); + this.xFormModal = { formMap: [ { @@ -51,6 +77,38 @@ export class ClusterList extends SearchAndFilterContainer { disabled: item ? true : false, }, }, + { + key: 'ha', + label: '高可用', + type: FormItemType._switch, + invisible: item ? true : false, + rules: [{ + required: false, + }], + attrs: { + className: 'switch-style', + prompttype: '', + prompticon: '', + prompticomclass: '', + promptclass: 'inline', + onChange(value: boolean) { + self.updateFormModal(value, metaList); + }, + }, + }, + { + key: 'activeClusterId', + label: '主集群', + type: FormItemType.select, + options: metaList, + invisible: true, + rules: [{ + required: false, + }], + attrs: { + placeholder: '请选择主集群', + }, + }, { key: 'zookeeper', label: 'zookeeper地址', @@ -130,9 +188,9 @@ export class ClusterList extends SearchAndFilterContainer { }], attrs: { placeholder: `请输入安全协议,例如: -{ - "security.protocol": "SASL_PLAINTEXT", - "sasl.mechanism": "PLAIN", +{ + "security.protocol": "SASL_PLAINTEXT", + "sasl.mechanism": "PLAIN", "sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\\"xxxxxx\\" password=\\"xxxxxx\\";" }`, rows: 8, @@ -162,17 +220,18 @@ export class ClusterList extends SearchAndFilterContainer { visible: true, width: 590, title: item ? 
'编辑' : '接入集群', + isWaitting: true, onSubmit: (value: IRegister) => { value.idc = region.currentRegion; if (item) { value.clusterId = item.clusterId; - registerCluster(value).then(data => { - admin.getMetaData(true); + return registerCluster(value).then(data => { + admin.getHaMetaData(); notification.success({ message: '编辑集群成功' }); }); } else { - createCluster(value).then(data => { - admin.getMetaData(true); + return createCluster(value).then(data => { + admin.getHaMetaData(); notification.success({ message: '接入集群成功' }); }); } @@ -186,7 +245,7 @@ export class ClusterList extends SearchAndFilterContainer { const info = item.status === 1 ? '暂停监控' : '开始监控'; const status = item.status === 1 ? 0 : 1; pauseMonitoring(item.clusterId, status).then(data => { - admin.getMetaData(true); + admin.getHaMetaData(); notification.success({ message: `${info}成功` }); }); } @@ -198,7 +257,7 @@ export class ClusterList extends SearchAndFilterContainer { title: <> 删除集群  - + @@ -216,12 +275,34 @@ export class ClusterList extends SearchAndFilterContainer { } admin.deleteCluster(record.clusterId).then(data => { notification.success({ message: '删除成功' }); + admin.getHaMetaData(); }); }, }); }); } + public showDelStandModal = (record: IMetaData) => { + confirm({ + // tslint:disable-next-line:jsx-wrap-multiline + title: '删除集群', + // icon: 'none', + content: <>{record.activeTopicCount ? `当前集群含有主topic,无法删除!` : record.haStatus !== 0 ? `当前集群正在进行主备切换,无法删除!` : `确认删除集群${record.clusterName}吗?`}, + width: 500, + okText: '确认', + cancelText: '取消', + onOk() { + if (record.activeTopicCount || record.haStatus !== 0) { + return; + } + admin.deleteCluster(record.clusterId).then(data => { + notification.success({ message: '删除成功' }); + admin.getHaMetaData(); + }); + }, + }); + } + public deleteMonitorModal = (source: any) => { const cellStyle = { overflow: 'hidden', @@ -275,11 +356,105 @@ export class ClusterList extends SearchAndFilterContainer { return data; } + public expandedRowRender = (record: IMetaData) => { + const dataSource: any = record.haClusterVO ? [record.haClusterVO] : []; + const cols = getAdminClusterColumns(false); + const role = users.currentUser.role; + + if (!record.haClusterVO) return null; + + const haRecord = record.haClusterVO; + + const btnsMenu = ( + <> + + ); + + const noAuthMenu = ( + + ); + + const col = { + title: '操作', + width: 270, + render: (value: string, item: IMetaData) => ( + <> + + Topic高可用关联 + + {item.haStatus !== 0 ? null : + Topic主备切换 + } + {item.haASSwitchJobId ? + 查看日志 + : null} + + + ··· + + + + ), + }; + cols.push(col as any); + return ( + + ); + } + public getColumns = () => { const cols = getAdminClusterColumns(); const role = users.currentUser.role; const col = { title: '操作', + width: 270, render: (value: string, item: IMetaData) => ( <> { @@ -307,10 +482,10 @@ export class ClusterList extends SearchAndFilterContainer { 删除 : - 编辑 - {item.status === 1 ? '暂停监控' : '开始监控'} - 删除 - + 编辑 + {item.status === 1 ? '暂停监控' : '开始监控'} + 删除 + } ), @@ -319,6 +494,20 @@ export class ClusterList extends SearchAndFilterContainer { return cols; } + public openModal(type: string, record: IMetaData) { + this.setState({ + currentCluster: record, + }, () => { + this.handleVisible(type, true); + }); + } + + public handleVisible(type: string, visible: boolean) { + this.setState({ + [type]: visible, + }); + } + public renderClusterList() { const role = users.currentUser.role; return ( @@ -333,8 +522,8 @@ export class ClusterList extends SearchAndFilterContainer { role && role === 2 ? 
: - - + + } @@ -343,26 +532,63 @@ export class ClusterList extends SearchAndFilterContainer {
    ( + record.haClusterVO ? + onExpand(record, e)} /> + : null + )} loading={admin.loading} - dataSource={this.getData(admin.metaList)} + expandedRowRender={this.expandedRowRender} + dataSource={this.getData(admin.haMetaList)} columns={this.getColumns()} pagination={customPagination} /> + {this.state.haVisible && this.handleVisible('haVisible', val)} + visible={this.state.haVisible} + currentCluster={this.state.currentCluster} + reload={() => admin.getHaMetaData()} + formData={{}} + />} + {this.state.switchVisible && + { + admin.getHaMetaData().then((res) => { + const currentRecord = res.find(item => item.clusterId === this.state.currentCluster.clusterId); + currentRecord.haClusterVO.haASSwitchJobId = jobId; + this.openModal('logVisible', currentRecord); + }); + }} + handleVisible={(val: boolean) => this.handleVisible('switchVisible', val)} + visible={this.state.switchVisible} + currentCluster={this.state.currentCluster} + formData={{}} + /> + } + {this.state.logVisible && + admin.getHaMetaData()} + handleVisible={(val: boolean) => this.handleVisible('logVisible', val)} + visible={this.state.logVisible} + currentCluster={this.state.currentCluster} + /> + } ); } public componentDidMount() { admin.getMetaData(true); + admin.getHaMetaData(); cluster.getClusterModes(); admin.getDataCenter(); } public render() { return ( - admin.metaList ? <> {this.renderClusterList()} : null + admin.haMetaList ? <> {this.renderClusterList()} : null ); } } diff --git a/kafka-manager-console/src/container/admin/config.tsx b/kafka-manager-console/src/container/admin/config.tsx index 1f9d6d81..29499bc8 100644 --- a/kafka-manager-console/src/container/admin/config.tsx +++ b/kafka-manager-console/src/container/admin/config.tsx @@ -3,12 +3,13 @@ import { IUser, IUploadFile, IConfigure, IConfigGateway, IMetaData } from 'types import { users } from 'store/users'; import { version } from 'store/version'; import { showApplyModal, showApplyModalModifyPassword, showModifyModal, showConfigureModal, showConfigGatewayModal } from 'container/modal/admin'; -import { Popconfirm, Tooltip } from 'component/antd'; +import { Icon, Popconfirm, Tooltip } from 'component/antd'; import { admin } from 'store/admin'; import { cellStyle } from 'constants/table'; import { timeFormat } from 'constants/strategy'; import { urlPrefix } from 'constants/left-menu'; import moment = require('moment'); +import { Tag } from 'antd'; export const getUserColumns = () => { const columns = [ @@ -28,15 +29,15 @@ export const getUserColumns = () => { showApplyModal(record)}>编辑 showApplyModalModifyPassword(record)}>修改密码 - {record.username == users.currentUser.username ? "" : - users.deleteUser(record.username)} - cancelText="取消" - okText="确认" - > - 删除 - + {record.username === users.currentUser.username ? '' : + users.deleteUser(record.username)} + cancelText="取消" + okText="确认" + > + 删除 + } ); }, @@ -271,33 +272,82 @@ export const getConfigColumns = () => { const renderClusterHref = (value: number | string, item: IMetaData, key: number) => { return ( // 0 暂停监控--不可点击 1 监控中---可正常点击 <> - {item.status === 1 ? {value} - : {value}} + {item.status === 1 ? {value} : + {value}} ); }; -export const getAdminClusterColumns = () => { +const renderTopicNum = (value: number | string, item: IMetaData, key: number, active?: boolean) => { + const show = item.haClusterVO || (!item.haClusterVO && !active); + + if (!show) { + return ( // 0 暂停监控--不可点击 1 监控中---可正常点击 + <> + {item.status === 1 ? 
+ {value} + : + + {value} + } + + ); + } + return ( // 0 暂停监控--不可点击 1 监控中---可正常点击 + <> + {item.status === 1 ? + {value} + <>(主{item.activeTopicCount ?? '-'}/备{item.standbyTopicCount ?? '-'}) + : + + {value} + <>(主{item.activeTopicCount ?? '-'}/备{item.standbyTopicCount ?? '-'}) + } + + ); +}; + +const renderClusterName = (value: number | string, item: IMetaData, key: number, active: boolean) => { + const show = item.haClusterVO || (!item.haClusterVO && !active); + + return ( // 0 暂停监控--不可点击 1 监控中---可正常点击 + <> + {item.status === 1 ? + {value} : + {value}} + {active ? <> + {item.haClusterVO ? HA : null} + {item.haClusterVO && item.haStatus !== 0 ? + : null} + : null} + + ); +}; +export const getAdminClusterColumns = (active = true) => { return [ { title: '物理集群ID', dataIndex: 'clusterId', key: 'clusterId', - sorter: (a: IMetaData, b: IMetaData) => b.clusterId - a.clusterId, + sorter: (a: IMetaData, b: IMetaData) => a.clusterId - b.clusterId, + width: active ? 115 : 111, + render: (text: number) => active ? text : `(${text ?? 0})`, }, { title: '物理集群名称', dataIndex: 'clusterName', key: 'clusterName', sorter: (a: IMetaData, b: IMetaData) => a.clusterName.charCodeAt(0) - b.clusterName.charCodeAt(0), - render: (text: string, item: IMetaData) => renderClusterHref(text, item, 1), + render: (text: string, item: IMetaData) => renderClusterName(text, item, 1, active), + width: 235, }, { title: 'Topic数', dataIndex: 'topicNum', key: 'topicNum', sorter: (a: any, b: IMetaData) => b.topicNum - a.topicNum, - render: (text: number, item: IMetaData) => renderClusterHref(text, item, 2), + render: (text: number, item: IMetaData) => renderTopicNum(text, item, 2, active), + width: 140, }, { title: 'Broker数', @@ -305,6 +355,7 @@ export const getAdminClusterColumns = () => { key: 'brokerNum', sorter: (a: IMetaData, b: IMetaData) => b.brokerNum - a.brokerNum, render: (text: number, item: IMetaData) => renderClusterHref(text, item, 3), + width: 140, }, { title: 'Consumer数', @@ -312,6 +363,8 @@ export const getAdminClusterColumns = () => { key: 'consumerGroupNum', sorter: (a: IMetaData, b: IMetaData) => b.consumerGroupNum - a.consumerGroupNum, render: (text: number, item: IMetaData) => renderClusterHref(text, item, 4), + width: 150, + }, { title: 'Region数', @@ -319,6 +372,8 @@ export const getAdminClusterColumns = () => { key: 'regionNum', sorter: (a: IMetaData, b: IMetaData) => b.regionNum - a.regionNum, render: (text: number, item: IMetaData) => renderClusterHref(text, item, 5), + width: 140, + }, { title: 'Controllerld', @@ -326,12 +381,15 @@ export const getAdminClusterColumns = () => { key: 'controllerId', sorter: (a: IMetaData, b: IMetaData) => b.controllerId - a.controllerId, render: (text: number, item: IMetaData) => renderClusterHref(text, item, 7), + width: 150, + }, { title: '监控中', dataIndex: 'status', key: 'status', sorter: (a: IMetaData, b: IMetaData) => b.key - a.key, + width: 140, render: (value: number) => value === 1 ? 
: , }, diff --git a/kafka-manager-console/src/container/cluster/my-cluster.tsx b/kafka-manager-console/src/container/cluster/my-cluster.tsx index 3cb6115f..f8c5a7bc 100644 --- a/kafka-manager-console/src/container/cluster/my-cluster.tsx +++ b/kafka-manager-console/src/container/cluster/my-cluster.tsx @@ -44,7 +44,7 @@ export class MyCluster extends SearchAndFilterContainer { label: '所属应用', rules: [{ required: true, message: '请选择所属应用' }], type: 'select', - options: app.data.map((item) => { + options: app.clusterAppData.map((item) => { return { label: item.name, value: item.appId, @@ -135,8 +135,8 @@ export class MyCluster extends SearchAndFilterContainer { if (!cluster.clusterModes.length) { cluster.getClusterModes(); } - if (!app.data.length) { - app.getAppList(); + if (!app.clusterAppData.length) { + app.getAppListByClusterId(-1); } } diff --git a/kafka-manager-console/src/container/header/index.tsx b/kafka-manager-console/src/container/header/index.tsx index 3805e653..0c5ee512 100644 --- a/kafka-manager-console/src/container/header/index.tsx +++ b/kafka-manager-console/src/container/header/index.tsx @@ -145,7 +145,7 @@ export const Header = observer((props: IHeader) => {
    LogiKM - v2.6.1 + v2.8.0 {/* 添加版本超链接 */}
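The `SwitchTaskLog` modal added below polls the switch job every 10 seconds, fetching the incremental log (resuming from the last `endLogId`), the aggregate job state, and the per-topic detail, and stops once every topic has switched (`jobNu === successNu`). A minimal sketch of that polling loop, assuming the `lib/api` helpers behave as the component uses them:

```typescript
// Minimal sketch of SwitchTaskLog's polling loop (assumes lib/api helper behavior).
import { getJobDetail, getJobState, getJobLog } from 'lib/api';

function pollSwitchJob(jobId: number, onTick: (state: any) => void): () => void {
  let endLogId: number | undefined;
  const timer = window.setInterval(async () => {
    const log = await getJobLog(jobId, endLogId);   // incremental fetch from last log id
    endLogId = log?.endLogId;
    const state = await getJobState(jobId);
    await getJobDetail(jobId);                      // refreshes the per-topic table
    if (state && state.jobNu === state.successNu) {
      window.clearInterval(timer);                  // every topic switched: stop polling
    }
    onTick(state);
  }, 10 * 1000);
  return () => window.clearInterval(timer);         // caller clears on unmount
}
```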
    diff --git a/kafka-manager-console/src/container/modal/admin/SwitchTaskLog.tsx b/kafka-manager-console/src/container/modal/admin/SwitchTaskLog.tsx new file mode 100644 index 00000000..297f2d08 --- /dev/null +++ b/kafka-manager-console/src/container/modal/admin/SwitchTaskLog.tsx @@ -0,0 +1,300 @@ +import * as React from 'react'; +import { Modal, Progress, Tooltip } from 'antd'; +import { IMetaData } from 'types/base-type'; +import { Alert, Badge, Button, Input, message, notification, Table } from 'component/antd'; +import { getJobDetail, getJobState, getJobLog, switchAsJobs } from 'lib/api'; +import moment from 'moment'; +import { timeFormat } from 'constants/strategy'; + +interface IProps { + reload: any; + visible?: boolean; + handleVisible?: any; + currentCluster?: IMetaData; +} + +interface IJobState { + failedNu: number; + jobNu: number; + runningNu: number; + successNu: number; + waitingNu: number; + runningInTimeoutNu: number; + progress: number; +} + +interface IJobDetail { + standbyClusterPhyId: number; + status: number; + sumLag: number; + timeoutUnitSecConfig: number; + topicName: string; + activeClusterPhyName: string; + standbyClusterPhyName: string; +} + +interface ILog { + bizKeyword: string; + bizType: number; + content: string; + id: number; + printTime: number; +} +interface IJobLog { + logList: ILog[]; + endLogId: number; +} +const STATUS_MAP = { + '-1': '未知', + '30': '运行中', + '32': '超时运行中', + '101': '成功', + '102': '失败', +} as any; +const STATUS_COLORS = { + '-1': '#575757', + '30': '#575757', + '32': '#F5202E', + '101': '#2FC25B', + '102': '#F5202E', +} as any; +const STATUS_COLOR_MAP = { + '-1': 'black', + '30': 'black', + '32': 'red', + '101': 'green', + '102': 'red', +} as any; + +const getFilters = () => { + const keys = Object.keys(STATUS_MAP); + const filters = []; + for (const key of keys) { + filters.push({ + text: STATUS_MAP[key], + value: key, + }); + } + return filters; +}; + +const columns = [ + { + dataIndex: 'key', + title: '编号', + width: 60, + }, + { + dataIndex: 'topicName', + title: 'Topic名称', + width: 120, + ellipsis: true, + }, + { + dataIndex: 'sumLag', + title: '延迟', + width: 100, + render: (value: number) => value ?? 
'-', + }, + { + dataIndex: 'status', + title: '状态', + width: 100, + filters: getFilters(), + onFilter: (value: string, record: IJobDetail) => record.status === Number(value), + render: (t: number) => ( + + + + ), + }, +]; + +export class TopicSwitchLog extends React.Component { + public state = { + radioCheck: 'all', + jobDetail: [] as IJobDetail[], + jobState: {} as IJobState, + jobLog: {} as IJobLog, + textStr: '', + primaryTargetKeys: [] as string[], + loading: false, + }; + public timer = null as number; + public jobId = this.props.currentCluster?.haClusterVO?.haASSwitchJobId as number; + + public handleOk = () => { + this.props.handleVisible(false); + this.props.reload(); + } + + public handleCancel = () => { + this.props.handleVisible(false); + this.props.reload(); + } + + public iTimer = () => { + this.timer = window.setInterval(() => { + const { jobLog } = this.state; + this.getContentJobLog(jobLog.endLogId); + this.getContentJobState(); + this.getContentJobDetail(); + }, 10 * 1 * 1000); + } + + public getTextAreaStr = (logList: ILog[]) => { + const strs = []; + + for (const item of logList) { + strs.push(`${moment(item.printTime).format(timeFormat)} ${item.content}`); + } + + return strs.join(`\n`); + } + + public getContentJobLog = (startId?: number) => { + getJobLog(this.jobId, startId).then((res: IJobLog) => { + const { jobLog } = this.state; + const logList = (jobLog.logList || []); + logList.push(...(res?.logList || [])); + + const newJobLog = { + endLogId: res?.endLogId, + logList, + }; + + this.setState({ + textStr: this.getTextAreaStr(logList), + jobLog: newJobLog, + }); + }); + } + + public getContentJobState = () => { + getJobState(this.jobId).then((res: IJobState) => { + // 成功后清除调用 + if (res?.jobNu === res.successNu) { + clearInterval(this.timer); + } + this.setState({ + jobState: res || {}, + }); + }); + } + public getContentJobDetail = () => { + getJobDetail(this.jobId).then((res: IJobDetail[]) => { + this.setState({ + jobDetail: (res || []).map((row, index) => ({ + ...row, + key: index, + })), + }); + }); + } + + public switchJobs = () => { + const { jobState } = this.state; + Modal.confirm({ + title: '强制切换', + content: `当前有${jobState.runningNu}个Topic切换中,${jobState.runningInTimeoutNu}个Topic切换超时,强制切换会使这些Topic有数据丢失的风险,确定强制切换吗?`, + onOk: () => { + this.setState({ + loading: true, + }); + switchAsJobs(this.jobId, { + action: 'force', + allJumpWaitInSync: true, + jumpWaitInSyncActiveTopicList: [], + }).then(res => { + message.success('强制切换成功'); + }).finally(() => { + this.setState({ + loading: false, + }); + }); + }, + }); + } + + public componentWillUnmount() { + clearInterval(this.timer); + } + + public componentDidMount() { + this.getContentJobDetail(); + this.getContentJobState(); + this.getContentJobLog(); + setTimeout(this.iTimer, 0); + } + + public render() { + const { visible, currentCluster } = this.props; + const { jobState, jobDetail, textStr, loading } = this.state; + const runtimeJob = jobDetail.filter(item => item.status === 32); + const percent = jobState?.progress; + return ( + + {runtimeJob.length ? + + : null} +
    + + + +
    + +
    +
    +
    + Topic切换详情: +
    +
    +
    + 源集群 {jobDetail?.[0]?.standbyClusterPhyName || ''} + 目标集群 {jobDetail?.[0]?.activeClusterPhyName || ''} +
    + +
    +
    + Topic总数 {jobState.jobNu ?? '-'} 个, + 切换成功 {jobState.successNu ?? '-'} 个, + 切换超时 {jobState.failedNu ?? '-'} 个, + 待切换 {jobState.waitingNu ?? '-'} 个。 +
    +
    +
    +
    + 集群切换日志: +
    +
    + +
    +
    + + + ); + } +} diff --git a/kafka-manager-console/src/container/modal/admin/TopicHaRelation.tsx b/kafka-manager-console/src/container/modal/admin/TopicHaRelation.tsx new file mode 100644 index 00000000..8f3b128d --- /dev/null +++ b/kafka-manager-console/src/container/modal/admin/TopicHaRelation.tsx @@ -0,0 +1,351 @@ +import * as React from 'react'; +import { admin } from 'store/admin'; +import { Modal, Form, Radio } from 'antd'; +import { IBrokersMetadata, IBrokersRegions, IMetaData } from 'types/base-type'; +import { Alert, message, notification, Table, Tooltip, Transfer } from 'component/antd'; +import { getClusterHaTopicsStatus, setHaTopics, unbindHaTopics } from 'lib/api'; +import { cellStyle } from 'constants/table'; + +const layout = { + labelCol: { span: 3 }, + wrapperCol: { span: 21 }, +}; + +interface IXFormProps { + form: any; + reload: any; + formData?: any; + visible?: boolean; + handleVisible?: any; + currentCluster?: IMetaData; +} + +interface IHaTopic { + clusterId: number; + clusterName: string; + haRelation: number; + topicName: string; + key: string; + disabled?: boolean; +} + +const resColumns = [ + { + title: 'TopicName', + dataIndex: 'topicName', + key: 'topicName', + width: 120, + }, + { + title: '状态', + dataIndex: 'code', + key: 'code', + width: 60, + render: (t: number) => { + return ( + + {t === 0 ? '成功' : '失败'} + + ); + }, + }, + { + title: '原因', + dataIndex: 'message', + key: 'message', + width: 125, + onCell: () => ({ + style: { + maxWidth: 120, + ...cellStyle, + }, + }), + render: (text: string) => { + return ( + + {text} + ); + }, + }, +]; +class TopicHaRelation extends React.Component { + public state = { + radioCheck: 'spec', + haTopics: [] as IHaTopic[], + targetKeys: [] as string[], + confirmLoading: false, + firstMove: true, + primaryActiveKeys: [] as string[], + primaryStandbyKeys: [] as string[], + }; + + public handleOk = () => { + this.props.form.validateFields((err: any, values: any) => { + const unbindTopics = []; + const bindTopics = []; + + if (values.rule === 'all') { + setHaTopics({ + all: true, + activeClusterId: this.props.currentCluster.clusterId, + standbyClusterId: this.props.currentCluster.haClusterVO.clusterId, + topicNames: [], + }).then(res => { + handleMsg(res, '关联成功'); + this.setState({ + confirmLoading: false, + }); + this.handleCancel(); + }); + return; + } + + for (const item of this.state.primaryStandbyKeys) { + if (!this.state.targetKeys.includes(item)) { + unbindTopics.push(item); + } + } + for (const item of this.state.targetKeys) { + if (!this.state.primaryStandbyKeys.includes(item)) { + bindTopics.push(item); + } + } + + if (!unbindTopics.length && !bindTopics.length) { + return message.info('请选择您要操作的Topic'); + } + + const handleMsg = (res: any[], successTip: string) => { + const errorRes = res.filter(item => item.code !== 0); + + if (errorRes.length) { + Modal.confirm({ + title: '执行结果', + width: 520, + icon: null, + content: ( +
    + ), + }); + } else { + notification.success({ message: successTip }); + } + + this.props.reload(); + }; + + if (bindTopics.length) { + this.setState({ + confirmLoading: true, + }); + setHaTopics({ + all: false, + activeClusterId: this.props.currentCluster.clusterId, + standbyClusterId: this.props.currentCluster.haClusterVO.clusterId, + topicNames: bindTopics, + }).then(res => { + this.setState({ + confirmLoading: false, + }); + this.handleCancel(); + handleMsg(res, '关联成功'); + }); + } + + if (unbindTopics.length) { + this.setState({ + confirmLoading: true, + }); + unbindHaTopics({ + all: false, + activeClusterId: this.props.currentCluster.clusterId, + standbyClusterId: this.props.currentCluster.haClusterVO.clusterId, + topicNames: unbindTopics, + }).then(res => { + this.setState({ + confirmLoading: false, + }); + this.handleCancel(); + handleMsg(res, '解绑成功'); + }); + } + }); + } + + public handleCancel = () => { + this.props.handleVisible(false); + this.props.form.resetFields(); + } + + public handleRadioChange = (e: any) => { + this.setState({ + radioCheck: e.target.value, + }); + } + + public isPrimaryStatus = (targetKeys: string[]) => { + const { primaryStandbyKeys } = this.state; + let isReset = false; + // 判断当前移动是否还原为最初的状态 + if (primaryStandbyKeys.length === targetKeys.length) { + targetKeys.sort((a, b) => +a - (+b)); + primaryStandbyKeys.sort((a, b) => +a - (+b)); + let i = 0; + while (i < targetKeys.length) { + if (targetKeys[i] === primaryStandbyKeys[i]) { + i++; + } else { + break; + } + } + isReset = i === targetKeys.length; + } + return isReset; + } + + public setTopicsStatus = (targetKeys: string[], disabled: boolean, isAll = false) => { + const { haTopics } = this.state; + const newTopics = Array.from(haTopics); + if (isAll) { + for (let i = 0; i < haTopics.length; i++) { + newTopics[i].disabled = disabled; + } + } else { + for (const key of targetKeys) { + const index = haTopics.findIndex(item => item.key === key); + if (index > -1) { + newTopics[index].disabled = disabled; + } + } + } + this.setState(({ + haTopics: newTopics, + })); + } + + public onTransferChange = (targetKeys: string[], direction: string, moveKeys: string[]) => { + const { primaryStandbyKeys, firstMove, primaryActiveKeys } = this.state; + // 判断当前移动是否还原为最初的状态 + const isReset = this.isPrimaryStatus(targetKeys); + if (firstMove) { + const primaryKeys = direction === 'right' ? 
primaryStandbyKeys : primaryActiveKeys; + this.setTopicsStatus(primaryKeys, true, false); + this.setState(({ + firstMove: false, + targetKeys, + })); + return; + } + + // 如果是还原为初始状态则还原禁用状态 + if (isReset) { + this.setTopicsStatus([], false, true); + this.setState(({ + firstMove: true, + targetKeys, + })); + return; + } + + this.setState({ + targetKeys, + }); + } + + public componentDidMount() { + Promise.all([ + getClusterHaTopicsStatus(this.props.currentCluster.clusterId, true), + getClusterHaTopicsStatus(this.props.currentCluster.clusterId, false), + ]).then(([activeRes, standbyRes]: IHaTopic[][]) => { + activeRes = (activeRes || []).map(row => ({ + ...row, + key: row.topicName, + })).filter(item => item.haRelation === null); + standbyRes = (standbyRes || []).map(row => ({ + ...row, + key: row.topicName, + })).filter(item => item.haRelation === 1 || item.haRelation === 0); + this.setState({ + haTopics: [].concat([...activeRes, ...standbyRes]).sort((a, b) => a.topicName.localeCompare(b.topicName)), + primaryActiveKeys: activeRes.map(row => row.topicName), + primaryStandbyKeys: standbyRes.map(row => row.topicName), + targetKeys: standbyRes.map(row => row.topicName), + }); + }); + } + + public render() { + const { formData = {} as any, visible, currentCluster } = this.props; + const { getFieldDecorator } = this.props.form; + let metadata = [] as IBrokersMetadata[]; + metadata = admin.brokersMetadata ? admin.brokersMetadata : metadata; + let regions = [] as IBrokersRegions[]; + regions = admin.brokersRegions ? admin.brokersRegions : regions; + return ( + <> + + + + {/* + {getFieldDecorator('rule', { + initialValue: 'spec', + rules: [{ + required: true, + message: '请选择规则', + }], + })( + 应用于所有Topic + 应用于特定Topic + )} + */} + {this.state.radioCheck === 'spec' ? 
+ {getFieldDecorator('topicNames', { + initialValue: this.state.targetKeys, + rules: [{ + required: false, + message: '请选择Topic', + }], + })( + item.topicName} + titles={['未关联', '已关联']} + locale={{ + itemUnit: '', + itemsUnit: '', + }} + />, + )} + : ''} + + + + ); + } +} +export const TopicHaRelationWrapper = Form.create()(TopicHaRelation); diff --git a/kafka-manager-console/src/container/modal/admin/TopicHaSwitch.tsx b/kafka-manager-console/src/container/modal/admin/TopicHaSwitch.tsx new file mode 100644 index 00000000..78c5565b --- /dev/null +++ b/kafka-manager-console/src/container/modal/admin/TopicHaSwitch.tsx @@ -0,0 +1,718 @@ +import * as React from 'react'; +import { admin } from 'store/admin'; +import { Modal, Form, Radio, Tag, Popover, Button } from 'antd'; +import { IBrokersMetadata, IBrokersRegions, IMetaData } from 'types/base-type'; +import { Alert, Icon, message, Table, Transfer } from 'component/antd'; +import { getClusterHaTopics, getAppRelatedTopics, createSwitchTask } from 'lib/api'; +import { TooltipPlacement } from 'antd/es/tooltip'; +import * as XLSX from 'xlsx'; +import moment from 'moment'; +import { timeMinute } from 'constants/strategy'; + +const layout = { + labelCol: { span: 3 }, + wrapperCol: { span: 21 }, +}; + +interface IXFormProps { + form: any; + reload: any; + formData?: any; + visible?: boolean; + handleVisible?: any; + currentCluster?: IMetaData; +} + +interface IHaTopic { + clusterId: number; + topicName: string; + key: string; + activeClusterId: number; + consumeAclNum: number; + produceAclNum: number; + standbyClusterId: number; + status: number; + disabled?: boolean; +} + +interface IKafkaUser { + clusterPhyId: number; + kafkaUser: string; + notHaTopicNameList: string[]; + notSelectTopicNameList: string[]; + selectedTopicNameList: string[]; + show: boolean; +} + +const columns = [ + { + dataIndex: 'topicName', + title: '名称', + width: 100, + ellipsis: true, + }, + { + dataIndex: 'produceAclNum', + title: '生产者数量', + width: 80, + }, + { + dataIndex: 'consumeAclNum', + title: '消费者数量', + width: 80, + }, +]; + +const kafkaUserColumn = [ + { + dataIndex: 'kafkaUser', + title: 'kafkaUser', + width: 100, + ellipsis: true, + }, + { + dataIndex: 'selectedTopicNameList', + title: '已选中Topic', + width: 120, + render: (text: string[]) => { + return text?.length ? renderAttributes({ data: text, limit: 3 }) : '-'; + }, + }, + { + dataIndex: 'notSelectTopicNameList', + title: '选中关联Topic', + width: 120, + render: (text: string[]) => { + return text?.length ? renderAttributes({ data: text, limit: 3 }) : '-'; + }, + }, + { + dataIndex: 'notHaTopicNameList', + title: '未建立HA Topic', + width: 120, + render: (text: string[]) => { + return text?.length ? renderAttributes({ data: text, limit: 3 }) : '-'; + }, + }, +]; + +export const renderAttributes = (params: { + data: any; + type?: string; + limit?: number; + splitType?: string; + placement?: TooltipPlacement; +}) => { + const { data, type = ',', limit = 2, splitType = ';', placement } = params; + let attrArray = data; + if (!Array.isArray(data) && data) { + attrArray = data.split(type); + } + const showItems = attrArray.slice(0, limit) || []; + const hideItems = attrArray.slice(limit, attrArray.length) || []; + const content = hideItems.map((item: string, index: number) => ( + + {item} + + )); + const showItemsContent = showItems.map((item: string, index: number) => ( + + {item} + + )); + + return ( +
    + {showItems.length > 0 ? showItemsContent : '-'} + {hideItems.length > 0 && ( + + 共{attrArray.length}个 + + )} +
    + ); +}; +class TopicHaSwitch extends React.Component { + public state = { + radioCheck: 'spec', + targetKeys: [] as string[], + selectedKeys: [] as string[], + topics: [] as IHaTopic[], + kafkaUsers: [] as IKafkaUser[], + primaryActiveKeys: [] as string[], + primaryStandbyKeys: [] as string[], + firstMove: true, + }; + + public isPrimaryStatus = (targetKeys: string[]) => { + const { primaryStandbyKeys } = this.state; + let isReset = false; + // 判断当前移动是否还原为最初的状态 + if (primaryStandbyKeys.length === targetKeys.length) { + targetKeys.sort((a, b) => +a - (+b)); + primaryStandbyKeys.sort((a, b) => +a - (+b)); + let i = 0; + while (i < targetKeys.length) { + if (targetKeys[i] === primaryStandbyKeys[i]) { + i++; + } else { + break; + } + } + isReset = i === targetKeys.length; + } + return isReset; + } + + public getTargetTopics = (currentKeys: string[], primaryKeys: string[]) => { + const targetTopics = []; + for (const key of currentKeys) { + if (!primaryKeys.includes(key)) { + const topic = this.state.topics.find(item => item.key === key)?.topicName; + targetTopics.push(topic); + } + } + return targetTopics; + } + + public handleOk = () => { + const { primaryStandbyKeys, primaryActiveKeys, topics } = this.state; + const standbyClusterId = this.props.currentCluster.haClusterVO.clusterId; + const activeClusterId = this.props.currentCluster.clusterId; + + this.props.form.validateFields((err: any, values: any) => { + + if (values.rule === 'all') { + createSwitchTask({ + activeClusterPhyId: activeClusterId, + all: true, + mustContainAllKafkaUserTopics: true, + standbyClusterPhyId: standbyClusterId, + topicNameList: [], + }).then(res => { + message.success('任务创建成功'); + this.handleCancel(); + this.props.reload(res); + }); + return; + } + // 判断当前移动是否还原为最初的状态 + const isPrimary = this.isPrimaryStatus(values.targetKeys || []); + if (isPrimary) { + return message.info('请选择您要切换的Topic'); + } + + // 右侧框值 + const currentStandbyKeys = values.targetKeys || []; + // 左侧框值 + const currentActiveKeys = []; + for (const item of topics) { + if (!currentStandbyKeys.includes(item.key)) { + currentActiveKeys.push(item.key); + } + } + + const currentKeys = currentStandbyKeys.length > primaryStandbyKeys.length ? currentStandbyKeys : currentActiveKeys; + const primaryKeys = currentStandbyKeys.length > primaryStandbyKeys.length ? primaryStandbyKeys : primaryActiveKeys; + const activeClusterPhyId = currentStandbyKeys.length > primaryStandbyKeys.length ? standbyClusterId : activeClusterId; + const standbyClusterPhyId = currentStandbyKeys.length > primaryStandbyKeys.length ? 
activeClusterId : standbyClusterId; + const targetTopics = this.getTargetTopics(currentKeys, primaryKeys); + createSwitchTask({ + activeClusterPhyId, + all: false, + mustContainAllKafkaUserTopics: true, + standbyClusterPhyId, + topicNameList: targetTopics, + }).then(res => { + message.success('任务创建成功'); + this.handleCancel(); + this.props.reload(res); + }); + }); + } + + public handleCancel = () => { + this.props.handleVisible(false); + this.props.form.resetFields(); + } + + public handleRadioChange = (e: any) => { + this.setState({ + radioCheck: e.target.value, + }); + } + + public getNewSelectKeys = (removeKeys: string[], selectedKeys: string[]) => { + const { topics, kafkaUsers } = this.state; + // 根据移除的key找与该key关联的其他key,一起移除 + let relatedTopics: string[] = []; + const relatedKeys: string[] = []; + const newSelectKeys = []; + for (const key of removeKeys) { + const topicName = topics.find(row => row.key === key)?.topicName; + for (const item of kafkaUsers) { + if (item.selectedTopicNameList.includes(topicName)) { + relatedTopics = relatedTopics.concat(item.selectedTopicNameList); + relatedTopics = relatedTopics.concat(item.notSelectTopicNameList); + } + } + for (const item of relatedTopics) { + const key = topics.find(row => row.topicName === item)?.key; + if (key) { + relatedKeys.push(key); + } + } + for (const key of selectedKeys) { + if (!relatedKeys.includes(key)) { + newSelectKeys.push(key); + } + } + } + return newSelectKeys; + } + + public setTopicsStatus = (targetKeys: string[], disabled: boolean, isAll = false) => { + const { topics } = this.state; + const newTopics = Array.from(topics); + if (isAll) { + for (let i = 0; i < topics.length; i++) { + newTopics[i].disabled = disabled; + } + } else { + for (const key of targetKeys) { + const index = topics.findIndex(item => item.key === key); + if (index > -1) { + newTopics[index].disabled = disabled; + } + } + } + this.setState(({ + topics: newTopics, + })); + } + + public getFilterTopics = (selectKeys: string[]) => { + // 依据key值找topicName + const filterTopics: string[] = []; + const targetKeys = selectKeys; + for (const key of targetKeys) { + const topicName = this.state.topics.find(item => item.key === key)?.topicName; + if (topicName) { + filterTopics.push(topicName); + } + } + return filterTopics; + } + + public getNewKafkaUser = (targetKeys: string[]) => { + const { primaryStandbyKeys, topics } = this.state; + const removeKeys = []; + const addKeys = []; + for (const key of primaryStandbyKeys) { + if (targetKeys.indexOf(key) < 0) { + // 移除的 + removeKeys.push(key); + } + } + for (const key of targetKeys) { + if (primaryStandbyKeys.indexOf(key) < 0) { + // 新增的 + addKeys.push(key); + } + } + const keepKeys = [...removeKeys, ...addKeys]; + const newKafkaUsers = this.state.kafkaUsers; + + const moveTopics = this.getFilterTopics(keepKeys); + + for (const topic of moveTopics) { + for (const item of newKafkaUsers) { + if (item.selectedTopicNameList.includes(topic)) { + item.show = true; + } + } + } + + const showKafaUsers = newKafkaUsers.filter(item => item.show === true); + + for (const item of showKafaUsers) { + let i = 0; + while (i < moveTopics.length) { + if (!item.selectedTopicNameList.includes(moveTopics[i])) { + i++; + } else { + break; + } + } + + // 表示该kafkaUser不该展示 + if (i === moveTopics.length) { + item.show = false; + } + } + + return showKafaUsers; + } + + public getAppRelatedTopicList = (selectedKeys: string[]) => { + const { topics, targetKeys, primaryStandbyKeys, kafkaUsers } = this.state; + const filterTopicNameList 
= this.getFilterTopics(selectedKeys); + const isReset = this.isPrimaryStatus(targetKeys); + + if (!filterTopicNameList.length && isReset) { + // targetKeys + this.setState({ + kafkaUsers: kafkaUsers.map(item => ({ + ...item, + show: false, + })), + }); + return; + } else { + // 保留选中项与移动的的项 + this.setState({ + kafkaUsers: this.getNewKafkaUser(targetKeys), + }); + } + + // 单向选择,所以取当前值的aactiveClusterId + const clusterPhyId = topics.find(item => item.topicName === filterTopicNameList[0]).activeClusterId; + getAppRelatedTopics({ + clusterPhyId, + filterTopicNameList, + }).then((res: IKafkaUser[]) => { + let notSelectTopicNames: string[] = []; + const notSelectTopicKeys: string[] = []; + for (const item of (res || [])) { + notSelectTopicNames = notSelectTopicNames.concat(item.notSelectTopicNameList || []); + } + + for (const item of notSelectTopicNames) { + const key = topics.find(row => row.topicName === item)?.key; + + if (key) { + notSelectTopicKeys.push(key); + } + } + + const newSelectedKeys = selectedKeys.concat(notSelectTopicKeys); + const newKafkaUsers = (res || []).map(item => ({ + ...item, + show: true, + })); + const { kafkaUsers } = this.state; + + for (const item of kafkaUsers) { + const resItem = res.find(row => row.kafkaUser === item.kafkaUser); + if (!resItem) { + newKafkaUsers.push(item); + } + } + this.setState({ + kafkaUsers: newKafkaUsers, + selectedKeys: newSelectedKeys, + }); + + if (notSelectTopicKeys.length) { + this.getAppRelatedTopicList(newSelectedKeys); + } + }); + } + + public getRelatedKeys = (currentKeys: string[]) => { + // 未被选中的项 + const removeKeys = []; + // 对比上一次记录的选中的值找出本次取消的项 + const { selectedKeys } = this.state; + for (const preKey of selectedKeys) { + if (!currentKeys.includes(preKey)) { + removeKeys.push(preKey); + } + } + + return removeKeys?.length ? this.getNewSelectKeys(removeKeys, currentKeys) : currentKeys; + } + + public handleTopicChange = (sourceSelectedKeys: string[], targetSelectedKeys: string[]) => { + const { topics, targetKeys } = this.state; + // 条件限制只允许选中一边,单向操作 + const keys = [...sourceSelectedKeys, ...targetSelectedKeys]; + + // 判断当前选中项属于哪一类 + if (keys.length) { + const activeClusterId = topics.find(item => item.key === keys[0]).activeClusterId; + const needDisabledKeys = topics.filter(item => item.activeClusterId !== activeClusterId).map(row => row.key); + this.setTopicsStatus(needDisabledKeys, true); + } + const selectedKeys = this.state.selectedKeys.length ? this.getRelatedKeys(keys) : keys; + + const isReset = this.isPrimaryStatus(targetKeys); + if (!selectedKeys.length && isReset) { + this.setTopicsStatus([], false, true); + } + this.setState({ + selectedKeys, + }); + this.getAppRelatedTopicList(selectedKeys); + } + + public onDirectChange = (targetKeys: string[], direction: string, moveKeys: string[]) => { + const { primaryStandbyKeys, firstMove, primaryActiveKeys, topics } = this.state; + + const getKafkaUser = () => { + const newKafkaUsers = this.state.kafkaUsers; + const moveTopics = this.getFilterTopics(moveKeys); + for (const topic of moveTopics) { + for (const item of newKafkaUsers) { + if (item.selectedTopicNameList.includes(topic)) { + item.show = true; + } + } + } + return newKafkaUsers; + }; + // 判断当前移动是否还原为最初的状态 + const isReset = this.isPrimaryStatus(targetKeys); + if (firstMove) { + const primaryKeys = direction === 'right' ? 
primaryStandbyKeys : primaryActiveKeys; + this.setTopicsStatus(primaryKeys, true, false); + this.setState(({ + firstMove: false, + kafkaUsers: getKafkaUser(), + targetKeys, + })); + return; + } + // 如果是还原为初始状态则还原禁用状态 + if (isReset) { + this.setTopicsStatus([], false, true); + this.setState(({ + firstMove: true, + targetKeys, + kafkaUsers: [], + })); + return; + } + + // 切换后重新判定展示项 + this.setState(({ + targetKeys, + kafkaUsers: this.getNewKafkaUser(targetKeys), + })); + + } + + public downloadData = () => { + const { kafkaUsers } = this.state; + const tableData = kafkaUsers.map(item => { + return { + // tslint:disable + 'kafkaUser': item.kafkaUser, + '已选中Topic': item.selectedTopicNameList?.join('、'), + '选中关联Topic': item.notSelectTopicNameList?.join('、'), + '未建立HA Topic': item.notHaTopicNameList?.join(`、`), + }; + }); + const data = [].concat(tableData); + const wb = XLSX.utils.book_new(); + // json转sheet + const ws = XLSX.utils.json_to_sheet(data, { + header: ['kafkaUser', '已选中Topic', '选中关联Topic', '未建立HA Topic'], + }); + // XLSX.utils. + XLSX.utils.book_append_sheet(wb, ws, 'kafkaUser'); + // 输出 + XLSX.writeFile(wb, 'kafkaUser-' + moment((new Date()).getTime()).format(timeMinute) + '.xlsx'); + } + + public judgeSubmitStatus = () => { + const { kafkaUsers } = this.state; + + const newKafkaUsers = kafkaUsers.filter(item => item.show) + for (const item of newKafkaUsers) { + if (item.notHaTopicNameList.length) { + return true; + } + } + return false; + } + + public componentDidMount() { + const standbyClusterId = this.props.currentCluster.haClusterVO.clusterId; + const activeClusterId = this.props.currentCluster.clusterId; + getClusterHaTopics(this.props.currentCluster.clusterId, standbyClusterId).then((res: IHaTopic[]) => { + res = res.map((item, index) => ({ + key: index.toString(), + ...item, + })); + const targetKeys = (res || []).filter((item) => item.activeClusterId === standbyClusterId).map(row => row.key); + const primaryActiveKeys = (res || []).filter((item) => item.activeClusterId === activeClusterId).map(row => row.key); + this.setState({ + topics: res || [], + primaryStandbyKeys: targetKeys, + primaryActiveKeys, + targetKeys, + }); + }); + } + + public render() { + const { visible, currentCluster } = this.props; + const { getFieldDecorator } = this.props.form; + let metadata = [] as IBrokersMetadata[]; + metadata = admin.brokersMetadata ? admin.brokersMetadata : metadata; + let regions = [] as IBrokersRegions[]; + regions = admin.brokersRegions ? admin.brokersRegions : regions; + const tableData = this.state.kafkaUsers.filter(row => row.show); + + return ( + + + + + } + > + +
    + {/* + {getFieldDecorator('rule', { + initialValue: 'spec', + rules: [{ + required: true, + message: '请选择规则', + }], + })( + 应用于所有Topic + 应用于特定Topic + )} + */} + {this.state.radioCheck === 'spec' ? + {getFieldDecorator('targetKeys', { + initialValue: this.state.targetKeys, + rules: [{ + required: false, + message: '请选择Topic', + }], + })( + , + )} + : ''} + + {this.state.radioCheck === 'spec' ? + <> +
    + {this.state.kafkaUsers.length ? : null} + + : null} + + ); + } +} +export const TopicSwitchWrapper = Form.create()(TopicHaSwitch); + +const TableTransfer = ({ leftColumns, ...restProps }: any) => ( + + {({ + filteredItems, + direction, + onItemSelect, + selectedKeys: listSelectedKeys, + }) => { + const columns = leftColumns; + + const rowSelection = { + columnWidth: 40, + getCheckboxProps: (item: any) => ({ + disabled: item.disabled, + }), + onSelect({ key }: any, selected: any) { + onItemSelect(key, selected); + }, + selectedRowKeys: listSelectedKeys, + }; + return ( +
    ({ + onClick: () => { + if (disabled) return; + onItemSelect(key, !listSelectedKeys.includes(key)); + }, + })} + /> + ); + }} + +); + +interface IProps { + value?: any; + onChange?: any; + onDirectChange?: any; + currentCluster: any; + topicChange: any; + dataSource: any[]; + selectedKeys: string[]; +} + +export class TransferTable extends React.Component { + public onChange = (nextTargetKeys: any, direction: string, moveKeys: string[]) => { + this.props.onDirectChange(nextTargetKeys, direction, moveKeys); + // tslint:disable-next-line:no-unused-expression + this.props.onChange && this.props.onChange(nextTargetKeys); + } + + public render() { + const { currentCluster, dataSource, value, topicChange, selectedKeys } = this.props; + return ( +
    + +
    + ); + } +} diff --git a/kafka-manager-console/src/container/modal/topic.tsx b/kafka-manager-console/src/container/modal/topic.tsx index 4e026641..d7f797ec 100644 --- a/kafka-manager-console/src/container/modal/topic.tsx +++ b/kafka-manager-console/src/container/modal/topic.tsx @@ -16,6 +16,17 @@ import { modal } from 'store/modal'; import { TopicAppSelect } from '../topic/topic-app-select'; import Url from 'lib/url-parser'; import { expandRemarks, quotaRemarks } from 'constants/strategy'; +import { getAppListByClusterId } from 'lib/api'; + +const updateApplyTopicFormModal = (clusterId: number) => { + const formMap = wrapper.xFormWrapper.formMap; + const formData = wrapper.xFormWrapper.formData; + getAppListByClusterId(clusterId).then(res => { + formMap[2].customFormItem = ; + // tslint:disable-next-line:no-unused-expression + wrapper.ref && wrapper.ref.updateFormMap$(formMap, formData); + }); +}; export const applyTopic = () => { const xFormModal = { @@ -28,6 +39,9 @@ export const applyTopic = () => { rules: [{ required: true, message: '请选择' }], attrs: { placeholder: '请选择', + onChange(value: number) { + updateApplyTopicFormModal(value); + }, }, }, { key: 'topicName', @@ -49,7 +63,7 @@ export const applyTopic = () => { type: 'custom', defaultValue: '', rules: [{ required: true, message: '请选择' }], - customFormItem: , + customFormItem: , }, { key: 'peakBytesIn', label: '峰值流量', @@ -88,7 +102,7 @@ export const applyTopic = () => { ], formData: {}, visible: true, - title: , + title: , okText: '确认', // customRenderElement: 集群资源充足时,预计1分钟自动审批通过, isWaitting: true, @@ -106,7 +120,7 @@ export const applyTopic = () => { }; return topic.applyTopic(quotaParams).then(data => { window.location.href = `${urlPrefix}/user/order-detail/?orderId=${data.id}®ion=${region.currentRegion}`; - }) + }); }, onSubmitFaild: (err: any, ref: any, formData: any, formMap: any) => { if (err.message === 'topic already existed') { @@ -115,10 +129,10 @@ export const applyTopic = () => { topicName: { value: topic, errors: [new Error('该topic名称已存在')], - } - }) + }, + }); } - } + }, }; wrapper.open(xFormModal); }; @@ -186,7 +200,7 @@ export const showApplyQuatoModal = (item: ITopic | IAppsIdInfo, record: IQuotaQu // rules: [{ required: true, message: '' }], // attrs: { disabled: true }, // invisible: !item.hasOwnProperty('clusterName'), - // }, + // }, { key: 'topicName', label: 'Topic名称', @@ -300,7 +314,7 @@ export const showTopicApplyQuatoModal = (item: ITopic) => { // attrs: { disabled: true }, // defaultValue: item.clusterName, // // invisible: !item.hasOwnProperty('clusterName'), - // }, + // }, { key: 'topicName', label: 'Topic名称', @@ -380,12 +394,19 @@ export const showTopicApplyQuatoModal = (item: ITopic) => { consumeQuota: transMBToB(value.consumeQuota), produceQuota: transMBToB(value.produceQuota), }); + + if (item.isPhysicalClusterId) { + Object.assign(quota, { + isPhysicalClusterId: true, + }); + } const quotaParams = { type: 2, applicant: users.currentUser.username, description: value.description, extensions: JSON.stringify(quota), }; + topic.applyQuota(quotaParams).then((data) => { notification.success({ message: '申请配额成功' }); window.location.href = `${urlPrefix}/user/order-detail/?orderId=${data.id}®ion=${region.currentRegion}`; @@ -454,23 +475,24 @@ const judgeAccessStatus = (access: number) => { export const showAllPermissionModal = (item: ITopic) => { let appId: string = null; + app.getAppListByClusterId(item.clusterId).then(res => { + if (!app.clusterAppData || !app.clusterAppData.length) { + return 
notification.info({ + message: ( + <> + + 您的账号暂无可用应用,请先 + 申请应用 + + ), + }); + } + const index = app.clusterAppData.findIndex(row => row.appId === item.appId); - if (!app.data || !app.data.length) { - return notification.info({ - message: ( - <> - - 您的账号暂无可用应用,请先 - 申请应用 - - ), + appId = index > -1 ? item.appId : app.clusterAppData[0].appId; + topic.getAuthorities(appId, item.clusterId, item.topicName).then((data) => { + showAllPermission(appId, item, data.access); }); - } - const index = app.data.findIndex(row => row.appId === item.appId); - - appId = index > -1 ? item.appId : app.data[0].appId; - topic.getAuthorities(appId, item.clusterId, item.topicName).then((data) => { - showAllPermission(appId, item, data.access); }); }; @@ -494,7 +516,7 @@ const showAllPermission = (appId: string, item: ITopic, access: number) => { defaultValue: appId, rules: [{ required: true, message: '请选择应用' }], type: 'custom', - customFormItem: , + customFormItem: , }, { key: 'access', diff --git a/kafka-manager-console/src/container/search-filter.tsx b/kafka-manager-console/src/container/search-filter.tsx index f6ed09fa..ca621c03 100644 --- a/kafka-manager-console/src/container/search-filter.tsx +++ b/kafka-manager-console/src/container/search-filter.tsx @@ -18,7 +18,7 @@ interface IFilterParams { } interface ISearchAndFilterState { - [filter: string]: boolean | string | number | any[]; + [filter: string]: boolean | string | number | any; } export class SearchAndFilterContainer extends React.Component { diff --git a/kafka-manager-console/src/container/topic/topic-detail/index.tsx b/kafka-manager-console/src/container/topic/topic-detail/index.tsx index 0220341b..4745acba 100644 --- a/kafka-manager-console/src/container/topic/topic-detail/index.tsx +++ b/kafka-manager-console/src/container/topic/topic-detail/index.tsx @@ -331,11 +331,13 @@ export class TopicDetail extends React.Component { public render() { const role = users.currentUser.role; const baseInfo = topic.baseInfo as ITopicBaseInfo; - const showEditBtn = (role == 1 || role == 2) || (topic.topicBusiness && topic.topicBusiness.principals.includes(users.currentUser.username)); + const showEditBtn = (role == 1 || role == 2) || + (topic.topicBusiness && topic.topicBusiness.principals.includes(users.currentUser.username)); const topicRecord = { clusterId: this.clusterId, topicName: this.topicName, - clusterName: this.clusterName + clusterName: this.clusterName, + isPhysicalClusterId: !!this.isPhysicalTrue, } as ITopic; return ( @@ -349,9 +351,12 @@ export class TopicDetail extends React.Component { title={this.topicName || ''} extra={ <> - {this.needAuth == "true" && } - - + {this.needAuth == 'true' && + } + {baseInfo.haRelation === 0 ? null : + } + {baseInfo.haRelation === 0 ? 
null :
+            }
+          {/* {showEditBtn && } */}
diff --git a/kafka-manager-console/src/lib/api.ts b/kafka-manager-console/src/lib/api.ts
index 39bb63ff..d0200653 100644
--- a/kafka-manager-console/src/lib/api.ts
+++ b/kafka-manager-console/src/lib/api.ts
@@ -248,6 +248,10 @@ export const getAppTopicList = (appId: string, mine: boolean) => {
   return fetch(`/normal/apps/${appId}/topics?mine=${mine}`);
 };
 
+export const getAppListByClusterId = (clusterId: number) => {
+  return fetch(`/normal/apps/${clusterId}`);
+};
+
 /**
  * 专家服务
  */
@@ -418,8 +422,69 @@ export const getMetaData = (needDetail: boolean = true) => {
   return fetch(`/rd/clusters/basic-info?need-detail=${needDetail}`);
 };
 
+export const getHaMetaData = () => {
+  return fetch(`/rd/clusters/ha/basic-info`);
+};
+
+export const getClusterHaTopics = (firstClusterId: number, secondClusterId?: number) => {
+  return fetch(`/rd/clusters/${firstClusterId}/ha-topics?secondClusterId=${secondClusterId || ''}`);
+};
+
+export const getClusterHaTopicsStatus = (firstClusterId: number, checkMetadata: boolean) => {
+  return fetch(`/rd/clusters/${firstClusterId}/ha-topics/status?checkMetadata=${checkMetadata}`);
+};
+
+export const setHaTopics = (params: any) => {
+  return fetch(`/op/ha-topics`, {
+    method: 'POST',
+    body: JSON.stringify(params),
+  });
+};
+
+export const getAppRelatedTopics = (params: any) => {
+  return fetch(`/rd/apps/relate-topics`, {
+    method: 'POST',
+    body: JSON.stringify(params),
+  });
+};
+
+// 取消Topic高可用
+export const unbindHaTopics = (params: any) => {
+  return fetch(`/op/ha-topics`, {
+    method: 'DELETE',
+    body: JSON.stringify(params),
+  });
+};
+
+// 创建Topic主备切换任务
+export const createSwitchTask = (params: any) => {
+  return fetch(`/op/as-switch-jobs`, {
+    method: 'POST',
+    body: JSON.stringify(params),
+  });
+};
+
+export const getJobDetail = (jobId: number) => {
+  return fetch(`/op/as-switch-jobs/${jobId}/job-detail`);
+};
+
+export const getJobLog = (jobId: number, startLogId?: number) => {
+  return fetch(`/op/as-switch-jobs/${jobId}/job-logs?startLogId=${startLogId || ''}`);
+};
+
+export const getJobState = (jobId: number) => {
+  return fetch(`/op/as-switch-jobs/${jobId}/job-state`);
+};
+
+export const switchAsJobs = (jobId: number, params: any) => {
+  return fetch(`/op/as-switch-jobs/${jobId}/action`, {
+    method: 'PUT',
+    body: JSON.stringify(params),
+  });
+};
+
 export const getOperationRecordData = (params: any) => {
-  return fetch(`/rd/operate-record`,{
+  return fetch(`/rd/operate-record`, {
     method: 'POST',
     body: JSON.stringify(params),
   });
@@ -569,15 +634,15 @@ export const getCandidateController = (clusterId: number) => {
   return fetch(`/rd/clusters/${clusterId}/controller-preferred-candidates`);
 };
 
-export const addCandidateController = (params:any) => {
-  return fetch(`/op/cluster-controller/preferred-candidates`, {
+export const addCandidateController = (params: any) => {
+  return fetch(`/op/cluster-controller/preferred-candidates`, {
     method: 'POST',
     body: JSON.stringify(params),
   });
 };
 
-export const deleteCandidateCancel = (params:any)=>{
-  return fetch(`/op/cluster-controller/preferred-candidates`, {
+export const deleteCandidateCancel = (params: any) => {
+  return fetch(`/op/cluster-controller/preferred-candidates`, {
     method: 'DELETE',
     body: JSON.stringify(params),
   });
diff --git a/kafka-manager-console/src/lib/fetch.ts b/kafka-manager-console/src/lib/fetch.ts
index ef307ccb..f51fd7d4 100644
--- a/kafka-manager-console/src/lib/fetch.ts
+++ b/kafka-manager-console/src/lib/fetch.ts
@@ -33,7 +33,6 @@ const checkStatus = 
(res: Response) => { }; const filter = (init: IInit) => (res: IRes) => { - if (res.code !== 0 && res.code !== 200) { if (!init.errorNoTips) { notification.error({ @@ -117,7 +116,7 @@ export default function fetch(url: string, init?: IInit) { export function formFetch(url: string, init?: IInit) { url = url.indexOf('?') > 0 ? - `${url}&dataCenter=${region.currentRegion}` : `${url}?dataCenter=${region.currentRegion}`; + `${url}&dataCenter=${region.currentRegion}` : `${url}?dataCenter=${region.currentRegion}`; let realUrl = url; if (!/^http(s)?:\/\//.test(url)) { @@ -127,8 +126,8 @@ export function formFetch(url: string, init?: IInit) { init = addCustomHeader(init); return window - .fetch(realUrl, init) - .then(res => checkStatus(res)) - .then((res) => res.json()) - .then(filter(init)); + .fetch(realUrl, init) + .then(res => checkStatus(res)) + .then((res) => res.json()) + .then(filter(init)); } diff --git a/kafka-manager-console/src/routers/page/index.less b/kafka-manager-console/src/routers/page/index.less index e4559814..21415a74 100644 --- a/kafka-manager-console/src/routers/page/index.less +++ b/kafka-manager-console/src/routers/page/index.less @@ -1,4 +1,3 @@ - * { padding: 0; margin: 0; @@ -13,7 +12,9 @@ li { list-style-type: none; } -html, body, .router-nav { +html, +body, +.router-nav { width: 100%; height: 100%; font-family: PingFangSC-Regular; @@ -52,11 +53,12 @@ html, body, .router-nav { color: @primary-color; } -.ant-table-thead > tr > th, .ant-table-tbody > tr > td { +.ant-table-thead>tr>th, +.ant-table-tbody>tr>td { padding: 13px; } -.ant-table-tbody > tr > td { +.ant-table-tbody>tr>td { background: #fff; } @@ -72,15 +74,11 @@ html, body, .router-nav { overflow: auto; } -.ant-form-item { - margin-bottom: 16px; -} - .mb-24 { margin-bottom: 24px; } -.ant-table-thead > tr > th .ant-table-filter-icon { +.ant-table-thead>tr>th .ant-table-filter-icon { right: initial; } @@ -100,7 +98,7 @@ html, body, .router-nav { margin-left: 10px; } -.config-info{ +.config-info { white-space: pre-line; height: 100%; overflow-y: scroll; @@ -112,5 +110,4 @@ html, body, .router-nav { margin-left: 10px; cursor: pointer; font-size: 12px; -} - +} \ No newline at end of file diff --git a/kafka-manager-console/src/routers/router.tsx b/kafka-manager-console/src/routers/router.tsx index 164eb370..192e55ef 100644 --- a/kafka-manager-console/src/routers/router.tsx +++ b/kafka-manager-console/src/routers/router.tsx @@ -1,6 +1,7 @@ import { BrowserRouter as Router, Route } from 'react-router-dom'; import { hot } from 'react-hot-loader/root'; import * as React from 'react'; +import zhCN from 'antd/lib/locale/zh_CN'; import Home from './page/topic'; import Admin from './page/admin'; @@ -12,58 +13,62 @@ import { urlPrefix } from 'constants/left-menu'; import ErrorPage from './page/error'; import Login from './page/login'; import InfoPage from './page/info'; +import { ConfigProvider } from 'antd'; class RouterDom extends React.Component { public render() { return ( - - - - + - - + + + + - - + + - - + + - - + + - - + + - - - - + + + + + + + + ); } } diff --git a/kafka-manager-console/src/store/admin.ts b/kafka-manager-console/src/store/admin.ts index 582950a3..c7957788 100644 --- a/kafka-manager-console/src/store/admin.ts +++ b/kafka-manager-console/src/store/admin.ts @@ -57,8 +57,9 @@ import { getBillStaffDetail, getCandidateController, addCandidateController, - deleteCandidateCancel - } from 'lib/api'; + deleteCandidateCancel, + getHaMetaData, +} from 'lib/api'; import { getControlMetricOption, 
getClusterMetricOption } from 'lib/line-charts-config'; import { copyValueMap } from 'constants/status-map'; @@ -104,12 +105,15 @@ class Admin { @observable public metaList: IMetaData[] = []; + @observable + public haMetaList: IMetaData[] = []; + @observable public oRList: any[] = []; @observable - public oRparams:any={ - moduleId:0 + public oRparams: any = { + moduleId: 0 }; @observable @@ -169,9 +173,9 @@ class Admin { @observable public controllerCandidate: IController[] = []; - @observable + @observable public filtercontrollerCandidate: string = ''; - + @observable public brokersPartitions: IBrokersPartitions[] = []; @@ -329,9 +333,20 @@ class Admin { } @action.bound - public setOperationRecordList(data:any){ + public setHaMetaList(data: IMetaData[]) { this.setLoading(false); - this.oRList = data ? data.map((item:any, index: any) => { + this.haMetaList = data ? data.map((item, index) => { + item.key = index; + return item; + }) : []; + this.haMetaList = this.haMetaList.sort((a, b) => a.clusterId - b.clusterId); + return this.haMetaList; + } + + @action.bound + public setOperationRecordList(data: any) { + this.setLoading(false); + this.oRList = data ? data.map((item: any, index: any) => { item.key = index; return item; }) : []; @@ -394,9 +409,9 @@ class Admin { item.key = index; return item; }) : []; - this.filtercontrollerCandidate = data?data.map((item,index)=>{ + this.filtercontrollerCandidate = data ? data.map((item, index) => { return item.brokerId - }).join(','):'' + }).join(',') : '' } @action.bound @@ -479,8 +494,8 @@ class Admin { } @action.bound - public setBrokersMetadata(data: IBrokersMetadata[]|any) { - this.brokersMetadata = data ? data.map((item:any, index:any) => { + public setBrokersMetadata(data: IBrokersMetadata[] | any) { + this.brokersMetadata = data ? 
data.map((item: any, index: any) => { item.key = index; return { ...item, @@ -675,6 +690,11 @@ class Admin { getMetaData(needDetail).then(this.setMetaList); } + public getHaMetaData() { + this.setLoading(true); + return getHaMetaData().then(this.setHaMetaList); + } + public getOperationRecordData(params: any) { this.setLoading(true); this.oRparams = params @@ -738,17 +758,17 @@ class Admin { } public getCandidateController(clusterId: number) { - return getCandidateController(clusterId).then(data=>{ + return getCandidateController(clusterId).then(data => { return this.setCandidateController(data) }); } public addCandidateController(clusterId: number, brokerIdList: any) { - return addCandidateController({clusterId, brokerIdList}).then(()=>this.getCandidateController(clusterId)); + return addCandidateController({ clusterId, brokerIdList }).then(() => this.getCandidateController(clusterId)); } - public deleteCandidateCancel(clusterId: number, brokerIdList: any){ - return deleteCandidateCancel({clusterId, brokerIdList}).then(()=>this.getCandidateController(clusterId)); + public deleteCandidateCancel(clusterId: number, brokerIdList: any) { + return deleteCandidateCancel({ clusterId, brokerIdList }).then(() => this.getCandidateController(clusterId)); } public getBrokersBasicInfo(clusterId: number, brokerId: number) { diff --git a/kafka-manager-console/src/store/app.ts b/kafka-manager-console/src/store/app.ts index a3af345f..a64c93a7 100644 --- a/kafka-manager-console/src/store/app.ts +++ b/kafka-manager-console/src/store/app.ts @@ -1,5 +1,5 @@ import { observable, action } from 'mobx'; -import { getAppList, getAppDetail, getAppTopicList, applyOrder, modfiyApplication, modfiyAdminApp, getAdminAppList, getAppsConnections, getTopicAppQuota } from 'lib/api'; +import { getAppList, getAppDetail, getAppTopicList, applyOrder, modfiyApplication, modfiyAdminApp, getAdminAppList, getAppsConnections, getTopicAppQuota, getAppListByClusterId } from 'lib/api'; import { IAppItem, IAppQuota, ITopic, IOrderParams, IConnectionInfo } from 'types/base-type'; class App { @@ -12,6 +12,9 @@ class App { @observable public data: IAppItem[] = []; + @observable + public clusterAppData: IAppItem[] = []; + @observable public adminAppData: IAppItem[] = []; @@ -19,7 +22,7 @@ class App { public selectData: IAppItem[] = [{ appId: '-1', name: '所有关联应用', - } as IAppItem, + } as IAppItem, ]; @observable @@ -51,12 +54,12 @@ class App { @action.bound public setTopicAppQuota(data: IAppQuota[]) { return this.appQuota = data.map((item, index) => { - return { - ...item, - label: item.appName, - value: item.appId, - key: index, - }; + return { + ...item, + label: item.appName, + value: item.appId, + key: index, + }; }); } @@ -87,6 +90,16 @@ class App { this.setLoading(false); } + @action.bound + public setClusterAppData(data: IAppItem[] = []) { + this.clusterAppData = data.map((item, index) => ({ + ...item, + key: index, + principalList: item.principals ? 
item.principals.split(',') : [],
+    }));
+    return this.clusterAppData;
+  }
+
   @action.bound
   public setAdminData(data: IAppItem[] = []) {
     this.adminAppData = data.map((item, index) => ({
@@ -133,6 +146,10 @@ class App {
     getAppList().then(this.setData);
   }
 
+  public getAppListByClusterId(clusterId: number) {
+    return getAppListByClusterId(clusterId).then(this.setClusterAppData);
+  }
+
   public getTopicAppQuota(clusterId: number, topicName: string) {
     return getTopicAppQuota(clusterId, topicName).then(this.setTopicAppQuota);
   }
diff --git a/kafka-manager-console/src/store/topic.ts b/kafka-manager-console/src/store/topic.ts
index b47c1122..cacb7bf4 100644
--- a/kafka-manager-console/src/store/topic.ts
+++ b/kafka-manager-console/src/store/topic.ts
@@ -37,6 +37,7 @@ export interface ITopicBaseInfo {
   physicalClusterId: number;
   percentile: string;
   regionNameList: any;
+  haRelation: number;
 }
 
 export interface IRealTimeTraffic {
diff --git a/kafka-manager-console/src/types/base-type.ts b/kafka-manager-console/src/types/base-type.ts
index f0858c3a..9f8f73c1 100644
--- a/kafka-manager-console/src/types/base-type.ts
+++ b/kafka-manager-console/src/types/base-type.ts
@@ -474,7 +474,14 @@ export interface IMetaData {
   status: number;
   topicNum: number;
   zookeeper: string;
+  haRelation?: number;
+  haASSwitchJobId?: number;
+  haStatus?: number;
+  haClusterVO?: IMetaData;
+  activeTopicCount?: number;
+  standbyTopicCount?: number;
   key?: number;
+  mutualBackupClusterName?: string;
 }
 
 export interface IConfigure {
@@ -641,6 +648,7 @@ export interface IClusterTopics {
   properties: any;
   clusterName: string;
   logicalClusterId: number;
+  haRelation?: number;
   key?: number;
 }
 
diff --git a/kafka-manager-console/webpack.config.js b/kafka-manager-console/webpack.config.js
index d6d12fa8..1608de20 100644
--- a/kafka-manager-console/webpack.config.js
+++ b/kafka-manager-console/webpack.config.js
@@ -130,9 +130,7 @@ module.exports = {
     historyApiFallback: true,
     proxy: {
       '/api/v1/': {
-        // target: 'http://127.0.0.1:8080',
-        target: 'http://10.179.37.199:8008',
-        // target: 'http://99.11.45.164:8888',
+        target: 'http://127.0.0.1:8080/',
         changeOrigin: true,
       }
     },
diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/biz/ha/HaASRelationManager.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/biz/ha/HaASRelationManager.java
new file mode 100644
index 00000000..11c79127
--- /dev/null
+++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/biz/ha/HaASRelationManager.java
@@ -0,0 +1,32 @@
+package com.xiaojukeji.kafka.manager.service.biz.ha;
+
+import com.xiaojukeji.kafka.manager.common.entity.Result;
+import com.xiaojukeji.kafka.manager.common.entity.pojo.ha.HaASRelationDO;
+import com.xiaojukeji.kafka.manager.common.entity.vo.ha.HaClusterTopicVO;
+import com.xiaojukeji.kafka.manager.common.entity.vo.normal.topic.HaClusterTopicHaStatusVO;
+
+import java.util.List;
+
+public interface HaASRelationManager {
+    /**
+     * 获取集群主备信息
+     */
+    List<HaClusterTopicVO> getHATopics(Long firstClusterPhyId, Long secondClusterPhyId, boolean filterSystemTopics);
+
+    /**
+     * 获取集群Topic的主备状态信息
+     */
+    Result<List<HaClusterTopicHaStatusVO>> listHaStatusTopics(Long clusterPhyId, Boolean checkMetadata);
+
+
+    /**
+     * 获取集群topic高可用关系 0:备topic, 1:主topic, -1:非高可用
+     */
+    Integer getRelation(Long clusterId, String topicName);
+
+    /**
+     * 获取集群topic主备关系记录
+     */
+    HaASRelationDO getASRelation(Long clusterId, String topicName);
+
+}
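
The relation codes on `getRelation` are what the console uses to distinguish an active (主) topic from a standby (备) one. Below is a minimal caller sketch, assuming only the codes stated in the javadoc above (1 = active, 0 = standby, -1 = no HA relation); the class and method names are hypothetical and not part of this change set:

```java
import com.xiaojukeji.kafka.manager.service.biz.ha.HaASRelationManager;

// Hypothetical helper (not part of this change set) showing how the
// documented relation codes can be consumed by a caller.
public class HaRelationExample {
    public static String describeTopicRole(HaASRelationManager manager, Long clusterId, String topicName) {
        Integer code = manager.getRelation(clusterId, topicName); // never null per the implementation below
        if (code == 1) {
            return topicName + " is the active topic on cluster " + clusterId;
        }
        if (code == 0) {
            return topicName + " is the standby topic on cluster " + clusterId;
        }
        if (code == -1) {
            return topicName + " has no HA relation on cluster " + clusterId;
        }
        // The implementation also returns a mutual-backup code for the coordinator topic.
        return topicName + " is a mutual-backup topic on cluster " + clusterId;
    }
}
```
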
diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/biz/ha/HaAppManager.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/biz/ha/HaAppManager.java
new file mode 100644
index 00000000..c1a480a5
--- /dev/null
+++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/biz/ha/HaAppManager.java
@@ -0,0 +1,16 @@
+package com.xiaojukeji.kafka.manager.service.biz.ha;
+
+import com.xiaojukeji.kafka.manager.common.entity.Result;
+import com.xiaojukeji.kafka.manager.common.entity.vo.rd.app.AppRelateTopicsVO;
+
+import java.util.List;
+
+
+/**
+ * Ha App管理
+ */
+public interface HaAppManager {
+    Result<List<AppRelateTopicsVO>> appRelateTopics(Long clusterPhyId, List<String> filterTopicNameList);
+
+    boolean isContainAllRelateAppTopics(Long clusterPhyId, List<String> filterTopicNameList);
+}
diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/biz/ha/HaClusterManager.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/biz/ha/HaClusterManager.java
new file mode 100644
index 00000000..7b25e2c0
--- /dev/null
+++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/biz/ha/HaClusterManager.java
@@ -0,0 +1,19 @@
+package com.xiaojukeji.kafka.manager.service.biz.ha;
+
+import com.xiaojukeji.kafka.manager.common.entity.Result;
+import com.xiaojukeji.kafka.manager.common.entity.ao.ClusterDetailDTO;
+import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO;
+
+import java.util.List;
+
+/**
+ * Ha Cluster管理
+ */
+public interface HaClusterManager {
+    List<ClusterDetailDTO> getClusterDetailDTOList(Boolean needDetail);
+
+    Result addNew(ClusterDO clusterDO, Long activeClusterId, String operator);
+
+    Result deleteById(Long clusterId, String operator);
+
+}
diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/biz/ha/HaTopicManager.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/biz/ha/HaTopicManager.java
new file mode 100644
index 00000000..b9755e55
--- /dev/null
+++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/biz/ha/HaTopicManager.java
@@ -0,0 +1,44 @@
+package com.xiaojukeji.kafka.manager.service.biz.ha;
+
+import com.xiaojukeji.kafka.manager.common.entity.Result;
+import com.xiaojukeji.kafka.manager.common.entity.TopicOperationResult;
+import com.xiaojukeji.kafka.manager.common.entity.ao.ha.HaSwitchTopic;
+import com.xiaojukeji.kafka.manager.common.entity.dto.op.topic.HaTopicRelationDTO;
+import com.xiaojukeji.kafka.manager.common.entity.pojo.ha.JobLogDO;
+
+import java.util.List;
+
+
+/**
+ * Ha Topic管理
+ */
+public interface HaTopicManager {
+    /**
+     * 批量建立主备关系
+     */
+    Result<List<TopicOperationResult>> batchCreateHaTopic(HaTopicRelationDTO dto, String operator);
+
+    /**
+     * 批量解除主备关系
+     */
+    Result<List<TopicOperationResult>> batchRemoveHaTopic(HaTopicRelationDTO dto, String operator);
+
+    /**
+     * 可重试的执行主备切换
+     * @param newActiveClusterPhyId 新的主集群ID
+     * @param newStandbyClusterPhyId 新的备集群ID
+     * @param switchTopicNameList 切换的Topic列表
+     * @param focus 强制切换
+     * @param firstTriggerExecute 第一次触发执行
+     * @param switchLogTemplate 切换日志模版
+     * @param operator 操作人
+     * @return 操作结果
+     */
+    Result<HaSwitchTopic> switchHaWithCanRetry(Long newActiveClusterPhyId,
+                                               Long newStandbyClusterPhyId,
+                                               List<String> switchTopicNameList,
+                                               boolean focus,
+                                               boolean firstTriggerExecute,
+                                               JobLogDO switchLogTemplate,
+                                               String operator);
+}
diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/biz/ha/impl/HaASRelationManagerImpl.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/biz/ha/impl/HaASRelationManagerImpl.java
new file mode 100644
index 00000000..306671a0
--- /dev/null
+++ 
b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/biz/ha/impl/HaASRelationManagerImpl.java @@ -0,0 +1,140 @@ +package com.xiaojukeji.kafka.manager.service.biz.ha.impl; + +import com.xiaojukeji.kafka.manager.common.bizenum.TopicAuthorityEnum; +import com.xiaojukeji.kafka.manager.common.bizenum.ha.HaRelationTypeEnum; +import com.xiaojukeji.kafka.manager.common.bizenum.ha.HaResTypeEnum; +import com.xiaojukeji.kafka.manager.common.constant.KafkaConstant; +import com.xiaojukeji.kafka.manager.common.entity.Result; +import com.xiaojukeji.kafka.manager.common.entity.ResultStatus; +import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO; +import com.xiaojukeji.kafka.manager.common.entity.pojo.TopicDO; +import com.xiaojukeji.kafka.manager.common.entity.pojo.gateway.AuthorityDO; +import com.xiaojukeji.kafka.manager.common.entity.pojo.ha.HaASRelationDO; +import com.xiaojukeji.kafka.manager.common.entity.vo.ha.HaClusterTopicVO; +import com.xiaojukeji.kafka.manager.common.entity.vo.normal.topic.HaClusterTopicHaStatusVO; +import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils; +import com.xiaojukeji.kafka.manager.service.biz.ha.HaASRelationManager; +import com.xiaojukeji.kafka.manager.service.cache.PhysicalClusterMetadataManager; +import com.xiaojukeji.kafka.manager.service.service.TopicManagerService; +import com.xiaojukeji.kafka.manager.service.service.gateway.AuthorityService; +import com.xiaojukeji.kafka.manager.service.service.ha.HaASRelationService; +import com.xiaojukeji.kafka.manager.service.service.ha.HaTopicService; +import org.springframework.beans.factory.annotation.Autowired; +import org.springframework.stereotype.Service; + +import java.util.ArrayList; +import java.util.List; +import java.util.Map; + +@Service +public class HaASRelationManagerImpl implements HaASRelationManager { + @Autowired + private HaASRelationService haASRelationService; + + @Autowired + private TopicManagerService topicManagerService; + + @Autowired + private HaTopicService haTopicService; + + @Autowired + private AuthorityService authorityService; + + @Override + public List getHATopics(Long firstClusterPhyId, Long secondClusterPhyId, boolean filterSystemTopics) { + List doList = haASRelationService.listAllHAFromDB(firstClusterPhyId, secondClusterPhyId, HaResTypeEnum.TOPIC); + if (ValidateUtils.isEmptyList(doList)) { + return new ArrayList<>(); + } + + List voList = new ArrayList<>(); + for (HaASRelationDO relationDO: doList) { + if (filterSystemTopics + && (relationDO.getActiveResName().startsWith("__") || relationDO.getStandbyResName().startsWith("__"))) { + // 过滤掉系统Topic && 存在系统Topic,则过滤掉 + continue; + } + + HaClusterTopicVO vo = new HaClusterTopicVO(); + vo.setClusterId(firstClusterPhyId); + if (firstClusterPhyId.equals(relationDO.getActiveClusterPhyId())) { + vo.setTopicName(relationDO.getActiveResName()); + } else { + vo.setTopicName(relationDO.getStandbyResName()); + } + + vo.setProduceAclNum(0); + vo.setConsumeAclNum(0); + vo.setActiveClusterId(relationDO.getActiveClusterPhyId()); + vo.setStandbyClusterId(relationDO.getStandbyClusterPhyId()); + vo.setStatus(relationDO.getStatus()); + + // 补充ACL信息 + List authorityDOList = authorityService.getAuthorityByTopicFromCache(relationDO.getActiveClusterPhyId(), relationDO.getActiveResName()); + authorityDOList.forEach(elem -> { + if ((elem.getAccess() & TopicAuthorityEnum.WRITE.getCode()) > 0) { + vo.setProduceAclNum(vo.getProduceAclNum() + 1); + } + if ((elem.getAccess() & TopicAuthorityEnum.READ.getCode()) > 0) { + 
vo.setConsumeAclNum(vo.getConsumeAclNum() + 1); + } + }); + + voList.add(vo); + } + + return voList; + } + + @Override + public Result> listHaStatusTopics(Long clusterPhyId, Boolean checkMetadata) { + ClusterDO clusterDO = PhysicalClusterMetadataManager.getClusterFromCache(clusterPhyId); + if (clusterDO == null){ + return Result.buildFrom(ResultStatus.CLUSTER_NOT_EXIST); + } + List topicDOS = topicManagerService.getByClusterId(clusterPhyId); + if (ValidateUtils.isEmptyList(topicDOS)) { + return Result.buildSuc(new ArrayList<>()); + } + + Map haRelationMap = haTopicService.getRelation(clusterPhyId); + List statusVOS = new ArrayList<>(); + topicDOS.stream().filter(topicDO -> !topicDO.getTopicName().startsWith("__"))//过滤引擎自带topic + .forEach(topicDO -> { + if(checkMetadata && !PhysicalClusterMetadataManager.isTopicExist(clusterPhyId, topicDO.getTopicName())){ + return; + } + HaClusterTopicHaStatusVO statusVO = new HaClusterTopicHaStatusVO(); + statusVO.setClusterId(clusterPhyId); + statusVO.setClusterName(clusterDO.getClusterName()); + statusVO.setTopicName(topicDO.getTopicName()); + statusVO.setHaRelation(haRelationMap.get(topicDO.getTopicName())); + statusVOS.add(statusVO); + }); + + return Result.buildSuc(statusVOS); + } + + @Override + public Integer getRelation(Long clusterId, String topicName) { + HaASRelationDO relationDO = haASRelationService.getHAFromDB(clusterId, topicName, HaResTypeEnum.TOPIC); + if (relationDO == null){ + return HaRelationTypeEnum.UNKNOWN.getCode(); + } + if (topicName.equals(KafkaConstant.COORDINATOR_TOPIC_NAME)){ + return HaRelationTypeEnum.MUTUAL_BACKUP.getCode(); + } + if (clusterId.equals(relationDO.getActiveClusterPhyId())){ + return HaRelationTypeEnum.ACTIVE.getCode(); + } + if (clusterId.equals(relationDO.getStandbyClusterPhyId())){ + return HaRelationTypeEnum.STANDBY.getCode(); + } + return HaRelationTypeEnum.UNKNOWN.getCode(); + } + + @Override + public HaASRelationDO getASRelation(Long clusterId, String topicName) { + return haASRelationService.getHAFromDB(clusterId, topicName, HaResTypeEnum.TOPIC); + } +} diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/biz/ha/impl/HaAppManagerImpl.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/biz/ha/impl/HaAppManagerImpl.java new file mode 100644 index 00000000..19ffc5ae --- /dev/null +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/biz/ha/impl/HaAppManagerImpl.java @@ -0,0 +1,94 @@ +package com.xiaojukeji.kafka.manager.service.biz.ha.impl; + +import com.xiaojukeji.kafka.manager.common.bizenum.ha.HaResTypeEnum; +import com.xiaojukeji.kafka.manager.common.entity.Result; +import com.xiaojukeji.kafka.manager.common.entity.vo.rd.app.AppRelateTopicsVO; +import com.xiaojukeji.kafka.manager.service.biz.ha.HaAppManager; +import com.xiaojukeji.kafka.manager.service.service.gateway.AuthorityService; +import com.xiaojukeji.kafka.manager.service.service.ha.HaASRelationService; +import org.springframework.beans.factory.annotation.Autowired; +import org.springframework.stereotype.Service; + +import java.util.*; +import java.util.stream.Collectors; + + +@Service +public class HaAppManagerImpl implements HaAppManager { + + @Autowired + private AuthorityService authorityService; + + @Autowired + private HaASRelationService haASRelationService; + + @Override + public Result> appRelateTopics(Long clusterPhyId, List filterTopicNameList) { + // 获取关联的Topic列表 + Map> userTopicMap = this.appRelateTopicsMap(clusterPhyId, filterTopicNameList); + + // 
获取集群已建立HA的Topic列表 + Set haTopicNameSet = haASRelationService.listAllHAFromDB(clusterPhyId, HaResTypeEnum.TOPIC) + .stream() + .map(elem -> elem.getActiveResName()) + .collect(Collectors.toSet()); + + Set filterTopicNameSet = new HashSet<>(filterTopicNameList); + + List voList = new ArrayList<>(); + for (Map.Entry> entry: userTopicMap.entrySet()) { + AppRelateTopicsVO vo = new AppRelateTopicsVO(); + vo.setClusterPhyId(clusterPhyId); + vo.setKafkaUser(entry.getKey()); + vo.setSelectedTopicNameList(new ArrayList<>()); + vo.setNotSelectTopicNameList(new ArrayList<>()); + vo.setNotHaTopicNameList(new ArrayList<>()); + entry.getValue().forEach(elem -> { + if (elem.startsWith("__")) { + // ignore + return; + } + + if (!haTopicNameSet.contains(elem)) { + vo.getNotHaTopicNameList().add(elem); + } else if (filterTopicNameSet.contains(elem)) { + vo.getSelectedTopicNameList().add(elem); + } else { + vo.getNotSelectTopicNameList().add(elem); + } + }); + + voList.add(vo); + } + + return Result.buildSuc(voList); + } + + @Override + public boolean isContainAllRelateAppTopics(Long clusterPhyId, List filterTopicNameList) { + Map> userTopicMap = this.appRelateTopicsMap(clusterPhyId, filterTopicNameList); + + Set relateTopicSet = new HashSet<>(); + userTopicMap.values().forEach(elem -> relateTopicSet.addAll(elem)); + + return filterTopicNameList.containsAll(relateTopicSet); + } + + private Map> appRelateTopicsMap(Long clusterPhyId, List filterTopicNameList) { + Map> userTopicMap = new HashMap<>(); + for (String topicName: filterTopicNameList) { + authorityService.getAuthorityByTopicFromCache(clusterPhyId, topicName) + .stream() + .map(elem -> elem.getAppId()) + .filter(item -> !userTopicMap.containsKey(item)) + .forEach(kafkaUser -> + userTopicMap.put( + kafkaUser, + authorityService.getAuthority(kafkaUser).stream().map(authorityDO -> authorityDO.getTopicName()).collect(Collectors.toSet()) + ) + ); + } + + return userTopicMap; + } +} diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/biz/ha/impl/HaClusterManagerImpl.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/biz/ha/impl/HaClusterManagerImpl.java new file mode 100644 index 00000000..debc3d96 --- /dev/null +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/biz/ha/impl/HaClusterManagerImpl.java @@ -0,0 +1,169 @@ +package com.xiaojukeji.kafka.manager.service.biz.ha.impl; + +import com.xiaojukeji.kafka.manager.common.bizenum.ClusterModeEnum; +import com.xiaojukeji.kafka.manager.common.bizenum.DBStatusEnum; +import com.xiaojukeji.kafka.manager.common.bizenum.ha.HaResTypeEnum; +import com.xiaojukeji.kafka.manager.common.constant.MsgConstant; +import com.xiaojukeji.kafka.manager.common.entity.Result; +import com.xiaojukeji.kafka.manager.common.entity.ResultStatus; +import com.xiaojukeji.kafka.manager.common.entity.ao.ClusterDetailDTO; +import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO; +import com.xiaojukeji.kafka.manager.common.entity.pojo.LogicalClusterDO; +import com.xiaojukeji.kafka.manager.common.entity.pojo.RegionDO; +import com.xiaojukeji.kafka.manager.common.entity.pojo.ha.HaASRelationDO; +import com.xiaojukeji.kafka.manager.common.utils.ListUtils; +import com.xiaojukeji.kafka.manager.service.biz.ha.HaClusterManager; +import com.xiaojukeji.kafka.manager.service.service.ClusterService; +import com.xiaojukeji.kafka.manager.service.service.LogicalClusterService; +import com.xiaojukeji.kafka.manager.service.service.RegionService; +import 
com.xiaojukeji.kafka.manager.service.service.ZookeeperService; +import com.xiaojukeji.kafka.manager.service.service.ha.HaASRelationService; +import com.xiaojukeji.kafka.manager.service.service.ha.HaClusterService; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import org.springframework.beans.factory.annotation.Autowired; +import org.springframework.stereotype.Component; +import org.springframework.transaction.annotation.Transactional; +import org.springframework.transaction.interceptor.TransactionAspectSupport; + +import java.util.List; + +@Component +public class HaClusterManagerImpl implements HaClusterManager { + private static final Logger LOGGER = LoggerFactory.getLogger(HaClusterManagerImpl.class); + + @Autowired + private ClusterService clusterService; + + @Autowired + private HaClusterService haClusterService; + + @Autowired + private ZookeeperService zookeeperService; + + @Autowired + private LogicalClusterService logicalClusterService; + + @Autowired + private RegionService regionService; + + @Autowired + private HaASRelationService haASRelationService; + + @Override + public List getClusterDetailDTOList(Boolean needDetail) { + return clusterService.getClusterDetailDTOList(needDetail); + } + + @Override + @Transactional + public Result addNew(ClusterDO clusterDO, Long activeClusterId, String operator) { + if (activeClusterId == null) { + // 普通集群,直接写入DB + Long clusterPhyId = zookeeperService.getClusterIdAndNullIfFailed(clusterDO.getZookeeper()); + if (clusterPhyId != null && clusterService.getById(clusterPhyId) == null) { + // 该集群ID不存在时,则进行设置,如果已经存在了,则忽略 + clusterDO.setId(clusterPhyId); + } + + return Result.buildFrom(clusterService.addNew(clusterDO, operator)); + } + + //高可用集群 + ClusterDO activeClusterDO = clusterService.getById(activeClusterId); + if (activeClusterDO == null) { + // 主集群不存在 + return Result.buildFromRSAndMsg(ResultStatus.RESOURCE_NOT_EXIST, MsgConstant.getClusterPhyNotExist(activeClusterId)); + } + + HaASRelationDO oldRelationDO = haClusterService.getHA(activeClusterId); + if (oldRelationDO != null){ + return Result.buildFromRSAndMsg(ResultStatus.RESOURCE_ALREADY_USED, + MsgConstant.getActiveClusterDuplicate(activeClusterDO.getId(), activeClusterDO.getClusterName())); + } + + Long standbyClusterPhyId = zookeeperService.getClusterIdAndNullIfFailed(clusterDO.getZookeeper()); + if (standbyClusterPhyId != null && clusterService.getById(standbyClusterPhyId) == null) { + // 该集群ID不存在时,则进行设置,如果已经存在了,则忽略 + clusterDO.setId(standbyClusterPhyId); + } + + ResultStatus rs = clusterService.addNew(clusterDO, operator); + if (!ResultStatus.SUCCESS.equals(rs)) { + return Result.buildFrom(rs); + } + + Result> rli = zookeeperService.getBrokerIds(clusterDO.getZookeeper()); + if (!rli.hasData()){ + return Result.buildFrom(ResultStatus.BROKER_NOT_EXIST); + } + + // 备集群创建region + RegionDO regionDO = new RegionDO(DBStatusEnum.ALIVE.getStatus(), clusterDO.getClusterName(), clusterDO.getId(), ListUtils.intList2String(rli.getData())); + rs = regionService.createRegion(regionDO); + if (!ResultStatus.SUCCESS.equals(rs)){ + TransactionAspectSupport.currentTransactionStatus().setRollbackOnly(); + + return Result.buildFrom(rs); + } + + // 备集群创建逻辑集群 + List logicalClusterDOS = logicalClusterService.getByPhysicalClusterId(activeClusterId); + if (!logicalClusterDOS.isEmpty()) { + // 有逻辑集群,则对应创建逻辑集群 + Integer mode = logicalClusterDOS.get(0).getMode(); + LogicalClusterDO logicalClusterDO = new LogicalClusterDO( + clusterDO.getClusterName(), + clusterDO.getClusterName(), + 
ClusterModeEnum.INDEPENDENT_MODE.getCode().equals(mode) ? mode : ClusterModeEnum.SHARED_MODE.getCode(),
+                    ClusterModeEnum.INDEPENDENT_MODE.getCode().equals(mode) ? logicalClusterDOS.get(0).getAppId() : "",
+                    clusterDO.getId(),
+                    regionDO.getId().toString()
+            );
+            ResultStatus clcRS = logicalClusterService.createLogicalCluster(logicalClusterDO);
+            if (clcRS.getCode() != ResultStatus.SUCCESS.getCode()){
+                TransactionAspectSupport.currentTransactionStatus().setRollbackOnly();
+                return Result.buildFrom(clcRS);
+            }
+        }
+
+        return haClusterService.createHA(activeClusterId, clusterDO.getId(), operator);
+    }
+
+    @Override
+    @Transactional
+    public Result deleteById(Long clusterId, String operator) {
+        HaASRelationDO haRelationDO = haClusterService.getHA(clusterId);
+        if (haRelationDO == null){
+            return clusterService.deleteById(clusterId, operator);
+        }
+
+        Result rv = checkForDelete(haRelationDO, clusterId);
+        if (rv.failed()){
+            return rv;
+        }
+
+        // 解除高可用关系
+        Result result = haClusterService.deleteHA(haRelationDO.getActiveClusterPhyId(), haRelationDO.getStandbyClusterPhyId());
+        if (result.failed()){
+            return result;
+        }
+
+        // 删除集群
+        result = clusterService.deleteById(clusterId, operator);
+        if (result.failed()){
+            return result;
+        }
+        return Result.buildSuc();
+    }
+
+    private Result checkForDelete(HaASRelationDO haRelationDO, Long clusterId){
+        List<HaASRelationDO> relationDOS = haASRelationService.listAllHAFromDB(haRelationDO.getActiveClusterPhyId(),
+                haRelationDO.getStandbyClusterPhyId(),
+                HaResTypeEnum.TOPIC);
+        if (relationDOS.stream().filter(relationDO -> !relationDO.getActiveResName().startsWith("__")).count() > 0){
+            return Result.buildFromRSAndMsg(ResultStatus.OPERATION_FORBIDDEN, "集群还存在高可用topic");
+        }
+        return Result.buildSuc();
+    }
+}
diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/biz/ha/impl/HaTopicManagerImpl.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/biz/ha/impl/HaTopicManagerImpl.java
new file mode 100644
index 00000000..d1224a4b
--- /dev/null
+++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/biz/ha/impl/HaTopicManagerImpl.java
@@ -0,0 +1,559 @@
+package com.xiaojukeji.kafka.manager.service.biz.ha.impl;
+
+import com.xiaojukeji.kafka.manager.common.bizenum.ha.HaResTypeEnum;
+import com.xiaojukeji.kafka.manager.common.bizenum.ha.HaStatusEnum;
+import com.xiaojukeji.kafka.manager.common.constant.MsgConstant;
+import com.xiaojukeji.kafka.manager.common.entity.Result;
+import com.xiaojukeji.kafka.manager.common.entity.ResultStatus;
+import com.xiaojukeji.kafka.manager.common.entity.TopicOperationResult;
+import com.xiaojukeji.kafka.manager.common.entity.ao.ha.HaSwitchTopic;
+import com.xiaojukeji.kafka.manager.common.entity.dto.op.topic.HaTopicRelationDTO;
+import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO;
+import com.xiaojukeji.kafka.manager.common.entity.pojo.TopicDO;
+import com.xiaojukeji.kafka.manager.common.entity.pojo.ha.HaASRelationDO;
+import com.xiaojukeji.kafka.manager.common.entity.pojo.ha.JobLogDO;
+import com.xiaojukeji.kafka.manager.common.utils.BackoffUtils;
+import com.xiaojukeji.kafka.manager.common.utils.ConvertUtil;
+import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
+import com.xiaojukeji.kafka.manager.service.biz.ha.HaTopicManager;
+import com.xiaojukeji.kafka.manager.service.cache.PhysicalClusterMetadataManager;
+import com.xiaojukeji.kafka.manager.service.service.ClusterService;
+import com.xiaojukeji.kafka.manager.service.service.JobLogService;
+import 
com.xiaojukeji.kafka.manager.service.service.TopicManagerService; +import com.xiaojukeji.kafka.manager.service.service.gateway.AuthorityService; +import com.xiaojukeji.kafka.manager.service.service.ha.HaASRelationService; +import com.xiaojukeji.kafka.manager.service.service.ha.HaKafkaUserService; +import com.xiaojukeji.kafka.manager.service.service.ha.HaTopicService; +import com.xiaojukeji.kafka.manager.service.utils.ConfigUtils; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import org.springframework.beans.factory.annotation.Autowired; +import org.springframework.stereotype.Component; + +import java.util.*; +import java.util.stream.Collectors; + +@Component +public class HaTopicManagerImpl implements HaTopicManager { + private static final Logger LOGGER = LoggerFactory.getLogger(HaTopicManagerImpl.class); + + @Autowired + private ClusterService clusterService; + + @Autowired + private AuthorityService authorityService; + + @Autowired + private HaTopicService haTopicService; + + @Autowired + private HaKafkaUserService haKafkaUserService; + + @Autowired + private HaASRelationService haASRelationService; + + @Autowired + private TopicManagerService topicManagerService; + + @Autowired + private ConfigUtils configUtils; + + @Autowired + private JobLogService jobLogService; + + @Override + public Result switchHaWithCanRetry(Long newActiveClusterPhyId, + Long newStandbyClusterPhyId, + List switchTopicNameList, + boolean focus, + boolean firstTriggerExecute, + JobLogDO switchLogTemplate, + String operator) { + LOGGER.info( + "method=switchHaWithCanRetry||newActiveClusterPhyId={}||newStandbyClusterPhyId={}||switchTopicNameList={}||focus={}||operator={}", + newActiveClusterPhyId, newStandbyClusterPhyId, ConvertUtil.obj2Json(switchTopicNameList), focus, operator + ); + + // 1、获取集群 + ClusterDO newActiveClusterPhyDO = clusterService.getById(newActiveClusterPhyId); + if (ValidateUtils.isNull(newActiveClusterPhyDO)) { + return Result.buildFromRSAndMsg(ResultStatus.RESOURCE_NOT_EXIST, MsgConstant.getClusterPhyNotExist(newActiveClusterPhyId)); + } + + ClusterDO newStandbyClusterPhyDO = clusterService.getById(newStandbyClusterPhyId); + if (ValidateUtils.isNull(newStandbyClusterPhyDO)) { + return Result.buildFromRSAndMsg(ResultStatus.RESOURCE_NOT_EXIST, MsgConstant.getClusterPhyNotExist(newStandbyClusterPhyId)); + } + + // 2、进行参数检查 + Result> doListResult = this.checkParamAndGetASRelation(newActiveClusterPhyId, newStandbyClusterPhyId, switchTopicNameList); + if (doListResult.failed()) { + LOGGER.error( + "method=switchHaWithCanRetry||newActiveClusterPhyId={}||newStandbyClusterPhyId={}||switchTopicNameList={}||paramErrResult={}||operator={}", + newActiveClusterPhyId, newStandbyClusterPhyId, ConvertUtil.obj2Json(switchTopicNameList), doListResult, operator + ); + + return Result.buildFromIgnoreData(doListResult); + } + List doList = doListResult.getData(); + + // 3、如果是第一次触发执行,且状态是stable,则修改状态 + for (HaASRelationDO relationDO: doList) { + if (firstTriggerExecute && relationDO.getStatus().equals(HaStatusEnum.STABLE_CODE)) { + relationDO.setStatus(HaStatusEnum.SWITCHING_PREPARE_CODE); + haASRelationService.updateRelationStatus(relationDO.getId(), HaStatusEnum.SWITCHING_PREPARE_CODE); + } + } + + // 4、进行切换预处理 + HaSwitchTopic switchTopic = this.prepareSwitching(newStandbyClusterPhyDO, doList, focus, switchLogTemplate); + + // 5、直接等待10秒,使得相关数据有机会同步完成 + BackoffUtils.backoff(10000); + + // 6、检查数据同步情况 + for (HaASRelationDO relationDO: doList) { + 
switchTopic.addHaSwitchTopic(this.checkTopicInSync(newActiveClusterPhyDO, newStandbyClusterPhyDO, relationDO, focus, switchLogTemplate));
+        }
+
+        // 7、删除旧的备Topic的同步配置
+        for (HaASRelationDO relationDO: doList) {
+            switchTopic.addHaSwitchTopic(this.oldStandbyTopicDelFetchConfig(newActiveClusterPhyDO, newStandbyClusterPhyDO, relationDO, focus, switchLogTemplate, operator));
+        }
+
+        // 8、增加新的备Topic的同步配置
+        switchTopic.addHaSwitchTopic(this.newStandbyTopicAddFetchConfig(newActiveClusterPhyDO, newStandbyClusterPhyDO, doList, focus, switchLogTemplate, operator));
+
+        // 9、进行切换收尾
+        switchTopic.addHaSwitchTopic(this.closeoutSwitching(newActiveClusterPhyDO, newStandbyClusterPhyDO, configUtils.getDKafkaGatewayZK(), doList, focus, switchLogTemplate));
+
+        // 10、状态结果汇总记录
+        doList.forEach(elem -> switchTopic.addActiveTopicStatus(elem.getActiveResName(), elem.getStatus()));
+
+        // 11、日志记录并返回
+        LOGGER.info(
+                "method=switchHaWithCanRetry||newActiveClusterPhyId={}||newStandbyClusterPhyId={}||switchTopicNameList={}||switchResult={}||operator={}",
+                newActiveClusterPhyId, newStandbyClusterPhyId, ConvertUtil.obj2Json(switchTopicNameList), switchTopic, operator
+        );
+
+        return Result.buildSuc(switchTopic);
+    }
+
+    @Override
+    public Result<List<TopicOperationResult>> batchCreateHaTopic(HaTopicRelationDTO dto, String operator) {
+        List<HaASRelationDO> relationDOS = haASRelationService.listAllHAFromDB(dto.getActiveClusterId(), dto.getStandbyClusterId(), HaResTypeEnum.CLUSTER);
+        if (relationDOS.isEmpty()){
+            return Result.buildFromRSAndMsg(ResultStatus.RESOURCE_NOT_EXIST, "集群高可用关系未建立");
+        }
+
+        // 获取主集群已有的高可用topic
+        Map<String, Integer> haRelationMap = haTopicService.getRelation(dto.getActiveClusterId());
+        List<String> topicNames = dto.getTopicNames();
+        if (dto.getAll()){
+            topicNames = topicManagerService.getByClusterId(dto.getActiveClusterId())
+                    .stream()
+                    .filter(topicDO -> !topicDO.getTopicName().startsWith("__")) // 过滤掉kafka自带topic
+                    .filter(topicDO -> !haRelationMap.keySet().contains(topicDO.getTopicName())) // 过滤掉已成为高可用topic的topic
+                    .filter(topicDO -> PhysicalClusterMetadataManager.isTopicExist(dto.getActiveClusterId(), topicDO.getTopicName()))
+                    .map(TopicDO::getTopicName)
+                    .collect(Collectors.toList());
+        }
+
+        List<TopicOperationResult> operationResultList = new ArrayList<>();
+        topicNames.forEach(topicName -> {
+            Result rv = haTopicService.createHA(dto.getActiveClusterId(), dto.getStandbyClusterId(), topicName, operator);
+            operationResultList.add(TopicOperationResult.buildFrom(dto.getActiveClusterId(), topicName, rv));
+        });
+
+        return Result.buildSuc(operationResultList);
+    }
+
+    @Override
+    public Result<List<TopicOperationResult>> batchRemoveHaTopic(HaTopicRelationDTO dto, String operator) {
+        List<HaASRelationDO> relationDOS = haASRelationService.listAllHAFromDB(dto.getActiveClusterId(), dto.getStandbyClusterId(), HaResTypeEnum.CLUSTER);
+        if (relationDOS.isEmpty()){
+            return Result.buildFromRSAndMsg(ResultStatus.RESOURCE_NOT_EXIST, "集群高可用关系未建立");
+        }
+
+        List<TopicOperationResult> operationResultList = new ArrayList<>();
+        for (String topicName : dto.getTopicNames()){
+            HaASRelationDO relationDO = haASRelationService.getHAFromDB(
+                    dto.getActiveClusterId(),
+                    topicName,
+                    HaResTypeEnum.TOPIC
+            );
+            if (relationDO == null) {
+                return Result.buildFromRSAndMsg(ResultStatus.RESOURCE_NOT_EXIST, "主备关系不存在");
+            }
+
+            Result rv = haTopicService.deleteHA(relationDO.getActiveClusterPhyId(), relationDO.getStandbyClusterPhyId(), topicName, operator);
+            operationResultList.add(TopicOperationResult.buildFrom(dto.getActiveClusterId(), topicName, rv));
+        }
+
+        return Result.buildSuc(operationResultList);
+    }
+
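
`switchHaWithCanRetry` is deliberately re-entrant: each call advances whatever relations are in the right state and returns an unfinished `HaSwitchTopic` whenever a step still has to wait (the job logs repeatedly say "1分钟后再进行重试"). A sketch of the minute-interval driver this implies follows; the scheduler itself, and the `isFinished()` accessor inferred from the `setFinished()` calls, are assumptions rather than part of this change set:

```java
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import com.xiaojukeji.kafka.manager.common.entity.Result;
import com.xiaojukeji.kafka.manager.common.entity.ao.ha.HaSwitchTopic;
import com.xiaojukeji.kafka.manager.common.entity.pojo.ha.JobLogDO;
import com.xiaojukeji.kafka.manager.service.biz.ha.HaTopicManager;

// Hypothetical retry driver for the re-entrant switch method above.
public class SwitchRetryDriverSketch {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    public void runSwitch(HaTopicManager haTopicManager, Long newActiveId, Long newStandbyId,
                          List<String> topicNames, JobLogDO logTemplate, String operator) {
        // First trigger: firstTriggerExecute=true moves stable relations into SWITCHING_PREPARE.
        Result<HaSwitchTopic> first = haTopicManager.switchHaWithCanRetry(
                newActiveId, newStandbyId, topicNames, false, true, logTemplate, operator);
        if (first.failed() || first.getData().isFinished()) {
            return; // parameter error, or the whole switch completed in one pass
        }
        // Retry once per minute with firstTriggerExecute=false until every step reports finished.
        scheduler.scheduleWithFixedDelay(() -> {
            Result<HaSwitchTopic> r = haTopicManager.switchHaWithCanRetry(
                    newActiveId, newStandbyId, topicNames, false, false, logTemplate, operator);
            if (r.failed() || r.getData().isFinished()) {
                scheduler.shutdown();
            }
        }, 1, 1, TimeUnit.MINUTES);
    }
}
```
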
+    /**************************************************** private method ****************************************************/
+
+    private void saveLogs(JobLogDO switchLogTemplate, String content) {
+        jobLogService.addLogAndIgnoreException(switchLogTemplate.setAndCopyNew(new Date(), content));
+    }
+
+    /**
+     * 切换预处理
+     * 1、在主集群上,将Topic关联的KafkaUser的active集群设置为None
+     */
+    private HaSwitchTopic prepareSwitching(ClusterDO oldActiveClusterPhyDO, List<HaASRelationDO> doList, boolean focus, JobLogDO switchLogTemplate) {
+        // 暂停HA的KafkaUser
+        Set<String> stoppedHaKafkaUserSet = new HashSet<>();
+
+        HaSwitchTopic haSwitchTopic = new HaSwitchTopic(true);
+
+        boolean allSuccess = true; // 所有都成功
+        boolean needLog = false;   // 需要记录日志
+        for (HaASRelationDO relationDO: doList) {
+            if (!relationDO.getStatus().equals(HaStatusEnum.SWITCHING_PREPARE_CODE)) {
+                // 当前不处于prepare状态
+                haSwitchTopic.setFinished(true);
+                continue;
+            }
+            needLog = true;
+
+            // 获取关联的KafkaUser
+            Set<String> relatedKafkaUserSet = authorityService.getAuthorityByTopic(relationDO.getActiveClusterPhyId(), relationDO.getActiveResName())
+                    .stream()
+                    .map(elem -> elem.getAppId())
+                    .filter(kafkaUser -> !stoppedHaKafkaUserSet.contains(kafkaUser))
+                    .collect(Collectors.toSet());
+
+            // 暂停kafkaUser HA
+            for (String kafkaUser: relatedKafkaUserSet) {
+                Result rv = haKafkaUserService.setNoneHAInKafka(oldActiveClusterPhyDO.getZookeeper(), kafkaUser);
+                if (rv.failed() && !focus) {
+                    haSwitchTopic.setFinished(false);
+
+                    this.saveLogs(switchLogTemplate, String.format("%s:\t失败,1分钟后再进行重试", HaStatusEnum.SWITCHING_PREPARE.getMsg(oldActiveClusterPhyDO.getClusterName())));
+                    return haSwitchTopic;
+                } else if (rv.failed() && focus) {
+                    allSuccess = false;
+                }
+            }
+
+            // 记录操作过的user
+            stoppedHaKafkaUserSet.addAll(relatedKafkaUserSet);
+
+            // 修改Topic主备状态
+            relationDO.setStatus(HaStatusEnum.SWITCHING_WAITING_IN_SYNC_CODE);
+            haASRelationService.updateRelationStatus(relationDO.getId(), HaStatusEnum.SWITCHING_WAITING_IN_SYNC_CODE);
+        }
+
+        if (needLog) {
+            this.saveLogs(switchLogTemplate, String.format("%s:\t%s", HaStatusEnum.SWITCHING_PREPARE.getMsg(oldActiveClusterPhyDO.getClusterName()), allSuccess ? 
"成功": "存在失败,但进行强制执行,跳过该操作")); + } + + haSwitchTopic.setFinished(true); + return haSwitchTopic; + } + + /** + * 等待主备Topic同步 + */ + private HaSwitchTopic checkTopicInSync(ClusterDO newActiveClusterPhyDO, ClusterDO newStandbyClusterPhyDO, HaASRelationDO relationDO, boolean focus, JobLogDO switchLogTemplate) { + HaSwitchTopic haSwitchTopic = new HaSwitchTopic(true); + if (!relationDO.getStatus().equals(HaStatusEnum.SWITCHING_WAITING_IN_SYNC_CODE)) { + // 状态错误,直接略过 + haSwitchTopic.setFinished(true); + return haSwitchTopic; + } + + if (focus) { + // 无需等待inSync + + // 修改Topic主备状态 + relationDO.setStatus(HaStatusEnum.SWITCHING_CLOSE_OLD_STANDBY_TOPIC_FETCH_CODE); + haASRelationService.updateRelationStatus(relationDO.getId(), HaStatusEnum.SWITCHING_CLOSE_OLD_STANDBY_TOPIC_FETCH_CODE); + + haSwitchTopic.setFinished(true); + this.saveLogs(switchLogTemplate, String.format( + "%s:\tTopic:[%s] 强制切换,跳过等待主备同步完成,直接进入下一步", + HaStatusEnum.SWITCHING_WAITING_IN_SYNC.getMsg(newActiveClusterPhyDO.getClusterName()), + relationDO.getActiveResName() + )); + return haSwitchTopic; + } + + Result lagResult = haTopicService.getStandbyTopicFetchLag(newStandbyClusterPhyDO.getId(), relationDO.getStandbyResName()); + if (lagResult.failed()) { + // 获取Lag信息失败 + this.saveLogs(switchLogTemplate, String.format( + "%s:\tTopic:[%s] 获取同步的Lag信息失败,1分钟后再检查是否主备同步完成", + HaStatusEnum.SWITCHING_WAITING_IN_SYNC.getMsg(newActiveClusterPhyDO.getClusterName()), + relationDO.getActiveResName() + )); + haSwitchTopic.setFinished(false); + return haSwitchTopic; + } + + if (lagResult.getData().longValue() > 0) { + this.saveLogs(switchLogTemplate, String.format( + "%s:\tTopic:[%s] 还存在 %d 条数据未同步完成,1分钟后再检查是否主备同步完成", + HaStatusEnum.SWITCHING_WAITING_IN_SYNC.getMsg(newActiveClusterPhyDO.getClusterName()), + relationDO.getActiveResName(), + lagResult.getData() + )); + + haSwitchTopic.setFinished(false); + return haSwitchTopic; + } + + // 修改Topic主备状态 + relationDO.setStatus(HaStatusEnum.SWITCHING_CLOSE_OLD_STANDBY_TOPIC_FETCH_CODE); + haASRelationService.updateRelationStatus(relationDO.getId(), HaStatusEnum.SWITCHING_CLOSE_OLD_STANDBY_TOPIC_FETCH_CODE); + + haSwitchTopic.setFinished(true); + this.saveLogs(switchLogTemplate, String.format( + "%s:\tTopic:[%s] 主备同步完成", + HaStatusEnum.SWITCHING_WAITING_IN_SYNC.getMsg(newActiveClusterPhyDO.getClusterName()), + relationDO.getActiveResName() + )); + return haSwitchTopic; + } + + /** + * 备Topic删除拉取主Topic数据的配置 + */ + private HaSwitchTopic oldStandbyTopicDelFetchConfig(ClusterDO newActiveClusterPhyDO, ClusterDO newStandbyClusterPhyDO, HaASRelationDO relationDO, boolean focus, JobLogDO switchLogTemplate, String operator) { + HaSwitchTopic haSwitchTopic = new HaSwitchTopic(true); + if (!relationDO.getStatus().equals(HaStatusEnum.SWITCHING_CLOSE_OLD_STANDBY_TOPIC_FETCH_CODE)) { + // 状态不对 + haSwitchTopic.setFinished(true); + return haSwitchTopic; + } + + Result rv = haTopicService.stopHAInKafka( + newActiveClusterPhyDO, relationDO.getStandbyResName(), // 旧的备 + operator + ); + if (rv.failed() && !focus) { + this.saveLogs(switchLogTemplate, String.format("%s:\tTopic:[%s] 失败,1分钟后再进行重试", HaStatusEnum.SWITCHING_CLOSE_OLD_STANDBY_TOPIC_FETCH.getMsg(newActiveClusterPhyDO.getClusterName()), relationDO.getActiveResName())); + haSwitchTopic.setFinished(false); + return haSwitchTopic; + } else if (rv.failed() && focus) { + this.saveLogs(switchLogTemplate, String.format("%s:\tTopic:[%s] 失败,但进行强制执行,跳过该操作", HaStatusEnum.SWITCHING_CLOSE_OLD_STANDBY_TOPIC_FETCH.getMsg(newActiveClusterPhyDO.getClusterName()), 
relationDO.getActiveResName())); + } else { + this.saveLogs(switchLogTemplate, String.format("%s:\tTopic:[%s] 成功", HaStatusEnum.SWITCHING_CLOSE_OLD_STANDBY_TOPIC_FETCH.getMsg(newActiveClusterPhyDO.getClusterName()), relationDO.getActiveResName())); + } + + // 修改Topic主备状态 + relationDO.setStatus(HaStatusEnum.SWITCHING_OPEN_NEW_STANDBY_TOPIC_FETCH_CODE); + haASRelationService.updateRelationStatus(relationDO.getId(), HaStatusEnum.SWITCHING_OPEN_NEW_STANDBY_TOPIC_FETCH_CODE); + + haSwitchTopic.setFinished(true); + return haSwitchTopic; + } + + /** + * 新的备Topic,创建拉取新主Topic数据的配置 + */ + private HaSwitchTopic newStandbyTopicAddFetchConfig(ClusterDO newActiveClusterPhyDO, + ClusterDO newStandbyClusterPhyDO, + List doList, + boolean focus, + JobLogDO switchLogTemplate, + String operator) { + boolean forceAndFailed = false; + for (HaASRelationDO relationDO: doList) { + if (!relationDO.getStatus().equals(HaStatusEnum.SWITCHING_OPEN_NEW_STANDBY_TOPIC_FETCH_CODE)) { + // 状态不对 + continue; + } + + Result rv = null; + if (!forceAndFailed) { + // 非 强制切换并且失败了 + rv = haTopicService.activeHAInKafka( + newActiveClusterPhyDO, relationDO.getStandbyResName(), + newStandbyClusterPhyDO, relationDO.getStandbyResName(), + operator + ); + } + + if (forceAndFailed) { + // 强制切换并且失败了,记录该日志 + this.saveLogs(switchLogTemplate, String.format("%s:\tTopic:[%s] 失败,但因为是强制执行且强制执行时依旧出现操作失败,因此直接跳过该操作", HaStatusEnum.SWITCHING_OPEN_NEW_STANDBY_TOPIC_FETCH.getMsg(newStandbyClusterPhyDO.getClusterName()), relationDO.getActiveResName())); + + } else if (rv.failed() && !focus) { + // 如果失败了,并且非强制切换,则直接返回 + this.saveLogs(switchLogTemplate, String.format("%s:\tTopic:[%s] 失败,1分钟后再进行重试", HaStatusEnum.SWITCHING_OPEN_NEW_STANDBY_TOPIC_FETCH.getMsg(newStandbyClusterPhyDO.getClusterName()), relationDO.getActiveResName())); + + return new HaSwitchTopic(false); + } else if (rv.failed() && focus) { + // 如果失败了,但是是强制切换,则记录日志并继续 + this.saveLogs(switchLogTemplate, String.format("%s:\tTopic:[%s] 失败,但因为是强制执行,因此跳过该操作", HaStatusEnum.SWITCHING_OPEN_NEW_STANDBY_TOPIC_FETCH.getMsg(newStandbyClusterPhyDO.getClusterName()), relationDO.getActiveResName())); + + forceAndFailed = true; + } else { + // 记录成功日志 + this.saveLogs(switchLogTemplate, String.format("%s:\tTopic:[%s] 成功", HaStatusEnum.SWITCHING_OPEN_NEW_STANDBY_TOPIC_FETCH.getMsg(newStandbyClusterPhyDO.getClusterName()), relationDO.getActiveResName())); + } + + // 修改Topic主备状态 + relationDO.setStatus(HaStatusEnum.SWITCHING_CLOSEOUT_CODE); + haASRelationService.updateRelationStatus(relationDO.getId(), HaStatusEnum.SWITCHING_CLOSEOUT_CODE); + } + + return new HaSwitchTopic(true); + } + + /** + * 切换收尾 + * 1、原先的主集群-修改user的active集群,指向新的主集群 + * 2、原先的备集群-修改user的active集群,指向新的主集群 + * 3、网关-修改user的active集群,指向新的主集群 + */ + private HaSwitchTopic closeoutSwitching(ClusterDO newActiveClusterPhyDO, ClusterDO newStandbyClusterPhyDO, String gatewayZK, List doList, boolean focus, JobLogDO switchLogTemplate) { + // 暂停HA的KafkaUser + Set activeHaKafkaUserSet = new HashSet<>(); + + boolean allSuccess = true; + boolean needLog = false; + boolean forceAndNewStandbyFailed = false; // 强制切换,但是新的备依旧操作失败 + + HaSwitchTopic haSwitchTopic = new HaSwitchTopic(true); + for (HaASRelationDO relationDO: doList) { + if (!relationDO.getStatus().equals(HaStatusEnum.SWITCHING_CLOSEOUT_CODE)) { + // 当前不处于closeout状态 + haSwitchTopic.setFinished(false); + continue; + } + + needLog = true; + + // 获取关联的KafkaUser + Set relatedKafkaUserSet = authorityService.getAuthorityByTopic(relationDO.getActiveClusterPhyId(), relationDO.getActiveResName()) + .stream() + 
.map(elem -> elem.getAppId()) + .filter(kafkaUser -> !activeHaKafkaUserSet.contains(kafkaUser)) + .collect(Collectors.toSet()); + + for (String kafkaUser: relatedKafkaUserSet) { + // 操作新的主集群 + Result rv = haKafkaUserService.activeHAInKafka(newActiveClusterPhyDO.getZookeeper(), newActiveClusterPhyDO.getId(), kafkaUser); + if (rv.failed() && !focus) { + haSwitchTopic.setFinished(false); + this.saveLogs(switchLogTemplate, String.format("%s:\t失败,1分钟后再进行重试", HaStatusEnum.SWITCHING_CLOSEOUT.getMsg(newActiveClusterPhyDO.getClusterName()))); + return haSwitchTopic; + } else if (rv.failed() && focus) { + allSuccess = false; + } + + // 操作新的备集群,如果出现错误,则下次就不再进行操作ZK。新的备的Topic不是那么重要,因此这里允许出现跳过 + rv = null; + if (!forceAndNewStandbyFailed) { + // 如果对备集群的操作过程中,出现了失败,则直接跳过 + rv = haKafkaUserService.activeHAInKafka(newStandbyClusterPhyDO.getZookeeper(), newActiveClusterPhyDO.getId(), kafkaUser); + } + + if (rv != null && rv.failed() && !focus) { + haSwitchTopic.setFinished(false); + this.saveLogs(switchLogTemplate, String.format("%s:\t失败,1分钟后再进行重试", HaStatusEnum.SWITCHING_CLOSEOUT.getMsg(newActiveClusterPhyDO.getClusterName()))); + return haSwitchTopic; + } else if (rv != null && rv.failed() && focus) { + allSuccess = false; + forceAndNewStandbyFailed = true; + } + + // 操作网关 + rv = haKafkaUserService.activeHAInKafka(gatewayZK, newActiveClusterPhyDO.getId(), kafkaUser); + if (rv.failed() && !focus) { + haSwitchTopic.setFinished(false); + this.saveLogs(switchLogTemplate, String.format("%s:\t失败,1分钟后再进行重试", HaStatusEnum.SWITCHING_CLOSEOUT.getMsg(newActiveClusterPhyDO.getClusterName()))); + return haSwitchTopic; + } else if (rv.failed() && focus) { + allSuccess = false; + } + } + + // 记录已经激活的User + activeHaKafkaUserSet.addAll(relatedKafkaUserSet); + + // 修改Topic主备信息 + HaASRelationDO newHaASRelationDO = new HaASRelationDO( + newActiveClusterPhyDO.getId(), relationDO.getActiveResName(), + newStandbyClusterPhyDO.getId(), relationDO.getStandbyResName(), + HaResTypeEnum.TOPIC.getCode(), + HaStatusEnum.STABLE_CODE + ); + newHaASRelationDO.setId(relationDO.getId()); + + haASRelationService.updateById(newHaASRelationDO); + } + + if (!needLog) { + return haSwitchTopic; + } + + this.saveLogs(switchLogTemplate, String.format("%s:\t%s", HaStatusEnum.SWITCHING_CLOSEOUT.getMsg(newActiveClusterPhyDO.getClusterName()), allSuccess? 
"成功": "存在失败,但进行强制执行,跳过该操作")); + return haSwitchTopic; + } + + /** + * 检查参数,并获取主备关系信息 + */ + private Result> checkParamAndGetASRelation(Long activeClusterPhyId, Long standbyClusterPhyId, List switchTopicNameList) { + List doList = new ArrayList<>(); + for (String topicName: switchTopicNameList) { + Result doResult = this.checkParamAndGetASRelation(activeClusterPhyId, standbyClusterPhyId, topicName); + if (doResult.failed()) { + return Result.buildFromIgnoreData(doResult); + } + + doList.add(doResult.getData()); + } + + return Result.buildSuc(doList); + } + + /** + * 检查参数,并获取主备关系信息 + */ + private Result checkParamAndGetASRelation(Long activeClusterPhyId, Long standbyClusterPhyId, String topicName) { + // newActiveTopic必须存在,新的备Topic可以不存在 + if (!PhysicalClusterMetadataManager.isTopicExist(activeClusterPhyId, topicName)) { + return Result.buildFromRSAndMsg( + ResultStatus.RESOURCE_NOT_EXIST, + String.format("新的主集群ID:[%d]-Topic:[%s] 不存在", activeClusterPhyId, topicName) + ); + } + + // 查询主备关系是否存在 + HaASRelationDO relationDO = haASRelationService.getSpecifiedHAFromDB( + standbyClusterPhyId, + topicName, + activeClusterPhyId, + topicName, + HaResTypeEnum.TOPIC + ); + if (relationDO == null) { + // 查询切换后的关系是否存在,如果已经存在,则后续会重新建立一遍 + relationDO = haASRelationService.getSpecifiedHAFromDB( + activeClusterPhyId, + topicName, + standbyClusterPhyId, + topicName, + HaResTypeEnum.TOPIC + ); + } + + if (relationDO == null) { + // 主备关系不存在 + return Result.buildFromRSAndMsg( + ResultStatus.RESOURCE_NOT_EXIST, + String.format("主集群ID:[%d]-Topic:[%s], 备集群ID:[%d] Topic:[%s] 的主备关系不存在,因此无法切换", activeClusterPhyId, topicName, standbyClusterPhyId, topicName) + ); + } + + return Result.buildSuc(relationDO); + } +} diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/biz/job/HaASSwitchJobManager.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/biz/job/HaASSwitchJobManager.java new file mode 100644 index 00000000..425ebe03 --- /dev/null +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/biz/job/HaASSwitchJobManager.java @@ -0,0 +1,41 @@ +package com.xiaojukeji.kafka.manager.service.biz.job; + + +import com.xiaojukeji.kafka.manager.common.entity.Result; +import com.xiaojukeji.kafka.manager.common.entity.ao.ha.job.HaJobState; +import com.xiaojukeji.kafka.manager.common.entity.dto.ha.ASSwitchJobActionDTO; +import com.xiaojukeji.kafka.manager.common.entity.dto.ha.ASSwitchJobDTO; +import com.xiaojukeji.kafka.manager.common.entity.vo.ha.job.HaJobDetailVO; + +import java.util.List; + + +public interface HaASSwitchJobManager { + /** + * 创建任务 + */ + Result createJob(ASSwitchJobDTO dto, String operator); + + /** + * 执行job + * @param jobId 任务ID + * @param focus 强制切换 + * @param firstTriggerExecute 第一次触发执行 + * @return + */ + Result executeJob(Long jobId, boolean focus, boolean firstTriggerExecute); + + Result jobState(Long jobId); + + /** + * 刷新扩展数据 + */ + void flushExtendData(Long jobId); + + /** + * 对Job执行操作 + */ + Result actionJob(Long jobId, ASSwitchJobActionDTO dto); + + Result> jobDetail(Long jobId); +} diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/biz/job/impl/HaASSwitchJobManagerImpl.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/biz/job/impl/HaASSwitchJobManagerImpl.java new file mode 100644 index 00000000..86f68c3f --- /dev/null +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/biz/job/impl/HaASSwitchJobManagerImpl.java @@ -0,0 +1,452 @@ +package 
com.xiaojukeji.kafka.manager.service.biz.job.impl; + +import com.xiaojukeji.kafka.manager.common.bizenum.JobLogBizTypEnum; +import com.xiaojukeji.kafka.manager.common.bizenum.TaskActionEnum; +import com.xiaojukeji.kafka.manager.common.bizenum.ha.HaResTypeEnum; +import com.xiaojukeji.kafka.manager.common.bizenum.ha.HaStatusEnum; +import com.xiaojukeji.kafka.manager.common.bizenum.ha.job.HaJobStatusEnum; +import com.xiaojukeji.kafka.manager.common.constant.ConfigConstant; +import com.xiaojukeji.kafka.manager.common.constant.Constant; +import com.xiaojukeji.kafka.manager.common.constant.MsgConstant; +import com.xiaojukeji.kafka.manager.common.entity.Result; +import com.xiaojukeji.kafka.manager.common.entity.ResultStatus; +import com.xiaojukeji.kafka.manager.common.entity.ao.ha.HaSwitchTopic; +import com.xiaojukeji.kafka.manager.common.entity.ao.ha.job.HaJobDetail; +import com.xiaojukeji.kafka.manager.common.entity.ao.ha.job.HaJobState; +import com.xiaojukeji.kafka.manager.common.entity.ao.ha.job.HaSubJobExtendData; +import com.xiaojukeji.kafka.manager.common.entity.dto.ha.ASSwitchJobActionDTO; +import com.xiaojukeji.kafka.manager.common.entity.dto.ha.ASSwitchJobDTO; +import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO; +import com.xiaojukeji.kafka.manager.common.entity.pojo.ha.HaASRelationDO; +import com.xiaojukeji.kafka.manager.common.entity.pojo.ha.HaASSwitchJobDO; +import com.xiaojukeji.kafka.manager.common.entity.pojo.ha.HaASSwitchSubJobDO; +import com.xiaojukeji.kafka.manager.common.entity.pojo.ha.JobLogDO; +import com.xiaojukeji.kafka.manager.common.entity.vo.ha.job.HaJobDetailVO; +import com.xiaojukeji.kafka.manager.common.utils.BackoffUtils; +import com.xiaojukeji.kafka.manager.common.utils.ConvertUtil; +import com.xiaojukeji.kafka.manager.common.utils.FutureUtil; +import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils; +import com.xiaojukeji.kafka.manager.service.biz.ha.HaAppManager; +import com.xiaojukeji.kafka.manager.service.biz.ha.HaTopicManager; +import com.xiaojukeji.kafka.manager.service.biz.job.HaASSwitchJobManager; +import com.xiaojukeji.kafka.manager.service.cache.PhysicalClusterMetadataManager; +import com.xiaojukeji.kafka.manager.service.service.ClusterService; +import com.xiaojukeji.kafka.manager.service.service.ConfigService; +import com.xiaojukeji.kafka.manager.service.service.JobLogService; +import com.xiaojukeji.kafka.manager.service.service.ha.HaASSwitchJobService; +import com.xiaojukeji.kafka.manager.service.service.ha.HaASRelationService; +import com.xiaojukeji.kafka.manager.service.service.ha.HaTopicService; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import org.springframework.beans.factory.annotation.Autowired; +import org.springframework.stereotype.Service; + +import java.util.*; +import java.util.stream.Collectors; + + +@Service +public class HaASSwitchJobManagerImpl implements HaASSwitchJobManager { + private static final Logger LOGGER = LoggerFactory.getLogger(HaASSwitchJobManagerImpl.class); + + @Autowired + private JobLogService jobLogService; + + @Autowired + private ClusterService clusterService; + + @Autowired + private ConfigService configService; + + @Autowired + private HaASRelationService haASRelationService; + + @Autowired + private HaASSwitchJobService haASSwitchJobService; + + @Autowired + private HaTopicManager haTopicManager; + + @Autowired + private HaTopicService haTopicService; + + @Autowired + private HaAppManager haAppManager; + + private static final Long BACK_OFF_TIME = 3000L; + + private static 
final FutureUtil asyncExecuteJob = FutureUtil.init( + "HaASSwitchJobManager", + 10, + 10, + 5000 + ); + + @Override + public Result createJob(ASSwitchJobDTO dto, String operator) { + LOGGER.info("method=createJob||activeClusterPhyId={}||switchTopicParam={}||operator={}", dto.getActiveClusterPhyId(), ConvertUtil.obj2Json(dto), operator); + + // 1、检查参数是否合法,并获取需要执行主备切换的Topics + Result> haTopicSetResult = this.checkParamLegalAndGetNeedSwitchHaTopics(dto); + if (haTopicSetResult.failed()) { + // 检查失败,则直接返回 + return Result.buildFromIgnoreData(haTopicSetResult); + } + + LOGGER.info("method=createJob||activeClusterPhyId={}||switchTopics={}||operator={}", dto.getActiveClusterPhyId(), ConvertUtil.obj2Json(haTopicSetResult.getData()), operator); + + // 2、查看是否将KafkaUser关联的Topic都涵盖了 + if (dto.getMustContainAllKafkaUserTopics() != null + && dto.getMustContainAllKafkaUserTopics() + && (dto.getAll() == null || !dto.getAll()) + && !haAppManager.isContainAllRelateAppTopics(dto.getActiveClusterPhyId(), dto.getTopicNameList())) { + return Result.buildFromRSAndMsg(ResultStatus.OPERATION_FORBIDDEN, "存在KafkaUser关联的Topic未选中"); + } + + // 3、创建任务 + Result longResult = haASSwitchJobService.createJob( + dto.getActiveClusterPhyId(), + dto.getStandbyClusterPhyId(), + new ArrayList<>(haTopicSetResult.getData()), + operator + ); + if (longResult.failed()) { + // 创建失败 + return longResult; + } + + LOGGER.info("method=createJob||activeClusterPhyId={}||jobId={}||operator={}||msg=create-job success", dto.getActiveClusterPhyId(), longResult.getData(), operator); + + // 4、为了加快执行效率,这里在创建完成任务之后,会直接异步执行HA切换任务 + asyncExecuteJob.directSubmitTask( + () -> { + BackoffUtils.backoff(BACK_OFF_TIME); + + this.executeJob(longResult.getData(), false, true); + + // 更新扩展数据 + this.flushExtendData(longResult.getData()); + } + ); + + // 5、返回结果 + return longResult; + } + + @Override + public Result executeJob(Long jobId, boolean focus, boolean firstTriggerExecute) { + LOGGER.info("method=executeJob||jobId={}||msg=execute job start", jobId); + + // 查询job + HaASSwitchJobDO jobDO = haASSwitchJobService.getJobById(jobId); + if (jobDO == null) { + LOGGER.warn("method=executeJob||jobId={}||msg=job not exist", jobId); + return Result.buildFromRSAndMsg(ResultStatus.RESOURCE_NOT_EXIST, String.format("jobId:[%d] 不存在", jobId)); + } + + // 检查job状态 + if (!HaJobStatusEnum.isRunning(jobDO.getJobStatus())) { + LOGGER.warn("method=executeJob||jobId={}||jobStatus={}||msg=job status illegal", jobId, HaJobStatusEnum.valueOfStatus(jobDO.getJobStatus())); + return this.buildActionForbidden(jobId, jobDO.getJobStatus()); + } + + // 查询子job列表 + List subJobDOList = haASSwitchJobService.listSubJobsById(jobId); + if (ValidateUtils.isEmptyList(subJobDOList)) { + // 无子任务,则设置任务状态为成功 + haASSwitchJobService.updateJobStatus(jobId, HaJobStatusEnum.SUCCESS.getStatus()); + return Result.buildSuc(); + } + + Set statusSet = new HashSet<>(); + subJobDOList.forEach(elem -> statusSet.add(elem.getJobStatus())); + if (statusSet.size() == 1 && statusSet.contains(HaJobStatusEnum.SUCCESS.getStatus())) { + // 无子任务,则设置任务状态为成功 + haASSwitchJobService.updateJobStatus(jobId, HaJobStatusEnum.SUCCESS.getStatus()); + return Result.buildSuc(); + } + + if (firstTriggerExecute) { + this.saveLogs(jobDO.getId(), "主备切换开始..."); + this.saveLogs(jobDO.getId(), "如果主备集群或网关的ZK存在问题,则可能会出现1分钟左右日志不刷新的情况"); + } + + // 进行主备切换 + Result haSwitchTopicResult = haTopicManager.switchHaWithCanRetry( + jobDO.getActiveClusterPhyId(), + jobDO.getStandbyClusterPhyId(), + subJobDOList.stream().map(elem -> 
elem.getActiveResName()).collect(Collectors.toList()), + focus, + firstTriggerExecute, + new JobLogDO(JobLogBizTypEnum.HA_SWITCH_JOB_LOG.getCode(), String.valueOf(jobId)), + jobDO.getOperator() + ); + + if (haSwitchTopicResult.failed()) { + // 出现错误 + LOGGER.error("method=executeJob||jobId={}||executeResult={}||msg=execute job failed", jobId, haSwitchTopicResult); + return Result.buildFromIgnoreData(haSwitchTopicResult); + } + + + // 执行结果 + HaSwitchTopic haSwitchTopic = haSwitchTopicResult.getData(); + Long timeoutUnitSec = this.getTimeoutUnitSecConfig(jobDO.getActiveClusterPhyId()); + + // 存储日志 + if (haSwitchTopic.isFinished()) { + this.saveLogs(jobDO.getId(), "主备切换完成."); + } + + // 更新状态 + for (HaASSwitchSubJobDO subJobDO: subJobDOList) { + if (haSwitchTopic.isActiveTopicSwitchFinished(subJobDO.getActiveResName()) || haSwitchTopic.isFinished()) { + // 执行完成 + haASSwitchJobService.updateSubJobStatus(subJobDO.getId(), HaJobStatusEnum.SUCCESS.getStatus()); + } else if (runningInTimeout(subJobDO.getCreateTime().getTime(), timeoutUnitSec)) { + // 超时运行中 + haASSwitchJobService.updateSubJobStatus(subJobDO.getId(), HaJobStatusEnum.RUNNING_IN_TIMEOUT.getStatus()); + } + } + + if (haSwitchTopic.isFinished()) { + // 任务执行完成 + LOGGER.info("method=executeJob||jobId={}||executeResult={}||msg=execute job success", jobId, haSwitchTopicResult); + + // 更新状态 + haASSwitchJobService.updateJobStatus(jobId, HaJobStatusEnum.SUCCESS.getStatus()); + } else { + LOGGER.info("method=executeJob||jobId={}||executeResult={}||msg=execute job not finished", jobId, haSwitchTopicResult); + } + + // 返回结果 + return Result.buildSuc(); + } + + @Override + public Result jobState(Long jobId) { + List doList = haASSwitchJobService.listSubJobsById(jobId); + if (ValidateUtils.isEmptyList(doList)) { + return Result.buildFromRSAndMsg(ResultStatus.RESOURCE_NOT_EXIST, String.format("jobId:[%d] 不存在", jobId)); + } + + if (System.currentTimeMillis() - doList.get(0).getCreateTime().getTime() <= (BACK_OFF_TIME.longValue() * 2)) { + // 进度0 + return Result.buildSuc(new HaJobState(doList.size(), 0)); + } + + // 这里会假设主备Topic的名称是一样的 + Map progressMap = new HashMap<>(); + haASRelationService.listAllHAFromDB(doList.get(0).getActiveClusterPhyId(), HaResTypeEnum.TOPIC).stream().forEach( + elem -> progressMap.put(elem.getActiveResName(), elem.getStatus()) + ); + + HaJobState haJobState = new HaJobState( + doList.stream().map(elem -> elem.getJobStatus()).collect(Collectors.toList()), + 0 + ); + + // 计算细致的进度信息 + Integer progress = 0; + for (HaASSwitchSubJobDO elem: doList) { + if (HaJobStatusEnum.isFinished(elem.getJobStatus())) { + progress += 100; + continue; + } + + progress += HaStatusEnum.calProgress(progressMap.get(elem.getActiveResName())); + } + haJobState.setProgress(ConvertUtil.double2Int(progress * 1.0 / doList.size())); + + return Result.buildSuc(haJobState); + + } + + @Override + public void flushExtendData(Long jobId) { + // 因为仅仅是刷新扩展数据,因此不会对jobId等进行严格检查 + + // 查询子job列表 + List subJobDOList = haASSwitchJobService.listSubJobsById(jobId); + if (ValidateUtils.isEmptyList(subJobDOList)) { + // 无任务,直接返回 + return; + } + + for (HaASSwitchSubJobDO subJobDO: subJobDOList) { + try { + this.flushExtendData(subJobDO); + } catch (Exception e) { + LOGGER.error("method=flushExtendData||jobId={}||subJobDO={}||errMsg=exception", jobId, subJobDO, e); + } + } + } + + @Override + public Result actionJob(Long jobId, ASSwitchJobActionDTO dto) { + if (!TaskActionEnum.FORCE.getAction().equals(dto.getAction())) { + // 不存在,或者不支持 + return 
Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, "action不存在"); + } + + // 强制执行,异步执行 + this.saveLogs(jobId, "开始执行强制切换..."); + this.saveLogs(jobId, "强制切换过程中,可能出现日志1分钟不刷新情况"); + this.saveLogs(jobId, "强制切换过程中,因可能与正常切换任务同时执行,因此可能出现日志重复问题"); + asyncExecuteJob.directSubmitTask( + () -> this.executeJob(jobId, true, false) + ); + + return Result.buildSuc(); + } + + @Override + public Result> jobDetail(Long jobId) { + // 获取详情 + Result> haResult = haASSwitchJobService.jobDetail(jobId); + if (haResult.failed()) { + return Result.buildFromIgnoreData(haResult); + } + + List voList = ConvertUtil.list2List(haResult.getData(), HaJobDetailVO.class); + if (voList.isEmpty()) { + return Result.buildSuc(voList); + } + + ClusterDO activeClusterDO = clusterService.getById(voList.get(0).getActiveClusterPhyId()); + ClusterDO standbyClusterDO = clusterService.getById(voList.get(0).getStandbyClusterPhyId()); + + // 获取超时配置 + Long timeoutUnitSecConfig = this.getTimeoutUnitSecConfig(voList.get(0).getActiveClusterPhyId()); + voList.forEach(elem -> { + elem.setTimeoutUnitSecConfig(timeoutUnitSecConfig); + elem.setActiveClusterPhyName(activeClusterDO != null? activeClusterDO.getClusterName(): ""); + elem.setStandbyClusterPhyName(standbyClusterDO != null? standbyClusterDO.getClusterName(): ""); + }); + + // 返回结果 + return Result.buildSuc(voList); + } + + /**************************************************** private method ****************************************************/ + + /** + * 检查参数是否合法并返回需要进行主备切换的Topic + */ + private Result> checkParamLegalAndGetNeedSwitchHaTopics(ASSwitchJobDTO dto) { + // 1、检查主集群是否存在 + ClusterDO activeClusterDO = clusterService.getById(dto.getActiveClusterPhyId()); + if (ValidateUtils.isNull(activeClusterDO)) { + return Result.buildFromRSAndMsg(ResultStatus.RESOURCE_NOT_EXIST, MsgConstant.getClusterPhyNotExist(dto.getActiveClusterPhyId())); + } + + // 2、检查备集群是否存在 + ClusterDO standbyClusterDO = clusterService.getById(dto.getStandbyClusterPhyId()); + if (ValidateUtils.isNull(standbyClusterDO)) { + return Result.buildFromRSAndMsg(ResultStatus.RESOURCE_NOT_EXIST, MsgConstant.getClusterPhyNotExist(dto.getStandbyClusterPhyId())); + } + + // 3、检查集群是否建立了主备关系 + List clusterDOList = haASRelationService.listAllHAFromDB(dto.getActiveClusterPhyId(), dto.getStandbyClusterPhyId(), HaResTypeEnum.CLUSTER); + if (ValidateUtils.isEmptyList(clusterDOList)) { + return Result.buildFromRSAndMsg(ResultStatus.RESOURCE_NOT_EXIST, "集群主备关系未建立"); + } + + // 4、获取集群当前已经建立主备关系的Topic列表 + List topicDOList = haASRelationService.listAllHAFromDB(dto.getActiveClusterPhyId(), dto.getStandbyClusterPhyId(), HaResTypeEnum.TOPIC); + + if (dto.getAll() != null && dto.getAll()) { + // 5.1、对集群所有已经建立主备关系的Topic,进行主备切换 + + // 过滤掉 __打头的Topic + // 过滤掉 当前主集群已经是切换后的主集群的Topic,即这部分Topic已经是切换后的状态了 + return Result.buildSuc( + topicDOList.stream() + .filter(elem -> !elem.getActiveResName().startsWith("__")) + .filter(elem -> !elem.getActiveClusterPhyId().equals(dto.getActiveClusterPhyId())) + .map(elem -> elem.getActiveResName()) + .collect(Collectors.toSet()) + ); + } + + // 5.2、指定Topic进行主备切换 + + // 当前已经有主备关系的Topic + Set relationTopicNameSet = new HashSet<>(); + topicDOList.forEach(elem -> relationTopicNameSet.add(elem.getActiveResName())); + + // 逐个检查Topic,此时这里不进行过滤,如果进行过滤之后,会导致一些用户提交的信息丢失。 + // 比如提交了10个Topic,我过滤成9个,用户就会比较奇怪。 + // 上一步进行过滤,是减少不必要的Topic的刚扰,PS:也可以考虑增加这些干扰,从而让用户明确知道Topic已进行主备切换 + for (String topicName: dto.getTopicNameList()) { + if (!relationTopicNameSet.contains(topicName)) { + return 
Result.buildFromRSAndMsg(ResultStatus.RESOURCE_NOT_EXIST, String.format("Topic:[%s] 主备关系不存在,需要先建立主备关系", topicName)); + } + + // 检查新的主Topic是否存在,如果不存在则直接返回错误,不检查新的备Topic是否存在 + if (!PhysicalClusterMetadataManager.isTopicExist(dto.getActiveClusterPhyId(), topicName)) { + return Result.buildFromRSAndMsg(ResultStatus.RESOURCE_NOT_EXIST, MsgConstant.getTopicNotExist(dto.getActiveClusterPhyId(), topicName)); + } + } + + return Result.buildSuc( + dto.getTopicNameList().stream().collect(Collectors.toSet()) + ); + } + + private void saveLogs(Long jobId, String content) { + jobLogService.addLogAndIgnoreException(new JobLogDO( + JobLogBizTypEnum.HA_SWITCH_JOB_LOG.getCode(), + String.valueOf(jobId), + new Date(), + content + )); + } + + private void flushExtendData(HaASSwitchSubJobDO subJobDO) { + HaSubJobExtendData extendData = new HaSubJobExtendData(); + Result sumLagResult = haTopicService.getStandbyTopicFetchLag(subJobDO.getActiveClusterPhyId(), subJobDO.getActiveResName()); + if (sumLagResult.failed()) { + extendData.setSumLag(Constant.INVALID_CODE.longValue()); + } else { + extendData.setSumLag(sumLagResult.getData()); + } + + haASSwitchJobService.updateSubJobExtendData(subJobDO.getId(), extendData); + } + + private Result buildActionForbidden(Long jobId, Integer jobStatus) { + return Result.buildFromRSAndMsg( + ResultStatus.OPERATION_FORBIDDEN, + String.format("jobId:[%d] 当前 status:[%s], 不允许被执行", jobId, HaJobStatusEnum.valueOfStatus(jobStatus)) + ); + } + + private boolean runningInTimeout(Long startTimeUnitMs, Long timeoutUnitSec) { + if (timeoutUnitSec == null) { + // 配置为空,则返回未超时 + return false; + } + + // 开始时间 + 超时时间 < 当前时间,说明已超过超时时间但仍在运行 + return startTimeUnitMs + timeoutUnitSec * 1000 < System.currentTimeMillis(); + } + + private Long getTimeoutUnitSecConfig(Long activeClusterPhyId) { + // 获取该集群配置 + Long durationUnitSec = configService.getLongValue( + ConfigConstant.HA_SWITCH_JOB_TIMEOUT_UNIT_SEC_CONFIG_PREFIX + "_" + activeClusterPhyId, + null + ); + + if (durationUnitSec == null) { + // 当前集群配置不存在,则获取默认配置 + durationUnitSec = configService.getLongValue( + ConfigConstant.HA_SWITCH_JOB_TIMEOUT_UNIT_SEC_CONFIG_PREFIX + "_" + Constant.INVALID_CODE, + null + ); + } + + return durationUnitSec; + } +} diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ClusterService.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ClusterService.java index 35c4be8d..c33611c2 100644 --- a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ClusterService.java +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ClusterService.java @@ -43,7 +43,7 @@ public interface ClusterService { ClusterNameDTO getClusterName(Long logicClusterId); - ResultStatus deleteById(Long clusterId, String operator); + Result deleteById(Long clusterId, String operator); /** * 获取优先被选举为controller的broker diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/JobLogService.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/JobLogService.java new file mode 100644 index 00000000..4918b8c6 --- /dev/null +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/JobLogService.java @@ -0,0 +1,15 @@ +package com.xiaojukeji.kafka.manager.service.service; + + +import com.xiaojukeji.kafka.manager.common.entity.pojo.ha.JobLogDO; + +import java.util.List; + +/** + * Job相关的日志 + */ +public interface JobLogService { + void 
addLogAndIgnoreException(JobLogDO jobLogDO); + + List listLogs(Integer bizType, String bizKeyword, Long startId); +} diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/TopicManagerService.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/TopicManagerService.java index 79524204..279236ff 100644 --- a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/TopicManagerService.java +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/TopicManagerService.java @@ -2,11 +2,14 @@ package com.xiaojukeji.kafka.manager.service.service; import com.xiaojukeji.kafka.manager.common.entity.Result; import com.xiaojukeji.kafka.manager.common.entity.ResultStatus; +import com.xiaojukeji.kafka.manager.common.entity.TopicOperationResult; import com.xiaojukeji.kafka.manager.common.entity.ao.RdTopicBasic; +import com.xiaojukeji.kafka.manager.common.entity.ao.topic.MineTopicSummary; import com.xiaojukeji.kafka.manager.common.entity.ao.topic.TopicAppData; import com.xiaojukeji.kafka.manager.common.entity.ao.topic.TopicBusinessInfo; import com.xiaojukeji.kafka.manager.common.entity.ao.topic.TopicDTO; -import com.xiaojukeji.kafka.manager.common.entity.ao.topic.MineTopicSummary; +import com.xiaojukeji.kafka.manager.common.entity.dto.op.topic.TopicExpansionDTO; +import com.xiaojukeji.kafka.manager.common.entity.dto.op.topic.TopicModificationDTO; import com.xiaojukeji.kafka.manager.common.entity.pojo.TopicDO; import com.xiaojukeji.kafka.manager.common.entity.pojo.TopicExpiredDO; import com.xiaojukeji.kafka.manager.common.entity.pojo.TopicStatisticsDO; @@ -130,5 +133,15 @@ public interface TopicManagerService { * @return */ ResultStatus addAuthority(AuthorityDO authorityDO); + + /** + * 修改topic + */ + Result modifyTopic(TopicModificationDTO dto); + + /** + * topic扩分区 + */ + TopicOperationResult expandTopic(TopicExpansionDTO dto); } diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/TopicService.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/TopicService.java index 7a0e3eb0..8fd0b4f1 100644 --- a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/TopicService.java +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/TopicService.java @@ -65,6 +65,7 @@ public interface TopicService { * 获取Topic的分区的offset */ Map getPartitionOffset(ClusterDO clusterDO, String topicName, OffsetPosEnum offsetPosEnum); + Map getPartitionOffset(Long clusterPhyId, String topicName, OffsetPosEnum offsetPosEnum); /** * 获取Topic概览信息 diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ZookeeperService.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ZookeeperService.java index d52d3bc7..c6fe3220 100644 --- a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ZookeeperService.java +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ZookeeperService.java @@ -42,4 +42,13 @@ public interface ZookeeperService { * @return */ Result deleteControllerPreferredCandidate(Long clusterId, Integer brokerId); + + /** + * 获取集群的brokerId + * @param zookeeper zookeeper + * @return 操作结果 + */ + Result> getBrokerIds(String zookeeper); + + Long getClusterIdAndNullIfFailed(String zookeeper); } diff --git 
a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/gateway/AppService.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/gateway/AppService.java index 82aa5513..e289f3f6 100644 --- a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/gateway/AppService.java +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/gateway/AppService.java @@ -51,6 +51,13 @@ public interface AppService { */ List getByPrincipal(String principal); + /** + * 通过负责人&集群id(排除已被其他集群绑定的app)来查找 + * @param principal 负责人 + * @return List + */ + List getByPrincipalAndClusterId(String principal, Long phyClusterId); + /** * 通过appId来查,需要check当前登录人是否有权限. * @param appId appId diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/gateway/AuthorityService.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/gateway/AuthorityService.java index 6a19d84e..7af04408 100644 --- a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/gateway/AuthorityService.java +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/gateway/AuthorityService.java @@ -46,6 +46,8 @@ public interface AuthorityService { */ List getAuthorityByTopic(Long clusterId, String topicName); + List getAuthorityByTopicFromCache(Long clusterId, String topicName); + List getAuthority(String appId); /** diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/gateway/impl/AppServiceImpl.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/gateway/impl/AppServiceImpl.java index 200b3cf4..91afd277 100644 --- a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/gateway/impl/AppServiceImpl.java +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/gateway/impl/AppServiceImpl.java @@ -4,24 +4,27 @@ import com.alibaba.fastjson.JSONObject; import com.xiaojukeji.kafka.manager.common.bizenum.ModuleEnum; import com.xiaojukeji.kafka.manager.common.bizenum.OperateEnum; import com.xiaojukeji.kafka.manager.common.bizenum.OperationStatusEnum; +import com.xiaojukeji.kafka.manager.common.bizenum.ha.HaResTypeEnum; import com.xiaojukeji.kafka.manager.common.entity.ResultStatus; import com.xiaojukeji.kafka.manager.common.entity.ao.AppTopicDTO; import com.xiaojukeji.kafka.manager.common.entity.dto.normal.AppDTO; +import com.xiaojukeji.kafka.manager.common.entity.pojo.LogicalClusterDO; import com.xiaojukeji.kafka.manager.common.entity.pojo.OperateRecordDO; +import com.xiaojukeji.kafka.manager.common.entity.pojo.TopicDO; import com.xiaojukeji.kafka.manager.common.entity.pojo.gateway.AppDO; import com.xiaojukeji.kafka.manager.common.entity.pojo.gateway.AuthorityDO; import com.xiaojukeji.kafka.manager.common.entity.pojo.gateway.KafkaUserDO; +import com.xiaojukeji.kafka.manager.common.entity.pojo.ha.HaASRelationDO; import com.xiaojukeji.kafka.manager.common.utils.ListUtils; import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils; -import com.xiaojukeji.kafka.manager.common.entity.pojo.LogicalClusterDO; -import com.xiaojukeji.kafka.manager.common.entity.pojo.TopicDO; import com.xiaojukeji.kafka.manager.dao.gateway.AppDao; import com.xiaojukeji.kafka.manager.dao.gateway.KafkaUserDao; import com.xiaojukeji.kafka.manager.service.cache.LogicalClusterMetadataManager; import com.xiaojukeji.kafka.manager.service.service.OperateRecordService; 
+import com.xiaojukeji.kafka.manager.service.service.TopicManagerService; import com.xiaojukeji.kafka.manager.service.service.gateway.AppService; import com.xiaojukeji.kafka.manager.service.service.gateway.AuthorityService; -import com.xiaojukeji.kafka.manager.service.service.TopicManagerService; +import com.xiaojukeji.kafka.manager.service.service.ha.HaASRelationService; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.springframework.beans.factory.annotation.Autowired; @@ -60,6 +63,9 @@ public class AppServiceImpl implements AppService { @Autowired private OperateRecordService operateRecordService; + @Autowired + private HaASRelationService haASRelationService; + @Override public ResultStatus addApp(AppDO appDO, String operator) { try { @@ -181,6 +187,52 @@ public class AppServiceImpl implements AppService { return new ArrayList<>(); } + @Override + public List getByPrincipalAndClusterId(String principal, Long phyClusterId) { + try { + List appDOs = appDao.getByPrincipal(principal); + if (ValidateUtils.isEmptyList(appDOs)){ + return new ArrayList<>(); + } + + List has = haASRelationService.listAllHAFromDB(phyClusterId, HaResTypeEnum.CLUSTER); + List authorityDOS; + if (has.isEmpty()){ + authorityDOS = authorityService.listAll().stream() + .filter(authorityDO -> !authorityDO.getClusterId().equals(phyClusterId)) + .collect(Collectors.toList()); + }else { + authorityDOS = authorityService.listAll().stream() + .filter(authorityDO -> !(has.get(0).getActiveClusterPhyId().equals(authorityDO.getClusterId()) + || has.get(0).getStandbyClusterPhyId().equals(authorityDO.getClusterId()))) + .collect(Collectors.toList()); + } + + Map> appClusterIdMap = authorityDOS + .stream().filter(authorityDO -> !authorityDO.getClusterId().equals(phyClusterId)) + .collect(Collectors.groupingBy(AuthorityDO::getAppId)); + + //过滤已被其他集群topic使用的app + appDOs = appDOs.stream() + .filter(appDO -> ListUtils.string2StrList(appDO.getPrincipals()).contains(principal)) + .filter(appDO -> appClusterIdMap.get(appDO.getAppId()) == null) + .collect(Collectors.toList()); + + //过滤已被其他集群使用的app + List clusterAppIds = logicClusterMetadataManager.getLogicalClusterList() + .stream().filter(logicalClusterDO -> !logicalClusterDO.getClusterId().equals(phyClusterId) ) + .map(LogicalClusterDO::getAppId).collect(Collectors.toList()); + appDOs = appDOs.stream() + .filter(appDO -> !clusterAppIds.contains(appDO.getAppId())) + .collect(Collectors.toList()); + + return appDOs; + } catch (Exception e) { + LOGGER.error("get app list failed, principals:{}.", principal); + } + return new ArrayList<>(); + } + @Override public AppDO getAppByUserAndId(String appId, String curUser) { AppDO appDO = this.getByAppId(appId); diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/gateway/impl/AuthorityServiceImpl.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/gateway/impl/AuthorityServiceImpl.java index f5fad493..55cca885 100644 --- a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/gateway/impl/AuthorityServiceImpl.java +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/gateway/impl/AuthorityServiceImpl.java @@ -4,6 +4,7 @@ import com.xiaojukeji.kafka.manager.common.bizenum.ModuleEnum; import com.xiaojukeji.kafka.manager.common.bizenum.OperateEnum; import com.xiaojukeji.kafka.manager.common.bizenum.OperationStatusEnum; import com.xiaojukeji.kafka.manager.common.bizenum.TopicAuthorityEnum; +import 
com.xiaojukeji.kafka.manager.common.constant.Constant; import com.xiaojukeji.kafka.manager.common.entity.ResultStatus; import com.xiaojukeji.kafka.manager.common.entity.ao.gateway.TopicQuota; import com.xiaojukeji.kafka.manager.common.entity.pojo.OperateRecordDO; @@ -75,8 +76,10 @@ public class AuthorityServiceImpl implements AuthorityService { return kafkaAclDao.insert(kafkaAclDO); } catch (Exception e) { LOGGER.error("add authority failed, authorityDO:{}.", authorityDO, e); + + // 返回-1表示出错 + return Constant.INVALID_CODE; } - return result; } @Override @@ -124,7 +127,10 @@ public class AuthorityServiceImpl implements AuthorityService { operateRecordService.insert(operateRecordDO); } catch (Exception e) { LOGGER.error("delete authority failed, authorityDO:{}.", authorityDO, e); + + return ResultStatus.MYSQL_ERROR; } + return ResultStatus.SUCCESS; } @@ -152,6 +158,11 @@ public class AuthorityServiceImpl implements AuthorityService { return Collections.emptyList(); } + @Override + public List getAuthorityByTopicFromCache(Long clusterId, String topicName) { + return authorityDao.getAuthorityByTopicFromCache(clusterId, topicName); + } + @Override public List getAuthority(String appId) { List doList = null; diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ha/HaASRelationService.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ha/HaASRelationService.java new file mode 100644 index 00000000..08445182 --- /dev/null +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ha/HaASRelationService.java @@ -0,0 +1,61 @@ +package com.xiaojukeji.kafka.manager.service.service.ha; + +import com.xiaojukeji.kafka.manager.common.bizenum.ha.HaResTypeEnum; +import com.xiaojukeji.kafka.manager.common.entity.Result; +import com.xiaojukeji.kafka.manager.common.entity.pojo.ha.HaASRelationDO; + +import java.util.List; + +public interface HaASRelationService { + Result replaceTopicRelationsToDB(Long standbyClusterPhyId, List topicRelationDOList); + + Result addHAToDB(HaASRelationDO haASRelationDO); + + Result deleteById(Long id); + + int updateRelationStatus(Long relationId, Integer newStatus); + int updateById(HaASRelationDO haASRelationDO); + + /** + * 获取主集群关系 + */ + HaASRelationDO getActiveClusterHAFromDB(Long activeClusterPhyId); + + /** + * 获取主备关系 + */ + HaASRelationDO getSpecifiedHAFromDB(Long activeClusterPhyId, + String activeResName, + Long standbyClusterPhyId, + String standbyResName, + HaResTypeEnum resTypeEnum); + + /** + * 获取主备关系 + */ + HaASRelationDO getHAFromDB(Long firstClusterPhyId, + String firstResName, + HaResTypeEnum resTypeEnum); + + /** + * 获取备集群主备关系 + */ + List getStandbyHAFromDB(Long standbyClusterPhyId, HaResTypeEnum resTypeEnum); + List getActiveHAFromDB(Long activeClusterPhyId, HaResTypeEnum resTypeEnum); + + /** + * 获取主备关系 + */ + List listAllHAFromDB(HaResTypeEnum resTypeEnum); + + /** + * 获取主备关系 + */ + List listAllHAFromDB(Long firstClusterPhyId, HaResTypeEnum resTypeEnum); + + /** + * 获取主备关系 + */ + List listAllHAFromDB(Long firstClusterPhyId, Long secondClusterPhyId, HaResTypeEnum resTypeEnum); + +} diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ha/HaASSwitchJobService.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ha/HaASSwitchJobService.java new file mode 100644 index 00000000..189a4ba0 --- /dev/null +++ 
b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ha/HaASSwitchJobService.java @@ -0,0 +1,57 @@ +package com.xiaojukeji.kafka.manager.service.service.ha; + + +import com.xiaojukeji.kafka.manager.common.entity.Result; +import com.xiaojukeji.kafka.manager.common.entity.ao.ha.job.HaJobDetail; +import com.xiaojukeji.kafka.manager.common.entity.ao.ha.job.HaSubJobExtendData; +import com.xiaojukeji.kafka.manager.common.entity.pojo.ha.HaASSwitchJobDO; +import com.xiaojukeji.kafka.manager.common.entity.pojo.ha.HaASSwitchSubJobDO; + +import java.util.List; +import java.util.Map; + +public interface HaASSwitchJobService { + /** + * 创建任务 + */ + Result createJob(Long activeClusterPhyId, Long standbyClusterPhyId, List topicNameList, String operator); + + /** + * 更新任务状态 + */ + int updateJobStatus(Long jobId, Integer jobStatus); + + /** + * 更新子任务状态 + */ + int updateSubJobStatus(Long subJobId, Integer jobStatus); + + /** + * 更新子任务扩展数据 + */ + int updateSubJobExtendData(Long subJobId, HaSubJobExtendData extendData); + + /** + * 任务详情 + */ + Result> jobDetail(Long jobId); + + /** + * 正在运行中的job + */ + List listRunningJobs(Long ignoreAfterTime); + + /** + * 集群近期的任务ID + */ + Map listClusterLatestJobs(); + + HaASSwitchJobDO getJobById(Long jobId); + + List listSubJobsById(Long jobId); + + /** + * 获取所有切换任务 + */ + List listAll(Boolean isAsc); +} diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ha/HaClusterService.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ha/HaClusterService.java new file mode 100644 index 00000000..3a4774c0 --- /dev/null +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ha/HaClusterService.java @@ -0,0 +1,45 @@ +package com.xiaojukeji.kafka.manager.service.service.ha; + +import com.xiaojukeji.kafka.manager.common.entity.Result; +import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO; +import com.xiaojukeji.kafka.manager.common.entity.pojo.ha.HaASRelationDO; +import com.xiaojukeji.kafka.manager.common.entity.vo.ha.HaClusterVO; + +import java.util.List; +import java.util.Map; + +/** + * 集群主备关系 + */ +public interface HaClusterService { + /** + * 创建主备关系 + */ + Result createHA(Long activeClusterPhyId, Long standbyClusterPhyId, String operator); + Result createHAInKafka(String zookeeper, ClusterDO needWriteToZKClusterDO, String operator); + + /** + * 切换主备关系 + */ + Result switchHA(Long newActiveClusterPhyId, Long newStandbyClusterPhyId); + + /** + * 删除主备关系 + */ + Result deleteHA(Long activeClusterPhyId, Long standbyClusterPhyId); + + /** + * 获取主备关系 + */ + HaASRelationDO getHA(Long activeClusterPhyId); + + /** + * 获取集群主备关系 + */ + Map getClusterHARelation(); + + /** + * 获取主备关系 + */ + Result> listAllHA(); +} diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ha/HaKafkaUserService.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ha/HaKafkaUserService.java new file mode 100644 index 00000000..30310d83 --- /dev/null +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ha/HaKafkaUserService.java @@ -0,0 +1,23 @@ +package com.xiaojukeji.kafka.manager.service.service.ha; + +import com.xiaojukeji.kafka.manager.common.entity.Result; + + +/** + * Topic主备关系管理 + * 不包括ACL,Gateway等信息 + */ +public interface HaKafkaUserService { + + Result setNoneHAInKafka(String zookeeper, String kafkaUser); + + /** + * 暂停HA + */ + Result stopHAInKafka(String zookeeper, String 
kafkaUser); + + /** + * 激活HA + */ + Result activeHAInKafka(String zookeeper, Long activeClusterPhyId, String kafkaUser); +} diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ha/HaTopicService.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ha/HaTopicService.java new file mode 100644 index 00000000..6da4efaf --- /dev/null +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ha/HaTopicService.java @@ -0,0 +1,43 @@ +package com.xiaojukeji.kafka.manager.service.service.ha; + +import com.xiaojukeji.kafka.manager.common.entity.Result; +import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO; + +import java.util.List; +import java.util.Map; + +/** + * Topic主备关系管理 + * 不包括ACL,Gateway等信息 + */ +public interface HaTopicService { + /** + * 创建主备关系 + */ + Result createHA(Long activeClusterPhyId, Long standbyClusterPhyId, String topicName, String operator); + Result activeHAInKafkaNotCheck(ClusterDO activeClusterDO, String activeTopicName, ClusterDO standbyClusterDO, String standbyTopicName, String operator); + Result activeHAInKafka(ClusterDO activeClusterDO, String activeTopicName, ClusterDO standbyClusterDO, String standbyTopicName, String operator); + + /** + * 删除主备关系 + */ + Result deleteHA(Long activeClusterPhyId, Long standbyClusterPhyId, String topicName, String operator); + Result stopHAInKafka(ClusterDO standbyClusterDO, String standbyTopicName, String operator); + + /** + * 获取集群topic的主备关系 + */ + Map getRelation(Long clusterId); + + /** + * 获取所有集群的备topic名称 + */ + Map> getClusterStandbyTopicMap(); + + /** + * 激活kafkaUserHA + */ + Result activeUserHAInKafka(ClusterDO activeClusterDO, ClusterDO standbyClusterDO, String kafkaUser, String operator); + + Result getStandbyTopicFetchLag(Long standbyClusterPhyId, String topicName); +} diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ha/impl/HaASRelationServiceImpl.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ha/impl/HaASRelationServiceImpl.java new file mode 100644 index 00000000..097e864e --- /dev/null +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ha/impl/HaASRelationServiceImpl.java @@ -0,0 +1,199 @@ +package com.xiaojukeji.kafka.manager.service.service.ha.impl; + +import com.baomidou.mybatisplus.core.conditions.query.LambdaQueryWrapper; +import com.xiaojukeji.kafka.manager.common.bizenum.ha.HaResTypeEnum; +import com.xiaojukeji.kafka.manager.common.bizenum.ha.HaStatusEnum; +import com.xiaojukeji.kafka.manager.common.entity.Result; +import com.xiaojukeji.kafka.manager.common.entity.ResultStatus; +import com.xiaojukeji.kafka.manager.common.entity.pojo.ha.HaASRelationDO; +import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils; +import com.xiaojukeji.kafka.manager.dao.ha.HaASRelationDao; +import com.xiaojukeji.kafka.manager.service.service.ha.HaASRelationService; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import org.springframework.beans.factory.annotation.Autowired; +import org.springframework.stereotype.Service; + +import java.util.ArrayList; +import java.util.List; +import java.util.Map; +import java.util.function.Function; +import java.util.stream.Collectors; + +@Service +public class HaASRelationServiceImpl implements HaASRelationService { + private static final Logger LOGGER = LoggerFactory.getLogger(HaASRelationServiceImpl.class); + + @Autowired + private HaASRelationDao haASRelationDao; + + 
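// 全量对账:入参中新增的主备关系插入DB;DB中多余且一段时间未再更新的关系则删除 + 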
@Override + public Result replaceTopicRelationsToDB(Long standbyClusterPhyId, List topicRelationDOList) { + try { + LambdaQueryWrapper lambdaQueryWrapper = new LambdaQueryWrapper<>(); + lambdaQueryWrapper.eq(HaASRelationDO::getResType, HaResTypeEnum.TOPIC.getCode()); + lambdaQueryWrapper.eq(HaASRelationDO::getStandbyClusterPhyId, standbyClusterPhyId); + + Map dbRelationMap = haASRelationDao.selectList(lambdaQueryWrapper).stream().collect(Collectors.toMap(HaASRelationDO::getUniqueField, Function.identity())); + for (HaASRelationDO relationDO: topicRelationDOList) { + HaASRelationDO dbRelationDO = dbRelationMap.remove(relationDO.getUniqueField()); + if (dbRelationDO == null) { + // DB中不存在,则插入新的 + haASRelationDao.insert(relationDO); + } + } + + // dbRelationMap 中剩余的,是需要进行删除的 + for (HaASRelationDO dbRelationDO: dbRelationMap.values()) { + if (System.currentTimeMillis() - dbRelationDO.getModifyTime().getTime() >= 5 * 1000L) { + // 修改时间超过了5分钟了,则进行删除 + haASRelationDao.deleteById(dbRelationDO.getId()); + } + } + + return Result.buildSuc(); + } catch (Exception e) { + LOGGER.error("method=replaceTopicRelationsToDB||standbyClusterPhyId={}||errMsg=exception.", standbyClusterPhyId, e); + + return Result.buildFromRSAndMsg(ResultStatus.MYSQL_ERROR, e.getMessage()); + } + } + + @Override + public Result addHAToDB(HaASRelationDO haASRelationDO) { + try{ + int count = haASRelationDao.insert(haASRelationDO); + if (count < 1){ + LOGGER.error("add ha to db failed! haASRelationDO:{}" , haASRelationDO); + return Result.buildFrom(ResultStatus.MYSQL_ERROR); + } + } catch (Exception e) { + LOGGER.error("add ha to db failed! haASRelationDO:{}" , haASRelationDO); + return Result.buildFrom(ResultStatus.MYSQL_ERROR); + } + return Result.buildSuc(); + } + + @Override + public Result deleteById(Long id) { + try { + haASRelationDao.deleteById(id); + } catch (Exception e){ + LOGGER.error("class=HaASRelationServiceImpl||method=deleteById||id={}||errMsg=exception", id, e); + return Result.buildFrom(ResultStatus.MYSQL_ERROR); + } + return Result.buildSuc(); + } + + @Override + public int updateRelationStatus(Long relationId, Integer newStatus) { + return haASRelationDao.updateById(new HaASRelationDO(relationId, newStatus)); + } + + @Override + public int updateById(HaASRelationDO haASRelationDO) { + return haASRelationDao.updateById(haASRelationDO); + } + + @Override + public HaASRelationDO getActiveClusterHAFromDB(Long activeClusterPhyId) { + LambdaQueryWrapper lambdaQueryWrapper = new LambdaQueryWrapper<>(); + lambdaQueryWrapper.eq(HaASRelationDO::getActiveClusterPhyId, activeClusterPhyId); + lambdaQueryWrapper.eq(HaASRelationDO::getResType, HaResTypeEnum.CLUSTER.getCode()); + + return haASRelationDao.selectOne(lambdaQueryWrapper); + } + + @Override + public HaASRelationDO getSpecifiedHAFromDB(Long activeClusterPhyId, String activeResName, + Long standbyClusterPhyId, String standbyResName, + HaResTypeEnum resTypeEnum) { + LambdaQueryWrapper lambdaQueryWrapper = new LambdaQueryWrapper<>(); + HaASRelationDO relationDO = new HaASRelationDO( + activeClusterPhyId, + activeResName, + standbyClusterPhyId, + standbyResName, + resTypeEnum.getCode(), + HaStatusEnum.UNKNOWN.getCode() + ); + lambdaQueryWrapper.eq(HaASRelationDO::getUniqueField, relationDO.getUniqueField()); + + return haASRelationDao.selectOne(lambdaQueryWrapper); + } + + @Override + public HaASRelationDO getHAFromDB(Long firstClusterPhyId, String firstResName, HaResTypeEnum resTypeEnum) { + List haASRelationDOS = listAllHAFromDB(firstClusterPhyId, resTypeEnum); + 
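// 主备任一侧资源名与firstResName相同,即视为命中该主备关系 + 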
for(HaASRelationDO haASRelationDO : haASRelationDOS){ + if (haASRelationDO.getActiveResName().equals(firstResName) + || haASRelationDO.getStandbyResName().equals(firstResName)){ + return haASRelationDO; + } + } + return null; + } + + @Override + public List getStandbyHAFromDB(Long standbyClusterPhyId, HaResTypeEnum resTypeEnum) { + LambdaQueryWrapper lambdaQueryWrapper = new LambdaQueryWrapper<>(); + lambdaQueryWrapper.eq(HaASRelationDO::getResType, resTypeEnum.getCode()); + lambdaQueryWrapper.eq(HaASRelationDO::getStandbyClusterPhyId, standbyClusterPhyId); + + return haASRelationDao.selectList(lambdaQueryWrapper); + } + + @Override + public List getActiveHAFromDB(Long activeClusterPhyId, HaResTypeEnum resTypeEnum) { + LambdaQueryWrapper lambdaQueryWrapper = new LambdaQueryWrapper<>(); + lambdaQueryWrapper.eq(HaASRelationDO::getResType, resTypeEnum.getCode()); + lambdaQueryWrapper.eq(HaASRelationDO::getActiveClusterPhyId, activeClusterPhyId); + + return haASRelationDao.selectList(lambdaQueryWrapper); + } + + @Override + public List listAllHAFromDB(HaResTypeEnum resTypeEnum) { + LambdaQueryWrapper lambdaQueryWrapper = new LambdaQueryWrapper<>(); + lambdaQueryWrapper.eq(HaASRelationDO::getResType, resTypeEnum.getCode()); + + return haASRelationDao.selectList(lambdaQueryWrapper); + } + + @Override + public List listAllHAFromDB(Long firstClusterPhyId, HaResTypeEnum resTypeEnum) { + LambdaQueryWrapper lambdaQueryWrapper = new LambdaQueryWrapper<>(); + lambdaQueryWrapper.eq(HaASRelationDO::getResType, resTypeEnum.getCode()); + lambdaQueryWrapper.and(lambda -> + lambda.eq(HaASRelationDO::getActiveClusterPhyId, firstClusterPhyId).or().eq(HaASRelationDO::getStandbyClusterPhyId, firstClusterPhyId) + ); + + // 查询HA列表 + List doList = haASRelationDao.selectList(lambdaQueryWrapper); + if (ValidateUtils.isNull(doList)) { + return new ArrayList<>(); + } + + return doList; + } + + @Override + public List listAllHAFromDB(Long firstClusterPhyId, Long secondClusterPhyId, HaResTypeEnum resTypeEnum) { + // 查询HA列表 + List doList = this.listAllHAFromDB(firstClusterPhyId, resTypeEnum); + if (ValidateUtils.isNull(doList)) { + return new ArrayList<>(); + } + + if (secondClusterPhyId == null) { + // 如果为null,则直接返回全部 + return doList; + } + + // 手动过滤掉不需要的集群 + return doList.stream() + .filter(elem -> elem.getActiveClusterPhyId().equals(secondClusterPhyId) || elem.getStandbyClusterPhyId().equals(secondClusterPhyId)) + .collect(Collectors.toList()); + } + +} diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ha/impl/HaASSwitchJobServiceImpl.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ha/impl/HaASSwitchJobServiceImpl.java new file mode 100644 index 00000000..408fcff7 --- /dev/null +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ha/impl/HaASSwitchJobServiceImpl.java @@ -0,0 +1,190 @@ +package com.xiaojukeji.kafka.manager.service.service.ha.impl; + +import com.baomidou.mybatisplus.core.conditions.query.LambdaQueryWrapper; +import com.xiaojukeji.kafka.manager.common.bizenum.ha.HaResTypeEnum; +import com.xiaojukeji.kafka.manager.common.bizenum.ha.job.HaJobStatusEnum; +import com.xiaojukeji.kafka.manager.common.entity.Result; +import com.xiaojukeji.kafka.manager.common.entity.ResultStatus; +import com.xiaojukeji.kafka.manager.common.entity.ao.ha.job.*; +import com.xiaojukeji.kafka.manager.common.entity.pojo.ha.HaASSwitchJobDO; +import com.xiaojukeji.kafka.manager.common.entity.pojo.ha.HaASSwitchSubJobDO; +import 
com.xiaojukeji.kafka.manager.common.utils.ConvertUtil; +import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils; +import com.xiaojukeji.kafka.manager.dao.ha.HaASSwitchJobDao; +import com.xiaojukeji.kafka.manager.dao.ha.HaASSwitchSubJobDao; +import com.xiaojukeji.kafka.manager.service.service.ha.HaASSwitchJobService; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import org.springframework.beans.factory.annotation.Autowired; +import org.springframework.stereotype.Service; +import org.springframework.transaction.annotation.Transactional; +import org.springframework.transaction.interceptor.TransactionAspectSupport; + +import java.util.*; +import java.util.stream.Collectors; + +@Service +public class HaASSwitchJobServiceImpl implements HaASSwitchJobService { + private static final Logger LOGGER = LoggerFactory.getLogger(HaASSwitchJobServiceImpl.class); + + @Autowired + private HaASSwitchJobDao haASSwitchJobDao; + + @Autowired + private HaASSwitchSubJobDao haASSwitchSubJobDao; + + @Override + @Transactional + public Result createJob(Long activeClusterPhyId, Long standbyClusterPhyId, List topicNameList, String operator) { + try { + // 父任务 + HaASSwitchJobDO jobDO = new HaASSwitchJobDO(activeClusterPhyId, standbyClusterPhyId, HaJobStatusEnum.RUNNING.getStatus(), operator); + haASSwitchJobDao.insert(jobDO); + + // 子任务 + for (String topicName: topicNameList) { + haASSwitchSubJobDao.insert(new HaASSwitchSubJobDO( + jobDO.getId(), + activeClusterPhyId, + topicName, + standbyClusterPhyId, + topicName, + HaResTypeEnum.TOPIC.getCode(), + HaJobStatusEnum.RUNNING.getStatus(), + "" + )); + } + + return Result.buildSuc(jobDO.getId()); + } catch (Exception e) { + LOGGER.error( + "method=createJob||activeClusterPhyId={}||standbyClusterPhyId={}||topicNameList={}||operator={}||errMsg=exception", + activeClusterPhyId, standbyClusterPhyId, ConvertUtil.obj2Json(topicNameList), operator, e + ); + + // 如果这一步出错了,则对上一步进行手动回滚 + TransactionAspectSupport.currentTransactionStatus().setRollbackOnly(); + + return Result.buildFromRSAndMsg(ResultStatus.MYSQL_ERROR, e.getMessage()); + } + } + + @Override + public int updateJobStatus(Long jobId, Integer jobStatus) { + HaASSwitchJobDO jobDO = new HaASSwitchJobDO(); + jobDO.setId(jobId); + jobDO.setJobStatus(jobStatus); + return haASSwitchJobDao.updateById(jobDO); + } + + @Override + public int updateSubJobStatus(Long subJobId, Integer jobStatus) { + HaASSwitchSubJobDO subJobDO = new HaASSwitchSubJobDO(); + subJobDO.setId(subJobId); + subJobDO.setJobStatus(jobStatus); + return haASSwitchSubJobDao.updateById(subJobDO); + } + + @Override + public int updateSubJobExtendData(Long subJobId, HaSubJobExtendData extendData) { + HaASSwitchSubJobDO subJobDO = new HaASSwitchSubJobDO(); + subJobDO.setId(subJobId); + subJobDO.setExtendData(ConvertUtil.obj2Json(extendData)); + return haASSwitchSubJobDao.updateById(subJobDO); + } + + @Override + public Result> jobDetail(Long jobId) { + LambdaQueryWrapper lambdaQueryWrapper = new LambdaQueryWrapper<>(); + lambdaQueryWrapper.eq(HaASSwitchSubJobDO::getJobId, jobId); + + List doList = haASSwitchSubJobDao.selectList(lambdaQueryWrapper); + if (ValidateUtils.isEmptyList(doList)) { + return Result.buildFromRSAndMsg(ResultStatus.RESOURCE_NOT_EXIST, String.format("jobId:[%d] 不存在", jobId)); + } + + List detailList = new ArrayList<>(); + doList.stream().forEach(elem -> { + HaJobDetail detail = new HaJobDetail(); + detail.setTopicName(elem.getActiveResName()); + detail.setActiveClusterPhyId(elem.getActiveClusterPhyId()); + 
detail.setStandbyClusterPhyId(elem.getStandbyClusterPhyId()); + detail.setStatus(elem.getJobStatus()); + + // Lag信息 + HaSubJobExtendData extendData = ConvertUtil.str2ObjByJson(elem.getExtendData(), HaSubJobExtendData.class); + detail.setSumLag(extendData != null? extendData.getSumLag(): null); + + detailList.add(detail); + }); + + return Result.buildSuc(detailList); + } + + @Override + public List listRunningJobs(Long ignoreAfterTime) { + return new ArrayList<>(new HashSet<>( + this.listAfterTimeRunningJobs(ignoreAfterTime).values() + )); + } + + @Override + public Map listClusterLatestJobs() { + List doList = haASSwitchJobDao.listAllLatest(); + + Map doMap = new HashMap<>(); + for (HaASSwitchJobDO jobDO: doList) { + HaASSwitchJobDO inMapJobDO = doMap.get(jobDO.getActiveClusterPhyId()); + if (inMapJobDO == null || inMapJobDO.getId() <= jobDO.getId()) { + doMap.put(jobDO.getActiveClusterPhyId(), jobDO); + } + + inMapJobDO = doMap.get(jobDO.getStandbyClusterPhyId()); + if (inMapJobDO == null || inMapJobDO.getId() <= jobDO.getId()) { + doMap.put(jobDO.getStandbyClusterPhyId(), jobDO); + } + } + + return doMap; + } + + @Override + public HaASSwitchJobDO getJobById(Long jobId) { + return haASSwitchJobDao.selectById(jobId); + } + + @Override + public List listSubJobsById(Long jobId) { + LambdaQueryWrapper lambdaQueryWrapper = new LambdaQueryWrapper<>(); + lambdaQueryWrapper.eq(HaASSwitchSubJobDO::getJobId, jobId); + return haASSwitchSubJobDao.selectList(lambdaQueryWrapper); + } + + @Override + public List listAll(Boolean isAsc) { + LambdaQueryWrapper lambdaQueryWrapper = new LambdaQueryWrapper<>(); + lambdaQueryWrapper.orderBy(isAsc != null, isAsc, HaASSwitchSubJobDO::getId); + return haASSwitchSubJobDao.selectList(lambdaQueryWrapper); + } + + /**************************************************** private method ****************************************************/ + + private Map listAfterTimeRunningJobs(Long ignoreAfterTime) { + LambdaQueryWrapper jobLambdaQueryWrapper = new LambdaQueryWrapper<>(); + jobLambdaQueryWrapper.eq(HaASSwitchJobDO::getJobStatus, HaJobStatusEnum.RUNNING.getStatus()); + List jobDOList = haASSwitchJobDao.selectList(jobLambdaQueryWrapper); + if (jobDOList == null) { + return new HashMap<>(); + } + + // 获取指定时间之前的任务 + jobDOList = jobDOList.stream().filter(job -> job.getCreateTime().getTime() <= ignoreAfterTime).collect(Collectors.toList()); + + Map clusterPhyIdAndJobIdMap = new HashMap<>(); + jobDOList.forEach(elem -> { + clusterPhyIdAndJobIdMap.put(elem.getActiveClusterPhyId(), elem.getId()); + clusterPhyIdAndJobIdMap.put(elem.getStandbyClusterPhyId(), elem.getId()); + }); + return clusterPhyIdAndJobIdMap; + } +} diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ha/impl/HaClusterServiceImpl.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ha/impl/HaClusterServiceImpl.java new file mode 100644 index 00000000..f3d27689 --- /dev/null +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ha/impl/HaClusterServiceImpl.java @@ -0,0 +1,389 @@ +package com.xiaojukeji.kafka.manager.service.service.ha.impl; + +import com.baomidou.mybatisplus.core.conditions.query.LambdaQueryWrapper; +import com.xiaojukeji.kafka.manager.common.bizenum.ha.HaRelationTypeEnum; +import com.xiaojukeji.kafka.manager.common.bizenum.ha.HaResTypeEnum; +import com.xiaojukeji.kafka.manager.common.bizenum.ha.HaStatusEnum; +import com.xiaojukeji.kafka.manager.common.bizenum.ha.job.HaJobStatusEnum; +import 
com.xiaojukeji.kafka.manager.common.constant.KafkaConstant; +import com.xiaojukeji.kafka.manager.common.constant.MsgConstant; +import com.xiaojukeji.kafka.manager.common.entity.Result; +import com.xiaojukeji.kafka.manager.common.entity.ResultStatus; +import com.xiaojukeji.kafka.manager.common.entity.ao.ClusterDetailDTO; +import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO; +import com.xiaojukeji.kafka.manager.common.entity.pojo.ha.HaASRelationDO; +import com.xiaojukeji.kafka.manager.common.entity.pojo.ha.HaASSwitchJobDO; +import com.xiaojukeji.kafka.manager.common.entity.vo.ha.HaClusterVO; +import com.xiaojukeji.kafka.manager.common.utils.JsonUtils; +import com.xiaojukeji.kafka.manager.dao.ha.HaASRelationDao; +import com.xiaojukeji.kafka.manager.service.cache.PhysicalClusterMetadataManager; +import com.xiaojukeji.kafka.manager.service.service.ClusterService; +import com.xiaojukeji.kafka.manager.service.service.ZookeeperService; +import com.xiaojukeji.kafka.manager.service.service.ha.HaASRelationService; +import com.xiaojukeji.kafka.manager.service.service.ha.HaASSwitchJobService; +import com.xiaojukeji.kafka.manager.service.service.ha.HaClusterService; +import com.xiaojukeji.kafka.manager.service.service.ha.HaTopicService; +import com.xiaojukeji.kafka.manager.service.utils.ConfigUtils; +import com.xiaojukeji.kafka.manager.service.utils.HaClusterCommands; +import com.xiaojukeji.kafka.manager.service.utils.HaTopicCommands; +import org.apache.commons.lang.StringUtils; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import org.springframework.beans.BeanUtils; +import org.springframework.beans.factory.annotation.Autowired; +import org.springframework.stereotype.Service; + +import java.util.*; +import java.util.function.Function; +import java.util.stream.Collectors; + +/** + * 集群主备关系 + */ +@Service +public class HaClusterServiceImpl implements HaClusterService { + private static final Logger LOGGER = LoggerFactory.getLogger(HaClusterServiceImpl.class); + + @Autowired + private ClusterService clusterService; + + @Autowired + private HaASRelationService haASRelationService; + + @Autowired + private HaASRelationDao haActiveStandbyRelationDao; + + @Autowired + private HaTopicService haTopicService; + + @Autowired + private PhysicalClusterMetadataManager physicalClusterMetadataManager; + + @Autowired + private HaASSwitchJobService haASSwitchJobService; + + @Autowired + private ConfigUtils configUtils; + + @Autowired + private ZookeeperService zookeeperService; + + @Override + public Result createHA(Long activeClusterPhyId, Long standbyClusterPhyId, String operator) { + ClusterDO activeClusterDO = clusterService.getById(activeClusterPhyId); + if (activeClusterDO == null){ + return Result.buildFrom(ResultStatus.CLUSTER_NOT_EXIST); + } + + ClusterDO standbyClusterDO = clusterService.getById(standbyClusterPhyId); + if (standbyClusterDO == null){ + return Result.buildFrom(ResultStatus.CLUSTER_NOT_EXIST); + } + + HaASRelationDO oldRelationDO = getHA(activeClusterPhyId); + if (oldRelationDO != null){ + return Result.buildFromRSAndMsg(ResultStatus.RESOURCE_ALREADY_USED, + MsgConstant.getActiveClusterDuplicate(activeClusterDO.getId(), activeClusterDO.getClusterName())); + + } + + //更新集群配置 + Result rv = this.modifyHaClusterConfig(activeClusterDO, standbyClusterDO, operator); + if (rv.failed()){ + return rv; + } + + //更新__consumer_offsets配置 + rv = this.modifyHaTopicConfig(activeClusterDO, standbyClusterDO, operator); + if (rv.failed()){ + return rv; + } + + //添加db数据 + return 
haASRelationService.addHAToDB( + new HaASRelationDO( + activeClusterPhyId, + activeClusterPhyId.toString(), + standbyClusterPhyId, + standbyClusterPhyId.toString(), + HaResTypeEnum.CLUSTER.getCode(), + HaStatusEnum.STABLE.getCode() + ) + ); + } + + @Override + public Result createHAInKafka(String zookeeper, ClusterDO needWriteToZKClusterDO, String operator) { + Properties props = new Properties(); + props.putAll(getSecurityProperties(needWriteToZKClusterDO.getSecurityProperties())); + props.put(KafkaConstant.BOOTSTRAP_SERVERS, needWriteToZKClusterDO.getBootstrapServers()); + props.put(KafkaConstant.DIDI_KAFKA_ENABLE, "false"); + + Result> rli = zookeeperService.getBrokerIds(needWriteToZKClusterDO.getZookeeper()); + if (rli.failed()){ + return Result.buildFromIgnoreData(rli); + } + + String kafkaVersion = physicalClusterMetadataManager.getKafkaVersion(needWriteToZKClusterDO.getId(), rli.getData()); + if (kafkaVersion != null && kafkaVersion.contains("-d-")){ + int dVersion = Integer.valueOf(kafkaVersion.split("-")[2]); + if (dVersion > 200){ + props.put(KafkaConstant.DIDI_KAFKA_ENABLE, "true"); + } + } + + ResultStatus rs = HaClusterCommands.modifyHaClusterConfig(zookeeper, needWriteToZKClusterDO.getId(), props); + if (!ResultStatus.SUCCESS.equals(rs)) { + LOGGER.error("class=HaClusterServiceImpl||method=createHAInKafka||zookeeper={}||firstClusterDO={}||operator={}||msg=add ha-cluster config failed!", zookeeper, needWriteToZKClusterDO, operator); + return Result.buildFailure("add ha-cluster config failed"); + } + + return Result.buildFrom(rs); + } + + @Override + public Result switchHA(Long newActiveClusterPhyId, Long newStandbyClusterPhyId) { + return Result.buildSuc(); + } + + @Override + public Result deleteHA(Long activeClusterPhyId, Long standbyClusterPhyId) { + ClusterDO clusterDO = clusterService.getById(activeClusterPhyId); + if (clusterDO == null){ + return Result.buildFrom(ResultStatus.CLUSTER_NOT_EXIST); + } + ClusterDO standbyClusterDO = clusterService.getById(standbyClusterPhyId); + if (standbyClusterDO == null){ + return Result.buildFrom(ResultStatus.CLUSTER_NOT_EXIST); + } + + HaASRelationDO relationDO = getHA(activeClusterPhyId); + if (relationDO == null){ + return Result.buildSuc(); + } + + //删除配置 + Result delResult = delClusterHaConfig(clusterDO, standbyClusterDO); + if (delResult.failed()){ + return delResult; + } + + //删除db + Result delDbResult = delDBHaCluster(activeClusterPhyId, standbyClusterPhyId); + if (delDbResult.failed()){ + return delDbResult; + } + + return Result.buildSuc(); + } + + @Override + public HaASRelationDO getHA(Long activeClusterPhyId) { + return haASRelationService.getActiveClusterHAFromDB(activeClusterPhyId); + } + + @Override + public Map getClusterHARelation() { + Map relationMap = new HashMap<>(); + List haASRelationDOS = haASRelationService.listAllHAFromDB(HaResTypeEnum.CLUSTER); + if (haASRelationDOS.isEmpty()){ + return relationMap; + } + haASRelationDOS.forEach(haASRelationDO -> { + relationMap.put(haASRelationDO.getActiveClusterPhyId(), HaRelationTypeEnum.ACTIVE.getCode()); + relationMap.put(haASRelationDO.getStandbyClusterPhyId(), HaRelationTypeEnum.STANDBY.getCode()); + }); + return relationMap; + } + + @Override + public Result> listAllHA() { + //高可用集群 + List clusterRelationDOS = haASRelationService.listAllHAFromDB(HaResTypeEnum.CLUSTER); + Map activeMap = clusterRelationDOS.stream().collect(Collectors.toMap(HaASRelationDO::getActiveClusterPhyId, Function.identity())); + List standbyList = 
clusterRelationDOS.stream().map(HaASRelationDO::getStandbyClusterPhyId).collect(Collectors.toList()); + + //高可用topic + List topicRelationDOS = haASRelationService.listAllHAFromDB(HaResTypeEnum.TOPIC); + //主集群topic数 + Map activeTopicCountMap = topicRelationDOS.stream() + .filter(haASRelationDO -> !haASRelationDO.getActiveResName().startsWith("__")) + .collect(Collectors.groupingBy(HaASRelationDO::getActiveClusterPhyId, Collectors.counting())); + Map standbyTopicCountMap = topicRelationDOS.stream() + .filter(haASRelationDO -> !haASRelationDO.getStandbyResName().startsWith("__")) + .collect(Collectors.groupingBy(HaASRelationDO::getStandbyClusterPhyId, Collectors.counting())); + + //切换job + Map jobDOS = haASSwitchJobService.listClusterLatestJobs(); + + List haClusterVOS = new ArrayList<>(); + Map clusterDetailDTOMap = clusterService.getClusterDetailDTOList(Boolean.TRUE).stream().collect(Collectors.toMap(ClusterDetailDTO::getClusterId, Function.identity())); + for (Map.Entry entry : clusterDetailDTOMap.entrySet()){ + ClusterDetailDTO clusterDetailDTO = entry.getValue(); + //高可用集群 + if (activeMap.containsKey(entry.getKey())){ + //主集群 + HaASRelationDO relationDO = activeMap.get(clusterDetailDTO.getClusterId()); + HaClusterVO haClusterVO = new HaClusterVO(); + BeanUtils.copyProperties(clusterDetailDTO,haClusterVO); + haClusterVO.setHaStatus(relationDO.getStatus()); + haClusterVO.setActiveTopicCount(activeTopicCountMap.get(clusterDetailDTO.getClusterId())==null + ?0L:activeTopicCountMap.get(clusterDetailDTO.getClusterId())); + haClusterVO.setStandbyTopicCount(standbyTopicCountMap.get(clusterDetailDTO.getClusterId())==null + ?0L:standbyTopicCountMap.get(clusterDetailDTO.getClusterId())); + HaASSwitchJobDO jobDO = jobDOS.get(haClusterVO.getClusterId()); + haClusterVO.setHaStatus(jobDO != null && HaJobStatusEnum.isRunning(jobDO.getJobStatus()) + ?HaStatusEnum.SWITCHING_CODE: HaStatusEnum.STABLE_CODE); + ClusterDetailDTO standbyClusterDetail = clusterDetailDTOMap.get(relationDO.getStandbyClusterPhyId()); + if (standbyClusterDetail != null){ + //备集群 + HaClusterVO standbyCluster = new HaClusterVO(); + BeanUtils.copyProperties(standbyClusterDetail,standbyCluster); + standbyCluster.setActiveTopicCount(activeTopicCountMap.get(standbyClusterDetail.getClusterId())==null + ?0L:activeTopicCountMap.get(standbyClusterDetail.getClusterId())); + standbyCluster.setStandbyTopicCount(standbyTopicCountMap.get(standbyClusterDetail.getClusterId())==null + ?0L:standbyTopicCountMap.get(standbyClusterDetail.getClusterId())); + + standbyCluster.setHaASSwitchJobId(jobDO != null ? 
jobDO.getId() : null);
+                    standbyCluster.setHaStatus(haClusterVO.getHaStatus());
+                    haClusterVO.setHaClusterVO(standbyCluster);
+                }
+                haClusterVOS.add(haClusterVO);
+            } else if (!standbyList.contains(clusterDetailDTO.getClusterId())) {
+                // plain (non-HA) cluster
+                HaClusterVO haClusterVO = new HaClusterVO();
+                BeanUtils.copyProperties(clusterDetailDTO, haClusterVO);
+                haClusterVOS.add(haClusterVO);
+            }
+        }
+        return Result.buildSuc(haClusterVOS);
+    }
+
+    private Result modifyHaClusterConfig(ClusterDO activeClusterDO, ClusterDO standbyClusterDO, String operator) {
+        // update cluster A's config
+        Result activeResult = createHAInKafka(activeClusterDO.getZookeeper(), standbyClusterDO, operator);
+        if (activeResult.failed()) {
+            return activeResult;
+        }
+
+        // update cluster A's config on the gateway
+        Result activeGatewayResult = this.createHAInKafka(configUtils.getDKafkaGatewayZK(), activeClusterDO, operator);
+        if (activeGatewayResult.failed()) {
+            return activeGatewayResult;
+        }
+
+        // update cluster B's config
+        Result standbyResult = this.createHAInKafka(standbyClusterDO.getZookeeper(), activeClusterDO, operator);
+        if (standbyResult.failed()) {
+            return standbyResult;
+        }
+        // update cluster B's config on the gateway
+        Result standbyGatewayResult = this.createHAInKafka(configUtils.getDKafkaGatewayZK(), standbyClusterDO, operator);
+        if (standbyGatewayResult.failed()) {
+            return standbyGatewayResult;
+        }
+
+        return Result.buildSuc();
+    }
+
+    private Result modifyHaTopicConfig(ClusterDO activeClusterDO, ClusterDO standbyClusterDO, String operator) {
+        // configure cluster B to fetch cluster A's __consumer_offsets
+        Result aResult = haTopicService.activeHAInKafkaNotCheck(activeClusterDO, KafkaConstant.COORDINATOR_TOPIC_NAME,
+                standbyClusterDO, KafkaConstant.COORDINATOR_TOPIC_NAME, operator);
+        if (aResult.failed()) {
+            return aResult;
+        }
+
+        // configure cluster A to fetch cluster B's __consumer_offsets
+        return haTopicService.activeHAInKafkaNotCheck(standbyClusterDO, KafkaConstant.COORDINATOR_TOPIC_NAME,
+                activeClusterDO, KafkaConstant.COORDINATOR_TOPIC_NAME, operator);
+
+    }
+
+    private Result delClusterHaConfig(ClusterDO clusterDO, ClusterDO standbyClusterDO) {
+        // remove cluster A's config for syncing offsets from cluster B
+        ResultStatus resultStatus = HaTopicCommands.deleteHaTopicConfig(
+                clusterDO,
+                KafkaConstant.COORDINATOR_TOPIC_NAME,
+                Arrays.asList(KafkaConstant.DIDI_HA_REMOTE_CLUSTER, KafkaConstant.DIDI_HA_SYNC_TOPIC_CONFIGS_ENABLED)
+        );
+        if (resultStatus.getCode() != 0) {
+            LOGGER.error("delete active cluster config failed! clusterId:{} standbyClusterId:{}", clusterDO.getId(), standbyClusterDO.getId());
+            return Result.buildFailure("删除主集群__consumer_offsets高可用配置失败,请重试!");
+        }
+
+        // remove cluster A's HA config
+        resultStatus = HaClusterCommands.coverHaClusterConfig(clusterDO.getZookeeper(), standbyClusterDO.getId(), new Properties());
+        if (resultStatus.getCode() != 0) {
+            LOGGER.error("delete cluster config failed! clusterId:{} standbyClusterId:{}", clusterDO.getId(), standbyClusterDO.getId());
+            return Result.buildFailure("删除主集群高可用配置失败,请重试!");
+        }
+
+        // remove cluster B's config for syncing offsets from cluster A (from here on failures are only logged, so cleanup continues)
+        resultStatus = HaTopicCommands.deleteHaTopicConfig(
+                standbyClusterDO,
+                KafkaConstant.COORDINATOR_TOPIC_NAME,
+                Arrays.asList(KafkaConstant.DIDI_HA_REMOTE_CLUSTER, KafkaConstant.DIDI_HA_SYNC_TOPIC_CONFIGS_ENABLED)
+        );
+        if (resultStatus.getCode() != 0) {
+            LOGGER.error("delete standby cluster offset-sync config failed! clusterId:{} standbyClusterId:{}", clusterDO.getId(), standbyClusterDO.getId());
+        }
+
+        // remove cluster B's HA config
+        resultStatus = HaClusterCommands.coverHaClusterConfig(standbyClusterDO.getZookeeper(), standbyClusterDO.getId(), new Properties());
+        if (resultStatus.getCode() != 0) {
+            LOGGER.error("delete standby cluster config failed! clusterId:{} standbyClusterId:{}", clusterDO.getId(), standbyClusterDO.getId());
+        }
+
+        // clear the standby cluster's config on the gateway
+        resultStatus = HaClusterCommands.coverHaClusterConfig(configUtils.getDKafkaGatewayZK(), standbyClusterDO.getId(), new Properties());
+        if (resultStatus.getCode() != 0) {
+            LOGGER.error("delete standby cluster gateway config failed! clusterId:{} standbyClusterId:{}", clusterDO.getId(), standbyClusterDO.getId());
+        }
+
+        // clear cluster A's config on the gateway
+        resultStatus = HaClusterCommands.coverHaClusterConfig(configUtils.getDKafkaGatewayZK(), clusterDO.getId(), new Properties());
+        if (resultStatus.getCode() != 0) {
+            LOGGER.error("delete active cluster gateway config failed! clusterId:{} standbyClusterId:{}", clusterDO.getId(), standbyClusterDO.getId());
+        }
+
+        return Result.buildSuc();
+    }
+
+    private Result delDBHaCluster(Long activeClusterPhyId, Long standbyClusterPhyId) {
+        LambdaQueryWrapper<HaASRelationDO> topicQueryWrapper = new LambdaQueryWrapper<>();
+        topicQueryWrapper.eq(HaASRelationDO::getResType, HaResTypeEnum.TOPIC.getCode());
+        topicQueryWrapper.eq(HaASRelationDO::getActiveClusterPhyId, activeClusterPhyId);
+        List<HaASRelationDO> relationDOS = haActiveStandbyRelationDao.selectList(topicQueryWrapper);
+        if (!relationDOS.isEmpty()) {
+            return Result.buildFrom(ResultStatus.HA_CLUSTER_DELETE_FORBIDDEN);
+        }
+
+        try {
+            LambdaQueryWrapper<HaASRelationDO> queryWrapper = new LambdaQueryWrapper<>();
+            queryWrapper.eq(HaASRelationDO::getActiveClusterPhyId, activeClusterPhyId);
+            queryWrapper.eq(HaASRelationDO::getResType, HaResTypeEnum.CLUSTER.getCode());
+
+            int count = haActiveStandbyRelationDao.delete(queryWrapper);
+            if (count < 1) {
+                LOGGER.error("delete HA failed! clusterId:{} standbyClusterId:{}", activeClusterPhyId, standbyClusterPhyId);
+                return Result.buildFrom(ResultStatus.MYSQL_ERROR);
+            }
+        } catch (Exception e) {
+            LOGGER.error("delete HA failed!
clusterId:{} standbyClusterId:{}" , activeClusterPhyId, standbyClusterPhyId); + return Result.buildFrom(ResultStatus.MYSQL_ERROR); + } + return Result.buildSuc(); + } + + private Properties getSecurityProperties(String securityPropertiesStr){ + Properties securityProperties = new Properties(); + if (StringUtils.isBlank(securityPropertiesStr)){ + return securityProperties; + } + securityProperties.putAll(JsonUtils.stringToObj(securityPropertiesStr, Properties.class)); + securityProperties.put(KafkaConstant.SASL_JAAS_CONFIG, securityProperties.getProperty(KafkaConstant.SASL_JAAS_CONFIG)==null + ?"":securityProperties.getProperty(KafkaConstant.SASL_JAAS_CONFIG).replaceAll("\"","\\\\\"")); + return securityProperties; + } +} diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ha/impl/HaKafkaUserServiceImpl.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ha/impl/HaKafkaUserServiceImpl.java new file mode 100644 index 00000000..2a12cd0d --- /dev/null +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ha/impl/HaKafkaUserServiceImpl.java @@ -0,0 +1,42 @@ +package com.xiaojukeji.kafka.manager.service.service.ha.impl; + +import com.xiaojukeji.kafka.manager.common.constant.KafkaConstant; +import com.xiaojukeji.kafka.manager.common.entity.Result; +import com.xiaojukeji.kafka.manager.common.entity.ResultStatus; +import com.xiaojukeji.kafka.manager.service.service.ha.HaKafkaUserService; +import com.xiaojukeji.kafka.manager.service.utils.HaKafkaUserCommands; +import org.springframework.stereotype.Service; + +import java.util.Arrays; +import java.util.Properties; + +@Service +public class HaKafkaUserServiceImpl implements HaKafkaUserService { + + @Override + public Result setNoneHAInKafka(String zookeeper, String kafkaUser) { + Properties props = new Properties(); + props.put(KafkaConstant.DIDI_HA_ACTIVE_CLUSTER, KafkaConstant.NONE); + + return HaKafkaUserCommands.modifyHaUserConfig(zookeeper, kafkaUser, props)? + Result.buildSuc(): // 修改成功 + Result.buildFrom(ResultStatus.ZOOKEEPER_OPERATE_FAILED); // 修改失败 + } + + @Override + public Result stopHAInKafka(String zookeeper, String kafkaUser) { + return HaKafkaUserCommands.deleteHaUserConfig(zookeeper, kafkaUser, Arrays.asList(KafkaConstant.DIDI_HA_ACTIVE_CLUSTER))? + Result.buildSuc(): // 修改成功 + Result.buildFrom(ResultStatus.ZOOKEEPER_OPERATE_FAILED); // 修改失败 + } + + @Override + public Result activeHAInKafka(String zookeeper, Long activeClusterPhyId, String kafkaUser) { + Properties props = new Properties(); + props.put(KafkaConstant.DIDI_HA_ACTIVE_CLUSTER, String.valueOf(activeClusterPhyId)); + + return HaKafkaUserCommands.modifyHaUserConfig(zookeeper, kafkaUser, props)? 
+ Result.buildSuc(): // 修改成功 + Result.buildFrom(ResultStatus.ZOOKEEPER_OPERATE_FAILED); // 修改失败 + } +} diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ha/impl/HaTopicServiceImpl.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ha/impl/HaTopicServiceImpl.java new file mode 100644 index 00000000..5ad90824 --- /dev/null +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ha/impl/HaTopicServiceImpl.java @@ -0,0 +1,469 @@ +package com.xiaojukeji.kafka.manager.service.service.ha.impl; + +import com.xiaojukeji.kafka.manager.common.bizenum.ha.HaRelationTypeEnum; +import com.xiaojukeji.kafka.manager.common.bizenum.ha.HaResTypeEnum; +import com.xiaojukeji.kafka.manager.common.bizenum.ha.HaStatusEnum; +import com.xiaojukeji.kafka.manager.common.constant.Constant; +import com.xiaojukeji.kafka.manager.common.constant.KafkaConstant; +import com.xiaojukeji.kafka.manager.common.constant.MsgConstant; +import com.xiaojukeji.kafka.manager.common.constant.TopicCreationConstant; +import com.xiaojukeji.kafka.manager.common.entity.Result; +import com.xiaojukeji.kafka.manager.common.entity.ResultStatus; +import com.xiaojukeji.kafka.manager.common.entity.ao.gateway.TopicQuota; +import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO; +import com.xiaojukeji.kafka.manager.common.entity.pojo.TopicDO; +import com.xiaojukeji.kafka.manager.common.entity.pojo.gateway.AuthorityDO; +import com.xiaojukeji.kafka.manager.common.entity.pojo.ha.HaASRelationDO; +import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils; +import com.xiaojukeji.kafka.manager.common.utils.jmx.JmxAttributeEnum; +import com.xiaojukeji.kafka.manager.common.utils.jmx.JmxConnectorWrap; +import com.xiaojukeji.kafka.manager.common.zookeeper.znode.brokers.PartitionState; +import com.xiaojukeji.kafka.manager.common.zookeeper.znode.brokers.TopicMetadata; +import com.xiaojukeji.kafka.manager.service.cache.PhysicalClusterMetadataManager; +import com.xiaojukeji.kafka.manager.service.service.AdminService; +import com.xiaojukeji.kafka.manager.service.service.ClusterService; +import com.xiaojukeji.kafka.manager.service.service.TopicManagerService; +import com.xiaojukeji.kafka.manager.service.service.gateway.AuthorityService; +import com.xiaojukeji.kafka.manager.service.service.gateway.QuotaService; +import com.xiaojukeji.kafka.manager.service.service.ha.HaASRelationService; +import com.xiaojukeji.kafka.manager.service.service.ha.HaKafkaUserService; +import com.xiaojukeji.kafka.manager.service.service.ha.HaTopicService; +import com.xiaojukeji.kafka.manager.service.utils.ConfigUtils; +import com.xiaojukeji.kafka.manager.service.utils.HaTopicCommands; +import com.xiaojukeji.kafka.manager.service.utils.KafkaZookeeperUtils; +import com.xiaojukeji.kafka.manager.service.utils.TopicCommands; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import org.springframework.beans.factory.annotation.Autowired; +import org.springframework.stereotype.Service; +import org.springframework.transaction.annotation.Transactional; +import org.springframework.transaction.interceptor.TransactionAspectSupport; + +import javax.management.Attribute; +import javax.management.ObjectName; +import java.util.*; +import java.util.stream.Collectors; + +@Service +public class HaTopicServiceImpl implements HaTopicService { + private static final Logger LOGGER = LoggerFactory.getLogger(HaTopicServiceImpl.class); + + @Autowired + private ClusterService clusterService; + + 
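// Collaborators for the HA-topic lifecycle, as used in the methods below:
+    // adminService creates the standby topic, authorityService/quotaService copy
+    // its ACLs and quotas, and haKafkaUserService repoints the KafkaUser configs
+    // at the active cluster (descriptive note inferred from their usage here).
+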
@Autowired
+    private QuotaService quotaService;
+
+    @Autowired
+    private AdminService adminService;
+
+    @Autowired
+    private HaASRelationService haASRelationService;
+
+    @Autowired
+    private AuthorityService authorityService;
+
+    @Autowired
+    private HaKafkaUserService haKafkaUserService;
+
+    @Autowired
+    private ConfigUtils configUtils;
+
+    @Autowired
+    private TopicManagerService topicManagerService;
+
+    @Override
+    public Result createHA(Long activeClusterPhyId, Long standbyClusterPhyId, String topicName, String operator) {
+        ClusterDO activeClusterDO = PhysicalClusterMetadataManager.getClusterFromCache(activeClusterPhyId);
+        if (activeClusterDO == null) {
+            return Result.buildFromRSAndMsg(ResultStatus.CLUSTER_NOT_EXIST, "主集群不存在");
+        }
+
+        ClusterDO standbyClusterDO = PhysicalClusterMetadataManager.getClusterFromCache(standbyClusterPhyId);
+        if (standbyClusterDO == null) {
+            return Result.buildFromRSAndMsg(ResultStatus.CLUSTER_NOT_EXIST, "备集群不存在");
+        }
+
+        // check whether the active/standby relation already exists
+        HaASRelationDO relationDO = haASRelationService.getSpecifiedHAFromDB(
+                activeClusterPhyId,
+                topicName,
+                standbyClusterPhyId,
+                topicName,
+                HaResTypeEnum.TOPIC
+        );
+        if (relationDO != null) {
+            // the HA topic already exists, treat as success
+            return Result.buildSuc();
+        }
+
+        Result<TopicDO> checkResult = this.checkHaTopicAndGetBizInfo(activeClusterPhyId, standbyClusterPhyId, topicName);
+        if (checkResult.failed()) {
+            return Result.buildFromIgnoreData(checkResult);
+        }
+
+        // write the HA config for the topic
+        Result rv = this.modifyHaConfig(
+                activeClusterDO,
+                topicName,
+                standbyClusterDO,
+                topicName,
+                operator
+        );
+        if (rv.failed()) {
+            return rv;
+        }
+
+        // create the standby topic
+        rv = this.addStandbyTopic(checkResult.getData(), activeClusterDO, standbyClusterDO, operator);
+        if (rv.failed()) {
+            return rv;
+        }
+
+        // copy authorities and quotas to the standby topic
+        rv = this.addStandbyTopicAuthorityAndQuota(activeClusterPhyId, standbyClusterPhyId, topicName);
+        if (rv.failed()) {
+            return rv;
+        }
+
+        // persist the active/standby relation to the DB
+        return haASRelationService.addHAToDB(
+                new HaASRelationDO(
+                        activeClusterPhyId,
+                        topicName,
+                        standbyClusterPhyId,
+                        topicName,
+                        HaResTypeEnum.TOPIC.getCode(),
+                        HaStatusEnum.STABLE.getCode()
+                )
+        );
+    }
+
+    private Result addStandbyTopic(TopicDO activeTopicDO, ClusterDO activeClusterDO, ClusterDO standbyClusterDO, String operator) {
+        // fetch the active topic's config
+        Properties activeTopicProps = TopicCommands.fetchTopicConfig(activeClusterDO, activeTopicDO.getTopicName());
+        if (activeTopicProps == null) {
+            return Result.buildFromRSAndMsg(ResultStatus.FAIL, "创建备Topic时,获取主Topic配置失败");
+        }
+
+        TopicDO newTopicDO = new TopicDO(
+                activeTopicDO.getAppId(),
+                standbyClusterDO.getId(),
+                activeTopicDO.getTopicName(),
+                activeTopicDO.getDescription(),
+                TopicCreationConstant.DEFAULT_QUOTA
+        );
+        TopicMetadata topicMetadata = PhysicalClusterMetadataManager.getTopicMetadata(activeClusterDO.getId(), activeTopicDO.getTopicName());
+
+        ResultStatus rs = adminService.createTopic(standbyClusterDO,
+                newTopicDO,
+                topicMetadata.getPartitionNum(),
+                topicMetadata.getReplicaNum(),
+                null,
+                PhysicalClusterMetadataManager.getBrokerIdList(standbyClusterDO.getId()),
+                activeTopicProps,
+                operator,
+                operator
+        );
+
+        if (!ResultStatus.SUCCESS.equals(rs)) {
+            LOGGER.error(
+                    "method=addStandbyTopic||activeClusterPhyId={}||standbyClusterPhyId={}||activeTopicDO={}||result={}||msg=create standby topic failed.",
+                    activeClusterDO.getId(), standbyClusterDO.getId(), activeTopicDO, rs
+            );
+            return Result.buildFromRSAndMsg(rs, String.format("创建备Topic失败,原因:%s", rs.getMessage()));
+        }
+
+        return Result.buildSuc();
+    }
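+
+    /*
+     * Illustrative sketch (assuming the KafkaConstant.DIDI_HA_* constants resolve
+     * to the corresponding didi.ha.* config keys): once a topic is bound, its
+     * standby copy carries roughly the following ha-topics config in ZooKeeper,
+     * written by activeTopicHAConfigInKafka() further below via HaTopicCommands:
+     *
+     *   didi.ha.sync.topic.configs.enabled = true
+     *   didi.ha.remote.cluster             = <active cluster id>
+     *   didi.ha.remote.topic               = <active topic name, only set when the names differ>
+     */
+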
@Override + public Result activeHAInKafka(ClusterDO activeClusterDO, String activeTopicName, ClusterDO standbyClusterDO, String standbyTopicName, String operator) { + if (!PhysicalClusterMetadataManager.isTopicExist(activeClusterDO.getId(), activeTopicName)) { + // 主Topic不存在 + return Result.buildFrom(ResultStatus.TOPIC_NOT_EXIST); + } + if (!PhysicalClusterMetadataManager.isTopicExist(standbyClusterDO.getId(), standbyTopicName)) { + // 备Topic不存在 + return Result.buildFrom(ResultStatus.TOPIC_NOT_EXIST); + } + + return this.activeTopicHAConfigInKafka(activeClusterDO, activeTopicName, standbyClusterDO, standbyTopicName); + } + + @Override + public Result activeHAInKafkaNotCheck(ClusterDO activeClusterDO, String activeTopicName, ClusterDO standbyClusterDO, String standbyTopicName, String operator) { + //更新开启topic高可用配置,并将备集群的配置信息指向主集群 + Result rv = activeTopicHAConfigInKafka(activeClusterDO, activeTopicName, standbyClusterDO, standbyTopicName); + if (rv.failed()){ + return rv; + } + return Result.buildSuc(); + } + + @Override + @Transactional + public Result deleteHA(Long activeClusterPhyId, Long standbyClusterPhyId, String topicName, String operator) { + ClusterDO activeClusterDO = clusterService.getById(activeClusterPhyId); + if (activeClusterDO == null){ + return Result.buildFromRSAndMsg(ResultStatus.CLUSTER_NOT_EXIST, "主集群不存在"); + } + + ClusterDO standbyClusterDO = clusterService.getById(standbyClusterPhyId); + if (standbyClusterDO == null){ + return Result.buildFromRSAndMsg(ResultStatus.CLUSTER_NOT_EXIST, "备集群不存在"); + } + + HaASRelationDO relationDO = haASRelationService.getHAFromDB( + activeClusterPhyId, + topicName, + HaResTypeEnum.TOPIC + ); + if (relationDO == null) { + return Result.buildFromRSAndMsg(ResultStatus.RESOURCE_NOT_EXIST, "主备关系不存在"); + } + if (!relationDO.getStatus().equals(HaStatusEnum.STABLE_CODE)) { + return Result.buildFromRSAndMsg(ResultStatus.OPERATION_FORBIDDEN, "主备切换中,不允许解绑"); + } + + // 删除高可用配置信息 + Result rv = this.stopHAInKafka(standbyClusterDO, topicName, operator); + if(rv.failed()){ + return rv; + } + + rv = haASRelationService.deleteById(relationDO.getId()); + if(rv.failed()){ + TransactionAspectSupport.currentTransactionStatus().setRollbackOnly(); + return rv; + } + + return rv; + } + + @Override + public Result stopHAInKafka(ClusterDO standbyClusterDO, String standbyTopicName, String operator) { + //删除副集群同步主集群topic配置 + ResultStatus rs = HaTopicCommands.deleteHaTopicConfig( + standbyClusterDO, + standbyTopicName, + Arrays.asList(KafkaConstant.DIDI_HA_SYNC_TOPIC_CONFIGS_ENABLED, KafkaConstant.DIDI_HA_REMOTE_CLUSTER) + ); + if (!ResultStatus.SUCCESS.equals(rs)) { + LOGGER.error( + "method=deleteHAInKafka||standbyClusterId={}||standbyTopicName={}||rs={}||msg=delete topic ha failed.", + standbyClusterDO.getId(), standbyTopicName, rs + ); + return Result.buildFromRSAndMsg(rs, "delete topic ha failed"); + } + + return Result.buildSuc(); + } + + @Override + public Map getRelation(Long clusterId) { + Map relationMap = new HashMap<>(); + List relationDOS = haASRelationService.listAllHAFromDB(clusterId, HaResTypeEnum.TOPIC); + if (relationDOS.isEmpty()){ + return relationMap; + } + + //主topic + List activeTopics = relationDOS.stream().filter(haASRelationDO -> haASRelationDO.getActiveClusterPhyId().equals(clusterId)).map(HaASRelationDO::getActiveResName).collect(Collectors.toList()); + activeTopics.stream().forEach(topicName -> relationMap.put(topicName, HaRelationTypeEnum.ACTIVE.getCode())); + + //备topic + List standbyTopics = relationDOS.stream().filter(haASRelationDO 
-> haASRelationDO.getStandbyClusterPhyId().equals(clusterId)).map(HaASRelationDO::getStandbyResName).collect(Collectors.toList()); + standbyTopics.stream().forEach(topicName -> relationMap.put(topicName, HaRelationTypeEnum.STANDBY.getCode())); + + //互备 + relationMap.put(KafkaConstant.COORDINATOR_TOPIC_NAME, HaRelationTypeEnum.MUTUAL_BACKUP.getCode()); + + return relationMap; + } + + @Override + public Map> getClusterStandbyTopicMap() { + Map> clusterStandbyTopicMap = new HashMap<>(); + List relationDOS = haASRelationService.listAllHAFromDB(HaResTypeEnum.TOPIC); + if (relationDOS.isEmpty()){ + return clusterStandbyTopicMap; + } + return relationDOS.stream().collect(Collectors.groupingBy(HaASRelationDO::getStandbyClusterPhyId, Collectors.mapping(HaASRelationDO::getStandbyResName, Collectors.toList()))); + } + + @Override + public Result activeUserHAInKafka(ClusterDO activeClusterDO, ClusterDO standbyClusterDO, String kafkaUser, String operator) { + Result rv; + rv = haKafkaUserService.activeHAInKafka(activeClusterDO.getZookeeper(), activeClusterDO.getId(), kafkaUser); + if (rv.failed()) { + return rv; + } + + rv = haKafkaUserService.activeHAInKafka(standbyClusterDO.getZookeeper(), activeClusterDO.getId(), kafkaUser); + if (rv.failed()) { + return rv; + } + + rv = haKafkaUserService.activeHAInKafka(configUtils.getDKafkaGatewayZK(), activeClusterDO.getId(), kafkaUser); + if (rv.failed()) { + return rv; + } + return rv; + } + + @Override + public Result getStandbyTopicFetchLag(Long standbyClusterPhyId, String topicName) { + TopicMetadata metadata = PhysicalClusterMetadataManager.getTopicMetadata(standbyClusterPhyId, topicName); + if (metadata == null) { + return Result.buildFromRSAndMsg(ResultStatus.RESOURCE_NOT_EXIST, MsgConstant.getTopicNotExist(standbyClusterPhyId, topicName)); + } + + List partitionIdList = new ArrayList<>(metadata.getPartitionMap().getPartitions().keySet()); + + List partitionStateList = KafkaZookeeperUtils.getTopicPartitionState( + PhysicalClusterMetadataManager.getZKConfig(standbyClusterPhyId), + topicName, + partitionIdList + ); + + if (partitionStateList.size() != partitionIdList.size()) { + return Result.buildFromRSAndMsg(ResultStatus.ZOOKEEPER_READ_FAILED, "读取ZK的分区元信息失败"); + } + + Long sumLag = 0L; + for (Integer leaderBrokerId: partitionStateList.stream().map(elem -> elem.getLeader()).collect(Collectors.toSet())) { + JmxConnectorWrap jmxConnectorWrap = PhysicalClusterMetadataManager.getJmxConnectorWrap(standbyClusterPhyId, leaderBrokerId); + if (jmxConnectorWrap == null || !jmxConnectorWrap.checkJmxConnectionAndInitIfNeed()) { + return Result.buildFromRSAndMsg(ResultStatus.OPERATION_FAILED, String.format("获取BrokerId=%d的jmx客户端失败", leaderBrokerId)); + } + + + try { + ObjectName objectName = new ObjectName( + "kafka.server:type=FetcherLagMetrics,name=ConsumerLag,clientId=MirrorFetcherThread-*" + "-" + standbyClusterPhyId + "*" + ",topic=" + topicName + ",partition=*" + ); + + Set objectNameSet = jmxConnectorWrap.queryNames(objectName, null); + for (ObjectName name: objectNameSet) { + List attributeList = jmxConnectorWrap.getAttributes(name, JmxAttributeEnum.VALUE_ATTRIBUTE.getAttribute()).asList(); + for (Attribute attribute: attributeList) { + sumLag += Long.valueOf(attribute.getValue().toString()); + } + } + } catch (Exception e) { + LOGGER.error( + "class=HaTopicServiceImpl||method=getStandbyTopicFetchLag||standbyClusterPhyId={}||topicName={}||leaderBrokerId={}||errMsg=exception.", + standbyClusterPhyId, topicName, leaderBrokerId, e + ); + + return 
Result.buildFromRSAndMsg(ResultStatus.OPERATION_FAILED, e.getMessage()); + } + } + + return Result.buildSuc(sumLag); + } + + /**************************************************** private method ****************************************************/ + + private Result activeTopicHAConfigInKafka(ClusterDO activeClusterDO, String activeTopicName, ClusterDO standbyClusterDO, String standbyTopicName) { + //更新ha-topic配置 + Properties standbyTopicProps = new Properties(); + standbyTopicProps.put(KafkaConstant.DIDI_HA_SYNC_TOPIC_CONFIGS_ENABLED, Boolean.TRUE.toString()); + standbyTopicProps.put(KafkaConstant.DIDI_HA_REMOTE_CLUSTER, activeClusterDO.getId().toString()); + if (!activeTopicName.equals(standbyTopicName)) { + standbyTopicProps.put(KafkaConstant.DIDI_HA_REMOTE_TOPIC, activeTopicName); + } + ResultStatus rs = HaTopicCommands.modifyHaTopicConfig(standbyClusterDO, standbyTopicName, standbyTopicProps); + if (!ResultStatus.SUCCESS.equals(rs)) { + LOGGER.error( + "method=createHAInKafka||activeClusterId={}||activeTopicName={}||standbyClusterId={}||standbyTopicName={}||rs={}||msg=create topic ha failed.", + activeClusterDO.getId(), activeTopicName, standbyClusterDO.getId(), standbyTopicName, rs + ); + return Result.buildFromRSAndMsg(rs, "modify ha topic config failed"); + } + + return Result.buildSuc(); + } + + public Result addStandbyTopicAuthorityAndQuota(Long activeClusterPhyId, Long standbyClusterPhyId, String topicName) { + List authorityDOS = authorityService.getAuthorityByTopic(activeClusterPhyId, topicName); + try { + for (AuthorityDO authorityDO : authorityDOS) { + //权限 + AuthorityDO newAuthorityDO = new AuthorityDO(); + newAuthorityDO.setAppId(authorityDO.getAppId()); + newAuthorityDO.setClusterId(standbyClusterPhyId); + newAuthorityDO.setTopicName(topicName); + newAuthorityDO.setAccess(authorityDO.getAccess()); + + //quota + TopicQuota activeTopicQuotaDO = quotaService.getQuotaFromZk( + activeClusterPhyId, + topicName, + authorityDO.getAppId() + ); + + TopicQuota standbyTopicQuotaDO = new TopicQuota(); + standbyTopicQuotaDO.setTopicName(topicName); + standbyTopicQuotaDO.setAppId(activeTopicQuotaDO.getAppId()); + standbyTopicQuotaDO.setClusterId(standbyClusterPhyId); + standbyTopicQuotaDO.setConsumeQuota(activeTopicQuotaDO.getConsumeQuota()); + standbyTopicQuotaDO.setProduceQuota(activeTopicQuotaDO.getProduceQuota()); + + int result = authorityService.addAuthorityAndQuota(newAuthorityDO, standbyTopicQuotaDO); + if (Constant.INVALID_CODE == result){ + return Result.buildFrom(ResultStatus.OPERATION_FAILED); + } + } + } catch (Exception e) { + LOGGER.error( + "method=addStandbyTopicAuthorityAndQuota||activeClusterPhyId={}||standbyClusterPhyId={}||topicName={}||errMsg=exception.", + activeClusterPhyId, standbyClusterPhyId, topicName, e + ); + + return Result.buildFailure("备Topic复制主Topic权限及配额失败"); + } + + return Result.buildSuc(); + } + + private Result checkHaTopicAndGetBizInfo(Long activeClusterPhyId, Long standbyClusterPhyId, String topicName){ + if (PhysicalClusterMetadataManager.isTopicExist(standbyClusterPhyId, topicName)) { + return Result.buildFromRSAndMsg(ResultStatus.TOPIC_ALREADY_EXIST, "备集群已存在该Topic,请先删除,再行绑定!"); + } + + if (!PhysicalClusterMetadataManager.isTopicExist(activeClusterPhyId, topicName)) { + return Result.buildFromRSAndMsg(ResultStatus.TOPIC_NOT_EXIST, "主集群不存在该Topic"); + } + + TopicDO topicDO = topicManagerService.getByTopicName(activeClusterPhyId, topicName); + if (ValidateUtils.isNull(topicDO)) { + return Result.buildFromRSAndMsg(ResultStatus.RESOURCE_NOT_EXIST, 
"主集群Topic所属KafkaUser信息不存在"); + } + + return Result.buildSuc(topicDO); + } + + private Result modifyHaConfig(ClusterDO activeClusterDO, String activeTopic, ClusterDO standbyClusterDO, String standbyTopic, String operator){ + //更新副集群同步主集群topic配置 + Result rv = activeHAInKafkaNotCheck(activeClusterDO, activeTopic, standbyClusterDO, standbyTopic, operator); + if (rv.failed()){ + LOGGER.error("method=createHA||activeTopic:{} standbyTopic:{}||msg=create haTopic modify standby topic config failed!.", activeTopic, standbyTopic); + return Result.buildFailure("modify standby topic config failed,please try again"); + } + + //更新user配置,通知用户指向主集群 + Set relatedKafkaUserSet = authorityService.getAuthorityByTopic(activeClusterDO.getId(), activeTopic) + .stream() + .map(elem -> elem.getAppId()) + .collect(Collectors.toSet()); + for(String kafkaUser: relatedKafkaUserSet) { + rv = this.activeUserHAInKafka(activeClusterDO, standbyClusterDO, kafkaUser, operator); + if (rv.failed()) { + return rv; + } + } + return Result.buildSuc(); + } +} diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/AdminServiceImpl.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/AdminServiceImpl.java index 594f1aa1..b2eca0d6 100644 --- a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/AdminServiceImpl.java +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/AdminServiceImpl.java @@ -2,21 +2,27 @@ package com.xiaojukeji.kafka.manager.service.service.impl; import com.alibaba.fastjson.JSON; import com.alibaba.fastjson.JSONObject; -import com.xiaojukeji.kafka.manager.common.bizenum.*; -import com.xiaojukeji.kafka.manager.common.entity.pojo.OperateRecordDO; -import com.xiaojukeji.kafka.manager.common.entity.pojo.gateway.AuthorityDO; -import com.xiaojukeji.kafka.manager.common.entity.ao.gateway.TopicQuota; +import com.xiaojukeji.kafka.manager.common.bizenum.ModuleEnum; +import com.xiaojukeji.kafka.manager.common.bizenum.OperateEnum; +import com.xiaojukeji.kafka.manager.common.bizenum.TaskStatusEnum; +import com.xiaojukeji.kafka.manager.common.bizenum.TopicAuthorityEnum; import com.xiaojukeji.kafka.manager.common.constant.Constant; import com.xiaojukeji.kafka.manager.common.entity.ResultStatus; +import com.xiaojukeji.kafka.manager.common.entity.ao.gateway.TopicQuota; +import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO; +import com.xiaojukeji.kafka.manager.common.entity.pojo.OperateRecordDO; +import com.xiaojukeji.kafka.manager.common.entity.pojo.TopicDO; +import com.xiaojukeji.kafka.manager.common.entity.pojo.gateway.AuthorityDO; +import com.xiaojukeji.kafka.manager.common.entity.pojo.ha.HaASRelationDO; import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils; import com.xiaojukeji.kafka.manager.common.zookeeper.ZkConfigImpl; import com.xiaojukeji.kafka.manager.common.zookeeper.znode.brokers.BrokerMetadata; -import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO; -import com.xiaojukeji.kafka.manager.common.entity.pojo.TopicDO; import com.xiaojukeji.kafka.manager.common.zookeeper.znode.brokers.TopicMetadata; +import com.xiaojukeji.kafka.manager.service.biz.ha.HaASRelationManager; import com.xiaojukeji.kafka.manager.service.cache.PhysicalClusterMetadataManager; import com.xiaojukeji.kafka.manager.service.service.*; import com.xiaojukeji.kafka.manager.service.service.gateway.AuthorityService; +import com.xiaojukeji.kafka.manager.service.service.ha.HaTopicService; 
import com.xiaojukeji.kafka.manager.service.utils.KafkaZookeeperUtils; import com.xiaojukeji.kafka.manager.service.utils.TopicCommands; import kafka.admin.AdminOperationException; @@ -55,6 +61,12 @@ public class AdminServiceImpl implements AdminService { @Autowired private AuthorityService authorityService; + @Autowired + private HaTopicService haTopicService; + + @Autowired + private HaASRelationManager haASRelationManager; + @Autowired private OperateRecordService operateRecordService; @@ -123,15 +135,22 @@ public class AdminServiceImpl implements AdminService { } @Override - public ResultStatus deleteTopic(ClusterDO clusterDO, - String topicName, - String operator) { - // 1. 集群中删除topic + public ResultStatus deleteTopic(ClusterDO clusterDO, String topicName, String operator) { + // 1. 若存在高可用topic,先解除高可用关系才能删除topic + HaASRelationDO haASRelationDO = haASRelationManager.getASRelation(clusterDO.getId(), topicName); + if (haASRelationDO != null){ + //高可用topic不允许删除 + if (haASRelationDO.getStandbyClusterPhyId().equals(clusterDO.getId())){ + return ResultStatus.HA_TOPIC_DELETE_FORBIDDEN; + } + } + + // 2. 集群中删除topic ResultStatus rs = TopicCommands.deleteTopic(clusterDO, topicName); if (!ResultStatus.SUCCESS.equals(rs)) { return rs; } - // 2. 记录操作 + // 3. 记录操作 Map content = new HashMap<>(2); content.put("clusterId", clusterDO.getId()); content.put("topicName", topicName); @@ -144,12 +163,13 @@ public class AdminServiceImpl implements AdminService { operateRecordDO.setOperator(operator); operateRecordService.insert(operateRecordDO); - // 3. 数据库中删除topic + // 4. 数据库中删除topic topicManagerService.deleteByTopicName(clusterDO.getId(), topicName); topicExpiredService.deleteByTopicName(clusterDO.getId(), topicName); - // 4. 数据库中删除authority + // 5. 数据库中删除authority authorityService.deleteAuthorityByTopic(clusterDO.getId(), topicName); + return rs; } @@ -346,7 +366,6 @@ public class AdminServiceImpl implements AdminService { @Override public ResultStatus modifyTopicConfig(ClusterDO clusterDO, String topicName, Properties properties, String operator) { - ResultStatus rs = TopicCommands.modifyTopicConfig(clusterDO, topicName, properties); - return rs; + return TopicCommands.modifyTopicConfig(clusterDO, topicName, properties); } } diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/ClusterServiceImpl.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/ClusterServiceImpl.java index 153576c4..314130ff 100644 --- a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/ClusterServiceImpl.java +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/ClusterServiceImpl.java @@ -3,13 +3,16 @@ package com.xiaojukeji.kafka.manager.service.service.impl; import com.xiaojukeji.kafka.manager.common.bizenum.DBStatusEnum; import com.xiaojukeji.kafka.manager.common.bizenum.ModuleEnum; import com.xiaojukeji.kafka.manager.common.bizenum.OperateEnum; +import com.xiaojukeji.kafka.manager.common.bizenum.ha.HaRelationTypeEnum; +import com.xiaojukeji.kafka.manager.common.bizenum.ha.HaResTypeEnum; import com.xiaojukeji.kafka.manager.common.entity.Result; import com.xiaojukeji.kafka.manager.common.entity.ResultStatus; import com.xiaojukeji.kafka.manager.common.entity.ao.ClusterDetailDTO; import com.xiaojukeji.kafka.manager.common.entity.ao.cluster.ControllerPreferredCandidate; +import com.xiaojukeji.kafka.manager.common.entity.pojo.*; +import 
com.xiaojukeji.kafka.manager.common.entity.pojo.ha.HaASRelationDO; import com.xiaojukeji.kafka.manager.common.entity.vo.normal.cluster.ClusterNameDTO; import com.xiaojukeji.kafka.manager.common.utils.ListUtils; -import com.xiaojukeji.kafka.manager.common.entity.pojo.*; import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils; import com.xiaojukeji.kafka.manager.common.zookeeper.znode.brokers.BrokerMetadata; import com.xiaojukeji.kafka.manager.dao.ClusterDao; @@ -18,15 +21,16 @@ import com.xiaojukeji.kafka.manager.dao.ControllerDao; import com.xiaojukeji.kafka.manager.service.cache.LogicalClusterMetadataManager; import com.xiaojukeji.kafka.manager.service.cache.PhysicalClusterMetadataManager; import com.xiaojukeji.kafka.manager.service.service.*; +import com.xiaojukeji.kafka.manager.service.service.ha.HaASRelationService; +import com.xiaojukeji.kafka.manager.service.service.ha.HaClusterService; import com.xiaojukeji.kafka.manager.service.utils.ConfigUtils; -import org.apache.zookeeper.WatchedEvent; -import org.apache.zookeeper.Watcher; import org.apache.zookeeper.ZooKeeper; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.dao.DuplicateKeyException; import org.springframework.stereotype.Service; +import org.springframework.transaction.annotation.Transactional; import java.util.*; @@ -42,6 +46,9 @@ public class ClusterServiceImpl implements ClusterService { @Autowired private ClusterDao clusterDao; + @Autowired + private HaClusterService haClusterService; + @Autowired private ClusterMetricsDao clusterMetricsDao; @@ -69,6 +76,9 @@ public class ClusterServiceImpl implements ClusterService { @Autowired private OperateRecordService operateRecordService; + @Autowired + private HaASRelationService haASRelationService; + @Override public ResultStatus addNew(ClusterDO clusterDO, String operator) { if (ValidateUtils.isNull(clusterDO) || ValidateUtils.isNull(operator)) { @@ -96,6 +106,7 @@ public class ClusterServiceImpl implements ClusterService { LOGGER.error("add new cluster failed, operate mysql failed, clusterDO:{}.", clusterDO, e); return ResultStatus.MYSQL_ERROR; } + physicalClusterMetadataManager.addNew(clusterDO); return ResultStatus.SUCCESS; } @@ -253,9 +264,11 @@ public class ClusterServiceImpl implements ClusterService { Map consumerGroupNumMap = needDetail? 
consumerService.getConsumerGroupNumMap(doList): new HashMap<>(0); + Map haRelationMap = haClusterService.getClusterHARelation(); List dtoList = new ArrayList<>(); for (ClusterDO clusterDO: doList) { ClusterDetailDTO dto = getClusterDetailDTO(clusterDO, needDetail); + dto.setHaRelation(haRelationMap.get(clusterDO.getId())); dto.setConsumerGroupNum(consumerGroupNumMap.get(clusterDO.getId())); dto.setRegionNum(regionNumMap.get(clusterDO.getId())); dtoList.add(dto); @@ -281,10 +294,11 @@ public class ClusterServiceImpl implements ClusterService { } @Override - public ResultStatus deleteById(Long clusterId, String operator) { + @Transactional + public Result deleteById(Long clusterId, String operator) { List regionDOList = regionService.getByClusterId(clusterId); if (!ValidateUtils.isEmptyList(regionDOList)) { - return ResultStatus.OPERATION_FORBIDDEN; + return Result.buildFrom(ResultStatus.OPERATION_FORBIDDEN); } try { Map content = new HashMap<>(); @@ -292,13 +306,14 @@ public class ClusterServiceImpl implements ClusterService { operateRecordService.insert(operator, ModuleEnum.CLUSTER, String.valueOf(clusterId), OperateEnum.DELETE, content); if (clusterDao.deleteById(clusterId) <= 0) { LOGGER.error("delete cluster failed, clusterId:{}.", clusterId); - return ResultStatus.MYSQL_ERROR; + return Result.buildFrom(ResultStatus.MYSQL_ERROR); } } catch (Exception e) { LOGGER.error("delete cluster failed, clusterId:{}.", clusterId, e); - return ResultStatus.MYSQL_ERROR; + return Result.buildFrom(ResultStatus.MYSQL_ERROR); } - return ResultStatus.SUCCESS; + + return Result.buildSuc(); } private ClusterDetailDTO getClusterDetailDTO(ClusterDO clusterDO, Boolean needDetail) { @@ -318,6 +333,21 @@ public class ClusterServiceImpl implements ClusterService { dto.setStatus(clusterDO.getStatus()); dto.setGmtCreate(clusterDO.getGmtCreate()); dto.setGmtModify(clusterDO.getGmtModify()); + + List haASRelationDOS = haASRelationService + .listAllHAFromDB(clusterDO.getId(), HaResTypeEnum.CLUSTER); + if (!haASRelationDOS.isEmpty()){ + ClusterDO mbCluster; + if (haASRelationDOS.get(0).getActiveClusterPhyId().equals(clusterDO.getId())){ + dto.setHaRelation(HaRelationTypeEnum.ACTIVE.getCode()); + mbCluster = PhysicalClusterMetadataManager.getClusterFromCache(haASRelationDOS.get(0).getStandbyClusterPhyId()); + }else { + dto.setHaRelation(HaRelationTypeEnum.STANDBY.getCode()); + mbCluster = PhysicalClusterMetadataManager.getClusterFromCache(haASRelationDOS.get(0).getActiveClusterPhyId()); + } + dto.setMutualBackupClusterName(mbCluster != null ? 
mbCluster.getClusterName() : null);
+        }
+
         if (ValidateUtils.isNull(needDetail) || !needDetail) {
             return dto;
         }
diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/JobLogServiceImpl.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/JobLogServiceImpl.java
new file mode 100644
index 00000000..b47a049c
--- /dev/null
+++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/JobLogServiceImpl.java
@@ -0,0 +1,42 @@
+package com.xiaojukeji.kafka.manager.service.service.impl;
+
+import com.baomidou.mybatisplus.core.conditions.query.LambdaQueryWrapper;
+import com.xiaojukeji.kafka.manager.common.entity.pojo.ha.JobLogDO;
+import com.xiaojukeji.kafka.manager.dao.ha.JobLogDao;
+import com.xiaojukeji.kafka.manager.service.service.JobLogService;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.springframework.beans.factory.annotation.Autowired;
+import org.springframework.stereotype.Service;
+
+import java.util.List;
+
+
+@Service
+public class JobLogServiceImpl implements JobLogService {
+    private static final Logger LOGGER = LoggerFactory.getLogger(JobLogServiceImpl.class);
+
+    @Autowired
+    private JobLogDao jobLogDao;
+
+    @Override
+    public void addLogAndIgnoreException(JobLogDO jobLogDO) {
+        try {
+            jobLogDao.insert(jobLogDO);
+        } catch (Exception e) {
+            LOGGER.error("method=addLogAndIgnoreException||jobLogDO={}||errMsg=exception", jobLogDO, e);
+        }
+    }
+
+    @Override
+    public List<JobLogDO> listLogs(Integer bizType, String bizKeyword, Long startId) {
+        LambdaQueryWrapper<JobLogDO> lambdaQueryWrapper = new LambdaQueryWrapper<>();
+        lambdaQueryWrapper.eq(JobLogDO::getBizType, bizType);
+        lambdaQueryWrapper.eq(JobLogDO::getBizKeyword, bizKeyword);
+        if (startId != null) {
+            lambdaQueryWrapper.ge(JobLogDO::getId, startId);
+        }
+
+        return jobLogDao.selectList(lambdaQueryWrapper);
+    }
+}
diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/LogicalClusterServiceImpl.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/LogicalClusterServiceImpl.java
index 9a6f40be..47396ee8 100644
--- a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/LogicalClusterServiceImpl.java
+++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/LogicalClusterServiceImpl.java
@@ -5,19 +5,20 @@ import com.xiaojukeji.kafka.manager.common.entity.ResultStatus;
 import com.xiaojukeji.kafka.manager.common.entity.ao.cluster.LogicalCluster;
 import com.xiaojukeji.kafka.manager.common.entity.ao.cluster.LogicalClusterMetrics;
 import com.xiaojukeji.kafka.manager.common.entity.metrics.BrokerMetrics;
+import com.xiaojukeji.kafka.manager.common.entity.pojo.BrokerMetricsDO;
+import com.xiaojukeji.kafka.manager.common.entity.pojo.LogicalClusterDO;
 import com.xiaojukeji.kafka.manager.common.entity.pojo.gateway.AppDO;
 import com.xiaojukeji.kafka.manager.common.utils.ListUtils;
 import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
 import com.xiaojukeji.kafka.manager.common.zookeeper.znode.brokers.BrokerMetadata;
 import com.xiaojukeji.kafka.manager.common.zookeeper.znode.brokers.TopicMetadata;
 import com.xiaojukeji.kafka.manager.dao.LogicalClusterDao;
-import com.xiaojukeji.kafka.manager.common.entity.pojo.BrokerMetricsDO;
-import com.xiaojukeji.kafka.manager.common.entity.pojo.LogicalClusterDO;
 import com.xiaojukeji.kafka.manager.service.cache.LogicalClusterMetadataManager;
 import
com.xiaojukeji.kafka.manager.service.cache.PhysicalClusterMetadataManager; -import com.xiaojukeji.kafka.manager.service.service.gateway.AppService; import com.xiaojukeji.kafka.manager.service.service.BrokerService; import com.xiaojukeji.kafka.manager.service.service.LogicalClusterService; +import com.xiaojukeji.kafka.manager.service.service.gateway.AppService; +import com.xiaojukeji.kafka.manager.service.service.ha.HaClusterService; import com.xiaojukeji.kafka.manager.service.utils.MetricsConvertUtils; import org.slf4j.Logger; import org.slf4j.LoggerFactory; @@ -45,6 +46,9 @@ public class LogicalClusterServiceImpl implements LogicalClusterService { @Autowired private AppService appService; + @Autowired + private HaClusterService haClusterService; + @Autowired private LogicalClusterMetadataManager logicClusterMetadataManager; diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/TopicManagerServiceImpl.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/TopicManagerServiceImpl.java index a30599f8..bc4112d1 100644 --- a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/TopicManagerServiceImpl.java +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/TopicManagerServiceImpl.java @@ -4,38 +4,41 @@ import com.xiaojukeji.kafka.manager.common.bizenum.KafkaClientEnum; import com.xiaojukeji.kafka.manager.common.bizenum.ModuleEnum; import com.xiaojukeji.kafka.manager.common.bizenum.OperateEnum; import com.xiaojukeji.kafka.manager.common.bizenum.TopicAuthorityEnum; +import com.xiaojukeji.kafka.manager.common.bizenum.ha.HaRelationTypeEnum; import com.xiaojukeji.kafka.manager.common.constant.KafkaConstant; import com.xiaojukeji.kafka.manager.common.constant.KafkaMetricsCollections; import com.xiaojukeji.kafka.manager.common.constant.TopicCreationConstant; import com.xiaojukeji.kafka.manager.common.entity.Result; import com.xiaojukeji.kafka.manager.common.entity.ResultStatus; +import com.xiaojukeji.kafka.manager.common.entity.TopicOperationResult; import com.xiaojukeji.kafka.manager.common.entity.ao.RdTopicBasic; import com.xiaojukeji.kafka.manager.common.entity.ao.topic.MineTopicSummary; import com.xiaojukeji.kafka.manager.common.entity.ao.topic.TopicAppData; import com.xiaojukeji.kafka.manager.common.entity.ao.topic.TopicBusinessInfo; import com.xiaojukeji.kafka.manager.common.entity.ao.topic.TopicDTO; +import com.xiaojukeji.kafka.manager.common.entity.dto.op.topic.TopicExpansionDTO; +import com.xiaojukeji.kafka.manager.common.entity.dto.op.topic.TopicModificationDTO; import com.xiaojukeji.kafka.manager.common.entity.metrics.TopicMetrics; +import com.xiaojukeji.kafka.manager.common.entity.metrics.TopicThrottledMetrics; +import com.xiaojukeji.kafka.manager.common.entity.pojo.*; import com.xiaojukeji.kafka.manager.common.entity.pojo.gateway.AppDO; import com.xiaojukeji.kafka.manager.common.entity.pojo.gateway.AuthorityDO; -import com.xiaojukeji.kafka.manager.common.utils.DateUtils; -import com.xiaojukeji.kafka.manager.common.utils.JsonUtils; -import com.xiaojukeji.kafka.manager.common.utils.NumberUtils; -import com.xiaojukeji.kafka.manager.common.utils.SpringTool; -import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils; +import com.xiaojukeji.kafka.manager.common.utils.*; import com.xiaojukeji.kafka.manager.common.zookeeper.znode.brokers.TopicMetadata; import com.xiaojukeji.kafka.manager.common.zookeeper.znode.config.TopicQuotaData; import 
com.xiaojukeji.kafka.manager.dao.TopicDao; import com.xiaojukeji.kafka.manager.dao.TopicExpiredDao; import com.xiaojukeji.kafka.manager.dao.TopicStatisticsDao; -import com.xiaojukeji.kafka.manager.common.entity.metrics.TopicThrottledMetrics; -import com.xiaojukeji.kafka.manager.common.entity.pojo.*; +import com.xiaojukeji.kafka.manager.service.biz.ha.HaASRelationManager; import com.xiaojukeji.kafka.manager.service.cache.KafkaMetricsCache; import com.xiaojukeji.kafka.manager.service.cache.LogicalClusterMetadataManager; import com.xiaojukeji.kafka.manager.service.cache.PhysicalClusterMetadataManager; import com.xiaojukeji.kafka.manager.service.service.*; import com.xiaojukeji.kafka.manager.service.service.gateway.AppService; import com.xiaojukeji.kafka.manager.service.service.gateway.AuthorityService; +import com.xiaojukeji.kafka.manager.service.service.ha.HaTopicService; import com.xiaojukeji.kafka.manager.service.utils.KafkaZookeeperUtils; +import com.xiaojukeji.kafka.manager.service.utils.TopicCommands; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.springframework.beans.factory.annotation.Autowired; @@ -87,6 +90,15 @@ public class TopicManagerServiceImpl implements TopicManagerService { @Autowired private OperateRecordService operateRecordService; + @Autowired + private HaTopicService haTopicService; + + @Autowired + private AdminService adminService; + + @Autowired + private HaASRelationManager haASRelationManager; + @Override public List listAll() { try { @@ -188,6 +200,7 @@ public class TopicManagerServiceImpl implements TopicManagerService { Map>> appMap = authorityService.getAllAuthority(); // 增加权限信息和App信息 List summaryList = new ArrayList<>(); + Map> clusterStandbyTopicMap = haTopicService.getClusterStandbyTopicMap(); for (AppDO appDO : appDOList) { // 查权限 for (Map subMap : appMap.getOrDefault(appDO.getAppId(), Collections.emptyMap()).values()) { @@ -196,6 +209,11 @@ public class TopicManagerServiceImpl implements TopicManagerService { || TopicAuthorityEnum.DENY.getCode().equals(authorityDO.getAccess())) { continue; } + //过滤备topic + List standbyTopics = clusterStandbyTopicMap.get(authorityDO.getClusterId()); + if (standbyTopics != null && standbyTopics.contains(authorityDO.getTopicName())){ + continue; + } MineTopicSummary mineTopicSummary = convert2MineTopicSummary( appDO, @@ -224,6 +242,7 @@ public class TopicManagerServiceImpl implements TopicManagerService { TopicDO topicDO = topicDao.getByTopicName(mineTopicSummary.getPhysicalClusterId(), mineTopicSummary.getTopicName()); mineTopicSummary.setDescription(topicDO.getDescription()); } + return summaryList; } @@ -302,8 +321,9 @@ public class TopicManagerServiceImpl implements TopicManagerService { } List dtoList = new ArrayList<>(); + Map> clusterStandbyTopicMap = haTopicService.getClusterStandbyTopicMap(); for (ClusterDO clusterDO: clusterDOList) { - dtoList.addAll(getTopics(clusterDO, appMap, topicMap.getOrDefault(clusterDO.getId(), new HashMap<>()))); + dtoList.addAll(getTopics(clusterDO, appMap, topicMap.getOrDefault(clusterDO.getId(), new HashMap<>()),clusterStandbyTopicMap.get(clusterDO.getId()))); } return dtoList; } @@ -311,13 +331,18 @@ public class TopicManagerServiceImpl implements TopicManagerService { private List getTopics(ClusterDO clusterDO, Map appMap, - Map topicMap) { + Map topicMap, + List standbyTopicNames) { List dtoList = new ArrayList<>(); + for (String topicName: PhysicalClusterMetadataManager.getTopicNameList(clusterDO.getId())) { if 
(topicName.equals(KafkaConstant.COORDINATOR_TOPIC_NAME) || topicName.equals(KafkaConstant.TRANSACTION_TOPIC_NAME)) { continue; } - + //过滤备topic + if (standbyTopicNames != null && standbyTopicNames.contains(topicName)){ + continue; + } LogicalClusterDO logicalClusterDO = logicalClusterMetadataManager.getTopicLogicalCluster( clusterDO.getId(), topicName @@ -590,12 +615,12 @@ public class TopicManagerServiceImpl implements TopicManagerService { TopicDO topicDO = getByTopicName(physicalClusterId, topicName); if (ValidateUtils.isNull(topicDO)) { - return new Result<>(convert2RdTopicBasic(clusterDO, topicName, null, null, regionNameList, properties)); + return new Result<>(convert2RdTopicBasic(clusterDO, topicName, null, null, regionNameList, properties, HaRelationTypeEnum.UNKNOWN.getCode())); } AppDO appDO = appService.getByAppId(topicDO.getAppId()); - - return new Result<>(convert2RdTopicBasic(clusterDO, topicName, topicDO, appDO, regionNameList, properties)); + Integer haRelation = haASRelationManager.getRelation(physicalClusterId, topicName); + return new Result<>(convert2RdTopicBasic(clusterDO, topicName, topicDO, appDO, regionNameList, properties, haRelation)); } @Override @@ -656,12 +681,56 @@ public class TopicManagerServiceImpl implements TopicManagerService { return ResultStatus.MYSQL_ERROR; } + @Override + public Result modifyTopic(TopicModificationDTO dto) { + ClusterDO clusterDO = clusterService.getById(dto.getClusterId()); + if (ValidateUtils.isNull(clusterDO)) { + return Result.buildFrom(ResultStatus.CLUSTER_NOT_EXIST); + } + + // 获取属性 + Properties properties = dto.getProperties(); + if (ValidateUtils.isNull(properties)) { + properties = new Properties(); + } + properties.put(KafkaConstant.RETENTION_MS_KEY, String.valueOf(dto.getRetentionTime())); + + // 操作修改 + String operator = SpringTool.getUserName(); + ResultStatus rs = TopicCommands.modifyTopicConfig(clusterDO, dto.getTopicName(), properties); + if (!ResultStatus.SUCCESS.equals(rs)) { + return Result.buildFrom(rs); + } + modifyTopicByOp(dto.getClusterId(), dto.getTopicName(), dto.getAppId(), dto.getDescription(), operator); + return Result.buildSuc(); + } + + @Override + public TopicOperationResult expandTopic(TopicExpansionDTO dto) { + ClusterDO clusterDO = clusterService.getById(dto.getClusterId()); + if (ValidateUtils.isNull(clusterDO)) { + return TopicOperationResult.buildFrom(dto.getClusterId(), dto.getTopicName(), ResultStatus.CLUSTER_NOT_EXIST); + } + + // 参数检查合法, 开始对Topic进行扩分区 + ResultStatus statusEnum = adminService.expandPartitions( + clusterDO, + dto.getTopicName(), + dto.getPartitionNum(), + dto.getRegionId(), + dto.getBrokerIdList(), + SpringTool.getUserName() + ); + return TopicOperationResult.buildFrom(dto.getClusterId(), dto.getTopicName(), statusEnum); + } + private RdTopicBasic convert2RdTopicBasic(ClusterDO clusterDO, String topicName, TopicDO topicDO, AppDO appDO, List regionNameList, - Properties properties) { + Properties properties, + Integer haRelation) { RdTopicBasic rdTopicBasic = new RdTopicBasic(); rdTopicBasic.setClusterId(clusterDO.getId()); rdTopicBasic.setClusterName(clusterDO.getClusterName()); @@ -676,6 +745,7 @@ public class TopicManagerServiceImpl implements TopicManagerService { rdTopicBasic.setRegionNameList(regionNameList); rdTopicBasic.setProperties(properties); rdTopicBasic.setRetentionTime(KafkaZookeeperUtils.getTopicRetentionTime(properties)); + rdTopicBasic.setHaRelation(haRelation); return rdTopicBasic; } } diff --git 
a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/TopicServiceImpl.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/TopicServiceImpl.java index 62d1f4cb..f83c0405 100644 --- a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/TopicServiceImpl.java +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/TopicServiceImpl.java @@ -1,18 +1,19 @@ package com.xiaojukeji.kafka.manager.service.service.impl; -import com.xiaojukeji.kafka.manager.common.bizenum.TopicOffsetChangedEnum; -import com.xiaojukeji.kafka.manager.common.entity.Result; -import com.xiaojukeji.kafka.manager.common.entity.ResultStatus; -import com.xiaojukeji.kafka.manager.common.entity.pojo.gateway.AppDO; import com.xiaojukeji.kafka.manager.common.bizenum.OffsetPosEnum; +import com.xiaojukeji.kafka.manager.common.bizenum.TopicOffsetChangedEnum; import com.xiaojukeji.kafka.manager.common.constant.Constant; import com.xiaojukeji.kafka.manager.common.constant.KafkaMetricsCollections; import com.xiaojukeji.kafka.manager.common.constant.TopicSampleConstant; +import com.xiaojukeji.kafka.manager.common.entity.Result; +import com.xiaojukeji.kafka.manager.common.entity.ResultStatus; import com.xiaojukeji.kafka.manager.common.entity.ao.PartitionAttributeDTO; import com.xiaojukeji.kafka.manager.common.entity.ao.PartitionOffsetDTO; import com.xiaojukeji.kafka.manager.common.entity.ao.topic.*; import com.xiaojukeji.kafka.manager.common.entity.dto.normal.TopicDataSampleDTO; import com.xiaojukeji.kafka.manager.common.entity.metrics.TopicMetrics; +import com.xiaojukeji.kafka.manager.common.entity.pojo.*; +import com.xiaojukeji.kafka.manager.common.entity.pojo.gateway.AppDO; import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils; import com.xiaojukeji.kafka.manager.common.utils.jmx.JmxConstant; import com.xiaojukeji.kafka.manager.common.zookeeper.znode.brokers.BrokerMetadata; @@ -22,13 +23,14 @@ import com.xiaojukeji.kafka.manager.common.zookeeper.znode.brokers.TopicMetadata import com.xiaojukeji.kafka.manager.dao.TopicAppMetricsDao; import com.xiaojukeji.kafka.manager.dao.TopicMetricsDao; import com.xiaojukeji.kafka.manager.dao.TopicRequestMetricsDao; -import com.xiaojukeji.kafka.manager.common.entity.pojo.*; +import com.xiaojukeji.kafka.manager.service.biz.ha.HaASRelationManager; import com.xiaojukeji.kafka.manager.service.cache.KafkaClientPool; import com.xiaojukeji.kafka.manager.service.cache.KafkaMetricsCache; import com.xiaojukeji.kafka.manager.service.cache.LogicalClusterMetadataManager; import com.xiaojukeji.kafka.manager.service.cache.PhysicalClusterMetadataManager; import com.xiaojukeji.kafka.manager.service.service.*; import com.xiaojukeji.kafka.manager.service.service.gateway.AppService; +import com.xiaojukeji.kafka.manager.service.service.ha.HaTopicService; import com.xiaojukeji.kafka.manager.service.strategy.AbstractHealthScoreStrategy; import com.xiaojukeji.kafka.manager.service.utils.KafkaZookeeperUtils; import com.xiaojukeji.kafka.manager.service.utils.MetricsConvertUtils; @@ -90,6 +92,12 @@ public class TopicServiceImpl implements TopicService { @Autowired private KafkaClientPool kafkaClientPool; + @Autowired + private HaTopicService haTopicService; + + @Autowired + private HaASRelationManager haASRelationManager; + @Override public List getTopicMetricsFromDB(Long clusterId, String topicName, Date startTime, Date endTime) { try { @@ -244,6 +252,9 @@ public class TopicServiceImpl 
implements TopicService { basicDTO.setTopicCodeC(jmxService.getTopicCodeCValue(clusterId, topicName)); basicDTO.setScore(healthScoreStrategy.calTopicHealthScore(clusterId, topicName)); + + basicDTO.setHaRelation(haASRelationManager.getRelation(clusterId, topicName)); + return basicDTO; } @@ -325,6 +336,11 @@ public class TopicServiceImpl implements TopicService { return jmxService.getTopicMetrics(clusterId, topicName, metricsCode, byAdd); } + @Override + public Map getPartitionOffset(Long clusterPhyId, String topicName, OffsetPosEnum offsetPosEnum) { + return this.getPartitionOffset(clusterService.getById(clusterPhyId), topicName, offsetPosEnum); + } + @Override public Map getPartitionOffset(ClusterDO clusterDO, String topicName, @@ -403,6 +419,7 @@ public class TopicServiceImpl implements TopicService { appDOMap.put(appDO.getAppId(), appDO); } + Map haRelationMap = haTopicService.getRelation(clusterId); List dtoList = new ArrayList<>(); for (String topicName : topicNameList) { TopicMetadata topicMetadata = PhysicalClusterMetadataManager.getTopicMetadata(clusterId, topicName); @@ -417,7 +434,8 @@ public class TopicServiceImpl implements TopicService { logicalClusterMetadataManager.getTopicLogicalCluster(clusterId, topicName), topicMetadata, topicDO, - appDO + appDO, + haRelationMap.get(topicName) ); dtoList.add(overview); } @@ -429,13 +447,15 @@ public class TopicServiceImpl implements TopicService { LogicalClusterDO logicalClusterDO, TopicMetadata topicMetadata, TopicDO topicDO, - AppDO appDO) { + AppDO appDO, + Integer haRelation) { TopicOverview overview = new TopicOverview(); overview.setClusterId(physicalClusterId); overview.setTopicName(topicMetadata.getTopic()); overview.setPartitionNum(topicMetadata.getPartitionNum()); overview.setReplicaNum(topicMetadata.getReplicaNum()); overview.setUpdateTime(topicMetadata.getModifyTime()); + overview.setHaRelation(haRelation); overview.setRetentionTime( PhysicalClusterMetadataManager.getTopicRetentionTime(physicalClusterId, topicMetadata.getTopic()) ); diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/ZookeeperServiceImpl.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/ZookeeperServiceImpl.java index c4c89513..3ca2259f 100644 --- a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/ZookeeperServiceImpl.java +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/ZookeeperServiceImpl.java @@ -17,6 +17,7 @@ import org.springframework.stereotype.Service; import java.util.ArrayList; import java.util.List; +import java.util.Properties; /** * @author zengqiao @@ -124,4 +125,43 @@ public class ZookeeperServiceImpl implements ZookeeperService { } return Result.buildFrom(ResultStatus.ZOOKEEPER_DELETE_FAILED); } + + @Override + public Result> getBrokerIds(String zookeeper) { + if (ValidateUtils.isNull(zookeeper)) { + return Result.buildFrom(ResultStatus.PARAM_ILLEGAL); + } + ZkConfigImpl zkConfig = new ZkConfigImpl(zookeeper); + if (ValidateUtils.isNull(zkConfig)) { + return Result.buildFrom(ResultStatus.ZOOKEEPER_CONNECT_FAILED); + } + + try { + if (!zkConfig.checkPathExists(ZkPathUtil.BROKER_IDS_ROOT)) { + return Result.buildSuc(new ArrayList<>()); + } + List brokerIdList = zkConfig.getChildren(ZkPathUtil.BROKER_IDS_ROOT); + if (ValidateUtils.isEmptyList(brokerIdList)) { + return Result.buildSuc(new ArrayList<>()); + } + return 
Result.buildSuc(ListUtils.string2IntList(ListUtils.strList2String(brokerIdList))); + } catch (Exception e) { + LOGGER.error("class=ZookeeperServiceImpl||method=getBrokerIds||zookeeper={}||errMsg={}", zookeeper, e.getMessage()); + } + return Result.buildFrom(ResultStatus.ZOOKEEPER_READ_FAILED); + } + + @Override + public Long getClusterIdAndNullIfFailed(String zookeeper) { + try { + ZkConfigImpl zkConfig = new ZkConfigImpl(zookeeper); + Properties props = zkConfig.get(ZkPathUtil.CLUSTER_ID_NODE, Properties.class); + + return Long.valueOf(props.getProperty("id")); + } catch (Exception e) { + LOGGER.error("class=ZookeeperServiceImpl||method=getClusterIdAndNullIfFailed||zookeeper={}||errMsg=exception", zookeeper, e); + } + + return null; + } } \ No newline at end of file diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/utils/ConfigUtils.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/utils/ConfigUtils.java index 9ec66c8b..b1945ff4 100644 --- a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/utils/ConfigUtils.java +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/utils/ConfigUtils.java @@ -20,4 +20,7 @@ public class ConfigUtils { @Value(value = "${spring.profiles.active:dev}") private String kafkaManagerEnv; + + @Value(value = "${d-kafka.gateway-zk:}") + private String dKafkaGatewayZK; } diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/utils/HaClusterCommands.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/utils/HaClusterCommands.java new file mode 100644 index 00000000..7eda14aa --- /dev/null +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/utils/HaClusterCommands.java @@ -0,0 +1,112 @@ +package com.xiaojukeji.kafka.manager.service.utils; + +import com.xiaojukeji.kafka.manager.common.constant.Constant; +import com.xiaojukeji.kafka.manager.common.entity.ResultStatus; +import kafka.admin.AdminUtils; +import kafka.admin.AdminUtils$; +import kafka.utils.ZkUtils; +import org.apache.kafka.common.security.JaasUtils; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.util.Properties; + + +/** + * @author fengqiongfeng + * @date 21/4/11 + */ +public class HaClusterCommands { + private static final Logger LOGGER = LoggerFactory.getLogger(HaClusterCommands.class); + + private static final String HA_CLUSTERS = "ha-clusters"; + + /** + * 修改HA集群配置 + */ + public static ResultStatus modifyHaClusterConfig(String zookeeper, Long clusterPhyId, Properties modifiedProps) { + ZkUtils zkUtils = null; + try { + zkUtils = ZkUtils.apply( + zookeeper, + Constant.DEFAULT_SESSION_TIMEOUT_UNIT_MS, + Constant.DEFAULT_SESSION_TIMEOUT_UNIT_MS, + JaasUtils.isZkSecurityEnabled() + ); + + // 获取当前配置 + Properties props = AdminUtils.fetchEntityConfig(zkUtils, HA_CLUSTERS, clusterPhyId.toString()); + + // 补充变更的配置 + props.putAll(modifiedProps); + + AdminUtils$.MODULE$.kafka$admin$AdminUtils$$changeEntityConfig(zkUtils, HA_CLUSTERS, clusterPhyId.toString(), props); + + } catch (Exception e) { + LOGGER.error("method=modifyHaClusterConfig||zookeeper={}||clusterPhyId={}||modifiedProps={}||errMsg=exception", zookeeper, clusterPhyId, modifiedProps, e); + + return ResultStatus.ZOOKEEPER_OPERATE_FAILED; + } finally { + if (null != zkUtils) { + zkUtils.close(); + } + } + return ResultStatus.SUCCESS; + } + + /** + * 获取集群高可用配置 + */ + public static Properties fetchHaClusterConfig(String zookeeper, Long clusterPhyId) { + 
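+        // Reads the per-cluster HA properties stored under the custom "ha-clusters" config entity in ZK;
+        // any failure is logged and surfaces as a null return, so callers must null-check the result.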
ZkUtils zkUtils = null; + try { + zkUtils = ZkUtils.apply( + zookeeper, + Constant.DEFAULT_SESSION_TIMEOUT_UNIT_MS, + Constant.DEFAULT_SESSION_TIMEOUT_UNIT_MS, + JaasUtils.isZkSecurityEnabled() + ); + + // 获取配置 + return AdminUtils.fetchEntityConfig(zkUtils, HA_CLUSTERS, clusterPhyId.toString()); + }catch (Exception e){ + LOGGER.error("method=fetchHaClusterConfig||zookeeper={}||clusterPhyId={}||errMsg=exception", zookeeper, clusterPhyId, e); + + return null; + } finally { + if (null != zkUtils) { + zkUtils.close(); + } + } + } + + /** + * 删除 高可用集群的动态配置 + */ + public static ResultStatus coverHaClusterConfig(String zookeeper, Long clusterPhyId, Properties properties){ + ZkUtils zkUtils = null; + try { + zkUtils = ZkUtils.apply( + zookeeper, + Constant.DEFAULT_SESSION_TIMEOUT_UNIT_MS, + Constant.DEFAULT_SESSION_TIMEOUT_UNIT_MS, + JaasUtils.isZkSecurityEnabled() + ); + + AdminUtils$.MODULE$.kafka$admin$AdminUtils$$changeEntityConfig(zkUtils, HA_CLUSTERS, clusterPhyId.toString(), properties); + + return ResultStatus.SUCCESS; + }catch (Exception e){ + LOGGER.error("method=deleteHaClusterConfig||zookeeper={}||clusterPhyId={}||delProps={}||errMsg=exception", zookeeper, clusterPhyId, properties, e); + + return ResultStatus.FAIL; + } finally { + if (null != zkUtils) { + zkUtils.close(); + } + } + } + + private HaClusterCommands() { + } +} diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/utils/HaKafkaUserCommands.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/utils/HaKafkaUserCommands.java new file mode 100644 index 00000000..eeb43d87 --- /dev/null +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/utils/HaKafkaUserCommands.java @@ -0,0 +1,93 @@ +package com.xiaojukeji.kafka.manager.service.utils; + +import com.xiaojukeji.kafka.manager.common.constant.Constant; +import com.xiaojukeji.kafka.manager.common.utils.ListUtils; +import kafka.admin.AdminUtils; +import kafka.admin.AdminUtils$; +import kafka.server.ConfigType; +import kafka.utils.ZkUtils; +import org.apache.kafka.common.security.JaasUtils; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.util.List; +import java.util.Properties; + + +/** + * @author fengqiongfeng + * @date 21/4/11 + */ +public class HaKafkaUserCommands { + private static final Logger LOGGER = LoggerFactory.getLogger(HaKafkaUserCommands.class); + + /** + * 修改User配置 + */ + public static boolean modifyHaUserConfig(String zookeeper, String kafkaUser, Properties modifiedProps) { + ZkUtils zkUtils = null; + try { + zkUtils = ZkUtils.apply( + zookeeper, + Constant.DEFAULT_SESSION_TIMEOUT_UNIT_MS, + Constant.DEFAULT_SESSION_TIMEOUT_UNIT_MS, + JaasUtils.isZkSecurityEnabled() + ); + // 获取当前配置 + Properties props = AdminUtils.fetchEntityConfig(zkUtils, ConfigType.User(), kafkaUser); + + // 补充变更的配置 + props.putAll(modifiedProps); + + // 修改配置, 这里不使用changeUserOrUserClientIdConfig方法的原因是changeUserOrUserClientIdConfig这个方法会进行参数检查 + AdminUtils$.MODULE$.kafka$admin$AdminUtils$$changeEntityConfig(zkUtils, ConfigType.User(), kafkaUser, props); + } catch (Exception e) { + LOGGER.error("method=changeHaUserConfig||zookeeper={}||kafkaUser={}||modifiedProps={}||errMsg=exception", zookeeper, kafkaUser, modifiedProps, e); + return false; + } finally { + if (null != zkUtils) { + zkUtils.close(); + } + } + return true; + } + + /** + * 删除 高可用集群的动态配置 + */ + public static boolean deleteHaUserConfig(String zookeeper, String kafkaUser, List needDeleteConfigNameList){ + ZkUtils zkUtils = null; + try { + 
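+            // Key deletion here is expressed as a full rewrite: fetch the user's current entity config,
+            // drop the requested keys locally, then write the remaining properties back via changeEntityConfig.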
zkUtils = ZkUtils.apply( + zookeeper, + Constant.DEFAULT_SESSION_TIMEOUT_UNIT_MS, + Constant.DEFAULT_SESSION_TIMEOUT_UNIT_MS, + JaasUtils.isZkSecurityEnabled() + ); + + Properties presentProps = AdminUtils.fetchEntityConfig(zkUtils, ConfigType.User(), kafkaUser); + + //删除需要删除的的配置 + for (String configName : needDeleteConfigNameList) { + presentProps.remove(configName); + } + + // 修改配置, 这里不使用changeUserOrUserClientIdConfig方法的原因是changeUserOrUserClientIdConfig这个方法会进行参数检查 + AdminUtils$.MODULE$.kafka$admin$AdminUtils$$changeEntityConfig(zkUtils, ConfigType.User(), kafkaUser, presentProps); + + return true; + }catch (Exception e){ + LOGGER.error("method=deleteHaUserConfig||zookeeper={}||kafkaUser={}||delProps={}||errMsg=exception", zookeeper, kafkaUser, ListUtils.strList2String(needDeleteConfigNameList), e); + + } finally { + if (null != zkUtils) { + zkUtils.close(); + } + } + + return false; + } + + private HaKafkaUserCommands() { + } +} diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/utils/HaTopicCommands.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/utils/HaTopicCommands.java new file mode 100644 index 00000000..19a467eb --- /dev/null +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/utils/HaTopicCommands.java @@ -0,0 +1,136 @@ +package com.xiaojukeji.kafka.manager.service.utils; + +import com.xiaojukeji.kafka.manager.common.constant.Constant; +import com.xiaojukeji.kafka.manager.common.entity.ResultStatus; +import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO; +import com.xiaojukeji.kafka.manager.common.utils.ListUtils; +import kafka.admin.AdminOperationException; +import kafka.admin.AdminUtils; +import kafka.admin.AdminUtils$; +import kafka.utils.ZkUtils; +import org.apache.kafka.common.errors.*; +import org.apache.kafka.common.security.JaasUtils; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import scala.collection.JavaConversions; + +import java.util.*; + +/** + * HA-Topic Commands + */ +public class HaTopicCommands { + private static final Logger LOGGER = LoggerFactory.getLogger(HaTopicCommands.class); + + private static final String HA_TOPICS = "ha-topics"; + + /** + * 修改HA配置 + */ + public static ResultStatus modifyHaTopicConfig(ClusterDO clusterDO, String topicName, Properties props) { + ZkUtils zkUtils = null; + try { + zkUtils = ZkUtils.apply( + clusterDO.getZookeeper(), + Constant.DEFAULT_SESSION_TIMEOUT_UNIT_MS, + Constant.DEFAULT_SESSION_TIMEOUT_UNIT_MS, + JaasUtils.isZkSecurityEnabled() + ); + + AdminUtils$.MODULE$.kafka$admin$AdminUtils$$changeEntityConfig(zkUtils, HA_TOPICS, topicName, props); + } catch (AdminOperationException aoe) { + LOGGER.error("method=modifyHaTopicConfig||clusterPhyId={}||topicName={}||props={}||errMsg=exception", clusterDO.getId(), topicName, props, aoe); + return ResultStatus.TOPIC_OPERATION_UNKNOWN_TOPIC_PARTITION; + } catch (InvalidConfigurationException ice) { + LOGGER.error("method=modifyHaTopicConfig||clusterPhyId={}||topicName={}||props={}||errMsg=exception", clusterDO.getId(), topicName, props, ice); + return ResultStatus.TOPIC_OPERATION_TOPIC_CONFIG_ILLEGAL; + } catch (Exception e) { + LOGGER.error("method=modifyHaTopicConfig||clusterPhyId={}||topicName={}||props={}||errMsg=exception", clusterDO.getId(), topicName, props, e); + return ResultStatus.TOPIC_OPERATION_UNKNOWN_ERROR; + } finally { + if (zkUtils != null) { + zkUtils.close(); + } + } + + return ResultStatus.SUCCESS; + } + + /** + * 删除指定HA配置 + */ + public static 
ResultStatus deleteHaTopicConfig(ClusterDO clusterDO, String topicName, List neeDeleteConfigNameList){ + ZkUtils zkUtils = null; + try { + zkUtils = ZkUtils.apply( + clusterDO.getZookeeper(), + Constant.DEFAULT_SESSION_TIMEOUT_UNIT_MS, + Constant.DEFAULT_SESSION_TIMEOUT_UNIT_MS, + JaasUtils.isZkSecurityEnabled() + ); + + // 当前配置 + Properties presentProps = AdminUtils.fetchEntityConfig(zkUtils, HA_TOPICS, topicName); + + //删除需要删除的的配置 + for (String configName : neeDeleteConfigNameList) { + presentProps.remove(configName); + } + + AdminUtils$.MODULE$.kafka$admin$AdminUtils$$changeEntityConfig(zkUtils, HA_TOPICS, topicName, presentProps); + } catch (Exception e){ + LOGGER.error("method=deleteHaTopicConfig||clusterPhyId={}||topicName={}||delProps={}||errMsg=exception", clusterDO.getId(), topicName, ListUtils.strList2String(neeDeleteConfigNameList), e); + return ResultStatus.FAIL; + } finally { + if (null != zkUtils) { + zkUtils.close(); + } + } + return ResultStatus.SUCCESS; + } + + public static Properties fetchHaTopicConfig(ClusterDO clusterDO, String topicName){ + ZkUtils zkUtils = null; + try { + zkUtils = ZkUtils.apply( + clusterDO.getZookeeper(), + Constant.DEFAULT_SESSION_TIMEOUT_UNIT_MS, + Constant.DEFAULT_SESSION_TIMEOUT_UNIT_MS, + JaasUtils.isZkSecurityEnabled() + ); + + return AdminUtils.fetchEntityConfig(zkUtils, HA_TOPICS, topicName); + } catch (Exception e){ + LOGGER.error("method=fetchHaTopicConfig||clusterPhyId={}||topicName={}||errMsg=exception", clusterDO.getId(), topicName, e); + return null; + } finally { + if (null != zkUtils) { + zkUtils.close(); + } + } + } + + public static Map fetchAllHaTopicConfig(ClusterDO clusterDO) { + ZkUtils zkUtils = null; + try { + zkUtils = ZkUtils.apply( + clusterDO.getZookeeper(), + Constant.DEFAULT_SESSION_TIMEOUT_UNIT_MS, + Constant.DEFAULT_SESSION_TIMEOUT_UNIT_MS, + JaasUtils.isZkSecurityEnabled() + ); + + return JavaConversions.asJavaMap(AdminUtils.fetchAllEntityConfigs(zkUtils, HA_TOPICS)); + } catch (Exception e){ + LOGGER.error("method=fetchAllHaTopicConfig||clusterPhyId={}||errMsg=exception", clusterDO.getId(), e); + return null; + } finally { + if (null != zkUtils) { + zkUtils.close(); + } + } + } + + private HaTopicCommands() { + } +} \ No newline at end of file diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/utils/TopicCommands.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/utils/TopicCommands.java index 6995eb97..c8d2fc88 100644 --- a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/utils/TopicCommands.java +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/utils/TopicCommands.java @@ -8,6 +8,7 @@ import kafka.admin.AdminOperationException; import kafka.admin.AdminUtils; import kafka.admin.BrokerMetadata; import kafka.common.TopicAndPartition; +import kafka.server.ConfigType; import kafka.utils.ZkUtils; import org.I0Itec.zkclient.exception.ZkNodeExistsException; import org.apache.kafka.common.errors.*; @@ -27,6 +28,8 @@ import java.util.*; public class TopicCommands { private static final Logger LOGGER = LoggerFactory.getLogger(TopicCommands.class); + private TopicCommands() { + } public static ResultStatus createTopic(ClusterDO clusterDO, String topicName, @@ -51,7 +54,7 @@ public class TopicCommands { replicaNum, randomFixedStartIndex(), -1 - ); + ); // 写ZK AdminUtils.createOrUpdateTopicPartitionAssignmentPathInZK( @@ -129,6 +132,11 @@ public class TopicCommands { Constant.DEFAULT_SESSION_TIMEOUT_UNIT_MS, 
JaasUtils.isZkSecurityEnabled() ); + + if(!zkUtils.pathExists(zkUtils.getTopicPath(topicName))){ + return ResultStatus.TOPIC_NOT_EXIST; + } + AdminUtils.changeTopicConfig(zkUtils, topicName, config); } catch (AdminOperationException e) { LOGGER.error("class=TopicCommands||method=modifyTopicConfig||errMsg={}||clusterDO={}||topicName={}||config={}", e.getMessage(), clusterDO, topicName,config, e); @@ -209,6 +217,31 @@ public class TopicCommands { return ResultStatus.SUCCESS; } + /** + * 获取Topic的动态配置 + */ + public static Properties fetchTopicConfig(ClusterDO clusterDO, String topicName){ + ZkUtils zkUtils = null; + try { + zkUtils = ZkUtils.apply( + clusterDO.getZookeeper(), + Constant.DEFAULT_SESSION_TIMEOUT_UNIT_MS, + Constant.DEFAULT_SESSION_TIMEOUT_UNIT_MS, + JaasUtils.isZkSecurityEnabled() + ); + + return AdminUtils.fetchEntityConfig(zkUtils, ConfigType.Topic(), topicName); + } catch (Exception e){ + LOGGER.error("get topic config failed, zk:{},topic:{} .err:{}", clusterDO.getZookeeper(), topicName, e); + } finally { + if (null != zkUtils) { + zkUtils.close(); + } + } + + return null; + } + private static Seq convert2BrokerMetadataSeq(List brokerIdList) { List brokerMetadataList = new ArrayList<>(); for (Integer brokerId: brokerIdList) { diff --git a/kafka-manager-core/src/test/java/com/xiaojukeji/kafka/manager/service/service/ClusterServiceTest.java b/kafka-manager-core/src/test/java/com/xiaojukeji/kafka/manager/service/service/ClusterServiceTest.java index 6210ff1a..68bc334e 100644 --- a/kafka-manager-core/src/test/java/com/xiaojukeji/kafka/manager/service/service/ClusterServiceTest.java +++ b/kafka-manager-core/src/test/java/com/xiaojukeji/kafka/manager/service/service/ClusterServiceTest.java @@ -22,12 +22,10 @@ import org.springframework.beans.factory.annotation.Value; import org.springframework.dao.DuplicateKeyException; import org.testng.Assert; import org.testng.annotations.BeforeMethod; -import org.testng.annotations.DataProvider; import org.testng.annotations.Test; import java.util.*; -import static org.mockito.Mockito.reset; import static org.mockito.Mockito.when; /** @@ -327,8 +325,8 @@ public class ClusterServiceTest extends BaseTest { @Test(description = "测试删除集群时,该集群下还有region,禁止删除") public void deleteById2OperationForbiddenTest() { when(regionService.getByClusterId(Mockito.anyLong())).thenReturn(Arrays.asList(new RegionDO())); - ResultStatus resultStatus = clusterService.deleteById(1L, "admin"); - Assert.assertEquals(resultStatus.getCode(), ResultStatus.OPERATION_FORBIDDEN.getCode()); + Result result = clusterService.deleteById(1L, "admin"); + Assert.assertEquals(result.successful(), ResultStatus.OPERATION_FORBIDDEN.getCode()); } @Test(description = "测试删除集群成功") @@ -337,18 +335,18 @@ public class ClusterServiceTest extends BaseTest { when(regionService.getByClusterId(Mockito.anyLong())).thenReturn(Collections.emptyList()); Mockito.when(operateRecordService.insert(Mockito.any(), Mockito.any(), Mockito.any(), Mockito.any(), Mockito.any())).thenReturn(1); Mockito.when(clusterDao.deleteById(Mockito.any())).thenReturn(1); - ResultStatus resultStatus = clusterService.deleteById(clusterDO.getId(), "admin"); - Assert.assertEquals(resultStatus.getCode(), ResultStatus.SUCCESS.getCode()); + Result result = clusterService.deleteById(clusterDO.getId(), "admin"); + Assert.assertEquals(result.successful(), ResultStatus.SUCCESS.getCode()); } @Test(description = "测试MYSQL_ERROR") public void deleteById2MysqlErrorTest() { 
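+        // deleteById now returns Result rather than ResultStatus; the assertions below should compare
+        // result.getCode() with the expected status code (or assert on the boolean result.successful()),
+        // since comparing a boolean with an integer status code can never pass.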
when(regionService.getByClusterId(Mockito.anyLong())).thenReturn(Collections.emptyList()); - ResultStatus resultStatus = clusterService.deleteById(100L, "admin"); + Result result = clusterService.deleteById(100L, "admin"); Mockito.when(operateRecordService.insert(Mockito.any(), Mockito.any(), Mockito.any(), Mockito.any(), Mockito.any())).thenReturn(1); Mockito.when(clusterDao.deleteById(Mockito.any())).thenReturn(-1); - Assert.assertEquals(resultStatus.getCode(), ResultStatus.MYSQL_ERROR.getCode()); + Assert.assertEquals(result.successful(), ResultStatus.MYSQL_ERROR.getCode()); } @Test(description = "测试从zk中获取被选举的broker") diff --git a/kafka-manager-core/src/test/java/com/xiaojukeji/kafka/manager/service/service/TopicServiceTest.java b/kafka-manager-core/src/test/java/com/xiaojukeji/kafka/manager/service/service/TopicServiceTest.java index 712039fc..e9af24ef 100644 --- a/kafka-manager-core/src/test/java/com/xiaojukeji/kafka/manager/service/service/TopicServiceTest.java +++ b/kafka-manager-core/src/test/java/com/xiaojukeji/kafka/manager/service/service/TopicServiceTest.java @@ -371,7 +371,7 @@ public class TopicServiceTest extends BaseTest { private void getPartitionOffset2EmptyTest() { ClusterDO clusterDO = getClusterDO(); Map partitionOffset = topicService.getPartitionOffset( - null, null, OffsetPosEnum.BEGINNING); + clusterDO, null, OffsetPosEnum.BEGINNING); Assert.assertTrue(partitionOffset.isEmpty()); Map partitionOffset2 = topicService.getPartitionOffset( diff --git a/kafka-manager-dao/pom.xml b/kafka-manager-dao/pom.xml index 8b30c431..2ba9be40 100644 --- a/kafka-manager-dao/pom.xml +++ b/kafka-manager-dao/pom.xml @@ -33,8 +33,8 @@ - org.mybatis.spring.boot - mybatis-spring-boot-starter + com.baomidou + mybatis-plus-boot-starter mysql diff --git a/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/gateway/AuthorityDao.java b/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/gateway/AuthorityDao.java index 655218e9..89860ca2 100644 --- a/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/gateway/AuthorityDao.java +++ b/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/gateway/AuthorityDao.java @@ -25,6 +25,7 @@ public interface AuthorityDao { List getAuthority(Long clusterId, String topicName, String appId); List getAuthorityByTopic(Long clusterId, String topicName); + List getAuthorityByTopicFromCache(Long clusterId, String topicName); List getByAppId(String appId); diff --git a/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/gateway/impl/AuthorityDaoImpl.java b/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/gateway/impl/AuthorityDaoImpl.java index c7bac9e0..5b2621b0 100644 --- a/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/gateway/impl/AuthorityDaoImpl.java +++ b/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/gateway/impl/AuthorityDaoImpl.java @@ -49,6 +49,28 @@ public class AuthorityDaoImpl implements AuthorityDao { return sqlSession.selectList("AuthorityDao.getAuthorityByTopic", params); } + @Override + public List getAuthorityByTopicFromCache(Long clusterId, String topicName) { + updateAuthorityCache(); + + List doList = new ArrayList<>(); + for (Map> authMap: AUTHORITY_MAP.values()) { + Map doMap = authMap.get(clusterId); + if (doMap == null) { + continue; + } + + AuthorityDO authorityDO = doMap.get(topicName); + if (authorityDO == null) { + continue; + } + + doList.add(authorityDO); + } + + return doList; + } + @Override public List getByAppId(String 
appId) { updateAuthorityCache(); diff --git a/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/ha/HaASRelationDao.java b/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/ha/HaASRelationDao.java new file mode 100644 index 00000000..9b8e9565 --- /dev/null +++ b/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/ha/HaASRelationDao.java @@ -0,0 +1,12 @@ +package com.xiaojukeji.kafka.manager.dao.ha; + +import com.baomidou.mybatisplus.core.mapper.BaseMapper; +import com.xiaojukeji.kafka.manager.common.entity.pojo.ha.HaASRelationDO; +import org.springframework.stereotype.Repository; + +/** + * 主备关系信息 + */ +@Repository +public interface HaASRelationDao extends BaseMapper { +} diff --git a/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/ha/HaASSwitchJobDao.java b/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/ha/HaASSwitchJobDao.java new file mode 100644 index 00000000..9aa1b4c4 --- /dev/null +++ b/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/ha/HaASSwitchJobDao.java @@ -0,0 +1,17 @@ +package com.xiaojukeji.kafka.manager.dao.ha; + +import com.baomidou.mybatisplus.core.mapper.BaseMapper; +import com.xiaojukeji.kafka.manager.common.entity.pojo.ha.HaASSwitchJobDO; +import org.springframework.stereotype.Repository; + +import java.util.List; + +/** + * 主备关系切换任务 + */ +@Repository +public interface HaASSwitchJobDao extends BaseMapper { + int addAndSetId(HaASSwitchJobDO jobDO); + + List listAllLatest(); +} diff --git a/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/ha/HaASSwitchSubJobDao.java b/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/ha/HaASSwitchSubJobDao.java new file mode 100644 index 00000000..daf76846 --- /dev/null +++ b/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/ha/HaASSwitchSubJobDao.java @@ -0,0 +1,12 @@ +package com.xiaojukeji.kafka.manager.dao.ha; + +import com.baomidou.mybatisplus.core.mapper.BaseMapper; +import com.xiaojukeji.kafka.manager.common.entity.pojo.ha.HaASSwitchSubJobDO; +import org.springframework.stereotype.Repository; + +/** + * 主备关系切换子任务 + */ +@Repository +public interface HaASSwitchSubJobDao extends BaseMapper { +} diff --git a/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/ha/JobLogDao.java b/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/ha/JobLogDao.java new file mode 100644 index 00000000..8d66e506 --- /dev/null +++ b/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/ha/JobLogDao.java @@ -0,0 +1,12 @@ +package com.xiaojukeji.kafka.manager.dao.ha; + +import com.baomidou.mybatisplus.core.mapper.BaseMapper; +import com.xiaojukeji.kafka.manager.common.entity.pojo.ha.JobLogDO; +import org.springframework.stereotype.Repository; + +/** + * Job的Log, 正常来说应该与TopicDao等放在一起的,但是因为使用了mybatis-plus,因此零时放在这个地方 + */ +@Repository +public interface JobLogDao extends BaseMapper { +} diff --git a/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/impl/ClusterDaoImpl.java b/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/impl/ClusterDaoImpl.java index 0d2ea867..9ebff7c3 100644 --- a/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/impl/ClusterDaoImpl.java +++ b/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/impl/ClusterDaoImpl.java @@ -23,7 +23,11 @@ public class ClusterDaoImpl implements ClusterDao { @Override public int insert(ClusterDO clusterDO) { - return sqlSession.insert("ClusterDao.insert", clusterDO); + if 
(clusterDO.getId() != null) { + return sqlSession.insert("ClusterDao.insertWithId", clusterDO); + } else { + return sqlSession.insert("ClusterDao.insert", clusterDO); + } } @Override diff --git a/kafka-manager-dao/src/main/resources/mapper/ClusterDao.xml b/kafka-manager-dao/src/main/resources/mapper/ClusterDao.xml index 53b90293..9fbd0d71 100644 --- a/kafka-manager-dao/src/main/resources/mapper/ClusterDao.xml +++ b/kafka-manager-dao/src/main/resources/mapper/ClusterDao.xml @@ -15,6 +15,14 @@ + + INSERT INTO cluster ( + id, cluster_name, zookeeper, bootstrap_servers, security_properties, jmx_properties + ) VALUES ( + #{id}, #{clusterName}, #{zookeeper}, #{bootstrapServers}, #{securityProperties}, #{jmxProperties} + ) + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/kafka-manager-dao/src/main/resources/mapper/HaActiveStandbySwitchJobDao.xml b/kafka-manager-dao/src/main/resources/mapper/HaActiveStandbySwitchJobDao.xml new file mode 100644 index 00000000..c1128be8 --- /dev/null +++ b/kafka-manager-dao/src/main/resources/mapper/HaActiveStandbySwitchJobDao.xml @@ -0,0 +1,32 @@ + + + + + + + + + + + + + + + + INSERT INTO ks_km_physical_cluster + (active_cluster_phy_id, standby_cluster_phy_id, job_status, operator) + VALUES + (#{activeClusterPhyId}, #{standbyClusterPhyId}, #{jobStatus}, #{operator}) + + + + diff --git a/kafka-manager-dao/src/main/resources/mapper/HaActiveStandbySwitchSubJobDao.xml b/kafka-manager-dao/src/main/resources/mapper/HaActiveStandbySwitchSubJobDao.xml new file mode 100644 index 00000000..a5f60444 --- /dev/null +++ b/kafka-manager-dao/src/main/resources/mapper/HaActiveStandbySwitchSubJobDao.xml @@ -0,0 +1,19 @@ + + + + + + + + + + + + + + + + + + diff --git a/kafka-manager-dao/src/main/resources/mapper/JobLogDao.xml b/kafka-manager-dao/src/main/resources/mapper/JobLogDao.xml new file mode 100644 index 00000000..d885884b --- /dev/null +++ b/kafka-manager-dao/src/main/resources/mapper/JobLogDao.xml @@ -0,0 +1,15 @@ + + + + + + + + + + + + + + \ No newline at end of file diff --git a/kafka-manager-dao/src/main/resources/mapper/RegionDao.xml b/kafka-manager-dao/src/main/resources/mapper/RegionDao.xml index 3b6ede2c..3c20e8c2 100644 --- a/kafka-manager-dao/src/main/resources/mapper/RegionDao.xml +++ b/kafka-manager-dao/src/main/resources/mapper/RegionDao.xml @@ -16,7 +16,10 @@ - + INSERT INTO region (name, cluster_id, broker_list, status, description) VALUES diff --git a/kafka-manager-extends/kafka-manager-bpm/src/main/java/com/xiaojukeji/kafka/manager/bpm/common/handle/OrderHandleQuotaDTO.java b/kafka-manager-extends/kafka-manager-bpm/src/main/java/com/xiaojukeji/kafka/manager/bpm/common/handle/OrderHandleQuotaDTO.java index cbc4b6fb..a168e013 100644 --- a/kafka-manager-extends/kafka-manager-bpm/src/main/java/com/xiaojukeji/kafka/manager/bpm/common/handle/OrderHandleQuotaDTO.java +++ b/kafka-manager-extends/kafka-manager-bpm/src/main/java/com/xiaojukeji/kafka/manager/bpm/common/handle/OrderHandleQuotaDTO.java @@ -4,6 +4,8 @@ import com.fasterxml.jackson.annotation.JsonIgnoreProperties; import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils; import io.swagger.annotations.ApiModel; import io.swagger.annotations.ApiModelProperty; +import lombok.AllArgsConstructor; +import lombok.NoArgsConstructor; import java.util.List; @@ -13,6 +15,8 @@ import java.util.List; */ @JsonIgnoreProperties(ignoreUnknown = true) @ApiModel(description = "Quota工单审批参数") +@NoArgsConstructor +@AllArgsConstructor public class OrderHandleQuotaDTO { @ApiModelProperty(value = 
"分区数, 非必须") private Integer partitionNum; diff --git a/kafka-manager-extends/kafka-manager-bpm/src/main/java/com/xiaojukeji/kafka/manager/bpm/order/impl/ApplyAuthorityOrder.java b/kafka-manager-extends/kafka-manager-bpm/src/main/java/com/xiaojukeji/kafka/manager/bpm/order/impl/ApplyAuthorityOrder.java index 60119352..d18d1c89 100644 --- a/kafka-manager-extends/kafka-manager-bpm/src/main/java/com/xiaojukeji/kafka/manager/bpm/order/impl/ApplyAuthorityOrder.java +++ b/kafka-manager-extends/kafka-manager-bpm/src/main/java/com/xiaojukeji/kafka/manager/bpm/order/impl/ApplyAuthorityOrder.java @@ -3,24 +3,31 @@ package com.xiaojukeji.kafka.manager.bpm.order.impl; import com.alibaba.fastjson.JSONException; import com.alibaba.fastjson.JSONObject; import com.xiaojukeji.kafka.manager.account.AccountService; -import com.xiaojukeji.kafka.manager.common.entity.ao.gateway.TopicQuota; -import com.xiaojukeji.kafka.manager.common.entity.ResultStatus; -import com.xiaojukeji.kafka.manager.common.entity.ao.account.Account; import com.xiaojukeji.kafka.manager.bpm.common.entry.apply.OrderExtensionAuthorityDTO; import com.xiaojukeji.kafka.manager.bpm.common.entry.detail.AbstractOrderDetailData; import com.xiaojukeji.kafka.manager.bpm.common.entry.detail.OrderDetailApplyAuthorityDTO; import com.xiaojukeji.kafka.manager.bpm.common.handle.OrderHandleBaseDTO; -import com.xiaojukeji.kafka.manager.common.entity.pojo.gateway.AppDO; -import com.xiaojukeji.kafka.manager.common.entity.pojo.gateway.AuthorityDO; -import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils; +import com.xiaojukeji.kafka.manager.bpm.order.AbstractAuthorityOrder; +import com.xiaojukeji.kafka.manager.common.bizenum.ha.HaRelationTypeEnum; +import com.xiaojukeji.kafka.manager.common.entity.Result; +import com.xiaojukeji.kafka.manager.common.entity.ResultStatus; +import com.xiaojukeji.kafka.manager.common.entity.ao.account.Account; +import com.xiaojukeji.kafka.manager.common.entity.ao.gateway.TopicQuota; +import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO; import com.xiaojukeji.kafka.manager.common.entity.pojo.LogicalClusterDO; import com.xiaojukeji.kafka.manager.common.entity.pojo.OrderDO; import com.xiaojukeji.kafka.manager.common.entity.pojo.TopicDO; +import com.xiaojukeji.kafka.manager.common.entity.pojo.gateway.AppDO; +import com.xiaojukeji.kafka.manager.common.entity.pojo.gateway.AuthorityDO; +import com.xiaojukeji.kafka.manager.common.entity.pojo.ha.HaASRelationDO; +import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils; +import com.xiaojukeji.kafka.manager.service.biz.ha.HaASRelationManager; import com.xiaojukeji.kafka.manager.service.cache.LogicalClusterMetadataManager; -import com.xiaojukeji.kafka.manager.bpm.order.AbstractAuthorityOrder; +import com.xiaojukeji.kafka.manager.service.cache.PhysicalClusterMetadataManager; +import com.xiaojukeji.kafka.manager.service.service.TopicManagerService; import com.xiaojukeji.kafka.manager.service.service.gateway.AppService; import com.xiaojukeji.kafka.manager.service.service.gateway.AuthorityService; -import com.xiaojukeji.kafka.manager.service.service.TopicManagerService; +import com.xiaojukeji.kafka.manager.service.service.ha.HaTopicService; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.springframework.beans.factory.annotation.Autowired; @@ -52,6 +59,12 @@ public class ApplyAuthorityOrder extends AbstractAuthorityOrder { @Autowired private TopicManagerService topicManagerService; + @Autowired + private HaTopicService haTopicService; + + @Autowired + 
private HaASRelationManager haASRelationManager; + @Override public AbstractOrderDetailData getOrderExtensionDetailData(String extensions) { OrderDetailApplyAuthorityDTO orderDetailDTO = new OrderDetailApplyAuthorityDTO(); @@ -116,21 +129,40 @@ public class ApplyAuthorityOrder extends AbstractAuthorityOrder { if (ValidateUtils.isNull(physicalClusterId)) { return ResultStatus.CLUSTER_NOT_EXIST; } - TopicQuota topicQuotaDO = new TopicQuota(); - topicQuotaDO.setAppId(orderExtensionDTO.getAppId()); - topicQuotaDO.setTopicName(orderExtensionDTO.getTopicName()); - topicQuotaDO.setClusterId(physicalClusterId); - AuthorityDO authorityDO = new AuthorityDO(); - authorityDO.setAccess(orderExtensionDTO.getAccess()); - authorityDO.setAppId(orderExtensionDTO.getAppId()); - authorityDO.setTopicName(orderExtensionDTO.getTopicName()); - authorityDO.setClusterId(physicalClusterId); -// authorityDO.setApplicant(orderDO.getApplicant()); + HaASRelationDO relation = haASRelationManager.getASRelation(physicalClusterId, orderExtensionDTO.getTopicName()); - if (authorityService.addAuthorityAndQuota(authorityDO, topicQuotaDO) < 1) { - return ResultStatus.OPERATION_FAILED; + //是否高可用topic + Integer haRelation = HaRelationTypeEnum.UNKNOWN.getCode(); + if (relation != null){ + //用户侧不允许操作备topic + if (relation.getStandbyClusterPhyId().equals(orderExtensionDTO.getClusterId())){ + return ResultStatus.OPERATION_FORBIDDEN; + } + haRelation = HaRelationTypeEnum.ACTIVE.getCode(); } + + ResultStatus resultStatus = applyAuthority(physicalClusterId, + orderExtensionDTO.getTopicName(), + userName, + orderExtensionDTO.getAppId(), + orderExtensionDTO.getAccess(), + haRelation); + if (haRelation.equals(HaRelationTypeEnum.UNKNOWN.getCode()) + && ResultStatus.SUCCESS.getCode() != resultStatus.getCode()){ + return resultStatus; + } + + //给备topic添加权限 + if (relation.getActiveResName().equals(orderExtensionDTO.getTopicName())){ + return applyAuthority(relation.getStandbyClusterPhyId(), + relation.getStandbyResName(), + userName, + orderExtensionDTO.getAppId(), + orderExtensionDTO.getAccess(), + HaRelationTypeEnum.STANDBY.getCode()); + } + return ResultStatus.SUCCESS; } @@ -158,4 +190,39 @@ public class ApplyAuthorityOrder extends AbstractAuthorityOrder { } return approverList; } + + private ResultStatus applyAuthority(Long physicalClusterId, String topicName, String userName, String appId, Integer access, Integer haRelation){ + ClusterDO clusterDO = PhysicalClusterMetadataManager.getClusterFromCache(physicalClusterId); + if (clusterDO == null){ + return ResultStatus.CLUSTER_NOT_EXIST; + } + TopicQuota topicQuotaDO = new TopicQuota(); + topicQuotaDO.setAppId(appId); + topicQuotaDO.setTopicName(topicName); + topicQuotaDO.setClusterId(physicalClusterId); + + AuthorityDO authorityDO = new AuthorityDO(); + authorityDO.setAccess(access); + authorityDO.setAppId(appId); + authorityDO.setTopicName(topicName); + authorityDO.setClusterId(physicalClusterId); + + if (authorityService.addAuthorityAndQuota(authorityDO, topicQuotaDO) < 1) { + return ResultStatus.OPERATION_FAILED; + } + + Result result = new Result(); + HaASRelationDO relation = haASRelationManager.getASRelation(physicalClusterId, topicName); + if (HaRelationTypeEnum.STANDBY.getCode() == haRelation){ + result = haTopicService.activeUserHAInKafka(PhysicalClusterMetadataManager.getClusterFromCache(relation.getActiveClusterPhyId()), + PhysicalClusterMetadataManager.getClusterFromCache(relation.getStandbyClusterPhyId()), + appId, + userName); + } + if (result.failed()){ + return 
ResultStatus.ZOOKEEPER_OPERATE_FAILED; + } + return ResultStatus.SUCCESS; + } + } diff --git a/kafka-manager-extends/kafka-manager-bpm/src/main/java/com/xiaojukeji/kafka/manager/bpm/order/impl/ApplyPartitionOrder.java b/kafka-manager-extends/kafka-manager-bpm/src/main/java/com/xiaojukeji/kafka/manager/bpm/order/impl/ApplyPartitionOrder.java index ae466311..b353587e 100644 --- a/kafka-manager-extends/kafka-manager-bpm/src/main/java/com/xiaojukeji/kafka/manager/bpm/order/impl/ApplyPartitionOrder.java +++ b/kafka-manager-extends/kafka-manager-bpm/src/main/java/com/xiaojukeji/kafka/manager/bpm/order/impl/ApplyPartitionOrder.java @@ -3,6 +3,7 @@ package com.xiaojukeji.kafka.manager.bpm.order.impl; import com.alibaba.fastjson.JSONObject; import com.xiaojukeji.kafka.manager.account.AccountService; import com.xiaojukeji.kafka.manager.bpm.common.OrderTypeEnum; +import com.xiaojukeji.kafka.manager.bpm.common.entry.apply.OrderExtensionQuotaDTO; import com.xiaojukeji.kafka.manager.bpm.common.entry.apply.PartitionOrderExtensionDTO; import com.xiaojukeji.kafka.manager.bpm.common.entry.detail.AbstractOrderDetailData; import com.xiaojukeji.kafka.manager.bpm.common.entry.detail.PartitionOrderDetailData; @@ -12,16 +13,17 @@ import com.xiaojukeji.kafka.manager.bpm.order.AbstractOrder; import com.xiaojukeji.kafka.manager.common.entity.Result; import com.xiaojukeji.kafka.manager.common.entity.ResultStatus; import com.xiaojukeji.kafka.manager.common.entity.ao.account.Account; -import com.xiaojukeji.kafka.manager.bpm.common.entry.apply.OrderExtensionQuotaDTO; import com.xiaojukeji.kafka.manager.common.entity.metrics.TopicMetrics; import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO; import com.xiaojukeji.kafka.manager.common.entity.pojo.LogicalClusterDO; import com.xiaojukeji.kafka.manager.common.entity.pojo.OrderDO; import com.xiaojukeji.kafka.manager.common.entity.pojo.RegionDO; +import com.xiaojukeji.kafka.manager.common.entity.pojo.ha.HaASRelationDO; import com.xiaojukeji.kafka.manager.common.utils.DateUtils; import com.xiaojukeji.kafka.manager.common.utils.ListUtils; import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils; import com.xiaojukeji.kafka.manager.common.zookeeper.znode.brokers.TopicMetadata; +import com.xiaojukeji.kafka.manager.service.biz.ha.HaASRelationManager; import com.xiaojukeji.kafka.manager.service.cache.KafkaMetricsCache; import com.xiaojukeji.kafka.manager.service.cache.LogicalClusterMetadataManager; import com.xiaojukeji.kafka.manager.service.cache.PhysicalClusterMetadataManager; @@ -61,6 +63,9 @@ public class ApplyPartitionOrder extends AbstractOrder { @Autowired private RegionService regionService; + @Autowired + private HaASRelationManager haASRelationManager; + @Override public AbstractOrderDetailData getOrderExtensionDetailData(String extensions) { PartitionOrderDetailData detailData = new PartitionOrderDetailData(); @@ -169,28 +174,30 @@ public class ApplyPartitionOrder extends AbstractOrder { if (ValidateUtils.isNull(physicalClusterId)) { return ResultStatus.CLUSTER_NOT_EXIST; } - if (!PhysicalClusterMetadataManager.isTopicExistStrictly(physicalClusterId, extensionDTO.getTopicName())) { - return ResultStatus.TOPIC_NOT_EXIST; - } if (handleDTO.isExistNullParam()) { return ResultStatus.OPERATION_FAILED; } - ClusterDO clusterDO = clusterService.getById(physicalClusterId); - return adminService.expandPartitions( - clusterDO, - extensionDTO.getTopicName(), - handleDTO.getPartitionNum(), - handleDTO.getRegionId(), - handleDTO.getBrokerIdList(), - userName - ); - } 
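The `ApplyAuthorityOrder` change above threads HA awareness through authority grants, and the intent is easier to see in condensed form. The sketch below is illustrative only: the method name `grantWithHa` is invented, while the types and the `applyAuthority` helper are the patch's own. It also makes the null-safe ordering explicit (check the relation before dereferencing it):

```java
// Illustrative sketch: grant on the requested topic first, then mirror onto the standby.
private ResultStatus grantWithHa(Long clusterPhyId, String topicName, String userName,
                                 String appId, Integer access) {
    HaASRelationDO relation = haASRelationManager.getASRelation(clusterPhyId, topicName);
    if (relation != null && relation.getStandbyClusterPhyId().equals(clusterPhyId)) {
        // Users may not operate on a standby topic directly.
        return ResultStatus.OPERATION_FORBIDDEN;
    }

    Integer haRelation = (relation == null)
            ? HaRelationTypeEnum.UNKNOWN.getCode()
            : HaRelationTypeEnum.ACTIVE.getCode();
    ResultStatus rs = applyAuthority(clusterPhyId, topicName, userName, appId, access, haRelation);
    if (relation == null || !ResultStatus.SUCCESS.equals(rs)) {
        // Non-HA topic: done. HA topic: stop early if the active-side grant failed.
        return rs;
    }

    // Mirror the grant onto the standby topic so permissions survive an active/standby switch.
    return applyAuthority(relation.getStandbyClusterPhyId(), relation.getStandbyResName(),
            userName, appId, access, HaRelationTypeEnum.STANDBY.getCode());
}
```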
- private OrderExtensionQuotaDTO supplyExtension(OrderExtensionQuotaDTO extensionDTO, OrderHandleQuotaDTO handleDTO){ - extensionDTO.setPartitionNum(handleDTO.getPartitionNum()); - extensionDTO.setRegionId(handleDTO.getRegionId()); - extensionDTO.setBrokerIdList(handleDTO.getBrokerIdList()); - return extensionDTO; + //备topic扩分区 + HaASRelationDO relationDO = haASRelationManager.getASRelation(physicalClusterId, extensionDTO.getTopicName()); + if (relationDO != null){ + //用户侧不允许操作备topic + if (relationDO.getStandbyClusterPhyId().equals(extensionDTO.getClusterId())){ + return ResultStatus.OPERATION_FORBIDDEN; + } + ResultStatus rv = apply(relationDO.getStandbyClusterPhyId(), + relationDO.getStandbyResName(), + userName, + handleDTO.getPartitionNum(), + null, + PhysicalClusterMetadataManager.getBrokerIdList(relationDO.getStandbyClusterPhyId())); + if (ResultStatus.SUCCESS.getCode() != rv.getCode()){ + return rv; + } + } + + return apply(physicalClusterId, extensionDTO.getTopicName(), userName, + handleDTO.getPartitionNum(), handleDTO.getRegionId(), handleDTO.getBrokerIdList()); } @Override @@ -206,4 +213,29 @@ public class ApplyPartitionOrder extends AbstractOrder { return accountService.getAdminOrderHandlerFromCache(); } + private ResultStatus apply(Long physicalClusterId, String topicName, String userName, int partitionNum, Long regionId, List brokerIds){ + ClusterDO clusterDO = clusterService.getById(physicalClusterId); + if (clusterDO == null){ + return ResultStatus.CLUSTER_NOT_EXIST; + } + + if (!PhysicalClusterMetadataManager.isTopicExistStrictly(physicalClusterId, topicName)) { + return ResultStatus.TOPIC_NOT_EXIST; + } + return adminService.expandPartitions( + clusterDO, + topicName, + partitionNum, + regionId, + brokerIds, + userName + ); + } + + private OrderExtensionQuotaDTO supplyExtension(OrderExtensionQuotaDTO extensionDTO, OrderHandleQuotaDTO handleDTO){ + extensionDTO.setPartitionNum(handleDTO.getPartitionNum()); + extensionDTO.setRegionId(handleDTO.getRegionId()); + extensionDTO.setBrokerIdList(handleDTO.getBrokerIdList()); + return extensionDTO; + } } \ No newline at end of file diff --git a/kafka-manager-extends/kafka-manager-bpm/src/main/java/com/xiaojukeji/kafka/manager/bpm/order/impl/ApplyQuotaOrder.java b/kafka-manager-extends/kafka-manager-bpm/src/main/java/com/xiaojukeji/kafka/manager/bpm/order/impl/ApplyQuotaOrder.java index 7bfc9b64..84dae4eb 100644 --- a/kafka-manager-extends/kafka-manager-bpm/src/main/java/com/xiaojukeji/kafka/manager/bpm/order/impl/ApplyQuotaOrder.java +++ b/kafka-manager-extends/kafka-manager-bpm/src/main/java/com/xiaojukeji/kafka/manager/bpm/order/impl/ApplyQuotaOrder.java @@ -3,39 +3,46 @@ package com.xiaojukeji.kafka.manager.bpm.order.impl; import com.alibaba.fastjson.JSONObject; import com.xiaojukeji.kafka.manager.account.AccountService; import com.xiaojukeji.kafka.manager.bpm.common.OrderTypeEnum; -import com.xiaojukeji.kafka.manager.bpm.order.AbstractOrder; -import com.xiaojukeji.kafka.manager.common.entity.Result; -import com.xiaojukeji.kafka.manager.common.entity.ao.account.Account; -import com.xiaojukeji.kafka.manager.common.entity.ao.gateway.TopicQuota; -import com.xiaojukeji.kafka.manager.common.entity.ResultStatus; import com.xiaojukeji.kafka.manager.bpm.common.entry.apply.OrderExtensionQuotaDTO; import com.xiaojukeji.kafka.manager.bpm.common.entry.detail.AbstractOrderDetailData; import com.xiaojukeji.kafka.manager.bpm.common.entry.detail.QuotaOrderDetailData; import com.xiaojukeji.kafka.manager.bpm.common.handle.OrderHandleBaseDTO; 
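The `ApplyPartitionOrder` rework above expands the standby topic before the active one. That ordering protects a simple invariant: the standby topic should never have fewer partitions than its active counterpart, otherwise data mirrored from the extra active partitions would have no destination. A small validation sketch under that assumption (the helper name `haPartitionInvariantHolds` is hypothetical; the metadata accessors are the ones the patch already uses):

```java
// Hypothetical guard: check the HA invariant "standby partitions >= active partitions".
private boolean haPartitionInvariantHolds(HaASRelationDO relation) {
    TopicMetadata active = PhysicalClusterMetadataManager.getTopicMetadata(
            relation.getActiveClusterPhyId(), relation.getActiveResName());
    TopicMetadata standby = PhysicalClusterMetadataManager.getTopicMetadata(
            relation.getStandbyClusterPhyId(), relation.getStandbyResName());
    if (active == null || standby == null) {
        // Metadata missing on either side: treat as violated and let the caller decide.
        return false;
    }
    return standby.getPartitionNum() >= active.getPartitionNum();
}
```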
import com.xiaojukeji.kafka.manager.bpm.common.handle.OrderHandleQuotaDTO; +import com.xiaojukeji.kafka.manager.bpm.order.AbstractOrder; +import com.xiaojukeji.kafka.manager.common.entity.Result; +import com.xiaojukeji.kafka.manager.common.entity.ResultStatus; +import com.xiaojukeji.kafka.manager.common.entity.ao.account.Account; +import com.xiaojukeji.kafka.manager.common.entity.ao.gateway.TopicQuota; import com.xiaojukeji.kafka.manager.common.entity.metrics.TopicMetrics; +import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO; import com.xiaojukeji.kafka.manager.common.entity.pojo.LogicalClusterDO; +import com.xiaojukeji.kafka.manager.common.entity.pojo.OrderDO; import com.xiaojukeji.kafka.manager.common.entity.pojo.RegionDO; import com.xiaojukeji.kafka.manager.common.entity.pojo.gateway.AppDO; +import com.xiaojukeji.kafka.manager.common.entity.pojo.ha.HaASRelationDO; import com.xiaojukeji.kafka.manager.common.utils.DateUtils; import com.xiaojukeji.kafka.manager.common.utils.ListUtils; import com.xiaojukeji.kafka.manager.common.utils.NumberUtils; import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils; import com.xiaojukeji.kafka.manager.common.zookeeper.znode.brokers.TopicMetadata; import com.xiaojukeji.kafka.manager.common.zookeeper.znode.config.TopicQuotaData; -import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO; -import com.xiaojukeji.kafka.manager.common.entity.pojo.OrderDO; +import com.xiaojukeji.kafka.manager.service.biz.ha.HaASRelationManager; import com.xiaojukeji.kafka.manager.service.cache.KafkaMetricsCache; import com.xiaojukeji.kafka.manager.service.cache.LogicalClusterMetadataManager; import com.xiaojukeji.kafka.manager.service.cache.PhysicalClusterMetadataManager; -import com.xiaojukeji.kafka.manager.service.service.*; +import com.xiaojukeji.kafka.manager.service.service.AdminService; +import com.xiaojukeji.kafka.manager.service.service.ClusterService; +import com.xiaojukeji.kafka.manager.service.service.RegionService; +import com.xiaojukeji.kafka.manager.service.service.TopicManagerService; import com.xiaojukeji.kafka.manager.service.service.gateway.AppService; import com.xiaojukeji.kafka.manager.service.service.gateway.QuotaService; import com.xiaojukeji.kafka.manager.service.utils.KafkaZookeeperUtils; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Component; -import java.util.*; +import java.util.ArrayList; +import java.util.Date; +import java.util.List; import java.util.stream.Collectors; /** @@ -68,6 +75,9 @@ public class ApplyQuotaOrder extends AbstractOrder { @Autowired private RegionService regionService; + @Autowired + private HaASRelationManager haASRelationManager; + @Override public AbstractOrderDetailData getOrderExtensionDetailData(String extensions) { QuotaOrderDetailData orderDetailDTO = new QuotaOrderDetailData(); @@ -198,40 +208,40 @@ public class ApplyQuotaOrder extends AbstractOrder { if (ValidateUtils.isNull(physicalClusterId)) { return ResultStatus.CLUSTER_NOT_EXIST; } - if (!PhysicalClusterMetadataManager.isTopicExistStrictly(physicalClusterId, extensionDTO.getTopicName())) { - return ResultStatus.TOPIC_NOT_EXIST; - } - if (!handleDTO.isExistNullParam()) { - ClusterDO clusterDO = clusterService.getById(physicalClusterId); - ResultStatus resultStatus = adminService.expandPartitions( - clusterDO, - extensionDTO.getTopicName(), - handleDTO.getPartitionNum(), - handleDTO.getRegionId(), - handleDTO.getBrokerIdList(), - userName); - if 
(!ResultStatus.SUCCESS.equals(resultStatus)) {
-            return resultStatus;
+
+        // adjust the quota of the standby topic first
+        HaASRelationDO relationDO = haASRelationManager.getASRelation(physicalClusterId, extensionDTO.getTopicName());
+        if (relationDO != null) {
+            if (relationDO.getStandbyClusterPhyId().equals(physicalClusterId)) {
+                return ResultStatus.OPERATION_FORBIDDEN;
+            }
+            List<Integer> standbyBrokerIds = PhysicalClusterMetadataManager.getBrokerIdList(relationDO.getStandbyClusterPhyId());
+            if (standbyBrokerIds == null || standbyBrokerIds.isEmpty()) {
+                return ResultStatus.BROKER_NOT_EXIST;
+            }
+            OrderExtensionQuotaDTO standbyDto = new OrderExtensionQuotaDTO();
+            standbyDto.setClusterId(relationDO.getStandbyClusterPhyId());
+            standbyDto.setTopicName(relationDO.getStandbyResName());
+            standbyDto.setConsumeQuota(extensionDTO.getConsumeQuota());
+            standbyDto.setProduceQuota(extensionDTO.getProduceQuota());
+            standbyDto.setAppId(extensionDTO.getAppId());
+
+            ResultStatus rv = applyQuota(userName,
+                    new OrderHandleQuotaDTO(handleDTO.getPartitionNum(), null, standbyBrokerIds),
+                    standbyDto);
+            if (ResultStatus.SUCCESS.getCode() != rv.getCode()) {
+                return rv;
+            }
         }
-        TopicQuota topicQuotaDO = new TopicQuota();
-        topicQuotaDO.setAppId(extensionDTO.getAppId());
-        topicQuotaDO.setTopicName(extensionDTO.getTopicName());
-        topicQuotaDO.setConsumeQuota(extensionDTO.getConsumeQuota());
-        topicQuotaDO.setProduceQuota(extensionDTO.getProduceQuota());
-        topicQuotaDO.setClusterId(physicalClusterId);
-        if (quotaService.addTopicQuota(topicQuotaDO) > 0) {
-            orderDO.setExtensions(JSONObject.toJSONString(supplyExtension(extensionDTO, handleDTO)));
-            return ResultStatus.SUCCESS;
-        }
-        return ResultStatus.OPERATION_FAILED;
-    }
-
-    private OrderExtensionQuotaDTO supplyExtension(OrderExtensionQuotaDTO extensionDTO, OrderHandleQuotaDTO handleDTO){
-        extensionDTO.setPartitionNum(handleDTO.getPartitionNum());
-        extensionDTO.setRegionId(handleDTO.getRegionId());
-        extensionDTO.setBrokerIdList(handleDTO.getBrokerIdList());
-        return extensionDTO;
+        extensionDTO.setClusterId(physicalClusterId);
+        ResultStatus resultStatus = applyQuota(userName, handleDTO, extensionDTO);
+        if (ResultStatus.SUCCESS.getCode() != resultStatus.getCode()) {
+            return resultStatus;
+        }
+        orderDO.setExtensions(JSONObject.toJSONString(supplyExtension(extensionDTO, handleDTO)));
+
+        return ResultStatus.SUCCESS;
     }

@@ -246,4 +256,43 @@ public class ApplyQuotaOrder extends AbstractOrder {
     public List<Account> getApproverList(String extensions) {
         return accountService.getAdminOrderHandlerFromCache();
     }
+
+    private ResultStatus applyQuota(
+            String userName,
+            OrderHandleQuotaDTO handleDTO,
+            OrderExtensionQuotaDTO dto) {
+        if (!PhysicalClusterMetadataManager.isTopicExistStrictly(dto.getClusterId(), dto.getTopicName())) {
+            return ResultStatus.TOPIC_NOT_EXIST;
+        }
+        if (!handleDTO.isExistNullParam()) {
+            ClusterDO clusterDO = clusterService.getById(dto.getClusterId());
+            ResultStatus resultStatus = adminService.expandPartitions(
+                    clusterDO,
+                    dto.getTopicName(),
+                    handleDTO.getPartitionNum(),
+                    handleDTO.getRegionId(),
+                    handleDTO.getBrokerIdList(),
+                    userName);
+            if (!ResultStatus.SUCCESS.equals(resultStatus)) {
+                return resultStatus;
+            }
+        }
+        TopicQuota topicQuotaDO = new TopicQuota();
+        topicQuotaDO.setAppId(dto.getAppId());
+        topicQuotaDO.setTopicName(dto.getTopicName());
+        topicQuotaDO.setConsumeQuota(dto.getConsumeQuota());
+        topicQuotaDO.setProduceQuota(dto.getProduceQuota());
+        topicQuotaDO.setClusterId(dto.getClusterId());
+        if (quotaService.addTopicQuota(topicQuotaDO) > 0) {
+            return ResultStatus.SUCCESS;
+        }
+        return ResultStatus.OPERATION_FAILED;
+    }
+
+    private OrderExtensionQuotaDTO supplyExtension(OrderExtensionQuotaDTO extensionDTO, OrderHandleQuotaDTO handleDTO) {
+        extensionDTO.setPartitionNum(handleDTO.getPartitionNum());
+        extensionDTO.setRegionId(handleDTO.getRegionId());
+        extensionDTO.setBrokerIdList(handleDTO.getBrokerIdList());
+        return extensionDTO;
+    }
 }
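The rewritten `ApplyQuotaOrder` above is the template for most handlers in this change set: look up the `HaASRelation` for the requested topic, reject the request if it targets the standby side, apply the change to the standby topic first, and only then to the active one. A minimal, self-contained sketch of that control flow (every type and name below is a simplified stand-in for illustration, not the project's real API):

```java
import java.util.Map;
import java.util.Optional;
import java.util.function.BiFunction;

/** Simplified stand-ins for the HA relation lookup and result types. */
public class HaMirrorSketch {
    record HaRelation(long activeClusterId, String activeTopic,
                      long standbyClusterId, String standbyTopic) {}

    enum Status { SUCCESS, OPERATION_FORBIDDEN, OPERATION_FAILED }

    // "clusterId:topicName" -> relation, emulating a getASRelation-style lookup
    static final Map<String, HaRelation> RELATIONS = Map.of(
            "1:orders", new HaRelation(1L, "orders", 2L, "orders")
    );

    static Optional<HaRelation> getRelation(long clusterId, String topic) {
        return Optional.ofNullable(RELATIONS.get(clusterId + ":" + topic));
    }

    /**
     * Apply an operation to a topic; if it is the active side of an HA pair,
     * mirror the operation to the standby topic first. Operating directly on
     * a standby topic is forbidden.
     */
    static Status mirrorThenApply(long clusterId, String topic,
                                  BiFunction<Long, String, Status> operation) {
        Optional<HaRelation> rel = getRelation(clusterId, topic);
        if (rel.isPresent()) {
            HaRelation r = rel.get();
            if (r.standbyClusterId() == clusterId) {
                return Status.OPERATION_FORBIDDEN;        // standby side: reject
            }
            Status standbyResult = operation.apply(r.standbyClusterId(), r.standbyTopic());
            if (standbyResult != Status.SUCCESS) {
                return standbyResult;                     // stop if the mirror fails
            }
        }
        return operation.apply(clusterId, topic);         // finally, the active side
    }

    public static void main(String[] args) {
        Status s = mirrorThenApply(1L, "orders", (cid, t) -> {
            System.out.println("apply on " + cid + ":" + t);
            return Status.SUCCESS;
        });
        System.out.println(s); // applies on 2:orders, then 1:orders -> SUCCESS
    }
}
```

Mirroring to the standby before the active side keeps a failure conservative: the standby may briefly carry the new setting alone, but the active topic never holds a change that its standby would lose on switchover.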
diff --git a/kafka-manager-extends/kafka-manager-bpm/src/main/java/com/xiaojukeji/kafka/manager/bpm/order/impl/DeleteTopicOrder.java b/kafka-manager-extends/kafka-manager-bpm/src/main/java/com/xiaojukeji/kafka/manager/bpm/order/impl/DeleteTopicOrder.java
index 5056e51c..8284aeb8 100644
--- a/kafka-manager-extends/kafka-manager-bpm/src/main/java/com/xiaojukeji/kafka/manager/bpm/order/impl/DeleteTopicOrder.java
+++ b/kafka-manager-extends/kafka-manager-bpm/src/main/java/com/xiaojukeji/kafka/manager/bpm/order/impl/DeleteTopicOrder.java
@@ -2,27 +2,29 @@ package com.xiaojukeji.kafka.manager.bpm.order.impl;

 import com.alibaba.fastjson.JSONObject;
 import com.xiaojukeji.kafka.manager.bpm.common.OrderTypeEnum;
-import com.xiaojukeji.kafka.manager.common.entity.Result;
-import com.xiaojukeji.kafka.manager.common.entity.pojo.gateway.AppDO;
-import com.xiaojukeji.kafka.manager.common.constant.Constant;
-import com.xiaojukeji.kafka.manager.common.entity.ResultStatus;
-import com.xiaojukeji.kafka.manager.common.entity.ao.topic.TopicConnection;
 import com.xiaojukeji.kafka.manager.bpm.common.entry.apply.OrderExtensionDeleteTopicDTO;
 import com.xiaojukeji.kafka.manager.bpm.common.entry.detail.AbstractOrderDetailData;
 import com.xiaojukeji.kafka.manager.bpm.common.entry.detail.OrderDetailDeleteTopicDTO;
 import com.xiaojukeji.kafka.manager.bpm.common.handle.OrderHandleBaseDTO;
-import com.xiaojukeji.kafka.manager.common.entity.vo.normal.cluster.ClusterNameDTO;
-import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
+import com.xiaojukeji.kafka.manager.bpm.order.AbstractTopicOrder;
+import com.xiaojukeji.kafka.manager.common.constant.Constant;
+import com.xiaojukeji.kafka.manager.common.entity.Result;
+import com.xiaojukeji.kafka.manager.common.entity.ResultStatus;
+import com.xiaojukeji.kafka.manager.common.entity.ao.topic.TopicConnection;
 import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO;
 import com.xiaojukeji.kafka.manager.common.entity.pojo.OrderDO;
 import com.xiaojukeji.kafka.manager.common.entity.pojo.TopicDO;
+import com.xiaojukeji.kafka.manager.common.entity.pojo.gateway.AppDO;
+import com.xiaojukeji.kafka.manager.common.entity.pojo.ha.HaASRelationDO;
+import com.xiaojukeji.kafka.manager.common.entity.vo.normal.cluster.ClusterNameDTO;
+import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
+import com.xiaojukeji.kafka.manager.service.biz.ha.HaASRelationManager;
 import com.xiaojukeji.kafka.manager.service.cache.LogicalClusterMetadataManager;
 import com.xiaojukeji.kafka.manager.service.cache.PhysicalClusterMetadataManager;
-import com.xiaojukeji.kafka.manager.bpm.order.AbstractTopicOrder;
 import com.xiaojukeji.kafka.manager.service.service.AdminService;
-import com.xiaojukeji.kafka.manager.service.service.gateway.AppService;
 import com.xiaojukeji.kafka.manager.service.service.ClusterService;
 import com.xiaojukeji.kafka.manager.service.service.TopicManagerService;
+import com.xiaojukeji.kafka.manager.service.service.gateway.AppService;
 import com.xiaojukeji.kafka.manager.service.service.gateway.TopicConnectionService;
 import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.stereotype.Component;
@@ -54,6 +56,9 @@ public class DeleteTopicOrder extends AbstractTopicOrder {
     @Autowired
     private TopicConnectionService connectionService;

+    @Autowired
+    private HaASRelationManager haASRelationManager;
+
     @Override
     public AbstractOrderDetailData getOrderExtensionDetailData(String extensions) {
         OrderDetailDeleteTopicDTO orderDetailDTO = new OrderDetailDeleteTopicDTO();
@@ -128,26 +133,32 @@ public class DeleteTopicOrder extends AbstractTopicOrder {
         if (ValidateUtils.isNull(physicalClusterId)) {
             return ResultStatus.CLUSTER_NOT_EXIST;
         }
+
+        HaASRelationDO relationDO = haASRelationManager.getASRelation(physicalClusterId, extensionDTO.getTopicName());
+        if (relationDO != null) {
+            // an HA topic must have its active/standby relation removed before it can be deleted
+            return ResultStatus.HA_TOPIC_DELETE_FORBIDDEN;
+        }
+
+        return delTopic(physicalClusterId, extensionDTO.getTopicName(), userName);
+    }
+
+    private ResultStatus delTopic(Long physicalClusterId, String topicName, String userName) {
         ClusterDO clusterDO = clusterService.getById(physicalClusterId);
-        if (!PhysicalClusterMetadataManager.isTopicExistStrictly(physicalClusterId, extensionDTO.getTopicName())) {
+        if (!PhysicalClusterMetadataManager.isTopicExistStrictly(physicalClusterId, topicName)) {
             return ResultStatus.TOPIC_NOT_EXIST;
         }

         // 最近topic是否还有生产或者消费操作
         if (connectionService.isExistConnection(
                 physicalClusterId,
-                extensionDTO.getTopicName(),
+                topicName,
                 new Date(System.currentTimeMillis() - Constant.TOPIC_CONNECTION_LATEST_TIME_MS),
                 new Date())
-        ) {
+                ) {
             return ResultStatus.OPERATION_FORBIDDEN;
         }
-        ResultStatus resultStatus = adminService.deleteTopic(clusterDO, extensionDTO.getTopicName(), userName);
-
-        if (!ResultStatus.SUCCESS.equals(resultStatus)) {
-            return resultStatus;
-        }
-        return resultStatus;
+        return adminService.deleteTopic(clusterDO, topicName, userName);
     }
 }
diff --git a/kafka-manager-extends/kafka-manager-bpm/src/main/java/com/xiaojukeji/kafka/manager/bpm/order/impl/ThirdPartDeleteTopicOrder.java b/kafka-manager-extends/kafka-manager-bpm/src/main/java/com/xiaojukeji/kafka/manager/bpm/order/impl/ThirdPartDeleteTopicOrder.java
index ec98ced7..69ad3b52 100644
--- a/kafka-manager-extends/kafka-manager-bpm/src/main/java/com/xiaojukeji/kafka/manager/bpm/order/impl/ThirdPartDeleteTopicOrder.java
+++ b/kafka-manager-extends/kafka-manager-bpm/src/main/java/com/xiaojukeji/kafka/manager/bpm/order/impl/ThirdPartDeleteTopicOrder.java
@@ -155,6 +155,7 @@ public class ThirdPartDeleteTopicOrder extends AbstractTopicOrder {
             return ResultStatus.USER_WITHOUT_AUTHORITY;
         }
+
         ResultStatus resultStatus = adminService.deleteTopic(clusterDO, extensionDTO.getTopicName(), userName);
         if (!ResultStatus.SUCCESS.equals(resultStatus)) {
             return resultStatus;
diff --git a/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/component/agent/AbstractAgent.java b/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/component/agent/AbstractAgent.java
index 70ce5902..9146b1bd 100644
--- a/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/component/agent/AbstractAgent.java
+++ b/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/component/agent/AbstractAgent.java
@@ -1,7 +1,7 @@ package com.xiaojukeji.kafka.manager.kcm.component.agent;

 import com.xiaojukeji.kafka.manager.common.entity.Result;
-import com.xiaojukeji.kafka.manager.kcm.common.bizenum.ClusterTaskActionEnum;
+import com.xiaojukeji.kafka.manager.common.bizenum.TaskActionEnum;
 import com.xiaojukeji.kafka.manager.kcm.common.bizenum.ClusterTaskStateEnum;
 import com.xiaojukeji.kafka.manager.kcm.common.bizenum.ClusterTaskSubStateEnum;
 import com.xiaojukeji.kafka.manager.kcm.common.entry.ao.ClusterTaskLog;
@@ -37,7 +37,7 @@ public abstract class AbstractAgent {
      * @param actionEnum 执行动作
      * @return true:触发成功, false:触发失败
      */
-    public abstract boolean actionTask(Long taskId, ClusterTaskActionEnum actionEnum);
+    public abstract boolean actionTask(Long taskId, TaskActionEnum actionEnum);

 /**
  * 执行任务
@@ -46,7 +46,7 @@
      * @param hostname 具体主机
      * @return true:触发成功, false:触发失败
      */
-    public abstract boolean actionHostTask(Long taskId, ClusterTaskActionEnum actionEnum, String hostname);
+    public abstract boolean actionHostTask(Long taskId, TaskActionEnum actionEnum, String hostname);

 /**
  * 获取任务运行的状态[阻塞, 执行中, 完成等]
diff --git a/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/component/agent/n9e/N9e.java b/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/component/agent/n9e/N9e.java
index d0a2503b..e836150c 100644
--- a/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/component/agent/n9e/N9e.java
+++ b/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/component/agent/n9e/N9e.java
@@ -3,7 +3,7 @@ package com.xiaojukeji.kafka.manager.kcm.component.agent.n9e;

 import com.xiaojukeji.kafka.manager.common.bizenum.KafkaFileEnum;
 import com.xiaojukeji.kafka.manager.common.entity.Result;
 import com.xiaojukeji.kafka.manager.kcm.common.Constant;
-import com.xiaojukeji.kafka.manager.kcm.common.bizenum.ClusterTaskActionEnum;
+import com.xiaojukeji.kafka.manager.common.bizenum.TaskActionEnum;
 import com.xiaojukeji.kafka.manager.kcm.common.bizenum.ClusterTaskTypeEnum;
 import com.xiaojukeji.kafka.manager.kcm.common.entry.ao.ClusterTaskLog;
 import com.xiaojukeji.kafka.manager.kcm.common.entry.ao.CreationTaskData;
@@ -94,7 +94,7 @@ public class N9e extends AbstractAgent {
     }

     @Override
-    public boolean actionTask(Long taskId, ClusterTaskActionEnum actionEnum) {
+    public boolean actionTask(Long taskId, TaskActionEnum actionEnum) {
         Map<String, Object> param = new HashMap<>(1);
         param.put("action", actionEnum.getAction());
@@ -115,7 +115,7 @@
     }

     @Override
-    public boolean actionHostTask(Long taskId, ClusterTaskActionEnum actionEnum, String hostname) {
+    public boolean actionHostTask(Long taskId, TaskActionEnum actionEnum, String hostname) {
         Map<String, Object> params = new HashMap<>(2);
         params.put("action", actionEnum.getAction());
         params.put("hostname", hostname);
@@ -234,7 +234,7 @@
         n9eCreationTask.setScript(this.script);
         n9eCreationTask.setArgs(sb.toString());
         n9eCreationTask.setAccount(this.account);
-        n9eCreationTask.setAction(ClusterTaskActionEnum.PAUSE.getAction());
+        n9eCreationTask.setAction(TaskActionEnum.PAUSE.getAction());
         n9eCreationTask.setHosts(creationTaskData.getHostList());
         return n9eCreationTask;
     }
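`ClusterTaskActionEnum` is renamed to a shared `TaskActionEnum` and moved out of the kcm module so other modules can trigger the same actions. The diff shows the members in use (START, PAUSE, IGNORE, CANCEL, ROLLBACK) and that each carries an `action` string that agents such as N9e put into request parameters. A sketch of the plausible shape of that enum — the concrete strings and field names below are assumptions for illustration only:

```java
import java.util.HashMap;
import java.util.Map;

/** Hypothetical rendering of the shared task-action enum; values are assumed. */
public class TaskActionSketch {
    enum TaskActionEnum {
        START("start"), PAUSE("pause"), IGNORE("ignore"), CANCEL("cancel"), ROLLBACK("rollback");

        private final String action;
        TaskActionEnum(String action) { this.action = action; }
        public String getAction() { return action; }
    }

    /** Build the request body an agent like N9e would send for a host-level action. */
    static Map<String, Object> buildActionParams(TaskActionEnum actionEnum, String hostname) {
        Map<String, Object> params = new HashMap<>(2);
        params.put("action", actionEnum.getAction());  // wire-level action string
        params.put("hostname", hostname);              // limit the action to one host
        return params;
    }

    public static void main(String[] args) {
        System.out.println(buildActionParams(TaskActionEnum.PAUSE, "broker-01"));
        // {action=pause, hostname=broker-01}
    }
}
```

Keeping the wire-level string on the enum means callers compare incoming `action` inputs against `getAction()` instead of scattering literals — which is exactly how `ClusterTaskServiceImpl` dispatches below.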
diff --git a/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/impl/ClusterTaskServiceImpl.java b/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/impl/ClusterTaskServiceImpl.java
index b3ef959a..cc9547bc 100644
--- a/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/impl/ClusterTaskServiceImpl.java
+++ b/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/impl/ClusterTaskServiceImpl.java
@@ -4,7 +4,7 @@ import com.xiaojukeji.kafka.manager.common.utils.ListUtils;
 import com.xiaojukeji.kafka.manager.common.utils.SpringTool;
 import com.xiaojukeji.kafka.manager.kcm.ClusterTaskService;
 import com.xiaojukeji.kafka.manager.kcm.common.Converters;
-import com.xiaojukeji.kafka.manager.kcm.common.bizenum.ClusterTaskActionEnum;
+import com.xiaojukeji.kafka.manager.common.bizenum.TaskActionEnum;
 import com.xiaojukeji.kafka.manager.kcm.common.entry.ClusterTaskConstant;
 import com.xiaojukeji.kafka.manager.kcm.common.entry.ao.ClusterTaskLog;
 import com.xiaojukeji.kafka.manager.kcm.common.entry.ao.ClusterTaskSubStatus;
@@ -93,38 +93,38 @@ public class ClusterTaskServiceImpl implements ClusterTaskService {
             return ResultStatus.CALL_CLUSTER_TASK_AGENT_FAILED;
         }

-        if (ClusterTaskActionEnum.START.getAction().equals(action) && ClusterTaskStateEnum.BLOCKED.equals(stateEnumResult.getData())) {
+        if (TaskActionEnum.START.getAction().equals(action) && ClusterTaskStateEnum.BLOCKED.equals(stateEnumResult.getData())) {
             // 暂停状态, 可以执行开始
-            return actionTaskExceptRollbackAction(agentTaskId, ClusterTaskActionEnum.START, "");
+            return actionTaskExceptRollbackAction(agentTaskId, TaskActionEnum.START, "");
         }

-        if (ClusterTaskActionEnum.PAUSE.getAction().equals(action) && ClusterTaskStateEnum.RUNNING.equals(stateEnumResult.getData())) {
+        if (TaskActionEnum.PAUSE.getAction().equals(action) && ClusterTaskStateEnum.RUNNING.equals(stateEnumResult.getData())) {
             // 运行状态, 可以执行暂停
-            return actionTaskExceptRollbackAction(agentTaskId, ClusterTaskActionEnum.PAUSE, "");
+            return actionTaskExceptRollbackAction(agentTaskId, TaskActionEnum.PAUSE, "");
         }

-        if (ClusterTaskActionEnum.IGNORE.getAction().equals(action)) {
+        if (TaskActionEnum.IGNORE.getAction().equals(action)) {
             // 忽略 & 取消随时都可以操作
-            return actionTaskExceptRollbackAction(agentTaskId, ClusterTaskActionEnum.IGNORE, hostname);
+            return actionTaskExceptRollbackAction(agentTaskId, TaskActionEnum.IGNORE, hostname);
         }

-        if (ClusterTaskActionEnum.CANCEL.getAction().equals(action)) {
+        if (TaskActionEnum.CANCEL.getAction().equals(action)) {
             // 忽略 & 取消随时都可以操作
-            return actionTaskExceptRollbackAction(agentTaskId, ClusterTaskActionEnum.CANCEL, hostname);
+            return actionTaskExceptRollbackAction(agentTaskId, TaskActionEnum.CANCEL, hostname);
         }

         if ((!ClusterTaskStateEnum.FINISHED.equals(stateEnumResult.getData()) || !rollback)
-                && ClusterTaskActionEnum.ROLLBACK.getAction().equals(action)) {
+                && TaskActionEnum.ROLLBACK.getAction().equals(action)) {
             // 暂未操作完时可以回滚, 回滚所有操作过的机器到上一个版本
             return actionTaskRollback(clusterTaskDO);
         }
         return ResultStatus.OPERATION_FAILED;
     }

-    private ResultStatus actionTaskExceptRollbackAction(Long agentId, ClusterTaskActionEnum actionEnum, String hostname) {
+    private ResultStatus actionTaskExceptRollbackAction(Long agentId, TaskActionEnum actionEnum, String hostname) {
         if (!ValidateUtils.isBlank(hostname)) {
             return actionHostTaskExceptRollbackAction(agentId, actionEnum, hostname);
         }
         return abstractAgent.actionTask(agentId, actionEnum)?
ResultStatus.SUCCESS: ResultStatus.OPERATION_FAILED; } - private ResultStatus actionHostTaskExceptRollbackAction(Long agentId, ClusterTaskActionEnum actionEnum, String hostname) { + private ResultStatus actionHostTaskExceptRollbackAction(Long agentId, TaskActionEnum actionEnum, String hostname) { return abstractAgent.actionHostTask(agentId, actionEnum, hostname)? ResultStatus.SUCCESS: ResultStatus.OPERATION_FAILED; } @@ -176,7 +176,7 @@ public class ClusterTaskServiceImpl implements ClusterTaskService { if (clusterTaskDao.updateRollback(clusterTaskDO) <= 0) { return ResultStatus.MYSQL_ERROR; } - abstractAgent.actionTask(clusterTaskDO.getAgentTaskId(), ClusterTaskActionEnum.CANCEL); + abstractAgent.actionTask(clusterTaskDO.getAgentTaskId(), TaskActionEnum.CANCEL); return ResultStatus.SUCCESS; } catch (Exception e) { LOGGER.error("create cluster task failed, clusterTaskDO:{}.", clusterTaskDO, e); diff --git a/kafka-manager-extends/kafka-manager-kcm/src/test/java/com/xiaojukeji/kafka/manager/kcm/ClusterTaskServiceTest.java b/kafka-manager-extends/kafka-manager-kcm/src/test/java/com/xiaojukeji/kafka/manager/kcm/ClusterTaskServiceTest.java index b28b828f..e1ca1b13 100644 --- a/kafka-manager-extends/kafka-manager-kcm/src/test/java/com/xiaojukeji/kafka/manager/kcm/ClusterTaskServiceTest.java +++ b/kafka-manager-extends/kafka-manager-kcm/src/test/java/com/xiaojukeji/kafka/manager/kcm/ClusterTaskServiceTest.java @@ -4,7 +4,7 @@ import com.xiaojukeji.kafka.manager.common.entity.Result; import com.xiaojukeji.kafka.manager.common.entity.ResultStatus; import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterTaskDO; import com.xiaojukeji.kafka.manager.dao.ClusterTaskDao; -import com.xiaojukeji.kafka.manager.kcm.common.bizenum.ClusterTaskActionEnum; +import com.xiaojukeji.kafka.manager.common.bizenum.TaskActionEnum; import com.xiaojukeji.kafka.manager.kcm.common.bizenum.ClusterTaskStateEnum; import com.xiaojukeji.kafka.manager.kcm.common.bizenum.ClusterTaskTypeEnum; import com.xiaojukeji.kafka.manager.kcm.common.entry.ao.ClusterTaskLog; @@ -163,7 +163,7 @@ public class ClusterTaskServiceTest extends BaseTest { } private void executeTask2TaskNotExistTest() { - ResultStatus resultStatus = clusterTaskService.executeTask(INVALID_TASK_ID, ClusterTaskActionEnum.START.getAction(), ADMIN); + ResultStatus resultStatus = clusterTaskService.executeTask(INVALID_TASK_ID, TaskActionEnum.START.getAction(), ADMIN); Assert.assertEquals(resultStatus.getCode(), ResultStatus.RESOURCE_NOT_EXIST.getCode()); } @@ -172,7 +172,7 @@ public class ClusterTaskServiceTest extends BaseTest { ClusterTaskDO clusterTaskDO = getClusterTaskDO(); Mockito.when(clusterTaskDao.getById(Mockito.anyLong())).thenReturn(clusterTaskDO); - ResultStatus resultStatus = clusterTaskService.executeTask(REAL_TASK_ID_IN_MYSQL, ClusterTaskActionEnum.START.getAction(), ADMIN); + ResultStatus resultStatus = clusterTaskService.executeTask(REAL_TASK_ID_IN_MYSQL, TaskActionEnum.START.getAction(), ADMIN); Assert.assertEquals(resultStatus.getCode(), ResultStatus.CALL_CLUSTER_TASK_AGENT_FAILED.getCode()); } @@ -183,12 +183,12 @@ public class ClusterTaskServiceTest extends BaseTest { // success Mockito.when(abstractAgent.actionTask(Mockito.anyLong(), Mockito.any())).thenReturn(true); - ResultStatus resultStatus = clusterTaskService.executeTask(REAL_TASK_ID_IN_MYSQL, ClusterTaskActionEnum.START.getAction(), ADMIN); + ResultStatus resultStatus = clusterTaskService.executeTask(REAL_TASK_ID_IN_MYSQL, TaskActionEnum.START.getAction(), ADMIN); 
Assert.assertEquals(resultStatus.getCode(), ResultStatus.SUCCESS.getCode()); // operation failed Mockito.when(abstractAgent.actionTask(Mockito.anyLong(), Mockito.any())).thenReturn(false); - ResultStatus resultStatus2 = clusterTaskService.executeTask(REAL_TASK_ID_IN_MYSQL, ClusterTaskActionEnum.START.getAction(), ADMIN); + ResultStatus resultStatus2 = clusterTaskService.executeTask(REAL_TASK_ID_IN_MYSQL, TaskActionEnum.START.getAction(), ADMIN); Assert.assertEquals(resultStatus2.getCode(), ResultStatus.OPERATION_FAILED.getCode()); } @@ -199,12 +199,12 @@ public class ClusterTaskServiceTest extends BaseTest { // success Mockito.when(abstractAgent.actionTask(Mockito.anyLong(), Mockito.any())).thenReturn(true); - ResultStatus resultStatus = clusterTaskService.executeTask(REAL_TASK_ID_IN_MYSQL, ClusterTaskActionEnum.PAUSE.getAction(), ADMIN); + ResultStatus resultStatus = clusterTaskService.executeTask(REAL_TASK_ID_IN_MYSQL, TaskActionEnum.PAUSE.getAction(), ADMIN); Assert.assertEquals(resultStatus.getCode(), ResultStatus.SUCCESS.getCode()); // operation failed Mockito.when(abstractAgent.actionTask(Mockito.anyLong(), Mockito.any())).thenReturn(false); - ResultStatus resultStatus2 = clusterTaskService.executeTask(REAL_TASK_ID_IN_MYSQL, ClusterTaskActionEnum.PAUSE.getAction(), ADMIN); + ResultStatus resultStatus2 = clusterTaskService.executeTask(REAL_TASK_ID_IN_MYSQL, TaskActionEnum.PAUSE.getAction(), ADMIN); Assert.assertEquals(resultStatus2.getCode(), ResultStatus.OPERATION_FAILED.getCode()); } @@ -215,12 +215,12 @@ public class ClusterTaskServiceTest extends BaseTest { // success Mockito.when(abstractAgent.actionTask(Mockito.anyLong(), Mockito.any())).thenReturn(true); - ResultStatus resultStatus = clusterTaskService.executeTask(REAL_TASK_ID_IN_MYSQL, ClusterTaskActionEnum.IGNORE.getAction(), ""); + ResultStatus resultStatus = clusterTaskService.executeTask(REAL_TASK_ID_IN_MYSQL, TaskActionEnum.IGNORE.getAction(), ""); Assert.assertEquals(resultStatus.getCode(), ResultStatus.SUCCESS.getCode()); // operation failed Mockito.when(abstractAgent.actionTask(Mockito.anyLong(), Mockito.any())).thenReturn(false); - ResultStatus resultStatus2 = clusterTaskService.executeTask(REAL_TASK_ID_IN_MYSQL, ClusterTaskActionEnum.IGNORE.getAction(), ""); + ResultStatus resultStatus2 = clusterTaskService.executeTask(REAL_TASK_ID_IN_MYSQL, TaskActionEnum.IGNORE.getAction(), ""); Assert.assertEquals(resultStatus2.getCode(), ResultStatus.OPERATION_FAILED.getCode()); } @@ -231,12 +231,12 @@ public class ClusterTaskServiceTest extends BaseTest { // success Mockito.when(abstractAgent.actionTask(Mockito.anyLong(), Mockito.any())).thenReturn(true); - ResultStatus resultStatus = clusterTaskService.executeTask(REAL_TASK_ID_IN_MYSQL, ClusterTaskActionEnum.CANCEL.getAction(), ""); + ResultStatus resultStatus = clusterTaskService.executeTask(REAL_TASK_ID_IN_MYSQL, TaskActionEnum.CANCEL.getAction(), ""); Assert.assertEquals(resultStatus.getCode(), ResultStatus.SUCCESS.getCode()); // operation failed Mockito.when(abstractAgent.actionTask(Mockito.anyLong(), Mockito.any())).thenReturn(false); - ResultStatus resultStatus2 = clusterTaskService.executeTask(REAL_TASK_ID_IN_MYSQL, ClusterTaskActionEnum.CANCEL.getAction(), ""); + ResultStatus resultStatus2 = clusterTaskService.executeTask(REAL_TASK_ID_IN_MYSQL, TaskActionEnum.CANCEL.getAction(), ""); Assert.assertEquals(resultStatus2.getCode(), ResultStatus.OPERATION_FAILED.getCode()); } @@ -246,7 +246,7 @@ public class ClusterTaskServiceTest extends BaseTest { 
 Mockito.when(clusterTaskDao.getById(Mockito.anyLong())).thenReturn(clusterTaskDO);

         // operation failed
-        ResultStatus resultStatus2 = clusterTaskService.executeTask(REAL_TASK_ID_IN_MYSQL, ClusterTaskActionEnum.START.getAction(), ADMIN);
+        ResultStatus resultStatus2 = clusterTaskService.executeTask(REAL_TASK_ID_IN_MYSQL, TaskActionEnum.START.getAction(), ADMIN);
         Assert.assertEquals(resultStatus2.getCode(), ResultStatus.OPERATION_FAILED.getCode());
     }

@@ -257,7 +257,7 @@ public class ClusterTaskServiceTest extends BaseTest {
         Mockito.when(clusterTaskDao.getById(Mockito.anyLong())).thenReturn(clusterTaskDO);

         // operation failed
-        ResultStatus resultStatus2 = clusterTaskService.executeTask(REAL_TASK_ID_IN_MYSQL, ClusterTaskActionEnum.ROLLBACK.getAction(), ADMIN);
+        ResultStatus resultStatus2 = clusterTaskService.executeTask(REAL_TASK_ID_IN_MYSQL, TaskActionEnum.ROLLBACK.getAction(), ADMIN);
         Assert.assertEquals(resultStatus2.getCode(), ResultStatus.OPERATION_FORBIDDEN.getCode());
     }

diff --git a/kafka-manager-task/src/main/java/com/xiaojukeji/kafka/manager/task/dispatch/op/HaFlushASSwitchJob.java b/kafka-manager-task/src/main/java/com/xiaojukeji/kafka/manager/task/dispatch/op/HaFlushASSwitchJob.java
new file mode 100644
index 00000000..d19726c4
--- /dev/null
+++ b/kafka-manager-task/src/main/java/com/xiaojukeji/kafka/manager/task/dispatch/op/HaFlushASSwitchJob.java
@@ -0,0 +1,41 @@
+package com.xiaojukeji.kafka.manager.task.dispatch.op;
+
+import com.xiaojukeji.kafka.manager.service.biz.job.HaASSwitchJobManager;
+import com.xiaojukeji.kafka.manager.service.service.ha.HaASSwitchJobService;
+import com.xiaojukeji.kafka.manager.task.component.AbstractScheduledTask;
+import com.xiaojukeji.kafka.manager.task.component.CustomScheduled;
+import org.springframework.beans.factory.annotation.Autowired;
+import org.springframework.stereotype.Component;
+
+import java.util.*;
+
+/**
+ * Active/standby switch job
+ */
+@Component
+@CustomScheduled(name = "HaFlushASSwitchJob",
+        cron = "0 0/1 * * * ?",
+        threadNum = 1,
+        description = "刷新主备切换任务")
+public class HaFlushASSwitchJob extends AbstractScheduledTask<Long> {
+    @Autowired
+    private HaASSwitchJobService haASSwitchJobService;
+
+    @Autowired
+    private HaASSwitchJobManager haASSwitchJobManager;
+
+    @Override
+    public List<Long> listAllTasks() {
+        // list the IDs of running jobs, ignoring jobs created within the last minute to avoid re-executing them
+        return haASSwitchJobService.listRunningJobs(System.currentTimeMillis() - (60 * 1000L));
+    }
+
+    @Override
+    public void processTask(Long jobId) {
+        // drive the job forward
+        haASSwitchJobManager.executeJob(jobId, false, false);
+
+        // refresh the job's extend data
+        haASSwitchJobManager.flushExtendData(jobId);
+    }
+}
diff --git a/kafka-manager-web/pom.xml b/kafka-manager-web/pom.xml
index a28169e1..4e22d49a 100644
--- a/kafka-manager-web/pom.xml
+++ b/kafka-manager-web/pom.xml
@@ -83,6 +83,12 @@
             <artifactId>spring-boot-starter-logging</artifactId>
             <version>${spring.boot.version}</version>
         </dependency>
+        <dependency>
+            <groupId>org.springframework.boot</groupId>
+            <artifactId>spring-boot-starter-validation</artifactId>
+            <version>${spring.boot.version}</version>
+        </dependency>
         <dependency>
             <groupId>ch.qos.logback</groupId>
             <artifactId>logback-classic</artifactId>
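`HaFlushASSwitchJob` above drives in-flight active/standby switch jobs forward once a minute. Stripped of the project's `@CustomScheduled` machinery, the loop it implements looks roughly like this — the interfaces and names here are stand-ins, not the real services:

```java
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/** Plain-Java sketch of the polling loop HaFlushASSwitchJob expresses via @CustomScheduled. */
public class SwitchJobPollerSketch {
    interface JobStore { List<Long> listRunningJobs(long createdBeforeMillis); }
    interface JobManager { void executeJob(long jobId); void flushExtendData(long jobId); }

    static void startPoller(JobStore store, JobManager manager) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            // skip jobs created within the last minute to reduce duplicate execution
            long cutoff = System.currentTimeMillis() - 60_000L;
            for (Long jobId : store.listRunningJobs(cutoff)) {
                manager.executeJob(jobId);      // advance the active/standby switch
                manager.flushExtendData(jobId); // persist progress for status queries
            }
        }, 0, 1, TimeUnit.MINUTES);             // cron "0 0/1 * * * ?" ≈ every minute
    }
}
```

The one-minute grace window plus `threadNum = 1` is a cheap defense against double execution; it does not replace idempotence in `executeJob` itself.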
diff --git a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/normal/NormalAppController.java b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/normal/NormalAppController.java
index 34529616..ad19569e 100644
--- a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/normal/NormalAppController.java
+++ b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/normal/NormalAppController.java
@@ -72,6 +72,19 @@ public class NormalAppController {
         );
     }

+    @ApiLevel(level = ApiLevelContent.LEVEL_NORMAL_3, rateLimit = 1)
+    @ApiOperation(value = "App列表", notes = "")
+    @RequestMapping(value = "apps/{clusterId}", method = RequestMethod.GET)
+    @ResponseBody
+    public Result<List<AppVO>> getApps(@PathVariable Long clusterId,
+                                       @RequestParam(value = "isPhysicalClusterId", required = false, defaultValue = "false") Boolean isPhysicalClusterId) {
+
+        Long physicalClusterId = logicalClusterMetadataManager.getPhysicalClusterId(clusterId, isPhysicalClusterId);
+        return new Result<>(AppConverter.convert2AppVOList(
+                appService.getByPrincipalAndClusterId(SpringTool.getUserName(), physicalClusterId))
+        );
+    }
+
     @ApiOperation(value = "App基本信息", notes = "")
     @RequestMapping(value = "apps/{appId}/basic-info", method = RequestMethod.GET)
     @ResponseBody
diff --git a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/normal/NormalClusterController.java b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/normal/NormalClusterController.java
index ed6ff6eb..4c3d286a 100644
--- a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/normal/NormalClusterController.java
+++ b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/normal/NormalClusterController.java
@@ -24,6 +24,7 @@ import com.xiaojukeji.kafka.manager.service.service.ThrottleService;
 import com.xiaojukeji.kafka.manager.service.service.TopicService;
 import com.xiaojukeji.kafka.manager.common.utils.SpringTool;
 import com.xiaojukeji.kafka.manager.common.constant.ApiPrefix;
+import com.xiaojukeji.kafka.manager.service.service.ha.HaTopicService;
 import com.xiaojukeji.kafka.manager.web.converters.ClusterModelConverter;
 import com.xiaojukeji.kafka.manager.web.converters.CommonModelConverter;
 import io.swagger.annotations.Api;
@@ -50,6 +51,9 @@ public class NormalClusterController {
     @Autowired
     private TopicService topicService;

+    @Autowired
+    private HaTopicService haTopicService;
+
     @Autowired
     private LogicalClusterService logicalClusterService;

@@ -144,6 +148,13 @@ public class NormalClusterController {
         return Result.buildFrom(ResultStatus.CLUSTER_NOT_EXIST);
     }

+        // filter out standby topics
+        Map<Long, List<String>> relationMap = haTopicService.getClusterStandbyTopicMap();
+        Set<String> topics = logicalClusterMetadataManager.getTopicNameSet(logicalClusterId);
+        if (relationMap != null && relationMap.get(logicalClusterDO.getClusterId()) != null) {
+            topics.removeAll(new HashSet<>(relationMap.get(logicalClusterDO.getClusterId())));
+        }
+
         return new Result<>(CommonModelConverter.convert2TopicOverviewVOList(
                 logicalClusterId,
                 topicService.getTopicOverviewList(
diff --git a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/normal/NormalTopicMineController.java b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/normal/NormalTopicMineController.java
index df5d291e..7c7eaeec 100644
--- a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/normal/NormalTopicMineController.java
+++ b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/normal/NormalTopicMineController.java
@@ -1,21 +1,23 @@ package com.xiaojukeji.kafka.manager.web.api.versionone.normal;

+import com.xiaojukeji.kafka.manager.common.constant.ApiPrefix;
 import com.xiaojukeji.kafka.manager.common.constant.Constant;
 import com.xiaojukeji.kafka.manager.common.entity.Result;
 import com.xiaojukeji.kafka.manager.common.entity.ResultStatus;
 import com.xiaojukeji.kafka.manager.common.entity.dto.normal.TopicModifyDTO;
 import com.xiaojukeji.kafka.manager.common.entity.dto.normal.TopicRetainDTO;
+import com.xiaojukeji.kafka.manager.common.entity.pojo.ha.HaASRelationDO;
 import com.xiaojukeji.kafka.manager.common.entity.vo.normal.topic.TopicExpiredVO;
 import com.xiaojukeji.kafka.manager.common.entity.vo.normal.topic.TopicMineVO;
 import com.xiaojukeji.kafka.manager.common.entity.vo.normal.topic.TopicVO;
+import com.xiaojukeji.kafka.manager.common.utils.SpringTool;
 import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
+import com.xiaojukeji.kafka.manager.service.biz.ha.HaASRelationManager;
 import com.xiaojukeji.kafka.manager.service.cache.LogicalClusterMetadataManager;
 import com.xiaojukeji.kafka.manager.service.service.TopicExpiredService;
 import com.xiaojukeji.kafka.manager.service.service.TopicManagerService;
-import com.xiaojukeji.kafka.manager.common.utils.SpringTool;
-import com.xiaojukeji.kafka.manager.common.constant.ApiPrefix;
-import com.xiaojukeji.kafka.manager.web.utils.ResultCache;
 import com.xiaojukeji.kafka.manager.web.converters.TopicMineConverter;
+import com.xiaojukeji.kafka.manager.web.utils.ResultCache;
 import io.swagger.annotations.Api;
 import io.swagger.annotations.ApiOperation;
 import org.springframework.beans.factory.annotation.Autowired;
@@ -40,6 +42,9 @@ public class NormalTopicMineController {
     @Autowired
     private LogicalClusterMetadataManager logicalClusterMetadataManager;

+    @Autowired
+    private HaASRelationManager haASRelationManager;
+
     @ApiOperation(value = "我的Topic", notes = "")
     @RequestMapping(value = "topics/mine", method = RequestMethod.GET)
     @ResponseBody
@@ -75,14 +80,31 @@ public class NormalTopicMineController {
         if (ValidateUtils.isNull(physicalClusterId)) {
             return Result.buildFrom(ResultStatus.CLUSTER_NOT_EXIST);
         }
-        return Result.buildFrom(
-                topicManagerService.modifyTopic(
-                        physicalClusterId,
-                        dto.getTopicName(),
-                        dto.getDescription(),
-                        SpringTool.getUserName()
-                )
+
+        // modify the standby topic first
+        HaASRelationDO relationDO = haASRelationManager.getASRelation(dto.getClusterId(), dto.getTopicName());
+        if (relationDO != null) {
+            if (relationDO.getStandbyClusterPhyId().equals(dto.getClusterId())) {
+                return Result.buildFromRSAndMsg(ResultStatus.OPERATION_FORBIDDEN, "备topic不允许操作!");
+            }
+            ResultStatus rs = topicManagerService.modifyTopic(
+                    relationDO.getStandbyClusterPhyId(),
+                    relationDO.getStandbyResName(),
+                    dto.getDescription(),
+                    SpringTool.getUserName()
+            );
+            if (ResultStatus.SUCCESS.getCode() != rs.getCode()) {
+                return Result.buildFrom(rs);
+            }
+        }
+
+        ResultStatus resultStatus = topicManagerService.modifyTopic(
+                physicalClusterId,
+                dto.getTopicName(),
+                dto.getDescription(),
+                SpringTool.getUserName()
         );
+        return Result.buildFrom(resultStatus);
     }

     @ApiOperation(value = "过期Topic信息", notes = "")
diff --git a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/op/OpClusterController.java b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/op/OpClusterController.java
index 2caaa69b..0ceec850 100644
--- a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/op/OpClusterController.java
+++ b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/op/OpClusterController.java
@@ -1,13 +1,14 @@ package com.xiaojukeji.kafka.manager.web.api.versionone.op;

+import com.xiaojukeji.kafka.manager.common.constant.ApiPrefix;
 import com.xiaojukeji.kafka.manager.common.entity.Result;
 import com.xiaojukeji.kafka.manager.common.entity.ResultStatus;
 import
com.xiaojukeji.kafka.manager.common.entity.dto.op.ControllerPreferredCandidateDTO; import com.xiaojukeji.kafka.manager.common.entity.dto.rd.ClusterDTO; -import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils; -import com.xiaojukeji.kafka.manager.service.service.ClusterService; import com.xiaojukeji.kafka.manager.common.utils.SpringTool; -import com.xiaojukeji.kafka.manager.common.constant.ApiPrefix; +import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils; +import com.xiaojukeji.kafka.manager.service.biz.ha.HaClusterManager; +import com.xiaojukeji.kafka.manager.service.service.ClusterService; import com.xiaojukeji.kafka.manager.web.converters.ClusterModelConverter; import io.swagger.annotations.Api; import io.swagger.annotations.ApiOperation; @@ -26,6 +27,9 @@ public class OpClusterController { @Autowired private ClusterService clusterService; + @Autowired + private HaClusterManager haClusterManager; + @ApiOperation(value = "接入集群") @PostMapping(value = "clusters") @ResponseBody @@ -33,16 +37,14 @@ public class OpClusterController { if (ValidateUtils.isNull(dto) || !dto.legal()) { return Result.buildFrom(ResultStatus.PARAM_ILLEGAL); } - return Result.buildFrom( - clusterService.addNew(ClusterModelConverter.convert2ClusterDO(dto), SpringTool.getUserName()) - ); + return haClusterManager.addNew(ClusterModelConverter.convert2ClusterDO(dto), dto.getActiveClusterId(), SpringTool.getUserName()); } @ApiOperation(value = "删除集群") @DeleteMapping(value = "clusters") @ResponseBody public Result delete(@RequestParam(value = "clusterId") Long clusterId) { - return Result.buildFrom(clusterService.deleteById(clusterId, SpringTool.getUserName())); + return haClusterManager.deleteById(clusterId, SpringTool.getUserName()); } @ApiOperation(value = "修改集群信息") diff --git a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/op/OpHaASSwitchJobController.java b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/op/OpHaASSwitchJobController.java new file mode 100644 index 00000000..a645a638 --- /dev/null +++ b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/op/OpHaASSwitchJobController.java @@ -0,0 +1,87 @@ +package com.xiaojukeji.kafka.manager.web.api.versionone.op; + +import com.xiaojukeji.kafka.manager.common.bizenum.JobLogBizTypEnum; +import com.xiaojukeji.kafka.manager.common.constant.ApiPrefix; +import com.xiaojukeji.kafka.manager.common.entity.Result; +import com.xiaojukeji.kafka.manager.common.entity.ao.ha.job.HaJobState; +import com.xiaojukeji.kafka.manager.common.entity.dto.ha.ASSwitchJobActionDTO; +import com.xiaojukeji.kafka.manager.common.entity.dto.ha.ASSwitchJobDTO; +import com.xiaojukeji.kafka.manager.common.entity.pojo.ha.JobLogDO; +import com.xiaojukeji.kafka.manager.common.entity.vo.ha.job.HaJobDetailVO; +import com.xiaojukeji.kafka.manager.common.entity.vo.rd.job.JobLogVO; +import com.xiaojukeji.kafka.manager.common.entity.vo.rd.job.JobMulLogVO; +import com.xiaojukeji.kafka.manager.common.entity.vo.ha.job.HaJobStateVO; +import com.xiaojukeji.kafka.manager.common.utils.ConvertUtil; +import com.xiaojukeji.kafka.manager.common.utils.SpringTool; +import com.xiaojukeji.kafka.manager.service.biz.job.HaASSwitchJobManager; +import com.xiaojukeji.kafka.manager.service.service.JobLogService; +import io.swagger.annotations.Api; +import io.swagger.annotations.ApiOperation; +import org.springframework.beans.factory.annotation.Autowired; +import org.springframework.validation.annotation.Validated; +import 
org.springframework.web.bind.annotation.*;
+
+import java.util.ArrayList;
+import java.util.List;
+
+
+/**
+ * @author zengqiao
+ * @date 20/4/23
+ */
+@Api(tags = "OP-HA-主备切换Job相关接口(REST)")
+@RestController
+@RequestMapping(ApiPrefix.API_V1_OP_PREFIX)
+public class OpHaASSwitchJobController {
+    @Autowired
+    private JobLogService jobLogService;
+
+    @Autowired
+    private HaASSwitchJobManager haASSwitchJobManager;
+
+    @ApiOperation(value = "任务创建[ActiveStandbySwitch]")
+    @PostMapping(value = "as-switch-jobs")
+    @ResponseBody
+    public Result createJob(@Validated @RequestBody ASSwitchJobDTO dto) {
+        return haASSwitchJobManager.createJob(dto, SpringTool.getUserName());
+    }
+
+    @ApiOperation(value = "任务状态[ActiveStandbySwitch]", notes = "最近一个任务")
+    @GetMapping(value = "as-switch-jobs/{jobId}/job-state")
+    @ResponseBody
+    public Result<HaJobStateVO> jobState(@PathVariable Long jobId) {
+        Result<HaJobState> haResult = haASSwitchJobManager.jobState(jobId);
+        if (haResult.failed()) {
+            return Result.buildFromIgnoreData(haResult);
+        }
+
+        return Result.buildSuc(new HaJobStateVO(haResult.getData()));
+    }
+
+    @ApiOperation(value = "任务详情[ActiveStandbySwitch]", notes = "")
+    @GetMapping(value = "as-switch-jobs/{jobId}/job-detail")
+    @ResponseBody
+    public Result<List<HaJobDetailVO>> jobDetail(@PathVariable Long jobId) {
+        return haASSwitchJobManager.jobDetail(jobId);
+    }
+
+    @ApiOperation(value = "任务日志[ActiveStandbySwitch]", notes = "")
+    @GetMapping(value = "as-switch-jobs/{jobId}/job-logs")
+    @ResponseBody
+    public Result<JobMulLogVO> jobLog(@PathVariable Long jobId, @RequestParam(required = false) Long startLogId) {
+        List<JobLogDO> doList = jobLogService.listLogs(JobLogBizTypEnum.HA_SWITCH_JOB_LOG.getCode(), String.valueOf(jobId), startLogId);
+        List<JobLogVO> voList = doList.isEmpty()? new ArrayList<>(): ConvertUtil.list2List(
+                doList,
+                JobLogVO.class
+        );
+
+        return Result.buildSuc(new JobMulLogVO(voList, startLogId));
+    }
+
+    @ApiOperation(value = "任务操作[ActiveStandbySwitch]", notes = "")
+    @PutMapping(value = "as-switch-jobs/{jobId}/action")
+    @ResponseBody
+    public Result actionJob(@PathVariable Long jobId, @Validated @RequestBody ASSwitchJobActionDTO dto) {
+        return haASSwitchJobManager.actionJob(jobId, dto);
+    }
+}
diff --git a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/op/OpHaRelationsController.java b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/op/OpHaRelationsController.java
new file mode 100644
index 00000000..00dcb02f
--- /dev/null
+++ b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/op/OpHaRelationsController.java
@@ -0,0 +1,130 @@
+package com.xiaojukeji.kafka.manager.web.api.versionone.op;
+
+import com.xiaojukeji.kafka.manager.common.bizenum.ha.HaResTypeEnum;
+import com.xiaojukeji.kafka.manager.common.bizenum.ha.HaStatusEnum;
+import com.xiaojukeji.kafka.manager.common.constant.ApiPrefix;
+import com.xiaojukeji.kafka.manager.common.constant.KafkaConstant;
+import com.xiaojukeji.kafka.manager.common.constant.MsgConstant;
+import com.xiaojukeji.kafka.manager.common.entity.Result;
+import com.xiaojukeji.kafka.manager.common.entity.ResultStatus;
+import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO;
+import com.xiaojukeji.kafka.manager.common.entity.pojo.ha.HaASRelationDO;
+import com.xiaojukeji.kafka.manager.common.utils.ConvertUtil;
+import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
+import com.xiaojukeji.kafka.manager.service.service.ClusterService;
+import com.xiaojukeji.kafka.manager.service.service.ha.HaASRelationService;
+import
com.xiaojukeji.kafka.manager.service.utils.HaTopicCommands;
+import io.swagger.annotations.Api;
+import io.swagger.annotations.ApiOperation;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.springframework.beans.factory.annotation.Autowired;
+import org.springframework.web.bind.annotation.*;
+
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.stream.Collectors;
+
+
+/**
+ * @author zengqiao
+ * @date 20/4/23
+ */
+@Api(tags = "OP-HA-Relations维度相关接口(REST)")
+@RestController
+@RequestMapping(ApiPrefix.API_V1_OP_PREFIX)
+public class OpHaRelationsController {
+    private static final Logger LOGGER = LoggerFactory.getLogger(OpHaRelationsController.class);
+
+    @Autowired
+    private ClusterService clusterService;
+
+    @Autowired
+    private HaASRelationService haASRelationService;
+
+    @ApiOperation(value = "同步Kafka的HA关系到DB")
+    @PostMapping(value = "ha-relations/{clusterPhyId}/dest-db")
+    @ResponseBody
+    public Result syncHaRelationsToDB(@PathVariable Long clusterPhyId) {
+        // fetch the topic active/standby relations from ZK
+        ClusterDO clusterDO = clusterService.getById(clusterPhyId);
+        if (ValidateUtils.isNull(clusterDO)) {
+            return Result.buildFromRSAndMsg(ResultStatus.CLUSTER_NOT_EXIST, MsgConstant.getClusterPhyNotExist(clusterPhyId));
+        }
+
+        Map<String, Properties> haTopicsConfigMap = HaTopicCommands.fetchAllHaTopicConfig(clusterDO);
+        if (haTopicsConfigMap == null) {
+            LOGGER.error("method=processTask||clusterPhyId={}||msg=fetch all ha topic config failed", clusterPhyId);
+            return Result.buildFailure(ResultStatus.ZOOKEEPER_READ_FAILED);
+        }
+
+        // build the HA relations of the current cluster
+        List<HaASRelationDO> doList = haTopicsConfigMap.entrySet()
+                .stream()
+                .map(elem -> getHaASRelation(clusterPhyId, elem.getKey(), elem.getValue()))
+                .filter(relation -> relation != null)
+                .collect(Collectors.toList());
+
+        // update the HA relation table
+        Result rv = haASRelationService.replaceTopicRelationsToDB(clusterPhyId, doList);
+        if (rv.failed()) {
+            LOGGER.error("method=processTask||clusterPhyId={}||result={}||msg=replace topic relation failed", clusterPhyId, rv);
+        }
+
+        return rv;
+    }
+
+//    @ApiOperation(value = "同步DB的HA关系到Kafka")
+//    @PostMapping(value = "ha-relations/{clusterPhyId}/dest-kafka")
+//    @ResponseBody
+//    public Result syncHaRelationsToKafka(@PathVariable Long clusterPhyId) {
+//        // 从ZK获取Topic主备关系信息
+//        ClusterDO clusterDO = clusterService.getById(clusterPhyId);
+//        if (ValidateUtils.isNull(clusterDO)) {
+//            return Result.buildFromRSAndMsg(ResultStatus.CLUSTER_NOT_EXIST, MsgConstant.getClusterPhyNotExist(clusterPhyId));
+//        }
+//
+//        Map haTopicsConfigMap = HaTopicCommands.fetchAllHaTopicConfig(clusterDO);
+//        if (haTopicsConfigMap == null) {
+//            LOGGER.error("method=processTask||clusterPhyId={}||msg=fetch all ha topic config failed", clusterPhyId);
+//            return Result.buildFailure(ResultStatus.ZOOKEEPER_READ_FAILED);
+//        }
+//
+//        // 获取当前集群的HA信息
+//        List doList = haTopicsConfigMap.entrySet()
+//                .stream()
+//                .map(elem -> getHaASRelation(clusterPhyId, elem.getKey(), elem.getValue()))
+//                .filter(relation -> relation != null)
+//                .collect(Collectors.toList());
+//
+//        // 更新HA关系表
+//        Result rv = haASRelationService.replaceTopicRelationsToDB(clusterPhyId, doList);
+//        if (rv.failed()) {
+//            LOGGER.error("method=processTask||clusterPhyId={}||result={}||msg=replace topic relation failed", clusterPhyId, rv);
+//        }
+//
+//        return rv;
+//    }
+
+    private HaASRelationDO getHaASRelation(Long standbyClusterPhyId, String standbyTopicName, Properties props) {
+        Long activeClusterPhyId =
ConvertUtil.string2Long(props.getProperty(KafkaConstant.DIDI_HA_REMOTE_CLUSTER));
+        if (activeClusterPhyId == null) {
+            return null;
+        }
+
+        String activeTopicName = props.getProperty(KafkaConstant.DIDI_HA_REMOTE_TOPIC);
+        if (activeTopicName == null) {
+            activeTopicName = standbyTopicName;
+        }
+
+        return new HaASRelationDO(
+                activeClusterPhyId,
+                activeTopicName,
+                standbyClusterPhyId,
+                standbyTopicName,
+                HaResTypeEnum.TOPIC.getCode(),
+                HaStatusEnum.STABLE.getCode()
+        );
+    }
+}
\ No newline at end of file
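`getHaASRelation` above reverse-engineers the relation from a standby topic's config: the presence of the remote-cluster property marks the topic as a standby, and the remote-topic property is optional because the active topic usually shares the standby's name. A hedged, standalone rendering of that parse step — it assumes the `KafkaConstant` fields resolve to the `didi.ha.remote.*` config keys used elsewhere in this repo, and it simplifies the null-safe `ConvertUtil.string2Long` to a plain parse:

```java
import java.util.Optional;
import java.util.Properties;

/** Sketch of deriving an HA relation from a standby topic's config properties. */
public class HaRelationParseSketch {
    record HaRelation(long activeClusterId, String activeTopic,
                      long standbyClusterId, String standbyTopic) {}

    static Optional<HaRelation> parse(long standbyClusterId, String standbyTopic, Properties props) {
        String remoteCluster = props.getProperty("didi.ha.remote.cluster");
        if (remoteCluster == null) {
            return Optional.empty();  // not a standby topic at all
        }
        // the active-side topic name defaults to the standby topic's own name
        String activeTopic = props.getProperty("didi.ha.remote.topic", standbyTopic);
        // the real code uses a null-safe string2Long; parseLong suffices for the sketch
        return Optional.of(new HaRelation(Long.parseLong(remoteCluster), activeTopic,
                standbyClusterId, standbyTopic));
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("didi.ha.remote.cluster", "1");
        System.out.println(parse(2L, "orders", props));
        // Optional[HaRelation[activeClusterId=1, activeTopic=orders, standbyClusterId=2, standbyTopic=orders]]
    }
}
```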
diff --git a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/op/OpHaTopicController.java b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/op/OpHaTopicController.java
new file mode 100644
index 00000000..89d401c0
--- /dev/null
+++ b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/op/OpHaTopicController.java
@@ -0,0 +1,43 @@
+package com.xiaojukeji.kafka.manager.web.api.versionone.op;
+
+import com.xiaojukeji.kafka.manager.common.constant.ApiPrefix;
+import com.xiaojukeji.kafka.manager.common.entity.Result;
+import com.xiaojukeji.kafka.manager.common.entity.TopicOperationResult;
+import com.xiaojukeji.kafka.manager.common.entity.dto.op.topic.HaTopicRelationDTO;
+import com.xiaojukeji.kafka.manager.common.utils.SpringTool;
+import com.xiaojukeji.kafka.manager.service.biz.ha.HaTopicManager;
+import io.swagger.annotations.Api;
+import io.swagger.annotations.ApiOperation;
+import org.springframework.beans.factory.annotation.Autowired;
+import org.springframework.validation.annotation.Validated;
+import org.springframework.web.bind.annotation.*;
+
+import java.util.List;
+
+/**
+ * Operations on high-availability (active/standby) topics
+ * @author zengqiao
+ * @date 21/5/18
+ */
+@Api(tags = "OP-HA-Topic操作相关接口(REST)")
+@RestController
+@RequestMapping(ApiPrefix.API_V1_OP_PREFIX)
+public class OpHaTopicController {
+
+    @Autowired
+    private HaTopicManager haTopicManager;
+
+    @ApiOperation(value = "高可用Topic绑定")
+    @PostMapping(value = "ha-topics")
+    @ResponseBody
+    public Result<List<TopicOperationResult>> batchCreateHaTopic(@Validated @RequestBody HaTopicRelationDTO dto) {
+        return haTopicManager.batchCreateHaTopic(dto, SpringTool.getUserName());
+    }
+
+    @ApiOperation(value = "高可用topic解绑")
+    @DeleteMapping(value = "ha-topics")
+    @ResponseBody
+    public Result<List<TopicOperationResult>> batchRemoveHaTopic(@Validated @RequestBody HaTopicRelationDTO dto) {
+        return haTopicManager.batchRemoveHaTopic(dto, SpringTool.getUserName());
+    }
+}
diff --git a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/op/OpQuotaController.java b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/op/OpQuotaController.java
index 7d9c70d7..8ef50de1 100644
--- a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/op/OpQuotaController.java
+++ b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/op/OpQuotaController.java
@@ -5,7 +5,9 @@ import com.xiaojukeji.kafka.manager.common.entity.Result;
 import com.xiaojukeji.kafka.manager.common.entity.ResultStatus;
 import com.xiaojukeji.kafka.manager.common.entity.ao.gateway.TopicQuota;
 import com.xiaojukeji.kafka.manager.common.entity.dto.gateway.TopicQuotaDTO;
+import com.xiaojukeji.kafka.manager.common.entity.pojo.ha.HaASRelationDO;
 import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
+import com.xiaojukeji.kafka.manager.service.biz.ha.HaASRelationManager;
 import com.xiaojukeji.kafka.manager.service.service.gateway.QuotaService;
 import io.swagger.annotations.Api;
 import io.swagger.annotations.ApiOperation;
@@ -24,6 +26,9 @@ public class OpQuotaController {
     @Autowired
     private QuotaService quotaService;

+    @Autowired
+    private HaASRelationManager haASRelationManager;
+
     @ApiOperation(value = "配额调整",notes = "配额调整")
     @RequestMapping(value = "topic-quotas",method = RequestMethod.POST)
     @ResponseBody
@@ -32,6 +37,22 @@ public class OpQuotaController {
             // 非空校验
             return Result.buildFrom(ResultStatus.PARAM_ILLEGAL);
         }
+
+        HaASRelationDO relationDO = haASRelationManager.getASRelation(dto.getClusterId(), dto.getTopicName());
+        if (relationDO != null) {
+            if (relationDO.getStandbyClusterPhyId().equals(dto.getClusterId())) {
+                return Result.buildFrom(ResultStatus.OPERATION_FORBIDDEN);
+            }
+            // apply the quota to the standby topic first, on a copy so the request DTO is not clobbered
+            TopicQuota standbyQuota = TopicQuota.buildFrom(dto);
+            standbyQuota.setClusterId(relationDO.getStandbyClusterPhyId());
+            standbyQuota.setTopicName(relationDO.getStandbyResName());
+            ResultStatus resultStatus = quotaService.addTopicQuotaByAuthority(standbyQuota);
+            if (ResultStatus.SUCCESS.getCode() != resultStatus.getCode()) {
+                return Result.buildFrom(resultStatus);
+            }
+        }
+
         return Result.buildFrom(quotaService.addTopicQuotaByAuthority(TopicQuota.buildFrom(dto)));
     }
 }
diff --git a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/op/OpTopicController.java b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/op/OpTopicController.java
index bf7a1340..dcac0fa1 100644
--- a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/op/OpTopicController.java
+++ b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/op/OpTopicController.java
@@ -13,14 +13,17 @@ import com.xiaojukeji.kafka.manager.common.entity.dto.op.topic.TopicExpansionDTO;
 import com.xiaojukeji.kafka.manager.common.entity.dto.op.topic.TopicModificationDTO;
 import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO;
 import com.xiaojukeji.kafka.manager.common.entity.pojo.TopicDO;
+import com.xiaojukeji.kafka.manager.common.entity.pojo.ha.HaASRelationDO;
 import com.xiaojukeji.kafka.manager.common.utils.SpringTool;
 import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
+import com.xiaojukeji.kafka.manager.service.biz.ha.HaASRelationManager;
+import com.xiaojukeji.kafka.manager.service.cache.PhysicalClusterMetadataManager;
 import com.xiaojukeji.kafka.manager.service.service.AdminService;
 import com.xiaojukeji.kafka.manager.service.service.ClusterService;
 import com.xiaojukeji.kafka.manager.service.service.TopicManagerService;
-import com.xiaojukeji.kafka.manager.service.utils.TopicCommands;
 import io.swagger.annotations.Api;
 import io.swagger.annotations.ApiOperation;
+import org.springframework.beans.BeanUtils;
 import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.web.bind.annotation.*;
@@ -45,6 +48,9 @@ public class OpTopicController {
     @Autowired
     private TopicManagerService topicManagerService;
+
+    @Autowired
+    private HaASRelationManager haASRelationManager;

     @ApiOperation(value = "创建Topic")
     @RequestMapping(value = {"topics", "utils/topics"}, method = RequestMethod.POST)
@@ -109,28 +115,23 @@ public class OpTopicController {
     @RequestMapping(value = {"topics", "utils/topics"}, method = RequestMethod.PUT)
     @ResponseBody
     public Result modifyTopic(@RequestBody TopicModificationDTO dto) {
-        Result<ClusterDO> rc = checkParamAndGetClusterDO(dto);
-        if (rc.getCode() != ResultStatus.SUCCESS.getCode()) {
-            return rc;
+        if (!dto.paramLegal()) {
+            return Result.buildFrom(ResultStatus.PARAM_ILLEGAL);
         }
-        ClusterDO clusterDO = rc.getData();
-
-        // 获取属性
-        Properties properties = dto.getProperties();
-        if (ValidateUtils.isNull(properties)) {
-            properties = new Properties();
+        Result rs = topicManagerService.modifyTopic(dto);
+        if (rs.failed()) {
+            return rs;
         }
-        properties.put(KafkaConstant.RETENTION_MS_KEY, String.valueOf(dto.getRetentionTime()));

-        // 操作修改
-        String operator = SpringTool.getUserName();
-        ResultStatus rs = TopicCommands.modifyTopicConfig(clusterDO, dto.getTopicName(), properties);
-        if (!ResultStatus.SUCCESS.equals(rs)) {
-            return Result.buildFrom(rs);
+        // also modify the standby topic
+        HaASRelationDO relationDO = haASRelationManager.getASRelation(dto.getClusterId(), dto.getTopicName());
+        if (relationDO != null && relationDO.getActiveClusterPhyId().equals(dto.getClusterId())) {
+            dto.setClusterId(relationDO.getStandbyClusterPhyId());
+            dto.setTopicName(relationDO.getStandbyResName());
+            rs = topicManagerService.modifyTopic(dto);
         }
-        topicManagerService.modifyTopicByOp(dto.getClusterId(), dto.getTopicName(), dto.getAppId(), dto.getDescription(), operator);
-        return new Result();
+        return rs;
     }

     @ApiOperation(value = "Topic扩分区", notes = "")
@@ -143,22 +144,31 @@ public class OpTopicController {
         List<TopicOperationResult> resultList = new ArrayList<>();
         for (TopicExpansionDTO dto: dtoList) {
-            Result<ClusterDO> rc = checkParamAndGetClusterDO(dto);
-            if (!Constant.SUCCESS.equals(rc.getCode())) {
-                resultList.add(TopicOperationResult.buildFrom(dto.getClusterId(), dto.getTopicName(), rc));
-                continue;
-            }
+            TopicOperationResult result;

-            // 参数检查合法, 开始对Topic进行扩分区
-            ResultStatus statusEnum = adminService.expandPartitions(
-                    rc.getData(),
-                    dto.getTopicName(),
-                    dto.getPartitionNum(),
-                    dto.getRegionId(),
-                    dto.getBrokerIdList(),
-                    SpringTool.getUserName()
-            );
-            resultList.add(TopicOperationResult.buildFrom(dto.getClusterId(), dto.getTopicName(), statusEnum));
+            HaASRelationDO relationDO = haASRelationManager.getASRelation(dto.getClusterId(), dto.getTopicName());
+            if (relationDO != null) {
+                // user-side operations on the standby topic are not allowed
+                if (relationDO.getStandbyClusterPhyId().equals(dto.getClusterId())) {
+                    resultList.add(TopicOperationResult.buildFrom(dto.getClusterId(),
+                            dto.getTopicName(),
+                            ResultStatus.OPERATION_FORBIDDEN));
+                    continue;
+                }
+                // expand the standby topic's partitions first
+                TopicExpansionDTO standbyDto = new TopicExpansionDTO();
+                BeanUtils.copyProperties(dto, standbyDto);
+                standbyDto.setClusterId(relationDO.getStandbyClusterPhyId());
+                standbyDto.setTopicName(relationDO.getStandbyResName());
+                standbyDto.setBrokerIdList(PhysicalClusterMetadataManager.getBrokerIdList(relationDO.getStandbyClusterPhyId()));
+                standbyDto.setRegionId(null);
+                result = topicManagerService.expandTopic(standbyDto);
+                if (ResultStatus.SUCCESS.getCode() != result.getCode()) {
+                    resultList.add(result);
+                    continue;
+                }
+            }
+            resultList.add(topicManagerService.expandTopic(dto));
         }

         for (TopicOperationResult operationResult: resultList) {
@@ -178,6 +188,12 @@
         if (ValidateUtils.isNull(clusterDO)) {
             return Result.buildFrom(ResultStatus.CLUSTER_NOT_EXIST);
         }
+
+        HaASRelationDO relationDO = haASRelationManager.getASRelation(dto.getClusterId(), dto.getTopicName());
+        if (relationDO != null) {
+            return Result.buildFrom(ResultStatus.HA_TOPIC_DELETE_FORBIDDEN);
+        }
+
         return new Result<>(clusterDO);
     }
 }
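The partition-expansion branch above builds the standby-side request by copying the active-side DTO and then overriding everything that is cluster-local. A simplified sketch of that copy-and-override step (the DTO here is a stripped-down stand-in for the real `TopicExpansionDTO`):

```java
import java.util.List;

/** Simplified DTO mirroring for standby-cluster partition expansion. */
public class StandbyExpansionSketch {
    static class TopicExpansionDTO {
        Long clusterId;
        String topicName;
        Integer partitionNum;
        Long regionId;
        List<Integer> brokerIdList;
    }

    /** Build the standby-side request from the active-side one. */
    static TopicExpansionDTO toStandbyRequest(TopicExpansionDTO active,
                                              long standbyClusterId, String standbyTopic,
                                              List<Integer> standbyBrokerIds) {
        TopicExpansionDTO standby = new TopicExpansionDTO();
        standby.partitionNum = active.partitionNum;  // same target partition count
        standby.clusterId = standbyClusterId;        // redirect to the standby cluster
        standby.topicName = standbyTopic;
        standby.brokerIdList = standbyBrokerIds;     // the standby cluster's own brokers
        standby.regionId = null;                     // regions are cluster-local, so drop it
        return standby;
    }
}
```

Nulling `regionId` matters because a region partitions the brokers of one physical cluster; a region ID from the active cluster would be meaningless on the standby, so the standby request falls back to an explicit broker list instead.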
diff --git a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/rd/RdAppController.java b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/rd/RdAppController.java
index 8e0c14cf..288b2eea 100644
--- a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/rd/RdAppController.java
+++ b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/rd/RdAppController.java
@@ -2,7 +2,10 @@ package com.xiaojukeji.kafka.manager.web.api.versionone.rd;

 import com.xiaojukeji.kafka.manager.common.entity.Result;
 import com.xiaojukeji.kafka.manager.common.entity.dto.normal.AppDTO;
+import com.xiaojukeji.kafka.manager.common.entity.dto.rd.AppRelateTopicsDTO;
 import com.xiaojukeji.kafka.manager.common.entity.vo.normal.app.AppVO;
+import com.xiaojukeji.kafka.manager.common.entity.vo.rd.app.AppRelateTopicsVO;
+import com.xiaojukeji.kafka.manager.service.biz.ha.HaAppManager;
 import com.xiaojukeji.kafka.manager.service.service.gateway.AppService;
 import com.xiaojukeji.kafka.manager.common.utils.SpringTool;
 import com.xiaojukeji.kafka.manager.common.constant.ApiPrefix;
@@ -10,6 +13,7 @@ import com.xiaojukeji.kafka.manager.web.converters.AppConverter;
 import io.swagger.annotations.Api;
 import io.swagger.annotations.ApiOperation;
 import org.springframework.beans.factory.annotation.Autowired;
+import org.springframework.validation.annotation.Validated;
 import org.springframework.web.bind.annotation.*;

 import java.util.List;
@@ -25,6 +29,9 @@ public class RdAppController {
     @Autowired
     private AppService appService;

+    @Autowired
+    private HaAppManager haAppManager;
+
     @ApiOperation(value = "App列表", notes = "")
     @RequestMapping(value = "apps", method = RequestMethod.GET)
     @ResponseBody
@@ -40,4 +47,11 @@
             appService.updateByAppId(dto, SpringTool.getUserName(), true)
         );
     }
+
+    @ApiOperation(value = "App关联Topic信息查询", notes = "")
+    @PostMapping(value = "apps/relate-topics")
+    @ResponseBody
+    public Result<List<AppRelateTopicsVO>> appRelateTopics(@Validated @RequestBody AppRelateTopicsDTO dto) {
+        return haAppManager.appRelateTopics(dto.getClusterPhyId(), dto.getFilterTopicNameList());
+    }
 }
\ No newline at end of file
diff --git a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/rd/RdClusterController.java b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/rd/RdClusterController.java
index 69ba8c6d..7090ec6d 100644
--- a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/rd/RdClusterController.java
+++ b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/rd/RdClusterController.java
@@ -1,27 +1,28 @@ package com.xiaojukeji.kafka.manager.web.api.versionone.rd;

 import com.xiaojukeji.kafka.manager.common.bizenum.KafkaClientEnum;
+import com.xiaojukeji.kafka.manager.common.constant.ApiPrefix;
 import com.xiaojukeji.kafka.manager.common.constant.KafkaMetricsCollections;
 import com.xiaojukeji.kafka.manager.common.entity.Result;
-import com.xiaojukeji.kafka.manager.common.entity.ao.cluster.ControllerPreferredCandidate;
-import com.xiaojukeji.kafka.manager.common.entity.vo.normal.cluster.TopicMetadataVO;
-import com.xiaojukeji.kafka.manager.common.entity.vo.rd.cluster.ControllerPreferredCandidateVO;
-import com.xiaojukeji.kafka.manager.common.entity.vo.rd.cluster.RdClusterMetricsVO;
-import com.xiaojukeji.kafka.manager.common.entity.vo.rd.cluster.ClusterBrokerStatusVO;
 import com.xiaojukeji.kafka.manager.common.entity.ao.BrokerOverviewDTO;
+import com.xiaojukeji.kafka.manager.common.entity.ao.cluster.ControllerPreferredCandidate;
 import com.xiaojukeji.kafka.manager.common.entity.pojo.RegionDO;
-import
com.xiaojukeji.kafka.manager.common.entity.vo.rd.KafkaControllerVO;
+import com.xiaojukeji.kafka.manager.common.entity.vo.common.BrokerOverviewVO;
 import com.xiaojukeji.kafka.manager.common.entity.vo.common.RealTimeMetricsVO;
 import com.xiaojukeji.kafka.manager.common.entity.vo.common.TopicOverviewVO;
-import com.xiaojukeji.kafka.manager.common.entity.vo.common.BrokerOverviewVO;
-import com.xiaojukeji.kafka.manager.common.entity.vo.rd.cluster.ClusterDetailVO;
 import com.xiaojukeji.kafka.manager.common.entity.vo.common.TopicThrottleVO;
+import com.xiaojukeji.kafka.manager.common.entity.vo.normal.cluster.TopicMetadataVO;
+import com.xiaojukeji.kafka.manager.common.entity.vo.rd.KafkaControllerVO;
+import com.xiaojukeji.kafka.manager.common.entity.vo.rd.cluster.ClusterBrokerStatusVO;
+import com.xiaojukeji.kafka.manager.common.entity.vo.rd.cluster.ClusterDetailVO;
+import com.xiaojukeji.kafka.manager.common.entity.vo.rd.cluster.ControllerPreferredCandidateVO;
+import com.xiaojukeji.kafka.manager.common.entity.vo.rd.cluster.RdClusterMetricsVO;
 import com.xiaojukeji.kafka.manager.common.utils.DateUtils;
 import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
 import com.xiaojukeji.kafka.manager.service.cache.PhysicalClusterMetadataManager;
 import com.xiaojukeji.kafka.manager.service.service.*;
-import com.xiaojukeji.kafka.manager.common.constant.ApiPrefix;
-import com.xiaojukeji.kafka.manager.web.converters.*;
+import com.xiaojukeji.kafka.manager.web.converters.ClusterModelConverter;
+import com.xiaojukeji.kafka.manager.web.converters.CommonModelConverter;
 import io.swagger.annotations.Api;
 import io.swagger.annotations.ApiOperation;
 import org.springframework.beans.factory.annotation.Autowired;
diff --git a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/rd/RdHaClusterController.java b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/rd/RdHaClusterController.java
new file mode 100644
index 00000000..13264acd
--- /dev/null
+++ b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/rd/RdHaClusterController.java
@@ -0,0 +1,55 @@
+package com.xiaojukeji.kafka.manager.web.api.versionone.rd;
+
+import com.xiaojukeji.kafka.manager.common.constant.ApiPrefix;
+import com.xiaojukeji.kafka.manager.common.entity.Result;
+import com.xiaojukeji.kafka.manager.common.entity.vo.ha.HaClusterTopicVO;
+import com.xiaojukeji.kafka.manager.common.entity.vo.ha.HaClusterVO;
+import com.xiaojukeji.kafka.manager.common.entity.vo.normal.topic.HaClusterTopicHaStatusVO;
+import com.xiaojukeji.kafka.manager.service.biz.ha.HaASRelationManager;
+import com.xiaojukeji.kafka.manager.service.service.ha.HaClusterService;
+import io.swagger.annotations.Api;
+import io.swagger.annotations.ApiOperation;
+import org.springframework.beans.factory.annotation.Autowired;
+import org.springframework.web.bind.annotation.*;
+
+import java.util.List;
+
+
+/**
+ * @author zengqiao
+ * @date 20/4/23
+ */
+@Api(tags = "RD-HA-Cluster维度相关接口(REST)")
+@RestController
+@RequestMapping(ApiPrefix.API_V1_RD_PREFIX)
+public class RdHaClusterController {
+    @Autowired
+    private HaASRelationManager haASRelationManager;
+
+    @Autowired
+    private HaClusterService haClusterService;
+
+    @ApiOperation(value = "集群-主备Topic列表", notes = "如果传入secondClusterId,则主备关系必须是firstClusterId与secondClusterId的Topic")
+    @GetMapping(value = "clusters/{firstClusterId}/ha-topics")
+    @ResponseBody
+    public Result<List<HaClusterTopicVO>> getHATopics(@PathVariable Long firstClusterId,
+                                                      @RequestParam(required =
diff --git a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/rd/RdHaClusterController.java b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/rd/RdHaClusterController.java
new file mode 100644
index 00000000..13264acd
--- /dev/null
+++ b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/rd/RdHaClusterController.java
@@ -0,0 +1,55 @@
+package com.xiaojukeji.kafka.manager.web.api.versionone.rd;
+
+import com.xiaojukeji.kafka.manager.common.constant.ApiPrefix;
+import com.xiaojukeji.kafka.manager.common.entity.Result;
+import com.xiaojukeji.kafka.manager.common.entity.vo.ha.HaClusterTopicVO;
+import com.xiaojukeji.kafka.manager.common.entity.vo.ha.HaClusterVO;
+import com.xiaojukeji.kafka.manager.common.entity.vo.normal.topic.HaClusterTopicHaStatusVO;
+import com.xiaojukeji.kafka.manager.service.biz.ha.HaASRelationManager;
+import com.xiaojukeji.kafka.manager.service.service.ha.HaClusterService;
+import io.swagger.annotations.Api;
+import io.swagger.annotations.ApiOperation;
+import org.springframework.beans.factory.annotation.Autowired;
+import org.springframework.web.bind.annotation.*;
+
+import java.util.List;
+
+
+/**
+ * @author zengqiao
+ * @date 20/4/23
+ */
+@Api(tags = "RD-HA-Cluster维度相关接口(REST)")
+@RestController
+@RequestMapping(ApiPrefix.API_V1_RD_PREFIX)
+public class RdHaClusterController {
+    @Autowired
+    private HaASRelationManager haASRelationManager;
+
+    @Autowired
+    private HaClusterService haClusterService;
+
+    @ApiOperation(value = "集群-主备Topic列表", notes = "如果传入secondClusterId,则主备关系必须是firstClusterId与secondClusterId的Topic")
+    @GetMapping(value = "clusters/{firstClusterId}/ha-topics")
+    @ResponseBody
+    public Result<List<HaClusterTopicVO>> getHATopics(@PathVariable Long firstClusterId,
+                                                      @RequestParam(required = false) Long secondClusterId,
+                                                      @RequestParam(required = false, defaultValue = "true") Boolean filterSystemTopics) {
+        return Result.buildSuc(haASRelationManager.getHATopics(firstClusterId, secondClusterId, filterSystemTopics != null && filterSystemTopics));
+    }
+
+    @ApiOperation(value = "集群基本信息列表", notes = "含高可用集群信息")
+    @GetMapping(value = "clusters/ha/basic-info")
+    @ResponseBody
+    public Result<List<HaClusterVO>> getClusterBasicInfo() {
+        return haClusterService.listAllHA();
+    }
+
+    @ApiOperation(value = "集群Topic高可用状态信息", notes = "")
+    @GetMapping(value = "clusters/{firstClusterId}/ha-topics/status")
+    @ResponseBody
+    public Result<List<HaClusterTopicHaStatusVO>> listHaStatusTopics(@PathVariable Long firstClusterId,
+                                                                     @RequestParam(required = false, defaultValue = "true") Boolean checkMetadata) {
+        return haASRelationManager.listHaStatusTopics(firstClusterId, checkMetadata);
+    }
+}
diff --git a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/config/DataSourceConfig.java b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/config/DataSourceConfig.java
index 2d2a003a..07a2fb61 100644
--- a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/config/DataSourceConfig.java
+++ b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/config/DataSourceConfig.java
@@ -1,8 +1,13 @@
 package com.xiaojukeji.kafka.manager.web.config;
 
+import com.baomidou.mybatisplus.annotation.DbType;
+import com.baomidou.mybatisplus.annotation.IdType;
+import com.baomidou.mybatisplus.core.config.GlobalConfig;
+import com.baomidou.mybatisplus.extension.plugins.PaginationInterceptor;
+import com.baomidou.mybatisplus.extension.spring.MybatisSqlSessionFactoryBean;
 import org.apache.ibatis.session.SqlSessionFactory;
-import org.mybatis.spring.SqlSessionFactoryBean;
 import org.mybatis.spring.SqlSessionTemplate;
+import org.mybatis.spring.annotation.MapperScan;
 import org.springframework.beans.factory.annotation.Qualifier;
 import org.springframework.boot.context.properties.ConfigurationProperties;
 import org.springframework.boot.jdbc.DataSourceBuilder;
@@ -19,6 +24,7 @@ import javax.sql.DataSource;
  * @date 20/3/17
  */
 @Configuration
+@MapperScan("com.xiaojukeji.kafka.manager.dao.ha")
 public class DataSourceConfig {
     @Bean(name = "dataSource")
     @ConfigurationProperties(prefix = "spring.datasource.kafka-manager")
@@ -30,10 +36,15 @@
     @Bean(name = "sqlSessionFactory")
     @Primary
     public SqlSessionFactory sqlSessionFactory(@Qualifier("dataSource") DataSource dataSource) throws Exception {
-        SqlSessionFactoryBean bean = new SqlSessionFactoryBean();
+        MybatisSqlSessionFactoryBean bean = new MybatisSqlSessionFactoryBean();
         bean.setDataSource(dataSource);
         bean.setMapperLocations(new PathMatchingResourcePatternResolver().getResources("classpath:mapper/*.xml"));
         bean.setConfigLocation(new PathMatchingResourcePatternResolver().getResource("classpath:mybatis-config.xml"));
+        bean.setGlobalConfig(globalConfig());
+
+        // Register the pagination plugin; pagination does not take effect without it
+        bean.setPlugins(paginationInterceptor());
+
         return bean.getObject();
     }
 
@@ -48,4 +59,21 @@
     public SqlSessionTemplate sqlSessionTemplate(@Qualifier("sqlSessionFactory") SqlSessionFactory sqlSessionFactory) throws Exception {
         return new SqlSessionTemplate(sqlSessionFactory);
     }
+
+    @Bean
+    public GlobalConfig globalConfig() {
+        GlobalConfig globalConfig = new GlobalConfig();
+        globalConfig.setBanner(false);
+        GlobalConfig.DbConfig dbConfig = new GlobalConfig.DbConfig();
+        dbConfig.setIdType(IdType.AUTO);
+        globalConfig.setDbConfig(dbConfig);
+        return globalConfig;
+    }
+
+    @Bean
+    public PaginationInterceptor paginationInterceptor() {
+        PaginationInterceptor page = new PaginationInterceptor();
+        page.setDbType(DbType.MYSQL);
+        return page;
+    }
 }
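The `DataSourceConfig` change above swaps the stock `SqlSessionFactoryBean` for MyBatis-Plus' `MybatisSqlSessionFactoryBean` and registers a `PaginationInterceptor`, without which `selectPage` queries would silently return the full, unpaginated result set. A minimal sketch of how a mapper under the newly scanned `com.xiaojukeji.kafka.manager.dao.ha` package could use it; the `HaASRelationDao` mapper, the `HaASRelationDO` entity, and the column name are hypothetical:

```java
package com.xiaojukeji.kafka.manager.dao.ha;

import com.baomidou.mybatisplus.core.conditions.query.QueryWrapper;
import com.baomidou.mybatisplus.core.mapper.BaseMapper;
import com.baomidou.mybatisplus.core.metadata.IPage;
import com.baomidou.mybatisplus.extension.plugins.pagination.Page;

// Hypothetical entity; the project's real DO classes live elsewhere.
class HaASRelationDO {
    private Long id;
    private Long activeClusterPhyId;

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
    public Long getActiveClusterPhyId() { return activeClusterPhyId; }
    public void setActiveClusterPhyId(Long activeClusterPhyId) { this.activeClusterPhyId = activeClusterPhyId; }
}

// Hypothetical mapper: extending BaseMapper yields CRUD plus selectPage once
// the package is picked up by @MapperScan("com.xiaojukeji.kafka.manager.dao.ha").
interface HaASRelationDao extends BaseMapper<HaASRelationDO> {
}

class HaRelationPageQuery {
    // Fetches page 1 (10 rows). The PaginationInterceptor bean rewrites the
    // SQL with a LIMIT clause; without it, the whole table would come back.
    IPage<HaASRelationDO> firstPage(HaASRelationDao dao, Long activeClusterPhyId) {
        Page<HaASRelationDO> page = new Page<>(1, 10);
        QueryWrapper<HaASRelationDO> wrapper = new QueryWrapper<>();
        wrapper.eq("active_cluster_phy_id", activeClusterPhyId);
        return dao.selectPage(page, wrapper);
    }
}
```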
diff --git a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/converters/ClusterModelConverter.java b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/converters/ClusterModelConverter.java
index d92967dd..8520d32a 100644
--- a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/converters/ClusterModelConverter.java
+++ b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/converters/ClusterModelConverter.java
@@ -30,6 +30,7 @@ import com.xiaojukeji.kafka.manager.common.entity.pojo.ControllerDO;
 import com.xiaojukeji.kafka.manager.common.entity.pojo.RegionDO;
 import com.xiaojukeji.kafka.manager.service.cache.PhysicalClusterMetadataManager;
 import com.xiaojukeji.kafka.manager.service.utils.MetricsConvertUtils;
+import org.springframework.beans.BeanUtils;
 
 import java.util.*;
 
@@ -89,7 +90,7 @@
             return null;
         }
         ClusterDetailVO vo = new ClusterDetailVO();
-        CopyUtils.copyProperties(vo, dto);
+        BeanUtils.copyProperties(dto, vo);
         if (ValidateUtils.isNull(vo.getRegionNum())) {
             vo.setRegionNum(0);
         }
diff --git a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/converters/TopicModelConverter.java b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/converters/TopicModelConverter.java
index c7364cb5..4a2270b1 100644
--- a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/converters/TopicModelConverter.java
+++ b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/converters/TopicModelConverter.java
@@ -39,6 +39,7 @@ public class TopicModelConverter {
         vo.setDescription(dto.getDescription());
         vo.setBootstrapServers("");
         vo.setRegionNameList(dto.getRegionNameList());
+        vo.setHaRelation(dto.getHaRelation());
         if (!ValidateUtils.isNull(clusterDO)) {
             vo.setBootstrapServers(clusterDO.getBootstrapServers());
         }
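The `ClusterModelConverter` change above replaces the in-house `CopyUtils.copyProperties(vo, dto)` with Spring's `BeanUtils.copyProperties(dto, vo)`. The flipped arguments suggest the old helper took `(target, source)` in the Apache Commons style, whereas Spring's signature is `(source, target)`; reversing them would copy the empty VO back over the source DTO and fail silently. A small self-contained check of the direction, using two hypothetical beans:

```java
import org.springframework.beans.BeanUtils;

// Demonstrates the (source, target) argument order of Spring's BeanUtils.
// Source and Target are hypothetical beans with one matching property.
public class CopyOrderDemo {
    public static class Source {
        private String name = "cluster-1";
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    public static class Target {
        private String name;
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    public static void main(String[] args) {
        Source src = new Source();
        Target dst = new Target();
        BeanUtils.copyProperties(src, dst); // copies src.name into dst.name
        System.out.println(dst.getName());  // prints "cluster-1"
    }
}
```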
diff --git a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/handler/CustomGlobalExceptionHandler.java b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/handler/CustomGlobalExceptionHandler.java
new file mode 100644
index 00000000..8f36d180
--- /dev/null
+++ b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/handler/CustomGlobalExceptionHandler.java
@@ -0,0 +1,47 @@
+package com.xiaojukeji.kafka.manager.web.handler;
+
+import com.xiaojukeji.kafka.manager.common.entity.Result;
+import com.xiaojukeji.kafka.manager.common.entity.ResultStatus;
+import com.xiaojukeji.kafka.manager.common.utils.ConvertUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.springframework.validation.FieldError;
+import org.springframework.web.bind.MethodArgumentNotValidException;
+import org.springframework.web.bind.annotation.ExceptionHandler;
+import org.springframework.web.bind.annotation.RestControllerAdvice;
+
+import java.util.List;
+import java.util.stream.Collectors;
+
+@RestControllerAdvice
+public class CustomGlobalExceptionHandler {
+    private static final Logger LOGGER = LoggerFactory.getLogger(CustomGlobalExceptionHandler.class);
+
+    /**
+     * Handle parameter-validation failures and return the field error messages
+     * @param me the validation exception
+     * @return
+     */
+    @ExceptionHandler(MethodArgumentNotValidException.class)
+    public Result methodArgumentNotValidException(MethodArgumentNotValidException me) {
+        List<FieldError> fieldErrorList = me.getBindingResult().getFieldErrors();
+
+        List<String> errorList = fieldErrorList.stream().map(elem -> elem.getDefaultMessage()).collect(Collectors.toList());
+
+        return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, ConvertUtil.list2String(errorList, ","));
+    }
+
+    @ExceptionHandler(NullPointerException.class)
+    public Result handleNullPointerException(Exception e) {
+        LOGGER.error("method=handleNullPointerException||errMsg=exception", e);
+
+        return Result.buildFromRSAndMsg(ResultStatus.FAIL, "服务空指针异常");
+    }
+
+    @ExceptionHandler(Exception.class)
+    public Result handleException(Exception e) {
+        LOGGER.error("method=handleException||errMsg=exception", e);
+
+        return Result.buildFromRSAndMsg(ResultStatus.FAIL, e.getMessage());
+    }
+}
diff --git a/kafka-manager-web/src/main/resources/application.yml b/kafka-manager-web/src/main/resources/application.yml
index 46ac7134..efe2ff82 100644
--- a/kafka-manager-web/src/main/resources/application.yml
+++ b/kafka-manager-web/src/main/resources/application.yml
@@ -13,9 +13,9 @@ spring:
     active: dev
   datasource:
     kafka-manager:
-      jdbc-url: jdbc:mysql://116.85.13.90:3306/logi_kafka_manager?characterEncoding=UTF-8&useSSL=false&serverTimezone=GMT%2B8
+      jdbc-url: jdbc:mysql://localhost:3306/logi_kafka_manager?characterEncoding=UTF-8&useSSL=false&serverTimezone=GMT%2B8
       username: root
-      password: DiDi2020@
+      password: 123456
       driver-class-name: com.mysql.cj.jdbc.Driver
   main:
     allow-bean-definition-overriding: true
@@ -127,3 +127,6 @@ notify:
     topic-name: didi-kafka-notify
   order:
     detail-url: http://127.0.0.1
+
+d-kafka:
+  gateway-zk: 127.0.0.1:2181/sd
\ No newline at end of file
diff --git a/pom.xml b/pom.xml
index d8c4411e..67662126 100644
--- a/pom.xml
+++ b/pom.xml
@@ -16,7 +16,7 @@
 
-        2.6.1
+        2.8.0_e
         2.1.18.RELEASE
         2.9.2
         1.5.21
@@ -29,6 +29,7 @@
         UTF-8
         8.5.72
         2.16.0
+        <mybatis-plus.version>3.3.2</mybatis-plus.version>
         3.0.0
         1.2.9
@@ -113,21 +114,11 @@
         1.2.16
 
-
+
-        <dependency>
-            <groupId>org.mybatis</groupId>
-            <artifactId>mybatis</artifactId>
-            <version>3.4.6</version>
-        </dependency>
-        <dependency>
-            <groupId>org.mybatis</groupId>
-            <artifactId>mybatis-spring</artifactId>
-            <version>1.3.2</version>
-        </dependency>
-        <dependency>
-            <groupId>org.mybatis.spring.boot</groupId>
-            <artifactId>mybatis-spring-boot-starter</artifactId>
-            <version>1.3.2</version>
-        </dependency>
+        <dependency>
+            <groupId>com.baomidou</groupId>
+            <artifactId>mybatis-plus-boot-starter</artifactId>
+            <version>${mybatis-plus.version}</version>
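Finally, the new `d-kafka.gateway-zk` entry in `application.yml` points the manager at the gateway's ZooKeeper address. A minimal sketch of how such a property might be consumed; only the property key and its sample value come from the diff, while the holder class and field name are hypothetical:

```java
package com.xiaojukeji.kafka.manager.web.config;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

// Hypothetical holder for the gateway ZooKeeper address added in application.yml.
@Component
public class GatewayZkProperties {
    // Binds d-kafka.gateway-zk, e.g. "127.0.0.1:2181/sd"
    @Value("${d-kafka.gateway-zk}")
    private String gatewayZk;

    public String getGatewayZk() {
        return gatewayZk;
    }
}
```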