diff --git a/README.md b/README.md index 628d5555..aaa7e1d8 100644 --- a/README.md +++ b/README.md @@ -5,66 +5,89 @@ **一站式`Apache Kafka`集群指标监控与运维管控平台** ---- - -## 主要功能特性 - -### 快速体验 -- 体验地址 http://117.51.146.109:8080 账号密码 admin/admin - -### 集群监控维度 - -- 多版本集群管控,支持从`0.10.2`到`2.x`版本; -- 集群Topic、Broker等多维度历史与实时关键指标查看; -### 集群管控维度 - -- 集群运维,包括逻辑Region方式管理集群 -- Broker运维,包括优先副本选举 -- Topic运维,包括创建、查询、扩容、修改属性、数据采样及迁移等; -- 消费组运维,包括指定时间或指定偏移两种方式进行重置消费偏移 +阅读本README文档,您可以了解到滴滴Logi-KafkaManager的用户群体、产品定位等信息,并通过体验地址,快速体验Kafka集群指标监控与运维管控的全流程。
若滴滴Logi-KafkaManager已在贵司的生产环境进行使用,并想要获得官方更好地支持和指导,可以通过[`OCE认证`](http://obsuite.didiyun.com/open/openAuth),加入官方交流平台。 -### 用户使用维度 +## 1 产品简介 +滴滴Logi-KafkaManager脱胎于滴滴内部多年的Kafka运营实践经验,是面向Kafka用户、Kafka运维人员打造的共享多租户Kafka云平台。专注于Kafka运维管控、监控告警、资源治理等核心场景,经历过大规模集群、海量大数据的考验。内部满意度高达90%的同时,还与多家知名企业达成商业化合作。 -- Kafka用户、Kafka研发、Kafka运维 视角区分 -- Kafka用户、Kafka研发、Kafka运维 权限区分 +### 1.1 快速体验地址 +- 体验地址 http://117.51.146.109:8080 账号密码 admin/admin + +### 1.2 体验地图 +相比较于同类产品的用户视角单一(大多为管理员视角),滴滴Logi-KafkaManager建立了基于分角色、多场景视角的体验地图。分别是:**用户体验地图、运维体验地图、运营体验地图** + +#### 1.2.1 用户体验地图 +- 平台租户申请  :申请应用(App)作为Kafka中的用户名,并用 AppID+password作为身份验证 +- 集群资源申请  :按需申请、按需使用。可使用平台提供的共享集群,也可为应用申请独立的集群 +- Topic   申   请  :可根据应用(App)创建Topic,或者申请其他topic的读写权限 +- Topic   运   维  :Topic数据采样、调整配额、申请分区等操作 +- 指   标  监   控  :基于Topic生产消费各环节耗时统计,监控不同分位数性能指标 +- 消 费 组 运 维 :支持将消费偏移重置至指定时间或指定位置 + +#### 1.2.2 运维体验地图 +- 多版本集群管控  :支持从`0.10.2`到`2.x`版本 +- 集    群    监   控  :集群Topic、Broker等多维度历史与实时关键指标查看,建立健康分体系 +- 集    群    运   维  :划分部分Broker作为Region,使用Region定义资源划分单位,并按照业务、保障能力区分逻辑集群 +- Broker    运    维  :包括优先副本选举等操作 +- Topic      运    维  :包括创建、查询、扩容、修改属性、迁移、下线等 -## kafka-manager架构图 +#### 1.2.3 运营体验地图 +- 资  源  治  理  :沉淀资源治理方法。针对Topic分区热点、分区不足等高频常见问题,沉淀资源治理方法,实现资源治理专家化 +- 资  源  审  批  :工单体系。Topic创建、调整配额、申请分区等操作,由专业运维人员审批,规范资源使用,保障平台平稳运行 +- 账  单  体  系  :成本控制。Topic资源、集群资源按需申请、按需使用。根据流量核算费用,帮助企业建设大数据成本核算体系 + +### 1.3 核心优势 +- 高 效 的 问 题 定 位  :监控多项核心指标,统计不同分位数据,提供种类丰富的指标监控报表,帮助用户、运维人员快速高效定位问题 +- 便 捷 的 集 群 运 维  :按照Region定义集群资源划分单位,将逻辑集群根据保障等级划分。在方便资源隔离、提高扩展能力的同时,实现对服务端的强管控 +- 专 业 的 资 源 治 理  :基于滴滴内部多年运营实践,沉淀资源治理方法,建立健康分体系。针对Topic分区热点、分区不足等高频常见问题,实现资源治理专家化 +- 友 好 的 运 维 生 态  :与滴滴夜莺监控告警系统打通,集成监控告警、集群部署、集群升级等能力。形成运维生态,凝练专家服务,使运维更高效 + +### 1.4 滴滴Logi-KafkaManager架构图 ![kafka-manager-arch](https://img-ys011.didistatic.com/static/dicloudpub/do1_xgDHNDLj2ChKxctSuf72) -## 相关文档 +## 2 相关文档 -- [kafka-manager 安装手册](docs/install_guide/install_guide_cn.md) -- [kafka-manager 接入集群](docs/user_guide/add_cluster/add_cluster.md) -- [kafka-manager 
用户使用手册](docs/user_guide/user_guide_cn.md) -- [kafka-manager FAQ](docs/user_guide/faq.md) +### 2.1 产品文档 +- [滴滴Logi-KafkaManager 安装手册](docs/install_guide/install_guide_cn.md) +- [滴滴Logi-KafkaManager 接入集群](docs/user_guide/add_cluster/add_cluster.md) +- [滴滴Logi-KafkaManager 用户使用手册](docs/user_guide/user_guide_cn.md) +- [滴滴Logi-KafkaManager FAQ](docs/user_guide/faq.md) -## 钉钉交流群 +### 2.2 社区文章 +- [滴滴云官网产品介绍](https://www.didiyun.com/production/logi-KafkaManager.html) +- [7年沉淀之作--滴滴Logi日志服务套件](https://mp.weixin.qq.com/s/-KQp-Qo3WKEOc9wIR2iFnw) +- [滴滴Logi-KafkaManager 一站式Kafka监控与管控平台](https://mp.weixin.qq.com/s/9qSZIkqCnU6u9nLMvOOjIQ) +- [滴滴Logi-KafkaManager 开源之路](https://xie.infoq.cn/article/0223091a99e697412073c0d64) +- [滴滴Logi-KafkaManager 系列视频教程](https://mp.weixin.qq.com/s/9X7gH0tptHPtfjPPSdGO8g) +- [kafka实践(十五):滴滴开源Kafka管控平台 Logi-KafkaManager研究--A叶子叶来](https://blog.csdn.net/yezonggang/article/details/113106244) -![dingding_group](./docs/assets/images/common/dingding_group.jpg) +## 3 滴滴Logi开源用户钉钉交流群 + +![dingding_group](./docs/assets/images/common/dingding_group.jpg) 钉钉群ID:32821440 -## OCE认证 -OCE是一个认证机制和交流平台,为Logi-KafkaManager生产用户量身打造,我们会为OCE企业提供更好的技术支持,比如专属的技术沙龙、企业一对一的交流机会、专属的答疑群等,如果贵司Logi-KafkaManager上了生产,[快来加入吧](http://obsuite.didiyun.com/open/openAuth) +## 4 OCE认证 +OCE是一个认证机制和交流平台,为滴滴Logi-KafkaManager生产用户量身打造,我们会为OCE企业提供更好的技术支持,比如专属的技术沙龙、企业一对一的交流机会、专属的答疑群等,如果贵司Logi-KafkaManager上了生产,[快来加入吧](http://obsuite.didiyun.com/open/openAuth) -## 项目成员 +## 5 项目成员 -### 内部核心人员 +### 5.1 内部核心人员 `iceyuhui`、`liuyaguang`、`limengmonty`、`zhangliangmike`、`nullhuangyiming`、`zengqiao`、`eilenexuzhe`、`huangjiaweihjw`、`zhaoyinrui`、`marzkonglingxu`、`joysunchao` -### 外部贡献者 +### 5.2 外部贡献者 `fangjunyu`、`zhoutaiyang` -## 协议 +## 6 协议 `kafka-manager`基于`Apache-2.0`协议进行分发和使用,更多信息参见[协议文件](./LICENSE) diff --git a/Releases_Notes.md b/Releases_Notes.md new file mode 100644 index 00000000..46b5753e --- /dev/null +++ b/Releases_Notes.md @@ -0,0 +1,97 @@ + +--- + 
+![kafka-manager-logo](./docs/assets/images/common/logo_name.png) + +**一站式`Apache Kafka`集群指标监控与运维管控平台** + +--- + +## v2.3.0 + +版本上线时间:2021-02-08 + + +### 能力提升 + +- 新增支持docker化部署 +- 可指定Broker作为候选controller +- 可新增并管理网关配置 +- 可获取消费组状态 +- 增加集群的JMX认证 + +### 体验优化 + +- 优化编辑用户角色、修改密码的流程 +- 新增consumerID的搜索功能 +- 优化“Topic连接信息”、“消费组重置消费偏移”、“修改Topic保存时间”的文案提示 +- 在相应位置增加《资源申请文档》链接 + +### bug修复 + +- 修复Broker监控图表时间轴展示错误的问题 +- 修复创建夜莺监控告警规则时,使用的告警周期的单位不正确的问题 + + + +## v2.2.0 + +版本上线时间:2021-01-25 + + + +### 能力提升 + +- 优化工单批量操作流程 +- 增加获取Topic75分位/99分位的实时耗时数据 +- 增加定时任务,可将无主未落DB的Topic定期写入DB + +### 体验优化 + +- 在相应位置增加《集群接入文档》链接 +- 优化物理集群、逻辑集群含义 +- 在Topic详情页、Topic扩分区操作弹窗增加展示Topic所属Region的信息 +- 优化Topic审批时,Topic数据保存时间的配置流程 +- 优化Topic/应用申请、审批时的错误提示文案 +- 优化Topic数据采样的操作项文案 +- 优化运维人员删除Topic时的提示文案 +- 优化运维人员删除Region的删除逻辑与提示文案 +- 优化运维人员删除逻辑集群的提示文案 +- 优化上传集群配置文件时的文件类型限制条件 + +### bug修复 + +- 修复填写应用名称时校验特殊字符出错的问题 +- 修复普通用户越权访问应用详情的问题 +- 修复由于Kafka版本升级,导致的数据压缩格式无法获取的问题 +- 修复删除逻辑集群或Topic之后,界面依旧展示的问题 +- 修复进行Leader rebalance操作时执行结果重复提示的问题 + + +## v2.1.0 + +版本上线时间:2020-12-19 + + + +### 体验优化 + +- 优化页面加载时的背景样式 +- 优化普通用户申请Topic权限的流程 +- 优化Topic申请配额、申请分区的权限限制 +- 优化取消Topic权限的文案提示 +- 优化申请配额表单的表单项名称 +- 优化重置消费偏移的操作流程 +- 优化创建Topic迁移任务的表单内容 +- 优化Topic扩分区操作的弹窗界面样式 +- 优化集群Broker监控可视化图表样式 +- 优化创建逻辑集群的表单内容 +- 优化集群安全协议的提示文案 + +### bug修复 + +- 修复偶发性重置消费偏移失败的问题 + + + + diff --git a/build.sh b/build.sh index da5d20ef..f3ea8642 100644 --- a/build.sh +++ b/build.sh @@ -4,8 +4,9 @@ cd $workspace ## constant OUTPUT_DIR=./output -KM_VERSION=2.1.0 -APP_NAME=kafka-manager-$KM_VERSION +KM_VERSION=2.3.0 +APP_NAME=kafka-manager +APP_DIR=${APP_NAME}-${KM_VERSION} MYSQL_TABLE_SQL_FILE=./docs/install_guide/create_mysql_table.sql CONFIG_FILE=./kafka-manager-web/src/main/resources/application.yml @@ -28,15 +29,15 @@ function build() { function make_output() { # 新建output目录 rm -rf ${OUTPUT_DIR} &>/dev/null - mkdir -p ${OUTPUT_DIR}/${APP_NAME} &>/dev/null + mkdir -p ${OUTPUT_DIR}/${APP_DIR} &>/dev/null # 填充output目录, output内的内容 ( - cp -rf 
${MYSQL_TABLE_SQL_FILE} ${OUTPUT_DIR}/${APP_NAME} && # 拷贝 sql 初始化脚本 至output目录 - cp -rf ${CONFIG_FILE} ${OUTPUT_DIR}/${APP_NAME} && # 拷贝 application.yml 至output目录 + cp -rf ${MYSQL_TABLE_SQL_FILE} ${OUTPUT_DIR}/${APP_DIR} && # 拷贝 sql 初始化脚本 至output目录 + cp -rf ${CONFIG_FILE} ${OUTPUT_DIR}/${APP_DIR} && # 拷贝 application.yml 至output目录 # 拷贝程序包到output路径 - cp kafka-manager-web/target/kafka-manager-web-${KM_VERSION}-SNAPSHOT.jar ${OUTPUT_DIR}/${APP_NAME}/${APP_NAME}-SNAPSHOT.jar + cp kafka-manager-web/target/kafka-manager-web-${KM_VERSION}-SNAPSHOT.jar ${OUTPUT_DIR}/${APP_DIR}/${APP_NAME}.jar echo -e "make output ok." ) || { echo -e "make output error"; exit 2; } # 填充output目录失败后, 退出码为 非0 } @@ -44,7 +45,7 @@ function make_output() { function make_package() { # 压缩output目录 ( - cd ${OUTPUT_DIR} && tar cvzf ${APP_NAME}.tar.gz ${APP_NAME} + cd ${OUTPUT_DIR} && tar cvzf ${APP_DIR}.tar.gz ${APP_DIR} echo -e "make package ok." ) || { echo -e "make package error"; exit 2; } # 压缩output目录失败后, 退出码为 非0 } diff --git a/container/dockerfiles/Dockerfile b/container/dockerfiles/Dockerfile new file mode 100644 index 00000000..d8a3d158 --- /dev/null +++ b/container/dockerfiles/Dockerfile @@ -0,0 +1,44 @@ +FROM openjdk:8-jdk-alpine3.9 + +LABEL author="yangvipguang" + +ENV VERSION 2.1.0 +ENV JAR_PATH kafka-manager-web/target +COPY $JAR_PATH/kafka-manager-web-$VERSION-SNAPSHOT.jar /tmp/app.jar +COPY $JAR_PATH/application.yml /km/ + +RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/g' /etc/apk/repositories +RUN apk add --no-cache --virtual .build-deps \ + font-adobe-100dpi \ + ttf-dejavu \ + fontconfig \ + curl \ + apr \ + apr-util \ + apr-dev \ + tomcat-native \ + && apk del .build-deps + +ENV AGENT_HOME /opt/agent/ + +WORKDIR /tmp +COPY docker-depends/config.yaml $AGENT_HOME +COPY docker-depends/jmx_prometheus_javaagent-0.14.0.jar $AGENT_HOME + +ENV JAVA_AGENT="-javaagent:$AGENT_HOME/jmx_prometheus_javaagent-0.14.0.jar=9999:$AGENT_HOME/config.yaml" + +ENV JAVA_HEAP_OPTS="-Xms1024M -Xmx1024M 
-Xmn100M " + +ENV JAVA_OPTS="-verbose:gc \ + -XX:+PrintGC -XX:+PrintGCDetails -XX:+PrintHeapAtGC -Xloggc:/tmp/gc.log -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps \ + -XX:MaxMetaspaceSize=256M -XX:+DisableExplicitGC -XX:+UseStringDeduplication \ + -XX:+UseG1GC -XX:+HeapDumpOnOutOfMemoryError -XX:-UseContainerSupport" +#-Xlog:gc -Xlog:gc* -Xlog:gc+heap=trace -Xlog:safepoint + +EXPOSE 8080 9999 + +ENTRYPOINT ["sh","-c","java -jar $JAVA_HEAP_OPTS $JAVA_OPTS /tmp/app.jar --spring.config.location=/km/application.yml"] + +## 默认不带Prometheus JMX监控,需要可以自行取消以下注释并注释上面一行默认Entrypoint 命令。 +## ENTRYPOINT ["sh","-c","java -jar $JAVA_AGENT $JAVA_HEAP_OPTS $JAVA_OPTS /tmp/app.jar --spring.config.location=/km/application.yml"] + diff --git a/container/dockerfiles/docker-depends/config.yaml b/container/dockerfiles/docker-depends/config.yaml new file mode 100644 index 00000000..d4b7b547 --- /dev/null +++ b/container/dockerfiles/docker-depends/config.yaml @@ -0,0 +1,5 @@ +--- + startDelaySeconds: 0 + ssl: false + lowercaseOutputName: false + lowercaseOutputLabelNames: false diff --git a/container/helm/.helmignore b/container/helm/.helmignore new file mode 100644 index 00000000..0e8a0eb3 --- /dev/null +++ b/container/helm/.helmignore @@ -0,0 +1,23 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. +.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*.orig +*~ +# Various IDEs +.project +.idea/ +*.tmproj +.vscode/ diff --git a/container/helm/Chart.yaml b/container/helm/Chart.yaml new file mode 100644 index 00000000..7161f735 --- /dev/null +++ b/container/helm/Chart.yaml @@ -0,0 +1,24 @@ +apiVersion: v2 +name: didi-km +description: A Helm chart for Kubernetes + +# A chart can be either an 'application' or a 'library' chart. 
+# +# Application charts are a collection of templates that can be packaged into versioned archives +# to be deployed. +# +# Library charts provide useful utilities or functions for the chart developer. They're included as +# a dependency of application charts to inject those utilities and functions into the rendering +# pipeline. Library charts do not define any templates and therefore cannot be deployed. +type: application + +# This is the chart version. This version number should be incremented each time you make changes +# to the chart and its templates, including the app version. +# Versions are expected to follow Semantic Versioning (https://semver.org/) +version: 0.1.0 + +# This is the version number of the application being deployed. This version number should be +# incremented each time you make changes to the application. Versions are not expected to +# follow Semantic Versioning. They should reflect the version the application is using. +# It is recommended to use it with quotes. +appVersion: "1.16.0" diff --git a/container/helm/templates/NOTES.txt b/container/helm/templates/NOTES.txt new file mode 100644 index 00000000..e9c3e7e8 --- /dev/null +++ b/container/helm/templates/NOTES.txt @@ -0,0 +1,22 @@ +1. Get the application URL by running these commands: +{{- if .Values.ingress.enabled }} +{{- range $host := .Values.ingress.hosts }} + {{- range .paths }} + http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ .path }} + {{- end }} +{{- end }} +{{- else if contains "NodePort" .Values.service.type }} + export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "didi-km.fullname" . }}) + export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}") + echo http://$NODE_IP:$NODE_PORT +{{- else if contains "LoadBalancer" .Values.service.type }} + NOTE: It may take a few minutes for the LoadBalancer IP to be available. 
+ You can watch the status of by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "didi-km.fullname" . }}' + export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "didi-km.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}") + echo http://$SERVICE_IP:{{ .Values.service.port }} +{{- else if contains "ClusterIP" .Values.service.type }} + export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "didi-km.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}") + export CONTAINER_PORT=$(kubectl get pod --namespace {{ .Release.Namespace }} $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}") + echo "Visit http://127.0.0.1:8080 to use your application" + kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:$CONTAINER_PORT +{{- end }} diff --git a/container/helm/templates/_helpers.tpl b/container/helm/templates/_helpers.tpl new file mode 100644 index 00000000..23314fd4 --- /dev/null +++ b/container/helm/templates/_helpers.tpl @@ -0,0 +1,62 @@ +{{/* +Expand the name of the chart. +*/}} +{{- define "didi-km.name" -}} +{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }} +{{- end }} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +If release name contains chart name it will be used as a full name. 
+*/}} +{{- define "didi-km.fullname" -}} +{{- if .Values.fullnameOverride }} +{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }} +{{- else }} +{{- $name := default .Chart.Name .Values.nameOverride }} +{{- if contains $name .Release.Name }} +{{- .Release.Name | trunc 63 | trimSuffix "-" }} +{{- else }} +{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }} +{{- end }} +{{- end }} +{{- end }} + +{{/* +Create chart name and version as used by the chart label. +*/}} +{{- define "didi-km.chart" -}} +{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }} +{{- end }} + +{{/* +Common labels +*/}} +{{- define "didi-km.labels" -}} +helm.sh/chart: {{ include "didi-km.chart" . }} +{{ include "didi-km.selectorLabels" . }} +{{- if .Chart.AppVersion }} +app.kubernetes.io/version: {{ .Chart.AppVersion | quote }} +{{- end }} +app.kubernetes.io/managed-by: {{ .Release.Service }} +{{- end }} + +{{/* +Selector labels +*/}} +{{- define "didi-km.selectorLabels" -}} +app.kubernetes.io/name: {{ include "didi-km.name" . }} +app.kubernetes.io/instance: {{ .Release.Name }} +{{- end }} + +{{/* +Create the name of the service account to use +*/}} +{{- define "didi-km.serviceAccountName" -}} +{{- if .Values.serviceAccount.create }} +{{- default (include "didi-km.fullname" .) 
.Values.serviceAccount.name }} +{{- else }} +{{- default "default" .Values.serviceAccount.name }} +{{- end }} +{{- end }} diff --git a/container/helm/templates/configmap.yaml b/container/helm/templates/configmap.yaml new file mode 100644 index 00000000..ffc75ec5 --- /dev/null +++ b/container/helm/templates/configmap.yaml @@ -0,0 +1,88 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + name: km-cm +data: + application.yml: | + server: + port: 8080 + tomcat: + accept-count: 1000 + max-connections: 10000 + max-threads: 800 + min-spare-threads: 100 + + spring: + application: + name: kafkamanager + datasource: + kafka-manager: + jdbc-url: jdbc:mysql://xxxxx:3306/kafka-manager?characterEncoding=UTF-8&serverTimezone=GMT%2B8&useSSL=false + username: admin + password: admin + driver-class-name: com.mysql.jdbc.Driver + main: + allow-bean-definition-overriding: true + + profiles: + active: dev + servlet: + multipart: + max-file-size: 100MB + max-request-size: 100MB + + logging: + config: classpath:logback-spring.xml + + custom: + idc: cn + jmx: + max-conn: 20 + store-metrics-task: + community: + broker-metrics-enabled: true + topic-metrics-enabled: true + didi: + app-topic-metrics-enabled: false + topic-request-time-metrics-enabled: false + topic-throttled-metrics: false + save-days: 7 + + # 任务相关的开关 + task: + op: + sync-topic-enabled: false # 未落盘的Topic定期同步到DB中 + + account: + ldap: + + kcm: + enabled: false + storage: + base-url: http://127.0.0.1 + n9e: + base-url: http://127.0.0.1:8004 + user-token: 12345678 + timeout: 300 + account: root + script-file: kcm_script.sh + + monitor: + enabled: false + n9e: + nid: 2 + user-token: 1234567890 + mon: + base-url: http://127.0.0.1:8032 + sink: + base-url: http://127.0.0.1:8006 + rdb: + base-url: http://127.0.0.1:80 + + notify: + kafka: + cluster-id: 95 + topic-name: didi-kafka-notify + order: + detail-url: http://127.0.0.1 + diff --git a/container/helm/templates/deployment.yaml b/container/helm/templates/deployment.yaml new file mode 
100644 index 00000000..4754b53e --- /dev/null +++ b/container/helm/templates/deployment.yaml @@ -0,0 +1,56 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: {{ include "didi-km.fullname" . }} + labels: + {{- include "didi-km.labels" . | nindent 4 }} +spec: + {{- if not .Values.autoscaling.enabled }} + replicas: {{ .Values.replicaCount }} + {{- end }} + selector: + matchLabels: + {{- include "didi-km.selectorLabels" . | nindent 6 }} + template: + metadata: + {{- with .Values.podAnnotations }} + annotations: + {{- toYaml . | nindent 8 }} + {{- end }} + labels: + {{- include "didi-km.selectorLabels" . | nindent 8 }} + spec: + {{- with .Values.imagePullSecrets }} + imagePullSecrets: + {{- toYaml . | nindent 8 }} + {{- end }} + serviceAccountName: {{ include "didi-km.serviceAccountName" . }} + securityContext: + {{- toYaml .Values.podSecurityContext | nindent 8 }} + containers: + - name: {{ .Chart.Name }} + securityContext: + {{- toYaml .Values.securityContext | nindent 12 }} + image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}" + imagePullPolicy: {{ .Values.image.pullPolicy }} + ports: + - name: http + containerPort: 8080 + protocol: TCP + - name: jmx-metrics + containerPort: 9999 + protocol: TCP + resources: + {{- toYaml .Values.resources | nindent 12 }} + {{- with .Values.nodeSelector }} + nodeSelector: + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.affinity }} + affinity: + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.tolerations }} + tolerations: + {{- toYaml . | nindent 8 }} + {{- end }} diff --git a/container/helm/templates/hpa.yaml b/container/helm/templates/hpa.yaml new file mode 100644 index 00000000..209d7ae4 --- /dev/null +++ b/container/helm/templates/hpa.yaml @@ -0,0 +1,28 @@ +{{- if .Values.autoscaling.enabled }} +apiVersion: autoscaling/v2beta1 +kind: HorizontalPodAutoscaler +metadata: + name: {{ include "didi-km.fullname" . }} + labels: + {{- include "didi-km.labels" . 
| nindent 4 }} +spec: + scaleTargetRef: + apiVersion: apps/v1 + kind: Deployment + name: {{ include "didi-km.fullname" . }} + minReplicas: {{ .Values.autoscaling.minReplicas }} + maxReplicas: {{ .Values.autoscaling.maxReplicas }} + metrics: + {{- if .Values.autoscaling.targetCPUUtilizationPercentage }} + - type: Resource + resource: + name: cpu + targetAverageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }} + {{- end }} + {{- if .Values.autoscaling.targetMemoryUtilizationPercentage }} + - type: Resource + resource: + name: memory + targetAverageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }} + {{- end }} +{{- end }} diff --git a/container/helm/templates/ingress.yaml b/container/helm/templates/ingress.yaml new file mode 100644 index 00000000..47aec7f2 --- /dev/null +++ b/container/helm/templates/ingress.yaml @@ -0,0 +1,41 @@ +{{- if .Values.ingress.enabled -}} +{{- $fullName := include "didi-km.fullname" . -}} +{{- $svcPort := .Values.service.port -}} +{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}} +apiVersion: networking.k8s.io/v1beta1 +{{- else -}} +apiVersion: extensions/v1beta1 +{{- end }} +kind: Ingress +metadata: + name: {{ $fullName }} + labels: + {{- include "didi-km.labels" . | nindent 4 }} + {{- with .Values.ingress.annotations }} + annotations: + {{- toYaml . | nindent 4 }} + {{- end }} +spec: + {{- if .Values.ingress.tls }} + tls: + {{- range .Values.ingress.tls }} + - hosts: + {{- range .hosts }} + - {{ . 
| quote }} + {{- end }} + secretName: {{ .secretName }} + {{- end }} + {{- end }} + rules: + {{- range .Values.ingress.hosts }} + - host: {{ .host | quote }} + http: + paths: + {{- range .paths }} + - path: {{ .path }} + backend: + serviceName: {{ $fullName }} + servicePort: {{ $svcPort }} + {{- end }} + {{- end }} + {{- end }} diff --git a/container/helm/templates/service.yaml b/container/helm/templates/service.yaml new file mode 100644 index 00000000..7fcbc5ba --- /dev/null +++ b/container/helm/templates/service.yaml @@ -0,0 +1,15 @@ +apiVersion: v1 +kind: Service +metadata: + name: {{ include "didi-km.fullname" . }} + labels: + {{- include "didi-km.labels" . | nindent 4 }} +spec: + type: {{ .Values.service.type }} + ports: + - port: {{ .Values.service.port }} + targetPort: http + protocol: TCP + name: http + selector: + {{- include "didi-km.selectorLabels" . | nindent 4 }} diff --git a/container/helm/templates/serviceaccount.yaml b/container/helm/templates/serviceaccount.yaml new file mode 100644 index 00000000..4f2676ee --- /dev/null +++ b/container/helm/templates/serviceaccount.yaml @@ -0,0 +1,12 @@ +{{- if .Values.serviceAccount.create -}} +apiVersion: v1 +kind: ServiceAccount +metadata: + name: {{ include "didi-km.serviceAccountName" . }} + labels: + {{- include "didi-km.labels" . | nindent 4 }} + {{- with .Values.serviceAccount.annotations }} + annotations: + {{- toYaml . | nindent 4 }} + {{- end }} +{{- end }} diff --git a/container/helm/templates/tests/test-connection.yaml b/container/helm/templates/tests/test-connection.yaml new file mode 100644 index 00000000..b5b41d4f --- /dev/null +++ b/container/helm/templates/tests/test-connection.yaml @@ -0,0 +1,15 @@ +apiVersion: v1 +kind: Pod +metadata: + name: "{{ include "didi-km.fullname" . }}-test-connection" + labels: + {{- include "didi-km.labels" . 
| nindent 4 }} + annotations: + "helm.sh/hook": test +spec: + containers: + - name: wget + image: busybox + command: ['wget'] + args: ['{{ include "didi-km.fullname" . }}:{{ .Values.service.port }}'] + restartPolicy: Never diff --git a/container/helm/values.yaml b/container/helm/values.yaml new file mode 100644 index 00000000..a5f49e40 --- /dev/null +++ b/container/helm/values.yaml @@ -0,0 +1,79 @@ +# Default values for didi-km. +# This is a YAML-formatted file. +# Declare variables to be passed into your templates. + +replicaCount: 1 + +image: + repository: docker.io/yangvipguang/km + pullPolicy: IfNotPresent + # Overrides the image tag whose default is the chart appVersion. + tag: "v18" + +imagePullSecrets: [] +nameOverride: "" +fullnameOverride: "km" + +serviceAccount: + # Specifies whether a service account should be created + create: true + # Annotations to add to the service account + annotations: {} + # The name of the service account to use. + # If not set and create is true, a name is generated using the fullname template + name: "" + +podAnnotations: {} + +podSecurityContext: {} + # fsGroup: 2000 + +securityContext: {} + # capabilities: + # drop: + # - ALL + # readOnlyRootFilesystem: true + # runAsNonRoot: true + # runAsUser: 1000 + +service: + type: ClusterIP + port: 8080 + +ingress: + enabled: false + annotations: {} + # kubernetes.io/ingress.class: nginx + # kubernetes.io/tls-acme: "true" + hosts: + - host: chart-example.local + paths: [] + tls: [] + # - secretName: chart-example-tls + # hosts: + # - chart-example.local + +resources: + # We usually recommend not to specify default resources and to leave this as a conscious + # choice for the user. This also increases chances charts run on environments with little + # resources, such as Minikube. If you do want to specify resources, uncomment the following + # lines, adjust them as necessary, and remove the curly braces after 'resources:'. 
+ limits: + cpu: 50m + memory: 2048Mi + requests: + cpu: 10m + memory: 200Mi + +autoscaling: + enabled: false + minReplicas: 1 + maxReplicas: 100 + targetCPUUtilizationPercentage: 80 + # targetMemoryUtilizationPercentage: 80 + +nodeSelector: {} + +tolerations: [] + +affinity: {} diff --git a/docs/assets/images/common/Logi Kafka Manager架构图.png b/docs/assets/images/common/Logi Kafka Manager架构图.png deleted file mode 100644 index 15932909..00000000 Binary files a/docs/assets/images/common/Logi Kafka Manager架构图.png and /dev/null differ diff --git a/docs/dev_guide/assets/connect_jmx_failed/check_jmx_opened.jpg b/docs/dev_guide/assets/connect_jmx_failed/check_jmx_opened.jpg new file mode 100644 index 00000000..1890983c Binary files /dev/null and b/docs/dev_guide/assets/connect_jmx_failed/check_jmx_opened.jpg differ diff --git a/docs/dev_guide/connect_jmx_failed.md b/docs/dev_guide/connect_jmx_failed.md new file mode 100644 index 00000000..92f0a37e --- /dev/null +++ b/docs/dev_guide/connect_jmx_failed.md @@ -0,0 +1,101 @@ + +--- + +![kafka-manager-logo](../assets/images/common/logo_name.png) + +**一站式`Apache Kafka`集群指标监控与运维管控平台** + +--- + +## JMX-连接失败问题解决 + +集群正常接入Logi-KafkaManager之后,即可以看到集群的Broker列表,此时如果查看不了Topic的实时流量,或者是Broker的实时流量信息时,那么大概率就是JMX连接的问题了。 + +下面我们按照步骤来一步一步的检查。 + +### 1、问题&说明 + +**类型一:JMX配置未开启** + +未开启时,直接到`2、解决方法`查看如何开启即可。 + +![check_jmx_opened](./assets/connect_jmx_failed/check_jmx_opened.jpg) + + +**类型二:配置错误** + +`JMX`端口已经开启的情况下,有的时候开启的配置不正确,此时也会导致出现连接失败的问题。这里大概列举几种原因: + +- `JMX`配置错误:见`2、解决方法`。 +- 存在防火墙或者网络限制:网络通的另外一台机器`telnet`试一下看是否可以连接上。 +- 需要进行用户名及密码的认证:见`3、解决方法 —— 认证的JMX`。 + + +错误日志例子: +``` +# 错误一: 错误提示的是真实的IP,这样的话基本就是JMX配置的有问题了。 +2021-01-27 10:06:20.730 ERROR 50901 --- [ics-Thread-1-62] c.x.k.m.c.utils.jmx.JmxConnectorWrap : JMX connect exception, host:192.168.0.1 port:9999. 
+java.rmi.ConnectException: Connection refused to host: 192.168.0.1; nested exception is: + + +# 错误二:错误提示的是127.0.0.1这个IP,这个是机器的hostname配置的可能有问题。 +2021-01-27 10:06:20.730 ERROR 50901 --- [ics-Thread-1-62] c.x.k.m.c.utils.jmx.JmxConnectorWrap : JMX connect exception, host:127.0.0.1 port:9999. +java.rmi.ConnectException: Connection refused to host: 127.0.0.1;; nested exception is: +``` + +### 2、解决方法 + +这里仅介绍一下比较通用的解决方式,如若有更好的方式,欢迎大家指导告知一下。 + +修改`kafka-server-start.sh`文件: +``` +# 在这个下面增加JMX端口的配置 +if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then + export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G" + export JMX_PORT=9999 # 增加这个配置, 这里的数值并不一定是要9999 +fi +``` + +  + +修改`kafka-run-class.sh`文件 +``` +# JMX settings +if [ -z "$KAFKA_JMX_OPTS" ]; then + KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=${当前机器的IP}" +fi + +# JMX port to use +if [ $JMX_PORT ]; then + KAFKA_JMX_OPTS="$KAFKA_JMX_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT -Dcom.sun.management.jmxremote.rmi.port=$JMX_PORT" +fi +``` + + +### 3、解决方法 —— 认证的JMX + +如果您是直接看的这个部分,建议先看一下上一节:`2、解决方法`以确保`JMX`的配置没有问题了。 + +在JMX的配置等都没有问题的情况下,如果是因为认证的原因导致连接不了的,此时可以使用下面介绍的方法进行解决。 + +**当前这块后端刚刚开发完成,可能还不够完善,有问题随时沟通。** + +`Logi-KafkaManager 2.2.0+`之后的版本后端已经支持`JMX`认证方式的连接,但是还没有界面,此时我们可以往`cluster`表的`jmx_properties`字段写入`JMX`的认证信息。 + +这个数据是`json`格式的字符串,例子如下所示: + +```json +{ + "maxConn": 10, # KM对单台Broker的最大JMX连接数 + "username": "xxxxx", # 用户名 + "password": "xxxx", # 密码 + "openSSL": true, # 开启SSL, true表示开启ssl, false表示关闭 +} +``` + +  + +SQL的例子: +```sql +UPDATE cluster SET jmx_properties='{ "maxConn": 10, "username": "xxxxx", "password": "xxxx", "openSSL": false }' where id={xxx}; +``` \ No newline at end of file diff --git a/docs/dev_guide/gateway_config_manager.md b/docs/dev_guide/gateway_config_manager.md new file mode 100644 index 00000000..8c656531 --- /dev/null +++ b/docs/dev_guide/gateway_config_manager.md @@ -0,0 +1,10 @@ + +--- + 
+![kafka-manager-logo](../assets/images/common/logo_name.png) + +**一站式`Apache Kafka`集群指标监控与运维管控平台** + +--- + +# Kafka-Gateway 配置说明 \ No newline at end of file diff --git a/docs/dev_guide/upgrade_manual/logi-km-v2.2.0.md b/docs/dev_guide/upgrade_manual/logi-km-v2.2.0.md new file mode 100644 index 00000000..96622080 --- /dev/null +++ b/docs/dev_guide/upgrade_manual/logi-km-v2.2.0.md @@ -0,0 +1,27 @@ + +--- + +![kafka-manager-logo](../../assets/images/common/logo_name.png) + +**一站式`Apache Kafka`集群指标监控与运维管控平台** + +--- + +# 升级至`2.2.0`版本 + +`2.2.0`版本在`cluster`表及`logical_cluster`各增加了一个字段,因此需要执行下面的sql进行字段的增加。 + +```sql +# 往cluster表中增加jmx_properties字段, 这个字段会用于存储jmx相关的认证以及配置信息 +ALTER TABLE `cluster` ADD COLUMN `jmx_properties` TEXT NULL COMMENT 'JMX配置' AFTER `security_properties`; + +# 往logical_cluster中增加identification字段, 同时数据和原先name数据相同, 最后增加一个唯一键. +# 此后, name字段还是表示集群名称, 而identification字段表示的是集群标识, 只能是字母数字及下划线组成, +# 数据上报到监控系统时, 集群这个标识采用的字段就是identification字段, 之前使用的是name字段. +ALTER TABLE `logical_cluster` ADD COLUMN `identification` VARCHAR(192) NOT NULL DEFAULT '' COMMENT '逻辑集群标识' AFTER `name`; + +UPDATE `logical_cluster` SET `identification`=`name` WHERE id>=0; + +ALTER TABLE `logical_cluster` ADD INDEX `uniq_identification` (`identification` ASC); +``` + diff --git a/docs/dev_guide/upgrade_manual/logi-km-v2.3.0.md b/docs/dev_guide/upgrade_manual/logi-km-v2.3.0.md new file mode 100644 index 00000000..3a4196f8 --- /dev/null +++ b/docs/dev_guide/upgrade_manual/logi-km-v2.3.0.md @@ -0,0 +1,17 @@ + +--- + +![kafka-manager-logo](../../assets/images/common/logo_name.png) + +**一站式`Apache Kafka`集群指标监控与运维管控平台** + +--- + +# 升级至`2.3.0`版本 + +`2.3.0`版本在`gateway_config`表增加了一个描述说明的字段,因此需要执行下面的sql进行字段的增加。 + +```sql +ALTER TABLE `gateway_config` +ADD COLUMN `description` TEXT NULL COMMENT '描述信息' AFTER `version`; +``` diff --git a/docs/dev_guide/use_mysql_8.md b/docs/dev_guide/use_mysql_8.md index d4a33b2a..6c8f6b38 100644 --- a/docs/dev_guide/use_mysql_8.md +++ b/docs/dev_guide/use_mysql_8.md 
@@ -15,7 +15,7 @@ 当前因为无法同时兼容`MySQL 8`与`MySQL 5.7`,因此代码中默认的版本还是`MySQL 5.7`。 -当前如需使用`MySQL 8`,则续按照下述流程进行简单修改代码。 +当前如需使用`MySQL 8`,则需按照下述流程进行简单修改代码。 - Step1. 修改application.yml中的MySQL驱动类 diff --git a/docs/install_guide/create_mysql_table.sql b/docs/install_guide/create_mysql_table.sql index 528838ee..065532eb 100644 --- a/docs/install_guide/create_mysql_table.sql +++ b/docs/install_guide/create_mysql_table.sql @@ -1,3 +1,8 @@ +-- create database +CREATE DATABASE logi_kafka_manager; + +USE logi_kafka_manager; + -- -- Table structure for table `account` -- @@ -104,7 +109,8 @@ CREATE TABLE `cluster` ( `zookeeper` varchar(512) NOT NULL DEFAULT '' COMMENT 'zk地址', `bootstrap_servers` varchar(512) NOT NULL DEFAULT '' COMMENT 'server地址', `kafka_version` varchar(32) NOT NULL DEFAULT '' COMMENT 'kafka版本', - `security_properties` text COMMENT '安全认证参数', + `security_properties` text COMMENT 'Kafka安全认证参数', + `jmx_properties` text COMMENT 'JMX配置', `status` tinyint(4) NOT NULL DEFAULT '1' COMMENT ' 监控标记, 0表示未监控, 1表示监控中', `gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间', `gmt_modify` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间', @@ -197,7 +203,8 @@ CREATE TABLE `gateway_config` ( `type` varchar(128) NOT NULL DEFAULT '' COMMENT '配置类型', `name` varchar(128) NOT NULL DEFAULT '' COMMENT '配置名称', `value` text COMMENT '配置值', - `version` bigint(20) unsigned NOT NULL DEFAULT '0' COMMENT '版本信息', + `version` bigint(20) unsigned NOT NULL DEFAULT '1' COMMENT '版本信息', + `description` text COMMENT '描述信息', `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间', `modify_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间', PRIMARY KEY (`id`), @@ -302,20 +309,22 @@ INSERT INTO kafka_user(app_id, password, user_type, operation) VALUES ('dkm_admi -- Table structure for table `logical_cluster` -- --- DROP TABLE IF EXISTS `logical_cluster`; CREATE TABLE `logical_cluster` ( - `id` 
bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id', - `name` varchar(192) NOT NULL DEFAULT '' COMMENT '逻辑集群名称', - `mode` int(16) NOT NULL DEFAULT '0' COMMENT '逻辑集群类型, 0:共享集群, 1:独享集群, 2:独立集群', - `app_id` varchar(64) NOT NULL DEFAULT '' COMMENT '所属应用', - `cluster_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT '集群id', - `region_list` varchar(256) NOT NULL DEFAULT '' COMMENT 'regionid列表', - `description` text COMMENT '备注说明', - `gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间', - `gmt_modify` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间', - PRIMARY KEY (`id`), - UNIQUE KEY `uniq_name` (`name`) -) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='逻辑集群信息表'; + `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id', + `name` varchar(192) NOT NULL DEFAULT '' COMMENT '逻辑集群名称', + `identification` varchar(192) NOT NULL DEFAULT '' COMMENT '逻辑集群标识', + `mode` int(16) NOT NULL DEFAULT '0' COMMENT '逻辑集群类型, 0:共享集群, 1:独享集群, 2:独立集群', + `app_id` varchar(64) NOT NULL DEFAULT '' COMMENT '所属应用', + `cluster_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT '集群id', + `region_list` varchar(256) NOT NULL DEFAULT '' COMMENT 'regionid列表', + `description` text COMMENT '备注说明', + `gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间', + `gmt_modify` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间', + PRIMARY KEY (`id`), + UNIQUE KEY `uniq_name` (`name`), + UNIQUE KEY `uniq_identification` (`identification`) +) ENGINE=InnoDB AUTO_INCREMENT=7 DEFAULT CHARSET=utf8 COMMENT='逻辑集群信息表'; + -- -- Table structure for table `monitor_rule` diff --git a/docs/install_guide/install_guide_cn.md b/docs/install_guide/install_guide_cn.md index 9c700587..9a4a415b 100644 --- a/docs/install_guide/install_guide_cn.md +++ b/docs/install_guide/install_guide_cn.md @@ -9,19 +9,39 @@ # 安装手册 +## 1、环境依赖 -## 环境依赖 +如果是以Release包进行安装的,则仅安装`Java`及`MySQL`即可。如果要先对源码进行打包,然后再使用,则需要安装`Maven`及`Node`环境。 -- `Maven 
3.5+`(后端打包依赖) -- `node v12+`(前端打包依赖) - `Java 8+`(运行环境需要) - `MySQL 5.7`(数据存储) +- `Maven 3.5+`(后端打包依赖) +- `Node 10+`(前端打包依赖) --- -## 环境初始化 +## 2、获取安装包 -执行[create_mysql_table.sql](create_mysql_table.sql)中的SQL命令,从而创建所需的MySQL库及表,默认创建的库名是`kafka_manager`。 +**1、Release直接下载** + +如果觉得打包麻烦,也不想进行二次开发,则可以直接下载Release包,下载地址:[Github Release包下载地址](https://github.com/didi/Logi-KafkaManager/releases) + +如果觉得Github的下载地址太慢了,也可以进入`Logi-KafkaManager`的用户群获取,群地址在README中。 + + +**2、源代码打包** + +下载好代码之后,进入`Logi-KafkaManager`的主目录,执行`sh build.sh`命令即可,执行完成之后会在`output/kafka-manager-xxx`目录下面生成一个jar包。 + +对于`windows`环境的用户,可能无法执行`sh build.sh`命令,因此可以直接执行`mvn install`,然后在`kafka-manager-web/target`目录下生成一个kafka-manager-web-xxx.jar的包。 + +获取到jar包之后,我们继续下面的步骤。 + +--- + +## 3、MySQL-DB初始化 + +执行[create_mysql_table.sql](create_mysql_table.sql)中的SQL命令,从而创建所需的MySQL库及表,默认创建的库名是`logi_kafka_manager`。 ``` # 示例: @@ -30,29 +50,15 @@ mysql -uXXXX -pXXX -h XXX.XXX.XXX.XXX -PXXXX < ./create_mysql_table.sql --- -## 打包 - -```bash - -# 一次性打包 -cd .. -mvn install +## 4、启动 ``` + +# application.yml 是配置文件,最简单的是仅修改MySQL相关的配置即可启动 --- - -## 启动 - -``` -# application.yml 是配置文件 - -cp kafka-manager-web/src/main/resources/application.yml kafka-manager-web/target/ -cd kafka-manager-web/target/ -nohup java -jar kafka-manager-web-2.1.0-SNAPSHOT.jar --spring.config.location=./application.yml > /dev/null 2>&1 & +nohup java -jar kafka-manager.jar --spring.config.location=./application.yml > /dev/null 2>&1 & ``` -## 使用 +## 5、使用 本地启动的话,访问`http://localhost:8080`,输入帐号及密码(默认`admin/admin`)进行登录。更多参考:[kafka-manager 用户使用手册](../user_guide/user_guide_cn.md) diff --git a/docs/user_guide/add_cluster/add_cluster.md b/docs/user_guide/add_cluster/add_cluster.md index 14c1d907..1774a9be 100644 --- a/docs/user_guide/add_cluster/add_cluster.md +++ b/docs/user_guide/add_cluster/add_cluster.md @@ -5,16 +5,26 @@ **一站式`Apache Kafka`集群指标监控与运维管控平台** + --- # 集群接入 -集群的接入总共需要三个步骤,分别是: -1. 接入物理集群 -2. 创建Region -3. 
创建逻辑集群 +## 主要概念讲解 +面对大规模集群、业务场景复杂的情况,引入Region、逻辑集群的概念。 +- Region:划分部分Broker作为一个 Region,用Region定义资源划分的单位,提高扩展性和隔离性。如果部分Topic异常,也不会影响大面积的Broker +- 逻辑集群:逻辑集群由部分Region组成,便于对大规模集群按照业务划分、保障能力进行管理 +![op_cluster_arch](assets/op_cluster_arch.png) -备注:接入集群需要2、3两步是因为普通用户的视角下,看到的都是逻辑集群,如果没有2、3两步,那么普通用户看不到任何信息。 +集群的接入总共需要三个步骤,分别是: +1. 接入物理集群:填写机器地址、安全协议等配置信息,接入真实的物理集群 +2. 创建Region:将部分Broker划分为一个Region +3. 创建逻辑集群:逻辑集群由部分Region组成,可根据业务划分、保障等级来创建相应的逻辑集群 + +![op_cluster_flow](assets/op_cluster_flow.png) + + +**备注:接入集群需要2、3两步是因为普通用户的视角下,看到的都是逻辑集群,如果没有2、3两步,那么普通用户看不到任何信息。** ## 1、接入物理集群 @@ -36,4 +46,4 @@ ![op_add_logical_cluster](assets/op_add_logical_cluster.jpg) -如上图所示,填写逻辑集群信息,然后点击确定,即可完成逻辑集群的创建。 \ No newline at end of file +如上图所示,填写逻辑集群信息,然后点击确定,即可完成逻辑集群的创建。 diff --git a/docs/user_guide/add_cluster/assets/op_cluster_arch.png b/docs/user_guide/add_cluster/assets/op_cluster_arch.png new file mode 100644 index 00000000..aa972d9e Binary files /dev/null and b/docs/user_guide/add_cluster/assets/op_cluster_arch.png differ diff --git a/docs/user_guide/add_cluster/assets/op_cluster_flow.png b/docs/user_guide/add_cluster/assets/op_cluster_flow.png new file mode 100644 index 00000000..283f2676 Binary files /dev/null and b/docs/user_guide/add_cluster/assets/op_cluster_flow.png differ diff --git a/docs/user_guide/alarm_rules.md b/docs/user_guide/alarm_rules.md new file mode 100644 index 00000000..57cba628 --- /dev/null +++ b/docs/user_guide/alarm_rules.md @@ -0,0 +1,25 @@ +![kafka-manager-logo](../assets/images/common/logo_name.png) + +**一站式`Apache Kafka`集群指标监控与运维管控平台** + +--- + + +## 报警策略-报警函数介绍 + + + +| 类别 | 函数 | 含义 | 函数文案 | 备注 | | --- | --- | --- | --- | --- | | 发生次数 | all, n | 最近$n个周期内,全发生 | 连续发生(all) | | | 发生次数 | happen, n, m | 最近$n个周期内,发生m次 | 出现(happen) | null点也计算在n内 | | 数学统计 | sum, n | 最近$n个周期取值 的 和 | 求和(sum) | sum_over_time | | 数学统计 | avg, n | 最近$n个周期取值 的 平均值 | 平均值(avg) | avg_over_time | | 数学统计 | min, n | 最近$n个周期取值 的 最小值 | 最小值(min) | min_over_time | | 数学统计 | max, n | 最近$n个周期取值 的 最大值 | 
最大值(max) | max_over_time | | 变化率 | pdiff, n | 最近$n个点的变化率, 有一个满足 则触发 | 突增突降率(pdiff) | 假设, 最近3个周期的值分别为 v, v2, v3(v为最新值)那么计算公式为 any( (v-v2)/v2, (v-v3)/v3 )**区分正负** | | 变化量 | diff, n | 最近$n个点的变化量, 有一个满足 则触发 | 突增突降值(diff) | 假设, 最近3个周期的值分别为 v, v2, v3(v为最新值)那么计算公式为 any( (v-v2), (v-v3) )**区分正负** | | 变化量 | ndiff | 最近n个周期,发生m次 v(t) - v(t-1) $OP threshold其中 v(t) 为最新值 | 连续变化(区分正负) - ndiff | | | 数据中断 | nodata, t | 最近 $t 秒内 无数据上报 | 数据上报中断(nodata) | | | 同环比 | c_avg_rate_abs, n | 最近$n个周期的取值,相比 1天或7天前取值 的变化率 的绝对值 | 同比变化率(c_avg_rate_abs) | 假设最近的n个值为 v1, v2, v3历史取到的对应n'个值为 v1', v2'那么计算公式为abs((avg(v1,v2,v3) / avg(v1',v2') -1)* 100%) | | 同环比 | c_avg_rate, n | 最近$n个周期的取值,相比 1天或7天前取值 的变化率(**区分正负**) | 同比变化率(c_avg_rate) | 假设最近的n个值为 v1, v2, v3历史取到的对应n'个值为 v1', v2'那么计算公式为(avg(v1,v2,v3) / avg(v1',v2') -1)* 100% | diff --git a/docs/user_guide/assets/resource_apply/production_consumption_flow.png b/docs/user_guide/assets/resource_apply/production_consumption_flow.png new file mode 100644 index 00000000..36187c83 Binary files /dev/null and b/docs/user_guide/assets/resource_apply/production_consumption_flow.png differ diff --git a/docs/user_guide/faq.md b/docs/user_guide/faq.md index ba46eb66..f62ba59f 100644 --- a/docs/user_guide/faq.md +++ b/docs/user_guide/faq.md @@ -9,18 +9,42 @@ # FAQ -- 1、Topic申请时没有可选择的集群? +- 0、Github图裂问题解决 +- 1、Topic申请、新建监控告警等操作时没有可选择的集群? - 2、逻辑集群 & Region的用途? - 3、登录失败? - 4、页面流量信息等无数据? - 5、如何对接夜莺的监控告警功能? - 6、如何使用`MySQL 8`? +- 7、`Jmx`连接失败如何解决? +- 8、`topic biz data not exist`错误及处理方式 +- 9、进程启动后,如何查看API文档 --- -### 1、Topic申请时没有可选择的集群? +### 0、Github图裂问题解决 -- 参看 [kafka-manager 接入集群](docs/user_guide/add_cluster/add_cluster.md) 手册,这里的Region和逻辑集群都必须添加。 +可以在本地机器上`ping github.com`,获取`github.com`对应的IP地址。 + +然后将IP绑定到`/etc/hosts`文件中。 + +例如: + +```shell +# 在 /etc/hosts文件中增加如下信息 + +140.82.113.3 github.com +``` + +--- + +### 1、Topic申请、新建监控告警等操作时没有可选择的集群? 
+ + 这是缺少逻辑集群导致的。在Topic管理、监控告警、集群管理这三个Tab下面都是普通用户视角,普通用户看到的集群都是逻辑集群,因此在这三个Tab下进行操作时,都需要有逻辑集群。 + +逻辑集群的创建参看: + +- [kafka-manager 接入集群](docs/user_guide/add_cluster/add_cluster.md) 手册,这里的Region和逻辑集群都必须添加。 --- @@ -43,7 +67,7 @@ - 1、检查`Broker JMX`是否正确开启。 -如若还未开启,具体可百度一下看如何开启。 +如若还未开启,具体可百度一下看如何开启,或者参看:[Jmx连接配置&问题解决说明文档](../dev_guide/connect_jmx_failed.md) ![helpcenter](./assets/faq/jmx_check.jpg) @@ -53,7 +77,7 @@ - 3、数据库时区问题。 -检查MySQL的topic_metrics、broker_metrics表,查看是否有数据,如果有数据,那么再检查设置的时区是否正确。 +检查MySQL的topic_metrics表,查看是否有数据,如果有数据,那么再检查设置的时区是否正确。 --- @@ -66,3 +90,27 @@ ### 6、如何使用`MySQL 8`? - 参看 [kafka-manager 使用`MySQL 8`](../dev_guide/use_mysql_8.md) 说明。 + +--- + +### 7、`Jmx`连接失败如何解决? + +- 参看 [Jmx连接配置&问题解决](../dev_guide/connect_jmx_failed.md) 说明。 + +--- + +### 8、`topic biz data not exist`错误及处理方式 + +**错误原因** + +在进行权限审批的时候,可能会出现这个错误。出现这个错误的原因是Topic相关的业务信息没有在DB中存储,更具体地说,就是该Topic不属于任何应用,只需要将这些无主的Topic挂在某个应用下面即可。 + +**解决方式** + +可以在`运维管控->集群列表->Topic信息`下面,编辑申请权限的Topic,为Topic选择一个应用即可。 + +以上仅针对单个Topic的场景,如果有非常多的Topic需要进行初始化,那么可以在配置管理中增加一个配置,来定时地对无主的Topic进行同步,具体见:[动态配置管理 - 1、Topic定时同步任务](../dev_guide/dynamic_config_manager.md) + +### 9、进程启动后,如何查看API文档 + +- 滴滴Logi-KafkaManager采用Swagger-API工具记录API文档。Swagger-API地址: [http://IP:PORT/swagger-ui.html#/](http://IP:PORT/swagger-ui.html#/) diff --git a/docs/user_guide/resource_apply.md b/docs/user_guide/resource_apply.md new file mode 100644 index 00000000..87537f95 --- /dev/null +++ b/docs/user_guide/resource_apply.md @@ -0,0 +1,32 @@ + +--- + +![kafka-manager-logo](../assets/images/common/logo_name.png) + +**一站式`Apache Kafka`集群指标监控与运维管控平台** + +--- + + +# 资源申请文档 + +## 主要名词解释 + +- 应用(App):作为Kafka中的账户,使用AppID+password作为身份标识 +- 集群:可使用平台提供的共享集群,也可为某一应用申请单独的集群 +- Topic:可申请创建Topic或申请其他Topic的生产/消费权限。进行生产/消费时通过Topic+AppID进行身份鉴权 +![production_consumption_flow](assets/resource_apply/production_consumption_flow.png) + +## 应用申请 +应用(App)作为Kafka中的账户,使用AppID+password作为身份标识。对Topic进行生产/消费时通过Topic+AppID进行身份鉴权。 + +用户申请应用,经由运维人员审批,审批通过后获得AppID和密钥。
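以客户端侧为例,下面给出一个假设性的配置示意,说明"AppID+password作为身份标识"的用法(此处假设集群以 SASL/PLAIN 方式接入;接入地址、AppID、password 均为占位符,实际的安全协议及参数以所申请集群的配置为准):

```properties
# 假设性示例: 客户端使用申请到的AppID+password作为身份标识
# 实际的安全协议与接入地址以所申请集群的配置为准, 占位符需替换为真实取值
bootstrap.servers=<所申请集群的接入地址>
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<AppID>" password="<password>";
```

生产端与消费端可使用同一份身份配置;Topic级别的读写权限仍由平台侧根据Topic+AppID进行鉴权。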
+ +## 集群申请 +可使用平台提供的共享集群,若对隔离性、稳定性、生产消费速率有更高的需求,可对某一应用申请单独的集群 + +## Topic申请 +- 用户可根据已申请的应用创建Topic。创建后,应用负责人默认拥有该Topic的生产/消费权限和管理权限 +- 也可申请其他Topic的生产、消费权限。经由Topic所属应用的负责人审批后,即可拥有相应权限。 + + diff --git a/kafka-manager-common/pom.xml b/kafka-manager-common/pom.xml index f310a81a..6a8ff0cb 100644 --- a/kafka-manager-common/pom.xml +++ b/kafka-manager-common/pom.xml @@ -5,13 +5,13 @@ 4.0.0 com.xiaojukeji.kafka kafka-manager-common - 2.1.0-SNAPSHOT + ${kafka-manager.revision} jar kafka-manager com.xiaojukeji.kafka - 2.1.0-SNAPSHOT + ${kafka-manager.revision} @@ -104,5 +104,10 @@ javax.servlet javax.servlet-api + + + junit + junit + \ No newline at end of file diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/bizenum/AccountRoleEnum.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/bizenum/AccountRoleEnum.java index 9c3cc06c..55412490 100644 --- a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/bizenum/AccountRoleEnum.java +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/bizenum/AccountRoleEnum.java @@ -47,4 +47,13 @@ public enum AccountRoleEnum { } return AccountRoleEnum.UNKNOWN; } + + public static AccountRoleEnum getUserRoleEnum(String roleName) { + for (AccountRoleEnum elem: AccountRoleEnum.values()) { + if (elem.message.equalsIgnoreCase(roleName)) { + return elem; + } + } + return AccountRoleEnum.UNKNOWN; + } } diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/bizenum/SinkMonitorSystemEnum.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/bizenum/SinkMonitorSystemEnum.java deleted file mode 100644 index b843a90c..00000000 --- a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/bizenum/SinkMonitorSystemEnum.java +++ /dev/null @@ -1,45 +0,0 @@ -package com.xiaojukeji.kafka.manager.common.bizenum; - -/** - * 是否上报监控系统 - * @author zengqiao - * @date 20/9/25 - */ -public enum 
SinkMonitorSystemEnum { - SINK_MONITOR_SYSTEM(0, "上报监控系统"), - NOT_SINK_MONITOR_SYSTEM(1, "不上报监控系统"), - ; - - private Integer code; - - private String message; - - SinkMonitorSystemEnum(Integer code, String message) { - this.code = code; - this.message = message; - } - - public Integer getCode() { - return code; - } - - public void setCode(Integer code) { - this.code = code; - } - - public String getMessage() { - return message; - } - - public void setMessage(String message) { - this.message = message; - } - - @Override - public String toString() { - return "SinkMonitorSystemEnum{" + - "code=" + code + - ", message='" + message + '\'' + - '}'; - } -} diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/bizenum/TopicExpiredStatusEnum.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/bizenum/TopicExpiredStatusEnum.java new file mode 100644 index 00000000..bac44235 --- /dev/null +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/bizenum/TopicExpiredStatusEnum.java @@ -0,0 +1,32 @@ +package com.xiaojukeji.kafka.manager.common.bizenum; + +/** + * 过期Topic状态 + * @author zengqiao + * @date 21/01/25 + */ +public enum TopicExpiredStatusEnum { + ALREADY_NOTIFIED_AND_DELETED(-2, "已通知, 已下线"), + ALREADY_NOTIFIED_AND_CAN_DELETE(-1, "已通知, 可下线"), + ALREADY_EXPIRED_AND_WAIT_NOTIFY(0, "已过期, 待通知"), + ALREADY_NOTIFIED_AND_WAIT_RESPONSE(1, "已通知, 待反馈"), + + ; + + private int status; + + private String message; + + TopicExpiredStatusEnum(int status, String message) { + this.status = status; + this.message = message; + } + + public int getStatus() { + return status; + } + + public String getMessage() { + return message; + } +} diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/Result.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/Result.java index 0fb38302..323e9ec9 100644 --- 
a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/Result.java +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/Result.java @@ -97,7 +97,7 @@ public class Result implements Serializable { return result; } - public static Result buildFailure(String message) { + public static Result buildGatewayFailure(String message) { Result result = new Result(); result.setCode(ResultStatus.GATEWAY_INVALID_REQUEST.getCode()); result.setMessage(message); @@ -105,6 +105,14 @@ public class Result implements Serializable { return result; } + public static Result buildFailure(String message) { + Result result = new Result(); + result.setCode(ResultStatus.FAIL.getCode()); + result.setMessage(message); + result.setData(null); + return result; + } + public static Result buildFrom(ResultStatus resultStatus) { Result result = new Result(); result.setCode(resultStatus.getCode()); diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ResultStatus.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ResultStatus.java index d59ade76..8f0f229b 100644 --- a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ResultStatus.java +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ResultStatus.java @@ -12,125 +12,102 @@ public enum ResultStatus { SUCCESS(Constant.SUCCESS, "success"), - LOGIN_FAILED(1, "login failed, please check username and password"), - + FAIL(1, "操作失败"), /** - * 内部依赖错误, [1000, 1200) + * 操作错误[1000, 2000) * ------------------------------------------------------------------------------------------ */ - MYSQL_ERROR(1000, "operate database failed"), - - CONNECT_ZOOKEEPER_FAILED(1000, "connect zookeeper failed"), - READ_ZOOKEEPER_FAILED(1000, "read zookeeper failed"), - READ_JMX_FAILED(1000, "read jmx failed"), - - - // 内部依赖错误 —— Kafka特定错误, [1000, 1100) - BROKER_NUM_NOT_ENOUGH(1000, "broker not 
enough"), - CONTROLLER_NOT_ALIVE(1000, "controller not alive"), - CLUSTER_METADATA_ERROR(1000, "cluster metadata error"), - TOPIC_CONFIG_ERROR(1000, "topic config error"), - - - /** - * 外部依赖错误, [1200, 1400) - * ------------------------------------------------------------------------------------------ - */ - CALL_CLUSTER_TASK_AGENT_FAILED(1000, " call cluster task agent failed"), - CALL_MONITOR_SYSTEM_ERROR(1000, " call monitor-system failed"), - - - - /** - * 外部用户操作错误, [1400, 1600) - * ------------------------------------------------------------------------------------------ - */ - PARAM_ILLEGAL(1400, "param illegal"), OPERATION_FAILED(1401, "operation failed"), OPERATION_FORBIDDEN(1402, "operation forbidden"), API_CALL_EXCEED_LIMIT(1403, "api call exceed limit"), + USER_WITHOUT_AUTHORITY(1404, "user without authority"), + CHANGE_ZOOKEEPER_FORBIDDEN(1405, "change zookeeper forbidden"), - // 资源不存在 - CLUSTER_NOT_EXIST(10000, "cluster not exist"), - BROKER_NOT_EXIST(10000, "broker not exist"), - TOPIC_NOT_EXIST(10000, "topic not exist"), - PARTITION_NOT_EXIST(10000, "partition not exist"), - ACCOUNT_NOT_EXIST(10000, "account not exist"), - APP_NOT_EXIST(1000, "app not exist"), - ORDER_NOT_EXIST(1000, "order not exist"), - CONFIG_NOT_EXIST(1000, "config not exist"), - IDC_NOT_EXIST(1000, "idc not exist"), - TASK_NOT_EXIST(1110, "task not exist"), + TOPIC_OPERATION_PARAM_NULL_POINTER(1450, "参数错误"), + TOPIC_OPERATION_PARTITION_NUM_ILLEGAL(1451, "分区数错误"), + TOPIC_OPERATION_BROKER_NUM_NOT_ENOUGH(1452, "Broker数不足错误"), + TOPIC_OPERATION_TOPIC_NAME_ILLEGAL(1453, "Topic名称非法"), + TOPIC_OPERATION_TOPIC_EXISTED(1454, "Topic已存在"), + TOPIC_OPERATION_UNKNOWN_TOPIC_PARTITION(1455, "Topic未知"), + TOPIC_OPERATION_TOPIC_CONFIG_ILLEGAL(1456, "Topic配置错误"), + TOPIC_OPERATION_TOPIC_IN_DELETING(1457, "Topic正在删除"), + TOPIC_OPERATION_UNKNOWN_ERROR(1458, "未知错误"), - AUTHORITY_NOT_EXIST(1000, "authority not exist"), + /** + * 参数错误[2000, 3000) + * 
------------------------------------------------------------------------------------------ + */ + PARAM_ILLEGAL(2000, "param illegal"), + CG_LOCATION_ILLEGAL(2001, "consumer group location illegal"), + ORDER_ALREADY_HANDLED(2002, "order already handled"), + APP_ID_OR_PASSWORD_ILLEGAL(2003, "app or password illegal"), + SYSTEM_CODE_ILLEGAL(2004, "system code illegal"), + CLUSTER_TASK_HOST_LIST_ILLEGAL(2005, "主机列表错误,请检查主机列表"), + JSON_PARSER_ERROR(2006, "json parser error"), - MONITOR_NOT_EXIST(1110, "monitor not exist"), + BROKER_NUM_NOT_ENOUGH(2050, "broker not enough"), + CONTROLLER_NOT_ALIVE(2051, "controller not alive"), + CLUSTER_METADATA_ERROR(2052, "cluster metadata error"), + TOPIC_CONFIG_ERROR(2053, "topic config error"), - QUOTA_NOT_EXIST(1000, "quota not exist, please check clusterId, topicName and appId"), + /** + * 参数错误 - 资源检查错误 + * 因为外部系统的问题, 操作时引起的错误, [7000, 8000) + * ------------------------------------------------------------------------------------------ + */ + RESOURCE_NOT_EXIST(7100, "资源不存在"), + CLUSTER_NOT_EXIST(7101, "cluster not exist"), + BROKER_NOT_EXIST(7102, "broker not exist"), + TOPIC_NOT_EXIST(7103, "topic not exist"), + PARTITION_NOT_EXIST(7104, "partition not exist"), + ACCOUNT_NOT_EXIST(7105, "account not exist"), + APP_NOT_EXIST(7106, "app not exist"), + ORDER_NOT_EXIST(7107, "order not exist"), + CONFIG_NOT_EXIST(7108, "config not exist"), + IDC_NOT_EXIST(7109, "idc not exist"), + TASK_NOT_EXIST(7110, "task not exist"), + AUTHORITY_NOT_EXIST(7111, "authority not exist"), + MONITOR_NOT_EXIST(7112, "monitor not exist"), + QUOTA_NOT_EXIST(7113, "quota not exist, please check clusterId, topicName and appId"), + CONSUMER_GROUP_NOT_EXIST(7114, "consumerGroup not exist"), + TOPIC_BIZ_DATA_NOT_EXIST(7115, "topic biz data not exist, please sync topic to db"), - // 资源不存在, 已存在, 已被使用 - RESOURCE_NOT_EXIST(1200, "资源不存在"), - RESOURCE_ALREADY_EXISTED(1200, "资源已经存在"), - RESOURCE_NAME_DUPLICATED(1200, "资源名称重复"), - RESOURCE_ALREADY_USED(1000, 
"资源早已被使用"), + // 资源已存在 + RESOURCE_ALREADY_EXISTED(7200, "资源已经存在"), + TOPIC_ALREADY_EXIST(7201, "topic already existed"), + + // 资源重名 + RESOURCE_NAME_DUPLICATED(7300, "资源名称重复"), + + // 资源已被使用 + RESOURCE_ALREADY_USED(7400, "资源早已被使用"), /** - * 资源参数错误 + * 因为外部系统的问题, 操作时引起的错误, [8000, 9000) + * ------------------------------------------------------------------------------------------ */ - CG_LOCATION_ILLEGAL(10000, "consumer group location illegal"), - ORDER_ALREADY_HANDLED(1000, "order already handled"), + MYSQL_ERROR(8010, "operate database failed"), - APP_ID_OR_PASSWORD_ILLEGAL(1000, "app or password illegal"), - SYSTEM_CODE_ILLEGAL(1000, "system code illegal"), + ZOOKEEPER_CONNECT_FAILED(8020, "zookeeper connect failed"), + ZOOKEEPER_READ_FAILED(8021, "zookeeper read failed"), + ZOOKEEPER_WRITE_FAILED(8022, "zookeeper write failed"), + ZOOKEEPER_DELETE_FAILED(8023, "zookeeper delete failed"), - CLUSTER_TASK_HOST_LIST_ILLEGAL(1000, "主机列表错误,请检查主机列表"), + // 调用集群任务里面的agent失败 + CALL_CLUSTER_TASK_AGENT_FAILED(8030, " call cluster task agent failed"), + // 调用监控系统失败 + CALL_MONITOR_SYSTEM_ERROR(8040, " call monitor-system failed"), + // 存储相关的调用失败 + STORAGE_UPLOAD_FILE_FAILED(8050, "upload file failed"), + STORAGE_FILE_TYPE_NOT_SUPPORT(8051, "File type not support"), + STORAGE_DOWNLOAD_FILE_FAILED(8052, "download file failed"), + LDAP_AUTHENTICATION_FAILED(8053, "LDAP authentication failed"), - - - - - - - /////////////////////////////////////////////////////////////// - - USER_WITHOUT_AUTHORITY(1000, "user without authority"), - - - - JSON_PARSER_ERROR(1000, "json parser error"), - - - TOPIC_OPERATION_PARAM_NULL_POINTER(2, "参数错误"), - TOPIC_OPERATION_PARTITION_NUM_ILLEGAL(3, "分区数错误"), - TOPIC_OPERATION_BROKER_NUM_NOT_ENOUGH(4, "Broker数不足错误"), - TOPIC_OPERATION_TOPIC_NAME_ILLEGAL(5, "Topic名称非法"), - TOPIC_OPERATION_TOPIC_EXISTED(6, "Topic已存在"), - TOPIC_OPERATION_UNKNOWN_TOPIC_PARTITION(7, "Topic未知"), - TOPIC_OPERATION_TOPIC_CONFIG_ILLEGAL(8, "Topic配置错误"), - 
TOPIC_OPERATION_TOPIC_IN_DELETING(9, "Topic正在删除"), - TOPIC_OPERATION_UNKNOWN_ERROR(10, "未知错误"), - TOPIC_EXIST_CONNECT_CANNOT_DELETE(10, "topic exist connect cannot delete"), - EXIST_TOPIC_CANNOT_DELETE(10, "exist topic cannot delete"), - - - /** - * 工单 - */ - CHANGE_ZOOKEEPER_FORBIDEN(100, "change zookeeper forbiden"), -// APP_EXIST_TOPIC_AUTHORITY_CANNOT_DELETE(1000, "app exist topic authority cannot delete"), - - UPLOAD_FILE_FAIL(1000, "upload file fail"), - FILE_TYPE_NOT_SUPPORT(1000, "File type not support"), - DOWNLOAD_FILE_FAIL(1000, "download file fail"), - - - TOPIC_ALREADY_EXIST(17400, "topic already existed"), - CONSUMER_GROUP_NOT_EXIST(17411, "consumerGroup not exist"), ; private int code; diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ao/ClusterDetailDTO.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ao/ClusterDetailDTO.java index 937d9cf8..2e903485 100644 --- a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ao/ClusterDetailDTO.java +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ao/ClusterDetailDTO.java @@ -23,6 +23,8 @@ public class ClusterDetailDTO { private String securityProperties; + private String jmxProperties; + private Integer status; private Date gmtCreate; @@ -103,6 +105,14 @@ public class ClusterDetailDTO { this.securityProperties = securityProperties; } + public String getJmxProperties() { + return jmxProperties; + } + + public void setJmxProperties(String jmxProperties) { + this.jmxProperties = jmxProperties; + } + public Integer getStatus() { return status; } @@ -176,8 +186,9 @@ public class ClusterDetailDTO { ", bootstrapServers='" + bootstrapServers + '\'' + ", kafkaVersion='" + kafkaVersion + '\'' + ", idc='" + idc + '\'' + - ", mode='" + mode + '\'' + + ", mode=" + mode + ", securityProperties='" + securityProperties + '\'' + + ", jmxProperties='" + jmxProperties + '\'' + ", status=" + 
status + ", gmtCreate=" + gmtCreate + ", gmtModify=" + gmtModify + diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ao/cluster/LogicalCluster.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ao/cluster/LogicalCluster.java index 86941d0e..a7525374 100644 --- a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ao/cluster/LogicalCluster.java +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ao/cluster/LogicalCluster.java @@ -9,6 +9,8 @@ public class LogicalCluster { private String logicalClusterName; + private String logicalClusterIdentification; + private Integer mode; private Integer topicNum; @@ -41,6 +43,14 @@ public class LogicalCluster { this.logicalClusterName = logicalClusterName; } + public String getLogicalClusterIdentification() { + return logicalClusterIdentification; + } + + public void setLogicalClusterIdentification(String logicalClusterIdentification) { + this.logicalClusterIdentification = logicalClusterIdentification; + } + public Integer getMode() { return mode; } @@ -81,6 +91,14 @@ public class LogicalCluster { this.bootstrapServers = bootstrapServers; } + public String getDescription() { + return description; + } + + public void setDescription(String description) { + this.description = description; + } + public Long getGmtCreate() { return gmtCreate; } @@ -97,19 +115,12 @@ public class LogicalCluster { this.gmtModify = gmtModify; } - public String getDescription() { - return description; - } - - public void setDescription(String description) { - this.description = description; - } - @Override public String toString() { return "LogicalCluster{" + "logicalClusterId=" + logicalClusterId + ", logicalClusterName='" + logicalClusterName + '\'' + + ", logicalClusterIdentification='" + logicalClusterIdentification + '\'' + ", mode=" + mode + ", topicNum=" + topicNum + ", clusterVersion='" + clusterVersion + '\'' + 
diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ao/config/SinkTopicRequestTimeMetricsConfig.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ao/config/SinkTopicRequestTimeMetricsConfig.java deleted file mode 100644 index 91faaba1..00000000 --- a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/ao/config/SinkTopicRequestTimeMetricsConfig.java +++ /dev/null @@ -1,57 +0,0 @@ -package com.xiaojukeji.kafka.manager.common.entity.ao.config; - -/** - * @author zengqiao - * @date 20/9/7 - */ -public class SinkTopicRequestTimeMetricsConfig { - private Long clusterId; - - private String topicName; - - private Long startId; - - private Long step; - - public Long getClusterId() { - return clusterId; - } - - public void setClusterId(Long clusterId) { - this.clusterId = clusterId; - } - - public String getTopicName() { - return topicName; - } - - public void setTopicName(String topicName) { - this.topicName = topicName; - } - - public Long getStartId() { - return startId; - } - - public void setStartId(Long startId) { - this.startId = startId; - } - - public Long getStep() { - return step; - } - - public void setStep(Long step) { - this.step = step; - } - - @Override - public String toString() { - return "SinkTopicRequestTimeMetricsConfig{" + - "clusterId=" + clusterId + - ", topicName='" + topicName + '\'' + - ", startId=" + startId + - ", step=" + step + - '}'; - } -} \ No newline at end of file diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/dto/op/ControllerPreferredCandidateDTO.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/dto/op/ControllerPreferredCandidateDTO.java new file mode 100644 index 00000000..1b4c95b9 --- /dev/null +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/dto/op/ControllerPreferredCandidateDTO.java @@ -0,0 +1,45 @@ +package 
com.xiaojukeji.kafka.manager.common.entity.dto.op; + +import com.fasterxml.jackson.annotation.JsonIgnoreProperties; +import io.swagger.annotations.ApiModel; +import io.swagger.annotations.ApiModelProperty; + +import java.util.List; + +/** + * @author zengqiao + * @date 21/01/24 + */ +@JsonIgnoreProperties(ignoreUnknown = true) +@ApiModel(description="优选为Controller的候选者") +public class ControllerPreferredCandidateDTO { + @ApiModelProperty(value="集群ID") + private Long clusterId; + + @ApiModelProperty(value="优选为controller的BrokerId") + private List brokerIdList; + + public Long getClusterId() { + return clusterId; + } + + public void setClusterId(Long clusterId) { + this.clusterId = clusterId; + } + + public List getBrokerIdList() { + return brokerIdList; + } + + public void setBrokerIdList(List brokerIdList) { + this.brokerIdList = brokerIdList; + } + + @Override + public String toString() { + return "ControllerPreferredCandidateDTO{" + + "clusterId=" + clusterId + + ", brokerIdList=" + brokerIdList + + '}'; + } +} diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/dto/rd/ClusterDTO.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/dto/rd/ClusterDTO.java index c28bc8b6..7afc09c6 100644 --- a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/dto/rd/ClusterDTO.java +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/dto/rd/ClusterDTO.java @@ -27,9 +27,12 @@ public class ClusterDTO { @ApiModelProperty(value="数据中心") private String idc; - @ApiModelProperty(value="安全配置参数") + @ApiModelProperty(value="Kafka安全配置") private String securityProperties; + @ApiModelProperty(value="Jmx配置") + private String jmxProperties; + public Long getClusterId() { return clusterId; } @@ -78,6 +81,14 @@ public class ClusterDTO { this.securityProperties = securityProperties; } + public String getJmxProperties() { + return jmxProperties; + } + + public void 
setJmxProperties(String jmxProperties) { + this.jmxProperties = jmxProperties; + } + @Override public String toString() { return "ClusterDTO{" + @@ -87,15 +98,15 @@ public class ClusterDTO { ", bootstrapServers='" + bootstrapServers + '\'' + ", idc='" + idc + '\'' + ", securityProperties='" + securityProperties + '\'' + + ", jmxProperties='" + jmxProperties + '\'' + '}'; } - public Boolean legal() { + public boolean legal() { if (ValidateUtils.isNull(clusterName) || ValidateUtils.isNull(zookeeper) || ValidateUtils.isNull(idc) - || ValidateUtils.isNull(bootstrapServers) - ) { + || ValidateUtils.isNull(bootstrapServers)) { return false; } return true; diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/dto/rd/LogicalClusterDTO.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/dto/rd/LogicalClusterDTO.java index 790f9758..def22479 100644 --- a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/dto/rd/LogicalClusterDTO.java +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/dto/rd/LogicalClusterDTO.java @@ -21,6 +21,9 @@ public class LogicalClusterDTO { @ApiModelProperty(value = "名称") private String name; + @ApiModelProperty(value = "集群标识, 用于告警的上报") + private String identification; + @ApiModelProperty(value = "集群模式") private Integer mode; @@ -52,6 +55,14 @@ public class LogicalClusterDTO { this.name = name; } + public String getIdentification() { + return identification; + } + + public void setIdentification(String identification) { + this.identification = identification; + } + public Integer getMode() { return mode; } @@ -97,6 +108,7 @@ public class LogicalClusterDTO { return "LogicalClusterDTO{" + "id=" + id + ", name='" + name + '\'' + + ", identification='" + identification + '\'' + ", mode=" + mode + ", clusterId=" + clusterId + ", regionIdList=" + regionIdList + @@ -117,6 +129,7 @@ public class LogicalClusterDTO { } appId = 
ValidateUtils.isNull(appId)? "": appId; description = ValidateUtils.isNull(description)? "": description; + identification = ValidateUtils.isNull(identification)? name: identification; return true; } } \ No newline at end of file diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pojo/ClusterDO.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pojo/ClusterDO.java index cefbc9f2..5ebebc75 100644 --- a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pojo/ClusterDO.java +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pojo/ClusterDO.java @@ -1,6 +1,7 @@ package com.xiaojukeji.kafka.manager.common.entity.pojo; import java.util.Date; +import java.util.Objects; /** * @author zengqiao @@ -17,6 +18,8 @@ public class ClusterDO implements Comparable { private String securityProperties; + private String jmxProperties; + private Integer status; private Date gmtCreate; @@ -31,30 +34,6 @@ public class ClusterDO implements Comparable { this.id = id; } - public Integer getStatus() { - return status; - } - - public void setStatus(Integer status) { - this.status = status; - } - - public Date getGmtCreate() { - return gmtCreate; - } - - public void setGmtCreate(Date gmtCreate) { - this.gmtCreate = gmtCreate; - } - - public Date getGmtModify() { - return gmtModify; - } - - public void setGmtModify(Date gmtModify) { - this.gmtModify = gmtModify; - } - public String getClusterName() { return clusterName; } @@ -87,6 +66,38 @@ public class ClusterDO implements Comparable { this.securityProperties = securityProperties; } + public String getJmxProperties() { + return jmxProperties; + } + + public void setJmxProperties(String jmxProperties) { + this.jmxProperties = jmxProperties; + } + + public Integer getStatus() { + return status; + } + + public void setStatus(Integer status) { + this.status = status; + } + + public Date getGmtCreate() { + return 
gmtCreate; + } + + public void setGmtCreate(Date gmtCreate) { + this.gmtCreate = gmtCreate; + } + + public Date getGmtModify() { + return gmtModify; + } + + public void setGmtModify(Date gmtModify) { + this.gmtModify = gmtModify; + } + @Override public String toString() { return "ClusterDO{" + @@ -95,6 +106,7 @@ public class ClusterDO implements Comparable { ", zookeeper='" + zookeeper + '\'' + ", bootstrapServers='" + bootstrapServers + '\'' + ", securityProperties='" + securityProperties + '\'' + + ", jmxProperties='" + jmxProperties + '\'' + ", status=" + status + ", gmtCreate=" + gmtCreate + ", gmtModify=" + gmtModify + @@ -105,4 +117,22 @@ public class ClusterDO implements Comparable { public int compareTo(ClusterDO clusterDO) { return this.id.compareTo(clusterDO.id); } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + ClusterDO clusterDO = (ClusterDO) o; + return Objects.equals(id, clusterDO.id) + && Objects.equals(clusterName, clusterDO.clusterName) + && Objects.equals(zookeeper, clusterDO.zookeeper) + && Objects.equals(bootstrapServers, clusterDO.bootstrapServers) + && Objects.equals(securityProperties, clusterDO.securityProperties) + && Objects.equals(jmxProperties, clusterDO.jmxProperties); + } + + @Override + public int hashCode() { + return Objects.hash(id, clusterName, zookeeper, bootstrapServers, securityProperties, jmxProperties); + } } \ No newline at end of file diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pojo/LogicalClusterDO.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pojo/LogicalClusterDO.java index d5cb3497..db81c1c9 100644 --- a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pojo/LogicalClusterDO.java +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pojo/LogicalClusterDO.java @@ -11,6 +11,8 @@ public 
class LogicalClusterDO { private String name; + private String identification; + private Integer mode; private String appId; @@ -41,6 +43,14 @@ public class LogicalClusterDO { this.name = name; } + public String getIdentification() { + return identification; + } + + public void setIdentification(String identification) { + this.identification = identification; + } + public Integer getMode() { return mode; } @@ -102,6 +112,7 @@ public class LogicalClusterDO { return "LogicalClusterDO{" + "id=" + id + ", name='" + name + '\'' + + ", identification='" + identification + '\'' + ", mode=" + mode + ", appId='" + appId + '\'' + ", clusterId=" + clusterId + diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pojo/gateway/GatewayConfigDO.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pojo/gateway/GatewayConfigDO.java index c0e96000..fa29c7cf 100644 --- a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pojo/gateway/GatewayConfigDO.java +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/pojo/gateway/GatewayConfigDO.java @@ -17,6 +17,8 @@ public class GatewayConfigDO { private Long version; + private String description; + private Date createTime; private Date modifyTime; @@ -61,6 +63,14 @@ public class GatewayConfigDO { this.version = version; } + public String getDescription() { + return description; + } + + public void setDescription(String description) { + this.description = description; + } + public Date getCreateTime() { return createTime; } @@ -85,6 +95,7 @@ public class GatewayConfigDO { ", name='" + name + '\'' + ", value='" + value + '\'' + ", version=" + version + + ", description='" + description + '\'' + ", createTime=" + createTime + ", modifyTime=" + modifyTime + '}'; diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/normal/cluster/LogicClusterVO.java 
b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/normal/cluster/LogicClusterVO.java index c3c5f9c3..8fa5db9d 100644 --- a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/normal/cluster/LogicClusterVO.java +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/normal/cluster/LogicClusterVO.java @@ -15,6 +15,9 @@ public class LogicClusterVO { @ApiModelProperty(value="逻辑集群名称") private String clusterName; + @ApiModelProperty(value="逻辑标识") + private String clusterIdentification; + @ApiModelProperty(value="逻辑集群类型, 0:共享集群, 1:独享集群, 2:独立集群") private Integer mode; @@ -24,9 +27,6 @@ public class LogicClusterVO { @ApiModelProperty(value="集群版本") private String clusterVersion; - @ApiModelProperty(value="物理集群ID") - private Long physicalClusterId; - @ApiModelProperty(value="集群服务地址") private String bootstrapServers; @@ -55,6 +55,22 @@ public class LogicClusterVO { this.clusterName = clusterName; } + public String getClusterIdentification() { + return clusterIdentification; + } + + public void setClusterIdentification(String clusterIdentification) { + this.clusterIdentification = clusterIdentification; + } + + public Integer getMode() { + return mode; + } + + public void setMode(Integer mode) { + this.mode = mode; + } + public Integer getTopicNum() { return topicNum; } @@ -71,14 +87,6 @@ public class LogicClusterVO { this.clusterVersion = clusterVersion; } - public Long getPhysicalClusterId() { - return physicalClusterId; - } - - public void setPhysicalClusterId(Long physicalClusterId) { - this.physicalClusterId = physicalClusterId; - } - public String getBootstrapServers() { return bootstrapServers; } @@ -87,6 +95,14 @@ public class LogicClusterVO { this.bootstrapServers = bootstrapServers; } + public String getDescription() { + return description; + } + + public void setDescription(String description) { + this.description = description; + } + public Long getGmtCreate() { return 
gmtCreate; } @@ -103,32 +119,15 @@ public class LogicClusterVO { this.gmtModify = gmtModify; } - public Integer getMode() { - return mode; - } - - public void setMode(Integer mode) { - this.mode = mode; - } - - - public String getDescription() { - return description; - } - - public void setDescription(String description) { - this.description = description; - } - @Override public String toString() { return "LogicClusterVO{" + "clusterId=" + clusterId + ", clusterName='" + clusterName + '\'' + + ", clusterIdentification='" + clusterIdentification + '\'' + ", mode=" + mode + ", topicNum=" + topicNum + ", clusterVersion='" + clusterVersion + '\'' + - ", physicalClusterId=" + physicalClusterId + ", bootstrapServers='" + bootstrapServers + '\'' + ", description='" + description + '\'' + ", gmtCreate=" + gmtCreate + diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/op/expert/ExpiredTopicVO.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/op/expert/ExpiredTopicVO.java index 46c7a3a2..c4921259 100644 --- a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/op/expert/ExpiredTopicVO.java +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/op/expert/ExpiredTopicVO.java @@ -28,7 +28,7 @@ public class ExpiredTopicVO { @ApiModelProperty(value = "负责人") private String principals; - @ApiModelProperty(value = "状态, -1:可下线, 0:过期待通知, 1+:已通知待反馈") + @ApiModelProperty(value = "状态, -1:已通知可下线, 0:过期待通知, 1+:已通知待反馈") private Integer status; public Long getClusterId() { diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/rd/GatewayConfigVO.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/rd/GatewayConfigVO.java index a0b402af..72314c31 100644 --- a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/rd/GatewayConfigVO.java +++ 
b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/rd/GatewayConfigVO.java @@ -26,6 +26,9 @@ public class GatewayConfigVO { @ApiModelProperty(value="版本") private Long version; + @ApiModelProperty(value="描述说明") + private String description; + @ApiModelProperty(value="创建时间") private Date createTime; @@ -72,6 +75,14 @@ public class GatewayConfigVO { this.version = version; } + public String getDescription() { + return description; + } + + public void setDescription(String description) { + this.description = description; + } + public Date getCreateTime() { return createTime; } @@ -96,6 +107,7 @@ public class GatewayConfigVO { ", name='" + name + '\'' + ", value='" + value + '\'' + ", version=" + version + + ", description='" + description + '\'' + ", createTime=" + createTime + ", modifyTime=" + modifyTime + '}'; diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/rd/cluster/ClusterBaseVO.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/rd/cluster/ClusterBaseVO.java index ca2b7350..111304f1 100644 --- a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/rd/cluster/ClusterBaseVO.java +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/rd/cluster/ClusterBaseVO.java @@ -32,9 +32,12 @@ public class ClusterBaseVO { @ApiModelProperty(value="集群类型") private Integer mode; - @ApiModelProperty(value="安全配置参数") + @ApiModelProperty(value="Kafka安全配置") private String securityProperties; + @ApiModelProperty(value="Jmx配置") + private String jmxProperties; + @ApiModelProperty(value="1:监控中, 0:暂停监控") private Integer status; @@ -108,6 +111,14 @@ public class ClusterBaseVO { this.securityProperties = securityProperties; } + public String getJmxProperties() { + return jmxProperties; + } + + public void setJmxProperties(String jmxProperties) { + this.jmxProperties = jmxProperties; + } + public Integer getStatus() { 
return status; } @@ -141,8 +152,9 @@ public class ClusterBaseVO { ", bootstrapServers='" + bootstrapServers + '\'' + ", kafkaVersion='" + kafkaVersion + '\'' + ", idc='" + idc + '\'' + - ", mode='" + mode + '\'' + + ", mode=" + mode + ", securityProperties='" + securityProperties + '\'' + + ", jmxProperties='" + jmxProperties + '\'' + ", status=" + status + ", gmtCreate=" + gmtCreate + ", gmtModify=" + gmtModify + diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/rd/cluster/LogicalClusterVO.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/rd/cluster/LogicalClusterVO.java index 86ced10f..61f9b90c 100644 --- a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/rd/cluster/LogicalClusterVO.java +++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/entity/vo/rd/cluster/LogicalClusterVO.java @@ -18,6 +18,9 @@ public class LogicalClusterVO { @ApiModelProperty(value = "逻辑集群名称") private String logicalClusterName; + @ApiModelProperty(value = "逻辑集群标识") + private String logicalClusterIdentification; + @ApiModelProperty(value = "物理集群ID") private Long physicalClusterId; @@ -55,6 +58,14 @@ public class LogicalClusterVO { this.logicalClusterName = logicalClusterName; } + public String getLogicalClusterIdentification() { + return logicalClusterIdentification; + } + + public void setLogicalClusterIdentification(String logicalClusterIdentification) { + this.logicalClusterIdentification = logicalClusterIdentification; + } + public Long getPhysicalClusterId() { return physicalClusterId; } @@ -116,6 +127,7 @@ public class LogicalClusterVO { return "LogicalClusterVO{" + "logicalClusterId=" + logicalClusterId + ", logicalClusterName='" + logicalClusterName + '\'' + + ", logicalClusterIdentification='" + logicalClusterIdentification + '\'' + ", physicalClusterId=" + physicalClusterId + ", regionIdList=" + regionIdList + ", mode=" + mode + diff --git 
a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/utils/JsonUtils.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/utils/JsonUtils.java
index d9724065..283d59c5 100644
--- a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/utils/JsonUtils.java
+++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/utils/JsonUtils.java
@@ -53,6 +53,20 @@ public class JsonUtils {
         return JSON.toJSONString(obj);
     }
 
+    public static <T> T stringToObj(String src, Class<T> clazz) {
+        if (ValidateUtils.isBlank(src)) {
+            return null;
+        }
+        return JSON.parseObject(src, clazz);
+    }
+
+    public static <T> List<T> stringToArrObj(String src, Class<T> clazz) {
+        if (ValidateUtils.isBlank(src)) {
+            return null;
+        }
+        return JSON.parseArray(src, clazz);
+    }
+
     public static List<TopicConnectionDO> parseTopicConnections(Long clusterId, JSONObject jsonObject, long postTime) {
         List<TopicConnectionDO> connectionDOList = new ArrayList<>();
         for (String clientType: jsonObject.keySet()) {
diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/utils/ValidateUtils.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/utils/ValidateUtils.java
index 1ece8f9f..6bd0c55c 100644
--- a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/utils/ValidateUtils.java
+++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/utils/ValidateUtils.java
@@ -2,6 +2,7 @@ package com.xiaojukeji.kafka.manager.common.utils;
 
 import org.apache.commons.lang.StringUtils;
 
+import java.util.Arrays;
 import java.util.List;
 import java.util.Map;
 import java.util.Set;
@@ -11,6 +12,20 @@ import java.util.Set;
  * @date 20/4/16
  */
 public class ValidateUtils {
+    /**
+     * Returns true if any of the arguments is null
+     */
+    public static boolean anyNull(Object... objects) {
+        return Arrays.stream(objects).anyMatch(ValidateUtils::isNull);
+    }
+
+    /**
+     * Returns true if any of the strings is null or blank
+     */
+    public static boolean anyBlank(String... strings) {
+        return Arrays.stream(strings).anyMatch(StringUtils::isBlank);
+    }
+
     /**
      * Null check
      */
diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/utils/jmx/JmxConfig.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/utils/jmx/JmxConfig.java
new file mode 100644
index 00000000..bbc913c4
--- /dev/null
+++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/utils/jmx/JmxConfig.java
@@ -0,0 +1,65 @@
+package com.xiaojukeji.kafka.manager.common.utils.jmx;
+
+public class JmxConfig {
+    /**
+     * Maximum number of connections per host
+     */
+    private Integer maxConn;
+
+    /**
+     * Username
+     */
+    private String username;
+
+    /**
+     * Password
+     */
+    private String password;
+
+    /**
+     * Whether SSL is enabled
+     */
+    private Boolean openSSL;
+
+    public Integer getMaxConn() {
+        return maxConn;
+    }
+
+    public void setMaxConn(Integer maxConn) {
+        this.maxConn = maxConn;
+    }
+
+    public String getUsername() {
+        return username;
+    }
+
+    public void setUsername(String username) {
+        this.username = username;
+    }
+
+    public String getPassword() {
+        return password;
+    }
+
+    public void setPassword(String password) {
+        this.password = password;
+    }
+
+    public Boolean isOpenSSL() {
+        return openSSL;
+    }
+
+    public void setOpenSSL(Boolean openSSL) {
+        this.openSSL = openSSL;
+    }
+
+    @Override
+    public String toString() {
+        return "JmxConfig{" +
+                "maxConn=" + maxConn +
+                ", username='" + username + '\'' +
+                ", password='" + password + '\'' +
+                ", openSSL=" + openSSL +
+                '}';
+    }
+}
diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/utils/jmx/JmxConnectorWrap.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/utils/jmx/JmxConnectorWrap.java
index ed8a639e..c7c69ca3 100644
--- a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/utils/jmx/JmxConnectorWrap.java
+++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/utils/jmx/JmxConnectorWrap.java
@@ -1,5 +1,6 @@ package
com.xiaojukeji.kafka.manager.common.utils.jmx;
+import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -7,8 +8,13 @@ import javax.management.*;
 import javax.management.remote.JMXConnector;
 import javax.management.remote.JMXConnectorFactory;
 import javax.management.remote.JMXServiceURL;
+import javax.management.remote.rmi.RMIConnectorServer;
+import javax.naming.Context;
+import javax.rmi.ssl.SslRMIClientSocketFactory;
 import java.io.IOException;
 import java.net.MalformedURLException;
+import java.util.HashMap;
+import java.util.Map;
 import java.util.Set;
 import java.util.concurrent.atomic.AtomicInteger;
@@ -28,13 +34,19 @@ public class JmxConnectorWrap {
 
     private AtomicInteger atomicInteger;
 
-    public JmxConnectorWrap(String host, int port, int maxConn) {
+    private JmxConfig jmxConfig;
+
+    public JmxConnectorWrap(String host, int port, JmxConfig jmxConfig) {
         this.host = host;
         this.port = port;
-        if (maxConn <= 0) {
-            maxConn = 1;
+        this.jmxConfig = jmxConfig;
+        if (ValidateUtils.isNull(this.jmxConfig)) {
+            this.jmxConfig = new JmxConfig();
         }
-        this.atomicInteger = new AtomicInteger(maxConn);
+        if (ValidateUtils.isNullOrLessThanZero(this.jmxConfig.getMaxConn())) {
+            this.jmxConfig.setMaxConn(1);
+        }
+        this.atomicInteger = new AtomicInteger(this.jmxConfig.getMaxConn());
     }
 
     public boolean checkJmxConnectionAndInitIfNeed() {
@@ -64,8 +76,19 @@ public class JmxConnectorWrap {
         }
         String jmxUrl = String.format("service:jmx:rmi:///jndi/rmi://%s:%d/jmxrmi", host, port);
         try {
-            JMXServiceURL url = new JMXServiceURL(jmxUrl);
-            jmxConnector = JMXConnectorFactory.connect(url, null);
+            Map<String, Object> environment = new HashMap<>();
+            if (!ValidateUtils.isBlank(this.jmxConfig.getUsername()) && !ValidateUtils.isBlank(this.jmxConfig.getPassword())) {
+                // the default RMI connector authenticator expects the credentials as a String[2]
+                environment.put(JMXConnector.CREDENTIALS,
+                        new String[]{this.jmxConfig.getUsername(), this.jmxConfig.getPassword()});
+            }
+            if (this.jmxConfig.isOpenSSL() != null && this.jmxConfig.isOpenSSL()) {
+                environment.put(Context.SECURITY_PROTOCOL, "ssl");
+                SslRMIClientSocketFactory clientSocketFactory = new SslRMIClientSocketFactory();
+                environment.put(RMIConnectorServer.RMI_CLIENT_SOCKET_FACTORY_ATTRIBUTE, clientSocketFactory);
+                environment.put("com.sun.jndi.rmi.factory.socket", clientSocketFactory);
+            }
+
+            jmxConnector = JMXConnectorFactory.connect(new JMXServiceURL(jmxUrl), environment);
             LOGGER.info("JMX connect success, host:{} port:{}.", host, port);
             return true;
         } catch (MalformedURLException e) {
diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/utils/ldap/LDAPAuthentication.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/utils/ldap/LDAPAuthentication.java
new file mode 100644
index 00000000..eff3bc25
--- /dev/null
+++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/utils/ldap/LDAPAuthentication.java
@@ -0,0 +1,128 @@
+package com.xiaojukeji.kafka.manager.common.utils.ldap;
+
+import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
+import org.springframework.beans.factory.annotation.Value;
+import org.springframework.stereotype.Component;
+
+import javax.naming.AuthenticationException;
+import javax.naming.Context;
+import javax.naming.NamingEnumeration;
+import javax.naming.NamingException;
+import javax.naming.directory.SearchControls;
+import javax.naming.directory.SearchResult;
+import javax.naming.ldap.InitialLdapContext;
+import javax.naming.ldap.LdapContext;
+import java.util.Hashtable;
+
+@Component
+public class LDAPAuthentication {
+
+    @Value(value = "${ldap.url}")
+    private String ldapUrl;
+
+    @Value(value = "${ldap.basedn}")
+    private String ldapBasedn;
+
+    @Value(value = "${ldap.factory}")
+    private String ldapFactory;
+
+    @Value(value = "${ldap.filter}")
+    private String ldapFilter;
+
+    @Value(value = "${ldap.auth-user-registration-role}")
+    private String authUserRegistrationRole;
+
+    @Value(value = "${ldap.security.authentication}")
+    private String securityAuthentication;
+
+    @Value(value = "${ldap.security.principal}")
+    private String securityPrincipal;
+
+    @Value(value = "${ldap.security.credentials}")
+    private String securityCredentials;
+
+    private LdapContext getConnect() {
+        Hashtable<String, String> env = new Hashtable<>();
+        env.put(Context.INITIAL_CONTEXT_FACTORY, ldapFactory);
+        env.put(Context.PROVIDER_URL, ldapUrl + ldapBasedn);
+        env.put(Context.SECURITY_AUTHENTICATION, securityAuthentication);
+
+        // If no principal and credentials are specified here, the bind silently falls back to anonymous
+        env.put(Context.SECURITY_PRINCIPAL, securityPrincipal);
+        env.put(Context.SECURITY_CREDENTIALS, securityCredentials);
+        try {
+            return new InitialLdapContext(env, null);
+        } catch (AuthenticationException e) {
+            e.printStackTrace();
+        } catch (Exception e) {
+            e.printStackTrace();
+        }
+        return null;
+    }
+
+    private String getUserDN(String account, LdapContext ctx) {
+        String userDN = "";
+        try {
+            SearchControls constraints = new SearchControls();
+            constraints.setSearchScope(SearchControls.SUBTREE_SCOPE);
+            String filter = "(&(objectClass=*)(" + ldapFilter + "=" + account + "))";
+
+            NamingEnumeration<SearchResult> en = ctx.search("", filter, constraints);
+            if (en == null || !en.hasMoreElements()) {
+                return "";
+            }
+            // maybe more than one element
+            while (en.hasMoreElements()) {
+                Object obj = en.nextElement();
+                if (obj instanceof SearchResult) {
+                    SearchResult si = (SearchResult) obj;
+                    userDN += si.getName();
+                    userDN += "," + ldapBasedn;
+                    break;
+                }
+            }
+        } catch (Exception e) {
+            e.printStackTrace();
+        }
+
+        return userDN;
+    }
+
+    /**
+     * Verify an account and password against LDAP
+     * @param account
+     * @param password
+     * @return
+     */
+    public boolean authenticate(String account, String password) {
+        LdapContext ctx = getConnect();
+        if (ValidateUtils.isNull(ctx)) {
+            // connection (service bind) failed, nothing to authenticate against
+            return false;
+        }
+
+        boolean valid = false;
+
+        try {
+            String userDN = getUserDN(account, ctx);
+            if (ValidateUtils.isBlank(userDN)) {
+                return valid;
+            }
+            ctx.addToEnvironment(Context.SECURITY_PRINCIPAL, userDN);
+            ctx.addToEnvironment(Context.SECURITY_CREDENTIALS, password);
+            ctx.reconnect(null);
+            valid = true;
+        } catch (AuthenticationException e) {
+            System.out.println(e.toString());
+        } catch (NamingException e) {
+            e.printStackTrace();
+        } finally {
+            if (ctx != null) {
+                try {
+                    ctx.close();
+                } catch (NamingException e) {
+                    e.printStackTrace();
+                }
+            }
+        }
+
+        return valid;
+    }
+
+}
diff --git a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/zookeeper/ZkPathUtil.java b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/zookeeper/ZkPathUtil.java
index e0a5632a..6705f435 100644
--- a/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/zookeeper/ZkPathUtil.java
+++ b/kafka-manager-common/src/main/java/com/xiaojukeji/kafka/manager/common/zookeeper/ZkPathUtil.java
@@ -33,7 +33,9 @@ public class ZkPathUtil {
 
     private static final String D_METRICS_CONFIG_ROOT_NODE = CONFIG_ROOT_NODE + ZOOKEEPER_SEPARATOR + "KafkaExMetrics";
 
-    public static final String D_CONTROLLER_CANDIDATES = CONFIG_ROOT_NODE + ZOOKEEPER_SEPARATOR + "extension/candidates";
+    public static final String D_CONFIG_EXTENSION_ROOT_NODE = CONFIG_ROOT_NODE + ZOOKEEPER_SEPARATOR + "extension";
+
+    public static final String D_CONTROLLER_CANDIDATES = D_CONFIG_EXTENSION_ROOT_NODE + ZOOKEEPER_SEPARATOR + "candidates";
 
     public static String getBrokerIdNodePath(Integer brokerId) {
         return BROKER_IDS_ROOT + ZOOKEEPER_SEPARATOR + String.valueOf(brokerId);
@@ -111,6 +113,10 @@ public class ZkPathUtil {
     }
 
     public static String getKafkaExtraMetricsPath(Integer brokerId) {
-        return D_METRICS_CONFIG_ROOT_NODE + ZOOKEEPER_SEPARATOR + String.valueOf(brokerId);
+        return D_METRICS_CONFIG_ROOT_NODE + ZOOKEEPER_SEPARATOR + brokerId;
+    }
+
+    public static String getControllerCandidatePath(Integer brokerId) {
+        return D_CONTROLLER_CANDIDATES + ZOOKEEPER_SEPARATOR + brokerId;
     }
 }
diff --git a/kafka-manager-common/src/test/java/com/xiaojukeji/kafka/manager/common/utils/JsonUtilsTest.java
b/kafka-manager-common/src/test/java/com/xiaojukeji/kafka/manager/common/utils/JsonUtilsTest.java
new file mode 100644
index 00000000..1d338015
--- /dev/null
+++ b/kafka-manager-common/src/test/java/com/xiaojukeji/kafka/manager/common/utils/JsonUtilsTest.java
@@ -0,0 +1,18 @@
+package com.xiaojukeji.kafka.manager.common.utils;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.util.HashMap;
+import java.util.Map;
+
+public class JsonUtilsTest {
+    @Test
+    public void testMapToJsonString() {
+        Map<String, Object> map = new HashMap<>();
+        map.put("key", "value");
+        map.put("int", 1);
+        String expectRes = "{\"key\":\"value\",\"int\":1}";
+        Assert.assertEquals(expectRes, JsonUtils.toJSONString(map));
+    }
+}
diff --git a/kafka-manager-console/pom.xml b/kafka-manager-console/pom.xml
index 2339dabd..02ee7e1c 100644
--- a/kafka-manager-console/pom.xml
+++ b/kafka-manager-console/pom.xml
@@ -8,7 +8,7 @@
         <artifactId>kafka-manager</artifactId>
         <groupId>com.xiaojukeji.kafka</groupId>
-        <version>2.1.0-SNAPSHOT</version>
+        <version>${kafka-manager.revision}</version>
diff --git a/kafka-manager-console/src/component/antd/index.tsx b/kafka-manager-console/src/component/antd/index.tsx
index 2d771efe..d0958daf 100644
--- a/kafka-manager-console/src/component/antd/index.tsx
+++ b/kafka-manager-console/src/component/antd/index.tsx
@@ -94,6 +94,9 @@ import 'antd/es/divider/style';
 import Upload from 'antd/es/upload';
 import 'antd/es/upload/style';
 
+import Transfer from 'antd/es/transfer';
+import 'antd/es/transfer/style';
+
 import TimePicker from 'antd/es/time-picker';
 import 'antd/es/time-picker/style';
@@ -142,5 +145,6 @@ export {
   TimePicker,
   RangePickerValue,
   Badge,
-  Popover
+  Popover,
+  Transfer
 };
diff --git a/kafka-manager-console/src/component/editor/index.less b/kafka-manager-console/src/component/editor/index.less
index 4ff05854..36c52cde 100644
--- a/kafka-manager-console/src/component/editor/index.less
+++ b/kafka-manager-console/src/component/editor/index.less
@@ -25,7 +25,7 @@
   .editor{
     height: 100%;
     position: absolute;
-    left: -14%;
+    left: -12%;
     width: 120%;
   }
 }
diff --git a/kafka-manager-console/src/component/editor/monacoEditor.tsx b/kafka-manager-console/src/component/editor/monacoEditor.tsx
index 7a0dd44c..ac0a297a 100644
--- a/kafka-manager-console/src/component/editor/monacoEditor.tsx
+++ b/kafka-manager-console/src/component/editor/monacoEditor.tsx
@@ -21,24 +21,12 @@ class Monacoeditor extends React.Component {
   public state = {
     placeholder: '',
   };
-  // public arr = '{"clusterId":95,"startId":37397856,"step":100,"topicName":"kmo_topic_metrics_tempory_zq"}';
-  // public Ars(a: string) {
-  //   const obj = JSON.parse(a);
-  //   const newobj: any = {};
-  //   for (const item in obj) {
-  //     if (typeof obj[item] === 'object') {
-  //       this.Ars(obj[item]);
-  //     } else {
-  //       newobj[item] = obj[item];
-  //     }
-  //   }
-  //   return JSON.stringify(newobj);
-  // }
+
   public async componentDidMount() {
     const { value, onChange } = this.props;
     const format: any = await format2json(value);
     this.editor = monaco.editor.create(this.ref, {
-      value: format.result,
+      value: format.result || value,
       language: 'json',
       lineNumbers: 'off',
       scrollBeyondLastLine: false,
@@ -48,7 +36,7 @@ class Monacoeditor extends React.Component {
       minimap: {
         enabled: false,
       },
-      // automaticLayout: true, // auto layout
+      automaticLayout: true, // auto layout
       glyphMargin: true, // glyph margin for {},[]
       // useTabStops: false,
       // formatOnPaste: true,
diff --git a/kafka-manager-console/src/component/x-form/index.tsx b/kafka-manager-console/src/component/x-form/index.tsx
index 20b7c421..dc435d0f 100755
--- a/kafka-manager-console/src/component/x-form/index.tsx
+++ b/kafka-manager-console/src/component/x-form/index.tsx
@@ -2,6 +2,7 @@ import * as React from 'react';
 import { Select, Input, InputNumber, Form, Switch, Checkbox, DatePicker, Radio, Upload, Button, Icon, Tooltip } from 'component/antd';
 import Monacoeditor from 'component/editor/monacoEditor';
 import { searchProps } from 'constants/table';
+import { version } from 'store/version';
 import './index.less';
 
 const TextArea =
Input.TextArea; @@ -129,6 +130,8 @@ class XForm extends React.Component { this.renderFormItem(formItem), )} {formItem.renderExtraElement ? formItem.renderExtraElement() : null} + {/* 添加保存时间提示文案 */} + {formItem.attrs?.prompttype ? {formItem.attrs.prompttype} : null} ); })} @@ -189,7 +192,7 @@ class XForm extends React.Component { case FormItemType.upload: return ( false} {...item.attrs}> - + {version.fileSuffix && {`请上传${version.fileSuffix}文件`}} ); } diff --git a/kafka-manager-console/src/constants/strategy.ts b/kafka-manager-console/src/constants/strategy.ts index c0d19001..e92563e6 100644 --- a/kafka-manager-console/src/constants/strategy.ts +++ b/kafka-manager-console/src/constants/strategy.ts @@ -67,7 +67,7 @@ export const timeMonthStr = 'YYYY/MM'; // tslint:disable-next-line:max-line-length export const indexUrl ={ - indexUrl:'https://github.com/didi/kafka-manager', + indexUrl:'https://github.com/didi/Logi-KafkaManager/blob/master/docs/user_guide/kafka_metrics_desc.md', // 指标说明 cagUrl:'https://github.com/didi/Logi-KafkaManager/blob/master/docs/user_guide/add_cluster/add_cluster.md', // 集群接入指南 Cluster access Guide } diff --git a/kafka-manager-console/src/constants/table.ts b/kafka-manager-console/src/constants/table.ts index 8a148407..3028e78d 100644 --- a/kafka-manager-console/src/constants/table.ts +++ b/kafka-manager-console/src/constants/table.ts @@ -19,7 +19,7 @@ export const cellStyle = { overflow: 'hidden', whiteSpace: 'nowrap', textOverflow: 'ellipsis', - cursor: 'pointer', + // cursor: 'pointer', }; export const searchProps = { diff --git a/kafka-manager-console/src/container/admin/cluster-detail/cluster-consumer.tsx b/kafka-manager-console/src/container/admin/cluster-detail/cluster-consumer.tsx index dd0df49a..911f44d2 100644 --- a/kafka-manager-console/src/container/admin/cluster-detail/cluster-consumer.tsx +++ b/kafka-manager-console/src/container/admin/cluster-detail/cluster-consumer.tsx @@ -38,7 +38,7 @@ export class ClusterConsumer extends 
SearchAndFilterContainer { key: 'operation', width: '10%', render: (t: string, item: IOffset) => { - return ( this.getConsumeDetails(item)}>详情); + return ( this.getConsumeDetails(item)}>消费详情); }, }]; private xFormModal: IXFormWrapper; @@ -100,7 +100,7 @@ export class ClusterConsumer extends SearchAndFilterContainer {
  • {this.props.tab}
  • - {this.renderSearch()} + {this.renderSearch('', '请输入消费组名称')}
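An aside on the `JmxConnectorWrap` change earlier in this diff: its core is assembling the environment map passed to `JMXConnectorFactory.connect`. Below is a self-contained sketch of just that step, runnable on a plain JDK; the class and method names (`JmxEnvSketch`, `buildEnvironment`) are illustrative, not part of the PR. One detail worth pinning down is that the default RMI connector authenticator reads the credentials entry as a `String[2]`.

```java
import javax.management.remote.JMXConnector;
import javax.management.remote.rmi.RMIConnectorServer;
import javax.naming.Context;
import javax.rmi.ssl.SslRMIClientSocketFactory;
import java.util.HashMap;
import java.util.Map;

// Sketch: build the environment map accepted by JMXConnectorFactory.connect.
// Names here are hypothetical; only the map keys/values mirror the JMX remote API.
public class JmxEnvSketch {
    public static Map<String, Object> buildEnvironment(String username, String password, boolean useSsl) {
        Map<String, Object> environment = new HashMap<>();
        if (username != null && !username.isEmpty() && password != null && !password.isEmpty()) {
            // The default RMI connector authenticator expects a String[2], not a List
            environment.put(JMXConnector.CREDENTIALS, new String[]{username, password});
        }
        if (useSsl) {
            environment.put(Context.SECURITY_PROTOCOL, "ssl");
            SslRMIClientSocketFactory factory = new SslRMIClientSocketFactory();
            // Same factory registered for the RMI transport and the JNDI stub lookup
            environment.put(RMIConnectorServer.RMI_CLIENT_SOCKET_FACTORY_ATTRIBUTE, factory);
            environment.put("com.sun.jndi.rmi.factory.socket", factory);
        }
        return environment;
    }

    public static void main(String[] args) {
        Map<String, Object> env = buildEnvironment("admin", "secret", true);
        System.out.println(env.containsKey(JMXConnector.CREDENTIALS)); // true
    }
}
```

With `useSsl` set, the socket factory is put under two keys because the RMI connector and the JNDI `rmi://` lookup each resolve their client socket factory independently.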
this.handleDetailsOk()} onCancel={() => this.handleDetailsCancel()} diff --git a/kafka-manager-console/src/container/admin/cluster-detail/cluster-controller.tsx b/kafka-manager-console/src/container/admin/cluster-detail/cluster-controller.tsx index aa536471..817dbc15 100644 --- a/kafka-manager-console/src/container/admin/cluster-detail/cluster-controller.tsx +++ b/kafka-manager-console/src/container/admin/cluster-detail/cluster-controller.tsx @@ -2,7 +2,8 @@ import * as React from 'react'; import { SearchAndFilterContainer } from 'container/search-filter'; -import { Table } from 'component/antd'; +import { Table, Button, Popconfirm, Modal, Transfer, notification } from 'component/antd'; +// import { Transfer } from 'antd' import { observer } from 'mobx-react'; import { pagination } from 'constants/table'; import Url from 'lib/url-parser'; @@ -16,8 +17,12 @@ import { timeFormat } from 'constants/strategy'; export class ClusterController extends SearchAndFilterContainer { public clusterId: number; - public state = { + public state: any = { searchKey: '', + searchCandidateKey: '', + isCandidateModel: false, + mockData: [], + targetKeys: [], }; constructor(props: any) { @@ -37,6 +42,94 @@ export class ClusterController extends SearchAndFilterContainer { return data; } + public getCandidateData(origin: T[]) { + let data: T[] = origin; + let { searchCandidateKey } = this.state; + searchCandidateKey = (searchCandidateKey + '').trim().toLowerCase(); + + data = searchCandidateKey ? 
origin.filter((item: IController) => + (item.host !== undefined && item.host !== null) && item.host.toLowerCase().includes(searchCandidateKey as string), + ) : origin; + return data; + } + + // 候选controller + public renderCandidateController() { + const columns = [ + { + title: 'BrokerId', + dataIndex: 'brokerId', + key: 'brokerId', + width: '20%', + sorter: (a: IController, b: IController) => b.brokerId - a.brokerId, + render: (r: string, t: IController) => { + return ( + {r} + + ); + }, + }, + { + title: 'BrokerHost', + key: 'host', + dataIndex: 'host', + width: '20%', + // render: (r: string, t: IController) => { + // return ( + // {r} + // + // ); + // }, + }, + { + title: 'Broker状态', + key: 'status', + dataIndex: 'status', + width: '20%', + render: (r: number, t: IController) => { + return ( + {r === 1 ? '不在线' : '在线'} + ); + }, + }, + { + title: '创建时间', + dataIndex: 'startTime', + key: 'startTime', + width: '25%', + sorter: (a: IController, b: IController) => b.timestamp - a.timestamp, + render: (t: number) => moment(t).format(timeFormat), + }, + { + title: '操作', + dataIndex: 'operation', + key: 'operation', + width: '15%', + render: (r: string, t: IController) => { + return ( + this.deleteCandidateCancel(t)} + cancelText="取消" + okText="确认" + > + 删除 + + ); + }, + }, + ]; + + return ( +
+ ); + } + public renderController() { const columns = [ @@ -58,12 +151,6 @@ export class ClusterController extends SearchAndFilterContainer { key: 'host', dataIndex: 'host', width: '30%', - // render: (r: string, t: IController) => { - // return ( - // {r} - // - // ); - // }, }, { title: '变更时间', @@ -87,16 +174,104 @@ export class ClusterController extends SearchAndFilterContainer { public componentDidMount() { admin.getControllerHistory(this.clusterId); + admin.getCandidateController(this.clusterId); + admin.getBrokersMetadata(this.clusterId); + } + + public addController = () => { + this.setState({ isCandidateModel: true, targetKeys: [] }) + } + + public addCandidateChange = (targetKeys: any) => { + this.setState({ targetKeys }) + } + + + + public handleCandidateCancel = () => { + this.setState({ isCandidateModel: false }); + } + + public handleCandidateOk = () => { + let brokerIdList = this.state.targetKeys.map((item: any) => { + return admin.brokersMetadata[item].brokerId + }) + admin.addCandidateController(this.clusterId, brokerIdList).then(data => { + notification.success({ message: '新增成功' }); + admin.getCandidateController(this.clusterId); + }).catch(err => { + notification.error({ message: '新增失败' }); + }) + this.setState({ isCandidateModel: false, targetKeys: [] }); + } + + public deleteCandidateCancel = (target: any) => { + admin.deleteCandidateCancel(this.clusterId, [target.brokerId]).then(() => { + notification.success({ message: '删除成功' }); + }); + this.setState({ isCandidateModel: false }); + } + + public renderAddCandidateController() { + let filterControllerCandidate = admin.brokersMetadata.filter((item: any) => { + return !admin.filtercontrollerCandidate.includes(item.brokerId) + }) + + return ( + this.handleCandidateOk()} + onCancel={() => this.handleCandidateCancel()} + footer={<> + + + + } + > + item.host} + onChange={(targetKeys) => this.addCandidateChange(targetKeys)} + titles={['未选', '已选']} + locale={{ + itemUnit: '项', + itemsUnit: '项', + }} + 
listStyle={{ + width: "45%", + }} + /> + + ); } public render() { return (
    +
  • + 候选Controller + Controller将会优先从以下Broker中选举 +
  • +
    +
    + +
    + {this.renderSearch('', '请查找Host', 'searchCandidateKey')} +
    +
+ {this.renderCandidateController()} +
{this.props.tab}
{this.renderSearch('', '请输入Host')}
{this.renderController()} + {this.renderAddCandidateController()}
); } diff --git a/kafka-manager-console/src/container/admin/cluster-detail/cluster-topic.tsx b/kafka-manager-console/src/container/admin/cluster-detail/cluster-topic.tsx index 9aab6cf9..7b7aaae7 100644 --- a/kafka-manager-console/src/container/admin/cluster-detail/cluster-topic.tsx +++ b/kafka-manager-console/src/container/admin/cluster-detail/cluster-topic.tsx @@ -2,7 +2,7 @@ import * as React from 'react'; import Url from 'lib/url-parser'; import { region } from 'store'; import { admin } from 'store/admin'; -import { topic } from 'store/topic'; +import { app } from 'store/app'; import { Table, notification, Tooltip, Popconfirm } from 'antd'; import { pagination, cellStyle } from 'constants/table'; import { observer } from 'mobx-react'; @@ -56,8 +56,6 @@ export class ClusterTopic extends SearchAndFilterContainer { public expandPartition(item: IClusterTopics) { // getTopicBasicInfo admin.getTopicsBasicInfo(item.clusterId, item.topicName).then(data => { - console.log(admin.topicsBasic); - console.log(admin.basicInfo); this.clusterTopicsFrom = item; this.setState({ expandVisible: true, @@ -114,6 +112,7 @@ export class ClusterTopic extends SearchAndFilterContainer { public componentDidMount() { admin.getClusterTopics(this.clusterId); + app.getAdminAppList() } public renderClusterTopicList() { diff --git a/kafka-manager-console/src/container/admin/cluster-detail/exclusive-cluster.tsx b/kafka-manager-console/src/container/admin/cluster-detail/exclusive-cluster.tsx index 24b90c5e..6aaa2e78 100644 --- a/kafka-manager-console/src/container/admin/cluster-detail/exclusive-cluster.tsx +++ b/kafka-manager-console/src/container/admin/cluster-detail/exclusive-cluster.tsx @@ -159,7 +159,6 @@ export class ExclusiveCluster extends SearchAndFilterContainer { public handleDeleteRegion = (record: IBrokersRegions) => { const filterRegion = admin.logicalClusters.filter(item => item.regionIdList.includes(record.id)); - if (!filterRegion) { return; } @@ -335,6 +334,7 @@ export class 
ExclusiveCluster extends SearchAndFilterContainer { {this.renderSearch('', '请输入Region名称/broker ID')} {this.renderRegion()} + {this.renderDeleteRegionModal()} ); } diff --git a/kafka-manager-console/src/container/admin/cluster-detail/index.less b/kafka-manager-console/src/container/admin/cluster-detail/index.less index 65c45b9c..0dd4d106 100644 --- a/kafka-manager-console/src/container/admin/cluster-detail/index.less +++ b/kafka-manager-console/src/container/admin/cluster-detail/index.less @@ -94,4 +94,10 @@ .region-prompt{ font-weight: bold; text-align: center; +} + +.asd{ + display: flex; + justify-content: space-around; + align-items: center; } \ No newline at end of file diff --git a/kafka-manager-console/src/container/admin/cluster-detail/index.tsx b/kafka-manager-console/src/container/admin/cluster-detail/index.tsx index 5882dd57..027dde27 100644 --- a/kafka-manager-console/src/container/admin/cluster-detail/index.tsx +++ b/kafka-manager-console/src/container/admin/cluster-detail/index.tsx @@ -32,9 +32,9 @@ export class ClusterDetail extends React.Component { } public render() { - let content = {} as IMetaData; - content = admin.basicInfo ? admin.basicInfo : content; - return ( + let content = {} as IMetaData; + content = admin.basicInfo ? 
admin.basicInfo : content; + return ( <> - + @@ -60,11 +60,11 @@ export class ClusterDetail extends React.Component { - + - - + + diff --git a/kafka-manager-console/src/container/admin/cluster-detail/logical-cluster.tsx b/kafka-manager-console/src/container/admin/cluster-detail/logical-cluster.tsx index 93a58703..b0ae63f4 100644 --- a/kafka-manager-console/src/container/admin/cluster-detail/logical-cluster.tsx +++ b/kafka-manager-console/src/container/admin/cluster-detail/logical-cluster.tsx @@ -40,15 +40,15 @@ export class LogicalCluster extends SearchAndFilterContainer { key: 'logicalClusterId', }, { - title: '逻辑集群中文名称', + title: '逻辑集群名称', dataIndex: 'logicalClusterName', key: 'logicalClusterName', width: '150px' }, { - title: '逻辑集群英文名称', - dataIndex: 'logicalClusterName', - key: 'logicalClusterName1', + title: '逻辑集群标识', + dataIndex: 'logicalClusterIdentification', + key: 'logicalClusterIdentification', width: '150px' }, { diff --git a/kafka-manager-console/src/container/admin/cluster-list/index.tsx b/kafka-manager-console/src/container/admin/cluster-list/index.tsx index 97d6cb5d..dfac45d7 100644 --- a/kafka-manager-console/src/container/admin/cluster-list/index.tsx +++ b/kafka-manager-console/src/container/admin/cluster-list/index.tsx @@ -1,5 +1,5 @@ import * as React from 'react'; -import { Modal, Table, Button, notification, message, Tooltip, Icon, Popconfirm, Alert } from 'component/antd'; +import { Modal, Table, Button, notification, message, Tooltip, Icon, Popconfirm, Alert, Popover } from 'component/antd'; import { wrapper } from 'store'; import { observer } from 'mobx-react'; import { IXFormWrapper, IMetaData, IRegister } from 'types/base-type'; @@ -12,6 +12,7 @@ import { urlPrefix } from 'constants/left-menu'; import { indexUrl } from 'constants/strategy' import { region } from 'store'; import './index.less'; +import Monacoeditor from 'component/editor/monacoEditor'; import { getAdminClusterColumns } from '../config'; const { confirm } = Modal; @@ -58,7 
+59,7 @@ export class ClusterList extends SearchAndFilterContainer { message: '请输入zookeeper地址', }], attrs: { - placeholder: '请输入zookeeper地址', + placeholder: '请输入zookeeper地址,例如:192.168.0.1:2181,192.168.0.2:2181/logi-kafka', rows: 2, disabled: item ? true : false, }, @@ -72,7 +73,7 @@ export class ClusterList extends SearchAndFilterContainer { message: '请输入bootstrapServers', }], attrs: { - placeholder: '请输入bootstrapServers', + placeholder: '请输入bootstrapServers,例如:192.168.1.1:9092,192.168.1.2:9092', rows: 2, disabled: item ? true : false, }, @@ -131,7 +132,26 @@ export class ClusterList extends SearchAndFilterContainer { { "security.protocol": "SASL_PLAINTEXT", "sasl.mechanism": "PLAIN", - "sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"xxxxxx\" password=\"xxxxxx\";" + "sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\\"xxxxxx\\" password=\\"xxxxxx\\";" +}`, + rows: 8, + }, + }, + { + key: 'jmxProperties', + label: 'JMX认证', + type: 'text_area', + rules: [{ + required: false, + message: '请输入JMX认证', + }], + attrs: { + placeholder: `请输入JMX认证,例如: +{ +"maxConn": 10, #KM对单台Broker的最大连接数 +"username": "xxxxx", #用户名 +"password": "xxxxx", #密码 +"openSSL": true, #开启SSL,true表示开启SSL,false表示关闭 }`, rows: 8, }, @@ -271,11 +291,13 @@ export class ClusterList extends SearchAndFilterContainer { cancelText="取消" okText="确认" > - - {item.status === 1 ? '暂停监控' : '开始监控'} - + + + {item.status === 1 ? 
'暂停监控' : '开始监控'} + + 删除 diff --git a/kafka-manager-console/src/container/admin/config.tsx b/kafka-manager-console/src/container/admin/config.tsx index bce02cce..09a70f83 100644 --- a/kafka-manager-console/src/container/admin/config.tsx +++ b/kafka-manager-console/src/container/admin/config.tsx @@ -1,8 +1,8 @@ import * as React from 'react'; -import { IUser, IUploadFile, IConfigure, IMetaData, IBrokersPartitions } from 'types/base-type'; +import { IUser, IUploadFile, IConfigure, IConfigGateway, IMetaData } from 'types/base-type'; import { users } from 'store/users'; import { version } from 'store/version'; -import { showApplyModal, showModifyModal, showConfigureModal } from 'container/modal/admin'; +import { showApplyModal, showApplyModalModifyPassword, showModifyModal, showConfigureModal, showConfigGatewayModal } from 'container/modal/admin'; import { Popconfirm, Tooltip } from 'component/antd'; import { admin } from 'store/admin'; import { cellStyle } from 'constants/table'; @@ -27,6 +27,7 @@ export const getUserColumns = () => { return ( showApplyModal(record)}>编辑 + showApplyModalModifyPassword(record)}>修改密码 users.deleteUser(record.username)} @@ -184,6 +185,87 @@ export const getConfigureColumns = () => { return columns; }; +// 网关配置 +export const getConfigColumns = () => { + const columns = [ + { + title: '配置类型', + dataIndex: 'type', + key: 'type', + width: '25%', + ellipsis: true, + sorter: (a: IConfigGateway, b: IConfigGateway) => a.type.charCodeAt(0) - b.type.charCodeAt(0), + }, + { + title: '配置键', + dataIndex: 'name', + key: 'name', + width: '15%', + ellipsis: true, + sorter: (a: IConfigGateway, b: IConfigGateway) => a.name.charCodeAt(0) - b.name.charCodeAt(0), + }, + { + title: '配置值', + dataIndex: 'value', + key: 'value', + width: '20%', + ellipsis: true, + sorter: (a: IConfigGateway, b: IConfigGateway) => a.value.charCodeAt(0) - b.value.charCodeAt(0), + render: (t: string) => { + return t.substr(0, 1) === '{' && t.substr(t.length - 1) === '}' ? 
JSON.stringify(JSON.parse(t), null, 4) : t; + }, + }, + { + title: '修改时间', + dataIndex: 'modifyTime', + key: 'modifyTime', + width: '15%', + sorter: (a: IConfigGateway, b: IConfigGateway) => b.modifyTime - a.modifyTime, + render: (t: number) => moment(t).format(timeFormat), + }, + { + title: '版本号', + dataIndex: 'version', + key: 'version', + width: '10%', + ellipsis: true, + sorter: (a: IConfigGateway, b: IConfigGateway) => b.version.charCodeAt(0) - a.version.charCodeAt(0), + }, + { + title: '描述信息', + dataIndex: 'description', + key: 'description', + width: '20%', + ellipsis: true, + onCell: () => ({ + style: { + maxWidth: 180, + ...cellStyle, + }, + }), + }, + { + title: '操作', + width: '10%', + render: (text: string, record: IConfigGateway) => { + return ( + + showConfigGatewayModal(record)}>编辑 + admin.deleteConfigGateway({ id: record.id })} + cancelText="取消" + okText="确认" + > + 删除 + + ); + }, + }, + ]; + return columns; +}; + const renderClusterHref = (value: number | string, item: IMetaData, key: number) => { return ( // 0 暂停监控--不可点击 1 监控中---可正常点击 <> diff --git a/kafka-manager-console/src/container/admin/configure-management.tsx b/kafka-manager-console/src/container/admin/configure-management.tsx index 680d1da7..5c3494b9 100644 --- a/kafka-manager-console/src/container/admin/configure-management.tsx +++ b/kafka-manager-console/src/container/admin/configure-management.tsx @@ -3,11 +3,11 @@ import { SearchAndFilterContainer } from 'container/search-filter'; import { Table, Button, Spin } from 'component/antd'; import { admin } from 'store/admin'; import { observer } from 'mobx-react'; -import { IConfigure } from 'types/base-type'; +import { IConfigure, IConfigGateway } from 'types/base-type'; import { users } from 'store/users'; import { pagination } from 'constants/table'; -import { getConfigureColumns } from './config'; -import { showConfigureModal } from 'container/modal/admin'; +import { getConfigureColumns, getConfigColumns } from './config'; +import { 
showConfigureModal, showConfigGatewayModal } from 'container/modal/admin'; @observer export class ConfigureManagement extends SearchAndFilterContainer { @@ -17,7 +17,12 @@ export class ConfigureManagement extends SearchAndFilterContainer { }; public componentDidMount() { - admin.getConfigure(); + if (this.props.isShow) { + admin.getGatewayList(); + admin.getGatewayType(); + } else { + admin.getConfigure(); + } } public getData(origin: T[]) { @@ -34,15 +39,34 @@ export class ConfigureManagement extends SearchAndFilterContainer { return data; } + public getGatewayData(origin: T[]) { + let data: T[] = origin; + let { searchKey } = this.state; + searchKey = (searchKey + '').trim().toLowerCase(); + + data = searchKey ? origin.filter((item: IConfigGateway) => + ((item.name !== undefined && item.name !== null) && item.name.toLowerCase().includes(searchKey as string)) + || ((item.value !== undefined && item.value !== null) && item.value.toLowerCase().includes(searchKey as string)) + || ((item.description !== undefined && item.description !== null) && + item.description.toLowerCase().includes(searchKey as string)), + ) : origin; + return data; + } + public renderTable() { return ( -
:
+ />} ); @@ -53,7 +77,7 @@ export class ConfigureManagement extends SearchAndFilterContainer {
    {this.renderSearch('', '请输入配置键、值或描述')}
  • - +
); diff --git a/kafka-manager-console/src/container/admin/data-curve/index.tsx b/kafka-manager-console/src/container/admin/data-curve/index.tsx index bd113aeb..b822957c 100644 --- a/kafka-manager-console/src/container/admin/data-curve/index.tsx +++ b/kafka-manager-console/src/container/admin/data-curve/index.tsx @@ -6,6 +6,7 @@ import { curveKeys, CURVE_KEY_MAP, PERIOD_RADIO_MAP, PERIOD_RADIO } from './conf import moment = require('moment'); import { observer } from 'mobx-react'; import { timeStampStr } from 'constants/strategy'; +import { adminMonitor } from 'store/admin-monitor'; @observer export class DataCurveFilter extends React.Component { @@ -21,6 +22,7 @@ export class DataCurveFilter extends React.Component { } public refreshAll = () => { + adminMonitor.setRequestId(null); Object.keys(curveKeys).forEach((c: curveKeys) => { const { typeInfo, curveInfo: option } = CURVE_KEY_MAP.get(c); const { parser } = typeInfo; @@ -32,7 +34,7 @@ export class DataCurveFilter extends React.Component { return ( <> - {PERIOD_RADIO.map(p => {p.label})} + {PERIOD_RADIO.map(p => {p.label})} ); } - + public renderChart() { return (
- this.chart = ref } getChartData={this.getData.bind(this, null)} /> + this.chart = ref} getChartData={this.getData.bind(this, null)} />
); } @@ -132,7 +132,7 @@ export class IndividualBill extends React.Component { <>
- 账单趋势  - } + } key="1" > {this.renderDatePick()} diff --git a/kafka-manager-console/src/container/admin/platform-management.tsx b/kafka-manager-console/src/container/admin/platform-management.tsx index d48823a4..25f7f0bd 100644 --- a/kafka-manager-console/src/container/admin/platform-management.tsx +++ b/kafka-manager-console/src/container/admin/platform-management.tsx @@ -13,17 +13,20 @@ export class PlatformManagement extends React.Component { public render() { return ( <> - - - - - - - - - - - + + + + + + + + + + + + + + ); } diff --git a/kafka-manager-console/src/container/admin/user-management.tsx b/kafka-manager-console/src/container/admin/user-management.tsx index 757ceabb..1dc38e06 100644 --- a/kafka-manager-console/src/container/admin/user-management.tsx +++ b/kafka-manager-console/src/container/admin/user-management.tsx @@ -29,7 +29,7 @@ export class UserManagement extends SearchAndFilterContainer { searchKey = (searchKey + '').trim().toLowerCase(); data = searchKey ? origin.filter((item: IUser) => - (item.username !== undefined && item.username !== null) && item.username.toLowerCase().includes(searchKey as string)) : origin ; + (item.username !== undefined && item.username !== null) && item.username.toLowerCase().includes(searchKey as string)) : origin; return data; } diff --git a/kafka-manager-console/src/container/alarm/add-alarm/alarm-select.tsx b/kafka-manager-console/src/container/alarm/add-alarm/alarm-select.tsx index 6d19ec26..5cd1f4f0 100644 --- a/kafka-manager-console/src/container/alarm/add-alarm/alarm-select.tsx +++ b/kafka-manager-console/src/container/alarm/add-alarm/alarm-select.tsx @@ -1,7 +1,7 @@ import * as React from 'react'; import { alarm } from 'store/alarm'; import { IMonitorGroups } from 'types/base-type'; -import { getValueFromLocalStorage, setValueToLocalStorage } from 'lib/local-storage'; +import { getValueFromLocalStorage, setValueToLocalStorage, deleteValueFromLocalStorage } from 'lib/local-storage'; import { 
VirtualScrollSelect } from '../../../component/virtual-scroll-select'; interface IAlarmSelectProps { @@ -36,6 +36,10 @@ export class AlarmSelect extends React.Component { onChange && onChange(params); } + public componentWillUnmount() { + deleteValueFromLocalStorage('monitorGroups'); + } + public render() { const { value, isDisabled } = this.props; return ( diff --git a/kafka-manager-console/src/container/alarm/add-alarm/filter-form.tsx b/kafka-manager-console/src/container/alarm/add-alarm/filter-form.tsx index 50e9ce32..3e5dd0b7 100644 --- a/kafka-manager-console/src/container/alarm/add-alarm/filter-form.tsx +++ b/kafka-manager-console/src/container/alarm/add-alarm/filter-form.tsx @@ -11,6 +11,7 @@ import { filterKeys } from 'constants/strategy'; import { VirtualScrollSelect } from 'component/virtual-scroll-select'; import { IsNotNaN } from 'lib/utils'; import { searchProps } from 'constants/table'; +import { toJS } from 'mobx'; interface IDynamicProps { form?: any; @@ -33,6 +34,7 @@ export class DynamicSetFilter extends React.Component { public monitorType: string = null; public clusterId: number = null; public clusterName: string = null; + public clusterIdentification: string | number = null; public topicName: string = null; public consumerGroup: string = null; public location: string = null; @@ -45,16 +47,18 @@ export class DynamicSetFilter extends React.Component { this.props.form.validateFields((err: Error, values: any) => { if (!err) { monitorType = values.monitorType; - const index = cluster.clusterData.findIndex(item => item.clusterId === values.cluster); + const index = cluster.clusterData.findIndex(item => item.clusterIdentification === values.cluster); if (index > -1) { + values.clusterIdentification = cluster.clusterData[index].clusterIdentification; values.clusterName = cluster.clusterData[index].clusterName; } for (const key of Object.keys(values)) { if (filterKeys.indexOf(key) > -1) { // 只有这几种值可以设置 filterList.push({ - tkey: key === 'clusterName' ? 
'cluster' : key, // 传参需要将clusterName转成cluster + tkey: key === 'clusterName' ? 'cluster' : key, // clusterIdentification topt: '=', tval: [values[key]], + clusterIdentification: values.clusterIdentification }); } } @@ -74,13 +78,13 @@ export class DynamicSetFilter extends React.Component { public resetFormValue( monitorType: string = null, - clusterId: number = null, + clusterIdentification: any = null, topicName: string = null, consumerGroup: string = null, location: string = null) { const { setFieldsValue } = this.props.form; setFieldsValue({ - cluster: clusterId, + cluster: clusterIdentification, topic: topicName, consumerGroup, location, @@ -88,18 +92,18 @@ export class DynamicSetFilter extends React.Component { }); } - public getClusterId = (clusterName: string) => { + public getClusterId = async (clusterIdentification: any) => { let clusterId = null; - const index = cluster.clusterData.findIndex(item => item.clusterName === clusterName); + const index = cluster.clusterData.findIndex(item => item.clusterIdentification === clusterIdentification); if (index > -1) { clusterId = cluster.clusterData[index].clusterId; } if (clusterId) { - cluster.getClusterMetaTopics(clusterId); + await cluster.getClusterMetaTopics(clusterId); this.clusterId = clusterId; return this.clusterId; - } - return this.clusterId = clusterName as any; + }; + return this.clusterId = clusterId as any; } public async initFormValue(monitorRule: IRequestParams) { @@ -108,17 +112,19 @@ export class DynamicSetFilter extends React.Component { const topicFilter = strategyFilterList.filter(item => item.tkey === 'topic')[0]; const consumerFilter = strategyFilterList.filter(item => item.tkey === 'consumerGroup')[0]; - const clusterName = clusterFilter ? clusterFilter.tval[0] : null; + const clusterIdentification = clusterFilter ? clusterFilter.tval[0] : null; const topic = topicFilter ? topicFilter.tval[0] : null; const consumerGroup = consumerFilter ? 
consumerFilter.tval[0] : null; const location: string = null; const monitorType = monitorRule.strategyExpressionList[0].metric; alarm.changeMonitorStrategyType(monitorType); - - await this.getClusterId(clusterName); + //增加clusterIdentification替代原来的clusterName + this.clusterIdentification = clusterIdentification; + await this.getClusterId(this.clusterIdentification); + // await this.handleSelectChange(topic, 'topic'); await this.handleSelectChange(consumerGroup, 'consumerGroup'); - this.resetFormValue(monitorType, this.clusterId, topic, consumerGroup, location); + this.resetFormValue(monitorType, this.clusterIdentification, topic, consumerGroup, location); } public clearFormData() { @@ -130,11 +136,12 @@ export class DynamicSetFilter extends React.Component { this.resetFormValue(); } - public async handleClusterChange(e: number) { - this.clusterId = e; + public async handleClusterChange(e: any) { + this.clusterIdentification = e; this.topicName = null; topic.setLoading(true); - await cluster.getClusterMetaTopics(e); + const clusterId = await this.getClusterId(e); + await cluster.getClusterMetaTopics(clusterId); this.resetFormValue(this.monitorType, e, null, this.consumerGroup, this.location); topic.setLoading(false); } @@ -170,7 +177,7 @@ export class DynamicSetFilter extends React.Component { } this.consumerGroup = null; this.location = null; - this.resetFormValue(this.monitorType, this.clusterId, this.topicName); + this.resetFormValue(this.monitorType, this.clusterIdentification, this.topicName); topic.setLoading(false); } @@ -213,17 +220,24 @@ export class DynamicSetFilter extends React.Component { }, rules: [{ required: true, message: '请选择监控指标' }], } as IVritualScrollSelect; + const clusterData = toJS(cluster.clusterData); + const options = clusterData?.length ? clusterData.map(item => { + return { + label: `${item.clusterName}${item.description ? 
'(' + item.description + ')' : ''}`, + value: item.clusterIdentification + } + }) : null; const clusterItem = { label: '集群', - options: cluster.clusterData, - defaultValue: this.clusterId, + options, + defaultValue: this.clusterIdentification, rules: [{ required: true, message: '请选择集群' }], attrs: { placeholder: '请选择集群', - className: 'middle-size', + className: 'large-size', disabled: this.isDetailPage, - onChange: (e: number) => this.handleClusterChange(e), + onChange: (e: any) => this.handleClusterChange(e), }, key: 'cluster', } as unknown as IVritualScrollSelect; @@ -241,7 +255,7 @@ export class DynamicSetFilter extends React.Component { }), attrs: { placeholder: '请选择Topic', - className: 'middle-size', + className: 'large-size', disabled: this.isDetailPage, onChange: (e: string) => this.handleSelectChange(e, 'topic'), }, @@ -329,7 +343,7 @@ export class DynamicSetFilter extends React.Component { key={v.value || v.key || index} value={v.value} > - {v.label.length > 25 ? + {v.label?.length > 25 ? 
{v.label} : v.label} diff --git a/kafka-manager-console/src/container/alarm/add-alarm/index.less b/kafka-manager-console/src/container/alarm/add-alarm/index.less index 9946c0c1..11d9b3a3 100644 --- a/kafka-manager-console/src/container/alarm/add-alarm/index.less +++ b/kafka-manager-console/src/container/alarm/add-alarm/index.less @@ -43,21 +43,23 @@ Icon { margin-left: 8px; } + .ant-form-item-label { + // padding-left: 10px; + width: 118px; + text-align: right !important; + } &.type-form { - padding-top: 10px; + padding-top: 10px; .ant-form{ min-width: 755px; } .ant-form-item { - width: 30%; + width: 45%; min-width: 360px; } - .ant-form-item-label { - padding-left: 10px; - } .ant-form-item-control { - width: 220px; + width: 300px; } } diff --git a/kafka-manager-console/src/container/alarm/add-alarm/index.tsx b/kafka-manager-console/src/container/alarm/add-alarm/index.tsx index ae201823..590c5847 100644 --- a/kafka-manager-console/src/container/alarm/add-alarm/index.tsx +++ b/kafka-manager-console/src/container/alarm/add-alarm/index.tsx @@ -12,7 +12,6 @@ import { alarm } from 'store/alarm'; import { app } from 'store/app'; import Url from 'lib/url-parser'; import { IStrategyExpression, IRequestParams } from 'types/alarm'; - @observer export class AddAlarm extends SearchAndFilterContainer { public isDetailPage = window.location.pathname.includes('/alarm-detail'); // 判断是否为详情 @@ -90,8 +89,8 @@ export class AddAlarm extends SearchAndFilterContainer { const filterObj = this.typeForm.getFormData().filterObj; // tslint:disable-next-line:max-line-length if (!actionValue || !timeValue || !typeValue || !strategyList.length || !filterObj || !filterObj.filterList.length) { - message.error('请正确填写必填项'); - return null; + message.error('请正确填写必填项'); + return null; } if (filterObj.monitorType === 'online-kafka-topic-throttled') { @@ -101,13 +100,17 @@ export class AddAlarm extends SearchAndFilterContainer { tval: [typeValue.app], }); } + this.id && filterObj.filterList.forEach((item: 
any) => { + if (item.tkey === 'cluster') { + item.tval = [item.clusterIdentification] + } + }) strategyList = strategyList.map((row: IStrategyExpression) => { return { ...row, metric: filterObj.monitorType, }; }); - return { appId: typeValue.app, name: typeValue.alarmName, @@ -129,7 +132,7 @@ export class AddAlarm extends SearchAndFilterContainer { public renderAlarmStrategy() { return (
- 报警策略 + 报警策略
this.strategyForm = form} />
@@ -139,9 +142,9 @@ export class AddAlarm extends SearchAndFilterContainer { public renderTimeForm() { return ( - <> - this.timeForm = form} /> - + <> + this.timeForm = form} /> + ); } @@ -164,7 +167,7 @@ export class AddAlarm extends SearchAndFilterContainer { {this.renderAlarmStrategy()} {this.renderTimeForm()} this.actionForm = actionForm} /> -
+
); } diff --git a/kafka-manager-console/src/container/alarm/add-alarm/strategy-form.tsx b/kafka-manager-console/src/container/alarm/add-alarm/strategy-form.tsx index 1852c468..677462fd 100644 --- a/kafka-manager-console/src/container/alarm/add-alarm/strategy-form.tsx +++ b/kafka-manager-console/src/container/alarm/add-alarm/strategy-form.tsx @@ -5,6 +5,7 @@ import { IStringMap } from 'types/base-type'; import { IRequestParams } from 'types/alarm'; import { IFormSelect, IFormItem, FormItemType } from 'component/x-form'; import { searchProps } from 'constants/table'; +import { alarm } from 'store/alarm'; interface IDynamicProps { form: any; @@ -27,6 +28,7 @@ class DynamicSetStrategy extends React.Component { public crudList = [] as ICRUDItem[]; public state = { shouldUpdate: false, + monitorType: alarm.monitorType }; public componentDidMount() { @@ -130,7 +132,7 @@ class DynamicSetStrategy extends React.Component { if (lineValue.func === 'happen' && paramsArray.length > 1 && paramsArray[0] < paramsArray[1]) { strategyList = []; // 清空赋值 - return message.error('周期值应大于次数') ; + return message.error('周期值应大于次数'); } lineValue.params = paramsArray.join(','); @@ -292,8 +294,39 @@ class DynamicSetStrategy extends React.Component { } return element; } - - public renderFormList(row: ICRUDItem) { + public unit(monitorType: string) { + let element = null; + switch (monitorType) { + case 'online-kafka-topic-msgIn': + element = "条/秒" + break; + case 'online-kafka-topic-bytesIn': + element = "字节/秒" + break; + case 'online-kafka-topic-bytesRejected': + element = "字节/秒" + break; + case 'online-kafka-topic-produce-throttled': + element = "1表示被限流" + break; + case 'online-kafka-topic-fetch-throttled': + element = "1表示被限流" + break; + case 'online-kafka-consumer-maxLag': + element = "条" + break; + case 'online-kafka-consumer-lag': + element = "条" + break; + case 'online-kafka-consumer-maxDelayTime': + element = "秒" + break; + } + return ( + {element} + ) + } + public renderFormList(row: 
ICRUDItem, monitorType: string) { const key = row.id; const funcType = row.func; @@ -309,6 +342,7 @@ class DynamicSetStrategy extends React.Component { key: key + '-func', } as IFormSelect)} {this.getFuncItem(row)} + {row.func !== 'c_avg_rate_abs' && row.func !== 'pdiff' ? this.unit(monitorType) : null} ); } @@ -340,8 +374,8 @@ class DynamicSetStrategy extends React.Component {
{crudList.map((row, index) => { return ( -
- {this.renderFormList(row)} +
+ {this.renderFormList(row, alarm.monitorType)} { crudList.length > 1 ? ( -
- 基本信息 -
- this.$form = form} - formData={formData} - formMap={xTypeFormMap} - layout="inline" - /> -
-
-
- 选择指标 -
- this.filterForm = form} /> -
-
+
+ 基本信息 +
+ this.$form = form} + formData={formData} + formMap={xTypeFormMap} + layout="inline" + /> +
+
+
+ 选择指标 +
+ this.filterForm = form} /> +
+
); } diff --git a/kafka-manager-console/src/container/alarm/alarm-list.tsx b/kafka-manager-console/src/container/alarm/alarm-list.tsx index 6dd6680b..54d266f1 100644 --- a/kafka-manager-console/src/container/alarm/alarm-list.tsx +++ b/kafka-manager-console/src/container/alarm/alarm-list.tsx @@ -9,6 +9,7 @@ import { pagination } from 'constants/table'; import { urlPrefix } from 'constants/left-menu'; import { alarm } from 'store/alarm'; import 'styles/table-filter.less'; +import { Link } from 'react-router-dom'; @observer export class AlarmList extends SearchAndFilterContainer { @@ -24,7 +25,7 @@ export class AlarmList extends SearchAndFilterContainer { if (app.active !== '-1' || searchKey !== '') { data = origin.filter(d => ((d.name !== undefined && d.name !== null) && d.name.toLowerCase().includes(searchKey as string) - || ((d.operator !== undefined && d.operator !== null) && d.operator.toLowerCase().includes(searchKey as string))) + || ((d.operator !== undefined && d.operator !== null) && d.operator.toLowerCase().includes(searchKey as string))) && (app.active === '-1' || d.appId === (app.active + '')), ); } else { @@ -55,9 +56,7 @@ export class AlarmList extends SearchAndFilterContainer { {this.renderSearch('名称:', '请输入告警规则或者操作人')}
  • @@ -68,6 +67,9 @@ export class AlarmList extends SearchAndFilterContainer { if (!alarm.monitorStrategies.length) { alarm.getMonitorStrategies(); } + if (!app.data.length) { + app.getAppList(); + } } public render() { diff --git a/kafka-manager-console/src/container/cluster/cluster-detail/cluster-overview.tsx b/kafka-manager-console/src/container/cluster/cluster-detail/cluster-overview.tsx index 49a25a3c..84c17703 100644 --- a/kafka-manager-console/src/container/cluster/cluster-detail/cluster-overview.tsx +++ b/kafka-manager-console/src/container/cluster/cluster-detail/cluster-overview.tsx @@ -31,11 +31,11 @@ export class ClusterOverview extends React.Component { const content = this.props.basicInfo as IBasicInfo; const clusterContent = [{ value: content.clusterName, - label: '集群中文名称', + label: '集群名称', }, { - value: content.clusterName, - label: '集群英文名称', + value: content.clusterIdentification, + label: '集群标识', }, { value: clusterTypeMap[content.mode], @@ -44,8 +44,8 @@ export class ClusterOverview extends React.Component { value: moment(content.gmtCreate).format(timeFormat), label: '接入时间', }, { - value: content.physicalClusterId, - label: '物理集群ID', + value: content.clusterId, + label: '集群ID', }]; const clusterInfo = [{ value: content.clusterVersion, diff --git a/kafka-manager-console/src/container/cluster/config.tsx b/kafka-manager-console/src/container/cluster/config.tsx index bbd03cb0..ca2ea880 100644 --- a/kafka-manager-console/src/container/cluster/config.tsx +++ b/kafka-manager-console/src/container/cluster/config.tsx @@ -13,32 +13,14 @@ const { confirm } = Modal; export const getClusterColumns = (urlPrefix: string) => { return [ { - title: '逻辑集群ID', + title: '集群ID', dataIndex: 'clusterId', key: 'clusterId', width: '9%', sorter: (a: IClusterData, b: IClusterData) => b.clusterId - a.clusterId, }, { - title: '逻辑集群中文名称', - dataIndex: 'clusterName', - key: 'clusterName', - width: '13%', - onCell: () => ({ - style: { - maxWidth: 120, - ...cellStyle, - }, - }), 
- sorter: (a: IClusterData, b: IClusterData) => a.clusterName.charCodeAt(0) - b.clusterName.charCodeAt(0), - render: (text: string, record: IClusterData) => ( - - {text} - - ), - }, - { - title: '逻辑集群英文名称', + title: '集群名称', dataIndex: 'clusterName', key: 'clusterName', width: '13%', @@ -55,6 +37,24 @@ export const getClusterColumns = (urlPrefix: string) => { ), }, + // { + // title: '逻辑集群英文名称', + // dataIndex: 'clusterName', + // key: 'clusterName', + // width: '13%', + // onCell: () => ({ + // style: { + // maxWidth: 120, + // ...cellStyle, + // }, + // }), + // sorter: (a: IClusterData, b: IClusterData) => a.clusterName.charCodeAt(0) - b.clusterName.charCodeAt(0), + // render: (text: string, record: IClusterData) => ( + // + // {text} + // + // ), + // }, { title: 'Topic数量', dataIndex: 'topicNum', diff --git a/kafka-manager-console/src/container/cluster/my-cluster.tsx b/kafka-manager-console/src/container/cluster/my-cluster.tsx index 7fc2f666..e017b0dd 100644 --- a/kafka-manager-console/src/container/cluster/my-cluster.tsx +++ b/kafka-manager-console/src/container/cluster/my-cluster.tsx @@ -78,7 +78,7 @@ export class MyCluster extends SearchAndFilterContainer { rules: [ { required: true, - pattern: /^.{5,}.$/, + pattern: /^.{4,}.$/, message: '请输入至少5个字符', }, ], @@ -91,7 +91,7 @@ export class MyCluster extends SearchAndFilterContainer { ], formData: {}, visible: true, - title: '申请集群', + title:
    申请集群资源申请文档
    , okText: '确认', onSubmit: (value: any) => { value.idc = region.currentRegion; @@ -160,7 +160,7 @@ export class MyCluster extends SearchAndFilterContainer { data = searchKey ? origin.filter((item: IClusterData) => (item.clusterName !== undefined && item.clusterName !== null) && item.clusterName.toLowerCase().includes(searchKey as string), - ) : origin ; + ) : origin; return data; } diff --git a/kafka-manager-console/src/container/drawer/data-migration.tsx b/kafka-manager-console/src/container/drawer/data-migration.tsx index 0743476b..4da64f5c 100644 --- a/kafka-manager-console/src/container/drawer/data-migration.tsx +++ b/kafka-manager-console/src/container/drawer/data-migration.tsx @@ -117,17 +117,17 @@ class DataMigrationFormTable extends React.Component { key: 'maxThrottle', editable: true, }, { - title: '迁移保存时间(h)', + title: '迁移后Topic保存时间(h)', dataIndex: 'reassignRetentionTime', key: 'reassignRetentionTime', editable: true, }, { - title: '原本保存时间(h)', + title: '原Topic保存时间(h)', dataIndex: 'retentionTime', key: 'retentionTime', // originalRetentionTime width: '132px', sorter: (a: IRenderData, b: IRenderData) => b.retentionTime - a.retentionTime, - render: (time: any) => transMSecondToHour(time), + render: (time: any) => transMSecondToHour(time), }, { title: 'BrokerID', dataIndex: 'brokerIdList', @@ -254,7 +254,7 @@ class DataMigrationFormTable extends React.Component { dataSource={this.props.data} columns={columns} pagination={false} - scroll={{y: 520}} + scroll={{ y: 520 }} className="migration-table" /> @@ -316,7 +316,7 @@ export class InfoForm extends React.Component { {getFieldDecorator('description', { initialValue: '', - rules: [{ required: true, message: '请输入至少5个字符', pattern: /^.{5,}.$/ }], + rules: [{ required: true, message: '请输入至少5个字符', pattern: /^.{4,}.$/ }], })( , )} diff --git a/kafka-manager-console/src/container/modal/admin/cluster.ts b/kafka-manager-console/src/container/modal/admin/cluster.ts index ccf1aa54..20ed9098 100644 --- 
a/kafka-manager-console/src/container/modal/admin/cluster.ts +++ b/kafka-manager-console/src/container/modal/admin/cluster.ts @@ -23,13 +23,22 @@ export const showEditClusterTopic = (item: IClusterTopics) => { { key: 'appId', label: '应用ID', + type: 'select', + options: app.adminAppData.map(item => { + return { + label: item.appId, + value: item.appId, + }; + }), rules: [{ required: true, - message: '请输入应用ID', + // message: '请输入应用ID', + // message: '请输入应用ID,应用名称只支持字母、数字、下划线、短划线,长度限制在3-64字符', + // pattern: /[_a-zA-Z0-9_-]{3,64}$/, }], attrs: { placeholder: '请输入应用ID', - disabled: true, + // disabled: true, }, }, { @@ -52,6 +61,7 @@ export const showEditClusterTopic = (item: IClusterTopics) => { attrs: { placeholder: '请输入保存时间', suffix: '小时', + prompttype:'修改保存时间,预计一分钟左右生效!' }, }, { @@ -104,7 +114,7 @@ export const showLogicalClusterOpModal = (clusterId: number, record?: ILogicalCl } const updateFormModal = (isShow: boolean) => { const formMap = wrapper.xFormWrapper.formMap; - isShow ? formMap.splice(2, 0, + isShow ? formMap.splice(3, 0, { key: 'appId', label: '所属应用', @@ -119,7 +129,7 @@ export const showLogicalClusterOpModal = (clusterId: number, record?: ILogicalCl attrs: { placeholder: '请选择所属应用', }, - }) : formMap.splice(2, 1); + }) : formMap.splice(3, 1); const formData = wrapper.xFormWrapper.formData; wrapper.ref && wrapper.ref.updateFormMap$(formMap, formData || {}); }; @@ -129,30 +139,30 @@ export const showLogicalClusterOpModal = (clusterId: number, record?: ILogicalCl formMap: [ { key: 'logicalClusterName', - label: '逻辑集群中文名称', + label: '逻辑集群名称', // defaultValue:'', - rules: [{ - required: true, - message: '请输入逻辑集群中文名称,支持中文、字母、数字、下划线(_)和短划线(-)组成,长度在3-128字符之间', // 不能以下划线(_)和短划线(-)开头和结尾 + rules: [{ + required: true, + message: '请输入逻辑集群名称,支持中文、字母、数字、下划线(_)和短划线(-)组成,长度在3-128字符之间', // 不能以下划线(_)和短划线(-)开头和结尾 pattern: /^[a-zA-Z0-9_\-\u4e00-\u9fa5]{3,128}$/g, //(?!(_|\-))(?!.*?(_|\-)$) }], attrs: { // disabled: record ? 
true : false, - placeholder:'请输入逻辑集群中文名称' + placeholder: '请输入逻辑集群名称' }, }, { - key: 'logicalClusterName1', - label: '逻辑集群英文名称', + key: 'logicalClusterIdentification', + label: '逻辑集群标识', // defaultValue:'', - rules: [{ - required: true, - message: '请输入逻辑集群英文名称,支持字母、数字、下划线(_)和短划线(-)组成,长度在3-128字符之间', //不能以下划线(_)和短划线(-)开头和结尾 - pattern:/^[a-zA-Z0-9_\-]{3,128}$/g, //(?!(_|\-))(?!.*?(_|\-)$) + rules: [{ + required: true, + message: '请输入逻辑集群标识,支持字母、数字、下划线(_)和短划线(-)组成,长度在3-128字符之间', //不能以下划线(_)和短划线(-)开头和结尾 + pattern: /^[a-zA-Z0-9_\-]{3,128}$/g, //(?!(_|\-))(?!.*?(_|\-)$) }], attrs: { disabled: record ? true : false, - placeholder:'请输入逻辑集群英文名称,创建后无法修改' + placeholder: '请输入逻辑集群标识,创建后无法修改' }, }, { @@ -233,7 +243,7 @@ export const showLogicalClusterOpModal = (clusterId: number, record?: ILogicalCl id: record ? record.logicalClusterId : '', mode: value.mode, name: value.logicalClusterName, - englishName:value.logicalClusterEName, // 存储逻辑集群英文名称 + identification: value.logicalClusterIdentification, regionIdList: value.regionIdList, } as INewLogical; if (record) { @@ -246,7 +256,25 @@ export const showLogicalClusterOpModal = (clusterId: number, record?: ILogicalCl }); }, }; - + if (record && record.mode != 0) { + isShow = true; + let formMap: any = xFormModal.formMap + formMap.splice(3, 0, { + key: 'appId', + label: '所属应用', + rules: [{ required: true, message: '请选择所属应用' }], + type: 'select', + options: app.adminAppData.map(item => { + return { + label: item.name, + value: item.appId, + }; + }), + attrs: { + placeholder: '请选择所属应用', + }, + }) + } wrapper.open(xFormModal); }; diff --git a/kafka-manager-console/src/container/modal/admin/expand-partition.tsx b/kafka-manager-console/src/container/modal/admin/expand-partition.tsx index 89133734..dfb51ba9 100644 --- a/kafka-manager-console/src/container/modal/admin/expand-partition.tsx +++ b/kafka-manager-console/src/container/modal/admin/expand-partition.tsx @@ -50,7 +50,10 @@ class CustomForm extends React.Component {
notification.success({ message: '扩分成功' }); this.props.form.resetFields(); admin.getClusterTopics(this.props.clusterId); - }); + }).catch(err => { + notification.error({ message: '扩分失败' }); + + }) } }); } @@ -93,7 +96,7 @@ class CustomForm extends React.Component { {/* 运维管控-topic信息-扩分区操作 */} {getFieldDecorator('regionNameList', { - initialValue: admin.topicsBasic ? admin.topicsBasic.regionNameList : '', + initialValue: admin.topicsBasic && admin.topicsBasic.regionNameList.length > 0 ? admin.topicsBasic.regionNameList.join(',') : ' ', rules: [{ required: true, message: '请输入所属region' }], })()} diff --git a/kafka-manager-console/src/container/modal/admin/task.ts b/kafka-manager-console/src/container/modal/admin/task.ts index 8b9e5086..d9a609ac 100644 --- a/kafka-manager-console/src/container/modal/admin/task.ts +++ b/kafka-manager-console/src/container/modal/admin/task.ts @@ -158,26 +158,26 @@ export const createMigrationTasks = () => { }, { key: 'originalRetentionTime', - label: '原本保存时间', + label: '原Topic保存时间', rules: [{ required: true, - message: '请输入原本保存时间', + message: '请输入原Topic保存时间', }], attrs: { disabled: true, - placeholder: '请输入原本保存时间', + placeholder: '请输入原Topic保存时间', suffix: '小时', }, }, { key: 'reassignRetentionTime', - label: '迁移保存时间', + label: '迁移后Topic保存时间', rules: [{ required: true, - message: '请输入迁移保存时间', + message: '请输入迁移后Topic保存时间', }], attrs: { - placeholder: '请输入迁移保存时间', + placeholder: '请输入迁移后Topic保存时间', suffix: '小时', }, }, @@ -186,10 +186,10 @@ export const createMigrationTasks = () => { label: '初始限流', rules: [{ required: true, - message: '请输入初始限流', + message: '请输入初始限流,并按照:“限流上限>初始限流>限流下限”的大小顺序', }], attrs: { - placeholder: '请输入初始限流', + placeholder: '请输入初始限流,并按照:“限流上限>初始限流>限流下限”的大小顺序', suffix: 'MB/s', }, }, @@ -198,10 +198,10 @@ export const createMigrationTasks = () => { label: '限流上限', rules: [{ required: true, - message: '请输入限流上限', + message: '请输入限流上限,并按照:“限流上限>初始限流>限流下限”的大小顺序', }], attrs: { - placeholder: '请输入限流上限', + placeholder: 
'请输入限流上限,并按照:“限流上限>初始限流>限流下限”的大小顺序', suffix: 'MB/s', }, }, @@ -210,10 +210,10 @@ export const createMigrationTasks = () => { label: '限流下限', rules: [{ required: true, - message: '请输入限流下限', + message: '请输入限流下限,并按照:“限流上限>初始限流>限流下限”的大小顺序', }], attrs: { - placeholder: '请输入限流下限', + placeholder: '请输入限流下限,并按照:“限流上限>初始限流>限流下限”的大小顺序', suffix: 'MB/s', }, }, @@ -224,7 +224,7 @@ export const createMigrationTasks = () => { rules: [{ required: false, message: '请输入至少5个字符', - pattern: /^.{5,}.$/, + pattern: /^.{4,}.$/, }], attrs: { placeholder: '请输入备注', diff --git a/kafka-manager-console/src/container/modal/admin/user.ts b/kafka-manager-console/src/container/modal/admin/user.ts index 9f35e4cf..51ca360d 100644 --- a/kafka-manager-console/src/container/modal/admin/user.ts +++ b/kafka-manager-console/src/container/modal/admin/user.ts @@ -24,26 +24,111 @@ export const showApplyModal = (record?: IUser) => { value: +item, })), rules: [{ required: true, message: '请选择角色' }], - }, { - key: 'password', - label: '密码', - type: FormItemType.inputPassword, - rules: [{ required: !record, message: '请输入密码' }], - }, + }, + // { + // key: 'password', + // label: '密码', + // type: FormItemType.inputPassword, + // rules: [{ required: !record, message: '请输入密码' }], + // }, ], formData: record || {}, visible: true, title: record ? 
'修改用户' : '新增用户', onSubmit: (value: IUser) => { if (record) { - return users.modfiyUser(value).then(() => { - message.success('操作成功'); - }); + return users.modfiyUser(value) } return users.addUser(value).then(() => { message.success('操作成功'); }); }, }; + if(!record){ + let formMap: any = xFormModal.formMap + formMap.splice(2, 0,{ + key: 'password', + label: '密码', + type: FormItemType.inputPassword, + rules: [{ required: !record, message: '请输入密码' }], + },) + } wrapper.open(xFormModal); }; + +// const handleCfPassword = (rule:any, value:any, callback:any)=>{ +// if() +// } +export const showApplyModalModifyPassword = (record: IUser) => { + const xFormModal:any = { + formMap: [ + // { + // key: 'oldPassword', + // label: '旧密码', + // type: FormItemType.inputPassword, + // rules: [{ + // required: true, + // message: '请输入旧密码', + // }] + // }, + { + key: 'newPassword', + label: '新密码', + type: FormItemType.inputPassword, + rules: [ + { + required: true, + message: '请输入新密码', + } + ], + attrs:{ + onChange:(e:any)=>{ + users.setNewPassWord(e.target.value) + } + } + }, + { + key: 'confirmPassword', + label: '确认密码', + type: FormItemType.inputPassword, + rules: [ + { + required: true, + message: '请确认密码', + validator:(rule:any, value:any, callback:any) => { + // 验证新密码的一致性 + if(users.newPassWord){ + if(value!==users.newPassWord){ + rule.message = "两次密码输入不一致"; + callback('两次密码输入不一致') + }else{ + callback() + } + }else if(!value){ + rule.message = "请确认密码"; + callback('请确认密码'); + }else{ + callback() + } + }, + } + ], + }, + ], + formData: record || {}, + visible: true, + title: '修改密码', + onSubmit: (value: IUser) => { + let params:any = { + username:record?.username, + password:value.confirmPassword, + role:record?.role, + } + return users.modfiyUser(params).then(() => { + message.success('操作成功'); + }); + }, + } + wrapper.open(xFormModal); +}; + diff --git a/kafka-manager-console/src/container/modal/admin/version.ts b/kafka-manager-console/src/container/modal/admin/version.ts index 
ea642a8f..c863eba1 100644 --- a/kafka-manager-console/src/container/modal/admin/version.ts +++ b/kafka-manager-console/src/container/modal/admin/version.ts @@ -1,6 +1,6 @@ import * as React from 'react'; -import { notification } from 'component/antd'; -import { IUploadFile, IConfigure } from 'types/base-type'; +import { notification, Select } from 'component/antd'; +import { IUploadFile, IConfigure, IConfigGateway } from 'types/base-type'; import { version } from 'store/version'; import { admin } from 'store/admin'; import { wrapper } from 'store'; @@ -97,8 +97,8 @@ const updateFormModal = (type: number) => { formMap[2].attrs = { accept: version.fileSuffix, }, - // tslint:disable-next-line:no-unused-expression - wrapper.ref && wrapper.ref.updateFormMap$(formMap, wrapper.xFormWrapper.formData, true); + // tslint:disable-next-line:no-unused-expression + wrapper.ref && wrapper.ref.updateFormMap$(formMap, wrapper.xFormWrapper.formData, true); } }; @@ -157,8 +157,8 @@ export const showModifyModal = (record: IUploadFile) => { export const showConfigureModal = async (record?: IConfigure) => { if (record) { - const result:any = await format2json(record.configValue); - record.configValue = result.result; + const result: any = await format2json(record.configValue); + record.configValue = result.result || record.configValue; } const xFormModal = { formMap: [ @@ -193,10 +193,69 @@ export const showConfigureModal = async (record?: IConfigure) => { return admin.editConfigure(value).then(data => { notification.success({ message: '编辑配置成功' }); }); + } else { + return admin.addNewConfigure(value).then(data => { + notification.success({ message: '新建配置成功' }); + }); + } + }, + }; + wrapper.open(xFormModal); +}; + +export const showConfigGatewayModal = async (record?: IConfigGateway) => { + const xFormModal = { + formMap: [ + { + key: 'type', + label: '配置类型', + rules: [{ required: true, message: '请选择配置类型' }], + type: "select", + options: admin.gatewayType.map((item: any, index: number) 
=> ({ + key: index, + label: item.configName, + value: item.configType, + })), + attrs: { + disabled: record ? true : false, + } + }, { + key: 'name', + label: '配置键', + rules: [{ required: true, message: '请输入配置键' }], + attrs: { + disabled: record ? true : false, + }, + }, { + key: 'value', + label: '配置值', + type: 'text_area', + rules: [{ + required: true, + message: '请输入配置值', + }], + }, { + key: 'description', + label: '描述', + type: 'text_area', + rules: [{ required: true, message: '请输入备注' }], + }, + ], + formData: record || {}, + visible: true, + isWaitting: true, + title: `${record ? '编辑配置' : '新建配置'}`, + onSubmit: async (parmas: IConfigGateway) => { + if (record) { + parmas.id = record.id; + return admin.editConfigGateway(parmas).then(data => { + notification.success({ message: '编辑配置成功' }); + }); + } else { + return admin.addNewConfigGateway(parmas).then(data => { + notification.success({ message: '新建配置成功' }); + }); } - return admin.addNewConfigure(value).then(data => { - notification.success({ message: '新建配置成功' }); - }); }, }; wrapper.open(xFormModal); diff --git a/kafka-manager-console/src/container/modal/app.tsx b/kafka-manager-console/src/container/modal/app.tsx index 26becc73..bb0320ec 100644 --- a/kafka-manager-console/src/container/modal/app.tsx +++ b/kafka-manager-console/src/container/modal/app.tsx @@ -29,7 +29,7 @@ export const showEditModal = (record?: IAppItem, from?: string, isDisabled?: boo rules: [{ required: isDisabled ? false : true, message: '应用名称只支持中文、字母、数字、下划线、短划线,长度限制在3-64字符', - pattern: /[\u4e00-\u9fa5_a-zA-Z0-9_-]{3,64}/, + pattern: /[\u4e00-\u9fa5_a-zA-Z0-9_-]{3,64}$/, }], attrs: { disabled: isDisabled }, }, { @@ -85,7 +85,7 @@ export const showEditModal = (record?: IAppItem, from?: string, isDisabled?: boo ], formData: record, visible: true, - title: isDisabled ? '详情' : record ? '编辑' :
    应用申请应用申请文档
    , + title: isDisabled ? '详情' : record ? '编辑' :
    应用申请资源申请文档
    , // customRenderElement: isDisabled ? '' : record ? '' : 集群资源充足时,预计1分钟自动审批通过, isWaitting: true, onSubmit: (value: IAppItem) => { diff --git a/kafka-manager-console/src/container/modal/cluster.tsx b/kafka-manager-console/src/container/modal/cluster.tsx index 23cb48ba..8ea048ef 100644 --- a/kafka-manager-console/src/container/modal/cluster.tsx +++ b/kafka-manager-console/src/container/modal/cluster.tsx @@ -29,7 +29,7 @@ export const showCpacityModal = (item: IClusterData) => { key: 'description', label: '申请原因', type: 'text_area', - rules: [{ required: true, pattern: /^.{5,}.$/, message: '请输入至少5个字符' }], + rules: [{ required: true, pattern: /^.{4,}.$/, message: '请输入至少5个字符' }], attrs: { placeholder: '请输入至少5个字符', }, @@ -44,12 +44,12 @@ export const showCpacityModal = (item: IClusterData) => { type: value.type, applicant: users.currentUser.username, description: value.description, - extensions: JSON.stringify({clusterId: item.clusterId}), + extensions: JSON.stringify({ clusterId: item.clusterId }), }; cluster.applyCpacity(cpacityParams).then(data => { notification.success({ message: `申请${value.type === 5 ? 
'扩容' : '缩容'}成功`, - }); + }); window.location.href = `${urlPrefix}/user/order-detail/?orderId=${data.id}&region=${region.currentRegion}`; }); }, diff --git a/kafka-manager-console/src/container/modal/expert.tsx b/kafka-manager-console/src/container/modal/expert.tsx index 96a1f312..2ff5e5f2 100644 --- a/kafka-manager-console/src/container/modal/expert.tsx +++ b/kafka-manager-console/src/container/modal/expert.tsx @@ -20,14 +20,14 @@ export interface IRenderData { } export const migrationModal = (renderData: IRenderData[]) => { - const xFormWrapper = { + const xFormWrapper = { type: 'drawer', visible: true, width: 1000, title: '新建迁移任务', - customRenderElement: , + customRenderElement: , nofooter: true, noform: true, }; - wrapper.open(xFormWrapper as IXFormWrapper); + wrapper.open(xFormWrapper as IXFormWrapper); }; diff --git a/kafka-manager-console/src/container/modal/order.tsx b/kafka-manager-console/src/container/modal/order.tsx index 2930e3df..c982db4c 100644 --- a/kafka-manager-console/src/container/modal/order.tsx +++ b/kafka-manager-console/src/container/modal/order.tsx @@ -75,8 +75,8 @@ export const showApprovalModal = (info: IOrderInfo, status: number, from?: strin // }], rules: [{ required: true, - message: '请输入大于12小于999的整数', - pattern: /^([1-9]{1}[0-9]{2})$|^([2-9]{1}[0-9]{1})$|^(1[2-9]{1})$/, + message: '请输入大于0小于10000的整数', + pattern: /^\+?[1-9]\d{0,3}(\.\d*)?$/, }], }, { key: 'species', diff --git a/kafka-manager-console/src/container/modal/topic.tsx b/kafka-manager-console/src/container/modal/topic.tsx index e4c47f14..d4df0318 100644 --- a/kafka-manager-console/src/container/modal/topic.tsx +++ b/kafka-manager-console/src/container/modal/topic.tsx @@ -22,7 +22,7 @@ export const applyTopic = () => { formMap: [ { key: 'clusterId', - label: '所属逻辑集群:', + label: '所属集群:', type: 'select', options: cluster.clusterData, rules: [{ required: true, message: '请选择' }], @@ -75,7 +75,7 @@ export const applyTopic = () => { key: 'description', label: '申请原因', type: 'text_area',
- rules: [{ required: true, pattern: /^.{5,}.$/s, message: '请输入至少5个字符' }], + rules: [{ required: true, pattern: /^.{4,}.$/s, message: '请输入至少5个字符' }], attrs: { placeholder: `概要描述Topic的数据源, Topic数据的生产者/消费者, Topic的申请原因及备注信息等。(最多100个字) 例如: @@ -88,7 +88,7 @@ export const applyTopic = () => { ], formData: {}, visible: true, - title: '申请Topic', + title: , okText: '确认', // customRenderElement: 集群资源充足时,预计1分钟自动审批通过, isWaitting: true, @@ -180,13 +180,14 @@ export const showApplyQuatoModal = (item: ITopic | IAppsIdInfo, record: IQuotaQu const isConsume = item.access === 0 || item.access === 2; const xFormModal = { formMap: [ + // { + // key: 'clusterName', + // label: '逻辑集群名称', + // rules: [{ required: true, message: '' }], + // attrs: { disabled: true }, + // invisible: !item.hasOwnProperty('clusterName'), + // }, { - key: 'clusterName', - label: '逻辑集群名称', - rules: [{ required: true, message: '' }], - attrs: { disabled: true }, - invisible: !item.hasOwnProperty('clusterName'), - }, { key: 'topicName', label: 'Topic名称', rules: [{ required: true, message: '' }], @@ -225,7 +226,7 @@ export const showApplyQuatoModal = (item: ITopic | IAppsIdInfo, record: IQuotaQu key: 'description', label: '申请原因', type: 'text_area', - rules: [{ required: true, pattern: /^.{5,}.$/, message: quotaRemarks }], + rules: [{ required: true, pattern: /^.{4,}.$/, message: quotaRemarks }], attrs: { placeholder: quotaRemarks, }, @@ -292,13 +293,15 @@ const updateFormModal = (appId: string) => { export const showTopicApplyQuatoModal = (item: ITopic) => { const xFormModal = { formMap: [ + // { + // key: 'clusterName', + // label: '逻辑集群名称', + // rules: [{ required: true, message: '' }], + // attrs: { disabled: true }, + // defaultValue: item.clusterName, + // // invisible: !item.hasOwnProperty('clusterName'), + // }, { - key: 'clusterName', - label: '逻辑集群名称', - rules: [{ required: true, message: '' }], - attrs: { disabled: true }, - // invisible: !item.hasOwnProperty('clusterName'), - }, { key: 'topicName', 
label: 'Topic名称', rules: [{ required: true, message: '' }], @@ -530,7 +533,7 @@ const showAllPermission = (appId: string, item: ITopic, access: number) => { rules: [{ required: true, validator: (rule: any, value: string, callback: any) => { - const regexp = /^.{5,}.$/; + const regexp = /^.{4,}.$/; value = value.trim(); if (!regexp.test(value)) { callback('请输入至少5个字符'); @@ -629,7 +632,7 @@ export const showPermissionModal = (item: ITopic) => { rules: [{ required: true, validator: (rule: any, value: string, callback: any) => { - const regexp = /^.{5,}.$/; + const regexp = /^.{4,}.$/; value = value.trim(); if (!regexp.test(value)) { callback('请输入至少5个字符'); @@ -678,7 +681,7 @@ export const showTopicEditModal = (item: ITopic) => { key: 'description', label: '备注', type: 'text_area', - rules: [{ required: false }, { pattern: /^.{5,}.$/, message: '请输入至少5个字符' }], + rules: [{ required: false }, { pattern: /^.{4,}.$/, message: '请输入至少5个字符' }], }, ], formData: { diff --git a/kafka-manager-console/src/container/search-filter.tsx b/kafka-manager-console/src/container/search-filter.tsx index 12603d40..ac5d6bc1 100644 --- a/kafka-manager-console/src/container/search-filter.tsx +++ b/kafka-manager-console/src/container/search-filter.tsx @@ -126,7 +126,7 @@ export class SearchAndFilterContainer extends React.Component diff --git a/kafka-manager-console/src/container/topic/config.tsx b/kafka-manager-console/src/container/topic/config.tsx index a845cec1..8485ec42 100644 --- a/kafka-manager-console/src/container/topic/config.tsx +++ b/kafka-manager-console/src/container/topic/config.tsx @@ -85,7 +85,6 @@ export const applyQuotaQuery = (item: ITopic) => { }; export const applyTopicQuotaQuery = async (item: ITopic) => { - console.log(item) await app.getTopicAppQuota(item.clusterId, item.topicName); await showTopicApplyQuatoModal(item); }; @@ -142,7 +141,7 @@ export const getAllTopicColumns = (urlPrefix: string) => { {text} ); }, diff --git 
a/kafka-manager-console/src/container/topic/topic-all.tsx b/kafka-manager-console/src/container/topic/topic-all.tsx index 19bdc709..59456660 100644 --- a/kafka-manager-console/src/container/topic/topic-all.tsx +++ b/kafka-manager-console/src/container/topic/topic-all.tsx @@ -60,7 +60,7 @@ export class AllTopic extends SearchAndFilterContainer { if (cluster.allActive !== -1 || searchKey !== '') { data = origin.filter(d => ((d.topicName !== undefined && d.topicName !== null) && d.topicName.toLowerCase().includes(searchKey as string) - || ((d.appPrincipals !== undefined && d.appPrincipals !== null) && d.appPrincipals.toLowerCase().includes(searchKey as string))) + || ((d.appPrincipals !== undefined && d.appPrincipals !== null) && d.appPrincipals.toLowerCase().includes(searchKey as string))) && (cluster.allActive === -1 || d.clusterId === cluster.allActive), ); } else { diff --git a/kafka-manager-console/src/container/topic/topic-detail/base-information.tsx b/kafka-manager-console/src/container/topic/topic-detail/base-information.tsx index 48a58f36..a9d91a2f 100644 --- a/kafka-manager-console/src/container/topic/topic-detail/base-information.tsx +++ b/kafka-manager-console/src/container/topic/topic-detail/base-information.tsx @@ -69,7 +69,7 @@ export class BaseInformation extends React.Component { label: '压缩格式', value: baseInfo.topicCodeC, }, { - label: '所属物理集群ID', + label: '集群ID', value: baseInfo.clusterId, }, { label: '所属region', diff --git a/kafka-manager-console/src/container/topic/topic-detail/bill-information.tsx b/kafka-manager-console/src/container/topic/topic-detail/bill-information.tsx index d7bb90b6..5a10b666 100644 --- a/kafka-manager-console/src/container/topic/topic-detail/bill-information.tsx +++ b/kafka-manager-console/src/container/topic/topic-detail/bill-information.tsx @@ -95,23 +95,23 @@ export class BillInformation extends SearchAndFilterContainer { } public render() { - return( + return ( <> -
    -
      -
    • 账单信息  +
      +
        +
      • 账单信息  - - -
      • - {this.renderDatePick()} -
      - {this.renderChart()} -
      + // tslint:disable-next-line:max-line-length + href="https://github.com/didi/kafka-manager" + target="_blank" + > + + +
    • + {this.renderDatePick()} +
    + {this.renderChart()} +
    ); } diff --git a/kafka-manager-console/src/container/topic/topic-detail/connect-information.tsx b/kafka-manager-console/src/container/topic/topic-detail/connect-information.tsx index f323310c..1e5ab182 100644 --- a/kafka-manager-console/src/container/topic/topic-detail/connect-information.tsx +++ b/kafka-manager-console/src/container/topic/topic-detail/connect-information.tsx @@ -101,7 +101,9 @@ export class ConnectInformation extends SearchAndFilterContainer { <>
      -
    • 连接信息
    • +
    • + 连接信息 展示近20分钟的连接信息 +
    • {this.renderSearch('', '请输入连接信息', 'searchKey')}
    {this.renderConnectionInfo(this.getData(topic.connectionInfo))} diff --git a/kafka-manager-console/src/container/topic/topic-detail/group-id.tsx b/kafka-manager-console/src/container/topic/topic-detail/group-id.tsx index 20b7642f..b173ac41 100644 --- a/kafka-manager-console/src/container/topic/topic-detail/group-id.tsx +++ b/kafka-manager-console/src/container/topic/topic-detail/group-id.tsx @@ -138,7 +138,7 @@ export class GroupID extends SearchAndFilterContainer { public renderConsumerDetails() { const consumerGroup = this.consumerGroup; - const columns = [{ + const columns: any = [{ title: 'Partition ID', dataIndex: 'partitionId', key: 'partitionId', @@ -179,7 +179,8 @@ export class GroupID extends SearchAndFilterContainer { <>
    {consumerGroup} -
    +
    + {this.renderSearch('', '请输入Consumer ID')} @@ -187,7 +188,7 @@ export class GroupID extends SearchAndFilterContainer {
    @@ -214,7 +215,12 @@ export class GroupID extends SearchAndFilterContainer { dataIndex: 'location', key: 'location', width: '34%', - }, + }, { + title: '状态', + dataIndex: 'state', + key: 'state', + width: '34%', + } ]; return ( <> @@ -236,7 +242,17 @@ export class GroupID extends SearchAndFilterContainer { data = searchKey ? origin.filter((item: IConsumerGroups) => (item.consumerGroup !== undefined && item.consumerGroup !== null) && item.consumerGroup.toLowerCase().includes(searchKey as string), - ) : origin ; + ) : origin; + return data; + } + + public getDetailData(origin: T[]) { + let data: T[] = origin; + let { searchKey } = this.state; + searchKey = (searchKey + '').trim().toLowerCase(); + data = searchKey ? origin.filter((item: IConsumeDetails) => + (item.clientId !== undefined && item.clientId !== null) && item.clientId.toLowerCase().includes(searchKey as string), + ) : origin; return data; } diff --git a/kafka-manager-console/src/container/topic/topic-detail/index.tsx b/kafka-manager-console/src/container/topic/topic-detail/index.tsx index 451e5382..0220341b 100644 --- a/kafka-manager-console/src/container/topic/topic-detail/index.tsx +++ b/kafka-manager-console/src/container/topic/topic-detail/index.tsx @@ -1,7 +1,7 @@ import * as React from 'react'; import './index.less'; import { wrapper, region } from 'store'; -import { Tabs, PageHeader, Button, notification, Drawer, message, Icon } from 'antd'; +import { Tabs, PageHeader, Button, notification, Drawer, message, Icon, Spin } from 'antd'; import { observer } from 'mobx-react'; import { BaseInformation } from './base-information'; import { StatusChart } from './status-chart'; @@ -44,6 +44,7 @@ export class TopicDetail extends React.Component { drawerVisible: false, infoVisible: false, infoTopicList: [] as IInfoData[], + isExecutionBtn: false }; private $formRef: any; @@ -54,7 +55,7 @@ export class TopicDetail extends React.Component { const url = Url(); this.clusterId = Number(url.search.clusterId); 
this.needAuth = url.search.needAuth; - this.clusterName = url.search.clusterName; + this.clusterName = decodeURI(decodeURI(url.search.clusterName)); this.topicName = url.search.topic; const isPhysical = Url().search.hasOwnProperty('isPhysicalClusterId'); this.isPhysicalTrue = isPhysical ? '&isPhysicalClusterId=true' : ''; @@ -197,7 +198,9 @@ export class TopicDetail extends React.Component { formData={formData} formMap={formMap} /> - + {infoVisible ? this.renderInfo() : null} @@ -243,7 +246,11 @@ export class TopicDetail extends React.Component { ); } + // 执行加载图标 + public antIcon = + public drawerSubmit = (value: any) => { + this.setState({ isExecutionBtn: true }) this.$formRef.validateFields((error: Error, result: any) => { if (error) { return; @@ -253,9 +260,12 @@ export class TopicDetail extends React.Component { this.setState({ infoTopicList: data, infoVisible: true, + isExecutionBtn: false }); message.success('采样成功'); - }); + }).catch(err => { + this.setState({ isExecutionBtn: false }) + }) }); } @@ -315,6 +325,7 @@ export class TopicDetail extends React.Component { public componentDidMount() { topic.getTopicBasicInfo(this.clusterId, this.topicName); topic.getTopicBusiness(this.clusterId, this.topicName); + app.getAppList(); } public render() { @@ -326,7 +337,6 @@ export class TopicDetail extends React.Component { topicName: this.topicName, clusterName: this.clusterName } as ITopic; - app.getAppList(); return ( <> @@ -342,9 +352,9 @@ export class TopicDetail extends React.Component { {this.needAuth == "true" && } - - - {showEditBtn && } + + + {/* {showEditBtn && } */} } /> diff --git a/kafka-manager-console/src/container/topic/topic-detail/reset-offset.tsx b/kafka-manager-console/src/container/topic/topic-detail/reset-offset.tsx index be0767e8..531f69c6 100644 --- a/kafka-manager-console/src/container/topic/topic-detail/reset-offset.tsx +++ b/kafka-manager-console/src/container/topic/topic-detail/reset-offset.tsx @@ -71,32 +71,32 @@ class ResetOffset extends 
React.Component { const { getFieldDecorator } = this.props.form; const { typeValue, offsetValue } = this.state; return ( - <> - - - + <> + + + {/* */}
    - + 重置到指定时间
    - - 最新offset - 自定义 - + + 最新offset + 自定义 + {typeValue === 'time' && offsetValue === 'custom' && getFieldDecorator('timestamp', { rules: [{ required: false, message: '' }], initialValue: moment(), - })( + })( { 重置指定分区及偏移 - + diff --git a/kafka-manager-console/src/container/topic/topic-mine.tsx b/kafka-manager-console/src/container/topic/topic-mine.tsx index c0a0b14a..d4f473d1 100644 --- a/kafka-manager-console/src/container/topic/topic-mine.tsx +++ b/kafka-manager-console/src/container/topic/topic-mine.tsx @@ -30,7 +30,7 @@ export class MineTopic extends SearchAndFilterContainer { if (cluster.active !== -1 || app.active !== '-1' || searchKey !== '') { data = origin.filter(d => ((d.topicName !== undefined && d.topicName !== null) && d.topicName.toLowerCase().includes(searchKey as string) - || ((d.appName !== undefined && d.appName !== null) && d.appName.toLowerCase().includes(searchKey as string))) + || ((d.appName !== undefined && d.appName !== null) && d.appName.toLowerCase().includes(searchKey as string))) && (cluster.active === -1 || d.clusterId === cluster.active) && (app.active === '-1' || d.appId === (app.active + '')), ); @@ -152,18 +152,18 @@ export class MineTopic extends SearchAndFilterContainer { public render() { return ( <> -
    - this.handleTabKey(key)}> - - {this.renderOperationPanel(1)} - {this.renderMyTopicTable(this.getData(topic.mytopicData))} - - - {this.renderOperationPanel(2)} - {this.renderDeprecatedTopicTable(this.getData(topic.expireData))} - - -
    +
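The `topic-mine` filter at the start of this file's hunk combines a case-insensitive search over `topicName`/`appName` with optional cluster and app filters. Restated as a standalone function with simplified, illustrative types (the component operates on its own `ITopic`-like rows):

```typescript
interface Row { topicName?: string | null; appName?: string | null; clusterId: number; appId: string; }

// A row matches when the search key hits topicName OR appName, AND the
// cluster filter (-1 means "all") AND the app filter ('-1' means "all") match.
function filterRows(origin: Row[], searchKey: string, activeCluster: number, activeApp: string): Row[] {
  const key = searchKey.toLowerCase();
  return origin.filter(d =>
    ((d.topicName != null && d.topicName.toLowerCase().includes(key))
      || (d.appName != null && d.appName.toLowerCase().includes(key)))
    && (activeCluster === -1 || d.clusterId === activeCluster)
    && (activeApp === '-1' || d.appId === activeApp),
  );
}
```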
    + this.handleTabKey(key)}> + + {this.renderOperationPanel(1)} + {this.renderMyTopicTable(this.getData(topic.mytopicData))} + + + {this.renderOperationPanel(2)} + {this.renderDeprecatedTopicTable(this.getData(topic.expireData))} + + +
    ); } diff --git a/kafka-manager-console/src/container/user-center/my-bill.tsx b/kafka-manager-console/src/container/user-center/my-bill.tsx index 3383861a..1449ef9a 100644 --- a/kafka-manager-console/src/container/user-center/my-bill.tsx +++ b/kafka-manager-console/src/container/user-center/my-bill.tsx @@ -79,7 +79,7 @@ export class MyBill extends React.Component { } public renderTableList() { - const userUrl=`${urlPrefix}/user/bill-detail` + const userUrl = `${urlPrefix}/user/bill-detail` return (
    );
  }
-
+
  public renderChart() {
    return (
-      this.chart = ref } getChartData={this.getData.bind(this, null)} />
+      this.chart = ref} getChartData={this.getData.bind(this, null)} />
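A pattern that recurs throughout the store changes later in this diff: every fetched list is stamped with `item.key = index` before it reaches an antd `Table`, which wants a unique `key` per row. A generic helper capturing the pattern (illustrative; the patch writes it inline each time, mutating the items rather than copying them):

```typescript
// Normalizes a possibly-null API payload into an array whose rows each carry
// a stable `key` for antd Tables. Uses a spread copy instead of mutation.
function withRowKeys<T extends object>(data: T[] | null | undefined): Array<T & { key: number }> {
  return data ? data.map((item, index) => ({ ...item, key: index })) : [];
}
```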
    ); } @@ -131,7 +131,7 @@ export class MyBill extends React.Component { <>
-          账单趋势 
-          }
+          }
           key="1"
         >
           {this.renderDatePick()}
diff --git a/kafka-manager-console/src/lib/api.ts b/kafka-manager-console/src/lib/api.ts
index f53f6852..8716d4ea 100644
--- a/kafka-manager-console/src/lib/api.ts
+++ b/kafka-manager-console/src/lib/api.ts
@@ -1,5 +1,5 @@
 import fetch, { formFetch } from './fetch';
-import { IUploadFile, IUser, IQuotaModelItem, ILimitsItem, ITopic, IOrderParams, ISample, IMigration, IExecute, IEepand, IUtils, ITopicMetriceParams, IRegister, IEditTopic, IExpand, IDeleteTopic, INewRegions, INewLogical, IRebalance, INewBulidEnums, ITrigger, IApprovalOrder, IMonitorSilences, IConfigure, IBatchApproval } from 'types/base-type';
+import { IUploadFile, IUser, IQuotaModelItem, ILimitsItem, ITopic, IOrderParams, ISample, IMigration, IExecute, IEepand, IUtils, ITopicMetriceParams, IRegister, IEditTopic, IExpand, IDeleteTopic, INewRegions, INewLogical, IRebalance, INewBulidEnums, ITrigger, IApprovalOrder, IMonitorSilences, IConfigure, IConfigGateway, IBatchApproval } from 'types/base-type';
 import { IRequestParams } from 'types/alarm';
 import { apiCache } from 'lib/api-cache';
@@ -442,6 +442,34 @@ export const deleteConfigure = (configKey: string) => {
   });
 };
 
+export const getGatewayList = () => {
+  return fetch(`/rd/gateway-configs`);
+};
+
+export const getGatewayType = () => {
+  return fetch(`/op/gateway-configs/type-enums`);
+};
+
+export const addNewConfigGateway = (params: IConfigGateway) => {
+  return fetch(`/op/gateway-configs`, {
+    method: 'POST',
+    body: JSON.stringify(params),
+  });
+};
+
+export const editConfigGateway = (params: IConfigGateway) => {
+  return fetch(`/op/gateway-configs`, {
+    method: 'PUT',
+    body: JSON.stringify(params),
+  });
+};
+export const deleteConfigGateway = (params: IConfigure) => {
+  return fetch(`/op/gateway-configs`, {
+    method: 'DELETE',
+    body: JSON.stringify(params),
+  });
+};
+
 export const getDataCenter = () => {
   return fetch(`/normal/configs/idc`);
 };
@@ -530,6 +558,23 @@ export const getControllerHistory = (clusterId: number) => {
   return fetch(`/rd/clusters/${clusterId}/controller-history`);
 };
 
+export const getCandidateController = (clusterId: number) => {
+  return fetch(`/rd/clusters/${clusterId}/controller-preferred-candidates`);
+};
+
+export const addCandidateController = (params: any) => {
+  return fetch(`/op/cluster-controller/preferred-candidates`, {
+    method: 'POST',
+    body: JSON.stringify(params),
+  });
+};
+
+export const deleteCandidateCancel = (params: any) => {
+  return fetch(`/op/cluster-controller/preferred-candidates`, {
+    method: 'DELETE',
+    body: JSON.stringify(params),
+  });
+};
 /**
  * O&M controls: broker
  */
diff --git a/kafka-manager-console/src/lib/fetch.ts b/kafka-manager-console/src/lib/fetch.ts
index 037b0787..ef307ccb 100644
--- a/kafka-manager-console/src/lib/fetch.ts
+++ b/kafka-manager-console/src/lib/fetch.ts
@@ -33,6 +33,7 @@ const checkStatus = (res: Response) => {
 };
 
 const filter = (init: IInit) => (res: IRes) => {
+
   if (res.code !== 0 && res.code !== 200) {
     if (!init.errorNoTips) {
       notification.error({
diff --git a/kafka-manager-console/src/lib/line-charts-config.ts b/kafka-manager-console/src/lib/line-charts-config.ts
index fe9880a6..4a667c0c 100644
--- a/kafka-manager-console/src/lib/line-charts-config.ts
+++ b/kafka-manager-console/src/lib/line-charts-config.ts
@@ -77,7 +77,7 @@ export const getControlMetricOption = (type: IOptionType, data: IClusterMetrics[
       name = '条';
       data.map(item => {
         item.messagesInPerSec = item.messagesInPerSec !== null ? Number(item.messagesInPerSec.toFixed(2)) : null;
-      });
+      });
       break;
     case 'brokerNum':
     case 'topicNum':
@@ -224,7 +224,7 @@ export const getClusterMetricOption = (type: IOptionType, record: IClusterMetric
       name = '条';
       data.map(item => {
         item.messagesInPerSec = item.messagesInPerSec !== null ?
Number(item.messagesInPerSec.toFixed(2)) : null;
-      });
+      });
       break;
     default:
       const { name: unitName, data: xData } = dealFlowData(metricTypeMap[type], data);
@@ -248,8 +248,8 @@ export const getClusterMetricOption = (type: IOptionType, record: IClusterMetric
       const unitSeries = item.data[item.seriesName] !== null ? Number(item.data[item.seriesName]) : null;
       // tslint:disable-next-line:max-line-length
       result += '';
-      if ( (item.data.produceThrottled && item.seriesName === 'appIdBytesInPerSec')
-        || (item.data.consumeThrottled && item.seriesName === 'appIdBytesOutPerSec') ) {
+      if ((item.data.produceThrottled && item.seriesName === 'appIdBytesInPerSec')
+        || (item.data.consumeThrottled && item.seriesName === 'appIdBytesOutPerSec')) {
         return result += item.seriesName + ': ' + unitSeries + '(被限流)' + '<br/>';
       }
       return result += item.seriesName + ': ' + unitSeries + '<br/>';
@@ -317,7 +317,7 @@ export const getMonitorMetricOption = (seriesName: string, data: IMetricPoint[])
       if (ele.name === item.seriesName) {
         // tslint:disable-next-line:max-line-length
         result += '';
-        return result += item.seriesName + ': ' + (item.data.value === null ? '' : item.data.value.toFixed(2)) + '<br/>';
+        return result += item.seriesName + ': ' + (item.data.value === null ? '' : item.data.value.toFixed(2)) + '<br/>';
       }
     });
   });
diff --git a/kafka-manager-console/src/store/admin-monitor.ts b/kafka-manager-console/src/store/admin-monitor.ts
index 4071e1c5..7e257637 100644
--- a/kafka-manager-console/src/store/admin-monitor.ts
+++ b/kafka-manager-console/src/store/admin-monitor.ts
@@ -3,6 +3,11 @@ import { observable, action } from 'mobx';
 import { getBrokersMetricsHistory } from 'lib/api';
 import { IClusterMetrics } from 'types/base-type';
 
+const STATUS = {
+  PENDING: 'pending',
+  REJECT: 'reject',
+  FULLFILLED: 'fullfilled'
+}
 class AdminMonitor {
   @observable
   public currentClusterId = null as number;
@@ -33,33 +38,42 @@ class AdminMonitor {
   @action.bound
   public setBrokersChartsData(data: IClusterMetrics[]) {
     this.brokersMetricsHistory = data;
-    this.setRequestId(null);
+    this.setRequestId(STATUS.FULLFILLED);
+    Promise.all(this.taskQueue).then(() => {
+      this.setRequestId(null);
+      this.taskQueue = [];
+    })
     return data;
   }
 
+  public taskQueue = [] as any[];
   public getBrokersMetricsList = async (startTime: string, endTime: string) => {
-    if (this.requestId && this.requestId !== 'error') {
-      return new Promise((res, rej) => {
-        window.setTimeout(() => {
-          if (this.requestId === 'error') {
-            rej();
-          } else {
+    if (this.requestId) {
+      // poll the shared request status on a timer, one queued caller at a time
+      const p = new Promise((res, rej) => {
+        const timer = window.setInterval(() => {
+          if (this.requestId === STATUS.REJECT) {
+            rej(this.brokersMetricsHistory);
+            window.clearInterval(timer);
+          } else if (this.requestId === STATUS.FULLFILLED) {
             res(this.brokersMetricsHistory);
+            window.clearInterval(timer);
           }
-        }, 800); // TODO: this implementation needs improvement
+        }, (this.taskQueue.length + 1) * 100);
       });
+      this.taskQueue.push(p);
+      return p;
     }
-    this.setRequestId('requesting');
+    this.setRequestId(STATUS.PENDING);
     return getBrokersMetricsHistory(this.currentClusterId, this.currentBrokerId, startTime, endTime)
-      .then(this.setBrokersChartsData).catch(() => this.setRequestId('error'));
+      .then(this.setBrokersChartsData).catch(() => this.setRequestId(STATUS.REJECT));
   }
 
   public getBrokersChartsData = async (startTime: string, endTime: string, reload?: boolean) => {
     if (this.brokersMetricsHistory && !reload) {
       return new Promise(res => res(this.brokersMetricsHistory));
     }
-
     return this.getBrokersMetricsList(startTime, endTime);
   }
 }
diff --git a/kafka-manager-console/src/store/admin.ts b/kafka-manager-console/src/store/admin.ts
index f3d08264..bd641773 100644
--- a/kafka-manager-console/src/store/admin.ts
+++ b/kafka-manager-console/src/store/admin.ts
@@ -1,5 +1,5 @@
 import { observable, action } from 'mobx';
-import { INewBulidEnums, ILabelValue, IClusterReal, IOptionType, IClusterMetrics, IClusterTopics, IKafkaFiles, IMetaData, IConfigure, IBrokerData, IOffset, IController, IBrokersBasicInfo, IBrokersStatus, IBrokersTopics, IBrokersPartitions, IBrokersAnalysis, IAnalysisTopicVO, IBrokersMetadata, IBrokersRegions, IThrottles, ILogicalCluster, INewRegions, INewLogical, ITaskManage, IPartitionsLocation, ITaskType, ITasksEnums, ITasksMetaData, ITaskStatusDetails, IKafkaRoles, IEnumsMap, IStaffSummary, IBill, IBillDetail } from 'types/base-type';
+import { INewBulidEnums, ILabelValue, IClusterReal, IOptionType, IClusterMetrics, IClusterTopics, IKafkaFiles, IMetaData, IConfigure, IConfigGateway, IBrokerData, IOffset, IController, IBrokersBasicInfo, IBrokersStatus, IBrokersTopics, IBrokersPartitions, IBrokersAnalysis, IAnalysisTopicVO, IBrokersMetadata, IBrokersRegions, IThrottles, ILogicalCluster, INewRegions, INewLogical, ITaskManage, IPartitionsLocation, ITaskType, ITasksEnums, ITasksMetaData, ITaskStatusDetails, IKafkaRoles, IEnumsMap, IStaffSummary, IBill, IBillDetail } from 'types/base-type';
 import {
   deleteCluster,
   getBasicInfo,
@@ -12,7 +12,12 @@ import {
   getConfigure,
   addNewConfigure,
   editConfigure,
+  addNewConfigGateway,
   deleteConfigure,
+  getGatewayList,
+  getGatewayType,
+  editConfigGateway,
+  deleteConfigGateway,
   getDataCenter,
   getClusterBroker,
   getClusterConsumer,
@@ -49,6 +54,9 @@ import
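The admin-monitor change above serializes duplicate metrics-history requests by having later callers poll a shared status flag on an interval. An equivalent, simpler formulation of the same idea (not what the patch does: it keeps mobx and interval polling) is to cache the in-flight promise so concurrent callers share one request; names here are illustrative:

```typescript
// Coalesces concurrent calls into one underlying fetch, then serves a cache.
class Coalescer<T> {
  private inflight: Promise<T> | null = null;
  private cached: T | null = null;

  constructor(private fetcher: () => Promise<T>) {}

  public get(reload = false): Promise<T> {
    if (this.cached !== null && !reload) {
      return Promise.resolve(this.cached);  // serve the cached result
    }
    if (this.inflight) {
      return this.inflight;                 // coalesce concurrent callers
    }
    this.inflight = this.fetcher().then(
      data => { this.cached = data; this.inflight = null; return data; },
      err => { this.inflight = null; throw err; },
    );
    return this.inflight;
  }
}
```

This avoids both the magic polling interval and the shared `requestId` state machine, at the cost of not throttling retries after a failure.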
{ getStaffSummary, getBillStaffSummary, getBillStaffDetail, + getCandidateController, + addCandidateController, + deleteCandidateCancel } from 'lib/api'; import { getControlMetricOption, getClusterMetricOption } from 'lib/line-charts-config'; @@ -59,6 +67,7 @@ import { transBToMB } from 'lib/utils'; import moment from 'moment'; import { timestore } from './time'; +import { message } from 'component/antd'; class Admin { @observable @@ -97,6 +106,12 @@ class Admin { @observable public configureList: IConfigure[] = []; + @observable + public configGatewayList: IConfigGateway[] = []; + + @observable + public gatewayType: []; + @observable public dataCenterList: string[] = []; @@ -142,6 +157,12 @@ class Admin { @observable public controllerHistory: IController[] = []; + @observable + public controllerCandidate: IController[] = []; + + @observable + public filtercontrollerCandidate: string = ''; + @observable public brokersPartitions: IBrokersPartitions[] = []; @@ -152,7 +173,7 @@ class Admin { public brokersAnalysisTopic: IAnalysisTopicVO[] = []; @observable - public brokersMetadata: IBrokersMetadata[] = []; + public brokersMetadata: IBrokersMetadata[] | any = []; @observable public brokersRegions: IBrokersRegions[] = []; @@ -206,10 +227,10 @@ class Admin { public kafkaRoles: IKafkaRoles[]; @observable - public controlType: IOptionType = 'byteIn/byteOut' ; + public controlType: IOptionType = 'byteIn/byteOut'; @observable - public type: IOptionType = 'byteIn/byteOut' ; + public type: IOptionType = 'byteIn/byteOut'; @observable public currentClusterId = null as number; @@ -241,7 +262,7 @@ class Admin { @action.bound public setClusterRealTime(data: IClusterReal) { - this.clusterRealData = data; + this.clusterRealData = data; this.getRealClusterLoading(false); } @@ -284,7 +305,7 @@ class Admin { return { ...item, label: item.fileName, - value: item.fileName + ',' + item.fileMd5, + value: item.fileName + ',' + item.fileMd5, }; })); } @@ -306,6 +327,20 @@ class Admin { }) : 
[]; } + @action.bound + public setConfigGatewayList(data: IConfigGateway[]) { + this.configGatewayList = data ? data.map((item, index) => { + item.key = index; + return item; + }) : []; + } + + @action.bound + public setConfigGatewayType(data: any) { + this.setLoading(false); + this.gatewayType = data || []; + } + @action.bound public setDataCenter(data: string[]) { this.dataCenterList = data || []; @@ -335,6 +370,17 @@ class Admin { }) : []; } + @action.bound + public setCandidateController(data: IController[]) { + this.controllerCandidate = data ? data.map((item, index) => { + item.key = index; + return item; + }) : []; + this.filtercontrollerCandidate = data?data.map((item,index)=>{ + return item.brokerId + }).join(','):'' + } + @action.bound public setBrokersBasicInfo(data: IBrokersBasicInfo) { this.brokersBasicInfo = data; @@ -356,10 +402,10 @@ class Admin { this.replicaStatus = data.brokerReplicaStatusList.slice(1); this.bytesInStatus.forEach((item, index) => { - this.peakValueList.push({ name: peakValueMap[index], value: item}); + this.peakValueList.push({ name: peakValueMap[index], value: item }); }); this.replicaStatus.forEach((item, index) => { - this.copyValueList.push({name: copyValueMap[index], value: item}); + this.copyValueList.push({ name: copyValueMap[index], value: item }); }); } @@ -415,16 +461,16 @@ class Admin { } @action.bound - public setBrokersMetadata(data: IBrokersMetadata[]) { - this.brokersMetadata = data ? data.map((item, index) => { - item.key = index; - return { - ...item, - text: `${item.host} (BrokerID:${item.brokerId})`, - label: item.host, - value: item.brokerId, - }; - }) : []; + public setBrokersMetadata(data: IBrokersMetadata[]|any) { + this.brokersMetadata = data ? 
data.map((item:any, index:any) => { + item.key = index; + return { + ...item, + text: `${item.host} (BrokerID:${item.brokerId})`, + label: item.host, + value: item.brokerId, + }; + }) : []; } @action.bound @@ -461,9 +507,9 @@ class Admin { @action.bound public setLogicalClusters(data: ILogicalCluster[]) { this.logicalClusters = data ? data.map((item, index) => { - item.key = index; - return item; - }) : []; + item.key = index; + return item; + }) : []; } @action.bound @@ -474,25 +520,25 @@ class Admin { @action.bound public setClustersThrottles(data: IThrottles[]) { this.clustersThrottles = data ? data.map((item, index) => { - item.key = index; - return item; - }) : []; + item.key = index; + return item; + }) : []; } @action.bound public setPartitionsLocation(data: IPartitionsLocation[]) { this.partitionsLocation = data ? data.map((item, index) => { - item.key = index; - return item; - }) : []; + item.key = index; + return item; + }) : []; } @action.bound public setTaskManagement(data: ITaskManage[]) { this.taskManagement = data ? 
data.map((item, index) => { - item.key = index; - return item; - }) : []; + item.key = index; + return item; + }) : []; } @action.bound @@ -568,7 +614,7 @@ class Admin { return deleteCluster(clusterId).then(() => this.getMetaData(true)); } - public getPeakFlowChartData(value: ILabelValue[], map: string []) { + public getPeakFlowChartData(value: ILabelValue[], map: string[]) { return getPieChartOption(value, map); } @@ -627,6 +673,30 @@ class Admin { deleteConfigure(configKey).then(() => this.getConfigure()); } + public getGatewayList() { + getGatewayList().then(this.setConfigGatewayList); + } + + public getGatewayType() { + this.setLoading(true); + getGatewayType().then(this.setConfigGatewayType); + } + + public addNewConfigGateway(params: IConfigGateway) { + return addNewConfigGateway(params).then(() => this.getGatewayList()); + } + + public editConfigGateway(params: IConfigGateway) { + return editConfigGateway(params).then(() => this.getGatewayList()); + } + + public deleteConfigGateway(params: any) { + deleteConfigGateway(params).then(() => { + // message.success('删除成功') + this.getGatewayList() + }); + } + public getDataCenter() { getDataCenter().then(this.setDataCenter); } @@ -643,6 +713,20 @@ class Admin { return getControllerHistory(clusterId).then(this.setControllerHistory); } + public getCandidateController(clusterId: number) { + return getCandidateController(clusterId).then(data=>{ + return this.setCandidateController(data) + }); + } + + public addCandidateController(clusterId: number, brokerIdList: any) { + return addCandidateController({clusterId, brokerIdList}).then(()=>this.getCandidateController(clusterId)); + } + + public deleteCandidateCancel(clusterId: number, brokerIdList: any){ + return deleteCandidateCancel({clusterId, brokerIdList}).then(()=>this.getCandidateController(clusterId)); + } + public getBrokersBasicInfo(clusterId: number, brokerId: number) { return getBrokersBasicInfo(clusterId, brokerId).then(this.setBrokersBasicInfo); } diff --git 
a/kafka-manager-console/src/store/alarm.ts b/kafka-manager-console/src/store/alarm.ts index 7139a42e..e57631f0 100644 --- a/kafka-manager-console/src/store/alarm.ts +++ b/kafka-manager-console/src/store/alarm.ts @@ -96,7 +96,8 @@ class Alarm { @action.bound public setMonitorType(data: IMonitorMetricType) { this.monitorTypeList = data.metricNames || []; - this.monitorType = this.monitorTypeList[0].metricName; + // this.monitorType = this.monitorTypeList[0].metricName; + this.monitorType = ''; } @action.bound @@ -180,6 +181,7 @@ class Alarm { public modifyMonitorStrategy(params: IRequestParams) { return modifyMonitorStrategy(params).then(() => { message.success('操作成功'); + window.location.href = `${urlPrefix}/alarm`; }).finally(() => this.setLoading(false)); } diff --git a/kafka-manager-console/src/store/cluster.ts b/kafka-manager-console/src/store/cluster.ts index aabe41c9..7fb32793 100644 --- a/kafka-manager-console/src/store/cluster.ts +++ b/kafka-manager-console/src/store/cluster.ts @@ -21,7 +21,7 @@ class Cluster { public selectData: IClusterData[] = [{ value: -1, label: '所有集群', - } as IClusterData, + } as IClusterData, ]; @observable @@ -31,7 +31,7 @@ class Cluster { public selectAllData: IClusterData[] = [{ value: -1, label: '所有集群', - } as IClusterData, + } as IClusterData, ]; @observable @@ -59,7 +59,7 @@ class Cluster { public clusterMetrics: IClusterMetrics[] = []; @observable - public type: IOptionType = 'byteIn/byteOut' ; + public type: IOptionType = 'byteIn/byteOut'; @observable public clusterTopics: IClusterTopics[] = []; @@ -130,11 +130,11 @@ class Cluster { public setClusterCombos(data: IConfigInfo[]) { this.clusterComboList = data || []; this.clusterComboList = this.clusterComboList.map(item => { - return { - ...item, - label: item.message, - value: item.code, - }; + return { + ...item, + label: item.message, + value: item.code, + }; }); } @@ -148,7 +148,7 @@ class Cluster { value: item.code, }; }); - this.clusterMode = (this.clusterModes && 
this.clusterModes.filter(ele => ele.code !== 0) ) || []; // 去除 0 共享集群 + this.clusterMode = (this.clusterModes && this.clusterModes.filter(ele => ele.code !== 0)) || []; // 去除 0 共享集群 } @action.bound @@ -158,7 +158,7 @@ class Cluster { @action.bound public setClusterDetailRealTime(data: IClusterReal) { - this.clusterRealData = data; + this.clusterRealData = data; this.setRealLoading(false); } @@ -192,9 +192,9 @@ class Cluster { @action.bound public setClusterDetailThrottles(data: IThrottles[]) { this.clustersThrottles = data ? data.map((item, index) => { - item.key = index; - return item; - }) : []; + item.key = index; + return item; + }) : []; } @action.bound diff --git a/kafka-manager-console/src/store/users.ts b/kafka-manager-console/src/store/users.ts index 8d53114e..249a0187 100644 --- a/kafka-manager-console/src/store/users.ts +++ b/kafka-manager-console/src/store/users.ts @@ -19,6 +19,9 @@ export class Users { @observable public staff: IStaff[] = []; + @observable + public newPassWord: any = null; + @action.bound public setAccount(data: IUser) { setCookie([{ key: 'role', value: `${data.role}`, time: 1 }]); @@ -42,6 +45,11 @@ export class Users { this.loading = value; } + @action.bound + public setNewPassWord(value: boolean) { + this.newPassWord = value; + } + public getAccount() { getAccount().then(this.setAccount); } diff --git a/kafka-manager-console/src/types/alarm.ts b/kafka-manager-console/src/types/alarm.ts index 3c4ddbd6..124282b0 100644 --- a/kafka-manager-console/src/types/alarm.ts +++ b/kafka-manager-console/src/types/alarm.ts @@ -19,6 +19,7 @@ export interface IStrategyFilter { tkey: string; topt: string; tval: string[]; + clusterIdentification?: string; } export interface IRequestParams { appId: string; diff --git a/kafka-manager-console/src/types/base-type.ts b/kafka-manager-console/src/types/base-type.ts index 5cfbb999..605fd4fc 100644 --- a/kafka-manager-console/src/types/base-type.ts +++ b/kafka-manager-console/src/types/base-type.ts @@ -23,6 
+23,7 @@ export interface IBtn { } export interface IClusterData { + clusterIdentification: any; clusterId: number; mode: number; clusterName: string; @@ -189,6 +190,7 @@ export interface IUser { chineseName?: string; department?: string; key?: number; + confirmPassword?:string } export interface IOffset { @@ -485,6 +487,17 @@ export interface IConfigure { key?: number; } +export interface IConfigGateway { + id: number; + key?: number; + modifyTime: number; + name: string; + value: string; + version: string; + type: string; + description: string; +} + export interface IEepand { brokerIdList: number[]; clusterId: number; @@ -598,10 +611,12 @@ export interface IClusterReal { } export interface IBasicInfo { + clusterIdentification: any; bootstrapServers: string; clusterId: number; mode: number; clusterName: string; + clusterNameCn: string; clusterVersion: string; gmtCreate: number; gmtModify: number; @@ -647,8 +662,10 @@ export interface IBrokerData { export interface IController { brokerId: number; host: string; - timestamp: number; - version: number; + timestamp?: number; + version?: number; + startTime?: number; + status?: number; key?: number; } @@ -920,8 +937,9 @@ export interface INewLogical { mode: number; name: string; logicalClusterName?: string; - logicalClusterEName?: string; + logicalClusterNameCn?: string; regionIdList: number[]; + logicalClusterIdentification?:string } export interface IPartitionsLocation { diff --git a/kafka-manager-core/pom.xml b/kafka-manager-core/pom.xml index 2360dffd..81675a43 100644 --- a/kafka-manager-core/pom.xml +++ b/kafka-manager-core/pom.xml @@ -5,13 +5,13 @@ 4.0.0 com.xiaojukeji.kafka kafka-manager-core - 2.1.0-SNAPSHOT + ${kafka-manager.revision} jar kafka-manager com.xiaojukeji.kafka - 2.1.0-SNAPSHOT + ${kafka-manager.revision} diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/cache/KafkaClientPool.java 
b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/cache/KafkaClientPool.java index ce0753e4..921b13ba 100644 --- a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/cache/KafkaClientPool.java +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/cache/KafkaClientPool.java @@ -1,8 +1,8 @@ package com.xiaojukeji.kafka.manager.service.cache; import com.alibaba.fastjson.JSONObject; -import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils; import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO; +import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils; import com.xiaojukeji.kafka.manager.common.utils.factory.KafkaConsumerFactory; import kafka.admin.AdminClient; import org.apache.commons.pool2.impl.GenericObjectPool; @@ -103,6 +103,21 @@ public class KafkaClientPool { } } + public static void closeKafkaConsumerPool(Long clusterId) { + lock.lock(); + try { + GenericObjectPool objectPool = KAFKA_CONSUMER_POOL.remove(clusterId); + if (objectPool == null) { + return; + } + objectPool.close(); + } catch (Exception e) { + LOGGER.error("close kafka consumer pool failed, clusterId:{}.", clusterId, e); + } finally { + lock.unlock(); + } + } + public static KafkaConsumer borrowKafkaConsumerClient(ClusterDO clusterDO) { if (ValidateUtils.isNull(clusterDO)) { return null; @@ -132,7 +147,11 @@ public class KafkaClientPool { if (ValidateUtils.isNull(objectPool)) { return; } - objectPool.returnObject(kafkaConsumer); + try { + objectPool.returnObject(kafkaConsumer); + } catch (Exception e) { + LOGGER.error("return kafka consumer client failed, clusterId:{}", physicalClusterId, e); + } } public static AdminClient getAdminClient(Long clusterId) { diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/cache/LogicalClusterMetadataManager.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/cache/LogicalClusterMetadataManager.java index 72bdcb76..5cd81581 
100644 --- a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/cache/LogicalClusterMetadataManager.java +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/cache/LogicalClusterMetadataManager.java @@ -69,6 +69,19 @@ public class LogicalClusterMetadataManager { return LOGICAL_CLUSTER_ID_BROKER_ID_MAP.getOrDefault(logicClusterId, new HashSet<>()); } + public Long getTopicLogicalClusterId(Long physicalClusterId, String topicName) { + if (!LOADED.get()) { + flush(); + } + + Map logicalClusterIdMap = TOPIC_LOGICAL_MAP.get(physicalClusterId); + if (ValidateUtils.isNull(logicalClusterIdMap)) { + return null; + } + + return logicalClusterIdMap.get(topicName); + } + public LogicalClusterDO getTopicLogicalCluster(Long physicalClusterId, String topicName) { if (!LOADED.get()) { flush(); diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/cache/PhysicalClusterMetadataManager.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/cache/PhysicalClusterMetadataManager.java index 345f7b9c..631b254f 100644 --- a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/cache/PhysicalClusterMetadataManager.java +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/cache/PhysicalClusterMetadataManager.java @@ -4,22 +4,23 @@ import com.xiaojukeji.kafka.manager.common.bizenum.KafkaBrokerRoleEnum; import com.xiaojukeji.kafka.manager.common.constant.Constant; import com.xiaojukeji.kafka.manager.common.constant.KafkaConstant; import com.xiaojukeji.kafka.manager.common.entity.KafkaVersion; -import com.xiaojukeji.kafka.manager.common.utils.ListUtils; import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO; +import com.xiaojukeji.kafka.manager.common.utils.JsonUtils; +import com.xiaojukeji.kafka.manager.common.utils.ListUtils; import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils; -import 
com.xiaojukeji.kafka.manager.common.zookeeper.znode.brokers.BrokerMetadata; -import com.xiaojukeji.kafka.manager.common.zookeeper.znode.ControllerData; -import com.xiaojukeji.kafka.manager.common.zookeeper.znode.brokers.TopicMetadata; -import com.xiaojukeji.kafka.manager.common.zookeeper.ZkConfigImpl; -import com.xiaojukeji.kafka.manager.dao.ControllerDao; +import com.xiaojukeji.kafka.manager.common.utils.jmx.JmxConfig; import com.xiaojukeji.kafka.manager.common.utils.jmx.JmxConnectorWrap; -import com.xiaojukeji.kafka.manager.dao.TopicDao; -import com.xiaojukeji.kafka.manager.dao.gateway.AuthorityDao; -import com.xiaojukeji.kafka.manager.service.service.JmxService; -import com.xiaojukeji.kafka.manager.service.utils.ConfigUtils; -import com.xiaojukeji.kafka.manager.service.zookeeper.*; -import com.xiaojukeji.kafka.manager.service.service.ClusterService; +import com.xiaojukeji.kafka.manager.common.zookeeper.ZkConfigImpl; import com.xiaojukeji.kafka.manager.common.zookeeper.ZkPathUtil; +import com.xiaojukeji.kafka.manager.common.zookeeper.znode.ControllerData; +import com.xiaojukeji.kafka.manager.common.zookeeper.znode.brokers.BrokerMetadata; +import com.xiaojukeji.kafka.manager.common.zookeeper.znode.brokers.TopicMetadata; +import com.xiaojukeji.kafka.manager.dao.ControllerDao; +import com.xiaojukeji.kafka.manager.service.service.ClusterService; +import com.xiaojukeji.kafka.manager.service.service.JmxService; +import com.xiaojukeji.kafka.manager.service.zookeeper.BrokerStateListener; +import com.xiaojukeji.kafka.manager.service.zookeeper.ControllerStateListener; +import com.xiaojukeji.kafka.manager.service.zookeeper.TopicStateListener; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.springframework.beans.factory.annotation.Autowired; @@ -47,15 +48,6 @@ public class PhysicalClusterMetadataManager { @Autowired private ClusterService clusterService; - @Autowired - private ConfigUtils configUtils; - - @Autowired - private TopicDao topicDao; - - 
@Autowired - private AuthorityDao authorityDao; - private final static Map CLUSTER_MAP = new ConcurrentHashMap<>(); private final static Map CONTROLLER_DATA_MAP = new ConcurrentHashMap<>(); @@ -118,13 +110,20 @@ public class PhysicalClusterMetadataManager { return; } + JmxConfig jmxConfig = null; + try { + jmxConfig = JsonUtils.stringToObj(clusterDO.getJmxProperties(), JmxConfig.class); + } catch (Exception e) { + LOGGER.error("class=PhysicalClusterMetadataManager||method=addNew||clusterDO={}||msg=parse jmx properties failed", JsonUtils.toJSONString(clusterDO)); + } + //增加Broker监控 - BrokerStateListener brokerListener = new BrokerStateListener(clusterDO.getId(), zkConfig, configUtils.getJmxMaxConn()); + BrokerStateListener brokerListener = new BrokerStateListener(clusterDO.getId(), zkConfig, jmxConfig); brokerListener.init(); zkConfig.watchChildren(ZkPathUtil.BROKER_IDS_ROOT, brokerListener); //增加Topic监控 - TopicStateListener topicListener = new TopicStateListener(clusterDO.getId(), zkConfig, topicDao, authorityDao); + TopicStateListener topicListener = new TopicStateListener(clusterDO.getId(), zkConfig); topicListener.init(); zkConfig.watchChildren(ZkPathUtil.BROKER_TOPICS_ROOT, topicListener); @@ -163,8 +162,12 @@ public class PhysicalClusterMetadataManager { CLUSTER_MAP.remove(clusterId); } - public Set getClusterIdSet() { - return CLUSTER_MAP.keySet(); + public static Map getClusterMap() { + return CLUSTER_MAP; + } + + public static void updateClusterMap(ClusterDO clusterDO) { + CLUSTER_MAP.put(clusterDO.getId(), clusterDO); } public static ClusterDO getClusterFromCache(Long clusterId) { @@ -280,7 +283,7 @@ public class PhysicalClusterMetadataManager { //---------------------------Broker元信息相关-------------- - public static void putBrokerMetadata(Long clusterId, Integer brokerId, BrokerMetadata brokerMetadata, Integer jmxMaxConn) { + public static void putBrokerMetadata(Long clusterId, Integer brokerId, BrokerMetadata brokerMetadata, JmxConfig jmxConfig) { Map 
metadataMap = BROKER_METADATA_MAP.get(clusterId); if (metadataMap == null) { return; @@ -288,7 +291,7 @@ public class PhysicalClusterMetadataManager { metadataMap.put(brokerId, brokerMetadata); Map jmxMap = JMX_CONNECTOR_MAP.getOrDefault(clusterId, new ConcurrentHashMap<>()); - jmxMap.put(brokerId, new JmxConnectorWrap(brokerMetadata.getHost(), brokerMetadata.getJmxPort(), jmxMaxConn)); + jmxMap.put(brokerId, new JmxConnectorWrap(brokerMetadata.getHost(), brokerMetadata.getJmxPort(), jmxConfig)); JMX_CONNECTOR_MAP.put(clusterId, jmxMap); Map versionMap = KAFKA_VERSION_MAP.getOrDefault(clusterId, new ConcurrentHashMap<>()); diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ClusterService.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ClusterService.java index 004a3f51..2feb321b 100644 --- a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ClusterService.java +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ClusterService.java @@ -4,6 +4,7 @@ import com.xiaojukeji.kafka.manager.common.entity.Result; import com.xiaojukeji.kafka.manager.common.entity.ResultStatus; import com.xiaojukeji.kafka.manager.common.entity.ao.ClusterDetailDTO; import com.xiaojukeji.kafka.manager.common.entity.ao.cluster.ControllerPreferredCandidate; +import com.xiaojukeji.kafka.manager.common.entity.dto.op.ControllerPreferredCandidateDTO; import com.xiaojukeji.kafka.manager.common.entity.vo.normal.cluster.ClusterNameDTO; import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO; import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterMetricsDO; @@ -43,7 +44,7 @@ public interface ClusterService { ClusterNameDTO getClusterName(Long logicClusterId); - ResultStatus deleteById(Long clusterId); + ResultStatus deleteById(Long clusterId, String operator); /** * 获取优先被选举为controller的broker @@ -51,4 +52,20 @@ public interface ClusterService { * @return 
void */ Result> getControllerPreferredCandidates(Long clusterId); + + /** + * 增加优先被选举为controller的broker + * @param clusterId 集群ID + * @param brokerIdList brokerId列表 + * @return + */ + Result addControllerPreferredCandidates(Long clusterId, List brokerIdList); + + /** + * 减少优先被选举为controller的broker + * @param clusterId 集群ID + * @param brokerIdList brokerId列表 + * @return + */ + Result deleteControllerPreferredCandidates(Long clusterId, List brokerIdList); } diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/OperateRecordService.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/OperateRecordService.java index c5007ac6..5b2909ca 100644 --- a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/OperateRecordService.java +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/OperateRecordService.java @@ -1,9 +1,12 @@ package com.xiaojukeji.kafka.manager.service.service; +import com.xiaojukeji.kafka.manager.common.bizenum.ModuleEnum; +import com.xiaojukeji.kafka.manager.common.bizenum.OperateEnum; import com.xiaojukeji.kafka.manager.common.entity.dto.rd.OperateRecordDTO; import com.xiaojukeji.kafka.manager.common.entity.pojo.OperateRecordDO; import java.util.List; +import java.util.Map; /** * @author zhongyuankai @@ -12,5 +15,7 @@ import java.util.List; public interface OperateRecordService { int insert(OperateRecordDO operateRecordDO); + int insert(String operator, ModuleEnum module, String resourceName, OperateEnum operate, Map content); + List queryByCondt(OperateRecordDTO dto); } diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ZookeeperService.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ZookeeperService.java index d24b2d24..d52d3bc7 100644 --- a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ZookeeperService.java +++ 
b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/ZookeeperService.java @@ -26,4 +26,20 @@ public interface ZookeeperService { * @return 操作结果 */ Result> getControllerPreferredCandidates(Long clusterId); + + /** + * 增加优先被选举为controller的broker + * @param clusterId 集群ID + * @param brokerId brokerId + * @return + */ + Result addControllerPreferredCandidate(Long clusterId, Integer brokerId); + + /** + * 减少优先被选举为controller的broker + * @param clusterId 集群ID + * @param brokerId brokerId + * @return + */ + Result deleteControllerPreferredCandidate(Long clusterId, Integer brokerId); } diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/gateway/AppService.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/gateway/AppService.java index c78946b6..82aa5513 100644 --- a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/gateway/AppService.java +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/gateway/AppService.java @@ -17,7 +17,7 @@ public interface AppService { * @param appDO appDO * @return int */ - ResultStatus addApp(AppDO appDO); + ResultStatus addApp(AppDO appDO, String operator); /** * 删除数据 diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/gateway/impl/AppServiceImpl.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/gateway/impl/AppServiceImpl.java index 09b4a071..200b3cf4 100644 --- a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/gateway/impl/AppServiceImpl.java +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/gateway/impl/AppServiceImpl.java @@ -60,10 +60,8 @@ public class AppServiceImpl implements AppService { @Autowired private OperateRecordService operateRecordService; - - @Override - public ResultStatus addApp(AppDO appDO) { + public ResultStatus addApp(AppDO appDO, 
String operator) { try { if (appDao.insert(appDO) < 1) { LOGGER.warn("class=AppServiceImpl||method=addApp||AppDO={}||msg=add fail,{}",appDO,ResultStatus.MYSQL_ERROR.getMessage()); @@ -75,6 +73,15 @@ public class AppServiceImpl implements AppService { kafkaUserDO.setOperation(OperationStatusEnum.CREATE.getCode()); kafkaUserDO.setUserType(0); kafkaUserDao.insert(kafkaUserDO); + + Map content = new HashMap<>(); + content.put("appId", appDO.getAppId()); + content.put("name", appDO.getName()); + content.put("applicant", appDO.getApplicant()); + content.put("password", appDO.getPassword()); + content.put("principals", appDO.getPrincipals()); + content.put("description", appDO.getDescription()); + operateRecordService.insert(operator, ModuleEnum.APP, appDO.getName(), OperateEnum.ADD, content); } catch (DuplicateKeyException e) { LOGGER.error("class=AppServiceImpl||method=addApp||errMsg={}||appDO={}|", e.getMessage(), appDO, e); return ResultStatus.RESOURCE_ALREADY_EXISTED; @@ -141,6 +148,12 @@ public class AppServiceImpl implements AppService { appDO.setDescription(dto.getDescription()); if (appDao.updateById(appDO) > 0) { + Map content = new HashMap<>(); + content.put("appId", appDO.getAppId()); + content.put("name", appDO.getName()); + content.put("principals", appDO.getPrincipals()); + content.put("description", appDO.getDescription()); + operateRecordService.insert(operator, ModuleEnum.APP, appDO.getName(), OperateEnum.EDIT, content); return ResultStatus.SUCCESS; } } catch (DuplicateKeyException e) { diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/gateway/impl/GatewayConfigServiceImpl.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/gateway/impl/GatewayConfigServiceImpl.java index fce7b605..18ee0a0d 100644 --- a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/gateway/impl/GatewayConfigServiceImpl.java +++ 
b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/gateway/impl/GatewayConfigServiceImpl.java @@ -221,13 +221,24 @@ public class GatewayConfigServiceImpl implements GatewayConfigService { if (ValidateUtils.isNull(oldGatewayConfigDO)) { return Result.buildFrom(ResultStatus.RESOURCE_NOT_EXIST); } + if (!oldGatewayConfigDO.getName().equals(newGatewayConfigDO.getName()) || !oldGatewayConfigDO.getType().equals(newGatewayConfigDO.getType()) || ValidateUtils.isBlank(newGatewayConfigDO.getValue())) { return Result.buildFrom(ResultStatus.PARAM_ILLEGAL); } - newGatewayConfigDO.setVersion(oldGatewayConfigDO.getVersion() + 1); - if (gatewayConfigDao.updateById(oldGatewayConfigDO) > 0) { + + // 获取当前同类配置, 插入之后需要增大这个version + List gatewayConfigDOList = gatewayConfigDao.getByConfigType(newGatewayConfigDO.getType()); + Long version = 1L; + for (GatewayConfigDO elem: gatewayConfigDOList) { + if (elem.getVersion() > version) { + version = elem.getVersion() + 1L; + } + } + + newGatewayConfigDO.setVersion(version); + if (gatewayConfigDao.updateById(newGatewayConfigDO) > 0) { return Result.buildSuc(); } return Result.buildFrom(ResultStatus.MYSQL_ERROR); diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/ClusterServiceImpl.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/ClusterServiceImpl.java index 9f9727e1..b505bad0 100644 --- a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/ClusterServiceImpl.java +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/ClusterServiceImpl.java @@ -1,7 +1,8 @@ package com.xiaojukeji.kafka.manager.service.service.impl; import com.xiaojukeji.kafka.manager.common.bizenum.DBStatusEnum; -import com.xiaojukeji.kafka.manager.common.constant.Constant; +import com.xiaojukeji.kafka.manager.common.bizenum.ModuleEnum; +import com.xiaojukeji.kafka.manager.common.bizenum.OperateEnum; 
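The `GatewayConfigServiceImpl.updateById` hunk above replaces a simple `oldVersion + 1` bump with a scan over all configs of the same type, so the updated row's version exceeds every sibling's and version-polling consumers notice the change. A minimal sketch of that computation, assuming plain `long` values stand in for `GatewayConfigDO.getVersion()` (note: the sketch uses `>=` so a tie with the default also bumps; the patch's own loop uses a strict comparison):

```java
import java.util.Arrays;
import java.util.List;

// Sketch of the version computation in GatewayConfigServiceImpl.updateById
// (assumption: plain longs stand in for GatewayConfigDO.getVersion()).
// The new config's version must exceed every existing version of the same
// type so that consumers polling by version notice the update.
public class VersionBumpSketch {
    static long nextVersion(List<Long> existingVersions) {
        long version = 1L; // default when no config of this type exists yet
        for (long v : existingVersions) {
            if (v >= version) {
                version = v + 1L; // strictly greater than the running maximum
            }
        }
        return version;
    }

    public static void main(String[] args) {
        System.out.println(nextVersion(Arrays.asList(1L, 3L, 2L))); // 4
        System.out.println(nextVersion(Arrays.<Long>asList()));     // 1
    }
}
```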
import com.xiaojukeji.kafka.manager.common.entity.Result; import com.xiaojukeji.kafka.manager.common.entity.ResultStatus; import com.xiaojukeji.kafka.manager.common.entity.ao.ClusterDetailDTO; @@ -16,10 +17,7 @@ import com.xiaojukeji.kafka.manager.dao.ClusterMetricsDao; import com.xiaojukeji.kafka.manager.dao.ControllerDao; import com.xiaojukeji.kafka.manager.service.cache.LogicalClusterMetadataManager; import com.xiaojukeji.kafka.manager.service.cache.PhysicalClusterMetadataManager; -import com.xiaojukeji.kafka.manager.service.service.ClusterService; -import com.xiaojukeji.kafka.manager.service.service.ConsumerService; -import com.xiaojukeji.kafka.manager.service.service.RegionService; -import com.xiaojukeji.kafka.manager.service.service.ZookeeperService; +import com.xiaojukeji.kafka.manager.service.service.*; import com.xiaojukeji.kafka.manager.service.utils.ConfigUtils; import org.apache.zookeeper.ZooKeeper; import org.slf4j.Logger; @@ -66,15 +64,24 @@ public class ClusterServiceImpl implements ClusterService { @Autowired private ZookeeperService zookeeperService; + @Autowired + private OperateRecordService operateRecordService; + @Override public ResultStatus addNew(ClusterDO clusterDO, String operator) { if (ValidateUtils.isNull(clusterDO) || ValidateUtils.isNull(operator)) { return ResultStatus.PARAM_ILLEGAL; } if (!isZookeeperLegal(clusterDO.getZookeeper())) { - return ResultStatus.CONNECT_ZOOKEEPER_FAILED; + return ResultStatus.ZOOKEEPER_CONNECT_FAILED; } try { + Map content = new HashMap<>(); + content.put("zk address", clusterDO.getZookeeper()); + content.put("bootstrap servers", clusterDO.getBootstrapServers()); + content.put("security properties", clusterDO.getSecurityProperties()); + content.put("jmx properties", clusterDO.getJmxProperties()); + operateRecordService.insert(operator, ModuleEnum.CLUSTER, clusterDO.getClusterName(), OperateEnum.ADD, content); if (clusterDao.insert(clusterDO) <= 0) { LOGGER.error("add new cluster failed, clusterDO:{}.", 
clusterDO); return ResultStatus.MYSQL_ERROR; @@ -102,8 +109,14 @@ public class ClusterServiceImpl implements ClusterService { if (!originClusterDO.getZookeeper().equals(clusterDO.getZookeeper())) { // 不允许修改zk地址 - return ResultStatus.CHANGE_ZOOKEEPER_FORBIDEN; + return ResultStatus.CHANGE_ZOOKEEPER_FORBIDDEN; } + Map content = new HashMap<>(); + content.put("cluster id", clusterDO.getId().toString()); + content.put("security properties", clusterDO.getSecurityProperties()); + content.put("jmx properties", clusterDO.getJmxProperties()); + operateRecordService.insert(operator, ModuleEnum.CLUSTER, clusterDO.getClusterName(), OperateEnum.EDIT, content); + clusterDO.setStatus(originClusterDO.getStatus()); return updateById(clusterDO); } @@ -192,20 +205,31 @@ public class ClusterServiceImpl implements ClusterService { } private boolean isZookeeperLegal(String zookeeper) { + boolean status = false; + ZooKeeper zk = null; try { zk = new ZooKeeper(zookeeper, 1000, null); - } catch (Throwable t) { - return false; + for (int i = 0; i < 15; ++i) { + if (zk.getState().isConnected()) { + // 只有状态是connected的时候,才表示地址是合法的 + status = true; + break; + } + Thread.sleep(1000); + } + } catch (Exception e) { + LOGGER.error("class=ClusterServiceImpl||method=isZookeeperLegal||zookeeper={}||msg=zk address illegal||errMsg={}", zookeeper, e.getMessage()); } finally { try { if (zk != null) { zk.close(); } - } catch (Throwable t) { + } catch (Exception e) { + LOGGER.error("class=ClusterServiceImpl||method=isZookeeperLegal||zookeeper={}||msg=close zk client failed||errMsg={}", zookeeper, e.getMessage()); } } - return true; + return status; } @Override @@ -254,12 +278,15 @@ public class ClusterServiceImpl implements ClusterService { } @Override - public ResultStatus deleteById(Long clusterId) { + public ResultStatus deleteById(Long clusterId, String operator) { List regionDOList = regionService.getByClusterId(clusterId); if (!ValidateUtils.isEmptyList(regionDOList)) { return 
ResultStatus.OPERATION_FORBIDDEN; } try { + Map content = new HashMap<>(); + content.put("cluster id", clusterId.toString()); + operateRecordService.insert(operator, ModuleEnum.CLUSTER, String.valueOf(clusterId), OperateEnum.DELETE, content); if (clusterDao.deleteById(clusterId) <= 0) { LOGGER.error("delete cluster failed, clusterId:{}.", clusterId); return ResultStatus.MYSQL_ERROR; @@ -273,8 +300,9 @@ public class ClusterServiceImpl implements ClusterService { private ClusterDetailDTO getClusterDetailDTO(ClusterDO clusterDO, Boolean needDetail) { if (ValidateUtils.isNull(clusterDO)) { - return null; + return new ClusterDetailDTO(); } + ClusterDetailDTO dto = new ClusterDetailDTO(); dto.setClusterId(clusterDO.getId()); dto.setClusterName(clusterDO.getClusterName()); @@ -283,6 +311,7 @@ public class ClusterServiceImpl implements ClusterService { dto.setKafkaVersion(physicalClusterMetadataManager.getKafkaVersionFromCache(clusterDO.getId())); dto.setIdc(configUtils.getIdc()); dto.setSecurityProperties(clusterDO.getSecurityProperties()); + dto.setJmxProperties(clusterDO.getJmxProperties()); dto.setStatus(clusterDO.getStatus()); dto.setGmtCreate(clusterDO.getGmtCreate()); dto.setGmtModify(clusterDO.getGmtModify()); @@ -321,4 +350,39 @@ public class ClusterServiceImpl implements ClusterService { } return Result.buildSuc(controllerPreferredCandidateList); } + + @Override + public Result addControllerPreferredCandidates(Long clusterId, List brokerIdList) { + if (ValidateUtils.isNull(clusterId) || ValidateUtils.isEmptyList(brokerIdList)) { + return Result.buildFrom(ResultStatus.PARAM_ILLEGAL); + } + + // 增加的BrokerId需要判断是否存活 + for (Integer brokerId: brokerIdList) { + if (!PhysicalClusterMetadataManager.isBrokerAlive(clusterId, brokerId)) { + return Result.buildFrom(ResultStatus.BROKER_NOT_EXIST); + } + + Result result = zookeeperService.addControllerPreferredCandidate(clusterId, brokerId); + if (result.failed()) { + return result; + } + } + return Result.buildSuc(); + } + + 
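The new `addControllerPreferredCandidates` above validates each broker's liveness before writing its ZK node and aborts the batch at the first failure. A self-contained sketch of that fail-fast batch pattern, assuming a `Set<Integer>` stands in for `PhysicalClusterMetadataManager`'s live-broker cache and a plain boolean stands in for `Result` (names here are illustrative, not the project's API):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Fail-fast batch validation, mirroring addControllerPreferredCandidates:
// every broker must be alive before its candidate node is written; the first
// dead broker aborts the remainder of the batch. (Assumption: a Set<Integer>
// stands in for the live-broker cache, another for the ZK candidate nodes.)
public class CandidateBatchSketch {
    private final Set<Integer> aliveBrokers = new HashSet<>();
    private final Set<Integer> candidates = new HashSet<>();

    CandidateBatchSketch(List<Integer> alive) {
        aliveBrokers.addAll(alive);
    }

    // Returns false as soon as one broker in the list is not alive; brokers
    // processed before the failure stay applied, as in the real loop.
    boolean addCandidates(List<Integer> brokerIdList) {
        for (Integer brokerId : brokerIdList) {
            if (!aliveBrokers.contains(brokerId)) {
                return false; // BROKER_NOT_EXIST in the real code
            }
            candidates.add(brokerId); // idempotent, like the ZK node create
        }
        return true;
    }

    boolean isCandidate(int brokerId) {
        return candidates.contains(brokerId);
    }
}
```

Note the partial-application behavior: like the patch, a failure midway leaves earlier brokers already registered, which is safe here only because the per-broker write is idempotent.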
@Override + public Result deleteControllerPreferredCandidates(Long clusterId, List brokerIdList) { + if (ValidateUtils.isNull(clusterId) || ValidateUtils.isEmptyList(brokerIdList)) { + return Result.buildFrom(ResultStatus.PARAM_ILLEGAL); + } + + for (Integer brokerId: brokerIdList) { + Result result = zookeeperService.deleteControllerPreferredCandidate(clusterId, brokerId); + if (result.failed()) { + return result; + } + } + return Result.buildSuc(); + } } diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/ConsumerServiceImpl.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/ConsumerServiceImpl.java index e228d36c..0d60d828 100644 --- a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/ConsumerServiceImpl.java +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/ConsumerServiceImpl.java @@ -129,7 +129,7 @@ public class ConsumerServiceImpl implements ConsumerService { } summary.setState(consumerGroupSummary.state()); - java.util.Iterator> it = JavaConversions.asJavaIterator(consumerGroupSummary.consumers().iterator()); + Iterator> it = JavaConversions.asJavaIterator(consumerGroupSummary.consumers().iterator()); while (it.hasNext()) { List consumerSummaryList = JavaConversions.asJavaList(it.next()); for (AdminClient.ConsumerSummary consumerSummary: consumerSummaryList) { diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/LogicalClusterServiceImpl.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/LogicalClusterServiceImpl.java index 5b2fb703..9a6f40be 100644 --- a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/LogicalClusterServiceImpl.java +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/LogicalClusterServiceImpl.java @@ -113,6 +113,7 @@ public class 
LogicalClusterServiceImpl implements LogicalClusterService { LogicalCluster logicalCluster = new LogicalCluster(); logicalCluster.setLogicalClusterId(logicalClusterDO.getId()); logicalCluster.setLogicalClusterName(logicalClusterDO.getName()); + logicalCluster.setLogicalClusterIdentification(logicalClusterDO.getIdentification()); logicalCluster.setClusterVersion( physicalClusterMetadataManager.getKafkaVersion( logicalClusterDO.getClusterId(), diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/OperateRecordServiceImpl.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/OperateRecordServiceImpl.java index 47702eaa..290bbae5 100644 --- a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/OperateRecordServiceImpl.java +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/OperateRecordServiceImpl.java @@ -1,7 +1,10 @@ package com.xiaojukeji.kafka.manager.service.service.impl; +import com.xiaojukeji.kafka.manager.common.bizenum.ModuleEnum; +import com.xiaojukeji.kafka.manager.common.bizenum.OperateEnum; import com.xiaojukeji.kafka.manager.common.entity.dto.rd.OperateRecordDTO; import com.xiaojukeji.kafka.manager.common.entity.pojo.OperateRecordDO; +import com.xiaojukeji.kafka.manager.common.utils.JsonUtils; import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils; import com.xiaojukeji.kafka.manager.dao.OperateRecordDao; import com.xiaojukeji.kafka.manager.service.service.OperateRecordService; @@ -10,6 +13,7 @@ import org.springframework.stereotype.Service; import java.util.Date; import java.util.List; +import java.util.Map; /** * @author zhongyuankai @@ -25,6 +29,17 @@ public class OperateRecordServiceImpl implements OperateRecordService { return operateRecordDao.insert(operateRecordDO); } + @Override + public int insert(String operator, ModuleEnum module, String resourceName, OperateEnum operate, Map content) { + 
OperateRecordDO operateRecordDO = new OperateRecordDO(); + operateRecordDO.setOperator(operator); + operateRecordDO.setModuleId(module.getCode()); + operateRecordDO.setResource(resourceName); + operateRecordDO.setOperateId(operate.getCode()); + operateRecordDO.setContent(JsonUtils.toJSONString(content)); + return insert(operateRecordDO); + } + @Override public List queryByCondt(OperateRecordDTO dto) { return operateRecordDao.queryByCondt( diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/TopicManagerServiceImpl.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/TopicManagerServiceImpl.java index 0b42d068..6ee9a499 100644 --- a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/TopicManagerServiceImpl.java +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/TopicManagerServiceImpl.java @@ -1,7 +1,10 @@ package com.xiaojukeji.kafka.manager.service.service.impl; import com.xiaojukeji.kafka.manager.common.bizenum.KafkaClientEnum; +import com.xiaojukeji.kafka.manager.common.bizenum.ModuleEnum; +import com.xiaojukeji.kafka.manager.common.bizenum.OperateEnum; import com.xiaojukeji.kafka.manager.common.bizenum.TopicAuthorityEnum; +import com.xiaojukeji.kafka.manager.common.constant.KafkaConstant; import com.xiaojukeji.kafka.manager.common.constant.KafkaMetricsCollections; import com.xiaojukeji.kafka.manager.common.constant.TopicCreationConstant; import com.xiaojukeji.kafka.manager.common.entity.Result; @@ -80,6 +83,9 @@ public class TopicManagerServiceImpl implements TopicManagerService { @Autowired private RegionService regionService; + @Autowired + private OperateRecordService operateRecordService; + @Override public List listAll() { try { @@ -293,6 +299,10 @@ public class TopicManagerServiceImpl implements TopicManagerService { Map topicMap) { List dtoList = new ArrayList<>(); for (String topicName: 
PhysicalClusterMetadataManager.getTopicNameList(clusterDO.getId())) { + if (topicName.equals(KafkaConstant.COORDINATOR_TOPIC_NAME) || topicName.equals(KafkaConstant.TRANSACTION_TOPIC_NAME)) { + continue; + } + LogicalClusterDO logicalClusterDO = logicalClusterMetadataManager.getTopicLogicalCluster( clusterDO.getId(), topicName @@ -336,6 +346,12 @@ public class TopicManagerServiceImpl implements TopicManagerService { if (ValidateUtils.isNull(topicDO)) { return ResultStatus.TOPIC_NOT_EXIST; } + + Map content = new HashMap<>(2); + content.put("clusterId", clusterId); + content.put("topicName", topicName); + recordOperation(content, topicName, operator); + topicDO.setDescription(description); if (topicDao.updateByName(topicDO) > 0) { return ResultStatus.SUCCESS; @@ -359,6 +375,12 @@ public class TopicManagerServiceImpl implements TopicManagerService { return ResultStatus.APP_NOT_EXIST; } + Map content = new HashMap<>(4); + content.put("clusterId", clusterId); + content.put("topicName", topicName); + content.put("appId", appId); + recordOperation(content, topicName, operator); + TopicDO topicDO = topicDao.getByTopicName(clusterId, topicName); if (ValidateUtils.isNull(topicDO)) { // 不存在, 则需要插入 @@ -389,6 +411,16 @@ public class TopicManagerServiceImpl implements TopicManagerService { return ResultStatus.MYSQL_ERROR; } + private void recordOperation(Map content, String topicName, String operator) { + OperateRecordDO operateRecordDO = new OperateRecordDO(); + operateRecordDO.setModuleId(ModuleEnum.TOPIC.getCode()); + operateRecordDO.setOperateId(OperateEnum.EDIT.getCode()); + operateRecordDO.setResource(topicName); + operateRecordDO.setContent(JsonUtils.toJSONString(content)); + operateRecordDO.setOperator(operator); + operateRecordService.insert(operateRecordDO); + } + @Override public int deleteByTopicName(Long clusterId, String topicName) { try { diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/ZookeeperServiceImpl.java 
b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/ZookeeperServiceImpl.java index 796410da..c4c89513 100644 --- a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/ZookeeperServiceImpl.java +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/service/impl/ZookeeperServiceImpl.java @@ -53,7 +53,7 @@ public class ZookeeperServiceImpl implements ZookeeperService { } ZkConfigImpl zkConfig = PhysicalClusterMetadataManager.getZKConfig(clusterId); if (ValidateUtils.isNull(zkConfig)) { - return Result.buildFrom(ResultStatus.CONNECT_ZOOKEEPER_FAILED); + return Result.buildFrom(ResultStatus.ZOOKEEPER_CONNECT_FAILED); } try { @@ -68,6 +68,60 @@ public class ZookeeperServiceImpl implements ZookeeperService { } catch (Exception e) { LOGGER.error("class=ZookeeperServiceImpl||method=getControllerPreferredCandidates||clusterId={}||errMsg={}", clusterId, e.getMessage()); } - return Result.buildFrom(ResultStatus.READ_ZOOKEEPER_FAILED); + return Result.buildFrom(ResultStatus.ZOOKEEPER_READ_FAILED); + } + + @Override + public Result addControllerPreferredCandidate(Long clusterId, Integer brokerId) { + if (ValidateUtils.isNull(clusterId)) { + return Result.buildFrom(ResultStatus.PARAM_ILLEGAL); + } + ZkConfigImpl zkConfig = PhysicalClusterMetadataManager.getZKConfig(clusterId); + if (ValidateUtils.isNull(zkConfig)) { + return Result.buildFrom(ResultStatus.ZOOKEEPER_CONNECT_FAILED); + } + + try { + if (zkConfig.checkPathExists(ZkPathUtil.getControllerCandidatePath(brokerId))) { + // 节点已经存在, 则直接忽略 + return Result.buildSuc(); + } + + if (!zkConfig.checkPathExists(ZkPathUtil.D_CONFIG_EXTENSION_ROOT_NODE)) { + zkConfig.setOrCreatePersistentNodeStat(ZkPathUtil.D_CONFIG_EXTENSION_ROOT_NODE, ""); + } + + if (!zkConfig.checkPathExists(ZkPathUtil.D_CONTROLLER_CANDIDATES)) { + zkConfig.setOrCreatePersistentNodeStat(ZkPathUtil.D_CONTROLLER_CANDIDATES, ""); + } + + 
zkConfig.setOrCreatePersistentNodeStat(ZkPathUtil.getControllerCandidatePath(brokerId), ""); + return Result.buildSuc(); + } catch (Exception e) { + LOGGER.error("class=ZookeeperServiceImpl||method=addControllerPreferredCandidate||clusterId={}||brokerId={}||errMsg={}||", clusterId, brokerId, e.getMessage()); + } + return Result.buildFrom(ResultStatus.ZOOKEEPER_WRITE_FAILED); + } + + @Override + public Result deleteControllerPreferredCandidate(Long clusterId, Integer brokerId) { + if (ValidateUtils.isNull(clusterId)) { + return Result.buildFrom(ResultStatus.PARAM_ILLEGAL); + } + ZkConfigImpl zkConfig = PhysicalClusterMetadataManager.getZKConfig(clusterId); + if (ValidateUtils.isNull(zkConfig)) { + return Result.buildFrom(ResultStatus.ZOOKEEPER_CONNECT_FAILED); + } + + try { + if (!zkConfig.checkPathExists(ZkPathUtil.getControllerCandidatePath(brokerId))) { + return Result.buildSuc(); + } + zkConfig.delete(ZkPathUtil.getControllerCandidatePath(brokerId)); + return Result.buildSuc(); + } catch (Exception e) { + LOGGER.error("class=ZookeeperServiceImpl||method=deleteControllerPreferredCandidate||clusterId={}||brokerId={}||errMsg={}||", clusterId, brokerId, e.getMessage()); + } + return Result.buildFrom(ResultStatus.ZOOKEEPER_DELETE_FAILED); } } \ No newline at end of file diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/utils/ConfigUtils.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/utils/ConfigUtils.java index 53e9a2ba..2c2cc253 100644 --- a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/utils/ConfigUtils.java +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/utils/ConfigUtils.java @@ -13,9 +13,6 @@ public class ConfigUtils { @Value(value = "${custom.idc}") private String idc; - @Value("${custom.jmx.max-conn}") - private Integer jmxMaxConn; - @Value(value = "${spring.profiles.active}") private String kafkaManagerEnv; @@ -30,14 +27,6 @@ public class 
ConfigUtils { this.idc = idc; } - public Integer getJmxMaxConn() { - return jmxMaxConn; - } - - public void setJmxMaxConn(Integer jmxMaxConn) { - this.jmxMaxConn = jmxMaxConn; - } - public String getKafkaManagerEnv() { return kafkaManagerEnv; } diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/utils/TopicCommands.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/utils/TopicCommands.java index 58e5d98b..6995eb97 100644 --- a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/utils/TopicCommands.java +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/utils/TopicCommands.java @@ -44,7 +44,7 @@ public class TopicCommands { ); // 生成分配策略 - scala.collection.Map> replicaAssignment = + scala.collection.Map> replicaAssignment = AdminUtils.assignReplicasToBrokers( convert2BrokerMetadataSeq(brokerIdList), partitionNum, @@ -177,7 +177,7 @@ public class TopicCommands { ) ); - Map> existingAssignJavaMap = + Map> existingAssignJavaMap = JavaConversions.asJavaMap(existingAssignScalaMap); // 新增分区的分配策略和旧的分配策略合并 Map> targetMap = new HashMap<>(); diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/zookeeper/BrokerStateListener.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/zookeeper/BrokerStateListener.java index 16a185e0..a94ec9de 100644 --- a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/zookeeper/BrokerStateListener.java +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/zookeeper/BrokerStateListener.java @@ -1,5 +1,6 @@ package com.xiaojukeji.kafka.manager.service.zookeeper; +import com.xiaojukeji.kafka.manager.common.utils.jmx.JmxConfig; import com.xiaojukeji.kafka.manager.common.zookeeper.znode.brokers.BrokerMetadata; import com.xiaojukeji.kafka.manager.common.zookeeper.StateChangeListener; import com.xiaojukeji.kafka.manager.common.zookeeper.ZkConfigImpl; @@ -22,12 
+23,12 @@ public class BrokerStateListener implements StateChangeListener { private ZkConfigImpl zkConfig; - private Integer jmxMaxConn; + private JmxConfig jmxConfig; - public BrokerStateListener(Long clusterId, ZkConfigImpl zkConfig, Integer jmxMaxConn) { + public BrokerStateListener(Long clusterId, ZkConfigImpl zkConfig, JmxConfig jmxConfig) { this.clusterId = clusterId; this.zkConfig = zkConfig; - this.jmxMaxConn = jmxMaxConn; + this.jmxConfig = jmxConfig; } @Override @@ -84,7 +85,7 @@ public class BrokerStateListener implements StateChangeListener { } brokerMetadata.setClusterId(clusterId); brokerMetadata.setBrokerId(brokerId); - PhysicalClusterMetadataManager.putBrokerMetadata(clusterId, brokerId, brokerMetadata, jmxMaxConn); + PhysicalClusterMetadataManager.putBrokerMetadata(clusterId, brokerId, brokerMetadata, jmxConfig); } catch (Exception e) { LOGGER.error("add broker failed, clusterId:{} brokerMetadata:{}.", clusterId, brokerMetadata, e); } diff --git a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/zookeeper/TopicStateListener.java b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/zookeeper/TopicStateListener.java index f808b976..4314a101 100644 --- a/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/zookeeper/TopicStateListener.java +++ b/kafka-manager-core/src/main/java/com/xiaojukeji/kafka/manager/service/zookeeper/TopicStateListener.java @@ -5,8 +5,6 @@ import com.xiaojukeji.kafka.manager.common.zookeeper.znode.brokers.TopicMetadata import com.xiaojukeji.kafka.manager.common.zookeeper.StateChangeListener; import com.xiaojukeji.kafka.manager.common.zookeeper.ZkConfigImpl; import com.xiaojukeji.kafka.manager.common.zookeeper.ZkPathUtil; -import com.xiaojukeji.kafka.manager.dao.TopicDao; -import com.xiaojukeji.kafka.manager.dao.gateway.AuthorityDao; import com.xiaojukeji.kafka.manager.service.cache.PhysicalClusterMetadataManager; import 
com.xiaojukeji.kafka.manager.service.cache.ThreadPool;
 import org.apache.zookeeper.data.Stat;
@@ -24,28 +22,17 @@ import java.util.concurrent.*;
  * @date 20/5/14
  */
 public class TopicStateListener implements StateChangeListener {
-    private final static Logger LOGGER = LoggerFactory.getLogger(TopicStateListener.class);
+    private static final Logger LOGGER = LoggerFactory.getLogger(TopicStateListener.class);

     private Long clusterId;

     private ZkConfigImpl zkConfig;

-    private TopicDao topicDao;
-
-    private AuthorityDao authorityDao;
-
     public TopicStateListener(Long clusterId, ZkConfigImpl zkConfig) {
         this.clusterId = clusterId;
         this.zkConfig = zkConfig;
     }

-    public TopicStateListener(Long clusterId, ZkConfigImpl zkConfig, TopicDao topicDao, AuthorityDao authorityDao) {
-        this.clusterId = clusterId;
-        this.zkConfig = zkConfig;
-        this.topicDao = topicDao;
-        this.authorityDao = authorityDao;
-    }
-
     @Override
     public void init() {
         try {
@@ -53,7 +40,7 @@ public class TopicStateListener implements StateChangeListener {
             FutureTask[] taskList = new FutureTask[topicNameList.size()];
             for (int i = 0; i < topicNameList.size(); i++) {
                 String topicName = topicNameList.get(i);
-                taskList[i] = new FutureTask(new Callable() {
+                taskList[i] = new FutureTask(new Callable() {
                     @Override
                     public Object call() throws Exception {
                         processTopicAdded(topicName);
@@ -65,7 +52,6 @@
         } catch (Exception e) {
             LOGGER.error("init topics metadata failed, clusterId:{}.", clusterId, e);
         }
-        return;
     }

     @Override
@@ -92,8 +78,6 @@
     private void processTopicDelete(String topicName) {
         LOGGER.warn("delete topic, clusterId:{} topicName:{}.", clusterId, topicName);
         PhysicalClusterMetadataManager.removeTopicMetadata(clusterId, topicName);
-        topicDao.removeTopicInCache(clusterId, topicName);
-        authorityDao.removeAuthorityInCache(clusterId, topicName);
     }

     private void processTopicAdded(String topicName) {
@@ -122,4 +106,4 @@ public class TopicStateListener implements StateChangeListener {
             LOGGER.error("add topic failed, clusterId:{} topicMetadata:{}.", clusterId, topicMetadata, e);
         }
     }
-}
\ No newline at end of file
+}
diff --git a/kafka-manager-dao/pom.xml b/kafka-manager-dao/pom.xml
index 41122856..8b30c431 100644
--- a/kafka-manager-dao/pom.xml
+++ b/kafka-manager-dao/pom.xml
@@ -4,13 +4,13 @@
          xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
     4.0.0
     kafka-manager-dao
-    2.1.0-SNAPSHOT
+    ${kafka-manager.revision}
     jar

     kafka-manager
     com.xiaojukeji.kafka
-    2.1.0-SNAPSHOT
+    ${kafka-manager.revision}

diff --git a/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/TopicDao.java b/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/TopicDao.java
index 3d3f5410..64e089a6 100644
--- a/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/TopicDao.java
+++ b/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/TopicDao.java
@@ -22,6 +22,4 @@ public interface TopicDao {
     List listAll();

     TopicDO getTopic(Long clusterId, String topicName, String appId);
-
-    TopicDO removeTopicInCache(Long clusterId, String topicName);
 }
\ No newline at end of file
diff --git a/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/gateway/AppDao.java b/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/gateway/AppDao.java
index 218c8656..7802005a 100644
--- a/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/gateway/AppDao.java
+++ b/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/gateway/AppDao.java
@@ -16,8 +16,6 @@ public interface AppDao {
      */
     int insert(AppDO appDO);

-    int insertIgnoreGatewayDB(AppDO appDO);
-
     /**
      * 删除appId
      * @param appName App名称
@@ -60,6 +58,4 @@ public interface AppDao {
      * @return int
      */
     int updateById(AppDO appDO);
-
-    List listNewAll();
 }
\ No newline at end of file
diff --git a/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/gateway/AuthorityDao.java b/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/gateway/AuthorityDao.java
index a7a8affe..655218e9 100644
--- a/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/gateway/AuthorityDao.java
+++ b/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/gateway/AuthorityDao.java
@@ -15,8 +15,6 @@ public interface AuthorityDao {
      */
     int insert(AuthorityDO authorityDO);

-    int replaceIgnoreGatewayDB(AuthorityDO authorityDO);
-
     /**
      * 获取权限
      * @param clusterId 集群id
@@ -38,7 +36,5 @@ public interface AuthorityDao {

     Map>> getAllAuthority();

-    void removeAuthorityInCache(Long clusterId, String topicName);
-
     int deleteAuthorityByTopic(Long clusterId, String topicName);
 }
diff --git a/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/gateway/impl/AppDaoImpl.java b/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/gateway/impl/AppDaoImpl.java
index aa08c1b4..62475b9b 100644
--- a/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/gateway/impl/AppDaoImpl.java
+++ b/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/gateway/impl/AppDaoImpl.java
@@ -2,6 +2,7 @@ package com.xiaojukeji.kafka.manager.dao.gateway.impl;

 import com.xiaojukeji.kafka.manager.common.entity.pojo.gateway.AppDO;
 import com.xiaojukeji.kafka.manager.dao.gateway.AppDao;
+import com.xiaojukeji.kafka.manager.task.Constant;
 import org.mybatis.spring.SqlSessionTemplate;
 import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.stereotype.Repository;
@@ -21,7 +22,7 @@ public class AppDaoImpl implements AppDao {
     /**
      * APP最近的一次更新时间, 更新之后的缓存
      */
-    private static Long APP_CACHE_LATEST_UPDATE_TIME = 0L;
+    private static volatile long APP_CACHE_LATEST_UPDATE_TIME = Constant.START_TIMESTAMP;

     private static final Map APP_MAP = new ConcurrentHashMap<>();

     @Override
@@ -29,11 +30,6 @@ public class AppDaoImpl implements AppDao {
         return sqlSession.insert("AppDao.insert", appDO);
     }

-    @Override
-    public int insertIgnoreGatewayDB(AppDO appDO) {
-        return sqlSession.insert("AppDao.insert", appDO);
-    }
-
     @Override
     public int deleteByName(String appName) {
         return sqlSession.delete("AppDao.deleteByName", appName);
@@ -66,7 +62,12 @@ public class AppDaoImpl implements AppDao {
     }

     private void updateTopicCache() {
-        Long timestamp = System.currentTimeMillis();
+        long timestamp = System.currentTimeMillis();
+
+        if (timestamp + 1000 <= APP_CACHE_LATEST_UPDATE_TIME) {
+            // 近一秒内的请求不走db
+            return;
+        }

         Date afterTime = new Date(APP_CACHE_LATEST_UPDATE_TIME);
         List doList = sqlSession.selectList("AppDao.listAfterTime", afterTime);
@@ -76,19 +77,22 @@ public class AppDaoImpl implements AppDao {
     /**
      * 更新APP缓存
      */
-    synchronized private void updateTopicCache(List doList, Long timestamp) {
+    private synchronized void updateTopicCache(List doList, long timestamp) {
         if (doList == null || doList.isEmpty() || APP_CACHE_LATEST_UPDATE_TIME >= timestamp) {
             // 本次无数据更新, 或者本次更新过时 时, 忽略本次更新
             return;
         }

+        if (APP_CACHE_LATEST_UPDATE_TIME == Constant.START_TIMESTAMP) {
+            APP_MAP.clear();
+        }
+
         for (AppDO elem: doList) {
             APP_MAP.put(elem.getAppId(), elem);
         }
         APP_CACHE_LATEST_UPDATE_TIME = timestamp;
     }

-    @Override
-    public List listNewAll() {
-        return sqlSession.selectList("AppDao.listNewAll");
+    public static void resetCache() {
+        APP_CACHE_LATEST_UPDATE_TIME = Constant.START_TIMESTAMP;
     }
 }
\ No newline at end of file
diff --git a/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/gateway/impl/AuthorityDaoImpl.java b/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/gateway/impl/AuthorityDaoImpl.java
index 74a7cab0..1b5df873 100644
--- a/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/gateway/impl/AuthorityDaoImpl.java
+++ b/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/gateway/impl/AuthorityDaoImpl.java
@@ -1,8 +1,8 @@
 package com.xiaojukeji.kafka.manager.dao.gateway.impl;

 import com.xiaojukeji.kafka.manager.common.entity.pojo.gateway.AuthorityDO;
-import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
 import com.xiaojukeji.kafka.manager.dao.gateway.AuthorityDao;
+import com.xiaojukeji.kafka.manager.task.Constant;
 import org.mybatis.spring.SqlSessionTemplate;
 import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.stereotype.Repository;
@@ -23,7 +23,8 @@ public class AuthorityDaoImpl implements AuthorityDao {
      * Authority最近的一次更新时间, 更新之后的缓存
      * >>
      */
-    private static Long AUTHORITY_CACHE_LATEST_UPDATE_TIME = 0L;
+    private static volatile long AUTHORITY_CACHE_LATEST_UPDATE_TIME = Constant.START_TIMESTAMP;
+
     private static final Map>> AUTHORITY_MAP = new ConcurrentHashMap<>();

     @Override
@@ -31,11 +32,6 @@ public class AuthorityDaoImpl implements AuthorityDao {
         return sqlSession.insert("AuthorityDao.replace", authorityDO);
     }

-    @Override
-    public int replaceIgnoreGatewayDB(AuthorityDO authorityDO) {
-        return sqlSession.insert("AuthorityDao.replace", authorityDO);
-    }
-
     @Override
     public List getAuthority(Long clusterId, String topicName, String appId) {
         Map params = new HashMap<>(3);
@@ -62,8 +58,8 @@ public class AuthorityDaoImpl implements AuthorityDao {
         }

         List authorityDOList = new ArrayList<>();
-        for (Long clusterId: doMap.keySet()) {
-            authorityDOList.addAll(doMap.get(clusterId).values());
+        for (Map.Entry> entry: doMap.entrySet()) {
+            authorityDOList.addAll(entry.getValue().values());
         }
         return authorityDOList;
     }
@@ -87,23 +83,6 @@ public class AuthorityDaoImpl implements AuthorityDao {
         return AUTHORITY_MAP;
     }

-    @Override
-    public void removeAuthorityInCache(Long clusterId, String topicName) {
-        AUTHORITY_MAP.forEach((appId, map) -> {
-            map.forEach((id, subMap) -> {
-                if (id.equals(clusterId)) {
-                    subMap.remove(topicName);
-                    if (subMap.isEmpty()) {
-                        map.remove(id);
-                    }
-                }
-            });
-            if (map.isEmpty()) {
-                AUTHORITY_MAP.remove(appId);
-            }
-        });
-    }
-
     @Override
     public int deleteAuthorityByTopic(Long clusterId, String topicName) {
         Map params = new HashMap<>(2);
@@ -116,6 +95,11 @@ public class AuthorityDaoImpl implements AuthorityDao {
     private void updateAuthorityCache() {
         Long timestamp = System.currentTimeMillis();

+        if (timestamp + 1000 <= AUTHORITY_CACHE_LATEST_UPDATE_TIME) {
+            // 近一秒内的请求不走db
+            return;
+        }
+
         Date afterTime = new Date(AUTHORITY_CACHE_LATEST_UPDATE_TIME);
         List doList = sqlSession.selectList("AuthorityDao.listAfterTime", afterTime);
         updateAuthorityCache(doList, timestamp);
@@ -124,11 +108,15 @@ public class AuthorityDaoImpl implements AuthorityDao {
     /**
      * 更新Topic缓存
      */
-    synchronized private void updateAuthorityCache(List doList, Long timestamp) {
+    private synchronized void updateAuthorityCache(List doList, Long timestamp) {
         if (doList == null || doList.isEmpty() || AUTHORITY_CACHE_LATEST_UPDATE_TIME >= timestamp) {
             // 本次无数据更新, 或者本次更新过时 时, 忽略本次更新
             return;
         }

+        if (AUTHORITY_CACHE_LATEST_UPDATE_TIME == Constant.START_TIMESTAMP) {
+            AUTHORITY_MAP.clear();
+        }
+
         for (AuthorityDO elem: doList) {
             Map> doMap = AUTHORITY_MAP.getOrDefault(elem.getAppId(), new ConcurrentHashMap<>());
@@ -139,4 +127,8 @@ public class AuthorityDaoImpl implements AuthorityDao {
         }
         AUTHORITY_CACHE_LATEST_UPDATE_TIME = timestamp;
     }
+
+    public static void resetCache() {
+        AUTHORITY_CACHE_LATEST_UPDATE_TIME = Constant.START_TIMESTAMP;
+    }
 }
diff --git a/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/impl/TopicDaoImpl.java b/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/impl/TopicDaoImpl.java
index ba4468df..3c1ba335 100644
--- a/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/impl/TopicDaoImpl.java
+++ b/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/dao/impl/TopicDaoImpl.java
@@ -2,6 +2,7 @@ package com.xiaojukeji.kafka.manager.dao.impl;

 import com.xiaojukeji.kafka.manager.common.entity.pojo.TopicDO;
 import com.xiaojukeji.kafka.manager.dao.TopicDao;
+import com.xiaojukeji.kafka.manager.task.Constant;
 import org.mybatis.spring.SqlSessionTemplate;
 import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.stereotype.Repository;
@@ -18,7 +19,8 @@ public class TopicDaoImpl implements TopicDao {
     /**
      * Topic最近的一次更新时间, 更新之后的缓存
      */
-    private static Long TOPIC_CACHE_LATEST_UPDATE_TIME = 0L;
+    private static volatile long TOPIC_CACHE_LATEST_UPDATE_TIME = Constant.START_TIMESTAMP;
+
     private static final Map> TOPIC_MAP = new ConcurrentHashMap<>();

     @Autowired
@@ -62,7 +64,7 @@ public class TopicDaoImpl implements TopicDao {
     @Override
     public List getByClusterId(Long clusterId) {
         updateTopicCache();
-        return new ArrayList<>(TOPIC_MAP.getOrDefault(clusterId, new ConcurrentHashMap<>(0)).values());
+        return new ArrayList<>(TOPIC_MAP.getOrDefault(clusterId, Collections.emptyMap()).values());
     }

     @Override
@@ -75,28 +77,28 @@ public class TopicDaoImpl implements TopicDao {
         updateTopicCache();
         List doList = new ArrayList<>();
         for (Long clusterId: TOPIC_MAP.keySet()) {
-            doList.addAll(TOPIC_MAP.getOrDefault(clusterId, new ConcurrentHashMap<>(0)).values());
+            doList.addAll(TOPIC_MAP.getOrDefault(clusterId, Collections.emptyMap()).values());
         }
         return doList;
     }

     @Override
     public TopicDO getTopic(Long clusterId, String topicName, String appId) {
-        Map params = new HashMap<>(2);
+        Map params = new HashMap<>(3);
         params.put("clusterId", clusterId);
         params.put("topicName", topicName);
         params.put("appId", appId);
         return sqlSession.selectOne("TopicDao.getTopic", params);
     }

-    @Override
-    public TopicDO removeTopicInCache(Long clusterId, String topicName) {
-        return TOPIC_MAP.getOrDefault(clusterId, new HashMap<>(0)).remove(topicName);
-    }
-
     private void updateTopicCache() {
         Long timestamp = System.currentTimeMillis();

+        if (timestamp + 1000 <= TOPIC_CACHE_LATEST_UPDATE_TIME) {
+            // 近一秒内的请求不走db
+            return;
+        }
+
         Date afterTime = new Date(TOPIC_CACHE_LATEST_UPDATE_TIME);
         List doList = sqlSession.selectList("TopicDao.listAfterTime", afterTime);
         updateTopicCache(doList, timestamp);
@@ -105,11 +107,15 @@ public class TopicDaoImpl implements TopicDao {
     /**
      * 更新Topic缓存
      */
-    synchronized private void updateTopicCache(List doList, Long timestamp) {
+    private synchronized void updateTopicCache(List doList, Long timestamp) {
         if (doList == null || doList.isEmpty() || TOPIC_CACHE_LATEST_UPDATE_TIME >= timestamp) {
             // 本次无数据更新, 或者本次更新过时 时, 忽略本次更新
             return;
         }

+        if (TOPIC_CACHE_LATEST_UPDATE_TIME == Constant.START_TIMESTAMP) {
+            TOPIC_MAP.clear();
+        }
+
         for (TopicDO elem: doList) {
             Map doMap = TOPIC_MAP.getOrDefault(elem.getClusterId(), new ConcurrentHashMap<>());
             doMap.put(elem.getTopicName(), elem);
@@ -117,4 +123,8 @@ public class TopicDaoImpl implements TopicDao {
         }
         TOPIC_CACHE_LATEST_UPDATE_TIME = timestamp;
     }
+
+    public static void resetCache() {
+        TOPIC_CACHE_LATEST_UPDATE_TIME = Constant.START_TIMESTAMP;
+    }
 }
\ No newline at end of file
diff --git a/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/task/Constant.java b/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/task/Constant.java
new file mode 100644
index 00000000..3a50d7c1
--- /dev/null
+++ b/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/task/Constant.java
@@ -0,0 +1,5 @@
+package com.xiaojukeji.kafka.manager.task;
+
+public class Constant {
+    public static final long START_TIMESTAMP = 0;
+}
diff --git a/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/task/DaoBackgroundTask.java b/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/task/DaoBackgroundTask.java
new file mode 100644
index 00000000..a750aff8
--- /dev/null
+++ b/kafka-manager-dao/src/main/java/com/xiaojukeji/kafka/manager/task/DaoBackgroundTask.java
@@ -0,0 +1,41 @@
+package com.xiaojukeji.kafka.manager.task;
+
+import com.xiaojukeji.kafka.manager.common.utils.factory.DefaultThreadFactory;
+import com.xiaojukeji.kafka.manager.dao.gateway.impl.AppDaoImpl;
+import com.xiaojukeji.kafka.manager.dao.gateway.impl.AuthorityDaoImpl;
+import com.xiaojukeji.kafka.manager.dao.impl.TopicDaoImpl;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.springframework.stereotype.Service;
+
+import javax.annotation.PostConstruct;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * 后台任务线程
+ * @author zengqiao
+ * @date 21/02/02
+ */
+@Service
+public class DaoBackgroundTask {
+    private static final Logger LOGGER = LoggerFactory.getLogger(DaoBackgroundTask.class);
+
+    private static final ScheduledExecutorService SYNC_CACHE_THREAD_POOL = Executors.newSingleThreadScheduledExecutor(new DefaultThreadFactory("syncCacheTask"));
+
+    @PostConstruct
+    public void init() {
+        SYNC_CACHE_THREAD_POOL.scheduleAtFixedRate(() -> {
+            LOGGER.info("class=DaoBackgroundTask||method=init||msg=sync cache start");
+
+            TopicDaoImpl.resetCache();
+
+            AppDaoImpl.resetCache();
+
+            AuthorityDaoImpl.resetCache();
+
+            LOGGER.info("class=DaoBackgroundTask||method=init||msg=sync cache finished");
+        }, 1, 10, TimeUnit.MINUTES);
+    }
+}
diff --git a/kafka-manager-dao/src/main/resources/mapper/ClusterDao.xml b/kafka-manager-dao/src/main/resources/mapper/ClusterDao.xml
index a03eb6e0..53b90293 100644
--- a/kafka-manager-dao/src/main/resources/mapper/ClusterDao.xml
+++ b/kafka-manager-dao/src/main/resources/mapper/ClusterDao.xml
@@ -12,6 +12,7 @@
+
         INSERT INTO cluster (
-            cluster_name, zookeeper, bootstrap_servers, security_properties
+            cluster_name, zookeeper, bootstrap_servers, security_properties, jmx_properties
         ) VALUES (
-            #{clusterName}, #{zookeeper}, #{bootstrapServers}, #{securityProperties}
+            #{clusterName}, #{zookeeper}, #{bootstrapServers}, #{securityProperties}, #{jmxProperties}
         )
@@ -30,6 +31,7 @@
             cluster_name=#{clusterName},
             bootstrap_servers=#{bootstrapServers},
             security_properties=#{securityProperties},
+            jmx_properties=#{jmxProperties},
             status=#{status}
         WHERE id = #{id}
diff --git a/kafka-manager-dao/src/main/resources/mapper/GatewayConfigDao.xml b/kafka-manager-dao/src/main/resources/mapper/GatewayConfigDao.xml
index 8aa91925..ac003836 100644
--- a/kafka-manager-dao/src/main/resources/mapper/GatewayConfigDao.xml
+++ b/kafka-manager-dao/src/main/resources/mapper/GatewayConfigDao.xml
@@ -8,6 +8,7 @@
+
@@ -27,9 +28,9 @@
@@ -45,7 +46,8 @@
             `type`=#{type},
             `name`=#{name},
             `value`=#{value},
-            `version`=#{version}
+            `version`=#{version},
+            `description`=#{description}
         WHERE id=#{id}
         ]]>
diff --git a/kafka-manager-dao/src/main/resources/mapper/LogicalClusterDao.xml b/kafka-manager-dao/src/main/resources/mapper/LogicalClusterDao.xml
index b4478067..eef0b79f 100644
--- a/kafka-manager-dao/src/main/resources/mapper/LogicalClusterDao.xml
+++ b/kafka-manager-dao/src/main/resources/mapper/LogicalClusterDao.xml
@@ -1,24 +1,25 @@
-
-
-
-
+
+
+
+
-
-
-
-
-
-
+
+
+
+
+
+
+
         INSERT INTO logical_cluster
-        (name, app_id, cluster_id, region_list, mode, description)
+        (name, identification, app_id, cluster_id, region_list, mode, description)
         VALUES
-        (#{name}, #{appId}, #{clusterId}, #{regionList}, #{mode}, #{description})
+        (#{name}, #{identification}, #{appId}, #{clusterId}, #{regionList}, #{mode}, #{description})
@@ -27,7 +28,8 @@
         UPDATE logical_cluster
         SET
-
+            name=#{name},
+            cluster_id=#{clusterId},
             region_list=#{regionList},
             description=#{description},
diff --git a/kafka-manager-extends/kafka-manager-account/pom.xml b/kafka-manager-extends/kafka-manager-account/pom.xml
index 3d129969..a3cb47fb 100644
--- a/kafka-manager-extends/kafka-manager-account/pom.xml
+++ b/kafka-manager-extends/kafka-manager-account/pom.xml
@@ -4,13 +4,13 @@
          xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
     4.0.0
     kafka-manager-account
-    2.1.0-SNAPSHOT
+    ${kafka-manager.revision}
     jar

     kafka-manager
     com.xiaojukeji.kafka
-    2.1.0-SNAPSHOT
+    ${kafka-manager.revision}
     ../../pom.xml
diff --git a/kafka-manager-extends/kafka-manager-account/src/main/java/com/xiaojukeji/kafka/manager/account/AccountService.java b/kafka-manager-extends/kafka-manager-account/src/main/java/com/xiaojukeji/kafka/manager/account/AccountService.java
index bb845932..7f4974ea 100644
--- a/kafka-manager-extends/kafka-manager-account/src/main/java/com/xiaojukeji/kafka/manager/account/AccountService.java
+++ b/kafka-manager-extends/kafka-manager-account/src/main/java/com/xiaojukeji/kafka/manager/account/AccountService.java
@@ -2,6 +2,7 @@ package com.xiaojukeji.kafka.manager.account;

 import com.xiaojukeji.kafka.manager.account.common.EnterpriseStaff;
 import com.xiaojukeji.kafka.manager.common.bizenum.AccountRoleEnum;
+import com.xiaojukeji.kafka.manager.common.entity.Result;
 import com.xiaojukeji.kafka.manager.common.entity.ResultStatus;
 import com.xiaojukeji.kafka.manager.common.entity.ao.account.Account;
 import com.xiaojukeji.kafka.manager.common.entity.pojo.AccountDO;
@@ -25,14 +26,14 @@ public interface AccountService {
      * @param username 用户名
      * @return
      */
-    AccountDO getAccountDO(String username);
+    Result getAccountDO(String username);

     /**
      * 删除用户
      * @param username 用户名
      * @return
      */
-    ResultStatus deleteByName(String username);
+    ResultStatus deleteByName(String username, String operator);

     /**
      * 更新账号
diff --git a/kafka-manager-extends/kafka-manager-account/src/main/java/com/xiaojukeji/kafka/manager/account/LoginService.java b/kafka-manager-extends/kafka-manager-account/src/main/java/com/xiaojukeji/kafka/manager/account/LoginService.java
index 0a061737..98e8bab1 100644
--- a/kafka-manager-extends/kafka-manager-account/src/main/java/com/xiaojukeji/kafka/manager/account/LoginService.java
+++ b/kafka-manager-extends/kafka-manager-account/src/main/java/com/xiaojukeji/kafka/manager/account/LoginService.java
@@ -1,5 +1,6 @@
 package com.xiaojukeji.kafka.manager.account;

+import com.xiaojukeji.kafka.manager.common.entity.Result;
 import com.xiaojukeji.kafka.manager.common.entity.ao.account.Account;
 import com.xiaojukeji.kafka.manager.common.entity.dto.normal.LoginDTO;
@@ -11,7 +12,7 @@ import javax.servlet.http.HttpServletResponse;
  * @date 20/8/20
  */
 public interface LoginService {
-    Account login(HttpServletRequest request, HttpServletResponse response, LoginDTO dto);
+    Result login(HttpServletRequest request, HttpServletResponse response, LoginDTO dto);

     void logout(HttpServletRequest request, HttpServletResponse response, Boolean needJump2LoginPage);
diff --git a/kafka-manager-extends/kafka-manager-account/src/main/java/com/xiaojukeji/kafka/manager/account/component/AbstractSingleSignOn.java b/kafka-manager-extends/kafka-manager-account/src/main/java/com/xiaojukeji/kafka/manager/account/component/AbstractSingleSignOn.java
index d5528f0b..d6257364 100644
--- a/kafka-manager-extends/kafka-manager-account/src/main/java/com/xiaojukeji/kafka/manager/account/component/AbstractSingleSignOn.java
+++ b/kafka-manager-extends/kafka-manager-account/src/main/java/com/xiaojukeji/kafka/manager/account/component/AbstractSingleSignOn.java
@@ -1,5 +1,6 @@
 package com.xiaojukeji.kafka.manager.account.component;

+import com.xiaojukeji.kafka.manager.common.entity.Result;
 import com.xiaojukeji.kafka.manager.common.entity.dto.normal.LoginDTO;

 import javax.servlet.http.HttpServletRequest;
@@ -18,7 +19,7 @@ public abstract class AbstractSingleSignOn {

     protected static final String HEADER_REDIRECT_KEY = "location";

-    public abstract String loginAndGetLdap(HttpServletRequest request, HttpServletResponse response, LoginDTO dto);
+    public abstract Result loginAndGetLdap(HttpServletRequest request, HttpServletResponse response, LoginDTO dto);

     public abstract void logout(HttpServletRequest request, HttpServletResponse response, Boolean needJump2LoginPage);
diff --git a/kafka-manager-extends/kafka-manager-account/src/main/java/com/xiaojukeji/kafka/manager/account/component/account/BaseEnterpriseStaffService.java b/kafka-manager-extends/kafka-manager-account/src/main/java/com/xiaojukeji/kafka/manager/account/component/account/BaseEnterpriseStaffService.java
index b931eecd..2eef7774 100644
--- a/kafka-manager-extends/kafka-manager-account/src/main/java/com/xiaojukeji/kafka/manager/account/component/account/BaseEnterpriseStaffService.java
+++ b/kafka-manager-extends/kafka-manager-account/src/main/java/com/xiaojukeji/kafka/manager/account/component/account/BaseEnterpriseStaffService.java
@@ -41,7 +41,14 @@ public class BaseEnterpriseStaffService extends AbstractEnterpriseStaffService {
     @Override
     public List searchEnterpriseStaffByKeyWord(String keyWord) {
         try {
-            List doList = accountDao.searchByNamePrefix(keyWord);
+            List doList = null;
+            if (ValidateUtils.isBlank(keyWord)) {
+                // 当用户没有任何输入的时候, 返回全部的用户
+                doList = accountDao.list();
+            } else {
+                doList = accountDao.searchByNamePrefix(keyWord);
+            }
+
             if (ValidateUtils.isEmptyList(doList)) {
                 return new ArrayList<>();
             }
diff --git a/kafka-manager-extends/kafka-manager-account/src/main/java/com/xiaojukeji/kafka/manager/account/component/sso/BaseSessionSignOn.java b/kafka-manager-extends/kafka-manager-account/src/main/java/com/xiaojukeji/kafka/manager/account/component/sso/BaseSessionSignOn.java
index 1e2dbb97..c67cca08 100644
--- a/kafka-manager-extends/kafka-manager-account/src/main/java/com/xiaojukeji/kafka/manager/account/component/sso/BaseSessionSignOn.java
+++ b/kafka-manager-extends/kafka-manager-account/src/main/java/com/xiaojukeji/kafka/manager/account/component/sso/BaseSessionSignOn.java
@@ -2,12 +2,17 @@ package com.xiaojukeji.kafka.manager.account.component.sso;

 import com.xiaojukeji.kafka.manager.account.AccountService;
 import com.xiaojukeji.kafka.manager.account.component.AbstractSingleSignOn;
+import com.xiaojukeji.kafka.manager.common.bizenum.AccountRoleEnum;
 import com.xiaojukeji.kafka.manager.common.constant.LoginConstant;
+import com.xiaojukeji.kafka.manager.common.entity.Result;
+import com.xiaojukeji.kafka.manager.common.entity.ResultStatus;
 import com.xiaojukeji.kafka.manager.common.entity.dto.normal.LoginDTO;
 import com.xiaojukeji.kafka.manager.common.entity.pojo.AccountDO;
 import com.xiaojukeji.kafka.manager.common.utils.EncryptUtil;
 import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
+import com.xiaojukeji.kafka.manager.common.utils.ldap.LDAPAuthentication;
 import org.springframework.beans.factory.annotation.Autowired;
+import org.springframework.beans.factory.annotation.Value;
 import org.springframework.stereotype.Service;

 import javax.servlet.http.HttpServletRequest;
@@ -22,19 +27,60 @@ public class BaseSessionSignOn extends AbstractSingleSignOn {
     @Autowired
     private AccountService accountService;

+    @Autowired
+    private LDAPAuthentication ldapAuthentication;
+
+    //是否开启ldap验证
+    @Value(value = "${ldap.enabled}")
+    private boolean ldapEnabled;
+
+    //ldap自动注册的默认角色。请注意:它通常来说都是低权限角色
+    @Value(value = "${ldap.auth-user-registration-role}")
+    private String authUserRegistrationRole;
+
+    //ldap自动注册是否开启
+    @Value(value = "${ldap.auth-user-registration}")
+    private boolean authUserRegistration;
+
     @Override
-    public String loginAndGetLdap(HttpServletRequest request, HttpServletResponse response, LoginDTO dto) {
+    public Result loginAndGetLdap(HttpServletRequest request, HttpServletResponse response, LoginDTO dto) {
         if (ValidateUtils.isBlank(dto.getUsername()) || ValidateUtils.isNull(dto.getPassword())) {
-            return null;
+            return Result.buildFailure("Missing parameters");
         }
-        AccountDO accountDO = accountService.getAccountDO(dto.getUsername());
-        if (ValidateUtils.isNull(accountDO)) {
-            return null;
+
+        Result accountResult = accountService.getAccountDO(dto.getUsername());
+
+        //modifier limin
+        //判断是否激活了LDAP验证。若激活并且数据库无此用户则自动注册
+        if(ldapEnabled){
+            //去LDAP验证账密
+            if(!ldapAuthentication.authenricate(dto.getUsername(),dto.getPassword())){
+                return Result.buildFrom(ResultStatus.LDAP_AUTHENTICATION_FAILED);
+            }
+
+            if((ValidateUtils.isNull(accountResult) || ValidateUtils.isNull(accountResult.getData())) && authUserRegistration){
+                //自动注册
+                AccountDO accountDO = new AccountDO();
+                accountDO.setUsername(dto.getUsername());
+                accountDO.setRole(AccountRoleEnum.getUserRoleEnum(authUserRegistrationRole).getRole());
+                accountDO.setPassword(EncryptUtil.md5(dto.getPassword()));
+                accountService.createAccount(accountDO);
+            }
+
+            return Result.buildSuc(dto.getUsername());
+
         }
-        if (!accountDO.getPassword().equals(EncryptUtil.md5(dto.getPassword()))) {
-            return null;
+
+        if (ValidateUtils.isNull(accountResult) || accountResult.failed()) {
+            return new Result<>(accountResult.getCode(), accountResult.getMessage());
         }
-        return dto.getUsername();
+        if (ValidateUtils.isNull(accountResult.getData())) {
+            return Result.buildFailure("username illegal");
+        }
+        if (!accountResult.getData().getPassword().equals(EncryptUtil.md5(dto.getPassword()))) {
+            return Result.buildFailure("password illegal");
+        }
+        return Result.buildSuc(accountResult.getData().getUsername());
     }

     @Override
@@ -60,4 +106,4 @@ public class BaseSessionSignOn extends AbstractSingleSignOn {
         response.setStatus(AbstractSingleSignOn.REDIRECT_CODE);
         response.addHeader(AbstractSingleSignOn.HEADER_REDIRECT_KEY, "");
     }
-}
\ No newline at end of file
+}
diff --git a/kafka-manager-extends/kafka-manager-account/src/main/java/com/xiaojukeji/kafka/manager/account/impl/AccountServiceImpl.java b/kafka-manager-extends/kafka-manager-account/src/main/java/com/xiaojukeji/kafka/manager/account/impl/AccountServiceImpl.java
index b03cd195..e4d03c23 100644
--- a/kafka-manager-extends/kafka-manager-account/src/main/java/com/xiaojukeji/kafka/manager/account/impl/AccountServiceImpl.java
+++ b/kafka-manager-extends/kafka-manager-account/src/main/java/com/xiaojukeji/kafka/manager/account/impl/AccountServiceImpl.java
@@ -6,7 +6,10 @@ import com.xiaojukeji.kafka.manager.account.AccountService;
 import com.xiaojukeji.kafka.manager.account.common.EnterpriseStaff;
import com.xiaojukeji.kafka.manager.account.component.AbstractEnterpriseStaffService; import com.xiaojukeji.kafka.manager.common.bizenum.AccountRoleEnum; +import com.xiaojukeji.kafka.manager.common.bizenum.ModuleEnum; +import com.xiaojukeji.kafka.manager.common.bizenum.OperateEnum; import com.xiaojukeji.kafka.manager.common.constant.Constant; +import com.xiaojukeji.kafka.manager.common.entity.Result; import com.xiaojukeji.kafka.manager.common.entity.ResultStatus; import com.xiaojukeji.kafka.manager.common.entity.ao.account.Account; import com.xiaojukeji.kafka.manager.common.entity.pojo.AccountDO; @@ -14,6 +17,7 @@ import com.xiaojukeji.kafka.manager.common.utils.EncryptUtil; import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils; import com.xiaojukeji.kafka.manager.dao.AccountDao; import com.xiaojukeji.kafka.manager.service.service.ConfigService; +import com.xiaojukeji.kafka.manager.service.service.OperateRecordService; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.springframework.beans.factory.annotation.Autowired; @@ -47,6 +51,9 @@ public class AccountServiceImpl implements AccountService { @Autowired private AbstractEnterpriseStaffService enterpriseStaffService; + @Autowired + private OperateRecordService operateRecordService; + /** * 用户组织信息 * @@ -81,9 +88,12 @@ public class AccountServiceImpl implements AccountService { } @Override - public ResultStatus deleteByName(String username) { + public ResultStatus deleteByName(String username, String operator) { try { if (accountDao.deleteByName(username) > 0) { + Map content = new HashMap<>(); + content.put("username", username); + operateRecordService.insert(operator, ModuleEnum.AUTHORITY, username, OperateEnum.DELETE, content); return ResultStatus.SUCCESS; } } catch (Exception e) { @@ -101,7 +111,7 @@ public class AccountServiceImpl implements AccountService { return ResultStatus.ACCOUNT_NOT_EXIST; } - if (!ValidateUtils.isNull(accountDO.getPassword())) { + if 
(!ValidateUtils.isBlank(accountDO.getPassword())) { accountDO.setPassword(EncryptUtil.md5(accountDO.getPassword())); } else { accountDO.setPassword(oldAccountDO.getPassword()); @@ -117,8 +127,13 @@ public class AccountServiceImpl implements AccountService { } @Override - public AccountDO getAccountDO(String username) { - return accountDao.getByName(username); + public Result getAccountDO(String username) { + try { + return Result.buildSuc(accountDao.getByName(username)); + } catch (Exception e) { + LOGGER.warn("class=AccountServiceImpl||method=getAccountDO||username={}||errMsg={}||msg=get account fail", username, e.getMessage()); + } + return Result.buildFrom(ResultStatus.MYSQL_ERROR); } @Override diff --git a/kafka-manager-extends/kafka-manager-account/src/main/java/com/xiaojukeji/kafka/manager/account/impl/LoginServiceImpl.java b/kafka-manager-extends/kafka-manager-account/src/main/java/com/xiaojukeji/kafka/manager/account/impl/LoginServiceImpl.java index d6acb2f1..591768fb 100644 --- a/kafka-manager-extends/kafka-manager-account/src/main/java/com/xiaojukeji/kafka/manager/account/impl/LoginServiceImpl.java +++ b/kafka-manager-extends/kafka-manager-account/src/main/java/com/xiaojukeji/kafka/manager/account/impl/LoginServiceImpl.java @@ -6,6 +6,7 @@ import com.xiaojukeji.kafka.manager.account.LoginService; import com.xiaojukeji.kafka.manager.common.bizenum.AccountRoleEnum; import com.xiaojukeji.kafka.manager.common.constant.ApiPrefix; import com.xiaojukeji.kafka.manager.common.constant.LoginConstant; +import com.xiaojukeji.kafka.manager.common.entity.Result; import com.xiaojukeji.kafka.manager.common.entity.ao.account.Account; import com.xiaojukeji.kafka.manager.common.entity.dto.normal.LoginDTO; import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils; @@ -34,15 +35,15 @@ public class LoginServiceImpl implements LoginService { private AbstractSingleSignOn singleSignOn; @Override - public Account login(HttpServletRequest request, HttpServletResponse response, 
LoginDTO loginDTO) { - String username = singleSignOn.loginAndGetLdap(request, response, loginDTO); - if (ValidateUtils.isBlank(username)) { + public Result login(HttpServletRequest request, HttpServletResponse response, LoginDTO loginDTO) { + Result userResult = singleSignOn.loginAndGetLdap(request, response, loginDTO); + if (ValidateUtils.isNull(userResult) || userResult.failed()) { logout(request, response, false); - return null; + return new Result<>(userResult.getCode(), userResult.getMessage()); } - Account account = accountService.getAccountFromCache(username); + Account account = accountService.getAccountFromCache(userResult.getData()); initLoginContext(request, response, account); - return account; + return Result.buildSuc(account); } private void initLoginContext(HttpServletRequest request, HttpServletResponse response, Account account) { @@ -64,6 +65,11 @@ public class LoginServiceImpl implements LoginService { @Override public boolean checkLogin(HttpServletRequest request, HttpServletResponse response) { String uri = request.getRequestURI(); + if (uri.contains("..")) { + LOGGER.error("class=LoginServiceImpl||method=checkLogin||msg=uri illegal||uri={}", uri); + return false; + } + if (!(uri.contains(ApiPrefix.API_V1_NORMAL_PREFIX) || uri.contains(ApiPrefix.API_V1_RD_PREFIX) || uri.contains(ApiPrefix.API_V1_OP_PREFIX))) { diff --git a/kafka-manager-extends/kafka-manager-bpm/pom.xml b/kafka-manager-extends/kafka-manager-bpm/pom.xml index 6a670849..c8ecf459 100644 --- a/kafka-manager-extends/kafka-manager-bpm/pom.xml +++ b/kafka-manager-extends/kafka-manager-bpm/pom.xml @@ -4,13 +4,13 @@ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> 4.0.0 kafka-manager-bpm - 2.1.0-SNAPSHOT + ${kafka-manager.revision} jar kafka-manager com.xiaojukeji.kafka - 2.1.0-SNAPSHOT + ${kafka-manager.revision} ../../pom.xml diff --git 
a/kafka-manager-extends/kafka-manager-bpm/src/main/java/com/xiaojukeji/kafka/manager/bpm/common/entry/apply/gateway/OrderExtensionAddGatewayConfigDTO.java b/kafka-manager-extends/kafka-manager-bpm/src/main/java/com/xiaojukeji/kafka/manager/bpm/common/entry/apply/gateway/OrderExtensionAddGatewayConfigDTO.java index 0045bfe2..6a2c0bb4 100644 --- a/kafka-manager-extends/kafka-manager-bpm/src/main/java/com/xiaojukeji/kafka/manager/bpm/common/entry/apply/gateway/OrderExtensionAddGatewayConfigDTO.java +++ b/kafka-manager-extends/kafka-manager-bpm/src/main/java/com/xiaojukeji/kafka/manager/bpm/common/entry/apply/gateway/OrderExtensionAddGatewayConfigDTO.java @@ -18,6 +18,9 @@ public class OrderExtensionAddGatewayConfigDTO { @ApiModelProperty(value = "值") private String value; + @ApiModelProperty(value = "描述说明") + private String description; + public String getType() { return type; } @@ -42,12 +45,21 @@ public class OrderExtensionAddGatewayConfigDTO { this.value = value; } + public String getDescription() { + return description; + } + + public void setDescription(String description) { + this.description = description; + } + @Override public String toString() { return "OrderExtensionAddGatewayConfigDTO{" + "type='" + type + '\'' + ", name='" + name + '\'' + ", value='" + value + '\'' + + ", description='" + description + '\'' + '}'; } diff --git a/kafka-manager-extends/kafka-manager-bpm/src/main/java/com/xiaojukeji/kafka/manager/bpm/common/entry/apply/gateway/OrderExtensionModifyGatewayConfigDTO.java b/kafka-manager-extends/kafka-manager-bpm/src/main/java/com/xiaojukeji/kafka/manager/bpm/common/entry/apply/gateway/OrderExtensionModifyGatewayConfigDTO.java index f5212f8c..3f749ea7 100644 --- a/kafka-manager-extends/kafka-manager-bpm/src/main/java/com/xiaojukeji/kafka/manager/bpm/common/entry/apply/gateway/OrderExtensionModifyGatewayConfigDTO.java +++ 
b/kafka-manager-extends/kafka-manager-bpm/src/main/java/com/xiaojukeji/kafka/manager/bpm/common/entry/apply/gateway/OrderExtensionModifyGatewayConfigDTO.java @@ -23,6 +23,9 @@ public class OrderExtensionModifyGatewayConfigDTO { @ApiModelProperty(value = "值") private String value; + @ApiModelProperty(value = "描述说明") + private String description; + public Long getId() { return id; } @@ -55,6 +58,14 @@ public class OrderExtensionModifyGatewayConfigDTO { this.value = value; } + public String getDescription() { + return description; + } + + public void setDescription(String description) { + this.description = description; + } + @Override public String toString() { return "OrderExtensionModifyGatewayConfigDTO{" + @@ -62,6 +73,7 @@ public class OrderExtensionModifyGatewayConfigDTO { ", type='" + type + '\'' + ", name='" + name + '\'' + ", value='" + value + '\'' + + ", description='" + description + '\'' + '}'; } diff --git a/kafka-manager-extends/kafka-manager-bpm/src/main/java/com/xiaojukeji/kafka/manager/bpm/order/impl/ApplyAppOrder.java b/kafka-manager-extends/kafka-manager-bpm/src/main/java/com/xiaojukeji/kafka/manager/bpm/order/impl/ApplyAppOrder.java index d902abed..1528ada8 100644 --- a/kafka-manager-extends/kafka-manager-bpm/src/main/java/com/xiaojukeji/kafka/manager/bpm/order/impl/ApplyAppOrder.java +++ b/kafka-manager-extends/kafka-manager-bpm/src/main/java/com/xiaojukeji/kafka/manager/bpm/order/impl/ApplyAppOrder.java @@ -87,6 +87,6 @@ public class ApplyAppOrder extends AbstractAppOrder { appDO.setDescription(orderDO.getDescription()); appDO.generateAppIdAndPassword(orderDO.getId(), configUtils.getIdc()); appDO.setType(0); - return appService.addApp(appDO); + return appService.addApp(appDO, userName); } } diff --git a/kafka-manager-extends/kafka-manager-bpm/src/main/java/com/xiaojukeji/kafka/manager/bpm/order/impl/ApplyAuthorityOrder.java 
b/kafka-manager-extends/kafka-manager-bpm/src/main/java/com/xiaojukeji/kafka/manager/bpm/order/impl/ApplyAuthorityOrder.java index e2f57b28..60119352 100644 --- a/kafka-manager-extends/kafka-manager-bpm/src/main/java/com/xiaojukeji/kafka/manager/bpm/order/impl/ApplyAuthorityOrder.java +++ b/kafka-manager-extends/kafka-manager-bpm/src/main/java/com/xiaojukeji/kafka/manager/bpm/order/impl/ApplyAuthorityOrder.java @@ -95,7 +95,7 @@ public class ApplyAuthorityOrder extends AbstractAuthorityOrder { } TopicDO topicDO = topicManagerService.getByTopicName(physicalClusterId, orderExtensionDTO.getTopicName()); if (ValidateUtils.isNull(topicDO)) { - return ResultStatus.TOPIC_NOT_EXIST; + return ResultStatus.TOPIC_BIZ_DATA_NOT_EXIST; } AppDO appDO = appService.getByAppId(topicDO.getAppId()); if (!appDO.getPrincipals().contains(userName)) { diff --git a/kafka-manager-extends/kafka-manager-kcm/pom.xml b/kafka-manager-extends/kafka-manager-kcm/pom.xml index 4e087dd1..7ffd00e3 100644 --- a/kafka-manager-extends/kafka-manager-kcm/pom.xml +++ b/kafka-manager-extends/kafka-manager-kcm/pom.xml @@ -4,13 +4,13 @@ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <artifactId>kafka-manager-kcm</artifactId> - <version>2.1.0-SNAPSHOT</version> + <version>${kafka-manager.revision}</version> <packaging>jar</packaging> <parent> <artifactId>kafka-manager</artifactId> <groupId>com.xiaojukeji.kafka</groupId> - <version>2.1.0-SNAPSHOT</version> + <version>${kafka-manager.revision}</version> <relativePath>../../pom.xml</relativePath> </parent> @@ -68,5 +68,10 @@ <artifactId>spring-test</artifactId> <version>${spring-version}</version> </dependency> + + <dependency> + <groupId>io.minio</groupId> + <artifactId>minio</artifactId> + </dependency> \ No newline at end of file diff --git a/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/KafkaFileService.java b/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/KafkaFileService.java index b2de3a32..babfeb15 100644 --- a/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/KafkaFileService.java +++ b/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/KafkaFileService.java @@ -4,6 +4,7 @@ import
com.xiaojukeji.kafka.manager.common.entity.Result; import com.xiaojukeji.kafka.manager.common.entity.ResultStatus; import com.xiaojukeji.kafka.manager.common.entity.dto.normal.KafkaFileDTO; import com.xiaojukeji.kafka.manager.common.entity.pojo.KafkaFileDO; +import org.springframework.web.multipart.MultipartFile; import java.util.List; @@ -24,7 +25,7 @@ public interface KafkaFileService { KafkaFileDO getFileByFileName(String fileName); - Result downloadKafkaConfigFile(Long fileId); + Result downloadKafkaFile(Long fileId); String getDownloadBaseUrl(); } diff --git a/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/common/Constant.java b/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/common/Constant.java new file mode 100644 index 00000000..f73c3fd6 --- /dev/null +++ b/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/common/Constant.java @@ -0,0 +1,18 @@ +package com.xiaojukeji.kafka.manager.kcm.common; + +public class Constant { + /** + * + */ + public static final String TASK_TITLE_PREFIX = "Logi-Kafka"; + + /** + * 并发度,顺序执行 + */ + public static final Integer AGENT_TASK_BATCH = 1; + + /** + * 失败的容忍度为0 + */ + public static final Integer AGENT_TASK_TOLERANCE = 0; +} diff --git a/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/common/bizenum/ClusterTaskActionEnum.java b/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/common/bizenum/ClusterTaskActionEnum.java index 556acab8..a51e2c68 100644 --- a/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/common/bizenum/ClusterTaskActionEnum.java +++ b/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/common/bizenum/ClusterTaskActionEnum.java @@ -6,34 +6,35 @@ package com.xiaojukeji.kafka.manager.kcm.common.bizenum; * @date 20/4/26 */ public enum 
ClusterTaskActionEnum { - START(0, "start"), - PAUSE(1, "pause"), - IGNORE(2, "ignore"), - CANCEL(3, "cancel"), - ROLLBACK(4, "rollback"), + UNKNOWN("unknown"), + + START("start"), + PAUSE("pause"), + + IGNORE("ignore"), + CANCEL("cancel"), + + REDO("redo"), + KILL("kill"), + + ROLLBACK("rollback"), + ; - private Integer code; - private String message; + private String action; - ClusterTaskActionEnum(Integer code, String message) { - this.code = code; - this.message = message; + ClusterTaskActionEnum(String action) { + this.action = action; } - public Integer getCode() { - return code; - } - - public String getMessage() { - return message; + public String getAction() { + return action; } @Override public String toString() { - return "TaskActionEnum{" + - "code=" + code + - ", message='" + message + '\'' + + return "ClusterTaskActionEnum{" + + "action='" + action + '\'' + '}'; } } diff --git a/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/common/entry/ao/ClusterTaskLog.java b/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/common/entry/ao/ClusterTaskLog.java new file mode 100644 index 00000000..ff89fa99 --- /dev/null +++ b/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/common/entry/ao/ClusterTaskLog.java @@ -0,0 +1,24 @@ +package com.xiaojukeji.kafka.manager.kcm.common.entry.ao; + +public class ClusterTaskLog { + private String stdout; + + public ClusterTaskLog(String stdout) { + this.stdout = stdout; + } + + public String getStdout() { + return stdout; + } + + public void setStdout(String stdout) { + this.stdout = stdout; + } + + @Override + public String toString() { + return "ClusterTaskLog{" + + "stdout='" + stdout + '\'' + + '}'; + } +} diff --git a/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/common/entry/ao/CreationTaskData.java
b/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/common/entry/ao/CreationTaskData.java index bc025d5c..8c2cd1ec 100644 --- a/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/common/entry/ao/CreationTaskData.java +++ b/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/common/entry/ao/CreationTaskData.java @@ -1,5 +1,7 @@ package com.xiaojukeji.kafka.manager.kcm.common.entry.ao; +import com.xiaojukeji.kafka.manager.common.entity.Result; + import java.util.List; /** @@ -119,7 +121,7 @@ public class CreationTaskData { @Override public String toString() { - return "CreationTaskDTO{" + + return "CreationTaskData{" + "uuid='" + uuid + '\'' + ", clusterId=" + clusterId + ", hostList=" + hostList + diff --git a/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/component/agent/AbstractAgent.java b/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/component/agent/AbstractAgent.java index 88872868..70ce5902 100644 --- a/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/component/agent/AbstractAgent.java +++ b/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/component/agent/AbstractAgent.java @@ -1,9 +1,18 @@ package com.xiaojukeji.kafka.manager.kcm.component.agent; +import com.xiaojukeji.kafka.manager.common.entity.Result; +import com.xiaojukeji.kafka.manager.kcm.common.bizenum.ClusterTaskActionEnum; import com.xiaojukeji.kafka.manager.kcm.common.bizenum.ClusterTaskStateEnum; import com.xiaojukeji.kafka.manager.kcm.common.bizenum.ClusterTaskSubStateEnum; +import com.xiaojukeji.kafka.manager.kcm.common.entry.ao.ClusterTaskLog; import com.xiaojukeji.kafka.manager.kcm.common.entry.ao.CreationTaskData; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import java.io.BufferedReader; +import java.io.IOException; 
+import java.io.InputStream; +import java.io.InputStreamReader; import java.util.Map; @@ -13,33 +22,79 @@ import java.util.Map; * @date 20/4/26 */ public abstract class AbstractAgent { + private static final Logger LOGGER = LoggerFactory.getLogger(AbstractAgent.class); + /** * 创建任务 + * @param creationTaskData 创建任务参数 + * @return 任务ID */ - public abstract Long createTask(CreationTaskData dto); + public abstract Result<Long> createTask(CreationTaskData creationTaskData); /** - * 任务动作 + * 执行任务 + * @param taskId 任务ID + * @param actionEnum 执行动作 + * @return true:触发成功, false:触发失败 */ - public abstract Boolean actionTask(Long taskId, String action); + public abstract boolean actionTask(Long taskId, ClusterTaskActionEnum actionEnum); /** - * 任务动作 + * 执行任务 + * @param taskId 任务ID + * @param actionEnum 执行动作 + * @param hostname 具体主机 + * @return true:触发成功, false:触发失败 */ - public abstract Boolean actionHostTask(Long taskId, String action, String hostname); + public abstract boolean actionHostTask(Long taskId, ClusterTaskActionEnum actionEnum, String hostname); /** - * 获取任务状态 + * 获取任务运行的状态[阻塞, 执行中, 完成等] + * @param taskId 任务ID + * @return 任务状态 */ - public abstract ClusterTaskStateEnum getTaskState(Long agentTaskId); + public abstract Result<ClusterTaskStateEnum> getTaskExecuteState(Long taskId); /** * 获取任务结果 + * @param taskId 任务ID + * @return 任务结果 */ - public abstract Map<String, ClusterTaskSubStateEnum> getTaskResult(Long taskId); + public abstract Result<Map<String, ClusterTaskSubStateEnum>> getTaskResult(Long taskId); /** - * 获取任务日志 + * 获取任务执行日志 + * @param taskId 任务ID + * @param hostname 具体主机 + * @return 机器运行日志 */ - public abstract String getTaskLog(Long agentTaskId, String hostname); + public abstract Result<ClusterTaskLog> getTaskLog(Long taskId, String hostname); + + protected static String readScriptInJarFile(String fileName) { + InputStream inputStream = AbstractAgent.class.getClassLoader().getResourceAsStream(fileName); + if (inputStream == null) { + LOGGER.error("class=AbstractAgent||method=readScriptInJarFile||fileName={}||msg=read script failed", fileName); + return ""; + } + + try {
+ BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(inputStream)); + String line = null; + + StringBuilder sb = new StringBuilder(); + while ((line = bufferedReader.readLine()) != null) { + sb.append(line).append("\n"); + } + return sb.toString(); + } catch (Exception e) { + LOGGER.error("class=AbstractAgent||method=readScriptInJarFile||fileName={}||errMsg={}||msg=read script failed", fileName, e.getMessage()); + } finally { + try { + inputStream.close(); + } catch (IOException e) { + LOGGER.error("class=AbstractAgent||method=readScriptInJarFile||fileName={}||errMsg={}||msg=close reading script failed", fileName, e.getMessage()); + } + } + return ""; + } } \ No newline at end of file diff --git a/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/component/agent/n9e/N9e.java b/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/component/agent/n9e/N9e.java index f1f4b586..6e3fa677 100644 --- a/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/component/agent/n9e/N9e.java +++ b/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/component/agent/n9e/N9e.java @@ -1,8 +1,11 @@ package com.xiaojukeji.kafka.manager.kcm.component.agent.n9e; -import com.alibaba.fastjson.JSON; import com.xiaojukeji.kafka.manager.common.bizenum.KafkaFileEnum; +import com.xiaojukeji.kafka.manager.common.entity.Result; +import com.xiaojukeji.kafka.manager.kcm.common.Constant; +import com.xiaojukeji.kafka.manager.kcm.common.bizenum.ClusterTaskActionEnum; import com.xiaojukeji.kafka.manager.kcm.common.bizenum.ClusterTaskTypeEnum; +import com.xiaojukeji.kafka.manager.kcm.common.entry.ao.ClusterTaskLog; import com.xiaojukeji.kafka.manager.kcm.common.entry.ao.CreationTaskData; import com.xiaojukeji.kafka.manager.common.utils.HttpUtils; import com.xiaojukeji.kafka.manager.common.utils.JsonUtils; @@ -11,20 +14,17 @@ import 
com.xiaojukeji.kafka.manager.common.utils.ValidateUtils; import com.xiaojukeji.kafka.manager.kcm.common.bizenum.ClusterTaskStateEnum; import com.xiaojukeji.kafka.manager.kcm.common.bizenum.ClusterTaskSubStateEnum; import com.xiaojukeji.kafka.manager.kcm.component.agent.AbstractAgent; +import com.xiaojukeji.kafka.manager.kcm.component.agent.n9e.entry.N9eCreationTask; import com.xiaojukeji.kafka.manager.kcm.component.agent.n9e.entry.N9eResult; -import com.xiaojukeji.kafka.manager.kcm.component.agent.n9e.entry.N9eTaskResultDTO; -import com.xiaojukeji.kafka.manager.kcm.component.agent.n9e.entry.N9eTaskStatusEnum; -import com.xiaojukeji.kafka.manager.kcm.component.agent.n9e.entry.N9eTaskStdoutDTO; +import com.xiaojukeji.kafka.manager.kcm.component.agent.n9e.entry.N9eTaskResult; +import com.xiaojukeji.kafka.manager.kcm.component.agent.n9e.entry.N9eTaskStdoutLog; +import com.xiaojukeji.kafka.manager.kcm.component.agent.n9e.entry.bizenum.N9eTaskStatusEnum; import org.springframework.beans.factory.annotation.Value; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.springframework.stereotype.Service; import javax.annotation.PostConstruct; -import java.io.BufferedReader; -import java.io.IOException; -import java.io.InputStream; -import java.io.InputStreamReader; import java.util.HashMap; import java.util.List; import java.util.Map; @@ -54,16 +54,6 @@ public class N9e extends AbstractAgent { private String script; - /** - * 并发度,顺序执行 - */ - private static final Integer BATCH = 1; - - /** - * 失败的容忍度为0 - */ - private static final Integer TOLERANCE = 0; - private static final String CREATE_TASK_URI = "/api/job-ce/tasks"; private static final String ACTION_TASK_URI = "/api/job-ce/task/{taskId}/action"; @@ -82,143 +72,134 @@ public class N9e extends AbstractAgent { } @Override - public Long createTask(CreationTaskData creationTaskData) { - Map param = buildCreateTaskParam(creationTaskData); + public Result createTask(CreationTaskData creationTaskData) { + String 
content = JsonUtils.toJSONString(buildCreateTaskParam(creationTaskData)); String response = null; try { - response = HttpUtils.postForString( - baseUrl + CREATE_TASK_URI, - JsonUtils.toJSONString(param), - buildHeader() - ); - N9eResult zr = JSON.parseObject(response, N9eResult.class); - if (!ValidateUtils.isBlank(zr.getErr())) { - LOGGER.warn("class=N9e||method=createTask||param={}||errMsg={}||msg=call create task fail", JsonUtils.toJSONString(param),zr.getErr()); - return null; + response = HttpUtils.postForString(baseUrl + CREATE_TASK_URI, content, buildHeader()); + N9eResult nr = JsonUtils.stringToObj(response, N9eResult.class); + if (!ValidateUtils.isBlank(nr.getErr())) { + LOGGER.error("class=N9e||method=createTask||param={}||response={}||msg=call create task failed", content, response); + return Result.buildFailure(nr.getErr()); } - return Long.valueOf(zr.getDat().toString()); + return Result.buildSuc(Long.valueOf(nr.getDat().toString())); } catch (Exception e) { - LOGGER.error("create task failed, req:{}.", creationTaskData, e); + LOGGER.error("class=N9e||method=createTask||param={}||response={}||errMsg={}||msg=call create task failed", content, response, e.getMessage()); } - return null; + return Result.buildFailure("create n9e task failed"); } @Override - public Boolean actionTask(Long taskId, String action) { + public boolean actionTask(Long taskId, ClusterTaskActionEnum actionEnum) { Map param = new HashMap<>(1); - param.put("action", action); + param.put("action", actionEnum.getAction()); String response = null; try { - response = HttpUtils.putForString( - baseUrl + ACTION_TASK_URI.replace("{taskId}", taskId.toString()), - JSON.toJSONString(param), - buildHeader() - ); - N9eResult zr = JSON.parseObject(response, N9eResult.class); - if (ValidateUtils.isBlank(zr.getErr())) { + response = HttpUtils.putForString(baseUrl + ACTION_TASK_URI.replace("{taskId}", String.valueOf(taskId)), JsonUtils.toJSONString(param), buildHeader()); + N9eResult nr = 
JsonUtils.stringToObj(response, N9eResult.class); + if (ValidateUtils.isBlank(nr.getErr())) { return true; } - LOGGER.warn("class=N9e||method=actionTask||param={}||errMsg={}||msg=call action task fail", JSON.toJSONString(param),zr.getErr()); + + LOGGER.error("class=N9e||method=actionTask||param={}||response={}||msg=call action task fail", JsonUtils.toJSONString(param), response); return false; } catch (Exception e) { - LOGGER.error("action task failed, taskId:{}, action:{}.", taskId, action, e); + LOGGER.error("class=N9e||method=actionTask||param={}||response={}||errMsg={}||msg=call action task fail", JsonUtils.toJSONString(param), response, e.getMessage()); } return false; } @Override - public Boolean actionHostTask(Long taskId, String action, String hostname) { - Map param = new HashMap<>(2); - param.put("action", action); - param.put("hostname", hostname); + public boolean actionHostTask(Long taskId, ClusterTaskActionEnum actionEnum, String hostname) { + Map params = new HashMap<>(2); + params.put("action", actionEnum.getAction()); + params.put("hostname", hostname); String response = null; try { - response = HttpUtils.putForString( - baseUrl + ACTION_HOST_TASK_URI.replace("{taskId}", taskId.toString()), - JSON.toJSONString(param), - buildHeader() - ); - N9eResult zr = JSON.parseObject(response, N9eResult.class); - if (ValidateUtils.isBlank(zr.getErr())) { + response = HttpUtils.putForString(baseUrl + ACTION_HOST_TASK_URI.replace("{taskId}", String.valueOf(taskId)), JsonUtils.toJSONString(params), buildHeader()); + N9eResult nr = JsonUtils.stringToObj(response, N9eResult.class); + if (ValidateUtils.isBlank(nr.getErr())) { return true; } - LOGGER.warn("class=N9e||method=actionHostTask||param={}||errMsg={}||msg=call action host task fail", JSON.toJSONString(param),zr.getErr()); + + LOGGER.error("class=N9e||method=actionHostTask||params={}||response={}||msg=call action host task fail", JsonUtils.toJSONString(params), response); return false; } catch (Exception e) { 
- LOGGER.error("action task failed, taskId:{} action:{} hostname:{}.", taskId, action, hostname, e); + LOGGER.error("class=N9e||method=actionHostTask||params={}||response={}||errMsg={}||msg=call action host task fail", JsonUtils.toJSONString(params), response, e.getMessage()); } return false; } @Override - public ClusterTaskStateEnum getTaskState(Long agentTaskId) { + public Result getTaskExecuteState(Long taskId) { String response = null; try { // 获取任务的state - response = HttpUtils.get( - baseUrl + TASK_STATE_URI.replace("{taskId}", agentTaskId.toString()), null - ); - N9eResult n9eResult = JSON.parseObject(response, N9eResult.class); - if (!ValidateUtils.isBlank(n9eResult.getErr())) { - LOGGER.error("get response result failed, agentTaskId:{} response:{}.", agentTaskId, response); - return null; + response = HttpUtils.get(baseUrl + TASK_STATE_URI.replace("{taskId}", String.valueOf(taskId)), null); + N9eResult nr = JsonUtils.stringToObj(response, N9eResult.class); + if (!ValidateUtils.isBlank(nr.getErr())) { + return Result.buildFailure(nr.getErr()); } - String state = JSON.parseObject(JSON.toJSONString(n9eResult.getDat()), String.class); + + String state = JsonUtils.stringToObj(JsonUtils.toJSONString(nr.getDat()), String.class); + N9eTaskStatusEnum n9eTaskStatusEnum = N9eTaskStatusEnum.getByMessage(state); if (ValidateUtils.isNull(n9eTaskStatusEnum)) { - LOGGER.error("get task status failed, agentTaskId:{} state:{}.", agentTaskId, state); - return null; + LOGGER.error("class=N9e||method=getTaskExecuteState||taskId={}||response={}||msg=get task state failed", taskId, response); + return Result.buildFailure("unknown state, state:" + state); } - return n9eTaskStatusEnum.getStatus(); + return Result.buildSuc(n9eTaskStatusEnum.getStatus()); } catch (Exception e) { - LOGGER.error("get task status failed, agentTaskId:{} response:{}.", agentTaskId, response, e); + LOGGER.error("class=N9e||method=getTaskExecuteState||taskId={}||response={}||errMsg={}||msg=get task state 
failed", taskId, response, e.getMessage()); } - return null; + return Result.buildFailure("get task state failed"); } @Override - public Map getTaskResult(Long agentTaskId) { + public Result> getTaskResult(Long taskId) { String response = null; try { // 获取子任务的state - response = HttpUtils.get(baseUrl + TASK_SUB_STATE_URI.replace("{taskId}", agentTaskId.toString()), null); - N9eResult n9eResult = JSON.parseObject(response, N9eResult.class); + response = HttpUtils.get(baseUrl + TASK_SUB_STATE_URI.replace("{taskId}", String.valueOf(taskId)), null); + N9eResult nr = JsonUtils.stringToObj(response, N9eResult.class); + if (!ValidateUtils.isBlank(nr.getErr())) { + LOGGER.error("class=N9e||method=getTaskResult||taskId={}||response={}||msg=get task result failed", taskId, response); + return Result.buildFailure(nr.getErr()); + } - N9eTaskResultDTO n9eTaskResultDTO = - JSON.parseObject(JSON.toJSONString(n9eResult.getDat()), N9eTaskResultDTO.class); - return n9eTaskResultDTO.convert2HostnameStatusMap(); + return Result.buildSuc(JsonUtils.stringToObj(JsonUtils.toJSONString(nr.getDat()), N9eTaskResult.class).convert2HostnameStatusMap()); } catch (Exception e) { - LOGGER.error("get task result failed, agentTaskId:{} response:{}.", agentTaskId, response, e); + LOGGER.error("class=N9e||method=getTaskResult||taskId={}||response={}||errMsg={}||msg=get task result failed", taskId, response, e.getMessage()); } - return null; + return Result.buildFailure("get task result failed"); } @Override - public String getTaskLog(Long agentTaskId, String hostname) { + public Result getTaskLog(Long taskId, String hostname) { + Map params = new HashMap<>(1); + params.put("hostname", hostname); + String response = null; try { - Map params = new HashMap<>(1); - params.put("hostname", hostname); + response = HttpUtils.get(baseUrl + TASK_STD_LOG_URI.replace("{taskId}", String.valueOf(taskId)), params); + N9eResult nr = JsonUtils.stringToObj(response, N9eResult.class); + if 
(!ValidateUtils.isBlank(nr.getErr())) { + LOGGER.error("class=N9e||method=getTaskLog||taskId={}||response={}||msg=get task log failed", taskId, response); + return Result.buildFailure(nr.getErr()); + } - response = HttpUtils.get(baseUrl + TASK_STD_LOG_URI.replace("{taskId}", agentTaskId.toString()), params); - N9eResult n9eResult = JSON.parseObject(response, N9eResult.class); - if (!ValidateUtils.isBlank(n9eResult.getErr())) { - LOGGER.error("get task log failed, agentTaskId:{} response:{}.", agentTaskId, response); - return null; - } - List dtoList = - JSON.parseArray(JSON.toJSONString(n9eResult.getDat()), N9eTaskStdoutDTO.class); + List dtoList = JsonUtils.stringToArrObj(JsonUtils.toJSONString(nr.getDat()), N9eTaskStdoutLog.class); if (ValidateUtils.isEmptyList(dtoList)) { - return ""; + return Result.buildSuc(new ClusterTaskLog("")); } - return dtoList.get(0).getStdout(); + return Result.buildSuc(new ClusterTaskLog(dtoList.get(0).getStdout())); } catch (Exception e) { - LOGGER.error("get task log failed, agentTaskId:{}.", agentTaskId, e); + LOGGER.error("class=N9e||method=getTaskLog||taskId={}||response={}||errMsg={}||msg=get task log failed", taskId, response, e.getMessage()); } - return null; + return Result.buildFailure("get task log failed"); } private Map buildHeader() { @@ -228,7 +209,7 @@ public class N9e extends AbstractAgent { return headers; } - private Map buildCreateTaskParam(CreationTaskData creationTaskData) { + private N9eCreationTask buildCreateTaskParam(CreationTaskData creationTaskData) { StringBuilder sb = new StringBuilder(); sb.append(creationTaskData.getUuid()).append(",,"); sb.append(creationTaskData.getClusterId()).append(",,"); @@ -240,46 +221,17 @@ public class N9e extends AbstractAgent { sb.append(creationTaskData.getServerPropertiesMd5()).append(",,"); sb.append(creationTaskData.getServerPropertiesUrl()); - Map params = new HashMap<>(10); - params.put("title", String.format("集群ID=%d-升级部署", creationTaskData.getClusterId())); - 
params.put("batch", BATCH); - params.put("tolerance", TOLERANCE); - params.put("timeout", timeout); - params.put("pause", ListUtils.strList2String(creationTaskData.getPauseList())); - params.put("script", this.script); - params.put("args", sb.toString()); - params.put("account", account); - params.put("action", "pause"); - params.put("hosts", creationTaskData.getHostList()); - return params; - } - - private static String readScriptInJarFile(String fileName) { - InputStream inputStream = N9e.class.getClassLoader().getResourceAsStream(fileName); - if (inputStream == null) { - LOGGER.error("read kcm script failed, filename:{}", fileName); - return ""; - } - - try { - BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(inputStream)); - String line = null; - StringBuilder stringBuilder = new StringBuilder(""); - - while ((line = bufferedReader.readLine()) != null) { - stringBuilder.append(line); - stringBuilder.append("\n"); - } - return stringBuilder.toString(); - } catch (IOException e) { - LOGGER.error("read kcm script failed, filename:{}", fileName, e); - return ""; - } finally { - try { - inputStream.close(); - } catch (IOException e) { - LOGGER.error("close reading kcm script failed, filename:{}", fileName, e); - } - } + N9eCreationTask n9eCreationTask = new N9eCreationTask(); + n9eCreationTask.setTitle(Constant.TASK_TITLE_PREFIX + "-集群ID:" + creationTaskData.getClusterId()); + n9eCreationTask.setBatch(Constant.AGENT_TASK_BATCH); + n9eCreationTask.setTolerance(Constant.AGENT_TASK_TOLERANCE); + n9eCreationTask.setTimeout(this.timeout); + n9eCreationTask.setPause(ListUtils.strList2String(creationTaskData.getPauseList())); + n9eCreationTask.setScript(this.script); + n9eCreationTask.setArgs(sb.toString()); + n9eCreationTask.setAccount(this.account); + n9eCreationTask.setAction(ClusterTaskActionEnum.PAUSE.getAction()); + n9eCreationTask.setHosts(creationTaskData.getHostList()); + return n9eCreationTask; } } \ No newline at end of file diff --git 
a/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/component/agent/n9e/entry/N9eCreationTask.java b/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/component/agent/n9e/entry/N9eCreationTask.java new file mode 100644 index 00000000..6ca4c85c --- /dev/null +++ b/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/component/agent/n9e/entry/N9eCreationTask.java @@ -0,0 +1,151 @@ +package com.xiaojukeji.kafka.manager.kcm.component.agent.n9e.entry; + +import java.util.List; + +public class N9eCreationTask { + /** + * 任务标题 + */ + private String title; + + /** + * 并发度, =2则表示两台并发执行 + */ + private Integer batch; + + /** + * 错误容忍度, 达到容忍度之上时, 任务会被暂停并不可以继续执行 + */ + private Integer tolerance; + + /** + * 单台任务的超时时间(秒) + */ + private Integer timeout; + + /** + * 暂停点, 格式: host1,host2,host3 + */ + private String pause; + + /** + * 任务执行对应的脚本 + */ + private String script; + + /** + * 任务参数 + */ + private String args; + + /** + * 使用的账号 + */ + private String account; + + /** + * 动作 + */ + private String action; + + /** + * 操作的主机列表 + */ + private List hosts; + + public String getTitle() { + return title; + } + + public void setTitle(String title) { + this.title = title; + } + + public Integer getBatch() { + return batch; + } + + public void setBatch(Integer batch) { + this.batch = batch; + } + + public Integer getTolerance() { + return tolerance; + } + + public void setTolerance(Integer tolerance) { + this.tolerance = tolerance; + } + + public Integer getTimeout() { + return timeout; + } + + public void setTimeout(Integer timeout) { + this.timeout = timeout; + } + + public String getPause() { + return pause; + } + + public void setPause(String pause) { + this.pause = pause; + } + + public String getScript() { + return script; + } + + public void setScript(String script) { + this.script = script; + } + + public String getArgs() { + return args; + } + + public void setArgs(String 
args) { + this.args = args; + } + + public String getAccount() { + return account; + } + + public void setAccount(String account) { + this.account = account; + } + + public String getAction() { + return action; + } + + public void setAction(String action) { + this.action = action; + } + + public List getHosts() { + return hosts; + } + + public void setHosts(List hosts) { + this.hosts = hosts; + } + + @Override + public String toString() { + return "N9eCreationTask{" + + "title='" + title + '\'' + + ", batch=" + batch + + ", tolerance=" + tolerance + + ", timeout=" + timeout + + ", pause='" + pause + '\'' + + ", script='" + script + '\'' + + ", args='" + args + '\'' + + ", account='" + account + '\'' + + ", action='" + action + '\'' + + ", hosts=" + hosts + + '}'; + } +} diff --git a/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/component/agent/n9e/entry/N9eTaskResultDTO.java b/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/component/agent/n9e/entry/N9eTaskResult.java similarity index 99% rename from kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/component/agent/n9e/entry/N9eTaskResultDTO.java rename to kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/component/agent/n9e/entry/N9eTaskResult.java index b787f016..e0e67b0e 100644 --- a/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/component/agent/n9e/entry/N9eTaskResultDTO.java +++ b/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/component/agent/n9e/entry/N9eTaskResult.java @@ -12,7 +12,7 @@ import java.util.Map; * @author zengqiao * @date 20/9/7 */ -public class N9eTaskResultDTO { +public class N9eTaskResult { private List waiting; private List running; diff --git 
a/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/component/agent/n9e/entry/N9eTaskStdoutLog.java b/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/component/agent/n9e/entry/N9eTaskStdoutLog.java new file mode 100644 index 00000000..622aaa3e --- /dev/null +++ b/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/component/agent/n9e/entry/N9eTaskStdoutLog.java @@ -0,0 +1,35 @@ +package com.xiaojukeji.kafka.manager.kcm.component.agent.n9e.entry; + +/** + * @author zengqiao + * @date 20/9/7 + */ +public class N9eTaskStdoutLog { + private String host; + + private String stdout; + + public String getHost() { + return host; + } + + public void setHost(String host) { + this.host = host; + } + + public String getStdout() { + return stdout; + } + + public void setStdout(String stdout) { + this.stdout = stdout; + } + + @Override + public String toString() { + return "N9eTaskStdoutLog{" + + "host='" + host + '\'' + + ", stdout='" + stdout + '\'' + + '}'; + } +} \ No newline at end of file diff --git a/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/component/agent/n9e/entry/bizenum/N9eTaskStatusEnum.java b/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/component/agent/n9e/entry/bizenum/N9eTaskStatusEnum.java new file mode 100644 index 00000000..4453e703 --- /dev/null +++ b/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/component/agent/n9e/entry/bizenum/N9eTaskStatusEnum.java @@ -0,0 +1,59 @@ +package com.xiaojukeji.kafka.manager.kcm.component.agent.n9e.entry.bizenum; + +import com.xiaojukeji.kafka.manager.kcm.common.bizenum.ClusterTaskStateEnum; + +/** + * @author zengqiao + * @date 20/9/3 + */ +public enum N9eTaskStatusEnum { + DONE(0, "done", ClusterTaskStateEnum.FINISHED), + PAUSE(1, "pause", ClusterTaskStateEnum.BLOCKED), + START(2, "start",
ClusterTaskStateEnum.RUNNING), + ; + + private Integer code; + + private String message; + + private ClusterTaskStateEnum status; + + N9eTaskStatusEnum(Integer code, String message, ClusterTaskStateEnum status) { + this.code = code; + this.message = message; + this.status = status; + } + + public Integer getCode() { + return code; + } + + public void setCode(Integer code) { + this.code = code; + } + + public String getMessage() { + return message; + } + + public void setMessage(String message) { + this.message = message; + } + + public ClusterTaskStateEnum getStatus() { + return status; + } + + public void setStatus(ClusterTaskStateEnum status) { + this.status = status; + } + + public static N9eTaskStatusEnum getByMessage(String message) { + for (N9eTaskStatusEnum elem: N9eTaskStatusEnum.values()) { + if (elem.message.equals(message)) { + return elem; + } + } + return null; + } +} \ No newline at end of file diff --git a/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/component/storage/AbstractStorageService.java b/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/component/storage/AbstractStorageService.java index 90192b0b..34c209ac 100644 --- a/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/component/storage/AbstractStorageService.java +++ b/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/component/storage/AbstractStorageService.java @@ -10,13 +10,20 @@ import org.springframework.web.multipart.MultipartFile; public abstract class AbstractStorageService { /** * 上传 + * @param fileName 文件名 + * @param fileMd5 文件md5 + * @param uploadFile 文件 + * @return 上传结果 */ public abstract boolean upload(String fileName, String fileMd5, MultipartFile uploadFile); /** - * 下载 + * 下载文件 + * @param fileName 文件名 + * @param fileMd5 文件md5 + * @return 文件 */ - public abstract Result download(String fileName, String fileMd5); + public 
abstract Result download(String fileName, String fileMd5); /** * 下载base地址 diff --git a/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/component/storage/local/Local.java b/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/component/storage/local/Local.java deleted file mode 100644 index 40841de4..00000000 --- a/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/component/storage/local/Local.java +++ /dev/null @@ -1,33 +0,0 @@ -package com.xiaojukeji.kafka.manager.kcm.component.storage.local; - -import com.xiaojukeji.kafka.manager.common.entity.Result; -import com.xiaojukeji.kafka.manager.common.entity.ResultStatus; -import org.springframework.beans.factory.annotation.Value; -import org.springframework.stereotype.Service; -import com.xiaojukeji.kafka.manager.kcm.component.storage.AbstractStorageService; -import org.springframework.web.multipart.MultipartFile; - -/** - * @author zengqiao - * @date 20/9/17 - */ -@Service("storageService") -public class Local extends AbstractStorageService { - @Value("${kcm.storage.base-url}") - private String baseUrl; - - @Override - public boolean upload(String fileName, String fileMd5, MultipartFile uploadFile) { - return false; - } - - @Override - public Result download(String fileName, String fileMd5) { - return Result.buildFrom(ResultStatus.DOWNLOAD_FILE_FAIL); - } - - @Override - public String getDownloadBaseUrl() { - return baseUrl; - } -} \ No newline at end of file diff --git a/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/component/storage/s3/S3Service.java b/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/component/storage/s3/S3Service.java new file mode 100644 index 00000000..9519efd2 --- /dev/null +++ b/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/component/storage/s3/S3Service.java @@ -0,0 
+1,128 @@ +package com.xiaojukeji.kafka.manager.kcm.component.storage.s3; + +import com.xiaojukeji.kafka.manager.common.entity.Result; +import com.xiaojukeji.kafka.manager.common.entity.ResultStatus; +import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils; +import com.xiaojukeji.kafka.manager.kcm.component.storage.AbstractStorageService; +import io.minio.*; +import org.springframework.beans.factory.annotation.Value; +import org.springframework.mock.web.MockMultipartFile; +import org.springframework.stereotype.Service; +import org.springframework.web.multipart.MultipartFile; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import javax.annotation.PostConstruct; +import java.io.IOException; +import java.io.InputStream; + + +@Service("storageService") +public class S3Service extends AbstractStorageService { + private final static Logger LOGGER = LoggerFactory.getLogger(S3Service.class); + + @Value("${kcm.s3.endpoint:}") + private String endpoint; + + @Value("${kcm.s3.access-key:}") + private String accessKey; + + @Value("${kcm.s3.secret-key:}") + private String secretKey; + + @Value("${kcm.s3.bucket:}") + private String bucket; + + private MinioClient minioClient; + + @PostConstruct + public void init() { + try { + if (ValidateUtils.anyBlank(this.endpoint, this.accessKey, this.secretKey, this.bucket)) { + // s3 is not configured, skip initialization + return; + } + minioClient = new MinioClient(endpoint, accessKey, secretKey); + } catch (Exception e) { + LOGGER.error("class=S3Service||method=init||fields={}||errMsg={}", this.toString(), e.getMessage()); + } + } + + @Override + public boolean upload(String fileName, String fileMd5, MultipartFile uploadFile) { + InputStream inputStream = null; + try { + if (!createBucketIfNotExist()) { + return false; + } + + inputStream = uploadFile.getInputStream(); + minioClient.putObject(PutObjectArgs.builder() + .bucket(this.bucket) + .object(fileName) + .stream(inputStream, uploadFile.getSize(), -1) + .build() + ); + return 
true; + } catch (Exception e) { + LOGGER.error("class=S3Service||method=upload||fileName={}||errMsg={}||msg=upload failed", fileName, e.getMessage()); + } finally { + if (inputStream != null) { + try { + inputStream.close(); + } catch (IOException e) { + ; // ignore + } + } + } + return false; + } + + @Override + public Result download(String fileName, String fileMd5) { + try { + final ObjectStat stat = minioClient.statObject(this.bucket, fileName); + + InputStream is = minioClient.getObject(this.bucket, fileName); + + return Result.buildSuc(new MockMultipartFile(fileName, fileName, stat.contentType(), is)); + } catch (Exception e) { + LOGGER.error("class=S3Service||method=download||fileName={}||errMsg={}||msg=download failed", fileName, e.getMessage()); + } + return Result.buildFrom(ResultStatus.STORAGE_DOWNLOAD_FILE_FAILED); + } + + @Override + public String getDownloadBaseUrl() { + if (this.endpoint.startsWith("http://")) { + return this.endpoint + "/" + this.bucket; + } + return "http://" + this.endpoint + "/" + this.bucket; + } + + private boolean createBucketIfNotExist() { + try { + boolean found = minioClient.bucketExists(BucketExistsArgs.builder().bucket(this.bucket).build()); + if (!found) { + minioClient.makeBucket(MakeBucketArgs.builder().bucket(this.bucket).build()); + } + + LOGGER.info("class=S3Service||method=createBucketIfNotExist||bucket={}||msg=check and create bucket success", this.bucket); + return true; + } catch (Exception e) { + LOGGER.error("class=S3Service||method=createBucketIfNotExist||bucket={}||errMsg={}||msg=create bucket failed", this.bucket, e.getMessage()); + } + return false; + } + + @Override + public String toString() { + return "S3Service{" + + "endpoint='" + endpoint + '\'' + + ", accessKey='" + accessKey + '\'' + + ", secretKey='" + secretKey + '\'' + + ", bucket='" + bucket + '\'' + + '}'; + } +} diff --git a/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/impl/ClusterTaskServiceImpl.java 
b/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/impl/ClusterTaskServiceImpl.java index a190350a..b3ef959a 100644 --- a/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/impl/ClusterTaskServiceImpl.java +++ b/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/impl/ClusterTaskServiceImpl.java @@ -6,6 +6,7 @@ import com.xiaojukeji.kafka.manager.kcm.ClusterTaskService; import com.xiaojukeji.kafka.manager.kcm.common.Converters; import com.xiaojukeji.kafka.manager.kcm.common.bizenum.ClusterTaskActionEnum; import com.xiaojukeji.kafka.manager.kcm.common.entry.ClusterTaskConstant; +import com.xiaojukeji.kafka.manager.kcm.common.entry.ao.ClusterTaskLog; import com.xiaojukeji.kafka.manager.kcm.common.entry.ao.ClusterTaskSubStatus; import com.xiaojukeji.kafka.manager.kcm.common.bizenum.ClusterTaskStateEnum; import com.xiaojukeji.kafka.manager.kcm.common.bizenum.ClusterTaskSubStateEnum; @@ -34,7 +35,7 @@ import java.util.*; */ @Service("clusterTaskService") public class ClusterTaskServiceImpl implements ClusterTaskService { - private final static Logger LOGGER = LoggerFactory.getLogger(ClusterTaskServiceImpl.class); + private static final Logger LOGGER = LoggerFactory.getLogger(ClusterTaskServiceImpl.class); @Autowired private AbstractAgent abstractAgent; @@ -63,13 +64,13 @@ public class ClusterTaskServiceImpl implements ClusterTaskService { } // 创建任务 - Long agentTaskId = abstractAgent.createTask(dtoResult.getData()); - if (ValidateUtils.isNull(agentTaskId)) { + Result createResult = abstractAgent.createTask(dtoResult.getData()); + if (ValidateUtils.isNull(createResult) || createResult.failed()) { return Result.buildFrom(ResultStatus.CALL_CLUSTER_TASK_AGENT_FAILED); } try { - if (clusterTaskDao.insert(Converters.convert2ClusterTaskDO(agentTaskId, dtoResult.getData(), operator)) > 0) { + if 
(clusterTaskDao.insert(Converters.convert2ClusterTaskDO(createResult.getData(), dtoResult.getData(), operator)) > 0) { return Result.buildFrom(ResultStatus.SUCCESS); } } catch (Exception e) { @@ -87,45 +88,44 @@ public class ClusterTaskServiceImpl implements ClusterTaskService { Long agentTaskId = getActiveAgentTaskId(clusterTaskDO); Boolean rollback = inRollback(clusterTaskDO); - ClusterTaskStateEnum stateEnum = abstractAgent.getTaskState(agentTaskId); - if (ClusterTaskActionEnum.START.getMessage().equals(action) - && ClusterTaskStateEnum.BLOCKED.equals(stateEnum)) { + Result stateEnumResult = abstractAgent.getTaskExecuteState(agentTaskId); + if (ValidateUtils.isNull(stateEnumResult) || stateEnumResult.failed()) { + return ResultStatus.CALL_CLUSTER_TASK_AGENT_FAILED; + } + + if (ClusterTaskActionEnum.START.getAction().equals(action) && ClusterTaskStateEnum.BLOCKED.equals(stateEnumResult.getData())) { // 暂停状态, 可以执行开始 - return actionTaskExceptRollbackAction(agentTaskId, action, ""); + return actionTaskExceptRollbackAction(agentTaskId, ClusterTaskActionEnum.START, ""); } - if (ClusterTaskActionEnum.PAUSE.getMessage().equals(action) - && ClusterTaskStateEnum.RUNNING.equals(stateEnum)) { + if (ClusterTaskActionEnum.PAUSE.getAction().equals(action) && ClusterTaskStateEnum.RUNNING.equals(stateEnumResult.getData())) { // 运行状态, 可以执行暂停 - return actionTaskExceptRollbackAction(agentTaskId, action, ""); + return actionTaskExceptRollbackAction(agentTaskId, ClusterTaskActionEnum.PAUSE, ""); } - if (ClusterTaskActionEnum.IGNORE.getMessage().equals(action) - || ClusterTaskActionEnum.CANCEL.getMessage().equals(action)) { + if (ClusterTaskActionEnum.IGNORE.getAction().equals(action)) { // 忽略 & 取消随时都可以操作 - return actionTaskExceptRollbackAction(agentTaskId, action, hostname); + return actionTaskExceptRollbackAction(agentTaskId, ClusterTaskActionEnum.IGNORE, hostname); } - if ((!ClusterTaskStateEnum.FINISHED.equals(stateEnum) || !rollback) - && 
ClusterTaskActionEnum.ROLLBACK.getMessage().equals(action)) { + if (ClusterTaskActionEnum.CANCEL.getAction().equals(action)) { + // 忽略 & 取消随时都可以操作 + return actionTaskExceptRollbackAction(agentTaskId, ClusterTaskActionEnum.CANCEL, hostname); + } + if ((!ClusterTaskStateEnum.FINISHED.equals(stateEnumResult.getData()) || !rollback) + && ClusterTaskActionEnum.ROLLBACK.getAction().equals(action)) { // 暂未操作完时可以回滚, 回滚所有操作过的机器到上一个版本 return actionTaskRollback(clusterTaskDO); } return ResultStatus.OPERATION_FAILED; } - private ResultStatus actionTaskExceptRollbackAction(Long agentId, String action, String hostname) { + private ResultStatus actionTaskExceptRollbackAction(Long agentId, ClusterTaskActionEnum actionEnum, String hostname) { if (!ValidateUtils.isBlank(hostname)) { - return actionHostTaskExceptRollbackAction(agentId, action, hostname); + return actionHostTaskExceptRollbackAction(agentId, actionEnum, hostname); } - if (abstractAgent.actionTask(agentId, action)) { - return ResultStatus.SUCCESS; - } - return ResultStatus.OPERATION_FAILED; + return abstractAgent.actionTask(agentId, actionEnum)? ResultStatus.SUCCESS: ResultStatus.OPERATION_FAILED; } - private ResultStatus actionHostTaskExceptRollbackAction(Long agentId, String action, String hostname) { - if (abstractAgent.actionHostTask(agentId, action, hostname)) { - return ResultStatus.SUCCESS; - } - return ResultStatus.OPERATION_FAILED; + private ResultStatus actionHostTaskExceptRollbackAction(Long agentId, ClusterTaskActionEnum actionEnum, String hostname) { + return abstractAgent.actionHostTask(agentId, actionEnum, hostname)? 
ResultStatus.SUCCESS: ResultStatus.OPERATION_FAILED; } private ResultStatus actionTaskRollback(ClusterTaskDO clusterTaskDO) { @@ -133,9 +133,9 @@ public class ClusterTaskServiceImpl implements ClusterTaskService { return ResultStatus.OPERATION_FORBIDDEN; } - Map subStatusEnumMap = + Result> subStatusEnumMapResult = abstractAgent.getTaskResult(clusterTaskDO.getAgentTaskId()); - if (ValidateUtils.isNull(subStatusEnumMap)) { + if (ValidateUtils.isNull(subStatusEnumMapResult) || subStatusEnumMapResult.failed()) { return ResultStatus.CALL_CLUSTER_TASK_AGENT_FAILED; } @@ -143,7 +143,7 @@ public class ClusterTaskServiceImpl implements ClusterTaskService { List rollbackHostList = new ArrayList<>(); List rollbackPauseHostList = new ArrayList<>(); for (String host: ListUtils.string2StrList(clusterTaskDO.getHostList())) { - ClusterTaskSubStateEnum subStateEnum = subStatusEnumMap.get(host); + ClusterTaskSubStateEnum subStateEnum = subStatusEnumMapResult.getData().get(host); if (ValidateUtils.isNull(subStateEnum)) { // 机器对应的任务查询失败 return ResultStatus.OPERATION_FAILED; @@ -166,17 +166,17 @@ public class ClusterTaskServiceImpl implements ClusterTaskService { clusterTaskDO.setRollbackPauseHostList(ListUtils.strList2String(rollbackPauseHostList)); // 创建任务 - Long agentTaskId = abstractAgent.createTask(Converters.convert2CreationTaskData(clusterTaskDO)); - if (ValidateUtils.isNull(agentTaskId)) { + Result createResult = abstractAgent.createTask(Converters.convert2CreationTaskData(clusterTaskDO)); + if (ValidateUtils.isNull(createResult) || createResult.failed()) { return ResultStatus.CALL_CLUSTER_TASK_AGENT_FAILED; } try { - clusterTaskDO.setAgentRollbackTaskId(agentTaskId); + clusterTaskDO.setAgentRollbackTaskId(createResult.getData()); if (clusterTaskDao.updateRollback(clusterTaskDO) <= 0) { return ResultStatus.MYSQL_ERROR; } - abstractAgent.actionTask(clusterTaskDO.getAgentTaskId(), ClusterTaskActionEnum.CANCEL.getMessage()); + 
abstractAgent.actionTask(clusterTaskDO.getAgentTaskId(), ClusterTaskActionEnum.CANCEL); return ResultStatus.SUCCESS; } catch (Exception e) { LOGGER.error("create cluster task failed, clusterTaskDO:{}.", clusterTaskDO, e); @@ -191,11 +191,11 @@ return Result.buildFrom(ResultStatus.TASK_NOT_EXIST); } - String stdoutLog = abstractAgent.getTaskLog(getActiveAgentTaskId(clusterTaskDO, hostname), hostname); - if (ValidateUtils.isNull(stdoutLog)) { + Result stdoutLogResult = abstractAgent.getTaskLog(getActiveAgentTaskId(clusterTaskDO, hostname), hostname); + if (ValidateUtils.isNull(stdoutLogResult) || stdoutLogResult.failed()) { return Result.buildFrom(ResultStatus.CALL_CLUSTER_TASK_AGENT_FAILED); } - return new Result<>(stdoutLog); + return new Result<>(stdoutLogResult.getData().getStdout()); } @Override @@ -205,24 +205,33 @@ return Result.buildFrom(ResultStatus.TASK_NOT_EXIST); } + Result statusEnumResult = abstractAgent.getTaskExecuteState(getActiveAgentTaskId(clusterTaskDO)); + if (ValidateUtils.isNull(statusEnumResult) || statusEnumResult.failed()) { + return Result.buildFrom(ResultStatus.CALL_CLUSTER_TASK_AGENT_FAILED); + } + return new Result<>(new ClusterTaskStatus( clusterTaskDO.getId(), clusterTaskDO.getClusterId(), inRollback(clusterTaskDO), - abstractAgent.getTaskState(getActiveAgentTaskId(clusterTaskDO)), + statusEnumResult.getData(), getTaskSubStatus(clusterTaskDO) )); } @Override public ClusterTaskStateEnum getTaskState(Long agentTaskId) { - return abstractAgent.getTaskState(agentTaskId); + Result statusEnumResult = abstractAgent.getTaskExecuteState(agentTaskId); + if (ValidateUtils.isNull(statusEnumResult) || statusEnumResult.failed()) { + return null; + } + return statusEnumResult.getData(); } private List getTaskSubStatus(ClusterTaskDO clusterTaskDO) { Map statusMap = this.getClusterTaskSubState(clusterTaskDO); 
if (ValidateUtils.isNull(statusMap)) { - return null; + return Collections.emptyList(); } List pauseList = ListUtils.string2StrList(clusterTaskDO.getPauseHostList()); @@ -242,20 +251,22 @@ public class ClusterTaskServiceImpl implements ClusterTaskService { } private Map getClusterTaskSubState(ClusterTaskDO clusterTaskDO) { - Map statusMap = abstractAgent.getTaskResult(clusterTaskDO.getAgentTaskId()); - if (ValidateUtils.isNull(statusMap)) { + Result> statusMapResult = abstractAgent.getTaskResult(clusterTaskDO.getAgentTaskId()); + if (ValidateUtils.isNull(statusMapResult) || statusMapResult.failed()) { return null; } + Map statusMap = statusMapResult.getData(); if (!inRollback(clusterTaskDO)) { return statusMap; } - Map rollbackStatusMap = + Result> rollbackStatusMapResult = abstractAgent.getTaskResult(clusterTaskDO.getAgentRollbackTaskId()); - if (ValidateUtils.isNull(rollbackStatusMap)) { + if (ValidateUtils.isNull(rollbackStatusMapResult) || rollbackStatusMapResult.failed()) { return null; } - statusMap.putAll(rollbackStatusMap); + + statusMap.putAll(rollbackStatusMapResult.getData()); return statusMap; } @@ -276,7 +287,7 @@ public class ClusterTaskServiceImpl implements ClusterTaskService { } catch (Exception e) { LOGGER.error("get all cluster task failed."); } - return null; + return Collections.emptyList(); } @Override @@ -302,9 +313,6 @@ public class ClusterTaskServiceImpl implements ClusterTaskService { } private boolean inRollback(ClusterTaskDO clusterTaskDO) { - if (ClusterTaskConstant.INVALID_AGENT_TASK_ID.equals(clusterTaskDO.getAgentRollbackTaskId())) { - return false; - } - return true; + return !ClusterTaskConstant.INVALID_AGENT_TASK_ID.equals(clusterTaskDO.getAgentRollbackTaskId()); } } \ No newline at end of file diff --git a/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/impl/KafkaFileServiceImpl.java 
b/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/impl/KafkaFileServiceImpl.java index f97510fd..bef2fb89 100644 --- a/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/impl/KafkaFileServiceImpl.java +++ b/kafka-manager-extends/kafka-manager-kcm/src/main/java/com/xiaojukeji/kafka/manager/kcm/impl/KafkaFileServiceImpl.java @@ -15,6 +15,7 @@ import org.slf4j.LoggerFactory; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.dao.DuplicateKeyException; import org.springframework.stereotype.Service; +import org.springframework.web.multipart.MultipartFile; import java.util.ArrayList; import java.util.List; @@ -52,7 +53,7 @@ public class KafkaFileServiceImpl implements KafkaFileService { kafkaFileDTO.getUploadFile()) ) { kafkaFileDao.deleteById(kafkaFileDO.getId()); - return ResultStatus.UPLOAD_FILE_FAIL; + return ResultStatus.STORAGE_UPLOAD_FILE_FAILED; } return ResultStatus.SUCCESS; } catch (DuplicateKeyException e) { @@ -113,7 +114,7 @@ public class KafkaFileServiceImpl implements KafkaFileService { if (kafkaFileDao.updateById(kafkaFileDO) <= 0) { return ResultStatus.MYSQL_ERROR; } - return ResultStatus.UPLOAD_FILE_FAIL; + return ResultStatus.STORAGE_UPLOAD_FILE_FAILED; } catch (Exception e) { LOGGER.error("rollback modify kafka file failed, kafkaFileDTO:{}.", kafkaFileDTO, e); } @@ -163,13 +164,13 @@ public class KafkaFileServiceImpl implements KafkaFileService { } @Override - public Result downloadKafkaConfigFile(Long fileId) { + public Result downloadKafkaFile(Long fileId) { KafkaFileDO kafkaFileDO = kafkaFileDao.getById(fileId); if (ValidateUtils.isNull(kafkaFileDO)) { return Result.buildFrom(ResultStatus.RESOURCE_NOT_EXIST); } if (KafkaFileEnum.PACKAGE.getCode().equals(kafkaFileDO.getFileType())) { - return Result.buildFrom(ResultStatus.FILE_TYPE_NOT_SUPPORT); + return Result.buildFrom(ResultStatus.STORAGE_FILE_TYPE_NOT_SUPPORT); } return 
storageService.download(kafkaFileDO.getFileName(), kafkaFileDO.getFileMd5()); diff --git a/kafka-manager-extends/kafka-manager-monitor/pom.xml b/kafka-manager-extends/kafka-manager-monitor/pom.xml index 9d198a49..0948a190 100644 --- a/kafka-manager-extends/kafka-manager-monitor/pom.xml +++ b/kafka-manager-extends/kafka-manager-monitor/pom.xml @@ -5,13 +5,13 @@ 4.0.0 com.xiaojukeji.kafka kafka-manager-monitor - 2.1.0-SNAPSHOT + ${kafka-manager.revision} jar kafka-manager com.xiaojukeji.kafka - 2.1.0-SNAPSHOT + ${kafka-manager.revision} ../../pom.xml diff --git a/kafka-manager-extends/kafka-manager-monitor/src/main/java/com/xiaojukeji/kafka/manager/monitor/component/n9e/N9eConverter.java b/kafka-manager-extends/kafka-manager-monitor/src/main/java/com/xiaojukeji/kafka/manager/monitor/component/n9e/N9eConverter.java index 7735caf8..c69ae906 100644 --- a/kafka-manager-extends/kafka-manager-monitor/src/main/java/com/xiaojukeji/kafka/manager/monitor/component/n9e/N9eConverter.java +++ b/kafka-manager-extends/kafka-manager-monitor/src/main/java/com/xiaojukeji/kafka/manager/monitor/component/n9e/N9eConverter.java @@ -4,6 +4,7 @@ import com.xiaojukeji.kafka.manager.common.utils.ListUtils; import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils; import com.xiaojukeji.kafka.manager.monitor.common.entry.*; import com.xiaojukeji.kafka.manager.monitor.component.n9e.entry.*; +import com.xiaojukeji.kafka.manager.monitor.component.n9e.entry.bizenum.CategoryEnum; import java.util.*; @@ -44,7 +45,7 @@ public class N9eConverter { if (!ValidateUtils.isNull(strategy.getId())) { n9eStrategy.setId(strategy.getId().intValue()); } - n9eStrategy.setCategory(1); + n9eStrategy.setCategory(CategoryEnum.DEVICE_INDEPENDENT.getCode()); n9eStrategy.setName(strategy.getName()); n9eStrategy.setNid(monitorN9eNid); n9eStrategy.setExcl_nid(new ArrayList<>()); @@ -77,7 +78,13 @@ public class N9eConverter { n9eStrategy.setRecovery_notify(0); StrategyAction strategyAction = 
strategy.getStrategyActionList().get(0); - n9eStrategy.setConverge(ListUtils.string2IntList(strategyAction.getConverge())); + + // 单位转换, 夜莺的单位是秒, KM前端的单位是分钟 + List convergeList = ListUtils.string2IntList(strategyAction.getConverge()); + if (!ValidateUtils.isEmptyList(convergeList)) { + convergeList.set(0, convergeList.get(0) * 60); + } + n9eStrategy.setConverge(convergeList); List notifyGroups = new ArrayList<>(); for (String name: ListUtils.string2StrList(strategyAction.getNotifyGroup())) { @@ -167,7 +174,13 @@ public class N9eConverter { } strategyAction.setNotifyGroup(ListUtils.strList2String(notifyGroups)); - strategyAction.setConverge(ListUtils.intList2String(n9eStrategy.getConverge())); + // 单位转换, 夜莺的单位是秒, KM前端的单位是分钟 + List convergeList = n9eStrategy.getConverge(); + if (!ValidateUtils.isEmptyList(convergeList)) { + convergeList.set(0, convergeList.get(0) / 60); + } + strategyAction.setConverge(ListUtils.intList2String(convergeList)); + strategyAction.setCallback(n9eStrategy.getCallback()); strategy.setStrategyActionList(Arrays.asList(strategyAction)); diff --git a/kafka-manager-extends/kafka-manager-monitor/src/main/java/com/xiaojukeji/kafka/manager/monitor/component/n9e/entry/bizenum/CategoryEnum.java b/kafka-manager-extends/kafka-manager-monitor/src/main/java/com/xiaojukeji/kafka/manager/monitor/component/n9e/entry/bizenum/CategoryEnum.java new file mode 100644 index 00000000..9695c757 --- /dev/null +++ b/kafka-manager-extends/kafka-manager-monitor/src/main/java/com/xiaojukeji/kafka/manager/monitor/component/n9e/entry/bizenum/CategoryEnum.java @@ -0,0 +1,23 @@ +package com.xiaojukeji.kafka.manager.monitor.component.n9e.entry.bizenum; + +public enum CategoryEnum { + DEVICE_RELATED(1, "设备相关"), + DEVICE_INDEPENDENT(2, "设备无关"), + ; + private int code; + + private String msg; + + CategoryEnum(int code, String msg) { + this.code = code; + this.msg = msg; + } + + public int getCode() { + return code; + } + + public String getMsg() { + return msg; + } +} diff 
--git a/kafka-manager-extends/kafka-manager-notify/pom.xml b/kafka-manager-extends/kafka-manager-notify/pom.xml index c15dba32..a2fd2c4b 100644 --- a/kafka-manager-extends/kafka-manager-notify/pom.xml +++ b/kafka-manager-extends/kafka-manager-notify/pom.xml @@ -5,13 +5,13 @@ 4.0.0 com.xiaojukeji.kafka kafka-manager-notify - 2.1.0-SNAPSHOT + ${kafka-manager.revision} jar kafka-manager com.xiaojukeji.kafka - 2.1.0-SNAPSHOT + ${kafka-manager.revision} ../../pom.xml diff --git a/kafka-manager-extends/kafka-manager-openapi/pom.xml b/kafka-manager-extends/kafka-manager-openapi/pom.xml index a0c4c277..caaa1242 100644 --- a/kafka-manager-extends/kafka-manager-openapi/pom.xml +++ b/kafka-manager-extends/kafka-manager-openapi/pom.xml @@ -4,13 +4,13 @@ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> 4.0.0 kafka-manager-openapi - 2.1.0-SNAPSHOT + ${kafka-manager.revision} jar kafka-manager com.xiaojukeji.kafka - 2.1.0-SNAPSHOT + ${kafka-manager.revision} ../../pom.xml diff --git a/kafka-manager-task/pom.xml b/kafka-manager-task/pom.xml index 86c06a99..8927ef8e 100644 --- a/kafka-manager-task/pom.xml +++ b/kafka-manager-task/pom.xml @@ -5,13 +5,13 @@ 4.0.0 com.xiaojukeji.kafka kafka-manager-task - 2.1.0-SNAPSHOT + ${kafka-manager.revision} jar kafka-manager com.xiaojukeji.kafka - 2.1.0-SNAPSHOT + ${kafka-manager.revision} diff --git a/kafka-manager-task/src/main/java/com/xiaojukeji/kafka/manager/task/dispatch/op/SyncTopic2DB.java b/kafka-manager-task/src/main/java/com/xiaojukeji/kafka/manager/task/dispatch/op/SyncTopic2DB.java index ae10a21d..bb069aa8 100644 --- a/kafka-manager-task/src/main/java/com/xiaojukeji/kafka/manager/task/dispatch/op/SyncTopic2DB.java +++ b/kafka-manager-task/src/main/java/com/xiaojukeji/kafka/manager/task/dispatch/op/SyncTopic2DB.java @@ -125,7 +125,7 @@ public class SyncTopic2DB extends AbstractScheduledTask { if (ValidateUtils.isNull(syncTopic2DBConfig.isAddAuthority()) || 
!syncTopic2DBConfig.isAddAuthority()) { // 不增加权限信息, 则直接忽略 - return; + continue; } // TODO 当前添加 Topic 和 添加 Authority 是非事务的, 中间出现异常之后, 会导致数据错误, 后续还需要优化一下 diff --git a/kafka-manager-task/src/main/java/com/xiaojukeji/kafka/manager/task/listener/SinkCommunityTopicMetrics2Monitor.java b/kafka-manager-task/src/main/java/com/xiaojukeji/kafka/manager/task/listener/SinkCommunityTopicMetrics2Monitor.java index e8df775b..e2ac74a9 100644 --- a/kafka-manager-task/src/main/java/com/xiaojukeji/kafka/manager/task/listener/SinkCommunityTopicMetrics2Monitor.java +++ b/kafka-manager-task/src/main/java/com/xiaojukeji/kafka/manager/task/listener/SinkCommunityTopicMetrics2Monitor.java @@ -73,7 +73,7 @@ public class SinkCommunityTopicMetrics2Monitor extends AbstractScheduledTask MonitorSinkConstant.MONITOR_SYSTEM_SINK_THRESHOLD) { abstractMonitor.sinkMetrics(metricSinkPoints); metricSinkPoints.clear(); diff --git a/kafka-manager-task/src/main/java/com/xiaojukeji/kafka/manager/task/listener/SinkConsumerMetrics2Monitor.java b/kafka-manager-task/src/main/java/com/xiaojukeji/kafka/manager/task/listener/SinkConsumerMetrics2Monitor.java index 3b5f0ad4..4ca276f9 100644 --- a/kafka-manager-task/src/main/java/com/xiaojukeji/kafka/manager/task/listener/SinkConsumerMetrics2Monitor.java +++ b/kafka-manager-task/src/main/java/com/xiaojukeji/kafka/manager/task/listener/SinkConsumerMetrics2Monitor.java @@ -64,7 +64,7 @@ public class SinkConsumerMetrics2Monitor implements ApplicationListener MonitorSinkConstant.MONITOR_SYSTEM_SINK_THRESHOLD) { abstractMonitor.sinkMetrics(metricSinkPoints); metricSinkPoints.clear(); diff --git a/kafka-manager-task/src/main/java/com/xiaojukeji/kafka/manager/task/listener/SinkTopicThrottledMetrics2Monitor.java b/kafka-manager-task/src/main/java/com/xiaojukeji/kafka/manager/task/listener/SinkTopicThrottledMetrics2Monitor.java index c4871905..fb95947c 100644 --- 
a/kafka-manager-task/src/main/java/com/xiaojukeji/kafka/manager/task/listener/SinkTopicThrottledMetrics2Monitor.java +++ b/kafka-manager-task/src/main/java/com/xiaojukeji/kafka/manager/task/listener/SinkTopicThrottledMetrics2Monitor.java @@ -57,7 +57,7 @@ public class SinkTopicThrottledMetrics2Monitor implements ApplicationListener doList = clusterService.list(); + Map dbClusterMap = clusterService.list().stream().collect(Collectors.toMap(ClusterDO::getId, Function.identity(), (key1, key2) -> key2)); - Set newClusterIdSet = new HashSet<>(); - Set oldClusterIdSet = physicalClusterMetadataManager.getClusterIdSet(); - for (ClusterDO clusterDO: doList) { - newClusterIdSet.add(clusterDO.getId()); + Map cacheClusterMap = PhysicalClusterMetadataManager.getClusterMap(); - // 添加集群 - physicalClusterMetadataManager.addNew(clusterDO); - } + // 新增的集群 + for (ClusterDO clusterDO: dbClusterMap.values()) { + if (cacheClusterMap.containsKey(clusterDO.getId())) { + // 已经存在 + continue; + } + add(clusterDO); + } - for (Long clusterId: oldClusterIdSet) { - if (newClusterIdSet.contains(clusterId)) { - continue; - } + // 移除的集群 + for (ClusterDO clusterDO: cacheClusterMap.values()) { + if (dbClusterMap.containsKey(clusterDO.getId())) { + // 已经存在 + continue; + } + remove(clusterDO.getId()); + } - // 移除集群 - physicalClusterMetadataManager.remove(clusterId); - } + // 被修改配置的集群 + for (ClusterDO dbClusterDO: dbClusterMap.values()) { + ClusterDO cacheClusterDO = cacheClusterMap.get(dbClusterDO.getId()); + if (ValidateUtils.anyNull(cacheClusterDO) || dbClusterDO.equals(cacheClusterDO)) { + // 不存在 || 相等 + continue; + } + modifyConfig(dbClusterDO); + } } + + private void add(ClusterDO clusterDO) { + if (ValidateUtils.anyNull(clusterDO)) { + return; + } + physicalClusterMetadataManager.addNew(clusterDO); + } + + private void modifyConfig(ClusterDO clusterDO) { + if (ValidateUtils.anyNull(clusterDO)) { + return; + } + PhysicalClusterMetadataManager.updateClusterMap(clusterDO); + 
KafkaClientPool.closeKafkaConsumerPool(clusterDO.getId()); + } + + private void remove(Long clusterId) { + if (ValidateUtils.anyNull(clusterId)) { + return; + } + // 移除缓存信息 + physicalClusterMetadataManager.remove(clusterId); + + // 清除客户端池子 + KafkaClientPool.closeKafkaConsumerPool(clusterId); + } + } \ No newline at end of file diff --git a/kafka-manager-web/pom.xml b/kafka-manager-web/pom.xml index f40e1c35..849eb304 100644 --- a/kafka-manager-web/pom.xml +++ b/kafka-manager-web/pom.xml @@ -4,13 +4,13 @@ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> 4.0.0 kafka-manager-web - 2.1.0-SNAPSHOT + ${kafka-manager.revision} jar kafka-manager com.xiaojukeji.kafka - 2.1.0-SNAPSHOT + ${kafka-manager.revision} diff --git a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/LoginController.java b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/LoginController.java index 06ef70a6..462b46a6 100644 --- a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/LoginController.java +++ b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/LoginController.java @@ -1,8 +1,6 @@ package com.xiaojukeji.kafka.manager.web.api.versionone; -import com.xiaojukeji.kafka.manager.common.constant.Constant; import com.xiaojukeji.kafka.manager.common.entity.Result; -import com.xiaojukeji.kafka.manager.common.entity.ResultStatus; import com.xiaojukeji.kafka.manager.common.entity.ao.account.Account; import com.xiaojukeji.kafka.manager.common.entity.dto.normal.LoginDTO; import com.xiaojukeji.kafka.manager.common.entity.vo.common.AccountVO; @@ -11,8 +9,6 @@ import com.xiaojukeji.kafka.manager.account.LoginService; import com.xiaojukeji.kafka.manager.common.constant.ApiPrefix; import io.swagger.annotations.Api; import io.swagger.annotations.ApiOperation; -import org.slf4j.Logger; -import org.slf4j.LoggerFactory; import 
org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.web.bind.annotation.*;
@@ -28,26 +24,22 @@ import javax.servlet.http.HttpServletResponse;
 @RestController
 @RequestMapping(ApiPrefix.API_V1_SSO_PREFIX)
 public class LoginController {
-    private static final Logger LOGGER = LoggerFactory.getLogger(LoginController.class);
-
     @Autowired
     private LoginService loginService;

     @ApiOperation(value = "登陆", notes = "")
     @RequestMapping(value = "login", method = RequestMethod.POST)
     @ResponseBody
-    public Result<AccountVO> login(HttpServletRequest request,
-                                   HttpServletResponse response,
-                                   @RequestBody LoginDTO dto){
-        Account account = loginService.login(request, response, dto);
-        if (ValidateUtils.isNull(account)) {
-            return Result.buildFrom(ResultStatus.LOGIN_FAILED);
+    public Result<AccountVO> login(HttpServletRequest request, HttpServletResponse response, @RequestBody LoginDTO dto){
+        Result<Account> accountResult = loginService.login(request, response, dto);
+        if (ValidateUtils.isNull(accountResult) || accountResult.failed()) {
+            return new Result<>(accountResult.getCode(), accountResult.getMessage());
         }
         AccountVO vo = new AccountVO();
-        vo.setUsername(account.getUsername());
-        vo.setChineseName(account.getChineseName());
-        vo.setDepartment(account.getDepartment());
-        vo.setRole(account.getAccountRoleEnum().getRole());
+        vo.setUsername(accountResult.getData().getUsername());
+        vo.setChineseName(accountResult.getData().getChineseName());
+        vo.setDepartment(accountResult.getData().getDepartment());
+        vo.setRole(accountResult.getData().getAccountRoleEnum().getRole());
         return new Result<>(vo);
     }
@@ -58,28 +50,4 @@ public class LoginController {
         loginService.logout(request, response, true);
         return new Result();
     }
-
-    @Deprecated
-    @ApiOperation(value = "登录检查", notes = "检查SSO返回的Code")
-    @RequestMapping(value = "xiaojukeji/login-check", method = RequestMethod.POST)
-    @ResponseBody
-    public Result<AccountVO> checkCodeAndGetStaffInfo(HttpServletRequest request,
-                                                      HttpServletResponse response,
-                                                      @RequestBody LoginDTO dto) {
-        Result<AccountVO> ra = login(request, response, dto);
-        if (!Constant.SUCCESS.equals(ra.getCode())) {
-            LOGGER.info("user login failed, req:{} result:{}.", dto, ra);
-        } else {
-            LOGGER.info("user login success, req:{} result:{}.", dto, ra);
-        }
-        return ra;
-    }
-
-    @Deprecated
-    @ApiOperation(value = "登出", notes = "")
-    @RequestMapping(value = "xiaojukeji/logout", method = RequestMethod.DELETE)
-    @ResponseBody
-    public Result logout(HttpServletRequest request, HttpServletResponse response) {
-        return logoff(request, response);
-    }
 }
diff --git a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/gateway/GatewayHeartbeatController.java b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/gateway/GatewayHeartbeatController.java
index 4fe01e22..02a11497 100644
--- a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/gateway/GatewayHeartbeatController.java
+++ b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/gateway/GatewayHeartbeatController.java
@@ -50,7 +50,7 @@ public class GatewayHeartbeatController {
             doList = JsonUtils.parseTopicConnections(clusterId, jsonObject, System.currentTimeMillis());
         } catch (Exception e) {
             LOGGER.error("class=GatewayHeartbeatController||method=receiveTopicConnections||clusterId={}||brokerId={}||msg=parse data failed||exception={}", clusterId, brokerId, e.getMessage());
-            return Result.buildFailure("fail");
+            return Result.buildGatewayFailure("fail");
         }

         topicConnectionService.batchAdd(doList);
diff --git a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/gateway/GatewayServiceDiscoveryController.java b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/gateway/GatewayServiceDiscoveryController.java
index e490368d..425eba75 100644
---
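A note on the `login` change above: `LoginService.login` now returns a `Result<Account>` instead of a bare `Account`, so the controller can forward the service's own error code and message rather than a fixed `LOGIN_FAILED`. (As written, the diff still dereferences `accountResult.getCode()` on the null branch, which would NPE; the sketch below guards null separately.) A minimal sketch of the propagation pattern, with a hypothetical stripped-down `Result` rather than the project's class:

```java
public class ResultDemo {
    // Hypothetical minimal Result wrapper, mirroring the code/message/data shape in the diff.
    static class Result<T> {
        final int code; final String message; final T data;
        Result(int code, String message, T data) { this.code = code; this.message = message; this.data = data; }
        static <T> Result<T> success(T data) { return new Result<>(0, "ok", data); }
        static <T> Result<T> failure(int code, String message) { return new Result<>(code, message, null); }
        boolean failed() { return code != 0; }
    }

    // The service layer decides *why* login failed; the controller only re-wraps code+message.
    static Result<String> login(String user, String pass) {
        if (!"admin".equals(pass)) return Result.failure(1, "bad password");
        return Result.success(user.toUpperCase());
    }

    static Result<String> controller(String user, String pass) {
        Result<String> r = login(user, pass);
        if (r == null) {
            return Result.failure(-1, "login failed"); // the diff's isNull check, minus its latent NPE
        }
        if (r.failed()) {
            return Result.failure(r.code, r.message);  // forward the service's code and message
        }
        return Result.success("vo:" + r.data);         // build the VO only on success
    }

    public static void main(String[] args) {
        System.out.println(controller("admin", "admin").data);   // vo:ADMIN
        System.out.println(controller("admin", "nope").message); // bad password
    }
}
```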
a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/gateway/GatewayServiceDiscoveryController.java
+++ b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/gateway/GatewayServiceDiscoveryController.java
@@ -31,7 +31,6 @@ import java.util.Map;
 @RestController
 @RequestMapping(ApiPrefix.GATEWAY_API_V1_PREFIX)
 public class GatewayServiceDiscoveryController {
-    private final static Logger LOGGER = LoggerFactory.getLogger(GatewayHeartbeatController.class);

     @Autowired
@@ -65,7 +64,7 @@ public class GatewayServiceDiscoveryController {
         KafkaBootstrapServerConfig config = gatewayConfigService.getKafkaBootstrapServersConfig(Long.MIN_VALUE);
         if (ValidateUtils.isNull(config) || ValidateUtils.isNull(config.getClusterIdBootstrapServersMap())) {
-            return Result.buildFailure("call init kafka bootstrap servers failed");
+            return Result.buildGatewayFailure("call init kafka bootstrap servers failed");
         }
         if (ValidateUtils.isEmptyMap(config.getClusterIdBootstrapServersMap())) {
             return Result.buildSuc();
@@ -81,7 +80,7 @@ public class GatewayServiceDiscoveryController {
         KafkaBootstrapServerConfig config = gatewayConfigService.getKafkaBootstrapServersConfig(versionNumber);
         if (ValidateUtils.isNull(config) || ValidateUtils.isNull(config.getClusterIdBootstrapServersMap())) {
-            return Result.buildFailure("call update kafka bootstrap servers failed");
+            return Result.buildGatewayFailure("call update kafka bootstrap servers failed");
         }
         if (ValidateUtils.isEmptyMap(config.getClusterIdBootstrapServersMap())) {
             return Result.buildSuc();
@@ -99,7 +98,7 @@ public class GatewayServiceDiscoveryController {
     public Result getMaxRequestNum(@RequestParam("versionNumber") long versionNumber) {
         RequestQueueConfig config = gatewayConfigService.getRequestQueueConfig(versionNumber);
         if (ValidateUtils.isNull(config)) {
-            return Result.buildFailure("call get request queue size config failed");
+            return Result.buildGatewayFailure("call get request queue size config failed");
         }
         if (ValidateUtils.isNull(config.getMaxRequestQueueSize())) {
             return Result.buildSuc();
@@ -119,7 +118,7 @@ public class GatewayServiceDiscoveryController {
     public Result getAppIdRate(@RequestParam("versionNumber") long versionNumber) {
         AppRateConfig config = gatewayConfigService.getAppRateConfig(versionNumber);
         if (ValidateUtils.isNull(config)) {
-            return Result.buildFailure("call get app rate config failed");
+            return Result.buildGatewayFailure("call get app rate config failed");
         }
         if (ValidateUtils.isNull(config.getAppRateLimit())) {
             return Result.buildSuc();
@@ -139,7 +138,7 @@ public class GatewayServiceDiscoveryController {
     public Result getIpRate(@RequestParam("versionNumber") long versionNumber) {
         IpRateConfig config = gatewayConfigService.getIpRateConfig(versionNumber);
         if (ValidateUtils.isNull(config)) {
-            return Result.buildFailure("call get ip rate config failed");
+            return Result.buildGatewayFailure("call get ip rate config failed");
         }
         if (ValidateUtils.isNull(config.getIpRateLimit())) {
             return Result.buildSuc();
@@ -160,7 +159,7 @@ public class GatewayServiceDiscoveryController {
         SpRateConfig config = gatewayConfigService.getSpRateConfig(versionNumber);
         if (ValidateUtils.isNull(config) || ValidateUtils.isNull(config.getSpRateMap())) {
-            return Result.buildFailure("call update kafka bootstrap servers failed");
+            return Result.buildGatewayFailure("call update kafka bootstrap servers failed");
         }
         if (ValidateUtils.isEmptyMap(config.getSpRateMap())) {
             return Result.buildSuc();
diff --git a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/normal/NormalAccountController.java b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/normal/NormalAccountController.java
index 91a0dbaf..9b35ec87 100644
--- a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/normal/NormalAccountController.java
+++
b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/normal/NormalAccountController.java
@@ -40,8 +40,7 @@ public class NormalAccountController {
     public Result> searchOnJobStaffByKeyWord(@RequestParam("keyWord") String keyWord) {
         List staffList = accountService.searchAccountByPrefix(keyWord);
         if (ValidateUtils.isEmptyList(staffList)) {
-            LOGGER.info("class=NormalAccountController||method=searchOnJobStaffByKeyWord||keyWord={}||msg=staffList is empty!"
-                    ,keyWord);
+            LOGGER.info("class=NormalAccountController||method=searchOnJobStaffByKeyWord||keyWord={}||msg=staffList is empty!", keyWord);
             return new Result<>();
         }
         List voList = new ArrayList<>();
diff --git a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/normal/NormalTopicController.java b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/normal/NormalTopicController.java
index efc0eec8..6e59816b 100644
--- a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/normal/NormalTopicController.java
+++ b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/normal/NormalTopicController.java
@@ -69,7 +69,8 @@ public class NormalTopicController {
         }
         return new Result<>(TopicModelConverter.convert2TopicBasicVO(
                 topicService.getTopicBasicDTO(physicalClusterId, topicName),
-                clusterService.getById(physicalClusterId)
+                clusterService.getById(physicalClusterId),
+                logicalClusterMetadataManager.getTopicLogicalClusterId(physicalClusterId, topicName)
         ));
     }
diff --git a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/op/OpClusterController.java b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/op/OpClusterController.java
index 21547aa9..2caaa69b 100644
--- a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/op/OpClusterController.java
+++
b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/op/OpClusterController.java
@@ -2,6 +2,7 @@ package com.xiaojukeji.kafka.manager.web.api.versionone.op;

 import com.xiaojukeji.kafka.manager.common.entity.Result;
 import com.xiaojukeji.kafka.manager.common.entity.ResultStatus;
+import com.xiaojukeji.kafka.manager.common.entity.dto.op.ControllerPreferredCandidateDTO;
 import com.xiaojukeji.kafka.manager.common.entity.dto.rd.ClusterDTO;
 import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
 import com.xiaojukeji.kafka.manager.service.service.ClusterService;
@@ -13,6 +14,7 @@ import io.swagger.annotations.ApiOperation;
 import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.web.bind.annotation.*;

+
 /**
  * @author zengqiao
  * @date 20/4/23
  */
@@ -25,48 +27,56 @@ public class OpClusterController {
     private ClusterService clusterService;

     @ApiOperation(value = "接入集群")
-    @RequestMapping(value = "clusters", method = RequestMethod.POST)
+    @PostMapping(value = "clusters")
     @ResponseBody
     public Result addNew(@RequestBody ClusterDTO dto) {
         if (ValidateUtils.isNull(dto) || !dto.legal()) {
             return Result.buildFrom(ResultStatus.PARAM_ILLEGAL);
         }
         return Result.buildFrom(
-                clusterService.addNew(
-                        ClusterModelConverter.convert2ClusterDO(dto),
-                        SpringTool.getUserName()
-                )
+                clusterService.addNew(ClusterModelConverter.convert2ClusterDO(dto), SpringTool.getUserName())
         );
     }

     @ApiOperation(value = "删除集群")
-    @RequestMapping(value = "clusters", method = RequestMethod.DELETE)
+    @DeleteMapping(value = "clusters")
     @ResponseBody
     public Result delete(@RequestParam(value = "clusterId") Long clusterId) {
-        return Result.buildFrom(clusterService.deleteById(clusterId));
+        return Result.buildFrom(clusterService.deleteById(clusterId, SpringTool.getUserName()));
     }

     @ApiOperation(value = "修改集群信息")
-    @RequestMapping(value = "clusters", method = RequestMethod.PUT)
+    @PutMapping(value = "clusters")
     @ResponseBody
     public Result modify(@RequestBody ClusterDTO reqObj) {
         if (ValidateUtils.isNull(reqObj) || !reqObj.legal() || ValidateUtils.isNull(reqObj.getClusterId())) {
             return Result.buildFrom(ResultStatus.PARAM_ILLEGAL);
         }
-        ResultStatus rs = clusterService.updateById(
-                ClusterModelConverter.convert2ClusterDO(reqObj),
-                SpringTool.getUserName()
+        return Result.buildFrom(
+                clusterService.updateById(ClusterModelConverter.convert2ClusterDO(reqObj), SpringTool.getUserName())
         );
-        return Result.buildFrom(rs);
     }

     @ApiOperation(value = "开启|关闭集群监控")
-    @RequestMapping(value = "clusters/{clusterId}/monitor", method = RequestMethod.PUT)
+    @PutMapping(value = "clusters/{clusterId}/monitor")
     @ResponseBody
-    public Result modifyStatus(@PathVariable Long clusterId,
-                               @RequestParam("status") Integer status) {
+    public Result modifyStatus(@PathVariable Long clusterId, @RequestParam("status") Integer status) {
         return Result.buildFrom(
                 clusterService.modifyStatus(clusterId, status, SpringTool.getUserName())
         );
     }
+
+    @ApiOperation(value = "增加Controller优先候选的Broker", notes = "滴滴内部引擎特性")
+    @PostMapping(value = "cluster-controller/preferred-candidates")
+    @ResponseBody
+    public Result addControllerPreferredCandidates(@RequestBody ControllerPreferredCandidateDTO dto) {
+        return clusterService.addControllerPreferredCandidates(dto.getClusterId(), dto.getBrokerIdList());
+    }
+
+    @ApiOperation(value = "删除Controller优先候选的Broker", notes = "滴滴内部引擎特性")
+    @DeleteMapping(value = "cluster-controller/preferred-candidates")
+    @ResponseBody
+    public Result deleteControllerPreferredCandidates(@RequestBody ControllerPreferredCandidateDTO dto) {
+        return clusterService.deleteControllerPreferredCandidates(dto.getClusterId(), dto.getBrokerIdList());
+    }
 }
\ No newline at end of file
diff --git a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/op/OpGatewayConfigController.java b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/op/OpGatewayConfigController.java
index
a97bb386..66eb3b7e 100644
--- a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/op/OpGatewayConfigController.java
+++ b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/op/OpGatewayConfigController.java
@@ -3,8 +3,11 @@ package com.xiaojukeji.kafka.manager.web.api.versionone.op;
 import com.xiaojukeji.kafka.manager.bpm.common.entry.apply.gateway.OrderExtensionAddGatewayConfigDTO;
 import com.xiaojukeji.kafka.manager.bpm.common.entry.apply.gateway.OrderExtensionDeleteGatewayConfigDTO;
 import com.xiaojukeji.kafka.manager.bpm.common.entry.apply.gateway.OrderExtensionModifyGatewayConfigDTO;
+import com.xiaojukeji.kafka.manager.common.bizenum.gateway.GatewayConfigKeyEnum;
+import com.xiaojukeji.kafka.manager.common.constant.ApiPrefix;
 import com.xiaojukeji.kafka.manager.common.entity.Result;
 import com.xiaojukeji.kafka.manager.common.entity.ResultStatus;
+import com.xiaojukeji.kafka.manager.common.utils.JsonUtils;
 import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
 import com.xiaojukeji.kafka.manager.service.service.gateway.GatewayConfigService;
 import com.xiaojukeji.kafka.manager.web.converters.GatewayModelConverter;
@@ -16,12 +19,20 @@ import org.springframework.web.bind.annotation.*;
 @Api(tags = "OP-Gateway配置相关接口(REST)")
 @RestController
+@RequestMapping(ApiPrefix.API_V1_OP_PREFIX)
 public class OpGatewayConfigController {
     @Autowired
     private GatewayConfigService gatewayConfigService;

+    @ApiOperation(value = "Gateway配置类型", notes = "")
+    @GetMapping(value = "gateway-configs/type-enums")
+    @ResponseBody
+    public Result getClusterModesEnum() {
+        return new Result<>(JsonUtils.toJson(GatewayConfigKeyEnum.class));
+    }
+
     @ApiOperation(value = "创建Gateway配置", notes = "")
-    @RequestMapping(value = "gateway-configs", method = RequestMethod.POST)
+    @PostMapping(value = "gateway-configs")
     @ResponseBody
     public Result createGatewayConfig(@RequestBody OrderExtensionAddGatewayConfigDTO dto) {
         if (ValidateUtils.isNull(dto) || !dto.legal()) {
@@ -31,7 +42,7 @@ public class OpGatewayConfigController {
     }

     @ApiOperation(value = "修改Gateway配置", notes = "")
-    @RequestMapping(value = "gateway-configs", method = RequestMethod.PUT)
+    @PutMapping(value = "gateway-configs")
     @ResponseBody
     public Result modifyGatewayConfig(@RequestBody OrderExtensionModifyGatewayConfigDTO dto) {
         if (ValidateUtils.isNull(dto) || !dto.legal()) {
@@ -41,7 +52,7 @@ public class OpGatewayConfigController {
     }

     @ApiOperation(value = "删除Gateway配置", notes = "")
-    @RequestMapping(value = "gateway-configs", method = RequestMethod.DELETE)
+    @DeleteMapping(value = "gateway-configs")
     @ResponseBody
     public Result deleteGatewayConfig(@RequestBody OrderExtensionDeleteGatewayConfigDTO dto) {
         if (ValidateUtils.isNull(dto) || !dto.legal()) {
diff --git a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/op/OpUtilsController.java b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/op/OpUtilsController.java
index c7b36cba..6d9e7a74 100644
--- a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/op/OpUtilsController.java
+++ b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/op/OpUtilsController.java
@@ -166,7 +166,7 @@ public class OpUtilsController {
         if (!ResultStatus.SUCCESS.equals(rs)) {
             return Result.buildFrom(rs);
         }
-        topicManagerService.modifyTopic(dto.getClusterId(), dto.getTopicName(), dto.getDescription(), operator);
+        topicManagerService.modifyTopicByOp(dto.getClusterId(), dto.getTopicName(), dto.getAppId(), dto.getDescription(), operator);
         return new Result();
     }
diff --git a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/rd/RdAccountController.java b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/rd/RdAccountController.java
index 1df3dce6..2ca29082 100644
---
a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/rd/RdAccountController.java
+++ b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/rd/RdAccountController.java
@@ -2,6 +2,7 @@ package com.xiaojukeji.kafka.manager.web.api.versionone.rd;

 import com.xiaojukeji.kafka.manager.common.entity.ResultStatus;
 import com.xiaojukeji.kafka.manager.common.entity.vo.common.AccountVO;
+import com.xiaojukeji.kafka.manager.common.utils.SpringTool;
 import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
 import com.xiaojukeji.kafka.manager.common.constant.ApiPrefix;
 import com.xiaojukeji.kafka.manager.web.converters.AccountConverter;
@@ -35,7 +36,7 @@ public class RdAccountController {
     @RequestMapping(value = "accounts", method = RequestMethod.POST)
     @ResponseBody
     public Result addAccount(@RequestBody AccountDTO dto) {
-        if (!dto.legal() || ValidateUtils.isNull(dto.getPassword())) {
+        if (!dto.legal() || ValidateUtils.isBlank(dto.getPassword())) {
             return Result.buildFrom(ResultStatus.PARAM_ILLEGAL);
         }
         ResultStatus rs = accountService.createAccount(AccountConverter.convert2AccountDO(dto));
@@ -46,7 +47,7 @@ public class RdAccountController {
     @RequestMapping(value = "accounts", method = RequestMethod.DELETE)
     @ResponseBody
     public Result deleteAccount(@RequestParam("username") String username) {
-        ResultStatus rs = accountService.deleteByName(username);
+        ResultStatus rs = accountService.deleteByName(username, SpringTool.getUserName());
         return Result.buildFrom(rs);
     }
diff --git a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/rd/RdGatewayConfigController.java b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/rd/RdGatewayConfigController.java
index 3748c3ca..6a46ff0a 100644
--- a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/rd/RdGatewayConfigController.java
+++ b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/rd/RdGatewayConfigController.java
@@ -1,5 +1,6 @@
 package com.xiaojukeji.kafka.manager.web.api.versionone.rd;

+import com.xiaojukeji.kafka.manager.common.constant.ApiPrefix;
 import com.xiaojukeji.kafka.manager.common.entity.Result;
 import com.xiaojukeji.kafka.manager.common.entity.pojo.gateway.GatewayConfigDO;
 import com.xiaojukeji.kafka.manager.common.entity.vo.rd.GatewayConfigVO;
@@ -15,12 +16,13 @@ import java.util.List;
 @Api(tags = "RD-Gateway配置相关接口(REST)")
 @RestController
+@RequestMapping(ApiPrefix.API_V1_RD_PREFIX)
 public class RdGatewayConfigController {
     @Autowired
     private GatewayConfigService gatewayConfigService;

     @ApiOperation(value = "Gateway相关配置信息", notes = "")
-    @RequestMapping(value = "gateway-configs", method = RequestMethod.GET)
+    @GetMapping(value = "gateway-configs")
     @ResponseBody
     public Result<List<GatewayConfigVO>> getGatewayConfigs() {
         List<GatewayConfigDO> doList = gatewayConfigService.list();
diff --git a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/rd/RdKafkaFileController.java b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/rd/RdKafkaFileController.java
index 823bbe70..eaab7dc9 100644
--- a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/rd/RdKafkaFileController.java
+++ b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/rd/RdKafkaFileController.java
@@ -15,9 +15,16 @@ import com.xiaojukeji.kafka.manager.common.constant.ApiPrefix;
 import com.xiaojukeji.kafka.manager.web.converters.KafkaFileConverter;
 import io.swagger.annotations.Api;
 import io.swagger.annotations.ApiOperation;
+import org.apache.tomcat.util.http.fileupload.IOUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.web.bind.annotation.*;
+import org.springframework.web.multipart.MultipartFile;
+import
javax.servlet.http.HttpServletResponse;
+import java.io.InputStream;
+import java.net.URLEncoder;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
@@ -30,6 +37,8 @@ import java.util.Map;
 @RestController
 @RequestMapping(ApiPrefix.API_V1_RD_PREFIX)
 public class RdKafkaFileController {
+    private final static Logger LOGGER = LoggerFactory.getLogger(RdKafkaFileController.class);
+
     @Autowired
     private ClusterService clusterService;
@@ -71,9 +80,33 @@ public class RdKafkaFileController {
         return new Result<>(KafkaFileConverter.convertKafkaFileVOList(kafkaFileDOList, clusterService));
     }

-    @ApiOperation(value = "文件预览", notes = "")
+    @Deprecated
+    @ApiOperation(value = "文件下载", notes = "")
     @RequestMapping(value = "kafka-files/{fileId}/config-files", method = RequestMethod.GET)
-    public Result previewKafkaFile(@PathVariable("fileId") Long fileId) {
-        return kafkaFileService.downloadKafkaConfigFile(fileId);
+    public Result downloadKafkaFile(@PathVariable("fileId") Long fileId, HttpServletResponse response) {
+        Result<MultipartFile> multipartFileResult = kafkaFileService.downloadKafkaFile(fileId);
+
+        if (multipartFileResult.failed() || ValidateUtils.isNull(multipartFileResult.getData())) {
+            return multipartFileResult;
+        }
+
+        InputStream is = null;
+        try {
+            response.setContentType(multipartFileResult.getData().getContentType());
+            response.setCharacterEncoding("UTF-8");
+            response.setHeader("Content-Disposition", "attachment;filename=" + URLEncoder.encode(multipartFileResult.getData().getOriginalFilename(), "UTF-8"));
+            is = multipartFileResult.getData().getInputStream();
+            IOUtils.copy(is, response.getOutputStream());
+        } catch (Exception e) {
+            LOGGER.error("class=RdKafkaFileController||method=downloadKafkaFile||fileId={}||errMsg={}||msg=modify response failed", fileId, e.getMessage());
+        } finally {
+            try {
+                if (is != null) {
+                    is.close();
+                }
+            } catch (Exception e) {
+            }
+        }
+        return Result.buildSuc();
     }
 }
\ No newline at end of file
diff --git
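On the `downloadKafkaFile` endpoint above: it streams the stored file into the servlet response and closes the stream in a `finally` block with an empty catch. The same copy can be expressed with try-with-resources, which closes the stream on every path without the boilerplate. A self-contained sketch using in-memory streams in place of the servlet response (the `Content-Disposition` encoding mirrors the diff's `URLEncoder.encode(filename, "UTF-8")`; `transferTo` stands in for Tomcat's `IOUtils.copy`):

```java
import java.io.*;
import java.net.URLEncoder;

public class DownloadSketch {
    // Builds the Content-Disposition header value exactly as the diff does.
    static String contentDisposition(String filename) throws UnsupportedEncodingException {
        return "attachment;filename=" + URLEncoder.encode(filename, "UTF-8");
    }

    // Copies the "stored file" to the "response" stream; try-with-resources
    // replaces the manual finally/close of the controller.
    static byte[] download(byte[] stored) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (InputStream is = new ByteArrayInputStream(stored)) {
            is.transferTo(out); // InputStream.transferTo, available since Java 9
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(contentDisposition("server 配置.properties"));
        System.out.println(download("broker.id=0".getBytes()).length);
    }
}
```

Note that `URLEncoder` produces form encoding (spaces become `+`), which most but not all browsers accept in this header; RFC 6266's `filename*=UTF-8''...` form is the stricter alternative.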
a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/rd/RdOperateRecordController.java
+++ b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/api/versionone/rd/RdOperateRecordController.java
@@ -24,14 +24,13 @@ import java.util.List;
 @RestController
 @RequestMapping(ApiPrefix.API_V1_RD_PREFIX)
 public class RdOperateRecordController {
-    private static final int MAX_RECORD_COUNT = 200;

     @Autowired
     private OperateRecordService operateRecordService;

     @ApiOperation(value = "查询操作记录", notes = "")
-    @RequestMapping(value = "operate-record", method = RequestMethod.POST)
+    @PostMapping(value = "operate-record")
     @ResponseBody
     public Result> geOperateRecords(@RequestBody OperateRecordDTO dto) {
         if (ValidateUtils.isNull(dto) || !dto.legal()) {
diff --git a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/config/SwaggerConfig.java b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/config/SwaggerConfig.java
index 209d15b5..91d0080c 100644
--- a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/config/SwaggerConfig.java
+++ b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/config/SwaggerConfig.java
@@ -39,10 +39,10 @@ public class SwaggerConfig implements WebMvcConfigurer {
     private ApiInfo apiInfo() {
         return new ApiInfoBuilder()
-                .title("Kafka云平台-接口文档")
-                .description("欢迎使用滴滴出行开源kafka-manager")
+                .title("Logi-KafkaManager 接口文档")
+                .description("欢迎使用滴滴Logi-KafkaManager")
                 .contact("huangyiminghappy@163.com")
-                .version("2.0")
+                .version("2.2.0")
                 .build();
     }
diff --git a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/converters/ClusterModelConverter.java b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/converters/ClusterModelConverter.java
index 9c76a8e5..d92967dd 100644
--- a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/converters/ClusterModelConverter.java
+++ b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/converters/ClusterModelConverter.java
@@ -55,6 +55,7 @@ public class ClusterModelConverter {
         CopyUtils.copyProperties(vo, logicalCluster);
         vo.setClusterId(logicalCluster.getLogicalClusterId());
         vo.setClusterName(logicalCluster.getLogicalClusterName());
+        vo.setClusterIdentification(logicalCluster.getLogicalClusterIdentification());
         return vo;
     }
@@ -78,9 +79,8 @@ public class ClusterModelConverter {
         ClusterDO clusterDO = new ClusterDO();
         CopyUtils.copyProperties(clusterDO, reqObj);
         clusterDO.setId(reqObj.getClusterId());
-        clusterDO.setSecurityProperties(
-                ValidateUtils.isNull(clusterDO.getSecurityProperties())? "": clusterDO.getSecurityProperties()
-        );
+        clusterDO.setSecurityProperties(ValidateUtils.isNull(reqObj.getSecurityProperties())? "": reqObj.getSecurityProperties());
+        clusterDO.setJmxProperties(ValidateUtils.isNull(reqObj.getJmxProperties())?
"": reqObj.getJmxProperties()); return clusterDO; } diff --git a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/converters/GatewayModelConverter.java b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/converters/GatewayModelConverter.java index f032e921..6a8b5f79 100644 --- a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/converters/GatewayModelConverter.java +++ b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/converters/GatewayModelConverter.java @@ -67,6 +67,7 @@ public class GatewayModelConverter { vo.setName(configDO.getName()); vo.setValue(configDO.getValue()); vo.setVersion(configDO.getVersion()); + vo.setDescription(configDO.getDescription()); vo.setCreateTime(configDO.getCreateTime()); vo.setModifyTime(configDO.getModifyTime()); voList.add(vo); @@ -76,18 +77,20 @@ public class GatewayModelConverter { public static GatewayConfigDO convert2GatewayConfigDO(OrderExtensionAddGatewayConfigDTO configDTO) { GatewayConfigDO configDO = new GatewayConfigDO(); - configDO.setType(configDO.getType()); - configDO.setName(configDO.getName()); - configDO.setValue(configDO.getValue()); + configDO.setType(configDTO.getType()); + configDO.setName(configDTO.getName()); + configDO.setValue(configDTO.getValue()); + configDO.setDescription(ValidateUtils.isNull(configDTO.getDescription())? "": configDTO.getDescription()); return configDO; } public static GatewayConfigDO convert2GatewayConfigDO(OrderExtensionModifyGatewayConfigDTO configDTO) { GatewayConfigDO configDO = new GatewayConfigDO(); - configDO.setId(configDO.getId()); - configDO.setType(configDO.getType()); - configDO.setName(configDO.getName()); - configDO.setValue(configDO.getValue()); + configDO.setId(configDTO.getId()); + configDO.setType(configDTO.getType()); + configDO.setName(configDTO.getName()); + configDO.setValue(configDTO.getValue()); + configDO.setDescription(ValidateUtils.isNull(configDTO.getDescription())? 
"": configDTO.getDescription()); return configDO; } } \ No newline at end of file diff --git a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/converters/LogicalClusterModelConverter.java b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/converters/LogicalClusterModelConverter.java index 3067aa12..afdf0f03 100644 --- a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/converters/LogicalClusterModelConverter.java +++ b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/converters/LogicalClusterModelConverter.java @@ -21,6 +21,7 @@ public class LogicalClusterModelConverter { LogicalClusterVO vo = new LogicalClusterVO(); vo.setLogicalClusterId(logicalClusterDO.getId()); vo.setLogicalClusterName(logicalClusterDO.getName()); + vo.setLogicalClusterIdentification(logicalClusterDO.getIdentification()); vo.setPhysicalClusterId(logicalClusterDO.getClusterId()); vo.setMode(logicalClusterDO.getMode()); vo.setRegionIdList(ListUtils.string2LongList(logicalClusterDO.getRegionList())); @@ -45,6 +46,7 @@ public class LogicalClusterModelConverter { public static LogicalClusterDO convert2LogicalClusterDO(LogicalClusterDTO dto) { LogicalClusterDO logicalClusterDO = new LogicalClusterDO(); logicalClusterDO.setName(dto.getName()); + logicalClusterDO.setIdentification(dto.getIdentification()); logicalClusterDO.setClusterId(dto.getClusterId()); logicalClusterDO.setRegionList(ListUtils.longList2String(dto.getRegionIdList())); logicalClusterDO.setMode(dto.getMode()); diff --git a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/converters/TopicModelConverter.java b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/converters/TopicModelConverter.java index 133ac019..4e28ca8b 100644 --- a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/converters/TopicModelConverter.java +++ b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/converters/TopicModelConverter.java @@ 
-22,9 +22,9 @@ import java.util.List;
  * @date 2017/6/1.
  */
 public class TopicModelConverter {
-    public static TopicBasicVO convert2TopicBasicVO(TopicBasicDTO dto, ClusterDO clusterDO) {
+    public static TopicBasicVO convert2TopicBasicVO(TopicBasicDTO dto, ClusterDO clusterDO, Long logicalClusterId) {
         TopicBasicVO vo = new TopicBasicVO();
-        vo.setClusterId(dto.getClusterId());
+        vo.setClusterId(logicalClusterId);
         vo.setAppId(dto.getAppId());
         vo.setAppName(dto.getAppName());
         vo.setPartitionNum(dto.getPartitionNum());
diff --git a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/inteceptor/WebMetricsInterceptor.java b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/inteceptor/WebMetricsInterceptor.java
index bf8bc1e1..576fe036 100644
--- a/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/inteceptor/WebMetricsInterceptor.java
+++ b/kafka-manager-web/src/main/java/com/xiaojukeji/kafka/manager/web/inteceptor/WebMetricsInterceptor.java
@@ -119,7 +119,7 @@ public class WebMetricsInterceptor {
         ServletRequestAttributes attributes = (ServletRequestAttributes) RequestContextHolder.getRequestAttributes();
         String uri = attributes.getRequest().getRequestURI();
         if (uri.contains(ApiPrefix.GATEWAY_API_V1_PREFIX)) {
-            return Result.buildFailure("api limited");
+            return Result.buildGatewayFailure("api limited");
         }
         return new Result<>(ResultStatus.OPERATION_FORBIDDEN);
     }
diff --git a/kafka-manager-web/src/main/resources/application.yml b/kafka-manager-web/src/main/resources/application.yml
index 6d7d9bec..89fca91c 100644
--- a/kafka-manager-web/src/main/resources/application.yml
+++ b/kafka-manager-web/src/main/resources/application.yml
@@ -11,7 +11,8 @@ spring:
     name: kafkamanager
   datasource:
     kafka-manager:
-      jdbc-url: jdbc:mysql://127.0.0.1:3306/kafka_manager?characterEncoding=UTF-8&serverTimezone=GMT%2B8
+
+      jdbc-url: jdbc:mysql://127.0.0.1:3306/logi_kafka_manager?characterEncoding=UTF-8&useSSL=false&serverTimezone=GMT%2B8
       username: admin
       password: admin
       driver-class-name: com.mysql.jdbc.Driver
@@ -31,7 +32,7 @@ logging:
 custom:
   idc: cn
   jmx:
-    max-conn: 10
+    max-conn: 10 # as of version 2.3 this setting no longer takes effect here
 store-metrics-task:
   community:
     broker-metrics-enabled: true
@@ -52,8 +53,11 @@ account:
 kcm:
   enabled: false
-  storage:
-    base-url: http://127.0.0.1
+  s3:
+    endpoint: s3.didiyunapi.com
+    access-key: 1234567890
+    secret-key: 0987654321
+    bucket: logi-kafka
 n9e:
   base-url: http://127.0.0.1:8004
   user-token: 12345678
@@ -79,3 +83,16 @@ notify:
   topic-name: didi-kafka-notify
 order:
   detail-url: http://127.0.0.1
+
+ldap:
+  enabled: false
+  url: ldap://127.0.0.1:389/
+  basedn: dc=tsign,dc=cn
+  factory: com.sun.jndi.ldap.LdapCtxFactory
+  filter: sAMAccountName
+  security:
+    authentication: simple
+    principal: cn=admin,dc=tsign,dc=cn
+    credentials: admin
+  auth-user-registration-role: normal
+  auth-user-registration: true
diff --git a/pom.xml b/pom.xml
index d5e74d61..d4165a85 100644
--- a/pom.xml
+++ b/pom.xml
@@ -6,7 +6,7 @@
     <groupId>com.xiaojukeji.kafka</groupId>
     <artifactId>kafka-manager</artifactId>
     <packaging>pom</packaging>
-    <version>2.1.0-SNAPSHOT</version>
+    <version>${kafka-manager.revision}</version>

     <parent>
         <groupId>org.springframework.boot</groupId>
@@ -16,11 +16,10 @@
-        2.0.0-SNAPSHOT
+        2.3.0-SNAPSHOT
         2.7.0
         1.5.13
-        true
         true
         1.8
@@ -224,6 +223,12 @@
             <artifactId>curator-recipes</artifactId>
             <version>2.10.0</version>
         </dependency>
+
+        <dependency>
+            <groupId>io.minio</groupId>
+            <artifactId>minio</artifactId>
+            <version>7.1.0</version>
+        </dependency>
\ No newline at end of file
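On the pom changes above: the hard-coded `2.1.0-SNAPSHOT` versions are replaced by a `${kafka-manager.revision}` property, which must be defined once in the root pom's `<properties>`. The `2.3.0-SNAPSHOT` value comes from this diff's root-pom properties hunk, whose tag names were lost in extraction, so the exact property tag below is an assumption; note also that for `install`/`deploy`, Maven's CI-friendly-versions guidance pairs such placeholders with the flatten-maven-plugin so the resolved version ends up in published poms.

```xml
<properties>
    <kafka-manager.revision>2.3.0-SNAPSHOT</kafka-manager.revision>
</properties>
```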