Merge pull request #1 from didi/master

Sync commit
This commit is contained in:
杨光
2021-04-23 11:31:35 +08:00
committed by GitHub
238 changed files with 4786 additions and 1477 deletions

README.md

@@ -5,66 +5,94 @@
**A one-stop `Apache Kafka` cluster metric monitoring and operations management platform**
---
## Key Features
### Quick Trial
- Demo environment: http://117.51.146.109:8080, account/password: admin/admin
### Cluster Monitoring
- Multi-version cluster management, supporting Kafka `0.10.2` through `2.x`
- Historical and real-time key metrics for clusters, Topics, Brokers, and more
### Cluster Operations
- Cluster operations, including managing clusters through logical Regions
- Broker operations, including preferred-replica election
- Topic operations, including creation, query, partition expansion, property modification, data sampling, and migration
- Consumer-group operations, including resetting consume offsets to a specified time or a specified offset
This README covers the target users and product positioning of Didi Logi-KafkaManager; the demo environment lets you walk through the full workflow of Kafka cluster metric monitoring and operations management. If Didi Logi-KafkaManager already runs in your production environment and you would like better official support and guidance, you can join the official exchange platform through [`OCE certification`](http://obsuite.didiyun.com/open/openAuth).
### User Perspectives
## 1 Product Overview
Didi Logi-KafkaManager grew out of years of Kafka operations practice inside Didi. It is a shared, multi-tenant Kafka cloud platform built for Kafka users and Kafka operators, focused on core scenarios such as Kafka operations management, monitoring and alerting, and resource governance, and proven on large clusters with massive data volumes. It reached 90% internal user satisfaction and has entered commercial cooperation with several well-known companies.
- Separate views for Kafka users, Kafka developers, and Kafka operators
- Separate permissions for Kafka users, Kafka developers, and Kafka operators
### 1.1 Quick Trial
- Demo environment: http://117.51.146.109:8080, account/password: admin/admin
### 1.2 Experience Maps
Unlike similar products, which mostly offer a single (administrator) view, Didi Logi-KafkaManager builds role-based, multi-scenario experience maps: a **user experience map, an operator experience map, and a platform-operations experience map**.
#### 1.2.1 User Experience Map
- Platform tenant application: apply for an application (App) as the user identity in Kafka, authenticated by AppID + password
- Cluster resource application: apply on demand, use on demand; use the shared clusters the platform provides, or request a dedicated cluster for an application
- Topic application: create Topics under an application (App), or request read/write permissions on other Topics
- Topic operations: Topic data sampling, quota adjustment, partition requests, and similar operations
- Metric monitoring: latency statistics for each stage of Topic production and consumption, monitoring performance metrics at different percentiles
- Offset reset: reset consume offsets to a specified time or a specified position
#### 1.2.2 Operator Experience Map
- Multi-version cluster management: supports Kafka `0.10.2` through `2.x`
- Cluster monitoring: historical and real-time key metrics for clusters, Topics, Brokers, and more, with a health-score system
- Cluster operations: group subsets of Brokers into Regions, use Regions as the unit of resource partitioning, and split logical clusters by business line and assurance level
- Broker operations: including preferred-replica election and other operations
- Topic operations: including creation, query, partition expansion, property modification, migration, offlining, and more
## kafka-manager Architecture
#### 1.2.3 Platform-Operations Experience Map
- Resource governance: codified governance methods; frequent issues such as Topic partition hotspots and insufficient partitions get expert-level, repeatable treatment
- Ticketing system: operations such as Topic creation, quota adjustment, and partition requests are approved by professional operators, standardizing resource usage and keeping the platform stable
- Cost control: Topic and cluster resources are requested and used on demand; fees are calculated from traffic, helping enterprises build a big-data cost-accounting system
### 1.3 Core Strengths
- Efficient monitoring: tracks many core metrics, computes statistics at different percentiles, and offers rich metric reports that help users and operators locate problems quickly
- Convenient management: Regions define the unit of cluster resource partitioning, and logical clusters are split by assurance level, easing resource isolation, improving scalability, and enabling strong server-side control
- Expert services: years of operations practice inside Didi distilled into governance methods and a health-score system, with expert-level handling of frequent issues such as Topic partition hotspots and insufficient partitions
- Ecosystem: integrates with Didi's Nightingale (n9e) monitoring and alerting system, bundling monitoring and alerting, cluster deployment, and cluster upgrades into an operations ecosystem that distills expert services and makes operations more efficient
### 1.4 Didi Logi-KafkaManager Architecture
![kafka-manager-arch](https://img-ys011.didistatic.com/static/dicloudpub/do1_xgDHNDLj2ChKxctSuf72)
## Related Documentation
## 2 Related Documentation
- [kafka-manager Installation Guide](docs/install_guide/install_guide_cn.md)
- [kafka-manager Cluster Onboarding](docs/user_guide/add_cluster/add_cluster.md)
- [kafka-manager User Guide](docs/user_guide/user_guide_cn.md)
- [kafka-manager FAQ](docs/user_guide/faq.md)
### 2.1 Product Documentation
- [Didi Logi-KafkaManager Installation Guide](docs/install_guide/install_guide_cn.md)
- [Didi Logi-KafkaManager Cluster Onboarding](docs/user_guide/add_cluster/add_cluster.md)
- [Didi Logi-KafkaManager User Guide](docs/user_guide/user_guide_cn.md)
- [Didi Logi-KafkaManager FAQ](docs/user_guide/faq.md)
## DingTalk Exchange Group
### 2.2 Community Articles
- [Product introduction on the Didi Cloud website](https://www.didiyun.com/production/logi-KafkaManager.html)
- [Seven years in the making: the Didi Logi log service suite](https://mp.weixin.qq.com/s/-KQp-Qo3WKEOc9wIR2iFnw)
- [Didi Logi-KafkaManager: a one-stop Kafka monitoring and management platform](https://mp.weixin.qq.com/s/9qSZIkqCnU6u9nLMvOOjIQ)
- [The open-source journey of Didi Logi-KafkaManager](https://xie.infoq.cn/article/0223091a99e697412073c0d64)
- [Didi Logi-KafkaManager video tutorial series](https://mp.weixin.qq.com/s/9X7gH0tptHPtfjPPSdGO8g)
- [Kafka in practice (15): a study of Didi's open-source Kafka management platform Logi-KafkaManager, by A叶子叶来](https://blog.csdn.net/yezonggang/article/details/113106244)
![dingding_group](./docs/assets/images/common/dingding_group.jpg)
DingTalk group ID: 32821440
## OCE Certification
OCE is a certification mechanism and exchange platform tailored to production users of Logi-KafkaManager. OCE-certified companies receive better technical support, such as dedicated technical salons, one-on-one exchanges, and a dedicated Q&A group. If Logi-KafkaManager runs in production at your company, [come and join](http://obsuite.didiyun.com/open/openAuth)!
## 3 Didi Logi Open-Source User Group
## Project Members
![image](https://user-images.githubusercontent.com/5287750/111266722-e531d800-8665-11eb-9242-3484da5a3099.png)
WeChat: follow the official account Obsuite and reply "Logi加群" to join the group
### Internal Core Members
`iceyuhui`, `liuyaguang`, `limengmonty`, `zhangliangmike`, `nullhuangyiming`, `zengqiao`, `eilenexuzhe`, `huangjiaweihjw`
![dingding_group](./docs/assets/images/common/dingding_group.jpg)
DingTalk group ID: 32821440
### External Contributors
## 4 OCE Certification
OCE is a certification mechanism and exchange platform tailored to production users of Didi Logi-KafkaManager. OCE-certified companies receive better technical support, such as dedicated technical salons, one-on-one exchanges, and a dedicated Q&A group. If Logi-KafkaManager runs in production at your company, [come and join](http://obsuite.didiyun.com/open/openAuth)!
## 5 Project Members
### 5.1 Internal Core Members
`iceyuhui`, `liuyaguang`, `limengmonty`, `zhangliangmike`, `nullhuangyiming`, `zengqiao`, `eilenexuzhe`, `huangjiaweihjw`, `zhaoyinrui`, `marzkonglingxu`, `joysunchao`
### 5.2 External Contributors
`fangjunyu`, `zhoutaiyang`
## License
## 6 License
`kafka-manager` is distributed and used under the `Apache-2.0` license; see the [LICENSE file](./LICENSE) for details.

Releases_Notes.md (new file)

@@ -0,0 +1,97 @@
---
![kafka-manager-logo](./docs/assets/images/common/logo_name.png)
**A one-stop `Apache Kafka` cluster metric monitoring and operations management platform**
---
## v2.3.0
Release date: 2021-02-08
### Enhancements
- Added support for Docker-based deployment
- Designated Brokers can now serve as candidate controllers
- Gateway configurations can be added and managed
- Consumer-group status can now be retrieved
- Added JMX authentication for clusters
### UX Improvements
- Streamlined the flows for editing user roles and changing passwords
- Added search by consumerID
- Improved the wording of "Topic connection info", "reset consumer-group offsets", and "modify Topic retention time"
- Added links to the resource-application document where relevant
### Bug Fixes
- Fixed the time axis rendering incorrectly in Broker monitoring charts
- Fixed an incorrect alert-period unit when creating Nightingale (n9e) alert rules
## v2.2.0
Release date: 2021-01-25
### Enhancements
- Streamlined batch operations on tickets
- Added real-time 75th/99th-percentile latency data for Topics
- Added a scheduled task that periodically writes ownerless Topics not yet persisted into the DB
### UX Improvements
- Added links to the cluster-onboarding document where relevant
- Clarified the meaning of physical versus logical clusters
- The Topic detail page and the partition-expansion dialog now show the Region a Topic belongs to
- Improved the flow for setting Topic retention time during Topic approval
- Improved error messages for Topic/application requests and approvals
- Improved the wording of the Topic data-sampling action
- Improved the prompt operators see when deleting a Topic
- Improved the deletion logic and prompt when operators delete a Region
- Improved the prompt when operators delete a logical cluster
- Improved the file-type restrictions for uploading cluster configuration files
### Bug Fixes
- Fixed a validation error on special characters in application names
- Fixed ordinary users being able to access application details without authorization
- Fixed the data compression format being unavailable after a Kafka version upgrade
- Fixed deleted logical clusters or Topics still showing in the UI
- Fixed duplicated result prompts during Leader rebalance operations
## v2.1.0
Release date: 2020-12-19
### UX Improvements
- Improved the background style shown while pages load
- Streamlined the flow for ordinary users to request Topic permissions
- Tightened the permission checks for Topic quota and partition requests
- Improved the wording when revoking Topic permissions
- Improved the field names in the quota-request form
- Streamlined the offset-reset flow
- Improved the form for creating Topic migration tasks
- Improved the dialog style for Topic partition expansion
- Improved the chart styles for cluster Broker monitoring
- Improved the form for creating logical clusters
- Improved the prompt for cluster security protocols
### Bug Fixes
- Fixed occasional failures when resetting consume offsets


@@ -4,8 +4,9 @@ cd $workspace
## constant
OUTPUT_DIR=./output
KM_VERSION=2.1.0
APP_NAME=kafka-manager-$KM_VERSION
KM_VERSION=2.3.1
APP_NAME=kafka-manager
APP_DIR=${APP_NAME}-${KM_VERSION}
MYSQL_TABLE_SQL_FILE=./docs/install_guide/create_mysql_table.sql
CONFIG_FILE=./kafka-manager-web/src/main/resources/application.yml
@@ -28,15 +29,15 @@ function build() {
function make_output() {
# create the output directory
rm -rf ${OUTPUT_DIR} &>/dev/null
mkdir -p ${OUTPUT_DIR}/${APP_NAME} &>/dev/null
mkdir -p ${OUTPUT_DIR}/${APP_DIR} &>/dev/null
# populate the output directory
(
cp -rf ${MYSQL_TABLE_SQL_FILE} ${OUTPUT_DIR}/${APP_NAME} && # copy the SQL init script into the output directory
cp -rf ${CONFIG_FILE} ${OUTPUT_DIR}/${APP_NAME} && # copy application.yml into the output directory
cp -rf ${MYSQL_TABLE_SQL_FILE} ${OUTPUT_DIR}/${APP_DIR} && # copy the SQL init script into the output directory
cp -rf ${CONFIG_FILE} ${OUTPUT_DIR}/${APP_DIR} && # copy application.yml into the output directory
# copy the program package into the output directory
cp kafka-manager-web/target/kafka-manager-web-${KM_VERSION}-SNAPSHOT.jar ${OUTPUT_DIR}/${APP_NAME}/${APP_NAME}-SNAPSHOT.jar
cp kafka-manager-web/target/kafka-manager-web-${KM_VERSION}-SNAPSHOT.jar ${OUTPUT_DIR}/${APP_DIR}/${APP_NAME}.jar
echo -e "make output ok."
) || { echo -e "make output error"; exit 2; } # exit with a non-zero code if populating the output directory fails
}
@@ -44,7 +45,7 @@ function make_output() {
function make_package() {
# tar up the output directory
(
cd ${OUTPUT_DIR} && tar cvzf ${APP_NAME}.tar.gz ${APP_NAME}
cd ${OUTPUT_DIR} && tar cvzf ${APP_DIR}.tar.gz ${APP_DIR}
echo -e "make package ok."
) || { echo -e "make package error"; exit 2; } # exit with a non-zero code if packaging fails
}


@@ -0,0 +1,44 @@
FROM openjdk:8-jdk-alpine3.9
LABEL author="yangvipguang"
ENV VERSION 2.1.0
ENV JAR_PATH kafka-manager-web/target
COPY $JAR_PATH/kafka-manager-web-$VERSION-SNAPSHOT.jar /tmp/app.jar
COPY $JAR_PATH/application.yml /km/
RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/g' /etc/apk/repositories
RUN apk add --no-cache --virtual .build-deps \
font-adobe-100dpi \
ttf-dejavu \
fontconfig \
curl \
apr \
apr-util \
apr-dev \
tomcat-native \
&& apk del .build-deps
ENV AGENT_HOME /opt/agent/
WORKDIR /tmp
COPY docker-depends/config.yaml $AGENT_HOME
COPY docker-depends/jmx_prometheus_javaagent-0.14.0.jar $AGENT_HOME
ENV JAVA_AGENT="-javaagent:$AGENT_HOME/jmx_prometheus_javaagent-0.14.0.jar=9999:$AGENT_HOME/config.yaml"
ENV JAVA_HEAP_OPTS="-Xms1024M -Xmx1024M -Xmn100M "
ENV JAVA_OPTS="-verbose:gc \
-XX:+PrintGC -XX:+PrintGCDetails -XX:+PrintHeapAtGC -Xloggc:/tmp/gc.log -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps \
-XX:MaxMetaspaceSize=256M -XX:+DisableExplicitGC -XX:+UseStringDeduplication \
-XX:+UseG1GC -XX:+HeapDumpOnOutOfMemoryError -XX:-UseContainerSupport"
#-Xlog:gc -Xlog:gc* -Xlog:gc+heap=trace -Xlog:safepoint
EXPOSE 8080 9999
ENTRYPOINT ["sh","-c","java -jar $JAVA_HEAP_OPTS $JAVA_OPTS /tmp/app.jar --spring.config.location=/km/application.yml"]
## Prometheus JMX monitoring is disabled by default; to enable it, uncomment the ENTRYPOINT below and comment out the default ENTRYPOINT above.
## ENTRYPOINT ["sh","-c","java -jar $JAVA_AGENT $JAVA_HEAP_OPTS $JAVA_OPTS /tmp/app.jar --spring.config.location=/km/application.yml"]


@@ -0,0 +1,5 @@
---
startDelaySeconds: 0
ssl: false
lowercaseOutputName: false
lowercaseOutputLabelNames: false


@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

container/helm/Chart.yaml (new file)

@@ -0,0 +1,24 @@
apiVersion: v2
name: didi-km
description: A Helm chart for Kubernetes
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"


@@ -0,0 +1,22 @@
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range $host := .Values.ingress.hosts }}
  {{- range .paths }}
  http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ .path }}
  {{- end }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
  export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "didi-km.fullname" . }})
  export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
     NOTE: It may take a few minutes for the LoadBalancer IP to be available.
           You can watch the status by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "didi-km.fullname" . }}'
  export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "didi-km.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
  echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
  export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "didi-km.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
  export CONTAINER_PORT=$(kubectl get pod --namespace {{ .Release.Namespace }} $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:$CONTAINER_PORT
{{- end }}


@@ -0,0 +1,62 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "didi-km.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "didi-km.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "didi-km.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "didi-km.labels" -}}
helm.sh/chart: {{ include "didi-km.chart" . }}
{{ include "didi-km.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "didi-km.selectorLabels" -}}
app.kubernetes.io/name: {{ include "didi-km.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "didi-km.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "didi-km.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}


@@ -0,0 +1,88 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: km-cm
data:
  application.yml: |
    server:
      port: 8080
      tomcat:
        accept-count: 1000
        max-connections: 10000
        max-threads: 800
        min-spare-threads: 100
    spring:
      application:
        name: kafkamanager
      datasource:
        kafka-manager:
          jdbc-url: jdbc:mysql://xxxxx:3306/kafka-manager?characterEncoding=UTF-8&serverTimezone=GMT%2B8&useSSL=false
          username: admin
          password: admin
          driver-class-name: com.mysql.jdbc.Driver
      main:
        allow-bean-definition-overriding: true
      profiles:
        active: dev
      servlet:
        multipart:
          max-file-size: 100MB
          max-request-size: 100MB
    logging:
      config: classpath:logback-spring.xml
    custom:
      idc: cn
      jmx:
        max-conn: 20
      store-metrics-task:
        community:
          broker-metrics-enabled: true
          topic-metrics-enabled: true
        didi:
          app-topic-metrics-enabled: false
          topic-request-time-metrics-enabled: false
          topic-throttled-metrics: false
        save-days: 7
    # task-related switches
    task:
      op:
        sync-topic-enabled: false # periodically sync Topics not yet persisted into the DB
    account:
      ldap:
    kcm:
      enabled: false
      storage:
        base-url: http://127.0.0.1
      n9e:
        base-url: http://127.0.0.1:8004
        user-token: 12345678
        timeout: 300
        account: root
        script-file: kcm_script.sh
    monitor:
      enabled: false
      n9e:
        nid: 2
        user-token: 1234567890
        mon:
          base-url: http://127.0.0.1:8032
        sink:
          base-url: http://127.0.0.1:8006
        rdb:
          base-url: http://127.0.0.1:80
    notify:
      kafka:
        cluster-id: 95
        topic-name: didi-kafka-notify
      order:
        detail-url: http://127.0.0.1


@@ -0,0 +1,56 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "didi-km.fullname" . }}
  labels:
    {{- include "didi-km.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "didi-km.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "didi-km.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "didi-km.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
            - name: jmx-metrics
              containerPort: 9999
              protocol: TCP
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}


@@ -0,0 +1,28 @@
{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: {{ include "didi-km.fullname" . }}
  labels:
    {{- include "didi-km.labels" . | nindent 4 }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ include "didi-km.fullname" . }}
  minReplicas: {{ .Values.autoscaling.minReplicas }}
  maxReplicas: {{ .Values.autoscaling.maxReplicas }}
  metrics:
    {{- if .Values.autoscaling.targetCPUUtilizationPercentage }}
    - type: Resource
      resource:
        name: cpu
        targetAverageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
    {{- end }}
    {{- if .Values.autoscaling.targetMemoryUtilizationPercentage }}
    - type: Resource
      resource:
        name: memory
        targetAverageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }}
    {{- end }}
{{- end }}


@@ -0,0 +1,41 @@
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "didi-km.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    {{- include "didi-km.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: {{ .path }}
            backend:
              serviceName: {{ $fullName }}
              servicePort: {{ $svcPort }}
          {{- end }}
    {{- end }}
{{- end }}


@@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
  name: {{ include "didi-km.fullname" . }}
  labels:
    {{- include "didi-km.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    {{- include "didi-km.selectorLabels" . | nindent 4 }}


@@ -0,0 +1,12 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "didi-km.serviceAccountName" . }}
  labels:
    {{- include "didi-km.labels" . | nindent 4 }}
  {{- with .Values.serviceAccount.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
{{- end }}


@@ -0,0 +1,15 @@
apiVersion: v1
kind: Pod
metadata:
  name: "{{ include "didi-km.fullname" . }}-test-connection"
  labels:
    {{- include "didi-km.labels" . | nindent 4 }}
  annotations:
    "helm.sh/hook": test
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args: ['{{ include "didi-km.fullname" . }}:{{ .Values.service.port }}']
  restartPolicy: Never


@@ -0,0 +1,79 @@
# Default values for didi-km.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: docker.io/yangvipguang/km
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: "v18"

imagePullSecrets: []
nameOverride: ""
fullnameOverride: "km"

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

podAnnotations: {}

podSecurityContext: {}
  # fsGroup: 2000

securityContext: {}
  # capabilities:
  #   drop:
  #     - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000

service:
  type: ClusterIP
  port: 8080

ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths: []
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources:
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  limits:
    cpu: 50m
    memory: 2048Mi
  requests:
    cpu: 10m
    memory: 200Mi

autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80

nodeSelector: {}

tolerations: []

affinity: {}


@@ -0,0 +1,101 @@
---
![kafka-manager-logo](../assets/images/common/logo_name.png)
**A one-stop `Apache Kafka` cluster metric monitoring and operations management platform**
---
## Fixing JMX Connection Failures
Once a cluster is onboarded into Logi-KafkaManager, you can see its Broker list. If, at that point, real-time Topic traffic or real-time Broker traffic is not visible, the cause is most likely a JMX connection problem.
Work through the checks below step by step.
### 1. Problems & Symptoms
**Case 1: JMX is not enabled**
If JMX is not enabled, jump straight to section `2. Fix` to see how to enable it.
![check_jmx_opened](./assets/connect_jmx_failed/check_jmx_opened.jpg)
**Case 2: JMX is misconfigured**
Even with the `JMX` port open, an incorrect configuration can still cause connection failures. Common causes:
- `JMX` misconfiguration: see `2. Fix`
- A firewall or network restriction: `telnet` from another machine on the same network to check whether the port is reachable.
- Username/password authentication is required: see `3. Fix - authenticated JMX`
Example error logs:
```
# Error 1: the message reports the real IP, which usually means the JMX configuration itself is wrong.
2021-01-27 10:06:20.730 ERROR 50901 --- [ics-Thread-1-62] c.x.k.m.c.utils.jmx.JmxConnectorWrap : JMX connect exception, host:192.168.0.1 port:9999.
java.rmi.ConnectException: Connection refused to host: 192.168.0.1; nested exception is:
# Error 2: the message reports 127.0.0.1, which points at a problem with the machine's hostname configuration.
2021-01-27 10:06:20.730 ERROR 50901 --- [ics-Thread-1-62] c.x.k.m.c.utils.jmx.JmxConnectorWrap : JMX connect exception, host:127.0.0.1 port:9999.
java.rmi.ConnectException: Connection refused to host: 127.0.0.1;; nested exception is:
```
### 2. Fix
This section only covers the most common approach; if you know a better one, please share it.
Modify `kafka-server-start.sh`:
```
# add the JMX port below this block
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
export JMX_PORT=9999 # add this line; the value does not have to be 9999
fi
```
&nbsp;
Modify `kafka-run-class.sh`:
```
# JMX settings
if [ -z "$KAFKA_JMX_OPTS" ]; then
KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=${the IP of this machine}"
fi
# JMX port to use
if [ $JMX_PORT ]; then
KAFKA_JMX_OPTS="$KAFKA_JMX_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT -Dcom.sun.management.jmxremote.rmi.port=$JMX_PORT"
fi
```
### 3. Fix - Authenticated JMX
If you jumped straight to this section, read the previous one (`2. Fix`) first to make sure the `JMX` configuration itself is correct.
If the JMX configuration is fine and the connection still fails because of authentication, use the approach below.
**This backend support was only recently finished and may not be fully polished; reach out any time if you hit problems.**
Since `Logi-KafkaManager 2.2.0+`, the backend supports authenticated `JMX` connections, but there is no UI for it yet. Instead, write the `JMX` authentication settings into the `jmx_properties` field of the `cluster` table.
The value is a `json` string, for example:
```json
{
  "maxConn": 10,
  "username": "xxxxx",
  "password": "xxxx",
  "openSSL": true
}
```
- `maxConn`: KM's maximum number of JMX connections per Broker
- `username` / `password`: the JMX credentials
- `openSSL`: `true` enables SSL, `false` disables it
&nbsp;
Example SQL:
```sql
UPDATE cluster SET jmx_properties='{ "maxConn": 10, "username": "xxxxx", "password": "xxxx", "openSSL": false }' where id={xxx};
```
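Before restarting anything, it can be worth verifying the credentials with a standalone JMX connection test. The sketch below uses only the JDK's built-in JMX client; the host, port, and credentials are illustrative placeholders matching the examples above, not values from the project:
```java
import java.util.HashMap;
import java.util.Map;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Minimal authenticated-JMX connectivity check (illustrative host/credentials).
public class JmxAuthCheck {
    public static void main(String[] args) throws Exception {
        String host = "192.168.0.1"; // broker host, placeholder
        int port = 9999;             // the JMX_PORT configured above
        Map<String, Object> env = new HashMap<>();
        // same username/password pair that goes into the cluster table's jmx_properties field
        env.put(JMXConnector.CREDENTIALS, new String[]{"xxxxx", "xxxx"});
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://" + host + ":" + port + "/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url, env)) {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            System.out.println("JMX OK, MBean count: " + conn.getMBeanCount());
        }
    }
}
```
If this test connects while KM still fails, re-check the `jmx_properties` JSON written into the DB.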


@@ -0,0 +1,122 @@
---
![kafka-manager-logo](../assets/images/common/logo_name.png)
**A one-stop `Apache Kafka` cluster metric monitoring and operations management platform**
---
# Dynamic Configuration Management
## 0. Contents
- 1. Scheduled Topic sync task
- 2. Expert service: Topic partition hotspots
- 3. Expert service: insufficient Topic partitions
## 1. Scheduled Topic Sync Task
### 1.1 What the config is for
By design, every resource in `Logi-KafkaManager` hangs under an application (app). If an onboarded Kafka cluster already contains Topics, those Topics belong to no application, which makes management awkward in many ways.
So there has to be a way to attach these ownerless Topics to some application.
This config enables a job that periodically and automatically attaches a cluster's ownerless Topics to a designated application.
### 1.2 How it is implemented
It is a scheduled task that periodically performs the sync. The code lives in the `SyncTopic2DB` class under the `com.xiaojukeji.kafka.manager.task.dispatch.op` package.
### 1.3 Configuration
**Step 1: enable the feature**
Add the following to application.yml (if the key already exists, just change false to true):
```yml
# task-related switches
task:
  op:
    sync-topic-enabled: true # periodically sync ownerless Topics into the DB
```
**Step 2: specify the target application in Config Management**
Where to configure it:
![sync_topic_to_db](./assets/dynamic_config_manager/sync_topic_to_db.jpg)
Config key: `SYNC_TOPIC_2_DB_CONFIG_KEY`
Config value (a JSON array):
- clusterId: ID of the cluster to sync periodically
- defaultAppId: the application that the cluster's ownerless Topics will be attached to
- addAuthority: whether to also grant permissions; defaults to false. The attachment is meant to be temporary: we do not want users producing with this App, and the Topic may later be handed over to its real owner, so permissions are not granted by default.
**Note: if the cluster ID or application ID does not exist, the config has no effect. The task never modifies Topics already present in the DB.**
```json
[
  {
    "clusterId": 1234567,
    "defaultAppId": "ANONYMOUS",
    "addAuthority": false
  },
  {
    "clusterId": 7654321,
    "defaultAppId": "ANONYMOUS",
    "addAuthority": false
  }
]
```
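For illustration only, a config value in this shape can be deserialized into a small list of entries before a task iterates over clusters. This sketch uses Jackson and hypothetical class names, not the project's actual code:
```java
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.List;

// Hypothetical POJO mirroring one entry of the SYNC_TOPIC_2_DB_CONFIG_KEY value.
class SyncTopicConfigEntry {
    public long clusterId;
    public String defaultAppId;
    public boolean addAuthority;
}

public class SyncTopicConfigDemo {
    public static void main(String[] args) throws Exception {
        String value = "[{\"clusterId\":1234567,\"defaultAppId\":\"ANONYMOUS\",\"addAuthority\":false}]";
        List<SyncTopicConfigEntry> entries = new ObjectMapper()
                .readValue(value, new TypeReference<List<SyncTopicConfigEntry>>() {});
        for (SyncTopicConfigEntry e : entries) {
            // an unknown clusterId or defaultAppId would simply make the entry a no-op
            System.out.printf("cluster=%d app=%s addAuthority=%b%n",
                    e.clusterId, e.defaultAppId, e.addAuthority);
        }
    }
}
```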
---
## 2. Expert Service: Topic Partition Hotspots
Within the set of Brokers that a `Region` spans, a Topic is considered hot when its leader count is distributed unevenly across those Brokers.
Note: looking at leader counts alone is admittedly limited; contributions of richer hotspot definitions and code are welcome.
The dynamic config for Topic partition hotspots (page: Ops Management -> Platform Management -> Config Management):
Config key:
```
REGION_HOT_TOPIC_CONFIG
```
Config value:
```json
{
  "maxDisPartitionNum": 2,
  "minTopicBytesInUnitB": 1048576,
  "ignoreClusterIdList": [
    50
  ]
}
```
- `maxDisPartitionNum`: a Topic is considered hot when the leader-count gap between Brokers in the Region exceeds this value
- `minTopicBytesInUnitB`: Topics with traffic below this value are not evaluated
- `ignoreClusterIdList`: clusters to skip
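As a sketch of the rule this config drives (class and method names are illustrative, not the project's): count a Topic's leader partitions per Broker within the Region and flag the Topic when the spread exceeds `maxDisPartitionNum`, skipping Topics below `minTopicBytesInUnitB`:
```java
import java.util.Collection;
import java.util.Collections;
import java.util.Map;

public class HotTopicRule {
    /**
     * leadersPerBroker: brokerId -> number of this Topic's leader partitions on that broker
     * (every broker of the Region should be present, possibly with 0).
     */
    public static boolean isHotTopic(Map<Integer, Integer> leadersPerBroker,
                                     double topicBytesIn,
                                     int maxDisPartitionNum,
                                     long minTopicBytesInUnitB) {
        if (topicBytesIn < minTopicBytesInUnitB) {
            return false; // low-traffic Topics are not evaluated
        }
        Collection<Integer> counts = leadersPerBroker.values();
        int max = Collections.max(counts);
        int min = Collections.min(counts);
        // hot when leader counts differ too much across the Region's brokers
        return max - min > maxDisPartitionNum;
    }
}
```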
---
## 3. Expert Service: Insufficient Topic Partitions
A Topic is considered short on partitions when its total traffic divided by its partition count exceeds a threshold.
The dynamic config for insufficient Topic partitions (page: Ops Management -> Platform Management -> Config Management):
Config key:
```
TOPIC_INSUFFICIENT_PARTITION_CONFIG
```
Config value:
```json
{
  "maxBytesInPerPartitionUnitB": 3145728,
  "minTopicBytesInUnitB": 1048576,
  "ignoreClusterIdList": [
    50
  ]
}
```
- `maxBytesInPerPartitionUnitB`: a Topic is considered short on partitions when its per-partition traffic exceeds this value
- `minTopicBytesInUnitB`: Topics with traffic below this value are not evaluated
- `ignoreClusterIdList`: clusters to skip
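The corresponding check is a one-liner at heart; again a hedged sketch with illustrative names, not the project's code:
```java
public class InsufficientPartitionRule {
    // Flags a Topic whose average per-partition inbound traffic exceeds the threshold.
    public static boolean isPartitionInsufficient(double topicBytesIn,
                                                  int partitionNum,
                                                  long maxBytesInPerPartitionUnitB,
                                                  long minTopicBytesInUnitB) {
        if (partitionNum <= 0 || topicBytesIn < minTopicBytesInUnitB) {
            return false; // low-traffic Topics are not evaluated
        }
        return topicBytesIn / partitionNum > maxBytesInPerPartitionUnitB;
    }
}
```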


@@ -0,0 +1,10 @@
---
![kafka-manager-logo](../assets/images/common/logo_name.png)
**A one-stop `Apache Kafka` cluster metric monitoring and operations management platform**
---
# Kafka-Gateway Configuration Guide


@@ -0,0 +1,27 @@
---
![kafka-manager-logo](../../assets/images/common/logo_name.png)
**A one-stop `Apache Kafka` cluster metric monitoring and operations management platform**
---
# Upgrading to `2.2.0`
Version `2.2.0` adds one column to each of the `cluster` and `logical_cluster` tables, so run the SQL below to add them.
```sql
# add the jmx_properties column to the cluster table; it stores JMX authentication and related settings
ALTER TABLE `cluster` ADD COLUMN `jmx_properties` TEXT NULL COMMENT 'JMX配置' AFTER `security_properties`;
# add the identification column to logical_cluster, initialize it from name, and add a unique key.
# From now on, name remains the cluster's display name, while identification is the cluster identifier
# (letters, digits, and underscores only). Data reported to the monitoring system now identifies the
# cluster by identification; previously it used name.
ALTER TABLE `logical_cluster` ADD COLUMN `identification` VARCHAR(192) NOT NULL DEFAULT '' COMMENT '逻辑集群标识' AFTER `name`;
UPDATE `logical_cluster` SET `identification`=`name` WHERE id>=0;
ALTER TABLE `logical_cluster` ADD INDEX `uniq_identification` (`identification` ASC);
```


@@ -0,0 +1,17 @@
---
![kafka-manager-logo](../../assets/images/common/logo_name.png)
**A one-stop `Apache Kafka` cluster metric monitoring and operations management platform**
---
# Upgrading to `2.3.0`
Version `2.3.0` adds a description column to the `gateway_config` table, so run the SQL below to add it.
```sql
ALTER TABLE `gateway_config`
ADD COLUMN `description` TEXT NULL COMMENT '描述信息' AFTER `version`;
```


@@ -15,7 +15,7 @@
Because `MySQL 8` and `MySQL 5.7` cannot both be supported at the same time, the code still defaults to `MySQL 5.7`.
To use `MySQL 8`, make the small code changes described below.
- Step 1: change the MySQL driver class in application.yml


@@ -1,3 +1,8 @@
-- create database
CREATE DATABASE logi_kafka_manager;
USE logi_kafka_manager;
--
-- Table structure for table `account`
--
@@ -104,7 +109,8 @@ CREATE TABLE `cluster` (
`zookeeper` varchar(512) NOT NULL DEFAULT '' COMMENT 'zk地址',
`bootstrap_servers` varchar(512) NOT NULL DEFAULT '' COMMENT 'server地址',
`kafka_version` varchar(32) NOT NULL DEFAULT '' COMMENT 'kafka版本',
`security_properties` text COMMENT '安全认证参数',
`security_properties` text COMMENT 'Kafka安全认证参数',
`jmx_properties` text COMMENT 'JMX配置',
`status` tinyint(4) NOT NULL DEFAULT '1' COMMENT ' 监控标记, 0表示未监控, 1表示监控中',
`gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
`gmt_modify` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
@@ -197,7 +203,8 @@ CREATE TABLE `gateway_config` (
`type` varchar(128) NOT NULL DEFAULT '' COMMENT '配置类型',
`name` varchar(128) NOT NULL DEFAULT '' COMMENT '配置名称',
`value` text COMMENT '配置值',
`version` bigint(20) unsigned NOT NULL DEFAULT '0' COMMENT '版本信息',
`version` bigint(20) unsigned NOT NULL DEFAULT '1' COMMENT '版本信息',
`description` text COMMENT '描述信息',
`create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
`modify_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
PRIMARY KEY (`id`),
@@ -302,20 +309,22 @@ INSERT INTO kafka_user(app_id, password, user_type, operation) VALUES ('dkm_admi
-- Table structure for table `logical_cluster`
--
-- DROP TABLE IF EXISTS `logical_cluster`;
CREATE TABLE `logical_cluster` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
`name` varchar(192) NOT NULL DEFAULT '' COMMENT '逻辑集群名称',
`mode` int(16) NOT NULL DEFAULT '0' COMMENT '逻辑集群类型, 0:共享集群, 1:独享集群, 2:独立集群',
`app_id` varchar(64) NOT NULL DEFAULT '' COMMENT '所属应用',
`cluster_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT '集群id',
`region_list` varchar(256) NOT NULL DEFAULT '' COMMENT 'regionid列表',
`description` text COMMENT '备注说明',
`gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
`gmt_modify` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
PRIMARY KEY (`id`),
UNIQUE KEY `uniq_name` (`name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='逻辑集群信息表';
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
`name` varchar(192) NOT NULL DEFAULT '' COMMENT '逻辑集群名称',
`identification` varchar(192) NOT NULL DEFAULT '' COMMENT '逻辑集群标识',
`mode` int(16) NOT NULL DEFAULT '0' COMMENT '逻辑集群类型, 0:共享集群, 1:独享集群, 2:独立集群',
`app_id` varchar(64) NOT NULL DEFAULT '' COMMENT '所属应用',
`cluster_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT '集群id',
`region_list` varchar(256) NOT NULL DEFAULT '' COMMENT 'regionid列表',
`description` text COMMENT '备注说明',
`gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
`gmt_modify` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
PRIMARY KEY (`id`),
UNIQUE KEY `uniq_name` (`name`),
UNIQUE KEY `uniq_identification` (`identification`)
) ENGINE=InnoDB AUTO_INCREMENT=7 DEFAULT CHARSET=utf8 COMMENT='逻辑集群信息表';
--
-- Table structure for table `monitor_rule`


@@ -9,19 +9,39 @@
# Installation Guide
## 1. Environment Dependencies
## Environment Dependencies
If you install from a Release package, only `Java` and `MySQL` are required. If you want to build from source first and then deploy, `Maven` and a `Node` environment are also needed.
- `Maven 3.5+` (backend build dependency)
- `node v12+` (frontend build dependency)
- `Java 8+` (runtime requirement)
- `MySQL 5.7` (data storage)
- `Maven 3.5+` (backend build dependency)
- `Node 10+` (frontend build dependency)
---
## Environment Initialization
## 2. Getting the Package
Run the SQL in [create_mysql_table.sql](create_mysql_table.sql) to create the required MySQL database and tables; the default database name is `kafka_manager`.
**1. Download a Release directly**
If you would rather skip the build and do not plan any secondary development, download a Release package directly: [GitHub Release downloads](https://github.com/didi/Logi-KafkaManager/releases)
If the GitHub downloads are too slow, you can also get the package from the `Logi-KafkaManager` user group (the group address is in the README).
**2. Build from source**
After downloading the code, enter the `Logi-KafkaManager` top-level directory and run `sh build.sh`; when it finishes, a jar is generated under the `output/kafka-manager-xxx` directory.
`windows` users probably cannot run `sh build.sh`; run `mvn install` directly instead, which produces a kafka-manager-web-xxx.jar under the `kafka-manager-web/target` directory.
With the jar in hand, continue with the steps below.
---
## 3. MySQL DB Initialization
Run the SQL in [create_mysql_table.sql](create_mysql_table.sql) to create the required MySQL database and tables; the default database name is `logi_kafka_manager`.
```
# example:
@@ -30,29 +50,15 @@ mysql -uXXXX -pXXX -h XXX.XXX.XXX.XXX -PXXXX < ./create_mysql_table.sql
---
## Build
```bash
# one-shot build
cd ..
mvn install
## 4. Startup
```
# application.yml is the config file; in the simplest case you only need to change the MySQL settings before starting
---
## Startup
```
# application.yml is the config file
cp kafka-manager-web/src/main/resources/application.yml kafka-manager-web/target/
cd kafka-manager-web/target/
nohup java -jar kafka-manager-web-2.1.0-SNAPSHOT.jar --spring.config.location=./application.yml > /dev/null 2>&1 &
nohup java -jar kafka-manager.jar --spring.config.location=./application.yml > /dev/null 2>&1 &
```
## Usage
### 5. Usage
For a local start, visit `http://localhost:8080` and log in with the default account/password (`admin/admin`). See also: [kafka-manager User Guide](../user_guide/user_guide_cn.md)


@@ -0,0 +1,94 @@
---
![kafka-manager-logo](../assets/images/common/logo_name.png)
**A one-stop `Apache Kafka` cluster metric monitoring and operations management platform**
---
## Nginx Configuration Guide
# 1. Standalone Deployment
See the [kafka-manager Installation Guide](install_guide_cn.md).
# 2. Nginx Configuration
## 1. Standalone deployment config
```
# nginx root-path proxy configuration
location / {
proxy_pass http://ip:port;
}
```
## 2. Front/back-end separation & serving multiple static resources
The configuration below lets `nginx proxy multiple static resources`, separating the front end from the back end for independent release iteration.
### 1. Download the source
Download the code for the version you need: [GitHub download](https://github.com/didi/Logi-KafkaManager)
### 2. Edit webpack.config.js
Edit `webpack.config.js` in the `kafka-manager-console` module.
Everywhere below, `xxxx` is the nginx proxy path and the load prefix for the packaged static files; change `xxxx` to whatever you need.
```
cd kafka-manager-console
vi webpack.config.js
# publicPath defaults to packaging under the root path; change it to the nginx proxy path.
let publicPath = '/xxxx';
```
### 3. Build
```
npm cache clean --force && npm install
```
P.S. If the build fails, run `npm install clipboard@2.0.6`; otherwise ignore this step.
### 4. Deploy
#### 1. Deploy the front-end static files
Upload the static resources from `../kafka-manager-web/src/main/resources/templates` to a directory of your choice; this demo uses the `root directory`.
#### 2. Upload and start the jar; see the [kafka-manager Installation Guide](install_guide_cn.md)
#### 3. Edit the nginx config
```
location /xxxx {
# where the static files live
alias /root/templates;
try_files $uri $uri/ /xxxx/index.html;
index index.html;
}
location /api {
proxy_pass http://ip:port;
}
# /api is the recommended backend prefix; if it conflicts, use the following instead
#location /api/v2 {
# proxy_pass http://ip:port;
#}
#location /api/v1 {
# proxy_pass http://ip:port;
#}
```


@@ -5,16 +5,26 @@
**A one-stop `Apache Kafka` cluster metric monitoring and operations management platform**
---
# Cluster Onboarding
Onboarding a cluster takes three steps:
1. Onboard the physical cluster
2. Create a Region
3. Create a logical cluster
## Key Concepts
For large clusters and complex business scenarios, the platform introduces the concepts of Region and logical cluster:
- Region: a group of Brokers forming a Region; Regions define the unit of resource partitioning, improving scalability and isolation. If some Topics misbehave, the blast radius stays small
- Logical cluster: composed of one or more Regions, making it easy to manage large clusters by business line and assurance level
![op_cluster_arch](assets/op_cluster_arch.png)
Note: steps 2 and 3 are required because ordinary users only ever see logical clusters; without them, ordinary users see nothing at all.
Onboarding a cluster takes three steps:
1. Onboard the physical cluster: fill in the machine addresses, security protocol, and other settings to connect the real physical cluster
2. Create a Region: group a subset of Brokers into a Region
3. Create a logical cluster: a logical cluster is composed of Regions and can be created per business line and assurance level
![op_cluster_flow](assets/op_cluster_flow.png)
**Note: steps 2 and 3 are required because ordinary users only ever see logical clusters; without them, ordinary users see nothing at all.**
## 1. Onboard the Physical Cluster
@@ -36,4 +46,4 @@
![op_add_logical_cluster](assets/op_add_logical_cluster.jpg)
As shown above, fill in the logical-cluster information and click Confirm to finish creating the logical cluster.


@@ -0,0 +1,25 @@
![kafka-manager-logo](../assets/images/common/logo_name.png)
**A one-stop `Apache Kafka` cluster metric monitoring and operations management platform**
---
## Alert Strategies: Alert Functions
| Category | Function | Meaning | Display name | Notes |
| --- | --- | --- | --- | --- |
| Occurrences | all, n | occurred in every one of the last n periods | continuous occurrence (all) | |
| Occurrences | happen, n, m | occurred m times within the last n periods | occurrence (happen) | null points also count toward n |
| Statistics | sum, n | sum of the values over the last n periods | sum | sum_over_time |
| Statistics | avg, n | average of the values over the last n periods | average (avg) | avg_over_time |
| Statistics | min, n | minimum of the values over the last n periods | minimum (min) | min_over_time |
| Statistics | max, n | maximum of the values over the last n periods | maximum (max) | max_over_time |
| Rate of change | pdiff, n | rate of change against each of the last n points; fires if any one satisfies the condition | spike/drop rate (pdiff) | suppose the last 3 periods' values are v, v2, v3 with v the newest; the formula is any( (v-v2)/v2, (v-v3)/v3 ), **sign-sensitive** |
| Change | diff, n | change against each of the last n points; fires if any one satisfies the condition | spike/drop value (diff) | suppose the last 3 periods' values are v, v2, v3 with v the newest; the formula is any( (v-v2), (v-v3) ), **sign-sensitive** |
| Change | ndiff | within the last n periods, v(t) - v(t-1) $OP threshold happened m times, where v(t) is the newest value | continuous change (sign-sensitive) - ndiff | |
| Data gap | nodata, t | no data reported within the last t seconds | reporting interruption (nodata) | |
| Period-over-period | c_avg_rate_abs, n | absolute rate of change of the last n periods' values versus the values 1 or 7 days earlier | period-over-period rate, absolute (c_avg_rate_abs) | with latest values v1, v2, v3 and corresponding historical values v1', v2': abs((avg(v1,v2,v3) / avg(v1',v2') - 1) * 100%) |
| Period-over-period | c_avg_rate, n | rate of change of the last n periods' values versus the values 1 or 7 days earlier (**sign-sensitive**) | period-over-period rate (c_avg_rate) | with latest values v1, v2, v3 and corresponding historical values v1', v2': (avg(v1,v2,v3) / avg(v1',v2') - 1) * 100% |
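To make the pdiff/diff semantics concrete, here is a hedged sketch (not Nightingale's implementation) of the "any of the last n points" rule, where the newest value is compared against each earlier point:
```java
public class AlertFunctions {
    /**
     * pdiff(n): given a window of values with the newest last,
     * fire when any rate of change (v - vi) / vi crosses the threshold.
     * The sign is preserved, matching the sign-sensitive note above.
     */
    public static boolean pdiff(double[] recent, double threshold) {
        double v = recent[recent.length - 1]; // newest point last
        for (int i = 0; i < recent.length - 1; i++) {
            double prev = recent[i];
            if (prev != 0 && (v - prev) / prev >= threshold) {
                return true;
            }
        }
        return false;
    }

    /** diff(n): the same rule applied to the absolute change v - vi. */
    public static boolean diff(double[] recent, double threshold) {
        double v = recent[recent.length - 1];
        for (int i = 0; i < recent.length - 1; i++) {
            if (v - recent[i] >= threshold) {
                return true;
            }
        }
        return false;
    }
}
```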


@@ -7,20 +7,37 @@
---
# FAQ
# FAQ
- 1. No cluster available when requesting a Topic
- 0. Which Kafka versions are supported?
- 1. No cluster available when requesting a Topic, creating an alert, etc.
- 2. What are logical clusters & Regions for?
- 3. Login fails
- 4. Traffic panels on pages show no data
- 5. How to integrate Nightingale (n9e) monitoring and alerting
- 6. How to use `MySQL 8`
- 7. How to fix `Jmx` connection failures?
- 8. The `topic biz data not exist` error and how to handle it
- 9. How to view the API docs after the process starts
- 10. How to create an alert group
- 11. Why do connection info and latency info show no data?
- 12. Why is a logical cluster invisible after its request is approved?
---
### 1. No cluster available when requesting a Topic
### 0. Which Kafka versions are supported?
- See the [kafka-manager Cluster Onboarding](docs/user_guide/add_cluster/add_cluster.md) guide; both the Region and the logical cluster must be created
Essentially, as long as the Kafka version in use still depends on Zookeeper, its main features should be supported.
---
### 1. No cluster available when requesting a Topic, creating an alert, etc.
This is caused by a missing logical cluster. The Topic Management, Monitoring & Alerting, and Cluster Management tabs all use the ordinary-user view, and ordinary users only see logical clusters, so operations under these three tabs require a logical cluster to exist.
To create a logical cluster, see:
- the [kafka-manager Cluster Onboarding](add_cluster/add_cluster.md) guide; both the Region and the logical cluster must be created.
---
@@ -43,7 +60,7 @@
- 1. Check whether `Broker JMX` is enabled correctly.
If it is not enabled yet, search the web for how to enable it
If it is not enabled yet, search the web for how to enable it, or see: [JMX configuration & troubleshooting](../dev_guide/connect_jmx_failed.md)
![helpcenter](./assets/faq/jmx_check.jpg)
@@ -53,7 +70,7 @@
- 3. Database time-zone problems.
Check MySQL's topic table; if it has data, then check whether the configured time zone is correct.
Check MySQL's topic_metrics table; if it has data, then check whether the configured time zone is correct.
---
@@ -66,3 +83,46 @@
### 6. How to use `MySQL 8`
- See the [kafka-manager with `MySQL 8`](../dev_guide/use_mysql_8.md) notes.
---
### 7. How to fix `Jmx` connection failures?
- See the [JMX configuration & troubleshooting](../dev_guide/connect_jmx_failed.md) notes.
---
### 8. The `topic biz data not exist` error and how to handle it
**Cause**
This error can appear during permission approval. It happens when the Topic's business data is not stored in the DB; more concretely, the Topic does not belong to any application. Attaching these ownerless Topics to an application fixes it.
**Fix**
Under `运维管控 -> 集群列表 -> Topic信息`, edit the Topic whose permission is being requested and pick an application for it.
That covers a single Topic. If you have a large number of Topics to initialize, add a config in Config Management that periodically syncs ownerless Topics; see [Dynamic Configuration Management - 1. Scheduled Topic sync task](../dev_guide/dynamic_config_manager.md).
---
### 9. How to view the API docs after the process starts
- Didi Logi-KafkaManager documents its API with Swagger. The Swagger API address is [http://IP:PORT/swagger-ui.html#/](http://IP:PORT/swagger-ui.html#/)
### 10. How to create an alert group
This works together with a monitoring system. Integration with Nightingale (n9e) ships by default, and you can integrate your own in-house monitoring system by implementing a few interfaces.
Details: [integrating monitoring with Nightingale](../dev_guide/monitor_system_integrate_with_n9e.md), [integrating monitoring with other systems](../dev_guide/monitor_system_integrate_with_self.md)
### 11. Why do connection info and latency info show no data?
These features only produce data when used together with Didi's in-house kafka-gateway, which is not yet open source.
### 12. Why is a logical cluster invisible after its request is approved?
The logical-cluster request and approval is only a ticketing workflow; it does not actually create the logical cluster, which still has to be created manually.
See: [kafka-manager Cluster Onboarding](add_cluster/add_cluster.md).


@@ -0,0 +1,72 @@
---
![kafka-manager-logo](../assets/images/common/logo_name.png)
**A one-stop `Apache Kafka` cluster metric monitoring and operations management platform**
---
# Topic Metrics Reference
## 1. Real-Time Traffic Metrics
| Metric | Unit | Meaning |
|-- |---- |---|
| messagesIn | msgs/s | messages sent to kafka per second |
| byteIn | B/s | bytes sent to kafka per second |
| byteOut | B/s | bytes flowing out of kafka per second (traffic consumed by all consumer groups; on older Kafka versions this also includes replica-sync traffic) |
| byteRejected | B/s | bytes rejected per second |
| failedFetchRequest | qps | failed fetch requests per second |
| failedProduceRequest | qps | failed produce requests per second |
| totalProduceRequest | qps | total produce requests per second (differs from messagesIn in that one produce request may carry multiple messages) |
| totalFetchRequest | qps | total fetch requests per second |
&nbsp;
## 2. Historical Traffic Metrics
| Metric | Unit | Meaning |
|-- |---- |---|
| messagesIn | msgs/s | messages sent to kafka per second over the last minute |
| byteIn | B/s | bytes sent to kafka per second over the last minute |
| byteOut | B/s | bytes flowing out of kafka per second over the last minute (traffic consumed by all consumer groups; on older Kafka versions, also replica-sync traffic) |
| byteRejected | B/s | bytes rejected per second over the last minute |
| totalProduceRequest | qps | total produce requests per second over the last minute (differs from messagesIn in that one produce request may carry multiple messages) |
&nbsp;
## 3. Real-Time Latency Metrics
**Real-time and historical Broker latency information relies on features of Didi's enhanced Kafka engine.**
| Metric | Unit | Meaning | Why it gets high | Remedy |
|-- |-- |-- |-- |--|
| RequestQueueTimeMs | ms | time queued in the request queue | too many requests for the server to keep up | contact the operators |
| LocalTimeMs | ms | Broker local processing time | slow server-side reads/writes, possibly read-write lock contention | contact the operators |
| RemoteTimeMs | ms | time waiting for remote completion; for produce requests with ack=-1 this is the replica-sync time; for fetch requests with no data available it is the wait for new data; version conversion (when the request version differs from the version the topic is stored in) also inflates it | with ack=-1, high produce latency here is expected; for consumption, a slowly written topic naturally raises it; version conversion also raises it | for produce, consider switching to ack=1; for consume-side issues, contact the operators for analysis |
| ThrottleTimeMs | ms | request throttling time | produce/consume is being throttled | request a higher quota |
| ResponseQueueTimeMs | ms | time queued in the response queue | too many responses for the server to keep up | contact the operators |
| ResponseSendTimeMs | ms | time to send the response back to the client | 1: weak downstream consumers make writes to the network buffer slow; 2: large consumer lag forces constant reads from disk | 1: improve client consumption performance; 2: contact the operators to confirm whether disk reads are the issue |
| TotalTimeMs | ms | total time from request receipt to completion; in theory the sum of the six above, but since each is measured independently, the total is only approximately that sum | one or more of the six above is high | address whichever metric is high |
**Note: because of how the kafka consumer is implemented, a consumer sends several Fetch requests at once and starts processing data as soon as one Response arrives, leaving the Broker's other Responses waiting; so ResponseSendTimeMs is not purely server send time and may include some client-side processing time.**
## 4. Historical Latency Metrics
**Real-time and historical Broker latency information relies on features of Didi's enhanced Kafka engine.**
| Metric | Unit | Meaning |
|-- | ---- |---|
| produceRequestTime99thPercentile | ms | Topic 99th-percentile produce latency over the last minute |
| fetchRequestTime99thPercentile | ms | Topic 99th-percentile fetch latency over the last minute |
| produceRequestTime95thPercentile | ms | Topic 95th-percentile produce latency over the last minute |
| fetchRequestTime95thPercentile | ms | Topic 95th-percentile fetch latency over the last minute |
| produceRequestTime75thPercentile | ms | Topic 75th-percentile produce latency over the last minute |
| fetchRequestTime75thPercentile | ms | Topic 75th-percentile fetch latency over the last minute |
| produceRequestTime50thPercentile | ms | Topic 50th-percentile produce latency over the last minute |
| fetchRequestTime50thPercentile | ms | Topic 50th-percentile fetch latency over the last minute |


@@ -0,0 +1,32 @@
---
![kafka-manager-logo](../assets/images/common/logo_name.png)
**A one-stop `Apache Kafka` cluster metric monitoring and operations management platform**
---
# Resource Application Guide
## Key Terms
- Application (App): the account in Kafka, identified by AppID + password
- Cluster: use a shared cluster provided by the platform, or request a dedicated cluster for a given application
- Topic: request new Topics, or request produce/consume permissions on other Topics; production/consumption is authenticated by Topic + AppID
![production_consumption_flow](assets/resource_apply/production_consumption_flow.png)
## Applying for an Application
An application (App) is the account in Kafka, identified by AppID + password. Producing to or consuming from a Topic is authenticated by Topic + AppID.
A user's application request is reviewed by the operators; once approved, the user receives the AppID and secret.
## Applying for a Cluster
Shared clusters are available out of the box; for stronger isolation, stability, or produce/consume throughput requirements, a dedicated cluster can be requested for an application.
## Applying for Topics
- Users can create Topics under an application they own; the application owner then holds produce/consume and management permissions on the Topic by default
- Users can also request produce/consume permissions on other Topics, granted after approval by the owner of the Topic's application.


@@ -5,13 +5,13 @@
<modelVersion>4.0.0</modelVersion>
<groupId>com.xiaojukeji.kafka</groupId>
<artifactId>kafka-manager-common</artifactId>
<version>2.1.0-SNAPSHOT</version>
<version>${kafka-manager.revision}</version>
<packaging>jar</packaging>
<parent>
<artifactId>kafka-manager</artifactId>
<groupId>com.xiaojukeji.kafka</groupId>
<version>2.1.0-SNAPSHOT</version>
<version>${kafka-manager.revision}</version>
</parent>
<properties>
@@ -104,5 +104,10 @@
<groupId>javax.servlet</groupId>
<artifactId>javax.servlet-api</artifactId>
</dependency>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
</dependency>
</dependencies>
</project>


@@ -47,4 +47,13 @@ public enum AccountRoleEnum {
}
return AccountRoleEnum.UNKNOWN;
}
    public static AccountRoleEnum getUserRoleEnum(String roleName) {
        for (AccountRoleEnum elem: AccountRoleEnum.values()) {
            if (elem.message.equalsIgnoreCase(roleName)) {
                return elem;
            }
        }
        return AccountRoleEnum.UNKNOWN;
    }
}
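A possible call site for the new lookup, assuming the enum's message field holds the role name (the "op" value here is hypothetical):
```java
public class RoleLookupDemo {
    public static void main(String[] args) {
        // case-insensitive lookup by role name; unknown names fall back to UNKNOWN
        AccountRoleEnum role = AccountRoleEnum.getUserRoleEnum("op"); // "op" is a hypothetical role name
        if (AccountRoleEnum.UNKNOWN.equals(role)) {
            System.out.println("unknown role, reject or apply a default");
        }
    }
}
```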


@@ -1,45 +0,0 @@
package com.xiaojukeji.kafka.manager.common.bizenum;
/**
* 是否上报监控系统
* @author zengqiao
* @date 20/9/25
*/
public enum SinkMonitorSystemEnum {
SINK_MONITOR_SYSTEM(0, "上报监控系统"),
NOT_SINK_MONITOR_SYSTEM(1, "不上报监控系统"),
;
private Integer code;
private String message;
SinkMonitorSystemEnum(Integer code, String message) {
this.code = code;
this.message = message;
}
public Integer getCode() {
return code;
}
public void setCode(Integer code) {
this.code = code;
}
public String getMessage() {
return message;
}
public void setMessage(String message) {
this.message = message;
}
@Override
public String toString() {
return "SinkMonitorSystemEnum{" +
"code=" + code +
", message='" + message + '\'' +
'}';
}
}


@@ -0,0 +1,32 @@
package com.xiaojukeji.kafka.manager.common.bizenum;
/**
* 过期Topic状态
* @author zengqiao
* @date 21/01/25
*/
public enum TopicExpiredStatusEnum {
ALREADY_NOTIFIED_AND_DELETED(-2, "已通知, 已下线"),
ALREADY_NOTIFIED_AND_CAN_DELETE(-1, "已通知, 可下线"),
ALREADY_EXPIRED_AND_WAIT_NOTIFY(0, "已过期, 待通知"),
ALREADY_NOTIFIED_AND_WAIT_RESPONSE(1, "已通知, 待反馈"),
;
private int status;
private String message;
TopicExpiredStatusEnum(int status, String message) {
this.status = status;
this.message = message;
}
public int getStatus() {
return status;
}
public String getMessage() {
return message;
}
}


@@ -7,18 +7,18 @@ package com.xiaojukeji.kafka.manager.common.constant;
*/
public class ApiPrefix {
public static final String API_PREFIX = "/api/";
public static final String API_V1_PREFIX = API_PREFIX + "v1/";
public static final String API_V2_PREFIX = API_PREFIX + "v2/";
private static final String API_V1_PREFIX = API_PREFIX + "v1/";
// login
public static final String API_V1_SSO_PREFIX = API_V1_PREFIX + "sso/";
// console
public static final String API_V1_SSO_PREFIX = API_V1_PREFIX + "sso/";
public static final String API_V1_NORMAL_PREFIX = API_V1_PREFIX + "normal/";
public static final String API_V1_RD_PREFIX = API_V1_PREFIX + "rd/";
public static final String API_V1_OP_PREFIX = API_V1_PREFIX + "op/";
// open
public static final String API_V1_THIRD_PART_PREFIX = API_V1_PREFIX + "third-part/";
public static final String API_V2_THIRD_PART_PREFIX = API_V2_PREFIX + "third-part/";
// gateway
public static final String GATEWAY_API_V1_PREFIX = "/gateway" + API_V1_PREFIX;


@@ -97,7 +97,7 @@ public class Result<T> implements Serializable {
return result;
}
public static <T> Result<T> buildFailure(String message) {
public static <T> Result<T> buildGatewayFailure(String message) {
Result<T> result = new Result<T>();
result.setCode(ResultStatus.GATEWAY_INVALID_REQUEST.getCode());
result.setMessage(message);
@@ -105,6 +105,14 @@ public class Result<T> implements Serializable {
return result;
}
public static <T> Result<T> buildFailure(String message) {
Result<T> result = new Result<T>();
result.setCode(ResultStatus.FAIL.getCode());
result.setMessage(message);
result.setData(null);
return result;
}
public static Result buildFrom(ResultStatus resultStatus) {
Result result = new Result();
result.setCode(resultStatus.getCode());
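With this rename, callers can distinguish a plain business failure from a gateway rejection; a small hedged usage sketch (return codes follow the ResultStatus values changed in this commit, and getCode() is assumed to be the matching getter):
```java
public class ResultDemo {
    public static void main(String[] args) {
        // generic business failure: code = ResultStatus.FAIL
        Result<Void> failure = Result.buildFailure("operation failed");
        // gateway request rejection: code = ResultStatus.GATEWAY_INVALID_REQUEST
        Result<Void> gatewayFailure = Result.buildGatewayFailure("invalid gateway request");
        System.out.println(failure.getCode() + " vs " + gatewayFailure.getCode());
    }
}
```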


@@ -12,125 +12,105 @@ public enum ResultStatus {
SUCCESS(Constant.SUCCESS, "success"),
LOGIN_FAILED(1, "login failed, please check username and password"),
FAIL(1, "操作失败"),
/**
* 内部依赖错误, [1000, 1200)
* 操作错误[1000, 2000)
* ------------------------------------------------------------------------------------------
*/
MYSQL_ERROR(1000, "operate database failed"),
CONNECT_ZOOKEEPER_FAILED(1000, "connect zookeeper failed"),
READ_ZOOKEEPER_FAILED(1000, "read zookeeper failed"),
READ_JMX_FAILED(1000, "read jmx failed"),
// 内部依赖错误 —— Kafka特定错误, [1000, 1100)
BROKER_NUM_NOT_ENOUGH(1000, "broker not enough"),
CONTROLLER_NOT_ALIVE(1000, "controller not alive"),
CLUSTER_METADATA_ERROR(1000, "cluster metadata error"),
TOPIC_CONFIG_ERROR(1000, "topic config error"),
/**
* 外部依赖错误, [1200, 1400)
* ------------------------------------------------------------------------------------------
*/
CALL_CLUSTER_TASK_AGENT_FAILED(1000, " call cluster task agent failed"),
CALL_MONITOR_SYSTEM_ERROR(1000, " call monitor-system failed"),
/**
* 外部用户操作错误, [1400, 1600)
* ------------------------------------------------------------------------------------------
*/
PARAM_ILLEGAL(1400, "param illegal"),
OPERATION_FAILED(1401, "operation failed"),
OPERATION_FORBIDDEN(1402, "operation forbidden"),
API_CALL_EXCEED_LIMIT(1403, "api call exceed limit"),
USER_WITHOUT_AUTHORITY(1404, "user without authority"),
CHANGE_ZOOKEEPER_FORBIDDEN(1405, "change zookeeper forbidden"),
// 资源不存在
CLUSTER_NOT_EXIST(10000, "cluster not exist"),
BROKER_NOT_EXIST(10000, "broker not exist"),
TOPIC_NOT_EXIST(10000, "topic not exist"),
PARTITION_NOT_EXIST(10000, "partition not exist"),
ACCOUNT_NOT_EXIST(10000, "account not exist"),
APP_NOT_EXIST(1000, "app not exist"),
ORDER_NOT_EXIST(1000, "order not exist"),
CONFIG_NOT_EXIST(1000, "config not exist"),
IDC_NOT_EXIST(1000, "idc not exist"),
TASK_NOT_EXIST(1110, "task not exist"),
APP_OFFLINE_FORBIDDEN(1406, "先下线topic才能下线应用"),
AUTHORITY_NOT_EXIST(1000, "authority not exist"),
MONITOR_NOT_EXIST(1110, "monitor not exist"),
TOPIC_OPERATION_PARAM_NULL_POINTER(1450, "参数错误"),
TOPIC_OPERATION_PARTITION_NUM_ILLEGAL(1451, "分区数错误"),
TOPIC_OPERATION_BROKER_NUM_NOT_ENOUGH(1452, "Broker数不足错误"),
TOPIC_OPERATION_TOPIC_NAME_ILLEGAL(1453, "Topic名称非法"),
TOPIC_OPERATION_TOPIC_EXISTED(1454, "Topic已存在"),
TOPIC_OPERATION_UNKNOWN_TOPIC_PARTITION(1455, "Topic未知"),
TOPIC_OPERATION_TOPIC_CONFIG_ILLEGAL(1456, "Topic配置错误"),
TOPIC_OPERATION_TOPIC_IN_DELETING(1457, "Topic正在删除"),
TOPIC_OPERATION_UNKNOWN_ERROR(1458, "未知错误"),
QUOTA_NOT_EXIST(1000, "quota not exist, please check clusterId, topicName and appId"),
/**
* 参数错误[2000, 3000)
* ------------------------------------------------------------------------------------------
*/
PARAM_ILLEGAL(2000, "param illegal"),
CG_LOCATION_ILLEGAL(2001, "consumer group location illegal"),
ORDER_ALREADY_HANDLED(2002, "order already handled"),
APP_ID_OR_PASSWORD_ILLEGAL(2003, "app or password illegal"),
SYSTEM_CODE_ILLEGAL(2004, "system code illegal"),
CLUSTER_TASK_HOST_LIST_ILLEGAL(2005, "主机列表错误,请检查主机列表"),
JSON_PARSER_ERROR(2006, "json parser error"),
// 资源不存在, 已存在, 已被使用
RESOURCE_NOT_EXIST(1200, "资源不存在"),
RESOURCE_ALREADY_EXISTED(1200, "资源已经存在"),
RESOURCE_NAME_DUPLICATED(1200, "资源名称重复"),
RESOURCE_ALREADY_USED(1000, "资源早已被使用"),
BROKER_NUM_NOT_ENOUGH(2050, "broker not enough"),
CONTROLLER_NOT_ALIVE(2051, "controller not alive"),
CLUSTER_METADATA_ERROR(2052, "cluster metadata error"),
TOPIC_CONFIG_ERROR(2053, "topic config error"),
/**
* 参数错误 - 资源检查错误
* 因为外部系统的问题, 操作时引起的错误, [7000, 8000)
* ------------------------------------------------------------------------------------------
*/
RESOURCE_NOT_EXIST(7100, "资源不存在"),
CLUSTER_NOT_EXIST(7101, "cluster not exist"),
BROKER_NOT_EXIST(7102, "broker not exist"),
TOPIC_NOT_EXIST(7103, "topic not exist"),
PARTITION_NOT_EXIST(7104, "partition not exist"),
ACCOUNT_NOT_EXIST(7105, "account not exist"),
APP_NOT_EXIST(7106, "app not exist"),
ORDER_NOT_EXIST(7107, "order not exist"),
CONFIG_NOT_EXIST(7108, "config not exist"),
IDC_NOT_EXIST(7109, "idc not exist"),
TASK_NOT_EXIST(7110, "task not exist"),
AUTHORITY_NOT_EXIST(7111, "authority not exist"),
MONITOR_NOT_EXIST(7112, "monitor not exist"),
QUOTA_NOT_EXIST(7113, "quota not exist, please check clusterId, topicName and appId"),
CONSUMER_GROUP_NOT_EXIST(7114, "consumerGroup not exist"),
TOPIC_BIZ_DATA_NOT_EXIST(7115, "topic biz data not exist, please sync topic to db"),
// 资源已存在
RESOURCE_ALREADY_EXISTED(7200, "资源已经存在"),
TOPIC_ALREADY_EXIST(7201, "topic already existed"),
// 资源重名
RESOURCE_NAME_DUPLICATED(7300, "资源名称重复"),
// 资源已被使用
RESOURCE_ALREADY_USED(7400, "资源早已被使用"),
/**
* 资源参数错误
* 因为外部系统的问题, 操作时引起的错误, [8000, 9000)
* ------------------------------------------------------------------------------------------
*/
CG_LOCATION_ILLEGAL(10000, "consumer group location illegal"),
ORDER_ALREADY_HANDLED(1000, "order already handled"),
MYSQL_ERROR(8010, "operate database failed"),
APP_ID_OR_PASSWORD_ILLEGAL(1000, "app or password illegal"),
SYSTEM_CODE_ILLEGAL(1000, "system code illegal"),
ZOOKEEPER_CONNECT_FAILED(8020, "zookeeper connect failed"),
ZOOKEEPER_READ_FAILED(8021, "zookeeper read failed"),
ZOOKEEPER_WRITE_FAILED(8022, "zookeeper write failed"),
ZOOKEEPER_DELETE_FAILED(8023, "zookeeper delete failed"),
CLUSTER_TASK_HOST_LIST_ILLEGAL(1000, "主机列表错误,请检查主机列表"),
// 调用集群任务里面的agent失败
CALL_CLUSTER_TASK_AGENT_FAILED(8030, " call cluster task agent failed"),
// 调用监控系统失败
CALL_MONITOR_SYSTEM_ERROR(8040, " call monitor-system failed"),
// 存储相关的调用失败
STORAGE_UPLOAD_FILE_FAILED(8050, "upload file failed"),
STORAGE_FILE_TYPE_NOT_SUPPORT(8051, "File type not support"),
STORAGE_DOWNLOAD_FILE_FAILED(8052, "download file failed"),
LDAP_AUTHENTICATION_FAILED(8053, "ldap authentication failed"),
///////////////////////////////////////////////////////////////
USER_WITHOUT_AUTHORITY(1000, "user without authority"),
JSON_PARSER_ERROR(1000, "json parser error"),
TOPIC_OPERATION_PARAM_NULL_POINTER(2, "参数错误"),
TOPIC_OPERATION_PARTITION_NUM_ILLEGAL(3, "分区数错误"),
TOPIC_OPERATION_BROKER_NUM_NOT_ENOUGH(4, "Broker数不足错误"),
TOPIC_OPERATION_TOPIC_NAME_ILLEGAL(5, "Topic名称非法"),
TOPIC_OPERATION_TOPIC_EXISTED(6, "Topic已存在"),
TOPIC_OPERATION_UNKNOWN_TOPIC_PARTITION(7, "Topic未知"),
TOPIC_OPERATION_TOPIC_CONFIG_ILLEGAL(8, "Topic配置错误"),
TOPIC_OPERATION_TOPIC_IN_DELETING(9, "Topic正在删除"),
TOPIC_OPERATION_UNKNOWN_ERROR(10, "未知错误"),
TOPIC_EXIST_CONNECT_CANNOT_DELETE(10, "topic exist connect cannot delete"),
EXIST_TOPIC_CANNOT_DELETE(10, "exist topic cannot delete"),
/**
* 工单
*/
CHANGE_ZOOKEEPER_FORBIDEN(100, "change zookeeper forbiden"),
// APP_EXIST_TOPIC_AUTHORITY_CANNOT_DELETE(1000, "app exist topic authority cannot delete"),
UPLOAD_FILE_FAIL(1000, "upload file fail"),
FILE_TYPE_NOT_SUPPORT(1000, "File type not support"),
DOWNLOAD_FILE_FAIL(1000, "download file fail"),
TOPIC_ALREADY_EXIST(17400, "topic already existed"),
CONSUMER_GROUP_NOT_EXIST(17411, "consumerGroup not exist"),
;
private int code;


@@ -23,6 +23,8 @@ public class ClusterDetailDTO {
private String securityProperties;
private String jmxProperties;
private Integer status;
private Date gmtCreate;
@@ -103,6 +105,14 @@ public class ClusterDetailDTO {
this.securityProperties = securityProperties;
}
public String getJmxProperties() {
return jmxProperties;
}
public void setJmxProperties(String jmxProperties) {
this.jmxProperties = jmxProperties;
}
public Integer getStatus() {
return status;
}
@@ -176,8 +186,9 @@ public class ClusterDetailDTO {
", bootstrapServers='" + bootstrapServers + '\'' +
", kafkaVersion='" + kafkaVersion + '\'' +
", idc='" + idc + '\'' +
", mode='" + mode + '\'' +
", mode=" + mode +
", securityProperties='" + securityProperties + '\'' +
", jmxProperties='" + jmxProperties + '\'' +
", status=" + status +
", gmtCreate=" + gmtCreate +
", gmtModify=" + gmtModify +


@@ -9,6 +9,8 @@ public class LogicalCluster {
private String logicalClusterName;
private String logicalClusterIdentification;
private Integer mode;
private Integer topicNum;
@@ -41,6 +43,14 @@ public class LogicalCluster {
this.logicalClusterName = logicalClusterName;
}
public String getLogicalClusterIdentification() {
return logicalClusterIdentification;
}
public void setLogicalClusterIdentification(String logicalClusterIdentification) {
this.logicalClusterIdentification = logicalClusterIdentification;
}
public Integer getMode() {
return mode;
}
@@ -81,6 +91,14 @@ public class LogicalCluster {
this.bootstrapServers = bootstrapServers;
}
public String getDescription() {
return description;
}
public void setDescription(String description) {
this.description = description;
}
public Long getGmtCreate() {
return gmtCreate;
}
@@ -97,19 +115,12 @@ public class LogicalCluster {
this.gmtModify = gmtModify;
}
public String getDescription() {
return description;
}
public void setDescription(String description) {
this.description = description;
}
@Override
public String toString() {
return "LogicalCluster{" +
"logicalClusterId=" + logicalClusterId +
", logicalClusterName='" + logicalClusterName + '\'' +
", logicalClusterIdentification='" + logicalClusterIdentification + '\'' +
", mode=" + mode +
", topicNum=" + topicNum +
", clusterVersion='" + clusterVersion + '\'' +


@@ -1,57 +0,0 @@
package com.xiaojukeji.kafka.manager.common.entity.ao.config;
/**
* @author zengqiao
* @date 20/9/7
*/
public class SinkTopicRequestTimeMetricsConfig {
private Long clusterId;
private String topicName;
private Long startId;
private Long step;
public Long getClusterId() {
return clusterId;
}
public void setClusterId(Long clusterId) {
this.clusterId = clusterId;
}
public String getTopicName() {
return topicName;
}
public void setTopicName(String topicName) {
this.topicName = topicName;
}
public Long getStartId() {
return startId;
}
public void setStartId(Long startId) {
this.startId = startId;
}
public Long getStep() {
return step;
}
public void setStep(Long step) {
this.step = step;
}
@Override
public String toString() {
return "SinkTopicRequestTimeMetricsConfig{" +
"clusterId=" + clusterId +
", topicName='" + topicName + '\'' +
", startId=" + startId +
", step=" + step +
'}';
}
}


@@ -0,0 +1,45 @@
package com.xiaojukeji.kafka.manager.common.entity.dto.op;
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import java.util.List;
/**
* @author zengqiao
* @date 21/01/24
*/
@JsonIgnoreProperties(ignoreUnknown = true)
@ApiModel(description="优选为Controller的候选者")
public class ControllerPreferredCandidateDTO {
@ApiModelProperty(value="集群ID")
private Long clusterId;
@ApiModelProperty(value="优选为controller的BrokerId")
private List<Integer> brokerIdList;
public Long getClusterId() {
return clusterId;
}
public void setClusterId(Long clusterId) {
this.clusterId = clusterId;
}
public List<Integer> getBrokerIdList() {
return brokerIdList;
}
public void setBrokerIdList(List<Integer> brokerIdList) {
this.brokerIdList = brokerIdList;
}
@Override
public String toString() {
return "ControllerPreferredCandidateDTO{" +
"clusterId=" + clusterId +
", brokerIdList=" + brokerIdList +
'}';
}
}
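
A note on the new DTO above: it is the request body for designating preferred controller candidates (its frontend counterpart, the candidate-Controller Transfer modal, appears later in this diff). A minimal population sketch with example values; the REST endpoint that consumes this body is not shown in this hunk:

```java
import java.util.Arrays;

public class CandidateDtoExample {
    public static void main(String[] args) {
        ControllerPreferredCandidateDTO dto = new ControllerPreferredCandidateDTO();
        dto.setClusterId(95L);                        // example physical cluster id
        dto.setBrokerIdList(Arrays.asList(1, 2, 3));  // brokers preferred in controller election

        // toString gives a quick structural check of the payload
        System.out.println(dto);
    }
}
```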

View File

@@ -40,6 +40,9 @@ public class TopicCreationDTO extends ClusterTopicDTO {
@ApiModelProperty(value = "Topic属性列表")
private Properties properties;
@ApiModelProperty(value = "最大写入字节数")
private Long peakBytesIn;
public String getAppId() {
return appId;
}
@@ -104,6 +107,14 @@ public class TopicCreationDTO extends ClusterTopicDTO {
this.properties = properties;
}
public Long getPeakBytesIn() {
return peakBytesIn;
}
public void setPeakBytesIn(Long peakBytesIn) {
this.peakBytesIn = peakBytesIn;
}
@Override
public String toString() {
return "TopicCreationDTO{" +
@@ -135,4 +146,4 @@ public class TopicCreationDTO extends ClusterTopicDTO {
}
return true;
}
}
}

View File

@@ -27,9 +27,12 @@ public class ClusterDTO {
@ApiModelProperty(value="数据中心")
private String idc;
@ApiModelProperty(value="安全配置参数")
@ApiModelProperty(value="Kafka安全配置")
private String securityProperties;
@ApiModelProperty(value="Jmx配置")
private String jmxProperties;
public Long getClusterId() {
return clusterId;
}
@@ -78,6 +81,14 @@ public class ClusterDTO {
this.securityProperties = securityProperties;
}
public String getJmxProperties() {
return jmxProperties;
}
public void setJmxProperties(String jmxProperties) {
this.jmxProperties = jmxProperties;
}
@Override
public String toString() {
return "ClusterDTO{" +
@@ -87,15 +98,15 @@ public class ClusterDTO {
", bootstrapServers='" + bootstrapServers + '\'' +
", idc='" + idc + '\'' +
", securityProperties='" + securityProperties + '\'' +
", jmxProperties='" + jmxProperties + '\'' +
'}';
}
public Boolean legal() {
public boolean legal() {
if (ValidateUtils.isNull(clusterName)
|| ValidateUtils.isNull(zookeeper)
|| ValidateUtils.isNull(idc)
|| ValidateUtils.isNull(bootstrapServers)
) {
|| ValidateUtils.isNull(bootstrapServers)) {
return false;
}
return true;

View File

@@ -21,6 +21,9 @@ public class LogicalClusterDTO {
@ApiModelProperty(value = "名称")
private String name;
@ApiModelProperty(value = "集群标识, 用于告警的上报")
private String identification;
@ApiModelProperty(value = "集群模式")
private Integer mode;
@@ -52,6 +55,14 @@ public class LogicalClusterDTO {
this.name = name;
}
public String getIdentification() {
return identification;
}
public void setIdentification(String identification) {
this.identification = identification;
}
public Integer getMode() {
return mode;
}
@@ -97,6 +108,7 @@ public class LogicalClusterDTO {
return "LogicalClusterDTO{" +
"id=" + id +
", name='" + name + '\'' +
", identification='" + identification + '\'' +
", mode=" + mode +
", clusterId=" + clusterId +
", regionIdList=" + regionIdList +
@@ -117,6 +129,7 @@ public class LogicalClusterDTO {
}
appId = ValidateUtils.isNull(appId)? "": appId;
description = ValidateUtils.isNull(description)? "": description;
identification = ValidateUtils.isNull(identification)? name: identification;
return true;
}
}
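
The final hunk above makes `legal()` backfill `identification` from `name`, so requests created before the new field existed still validate. A minimal sketch of that behavior, assuming the DTO's other required fields are populated enough for `legal()` to succeed:

```java
public class LogicalClusterDtoExample {
    public static void main(String[] args) {
        LogicalClusterDTO dto = new LogicalClusterDTO();
        dto.setName("logi-cluster");
        // identification deliberately left unset

        if (dto.legal()) {
            // legal() copied name into the missing identification field
            System.out.println(dto.getIdentification()); // logi-cluster
        }
    }
}
```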

View File

@@ -1,6 +1,7 @@
package com.xiaojukeji.kafka.manager.common.entity.pojo;
import java.util.Date;
import java.util.Objects;
/**
* @author zengqiao
@@ -17,6 +18,8 @@ public class ClusterDO implements Comparable<ClusterDO> {
private String securityProperties;
private String jmxProperties;
private Integer status;
private Date gmtCreate;
@@ -31,30 +34,6 @@ public class ClusterDO implements Comparable<ClusterDO> {
this.id = id;
}
public Integer getStatus() {
return status;
}
public void setStatus(Integer status) {
this.status = status;
}
public Date getGmtCreate() {
return gmtCreate;
}
public void setGmtCreate(Date gmtCreate) {
this.gmtCreate = gmtCreate;
}
public Date getGmtModify() {
return gmtModify;
}
public void setGmtModify(Date gmtModify) {
this.gmtModify = gmtModify;
}
public String getClusterName() {
return clusterName;
}
@@ -87,6 +66,38 @@ public class ClusterDO implements Comparable<ClusterDO> {
this.securityProperties = securityProperties;
}
public String getJmxProperties() {
return jmxProperties;
}
public void setJmxProperties(String jmxProperties) {
this.jmxProperties = jmxProperties;
}
public Integer getStatus() {
return status;
}
public void setStatus(Integer status) {
this.status = status;
}
public Date getGmtCreate() {
return gmtCreate;
}
public void setGmtCreate(Date gmtCreate) {
this.gmtCreate = gmtCreate;
}
public Date getGmtModify() {
return gmtModify;
}
public void setGmtModify(Date gmtModify) {
this.gmtModify = gmtModify;
}
@Override
public String toString() {
return "ClusterDO{" +
@@ -95,6 +106,7 @@ public class ClusterDO implements Comparable<ClusterDO> {
", zookeeper='" + zookeeper + '\'' +
", bootstrapServers='" + bootstrapServers + '\'' +
", securityProperties='" + securityProperties + '\'' +
", jmxProperties='" + jmxProperties + '\'' +
", status=" + status +
", gmtCreate=" + gmtCreate +
", gmtModify=" + gmtModify +
@@ -105,4 +117,22 @@ public class ClusterDO implements Comparable<ClusterDO> {
public int compareTo(ClusterDO clusterDO) {
return this.id.compareTo(clusterDO.id);
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
ClusterDO clusterDO = (ClusterDO) o;
return Objects.equals(id, clusterDO.id)
&& Objects.equals(clusterName, clusterDO.clusterName)
&& Objects.equals(zookeeper, clusterDO.zookeeper)
&& Objects.equals(bootstrapServers, clusterDO.bootstrapServers)
&& Objects.equals(securityProperties, clusterDO.securityProperties)
&& Objects.equals(jmxProperties, clusterDO.jmxProperties);
}
@Override
public int hashCode() {
return Objects.hash(id, clusterName, zookeeper, bootstrapServers, securityProperties, jmxProperties);
}
}
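
One behavioral detail of the new `equals`/`hashCode` pair: they compare only identity and connection fields and deliberately ignore `status`, `gmtCreate`, and `gmtModify`. Two snapshots of the same cluster that differ only in mutable bookkeeping therefore compare equal, which matters if `ClusterDO` is used as a map key or deduplicated in a set. A minimal sketch with example values:

```java
import java.util.Date;

public class ClusterDoEqualityExample {
    public static void main(String[] args) {
        ClusterDO a = new ClusterDO();
        a.setId(1L);
        a.setClusterName("kafka-cluster-1");
        a.setZookeeper("10.0.0.1:2181");
        a.setBootstrapServers("10.0.0.1:9092");
        a.setStatus(1);

        ClusterDO b = new ClusterDO();
        b.setId(1L);
        b.setClusterName("kafka-cluster-1");
        b.setZookeeper("10.0.0.1:2181");
        b.setBootstrapServers("10.0.0.1:9092");
        b.setStatus(0);              // different status...
        b.setGmtModify(new Date());  // ...and a newer modify time

        // equal despite the differing mutable fields
        System.out.println(a.equals(b) && a.hashCode() == b.hashCode()); // true
    }
}
```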

View File

@@ -11,6 +11,8 @@ public class LogicalClusterDO {
private String name;
private String identification;
private Integer mode;
private String appId;
@@ -41,6 +43,14 @@ public class LogicalClusterDO {
this.name = name;
}
public String getIdentification() {
return identification;
}
public void setIdentification(String identification) {
this.identification = identification;
}
public Integer getMode() {
return mode;
}
@@ -102,6 +112,7 @@ public class LogicalClusterDO {
return "LogicalClusterDO{" +
"id=" + id +
", name='" + name + '\'' +
", identification='" + identification + '\'' +
", mode=" + mode +
", appId='" + appId + '\'' +
", clusterId=" + clusterId +

View File

@@ -1,6 +1,7 @@
package com.xiaojukeji.kafka.manager.common.entity.pojo;
import com.xiaojukeji.kafka.manager.common.entity.dto.op.topic.TopicCreationDTO;
import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
import java.util.Date;
@@ -95,6 +96,7 @@ public class TopicDO {
topicDO.setClusterId(dto.getClusterId());
topicDO.setTopicName(dto.getTopicName());
topicDO.setDescription(dto.getDescription());
topicDO.setPeakBytesIn(ValidateUtils.isNull(dto.getPeakBytesIn()) ? -1L : dto.getPeakBytesIn());
return topicDO;
}
}
}

View File

@@ -17,6 +17,8 @@ public class GatewayConfigDO {
private Long version;
private String description;
private Date createTime;
private Date modifyTime;
@@ -61,6 +63,14 @@ public class GatewayConfigDO {
this.version = version;
}
public String getDescription() {
return description;
}
public void setDescription(String description) {
this.description = description;
}
public Date getCreateTime() {
return createTime;
}
@@ -85,6 +95,7 @@ public class GatewayConfigDO {
", name='" + name + '\'' +
", value='" + value + '\'' +
", version=" + version +
", description='" + description + '\'' +
", createTime=" + createTime +
", modifyTime=" + modifyTime +
'}';

View File

@@ -15,6 +15,9 @@ public class LogicClusterVO {
@ApiModelProperty(value="逻辑集群名称")
private String clusterName;
@ApiModelProperty(value="逻辑标识")
private String clusterIdentification;
@ApiModelProperty(value="逻辑集群类型, 0:共享集群, 1:独享集群, 2:独立集群")
private Integer mode;
@@ -24,9 +27,6 @@ public class LogicClusterVO {
@ApiModelProperty(value="集群版本")
private String clusterVersion;
@ApiModelProperty(value="物理集群ID")
private Long physicalClusterId;
@ApiModelProperty(value="集群服务地址")
private String bootstrapServers;
@@ -55,6 +55,22 @@ public class LogicClusterVO {
this.clusterName = clusterName;
}
public String getClusterIdentification() {
return clusterIdentification;
}
public void setClusterIdentification(String clusterIdentification) {
this.clusterIdentification = clusterIdentification;
}
public Integer getMode() {
return mode;
}
public void setMode(Integer mode) {
this.mode = mode;
}
public Integer getTopicNum() {
return topicNum;
}
@@ -71,14 +87,6 @@ public class LogicClusterVO {
this.clusterVersion = clusterVersion;
}
public Long getPhysicalClusterId() {
return physicalClusterId;
}
public void setPhysicalClusterId(Long physicalClusterId) {
this.physicalClusterId = physicalClusterId;
}
public String getBootstrapServers() {
return bootstrapServers;
}
@@ -87,6 +95,14 @@ public class LogicClusterVO {
this.bootstrapServers = bootstrapServers;
}
public String getDescription() {
return description;
}
public void setDescription(String description) {
this.description = description;
}
public Long getGmtCreate() {
return gmtCreate;
}
@@ -103,32 +119,15 @@ public class LogicClusterVO {
this.gmtModify = gmtModify;
}
public Integer getMode() {
return mode;
}
public void setMode(Integer mode) {
this.mode = mode;
}
public String getDescription() {
return description;
}
public void setDescription(String description) {
this.description = description;
}
@Override
public String toString() {
return "LogicClusterVO{" +
"clusterId=" + clusterId +
", clusterName='" + clusterName + '\'' +
", clusterIdentification='" + clusterIdentification + '\'' +
", mode=" + mode +
", topicNum=" + topicNum +
", clusterVersion='" + clusterVersion + '\'' +
", physicalClusterId=" + physicalClusterId +
", bootstrapServers='" + bootstrapServers + '\'' +
", description='" + description + '\'' +
", gmtCreate=" + gmtCreate +

View File

@@ -28,7 +28,7 @@ public class ExpiredTopicVO {
@ApiModelProperty(value = "负责人")
private String principals;
@ApiModelProperty(value = "状态, -1:可下线, 0:过期待通知, 1+:已通知待反馈")
@ApiModelProperty(value = "状态, -1:已通知可下线, 0:过期待通知, 1+:已通知待反馈")
private Integer status;
public Long getClusterId() {

View File

@@ -26,6 +26,9 @@ public class GatewayConfigVO {
@ApiModelProperty(value="版本")
private Long version;
@ApiModelProperty(value="描述说明")
private String description;
@ApiModelProperty(value="创建时间")
private Date createTime;
@@ -72,6 +75,14 @@ public class GatewayConfigVO {
this.version = version;
}
public String getDescription() {
return description;
}
public void setDescription(String description) {
this.description = description;
}
public Date getCreateTime() {
return createTime;
}
@@ -96,6 +107,7 @@ public class GatewayConfigVO {
", name='" + name + '\'' +
", value='" + value + '\'' +
", version=" + version +
", description='" + description + '\'' +
", createTime=" + createTime +
", modifyTime=" + modifyTime +
'}';

View File

@@ -32,9 +32,12 @@ public class ClusterBaseVO {
@ApiModelProperty(value="集群类型")
private Integer mode;
@ApiModelProperty(value="安全配置参数")
@ApiModelProperty(value="Kafka安全配置")
private String securityProperties;
@ApiModelProperty(value="Jmx配置")
private String jmxProperties;
@ApiModelProperty(value="1:监控中, 0:暂停监控")
private Integer status;
@@ -108,6 +111,14 @@ public class ClusterBaseVO {
this.securityProperties = securityProperties;
}
public String getJmxProperties() {
return jmxProperties;
}
public void setJmxProperties(String jmxProperties) {
this.jmxProperties = jmxProperties;
}
public Integer getStatus() {
return status;
}
@@ -141,8 +152,9 @@ public class ClusterBaseVO {
", bootstrapServers='" + bootstrapServers + '\'' +
", kafkaVersion='" + kafkaVersion + '\'' +
", idc='" + idc + '\'' +
", mode='" + mode + '\'' +
", mode=" + mode +
", securityProperties='" + securityProperties + '\'' +
", jmxProperties='" + jmxProperties + '\'' +
", status=" + status +
", gmtCreate=" + gmtCreate +
", gmtModify=" + gmtModify +

View File

@@ -18,6 +18,9 @@ public class LogicalClusterVO {
@ApiModelProperty(value = "逻辑集群名称")
private String logicalClusterName;
@ApiModelProperty(value = "逻辑集群标识")
private String logicalClusterIdentification;
@ApiModelProperty(value = "物理集群ID")
private Long physicalClusterId;
@@ -55,6 +58,14 @@ public class LogicalClusterVO {
this.logicalClusterName = logicalClusterName;
}
public String getLogicalClusterIdentification() {
return logicalClusterIdentification;
}
public void setLogicalClusterIdentification(String logicalClusterIdentification) {
this.logicalClusterIdentification = logicalClusterIdentification;
}
public Long getPhysicalClusterId() {
return physicalClusterId;
}
@@ -116,6 +127,7 @@ public class LogicalClusterVO {
return "LogicalClusterVO{" +
"logicalClusterId=" + logicalClusterId +
", logicalClusterName='" + logicalClusterName + '\'' +
", logicalClusterIdentification='" + logicalClusterIdentification + '\'' +
", physicalClusterId=" + physicalClusterId +
", regionIdList=" + regionIdList +
", mode=" + mode +

View File

@@ -53,6 +53,20 @@ public class JsonUtils {
return JSON.toJSONString(obj);
}
public static <T> T stringToObj(String src, Class<T> clazz) {
if (ValidateUtils.isBlank(src)) {
return null;
}
return JSON.parseObject(src, clazz);
}
public static <T> List<T> stringToArrObj(String src, Class<T> clazz) {
if (ValidateUtils.isBlank(src)) {
return null;
}
return JSON.parseArray(src, clazz);
}
public static List<TopicConnectionDO> parseTopicConnections(Long clusterId, JSONObject jsonObject, long postTime) {
List<TopicConnectionDO> connectionDOList = new ArrayList<>();
for (String clientType: jsonObject.keySet()) {

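The two helpers added above give null-safe JSON parsing on top of fastjson: blank input short-circuits to `null` instead of throwing. A usage sketch with example payloads; the `JmxConfig` target type appears later in this diff:

```java
import java.util.List;

public class JsonUtilsUsageExample {
    public static void main(String[] args) {
        // blank input returns null rather than attempting a parse
        JmxConfig none = JsonUtils.stringToObj("", JmxConfig.class);

        // non-blank input is delegated to fastjson
        JmxConfig cfg = JsonUtils.stringToObj("{\"maxConn\":10}", JmxConfig.class);
        List<Integer> ids = JsonUtils.stringToArrObj("[1,2,3]", Integer.class);

        System.out.println(none == null);      // true
        System.out.println(cfg.getMaxConn());  // 10
        System.out.println(ids.size());        // 3
    }
}
```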
View File

@@ -2,6 +2,7 @@ package com.xiaojukeji.kafka.manager.common.utils;
import org.apache.commons.lang.StringUtils;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.Set;
@@ -11,6 +12,20 @@ import java.util.Set;
* @date 20/4/16
*/
public class ValidateUtils {
/**
* Returns true if any of the given objects is null
*/
public static boolean anyNull(Object... objects) {
return Arrays.stream(objects).anyMatch(ValidateUtils::isNull);
}
/**
* Returns true if any of the given strings is blank or null
*/
public static boolean anyBlank(String... strings) {
return Arrays.stream(strings).anyMatch(StringUtils::isBlank);
}
/**
* Whether the given object is null
*/
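
The new varargs guards condense the null/blank checks that otherwise pile up at the top of service methods. A sketch of representative call sites; the surrounding method and its naming are illustrative, not taken from this diff:

```java
public class ValidateUtilsUsageExample {
    // illustrative signature; real callers live in the service layer
    public static boolean credentialsAreUsable(Long clusterId, String username, String password) {
        if (ValidateUtils.anyNull(clusterId)) {
            return false; // required identifier missing
        }
        if (ValidateUtils.anyBlank(username, password)) {
            return false; // both credentials must be present and non-blank
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(credentialsAreUsable(1L, "admin", ""));    // false
        System.out.println(credentialsAreUsable(1L, "admin", "pw"));  // true
    }
}
```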

View File

@@ -0,0 +1,65 @@
package com.xiaojukeji.kafka.manager.common.utils.jmx;
public class JmxConfig {
/**
* Max JMX connections per broker
*/
private Integer maxConn;
/**
* Username
*/
private String username;
/**
* Password
*/
private String password;
/**
* Whether to enable SSL
*/
private Boolean openSSL;
public Integer getMaxConn() {
return maxConn;
}
public void setMaxConn(Integer maxConn) {
this.maxConn = maxConn;
}
public String getUsername() {
return username;
}
public void setUsername(String username) {
this.username = username;
}
public String getPassword() {
return password;
}
public void setPassword(String password) {
this.password = password;
}
public Boolean isOpenSSL() {
return openSSL;
}
public void setOpenSSL(Boolean openSSL) {
this.openSSL = openSSL;
}
@Override
public String toString() {
return "JmxConfig{" +
"maxConn=" + maxConn +
", username='" + username + '\'' +
", password='" + password + '\'' +
", openSSL=" + openSSL +
'}';
}
}
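
`JmxConfig` is the deserialization target for the new per-cluster `jmxProperties` column. A sketch of the JSON-to-object mapping, using the `stringToObj` helper added earlier in this diff; the field values are examples:

```java
public class JmxConfigParseExample {
    public static void main(String[] args) {
        // shape mirrors the JmxConfig fields; values are placeholders
        String jmxProperties =
            "{\"maxConn\":10,\"username\":\"admin\",\"password\":\"secret\",\"openSSL\":true}";

        JmxConfig config = JsonUtils.stringToObj(jmxProperties, JmxConfig.class);
        System.out.println(config.getMaxConn());  // 10
        System.out.println(config.isOpenSSL());   // true
    }
}
```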

View File

@@ -1,5 +1,6 @@
package com.xiaojukeji.kafka.manager.common.utils.jmx;
import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -7,8 +8,14 @@ import javax.management.*;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import javax.management.remote.rmi.RMIConnectorServer;
import javax.naming.Context;
import javax.rmi.ssl.SslRMIClientSocketFactory;
import java.io.IOException;
import java.net.MalformedURLException;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.atomic.AtomicInteger;
@@ -28,13 +35,19 @@ public class JmxConnectorWrap {
private AtomicInteger atomicInteger;
public JmxConnectorWrap(String host, int port, int maxConn) {
private JmxConfig jmxConfig;
public JmxConnectorWrap(String host, int port, JmxConfig jmxConfig) {
this.host = host;
this.port = port;
if (maxConn <= 0) {
maxConn = 1;
this.jmxConfig = jmxConfig;
if (ValidateUtils.isNull(this.jmxConfig)) {
this.jmxConfig = new JmxConfig();
}
this.atomicInteger = new AtomicInteger(maxConn);
if (ValidateUtils.isNullOrLessThanZero(this.jmxConfig.getMaxConn())) {
this.jmxConfig.setMaxConn(1);
}
this.atomicInteger = new AtomicInteger(this.jmxConfig.getMaxConn());
}
public boolean checkJmxConnectionAndInitIfNeed() {
@@ -64,8 +77,18 @@ public class JmxConnectorWrap {
}
String jmxUrl = String.format("service:jmx:rmi:///jndi/rmi://%s:%d/jmxrmi", host, port);
try {
JMXServiceURL url = new JMXServiceURL(jmxUrl);
jmxConnector = JMXConnectorFactory.connect(url, null);
Map<String, Object> environment = new HashMap<String, Object>();
if (!ValidateUtils.isBlank(this.jmxConfig.getUsername()) && !ValidateUtils.isBlank(this.jmxConfig.getPassword())) {
environment.put(JMXConnector.CREDENTIALS, new String[]{this.jmxConfig.getUsername(), this.jmxConfig.getPassword()}); // the RMI connector's default authenticator expects a String[] credential pair
}
if (jmxConfig.isOpenSSL() != null && this.jmxConfig.isOpenSSL()) {
environment.put(Context.SECURITY_PROTOCOL, "ssl");
SslRMIClientSocketFactory clientSocketFactory = new SslRMIClientSocketFactory();
environment.put(RMIConnectorServer.RMI_CLIENT_SOCKET_FACTORY_ATTRIBUTE, clientSocketFactory);
environment.put("com.sun.jndi.rmi.factory.socket", clientSocketFactory);
}
jmxConnector = JMXConnectorFactory.connect(new JMXServiceURL(jmxUrl), environment);
LOGGER.info("JMX connect success, host:{} port:{}.", host, port);
return true;
} catch (MalformedURLException e) {

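With the hunk above, the wrapper derives its connection environment from `JmxConfig`: credentials go into `JMXConnector.CREDENTIALS`, and `openSSL` switches the RMI client socket factory to SSL. A null config now falls back to defaults (one connection permit, no auth, plain RMI). A wiring sketch with example host and values:

```java
public class JmxConnectorWrapExample {
    public static void main(String[] args) {
        JmxConfig config = new JmxConfig();
        config.setMaxConn(5);          // connection permits handed out via the AtomicInteger
        config.setUsername("admin");   // example credentials
        config.setPassword("secret");
        config.setOpenSSL(false);      // true would switch to SslRMIClientSocketFactory

        JmxConnectorWrap wrap = new JmxConnectorWrap("10.0.0.1", 9999, config);
        if (wrap.checkJmxConnectionAndInitIfNeed()) {
            // connection is established lazily; MBean reads can proceed from here
        }
    }
}
```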
View File

@@ -33,7 +33,9 @@ public class ZkPathUtil {
private static final String D_METRICS_CONFIG_ROOT_NODE = CONFIG_ROOT_NODE + ZOOKEEPER_SEPARATOR + "KafkaExMetrics";
public static final String D_CONTROLLER_CANDIDATES = CONFIG_ROOT_NODE + ZOOKEEPER_SEPARATOR + "extension/candidates";
public static final String D_CONFIG_EXTENSION_ROOT_NODE = CONFIG_ROOT_NODE + ZOOKEEPER_SEPARATOR + "extension";
public static final String D_CONTROLLER_CANDIDATES = D_CONFIG_EXTENSION_ROOT_NODE + ZOOKEEPER_SEPARATOR + "candidates";
public static String getBrokerIdNodePath(Integer brokerId) {
return BROKER_IDS_ROOT + ZOOKEEPER_SEPARATOR + String.valueOf(brokerId);
@@ -111,6 +113,10 @@ public class ZkPathUtil {
}
public static String getKafkaExtraMetricsPath(Integer brokerId) {
return D_METRICS_CONFIG_ROOT_NODE + ZOOKEEPER_SEPARATOR + String.valueOf(brokerId);
return D_METRICS_CONFIG_ROOT_NODE + ZOOKEEPER_SEPARATOR + brokerId;
}
public static String getControllerCandidatePath(Integer brokerId) {
return D_CONTROLLER_CANDIDATES + ZOOKEEPER_SEPARATOR + brokerId;
}
}
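
The refactor re-expresses `D_CONTROLLER_CANDIDATES` in terms of the new `extension` root node without changing the resulting path. Assuming `CONFIG_ROOT_NODE` resolves to `/config` and `ZOOKEEPER_SEPARATOR` to `/` (check the class constants), the helpers resolve as sketched:

```java
public class ZkPathExample {
    public static void main(String[] args) {
        System.out.println(ZkPathUtil.getKafkaExtraMetricsPath(1));
        // -> /config/KafkaExMetrics/1
        System.out.println(ZkPathUtil.getControllerCandidatePath(1));
        // -> /config/extension/candidates/1
    }
}
```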

View File

@@ -1,8 +1,5 @@
package com.xiaojukeji.kafka.manager.common.zookeeper.znode.brokers;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.List;
/**
@@ -18,12 +15,11 @@ import java.util.List;
* "host":null,
* "timestamp":"1546632983233",
* "port":-1,
* "version":4
* "version":4,
* "rack": "CY"
* }
*/
public class BrokerMetadata implements Cloneable {
private final static Logger LOGGER = LoggerFactory.getLogger(TopicMetadata.class);
private long clusterId;
private int brokerId;
@@ -43,6 +39,8 @@ public class BrokerMetadata implements Cloneable {
private long timestamp;
private String rack;
public long getClusterId() {
return clusterId;
}
@@ -107,14 +105,12 @@ public class BrokerMetadata implements Cloneable {
this.timestamp = timestamp;
}
@Override
public Object clone() {
try {
return super.clone();
} catch (CloneNotSupportedException var3) {
LOGGER.error("clone BrokerMetadata failed.", var3);
}
return null;
public String getRack() {
return rack;
}
public void setRack(String rack) {
this.rack = rack;
}
@Override
@@ -128,6 +124,7 @@ public class BrokerMetadata implements Cloneable {
", jmxPort=" + jmx_port +
", version='" + version + '\'' +
", timestamp=" + timestamp +
", rack='" + rack + '\'' +
'}';
}
}
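
`BrokerMetadata` now surfaces the `rack` field from the broker registration znode shown in the class comment. A parsing sketch using that JSON; the repo's actual deserialization may go through its own reader, so treat this as illustrative:

```java
public class BrokerMetadataRackExample {
    public static void main(String[] args) {
        // registration JSON from the class comment, with the new "rack" field
        String znodeJson =
            "{\"host\":null,\"timestamp\":\"1546632983233\",\"port\":-1,\"version\":4,\"rack\":\"CY\"}";

        BrokerMetadata metadata = JsonUtils.stringToObj(znodeJson, BrokerMetadata.class);
        // rack stays null for brokers started without broker.rack configured
        System.out.println(metadata.getRack()); // CY
    }
}
```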

View File

@@ -0,0 +1,18 @@
package com.xiaojukeji.kafka.manager.common.utils;
import org.junit.Assert;
import org.junit.Test;
import java.util.HashMap;
import java.util.Map;
public class JsonUtilsTest {
@Test
public void testMapToJsonString() {
Map<String, Object> map = new HashMap<>();
map.put("key", "value");
map.put("int", 1);
String expectRes = "{\"key\":\"value\",\"int\":1}";
Assert.assertEquals(expectRes, JsonUtils.toJSONString(map));
}
}

View File

@@ -1,6 +1,6 @@
{
"name": "mobx-ts-example",
"version": "1.0.0",
"name": "logi-kafka",
"version": "2.3.1",
"description": "",
"scripts": {
"start": "webpack-dev-server",
@@ -21,7 +21,7 @@
"@types/spark-md5": "^3.0.2",
"antd": "^3.26.15",
"clean-webpack-plugin": "^3.0.0",
"clipboard": "^2.0.6",
"clipboard": "2.0.6",
"cross-env": "^7.0.2",
"css-loader": "^2.1.0",
"echarts": "^4.5.0",
@@ -56,4 +56,4 @@
"dependencies": {
"format-to-json": "^1.0.4"
}
}
}

View File

@@ -8,7 +8,7 @@
<parent>
<artifactId>kafka-manager</artifactId>
<groupId>com.xiaojukeji.kafka</groupId>
<version>2.1.0-SNAPSHOT</version>
<version>${kafka-manager.revision}</version>
</parent>
<build>

View File

@@ -94,6 +94,9 @@ import 'antd/es/divider/style';
import Upload from 'antd/es/upload';
import 'antd/es/upload/style';
import Transfer from 'antd/es/transfer';
import 'antd/es/transfer/style';
import TimePicker from 'antd/es/time-picker';
import 'antd/es/time-picker/style';
@@ -142,5 +145,6 @@ export {
TimePicker,
RangePickerValue,
Badge,
Popover
Popover,
Transfer
};

View File

@@ -25,7 +25,7 @@
.editor{
height: 100%;
position: absolute;
left: -14%;
left: -12%;
width: 120%;
}
}

View File

@@ -21,24 +21,12 @@ class Monacoeditor extends React.Component<IEditorProps> {
public state = {
placeholder: '',
};
// public arr = '{"clusterId":95,"startId":37397856,"step":100,"topicName":"kmo_topic_metrics_tempory_zq"}';
// public Ars(a: string) {
// const obj = JSON.parse(a);
// const newobj: any = {};
// for (const item in obj) {
// if (typeof obj[item] === 'object') {
// this.Ars(obj[item]);
// } else {
// newobj[item] = obj[item];
// }
// }
// return JSON.stringify(newobj);
// }
public async componentDidMount() {
const { value, onChange } = this.props;
const format: any = await format2json(value);
this.editor = monaco.editor.create(this.ref, {
value: format.result,
value: format.result || value,
language: 'json',
lineNumbers: 'off',
scrollBeyondLastLine: false,
@@ -48,7 +36,7 @@ class Monacoeditor extends React.Component<IEditorProps> {
minimap: {
enabled: false,
},
// automaticLayout: true, // auto layout
automaticLayout: true, // auto layout
glyphMargin: true, // glyph margin {},[]
// useTabStops: false,
// formatOnPaste: true,

View File

@@ -2,6 +2,7 @@ import * as React from 'react';
import { Select, Input, InputNumber, Form, Switch, Checkbox, DatePicker, Radio, Upload, Button, Icon, Tooltip } from 'component/antd';
import Monacoeditor from 'component/editor/monacoEditor';
import { searchProps } from 'constants/table';
import { version } from 'store/version';
import './index.less';
const TextArea = Input.TextArea;
@@ -129,6 +130,8 @@ class XForm extends React.Component<IXFormProps> {
this.renderFormItem(formItem),
)}
{formItem.renderExtraElement ? formItem.renderExtraElement() : null}
{/* Hint text (e.g. retention-time tips) rendered under the form item */}
{formItem.attrs?.prompttype ? <span style={{ color: "#cccccc", fontSize: '12px', lineHeight: '20px', display: 'block' }}>{formItem.attrs.prompttype}</span> : null}
</Form.Item>
);
})}
@@ -189,7 +192,7 @@ class XForm extends React.Component<IXFormProps> {
case FormItemType.upload:
return (
<Upload beforeUpload={(file: any) => false} {...item.attrs}>
<Button><Icon type="upload" /></Button>
<Button><Icon type="upload" /></Button>{version.fileSuffix && <span style={{ color: '#fb3939', padding: '0 0 0 10px' }}>{`请上传${version.fileSuffix}文件`}</span>}
</Upload>
);
}

View File

@@ -67,7 +67,7 @@ export const timeMonthStr = 'YYYY/MM';
// tslint:disable-next-line:max-line-length
export const indexUrl ={
indexUrl:'https://github.com/didi/kafka-manager',
indexUrl:'https://github.com/didi/Logi-KafkaManager/blob/master/docs/user_guide/kafka_metrics_desc.md', // 指标说明
cagUrl:'https://github.com/didi/Logi-KafkaManager/blob/master/docs/user_guide/add_cluster/add_cluster.md', // 集群接入指南 Cluster access Guide
}

View File

@@ -19,7 +19,7 @@ export const cellStyle = {
overflow: 'hidden',
whiteSpace: 'nowrap',
textOverflow: 'ellipsis',
cursor: 'pointer',
// cursor: 'pointer',
};
export const searchProps = {

View File

@@ -38,7 +38,7 @@ export class ClusterConsumer extends SearchAndFilterContainer {
key: 'operation',
width: '10%',
render: (t: string, item: IOffset) => {
return (<a onClick={() => this.getConsumeDetails(item)}></a>);
return (<a onClick={() => this.getConsumeDetails(item)}></a>);
},
}];
private xFormModal: IXFormWrapper;
@@ -100,7 +100,7 @@ export class ClusterConsumer extends SearchAndFilterContainer {
<div className="k-row">
<ul className="k-tab">
<li>{this.props.tab}</li>
{this.renderSearch()}
{this.renderSearch('', '请输入消费组名称')}
</ul>
<Table
columns={this.columns}
@@ -110,7 +110,7 @@ export class ClusterConsumer extends SearchAndFilterContainer {
/>
</div>
<Modal
title="消费的Topic"
title="消费详情"
visible={this.state.detailsVisible}
onOk={() => this.handleDetailsOk()}
onCancel={() => this.handleDetailsCancel()}

View File

@@ -2,7 +2,8 @@
import * as React from 'react';
import { SearchAndFilterContainer } from 'container/search-filter';
import { Table } from 'component/antd';
import { Table, Button, Popconfirm, Modal, Transfer, notification } from 'component/antd';
// import { Transfer } from 'antd'
import { observer } from 'mobx-react';
import { pagination } from 'constants/table';
import Url from 'lib/url-parser';
@@ -16,8 +17,12 @@ import { timeFormat } from 'constants/strategy';
export class ClusterController extends SearchAndFilterContainer {
public clusterId: number;
public state = {
public state: any = {
searchKey: '',
searchCandidateKey: '',
isCandidateModel: false,
mockData: [],
targetKeys: [],
};
constructor(props: any) {
@@ -37,6 +42,94 @@ export class ClusterController extends SearchAndFilterContainer {
return data;
}
public getCandidateData<T extends IController>(origin: T[]) {
let data: T[] = origin;
let { searchCandidateKey } = this.state;
searchCandidateKey = (searchCandidateKey + '').trim().toLowerCase();
data = searchCandidateKey ? origin.filter((item: IController) =>
(item.host !== undefined && item.host !== null) && item.host.toLowerCase().includes(searchCandidateKey as string),
) : origin;
return data;
}
// Candidate controller table
public renderCandidateController() {
const columns = [
{
title: 'BrokerId',
dataIndex: 'brokerId',
key: 'brokerId',
width: '20%',
sorter: (a: IController, b: IController) => b.brokerId - a.brokerId,
render: (r: string, t: IController) => {
return (
<a href={`${this.urlPrefix}/admin/broker-detail?clusterId=${this.clusterId}&brokerId=${t.brokerId}`}>{r}
</a>
);
},
},
{
title: 'BrokerHost',
key: 'host',
dataIndex: 'host',
width: '20%',
// render: (r: string, t: IController) => {
// return (
// <a href={`${this.urlPrefix}/admin/broker-detail?clusterId=${this.clusterId}&brokerId=${t.brokerId}`}>{r}
// </a>
// );
// },
},
{
title: 'Broker状态',
key: 'status',
dataIndex: 'status',
width: '20%',
render: (r: number, t: IController) => {
return (
<span>{r === 1 ? '不在线' : '在线'}</span>
);
},
},
{
title: '创建时间',
dataIndex: 'startTime',
key: 'startTime',
width: '25%',
sorter: (a: IController, b: IController) => b.timestamp - a.timestamp,
render: (t: number) => moment(t).format(timeFormat),
},
{
title: '操作',
dataIndex: 'operation',
key: 'operation',
width: '15%',
render: (r: string, t: IController) => {
return (
<Popconfirm
title="确定删除?"
onConfirm={() => this.deleteCandidateCancel(t)}
cancelText="取消"
okText="确认"
>
<a>删除</a>
</Popconfirm>
);
},
},
];
return (
<Table
columns={columns}
dataSource={this.getCandidateData(admin.controllerCandidate)}
pagination={pagination}
rowKey="key"
/>
);
}
public renderController() {
const columns = [
@@ -58,12 +151,6 @@ export class ClusterController extends SearchAndFilterContainer {
key: 'host',
dataIndex: 'host',
width: '30%',
// render: (r: string, t: IController) => {
// return (
// <a href={`${this.urlPrefix}/admin/broker-detail?clusterId=${this.clusterId}&brokerId=${t.brokerId}`}>{r}
// </a>
// );
// },
},
{
title: '变更时间',
@@ -87,16 +174,104 @@ export class ClusterController extends SearchAndFilterContainer {
public componentDidMount() {
admin.getControllerHistory(this.clusterId);
admin.getCandidateController(this.clusterId);
admin.getBrokersMetadata(this.clusterId);
}
public addController = () => {
this.setState({ isCandidateModel: true, targetKeys: [] })
}
public addCandidateChange = (targetKeys: any) => {
this.setState({ targetKeys })
}
public handleCandidateCancel = () => {
this.setState({ isCandidateModel: false });
}
public handleCandidateOk = () => {
let brokerIdList = this.state.targetKeys.map((item: any) => {
return admin.brokersMetadata[item].brokerId
})
admin.addCandidateController(this.clusterId, brokerIdList).then(data => {
notification.success({ message: '新增成功' });
admin.getCandidateController(this.clusterId);
}).catch(err => {
notification.error({ message: '新增失败' });
})
this.setState({ isCandidateModel: false, targetKeys: [] });
}
public deleteCandidateCancel = (target: any) => {
admin.deleteCandidateCancel(this.clusterId, [target.brokerId]).then(() => {
notification.success({ message: '删除成功' });
});
this.setState({ isCandidateModel: false });
}
public renderAddCandidateController() {
let filterControllerCandidate = admin.brokersMetadata.filter((item: any) => {
return !admin.filtercontrollerCandidate.includes(item.brokerId)
})
return (
<Modal
title="新增候选Controller"
visible={this.state.isCandidateModel}
// okText="确认"
// cancelText="取消"
maskClosable={false}
// onOk={() => this.handleCandidateOk()}
onCancel={() => this.handleCandidateCancel()}
footer={<>
<Button style={{ width: '60px' }} onClick={() => this.handleCandidateCancel()}>取消</Button>
<Button disabled={this.state.targetKeys.length > 0 ? false : true} style={{ width: '60px' }} type="primary" onClick={() => this.handleCandidateOk()}>确定</Button>
</>
}
>
<Transfer
dataSource={filterControllerCandidate}
targetKeys={this.state.targetKeys}
render={item => item.host}
onChange={(targetKeys) => this.addCandidateChange(targetKeys)}
titles={['未选', '已选']}
locale={{
itemUnit: '项',
itemsUnit: '项',
}}
listStyle={{
width: "45%",
}}
/>
</Modal>
);
}
public render() {
return (
<div className="k-row">
<ul className="k-tab">
<li>
<span>候选Controller</span>
<span style={{ display: 'inline-block', color: "#a7a8a9", fontSize: '12px', marginLeft: '15px' }}>Controller将会优先从以下Broker中选举</span>
</li>
<div style={{ display: 'flex' }}>
<div style={{ marginRight: '15px' }}>
<Button onClick={() => this.addController()} type='primary'>新增候选Controller</Button>
</div>
{this.renderSearch('', '请查找Host', 'searchCandidateKey')}
</div>
</ul>
{this.renderCandidateController()}
<ul className="k-tab" style={{ marginTop: '10px' }}>
<li>{this.props.tab}</li>
{this.renderSearch('', '请输入Host')}
</ul>
{this.renderController()}
{this.renderAddCandidateController()}
</div>
);
}

View File

@@ -2,7 +2,7 @@ import * as React from 'react';
import Url from 'lib/url-parser';
import { region } from 'store';
import { admin } from 'store/admin';
import { topic } from 'store/topic';
import { app } from 'store/app';
import { Table, notification, Tooltip, Popconfirm } from 'antd';
import { pagination, cellStyle } from 'constants/table';
import { observer } from 'mobx-react';
@@ -56,8 +56,6 @@ export class ClusterTopic extends SearchAndFilterContainer {
public expandPartition(item: IClusterTopics) {
// getTopicBasicInfo
admin.getTopicsBasicInfo(item.clusterId, item.topicName).then(data => {
console.log(admin.topicsBasic);
console.log(admin.basicInfo);
this.clusterTopicsFrom = item;
this.setState({
expandVisible: true,
@@ -114,6 +112,7 @@ export class ClusterTopic extends SearchAndFilterContainer {
public componentDidMount() {
admin.getClusterTopics(this.clusterId);
app.getAdminAppList()
}
public renderClusterTopicList() {

View File

@@ -159,7 +159,6 @@ export class ExclusiveCluster extends SearchAndFilterContainer {
public handleDeleteRegion = (record: IBrokersRegions) => {
const filterRegion = admin.logicalClusters.filter(item => item.regionIdList.includes(record.id));
if (!filterRegion.length) {
return;
}
@@ -335,6 +334,7 @@ export class ExclusiveCluster extends SearchAndFilterContainer {
{this.renderSearch('', '请输入Region名称broker ID')}
</ul>
{this.renderRegion()}
{this.renderDeleteRegionModal()}
</div >
);
}

View File

@@ -94,4 +94,10 @@
.region-prompt{
font-weight: bold;
text-align: center;
}
.asd{
display: flex;
justify-content: space-around;
align-items: center;
}

View File

@@ -32,9 +32,9 @@ export class ClusterDetail extends React.Component {
}
public render() {
let content = {} as IMetaData;
content = admin.basicInfo ? admin.basicInfo : content;
return (
let content = {} as IMetaData;
content = admin.basicInfo ? admin.basicInfo : content;
return (
<>
<PageHeader
className="detail topic-detail-header"
@@ -46,7 +46,7 @@ export class ClusterDetail extends React.Component {
<ClusterOverview basicInfo={content} />
</TabPane>
<TabPane tab="Topic信息" key="2">
<ClusterTopic tab={'Topic信息'}/>
<ClusterTopic tab={'Topic信息'} />
</TabPane>
<TabPane tab="Broker信息" key="3">
<ClusterBroker tab={'Broker信息'} basicInfo={content} />
@@ -60,11 +60,11 @@ export class ClusterDetail extends React.Component {
<TabPane tab="逻辑集群信息" key="6">
<LogicalCluster tab={'逻辑集群信息'} basicInfo={content} />
</TabPane>
<TabPane tab="Controller变更历史" key="7">
<TabPane tab="Controller信息" key="7">
<ClusterController tab={'Controller变更历史'} />
</TabPane>
<TabPane tab="限流信息" key="8">
<CurrentLimiting tab={'限流信息'}/>
<TabPane tab="限流信息" key="8">
<CurrentLimiting tab={'限流信息'} />
</TabPane>
</Tabs>
</>

View File

@@ -40,15 +40,15 @@ export class LogicalCluster extends SearchAndFilterContainer {
key: 'logicalClusterId',
},
{
title: '逻辑集群中文名称',
title: '逻辑集群名称',
dataIndex: 'logicalClusterName',
key: 'logicalClusterName',
width: '150px'
},
{
title: '逻辑集群英文名称',
dataIndex: 'logicalClusterName',
key: 'logicalClusterName1',
title: '逻辑集群标识',
dataIndex: 'logicalClusterIdentification',
key: 'logicalClusterIdentification',
width: '150px'
},
{

View File

@@ -1,5 +1,5 @@
import * as React from 'react';
import { Modal, Table, Button, notification, message, Tooltip, Icon, Popconfirm, Alert } from 'component/antd';
import { Modal, Table, Button, notification, message, Tooltip, Icon, Popconfirm, Alert, Popover } from 'component/antd';
import { wrapper } from 'store';
import { observer } from 'mobx-react';
import { IXFormWrapper, IMetaData, IRegister } from 'types/base-type';
@@ -12,6 +12,7 @@ import { urlPrefix } from 'constants/left-menu';
import { indexUrl } from 'constants/strategy'
import { region } from 'store';
import './index.less';
import Monacoeditor from 'component/editor/monacoEditor';
import { getAdminClusterColumns } from '../config';
const { confirm } = Modal;
@@ -58,7 +59,7 @@ export class ClusterList extends SearchAndFilterContainer {
message: '请输入zookeeper地址',
}],
attrs: {
placeholder: '请输入zookeeper地址',
placeholder: '请输入zookeeper地址例如192.168.0.1:2181,192.168.0.2:2181/logi-kafka',
rows: 2,
disabled: item ? true : false,
},
@@ -72,7 +73,7 @@ export class ClusterList extends SearchAndFilterContainer {
message: '请输入bootstrapServers',
}],
attrs: {
placeholder: '请输入bootstrapServers',
placeholder: '请输入bootstrapServers例如192.168.1.1:9092,192.168.1.2:9092',
rows: 2,
disabled: item ? true : false,
},
@@ -131,7 +132,26 @@ export class ClusterList extends SearchAndFilterContainer {
{
"security.protocol": "SASL_PLAINTEXT",
"sasl.mechanism": "PLAIN",
"sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"xxxxxx\" password=\"xxxxxx\";"
"sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\\"xxxxxx\\" password=\\"xxxxxx\\";"
}`,
rows: 8,
},
},
{
key: 'jmxProperties',
label: 'JMX认证',
type: 'text_area',
rules: [{
required: false,
message: '请输入JMX认证',
}],
attrs: {
placeholder: `请输入JMX认证例如
{
"maxConn": 10, #KM对单台Broker对最大连接数
"username": "xxxxx", #用户名
"password": "xxxxx", #密码
"openSSL": true, #开启SSLtrue表示开启SSLfalse表示关闭
}`,
rows: 8,
},
@@ -271,11 +291,13 @@ export class ClusterList extends SearchAndFilterContainer {
cancelText="取消"
okText="确认"
>
<a
className="action-button"
>
{item.status === 1 ? '暂停监控' : '开始监控'}
</a>
<Tooltip title="暂停监控将无法正常监控指标信息,建议开启监控">
<a
className="action-button"
>
{item.status === 1 ? '暂停监控' : '开始监控'}
</a>
</Tooltip>
</Popconfirm>
<a onClick={this.showMonitor.bind(this, item)}>

View File

@@ -1,8 +1,8 @@
import * as React from 'react';
import { IUser, IUploadFile, IConfigure, IMetaData, IBrokersPartitions } from 'types/base-type';
import { IUser, IUploadFile, IConfigure, IConfigGateway, IMetaData } from 'types/base-type';
import { users } from 'store/users';
import { version } from 'store/version';
import { showApplyModal, showModifyModal, showConfigureModal } from 'container/modal/admin';
import { showApplyModal, showApplyModalModifyPassword, showModifyModal, showConfigureModal, showConfigGatewayModal } from 'container/modal/admin';
import { Popconfirm, Tooltip } from 'component/antd';
import { admin } from 'store/admin';
import { cellStyle } from 'constants/table';
@@ -27,6 +27,7 @@ export const getUserColumns = () => {
return (
<span className="table-operation">
<a onClick={() => showApplyModal(record)}>编辑</a>
<a onClick={() => showApplyModalModifyPassword(record)}>修改密码</a>
<Popconfirm
title="确定删除?"
onConfirm={() => users.deleteUser(record.username)}
@@ -184,6 +185,87 @@ export const getConfigureColumns = () => {
return columns;
};
// Gateway configuration table columns
export const getConfigColumns = () => {
const columns = [
{
title: '配置类型',
dataIndex: 'type',
key: 'type',
width: '25%',
ellipsis: true,
sorter: (a: IConfigGateway, b: IConfigGateway) => a.type.charCodeAt(0) - b.type.charCodeAt(0),
},
{
title: '配置键',
dataIndex: 'name',
key: 'name',
width: '15%',
ellipsis: true,
sorter: (a: IConfigGateway, b: IConfigGateway) => a.name.charCodeAt(0) - b.name.charCodeAt(0),
},
{
title: '配置值',
dataIndex: 'value',
key: 'value',
width: '20%',
ellipsis: true,
sorter: (a: IConfigGateway, b: IConfigGateway) => a.value.charCodeAt(0) - b.value.charCodeAt(0),
render: (t: string) => {
return t.substr(0, 1) === '{' && t.substr(t.length - 1) === '}' ? JSON.stringify(JSON.parse(t), null, 4) : t;
},
},
{
title: '修改时间',
dataIndex: 'modifyTime',
key: 'modifyTime',
width: '15%',
sorter: (a: IConfigGateway, b: IConfigGateway) => b.modifyTime - a.modifyTime,
render: (t: number) => moment(t).format(timeFormat),
},
{
title: '版本号',
dataIndex: 'version',
key: 'version',
width: '10%',
ellipsis: true,
sorter: (a: IConfigGateway, b: IConfigGateway) => b.version.charCodeAt(0) - a.version.charCodeAt(0),
},
{
title: '描述信息',
dataIndex: 'description',
key: 'description',
width: '20%',
ellipsis: true,
onCell: () => ({
style: {
maxWidth: 180,
...cellStyle,
},
}),
},
{
title: '操作',
width: '10%',
render: (text: string, record: IConfigGateway) => {
return (
<span className="table-operation">
<a onClick={() => showConfigGatewayModal(record)}>编辑</a>
<Popconfirm
title="确定删除?"
onConfirm={() => admin.deleteConfigGateway({ id: record.id })}
cancelText="取消"
okText="确认"
>
<a>删除</a>
</Popconfirm>
</span>);
},
},
];
return columns;
};
const renderClusterHref = (value: number | string, item: IMetaData, key: number) => {
return ( // status 0: monitoring paused, link disabled; 1: monitoring, link clickable
<>

View File

@@ -3,11 +3,11 @@ import { SearchAndFilterContainer } from 'container/search-filter';
import { Table, Button, Spin } from 'component/antd';
import { admin } from 'store/admin';
import { observer } from 'mobx-react';
import { IConfigure } from 'types/base-type';
import { IConfigure, IConfigGateway } from 'types/base-type';
import { users } from 'store/users';
import { pagination } from 'constants/table';
import { getConfigureColumns } from './config';
import { showConfigureModal } from 'container/modal/admin';
import { getConfigureColumns, getConfigColumns } from './config';
import { showConfigureModal, showConfigGatewayModal } from 'container/modal/admin';
@observer
export class ConfigureManagement extends SearchAndFilterContainer {
@@ -17,7 +17,12 @@ export class ConfigureManagement extends SearchAndFilterContainer {
};
public componentDidMount() {
admin.getConfigure();
if (this.props.isShow) {
admin.getGatewayList();
admin.getGatewayType();
} else {
admin.getConfigure();
}
}
public getData<T extends IConfigure>(origin: T[]) {
@@ -34,15 +39,34 @@ export class ConfigureManagement extends SearchAndFilterContainer {
return data;
}
public getGatewayData<T extends IConfigGateway>(origin: T[]) {
let data: T[] = origin;
let { searchKey } = this.state;
searchKey = (searchKey + '').trim().toLowerCase();
data = searchKey ? origin.filter((item: IConfigGateway) =>
((item.name !== undefined && item.name !== null) && item.name.toLowerCase().includes(searchKey as string))
|| ((item.value !== undefined && item.value !== null) && item.value.toLowerCase().includes(searchKey as string))
|| ((item.description !== undefined && item.description !== null) &&
item.description.toLowerCase().includes(searchKey as string)),
) : origin;
return data;
}
public renderTable() {
return (
<Spin spinning={users.loading}>
<Table
{this.props.isShow ? <Table
rowKey="key"
columns={getConfigColumns()}
dataSource={this.getGatewayData(admin.configGatewayList)}
pagination={pagination}
/> : <Table
rowKey="key"
columns={getConfigureColumns()}
dataSource={this.getData(admin.configureList)}
pagination={pagination}
/>
/>}
</Spin>
);
@@ -53,7 +77,7 @@ export class ConfigureManagement extends SearchAndFilterContainer {
<ul>
{this.renderSearch('', '请输入配置键、值或描述')}
<li className="right-btn-1">
<Button type="primary" onClick={() => showConfigureModal()}></Button>
<Button type="primary" onClick={() => this.props.isShow ? showConfigGatewayModal() : showConfigureModal()}></Button>
</li>
</ul>
);

View File

@@ -6,6 +6,7 @@ import { curveKeys, CURVE_KEY_MAP, PERIOD_RADIO_MAP, PERIOD_RADIO } from './conf
import moment = require('moment');
import { observer } from 'mobx-react';
import { timeStampStr } from 'constants/strategy';
import { adminMonitor } from 'store/admin-monitor';
@observer
export class DataCurveFilter extends React.Component {
@@ -21,6 +22,7 @@ export class DataCurveFilter extends React.Component {
}
public refreshAll = () => {
adminMonitor.setRequestId(null);
Object.keys(curveKeys).forEach((c: curveKeys) => {
const { typeInfo, curveInfo: option } = CURVE_KEY_MAP.get(c);
const { parser } = typeInfo;
@@ -32,7 +34,7 @@ export class DataCurveFilter extends React.Component {
return (
<>
<Radio.Group onChange={this.radioChange} defaultValue={curveInfo.periodKey}>
{PERIOD_RADIO.map(p => <Radio.Button key={p.key} value={p.key}>{p.label}</Radio.Button>)}
{PERIOD_RADIO.map(p => <Radio.Button key={p.key} value={p.key}>{p.label}</Radio.Button>)}
</Radio.Group>
<DatePicker.RangePicker
format={timeStampStr}

View File

@@ -79,7 +79,7 @@ export class IndividualBill extends React.Component {
}
public renderTableList() {
const adminUrl=`${urlPrefix}/admin/bill-detail`
const adminUrl = `${urlPrefix}/admin/bill-detail`
return (
<Table
rowKey="key"
@@ -89,11 +89,11 @@ export class IndividualBill extends React.Component {
/>
);
}
public renderChart() {
return (
<div className="chart-box">
<BarChartComponet ref={(ref) => this.chart = ref } getChartData={this.getData.bind(this, null)} />
<BarChartComponet ref={(ref) => this.chart = ref} getChartData={this.getData.bind(this, null)} />
</div>
);
}
@@ -132,7 +132,7 @@ export class IndividualBill extends React.Component {
<>
<div className="container">
<Tabs defaultActiveKey="1" type="card">
<TabPane
<TabPane
tab={<>
<span></span>&nbsp;
<a
@@ -142,7 +142,7 @@ export class IndividualBill extends React.Component {
>
<Icon type="question-circle" />
</a>
</>}
</>}
key="1"
>
{this.renderDatePick()}

View File

@@ -13,17 +13,20 @@ export class PlatformManagement extends React.Component {
public render() {
return (
<>
<Tabs activeKey={location.hash.substr(1) || '1'} type="card" onChange={handleTabKey}>
<TabPane tab="应用管理" key="1">
<AdminAppList />
</TabPane>
<TabPane tab="用户管理" key="2">
<UserManagement />
</TabPane>
<TabPane tab="配置管理" key="3">
<ConfigureManagement />
</TabPane>
</Tabs>
<Tabs activeKey={location.hash.substr(1) || '1'} type="card" onChange={handleTabKey}>
<TabPane tab="应用管理" key="1">
<AdminAppList />
</TabPane>
<TabPane tab="用户管理" key="2">
<UserManagement />
</TabPane>
<TabPane tab="平台配置" key="3">
<ConfigureManagement isShow={false} />
</TabPane>
<TabPane tab="网关配置" key="4">
<ConfigureManagement isShow={true} />
</TabPane>
</Tabs>
</>
);
}

View File

@@ -29,7 +29,7 @@ export class UserManagement extends SearchAndFilterContainer {
searchKey = (searchKey + '').trim().toLowerCase();
data = searchKey ? origin.filter((item: IUser) =>
(item.username !== undefined && item.username !== null) && item.username.toLowerCase().includes(searchKey as string)) : origin ;
(item.username !== undefined && item.username !== null) && item.username.toLowerCase().includes(searchKey as string)) : origin;
return data;
}

View File

@@ -1,7 +1,7 @@
import * as React from 'react';
import { alarm } from 'store/alarm';
import { IMonitorGroups } from 'types/base-type';
import { getValueFromLocalStorage, setValueToLocalStorage } from 'lib/local-storage';
import { getValueFromLocalStorage, setValueToLocalStorage, deleteValueFromLocalStorage } from 'lib/local-storage';
import { VirtualScrollSelect } from '../../../component/virtual-scroll-select';
interface IAlarmSelectProps {
@@ -36,6 +36,10 @@ export class AlarmSelect extends React.Component<IAlarmSelectProps> {
onChange && onChange(params);
}
public componentWillUnmount() {
deleteValueFromLocalStorage('monitorGroups');
}
public render() {
const { value, isDisabled } = this.props;
return (

View File

@@ -11,6 +11,7 @@ import { filterKeys } from 'constants/strategy';
import { VirtualScrollSelect } from 'component/virtual-scroll-select';
import { IsNotNaN } from 'lib/utils';
import { searchProps } from 'constants/table';
import { toJS } from 'mobx';
interface IDynamicProps {
form?: any;
@@ -33,6 +34,7 @@ export class DynamicSetFilter extends React.Component<IDynamicProps> {
public monitorType: string = null;
public clusterId: number = null;
public clusterName: string = null;
public clusterIdentification: string | number = null;
public topicName: string = null;
public consumerGroup: string = null;
public location: string = null;
@@ -45,16 +47,18 @@ export class DynamicSetFilter extends React.Component<IDynamicProps> {
this.props.form.validateFields((err: Error, values: any) => {
if (!err) {
monitorType = values.monitorType;
const index = cluster.clusterData.findIndex(item => item.clusterId === values.cluster);
const index = cluster.clusterData.findIndex(item => item.clusterIdentification === values.cluster);
if (index > -1) {
values.clusterIdentification = cluster.clusterData[index].clusterIdentification;
values.clusterName = cluster.clusterData[index].clusterName;
}
for (const key of Object.keys(values)) {
if (filterKeys.indexOf(key) > -1) { // 只有这几种值可以设置
filterList.push({
tkey: key === 'clusterName' ? 'cluster' : key, // the request param renames clusterName to cluster
tkey: key === 'clusterName' ? 'cluster' : key, // the cluster filter value now carries the clusterIdentification
topt: '=',
tval: [values[key]],
clusterIdentification: values.clusterIdentification
});
}
}
@@ -74,13 +78,13 @@ export class DynamicSetFilter extends React.Component<IDynamicProps> {
public resetFormValue(
monitorType: string = null,
clusterId: number = null,
clusterIdentification: any = null,
topicName: string = null,
consumerGroup: string = null,
location: string = null) {
const { setFieldsValue } = this.props.form;
setFieldsValue({
cluster: clusterId,
cluster: clusterIdentification,
topic: topicName,
consumerGroup,
location,
@@ -88,18 +92,18 @@ export class DynamicSetFilter extends React.Component<IDynamicProps> {
});
}
public getClusterId = (clusterName: string) => {
public getClusterId = async (clusterIdentification: any) => {
let clusterId = null;
const index = cluster.clusterData.findIndex(item => item.clusterName === clusterName);
const index = cluster.clusterData.findIndex(item => item.clusterIdentification === clusterIdentification);
if (index > -1) {
clusterId = cluster.clusterData[index].clusterId;
}
if (clusterId) {
cluster.getClusterMetaTopics(clusterId);
await cluster.getClusterMetaTopics(clusterId);
this.clusterId = clusterId;
return this.clusterId;
}
return this.clusterId = clusterName as any;
};
return this.clusterId = clusterId as any;
}
public async initFormValue(monitorRule: IRequestParams) {
@@ -108,17 +112,19 @@ export class DynamicSetFilter extends React.Component<IDynamicProps> {
const topicFilter = strategyFilterList.filter(item => item.tkey === 'topic')[0];
const consumerFilter = strategyFilterList.filter(item => item.tkey === 'consumerGroup')[0];
const clusterName = clusterFilter ? clusterFilter.tval[0] : null;
const clusterIdentification = clusterFilter ? clusterFilter.tval[0] : null;
const topic = topicFilter ? topicFilter.tval[0] : null;
const consumerGroup = consumerFilter ? consumerFilter.tval[0] : null;
const location: string = null;
const monitorType = monitorRule.strategyExpressionList[0].metric;
alarm.changeMonitorStrategyType(monitorType);
await this.getClusterId(clusterName);
// use clusterIdentification in place of the former clusterName
this.clusterIdentification = clusterIdentification;
await this.getClusterId(this.clusterIdentification);
//
await this.handleSelectChange(topic, 'topic');
await this.handleSelectChange(consumerGroup, 'consumerGroup');
this.resetFormValue(monitorType, this.clusterId, topic, consumerGroup, location);
this.resetFormValue(monitorType, this.clusterIdentification, topic, consumerGroup, location);
}
public clearFormData() {
@@ -130,11 +136,12 @@ export class DynamicSetFilter extends React.Component<IDynamicProps> {
this.resetFormValue();
}
public async handleClusterChange(e: number) {
this.clusterId = e;
public async handleClusterChange(e: any) {
this.clusterIdentification = e;
this.topicName = null;
topic.setLoading(true);
await cluster.getClusterMetaTopics(e);
const clusterId = await this.getClusterId(e);
await cluster.getClusterMetaTopics(clusterId);
this.resetFormValue(this.monitorType, e, null, this.consumerGroup, this.location);
topic.setLoading(false);
}
@@ -170,7 +177,7 @@ export class DynamicSetFilter extends React.Component<IDynamicProps> {
}
this.consumerGroup = null;
this.location = null;
this.resetFormValue(this.monitorType, this.clusterId, this.topicName);
this.resetFormValue(this.monitorType, this.clusterIdentification, this.topicName);
topic.setLoading(false);
}
@@ -213,17 +220,24 @@ export class DynamicSetFilter extends React.Component<IDynamicProps> {
},
rules: [{ required: true, message: '请选择监控指标' }],
} as IVritualScrollSelect;
const clusterData = toJS(cluster.clusterData);
const options = clusterData?.length ? clusterData.map(item => {
return {
label: `${item.clusterName}${item.description ? '（' + item.description + '）' : ''}`,
value: item.clusterIdentification
}
}) : null;
const clusterItem = {
label: '集群',
options: cluster.clusterData,
defaultValue: this.clusterId,
options,
defaultValue: this.clusterIdentification,
rules: [{ required: true, message: '请选择集群' }],
attrs: {
placeholder: '请选择集群',
className: 'middle-size',
className: 'large-size',
disabled: this.isDetailPage,
onChange: (e: number) => this.handleClusterChange(e),
onChange: (e: any) => this.handleClusterChange(e),
},
key: 'cluster',
} as unknown as IVritualScrollSelect;
@@ -241,7 +255,7 @@ export class DynamicSetFilter extends React.Component<IDynamicProps> {
}),
attrs: {
placeholder: '请选择Topic',
className: 'middle-size',
className: 'large-size',
disabled: this.isDetailPage,
onChange: (e: string) => this.handleSelectChange(e, 'topic'),
},
@@ -329,7 +343,7 @@ export class DynamicSetFilter extends React.Component<IDynamicProps> {
key={v.value || v.key || index}
value={v.value}
>
{v.label.length > 25 ? <Tooltip placement="bottomLeft" title={v.label}>
{v.label?.length > 25 ? <Tooltip placement="bottomLeft" title={v.label}>
{v.label}
</Tooltip> : v.label}
</Select.Option>

View File

@@ -43,21 +43,23 @@
Icon {
margin-left: 8px;
}
.ant-form-item-label {
// padding-left: 10px;
width: 118px;
text-align: right !important;
}
&.type-form {
padding-top: 10px;
padding-top: 10px;
.ant-form{
min-width: 755px;
}
.ant-form-item {
width: 30%;
width: 45%;
min-width: 360px;
}
.ant-form-item-label {
padding-left: 10px;
}
.ant-form-item-control {
width: 220px;
width: 300px;
}
}

View File

@@ -12,7 +12,6 @@ import { alarm } from 'store/alarm';
import { app } from 'store/app';
import Url from 'lib/url-parser';
import { IStrategyExpression, IRequestParams } from 'types/alarm';
@observer
export class AddAlarm extends SearchAndFilterContainer {
public isDetailPage = window.location.pathname.includes('/alarm-detail'); // 判断是否为详情
@@ -90,8 +89,8 @@ export class AddAlarm extends SearchAndFilterContainer {
const filterObj = this.typeForm.getFormData().filterObj;
// tslint:disable-next-line:max-line-length
if (!actionValue || !timeValue || !typeValue || !strategyList.length || !filterObj || !filterObj.filterList.length) {
message.error('请正确填写必填项');
return null;
message.error('请正确填写必填项');
return null;
}
if (filterObj.monitorType === 'online-kafka-topic-throttled') {
@@ -101,13 +100,17 @@ export class AddAlarm extends SearchAndFilterContainer {
tval: [typeValue.app],
});
}
this.id && filterObj.filterList.forEach((item: any) => {
if (item.tkey === 'cluster') {
item.tval = [item.clusterIdentification]
}
})
strategyList = strategyList.map((row: IStrategyExpression) => {
return {
...row,
metric: filterObj.monitorType,
};
});
return {
appId: typeValue.app,
name: typeValue.alarmName,
@@ -129,7 +132,7 @@ export class AddAlarm extends SearchAndFilterContainer {
public renderAlarmStrategy() {
return (
<div className="config-wrapper">
<span className="span-tag"></span>
<span className="span-tag" data-set={alarm.monitorType}></span>
<div className="info-wrapper">
<WrappedDynamicSetStrategy wrappedComponentRef={(form: any) => this.strategyForm = form} />
</div>
@@ -139,9 +142,9 @@ export class AddAlarm extends SearchAndFilterContainer {
public renderTimeForm() {
return (
<>
<WrappedTimeForm wrappedComponentRef={(form: any) => this.timeForm = form} />
</>
<>
<WrappedTimeForm wrappedComponentRef={(form: any) => this.timeForm = form} />
</>
);
}
@@ -164,7 +167,7 @@ export class AddAlarm extends SearchAndFilterContainer {
{this.renderAlarmStrategy()}
{this.renderTimeForm()}
<ActionForm ref={(actionForm) => this.actionForm = actionForm} />
</div>
</div>
</Spin>
);
}

View File

@@ -5,6 +5,7 @@ import { IStringMap } from 'types/base-type';
import { IRequestParams } from 'types/alarm';
import { IFormSelect, IFormItem, FormItemType } from 'component/x-form';
import { searchProps } from 'constants/table';
import { alarm } from 'store/alarm';
interface IDynamicProps {
form: any;
@@ -27,6 +28,7 @@ class DynamicSetStrategy extends React.Component<IDynamicProps> {
public crudList = [] as ICRUDItem[];
public state = {
shouldUpdate: false,
monitorType: alarm.monitorType
};
public componentDidMount() {
@@ -130,7 +132,7 @@ class DynamicSetStrategy extends React.Component<IDynamicProps> {
if (lineValue.func === 'happen' && paramsArray.length > 1 && paramsArray[0] < paramsArray[1]) {
strategyList = []; // 清空赋值
return message.error('周期值应大于次数') ;
return message.error('周期值应大于次数');
}
lineValue.params = paramsArray.join(',');
@@ -292,8 +294,39 @@ class DynamicSetStrategy extends React.Component<IDynamicProps> {
}
return element;
}
public renderFormList(row: ICRUDItem) {
public unit(monitorType: string) {
let element = null;
switch (monitorType) {
case 'online-kafka-topic-msgIn':
element = "条/秒"
break;
case 'online-kafka-topic-bytesIn':
element = "字节/秒"
break;
case 'online-kafka-topic-bytesRejected':
element = "字节/秒"
break;
case 'online-kafka-topic-produce-throttled':
element = "1表示被限流"
break;
case 'online-kafka-topic-fetch-throttled':
element = "1表示被限流"
break;
case 'online-kafka-consumer-maxLag':
element = "条"
break;
case 'online-kafka-consumer-lag':
element = "条"
break;
case 'online-kafka-consumer-maxDelayTime':
element = "秒"
break;
}
return (
<span>{element}</span>
)
}
public renderFormList(row: ICRUDItem, monitorType: string) {
const key = row.id;
const funcType = row.func;
@@ -309,6 +342,7 @@ class DynamicSetStrategy extends React.Component<IDynamicProps> {
key: key + '-func',
} as IFormSelect)}
{this.getFuncItem(row)}
{row.func !== 'c_avg_rate_abs' && row.func !== 'pdiff' ? this.unit(monitorType) : null}
</div>
);
}
@@ -340,8 +374,8 @@ class DynamicSetStrategy extends React.Component<IDynamicProps> {
<Form>
{crudList.map((row, index) => {
return (
<div key={index}>
{this.renderFormList(row)}
<div key={`${index}-${this.state.monitorType}`}>
{this.renderFormList(row, alarm.monitorType)}
{
crudList.length > 1 ? (
<Icon

View File

@@ -50,23 +50,23 @@ export class TypeForm extends React.Component {
return (
<>
<div className="config-wrapper">
<span className="span-tag"></span>
<div className="alarm-x-form type-form">
<XFormComponent
ref={form => this.$form = form}
formData={formData}
formMap={xTypeFormMap}
layout="inline"
/>
</div>
</div >
<div className="config-wrapper">
<span className="span-tag"></span>
<div className="alarm-x-form type-form">
<WrappedDynamicSetFilter wrappedComponentRef={(form: any) => this.filterForm = form} />
</div>
</div >
<div className="config-wrapper">
<span className="span-tag"></span>
<div className="alarm-x-form type-form">
<XFormComponent
ref={form => this.$form = form}
formData={formData}
formMap={xTypeFormMap}
layout="inline"
/>
</div>
</div >
<div className="config-wrapper">
<span className="span-tag"></span>
<div className="alarm-x-form type-form">
<WrappedDynamicSetFilter wrappedComponentRef={(form: any) => this.filterForm = form} />
</div>
</div >
</>
);
}

View File

@@ -9,6 +9,7 @@ import { pagination } from 'constants/table';
import { urlPrefix } from 'constants/left-menu';
import { alarm } from 'store/alarm';
import 'styles/table-filter.less';
import { Link } from 'react-router-dom';
@observer
export class AlarmList extends SearchAndFilterContainer {
@@ -24,7 +25,7 @@ export class AlarmList extends SearchAndFilterContainer {
if (app.active !== '-1' || searchKey !== '') {
data = origin.filter(d =>
((d.name !== undefined && d.name !== null) && d.name.toLowerCase().includes(searchKey as string)
|| ((d.operator !== undefined && d.operator !== null) && d.operator.toLowerCase().includes(searchKey as string)))
&& (app.active === '-1' || d.appId === (app.active + '')),
);
} else {
@@ -55,9 +56,7 @@ export class AlarmList extends SearchAndFilterContainer {
{this.renderSearch('名称:', '请输入告警规则或者操作人')}
<li className="right-btn-1">
<Button type="primary">
<a href={`${urlPrefix}/alarm/add`}>
</a>
<Link to={`/alarm/add`}></Link>
</Button>
</li>
</>
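// Sketch: replacing <a href> with react-router's <Link> keeps navigation
// client-side (no full page reload) and drops the hard-coded urlPrefix,
// assuming the router is mounted with a matching basename:
// <BrowserRouter basename={urlPrefix}>
//   <Link to="/alarm/add">{/* button label */}</Link>
// </BrowserRouter>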
@@ -68,6 +67,9 @@ export class AlarmList extends SearchAndFilterContainer {
if (!alarm.monitorStrategies.length) {
alarm.getMonitorStrategies();
}
if (!app.data.length) {
app.getAppList();
}
}
public render() {
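// Sketch: componentDidMount above fetches each list only when its MobX store
// is still empty, so returning to the page reuses cached data; the same guard
// as a tiny helper (name hypothetical):
const ensureLoaded = (items: unknown[], fetch: () => void) => {
  if (!items.length) { fetch(); }
};
// ensureLoaded(alarm.monitorStrategies, () => alarm.getMonitorStrategies());
// ensureLoaded(app.data, () => app.getAppList());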

View File

@@ -31,11 +31,11 @@ export class ClusterOverview extends React.Component<IOverview> {
const content = this.props.basicInfo as IBasicInfo;
const clusterContent = [{
value: content.clusterName,
label: '集群中文名称',
label: '集群名称',
},
{
value: content.clusterName,
label: '集群英文名称',
value: content.clusterIdentification,
label: '集群标识',
},
{
value: clusterTypeMap[content.mode],
@@ -44,8 +44,8 @@ export class ClusterOverview extends React.Component<IOverview> {
value: moment(content.gmtCreate).format(timeFormat),
label: '接入时间',
}, {
value: content.physicalClusterId,
label: '物理集群ID',
value: content.clusterId,
label: '集群ID',
}];
const clusterInfo = [{
value: content.clusterVersion,
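// Sketch: the fields this overview now reads, with the renamed labels
// (interface assumed from the usages above):
// interface IBasicInfo {
//   clusterName: string;           // 集群名称 (cluster name)
//   clusterIdentification: string; // 集群标识 (cluster identifier)
//   mode: number;                  // rendered via clusterTypeMap
//   gmtCreate: number;             // 接入时间 (onboarding time)
//   clusterId: number;             // 集群ID (cluster ID)
//   clusterVersion: string;
// }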

View File

@@ -13,32 +13,14 @@ const { confirm } = Modal;
export const getClusterColumns = (urlPrefix: string) => {
return [
{
title: '逻辑集群ID',
title: '集群ID',
dataIndex: 'clusterId',
key: 'clusterId',
width: '9%',
sorter: (a: IClusterData, b: IClusterData) => b.clusterId - a.clusterId,
},
{
title: '逻辑集群中文名称',
dataIndex: 'clusterName',
key: 'clusterName',
width: '13%',
onCell: () => ({
style: {
maxWidth: 120,
...cellStyle,
},
}),
sorter: (a: IClusterData, b: IClusterData) => a.clusterName.charCodeAt(0) - b.clusterName.charCodeAt(0),
render: (text: string, record: IClusterData) => (
<Tooltip placement="bottomLeft" title={text} >
<a href={`${urlPrefix}/cluster/cluster-detail?clusterId=${record.clusterId}`}> {text} </a>
</Tooltip>
),
},
{
title: '逻辑集群英文名称',
title: '集群名称',
dataIndex: 'clusterName',
key: 'clusterName',
width: '13%',
@@ -55,6 +37,24 @@ export const getClusterColumns = (urlPrefix: string) => {
</Tooltip>
),
},
// {
// title: '逻辑集群英文名称',
// dataIndex: 'clusterName',
// key: 'clusterName',
// width: '13%',
// onCell: () => ({
// style: {
// maxWidth: 120,
// ...cellStyle,
// },
// }),
// sorter: (a: IClusterData, b: IClusterData) => a.clusterName.charCodeAt(0) - b.clusterName.charCodeAt(0),
// render: (text: string, record: IClusterData) => (
// <Tooltip placement="bottomLeft" title={text} >
// <a href={`${urlPrefix}/cluster/cluster-detail?clusterId=${record.clusterId}`}> {text} </a>
// </Tooltip>
// ),
// },
{
title: 'Topic数量',
dataIndex: 'topicNum',
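// Sketch: the charCodeAt(0) sorters above compare only the first character
// of clusterName, so names sharing a prefix tie; localeCompare yields a full
// string ordering (and handles Chinese names):
// sorter: (a: IClusterData, b: IClusterData) =>
//   a.clusterName.localeCompare(b.clusterName, 'zh-CN'),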

View File

@@ -78,7 +78,7 @@ export class MyCluster extends SearchAndFilterContainer {
rules: [
{
required: true,
pattern: /^.{5,}.$/,
pattern: /^.{4,}.$/,
message: '请输入至少5个字符',
},
],
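// Sketch: /^.{4,}.$/ means "at least 4 characters, then one more", i.e. a
// 5-character minimum, which now matches the message above ("please enter
// at least 5 characters"); the previous /^.{5,}.$/ silently required 6.
// The same rule reads more directly as:
// pattern: /^.{5,}$/,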
@@ -91,7 +91,7 @@ export class MyCluster extends SearchAndFilterContainer {
],
formData: {},
visible: true,
title: '申请集群',
title: <div><span></span><a className='applicationDocument' href="https://github.com/didi/Logi-KafkaManager/blob/master/docs/user_guide/resource_apply.md" target='_blank'></a></div>,
okText: '确认',
onSubmit: (value: any) => {
value.idc = region.currentRegion;
@@ -160,7 +160,7 @@ export class MyCluster extends SearchAndFilterContainer {
data = searchKey ? origin.filter((item: IClusterData) =>
(item.clusterName !== undefined && item.clusterName !== null) && item.clusterName.toLowerCase().includes(searchKey as string),
) : origin;
return data;
}

Some files were not shown because too many files have changed in this diff.