Compare commits

...

96 Commits

Author SHA1 Message Date
EricZeng
bf979fa3b3 Merge pull request #252 from didi/dev_2.4.0
Dev 2.4.0
2021-04-26 09:55:30 +08:00
EricZeng
b3b88891e9 Merge pull request #251 from lucasun/master
v2.4.0
2021-04-25 21:01:33 +08:00
lucasun
01c5de60dc Merge branch 'master' into master 2021-04-25 20:54:10 +08:00
孙超
47b8fe5022 V2.4.1 FE 2021-04-25 20:43:20 +08:00
zengqiao
324b37b875 v2.4.0 be code 2021-04-25 18:11:52 +08:00
zengqiao
76e7e192d8 bump version to 2.4.0 2021-04-25 17:40:47 +08:00
EricZeng
f9f3c4d923 Merge pull request #240 from yangvipguang/docker-dev
Optimize the Docker container image
2021-04-25 17:23:36 +08:00
杨光
a476476bd1 Update Dockerfile
Add the tini process manager to prevent zombie processes
Upgrade the base image to Java 16 alpine
Use the official jar by default
Enable JMX monitoring by default
2021-04-23 14:10:40 +08:00
杨光
82a60a884a Add files via upload 2021-04-23 14:06:55 +08:00
杨光
f17727de18 Merge pull request #1 from didi/master
Sync commits
2021-04-23 11:31:35 +08:00
EricZeng
f98446e139 Merge pull request #239 from Liu-XinYuan/i238
fix: create topic failed when peak_byte_in is not specified
2021-04-22 19:02:48 +08:00
Liu-XinYuan
57a48dadaa change obj == null checks to ValidateUtils.isNull 2021-04-22 18:44:34 +08:00
Liu-XinYuan
c65ec68e46 fix: create topic failed when peak_byte_in is not specified 2021-04-22 18:30:23 +08:00
zengqiao
d6559be3fc Bypass the cache when certain background tasks fetch the Topic list 2021-04-22 16:06:37 +08:00
zengqiao
59df5b24fe Add rack info to broker metadata 2021-04-20 19:28:36 +08:00
zengqiao
3e1544294b Remove dead code 2021-04-20 17:22:26 +08:00
EricZeng
a12c398816 Merge pull request #232 from didi/dev
Optimize permission-list retrieval for the app offline feature
2021-04-20 13:54:49 +08:00
EricZeng
0bd3e28348 Merge pull request #228 from PengShuaixin/dev
Optimize the app offline approval flow
2021-04-20 13:51:32 +08:00
PengShuaixin
ad4e39c088 Optimize permission-list retrieval for the app offline feature 2021-04-20 11:22:11 +08:00
PengShuaixin
2668d96e6a Merge remote-tracking branch 'origin/dev' into dev
# Conflicts:
#	kafka-manager-extends/kafka-manager-bpm/src/main/java/com/xiaojukeji/kafka/manager/bpm/order/impl/DeleteAppOrder.java
2021-04-20 11:19:49 +08:00
shirenchuang
357c496aad When taking an app offline, require its Topics to be taken offline first 2021-04-20 11:18:07 +08:00
shirenchuang
22a513ba22 Upgrade the MySQL driver to support MySQL 8.0+ 2021-04-20 11:18:07 +08:00
zengqiao
e6dd1119be Use the class-level RequestMapping annotation to decide whether the current request requires login 2021-04-20 11:18:07 +08:00
EricZeng
2dbe454e04 Merge pull request #231 from didi/master
merge master
2021-04-20 10:46:03 +08:00
zengqiao
e3a59b76eb Fix a bug where the cache could not be refreshed after all data was deleted 2021-04-19 20:31:40 +08:00
zengqiao
b67a162d3f bump version to v2.3.1 2021-04-19 14:13:48 +08:00
shirenchuang
8bfde9fbaf Merge branch 'shirc_dev' into dev 2021-04-19 10:19:11 +08:00
shirenchuang
1fdecf8def When taking an app offline, require its Topics to be taken offline first 2021-04-19 10:17:29 +08:00
zengqiao
1141d4b833 Use the class-level RequestMapping annotation to decide whether the current request is authorized 2021-04-15 18:12:21 +08:00
EricZeng
cdac92ca7b Merge pull request #229 from didi/dev
Use the class-level RequestMapping annotation to decide whether the current request requires login
2021-04-14 19:47:43 +08:00
zengqiao
2a57c260cc Use the class-level RequestMapping annotation to decide whether the current request requires login 2021-04-14 19:40:19 +08:00
PengShuaixin
f41e29ab3a Optimize the app offline feature 2021-04-14 12:29:59 +08:00
zengqiao
8f10624073 add jmx prometheus jar 2021-04-12 17:58:24 +08:00
EricZeng
eb1f8be11e Merge pull request #224 from didi/master
merge master
2021-04-12 13:51:14 +08:00
EricZeng
3333501ab9 Merge pull request #222 from zwOvO/master
Remove unused imports and dead code
2021-04-09 19:28:04 +08:00
zwOvO
0f40820315 Remove unused imports and dead code 2021-04-09 11:41:06 +08:00
shirenchuang
5f1a839620 Upgrade the MySQL driver to support MySQL 8.0+ 2021-04-06 12:09:52 +08:00
zengqiao
b9bb1c775d change uri filter rule 2021-04-06 10:26:21 +08:00
zengqiao
1059b7376b forbid requests when the URI contains .. 2021-04-06 10:01:29 +08:00
EricZeng
f38ab4a9ce Merge pull request #217 from didi/dev
Reject requests whose URI contains ./ or too many consecutive /
2021-03-31 20:00:52 +08:00
zengqiao
9e7450c012 Reject requests whose URI contains ./ or too many consecutive / 2021-03-31 19:45:18 +08:00
EricZeng
99a3e360fe Merge pull request #216 from didi/dev
Switch the API filter strategy from a blacklist to a whitelist
2021-03-30 12:56:19 +08:00
lucasun
d45f8f78d6 Merge pull request #215 from zhangfenhua/master
Add nginx config: front/back-end separation & multiple static resources
2021-03-30 11:11:58 +08:00
zengqiao
648af61116 Switch the API filter strategy from a blacklist to a whitelist 2021-03-29 21:21:23 +08:00
zhangfenhua
eebf1b89b1 nginx configuration manual 2021-03-29 11:53:50 +08:00
EricZeng
f8094bb624 Merge pull request #211 from didi/dev
add expert config desc
2021-03-23 15:23:10 +08:00
zengqiao
ed13e0d2c2 add expert config desc 2021-03-23 15:21:48 +08:00
EricZeng
aa830589b4 Merge pull request #210 from didi/dev
fix monitor enable time illegal bug
2021-03-22 17:22:44 +08:00
zengqiao
999a2bd929 fix monitor enable time illegal bug 2021-03-22 17:21:12 +08:00
EricZeng
d69ee98450 Merge pull request #209 from didi/dev
add faq, kafka version supported & apply logical cluster and how to handle it
2021-03-22 13:43:14 +08:00
zengqiao
f6712c24ad merge master 2021-03-22 13:42:09 +08:00
zengqiao
89d2772194 add faq, kafka version supported & apply logical cluster and how to handle it 2021-03-22 13:38:23 +08:00
mike.zhangliang
03352142b6 Update README.md
Add WeChat group joining instructions
2021-03-16 14:46:38 +08:00
lucasun
73a51e0c00 Merge pull request #205 from ZQKC/master
add qa
2021-03-10 19:27:01 +08:00
zengqiao
2e26f8caa6 add qa 2021-03-10 19:23:29 +08:00
EricZeng
f9bcce9e43 Merge pull request #3 from didi/master
merge didi Logi-KM
2021-03-10 19:20:39 +08:00
EricZeng
2ecc877ba8 fix add_cluster.md path
fix add_cluster.md path
2021-03-10 15:45:48 +08:00
EricZeng
3f8a3c69e3 Merge pull request #201 from ZQKC/master
optimize ldap
2021-03-10 14:12:35 +08:00
zengqiao
67c37a0984 optimize ldap 2021-03-10 13:52:09 +08:00
EricZeng
a58a55d00d Merge pull request #203 from lucasun/hotfix/v2.3.1
Pin clipboard to 2.0.6; upgrading to 2.0.7 breaks the TS build
2021-03-09 18:11:02 +08:00
孙超
06d51dd0b8 Pin clipboard to 2.0.6; upgrading to 2.0.7 breaks the TS build 2021-03-09 18:07:42 +08:00
zengqiao
d5db028f57 optimize ldap 2021-03-09 15:13:55 +08:00
EricZeng
fcb85ff4be Merge pull request #2 from didi/master
merge didi logi-km
2021-03-09 11:07:17 +08:00
EricZeng
3695b4363d Merge pull request #200 from didi/dev
del ResultStatus which in vo
2021-03-09 11:02:46 +08:00
zengqiao
cb11e6437c del ResultStatus in vo 2021-03-09 11:01:21 +08:00
EricZeng
5127bd11ce Merge pull request #198 from didi/master
merge master
2021-03-09 10:42:28 +08:00
EricZeng
91f90aefa1 Merge pull request #195 from fanghanyun/v2.3.0_ldap
support AD LDAP
2021-03-09 10:40:42 +08:00
fanghanyun
0a067bce36 Support AD LDAP 2021-03-09 10:19:08 +08:00
fanghanyun
f0aba433bf Support AD LDAP 2021-03-08 20:31:15 +08:00
EricZeng
f06467a0e3 Merge pull request #197 from didi/dev
delete unused code
2021-03-05 16:12:27 +08:00
zengqiao
68bcd3c710 delete unused code 2021-03-05 16:05:58 +08:00
EricZeng
a645733cc5 Merge pull request #196 from didi/dev
add gateway config docs
2021-03-05 15:31:53 +08:00
zengqiao
49fe5baf94 add gateway config docs 2021-03-05 14:59:40 +08:00
fanghanyun
411ee55653 support AD LDAP 2021-03-05 14:45:54 +08:00
EricZeng
e351ce7411 Merge pull request #194 from didi/dev
reject req when uri contains ..
2021-03-04 17:52:56 +08:00
zengqiao
f33e585a71 reject req when uri contains .. 2021-03-04 17:51:35 +08:00
EricZeng
77f3096e0d Merge pull request #191 from didi/dev
Dev
2021-02-28 22:04:34 +08:00
EricZeng
9a5b18c4e6 Merge pull request #190 from JokerQueue/dev
bug fix:correct way to judge a user does not exist
2021-02-28 14:36:28 +08:00
Joker
0c7112869a bug fix:correct way to judge a user does not exist 2021-02-27 22:35:35 +08:00
EricZeng
f66a4d71ea Merge pull request #188 from JokerQueue/dev
bug fix: unexpected stop of the topic sync task
2021-02-26 22:46:54 +08:00
Joker
9b0ab878df bug fix: unexpected stop of the topic sync task 2021-02-26 19:47:03 +08:00
EricZeng
d30b90dfd0 Merge pull request #186 from ZHAOYINRUI/master
Add release notes and update the FAQ
2021-02-26 09:59:18 +08:00
ZHAOYINRUI
efd28f8c27 Update faq.md 2021-02-26 00:03:25 +08:00
ZHAOYINRUI
e05e722387 Add files via upload 2021-02-26 00:01:09 +08:00
EricZeng
748e81956d Update faq.md 2021-02-24 14:10:41 +08:00
EricZeng
c9a41febce Merge pull request #184 from didi/dev
reject illegal zk address
2021-02-23 17:32:20 +08:00
zengqiao
18e244b756 reject illegal zk address 2021-02-23 17:18:49 +08:00
mrazkong
47676139a3 Merge pull request #183 from didi/dev
support dynamic change cluster auth
2021-02-23 16:56:26 +08:00
zengqiao
1ed933b7ad support dynamic change auth 2021-02-23 16:34:21 +08:00
EricZeng
f6a343ccd6 Merge pull request #182 from didi/master
merge master
2021-02-23 15:47:28 +08:00
EricZeng
dd6cdc22e5 Merge pull request #178 from Observe-secretly/v2.2.1_ldap
New feature: add LDAP login support
2021-02-10 12:35:07 +08:00
李民
f70f4348b3 Merge branch 'master' into v2.2.1_ldap 2021-02-10 10:00:32 +08:00
李民
e7349161f3 Bug fix: LDAP login registered duplicate users 2021-02-09 15:22:26 +08:00
李民
2e2907ea09 Fix a possible error when fetching the UserDN via LDAP 2021-02-09 14:33:53 +08:00
李民
25e84b2a6c New feature: add LDAP login support 2021-02-09 11:33:54 +08:00
EricZeng
9aefc55534 Merge pull request #1 from didi/dev
merge didi dev
2021-01-23 11:16:35 +08:00
97 changed files with 1616 additions and 458 deletions


@@ -67,11 +67,16 @@
- [DiDi Logi-KafkaManager video tutorial series](https://mp.weixin.qq.com/s/9X7gH0tptHPtfjPPSdGO8g)
- [Kafka in practice (15): a study of DiDi's open-source Kafka management platform Logi-KafkaManager — A叶子叶来](https://blog.csdn.net/yezonggang/article/details/113106244)
## 3 DiDi Logi open-source user DingTalk group
## 3 DiDi Logi open-source user community groups
![image](https://user-images.githubusercontent.com/5287750/111266722-e531d800-8665-11eb-9242-3484da5a3099.png)
To join the WeChat group: follow the official account Obsuite and reply "Logi加群"
![dingding_group](./docs/assets/images/common/dingding_group.jpg)
DingTalk group ID: 32821440
DingTalk group ID: 32821440
## 4 OCE certification
OCE is a certification mechanism and exchange platform built for production users of DiDi Logi-KafkaManager. We provide OCE enterprises with better technical support, such as dedicated tech salons, one-on-one enterprise exchanges, and a dedicated Q&A group. If your company runs Logi-KafkaManager in production, [come join us](http://obsuite.didiyun.com/open/openAuth)!

Releases_Notes.md Normal file

@@ -0,0 +1,97 @@
---
![kafka-manager-logo](./docs/assets/images/common/logo_name.png)
**A one-stop `Apache Kafka` cluster metrics monitoring and operations management platform**
---
## v2.3.0
Release date: 2021-02-08
### Capability improvements
- Added support for Docker-based deployment
- Brokers can be designated as candidate controllers
- Gateway configurations can be added and managed
- Consumer group status can be retrieved
- Added JMX authentication for clusters
### Experience improvements
- Streamlined the flows for editing user roles and changing passwords
- Added search by consumer ID
- Improved the prompt wording for "Topic connection info", "reset consumer group offsets", and "change Topic retention time"
- Added links to the resource application guide where relevant
### Bug fixes
- Fixed an incorrect time axis in the broker monitoring charts
- Fixed the wrong alert-period unit when creating Nightingale (n9e) monitoring alert rules
## v2.2.0
Release date: 2021-01-25
### Capability improvements
- Streamlined batch operations on work orders
- Added real-time 75th/99th-percentile latency data for Topics
- Added a scheduled task that periodically writes ownerless Topics missing from the DB into the DB
### Experience improvements
- Added links to the cluster onboarding guide where relevant
- Clarified the meanings of physical clusters and logical clusters
- Showed a Topic's Region on the Topic detail page and in the partition-expansion dialog
- Streamlined configuring the Topic retention time during Topic approval
- Improved error messages when applying for and approving Topics/apps
- Improved the wording of the Topic data sampling action
- Improved the prompt shown to operators when deleting a Topic
- Improved the deletion logic and prompts when operators delete a Region
- Improved the prompts when operators delete a logical cluster
- Tightened the file-type restrictions for uploaded cluster configuration files
### Bug fixes
- Fixed faulty special-character validation of application names
- Fixed ordinary users being able to access app details without authorization
- Fixed the data compression format being unavailable after a Kafka version upgrade
- Fixed deleted logical clusters or Topics still being displayed in the UI
- Fixed duplicate result notifications during leader rebalance operations
## v2.1.0
Release date: 2020-12-19
### Experience improvements
- Improved the background style while pages load
- Streamlined how ordinary users apply for Topic permissions
- Refined the permission limits for Topic quota and partition applications
- Improved the prompt for revoking Topic permissions
- Improved the field names on the quota application form
- Streamlined the offset-reset flow
- Improved the form for creating Topic migration tasks
- Improved the dialog styling for Topic partition expansion
- Improved the styling of cluster broker monitoring charts
- Improved the form for creating logical clusters
- Improved the prompts for cluster security protocols
### Bug fixes
- Fixed occasional failures when resetting consumer offsets


@@ -4,7 +4,7 @@ cd $workspace
## constant
OUTPUT_DIR=./output
KM_VERSION=2.3.0
KM_VERSION=2.4.0
APP_NAME=kafka-manager
APP_DIR=${APP_NAME}-${KM_VERSION}


@@ -1,14 +1,11 @@
FROM openjdk:8-jdk-alpine3.9
FROM openjdk:16-jdk-alpine3.13
LABEL author="yangvipguang"
ENV VERSION 2.1.0
ENV JAR_PATH kafka-manager-web/target
COPY $JAR_PATH/kafka-manager-web-$VERSION-SNAPSHOT.jar /tmp/app.jar
COPY $JAR_PATH/application.yml /km/
ENV VERSION 2.3.1
RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/g' /etc/apk/repositories
RUN apk add --no-cache --virtual .build-deps \
RUN apk add --no-cache --virtual .build-deps \
font-adobe-100dpi \
ttf-dejavu \
fontconfig \
@@ -19,26 +16,28 @@ RUN apk add --no-cache --virtual .build-deps \
tomcat-native \
&& apk del .build-deps
RUN apk add --no-cache tini
ENV AGENT_HOME /opt/agent/
WORKDIR /tmp
COPY $JAR_PATH/kafka-manager.jar app.jar
# COPY application.yml application.yml ## mounted via helm by default to avoid leaking sensitive config
COPY docker-depends/config.yaml $AGENT_HOME
COPY docker-depends/jmx_prometheus_javaagent-0.14.0.jar $AGENT_HOME
ENV JAVA_AGENT="-javaagent:$AGENT_HOME/jmx_prometheus_javaagent-0.14.0.jar=9999:$AGENT_HOME/config.yaml"
COPY docker-depends/jmx_prometheus_javaagent-0.15.0.jar $AGENT_HOME
ENV JAVA_AGENT="-javaagent:$AGENT_HOME/jmx_prometheus_javaagent-0.15.0.jar=9999:$AGENT_HOME/config.yaml"
ENV JAVA_HEAP_OPTS="-Xms1024M -Xmx1024M -Xmn100M "
ENV JAVA_OPTS="-verbose:gc \
-XX:+PrintGC -XX:+PrintGCDetails -XX:+PrintHeapAtGC -Xloggc:/tmp/gc.log -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps \
-XX:MaxMetaspaceSize=256M -XX:+DisableExplicitGC -XX:+UseStringDeduplication \
-XX:+UseG1GC -XX:+HeapDumpOnOutOfMemoryError -XX:-UseContainerSupport"
#-Xlog:gc -Xlog:gc* -Xlog:gc+heap=trace -Xlog:safepoint
EXPOSE 8080 9999
ENTRYPOINT ["sh","-c","java -jar $JAVA_HEAP_OPTS $JAVA_OPTS /tmp/app.jar --spring.config.location=/km/application.yml"]
## Prometheus JMX monitoring is disabled by default; if needed, uncomment the line below and comment out the default ENTRYPOINT line above.
## ENTRYPOINT ["sh","-c","java -jar $JAVA_AGENT $JAVA_HEAP_OPTS $JAVA_OPTS /tmp/app.jar --spring.config.location=/km/application.yml"]
ENTRYPOINT ["tini", "--"]
CMD ["sh","-c","java -jar $JAVA_AGENT $JAVA_HEAP_OPTS $JAVA_OPTS app.jar --spring.config.location=application.yml"]


@@ -9,6 +9,13 @@
# Dynamic Configuration Management
## 0. Contents
- 1. Scheduled Topic sync task
- 2. Expert service: Topic partition hotspots
- 3. Expert service: insufficient Topic partitions
## 1. Scheduled Topic sync task
### 1.1 What this config is for
@@ -63,3 +70,53 @@ task:
]
```
---
## 2. Expert service: Topic partition hotspots
Within the brokers enclosed by a `Region`, a Topic is considered hot when its leader count is unevenly distributed across those brokers.
Note: looking only at the leader-count distribution has real limitations; contributions of further hotspot definitions and code are welcome.
Dynamic config for Topic partition hotspots (page: Operations -> Platform Management -> Config Management)
Config key:
```
REGION_HOT_TOPIC_CONFIG
```
Config value:
```json
{
"maxDisPartitionNum": 2, # the Topic is considered hot when the leader-count gap between brokers in the Region exceeds 2
"minTopicBytesInUnitB": 1048576, # Topics with bytes-in below this value are not counted
"ignoreClusterIdList": [ # clusters to ignore
50
]
}
```
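The hotspot rule above can be sketched as a small check. This is a hypothetical illustration only, not the project's actual implementation: the class and method names are invented, and the real service reads these thresholds from the `REGION_HOT_TOPIC_CONFIG` value shown above.

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;

// Hypothetical sketch: within a Region's brokers, a Topic is "hot" when the
// gap between the largest and smallest per-broker leader count exceeds
// maxDisPartitionNum; low-traffic Topics are skipped via minTopicBytesInUnitB.
public class RegionHotTopicCheck {
    public static boolean isHot(Collection<Integer> leaderCountPerBroker,
                                int maxDisPartitionNum,
                                long topicBytesIn,
                                long minTopicBytesInUnitB) {
        // Topics below the traffic floor are not counted
        if (topicBytesIn < minTopicBytesInUnitB || leaderCountPerBroker.isEmpty()) {
            return false;
        }
        int max = Collections.max(leaderCountPerBroker);
        int min = Collections.min(leaderCountPerBroker);
        return max - min > maxDisPartitionNum;
    }

    public static void main(String[] args) {
        // leader counts 5/1/2 across three brokers: gap of 4 exceeds 2, so hot
        System.out.println(isHot(Arrays.asList(5, 1, 2), 2, 2_000_000L, 1_048_576L));
        // same skew but below the 1 MiB traffic floor: ignored
        System.out.println(isHot(Arrays.asList(5, 1, 2), 2, 500_000L, 1_048_576L));
    }
}
```

As the doc notes, leader-count spread alone is a limited signal; a production check would likely also weigh per-broker traffic.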
---
## 3. Expert service: insufficient Topic partitions
When total bytes-in divided by the partition count exceeds the configured value, the Topic is considered to have insufficient partitions.
Dynamic config for insufficient Topic partitions (page: Operations -> Platform Management -> Config Management)
Config key:
```
TOPIC_INSUFFICIENT_PARTITION_CONFIG
```
Config value:
```json
{
"maxBytesInPerPartitionUnitB": 3145728, # partitions are considered insufficient when per-partition bytes-in exceeds this value
"minTopicBytesInUnitB": 1048576, # Topics with bytes-in below this value are not counted
"ignoreClusterIdList": [ # clusters to ignore
50
]
}
```
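The insufficient-partition rule above is a simple per-partition throughput test. The sketch below is hypothetical (invented names), using the `TOPIC_INSUFFICIENT_PARTITION_CONFIG` thresholds for illustration:

```java
// Hypothetical sketch: a Topic needs more partitions when total bytes-in
// divided by its partition count exceeds maxBytesInPerPartitionUnitB;
// Topics below minTopicBytesInUnitB are skipped entirely.
public class InsufficientPartitionCheck {
    public static boolean isInsufficient(long totalBytesIn,
                                         int partitionNum,
                                         long maxBytesInPerPartitionUnitB,
                                         long minTopicBytesInUnitB) {
        if (partitionNum <= 0 || totalBytesIn < minTopicBytesInUnitB) {
            return false; // low-traffic Topics are not counted
        }
        return totalBytesIn / partitionNum > maxBytesInPerPartitionUnitB;
    }

    public static void main(String[] args) {
        // 10 MB total over 2 partitions = 5 MB per partition, above the 3 MiB limit
        System.out.println(isInsufficient(10_000_000L, 2, 3_145_728L, 1_048_576L));
    }
}
```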


@@ -0,0 +1,10 @@
---
![kafka-manager-logo](../assets/images/common/logo_name.png)
**A one-stop `Apache Kafka` cluster metrics monitoring and operations management platform**
---
# Kafka-Gateway Configuration Guide


@@ -0,0 +1,94 @@
---
![kafka-manager-logo](../assets/images/common/logo_name.png)
**A one-stop `Apache Kafka` cluster metrics monitoring and operations management platform**
---
## nginx configuration - installation manual
# 1. Standalone deployment
Please refer to the [kafka-manager installation guide](install_guide_cn.md)
# 2. nginx configuration
## 1. Standalone deployment config
```
# nginx root-path access config:
location / {
proxy_pass http://ip:port;
}
```
## 2. Front/back-end separation & multiple static resources
The following config lets nginx proxy multiple static resources, separating the project's front end from its back end for independent release iteration.
### 1. Download the source
Download the code for the version you need from [GitHub](https://github.com/didi/Logi-KafkaManager)
### 2. Modify webpack.config.js
Edit `webpack.config.js` in the `kafka-manager-console` module.
Everywhere below, <font color='red'>xxxx</font> is the nginx proxy path and the load prefix for the packaged static files; change <font color='red'>xxxx</font> as needed.
```
cd kafka-manager-console
vi webpack.config.js
# publicPath defaults to packaging for the root path; change it to the nginx proxy path.
let publicPath = '/xxxx';
```
### 3. Build
```
npm cache clean --force && npm install
```
PS: if the build fails, run `npm install clipboard@2.0.6`; otherwise ignore this.
### 4. Deploy
#### 1. Deploy the front-end static files
Static resources: `../kafka-manager-web/src/main/resources/templates`
Upload them to a directory of your choice; this demo uses the `root` directory.
#### 2. Upload the jar and start it; see the [kafka-manager installation guide](install_guide_cn.md)
#### 3. Update the nginx config
```
location /xxxx {
# location of the static files
alias /root/templates;
try_files $uri $uri/ /xxxx/index.html;
index index.html;
}
location /api {
proxy_pass http://ip:port;
}
# /api is recommended for the back-end path; if it conflicts, use the following instead
#location /api/v2 {
# proxy_pass http://ip:port;
#}
#location /api/v1 {
# proxy_pass http://ip:port;
#}
```


@@ -7,9 +7,9 @@
---
# FAQ
# FAQ
- 0. Fixing broken images on GitHub
- 0. Which Kafka versions are supported
- 1. No cluster to choose when applying for a Topic, creating monitoring alerts, etc.
- 2. What logical clusters & Regions are for
- 3. Login failures
@@ -18,22 +18,16 @@
- 6. How to use `MySQL 8`
- 7. How to fix `Jmx` connection failures
- 8. The `topic biz data not exist` error and how to handle it
- 9. How to view the API docs after the process starts
- 10. How to create an alert group
- 11. Why connection info and latency info have no data
- 12. Why a logical cluster is not visible after its application is approved
---
### 0. Fixing broken images on GitHub
### 0. Which Kafka versions are supported
On your local machine, `ping github.com` to get an IP address for `github.com`,
then bind that IP in the `/etc/hosts` file.
For example:
```shell
# add the following to /etc/hosts
140.82.113.3 github.com
```
Basically, as long as the Kafka version in use still depends on ZooKeeper, that version's main features should be supported.
---
@@ -43,7 +37,7 @@
For creating a logical cluster, see:
- the [kafka-manager cluster onboarding](docs/user_guide/add_cluster/add_cluster.md) guide; both the Region and the logical cluster must be added.
- the [kafka-manager cluster onboarding](add_cluster/add_cluster.md) guide; both the Region and the logical cluster must be added.
---
@@ -76,7 +70,7 @@
- 3. Database time zone issues.
Check the MySQL topic table for data; if it has data, then check whether the configured time zone is correct.
Check the MySQL topic_metrics table for data; if it has data, then check whether the configured time zone is correct.
---
@@ -109,3 +103,26 @@
Under `Operations -> Cluster List -> Topic Info`, edit the Topic whose permissions are being applied for and select an application for it.
The above only covers a single Topic. If you have many Topics to initialize, add a config in Config Management to periodically sync ownerless Topics; see [Dynamic Configuration Management - 1. Scheduled Topic sync task](../dev_guide/dynamic_config_manager.md)
---
### 9. How to view the API docs after the process starts
- DiDi Logi-KafkaManager documents its API with Swagger. Swagger UI address: [http://IP:PORT/swagger-ui.html#/](http://IP:PORT/swagger-ui.html#/)
### 10. How to create an alert group
This requires a monitoring system. Integration with Nightingale (n9e) is provided by default; you can also integrate your own internal monitoring system, which requires implementing a few interfaces.
See: [integrating monitoring with Nightingale](../dev_guide/monitor_system_integrate_with_n9e.md), [integrating monitoring with other systems](../dev_guide/monitor_system_integrate_with_self.md)
### 11. Why connection info and latency info have no data
These require DiDi's internal kafka-gateway to produce data; the kafka-gateway has not yet been open-sourced.
### 12. Why a logical cluster is not visible after its application is approved
Applying for and approving a logical cluster is only a work-order flow; it does not actually create the logical cluster, which must still be created manually.
See: [kafka-manager cluster onboarding](add_cluster/add_cluster.md).


@@ -47,4 +47,13 @@ public enum AccountRoleEnum {
}
return AccountRoleEnum.UNKNOWN;
}
public static AccountRoleEnum getUserRoleEnum(String roleName) {
for (AccountRoleEnum elem: AccountRoleEnum.values()) {
if (elem.message.equalsIgnoreCase(roleName)) {
return elem;
}
}
return AccountRoleEnum.UNKNOWN;
}
}


@@ -46,7 +46,7 @@ public enum OperateEnum {
public static boolean validate(Integer code) {
if (code == null) {
return false;
return true;
}
for (OperateEnum state : OperateEnum.values()) {
if (state.getCode() == code) {


@@ -1,45 +0,0 @@
package com.xiaojukeji.kafka.manager.common.bizenum;
/**
* 是否上报监控系统
* @author zengqiao
* @date 20/9/25
*/
public enum SinkMonitorSystemEnum {
SINK_MONITOR_SYSTEM(0, "上报监控系统"),
NOT_SINK_MONITOR_SYSTEM(1, "不上报监控系统"),
;
private Integer code;
private String message;
SinkMonitorSystemEnum(Integer code, String message) {
this.code = code;
this.message = message;
}
public Integer getCode() {
return code;
}
public void setCode(Integer code) {
this.code = code;
}
public String getMessage() {
return message;
}
public void setMessage(String message) {
this.message = message;
}
@Override
public String toString() {
return "SinkMonitorSystemEnum{" +
"code=" + code +
", message='" + message + '\'' +
'}';
}
}


@@ -7,18 +7,18 @@ package com.xiaojukeji.kafka.manager.common.constant;
*/
public class ApiPrefix {
public static final String API_PREFIX = "/api/";
public static final String API_V1_PREFIX = API_PREFIX + "v1/";
public static final String API_V2_PREFIX = API_PREFIX + "v2/";
private static final String API_V1_PREFIX = API_PREFIX + "v1/";
// login
public static final String API_V1_SSO_PREFIX = API_V1_PREFIX + "sso/";
// console
public static final String API_V1_SSO_PREFIX = API_V1_PREFIX + "sso/";
public static final String API_V1_NORMAL_PREFIX = API_V1_PREFIX + "normal/";
public static final String API_V1_RD_PREFIX = API_V1_PREFIX + "rd/";
public static final String API_V1_OP_PREFIX = API_V1_PREFIX + "op/";
// open
public static final String API_V1_THIRD_PART_PREFIX = API_V1_PREFIX + "third-part/";
public static final String API_V2_THIRD_PART_PREFIX = API_V2_PREFIX + "third-part/";
// gateway
public static final String GATEWAY_API_V1_PREFIX = "/gateway" + API_V1_PREFIX;


@@ -25,6 +25,9 @@ public enum ResultStatus {
CHANGE_ZOOKEEPER_FORBIDDEN(1405, "change zookeeper forbidden"),
APP_OFFLINE_FORBIDDEN(1406, "先下线topic才能下线应用"),
TOPIC_OPERATION_PARAM_NULL_POINTER(1450, "参数错误"),
TOPIC_OPERATION_PARTITION_NUM_ILLEGAL(1451, "分区数错误"),
TOPIC_OPERATION_BROKER_NUM_NOT_ENOUGH(1452, "Broker数不足错误"),
@@ -106,6 +109,7 @@ public enum ResultStatus {
STORAGE_UPLOAD_FILE_FAILED(8050, "upload file failed"),
STORAGE_FILE_TYPE_NOT_SUPPORT(8051, "File type not support"),
STORAGE_DOWNLOAD_FILE_FAILED(8052, "download file failed"),
LDAP_AUTHENTICATION_FAILED(8053, "ldap authentication failed"),
;


@@ -40,6 +40,9 @@ public class TopicCreationDTO extends ClusterTopicDTO {
@ApiModelProperty(value = "Topic属性列表")
private Properties properties;
@ApiModelProperty(value = "最大写入字节数")
private Long peakBytesIn;
public String getAppId() {
return appId;
}
@@ -104,6 +107,14 @@ public class TopicCreationDTO extends ClusterTopicDTO {
this.properties = properties;
}
public Long getPeakBytesIn() {
return peakBytesIn;
}
public void setPeakBytesIn(Long peakBytesIn) {
this.peakBytesIn = peakBytesIn;
}
@Override
public String toString() {
return "TopicCreationDTO{" +
@@ -135,4 +146,4 @@ public class TopicCreationDTO extends ClusterTopicDTO {
}
return true;
}
}
}


@@ -81,11 +81,6 @@ public class OperateRecordDTO {
}
public boolean legal() {
if (!ModuleEnum.validate(moduleId) ||
(!ValidateUtils.isNull(operateId) && OperateEnum.validate(operateId))
) {
return false;
}
return true;
return !ValidateUtils.isNull(moduleId) && ModuleEnum.validate(moduleId) && OperateEnum.validate(operateId);
}
}


@@ -1,6 +1,7 @@
package com.xiaojukeji.kafka.manager.common.entity.pojo;
import java.util.Date;
import java.util.Objects;
/**
* @author zengqiao
@@ -116,4 +117,22 @@ public class ClusterDO implements Comparable<ClusterDO> {
public int compareTo(ClusterDO clusterDO) {
return this.id.compareTo(clusterDO.id);
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
ClusterDO clusterDO = (ClusterDO) o;
return Objects.equals(id, clusterDO.id)
&& Objects.equals(clusterName, clusterDO.clusterName)
&& Objects.equals(zookeeper, clusterDO.zookeeper)
&& Objects.equals(bootstrapServers, clusterDO.bootstrapServers)
&& Objects.equals(securityProperties, clusterDO.securityProperties)
&& Objects.equals(jmxProperties, clusterDO.jmxProperties);
}
@Override
public int hashCode() {
return Objects.hash(id, clusterName, zookeeper, bootstrapServers, securityProperties, jmxProperties);
}
}


@@ -1,6 +1,7 @@
package com.xiaojukeji.kafka.manager.common.entity.pojo;
import com.xiaojukeji.kafka.manager.common.entity.dto.op.topic.TopicCreationDTO;
import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
import java.util.Date;
@@ -95,6 +96,7 @@ public class TopicDO {
topicDO.setClusterId(dto.getClusterId());
topicDO.setTopicName(dto.getTopicName());
topicDO.setDescription(dto.getDescription());
topicDO.setPeakBytesIn(ValidateUtils.isNull(dto.getPeakBytesIn()) ? -1L : dto.getPeakBytesIn());
return topicDO;
}
}
}


@@ -1,4 +1,4 @@
package com.xiaojukeji.kafka.manager.openapi.common.vo;
package com.xiaojukeji.kafka.manager.common.entity.vo.normal.topic;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
@@ -14,7 +14,6 @@ public class TopicStatisticMetricsVO {
public TopicStatisticMetricsVO(Double peakBytesIn) {
this.peakBytesIn = peakBytesIn;
}
public Double getPeakBytesIn() {


@@ -1,8 +1,5 @@
package com.xiaojukeji.kafka.manager.common.zookeeper.znode.brokers;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.List;
/**
@@ -18,12 +15,11 @@ import java.util.List;
* "host":null,
* "timestamp":"1546632983233",
* "port":-1,
* "version":4
* "version":4,
* "rack": "CY"
* }
*/
public class BrokerMetadata implements Cloneable {
private final static Logger LOGGER = LoggerFactory.getLogger(TopicMetadata.class);
private long clusterId;
private int brokerId;
@@ -43,6 +39,8 @@ public class BrokerMetadata implements Cloneable {
private long timestamp;
private String rack;
public long getClusterId() {
return clusterId;
}
@@ -107,14 +105,12 @@ public class BrokerMetadata implements Cloneable {
this.timestamp = timestamp;
}
@Override
public Object clone() {
try {
return super.clone();
} catch (CloneNotSupportedException var3) {
LOGGER.error("clone BrokerMetadata failed.", var3);
}
return null;
public String getRack() {
return rack;
}
public void setRack(String rack) {
this.rack = rack;
}
@Override
@@ -128,6 +124,7 @@ public class BrokerMetadata implements Cloneable {
", jmxPort=" + jmx_port +
", version='" + version + '\'' +
", timestamp=" + timestamp +
", rack='" + rack + '\'' +
'}';
}
}


@@ -1,6 +1,6 @@
{
"name": "mobx-ts-example",
"version": "1.0.0",
"name": "logi-kafka",
"version": "2.3.1",
"description": "",
"scripts": {
"start": "webpack-dev-server",
@@ -21,7 +21,7 @@
"@types/spark-md5": "^3.0.2",
"antd": "^3.26.15",
"clean-webpack-plugin": "^3.0.0",
"clipboard": "^2.0.6",
"clipboard": "2.0.6",
"cross-env": "^7.0.2",
"css-loader": "^2.1.0",
"echarts": "^4.5.0",
@@ -56,4 +56,4 @@
"dependencies": {
"format-to-json": "^1.0.4"
}
}
}


@@ -68,8 +68,8 @@ export class StatusGraghCom<T extends IFlowInfo> extends React.Component {
public render() {
const statusData = this.getData();
const loading = this.getLoading();
if (!statusData) return null;
const data: any[] = [];
if (!statusData) return <Table columns={flowColumns} dataSource={data} />;
Object.keys(statusData).map((key) => {
if (statusData[key]) {
const v = key === 'byteIn' || key === 'byteOut' ? statusData[key].map(i => i && (i / 1024).toFixed(2)) :
@@ -85,7 +85,7 @@ export class StatusGraghCom<T extends IFlowInfo> extends React.Component {
}
});
return (
<Table columns={flowColumns} dataSource={data} pagination={false} loading={loading}/>
<Table columns={flowColumns} dataSource={data} pagination={false} loading={loading} />
);
}
}


@@ -1,4 +1,4 @@
.ant-input-number {
.ant-input-number, .ant-form-item-children .ant-select {
width: 314px
}


@@ -59,6 +59,10 @@ export const adminMenu = [{
href: `/admin/bill`,
i: 'k-icon-renwuliebiao',
title: '用户账单',
},{
href: `/admin/operation-record`,
i: 'k-icon-operationrecord',
title: '操作记录',
}] as ILeftMenu[];
export const expertMenu = [{


@@ -172,7 +172,7 @@ export class ClusterTopic extends SearchAndFilterContainer {
key: 'appName',
// width: '10%',
render: (val: string, record: IClusterTopics) => (
<Tooltip placement="bottomLeft" title={record.appId} >
<Tooltip placement="bottomLeft" title={val} >
{val}
</Tooltip>
),


@@ -314,8 +314,7 @@ export class ExclusiveCluster extends SearchAndFilterContainer {
>
<div className="region-prompt">
<span>
Region已被逻辑集群 {this.state.logicalClusterName} 使
Region与逻辑集群的关系
Region已被逻辑集群 {this.state.logicalClusterName} 使Region与逻辑集群的关系
</span>
</div>
</Modal>


@@ -16,7 +16,7 @@
.traffic-table {
margin: 10px 0;
min-height: 450px;
min-height: 330px;
.traffic-header {
width: 100%;
height: 44px;


@@ -4,6 +4,7 @@ import { wrapper } from 'store';
import { observer } from 'mobx-react';
import { IXFormWrapper, IMetaData, IRegister } from 'types/base-type';
import { admin } from 'store/admin';
import { users } from 'store/users';
import { registerCluster, createCluster, pauseMonitoring } from 'lib/api';
import { SearchAndFilterContainer } from 'container/search-filter';
import { cluster } from 'store/cluster';
@@ -78,34 +79,34 @@ export class ClusterList extends SearchAndFilterContainer {
disabled: item ? true : false,
},
},
{
key: 'idc',
label: '数据中心',
defaultValue: region.regionName,
rules: [{ required: true, message: '请输入数据中心' }],
attrs: {
placeholder: '请输入数据中心',
disabled: true,
},
},
{
key: 'mode',
label: '集群类型',
type: 'select',
options: cluster.clusterModes.map(ele => {
return {
label: ele.message,
value: ele.code,
};
}),
rules: [{
required: true,
message: '请选择集群类型',
}],
attrs: {
placeholder: '请选择集群类型',
},
},
// {
// key: 'idc',
// label: '数据中心',
// defaultValue: region.regionName,
// rules: [{ required: true, message: '请输入数据中心' }],
// attrs: {
// placeholder: '请输入数据中心',
// disabled: true,
// },
// },
// {
// key: 'mode',
// label: '集群类型',
// type: 'select',
// options: cluster.clusterModes.map(ele => {
// return {
// label: ele.message,
// value: ele.code,
// };
// }),
// rules: [{
// required: true,
// message: '请选择集群类型',
// }],
// attrs: {
// placeholder: '请选择集群类型',
// },
// },
{
key: 'kafkaVersion',
label: 'kafka版本',
@@ -148,7 +149,7 @@ export class ClusterList extends SearchAndFilterContainer {
attrs: {
placeholder: `请输入JMX认证例如
{
"maxConn": 10, #KM对单台Broker最大连接数
"maxConn": 10, #KM对单台Broker最大jmx连接数
"username": "xxxxx", #用户名
"password": "xxxxx", #密码
"openSSL": true, #开启SSLtrue表示开启SSLfalse表示关闭
@@ -276,32 +277,41 @@ export class ClusterList extends SearchAndFilterContainer {
public getColumns = () => {
const cols = getAdminClusterColumns();
const role = users.currentUser.role;
const col = {
title: '操作',
render: (value: string, item: IMetaData) => (
<>
<a
onClick={this.createOrRegisterCluster.bind(this, item)}
className="action-button"
>
</a>
<Popconfirm
title={`确定${item.status === 1 ? '暂停' : '开始'}${item.clusterName}监控?`}
onConfirm={() => this.pauseMonitor(item)}
cancelText="取消"
okText="确认"
>
<Tooltip title="暂停监控将无法正常监控指标信息,建议开启监控">
{
role && role === 2 ? <>
<a
onClick={this.createOrRegisterCluster.bind(this, item)}
className="action-button"
>
{item.status === 1 ? '暂停监控' : '开始监控'}
>
</a>
</Tooltip>
</Popconfirm>
<a onClick={this.showMonitor.bind(this, item)}>
</a>
<Popconfirm
title={`确定${item.status === 1 ? '暂停' : '开始'}${item.clusterName}监控?`}
onConfirm={() => this.pauseMonitor(item)}
cancelText="取消"
okText="确认"
>
<Tooltip placement="left" title="暂停监控将无法正常监控指标信息,建议开启监控">
<a
className="action-button"
>
{item.status === 1 ? '暂停监控' : '开始监控'}
</a>
</Tooltip>
</Popconfirm>
<a onClick={this.showMonitor.bind(this, item)}>
</a>
</> : <Tooltip placement="left" title="该功能只对运维人员开放">
<a style={{ color: '#a0a0a0' }} className="action-button"></a>
<a className="action-button" style={{ color: '#a0a0a0' }}>{item.status === 1 ? '暂停监控' : '开始监控'}</a>
<a style={{ color: '#a0a0a0' }}></a>
</Tooltip>
}
</>
),
};
@@ -310,6 +320,7 @@ export class ClusterList extends SearchAndFilterContainer {
}
public renderClusterList() {
const role = users.currentUser.role;
return (
<>
<div className="container">
@@ -318,7 +329,14 @@ export class ClusterList extends SearchAndFilterContainer {
{this.renderSearch('', '请输入集群名称')}
<li className="right-btn-1">
<a style={{ display: 'inline-block', marginRight: '20px' }} href={indexUrl.cagUrl} target="_blank"></a>
<Button type="primary" onClick={this.createOrRegisterCluster.bind(this, null)}></Button>
{
role && role === 2 ?
<Button type="primary" onClick={this.createOrRegisterCluster.bind(this, null)}></Button>
:
<Tooltip placement="left" title="该功能只对运维人员开放" trigger='hover'>
<Button disabled type="primary"></Button>
</Tooltip>
}
</li>
</ul>
</div>


@@ -11,3 +11,5 @@ export * from './operation-management/migration-detail';
export * from './configure-management';
export * from './individual-bill';
export * from './bill-detail';
export * from './operation-record';


@@ -0,0 +1,134 @@
import * as React from 'react';
import { cellStyle } from 'constants/table';
import { Tooltip } from 'antd';
import { admin } from 'store/admin';
import moment = require('moment');
const moduleList = [
{ moduleId: 0, moduleName: 'Topic' },
{ moduleId: 1, moduleName: '应用' },
{ moduleId: 2, moduleName: '配额' },
{ moduleId: 3, moduleName: '权限' },
{ moduleId: 4, moduleName: '集群' },
{ moduleId: 5, moduleName: '分区' },
{ moduleId: 6, moduleName: 'Gateway配置' },
]
export const operateList = {
0: '新增',
1: '删除',
2: '修改'
}
// [
// { operate: '新增', operateId: 0 },
// { operate: '删除', operateId: 1 },
// { operate: '修改', operateId: 2 },
// ]
export const getJarFuncForm: any = (props: any) => {
const formMap = [
{
key: 'moduleId',
label: '模块',
type: 'select',
attrs: {
style: {
width: '130px'
},
placeholder: '请选择模块',
},
options: moduleList.map(item => {
return {
label: item.moduleName,
value: item.moduleId
}
}),
formAttrs: {
initialvalue: 0,
},
},
{
key: 'operator',
label: '操作人',
type: 'input',
attrs: {
style: {
width: '170px'
},
placeholder: '请输入操作人'
},
getvaluefromevent: (event: any) => {
return event.target.value.replace(/\s+/g, '')
},
},
// {
// key: 'resource',
// label: '资源名称',
// type: 'input',
// attrs: {
// style: {
// width: '170px'
// },
// placeholder: '请输入资源名称'
// },
// },
// {
// key: 'content',
// label: '操作内容',
// type: 'input',
// attrs: {
// style: {
// width: '170px'
// },
// placeholder: '请输入操作内容'
// },
// },
]
return formMap;
}
export const getOperateColumns = () => {
const columns: any = [
{
title: '模块',
dataIndex: 'module',
key: 'module',
align: 'center',
width: '12%'
},
{
title: '资源名称',
dataIndex: 'resource',
key: 'resource',
align: 'center',
width: '12%'
},
{
title: '操作内容',
dataIndex: 'content',
key: 'content',
align: 'center',
width: '25%',
onCell: () => ({
style: {
maxWidth: 350,
...cellStyle,
},
}),
render: (text: string, record: any) => {
return (
<Tooltip placement="topLeft" title={text} >{text}</Tooltip>);
},
},
{
title: '操作人',
dataIndex: 'operator',
align: 'center',
width: '12%'
},
];
return columns
}


@@ -0,0 +1,130 @@
import * as React from 'react';
import { observer } from 'mobx-react';
import { SearchAndFilterContainer } from 'container/search-filter';
import { IXFormWrapper, IMetaData, IRegister } from 'types/base-type';
import { admin } from 'store/admin';
import { customPagination, cellStyle } from 'constants/table';
import { Table, Tooltip } from 'component/antd';
import { timeFormat } from 'constants/strategy';
import { SearchFormComponent } from '../searchForm';
import { getJarFuncForm, operateList, getOperateColumns } from './config'
import moment = require('moment');
import { tableFilter } from 'lib/utils';
@observer
export class OperationRecord extends SearchAndFilterContainer {
public state: any = {
searchKey: '',
filteredInfo: null,
sortedInfo: null,
};
public getData<T extends IMetaData>(origin: T[]) {
let data: T[] = origin;
let { searchKey } = this.state;
searchKey = (searchKey + '').trim().toLowerCase();
data = searchKey ? origin.filter((item: IMetaData) =>
(item.clusterName !== undefined && item.clusterName !== null) && item.clusterName.toLowerCase().includes(searchKey as string),
) : origin;
return data;
};
public searchForm = (params: any) => {
// this.props.setFuncSubValue(params)
// getSystemFuncList(params).then(res => {
// this.props.setSysFuncList(res.data)
// this.props.setPagination(res.pagination)
// })
const { operator, moduleId } = params || {}
operator ? admin.getOperationRecordData(params) : admin.getOperationRecordData({ moduleId })
// getJarList(params).then(res => {
// this.props.setJarList(res.data)
// this.props.setPagination(res.pagination)
// })
}
public clearAll = () => {
this.setState({
filteredInfo: null,
sortedInfo: null,
});
};
public setHandleChange = (pagination: any, filters: any, sorter: any) => {
this.setState({
filteredInfo: filters,
sortedInfo: sorter,
});
}
public renderOperationRecordList() {
let { sortedInfo, filteredInfo } = this.state;
sortedInfo = sortedInfo || {};
filteredInfo = filteredInfo || {};
const operatingTime = Object.assign({
title: '操作时间',
dataIndex: 'modifyTime',
key: 'modifyTime',
align: 'center',
sorter: (a: any, b: any) => a.modifyTime - b.modifyTime,
render: (t: number) => moment(t).format(timeFormat),
width: '15%',
sortOrder: sortedInfo.columnKey === 'modifyTime' && sortedInfo.order,
});
const operatingPractice = Object.assign({
title: '行为',
dataIndex: 'operate',
key: 'operate',
align: 'center',
width: '12%',
filters: tableFilter<any>(this.getData(admin.oRList), 'operateId', operateList),
// filteredValue: filteredInfo.operate || null,
onFilter: (value: any, record: any) => {
return record.operateId === value
}
}, this.renderColumnsFilter('modifyTime'))
const columns = getOperateColumns()
columns.splice(0, 0, operatingTime);
columns.splice(3, 0, operatingPractice);
return (
<>
<div className="container">
<div className="table-operation-panel">
<SearchFormComponent
formMap={getJarFuncForm()}
onSubmit={(params: any) => this.searchForm(params)}
clearAll={() => this.clearAll()}
isReset={true}
/>
</div>
<div className="table-wrapper">
<Table
rowKey="key"
loading={admin.loading}
dataSource={this.getData(admin.oRList)}
columns={columns}
pagination={customPagination}
bordered
onChange={this.setHandleChange}
/>
</div>
</div>
</>
)
};
componentDidMount() {
admin.getOperationRecordData({ moduleId: 0 });
}
render() {
return <div>
{
this.renderOperationRecordList()
}
</div>
}
}


@@ -0,0 +1,120 @@
import * as React from 'react';
import { Select, Input, InputNumber, Form, Switch, Checkbox, DatePicker, Radio, Upload, Button, Icon, Tooltip } from 'component/antd';
// import './index.less';
const Search = Input.Search;
export interface IFormItem {
key: string;
label: string;
type: string;
value?: string;
// props injected into the inner component
attrs?: any;
// props injected into the Form.Item
formAttrs?: any;
defaultValue?: string | number | any[];
rules?: any[];
invisible?: boolean;
getvaluefromevent: Function;
}
interface SerachFormProps {
formMap: IFormItem[];
// formData: any;
form: any;
onSubmit: Function;
isReset?: boolean;
clearAll: Function;
layout?: 'inline' | 'horizontal' | 'vertical';
}
export interface IFormSelect extends IFormItem {
options: Array<{ key?: string | number, value: string | number, label: string }>;
}
class SearchForm extends React.Component<SerachFormProps>{
public onSubmit = (e: React.FormEvent) => {
// prevent native form submission; queries are triggered via the buttons below
e.preventDefault();
}
public renderFormItem(item: IFormItem) {
switch (item.type) {
default:
case 'input':
return <Input key={item.key} {...item.attrs} />;
case 'select':
return (
<Select
// size="small"
key={item.key}
{...item.attrs}
invisibleValue={item.formAttrs?.invisibleValue}
>
{(item as IFormSelect).options && (item as IFormSelect).options.map((v, index) => (
<Select.Option
key={v.value || v.key || index}
value={v.value}
>
{v.label}
{/* <Tooltip placement='left' title={v.value}>
{v.label}
</Tooltip> */}
</Select.Option>
))}
</Select>
);
}
}
public theQueryClick = (value: any) => {
this.props.onSubmit(value)
this.props.clearAll()
// this.props.form.resetFields()
}
public resetClick = () => {
this.props.form.resetFields()
this.props.clearAll()
this.theQueryClick(this.props.form.getFieldsValue())
}
public render() {
const { form, formMap, isReset } = this.props;
const { getFieldDecorator, getFieldsValue } = form;
return (
<Form layout='inline' onSubmit={this.onSubmit}>
{
formMap.map(formItem => {
// const { initialValue, valuePropName } = this.handleFormItem(formItem, formData);
// const getFieldValue = {
// initialValue,
// rules: formItem.rules || [{ required: false, message: '' }],
// valuePropName,
// };
return (
<Form.Item
key={formItem.key}
label={formItem.label}
{...formItem.formAttrs}
>
{getFieldDecorator(formItem.key, {
initialValue: formItem.formAttrs?.initialvalue,
getValueFromEvent: formItem?.getvaluefromevent,
})(
this.renderFormItem(formItem),
)}
</Form.Item>
);
})
}
<Form.Item>
{
isReset && <Button style={{ width: '80px', marginRight: '20px' }} type="primary" onClick={() => this.resetClick()}>重置</Button>
}
<Button style={{ width: '80px' }} type="primary" onClick={() => this.theQueryClick(getFieldsValue())}>查询</Button>
</Form.Item>
</Form>
);
}
}
export const SearchFormComponent = Form.create<SerachFormProps>({ name: 'search-form' })(SearchForm);


@@ -149,9 +149,9 @@ export class DynamicSetFilter extends React.Component<IDynamicProps> {
public handleSelectChange = (e: string, type: 'topic' | 'consumerGroup' | 'location') => {
switch (type) {
case 'topic':
if (!this.clusterId) {
return message.info('请选择集群');
}
// if (!this.clusterId) {
// return message.info('请选择集群');
// }
this.topicName = e;
const type = this.dealMonitorType();
if (['kafka-consumer-maxLag', 'kafka-consumer-maxDelayTime', 'kafka-consumer-lag'].indexOf(type) > -1) {


@@ -52,8 +52,7 @@ export class CommonAppList extends SearchAndFilterContainer {
},
}),
render: (text: string, record: IAppItem) => {
return (
<Tooltip placement="bottomLeft" title={record.name}>{text}</Tooltip>);
return (<Tooltip placement="bottomLeft" title={record.name}>{text}</Tooltip>);
},
},
{
@@ -103,7 +102,7 @@ export class CommonAppList extends SearchAndFilterContainer {
}
public getOnlineConnect(record: IAppItem) {
modal.showOfflineAppModal(record.appId);
modal.showOfflineAppNewModal(record.appId);
}
public getData<T extends IAppItem>(origin: T[]) {
@@ -114,7 +113,7 @@ export class CommonAppList extends SearchAndFilterContainer {
data = searchKey ? origin.filter((item: IAppItem) =>
((item.name !== undefined && item.name !== null) && item.name.toLowerCase().includes(searchKey as string)) ||
((item.principals !== undefined && item.principals !== null) && item.principals.toLowerCase().includes(searchKey as string)) ||
((item.appId !== undefined && item.appId !== null) && item.appId.toLowerCase().includes(searchKey as string)) ) : origin;
((item.appId !== undefined && item.appId !== null) && item.appId.toLowerCase().includes(searchKey as string))) : origin;
return data;
}


@@ -29,16 +29,16 @@ export class MyCluster extends SearchAndFilterContainer {
public applyCluster() {
const xFormModal = {
formMap: [
{
key: 'idc',
label: '数据中心',
defaultValue: region.regionName,
rules: [{ required: true, message: '请输入数据中心' }],
attrs: {
placeholder: '请输入数据中心',
disabled: true,
},
},
// {
// key: 'idc',
// label: '数据中心',
// defaultValue: region.regionName,
// rules: [{ required: true, message: '请输入数据中心' }],
// attrs: {
// placeholder: '请输入数据中心',
// disabled: true,
// },
// },
{
key: 'appId',
label: '所属应用',


@@ -133,15 +133,15 @@ export class GovernanceTopic extends SearchAndFilterContainer {
width: '30%',
sorter: (a: IResource, b: IResource) => a.topicName.charCodeAt(0) - b.topicName.charCodeAt(0),
render: (text: string, item: IResource) =>
(
<Tooltip placement="bottomLeft" title={text}>
<a
// tslint:disable-next-line:max-line-length
href={`${this.urlPrefix}/topic/topic-detail?clusterId=${item.clusterId}&topic=${item.topicName}&isPhysicalClusterId=true&region=${region.currentRegion}`}
>
{text}
</a>
</Tooltip>),
(
<Tooltip placement="bottomLeft" title={text}>
<a
// tslint:disable-next-line:max-line-length
href={`${this.urlPrefix}/topic/topic-detail?clusterId=${item.clusterId}&topic=${item.topicName}&isPhysicalClusterId=true&region=${region.currentRegion}`}
>
{text}
</a>
</Tooltip>),
},
{
title: '所在集群',
@@ -215,7 +215,7 @@ export class GovernanceTopic extends SearchAndFilterContainer {
return (
<>
{this.pendingTopic(this.getData(expert.resourceData))}
{this.pendingTopic(this.getData(expert.resourceData))}
</>
);
}


@@ -16,6 +16,14 @@
line-height: 64px;
vertical-align: middle;
}
.kafka-header-version{
display: inline-block;
vertical-align: middle;
padding-top:5px;
font-size: 12px;
margin-left:10px;
color:#a0a0a0;
}
}
.mid-content {


@@ -145,6 +145,8 @@ export const Header = observer((props: IHeader) => {
<div className="left-content">
<img className="kafka-header-icon" src={logoUrl} alt="" />
<span className="kafka-header-text">Kafka Manager</span>
<a className='kafka-header-version' href="https://github.com/didi/Logi-KafkaManager/releases" target='_blank'>v2.4.0</a>
{/* version hyperlink added */}
</div>
<div className="mid-content">
{headerMenu.map((item: IMenuItem, index: number) =>


@@ -22,11 +22,11 @@ export const showEditClusterTopic = (item: IClusterTopics) => {
},
{
key: 'appId',
label: '应用ID',
label: '应用名称',
type: 'select',
options: app.adminAppData.map(item => {
return {
label: item.appId,
label: item.name,
value: item.appId,
};
}),
@@ -61,7 +61,7 @@ export const showEditClusterTopic = (item: IClusterTopics) => {
attrs: {
placeholder: '请输入保存时间',
suffix: '小时',
prompttype:'修改保存时间,预计一分钟左右生效!'
prompttype: '修改保存时间,预计一分钟左右生效!'
},
},
{


@@ -35,7 +35,6 @@ class CustomForm extends React.Component<IXFormProps> {
this.props.form.validateFields((err: any, values: any) => {
const deleteData = this.props.formData;
if (!err) {
// console.log('values', values);
if (values.topicName !== this.props.formData.topicName) {
notification.error({ message: 'topic名称不正确请重新输入' });
} else {
@@ -77,7 +76,6 @@ class CustomForm extends React.Component<IXFormProps> {
}
public render() {
// console.log('props', this.props);
const { formData = {} as any, visible } = this.props;
const { getFieldDecorator } = this.props.form;
let metadata = [] as IBrokersMetadata[];


@@ -111,11 +111,11 @@ class CustomForm extends React.Component<IXFormProps> {
})(<Input placeholder="请输入分区数" />)}
</Form.Item>
<Form.Item label="类型">
{/* <Form.Item label={this.state.checked ? 'Region类型' : 'Borker类型'} > */}
{/* <Form.Item label={this.state.checked ? 'Region类型' : 'Broker类型'} > */}
{/* <Switch onChange={(checked) => this.onSwitchChange(checked)} /> */}
<Radio.Group value={this.state.checked ? 'region' : 'broker'} onChange={(e) => { this.onSwitchChange(e.target.value === 'region' ? true : false); }}>
<Radio.Button value="region">Region类型</Radio.Button>
<Radio.Button value="broker">Borker类型</Radio.Button>
<Radio.Button value="broker">Broker类型</Radio.Button>
</Radio.Group>
</Form.Item>
<Form.Item label="brokerIdList" style={{ display: this.state.checked ? 'none' : '' }}>


@@ -28,8 +28,8 @@ const updateInputModal = (status?: string) => {
formMap[4].invisible = status === 'region';
formMap[5].invisible = status !== 'region';
formMap[4].rules = [{required: status !== 'region'}];
formMap[5].rules = [{required: status === 'region'}];
formMap[4].rules = [{ required: status !== 'region' }];
formMap[5].rules = [{ required: status === 'region' }];
// tslint:disable-next-line:no-unused-expression
wrapper.ref && wrapper.ref.updateFormMap$(formMap, wrapper.xFormWrapper.formData);
};
@@ -103,7 +103,7 @@ export const createMigrationTasks = () => {
label: 'Region',
value: 'region',
}, {
label: 'Borker',
label: 'Broker',
value: 'broker',
}],
rules: [{
@@ -141,7 +141,7 @@ export const createMigrationTasks = () => {
placeholder: '请选择目标Region',
},
},
{
key: 'beginTime',
label: '计划开始时间',


@@ -0,0 +1,78 @@
import * as React from 'react';
import { Table, Modal, Tooltip, Icon, message, notification, Alert, Button } from 'component/antd';
import { app } from 'store/app';
import { getApplyOnlineColumns } from 'container/topic/config';
import { observer } from 'mobx-react';
import { modal } from 'store/modal';
import { users } from 'store/users';
import { urlPrefix } from 'constants/left-menu';
import { region } from 'store';
@observer
export class ConnectAppNewList extends React.Component {
public componentDidMount() {
app.getAppsConnections(modal.params);
}
public handleCancel = () => {
app.setAppsConnections([]);
modal.close();
}
public handleSubmit = () => {
const connectionList = app.appsConnections;
if (connectionList && connectionList.length) {
return message.warning('存在连接信息,无法申请下线!');
}
const offlineParams = {
type: 11,
applicant: users.currentUser.username,
description: '',
extensions: JSON.stringify({ appId: modal.params }),
};
app.applyAppOffline(offlineParams).then((data: any) => {
notification.success({ message: '申请下线成功' });
window.location.href = `${urlPrefix}/user/order-detail/?orderId=${data.id}&region=${region.currentRegion}`;
});
modal.close();
}
public render() {
const connectionList = app.appsConnections;
return (
<>
<Modal
visible={true}
className="stream-debug-modal"
title="提示"
maskClosable={false}
onCancel={this.handleCancel}
// onOk={this.handleSubmit}
// okText="确认"
// cancelText="取消"
okButtonProps={{ disabled: app.connectLoading || !!app.appsConnections.length }}
footer={connectionList && connectionList.length ?
<Button type="primary" onClick={this.handleCancel}></Button>
:
<>
<Button onClick={this.handleCancel}>取消</Button>
<Button type="primary" onClick={this.handleSubmit}>确认</Button>
</>
}
width={500}
>
<div style={{ textAlign: 'center', fontWeight: "bolder" }}>
{
connectionList && connectionList.length
?
<span>该应用与Topic之间存在关联关系，请先解除与Topic之间的关联关系<a href={`${urlPrefix}/topic/app-detail?appId=${modal.params}`}>查看详情</a></span>
:
<span>是否确认下线该AppID？</span>
}
</div>
</Modal>
</>
);
}
}


@@ -4,6 +4,7 @@ import { message, Icon, notification, Modal, Table, Tooltip } from 'component/an
import { IApprovalOrder, IBaseOrder, IOrderInfo } from 'types/base-type';
import { admin } from 'store/admin';
import { modal } from 'store/modal';
import { cluster } from 'store/cluster';
import { cellStyle } from 'constants/table';
import * as React from 'react';
@@ -37,6 +38,12 @@ const renderModalTilte = (type: number, status: number) => {
export const showApprovalModal = (info: IOrderInfo, status: number, from?: string) => {
const { id, type } = info;
const formMap = [{
key: 'clusterId',
label: '所属集群',
type: 'input_number',
defaultValue: info.detail.logicalClusterName,
attrs: { disabled: true },
}, {
key: 'partitionNum',
label: '分区数',
type: 'input_number',
@@ -87,7 +94,7 @@ export const showApprovalModal = (info: IOrderInfo, status: number, from?: strin
label: 'Region',
value: 'region',
}, {
label: 'Borker',
label: 'Broker',
value: 'broker',
}],
rules: [{ required: false, message: '请选择类型' }],


@@ -399,8 +399,8 @@ export const updateAllTopicFormModal = () => {
const formMap = wrapper.xFormWrapper.formMap;
if (topic.authorities) {
const { consume, send, checkStatus } = judgeAccessStatus(topic.authorities.access);
formMap[3].defaultValue = checkStatus;
formMap[3].options = [{
formMap[2].defaultValue = checkStatus;
formMap[2].options = [{
label: `消费权限${consume ? '(已拥有)' : ''}`,
value: '1',
disabled: consume,
@@ -409,7 +409,7 @@ export const updateAllTopicFormModal = () => {
value: '2',
disabled: send,
}];
formMap[3].rules = [{
formMap[2].rules = [{
required: true,
validator: (rule: any, value: any, callback: any) => getPowerValidator(rule, value, callback, checkStatus, 'allTopic'),
}];
@@ -476,7 +476,6 @@ export const showAllPermissionModal = (item: ITopic) => {
const showAllPermission = (appId: string, item: ITopic, access: number) => {
const { consume, send, checkStatus } = judgeAccessStatus(access);
const xFormModal = {
formMap: [
{
@@ -489,16 +488,6 @@ const showAllPermission = (appId: string, item: ITopic, access: number) => {
disabled: true,
},
},
{
key: 'clusterName',
label: '集群名称',
defaultValue: item.clusterName,
rules: [{ required: true, message: '请输入集群名称' }],
attrs: {
placeholder: '请输入集群名称',
disabled: true,
},
},
{
key: 'appId',
label: '绑定应用',
@@ -526,6 +515,26 @@ const showAllPermission = (appId: string, item: ITopic, access: number) => {
validator: (rule: any, value: any, callback: any) => getPowerValidator(rule, value, callback, checkStatus, 'allTopic'),
}],
},
// {
// key: 'clusterName',
// label: '集群名称',
// defaultValue: item.clusterName,
// rules: [{ required: true, message: '请输入集群名称' }],
// attrs: {
// placeholder: '请输入集群名称',
// disabled: true,
// },
// },
// {
// key: 'clusterName',
// label: '集群名称',
// defaultValue: item.clusterName,
// rules: [{ required: true, message: '请输入集群名称' }],
// attrs: {
// placeholder: '请输入集群名称',
// disabled: true,
// },
// },
{
key: 'description',
label: '申请原因',
@@ -587,16 +596,16 @@ export const showPermissionModal = (item: ITopic) => {
disabled: true,
},
},
{
key: 'clusterName',
label: '集群名称',
defaultValue: item.clusterName,
rules: [{ required: true, message: '请输入集群名称' }],
attrs: {
placeholder: '请输入集群名称',
disabled: true,
},
},
// {
// key: 'clusterName',
// label: '集群名称',
// defaultValue: item.clusterName,
// rules: [{ required: true, message: '请输入集群名称' }],
// attrs: {
// placeholder: '请输入集群名称',
// disabled: true,
// },
// },
{
key: 'appName',
label: '绑定应用',

View File

@@ -194,7 +194,7 @@ export class SearchAndFilterContainer extends React.Component<any, ISearchAndFil
);
}
public renderColumnsFilter = (type: string) => {
public renderColumnsFilter = (type: string, params?: any) => {
return {
filterIcon: this.renderFilterIcon.bind(null, type),
filterDropdownVisible: this.state[type] as boolean,

View File

@@ -28,7 +28,8 @@ export class StaffSelect extends React.Component<IStaffSelectProps> {
public getStaffList = () => {
const { value } = this.props;
const current = users.currentUser.username || getCookie('username');
const principals = value || (current ? [current] : []);
const principals = [''];
// const principals = value || (current ? [current] : []);
const promises: any[] = [];
for (const item of principals) {
@@ -64,7 +65,6 @@ export class StaffSelect extends React.Component<IStaffSelectProps> {
const { value, isDisabled } = this.props;
const current = users.currentUser.username || getCookie('username');
const principals = value || (current ? [current] : []);
return (
<Select
mode="multiple"
@@ -72,6 +72,7 @@ export class StaffSelect extends React.Component<IStaffSelectProps> {
defaultValue={principals}
onChange={(e: string[]) => this.handleChange(e)}
onSearch={(e: string) => this.handleSearch(e)}
onFocus={() => this.getFocus()}
disabled={isDisabled}
{...searchProps}
>
@@ -83,6 +84,10 @@ export class StaffSelect extends React.Component<IStaffSelectProps> {
);
}
public getFocus() {
this.getStaffList();
}
public handleSearch(params: string) {
debounce(() => {
getStaff(params).then((data: IStaff[]) => {
@@ -98,9 +103,9 @@ export class StaffSelect extends React.Component<IStaffSelectProps> {
});
}, 300)();
}
public handleChange(params: string[]) {
const { onChange } = this.props;
// tslint:disable-next-line:no-unused-expression
onChange && onChange(params);
}


@@ -129,16 +129,17 @@ export class BaseInformation extends React.Component<IInfoProps> {
}
public realTimeTraffic() {
const realTraffic = topic.realTraffic;
if (realTraffic) {
return (
<>
<Spin spinning={topic.realLoading}>
{renderTrafficTable(this.updateRealStatus, StatusGragh)}
</Spin>
</>
);
}
// const realTraffic = topic.realTraffic;
// if (realTraffic) {
return (
<>
<Spin spinning={topic.realLoading}>
{renderTrafficTable(this.updateRealStatus, StatusGragh)}
</Spin>
</>
);
// }
}
public realTimeConsume() {


@@ -30,7 +30,7 @@ export const getInfoRenderItem = (orderInfo: IOrderInfo, result: boolean) => {
value: orderInfo.detail.principals,
}];
const clusterTypelist: ILabelValue[] = [ {
const clusterTypelist: ILabelValue[] = [{
label: '物理集群名称',
value: orderInfo.detail.physicalClusterName,
}, {
@@ -80,23 +80,25 @@ export const getInfoRenderItem = (orderInfo: IOrderInfo, result: boolean) => {
phyAuthOfflineList.splice(3, 0, ...clusterTypelist);
const clusterInfoList: ILabelValue[] = [{
label: '流入流量',
value: `${transBToMB(orderInfo.detail.bytesIn)} MB/s`,
}, {
label: '数据中心',
value: orderInfo.detail.idc,
}, {
label: '集群类型',
value: clusterTypeMap[orderInfo.detail.mode],
}, {
label: '应用ID',
value: orderInfo.detail.appId,
},
label: '流入流量',
value: `${transBToMB(orderInfo.detail.bytesIn)} MB/s`,
},
// {
// label: '数据中心',
// value: orderInfo.detail.idc,
// },
{
label: '集群类型',
value: clusterTypeMap[orderInfo.detail.mode],
}, {
label: '应用ID',
value: orderInfo.detail.appId,
},
];
const clusterOfflineList: ILabelValue[] = expansionList;
const phyClusterOfflineList: ILabelValue[] = clusterTypelist;
const maxAvgBytesIn = orderInfo.detail.maxAvgBytesInList && orderInfo.detail.maxAvgBytesInList.map(item => {
const maxAvgBytesIn = orderInfo.detail.maxAvgBytesInList && orderInfo.detail.maxAvgBytesInList.map(item => {
const val = `${transBToMB(item)} MB/s`;
return val;
});
@@ -125,18 +127,18 @@ export const getInfoRenderItem = (orderInfo: IOrderInfo, result: boolean) => {
phyQuotaInfoList.splice(3, 0, ...clusterTypelist);
const partitionList: ILabelValue[] = expansionList.concat([{
label: 'Topic名称',
value: orderInfo.detail.topicName,
}, {
label: '申请分区数',
value: orderInfo.detail.needIncrPartitionNum,
}, {
label: '当前流入流量',
value: `${transBToMB(orderInfo.detail.bytesIn)} MB/s`,
}, {
label: '近三天峰值流入流量',
value: maxAvgBytesIn && maxAvgBytesIn.join('、'),
},
label: 'Topic名称',
value: orderInfo.detail.topicName,
}, {
label: '申请分区数',
value: orderInfo.detail.needIncrPartitionNum,
}, {
label: '当前流入流量',
value: `${transBToMB(orderInfo.detail.bytesIn)} MB/s`,
}, {
label: '近三天峰值流入流量',
value: maxAvgBytesIn && maxAvgBytesIn.join('、'),
},
]);
const phyPartitionList: ILabelValue[] = partitionList.filter(i => !cluster.includes(i.label));


@@ -133,6 +133,12 @@ export class OrderDetail extends React.Component {
width: '20%',
render: (t: string) => <span>{t === 'consumer' ? '消费' : '生产'}</span>,
},
// {
// title: '客户端语言',
// dataIndex: 'language',
// key: 'language',
// width: '20%',
// },
];
return (
<>


@@ -3,6 +3,7 @@ import { observer } from 'mobx-react';
import { modal } from 'store/modal';
import { ConnectTopicList } from '../modal/connect-topic-list';
import { ConnectAppList } from '../modal/offline-app-modal';
import { ConnectAppNewList } from '../modal/offline-app-modal-new';
import { CancelTopicPermission } from 'container/modal/cancel-topic-permission';
import { OfflineClusterModal } from 'container/modal/offline-cluster-modal';
import { RenderOrderOpResult } from 'container/modal/order';
@@ -22,6 +23,7 @@ export default class AllCustomModalInOne extends React.Component {
const modalMap = {
offlineTopicModal: <ConnectTopicList />,
offlineAppNewModal: <ConnectAppNewList />,
offlineAppModal: <ConnectAppList />,
cancelTopicPermission: <CancelTopicPermission />,
offlineClusterModal: <OfflineClusterModal />,


@@ -418,6 +418,13 @@ export const getMetaData = (needDetail: boolean = true) => {
return fetch(`/rd/clusters/basic-info?need-detail=${needDetail}`);
};
export const getOperationRecordData = (params: any) => {
return fetch(`/rd/operate-record`, {
method: 'POST',
body: JSON.stringify(params),
});
};
export const getConfigure = () => {
return fetch(`/rd/configs`);
};
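
The new `getOperationRecordData` helper above POSTs its filter params as a JSON body to `/rd/operate-record`. As a quick standalone sanity check of that request shape, here is a minimal sketch; `buildOperateRecordRequest` and `OperateRecordParams` are illustrative names, not part of the project:

```typescript
// Sketch of the request the new operate-record helper sends.
// Hypothetical names; the project routes this through its own fetch wrapper.
interface OperateRecordParams {
  moduleId: number;
  operator?: string;
}

function buildOperateRecordRequest(params: OperateRecordParams) {
  return {
    url: '/rd/operate-record',
    init: {
      method: 'POST' as const,
      // the backend expects the filters serialized in the request body
      body: JSON.stringify(params),
    },
  };
}
```

Passing only `{ moduleId }` (as `searchForm` does when no operator is given) simply produces a body without the `operator` field.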


@@ -3,7 +3,7 @@ import CommonRoutePage from './common';
import urlParser from 'lib/url-parser';
import urlQuery from 'store/url-query';
import { AppDetail } from 'container/app';
import { AdminAppList, ClusterList, ClusterDetail, BrokerDetail, UserManagement, VersionManagement, OperationManagement, OperationDetail, BillManagement, ConfigureManagement, IndividualBill, MigrationDetail, BillDetail } from 'container/admin';
import { AdminAppList, ClusterList, ClusterDetail, BrokerDetail, UserManagement, VersionManagement, OperationManagement, OperationDetail, BillManagement, ConfigureManagement, IndividualBill, MigrationDetail, BillDetail, OperationRecord } from 'container/admin';
import { PlatformManagement } from 'container/admin/platform-management';
export default class Home extends React.Component<any> {
@@ -52,7 +52,11 @@ export default class Home extends React.Component<any> {
path: '/admin/migration-detail',
exact: true,
component: MigrationDetail,
}];
}, {
path: '/admin/operation-record',
exact: true,
component: OperationRecord,
}];
constructor(props: any) {
super(props);
@@ -66,7 +70,7 @@ export default class Home extends React.Component<any> {
public render() {
return (
<CommonRoutePage pageRoute={this.pageRoute} mode="admin" active="admin"/>
<CommonRoutePage pageRoute={this.pageRoute} mode="admin" active="admin" />
);
}
}


@@ -9,6 +9,7 @@ import {
getTopicsBasicInfo,
getTasksKafkaFiles,
getMetaData,
getOperationRecordData,
getConfigure,
addNewConfigure,
editConfigure,
@@ -103,6 +104,14 @@ class Admin {
@observable
public metaList: IMetaData[] = [];
@observable
public oRList: any[] = [];
@observable
public oRparams: any = {
moduleId: 0
};
@observable
public configureList: IConfigure[] = [];
@@ -319,6 +328,15 @@ class Admin {
}) : [];
}
@action.bound
public setOperationRecordList(data: any) {
this.setLoading(false);
this.oRList = data ? data.map((item: any, index: any) => {
item.key = index;
return item;
}) : [];
}
@action.bound
public setConfigure(data: IConfigure[]) {
this.configureList = data ? data.map((item, index) => {
@@ -657,6 +675,12 @@ class Admin {
getMetaData(needDetail).then(this.setMetaList);
}
public getOperationRecordData(params: any) {
this.setLoading(true);
this.oRparams = params;
getOperationRecordData(params).then(this.setOperationRecordList);
}
public getConfigure() {
getConfigure().then(this.setConfigure);
}


@@ -31,6 +31,12 @@ class CustomModal {
this.params = value;
}
@action.bound
public showOfflineAppNewModal(value: any) {
this.modalId = 'offlineAppNewModal';
this.params = value;
}
@action.bound
public showOrderOpResult() {
this.modalId = 'orderOpResult';


@@ -6,6 +6,16 @@
url('//at.alicdn.com/t/font_1251424_q66z80q0hio.ttf?t=1577526422376') format('truetype'), /* chrome, firefox, opera, Safari, Android, iOS 4.2+ */
url('//at.alicdn.com/t/font_1251424_q66z80q0hio.svg?t=1577526422376#kafka-manager') format('svg'); /* iOS 4.1- */
}
@font-face {
font-family: 'kafka-manager';
/* project id 2406313 */
src: url('//at.alicdn.com/t/font_2406313_rbsze6uqtta.eot');
src: url('//at.alicdn.com/t/font_2406313_rbsze6uqtta.eot?#iefix') format('embedded-opentype'),
url('//at.alicdn.com/t/font_2406313_rbsze6uqtta.woff2') format('woff2'),
url('//at.alicdn.com/t/font_2406313_rbsze6uqtta.woff') format('woff'),
url('//at.alicdn.com/t/font_2406313_rbsze6uqtta.ttf') format('truetype'),
url('//at.alicdn.com/t/font_2406313_rbsze6uqtta.svg#iconfont') format('svg');
}
.kafka-manager {
font-family: "kafka-manager" !important;
@@ -15,6 +25,15 @@
-moz-osx-font-smoothing: grayscale;
}
/* .kafka-manager-record {
font-family: "kafka-manager" !important;
font-size: 16px;
font-style: normal;
-webkit-font-smoothing: antialiased;
-moz-osx-font-smoothing: grayscale;
} */
.k-icon-fuwurenwuguanli:before {
content: "\e660";
}
@@ -47,6 +66,10 @@
content: "\e634";
}
.k-icon-operationrecord:before {
content: "\e772";
}
.k-icon-menu2:before {
content: "\e609";
}


@@ -190,7 +190,7 @@ export interface IUser {
chineseName?: string;
department?: string;
key?: number;
confirmPassword?:string
confirmPassword?: string
}
export interface IOffset {
@@ -939,7 +939,7 @@ export interface INewLogical {
logicalClusterName?: string;
logicalClusterNameCn?: string;
regionIdList: number[];
logicalClusterIdentification?:string
logicalClusterIdentification?: string
}
export interface IPartitionsLocation {


@@ -1,8 +1,8 @@
package com.xiaojukeji.kafka.manager.service.cache;
import com.alibaba.fastjson.JSONObject;
import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO;
import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
import com.xiaojukeji.kafka.manager.common.utils.factory.KafkaConsumerFactory;
import kafka.admin.AdminClient;
import org.apache.commons.pool2.impl.GenericObjectPool;
@@ -103,6 +103,21 @@ public class KafkaClientPool {
}
}
public static void closeKafkaConsumerPool(Long clusterId) {
lock.lock();
try {
GenericObjectPool<KafkaConsumer> objectPool = KAFKA_CONSUMER_POOL.remove(clusterId);
if (objectPool == null) {
return;
}
objectPool.close();
} catch (Exception e) {
LOGGER.error("close kafka consumer pool failed, clusterId:{}.", clusterId, e);
} finally {
lock.unlock();
}
}
public static KafkaConsumer borrowKafkaConsumerClient(ClusterDO clusterDO) {
if (ValidateUtils.isNull(clusterDO)) {
return null;
@@ -132,7 +147,11 @@ public class KafkaClientPool {
if (ValidateUtils.isNull(objectPool)) {
return;
}
objectPool.returnObject(kafkaConsumer);
try {
objectPool.returnObject(kafkaConsumer);
} catch (Exception e) {
LOGGER.error("return kafka consumer client failed, clusterId:{}", physicalClusterId, e);
}
}
public static AdminClient getAdminClient(Long clusterId) {


@@ -4,21 +4,23 @@ import com.xiaojukeji.kafka.manager.common.bizenum.KafkaBrokerRoleEnum;
import com.xiaojukeji.kafka.manager.common.constant.Constant;
import com.xiaojukeji.kafka.manager.common.constant.KafkaConstant;
import com.xiaojukeji.kafka.manager.common.entity.KafkaVersion;
import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO;
import com.xiaojukeji.kafka.manager.common.utils.JsonUtils;
import com.xiaojukeji.kafka.manager.common.utils.ListUtils;
import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO;
import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
import com.xiaojukeji.kafka.manager.common.utils.jmx.JmxConfig;
import com.xiaojukeji.kafka.manager.common.zookeeper.znode.brokers.BrokerMetadata;
import com.xiaojukeji.kafka.manager.common.zookeeper.znode.ControllerData;
import com.xiaojukeji.kafka.manager.common.zookeeper.znode.brokers.TopicMetadata;
import com.xiaojukeji.kafka.manager.common.zookeeper.ZkConfigImpl;
import com.xiaojukeji.kafka.manager.dao.ControllerDao;
import com.xiaojukeji.kafka.manager.common.utils.jmx.JmxConnectorWrap;
import com.xiaojukeji.kafka.manager.service.service.JmxService;
import com.xiaojukeji.kafka.manager.service.zookeeper.*;
import com.xiaojukeji.kafka.manager.service.service.ClusterService;
import com.xiaojukeji.kafka.manager.common.zookeeper.ZkConfigImpl;
import com.xiaojukeji.kafka.manager.common.zookeeper.ZkPathUtil;
import com.xiaojukeji.kafka.manager.common.zookeeper.znode.ControllerData;
import com.xiaojukeji.kafka.manager.common.zookeeper.znode.brokers.BrokerMetadata;
import com.xiaojukeji.kafka.manager.common.zookeeper.znode.brokers.TopicMetadata;
import com.xiaojukeji.kafka.manager.dao.ControllerDao;
import com.xiaojukeji.kafka.manager.service.service.ClusterService;
import com.xiaojukeji.kafka.manager.service.service.JmxService;
import com.xiaojukeji.kafka.manager.service.zookeeper.BrokerStateListener;
import com.xiaojukeji.kafka.manager.service.zookeeper.ControllerStateListener;
import com.xiaojukeji.kafka.manager.service.zookeeper.TopicStateListener;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
@@ -160,8 +162,12 @@ public class PhysicalClusterMetadataManager {
CLUSTER_MAP.remove(clusterId);
}
public Set<Long> getClusterIdSet() {
return CLUSTER_MAP.keySet();
public static Map<Long, ClusterDO> getClusterMap() {
return CLUSTER_MAP;
}
public static void updateClusterMap(ClusterDO clusterDO) {
CLUSTER_MAP.put(clusterDO.getId(), clusterDO);
}
public static ClusterDO getClusterFromCache(Long clusterId) {


@@ -4,7 +4,6 @@ import com.xiaojukeji.kafka.manager.common.entity.Result;
import com.xiaojukeji.kafka.manager.common.entity.ResultStatus;
import com.xiaojukeji.kafka.manager.common.entity.ao.ClusterDetailDTO;
import com.xiaojukeji.kafka.manager.common.entity.ao.cluster.ControllerPreferredCandidate;
import com.xiaojukeji.kafka.manager.common.entity.dto.op.ControllerPreferredCandidateDTO;
import com.xiaojukeji.kafka.manager.common.entity.vo.normal.cluster.ClusterNameDTO;
import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO;
import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterMetricsDO;


@@ -17,5 +17,5 @@ public interface OperateRecordService {
int insert(String operator, ModuleEnum module, String resourceName, OperateEnum operate, Map<String, String> content);
List<OperateRecordDO> queryByCondt(OperateRecordDTO dto);
List<OperateRecordDO> queryByCondition(OperateRecordDTO dto);
}


@@ -1,7 +1,6 @@
package com.xiaojukeji.kafka.manager.service.service;
import com.xiaojukeji.kafka.manager.common.entity.ResultStatus;
import com.xiaojukeji.kafka.manager.common.entity.dto.rd.RegionDTO;
import com.xiaojukeji.kafka.manager.common.entity.pojo.RegionDO;
import java.util.List;


@@ -22,6 +22,8 @@ import java.util.Map;
public interface TopicManagerService {
List<TopicDO> listAll();
List<TopicDO> getByClusterIdFromCache(Long clusterId);
List<TopicDO> getByClusterId(Long clusterId);
TopicDO getByTopicName(Long clusterId, String topicName);
@@ -30,6 +32,15 @@ public interface TopicManagerService {
Map<String, List<Double>> getTopicMaxAvgBytesIn(Long clusterId, Integer latestDay, Double minMaxAvgBytesIn);
/**
 * Get the peak average bytes-in traffic of a Topic within the given time range
 * @param clusterId cluster ID
 * @param topicName Topic name
 * @param startTime start time
 * @param endTime end time
 * @param maxAvgDay number of days over which the max average is taken
 * @return
 */
Double getTopicMaxAvgBytesIn(Long clusterId, String topicName, Date startTime, Date endTime, Integer maxAvgDay);
TopicStatisticsDO getByTopicAndDay(Long clusterId, String topicName, String gmtDay);


@@ -1,16 +1,17 @@
package com.xiaojukeji.kafka.manager.service.service.gateway.impl;
import com.alibaba.fastjson.JSONObject;
import com.xiaojukeji.kafka.manager.common.bizenum.ModuleEnum;
import com.xiaojukeji.kafka.manager.common.bizenum.OperateEnum;
import com.xiaojukeji.kafka.manager.common.bizenum.OperationStatusEnum;
import com.xiaojukeji.kafka.manager.common.bizenum.TopicAuthorityEnum;
import com.xiaojukeji.kafka.manager.common.entity.ResultStatus;
import com.xiaojukeji.kafka.manager.common.entity.ao.gateway.TopicQuota;
import com.xiaojukeji.kafka.manager.common.entity.pojo.OperateRecordDO;
import com.xiaojukeji.kafka.manager.common.entity.pojo.gateway.AuthorityDO;
import com.xiaojukeji.kafka.manager.common.entity.pojo.gateway.KafkaAclDO;
import com.xiaojukeji.kafka.manager.dao.gateway.AuthorityDao;
import com.xiaojukeji.kafka.manager.common.entity.ao.gateway.TopicQuota;
import com.xiaojukeji.kafka.manager.common.entity.ResultStatus;
import com.xiaojukeji.kafka.manager.common.utils.JsonUtils;
import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
import com.xiaojukeji.kafka.manager.dao.gateway.AuthorityDao;
import com.xiaojukeji.kafka.manager.dao.gateway.KafkaAclDao;
import com.xiaojukeji.kafka.manager.service.service.OperateRecordService;
import com.xiaojukeji.kafka.manager.service.service.gateway.AuthorityService;
@@ -20,10 +21,8 @@ import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.*;
import java.util.stream.Collectors;
/**
* @author zhongyuankai
@@ -120,7 +119,7 @@ public class AuthorityServiceImpl implements AuthorityService {
operateRecordDO.setModuleId(ModuleEnum.AUTHORITY.getCode());
operateRecordDO.setOperateId(OperateEnum.DELETE.getCode());
operateRecordDO.setResource(topicName);
operateRecordDO.setContent(JSONObject.toJSONString(content));
operateRecordDO.setContent(JsonUtils.toJSONString(content));
operateRecordDO.setOperator(operator);
operateRecordService.insert(operateRecordDO);
} catch (Exception e) {
@@ -150,7 +149,7 @@ public class AuthorityServiceImpl implements AuthorityService {
} catch (Exception e) {
LOGGER.error("get authority failed, clusterId:{} topicName:{}.", clusterId, topicName, e);
}
return null;
return Collections.emptyList();
}
@Override
@@ -164,7 +163,11 @@ public class AuthorityServiceImpl implements AuthorityService {
if (ValidateUtils.isEmptyList(doList)) {
return new ArrayList<>();
}
return doList;
// Filter out entries without authority from the authority list
return doList.stream()
.filter(authorityDO -> !TopicAuthorityEnum.DENY.getCode().equals(authorityDO.getAccess()))
.collect(Collectors.toList());
}
@Override


@@ -66,7 +66,10 @@ public class AdminServiceImpl implements AdminService {
String applicant,
String operator) {
List<Integer> fullBrokerIdList = regionService.getFullBrokerIdList(clusterDO.getId(), regionId, brokerIdList);
if (PhysicalClusterMetadataManager.getNotAliveBrokerNum(clusterDO.getId(), fullBrokerIdList) > DEFAULT_DEAD_BROKER_LIMIT_NUM) {
Long notAliveBrokerNum = PhysicalClusterMetadataManager.getNotAliveBrokerNum(clusterDO.getId(), fullBrokerIdList);
if (notAliveBrokerNum >= fullBrokerIdList.size() || notAliveBrokerNum > DEFAULT_DEAD_BROKER_LIMIT_NUM) {
// If all brokers are down, or the number of dead brokers exceeds DEFAULT_DEAD_BROKER_LIMIT_NUM, the broker parameters are considered illegal
return ResultStatus.BROKER_NOT_EXIST;
}
@@ -340,10 +343,6 @@ public class AdminServiceImpl implements AdminService {
@Override
public ResultStatus modifyTopicConfig(ClusterDO clusterDO, String topicName, Properties properties, String operator) {
ResultStatus rs = TopicCommands.modifyTopicConfig(clusterDO, topicName, properties);
if (!ResultStatus.SUCCESS.equals(rs)) {
return rs;
}
return rs;
}
}

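The added clause above rejects the region when every broker in it is down, a case the old `> DEFAULT_DEAD_BROKER_LIMIT_NUM` check missed whenever the limit was at least the region size. A condensed sketch of the predicate (the limit value here is an assumption; the project's constant may differ):

```java
public class BrokerCheck {
    // Assumed value; the project's DEFAULT_DEAD_BROKER_LIMIT_NUM may differ.
    static final long DEAD_BROKER_LIMIT = 1L;

    /** Illegal if every broker is down, or more than the limit are down. */
    static boolean isBrokerListIllegal(long notAliveNum, int totalNum) {
        return notAliveNum >= totalNum || notAliveNum > DEAD_BROKER_LIMIT;
    }

    public static void main(String[] args) {
        System.out.println(isBrokerListIllegal(0, 3)); // false: all alive
        System.out.println(isBrokerListIllegal(3, 3)); // true: all dead
        System.out.println(isBrokerListIllegal(1, 1)); // true: the limit-only check alone would pass this
    }
}
```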

@@ -82,6 +82,7 @@ public class ClusterServiceImpl implements ClusterService {
content.put("security properties", clusterDO.getSecurityProperties());
content.put("jmx properties", clusterDO.getJmxProperties());
operateRecordService.insert(operator, ModuleEnum.CLUSTER, clusterDO.getClusterName(), OperateEnum.ADD, content);
if (clusterDao.insert(clusterDO) <= 0) {
LOGGER.error("add new cluster failed, clusterDO:{}.", clusterDO);
return ResultStatus.MYSQL_ERROR;
@@ -205,21 +206,31 @@ public class ClusterServiceImpl implements ClusterService {
}
private boolean isZookeeperLegal(String zookeeper) {
boolean status = false;
ZooKeeper zk = null;
try {
zk = new ZooKeeper(zookeeper, 1000, null);
} catch (Throwable t) {
return false;
for (int i = 0; i < 15; ++i) {
if (zk.getState().isConnected()) {
// Only a connected state indicates the address is valid
status = true;
break;
}
Thread.sleep(1000);
}
} catch (Exception e) {
LOGGER.error("class=ClusterServiceImpl||method=isZookeeperLegal||zookeeper={}||msg=zk address illegal||errMsg={}", zookeeper, e.getMessage());
} finally {
try {
if (zk != null) {
zk.close();
}
} catch (Exception e) {
return false;
LOGGER.error("class=ClusterServiceImpl||method=isZookeeperLegal||zookeeper={}||msg=close zk client failed||errMsg={}", zookeeper, e.getMessage());
}
}
return true;
return status;
}
@Override

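The rewritten `isZookeeperLegal` above no longer trusts the `ZooKeeper` constructor, which returns before a session is established; it polls the connection state for up to 15 seconds and treats only a connected state as proof the address is valid. A sketch of that polling loop, with the state check abstracted into a `BooleanSupplier` and the interval parameterized so it can be exercised quickly (this abstraction is illustrative, not the project's API):

```java
import java.util.function.BooleanSupplier;

public class ConnectPoll {
    /** Poll isConnected up to maxAttempts times, sleeping intervalMs between checks. */
    static boolean waitUntilConnected(BooleanSupplier isConnected, int maxAttempts, long intervalMs) {
        try {
            for (int i = 0; i < maxAttempts; ++i) {
                if (isConnected.getAsBoolean()) {
                    return true; // only a connected state proves the address is valid
                }
                Thread.sleep(intervalMs);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return false;
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // Simulate a client that reports connected on the third check.
        System.out.println(waitUntilConnected(() -> ++calls[0] >= 3, 15, 1L)); // true
        System.out.println(waitUntilConnected(() -> false, 3, 1L));            // false
    }
}
```

As in the diff, the caller must still close the client in a `finally` block regardless of the outcome.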

@@ -8,7 +8,6 @@ import com.xiaojukeji.kafka.manager.common.entity.ao.consumer.ConsumeDetailDTO;
import com.xiaojukeji.kafka.manager.common.entity.ao.consumer.ConsumerGroup;
import com.xiaojukeji.kafka.manager.common.entity.ao.consumer.ConsumerGroupSummary;
import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO;
import com.xiaojukeji.kafka.manager.common.utils.ListUtils;
import com.xiaojukeji.kafka.manager.common.zookeeper.znode.brokers.TopicMetadata;
import com.xiaojukeji.kafka.manager.common.entity.ao.PartitionOffsetDTO;
import com.xiaojukeji.kafka.manager.common.exception.ConfigException;


@@ -41,8 +41,8 @@ public class OperateRecordServiceImpl implements OperateRecordService {
}
@Override
public List<OperateRecordDO> queryByCondt(OperateRecordDTO dto) {
return operateRecordDao.queryByCondt(
public List<OperateRecordDO> queryByCondition(OperateRecordDTO dto) {
return operateRecordDao.queryByCondition(
dto.getModuleId(),
dto.getOperateId(),
dto.getOperator(),


@@ -95,6 +95,14 @@ public class TopicManagerServiceImpl implements TopicManagerService {
return new ArrayList<>();
}
@Override
public List<TopicDO> getByClusterIdFromCache(Long clusterId) {
if (clusterId == null) {
return new ArrayList<>();
}
return topicDao.getByClusterIdFromCache(clusterId);
}
@Override
public List<TopicDO> getByClusterId(Long clusterId) {
if (clusterId == null) {
@@ -139,12 +147,14 @@ public class TopicManagerServiceImpl implements TopicManagerService {
}
@Override
public Double getTopicMaxAvgBytesIn(Long clusterId,
String topicName,
Date startTime,
Date endTime,
Integer maxAvgDay) {
return topicStatisticsDao.getTopicMaxAvgBytesIn(clusterId, topicName, startTime, endTime, maxAvgDay);
public Double getTopicMaxAvgBytesIn(Long clusterId, String topicName, Date startTime, Date endTime, Integer maxAvgDay) {
try {
return topicStatisticsDao.getTopicMaxAvgBytesIn(clusterId, topicName, startTime, endTime, maxAvgDay);
} catch (Exception e) {
LOGGER.error("class=TopicManagerServiceImpl||method=getTopicMaxAvgBytesIn||clusterId={}||topicName={}||startTime={}||endTime={}||maxAvgDay={}||errMsg={}",
clusterId, topicName, startTime, endTime, maxAvgDay, e.getMessage());
}
return null;
}
@Override


@@ -381,7 +381,7 @@ public class TopicServiceImpl implements TopicService {
return new ArrayList<>();
}
List<TopicDO> topicDOList = topicManagerService.getByClusterId(clusterId);
List<TopicDO> topicDOList = topicManagerService.getByClusterIdFromCache(clusterId);
if (ValidateUtils.isNull(topicDOList)) {
topicDOList = new ArrayList<>();
}


@@ -14,5 +14,5 @@ public interface OperateRecordDao {
int insert(OperateRecordDO operateRecordDO);
List<OperateRecordDO> queryByCondt(Integer moduleId, Integer operateId, String operator, Date startTime, Date endTime);
List<OperateRecordDO> queryByCondition(Integer moduleId, Integer operateId, String operator, Date startTime, Date endTime);
}


@@ -15,6 +15,8 @@ public interface TopicDao {
TopicDO getByTopicName(Long clusterId, String topicName);
List<TopicDO> getByClusterIdFromCache(Long clusterId);
List<TopicDO> getByClusterId(Long clusterId);
List<TopicDO> getByAppId(String appId);


@@ -78,13 +78,14 @@ public class AppDaoImpl implements AppDao {
* Update the App cache
*/
private synchronized void updateTopicCache(List<AppDO> doList, long timestamp) {
if (APP_CACHE_LATEST_UPDATE_TIME == Constant.START_TIMESTAMP) {
APP_MAP.clear();
}
if (doList == null || doList.isEmpty() || APP_CACHE_LATEST_UPDATE_TIME >= timestamp) {
// No data in this update, or this update is stale: ignore it
return;
}
if (APP_CACHE_LATEST_UPDATE_TIME == Constant.START_TIMESTAMP) {
APP_MAP.clear();
}
for (AppDO elem: doList) {
APP_MAP.put(elem.getAppId(), elem);

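The hunk above moves `APP_MAP.clear()` below the staleness/emptiness guard, so an empty or out-of-date snapshot can no longer trigger a clear before being rejected; the same reordering is applied to the Authority and Topic caches further down. A reduced sketch of the corrected ordering (`SnapshotCache` and its fields are illustrative stand-ins, not from the project):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SnapshotCache {
    static final long START_TIMESTAMP = -1L; // sentinel: cache never populated
    final Map<String, String> cache = new ConcurrentHashMap<>();
    long latestUpdateTime = START_TIMESTAMP;

    /** Apply a snapshot only if it is non-empty and newer than the current cache. */
    synchronized void update(List<String> keys, long timestamp) {
        if (keys == null || keys.isEmpty() || latestUpdateTime >= timestamp) {
            // Stale or empty snapshot: leave the existing cache untouched.
            return;
        }
        if (latestUpdateTime == START_TIMESTAMP) {
            cache.clear(); // first real load: drop any placeholder state
        }
        for (String k : keys) {
            cache.put(k, k);
        }
        latestUpdateTime = timestamp;
    }

    public static void main(String[] args) {
        SnapshotCache c = new SnapshotCache();
        c.update(java.util.Arrays.asList("a"), 100L);
        c.update(java.util.Collections.emptyList(), 200L); // rejected, cache untouched
        System.out.println(c.cache.keySet()); // [a]
    }
}
```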

@@ -93,7 +93,7 @@ public class AuthorityDaoImpl implements AuthorityDao {
private void updateAuthorityCache() {
Long timestamp = System.currentTimeMillis();
long timestamp = System.currentTimeMillis();
if (timestamp + 1000 <= AUTHORITY_CACHE_LATEST_UPDATE_TIME) {
// Requests within the last second skip the DB
@@ -109,13 +109,14 @@ public class AuthorityDaoImpl implements AuthorityDao {
* Update the Authority cache
*/
private synchronized void updateAuthorityCache(List<AuthorityDO> doList, Long timestamp) {
if (AUTHORITY_CACHE_LATEST_UPDATE_TIME == Constant.START_TIMESTAMP) {
AUTHORITY_MAP.clear();
}
if (doList == null || doList.isEmpty() || AUTHORITY_CACHE_LATEST_UPDATE_TIME >= timestamp) {
// No data in this update, or this update is stale: ignore it
return;
}
if (AUTHORITY_CACHE_LATEST_UPDATE_TIME == Constant.START_TIMESTAMP) {
AUTHORITY_MAP.clear();
}
for (AuthorityDO elem: doList) {
Map<Long, Map<String, AuthorityDO>> doMap =


@@ -30,13 +30,13 @@ public class OperateRecordDaoImpl implements OperateRecordDao {
}
@Override
public List<OperateRecordDO> queryByCondt(Integer moduleId, Integer operateId, String operator, Date startTime, Date endTime) {
public List<OperateRecordDO> queryByCondition(Integer moduleId, Integer operateId, String operator, Date startTime, Date endTime) {
Map<String, Object> params = new HashMap<>(5);
params.put("moduleId", moduleId);
params.put("operateId", operateId);
params.put("operator", operator);
params.put("startTime", startTime);
params.put("endTime", endTime);
return sqlSession.selectList("OperateRecordDao.queryByCondt", params);
return sqlSession.selectList("OperateRecordDao.queryByCondition", params);
}
}


@@ -62,11 +62,16 @@ public class TopicDaoImpl implements TopicDao {
}
@Override
public List<TopicDO> getByClusterId(Long clusterId) {
public List<TopicDO> getByClusterIdFromCache(Long clusterId) {
updateTopicCache();
return new ArrayList<>(TOPIC_MAP.getOrDefault(clusterId, Collections.emptyMap()).values());
}
@Override
public List<TopicDO> getByClusterId(Long clusterId) {
return sqlSession.selectList("TopicDao.getByClusterId", clusterId);
}
@Override
public List<TopicDO> getByAppId(String appId) {
return sqlSession.selectList("TopicDao.getByAppId", appId);
@@ -92,7 +97,7 @@ public class TopicDaoImpl implements TopicDao {
}
private void updateTopicCache() {
Long timestamp = System.currentTimeMillis();
long timestamp = System.currentTimeMillis();
if (timestamp + 1000 <= TOPIC_CACHE_LATEST_UPDATE_TIME) {
// Requests within the last second skip the DB
@@ -108,13 +113,14 @@ public class TopicDaoImpl implements TopicDao {
* Update the Topic cache
*/
private synchronized void updateTopicCache(List<TopicDO> doList, Long timestamp) {
if (TOPIC_CACHE_LATEST_UPDATE_TIME == Constant.START_TIMESTAMP) {
TOPIC_MAP.clear();
}
if (doList == null || doList.isEmpty() || TOPIC_CACHE_LATEST_UPDATE_TIME >= timestamp) {
// No data in this update, or this update is stale: ignore it
return;
}
if (TOPIC_CACHE_LATEST_UPDATE_TIME == Constant.START_TIMESTAMP) {
TOPIC_MAP.clear();
}
for (TopicDO elem: doList) {
Map<String, TopicDO> doMap = TOPIC_MAP.getOrDefault(elem.getClusterId(), new ConcurrentHashMap<>());


@@ -21,7 +21,7 @@
)
</insert>
<select id="queryByCondt" parameterType="java.util.Map" resultMap="OperateRecordMap">
<select id="queryByCondition" parameterType="java.util.Map" resultMap="OperateRecordMap">
select *
from operate_record
where


@@ -16,5 +16,5 @@ public interface LoginService {
void logout(HttpServletRequest request, HttpServletResponse response, Boolean needJump2LoginPage);
boolean checkLogin(HttpServletRequest request, HttpServletResponse response);
boolean checkLogin(HttpServletRequest request, HttpServletResponse response, String classRequestMappingValue);
}


@@ -0,0 +1,130 @@
package com.xiaojukeji.kafka.manager.account.component.ldap;
import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;
import javax.naming.AuthenticationException;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.NamingException;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;
import javax.naming.ldap.InitialLdapContext;
import javax.naming.ldap.LdapContext;
import java.util.Hashtable;
@Component
public class LdapAuthentication {
private static final Logger LOGGER = LoggerFactory.getLogger(LdapAuthentication.class);
@Value(value = "${account.ldap.url:}")
private String ldapUrl;
@Value(value = "${account.ldap.basedn:}")
private String ldapBasedn;
@Value(value = "${account.ldap.factory:}")
private String ldapFactory;
@Value(value = "${account.ldap.filter:}")
private String ldapFilter;
@Value(value = "${account.ldap.security.authentication:}")
private String securityAuthentication;
@Value(value = "${account.ldap.security.principal:}")
private String securityPrincipal;
@Value(value = "${account.ldap.security.credentials:}")
private String securityCredentials;
private LdapContext getLdapContext() {
Hashtable<String, String> env = new Hashtable<String, String>();
env.put(Context.INITIAL_CONTEXT_FACTORY, ldapFactory);
env.put(Context.PROVIDER_URL, ldapUrl + ldapBasedn);
env.put(Context.SECURITY_AUTHENTICATION, securityAuthentication);
// Without an explicit principal and credentials this automatically falls back to an anonymous bind
env.put(Context.SECURITY_PRINCIPAL, securityPrincipal);
env.put(Context.SECURITY_CREDENTIALS, securityCredentials);
try {
return new InitialLdapContext(env, null);
} catch (AuthenticationException e) {
LOGGER.warn("class=LdapAuthentication||method=getLdapContext||errMsg={}", e);
} catch (Exception e) {
LOGGER.error("class=LdapAuthentication||method=getLdapContext||errMsg={}", e);
}
return null;
}
private String getUserDN(String account, LdapContext ctx) {
String userDN = "";
try {
SearchControls constraints = new SearchControls();
constraints.setSearchScope(SearchControls.SUBTREE_SCOPE);
String filter = "(&(objectClass=*)("+ldapFilter+"=" + account + "))";
NamingEnumeration<SearchResult> en = ctx.search("", filter, constraints);
if (en == null || !en.hasMoreElements()) {
return "";
}
// maybe more than one element
while (en.hasMoreElements()) {
Object obj = en.nextElement();
if (obj instanceof SearchResult) {
SearchResult si = (SearchResult) obj;
userDN += si.getName();
userDN += "," + ldapBasedn;
break;
}
}
} catch (Exception e) {
LOGGER.error("class=LdapAuthentication||method=getUserDN||account={}||errMsg={}", account, e);
}
return userDN;
}
/**
* LDAP username/password authentication
* @param account
* @param password
* @return
*/
public boolean authenticate(String account, String password) {
LdapContext ctx = getLdapContext();
if (ValidateUtils.isNull(ctx)) {
return false;
}
try {
String userDN = getUserDN(account, ctx);
if(ValidateUtils.isBlank(userDN)){
return false;
}
ctx.addToEnvironment(Context.SECURITY_PRINCIPAL, userDN);
ctx.addToEnvironment(Context.SECURITY_CREDENTIALS, password);
ctx.reconnect(null);
return true;
} catch (AuthenticationException e) {
LOGGER.warn("class=LdapAuthentication||method=authenticate||account={}||errMsg={}", account, e);
} catch (NamingException e) {
LOGGER.warn("class=LdapAuthentication||method=authenticate||account={}||errMsg={}", account, e);
} catch (Exception e) {
LOGGER.error("class=LdapAuthentication||method=authenticate||account={}||errMsg={}", account, e);
} finally {
if(ctx != null) {
try {
ctx.close();
} catch (NamingException e) {
LOGGER.error("class=LdapAuthentication||method=authenticate||account={}||errMsg={}", account, e);
}
}
}
return false;
}
}


@@ -2,13 +2,17 @@ package com.xiaojukeji.kafka.manager.account.component.sso;
import com.xiaojukeji.kafka.manager.account.AccountService;
import com.xiaojukeji.kafka.manager.account.component.AbstractSingleSignOn;
import com.xiaojukeji.kafka.manager.common.bizenum.AccountRoleEnum;
import com.xiaojukeji.kafka.manager.common.constant.LoginConstant;
import com.xiaojukeji.kafka.manager.common.entity.Result;
import com.xiaojukeji.kafka.manager.common.entity.ResultStatus;
import com.xiaojukeji.kafka.manager.common.entity.dto.normal.LoginDTO;
import com.xiaojukeji.kafka.manager.common.entity.pojo.AccountDO;
import com.xiaojukeji.kafka.manager.common.utils.EncryptUtil;
import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
import com.xiaojukeji.kafka.manager.account.component.ldap.LdapAuthentication;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;
import javax.servlet.http.HttpServletRequest;
@@ -23,12 +27,48 @@ public class BaseSessionSignOn extends AbstractSingleSignOn {
@Autowired
private AccountService accountService;
@Autowired
private LdapAuthentication ldapAuthentication;
// Whether LDAP authentication is enabled
@Value(value = "${account.ldap.enabled:}")
private Boolean accountLdapEnabled;
// Default role for LDAP auto-registered users; note it should normally be a low-privilege role
@Value(value = "${account.ldap.auth-user-registration-role:}")
private String authUserRegistrationRole;
// Whether LDAP auto-registration is enabled
@Value(value = "${account.ldap.auth-user-registration:}")
private boolean authUserRegistration;
@Override
public Result<String> loginAndGetLdap(HttpServletRequest request, HttpServletResponse response, LoginDTO dto) {
if (ValidateUtils.isBlank(dto.getUsername()) || ValidateUtils.isNull(dto.getPassword())) {
return null;
return Result.buildFailure("Missing parameters");
}
Result<AccountDO> accountResult = accountService.getAccountDO(dto.getUsername());
// If LDAP authentication is enabled, LDAP can also be used to verify credentials
if(!ValidateUtils.isNull(accountLdapEnabled) && accountLdapEnabled){
// Verify the credentials against LDAP
if(!ldapAuthentication.authenticate(dto.getUsername(),dto.getPassword())){
return Result.buildFrom(ResultStatus.LDAP_AUTHENTICATION_FAILED);
}
if((ValidateUtils.isNull(accountResult) || ValidateUtils.isNull(accountResult.getData())) && authUserRegistration){
// Auto-register the account
AccountDO accountDO = new AccountDO();
accountDO.setUsername(dto.getUsername());
accountDO.setRole(AccountRoleEnum.getUserRoleEnum(authUserRegistrationRole).getRole());
accountDO.setPassword(dto.getPassword());
accountService.createAccount(accountDO);
}
return Result.buildSuc(dto.getUsername());
}
if (ValidateUtils.isNull(accountResult) || accountResult.failed()) {
return new Result<>(accountResult.getCode(), accountResult.getMessage());
}
@@ -64,4 +104,4 @@ public class BaseSessionSignOn extends AbstractSingleSignOn {
response.setStatus(AbstractSingleSignOn.REDIRECT_CODE);
response.addHeader(AbstractSingleSignOn.HEADER_REDIRECT_KEY, "");
}
}
}


@@ -63,12 +63,17 @@ public class LoginServiceImpl implements LoginService {
}
@Override
public boolean checkLogin(HttpServletRequest request, HttpServletResponse response) {
String uri = request.getRequestURI();
if (!(uri.contains(ApiPrefix.API_V1_NORMAL_PREFIX)
|| uri.contains(ApiPrefix.API_V1_RD_PREFIX)
|| uri.contains(ApiPrefix.API_V1_OP_PREFIX))) {
// Whitelisted API, skip the login check
public boolean checkLogin(HttpServletRequest request, HttpServletResponse response, String classRequestMappingValue) {
if (ValidateUtils.isNull(classRequestMappingValue)) {
LOGGER.error("class=LoginServiceImpl||method=checkLogin||msg=uri illegal||uri={}", request.getRequestURI());
singleSignOn.setRedirectToLoginPage(response);
return false;
}
if (classRequestMappingValue.equals(ApiPrefix.API_V1_SSO_PREFIX)
|| classRequestMappingValue.equals(ApiPrefix.API_V1_THIRD_PART_PREFIX)
|| classRequestMappingValue.equals(ApiPrefix.GATEWAY_API_V1_PREFIX)) {
// Whitelisted APIs return true directly
return true;
}
@@ -79,7 +84,7 @@ public class LoginServiceImpl implements LoginService {
return false;
}
boolean status = checkAuthority(request, accountService.getAccountRoleFromCache(username));
boolean status = checkAuthority(classRequestMappingValue, accountService.getAccountRoleFromCache(username));
if (status) {
HttpSession session = request.getSession();
session.setAttribute(LoginConstant.SESSION_USERNAME_KEY, username);
@@ -89,19 +94,18 @@ public class LoginServiceImpl implements LoginService {
return false;
}
private boolean checkAuthority(HttpServletRequest request, AccountRoleEnum accountRoleEnum) {
String uri = request.getRequestURI();
if (uri.contains(ApiPrefix.API_V1_NORMAL_PREFIX)) {
private boolean checkAuthority(String classRequestMappingValue, AccountRoleEnum accountRoleEnum) {
if (classRequestMappingValue.equals(ApiPrefix.API_V1_NORMAL_PREFIX)) {
// normal APIs are accessible to everyone
return true;
}
if (uri.contains(ApiPrefix.API_V1_RD_PREFIX) ) {
// RD APIs are accessible to OP or RD
if (classRequestMappingValue.equals(ApiPrefix.API_V1_RD_PREFIX) ) {
// RD APIs are accessible to OP or RD
return AccountRoleEnum.RD.equals(accountRoleEnum) || AccountRoleEnum.OP.equals(accountRoleEnum);
}
if (uri.contains(ApiPrefix.API_V1_OP_PREFIX)) {
if (classRequestMappingValue.equals(ApiPrefix.API_V1_OP_PREFIX)) {
// OP APIs are accessible to OP only
return AccountRoleEnum.OP.equals(accountRoleEnum);
}

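The change above from `uri.contains(prefix)` to an exact match on the controller's class-level `RequestMapping` value closes the loophole where any URI merely containing a whitelisted prefix substring skipped the role check. A condensed sketch of the resulting authority check (the prefix string values here are assumptions; the real constants live in `ApiPrefix`):

```java
public class AuthorityCheck {
    enum Role { NORMAL, RD, OP }

    // Illustrative values; the project defines these in ApiPrefix.
    static final String NORMAL_PREFIX = "/api/v1/normal/";
    static final String RD_PREFIX = "/api/v1/rd/";
    static final String OP_PREFIX = "/api/v1/op/";

    /** Exact match on the class-level mapping, not a substring of the request URI. */
    static boolean checkAuthority(String mapping, Role role) {
        if (NORMAL_PREFIX.equals(mapping)) {
            return true; // normal APIs are open to every logged-in user
        }
        if (RD_PREFIX.equals(mapping)) {
            return role == Role.RD || role == Role.OP;
        }
        if (OP_PREFIX.equals(mapping)) {
            return role == Role.OP;
        }
        return false; // unknown mappings are denied
    }

    public static void main(String[] args) {
        System.out.println(checkAuthority(RD_PREFIX, Role.OP));          // true
        System.out.println(checkAuthority(OP_PREFIX, Role.NORMAL));      // false
        System.out.println(checkAuthority("/api/v1/op/extra", Role.OP)); // false: equals, not contains
    }
}
```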

@@ -95,7 +95,7 @@ public class DeleteAppOrder extends AbstractAppOrder {
// Check whether the app still holds authority on any topic
List<AuthorityDO> authorityList = authorityService.getAuthority(orderAppExtension.getAppId());
if (!ValidateUtils.isEmptyList(authorityList)) {
return ResultStatus.OPERATION_FORBIDDEN;
return ResultStatus.APP_OFFLINE_FORBIDDEN;
}
if (appService.deleteApp(appDO, userName) > 0) {
return ResultStatus.SUCCESS;


@@ -5,6 +5,8 @@ import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
import com.xiaojukeji.kafka.manager.monitor.common.entry.*;
import com.xiaojukeji.kafka.manager.monitor.component.n9e.entry.*;
import com.xiaojukeji.kafka.manager.monitor.component.n9e.entry.bizenum.CategoryEnum;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.*;
@@ -13,6 +15,8 @@ import java.util.*;
* @date 20/8/26
*/
public class N9eConverter {
private static final Logger LOGGER = LoggerFactory.getLogger(N9eConverter.class);
public static List<N9eMetricSinkPoint> convert2N9eMetricSinkPointList(String nid, List<MetricSinkPoint> pointList) {
if (pointList == null || pointList.isEmpty()) {
return new ArrayList<>();
@@ -98,8 +102,8 @@ public class N9eConverter {
n9eStrategy.setNotify_user(new ArrayList<>());
n9eStrategy.setCallback(strategyAction.getCallback());
n9eStrategy.setEnable_stime("00:00");
n9eStrategy.setEnable_etime("23:59");
n9eStrategy.setEnable_stime(String.format("%02d:00", ListUtils.string2IntList(strategy.getPeriodHoursOfDay()).stream().distinct().min((e1, e2) -> e1.compareTo(e2)).get()));
n9eStrategy.setEnable_etime(String.format("%02d:59", ListUtils.string2IntList(strategy.getPeriodHoursOfDay()).stream().distinct().max((e1, e2) -> e1.compareTo(e2)).get()));
n9eStrategy.setEnable_days_of_week(ListUtils.string2IntList(strategy.getPeriodDaysOfWeek()));
n9eStrategy.setNeed_upgrade(0);
@@ -120,6 +124,15 @@ public class N9eConverter {
return strategyList;
}
private static Integer getEnableHour(String enableTime) {
try {
return Integer.valueOf(enableTime.split(":")[0]);
} catch (Exception e) {
LOGGER.warn("class=N9eConverter||method=getEnableHour||enableTime={}||errMsg={}", enableTime, e.getMessage());
}
return null;
}
public static Strategy convert2Strategy(N9eStrategy n9eStrategy, Map<String, NotifyGroup> notifyGroupMap) {
if (n9eStrategy == null) {
return null;
@@ -137,7 +150,16 @@ public class N9eConverter {
strategy.setId(n9eStrategy.getId().longValue());
strategy.setName(n9eStrategy.getName());
strategy.setPriority(n9eStrategy.getPriority());
strategy.setPeriodHoursOfDay("0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23");
List<Integer> hourList = new ArrayList<>();
Integer startHour = N9eConverter.getEnableHour(n9eStrategy.getEnable_stime());
Integer endHour = N9eConverter.getEnableHour(n9eStrategy.getEnable_etime());
if (!(ValidateUtils.isNullOrLessThanZero(startHour) || ValidateUtils.isNullOrLessThanZero(endHour) || endHour < startHour)) {
for (Integer hour = startHour; hour <= endHour; ++hour) {
hourList.add(hour);
}
}
strategy.setPeriodHoursOfDay(ListUtils.intList2String(hourList));
strategy.setPeriodDaysOfWeek(ListUtils.intList2String(n9eStrategy.getEnable_days_of_week()));
List<StrategyExpression> strategyExpressionList = new ArrayList<>();

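The reverse conversion above derives `periodHoursOfDay` from the strategy's `enable_stime`/`enable_etime` instead of hard-coding all 24 hours. A self-contained sketch of the hour-window parsing: `getEnableHour` mirrors the method in the diff, while `hourRange` is an illustrative condensation of the loop in `convert2Strategy` (it returns an empty list on bad input rather than logging):

```java
import java.util.ArrayList;
import java.util.List;

public class EnableHours {
    /** Parse the hour out of an "HH:mm" string; null when malformed. */
    static Integer getEnableHour(String enableTime) {
        try {
            return Integer.valueOf(enableTime.split(":")[0]);
        } catch (Exception e) {
            return null; // the real code logs a warning here
        }
    }

    /** Inclusive hour range [stime, etime]; empty when either bound is unusable. */
    static List<Integer> hourRange(String stime, String etime) {
        List<Integer> hours = new ArrayList<>();
        Integer start = getEnableHour(stime);
        Integer end = getEnableHour(etime);
        if (start == null || end == null || start < 0 || end < start) {
            return hours;
        }
        for (int h = start; h <= end; ++h) {
            hours.add(h);
        }
        return hours;
    }

    public static void main(String[] args) {
        System.out.println(hourRange("09:00", "11:59")); // [9, 10, 11]
        System.out.println(hourRange("bad", "11:59"));   // []
    }
}
```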

@@ -1,14 +1,7 @@
package com.xiaojukeji.kafka.manager.notify;
import com.xiaojukeji.kafka.manager.common.entity.ao.account.Account;
import com.xiaojukeji.kafka.manager.common.entity.pojo.OrderDO;
import com.xiaojukeji.kafka.manager.common.events.OrderApplyEvent;
import com.xiaojukeji.kafka.manager.notify.common.NotifyConstant;
import com.xiaojukeji.kafka.manager.notify.notifyer.AbstractNotifyService;
import com.xiaojukeji.kafka.manager.notify.common.OrderNotifyTemplate;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.ApplicationListener;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;
@@ -19,27 +12,10 @@ import org.springframework.stereotype.Service;
*/
@Service("orderApplyNotifyService")
public class OrderApplyNotifyService implements ApplicationListener<OrderApplyEvent> {
@Autowired
private AbstractNotifyService notifyService;
@Value("${notify.order.detail-url}")
private String orderDetailUrl;
@Async
@Override
public void onApplicationEvent(OrderApplyEvent orderApplyEvent) {
OrderDO orderDO = orderApplyEvent.getOrderDO();
String detailUrl = String.format(orderDetailUrl, orderDO.getId(), orderApplyEvent.getIdc());
for (Account account : NotifyConstant.accountList) {
notifyService.sendMsg(account.getUsername(),
OrderNotifyTemplate.getNotify2OrderHandlerMessage(
account.getChineseName(),
orderDO.getApplicant(),
orderDO.getTitle(),
detailUrl
)
);
}
// TODO: order notification
}
}


@@ -1,18 +0,0 @@
package com.xiaojukeji.kafka.manager.notify.common;
import com.xiaojukeji.kafka.manager.common.bizenum.AccountRoleEnum;
import com.xiaojukeji.kafka.manager.common.entity.ao.account.Account;
import java.util.Arrays;
import java.util.List;
/**
* @author zengqiao
* @date 20/8/27
*/
public class NotifyConstant {
public static final List<Account> accountList = Arrays.asList(
new Account("xuzhengxi", "徐正熙", "", AccountRoleEnum.OP)
);
}


@@ -125,7 +125,7 @@ public class SyncTopic2DB extends AbstractScheduledTask<EmptyEntry> {
if (ValidateUtils.isNull(syncTopic2DBConfig.isAddAuthority()) || !syncTopic2DBConfig.isAddAuthority()) {
// Authority info is not to be added, skip directly
return;
continue;
}
// TODO: adding the Topic and adding the Authority are currently non-transactional; an exception in between leaves the data inconsistent, to be improved later


@@ -1,15 +1,17 @@
package com.xiaojukeji.kafka.manager.task.schedule.metadata;
import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO;
import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
import com.xiaojukeji.kafka.manager.service.cache.KafkaClientPool;
import com.xiaojukeji.kafka.manager.service.cache.PhysicalClusterMetadataManager;
import com.xiaojukeji.kafka.manager.service.service.ClusterService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;
/**
* @author zengqiao
@@ -25,24 +27,63 @@ public class FlushClusterMetadata {
@Scheduled(cron="0/30 * * * * ?")
public void flush() {
List<ClusterDO> doList = clusterService.list();
Map<Long, ClusterDO> dbClusterMap = clusterService.list().stream().collect(Collectors.toMap(ClusterDO::getId, Function.identity(), (key1, key2) -> key2));
Set<Long> newClusterIdSet = new HashSet<>();
Set<Long> oldClusterIdSet = physicalClusterMetadataManager.getClusterIdSet();
for (ClusterDO clusterDO: doList) {
newClusterIdSet.add(clusterDO.getId());
Map<Long, ClusterDO> cacheClusterMap = PhysicalClusterMetadataManager.getClusterMap();
// Add the cluster
physicalClusterMetadataManager.addNew(clusterDO);
}
// Newly added clusters
for (ClusterDO clusterDO: dbClusterMap.values()) {
if (cacheClusterMap.containsKey(clusterDO.getId())) {
// Already exists
continue;
}
add(clusterDO);
}
for (Long clusterId: oldClusterIdSet) {
if (newClusterIdSet.contains(clusterId)) {
continue;
}
// Removed clusters
for (ClusterDO clusterDO: cacheClusterMap.values()) {
if (dbClusterMap.containsKey(clusterDO.getId())) {
// Already exists
continue;
}
remove(clusterDO.getId());
}
// Remove the cluster
physicalClusterMetadataManager.remove(clusterId);
}
// Clusters whose config changed
for (ClusterDO dbClusterDO: dbClusterMap.values()) {
ClusterDO cacheClusterDO = cacheClusterMap.get(dbClusterDO.getId());
if (ValidateUtils.anyNull(cacheClusterDO) || dbClusterDO.equals(cacheClusterDO)) {
// Absent || unchanged
continue;
}
modifyConfig(dbClusterDO);
}
}
private void add(ClusterDO clusterDO) {
if (ValidateUtils.anyNull(clusterDO)) {
return;
}
physicalClusterMetadataManager.addNew(clusterDO);
}
private void modifyConfig(ClusterDO clusterDO) {
if (ValidateUtils.anyNull(clusterDO)) {
return;
}
PhysicalClusterMetadataManager.updateClusterMap(clusterDO);
KafkaClientPool.closeKafkaConsumerPool(clusterDO.getId());
}
private void remove(Long clusterId) {
if (ValidateUtils.anyNull(clusterId)) {
return;
}
// Remove cached metadata
physicalClusterMetadataManager.remove(clusterId);
// Clear the client pool
KafkaClientPool.closeKafkaConsumerPool(clusterId);
}
}
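The rewritten `flush()` above follows a DB-vs-cache reconciliation pattern: snapshot the DB into a map keyed by id, then classify each entry as added (in DB, not in cache), removed (in cache, gone from DB), or modified (present in both but not equal). A minimal, self-contained sketch of that pattern — names (`Cluster`, `reconcile`) are illustrative, not from the project:

```java
import java.util.*;
import java.util.function.Function;
import java.util.stream.Collectors;

public class ReconcileSketch {
    // Stand-in for ClusterDO; records give field-wise equals(), which the
    // "modified" check relies on.
    record Cluster(long id, String config) {}

    static Map<String, List<Long>> reconcile(List<Cluster> db, Map<Long, Cluster> cache) {
        // Key the DB snapshot by id, keeping the last entry on duplicate ids.
        Map<Long, Cluster> dbMap = db.stream()
                .collect(Collectors.toMap(Cluster::id, Function.identity(), (a, b) -> b));
        List<Long> added = new ArrayList<>(), removed = new ArrayList<>(), modified = new ArrayList<>();
        for (Cluster c : dbMap.values()) {
            if (!cache.containsKey(c.id())) {
                added.add(c.id());              // in DB, not yet in cache
            } else if (!cache.get(c.id()).equals(c)) {
                modified.add(c.id());           // cached, but config changed
            }
        }
        for (Cluster c : cache.values()) {
            if (!dbMap.containsKey(c.id())) {
                removed.add(c.id());            // cached, but deleted from DB
            }
        }
        return Map.of("added", added, "removed", removed, "modified", modified);
    }

    public static void main(String[] args) {
        List<Cluster> db = List.of(new Cluster(1, "a"), new Cluster(2, "b2"));
        Map<Long, Cluster> cache = Map.of(2L, new Cluster(2, "b"), 3L, new Cluster(3, "c"));
        System.out.println(reconcile(db, cache));
    }
}
```

In the real task, "added" maps to `physicalClusterMetadataManager.addNew`, "removed" additionally closes the consumer pool via `KafkaClientPool.closeKafkaConsumerPool`, and "modified" updates the cluster map and recycles the pool.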

View File

@@ -1,5 +1,6 @@
package com.xiaojukeji.kafka.manager.web.api;
import com.xiaojukeji.kafka.manager.common.constant.ApiPrefix;
import com.xiaojukeji.kafka.manager.common.entity.Result;
import io.swagger.annotations.Api;
import io.swagger.annotations.ApiOperation;
@@ -14,9 +15,9 @@ import springfox.documentation.annotations.ApiIgnore;
* @date 20/6/18
*/
@ApiIgnore
@Api(description = "web应用探活接口(REST)")
@Api(tags = "web应用探活接口(REST)")
@RestController
@RequestMapping("api/")
@RequestMapping(ApiPrefix.API_V1_THIRD_PART_PREFIX)
public class HealthController {
@ApiIgnore

View File

@@ -9,7 +9,6 @@ import com.xiaojukeji.kafka.manager.common.entity.vo.common.AccountSummaryVO;
import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
import com.xiaojukeji.kafka.manager.common.utils.SpringTool;
import com.xiaojukeji.kafka.manager.common.constant.ApiPrefix;
import com.xiaojukeji.kafka.manager.web.api.versionone.gateway.GatewayHeartbeatController;
import io.swagger.annotations.Api;
import io.swagger.annotations.ApiOperation;
import org.slf4j.Logger;
@@ -62,4 +61,4 @@ public class NormalAccountController {
AccountRoleEnum accountRoleEnum = accountService.getAccountRoleFromCache(username);
return new Result<>(new AccountRoleVO(username, accountRoleEnum.getRole()));
}
}
}

View File

@@ -11,11 +11,13 @@ import com.xiaojukeji.kafka.manager.common.entity.metrics.BaseMetrics;
import com.xiaojukeji.kafka.manager.common.entity.vo.common.RealTimeMetricsVO;
import com.xiaojukeji.kafka.manager.common.entity.vo.normal.TopicBusinessInfoVO;
import com.xiaojukeji.kafka.manager.common.entity.vo.normal.topic.*;
import com.xiaojukeji.kafka.manager.common.utils.DateUtils;
import com.xiaojukeji.kafka.manager.common.utils.SpringTool;
import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO;
import com.xiaojukeji.kafka.manager.common.entity.pojo.KafkaBillDO;
import com.xiaojukeji.kafka.manager.common.utils.jmx.JmxAttributeEnum;
import com.xiaojukeji.kafka.manager.common.entity.vo.normal.topic.TopicStatisticMetricsVO;
import com.xiaojukeji.kafka.manager.service.cache.LogicalClusterMetadataManager;
import com.xiaojukeji.kafka.manager.service.cache.PhysicalClusterMetadataManager;
import com.xiaojukeji.kafka.manager.service.service.*;
@@ -339,4 +341,23 @@ public class NormalTopicController {
);
}
@ApiOperation(value = "Topic流量统计信息", notes = "")
@RequestMapping(value = "{clusterId}/topics/{topicName}/statistic-metrics", method = RequestMethod.GET)
@ResponseBody
public Result<TopicStatisticMetricsVO> getTopicStatisticMetrics(@PathVariable Long clusterId,
@PathVariable String topicName,
@RequestParam(value = "isPhysicalClusterId", required = false) Boolean isPhysicalClusterId,
@RequestParam("latest-day") Integer latestDay) {
Long physicalClusterId = logicalClusterMetadataManager.getPhysicalClusterId(clusterId, isPhysicalClusterId);
if (ValidateUtils.isNull(physicalClusterId)) {
return Result.buildFrom(ResultStatus.CLUSTER_NOT_EXIST);
}
Double maxAvgBytesIn = topicManagerService.getTopicMaxAvgBytesIn(physicalClusterId, topicName, new Date(DateUtils.getDayStarTime(-1 * latestDay)), new Date(), 1);
if (ValidateUtils.isNull(maxAvgBytesIn)) {
return Result.buildFrom(ResultStatus.MYSQL_ERROR);
}
return new Result<>(new TopicStatisticMetricsVO(maxAvgBytesIn));
}
}
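The new endpoint builds its query window from `DateUtils.getDayStarTime(-1 * latestDay)` to "now". Assuming that helper returns the epoch millis of the start of the day `latestDay` days ago (the project's `DateUtils` is not shown here), the same arithmetic can be sketched with `java.time`:

```java
import java.time.LocalDate;
import java.time.ZoneId;
import java.util.Date;

public class DayWindow {
    // Epoch millis of local midnight `daysAgo` days before today.
    static long dayStartMillis(int daysAgo, ZoneId zone) {
        return LocalDate.now(zone).minusDays(daysAgo)
                .atStartOfDay(zone).toInstant().toEpochMilli();
    }

    public static void main(String[] args) {
        int latestDay = 7; // mirrors the endpoint's latest-day request param
        Date from = new Date(dayStartMillis(latestDay, ZoneId.systemDefault()));
        Date to = new Date();
        System.out.println("query window: " + from + " .. " + to);
    }
}
```

Anchoring the lower bound at midnight (rather than exactly `latestDay * 24h` ago) keeps the window aligned to whole days, which matches how per-day peak metrics are usually aggregated.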

View File

@@ -36,7 +36,7 @@ public class RdOperateRecordController {
if (ValidateUtils.isNull(dto) || !dto.legal()) {
return Result.buildFrom(ResultStatus.PARAM_ILLEGAL);
}
List<OperateRecordVO> voList = OperateRecordModelConverter.convert2OperateRecordVOList(operateRecordService.queryByCondt(dto));
List<OperateRecordVO> voList = OperateRecordModelConverter.convert2OperateRecordVOList(operateRecordService.queryByCondition(dto));
if (voList.size() > MAX_RECORD_COUNT) {
voList = voList.subList(0, MAX_RECORD_COUNT);
}

View File

@@ -7,7 +7,6 @@ import com.xiaojukeji.kafka.manager.common.entity.ResultStatus;
import com.xiaojukeji.kafka.manager.common.entity.metrics.BrokerMetrics;
import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
import com.xiaojukeji.kafka.manager.common.zookeeper.znode.brokers.BrokerMetadata;
import com.xiaojukeji.kafka.manager.openapi.common.vo.ThirdPartBrokerOverviewVO;
import com.xiaojukeji.kafka.manager.service.cache.PhysicalClusterMetadataManager;
import com.xiaojukeji.kafka.manager.service.service.BrokerService;
import io.swagger.annotations.Api;
@@ -52,4 +51,4 @@ public class ThirdPartClusterController {
return new Result<>(underReplicated.equals(0));
}
}
}

View File

@@ -13,8 +13,6 @@ import com.xiaojukeji.kafka.manager.common.entity.vo.normal.topic.TopicAuthorize
import com.xiaojukeji.kafka.manager.common.entity.vo.normal.topic.TopicRequestTimeDetailVO;
import com.xiaojukeji.kafka.manager.common.zookeeper.znode.brokers.TopicMetadata;
import com.xiaojukeji.kafka.manager.openapi.common.vo.TopicOffsetChangedVO;
import com.xiaojukeji.kafka.manager.openapi.common.vo.TopicStatisticMetricsVO;
import com.xiaojukeji.kafka.manager.common.utils.DateUtils;
import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO;
import com.xiaojukeji.kafka.manager.service.cache.PhysicalClusterMetadataManager;
@@ -30,7 +28,6 @@ import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;
import java.util.Date;
import java.util.List;
/**
@@ -69,27 +66,6 @@ public class ThirdPartTopicController {
return new Result<>(vo);
}
@ApiOperation(value = "Topic流量统计信息", notes = "")
@RequestMapping(value = "{physicalClusterId}/topics/{topicName}/statistic-metrics", method = RequestMethod.GET)
@ResponseBody
public Result<TopicStatisticMetricsVO> getTopicStatisticMetrics(@PathVariable Long physicalClusterId,
@PathVariable String topicName,
@RequestParam("latest-day") Integer latestDay) {
try {
return new Result<>(new TopicStatisticMetricsVO(topicManagerService.getTopicMaxAvgBytesIn(
physicalClusterId,
topicName,
new Date(DateUtils.getDayStarTime(-1 * latestDay)),
new Date(),
1
)));
} catch (Exception e) {
LOGGER.error("get topic statistic metrics failed, clusterId:{} topicName:{} latestDay:{}."
, physicalClusterId, topicName, latestDay, e);
}
return Result.buildFrom(ResultStatus.MYSQL_ERROR);
}
@ApiOperation(value = "Topic是否有流量", notes = "")
@RequestMapping(value = "{physicalClusterId}/topics/{topicName}/offset-changed", method = RequestMethod.GET)
@ResponseBody

View File

@@ -1,8 +1,13 @@
package com.xiaojukeji.kafka.manager.web.inteceptor;
import com.xiaojukeji.kafka.manager.account.LoginService;
import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.method.HandlerMethod;
import org.springframework.web.servlet.HandlerInterceptor;
import javax.servlet.http.HttpServletRequest;
@@ -15,6 +20,8 @@ import javax.servlet.http.HttpServletResponse;
*/
@Component
public class PermissionInterceptor implements HandlerInterceptor {
private static final Logger LOGGER = LoggerFactory.getLogger(PermissionInterceptor.class);
@Autowired
private LoginService loginService;
@@ -28,6 +35,31 @@ public class PermissionInterceptor implements HandlerInterceptor {
public boolean preHandle(HttpServletRequest request,
HttpServletResponse response,
Object handler) throws Exception {
return loginService.checkLogin(request, response);
String classRequestMappingValue = null;
try {
classRequestMappingValue = getClassRequestMappingValue(handler);
} catch (Exception e) {
LOGGER.error("class=PermissionInterceptor||method=preHandle||uri={}||msg=parse class request-mapping failed", request.getRequestURI(), e);
}
return loginService.checkLogin(request, response, classRequestMappingValue);
}
private String getClassRequestMappingValue(Object handler) {
RequestMapping classRM = null;
if(handler instanceof HandlerMethod) {
HandlerMethod hm = (HandlerMethod)handler;
classRM = hm.getMethod().getDeclaringClass().getAnnotation(RequestMapping.class);
} else if(handler instanceof org.springframework.web.servlet.mvc.Controller) {
org.springframework.web.servlet.mvc.Controller hm = (org.springframework.web.servlet.mvc.Controller)handler;
Class<? extends org.springframework.web.servlet.mvc.Controller> hmClass = hm.getClass();
classRM = hmClass.getAnnotation(RequestMapping.class);
} else {
classRM = handler.getClass().getAnnotation(RequestMapping.class);
}
if (ValidateUtils.isNull(classRM) || classRM.value().length == 0) {
return null;
}
return classRM.value()[0];
}
}
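The interceptor's core move is reading the *class-level* `@RequestMapping` of whichever controller declares the handler method, then passing its first path value to `checkLogin`. The reflective lookup can be shown without Spring on the classpath by using a stand-in annotation (`Mapping` and `SampleController` below are illustrative only):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class AnnotationLookup {
    // Stand-in for Spring's @RequestMapping; RUNTIME retention is what makes
    // getAnnotation() able to see it at runtime.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    @interface Mapping { String[] value() default {}; }

    @Mapping("/api/v1/normal")
    static class SampleController {}

    static String firstMappingValue(Class<?> clazz) {
        Mapping m = clazz.getAnnotation(Mapping.class);
        if (m == null || m.value().length == 0) {
            return null; // no class-level mapping declared
        }
        return m.value()[0];
    }

    public static void main(String[] args) {
        System.out.println(firstMappingValue(SampleController.class)); // prints /api/v1/normal
    }
}
```

In the interceptor, `HandlerMethod.getMethod().getDeclaringClass()` plays the role of `SampleController.class`, and the null return is the "no prefix available" fallback.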

View File

@@ -14,7 +14,7 @@ spring:
jdbc-url: jdbc:mysql://127.0.0.1:3306/logi_kafka_manager?characterEncoding=UTF-8&useSSL=false&serverTimezone=GMT%2B8
username: admin
password: admin
driver-class-name: com.mysql.jdbc.Driver
driver-class-name: com.mysql.cj.jdbc.Driver
main:
allow-bean-definition-overriding: true
@@ -49,6 +49,17 @@ task:
account:
ldap:
enabled: false
url: ldap://127.0.0.1:389/
basedn: dc=tsign,dc=cn
factory: com.sun.jndi.ldap.LdapCtxFactory
filter: sAMAccountName
security:
authentication: simple
principal: cn=admin,dc=tsign,dc=cn
credentials: admin
auth-user-registration: true
auth-user-registration-role: normal
kcm:
enabled: false

View File

@@ -16,7 +16,7 @@
</parent>
<properties>
<kafka-manager.revision>2.3.0-SNAPSHOT</kafka-manager.revision>
<kafka-manager.revision>2.4.0-SNAPSHOT</kafka-manager.revision>
<swagger2.version>2.7.0</swagger2.version>
<swagger.version>1.5.13</swagger.version>
@@ -180,7 +180,7 @@
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<version>5.1.41</version>
<version>8.0.11</version>
</dependency>
<dependency>