Mirror of https://github.com/didi/KnowStreaming.git (synced 2025-12-24 20:22:12 +08:00)
Compare commits
67 Commits
SHA1: b67a162d3f, 1141d4b833, cdac92ca7b, 2a57c260cc, 8f10624073, eb1f8be11e, 3333501ab9, 0f40820315, b9bb1c775d, 1059b7376b, f38ab4a9ce, 9e7450c012, 99a3e360fe, d45f8f78d6, 648af61116, eebf1b89b1, f8094bb624, ed13e0d2c2, aa830589b4, 999a2bd929, d69ee98450, f6712c24ad, 89d2772194, 03352142b6, 73a51e0c00, 2e26f8caa6, f9bcce9e43, 2ecc877ba8, 3f8a3c69e3, 67c37a0984, a58a55d00d, 06d51dd0b8, d5db028f57, fcb85ff4be, 3695b4363d, cb11e6437c, 5127bd11ce, 91f90aefa1, 0a067bce36, f0aba433bf, f06467a0e3, 68bcd3c710, a645733cc5, 49fe5baf94, 411ee55653, e351ce7411, f33e585a71, 77f3096e0d, 9a5b18c4e6, 0c7112869a, f66a4d71ea, 9b0ab878df, d30b90dfd0, efd28f8c27, e05e722387, 748e81956d, c9a41febce, 18e244b756, 47676139a3, 1ed933b7ad, f6a343ccd6, dd6cdc22e5, f70f4348b3, e7349161f3, 2e2907ea09, 25e84b2a6c, 9aefc55534
README.md (11 changes)
@@ -67,11 +67,16 @@
- [DiDi Logi-KafkaManager video tutorial series](https://mp.weixin.qq.com/s/9X7gH0tptHPtfjPPSdGO8g)
- [Kafka in practice (15): a study of DiDi's open-source Kafka management platform Logi-KafkaManager, by A叶子叶来](https://blog.csdn.net/yezonggang/article/details/113106244)

## 3 DiDi Logi open-source user DingTalk group
## 3 DiDi Logi open-source user group

WeChat group: follow the official account Obsuite and reply "Logi加群"

DingTalk group ID: 32821440

## 4 OCE certification
OCE is a certification mechanism and exchange platform tailored for production users of DiDi Logi-KafkaManager. We provide OCE enterprises with better technical support, such as dedicated technical salons, one-on-one exchanges with the team, and a dedicated Q&A group. If your company runs Logi-KafkaManager in production, [come and join](http://obsuite.didiyun.com/open/openAuth)
Releases_Notes.md (new file, +97 lines)
@@ -0,0 +1,97 @@
---

**A one-stop `Apache Kafka` cluster metrics monitoring and operations management platform**

---

## v2.3.0

Release date: 2021-02-08

### Capability improvements

- Added support for Docker-based deployment
- A Broker can be designated as a candidate controller
- Gateway configurations can be added and managed
- Consumer group state can be retrieved
- Added JMX authentication for clusters

### Experience optimizations

- Streamlined the flows for editing user roles and changing passwords
- Added a search by consumerID
- Improved the prompt wording for "Topic connection info", "reset consumer group offsets", and "modify Topic retention time"
- Added links to the resource application document in the relevant places

### Bug fixes

- Fixed an incorrect time axis in the Broker monitoring charts
- Fixed an incorrect alert-period unit when creating Nightingale (N9e) monitoring alert rules

## v2.2.0

Release date: 2021-01-25

### Capability improvements

- Streamlined batch operations on work orders
- Added real-time 75th/99th-percentile latency data for Topics
- Added a scheduled task that periodically writes ownerless Topics not yet in the DB into the DB

### Experience optimizations

- Added links to the cluster onboarding document in the relevant places
- Clarified the meaning of physical and logical clusters
- The Topic detail page and the partition-expansion dialog now show the Region a Topic belongs to
- Streamlined configuring the Topic data retention time during Topic approval
- Improved the error messages shown when applying for and approving Topics/applications
- Improved the wording of the Topic data sampling actions
- Improved the prompt shown to operators when deleting a Topic
- Improved the deletion logic and prompt shown to operators when deleting a Region
- Improved the prompt shown to operators when deleting a logical cluster
- Improved the file-type restrictions when uploading cluster configuration files

### Bug fixes

- Fixed a special-character validation error when filling in an application name
- Fixed ordinary users being able to access application details without permission
- Fixed the data compression format being unretrievable after a Kafka version upgrade
- Fixed logical clusters and Topics still being displayed after deletion
- Fixed duplicate result prompts when running a Leader rebalance

## v2.1.0

Release date: 2020-12-19

### Experience optimizations

- Improved the background style shown while pages load
- Streamlined the flow for ordinary users to apply for Topic permissions
- Improved the permission restrictions on Topic quota and partition applications
- Improved the prompt wording when revoking Topic permissions
- Improved the field names on the quota application form
- Streamlined the flow for resetting consumer offsets
- Improved the form for creating Topic migration tasks
- Improved the dialog style of the Topic partition-expansion operation
- Improved the styling of the cluster Broker monitoring charts
- Improved the form for creating logical clusters
- Improved the prompt wording for cluster security protocols

### Bug fixes

- Fixed occasional failures when resetting consumer offsets
build.sh (2 changes)
@@ -4,7 +4,7 @@ cd $workspace
## constant
OUTPUT_DIR=./output
KM_VERSION=2.3.0
KM_VERSION=2.3.1
APP_NAME=kafka-manager
APP_DIR=${APP_NAME}-${KM_VERSION}
Binary file not shown.
@@ -9,6 +9,13 @@
# Dynamic Configuration Management

## 0. Contents

- 1. Scheduled Topic sync task
- 2. Expert service: Topic partition hotspots
- 3. Expert service: insufficient Topic partitions

## 1. Scheduled Topic sync task

### 1.1 Purpose of the configuration
@@ -63,3 +70,53 @@ task:
]
```

---

## 2. Expert service: Topic partition hotspots

Within the set of Brokers delimited by a `Region`, a Topic whose Leader count is distributed unevenly across those Brokers is considered a hotspot Topic.

Note: looking only at the Leader-count distribution has its limitations; contributions of further hotspot definitions and code are welcome.

Dynamic configuration for Topic partition hotspots (page: OPS Management -> Platform Management -> Configuration Management):

Configuration key:
```
REGION_HOT_TOPIC_CONFIG
```

Configuration value:
```json
{
    "maxDisPartitionNum": 2,         # a Topic is considered hot when the leader-count gap between Brokers in the Region exceeds 2
    "minTopicBytesInUnitB": 1048576, # Topics with traffic below this value are not counted
    "ignoreClusterIdList": [         # clusters to ignore
        50
    ]
}
```
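For illustration, here is a minimal sketch of the check this configuration drives (the class, method, and parameter names are hypothetical; the actual KnowStreaming implementation may differ):

```java
import java.util.Collection;

public class HotTopicCheckSketch {
    /**
     * leaderCounts: the Topic's leader count on each Broker of the Region.
     * Mirrors REGION_HOT_TOPIC_CONFIG: maxDisPartitionNum and minTopicBytesInUnitB.
     */
    public static boolean isHotTopic(Collection<Integer> leaderCounts,
                                     long topicBytesIn,
                                     int maxDisPartitionNum,
                                     long minTopicBytesInUnitB) {
        if (topicBytesIn < minTopicBytesInUnitB || leaderCounts.isEmpty()) {
            return false; // low-traffic Topics are not counted
        }
        int max = leaderCounts.stream().mapToInt(Integer::intValue).max().getAsInt();
        int min = leaderCounts.stream().mapToInt(Integer::intValue).min().getAsInt();
        // hot when the leader-count gap across the Region's Brokers exceeds the threshold
        return max - min > maxDisPartitionNum;
    }
}
```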

---

## 3. Expert service: insufficient Topic partitions

When a Topic's total traffic divided by its partition count exceeds a configured value, the Topic is considered to have insufficient partitions.

Dynamic configuration for insufficient Topic partitions (page: OPS Management -> Platform Management -> Configuration Management):

Configuration key:
```
TOPIC_INSUFFICIENT_PARTITION_CONFIG
```

Configuration value:
```json
{
    "maxBytesInPerPartitionUnitB": 3145728, # partitions are considered insufficient when per-partition traffic exceeds this value
    "minTopicBytesInUnitB": 1048576,        # Topics with traffic below this value are not counted
    "ignoreClusterIdList": [                # clusters to ignore
        50
    ]
}
```
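Again for illustration, a sketch of the corresponding check (hypothetical names, same caveat as above):

```java
public class InsufficientPartitionCheckSketch {
    /** Mirrors TOPIC_INSUFFICIENT_PARTITION_CONFIG. */
    public static boolean hasInsufficientPartitions(long topicBytesIn,
                                                    int partitionNum,
                                                    long maxBytesInPerPartitionUnitB,
                                                    long minTopicBytesInUnitB) {
        if (topicBytesIn < minTopicBytesInUnitB || partitionNum <= 0) {
            return false; // low-traffic Topics are not counted
        }
        // insufficient when the average per-partition traffic exceeds the threshold
        return topicBytesIn / partitionNum > maxBytesInPerPartitionUnitB;
    }
}
```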
docs/dev_guide/gateway_config_manager.md (new file, +10 lines)
@@ -0,0 +1,10 @@
---

**A one-stop `Apache Kafka` cluster metrics monitoring and operations management platform**

---

# Kafka-Gateway Configuration Guide
docs/install_guide/install_guide_nginx_cn.md (new file, +94 lines)
@@ -0,0 +1,94 @@
---

**A one-stop `Apache Kafka` cluster metrics monitoring and operations management platform**

---

## nginx configuration installation guide

# I. Standalone deployment

See the [kafka-manager installation guide](install_guide_cn.md).

# II. nginx configuration

## 1. Standalone deployment configuration

```
# nginx root-path access configuration
location / {
    proxy_pass http://ip:port;
}
```

## 2. Front/back-end separation & serving multiple static resources

The following configuration lets `nginx proxy multiple static resources`, separating the project's front end and back end and easing version iteration.

### 1. Download the source

Download the code for the version you need: [GitHub download](https://github.com/didi/Logi-KafkaManager)

### 2. Modify the webpack.config.js configuration file

Edit `webpack.config.js` in the `kafka-manager-console` module.
Everywhere below, <font color='red'>xxxx</font> is the nginx proxy path and the load prefix for the packaged static files; change <font color='red'>xxxx</font> as needed.

```
cd kafka-manager-console
vi webpack.config.js

# publicPath defaults to the root directory; change it to the nginx proxy path.
let publicPath = '/xxxx';
```

### 3. Build

```
npm cache clean --force && npm install
```

PS: if the build errors out, run `npm install clipboard@2.0.6`; otherwise ignore this step.

### 4. Deploy

#### 1. Deploy the front-end static files

The static resources are under `../kafka-manager-web/src/main/resources/templates`

Upload them to a directory of your choice; this demo uses the `root` directory.

#### 2. Upload the jar and start it; see the [kafka-manager installation guide](install_guide_cn.md)

#### 3. Update the nginx configuration

```
location /xxxx {
    # location of the static files
    alias /root/templates;
    try_files $uri $uri/ /xxxx/index.html;
    index index.html;
}

location /api {
    proxy_pass http://ip:port;
}
# /api is the recommended backend prefix; if it conflicts, use the following configuration instead
#location /api/v2 {
#    proxy_pass http://ip:port;
#}
#location /api/v1 {
#    proxy_pass http://ip:port;
#}
```
@@ -7,9 +7,9 @@
---

# FAQ

- 0. Fixing broken images on GitHub
- 0. Which Kafka versions are supported?
- 1. No cluster available to choose when applying for a Topic, creating a monitoring alert, etc.?
- 2. What are logical clusters & Regions for?
- 3. Login fails?
@@ -18,22 +18,16 @@
- 6. How to use `MySQL 8`?
- 7. How to fix `Jmx` connection failures?
- 8. The `topic biz data not exist` error and how to handle it
- 9. How to view the API docs after the process starts
- 10. How to create an alert group?
- 11. Why is there no data for connection info and latency info?
- 12. Why is a logical cluster not visible after its application is approved?

---

### 0. Fixing broken images on GitHub
### 0. Which Kafka versions are supported?

You can `ping github.com` from your local machine to obtain the IP address of `github.com`.

Then bind that IP in the `/etc/hosts` file.

For example

```shell
# add the following entry to /etc/hosts

140.82.113.3 github.com
```
Basically, as long as the Kafka version in use still depends on Zookeeper, that version's main features should be supported.

---

@@ -43,7 +37,7 @@

For creating the logical cluster, see:

- the [kafka-manager cluster onboarding](docs/user_guide/add_cluster/add_cluster.md) guide; the Region and the logical cluster must both be added here.
- the [kafka-manager cluster onboarding](add_cluster/add_cluster.md) guide; the Region and the logical cluster must both be added here.

---

@@ -76,7 +70,7 @@

- 3. Database timezone problems.

Check whether MySQL's topic table has data; if it does, verify that the configured timezone is correct.
Check whether MySQL's topic_metrics table has data; if it does, verify that the configured timezone is correct.

---

@@ -109,3 +103,26 @@

Under `OPS Management -> Cluster List -> Topic Info`, edit the Topic whose permission is being applied for and select an application for it.

The above only covers a single Topic. If you have a great many Topics to initialize, you can instead add a configuration in configuration management that periodically syncs ownerless Topics; see [Dynamic Configuration Management - 1. Scheduled Topic sync task](../dev_guide/dynamic_config_manager.md)

---

### 9. How to view the API docs after the process starts

- DiDi Logi-KafkaManager documents its APIs with Swagger-API. Swagger-API address: [http://IP:PORT/swagger-ui.html#/](http://IP:PORT/swagger-ui.html#/)

### 10. How to create an alert group?

This works together with a monitoring system. Integration with Nightingale (N9e) is provided by default; you can also integrate your own in-house monitoring system, though a few interfaces must be implemented.

Details: [integrating monitoring with N9e](../dev_guide/monitor_system_integrate_with_n9e.md), [integrating monitoring with other systems](../dev_guide/monitor_system_integrate_with_self.md)

### 11. Why is there no data for connection info and latency info?

These features only produce data together with DiDi's internal kafka-gateway, which is not yet open source.

### 12. Why is a logical cluster not visible after its application is approved?

The logical-cluster application and approval is only a work-order flow; it does not actually create the logical cluster, which must still be created manually.

See: [kafka-manager cluster onboarding](add_cluster/add_cluster.md).
@@ -47,4 +47,13 @@ public enum AccountRoleEnum {
        }
        return AccountRoleEnum.UNKNOWN;
    }

    public static AccountRoleEnum getUserRoleEnum(String roleName) {
        for (AccountRoleEnum elem: AccountRoleEnum.values()) {
            if (elem.message.equalsIgnoreCase(roleName)) {
                return elem;
            }
        }
        return AccountRoleEnum.UNKNOWN;
    }
}
@@ -1,45 +0,0 @@
package com.xiaojukeji.kafka.manager.common.bizenum;

/**
 * Whether to report to the monitoring system
 * @author zengqiao
 * @date 20/9/25
 */
public enum SinkMonitorSystemEnum {
    SINK_MONITOR_SYSTEM(0, "上报监控系统"),
    NOT_SINK_MONITOR_SYSTEM(1, "不上报监控系统"),
    ;

    private Integer code;

    private String message;

    SinkMonitorSystemEnum(Integer code, String message) {
        this.code = code;
        this.message = message;
    }

    public Integer getCode() {
        return code;
    }

    public void setCode(Integer code) {
        this.code = code;
    }

    public String getMessage() {
        return message;
    }

    public void setMessage(String message) {
        this.message = message;
    }

    @Override
    public String toString() {
        return "SinkMonitorSystemEnum{" +
                "code=" + code +
                ", message='" + message + '\'' +
                '}';
    }
}
@@ -7,18 +7,18 @@ package com.xiaojukeji.kafka.manager.common.constant;
 */
public class ApiPrefix {
    public static final String API_PREFIX = "/api/";
    public static final String API_V1_PREFIX = API_PREFIX + "v1/";
    public static final String API_V2_PREFIX = API_PREFIX + "v2/";
    private static final String API_V1_PREFIX = API_PREFIX + "v1/";

    // login
    public static final String API_V1_SSO_PREFIX = API_V1_PREFIX + "sso/";

    // console
    public static final String API_V1_SSO_PREFIX = API_V1_PREFIX + "sso/";
    public static final String API_V1_NORMAL_PREFIX = API_V1_PREFIX + "normal/";
    public static final String API_V1_RD_PREFIX = API_V1_PREFIX + "rd/";
    public static final String API_V1_OP_PREFIX = API_V1_PREFIX + "op/";

    // open
    public static final String API_V1_THIRD_PART_PREFIX = API_V1_PREFIX + "third-part/";
    public static final String API_V2_THIRD_PART_PREFIX = API_V2_PREFIX + "third-part/";

    // gateway
    public static final String GATEWAY_API_V1_PREFIX = "/gateway" + API_V1_PREFIX;
@@ -106,6 +106,7 @@ public enum ResultStatus {
    STORAGE_UPLOAD_FILE_FAILED(8050, "upload file failed"),
    STORAGE_FILE_TYPE_NOT_SUPPORT(8051, "File type not support"),
    STORAGE_DOWNLOAD_FILE_FAILED(8052, "download file failed"),
    LDAP_AUTHENTICATION_FAILED(8053, "ldap authentication failed"),

    ;
@@ -1,6 +1,7 @@
package com.xiaojukeji.kafka.manager.common.entity.pojo;

import java.util.Date;
import java.util.Objects;

/**
 * @author zengqiao
@@ -116,4 +117,22 @@ public class ClusterDO implements Comparable<ClusterDO> {
    public int compareTo(ClusterDO clusterDO) {
        return this.id.compareTo(clusterDO.id);
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        ClusterDO clusterDO = (ClusterDO) o;
        return Objects.equals(id, clusterDO.id)
                && Objects.equals(clusterName, clusterDO.clusterName)
                && Objects.equals(zookeeper, clusterDO.zookeeper)
                && Objects.equals(bootstrapServers, clusterDO.bootstrapServers)
                && Objects.equals(securityProperties, clusterDO.securityProperties)
                && Objects.equals(jmxProperties, clusterDO.jmxProperties);
    }

    @Override
    public int hashCode() {
        return Objects.hash(id, clusterName, zookeeper, bootstrapServers, securityProperties, jmxProperties);
    }
}
@@ -1,6 +1,6 @@
{
    "name": "mobx-ts-example",
    "version": "1.0.0",
    "name": "logi-kafka",
    "version": "2.3.1",
    "description": "",
    "scripts": {
        "start": "webpack-dev-server",
@@ -21,7 +21,7 @@
    "@types/spark-md5": "^3.0.2",
    "antd": "^3.26.15",
    "clean-webpack-plugin": "^3.0.0",
    "clipboard": "^2.0.6",
    "clipboard": "2.0.6",
    "cross-env": "^7.0.2",
    "css-loader": "^2.1.0",
    "echarts": "^4.5.0",
@@ -56,4 +56,4 @@
    "dependencies": {
        "format-to-json": "^1.0.4"
    }
}
}
@@ -1,8 +1,8 @@
package com.xiaojukeji.kafka.manager.service.cache;

import com.alibaba.fastjson.JSONObject;
import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO;
import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
import com.xiaojukeji.kafka.manager.common.utils.factory.KafkaConsumerFactory;
import kafka.admin.AdminClient;
import org.apache.commons.pool2.impl.GenericObjectPool;
@@ -103,6 +103,21 @@ public class KafkaClientPool {
        }
    }

    public static void closeKafkaConsumerPool(Long clusterId) {
        lock.lock();
        try {
            GenericObjectPool<KafkaConsumer> objectPool = KAFKA_CONSUMER_POOL.remove(clusterId);
            if (objectPool == null) {
                return;
            }
            objectPool.close();
        } catch (Exception e) {
            LOGGER.error("close kafka consumer pool failed, clusterId:{}.", clusterId, e);
        } finally {
            lock.unlock();
        }
    }

    public static KafkaConsumer borrowKafkaConsumerClient(ClusterDO clusterDO) {
        if (ValidateUtils.isNull(clusterDO)) {
            return null;
@@ -132,7 +147,11 @@ public class KafkaClientPool {
        if (ValidateUtils.isNull(objectPool)) {
            return;
        }
        objectPool.returnObject(kafkaConsumer);
        try {
            objectPool.returnObject(kafkaConsumer);
        } catch (Exception e) {
            LOGGER.error("return kafka consumer client failed, clusterId:{}", physicalClusterId, e);
        }
    }

    public static AdminClient getAdminClient(Long clusterId) {
@@ -4,21 +4,23 @@ import com.xiaojukeji.kafka.manager.common.bizenum.KafkaBrokerRoleEnum;
import com.xiaojukeji.kafka.manager.common.constant.Constant;
import com.xiaojukeji.kafka.manager.common.constant.KafkaConstant;
import com.xiaojukeji.kafka.manager.common.entity.KafkaVersion;
import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO;
import com.xiaojukeji.kafka.manager.common.utils.JsonUtils;
import com.xiaojukeji.kafka.manager.common.utils.ListUtils;
import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO;
import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
import com.xiaojukeji.kafka.manager.common.utils.jmx.JmxConfig;
import com.xiaojukeji.kafka.manager.common.zookeeper.znode.brokers.BrokerMetadata;
import com.xiaojukeji.kafka.manager.common.zookeeper.znode.ControllerData;
import com.xiaojukeji.kafka.manager.common.zookeeper.znode.brokers.TopicMetadata;
import com.xiaojukeji.kafka.manager.common.zookeeper.ZkConfigImpl;
import com.xiaojukeji.kafka.manager.dao.ControllerDao;
import com.xiaojukeji.kafka.manager.common.utils.jmx.JmxConnectorWrap;
import com.xiaojukeji.kafka.manager.service.service.JmxService;
import com.xiaojukeji.kafka.manager.service.zookeeper.*;
import com.xiaojukeji.kafka.manager.service.service.ClusterService;
import com.xiaojukeji.kafka.manager.common.zookeeper.ZkConfigImpl;
import com.xiaojukeji.kafka.manager.common.zookeeper.ZkPathUtil;
import com.xiaojukeji.kafka.manager.common.zookeeper.znode.ControllerData;
import com.xiaojukeji.kafka.manager.common.zookeeper.znode.brokers.BrokerMetadata;
import com.xiaojukeji.kafka.manager.common.zookeeper.znode.brokers.TopicMetadata;
import com.xiaojukeji.kafka.manager.dao.ControllerDao;
import com.xiaojukeji.kafka.manager.service.service.ClusterService;
import com.xiaojukeji.kafka.manager.service.service.JmxService;
import com.xiaojukeji.kafka.manager.service.zookeeper.BrokerStateListener;
import com.xiaojukeji.kafka.manager.service.zookeeper.ControllerStateListener;
import com.xiaojukeji.kafka.manager.service.zookeeper.TopicStateListener;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
@@ -160,8 +162,12 @@ public class PhysicalClusterMetadataManager {
        CLUSTER_MAP.remove(clusterId);
    }

    public Set<Long> getClusterIdSet() {
        return CLUSTER_MAP.keySet();
    public static Map<Long, ClusterDO> getClusterMap() {
        return CLUSTER_MAP;
    }

    public static void updateClusterMap(ClusterDO clusterDO) {
        CLUSTER_MAP.put(clusterDO.getId(), clusterDO);
    }

    public static ClusterDO getClusterFromCache(Long clusterId) {
@@ -4,7 +4,6 @@ import com.xiaojukeji.kafka.manager.common.entity.Result;
import com.xiaojukeji.kafka.manager.common.entity.ResultStatus;
import com.xiaojukeji.kafka.manager.common.entity.ao.ClusterDetailDTO;
import com.xiaojukeji.kafka.manager.common.entity.ao.cluster.ControllerPreferredCandidate;
import com.xiaojukeji.kafka.manager.common.entity.dto.op.ControllerPreferredCandidateDTO;
import com.xiaojukeji.kafka.manager.common.entity.vo.normal.cluster.ClusterNameDTO;
import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO;
import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterMetricsDO;
@@ -1,7 +1,6 @@
package com.xiaojukeji.kafka.manager.service.service;

import com.xiaojukeji.kafka.manager.common.entity.ResultStatus;
import com.xiaojukeji.kafka.manager.common.entity.dto.rd.RegionDTO;
import com.xiaojukeji.kafka.manager.common.entity.pojo.RegionDO;

import java.util.List;
@@ -340,10 +340,6 @@ public class AdminServiceImpl implements AdminService {
    @Override
    public ResultStatus modifyTopicConfig(ClusterDO clusterDO, String topicName, Properties properties, String operator) {
        ResultStatus rs = TopicCommands.modifyTopicConfig(clusterDO, topicName, properties);
        if (!ResultStatus.SUCCESS.equals(rs)) {
            return rs;
        }

        return rs;
    }
}
@@ -205,21 +205,31 @@ public class ClusterServiceImpl implements ClusterService {
    }

    private boolean isZookeeperLegal(String zookeeper) {
        boolean status = false;

        ZooKeeper zk = null;
        try {
            zk = new ZooKeeper(zookeeper, 1000, null);
        } catch (Throwable t) {
            return false;
            for (int i = 0; i < 15; ++i) {
                if (zk.getState().isConnected()) {
                    // the address is legal only when the state is connected
                    status = true;
                    break;
                }
                Thread.sleep(1000);
            }
        } catch (Exception e) {
            LOGGER.error("class=ClusterServiceImpl||method=isZookeeperLegal||zookeeper={}||msg=zk address illegal||errMsg={}", zookeeper, e.getMessage());
        } finally {
            try {
                if (zk != null) {
                    zk.close();
                }
            } catch (Exception e) {
                return false;
                LOGGER.error("class=ClusterServiceImpl||method=isZookeeperLegal||zookeeper={}||msg=close zk client failed||errMsg={}", zookeeper, e.getMessage());
            }
        }
        return true;
        return status;
    }

    @Override
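Worth noting in this hunk: the `ZooKeeper` constructor returns before a session is actually established, so the old check treated any address whose constructor did not throw as legal. The new version polls the connection state up to 15 times at one-second intervals and accepts the address only once the state becomes connected.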
@@ -8,7 +8,6 @@ import com.xiaojukeji.kafka.manager.common.entity.ao.consumer.ConsumeDetailDTO;
import com.xiaojukeji.kafka.manager.common.entity.ao.consumer.ConsumerGroup;
import com.xiaojukeji.kafka.manager.common.entity.ao.consumer.ConsumerGroupSummary;
import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO;
import com.xiaojukeji.kafka.manager.common.utils.ListUtils;
import com.xiaojukeji.kafka.manager.common.zookeeper.znode.brokers.TopicMetadata;
import com.xiaojukeji.kafka.manager.common.entity.ao.PartitionOffsetDTO;
import com.xiaojukeji.kafka.manager.common.exception.ConfigException;
@@ -16,5 +16,5 @@ public interface LoginService {
    void logout(HttpServletRequest request, HttpServletResponse response, Boolean needJump2LoginPage);

    boolean checkLogin(HttpServletRequest request, HttpServletResponse response);
    boolean checkLogin(HttpServletRequest request, HttpServletResponse response, String classRequestMappingValue);
}
@@ -0,0 +1,130 @@
package com.xiaojukeji.kafka.manager.account.component.ldap;

import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

import javax.naming.AuthenticationException;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.NamingException;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;
import javax.naming.ldap.InitialLdapContext;
import javax.naming.ldap.LdapContext;
import java.util.Hashtable;

@Component
public class LdapAuthentication {
    private static final Logger LOGGER = LoggerFactory.getLogger(LdapAuthentication.class);

    @Value(value = "${account.ldap.url:}")
    private String ldapUrl;

    @Value(value = "${account.ldap.basedn:}")
    private String ldapBasedn;

    @Value(value = "${account.ldap.factory:}")
    private String ldapFactory;

    @Value(value = "${account.ldap.filter:}")
    private String ldapFilter;

    @Value(value = "${account.ldap.security.authentication:}")
    private String securityAuthentication;

    @Value(value = "${account.ldap.security.principal:}")
    private String securityPrincipal;

    @Value(value = "${account.ldap.security.credentials:}")
    private String securityCredentials;

    private LdapContext getLdapContext() {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, ldapFactory);
        env.put(Context.PROVIDER_URL, ldapUrl + ldapBasedn);
        env.put(Context.SECURITY_AUTHENTICATION, securityAuthentication);

        // if no principal and credentials are specified here, this automatically becomes an anonymous bind
        env.put(Context.SECURITY_PRINCIPAL, securityPrincipal);
        env.put(Context.SECURITY_CREDENTIALS, securityCredentials);
        try {
            return new InitialLdapContext(env, null);
        } catch (AuthenticationException e) {
            LOGGER.warn("class=LdapAuthentication||method=getLdapContext||errMsg={}", e);
        } catch (Exception e) {
            LOGGER.error("class=LdapAuthentication||method=getLdapContext||errMsg={}", e);
        }
        return null;
    }

    private String getUserDN(String account, LdapContext ctx) {
        String userDN = "";
        try {
            SearchControls constraints = new SearchControls();
            constraints.setSearchScope(SearchControls.SUBTREE_SCOPE);
            String filter = "(&(objectClass=*)(" + ldapFilter + "=" + account + "))";

            NamingEnumeration<SearchResult> en = ctx.search("", filter, constraints);
            if (en == null || !en.hasMoreElements()) {
                return "";
            }
            // maybe more than one element
            while (en.hasMoreElements()) {
                Object obj = en.nextElement();
                if (obj instanceof SearchResult) {
                    SearchResult si = (SearchResult) obj;
                    userDN += si.getName();
                    userDN += "," + ldapBasedn;
                    break;
                }
            }
        } catch (Exception e) {
            LOGGER.error("class=LdapAuthentication||method=getUserDN||account={}||errMsg={}", account, e);
        }
        return userDN;
    }

    /**
     * LDAP account/password authentication
     * @param account
     * @param password
     * @return
     */
    public boolean authenticate(String account, String password) {
        LdapContext ctx = getLdapContext();
        if (ValidateUtils.isNull(ctx)) {
            return false;
        }

        try {
            String userDN = getUserDN(account, ctx);
            if (ValidateUtils.isBlank(userDN)) {
                return false;
            }

            ctx.addToEnvironment(Context.SECURITY_PRINCIPAL, userDN);
            ctx.addToEnvironment(Context.SECURITY_CREDENTIALS, password);
            ctx.reconnect(null);

            return true;
        } catch (AuthenticationException e) {
            LOGGER.warn("class=LdapAuthentication||method=authenticate||account={}||errMsg={}", account, e);
        } catch (NamingException e) {
            LOGGER.warn("class=LdapAuthentication||method=authenticate||account={}||errMsg={}", account, e);
        } catch (Exception e) {
            LOGGER.error("class=LdapAuthentication||method=authenticate||account={}||errMsg={}", account, e);
        } finally {
            if (ctx != null) {
                try {
                    ctx.close();
                } catch (NamingException e) {
                    LOGGER.error("class=LdapAuthentication||method=authenticate||account={}||errMsg={}", account, e);
                }
            }
        }
        return false;
    }
}
@@ -2,13 +2,17 @@ package com.xiaojukeji.kafka.manager.account.component.sso;
import com.xiaojukeji.kafka.manager.account.AccountService;
import com.xiaojukeji.kafka.manager.account.component.AbstractSingleSignOn;
import com.xiaojukeji.kafka.manager.common.bizenum.AccountRoleEnum;
import com.xiaojukeji.kafka.manager.common.constant.LoginConstant;
import com.xiaojukeji.kafka.manager.common.entity.Result;
import com.xiaojukeji.kafka.manager.common.entity.ResultStatus;
import com.xiaojukeji.kafka.manager.common.entity.dto.normal.LoginDTO;
import com.xiaojukeji.kafka.manager.common.entity.pojo.AccountDO;
import com.xiaojukeji.kafka.manager.common.utils.EncryptUtil;
import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
import com.xiaojukeji.kafka.manager.account.component.ldap.LdapAuthentication;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;

import javax.servlet.http.HttpServletRequest;
@@ -23,12 +27,48 @@ public class BaseSessionSignOn extends AbstractSingleSignOn {
    @Autowired
    private AccountService accountService;

    @Autowired
    private LdapAuthentication ldapAuthentication;

    // whether LDAP authentication is enabled
    @Value(value = "${account.ldap.enabled:}")
    private Boolean accountLdapEnabled;

    // default role for LDAP auto-registration; note that it should normally be a low-privilege role
    @Value(value = "${account.ldap.auth-user-registration-role:}")
    private String authUserRegistrationRole;

    // whether LDAP auto-registration is enabled
    @Value(value = "${account.ldap.auth-user-registration:}")
    private boolean authUserRegistration;

    @Override
    public Result<String> loginAndGetLdap(HttpServletRequest request, HttpServletResponse response, LoginDTO dto) {
        if (ValidateUtils.isBlank(dto.getUsername()) || ValidateUtils.isNull(dto.getPassword())) {
            return null;
            return Result.buildFailure("Missing parameters");
        }

        Result<AccountDO> accountResult = accountService.getAccountDO(dto.getUsername());

        // if LDAP authentication is enabled, LDAP can also be used for authentication
        if (!ValidateUtils.isNull(accountLdapEnabled) && accountLdapEnabled) {
            // verify the account and password against LDAP
            if (!ldapAuthentication.authenticate(dto.getUsername(), dto.getPassword())) {
                return Result.buildFrom(ResultStatus.LDAP_AUTHENTICATION_FAILED);
            }

            if ((ValidateUtils.isNull(accountResult) || ValidateUtils.isNull(accountResult.getData())) && authUserRegistration) {
                // auto-register
                AccountDO accountDO = new AccountDO();
                accountDO.setUsername(dto.getUsername());
                accountDO.setRole(AccountRoleEnum.getUserRoleEnum(authUserRegistrationRole).getRole());
                accountDO.setPassword(dto.getPassword());
                accountService.createAccount(accountDO);
            }

            return Result.buildSuc(dto.getUsername());
        }

        if (ValidateUtils.isNull(accountResult) || accountResult.failed()) {
            return new Result<>(accountResult.getCode(), accountResult.getMessage());
        }
@@ -64,4 +104,4 @@ public class BaseSessionSignOn extends AbstractSingleSignOn {
        response.setStatus(AbstractSingleSignOn.REDIRECT_CODE);
        response.addHeader(AbstractSingleSignOn.HEADER_REDIRECT_KEY, "");
    }
}
}
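To summarize the new login flow: when `account.ldap.enabled` is true, credentials are first verified against LDAP; a first-time LDAP user is then auto-registered locally (when `auth-user-registration` is enabled) with the role named by `auth-user-registration-role`, and login succeeds without consulting the local password store. Otherwise the original local-account path is taken.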
@@ -63,12 +63,17 @@ public class LoginServiceImpl implements LoginService {
    }

    @Override
    public boolean checkLogin(HttpServletRequest request, HttpServletResponse response) {
        String uri = request.getRequestURI();
        if (!(uri.contains(ApiPrefix.API_V1_NORMAL_PREFIX)
                || uri.contains(ApiPrefix.API_V1_RD_PREFIX)
                || uri.contains(ApiPrefix.API_V1_OP_PREFIX))) {
            // whitelisted APIs skip the login check
    public boolean checkLogin(HttpServletRequest request, HttpServletResponse response, String classRequestMappingValue) {
        if (ValidateUtils.isNull(classRequestMappingValue)) {
            LOGGER.error("class=LoginServiceImpl||method=checkLogin||msg=uri illegal||uri={}", request.getRequestURI());
            singleSignOn.setRedirectToLoginPage(response);
            return false;
        }

        if (classRequestMappingValue.equals(ApiPrefix.API_V1_SSO_PREFIX)
                || classRequestMappingValue.equals(ApiPrefix.API_V1_THIRD_PART_PREFIX)
                || classRequestMappingValue.equals(ApiPrefix.GATEWAY_API_V1_PREFIX)) {
            // whitelisted APIs return true directly
            return true;
        }

@@ -79,7 +84,7 @@ public class LoginServiceImpl implements LoginService {
            return false;
        }

        boolean status = checkAuthority(request, accountService.getAccountRoleFromCache(username));
        boolean status = checkAuthority(classRequestMappingValue, accountService.getAccountRoleFromCache(username));
        if (status) {
            HttpSession session = request.getSession();
            session.setAttribute(LoginConstant.SESSION_USERNAME_KEY, username);
@@ -89,19 +94,18 @@ public class LoginServiceImpl implements LoginService {
            return false;
        }

    private boolean checkAuthority(HttpServletRequest request, AccountRoleEnum accountRoleEnum) {
        String uri = request.getRequestURI();
        if (uri.contains(ApiPrefix.API_V1_NORMAL_PREFIX)) {
    private boolean checkAuthority(String classRequestMappingValue, AccountRoleEnum accountRoleEnum) {
        if (classRequestMappingValue.equals(ApiPrefix.API_V1_NORMAL_PREFIX)) {
            // normal APIs are accessible to everyone
            return true;
        }

        if (uri.contains(ApiPrefix.API_V1_RD_PREFIX)) {
            // RD APIs are accessible to OP or RD
        if (classRequestMappingValue.equals(ApiPrefix.API_V1_RD_PREFIX)) {
            // RD APIs, accessible to OP or RD
            return AccountRoleEnum.RD.equals(accountRoleEnum) || AccountRoleEnum.OP.equals(accountRoleEnum);
        }

        if (uri.contains(ApiPrefix.API_V1_OP_PREFIX)) {
        if (classRequestMappingValue.equals(ApiPrefix.API_V1_OP_PREFIX)) {
            // OP APIs are accessible to OP only
            return AccountRoleEnum.OP.equals(accountRoleEnum);
        }
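The net effect of this hunk: authorization now compares the controller's class-level @RequestMapping prefix for equality instead of substring-matching the request URI, so a URI that merely contains a privileged prefix no longer matches the wrong rule, and requests with an unknown prefix are rejected outright.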
@@ -5,6 +5,8 @@ import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
import com.xiaojukeji.kafka.manager.monitor.common.entry.*;
import com.xiaojukeji.kafka.manager.monitor.component.n9e.entry.*;
import com.xiaojukeji.kafka.manager.monitor.component.n9e.entry.bizenum.CategoryEnum;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.*;

@@ -13,6 +15,8 @@ import java.util.*;
 * @date 20/8/26
 */
public class N9eConverter {
    private static final Logger LOGGER = LoggerFactory.getLogger(N9eConverter.class);

    public static List<N9eMetricSinkPoint> convert2N9eMetricSinkPointList(String nid, List<MetricSinkPoint> pointList) {
        if (pointList == null || pointList.isEmpty()) {
            return new ArrayList<>();
@@ -98,8 +102,8 @@ public class N9eConverter {

        n9eStrategy.setNotify_user(new ArrayList<>());
        n9eStrategy.setCallback(strategyAction.getCallback());
        n9eStrategy.setEnable_stime("00:00");
        n9eStrategy.setEnable_etime("23:59");
        n9eStrategy.setEnable_stime(String.format("%02d:00", ListUtils.string2IntList(strategy.getPeriodHoursOfDay()).stream().distinct().min((e1, e2) -> e1.compareTo(e2)).get()));
        n9eStrategy.setEnable_etime(String.format("%02d:59", ListUtils.string2IntList(strategy.getPeriodHoursOfDay()).stream().distinct().max((e1, e2) -> e1.compareTo(e2)).get()));
        n9eStrategy.setEnable_days_of_week(ListUtils.string2IntList(strategy.getPeriodDaysOfWeek()));

        n9eStrategy.setNeed_upgrade(0);
@@ -120,6 +124,15 @@ public class N9eConverter {
        return strategyList;
    }

    private static Integer getEnableHour(String enableTime) {
        try {
            return Integer.valueOf(enableTime.split(":")[0]);
        } catch (Exception e) {
            LOGGER.warn("class=N9eConverter||method=getEnableHour||enableTime={}||errMsg={}", enableTime, e.getMessage());
        }
        return null;
    }

    public static Strategy convert2Strategy(N9eStrategy n9eStrategy, Map<String, NotifyGroup> notifyGroupMap) {
        if (n9eStrategy == null) {
            return null;
@@ -137,7 +150,16 @@ public class N9eConverter {
        strategy.setId(n9eStrategy.getId().longValue());
        strategy.setName(n9eStrategy.getName());
        strategy.setPriority(n9eStrategy.getPriority());
        strategy.setPeriodHoursOfDay("0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23");

        List<Integer> hourList = new ArrayList<>();
        Integer startHour = N9eConverter.getEnableHour(n9eStrategy.getEnable_stime());
        Integer endHour = N9eConverter.getEnableHour(n9eStrategy.getEnable_etime());
        if (!(ValidateUtils.isNullOrLessThanZero(startHour) || ValidateUtils.isNullOrLessThanZero(endHour) || endHour < startHour)) {
            for (Integer hour = startHour; hour <= endHour; ++hour) {
                hourList.add(hour);
            }
        }
        strategy.setPeriodHoursOfDay(ListUtils.intList2String(hourList));
        strategy.setPeriodDaysOfWeek(ListUtils.intList2String(n9eStrategy.getEnable_days_of_week()));

        List<StrategyExpression> strategyExpressionList = new ArrayList<>();
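A worked example of this window mapping (illustrative values): a strategy whose periodHoursOfDay is "8,9,10" is now sunk to N9e with enable_stime "08:00" and enable_etime "10:59"; reading the strategy back, getEnableHour extracts 8 and 10 and the loop reconstructs "8,9,10" instead of the previous hard-coded all-day window.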
@@ -125,7 +125,7 @@ public class SyncTopic2DB extends AbstractScheduledTask<EmptyEntry> {
        if (ValidateUtils.isNull(syncTopic2DBConfig.isAddAuthority()) || !syncTopic2DBConfig.isAddAuthority()) {
            // authority info is not to be added, so skip this Topic
            return;
            continue;
        }

        // TODO adding the Topic and adding the Authority are currently non-transactional; an exception in between corrupts the data, to be improved later
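Note the return → continue fix above: a Topic that skips authority creation no longer aborts the entire scheduled sync; the task now moves on to the next Topic.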
@@ -1,15 +1,17 @@
package com.xiaojukeji.kafka.manager.task.schedule.metadata;

import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO;
import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
import com.xiaojukeji.kafka.manager.service.cache.KafkaClientPool;
import com.xiaojukeji.kafka.manager.service.cache.PhysicalClusterMetadataManager;
import com.xiaojukeji.kafka.manager.service.service.ClusterService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

/**
 * @author zengqiao
@@ -25,24 +27,63 @@ public class FlushClusterMetadata {

    @Scheduled(cron="0/30 * * * * ?")
    public void flush() {
        List<ClusterDO> doList = clusterService.list();
        Map<Long, ClusterDO> dbClusterMap = clusterService.list().stream().collect(Collectors.toMap(ClusterDO::getId, Function.identity(), (key1, key2) -> key2));

        Set<Long> newClusterIdSet = new HashSet<>();
        Set<Long> oldClusterIdSet = physicalClusterMetadataManager.getClusterIdSet();
        for (ClusterDO clusterDO: doList) {
            newClusterIdSet.add(clusterDO.getId());
        Map<Long, ClusterDO> cacheClusterMap = PhysicalClusterMetadataManager.getClusterMap();

            // add cluster
            physicalClusterMetadataManager.addNew(clusterDO);
        }
        // newly added clusters
        for (ClusterDO clusterDO: dbClusterMap.values()) {
            if (cacheClusterMap.containsKey(clusterDO.getId())) {
                // already exists
                continue;
            }
            add(clusterDO);
        }

        for (Long clusterId: oldClusterIdSet) {
            if (newClusterIdSet.contains(clusterId)) {
                continue;
            }
        // removed clusters
        for (ClusterDO clusterDO: cacheClusterMap.values()) {
            if (dbClusterMap.containsKey(clusterDO.getId())) {
                // already exists
                continue;
            }
            remove(clusterDO.getId());
        }

            // remove cluster
            physicalClusterMetadataManager.remove(clusterId);
        }
        // clusters whose configuration was modified
        for (ClusterDO dbClusterDO: dbClusterMap.values()) {
            ClusterDO cacheClusterDO = cacheClusterMap.get(dbClusterDO.getId());
            if (ValidateUtils.anyNull(cacheClusterDO) || dbClusterDO.equals(cacheClusterDO)) {
                // absent || equal
                continue;
            }
            modifyConfig(dbClusterDO);
        }
    }

    private void add(ClusterDO clusterDO) {
        if (ValidateUtils.anyNull(clusterDO)) {
            return;
        }
        physicalClusterMetadataManager.addNew(clusterDO);
    }

    private void modifyConfig(ClusterDO clusterDO) {
        if (ValidateUtils.anyNull(clusterDO)) {
            return;
        }
        PhysicalClusterMetadataManager.updateClusterMap(clusterDO);
        KafkaClientPool.closeKafkaConsumerPool(clusterDO.getId());
    }

    private void remove(Long clusterId) {
        if (ValidateUtils.anyNull(clusterId)) {
            return;
        }
        // remove cached info
        physicalClusterMetadataManager.remove(clusterId);

        // clear the client pool
        KafkaClientPool.closeKafkaConsumerPool(clusterId);
    }

}
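In short, the rewritten flush() reconciles three cases on every 30-second tick: clusters in the DB but not the cache are added; clusters in the cache but not the DB are removed; and clusters present in both whose ClusterDO no longer compares equal (hence the new equals/hashCode on ClusterDO) have their cached config updated and their consumer pool closed, so clients are rebuilt with the new settings.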
@@ -1,5 +1,6 @@
package com.xiaojukeji.kafka.manager.web.api;

import com.xiaojukeji.kafka.manager.common.constant.ApiPrefix;
import com.xiaojukeji.kafka.manager.common.entity.Result;
import io.swagger.annotations.Api;
import io.swagger.annotations.ApiOperation;
@@ -14,9 +15,9 @@ import springfox.documentation.annotations.ApiIgnore;
 * @date 20/6/18
 */
@ApiIgnore
@Api(description = "web应用探活接口(REST)")
@Api(tags = "web应用探活接口(REST)")
@RestController
@RequestMapping("api/")
@RequestMapping(ApiPrefix.API_V1_THIRD_PART_PREFIX)
public class HealthController {

    @ApiIgnore
@@ -9,7 +9,6 @@ import com.xiaojukeji.kafka.manager.common.entity.vo.common.AccountSummaryVO;
import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
import com.xiaojukeji.kafka.manager.common.utils.SpringTool;
import com.xiaojukeji.kafka.manager.common.constant.ApiPrefix;
import com.xiaojukeji.kafka.manager.web.api.versionone.gateway.GatewayHeartbeatController;
import io.swagger.annotations.Api;
import io.swagger.annotations.ApiOperation;
import org.slf4j.Logger;
@@ -62,4 +61,4 @@ public class NormalAccountController {
        AccountRoleEnum accountRoleEnum = accountService.getAccountRoleFromCache(username);
        return new Result<>(new AccountRoleVO(username, accountRoleEnum.getRole()));
    }
}
}
@@ -7,7 +7,6 @@ import com.xiaojukeji.kafka.manager.common.entity.ResultStatus;
import com.xiaojukeji.kafka.manager.common.entity.metrics.BrokerMetrics;
import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
import com.xiaojukeji.kafka.manager.common.zookeeper.znode.brokers.BrokerMetadata;
import com.xiaojukeji.kafka.manager.openapi.common.vo.ThirdPartBrokerOverviewVO;
import com.xiaojukeji.kafka.manager.service.cache.PhysicalClusterMetadataManager;
import com.xiaojukeji.kafka.manager.service.service.BrokerService;
import io.swagger.annotations.Api;
@@ -52,4 +51,4 @@ public class ThirdPartClusterController {

        return new Result<>(underReplicated.equals(0));
    }
}
}
@@ -1,8 +1,13 @@
package com.xiaojukeji.kafka.manager.web.inteceptor;

import com.xiaojukeji.kafka.manager.account.LoginService;
import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.method.HandlerMethod;
import org.springframework.web.servlet.HandlerInterceptor;

import javax.servlet.http.HttpServletRequest;
@@ -15,6 +20,8 @@ import javax.servlet.http.HttpServletResponse;
 */
@Component
public class PermissionInterceptor implements HandlerInterceptor {
    private static final Logger LOGGER = LoggerFactory.getLogger(PermissionInterceptor.class);

    @Autowired
    private LoginService loginService;

@@ -28,6 +35,31 @@ public class PermissionInterceptor implements HandlerInterceptor {
    public boolean preHandle(HttpServletRequest request,
                             HttpServletResponse response,
                             Object handler) throws Exception {
        return loginService.checkLogin(request, response);

        String classRequestMappingValue = null;
        try {
            classRequestMappingValue = getClassRequestMappingValue(handler);
        } catch (Exception e) {
            LOGGER.error("class=PermissionInterceptor||method=preHandle||uri={}||msg=parse class request-mapping failed", request.getRequestURI(), e);
        }
        return loginService.checkLogin(request, response, classRequestMappingValue);
    }

    private String getClassRequestMappingValue(Object handler) {
        RequestMapping classRM = null;
        if (handler instanceof HandlerMethod) {
            HandlerMethod hm = (HandlerMethod) handler;
            classRM = hm.getMethod().getDeclaringClass().getAnnotation(RequestMapping.class);
        } else if (handler instanceof org.springframework.web.servlet.mvc.Controller) {
            org.springframework.web.servlet.mvc.Controller hm = (org.springframework.web.servlet.mvc.Controller) handler;
            Class<? extends org.springframework.web.servlet.mvc.Controller> hmClass = hm.getClass();
            classRM = hmClass.getAnnotation(RequestMapping.class);
        } else {
            classRM = handler.getClass().getAnnotation(RequestMapping.class);
        }
        if (ValidateUtils.isNull(classRM) || classRM.value().length == 0) {
            return null;
        }
        return classRM.value()[0];
    }
}
@@ -49,6 +49,17 @@ task:
account:
  ldap:
    enabled: false
    url: ldap://127.0.0.1:389/
    basedn: dc=tsign,dc=cn
    factory: com.sun.jndi.ldap.LdapCtxFactory
    filter: sAMAccountName
    security:
      authentication: simple
      principal: cn=admin,dc=tsign,dc=cn
      credentials: admin
    auth-user-registration: true
    auth-user-registration-role: normal

kcm:
  enabled: false