Compare commits


67 Commits

Author SHA1 Message Date
zengqiao
b67a162d3f bump version to v2.3.1 2021-04-19 14:13:48 +08:00
zengqiao
1141d4b833 Use the class-level RequestMapping annotation to determine whether the current request is authorized 2021-04-15 18:12:21 +08:00
EricZeng
cdac92ca7b Merge pull request #229 from didi/dev
Use the class-level RequestMapping annotation to determine whether the current request requires login
2021-04-14 19:47:43 +08:00
zengqiao
2a57c260cc Use the class-level RequestMapping annotation to determine whether the current request requires login 2021-04-14 19:40:19 +08:00
zengqiao
8f10624073 add jmx prometheus jar 2021-04-12 17:58:24 +08:00
EricZeng
eb1f8be11e Merge pull request #224 from didi/master
merge master
2021-04-12 13:51:14 +08:00
EricZeng
3333501ab9 Merge pull request #222 from zwOvO/master
Remove unused imports and dead code
2021-04-09 19:28:04 +08:00
zwOvO
0f40820315 Remove unused imports and dead code 2021-04-09 11:41:06 +08:00
zengqiao
b9bb1c775d change uri filter rule 2021-04-06 10:26:21 +08:00
zengqiao
1059b7376b forbid request when uri contains .. 2021-04-06 10:01:29 +08:00
EricZeng
f38ab4a9ce Merge pull request #217 from didi/dev
Reject API requests whose URI contains ./ or too many consecutive slashes
2021-03-31 20:00:52 +08:00
zengqiao
9e7450c012 Reject API requests whose URI contains ./ or too many consecutive slashes 2021-03-31 19:45:18 +08:00
EricZeng
99a3e360fe Merge pull request #216 from didi/dev
Switch the API filtering strategy from a blacklist to a whitelist
2021-03-30 12:56:19 +08:00
lucasun
d45f8f78d6 Merge pull request #215 from zhangfenhua/master
Add nginx configuration: front-end/back-end separation & multiple static resources
2021-03-30 11:11:58 +08:00
zengqiao
648af61116 Switch the API filtering strategy from a blacklist to a whitelist 2021-03-29 21:21:23 +08:00
zhangfenhua
eebf1b89b1 nginx configuration guide 2021-03-29 11:53:50 +08:00
EricZeng
f8094bb624 Merge pull request #211 from didi/dev
add expert config desc
2021-03-23 15:23:10 +08:00
zengqiao
ed13e0d2c2 add expert config desc 2021-03-23 15:21:48 +08:00
EricZeng
aa830589b4 Merge pull request #210 from didi/dev
fix monitor enable time illegal bug
2021-03-22 17:22:44 +08:00
zengqiao
999a2bd929 fix monitor enable time illegal bug 2021-03-22 17:21:12 +08:00
EricZeng
d69ee98450 Merge pull request #209 from didi/dev
add faq, kafka version supported & apply logical cluster and how to handle it
2021-03-22 13:43:14 +08:00
zengqiao
f6712c24ad merge master 2021-03-22 13:42:09 +08:00
zengqiao
89d2772194 add faq, kafka version supported & apply logical cluster and how to handle it 2021-03-22 13:38:23 +08:00
mike.zhangliang
03352142b6 Update README.md
Add WeChat group joining instructions
2021-03-16 14:46:38 +08:00
lucasun
73a51e0c00 Merge pull request #205 from ZQKC/master
add qa
2021-03-10 19:27:01 +08:00
zengqiao
2e26f8caa6 add qa 2021-03-10 19:23:29 +08:00
EricZeng
f9bcce9e43 Merge pull request #3 from didi/master
merge didi Logi-KM
2021-03-10 19:20:39 +08:00
EricZeng
2ecc877ba8 fix add_cluster.md path
fix add_cluster.md path
2021-03-10 15:45:48 +08:00
EricZeng
3f8a3c69e3 Merge pull request #201 from ZQKC/master
optimize ldap
2021-03-10 14:12:35 +08:00
zengqiao
67c37a0984 optimize ldap 2021-03-10 13:52:09 +08:00
EricZeng
a58a55d00d Merge pull request #203 from lucasun/hotfix/v2.3.1
Pin clipboard to 2.0.6; upgrading to 2.0.7 breaks the TS build
2021-03-09 18:11:02 +08:00
孙超
06d51dd0b8 Pin clipboard to 2.0.6; upgrading to 2.0.7 breaks the TS build 2021-03-09 18:07:42 +08:00
zengqiao
d5db028f57 optimize ldap 2021-03-09 15:13:55 +08:00
EricZeng
fcb85ff4be Merge pull request #2 from didi/master
merge didi logi-km
2021-03-09 11:07:17 +08:00
EricZeng
3695b4363d Merge pull request #200 from didi/dev
del ResultStatus which is in vo
2021-03-09 11:02:46 +08:00
zengqiao
cb11e6437c del ResultStatus in vo 2021-03-09 11:01:21 +08:00
EricZeng
5127bd11ce Merge pull request #198 from didi/master
merge master
2021-03-09 10:42:28 +08:00
EricZeng
91f90aefa1 Merge pull request #195 from fanghanyun/v2.3.0_ldap
support AD LDAP
2021-03-09 10:40:42 +08:00
fanghanyun
0a067bce36 Support AD LDAP 2021-03-09 10:19:08 +08:00
fanghanyun
f0aba433bf Support AD LDAP 2021-03-08 20:31:15 +08:00
EricZeng
f06467a0e3 Merge pull request #197 from didi/dev
delete unused code
2021-03-05 16:12:27 +08:00
zengqiao
68bcd3c710 delete unused code 2021-03-05 16:05:58 +08:00
EricZeng
a645733cc5 Merge pull request #196 from didi/dev
add gateway config docs
2021-03-05 15:31:53 +08:00
zengqiao
49fe5baf94 add gateway config docs 2021-03-05 14:59:40 +08:00
fanghanyun
411ee55653 support AD LDAP 2021-03-05 14:45:54 +08:00
EricZeng
e351ce7411 Merge pull request #194 from didi/dev
reject req when uri contains ..
2021-03-04 17:52:56 +08:00
zengqiao
f33e585a71 reject req when uri contains .. 2021-03-04 17:51:35 +08:00
EricZeng
77f3096e0d Merge pull request #191 from didi/dev
Dev
2021-02-28 22:04:34 +08:00
EricZeng
9a5b18c4e6 Merge pull request #190 from JokerQueue/dev
bug fix: correct the way to judge that a user does not exist
2021-02-28 14:36:28 +08:00
Joker
0c7112869a bug fix: correct the way to judge that a user does not exist 2021-02-27 22:35:35 +08:00
EricZeng
f66a4d71ea Merge pull request #188 from JokerQueue/dev
bug fix: unexpected stop of the topic sync task
2021-02-26 22:46:54 +08:00
Joker
9b0ab878df bug fix: unexpected stop of the topic sync task 2021-02-26 19:47:03 +08:00
EricZeng
d30b90dfd0 Merge pull request #186 from ZHAOYINRUI/master
Add releases_notes and update FAQ
2021-02-26 09:59:18 +08:00
ZHAOYINRUI
efd28f8c27 Update faq.md 2021-02-26 00:03:25 +08:00
ZHAOYINRUI
e05e722387 Add files via upload 2021-02-26 00:01:09 +08:00
EricZeng
748e81956d Update faq.md 2021-02-24 14:10:41 +08:00
EricZeng
c9a41febce Merge pull request #184 from didi/dev
reject illegal zk address
2021-02-23 17:32:20 +08:00
zengqiao
18e244b756 reject illegal zk address 2021-02-23 17:18:49 +08:00
mrazkong
47676139a3 Merge pull request #183 from didi/dev
support dynamic change cluster auth
2021-02-23 16:56:26 +08:00
zengqiao
1ed933b7ad support dynamic change auth 2021-02-23 16:34:21 +08:00
EricZeng
f6a343ccd6 Merge pull request #182 from didi/master
merge master
2021-02-23 15:47:28 +08:00
EricZeng
dd6cdc22e5 Merge pull request #178 from Observe-secretly/v2.2.1_ldap
New feature: add support for LDAP login
2021-02-10 12:35:07 +08:00
李民
f70f4348b3 Merge branch 'master' into v2.2.1_ldap 2021-02-10 10:00:32 +08:00
李民
e7349161f3 BUG FIX: LDAP login registered duplicate users 2021-02-09 15:22:26 +08:00
李民
2e2907ea09 Fix a potential error when fetching the UserDN from LDAP 2021-02-09 14:33:53 +08:00
李民
25e84b2a6c New feature: add support for LDAP login 2021-02-09 11:33:54 +08:00
EricZeng
9aefc55534 Merge pull request #1 from didi/dev
merge didi dev
2021-01-23 11:16:35 +08:00
34 changed files with 714 additions and 143 deletions


@@ -67,11 +67,16 @@
- [DiDi Logi-KafkaManager video tutorial series](https://mp.weixin.qq.com/s/9X7gH0tptHPtfjPPSdGO8g)
- [Kafka in practice (15): a study of DiDi's open-source Kafka management platform Logi-KafkaManager -- A叶子叶来](https://blog.csdn.net/yezonggang/article/details/113106244)
## 3 DiDi Logi open-source users DingTalk group
## 3 DiDi Logi open-source users group
![image](https://user-images.githubusercontent.com/5287750/111266722-e531d800-8665-11eb-9242-3484da5a3099.png)
Join via WeChat: follow the official account Obsuite and reply "Logi加群"
![dingding_group](./docs/assets/images/common/dingding_group.jpg)
DingTalk group ID: 32821440
## 4 OCE certification
OCE is a certification mechanism and exchange platform tailored for production users of DiDi Logi-KafkaManager. We provide OCE-certified companies with better technical support, such as dedicated technical salons, one-on-one company exchanges, and a dedicated Q&A group. If your company runs Logi-KafkaManager in production, [come join us](http://obsuite.didiyun.com/open/openAuth)!

Releases_Notes.md (new file, 97 lines)

@@ -0,0 +1,97 @@
---
![kafka-manager-logo](./docs/assets/images/common/logo_name.png)
**A one-stop `Apache Kafka` cluster metrics monitoring and operations management platform**
---
## v2.3.0
Release date: 2021-02-08
### New capabilities
- Support Docker-based deployment
- Allow designating specific Brokers as candidate controllers
- Add and manage gateway configurations
- Fetch consumer group status
- Add JMX authentication for clusters
### UX improvements
- Streamline the flows for editing user roles and changing passwords
- Add search by consumerID
- Improve the wording of the "Topic connection info", "reset consumer group offsets", and "modify Topic retention time" prompts
- Add links to the resource application guide in the relevant places
### Bug fixes
- Fix the Broker monitoring charts showing a wrong time axis
- Fix the wrong alert-period unit used when creating N9e (Nightingale) alert rules
## v2.2.0
Release date: 2021-01-25
### New capabilities
- Streamline batch operations on work orders
- Add real-time 75th/99th-percentile latency metrics for Topics
- Add a scheduled task that periodically writes ownerless Topics missing from the DB into the DB
### UX improvements
- Add links to the cluster onboarding guide in the relevant places
- Clarify the meanings of physical cluster and logical cluster
- Show the Region a Topic belongs to on the Topic detail page and in the partition-expansion dialog
- Improve the flow for configuring Topic data retention time during Topic approval
- Improve the error messages shown when requesting and approving Topics/applications
- Improve the wording of the Topic data sampling action
- Improve the prompt shown to operators when deleting a Topic
- Improve the deletion logic and prompt shown to operators when deleting a Region
- Improve the prompt shown to operators when deleting a logical cluster
- Improve the file-type restrictions when uploading cluster configuration files
### Bug fixes
- Fix a special-character validation error in application names
- Fix ordinary users accessing application details without authorization
- Fix the data compression format being unavailable after a Kafka version upgrade
- Fix logical clusters and Topics still being displayed after deletion
- Fix duplicate result prompts during Leader rebalance operations
## v2.1.0
Release date: 2020-12-19
### UX improvements
- Improve the background style while pages load
- Streamline the flow for ordinary users to request Topic permissions
- Tighten the permission checks for Topic quota and partition requests
- Improve the wording when revoking Topic permissions
- Rename the fields of the quota request form
- Streamline the flow for resetting consumer offsets
- Improve the form for creating Topic migration tasks
- Improve the dialog style for Topic partition expansion
- Improve the chart styles for cluster Broker monitoring
- Improve the form for creating logical clusters
- Improve the prompt for cluster security protocols
### Bug fixes
- Fix occasional failures when resetting consumer offsets


@@ -4,7 +4,7 @@ cd $workspace
## constant
OUTPUT_DIR=./output
KM_VERSION=2.3.0
KM_VERSION=2.3.1
APP_NAME=kafka-manager
APP_DIR=${APP_NAME}-${KM_VERSION}


@@ -9,6 +9,13 @@
# Dynamic configuration management
## 0. Contents
- 1. Scheduled Topic sync task
- 2. Expert service: Topic partition hotspots
- 3. Expert service: insufficient Topic partitions
## 1. Scheduled Topic sync task
### 1.1 Purpose of the configuration
@@ -63,3 +70,53 @@ task:
]
```
---
## 2. Expert service: Topic partition hotspots
When a Topic's leader count is distributed unevenly across the Brokers circumscribed by a `Region`, we consider that Topic a hotspot Topic.
Note: judging hotspots purely by the leader-count distribution has its limitations; contributions of additional hotspot definitions to the code are welcome.
Dynamic configuration for Topic partition hotspots (page: Operations -> Platform Management -> Configuration Management)
Configuration key:
```
REGION_HOT_TOPIC_CONFIG
```
Configuration value:
```json
{
"maxDisPartitionNum": 2, # Region内Broker间的leader数差距超过2时则认为是存在热点的Topic
"minTopicBytesInUnitB": 1048576, # 流量低于该值的Topic不做统计
"ignoreClusterIdList": [ # 忽略的集群
50
]
}
```
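For illustration, here is a minimal Java sketch of how this hotspot rule could be checked against the configuration above; the class and method names are hypothetical, and only the leader-count-gap rule described here is implemented:
```java
import java.util.Collection;
import java.util.Collections;

// Hypothetical sketch: a Topic counts as "hot" when the gap between the largest
// and smallest per-Broker leader counts in its Region exceeds maxDisPartitionNum.
public class RegionHotTopicChecker {
    private final int maxDisPartitionNum;     // e.g. 2, from REGION_HOT_TOPIC_CONFIG
    private final long minTopicBytesInUnitB;  // e.g. 1048576; quieter Topics are skipped

    public RegionHotTopicChecker(int maxDisPartitionNum, long minTopicBytesInUnitB) {
        this.maxDisPartitionNum = maxDisPartitionNum;
        this.minTopicBytesInUnitB = minTopicBytesInUnitB;
    }

    /**
     * @param leaderCountPerBroker this Topic's leader count on each Broker of the Region
     * @param topicBytesIn         the Topic's current bytes-in
     */
    public boolean isHotTopic(Collection<Integer> leaderCountPerBroker, long topicBytesIn) {
        if (topicBytesIn < minTopicBytesInUnitB || leaderCountPerBroker.isEmpty()) {
            return false; // Topics below the traffic floor are not counted
        }
        int max = Collections.max(leaderCountPerBroker);
        int min = Collections.min(leaderCountPerBroker);
        return max - min > maxDisPartitionNum;
    }
}
```
(Clusters listed in `ignoreClusterIdList` would simply be skipped before this check runs.)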
---
## 3. Expert service: insufficient Topic partitions
When total traffic divided by the partition count exceeds the specified value, we consider the Topic's partitions insufficient.
Dynamic configuration for insufficient Topic partitions (page: Operations -> Platform Management -> Configuration Management)
Configuration key:
```
TOPIC_INSUFFICIENT_PARTITION_CONFIG
```
Configuration value:
```json
{
"maxBytesInPerPartitionUnitB": 3145728, # 单分区流量超过该值, 则认为分区不去
"minTopicBytesInUnitB": 1048576, # 流量低于该值的Topic不做统计
"ignoreClusterIdList": [ # 忽略的集群
50
]
}
```
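Likewise, a minimal sketch of this rule with hypothetical names: a Topic is flagged when its per-partition bytes-in exceeds the threshold, and low-traffic Topics are skipped:
```java
// Hypothetical sketch: total bytes-in divided by the partition count exceeding
// maxBytesInPerPartitionUnitB means the Topic likely needs more partitions.
public class TopicPartitionSufficiencyChecker {
    private final long maxBytesInPerPartitionUnitB; // e.g. 3145728
    private final long minTopicBytesInUnitB;        // e.g. 1048576

    public TopicPartitionSufficiencyChecker(long maxBytesInPerPartitionUnitB,
                                            long minTopicBytesInUnitB) {
        this.maxBytesInPerPartitionUnitB = maxBytesInPerPartitionUnitB;
        this.minTopicBytesInUnitB = minTopicBytesInUnitB;
    }

    public boolean hasInsufficientPartitions(long topicBytesIn, int partitionNum) {
        if (topicBytesIn < minTopicBytesInUnitB || partitionNum <= 0) {
            return false; // Topics below the traffic floor are not counted
        }
        return topicBytesIn / partitionNum > maxBytesInPerPartitionUnitB;
    }
}
```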


@@ -0,0 +1,10 @@
---
![kafka-manager-logo](../assets/images/common/logo_name.png)
**A one-stop `Apache Kafka` cluster metrics monitoring and operations management platform**
---
# Kafka-Gateway configuration guide


@@ -0,0 +1,94 @@
---
![kafka-manager-logo](../assets/images/common/logo_name.png)
**A one-stop `Apache Kafka` cluster metrics monitoring and operations management platform**
---
## nginx configuration - installation guide
# 1. Standalone deployment
See the [kafka-manager installation guide](install_guide_cn.md).
# 2. nginx configuration
## 1. Standalone deployment configuration
```
# nginx root-path access is configured as follows
location / {
proxy_pass http://ip:port;
}
```
## 2. Front-end/back-end separation & multiple static resources
The following configuration lets nginx proxy multiple static resources, enabling front-end/back-end separation and smoother version upgrades.
### 1. Download the source
Download the code for the version you need: [GitHub download](https://github.com/didi/Logi-KafkaManager)
### 2. Modify webpack.config.js
Edit `webpack.config.js` in the `kafka-manager-console` module.
In everything below, <font color='red'>xxxx</font> is the nginx proxy path and the load prefix for the packaged static files; change <font color='red'>xxxx</font> as needed.
```
cd kafka-manager-console
vi webpack.config.js
# publicPath defaults to packaging under the root path; change it to the nginx proxy path.
let publicPath = '/xxxx';
```
### 3. Build
```
npm cache clean --force && npm install
```
P.S. If the build reports an error, run `npm install clipboard@2.0.6`; otherwise ignore this step.
### 4. Deploy
#### 1. Deploy the front-end static files
Static resources: `../kafka-manager-web/src/main/resources/templates`
Upload them to a directory of your choice; this demo uses the `root` directory.
#### 2. Upload the jar and start it; see the [kafka-manager installation guide](install_guide_cn.md)
#### 3. Update the nginx configuration
```
location /xxxx {
# location of the static files
alias /root/templates;
try_files $uri $uri/ /xxxx/index.html;
index index.html;
}
location /api {
proxy_pass http://ip:port;
}
# serving the back end under /api is recommended; if that conflicts, use the configuration below
#location /api/v2 {
# proxy_pass http://ip:port;
#}
#location /api/v1 {
# proxy_pass http://ip:port;
#}
```


@@ -7,9 +7,9 @@
---
# FAQ
- 0. Fixing broken images on GitHub
- 0. Which Kafka versions are supported
- 1. No cluster to choose when requesting a Topic, creating monitoring alerts, etc.
- 2. What logical clusters & Regions are for
- 3. Login failures
@@ -18,22 +18,16 @@
- 6. How to use `MySQL 8`
- 7. How to fix `Jmx` connection failures
- 8. The `topic biz data not exist` error and how to handle it
- 9. How to view the API docs after the process starts
- 10. How to create an alert group
- 11. Why connection info and latency info have no data
- 12. Why a logical cluster is not visible after its request is approved
---
### 0. Fixing broken images on GitHub
### 0. Which Kafka versions are supported
On your local machine, `ping github.com` to obtain the IP address of `github.com`,
then bind that IP in the `/etc/hosts` file.
For example:
```shell
# add the following entry to /etc/hosts
140.82.113.3 github.com
```
Broadly speaking, as long as the Kafka version in use still depends on Zookeeper, its main features should be supported.
---
@@ -43,7 +37,7 @@
For creating a logical cluster, see:
- the [kafka-manager cluster onboarding](docs/user_guide/add_cluster/add_cluster.md) guide; the Region and the logical cluster must both be added here.
- the [kafka-manager cluster onboarding](add_cluster/add_cluster.md) guide; the Region and the logical cluster must both be added here.
---
@@ -76,7 +70,7 @@
- 3. Database timezone issues.
Check whether MySQL's topic table has data; if it does, then check whether the configured timezone is correct.
Check whether MySQL's topic_metrics table has data; if it does, then check whether the configured timezone is correct.
---
@@ -109,3 +103,26 @@
Under `Operations -> Cluster List -> Topic Info`, edit the Topic that needs permissions and assign it an application.
The above only handles a single Topic. If you have many Topics to initialize, add a configuration in Configuration Management to periodically sync ownerless Topics; see [Dynamic configuration management - 1. Scheduled Topic sync task](../dev_guide/dynamic_config_manager.md)
---
### 9. How to view the API docs after the process starts
- DiDi Logi-KafkaManager documents its APIs with Swagger. Swagger UI address: [http://IP:PORT/swagger-ui.html#/](http://IP:PORT/swagger-ui.html#/)
### 10. How to create an alert group
This works together with a monitoring system. Integration with N9e (Nightingale) is provided by default; you can also integrate your own in-house monitoring system, which requires implementing a few interfaces.
See: [integrating monitoring with N9e](../dev_guide/monitor_system_integrate_with_n9e.md), [integrating monitoring with other systems](../dev_guide/monitor_system_integrate_with_self.md)
### 11. Why connection info and latency info have no data
These only have data when used together with DiDi's internal kafka-gateway, which has not been open-sourced yet.
### 12. Why a logical cluster is not visible after its request is approved
Requesting and approving a logical cluster is only a work-order flow; it does not actually create the logical cluster, which must still be created manually.
See: [kafka-manager cluster onboarding](add_cluster/add_cluster.md).


@@ -47,4 +47,13 @@ public enum AccountRoleEnum {
}
return AccountRoleEnum.UNKNOWN;
}
public static AccountRoleEnum getUserRoleEnum(String roleName) {
for (AccountRoleEnum elem: AccountRoleEnum.values()) {
if (elem.message.equalsIgnoreCase(roleName)) {
return elem;
}
}
return AccountRoleEnum.UNKNOWN;
}
}


@@ -1,45 +0,0 @@
package com.xiaojukeji.kafka.manager.common.bizenum;
/**
* Whether to report to the monitoring system
* @author zengqiao
* @date 20/9/25
*/
public enum SinkMonitorSystemEnum {
SINK_MONITOR_SYSTEM(0, "上报监控系统"),
NOT_SINK_MONITOR_SYSTEM(1, "不上报监控系统"),
;
private Integer code;
private String message;
SinkMonitorSystemEnum(Integer code, String message) {
this.code = code;
this.message = message;
}
public Integer getCode() {
return code;
}
public void setCode(Integer code) {
this.code = code;
}
public String getMessage() {
return message;
}
public void setMessage(String message) {
this.message = message;
}
@Override
public String toString() {
return "SinkMonitorSystemEnum{" +
"code=" + code +
", message='" + message + '\'' +
'}';
}
}


@@ -7,18 +7,18 @@ package com.xiaojukeji.kafka.manager.common.constant;
*/
public class ApiPrefix {
public static final String API_PREFIX = "/api/";
public static final String API_V1_PREFIX = API_PREFIX + "v1/";
public static final String API_V2_PREFIX = API_PREFIX + "v2/";
private static final String API_V1_PREFIX = API_PREFIX + "v1/";
// login
public static final String API_V1_SSO_PREFIX = API_V1_PREFIX + "sso/";
// console
public static final String API_V1_SSO_PREFIX = API_V1_PREFIX + "sso/";
public static final String API_V1_NORMAL_PREFIX = API_V1_PREFIX + "normal/";
public static final String API_V1_RD_PREFIX = API_V1_PREFIX + "rd/";
public static final String API_V1_OP_PREFIX = API_V1_PREFIX + "op/";
// open
public static final String API_V1_THIRD_PART_PREFIX = API_V1_PREFIX + "third-part/";
public static final String API_V2_THIRD_PART_PREFIX = API_V2_PREFIX + "third-part/";
// gateway
public static final String GATEWAY_API_V1_PREFIX = "/gateway" + API_V1_PREFIX;


@@ -106,6 +106,7 @@ public enum ResultStatus {
STORAGE_UPLOAD_FILE_FAILED(8050, "upload file failed"),
STORAGE_FILE_TYPE_NOT_SUPPORT(8051, "File type not support"),
STORAGE_DOWNLOAD_FILE_FAILED(8052, "download file failed"),
LDAP_AUTHENTICATION_FAILED(8053, "ldap authentication failed"),
;


@@ -1,6 +1,7 @@
package com.xiaojukeji.kafka.manager.common.entity.pojo;
import java.util.Date;
import java.util.Objects;
/**
* @author zengqiao
@@ -116,4 +117,22 @@ public class ClusterDO implements Comparable<ClusterDO> {
public int compareTo(ClusterDO clusterDO) {
return this.id.compareTo(clusterDO.id);
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
ClusterDO clusterDO = (ClusterDO) o;
return Objects.equals(id, clusterDO.id)
&& Objects.equals(clusterName, clusterDO.clusterName)
&& Objects.equals(zookeeper, clusterDO.zookeeper)
&& Objects.equals(bootstrapServers, clusterDO.bootstrapServers)
&& Objects.equals(securityProperties, clusterDO.securityProperties)
&& Objects.equals(jmxProperties, clusterDO.jmxProperties);
}
@Override
public int hashCode() {
return Objects.hash(id, clusterName, zookeeper, bootstrapServers, securityProperties, jmxProperties);
}
}


@@ -1,6 +1,6 @@
{
"name": "mobx-ts-example",
"version": "1.0.0",
"name": "logi-kafka",
"version": "2.3.1",
"description": "",
"scripts": {
"start": "webpack-dev-server",
@@ -21,7 +21,7 @@
"@types/spark-md5": "^3.0.2",
"antd": "^3.26.15",
"clean-webpack-plugin": "^3.0.0",
"clipboard": "^2.0.6",
"clipboard": "2.0.6",
"cross-env": "^7.0.2",
"css-loader": "^2.1.0",
"echarts": "^4.5.0",
@@ -56,4 +56,4 @@
"dependencies": {
"format-to-json": "^1.0.4"
}
}
}


@@ -1,8 +1,8 @@
package com.xiaojukeji.kafka.manager.service.cache;
import com.alibaba.fastjson.JSONObject;
import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO;
import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
import com.xiaojukeji.kafka.manager.common.utils.factory.KafkaConsumerFactory;
import kafka.admin.AdminClient;
import org.apache.commons.pool2.impl.GenericObjectPool;
@@ -103,6 +103,21 @@ public class KafkaClientPool {
}
}
public static void closeKafkaConsumerPool(Long clusterId) {
lock.lock();
try {
GenericObjectPool<KafkaConsumer> objectPool = KAFKA_CONSUMER_POOL.remove(clusterId);
if (objectPool == null) {
return;
}
objectPool.close();
} catch (Exception e) {
LOGGER.error("close kafka consumer pool failed, clusterId:{}.", clusterId, e);
} finally {
lock.unlock();
}
}
public static KafkaConsumer borrowKafkaConsumerClient(ClusterDO clusterDO) {
if (ValidateUtils.isNull(clusterDO)) {
return null;
@@ -132,7 +147,11 @@ public class KafkaClientPool {
if (ValidateUtils.isNull(objectPool)) {
return;
}
objectPool.returnObject(kafkaConsumer);
try {
objectPool.returnObject(kafkaConsumer);
} catch (Exception e) {
LOGGER.error("return kafka consumer client failed, clusterId:{}", physicalClusterId, e);
}
}
public static AdminClient getAdminClient(Long clusterId) {


@@ -4,21 +4,23 @@ import com.xiaojukeji.kafka.manager.common.bizenum.KafkaBrokerRoleEnum;
import com.xiaojukeji.kafka.manager.common.constant.Constant;
import com.xiaojukeji.kafka.manager.common.constant.KafkaConstant;
import com.xiaojukeji.kafka.manager.common.entity.KafkaVersion;
import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO;
import com.xiaojukeji.kafka.manager.common.utils.JsonUtils;
import com.xiaojukeji.kafka.manager.common.utils.ListUtils;
import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO;
import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
import com.xiaojukeji.kafka.manager.common.utils.jmx.JmxConfig;
import com.xiaojukeji.kafka.manager.common.zookeeper.znode.brokers.BrokerMetadata;
import com.xiaojukeji.kafka.manager.common.zookeeper.znode.ControllerData;
import com.xiaojukeji.kafka.manager.common.zookeeper.znode.brokers.TopicMetadata;
import com.xiaojukeji.kafka.manager.common.zookeeper.ZkConfigImpl;
import com.xiaojukeji.kafka.manager.dao.ControllerDao;
import com.xiaojukeji.kafka.manager.common.utils.jmx.JmxConnectorWrap;
import com.xiaojukeji.kafka.manager.service.service.JmxService;
import com.xiaojukeji.kafka.manager.service.zookeeper.*;
import com.xiaojukeji.kafka.manager.service.service.ClusterService;
import com.xiaojukeji.kafka.manager.common.zookeeper.ZkConfigImpl;
import com.xiaojukeji.kafka.manager.common.zookeeper.ZkPathUtil;
import com.xiaojukeji.kafka.manager.common.zookeeper.znode.ControllerData;
import com.xiaojukeji.kafka.manager.common.zookeeper.znode.brokers.BrokerMetadata;
import com.xiaojukeji.kafka.manager.common.zookeeper.znode.brokers.TopicMetadata;
import com.xiaojukeji.kafka.manager.dao.ControllerDao;
import com.xiaojukeji.kafka.manager.service.service.ClusterService;
import com.xiaojukeji.kafka.manager.service.service.JmxService;
import com.xiaojukeji.kafka.manager.service.zookeeper.BrokerStateListener;
import com.xiaojukeji.kafka.manager.service.zookeeper.ControllerStateListener;
import com.xiaojukeji.kafka.manager.service.zookeeper.TopicStateListener;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
@@ -160,8 +162,12 @@ public class PhysicalClusterMetadataManager {
CLUSTER_MAP.remove(clusterId);
}
public Set<Long> getClusterIdSet() {
return CLUSTER_MAP.keySet();
public static Map<Long, ClusterDO> getClusterMap() {
return CLUSTER_MAP;
}
public static void updateClusterMap(ClusterDO clusterDO) {
CLUSTER_MAP.put(clusterDO.getId(), clusterDO);
}
public static ClusterDO getClusterFromCache(Long clusterId) {


@@ -4,7 +4,6 @@ import com.xiaojukeji.kafka.manager.common.entity.Result;
import com.xiaojukeji.kafka.manager.common.entity.ResultStatus;
import com.xiaojukeji.kafka.manager.common.entity.ao.ClusterDetailDTO;
import com.xiaojukeji.kafka.manager.common.entity.ao.cluster.ControllerPreferredCandidate;
import com.xiaojukeji.kafka.manager.common.entity.dto.op.ControllerPreferredCandidateDTO;
import com.xiaojukeji.kafka.manager.common.entity.vo.normal.cluster.ClusterNameDTO;
import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO;
import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterMetricsDO;


@@ -1,7 +1,6 @@
package com.xiaojukeji.kafka.manager.service.service;
import com.xiaojukeji.kafka.manager.common.entity.ResultStatus;
import com.xiaojukeji.kafka.manager.common.entity.dto.rd.RegionDTO;
import com.xiaojukeji.kafka.manager.common.entity.pojo.RegionDO;
import java.util.List;


@@ -340,10 +340,6 @@ public class AdminServiceImpl implements AdminService {
@Override
public ResultStatus modifyTopicConfig(ClusterDO clusterDO, String topicName, Properties properties, String operator) {
ResultStatus rs = TopicCommands.modifyTopicConfig(clusterDO, topicName, properties);
if (!ResultStatus.SUCCESS.equals(rs)) {
return rs;
}
return rs;
}
}


@@ -205,21 +205,31 @@ public class ClusterServiceImpl implements ClusterService {
}
private boolean isZookeeperLegal(String zookeeper) {
boolean status = false;
ZooKeeper zk = null;
try {
zk = new ZooKeeper(zookeeper, 1000, null);
} catch (Throwable t) {
return false;
for (int i = 0; i < 15; ++i) {
if (zk.getState().isConnected()) {
// the address is considered valid only when the state is connected
status = true;
break;
}
Thread.sleep(1000);
}
} catch (Exception e) {
LOGGER.error("class=ClusterServiceImpl||method=isZookeeperLegal||zookeeper={}||msg=zk address illegal||errMsg={}", zookeeper, e.getMessage());
} finally {
try {
if (zk != null) {
zk.close();
}
} catch (Exception e) {
return false;
LOGGER.error("class=ClusterServiceImpl||method=isZookeeperLegal||zookeeper={}||msg=close zk client failed||errMsg={}", zookeeper, e.getMessage());
}
}
return true;
return status;
}
@Override


@@ -8,7 +8,6 @@ import com.xiaojukeji.kafka.manager.common.entity.ao.consumer.ConsumeDetailDTO;
import com.xiaojukeji.kafka.manager.common.entity.ao.consumer.ConsumerGroup;
import com.xiaojukeji.kafka.manager.common.entity.ao.consumer.ConsumerGroupSummary;
import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO;
import com.xiaojukeji.kafka.manager.common.utils.ListUtils;
import com.xiaojukeji.kafka.manager.common.zookeeper.znode.brokers.TopicMetadata;
import com.xiaojukeji.kafka.manager.common.entity.ao.PartitionOffsetDTO;
import com.xiaojukeji.kafka.manager.common.exception.ConfigException;


@@ -16,5 +16,5 @@ public interface LoginService {
void logout(HttpServletRequest request, HttpServletResponse response, Boolean needJump2LoginPage);
boolean checkLogin(HttpServletRequest request, HttpServletResponse response);
boolean checkLogin(HttpServletRequest request, HttpServletResponse response, String classRequestMappingValue);
}


@@ -0,0 +1,130 @@
package com.xiaojukeji.kafka.manager.account.component.ldap;
import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;
import javax.naming.AuthenticationException;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.NamingException;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;
import javax.naming.ldap.InitialLdapContext;
import javax.naming.ldap.LdapContext;
import java.util.Hashtable;
@Component
public class LdapAuthentication {
private static final Logger LOGGER = LoggerFactory.getLogger(LdapAuthentication.class);
@Value(value = "${account.ldap.url:}")
private String ldapUrl;
@Value(value = "${account.ldap.basedn:}")
private String ldapBasedn;
@Value(value = "${account.ldap.factory:}")
private String ldapFactory;
@Value(value = "${account.ldap.filter:}")
private String ldapFilter;
@Value(value = "${account.ldap.security.authentication:}")
private String securityAuthentication;
@Value(value = "${account.ldap.security.principal:}")
private String securityPrincipal;
@Value(value = "${account.ldap.security.credentials:}")
private String securityCredentials;
private LdapContext getLdapContext() {
Hashtable<String, String> env = new Hashtable<String, String>();
env.put(Context.INITIAL_CONTEXT_FACTORY, ldapFactory);
env.put(Context.PROVIDER_URL, ldapUrl + ldapBasedn);
env.put(Context.SECURITY_AUTHENTICATION, securityAuthentication);
// if no username/password is specified here, the bind silently falls back to anonymous login
env.put(Context.SECURITY_PRINCIPAL, securityPrincipal);
env.put(Context.SECURITY_CREDENTIALS, securityCredentials);
try {
return new InitialLdapContext(env, null);
} catch (AuthenticationException e) {
LOGGER.warn("class=LdapAuthentication||method=getLdapContext||errMsg={}", e);
} catch (Exception e) {
LOGGER.error("class=LdapAuthentication||method=getLdapContext||errMsg={}", e);
}
return null;
}
private String getUserDN(String account, LdapContext ctx) {
String userDN = "";
try {
SearchControls constraints = new SearchControls();
constraints.setSearchScope(SearchControls.SUBTREE_SCOPE);
String filter = "(&(objectClass=*)("+ldapFilter+"=" + account + "))";
NamingEnumeration<SearchResult> en = ctx.search("", filter, constraints);
if (en == null || !en.hasMoreElements()) {
return "";
}
// maybe more than one element
while (en.hasMoreElements()) {
Object obj = en.nextElement();
if (obj instanceof SearchResult) {
SearchResult si = (SearchResult) obj;
userDN += si.getName();
userDN += "," + ldapBasedn;
break;
}
}
} catch (Exception e) {
LOGGER.error("class=LdapAuthentication||method=getUserDN||account={}||errMsg={}", account, e);
}
return userDN;
}
/**
* LDAP account/password authentication
* @param account
* @param password
* @return
*/
public boolean authenticate(String account, String password) {
LdapContext ctx = getLdapContext();
if (ValidateUtils.isNull(ctx)) {
return false;
}
try {
String userDN = getUserDN(account, ctx);
if(ValidateUtils.isBlank(userDN)){
return false;
}
ctx.addToEnvironment(Context.SECURITY_PRINCIPAL, userDN);
ctx.addToEnvironment(Context.SECURITY_CREDENTIALS, password);
ctx.reconnect(null);
return true;
} catch (AuthenticationException e) {
LOGGER.warn("class=LdapAuthentication||method=authenticate||account={}||errMsg={}", account, e);
} catch (NamingException e) {
LOGGER.warn("class=LdapAuthentication||method=authenticate||account={}||errMsg={}", account, e);
} catch (Exception e) {
LOGGER.error("class=LdapAuthentication||method=authenticate||account={}||errMsg={}", account, e);
} finally {
if(ctx != null) {
try {
ctx.close();
} catch (NamingException e) {
LOGGER.error("class=LdapAuthentication||method=authenticate||account={}||errMsg={}", account, e);
}
}
}
return false;
}
}


@@ -2,13 +2,17 @@ package com.xiaojukeji.kafka.manager.account.component.sso;
import com.xiaojukeji.kafka.manager.account.AccountService;
import com.xiaojukeji.kafka.manager.account.component.AbstractSingleSignOn;
import com.xiaojukeji.kafka.manager.common.bizenum.AccountRoleEnum;
import com.xiaojukeji.kafka.manager.common.constant.LoginConstant;
import com.xiaojukeji.kafka.manager.common.entity.Result;
import com.xiaojukeji.kafka.manager.common.entity.ResultStatus;
import com.xiaojukeji.kafka.manager.common.entity.dto.normal.LoginDTO;
import com.xiaojukeji.kafka.manager.common.entity.pojo.AccountDO;
import com.xiaojukeji.kafka.manager.common.utils.EncryptUtil;
import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
import com.xiaojukeji.kafka.manager.account.component.ldap.LdapAuthentication;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;
import javax.servlet.http.HttpServletRequest;
@@ -23,12 +27,48 @@ public class BaseSessionSignOn extends AbstractSingleSignOn {
@Autowired
private AccountService accountService;
@Autowired
private LdapAuthentication ldapAuthentication;
// whether LDAP authentication is enabled
@Value(value = "${account.ldap.enabled:}")
private Boolean accountLdapEnabled;
// default role for LDAP auto-registered users; note it should normally be a low-privilege role
@Value(value = "${account.ldap.auth-user-registration-role:}")
private String authUserRegistrationRole;
// whether LDAP auto-registration is enabled
@Value(value = "${account.ldap.auth-user-registration:}")
private boolean authUserRegistration;
@Override
public Result<String> loginAndGetLdap(HttpServletRequest request, HttpServletResponse response, LoginDTO dto) {
if (ValidateUtils.isBlank(dto.getUsername()) || ValidateUtils.isNull(dto.getPassword())) {
return null;
return Result.buildFailure("Missing parameters");
}
Result<AccountDO> accountResult = accountService.getAccountDO(dto.getUsername());
// check whether LDAP authentication is enabled; if so, credentials may also be verified via LDAP
if(!ValidateUtils.isNull(accountLdapEnabled) && accountLdapEnabled){
// verify the credentials against LDAP
if(!ldapAuthentication.authenticate(dto.getUsername(),dto.getPassword())){
return Result.buildFrom(ResultStatus.LDAP_AUTHENTICATION_FAILED);
}
if((ValidateUtils.isNull(accountResult) || ValidateUtils.isNull(accountResult.getData())) && authUserRegistration){
// auto-register the account
AccountDO accountDO = new AccountDO();
accountDO.setUsername(dto.getUsername());
accountDO.setRole(AccountRoleEnum.getUserRoleEnum(authUserRegistrationRole).getRole());
accountDO.setPassword(dto.getPassword());
accountService.createAccount(accountDO);
}
return Result.buildSuc(dto.getUsername());
}
if (ValidateUtils.isNull(accountResult) || accountResult.failed()) {
return new Result<>(accountResult.getCode(), accountResult.getMessage());
}
@@ -64,4 +104,4 @@ public class BaseSessionSignOn extends AbstractSingleSignOn {
response.setStatus(AbstractSingleSignOn.REDIRECT_CODE);
response.addHeader(AbstractSingleSignOn.HEADER_REDIRECT_KEY, "");
}
}
}


@@ -63,12 +63,17 @@ public class LoginServiceImpl implements LoginService {
}
@Override
public boolean checkLogin(HttpServletRequest request, HttpServletResponse response) {
String uri = request.getRequestURI();
if (!(uri.contains(ApiPrefix.API_V1_NORMAL_PREFIX)
|| uri.contains(ApiPrefix.API_V1_RD_PREFIX)
|| uri.contains(ApiPrefix.API_V1_OP_PREFIX))) {
// whitelisted API, skip the login check
public boolean checkLogin(HttpServletRequest request, HttpServletResponse response, String classRequestMappingValue) {
if (ValidateUtils.isNull(classRequestMappingValue)) {
LOGGER.error("class=LoginServiceImpl||method=checkLogin||msg=uri illegal||uri={}", request.getRequestURI());
singleSignOn.setRedirectToLoginPage(response);
return false;
}
if (classRequestMappingValue.equals(ApiPrefix.API_V1_SSO_PREFIX)
|| classRequestMappingValue.equals(ApiPrefix.API_V1_THIRD_PART_PREFIX)
|| classRequestMappingValue.equals(ApiPrefix.GATEWAY_API_V1_PREFIX)) {
// whitelisted API, return true directly
return true;
}
@@ -79,7 +84,7 @@ public class LoginServiceImpl implements LoginService {
return false;
}
boolean status = checkAuthority(request, accountService.getAccountRoleFromCache(username));
boolean status = checkAuthority(classRequestMappingValue, accountService.getAccountRoleFromCache(username));
if (status) {
HttpSession session = request.getSession();
session.setAttribute(LoginConstant.SESSION_USERNAME_KEY, username);
@@ -89,19 +94,18 @@ public class LoginServiceImpl implements LoginService {
return false;
}
private boolean checkAuthority(HttpServletRequest request, AccountRoleEnum accountRoleEnum) {
String uri = request.getRequestURI();
if (uri.contains(ApiPrefix.API_V1_NORMAL_PREFIX)) {
private boolean checkAuthority(String classRequestMappingValue, AccountRoleEnum accountRoleEnum) {
if (classRequestMappingValue.equals(ApiPrefix.API_V1_NORMAL_PREFIX)) {
// normal APIs are accessible to everyone
return true;
}
if (uri.contains(ApiPrefix.API_V1_RD_PREFIX) ) {
// RD APIs are accessible to OP or RD
if (classRequestMappingValue.equals(ApiPrefix.API_V1_RD_PREFIX) ) {
// RD APIs are accessible to OP or RD
return AccountRoleEnum.RD.equals(accountRoleEnum) || AccountRoleEnum.OP.equals(accountRoleEnum);
}
if (uri.contains(ApiPrefix.API_V1_OP_PREFIX)) {
if (classRequestMappingValue.equals(ApiPrefix.API_V1_OP_PREFIX)) {
// OP APIs are accessible only to OP
return AccountRoleEnum.OP.equals(accountRoleEnum);
}


@@ -5,6 +5,8 @@ import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
import com.xiaojukeji.kafka.manager.monitor.common.entry.*;
import com.xiaojukeji.kafka.manager.monitor.component.n9e.entry.*;
import com.xiaojukeji.kafka.manager.monitor.component.n9e.entry.bizenum.CategoryEnum;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.*;
@@ -13,6 +15,8 @@ import java.util.*;
* @date 20/8/26
*/
public class N9eConverter {
private static final Logger LOGGER = LoggerFactory.getLogger(N9eConverter.class);
public static List<N9eMetricSinkPoint> convert2N9eMetricSinkPointList(String nid, List<MetricSinkPoint> pointList) {
if (pointList == null || pointList.isEmpty()) {
return new ArrayList<>();
@@ -98,8 +102,8 @@ public class N9eConverter {
n9eStrategy.setNotify_user(new ArrayList<>());
n9eStrategy.setCallback(strategyAction.getCallback());
n9eStrategy.setEnable_stime("00:00");
n9eStrategy.setEnable_etime("23:59");
n9eStrategy.setEnable_stime(String.format("%02d:00", ListUtils.string2IntList(strategy.getPeriodHoursOfDay()).stream().distinct().min((e1, e2) -> e1.compareTo(e2)).get()));
n9eStrategy.setEnable_etime(String.format("%02d:59", ListUtils.string2IntList(strategy.getPeriodHoursOfDay()).stream().distinct().max((e1, e2) -> e1.compareTo(e2)).get()));
n9eStrategy.setEnable_days_of_week(ListUtils.string2IntList(strategy.getPeriodDaysOfWeek()));
n9eStrategy.setNeed_upgrade(0);
@@ -120,6 +124,15 @@ public class N9eConverter {
return strategyList;
}
private static Integer getEnableHour(String enableTime) {
try {
return Integer.valueOf(enableTime.split(":")[0]);
} catch (Exception e) {
LOGGER.warn("class=N9eConverter||method=getEnableHour||enableTime={}||errMsg={}", enableTime, e.getMessage());
}
return null;
}
public static Strategy convert2Strategy(N9eStrategy n9eStrategy, Map<String, NotifyGroup> notifyGroupMap) {
if (n9eStrategy == null) {
return null;
@@ -137,7 +150,16 @@ public class N9eConverter {
strategy.setId(n9eStrategy.getId().longValue());
strategy.setName(n9eStrategy.getName());
strategy.setPriority(n9eStrategy.getPriority());
strategy.setPeriodHoursOfDay("0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23");
List<Integer> hourList = new ArrayList<>();
Integer startHour = N9eConverter.getEnableHour(n9eStrategy.getEnable_stime());
Integer endHour = N9eConverter.getEnableHour(n9eStrategy.getEnable_etime());
if (!(ValidateUtils.isNullOrLessThanZero(startHour) || ValidateUtils.isNullOrLessThanZero(endHour) || endHour < startHour)) {
for (Integer hour = startHour; hour <= endHour; ++hour) {
hourList.add(hour);
}
}
strategy.setPeriodHoursOfDay(ListUtils.intList2String(hourList));
strategy.setPeriodDaysOfWeek(ListUtils.intList2String(n9eStrategy.getEnable_days_of_week()));
List<StrategyExpression> strategyExpressionList = new ArrayList<>();


@@ -125,7 +125,7 @@ public class SyncTopic2DB extends AbstractScheduledTask<EmptyEntry> {
if (ValidateUtils.isNull(syncTopic2DBConfig.isAddAuthority()) || !syncTopic2DBConfig.isAddAuthority()) {
// skip if authority info should not be added
return;
continue;
}
// TODO: adding the Topic and adding the Authority are not transactional; an exception in between leaves the data inconsistent and needs improvement later


@@ -1,15 +1,17 @@
package com.xiaojukeji.kafka.manager.task.schedule.metadata;
import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO;
import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
import com.xiaojukeji.kafka.manager.service.cache.KafkaClientPool;
import com.xiaojukeji.kafka.manager.service.cache.PhysicalClusterMetadataManager;
import com.xiaojukeji.kafka.manager.service.service.ClusterService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;
/**
* @author zengqiao
@@ -25,24 +27,63 @@ public class FlushClusterMetadata {
@Scheduled(cron="0/30 * * * * ?")
public void flush() {
List<ClusterDO> doList = clusterService.list();
Map<Long, ClusterDO> dbClusterMap = clusterService.list().stream().collect(Collectors.toMap(ClusterDO::getId, Function.identity(), (key1, key2) -> key2));
Set<Long> newClusterIdSet = new HashSet<>();
Set<Long> oldClusterIdSet = physicalClusterMetadataManager.getClusterIdSet();
for (ClusterDO clusterDO: doList) {
newClusterIdSet.add(clusterDO.getId());
Map<Long, ClusterDO> cacheClusterMap = PhysicalClusterMetadataManager.getClusterMap();
// add cluster
physicalClusterMetadataManager.addNew(clusterDO);
}
// newly added clusters
for (ClusterDO clusterDO: dbClusterMap.values()) {
if (cacheClusterMap.containsKey(clusterDO.getId())) {
// already in the cache
continue;
}
add(clusterDO);
}
for (Long clusterId: oldClusterIdSet) {
if (newClusterIdSet.contains(clusterId)) {
continue;
}
// removed clusters
for (ClusterDO clusterDO: cacheClusterMap.values()) {
if (dbClusterMap.containsKey(clusterDO.getId())) {
// still present in the DB
continue;
}
remove(clusterDO.getId());
}
// remove cluster
physicalClusterMetadataManager.remove(clusterId);
}
// clusters whose configuration changed
for (ClusterDO dbClusterDO: dbClusterMap.values()) {
ClusterDO cacheClusterDO = cacheClusterMap.get(dbClusterDO.getId());
if (ValidateUtils.anyNull(cacheClusterDO) || dbClusterDO.equals(cacheClusterDO)) {
// missing || unchanged
continue;
}
modifyConfig(dbClusterDO);
}
}
private void add(ClusterDO clusterDO) {
if (ValidateUtils.anyNull(clusterDO)) {
return;
}
physicalClusterMetadataManager.addNew(clusterDO);
}
private void modifyConfig(ClusterDO clusterDO) {
if (ValidateUtils.anyNull(clusterDO)) {
return;
}
PhysicalClusterMetadataManager.updateClusterMap(clusterDO);
KafkaClientPool.closeKafkaConsumerPool(clusterDO.getId());
}
private void remove(Long clusterId) {
if (ValidateUtils.anyNull(clusterId)) {
return;
}
// remove cached metadata
physicalClusterMetadataManager.remove(clusterId);
// clear the client pool
KafkaClientPool.closeKafkaConsumerPool(clusterId);
}
}


@@ -1,5 +1,6 @@
package com.xiaojukeji.kafka.manager.web.api;
import com.xiaojukeji.kafka.manager.common.constant.ApiPrefix;
import com.xiaojukeji.kafka.manager.common.entity.Result;
import io.swagger.annotations.Api;
import io.swagger.annotations.ApiOperation;
@@ -14,9 +15,9 @@ import springfox.documentation.annotations.ApiIgnore;
* @date 20/6/18
*/
@ApiIgnore
@Api(description = "web应用探活接口(REST)")
@Api(tags = "web应用探活接口(REST)")
@RestController
@RequestMapping("api/")
@RequestMapping(ApiPrefix.API_V1_THIRD_PART_PREFIX)
public class HealthController {
@ApiIgnore


@@ -9,7 +9,6 @@ import com.xiaojukeji.kafka.manager.common.entity.vo.common.AccountSummaryVO;
import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
import com.xiaojukeji.kafka.manager.common.utils.SpringTool;
import com.xiaojukeji.kafka.manager.common.constant.ApiPrefix;
import com.xiaojukeji.kafka.manager.web.api.versionone.gateway.GatewayHeartbeatController;
import io.swagger.annotations.Api;
import io.swagger.annotations.ApiOperation;
import org.slf4j.Logger;
@@ -62,4 +61,4 @@ public class NormalAccountController {
AccountRoleEnum accountRoleEnum = accountService.getAccountRoleFromCache(username);
return new Result<>(new AccountRoleVO(username, accountRoleEnum.getRole()));
}
}
}


@@ -7,7 +7,6 @@ import com.xiaojukeji.kafka.manager.common.entity.ResultStatus;
import com.xiaojukeji.kafka.manager.common.entity.metrics.BrokerMetrics;
import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
import com.xiaojukeji.kafka.manager.common.zookeeper.znode.brokers.BrokerMetadata;
import com.xiaojukeji.kafka.manager.openapi.common.vo.ThirdPartBrokerOverviewVO;
import com.xiaojukeji.kafka.manager.service.cache.PhysicalClusterMetadataManager;
import com.xiaojukeji.kafka.manager.service.service.BrokerService;
import io.swagger.annotations.Api;
@@ -52,4 +51,4 @@ public class ThirdPartClusterController {
return new Result<>(underReplicated.equals(0));
}
}
}


@@ -1,8 +1,13 @@
package com.xiaojukeji.kafka.manager.web.inteceptor;
import com.xiaojukeji.kafka.manager.account.LoginService;
import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.method.HandlerMethod;
import org.springframework.web.servlet.HandlerInterceptor;
import javax.servlet.http.HttpServletRequest;
@@ -15,6 +20,8 @@ import javax.servlet.http.HttpServletResponse;
*/
@Component
public class PermissionInterceptor implements HandlerInterceptor {
private static final Logger LOGGER = LoggerFactory.getLogger(PermissionInterceptor.class);
@Autowired
private LoginService loginService;
@@ -28,6 +35,31 @@ public class PermissionInterceptor implements HandlerInterceptor {
public boolean preHandle(HttpServletRequest request,
HttpServletResponse response,
Object handler) throws Exception {
return loginService.checkLogin(request, response);
String classRequestMappingValue = null;
try {
classRequestMappingValue = getClassRequestMappingValue(handler);
} catch (Exception e) {
LOGGER.error("class=PermissionInterceptor||method=preHandle||uri={}||msg=parse class request-mapping failed", request.getRequestURI(), e);
}
return loginService.checkLogin(request, response, classRequestMappingValue);
}
private String getClassRequestMappingValue(Object handler) {
RequestMapping classRM = null;
if(handler instanceof HandlerMethod) {
HandlerMethod hm = (HandlerMethod)handler;
classRM = hm.getMethod().getDeclaringClass().getAnnotation(RequestMapping.class);
} else if(handler instanceof org.springframework.web.servlet.mvc.Controller) {
org.springframework.web.servlet.mvc.Controller hm = (org.springframework.web.servlet.mvc.Controller)handler;
Class<? extends org.springframework.web.servlet.mvc.Controller> hmClass = hm.getClass();
classRM = hmClass.getAnnotation(RequestMapping.class);
} else {
classRM = handler.getClass().getAnnotation(RequestMapping.class);
}
if (ValidateUtils.isNull(classRM) || classRM.value().length == 0) { // length can never be < 0; check for an empty value array
return null;
}
return classRM.value()[0];
}
}


@@ -49,6 +49,17 @@ task:
account:
ldap:
enabled: false
url: ldap://127.0.0.1:389/
basedn: dc=tsign,dc=cn
factory: com.sun.jndi.ldap.LdapCtxFactory
filter: sAMAccountName
security:
authentication: simple
principal: cn=admin,dc=tsign,dc=cn
credentials: admin
auth-user-registration: true
auth-user-registration-role: normal
kcm:
enabled: false


@@ -16,7 +16,7 @@
</parent>
<properties>
<kafka-manager.revision>2.3.0-SNAPSHOT</kafka-manager.revision>
<kafka-manager.revision>2.3.1-SNAPSHOT</kafka-manager.revision>
<swagger2.version>2.7.0</swagger2.version>
<swagger.version>1.5.13</swagger.version>