Compare commits

2 Commits

| Author | SHA1 | Date |
|---|---|---|
|  | 4471b054bc |  |
|  | 7049e9429d |  |
.github/ISSUE_TEMPLATE/bug_report.md (vendored, deleted)
@@ -1,51 +0,0 @@

---
name: Report a bug
about: Report a bug in KnowStreaming
title: ''
labels: bug
assignees: ''

---

- [ ] I have searched the [issues](https://github.com/didi/KnowStreaming/issues) and found no duplicates.

Would you like to claim this bug?

「 Y / N 」

### Environment

* KnowStreaming version : <font size=4 color =red> xxx </font>
* Operating System version : <font size=4 color =red> xxx </font>
* Java version : <font size=4 color =red> xxx </font>

### Steps to reproduce

1. xxx

2. xxx

3. xxx

### Expected result

<!-- What did you expect to happen? -->

### Actual result

<!-- What actually happened? -->

---

If there is an exception, please attach the stack trace:

```
Just put your stack trace here!
```
.github/ISSUE_TEMPLATE/config.yml (vendored, deleted)
@@ -1,8 +0,0 @@

blank_issues_enabled: true
contact_links:
  - name: Discuss a question
    url: https://github.com/didi/KnowStreaming/discussions/new
    about: Start questions, discussions, and so on
  - name: KnowStreaming official site
    url: https://knowstreaming.com/
    about: KnowStreaming website
.github/ISSUE_TEMPLATE/detail_optimizing.md (vendored, deleted)
@@ -1,26 +0,0 @@

---
name: Optimization suggestion
about: Suggestions for optimizing existing features
title: ''
labels: Optimization Suggestions
assignees: ''

---

- [ ] I have searched the [issues](https://github.com/didi/KnowStreaming/issues) and found no duplicates.

Would you like to claim this optimization?

「 Y / N 」

### Environment

* KnowStreaming version : <font size=4 color =red> xxx </font>
* Operating System version : <font size=4 color =red> xxx </font>
* Java version : <font size=4 color =red> xxx </font>

### Feature to optimize

### Suggested optimization
.github/ISSUE_TEMPLATE/feature_request.md (vendored, deleted)
@@ -1,20 +0,0 @@

---
name: Propose a new feature / requirement
about: Request a feature for KnowStreaming
title: ''
labels: feature
assignees: ''

---

- [ ] I did not find a related feature request in the [issues](https://github.com/didi/KnowStreaming/issues).
- [ ] I did not find the feature in any released version listed in the [release note](https://github.com/didi/KnowStreaming/releases).

Would you like to claim this feature?

「 Y / N 」

## Describe the requirement here
<!-- Please describe your requirement as clearly as possible -->
.github/ISSUE_TEMPLATE/question.md (vendored, deleted)
@@ -1,12 +0,0 @@

---
name: Ask a question
about: Ask a question about KnowStreaming
title: ''
labels: question
assignees: ''

---

- [ ] I have searched the [issues](https://github.com/didi/KnowStreaming/issues) and found no duplicates.

## Ask your question here
.github/PULL_REQUEST_TEMPLATE.md (vendored, deleted)
@@ -1,23 +0,0 @@

Please do not create a Pull Request without first creating an Issue.

## What is the purpose of the change

XXXXX

## Brief changelog

XX

## Verifying this change

XXXX

Please follow this checklist to help us integrate your contribution quickly and easily:

* [ ] One PR (short for Pull Request) solves exactly one problem; a single PR solving multiple problems is not allowed;
* [ ] Make sure the PR has a corresponding Issue (usually created before you start working), unless it is a trivial change such as a typo fix, which needs no Issue;
* [ ] Format the title and body of the PR and of the Commit-Log, e.g. #861. PS: the Commit-Log must be written when you run `git commit`; it cannot be changed on GitHub afterwards;
* [ ] Write a PR description detailed enough to understand what the PR does, how, and why;
* [ ] Write the unit tests needed to verify your logic. If you submit a new feature or a major change, remember to add an integration-test in the test module;
* [ ] Make sure the code compiles and the integration tests pass;
.gitignore (vendored, modified)
@@ -1,116 +1,113 @@

### Intellij ###
# Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio and Webstorm

*.iml

## Directory-based project format:
.idea/
# if you remove the above rule, at least ignore the following:

# User-specific stuff:
# .idea/workspace.xml
# .idea/tasks.xml
# .idea/dictionaries
# .idea/shelf

# Sensitive or high-churn files:
.idea/dataSources.ids
.idea/dataSources.xml
.idea/sqlDataSources.xml
.idea/dynamic.xml
.idea/uiDesigner.xml


# Mongo Explorer plugin:
.idea/mongoSettings.xml

## File-based project format:
*.ipr
*.iws

## Plugin-specific files:

# IntelliJ
/out/

# mpeltonen/sbt-idea plugin
.idea_modules/

# JIRA plugin
atlassian-ide-plugin.xml

# Crashlytics plugin (for Android Studio and IntelliJ)
com_crashlytics_export_strings.xml
crashlytics.properties
crashlytics-build.properties
fabric.properties


### Java ###
*.class

# Mobile Tools for Java (J2ME)
.mtj.tmp/

# Package Files #
*.jar
*.war
*.ear
*.tar.gz

# virtual machine crash logs, see http://www.java.com/en/download/help/error_hotspot.xml
hs_err_pid*


### OSX ###
.DS_Store
.AppleDouble
.LSOverride

# Icon must end with two \r
Icon


# Thumbnails
._*

# Files that might appear in the root of a volume
.DocumentRevisions-V100
.fseventsd
.Spotlight-V100
.TemporaryItems
.Trashes
.VolumeIcon.icns

# Directories potentially created on remote AFP share
.AppleDB
.AppleDesktop
Network Trash Folder
Temporary Items
.apdisk

/target
target/
*.log
*.log.*
*.bak
*.vscode
*/.vscode/*
*/.vscode
*/velocity.log*
*/*.log
*/*.log.*
node_modules/
node_modules/*
workspace.xml
/output/*
.gitversion
-out/*
-dist/
-dist/*
-km-rest/src/main/resources/templates/
-*dependency-reduced-pom*
-#filter flattened xml
-*/.flattened-pom.xml
-.flattened-pom.xml
-*/*/.flattened-pom.xml
+node_modules/*
+out/*
+dist/
+dist/*
+kafka-manager-web/src/main/resources/templates/
+.DS_Store
CONTRIBUTING.md (new file)
@@ -0,0 +1,28 @@

# Contribution Guideline

Thanks for considering contributing to this project. All issues and pull requests are highly appreciated.

## Pull Requests

Before sending a pull request to this project, please read and follow the guidelines below.

1. Branch: we only accept pull requests on the `dev` branch.
2. Coding style: follow the coding style used in kafka-manager.
3. Commit message: use English and mind your spelling.
4. Test: make sure to test your code.

Add device mode, API version, related logs, screenshots, and other related information to your pull request if possible.

NOTE: We assume all your contributions can be licensed under the [Apache License 2.0](LICENSE).

## Issues

We love clearly described issues. :)

The following information can help us resolve an issue faster:

* Device mode and hardware information.
* API version.
* Logs.
* Screenshots.
* Steps to reproduce the issue.
README.md (new file)
@@ -0,0 +1,117 @@

---

**A one-stop `Apache Kafka` cluster metrics monitoring and operations platform**

`LogiKM has drawn wide attention since it was open-sourced. To keep the project aligned with the future direction of Apache Kafka, the team has decided, after careful consideration, to rebrand it as Know Streaming around May 2022, at which point the project name and logo will be updated accordingly. Thanks for your continued support, and stay tuned!`

Reading this README gives you an overview of DiDi Logi-KafkaManager's user base, product positioning, and more. Through the demo address you can quickly try the full workflow of Kafka cluster metrics monitoring and operations.

## 1 Product overview

DiDi Logi-KafkaManager grew out of years of Kafka operating experience inside DiDi. It is a shared, multi-tenant Kafka cloud platform built for Kafka users and Kafka operators, focused on core scenarios such as Kafka operations control, monitoring and alerting, and resource governance, and hardened on large-scale clusters carrying massive data volumes. It reached 90% internal satisfaction and has formed commercial partnerships with several well-known companies.

### 1.1 Quick demo

- Demo address: http://117.51.150.133:8080, account/password: admin/admin

### 1.2 Experience maps

Unlike similar products, which mostly offer a single (administrator) view, DiDi Logi-KafkaManager provides role-based, multi-scenario experience maps: a **user experience map, an operations experience map, and an operating experience map**.

#### 1.2.1 User experience map

- Platform tenant application: apply for an application (App) as the user name in Kafka, authenticating with AppID+password
- Cluster resource application: apply and use on demand; use the shared clusters the platform provides, or apply for a dedicated cluster for an application
- Topic application: create Topics under an application (App), or apply for read/write permission on other topics
- Topic operations: Topic data sampling, quota adjustment, partition expansion, and other operations
- Metrics monitoring: per-stage latency statistics for Topic production and consumption, with performance metrics monitored at different percentiles
- Consumer group operations: reset consumer offsets to a specified time or position

#### 1.2.2 Operations experience map

- Multi-version cluster management: supports versions from `0.10.2` to `2.x`
- Cluster monitoring: historical and real-time key metrics for cluster Topics, Brokers, and more, backed by a health-score system
- Cluster operations: designate a group of Brokers as a Region, use Regions as the unit of resource division, and split logical clusters by business and assurance level
- Broker operations: preferred-replica election and other operations
- Topic operations: create, query, expand partitions, modify properties, migrate, take offline, and more

#### 1.2.3 Operating experience map

- Resource governance: codified governance methods. For frequent problems such as Topic partition hotspots and insufficient partitions, accumulated governance methods make resource governance expert-grade
- Resource approval: a ticket system. Topic creation, quota adjustment, partition expansion, and similar operations are approved by professional operators, standardizing resource usage and keeping the platform stable
- Billing system: cost control. Topic and cluster resources are applied for and used on demand; fees are computed from traffic, helping enterprises build a big-data cost-accounting system

### 1.3 Core strengths

- Efficient problem diagnosis: monitors many core metrics at different percentiles and provides a rich set of metric reports, helping users and operators locate problems quickly
- Convenient cluster operations: Regions define the unit of resource division, and logical clusters are split by assurance level, which eases resource isolation, improves scalability, and gives strong control over the server side
- Professional resource governance: governance methods and a health-score system distilled from years of operating practice inside DiDi, making governance of frequent problems such as Topic partition hotspots and insufficient partitions expert-grade
- Friendly operations ecosystem: integrates with DiDi's Nightingale (夜莺) monitoring and alerting system, bundling monitoring/alerting, cluster deployment, and cluster upgrade capabilities into an operations ecosystem with distilled expert services

### 1.4 DiDi Logi-KafkaManager architecture

## 2 Documentation

### 2.1 Product documentation

- [DiDi LogiKM installation guide](docs/install_guide/install_guide_cn.md)
- [DiDi LogiKM cluster onboarding](docs/user_guide/add_cluster/add_cluster.md)
- [DiDi LogiKM user guide](docs/user_guide/user_guide_cn.md)
- [DiDi LogiKM FAQ](docs/user_guide/faq.md)

### 2.2 Community articles

- [Product introduction on the DiDi Cloud website](https://www.didiyun.com/production/logi-KafkaManager.html)
- [Seven years in the making: DiDi's Logi log service suite](https://mp.weixin.qq.com/s/-KQp-Qo3WKEOc9wIR2iFnw)
- [DiDi LogiKM: a one-stop Kafka monitoring and management platform](https://mp.weixin.qq.com/s/9qSZIkqCnU6u9nLMvOOjIQ)
- [DiDi LogiKM's open-source journey](https://xie.infoq.cn/article/0223091a99e697412073c0d64)
- [DiDi LogiKM video tutorial series](https://space.bilibili.com/442531657/channel/seriesdetail?sid=571649)
- [The most complete Kafka knowledge map](https://www.szzdzhp.com/kafka/)
- [LogiKM beginner article series, by 石臻臻](https://www.szzdzhp.com/categories/LogIKM/)
- [Kafka in practice (15): a study of DiDi's open-source Kafka management platform LogiKM, by A叶子叶来](https://blog.csdn.net/yezonggang/article/details/113106244)

## 3 DiDi Logi open-source user group

To discuss Kafka, ES, and other middleware and big-data technologies with the experts, join the WeChat group.

WeChat: add <font color=red>mike_zhangliang</font> or <font color=red>PenceXie</font> with the note "Logi加群", or follow the official account 云原生可观测性 and reply "Logi加群".

## 4 Knowledge Planet

<img width="447" alt="image" src="https://user-images.githubusercontent.com/71620349/147314042-843a371a-48c0-4d9a-a65e-ca40236f3300.png">

<br>

<center>

✅ We are building China's largest and most authoritative

</center>

<br>

<center>

<font color=red size=5><b>【Kafka中文社区】(Kafka Chinese Community)</b></font>

</center>

Here you can meet Kafka experts from major internet companies and nearly 2000+ Kafka enthusiasts, share knowledge, and stay on top of the latest industry news. We look forward to your joining: https://z.didi.cn/5gSF9

<font color=red size=5>Every question answered!</font>

<font color=red size=5>Rewards for participating!</font>

PS: when asking, please describe the problem completely in one go and include environment information, such as the version in use, the steps taken, and any error/warning messages, so the experts can answer quickly.

## 5 Project members

### 5.1 Core members

`iceyuhui`, `liuyaguang`, `limengmonty`, `zhangliangmike`, `xiepeng`, `nullhuangyiming`, `zengqiao`, `eilenexuzhe`, `huangjiaweihjw`, `zhaoyinrui`, `marzkonglingxu`, `joysunchao`, `石臻臻`

### 5.2 External contributors

`fangjunyu`, `zhoutaiyang`

## 6 License

`LogiKM` is distributed and used under the `Apache-2.0` license. See the [LICENSE file](./LICENSE) for more information.
Releases_Notes.md (new file)
@@ -0,0 +1,174 @@

---

**A one-stop `Apache Kafka` cluster metrics monitoring and operations platform**

---

## v2.6.0

Release date: 2022-01-24

### Enhancements

- Added a simple back-off utility class

### Experience improvements

- Added documentation for periodic tasks
- Added documentation for cluster installation and deployment
- Upgraded Swagger, SpringFramework, SpringBoot, and ECharts
- Improved log output of the Task module
- Fixed the silent exit, with no log message, when cron expression parsing fails
- Added department and email information when onboarding Ldap users
- Added a back-off mechanism after connection failures in the Jmx module and improved its error logs
- Made thread pools and client pools configurable
- Removed the unused jmx_prometheus_javaagent-0.14.0.jar
- Improved migration task names
- Fixed Region capacity information not being updated immediately after creating a Region
- Introduced lombok
- Updated the video tutorials
- Made the LogiKM address in the kcm_script.sh script configurable via program input
- Added a switch for skipping login on third-party and gateway interfaces
- extends-module configuration no longer has to be placed in application.yml

### Bug fixes

- Fixed a SQL syntax exception when batch-writing an empty metrics array to the DB
- Fixed version not changing when adding or modifying gateway configuration
- Fixed a tooltip overlapping issue on the cluster list page
- Fixed failure to parse Broker metadata for newer protocol versions
- Fixed the missing application.yml error when running the Dockerfile
- Fixed a NullPointerException when updating a logical cluster

## v2.4.1+

Release date: 2021-05-21

### Enhancements

- Added interfaces for granting permissions and quotas directly (v2.4.1)
- Added the ability to bypass login for interface calls (v2.4.1)

### Experience improvements

- Upgraded tomcat to 8.5.66 (v2.4.2)
- Improved op interfaces: split the util interface into topic and leader interfaces (v2.4.1)
- Shortened the Gateway configuration key length (v2.4.1)

### Bug fixes

- Fixed the wrong version being displayed on the page (v2.4.2)

## v2.4.0

Release date: 2021-05-18

### Enhancements

- Added automatic approval switches for Apps and Topics
- Added Rack information to Broker metadata
- Upgraded the MySQL driver to support MySQL 8+
- Added an operation-record query page

### Experience improvements

- Improved the FAQ description of alert groups
- Clarified the shared vs. dedicated cluster concepts in the user guide
- On the user management page, the frontend now prevents users from deleting themselves

### Bug fixes

- Fixed the Topic-creation interface in the op-util class
- Fixed the periodic task that syncs Topics to the DB: the Topic list is now read directly from the DB instead of the cache
- Fixed failing approvals for application offboarding by filtering out entries with permission 0 (no permission)
- Fixed a login and permission bypass vulnerability
- Fixed the developer role seeing the "onboard cluster" and "pause monitoring" buttons

## v2.3.0

Release date: 2021-02-08

### Enhancements

- Added support for docker deployment
- Brokers can be designated as candidate controllers
- Gateway configuration can be added and managed
- Consumer group status can be retrieved
- Added JMX authentication for clusters

### Experience improvements

- Improved the flows for editing user roles and changing passwords
- Added search by consumerID
- Improved the copy for "Topic connection info", "reset consumer group offsets", and "modify Topic retention time"
- Added links to the resource application document in the relevant places

### Bug fixes

- Fixed the time-axis display in Broker monitoring charts
- Fixed the wrong alert-period unit when creating Nightingale (夜莺) alert rules

## v2.2.0

Release date: 2021-01-25

### Enhancements

- Improved the batch ticket workflow
- Added real-time 75th/99th percentile latency data for Topics
- Added a scheduled task that periodically writes unowned Topics not yet in the DB into the DB

### Experience improvements

- Added links to the cluster onboarding document in the relevant places
- Clarified the meaning of physical vs. logical clusters
- Show the Topic's Region on the Topic detail page and in the partition-expansion dialog
- Improved the retention-time configuration flow during Topic approval
- Improved the error messages for Topic/application applications and approvals
- Improved the copy of the Topic data-sampling action
- Improved the prompt shown to operators when deleting a Topic
- Improved the deletion logic and prompt when operators delete a Region
- Improved the prompt when operators delete a logical cluster
- Improved the file-type restrictions when uploading cluster configuration files

### Bug fixes

- Fixed special-character validation errors in application names
- Fixed ordinary users being able to access application details without authorization
- Fixed the data compression format being unavailable after a Kafka version upgrade
- Fixed deleted logical clusters or Topics still being displayed
- Fixed duplicate result prompts during Leader rebalance operations

## v2.1.0

Release date: 2020-12-19

### Experience improvements

- Improved the background style during page load
- Improved the flow for ordinary users applying for Topic permissions
- Improved the permission restrictions for quota and partition applications
- Improved the copy for revoking Topic permissions
- Improved the field names in the quota application form
- Improved the offset-reset workflow
- Improved the form for creating Topic migration tasks
- Improved the dialog style of the partition-expansion operation
- Improved the style of cluster Broker monitoring charts
- Improved the form for creating a logical cluster
- Improved the prompt for cluster security protocols

### Bug fixes

- Fixed occasional failures when resetting consumer offsets
container/dockerfiles/Dockerfile (new file)
@@ -0,0 +1,29 @@

FROM openjdk:16-jdk-alpine3.13

LABEL author="fengxsong"
RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/g' /etc/apk/repositories && apk add --no-cache tini

ENV VERSION 2.4.2
WORKDIR /opt/

ENV AGENT_HOME /opt/agent/
COPY docker-depends/config.yaml $AGENT_HOME
COPY docker-depends/jmx_prometheus_javaagent-0.15.0.jar $AGENT_HOME

ENV JAVA_AGENT="-javaagent:$AGENT_HOME/jmx_prometheus_javaagent-0.15.0.jar=9999:$AGENT_HOME/config.yaml"
ENV JAVA_HEAP_OPTS="-Xms1024M -Xmx1024M -Xmn100M "
ENV JAVA_OPTS="-verbose:gc \
    -XX:MaxMetaspaceSize=256M -XX:+DisableExplicitGC -XX:+UseStringDeduplication \
    -XX:+UseG1GC -XX:+HeapDumpOnOutOfMemoryError -XX:-UseContainerSupport"

RUN wget https://github.com/didi/Logi-KafkaManager/releases/download/v${VERSION}/kafka-manager-${VERSION}.tar.gz && \
    tar xvf kafka-manager-${VERSION}.tar.gz && \
    mv kafka-manager-${VERSION}/kafka-manager.jar /opt/app.jar && \
    mv kafka-manager-${VERSION}/application.yml /opt/application.yml && \
    rm -rf kafka-manager-${VERSION}*

EXPOSE 8080 9999

ENTRYPOINT ["tini", "--"]

CMD [ "sh", "-c", "java -jar $JAVA_AGENT $JAVA_HEAP_OPTS $JAVA_OPTS app.jar --spring.config.location=application.yml"]
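The three `ENV` layers in the Dockerfile above are only strings until the `CMD` expands them; a minimal plain-shell sketch of how they compose into the container's start command (values copied verbatim from the Dockerfile, nothing is launched here):

```shell
# Reproduce the Dockerfile's ENV variables and expand them the way the
# container's `sh -c` CMD would. Note AGENT_HOME ends in "/" and the agent
# flag adds another, so the expanded path contains "//" (harmless on Linux).
AGENT_HOME="/opt/agent/"
JAVA_AGENT="-javaagent:$AGENT_HOME/jmx_prometheus_javaagent-0.15.0.jar=9999:$AGENT_HOME/config.yaml"
JAVA_HEAP_OPTS="-Xms1024M -Xmx1024M -Xmn100M "
JAVA_OPTS="-verbose:gc -XX:MaxMetaspaceSize=256M -XX:+DisableExplicitGC -XX:+UseStringDeduplication -XX:+UseG1GC -XX:+HeapDumpOnOutOfMemoryError -XX:-UseContainerSupport"

# The command the container ultimately runs (port 9999 serves JMX metrics).
START_CMD="java -jar $JAVA_AGENT $JAVA_HEAP_OPTS $JAVA_OPTS app.jar --spring.config.location=application.yml"
echo "$START_CMD"
```

Because `$JAVA_AGENT` sits after `-jar`, the JVM still accepts it: options may appear between `-jar` and the jar file name.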
container/dockerfiles/docker-depends/config.yaml (new file)
@@ -0,0 +1,5 @@

---
startDelaySeconds: 0
ssl: false
lowercaseOutputName: false
lowercaseOutputLabelNames: false
container/helm/.helmignore (new file)
@@ -0,0 +1,23 @@

# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
container/helm/Chart.lock (new file)
@@ -0,0 +1,6 @@

dependencies:
- name: mysql
  repository: https://charts.bitnami.com/bitnami
  version: 8.6.3
digest: sha256:d250c463c1d78ba30a24a338a06a551503c7a736621d974fe4999d2db7f6143e
generated: "2021-06-24T11:34:54.625217+08:00"
container/helm/Chart.yaml (new file)
@@ -0,0 +1,29 @@

apiVersion: v2
name: didi-km
description: Logi-KafkaManager

# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application

# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0

# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "2.4.2"
dependencies:
- condition: mysql.enabled
  name: mysql
  repository: https://charts.bitnami.com/bitnami
  version: 8.x.x

container/helm/charts/mysql-8.6.3.tgz (new binary file)
container/helm/templates/NOTES.txt (new file)
@@ -0,0 +1,22 @@

1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range $host := .Values.ingress.hosts }}
  {{- range .paths }}
  http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ .path }}
  {{- end }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
  export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "didi-km.fullname" . }})
  export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
  NOTE: It may take a few minutes for the LoadBalancer IP to be available.
  You can watch the status of by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "didi-km.fullname" . }}'
  export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "didi-km.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
  echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
  export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "didi-km.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
  export CONTAINER_PORT=$(kubectl get pod --namespace {{ .Release.Namespace }} $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:$CONTAINER_PORT
{{- end }}
container/helm/templates/_helpers.tpl (new file)
@@ -0,0 +1,62 @@

{{/*
Expand the name of the chart.
*/}}
{{- define "didi-km.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "didi-km.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}

{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "didi-km.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Common labels
*/}}
{{- define "didi-km.labels" -}}
helm.sh/chart: {{ include "didi-km.chart" . }}
{{ include "didi-km.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}

{{/*
Selector labels
*/}}
{{- define "didi-km.selectorLabels" -}}
app.kubernetes.io/name: {{ include "didi-km.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

{{/*
Create the name of the service account to use
*/}}
{{- define "didi-km.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "didi-km.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}
container/helm/templates/configmap.yaml (new file)
@@ -0,0 +1,110 @@

{{- define "datasource.mysql" -}}
{{- if .Values.mysql.enabled }}
{{- printf "%s-mysql" (include "didi-km.fullname" .) -}}
{{- else -}}
{{- printf "%s" .Values.externalDatabase.host -}}
{{- end -}}
{{- end -}}

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "didi-km.fullname" . }}-configs
  labels:
    {{- include "didi-km.labels" . | nindent 4 }}
data:
  application.yml: |
    server:
      port: 8080
      tomcat:
        accept-count: 1000
        max-connections: 10000
        max-threads: 800
        min-spare-threads: 100

    spring:
      application:
        name: kafkamanager
      datasource:
        kafka-manager:
          jdbc-url: jdbc:mysql://{{ include "datasource.mysql" . }}:3306/{{ .Values.mysql.auth.database }}?characterEncoding=UTF-8&serverTimezone=GMT%2B8&useSSL=false
          username: {{ .Values.mysql.auth.username }}
          password: {{ .Values.mysql.auth.password }}
          driver-class-name: com.mysql.jdbc.Driver
      main:
        allow-bean-definition-overriding: true

      profiles:
        active: dev
      servlet:
        multipart:
          max-file-size: 100MB
          max-request-size: 100MB

    logging:
      config: classpath:logback-spring.xml

    custom:
      idc: cn
      jmx:
        max-conn: 20
      store-metrics-task:
        community:
          broker-metrics-enabled: true
          topic-metrics-enabled: true
        didi:
          app-topic-metrics-enabled: false
          topic-request-time-metrics-enabled: false
          topic-throttled-metrics-enabled: false
        save-days: 7

    # task-related switches
    task:
      op:
        sync-topic-enabled: false # periodically sync Topics not yet persisted to the DB

    account:
      # ldap settings
      ldap:
        enabled: false
        url: ldap://127.0.0.1:389/
        basedn: dc=tsign,dc=cn
        factory: com.sun.jndi.ldap.LdapCtxFactory
        filter: sAMAccountName
        security:
          authentication: simple
          principal: cn=admin,dc=tsign,dc=cn
          credentials: admin
        auth-user-registration: false
        auth-user-registration-role: normal

    kcm:
      enabled: false
      storage:
        base-url: http://127.0.0.1
      n9e:
        base-url: http://127.0.0.1:8004
        user-token: 12345678
        timeout: 300
        account: root
        script-file: kcm_script.sh

    monitor:
      enabled: false
      n9e:
        nid: 2
        user-token: 1234567890
        mon:
          base-url: http://127.0.0.1:8032
        sink:
          base-url: http://127.0.0.1:8006
        rdb:
          base-url: http://127.0.0.1:80

    notify:
      kafka:
        cluster-id: 95
        topic-name: didi-kafka-notify
      order:
        detail-url: http://127.0.0.1
container/helm/templates/deployment.yaml
Normal file
@@ -0,0 +1,64 @@
|
|||||||
|
apiVersion: apps/v1
|
||||||
|
kind: Deployment
|
||||||
|
metadata:
|
||||||
|
name: {{ include "didi-km.fullname" . }}
|
||||||
|
labels:
|
||||||
|
{{- include "didi-km.labels" . | nindent 4 }}
|
||||||
|
spec:
|
||||||
|
{{- if not .Values.autoscaling.enabled }}
|
||||||
|
replicas: {{ .Values.replicaCount }}
|
||||||
|
{{- end }}
|
||||||
|
selector:
|
||||||
|
matchLabels:
|
||||||
|
{{- include "didi-km.selectorLabels" . | nindent 6 }}
|
||||||
|
template:
|
||||||
|
metadata:
|
||||||
|
{{- with .Values.podAnnotations }}
|
||||||
|
annotations:
|
||||||
|
{{- toYaml . | nindent 8 }}
|
||||||
|
{{- end }}
|
||||||
|
labels:
|
||||||
|
{{- include "didi-km.selectorLabels" . | nindent 8 }}
|
||||||
|
spec:
|
||||||
|
{{- with .Values.imagePullSecrets }}
|
||||||
|
imagePullSecrets:
|
||||||
|
{{- toYaml . | nindent 8 }}
|
||||||
|
{{- end }}
|
||||||
|
serviceAccountName: {{ include "didi-km.serviceAccountName" . }}
|
||||||
|
securityContext:
|
||||||
|
{{- toYaml .Values.podSecurityContext | nindent 8 }}
|
||||||
|
containers:
|
||||||
|
- name: {{ .Chart.Name }}
|
||||||
|
securityContext:
|
||||||
|
{{- toYaml .Values.securityContext | nindent 12 }}
|
||||||
|
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
|
||||||
|
imagePullPolicy: {{ .Values.image.pullPolicy }}
|
||||||
|
ports:
|
||||||
|
- name: http
|
||||||
|
containerPort: 8080
|
||||||
|
protocol: TCP
|
||||||
|
- name: jmx-metrics
|
||||||
|
containerPort: 9999
|
||||||
|
protocol: TCP
|
||||||
|
resources:
|
||||||
|
{{- toYaml .Values.resources | nindent 12 }}
|
||||||
|
volumeMounts:
|
||||||
|
- name: configs
|
||||||
|
mountPath: /tmp/application.yml
|
||||||
|
subPath: application.yml
|
||||||
|
{{- with .Values.nodeSelector }}
|
||||||
|
nodeSelector:
|
||||||
|
{{- toYaml . | nindent 8 }}
|
||||||
|
{{- end }}
|
||||||
|
{{- with .Values.affinity }}
|
||||||
|
affinity:
|
||||||
|
{{- toYaml . | nindent 8 }}
|
||||||
|
{{- end }}
|
||||||
|
{{- with .Values.tolerations }}
|
||||||
|
tolerations:
|
||||||
|
{{- toYaml . | nindent 8 }}
|
||||||
|
{{- end }}
|
||||||
|
volumes:
|
||||||
|
- name: configs
|
||||||
|
configMap:
|
||||||
|
name: {{ include "didi-km.fullname" . }}-configs
|
||||||
28
container/helm/templates/hpa.yaml
Normal file
@@ -0,0 +1,28 @@
{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: {{ include "didi-km.fullname" . }}
  labels:
    {{- include "didi-km.labels" . | nindent 4 }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ include "didi-km.fullname" . }}
  minReplicas: {{ .Values.autoscaling.minReplicas }}
  maxReplicas: {{ .Values.autoscaling.maxReplicas }}
  metrics:
    {{- if .Values.autoscaling.targetCPUUtilizationPercentage }}
    - type: Resource
      resource:
        name: cpu
        targetAverageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
    {{- end }}
    {{- if .Values.autoscaling.targetMemoryUtilizationPercentage }}
    - type: Resource
      resource:
        name: memory
        targetAverageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }}
    {{- end }}
{{- end }}
41
container/helm/templates/ingress.yaml
Normal file
@@ -0,0 +1,41 @@
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "didi-km.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    {{- include "didi-km.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: {{ .path }}
            backend:
              serviceName: {{ $fullName }}
              servicePort: {{ $svcPort }}
          {{- end }}
    {{- end }}
{{- end }}
15
container/helm/templates/service.yaml
Normal file
@@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
  name: {{ include "didi-km.fullname" . }}
  labels:
    {{- include "didi-km.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    {{- include "didi-km.selectorLabels" . | nindent 4 }}
12
container/helm/templates/serviceaccount.yaml
Normal file
@@ -0,0 +1,12 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "didi-km.serviceAccountName" . }}
  labels:
    {{- include "didi-km.labels" . | nindent 4 }}
  {{- with .Values.serviceAccount.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
{{- end }}
15
container/helm/templates/tests/test-connection.yaml
Normal file
@@ -0,0 +1,15 @@
apiVersion: v1
kind: Pod
metadata:
  name: "{{ include "didi-km.fullname" . }}-test-connection"
  labels:
    {{- include "didi-km.labels" . | nindent 4 }}
  annotations:
    "helm.sh/hook": test
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args: ['{{ include "didi-km.fullname" . }}:{{ .Values.service.port }}']
  restartPolicy: Never
93
container/helm/values.yaml
Normal file
@@ -0,0 +1,93 @@
# Default values for didi-km.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: docker.io/fengxsong/logi-kafka-manager
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: "v2.4.2"

imagePullSecrets: []
nameOverride: ""
# fullnameOverride must be set to the same value as the release name
fullnameOverride: "km"

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

podAnnotations: {}

podSecurityContext: {}
  # fsGroup: 2000

securityContext: {}
  # capabilities:
  #   drop:
  #   - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000

service:
  type: ClusterIP
  port: 8080

ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths: []
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources:
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  limits:
    cpu: 500m
    memory: 2048Mi
  requests:
    cpu: 100m
    memory: 200Mi

autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80

nodeSelector: {}

tolerations: []

affinity: {}

# more configurations are set with configmap in file template/configmap.yaml
externalDatabase:
  host: ""
mysql:
  # if enabled is set to false, you must manually specify externalDatabase.host
  enabled: true
  architecture: standalone
  auth:
    rootPassword: "s3cretR00t"
    database: "logi_kafka_manager"
    username: "logi_kafka_manager"
    password: "n0tp@55w0rd"
16
distribution/bin/shutdown.sh
Normal file
@@ -0,0 +1,16 @@
#!/bin/bash

cd `dirname $0`/../target
target_dir=`pwd`

pid=`ps ax | grep -i 'kafka-manager' | grep ${target_dir} | grep java | grep -v grep | awk '{print $1}'`
if [ -z "$pid" ] ; then
    echo "No kafka-manager running."
    exit -1;
fi

echo "The kafka-manager (${pid}) is running..."

kill ${pid}

echo "Send shutdown request to kafka-manager (${pid}) OK"
81
distribution/bin/startup.sh
Normal file
@@ -0,0 +1,81 @@
error_exit ()
{
    echo "ERROR: $1 !!"
    exit 1
}

[ ! -e "$JAVA_HOME/bin/java" ] && JAVA_HOME=$HOME/jdk/java
[ ! -e "$JAVA_HOME/bin/java" ] && JAVA_HOME=/usr/java
[ ! -e "$JAVA_HOME/bin/java" ] && unset JAVA_HOME

# detect macOS so the java_home lookup below works
darwin=false
case "`uname`" in
    Darwin*) darwin=true;;
esac

if [ -z "$JAVA_HOME" ]; then
  if $darwin; then
    if [ -x '/usr/libexec/java_home' ] ; then
        export JAVA_HOME=`/usr/libexec/java_home`
    elif [ -d "/System/Library/Frameworks/JavaVM.framework/Versions/CurrentJDK/Home" ]; then
        export JAVA_HOME="/System/Library/Frameworks/JavaVM.framework/Versions/CurrentJDK/Home"
    fi
  else
    JAVA_PATH=`dirname $(readlink -f $(which javac))`
    if [ "x$JAVA_PATH" != "x" ]; then
        export JAVA_HOME=`dirname $JAVA_PATH 2>/dev/null`
    fi
  fi
  if [ -z "$JAVA_HOME" ]; then
    error_exit "Please set the JAVA_HOME variable in your environment. We need java(x64)! JDK 8 or later is better!"
  fi
fi

export WEB_SERVER="kafka-manager"
export JAVA_HOME
export JAVA="$JAVA_HOME/bin/java"
export BASE_DIR=`cd $(dirname $0)/..; pwd`
export CUSTOM_SEARCH_LOCATIONS=file:${BASE_DIR}/conf/

#===========================================================================================
# JVM Configuration
#===========================================================================================

JAVA_OPT="${JAVA_OPT} -server -Xms2g -Xmx2g -Xmn1g -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=320m"
JAVA_OPT="${JAVA_OPT} -XX:-OmitStackTraceInFastThrow -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=${BASE_DIR}/logs/java_heapdump.hprof"

## some GC flags are deprecated on newer JDK versions
JAVA_MAJOR_VERSION=$($JAVA -version 2>&1 | sed -E -n 's/.* version "([0-9]*).*$/\1/p')
if [[ "$JAVA_MAJOR_VERSION" -ge "9" ]] ; then
  JAVA_OPT="${JAVA_OPT} -Xlog:gc*:file=${BASE_DIR}/logs/km_gc.log:time,tags:filecount=10,filesize=102400"
else
  JAVA_OPT="${JAVA_OPT} -Djava.ext.dirs=${JAVA_HOME}/jre/lib/ext:${JAVA_HOME}/lib/ext"
  JAVA_OPT="${JAVA_OPT} -Xloggc:${BASE_DIR}/logs/km_gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M"
fi

JAVA_OPT="${JAVA_OPT} -jar ${BASE_DIR}/target/${WEB_SERVER}.jar"
JAVA_OPT="${JAVA_OPT} --spring.config.additional-location=${CUSTOM_SEARCH_LOCATIONS}"
JAVA_OPT="${JAVA_OPT} --logging.config=${BASE_DIR}/conf/logback-spring.xml"
JAVA_OPT="${JAVA_OPT} --server.max-http-header-size=524288"

if [ ! -d "${BASE_DIR}/logs" ]; then
    mkdir ${BASE_DIR}/logs
fi

echo "$JAVA ${JAVA_OPT}"

# check the start.out log output file
if [ ! -f "${BASE_DIR}/logs/start.out" ]; then
    touch "${BASE_DIR}/logs/start.out"
fi
# start
echo -e "---- startup command ------\n $JAVA ${JAVA_OPT}" > ${BASE_DIR}/logs/start.out 2>&1 &

nohup $JAVA ${JAVA_OPT} >> ${BASE_DIR}/logs/start.out 2>&1 &

echo "${WEB_SERVER} is starting, you can check ${BASE_DIR}/logs/start.out"
29
distribution/conf/application.yml
Normal file
@@ -0,0 +1,29 @@

## kafka-manager configuration file; settings here override the defaults
## The entries below are essentially the default application.yml bundled in the jar;
## keep only the settings you change and delete the rest, e.g. configure only mysql


server:
  port: 8080
  tomcat:
    accept-count: 1000
    max-connections: 10000
    max-threads: 800
    min-spare-threads: 100

spring:
  application:
    name: kafkamanager
  version: 2.6.0
  profiles:
    active: dev
  datasource:
    kafka-manager:
      jdbc-url: jdbc:mysql://localhost:3306/logi_kafka_manager?characterEncoding=UTF-8&useSSL=false&serverTimezone=GMT%2B8
      username: root
      password: 123456
      driver-class-name: com.mysql.cj.jdbc.Driver
  main:
    allow-bean-definition-overriding: true
136
distribution/conf/application.yml.example
Normal file
@@ -0,0 +1,136 @@

## kafka-manager configuration file; settings here override the defaults
## The entries below are essentially the default application.yml bundled in the jar;
## keep only the settings you change and delete the rest, e.g. configure only mysql


server:
  port: 8080
  tomcat:
    accept-count: 1000
    max-connections: 10000
    max-threads: 800
    min-spare-threads: 100

spring:
  application:
    name: kafkamanager
  version: 2.6.0
  profiles:
    active: dev
  datasource:
    kafka-manager:
      jdbc-url: jdbc:mysql://localhost:3306/logi_kafka_manager?characterEncoding=UTF-8&useSSL=false&serverTimezone=GMT%2B8
      username: root
      password: 123456
      driver-class-name: com.mysql.cj.jdbc.Driver
  main:
    allow-bean-definition-overriding: true

  servlet:
    multipart:
      max-file-size: 100MB
      max-request-size: 100MB

logging:
  config: classpath:logback-spring.xml

custom:
  idc: cn
  store-metrics-task:
    community:
      topic-metrics-enabled: true
    didi: # metrics specific to DiDi's Kafka
      app-topic-metrics-enabled: false
      topic-request-time-metrics-enabled: false
      topic-throttled-metrics-enabled: false

# task-related settings
task:
  op:
    sync-topic-enabled: false # periodically sync Topics not yet persisted to the DB
    order-auto-exec: # switches for the automatic order-approval thread
      topic-enabled: false # automatic approval of Topic orders, false: disabled, true: enabled
      app-enabled: false # automatic approval of App orders, false: disabled, true: enabled
  metrics:
    collect: # metrics collection
      broker-metrics-enabled: true # collect Broker metrics
    sink: # metrics reporting
      cluster-metrics: # report cluster metrics
        sink-db-enabled: true # report to the db
      broker-metrics: # report broker metrics
        sink-db-enabled: true # report to the db
    delete: # metrics deletion
      delete-limit-size: 1000 # batch size per delete
      cluster-metrics-save-days: 14 # retention days for cluster metrics
      broker-metrics-save-days: 14 # retention days for Broker metrics
      topic-metrics-save-days: 7 # retention days for Topic metrics
      topic-request-time-metrics-save-days: 7 # retention days for Topic request-time metrics
      topic-throttled-metrics-save-days: 7 # retention days for Topic throttling metrics
      app-topic-metrics-save-days: 7 # retention days for App+Topic metrics

thread-pool:
  collect-metrics:
    thread-num: 256 # size of the metrics-collection thread pool
    queue-size: 5000 # queue size of the metrics-collection thread pool
  api-call:
    thread-num: 16 # size of the api-service thread pool
    queue-size: 5000 # queue size of the api-service thread pool

client-pool:
  kafka-consumer:
    min-idle-client-num: 24 # minimum number of idle clients
    max-idle-client-num: 24 # maximum number of idle clients
    max-total-client-num: 24 # maximum total number of clients
    borrow-timeout-unit-ms: 3000 # borrow timeout, in milliseconds

account:
  jump-login:
    gateway-api: false # gateway API
    third-part-api: false # third-party API
  ldap:
    enabled: false
    url: ldap://127.0.0.1:389/
    basedn: dc=tsign,dc=cn
    factory: com.sun.jndi.ldap.LdapCtxFactory
    filter: sAMAccountName
    security:
      authentication: simple
      principal: cn=admin,dc=tsign,dc=cn
      credentials: admin
    auth-user-registration: true
    auth-user-registration-role: normal

kcm: # cluster installation & deployment, installs brokers only
  enabled: false # whether enabled
  s3: # s3 storage service
    endpoint: s3.didiyunapi.com
    access-key: 1234567890
    secret-key: 0987654321
    bucket: logi-kafka
  n9e: # Nightingale (n9e)
    base-url: http://127.0.0.1:8004 # address of the Nightingale job service
    user-token: 12345678 # user token
    timeout: 300 # per-host operation timeout
    account: root # account used for operations
    script-file: kcm_script.sh # built-in script in the kcm module of the source; no need to change this
    logikm-url: http://127.0.0.1:8080 # LogiKM address; kcm_script.sh calls LogiKM during deployment to check deployment status

monitor:
  enabled: false
  n9e:
    nid: 2
    user-token: 1234567890
    mon:
      base-url: http://127.0.0.1:8000 # Nightingale v4 unified the default port to 8000
    sink:
      base-url: http://127.0.0.1:8000 # Nightingale v4 unified the default port to 8000
    rdb:
      base-url: http://127.0.0.1:8000 # Nightingale v4 unified the default port to 8000

notify:
  kafka:
    cluster-id: 95
    topic-name: didi-kafka-notify
  order:
    detail-url: http://127.0.0.1
||||||
594
distribution/conf/create_mysql_table.sql
Normal file
@@ -0,0 +1,594 @@
|
|||||||
|
-- create database
|
||||||
|
CREATE DATABASE logi_kafka_manager;
|
||||||
|
|
||||||
|
USE logi_kafka_manager;
|
||||||
|
|
||||||
|
--
|
||||||
|
-- Table structure for table `account`
|
||||||
|
--
|
||||||
|
|
||||||
|
-- DROP TABLE IF EXISTS `account`;
|
||||||
|
CREATE TABLE `account` (
|
||||||
|
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
|
||||||
|
`username` varchar(128) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT '用户名',
|
||||||
|
`password` varchar(128) NOT NULL DEFAULT '' COMMENT '密码',
|
||||||
|
`role` tinyint(8) NOT NULL DEFAULT '0' COMMENT '角色类型, 0:普通用户 1:研发 2:运维',
|
||||||
|
`department` varchar(256) DEFAULT '' COMMENT '部门名',
|
||||||
|
`display_name` varchar(256) DEFAULT '' COMMENT '用户姓名',
|
||||||
|
`mail` varchar(256) DEFAULT '' COMMENT '邮箱',
|
||||||
|
`status` int(16) NOT NULL DEFAULT '0' COMMENT '0标识使用中,-1标识已废弃',
|
||||||
|
`gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
|
||||||
|
`gmt_modify` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
|
||||||
|
PRIMARY KEY (`id`),
|
||||||
|
UNIQUE KEY `uniq_username` (`username`)
|
||||||
|
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='账号表';
|
||||||
|
INSERT INTO account(username, password, role) VALUES ('admin', '21232f297a57a5a743894a0e4a801fc3', 2);
|
||||||
|
|
||||||
|
--
|
||||||
|
-- Table structure for table `app`
|
||||||
|
--
|
||||||
|
|
||||||
|
-- DROP TABLE IF EXISTS `app`;
|
||||||
|
CREATE TABLE `app` (
|
||||||
|
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '自增id',
|
||||||
|
`app_id` varchar(128) NOT NULL DEFAULT '' COMMENT '应用id',
|
||||||
|
`name` varchar(192) NOT NULL DEFAULT '' COMMENT '应用名称',
|
||||||
|
`password` varchar(256) NOT NULL DEFAULT '' COMMENT '应用密码',
|
||||||
|
`type` int(11) NOT NULL DEFAULT '0' COMMENT '类型, 0:普通用户, 1:超级用户',
|
||||||
|
`applicant` varchar(64) NOT NULL DEFAULT '' COMMENT '申请人',
|
||||||
|
`principals` text COMMENT '应用负责人',
|
||||||
|
`description` text COMMENT '应用描述',
|
||||||
|
`create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
|
||||||
|
`modify_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
|
||||||
|
PRIMARY KEY (`id`),
|
||||||
|
UNIQUE KEY `uniq_name` (`name`),
|
||||||
|
UNIQUE KEY `uniq_app_id` (`app_id`)
|
||||||
|
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='应用信息';
|
||||||
|
|
||||||
|
|
||||||
|
--
|
||||||
|
-- Table structure for table `authority`
|
||||||
|
--
|
||||||
|
|
||||||
|
-- DROP TABLE IF EXISTS `authority`;
|
||||||
|
CREATE TABLE `authority` (
|
||||||
|
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '自增id',
|
||||||
|
`app_id` varchar(128) NOT NULL DEFAULT '' COMMENT '应用id',
|
||||||
|
`cluster_id` bigint(20) NOT NULL DEFAULT '0' COMMENT '集群id',
|
||||||
|
`topic_name` varchar(192) NOT NULL DEFAULT '' COMMENT 'topic名称',
|
||||||
|
`access` int(11) NOT NULL DEFAULT '0' COMMENT '0:无权限, 1:读, 2:写, 3:读写',
|
||||||
|
`create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
|
||||||
|
`modify_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
|
||||||
|
PRIMARY KEY (`id`),
|
||||||
|
UNIQUE KEY `uniq_app_id_cluster_id_topic_name` (`app_id`,`cluster_id`,`topic_name`)
|
||||||
|
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='权限信息(kafka-manager)';
|
||||||
|
|
||||||
|
--
|
||||||
|
-- Table structure for table `broker`
|
||||||
|
--
|
||||||
|
|
||||||
|
-- DROP TABLE IF EXISTS `broker`;
|
||||||
|
CREATE TABLE `broker` (
|
||||||
|
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
|
||||||
|
`cluster_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT '集群id',
|
||||||
|
`broker_id` int(16) NOT NULL DEFAULT '-1' COMMENT 'brokerid',
|
||||||
|
`host` varchar(128) NOT NULL DEFAULT '' COMMENT 'broker主机名',
|
||||||
|
`port` int(16) NOT NULL DEFAULT '-1' COMMENT 'broker端口',
|
||||||
|
`timestamp` bigint(20) NOT NULL DEFAULT '-1' COMMENT '启动时间',
|
||||||
|
`max_avg_bytes_in` bigint(20) NOT NULL DEFAULT '-1' COMMENT '峰值的均值流量',
|
||||||
|
`version` varchar(128) NOT NULL DEFAULT '' COMMENT 'broker版本',
|
||||||
|
`status` int(16) NOT NULL DEFAULT '0' COMMENT '状态: 0有效,-1无效',
|
||||||
|
`gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
|
||||||
|
`gmt_modify` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
|
||||||
|
PRIMARY KEY (`id`),
|
||||||
|
UNIQUE KEY `uniq_cluster_id_broker_id` (`cluster_id`,`broker_id`)
|
||||||
|
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='broker信息表';
|
||||||
|
|
||||||
|
--
|
||||||
|
-- Table structure for table `broker_metrics`
|
||||||
|
--
|
||||||
|
|
||||||
|
-- DROP TABLE IF EXISTS `broker_metrics`;
|
||||||
|
CREATE TABLE `broker_metrics` (
|
||||||
|
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '自增id',
|
||||||
|
`cluster_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT '集群id',
|
||||||
|
`broker_id` int(16) NOT NULL DEFAULT '-1' COMMENT 'brokerid',
|
||||||
|
`metrics` text COMMENT '指标',
|
||||||
|
`messages_in` double(53,2) NOT NULL DEFAULT '0.00' COMMENT '每秒消息数流入',
|
||||||
|
`gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
|
||||||
|
PRIMARY KEY (`id`),
|
||||||
|
KEY `idx_cluster_id_broker_id_gmt_create` (`cluster_id`,`broker_id`,`gmt_create`),
|
||||||
|
KEY `idx_gmt_create` (`gmt_create`)
|
||||||
|
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='broker-metric信息表';
|
||||||
|
|
||||||
|
--
|
||||||
|
-- Table structure for table `cluster`
|
||||||
|
--
|
||||||
|
|
||||||
|
-- DROP TABLE IF EXISTS `cluster`;
|
||||||
|
CREATE TABLE `cluster` (
|
||||||
|
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '集群id',
|
||||||
|
`cluster_name` varchar(128) NOT NULL DEFAULT '' COMMENT '集群名称',
|
||||||
|
`zookeeper` varchar(512) NOT NULL DEFAULT '' COMMENT 'zk地址',
|
||||||
|
`bootstrap_servers` varchar(512) NOT NULL DEFAULT '' COMMENT 'server地址',
|
||||||
|
`kafka_version` varchar(32) NOT NULL DEFAULT '' COMMENT 'kafka版本',
|
||||||
|
`security_properties` text COMMENT 'Kafka安全认证参数',
|
||||||
|
`jmx_properties` text COMMENT 'JMX配置',
|
||||||
|
`status` tinyint(4) NOT NULL DEFAULT '1' COMMENT ' 监控标记, 0表示未监控, 1表示监控中',
|
||||||
|
`gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
|
||||||
|
`gmt_modify` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
|
||||||
|
PRIMARY KEY (`id`),
|
||||||
|
UNIQUE KEY `uniq_cluster_name` (`cluster_name`)
|
||||||
|
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='cluster信息表';
|
||||||
|
|
||||||
|
--
|
||||||
|
-- Table structure for table `cluster_metrics`
|
||||||
|
--
|
||||||
|
|
||||||
|
-- DROP TABLE IF EXISTS `cluster_metrics`;
|
||||||
|
CREATE TABLE `cluster_metrics` (
|
||||||
|
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '自增id',
|
||||||
|
`cluster_id` bigint(20) NOT NULL DEFAULT '0' COMMENT '集群id',
|
||||||
|
`metrics` text COMMENT '指标',
|
||||||
|
`gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
|
||||||
|
PRIMARY KEY (`id`),
|
||||||
|
KEY `idx_cluster_id_gmt_create` (`cluster_id`,`gmt_create`)
|
||||||
|
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='clustermetrics信息';
|
||||||
|
|
||||||
|
--
|
||||||
|
-- Table structure for table `cluster_tasks`
|
||||||
|
--
|
||||||
|
|
||||||
|
-- DROP TABLE IF EXISTS `cluster_tasks`;
|
||||||
|
CREATE TABLE `cluster_tasks` (
|
||||||
|
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '自增id',
|
||||||
|
`uuid` varchar(128) NOT NULL DEFAULT '' COMMENT '任务UUID',
|
||||||
|
`cluster_id` bigint(128) NOT NULL DEFAULT '-1' COMMENT '集群id',
|
||||||
|
`task_type` varchar(128) NOT NULL DEFAULT '' COMMENT '任务类型',
|
||||||
|
`kafka_package` text COMMENT 'kafka包',
|
||||||
|
`kafka_package_md5` varchar(128) NOT NULL DEFAULT '' COMMENT 'kafka包的md5',
|
||||||
|
`server_properties` text COMMENT 'kafkaserver配置',
|
||||||
|
`server_properties_md5` varchar(128) NOT NULL DEFAULT '' COMMENT '配置文件的md5',
|
||||||
|
`agent_task_id` bigint(128) NOT NULL DEFAULT '-1' COMMENT '任务id',
|
||||||
|
`agent_rollback_task_id` bigint(128) NOT NULL DEFAULT '-1' COMMENT '回滚任务id',
|
||||||
|
`host_list` text COMMENT '升级的主机',
|
||||||
|
`pause_host_list` text COMMENT '暂停点',
|
||||||
|
`rollback_host_list` text COMMENT '回滚机器列表',
|
||||||
|
`rollback_pause_host_list` text COMMENT '回滚暂停机器列表',
|
||||||
|
`operator` varchar(64) NOT NULL DEFAULT '' COMMENT '操作人',
|
||||||
|
`task_status` int(11) NOT NULL DEFAULT '0' COMMENT '任务状态',
|
||||||
|
`create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
|
||||||
|
`modify_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
|
||||||
|
PRIMARY KEY (`id`)
|
||||||
|
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='集群任务(集群升级部署)';
|
||||||
|
|
||||||
|
--
|
||||||
|
-- Table structure for table `config`
|
||||||
|
--
|
||||||
|
|
||||||
|
-- DROP TABLE IF EXISTS `config`;
|
||||||
|
CREATE TABLE `config` (
|
||||||
|
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
|
||||||
|
`config_key` varchar(128) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT '配置key',
|
||||||
|
`config_value` text COMMENT '配置value',
|
||||||
|
`config_description` text COMMENT '备注说明',
|
||||||
|
`status` int(16) NOT NULL DEFAULT '0' COMMENT '0标识使用中,-1标识已废弃',
|
||||||
|
`gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
|
||||||
|
`gmt_modify` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
|
||||||
|
PRIMARY KEY (`id`),
|
||||||
|
UNIQUE KEY `uniq_config_key` (`config_key`)
|
||||||
|
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='配置表';
|
||||||
|
|
||||||
|
--
|
||||||
|
-- Table structure for table `controller`
|
||||||
|
--
|
||||||
|
|
||||||
|
-- DROP TABLE IF EXISTS `controller`;
|
||||||
|
CREATE TABLE `controller` (
|
||||||
|
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '自增id',
|
||||||
|
`cluster_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT '集群id',
|
||||||
|
`broker_id` int(16) NOT NULL DEFAULT '-1' COMMENT 'brokerid',
|
||||||
|
`host` varchar(256) NOT NULL DEFAULT '' COMMENT '主机名',
|
||||||
|
`timestamp` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'controller变更时间',
|
||||||
|
`version` int(16) NOT NULL DEFAULT '-1' COMMENT 'controller格式版本',
|
||||||
|
`gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
|
||||||
|
PRIMARY KEY (`id`),
|
||||||
|
UNIQUE KEY `uniq_cluster_id_broker_id_timestamp` (`cluster_id`,`broker_id`,`timestamp`)
|
||||||
|
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='controller记录表';
|
||||||
|
|
||||||
|
--
|
||||||
|
-- Table structure for table `gateway_config`
|
||||||
|
--
|
||||||
|
|
||||||
|
-- DROP TABLE IF EXISTS `gateway_config`;
|
||||||
|
CREATE TABLE `gateway_config` (
|
||||||
|
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
|
||||||
|
`type` varchar(128) NOT NULL DEFAULT '' COMMENT '配置类型',
|
||||||
|
`name` varchar(128) NOT NULL DEFAULT '' COMMENT '配置名称',
|
||||||
|
`value` text COMMENT '配置值',
|
||||||
|
`version` bigint(20) unsigned NOT NULL DEFAULT '1' COMMENT '版本信息',
|
||||||
|
`description` text COMMENT '描述信息',
|
||||||
|
`create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
|
||||||
|
`modify_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
|
||||||
|
PRIMARY KEY (`id`),
|
||||||
|
UNIQUE KEY `uniq_type_name` (`type`,`name`)
|
||||||
|
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='gateway配置';
INSERT INTO gateway_config(type, name, value, `version`, `description`) values('SD_QUEUE_SIZE', 'SD_QUEUE_SIZE', 100000000, 1, 'queue size for any cluster');
INSERT INTO gateway_config(type, name, value, `version`, `description`) values('SD_APP_RATE', 'SD_APP_RATE', 100000000, 1, 'rate limit for any single app');
INSERT INTO gateway_config(type, name, value, `version`, `description`) values('SD_IP_RATE', 'SD_IP_RATE', 100000000, 1, 'rate limit for any single IP');
INSERT INTO gateway_config(type, name, value, `version`, `description`) values('SD_SP_RATE', 'app_01234567', 100000000, 1, 'rate limit for a specific app');
INSERT INTO gateway_config(type, name, value, `version`, `description`) values('SD_SP_RATE', '192.168.0.1', 100000000, 1, 'rate limit for a specific IP');
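The seed rows above install default gateway throttling entries; the `SD_SP_RATE` rows pin a limit to one app id or IP. A minimal sketch of how the effective limit for one client could be resolved, assuming the gateway prefers a client-specific `SD_SP_RATE` row and falls back to the global `SD_APP_RATE` default (the lookup logic itself is not part of this schema):

```sql
-- Hypothetical lookup for app 'app_01234567': take the specific
-- SD_SP_RATE row if present, otherwise the global SD_APP_RATE default.
SELECT type, name, value
  FROM gateway_config
 WHERE (type = 'SD_SP_RATE' AND name = 'app_01234567')
    OR (type = 'SD_APP_RATE')
 ORDER BY type = 'SD_SP_RATE' DESC
 LIMIT 1;
```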

--
-- Table structure for table `heartbeat`
--

-- DROP TABLE IF EXISTS `heartbeat`;
CREATE TABLE `heartbeat` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
  `ip` varchar(128) NOT NULL DEFAULT '' COMMENT 'host ip',
  `hostname` varchar(256) NOT NULL DEFAULT '' COMMENT 'hostname',
  `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'create time',
  `modify_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'modify time',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_ip` (`ip`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='heartbeat info';

--
-- Table structure for table `kafka_acl`
--

-- DROP TABLE IF EXISTS `kafka_acl`;
CREATE TABLE `kafka_acl` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'auto-increment id',
  `app_id` varchar(128) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT 'user id',
  `cluster_id` bigint(20) NOT NULL DEFAULT '0' COMMENT 'cluster id',
  `topic_name` varchar(192) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT 'topic name',
  `access` int(11) NOT NULL DEFAULT '0' COMMENT '0: none, 1: read, 2: write, 3: read/write',
  `operation` int(11) NOT NULL DEFAULT '0' COMMENT '0: create, 1: update, 2: delete; the latest row wins',
  `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'create time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='ACL info (kafka-broker)';

--
-- Table structure for table `kafka_bill`
--

-- DROP TABLE IF EXISTS `kafka_bill`;
CREATE TABLE `kafka_bill` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
  `cluster_id` bigint(20) NOT NULL DEFAULT '0' COMMENT 'cluster id',
  `topic_name` varchar(192) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT 'topic name',
  `principal` varchar(64) NOT NULL DEFAULT '' COMMENT 'owner',
  `quota` double(53,2) NOT NULL DEFAULT '0.00' COMMENT 'quota, in MB/s',
  `cost` double(53,2) NOT NULL DEFAULT '0.00' COMMENT 'cost, in CNY',
  `cost_type` int(16) NOT NULL DEFAULT '0' COMMENT 'cost type, 0: shared cluster, 1: exclusive cluster, 2: independent cluster',
  `gmt_day` varchar(64) NOT NULL DEFAULT '' COMMENT 'billing date, e.g. the result for 2019-02-02',
  `gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'create time',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_cluster_id_topic_name_gmt_day` (`cluster_id`,`topic_name`,`gmt_day`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='kafka bill';

--
-- Table structure for table `kafka_file`
--

-- DROP TABLE IF EXISTS `kafka_file`;
CREATE TABLE `kafka_file` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'auto-increment id',
  `cluster_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'cluster id',
  `storage_name` varchar(128) NOT NULL DEFAULT '' COMMENT 'storage location',
  `file_name` varchar(128) NOT NULL DEFAULT '' COMMENT 'file name',
  `file_md5` varchar(256) NOT NULL DEFAULT '' COMMENT 'file md5',
  `file_type` int(16) NOT NULL DEFAULT '-1' COMMENT '0: kafka package, 1: kafka server config',
  `description` text COMMENT 'remarks',
  `operator` varchar(64) NOT NULL DEFAULT '' COMMENT 'creator',
  `status` int(16) NOT NULL DEFAULT '0' COMMENT 'status, 0: normal, -1: deleted',
  `gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'create time',
  `gmt_modify` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'modify time',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_cluster_id_file_name_storage_name` (`cluster_id`,`file_name`,`storage_name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='file management';

--
-- Table structure for table `kafka_user`
--

-- DROP TABLE IF EXISTS `kafka_user`;
CREATE TABLE `kafka_user` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'auto-increment id',
  `app_id` varchar(128) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT 'app id',
  `password` varchar(256) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT 'password',
  `user_type` int(11) NOT NULL DEFAULT '0' COMMENT '0: normal user, 1: super user',
  `operation` int(11) NOT NULL DEFAULT '0' COMMENT '0: create, 1: update, 2: delete; the latest row wins',
  `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'create time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='kafka user table';

INSERT INTO app(app_id, name, password, type, applicant, principals, description) VALUES ('dkm_admin', 'KM管理员', 'km_kMl4N8as1Kp0CCY', 1, 'admin', 'admin', 'KM管理员应用-谨慎对外提供');
INSERT INTO kafka_user(app_id, password, user_type, operation) VALUES ('dkm_admin', 'km_kMl4N8as1Kp0CCY', 1, 0);

--
-- Table structure for table `logical_cluster`
--

CREATE TABLE `logical_cluster` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
  `name` varchar(192) NOT NULL DEFAULT '' COMMENT 'logical cluster name',
  `identification` varchar(192) NOT NULL DEFAULT '' COMMENT 'logical cluster identification',
  `mode` int(16) NOT NULL DEFAULT '0' COMMENT 'logical cluster type, 0: shared cluster, 1: exclusive cluster, 2: independent cluster',
  `app_id` varchar(64) NOT NULL DEFAULT '' COMMENT 'owning app',
  `cluster_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'cluster id',
  `region_list` varchar(256) NOT NULL DEFAULT '' COMMENT 'region id list',
  `description` text COMMENT 'remarks',
  `gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'create time',
  `gmt_modify` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'modify time',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_name` (`name`),
  UNIQUE KEY `uniq_identification` (`identification`)
) ENGINE=InnoDB AUTO_INCREMENT=7 DEFAULT CHARSET=utf8 COMMENT='logical cluster info';

--
-- Table structure for table `monitor_rule`
--

-- DROP TABLE IF EXISTS `monitor_rule`;
CREATE TABLE `monitor_rule` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'auto-increment id',
  `name` varchar(192) NOT NULL DEFAULT '' COMMENT 'alert name',
  `strategy_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'monitor strategy id',
  `app_id` varchar(64) NOT NULL DEFAULT '' COMMENT 'app id',
  `operator` varchar(64) NOT NULL DEFAULT '' COMMENT 'operator',
  `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'create time',
  `modify_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'modify time',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_name` (`name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='monitor rules';

--
-- Table structure for table `operate_record`
--

-- DROP TABLE IF EXISTS `operate_record`;
CREATE TABLE `operate_record` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
  `module_id` int(16) NOT NULL DEFAULT '-1' COMMENT 'module type, 0: topic, 1: app, 2: quota, 3: acl, 4: cluster, -1: unknown',
  `operate_id` int(16) NOT NULL DEFAULT '-1' COMMENT 'operation type, 0: add, 1: delete, 2: modify',
  `resource` varchar(256) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT 'topic name or app name',
  `content` text COMMENT 'operation content',
  `operator` varchar(64) NOT NULL DEFAULT '' COMMENT 'operator',
  `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'create time',
  `modify_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'modify time',
  PRIMARY KEY (`id`),
  KEY `idx_module_id_operate_id_operator` (`module_id`,`operate_id`,`operator`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='operation records';

--
-- Table structure for table `reassign_task`
--

-- DROP TABLE IF EXISTS `reassign_task`;
CREATE TABLE `reassign_task` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'auto-increment id',
  `task_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'task id',
  `name` varchar(256) NOT NULL DEFAULT '' COMMENT 'task name',
  `cluster_id` bigint(20) NOT NULL DEFAULT '0' COMMENT 'cluster id',
  `topic_name` varchar(192) NOT NULL DEFAULT '' COMMENT 'topic name',
  `partitions` text COMMENT 'partitions',
  `reassignment_json` text COMMENT 'task parameters',
  `real_throttle` bigint(20) NOT NULL DEFAULT '0' COMMENT 'current throttle',
  `max_throttle` bigint(20) NOT NULL DEFAULT '0' COMMENT 'throttle upper bound',
  `min_throttle` bigint(20) NOT NULL DEFAULT '0' COMMENT 'throttle lower bound',
  `begin_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'start time',
  `operator` varchar(64) NOT NULL DEFAULT '' COMMENT 'operator',
  `description` varchar(256) NOT NULL DEFAULT '' COMMENT 'remarks',
  `status` int(16) NOT NULL DEFAULT '0' COMMENT 'task status',
  `gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'task create time',
  `gmt_modify` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'task modify time',
  `original_retention_time` bigint(20) NOT NULL DEFAULT '86400000' COMMENT 'original topic retention time',
  `reassign_retention_time` bigint(20) NOT NULL DEFAULT '86400000' COMMENT 'retention time during reassignment',
  `src_brokers` text COMMENT 'source brokers',
  `dest_brokers` text COMMENT 'destination brokers',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='topic reassignment info';

--
-- Table structure for table `region`
--

-- DROP TABLE IF EXISTS `region`;
CREATE TABLE `region` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
  `name` varchar(192) NOT NULL DEFAULT '' COMMENT 'region name',
  `cluster_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'cluster id',
  `broker_list` varchar(256) NOT NULL DEFAULT '' COMMENT 'broker list',
  `capacity` bigint(20) NOT NULL DEFAULT '0' COMMENT 'capacity (B/s)',
  `real_used` bigint(20) NOT NULL DEFAULT '0' COMMENT 'actual usage (B/s)',
  `estimate_used` bigint(20) NOT NULL DEFAULT '0' COMMENT 'estimated usage (B/s)',
  `description` text COMMENT 'remarks',
  `status` int(16) NOT NULL DEFAULT '0' COMMENT 'status, 0: normal, 1: full',
  `gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'create time',
  `gmt_modify` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'modify time',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_name` (`name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='region info';

--
-- Table structure for table `topic`
--

-- DROP TABLE IF EXISTS `topic`;
CREATE TABLE `topic` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
  `cluster_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'cluster id',
  `topic_name` varchar(192) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT 'topic name',
  `app_id` varchar(64) NOT NULL DEFAULT '' COMMENT 'owning app id',
  `peak_bytes_in` bigint(20) NOT NULL DEFAULT '0' COMMENT 'peak bytes-in',
  `description` text COMMENT 'remarks',
  `gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'create time',
  `gmt_modify` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'modify time',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_cluster_id_topic_name` (`cluster_id`,`topic_name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='topic info';

--
-- Table structure for table `topic_app_metrics`
--

-- DROP TABLE IF EXISTS `topic_app_metrics`;
CREATE TABLE `topic_app_metrics` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
  `cluster_id` bigint(20) NOT NULL DEFAULT '0' COMMENT 'cluster id',
  `topic_name` varchar(192) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT 'topic name',
  `app_id` varchar(64) NOT NULL DEFAULT '' COMMENT 'app id',
  `metrics` text COMMENT 'metrics',
  `gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'create time',
  PRIMARY KEY (`id`),
  KEY `idx_cluster_id_topic_name_app_id_gmt_create` (`cluster_id`,`topic_name`,`app_id`,`gmt_create`),
  KEY `idx_gmt_create` (`gmt_create`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='topic app metrics';

--
-- Table structure for table `topic_connections`
--

-- DROP TABLE IF EXISTS `topic_connections`;
CREATE TABLE `topic_connections` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
  `app_id` varchar(64) NOT NULL DEFAULT '' COMMENT 'app id',
  `cluster_id` bigint(20) NOT NULL DEFAULT '0' COMMENT 'cluster id',
  `topic_name` varchar(192) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT 'topic name',
  `type` varchar(16) NOT NULL DEFAULT '' COMMENT 'producer or consumer',
  `ip` varchar(32) NOT NULL DEFAULT '' COMMENT 'ip address',
  `client_version` varchar(8) NOT NULL DEFAULT '' COMMENT 'client version',
  `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'create time',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_app_id_cluster_id_topic_name_type_ip_client_version` (`app_id`,`cluster_id`,`topic_name`,`type`,`ip`,`client_version`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='topic connection info';

--
-- Table structure for table `topic_expired`
--

-- DROP TABLE IF EXISTS `topic_expired`;
CREATE TABLE `topic_expired` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
  `cluster_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'cluster id',
  `topic_name` varchar(192) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT 'topic name',
  `produce_connection_num` bigint(20) NOT NULL DEFAULT '0' COMMENT 'producer connection count',
  `fetch_connection_num` bigint(20) NOT NULL DEFAULT '0' COMMENT 'consumer connection count',
  `expired_day` bigint(20) NOT NULL DEFAULT '0' COMMENT 'days expired',
  `gmt_retain` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'retain-until time',
  `status` int(16) NOT NULL DEFAULT '0' COMMENT '-1: can be taken offline, 0: expired, pending notification, 1+: notified, awaiting feedback',
  `gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'create time',
  `gmt_modify` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'modify time',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_cluster_id_topic_name` (`cluster_id`,`topic_name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='expired topic info';

--
-- Table structure for table `topic_metrics`
--

-- DROP TABLE IF EXISTS `topic_metrics`;
CREATE TABLE `topic_metrics` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'auto-increment id',
  `cluster_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'cluster id',
  `topic_name` varchar(192) NOT NULL DEFAULT '' COMMENT 'topic name',
  `metrics` text COMMENT 'metrics data JSON',
  `gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'create time',
  PRIMARY KEY (`id`),
  KEY `idx_cluster_id_topic_name_gmt_create` (`cluster_id`,`topic_name`,`gmt_create`),
  KEY `idx_gmt_create` (`gmt_create`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='topic metrics';
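
The metrics tables each carry an `idx_gmt_create` index, which suggests time-based pruning of old rows. A hedged sketch of such a retention job (the 7-day window and batch size are assumptions, not part of the schema):

```sql
-- Hypothetical retention job: delete metric rows older than 7 days
-- in small batches via idx_gmt_create; repeat until 0 rows affected.
DELETE FROM topic_metrics
 WHERE gmt_create < DATE_SUB(NOW(), INTERVAL 7 DAY)
 LIMIT 1000;
```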

--
-- Table structure for table `topic_report`
--

-- DROP TABLE IF EXISTS `topic_report`;
CREATE TABLE `topic_report` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
  `cluster_id` bigint(20) NOT NULL DEFAULT '0' COMMENT 'cluster id',
  `topic_name` varchar(192) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT 'topic name',
  `start_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'report start time',
  `end_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'report end time',
  `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'create time',
  `modify_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'modify time',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_cluster_id_topic_name` (`cluster_id`,`topic_name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='topics with jmx collection enabled';

--
-- Table structure for table `topic_request_time_metrics`
--

-- DROP TABLE IF EXISTS `topic_request_time_metrics`;
CREATE TABLE `topic_request_time_metrics` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
  `cluster_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'cluster id',
  `topic_name` varchar(192) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT 'topic name',
  `metrics` text COMMENT 'metrics',
  `gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'create time',
  PRIMARY KEY (`id`),
  KEY `idx_cluster_id_topic_name_gmt_create` (`cluster_id`,`topic_name`,`gmt_create`),
  KEY `idx_gmt_create` (`gmt_create`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='topic request-time metrics';

--
-- Table structure for table `topic_statistics`
--

-- DROP TABLE IF EXISTS `topic_statistics`;
CREATE TABLE `topic_statistics` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'auto-increment id',
  `cluster_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'cluster id',
  `topic_name` varchar(192) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT 'topic name',
  `offset_sum` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'offset sum',
  `max_avg_bytes_in` double(53,2) NOT NULL DEFAULT '-1.00' COMMENT 'peak average bytes-in',
  `gmt_day` varchar(64) NOT NULL DEFAULT '' COMMENT 'date, in the form 2020-03-30',
  `gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'create time',
  `max_avg_messages_in` double(53,2) NOT NULL DEFAULT '-1.00' COMMENT 'peak average messages-in',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_cluster_id_topic_name_gmt_day` (`cluster_id`,`topic_name`,`gmt_day`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='topic statistics';

--
-- Table structure for table `topic_throttled_metrics`
--

-- DROP TABLE IF EXISTS `topic_throttled_metrics`;
CREATE TABLE `topic_throttled_metrics` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
  `cluster_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'cluster id',
  `topic_name` varchar(192) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT 'topic name',
  `app_id` varchar(64) NOT NULL DEFAULT '' COMMENT 'app',
  `produce_throttled` tinyint(8) NOT NULL DEFAULT '0' COMMENT 'whether produce was throttled',
  `fetch_throttled` tinyint(8) NOT NULL DEFAULT '0' COMMENT 'whether fetch was throttled',
  `gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'create time',
  PRIMARY KEY (`id`),
  KEY `idx_cluster_id_topic_name_app_id` (`cluster_id`,`topic_name`,`app_id`),
  KEY `idx_gmt_create` (`gmt_create`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='topic throttle info';

--
-- Table structure for table `work_order`
--

-- DROP TABLE IF EXISTS `work_order`;
CREATE TABLE `work_order` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
  `type` int(16) NOT NULL DEFAULT '-1' COMMENT 'work order type',
  `title` varchar(512) NOT NULL DEFAULT '' COMMENT 'work order title',
  `applicant` varchar(64) NOT NULL DEFAULT '' COMMENT 'applicant',
  `description` text COMMENT 'remarks',
  `approver` varchar(64) NOT NULL DEFAULT '' COMMENT 'approver',
  `gmt_handle` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'approval time',
  `opinion` varchar(256) NOT NULL DEFAULT '' COMMENT 'approval opinion',
  `extensions` text COMMENT 'extension info',
  `status` int(16) NOT NULL DEFAULT '0' COMMENT 'status, 0: pending, 1: approved, 2: rejected, 3: cancelled',
  `gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'create time',
  `gmt_modify` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'modify time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='work orders';
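
Given the status codes above (0: pending, 1: approved, 2: rejected, 3: cancelled), an approver's worklist could be fetched along these lines (a sketch, not a query shipped with this schema):

```sql
-- Hypothetical worklist query: oldest pending work orders first.
SELECT id, type, title, applicant, gmt_create
  FROM work_order
 WHERE status = 0
 ORDER BY gmt_create ASC;
```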
distribution/conf/logback-spring.xml (new file, 215 lines)
@@ -0,0 +1,215 @@
<?xml version="1.0" encoding="UTF-8"?>
<configuration scan="true" scanPeriod="10 seconds">
    <contextName>logback</contextName>
    <property name="log.path" value="./logs" />

    <!-- Colored logs -->
    <!-- Converter classes that colored logs depend on -->
    <conversionRule conversionWord="clr" converterClass="org.springframework.boot.logging.logback.ColorConverter" />
    <conversionRule conversionWord="wex" converterClass="org.springframework.boot.logging.logback.WhitespaceThrowableProxyConverter" />
    <conversionRule conversionWord="wEx" converterClass="org.springframework.boot.logging.logback.ExtendedWhitespaceThrowableProxyConverter" />
    <!-- Colored log pattern -->
    <property name="CONSOLE_LOG_PATTERN" value="${CONSOLE_LOG_PATTERN:-%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %clr(%-40.40logger{39}){cyan} %clr(:){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}}"/>

    <!-- Console output -->
    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>info</level>
        </filter>
        <encoder>
            <Pattern>${CONSOLE_LOG_PATTERN}</Pattern>
            <charset>UTF-8</charset>
        </encoder>
    </appender>

    <!-- File output -->

    <!-- Time-rolling output for DEBUG-level logs -->
    <appender name="DEBUG_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${log.path}/log_debug.log</file>
        <!-- Log file output pattern -->
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern>
            <charset>UTF-8</charset> <!-- set charset -->
        </encoder>
        <!-- Rolling policy: roll by date and by size -->
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- Log archiving -->
            <fileNamePattern>${log.path}/log_debug_%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <maxFileSize>100MB</maxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
            <!-- Days to keep log files -->
            <maxHistory>7</maxHistory>
        </rollingPolicy>
        <!-- This file records only DEBUG-level logs -->
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>debug</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
    </appender>

    <!-- Time-rolling output for INFO-level logs -->
    <appender name="INFO_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <!-- Path and name of the active log file -->
        <file>${log.path}/log_info.log</file>
        <!-- Log file output pattern -->
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern>
            <charset>UTF-8</charset>
        </encoder>
        <!-- Rolling policy: roll by date and by size -->
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- Daily archive path and pattern -->
            <fileNamePattern>${log.path}/log_info_%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <maxFileSize>100MB</maxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
            <!-- Days to keep log files -->
            <maxHistory>7</maxHistory>
        </rollingPolicy>
        <!-- This file records only INFO-level logs -->
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>info</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
    </appender>

    <!-- Time-rolling output for WARN-level logs -->
    <appender name="WARN_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <!-- Path and name of the active log file -->
        <file>${log.path}/log_warn.log</file>
        <!-- Log file output pattern -->
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern>
            <charset>UTF-8</charset> <!-- set charset here -->
        </encoder>
        <!-- Rolling policy: roll by date and by size -->
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${log.path}/log_warn_%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <maxFileSize>100MB</maxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
            <!-- Days to keep log files -->
            <maxHistory>7</maxHistory>
        </rollingPolicy>
        <!-- This file records only WARN-level logs -->
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>warn</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
    </appender>

    <!-- Time-rolling output for ERROR-level logs -->
    <appender name="ERROR_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <!-- Path and name of the active log file -->
        <file>${log.path}/log_error.log</file>
        <!-- Log file output pattern -->
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern>
            <charset>UTF-8</charset> <!-- set charset here -->
        </encoder>
        <!-- Rolling policy: roll by date and by size -->
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${log.path}/log_error_%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <maxFileSize>100MB</maxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
            <!-- Days to keep log files -->
            <maxHistory>7</maxHistory>
        </rollingPolicy>
        <!-- This file records only ERROR-level logs -->
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>ERROR</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
    </appender>

    <!-- Metrics collection log -->
    <appender name="COLLECTOR_METRICS_LOGGER" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${log.path}/metrics/collector_metrics.log</file>
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern>
            <charset>UTF-8</charset>
        </encoder>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${log.path}/metrics/collector_metrics_%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <maxFileSize>100MB</maxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
            <maxHistory>3</maxHistory>
        </rollingPolicy>
    </appender>

    <!-- API metrics log -->
    <appender name="API_METRICS_LOGGER" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${log.path}/metrics/api_metrics.log</file>
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern>
            <charset>UTF-8</charset>
        </encoder>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${log.path}/metrics/api_metrics_%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <maxFileSize>100MB</maxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
            <maxHistory>3</maxHistory>
        </rollingPolicy>
    </appender>

    <!-- Scheduled task log -->
    <appender name="SCHEDULED_TASK_LOGGER" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${log.path}/metrics/scheduled_tasks.log</file>
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern>
            <charset>UTF-8</charset>
        </encoder>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${log.path}/metrics/scheduled_tasks_%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <maxFileSize>100MB</maxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
            <maxHistory>5</maxHistory>
        </rollingPolicy>
    </appender>

    <logger name="COLLECTOR_METRICS_LOGGER" level="DEBUG" additivity="false">
        <appender-ref ref="COLLECTOR_METRICS_LOGGER"/>
    </logger>
    <logger name="API_METRICS_LOGGER" level="DEBUG" additivity="false">
        <appender-ref ref="API_METRICS_LOGGER"/>
    </logger>
    <logger name="SCHEDULED_TASK_LOGGER" level="DEBUG" additivity="false">
        <appender-ref ref="SCHEDULED_TASK_LOGGER"/>
    </logger>

    <logger name="org.apache.ibatis" level="INFO" additivity="false" />
    <logger name="org.mybatis.spring" level="INFO" additivity="false" />
    <logger name="com.github.miemiedev.mybatis.paginator" level="INFO" additivity="false" />

    <root level="info">
        <appender-ref ref="CONSOLE" />
        <appender-ref ref="DEBUG_FILE" />
        <appender-ref ref="INFO_FILE" />
        <appender-ref ref="WARN_FILE" />
        <appender-ref ref="ERROR_FILE" />
        <!--<appender-ref ref="METRICS_LOG" />-->
    </root>

    <!-- Production profile: file output -->
    <!--<springProfile name="pro">-->
        <!--<root level="info">-->
            <!--<appender-ref ref="CONSOLE" />-->
            <!--<appender-ref ref="DEBUG_FILE" />-->
            <!--<appender-ref ref="INFO_FILE" />-->
            <!--<appender-ref ref="ERROR_FILE" />-->
            <!--<appender-ref ref="WARN_FILE" />-->
        <!--</root>-->
    <!--</springProfile>-->
</configuration>
distribution/pom.xml — Normal file
@@ -0,0 +1,64 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <parent>
        <artifactId>kafka-manager</artifactId>
        <groupId>com.xiaojukeji.kafka</groupId>
        <version>${kafka-manager.revision}</version>
    </parent>

    <modelVersion>4.0.0</modelVersion>

    <artifactId>distribution</artifactId>
    <name>distribution</name>
    <packaging>pom</packaging>

    <dependencies>
        <dependency>
            <groupId>${project.groupId}</groupId>
            <artifactId>kafka-manager-web</artifactId>
            <version>${kafka-manager.revision}</version>
        </dependency>
    </dependencies>

    <profiles>
        <profile>
            <id>release-kafka-manager</id>
            <dependencies>
                <dependency>
                    <groupId>${project.groupId}</groupId>
                    <artifactId>kafka-manager-web</artifactId>
                    <version>${kafka-manager.revision}</version>
                </dependency>
            </dependencies>
            <build>
                <plugins>
                    <plugin>
                        <groupId>org.apache.maven.plugins</groupId>
                        <artifactId>maven-assembly-plugin</artifactId>
                        <configuration>
                            <descriptors>
                                <descriptor>release-km.xml</descriptor>
                            </descriptors>
                            <tarLongFileMode>posix</tarLongFileMode>
                        </configuration>
                        <executions>
                            <execution>
                                <id>make-assembly</id>
                                <phase>install</phase>
                                <goals>
                                    <goal>single</goal>
                                </goals>
                            </execution>
                        </executions>
                    </plugin>
                </plugins>
                <finalName>kafka-manager</finalName>
            </build>
        </profile>
    </profiles>
</project>
distribution/readme.md — Normal file
@@ -0,0 +1,22 @@
## Notes

### 1. Create the MySQL tables

> conf/create_mysql_table.sql

### 2. Modify the configuration file

> conf/application.yml.example
>
> Copy application.yml.example to application.yml, keep it in the same directory (conf/), and adjust it with your own settings.
> Settings here take precedence over the defaults packaged inside the jar.

### 3. Start / stop kafka-manager

> sh bin/startup.sh — start
>
> sh shutdown.sh — stop

### 4. Upgrading the jar

> When upgrading, review the configuration change history in the `upgrade_config.md` file.
distribution/release-km.xml — Executable file
@@ -0,0 +1,51 @@
<?xml version="1.0" encoding="UTF-8"?>
<assembly>
    <id>${project.version}</id>
    <includeBaseDirectory>true</includeBaseDirectory>
    <formats>
        <format>dir</format>
        <format>tar.gz</format>
        <format>zip</format>
    </formats>
    <fileSets>
        <fileSet>
            <includes>
                <include>conf/**</include>
            </includes>
        </fileSet>

        <fileSet>
            <includes>
                <include>bin/*</include>
            </includes>
            <fileMode>0755</fileMode>
        </fileSet>
    </fileSets>
    <files>
        <file>
            <source>readme.md</source>
            <destName>readme.md</destName>
        </file>
        <file>
            <source>upgrade_config.md</source>
            <destName>upgrade_config.md</destName>
        </file>
        <file>
            <!-- Name and location of the built jar -->
            <source>../kafka-manager-web/target/kafka-manager.jar</source>
            <outputDirectory>target/</outputDirectory>
        </file>
    </files>

    <moduleSets>
        <moduleSet>
            <useAllReactorProjects>true</useAllReactorProjects>
            <includes>
                <include>com.xiaojukeji.kafka:kafka-manager-web</include>
            </includes>
        </moduleSet>
    </moduleSets>
</assembly>
distribution/upgrade_config.md — Normal file
@@ -0,0 +1,52 @@

## Configuration changes between versions

> This file has been maintained since V2.2.0. Configuration changes are recorded below; if a version is not listed, it required no changes.
> When upgrading from a much older version, run the SQL scripts of every intermediate version that had changes, in order.

**One-stop `Apache Kafka` cluster metrics monitoring and operations platform**

---

### 1. Upgrading to `V2.2.0`

#### 1. MySQL changes

Version `2.2.0` adds one column to each of the `cluster` and `logical_cluster` tables, so the following SQL must be executed.

```sql
# Add the jmx_properties column to the cluster table; it stores JMX authentication and related settings.
ALTER TABLE `cluster` ADD COLUMN `jmx_properties` TEXT NULL COMMENT 'JMX配置' AFTER `security_properties`;

# Add the identification column to logical_cluster, copy the existing name values into it, and finally add an index.
# From now on, name remains the display name of the cluster, while identification is the cluster identifier
# (letters, digits, and underscores only). When data is reported to the monitor system, the cluster is keyed
# by identification; previously the name column was used.
ALTER TABLE `logical_cluster` ADD COLUMN `identification` VARCHAR(192) NOT NULL DEFAULT '' COMMENT '逻辑集群标识' AFTER `name`;

UPDATE `logical_cluster` SET `identification`=`name` WHERE id>=0;

ALTER TABLE `logical_cluster` ADD INDEX `uniq_identification` (`identification` ASC);
```

### 2. Upgrading to `2.3.0`

#### 1. MySQL changes

Version `2.3.0` adds a description column to the `gateway_config` table, so the following SQL must be executed.

```sql
ALTER TABLE `gateway_config`
ADD COLUMN `description` TEXT NULL COMMENT '描述信息' AFTER `version`;
```

### 3. Upgrading to `2.6.0`

#### 1. MySQL changes

Version `2.6.0` adds three columns to the `account` table — display name, department, and mail — so the following SQL must be executed.

```sql
ALTER TABLE `account`
ADD COLUMN `display_name` VARCHAR(256) NOT NULL DEFAULT '' COMMENT '用户名' AFTER `role`,
ADD COLUMN `department` VARCHAR(256) NOT NULL DEFAULT '' COMMENT '部门名' AFTER `display_name`,
ADD COLUMN `mail` VARCHAR(256) NOT NULL DEFAULT '' COMMENT '邮箱' AFTER `department`;
```
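After applying the migrations above, it can be reassuring to confirm that the new columns actually landed. The following is plain MySQL with no project-specific assumptions; each statement returns one row when the column exists and an empty result when the corresponding migration was not applied:

```sql
-- One check per migration recorded above.
SHOW COLUMNS FROM `cluster` LIKE 'jmx_properties';
SHOW COLUMNS FROM `logical_cluster` LIKE 'identification';
SHOW COLUMNS FROM `gateway_config` LIKE 'description';
SHOW COLUMNS FROM `account` LIKE 'display_name';
```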
New binary files:
- docs/assets/images/common/arch.png (73 KiB)
- docs/assets/images/common/logo_name.png (7.4 KiB)
- docs/dev_guide/assets/connect_jmx_failed/check_jmx_opened.jpg (382 KiB)
- three further images (270 KiB, 785 KiB, 2.5 MiB; filenames not shown in this view)
- docs/dev_guide/assets/kcm/kcm_principle.png (69 KiB)
- four further images (589 KiB, 652 KiB, 511 KiB, 672 KiB; filenames not shown in this view)
docs/dev_guide/connect_jmx_failed.md — Normal file
@@ -0,0 +1,101 @@

---

(logo image)

**One-stop `Apache Kafka` cluster metrics monitoring and operations platform**

---

## Fixing JMX connection failures

Once a cluster has been registered in Logi-KafkaManager, its Broker list should be visible. If, at that point, real-time Topic traffic or real-time Broker traffic cannot be viewed, the cause is very likely a JMX connection problem.

Let's check it step by step.

### 1. Symptoms & explanation

**Case 1: JMX is not enabled**

If JMX is not enabled, go straight to `2. Fix` to see how to enable it.

(image: checking whether JMX is enabled)

**Case 2: JMX is misconfigured**

Even with the `JMX` port open, an incorrect configuration can still cause connection failures. Roughly, the common causes are:

- `JMX` misconfiguration: see `2. Fix`.
- A firewall or network restriction: `telnet` from another machine on the same network to see whether the port is reachable.
- Username/password authentication is required: see `3. Fix — authenticated JMX`.

Example error logs:
```
# Error 1: the message shows the real IP — this usually means the JMX configuration itself is wrong.
2021-01-27 10:06:20.730 ERROR 50901 --- [ics-Thread-1-62] c.x.k.m.c.utils.jmx.JmxConnectorWrap   : JMX connect exception, host:192.168.0.1 port:9999.
java.rmi.ConnectException: Connection refused to host: 192.168.0.1; nested exception is:

# Error 2: the message shows 127.0.0.1 — the machine's hostname configuration is probably wrong.
2021-01-27 10:06:20.730 ERROR 50901 --- [ics-Thread-1-62] c.x.k.m.c.utils.jmx.JmxConnectorWrap   : JMX connect exception, host:127.0.0.1 port:9999.
java.rmi.ConnectException: Connection refused to host: 127.0.0.1; nested exception is:
```

### 2. Fix

This section only covers a fairly generic fix; if you know a better approach, please let us know.

Modify the `kafka-server-start.sh` file:
```
# Add the JMX port inside this block
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
    export JMX_PORT=9999   # add this line; the value does not have to be 9999
fi
```

Modify the `kafka-run-class.sh` file:
```
# JMX settings
if [ -z "$KAFKA_JMX_OPTS" ]; then
  KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=${IP of this machine}"
fi

# JMX port to use
if [ $JMX_PORT ]; then
  KAFKA_JMX_OPTS="$KAFKA_JMX_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT -Dcom.sun.management.jmxremote.rmi.port=$JMX_PORT"
fi
```

### 3. Fix — authenticated JMX

If you jumped straight to this section, first go through the previous one, `2. Fix`, to make sure the basic `JMX` configuration is correct.

If the JMX configuration itself is fine but the connection still fails because of authentication, use the method below.

**This backend support has only recently been finished and may not be fully polished — feel free to reach out with any problems.**

Since `Logi-KafkaManager 2.2.0+` the backend supports authenticated `JMX` connections, but there is no UI for it yet. For now, write the `JMX` credentials into the `jmx_properties` column of the `cluster` table.

The value is a `json`-formatted string, for example:

```json
{
    "maxConn": 10,        # max number of JMX connections KM opens per Broker
    "username": "xxxxx",  # username
    "password": "xxxx",   # password
    "openSSL": true       # whether to use SSL: true enables it, false disables it
}
```

Example SQL:
```sql
UPDATE cluster SET jmx_properties='{ "maxConn": 10, "username": "xxxxx", "password": "xxxx", "openSSL": false }' where id={xxx};
```
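For reference, the standard way a JMX client passes such credentials is through the environment map given to `JMXConnectorFactory.connect()`. The following is a minimal plain-JDK sketch, not the project's actual `JmxConnectorWrap` code; the host, port, and credentials are placeholders:

```java
import java.util.HashMap;
import java.util.Map;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXServiceURL;

public class JmxAuthSketch {
    // Standard RMI-based JMX URL — the same form a broker exposes when JMX_PORT is set.
    static JMXServiceURL serviceUrl(String host, int port) throws Exception {
        return new JMXServiceURL("service:jmx:rmi:///jndi/rmi://" + host + ":" + port + "/jmxrmi");
    }

    // The username/password from jmx_properties map onto the jmx.remote.credentials
    // entry of the environment passed to JMXConnectorFactory.connect(url, env).
    static Map<String, Object> authEnv(String username, String password) {
        Map<String, Object> env = new HashMap<>();
        env.put(JMXConnector.CREDENTIALS, new String[]{username, password});
        return env;
    }

    public static void main(String[] args) throws Exception {
        JMXServiceURL url = serviceUrl("192.168.0.1", 9999); // placeholder host/port
        Map<String, Object> env = authEnv("xxxxx", "xxxx");  // placeholder credentials
        System.out.println(url);
        // An actual connection (omitted here — it needs a live broker) would be:
        // JMXConnector connector = JMXConnectorFactory.connect(url, env);
    }
}
```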
docs/dev_guide/drawio/KCM实现原理.drawio — Normal file
@@ -0,0 +1,89 @@
<mxfile host="65bd71144e">
    <diagram id="bhaMuW99Q1BzDTtcfRXp" name="Page-1">
        <mxGraphModel dx="1138" dy="830" grid="1" gridSize="10" guides="1" tooltips="1" connect="1" arrows="1" fold="1" page="1" pageScale="1" pageWidth="1169" pageHeight="827" math="0" shadow="0">
            <root>
                <mxCell id="0"/>
                <mxCell id="1" parent="0"/>
                <mxCell id="11" value="待部署Kafka-Broker的机器" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;labelPosition=center;verticalLabelPosition=bottom;align=center;verticalAlign=top;dashed=1;" vertex="1" parent="1">
                    <mxGeometry x="380" y="240" width="320" height="240" as="geometry"/>
                </mxCell>
                <mxCell id="24" value="" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;labelPosition=center;verticalLabelPosition=bottom;align=center;verticalAlign=top;dashed=1;fillColor=#eeeeee;strokeColor=#36393d;" vertex="1" parent="1">
                    <mxGeometry x="410" y="310" width="260" height="160" as="geometry"/>
                </mxCell>
                <mxCell id="6" style="edgeStyle=none;html=1;entryX=0;entryY=0.5;entryDx=0;entryDy=0;" edge="1" parent="1" source="2" target="3">
                    <mxGeometry relative="1" as="geometry"/>
                </mxCell>
                <mxCell id="7" value="调用夜莺接口,<br>创建集群安装部署任务" style="edgeLabel;html=1;align=center;verticalAlign=middle;resizable=0;points=[];" vertex="1" connectable="0" parent="6">
                    <mxGeometry x="-0.0875" y="1" relative="1" as="geometry">
                        <mxPoint x="9" y="1" as="offset"/>
                    </mxGeometry>
                </mxCell>
                <mxCell id="9" style="edgeStyle=none;html=1;" edge="1" parent="1" source="2" target="4">
                    <mxGeometry relative="1" as="geometry"/>
                </mxCell>
                <mxCell id="10" value="通过版本管理,将Kafka的安装包,<br>server配置上传到s3中" style="edgeLabel;html=1;align=center;verticalAlign=middle;resizable=0;points=[];" vertex="1" connectable="0" parent="9">
                    <mxGeometry x="0.0125" y="2" relative="1" as="geometry">
                        <mxPoint as="offset"/>
                    </mxGeometry>
                </mxCell>
                <mxCell id="2" value="LogiKM" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#dae8fc;strokeColor=#6c8ebf;" vertex="1" parent="1">
                    <mxGeometry x="40" y="100" width="120" height="40" as="geometry"/>
                </mxCell>
                <mxCell id="12" style="edgeStyle=none;html=1;exitX=0.5;exitY=1;exitDx=0;exitDy=0;entryX=0.5;entryY=0;entryDx=0;entryDy=0;" edge="1" parent="1" source="3" target="5">
                    <mxGeometry relative="1" as="geometry"/>
                </mxCell>
                <mxCell id="13" value="1、下发任务脚本(kcm_script.sh);<br>2、下发任务操作命令;" style="edgeLabel;html=1;align=center;verticalAlign=middle;resizable=0;points=[];" vertex="1" connectable="0" parent="12">
                    <mxGeometry x="-0.0731" y="2" relative="1" as="geometry">
                        <mxPoint x="-2" y="-16" as="offset"/>
                    </mxGeometry>
                </mxCell>
                <mxCell id="3" value="夜莺——任务中心" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#cdeb8b;strokeColor=#36393d;" vertex="1" parent="1">
                    <mxGeometry x="480" y="100" width="120" height="40" as="geometry"/>
                </mxCell>
                <mxCell id="4" value="S3" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#ffe6cc;strokeColor=#d79b00;" vertex="1" parent="1">
                    <mxGeometry x="40" y="310" width="120" height="40" as="geometry"/>
                </mxCell>
                <mxCell id="5" value="夜莺——Agent(<font color=&quot;#ff3333&quot;>代理执行kcm_script.sh脚本</font>)" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#d5e8d4;strokeColor=#82b366;" vertex="1" parent="1">
                    <mxGeometry x="400" y="260" width="280" height="40" as="geometry"/>
                </mxCell>
                <mxCell id="22" style="edgeStyle=orthogonalEdgeStyle;html=1;entryX=1;entryY=0.5;entryDx=0;entryDy=0;fontColor=#FF3333;exitX=0;exitY=0.5;exitDx=0;exitDy=0;" edge="1" parent="1" source="14" target="4">
                    <mxGeometry relative="1" as="geometry"/>
                </mxCell>
                <mxCell id="25" value="下载安装包" style="edgeLabel;html=1;align=center;verticalAlign=middle;resizable=0;points=[];fontColor=#000000;" vertex="1" connectable="0" parent="22">
                    <mxGeometry x="0.2226" y="-2" relative="1" as="geometry">
                        <mxPoint x="27" y="2" as="offset"/>
                    </mxGeometry>
                </mxCell>
                <mxCell id="14" value="执行kcm_script.sh脚本:下载安装包" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#eeeeee;strokeColor=#36393d;" vertex="1" parent="1">
                    <mxGeometry x="425" y="320" width="235" height="20" as="geometry"/>
                </mxCell>
                <mxCell id="18" value="执行kcm_script.sh脚本:安装" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#eeeeee;strokeColor=#36393d;" vertex="1" parent="1">
                    <mxGeometry x="425" y="350" width="235" height="20" as="geometry"/>
                </mxCell>
                <mxCell id="19" value="执行kcm_script.sh脚本:检查安装结果" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#eeeeee;strokeColor=#36393d;" vertex="1" parent="1">
                    <mxGeometry x="425" y="380" width="235" height="20" as="geometry"/>
                </mxCell>
                <mxCell id="23" style="edgeStyle=orthogonalEdgeStyle;html=1;entryX=0.5;entryY=0;entryDx=0;entryDy=0;fontColor=#FF3333;exitX=1;exitY=0.5;exitDx=0;exitDy=0;" edge="1" parent="1" source="20" target="2">
                    <mxGeometry relative="1" as="geometry">
                        <Array as="points">
                            <mxPoint x="770" y="420"/>
                            <mxPoint x="770" y="40"/>
                            <mxPoint x="100" y="40"/>
                        </Array>
                    </mxGeometry>
                </mxCell>
                <mxCell id="26" value="检查副本同步状态" style="edgeLabel;html=1;align=center;verticalAlign=middle;resizable=0;points=[];fontColor=#000000;" vertex="1" connectable="0" parent="23">
                    <mxGeometry x="-0.3344" relative="1" as="geometry">
                        <mxPoint as="offset"/>
                    </mxGeometry>
                </mxCell>
                <mxCell id="20" value="执行kcm_script.sh脚本:检查副本同步" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#eeeeee;strokeColor=#36393d;" vertex="1" parent="1">
                    <mxGeometry x="425" y="410" width="235" height="20" as="geometry"/>
                </mxCell>
                <mxCell id="21" value="执行kcm_script.sh脚本:结束" style="rounded=0;whiteSpace=wrap;html=1;absoluteArcSize=1;arcSize=14;strokeWidth=1;fillColor=#eeeeee;strokeColor=#36393d;" vertex="1" parent="1">
                    <mxGeometry x="425" y="440" width="235" height="20" as="geometry"/>
                </mxCell>
            </root>
        </mxGraphModel>
    </diagram>
</mxfile>
docs/dev_guide/dynamic_config_manager.md — Normal file
@@ -0,0 +1,169 @@

---

(logo image)

**One-stop `Apache Kafka` cluster metrics monitoring and operations platform**

---

# Dynamic configuration management

## 0. Contents

- 1. Scheduled Topic synchronization task
- 2. Expert service — Topic partition hotspots
- 3. Expert service — insufficient Topic partitions
- 4. Expert service — Topic resource governance
- 5. Billing configuration

## 1. Scheduled Topic synchronization task

### 1.1. What the configuration is for

By design, every resource in `Logi-KafkaManager` belongs to an application (app). If a newly registered Kafka cluster already contains Topics, those Topics belong to no application, which makes them inconvenient to manage.

So there needs to be a way to attach these ownerless Topics to some application.

This configuration drives a scheduled task that periodically attaches a cluster's ownerless Topics to a designated application.

### 1.2. Implementation

It is a scheduled task that periodically performs the synchronization. The code is the `SyncTopic2DB` class in the `com.xiaojukeji.kafka.manager.task.dispatch.op` package.

### 1.3. Configuration

**Step 1: enable the feature**

Add the following to application.yml; if the key already exists, just change false to true:
```yml
# Task-related switches
task:
  op:
    sync-topic-enabled: true # periodically sync ownerless Topics into the DB
```

**Step 2: choose the target application in configuration management**

Where to configure:

(image: configuration location)

Config key: `SYNC_TOPIC_2_DB_CONFIG_KEY`

Config value (a JSON array):
- clusterId: ID of the cluster to synchronize periodically
- defaultAppId: the application the cluster's ownerless Topics will be attached to
- addAuthority: whether to also grant permissions; defaults to false. The attachment is considered temporary — we do not want users to actually use this app, and the Topic may later be handed over to its real owner — so permissions are not granted by default.

**Note: if the cluster ID or app ID does not exist, the configuration has no effect. The task never modifies Topics already present in the DB.**
```json
[
    {
        "clusterId": 1234567,
        "defaultAppId": "ANONYMOUS",
        "addAuthority": false
    },
    {
        "clusterId": 7654321,
        "defaultAppId": "ANONYMOUS",
        "addAuthority": false
    }
]
```

---

## 2. Expert service — Topic partition hotspots

Within the set of Brokers covered by a `Region`, a Topic whose Leader count is unevenly distributed across those Brokers is considered a hotspot Topic.

Note: looking only at the Leader distribution is admittedly limited; contributions of richer hotspot definitions and code are welcome.

Dynamic configuration for partition hotspots (page: 运维管控 -> 平台管理 -> 配置管理):

Config key:
```
REGION_HOT_TOPIC_CONFIG
```

Config value:
```json
{
    "maxDisPartitionNum": 2,         # a Topic is a hotspot when the leader-count gap between Brokers in a Region exceeds 2
    "minTopicBytesInUnitB": 1048576, # Topics with traffic below this value are not counted
    "ignoreClusterIdList": [         # clusters to ignore
        50
    ]
}
```

---

## 3. Expert service — insufficient Topic partitions

When the total traffic divided by the partition count exceeds a given value, the Topic is considered to have too few partitions.

Dynamic configuration for insufficient partitions (page: 运维管控 -> 平台管理 -> 配置管理):

Config key:
```
TOPIC_INSUFFICIENT_PARTITION_CONFIG
```

Config value:
```json
{
    "maxBytesInPerPartitionUnitB": 3145728, # above this per-partition traffic, partitions are considered insufficient
    "minTopicBytesInUnitB": 1048576,        # Topics with traffic below this value are not counted
    "ignoreClusterIdList": [                # clusters to ignore
        50
    ]
}
```

## 4. Expert service — Topic resource governance

A Topic whose partition offsets have not changed over a certain period — i.e. a Topic with no data written to it — is considered an expired Topic.

Dynamic configuration for expired Topics (page: 运维管控 -> 平台管理 -> 配置管理):

Config key:
```
EXPIRED_TOPIC_CONFIG
```

Config value:
```json
{
    "minExpiredDay": 30,          # only Topics expired longer than this are shown
    "filterRegex": ".*XXX\\s+",   # Topics matching this regex are ignored
    "ignoreClusterIdList": [      # clusters to ignore
        50
    ]
}
```

## 5. Billing configuration

Besides being a Kafka operations platform, Logi-KafkaManager also has some resource-pricing features.

Current pricing: a Topic's usage quota for the month is the average of its maxAvgDay highest daily peak traffic values. Quota * unit price * premium (reserved buffer) gives the month's cost.
The detailed calculation lives in com.xiaojukeji.kafka.manager.task.dispatch.biz.CalKafkaTopicBill and com.xiaojukeji.kafka.manager.task.dispatch.biz.CalTopicStatistics.

The configuration used when computing a Topic's cost:

Config key:
```
KAFKA_TOPIC_BILL_CONFIG
```

Config value:

```json
{
    "maxAvgDay": 10,    # how the usage quota is computed
    "quotaRatio": 1.5,  # premium ratio
    "priseUnitMB": 100  # unit price, i.e. cost per MB/s of traffic
}
```
docs/dev_guide/gateway_config_manager.md — Normal file
@@ -0,0 +1,10 @@

---

(logo image)

**One-stop `Apache Kafka` cluster metrics monitoring and operations platform**

---

# Kafka-Gateway configuration notes
docs/dev_guide/monitor_system_integrate_with_n9e.md — Normal file
@@ -0,0 +1,42 @@

---

(logo image)

**One-stop `Apache Kafka` cluster metrics monitoring and operations platform**

---

# Monitor system integration — Nightingale (夜莺)

- `Kafka-Manager` submits both the monitoring data and the alert rules to Nightingale, and relies on Nightingale's monitor system to provide monitoring and alerting.

- Metrics reporting and alert-rule creation are already in place. Features such as viewing alert history, or the metric data at the moment an alert fired, are still being integrated (for now they can be viewed in Nightingale itself); contributions and co-development are welcome.

## 1. Configuration

```yml
# The monitor-related part of the configuration file
monitor:
  enabled: false
  n9e:
    nid: 2
    user-token: 123456
    # Nightingale mon (monitor) service address
    mon:
      base-url: http://127.0.0.1:8006
    # Nightingale transfer (upload) service address
    sink:
      base-url: http://127.0.0.1:8008
    # Nightingale rdb (resource) service address
    rdb:
      base-url: http://127.0.0.1:80

# enabled: whether monitoring/alerting is enabled; true: on, false: off
# n9e.nid: Nightingale node ID
# n9e.user-token: the user's token, found in Nightingale's personal settings
# n9e.mon.base-url: monitor service address
# n9e.sink.base-url: metrics upload address
# n9e.rdb.base-url: user/resource center address
```
docs/dev_guide/monitor_system_integrate_with_self.md — Normal file
@@ -0,0 +1,54 @@

---

(logo image)

**One-stop `Apache Kafka` cluster metrics monitoring and operations platform**

---

# Monitor system integration

- By default the monitor system integrates with [Nightingale](https://github.com/didi/nightingale);
- Integrating your own monitor system requires a small amount of secondary development: implement the relevant interfaces of the monitor/alert module;
- The integration has two parts: reporting of metric data, and integration of monitor/alert rules;

## 1. Metric data reporting

After just this step, metric data is reported to your monitor system, and you can already configure monitor/alert rules there.

**Step 1: implement the metrics-reporting interface**

- Assemble the data into whatever format your in-house monitor system requires and report it; the Nightingale integration code can serve as a reference implementation.
- To see which metrics are reported, look at the call sites of the interface.

(image: the interface to implement)
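The shape of such an integration can be sketched as follows. The interface and method names below are illustrative only — the real interface lives in the project's monitor module and is found at the call sites mentioned above; the sketch just shows the adapt-and-forward flow:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical stand-in for the project's metrics-sink interface.
interface MetricSink {
    boolean sinkMetrics(List<Map<String, Object>> metricPoints);
}

// A real in-house integration would convert each point to the monitor system's
// wire format and POST it; this sketch only buffers the points so the flow is visible.
class BufferingMetricSink implements MetricSink {
    final List<Map<String, Object>> buffer = new ArrayList<>();

    @Override
    public boolean sinkMetrics(List<Map<String, Object>> metricPoints) {
        buffer.addAll(metricPoints);
        return true; // report success back to the caller
    }
}

public class MetricSinkSketch {
    public static void main(String[] args) {
        Map<String, Object> point = new HashMap<>();
        point.put("metric", "topic.bytes_in"); // illustrative metric name
        point.put("value", 1048576L);

        BufferingMetricSink sink = new BufferingMetricSink();
        sink.sinkMetrics(List.of(point));
        System.out.println(sink.buffer.size()); // prints 1
    }
}
```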
**Step 2: adjust the configuration**

(image: configuration changes)

**Step 3: enable the reporting task**

(image: enabling the reporting task)

## 2. Monitor/alert rule integration

Once **1. Metric data reporting** is done, you can configure monitor/alert rules in your own monitor system. After completing this step as well, you can create, delete, update, and query monitor/alert rules from within `Logi-KafkaManager`.

It works much like **1. Metric data reporting**.

**Step 1: implement the relevant interfaces**

(image: the interfaces to implement)

After step 1, the remaining work is the same as steps 2 and 3 of **1. Metric data reporting** — adjust the corresponding configuration.

## 3. Summary

This was a brief overview of the monitor/alert integration; if you want to keep the effort minimal, doing only **1. Metric data reporting** already covers many scenarios.

**If anything in this document is unclear, or you have suggestions, feel free to join the group chat and to contribute code — and if you like the project, a star is appreciated.**
docs/dev_guide/use_mysql_8.md — Normal file
@@ -0,0 +1,41 @@

---

(logo image)

**One-stop `Apache Kafka` cluster metrics monitoring and operations platform**

---

# Using `MySQL 8`

Thanks to [herry-hu](https://github.com/herry-hu) for providing this solution.

Because `MySQL 8` and `MySQL 5.7` cannot currently be supported at the same time, the code defaults to `MySQL 5.7`.

To use `MySQL 8`, make the following small code changes.

- Step 1. Change the MySQL driver class in application.yml
```shell
# Change the class after driver-class-name to:
# driver-class-name: com.mysql.jdbc.Driver
driver-class-name: com.mysql.cj.jdbc.Driver
```

- Step 2. Change the MySQL dependency
```shell
# In the pom.xml at the repository root, change the `MySQL` dependency version to:

<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    # <version>5.1.41</version>
    <version>8.0.20</version>
</dependency>
```
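One extra pitfall worth knowing (not part of the original steps, just common Connector/J 8.x behavior): the 8.x driver can refuse to connect when the server's time zone is ambiguous, which is worked around by adding `serverTimezone` to the JDBC URL. An illustrative fragment — the host and the `kafka_manager` database name below are placeholders, merge the idea into your existing spring.datasource settings:

```yml
spring:
  datasource:
    # %2B is the URL-encoded "+" in GMT+8
    url: jdbc:mysql://localhost:3306/kafka_manager?useSSL=false&serverTimezone=GMT%2B8
    driver-class-name: com.mysql.cj.jdbc.Driver
```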
docs/dev_guide/周期任务说明文档.md — Normal file
@@ -0,0 +1,39 @@
---

(logo image)

**One-stop `Apache Kafka` cluster metrics monitoring and operations platform**

---

| Task / method name | Class | Description | cron | cron meaning | Threads |
| ------------------ | ----- | ----------- | ---- | ------------ | ------- |
| calKafkaBill | CalKafkaTopicBill | Computes the Kafka usage bill | 0 0 1 * * ? | daily at 01:00 | 1 |
| calRegionCapacity | CalRegionCapacity | Computes Region capacity | 0 0 0/12 * * ? | every 12 hours, on the hour | 1 |
| calTopicStatistics | CalTopicStatistics | Computes Topic statistics | 0 0 0/4 * * ? | every 4 hours, on the hour | 5 |
| flushBrokerTable | FlushBrokerTable | Refreshes the Broker table data | 0 0 0/1 * * ? | every hour, on the hour | 1 |
| flushExpiredTopic | FlushExpiredTopic | Updates expired Topics | 0 0 0/5 * * ? | every 5 hours, on the hour | 1 |
| syncClusterTaskState | SyncClusterTaskState | Syncs cluster task states | 0 0/1 * * * ? | every minute, at second 0 | 1 |
| newCollectAndPublishCGData | CollectAndPublishCGData | Collects and publishes consumer-group metrics | 30 0/1 * * * ? | every minute, at second 30 | 10 |
| collectAndPublishCommunityTopicMetrics | CollectAndPublishCommunityTopicMetrics | Collects community Topic metrics | 31 0/1 * * * ? | every minute, at second 31 | 5 |
| collectAndPublishTopicThrottledMetrics | CollectAndPublishTopicThrottledMetrics | Collects and publishes Topic throttling info | 11 0/1 * * * ? | every minute, at second 11 | 5 |
| deleteMetrics | DeleteMetrics | Periodically deletes metrics data | 0 0/2 * * * ? | every 2 minutes, at second 0 | 1 |
| storeDiDiAppTopicMetrics | StoreDiDiAppTopicMetrics | Stores per-appId traffic from JMX into the DB | 41 0/1 * * * ? | every minute, at second 41 | 5 |
| storeDiDiTopicRequestTimeMetrics | StoreDiDiTopicRequestTimeMetrics | Stores TopicRequestTimeMetrics from JMX into the DB | 51 0/1 * * * ? | every minute, at second 51 | 5 |
| autoHandleTopicOrder | AutoHandleTopicOrder | Automatically handles Topic-related orders | 0 0/1 * * * ? | every minute, at second 0 | 1 |
| automatedHandleOrder | AutomatedHandleOrder | Automated order approval | 0 0/1 * * * ? | every minute, at second 0 | 1 |
| flushReassignment | FlushReassignment | Handles partition reassignment tasks | 0 0/1 * * * ? | every minute, at second 0 | 1 |
| syncTopic2DB | SyncTopic2DB | Flushes Topics not yet persisted into the DB | 0 0/2 * * * ? | every 2 minutes, at second 0 | 1 |
| sinkCommunityTopicMetrics2Monitor | SinkCommunityTopicMetrics2Monitor | Reports Topic monitor metrics | 1 0/1 * * * ? | every minute, at second 1 | 5 |
| flush | LogicalClusterMetadataManager | Refreshes logical-cluster metadata into the cache | 0/30 * * * * ? | every 30 seconds | 1 |
| flush | AccountServiceImpl | Refreshes account info into the cache | 0/5 * * * * ? | every 5 seconds | 1 |
| ipFlush | HeartBeat | Records the platform host's IP etc. into the DB | 0/10 * * * * ? | every 10 seconds | 1 |
| flushTopicMetrics | FlushTopicMetrics | Refreshes Topic metrics into the cache | 5 0/1 * * * ? | every minute, at second 5 | 1 |
| schedule | FlushBKConsumerGroupMetadata | Refreshes broker consumer-group info into the cache | 15 0/1 * * * ? | every minute, at second 15 | 1 |
| flush | FlushClusterMetadata | Refreshes physical-cluster metadata into the cache | 0/30 * * * * ? | every 30 seconds | 1 |
| flush | FlushTopicProperties | Refreshes physical-cluster configs into the cache | 25 0/1 * * * ? | every minute, at second 25 | 1 |
| schedule | FlushZKConsumerGroupMetadata | Refreshes ZK consumer-group info into the cache | 35 0/1 * * * ? | every minute, at second 35 | 1 |
89
docs/dev_guide/如何使用集群安装部署功能.md
Normal file
@@ -0,0 +1,89 @@
---

![kafka-manager-logo]

**One-stop `Apache Kafka` cluster metrics monitoring and operations management platform**

---

# How to Use the Cluster Install & Deploy Feature?

[TOC]

## 1. How It Works

![kcm_desc]

- LogiKM uploads the installation package to the S3 service;
- LogiKM calls the n9e (Nightingale) Job service API to create a task that runs the [kcm_script.sh](https://github.com/didi/LogiKM/blob/master/kafka-manager-extends/kafka-manager-kcm/src/main/resources/kcm_script.sh) script; kcm_script.sh is the script that installs and deploys the Kafka cluster;
- n9e dispatches the task script to the target machines, where the n9e agent executes it;
- kcm_script.sh then performs the Kafka broker installation and deployment.

---

## 2. Usage

### 2.1 Step 1: Update the Configuration

**Configure the application.yml file**

```yaml
kcm:
  enabled: false                      # whether the feature is enabled; change this to true
  s3:                                 # S3 storage service
    endpoint: s3.didiyunapi.com
    access-key: 1234567890
    secret-key: 0987654321
    bucket: logi-kafka
  n9e:                                # n9e (Nightingale)
    base-url: http://127.0.0.1:8004   # address of the n9e Job service
    user-token: 12345678              # the user's token
    timeout: 300                      # timeout for operating on a single machine
    account: root                     # account used for the operations
    script-file: kcm_script.sh        # the script; bundled in the kcm module of the source tree, no change needed here
    logikm-url: http://127.0.0.1:8080 # LogiKM address; kcm_script.sh calls LogiKM during deployment to check status, so http://IP:PORT is enough

account:
  jump-login:
    gateway-api: false    # gateway APIs
    third-part-api: false # third-party APIs; change this to true to allow calling the open third-party APIs without logging in
```

### 2.2 Step 2: Check the Services

**Check the S3 service**

- On the "运维管控 -> 集群运维 -> 版本管理" (Version Management) page, verify that uploading and viewing packages both work. If they do not, check whether the S3 configuration is correct;
- If everything works, upload the Kafka installation package (the `.tgz` archive) together with the `server.properties` file.

**Check the n9e Job service**

- Create a Job task, select the machines on which the Kafka cluster is to be installed, and run the command `echo "Hello LogiKM"` to see whether it executes successfully. If it does not, check the n9e installation;
- If it succeeds, n9e and the target machines can communicate properly.

### 2.3 Step 3: Add the Cluster

Add the cluster to be deployed under "运维管控 -> 集群列表" (Cluster List) in LogiKM. **Note: adding an empty Kafka cluster without any brokers is allowed at this point.** For `bootstrapServers`, the address the Kafka cluster will have once deployment completes is sufficient; the ZK address, however, must match the ZK address in the cluster's `server.properties`.

### 2.4 Step 4: Deploy the Cluster

- Open the "运维管控 -> 集群运维 -> 集群任务" (Cluster Tasks) page in LogiKM and click "新建集群任务" (New Cluster Task);
- Select the cluster, task type, package version, and server configuration, fill in the host list, and click confirm; this creates a task in the n9e Job center. **Note: if creation fails, check the logs to see why it failed**;
- You can then use the task's detail and status views to operate on it.

### 2.5 Possible Issues

#### 2.5.1 Issue 1: Task Timeouts, Failures, etc.

Go to the n9e Job center and inspect the logs of the corresponding task:

- If the logs report that the installation package failed to download, check whether the package can be fetched from the S3 service directly with `wget`; if it cannot, modify the kcm_script.sh script accordingly;
- If the logs report that calling LogiKM failed, manually test the LogiKM endpoint that kcm_script.sh calls (for example with Postman) and fix whatever is wrong. Note: see kcm_script.sh for the exact endpoint.
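For the download case, the fetch can be reproduced by hand. A minimal sketch, assuming path-style S3 URLs; the bucket and package names below are hypothetical, so substitute the values from your own `kcm` configuration, and note the exact URL shape depends on your S3 service:

```shell
# Hypothetical values; take endpoint/bucket from your kcm config
ENDPOINT="s3.didiyunapi.com"
BUCKET="logi-kafka"
PKG="kafka_2.12-2.5.0.tgz"

# Path-style URL assumed here; print it, then try fetching it
URL="http://${ENDPOINT}/${BUCKET}/${PKG}"
echo "${URL}"
# Uncomment to actually test the download and archive integrity:
# wget -q "${URL}" -O "/tmp/${PKG}" && tar -tzf "/tmp/${PKG}" >/dev/null && echo "package OK"
```

If the URL is not reachable with `wget`, adjust kcm_script.sh to match how your S3 service exposes objects.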

## 3. Notes

- Cluster install & deploy only installs the Kafka broker; it does not install Kafka's ZK service;
- For any customization, such as changing the installation directory, modify the kcm_script.sh script;
- kcm_script.sh location: [kcm_script.sh](https://github.com/didi/LogiKM/blob/master/kafka-manager-extends/kafka-manager-kcm/src/main/resources/kcm_script.sh).
docs/dev_guide/如何增加上报监控系统指标.md
Normal file
@@ -0,0 +1,53 @@
---

![kafka-manager-logo]

**One-stop `Apache Kafka` cluster metrics monitoring and operations management platform**

---

# How to Add a Metric Reported to the Monitoring System?

## 0. Introduction

LogiKM is a **one-stop `Apache Kafka` cluster metrics monitoring and operations management platform**. It currently reports metrics such as consumer lag and topic traffic to the monitoring system, so that users can configure alert rules on those metrics there and thereby monitor whether their own clients are healthy.

So how do we add a new metric, say broker traffic, broker liveness, or the number of cluster controllers?

Before going into the details, recall that Kafka monitoring data essentially lives in the brokers, in JMX, and in ZK. LogiKM already has the basic ability to fetch data from all three places, so fetching additional metrics on top of LogiKM is generally quite straightforward.

Below we take the already-implemented topic traffic metric as an example and walk through how LogiKM fetches a topic metric and reports it.

---

## 1. Locate the Metric

From our knowledge of Kafka we know that topic traffic is stored in JMX, so that is where it has to be fetched from. If you are not sure where the metric you need lives, join the Kafka Chinese community we maintain (QR code in the README) and discuss it there.

---

## 2. Fetch the Metric

How the topic traffic metric is fetched is explained in the figure below.

![get_metrics]
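For reference, topic traffic comes from the standard broker MBeans that stock Apache Kafka exposes over JMX; `<topicName>` is a placeholder for a concrete topic, and reading an attribute such as `OneMinuteRate` on these meters yields the traffic rate:

```
kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=<topicName>
kafka.server:type=BrokerTopicMetrics,name=BytesOutPerSec,topic=<topicName>
kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec,topic=<topicName>
```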
---

## 3. Report the Metric

With the topic traffic metric collected in the previous step, the next step is to report it to the monitoring system; this only requires sending the data in the format the monitoring system expects.

LogiKM has a monitor module for this, as shown below:

![monitor_module]

## 4. Further Reading

For details on monitoring system integration, see:

[Monitoring system integration](./monitor_system_integrate_with_self.md)

[Monitoring system integration example: integrating n9e](./monitor_system_integrate_with_n9e.md)
docs/install_guide/config_description.md
Normal file
@@ -0,0 +1,107 @@
---

![kafka-manager-logo]

**One-stop `Apache Kafka` cluster metrics monitoring and operations management platform**

---

# Configuration Reference

```yaml
server:
  port: 8080                # service port
  tomcat:
    accept-count: 1000
    max-connections: 10000
    max-threads: 800
    min-spare-threads: 100

spring:
  application:
    name: kafkamanager
  datasource:
    kafka-manager:          # database connection settings
      jdbc-url: jdbc:mysql://127.0.0.1:3306/kafka_manager?characterEncoding=UTF-8&serverTimezone=GMT%2B8 # database address
      username: admin       # user name
      password: admin       # password
      driver-class-name: com.mysql.jdbc.Driver
  main:
    allow-bean-definition-overriding: true
  profiles:
    active: dev             # active profile
  servlet:
    multipart:
      max-file-size: 100MB
      max-request-size: 100MB

logging:
  config: classpath:logback-spring.xml

custom:
  idc: cn                   # data center of the deployment; ignore this setting, it will be removed later
  jmx:
    max-conn: 10            # maximum number of JMX connections to a single broker
  store-metrics-task:
    community:
      broker-metrics-enabled: true  # collection switch for community broker metrics; when off, they are neither collected nor written to the DB
      topic-metrics-enabled: true   # collection switch for community topic metrics; when off, they are neither collected nor written to the DB
    didi:
      app-topic-metrics-enabled: false          # metric embedded by Didi; does not exist in community AK, so off by default
      topic-request-time-metrics-enabled: false # metric embedded by Didi; does not exist in community AK, so off by default
      topic-throttled-metrics-enabled: false    # metric embedded by Didi; does not exist in community AK, so off by default
    save-days: 7            # days metrics are kept in the DB; -1 keeps them forever, 7 keeps the last 7 days

# task-related switches
task:
  op:
    sync-topic-enabled: false # periodically sync not-yet-persisted topics to the DB
    order-auto-exec:          # switches for the automatic order-approval threads
      topic-enabled: false    # auto-approval of topic orders; false: off, true: on
      app-enabled: false      # auto-approval of app orders; false: off, true: on

account:                    # LDAP-related settings; community support is still incomplete, can be ignored for now; contributions welcome
  ldap:

kcm:                        # cluster upgrade & deployment feature; used together with n9e and S3. A dedicated doc will follow; it involves modifying the kcm_script.sh script
  enabled: false            # off by default
  storage:
    base-url: http://127.0.0.1      # storage address
  n9e:
    base-url: http://127.0.0.1:8004 # address of the n9e Job center
    user-token: 12345678            # n9e user's token
    timeout: 300                    # cluster-task timeout, in seconds
    account: root                   # account used by cluster tasks
    script-file: kcm_script.sh      # cluster-task script

monitor:                    # monitoring & alerting feature, used together with n9e
  enabled: false            # off by default; true turns it on
  n9e:
    nid: 2
    user-token: 1234567890
    mon:
      # address of the n9e mon (monitoring) service
      base-url: http://127.0.0.1:8032
    sink:
      # address of the n9e transfer (upload) service
      base-url: http://127.0.0.1:8006
    rdb:
      # address of the n9e rdb (resource) service
      base-url: http://127.0.0.1:80

# enabled: whether the monitoring/alerting feature is on; true: on, false: off
# n9e.nid: the n9e node ID
# n9e.user-token: the user's secret key, found in n9e personal settings
# n9e.mon.base-url: monitoring address
# n9e.sink.base-url: data reporting address
# n9e.rdb.base-url: user resource center address

notify:                     # notification feature
  kafka:                    # by default notifications are sent to the specified Kafka topic
    cluster-id: 95          # cluster ID of the topic
    topic-name: didi-kafka-notify # topic name
  order:                    # address of the deployed KM
    detail-url: http://127.0.0.1
```
docs/install_guide/install_guide_cn.md
Normal file
@@ -0,0 +1,93 @@
---

![kafka-manager-logo]

**One-stop `Apache Kafka` cluster metrics monitoring and operations management platform**

---

# Installation Guide

## 1. Environment Requirements

If you install from a Release package, only `Java` and `MySQL` are required. If you want to build the source package first and then use it, `Maven` and a `Node` environment are also required.

- `Java 8+` (runtime)
- `MySQL 5.7` (data storage)
- `Maven 3.5+` (backend build)
- `Node 10+` (frontend build)

---

## 2. Get the Package

**1. Download a Release directly**

If you do not want the extra work and do not plan any secondary development, simply download a Release package: [GitHub Releases](https://github.com/didi/Logi-KafkaManager/releases)

If the GitHub download is too slow, you can also get the package in the `Logi-KafkaManager` user group; the group address is in the README.

**2. Build from source**

After downloading the code, enter the `Logi-KafkaManager` root directory and run `mvn -Prelease-kafka-manager -Dmaven.test.skip=true clean install -U`.
When it finishes, a `kafka-manager-*.tar.gz` and a `kafka-manager-*.zip` are generated under `distribution/target`; either archive will do. An already-extracted directory is also produced alongside them.

---

## 3. Extract the Package

After extracting, you will find a MySQL initialization file at `kafka-manager/conf/create_mysql_table.sql`. Initialize the DB first.

## 4. Initialize the MySQL DB

Run the SQL in [create_mysql_table.sql](../../distribution/conf/create_mysql_table.sql) to create the required MySQL database and tables. The database created by default is named `logi_kafka_manager`.

```
# Example:
mysql -uXXXX -pXXX -h XXX.XXX.XXX.XXX -PXXXX < ./create_mysql_table.sql
```

---

## 5. Update the Configuration

Copy `conf/application.yml.example` to a file named `application.yml` in the same directory (conf/application.yml) and adjust the configuration. If you change nothing the defaults are used, but at the very least point the MySQL settings at your own database.

## 6. Start / Stop

The extracted package contains start and stop scripts:

`kafka-manager/bin/startup.sh`
`kafka-manager/bin/shutdown.sh`

Run `sh startup.sh` to start and `sh shutdown.sh` to stop.

## 7. Usage

For a local start, open `http://localhost:8080` and log in with the default account and password (`admin/admin`). See also: [kafka-manager user guide](../user_guide/user_guide_cn.md)

## 8. Upgrading

If you are upgrading between versions, see the [kafka-manager upgrade guide](../../distribution/upgrade_config.md).
It is also included in the downloaded package (V2.5 and later) as kafka-manager/upgrade_config.md.

## 9. Starting in an IDE

> If you want to contribute, or start the project from an IDE:
> first run `mvn -Dmaven.test.skip=true clean install -U`
>
> You can then comment out the `kafka-manager-console` module in [pom.xml](../../pom.xml);
> the reason is that every install repackages the frontend (`kafka-manager-console`) into `kafka-manager-web`.
>
> After that, simply run the main method of
> com.xiaojukeji.kafka.manager.web.MainApplication in the `kafka-manager-web` module from your IDE.
docs/install_guide/install_guide_nginx_cn.md
Normal file
@@ -0,0 +1,94 @@
---

![kafka-manager-logo]

**One-stop `Apache Kafka` cluster metrics monitoring and operations management platform**

---

## Nginx Configuration: Installation Guide

# 1. Standalone Deployment

See: [kafka-manager installation guide](install_guide_cn.md)

# 2. Nginx Configuration

## 1. Standalone deployment configuration

```
# nginx root-path proxy configuration
location / {
    proxy_pass http://ip:port;
}
```

## 2. Frontend/backend separation & serving multiple static bundles

The configuration below lets `nginx serve multiple static bundles`, enabling frontend/backend separation and independent release iteration.

### 1. Download the source

Download the code for the version you need: [GitHub](https://github.com/didi/Logi-KafkaManager)

### 2. Modify webpack.config.js

Edit `webpack.config.js` in the `kafka-manager-console` module.
In everything below, <font color='red'>xxxx</font> is both the nginx proxy path and the prefix for the packaged static files; change <font color='red'>xxxx</font> as needed.

```
cd kafka-manager-console
vi webpack.config.js

# publicPath defaults to packaging for the root directory; change it to the nginx proxy path.
let publicPath = '/xxxx';
```
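The same edit can be applied non-interactively. A sketch assuming GNU sed; `/xxxx` is the same placeholder as above, substitute your real prefix:

```shell
# run from the kafka-manager-console directory
cp webpack.config.js webpack.config.js.bak   # keep a backup
# rewrite the publicPath line to the nginx proxy path
sed -i "s#let publicPath = '[^']*';#let publicPath = '/xxxx';#" webpack.config.js
grep "let publicPath" webpack.config.js      # confirm the change
```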
### 3. Build

```
npm cache clean --force && npm install
```

Note: if the build fails, run `npm install clipboard@2.0.6`; otherwise ignore this.

### 4. Deploy

#### 1. Deploy the frontend static files

The static assets are under `../kafka-manager-web/src/main/resources/templates`.

Upload them to a directory of your choice; this demo uses the `root` directory.

#### 2. Upload the jar and start it; see the [kafka-manager installation guide](install_guide_cn.md)

#### 3. Update the nginx configuration

```
location /xxxx {
    # location of the static files
    alias /root/templates;
    try_files $uri $uri/ /xxxx/index.html;
    index index.html;
}

location /api {
    proxy_pass http://ip:port;
}
# /api is the recommended backend prefix; if it conflicts, use the following instead
#location /api/v2 {
#    proxy_pass http://ip:port;
#}
#location /api/v1 {
#    proxy_pass http://ip:port;
#}
```
docs/user_guide/add_cluster/add_cluster.md
Normal file
@@ -0,0 +1,49 @@
---

![kafka-manager-logo]

**One-stop `Apache Kafka` cluster metrics monitoring and operations management platform**

---

# Adding a Cluster

## Key Concepts

To handle large clusters and complex business scenarios, LogiKM introduces the concepts of Region and logical cluster:

- Region: a subset of brokers grouped together; Regions define the unit of resource partitioning, improving scalability and isolation. If some topics misbehave, the impact does not spread across a large number of brokers.
- Logical cluster: a logical cluster is composed of one or more Regions, which makes it easy to manage large clusters by business line and assurance level.

![op_cluster_arch]

Adding a cluster takes three steps:

1. Add the physical cluster: fill in the machine addresses, security protocol, and other settings to connect the real physical cluster.
2. Create a Region: group a subset of brokers into a Region.
3. Create a logical cluster: compose Regions into logical clusters according to business lines and assurance levels.

![op_cluster_flow]

**Note: steps 2 and 3 are required because ordinary users only see logical clusters; without them, ordinary users would see nothing at all.**

## 1. Add the Physical Cluster

![op_add_cluster]

As shown above, fill in the cluster information and click confirm to finish adding the cluster. Because deployment may be distributed, the cluster details take about **`1 minute`** to appear in the UI after the cluster is added.

## 2. Create a Region

![op_add_region]

As shown above, fill in the Region information and click confirm to create the Region.

Note: a Region is simply a set of brokers; group brokers as your business requires and create Regions accordingly.

## 3. Create a Logical Cluster

![op_add_logical_cluster]

As shown above, fill in the logical cluster information and click confirm to create the logical cluster.
BIN docs/user_guide/add_cluster/assets/op_add_cluster.jpg (new file, 261 KiB)
BIN docs/user_guide/add_cluster/assets/op_add_logical_cluster.jpg (new file, 240 KiB)
BIN docs/user_guide/add_cluster/assets/op_add_region.jpg (new file, 195 KiB)
BIN docs/user_guide/add_cluster/assets/op_cluster_arch.png (new file, 124 KiB)
BIN docs/user_guide/add_cluster/assets/op_cluster_flow.png (new file, 105 KiB)
BIN docs/user_guide/assets/LeaderRebalance.png (new file, 94 KiB)
BIN docs/user_guide/assets/Versionmanagement.png (new file, 181 KiB)
BIN docs/user_guide/assets/alarmhistory.png (new file, 65 KiB)
BIN docs/user_guide/assets/alarmruledetail.png (new file, 166 KiB)
BIN docs/user_guide/assets/alarmruleex.png (new file, 30 KiB)
BIN docs/user_guide/assets/alarmruleforbidden.png (new file, 78 KiB)
BIN docs/user_guide/assets/alarmruleforbiddenhistory.png (new file, 48 KiB)
BIN docs/user_guide/assets/alarmrulesent.png (new file, 55 KiB)
BIN docs/user_guide/assets/alarmruletime.png (new file, 16 KiB)
BIN docs/user_guide/assets/appdetailop.png (new file, 297 KiB)
BIN docs/user_guide/assets/applyapp.png (new file, 189 KiB)
BIN docs/user_guide/assets/applycluster.png (new file, 173 KiB)
BIN docs/user_guide/assets/applylocated.png (new file, 197 KiB)
BIN docs/user_guide/assets/applytopicright.png (new file, 244 KiB)
BIN docs/user_guide/assets/appmanager.png (new file, 118 KiB)
BIN docs/user_guide/assets/appmanagerop.png (new file, 150 KiB)
BIN docs/user_guide/assets/appoffline.png (new file, 177 KiB)
BIN docs/user_guide/assets/apprighttopic.png (new file, 276 KiB)
BIN docs/user_guide/assets/apptopic.png (new file, 257 KiB)
BIN docs/user_guide/assets/billdata.png (new file, 153 KiB)
BIN docs/user_guide/assets/brokerinfo.png (new file, 189 KiB)
BIN docs/user_guide/assets/brokerinfolist.png (new file, 187 KiB)
BIN docs/user_guide/assets/brokerpartition.png (new file, 92 KiB)
BIN docs/user_guide/assets/brokerpartitionop.png (new file, 116 KiB)
BIN docs/user_guide/assets/brokerrask.png (new file, 166 KiB)
BIN docs/user_guide/assets/brokerraskop.png (new file, 158 KiB)
BIN docs/user_guide/assets/brokerregion.png (new file, 124 KiB)