Compare commits

5 commits: master ... v3.0.0-bet
| Author | SHA1 | Date |
|---|---|---|
| | 41637dc1e5 | |
| | 65499443c2 | |
| | 6515dd28aa | |
| | 13354145fc | |
| | 0b376bd69c | |
51  .github/ISSUE_TEMPLATE/bug_report.md  (vendored)

@@ -1,51 +0,0 @@
---
name: Report a bug
about: Report a bug in KnowStreaming
title: ''
labels: bug
assignees: ''

---

- [ ] I have searched the existing [issues](https://github.com/didi/KnowStreaming/issues) and found no duplicates.

Would you like to claim this bug?

「 Y / N 」

### Environment

* KnowStreaming version : <font size=4 color =red> xxx </font>
* Operating System version : <font size=4 color =red> xxx </font>
* Java version : <font size=4 color =red> xxx </font>

### Steps to reproduce

1. xxx

2. xxx

3. xxx

### Expected result

<!-- What result did you expect? -->

### Actual result

<!-- What actually happened? -->

---

If there is an exception, please attach the stack trace:

```
Just put your stack trace here!
```
8  .github/ISSUE_TEMPLATE/config.yml  (vendored)

@@ -1,8 +0,0 @@
blank_issues_enabled: true
contact_links:
  - name: Discuss a question
    url: https://github.com/didi/KnowStreaming/discussions/new
    about: Start questions, discussions, and more
  - name: KnowStreaming website
    url: https://knowstreaming.com/
    about: KnowStreaming website
26  .github/ISSUE_TEMPLATE/detail_optimizing.md  (vendored)

@@ -1,26 +0,0 @@
---
name: Optimization suggestion
about: Suggestions for improving existing features
title: ''
labels: Optimization Suggestions
assignees: ''

---

- [ ] I have searched the existing [issues](https://github.com/didi/KnowStreaming/issues) and found no duplicates.

Would you like to claim this optimization?

「 Y / N 」

### Environment

* KnowStreaming version : <font size=4 color =red> xxx </font>
* Operating System version : <font size=4 color =red> xxx </font>
* Java version : <font size=4 color =red> xxx </font>

### Feature that needs optimization

### Suggested optimization
20  .github/ISSUE_TEMPLATE/feature_request.md  (vendored)

@@ -1,20 +0,0 @@
---
name: Propose a new feature/requirement
about: Request a feature for KnowStreaming
title: ''
labels: feature
assignees: ''

---

- [ ] I searched the [issues](https://github.com/didi/KnowStreaming/issues) and did not find a related feature request.
- [ ] I searched the versions already published in the [release notes](https://github.com/didi/KnowStreaming/releases) and did not find this feature.

Would you like to claim this feature?

「 Y / N 」

## Describe the requirement here

<!-- Please describe your requirement as clearly as possible -->
12  .github/ISSUE_TEMPLATE/question.md  (vendored)

@@ -1,12 +0,0 @@
---
name: Ask a question
about: Ask a question about KnowStreaming
title: ''
labels: question
assignees: ''

---

- [ ] I have searched the existing [issues](https://github.com/didi/KnowStreaming/issues) and found no duplicates.

## Ask your question here
23  .github/PULL_REQUEST_TEMPLATE.md  (vendored)

@@ -1,23 +0,0 @@
Please do not create a Pull Request without first creating an Issue.

## What is the purpose of the change

XXXXX

## Brief changelog

XX

## Verifying this change

XXXX

Please follow this checklist to help us integrate your contribution quickly and easily:

* [ ] One PR (short for Pull Request) solves exactly one problem; a PR that solves several problems at once is not allowed;
* [ ] Make sure the PR has a corresponding Issue (usually created before you start working on it), unless it is a trivial change such as a typo fix that needs no Issue;
* [ ] Format the title and body of the PR and of the Commit-Log, see e.g. #861. Note: the Commit-Log must be written when you `git commit`; it cannot be edited on GitHub;
* [ ] Write a PR description detailed enough to understand what the PR does, how, and why;
* [ ] Write the necessary unit tests to verify your logic correction. If a new feature or significant change is submitted, remember to add an integration-test in the test module;
* [ ] Make sure the build compiles and the integration tests pass;
43  .github/workflows/ci_build.yml  (vendored)

@@ -1,43 +0,0 @@
name: KnowStreaming Build

on:
  push:
    branches: [ "*" ]
  pull_request:
    branches: [ "*" ]

jobs:
  build:

    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3

      - name: Set up JDK 11
        uses: actions/setup-java@v3
        with:
          java-version: '11'
          distribution: 'temurin'
          cache: maven

      - name: Setup Node
        uses: actions/setup-node@v1
        with:
          node-version: '12.22.12'

      - name: Build With Maven
        run: mvn -Prelease-package -Dmaven.test.skip=true clean install -U

      - name: Get KnowStreaming Version
        if: ${{ success() }}
        run: |
          version=`mvn -Dexec.executable='echo' -Dexec.args='${project.version}' --non-recursive exec:exec -q`
          echo "VERSION=${version}" >> $GITHUB_ENV

      - name: Upload Binary Package
        if: ${{ success() }}
        uses: actions/upload-artifact@v3
        with:
          name: KnowStreaming-${{ env.VERSION }}.tar.gz
          path: km-dist/target/KnowStreaming-${{ env.VERSION }}.tar.gz
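The `Get KnowStreaming Version` step above hands the Maven project version to the later upload step through the `$GITHUB_ENV` file: each `KEY=value` line appended to that file becomes an environment variable for subsequent steps. A local simulation of that mechanism (the version string below is a stand-in for the real `mvn … exec:exec` output, not taken from the workflow):

```shell
GITHUB_ENV=$(mktemp)                        # GitHub Actions provides this path for real
version="3.0.0-beta.1"                      # stand-in for ${project.version} from Maven
echo "VERSION=${version}" >> "$GITHUB_ENV"  # what the workflow's run step does
# A later step (e.g. Upload Binary Package) sees the variable already exported:
set -a; . "$GITHUB_ENV"; set +a
echo "km-dist/target/KnowStreaming-${VERSION}.tar.gz"
```

This is why the `Upload Binary Package` step can reference `${{ env.VERSION }}` even though the version was computed in an earlier step.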
4  .gitignore  (vendored)

@@ -110,7 +110,3 @@ dist/
 dist/*
 km-rest/src/main/resources/templates/
 *dependency-reduced-pom*
-#filter flattened xml
-*/.flattened-pom.xml
-.flattened-pom.xml
-*/*/.flattened-pom.xml
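Headers like `@@ -110,7 +110,3 @@` throughout this compare are unified-diff hunk headers: `-start,count` describes the old side and `+start,count` the new side, so a count shrinking from 7 to 3 means four lines were removed, and `+0,0` marks a fully deleted file. A small sketch of reading them (the `parse_hunk` helper is ours, not part of the diff):

```shell
# Parse "@@ -old_start,old_count +new_start,new_count @@" into removed/added counts.
parse_hunk() {
  counts=$(echo "$1" | sed -n 's/^@@ -[0-9]*,\([0-9]*\) +[0-9]*,\([0-9]*\) @@.*/\1 \2/p')
  old_count=${counts% *}
  new_count=${counts#* }
  # Unchanged context lines count toward both sides, so the smaller count
  # bounds them; for pure-deletion hunks like the ones above this is exact.
  context=$(( old_count < new_count ? old_count : new_count ))
  echo "removed=$(( old_count - context )) added=$(( new_count - context ))"
}
parse_hunk "@@ -110,7 +110,3 @@ dist/"   # removed=4 added=0 (the .flattened-pom.xml lines)
parse_hunk "@@ -1,51 +0,0 @@"            # removed=51 added=0 (a fully deleted file)
```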
CODE_OF_CONDUCT.md

@@ -1,74 +0,0 @@
# Contributor Covenant Code of Conduct

## Our Pledge

In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project, and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.

## Our Standards

Examples of behavior that contributes to creating a positive environment include:

* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members

Examples of unacceptable behavior by participants include:

* The use of sexualized language or imagery and unwelcome sexual attention or advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a professional setting

## Our Responsibilities

Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.

Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.

## Scope

This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at https://knowstreaming.com/support-center . All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.

Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html

[homepage]: https://www.contributor-covenant.org
158  CONTRIBUTING.md

@@ -1,150 +1,28 @@

Removed (the 150-line version):

# Contributing to KnowStreaming

Welcome 👏🏻 to KnowStreaming! This document is a guide on how to contribute to KnowStreaming.

If you find anything incorrect or missing, please leave a comment or suggestion.

## Code of Conduct

Please make sure to read and observe our [Code of Conduct](./CODE_OF_CONDUCT.md).

## Contributing

**KnowStreaming** welcomes new participants in any role, including **User**, **Contributor**, **Committer**, and **PMC**.

We encourage newcomers to actively join the **KnowStreaming** project and grow from User to Contributor, Committer, and even PMC.

To do so, newcomers need to contribute actively to the **KnowStreaming** project. The following explains how.

### Creating/Opening an Issue

If you find a typo in the documentation, find a **bug** in the code, want a **new feature**, or want to **make a suggestion**, you can [create an Issue](https://github.com/didi/KnowStreaming/issues/new/choose) on GitHub to report it.

If you want to contribute directly, you can pick an issue carrying one of the labels below.

- [contribution welcome](https://github.com/didi/KnowStreaming/labels/contribution%20welcome): issues that badly need to be fixed or implemented
- [good first issue](https://github.com/didi/KnowStreaming/labels/good%20first%20issue): friendly to newcomers; a good issue to warm up on.

<font color=red ><b>Note that every PR must be associated with a valid issue. Otherwise, the PR will be rejected.</b></font>

### Starting Your Contribution

**Branches**

We use the `dev` branch as the development branch, which means it is an unstable branch.

In addition, our branching model conforms to [https://nvie.com/posts/a-successful-git-branching-model/](https://nvie.com/posts/a-successful-git-branching-model/). We strongly recommend that newcomers read that article before creating a PR.

**Contribution workflow**

For convenience, we define two terms here:

The private repository created by your fork is called the **forked repository**.
The project you forked from is called the **upstream repository**.

Now, if you are ready to create a PR, here is the contributor workflow:

1. Fork the [KnowStreaming](https://github.com/didi/KnowStreaming) project into your own repository
2. Pull from the upstream repository's `dev` and create your own local branch, e.g. `dev`
3. Modify the code on your local branch
4. Rebase onto the development branch and resolve any conflicts
5. Commit and push your changes to your own **forked repository**
6. Create a Pull Request against the `dev` branch of the **upstream repository**.
7. Wait for a reply. If the reply is slow, feel free to nag mercilessly.

For a more detailed workflow, see: [Contribution Workflow](./docs/contributer_guide/贡献流程.md)

When creating a Pull Request:

1. Please follow the PR [template](./.github/PULL_REQUEST_TEMPLATE.md)
2. Please make sure the PR has a corresponding issue.
3. If your PR contains large changes, e.g. a component refactoring or a new component, write detailed documentation about its design and usage (in the corresponding issue).
4. Note that a single PR must not be too large. If extensive changes are needed, it is better to split them into several separate PRs.
5. Before the PR is merged, keep the final commit message clear and concise, squashing multiple commits into one wherever possible.
6. After the PR is created, one or more reviewers will be assigned to it.

<font color=red><b>If your PR contains large changes, e.g. a component refactoring or a new component, please write detailed documentation about its design and usage.</b></font>

# Code Review Guidelines

Committers take turns reviewing code, ensuring that every change is reviewed by at least one Committer before merging.

Some principles:

- Readability: important code should be well documented. APIs should have Javadoc. Code style should be consistent with the existing style.
- Elegance: new functions, classes, or components should be well designed.
- Testability: unit test cases should cover 80% of new code.
- Maintainability: comply with our coding conventions.

# Developers

## Becoming a Contributor

Anyone who successfully submits a PR that gets merged is a Contributor.

For the list of contributors, see: [Contributor List](./docs/contributer_guide/开发者名单.md)

## Working Toward Committer

Generally speaking, contribute 8 significant patches and have at least three different people review them (you need the support of 3 Committers).

Then ask someone to nominate you. You need to demonstrate your:

1. at least 8 significant PRs, and related issues, in the project
2. ability to collaborate with the team
3. familiarity with the project's codebase and coding style
4. ability to write good code

A current Committer can nominate you via an Issue labeled `nomination` in KnowStreaming, including:

1. your first and last name
2. a link to your Git profile
3. an explanation of why you should become a Committer
4. details of 3 PRs, and the related issues, that the nominator worked through with you and that demonstrate your ability.

Two other Committers need to second your **nomination**. If no one objects within 5 working days, you become a Committer. If anyone objects or wants more information, the Committers will discuss and usually reach a consensus (within 5 working days).

# Open Source Reward Program

We warmly welcome developers to contribute to the KnowStreaming open-source project, and we reward contributors accordingly in recognition and appreciation.

## How to Contribute

1. Actively participate in Issue discussions, e.g. answering questions, offering ideas, or reporting unresolved bugs (Issue)
2. Write and improve the project's documentation (Wiki)
3. Submit patches to improve the code (Coding)

## What You Will Get

1. Inclusion and display in the KnowStreaming open-source contributor list
2. A KnowStreaming open-source contributor certificate (paper & electronic)
3. A KnowStreaming contributor gift pack (KnowStreaming/DiDi merchandise)

## Rules

- Contributors and Committers both receive the corresponding certificate and gift pack
- Each quarter, the KnowStreaming project team selects outstanding contributors and awards certificates.
- An annual selection is held at the end of the year

For the list of contributors, see: [Contributor List](./docs/contributer_guide/开发者名单.md)

Added (the 28-line version):

# Contribution Guideline

Thanks for considering to contribute this project. All issues and pull requests are highly appreciated.

## Pull Requests

Before sending pull request to this project, please read and follow guidelines below.

1. Branch: We only accept pull request on `dev` branch.
2. Coding style: Follow the coding style used in LogiKM.
3. Commit message: Use English and be aware of your spell.
4. Test: Make sure to test your code.

Add device mode, API version, related log, screenshots and other related information in your pull request if possible.

NOTE: We assume all your contribution can be licensed under the [Apache License 2.0](LICENSE).

## Issues

We love clearly described issues. :)

Following information can help us to resolve the issue faster.

* Device mode and hardware information.
* API version.
* Logs.
* Screenshots.
* Steps to reproduce the issue.
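Steps 2-6 of the contribution workflow above can be rehearsed entirely locally before touching GitHub. Everything below is illustrative (temporary paths, the `fix-123` branch name, a throwaway bare repository standing in for the upstream), not taken from the KnowStreaming docs:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q --bare upstream.git                   # stands in for didi/KnowStreaming
git clone -q upstream.git seed
(cd seed \
  && git config user.email dev@example.com && git config user.name dev \
  && git commit -q --allow-empty -m "init" \
  && git branch -q -M dev \
  && git push -q origin dev)                      # the upstream now has a dev branch
git clone -q upstream.git fork && cd fork         # stands in for your forked repository
git config user.email you@example.com && git config user.name you
git checkout -q -b fix-123 origin/dev             # step 2: local branch from dev
echo fix > fix.txt && git add fix.txt && git commit -q -m "fix: example change"
git fetch -q origin && git rebase -q origin/dev   # step 4: rebase, resolve conflicts
git push -q origin fix-123                        # step 5: push; step 6 is opening the PR
git rev-list --count origin/dev..fix-123          # one commit ready for review
```

Keeping the branch rebased on `dev` (step 4) is what lets the final PR land as the single clean commit requested in point 5 above.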
32  README.md

@@ -45,14 +45,7 @@
 ## About `Know Streaming`

-`Know Streaming` is a cloud-native Kafka management platform, born out of years of Kafka operating experience inside many Internet companies. It focuses on core scenarios such as Kafka operations management, monitoring and alerting, resource governance, and multi-active disaster recovery. Its platform-based, visual, and intelligent approach to user experience, monitoring, and operations provides a series of distinctive features that greatly ease daily work for users and operators, letting ordinary operators become Kafka experts.
-
-We are currently collecting Know Streaming user information to help us improve Know Streaming further.
-Please support us by sharing how you use it on [issue#663](https://github.com/didi/KnowStreaming/issues/663): [Who is using Know Streaming](https://github.com/didi/KnowStreaming/issues/663)
-
-Overall, it has the following characteristics:
+`Know Streaming` is a cloud-native Kafka management platform, born out of years of Kafka operating experience inside many Internet companies. It focuses on core scenarios such as Kafka operations management, monitoring and alerting, resource governance, and multi-active disaster recovery. Its platform-based, visual, and intelligent approach to user experience, monitoring, and operations provides a series of distinctive features that greatly ease daily work for users and operators, letting ordinary operators become Kafka experts. Overall, it has the following characteristics:

 - 👀 **Zero intrusion, full coverage**
   - No intrusive changes to `Apache Kafka` are needed: one click brings many Kafka versions from `0.10.x` to `3.x.x` under management, including versions running in `ZK` or `Raft` mode. The compatibility architecture is also highly extensible, helping you raise your cluster-management capabilities;

@@ -90,7 +83,6 @@
 - [Standalone Deployment Guide](docs/install_guide/单机部署手册.md)
 - [Version Upgrade Guide](docs/install_guide/版本升级手册.md)
 - [Local Source-Build Startup Guide](docs/dev_guide/本地源码启动手册.md)
-- [No-Data-on-Page Troubleshooting Guide](docs/dev_guide/页面无数据排查手册.md)

 **`Product manuals`**

@@ -101,21 +93,15 @@
 **Click [here](https://doc.knowstreaming.com/product) to get more documentation from the official website**

-**`Product sites`**
-- [Official site: https://knowstreaming.com](https://knowstreaming.com)
-- [Demo environment: https://demo.knowstreaming.com](https://demo.knowstreaming.com), login account: admin/admin

 ## Become a Community Contributor

-1. [Contribute source code](https://doc.knowstreaming.com/product/10-contribution): learn how to become a Know Streaming contributor
-2. [Detailed contribution workflow](https://doc.knowstreaming.com/product/10-contribution#102-贡献流程)
-3. [Open-source incentive program](https://doc.knowstreaming.com/product/10-contribution#105-开源激励计划)
-4. [Contributor list](https://doc.knowstreaming.com/product/10-contribution#106-贡献者名单)
-
-Earn a KnowStreaming open-source community certificate.
+Click [here](CONTRIBUTING.md) to learn how to become a Know Streaming contributor

 ## Join the Technical Discussion Groups

@@ -136,7 +122,7 @@
 👍 We are building the largest, most authoritative **[Chinese-language Kafka community](https://z.didi.cn/5gSF9)**

-Here you can meet Kafka gurus from major Internet companies and 6200+ Kafka enthusiasts, share knowledge, and keep up with the latest industry news in real time. We look forward 👏 to your joining~ https://z.didi.cn/5gSF9
+Here you can meet Kafka gurus from major Internet companies and 4000+ Kafka enthusiasts, share knowledge, and keep up with the latest industry news in real time. We look forward 👏 to your joining~ https://z.didi.cn/5gSF9

 Every question answered~! Perks for participating~!

@@ -146,16 +132,8 @@ PS: When asking, please describe the whole problem in one go and include environment information
 **`2. WeChat group`**

-To join via WeChat: add the WeChat account `PynnXie` with the note "KnowStreaming".
-<br/>
-
-Before joining, please take a moment to leave a star; a small star is what motivates the KnowStreaming authors to keep building the community.
-
-Thank you very much!!!
-
-<img width="116" alt="wx" src="https://user-images.githubusercontent.com/71620349/192257217-c4ebc16c-3ad9-485d-a914-5911d3a4f46b.png">
+To join via WeChat: add the WeChat accounts `mike_zhangliang` or `PenceXie` with the note "KnowStreaming".

 ## Star History

 [](https://star-history.com/#didi/KnowStreaming&Date)
|
|
||||||
|
|||||||
@@ -1,281 +1,4 @@
|
|||||||
|
|
||||||
## v3.4.0
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
**问题修复**
|
|
||||||
- [Bugfix]修复 Overview 指标文案错误的错误 ([#1190](https://github.com/didi/KnowStreaming/issues/1190))
|
|
||||||
- [Bugfix]修复删除 Kafka 集群后,Connect 集群任务出现 NPE 问题 ([#1129](https://github.com/didi/KnowStreaming/issues/1129))
|
|
||||||
- [Bugfix]修复在 Ldap 登录时,设置 auth-user-registration: false 会导致空指针的问题 ([#1117](https://github.com/didi/KnowStreaming/issues/1117))
|
|
||||||
- [Bugfix]修复 Ldap 登录,调用 user.getId() 出现 NPE 的问题 ([#1108](https://github.com/didi/KnowStreaming/issues/1108))
|
|
||||||
- [Bugfix]修复前端新增角色失败等问题 ([#1107](https://github.com/didi/KnowStreaming/issues/1107))
|
|
||||||
- [Bugfix]修复 ZK 四字命令解析错误的问题
|
|
||||||
- [Bugfix]修复 zk standalone 模式下,状态获取错误的问题
|
|
||||||
- [Bugfix]修复 Broker 元信息解析方法未调用导致接入集群失败的问题 ([#993](https://github.com/didi/KnowStreaming/issues/993))
|
|
||||||
- [Bugfix]修复 ConsumerAssignment 类型转换错误的问题
|
|
||||||
- [Bugfix]修复对 Connect 集群的 clusterUrl 的动态更新导致配置不生效的问题 ([#1079](https://github.com/didi/KnowStreaming/issues/1079))
|
|
||||||
- [Bugfix]修复消费组不支持重置到最旧 Offset 的问题 ([#1059](https://github.com/didi/KnowStreaming/issues/1059))
|
|
||||||
- [Bugfix]后端增加查看 User 密码的权限点 ([#1095](https://github.com/didi/KnowStreaming/issues/1095))
|
|
||||||
- [Bugfix]修复 Connect-JMX 端口维护信息错误的问题 ([#1146](https://github.com/didi/KnowStreaming/issues/1146))
|
|
||||||
- [Bugfix]修复系统管理子应用无法正常启动的问题 ([#1167](https://github.com/didi/KnowStreaming/issues/1167))
|
|
||||||
- [Bugfix]修复 Security 模块,权限点缺失问题 ([#1069](https://github.com/didi/KnowStreaming/issues/1069)), ([#1154](https://github.com/didi/KnowStreaming/issues/1154))
|
|
||||||
- [Bugfix]修复 Connect-Worker Jmx 不生效的问题 ([#1067](https://github.com/didi/KnowStreaming/issues/1067))
|
|
||||||
- [Bugfix]修复权限 ACL 管理中,消费组列表展示错误的问题 ([#1037](https://github.com/didi/KnowStreaming/issues/1037))
|
|
||||||
- [Bugfix]修复 Connect 模块没有默认勾选指标的问题([#1022](https://github.com/didi/KnowStreaming/issues/1022))
|
|
||||||
- [Bugfix]修复 es 索引 create/delete 死循环的问题 ([#1021](https://github.com/didi/KnowStreaming/issues/1021))
|
|
||||||
- [Bugfix]修复 Connect-GroupDescription 解析失败的问题 ([#1015](https://github.com/didi/KnowStreaming/issues/1015))
|
|
||||||
- [Bugfix]修复 Prometheus 开放接口中,Partition 指标 tag 缺失的问题 ([#1014](https://github.com/didi/KnowStreaming/issues/1014))
|
|
||||||
- [Bugfix]修复 Topic 消息展示,offset 为 0 不显示的问题 ([#1192](https://github.com/didi/KnowStreaming/issues/1192))
|
|
||||||
- [Bugfix]修复重置offset接口调用过多问题
|
|
||||||
- [Bugfix]Connect 提交任务变更为只保存用户修改的配置,并修复 JSON 模式下配置展示不全的问题 ([#1158](https://github.com/didi/KnowStreaming/issues/1158))
|
|
||||||
- [Bugfix]修复消费组 Offset 重置后,提示重置成功,但是前端不刷新数据,Offset 无变化的问题 ([#1090](https://github.com/didi/KnowStreaming/issues/1090))
|
|
||||||
- [Bugfix]修复未勾选系统管理查看权限,但是依然可以查看系统管理的问题 ([#1105](https://github.com/didi/KnowStreaming/issues/1105))
|
|
||||||
|
|
||||||
|
|
||||||
**产品优化**
|
|
||||||
- [Optimize]补充接入集群时,可选的 Kafka 版本列表 ([#1204](https://github.com/didi/KnowStreaming/issues/1204))
|
|
||||||
- [Optimize]GroupTopic 信息修改为实时获取 ([#1196](https://github.com/didi/KnowStreaming/issues/1196))
|
|
||||||
- [Optimize]增加 AdminClient 观测信息 ([#1111](https://github.com/didi/KnowStreaming/issues/1111))
|
|
||||||
- [Optimize]增加 Connector 运行状态指标 ([#1110](https://github.com/didi/KnowStreaming/issues/1110))
|
|
||||||
- [Optimize]统一 DB 元信息更新格式 ([#1127](https://github.com/didi/KnowStreaming/issues/1127)), ([#1125](https://github.com/didi/KnowStreaming/issues/1125)), ([#1006](https://github.com/didi/KnowStreaming/issues/1006))
|
|
||||||
- [Optimize]日志输出增加支持 MDC,方便用户在 logback.xml 中 json 格式化日志 ([#1032](https://github.com/didi/KnowStreaming/issues/1032))
|
|
||||||
- [Optimize]Jmx 相关日志优化 ([#1082](https://github.com/didi/KnowStreaming/issues/1082))
|
|
||||||
- [Optimize]Topic-Partitions增加主动超时功能 ([#1076](https://github.com/didi/KnowStreaming/issues/1076))
|
|
||||||
- [Optimize]Topic-Messages页面后端增加按照Partition和Offset纬度的排序 ([#1075](https://github.com/didi/KnowStreaming/issues/1075))
|
|
||||||
- [Optimize]Connect-JSON模式下的JSON格式和官方API的格式不一致 ([#1080](https://github.com/didi/KnowStreaming/issues/1080)), ([#1153](https://github.com/didi/KnowStreaming/issues/1153)), ([#1192](https://github.com/didi/KnowStreaming/issues/1192))
|
|
||||||
- [Optimize]登录页面展示的 star 数量修改为最新的数量
|
|
||||||
- [Optimize]Group 列表的 maxLag 指标调整为实时获取 ([#1074](https://github.com/didi/KnowStreaming/issues/1074))
|
|
||||||
- [Optimize]Connector增加重启、编辑、删除等权限点 ([#1066](https://github.com/didi/KnowStreaming/issues/1066)), ([#1147](https://github.com/didi/KnowStreaming/issues/1147))
|
|
||||||
- [Optimize]优化 pom.xml 中,KS版本的标签名
|
|
||||||
- [Optimize]优化集群Brokers中, Controller显示存在延迟的问题 ([#1162](https://github.com/didi/KnowStreaming/issues/1162))
|
|
||||||
- [Optimize]bump jackson version to 2.13.5
|
|
||||||
- [Optimize]权限新增 ACL,自定义权限配置,资源 TransactionalId 优化 ([#1192](https://github.com/didi/KnowStreaming/issues/1192))
|
|
||||||
- [Optimize]Connect 样式优化
|
|
||||||
- [Optimize]消费组详情控制数据实时刷新
|
|
||||||
|
|
||||||
|
|
||||||
**功能新增**
|
|
||||||
- [Feature]新增删除 Group 或 GroupOffset 功能 ([#1064](https://github.com/didi/KnowStreaming/issues/1064)), ([#1084](https://github.com/didi/KnowStreaming/issues/1084)), ([#1040](https://github.com/didi/KnowStreaming/issues/1040)), ([#1144](https://github.com/didi/KnowStreaming/issues/1144))
|
|
||||||
- [Feature]增加 Truncate 数据功能 ([#1062](https://github.com/didi/KnowStreaming/issues/1062)), ([#1043](https://github.com/didi/KnowStreaming/issues/1043)), ([#1145](https://github.com/didi/KnowStreaming/issues/1145))
|
|
||||||
- [Feature]支持指定 Server 的具体 Jmx 端口 ([#965](https://github.com/didi/KnowStreaming/issues/965))
|
|
||||||
|
|
||||||
|
|
||||||
**文档更新**
|
|
||||||
- [Doc]FAQ 补充 ES 8.x 版本使用说明 ([#1189](https://github.com/didi/KnowStreaming/issues/1189))
|
|
||||||
- [Doc]补充启动失败的说明 ([#1126](https://github.com/didi/KnowStreaming/issues/1126))
|
|
||||||
- [Doc]补充 ZK 无数据排查说明 ([#1004](https://github.com/didi/KnowStreaming/issues/1004))
|
|
||||||
- [Doc]无数据排查文档,补充 ES 集群 Shard 满的异常日志
|
|
||||||
- [Doc]README 补充页面无数据排查手册链接
|
|
||||||
- [Doc]补充连接特定 Jmx 端口的说明 ([#965](https://github.com/didi/KnowStreaming/issues/965))
|
|
||||||
- [Doc]补充 zk_properties 字段的使用说明 ([#1003](https://github.com/didi/KnowStreaming/issues/1003))
|
|
||||||
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
|
|
||||||
## v3.3.0
|
|
||||||
|
|
||||||
**问题修复**
|
|
||||||
- 修复 Connect 的 JMX-Port 配置未生效问题;
|
|
||||||
- 修复 不存在 Connector 时,OverView 页面的数据一直处于加载中的问题;
|
|
||||||
- 修复 Group 分区信息,分页时展示不全的问题;
|
|
||||||
- 修复采集副本指标时,参数传递错误的问题;
|
|
||||||
- 修复用户信息修改后,用户列表会抛出空指针异常的问题;
|
|
||||||
- 修复 Topic 详情页面,查看消息时,选择分区不生效问题;
|
|
||||||
- 修复对 ZK 客户端进行配置后不生效的问题;
|
|
||||||
- 修复 connect 模块,指标中缺少健康巡检项通过数的问题;
|
|
||||||
- 修复 connect 模块,指标获取方法存在映射错误的问题;
|
|
||||||
- 修复 connect 模块,max 纬度指标获取错误的问题;
|
|
||||||
- 修复 Topic 指标大盘 TopN 指标显示信息错误的问题;
|
|
||||||
- 修复 Broker Similar Config 显示错误的问题;
|
|
||||||
- 修复解析 ZK 四字命令时,数据类型设置错误导致空指针的问题;
|
|
||||||
- 修复新增 Topic 时,清理策略选项版本控制错误的问题;
|
|
||||||
- 修复新接入集群时 Controller-Host 信息不显示的问题;
|
|
||||||
- 修复 Connector 和 MM2 列表搜索不生效的问题;
|
|
||||||
- 修复 Zookeeper 页面,Leader 显示存在异常的问题;
|
|
||||||
- 修复前端打包失败的问题;
|
|
||||||
|
|
||||||
|
|
||||||
**产品优化**
|
|
||||||
- ZK Overview 页面补充默认展示的指标;
|
|
||||||
- 统一初始化 ES 索引模版的脚本为 init_es_template.sh,同时新增缺失的 connect 索引模版初始化脚本,去除多余的 replica 和 zookeper 索引模版初始化脚本;
|
|
||||||
- 指标大盘页面,优化指标筛选操作后,无指标数据的指标卡片由不显示改为显示,并增加无数据的兜底;
|
|
||||||
- 删除从 ES 读写 replica 指标的相关代码;
|
|
||||||
- 优化 Topic 健康巡检的日志,明确错误的原因;
|
|
||||||
- 优化无 ZK 模块时,巡检详情忽略对 ZK 的展示;
|
|
||||||
- 优化本地缓存大小为可配置;
|
|
||||||
- Task 模块中的返回中,补充任务的分组信息;
|
|
||||||
- FAQ 补充 Ldap 的配置说明;
|
|
||||||
- FAQ 补充接入 Kerberos 认证的 Kafka 集群的配置说明;
|
|
||||||
- ks_km_kafka_change_record 表增加时间纬度的索引,优化查询性能;
|
|
||||||
- 优化 ZK 健康巡检的日志,便于问题的排查;
|
|
||||||
|
|
||||||
**功能新增**
|
|
||||||
- 新增基于滴滴 Kafka 的 Topic 复制功能(需使用滴滴 Kafka 才可具备该能力);
|
|
||||||
- Topic 指标大盘,新增 Topic 复制相关的指标;
|
|
||||||
- 新增基于 TestContainers 的单测;
|
|
||||||
|
|
||||||
|
|
||||||
**Kafka MM2 Beta版 (v3.3.0版本新增发布)**
|
|
||||||
- MM2 任务的增删改查;
|
|
||||||
- MM2 任务的指标大盘;
|
|
||||||
- MM2 任务的健康状态;
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
|
|
||||||
## v3.2.0
|
|
||||||
|
|
||||||
**问题修复**
|
|
||||||
- 修复健康巡检结果更新至 DB 时,出现死锁问题;
|
|
||||||
- 修复 KafkaJMXClient 类中,logger错误的问题;
|
|
||||||
- 后端修复 Topic 过期策略在 0.10.1.0 版本能多选的问题,实际应该只能二选一;
|
|
||||||
- 修复接入集群时,不填写集群配置会报错的问题;
|
|
||||||
- 升级 spring-context 至 5.3.19 版本,修复安全漏洞;
|
|
||||||
- 修复 Broker & Topic 修改配置时,多版本兼容配置的版本信息错误的问题;
|
|
||||||
- 修复 Topic 列表的健康分为健康状态;
|
|
||||||
- 修复 Broker LogSize 指标存储名称错误导致查询不到的问题;
|
|
||||||
- 修复 Prometheus 中,缺少 Group 部分指标的问题;
|
|
||||||
- 修复因缺少健康状态指标导致集群数错误的问题;
|
|
||||||
- 修复后台任务记录操作日志时,因缺少操作用户信息导致出现异常的问题;
|
|
||||||
- 修复 Replica 指标查询时,DSL 错误的问题;
|
|
||||||
- 关闭 errorLogger,修复错误日志重复输出的问题;
|
|
||||||
- 修复系统管理更新用户信息失败的问题;
|
|
||||||
- 修复因原AR信息丢失,导致迁移任务一直处于执行中的错误;
|
|
||||||
- 修复集群 Topic 列表实时数据查询时,出现失败的问题;
|
|
||||||
- 修复集群 Topic 列表,页面白屏问题;
|
|
||||||
- 修复副本变更时,因AR数据异常,导致数组访问越界的问题;
|
|
||||||
|
|
||||||
|
|
||||||
**产品优化**
|
|
||||||
- 优化健康巡检为按照资源维度多线程并发处理;
|
|
||||||
- 统一日志输出格式,并优化部分输出的日志;
|
|
||||||
- 优化 ZK 四字命令结果解析过程中,容易引起误解的 WARN 日志;
|
|
||||||
- 优化 Zookeeper 详情中,目录结构的搜索文案;
|
|
||||||
- 优化线程池的名称,方便第三方系统进行相关问题的分析;
|
|
||||||
- 去除 ESClient 的并发访问控制,降低 ESClient 创建数及提升利用率;
|
|
||||||
- 优化 Topic Messages 抽屉文案;
|
|
||||||
- 优化 ZK 健康巡检失败时的错误日志信息;
|
|
||||||
- 提高 Offset 信息获取的超时时间,降低并发过高时出现请求超时的概率;
|
|
||||||
- 优化 Topic & Partition 元信息的更新策略,降低对 DB 连接的占用;
|
|
||||||
- 优化 Sonar 代码扫码问题;
|
|
||||||
- 优化分区 Offset 指标的采集;
|
|
||||||
- 优化前端图表相关组件逻辑;
|
|
||||||
- 优化产品主题色;
|
|
||||||
- Consumer 列表刷新按钮新增 hover 提示;
|
|
||||||
- 优化配置 Topic 的消息大小时的测试弹框体验;
|
|
||||||
- 优化 Overview 页面 TopN 查询的流程;
|
|
||||||
|
|
||||||
|
|
||||||
**功能新增**
|
|
||||||
- 新增页面无数据排查文档;
|
|
||||||
- 增加 ES 索引删除的功能;
|
|
||||||
- 支持拆分API服务和Job服务部署;
|
|
||||||
|
|
||||||
|
|
||||||
**Kafka Connect Beta版 (v3.2.0版本新增发布)**
|
|
||||||
- Connect 集群的纳管;
|
|
||||||
- Connector 的增删改查;
|
|
||||||
- Connect 集群 & Connector 的指标大盘;
|
|
||||||
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
|
|
||||||
## v3.1.0
|
|
||||||
|
|
||||||
**Bug修复**
|
|
||||||
- 修复重置 Group Offset 的提示信息中,缺少Dead状态也可进行重置的描述;
|
|
||||||
- 修复新建 Topic 后,立即查看 Topic Messages 信息时,会提示 Topic 不存在的问题;
|
|
||||||
- 修复副本变更时,优先副本选举未被正常处罚执行的问题;
|
|
||||||
- 修复 git 目录不存在时,打包不能正常进行的问题;
|
|
||||||
- 修复 KRaft 模式的 Kafka 集群,JMX PORT 显示 -1 的问题;
|
|
||||||
|
|
||||||
|
|
||||||
**体验优化**
|
|
||||||
- 优化Cluster、Broker、Topic、Group的健康分为健康状态;
|
|
||||||
- 去除健康巡检配置中的权重信息;
|
|
||||||
- 错误提示页面展示优化;
|
|
||||||
- 前端打包编译依赖默认使用 taobao 镜像;
|
|
||||||
- 重新设计优化导航栏的 icon ;
|
|
||||||
|
|
||||||
|
|
||||||
**新增**
|
|
||||||
- 个人头像下拉信息中,新增产品版本信息;
|
|
||||||
- 多集群列表页面,新增集群健康状态分布信息;
|
|
||||||
|
|
||||||
|
|
||||||
**Kafka ZK 部分 (v3.1.0版本正式发布)**
|
|
||||||
- 新增 ZK 集群的指标大盘信息;
|
|
||||||
- 新增 ZK 集群的服务状态概览信息;
|
|
||||||
- 新增 ZK 集群的服务节点列表信息;
|
|
||||||
- 新增 Kafka 在 ZK 的存储数据查看功能;
|
|
||||||
- 新增 ZK 的健康巡检及健康状态计算;
|
|

---

## v3.0.1

**Bug fixes**

- Fixed the reset Group Offset prompt, which omitted that Groups in the Dead state can also be reset
- Fixed login failures caused by a NullPointerException when an LDAP attribute is missing
- Fixed the check time being displayed incorrectly in the health-score details on the cluster Topic list page
- Fixed a deadlock when updating health-check results
- Fixed the incorrect Replica index template
- Fixed broken links in the FAQ document
- Fixed page data not being displayed when a Broker's TopN metrics are missing
- Fixed the chart time-range selection not taking effect on the Group detail page

**Experience improvements**

- The cluster Group list is now displayed by Group
- Avoided flooding the logs with NullPointerExceptions when a metric is missing from ES
- Improved the global Message & Notification display
- Improved the name & description display for Topic partition expansion

**New**

- Broker list page: added information on whether the JMX connection succeeded

**ZK section (not fully released)**

- Backend: added Kafka ZK metric collection and ZK information retrieval
- Added a local cache to avoid collecting the same ZK metric twice within one collection cycle
- Added a skip strategy for ZK nodes whose collection fails, to avoid endlessly retrying problematic nodes
- Fixed an exception thrown when converting the zkAvgLatency metric to Long
- Fixed the wrong type of the role field in the ks_km_zookeeper table

---

## v3.0.0

**Bug fixes**

- Fixed Group metric duplicate-collection prevention not taking effect
- Fixed failures when automatically creating ES index templates
- Fixed deleted Topics still appearing in the Group+Topic list
- Fixed task creation failing on MySQL 8 when start_time is NULL, due to a compatibility issue
- Fixed a deadlock when updating the Group information table
- Fixed the chart gap-filling logic not matching the chart's time range

**Experience improvements**

- Split the health-inspection task by resource category
- Metrics on the Group detail page are now fetched in real time
- Chart drag-and-drop ordering is now persisted per user
- The ZK information in the multi-cluster list now handles clusters without ZK
- The message preview on the Topic detail page supports copying
- Large numbers in parts of the UI are now displayed with thousands separators

**New**

- Cluster information: added a Zookeeper client configuration field
- Cluster information: added a Kafka cluster run-mode field
- Added a docker-compose deployment method

---

## v3.0.0-beta.3
@@ -13,7 +13,7 @@ curl -s --connect-timeout 10 -o /dev/null -X POST -H 'cache-control: no-cache' -
     ],
     "settings" : {
         "index" : {
-            "number_of_shards" : "2"
+            "number_of_shards" : "10"
         }
     },
     "mappings" : {
@@ -115,7 +115,7 @@ curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: appl
     ],
     "settings" : {
         "index" : {
-            "number_of_shards" : "2"
+            "number_of_shards" : "10"
         }
     },
     "mappings" : {
@@ -302,7 +302,7 @@ curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: appl
     ],
     "settings" : {
         "index" : {
-            "number_of_shards" : "6"
+            "number_of_shards" : "10"
         }
     },
     "mappings" : {
@@ -377,7 +377,7 @@ curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: appl
     ],
     "settings" : {
         "index" : {
-            "number_of_shards" : "6"
+            "number_of_shards" : "10"
         }
     },
     "mappings" : {
@@ -436,6 +436,95 @@ curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: appl
     "aliases" : { }
 }'
 
+curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: application/json' http://${esaddr}:${port}/_template/ks_kafka_replication_metric -d '{
+    "order" : 10,
+    "index_patterns" : [ "ks_kafka_replication_metric*" ],
+    "settings" : { "index" : { "number_of_shards" : "10" } },
+    "mappings" : {
+        "properties" : {
+            "brokerId" : { "type" : "long" },
+            "partitionId" : { "type" : "long" },
+            "routingValue" : { "type" : "text", "fields" : { "keyword" : { "ignore_above" : 256, "type" : "keyword" } } },
+            "clusterPhyId" : { "type" : "long" },
+            "topic" : { "type" : "keyword" },
+            "metrics" : {
+                "properties" : {
+                    "LogStartOffset" : { "type" : "float" },
+                    "Messages" : { "type" : "float" },
+                    "LogEndOffset" : { "type" : "float" }
+                }
+            },
+            "key" : { "type" : "text", "fields" : { "keyword" : { "ignore_above" : 256, "type" : "keyword" } } },
+            "timestamp" : {
+                "format" : "yyyy-MM-dd HH:mm:ss Z||yyyy-MM-dd HH:mm:ss||yyyy-MM-dd HH:mm:ss.SSS Z||yyyy-MM-dd HH:mm:ss.SSS||yyyy-MM-dd HH:mm:ss,SSS||yyyy/MM/dd HH:mm:ss||yyyy-MM-dd HH:mm:ss,SSS Z||yyyy/MM/dd HH:mm:ss,SSS Z||epoch_millis",
+                "index" : true,
+                "type" : "date",
+                "doc_values" : true
+            }
+        }
+    },
+    "aliases" : { }
+}'
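For reference, a document matching the ks_kafka_replication_metric template above might look like the following. The field names come from the template's mappings; all values, and in particular the routingValue/key formats, are illustrative assumptions, not the tool's actual serialization:

```json
{
  "clusterPhyId": 1,
  "brokerId": 0,
  "partitionId": 3,
  "topic": "ks-test-topic",
  "routingValue": "1@ks-test-topic@3",
  "key": "1@ks-test-topic@3@2022-10-01 00:00:00",
  "metrics": { "LogStartOffset": 0, "LogEndOffset": 1024, "Messages": 1024 },
  "timestamp": "2022-10-01 00:00:00"
}
```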
 curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: application/json' http://${esaddr}:${port}/_template/ks_kafka_topic_metric -d '{
     "order" : 10,
     "index_patterns" : [
@@ -443,7 +532,7 @@ curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: appl
     ],
     "settings" : {
         "index" : {
-            "number_of_shards" : "6"
+            "number_of_shards" : "10"
         }
     },
     "mappings" : {
@@ -553,473 +642,6 @@ curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: appl
     "aliases" : { }
 }'
 
-curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: application/json' http://${SERVER_ES_ADDRESS}/_template/ks_kafka_zookeeper_metric -d '{
-    "order" : 10,
-    "index_patterns" : [ "ks_kafka_zookeeper_metric*" ],
-    "settings" : { "index" : { "number_of_shards" : "2" } },
-    "mappings" : {
-        "properties" : {
-            "routingValue" : { "type" : "text", "fields" : { "keyword" : { "ignore_above" : 256, "type" : "keyword" } } },
-            "clusterPhyId" : { "type" : "long" },
-            "metrics" : {
-                "properties" : {
-                    "AvgRequestLatency" : { "type" : "double" },
-                    "MinRequestLatency" : { "type" : "double" },
-                    "MaxRequestLatency" : { "type" : "double" },
-                    "OutstandingRequests" : { "type" : "double" },
-                    "NodeCount" : { "type" : "double" },
-                    "WatchCount" : { "type" : "double" },
-                    "NumAliveConnections" : { "type" : "double" },
-                    "PacketsReceived" : { "type" : "double" },
-                    "PacketsSent" : { "type" : "double" },
-                    "EphemeralsCount" : { "type" : "double" },
-                    "ApproximateDataSize" : { "type" : "double" },
-                    "OpenFileDescriptorCount" : { "type" : "double" },
-                    "MaxFileDescriptorCount" : { "type" : "double" }
-                }
-            },
-            "key" : { "type" : "text", "fields" : { "keyword" : { "ignore_above" : 256, "type" : "keyword" } } },
-            "timestamp" : {
-                "format" : "yyyy-MM-dd HH:mm:ss Z||yyyy-MM-dd HH:mm:ss||yyyy-MM-dd HH:mm:ss.SSS Z||yyyy-MM-dd HH:mm:ss.SSS||yyyy-MM-dd HH:mm:ss,SSS||yyyy/MM/dd HH:mm:ss||yyyy-MM-dd HH:mm:ss,SSS Z||yyyy/MM/dd HH:mm:ss,SSS Z||epoch_millis",
-                "type" : "date"
-            }
-        }
-    },
-    "aliases" : { }
-}'
-
-curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: application/json' http://${SERVER_ES_ADDRESS}/_template/ks_kafka_connect_cluster_metric -d '{
-    "order" : 10,
-    "index_patterns" : [ "ks_kafka_connect_cluster_metric*" ],
-    "settings" : { "index" : { "number_of_shards" : "2" } },
-    "mappings" : {
-        "properties" : {
-            "connectClusterId" : { "type" : "long" },
-            "routingValue" : { "type" : "text", "fields" : { "keyword" : { "ignore_above" : 256, "type" : "keyword" } } },
-            "clusterPhyId" : { "type" : "long" },
-            "metrics" : {
-                "properties" : {
-                    "ConnectorCount" : { "type" : "float" },
-                    "TaskCount" : { "type" : "float" },
-                    "ConnectorStartupAttemptsTotal" : { "type" : "float" },
-                    "ConnectorStartupFailurePercentage" : { "type" : "float" },
-                    "ConnectorStartupFailureTotal" : { "type" : "float" },
-                    "ConnectorStartupSuccessPercentage" : { "type" : "float" },
-                    "ConnectorStartupSuccessTotal" : { "type" : "float" },
-                    "TaskStartupAttemptsTotal" : { "type" : "float" },
-                    "TaskStartupFailurePercentage" : { "type" : "float" },
-                    "TaskStartupFailureTotal" : { "type" : "float" },
-                    "TaskStartupSuccessPercentage" : { "type" : "float" },
-                    "TaskStartupSuccessTotal" : { "type" : "float" }
-                }
-            },
-            "key" : { "type" : "text", "fields" : { "keyword" : { "ignore_above" : 256, "type" : "keyword" } } },
-            "timestamp" : {
-                "format" : "yyyy-MM-dd HH:mm:ss Z||yyyy-MM-dd HH:mm:ss||yyyy-MM-dd HH:mm:ss.SSS Z||yyyy-MM-dd HH:mm:ss.SSS||yyyy-MM-dd HH:mm:ss,SSS||yyyy/MM/dd HH:mm:ss||yyyy-MM-dd HH:mm:ss,SSS Z||yyyy/MM/dd HH:mm:ss,SSS Z||epoch_millis",
-                "index" : true,
-                "type" : "date",
-                "doc_values" : true
-            }
-        }
-    },
-    "aliases" : { }
-}'
-
-curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: application/json' http://${SERVER_ES_ADDRESS}/_template/ks_kafka_connect_connector_metric -d '{
-    "order" : 10,
-    "index_patterns" : [ "ks_kafka_connect_connector_metric*" ],
-    "settings" : { "index" : { "number_of_shards" : "2" } },
-    "mappings" : {
-        "properties" : {
-            "connectClusterId" : { "type" : "long" },
-            "routingValue" : { "type" : "text", "fields" : { "keyword" : { "ignore_above" : 256, "type" : "keyword" } } },
-            "connectorName" : { "type" : "keyword" },
-            "connectorNameAndClusterId" : { "type" : "keyword" },
-            "clusterPhyId" : { "type" : "long" },
-            "metrics" : {
-                "properties" : {
-                    "HealthState" : { "type" : "float" },
-                    "ConnectorTotalTaskCount" : { "type" : "float" },
-                    "HealthCheckPassed" : { "type" : "float" },
-                    "HealthCheckTotal" : { "type" : "float" },
-                    "ConnectorRunningTaskCount" : { "type" : "float" },
-                    "ConnectorPausedTaskCount" : { "type" : "float" },
-                    "ConnectorFailedTaskCount" : { "type" : "float" },
-                    "ConnectorUnassignedTaskCount" : { "type" : "float" },
-                    "BatchSizeAvg" : { "type" : "float" },
-                    "BatchSizeMax" : { "type" : "float" },
-                    "OffsetCommitAvgTimeMs" : { "type" : "float" },
-                    "OffsetCommitMaxTimeMs" : { "type" : "float" },
-                    "OffsetCommitFailurePercentage" : { "type" : "float" },
-                    "OffsetCommitSuccessPercentage" : { "type" : "float" },
-                    "PollBatchAvgTimeMs" : { "type" : "float" },
-                    "PollBatchMaxTimeMs" : { "type" : "float" },
-                    "SourceRecordActiveCount" : { "type" : "float" },
-                    "SourceRecordActiveCountAvg" : { "type" : "float" },
-                    "SourceRecordActiveCountMax" : { "type" : "float" },
-                    "SourceRecordPollRate" : { "type" : "float" },
-                    "SourceRecordPollTotal" : { "type" : "float" },
-                    "SourceRecordWriteRate" : { "type" : "float" },
-                    "SourceRecordWriteTotal" : { "type" : "float" },
-                    "OffsetCommitCompletionRate" : { "type" : "float" },
-                    "OffsetCommitCompletionTotal" : { "type" : "float" },
-                    "OffsetCommitSkipRate" : { "type" : "float" },
-                    "OffsetCommitSkipTotal" : { "type" : "float" },
-                    "PartitionCount" : { "type" : "float" },
-                    "PutBatchAvgTimeMs" : { "type" : "float" },
-                    "PutBatchMaxTimeMs" : { "type" : "float" },
-                    "SinkRecordActiveCount" : { "type" : "float" },
-                    "SinkRecordActiveCountAvg" : { "type" : "float" },
-                    "SinkRecordActiveCountMax" : { "type" : "float" },
-                    "SinkRecordLagMax" : { "type" : "float" },
-                    "SinkRecordReadRate" : { "type" : "float" },
-                    "SinkRecordReadTotal" : { "type" : "float" },
-                    "SinkRecordSendRate" : { "type" : "float" },
-                    "SinkRecordSendTotal" : { "type" : "float" },
-                    "DeadletterqueueProduceFailures" : { "type" : "float" },
-                    "DeadletterqueueProduceRequests" : { "type" : "float" },
-                    "LastErrorTimestamp" : { "type" : "float" },
-                    "TotalErrorsLogged" : { "type" : "float" },
-                    "TotalRecordErrors" : { "type" : "float" },
-                    "TotalRecordFailures" : { "type" : "float" },
-                    "TotalRecordsSkipped" : { "type" : "float" },
-                    "TotalRetries" : { "type" : "float" }
-                }
-            },
-            "key" : { "type" : "text", "fields" : { "keyword" : { "ignore_above" : 256, "type" : "keyword" } } },
-            "timestamp" : {
-                "format" : "yyyy-MM-dd HH:mm:ss Z||yyyy-MM-dd HH:mm:ss||yyyy-MM-dd HH:mm:ss.SSS Z||yyyy-MM-dd HH:mm:ss.SSS||yyyy-MM-dd HH:mm:ss,SSS||yyyy/MM/dd HH:mm:ss||yyyy-MM-dd HH:mm:ss,SSS Z||yyyy/MM/dd HH:mm:ss,SSS Z||epoch_millis",
-                "index" : true,
-                "type" : "date",
-                "doc_values" : true
-            }
-        }
-    },
-    "aliases" : { }
-}'
-
-curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: application/json' http://${SERVER_ES_ADDRESS}/_template/ks_kafka_connect_mirror_maker_metric -d '{
-    "order" : 10,
-    "index_patterns" : [ "ks_kafka_connect_mirror_maker_metric*" ],
-    "settings" : { "index" : { "number_of_shards" : "2" } },
-    "mappings" : {
-        "properties" : {
-            "connectClusterId" : { "type" : "long" },
-            "routingValue" : { "type" : "text", "fields" : { "keyword" : { "ignore_above" : 256, "type" : "keyword" } } },
-            "connectorName" : { "type" : "keyword" },
-            "connectorNameAndClusterId" : { "type" : "keyword" },
-            "clusterPhyId" : { "type" : "long" },
-            "metrics" : {
-                "properties" : {
-                    "HealthState" : { "type" : "float" },
-                    "HealthCheckTotal" : { "type" : "float" },
-                    "ByteCount" : { "type" : "float" },
-                    "ByteRate" : { "type" : "float" },
-                    "RecordAgeMs" : { "type" : "float" },
-                    "RecordAgeMsAvg" : { "type" : "float" },
-                    "RecordAgeMsMax" : { "type" : "float" },
-                    "RecordAgeMsMin" : { "type" : "float" },
-                    "RecordCount" : { "type" : "float" },
-                    "RecordRate" : { "type" : "float" },
-                    "ReplicationLatencyMs" : { "type" : "float" },
-                    "ReplicationLatencyMsAvg" : { "type" : "float" },
-                    "ReplicationLatencyMsMax" : { "type" : "float" },
-                    "ReplicationLatencyMsMin" : { "type" : "float" }
-                }
-            },
-            "key" : { "type" : "text", "fields" : { "keyword" : { "ignore_above" : 256, "type" : "keyword" } } },
-            "timestamp" : {
-                "format" : "yyyy-MM-dd HH:mm:ss Z||yyyy-MM-dd HH:mm:ss||yyyy-MM-dd HH:mm:ss.SSS Z||yyyy-MM-dd HH:mm:ss.SSS||yyyy-MM-dd HH:mm:ss,SSS||yyyy/MM/dd HH:mm:ss||yyyy-MM-dd HH:mm:ss,SSS Z||yyyy/MM/dd HH:mm:ss,SSS Z||epoch_millis",
-                "index" : true,
-                "type" : "date",
-                "doc_values" : true
-            }
-        }
-    },
-    "aliases" : { }
-}'
 
 for i in {0..6};
 do
     logdate=_$(date -d "${i} day ago" +%Y-%m-%d)
@@ -1027,10 +649,7 @@ do
     curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_cluster_metric${logdate} && \
     curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_group_metric${logdate} && \
     curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_partition_metric${logdate} && \
-    curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_zookeeper_metric${logdate} && \
-    curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_connect_cluster_metric${logdate} && \
-    curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_connect_connector_metric${logdate} && \
-    curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_connect_mirror_maker_metric${logdate} && \
+    curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_replication_metric${logdate} && \
     curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_topic_metric${logdate} || \
     exit 2
 done
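The index-creation loop above pre-creates one dated index per template for today and the previous six days. A standalone sketch of how it derives the dated index names, assuming GNU date (which the original script relies on):

```shell
# Derive dated index names the same way the init loop does:
# suffix "_YYYY-MM-DD" for each of the last few days.
for i in 0 1 2; do
  logdate=_$(date -d "${i} day ago" +%Y-%m-%d)
  echo "ks_kafka_topic_metric${logdate}"
done
```

Each emitted name, e.g. `ks_kafka_topic_metric_2022-10-01`, matches the `ks_kafka_topic_metric*` pattern of its template, so the template's settings and mappings apply automatically when the index is created.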
@@ -1,111 +0,0 @@
(Deleted: the draw.io source file for the contribution-workflow diagram. It showed commits C1–C4 on github_master with tags Tag-v3.2.0 and Tag-v3.2.1, a fix_928 branch off C2, and the annotated steps: switch to the main branch with `git checkout github_master`, pull the latest code with `git pull`, create a branch with `git checkout -b fix_928`, commit with `git commit -m "[Optimize]优化xxx问题(#928)"`, push with `git push --set-upstream origin fix_928`, then open a Pull Request on GitHub for a maintainer to merge into the main repository.)
@@ -1 +0,0 @@
-TODO.
@@ -1,100 +0,0 @@
|
|||||||
# 贡献名单
|
|
||||||
|
|
||||||
- [贡献名单](#贡献名单)
|
|
||||||
- [1、贡献者角色](#1贡献者角色)
|
|
||||||
- [1.1、Maintainer](#11maintainer)
|
|
||||||
- [1.2、Committer](#12committer)
|
|
||||||
- [1.3、Contributor](#13contributor)
|
|
||||||
- [2、贡献者名单](#2贡献者名单)
|
|
||||||
|
|
||||||
|
|
||||||
## 1、贡献者角色
|
|
||||||
|
|
||||||
KnowStreaming 开发者包含 Maintainer、Committer、Contributor 三种角色,每种角色的标准定义如下。
|
|
||||||
|
|
||||||
### 1.1、Maintainer
|
|
||||||
|
|
||||||
Maintainer 是对 KnowStreaming 项目的演进和发展做出显著贡献的个人。具体包含以下的标准:
|
|
||||||
|
|
||||||
- 完成多个关键模块或者工程的设计与开发,是项目的核心开发人员;
|
|
||||||
- 持续的投入和激情,能够积极参与社区、官网、issue、PR 等项目相关事项的维护;
|
|
||||||
- 在社区中具有有目共睹的影响力,能够代表 KnowStreaming 参加重要的社区会议和活动;
|
|
||||||
- 具有培养 Committer 和 Contributor 的意识和能力;
|
|
||||||
|
|
||||||
### 1.2、Committer
|
|
||||||
|
|
||||||
Committer 是具有 KnowStreaming 仓库写权限的个人,包含以下的标准:
|
|
||||||
|
|
||||||
- 能够在长时间内做持续贡献 issue、PR 的个人;
|
|
||||||
- 参与 issue 列表的维护及重要 feature 的讨论;
|
|
||||||
- 参与 code review;
|
|
||||||
|
|
||||||
### 1.3、Contributor
|
|
||||||
|
|
||||||
Contributor 是对 KnowStreaming 项目有贡献的个人,标准为:
|
|
||||||
|
|
||||||
- 提交过 PR 并被合并;
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 2、贡献者名单
|
|
||||||
|
|
||||||
开源贡献者名单(不定期更新)
|
|
||||||
|
|
||||||
在名单内,但是没有收到贡献者礼品的同学,可以联系:szzdzhp001
|
|
||||||
|
|
||||||
| Name                | Github                                                     | Role        | Company  |
| ------------------- | ---------------------------------------------------------- | ----------- | -------- |
| 张亮                | [@zhangliangboy](https://github.com/zhangliangboy)         | Maintainer  | 滴滴出行 |
| 谢鹏                | [@PenceXie](https://github.com/PenceXie)                   | Maintainer  | 滴滴出行 |
| 赵情融              | [@zqrferrari](https://github.com/zqrferrari)               | Maintainer  | 滴滴出行 |
| 石臻臻              | [@shirenchuang](https://github.com/shirenchuang)           | Maintainer  | 滴滴出行 |
| 曾巧                | [@ZQKC](https://github.com/ZQKC)                           | Maintainer  | 滴滴出行 |
| 孙超                | [@lucasun](https://github.com/lucasun)                     | Maintainer  | 滴滴出行 |
| 洪华驰              | [@brodiehong](https://github.com/brodiehong)               | Maintainer  | 滴滴出行 |
| 许喆                | [@potaaaaaato](https://github.com/potaaaaaato)             | Committer   | 滴滴出行 |
| 郭宇航              | [@GraceWalk](https://github.com/GraceWalk)                 | Committer   | 滴滴出行 |
| 李伟                | [@velee](https://github.com/velee)                         | Committer   | 滴滴出行 |
| 张占昌              | [@zzccctv](https://github.com/zzccctv)                     | Committer   | 滴滴出行 |
| 王东方              | [@wangdongfang-aden](https://github.com/wangdongfang-aden) | Committer   | 滴滴出行 |
| 王耀波              | [@WYAOBO](https://github.com/WYAOBO)                       | Committer   | 滴滴出行 |
| 赵寅锐              | [@ZHAOYINRUI](https://github.com/ZHAOYINRUI)               | Maintainer  | 字节跳动 |
| haoqi123            | [@haoqi123](https://github.com/haoqi123)                   | Contributor | 前程无忧 |
| chaixiaoxue         | [@chaixiaoxue](https://github.com/chaixiaoxue)             | Contributor | SYNNEX   |
| 陆晗                | [@luhea](https://github.com/luhea)                         | Contributor | 竞技世界 |
| Mengqi777           | [@Mengqi777](https://github.com/Mengqi777)                 | Contributor | 腾讯     |
| ruanliang-hualun    | [@ruanliang-hualun](https://github.com/ruanliang-hualun)   | Contributor | 网易     |
| 17hao               | [@17hao](https://github.com/17hao)                         | Contributor |          |
| Huyueeer            | [@Huyueeer](https://github.com/Huyueeer)                   | Contributor | INVENTEC |
| lomodays207         | [@lomodays207](https://github.com/lomodays207)             | Contributor | 建信金科 |
| Super .Wein(星痕)   | [@superspeedone](https://github.com/superspeedone)         | Contributor | 韵达     |
| Hongten             | [@Hongten](https://github.com/Hongten)                     | Contributor | Shopee   |
| 徐正熙              | [@hyper-xx](https://github.com/hyper-xx)                   | Contributor | 滴滴出行 |
| RichardZhengkay     | [@RichardZhengkay](https://github.com/RichardZhengkay)     | Contributor | 趣街     |
| 罐子里的茶          | [@gzldc](https://github.com/gzldc)                         | Contributor | 道富     |
| 陈忠玉              | [@chenzhongyu11](https://github.com/chenzhongyu11)         | Contributor | 平安产险 |
| 杨光                | [@yangvipguang](https://github.com/yangvipguang)           | Contributor |          |
| 王亚聪              | [@wangyacongi](https://github.com/wangyacongi)             | Contributor |          |
| Yang Jing           | [@yangbajing](https://github.com/yangbajing)               | Contributor |          |
| 刘新元 Liu XinYuan  | [@Liu-XinYuan](https://github.com/Liu-XinYuan)             | Contributor |          |
| Joker               | [@JokerQueue](https://github.com/JokerQueue)               | Contributor | 丰巢     |
| Eason Lau           | [@Liubey](https://github.com/Liubey)                       | Contributor |          |
| hailanxin           | [@hailanxin](https://github.com/hailanxin)                 | Contributor |          |
| Qi Zhang            | [@zzzhangqi](https://github.com/zzzhangqi)                 | Contributor | 好雨科技 |
| fengxsong           | [@fengxsong](https://github.com/fengxsong)                 | Contributor |          |
| 谢晓东              | [@Strangevy](https://github.com/Strangevy)                 | Contributor | 花生日记 |
| ZhaoXinlong         | [@ZhaoXinlong](https://github.com/ZhaoXinlong)             | Contributor |          |
| xuehaipeng          | [@xuehaipeng](https://github.com/xuehaipeng)               | Contributor |          |
| 孔令续              | [@mrazkong](https://github.com/mrazkong)                   | Contributor |          |
| pierre xiong        | [@pierre94](https://github.com/pierre94)                   | Contributor |          |
| PengShuaixin        | [@PengShuaixin](https://github.com/PengShuaixin)           | Contributor |          |
| 梁壮                | [@silent-night-no-trace](https://github.com/silent-night-no-trace) | Contributor |          |
| 张晓寅              | [@ahu0605](https://github.com/ahu0605)                     | Contributor | 电信数智 |
| 黄海婷              | [@Huanghaiting](https://github.com/Huanghaiting)           | Contributor | 云徙科技 |
| 任祥德              | [@RenChauncy](https://github.com/RenChauncy)               | Contributor | 探马企服 |
| 胡圣林              | [@slhu997](https://github.com/slhu997)                     | Contributor |          |
| 史泽颖              | [@shizeying](https://github.com/shizeying)                 | Contributor |          |
| 王玉博              | [@Wyb7290](https://github.com/Wyb7290)                     | Committer   |          |
| 伍璇                | [@Luckywustone](https://github.com/Luckywustone)           | Contributor |          |
| 邓苑                | [@CatherineDY](https://github.com/CatherineDY)             | Contributor |          |
| 封琼凤              | [@fengqiongfeng](https://github.com/fengqiongfeng)         | Committer   |          |
@@ -1,168 +0,0 @@

# Contribution Guide

- [Contribution Guide](#contribution-guide)
  - [1、Code of Conduct](#1code-of-conduct)
  - [2、Repository Conventions](#2repository-conventions)
    - [2.1、Issue Conventions](#21issue-conventions)
    - [2.2、Commit-Log Conventions](#22commit-log-conventions)
    - [2.3、Pull-Request Conventions](#23pull-request-conventions)
  - [3、Step-by-Step Examples](#3step-by-step-examples)
    - [3.1、Setting Up the Environment](#31setting-up-the-environment)
    - [3.2、Claiming an Issue](#32claiming-an-issue)
    - [3.3、Fixing an Issue \& Committing the Fix](#33fixing-an-issue--committing-the-fix)
    - [3.4、Requesting a Merge](#34requesting-a-merge)
  - [4、FAQ](#4faq)
    - [4.1、How do I squash multiple Commit-Logs into one?](#41how-do-i-squash-multiple-commit-logs-into-one)

---

Welcome 👏🏻 👏🏻 👏🏻 to `KnowStreaming`. This document is a guide on how to contribute to `KnowStreaming`. If you find anything incorrect or missing, please leave your comments and suggestions.

---

## 1、Code of Conduct

Please read and follow our [Code of Conduct](https://github.com/didi/KnowStreaming/blob/master/CODE_OF_CONDUCT.md).

## 2、Repository Conventions

### 2.1、Issue Conventions

Create an issue as instructed at [New Issue](https://github.com/didi/KnowStreaming/issues/new/choose).

Two points deserve emphasis:

- Provide information about the environment in which the problem occurred, including the operating system and the KnowStreaming version;
- Provide the steps to reproduce the problem;

### 2.2、Commit-Log Conventions

A `Commit-Log` consists of three parts: `Header`, `Body`, and `Footer`. The `Header` is mandatory and has a fixed format; the `Body` is used when the change needs a detailed explanation.

**1、`Header` conventions**

The `Header` format is `[Type]Message`, made up of two parts, `Type` and `Message`:

- `Type`: the kind of commit, e.g. Bugfix, Feature, Optimize;
- `Message`: what the commit does, e.g. "fix problem xx";

A real example: [`[Bugfix]修复新接入的集群,Controller-Host不显示的问题`](https://github.com/didi/KnowStreaming/pull/933/commits)

**2、`Body` conventions**

Usually not needed. If the change solves a complex problem or touches a lot of code, use the `Body` to explain what was solved and how.

---

**3、A real example**

```
[Optimize]优化 MySQL & ES 测试容器的初始化

主要的变更
1、knowstreaming/knowstreaming-manager 容器;
2、knowstreaming/knowstreaming-mysql 容器调整为使用 mysql:5.7 容器;
3、初始化 mysql:5.7 容器后,增加初始化 MySQL 表及数据的动作;

被影响的变更:
1、移动 km-dist/init/sql 下的MySQL初始化脚本至 km-persistence/src/main/resource/sql 下,以便项目测试时加载到所需的初始化 SQL;
2、删除无用的 km-dist/init/template 目录;
3、因为 km-dist/init/sql 和 km-dist/init/template 目录的调整,因此也调整 ReleaseKnowStreaming.xml 内的文件内容;
```
**TODO: anyone interested can later introduce a Git hook for better Commit-Log management.**
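The `[Type]Message` header above can be machine-checked. Below is a minimal sketch of such a check; the allowed type list is taken from the examples in this guide and is an assumption, not an official project rule:

```shell
# Sketch of the header check a commit-msg hook could run. The allowed types
# (Bugfix/Feature/Optimize) follow the examples in this guide; extend as needed.
check_header() {
  case "$1" in
    \[Bugfix\]*|\[Feature\]*|\[Optimize\]*) echo ok ;;
    *) echo bad ;;
  esac
}

check_header "[Bugfix]修复新接入的集群,Controller-Host不显示的问题"  # prints: ok
check_header "fix some bug without a header"                          # prints: bad
```

To turn this into a hook, run the same `case` over `head -n 1 "$1"` inside `.git/hooks/commit-msg` and make the file executable.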

### 2.3、Pull-Request Conventions

See the [PULL-REQUEST template](../../.github/PULL_REQUEST_TEMPLATE.md) for details.

Two points deserve emphasis:

- <font color=red>Every PR must be associated with a valid issue; otherwise the PR will be rejected;</font>
- <font color=red>One branch changes one thing, and one PR changes one thing;</font>

---

## 3、Step-by-Step Examples

This section describes the operations and commands involved in contributing code to `KnowStreaming`.

Terminology:

- Main repository: https://github.com/didi/KnowStreaming is the main repository.
- Forked repository: the KnowStreaming repository forked into your own account.

### 3.1、Setting Up the Environment

1. Fork the `KnowStreaming` main repository into your own account via the `Fork` button at the top right of https://github.com/didi/KnowStreaming;
2. Clone the forked repository locally: `git clone git@github.com:xxxxxxx/KnowStreaming.git`. Its remote short name is usually `origin`;
3. Add the main repository locally: `git remote add upstream https://github.com/didi/KnowStreaming`. Here `upstream` is the local short name of the main repository; you can name it anything, as long as you use it consistently;
4. Fetch the main repository: `git fetch upstream`;
5. Fetch the forked repository: `git fetch origin`;
6. Check out the main repository's `master` branch locally under the name `github_master`: `git checkout -b github_master upstream/master`;

Finally, this is roughly what things look like after the setup:

![check_branch](./assets/contribution_branch.jpg)

At this point the environment is ready. From now on, the `github_master` branch tracks the main repository's `master` branch: `git pull` fetches its latest code, and `git checkout -b xxx` creates whatever branch we need from it.
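The setup above can be rehearsed offline. In the sketch below, local bare repositories stand in for GitHub: `upstream-ks.git` plays the main repository and `fork-ks.git` plays your fork; all paths and names are made up for the demo.

```shell
set -e
work=$(mktemp -d) && cd "$work"

# Stand-ins for the GitHub repositories.
git init -q --bare upstream-ks.git
git clone -q upstream-ks.git seed 2>/dev/null
(
  cd seed
  git config user.email dev@example.com && git config user.name dev
  git commit -q --allow-empty -m "[Optimize]init repo"
  git push -q origin HEAD:master
)
git clone -q --mirror upstream-ks.git fork-ks.git    # step 1: "fork" the repo

git clone -q fork-ks.git KnowStreaming 2>/dev/null   # step 2: origin = your fork
cd KnowStreaming
git remote add upstream "$work/upstream-ks.git"      # step 3: add the main repo
git fetch -q upstream                                # step 4
git fetch -q origin                                  # step 5
git checkout -q -b github_master upstream/master     # step 6
git branch --show-current                            # prints: github_master
```

Against the real repositories, only steps 2-6 are run, with the GitHub URLs in place of the local paths.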

### 3.2、Claiming an Issue

Comment on the issue saying that you want to work on it, as shown below:

![claim_issue](./assets/claim_issue.jpg)

### 3.3、Fixing an Issue & Committing the Fix

This section covers branch management while fixing an issue and committing the fix, as shown below:

![fix_issue](./assets/fix_issue.jpg)

1. Switch to the main branch: `git checkout github_master`;
2. Pull the latest code on the main branch: `git pull`;
3. Create a new branch off the main branch: `git checkout -b fix_928`;
4. Commit your code following the commit conventions, e.g. `git commit -m "[Optimize]优化xxx问题"`;
5. Push to your own remote repository: `git push --set-upstream origin fix_928`;
6. Open a `Pull Request` on the `GitHub` page, and a maintainer merges it into the main repository. See the next section for details;
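The commit-and-push steps above can likewise be rehearsed in a throwaway repo. Here a local bare repo stands in for your GitHub fork, `fix_928` is the example branch name from this guide, and the initial commit merely seeds the repo:

```shell
set -e
work=$(mktemp -d) && cd "$work"
git init -q --bare fork-ks.git                       # stand-in for your fork
git clone -q fork-ks.git KnowStreaming 2>/dev/null
cd KnowStreaming
git config user.email dev@example.com && git config user.name dev
git commit -q --allow-empty -m "[Optimize]init repo"
git push -q origin HEAD:master

git checkout -q -b fix_928                           # step 3: branch for the fix
echo fix > fix.txt && git add fix.txt
git commit -q -m "[Optimize]优化xxx问题"             # step 4: header-format commit
git push -q --set-upstream origin fix_928            # step 5: push to your fork
git ls-remote --heads origin fix_928 | wc -l         # prints: 1
```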

### 3.4、Requesting a Merge

Once the code is pushed to the forked repository on `GitHub`, you can create a `Pull Request` on the `GitHub` website to request merging the code into the main repository, as shown below:

![create_pr](./assets/create_pr.jpg)

[An example of a created Pull Request](https://github.com/didi/KnowStreaming/pull/945)

---

## 4、FAQ

### 4.1、How do I squash multiple Commit-Logs into one?

Squashing multiple commits into one is not required; if you do want to squash them, use `git rebase -i`.
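A squash with `git rebase -i` can be sketched as follows. The demo drives the editor non-interactively through `GIT_SEQUENCE_EDITOR` so it can run unattended; interactively you would simply change `pick` to `squash` on the later lines of the todo list:

```shell
set -e
work=$(mktemp -d) && cd "$work"
git init -q demo && cd demo
git config user.email dev@example.com && git config user.name dev
echo a >  f.txt && git add f.txt && git commit -q -m "[Bugfix]fix xxx: part 1"
echo b >> f.txt && git commit -q -am "[Bugfix]fix xxx: part 2"
git rev-list --count HEAD                            # prints: 2

# Turn the second "pick" into "squash" and accept the combined message as-is.
GIT_SEQUENCE_EDITOR="sed -i '2s/^pick/squash/'" GIT_EDITOR=true \
  git rebase -i --root >/dev/null 2>&1
git rev-list --count HEAD                            # prints: 1
```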
@@ -1,115 +0,0 @@

## Manual: Storing the MySQL Password Encrypted in the YML File

### 1、Encryption for Local Deployments

**Step 1: Generate the ciphertext**

Find jasypt-1.9.3.jar in your local Maven repository (by default under org/jasypt/jasypt/1.9.3) and generate the ciphertext with `java -cp`.

```bash
java -cp jasypt-1.9.3.jar org.jasypt.intf.cli.JasyptPBEStringEncryptionCLI input=<mysql-password> password=<encryption-salt> algorithm=PBEWithMD5AndDES
```

```bash
## Resulting ciphertext
DYbVDLg5D0WRcJSCUGWjiw==
```

**Step 2: Configure jasypt**

Configure jasypt in the YML file, for example:

```yaml
jasypt:
  encryptor:
    algorithm: PBEWithMD5AndDES
    iv-generator-classname: org.jasypt.iv.NoIvGenerator
```

**Step 3: Configure the ciphertext**

Replace the plaintext password in the YML file with ENC(ciphertext), e.g. the MySQL password in [application.yml](https://github.com/didi/KnowStreaming/blob/master/km-rest/src/main/resources/application.yml):

```yaml
know-streaming:
  username: root
  password: ENC(DYbVDLg5D0WRcJSCUGWjiw==)
```

**Step 4: Configure the encryption salt (pick one of the following)**

- In the YML file (not recommended)

```yaml
jasypt:
  encryptor:
    password: salt
```

- As a command-line argument at startup

```bash
java -jar xxx.jar --jasypt.encryptor.password=salt
```

- As an environment variable at startup

```bash
export JASYPT_PASSWORD=salt
java -jar xxx.jar --jasypt.encryptor.password=${JASYPT_PASSWORD}
```

## 2、Encryption for Container Deployments

Use the secret mechanism provided by docker swarm to store the password encrypted and let docker swarm manage it.

### 2.1、Storing the Password as a Swarm Secret

**Step 1: Initialize docker swarm**

```bash
docker swarm init
```

**Step 2: Create the secret**

```bash
echo "admin2022_" | docker secret create mysql_password -

# Prints the secret ID
f964wi4gg946hu78quxsh2ge9
```

**Step 3: Use the secret**

```yaml
# MySQL user password
SERVER_MYSQL_USER: root
SERVER_MYSQL_PASSWORD: mysql_password

knowstreaming-mysql:
  # root user password
  MYSQL_ROOT_PASSWORD: mysql_password
secrets:
  mysql_password:
    external: true
```

### 2.2、Using a Secret File

**Step 1: Create the secret file**

```bash
echo "admin2022_" > password
```

**Step 2: Use the secret**

```yaml
# MySQL user password
SERVER_MYSQL_USER: root
SERVER_MYSQL_PASSWORD: mysql_password
secrets:
  mysql_password:
    file: ./password
```
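Putting the file-based fragments above together, a minimal compose sketch might look like the following. The service layout is an assumption for illustration: it assumes the `knowstreaming/knowstreaming-mysql` image keeps the official `mysql` entrypoint (which reads `*_FILE` variables), and relies on Docker mounting the secret at `/run/secrets/mysql_password` inside the container.

```yaml
# Hedged sketch, not the project's shipped compose file.
version: "3.1"
services:
  knowstreaming-mysql:
    image: knowstreaming/knowstreaming-mysql
    environment:
      # The mysql entrypoint reads the root password from this file,
      # which Docker populates from the secret declared below.
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/mysql_password
    secrets:
      - mysql_password

secrets:
  mysql_password:
    file: ./password      # the file created with: echo "admin2022_" > password
```

For the swarm variant, replace `file: ./password` with `external: true` and deploy the stack with `docker stack deploy`.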
@@ -7,7 +7,7 @@

### 3.3.1、Cluster Metrics

| Metric            | Unit   | Meaning                                  | Kafka Version | Enterprise/OSS |
| ----------------- | ------ | ---------------------------------------- | ------------- | -------------- |
| HealthScore       | points | Overall health score of the cluster      | All versions  | OSS            |
| HealthCheckPassed | count  | Number of cluster health checks passed   | All versions  | OSS            |
| HealthCheckTotal  | count  | Total number of cluster health checks    | All versions  | OSS            |
@@ -42,7 +42,7 @@

| PartitionMinISR_S | count  | Partitions in the cluster below PartitionMinISR    | All versions  | OSS            |
| PartitionMinISR_E | count  | Partitions in the cluster equal to PartitionMinISR | All versions  | OSS            |
| PartitionURP      | count  | Under-replicated partitions in the cluster         | All versions  | OSS            |
| MessagesIn        | msgs/s | Messages written to the cluster per second         | All versions  | OSS            |
| Messages          | msgs   | Total number of messages in the cluster            | All versions  | OSS            |
| LeaderMessages    | msgs   | Total number of messages on leaders in the cluster | All versions  | OSS            |
| BytesIn           | byte/s | Bytes written to the cluster per second            | All versions  | OSS            |
@@ -1,180 +0,0 @@


|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
# 接入 ZK 带认证的 Kafka 集群
|
|
||||||
|
|
||||||
- [接入 ZK 带认证的 Kafka 集群](#接入-zk-带认证的-kafka-集群)
|
|
||||||
- [1、简要说明](#1简要说明)
|
|
||||||
- [2、支持 Digest-MD5 认证](#2支持-digest-md5-认证)
|
|
||||||
- [3、支持 Kerberos 认证](#3支持-kerberos-认证)
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||

## 1、Overview

- 1、`KnowStreaming` currently has no page for configuring ZK authentication information directly, but its backend reserves a MySQL field for storing it. By writing the authentication information into that field, you can connect Kafka clusters whose ZK requires authentication.

- 2、The field is zk_properties in the MySQL table ks_km_physical_cluster, and its format is:

```json
{
    "openSecure": false,            # whether authentication is enabled; set to true to enable it
    "sessionTimeoutUnitMs": 15000,  # session timeout
    "requestTimeoutUnitMs": 5000,   # request timeout
    "otherProps": {                 # other settings; the authentication info mainly goes here
        "zookeeper.sasl.clientconfig": "kafkaClusterZK1"  # example
    }
}
```

- 3、Where the configuration takes effect in the code:

```java
// Code location: https://github.com/didi/KnowStreaming/blob/master/km-persistence/src/main/java/com/xiaojukeji/know/streaming/km/persistence/kafka/KafkaAdminZKClient.java

kafkaZkClient = KafkaZkClient.apply(
        clusterPhy.getZookeeper(),
        zkConfig.getOpenSecure(),            // whether authentication is enabled; true to enable it
        zkConfig.getSessionTimeoutUnitMs(),  // session timeout
        zkConfig.getRequestTimeoutUnitMs(),  // request timeout
        5,
        Time.SYSTEM,
        "KS-ZK-ClusterPhyId-" + clusterPhyId,
        "KS-ZK-SessionExpireListener-clusterPhyId-" + clusterPhyId,
        Option.apply("KS-ZK-ClusterPhyId-" + clusterPhyId),
        Option.apply(this.getZKConfig(clusterPhyId, zkConfig.getOtherProps()))  // other settings; the authentication info mainly goes here
);
```

- 4、SQL example:

```sql
update ks_km_physical_cluster set zk_properties='{ "openSecure": true, "otherProps": { "zookeeper.sasl.clientconfig": "kafkaClusterZK1" } }' where id=<ID of cluster 1>;
```

- 5、The zk_properties field cannot cover every scenario, so in practice further adjustments may be needed on top of it. For example, both `Digest-MD5 authentication` and `Kerberos authentication` also require changes to the startup script. A future improvement may modify the ZK client source so that ZK authentication can be configured as conveniently as Kafka authentication.

---

## 2、Digest-MD5 Authentication

1. Suppose you have two Kafka clusters backed by two ZK clusters;
2. The authentication information of the two ZK clusters is as follows:

```bash
# Authentication info of ZK cluster 1. The name kafkaClusterZK1 is arbitrary; it just has to match the database configuration below.
kafkaClusterZK1 {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="zk1"
    password="zk1-passwd";
};

# Authentication info of ZK cluster 2. The name kafkaClusterZK2 is arbitrary; it just has to match the database configuration below.
kafkaClusterZK2 {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="zk2"
    password="zk2-passwd";
};
```

3. Store the authentication info of both ZK clusters in the file `/xxx/zk_client_jaas.conf`:

```bash
kafkaClusterZK1 {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="zk1"
    password="zk1-passwd";
};

kafkaClusterZK2 {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="zk2"
    password="zk2-passwd";
};
```

4. Modify the KnowStreaming startup script:

```bash
# Append the following to JAVA_OPT at line 47 of `KnowStreaming/bin/startup.sh`

-Djava.security.auth.login.config=/xxx/zk_client_jaas.conf
```

5. Update the KnowStreaming table data:

```sql
# kafkaClusterZK1 here must match the entry in /xxx/zk_client_jaas.conf
update ks_km_physical_cluster set zk_properties='{ "openSecure": true, "otherProps": { "zookeeper.sasl.clientconfig": "kafkaClusterZK1" } }' where id=<ID of cluster 1>;

update ks_km_physical_cluster set zk_properties='{ "openSecure": true, "otherProps": { "zookeeper.sasl.clientconfig": "kafkaClusterZK2" } }' where id=<ID of cluster 2>;
```

6. Restart KnowStreaming

---

## 3、Kerberos Authentication

**Step 1: Check the user's ACL in ZK**

Suppose we are using the `kafka` user.

- 1、Check the zookeeper.connect address configured in server.properties;
- 2、Log in to ZK with `zkCli.sh -server <zookeeper.connect address>`;
- 3、In the ZK shell, run `getAcl /kafka` to check the `kafka` user's permissions;

You should then see something like the following:

![check_zk_acl](./assets/zk_kafka_acl.jpg)

The `kafka` user needs the `cdrwa` permissions. If it does not have them, create the user and grant the permissions with `setAcl`.

**Step 2: Create the Kerberos keytab and prepare the KnowStreaming host**

- 1、In the Kerberos realm, create a `keytab` for `kafka/_HOST` and export it. For example: `kafka/dbs-kafka-test-8-53`;
- 2、Upload the exported keytab to `/etc/keytab` on the machine running KS;
- 3、On the KS machine, run `kinit -kt zookeeper.keytab kafka/dbs-kafka-test-8-53` to check that `Kerberos` login works;
- 4、Once login works, configure the `/opt/zookeeper.jaas` file, for example:

```bash
Client {
   com.sun.security.auth.module.Krb5LoginModule required
   useKeyTab=true
   storeKey=false
   serviceName="zookeeper"
   keyTab="/etc/keytab/zookeeper.keytab"
   principal="kafka/dbs-kafka-test-8-53@XXX.XXX.XXX";
};
```

- 5、Open the firewall on the `KDC-Server` for the `KnowStreaming` machine, add the `kdc-server` `hostname` to `/etc/hosts` on the KS machine, and copy `krb5.conf` into `/etc`;

**Step 3: Modify the KnowStreaming configuration**

- 1、Update the database to enable ZK authentication:

```sql
update ks_km_physical_cluster set zk_properties='{ "openSecure": true }' where id=<ID of cluster 1>;
```

- 2、Append the following to JAVA_OPT at line 47 of `KnowStreaming/bin/startup.sh`:

```bash
-Dsun.security.krb5.debug=true -Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/zookeeper.jaas
```

- 3、After restarting the KS cluster, if you see the following in start.out, Kerberos is configured successfully;

![kerberos_login_success](./assets/kerberos_5.png)

![kerberos_login_success](./assets/kerberos_6.png)

**Step 4: Additional notes**

- 1、If multiple Kafka clusters use the same Kerberos realm, you only need to grant the `kafka` user the `cdrwa` permissions in each `ZK`; the zkclient of every cluster can then authenticate when the cluster is initialized;
- 2、Multiple Kerberos realms are not supported yet;
@@ -2,275 +2,125 @@
|
|||||||
|
|
||||||

|

|
||||||
|
|
||||||
|
## JMX-连接失败问题解决
|
||||||
|
|
||||||
## 2、解决连接 JMX 失败
|
集群正常接入`KnowStreaming`之后,即可以看到集群的Broker列表,此时如果查看不了Topic的实时流量,或者是Broker的实时流量信息时,那么大概率就是`JMX`连接的问题了。
|
||||||
|
|
||||||
- [2、解决连接 JMX 失败](#2解决连接-jmx-失败)
|
下面我们按照步骤来一步一步的检查。
|
||||||
- [2.1、正异常现象](#21正异常现象)
|
|
||||||
- [2.2、异因一:JMX未开启](#22异因一jmx未开启)
|
### 1、问题说明
|
||||||
- [2.2.1、异常现象](#221异常现象)
|
|
||||||
- [2.2.2、解决方案](#222解决方案)
|
**类型一:JMX配置未开启**
|
||||||
- [2.3、异原二:JMX配置错误](#23异原二jmx配置错误)
|
|
||||||
- [2.3.1、异常现象](#231异常现象)
|
未开启时,直接到`2、解决方法`查看如何开启即可。
|
||||||
- [2.3.2、解决方案](#232解决方案)
|
|
||||||
- [2.4、异因三:JMX开启SSL](#24异因三jmx开启ssl)
|

|
||||||
- [2.4.1、异常现象](#241异常现象)
|
|
||||||
- [2.4.2、解决方案](#242解决方案)
|
|
||||||
- [2.5、异因四:连接了错误IP](#25异因四连接了错误ip)
|
|
||||||
- [2.5.1、异常现象](#251异常现象)
|
|
||||||
- [2.5.2、解决方案](#252解决方案)
|
|
||||||
- [2.6、异因五:连接了错误端口](#26异因五连接了错误端口)
|
|
||||||
- [2.6.1、异常现象](#261异常现象)
|
|
||||||
- [2.6.2、解决方案](#262解决方案)
|
|
||||||
|
|
||||||
|
|
||||||
背景:Kafka 通过 JMX 服务进行运行指标的暴露,因此 `KnowStreaming` 会主动连接 Kafka 的 JMX 服务进行指标采集。如果我们发现页面缺少指标,那么可能原因之一是 Kafka 的 JMX 端口配置的有问题导致指标获取失败,进而页面没有数据。
|
**类型二:配置错误**
|
||||||
|
|
||||||
|
`JMX`端口已经开启的情况下,有的时候开启的配置不正确,此时也会导致出现连接失败的问题。这里大概列举几种原因:
|
||||||
|
|
||||||
|
- `JMX`配置错误:见`2、解决方法`。
|
||||||
|
- 存在防火墙或者网络限制:网络通的另外一台机器`telnet`试一下看是否可以连接上。
|
||||||
|
- 需要进行用户名及密码的认证:见`3、解决方法 —— 认证的JMX`。
|
||||||
|
|
||||||
|
|
||||||
### 2.1、正异常现象
|
错误日志例子:
|
||||||
|
|
||||||
**1、异常现象**
|
|
||||||
|
|
||||||
Broker 列表的 JMX PORT 列出现红色感叹号,则表示 JMX 连接存在异常。
|
|
||||||
|
|
||||||
<img src=http://img-ys011.didistatic.com/static/dc2img/do1_MLlLCfAktne4X6MBtBUd width="90%">
|
|
||||||
|
|
||||||
|
|
||||||
**2、正常现象**
|
|
||||||
|
|
||||||
Broker 列表的 JMX PORT 列出现绿色,则表示 JMX 连接正常。
|
|
||||||
|
|
||||||
<img src=http://img-ys011.didistatic.com/static/dc2img/do1_ymtDTCiDlzfrmSCez2lx width="90%">
|
|
||||||
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
### 2.2、异因一:JMX未开启
|
|
||||||
|
|
||||||
#### 2.2.1、异常现象
|
|
||||||
|
|
||||||
broker列表的JMX Port值为-1,对应Broker的JMX未开启。
|
|
||||||
|
|
||||||
<img src=http://img-ys011.didistatic.com/static/dc2img/do1_E1PD8tPsMeR2zYLFBFAu width="90%">
|
|
||||||
|
|
||||||
#### 2.2.2、解决方案
|
|
||||||
|
|
||||||
开启JMX,开启流程如下:
|
|
||||||
|
|
||||||
1、修改kafka的bin目录下面的:`kafka-server-start.sh`文件
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# 在这个下面增加JMX端口的配置
|
|
||||||
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
|
|
||||||
export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
|
|
||||||
export JMX_PORT=9999 # 增加这个配置, 这里的数值并不一定是要9999
|
|
||||||
fi
|
|
||||||
```
|
```
|
||||||
|
|
||||||
|
|
||||||
2、修改kafka的bin目录下面对的:`kafka-run-class.sh`文件
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# JMX settings
|
|
||||||
if [ -z "$KAFKA_JMX_OPTS" ]; then
|
|
||||||
KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=当前机器的IP"
|
|
||||||
fi
|
|
||||||
|
|
||||||
# JMX port to use
|
|
||||||
if [ $JMX_PORT ]; then
|
|
||||||
KAFKA_JMX_OPTS="$KAFKA_JMX_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT -Dcom.sun.management.jmxremote.rmi.port=$JMX_PORT"
|
|
||||||
fi
|
|
||||||
```
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
3、重启Kafka-Broker。
|
|
||||||
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
### 2.3、异原二:JMX配置错误
|
|
||||||
|
|
||||||
#### 2.3.1、异常现象
|
|
||||||
|
|
||||||
错误日志:
|
|
||||||
|
|
||||||
```log
|
|
||||||
# 错误一: 错误提示的是真实的IP,这样的话基本就是JMX配置的有问题了。
|
# 错误一: 错误提示的是真实的IP,这样的话基本就是JMX配置的有问题了。
|
||||||
2021-01-27 10:06:20.730 ERROR 50901 --- [ics-Thread-1-62] c.x.k.m.c.utils.jmx.JmxConnectorWrap : JMX connect exception, host:192.168.0.1 port:9999. java.rmi.ConnectException: Connection refused to host: 192.168.0.1; nested exception is:
|
2021-01-27 10:06:20.730 ERROR 50901 --- [ics-Thread-1-62] c.x.k.m.c.utils.jmx.JmxConnectorWrap : JMX connect exception, host:192.168.0.1 port:9999.
|
||||||
|
java.rmi.ConnectException: Connection refused to host: 192.168.0.1; nested exception is:
|
||||||
|
|
||||||
|
|
||||||
# 错误二:错误提示的是127.0.0.1这个IP,这个是机器的hostname配置的可能有问题。
|
# 错误二:错误提示的是127.0.0.1这个IP,这个是机器的hostname配置的可能有问题。
|
||||||
2021-01-27 10:06:20.730 ERROR 50901 --- [ics-Thread-1-62] c.x.k.m.c.utils.jmx.JmxConnectorWrap : JMX connect exception, host:127.0.0.1 port:9999. java.rmi.ConnectException: Connection refused to host: 127.0.0.1;; nested exception is:
|
2021-01-27 10:06:20.730 ERROR 50901 --- [ics-Thread-1-62] c.x.k.m.c.utils.jmx.JmxConnectorWrap : JMX connect exception, host:127.0.0.1 port:9999.
|
||||||
|
java.rmi.ConnectException: Connection refused to host: 127.0.0.1;; nested exception is:
|
||||||
```
|
```
|
||||||
|
|
||||||
#### 2.3.2、解决方案
|
**类型三:连接特定IP**
|
||||||
|
|
||||||
开启JMX,开启流程如下:
|
Broker 配置了内外网,而JMX在配置时,可能配置了内网IP或者外网IP,此时 `KnowStreaming` 需要连接到特定网络的IP才可以进行访问。
|
||||||
|
|
||||||
1、修改kafka的bin目录下面的:`kafka-server-start.sh`文件
|
比如:
|
||||||
|
|
||||||
```bash
|
Broker在ZK的存储结构如下所示,我们期望连接到 `endpoints` 中标记为 `INTERNAL` 的地址,但是 `KnowStreaming` 却连接了 `EXTERNAL` 的地址,此时可以看 `4、解决方法 —— JMX连接特定网络` 进行解决。
|
||||||
# 在这个下面增加JMX端口的配置
|
|
||||||
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
|
|
||||||
export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
|
|
||||||
export JMX_PORT=9999 # 增加这个配置, 这里的数值并不一定是要9999
|
|
||||||
fi
|
|
||||||
```
|
|
||||||
|
|
||||||
2、修改kafka的bin目录下面对的:`kafka-run-class.sh`文件
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# JMX settings
|
|
||||||
if [ -z "$KAFKA_JMX_OPTS" ]; then
|
|
||||||
KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=当前机器的IP"
|
|
||||||
fi
|
|
||||||
|
|
||||||
# JMX port to use
|
|
||||||
if [ $JMX_PORT ]; then
|
|
||||||
KAFKA_JMX_OPTS="$KAFKA_JMX_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT -Dcom.sun.management.jmxremote.rmi.port=$JMX_PORT"
|
|
||||||
fi
|
|
||||||
```
|
|
||||||
|
|
||||||
3、重启Kafka-Broker。
|
|
||||||
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
### 2.4、异因三:JMX开启SSL
|
|
||||||
|
|
||||||
#### 2.4.1、异常现象
|
|
||||||
|
|
||||||
```log
|
|
||||||
# 连接JMX的日志中,出现SSL认证失败的相关日志。TODO:欢迎补充具体日志案例。
|
|
||||||
```
|
|
||||||
|
|
||||||
#### 2.4.2、解决方案
|
|
||||||
|
|
||||||
<img src=http://img-ys011.didistatic.com/static/dc2img/do1_kNyCi8H9wtHSRkWurB6S width="50%">
|
|
||||||
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
|
|
||||||
### 2.5、异因四:连接了错误IP
|
|
||||||
|
|
||||||
#### 2.5.1、异常现象
|
|
||||||
|
|
||||||
Broker 配置了内外网,而JMX在配置时,可能配置了内网IP或者外网IP,此时`KnowStreaming` 需要连接到特定网络的IP才可以进行访问。
|
|
||||||
|
|
||||||
比如:Broker在ZK的存储结构如下所示,我们期望连接到 `endpoints` 中标记为 `INTERNAL` 的地址,但是 `KnowStreaming` 却连接了 `EXTERNAL` 的地址。
|
|
||||||
|
|
||||||
```json
|
```json
|
||||||
{
|
{
|
||||||
"listener_security_protocol_map": {
|
"listener_security_protocol_map": {"EXTERNAL":"SASL_PLAINTEXT","INTERNAL":"SASL_PLAINTEXT"},
|
||||||
"EXTERNAL": "SASL_PLAINTEXT",
|
"endpoints": ["EXTERNAL://192.168.0.1:7092","INTERNAL://192.168.0.2:7093"],
|
||||||
"INTERNAL": "SASL_PLAINTEXT"
|
|
||||||
},
|
|
||||||
"endpoints": [
|
|
||||||
"EXTERNAL://192.168.0.1:7092",
|
|
||||||
"INTERNAL://192.168.0.2:7093"
|
|
||||||
],
|
|
||||||
"jmx_port": 8099,
|
"jmx_port": 8099,
|
||||||
"host": "192.168.0.1",
|
"host": "192.168.0.1",
|
||||||
"timestamp": "1627289710439",
|
"timestamp": "1627289710439",
|
||||||
"port": -1,
|
"port": -1,
|
||||||
"version": 4
|
"version": 4
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
#### 2.5.2、解决方案
|
### 2、解决方法
|
||||||
|
|
||||||
|
这里仅介绍一下比较通用的解决方式,如若有更好的方式,欢迎大家指导告知一下。
|
||||||
|
|
||||||
|
修改`kafka-server-start.sh`文件:
|
||||||
|
```
|
||||||
|
# 在这个下面增加JMX端口的配置
|
||||||
|
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
|
||||||
|
export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
|
||||||
|
export JMX_PORT=9999 # 增加这个配置, 这里的数值并不一定是要9999
|
||||||
|
fi
|
||||||
|
```
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
修改`kafka-run-class.sh`文件
|
||||||
|
```
|
||||||
|
# JMX settings
|
||||||
|
if [ -z "$KAFKA_JMX_OPTS" ]; then
|
||||||
|
KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=${当前机器的IP}"
|
||||||
|
fi
|
||||||
|
|
||||||
|
# JMX port to use
|
||||||
|
if [ $JMX_PORT ]; then
|
||||||
|
KAFKA_JMX_OPTS="$KAFKA_JMX_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT -Dcom.sun.management.jmxremote.rmi.port=$JMX_PORT"
|
||||||
|
fi
|
||||||
|
```
|
||||||
|
|
||||||
|
|
||||||
|
### 3、解决方法 —— 认证的JMX
|
||||||
|
|
||||||
|
如果您是直接看的这个部分,建议先看一下上一节:`2、解决方法`以确保`JMX`的配置没有问题了。
|
||||||
|
|
||||||
|
在`JMX`的配置等都没有问题的情况下,如果是因为认证的原因导致连接不了的,可以在集群接入界面配置你的`JMX`认证信息。
|
||||||
|
|
||||||
|
<img src='http://img-ys011.didistatic.com/static/dc2img/do1_EUU352qMEX1Jdp7pxizp' width=350>
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
### 4、解决方法 —— JMX连接特定网络
|
||||||
|
|
||||||
可以手动往`ks_km_physical_cluster`表的`jmx_properties`字段增加一个`useWhichEndpoint`字段,从而控制 `KnowStreaming` 连接到特定的JMX IP及PORT。
|
可以手动往`ks_km_physical_cluster`表的`jmx_properties`字段增加一个`useWhichEndpoint`字段,从而控制 `KnowStreaming` 连接到特定的JMX IP及PORT。
|
||||||
|
|
||||||
`jmx_properties`格式:
|
`jmx_properties`格式:
|
||||||
|
|
||||||
```json
|
```json
|
||||||
{
|
{
|
||||||
"maxConn": 100, // KM对单台Broker的最大JMX连接数
|
"maxConn": 100, # KM对单台Broker的最大JMX连接数
|
||||||
"username": "xxxxx", //用户名,可以不填写
|
"username": "xxxxx", # 用户名,可以不填写
|
||||||
"password": "xxxx", // 密码,可以不填写
|
"password": "xxxx", # 密码,可以不填写
|
||||||
"openSSL": true, //开启SSL, true表示开启ssl, false表示关闭
|
"openSSL": true, # 开启SSL, true表示开启ssl, false表示关闭
|
||||||
"useWhichEndpoint": "EXTERNAL" //指定要连接的网络名称,填写EXTERNAL就是连接endpoints里面的EXTERNAL地址
|
"useWhichEndpoint": "EXTERNAL" #指定要连接的网络名称,填写EXTERNAL就是连接endpoints里面的EXTERNAL地址
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
SQL例子:
|
SQL例子:
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
UPDATE ks_km_physical_cluster SET jmx_properties='{ "maxConn": 10, "username": "xxxxx", "password": "xxxx", "openSSL": false , "useWhichEndpoint": "xxx"}' where id={xxx};
|
UPDATE ks_km_physical_cluster SET jmx_properties='{ "maxConn": 10, "username": "xxxxx", "password": "xxxx", "openSSL": false , "useWhichEndpoint": "xxx"}' where id={xxx};
|
||||||
```
|
```
|
||||||
|
|
||||||
|
注意:
|
||||||
|
|
||||||
---
|
+ 目前此功能只支持采用 `ZK` 做分布式协调的kafka集群。
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
### 2.6、异因五:连接了错误端口
|
|
||||||
|
|
||||||
3.3.0 以上版本,或者是 master 分支最新代码,才具备该能力。
|
|
||||||
|
|
||||||
#### 2.6.1、异常现象
|
|
||||||
|
|
||||||
在 AWS 或者是容器上的 Kafka-Broker,使用同一个IP,但是外部服务想要去连接 JMX 端口时,需要进行映射。因此 KnowStreaming 如果直接连接 ZK 上获取到的 JMX 端口,会连接失败,因此需要具备连接端口可配置的能力。
|
|
||||||
|
|
||||||
TODO:补充具体的日志。
|
|
||||||
|
|
||||||
|
|
||||||
#### 2.6.2、解决方案
|
|
||||||
|
|
||||||
可以手动往`ks_km_physical_cluster`表的`jmx_properties`字段增加一个`specifiedJmxPortList`字段,从而控制 `KnowStreaming` 连接到特定的JMX PORT。
|
|
||||||
|
|
||||||
`jmx_properties`格式:
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"jmxPort": 2445, // 最低优先级使用的jmx端口
|
|
||||||
"maxConn": 100, // KM对单台Broker的最大JMX连接数
|
|
||||||
"username": "xxxxx", //用户名,可以不填写
|
|
||||||
"password": "xxxx", // 密码,可以不填写
|
|
||||||
"openSSL": true, //开启SSL, true表示开启ssl, false表示关闭
|
|
||||||
"useWhichEndpoint": "EXTERNAL", //指定要连接的网络名称,填写EXTERNAL就是连接endpoints里面的EXTERNAL地址
|
|
||||||
"specifiedJmxPortList": [ // 配置最高优先使用的jmx端口
|
|
||||||
{
|
|
||||||
"serverId": "1", // kafka-broker的brokerId, 注意这个是字符串类型,字符串类型的原因是要兼容connect的jmx端口的连接
|
|
||||||
"jmxPort": 1234 // 该 broker 所连接的jmx端口
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"serverId": "2",
|
|
||||||
"jmxPort": 1234
|
|
||||||
},
|
|
||||||
]
|
|
||||||
}
|
|
||||||
```

SQL example:

```sql
UPDATE ks_km_physical_cluster SET jmx_properties='{ "maxConn": 10, "username": "xxxxx", "password": "xxxx", "openSSL": false , "specifiedJmxPortList": [{"serverId": "1", "jmxPort": 1234}] }' where id={xxx};
```
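The priority described above (an entry in `specifiedJmxPortList` wins over the top-level `jmxPort`) can be sketched as follows. `resolve_jmx_port` is a hypothetical helper for illustration, not part of KnowStreaming:

```python
import json

def resolve_jmx_port(jmx_properties: str, server_id: str) -> int:
    """Pick the JMX port for a broker: specifiedJmxPortList first, then jmxPort."""
    props = json.loads(jmx_properties)
    for entry in props.get("specifiedJmxPortList", []):
        if entry["serverId"] == server_id:   # serverId is deliberately a string
            return entry["jmxPort"]
    return props.get("jmxPort", -1)          # lowest-priority fallback

props = '{"jmxPort": 2445, "specifiedJmxPortList": [{"serverId": "1", "jmxPort": 1234}]}'
print(resolve_jmx_port(props, "1"))  # 1234, from specifiedJmxPortList
print(resolve_jmx_port(props, "3"))  # 2445, falls back to jmxPort
```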

---

|
|
||||||
|
|
||||||
# 页面无数据排查手册
|
|
||||||
|
|
||||||
- [页面无数据排查手册](#页面无数据排查手册)
|
|
||||||
- [1、集群接入错误](#1集群接入错误)
|
|
||||||
- [1.1、异常现象](#11异常现象)
|
|
||||||
- [1.2、解决方案](#12解决方案)
|
|
||||||
- [1.3、正常情况](#13正常情况)
|
|
||||||
- [2、JMX连接失败](#2jmx连接失败)
|
|
||||||
- [3、ElasticSearch问题](#3elasticsearch问题)
|
|
||||||
- [3.1、异因一:缺少索引](#31异因一缺少索引)
|
|
||||||
- [3.1.1、异常现象](#311异常现象)
|
|
||||||
- [3.1.2、解决方案](#312解决方案)
|
|
||||||
- [3.2、异因二:索引模板错误](#32异因二索引模板错误)
|
|
||||||
- [3.2.1、异常现象](#321异常现象)
|
|
||||||
- [3.2.2、解决方案](#322解决方案)
|
|
||||||
- [3.3、异因三:集群Shard满](#33异因三集群shard满)
|
|
||||||
- [3.3.1、异常现象](#331异常现象)
|
|
||||||
- [3.3.2、解决方案](#332解决方案)
|
|
||||||
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||

## 1. Cluster Access Errors

### 1.1 Symptom

As shown in the figure below. When the cluster is not empty, this is most likely caused by a misconfigured address.

<img src=http://img-ys011.didistatic.com/static/dc2img/do1_BRiXBvqYFK2dxSF1aqgZ width="80%">

### 1.2 Solution

When adding the cluster, fix the problem according to the error message shown. For example:

<img src=http://img-ys011.didistatic.com/static/dc2img/do1_Yn4LhV8aeSEKX1zrrkUi width="50%">

### 1.3 Normal Behavior

When the cluster is added successfully, the page information appears automatically and no errors are shown.

---

## 2. JMX Connection Failures

Background: Kafka exposes its runtime metrics via JMX, so `KnowStreaming` actively connects to Kafka's JMX service to collect them. If a page is missing metrics, one possible cause is a misconfigured Kafka JMX port: metric collection fails, and the page ends up with no data.

See the document in the same directory for details: [Fixing JMX Connection Failures](./%E8%A7%A3%E5%86%B3%E8%BF%9E%E6%8E%A5JMX%E5%A4%B1%E8%B4%A5.md)

---

## 3. ElasticSearch Problems

**Background:**
`KnowStreaming` stores the metrics collected from Kafka in ES, so problems in ES can also leave pages without data.

**Logs:**
`KnowStreaming`'s ES read/write logs are in `logs/es/es.log`.

**Note:**
On macOS, `curl` commands may fail with a zsh error. The following steps work around it:

```bash
# 1. Open ~/.zshrc
vim ~/.zshrc
# 2. Add the following line to ~/.zshrc
setopt no_nomatch
# 3. Reload the configuration
source ~/.zshrc
```

---

### 3.1 Cause 1: Missing Indices

#### 3.1.1 Symptom

Error message:

```log
# log location: logs/es/es.log
com.didiglobal.logi.elasticsearch.client.model.exception.ESIndexNotFoundException: method [GET], host[http://127.0.0.1:9200], URI [/ks_kafka_broker_metric_2022-10-21,ks_kafka_broker_metric_2022-10-22/_search], status line [HTTP/1.1 404 Not Found]
```

Listing the KS indices with `curl http://{ES_IP}:{ES_PORT}/_cat/indices/ks_kafka*` shows that none exist.

#### 3.1.2 Solution

Run the [ES index and template initialization](https://github.com/didi/KnowStreaming/blob/master/bin/init_es_template.sh) script to create the indices and templates.

---

### 3.2 Cause 2: Wrong Index Template

#### 3.2.1 Symptom

The multi-cluster list shows data, but the charts on the cluster detail page are empty. Querying the KS index template list shows that the templates do not exist.

```bash
curl {ES_IP}:{ES_PORT}/_cat/templates/ks_kafka*?v&h=name
```

A normal set of KS templates looks like this:

<img src=http://img-ys011.didistatic.com/static/dc2img/do1_l79bPYSci9wr6KFwZDA6 width="90%">

#### 3.2.2 Solution

Delete the KS index templates and indices:

```bash
curl -XDELETE {ES_IP}:{ES_PORT}/ks_kafka*
curl -XDELETE {ES_IP}:{ES_PORT}/_template/ks_kafka*
```

Then run the [ES index and template initialization](https://github.com/didi/KnowStreaming/blob/master/bin/init_es_template.sh) script to recreate them.

---

### 3.3 Cause 3: Cluster Shards Exhausted

#### 3.3.1 Symptom

Error message:

```log
# log location: logs/es/es.log

{"error":{"root_cause":[{"type":"validation_exception","reason":"Validation Failed: 1: this action would add [4] total shards, but this cluster currently has [1000]/[1000] maximum shards open;"}],"type":"validation_exception","reason":"Validation Failed: 1: this action would add [4] total shards, but this cluster currently has [1000]/[1000] maximum shards open;"},"status":400}
```

Manually creating an index also fails:

```bash
# command that creates an index named ks_kafka_cluster_metric_test
curl -s -XPUT http://{ES_IP}:{ES_PORT}/ks_kafka_cluster_metric_test
```

#### 3.3.2 Solution

By default, ES allows at most 1000 open shards per node; once the limit is reached, index creation fails.

+ Raise the cluster-wide shard limit:

```bash
curl -XPUT -H"content-type:application/json" http://{ES_IP}:{ES_PORT}/_cluster/settings -d '
{
    "persistent": {
        "cluster": {
            "max_shards_per_node": {shard limit; default 1000, can be raised to e.g. 10000 for testing}
        }
    }
}'
```

Then run the [ES index and template initialization](https://github.com/didi/KnowStreaming/blob/master/bin/init_es_template.sh) script to create the missing indices.
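The validation that produces the error above can be sketched as follows; `can_add_index` is a hypothetical helper mirroring ES's check, not an ES API:

```python
def can_add_index(open_shards: int, nodes: int, max_per_node: int,
                  new_primaries: int, replicas: int) -> bool:
    """Creation succeeds only if the total open shards after creation stay
    within max_shards_per_node * number_of_data_nodes."""
    new_total = new_primaries * (1 + replicas)  # each primary also gets its replicas
    return open_shards + new_total <= max_per_node * nodes

# The error above: 1000/1000 shards already open on one node, new index needs 4 shards.
print(can_add_index(1000, 1, 1000, 2, 1))   # False -> creation is rejected
print(can_add_index(1000, 1, 10000, 2, 1))  # True after raising the limit
```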

---

```bash
# All images are available on Docker Hub
# Quick install (NAMESPACE must already exist; startup takes a few minutes to initialize, please wait)
helm install -n [NAMESPACE] [NAME] http://download.knowstreaming.com/charts/knowstreaming-manager-0.1.5.tgz

# Get the service of the KnowStreaming front-end UI. NodePort by default.
# (http://nodeIP:nodeport, default username/password: admin/admin2022_)
# Since `v3.0.0-beta.2` (helm chart 0.1.4 and later), the default username/password is `admin` / `admin`

# Add the repository
helm repo add knowstreaming http://download.knowstreaming.com/charts
helm pull knowstreaming/knowstreaming-manager
```

#### 2.1.3.2 Docker Compose
**Environment dependencies**

- [Docker](https://docs.docker.com/engine/install/)
- [Docker Compose](https://docs.docker.com/compose/install/)

**Install commands**
```bash
# Since `v3.0.0-beta.2` (docker image 0.2.0 and later), the default username/password is `admin` / `admin`
# The latest image versions can be found at https://hub.docker.com/u/knowstreaming
# MySQL and ES can be services you host yourself; just adjust the corresponding configuration

# Copy docker-compose.yml to the target location, then run the command below to start
docker-compose up -d
```
|
|
||||||
**验证安装**
|
|
||||||
```shell
|
|
||||||
docker-compose ps
|
|
||||||
# 验证启动 - 状态为 UP 则表示成功
|
|
||||||
Name Command State Ports
|
|
||||||
----------------------------------------------------------------------------------------------------
|
|
||||||
elasticsearch-single /usr/local/bin/docker-entr ... Up 9200/tcp, 9300/tcp
|
|
||||||
knowstreaming-init /bin/bash /es_template_cre ... Up
|
|
||||||
knowstreaming-manager /bin/sh /ks-start.sh Up 80/tcp
|
|
||||||
knowstreaming-mysql /entrypoint.sh mysqld Up (health: starting) 3306/tcp, 33060/tcp
|
|
||||||
knowstreaming-ui /docker-entrypoint.sh ngin ... Up 0.0.0.0:80->80/tcp
|
|
||||||
|
|
||||||
# 稍等一分钟左右 knowstreaming-init 会退出,表示es初始化完成,可以访问页面
|
|
||||||
Name Command State Ports
|
|
||||||
-------------------------------------------------------------------------------------------
|
|
||||||
knowstreaming-init /bin/bash /es_template_cre ... Exit 0
|
|
||||||
knowstreaming-mysql /entrypoint.sh mysqld Up (healthy) 3306/tcp, 33060/tcp
|
|
||||||
```
|
|
||||||
|
|
||||||
**访问**
|
|
||||||
```http request
|
|
||||||
http://127.0.0.1:80/
|
|
||||||
```
|
|
||||||
|
|
||||||
|
|
||||||

**docker-compose.yml**
```yml
version: "2"

services:
  # *Do not rename the knowstreaming-manager service; the UI depends on it
  knowstreaming-manager:
    image: knowstreaming/knowstreaming-manager:latest
    container_name: knowstreaming-manager
    privileged: true
    restart: always
    # ...
    command:
      - /bin/sh
      - /ks-start.sh
    environment:
      TZ: Asia/Shanghai
      # MySQL service address
      SERVER_MYSQL_ADDRESS: knowstreaming-mysql:3306
      # MySQL database name
      SERVER_MYSQL_DB: know_streaming
      # MySQL username
      SERVER_MYSQL_USER: root
      # MySQL password
      SERVER_MYSQL_PASSWORD: admin2022_
      # ES service address
      SERVER_ES_ADDRESS: elasticsearch-single:9200
      # JVM options for the service
      JAVA_OPTS: -Xmx1g -Xms1g
    # hostnames used in Kafka's ADVERTISED_LISTENERS can be mapped like this
    # extra_hosts:
    #   - "hostname:x.x.x.x"
    # service log path
    # volumes:
    #   - /ks/manage/log:/logs

  knowstreaming-ui:
    image: knowstreaming/knowstreaming-ui:latest
    container_name: knowstreaming-ui
    restart: always
    ports:
      - '80:80'
    environment:
      TZ: Asia/Shanghai
    depends_on:
      - knowstreaming-manager
    # extra_hosts:
    #   - "hostname:x.x.x.x"

  elasticsearch-single:
    image: docker.io/library/elasticsearch:7.6.2
    container_name: elasticsearch-single
    # ...
    # ports:
    #   - '9300:9300'
    environment:
      TZ: Asia/Shanghai
      # ES JVM options
      ES_JAVA_OPTS: -Xms512m -Xmx512m
      # single-node setup; for a multi-node cluster see https://www.elastic.co/guide/en/elasticsearch/reference/7.6/docker.html#docker-compose-file
      discovery.type: single-node
    # data persistence path
    # volumes:
    #   - /ks/es/data:/usr/share/elasticsearch/data

  # ES init service, using the same image as the manager
  # The templates and indices must be initialized when ES starts for the first time; afterwards they are created automatically
  knowstreaming-init:
    image: knowstreaming/knowstreaming-manager:latest
    container_name: knowstreaming-init
    depends_on:
      - elasticsearch-single
    command:
      - /bin/bash
      - /es_template_create.sh
    environment:
      TZ: Asia/Shanghai
      # ES service address
      SERVER_ES_ADDRESS: elasticsearch-single:9200

  knowstreaming-mysql:
    image: knowstreaming/knowstreaming-mysql:latest
    container_name: knowstreaming-mysql
    restart: always
    environment:
      TZ: Asia/Shanghai
      # root password
      MYSQL_ROOT_PASSWORD: admin2022_
      # database created at initialization
      MYSQL_DATABASE: know_streaming
      # allow root to connect from any host
      MYSQL_ROOT_HOST: '%'
    expose:
      - 3306
    # ports:
    #   - '3306:3306'
    # data persistence path
    # volumes:
    #   - /ks/mysql/data:/data/mysql
```

---

## 6.2 Version Upgrade Manual

Note:
- To upgrade to a specific version, you must apply every change between your current version and the target version, in order; only then will the system work properly.
- If a version in between has no upgrade notes, that version can be reached from the previous one simply by replacing the installation package.

### Upgrading to the `master` version

None at the moment.

---
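The "apply every change in order" rule above can be sketched as follows; `steps_to_run`, the version list, and the step names are all illustrative assumptions, not part of KnowStreaming:

```python
# Hypothetical sketch: collect the upgrade steps to run, in order,
# when moving from a current version to a target version.
UPGRADE_STEPS = {  # version -> changes it requires (names are illustrative)
    "3.2.0": ["3.2.0.sql", "3.2.0-config"],
    "3.3.0": ["3.3.0.sql"],
    "3.4.0": ["3.4.0.sql", "3.4.0-config"],
}
ORDERED = ["3.1.0", "3.2.0", "3.3.0", "3.4.0"]

def steps_to_run(current, target):
    lo, hi = ORDERED.index(current), ORDERED.index(target)
    out = []
    for v in ORDERED[lo + 1 : hi + 1]:        # every version after current, up to target
        out.extend(UPGRADE_STEPS.get(v, []))  # versions with no notes contribute nothing
    return out

print(steps_to_run("3.1.0", "3.3.0"))  # ['3.2.0.sql', '3.2.0-config', '3.3.0.sql']
```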

### Upgrading to `3.4.0`

**Configuration changes**

```yaml
# new configuration
request:                    # request-related configuration
  api-call:                 # API calls
    timeout-unit-ms: 8000   # timeout, 8000 ms by default
```

**SQL changes**
```sql
-- Multi-cluster management permissions, added 2023-06-27
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2026', 'Connector-新增', '1593', '1', '2', 'Connector-新增', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2028', 'Connector-编辑', '1593', '1', '2', 'Connector-编辑', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2030', 'Connector-删除', '1593', '1', '2', 'Connector-删除', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2032', 'Connector-重启', '1593', '1', '2', 'Connector-重启', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2034', 'Connector-暂停&恢复', '1593', '1', '2', 'Connector-暂停&恢复', '0', 'know-streaming');

INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2026', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2028', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2030', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2032', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2034', '0', 'know-streaming');


-- Multi-cluster management permissions, added 2023-06-29
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2036', 'Security-ACL新增', '1593', '1', '2', 'Security-ACL新增', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2038', 'Security-ACL删除', '1593', '1', '2', 'Security-ACL删除', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2040', 'Security-User新增', '1593', '1', '2', 'Security-User新增', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2042', 'Security-User删除', '1593', '1', '2', 'Security-User删除', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2044', 'Security-User修改密码', '1593', '1', '2', 'Security-User修改密码', '0', 'know-streaming');

INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2036', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2038', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2040', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2042', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2044', '0', 'know-streaming');


-- Multi-cluster management permissions, added 2023-07-06
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2046', 'Group-删除', '1593', '1', '2', 'Group-删除', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2048', 'GroupOffset-Topic纬度删除', '1593', '1', '2', 'GroupOffset-Topic纬度删除', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2050', 'GroupOffset-Partition纬度删除', '1593', '1', '2', 'GroupOffset-Partition纬度删除', '0', 'know-streaming');

INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2046', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2048', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2050', '0', 'know-streaming');


-- Multi-cluster management permissions, added 2023-07-18
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2052', 'Security-User查看密码', '1593', '1', '2', 'Security-User查看密码', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2052', '0', 'know-streaming');
```

---

### Upgrading to `3.3.0`

**SQL changes**
```sql
ALTER TABLE `logi_security_user`
CHANGE COLUMN `phone` `phone` VARCHAR(20) NOT NULL DEFAULT '' COMMENT 'mobile' ;

ALTER TABLE ks_kc_connector ADD `heartbeat_connector_name` varchar(512) DEFAULT '' COMMENT '心跳检测connector名称';
ALTER TABLE ks_kc_connector ADD `checkpoint_connector_name` varchar(512) DEFAULT '' COMMENT '进度确认connector名称';

INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_MIRROR_MAKER_TOTAL_RECORD_ERRORS', '{\"value\" : 1}', 'MirrorMaker消息处理错误的次数', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_MIRROR_MAKER_REPLICATION_LATENCY_MS_MAX', '{\"value\" : 6000}', 'MirrorMaker消息复制最大延迟时间', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_MIRROR_MAKER_UNASSIGNED_TASK_COUNT', '{\"value\" : 20}', 'MirrorMaker未被分配的任务数量', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_MIRROR_MAKER_FAILED_TASK_COUNT', '{\"value\" : 10}', 'MirrorMaker失败状态的任务数量', 'admin');


-- Multi-cluster management permissions, added 2023-01-05
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2012', 'Topic-新增Topic复制', '1593', '1', '2', 'Topic-新增Topic复制', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2014', 'Topic-详情-取消Topic复制', '1593', '1', '2', 'Topic-详情-取消Topic复制', '0', 'know-streaming');

INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2012', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2014', '0', 'know-streaming');


-- Multi-cluster management permissions, added 2023-01-18
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2016', 'MM2-新增', '1593', '1', '2', 'MM2-新增', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2018', 'MM2-编辑', '1593', '1', '2', 'MM2-编辑', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2020', 'MM2-删除', '1593', '1', '2', 'MM2-删除', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2022', 'MM2-重启', '1593', '1', '2', 'MM2-重启', '0', 'know-streaming');
INSERT INTO `logi_security_permission` (`id`, `permission_name`, `parent_id`, `leaf`, `level`, `description`, `is_delete`, `app_name`) VALUES ('2024', 'MM2-暂停&恢复', '1593', '1', '2', 'MM2-暂停&恢复', '0', 'know-streaming');

INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2016', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2018', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2020', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2022', '0', 'know-streaming');
INSERT INTO `logi_security_role_permission` (`role_id`, `permission_id`, `is_delete`, `app_name`) VALUES ('1677', '2024', '0', 'know-streaming');


DROP TABLE IF EXISTS `ks_ha_active_standby_relation`;
CREATE TABLE `ks_ha_active_standby_relation` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
  `active_cluster_phy_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT '主集群ID',
  `standby_cluster_phy_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT '备集群ID',
  `res_name` varchar(192) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT '资源名称',
  `res_type` int(11) NOT NULL DEFAULT '-1' COMMENT '资源类型,0:集群,1:镜像Topic,2:主备Topic',
  `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `modify_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
  `update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_cluster_res` (`res_type`,`active_cluster_phy_id`,`standby_cluster_phy_id`,`res_name`),
  UNIQUE KEY `uniq_res_type_standby_cluster_res_name` (`res_type`,`standby_cluster_phy_id`,`res_name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='HA主备关系表';


-- Drop the idx_cluster_phy_id index and add idx_cluster_update_time
ALTER TABLE `ks_km_kafka_change_record` DROP INDEX `idx_cluster_phy_id` ,
ADD INDEX `idx_cluster_update_time` (`cluster_phy_id` ASC, `update_time` ASC);
```

---

### Upgrading to `3.2.0`

**Configuration changes**

```yaml
# new configuration

spring:
  logi-job:        # database configuration for the logi-job module that know-streaming depends on; by default keep it identical to know-streaming's database configuration
    enable: true   # true enables job tasks, false disables them. KS can be deployed as two services, one serving front-end requests and one running job tasks; this flag controls that split

# thread pool sizes
thread-pool:
  es:
    search:               # ES query thread pool
      thread-num: 20      # pool size
      queue-size: 10000   # queue size

# client pool sizes
client-pool:
  kafka-admin:
    client-cnt: 1         # number of KafkaAdminClient instances created per Kafka cluster

# ES client configuration
es:
  index:
    expire: 15            # index retention in days; 15 means indices older than 15 days are deleted by KS
```
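The `es.index.expire` retention on the daily `ks_kafka_*_metric_YYYY-MM-DD` indices can be sketched as follows; `expired_indices` is a hypothetical helper for illustration, not KS's actual implementation:

```python
from datetime import date, timedelta

def expired_indices(names, today, expire_days):
    """Daily-index retention sketch: an index like ks_kafka_broker_metric_2022-10-21
    counts as expired once its date is more than expire_days in the past."""
    cutoff = today - timedelta(days=expire_days)
    out = []
    for name in names:
        day = date.fromisoformat(name.rsplit("_", 1)[-1])  # trailing YYYY-MM-DD
        if day < cutoff:
            out.append(name)
    return out

names = ["ks_kafka_broker_metric_2022-10-01", "ks_kafka_broker_metric_2022-10-21"]
print(expired_indices(names, date(2022, 10, 22), 15))
# ['ks_kafka_broker_metric_2022-10-01']
```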
**SQL 变更**
|
|
||||||
```sql
|
|
||||||
DROP TABLE IF EXISTS `ks_kc_connect_cluster`;
|
|
||||||
CREATE TABLE `ks_kc_connect_cluster` (
|
|
||||||
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'Connect集群ID',
|
|
||||||
`kafka_cluster_phy_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'Kafka集群ID',
|
|
||||||
`name` varchar(128) NOT NULL DEFAULT '' COMMENT '集群名称',
|
|
||||||
`group_name` varchar(128) NOT NULL DEFAULT '' COMMENT '集群Group名称',
|
|
||||||
`cluster_url` varchar(1024) NOT NULL DEFAULT '' COMMENT '集群地址',
|
|
||||||
`member_leader_url` varchar(1024) NOT NULL DEFAULT '' COMMENT 'URL地址',
|
|
||||||
`version` varchar(64) NOT NULL DEFAULT '' COMMENT 'connect版本',
|
|
||||||
`jmx_properties` text COMMENT 'JMX配置',
|
|
||||||
`state` tinyint(4) NOT NULL DEFAULT '1' COMMENT '集群使用的消费组状态,也表示集群状态:-1 Unknown,0 ReBalance,1 Active,2 Dead,3 Empty',
|
|
||||||
`create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '接入时间',
|
|
||||||
`update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
|
|
||||||
PRIMARY KEY (`id`),
|
|
||||||
UNIQUE KEY `uniq_id_group_name` (`id`,`group_name`),
|
|
||||||
UNIQUE KEY `uniq_name_kafka_cluster` (`name`,`kafka_cluster_phy_id`),
|
|
||||||
KEY `idx_kafka_cluster_phy_id` (`kafka_cluster_phy_id`)
|
|
||||||
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='Connect集群信息表';
|
|
||||||
|
|
||||||
|
|
||||||
DROP TABLE IF EXISTS `ks_kc_connector`;
|
|
||||||
CREATE TABLE `ks_kc_connector` (
|
|
||||||
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
|
|
||||||
`kafka_cluster_phy_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'Kafka集群ID',
|
|
||||||
`connect_cluster_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'Connect集群ID',
|
|
||||||
`connector_name` varchar(512) NOT NULL DEFAULT '' COMMENT 'Connector名称',
|
|
||||||
`connector_class_name` varchar(512) NOT NULL DEFAULT '' COMMENT 'Connector类',
|
|
||||||
`connector_type` varchar(32) NOT NULL DEFAULT '' COMMENT 'Connector类型',
|
|
||||||
`state` varchar(45) NOT NULL DEFAULT '' COMMENT '状态',
|
|
||||||
`topics` text COMMENT '访问过的Topics',
|
|
||||||
`task_count` int(11) NOT NULL DEFAULT '0' COMMENT '任务数',
|
|
||||||
`create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
|
|
||||||
`update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
|
|
||||||
PRIMARY KEY (`id`),
|
|
||||||
UNIQUE KEY `uniq_connect_cluster_id_connector_name` (`connect_cluster_id`,`connector_name`)
|
|
||||||
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='Connector信息表';
|
|
||||||
|
|
||||||
|
|
||||||
DROP TABLE IF EXISTS `ks_kc_worker`;
|
|
||||||
CREATE TABLE `ks_kc_worker` (
|
|
||||||
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
|
|
||||||
`kafka_cluster_phy_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'Kafka集群ID',
|
|
||||||
`connect_cluster_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'Connect集群ID',
|
|
||||||
`member_id` varchar(512) NOT NULL DEFAULT '' COMMENT '成员ID',
|
|
||||||
`host` varchar(128) NOT NULL DEFAULT '' COMMENT '主机名',
|
|
||||||
`jmx_port` int(16) NOT NULL DEFAULT '-1' COMMENT 'Jmx端口',
|
|
||||||
`url` varchar(1024) NOT NULL DEFAULT '' COMMENT 'URL信息',
|
|
||||||
`leader_url` varchar(1024) NOT NULL DEFAULT '' COMMENT 'leaderURL信息',
|
|
||||||
`leader` int(16) NOT NULL DEFAULT '0' COMMENT '状态: 1是leader,0不是leader',
|
|
||||||
`worker_id` varchar(128) NOT NULL COMMENT 'worker地址',
|
|
||||||
`create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
|
|
||||||
`update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
|
|
||||||
PRIMARY KEY (`id`),
|
|
||||||
  UNIQUE KEY `uniq_cluster_id_member_id` (`connect_cluster_id`,`member_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='worker信息表';


DROP TABLE IF EXISTS `ks_kc_worker_connector`;
CREATE TABLE `ks_kc_worker_connector` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
  `kafka_cluster_phy_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'Kafka集群ID',
  `connect_cluster_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'Connect集群ID',
  `connector_name` varchar(512) NOT NULL DEFAULT '' COMMENT 'Connector名称',
  `worker_member_id` varchar(256) NOT NULL DEFAULT '',
  `task_id` int(16) NOT NULL DEFAULT '-1' COMMENT 'Task的ID',
  `state` varchar(128) DEFAULT NULL COMMENT '任务状态',
  `worker_id` varchar(128) DEFAULT NULL COMMENT 'worker信息',
  `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_relation` (`connect_cluster_id`,`connector_name`,`task_id`,`worker_member_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='Worker和Connector关系表';


INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_CONNECTOR_FAILED_TASK_COUNT', '{\"value\" : 1}', 'connector失败状态的任务数量', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_CONNECTOR_UNASSIGNED_TASK_COUNT', '{\"value\" : 1}', 'connector未被分配的任务数量', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_CONNECT_CLUSTER_TASK_STARTUP_FAILURE_PERCENTAGE', '{\"value\" : 0.05}', 'Connect集群任务启动失败概率', 'admin');
```
---
### Upgrading to `v3.1.0`

```sql
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_ZK_BRAIN_SPLIT', '{ \"value\": 1} ', 'ZK 脑裂', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_ZK_OUTSTANDING_REQUESTS', '{ \"amount\": 100, \"ratio\":0.8} ', 'ZK Outstanding 请求堆积数', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_ZK_WATCH_COUNT', '{ \"amount\": 100000, \"ratio\": 0.8 } ', 'ZK WatchCount 数', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_ZK_ALIVE_CONNECTIONS', '{ \"amount\": 10000, \"ratio\": 0.8 } ', 'ZK 连接数', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_ZK_APPROXIMATE_DATA_SIZE', '{ \"amount\": 524288000, \"ratio\": 0.8 } ', 'ZK 数据大小(Byte)', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_ZK_SENT_RATE', '{ \"amount\": 500000, \"ratio\": 0.8 } ', 'ZK 发包数', 'admin');
```
### Upgrading to `v3.0.1`

**ES index template**

```bash
# A new index template, ks_kafka_zookeeper_metric, is added.
# It can be created by re-running the bin/init_es_template.sh script.

# Template content:
PUT _template/ks_kafka_zookeeper_metric
{
  "order" : 10,
  "index_patterns" : [
    "ks_kafka_zookeeper_metric*"
  ],
  "settings" : {
    "index" : {
      "number_of_shards" : "10"
    }
  },
  "mappings" : {
    "properties" : {
      "routingValue" : {
        "type" : "text",
        "fields" : {
          "keyword" : {
            "ignore_above" : 256,
            "type" : "keyword"
          }
        }
      },
      "clusterPhyId" : {
        "type" : "long"
      },
      "metrics" : {
        "properties" : {
          "AvgRequestLatency" : {
            "type" : "double"
          },
          "MinRequestLatency" : {
            "type" : "double"
          },
          "MaxRequestLatency" : {
            "type" : "double"
          },
          "OutstandingRequests" : {
            "type" : "double"
          },
          "NodeCount" : {
            "type" : "double"
          },
          "WatchCount" : {
            "type" : "double"
          },
          "NumAliveConnections" : {
            "type" : "double"
          },
          "PacketsReceived" : {
            "type" : "double"
          },
          "PacketsSent" : {
            "type" : "double"
          },
          "EphemeralsCount" : {
            "type" : "double"
          },
          "ApproximateDataSize" : {
            "type" : "double"
          },
          "OpenFileDescriptorCount" : {
            "type" : "double"
          },
          "MaxFileDescriptorCount" : {
            "type" : "double"
          }
        }
      },
      "key" : {
        "type" : "text",
        "fields" : {
          "keyword" : {
            "ignore_above" : 256,
            "type" : "keyword"
          }
        }
      },
      "timestamp" : {
        "format" : "yyyy-MM-dd HH:mm:ss Z||yyyy-MM-dd HH:mm:ss||yyyy-MM-dd HH:mm:ss.SSS Z||yyyy-MM-dd HH:mm:ss.SSS||yyyy-MM-dd HH:mm:ss,SSS||yyyy/MM/dd HH:mm:ss||yyyy-MM-dd HH:mm:ss,SSS Z||yyyy/MM/dd HH:mm:ss,SSS Z||epoch_millis",
        "type" : "date"
      }
    }
  },
  "aliases" : { }
}
```
**SQL changes**

```sql
DROP TABLE IF EXISTS `ks_km_zookeeper`;
CREATE TABLE `ks_km_zookeeper` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
  `cluster_phy_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT '物理集群ID',
  `host` varchar(128) NOT NULL DEFAULT '' COMMENT 'zookeeper主机名',
  `port` int(16) NOT NULL DEFAULT '-1' COMMENT 'zookeeper端口',
  `role` varchar(16) NOT NULL DEFAULT '' COMMENT '角色, leader follower observer',
  `version` varchar(128) NOT NULL DEFAULT '' COMMENT 'zookeeper版本',
  `status` int(16) NOT NULL DEFAULT '0' COMMENT '状态: 1存活,0未存活,11存活但是4字命令使用不了',
  `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_cluster_phy_id_host_port` (`cluster_phy_id`,`host`, `port`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='Zookeeper信息表';


DROP TABLE IF EXISTS `ks_km_group`;
CREATE TABLE `ks_km_group` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
  `cluster_phy_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT '集群id',
  `name` varchar(192) COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT 'Group名称',
  `member_count` int(11) unsigned NOT NULL DEFAULT '0' COMMENT '成员数',
  `topic_members` text CHARACTER SET utf8 COMMENT 'group消费的topic列表',
  `partition_assignor` varchar(255) CHARACTER SET utf8 NOT NULL COMMENT '分配策略',
  `coordinator_id` int(11) NOT NULL COMMENT 'group协调器brokerId',
  `type` int(11) NOT NULL COMMENT 'group类型 0:consumer 1:connector',
  `state` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '' COMMENT '状态',
  `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_cluster_phy_id_name` (`cluster_phy_id`,`name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='Group信息表';
```
### Upgrading to `v3.0.0`

**SQL changes**

```sql
ALTER TABLE `ks_km_physical_cluster`
    ADD COLUMN `zk_properties` TEXT NULL COMMENT 'ZK配置' AFTER `jmx_properties`;
```

---
### Upgrading to `v3.0.0-beta.2`

**Configuration changes**

---

### Upgrading to `v3.0.0-beta.1`

**SQL changes**

---

### Upgrading from `2.x` to `v3.0.0-beta.0`

**Upgrade steps:**
# FAQ

- [FAQ](#faq)
- [1. Which Kafka versions are supported?](#1-which-kafka-versions-are-supported)
- [2. What are the differences between the 2.x and 3.0 versions?](#2-what-are-the-differences-between-the-2x-and-30-versions)
- [3. Why is there no data for traffic metrics on the pages?](#3-why-is-there-no-data-for-traffic-metrics-on-the-pages)
- [4. How do I fix `Jmx` connection failures?](#4-how-do-i-fix-jmx-connection-failures)
- [5. Is there API documentation?](#5-is-there-api-documentation)
- [6. Why does a deleted Topic reappear after a while?](#6-why-does-a-deleted-topic-reappear-after-a-while)
- [7. How do I call the API without logging in?](#7-how-do-i-call-the-api-without-logging-in)
- [8. Specified key was too long; max key length is 767 bytes](#8-specified-key-was-too-long-max-key-length-is-767-bytes)
- [9. ESIndexNotFoundException errors](#9-esindexnotfoundexception-errors)
- [10. km-console build fails](#10-km-console-build-fails)
- [11. Why does `npm run start` under `km-console` not show the build and hot-reload process? How do I start a single app?](#11-why-does-npm-run-start-under-km-console-not-show-the-build-and-hot-reload-process-how-do-i-start-a-single-app)
- [12. Permission recognition failures](#12-permission-recognition-failures)
- [13. Connecting to a Kerberos-enabled Kafka cluster](#13-connecting-to-a-kerberos-enabled-kafka-cluster)
- [14. LDAP integration configuration](#14-ldap-integration-configuration)
- [15. Notes on using Testcontainers in tests](#15-notes-on-using-testcontainers-in-tests)
- [16. What to do when the JMX connection fails](#16-what-to-do-when-the-jmx-connection-fails)
- [17. No data on the ZooKeeper monitoring pages](#17-no-data-on-the-zookeeper-monitoring-pages)
- [18. Startup fails with NoClassDefFoundError](#18-startup-fails-with-noclassdeffounderror)
- [19. Metrics are not displayed when deployed with Elasticsearch 8.0+](#19-metrics-are-not-displayed-when-deployed-with-elasticsearch-80)
## 1. Which Kafka versions are supported?

- Kafka versions 0.10 and above are supported;
- Kafka clusters running in both ZK and Raft mode are supported;
## 2. What are the differences between the 2.x and 3.0 versions?

**A completely new design philosophy**
## 3. Why is there no data for traffic metrics on the pages?

- 1. `Broker JMX` is not enabled correctly
## 4. How do I fix `Jmx` connection failures?

- See the [Jmx connection configuration & troubleshooting](https://doc.knowstreaming.com/product/9-attachment#91jmx-%E8%BF%9E%E6%8E%A5%E5%A4%B1%E8%B4%A5%E9%97%AE%E9%A2%98%E8%A7%A3%E5%86%B3) guide.
## 5. Is there API documentation?

`KnowStreaming` uses Swagger for its API documentation. After starting the KnowStreaming service, it is available at:

Swagger-API address: [http://IP:PORT/swagger-ui.html#/](http://IP:PORT/swagger-ui.html#/)
## 6. Why does a deleted Topic reappear after a while?

**Cause:**
## 7. How do I call the API without logging in?

Step 1: When calling the API, add the following information to the request header:

One more caveat: a bypassed user can only call the APIs they are authorized for. For example, an ordinary user can only call the ordinary APIs and cannot call the operator-only APIs.
## 8. Specified key was too long; max key length is 767 bytes

**Cause:** different versions of the InnoDB engine use different defaults for the 'innodb_large_prefix' parameter: it defaults to OFF in MySQL 5.6 and ON in 5.7.

**Solutions:**

- Change the character set to latin1 (one character = one byte).
- Enable 'innodb_large_prefix', change the default file format 'innodb_file_format' to Barracuda, and set row_format=dynamic.
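The byte limit interacts directly with the character set: MySQL's `utf8` uses up to 3 bytes per character, so the longest indexable VARCHAR under the 767-byte cap is ⌊767/3⌋ = 255 characters, which is why switching to latin1 (1 byte per character) also avoids the error. A quick sanity check of that arithmetic:

```shell
# Longest indexable VARCHAR under the 767-byte InnoDB key limit, per charset:
echo "utf8   (3 bytes/char): $((767 / 3)) characters"   # 255
echo "latin1 (1 byte/char):  $((767 / 1)) characters"   # 767
```

This is also why a `varchar(255)` utf8 column (765 bytes) indexes fine while `varchar(256)` (768 bytes) does not.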

## 9. ESIndexNotFoundException errors

**Cause:** the ES index templates have not been created.

**Solution:** run the init_es_template.sh script to create the ES index templates.
## 10. km-console build fails

First, **make sure you are using the latest version**; see [Tags](https://github.com/didi/KnowStreaming/tags) for the list of versions. If you are not on the latest version, upgrade first and check whether the problem persists.
## 11. Why does `npm run start` under `km-console` not show the build and hot-reload process? How do I start a single app?

Run `npm run start` inside the specific app: for example, `cd packages/layout-clusters-fe`, then run `npm run start`.

After the app starts, view it through the base app (the base app, layout-clusters-fe, must be started as well).
## 12. Permission recognition failures

1. Log in to KnowStreaming with the admin account, go to System Management - User Management - Role Management - Add Role, and check whether the page renders correctly.

<img src="http://img-ys011.didistatic.com/static/dc2img/do1_gwGfjN9N92UxzHU8dfzr" width = "400" >

+ Cause: the database encoding does not match the scripts we provide, so the data in the database became garbled, causing the permission recognition failure.
+ Solution: clear the database, change its character set to utf8, and then re-run the [dml-logi.sql](https://github.com/didi/KnowStreaming/blob/master/km-dist/init/sql/dml-logi.sql) script to re-import the data.
## 13. Connecting to a Kerberos-enabled Kafka cluster

1. Install a Kerberos client on the machine where KnowStreaming is deployed;
2. Replace the /etc/krb5.conf configuration file;
3. Copy the Kafka keytab to a directory on that machine;
4. Configure authentication when adding the cluster, filling in the values according to your actual setup:

```json
{
  "security.protocol": "SASL_PLAINTEXT",
  "sasl.mechanism": "GSSAPI",
  "sasl.jaas.config": "com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true keyTab=\"/etc/keytab/kafka.keytab\" storeKey=true useTicketCache=false principal=\"kafka/kafka@TEST.COM\";",
  "sasl.kerberos.service.name": "kafka"
}
```
## 14. LDAP integration configuration

```yaml
# Add the following configuration to application.yml, adjusting the values to your environment
account:
  ldap:
    url: ldap://127.0.0.1:8080/
    basedn: DC=senz,DC=local
    factory: com.sun.jndi.ldap.LdapCtxFactory
    filter: sAMAccountName
    security:
      authentication: simple
      principal: CN=search,DC=senz,DC=local
      credentials: xxxxxxx
    auth-user-registration: false # whether to register the user in MySQL; defaults to false
    auth-user-registration-role: 1677 # 1677 is the super-admin role id; to grant an ordinary role by default, create one in KS

# Modify the following configuration in application.yml
spring:
  logi-security:
    login-extend-bean-name: ksLdapLoginService # use the LDAP login service
```
## 15. Notes on using Testcontainers in tests

1. A Docker runtime environment is required: [Testcontainers supported environments](https://www.testcontainers.org/supported_docker_environment/)
2. If Docker is not available locally, you can use [remote Docker access](https://docs.docker.com/config/daemon/remote-access/); see the [Testcontainers configuration notes](https://www.testcontainers.org/features/configuration/#customizing-docker-host-detection)
## 16. What to do when the JMX connection fails

See: [Fixing JMX connection failures](../dev_guide/%E8%A7%A3%E5%86%B3%E8%BF%9E%E6%8E%A5JMX%E5%A4%B1%E8%B4%A5.md)
## 17. No data on the ZooKeeper monitoring pages

**Symptom:**

The ZooKeeper cluster itself is healthy, but every monitoring metric on the KS ZooKeeper page is empty, and the `KnowStreaming` log_error.log reports:

```vim
[MetricCollect-Shard-0-8-thread-1] ERROR class=c.x.k.s.k.c.s.h.c.z.HealthCheckZookeeperService||method=checkWatchCount||param=ZookeeperParam(zkAddressList=[Tuple{v1=192.168.xxx.xx, v2=2181}, Tuple{v1=192.168.xxx.xx, v2=2181}, Tuple{v1=192.168.xxx.xx, v2=2181}], zkConfig=null)||config=HealthAmountRatioConfig(amount=100000, ratio=0.8)||result=Result{message='mntr is not executed because it is not in the whitelist.
', code=8031, data=null}||errMsg=get metrics failed, may be collect failed or zk mntr command not in whitelist.
2023-04-23 14:39:07.234 [MetricCollect-Shard-0-8-thread-1] ERROR class=c.x.k.s.k.c.s.h.checker.AbstractHeal
```

The cause is then clear: ZooKeeper's four-letter-word commands must be enabled. Add the following to the `zoo.cfg` configuration file:

```
4lw.commands.whitelist=mntr,stat,ruok,envi,srvr,cons,conf,wchs,wchp
```

We recommend whitelisting at least the commands above; alternatively, you can enable all of them:

```
4lw.commands.whitelist=*
```
## 18. Startup fails with NoClassDefFoundError

**Symptom:**

```log
# Startup fails with: nested exception is java.lang.NoClassDefFoundError: Could not initialize class com.didiglobal.logi.job.core.WorkerSingleton$Singleton

2023-08-11 22:54:29.842 [main] ERROR class=org.springframework.boot.SpringApplication||Application run failed
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'quartzScheduler' defined in class path resource [com/didiglobal/logi/job/LogIJobAutoConfiguration.class]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [com.didiglobal.logi.job.core.Scheduler]: Factory method 'quartzScheduler' threw exception; nested exception is java.lang.NoClassDefFoundError: Could not initialize class com.didiglobal.logi.job.core.WorkerSingleton$Singleton
	at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:657)
```

**Cause:**

1. `Logi-Job`, which `KnowStreaming` depends on, fails to initialize `WorkerSingleton$Singleton`.
2. During its initialization, `WorkerSingleton$Singleton` collects some operating-system information; if that lookup throws an exception, the initialization of `WorkerSingleton$Singleton` fails.

**Temporary recommendation:**

The timeline for a fix in `Logi-Job` is hard to commit to. In our earlier testing, `KnowStreaming` generally runs fine on `Windows`, `Mac`, and `CentOS`.

So, if possible, deploy `KnowStreaming` on one of those systems for now.

If startup also fails on `Windows`, `Mac`, or `CentOS`, retry 2-3 times to see whether it keeps failing, or try a different machine.
## 19. Metrics are not displayed when deployed with Elasticsearch 8.0+

**Symptom**

```log
Warnings: [299 Elasticsearch-8.9.1-a813d015ef1826148d9d389bd1c0d781c6e349f0 "Legacy index templates are deprecated in favor of composable templates."]
```

**Cause**

1. ES 8.0 changed the template model relative to ES 7.0; the /_index_template endpoint is now recommended for managing templates;
2. The ES Java client behaves oddly against this version: reads come back empty;

**Solution**

Replace every `/_template` in the `es_template_create.sh` script with `/_index_template`, then run the script.
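The substitution can be done with a single `sed` invocation (GNU sed assumed; the sample `curl` line below is illustrative, not copied from the actual script). Since `/_index_template` does not itself contain the substring `/_template`, the replacement is also safe to re-run:

```shell
# Demonstrate the endpoint rewrite on a sample line; to patch the script in
# place, run: sed -i 's#/_template#/_index_template#g' es_template_create.sh
echo 'curl -s -XPUT "$es_url/_template/ks_kafka_zookeeper_metric"' \
  | sed 's#/_template#/_index_template#g'
# prints: curl -s -XPUT "$es_url/_index_template/ks_kafka_zookeeper_metric"
```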

```diff
@@ -5,13 +5,13 @@
     <modelVersion>4.0.0</modelVersion>
     <groupId>com.xiaojukeji.kafka</groupId>
     <artifactId>km-biz</artifactId>
-    <version>${revision}</version>
+    <version>${km.revision}</version>
     <packaging>jar</packaging>

     <parent>
         <artifactId>km</artifactId>
         <groupId>com.xiaojukeji.kafka</groupId>
-        <version>${revision}</version>
+        <version>${km.revision}</version>
     </parent>

     <properties>
@@ -62,6 +62,10 @@
         <groupId>commons-lang</groupId>
         <artifactId>commons-lang</artifactId>
     </dependency>
+    <dependency>
+        <groupId>junit</groupId>
+        <artifactId>junit</artifactId>
+    </dependency>

     <dependency>
         <groupId>commons-codec</groupId>
```
```diff
@@ -1,15 +0,0 @@
-package com.xiaojukeji.know.streaming.km.biz.cluster;
-
-import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterConnectorsOverviewDTO;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
-import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.connect.ConnectStateVO;
-import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.connector.ClusterConnectorOverviewVO;
-
-/**
- * Kafka cluster Connector overview
- */
-public interface ClusterConnectorsManager {
-    PaginationResult<ClusterConnectorOverviewVO> getClusterConnectorsOverview(Long clusterPhyId, ClusterConnectorsOverviewDTO dto);
-
-    ConnectStateVO getClusterConnectorsState(Long clusterPhyId);
-}
```
```diff
@@ -1,19 +0,0 @@
-package com.xiaojukeji.know.streaming.km.biz.cluster;
-
-import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterZookeepersOverviewDTO;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
-import com.xiaojukeji.know.streaming.km.common.bean.vo.zookeeper.ClusterZookeepersOverviewVO;
-import com.xiaojukeji.know.streaming.km.common.bean.vo.zookeeper.ClusterZookeepersStateVO;
-import com.xiaojukeji.know.streaming.km.common.bean.vo.zookeeper.ZnodeVO;
-
-/**
- * Overall state of multiple clusters
- */
-public interface ClusterZookeepersManager {
-    Result<ClusterZookeepersStateVO> getClusterPhyZookeepersState(Long clusterPhyId);
-
-    PaginationResult<ClusterZookeepersOverviewVO> getClusterPhyZookeepersOverview(Long clusterPhyId, ClusterZookeepersOverviewDTO dto);
-
-    Result<ZnodeVO> getZnodeVO(Long clusterPhyId, String path);
-}
```
```diff
@@ -1,15 +1,10 @@
 package com.xiaojukeji.know.streaming.km.biz.cluster;

-import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhysHealthState;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhysState;
 import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.MultiClusterDashboardDTO;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
-import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.ClusterPhyBaseVO;
 import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.ClusterPhyDashboardVO;

-import java.util.List;
-
 /**
  * Overall state of multiple clusters
  */
@@ -20,14 +15,10 @@ public interface MultiClusterPhyManager {
      */
     ClusterPhysState getClusterPhysState();

-    ClusterPhysHealthState getClusterPhysHealthState();
-
     /**
      * Query the multi-cluster dashboard
      * @param dto pagination info
      * @return
      */
     PaginationResult<ClusterPhyDashboardVO> getClusterPhysDashboard(MultiClusterDashboardDTO dto);
-
-    Result<List<ClusterPhyBaseVO>> getClusterPhysBasic();
 }
```
@@ -6,8 +6,6 @@ import com.xiaojukeji.know.streaming.km.biz.cluster.ClusterBrokersManager;
|
|||||||
import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterBrokersOverviewDTO;
|
import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterBrokersOverviewDTO;
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationSortDTO;
|
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationSortDTO;
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.entity.broker.Broker;
|
import com.xiaojukeji.know.streaming.km.common.bean.entity.broker.Broker;
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
|
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.entity.config.JmxConfig;
|
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.entity.kafkacontroller.KafkaController;
|
import com.xiaojukeji.know.streaming.km.common.bean.entity.kafkacontroller.KafkaController;
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.BrokerMetrics;
|
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.BrokerMetrics;
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
|
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
|
||||||
@@ -18,8 +16,6 @@ import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.res.ClusterBroker
|
|||||||
import com.xiaojukeji.know.streaming.km.common.bean.vo.kafkacontroller.KafkaControllerVO;
|
import com.xiaojukeji.know.streaming.km.common.bean.vo.kafkacontroller.KafkaControllerVO;
|
||||||
import com.xiaojukeji.know.streaming.km.common.constant.KafkaConstant;
|
import com.xiaojukeji.know.streaming.km.common.constant.KafkaConstant;
|
||||||
import com.xiaojukeji.know.streaming.km.common.enums.SortTypeEnum;
|
import com.xiaojukeji.know.streaming.km.common.enums.SortTypeEnum;
|
||||||
import com.xiaojukeji.know.streaming.km.common.enums.cluster.ClusterRunStateEnum;
|
|
||||||
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
|
|
||||||
import com.xiaojukeji.know.streaming.km.common.utils.PaginationMetricsUtil;
|
import com.xiaojukeji.know.streaming.km.common.utils.PaginationMetricsUtil;
|
||||||
import com.xiaojukeji.know.streaming.km.common.utils.PaginationUtil;
|
import com.xiaojukeji.know.streaming.km.common.utils.PaginationUtil;
|
||||||
import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
|
import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
|
||||||
@@ -28,8 +24,6 @@ import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerMetricService;
|
|||||||
import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerService;
|
import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerService;
|
||||||
import com.xiaojukeji.know.streaming.km.core.service.kafkacontroller.KafkaControllerService;
|
import com.xiaojukeji.know.streaming.km.core.service.kafkacontroller.KafkaControllerService;
|
||||||
import com.xiaojukeji.know.streaming.km.core.service.topic.TopicService;
|
import com.xiaojukeji.know.streaming.km.core.service.topic.TopicService;
|
||||||
import com.xiaojukeji.know.streaming.km.persistence.cache.LoadedClusterPhyCache;
|
|
||||||
import com.xiaojukeji.know.streaming.km.persistence.kafka.KafkaJMXClient;
|
|
||||||
import org.springframework.beans.factory.annotation.Autowired;
|
import org.springframework.beans.factory.annotation.Autowired;
|
||||||
import org.springframework.stereotype.Service;
|
import org.springframework.stereotype.Service;
|
||||||
|
|
||||||
@@ -57,9 +51,6 @@ public class ClusterBrokersManagerImpl implements ClusterBrokersManager {
|
|||||||
@Autowired
|
@Autowired
|
||||||
private KafkaControllerService kafkaControllerService;
|
private KafkaControllerService kafkaControllerService;
|
||||||
|
|
||||||
@Autowired
|
|
||||||
private KafkaJMXClient kafkaJMXClient;
|
|
||||||
|
|
||||||
@Override
|
@Override
|
||||||
public PaginationResult<ClusterBrokersOverviewVO> getClusterPhyBrokersOverview(Long clusterPhyId, ClusterBrokersOverviewDTO dto) {
|
public PaginationResult<ClusterBrokersOverviewVO> getClusterPhyBrokersOverview(Long clusterPhyId, ClusterBrokersOverviewDTO dto) {
|
||||||
// 获取集群Broker列表
|
// 获取集群Broker列表
|
||||||
@@ -84,24 +75,15 @@ public class ClusterBrokersManagerImpl implements ClusterBrokersManager {
|
|||||||
//获取controller信息
|
//获取controller信息
|
||||||
KafkaController kafkaController = kafkaControllerService.getKafkaControllerFromDB(clusterPhyId);
|
KafkaController kafkaController = kafkaControllerService.getKafkaControllerFromDB(clusterPhyId);
|
||||||
|
|
||||||
//获取jmx状态信息
|
|
||||||
Map<Integer, Boolean> jmxConnectedMap = new HashMap<>();
|
|
||||||
brokerList.forEach(elem -> jmxConnectedMap.put(elem.getBrokerId(), kafkaJMXClient.getClientWithCheck(clusterPhyId, elem.getBrokerId()) != null));
|
|
||||||
|
|
||||||
|
|
||||||
ClusterPhy clusterPhy = LoadedClusterPhyCache.getByPhyId(clusterPhyId);
|
|
||||||
|
|
||||||
// 格式转换
|
// 格式转换
|
||||||
return PaginationResult.buildSuc(
|
return PaginationResult.buildSuc(
|
||||||
this.convert2ClusterBrokersOverviewVOList(
|
this.convert2ClusterBrokersOverviewVOList(
|
||||||
clusterPhy,
|
|
||||||
paginationResult.getData().getBizData(),
|
paginationResult.getData().getBizData(),
|
||||||
brokerList,
|
brokerList,
|
||||||
metricsResult.getData(),
|
metricsResult.getData(),
|
||||||
groupTopic,
|
groupTopic,
|
||||||
transactionTopic,
|
transactionTopic,
|
||||||
kafkaController,
|
kafkaController
|
||||||
jmxConnectedMap
|
|
||||||
),
|
),
|
||||||
paginationResult
|
paginationResult
|
||||||
);
|
);
|
||||||
@@ -140,8 +122,7 @@ public class ClusterBrokersManagerImpl implements ClusterBrokersManager {
|
|||||||
clusterBrokersStateVO.setKafkaControllerAlive(true);
|
clusterBrokersStateVO.setKafkaControllerAlive(true);
|
||||||
}
|
}
|
||||||
|
|
||||||
clusterBrokersStateVO.setConfigSimilar(brokerConfigService.countBrokerConfigDiffsFromDB(clusterPhyId, KafkaConstant.CONFIG_SIMILAR_IGNORED_CONFIG_KEY_LIST) <= 0
|
clusterBrokersStateVO.setConfigSimilar(brokerConfigService.countBrokerConfigDiffsFromDB(clusterPhyId, Arrays.asList("broker.id", "listeners", "name", "value")) <= 0);
|
||||||
);
|
|
||||||
|
|
||||||
return clusterBrokersStateVO;
|
return clusterBrokersStateVO;
|
||||||
}
|
}
|
||||||
@@ -179,36 +160,27 @@ public class ClusterBrokersManagerImpl implements ClusterBrokersManager {
         );
     }
 
-    private List<ClusterBrokersOverviewVO> convert2ClusterBrokersOverviewVOList(ClusterPhy clusterPhy,
-                                                                                List<Integer> pagedBrokerIdList,
+    private List<ClusterBrokersOverviewVO> convert2ClusterBrokersOverviewVOList(List<Integer> pagedBrokerIdList,
                                                                                 List<Broker> brokerList,
                                                                                 List<BrokerMetrics> metricsList,
                                                                                 Topic groupTopic,
                                                                                 Topic transactionTopic,
-                                                                                KafkaController kafkaController,
-                                                                                Map<Integer, Boolean> jmxConnectedMap) {
+                                                                                KafkaController kafkaController) {
         Map<Integer, BrokerMetrics> metricsMap = metricsList == null ? new HashMap<>() : metricsList.stream().collect(Collectors.toMap(BrokerMetrics::getBrokerId, Function.identity()));
 
         Map<Integer, Broker> brokerMap = brokerList == null ? new HashMap<>() : brokerList.stream().collect(Collectors.toMap(Broker::getBrokerId, Function.identity()));
 
         List<ClusterBrokersOverviewVO> voList = new ArrayList<>(pagedBrokerIdList.size());
         for (Integer brokerId : pagedBrokerIdList) {
             Broker broker = brokerMap.get(brokerId);
             BrokerMetrics brokerMetrics = metricsMap.get(brokerId);
-            Boolean jmxConnected = jmxConnectedMap.get(brokerId);
-            voList.add(this.convert2ClusterBrokersOverviewVO(brokerId, broker, brokerMetrics, groupTopic, transactionTopic, kafkaController, jmxConnected));
-        }
-
-        // Supplement the JMX port info for non-ZooKeeper mode
-        if (!clusterPhy.getRunState().equals(ClusterRunStateEnum.RUN_ZK.getRunState())) {
-            JmxConfig jmxConfig = ConvertUtil.str2ObjByJson(clusterPhy.getJmxProperties(), JmxConfig.class);
-            voList.forEach(elem -> elem.setJmxPort(jmxConfig.getFinallyJmxPort(String.valueOf(elem.getBrokerId()))));
+            voList.add(this.convert2ClusterBrokersOverviewVO(brokerId, broker, brokerMetrics, groupTopic, transactionTopic, kafkaController));
         }
 
         return voList;
     }
 
-    private ClusterBrokersOverviewVO convert2ClusterBrokersOverviewVO(Integer brokerId, Broker broker, BrokerMetrics brokerMetrics, Topic groupTopic, Topic transactionTopic, KafkaController kafkaController, Boolean jmxConnected) {
+    private ClusterBrokersOverviewVO convert2ClusterBrokersOverviewVO(Integer brokerId, Broker broker, BrokerMetrics brokerMetrics, Topic groupTopic, Topic transactionTopic, KafkaController kafkaController) {
         ClusterBrokersOverviewVO clusterBrokersOverviewVO = new ClusterBrokersOverviewVO();
         clusterBrokersOverviewVO.setBrokerId(brokerId);
         if (broker != null) {
@@ -231,7 +203,6 @@ public class ClusterBrokersManagerImpl implements ClusterBrokersManager {
         }
 
         clusterBrokersOverviewVO.setLatestMetrics(brokerMetrics);
-        clusterBrokersOverviewVO.setJmxConnected(jmxConnected);
 
         return clusterBrokersOverviewVO;
     }
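`convert2ClusterBrokersOverviewVOList` above relies on a null-safe indexing idiom: a possibly-null list is turned into a `Map` keyed by id via `Collectors.toMap`, so each lookup inside the per-broker loop is O(1) instead of a linear scan. A self-contained sketch of that idiom (the `Broker` record here is a simplified stand-in for the real entity):

```java
import java.util.*;
import java.util.function.Function;
import java.util.stream.Collectors;

// Null-safe indexing: turn a possibly-null list into a Map keyed by id,
// so per-id lookups in a loop are O(1).
public class IndexSketch {
    record Broker(int brokerId, String host) {}

    static Map<Integer, Broker> indexById(List<Broker> brokers) {
        return brokers == null
                ? new HashMap<>()
                : brokers.stream().collect(Collectors.toMap(Broker::brokerId, Function.identity()));
    }

    public static void main(String[] args) {
        Map<Integer, Broker> byId = indexById(List.of(new Broker(1, "a"), new Broker(2, "b")));
        System.out.println(byId.get(2).host());        // prints "b"
        System.out.println(indexById(null).isEmpty()); // prints "true"
    }
}
```

Note that the two-argument `Collectors.toMap` throws `IllegalStateException` on duplicate keys, which is acceptable here because broker ids are unique within a cluster.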
@@ -1,152 +0,0 @@
-package com.xiaojukeji.know.streaming.km.biz.cluster.impl;
-
-import com.didiglobal.logi.log.ILog;
-import com.didiglobal.logi.log.LogFactory;
-import com.xiaojukeji.know.streaming.km.biz.cluster.ClusterConnectorsManager;
-import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterConnectorsOverviewDTO;
-import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.ClusterConnectorDTO;
-import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.MetricDTO;
-import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.connect.MetricsConnectorsDTO;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.ConnectCluster;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.ConnectWorker;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.WorkerConnector;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.connect.ConnectorMetrics;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
-import com.xiaojukeji.know.streaming.km.common.bean.po.connect.ConnectorPO;
-import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.connect.ConnectStateVO;
-import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.connector.ClusterConnectorOverviewVO;
-import com.xiaojukeji.know.streaming.km.common.bean.vo.metrics.line.MetricMultiLinesVO;
-import com.xiaojukeji.know.streaming.km.common.converter.ConnectConverter;
-import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
-import com.xiaojukeji.know.streaming.km.common.utils.PaginationMetricsUtil;
-import com.xiaojukeji.know.streaming.km.common.utils.PaginationUtil;
-import com.xiaojukeji.know.streaming.km.core.service.connect.cluster.ConnectClusterService;
-import com.xiaojukeji.know.streaming.km.core.service.connect.connector.ConnectorMetricService;
-import com.xiaojukeji.know.streaming.km.core.service.connect.connector.ConnectorService;
-import com.xiaojukeji.know.streaming.km.core.service.connect.worker.WorkerConnectorService;
-import com.xiaojukeji.know.streaming.km.core.service.connect.worker.WorkerService;
-import org.apache.kafka.connect.runtime.AbstractStatus;
-import org.springframework.beans.factory.annotation.Autowired;
-import org.springframework.stereotype.Service;
-
-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.List;
-import java.util.stream.Collectors;
-
-
-@Service
-public class ClusterConnectorsManagerImpl implements ClusterConnectorsManager {
-    private static final ILog LOGGER = LogFactory.getLog(ClusterConnectorsManagerImpl.class);
-
-    @Autowired
-    private ConnectorService connectorService;
-
-    @Autowired
-    private ConnectClusterService connectClusterService;
-
-    @Autowired
-    private ConnectorMetricService connectorMetricService;
-
-    @Autowired
-    private WorkerService workerService;
-
-    @Autowired
-    private WorkerConnectorService workerConnectorService;
-
-    @Override
-    public PaginationResult<ClusterConnectorOverviewVO> getClusterConnectorsOverview(Long clusterPhyId, ClusterConnectorsOverviewDTO dto) {
-        List<ConnectCluster> clusterList = connectClusterService.listByKafkaCluster(clusterPhyId);
-
-        List<ConnectorPO> poList = connectorService.listByKafkaClusterIdFromDB(clusterPhyId);
-
-        // Query the latest (real-time) metrics
-        Result<List<ConnectorMetrics>> latestMetricsResult = connectorMetricService.getLatestMetricsFromES(
-                clusterPhyId,
-                poList.stream().map(elem -> new ClusterConnectorDTO(elem.getConnectClusterId(), elem.getConnectorName())).collect(Collectors.toList()),
-                dto.getLatestMetricNames()
-        );
-
-        if (latestMetricsResult.failed()) {
-            LOGGER.error("method=getClusterConnectorsOverview||clusterPhyId={}||result={}||errMsg=get latest metric failed", clusterPhyId, latestMetricsResult);
-            return PaginationResult.buildFailure(latestMetricsResult, dto);
-        }
-
-        // Convert to VO
-        List<ClusterConnectorOverviewVO> voList = ConnectConverter.convert2ClusterConnectorOverviewVOList(clusterList, poList, latestMetricsResult.getData());
-
-        // Apply pagination
-        PaginationResult<ClusterConnectorOverviewVO> voPaginationResult = this.pagingConnectorInLocal(voList, dto);
-        if (voPaginationResult.failed()) {
-            LOGGER.error("method=getClusterConnectorsOverview||clusterPhyId={}||result={}||errMsg=pagination in local failed", clusterPhyId, voPaginationResult);
-
-            return PaginationResult.buildFailure(voPaginationResult, dto);
-        }
-
-        // Query historical metrics
-        Result<List<MetricMultiLinesVO>> lineMetricsResult = connectorMetricService.listConnectClusterMetricsFromES(
-                clusterPhyId,
-                this.buildMetricsConnectorsDTO(
-                        voPaginationResult.getData().getBizData().stream().map(elem -> new ClusterConnectorDTO(elem.getConnectClusterId(), elem.getConnectorName())).collect(Collectors.toList()),
-                        dto.getMetricLines()
-                )
-        );
-
-        return PaginationResult.buildSuc(
-                ConnectConverter.supplyData2ClusterConnectorOverviewVOList(
-                        voPaginationResult.getData().getBizData(),
-                        lineMetricsResult.getData()
-                ),
-                voPaginationResult
-        );
-    }
-
-    @Override
-    public ConnectStateVO getClusterConnectorsState(Long clusterPhyId) {
-        // Get the Connect cluster id list
-        List<ConnectCluster> connectClusterList = connectClusterService.listByKafkaCluster(clusterPhyId);
-        List<ConnectorPO> connectorPOList = connectorService.listByKafkaClusterIdFromDB(clusterPhyId);
-        List<WorkerConnector> workerConnectorList = workerConnectorService.listByKafkaClusterIdFromDB(clusterPhyId);
-        List<ConnectWorker> connectWorkerList = workerService.listByKafkaClusterIdFromDB(clusterPhyId);
-
-        return convert2ConnectStateVO(connectClusterList, connectorPOList, workerConnectorList, connectWorkerList);
-    }
-
-    /**************************************************** private method ****************************************************/
-
-    private MetricsConnectorsDTO buildMetricsConnectorsDTO(List<ClusterConnectorDTO> connectorDTOList, MetricDTO metricDTO) {
-        MetricsConnectorsDTO dto = ConvertUtil.obj2Obj(metricDTO, MetricsConnectorsDTO.class);
-        dto.setConnectorNameList(connectorDTOList == null ? new ArrayList<>() : connectorDTOList);
-
-        return dto;
-    }
-
-    private ConnectStateVO convert2ConnectStateVO(List<ConnectCluster> connectClusterList, List<ConnectorPO> connectorPOList, List<WorkerConnector> workerConnectorList, List<ConnectWorker> connectWorkerList) {
-        ConnectStateVO connectStateVO = new ConnectStateVO();
-        connectStateVO.setConnectClusterCount(connectClusterList.size());
-        connectStateVO.setTotalConnectorCount(connectorPOList.size());
-        connectStateVO.setAliveConnectorCount(connectorPOList.stream().filter(elem -> elem.getState().equals(AbstractStatus.State.RUNNING.name())).collect(Collectors.toList()).size());
-        connectStateVO.setWorkerCount(connectWorkerList.size());
-        connectStateVO.setTotalTaskCount(workerConnectorList.size());
-        connectStateVO.setAliveTaskCount(workerConnectorList.stream().filter(elem -> elem.getState().equals(AbstractStatus.State.RUNNING.name())).collect(Collectors.toList()).size());
-        return connectStateVO;
-    }
-
-    private PaginationResult<ClusterConnectorOverviewVO> pagingConnectorInLocal(List<ClusterConnectorOverviewVO> connectorVOList, ClusterConnectorsOverviewDTO dto) {
-        // Fuzzy matching
-        connectorVOList = PaginationUtil.pageByFuzzyFilter(connectorVOList, dto.getSearchKeywords(), Arrays.asList("connectorName"));
-
-        // Sorting
-        if (!dto.getLatestMetricNames().isEmpty()) {
-            PaginationMetricsUtil.sortMetrics(connectorVOList, "latestMetrics", dto.getSortMetricNameList(), "connectorName", dto.getSortType());
-        } else {
-            PaginationUtil.pageBySort(connectorVOList, dto.getSortField(), dto.getSortType(), "connectorName", dto.getSortType());
-        }
-
-        // Pagination
-        return PaginationUtil.pageBySubData(connectorVOList, dto);
-    }
-
-}
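The deleted `pagingConnectorInLocal` above paginates entirely in memory: fuzzy-filter by keyword, sort, then slice the requested page. A simplified, self-contained sketch of that flow (names and the `String`-based model are illustrative, not the real `PaginationUtil` API):

```java
import java.util.*;
import java.util.stream.Collectors;

// Hypothetical sketch of the in-memory pagination flow:
// fuzzy-filter by keyword, sort, then slice out the requested page.
public class LocalPagingSketch {
    static List<String> page(List<String> names, String keyword, int pageNo, int pageSize) {
        List<String> filtered = names.stream()
                .filter(n -> keyword == null || n.contains(keyword)) // fuzzy match
                .sorted()                                            // sort
                .collect(Collectors.toList());
        int from = Math.min((pageNo - 1) * pageSize, filtered.size());
        int to = Math.min(from + pageSize, filtered.size());
        return filtered.subList(from, to);                           // paginate
    }

    public static void main(String[] args) {
        List<String> connectors = List.of("mysql-sink", "es-sink", "mysql-source", "hdfs-sink");
        System.out.println(page(connectors, "mysql", 1, 1)); // prints "[mysql-sink]"
    }
}
```

Clamping `from` and `to` to the list size keeps an out-of-range page request from throwing, which is the usual hazard of `subList`-based paging.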
@@ -14,12 +14,10 @@ import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.res.ClusterPhyTop
 import com.xiaojukeji.know.streaming.km.common.bean.vo.metrics.line.MetricMultiLinesVO;
 import com.xiaojukeji.know.streaming.km.common.constant.KafkaConstant;
 import com.xiaojukeji.know.streaming.km.common.converter.TopicVOConverter;
-import com.xiaojukeji.know.streaming.km.common.enums.ha.HaResTypeEnum;
 import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.PaginationMetricsUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.PaginationUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
-import com.xiaojukeji.know.streaming.km.core.service.ha.HaActiveStandbyRelationService;
 import com.xiaojukeji.know.streaming.km.core.service.topic.TopicMetricService;
 import com.xiaojukeji.know.streaming.km.core.service.topic.TopicService;
 import org.springframework.beans.factory.annotation.Autowired;
@@ -40,22 +38,16 @@ public class ClusterTopicsManagerImpl implements ClusterTopicsManager {
     @Autowired
     private TopicMetricService topicMetricService;
 
-    @Autowired
-    private HaActiveStandbyRelationService haActiveStandbyRelationService;
-
     @Override
     public PaginationResult<ClusterPhyTopicsOverviewVO> getClusterPhyTopicsOverview(Long clusterPhyId, ClusterTopicsOverviewDTO dto) {
         // Get all Topic info of the cluster
         List<Topic> topicList = topicService.listTopicsFromDB(clusterPhyId);
 
         // Get the metrics of all the cluster's Topics
-        Map<String, TopicMetrics> metricsMap = topicMetricService.getLatestMetricsFromCache(clusterPhyId);
+        Map<String, TopicMetrics> metricsMap = topicMetricService.getLatestMetricsFromCacheFirst(clusterPhyId);
 
-        // Get HA info
-        Set<String> haTopicNameSet = haActiveStandbyRelationService.listByClusterAndType(clusterPhyId, HaResTypeEnum.MIRROR_TOPIC).stream().map(elem -> elem.getResName()).collect(Collectors.toSet());
-
         // Convert to VO
-        List<ClusterPhyTopicsOverviewVO> voList = TopicVOConverter.convert2ClusterPhyTopicsOverviewVOList(topicList, metricsMap, haTopicNameSet);
+        List<ClusterPhyTopicsOverviewVO> voList = TopicVOConverter.convert2ClusterPhyTopicsOverviewVOList(topicList, metricsMap);
 
         // Apply pagination
         PaginationResult<ClusterPhyTopicsOverviewVO> voPaginationResult = this.pagingTopicInLocal(voList, dto);
@@ -1,138 +0,0 @@
-package com.xiaojukeji.know.streaming.km.biz.cluster.impl;
-
-import com.didiglobal.logi.log.ILog;
-import com.didiglobal.logi.log.LogFactory;
-import com.xiaojukeji.know.streaming.km.biz.cluster.ClusterZookeepersManager;
-import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterZookeepersOverviewDTO;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.ZookeeperMetrics;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.result.ResultStatus;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.Znode;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.ZookeeperInfo;
-import com.xiaojukeji.know.streaming.km.common.bean.vo.zookeeper.ClusterZookeepersOverviewVO;
-import com.xiaojukeji.know.streaming.km.common.bean.vo.zookeeper.ClusterZookeepersStateVO;
-import com.xiaojukeji.know.streaming.km.common.bean.vo.zookeeper.ZnodeVO;
-import com.xiaojukeji.know.streaming.km.common.constant.MsgConstant;
-import com.xiaojukeji.know.streaming.km.common.enums.zookeeper.ZKRoleEnum;
-import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
-import com.xiaojukeji.know.streaming.km.common.utils.PaginationUtil;
-import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
-import com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.ZookeeperMetricVersionItems;
-import com.xiaojukeji.know.streaming.km.core.service.zookeeper.ZnodeService;
-import com.xiaojukeji.know.streaming.km.core.service.zookeeper.ZookeeperMetricService;
-import com.xiaojukeji.know.streaming.km.core.service.zookeeper.ZookeeperService;
-import org.springframework.beans.factory.annotation.Autowired;
-import org.springframework.stereotype.Service;
-import java.util.Arrays;
-import java.util.List;
-
-
-@Service
-public class ClusterZookeepersManagerImpl implements ClusterZookeepersManager {
-    private static final ILog LOGGER = LogFactory.getLog(ClusterZookeepersManagerImpl.class);
-
-    @Autowired
-    private ClusterPhyService clusterPhyService;
-
-    @Autowired
-    private ZookeeperService zookeeperService;
-
-    @Autowired
-    private ZookeeperMetricService zookeeperMetricService;
-
-    @Autowired
-    private ZnodeService znodeService;
-
-    @Override
-    public Result<ClusterZookeepersStateVO> getClusterPhyZookeepersState(Long clusterPhyId) {
-        ClusterPhy clusterPhy = clusterPhyService.getClusterByCluster(clusterPhyId);
-        if (clusterPhy == null) {
-            return Result.buildFromRSAndMsg(ResultStatus.CLUSTER_NOT_EXIST, MsgConstant.getClusterPhyNotExist(clusterPhyId));
-        }
-
-        List<ZookeeperInfo> infoList = zookeeperService.listFromDBByCluster(clusterPhyId);
-
-        ClusterZookeepersStateVO vo = new ClusterZookeepersStateVO();
-        vo.setTotalServerCount(infoList.size());
-        vo.setAliveFollowerCount(0);
-        vo.setTotalFollowerCount(0);
-        vo.setAliveObserverCount(0);
-        vo.setTotalObserverCount(0);
-        vo.setAliveServerCount(0);
-        for (ZookeeperInfo info: infoList) {
-            if (info.getRole().equals(ZKRoleEnum.LEADER.getRole()) || info.getRole().equals(ZKRoleEnum.STANDALONE.getRole())) {
-                // leader or standalone
-                vo.setLeaderNode(info.getHost());
-            }
-
-            if (info.getRole().equals(ZKRoleEnum.FOLLOWER.getRole())) {
-                vo.setTotalFollowerCount(vo.getTotalFollowerCount() + 1);
-                vo.setAliveFollowerCount(info.alive() ? vo.getAliveFollowerCount() + 1 : vo.getAliveFollowerCount());
-            }
-
-            if (info.getRole().equals(ZKRoleEnum.OBSERVER.getRole())) {
-                vo.setTotalObserverCount(vo.getTotalObserverCount() + 1);
-                vo.setAliveObserverCount(info.alive() ? vo.getAliveObserverCount() + 1 : vo.getAliveObserverCount());
-            }
-
-            if (info.alive()) {
-                vo.setAliveServerCount(vo.getAliveServerCount() + 1);
-            }
-        }
-
-        // Collect metrics
-        Result<ZookeeperMetrics> metricsResult = zookeeperMetricService.batchCollectMetricsFromZookeeper(
-                clusterPhyId,
-                Arrays.asList(
-                        ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_WATCH_COUNT,
-                        ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_HEALTH_STATE,
-                        ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_HEALTH_CHECK_PASSED,
-                        ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_HEALTH_CHECK_TOTAL
-                )
-        );
-        if (metricsResult.failed()) {
-            LOGGER.error(
-                    "method=getClusterPhyZookeepersState||clusterPhyId={}||errMsg={}",
-                    clusterPhyId, metricsResult.getMessage()
-            );
-            return Result.buildSuc(vo);
-        }
-
-        ZookeeperMetrics metrics = metricsResult.getData();
-        vo.setWatchCount(ConvertUtil.float2Integer(metrics.getMetrics().get(ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_WATCH_COUNT)));
-        vo.setHealthState(ConvertUtil.float2Integer(metrics.getMetrics().get(ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_HEALTH_STATE)));
-        vo.setHealthCheckPassed(ConvertUtil.float2Integer(metrics.getMetrics().get(ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_HEALTH_CHECK_PASSED)));
-        vo.setHealthCheckTotal(ConvertUtil.float2Integer(metrics.getMetrics().get(ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_HEALTH_CHECK_TOTAL)));
-
-        return Result.buildSuc(vo);
-    }
-
-    @Override
-    public PaginationResult<ClusterZookeepersOverviewVO> getClusterPhyZookeepersOverview(Long clusterPhyId, ClusterZookeepersOverviewDTO dto) {
-        // Get the cluster's zookeeper list
-        List<ClusterZookeepersOverviewVO> clusterZookeepersOverviewVOList = ConvertUtil.list2List(zookeeperService.listFromDBByCluster(clusterPhyId), ClusterZookeepersOverviewVO.class);
-
-        // Search
-        clusterZookeepersOverviewVOList = PaginationUtil.pageByFuzzyFilter(clusterZookeepersOverviewVOList, dto.getSearchKeywords(), Arrays.asList("host"));
-
-        // Pagination
-        PaginationResult<ClusterZookeepersOverviewVO> paginationResult = PaginationUtil.pageBySubData(clusterZookeepersOverviewVOList, dto);
-
-        return paginationResult;
-    }
-
-    @Override
-    public Result<ZnodeVO> getZnodeVO(Long clusterPhyId, String path) {
-        Result<Znode> result = znodeService.getZnode(clusterPhyId, path);
-        if (result.failed()) {
-            return Result.buildFromIgnoreData(result);
-        }
-        return Result.buildSuc(ConvertUtil.obj2ObjByJSON(result.getData(), ZnodeVO.class));
-    }
-
-    /**************************************************** private method ****************************************************/
-
-}
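The deleted `getClusterPhyZookeepersState` above tallies ZooKeeper server roles in a single pass: record the leader (or standalone) host and accumulate alive/total counts per role. A simplified sketch of that tally, with the `ZkServer` record standing in for `ZookeeperInfo` and plain role strings standing in for `ZKRoleEnum`:

```java
import java.util.*;

// Single-pass role tally over a ZooKeeper server list: leader host plus
// alive/total counts. Types are simplified stand-ins for the real entities.
public class ZkStateSketch {
    record ZkServer(String host, String role, boolean alive) {}

    static String summarize(List<ZkServer> servers) {
        String leader = null;
        int totalFollowers = 0, aliveFollowers = 0, aliveServers = 0;
        for (ZkServer s : servers) {
            if (s.role().equals("leader") || s.role().equals("standalone")) {
                leader = s.host(); // leader or standalone node
            }
            if (s.role().equals("follower")) {
                totalFollowers++;
                if (s.alive()) aliveFollowers++;
            }
            if (s.alive()) aliveServers++;
        }
        return leader + " followers=" + aliveFollowers + "/" + totalFollowers + " alive=" + aliveServers;
    }

    public static void main(String[] args) {
        List<ZkServer> servers = List.of(
                new ZkServer("zk1", "leader", true),
                new ZkServer("zk2", "follower", true),
                new ZkServer("zk3", "follower", false));
        System.out.println(summarize(servers)); // prints "zk1 followers=1/2 alive=2"
    }
}
```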
@@ -5,29 +5,32 @@ import com.didiglobal.logi.log.LogFactory;
|
|||||||
import com.xiaojukeji.know.streaming.km.biz.cluster.MultiClusterPhyManager;
|
import com.xiaojukeji.know.streaming.km.biz.cluster.MultiClusterPhyManager;
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.MetricDTO;
|
import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.MetricDTO;
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.MetricsClusterPhyDTO;
|
import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.MetricsClusterPhyDTO;
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhysHealthState;
|
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhysState;
|
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhysState;
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
|
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.MultiClusterDashboardDTO;
|
import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.MultiClusterDashboardDTO;
|
||||||
|
import com.xiaojukeji.know.streaming.km.common.bean.entity.kafkacontroller.KafkaController;
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.ClusterMetrics;
|
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.ClusterMetrics;
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
|
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
|
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.ClusterPhyBaseVO;
|
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.ClusterPhyDashboardVO;
|
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.ClusterPhyDashboardVO;
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.vo.metrics.line.MetricMultiLinesVO;
|
import com.xiaojukeji.know.streaming.km.common.bean.vo.metrics.line.MetricMultiLinesVO;
|
||||||
|
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
|
||||||
import com.xiaojukeji.know.streaming.km.common.converter.ClusterVOConverter;
|
import com.xiaojukeji.know.streaming.km.common.converter.ClusterVOConverter;
|
||||||
import com.xiaojukeji.know.streaming.km.common.enums.health.HealthStateEnum;
|
|
||||||
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
|
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
|
||||||
import com.xiaojukeji.know.streaming.km.common.utils.PaginationUtil;
|
import com.xiaojukeji.know.streaming.km.common.utils.PaginationUtil;
|
||||||
import com.xiaojukeji.know.streaming.km.common.utils.PaginationMetricsUtil;
|
import com.xiaojukeji.know.streaming.km.common.utils.PaginationMetricsUtil;
|
||||||
import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
|
import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
|
||||||
import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterMetricService;
|
import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterMetricService;
|
||||||
import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
|
import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
|
||||||
import com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.ClusterMetricVersionItems;
|
import com.xiaojukeji.know.streaming.km.core.service.kafkacontroller.KafkaControllerService;
|
||||||
|
import com.xiaojukeji.know.streaming.km.core.service.version.metrics.ClusterMetricVersionItems;
|
||||||
import org.springframework.beans.factory.annotation.Autowired;
|
import org.springframework.beans.factory.annotation.Autowired;
|
||||||
import org.springframework.stereotype.Service;
|
import org.springframework.stereotype.Service;
|
||||||
|
|
||||||
import java.util.*;
|
import java.util.ArrayList;
|
||||||
|
import java.util.Arrays;
|
||||||
|
import java.util.List;
|
||||||
|
import java.util.Map;
|
||||||
import java.util.stream.Collectors;
|
import java.util.stream.Collectors;
|
||||||
|
|
||||||
@Service
|
@Service
|
||||||
@@ -40,48 +43,34 @@ public class MultiClusterPhyManagerImpl implements MultiClusterPhyManager {
|
|||||||
@Autowired
|
@Autowired
|
||||||
private ClusterMetricService clusterMetricService;
|
private ClusterMetricService clusterMetricService;
|
||||||
|
|
||||||
|
@Autowired
|
||||||
|
private KafkaControllerService kafkaControllerService;
|
||||||
|
|
||||||
@Override
|
@Override
|
||||||
public ClusterPhysState getClusterPhysState() {
|
public ClusterPhysState getClusterPhysState() {
|
||||||
List<ClusterPhy> clusterPhyList = clusterPhyService.listAllClusters();
|
List<ClusterPhy> clusterPhyList = clusterPhyService.listAllClusters();
|
||||||
ClusterPhysState physState = new ClusterPhysState(0, 0, 0, clusterPhyList.size());
|
|
||||||
|
|
||||||
for (ClusterPhy clusterPhy : clusterPhyList) {
|
Map<Long, KafkaController> controllerMap = kafkaControllerService.getKafkaControllersFromDB(
|
||||||
ClusterMetrics metrics = clusterMetricService.getLatestMetricsFromCache(clusterPhy.getId());
|
clusterPhyList.stream().map(elem -> elem.getId()).collect(Collectors.toList()),
|
||||||
Float state = metrics.getMetric(ClusterMetricVersionItems.CLUSTER_METRIC_HEALTH_STATE);
|
false
|
||||||
if (state == null) {
|
);
|
||||||
physState.setUnknownCount(physState.getUnknownCount() + 1);
|
|
||||||
} else if (state.intValue() == HealthStateEnum.DEAD.getDimension()) {
|
// TODO 后续产品上,看是否需要增加一个未知的状态,否则新接入的集群,因为新接入的集群,数据存在延迟
|
||||||
|
ClusterPhysState physState = new ClusterPhysState(0, 0, clusterPhyList.size());
|
||||||
|
for (ClusterPhy clusterPhy: clusterPhyList) {
|
||||||
|
KafkaController kafkaController = controllerMap.get(clusterPhy.getId());
|
||||||
|
|
||||||
|
if (kafkaController != null && !kafkaController.alive()) {
|
||||||
|
// 存在明确的信息表示controller挂了
|
||||||
|
physState.setDownCount(physState.getDownCount() + 1);
|
||||||
|
} else if ((System.currentTimeMillis() - clusterPhy.getCreateTime().getTime() >= 5 * 60 * 1000) && kafkaController == null) {
|
||||||
|
// 集群接入时间是在近5分钟内,同时kafkaController信息不存在,则设置为down
|
||||||
physState.setDownCount(physState.getDownCount() + 1);
|
physState.setDownCount(physState.getDownCount() + 1);
|
||||||
} else {
|
} else {
|
||||||
|
// 其他情况都设置为alive
|
||||||
physState.setLiveCount(physState.getLiveCount() + 1);
|
physState.setLiveCount(physState.getLiveCount() + 1);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
return physState;
|
|
||||||
}
|
|
||||||
|
|
||||||
|
|
||||||
@Override
|
|
||||||
public ClusterPhysHealthState getClusterPhysHealthState() {
|
|
||||||
List<ClusterPhy> clusterPhyList = clusterPhyService.listAllClusters();
|
|
||||||
|
|
||||||
ClusterPhysHealthState physState = new ClusterPhysHealthState(clusterPhyList.size());
|
|
||||||
for (ClusterPhy clusterPhy: clusterPhyList) {
|
|
||||||
ClusterMetrics metrics = clusterMetricService.getLatestMetricsFromCache(clusterPhy.getId());
|
|
||||||
Float state = metrics.getMetric(ClusterMetricVersionItems.CLUSTER_METRIC_HEALTH_STATE);
|
|
||||||
if (state == null) {
|
|
||||||
physState.setUnknownCount(physState.getUnknownCount() + 1);
|
|
||||||
} else if (state.intValue() == HealthStateEnum.GOOD.getDimension()) {
|
|
||||||
physState.setGoodCount(physState.getGoodCount() + 1);
|
|
||||||
} else if (state.intValue() == HealthStateEnum.MEDIUM.getDimension()) {
|
|
||||||
physState.setMediumCount(physState.getMediumCount() + 1);
|
|
||||||
} else if (state.intValue() == HealthStateEnum.POOR.getDimension()) {
|
|
||||||
physState.setPoorCount(physState.getPoorCount() + 1);
|
|
||||||
} else if (state.intValue() == HealthStateEnum.DEAD.getDimension()) {
|
|
||||||
physState.setDeadCount(physState.getDeadCount() + 1);
|
|
||||||
} else {
|
|
||||||
physState.setUnknownCount(physState.getUnknownCount() + 1);
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
return physState;
|
return physState;
|
||||||
}
|
}
|
||||||
@@ -94,6 +83,24 @@ public class MultiClusterPhyManagerImpl implements MultiClusterPhyManager {
        // Convert to VO format to simplify later pagination and filtering
        List<ClusterPhyDashboardVO> voList = ConvertUtil.list2List(clusterPhyList, ClusterPhyDashboardVO.class);

        // TODO: product-wise, consider adding an "unknown" status; newly added clusters have delayed data
        // Fetch cluster controller info and fill it into the VOs
        Map<Long, KafkaController> controllerMap = kafkaControllerService.getKafkaControllersFromDB(clusterPhyList.stream().map(elem -> elem.getId()).collect(Collectors.toList()), false);
        for (ClusterPhyDashboardVO vo: voList) {
            KafkaController kafkaController = controllerMap.get(vo.getId());

            if (kafkaController != null && !kafkaController.alive()) {
                // Explicit information shows the controller is down
                vo.setAlive(Constant.DOWN);
            } else if ((System.currentTimeMillis() - vo.getCreateTime().getTime() >= 5 * 60L * 1000L) && kafkaController == null) {
                // The cluster was added more than 5 minutes ago and still has no controller info, so mark it down
                vo.setAlive(Constant.DOWN);
            } else {
                // Treat all other cases as alive
                vo.setAlive(Constant.ALIVE);
            }
        }

        // Local pagination and filtering
        voList = this.getAndPagingDataInLocal(voList, dto);
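The liveness rule above combines an explicit controller-down signal with a 5-minute grace period for clusters whose controller info has not yet been collected. A pure-function sketch of that decision (names are illustrative, not the project's API; the booleans stand in for the `KafkaController` lookup):

```java
public class AliveRuleSketch {
    static final long GRACE_MS = 5 * 60L * 1000L;

    // controllerKnown: controller info exists; controllerAlive: that info says it is alive.
    static boolean isAlive(boolean controllerKnown, boolean controllerAlive,
                           long nowMs, long clusterCreateMs) {
        if (controllerKnown && !controllerAlive) {
            return false;   // explicit signal: controller is down
        }
        if (!controllerKnown && nowMs - clusterCreateMs >= GRACE_MS) {
            return false;   // past the grace period with still no controller info
        }
        return true;        // everything else counts as alive
    }

    public static void main(String[] args) {
        long now = 1_000_000_000L;
        if (isAlive(true, false, now, now)) throw new AssertionError();
        if (!isAlive(false, false, now, now - 60_000L)) throw new AssertionError(); // within grace
        if (isAlive(false, false, now, now - GRACE_MS)) throw new AssertionError();
        if (!isAlive(true, true, now, 0L)) throw new AssertionError();
    }
}
```

Separating the rule from the VO mutation makes the grace-period boundary easy to unit-test, which the inline version above cannot do without a clock.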
@@ -118,15 +125,6 @@ public class MultiClusterPhyManagerImpl implements MultiClusterPhyManager {
        );
    }

    @Override
    public Result<List<ClusterPhyBaseVO>> getClusterPhysBasic() {
        // Fetch all clusters
        List<ClusterPhy> clusterPhyList = clusterPhyService.listAllClusters();

        // Convert to VO format
        return Result.buildSuc(ConvertUtil.list2List(clusterPhyList, ClusterPhyBaseVO.class));
    }


    /**************************************************** private method ****************************************************/
@@ -151,7 +149,13 @@ public class MultiClusterPhyManagerImpl implements MultiClusterPhyManager {
        List<ClusterMetrics> metricsList = new ArrayList<>();
        for (ClusterPhyDashboardVO vo: voList) {
            ClusterMetrics clusterMetrics = clusterMetricService.getLatestMetricsFromCache(vo.getId());
            if (!clusterMetrics.getMetrics().containsKey(ClusterMetricVersionItems.CLUSTER_METRIC_HEALTH_SCORE)) {
                Float alive = clusterMetrics.getMetrics().get(ClusterMetricVersionItems.CLUSTER_METRIC_ALIVE);

                // If the cluster has no health score yet, set a default value
                clusterMetrics.putMetric(ClusterMetricVersionItems.CLUSTER_METRIC_HEALTH_SCORE,
                        (alive != null && alive <= 0)? 0.0f: Constant.DEFAULT_CLUSTER_HEALTH_SCORE.floatValue()
                );
            }

            metricsList.add(clusterMetrics);
        }
@@ -1,16 +0,0 @@
package com.xiaojukeji.know.streaming.km.biz.connect.connector;

import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector.ConnectorCreateDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.vo.connect.connector.ConnectorStateVO;

import java.util.Properties;

public interface ConnectorManager {
    Result<Void> updateConnectorConfig(Long connectClusterId, String connectorName, Properties configs, String operator);

    Result<Void> createConnector(ConnectorCreateDTO dto, String operator);

    Result<Void> createConnector(ConnectorCreateDTO dto, String heartbeatName, String checkpointName, String operator);

    Result<ConnectorStateVO> getConnectorStateVO(Long connectClusterId, String connectorName);
}
@@ -1,16 +0,0 @@
package com.xiaojukeji.know.streaming.km.biz.connect.connector;

import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.vo.connect.task.KCTaskOverviewVO;

import java.util.List;

/**
 * @author wyb
 * @date 2022/11/14
 */
public interface WorkerConnectorManager {
    Result<List<KCTaskOverviewVO>> getTaskOverview(Long connectClusterId, String connectorName);
}
@@ -1,119 +0,0 @@
package com.xiaojukeji.know.streaming.km.biz.connect.connector.impl;

import com.xiaojukeji.know.streaming.km.biz.connect.connector.ConnectorManager;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector.ConnectorCreateDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.WorkerConnector;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.config.ConnectConfigInfos;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.connector.KSConnector;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.connector.KSConnectorInfo;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.ResultStatus;
import com.xiaojukeji.know.streaming.km.common.bean.po.connect.ConnectorPO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.connect.connector.ConnectorStateVO;
import com.xiaojukeji.know.streaming.km.common.constant.connect.KafkaConnectConstant;
import com.xiaojukeji.know.streaming.km.core.service.connect.connector.ConnectorService;
import com.xiaojukeji.know.streaming.km.core.service.connect.connector.OpConnectorService;
import com.xiaojukeji.know.streaming.km.core.service.connect.plugin.PluginService;
import com.xiaojukeji.know.streaming.km.core.service.connect.worker.WorkerConnectorService;
import org.apache.kafka.connect.runtime.AbstractStatus;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import java.util.List;
import java.util.Properties;
import java.util.stream.Collectors;

@Service
public class ConnectorManagerImpl implements ConnectorManager {
    @Autowired
    private PluginService pluginService;

    @Autowired
    private ConnectorService connectorService;

    @Autowired
    private OpConnectorService opConnectorService;

    @Autowired
    private WorkerConnectorService workerConnectorService;

    @Override
    public Result<Void> updateConnectorConfig(Long connectClusterId, String connectorName, Properties configs, String operator) {
        Result<ConnectConfigInfos> infosResult = pluginService.validateConfig(connectClusterId, configs);
        if (infosResult.failed()) {
            return Result.buildFromIgnoreData(infosResult);
        }

        if (infosResult.getData().getErrorCount() > 0) {
            return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, "Connector参数错误");
        }

        return opConnectorService.updateConnectorConfig(connectClusterId, connectorName, configs, operator);
    }

    @Override
    public Result<Void> createConnector(ConnectorCreateDTO dto, String operator) {
        dto.getSuitableConfig().put(KafkaConnectConstant.MIRROR_MAKER_NAME_FIELD_NAME, dto.getConnectorName());

        Result<KSConnectorInfo> createResult = opConnectorService.createConnector(dto.getConnectClusterId(), dto.getConnectorName(), dto.getSuitableConfig(), operator);
        if (createResult.failed()) {
            return Result.buildFromIgnoreData(createResult);
        }

        Result<KSConnector> ksConnectorResult = connectorService.getConnectorFromKafka(dto.getConnectClusterId(), dto.getConnectorName());
        if (ksConnectorResult.failed()) {
            return Result.buildFromRSAndMsg(ResultStatus.SUCCESS, "创建成功,但是获取元信息失败,页面元信息会存在1分钟延迟");
        }

        opConnectorService.addNewToDB(ksConnectorResult.getData());
        return Result.buildSuc();
    }

    @Override
    public Result<Void> createConnector(ConnectorCreateDTO dto, String heartbeatName, String checkpointName, String operator) {
        dto.getSuitableConfig().put(KafkaConnectConstant.MIRROR_MAKER_NAME_FIELD_NAME, dto.getConnectorName());

        Result<KSConnectorInfo> createResult = opConnectorService.createConnector(dto.getConnectClusterId(), dto.getConnectorName(), dto.getSuitableConfig(), operator);
        if (createResult.failed()) {
            return Result.buildFromIgnoreData(createResult);
        }

        Result<KSConnector> ksConnectorResult = connectorService.getConnectorFromKafka(dto.getConnectClusterId(), dto.getConnectorName());
        if (ksConnectorResult.failed()) {
            return Result.buildFromRSAndMsg(ResultStatus.SUCCESS, "创建成功,但是获取元信息失败,页面元信息会存在1分钟延迟");
        }

        KSConnector connector = ksConnectorResult.getData();
        connector.setCheckpointConnectorName(checkpointName);
        connector.setHeartbeatConnectorName(heartbeatName);

        opConnectorService.addNewToDB(connector);
        return Result.buildSuc();
    }

    @Override
    public Result<ConnectorStateVO> getConnectorStateVO(Long connectClusterId, String connectorName) {
        ConnectorPO connectorPO = connectorService.getConnectorFromDB(connectClusterId, connectorName);

        if (connectorPO == null) {
            return Result.buildFailure(ResultStatus.NOT_EXIST);
        }

        List<WorkerConnector> workerConnectorList = workerConnectorService.listFromDB(connectClusterId).stream().filter(elem -> elem.getConnectorName().equals(connectorName)).collect(Collectors.toList());

        return Result.buildSuc(convert2ConnectorOverviewVO(connectorPO, workerConnectorList));
    }

    private ConnectorStateVO convert2ConnectorOverviewVO(ConnectorPO connectorPO, List<WorkerConnector> workerConnectorList) {
        ConnectorStateVO connectorStateVO = new ConnectorStateVO();
        connectorStateVO.setConnectClusterId(connectorPO.getConnectClusterId());
        connectorStateVO.setName(connectorPO.getConnectorName());
        connectorStateVO.setType(connectorPO.getConnectorType());
        connectorStateVO.setState(connectorPO.getState());
        connectorStateVO.setTotalTaskCount(workerConnectorList.size());
        connectorStateVO.setAliveTaskCount(workerConnectorList.stream().filter(elem -> elem.getState().equals(AbstractStatus.State.RUNNING.name())).collect(Collectors.toList()).size());
        connectorStateVO.setTotalWorkerCount(workerConnectorList.stream().map(elem -> elem.getWorkerId()).collect(Collectors.toSet()).size());
        return connectorStateVO;
    }
}
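`convert2ConnectorOverviewVO` above derives `aliveTaskCount` by collecting matching elements into a list and taking its size; `Stream.count()` expresses the same tally without materializing an intermediate list. A standalone sketch, with plain state strings standing in for `WorkerConnector`:

```java
import java.util.Arrays;
import java.util.List;

public class CountSketch {
    // Count how many task states equal RUNNING, without building a temporary list.
    static int aliveCount(List<String> states) {
        return (int) states.stream().filter("RUNNING"::equals).count();
    }

    public static void main(String[] args) {
        if (aliveCount(Arrays.asList("RUNNING", "FAILED", "RUNNING")) != 2) throw new AssertionError();
        if (aliveCount(Arrays.asList()) != 0) throw new AssertionError();
    }
}
```

The cast is needed because `count()` returns `long`; behavior is otherwise identical to `collect(Collectors.toList()).size()`.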
@@ -1,37 +0,0 @@
package com.xiaojukeji.know.streaming.km.biz.connect.connector.impl;

import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.biz.connect.connector.WorkerConnectorManager;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.ConnectCluster;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.WorkerConnector;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.vo.connect.task.KCTaskOverviewVO;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.core.service.connect.worker.WorkerConnectorService;
import com.xiaojukeji.know.streaming.km.persistence.connect.cache.LoadedConnectClusterCache;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import java.util.List;

/**
 * @author wyb
 * @date 2022/11/14
 */
@Service
public class WorkerConnectorManageImpl implements WorkerConnectorManager {

    private static final ILog LOGGER = LogFactory.getLog(WorkerConnectorManageImpl.class);

    @Autowired
    private WorkerConnectorService workerConnectorService;

    @Override
    public Result<List<KCTaskOverviewVO>> getTaskOverview(Long connectClusterId, String connectorName) {
        ConnectCluster connectCluster = LoadedConnectClusterCache.getByPhyId(connectClusterId);
        List<WorkerConnector> workerConnectorList = workerConnectorService.getWorkerConnectorListFromCluster(connectCluster, connectorName);

        return Result.buildSuc(ConvertUtil.list2List(workerConnectorList, KCTaskOverviewVO.class));
    }
}
@@ -1,43 +0,0 @@
package com.xiaojukeji.know.streaming.km.biz.connect.mm2;

import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterMirrorMakersOverviewDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.mm2.MirrorMakerCreateDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.mm2.ClusterMirrorMakerOverviewVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.mm2.MirrorMakerBaseStateVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.mm2.MirrorMakerStateVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.connect.plugin.ConnectConfigInfosVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.connect.task.KCTaskOverviewVO;

import java.util.List;
import java.util.Map;
import java.util.Properties;

/**
 * @author wyb
 * @date 2022/12/26
 */
public interface MirrorMakerManager {
    Result<Void> createMirrorMaker(MirrorMakerCreateDTO dto, String operator);

    Result<Void> deleteMirrorMaker(Long connectClusterId, String sourceConnectorName, String operator);

    Result<Void> modifyMirrorMakerConfig(MirrorMakerCreateDTO dto, String operator);

    Result<Void> restartMirrorMaker(Long connectClusterId, String sourceConnectorName, String operator);

    Result<Void> stopMirrorMaker(Long connectClusterId, String sourceConnectorName, String operator);

    Result<Void> resumeMirrorMaker(Long connectClusterId, String sourceConnectorName, String operator);

    Result<MirrorMakerStateVO> getMirrorMakerStateVO(Long clusterPhyId);

    PaginationResult<ClusterMirrorMakerOverviewVO> getClusterMirrorMakersOverview(Long clusterPhyId, ClusterMirrorMakersOverviewDTO dto);

    Result<MirrorMakerBaseStateVO> getMirrorMakerState(Long connectId, String connectName);

    Result<Map<String, List<KCTaskOverviewVO>>> getTaskOverview(Long connectClusterId, String connectorName);

    Result<List<Properties>> getMM2Configs(Long connectClusterId, String connectorName);

    Result<List<ConnectConfigInfosVO>> validateConnectors(MirrorMakerCreateDTO dto);
}
@@ -1,653 +0,0 @@
package com.xiaojukeji.know.streaming.km.biz.connect.mm2.impl;

import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.biz.connect.connector.ConnectorManager;
import com.xiaojukeji.know.streaming.km.biz.connect.mm2.MirrorMakerManager;
import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterMirrorMakersOverviewDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.ClusterConnectorDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector.ConnectorCreateDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.mm2.MirrorMakerCreateDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.MetricDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.mm2.MetricsMirrorMakersDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.ConnectCluster;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.ConnectWorker;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.WorkerConnector;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.config.ConnectConfigInfos;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.connector.KSConnectorInfo;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.mm2.MirrorMakerMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.ResultStatus;
import com.xiaojukeji.know.streaming.km.common.bean.po.connect.ConnectorPO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.mm2.ClusterMirrorMakerOverviewVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.mm2.MirrorMakerBaseStateVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.mm2.MirrorMakerStateVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.connect.plugin.ConnectConfigInfosVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.metrics.line.MetricLineVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.metrics.line.MetricMultiLinesVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.connect.task.KCTaskOverviewVO;
import com.xiaojukeji.know.streaming.km.common.constant.MsgConstant;
import com.xiaojukeji.know.streaming.km.common.utils.*;
import com.xiaojukeji.know.streaming.km.common.constant.connect.KafkaConnectConstant;
import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
import com.xiaojukeji.know.streaming.km.core.service.connect.cluster.ConnectClusterService;
import com.xiaojukeji.know.streaming.km.core.service.connect.connector.ConnectorService;
import com.xiaojukeji.know.streaming.km.core.service.connect.connector.OpConnectorService;
import com.xiaojukeji.know.streaming.km.core.service.connect.mm2.MirrorMakerMetricService;
import com.xiaojukeji.know.streaming.km.core.service.connect.plugin.PluginService;
import com.xiaojukeji.know.streaming.km.core.service.connect.worker.WorkerConnectorService;
import com.xiaojukeji.know.streaming.km.core.service.connect.worker.WorkerService;
import com.xiaojukeji.know.streaming.km.core.utils.ApiCallThreadPoolService;
import com.xiaojukeji.know.streaming.km.persistence.cache.LoadedClusterPhyCache;
import org.apache.commons.lang.StringUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import java.util.*;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;
import java.util.stream.Collectors;

import static org.apache.kafka.connect.runtime.AbstractStatus.State.RUNNING;
import static com.xiaojukeji.know.streaming.km.common.constant.connect.KafkaConnectConstant.*;


/**
 * @author wyb
 * @date 2022/12/26
 */
@Service
public class MirrorMakerManagerImpl implements MirrorMakerManager {
    private static final ILog LOGGER = LogFactory.getLog(MirrorMakerManagerImpl.class);

    @Autowired
    private ConnectorService connectorService;

    @Autowired
    private OpConnectorService opConnectorService;

    @Autowired
    private WorkerConnectorService workerConnectorService;

    @Autowired
    private WorkerService workerService;

    @Autowired
    private ConnectorManager connectorManager;

    @Autowired
    private ClusterPhyService clusterPhyService;

    @Autowired
    private MirrorMakerMetricService mirrorMakerMetricService;

    @Autowired
    private ConnectClusterService connectClusterService;

    @Autowired
    private PluginService pluginService;
    @Override
    public Result<Void> createMirrorMaker(MirrorMakerCreateDTO dto, String operator) {
        // Check basic parameters
        Result<Void> rv = this.checkCreateMirrorMakerParamAndUnifyData(dto);
        if (rv.failed()) {
            return rv;
        }

        // Create the MirrorSourceConnector
        Result<Void> sourceConnectResult = connectorManager.createConnector(
                dto,
                dto.getHeartbeatConnectorConfigs() != null? MirrorMakerUtil.genHeartbeatName(dto.getConnectorName()): "",
                dto.getCheckpointConnectorConfigs() != null? MirrorMakerUtil.genCheckpointName(dto.getConnectorName()): "",
                operator
        );
        if (sourceConnectResult.failed()) {
            // Creation failed, return immediately
            return Result.buildFromIgnoreData(sourceConnectResult);
        }

        // Create the checkpoint task
        Result<Void> checkpointResult = Result.buildSuc();
        if (dto.getCheckpointConnectorConfigs() != null) {
            checkpointResult = connectorManager.createConnector(
                    new ConnectorCreateDTO(dto.getConnectClusterId(), MirrorMakerUtil.genCheckpointName(dto.getConnectorName()), dto.getCheckpointConnectorConfigs()),
                    operator
            );
        }

        // Create the heartbeat task
        Result<Void> heartbeatResult = Result.buildSuc();
        if (dto.getHeartbeatConnectorConfigs() != null) {
            heartbeatResult = connectorManager.createConnector(
                    new ConnectorCreateDTO(dto.getConnectClusterId(), MirrorMakerUtil.genHeartbeatName(dto.getConnectorName()), dto.getHeartbeatConnectorConfigs()),
                    operator
            );
        }

        // Both succeeded
        if (checkpointResult.successful() && heartbeatResult.successful()) {
            return Result.buildSuc();
        } else if (checkpointResult.failed() && heartbeatResult.failed()) {
            return Result.buildFromRSAndMsg(
                    ResultStatus.KAFKA_CONNECTOR_OPERATE_FAILED,
                    String.format("创建 checkpoint & heartbeat 失败.%n失败信息分别为:%s%n%n%s", checkpointResult.getMessage(), heartbeatResult.getMessage())
            );
        } else if (checkpointResult.failed()) {
            return Result.buildFromRSAndMsg(
                    ResultStatus.KAFKA_CONNECTOR_OPERATE_FAILED,
                    String.format("创建 checkpoint 失败.%n失败信息为:%s", checkpointResult.getMessage())
            );
        } else {
            return Result.buildFromRSAndMsg(
                    ResultStatus.KAFKA_CONNECTOR_OPERATE_FAILED,
                    String.format("创建 heartbeat 失败.%n失败信息为:%s", heartbeatResult.getMessage())
            );
        }
    }
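The four-way branch at the end of `createMirrorMaker` merges two independent sub-results into one outcome. The same decision table as a small pure function (the string outcomes are illustrative, not the project's `Result` API):

```java
public class CombineSketch {
    // Collapse two independent success/failure results into one summary message.
    static String combine(boolean checkpointOk, boolean heartbeatOk,
                          String checkpointErr, String heartbeatErr) {
        if (checkpointOk && heartbeatOk) return "OK";
        if (!checkpointOk && !heartbeatOk) {
            return "checkpoint & heartbeat failed: " + checkpointErr + "; " + heartbeatErr;
        }
        if (!checkpointOk) return "checkpoint failed: " + checkpointErr;
        return "heartbeat failed: " + heartbeatErr;
    }

    public static void main(String[] args) {
        if (!combine(true, true, "", "").equals("OK")) throw new AssertionError();
        if (!combine(false, true, "e1", "").equals("checkpoint failed: e1")) throw new AssertionError();
        if (!combine(true, false, "", "e2").equals("heartbeat failed: e2")) throw new AssertionError();
    }
}
```

Testing both flags in each branch (rather than the same flag twice) is what makes the table exhaustive: every combination of the two booleans reaches exactly one branch.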
    @Override
    public Result<Void> deleteMirrorMaker(Long connectClusterId, String sourceConnectorName, String operator) {
        ConnectorPO connectorPO = connectorService.getConnectorFromDB(connectClusterId, sourceConnectorName);
        if (connectorPO == null) {
            return Result.buildFromRSAndMsg(ResultStatus.NOT_EXIST, MsgConstant.getConnectorNotExist(connectClusterId, sourceConnectorName));
        }

        Result<Void> rv = Result.buildSuc();
        if (!ValidateUtils.isBlank(connectorPO.getCheckpointConnectorName())) {
            rv = opConnectorService.deleteConnector(connectClusterId, connectorPO.getCheckpointConnectorName(), operator);
        }
        if (rv.failed()) {
            return rv;
        }

        if (!ValidateUtils.isBlank(connectorPO.getHeartbeatConnectorName())) {
            rv = opConnectorService.deleteConnector(connectClusterId, connectorPO.getHeartbeatConnectorName(), operator);
        }
        if (rv.failed()) {
            return rv;
        }

        return opConnectorService.deleteConnector(connectClusterId, sourceConnectorName, operator);
    }
    @Override
    public Result<Void> modifyMirrorMakerConfig(MirrorMakerCreateDTO dto, String operator) {
        ConnectorPO connectorPO = connectorService.getConnectorFromDB(dto.getConnectClusterId(), dto.getConnectorName());
        if (connectorPO == null) {
            return Result.buildFromRSAndMsg(ResultStatus.NOT_EXIST, MsgConstant.getConnectorNotExist(dto.getConnectClusterId(), dto.getConnectorName()));
        }

        Result<Void> rv = Result.buildSuc();
        if (!ValidateUtils.isBlank(connectorPO.getCheckpointConnectorName()) && dto.getCheckpointConnectorConfigs() != null) {
            rv = opConnectorService.updateConnectorConfig(dto.getConnectClusterId(), connectorPO.getCheckpointConnectorName(), dto.getCheckpointConnectorConfigs(), operator);
        }
        if (rv.failed()) {
            return rv;
        }

        if (!ValidateUtils.isBlank(connectorPO.getHeartbeatConnectorName()) && dto.getHeartbeatConnectorConfigs() != null) {
            rv = opConnectorService.updateConnectorConfig(dto.getConnectClusterId(), connectorPO.getHeartbeatConnectorName(), dto.getHeartbeatConnectorConfigs(), operator);
        }
        if (rv.failed()) {
            return rv;
        }

        return opConnectorService.updateConnectorConfig(dto.getConnectClusterId(), dto.getConnectorName(), dto.getSuitableConfig(), operator);
    }
    @Override
    public Result<Void> restartMirrorMaker(Long connectClusterId, String sourceConnectorName, String operator) {
        ConnectorPO connectorPO = connectorService.getConnectorFromDB(connectClusterId, sourceConnectorName);
        if (connectorPO == null) {
            return Result.buildFromRSAndMsg(ResultStatus.NOT_EXIST, MsgConstant.getConnectorNotExist(connectClusterId, sourceConnectorName));
        }

        Result<Void> rv = Result.buildSuc();
        if (!ValidateUtils.isBlank(connectorPO.getCheckpointConnectorName())) {
            rv = opConnectorService.restartConnector(connectClusterId, connectorPO.getCheckpointConnectorName(), operator);
        }
        if (rv.failed()) {
            return rv;
        }

        if (!ValidateUtils.isBlank(connectorPO.getHeartbeatConnectorName())) {
            rv = opConnectorService.restartConnector(connectClusterId, connectorPO.getHeartbeatConnectorName(), operator);
        }
        if (rv.failed()) {
            return rv;
        }

        return opConnectorService.restartConnector(connectClusterId, sourceConnectorName, operator);
    }
    @Override
    public Result<Void> stopMirrorMaker(Long connectClusterId, String sourceConnectorName, String operator) {
        ConnectorPO connectorPO = connectorService.getConnectorFromDB(connectClusterId, sourceConnectorName);
        if (connectorPO == null) {
            return Result.buildFromRSAndMsg(ResultStatus.NOT_EXIST, MsgConstant.getConnectorNotExist(connectClusterId, sourceConnectorName));
        }

        Result<Void> rv = Result.buildSuc();
        if (!ValidateUtils.isBlank(connectorPO.getCheckpointConnectorName())) {
            rv = opConnectorService.stopConnector(connectClusterId, connectorPO.getCheckpointConnectorName(), operator);
        }
        if (rv.failed()) {
            return rv;
        }

        if (!ValidateUtils.isBlank(connectorPO.getHeartbeatConnectorName())) {
            rv = opConnectorService.stopConnector(connectClusterId, connectorPO.getHeartbeatConnectorName(), operator);
        }
        if (rv.failed()) {
            return rv;
        }

        return opConnectorService.stopConnector(connectClusterId, sourceConnectorName, operator);
    }
    @Override
    public Result<Void> resumeMirrorMaker(Long connectClusterId, String sourceConnectorName, String operator) {
        ConnectorPO connectorPO = connectorService.getConnectorFromDB(connectClusterId, sourceConnectorName);
        if (connectorPO == null) {
            return Result.buildFromRSAndMsg(ResultStatus.NOT_EXIST, MsgConstant.getConnectorNotExist(connectClusterId, sourceConnectorName));
        }

        Result<Void> rv = Result.buildSuc();
        if (!ValidateUtils.isBlank(connectorPO.getCheckpointConnectorName())) {
            rv = opConnectorService.resumeConnector(connectClusterId, connectorPO.getCheckpointConnectorName(), operator);
        }
        if (rv.failed()) {
            return rv;
        }

        if (!ValidateUtils.isBlank(connectorPO.getHeartbeatConnectorName())) {
            rv = opConnectorService.resumeConnector(connectClusterId, connectorPO.getHeartbeatConnectorName(), operator);
        }
        if (rv.failed()) {
            return rv;
        }

        return opConnectorService.resumeConnector(connectClusterId, sourceConnectorName, operator);
    }
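`deleteMirrorMaker`, `restartMirrorMaker`, `stopMirrorMaker` and `resumeMirrorMaker` all repeat one shape: apply an operation to the checkpoint connector, then the heartbeat connector, then the source connector, returning on the first failure. A generic sketch of that cascade, with a null-on-success `String` standing in for `Result<Void>` (hypothetical helper, not in the codebase):

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;

public class CascadeSketch {
    // Apply op to each present connector name in order; return the first error, or null on success.
    static String applyInOrder(Function<String, String> op, List<String> names) {
        for (String name : names) {
            if (name == null || name.isEmpty()) continue;   // connector not configured, skip
            String err = op.apply(name);
            if (err != null) return err;                    // fail fast, like the rv checks above
        }
        return null;
    }

    public static void main(String[] args) {
        Function<String, String> failOnHeartbeat = n -> n.equals("hb") ? "boom" : null;
        // checkpoint succeeds, heartbeat fails: the source connector is never attempted
        if (!"boom".equals(applyInOrder(failOnHeartbeat, Arrays.asList("cp", "hb", "src")))) throw new AssertionError();
        // blank names are skipped and the whole cascade succeeds
        if (applyInOrder(n -> null, Arrays.asList("cp", "", "src")) != null) throw new AssertionError();
    }
}
```

Factoring the cascade this way would shrink the four methods to one line each (passing `opConnectorService::deleteConnector` etc. suitably curried), at the cost of a less explicit call chain when reading a single method.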
    @Override
    public Result<MirrorMakerStateVO> getMirrorMakerStateVO(Long clusterPhyId) {
        List<ConnectorPO> connectorPOList = connectorService.listByKafkaClusterIdFromDB(clusterPhyId);
        List<WorkerConnector> workerConnectorList = workerConnectorService.listByKafkaClusterIdFromDB(clusterPhyId);
        List<ConnectWorker> workerList = workerService.listByKafkaClusterIdFromDB(clusterPhyId);

        return Result.buildSuc(convert2MirrorMakerStateVO(connectorPOList, workerConnectorList, workerList));
    }
    @Override
    public PaginationResult<ClusterMirrorMakerOverviewVO> getClusterMirrorMakersOverview(Long clusterPhyId, ClusterMirrorMakersOverviewDTO dto) {
        List<ConnectorPO> mirrorMakerList = connectorService.listByKafkaClusterIdFromDB(clusterPhyId).stream().filter(elem -> elem.getConnectorClassName().equals(MIRROR_MAKER_SOURCE_CONNECTOR_TYPE)).collect(Collectors.toList());
        List<ConnectCluster> connectClusterList = connectClusterService.listByKafkaCluster(clusterPhyId);

        Result<List<MirrorMakerMetrics>> latestMetricsResult = mirrorMakerMetricService.getLatestMetricsFromES(clusterPhyId,
                mirrorMakerList.stream().map(elem -> new Tuple<>(elem.getConnectClusterId(), elem.getConnectorName())).collect(Collectors.toList()),
                dto.getLatestMetricNames());
        if (latestMetricsResult.failed()) {
            LOGGER.error("method=getClusterMirrorMakersOverview||clusterPhyId={}||result={}||errMsg=get latest metric failed", clusterPhyId, latestMetricsResult);
            return PaginationResult.buildFailure(latestMetricsResult, dto);
        }

        List<ClusterMirrorMakerOverviewVO> mirrorMakerOverviewVOList = this.convert2ClusterMirrorMakerOverviewVO(mirrorMakerList, connectClusterList, latestMetricsResult.getData());

        List<ClusterMirrorMakerOverviewVO> mirrorMakerVOList = this.completeClusterInfo(mirrorMakerOverviewVOList);

        PaginationResult<ClusterMirrorMakerOverviewVO> voPaginationResult = this.pagingMirrorMakerInLocal(mirrorMakerVOList, dto);
        if (voPaginationResult.failed()) {
            LOGGER.error("method=getClusterMirrorMakersOverview||clusterPhyId={}||result={}||errMsg=pagination in local failed", clusterPhyId, voPaginationResult);
            return PaginationResult.buildFailure(voPaginationResult, dto);
        }

        // query historical metrics
        Result<List<MetricMultiLinesVO>> lineMetricsResult = mirrorMakerMetricService.listMirrorMakerClusterMetricsFromES(
                clusterPhyId,
                this.buildMetricsConnectorsDTO(
                        voPaginationResult.getData().getBizData().stream().map(elem -> new ClusterConnectorDTO(elem.getConnectClusterId(), elem.getConnectorName())).collect(Collectors.toList()),
                        dto.getMetricLines()
                ));

        return PaginationResult.buildSuc(
                this.supplyData2ClusterMirrorMakerOverviewVOList(
                        voPaginationResult.getData().getBizData(),
                        lineMetricsResult.getData()
                ),
                voPaginationResult
        );
    }

    @Override
    public Result<MirrorMakerBaseStateVO> getMirrorMakerState(Long connectClusterId, String connectName) {
        // the MM2 task
        ConnectorPO connectorPO = connectorService.getConnectorFromDB(connectClusterId, connectName);
        if (connectorPO == null) {
            return Result.buildFrom(ResultStatus.NOT_EXIST);
        }

        List<WorkerConnector> workerConnectorList = workerConnectorService.listFromDB(connectClusterId).stream()
                .filter(workerConnector -> workerConnector.getConnectorName().equals(connectorPO.getConnectorName())
                        || (!StringUtils.isBlank(connectorPO.getCheckpointConnectorName()) && workerConnector.getConnectorName().equals(connectorPO.getCheckpointConnectorName()))
                        || (!StringUtils.isBlank(connectorPO.getHeartbeatConnectorName()) && workerConnector.getConnectorName().equals(connectorPO.getHeartbeatConnectorName())))
                .collect(Collectors.toList());

        MirrorMakerBaseStateVO mirrorMakerBaseStateVO = new MirrorMakerBaseStateVO();
        mirrorMakerBaseStateVO.setTotalTaskCount(workerConnectorList.size());
        mirrorMakerBaseStateVO.setAliveTaskCount(workerConnectorList.stream().filter(elem -> elem.getState().equals(RUNNING.name())).collect(Collectors.toList()).size());
        mirrorMakerBaseStateVO.setWorkerCount(workerConnectorList.stream().collect(Collectors.groupingBy(WorkerConnector::getWorkerId)).size());
        return Result.buildSuc(mirrorMakerBaseStateVO);
    }

    @Override
    public Result<Map<String, List<KCTaskOverviewVO>>> getTaskOverview(Long connectClusterId, String connectorName) {
        ConnectorPO connectorPO = connectorService.getConnectorFromDB(connectClusterId, connectorName);
        if (connectorPO == null) {
            return Result.buildFrom(ResultStatus.NOT_EXIST);
        }

        Map<String, List<KCTaskOverviewVO>> listMap = new HashMap<>();
        List<WorkerConnector> workerConnectorList = workerConnectorService.listFromDB(connectClusterId);
        if (workerConnectorList.isEmpty()) {
            return Result.buildSuc(listMap);
        }
        workerConnectorList.forEach(workerConnector -> {
            if (workerConnector.getConnectorName().equals(connectorPO.getConnectorName())) {
                listMap.putIfAbsent(KafkaConnectConstant.MIRROR_MAKER_SOURCE_CONNECTOR_TYPE, new ArrayList<>());
                listMap.get(MIRROR_MAKER_SOURCE_CONNECTOR_TYPE).add(ConvertUtil.obj2Obj(workerConnector, KCTaskOverviewVO.class));
            } else if (workerConnector.getConnectorName().equals(connectorPO.getCheckpointConnectorName())) {
                listMap.putIfAbsent(KafkaConnectConstant.MIRROR_MAKER_CHECKPOINT_CONNECTOR_TYPE, new ArrayList<>());
                listMap.get(MIRROR_MAKER_CHECKPOINT_CONNECTOR_TYPE).add(ConvertUtil.obj2Obj(workerConnector, KCTaskOverviewVO.class));
            } else if (workerConnector.getConnectorName().equals(connectorPO.getHeartbeatConnectorName())) {
                listMap.putIfAbsent(KafkaConnectConstant.MIRROR_MAKER_HEARTBEAT_CONNECTOR_TYPE, new ArrayList<>());
                listMap.get(MIRROR_MAKER_HEARTBEAT_CONNECTOR_TYPE).add(ConvertUtil.obj2Obj(workerConnector, KCTaskOverviewVO.class));
            }
        });

        return Result.buildSuc(listMap);
    }

    @Override
    public Result<List<Properties>> getMM2Configs(Long connectClusterId, String connectorName) {
        ConnectorPO connectorPO = connectorService.getConnectorFromDB(connectClusterId, connectorName);
        if (connectorPO == null) {
            return Result.buildFrom(ResultStatus.NOT_EXIST);
        }

        List<Properties> propList = new ArrayList<>();

        // source
        Result<KSConnectorInfo> connectorResult = connectorService.getConnectorInfoFromCluster(connectClusterId, connectorPO.getConnectorName());
        if (connectorResult.failed()) {
            return Result.buildFromIgnoreData(connectorResult);
        }

        Properties props = new Properties();
        props.putAll(connectorResult.getData().getConfig());
        propList.add(props);

        // checkpoint
        if (!ValidateUtils.isBlank(connectorPO.getCheckpointConnectorName())) {
            connectorResult = connectorService.getConnectorInfoFromCluster(connectClusterId, connectorPO.getCheckpointConnectorName());
            if (connectorResult.failed()) {
                return Result.buildFromIgnoreData(connectorResult);
            }

            props = new Properties();
            props.putAll(connectorResult.getData().getConfig());
            propList.add(props);
        }

        // heartbeat
        if (!ValidateUtils.isBlank(connectorPO.getHeartbeatConnectorName())) {
            connectorResult = connectorService.getConnectorInfoFromCluster(connectClusterId, connectorPO.getHeartbeatConnectorName());
            if (connectorResult.failed()) {
                return Result.buildFromIgnoreData(connectorResult);
            }

            props = new Properties();
            props.putAll(connectorResult.getData().getConfig());
            propList.add(props);
        }

        return Result.buildSuc(propList);
    }

    @Override
    public Result<List<ConnectConfigInfosVO>> validateConnectors(MirrorMakerCreateDTO dto) {
        List<ConnectConfigInfosVO> voList = new ArrayList<>();

        Result<ConnectConfigInfos> infoResult = pluginService.validateConfig(dto.getConnectClusterId(), dto.getSuitableConfig());
        if (infoResult.failed()) {
            return Result.buildFromIgnoreData(infoResult);
        }

        voList.add(ConvertUtil.obj2Obj(infoResult.getData(), ConnectConfigInfosVO.class));

        if (dto.getHeartbeatConnectorConfigs() != null) {
            infoResult = pluginService.validateConfig(dto.getConnectClusterId(), dto.getHeartbeatConnectorConfigs());
            if (infoResult.failed()) {
                return Result.buildFromIgnoreData(infoResult);
            }

            voList.add(ConvertUtil.obj2Obj(infoResult.getData(), ConnectConfigInfosVO.class));
        }

        if (dto.getCheckpointConnectorConfigs() != null) {
            infoResult = pluginService.validateConfig(dto.getConnectClusterId(), dto.getCheckpointConnectorConfigs());
            if (infoResult.failed()) {
                return Result.buildFromIgnoreData(infoResult);
            }

            voList.add(ConvertUtil.obj2Obj(infoResult.getData(), ConnectConfigInfosVO.class));
        }

        return Result.buildSuc(voList);
    }

    /**************************************************** private method ****************************************************/

    private MetricsMirrorMakersDTO buildMetricsConnectorsDTO(List<ClusterConnectorDTO> connectorDTOList, MetricDTO metricDTO) {
        MetricsMirrorMakersDTO dto = ConvertUtil.obj2Obj(metricDTO, MetricsMirrorMakersDTO.class);
        dto.setConnectorNameList(connectorDTOList == null ? new ArrayList<>() : connectorDTOList);

        return dto;
    }

    public Result<Void> checkCreateMirrorMakerParamAndUnifyData(MirrorMakerCreateDTO dto) {
        ClusterPhy sourceClusterPhy = clusterPhyService.getClusterByCluster(dto.getSourceKafkaClusterId());
        if (sourceClusterPhy == null) {
            return Result.buildFromRSAndMsg(ResultStatus.CLUSTER_NOT_EXIST, MsgConstant.getClusterPhyNotExist(dto.getSourceKafkaClusterId()));
        }

        ConnectCluster connectCluster = connectClusterService.getById(dto.getConnectClusterId());
        if (connectCluster == null) {
            return Result.buildFromRSAndMsg(ResultStatus.CLUSTER_NOT_EXIST, MsgConstant.getConnectClusterNotExist(dto.getConnectClusterId()));
        }

        ClusterPhy targetClusterPhy = clusterPhyService.getClusterByCluster(connectCluster.getKafkaClusterPhyId());
        if (targetClusterPhy == null) {
            return Result.buildFromRSAndMsg(ResultStatus.CLUSTER_NOT_EXIST, MsgConstant.getClusterPhyNotExist(connectCluster.getKafkaClusterPhyId()));
        }

        if (!dto.getSuitableConfig().containsKey(CONNECTOR_CLASS_FILED_NAME)) {
            return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, "SourceConnector is missing connector.class");
        }

        if (!MIRROR_MAKER_SOURCE_CONNECTOR_TYPE.equals(dto.getSuitableConfig().getProperty(CONNECTOR_CLASS_FILED_NAME))) {
            return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, "SourceConnector has an incorrect connector.class type");
        }

        if (dto.getCheckpointConnectorConfigs() != null) {
            if (!dto.getCheckpointConnectorConfigs().containsKey(CONNECTOR_CLASS_FILED_NAME)) {
                return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, "CheckpointConnector is missing connector.class");
            }

            if (!MIRROR_MAKER_CHECKPOINT_CONNECTOR_TYPE.equals(dto.getCheckpointConnectorConfigs().getProperty(CONNECTOR_CLASS_FILED_NAME))) {
                return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, "CheckpointConnector has an incorrect connector.class type");
            }
        }

        if (dto.getHeartbeatConnectorConfigs() != null) {
            if (!dto.getHeartbeatConnectorConfigs().containsKey(CONNECTOR_CLASS_FILED_NAME)) {
                return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, "HeartbeatConnector is missing connector.class");
            }

            if (!MIRROR_MAKER_HEARTBEAT_CONNECTOR_TYPE.equals(dto.getHeartbeatConnectorConfigs().getProperty(CONNECTOR_CLASS_FILED_NAME))) {
                return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, "HeartbeatConnector has an incorrect connector.class type");
            }
        }

        dto.unifyData(
                sourceClusterPhy.getId(), sourceClusterPhy.getBootstrapServers(), ConvertUtil.str2ObjByJson(sourceClusterPhy.getClientProperties(), Properties.class),
                targetClusterPhy.getId(), targetClusterPhy.getBootstrapServers(), ConvertUtil.str2ObjByJson(targetClusterPhy.getClientProperties(), Properties.class)
        );

        return Result.buildSuc();
    }

    private MirrorMakerStateVO convert2MirrorMakerStateVO(List<ConnectorPO> connectorPOList, List<WorkerConnector> workerConnectorList, List<ConnectWorker> workerList) {
        MirrorMakerStateVO mirrorMakerStateVO = new MirrorMakerStateVO();

        List<ConnectorPO> sourceSet = connectorPOList.stream().filter(elem -> elem.getConnectorClassName().equals(MIRROR_MAKER_SOURCE_CONNECTOR_TYPE)).collect(Collectors.toList());
        mirrorMakerStateVO.setMirrorMakerCount(sourceSet.size());

        Set<Long> connectClusterIdSet = sourceSet.stream().map(ConnectorPO::getConnectClusterId).collect(Collectors.toSet());
        mirrorMakerStateVO.setWorkerCount(workerList.stream().filter(elem -> connectClusterIdSet.contains(elem.getConnectClusterId())).collect(Collectors.toList()).size());

        List<ConnectorPO> mirrorMakerConnectorList = new ArrayList<>();
        mirrorMakerConnectorList.addAll(sourceSet);
        mirrorMakerConnectorList.addAll(connectorPOList.stream().filter(elem -> elem.getConnectorClassName().equals(MIRROR_MAKER_CHECKPOINT_CONNECTOR_TYPE)).collect(Collectors.toList()));
        mirrorMakerConnectorList.addAll(connectorPOList.stream().filter(elem -> elem.getConnectorClassName().equals(MIRROR_MAKER_HEARTBEAT_CONNECTOR_TYPE)).collect(Collectors.toList()));
        mirrorMakerStateVO.setTotalConnectorCount(mirrorMakerConnectorList.size());
        mirrorMakerStateVO.setAliveConnectorCount(mirrorMakerConnectorList.stream().filter(elem -> elem.getState().equals(RUNNING.name())).collect(Collectors.toList()).size());

        Set<String> connectorNameSet = mirrorMakerConnectorList.stream().map(elem -> elem.getConnectorName()).collect(Collectors.toSet());
        List<WorkerConnector> taskList = workerConnectorList.stream().filter(elem -> connectorNameSet.contains(elem.getConnectorName())).collect(Collectors.toList());
        mirrorMakerStateVO.setTotalTaskCount(taskList.size());
        mirrorMakerStateVO.setAliveTaskCount(taskList.stream().filter(elem -> elem.getState().equals(RUNNING.name())).collect(Collectors.toList()).size());

        return mirrorMakerStateVO;
    }

    private List<ClusterMirrorMakerOverviewVO> convert2ClusterMirrorMakerOverviewVO(List<ConnectorPO> mirrorMakerList, List<ConnectCluster> connectClusterList, List<MirrorMakerMetrics> latestMetric) {
        List<ClusterMirrorMakerOverviewVO> clusterMirrorMakerOverviewVOList = new ArrayList<>();
        Map<String, MirrorMakerMetrics> metricsMap = latestMetric.stream().collect(Collectors.toMap(elem -> elem.getConnectClusterId() + "@" + elem.getConnectorName(), Function.identity()));
        Map<Long, ConnectCluster> connectClusterMap = connectClusterList.stream().collect(Collectors.toMap(elem -> elem.getId(), Function.identity()));

        for (ConnectorPO mirrorMaker : mirrorMakerList) {
            ClusterMirrorMakerOverviewVO clusterMirrorMakerOverviewVO = new ClusterMirrorMakerOverviewVO();
            clusterMirrorMakerOverviewVO.setConnectClusterId(mirrorMaker.getConnectClusterId());
            clusterMirrorMakerOverviewVO.setConnectClusterName(connectClusterMap.get(mirrorMaker.getConnectClusterId()).getName());
            clusterMirrorMakerOverviewVO.setConnectorName(mirrorMaker.getConnectorName());
            clusterMirrorMakerOverviewVO.setState(mirrorMaker.getState());
            clusterMirrorMakerOverviewVO.setCheckpointConnector(mirrorMaker.getCheckpointConnectorName());
            clusterMirrorMakerOverviewVO.setTaskCount(mirrorMaker.getTaskCount());
            clusterMirrorMakerOverviewVO.setHeartbeatConnector(mirrorMaker.getHeartbeatConnectorName());
            clusterMirrorMakerOverviewVO.setLatestMetrics(metricsMap.getOrDefault(mirrorMaker.getConnectClusterId() + "@" + mirrorMaker.getConnectorName(), new MirrorMakerMetrics(mirrorMaker.getConnectClusterId(), mirrorMaker.getConnectorName())));
            clusterMirrorMakerOverviewVOList.add(clusterMirrorMakerOverviewVO);
        }
        return clusterMirrorMakerOverviewVOList;
    }

    PaginationResult<ClusterMirrorMakerOverviewVO> pagingMirrorMakerInLocal(List<ClusterMirrorMakerOverviewVO> mirrorMakerOverviewVOList, ClusterMirrorMakersOverviewDTO dto) {
        List<ClusterMirrorMakerOverviewVO> mirrorMakerVOList = PaginationUtil.pageByFuzzyFilter(mirrorMakerOverviewVOList, dto.getSearchKeywords(), Arrays.asList("connectorName"));

        // sort
        if (!dto.getLatestMetricNames().isEmpty()) {
            PaginationMetricsUtil.sortMetrics(mirrorMakerVOList, "latestMetrics", dto.getSortMetricNameList(), "connectorName", dto.getSortType());
        } else {
            PaginationUtil.pageBySort(mirrorMakerVOList, dto.getSortField(), dto.getSortType(), "connectorName", dto.getSortType());
        }

        // paginate
        return PaginationUtil.pageBySubData(mirrorMakerVOList, dto);
    }

    public static List<ClusterMirrorMakerOverviewVO> supplyData2ClusterMirrorMakerOverviewVOList(List<ClusterMirrorMakerOverviewVO> voList,
                                                                                                List<MetricMultiLinesVO> metricLineVOList) {
        Map<String, List<MetricLineVO>> metricLineMap = new HashMap<>();
        if (metricLineVOList != null) {
            for (MetricMultiLinesVO metricMultiLinesVO : metricLineVOList) {
                metricMultiLinesVO.getMetricLines()
                        .forEach(metricLineVO -> {
                            String key = metricLineVO.getName();
                            List<MetricLineVO> metricLineVOS = metricLineMap.getOrDefault(key, new ArrayList<>());
                            metricLineVOS.add(metricLineVO);
                            metricLineMap.put(key, metricLineVOS);
                        });
            }
        }

        voList.forEach(elem -> elem.setMetricLines(metricLineMap.get(elem.getConnectClusterId() + "#" + elem.getConnectorName())));

        return voList;
    }

    private List<ClusterMirrorMakerOverviewVO> completeClusterInfo(List<ClusterMirrorMakerOverviewVO> mirrorMakerVOList) {
        Map<String, KSConnectorInfo> connectorInfoMap = new ConcurrentHashMap<>();

        for (ClusterMirrorMakerOverviewVO mirrorMakerVO : mirrorMakerVOList) {
            ApiCallThreadPoolService.runnableTask(
                    String.format("method=completeClusterInfo||connectClusterId=%d||connectorName=%s||getMirrorMakerInfo", mirrorMakerVO.getConnectClusterId(), mirrorMakerVO.getConnectorName()),
                    3000,
                    () -> {
                        Result<KSConnectorInfo> connectorInfoRet = connectorService.getConnectorInfoFromCluster(mirrorMakerVO.getConnectClusterId(), mirrorMakerVO.getConnectorName());
                        if (connectorInfoRet.hasData()) {
                            connectorInfoMap.put(mirrorMakerVO.getConnectClusterId() + mirrorMakerVO.getConnectorName(), connectorInfoRet.getData());
                        }
                    });
        }

        ApiCallThreadPoolService.waitResult();

        List<ClusterMirrorMakerOverviewVO> newMirrorMakerVOList = new ArrayList<>();
        for (ClusterMirrorMakerOverviewVO mirrorMakerVO : mirrorMakerVOList) {
            KSConnectorInfo connectorInfo = connectorInfoMap.get(mirrorMakerVO.getConnectClusterId() + mirrorMakerVO.getConnectorName());
            if (connectorInfo == null) {
                continue;
            }

            String sourceClusterAlias = connectorInfo.getConfig().get(MIRROR_MAKER_SOURCE_CLUSTER_ALIAS_FIELD_NAME);
            String targetClusterAlias = connectorInfo.getConfig().get(MIRROR_MAKER_TARGET_CLUSTER_ALIAS_FIELD_NAME);

            // default to the cluster alias first
            mirrorMakerVO.setSourceKafkaClusterName(sourceClusterAlias);
            mirrorMakerVO.setDestKafkaClusterName(targetClusterAlias);

            if (!ValidateUtils.isBlank(sourceClusterAlias) && CommonUtils.isNumeric(sourceClusterAlias)) {
                ClusterPhy clusterPhy = LoadedClusterPhyCache.getByPhyId(Long.valueOf(sourceClusterAlias));
                if (clusterPhy != null) {
                    mirrorMakerVO.setSourceKafkaClusterId(clusterPhy.getId());
                    mirrorMakerVO.setSourceKafkaClusterName(clusterPhy.getName());
                }
            }

            if (!ValidateUtils.isBlank(targetClusterAlias) && CommonUtils.isNumeric(targetClusterAlias)) {
                ClusterPhy clusterPhy = LoadedClusterPhyCache.getByPhyId(Long.valueOf(targetClusterAlias));
                if (clusterPhy != null) {
                    mirrorMakerVO.setDestKafkaClusterId(clusterPhy.getId());
                    mirrorMakerVO.setDestKafkaClusterName(clusterPhy.getName());
                }
            }

            newMirrorMakerVOList.add(mirrorMakerVO);
        }

        return newMirrorMakerVOList;
    }
}
@@ -1,15 +1,11 @@
 package com.xiaojukeji.know.streaming.km.biz.group;

-import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterGroupSummaryDTO;
-import com.xiaojukeji.know.streaming.km.common.bean.dto.group.GroupOffsetDeleteDTO;
 import com.xiaojukeji.know.streaming.km.common.bean.dto.group.GroupOffsetResetDTO;
 import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationBaseDTO;
 import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationSortDTO;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.topic.TopicPartitionKS;
-import com.xiaojukeji.know.streaming.km.common.bean.po.group.GroupMemberPO;
-import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupOverviewVO;
 import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupTopicConsumedDetailVO;
 import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupTopicOverviewVO;
 import com.xiaojukeji.know.streaming.km.common.exception.AdminOperateException;
@@ -26,10 +22,6 @@ public interface GroupManager {
                                                      String searchGroupKeyword,
                                                      PaginationBaseDTO dto);

-    PaginationResult<GroupTopicOverviewVO> pagingGroupTopicMembers(Long clusterPhyId, String groupName, PaginationBaseDTO dto) throws Exception;
-
-    PaginationResult<GroupOverviewVO> pagingClusterGroupsOverview(Long clusterPhyId, ClusterGroupSummaryDTO dto);
-
     PaginationResult<GroupTopicConsumedDetailVO> pagingGroupTopicConsumedMetrics(Long clusterPhyId,
                                                                                  String topicName,
                                                                                  String groupName,
@@ -39,10 +31,4 @@ public interface GroupManager {
     Result<Set<TopicPartitionKS>> listClusterPhyGroupPartitions(Long clusterPhyId, String groupName, Long startTime, Long endTime);

     Result<Void> resetGroupOffsets(GroupOffsetResetDTO dto, String operator) throws Exception;
-
-    Result<Void> deleteGroupOffsets(GroupOffsetDeleteDTO dto, String operator) throws Exception;
-
-    @Deprecated
-    List<GroupTopicOverviewVO> getGroupTopicOverviewVOList(Long clusterPhyId, List<GroupMemberPO> groupMemberPOList);
-    List<GroupTopicOverviewVO> getGroupTopicOverviewVOList(Long clusterPhyId, List<GroupMemberPO> groupMemberPOList, Integer timeoutUnitMs);
 }
@@ -3,40 +3,23 @@ package com.xiaojukeji.know.streaming.km.biz.group.impl;
|
|||||||
import com.didiglobal.logi.log.ILog;
|
import com.didiglobal.logi.log.ILog;
|
||||||
import com.didiglobal.logi.log.LogFactory;
|
import com.didiglobal.logi.log.LogFactory;
|
||||||
import com.xiaojukeji.know.streaming.km.biz.group.GroupManager;
|
import com.xiaojukeji.know.streaming.km.biz.group.GroupManager;
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterGroupSummaryDTO;
|
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.dto.group.GroupOffsetDeleteDTO;
|
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.dto.group.GroupOffsetResetDTO;
|
import com.xiaojukeji.know.streaming.km.common.bean.dto.group.GroupOffsetResetDTO;
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationBaseDTO;
|
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationBaseDTO;
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationSortDTO;
|
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationSortDTO;
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.dto.partition.PartitionOffsetDTO;
|
import com.xiaojukeji.know.streaming.km.common.bean.dto.partition.PartitionOffsetDTO;
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
|
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.entity.group.Group;
|
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.entity.group.GroupTopic;
|
import com.xiaojukeji.know.streaming.km.common.bean.entity.group.GroupTopic;
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.entity.group.GroupTopicMember;
|
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.entity.kafka.KSGroupDescription;
|
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.entity.kafka.KSMemberConsumerAssignment;
|
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.entity.kafka.KSMemberDescription;
|
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.GroupMetrics;
|
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.GroupMetrics;
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.entity.offset.KSOffsetSpec;
|
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.entity.param.group.DeleteGroupParam;
|
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.entity.param.group.DeleteGroupTopicParam;
|
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.entity.param.group.DeleteGroupTopicPartitionParam;
|
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
|
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
|
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.entity.topic.Topic;
|
import com.xiaojukeji.know.streaming.km.common.bean.entity.topic.Topic;
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.entity.topic.TopicPartitionKS;
|
import com.xiaojukeji.know.streaming.km.common.bean.entity.topic.TopicPartitionKS;
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.ResultStatus;
|
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.ResultStatus;
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.po.group.GroupMemberPO;
|
import com.xiaojukeji.know.streaming.km.common.bean.po.group.GroupMemberPO;
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupOverviewVO;
|
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupTopicConsumedDetailVO;
|
import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupTopicConsumedDetailVO;
|
||||||
import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupTopicOverviewVO;
|
import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupTopicOverviewVO;
|
||||||
import com.xiaojukeji.know.streaming.km.common.constant.MsgConstant;
|
import com.xiaojukeji.know.streaming.km.common.constant.MsgConstant;
|
||||||
import com.xiaojukeji.know.streaming.km.common.constant.PaginationConstant;
|
|
||||||
import com.xiaojukeji.know.streaming.km.common.converter.GroupConverter;
|
|
||||||
import com.xiaojukeji.know.streaming.km.common.enums.AggTypeEnum;
|
import com.xiaojukeji.know.streaming.km.common.enums.AggTypeEnum;
|
||||||
import com.xiaojukeji.know.streaming.km.common.enums.OffsetTypeEnum;
|
import com.xiaojukeji.know.streaming.km.common.enums.OffsetTypeEnum;
|
||||||
import com.xiaojukeji.know.streaming.km.common.enums.SortTypeEnum;
|
|
||||||
import com.xiaojukeji.know.streaming.km.common.enums.group.DeleteGroupTypeEnum;
|
|
||||||
import com.xiaojukeji.know.streaming.km.common.enums.group.GroupStateEnum;
|
import com.xiaojukeji.know.streaming.km.common.enums.group.GroupStateEnum;
|
||||||
import com.xiaojukeji.know.streaming.km.common.exception.AdminOperateException;
|
import com.xiaojukeji.know.streaming.km.common.exception.AdminOperateException;
|
||||||
import com.xiaojukeji.know.streaming.km.common.exception.NotExistException;
|
import com.xiaojukeji.know.streaming.km.common.exception.NotExistException;
|
||||||
@@ -44,30 +27,26 @@ import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.PaginationMetricsUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.PaginationUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
-import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
-import com.xiaojukeji.know.streaming.km.core.service.config.KSConfigUtils;
 import com.xiaojukeji.know.streaming.km.core.service.group.GroupMetricService;
 import com.xiaojukeji.know.streaming.km.core.service.group.GroupService;
-import com.xiaojukeji.know.streaming.km.core.service.group.OpGroupService;
 import com.xiaojukeji.know.streaming.km.core.service.partition.PartitionService;
 import com.xiaojukeji.know.streaming.km.core.service.topic.TopicService;
-import com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.GroupMetricVersionItems;
+import com.xiaojukeji.know.streaming.km.core.service.version.metrics.GroupMetricVersionItems;
-import com.xiaojukeji.know.streaming.km.core.utils.ApiCallThreadPoolService;
 import com.xiaojukeji.know.streaming.km.persistence.es.dao.GroupMetricESDAO;
+import org.apache.kafka.clients.admin.ConsumerGroupDescription;
+import org.apache.kafka.clients.admin.MemberDescription;
+import org.apache.kafka.clients.admin.OffsetSpec;
 import org.apache.kafka.common.ConsumerGroupState;
 import org.apache.kafka.common.TopicPartition;
 import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.stereotype.Component;
 
 import java.util.*;
-import java.util.concurrent.ConcurrentHashMap;
 import java.util.stream.Collectors;
 
-import static com.xiaojukeji.know.streaming.km.common.enums.group.GroupTypeEnum.CONNECT_CLUSTER_PROTOCOL_TYPE;
-
 @Component
 public class GroupManagerImpl implements GroupManager {
-    private static final ILog LOGGER = LogFactory.getLog(GroupManagerImpl.class);
+    private static final ILog log = LogFactory.getLog(GroupManagerImpl.class);
 
     @Autowired
     private TopicService topicService;
@@ -75,9 +54,6 @@ public class GroupManagerImpl implements GroupManager {
     @Autowired
     private GroupService groupService;
 
-    @Autowired
-    private OpGroupService opGroupService;
-
     @Autowired
     private PartitionService partitionService;
 
@@ -87,12 +63,6 @@ public class GroupManagerImpl implements GroupManager {
     @Autowired
     private GroupMetricESDAO groupMetricESDAO;
 
-    @Autowired
-    private ClusterPhyService clusterPhyService;
-
-    @Autowired
-    private KSConfigUtils ksConfigUtils;
-
     @Override
     public PaginationResult<GroupTopicOverviewVO> pagingGroupMembers(Long clusterPhyId,
                                                                      String topicName,
@@ -100,96 +70,41 @@ public class GroupManagerImpl implements GroupManager {
                                                                      String searchTopicKeyword,
                                                                      String searchGroupKeyword,
                                                                      PaginationBaseDTO dto) {
-        long startTimeUnitMs = System.currentTimeMillis();
-
         PaginationResult<GroupMemberPO> paginationResult = groupService.pagingGroupMembers(clusterPhyId, topicName, groupName, searchTopicKeyword, searchGroupKeyword, dto);
+        if (paginationResult.failed()) {
+            return PaginationResult.buildFailure(paginationResult, dto);
+        }
 
         if (!paginationResult.hasData()) {
             return PaginationResult.buildSuc(new ArrayList<>(), paginationResult);
         }
 
-        List<GroupTopicOverviewVO> groupTopicVOList = this.getGroupTopicOverviewVOList(
+        // 获取指标
+        Result<List<GroupMetrics>> metricsListResult = groupMetricService.listLatestMetricsAggByGroupTopicFromES(
                 clusterPhyId,
-                paginationResult.getData().getBizData(),
-                ksConfigUtils.getApiCallLeftTimeUnitMs(System.currentTimeMillis() - startTimeUnitMs) // 超时时间
+                paginationResult.getData().getBizData().stream().map(elem -> new GroupTopic(elem.getGroupName(), elem.getTopicName())).collect(Collectors.toList()),
+                Arrays.asList(GroupMetricVersionItems.GROUP_METRIC_LAG),
+                AggTypeEnum.MAX
         );
-        return PaginationResult.buildSuc(groupTopicVOList, paginationResult);
+        if (metricsListResult.failed()) {
+            // 如果查询失败,则输出错误信息,但是依旧进行已有数据的返回
+            log.error("method=pagingGroupMembers||clusterPhyId={}||topicName={}||groupName={}||result={}||errMsg=search es failed", clusterPhyId, topicName, groupName, metricsListResult);
         }
 
-    @Override
-    public PaginationResult<GroupTopicOverviewVO> pagingGroupTopicMembers(Long clusterPhyId, String groupName, PaginationBaseDTO dto) throws Exception {
-        long startTimeUnitMs = System.currentTimeMillis();
-
-        ClusterPhy clusterPhy = clusterPhyService.getClusterByCluster(clusterPhyId);
-        if (clusterPhy == null) {
-            return PaginationResult.buildFailure(MsgConstant.getClusterPhyNotExist(clusterPhyId), dto);
-        }
-
-        Group group = groupService.getGroupFromKafka(clusterPhy, groupName);
-
-        //没有topicMember则直接返回
-        if (group == null || ValidateUtils.isEmptyList(group.getTopicMembers())) {
-            return PaginationResult.buildSuc(dto);
-        }
-
-        //排序
-        List<GroupTopicMember> groupTopicMembers = PaginationUtil.pageBySort(group.getTopicMembers(), PaginationConstant.DEFAULT_GROUP_TOPIC_SORTED_FIELD, SortTypeEnum.DESC.getSortType());
-
-        //分页
-        PaginationResult<GroupTopicMember> paginationResult = PaginationUtil.pageBySubData(groupTopicMembers, dto);
-
-        List<GroupMemberPO> groupMemberPOList = paginationResult.getData().getBizData().stream().map(elem -> new GroupMemberPO(clusterPhyId, elem.getTopicName(), groupName, group.getState().getState(), elem.getMemberCount())).collect(Collectors.toList());
-
         return PaginationResult.buildSuc(
-                this.getGroupTopicOverviewVOList(
-                        clusterPhyId,
-                        groupMemberPOList,
-                        ksConfigUtils.getApiCallLeftTimeUnitMs(System.currentTimeMillis() - startTimeUnitMs) // 超时时间
-                ),
+                this.convert2GroupTopicOverviewVOList(paginationResult.getData().getBizData(), metricsListResult.getData()),
                 paginationResult
         );
     }
 
-    @Override
-    public PaginationResult<GroupOverviewVO> pagingClusterGroupsOverview(Long clusterPhyId, ClusterGroupSummaryDTO dto) {
-        List<Group> groupList = groupService.listClusterGroups(clusterPhyId);
-
-        // 类型转化
-        List<GroupOverviewVO> voList = groupList.stream().map(GroupConverter::convert2GroupOverviewVO).collect(Collectors.toList());
-
-        // 搜索groupName
-        voList = PaginationUtil.pageByFuzzyFilter(voList, dto.getSearchGroupName(), Arrays.asList("name"));
-
-        //搜索topic
-        if (!ValidateUtils.isBlank(dto.getSearchTopicName())) {
-            voList = voList.stream().filter(elem -> {
-                for (String topicName : elem.getTopicNameList()) {
-                    if (topicName.contains(dto.getSearchTopicName())) {
-                        return true;
-                    }
-                }
-                return false;
-            }).collect(Collectors.toList());
-        }
-
-        // 分页 后 返回
-        return PaginationUtil.pageBySubData(voList, dto);
-    }
-
     @Override
     public PaginationResult<GroupTopicConsumedDetailVO> pagingGroupTopicConsumedMetrics(Long clusterPhyId,
                                                                                        String topicName,
                                                                                        String groupName,
                                                                                        List<String> latestMetricNames,
                                                                                        PaginationSortDTO dto) throws NotExistException, AdminOperateException {
-        ClusterPhy clusterPhy = clusterPhyService.getClusterByCluster(clusterPhyId);
-        if (clusterPhy == null) {
-            return PaginationResult.buildFailure(MsgConstant.getClusterPhyNotExist(clusterPhyId), dto);
-        }
-
         // 获取消费组消费的TopicPartition列表
-        Map<TopicPartition, Long> consumedOffsetMap = groupService.getGroupOffsetFromKafka(clusterPhyId, groupName);
+        Map<TopicPartition, Long> consumedOffsetMap = groupService.getGroupOffset(clusterPhyId, groupName);
         List<Integer> partitionList = consumedOffsetMap.keySet()
                 .stream()
                 .filter(elem -> elem.topic().equals(topicName))
@@ -198,21 +113,15 @@ public class GroupManagerImpl implements GroupManager {
         Collections.sort(partitionList);
 
         // 获取消费组当前运行信息
-        KSGroupDescription groupDescription = groupService.getGroupDescriptionFromKafka(clusterPhy, groupName);
+        ConsumerGroupDescription groupDescription = groupService.getGroupDescription(clusterPhyId, groupName);
 
         // 转换存储格式
-        Map<TopicPartition, KSMemberDescription> tpMemberMap = new HashMap<>();
-        // 如果不是connect集群
-        if (!groupDescription.protocolType().equals(CONNECT_CLUSTER_PROTOCOL_TYPE)) {
-            for (KSMemberDescription description : groupDescription.members()) {
-                // 如果是 Consumer 的 Description ,则 Assignment 的类型为 KSMemberConsumerAssignment 的
-                KSMemberConsumerAssignment assignment = (KSMemberConsumerAssignment) description.assignment();
-                for (TopicPartition tp : assignment.topicPartitions()) {
+        Map<TopicPartition, MemberDescription> tpMemberMap = new HashMap<>();
+        for (MemberDescription description: groupDescription.members()) {
+            for (TopicPartition tp: description.assignment().topicPartitions()) {
                 tpMemberMap.put(tp, description);
             }
         }
-        }
 
         // 获取指标
         PaginationResult<GroupMetrics> metricsResult = this.pagingGroupTopicPartitionMetrics(clusterPhyId, groupName, topicName, partitionList, latestMetricNames, dto);
@@ -227,11 +136,11 @@ public class GroupManagerImpl implements GroupManager {
             vo.setTopicName(topicName);
             vo.setPartitionId(groupMetrics.getPartitionId());
 
-            KSMemberDescription ksMemberDescription = tpMemberMap.get(new TopicPartition(topicName, groupMetrics.getPartitionId()));
-            if (ksMemberDescription != null) {
-                vo.setMemberId(ksMemberDescription.consumerId());
-                vo.setHost(ksMemberDescription.host());
-                vo.setClientId(ksMemberDescription.clientId());
+            MemberDescription memberDescription = tpMemberMap.get(new TopicPartition(topicName, groupMetrics.getPartitionId()));
+            if (memberDescription != null) {
+                vo.setMemberId(memberDescription.consumerId());
+                vo.setHost(memberDescription.host());
+                vo.setClientId(memberDescription.clientId());
             }
 
             vo.setLatestMetrics(groupMetrics);
@@ -257,18 +166,13 @@ public class GroupManagerImpl implements GroupManager {
             return rv;
         }
 
-        ClusterPhy clusterPhy = clusterPhyService.getClusterByCluster(dto.getClusterId());
-        if (clusterPhy == null) {
-            return Result.buildFromRSAndMsg(ResultStatus.CLUSTER_NOT_EXIST, MsgConstant.getClusterPhyNotExist(dto.getClusterId()));
-        }
-
-        KSGroupDescription description = groupService.getGroupDescriptionFromKafka(clusterPhy, dto.getGroupName());
+        ConsumerGroupDescription description = groupService.getGroupDescription(dto.getClusterId(), dto.getGroupName());
         if (ConsumerGroupState.DEAD.equals(description.state()) && !dto.isCreateIfNotExist()) {
             return Result.buildFromRSAndMsg(ResultStatus.KAFKA_OPERATE_FAILED, "group不存在, 重置失败");
         }
 
         if (!ConsumerGroupState.EMPTY.equals(description.state()) && !ConsumerGroupState.DEAD.equals(description.state())) {
-            return Result.buildFromRSAndMsg(ResultStatus.KAFKA_OPERATE_FAILED, String.format("group处于%s, 重置失败(仅Empty | Dead 情况可重置)", GroupStateEnum.getByRawState(description.state()).getState()));
+            return Result.buildFromRSAndMsg(ResultStatus.KAFKA_OPERATE_FAILED, String.format("group处于%s, 重置失败(仅Empty情况可重置)", GroupStateEnum.getByRawState(description.state()).getState()));
         }
 
         // 获取offset
@@ -281,111 +185,6 @@ public class GroupManagerImpl implements GroupManager {
         return groupService.resetGroupOffsets(dto.getClusterId(), dto.getGroupName(), offsetMapResult.getData(), operator);
     }
 
-    @Override
-    public Result<Void> deleteGroupOffsets(GroupOffsetDeleteDTO dto, String operator) throws Exception {
-        ClusterPhy clusterPhy = clusterPhyService.getClusterByCluster(dto.getClusterPhyId());
-        if (clusterPhy == null) {
-            return Result.buildFromRSAndMsg(ResultStatus.CLUSTER_NOT_EXIST, MsgConstant.getClusterPhyNotExist(dto.getClusterPhyId()));
-        }
-
-
-        // 按照group纬度进行删除
-        if (ValidateUtils.isBlank(dto.getGroupName())) {
-            return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, "groupName不允许为空");
-        }
-        if (DeleteGroupTypeEnum.GROUP.getCode().equals(dto.getDeleteType())) {
-            return opGroupService.deleteGroupOffset(
-                    new DeleteGroupParam(dto.getClusterPhyId(), dto.getGroupName(), DeleteGroupTypeEnum.GROUP),
-                    operator
-            );
-        }
-
-
-        // 按照topic纬度进行删除
-        if (ValidateUtils.isBlank(dto.getTopicName())) {
-            return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, "topicName不允许为空");
-        }
-        if (DeleteGroupTypeEnum.GROUP_TOPIC.getCode().equals(dto.getDeleteType())) {
-            return opGroupService.deleteGroupTopicOffset(
-                    new DeleteGroupTopicParam(dto.getClusterPhyId(), dto.getGroupName(), DeleteGroupTypeEnum.GROUP, dto.getTopicName()),
-                    operator
-            );
-        }
-
-
-        // 按照partition纬度进行删除
-        if (ValidateUtils.isNullOrLessThanZero(dto.getPartitionId())) {
-            return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, "partitionId不允许为空或小于0");
-        }
-        if (DeleteGroupTypeEnum.GROUP_TOPIC_PARTITION.getCode().equals(dto.getDeleteType())) {
-            return opGroupService.deleteGroupTopicPartitionOffset(
-                    new DeleteGroupTopicPartitionParam(dto.getClusterPhyId(), dto.getGroupName(), DeleteGroupTypeEnum.GROUP, dto.getTopicName(), dto.getPartitionId()),
-                    operator
-            );
-        }
-
-        return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, "deleteType类型错误");
-    }
-
-    @Override
-    public List<GroupTopicOverviewVO> getGroupTopicOverviewVOList(Long clusterPhyId, List<GroupMemberPO> groupMemberPOList) {
-        // 获取指标
-        Result<List<GroupMetrics>> metricsListResult = groupMetricService.listLatestMetricsAggByGroupTopicFromES(
-                clusterPhyId,
-                groupMemberPOList.stream().map(elem -> new GroupTopic(elem.getGroupName(), elem.getTopicName())).collect(Collectors.toList()),
-                Arrays.asList(GroupMetricVersionItems.GROUP_METRIC_LAG),
-                AggTypeEnum.MAX
-        );
-        if (metricsListResult.failed()) {
-            // 如果查询失败,则输出错误信息,但是依旧进行已有数据的返回
-            LOGGER.error("method=completeMetricData||clusterPhyId={}||result={}||errMsg=search es failed", clusterPhyId, metricsListResult);
-        }
-        return this.convert2GroupTopicOverviewVOList(groupMemberPOList, metricsListResult.getData());
-    }
-
-    @Override
-    public List<GroupTopicOverviewVO> getGroupTopicOverviewVOList(Long clusterPhyId, List<GroupMemberPO> poList, Integer timeoutUnitMs) {
-        Set<String> requestedGroupSet = new HashSet<>();
-
-        // 获取指标
-        Map<String, Map<String, Float>> groupTopicLagMap = new ConcurrentHashMap<>();
-        poList.forEach(elem -> {
-            if (requestedGroupSet.contains(elem.getGroupName())) {
-                // 该Group已经处理过
-                return;
-            }
-
-            requestedGroupSet.add(elem.getGroupName());
-            ApiCallThreadPoolService.runnableTask(
-                    String.format("clusterPhyId=%d||groupName=%s||msg=getGroupTopicLag", clusterPhyId, elem.getGroupName()),
-                    timeoutUnitMs,
-                    () -> {
-                        Result<List<GroupMetrics>> listResult = groupMetricService.collectGroupMetricsFromKafka(clusterPhyId, elem.getGroupName(), GroupMetricVersionItems.GROUP_METRIC_LAG);
-                        if (listResult == null || !listResult.hasData()) {
-                            return;
-                        }
-
-                        Map<String, Float> lagMetricMap = new HashMap<>();
-                        listResult.getData().forEach(item -> {
-                            Float newLag = item.getMetric(GroupMetricVersionItems.GROUP_METRIC_LAG);
-                            if (newLag == null) {
-                                return;
-                            }
-
-                            Float oldLag = lagMetricMap.getOrDefault(item.getTopic(), newLag);
-                            lagMetricMap.put(item.getTopic(), Math.max(oldLag, newLag));
-                        });
-
-                        groupTopicLagMap.put(elem.getGroupName(), lagMetricMap);
-                    }
-            );
-        });
-
-        ApiCallThreadPoolService.waitResult();
-
-        return this.convert2GroupTopicOverviewVOList(poList, groupTopicLagMap);
-    }
-
 
    /**************************************************** private method ****************************************************/
 
@@ -422,16 +221,16 @@ public class GroupManagerImpl implements GroupManager {
                 )));
         }
 
-        KSOffsetSpec offsetSpec = null;
+        OffsetSpec offsetSpec = null;
         if (OffsetTypeEnum.PRECISE_TIMESTAMP.getResetType() == dto.getResetType()) {
-            offsetSpec = KSOffsetSpec.forTimestamp(dto.getTimestamp());
+            offsetSpec = OffsetSpec.forTimestamp(dto.getTimestamp());
         } else if (OffsetTypeEnum.EARLIEST.getResetType() == dto.getResetType()) {
-            offsetSpec = KSOffsetSpec.earliest();
+            offsetSpec = OffsetSpec.earliest();
         } else {
-            offsetSpec = KSOffsetSpec.latest();
+            offsetSpec = OffsetSpec.latest();
         }
 
-        return partitionService.getPartitionOffsetFromKafka(dto.getClusterId(), dto.getTopicName(), offsetSpec);
+        return partitionService.getPartitionOffsetFromKafka(dto.getClusterId(), dto.getTopicName(), offsetSpec, dto.getTimestamp());
     }
 
     private List<GroupTopicOverviewVO> convert2GroupTopicOverviewVOList(List<GroupMemberPO> poList, List<GroupMetrics> metricsList) {
@@ -439,22 +238,13 @@ public class GroupManagerImpl implements GroupManager {
             metricsList = new ArrayList<>();
         }
 
-        // <GroupName, <TopicName, lag>>
-        Map<String, Map<String, Float>> metricsMap = new HashMap<>();
+        // <GroupName, <TopicName, GroupMetrics>>
+        Map<String, Map<String, GroupMetrics>> metricsMap = new HashMap<>();
         metricsList.stream().forEach(elem -> {
-            Float metricValue = elem.getMetrics().get(GroupMetricVersionItems.GROUP_METRIC_LAG);
-            if (metricValue == null) {
-                return;
-            }
-
             metricsMap.putIfAbsent(elem.getGroup(), new HashMap<>());
-            metricsMap.get(elem.getGroup()).put(elem.getTopic(), metricValue);
+            metricsMap.get(elem.getGroup()).put(elem.getTopic(), elem);
         });
 
-        return this.convert2GroupTopicOverviewVOList(poList, metricsMap);
-    }
-
-    private List<GroupTopicOverviewVO> convert2GroupTopicOverviewVOList(List<GroupMemberPO> poList, Map<String, Map<String, Float>> metricsMap) {
         List<GroupTopicOverviewVO> voList = new ArrayList<>();
         for (GroupMemberPO po: poList) {
             GroupTopicOverviewVO vo = ConvertUtil.obj2Obj(po, GroupTopicOverviewVO.class);
@@ -462,9 +252,9 @@ public class GroupManagerImpl implements GroupManager {
                 continue;
             }
 
-            Float metricValue = metricsMap.getOrDefault(po.getGroupName(), new HashMap<>()).get(po.getTopicName());
-            if (metricValue != null) {
-                vo.setMaxLag(ConvertUtil.Float2Long(metricValue));
+            GroupMetrics metrics = metricsMap.getOrDefault(po.getGroupName(), new HashMap<>()).get(po.getTopicName());
+            if (metrics != null) {
+                vo.setMaxLag(ConvertUtil.Float2Long(metrics.getMetrics().get(GroupMetricVersionItems.GROUP_METRIC_LAG)));
             }
 
             voList.add(vo);
@@ -482,11 +272,15 @@ public class GroupManagerImpl implements GroupManager {
 
 
         // 获取Group指标信息
-        Result<List<GroupMetrics>> groupMetricsResult = groupMetricService.collectGroupMetricsFromKafka(clusterPhyId, groupName, latestMetricNames == null ? Arrays.asList() : latestMetricNames);
+        Result<List<GroupMetrics>> groupMetricsResult = groupMetricService.listPartitionLatestMetricsFromES(
+                clusterPhyId,
+                groupName,
+                topicName,
+                latestMetricNames == null? Arrays.asList(): latestMetricNames
+        );
 
         // 转换Group指标
-        List<GroupMetrics> esGroupMetricsList = groupMetricsResult.hasData() ? groupMetricsResult.getData().stream().filter(elem -> topicName.equals(elem.getTopic())).collect(Collectors.toList()) : new ArrayList<>();
+        List<GroupMetrics> esGroupMetricsList = groupMetricsResult.hasData()? groupMetricsResult.getData(): new ArrayList<>();
         Map<Integer, GroupMetrics> esMetricsMap = new HashMap<>();
         for (GroupMetrics groupMetrics: esGroupMetricsList) {
             esMetricsMap.put(groupMetrics.getPartitionId(), groupMetrics);
@@ -502,4 +296,5 @@ public class GroupManagerImpl implements GroupManager {
                 dto
         );
     }
+
 }
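The removed `getGroupTopicOverviewVOList` overload in `GroupManagerImpl` aggregated consumer lag by keeping, for each topic, the maximum lag seen across partitions (the `Math.max` fold over `lagMetricMap`). A standalone sketch of that accumulation, with plain string samples standing in for the project's `GroupMetrics` objects:

```java
import java.util.HashMap;
import java.util.Map;

public class MaxLagFold {
    /** Fold (topic, lag) samples into the maximum lag observed per topic. */
    static Map<String, Float> maxLagByTopic(String[][] samples) {
        Map<String, Float> lagMetricMap = new HashMap<>();
        for (String[] sample : samples) {
            String topic = sample[0];
            Float newLag = Float.valueOf(sample[1]);
            // Same shape as the removed code: default to the new value,
            // then keep whichever lag is larger.
            Float oldLag = lagMetricMap.getOrDefault(topic, newLag);
            lagMetricMap.put(topic, Math.max(oldLag, newLag));
        }
        return lagMetricMap;
    }
}
```

The `getOrDefault(topic, newLag)` trick lets the first sample for a topic seed the map without a separate containsKey branch, which is why the original lambda reads the same way.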
@@ -22,7 +22,7 @@ import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
 import com.xiaojukeji.know.streaming.km.core.service.reassign.ReassignService;
 import com.xiaojukeji.know.streaming.km.core.service.topic.TopicMetricService;
 import com.xiaojukeji.know.streaming.km.core.service.topic.TopicService;
-import com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.TopicMetricVersionItems;
+import com.xiaojukeji.know.streaming.km.core.service.version.metrics.TopicMetricVersionItems;
 import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.stereotype.Component;
 
@@ -19,9 +19,4 @@ public interface OpTopicManager {
      * 扩分区
      */
     Result<Void> expandTopic(TopicExpansionDTO dto, String operator);
-
-    /**
-     * 清空Topic
-     */
-    Result<Void> truncateTopic(Long clusterPhyId, String topicName, String operator);
 }
@@ -1,10 +1,8 @@
 package com.xiaojukeji.know.streaming.km.biz.topic;
 
-import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationBaseDTO;
+import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationSortDTO;
 import com.xiaojukeji.know.streaming.km.common.bean.dto.topic.TopicRecordDTO;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
-import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupTopicOverviewVO;
 import com.xiaojukeji.know.streaming.km.common.bean.vo.topic.TopicBrokersPartitionsSummaryVO;
 import com.xiaojukeji.know.streaming.km.common.bean.vo.topic.TopicRecordVO;
 import com.xiaojukeji.know.streaming.km.common.bean.vo.topic.TopicStateVO;
@@ -25,6 +23,4 @@ public interface TopicStateManager {
     Result<List<TopicPartitionVO>> getTopicPartitions(Long clusterPhyId, String topicName, List<String> metricsNames);
 
     Result<TopicBrokersPartitionsSummaryVO> getTopicBrokersPartitionsSummary(Long clusterPhyId, String topicName);
-
-    PaginationResult<GroupTopicOverviewVO> pagingTopicGroupsOverview(Long clusterPhyId, String topicName, String searchGroupName, PaginationBaseDTO dto);
 }
@@ -10,20 +10,14 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.param.topic.TopicCreateParam;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.param.topic.TopicParam;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.param.topic.TopicPartitionExpandParam;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.param.topic.TopicTruncateParam;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.partition.Partition;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.result.ResultStatus;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.topic.Topic;
-import com.xiaojukeji.know.streaming.km.common.constant.KafkaConstant;
 import com.xiaojukeji.know.streaming.km.common.constant.MsgConstant;
-import com.xiaojukeji.know.streaming.km.common.utils.BackoffUtils;
-import com.xiaojukeji.know.streaming.km.common.utils.FutureUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
 import com.xiaojukeji.know.streaming.km.common.utils.kafka.KafkaReplicaAssignUtil;
 import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerService;
 import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
-import com.xiaojukeji.know.streaming.km.core.service.partition.PartitionService;
 import com.xiaojukeji.know.streaming.km.core.service.topic.OpTopicService;
 import com.xiaojukeji.know.streaming.km.core.service.topic.TopicService;
 import kafka.admin.AdminUtils;
@@ -58,9 +52,6 @@ public class OpTopicManagerImpl implements OpTopicManager {
     @Autowired
     private ClusterPhyService clusterPhyService;
 
-    @Autowired
-    private PartitionService partitionService;
-
     @Override
     public Result<Void> createTopic(TopicCreateDTO dto, String operator) {
         log.info("method=createTopic||param={}||operator={}.", dto, operator);
@@ -89,7 +80,7 @@ public class OpTopicManagerImpl implements OpTopicManager {
|
|||||||
);
|
);
|
||||||
|
|
||||||
// 创建Topic
|
// 创建Topic
|
||||||
Result<Void> createTopicRes = opTopicService.createTopic(
|
return opTopicService.createTopic(
|
||||||
new TopicCreateParam(
|
new TopicCreateParam(
|
||||||
dto.getClusterId(),
|
dto.getClusterId(),
|
||||||
dto.getTopicName(),
|
dto.getTopicName(),
|
||||||
@@ -99,21 +90,6 @@ public class OpTopicManagerImpl implements OpTopicManager {
|
|||||||
),
|
),
|
||||||
operator
|
operator
|
||||||
);
|
);
|
||||||
if (createTopicRes.successful()){
|
|
||||||
try{
|
|
||||||
FutureUtil.quickStartupFutureUtil.submitTask(() -> {
|
|
||||||
BackoffUtils.backoff(3000);
|
|
||||||
Result<List<Partition>> partitionsResult = partitionService.listPartitionsFromKafka(clusterPhy, dto.getTopicName());
|
|
||||||
if (partitionsResult.successful()){
|
|
||||||
partitionService.updatePartitions(clusterPhy.getId(), dto.getTopicName(), partitionsResult.getData(), new ArrayList<>());
|
|
||||||
}
|
|
||||||
});
|
|
||||||
}catch (Exception e) {
|
|
||||||
log.error("method=createTopic||param={}||operator={}||msg=add partition to db failed||errMsg=exception", dto, operator, e);
|
|
||||||
return Result.buildFromRSAndMsg(ResultStatus.MYSQL_OPERATE_FAILED, "Topic创建成功,但记录Partition到DB中失败,等待定时任务同步partition信息");
|
|
||||||
}
|
|
||||||
}
|
|
||||||
return createTopicRes;
|
|
||||||
}
|
}
|
||||||
|
|
||||||
@Override
|
@Override
|
||||||
@@ -158,16 +134,6 @@ public class OpTopicManagerImpl implements OpTopicManager {
|
|||||||
return rv;
|
return rv;
|
||||||
}
|
}
|
||||||
|
|
||||||
@Override
|
|
||||||
public Result<Void> truncateTopic(Long clusterPhyId, String topicName, String operator) {
|
|
||||||
// 清空Topic
|
|
||||||
Result<Void> rv = opTopicService.truncateTopic(new TopicTruncateParam(clusterPhyId, topicName, KafkaConstant.TOPICK_TRUNCATE_DEFAULT_OFFSET), operator);
|
|
||||||
if (rv.failed()) {
|
|
||||||
return rv;
|
|
||||||
}
|
|
||||||
|
|
||||||
return Result.buildSuc();
|
|
||||||
}
|
|
||||||
|
|
||||||
/**************************************************** private method ****************************************************/
|
/**************************************************** private method ****************************************************/
|
||||||
|
|
||||||
|
|||||||
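On the master side, `createTopic` follows up the synchronous create with a fire-and-forget task (the block removed in this diff): submit a task, back off about 3 seconds so partition metadata can propagate, then read the partitions from Kafka and record them in the DB, falling back to a periodic sync job on failure. A minimal JDK-only sketch of that delayed-task pattern, assuming nothing about the project's `FutureUtil`/`BackoffUtils` helpers — `ScheduledExecutorService` stands in for both:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class DelayedPartitionSync {
    static final AtomicBoolean synced = new AtomicBoolean(false);

    // Run syncTask once after backoffMs, mirroring submitTask + backoff(3000)
    // in the removed block; a real caller would not wait on the returned future.
    static ScheduledFuture<?> scheduleSync(ScheduledExecutorService pool, long backoffMs, Runnable syncTask) {
        return pool.schedule(syncTask, backoffMs, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws Exception {
        ScheduledExecutorService pool = Executors.newSingleThreadScheduledExecutor();
        // In the real code the task would call listPartitionsFromKafka + updatePartitions.
        scheduleSync(pool, 50, () -> synced.set(true)).get(); // .get() only to observe it in this demo
        System.out.println("synced=" + synced.get());
        pool.shutdown();
    }
}
```

The caller deliberately does not block on the task: topic creation already succeeded, and a stale partitions table only degrades the UI until the scheduled sync runs.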
@@ -16,7 +16,7 @@ import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
 import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerConfigService;
 import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerService;
 import com.xiaojukeji.know.streaming.km.core.service.topic.TopicConfigService;
-import com.xiaojukeji.know.streaming.km.core.service.version.BaseKafkaVersionControlService;
+import com.xiaojukeji.know.streaming.km.core.service.version.BaseVersionControlService;
 import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.stereotype.Component;
 
@@ -27,7 +27,7 @@ import java.util.stream.Collectors;
 import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionEnum.*;
 
 @Component
-public class TopicConfigManagerImpl extends BaseKafkaVersionControlService implements TopicConfigManager {
+public class TopicConfigManagerImpl extends BaseVersionControlService implements TopicConfigManager {
     private static final ILog log = LogFactory.getLog(TopicConfigManagerImpl.class);
 
     private static final String GET_DEFAULT_TOPIC_CONFIG = "getDefaultTopicConfig";
@@ -2,23 +2,17 @@ package com.xiaojukeji.know.streaming.km.biz.topic.impl;
 
 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
-import com.xiaojukeji.know.streaming.km.biz.group.GroupManager;
 import com.xiaojukeji.know.streaming.km.biz.topic.TopicStateManager;
-import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationBaseDTO;
 import com.xiaojukeji.know.streaming.km.common.bean.dto.topic.TopicRecordDTO;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.broker.Broker;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.PartitionMetrics;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.TopicMetrics;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.offset.KSOffsetSpec;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.partition.Partition;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.result.ResultStatus;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.topic.Topic;
-import com.xiaojukeji.know.streaming.km.common.bean.po.group.GroupMemberPO;
 import com.xiaojukeji.know.streaming.km.common.bean.vo.broker.BrokerReplicaSummaryVO;
-import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupTopicOverviewVO;
 import com.xiaojukeji.know.streaming.km.common.bean.vo.topic.TopicBrokersPartitionsSummaryVO;
 import com.xiaojukeji.know.streaming.km.common.bean.vo.topic.TopicRecordVO;
 import com.xiaojukeji.know.streaming.km.common.bean.vo.topic.TopicStateVO;
@@ -28,7 +22,6 @@ import com.xiaojukeji.know.streaming.km.common.bean.vo.topic.partition.TopicPart
 import com.xiaojukeji.know.streaming.km.common.constant.Constant;
 import com.xiaojukeji.know.streaming.km.common.constant.KafkaConstant;
 import com.xiaojukeji.know.streaming.km.common.constant.MsgConstant;
-import com.xiaojukeji.know.streaming.km.common.constant.PaginationConstant;
 import com.xiaojukeji.know.streaming.km.common.converter.TopicVOConverter;
 import com.xiaojukeji.know.streaming.km.common.enums.OffsetTypeEnum;
 import com.xiaojukeji.know.streaming.km.common.enums.SortTypeEnum;
@@ -39,15 +32,15 @@ import com.xiaojukeji.know.streaming.km.common.utils.PaginationUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
 import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerService;
 import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
-import com.xiaojukeji.know.streaming.km.core.service.config.KSConfigUtils;
-import com.xiaojukeji.know.streaming.km.core.service.group.GroupService;
 import com.xiaojukeji.know.streaming.km.core.service.partition.PartitionMetricService;
 import com.xiaojukeji.know.streaming.km.core.service.partition.PartitionService;
 import com.xiaojukeji.know.streaming.km.core.service.topic.TopicConfigService;
 import com.xiaojukeji.know.streaming.km.core.service.topic.TopicMetricService;
 import com.xiaojukeji.know.streaming.km.core.service.topic.TopicService;
-import com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.TopicMetricVersionItems;
-import com.xiaojukeji.know.streaming.km.core.utils.ApiCallThreadPoolService;
+import com.xiaojukeji.know.streaming.km.core.service.version.metrics.TopicMetricVersionItems;
+import org.apache.commons.lang3.ObjectUtils;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.kafka.clients.admin.OffsetSpec;
 import org.apache.kafka.clients.consumer.*;
 import org.apache.kafka.common.TopicPartition;
 import org.apache.kafka.common.config.TopicConfig;
@@ -61,7 +54,7 @@ import java.util.stream.Collectors;
 
 @Component
 public class TopicStateManagerImpl implements TopicStateManager {
-    private static final ILog LOGGER = LogFactory.getLog(TopicStateManagerImpl.class);
+    private static final ILog log = LogFactory.getLog(TopicStateManagerImpl.class);
 
     @Autowired
     private TopicService topicService;
@@ -84,15 +77,6 @@ public class TopicStateManagerImpl implements TopicStateManager {
     @Autowired
     private TopicConfigService topicConfigService;
 
-    @Autowired
-    private GroupService groupService;
-
-    @Autowired
-    private GroupManager groupManager;
-
-    @Autowired
-    private KSConfigUtils ksConfigUtils;
-
     @Override
     public TopicBrokerAllVO getTopicBrokerAll(Long clusterPhyId, String topicName, String searchBrokerHost) throws NotExistException {
         Topic topic = topicService.getTopic(clusterPhyId, topicName);
@@ -105,7 +89,7 @@ public class TopicStateManagerImpl implements TopicStateManager {
         TopicBrokerAllVO allVO = new TopicBrokerAllVO();
 
         allVO.setTotal(topic.getBrokerIdSet().size());
-        allVO.setLive((int)brokerMap.values().stream().filter(Broker::alive).count());
+        allVO.setLive((int)brokerMap.values().stream().filter(elem -> elem.alive()).count());
         allVO.setDead(allVO.getTotal() - allVO.getLive());
 
         allVO.setPartitionCount(topic.getPartitionNum());
@@ -147,38 +131,107 @@ public class TopicStateManagerImpl implements TopicStateManager {
         }
 
         // Fetch each partition's begin offset
-        Result<Map<TopicPartition, Long>> beginOffsetsMapResult = partitionService.getPartitionOffsetFromKafka(clusterPhyId, topicName, dto.getFilterPartitionId(), KSOffsetSpec.earliest());
+        Result<Map<TopicPartition, Long>> beginOffsetsMapResult = partitionService.getPartitionOffsetFromKafka(clusterPhyId, topicName, dto.getFilterPartitionId(), OffsetSpec.earliest(), null);
         if (beginOffsetsMapResult.failed()) {
             return Result.buildFromIgnoreData(beginOffsetsMapResult);
         }
         // Fetch each partition's end offset
-        Result<Map<TopicPartition, Long>> endOffsetsMapResult = partitionService.getPartitionOffsetFromKafka(clusterPhyId, topicName, dto.getFilterPartitionId(), KSOffsetSpec.latest());
+        Result<Map<TopicPartition, Long>> endOffsetsMapResult = partitionService.getPartitionOffsetFromKafka(clusterPhyId, topicName, dto.getFilterPartitionId(), OffsetSpec.latest(), null);
         if (endOffsetsMapResult.failed()) {
             return Result.buildFromIgnoreData(endOffsetsMapResult);
         }
 
-        // Collect the records
-        List<TopicRecordVO> voList = this.getTopicMessages(clusterPhy, topicName, beginOffsetsMapResult.getData(), endOffsetsMapResult.getData(), startTime, dto);
+        List<TopicRecordVO> voList = new ArrayList<>();
+
+        KafkaConsumer<String, String> kafkaConsumer = null;
+        try {
+            // Create the kafka-consumer
+            kafkaConsumer = new KafkaConsumer<>(this.generateClientProperties(clusterPhy, dto.getMaxRecords()));
+
+            List<TopicPartition> partitionList = new ArrayList<>();
+            long maxMessage = 0;
+            for (Map.Entry<TopicPartition, Long> entry : endOffsetsMapResult.getData().entrySet()) {
+                long begin = beginOffsetsMapResult.getData().get(entry.getKey());
+                long end = entry.getValue();
+                if (begin == end){
+                    continue;
+                }
+                maxMessage += end - begin;
+                partitionList.add(entry.getKey());
+            }
+            maxMessage = Math.min(maxMessage, dto.getMaxRecords());
+            kafkaConsumer.assign(partitionList);
+
+            Map<TopicPartition, OffsetAndTimestamp> partitionOffsetAndTimestampMap = new HashMap<>();
+            // Look up each partition's offset at the given timestamp (when querying messages from a start time)
+            if (OffsetTypeEnum.PRECISE_TIMESTAMP.getResetType() == dto.getFilterOffsetReset()) {
+                Map<TopicPartition, Long> timestampsToSearch = new HashMap<>();
+                partitionList.forEach(topicPartition -> {
+                    timestampsToSearch.put(topicPartition, dto.getStartTimestampUnitMs());
+                });
+                partitionOffsetAndTimestampMap = kafkaConsumer.offsetsForTimes(timestampsToSearch);
+            }
+
+            for (TopicPartition partition : partitionList) {
+                if (OffsetTypeEnum.EARLIEST.getResetType() == dto.getFilterOffsetReset()) {
+                    // Reset to earliest
+                    kafkaConsumer.seek(partition, beginOffsetsMapResult.getData().get(partition));
+                } else if (OffsetTypeEnum.PRECISE_TIMESTAMP.getResetType() == dto.getFilterOffsetReset()) {
+                    // Reset to the specified timestamp
+                    kafkaConsumer.seek(partition, partitionOffsetAndTimestampMap.get(partition).offset());
+                } else if (OffsetTypeEnum.PRECISE_OFFSET.getResetType() == dto.getFilterOffsetReset()) {
+                    // Reset to the specified offset
+
+                } else {
+                    // Default: reset to latest
+                    kafkaConsumer.seek(partition, Math.max(beginOffsetsMapResult.getData().get(partition), endOffsetsMapResult.getData().get(partition) - dto.getMaxRecords()));
+                }
+            }
+
+            // KafkaConstant.POLL_ONCE_TIMEOUT_UNIT_MS is subtracted because each poll itself takes time; without it, a poll could overshoot the requested deadline
+            while (System.currentTimeMillis() - startTime <= dto.getPullTimeoutUnitMs() && voList.size() < maxMessage) {
+                ConsumerRecords<String, String> consumerRecords = kafkaConsumer.poll(Duration.ofMillis(KafkaConstant.POLL_ONCE_TIMEOUT_UNIT_MS));
+                for (ConsumerRecord<String, String> consumerRecord : consumerRecords) {
+                    if (this.checkIfIgnore(consumerRecord, dto.getFilterKey(), dto.getFilterValue())) {
+                        continue;
+                    }
+
+                    voList.add(TopicVOConverter.convert2TopicRecordVO(topicName, consumerRecord));
+                    if (voList.size() >= dto.getMaxRecords()) {
+                        break;
+                    }
+                }
+
+                // Return on timeout
+                if (System.currentTimeMillis() - startTime + KafkaConstant.POLL_ONCE_TIMEOUT_UNIT_MS > dto.getPullTimeoutUnitMs()
+                        || voList.size() > dto.getMaxRecords()) {
+                    break;
+                }
+            }
 
-        // Sort
-        if (ValidateUtils.isBlank(dto.getSortType())) {
-            // Default: sort by time, descending
-            dto.setSortType(SortTypeEnum.DESC.getSortType());
-        }
-        if (ValidateUtils.isBlank(dto.getSortField())) {
-            // Default: sort by the timestampUnitMs field
-            dto.setSortField(PaginationConstant.TOPIC_RECORDS_TIME_SORTED_FIELD);
-        }
-
-        if (PaginationConstant.TOPIC_RECORDS_TIME_SORTED_FIELD.equals(dto.getSortField())) {
-            // When sorting by time, the secondary sort key is offset
-            PaginationUtil.pageBySort(voList, dto.getSortField(), dto.getSortType(), PaginationConstant.TOPIC_RECORDS_OFFSET_SORTED_FIELD, dto.getSortType());
-        } else {
-            // Otherwise, the secondary sort key is time
-            PaginationUtil.pageBySort(voList, dto.getSortField(), dto.getSortType(), PaginationConstant.TOPIC_RECORDS_TIME_SORTED_FIELD, dto.getSortType());
-        }
-
-        return Result.buildSuc(voList.subList(0, Math.min(dto.getMaxRecords(), voList.size())));
+            // Sort
+            if (ObjectUtils.isNotEmpty(voList)) {
+                // Default: sort by time, descending
+                if (StringUtils.isBlank(dto.getSortType())) {
+                    dto.setSortType(SortTypeEnum.DESC.getSortType());
+                }
+                PaginationUtil.pageBySort(voList, dto.getSortField(), dto.getSortType());
+            }
+
+            return Result.buildSuc(voList.subList(0, Math.min(dto.getMaxRecords(), voList.size())));
+        } catch (Exception e) {
+            log.error("method=getTopicMessages||clusterPhyId={}||topicName={}||param={}||errMsg=exception", clusterPhyId, topicName, dto, e);
+
+            throw new AdminOperateException(e.getMessage(), e, ResultStatus.KAFKA_OPERATE_FAILED);
+        } finally {
+            if (kafkaConsumer != null) {
+                try {
+                    kafkaConsumer.close(Duration.ofMillis(KafkaConstant.POLL_ONCE_TIMEOUT_UNIT_MS));
+                } catch (Exception e) {
+                    // ignore
+                }
+            }
+        }
     }
 
     @Override
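The beta-side hunk above sizes the read before polling: for each partition it adds `end - begin` to a message budget, skips empty partitions, and caps the budget at `dto.getMaxRecords()`; the default "latest" branch then seeks to `max(begin, end - maxRecords)` so at most `maxRecords` tail messages are read per partition. A standalone sketch of just that arithmetic, with plain integer partition ids standing in for `TopicPartition` keys and the class/method names being illustrative, not from the project:

```java
import java.util.HashMap;
import java.util.Map;

public class PollBudget {
    // Total readable messages across partitions, capped at maxRecords;
    // partitions with begin == end hold nothing and are skipped.
    static long maxMessages(Map<Integer, Long> begin, Map<Integer, Long> end, long maxRecords) {
        long total = 0;
        for (Map.Entry<Integer, Long> e : end.entrySet()) {
            long b = begin.get(e.getKey());
            if (b == e.getValue()) {
                continue;
            }
            total += e.getValue() - b;
        }
        return Math.min(total, maxRecords);
    }

    // Default "latest" seek target: at most maxRecords from the tail,
    // but never before the partition's begin offset.
    static long tailSeekOffset(long begin, long end, long maxRecords) {
        return Math.max(begin, end - maxRecords);
    }

    public static void main(String[] args) {
        Map<Integer, Long> begin = new HashMap<>();
        Map<Integer, Long> end = new HashMap<>();
        begin.put(0, 5L);  end.put(0, 10L);   // 5 readable messages
        begin.put(1, 3L);  end.put(1, 3L);    // empty partition, skipped
        System.out.println(maxMessages(begin, end, 100)); // prints 5
        System.out.println(tailSeekOffset(0, 1000, 50));  // prints 950
    }
}
```

The budget lets the poll loop stop early on sparse topics instead of spinning until the pull timeout expires.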
@@ -233,38 +286,27 @@ public class TopicStateManagerImpl implements TopicStateManager {
 
     @Override
     public Result<List<TopicPartitionVO>> getTopicPartitions(Long clusterPhyId, String topicName, List<String> metricsNames) {
-        long startTime = System.currentTimeMillis();
-
         List<Partition> partitionList = partitionService.listPartitionByTopic(clusterPhyId, topicName);
         if (ValidateUtils.isEmptyList(partitionList)) {
             return Result.buildSuc();
         }
 
-        Map<Integer, PartitionMetrics> metricsMap = new HashMap<>();
-        ApiCallThreadPoolService.runnableTask(
-                String.format("clusterPhyId=%d||topicName=%s||method=getTopicPartitions", clusterPhyId, topicName),
-                ksConfigUtils.getApiCallLeftTimeUnitMs(System.currentTimeMillis() - startTime),
-                () -> {
-                    Result<List<PartitionMetrics>> metricsResult = partitionMetricService.collectPartitionsMetricsFromKafka(clusterPhyId, topicName, metricsNames);
-                    if (metricsResult.failed()) {
-                        // Only log the error; do not return a failure directly
-                        LOGGER.error(
-                                "method=getTopicPartitions||clusterPhyId={}||topicName={}||result={}||msg=get metrics from kafka failed",
-                                clusterPhyId, topicName, metricsResult
-                        );
-                    }
-
-                    for (PartitionMetrics metrics: metricsResult.getData()) {
-                        metricsMap.put(metrics.getPartitionId(), metrics);
-                    }
-                }
-        );
-        boolean finished = ApiCallThreadPoolService.waitResultAndReturnFinished(1);
-
-        if (!finished && metricsMap.isEmpty()) {
-            // Not finished -> log it
-            LOGGER.error("method=getTopicPartitions||clusterPhyId={}||topicName={}||msg=get metrics from kafka failed", clusterPhyId, topicName);
-        }
+        Result<List<PartitionMetrics>> metricsResult = partitionMetricService.collectPartitionsMetricsFromKafka(clusterPhyId, topicName, metricsNames);
+        if (metricsResult.failed()) {
+            // Only log the error; do not return a failure directly
+            log.error(
+                    "class=TopicStateManagerImpl||method=getTopicPartitions||clusterPhyId={}||topicName={}||result={}||msg=get metrics from es failed",
+                    clusterPhyId, topicName, metricsResult
+            );
+        }
+
+        // Convert to a map
+        Map<Integer, PartitionMetrics> metricsMap = new HashMap<>();
+        if (metricsResult.hasData()) {
+            for (PartitionMetrics metrics: metricsResult.getData()) {
+                metricsMap.put(metrics.getPartitionId(), metrics);
+            }
+        }
 
         List<TopicPartitionVO> voList = new ArrayList<>();
         for (Partition partition: partitionList) {
@@ -282,7 +324,7 @@ public class TopicStateManagerImpl implements TopicStateManager {
 
         // Broker statistics
         vo.setBrokerCount(brokerMap.size());
-        vo.setLiveBrokerCount((int)brokerMap.values().stream().filter(Broker::alive).count());
+        vo.setLiveBrokerCount((int)brokerMap.values().stream().filter(elem -> elem.alive()).count());
         vo.setDeadBrokerCount(vo.getBrokerCount() - vo.getLiveBrokerCount());
 
         // Partition statistics
@@ -304,25 +346,6 @@ public class TopicStateManagerImpl implements TopicStateManager {
         return Result.buildSuc(vo);
     }
 
-    @Override
-    public PaginationResult<GroupTopicOverviewVO> pagingTopicGroupsOverview(Long clusterPhyId, String topicName, String searchGroupName, PaginationBaseDTO dto) {
-        long startTimeUnitMs = System.currentTimeMillis();
-
-        PaginationResult<GroupMemberPO> paginationResult = groupService.pagingGroupMembers(clusterPhyId, topicName, "", "", searchGroupName, dto);
-
-        if (!paginationResult.hasData()) {
-            return PaginationResult.buildSuc(new ArrayList<>(), paginationResult);
-        }
-
-        List<GroupTopicOverviewVO> groupTopicVOList = groupManager.getGroupTopicOverviewVOList(
-                clusterPhyId,
-                paginationResult.getData().getBizData(),
-                ksConfigUtils.getApiCallLeftTimeUnitMs(System.currentTimeMillis() - startTimeUnitMs) // timeout
-        );
-
-        return PaginationResult.buildSuc(groupTopicVOList, paginationResult);
-    }
-
     /**************************************************** private method ****************************************************/
 
     private boolean checkIfIgnore(ConsumerRecord<String, String> consumerRecord, String filterKey, String filterValue) {
@@ -338,8 +361,11 @@ public class TopicStateManagerImpl implements TopicStateManager {
             // ignore
             return true;
         }
+        if (filterValue != null && consumerRecord.value() != null && !consumerRecord.value().contains(filterValue)) {
+            return true;
+        }
 
-        return (filterValue != null && consumerRecord.value() != null && !consumerRecord.value().contains(filterValue));
+        return false;
     }
 
     private TopicBrokerSingleVO getTopicBrokerSingle(Long clusterPhyId,
@@ -399,90 +425,4 @@ public class TopicStateManagerImpl implements TopicStateManager {
         props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, Math.max(2, Math.min(5, maxPollRecords)));
         return props;
     }
 
-    private List<TopicRecordVO> getTopicMessages(ClusterPhy clusterPhy,
-                                                 String topicName,
-                                                 Map<TopicPartition, Long> beginOffsetsMap,
-                                                 Map<TopicPartition, Long> endOffsetsMap,
-                                                 long startTime,
-                                                 TopicRecordDTO dto) throws AdminOperateException {
-        List<TopicRecordVO> voList = new ArrayList<>();
-
-        try (KafkaConsumer<String, String> kafkaConsumer = new KafkaConsumer<>(this.generateClientProperties(clusterPhy, dto.getMaxRecords()))) {
-            // Seek to the specified positions
-            long maxMessage = this.assignAndSeekToSpecifiedOffset(kafkaConsumer, beginOffsetsMap, endOffsetsMap, dto);
-
-            // KafkaConstant.POLL_ONCE_TIMEOUT_UNIT_MS is subtracted because each poll itself takes time; without it, a poll could overshoot the requested deadline
-            while (System.currentTimeMillis() - startTime <= dto.getPullTimeoutUnitMs() && voList.size() < maxMessage) {
-                ConsumerRecords<String, String> consumerRecords = kafkaConsumer.poll(Duration.ofMillis(KafkaConstant.POLL_ONCE_TIMEOUT_UNIT_MS));
-                for (ConsumerRecord<String, String> consumerRecord : consumerRecords) {
-                    if (this.checkIfIgnore(consumerRecord, dto.getFilterKey(), dto.getFilterValue())) {
-                        continue;
-                    }
-
-                    voList.add(TopicVOConverter.convert2TopicRecordVO(topicName, consumerRecord));
-                    if (voList.size() >= dto.getMaxRecords()) {
-                        break;
-                    }
-                }
-
-                // Return on timeout
-                if (System.currentTimeMillis() - startTime + KafkaConstant.POLL_ONCE_TIMEOUT_UNIT_MS > dto.getPullTimeoutUnitMs()
-                        || voList.size() > dto.getMaxRecords()) {
-                    break;
-                }
-            }
-
-            return voList;
-        } catch (Exception e) {
-            LOGGER.error("method=getTopicMessages||clusterPhyId={}||topicName={}||param={}||errMsg=exception", clusterPhy.getId(), topicName, dto, e);
-
-            throw new AdminOperateException(e.getMessage(), e, ResultStatus.KAFKA_OPERATE_FAILED);
-        }
-    }
-
-    private long assignAndSeekToSpecifiedOffset(KafkaConsumer<String, String> kafkaConsumer,
-                                                Map<TopicPartition, Long> beginOffsetsMap,
-                                                Map<TopicPartition, Long> endOffsetsMap,
-                                                TopicRecordDTO dto) {
-        List<TopicPartition> partitionList = new ArrayList<>();
-        long maxMessage = 0;
-        for (Map.Entry<TopicPartition, Long> entry : endOffsetsMap.entrySet()) {
-            long begin = beginOffsetsMap.get(entry.getKey());
-            long end = entry.getValue();
-            if (begin == end){
-                continue;
-            }
-            maxMessage += end - begin;
-            partitionList.add(entry.getKey());
-        }
-        maxMessage = Math.min(maxMessage, dto.getMaxRecords());
-        kafkaConsumer.assign(partitionList);
-
-        Map<TopicPartition, OffsetAndTimestamp> partitionOffsetAndTimestampMap = new HashMap<>();
-        // Look up each partition's offset at the given timestamp (when querying messages from a start time)
-        if (OffsetTypeEnum.PRECISE_TIMESTAMP.getResetType() == dto.getFilterOffsetReset()) {
-            Map<TopicPartition, Long> timestampsToSearch = new HashMap<>();
-            partitionList.forEach(topicPartition -> timestampsToSearch.put(topicPartition, dto.getStartTimestampUnitMs()));
-            partitionOffsetAndTimestampMap = kafkaConsumer.offsetsForTimes(timestampsToSearch);
-        }
-
-        for (TopicPartition partition : partitionList) {
-            if (OffsetTypeEnum.EARLIEST.getResetType() == dto.getFilterOffsetReset()) {
-                // Reset to earliest
-                kafkaConsumer.seek(partition, beginOffsetsMap.get(partition));
-            } else if (OffsetTypeEnum.PRECISE_TIMESTAMP.getResetType() == dto.getFilterOffsetReset()) {
-                // Reset to the specified timestamp
-                kafkaConsumer.seek(partition, partitionOffsetAndTimestampMap.get(partition).offset());
-            } else if (OffsetTypeEnum.PRECISE_OFFSET.getResetType() == dto.getFilterOffsetReset()) {
-                // Reset to the specified offset
-
-            } else {
-                // Default: reset to latest
-                kafkaConsumer.seek(partition, Math.max(beginOffsetsMap.get(partition), endOffsetsMap.get(partition) - dto.getMaxRecords()));
-            }
-        }
-
-        return maxMessage;
-    }
 }
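The `checkIfIgnore` change above splits the value filter out of a compound return expression: a record is skipped when a filter string is set, the corresponding record field is non-null, and the field does not contain the filter. A self-contained sketch of that predicate; the key-filter branch sits mostly outside the hunk, so its condition here is an assumption modeled on the value branch, and the class name is illustrative:

```java
public class RecordFilter {
    // Returns true when the record should be skipped. Mirrors checkIfIgnore:
    // each filter only applies when both the filter and the field are non-null.
    static boolean ignore(String key, String value, String filterKey, String filterValue) {
        if (filterKey != null && key != null && !key.contains(filterKey)) {
            // key filter set and key does not match -> ignore
            return true;
        }
        if (filterValue != null && value != null && !value.contains(filterValue)) {
            // value filter set and value does not match -> ignore
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(ignore("order-1", "payload", null, "pay")); // prints false (value matches)
        System.out.println(ignore("order-1", "payload", null, "zzz")); // prints true  (value filter misses)
        System.out.println(ignore("order-1", null, null, "zzz"));      // prints false (null value is never filtered)
    }
}
```

Note the asymmetry this preserves: a record with a null key or value always passes, since the filters only constrain fields that are present.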
@@ -20,7 +20,7 @@ public interface VersionControlManager {
      * List all Kafka versions currently supported by KS
      * @return
      */
-    Result<Map<String, Long>> listAllKafkaVersions();
+    Result<Map<String, Long>> listAllVersions();
 
     /**
      * List all metrics of type `type` for cluster `clusterId`, whether supported or not
@@ -28,7 +28,7 @@ public interface VersionControlManager {
      * @param type
      * @return
      */
-    Result<List<VersionItemVO>> listKafkaClusterVersionControlItem(Long clusterId, Integer type);
+    Result<List<VersionItemVO>> listClusterVersionControlItem(Long clusterId, Integer type);
 
     /**
      * Get the metric display configuration set by the current user
@@ -14,10 +14,10 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionControlItem;
 import com.xiaojukeji.know.streaming.km.common.bean.vo.config.metric.UserMetricConfigVO;
 import com.xiaojukeji.know.streaming.km.common.bean.vo.version.VersionItemVO;
+import com.xiaojukeji.know.streaming.km.common.constant.Constant;
 import com.xiaojukeji.know.streaming.km.common.enums.version.VersionEnum;
 import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.VersionUtil;
-import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
 import com.xiaojukeji.know.streaming.km.core.service.version.VersionControlService;
 import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.stereotype.Service;
@@ -30,14 +30,10 @@ import java.util.stream.Collectors;
 
 import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionEnum.V_MAX;
 import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum.*;
-import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.BrokerMetricVersionItems.*;
+import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.BrokerMetricVersionItems.*;
|
||||||
import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.ClusterMetricVersionItems.*;
|
import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.ClusterMetricVersionItems.*;
|
||||||
import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.GroupMetricVersionItems.*;
|
import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.GroupMetricVersionItems.*;
|
||||||
import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.TopicMetricVersionItems.*;
|
import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.TopicMetricVersionItems.*;
|
||||||
import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.connect.MirrorMakerMetricVersionItems.*;
|
|
||||||
import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.connect.ConnectClusterMetricVersionItems.*;
|
|
||||||
import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.connect.ConnectorMetricVersionItems.*;
|
|
||||||
import static com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.ZookeeperMetricVersionItems.*;
|
|
||||||
|
|
||||||
@Service
|
@Service
|
||||||
public class VersionControlManagerImpl implements VersionControlManager {
|
public class VersionControlManagerImpl implements VersionControlManager {
|
||||||
@@ -52,8 +48,7 @@ public class VersionControlManagerImpl implements VersionControlManager {
|
|||||||
|
|
||||||
@PostConstruct
|
@PostConstruct
|
||||||
public void init(){
|
public void init(){
|
||||||
// topic
|
defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_HEALTH_SCORE, true));
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_HEALTH_STATE, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_FAILED_FETCH_REQ, true));
|
defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_FAILED_FETCH_REQ, true));
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_FAILED_PRODUCE_REQ, true));
|
defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_FAILED_PRODUCE_REQ, true));
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_UNDER_REPLICA_PARTITIONS, true));
|
defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_UNDER_REPLICA_PARTITIONS, true));
|
||||||
@@ -63,8 +58,7 @@ public class VersionControlManagerImpl implements VersionControlManager {
|
|||||||
defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_BYTES_REJECTED, true));
|
defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_BYTES_REJECTED, true));
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_MESSAGE_IN, true));
|
defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_MESSAGE_IN, true));
|
||||||
|
|
||||||
// cluster
|
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_HEALTH_SCORE, true));
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_HEALTH_STATE, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_ACTIVE_CONTROLLER_COUNT, true));
|
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_ACTIVE_CONTROLLER_COUNT, true));
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_BYTES_IN, true));
|
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_BYTES_IN, true));
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_BYTES_OUT, true));
|
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_BYTES_OUT, true));
|
||||||
@@ -79,14 +73,12 @@ public class VersionControlManagerImpl implements VersionControlManager {
|
|||||||
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_GROUP_REBALANCES, true));
|
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_GROUP_REBALANCES, true));
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_JOB_RUNNING, true));
|
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_JOB_RUNNING, true));
|
||||||
|
|
||||||
// group
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_GROUP.getCode(), GROUP_METRIC_OFFSET_CONSUMED, true));
|
defaultMetrics.add(new UserMetricConfig(METRIC_GROUP.getCode(), GROUP_METRIC_OFFSET_CONSUMED, true));
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_GROUP.getCode(), GROUP_METRIC_LAG, true));
|
defaultMetrics.add(new UserMetricConfig(METRIC_GROUP.getCode(), GROUP_METRIC_LAG, true));
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_GROUP.getCode(), GROUP_METRIC_STATE, true));
|
defaultMetrics.add(new UserMetricConfig(METRIC_GROUP.getCode(), GROUP_METRIC_STATE, true));
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_GROUP.getCode(), GROUP_METRIC_HEALTH_STATE, true));
|
defaultMetrics.add(new UserMetricConfig(METRIC_GROUP.getCode(), GROUP_METRIC_HEALTH_SCORE, true));
|
||||||
|
|
||||||
// broker
|
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_HEALTH_SCORE, true));
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_HEALTH_STATE, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_CONNECTION_COUNT, true));
|
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_CONNECTION_COUNT, true));
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_MESSAGE_IN, true));
|
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_MESSAGE_IN, true));
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_NETWORK_RPO_AVG_IDLE, true));
|
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_NETWORK_RPO_AVG_IDLE, true));
|
||||||
@@ -99,73 +91,8 @@ public class VersionControlManagerImpl implements VersionControlManager {
|
|||||||
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_PARTITIONS_SKEW, true));
|
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_PARTITIONS_SKEW, true));
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_BYTES_IN, true));
|
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_BYTES_IN, true));
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_BYTES_OUT, true));
|
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_BYTES_OUT, true));
|
||||||
|
|
||||||
// zookeeper
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_HEALTH_STATE, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_HEALTH_CHECK_PASSED, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_HEALTH_CHECK_TOTAL, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_MAX_REQUEST_LATENCY, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_OUTSTANDING_REQUESTS, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_NODE_COUNT, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_WATCH_COUNT, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_NUM_ALIVE_CONNECTIONS, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_PACKETS_RECEIVED, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_PACKETS_SENT, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_EPHEMERALS_COUNT, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_APPROXIMATE_DATA_SIZE, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_OPEN_FILE_DESCRIPTOR_COUNT, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_KAFKA_ZK_DISCONNECTS_PER_SEC, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_KAFKA_ZK_SYNC_CONNECTS_PER_SEC, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_ZOOKEEPER.getCode(), ZOOKEEPER_METRIC_KAFKA_ZK_REQUEST_LATENCY_99TH, true));
|
|
||||||
|
|
||||||
// mm2
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_MIRROR_MAKER.getCode(), MIRROR_MAKER_METRIC_BYTE_COUNT, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_MIRROR_MAKER.getCode(), MIRROR_MAKER_METRIC_BYTE_RATE, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_MIRROR_MAKER.getCode(), MIRROR_MAKER_METRIC_RECORD_AGE_MS_MAX, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_MIRROR_MAKER.getCode(), MIRROR_MAKER_METRIC_RECORD_COUNT, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_MIRROR_MAKER.getCode(), MIRROR_MAKER_METRIC_RECORD_RATE, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_MIRROR_MAKER.getCode(), MIRROR_MAKER_METRIC_REPLICATION_LATENCY_MS_MAX, true));
|
|
||||||
|
|
||||||
// Connect Cluster
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CLUSTER.getCode(), CONNECT_CLUSTER_METRIC_CONNECTOR_COUNT, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CLUSTER.getCode(), CONNECT_CLUSTER_METRIC_TASK_COUNT, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CLUSTER.getCode(), CONNECT_CLUSTER_METRIC_CONNECTOR_STARTUP_ATTEMPTS_TOTAL, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CLUSTER.getCode(), CONNECT_CLUSTER_METRIC_CONNECTOR_STARTUP_FAILURE_PERCENTAGE, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CLUSTER.getCode(), CONNECT_CLUSTER_METRIC_CONNECTOR_STARTUP_FAILURE_TOTAL, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CLUSTER.getCode(), CONNECT_CLUSTER_METRIC_TASK_STARTUP_ATTEMPTS_TOTAL, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CLUSTER.getCode(), CONNECT_CLUSTER_METRIC_TASK_STARTUP_FAILURE_PERCENTAGE, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CLUSTER.getCode(), CONNECT_CLUSTER_METRIC_TASK_STARTUP_FAILURE_TOTAL, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CLUSTER.getCode(), CONNECT_CLUSTER_METRIC_COLLECT_COST_TIME, true));
|
|
||||||
|
|
||||||
|
|
||||||
// Connect Connector
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CONNECTOR.getCode(), CONNECTOR_METRIC_HEALTH_STATE, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CONNECTOR.getCode(), CONNECTOR_METRIC_HEALTH_CHECK_PASSED, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CONNECTOR.getCode(), CONNECTOR_METRIC_HEALTH_CHECK_TOTAL, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CONNECTOR.getCode(), CONNECTOR_METRIC_COLLECT_COST_TIME, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CONNECTOR.getCode(), CONNECTOR_METRIC_CONNECTOR_TOTAL_TASK_COUNT, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CONNECTOR.getCode(), CONNECTOR_METRIC_CONNECTOR_RUNNING_TASK_COUNT, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CONNECTOR.getCode(), CONNECTOR_METRIC_CONNECTOR_FAILED_TASK_COUNT, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CONNECTOR.getCode(), CONNECTOR_METRIC_SOURCE_RECORD_ACTIVE_COUNT, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CONNECTOR.getCode(), CONNECTOR_METRIC_SOURCE_RECORD_POLL_TOTAL, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CONNECTOR.getCode(), CONNECTOR_METRIC_SOURCE_RECORD_WRITE_TOTAL, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CONNECTOR.getCode(), CONNECTOR_METRIC_SINK_RECORD_ACTIVE_COUNT, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CONNECTOR.getCode(), CONNECTOR_METRIC_SINK_RECORD_READ_TOTAL, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CONNECTOR.getCode(), CONNECTOR_METRIC_SINK_RECORD_SEND_TOTAL, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CONNECTOR.getCode(), CONNECTOR_METRIC_DEADLETTERQUEUE_PRODUCE_FAILURES, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CONNECTOR.getCode(), CONNECTOR_METRIC_DEADLETTERQUEUE_PRODUCE_REQUESTS, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CONNECTOR.getCode(), CONNECTOR_METRIC_TOTAL_ERRORS_LOGGED, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CONNECTOR.getCode(), CONNECTOR_METRIC_SOURCE_RECORD_POLL_RATE, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CONNECTOR.getCode(), CONNECTOR_METRIC_SOURCE_RECORD_WRITE_RATE, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CONNECTOR.getCode(), CONNECTOR_METRIC_SINK_RECORD_READ_RATE, true));
|
|
||||||
defaultMetrics.add(new UserMetricConfig(METRIC_CONNECT_CONNECTOR.getCode(), CONNECTOR_METRIC_SINK_RECORD_SEND_RATE, true));
|
|
||||||
|
|
||||||
|
|
||||||
}
|
}
|
||||||
|
|
||||||
@Autowired
|
|
||||||
private ClusterPhyService clusterPhyService;
|
|
||||||
|
|
||||||
@Autowired
|
@Autowired
|
||||||
private VersionControlService versionControlService;
|
private VersionControlService versionControlService;
|
||||||
|
|
||||||
@@ -181,40 +108,27 @@ public class VersionControlManagerImpl implements VersionControlManager {
|
|||||||
allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_BROKER.getCode()), VersionItemVO.class));
|
allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_BROKER.getCode()), VersionItemVO.class));
|
||||||
allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_PARTITION.getCode()), VersionItemVO.class));
|
allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_PARTITION.getCode()), VersionItemVO.class));
|
||||||
allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_REPLICATION.getCode()), VersionItemVO.class));
|
allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_REPLICATION.getCode()), VersionItemVO.class));
|
||||||
|
|
||||||
allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_ZOOKEEPER.getCode()), VersionItemVO.class));
|
|
||||||
|
|
||||||
allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_CONNECT_CLUSTER.getCode()), VersionItemVO.class));
|
|
||||||
allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_CONNECT_CONNECTOR.getCode()), VersionItemVO.class));
|
|
||||||
allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_CONNECT_MIRROR_MAKER.getCode()), VersionItemVO.class));
|
|
||||||
|
|
||||||
allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(WEB_OP.getCode()), VersionItemVO.class));
|
allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(WEB_OP.getCode()), VersionItemVO.class));
|
||||||
|
|
||||||
Map<String, VersionItemVO> map = allVersionItemVO.stream().collect(
|
Map<String, VersionItemVO> map = allVersionItemVO.stream().collect(
|
||||||
Collectors.toMap(
|
Collectors.toMap(u -> u.getType() + "@" + u.getName(), Function.identity() ));
|
||||||
u -> u.getType() + "@" + u.getName(),
|
|
||||||
Function.identity(),
|
|
||||||
(v1, v2) -> v1)
|
|
||||||
);
|
|
||||||
|
|
||||||
return Result.buildSuc(map);
|
return Result.buildSuc(map);
|
||||||
}
|
}
|
||||||
|
|
||||||
@Override
|
@Override
|
||||||
public Result<Map<String, Long>> listAllKafkaVersions() {
|
public Result<Map<String, Long>> listAllVersions() {
|
||||||
return Result.buildSuc(VersionEnum.allVersionsWithOutMax());
|
return Result.buildSuc(VersionEnum.allVersionsWithOutMax());
|
||||||
}
|
}
|
||||||
|
|
||||||
@Override
|
@Override
|
||||||
public Result<List<VersionItemVO>> listKafkaClusterVersionControlItem(Long clusterId, Integer type) {
|
public Result<List<VersionItemVO>> listClusterVersionControlItem(Long clusterId, Integer type) {
|
||||||
List<VersionControlItem> allItem = versionControlService.listVersionControlItem(type);
|
List<VersionControlItem> allItem = versionControlService.listVersionControlItem(type);
|
||||||
List<VersionItemVO> versionItemVOS = new ArrayList<>();
|
List<VersionItemVO> versionItemVOS = new ArrayList<>();
|
||||||
|
|
||||||
String versionStr = clusterPhyService.getVersionFromCacheFirst(clusterId);
|
|
||||||
|
|
||||||
for (VersionControlItem item : allItem){
|
for (VersionControlItem item : allItem){
|
||||||
VersionItemVO itemVO = ConvertUtil.obj2Obj(item, VersionItemVO.class);
|
VersionItemVO itemVO = ConvertUtil.obj2Obj(item, VersionItemVO.class);
|
||||||
boolean support = versionControlService.isClusterSupport(versionStr, item);
|
boolean support = versionControlService.isClusterSupport(clusterId, item);
|
||||||
|
|
||||||
itemVO.setSupport(support);
|
itemVO.setSupport(support);
|
||||||
itemVO.setDesc(itemSupportDesc(item, support));
|
itemVO.setDesc(itemSupportDesc(item, support));
|
||||||
@@ -227,7 +141,7 @@ public class VersionControlManagerImpl implements VersionControlManager {
|
|||||||
|
|
||||||
@Override
|
@Override
|
||||||
public Result<List<UserMetricConfigVO>> listUserMetricItem(Long clusterId, Integer type, String operator) {
|
public Result<List<UserMetricConfigVO>> listUserMetricItem(Long clusterId, Integer type, String operator) {
|
||||||
Result<List<VersionItemVO>> ret = listKafkaClusterVersionControlItem(clusterId, type);
|
Result<List<VersionItemVO>> ret = listClusterVersionControlItem(clusterId, type);
|
||||||
if(null == ret || ret.failed()){
|
if(null == ret || ret.failed()){
|
||||||
return Result.buildFail();
|
return Result.buildFail();
|
||||||
}
|
}
|
||||||
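One change in the hunk above is worth calling out: one side expands `Collectors.toMap` with a merge function `(v1, v2) -> v1`, while the other uses the two-argument overload. The two-argument `toMap` throws `IllegalStateException` as soon as two items produce the same `type@name` key, so the merge function makes the collection tolerant of duplicate keys by keeping the first item. A standalone sketch (the class name and key data below are invented for illustration):

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class ToMapMerge {
    // Two-argument toMap: throws IllegalStateException on a duplicate key.
    public static boolean duplicateKeyThrows(List<String> keys) {
        try {
            keys.stream().collect(Collectors.toMap(Function.identity(), Function.identity()));
            return false;
        } catch (IllegalStateException e) {
            return true;
        }
    }

    // Three-argument toMap: the merge function (v1, v2) -> v1 keeps the first
    // value seen for each key instead of throwing.
    public static Map<String, String> keepFirst(List<String> keys) {
        return keys.stream()
                .collect(Collectors.toMap(Function.identity(), Function.identity(), (v1, v2) -> v1));
    }

    public static void main(String[] args) {
        List<String> keys = List.of("1@bytesIn", "1@bytesOut", "1@bytesIn");
        System.out.println(duplicateKeyThrows(keys)); // true
        System.out.println(keepFirst(keys).size());   // 2
    }
}
```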
@@ -5,13 +5,13 @@
     <modelVersion>4.0.0</modelVersion>
     <groupId>com.xiaojukeji.kafka</groupId>
     <artifactId>km-collector</artifactId>
-    <version>${revision}</version>
+    <version>${km.revision}</version>
     <packaging>jar</packaging>

     <parent>
         <artifactId>km</artifactId>
         <groupId>com.xiaojukeji.kafka</groupId>
-        <version>${revision}</version>
+        <version>${km.revision}</version>
     </parent>

     <dependencies>
@@ -1,6 +1,7 @@
 package com.xiaojukeji.know.streaming.km.collector.metric;

 import com.xiaojukeji.know.streaming.km.collector.service.CollectThreadPoolService;
+import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
 import com.xiaojukeji.know.streaming.km.common.bean.event.metric.BaseMetricEvent;
 import com.xiaojukeji.know.streaming.km.common.component.SpringTool;
 import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
@@ -8,20 +9,17 @@ import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
 import org.springframework.beans.factory.annotation.Autowired;

 /**
  * @author didi
  */
-public abstract class AbstractMetricCollector<M, C> {
+public abstract class AbstractMetricCollector<T> {
-    public abstract String getClusterVersion(C c);
+    public abstract void collectMetrics(ClusterPhy clusterPhy);

     public abstract VersionItemTypeEnum collectorType();

     @Autowired
     private CollectThreadPoolService collectThreadPoolService;

-    public abstract void collectMetrics(C c);

     protected FutureWaitUtil<Void> getFutureUtilByClusterPhyId(Long clusterPhyId) {
         return collectThreadPoolService.selectSuitableFutureUtil(clusterPhyId * 1000L + this.collectorType().getCode());
     }
@@ -1,5 +1,6 @@
-package com.xiaojukeji.know.streaming.km.collector.metric.kafka;
+package com.xiaojukeji.know.streaming.km.collector.metric;

+import com.alibaba.fastjson.JSON;
 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.broker.Broker;
@@ -10,6 +11,7 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionContro
 import com.xiaojukeji.know.streaming.km.common.bean.event.metric.BrokerMetricEvent;
 import com.xiaojukeji.know.streaming.km.common.constant.Constant;
 import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
+import com.xiaojukeji.know.streaming.km.common.utils.EnvUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
 import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerMetricService;
 import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerService;
@@ -26,8 +28,8 @@ import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemT
  * @author didi
  */
 @Component
-public class BrokerMetricCollector extends AbstractKafkaMetricCollector<BrokerMetrics> {
+public class BrokerMetricCollector extends AbstractMetricCollector<BrokerMetrics> {
-    private static final ILog LOGGER = LogFactory.getLog(BrokerMetricCollector.class);
+    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");

     @Autowired
     private VersionControlService versionControlService;
@@ -39,31 +41,32 @@ public class BrokerMetricCollector extends AbstractKafkaMetricCollector<BrokerMe
     private BrokerService brokerService;

     @Override
-    public List<BrokerMetrics> collectKafkaMetrics(ClusterPhy clusterPhy) {
+    public void collectMetrics(ClusterPhy clusterPhy) {
+        Long startTime = System.currentTimeMillis();
         Long clusterPhyId = clusterPhy.getId();

         List<Broker> brokers = brokerService.listAliveBrokersFromDB(clusterPhy.getId());
-        List<VersionControlItem> items = versionControlService.listVersionControlItem(this.getClusterVersion(clusterPhy), collectorType().getCode());
+        List<VersionControlItem> items = versionControlService.listVersionControlItem(clusterPhyId, collectorType().getCode());

         FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(clusterPhyId);

-        List<BrokerMetrics> metricsList = new ArrayList<>();
+        List<BrokerMetrics> brokerMetrics = new ArrayList<>();
         for(Broker broker : brokers) {
             BrokerMetrics metrics = new BrokerMetrics(clusterPhyId, broker.getBrokerId(), broker.getHost(), broker.getPort());
-            metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
-            metricsList.add(metrics);
+            brokerMetrics.add(metrics);

             future.runnableTask(
-                    String.format("class=BrokerMetricCollector||clusterPhyId=%d||brokerId=%d", clusterPhyId, broker.getBrokerId()),
+                    String.format("method=BrokerMetricCollector||clusterPhyId=%d||brokerId=%d", clusterPhyId, broker.getBrokerId()),
                     30000,
                     () -> collectMetrics(clusterPhyId, metrics, items)
             );
         }

         future.waitExecute(30000);
-        this.publishMetric(new BrokerMetricEvent(this, metricsList));
+        this.publishMetric(new BrokerMetricEvent(this, brokerMetrics));

-        return metricsList;
+        LOGGER.info("method=BrokerMetricCollector||clusterPhyId={}||startTime={}||costTime={}||msg=collect finished.",
+                clusterPhyId, startTime, System.currentTimeMillis() - startTime);
     }

     @Override
@@ -75,6 +78,7 @@ public class BrokerMetricCollector extends AbstractKafkaMetricCollector<BrokerMe

     private void collectMetrics(Long clusterPhyId, BrokerMetrics metrics, List<VersionControlItem> items) {
         long startTime = System.currentTimeMillis();
+        metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);

         for(VersionControlItem v : items) {
             try {
@@ -88,11 +92,14 @@ public class BrokerMetricCollector extends AbstractKafkaMetricCollector<BrokerMe
                 }

                 metrics.putMetric(ret.getData().getMetrics());

+                if(!EnvUtil.isOnline()){
+                    LOGGER.info("method=BrokerMetricCollector||clusterId={}||brokerId={}||metric={}||metric={}!",
+                            clusterPhyId, metrics.getBrokerId(), v.getName(), JSON.toJSONString(ret.getData().getMetrics()));
+                }
             } catch (Exception e){
-                LOGGER.error(
-                        "method=collectMetrics||clusterPhyId={}||brokerId={}||metricName={}||errMsg=exception!",
-                        clusterPhyId, metrics.getBrokerId(), v.getName(), e
-                );
+                LOGGER.error("method=BrokerMetricCollector||clusterId={}||brokerId={}||metric={}||errMsg=exception!",
+                        clusterPhyId, metrics.getBrokerId(), v.getName(), e);
             }
         }

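In the collector above, `collectMetrics` fans one task per broker out to a shared pool via `future.runnableTask(..., 30000, ...)` and then blocks in `future.waitExecute(30000)`. `FutureWaitUtil` is KnowStreaming's own helper, so this is only a rough stdlib analogue of that submit-then-bounded-wait shape, with invented names:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedFanOut {
    // Submits one task per "broker" and waits at most waitMillis for all of
    // them, mirroring the runnableTask(...) / waitExecute(30000) pattern.
    public static int collectAll(int brokers, long waitMillis) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        AtomicInteger collected = new AtomicInteger();
        List<Future<?>> futures = new ArrayList<>();
        for (int i = 0; i < brokers; i++) {
            futures.add(pool.submit(collected::incrementAndGet));
        }
        long deadline = System.currentTimeMillis() + waitMillis;
        for (Future<?> f : futures) {
            long left = deadline - System.currentTimeMillis();
            try {
                f.get(Math.max(left, 0), TimeUnit.MILLISECONDS);
            } catch (ExecutionException | TimeoutException ignored) {
                // A slow or failed task must not block the whole collection round.
            }
        }
        pool.shutdownNow();
        return collected.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(collectAll(5, 1000));
    }
}
```

The per-round deadline is the design point: one unreachable broker costs at most the timeout, not an unbounded wait.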
@@ -1,4 +1,4 @@
-package com.xiaojukeji.know.streaming.km.collector.metric.kafka;
+package com.xiaojukeji.know.streaming.km.collector.metric;

 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
@@ -7,15 +7,18 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.ClusterMetric
 import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionControlItem;
 import com.xiaojukeji.know.streaming.km.common.bean.event.metric.ClusterMetricEvent;
+import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.ClusterMetricPO;
 import com.xiaojukeji.know.streaming.km.common.constant.Constant;
 import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
+import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
+import com.xiaojukeji.know.streaming.km.common.utils.EnvUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
 import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterMetricService;
 import com.xiaojukeji.know.streaming.km.core.service.version.VersionControlService;
 import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.stereotype.Component;

-import java.util.Collections;
+import java.util.Arrays;
 import java.util.List;

 import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum.METRIC_CLUSTER;
@@ -24,8 +27,8 @@ import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemT
  * @author didi
  */
 @Component
-public class ClusterMetricCollector extends AbstractKafkaMetricCollector<ClusterMetrics> {
-    protected static final ILog LOGGER = LogFactory.getLog(ClusterMetricCollector.class);
+public class ClusterMetricCollector extends AbstractMetricCollector<ClusterMetricPO> {
+    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");

     @Autowired
     private VersionControlService versionControlService;
@@ -34,37 +37,35 @@ public class ClusterMetricCollector extends AbstractKafkaMetricCollector<Cluster
     private ClusterMetricService clusterMetricService;

     @Override
-    public List<ClusterMetrics> collectKafkaMetrics(ClusterPhy clusterPhy) {
+    public void collectMetrics(ClusterPhy clusterPhy) {
         Long startTime = System.currentTimeMillis();
         Long clusterPhyId = clusterPhy.getId();
-        List<VersionControlItem> items = versionControlService.listVersionControlItem(this.getClusterVersion(clusterPhy), collectorType().getCode());
+        List<VersionControlItem> items = versionControlService.listVersionControlItem(clusterPhyId, collectorType().getCode());

         ClusterMetrics metrics = new ClusterMetrics(clusterPhyId, clusterPhy.getKafkaVersion());
-        metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);

         FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(clusterPhyId);

         for(VersionControlItem v : items) {
             future.runnableTask(
-                    String.format("class=ClusterMetricCollector||clusterPhyId=%d||metricName=%s", clusterPhyId, v.getName()),
+                    String.format("method=ClusterMetricCollector||clusterPhyId=%d||metricName=%s", clusterPhyId, v.getName()),
                     30000,
                     () -> {
                         try {
-                            if(null != metrics.getMetrics().get(v.getName())){
-                                return null;
-                            }
+                            if(null != metrics.getMetrics().get(v.getName())){return null;}

                             Result<ClusterMetrics> ret = clusterMetricService.collectClusterMetricsFromKafka(clusterPhyId, v.getName());
-                            if(null == ret || ret.failed() || null == ret.getData()){
-                                return null;
-                            }
+                            if(null == ret || ret.failed() || null == ret.getData()){return null;}

                             metrics.putMetric(ret.getData().getMetrics());

+                            if(!EnvUtil.isOnline()){
+                                LOGGER.info("method=ClusterMetricCollector||clusterPhyId={}||metricName={}||metricValue={}",
+                                        clusterPhyId, v.getName(), ConvertUtil.obj2Json(ret.getData().getMetrics()));
+                            }
                         } catch (Exception e){
-                            LOGGER.error(
-                                    "method=collectKafkaMetrics||clusterPhyId={}||metricName={}||errMsg=exception!",
-                                    clusterPhyId, v.getName(), e
-                            );
+                            LOGGER.error("method=ClusterMetricCollector||clusterPhyId={}||metricName={}||errMsg=exception!",
+                                    clusterPhyId, v.getName(), e);
                         }

                         return null;
@@ -75,9 +76,10 @@ public class ClusterMetricCollector extends AbstractKafkaMetricCollector<Cluster

         metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (System.currentTimeMillis() - startTime) / 1000.0f);

-        publishMetric(new ClusterMetricEvent(this, Collections.singletonList(metrics)));
+        publishMetric(new ClusterMetricEvent(this, Arrays.asList(metrics)));

-        return Collections.singletonList(metrics);
+        LOGGER.info("method=ClusterMetricCollector||clusterPhyId={}||startTime={}||costTime={}||msg=msg=collect finished.",
+                clusterPhyId, startTime, System.currentTimeMillis() - startTime);
     }

     @Override

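The hunk above trades `Collections.singletonList(metrics)` for `Arrays.asList(metrics)` when publishing the event. Both wrap one element in a fixed-size list, so the event payload is unchanged; the only visible difference is mutability. A small illustrative sketch (the class name, helper, and strings here are ours, not from the repository):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class SingleElementLists {
    // Returns true when list.set(0, ...) is permitted on the given one-element list.
    static boolean allowsSet(List<String> list) {
        try {
            list.set(0, "replaced");
            return true;
        } catch (UnsupportedOperationException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        List<String> viaArrays = Arrays.asList("metrics");                // fixed-size view; set() is allowed
        List<String> viaSingleton = Collections.singletonList("metrics"); // fully immutable; set() throws
        System.out.println(allowsSet(viaArrays));     // true
        System.out.println(allowsSet(viaSingleton));  // false
    }
}
```

Neither list supports `add`/`remove`, so for a publish-and-forget event the two calls behave the same; `singletonList` merely allocates a slightly smaller immutable wrapper.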
@@ -1,5 +1,6 @@
-package com.xiaojukeji.know.streaming.km.collector.metric.kafka;
+package com.xiaojukeji.know.streaming.km.collector.metric;

+import com.alibaba.fastjson.JSON;
 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
@@ -9,16 +10,20 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionContro
 import com.xiaojukeji.know.streaming.km.common.bean.event.metric.GroupMetricEvent;
 import com.xiaojukeji.know.streaming.km.common.constant.Constant;
 import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
+import com.xiaojukeji.know.streaming.km.common.utils.EnvUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
 import com.xiaojukeji.know.streaming.km.core.service.group.GroupMetricService;
 import com.xiaojukeji.know.streaming.km.core.service.group.GroupService;
 import com.xiaojukeji.know.streaming.km.core.service.version.VersionControlService;
-import org.apache.kafka.common.TopicPartition;
+import org.apache.commons.collections.CollectionUtils;
 import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.stereotype.Component;

-import java.util.*;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
 import java.util.concurrent.ConcurrentHashMap;

 import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum.METRIC_GROUP;
@@ -27,8 +32,8 @@ import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemT
  * @author didi
  */
 @Component
-public class GroupMetricCollector extends AbstractKafkaMetricCollector<GroupMetrics> {
-    protected static final ILog LOGGER = LogFactory.getLog(GroupMetricCollector.class);
+public class GroupMetricCollector extends AbstractMetricCollector<List<GroupMetrics>> {
+    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");

     @Autowired
     private VersionControlService versionControlService;
@@ -40,38 +45,40 @@ public class GroupMetricCollector extends AbstractKafkaMetricCollector<GroupMetr
     private GroupService groupService;

     @Override
-    public List<GroupMetrics> collectKafkaMetrics(ClusterPhy clusterPhy) {
+    public void collectMetrics(ClusterPhy clusterPhy) {
+        Long startTime = System.currentTimeMillis();
         Long clusterPhyId = clusterPhy.getId();

-        List<String> groupNameList = new ArrayList<>();
+        List<String> groups = new ArrayList<>();
         try {
-            groupNameList = groupService.listGroupsFromKafka(clusterPhy);
+            groups = groupService.listGroupsFromKafka(clusterPhyId);
         } catch (Exception e) {
-            LOGGER.error("method=collectKafkaMetrics||clusterPhyId={}||msg=exception!", clusterPhyId, e);
+            LOGGER.error("method=GroupMetricCollector||clusterPhyId={}||msg=exception!", clusterPhyId, e);
         }

-        if(ValidateUtils.isEmptyList(groupNameList)) {
-            return Collections.emptyList();
-        }
+        if(CollectionUtils.isEmpty(groups)){return;}

-        List<VersionControlItem> items = versionControlService.listVersionControlItem(this.getClusterVersion(clusterPhy), collectorType().getCode());
+        List<VersionControlItem> items = versionControlService.listVersionControlItem(clusterPhyId, collectorType().getCode());

-        FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(clusterPhyId);
+        FutureWaitUtil<Void> future = getFutureUtilByClusterPhyId(clusterPhyId);

         Map<String, List<GroupMetrics>> metricsMap = new ConcurrentHashMap<>();
-        for(String groupName : groupNameList) {
+        for(String groupName : groups) {
             future.runnableTask(
-                    String.format("class=GroupMetricCollector||clusterPhyId=%d||groupName=%s", clusterPhyId, groupName),
+                    String.format("method=GroupMetricCollector||clusterPhyId=%d||groupName=%s", clusterPhyId, groupName),
                     30000,
                     () -> collectMetrics(clusterPhyId, groupName, metricsMap, items));
         }

         future.waitResult(30000);

-        List<GroupMetrics> metricsList = metricsMap.values().stream().collect(ArrayList::new, ArrayList::addAll, ArrayList::addAll);
+        List<GroupMetrics> metricsList = new ArrayList<>();
+        metricsMap.values().forEach(elem -> metricsList.addAll(elem));

         publishMetric(new GroupMetricEvent(this, metricsList));
-        return metricsList;
+
+        LOGGER.info("method=GroupMetricCollector||clusterPhyId={}||startTime={}||cost={}||msg=collect finished.",
+                clusterPhyId, startTime, System.currentTimeMillis() - startTime);
     }

     @Override
@@ -84,7 +91,9 @@ public class GroupMetricCollector extends AbstractKafkaMetricCollector<GroupMetr
     private void collectMetrics(Long clusterPhyId, String groupName, Map<String, List<GroupMetrics>> metricsMap, List<VersionControlItem> items) {
         long startTime = System.currentTimeMillis();

-        Map<TopicPartition, GroupMetrics> subMetricMap = new HashMap<>();
+        List<GroupMetrics> groupMetricsList = new ArrayList<>();

+        Map<String, GroupMetrics> tpGroupPOMap = new HashMap<>();

         GroupMetrics groupMetrics = new GroupMetrics(clusterPhyId, groupName, true);
         groupMetrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
@@ -98,31 +107,38 @@ public class GroupMetricCollector extends AbstractKafkaMetricCollector<GroupMetr
                     continue;
                 }

-                ret.getData().forEach(metrics -> {
+                ret.getData().stream().forEach(metrics -> {
                     if (metrics.isBGroupMetric()) {
                         groupMetrics.putMetric(metrics.getMetrics());
-                        return;
-                    }
-
-                    TopicPartition tp = new TopicPartition(metrics.getTopic(), metrics.getPartitionId());
-                    subMetricMap.putIfAbsent(tp, new GroupMetrics(clusterPhyId, metrics.getPartitionId(), metrics.getTopic(), groupName, false));
-                    subMetricMap.get(tp).putMetric(metrics.getMetrics());
+                    } else {
+                        String topicName = metrics.getTopic();
+                        Integer partitionId = metrics.getPartitionId();
+                        String tpGroupKey = genTopicPartitionGroupKey(topicName, partitionId);

+                        tpGroupPOMap.putIfAbsent(tpGroupKey, new GroupMetrics(clusterPhyId, partitionId, topicName, groupName, false));
+                        tpGroupPOMap.get(tpGroupKey).putMetric(metrics.getMetrics());
+                    }
                 });
-            } catch (Exception e) {
-                LOGGER.error(
-                        "method=collectMetrics||clusterPhyId={}||groupName={}||errMsg=exception!",
-                        clusterPhyId, groupName, e
-                );
+
+                if(!EnvUtil.isOnline()){
+                    LOGGER.info("method=GroupMetricCollector||clusterPhyId={}||groupName={}||metricName={}||metricValue={}",
+                            clusterPhyId, groupName, metricName, JSON.toJSONString(ret.getData()));
+                }
+            }catch (Exception e){
+                LOGGER.error("method=GroupMetricCollector||clusterPhyId={}||groupName={}||errMsg=exception!", clusterPhyId, groupName, e);
             }
         }

-        List<GroupMetrics> metricsList = new ArrayList<>();
-        metricsList.add(groupMetrics);
-        metricsList.addAll(subMetricMap.values());
+        groupMetricsList.add(groupMetrics);
+        groupMetricsList.addAll(tpGroupPOMap.values());

         // record collection cost
         groupMetrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (System.currentTimeMillis() - startTime) / 1000.0f);

-        metricsMap.put(groupName, metricsList);
+        metricsMap.put(groupName, groupMetricsList);
+    }
+
+    private String genTopicPartitionGroupKey(String topic, Integer partitionId){
+        return topic + "@" + partitionId;
     }
 }

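The beta-side code above keys per-partition group metrics with a hand-built `topic + "@" + partitionId` string (`genTopicPartitionGroupKey`), which the master side replaces with Kafka's `TopicPartition`. Both schemes give one distinct map key per (topic, partition), so `putIfAbsent` dedupes the same way; since Kafka topic names cannot contain `@`, the composite string is unambiguous. A tiny sketch of the string-key variant (the demo class, topic names, and lag values are ours):

```java
import java.util.HashMap;
import java.util.Map;

public class GroupKeyDemo {
    // Composite key in the beta-side style of genTopicPartitionGroupKey.
    static String genKey(String topic, int partitionId) {
        return topic + "@" + partitionId;
    }

    public static void main(String[] args) {
        Map<String, Integer> lagByPartition = new HashMap<>();
        lagByPartition.putIfAbsent(genKey("orders", 0), 42);
        lagByPartition.putIfAbsent(genKey("orders", 0), 99); // ignored: key already present
        lagByPartition.putIfAbsent(genKey("orders", 1), 7);
        System.out.println(lagByPartition.size());           // 2 distinct partitions
        System.out.println(lagByPartition.get("orders@0"));  // 42
    }
}
```

`TopicPartition` achieves the same with proper `equals`/`hashCode` and no string building, which is presumably why the master side switched to it.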
@@ -0,0 +1,121 @@
+package com.xiaojukeji.know.streaming.km.collector.metric;
+
+import com.didiglobal.logi.log.ILog;
+import com.didiglobal.logi.log.LogFactory;
+import com.xiaojukeji.know.streaming.km.common.bean.event.metric.*;
+import com.xiaojukeji.know.streaming.km.common.bean.po.BaseESPO;
+import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.*;
+import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
+import com.xiaojukeji.know.streaming.km.common.utils.EnvUtil;
+import com.xiaojukeji.know.streaming.km.common.utils.NamedThreadFactory;
+import com.xiaojukeji.know.streaming.km.persistence.es.dao.BaseMetricESDAO;
+import org.apache.commons.collections.CollectionUtils;
+import org.springframework.context.ApplicationListener;
+import org.springframework.stereotype.Component;
+
+import javax.annotation.PostConstruct;
+import java.util.List;
+import java.util.Objects;
+import java.util.concurrent.LinkedBlockingDeque;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+
+import static com.xiaojukeji.know.streaming.km.common.constant.ESIndexConstant.*;
+
+@Component
+public class MetricESSender implements ApplicationListener<BaseMetricEvent> {
+    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
+
+    private static final int THRESHOLD = 100;
+
+    private ThreadPoolExecutor esExecutor = new ThreadPoolExecutor(10, 20, 6000, TimeUnit.MILLISECONDS,
+            new LinkedBlockingDeque<>(1000),
+            new NamedThreadFactory("KM-Collect-MetricESSender-ES"),
+            (r, e) -> LOGGER.warn("class=MetricESSender||msg=KM-Collect-MetricESSender-ES Deque is blocked, taskCount:{}" + e.getTaskCount()));
+
+    @PostConstruct
+    public void init(){
+        LOGGER.info("class=MetricESSender||method=init||msg=init finished");
+    }
+
+    @Override
+    public void onApplicationEvent(BaseMetricEvent event) {
+        if(event instanceof BrokerMetricEvent) {
+            BrokerMetricEvent brokerMetricEvent = (BrokerMetricEvent)event;
+            send2es(BROKER_INDEX,
+                    ConvertUtil.list2List(brokerMetricEvent.getBrokerMetrics(), BrokerMetricPO.class)
+            );
+
+        } else if(event instanceof ClusterMetricEvent) {
+            ClusterMetricEvent clusterMetricEvent = (ClusterMetricEvent)event;
+            send2es(CLUSTER_INDEX,
+                    ConvertUtil.list2List(clusterMetricEvent.getClusterMetrics(), ClusterMetricPO.class)
+            );
+
+        } else if(event instanceof TopicMetricEvent) {
+            TopicMetricEvent topicMetricEvent = (TopicMetricEvent)event;
+            send2es(TOPIC_INDEX,
+                    ConvertUtil.list2List(topicMetricEvent.getTopicMetrics(), TopicMetricPO.class)
+            );
+
+        } else if(event instanceof PartitionMetricEvent) {
+            PartitionMetricEvent partitionMetricEvent = (PartitionMetricEvent)event;
+            send2es(PARTITION_INDEX,
+                    ConvertUtil.list2List(partitionMetricEvent.getPartitionMetrics(), PartitionMetricPO.class)
+            );
+
+        } else if(event instanceof GroupMetricEvent) {
+            GroupMetricEvent groupMetricEvent = (GroupMetricEvent)event;
+            send2es(GROUP_INDEX,
+                    ConvertUtil.list2List(groupMetricEvent.getGroupMetrics(), GroupMetricPO.class)
+            );
+
+        } else if(event instanceof ReplicaMetricEvent) {
+            ReplicaMetricEvent replicaMetricEvent = (ReplicaMetricEvent)event;
+            send2es(REPLICATION_INDEX,
+                    ConvertUtil.list2List(replicaMetricEvent.getReplicationMetrics(), ReplicationMetricPO.class)
+            );
+        }
+    }
+
+    /**
+     * send according to the monitoring dimension
+     */
+    private boolean send2es(String index, List<? extends BaseESPO> statsList){
+        if (CollectionUtils.isEmpty(statsList)) {
+            return true;
+        }
+
+        if (!EnvUtil.isOnline()) {
+            LOGGER.info("class=MetricESSender||method=send2es||ariusStats={}||size={}",
+                    index, statsList.size());
+        }
+
+        BaseMetricESDAO baseMetricESDao = BaseMetricESDAO.getByStatsType(index);
+        if (Objects.isNull( baseMetricESDao )) {
+            LOGGER.error("class=MetricESSender||method=send2es||errMsg=fail to find {}", index);
+            return false;
+        }
+
+        int size = statsList.size();
+        int num = (size) % THRESHOLD == 0 ? (size / THRESHOLD) : (size / THRESHOLD + 1);
+
+        if (size < THRESHOLD) {
+            esExecutor.execute(
+                    () -> baseMetricESDao.batchInsertStats(statsList)
+            );
+            return true;
+        }
+
+        for (int i = 1; i < num + 1; i++) {
+            int end = (i * THRESHOLD) > size ? size : (i * THRESHOLD);
+            int start = (i - 1) * THRESHOLD;
+
+            esExecutor.execute(
+                    () -> baseMetricESDao.batchInsertStats(statsList.subList(start, end))
+            );
+        }
+
+        return true;
+    }
+}

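The `send2es` method in the new `MetricESSender` ships stats in `THRESHOLD`-sized slices: `num = size % THRESHOLD == 0 ? size / THRESHOLD : size / THRESHOLD + 1` is a ceiling division, and each iteration submits `statsList.subList(start, end)`. A minimal standalone sketch of that slicing arithmetic (the `BatchSlicer` class is ours, built to mirror the loop above, not code from the repository):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchSlicer {
    // Mirrors MetricESSender.send2es: at most `threshold` elements per batch.
    public static <T> List<List<T>> slice(List<T> statsList, int threshold) {
        List<List<T>> batches = new ArrayList<>();
        int size = statsList.size();
        if (size < threshold) {           // small lists go out as a single batch
            batches.add(statsList);
            return batches;
        }
        // ceil(size / threshold) batches
        int num = size % threshold == 0 ? size / threshold : size / threshold + 1;
        for (int i = 1; i < num + 1; i++) {
            int start = (i - 1) * threshold;
            int end = Math.min(i * threshold, size);
            batches.add(statsList.subList(start, end));  // view, no copying
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Integer> stats = new ArrayList<>();
        for (int i = 0; i < 250; i++) { stats.add(i); }
        List<List<Integer>> batches = slice(stats, 100);
        System.out.println(batches.size());          // 3 batches: 100 + 100 + 50
        System.out.println(batches.get(2).size());   // 50
    }
}
```

One design note: because `subList` returns a view backed by the original list, the submitted lambdas stay cheap, but the caller must not mutate `statsList` while the executor is still draining those batches.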
@@ -1,4 +1,4 @@
-package com.xiaojukeji.know.streaming.km.collector.metric.kafka;
+package com.xiaojukeji.know.streaming.km.collector.metric;

 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
@@ -9,6 +9,8 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.topic.Topic;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionControlItem;
 import com.xiaojukeji.know.streaming.km.common.bean.event.metric.PartitionMetricEvent;
 import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
+import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
+import com.xiaojukeji.know.streaming.km.common.utils.EnvUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
 import com.xiaojukeji.know.streaming.km.core.service.partition.PartitionMetricService;
 import com.xiaojukeji.know.streaming.km.core.service.topic.TopicService;
@@ -25,8 +27,8 @@ import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemT
  * @author didi
 */
 @Component
-public class PartitionMetricCollector extends AbstractKafkaMetricCollector<PartitionMetrics> {
-    protected static final ILog LOGGER = LogFactory.getLog(PartitionMetricCollector.class);
+public class PartitionMetricCollector extends AbstractMetricCollector<PartitionMetrics> {
+    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");

     @Autowired
     private VersionControlService versionControlService;
@@ -38,10 +40,13 @@ public class PartitionMetricCollector extends AbstractKafkaMetricCollector<Parti
     private TopicService topicService;

     @Override
-    public List<PartitionMetrics> collectKafkaMetrics(ClusterPhy clusterPhy) {
+    public void collectMetrics(ClusterPhy clusterPhy) {
+        Long startTime = System.currentTimeMillis();
         Long clusterPhyId = clusterPhy.getId();
         List<Topic> topicList = topicService.listTopicsFromCacheFirst(clusterPhyId);
-        List<VersionControlItem> items = versionControlService.listVersionControlItem(this.getClusterVersion(clusterPhy), collectorType().getCode());
+        List<VersionControlItem> items = versionControlService.listVersionControlItem(clusterPhyId, collectorType().getCode());

+        // fetch all partitions of the cluster

         FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(clusterPhyId);

@@ -50,9 +55,9 @@ public class PartitionMetricCollector extends AbstractKafkaMetricCollector<Parti
             metricsMap.put(topic.getTopicName(), new ConcurrentHashMap<>());

             future.runnableTask(
-                    String.format("class=PartitionMetricCollector||clusterPhyId=%d||topicName=%s", clusterPhyId, topic.getTopicName()),
+                    String.format("method=PartitionMetricCollector||clusterPhyId=%d||topicName=%s", clusterPhyId, topic.getTopicName()),
                     30000,
-                    () -> this.collectMetrics(clusterPhyId, topic.getTopicName(), metricsMap.get(topic.getTopicName()), items)
+                    () -> collectMetrics(clusterPhyId, topic.getTopicName(), metricsMap.get(topic.getTopicName()), items)
             );
         }

@@ -63,7 +68,10 @@ public class PartitionMetricCollector extends AbstractKafkaMetricCollector<Parti

         this.publishMetric(new PartitionMetricEvent(this, metricsList));

-        return metricsList;
+        LOGGER.info(
+                "method=PartitionMetricCollector||clusterPhyId={}||startTime={}||costTime={}||msg=collect finished.",
+                clusterPhyId, startTime, System.currentTimeMillis() - startTime
+        );
     }

     @Override
@@ -101,9 +109,17 @@ public class PartitionMetricCollector extends AbstractKafkaMetricCollector<Parti
                     PartitionMetrics allMetrics = metricsMap.get(subMetrics.getPartitionId());
                     allMetrics.putMetric(subMetrics.getMetrics());
                 }

+                if (!EnvUtil.isOnline()) {
+                    LOGGER.info(
+                            "class=PartitionMetricCollector||method=collectMetrics||clusterPhyId={}||topicName={}||metricName={}||metricValue={}!",
+                            clusterPhyId, topicName, v.getName(), ConvertUtil.obj2Json(ret.getData())
+                    );
+                }

             } catch (Exception e) {
                 LOGGER.info(
-                        "method=collectMetrics||clusterPhyId={}||topicName={}||metricName={}||errMsg=exception",
+                        "class=PartitionMetricCollector||method=collectMetrics||clusterPhyId={}||topicName={}||metricName={}||errMsg=exception",
                         clusterPhyId, topicName, v.getName(), e
                 );
             }

@@ -0,0 +1,124 @@
+package com.xiaojukeji.know.streaming.km.collector.metric;
+
+import com.alibaba.fastjson.JSON;
+import com.didiglobal.logi.log.ILog;
+import com.didiglobal.logi.log.LogFactory;
+import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
+import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.ReplicationMetrics;
+import com.xiaojukeji.know.streaming.km.common.bean.entity.partition.Partition;
+import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
+import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionControlItem;
+import com.xiaojukeji.know.streaming.km.common.bean.event.metric.ReplicaMetricEvent;
+import com.xiaojukeji.know.streaming.km.common.constant.Constant;
+import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
+import com.xiaojukeji.know.streaming.km.common.utils.EnvUtil;
+import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
+import com.xiaojukeji.know.streaming.km.core.service.partition.PartitionService;
+import com.xiaojukeji.know.streaming.km.core.service.replica.ReplicaMetricService;
+import com.xiaojukeji.know.streaming.km.core.service.version.VersionControlService;
+import org.springframework.beans.factory.annotation.Autowired;
+import org.springframework.stereotype.Component;
+
+import java.util.ArrayList;
+import java.util.List;
+
+import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum.METRIC_REPLICATION;
+
+/**
+ * @author didi
+ */
+@Component
+public class ReplicaMetricCollector extends AbstractMetricCollector<ReplicationMetrics> {
+    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
+
+    @Autowired
+    private VersionControlService versionControlService;
+
+    @Autowired
+    private ReplicaMetricService replicaMetricService;
+
+    @Autowired
+    private PartitionService partitionService;
+
+    @Override
+    public void collectMetrics(ClusterPhy clusterPhy) {
+        Long startTime = System.currentTimeMillis();
+        Long clusterPhyId = clusterPhy.getId();
+        List<VersionControlItem> items = versionControlService.listVersionControlItem(clusterPhyId, collectorType().getCode());
+
+        List<Partition> partitions = partitionService.listPartitionByCluster(clusterPhyId);
+
+        FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(clusterPhyId);
+
+        List<ReplicationMetrics> metricsList = new ArrayList<>();
+        for(Partition partition : partitions) {
+            for (Integer brokerId: partition.getAssignReplicaList()) {
+                ReplicationMetrics metrics = new ReplicationMetrics(clusterPhyId, partition.getTopicName(), brokerId, partition.getPartitionId());
+                metricsList.add(metrics);
+
+                future.runnableTask(
+                        String.format("method=ReplicaMetricCollector||clusterPhyId=%d||brokerId=%d||topicName=%s||partitionId=%d",
+                                clusterPhyId, brokerId, partition.getTopicName(), partition.getPartitionId()),
+                        30000,
+                        () -> collectMetrics(clusterPhyId, metrics, items)
+                );
+            }
+        }
+
+        future.waitExecute(30000);
+
+        publishMetric(new ReplicaMetricEvent(this, metricsList));
+
+        LOGGER.info("method=ReplicaMetricCollector||clusterPhyId={}||startTime={}||costTime={}||msg=collect finished.",
+                clusterPhyId, startTime, System.currentTimeMillis() - startTime);
+    }
+
+    @Override
+    public VersionItemTypeEnum collectorType() {
+        return METRIC_REPLICATION;
+    }
+
+    /**************************************************** private method ****************************************************/
+
+    private ReplicationMetrics collectMetrics(Long clusterPhyId, ReplicationMetrics metrics, List<VersionControlItem> items) {
+        long startTime = System.currentTimeMillis();
+
+        metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
+
+        for(VersionControlItem v : items) {
+            try {
+                if (metrics.getMetrics().containsKey(v.getName())) {
+                    continue;
+                }
+
+                Result<ReplicationMetrics> ret = replicaMetricService.collectReplicaMetricsFromKafkaWithCache(
+                        clusterPhyId,
+                        metrics.getTopic(),
+                        metrics.getBrokerId(),
+                        metrics.getPartitionId(),
+                        v.getName()
+                );
+
+                if (null == ret || ret.failed() || null == ret.getData()) {
+                    continue;
+                }
+
+                metrics.putMetric(ret.getData().getMetrics());
+
+                if (!EnvUtil.isOnline()) {
+                    LOGGER.info("method=ReplicaMetricCollector||clusterPhyId={}||topicName={}||partitionId={}||metricName={}||metricValue={}",
+                            clusterPhyId, metrics.getTopic(), metrics.getPartitionId(), v.getName(), JSON.toJSONString(ret.getData().getMetrics()));
+                }
+
+            } catch (Exception e) {
|
||||||
|
LOGGER.error("method=ReplicaMetricCollector||clusterPhyId={}||topicName={}||partition={}||metricName={}||errMsg=exception!",
|
||||||
|
clusterPhyId, metrics.getTopic(), metrics.getPartitionId(), v.getName(), e);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// 记录采集性能
|
||||||
|
metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (System.currentTimeMillis() - startTime) / 1000.0f);
|
||||||
|
|
||||||
|
return metrics;
|
||||||
|
}
|
||||||
|
}
|
||||||
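The collector above fans out one task per replica through the project-internal `FutureWaitUtil`, then blocks in `waitExecute` until every task finishes or a 30-second timeout elapses. A minimal sketch of that submit-then-wait shape using a plain `ExecutorService` (names here are stand-ins, not the project's real API):

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Stand-in for the submit-then-wait pattern: one task per replica is queued,
// then the caller blocks until every task finishes or the timeout elapses.
public class FanOutSketch {
    // Submits one dummy "collect" task per replica id and waits for completion.
    public static int collectAll(List<Integer> replicaIds, long timeoutMs) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        AtomicInteger collected = new AtomicInteger();
        for (Integer id : replicaIds) {
            pool.submit(collected::incrementAndGet);   // per-replica collection task
        }
        pool.shutdown();                               // accept no new tasks; queued ones still run
        try {
            pool.awaitTermination(timeoutMs, TimeUnit.MILLISECONDS); // analogous to future.waitExecute(30000)
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return collected.get();
    }

    public static void main(String[] args) {
        System.out.println(collectAll(List.of(1, 2, 3, 4, 5), 1000));
    }
}
```

A per-cluster pool (as `getFutureUtilByClusterPhyId` suggests) keeps one slow cluster from starving collection on the others.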
@@ -1,4 +1,4 @@
-package com.xiaojukeji.know.streaming.km.collector.metric.kafka;
+package com.xiaojukeji.know.streaming.km.collector.metric;
 
 import com.didiglobal.logi.log.ILog;
 import com.didiglobal.logi.log.LogFactory;
@@ -10,6 +10,8 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionContro
 import com.xiaojukeji.know.streaming.km.common.bean.event.metric.TopicMetricEvent;
 import com.xiaojukeji.know.streaming.km.common.constant.Constant;
 import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
+import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
+import com.xiaojukeji.know.streaming.km.common.utils.EnvUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
 import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
 import com.xiaojukeji.know.streaming.km.core.service.topic.TopicMetricService;
@@ -29,8 +31,8 @@ import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemT
  * @author didi
  */
 @Component
-public class TopicMetricCollector extends AbstractKafkaMetricCollector<TopicMetrics> {
-    protected static final ILog LOGGER = LogFactory.getLog(TopicMetricCollector.class);
+public class TopicMetricCollector extends AbstractMetricCollector<List<TopicMetrics>> {
+    protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
 
     @Autowired
     private VersionControlService versionControlService;
@@ -44,10 +46,11 @@ public class TopicMetricCollector extends AbstractKafkaMetricCollector<TopicMetr
     private static final Integer AGG_METRICS_BROKER_ID = -10000;
 
     @Override
-    public List<TopicMetrics> collectKafkaMetrics(ClusterPhy clusterPhy) {
+    public void collectMetrics(ClusterPhy clusterPhy) {
+        Long startTime = System.currentTimeMillis();
         Long clusterPhyId = clusterPhy.getId();
         List<Topic> topics = topicService.listTopicsFromCacheFirst(clusterPhyId);
-        List<VersionControlItem> items = versionControlService.listVersionControlItem(this.getClusterVersion(clusterPhy), collectorType().getCode());
+        List<VersionControlItem> items = versionControlService.listVersionControlItem(clusterPhyId, collectorType().getCode());
 
         FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(clusterPhyId);
 
@@ -61,7 +64,7 @@ public class TopicMetricCollector extends AbstractKafkaMetricCollector<TopicMetr
             allMetricsMap.put(topic.getTopicName(), metricsMap);
 
             future.runnableTask(
-                    String.format("class=TopicMetricCollector||clusterPhyId=%d||topicName=%s", clusterPhyId, topic.getTopicName()),
+                    String.format("method=TopicMetricCollector||clusterPhyId=%d||topicName=%s", clusterPhyId, topic.getTopicName()),
                     30000,
                     () -> collectMetrics(clusterPhyId, topic.getTopicName(), metricsMap, items)
             );
@@ -74,7 +77,8 @@ public class TopicMetricCollector extends AbstractKafkaMetricCollector<TopicMetr
 
         this.publishMetric(new TopicMetricEvent(this, metricsList));
 
-        return metricsList;
+        LOGGER.info("method=TopicMetricCollector||clusterPhyId={}||startTime={}||costTime={}||msg=collect finished.",
+                clusterPhyId, startTime, System.currentTimeMillis() - startTime);
     }
 
     @Override
@@ -114,9 +118,14 @@ public class TopicMetricCollector extends AbstractKafkaMetricCollector<TopicMetr
                     metricsMap.get(metrics.getBrokerId()).putMetric(metrics.getMetrics());
                 }
             });
 
+            if (!EnvUtil.isOnline()) {
+                LOGGER.info("method=TopicMetricCollector||clusterPhyId={}||topicName={}||metricName={}||metricValue={}.",
+                        clusterPhyId, topicName, v.getName(), ConvertUtil.obj2Json(ret.getData())
+                );
+            }
         } catch (Exception e) {
-            LOGGER.error(
-                    "method=collectMetrics||clusterPhyId={}||topicName={}||metricName={}||errMsg=exception!",
+            LOGGER.error("method=TopicMetricCollector||clusterPhyId={}||topicName={}||metricName={}||errMsg=exception!",
                     clusterPhyId, topicName, v.getName(), e
             );
         }
@@ -1,50 +0,0 @@
package com.xiaojukeji.know.streaming.km.collector.metric.connect;

import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.collector.metric.AbstractMetricCollector;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.ConnectCluster;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.LoggerUtil;
import com.xiaojukeji.know.streaming.km.core.service.connect.cluster.ConnectClusterService;
import org.springframework.beans.factory.annotation.Autowired;

import java.util.List;

/**
 * @author didi
 */
public abstract class AbstractConnectMetricCollector<M> extends AbstractMetricCollector<M, ConnectCluster> {
    private static final ILog LOGGER = LogFactory.getLog(AbstractConnectMetricCollector.class);

    protected static final ILog METRIC_COLLECTED_LOGGER = LoggerUtil.getMetricCollectedLogger();

    @Autowired
    private ConnectClusterService connectClusterService;

    public abstract List<M> collectConnectMetrics(ConnectCluster connectCluster);

    @Override
    public String getClusterVersion(ConnectCluster connectCluster) {
        return connectClusterService.getClusterVersion(connectCluster.getId());
    }

    @Override
    public void collectMetrics(ConnectCluster connectCluster) {
        long startTime = System.currentTimeMillis();

        // Collect the metrics
        List<M> metricsList = this.collectConnectMetrics(connectCluster);

        // Log the time cost
        LOGGER.info(
                "metricType={}||connectClusterId={}||costTimeUnitMs={}",
                this.collectorType().getMessage(), connectCluster.getId(), System.currentTimeMillis() - startTime
        );

        // Log the collected metrics
        METRIC_COLLECTED_LOGGER.debug("metricType={}||connectClusterId={}||metrics={}!",
                this.collectorType().getMessage(), connectCluster.getId(), ConvertUtil.obj2Json(metricsList)
        );
    }
}
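These abstract collectors follow the template-method pattern: the base class owns the timing-and-logging flow in `collectMetrics`, and each subclass supplies only the actual collection step. A generic sketch of that shape (class and method names here are illustrative, not the project's real API):

```java
import java.util.List;

// Base class owns the invariant flow: time the collection, then report it.
abstract class MetricCollectorSketch<M, C> {
    // Subclass hook: the actual per-cluster collection step.
    public abstract List<M> doCollect(C cluster);

    // Template method: wraps the hook with timing and logging.
    public final long collect(C cluster) {
        long start = System.currentTimeMillis();
        List<M> metrics = doCollect(cluster);
        long costMs = System.currentTimeMillis() - start;
        System.out.println("metricCount=" + metrics.size() + "||costTimeUnitMs=" + costMs);
        return costMs;
    }
}

// A concrete collector only fills in the hook.
public class TemplateDemo extends MetricCollectorSketch<String, Integer> {
    @Override
    public List<String> doCollect(Integer clusterId) {
        return List.of("metric-a", "metric-b");
    }

    public static void main(String[] args) {
        new TemplateDemo().collect(1);
    }
}
```

Making the template method `final` keeps every collector's cost accounting consistent while letting Kafka, Connect, and MM2 collectors differ only in what they fetch.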
@@ -1,83 +0,0 @@
package com.xiaojukeji.know.streaming.km.collector.metric.connect;

import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.ConnectCluster;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.connect.ConnectClusterMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionControlItem;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.connect.ConnectClusterMetricEvent;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
import com.xiaojukeji.know.streaming.km.core.service.connect.cluster.ConnectClusterMetricService;
import com.xiaojukeji.know.streaming.km.core.service.version.VersionControlService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

import java.util.Collections;
import java.util.List;

import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum.METRIC_CONNECT_CLUSTER;

/**
 * @author didi
 */
@Component
public class ConnectClusterMetricCollector extends AbstractConnectMetricCollector<ConnectClusterMetrics> {
    protected static final ILog LOGGER = LogFactory.getLog(ConnectClusterMetricCollector.class);

    @Autowired
    private VersionControlService versionControlService;

    @Autowired
    private ConnectClusterMetricService connectClusterMetricService;

    @Override
    public List<ConnectClusterMetrics> collectConnectMetrics(ConnectCluster connectCluster) {
        Long startTime = System.currentTimeMillis();
        Long clusterPhyId = connectCluster.getKafkaClusterPhyId();
        Long connectClusterId = connectCluster.getId();

        ConnectClusterMetrics metrics = new ConnectClusterMetrics(clusterPhyId, connectClusterId);
        metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
        List<VersionControlItem> items = versionControlService.listVersionControlItem(getClusterVersion(connectCluster), collectorType().getCode());
        FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(connectClusterId);

        for (VersionControlItem item : items) {
            future.runnableTask(
                    String.format("class=ConnectClusterMetricCollector||connectClusterId=%d||metricName=%s", connectClusterId, item.getName()),
                    30000,
                    () -> {
                        try {
                            Result<ConnectClusterMetrics> ret = connectClusterMetricService.collectConnectClusterMetricsFromKafka(connectClusterId, item.getName());
                            if (null == ret || !ret.hasData()) {
                                return null;
                            }
                            metrics.putMetric(ret.getData().getMetrics());
                        } catch (Exception e) {
                            LOGGER.error(
                                    "method=collectConnectMetrics||connectClusterId={}||metricName={}||errMsg=exception!",
                                    connectClusterId, item.getName(), e
                            );
                        }
                        return null;
                    }
            );
        }

        future.waitExecute(30000);

        metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (System.currentTimeMillis() - startTime) / 1000.0f);

        this.publishMetric(new ConnectClusterMetricEvent(this, Collections.singletonList(metrics)));

        return Collections.singletonList(metrics);
    }

    @Override
    public VersionItemTypeEnum collectorType() {
        return METRIC_CONNECT_CLUSTER;
    }
}
@@ -1,107 +0,0 @@
package com.xiaojukeji.know.streaming.km.collector.metric.connect;

import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.ConnectCluster;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.connect.ConnectorMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionControlItem;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.connect.ConnectorMetricEvent;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.enums.connect.ConnectorTypeEnum;
import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
import com.xiaojukeji.know.streaming.km.core.service.connect.connector.ConnectorMetricService;
import com.xiaojukeji.know.streaming.km.core.service.connect.connector.ConnectorService;
import com.xiaojukeji.know.streaming.km.core.service.version.VersionControlService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

import java.util.ArrayList;
import java.util.List;

import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum.METRIC_CONNECT_CONNECTOR;

/**
 * @author didi
 */
@Component
public class ConnectConnectorMetricCollector extends AbstractConnectMetricCollector<ConnectorMetrics> {
    protected static final ILog LOGGER = LogFactory.getLog(ConnectConnectorMetricCollector.class);

    @Autowired
    private VersionControlService versionControlService;

    @Autowired
    private ConnectorService connectorService;

    @Autowired
    private ConnectorMetricService connectorMetricService;

    @Override
    public List<ConnectorMetrics> collectConnectMetrics(ConnectCluster connectCluster) {
        Long clusterPhyId = connectCluster.getKafkaClusterPhyId();
        Long connectClusterId = connectCluster.getId();

        List<VersionControlItem> items = versionControlService.listVersionControlItem(this.getClusterVersion(connectCluster), collectorType().getCode());
        Result<List<String>> connectorList = connectorService.listConnectorsFromCluster(connectCluster);

        FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(connectClusterId);

        List<ConnectorMetrics> metricsList = new ArrayList<>();
        for (String connectorName : connectorList.getData()) {
            ConnectorMetrics metrics = new ConnectorMetrics(connectClusterId, connectorName);
            metrics.setClusterPhyId(clusterPhyId);

            metricsList.add(metrics);
            future.runnableTask(
                    String.format("class=ConnectConnectorMetricCollector||connectClusterId=%d||connectorName=%s", connectClusterId, connectorName),
                    30000,
                    () -> collectMetrics(connectClusterId, connectorName, metrics, items)
            );
        }
        future.waitResult(30000);

        this.publishMetric(new ConnectorMetricEvent(this, metricsList));

        return metricsList;
    }

    @Override
    public VersionItemTypeEnum collectorType() {
        return METRIC_CONNECT_CONNECTOR;
    }

    /**************************************************** private method ****************************************************/

    private void collectMetrics(Long connectClusterId, String connectorName, ConnectorMetrics metrics, List<VersionControlItem> items) {
        long startTime = System.currentTimeMillis();
        ConnectorTypeEnum connectorType = connectorService.getConnectorType(connectClusterId, connectorName);

        metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);

        for (VersionControlItem v : items) {
            try {
                // Skip metrics that have already been collected
                if (metrics.getMetrics().get(v.getName()) != null) {
                    continue;
                }

                Result<ConnectorMetrics> ret = connectorMetricService.collectConnectClusterMetricsFromKafka(connectClusterId, connectorName, v.getName(), connectorType);
                if (null == ret || ret.failed() || null == ret.getData()) {
                    continue;
                }

                metrics.putMetric(ret.getData().getMetrics());
            } catch (Exception e) {
                LOGGER.error(
                        "method=collectMetrics||connectClusterId={}||connectorName={}||metric={}||errMsg=exception!",
                        connectClusterId, connectorName, v.getName(), e
                );
            }
        }

        // Record the collection cost
        metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (System.currentTimeMillis() - startTime) / 1000.0f);
    }
}
@@ -1,117 +0,0 @@
package com.xiaojukeji.know.streaming.km.collector.metric.connect.mm2;

import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.collector.metric.connect.AbstractConnectMetricCollector;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.ConnectCluster;
import com.xiaojukeji.know.streaming.km.common.bean.entity.connect.mm2.MirrorMakerTopic;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.mm2.MirrorMakerMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionControlItem;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.mm2.MirrorMakerMetricEvent;
import com.xiaojukeji.know.streaming.km.common.bean.po.connect.ConnectorPO;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
import com.xiaojukeji.know.streaming.km.common.utils.FutureWaitUtil;
import com.xiaojukeji.know.streaming.km.core.service.connect.connector.ConnectorService;
import com.xiaojukeji.know.streaming.km.core.service.connect.mm2.MirrorMakerMetricService;
import com.xiaojukeji.know.streaming.km.core.service.connect.mm2.MirrorMakerService;
import com.xiaojukeji.know.streaming.km.core.service.version.VersionControlService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

import static com.xiaojukeji.know.streaming.km.common.constant.connect.KafkaConnectConstant.MIRROR_MAKER_SOURCE_CONNECTOR_TYPE;
import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum.METRIC_CONNECT_MIRROR_MAKER;

/**
 * @author wyb
 * @date 2022/12/15
 */
@Component
public class MirrorMakerMetricCollector extends AbstractConnectMetricCollector<MirrorMakerMetrics> {
    protected static final ILog LOGGER = LogFactory.getLog(MirrorMakerMetricCollector.class);

    @Autowired
    private VersionControlService versionControlService;

    @Autowired
    private MirrorMakerService mirrorMakerService;

    @Autowired
    private ConnectorService connectorService;

    @Autowired
    private MirrorMakerMetricService mirrorMakerMetricService;

    @Override
    public VersionItemTypeEnum collectorType() {
        return METRIC_CONNECT_MIRROR_MAKER;
    }

    @Override
    public List<MirrorMakerMetrics> collectConnectMetrics(ConnectCluster connectCluster) {
        Long clusterPhyId = connectCluster.getKafkaClusterPhyId();
        Long connectClusterId = connectCluster.getId();

        List<ConnectorPO> mirrorMakerList = connectorService.listByConnectClusterIdFromDB(connectClusterId).stream().filter(elem -> elem.getConnectorClassName().equals(MIRROR_MAKER_SOURCE_CONNECTOR_TYPE)).collect(Collectors.toList());
        Map<String, MirrorMakerTopic> mirrorMakerTopicMap = mirrorMakerService.getMirrorMakerTopicMap(connectClusterId).getData();

        List<VersionControlItem> items = versionControlService.listVersionControlItem(this.getClusterVersion(connectCluster), collectorType().getCode());
        FutureWaitUtil<Void> future = this.getFutureUtilByClusterPhyId(clusterPhyId);

        List<MirrorMakerMetrics> metricsList = new ArrayList<>();

        for (ConnectorPO mirrorMaker : mirrorMakerList) {
            MirrorMakerMetrics metrics = new MirrorMakerMetrics(clusterPhyId, connectClusterId, mirrorMaker.getConnectorName());
            metricsList.add(metrics);

            List<MirrorMakerTopic> mirrorMakerTopicList = mirrorMakerService.getMirrorMakerTopicList(mirrorMaker, mirrorMakerTopicMap);
            future.runnableTask(String.format("class=MirrorMakerMetricCollector||connectClusterId=%d||mirrorMakerName=%s", connectClusterId, mirrorMaker.getConnectorName()),
                    30000,
                    () -> collectMetrics(connectClusterId, mirrorMaker.getConnectorName(), metrics, items, mirrorMakerTopicList));
        }
        future.waitResult(30000);

        this.publishMetric(new MirrorMakerMetricEvent(this, metricsList));

        return metricsList;
    }

    /**************************************************** private method ****************************************************/
    private void collectMetrics(Long connectClusterId, String mirrorMakerName, MirrorMakerMetrics metrics, List<VersionControlItem> items, List<MirrorMakerTopic> mirrorMakerTopicList) {
        long startTime = System.currentTimeMillis();
        metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);

        for (VersionControlItem v : items) {
            try {
                // Skip metrics that have already been collected
                if (metrics.getMetrics().get(v.getName()) != null) {
                    continue;
                }

                Result<MirrorMakerMetrics> ret = mirrorMakerMetricService.collectMirrorMakerMetricsFromKafka(connectClusterId, mirrorMakerName, mirrorMakerTopicList, v.getName());
                if (ret == null || !ret.hasData()) {
                    continue;
                }
                metrics.putMetric(ret.getData().getMetrics());
            } catch (Exception e) {
                LOGGER.error(
                        "method=collectMetrics||connectClusterId={}||mirrorMakerName={}||metric={}||errMsg=exception!",
                        connectClusterId, mirrorMakerName, v.getName(), e
                );
            }
        }
        metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (System.currentTimeMillis() - startTime) / 1000.0f);
    }
}
@@ -1,50 +0,0 @@
package com.xiaojukeji.know.streaming.km.collector.metric.kafka;

import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.collector.metric.AbstractMetricCollector;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.LoggerUtil;
import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
import org.springframework.beans.factory.annotation.Autowired;

import java.util.List;

/**
 * @author didi
 */
public abstract class AbstractKafkaMetricCollector<M> extends AbstractMetricCollector<M, ClusterPhy> {
    private static final ILog LOGGER = LogFactory.getLog(AbstractMetricCollector.class);

    protected static final ILog METRIC_COLLECTED_LOGGER = LoggerUtil.getMetricCollectedLogger();

    @Autowired
    private ClusterPhyService clusterPhyService;

    public abstract List<M> collectKafkaMetrics(ClusterPhy clusterPhy);

    @Override
    public String getClusterVersion(ClusterPhy clusterPhy) {
        return clusterPhyService.getVersionFromCacheFirst(clusterPhy.getId());
    }

    @Override
    public void collectMetrics(ClusterPhy clusterPhy) {
        long startTime = System.currentTimeMillis();

        // Collect the metrics
        List<M> metricsList = this.collectKafkaMetrics(clusterPhy);

        // Log the time cost
        LOGGER.info(
                "metricType={}||clusterPhyId={}||costTimeUnitMs={}",
                this.collectorType().getMessage(), clusterPhy.getId(), System.currentTimeMillis() - startTime
        );

        // Log the collected metrics
        METRIC_COLLECTED_LOGGER.debug("metricType={}||clusterPhyId={}||metrics={}!",
                this.collectorType().getMessage(), clusterPhy.getId(), ConvertUtil.obj2Json(metricsList)
        );
    }
}
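All of these collectors log in a `key=value||key=value` format, which is easy to grep and to split downstream. A small sketch of building and parsing such a line (the helper names are hypothetical, not part of KnowStreaming):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal helpers for the "key=value||key=value" log-line convention.
public class KvLogSketch {
    // Joins fields in insertion order with the "||" separator.
    public static String build(Map<String, String> fields) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : fields.entrySet()) {
            if (sb.length() > 0) sb.append("||");
            sb.append(e.getKey()).append('=').append(e.getValue());
        }
        return sb.toString();
    }

    // Splits a line back into fields; ignores malformed segments.
    public static Map<String, String> parse(String line) {
        Map<String, String> out = new LinkedHashMap<>();
        for (String part : line.split("\\|\\|")) {
            int i = part.indexOf('=');
            if (i > 0) out.put(part.substring(0, i), part.substring(i + 1));
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> m = new LinkedHashMap<>();
        m.put("method", "TopicMetricCollector");
        m.put("clusterPhyId", "1");
        String line = build(m);
        System.out.println(line);                       // method=TopicMetricCollector||clusterPhyId=1
        System.out.println(parse(line).get("clusterPhyId")); // 1
    }
}
```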
@@ -1,111 +0,0 @@
package com.xiaojukeji.know.streaming.km.collector.metric.kafka;

import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
import com.xiaojukeji.know.streaming.km.common.bean.entity.config.ZKConfig;
import com.xiaojukeji.know.streaming.km.common.bean.entity.kafkacontroller.KafkaController;
import com.xiaojukeji.know.streaming.km.common.bean.entity.param.metric.ZookeeperMetricParam;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionControlItem;
import com.xiaojukeji.know.streaming.km.common.utils.Tuple;
import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.ZookeeperMetricEvent;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.ZookeeperInfo;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.ZookeeperMetrics;
import com.xiaojukeji.know.streaming.km.core.service.kafkacontroller.KafkaControllerService;
import com.xiaojukeji.know.streaming.km.core.service.version.VersionControlService;
import com.xiaojukeji.know.streaming.km.core.service.zookeeper.ZookeeperMetricService;
import com.xiaojukeji.know.streaming.km.core.service.zookeeper.ZookeeperService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

import java.util.Collections;
import java.util.List;
import java.util.stream.Collectors;

import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum.METRIC_ZOOKEEPER;

/**
 * @author didi
 */
@Component
public class ZookeeperMetricCollector extends AbstractKafkaMetricCollector<ZookeeperMetrics> {
    protected static final ILog LOGGER = LogFactory.getLog(ZookeeperMetricCollector.class);

    @Autowired
    private VersionControlService versionControlService;

    @Autowired
    private ZookeeperMetricService zookeeperMetricService;

    @Autowired
    private ZookeeperService zookeeperService;

    @Autowired
    private KafkaControllerService kafkaControllerService;

    @Override
    public List<ZookeeperMetrics> collectKafkaMetrics(ClusterPhy clusterPhy) {
        Long startTime = System.currentTimeMillis();
        Long clusterPhyId = clusterPhy.getId();
        List<VersionControlItem> items = versionControlService.listVersionControlItem(this.getClusterVersion(clusterPhy), collectorType().getCode());
        List<ZookeeperInfo> aliveZKList = zookeeperService.listFromDBByCluster(clusterPhyId)
                .stream()
                .filter(elem -> Constant.ALIVE.equals(elem.getStatus()))
                .collect(Collectors.toList());
        KafkaController kafkaController = kafkaControllerService.getKafkaControllerFromDB(clusterPhyId);

        ZookeeperMetrics metrics = ZookeeperMetrics.initWithMetric(clusterPhyId, Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, Constant.COLLECT_METRICS_ERROR_COST_TIME);
        if (ValidateUtils.isEmptyList(aliveZKList)) {
            // No alive ZK node: publish the event, then return directly
            publishMetric(new ZookeeperMetricEvent(this, Collections.singletonList(metrics)));
            return Collections.singletonList(metrics);
        }

        // Build the collection param
        ZookeeperMetricParam param = new ZookeeperMetricParam(
                clusterPhyId,
                aliveZKList.stream().map(elem -> new Tuple<String, Integer>(elem.getHost(), elem.getPort())).collect(Collectors.toList()),
                ConvertUtil.str2ObjByJson(clusterPhy.getZkProperties(), ZKConfig.class),
                kafkaController == null ? Constant.INVALID_CODE : kafkaController.getBrokerId(),
                null
        );

        for (VersionControlItem v : items) {
            try {
                if (null != metrics.getMetrics().get(v.getName())) {
                    continue;
                }

                param.setMetricName(v.getName());

                Result<ZookeeperMetrics> ret = zookeeperMetricService.collectMetricsFromZookeeper(param);
                if (null == ret || ret.failed() || null == ret.getData()) {
|
|
||||||
continue;
|
|
||||||
}
|
|
||||||
|
|
||||||
metrics.putMetric(ret.getData().getMetrics());
|
|
||||||
} catch (Exception e){
|
|
||||||
LOGGER.error(
|
|
||||||
"method=collectMetrics||clusterPhyId={}||metricName={}||errMsg=exception!",
|
|
||||||
clusterPhyId, v.getName(), e
|
|
||||||
);
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (System.currentTimeMillis() - startTime) / 1000.0f);
|
|
||||||
|
|
||||||
this.publishMetric(new ZookeeperMetricEvent(this, Collections.singletonList(metrics)));
|
|
||||||
|
|
||||||
return Collections.singletonList(metrics);
|
|
||||||
}
|
|
||||||
|
|
||||||
@Override
|
|
||||||
public VersionItemTypeEnum collectorType() {
|
|
||||||
return METRIC_ZOOKEEPER;
|
|
||||||
}
|
|
||||||
}
|
|
||||||
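The per-metric loop above skips metrics already present in this round and swallows per-metric failures so one bad probe cannot abort the whole collection. A minimal standalone sketch of that pattern follows; `fetchOne` is a hypothetical stand-in for `zookeeperMetricService.collectMetricsFromZookeeper`, not a KnowStreaming API:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

public class MetricLoopSketch {
    /**
     * Collect each named metric at most once; a null result or an exception
     * from fetchOne skips that metric instead of aborting the whole round.
     */
    public static Map<String, Float> collectAll(List<String> names, Function<String, Float> fetchOne) {
        Map<String, Float> metrics = new HashMap<>();
        for (String name : names) {
            if (metrics.get(name) != null) {
                continue; // already collected in this round
            }
            try {
                Float value = fetchOne.apply(name);
                if (value == null) {
                    continue; // failed lookup: move on to the next metric
                }
                metrics.put(name, value);
            } catch (Exception e) {
                // log and continue, mirroring the LOGGER.error(...) branch above
            }
        }
        return metrics;
    }

    public static void main(String[] args) {
        Map<String, Float> out = collectAll(
                List.of("AvgRequestLatency", "OutstandingRequests", "AvgRequestLatency"),
                name -> name.equals("OutstandingRequests") ? null : 1.0f);
        System.out.println(out); // {AvgRequestLatency=1.0}
    }
}
```

The trade-off is deliberate: partial metrics plus a logged error beat a failed round, since each round also records its own cost time as a metric.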
@@ -237,7 +237,7 @@ public class CollectThreadPoolService {
     private synchronized FutureWaitUtil<Void> closeOldAndCreateNew(Long shardId) {
         // The new pool
         FutureWaitUtil<Void> newFutureUtil = FutureWaitUtil.init(
-                "MetricCollect-Shard-" + shardId,
+                "CollectorMetricsFutureUtil-Shard-" + shardId,
                 this.futureUtilThreadNum,
                 this.futureUtilThreadNum,
                 this.futureUtilQueueSize
@@ -1,52 +0,0 @@
package com.xiaojukeji.know.streaming.km.collector.sink;

import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.po.BaseESPO;
import com.xiaojukeji.know.streaming.km.common.utils.FutureUtil;
import com.xiaojukeji.know.streaming.km.persistence.es.dao.BaseMetricESDAO;
import org.apache.commons.collections.CollectionUtils;

import java.util.List;
import java.util.Objects;

public abstract class AbstractMetricESSender {
    private static final ILog LOGGER = LogFactory.getLog(AbstractMetricESSender.class);

    private static final int THRESHOLD = 100;

    private static final FutureUtil<Void> esExecutor = FutureUtil.init(
            "MetricsESSender",
            10,
            20,
            10000
    );

    /**
     * Send to ES, split by monitoring dimension
     */
    protected boolean send2es(String index, List<? extends BaseESPO> statsList) {
        LOGGER.info("method=send2es||indexName={}||metricsSize={}||msg=send metrics to es", index, statsList.size());

        if (CollectionUtils.isEmpty(statsList)) {
            return true;
        }

        BaseMetricESDAO baseMetricESDao = BaseMetricESDAO.getByStatsType(index);
        if (Objects.isNull(baseMetricESDao)) {
            LOGGER.error("method=send2es||indexName={}||errMsg=find dao failed", index);
            return false;
        }

        for (int i = 0; i < statsList.size(); i += THRESHOLD) {
            final int idxStart = i;

            // Asynchronous send
            esExecutor.submitTask(
                    () -> baseMetricESDao.batchInsertStats(statsList.subList(idxStart, Math.min(idxStart + THRESHOLD, statsList.size())))
            );
        }

        return true;
    }
}
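The `i += THRESHOLD` / `subList` loop above splits the metrics list into batches of at most 100 documents before each asynchronous insert. A minimal standalone version of that chunking, with the async submit replaced by collecting the batches, looks like:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchSketch {
    /** Split stats into consecutive batches of at most threshold elements. */
    public static <T> List<List<T>> chunk(List<T> stats, int threshold) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < stats.size(); i += threshold) {
            // subList is a view over the original list; copy it so each
            // batch stays valid independently of later mutations
            batches.add(new ArrayList<>(stats.subList(i, Math.min(i + threshold, stats.size()))));
        }
        return batches;
    }

    public static void main(String[] args) {
        System.out.println(chunk(List.of(1, 2, 3, 4, 5), 2)); // [[1, 2], [3, 4], [5]]
    }
}
```

Note one difference from the sketch: `send2es` hands the raw `subList` view to a task that runs on another thread, which is safe only because the stats list is not mutated after submission.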
@@ -1,33 +0,0 @@
package com.xiaojukeji.know.streaming.km.collector.sink.connect;

import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.connect.ConnectClusterMetricEvent;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.connect.ConnectClusterMetricPO;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;

import javax.annotation.PostConstruct;

import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.CONNECT_CLUSTER_INDEX;

/**
 * @author wyb
 * @date 2022/11/7
 */
@Component
public class ConnectClusterMetricESSender extends AbstractMetricESSender implements ApplicationListener<ConnectClusterMetricEvent> {
    protected static final ILog LOGGER = LogFactory.getLog(ConnectClusterMetricESSender.class);

    @PostConstruct
    public void init(){
        LOGGER.info("class=ConnectClusterMetricESSender||method=init||msg=init finished");
    }

    @Override
    public void onApplicationEvent(ConnectClusterMetricEvent event) {
        send2es(CONNECT_CLUSTER_INDEX, ConvertUtil.list2List(event.getConnectClusterMetrics(), ConnectClusterMetricPO.class));
    }
}
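Every sender in this package has the same shape: a Spring `ApplicationListener<SomeMetricEvent>` that converts the event payload to PO objects and calls `send2es`, so collectors publish and senders subscribe without referencing each other. The decoupling can be sketched without Spring as a tiny listener registry; the names below are illustrative, not KnowStreaming or Spring APIs:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class EventBusSketch {
    private final List<Consumer<Object>> listeners = new ArrayList<>();

    /** Register a listener for events of the given type; other events are ignored. */
    public <E> void addListener(Class<E> type, Consumer<E> listener) {
        listeners.add(event -> {
            if (type.isInstance(event)) {
                listener.accept(type.cast(event));
            }
        });
    }

    /** Deliver an event to every listener whose declared type matches. */
    public void publish(Object event) {
        listeners.forEach(l -> l.accept(event));
    }

    public static void main(String[] args) {
        EventBusSketch bus = new EventBusSketch();
        List<String> sentToEs = new ArrayList<>();
        // Plays the role of a sender's onApplicationEvent(...) method
        bus.addListener(String.class, sentToEs::add);
        bus.publish("broker-metrics"); // delivered
        bus.publish(42);               // wrong event type, ignored
        System.out.println(sentToEs);  // [broker-metrics]
    }
}
```

In the real code Spring's `ApplicationEventPublisher` plays the `publish` role and the event type is selected by the listener's generic parameter rather than an explicit `Class` argument.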
@@ -1,33 +0,0 @@
package com.xiaojukeji.know.streaming.km.collector.sink.connect;

import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.connect.ConnectorMetricEvent;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.connect.ConnectorMetricPO;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;

import javax.annotation.PostConstruct;

import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.CONNECT_CONNECTOR_INDEX;

/**
 * @author wyb
 * @date 2022/11/7
 */
@Component
public class ConnectorMetricESSender extends AbstractMetricESSender implements ApplicationListener<ConnectorMetricEvent> {
    protected static final ILog LOGGER = LogFactory.getLog(ConnectorMetricESSender.class);

    @PostConstruct
    public void init(){
        LOGGER.info("class=ConnectorMetricESSender||method=init||msg=init finished");
    }

    @Override
    public void onApplicationEvent(ConnectorMetricEvent event) {
        send2es(CONNECT_CONNECTOR_INDEX, ConvertUtil.list2List(event.getConnectorMetricsList(), ConnectorMetricPO.class));
    }
}
@@ -1,29 +0,0 @@
package com.xiaojukeji.know.streaming.km.collector.sink.kafka;

import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.BrokerMetricEvent;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.BrokerMetricPO;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;

import javax.annotation.PostConstruct;

import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.BROKER_INDEX;

@Component
public class BrokerMetricESSender extends AbstractMetricESSender implements ApplicationListener<BrokerMetricEvent> {
    private static final ILog LOGGER = LogFactory.getLog(BrokerMetricESSender.class);

    @PostConstruct
    public void init(){
        LOGGER.info("method=init||msg=init finished");
    }

    @Override
    public void onApplicationEvent(BrokerMetricEvent event) {
        send2es(BROKER_INDEX, ConvertUtil.list2List(event.getBrokerMetrics(), BrokerMetricPO.class));
    }
}
@@ -1,29 +0,0 @@
package com.xiaojukeji.know.streaming.km.collector.sink.kafka;

import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.ClusterMetricEvent;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.ClusterMetricPO;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;

import javax.annotation.PostConstruct;

import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.CLUSTER_INDEX;

@Component
public class ClusterMetricESSender extends AbstractMetricESSender implements ApplicationListener<ClusterMetricEvent> {
    private static final ILog LOGGER = LogFactory.getLog(ClusterMetricESSender.class);

    @PostConstruct
    public void init(){
        LOGGER.info("method=init||msg=init finished");
    }

    @Override
    public void onApplicationEvent(ClusterMetricEvent event) {
        send2es(CLUSTER_INDEX, ConvertUtil.list2List(event.getClusterMetrics(), ClusterMetricPO.class));
    }
}
@@ -1,29 +0,0 @@
package com.xiaojukeji.know.streaming.km.collector.sink.kafka;

import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.GroupMetricEvent;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.GroupMetricPO;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;

import javax.annotation.PostConstruct;

import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.GROUP_INDEX;

@Component
public class GroupMetricESSender extends AbstractMetricESSender implements ApplicationListener<GroupMetricEvent> {
    private static final ILog LOGGER = LogFactory.getLog(GroupMetricESSender.class);

    @PostConstruct
    public void init(){
        LOGGER.info("method=init||msg=init finished");
    }

    @Override
    public void onApplicationEvent(GroupMetricEvent event) {
        send2es(GROUP_INDEX, ConvertUtil.list2List(event.getGroupMetrics(), GroupMetricPO.class));
    }
}
@@ -1,29 +0,0 @@
package com.xiaojukeji.know.streaming.km.collector.sink.kafka;

import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.PartitionMetricEvent;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.PartitionMetricPO;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;

import javax.annotation.PostConstruct;

import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.PARTITION_INDEX;

@Component
public class PartitionMetricESSender extends AbstractMetricESSender implements ApplicationListener<PartitionMetricEvent> {
    private static final ILog LOGGER = LogFactory.getLog(PartitionMetricESSender.class);

    @PostConstruct
    public void init(){
        LOGGER.info("method=init||msg=init finished");
    }

    @Override
    public void onApplicationEvent(PartitionMetricEvent event) {
        send2es(PARTITION_INDEX, ConvertUtil.list2List(event.getPartitionMetrics(), PartitionMetricPO.class));
    }
}
@@ -1,29 +0,0 @@
package com.xiaojukeji.know.streaming.km.collector.sink.kafka;

import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.*;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.*;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;

import javax.annotation.PostConstruct;

import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.TOPIC_INDEX;

@Component
public class TopicMetricESSender extends AbstractMetricESSender implements ApplicationListener<TopicMetricEvent> {
    private static final ILog LOGGER = LogFactory.getLog(TopicMetricESSender.class);

    @PostConstruct
    public void init(){
        LOGGER.info("method=init||msg=init finished");
    }

    @Override
    public void onApplicationEvent(TopicMetricEvent event) {
        send2es(TOPIC_INDEX, ConvertUtil.list2List(event.getTopicMetrics(), TopicMetricPO.class));
    }
}
@@ -1,29 +0,0 @@
package com.xiaojukeji.know.streaming.km.collector.sink.kafka;

import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.ZookeeperMetricEvent;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.ZookeeperMetricPO;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;

import javax.annotation.PostConstruct;

import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.ZOOKEEPER_INDEX;

@Component
public class ZookeeperMetricESSender extends AbstractMetricESSender implements ApplicationListener<ZookeeperMetricEvent> {
    private static final ILog LOGGER = LogFactory.getLog(ZookeeperMetricESSender.class);

    @PostConstruct
    public void init(){
        LOGGER.info("method=init||msg=init finished");
    }

    @Override
    public void onApplicationEvent(ZookeeperMetricEvent event) {
        send2es(ZOOKEEPER_INDEX, ConvertUtil.list2List(event.getZookeeperMetrics(), ZookeeperMetricPO.class));
    }
}
@@ -1,33 +0,0 @@
package com.xiaojukeji.know.streaming.km.collector.sink.mm2;

import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.collector.sink.AbstractMetricESSender;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.mm2.MirrorMakerMetricEvent;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.mm2.MirrorMakerMetricPO;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;

import javax.annotation.PostConstruct;

import static com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant.CONNECT_MM2_INDEX;

/**
 * @author zengqiao
 * @date 2022/12/20
 */
@Component
public class MirrorMakerMetricESSender extends AbstractMetricESSender implements ApplicationListener<MirrorMakerMetricEvent> {
    protected static final ILog LOGGER = LogFactory.getLog(MirrorMakerMetricESSender.class);

    @PostConstruct
    public void init(){
        LOGGER.info("method=init||msg=init finished");
    }

    @Override
    public void onApplicationEvent(MirrorMakerMetricEvent event) {
        send2es(CONNECT_MM2_INDEX, ConvertUtil.list2List(event.getMetricsList(), MirrorMakerMetricPO.class));
    }
}
@@ -5,13 +5,13 @@
     <modelVersion>4.0.0</modelVersion>
     <groupId>com.xiaojukeji.kafka</groupId>
     <artifactId>km-common</artifactId>
-    <version>${revision}</version>
+    <version>${km.revision}</version>
     <packaging>jar</packaging>

     <parent>
         <artifactId>km</artifactId>
         <groupId>com.xiaojukeji.kafka</groupId>
-        <version>${revision}</version>
+        <version>${km.revision}</version>
     </parent>

     <properties>
@@ -81,6 +81,10 @@
             <version>3.0.2</version>
         </dependency>

+        <dependency>
+            <groupId>junit</groupId>
+            <artifactId>junit</artifactId>
+        </dependency>
         <dependency>
             <groupId>org.projectlombok</groupId>
             <artifactId>lombok</artifactId>
@@ -123,9 +127,5 @@
             <groupId>org.apache.kafka</groupId>
             <artifactId>kafka_2.13</artifactId>
         </dependency>
-        <dependency>
-            <groupId>org.apache.kafka</groupId>
-            <artifactId>connect-runtime</artifactId>
-        </dependency>
     </dependencies>
</project>
@@ -1,28 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.cluster;

import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.MetricDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationSortDTO;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;

import javax.validation.constraints.NotNull;
import java.util.List;

/**
 * @author zengqiao
 * @date 22/02/24
 */
@Data
public class ClusterConnectorsOverviewDTO extends PaginationSortDTO {
    @NotNull(message = "latestMetricNames不允许为空")
    @ApiModelProperty("需要指标点的信息")
    private List<String> latestMetricNames;

    @NotNull(message = "metricLines不允许为空")
    @ApiModelProperty("需要指标曲线的信息")
    private MetricDTO metricLines;

    @ApiModelProperty("需要排序的指标名称列表,比较第一个不为空的metric")
    private List<String> sortMetricNameList;
}
@@ -1,18 +1,19 @@
 package com.xiaojukeji.know.streaming.km.common.bean.dto.cluster;

-import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationBaseDTO;
+import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationMulFuzzySearchDTO;
 import io.swagger.annotations.ApiModelProperty;
 import lombok.Data;

 /**
- * @author wyb
- * @date 2022/10/17
+ * @author zengqiao
+ * @date 22/02/24
  */
 @Data
-public class ClusterGroupSummaryDTO extends PaginationBaseDTO {
+public class ClusterGroupsOverviewDTO extends PaginationMulFuzzySearchDTO {
     @ApiModelProperty("查找该Topic")
-    private String searchTopicName;
+    private String topicName;

     @ApiModelProperty("查找该Group")
-    private String searchGroupName;
+    private String groupName;
 }
@@ -1,12 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.cluster;

import lombok.Data;

/**
 * @author zengqiao
 * @date 22/12/12
 */
@Data
public class ClusterMirrorMakersOverviewDTO extends ClusterConnectorsOverviewDTO {
}
@@ -3,7 +3,6 @@ package com.xiaojukeji.know.streaming.km.common.bean.dto.cluster;
 import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
 import com.xiaojukeji.know.streaming.km.common.bean.dto.BaseDTO;
 import com.xiaojukeji.know.streaming.km.common.bean.entity.config.JmxConfig;
-import com.xiaojukeji.know.streaming.km.common.bean.entity.config.ZKConfig;
 import io.swagger.annotations.ApiModel;
 import io.swagger.annotations.ApiModelProperty;
 import lombok.Data;
@@ -35,8 +34,4 @@ public class ClusterPhyBaseDTO extends BaseDTO {
     @NotNull(message = "jmxProperties不允许为空")
     @ApiModelProperty(value="Jmx配置")
     protected JmxConfig jmxProperties;
-
-    // TODO: add a not-null constraint once the frontend page exposes this field
-    @ApiModelProperty(value="ZK配置")
-    protected ZKConfig zkProperties;
 }
@@ -1,13 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.cluster;

import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationBaseDTO;
import lombok.Data;

/**
 * @author wyc
 * @date 2022/9/23
 */
@Data
public class ClusterZookeepersOverviewDTO extends PaginationBaseDTO {

}
@@ -1,32 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.connect;

import com.xiaojukeji.know.streaming.km.common.bean.dto.BaseDTO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import lombok.NoArgsConstructor;

import javax.validation.constraints.NotBlank;
import javax.validation.constraints.NotNull;

/**
 * @author zengqiao
 * @date 2022-10-17
 */
@Data
@NoArgsConstructor
@ApiModel(description = "集群Connector")
public class ClusterConnectorDTO extends BaseDTO {
    @NotNull(message = "connectClusterId不允许为空")
    @ApiModelProperty(value = "Connector集群ID", example = "1")
    protected Long connectClusterId;

    @NotBlank(message = "name不允许为空串")
    @ApiModelProperty(value = "Connector名称", example = "know-streaming-connector")
    protected String connectorName;

    public ClusterConnectorDTO(Long connectClusterId, String connectorName) {
        this.connectClusterId = connectClusterId;
        this.connectorName = connectorName;
    }
}
@@ -1,29 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.connect.cluster;

import com.xiaojukeji.know.streaming.km.common.bean.dto.BaseDTO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;

/**
 * @author zengqiao
 * @date 2022-10-17
 */
@Data
@ApiModel(description = "集群Connector")
public class ConnectClusterDTO extends BaseDTO {
    @ApiModelProperty(value = "Connect集群ID", example = "1")
    private Long id;

    @ApiModelProperty(value = "Connect集群名称", example = "know-streaming")
    private String name;

    @ApiModelProperty(value = "Connect集群URL", example = "http://127.0.0.1:8080")
    private String clusterUrl;

    @ApiModelProperty(value = "Connect集群版本", example = "2.5.1")
    private String version;

    @ApiModelProperty(value = "JMX配置", example = "")
    private String jmxProperties;
}
@@ -1,20 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector;

import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.ClusterConnectorDTO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;

import javax.validation.constraints.NotBlank;

/**
 * @author zengqiao
 * @date 2022-10-17
 */
@Data
@ApiModel(description = "操作Connector")
public class ConnectorActionDTO extends ClusterConnectorDTO {
    @NotBlank(message = "action不允许为空串")
    @ApiModelProperty(value = "Connector名称", example = "stop|restart|resume")
    private String action;
}
@@ -1,36 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector;

import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.ClusterConnectorDTO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import lombok.NoArgsConstructor;

import java.util.Properties;

/**
 * @author zengqiao
 * @date 2022-10-17
 */
@Data
@JsonIgnoreProperties(ignoreUnknown = true)
@NoArgsConstructor
@ApiModel(description = "创建Connector")
public class ConnectorCreateDTO extends ClusterConnectorDTO {
    @Deprecated
    @ApiModelProperty(value = "配置, 优先使用config字段,3.5.0版本将删除该字段", example = "")
    protected Properties configs;

    @ApiModelProperty(value = "配置", example = "")
    protected Properties config;

    public ConnectorCreateDTO(Long connectClusterId, String connectorName, Properties config) {
        super(connectClusterId, connectorName);
        this.config = config;
    }

    public Properties getSuitableConfig() {
        return config != null ? config : configs;
    }
}
@@ -1,14 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector;

import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.ClusterConnectorDTO;
import io.swagger.annotations.ApiModel;
import lombok.Data;

/**
 * @author zengqiao
 * @date 2022-10-17
 */
@Data
@ApiModel(description = "删除Connector")
public class ConnectorDeleteDTO extends ClusterConnectorDTO {
}
@@ -1,15 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.connect.mm2;

import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector.ConnectorActionDTO;
import io.swagger.annotations.ApiModel;
import lombok.Data;

/**
 * @author zengqiao
 * @date 2022-12-12
 */
@Data
@ApiModel(description = "操作MM2")
public class MirrorMaker2ActionDTO extends ConnectorActionDTO {
}
@@ -1,14 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.connect.mm2;

import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector.ConnectorDeleteDTO;
import io.swagger.annotations.ApiModel;
import lombok.Data;

/**
 * @author zengqiao
 * @date 2022-12-12
 */
@Data
@ApiModel(description = "删除MM2")
public class MirrorMaker2DeleteDTO extends ConnectorDeleteDTO {
}
@@ -1,69 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.connect.mm2;

import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector.ConnectorCreateDTO;
import com.xiaojukeji.know.streaming.km.common.constant.connect.KafkaConnectConstant;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import org.apache.kafka.clients.CommonClientConfigs;

import javax.validation.Valid;
import javax.validation.constraints.NotNull;
import java.util.Properties;

/**
 * @author zengqiao
 * @date 2022-12-12
 */
@Data
@ApiModel(description = "Create MM2")
public class MirrorMakerCreateDTO extends ConnectorCreateDTO {
    @NotNull(message = "sourceKafkaClusterId must not be null")
    @ApiModelProperty(value = "Source Kafka cluster ID", example = "")
    private Long sourceKafkaClusterId;

    @Valid
    @ApiModelProperty(value = "heartbeat-connector configs", example = "")
    private Properties heartbeatConnectorConfigs;

    @Valid
    @ApiModelProperty(value = "checkpoint-connector configs", example = "")
    private Properties checkpointConnectorConfigs;

    public void unifyData(Long sourceKafkaClusterId, String sourceBootstrapServers, Properties sourceKafkaProps,
                          Long targetKafkaClusterId, String targetBootstrapServers, Properties targetKafkaProps) {
        if (sourceKafkaProps == null) {
            sourceKafkaProps = new Properties();
        }

        if (targetKafkaProps == null) {
            targetKafkaProps = new Properties();
        }

        this.unifyData(this.getSuitableConfig(), sourceKafkaClusterId, sourceBootstrapServers, sourceKafkaProps, targetKafkaClusterId, targetBootstrapServers, targetKafkaProps);

        if (heartbeatConnectorConfigs != null) {
            this.unifyData(this.heartbeatConnectorConfigs, sourceKafkaClusterId, sourceBootstrapServers, sourceKafkaProps, targetKafkaClusterId, targetBootstrapServers, targetKafkaProps);
        }

        if (checkpointConnectorConfigs != null) {
            this.unifyData(this.checkpointConnectorConfigs, sourceKafkaClusterId, sourceBootstrapServers, sourceKafkaProps, targetKafkaClusterId, targetBootstrapServers, targetKafkaProps);
        }
    }

    private void unifyData(Properties dataConfig,
                           Long sourceKafkaClusterId, String sourceBootstrapServers, Properties sourceKafkaProps,
                           Long targetKafkaClusterId, String targetBootstrapServers, Properties targetKafkaProps) {
        dataConfig.put(KafkaConnectConstant.MIRROR_MAKER_SOURCE_CLUSTER_ALIAS_FIELD_NAME, sourceKafkaClusterId);
        dataConfig.put(KafkaConnectConstant.MIRROR_MAKER_SOURCE_CLUSTER_FIELD_NAME + "." + CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, sourceBootstrapServers);
        for (Object configKey : sourceKafkaProps.keySet()) {
            dataConfig.put(KafkaConnectConstant.MIRROR_MAKER_SOURCE_CLUSTER_FIELD_NAME + "." + configKey, sourceKafkaProps.getProperty((String) configKey));
        }

        dataConfig.put(KafkaConnectConstant.MIRROR_MAKER_TARGET_CLUSTER_ALIAS_FIELD_NAME, targetKafkaClusterId);
        dataConfig.put(KafkaConnectConstant.MIRROR_MAKER_TARGET_CLUSTER_FIELD_NAME + "." + CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, targetBootstrapServers);
        for (Object configKey : targetKafkaProps.keySet()) {
            dataConfig.put(KafkaConnectConstant.MIRROR_MAKER_TARGET_CLUSTER_FIELD_NAME + "." + configKey, targetKafkaProps.getProperty((String) configKey));
        }
    }
}
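`unifyData` above rewrites a connector config by setting the source/target cluster alias and `bootstrap.servers` keys, then prefixing every plain Kafka client property with the cluster field name. A standalone sketch of that prefixing scheme, assuming the conventional MM2 prefixes `source.cluster` / `target.cluster` and an `.alias` suffix for the alias key (the real values live in `KafkaConnectConstant` and may differ):

```java
import java.util.Properties;

// Illustrative sketch of the prefixing done by MirrorMakerCreateDTO.unifyData.
// "source.cluster", "target.cluster", and the ".alias" suffix are assumed
// stand-ins for the constants in KafkaConnectConstant.
public class Mm2PrefixSketch {
    static void prefixInto(Properties dataConfig, String fieldName,
                           Object alias, String bootstrapServers, Properties clientProps) {
        dataConfig.put(fieldName + ".alias", alias);
        dataConfig.put(fieldName + ".bootstrap.servers", bootstrapServers);
        for (Object key : clientProps.keySet()) {
            // Every plain client property becomes "<fieldName>.<key>".
            dataConfig.put(fieldName + "." + key, clientProps.getProperty((String) key));
        }
    }

    public static void main(String[] args) {
        Properties sourceProps = new Properties();
        sourceProps.setProperty("security.protocol", "SASL_PLAINTEXT");

        Properties dataConfig = new Properties();
        prefixInto(dataConfig, "source.cluster", 1L, "src-kafka:9092", sourceProps);
        prefixInto(dataConfig, "target.cluster", 2L, "dst-kafka:9092", new Properties());

        System.out.println(dataConfig.get("source.cluster.security.protocol"));
        System.out.println(dataConfig.get("target.cluster.bootstrap.servers"));
    }
}
```

Namespacing the client properties this way lets one flat `Properties` object carry independent source-cluster and target-cluster settings without key collisions.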
@@ -1,20 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.connect.task;

import com.xiaojukeji.know.streaming.km.common.bean.dto.connect.connector.ConnectorActionDTO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;

import javax.validation.constraints.NotNull;

/**
 * @author zengqiao
 * @date 2022-10-17
 */
@Data
@ApiModel(description = "Operate Task")
public class TaskActionDTO extends ConnectorActionDTO {
    @NotNull(message = "taskId must not be NULL")
    @ApiModelProperty(value = "taskId", example = "123")
    private Long taskId;
}