Compare commits


4 Commits

Author SHA1 Message Date
EricZeng
65499443c2 Merge pull request #590 from didi/master
Notes on the default username/password change
2022-09-15 16:21:36 +08:00
EricZeng
6515dd28aa Merge pull request #589 from didi/master
Merge master branch
2022-09-15 15:54:09 +08:00
EricZeng
13354145fc Merge pull request #558 from didi/master
Merge master branch
2022-09-05 17:08:51 +08:00
EricZeng
0b376bd69c Merge pull request #552 from didi/master
Merge master branch
2022-09-05 11:37:26 +08:00
417 changed files with 4422 additions and 15790 deletions

View File

@@ -1,51 +0,0 @@
---
name: Report a bug
about: Report a bug in KnowStreaming
title: ''
labels: bug
assignees: ''
---
- [ ] I have searched the existing [issues](https://github.com/didi/KnowStreaming/issues) and found no duplicates.
Would you like to claim this bug?
「 Y / N 」
### Environment
* KnowStreaming version : <font size=4 color =red> xxx </font>
* Operating System version : <font size=4 color =red> xxx </font>
* Java version : <font size=4 color =red> xxx </font>
### Steps to reproduce
1. xxx
2. xxx
3. xxx
### Expected result
<!-- What did you expect to happen? -->
### Actual result
<!-- What actually happened? -->
---
If there is an exception, please attach the stack trace:
```
Just put your stack trace here!
```

View File

@@ -1,8 +0,0 @@
blank_issues_enabled: true
contact_links:
- name: Discussions
url: https://github.com/didi/KnowStreaming/discussions/new
about: Start questions, discussions, and so on
- name: KnowStreaming official website
url: https://knowstreaming.com/
about: KnowStreaming website

View File

@@ -1,26 +0,0 @@
---
name: Optimization suggestion
about: Suggest an improvement to an existing feature
title: ''
labels: Optimization Suggestions
assignees: ''
---
- [ ] I have searched the existing [issues](https://github.com/didi/KnowStreaming/issues) and found no duplicates.
Would you like to claim this optimization suggestion?
「 Y / N 」
### Environment
* KnowStreaming version : <font size=4 color =red> xxx </font>
* Operating System version : <font size=4 color =red> xxx </font>
* Java version : <font size=4 color =red> xxx </font>
### Feature that needs improvement
### Suggested improvement

View File

@@ -1,20 +0,0 @@
---
name: Propose a new feature
about: Request a new feature for KnowStreaming
title: ''
labels: feature
assignees: ''
---
- [ ] I have searched the [issues](https://github.com/didi/KnowStreaming/issues) and found no related feature request.
- [ ] I have searched the released versions in the [release notes](https://github.com/didi/KnowStreaming/releases) and did not find this feature.
Would you like to claim this feature?
「 Y / N 」
## Describe the feature here
<!-- Please describe your requirement as clearly as possible -->

View File

@@ -1,12 +0,0 @@
---
name: Ask a question
about: Ask a question about KnowStreaming
title: ''
labels: question
assignees: ''
---
- [ ] I have searched the existing [issues](https://github.com/didi/KnowStreaming/issues) and found no duplicates.
## Ask your question here

View File

@@ -1,22 +0,0 @@
Please do not create a Pull Request without first creating an issue.
## What is the purpose of the change
XXXXX
## Brief changelog
XX
## Verifying this change
XXXX
Please follow this checklist to help us integrate your contribution quickly and easily:
* [ ] Make sure there is a GitHub issue filed for the change (usually before you start working on it). Trivial changes like typos do not require a GitHub issue. Your Pull Request should address just this issue, without pulling in other changes; one PR resolves one issue.
* [ ] Format the Pull Request title like [ISSUE #123] support Confluent Schema Registry. Each commit in the Pull Request should have a meaningful subject line and body.
* [ ] Write a Pull Request description that is detailed enough to understand what the Pull Request does, how, and why.
* [ ] Write the necessary unit tests to verify your logic. If a new feature or a significant change is submitted, remember to add integration tests in the test module.
* [ ] Make sure compilation passes and the integration tests pass.

View File

@@ -1,74 +0,0 @@
# Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, gender identity and expression, level of experience,
education, socio-economic status, nationality, personal appearance, race,
religion, or sexual identity and orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment
include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at shirenchuang@didiglobal.com. All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
[homepage]: https://www.contributor-covenant.org

View File

@@ -1,150 +1,28 @@
# Contribution Guideline
Thanks for considering contributing to this project. All issues and pull requests are highly appreciated.
## Pull Requests
# Contributing to KnowStreaming
Before sending pull request to this project, please read and follow guidelines below.
1. Branch: We only accept pull request on `dev` branch.
2. Coding style: Follow the coding style used in LogiKM.
3. Commit message: Use English and watch your spelling.
4. Test: Make sure to test your code.
Welcome 👏🏻 to KnowStreaming! This document is a guide on how to contribute to KnowStreaming.
Add device model, API version, related logs, screenshots, and other relevant information to your pull request if possible.
If you find anything incorrect or missing, please leave comments/suggestions.
NOTE: We assume all your contributions can be licensed under the [Apache License 2.0](LICENSE).
## Code of Conduct
Please be sure to read and follow our [Code of Conduct](./CODE_OF_CONDUCT.md).
## Issues
We love clearly described issues. :)
The following information can help us resolve the issue faster.
## Contributing
**KnowStreaming** welcomes new participants in any role, including **User**, **Contributor**, **Committer**, and **PMC**.
We encourage newcomers to actively join the **KnowStreaming** project and grow from User to Contributor, Committer, or even PMC.
To do that, newcomers need to contribute actively to the **KnowStreaming** project. The following explains how to contribute to **KnowStreaming**.
### Creating/Opening an Issue
If you find a typo in the documentation, **find a bug** in the code, want a **new feature**, or want to **make a suggestion**, you can [create an issue](https://github.com/didi/KnowStreaming/issues/new/choose) on GitHub to report it.
If you want to contribute directly, you can pick an issue with one of the labels below.
- [contribution welcome](https://github.com/didi/KnowStreaming/labels/contribution%20welcome): issues that badly need to be resolved/added
- [good first issue](https://github.com/didi/KnowStreaming/labels/good%20first%20issue): friendly to newcomers; a good issue to warm up with.
<font color=red ><b> Note that any PR must be associated with a valid issue. Otherwise, the PR will be rejected.</b></font>
### Starting your contribution
**Branches**
We use the `dev` branch as the development branch, which means it is an unstable branch.
In addition, our branching model follows [https://nvie.com/posts/a-successful-git-branching-model/](https://nvie.com/posts/a-successful-git-branching-model/). We strongly recommend that newcomers read the article above before creating a PR.
**Contribution workflow**
For convenience, we define two terms here:
the repository you fork is your personal repository, which we call the **forked repository**;
the project you forked from, we call the **source repository**.
Now, if you are ready to create a PR, here is the workflow for contributors (sketched in the example below):
1. Fork the [KnowStreaming](https://github.com/didi/KnowStreaming) project into your own repository
2. Pull from the source repository's `dev` branch and create your own local branch, e.g. `dev`
3. Modify the code on your local branch
4. Rebase onto the development branch and resolve any conflicts
5. Commit and push your changes to your own **forked repository**
6. Create a Pull Request against the `dev` branch of the **source repository**.
7. Wait for a reply. If the reply is slow, feel free to nudge us mercilessly.
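A minimal sketch of the workflow above as shell commands (the GitHub username, branch name, and issue number are placeholders):
```bash
# 1. Fork didi/KnowStreaming on GitHub, then clone your fork (the forked repository)
git clone https://github.com/<your-username>/KnowStreaming.git
cd KnowStreaming

# 2. Track the source repository and branch off its dev branch
git remote add upstream https://github.com/didi/KnowStreaming.git
git fetch upstream
git checkout -b my-change upstream/dev

# 3. Edit the code, then 4. rebase onto the latest dev and resolve conflicts
git fetch upstream && git rebase upstream/dev

# 5. Commit and push the change to your forked repository
git add -A && git commit -m "[ISSUE #123] describe the change"
git push origin my-change

# 6. On GitHub, open a Pull Request against the source repository's dev branch
```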
For a more detailed contribution process, see: [Contribution process](./docs/contributer_guide/贡献流程.md)
When creating a Pull Request:
1. Please follow the PR [template](./.github/PULL_REQUEST_TEMPLATE.md)
2. Please make sure the PR has a corresponding issue.
3. If your PR contains large changes, such as a component refactor or a new component, please write detailed documentation about its design and usage (in the corresponding issue).
4. Note that a single PR should not be too large. If extensive changes are required, it is better to split them into several separate PRs.
5. Before the PR is merged, keep the final commit message clear and concise, squashing multiple revision commits into one as much as possible.
6. After the PR is created, one or more reviewers will be assigned to it.
<font color=red><b>If your PR contains large changes, such as a component refactor or a new component, please write detailed documentation about its design and usage.</b></font>
# Code review guidelines
Committers take turns reviewing code, to ensure every change is reviewed by at least one Committer before merging.
Some principles:
- Readability: important code should be well documented. APIs should have Javadoc. Code style should be consistent with the existing style.
- Elegance: new functions, classes, or components should be well designed.
- Testability: unit test cases should cover 80% of new code.
- Maintainability: follow our coding conventions.
# Developers
## Becoming a Contributor
Anyone whose PR is successfully submitted and merged becomes a Contributor.
For the list of contributors, see: [Contributor list](./docs/contributer_guide/开发者名单.md)
## Working toward Committer
In general, contribute 8 significant patches and have at least three different people review them (you need the support of 3 Committers).
Then ask someone to nominate you. You need to demonstrate your:
1. at least 8 significant PRs and the related issues in the project
2. ability to collaborate with the team
3. understanding of the project's codebase and coding style
4. ability to write good code
A current Committer can nominate you via an issue in KnowStreaming with the `nomination` label, stating:
1. your first and last name
2. a link to your Git profile
3. an explanation of why you should become a Committer
4. details of 3 PRs, and the related issues, that the nominator worked on with you and that demonstrate your ability
Two other Committers need to support your **nomination**. If no one objects within 5 working days, you become a Committer; if anyone objects or wants more information, the Committers discuss and usually reach a consensus (within 5 working days).
# Open source reward program
We warmly welcome developers to contribute to the KnowStreaming open source project, and we reward contributors accordingly as a token of recognition and thanks.
## Ways to contribute
1. Actively participate in issue discussions, e.g. answering questions, offering ideas, or reporting unresolvable errors (Issue)
2. Write and improve the project documentation (Wiki)
3. Submit patches to improve the code (Coding)
## What you will get
1. A listing in the KnowStreaming open source project contributor list
2. A KnowStreaming open source contributor certificate (paper & electronic)
3. A KnowStreaming contributor gift package (KnowStreaming/DiDi merchandise)
## Rules
- Both Contributors and Committers receive the corresponding certificate and gift package
- Each quarter, the KnowStreaming project team selects outstanding contributors and issues the corresponding certificates.
- An annual selection is held at the end of the year
For the list of contributors, see: [Contributor list](./docs/contributer_guide/开发者名单.md)
* Device model and hardware information.
* API version.
* Logs.
* Screenshots.
* Steps to reproduce the issue.

View File

@@ -45,29 +45,22 @@
## Introduction to `Know Streaming`
`Know Streaming` is a cloud-native Kafka management platform, born out of years of in-house Kafka operation experience at many Internet companies. It focuses on core scenarios such as Kafka operations management, monitoring and alerting, resource governance, and multi-active disaster recovery. It delivers platform-based, visualized, and intelligent capabilities for user experience, monitoring, and operations, and provides a series of distinctive features that greatly ease daily use for users and operators, enabling ordinary operators to become Kafka experts.
We are now collecting information about Know Streaming users to help us improve Know Streaming further.
Please share your usage information on [issue#663](https://github.com/didi/KnowStreaming/issues/663) to support us: [Who is using Know Streaming](https://github.com/didi/KnowStreaming/issues/663)
Overall, it has the following characteristics:
`Know Streaming` is a cloud-native Kafka management platform, born out of years of in-house Kafka operation experience at many Internet companies. It focuses on core scenarios such as Kafka operations management, monitoring and alerting, resource governance, and multi-active disaster recovery. It delivers platform-based, visualized, and intelligent capabilities for user experience, monitoring, and operations, and provides a series of distinctive features that greatly ease daily use for users and operators, enabling ordinary operators to become Kafka experts. Overall, it has the following characteristics:
- 👀 &nbsp;**Zero intrusion, full coverage**
- No intrusive changes to `Apache Kafka` are required; with one click you can manage Kafka versions from `0.10.x` to `3.x.x`, including versions running in `ZK` and `Raft` modes, and the compatibility architecture is highly extensible, helping you raise your cluster management level;
- 🌪️ &nbsp;**Zero cost, GUI-based**
- Distills high-frequency CLI capabilities into sensible product paths and a clean, attractive GUI, supporting GUI management of Cluster, Broker, Zookeeper, Topic, ConsumerGroup, Message, ACL, Connect, and other components; ordinary users can get started in 5 minutes;
- Distills high-frequency CLI capabilities into sensible product paths and a clean, attractive GUI, supporting GUI management of Cluster, Broker, Topic, Group, Message, ACL, and other components; ordinary users can get started in 5 minutes;
- 👏 &nbsp;**Cloud-native, pluggable**
- Built cloud-natively with horizontal scalability: just add nodes to gain stronger collection and serving capacity. Provides many hot-pluggable enterprise features covering core scenarios such as observability ecosystem integration, resource governance, and multi-active disaster recovery;
- 🚀 &nbsp;**Professional capabilities**
- Cluster management: one-click onboarding, health analysis, core component observation, and more;
- Cluster management: one-click cluster onboarding, health analysis, core component observation, and more;
- Observability: multi-dimensional metric dashboards, metric best practices, and more;
- Health inspection: multi-dimensional cluster health inspection, multi-dimensional cluster health scoring, and more;
- Capability enhancements: cluster load balancing, Topic replica expansion/reduction, Topic replica migration, and more;
- Capability enhancements: Topic replica expansion/reduction, Topic replica migration, and more;
&nbsp;
@@ -106,13 +99,9 @@
## Becoming a community contributor
1. [Contribute source code](https://doc.knowstreaming.com/product/10-contribution) to learn how to become a Know Streaming contributor
2. [Detailed contribution process](https://doc.knowstreaming.com/product/10-contribution#102-贡献流程)
3. [Open source incentive program](https://doc.knowstreaming.com/product/10-contribution#105-开源激励计划)
4. [Contributor list](https://doc.knowstreaming.com/product/10-contribution#106-贡献者名单)
Click [here](CONTRIBUTING.md) to learn how to become a Know Streaming contributor
and obtain a KnowStreaming open source community certificate.
## Join the technical discussion group
@@ -144,13 +133,6 @@ PS: When asking a question, please describe the problem fully in one go and include your environment information
**`2. WeChat group`**
To join the WeChat group: add `mike_zhangliang` or `PenceXie` on WeChat with the note "KnowStreaming".
<br/>
Before joining the group, please take a second to star the repo; a small star is what motivates the KnowStreaming authors to keep building the community.
Thank you very much!!!
<img width="116" alt="wx" src="https://user-images.githubusercontent.com/71620349/192257217-c4ebc16c-3ad9-485d-a914-5911d3a4f46b.png">
## Star History

View File

@@ -1,132 +1,4 @@
## v3.1.0
**Bug fixes**
- Fixed the reset Group Offset prompt missing the note that groups in the Dead state can also be reset;
- Fixed the "Topic does not exist" error when viewing Topic Messages immediately after creating a Topic;
- Fixed preferred-replica election not being triggered properly during replica changes;
- Fixed packaging failing when the git directory does not exist;
- Fixed JMX PORT showing -1 for Kafka clusters running in KRaft mode;
**UX improvements**
- Changed the health score of Cluster, Broker, Topic, and Group to a health state;
- Removed the weight information from the health-inspection configuration;
- Improved the error page display;
- Frontend build dependencies now default to the taobao mirror;
- Redesigned and improved the navigation bar icons;
**New**
- Added product version information to the avatar dropdown;
- Added cluster health state distribution to the multi-cluster list page;
**Kafka ZK part (officially released in v3.1.0)**
- Added the ZK cluster metrics dashboard;
- Added the ZK cluster service state overview;
- Added the ZK cluster service node list;
- Added viewing of the Kafka data stored in ZK;
- Added ZK health inspection and health state calculation;
---
## v3.0.1
**Bug fixes**
- Fixed the reset Group Offset prompt missing the note that the Dead state can also be reset;
- Fixed login failing with a null pointer when a certain Ldap attribute does not exist;
- Fixed the wrong check time shown in the health score details on the cluster Topic list page;
- Fixed a deadlock when updating health check results;
- Fixed the Replica index template error;
- Fixed broken links in the FAQ document;
- Fixed the page showing no data when a Broker's TopN metrics do not exist;
- Fixed the chart time-range selection not taking effect on the Group detail page;
**UX improvements**
- The cluster Group list is now displayed by Group dimension;
- Avoided flooding the log with null pointers when a metric does not exist in ES;
- Improved the global Message & Notification display;
- Improved the name & description display for Topic partition expansion;
**New**
- Added JMX connection status information to the Broker list page;
**ZK part (not fully released)**
- Backend: added Kafka ZK metric collection and Kafka ZK information retrieval;
- Added a local cache to avoid collecting the same ZK metrics repeatedly within one collection cycle;
- Added a skip strategy for ZK nodes whose collection fails, to avoid endlessly retrying problematic nodes;
- Fixed an exception thrown when converting the zkAvgLatency metric to Long;
- Fixed the wrong type of the role field in the ks_km_zookeeper table;
---
## v3.0.0
**Bug fixes**
- Fixed Group metric duplicate-collection prevention not taking effect
- Fixed automatic creation of ES index templates failing
- Fixed deleted Topics appearing in the Group+Topic list
- Fixed task creation failing on MySQL 8 when the start_time value is NULL, due to a compatibility issue
- Fixed a deadlock when updating the Group information table
- Fixed the chart gap-filling logic not matching the chart time range
**UX improvements**
- Split the health inspection tasks by resource category
- Group detail page metrics are now fetched in real time
- Chart drag-and-drop ordering is now stored per user
- The ZK information in the multi-cluster list now handles clusters without ZK
- The message preview on the Topic detail page supports copying
- Large numbers are displayed with thousands separators in some places
**New**
- Added a Zookeeper client configuration field to the cluster information
- Added a Kafka cluster run-mode field to the cluster information
- Added a docker-compose deployment option
---
## v3.0.0-beta.3
**Docs**
- FAQ: added an explanation of the permission recognition failure issue
- Synced the docs to stay consistent with the official website
**Bug fixes**
- Filter out partitions without a Leader when fetching Offset information
- Upgraded oshi-core to 5.6.1 to fix system metric collection failing on Windows
- Fixed JMX connections not being re-established after being closed
- Fixed a null pointer when fetching the TotalLogSize metric because the Broker information is missing from the DB
- Fixed a wrong SQL comment in dml-logi.sql
- Fixed wrong operating system type detection in startup.sh
- Fixed configuration deletion failing on the configuration management page
- Fixed the file reference path of the system management application
- Fixed the Topic Messages detail prompt linking to a 404
- Fixed the current replica count not being displayed when expanding replicas
**UX improvements**
- Topic-Messages page: added sorting of the returned data and Earliest/Latest fetch modes
- Renamed GroupOffsetResetEnum to OffsetTypeEnum to make the class name more accurate
- Moved the KafkaZKDAO class and the Kafka Znode entity classes to make the Kafka Zookeeper DAO more cohesive and easier to find
- Backend: added metric sorting for the Overview page
- Optimized the frontend Webpack configuration
- Removed the zoom-in display for Cluster Overview charts
- Added a manual refresh function to list pages
- Cluster onboarding/editing: improved echo of the JMX-PORT and Version information, and improved the JMX info display
- Improved the clarity of images on the login page
- Miscellaneous style and copy improvements
---
## v3.0.0-beta.2

View File

@@ -439,7 +439,7 @@ curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: appl
curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: application/json' http://${esaddr}:${port}/_template/ks_kafka_replication_metric -d '{
"order" : 10,
"index_patterns" : [
"ks_kafka_replication_metric*"
"ks_kafka_partition_metric*"
],
"settings" : {
"index" : {
@@ -500,7 +500,30 @@ curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: appl
}
},
"aliases" : { }
}'
}[root@10-255-0-23 template]# cat ks_kafka_replication_metric
PUT _template/ks_kafka_replication_metric
{
"order" : 10,
"index_patterns" : [
"ks_kafka_replication_metric*"
],
"settings" : {
"index" : {
"number_of_shards" : "10"
}
},
"mappings" : {
"properties" : {
"timestamp" : {
"format" : "yyyy-MM-dd HH:mm:ss Z||yyyy-MM-dd HH:mm:ss||yyyy-MM-dd HH:mm:ss.SSS Z||yyyy-MM-dd HH:mm:ss.SSS||yyyy-MM-dd HH:mm:ss,SSS||yyyy/MM/dd HH:mm:ss||yyyy-MM-dd HH:mm:ss,SSS Z||yyyy/MM/dd HH:mm:ss,SSS Z||epoch_millis",
"index" : true,
"type" : "date",
"doc_values" : true
}
}
},
"aliases" : { }
}'
curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: application/json' http://${esaddr}:${port}/_template/ks_kafka_topic_metric -d '{
"order" : 10,
@@ -617,92 +640,7 @@ curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: appl
}
},
"aliases" : { }
}'
curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: application/json' http://${SERVER_ES_ADDRESS}/_template/ks_kafka_zookeeper_metric -d '{
"order" : 10,
"index_patterns" : [
"ks_kafka_zookeeper_metric*"
],
"settings" : {
"index" : {
"number_of_shards" : "10"
}
},
"mappings" : {
"properties" : {
"routingValue" : {
"type" : "text",
"fields" : {
"keyword" : {
"ignore_above" : 256,
"type" : "keyword"
}
}
},
"clusterPhyId" : {
"type" : "long"
},
"metrics" : {
"properties" : {
"AvgRequestLatency" : {
"type" : "double"
},
"MinRequestLatency" : {
"type" : "double"
},
"MaxRequestLatency" : {
"type" : "double"
},
"OutstandingRequests" : {
"type" : "double"
},
"NodeCount" : {
"type" : "double"
},
"WatchCount" : {
"type" : "double"
},
"NumAliveConnections" : {
"type" : "double"
},
"PacketsReceived" : {
"type" : "double"
},
"PacketsSent" : {
"type" : "double"
},
"EphemeralsCount" : {
"type" : "double"
},
"ApproximateDataSize" : {
"type" : "double"
},
"OpenFileDescriptorCount" : {
"type" : "double"
},
"MaxFileDescriptorCount" : {
"type" : "double"
}
}
},
"key" : {
"type" : "text",
"fields" : {
"keyword" : {
"ignore_above" : 256,
"type" : "keyword"
}
}
},
"timestamp" : {
"format" : "yyyy-MM-dd HH:mm:ss Z||yyyy-MM-dd HH:mm:ss||yyyy-MM-dd HH:mm:ss.SSS Z||yyyy-MM-dd HH:mm:ss.SSS||yyyy-MM-dd HH:mm:ss,SSS||yyyy/MM/dd HH:mm:ss||yyyy-MM-dd HH:mm:ss,SSS Z||yyyy/MM/dd HH:mm:ss,SSS Z||epoch_millis",
"type" : "date"
}
}
},
"aliases" : { }
}'
}'
for i in {0..6};
do
@@ -712,7 +650,6 @@ do
curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_group_metric${logdate} && \
curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_partition_metric${logdate} && \
curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_replication_metric${logdate} && \
curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_zookeeper_metric${logdate} && \
curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_topic_metric${logdate} || \
exit 2
done
done

View File

@@ -9,7 +9,7 @@ error_exit ()
[ ! -e "$JAVA_HOME/bin/java" ] && unset JAVA_HOME
if [ -z "$JAVA_HOME" ]; then
if [ "Darwin" = "$(uname -s)" ]; then
if $darwin; then
if [ -x '/usr/libexec/java_home' ] ; then
export JAVA_HOME=`/usr/libexec/java_home`
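For context, a self-contained sketch of this JAVA_HOME fallback, using the `uname` test shown in the hunk (the surrounding script, including any `$darwin` variable definition, is not shown here):
```bash
# Drop JAVA_HOME if it does not point at a real java binary
[ ! -e "$JAVA_HOME/bin/java" ] && unset JAVA_HOME
if [ -z "$JAVA_HOME" ]; then
    if [ "Darwin" = "$(uname -s)" ]; then
        # macOS: ask the system for the default JDK location
        if [ -x '/usr/libexec/java_home' ]; then
            export JAVA_HOME=$(/usr/libexec/java_home)
        fi
    fi
fi
```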

View File

@@ -1 +0,0 @@
TODO.

View File

@@ -1,6 +0,0 @@
Open source contributor certificate recipient list (updated periodically)
For the list of contributors, see: [Contributor list](https://doc.knowstreaming.com/product/10-contribution#106-贡献者名单)

View File

@@ -1,6 +0,0 @@
<br>
<br>
Please see: [Contribution process](https://doc.knowstreaming.com/product/10-contribution#102-贡献流程)

Binary files not shown (4 images removed; sizes: 63 KiB, 306 KiB, 306 KiB, 17 KiB).

View File

@@ -1,69 +0,0 @@
## Supporting ZK with Kerberos authentication
### 1. Modify the KnowStreaming code
Code location: `src/main/java/com/xiaojukeji/know/streaming/km/persistence/kafka/KafkaAdminZKClient.java`
In `createZKClient`, change the `false` on line 135 to `true`.
![need_modify_code.png](assets/support_kerberos_zk/need_modify_code.png)
After making the change, rebuild and repackage; for packaging instructions see: [Build and package guide](https://github.com/didi/KnowStreaming/blob/master/docs/install_guide/%E6%BA%90%E7%A0%81%E7%BC%96%E8%AF%91%E6%89%93%E5%8C%85%E6%89%8B%E5%86%8C.md)
### 2. Check the user's ACL in ZK
Assume the user we are using is the `kafka` user.
- 1. Check the zookeeper.connect address configured in server.properties;
- 2. Log in to the ZK shell with `zkCli.sh -server <zookeeper.connect address>`;
- 3. In the ZK shell, run `getAcl /kafka` to check the `kafka` user's permissions;
At this point, we can see the following information:
![watch_user_acl.png](assets/support_kerberos_zk/watch_user_acl.png)
The `kafka` user needs the `cdrwa` permissions. If the user does not have `cdrwa` permissions, create the user and grant them with the `setAcl` command, for example as sketched below.
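A hedged example session (the ZK address is a placeholder, and the exact ACL scheme/principal depends on your deployment):
```bash
# Connect to the ensemble from server.properties' zookeeper.connect
zkCli.sh -server zk1.example.com:2181

# In the ZK shell: inspect the ACL on the Kafka root znode
getAcl /kafka

# Grant full permissions (cdrwa) to the kafka principal if they are missing,
# e.g. for a SASL (Kerberos) authenticated user:
setAcl /kafka sasl:kafka:cdrwa
```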
### 3. Create the Kerberos keytab and configure the KnowStreaming host
- 1. In the Kerberos realm, create a `keytab` for `kafka/_HOST` and export it. For example: `kafka/dbs-kafka-test-8-53`;
- 2. After exporting the keytab, upload it to `/etc/keytab` on the machine where KS is installed;
- 3. On the KS machine, run `kinit -kt zookeeper.keytab kafka/dbs-kafka-test-8-53` to check that `Kerberos` login works;
- 4. Once login works, configure the `/opt/zookeeper.jaas` file, for example:
```
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=false
serviceName="zookeeper"
keyTab="/etc/keytab/zookeeper.keytab"
principal="kafka/dbs-kafka-test-8-53@XXX.XXX.XXX";
};
```
- 5. On the `KDC-Server`, open the firewall for the `KnowStreaming` machine; on the KS machine, add the `kdc-server` `hostname` to `/etc/hosts` and copy `krb5.conf` into `/etc` (a sketch follows);
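For illustration, the host-side preparation in step 5 might look like the following (the IP address, hostname, and user are placeholders):
```bash
# Make the KDC resolvable from the KnowStreaming machine
echo "10.0.0.10  kdc-server.example.com" >> /etc/hosts

# Copy the realm configuration onto the KS machine
scp user@kdc-server.example.com:/etc/krb5.conf /etc/krb5.conf
```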
### 4. Modify the KnowStreaming configuration
- 1. Append the following settings to JAVA_OPT at line 47 of `/usr/local/KnowStreaming/KnowStreaming/bin/startup.sh`:
```bash
-Dsun.security.krb5.debug=true -Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/zookeeper.jaas
```
- 2. After restarting the KS cluster, if you see the following in start.out, Kerberos is configured successfully:
![success_1.png](assets/support_kerberos_zk/success_1.png)
![success_2.png](assets/support_kerberos_zk/success_2.png)
### 5. Additional notes
- 1. If multiple Kafka clusters share the same Kerberos realm, you only need to grant the `kafka` user the `cdrwa` permissions in each `ZK`; the `zkclient` can then authenticate when each cluster is initialized;
- 2. Currently, enabling this requires modifying the code and rebuilding; supporting Kerberos-authenticated ZK onboarding through the UI is under consideration;
- 3. Multiple Kerberos realms are not yet supported.

View File

@@ -59,8 +59,6 @@ sh deploy_KnowStreaming-offline.sh
### 2.1.3. Container deployment
#### 2.1.3.1. Helm
**Prerequisites**
- Kubernetes >= 1.14, Helm >= 2.17.0
@@ -74,11 +72,11 @@ sh deploy_KnowStreaming-offline.sh
```bash
# All related images can be downloaded from Docker Hub
# Quick install (NAMESPACE must be changed to an existing one; installation takes a few minutes to initialize, please wait ~)
helm install -n [NAMESPACE] [NAME] http://download.knowstreaming.com/charts/knowstreaming-manager-0.1.5.tgz
helm install -n [NAMESPACE] [NAME] http://download.knowstreaming.com/charts/knowstreaming-manager-0.1.3.tgz
# Get the service of the KnowStreaming frontend ui. Defaults to nodeport.
# (http://nodeIP:nodeport, default username/password admin/admin2022_)
# Since `v3.0.0-beta.2` (helm chart version 0.1.4 and later), the default account/password is `admin` / `admin`
# Since `v3.0.0-beta.2`, the default account/password is `admin` / `admin`
# Add the repository
helm repo add knowstreaming http://download.knowstreaming.com/charts
@@ -89,156 +87,6 @@ helm pull knowstreaming/knowstreaming-manager
&nbsp;
#### 2.1.3.2. Docker Compose
**Prerequisites**
- [Docker](https://docs.docker.com/engine/install/)
- [Docker Compose](https://docs.docker.com/compose/install/)
**Installation commands**
```bash
# Since `v3.0.0-beta.2` (docker image version 0.2.0 and later), the default account/password is `admin` / `admin`
# Find the latest image versions at https://hub.docker.com/u/knowstreaming
# You can use your own mysql and es services; just adjust the corresponding configuration
# Copy docker-compose.yml to the desired location, then run the command below to start
docker-compose up -d
```
**Verify the installation**
```shell
docker-compose ps
# Verify startup - if the State is Up, it succeeded
Name Command State Ports
----------------------------------------------------------------------------------------------------
elasticsearch-single /usr/local/bin/docker-entr ... Up 9200/tcp, 9300/tcp
knowstreaming-init /bin/bash /es_template_cre ... Up
knowstreaming-manager /bin/sh /ks-start.sh Up 80/tcp
knowstreaming-mysql /entrypoint.sh mysqld Up (health: starting) 3306/tcp, 33060/tcp
knowstreaming-ui /docker-entrypoint.sh ngin ... Up 0.0.0.0:80->80/tcp
# Wait about a minute; once knowstreaming-init exits, es initialization is complete and the page can be accessed
Name Command State Ports
-------------------------------------------------------------------------------------------
knowstreaming-init /bin/bash /es_template_cre ... Exit 0
knowstreaming-mysql /entrypoint.sh mysqld Up (healthy) 3306/tcp, 33060/tcp
```
**Access**
```http request
http://127.0.0.1:80/
```
**docker-compose.yml**
```yml
version: "2"
services:
# *不要调整knowstreaming-manager服务名称ui中会用到
knowstreaming-manager:
image: knowstreaming/knowstreaming-manager:latest
container_name: knowstreaming-manager
privileged: true
restart: always
depends_on:
- elasticsearch-single
- knowstreaming-mysql
expose:
- 80
command:
- /bin/sh
- /ks-start.sh
environment:
TZ: Asia/Shanghai
# mysql服务地址
SERVER_MYSQL_ADDRESS: knowstreaming-mysql:3306
# mysql数据库名
SERVER_MYSQL_DB: know_streaming
# mysql用户名
SERVER_MYSQL_USER: root
# mysql用户密码
SERVER_MYSQL_PASSWORD: admin2022_
# es服务地址
SERVER_ES_ADDRESS: elasticsearch-single:9200
# 服务JVM参数
JAVA_OPTS: -Xmx1g -Xms1g
# 对于kafka中ADVERTISED_LISTENERS填写的hostname可以通过该方式完成
# extra_hosts:
# - "hostname:x.x.x.x"
# 服务日志路径
# volumes:
# - /ks/manage/log:/logs
knowstreaming-ui:
image: knowstreaming/knowstreaming-ui:latest
container_name: knowstreaming-ui
restart: always
ports:
- '80:80'
environment:
TZ: Asia/Shanghai
depends_on:
- knowstreaming-manager
# extra_hosts:
# - "hostname:x.x.x.x"
elasticsearch-single:
image: docker.io/library/elasticsearch:7.6.2
container_name: elasticsearch-single
restart: always
expose:
- 9200
- 9300
# ports:
# - '9200:9200'
# - '9300:9300'
environment:
TZ: Asia/Shanghai
# es的JVM参数
ES_JAVA_OPTS: -Xms512m -Xmx512m
# 单节点配置,多节点集群参考 https://www.elastic.co/guide/en/elasticsearch/reference/7.6/docker.html#docker-compose-file
discovery.type: single-node
# 数据持久化路径
# volumes:
# - /ks/es/data:/usr/share/elasticsearch/data
# es初始化服务与manager使用同一镜像
# 首次启动es需初始化模版和索引,后续会自动创建
knowstreaming-init:
image: knowstreaming/knowstreaming-manager:latest
container_name: knowstreaming-init
depends_on:
- elasticsearch-single
command:
- /bin/bash
- /es_template_create.sh
environment:
TZ: Asia/Shanghai
# es服务地址
SERVER_ES_ADDRESS: elasticsearch-single:9200
knowstreaming-mysql:
image: knowstreaming/knowstreaming-mysql:latest
container_name: knowstreaming-mysql
restart: always
environment:
TZ: Asia/Shanghai
# root 用户密码
MYSQL_ROOT_PASSWORD: admin2022_
# 初始化时创建的数据库名称
MYSQL_DATABASE: know_streaming
# 通配所有host,可以访问远程
MYSQL_ROOT_HOST: '%'
expose:
- 3306
# ports:
# - '3306:3306'
# 数据持久化路径
# volumes:
# - /ks/mysql/data:/data/mysql
```
&nbsp;
### 2.1.4. Manual deployment
**Deployment process**

View File

@@ -1,173 +1,12 @@
## 6.2. Version upgrade manual
Note:
- To upgrade to a specific version, you must apply every change between your current version and the target version, in order, before the system will work properly.
- If a version has no upgrade notes, that version can be reached from the previous one simply by replacing the installation package.
Note: to upgrade to a specific version, you must apply every change between your current version and the target version, in order, before the system will work properly.
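In other words, upgrades are cumulative. A minimal sketch, assuming each version's SQL changes below have been saved to hypothetical files:
```bash
# Example: upgrading from v3.0.0 to v3.1.0 means applying the v3.0.1 and v3.1.0 changes in order
mysql -uroot -p know_streaming < upgrade-to-v3.0.1.sql
mysql -uroot -p know_streaming < upgrade-to-v3.1.0.sql
# Plus any ES template changes, e.g. re-running bin/init_es_template.sh for v3.0.1
```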
### 6.2.0. Upgrading to the `master` version
None yet
### 6.2.1. Upgrading to `v3.1.0`
```sql
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_ZK_BRAIN_SPLIT', '{ \"value\": 1} ', 'ZK 脑裂', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_ZK_OUTSTANDING_REQUESTS', '{ \"amount\": 100, \"ratio\":0.8} ', 'ZK Outstanding 请求堆积数', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_ZK_WATCH_COUNT', '{ \"amount\": 100000, \"ratio\": 0.8 } ', 'ZK WatchCount 数', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_ZK_ALIVE_CONNECTIONS', '{ \"amount\": 10000, \"ratio\": 0.8 } ', 'ZK 连接数', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_ZK_APPROXIMATE_DATA_SIZE', '{ \"amount\": 524288000, \"ratio\": 0.8 } ', 'ZK 数据大小(Byte)', 'admin');
INSERT INTO `ks_km_platform_cluster_config` (`cluster_id`, `value_group`, `value_name`, `value`, `description`, `operator`) VALUES ('-1', 'HEALTH', 'HC_ZK_SENT_RATE', '{ \"amount\": 500000, \"ratio\": 0.8 } ', 'ZK 发包数', 'admin');
```
### 6.2.2. Upgrading to `v3.0.1`
**ES index template**
```bash
# Add the ks_kafka_zookeeper_metric index template.
# You can create this index template by re-running the bin/init_es_template.sh script.
# Index template content
PUT _template/ks_kafka_zookeeper_metric
{
"order" : 10,
"index_patterns" : [
"ks_kafka_zookeeper_metric*"
],
"settings" : {
"index" : {
"number_of_shards" : "10"
}
},
"mappings" : {
"properties" : {
"routingValue" : {
"type" : "text",
"fields" : {
"keyword" : {
"ignore_above" : 256,
"type" : "keyword"
}
}
},
"clusterPhyId" : {
"type" : "long"
},
"metrics" : {
"properties" : {
"AvgRequestLatency" : {
"type" : "double"
},
"MinRequestLatency" : {
"type" : "double"
},
"MaxRequestLatency" : {
"type" : "double"
},
"OutstandingRequests" : {
"type" : "double"
},
"NodeCount" : {
"type" : "double"
},
"WatchCount" : {
"type" : "double"
},
"NumAliveConnections" : {
"type" : "double"
},
"PacketsReceived" : {
"type" : "double"
},
"PacketsSent" : {
"type" : "double"
},
"EphemeralsCount" : {
"type" : "double"
},
"ApproximateDataSize" : {
"type" : "double"
},
"OpenFileDescriptorCount" : {
"type" : "double"
},
"MaxFileDescriptorCount" : {
"type" : "double"
}
}
},
"key" : {
"type" : "text",
"fields" : {
"keyword" : {
"ignore_above" : 256,
"type" : "keyword"
}
}
},
"timestamp" : {
"format" : "yyyy-MM-dd HH:mm:ss Z||yyyy-MM-dd HH:mm:ss||yyyy-MM-dd HH:mm:ss.SSS Z||yyyy-MM-dd HH:mm:ss.SSS||yyyy-MM-dd HH:mm:ss,SSS||yyyy/MM/dd HH:mm:ss||yyyy-MM-dd HH:mm:ss,SSS Z||yyyy/MM/dd HH:mm:ss,SSS Z||epoch_millis",
"type" : "date"
}
}
},
"aliases" : { }
}
```
**SQL changes**
```sql
DROP TABLE IF EXISTS `ks_km_zookeeper`;
CREATE TABLE `ks_km_zookeeper` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
`cluster_phy_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT '物理集群ID',
`host` varchar(128) NOT NULL DEFAULT '' COMMENT 'zookeeper主机名',
`port` int(16) NOT NULL DEFAULT '-1' COMMENT 'zookeeper端口',
`role` varchar(16) NOT NULL DEFAULT '' COMMENT '角色, leader follower observer',
`version` varchar(128) NOT NULL DEFAULT '' COMMENT 'zookeeper版本',
`status` int(16) NOT NULL DEFAULT '0' COMMENT '状态: 1存活0未存活11存活但是4字命令使用不了',
`create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
`update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
PRIMARY KEY (`id`),
UNIQUE KEY `uniq_cluster_phy_id_host_port` (`cluster_phy_id`,`host`, `port`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='Zookeeper信息表';
DROP TABLE IF EXISTS `ks_km_group`;
CREATE TABLE `ks_km_group` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
`cluster_phy_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT '集群id',
`name` varchar(192) COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT 'Group名称',
`member_count` int(11) unsigned NOT NULL DEFAULT '0' COMMENT '成员数',
`topic_members` text CHARACTER SET utf8 COMMENT 'group消费的topic列表',
`partition_assignor` varchar(255) CHARACTER SET utf8 NOT NULL COMMENT '分配策略',
`coordinator_id` int(11) NOT NULL COMMENT 'group协调器brokerId',
`type` int(11) NOT NULL COMMENT 'group类型 0consumer 1connector',
`state` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '' COMMENT '状态',
`create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
`update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
PRIMARY KEY (`id`),
UNIQUE KEY `uniq_cluster_phy_id_name` (`cluster_phy_id`,`name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='Group信息表';
```
### 6.2.3. Upgrading to `v3.0.0`
**SQL changes**
```sql
ALTER TABLE `ks_km_physical_cluster`
ADD COLUMN `zk_properties` TEXT NULL COMMENT 'ZK配置' AFTER `jmx_properties`;
```
---
### 6.2.4. Upgrading to `v3.0.0-beta.2`
### 6.2.1. Upgrading to `v3.0.0-beta.2`
**Configuration changes**
@@ -201,7 +40,8 @@ thread-pool:
```
**SQL changes**
**SQL changes**
```sql
-- Multi-cluster management permissions (added 2022-09-06)
@@ -238,13 +78,14 @@ ALTER TABLE `logi_security_oplog`
---
### 6.2.5. Upgrading to `v3.0.0-beta.1`
### 6.2.2. Upgrading to `v3.0.0-beta.1`
**SQL changes**
**SQL changes**
1. A listener information field was added to the `ks_km_broker` table.
2. A default value of '' was set for the `logi_security_oplog` operation_methods field.
Therefore, run the SQL below to update the database tables.
2. A default value of '' was set for the operation_methods field of the `logi_security_oplog` table.
Therefore, run the SQL below to update the database tables.
```sql
ALTER TABLE `ks_km_broker`
@@ -257,7 +98,8 @@ ALTER COLUMN `operation_methods` set default '';
---
### 6.2.6. Upgrading from `2.x` to `v3.0.0-beta.0`
### 6.2.3. Upgrading from `2.x` to `v3.0.0-beta.0`
**Upgrade steps:**
@@ -281,14 +123,14 @@ ALTER COLUMN `operation_methods` set default '';
UPDATE ks_km_topic
INNER JOIN
(SELECT
topic.cluster_id AS cluster_id,
topic.topic_name AS topic_name,
topic.description AS description
topic.cluster_id AS cluster_id,
topic.topic_name AS topic_name,
topic.description AS description
FROM topic WHERE description != ''
) AS t
ON ks_km_topic.cluster_phy_id = t.cluster_id
AND ks_km_topic.topic_name = t.topic_name
AND ks_km_topic.id > 0
SET ks_km_topic.description = t.description;
ON ks_km_topic.cluster_phy_id = t.cluster_id
AND ks_km_topic.topic_name = t.topic_name
AND ks_km_topic.id > 0
SET ks_km_topic.description = t.description;
```

View File

@@ -37,7 +37,7 @@
## 8.4. How do I resolve `Jmx` connection failures?
- See the [Jmx connection configuration & troubleshooting](https://doc.knowstreaming.com/product/9-attachment#91jmx-%E8%BF%9E%E6%8E%A5%E5%A4%B1%E8%B4%A5%E9%97%AE%E9%A2%98%E8%A7%A3%E5%86%B3) notes.
- See the [Jmx connection configuration & troubleshooting](./9-attachment#jmx-连接失败问题解决) notes.
&nbsp;
@@ -166,19 +166,3 @@ Node version: v12.22.12
You need to run `npm run start` inside the specific app; for example, after `cd packages/layout-clusters-fe`, run `npm run start`
After an app starts, view it through the base app (the base app, i.e. layout-clusters-fe, must be running); a sketch follows
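A minimal sketch of that flow (the second package name is a placeholder for whichever sub-app you are working on):
```bash
# Start the base app (layout-clusters-fe) first, then the sub-app
(cd packages/layout-clusters-fe && npm run start) &
(cd packages/some-sub-app-fe && npm run start)
```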
## 8.12. Permission recognition failure
1. Log in to KnowStreaming with the admin account, click System Management - User Management - Role Management - Add Role, and check whether the page works properly.
<img src="http://img-ys011.didistatic.com/static/dc2img/do1_gwGfjN9N92UxzHU8dfzr" width = "400" >
2. Check the response of the '/logi-security/api/v1/permission/tree' API; garbled text appears as shown below.
![API response](http://img-ys011.didistatic.com/static/dc2img/do1_jTxBkwNGU9vZuYQQbdNw)
3. Check the logi_security_permission table to see whether the Chinese text is garbled there too.
Based on the points above, we can determine that the permission recognition failure is caused by garbled data in the database.
+ Cause: the database encoding does not match the script we provide, so the data in the database became garbled, which causes permission recognition to fail.
+ Solution: clear the database data, change the database character set to utf8, and then re-run the [dml-logi.sql](https://github.com/didi/KnowStreaming/blob/master/km-dist/init/sql/dml-logi.sql) script to import the data.
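A minimal sketch of that fix (the database name comes from the deployment docs; this drops data, so back up anything you need first):
```bash
# Recreate the database with a utf8 character set, then re-import the seed data
mysql -uroot -p -e "DROP DATABASE know_streaming; CREATE DATABASE know_streaming DEFAULT CHARACTER SET utf8;"
mysql -uroot -p know_streaming < dml-logi.sql
```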

View File

@@ -11,7 +11,7 @@
Below is the typical journey of a user's first experience with our product:
![text](http://img-ys011.didistatic.com/static/dc2img/do1_qgqPsAY46sZeBaPUCwXY)
![text](http://img-ys011.didistatic.com/static/dc2img/do1_YehqxqmsVaqU5gf3XphI)
## 5.3. Common features

View File

@@ -1,19 +0,0 @@
package com.xiaojukeji.know.streaming.km.biz.cluster;
import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterZookeepersOverviewDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.vo.zookeeper.ClusterZookeepersOverviewVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.zookeeper.ClusterZookeepersStateVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.zookeeper.ZnodeVO;
/**
* Overall state of multiple clusters
*/
public interface ClusterZookeepersManager {
Result<ClusterZookeepersStateVO> getClusterPhyZookeepersState(Long clusterPhyId);
PaginationResult<ClusterZookeepersOverviewVO> getClusterPhyZookeepersOverview(Long clusterPhyId, ClusterZookeepersOverviewDTO dto);
Result<ZnodeVO> getZnodeVO(Long clusterPhyId, String path);
}

View File

@@ -1,6 +1,5 @@
package com.xiaojukeji.know.streaming.km.biz.cluster;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhysHealthState;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhysState;
import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.MultiClusterDashboardDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
@@ -16,8 +15,6 @@ public interface MultiClusterPhyManager {
*/
ClusterPhysState getClusterPhysState();
ClusterPhysHealthState getClusterPhysHealthState();
/**
* Query the multi-cluster dashboard
* @param dto pagination info

View File

@@ -24,7 +24,6 @@ import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerMetricService;
import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerService;
import com.xiaojukeji.know.streaming.km.core.service.kafkacontroller.KafkaControllerService;
import com.xiaojukeji.know.streaming.km.core.service.topic.TopicService;
import com.xiaojukeji.know.streaming.km.persistence.kafka.KafkaJMXClient;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
@@ -52,9 +51,6 @@ public class ClusterBrokersManagerImpl implements ClusterBrokersManager {
@Autowired
private KafkaControllerService kafkaControllerService;
@Autowired
private KafkaJMXClient kafkaJMXClient;
@Override
public PaginationResult<ClusterBrokersOverviewVO> getClusterPhyBrokersOverview(Long clusterPhyId, ClusterBrokersOverviewDTO dto) {
// Get the cluster's Broker list
@@ -79,10 +75,6 @@ public class ClusterBrokersManagerImpl implements ClusterBrokersManager {
// Get the controller information
KafkaController kafkaController = kafkaControllerService.getKafkaControllerFromDB(clusterPhyId);
// Get the JMX connection status
Map<Integer, Boolean> jmxConnectedMap = new HashMap<>();
brokerList.forEach(elem -> jmxConnectedMap.put(elem.getBrokerId(), kafkaJMXClient.getClientWithCheck(clusterPhyId, elem.getBrokerId()) != null));
// Convert the format
return PaginationResult.buildSuc(
this.convert2ClusterBrokersOverviewVOList(
@@ -91,8 +83,7 @@ public class ClusterBrokersManagerImpl implements ClusterBrokersManager {
metricsResult.getData(),
groupTopic,
transactionTopic,
kafkaController,
jmxConnectedMap
kafkaController
),
paginationResult
);
@@ -174,24 +165,22 @@ public class ClusterBrokersManagerImpl implements ClusterBrokersManager {
List<BrokerMetrics> metricsList,
Topic groupTopic,
Topic transactionTopic,
KafkaController kafkaController,
Map<Integer, Boolean> jmxConnectedMap) {
Map<Integer, BrokerMetrics> metricsMap = metricsList == null ? new HashMap<>() : metricsList.stream().collect(Collectors.toMap(BrokerMetrics::getBrokerId, Function.identity()));
KafkaController kafkaController) {
Map<Integer, BrokerMetrics> metricsMap = metricsList == null? new HashMap<>(): metricsList.stream().collect(Collectors.toMap(BrokerMetrics::getBrokerId, Function.identity()));
Map<Integer, Broker> brokerMap = brokerList == null ? new HashMap<>() : brokerList.stream().collect(Collectors.toMap(Broker::getBrokerId, Function.identity()));
Map<Integer, Broker> brokerMap = brokerList == null? new HashMap<>(): brokerList.stream().collect(Collectors.toMap(Broker::getBrokerId, Function.identity()));
List<ClusterBrokersOverviewVO> voList = new ArrayList<>(pagedBrokerIdList.size());
for (Integer brokerId : pagedBrokerIdList) {
Broker broker = brokerMap.get(brokerId);
BrokerMetrics brokerMetrics = metricsMap.get(brokerId);
Boolean jmxConnected = jmxConnectedMap.get(brokerId);
voList.add(this.convert2ClusterBrokersOverviewVO(brokerId, broker, brokerMetrics, groupTopic, transactionTopic, kafkaController, jmxConnected));
voList.add(this.convert2ClusterBrokersOverviewVO(brokerId, broker, brokerMetrics, groupTopic, transactionTopic, kafkaController));
}
return voList;
}
private ClusterBrokersOverviewVO convert2ClusterBrokersOverviewVO(Integer brokerId, Broker broker, BrokerMetrics brokerMetrics, Topic groupTopic, Topic transactionTopic, KafkaController kafkaController, Boolean jmxConnected) {
private ClusterBrokersOverviewVO convert2ClusterBrokersOverviewVO(Integer brokerId, Broker broker, BrokerMetrics brokerMetrics, Topic groupTopic, Topic transactionTopic, KafkaController kafkaController) {
ClusterBrokersOverviewVO clusterBrokersOverviewVO = new ClusterBrokersOverviewVO();
clusterBrokersOverviewVO.setBrokerId(brokerId);
if (broker != null) {
@@ -214,7 +203,6 @@ public class ClusterBrokersManagerImpl implements ClusterBrokersManager {
}
clusterBrokersOverviewVO.setLatestMetrics(brokerMetrics);
clusterBrokersOverviewVO.setJmxConnected(jmxConnected);
return clusterBrokersOverviewVO;
}

View File

@@ -1,137 +0,0 @@
package com.xiaojukeji.know.streaming.km.biz.cluster.impl;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.biz.cluster.ClusterZookeepersManager;
import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterZookeepersOverviewDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.ZookeeperMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.ResultStatus;
import com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.Znode;
import com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.ZookeeperInfo;
import com.xiaojukeji.know.streaming.km.common.bean.vo.zookeeper.ClusterZookeepersOverviewVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.zookeeper.ClusterZookeepersStateVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.zookeeper.ZnodeVO;
import com.xiaojukeji.know.streaming.km.common.constant.MsgConstant;
import com.xiaojukeji.know.streaming.km.common.enums.zookeeper.ZKRoleEnum;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.PaginationUtil;
import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
import com.xiaojukeji.know.streaming.km.core.service.version.metrics.ZookeeperMetricVersionItems;
import com.xiaojukeji.know.streaming.km.core.service.zookeeper.ZnodeService;
import com.xiaojukeji.know.streaming.km.core.service.zookeeper.ZookeeperMetricService;
import com.xiaojukeji.know.streaming.km.core.service.zookeeper.ZookeeperService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import java.util.Arrays;
import java.util.List;
@Service
public class ClusterZookeepersManagerImpl implements ClusterZookeepersManager {
private static final ILog LOGGER = LogFactory.getLog(ClusterZookeepersManagerImpl.class);
@Autowired
private ClusterPhyService clusterPhyService;
@Autowired
private ZookeeperService zookeeperService;
@Autowired
private ZookeeperMetricService zookeeperMetricService;
@Autowired
private ZnodeService znodeService;
@Override
public Result<ClusterZookeepersStateVO> getClusterPhyZookeepersState(Long clusterPhyId) {
ClusterPhy clusterPhy = clusterPhyService.getClusterByCluster(clusterPhyId);
if (clusterPhy == null) {
return Result.buildFromRSAndMsg(ResultStatus.CLUSTER_NOT_EXIST, MsgConstant.getClusterPhyNotExist(clusterPhyId));
}
List<ZookeeperInfo> infoList = zookeeperService.listFromDBByCluster(clusterPhyId);
ClusterZookeepersStateVO vo = new ClusterZookeepersStateVO();
vo.setTotalServerCount(infoList.size());
vo.setAliveFollowerCount(0);
vo.setTotalFollowerCount(0);
vo.setAliveObserverCount(0);
vo.setTotalObserverCount(0);
vo.setAliveServerCount(0);
for (ZookeeperInfo info: infoList) {
if (info.getRole().equals(ZKRoleEnum.LEADER.getRole())) {
vo.setLeaderNode(info.getHost());
}
if (info.getRole().equals(ZKRoleEnum.FOLLOWER.getRole())) {
vo.setTotalFollowerCount(vo.getTotalFollowerCount() + 1);
vo.setAliveFollowerCount(info.alive()? vo.getAliveFollowerCount() + 1: vo.getAliveFollowerCount());
}
if (info.getRole().equals(ZKRoleEnum.OBSERVER.getRole())) {
vo.setTotalObserverCount(vo.getTotalObserverCount() + 1);
vo.setAliveObserverCount(info.alive()? vo.getAliveObserverCount() + 1: vo.getAliveObserverCount());
}
if (info.alive()) {
vo.setAliveServerCount(vo.getAliveServerCount() + 1);
}
}
// Fetch metrics
Result<ZookeeperMetrics> metricsResult = zookeeperMetricService.batchCollectMetricsFromZookeeper(
clusterPhyId,
Arrays.asList(
ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_WATCH_COUNT,
ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_HEALTH_STATE,
ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_HEALTH_CHECK_PASSED,
ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_HEALTH_CHECK_TOTAL
)
);
if (metricsResult.failed()) {
LOGGER.error(
"class=ClusterZookeepersManagerImpl||method=getClusterPhyZookeepersState||clusterPhyId={}||errMsg={}",
clusterPhyId, metricsResult.getMessage()
);
return Result.buildSuc(vo);
}
ZookeeperMetrics metrics = metricsResult.getData();
vo.setWatchCount(ConvertUtil.float2Integer(metrics.getMetrics().get(ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_WATCH_COUNT)));
vo.setHealthState(ConvertUtil.float2Integer(metrics.getMetrics().get(ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_HEALTH_STATE)));
vo.setHealthCheckPassed(ConvertUtil.float2Integer(metrics.getMetrics().get(ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_HEALTH_CHECK_PASSED)));
vo.setHealthCheckTotal(ConvertUtil.float2Integer(metrics.getMetrics().get(ZookeeperMetricVersionItems.ZOOKEEPER_METRIC_HEALTH_CHECK_TOTAL)));
return Result.buildSuc(vo);
}
@Override
public PaginationResult<ClusterZookeepersOverviewVO> getClusterPhyZookeepersOverview(Long clusterPhyId, ClusterZookeepersOverviewDTO dto) {
// Get the cluster's zookeeper list
List<ClusterZookeepersOverviewVO> clusterZookeepersOverviewVOList = ConvertUtil.list2List(zookeeperService.listFromDBByCluster(clusterPhyId), ClusterZookeepersOverviewVO.class);
// Search
clusterZookeepersOverviewVOList = PaginationUtil.pageByFuzzyFilter(clusterZookeepersOverviewVOList, dto.getSearchKeywords(), Arrays.asList("host"));
// Pagination
PaginationResult<ClusterZookeepersOverviewVO> paginationResult = PaginationUtil.pageBySubData(clusterZookeepersOverviewVOList, dto);
return paginationResult;
}
@Override
public Result<ZnodeVO> getZnodeVO(Long clusterPhyId, String path) {
Result<Znode> result = znodeService.getZnode(clusterPhyId, path);
if (result.failed()) {
return Result.buildFromIgnoreData(result);
}
return Result.buildSuc(ConvertUtil.obj2ObjByJSON(result.getData(), ZnodeVO.class));
}
/**************************************************** private method ****************************************************/
}

View File

@@ -5,7 +5,6 @@ import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.biz.cluster.MultiClusterPhyManager;
import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.MetricDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.MetricsClusterPhyDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhysHealthState;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhysState;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.MultiClusterDashboardDTO;
@@ -17,7 +16,6 @@ import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.ClusterPhyDashboa
import com.xiaojukeji.know.streaming.km.common.bean.vo.metrics.line.MetricMultiLinesVO;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.converter.ClusterVOConverter;
import com.xiaojukeji.know.streaming.km.common.enums.health.HealthStateEnum;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.PaginationUtil;
import com.xiaojukeji.know.streaming.km.common.utils.PaginationMetricsUtil;
@@ -77,32 +75,6 @@ public class MultiClusterPhyManagerImpl implements MultiClusterPhyManager {
return physState;
}
@Override
public ClusterPhysHealthState getClusterPhysHealthState() {
List<ClusterPhy> clusterPhyList = clusterPhyService.listAllClusters();
ClusterPhysHealthState physState = new ClusterPhysHealthState(clusterPhyList.size());
for (ClusterPhy clusterPhy: clusterPhyList) {
ClusterMetrics metrics = clusterMetricService.getLatestMetricsFromCache(clusterPhy.getId());
Float state = metrics.getMetric(ClusterMetricVersionItems.CLUSTER_METRIC_HEALTH_STATE);
if (state == null) {
physState.setUnknownCount(physState.getUnknownCount() + 1);
} else if (state.intValue() == HealthStateEnum.GOOD.getDimension()) {
physState.setGoodCount(physState.getGoodCount() + 1);
} else if (state.intValue() == HealthStateEnum.MEDIUM.getDimension()) {
physState.setMediumCount(physState.getMediumCount() + 1);
} else if (state.intValue() == HealthStateEnum.POOR.getDimension()) {
physState.setPoorCount(physState.getPoorCount() + 1);
} else if (state.intValue() == HealthStateEnum.DEAD.getDimension()) {
physState.setDeadCount(physState.getDeadCount() + 1);
} else {
physState.setUnknownCount(physState.getUnknownCount() + 1);
}
}
return physState;
}
@Override
public PaginationResult<ClusterPhyDashboardVO> getClusterPhysDashboard(MultiClusterDashboardDTO dto) {
// Get the clusters
@@ -176,7 +148,16 @@ public class MultiClusterPhyManagerImpl implements MultiClusterPhyManager {
// Get all the metrics
List<ClusterMetrics> metricsList = new ArrayList<>();
for (ClusterPhyDashboardVO vo: voList) {
metricsList.add(clusterMetricService.getLatestMetricsFromCache(vo.getId()));
ClusterMetrics clusterMetrics = clusterMetricService.getLatestMetricsFromCache(vo.getId());
if (!clusterMetrics.getMetrics().containsKey(ClusterMetricVersionItems.CLUSTER_METRIC_HEALTH_SCORE)) {
Float alive = clusterMetrics.getMetrics().get(ClusterMetricVersionItems.CLUSTER_METRIC_ALIVE);
// If the cluster has no health score, set a default health score value
clusterMetrics.putMetric(ClusterMetricVersionItems.CLUSTER_METRIC_HEALTH_SCORE,
(alive != null && alive <= 0)? 0.0f: Constant.DEFAULT_CLUSTER_HEALTH_SCORE.floatValue()
);
}
metricsList.add(clusterMetrics);
}
// Range search

View File

@@ -1,14 +1,11 @@
package com.xiaojukeji.know.streaming.km.biz.group;
import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterGroupSummaryDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.group.GroupOffsetResetDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationBaseDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationSortDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.topic.TopicPartitionKS;
import com.xiaojukeji.know.streaming.km.common.bean.po.group.GroupMemberPO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupOverviewVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupTopicConsumedDetailVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupTopicOverviewVO;
import com.xiaojukeji.know.streaming.km.common.exception.AdminOperateException;
@@ -25,10 +22,6 @@ public interface GroupManager {
String searchGroupKeyword,
PaginationBaseDTO dto);
PaginationResult<GroupTopicOverviewVO> pagingGroupTopicMembers(Long clusterPhyId, String groupName, PaginationBaseDTO dto);
PaginationResult<GroupOverviewVO> pagingClusterGroupsOverview(Long clusterPhyId, ClusterGroupSummaryDTO dto);
PaginationResult<GroupTopicConsumedDetailVO> pagingGroupTopicConsumedMetrics(Long clusterPhyId,
String topicName,
String groupName,
@@ -38,6 +31,4 @@ public interface GroupManager {
Result<Set<TopicPartitionKS>> listClusterPhyGroupPartitions(Long clusterPhyId, String groupName, Long startTime, Long endTime);
Result<Void> resetGroupOffsets(GroupOffsetResetDTO dto, String operator) throws Exception;
List<GroupTopicOverviewVO> getGroupTopicOverviewVOList (Long clusterPhyId, List<GroupMemberPO> groupMemberPOList);
}

View File

@@ -3,14 +3,11 @@ package com.xiaojukeji.know.streaming.km.biz.group.impl;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.biz.group.GroupManager;
import com.xiaojukeji.know.streaming.km.common.bean.dto.cluster.ClusterGroupSummaryDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.group.GroupOffsetResetDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationBaseDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationSortDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.partition.PartitionOffsetDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.group.Group;
import com.xiaojukeji.know.streaming.km.common.bean.entity.group.GroupTopic;
import com.xiaojukeji.know.streaming.km.common.bean.entity.group.GroupTopicMember;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.GroupMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
@@ -18,15 +15,11 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.topic.Topic;
import com.xiaojukeji.know.streaming.km.common.bean.entity.topic.TopicPartitionKS;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.ResultStatus;
import com.xiaojukeji.know.streaming.km.common.bean.po.group.GroupMemberPO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupOverviewVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupTopicConsumedDetailVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupTopicOverviewVO;
import com.xiaojukeji.know.streaming.km.common.constant.MsgConstant;
import com.xiaojukeji.know.streaming.km.common.constant.PaginationConstant;
import com.xiaojukeji.know.streaming.km.common.converter.GroupConverter;
import com.xiaojukeji.know.streaming.km.common.enums.AggTypeEnum;
import com.xiaojukeji.know.streaming.km.common.enums.OffsetTypeEnum;
import com.xiaojukeji.know.streaming.km.common.enums.SortTypeEnum;
import com.xiaojukeji.know.streaming.km.common.enums.GroupOffsetResetEnum;
import com.xiaojukeji.know.streaming.km.common.enums.group.GroupStateEnum;
import com.xiaojukeji.know.streaming.km.common.exception.AdminOperateException;
import com.xiaojukeji.know.streaming.km.common.exception.NotExistException;
@@ -78,60 +71,30 @@ public class GroupManagerImpl implements GroupManager {
String searchGroupKeyword,
PaginationBaseDTO dto) {
PaginationResult<GroupMemberPO> paginationResult = groupService.pagingGroupMembers(clusterPhyId, topicName, groupName, searchTopicKeyword, searchGroupKeyword, dto);
if (paginationResult.failed()) {
return PaginationResult.buildFailure(paginationResult, dto);
}
if (!paginationResult.hasData()) {
return PaginationResult.buildSuc(new ArrayList<>(), paginationResult);
}
List<GroupTopicOverviewVO> groupTopicVOList = this.getGroupTopicOverviewVOList(clusterPhyId, paginationResult.getData().getBizData());
return PaginationResult.buildSuc(groupTopicVOList, paginationResult);
}
@Override
public PaginationResult<GroupTopicOverviewVO> pagingGroupTopicMembers(Long clusterPhyId, String groupName, PaginationBaseDTO dto) {
Group group = groupService.getGroupFromDB(clusterPhyId, groupName);
// Return directly if there are no topicMembers
if (group == null || ValidateUtils.isEmptyList(group.getTopicMembers())) {
return PaginationResult.buildSuc(dto);
// Fetch metrics
Result<List<GroupMetrics>> metricsListResult = groupMetricService.listLatestMetricsAggByGroupTopicFromES(
clusterPhyId,
paginationResult.getData().getBizData().stream().map(elem -> new GroupTopic(elem.getGroupName(), elem.getTopicName())).collect(Collectors.toList()),
Arrays.asList(GroupMetricVersionItems.GROUP_METRIC_LAG),
AggTypeEnum.MAX
);
if (metricsListResult.failed()) {
// If the query fails, log the error but still return the data we already have
log.error("method=pagingGroupMembers||clusterPhyId={}||topicName={}||groupName={}||result={}||errMsg=search es failed", clusterPhyId, topicName, groupName, metricsListResult);
}
// Sort
List<GroupTopicMember> groupTopicMembers = PaginationUtil.pageBySort(group.getTopicMembers(), PaginationConstant.DEFAULT_GROUP_TOPIC_SORTED_FIELD, SortTypeEnum.DESC.getSortType());
// Pagination
PaginationResult<GroupTopicMember> paginationResult = PaginationUtil.pageBySubData(groupTopicMembers, dto);
List<GroupMemberPO> groupMemberPOList = paginationResult.getData().getBizData().stream().map(elem -> new GroupMemberPO(clusterPhyId, elem.getTopicName(), groupName, group.getState().getState(), elem.getMemberCount())).collect(Collectors.toList());
return PaginationResult.buildSuc(this.getGroupTopicOverviewVOList(clusterPhyId, groupMemberPOList), paginationResult);
}
@Override
public PaginationResult<GroupOverviewVO> pagingClusterGroupsOverview(Long clusterPhyId, ClusterGroupSummaryDTO dto) {
List<Group> groupList = groupService.listClusterGroups(clusterPhyId);
// Type conversion
List<GroupOverviewVO> voList = groupList.stream().map(elem -> GroupConverter.convert2GroupOverviewVO(elem)).collect(Collectors.toList());
// Search by groupName
voList = PaginationUtil.pageByFuzzyFilter(voList, dto.getSearchGroupName(), Arrays.asList("name"));
// Search by topic
if (!ValidateUtils.isBlank(dto.getSearchTopicName())) {
voList = voList.stream().filter(elem -> {
for (String topicName : elem.getTopicNameList()) {
if (topicName.contains(dto.getSearchTopicName())) {
return true;
}
}
return false;
}).collect(Collectors.toList());
}
// Paginate, then return
return PaginationUtil.pageBySubData(voList, dto);
return PaginationResult.buildSuc(
this.convert2GroupTopicOverviewVOList(paginationResult.getData().getBizData(), metricsListResult.getData()),
paginationResult
);
}
@Override
@@ -141,7 +104,7 @@ public class GroupManagerImpl implements GroupManager {
List<String> latestMetricNames,
PaginationSortDTO dto) throws NotExistException, AdminOperateException {
// Get the list of TopicPartitions consumed by the consumer group
Map<TopicPartition, Long> consumedOffsetMap = groupService.getGroupOffsetFromKafka(clusterPhyId, groupName);
Map<TopicPartition, Long> consumedOffsetMap = groupService.getGroupOffset(clusterPhyId, groupName);
List<Integer> partitionList = consumedOffsetMap.keySet()
.stream()
.filter(elem -> elem.topic().equals(topicName))
@@ -150,7 +113,7 @@ public class GroupManagerImpl implements GroupManager {
Collections.sort(partitionList);
// Get the consumer group's current runtime information
ConsumerGroupDescription groupDescription = groupService.getGroupDescriptionFromKafka(clusterPhyId, groupName);
ConsumerGroupDescription groupDescription = groupService.getGroupDescription(clusterPhyId, groupName);
// Convert the storage format
Map<TopicPartition, MemberDescription> tpMemberMap = new HashMap<>();
@@ -203,13 +166,13 @@ public class GroupManagerImpl implements GroupManager {
return rv;
}
ConsumerGroupDescription description = groupService.getGroupDescriptionFromKafka(dto.getClusterId(), dto.getGroupName());
ConsumerGroupDescription description = groupService.getGroupDescription(dto.getClusterId(), dto.getGroupName());
if (ConsumerGroupState.DEAD.equals(description.state()) && !dto.isCreateIfNotExist()) {
return Result.buildFromRSAndMsg(ResultStatus.KAFKA_OPERATE_FAILED, "group不存在, 重置失败");
}
if (!ConsumerGroupState.EMPTY.equals(description.state()) && !ConsumerGroupState.DEAD.equals(description.state())) {
return Result.buildFromRSAndMsg(ResultStatus.KAFKA_OPERATE_FAILED, String.format("group处于%s, 重置失败(仅Empty | Dead 情况可重置)", GroupStateEnum.getByRawState(description.state()).getState()));
return Result.buildFromRSAndMsg(ResultStatus.KAFKA_OPERATE_FAILED, String.format("group处于%s, 重置失败(仅Empty情况可重置)", GroupStateEnum.getByRawState(description.state()).getState()));
}
// Get the offsets
@@ -222,22 +185,6 @@ public class GroupManagerImpl implements GroupManager {
return groupService.resetGroupOffsets(dto.getClusterId(), dto.getGroupName(), offsetMapResult.getData(), operator);
}
@Override
public List<GroupTopicOverviewVO> getGroupTopicOverviewVOList(Long clusterPhyId, List<GroupMemberPO> groupMemberPOList) {
// Fetch the metrics
Result<List<GroupMetrics>> metricsListResult = groupMetricService.listLatestMetricsAggByGroupTopicFromES(
clusterPhyId,
groupMemberPOList.stream().map(elem -> new GroupTopic(elem.getGroupName(), elem.getTopicName())).collect(Collectors.toList()),
Arrays.asList(GroupMetricVersionItems.GROUP_METRIC_LAG),
AggTypeEnum.MAX
);
if (metricsListResult.failed()) {
// If the query fails, log the error but still return the data already fetched
log.error("method=completeMetricData||clusterPhyId={}||result={}||errMsg=search es failed", clusterPhyId, metricsListResult);
}
return this.convert2GroupTopicOverviewVOList(groupMemberPOList, metricsListResult.getData());
}
/**************************************************** private method ****************************************************/
@@ -252,12 +199,12 @@ public class GroupManagerImpl implements GroupManager {
return Result.buildFromRSAndMsg(ResultStatus.NOT_EXIST, MsgConstant.getTopicNotExist(dto.getClusterId(), dto.getTopicName()));
}
if (OffsetTypeEnum.PRECISE_OFFSET.getResetType() == dto.getResetType()
if (GroupOffsetResetEnum.PRECISE_OFFSET.getResetType() == dto.getResetType()
&& ValidateUtils.isEmptyList(dto.getOffsetList())) {
return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, "参数错误指定offset重置需传offset信息");
}
if (OffsetTypeEnum.PRECISE_TIMESTAMP.getResetType() == dto.getResetType()
if (GroupOffsetResetEnum.PRECISE_TIMESTAMP.getResetType() == dto.getResetType()
&& ValidateUtils.isNull(dto.getTimestamp())) {
return Result.buildFromRSAndMsg(ResultStatus.PARAM_ILLEGAL, "参数错误,指定时间重置需传时间信息");
}
@@ -266,7 +213,7 @@ public class GroupManagerImpl implements GroupManager {
}
private Result<Map<TopicPartition, Long>> getPartitionOffset(GroupOffsetResetDTO dto) {
if (OffsetTypeEnum.PRECISE_OFFSET.getResetType() == dto.getResetType()) {
if (GroupOffsetResetEnum.PRECISE_OFFSET.getResetType() == dto.getResetType()) {
return Result.buildSuc(dto.getOffsetList().stream().collect(Collectors.toMap(
elem -> new TopicPartition(dto.getTopicName(), elem.getPartitionId()),
PartitionOffsetDTO::getOffset,
@@ -275,9 +222,9 @@ public class GroupManagerImpl implements GroupManager {
}
OffsetSpec offsetSpec = null;
if (OffsetTypeEnum.PRECISE_TIMESTAMP.getResetType() == dto.getResetType()) {
if (GroupOffsetResetEnum.PRECISE_TIMESTAMP.getResetType() == dto.getResetType()) {
offsetSpec = OffsetSpec.forTimestamp(dto.getTimestamp());
} else if (OffsetTypeEnum.EARLIEST.getResetType() == dto.getResetType()) {
} else if (GroupOffsetResetEnum.EARLIEST.getResetType() == dto.getResetType()) {
offsetSpec = OffsetSpec.earliest();
} else {
offsetSpec = OffsetSpec.latest();
@@ -325,11 +272,15 @@ public class GroupManagerImpl implements GroupManager {
// Fetch the group's metric information
Result<List<GroupMetrics>> groupMetricsResult = groupMetricService.collectGroupMetricsFromKafka(clusterPhyId, groupName, latestMetricNames == null ? Arrays.asList() : latestMetricNames);
Result<List<GroupMetrics>> groupMetricsResult = groupMetricService.listPartitionLatestMetricsFromES(
clusterPhyId,
groupName,
topicName,
latestMetricNames == null? Arrays.asList(): latestMetricNames
);
// Convert the group metrics
List<GroupMetrics> esGroupMetricsList = groupMetricsResult.hasData() ? groupMetricsResult.getData().stream().filter(elem -> topicName.equals(elem.getTopic())).collect(Collectors.toList()) : new ArrayList<>();
List<GroupMetrics> esGroupMetricsList = groupMetricsResult.hasData()? groupMetricsResult.getData(): new ArrayList<>();
Map<Integer, GroupMetrics> esMetricsMap = new HashMap<>();
for (GroupMetrics groupMetrics: esGroupMetricsList) {
esMetricsMap.put(groupMetrics.getPartitionId(), groupMetrics);
@@ -346,31 +297,4 @@ public class GroupManagerImpl implements GroupManager {
);
}
private List<GroupTopicOverviewVO> convert2GroupTopicOverviewVOList(String groupName, String state, List<GroupTopicMember> groupTopicList, List<GroupMetrics> metricsList) {
if (metricsList == null) {
metricsList = new ArrayList<>();
}
// <TopicName, GroupMetrics>
Map<String, GroupMetrics> metricsMap = new HashMap<>();
for (GroupMetrics metrics : metricsList) {
if (!groupName.equals(metrics.getGroup())) continue;
metricsMap.put(metrics.getTopic(), metrics);
}
List<GroupTopicOverviewVO> voList = new ArrayList<>();
for (GroupTopicMember po : groupTopicList) {
GroupTopicOverviewVO vo = ConvertUtil.obj2Obj(po, GroupTopicOverviewVO.class);
vo.setGroupName(groupName);
vo.setState(state);
GroupMetrics metrics = metricsMap.get(po.getTopicName());
if (metrics != null) {
vo.setMaxLag(ConvertUtil.Float2Long(metrics.getMetrics().get(GroupMetricVersionItems.GROUP_METRIC_LAG)));
}
voList.add(vo);
}
return voList;
}
}
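The reset-type dispatch in getPartitionOffset above maps a numeric reset type onto an OffsetSpec before asking Kafka for the matching offsets. Below is a minimal standalone sketch of that pattern with a plain AdminClient; the numeric codes and the class name are illustrative assumptions, not the project's actual enums or API.

```
// Minimal sketch: resolve target offsets for an offset reset, assuming a configured AdminClient.
// The numeric reset codes (2=earliest, 3=latest, 4=timestamp) are illustrative only; precise-offset
// resets would be handled before this point, as in the manager above.
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.ListOffsetsResult;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.common.TopicPartition;

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class OffsetSpecSketch {
    public static Map<TopicPartition, Long> resolveTargetOffsets(AdminClient adminClient,
                                                                 List<TopicPartition> partitions,
                                                                 int resetType,
                                                                 Long timestamp) throws Exception {
        final OffsetSpec spec;
        if (resetType == 4) {
            spec = OffsetSpec.forTimestamp(timestamp);   // reset to the offset at a timestamp
        } else if (resetType == 2) {
            spec = OffsetSpec.earliest();                // reset to the earliest offset
        } else {
            spec = OffsetSpec.latest();                  // default: reset to the latest offset
        }
        // Ask the brokers for the matching offset of every partition
        ListOffsetsResult result = adminClient.listOffsets(
                partitions.stream().collect(Collectors.toMap(tp -> tp, tp -> spec)));
        return result.all().get().entrySet().stream()
                .collect(Collectors.toMap(Map.Entry::getKey, e -> e.getValue().offset()));
    }
}
```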

View File

@@ -1,10 +1,7 @@
package com.xiaojukeji.know.streaming.km.biz.topic;
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationBaseDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.topic.TopicRecordDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupTopicOverviewVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.topic.TopicBrokersPartitionsSummaryVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.topic.TopicRecordVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.topic.TopicStateVO;
@@ -25,6 +22,4 @@ public interface TopicStateManager {
Result<List<TopicPartitionVO>> getTopicPartitions(Long clusterPhyId, String topicName, List<String> metricsNames);
Result<TopicBrokersPartitionsSummaryVO> getTopicBrokersPartitionsSummary(Long clusterPhyId, String topicName);
PaginationResult<GroupTopicOverviewVO> pagingTopicGroupsOverview(Long clusterPhyId, String topicName, String searchGroupName, PaginationBaseDTO dto);
}

View File

@@ -10,18 +10,14 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
import com.xiaojukeji.know.streaming.km.common.bean.entity.param.topic.TopicCreateParam;
import com.xiaojukeji.know.streaming.km.common.bean.entity.param.topic.TopicParam;
import com.xiaojukeji.know.streaming.km.common.bean.entity.param.topic.TopicPartitionExpandParam;
import com.xiaojukeji.know.streaming.km.common.bean.entity.partition.Partition;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.ResultStatus;
import com.xiaojukeji.know.streaming.km.common.bean.entity.topic.Topic;
import com.xiaojukeji.know.streaming.km.common.constant.MsgConstant;
import com.xiaojukeji.know.streaming.km.common.utils.BackoffUtils;
import com.xiaojukeji.know.streaming.km.common.utils.FutureUtil;
import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
import com.xiaojukeji.know.streaming.km.common.utils.kafka.KafkaReplicaAssignUtil;
import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerService;
import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
import com.xiaojukeji.know.streaming.km.core.service.partition.PartitionService;
import com.xiaojukeji.know.streaming.km.core.service.topic.OpTopicService;
import com.xiaojukeji.know.streaming.km.core.service.topic.TopicService;
import kafka.admin.AdminUtils;
@@ -56,9 +52,6 @@ public class OpTopicManagerImpl implements OpTopicManager {
@Autowired
private ClusterPhyService clusterPhyService;
@Autowired
private PartitionService partitionService;
@Override
public Result<Void> createTopic(TopicCreateDTO dto, String operator) {
log.info("method=createTopic||param={}||operator={}.", dto, operator);
@@ -87,7 +80,7 @@ public class OpTopicManagerImpl implements OpTopicManager {
);
// Create the Topic
Result<Void> createTopicRes = opTopicService.createTopic(
return opTopicService.createTopic(
new TopicCreateParam(
dto.getClusterId(),
dto.getTopicName(),
@@ -97,21 +90,6 @@ public class OpTopicManagerImpl implements OpTopicManager {
),
operator
);
if (createTopicRes.successful()){
try{
FutureUtil.quickStartupFutureUtil.submitTask(() -> {
BackoffUtils.backoff(3000);
Result<List<Partition>> partitionsResult = partitionService.listPartitionsFromKafka(clusterPhy, dto.getTopicName());
if (partitionsResult.successful()){
partitionService.updatePartitions(clusterPhy.getId(), dto.getTopicName(), partitionsResult.getData(), new ArrayList<>());
}
});
}catch (Exception e) {
log.error("method=createTopic||param={}||operator={}||msg=add partition to db failed||errMsg=exception", dto, operator, e);
return Result.buildFromRSAndMsg(ResultStatus.MYSQL_OPERATE_FAILED, "Topic创建成功但记录Partition到DB中失败等待定时任务同步partition信息");
}
}
return createTopicRes;
}
@Override

View File

@@ -2,22 +2,17 @@ package com.xiaojukeji.know.streaming.km.biz.topic.impl;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.biz.group.GroupManager;
import com.xiaojukeji.know.streaming.km.biz.topic.TopicStateManager;
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationBaseDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.topic.TopicRecordDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.broker.Broker;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.PartitionMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.TopicMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.entity.partition.Partition;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.PaginationResult;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.ResultStatus;
import com.xiaojukeji.know.streaming.km.common.bean.entity.topic.Topic;
import com.xiaojukeji.know.streaming.km.common.bean.po.group.GroupMemberPO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.broker.BrokerReplicaSummaryVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.group.GroupTopicOverviewVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.topic.TopicBrokersPartitionsSummaryVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.topic.TopicRecordVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.topic.TopicStateVO;
@@ -27,27 +22,25 @@ import com.xiaojukeji.know.streaming.km.common.bean.vo.topic.partition.TopicPart
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.constant.KafkaConstant;
import com.xiaojukeji.know.streaming.km.common.constant.MsgConstant;
import com.xiaojukeji.know.streaming.km.common.converter.PartitionConverter;
import com.xiaojukeji.know.streaming.km.common.converter.TopicVOConverter;
import com.xiaojukeji.know.streaming.km.common.enums.OffsetTypeEnum;
import com.xiaojukeji.know.streaming.km.common.enums.SortTypeEnum;
import com.xiaojukeji.know.streaming.km.common.exception.AdminOperateException;
import com.xiaojukeji.know.streaming.km.common.exception.NotExistException;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.PaginationUtil;
import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerService;
import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
import com.xiaojukeji.know.streaming.km.core.service.group.GroupService;
import com.xiaojukeji.know.streaming.km.core.service.partition.PartitionMetricService;
import com.xiaojukeji.know.streaming.km.core.service.partition.PartitionService;
import com.xiaojukeji.know.streaming.km.core.service.topic.TopicConfigService;
import com.xiaojukeji.know.streaming.km.core.service.partition.PartitionService;
import com.xiaojukeji.know.streaming.km.core.service.topic.TopicMetricService;
import com.xiaojukeji.know.streaming.km.core.service.topic.TopicService;
import com.xiaojukeji.know.streaming.km.core.service.version.metrics.TopicMetricVersionItems;
import org.apache.commons.lang3.ObjectUtils;
import org.apache.commons.lang3.StringUtils;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.config.TopicConfig;
import org.springframework.beans.factory.annotation.Autowired;
@@ -83,12 +76,6 @@ public class TopicStateManagerImpl implements TopicStateManager {
@Autowired
private TopicConfigService topicConfigService;
@Autowired
private GroupService groupService;
@Autowired
private GroupManager groupManager;
@Override
public TopicBrokerAllVO getTopicBrokerAll(Long clusterPhyId, String topicName, String searchBrokerHost) throws NotExistException {
Topic topic = topicService.getTopic(clusterPhyId, topicName);
@@ -173,31 +160,8 @@ public class TopicStateManagerImpl implements TopicStateManager {
}
maxMessage = Math.min(maxMessage, dto.getMaxRecords());
kafkaConsumer.assign(partitionList);
Map<TopicPartition, OffsetAndTimestamp> partitionOffsetAndTimestampMap = new HashMap<>();
// Get each partition's offset at the specified time (when querying messages from a given start time)
if (OffsetTypeEnum.PRECISE_TIMESTAMP.getResetType() == dto.getFilterOffsetReset()) {
Map<TopicPartition, Long> timestampsToSearch = new HashMap<>();
partitionList.forEach(topicPartition -> {
timestampsToSearch.put(topicPartition, dto.getStartTimestampUnitMs());
});
partitionOffsetAndTimestampMap = kafkaConsumer.offsetsForTimes(timestampsToSearch);
}
for (TopicPartition partition : partitionList) {
if (OffsetTypeEnum.EARLIEST.getResetType() == dto.getFilterOffsetReset()) {
// Reset to the earliest offset
kafkaConsumer.seek(partition, beginOffsetsMapResult.getData().get(partition));
} else if (OffsetTypeEnum.PRECISE_TIMESTAMP.getResetType() == dto.getFilterOffsetReset()) {
// Reset to the specified timestamp
kafkaConsumer.seek(partition, partitionOffsetAndTimestampMap.get(partition).offset());
} else if (OffsetTypeEnum.PRECISE_OFFSET.getResetType() == dto.getFilterOffsetReset()) {
// Reset to the specified offset
} else {
// Default: reset to the latest offset
kafkaConsumer.seek(partition, Math.max(beginOffsetsMapResult.getData().get(partition), endOffsetsMapResult.getData().get(partition) - dto.getMaxRecords()));
}
kafkaConsumer.seek(partition, Math.max(beginOffsetsMapResult.getData().get(partition), endOffsetsMapResult.getData().get(partition) - dto.getMaxRecords()));
}
// Subtract KafkaConstant.POLL_ONCE_TIMEOUT_UNIT_MS because a single poll takes time; without the subtraction, the elapsed time after the poll could exceed the requested limit
@@ -221,15 +185,6 @@ public class TopicStateManagerImpl implements TopicStateManager {
}
}
// Sort
if (ObjectUtils.isNotEmpty(voList)) {
// Sort by time in descending order by default
if (StringUtils.isBlank(dto.getSortType())) {
dto.setSortType(SortTypeEnum.DESC.getSortType());
}
PaginationUtil.pageBySort(voList, dto.getSortField(), dto.getSortType());
}
return Result.buildSuc(voList.subList(0, Math.min(dto.getMaxRecords(), voList.size())));
} catch (Exception e) {
log.error("method=getTopicMessages||clusterPhyId={}||topicName={}||param={}||errMsg=exception", clusterPhyId, topicName, dto, e);
@@ -358,19 +313,6 @@ public class TopicStateManagerImpl implements TopicStateManager {
return Result.buildSuc(vo);
}
@Override
public PaginationResult<GroupTopicOverviewVO> pagingTopicGroupsOverview(Long clusterPhyId, String topicName, String searchGroupName, PaginationBaseDTO dto) {
PaginationResult<GroupMemberPO> paginationResult = groupService.pagingGroupMembers(clusterPhyId, topicName, "", "", searchGroupName, dto);
if (!paginationResult.hasData()) {
return PaginationResult.buildSuc(new ArrayList<>(), paginationResult);
}
List<GroupTopicOverviewVO> groupTopicVOList = groupManager.getGroupTopicOverviewVOList(clusterPhyId, paginationResult.getData().getBizData());
return PaginationResult.buildSuc(groupTopicVOList, paginationResult);
}
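The message-preview logic earlier in this file seeks each partition via offsetsForTimes when a start timestamp is given. Below is a minimal standalone sketch of that seek-by-timestamp pattern with a plain KafkaConsumer, assuming placeholder bootstrap servers and topic name; offsetsForTimes returns null for a partition with no record at or after the timestamp, which the sketch guards against.

```
import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.*;
import java.util.stream.Collectors;

public class SeekByTimestampSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            List<TopicPartition> partitions = consumer.partitionsFor("demo-topic").stream()
                    .map(p -> new TopicPartition(p.topic(), p.partition()))
                    .collect(Collectors.toList());
            consumer.assign(partitions);

            long startTimestampMs = System.currentTimeMillis() - 3_600_000L; // one hour ago
            Map<TopicPartition, Long> query = partitions.stream()
                    .collect(Collectors.toMap(tp -> tp, tp -> startTimestampMs));

            // offsetsForTimes returns the earliest offset whose timestamp is >= the query,
            // or null for a partition that has no such record
            Map<TopicPartition, OffsetAndTimestamp> found = consumer.offsetsForTimes(query);
            for (TopicPartition tp : partitions) {
                OffsetAndTimestamp oat = found.get(tp);
                if (oat != null) {
                    consumer.seek(tp, oat.offset());
                } else {
                    consumer.seekToEnd(Collections.singletonList(tp));
                }
            }
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
            records.forEach(r -> System.out.println(r.partition() + "@" + r.offset() + ": " + r.value()));
        }
    }
}
```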
/**************************************************** private method ****************************************************/
private boolean checkIfIgnore(ConsumerRecord<String, String> consumerRecord, String filterKey, String filterValue) {

View File

@@ -7,7 +7,6 @@ import com.didiglobal.logi.log.LogFactory;
import com.didiglobal.logi.security.common.dto.config.ConfigDTO;
import com.didiglobal.logi.security.service.ConfigService;
import com.xiaojukeji.know.streaming.km.biz.version.VersionControlManager;
import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.MetricDetailDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.metrices.UserMetricConfigDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.config.metric.UserMetricConfig;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
@@ -47,49 +46,49 @@ public class VersionControlManagerImpl implements VersionControlManager {
@PostConstruct
public void init(){
defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_HEALTH_STATE, true));
defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_HEALTH_SCORE, true));
defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_TOTAL_PRODUCE_REQUESTS, true));
defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_FAILED_FETCH_REQ, true));
defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_FAILED_PRODUCE_REQ, true));
defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_MESSAGE_IN, true));
defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_UNDER_REPLICA_PARTITIONS, true));
defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_TOTAL_PRODUCE_REQUESTS, true));
defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_BYTES_IN, true));
defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_BYTES_OUT, true));
defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_BYTES_REJECTED, true));
defaultMetrics.add(new UserMetricConfig(METRIC_TOPIC.getCode(), TOPIC_METRIC_MESSAGE_IN, true));
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_HEALTH_STATE, true));
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_ACTIVE_CONTROLLER_COUNT, true));
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_BYTES_IN, true));
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_BYTES_OUT, true));
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_CONNECTIONS, true));
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_MESSAGES_IN, true));
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_PARTITIONS_NO_LEADER, true));
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_PARTITION_URP, true));
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_TOTAL_LOG_SIZE, true));
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_TOTAL_PRODUCE_REQ, true));
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_HEALTH_SCORE, true));
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_TOTAL_REQ_QUEUE_SIZE, true));
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_TOTAL_RES_QUEUE_SIZE, true));
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_ACTIVE_CONTROLLER_COUNT, true));
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_TOTAL_PRODUCE_REQ, true));
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_TOTAL_LOG_SIZE, true));
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_CONNECTIONS, true));
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_MESSAGES_IN, true));
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_BYTES_IN, true));
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_BYTES_OUT, true));
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_GROUP_REBALANCES, true));
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_JOB_RUNNING, true));
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_PARTITIONS_NO_LEADER, true));
defaultMetrics.add(new UserMetricConfig(METRIC_CLUSTER.getCode(), CLUSTER_METRIC_PARTITION_URP, true));
defaultMetrics.add(new UserMetricConfig(METRIC_GROUP.getCode(), GROUP_METRIC_OFFSET_CONSUMED, true));
defaultMetrics.add(new UserMetricConfig(METRIC_GROUP.getCode(), GROUP_METRIC_LAG, true));
defaultMetrics.add(new UserMetricConfig(METRIC_GROUP.getCode(), GROUP_METRIC_STATE, true));
defaultMetrics.add(new UserMetricConfig(METRIC_GROUP.getCode(), GROUP_METRIC_HEALTH_STATE, true));
defaultMetrics.add(new UserMetricConfig(METRIC_GROUP.getCode(), GROUP_METRIC_HEALTH_SCORE, true));
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_HEALTH_STATE, true));
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_CONNECTION_COUNT, true));
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_MESSAGE_IN, true));
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_NETWORK_RPO_AVG_IDLE, true));
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_REQ_AVG_IDLE, true));
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_TOTAL_PRODUCE_REQ, true));
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_HEALTH_SCORE, true));
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_TOTAL_REQ_QUEUE, true));
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_TOTAL_RES_QUEUE, true));
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_LEADERS_SKEW, true));
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_UNDER_REPLICATE_PARTITION, true));
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_PARTITIONS_SKEW, true));
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_MESSAGE_IN, true));
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_TOTAL_PRODUCE_REQ, true));
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_NETWORK_RPO_AVG_IDLE, true));
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_REQ_AVG_IDLE, true));
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_CONNECTION_COUNT, true));
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_BYTES_IN, true));
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_BYTES_OUT, true));
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_PARTITIONS_SKEW, true));
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_LEADERS_SKEW, true));
defaultMetrics.add(new UserMetricConfig(METRIC_BROKER.getCode(), BROKER_METRIC_UNDER_REPLICATE_PARTITION, true));
}
@Autowired
@@ -107,15 +106,10 @@ public class VersionControlManagerImpl implements VersionControlManager {
allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_BROKER.getCode()), VersionItemVO.class));
allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_PARTITION.getCode()), VersionItemVO.class));
allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_REPLICATION.getCode()), VersionItemVO.class));
allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(METRIC_ZOOKEEPER.getCode()), VersionItemVO.class));
allVersionItemVO.addAll(ConvertUtil.list2List(versionControlService.listVersionControlItem(WEB_OP.getCode()), VersionItemVO.class));
Map<String, VersionItemVO> map = allVersionItemVO.stream().collect(
Collectors.toMap(
u -> u.getType() + "@" + u.getName(),
Function.identity(),
(v1, v2) -> v1)
);
Collectors.toMap(u -> u.getType() + "@" + u.getName(), Function.identity() ));
return Result.buildSuc(map);
}
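The map above is keyed by type + "@" + name; whether a merge function is passed to Collectors.toMap decides what happens when two version items collide on that key. A small self-contained illustration with made-up keys:

```
import java.util.*;
import java.util.function.Function;
import java.util.stream.Collectors;

public class ToMapMergeSketch {
    public static void main(String[] args) {
        // Two entries that collide on the same "type@name" style key (hypothetical data)
        List<String> keys = Arrays.asList("metric@BytesIn", "metric@BytesIn");

        // Without a merge function, a duplicate key makes Collectors.toMap throw IllegalStateException
        try {
            keys.stream().collect(Collectors.toMap(Function.identity(), Function.identity()));
        } catch (IllegalStateException e) {
            System.out.println("duplicate key rejected: " + e.getMessage());
        }

        // With a merge function the first value wins, mirroring (v1, v2) -> v1 in the manager above
        Map<String, String> map = keys.stream()
                .collect(Collectors.toMap(Function.identity(), Function.identity(), (v1, v2) -> v1));
        System.out.println(map);
    }
}
```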
@@ -165,9 +159,6 @@ public class VersionControlManagerImpl implements VersionControlManager {
UserMetricConfig umc = userMetricConfigMap.get(itemType + "@" + metric);
userMetricConfigVO.setSet(null != umc && umc.isSet());
if (umc != null) {
userMetricConfigVO.setRank(umc.getRank());
}
userMetricConfigVO.setName(itemVO.getName());
userMetricConfigVO.setType(itemVO.getType());
userMetricConfigVO.setDesc(itemVO.getDesc());
@@ -187,29 +178,13 @@ public class VersionControlManagerImpl implements VersionControlManager {
@Override
public Result<Void> updateUserMetricItem(Long clusterId, Integer type, UserMetricConfigDTO dto, String operator) {
Map<String, Boolean> metricsSetMap = dto.getMetricsSet();
// Convert metricDetailDTOList
List<MetricDetailDTO> metricDetailDTOList = dto.getMetricDetailDTOList();
Map<String, MetricDetailDTO> metricDetailMap = new HashMap<>();
if (metricDetailDTOList != null && !metricDetailDTOList.isEmpty()) {
metricDetailMap = metricDetailDTOList.stream().collect(Collectors.toMap(MetricDetailDTO::getMetric, Function.identity()));
}
// Convert metricsSetMap
if (metricsSetMap != null && !metricsSetMap.isEmpty()) {
for (Map.Entry<String, Boolean> metricAndShowEntry : metricsSetMap.entrySet()) {
if (metricDetailMap.containsKey(metricAndShowEntry.getKey())) continue;
metricDetailMap.put(metricAndShowEntry.getKey(), new MetricDetailDTO(metricAndShowEntry.getKey(), metricAndShowEntry.getValue(), null));
}
}
if (metricDetailMap.isEmpty()) {
if(null == metricsSetMap || metricsSetMap.isEmpty()){
return Result.buildSuc();
}
Set<UserMetricConfig> userMetricConfigs = getUserMetricConfig(operator);
for (MetricDetailDTO metricDetailDTO : metricDetailMap.values()) {
UserMetricConfig userMetricConfig = new UserMetricConfig(type, metricDetailDTO.getMetric(), metricDetailDTO.getSet(), metricDetailDTO.getRank());
for(Map.Entry<String, Boolean> metricAndShowEntry : metricsSetMap.entrySet()){
UserMetricConfig userMetricConfig = new UserMetricConfig(type, metricAndShowEntry.getKey(), metricAndShowEntry.getValue());
userMetricConfigs.remove(userMetricConfig);
userMetricConfigs.add(userMetricConfig);
}
@@ -253,7 +228,7 @@ public class VersionControlManagerImpl implements VersionControlManager {
return defaultMetrics;
}
return JSON.parseObject(value, new TypeReference<Set<UserMetricConfig>>() {});
return JSON.parseObject(value, new TypeReference<Set<UserMetricConfig>>(){});
}
public static void main(String[] args){

View File

@@ -0,0 +1,121 @@
package com.xiaojukeji.know.streaming.km.collector.metric;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.*;
import com.xiaojukeji.know.streaming.km.common.bean.po.BaseESPO;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.*;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.EnvUtil;
import com.xiaojukeji.know.streaming.km.common.utils.NamedThreadFactory;
import com.xiaojukeji.know.streaming.km.persistence.es.dao.BaseMetricESDAO;
import org.apache.commons.collections.CollectionUtils;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;
import javax.annotation.PostConstruct;
import java.util.List;
import java.util.Objects;
import java.util.concurrent.LinkedBlockingDeque;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import static com.xiaojukeji.know.streaming.km.common.constant.ESIndexConstant.*;
@Component
public class MetricESSender implements ApplicationListener<BaseMetricEvent> {
protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
private static final int THRESHOLD = 100;
private ThreadPoolExecutor esExecutor = new ThreadPoolExecutor(10, 20, 6000, TimeUnit.MILLISECONDS,
new LinkedBlockingDeque<>(1000),
new NamedThreadFactory("KM-Collect-MetricESSender-ES"),
(r, e) -> LOGGER.warn("class=MetricESSender||msg=KM-Collect-MetricESSender-ES Deque is blocked, taskCount:{}", e.getTaskCount()));
@PostConstruct
public void init(){
LOGGER.info("class=MetricESSender||method=init||msg=init finished");
}
@Override
public void onApplicationEvent(BaseMetricEvent event) {
if(event instanceof BrokerMetricEvent) {
BrokerMetricEvent brokerMetricEvent = (BrokerMetricEvent)event;
send2es(BROKER_INDEX,
ConvertUtil.list2List(brokerMetricEvent.getBrokerMetrics(), BrokerMetricPO.class)
);
} else if(event instanceof ClusterMetricEvent) {
ClusterMetricEvent clusterMetricEvent = (ClusterMetricEvent)event;
send2es(CLUSTER_INDEX,
ConvertUtil.list2List(clusterMetricEvent.getClusterMetrics(), ClusterMetricPO.class)
);
} else if(event instanceof TopicMetricEvent) {
TopicMetricEvent topicMetricEvent = (TopicMetricEvent)event;
send2es(TOPIC_INDEX,
ConvertUtil.list2List(topicMetricEvent.getTopicMetrics(), TopicMetricPO.class)
);
} else if(event instanceof PartitionMetricEvent) {
PartitionMetricEvent partitionMetricEvent = (PartitionMetricEvent)event;
send2es(PARTITION_INDEX,
ConvertUtil.list2List(partitionMetricEvent.getPartitionMetrics(), PartitionMetricPO.class)
);
} else if(event instanceof GroupMetricEvent) {
GroupMetricEvent groupMetricEvent = (GroupMetricEvent)event;
send2es(GROUP_INDEX,
ConvertUtil.list2List(groupMetricEvent.getGroupMetrics(), GroupMetricPO.class)
);
} else if(event instanceof ReplicaMetricEvent) {
ReplicaMetricEvent replicaMetricEvent = (ReplicaMetricEvent)event;
send2es(REPLICATION_INDEX,
ConvertUtil.list2List(replicaMetricEvent.getReplicationMetrics(), ReplicationMetricPO.class)
);
}
}
/**
* Send to ES according to the monitoring dimension
*/
private boolean send2es(String index, List<? extends BaseESPO> statsList){
if (CollectionUtils.isEmpty(statsList)) {
return true;
}
if (!EnvUtil.isOnline()) {
LOGGER.info("class=MetricESSender||method=send2es||ariusStats={}||size={}",
index, statsList.size());
}
BaseMetricESDAO baseMetricESDao = BaseMetricESDAO.getByStatsType(index);
if (Objects.isNull( baseMetricESDao )) {
LOGGER.error("class=MetricESSender||method=send2es||errMsg=fail to find {}", index);
return false;
}
int size = statsList.size();
int num = (size) % THRESHOLD == 0 ? (size / THRESHOLD) : (size / THRESHOLD + 1);
if (size < THRESHOLD) {
esExecutor.execute(
() -> baseMetricESDao.batchInsertStats(statsList)
);
return true;
}
for (int i = 1; i < num + 1; i++) {
int end = (i * THRESHOLD) > size ? size : (i * THRESHOLD);
int start = (i - 1) * THRESHOLD;
esExecutor.execute(
() -> baseMetricESDao.batchInsertStats(statsList.subList(start, end))
);
}
return true;
}
}
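A worked illustration of the THRESHOLD batching in send2es: with THRESHOLD = 100, a list of 250 stats is split into sublists of 100, 100 and 50 before being handed to the executor. A minimal self-contained sketch of the same chunking (the list content is made up):

```
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class ChunkingSketch {
    private static final int THRESHOLD = 100;

    public static void main(String[] args) {
        List<Integer> statsList = IntStream.range(0, 250).boxed().collect(Collectors.toList());

        int size = statsList.size();
        int num = size % THRESHOLD == 0 ? size / THRESHOLD : size / THRESHOLD + 1;

        List<List<Integer>> batches = new ArrayList<>();
        for (int i = 1; i <= num; i++) {
            int start = (i - 1) * THRESHOLD;
            int end = Math.min(i * THRESHOLD, size);
            batches.add(statsList.subList(start, end));
        }
        // Prints 3 batches of sizes 100, 100 and 50
        batches.forEach(b -> System.out.println("batch size=" + b.size()));
    }
}
```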

View File

@@ -91,7 +91,7 @@ public class ReplicaMetricCollector extends AbstractMetricCollector<ReplicationM
continue;
}
Result<ReplicationMetrics> ret = replicaMetricService.collectReplicaMetricsFromKafka(
Result<ReplicationMetrics> ret = replicaMetricService.collectReplicaMetricsFromKafkaWithCache(
clusterPhyId,
metrics.getTopic(),
metrics.getBrokerId(),

View File

@@ -1,122 +0,0 @@
package com.xiaojukeji.know.streaming.km.collector.metric;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
import com.xiaojukeji.know.streaming.km.common.bean.entity.config.ZKConfig;
import com.xiaojukeji.know.streaming.km.common.bean.entity.kafkacontroller.KafkaController;
import com.xiaojukeji.know.streaming.km.common.bean.entity.param.metric.ZookeeperMetricParam;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.version.VersionControlItem;
import com.xiaojukeji.know.streaming.km.common.utils.Tuple;
import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.ZookeeperMetricEvent;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.EnvUtil;
import com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.ZookeeperInfo;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.ZookeeperMetrics;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.ZookeeperMetricPO;
import com.xiaojukeji.know.streaming.km.core.service.kafkacontroller.KafkaControllerService;
import com.xiaojukeji.know.streaming.km.core.service.version.VersionControlService;
import com.xiaojukeji.know.streaming.km.core.service.zookeeper.ZookeeperMetricService;
import com.xiaojukeji.know.streaming.km.core.service.zookeeper.ZookeeperService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionItemTypeEnum.METRIC_ZOOKEEPER;
/**
* @author didi
*/
@Component
public class ZookeeperMetricCollector extends AbstractMetricCollector<ZookeeperMetricPO> {
protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
@Autowired
private VersionControlService versionControlService;
@Autowired
private ZookeeperMetricService zookeeperMetricService;
@Autowired
private ZookeeperService zookeeperService;
@Autowired
private KafkaControllerService kafkaControllerService;
@Override
public void collectMetrics(ClusterPhy clusterPhy) {
Long startTime = System.currentTimeMillis();
Long clusterPhyId = clusterPhy.getId();
List<VersionControlItem> items = versionControlService.listVersionControlItem(clusterPhyId, collectorType().getCode());
List<ZookeeperInfo> aliveZKList = zookeeperService.listFromDBByCluster(clusterPhyId)
.stream()
.filter(elem -> Constant.ALIVE.equals(elem.getStatus()))
.collect(Collectors.toList());
KafkaController kafkaController = kafkaControllerService.getKafkaControllerFromDB(clusterPhyId);
ZookeeperMetrics metrics = ZookeeperMetrics.initWithMetric(clusterPhyId, Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (float)Constant.INVALID_CODE);
if (ValidateUtils.isEmptyList(aliveZKList)) {
// If no ZK node is alive, publish the event and return directly
publishMetric(new ZookeeperMetricEvent(this, Arrays.asList(metrics)));
return;
}
// Build the parameters
ZookeeperMetricParam param = new ZookeeperMetricParam(
clusterPhyId,
aliveZKList.stream().map(elem -> new Tuple<String, Integer>(elem.getHost(), elem.getPort())).collect(Collectors.toList()),
ConvertUtil.str2ObjByJson(clusterPhy.getZkProperties(), ZKConfig.class),
kafkaController == null? Constant.INVALID_CODE: kafkaController.getBrokerId(),
null
);
for(VersionControlItem v : items) {
try {
if(null != metrics.getMetrics().get(v.getName())) {
continue;
}
param.setMetricName(v.getName());
Result<ZookeeperMetrics> ret = zookeeperMetricService.collectMetricsFromZookeeper(param);
if(null == ret || ret.failed() || null == ret.getData()){
continue;
}
metrics.putMetric(ret.getData().getMetrics());
if(!EnvUtil.isOnline()){
LOGGER.info(
"class=ZookeeperMetricCollector||method=collectMetrics||clusterPhyId={}||metricName={}||metricValue={}",
clusterPhyId, v.getName(), ConvertUtil.obj2Json(ret.getData().getMetrics())
);
}
} catch (Exception e){
LOGGER.error(
"class=ZookeeperMetricCollector||method=collectMetrics||clusterPhyId={}||metricName={}||errMsg=exception!",
clusterPhyId, v.getName(), e
);
}
}
metrics.putMetric(Constant.COLLECT_METRICS_COST_TIME_METRICS_NAME, (System.currentTimeMillis() - startTime) / 1000.0f);
publishMetric(new ZookeeperMetricEvent(this, Arrays.asList(metrics)));
LOGGER.info(
"class=ZookeeperMetricCollector||method=collectMetrics||clusterPhyId={}||startTime={}||costTime={}||msg=msg=collect finished.",
clusterPhyId, startTime, System.currentTimeMillis() - startTime
);
}
@Override
public VersionItemTypeEnum collectorType() {
return METRIC_ZOOKEEPER;
}
}

View File

@@ -1,72 +0,0 @@
package com.xiaojukeji.know.streaming.km.collector.sink;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.po.BaseESPO;
import com.xiaojukeji.know.streaming.km.common.utils.EnvUtil;
import com.xiaojukeji.know.streaming.km.common.utils.NamedThreadFactory;
import com.xiaojukeji.know.streaming.km.persistence.es.dao.BaseMetricESDAO;
import org.apache.commons.collections.CollectionUtils;
import java.util.List;
import java.util.Objects;
import java.util.concurrent.LinkedBlockingDeque;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
public abstract class AbstractMetricESSender {
protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
private static final int THRESHOLD = 100;
private static final ThreadPoolExecutor esExecutor = new ThreadPoolExecutor(
10,
20,
6000,
TimeUnit.MILLISECONDS,
new LinkedBlockingDeque<>(1000),
new NamedThreadFactory("KM-Collect-MetricESSender-ES"),
(r, e) -> LOGGER.warn("class=MetricESSender||msg=KM-Collect-MetricESSender-ES Deque is blocked, taskCount:{}", e.getTaskCount())
);
/**
* Send to ES according to the monitoring dimension
*/
protected boolean send2es(String index, List<? extends BaseESPO> statsList){
if (CollectionUtils.isEmpty(statsList)) {
return true;
}
if (!EnvUtil.isOnline()) {
LOGGER.info("class=MetricESSender||method=send2es||ariusStats={}||size={}",
index, statsList.size());
}
BaseMetricESDAO baseMetricESDao = BaseMetricESDAO.getByStatsType(index);
if (Objects.isNull( baseMetricESDao )) {
LOGGER.error("class=MetricESSender||method=send2es||errMsg=fail to find {}", index);
return false;
}
int size = statsList.size();
int num = (size) % THRESHOLD == 0 ? (size / THRESHOLD) : (size / THRESHOLD + 1);
if (size < THRESHOLD) {
esExecutor.execute(
() -> baseMetricESDao.batchInsertStats(statsList)
);
return true;
}
for (int i = 1; i < num + 1; i++) {
int end = (i * THRESHOLD) > size ? size : (i * THRESHOLD);
int start = (i - 1) * THRESHOLD;
esExecutor.execute(
() -> baseMetricESDao.batchInsertStats(statsList.subList(start, end))
);
}
return true;
}
}

View File

@@ -1,28 +0,0 @@
package com.xiaojukeji.know.streaming.km.collector.sink;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.BrokerMetricEvent;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.BrokerMetricPO;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;
import javax.annotation.PostConstruct;
import static com.xiaojukeji.know.streaming.km.common.constant.ESIndexConstant.BROKER_INDEX;
@Component
public class BrokerMetricESSender extends AbstractMetricESSender implements ApplicationListener<BrokerMetricEvent> {
protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
@PostConstruct
public void init(){
LOGGER.info("class=BrokerMetricESSender||method=init||msg=init finished");
}
@Override
public void onApplicationEvent(BrokerMetricEvent event) {
send2es(BROKER_INDEX, ConvertUtil.list2List(event.getBrokerMetrics(), BrokerMetricPO.class));
}
}

View File

@@ -1,29 +0,0 @@
package com.xiaojukeji.know.streaming.km.collector.sink;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.ClusterMetricEvent;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.ClusterMetricPO;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;
import javax.annotation.PostConstruct;
import static com.xiaojukeji.know.streaming.km.common.constant.ESIndexConstant.CLUSTER_INDEX;
@Component
public class ClusterMetricESSender extends AbstractMetricESSender implements ApplicationListener<ClusterMetricEvent> {
protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
@PostConstruct
public void init(){
LOGGER.info("class=ClusterMetricESSender||method=init||msg=init finished");
}
@Override
public void onApplicationEvent(ClusterMetricEvent event) {
send2es(CLUSTER_INDEX, ConvertUtil.list2List(event.getClusterMetrics(), ClusterMetricPO.class));
}
}

View File

@@ -1,29 +0,0 @@
package com.xiaojukeji.know.streaming.km.collector.sink;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.GroupMetricEvent;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.GroupMetricPO;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;
import javax.annotation.PostConstruct;
import static com.xiaojukeji.know.streaming.km.common.constant.ESIndexConstant.GROUP_INDEX;
@Component
public class GroupMetricESSender extends AbstractMetricESSender implements ApplicationListener<GroupMetricEvent> {
protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
@PostConstruct
public void init(){
LOGGER.info("class=GroupMetricESSender||method=init||msg=init finished");
}
@Override
public void onApplicationEvent(GroupMetricEvent event) {
send2es(GROUP_INDEX, ConvertUtil.list2List(event.getGroupMetrics(), GroupMetricPO.class));
}
}

View File

@@ -1,28 +0,0 @@
package com.xiaojukeji.know.streaming.km.collector.sink;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.PartitionMetricEvent;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.PartitionMetricPO;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;
import javax.annotation.PostConstruct;
import static com.xiaojukeji.know.streaming.km.common.constant.ESIndexConstant.PARTITION_INDEX;
@Component
public class PartitionMetricESSender extends AbstractMetricESSender implements ApplicationListener<PartitionMetricEvent> {
protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
@PostConstruct
public void init(){
LOGGER.info("class=PartitionMetricESSender||method=init||msg=init finished");
}
@Override
public void onApplicationEvent(PartitionMetricEvent event) {
send2es(PARTITION_INDEX, ConvertUtil.list2List(event.getPartitionMetrics(), PartitionMetricPO.class));
}
}

View File

@@ -1,28 +0,0 @@
package com.xiaojukeji.know.streaming.km.collector.sink;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.ReplicaMetricEvent;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.ReplicationMetricPO;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;
import javax.annotation.PostConstruct;
import static com.xiaojukeji.know.streaming.km.common.constant.ESIndexConstant.REPLICATION_INDEX;
@Component
public class ReplicaMetricESSender extends AbstractMetricESSender implements ApplicationListener<ReplicaMetricEvent> {
protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
@PostConstruct
public void init(){
LOGGER.info("class=GroupMetricESSender||method=init||msg=init finished");
}
@Override
public void onApplicationEvent(ReplicaMetricEvent event) {
send2es(REPLICATION_INDEX, ConvertUtil.list2List(event.getReplicationMetrics(), ReplicationMetricPO.class));
}
}

View File

@@ -1,29 +0,0 @@
package com.xiaojukeji.know.streaming.km.collector.sink;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.*;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.*;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;
import javax.annotation.PostConstruct;
import static com.xiaojukeji.know.streaming.km.common.constant.ESIndexConstant.TOPIC_INDEX;
@Component
public class TopicMetricESSender extends AbstractMetricESSender implements ApplicationListener<TopicMetricEvent> {
protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
@PostConstruct
public void init(){
LOGGER.info("class=TopicMetricESSender||method=init||msg=init finished");
}
@Override
public void onApplicationEvent(TopicMetricEvent event) {
send2es(TOPIC_INDEX, ConvertUtil.list2List(event.getTopicMetrics(), TopicMetricPO.class));
}
}

View File

@@ -1,28 +0,0 @@
package com.xiaojukeji.know.streaming.km.collector.sink;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.bean.event.metric.ZookeeperMetricEvent;
import com.xiaojukeji.know.streaming.km.common.bean.po.metrice.ZookeeperMetricPO;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;
import javax.annotation.PostConstruct;
import static com.xiaojukeji.know.streaming.km.common.constant.ESIndexConstant.ZOOKEEPER_INDEX;
@Component
public class ZookeeperMetricESSender extends AbstractMetricESSender implements ApplicationListener<ZookeeperMetricEvent> {
protected static final ILog LOGGER = LogFactory.getLog("METRIC_LOGGER");
@PostConstruct
public void init(){
LOGGER.info("class=ZookeeperMetricESSender||method=init||msg=init finished");
}
@Override
public void onApplicationEvent(ZookeeperMetricEvent event) {
send2es(ZOOKEEPER_INDEX, ConvertUtil.list2List(event.getZookeeperMetrics(), ZookeeperMetricPO.class));
}
}

View File

@@ -1,18 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.cluster;
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationBaseDTO;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
/**
* @author wyb
* @date 2022/10/17
*/
@Data
public class ClusterGroupSummaryDTO extends PaginationBaseDTO {
@ApiModelProperty("查找该Topic")
private String searchTopicName;
@ApiModelProperty("查找该Group")
private String searchGroupName;
}

View File

@@ -3,7 +3,6 @@ package com.xiaojukeji.know.streaming.km.common.bean.dto.cluster;
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.xiaojukeji.know.streaming.km.common.bean.dto.BaseDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.config.JmxConfig;
import com.xiaojukeji.know.streaming.km.common.bean.entity.config.ZKConfig;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
@@ -35,8 +34,4 @@ public class ClusterPhyBaseDTO extends BaseDTO {
@NotNull(message = "jmxProperties不允许为空")
@ApiModelProperty(value="Jmx配置")
protected JmxConfig jmxProperties;
// TODO: add a not-null constraint once the front-end page supports this field
@ApiModelProperty(value="ZK配置")
protected ZKConfig zkProperties;
}

View File

@@ -1,13 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.cluster;
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationBaseDTO;
import lombok.Data;
/**
* @author wyc
* @date 2022/9/23
*/
@Data
public class ClusterZookeepersOverviewDTO extends PaginationBaseDTO {
}

View File

@@ -3,7 +3,6 @@ package com.xiaojukeji.know.streaming.km.common.bean.dto.group;
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.xiaojukeji.know.streaming.km.common.bean.dto.partition.PartitionOffsetDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.topic.ClusterTopicDTO;
import com.xiaojukeji.know.streaming.km.common.enums.OffsetTypeEnum;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
@@ -24,7 +23,7 @@ public class GroupOffsetResetDTO extends ClusterTopicDTO {
private String groupName;
/**
* @see OffsetTypeEnum
* @see com.xiaojukeji.know.streaming.km.common.enums.GroupOffsetResetEnum
*/
@NotNull(message = "resetType不允许为空")
@ApiModelProperty(value = "重置方式", example = "1")

View File

@@ -1,32 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.metrices;
import com.xiaojukeji.know.streaming.km.common.bean.dto.BaseDTO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import javax.validation.constraints.NotNull;
/**
* @author didi
*/
@Data
@NoArgsConstructor
@AllArgsConstructor
@ApiModel(description = "指标详细属性信息")
public class MetricDetailDTO extends BaseDTO {
@ApiModelProperty("指标名称")
private String metric;
@ApiModelProperty("指标是否显示")
private Boolean set;
@NotNull(message = "MetricDetailDTO的rank字段应不为空")
@ApiModelProperty("指标优先级")
private Integer rank;
}

View File

@@ -7,8 +7,6 @@ import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import javax.validation.Valid;
import java.util.List;
import java.util.Map;
@@ -19,8 +17,4 @@ import java.util.Map;
public class UserMetricConfigDTO extends BaseDTO {
@ApiModelProperty("指标展示设置项key指标名value是否展现(true展现/false不展现)")
private Map<String, Boolean> metricsSet;
@Valid
@ApiModelProperty("指标自定义属性列表")
private List<MetricDetailDTO> metricDetailDTOList;
}

View File

@@ -1,8 +1,7 @@
package com.xiaojukeji.know.streaming.km.common.bean.dto.topic;
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationSortDTO;
import com.xiaojukeji.know.streaming.km.common.enums.OffsetTypeEnum;
import com.xiaojukeji.know.streaming.km.common.bean.dto.BaseDTO;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
@@ -16,7 +15,7 @@ import javax.validation.constraints.NotNull;
@Data
@JsonIgnoreProperties(ignoreUnknown = true)
@ApiModel(description = "Topic记录")
public class TopicRecordDTO extends PaginationSortDTO {
public class TopicRecordDTO extends BaseDTO {
@NotNull(message = "truncate不允许为空")
@ApiModelProperty(value = "是否截断", example = "true")
private Boolean truncate;
@@ -35,13 +34,4 @@ public class TopicRecordDTO extends PaginationSortDTO {
@ApiModelProperty(value = "预览超时时间", example = "10000")
private Long pullTimeoutUnitMs = 8000L;
/**
* @see OffsetTypeEnum
*/
@ApiModelProperty(value = "offset", example = "")
private Integer filterOffsetReset = 0;
@ApiModelProperty(value = "开始日期时间戳", example = "")
private Long startTimestampUnitMs;
}

View File

@@ -3,9 +3,9 @@ package com.xiaojukeji.know.streaming.km.common.bean.entity.broker;
import com.alibaba.fastjson.TypeReference;
import com.xiaojukeji.know.streaming.km.common.bean.entity.common.IpPortData;
import com.xiaojukeji.know.streaming.km.common.bean.entity.config.JmxConfig;
import com.xiaojukeji.know.streaming.km.common.bean.po.broker.BrokerPO;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.zookeeper.znode.brokers.BrokerMetadata;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
@@ -66,19 +66,33 @@ public class Broker implements Serializable {
*/
private Map<String, IpPortData> endpointMap;
public static Broker buildFrom(Long clusterPhyId, Node node, Long startTimestamp, JmxConfig jmxConfig) {
public static Broker buildFrom(Long clusterPhyId, Node node, Long startTimestamp) {
Broker metadata = new Broker();
metadata.setClusterPhyId(clusterPhyId);
metadata.setBrokerId(node.id());
metadata.setHost(node.host());
metadata.setPort(node.port());
metadata.setJmxPort(jmxConfig != null ? jmxConfig.getJmxPort() : -1);
metadata.setJmxPort(-1);
metadata.setStartTimestamp(startTimestamp);
metadata.setRack(node.rack());
metadata.setStatus(1);
return metadata;
}
public static Broker buildFrom(Long clusterPhyId, Integer brokerId, BrokerMetadata brokerMetadata) {
Broker metadata = new Broker();
metadata.setClusterPhyId(clusterPhyId);
metadata.setBrokerId(brokerId);
metadata.setHost(brokerMetadata.getHost());
metadata.setPort(brokerMetadata.getPort());
metadata.setJmxPort(brokerMetadata.getJmxPort());
metadata.setStartTimestamp(brokerMetadata.getTimestamp());
metadata.setRack(brokerMetadata.getRack());
metadata.setStatus(1);
metadata.setEndpointMap(brokerMetadata.getEndpointMap());
return metadata;
}
public static Broker buildFrom(BrokerPO brokerPO) {
Broker broker = ConvertUtil.obj2Obj(brokerPO, Broker.class);
String endpointMapStr = brokerPO.getEndpointMap();

View File

@@ -53,16 +53,9 @@ public class ClusterPhy implements Comparable<ClusterPhy>, EntifyIdInterface {
/**
* JMX configuration
* @see com.xiaojukeji.know.streaming.km.common.bean.entity.config.JmxConfig
*/
private String jmxProperties;
/**
* ZK configuration
* @see com.xiaojukeji.know.streaming.km.common.bean.entity.config.ZKConfig
*/
private String zkProperties;
/**
* Whether ACL is enabled
* @see com.xiaojukeji.know.streaming.km.common.enums.cluster.ClusterAuthTypeEnum

View File

@@ -1,37 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.cluster;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
/**
* Cluster health-state information
* @author zengqiao
* @date 22/02/24
*/
@Data
@NoArgsConstructor
@AllArgsConstructor
public class ClusterPhysHealthState {
private Integer unknownCount;
private Integer goodCount;
private Integer mediumCount;
private Integer poorCount;
private Integer deadCount;
private Integer total;
public ClusterPhysHealthState(Integer total) {
this.unknownCount = 0;
this.goodCount = 0;
this.mediumCount = 0;
this.poorCount = 0;
this.deadCount = 0;
this.total = total;
}
}

View File

@@ -1,70 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.config;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import java.io.Serializable;
import java.util.Properties;
/**
* @author zengqiao
* @date 22/02/24
*/
@ApiModel(description = "ZK配置")
public class ZKConfig implements Serializable {
@ApiModelProperty(value="ZK的jmx配置")
private JmxConfig jmxConfig;
@ApiModelProperty(value="ZK是否开启secure", example = "false")
private Boolean openSecure = false;
@ApiModelProperty(value="ZK的Session超时时间", example = "15000")
private Integer sessionTimeoutUnitMs = 15000;
@ApiModelProperty(value="ZK的Request超时时间", example = "5000")
private Integer requestTimeoutUnitMs = 5000;
@ApiModelProperty(value="ZK的Request超时时间")
private Properties otherProps = new Properties();
public JmxConfig getJmxConfig() {
return jmxConfig == null? new JmxConfig(): jmxConfig;
}
public void setJmxConfig(JmxConfig jmxConfig) {
this.jmxConfig = jmxConfig;
}
public Boolean getOpenSecure() {
return openSecure != null && openSecure;
}
public void setOpenSecure(Boolean openSecure) {
this.openSecure = openSecure;
}
public Integer getSessionTimeoutUnitMs() {
return sessionTimeoutUnitMs == null? Constant.DEFAULT_SESSION_TIMEOUT_UNIT_MS: sessionTimeoutUnitMs;
}
public void setSessionTimeoutUnitMs(Integer sessionTimeoutUnitMs) {
this.sessionTimeoutUnitMs = sessionTimeoutUnitMs;
}
public Integer getRequestTimeoutUnitMs() {
return requestTimeoutUnitMs == null? Constant.DEFAULT_REQUEST_TIMEOUT_UNIT_MS: requestTimeoutUnitMs;
}
public void setRequestTimeoutUnitMs(Integer requestTimeoutUnitMs) {
this.requestTimeoutUnitMs = requestTimeoutUnitMs;
}
public Properties getOtherProps() {
return otherProps == null? new Properties() : otherProps;
}
public void setOtherProps(Properties otherProps) {
this.otherProps = otherProps;
}
}
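
Every getter above is null-safe, so an empty ZKConfig is immediately usable. A minimal sketch of the fallbacks, taken straight from the code shown:

ZKConfig cfg = new ZKConfig();
cfg.getJmxConfig();             // never null: falls back to new JmxConfig()
cfg.getOpenSecure();            // false unless explicitly set to true
cfg.getSessionTimeoutUnitMs();  // 15000 by default, Constant.DEFAULT_SESSION_TIMEOUT_UNIT_MS if nulled
cfg.getRequestTimeoutUnitMs();  // 5000 by default, Constant.DEFAULT_REQUEST_TIMEOUT_UNIT_MS if nulled
cfg.getOtherProps();            // never null: empty Properties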

View File

@@ -13,4 +13,9 @@ public class BaseClusterHealthConfig extends BaseClusterConfigValue {
* Health check name
*/
protected HealthCheckNameEnum checkNameEnum;
/**
* Weight
*/
protected Float weight;
}

View File

@@ -1,19 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.config.healthcheck;
import lombok.Data;
/**
* @author wyb
* @date 2022/10/26
*/
@Data
public class HealthAmountRatioConfig extends BaseClusterHealthConfig {
/**
* Total count
*/
private Integer amount;
/**
* Ratio
*/
private Double ratio;
}

View File

@@ -1,12 +1,12 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.config.metric;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
@Data
@NoArgsConstructor
@AllArgsConstructor
public class UserMetricConfig {
private int type;
@@ -15,22 +15,6 @@ public class UserMetricConfig {
private boolean set;
private Integer rank;
public UserMetricConfig(int type, String metric, boolean set, Integer rank) {
this.type = type;
this.metric = metric;
this.set = set;
this.rank = rank;
}
public UserMetricConfig(int type, String metric, boolean set) {
this.type = type;
this.metric = metric;
this.set = set;
this.rank = null;
}
@Override
public int hashCode(){
return metric.hashCode() << 1 + type;

View File

@@ -1,74 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.group;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.enums.group.GroupStateEnum;
import com.xiaojukeji.know.streaming.km.common.enums.group.GroupTypeEnum;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import org.apache.kafka.clients.admin.ConsumerGroupDescription;
import java.util.ArrayList;
import java.util.List;
/**
* @author wyb
* @date 2022/10/10
*/
@Data
@NoArgsConstructor
@AllArgsConstructor
public class Group {
/**
* Cluster id
*/
private Long clusterPhyId;
/**
* Group type
* @see GroupTypeEnum
*/
private GroupTypeEnum type;
/**
* Group name
*/
private String name;
/**
* Group state
* @see GroupStateEnum
*/
private GroupStateEnum state;
/**
* Number of group members
*/
private Integer memberCount;
/**
* Topics consumed by the group
*/
private List<GroupTopicMember> topicMembers;
/**
* Group partition assignor
*/
private String partitionAssignor;
/**
* Group coordinator brokerId
*/
private int coordinatorId;
public Group(Long clusterPhyId, String groupName, ConsumerGroupDescription groupDescription) {
this.clusterPhyId = clusterPhyId;
this.type = groupDescription.isSimpleConsumerGroup()? GroupTypeEnum.CONSUMER: GroupTypeEnum.CONNECTOR;
this.name = groupName;
this.state = GroupStateEnum.getByRawState(groupDescription.state());
this.memberCount = groupDescription.members() == null? 0: groupDescription.members().size();
this.topicMembers = new ArrayList<>();
this.partitionAssignor = groupDescription.partitionAssignor();
this.coordinatorId = groupDescription.coordinator() == null? Constant.INVALID_CODE: groupDescription.coordinator().id();
}
}
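
The three-argument constructor above maps straight from Kafka's admin API. A hedged sketch of feeding it (standard kafka-clients calls; bootstrap address, group name and clusterPhyId are placeholders):

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ConsumerGroupDescription;
import java.util.Collections;
import java.util.Properties;

public static Group describeOneGroup() throws Exception {
    Properties props = new Properties();
    props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
    try (AdminClient admin = AdminClient.create(props)) {
        ConsumerGroupDescription desc = admin
                .describeConsumerGroups(Collections.singletonList("my-group"))
                .all().get()
                .get("my-group");
        // type/state/memberCount/coordinatorId are all derived inside the constructor
        return new Group(1L, "my-group", desc);
    }
}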

View File

@@ -1,27 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.group;
import lombok.Data;
import lombok.NoArgsConstructor;
/**
* @author wyb
* @date 2022/10/10
*/
@Data
@NoArgsConstructor
public class GroupTopicMember {
/**
* Topic name
*/
private String topicName;
/**
* Number of members consuming this Topic
*/
private Integer memberCount;
public GroupTopicMember(String topicName, Integer memberCount) {
this.topicName = topicName;
this.memberCount = memberCount;
}
}

View File

@@ -1,83 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.health;
import com.xiaojukeji.know.streaming.km.common.bean.po.health.HealthCheckResultPO;
import com.xiaojukeji.know.streaming.km.common.enums.health.HealthCheckNameEnum;
import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;
import java.util.stream.Collectors;
@Data
@NoArgsConstructor
public class HealthCheckAggResult {
private HealthCheckNameEnum checkNameEnum;
private List<HealthCheckResultPO> poList;
private Boolean passed;
public HealthCheckAggResult(HealthCheckNameEnum checkNameEnum, List<HealthCheckResultPO> poList) {
this.checkNameEnum = checkNameEnum;
this.poList = poList;
if (!ValidateUtils.isEmptyList(poList) && poList.stream().filter(elem -> elem.getPassed() <= 0).count() <= 0) {
passed = true;
} else {
passed = false;
}
}
public Integer getTotalCount() {
if (poList == null) {
return 0;
}
return poList.size();
}
public Integer getPassedCount() {
if (poList == null) {
return 0;
}
return (int) (poList.stream().filter(elem -> elem.getPassed() > 0).count());
}
/**
* Compute the health score of the current check
* e.g. the score of a single item within the cluster's Broker health checks
*/
public Integer calRawHealthScore() {
if (poList == null || poList.isEmpty()) {
return 100;
}
return 100 * this.getPassedCount() / this.getTotalCount();
}
public List<String> getNotPassedResNameList() {
if (poList == null) {
return new ArrayList<>();
}
return poList.stream().filter(elem -> elem.getPassed() <= 0).map(elem -> elem.getResName()).collect(Collectors.toList());
}
public Date getCreateTime() {
if (ValidateUtils.isEmptyList(poList)) {
return null;
}
return poList.get(0).getCreateTime();
}
public Date getUpdateTime() {
if (ValidateUtils.isEmptyList(poList)) {
return null;
}
return poList.get(0).getUpdateTime();
}
}
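
The aggregate score is plain integer arithmetic — 100 * passed / total — with a null or empty PO list treated as fully healthy. A worked sketch (the enum constant is arbitrary and the PO's setter is assumed from Lombok; neither is shown in this diff):

HealthCheckResultPO ok = new HealthCheckResultPO();
ok.setPassed(1);                                   // assumed Lombok setter on the PO
HealthCheckResultPO bad = new HealthCheckResultPO();
bad.setPassed(0);

HealthCheckAggResult agg = new HealthCheckAggResult(
        HealthCheckNameEnum.values()[0],           // any check name; concrete constants not shown here
        java.util.Arrays.asList(ok, ok, bad));
agg.getPassed();          // false: one PO has passed <= 0
agg.calRawHealthScore();  // 100 * 2 / 3 == 66 (integer division)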

View File

@@ -17,6 +17,10 @@ import java.util.stream.Collectors;
public class HealthScoreResult {
private HealthCheckNameEnum checkNameEnum;
private Float presentDimensionTotalWeight;
private Float allDimensionTotalWeight;
private BaseClusterHealthConfig baseConfig;
private List<HealthCheckResultPO> poList;
@@ -24,11 +28,15 @@ public class HealthScoreResult {
private Boolean passed;
public HealthScoreResult(HealthCheckNameEnum checkNameEnum,
Float presentDimensionTotalWeight,
Float allDimensionTotalWeight,
BaseClusterHealthConfig baseConfig,
List<HealthCheckResultPO> poList) {
this.checkNameEnum = checkNameEnum;
this.baseConfig = baseConfig;
this.poList = poList;
this.presentDimensionTotalWeight = presentDimensionTotalWeight;
this.allDimensionTotalWeight = allDimensionTotalWeight;
if (!ValidateUtils.isEmptyList(poList) && poList.stream().filter(elem -> elem.getPassed() <= 0).count() <= 0) {
passed = true;
} else {
@@ -51,6 +59,32 @@ public class HealthScoreResult {
return (int) (poList.stream().filter(elem -> elem.getPassed() > 0).count());
}
/**
* Compute the weighted health score across all checks
* e.g. the overall cluster health score
*/
public Float calAllWeightHealthScore() {
Float healthScore = 100 * baseConfig.getWeight() / allDimensionTotalWeight;
if (poList == null || poList.isEmpty()) {
return 0.0f;
}
return healthScore * this.getPassedCount() / this.getTotalCount();
}
/**
* Compute the health score of the current dimension
* e.g. the cluster's Broker health score
*/
public Float calDimensionWeightHealthScore() {
Float healthScore = 100 * baseConfig.getWeight() / presentDimensionTotalWeight;
if (poList == null || poList.isEmpty()) {
return 0.0f;
}
return healthScore * this.getPassedCount() / this.getTotalCount();
}
/**
* Compute the health score of the current check
* e.g. the score of a single item within the cluster's Broker health checks
@@ -68,7 +102,7 @@ public class HealthScoreResult {
return new ArrayList<>();
}
return poList.stream().filter(elem -> elem.getPassed() <= 0 && !ValidateUtils.isBlank(elem.getResName())).map(elem -> elem.getResName()).collect(Collectors.toList());
return poList.stream().filter(elem -> elem.getPassed() <= 0).map(elem -> elem.getResName()).collect(Collectors.toList());
}
public Date getCreateTime() {

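To make the two weighted scores above concrete: calAllWeightHealthScore() scales the check's pass ratio by weight / allDimensionTotalWeight, and calDimensionWeightHealthScore() by weight / presentDimensionTotalWeight. A worked sketch with illustrative numbers:

// weight = 10, allDimensionTotalWeight = 50, presentDimensionTotalWeight = 20,
// poList holding 4 checks of which 3 passed:
float all = 100f * 10 / 50 * 3 / 4;  // 15.0f contributed to the overall cluster score
float dim = 100f * 10 / 20 * 3 / 4;  // 37.5f contributed to this dimension's score
// An empty or null poList short-circuits both methods to 0.0f.
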
View File

@@ -1,28 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.metrics;
import lombok.Data;
import lombok.ToString;
/**
* @author zengqiao
* @date 20/6/17
*/
@Data
@ToString
public class ZookeeperMetrics extends BaseMetrics {
public ZookeeperMetrics(Long clusterPhyId) {
super(clusterPhyId);
}
public static ZookeeperMetrics initWithMetric(Long clusterPhyId, String metric, Float value) {
ZookeeperMetrics metrics = new ZookeeperMetrics(clusterPhyId);
metrics.setClusterPhyId( clusterPhyId );
metrics.putMetric(metric, value);
return metrics;
}
@Override
public String unique() {
return "ZK@" + clusterPhyId;
}
}

View File

@@ -1,47 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.param.metric;
import com.xiaojukeji.know.streaming.km.common.bean.entity.config.ZKConfig;
import com.xiaojukeji.know.streaming.km.common.utils.Tuple;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.util.List;
/**
* @author didi
*/
@Data
@NoArgsConstructor
public class ZookeeperMetricParam extends MetricParam {
private Long clusterPhyId;
private List<Tuple<String, Integer>> zkAddressList;
private ZKConfig zkConfig;
private String metricName;
private Integer kafkaControllerId;
public ZookeeperMetricParam(Long clusterPhyId,
List<Tuple<String, Integer>> zkAddressList,
ZKConfig zkConfig,
String metricName) {
this.clusterPhyId = clusterPhyId;
this.zkAddressList = zkAddressList;
this.zkConfig = zkConfig;
this.metricName = metricName;
}
public ZookeeperMetricParam(Long clusterPhyId,
List<Tuple<String, Integer>> zkAddressList,
ZKConfig zkConfig,
Integer kafkaControllerId,
String metricName) {
this.clusterPhyId = clusterPhyId;
this.zkAddressList = zkAddressList;
this.zkConfig = zkConfig;
this.kafkaControllerId = kafkaControllerId;
this.metricName = metricName;
}
}

View File

@@ -1,26 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.param.zookeeper;
import com.xiaojukeji.know.streaming.km.common.bean.entity.config.ZKConfig;
import com.xiaojukeji.know.streaming.km.common.bean.entity.param.cluster.ClusterPhyParam;
import com.xiaojukeji.know.streaming.km.common.utils.Tuple;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.util.List;
/**
* @author didi
*/
@Data
@NoArgsConstructor
public class ZookeeperParam extends ClusterPhyParam {
private List<Tuple<String, Integer>> zkAddressList;
private ZKConfig zkConfig;
public ZookeeperParam(Long clusterPhyId, List<Tuple<String, Integer>> zkAddressList, ZKConfig zkConfig) {
super(clusterPhyId);
this.zkAddressList = zkAddressList;
this.zkConfig = zkConfig;
}
}

View File

@@ -1,6 +1,5 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.reassign;
import com.xiaojukeji.know.streaming.km.common.utils.CommonUtils;
import lombok.Data;
import org.apache.kafka.common.TopicPartition;
@@ -20,10 +19,4 @@ public class ReassignResult {
return state.isDone();
}
public boolean checkPreferredReplicaElectionUnNeed(String reassignBrokerIds, String originalBrokerIds) {
Integer targetLeader = CommonUtils.string2IntList(reassignBrokerIds).get(0);
Integer originalLeader = CommonUtils.string2IntList(originalBrokerIds).get(0);
return originalLeader.equals(targetLeader);
}
}

View File

@@ -56,7 +56,6 @@ public enum ResultStatus {
KAFKA_OPERATE_FAILED(8010, "Kafka operation failed"),
MYSQL_OPERATE_FAILED(8020, "MySQL operation failed"),
ZK_OPERATE_FAILED(8030, "ZK operation failed"),
ZK_FOUR_LETTER_CMD_FORBIDDEN(8031, "ZK four-letter command forbidden"),
ES_OPERATE_ERROR(8040, "ES operation failed"),
HTTP_REQ_ERROR(8050, "third-party HTTP request error"),

View File

@@ -23,8 +23,6 @@ public class VersionMetricControlItem extends VersionControlItem{
public static final String CATEGORY_PERFORMANCE = "Performance";
public static final String CATEGORY_FLOW = "Flow";
public static final String CATEGORY_CLIENT = "Client";
/**
* Metric unit name; absent for non-metric items
*/

View File

@@ -1,22 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper;
import com.xiaojukeji.know.streaming.km.common.utils.Tuple;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import org.apache.zookeeper.data.Stat;
@Data
public class Znode {
@ApiModelProperty(value = "节点名称", example = "broker")
private String name;
@ApiModelProperty(value = "节点数据", example = "saassad")
private String data;
@ApiModelProperty(value = "节点属性", example = "")
private Stat stat;
@ApiModelProperty(value = "节点路径", example = "")
private String namespace;
}

View File

@@ -1,42 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper;
import com.xiaojukeji.know.streaming.km.common.bean.entity.BaseEntity;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import lombok.Data;
@Data
public class ZookeeperInfo extends BaseEntity {
/**
* Cluster id
*/
private Long clusterPhyId;
/**
* Host
*/
private String host;
/**
* Port
*/
private Integer port;
/**
* Role
*/
private String role;
/**
* Version
*/
private String version;
/**
* ZK status
*/
private Integer status;
public boolean alive() {
return !(Constant.DOWN.equals(status));
}
}

View File

@@ -1,9 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.fourletterword;
import java.io.Serializable;
/**
* Base class for four-letter-word command result data
*/
public class BaseFourLetterWordCmdData implements Serializable {
}

View File

@@ -1,38 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.fourletterword;
import lombok.Data;
/**
* clientPort=2183
* dataDir=/data1/data/zkData2/version-2
* dataLogDir=/data1/data/zkLog2/version-2
* tickTime=2000
* maxClientCnxns=60
* minSessionTimeout=4000
* maxSessionTimeout=40000
* serverId=2
* initLimit=15
* syncLimit=10
* electionAlg=3
* electionPort=4445
* quorumPort=4444
* peerType=0
*/
@Data
public class ConfigCmdData extends BaseFourLetterWordCmdData {
private Long clientPort;
private String dataDir;
private String dataLogDir;
private Long tickTime;
private Long maxClientCnxns;
private Long minSessionTimeout;
private Long maxSessionTimeout;
private Integer serverId;
private String initLimit;
private Long syncLimit;
private Long electionAlg;
private Long electionPort;
private Long quorumPort;
private Long peerType;
}

View File

@@ -1,39 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.fourletterword;
import lombok.Data;
/**
* zk_version 3.4.6-1569965, built on 02/20/2014 09:09 GMT
* zk_avg_latency 0
* zk_max_latency 399
* zk_min_latency 0
* zk_packets_received 234857
* zk_packets_sent 234860
* zk_num_alive_connections 4
* zk_outstanding_requests 0
* zk_server_state follower
* zk_znode_count 35566
* zk_watch_count 39
* zk_ephemerals_count 10
* zk_approximate_data_size 3356708
* zk_open_file_descriptor_count 35
* zk_max_file_descriptor_count 819200
*/
@Data
public class MonitorCmdData extends BaseFourLetterWordCmdData {
private String zkVersion;
private Float zkAvgLatency;
private Long zkMaxLatency;
private Long zkMinLatency;
private Long zkPacketsReceived;
private Long zkPacketsSent;
private Long zkNumAliveConnections;
private Long zkOutstandingRequests;
private String zkServerState;
private Long zkZnodeCount;
private Long zkWatchCount;
private Long zkEphemeralsCount;
private Long zkApproximateDataSize;
private Long zkOpenFileDescriptorCount;
private Long zkMaxFileDescriptorCount;
}

View File

@@ -1,30 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.fourletterword;
import lombok.Data;
/**
* Zookeeper version: 3.5.9-83df9301aa5c2a5d284a9940177808c01bc35cef, built on 01/06/2021 19:49 GMT
* Latency min/avg/max: 0/0/2209
* Received: 278202469
* Sent: 279449055
* Connections: 31
* Outstanding: 0
* Zxid: 0x20033fc12
* Mode: leader
* Node count: 10084
* Proposal sizes last/min/max: 36/32/31260 (leader only)
*/
@Data
public class ServerCmdData extends BaseFourLetterWordCmdData {
private String zkVersion;
private Float zkAvgLatency;
private Long zkMaxLatency;
private Long zkMinLatency;
private Long zkPacketsReceived;
private Long zkPacketsSent;
private Long zkNumAliveConnections;
private Long zkOutstandingRequests;
private String zkServerState;
private Long zkZnodeCount;
private Long zkZxid;
}

View File

@@ -1,116 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.fourletterword.parser;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.fourletterword.ConfigCmdData;
import com.xiaojukeji.know.streaming.km.common.utils.zookeeper.FourLetterWordUtil;
import lombok.Data;
import java.util.HashMap;
import java.util.Map;
/**
* clientPort=2183
* dataDir=/data1/data/zkData2/version-2
* dataLogDir=/data1/data/zkLog2/version-2
* tickTime=2000
* maxClientCnxns=60
* minSessionTimeout=4000
* maxSessionTimeout=40000
* serverId=2
* initLimit=15
* syncLimit=10
* electionAlg=3
* electionPort=4445
* quorumPort=4444
* peerType=0
*/
@Data
public class ConfigCmdDataParser implements FourLetterWordDataParser<ConfigCmdData> {
private static final ILog LOGGER = LogFactory.getLog(ConfigCmdDataParser.class);
private Result<ConfigCmdData> dataResult = null;
@Override
public String getCmd() {
return FourLetterWordUtil.ConfigCmd;
}
@Override
public ConfigCmdData parseAndInitData(Long clusterPhyId, String host, int port, String cmdData) {
Map<String, String> dataMap = new HashMap<>();
for (String elem : cmdData.split("\n")) {
if (elem.isEmpty()) {
continue;
}
int idx = elem.indexOf('=');
if (idx >= 0) {
dataMap.put(elem.substring(0, idx), elem.substring(idx + 1).trim());
}
}
ConfigCmdData configCmdData = new ConfigCmdData();
dataMap.entrySet().stream().forEach(elem -> {
try {
switch (elem.getKey()) {
case "clientPort":
configCmdData.setClientPort(Long.valueOf(elem.getValue()));
break;
case "dataDir":
configCmdData.setDataDir(elem.getValue());
break;
case "dataLogDir":
configCmdData.setDataLogDir(elem.getValue());
break;
case "tickTime":
configCmdData.setTickTime(Long.valueOf(elem.getValue()));
break;
case "maxClientCnxns":
configCmdData.setMaxClientCnxns(Long.valueOf(elem.getValue()));
break;
case "minSessionTimeout":
configCmdData.setMinSessionTimeout(Long.valueOf(elem.getValue()));
break;
case "maxSessionTimeout":
configCmdData.setMaxSessionTimeout(Long.valueOf(elem.getValue()));
break;
case "serverId":
configCmdData.setServerId(Integer.valueOf(elem.getValue()));
break;
case "initLimit":
configCmdData.setInitLimit(elem.getValue());
break;
case "syncLimit":
configCmdData.setSyncLimit(Long.valueOf(elem.getValue()));
break;
case "electionAlg":
configCmdData.setElectionAlg(Long.valueOf(elem.getValue()));
break;
case "electionPort":
configCmdData.setElectionPort(Long.valueOf(elem.getValue()));
break;
case "quorumPort":
configCmdData.setQuorumPort(Long.valueOf(elem.getValue()));
break;
case "peerType":
configCmdData.setPeerType(Long.valueOf(elem.getValue()));
break;
default:
LOGGER.warn(
"class=ConfigCmdDataParser||method=parseAndInitData||name={}||value={}||msg=data not parsed!",
elem.getKey(), elem.getValue()
);
}
} catch (Exception e) {
LOGGER.error(
"class=ConfigCmdDataParser||method=parseAndInitData||clusterPhyId={}||host={}||port={}||name={}||value={}||errMsg=exception!",
clusterPhyId, host, port, elem.getKey(), elem.getValue(), e
);
}
});
return configCmdData;
}
}
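
The parser only consumes the raw text of ZooKeeper's conf four-letter command; fetching that text is outside this class. A hedged fetch-and-parse sketch over a plain socket (host, port and the clusterPhyId of 1L are placeholders; on the server side the command must be allowed via 4lw.commands.whitelist, cf. the ZK_FOUR_LETTER_CMD_FORBIDDEN status in ResultStatus above):

import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public static ConfigCmdData fetchAndParseConf(String host, int port) throws Exception {
    try (Socket socket = new Socket(host, port)) {
        socket.getOutputStream().write("conf".getBytes(StandardCharsets.UTF_8));
        socket.shutdownOutput();                        // signal end of request
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        InputStream in = socket.getInputStream();
        byte[] chunk = new byte[4096];
        int n;
        while ((n = in.read(chunk)) != -1) {
            buf.write(chunk, 0, n);
        }
        String raw = buf.toString(StandardCharsets.UTF_8.name());
        return new ConfigCmdDataParser().parseAndInitData(1L, host, port, raw);
    }
}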

View File

@@ -1,10 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.fourletterword.parser;
/**
* Parser interface for four-letter-word command results
*/
public interface FourLetterWordDataParser<T> {
String getCmd();
T parseAndInitData(Long clusterPhyId, String host, int port, String cmdData);
}

View File

@@ -1,117 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.fourletterword.parser;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.fourletterword.MonitorCmdData;
import com.xiaojukeji.know.streaming.km.common.utils.zookeeper.FourLetterWordUtil;
import lombok.Data;
import java.util.HashMap;
import java.util.Map;
/**
* zk_version 3.4.6-1569965, built on 02/20/2014 09:09 GMT
* zk_avg_latency 0
* zk_max_latency 399
* zk_min_latency 0
* zk_packets_received 234857
* zk_packets_sent 234860
* zk_num_alive_connections 4
* zk_outstanding_requests 0
* zk_server_state follower
* zk_znode_count 35566
* zk_watch_count 39
* zk_ephemerals_count 10
* zk_approximate_data_size 3356708
* zk_open_file_descriptor_count 35
* zk_max_file_descriptor_count 819200
*/
@Data
public class MonitorCmdDataParser implements FourLetterWordDataParser<MonitorCmdData> {
private static final ILog LOGGER = LogFactory.getLog(MonitorCmdDataParser.class);
@Override
public String getCmd() {
return FourLetterWordUtil.MonitorCmd;
}
@Override
public MonitorCmdData parseAndInitData(Long clusterPhyId, String host, int port, String cmdData) {
Map<String, String> dataMap = new HashMap<>();
for (String elem : cmdData.split("\n")) {
if (elem.isEmpty()) {
continue;
}
int idx = elem.indexOf('\t');
if (idx >= 0) {
dataMap.put(elem.substring(0, idx), elem.substring(idx + 1).trim());
}
}
MonitorCmdData monitorCmdData = new MonitorCmdData();
dataMap.entrySet().stream().forEach(elem -> {
try {
switch (elem.getKey()) {
case "zk_version":
monitorCmdData.setZkVersion(elem.getValue().split("-")[0]);
break;
case "zk_avg_latency":
monitorCmdData.setZkAvgLatency(Float.valueOf(elem.getValue()));
break;
case "zk_max_latency":
monitorCmdData.setZkMaxLatency(Long.valueOf(elem.getValue()));
break;
case "zk_min_latency":
monitorCmdData.setZkMinLatency(Long.valueOf(elem.getValue()));
break;
case "zk_packets_received":
monitorCmdData.setZkPacketsReceived(Long.valueOf(elem.getValue()));
break;
case "zk_packets_sent":
monitorCmdData.setZkPacketsSent(Long.valueOf(elem.getValue()));
break;
case "zk_num_alive_connections":
monitorCmdData.setZkNumAliveConnections(Long.valueOf(elem.getValue()));
break;
case "zk_outstanding_requests":
monitorCmdData.setZkOutstandingRequests(Long.valueOf(elem.getValue()));
break;
case "zk_server_state":
monitorCmdData.setZkServerState(elem.getValue());
break;
case "zk_znode_count":
monitorCmdData.setZkZnodeCount(Long.valueOf(elem.getValue()));
break;
case "zk_watch_count":
monitorCmdData.setZkWatchCount(Long.valueOf(elem.getValue()));
break;
case "zk_ephemerals_count":
monitorCmdData.setZkEphemeralsCount(Long.valueOf(elem.getValue()));
break;
case "zk_approximate_data_size":
monitorCmdData.setZkApproximateDataSize(Long.valueOf(elem.getValue()));
break;
case "zk_open_file_descriptor_count":
monitorCmdData.setZkOpenFileDescriptorCount(Long.valueOf(elem.getValue()));
break;
case "zk_max_file_descriptor_count":
monitorCmdData.setZkMaxFileDescriptorCount(Long.valueOf(elem.getValue()));
break;
default:
LOGGER.warn(
"class=MonitorCmdDataParser||method=parseAndInitData||name={}||value={}||msg=data not parsed!",
elem.getKey(), elem.getValue()
);
}
} catch (Exception e) {
LOGGER.error(
"class=MonitorCmdDataParser||method=parseAndInitData||clusterPhyId={}||host={}||port={}||name={}||value={}||errMsg=exception!",
clusterPhyId, host, port, elem.getKey(), elem.getValue(), e
);
}
});
return monitorCmdData;
}
}
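
Since mntr output is tab-separated, the parser splits each line on the first '\t'. A self-contained sketch using the sample payload from the class comment (clusterPhyId, host and port are illustrative):

String raw = "zk_version\t3.4.6-1569965, built on 02/20/2014 09:09 GMT\n"
        + "zk_avg_latency\t0\n"
        + "zk_server_state\tfollower\n"
        + "zk_znode_count\t35566\n";
MonitorCmdData data = new MonitorCmdDataParser().parseAndInitData(1L, "127.0.0.1", 2181, raw);
data.getZkVersion();      // "3.4.6" -- the build suffix after '-' is dropped
data.getZkServerState();  // "follower"
data.getZkZnodeCount();   // 35566L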

View File

@@ -1,97 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.fourletterword.parser;
import com.didiglobal.logi.log.ILog;
import com.didiglobal.logi.log.LogFactory;
import com.xiaojukeji.know.streaming.km.common.bean.entity.zookeeper.fourletterword.ServerCmdData;
import com.xiaojukeji.know.streaming.km.common.utils.zookeeper.FourLetterWordUtil;
import lombok.Data;
import java.util.HashMap;
import java.util.Map;
/**
* Zookeeper version: 3.5.9-83df9301aa5c2a5d284a9940177808c01bc35cef, built on 01/06/2021 19:49 GMT
* Latency min/avg/max: 0/0/2209
* Received: 278202469
* Sent: 279449055
* Connections: 31
* Outstanding: 0
* Zxid: 0x20033fc12
* Mode: leader
* Node count: 10084
* Proposal sizes last/min/max: 36/32/31260 (leader only)
*/
@Data
public class ServerCmdDataParser implements FourLetterWordDataParser<ServerCmdData> {
private static final ILog LOGGER = LogFactory.getLog(ServerCmdDataParser.class);
@Override
public String getCmd() {
return FourLetterWordUtil.ServerCmd;
}
@Override
public ServerCmdData parseAndInitData(Long clusterPhyId, String host, int port, String cmdData) {
Map<String, String> dataMap = new HashMap<>();
for (String elem : cmdData.split("\n")) {
if (elem.isEmpty()) {
continue;
}
int idx = elem.indexOf(':');
if (idx >= 0) {
dataMap.put(elem.substring(0, idx), elem.substring(idx + 1).trim());
}
}
ServerCmdData serverCmdData = new ServerCmdData();
dataMap.entrySet().stream().forEach(elem -> {
try {
switch (elem.getKey()) {
case "Zookeeper version":
serverCmdData.setZkVersion(elem.getValue().split("-")[0]);
break;
case "Latency min/avg/max":
String[] data = elem.getValue().split("/");
serverCmdData.setZkMinLatency(Long.valueOf(data[0]));
serverCmdData.setZkAvgLatency(Float.valueOf(data[1]));
serverCmdData.setZkMaxLatency(Long.valueOf(data[2]));
break;
case "Received":
serverCmdData.setZkPacketsReceived(Long.valueOf(elem.getValue()));
break;
case "Sent":
serverCmdData.setZkPacketsSent(Long.valueOf(elem.getValue()));
break;
case "Connections":
serverCmdData.setZkNumAliveConnections(Long.valueOf(elem.getValue()));
break;
case "Outstanding":
serverCmdData.setZkOutstandingRequests(Long.valueOf(elem.getValue()));
break;
case "Mode":
serverCmdData.setZkServerState(elem.getValue());
break;
case "Node count":
serverCmdData.setZkZnodeCount(Long.valueOf(elem.getValue()));
break;
case "Zxid":
serverCmdData.setZkZxid(Long.parseUnsignedLong(elem.getValue().trim().substring(2), 16));
break;
default:
LOGGER.warn(
"class=ServerCmdDataParser||method=parseAndInitData||name={}||value={}||msg=data not parsed!",
elem.getKey(), elem.getValue()
);
}
} catch (Exception e) {
LOGGER.error(
"class=ServerCmdDataParser||method=parseAndInitData||clusterPhyId={}||host={}||port={}||name={}||value={}||errMsg=exception!",
clusterPhyId, host, port, elem.getKey(), elem.getValue(), e
);
}
});
return serverCmdData;
}
}
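
One subtlety above: srvr prints the zxid as 0x-prefixed hex, so the parser strips the first two characters and uses parseUnsignedLong so zxids with the high bit set still parse. In isolation:

// "Zxid: 0x20033fc12" -> value "0x20033fc12" -> strip "0x", parse base 16
long zxid = Long.parseUnsignedLong("0x20033fc12".substring(2), 16);  // 8593341458L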

View File

@@ -8,6 +8,8 @@ import org.springframework.context.ApplicationEvent;
*/
@Getter
public class BaseMetricEvent extends ApplicationEvent {
public BaseMetricEvent(Object source) {
super( source );
}

View File

@@ -1,20 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.event.metric;
import com.xiaojukeji.know.streaming.km.common.bean.entity.metrics.ZookeeperMetrics;
import lombok.Getter;
import java.util.List;
/**
* @author didi
*/
@Getter
public class ZookeeperMetricEvent extends BaseMetricEvent {
private List<ZookeeperMetrics> zookeeperMetrics;
public ZookeeperMetricEvent(Object source, List<ZookeeperMetrics> zookeeperMetrics) {
super( source );
this.zookeeperMetrics = zookeeperMetrics;
}
}

View File

@@ -41,11 +41,6 @@ public class ClusterPhyPO extends BasePO {
*/
private String jmxProperties;
/**
* ZK configuration
*/
private String zkProperties;
/**
* Authentication type
* @see com.xiaojukeji.know.streaming.km.common.enums.cluster.ClusterAuthTypeEnum

View File

@@ -3,6 +3,7 @@ package com.xiaojukeji.know.streaming.km.common.bean.po.group;
import com.baomidou.mybatisplus.annotation.TableName;
import com.xiaojukeji.know.streaming.km.common.bean.po.BasePO;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.enums.group.GroupStateEnum;
import lombok.Data;
import lombok.NoArgsConstructor;
@@ -22,19 +23,12 @@ public class GroupMemberPO extends BasePO {
private Integer memberCount;
public GroupMemberPO(Long clusterPhyId, String topicName, String groupName, String state, Integer memberCount) {
public GroupMemberPO(Long clusterPhyId, String topicName, String groupName, Date updateTime) {
this.clusterPhyId = clusterPhyId;
this.topicName = topicName;
this.groupName = groupName;
this.state = state;
this.memberCount = memberCount;
}
public GroupMemberPO(Long clusterPhyId, String topicName, String groupName, String state, Integer memberCount, Date updateTime) {
this.clusterPhyId = clusterPhyId;
this.topicName = topicName;
this.groupName = groupName;
this.state = state;
this.memberCount = memberCount;
this.state = GroupStateEnum.UNKNOWN.getState();
this.memberCount = 0;
this.updateTime = updateTime;
}
}

View File

@@ -1,61 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.po.group;
import com.baomidou.mybatisplus.annotation.TableName;
import com.xiaojukeji.know.streaming.km.common.bean.po.BasePO;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.enums.group.GroupStateEnum;
import com.xiaojukeji.know.streaming.km.common.enums.group.GroupTypeEnum;
import lombok.Data;
import lombok.NoArgsConstructor;
@Data
@NoArgsConstructor
@TableName(Constant.MYSQL_TABLE_NAME_PREFIX + "group")
public class GroupPO extends BasePO {
/**
* Cluster id
*/
private Long clusterPhyId;
/**
* Group type
*
* @see GroupTypeEnum
*/
private Integer type;
/**
* Group name
*/
private String name;
/**
* Group state
*
* @see GroupStateEnum
*/
private String state;
/**
* Number of group members
*/
private Integer memberCount;
/**
* Topics consumed by the group
*/
private String topicMembers;
/**
* Group partition assignor
*/
private String partitionAssignor;
/**
* Group coordinator brokerId
*/
private int coordinatorId;
}

View File

@@ -1,24 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.po.metrice;
import lombok.Data;
import lombok.NoArgsConstructor;
import static com.xiaojukeji.know.streaming.km.common.utils.CommonUtils.monitorTimestamp2min;
@Data
@NoArgsConstructor
public class ZookeeperMetricPO extends BaseMetricESPO {
public ZookeeperMetricPO(Long clusterPhyId){
super(clusterPhyId);
}
@Override
public String getKey() {
return "ZK@" + clusterPhyId + "@" + monitorTimestamp2min(timestamp);
}
@Override
public String getRoutingValue() {
return String.valueOf(clusterPhyId);
}
}

View File

@@ -1,40 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.po.zookeeper;
import com.baomidou.mybatisplus.annotation.TableName;
import com.xiaojukeji.know.streaming.km.common.bean.po.BasePO;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import lombok.Data;
@Data
@TableName(Constant.MYSQL_TABLE_NAME_PREFIX + "zookeeper")
public class ZookeeperInfoPO extends BasePO {
/**
* Cluster id
*/
private Long clusterPhyId;
/**
* Host
*/
private String host;
/**
* Port
*/
private Integer port;
/**
* Role
*/
private String role;
/**
* Version
*/
private String version;
/**
* ZK status
*/
private Integer status;
}

View File

@@ -31,15 +31,9 @@ public class ClusterPhyBaseVO extends BaseTimeVO {
@ApiModelProperty(value="Jmx配置", example = "{}")
protected String jmxProperties;
@ApiModelProperty(value="ZK配置", example = "{}")
protected String zkProperties;
@ApiModelProperty(value="描述", example = "测试")
protected String description;
@ApiModelProperty(value="集群的kafka版本", example = "2.5.1")
protected String kafkaVersion;
@ApiModelProperty(value="集群的运行模式", example = "2raft模式其他是ZK模式")
private Integer runState;
}

View File

@@ -1,32 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.vo.cluster;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
/**
* @author zengqiao
* @date 22/02/24
*/
@Data
@ApiModel(description = "Cluster health-state summary")
public class ClusterPhysHealthStateVO {
@ApiModelProperty(value = "Unknown", example = "30")
private Integer unknownCount;
@ApiModelProperty(value = "Good", example = "30")
private Integer goodCount;
@ApiModelProperty(value = "Medium", example = "30")
private Integer mediumCount;
@ApiModelProperty(value = "Poor", example = "30")
private Integer poorCount;
@ApiModelProperty(value = "Down", example = "30")
private Integer deadCount;
@ApiModelProperty(value = "Total", example = "150")
private Integer total;
}

View File

@@ -31,9 +31,6 @@ public class ClusterBrokersOverviewVO extends BrokerMetadataVO {
@ApiModelProperty(value = "jmx端口")
private Integer jmxPort;
@ApiModelProperty(value = "jmx连接状态 true:连接成功 false:连接失败")
private Boolean jmxConnected;
@ApiModelProperty(value = "是否存活 true存活 false不存活")
private Boolean alive;
}

View File

@@ -14,7 +14,4 @@ import lombok.NoArgsConstructor;
public class UserMetricConfigVO extends VersionItemVO {
@ApiModelProperty(value = "该指标用户是否设置展现", example = "true")
private Boolean set;
@ApiModelProperty(value = "该指标展示优先级", example = "1")
private Integer rank;
}

View File

@@ -1,27 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.vo.group;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import java.util.List;
/**
* @author wyb
* @date 2022/10/9
*/
@Data
@ApiModel(value = "Group info")
public class GroupOverviewVO {
@ApiModelProperty(value = "Group name", example = "group-know-streaming-test")
private String name;
@ApiModelProperty(value = "Group state", example = "Empty")
private String state;
@ApiModelProperty(value = "Number of group members", example = "12")
private Integer memberCount;
@ApiModelProperty(value = "Topic list", example = "[topic1,topic2]")
private List<String> topicNameList;
}

View File

@@ -10,7 +10,7 @@ import lombok.Data;
*/
@Data
@ApiModel(value = "GroupTopic信息")
public class GroupTopicOverviewVO extends GroupTopicBasicVO {
public class GroupTopicOverviewVO extends GroupTopicBasicVO{
@ApiModelProperty(value = "最大Lag", example = "12345678")
private Long maxLag;
}

View File

@@ -32,6 +32,9 @@ public class HealthCheckConfigVO {
@ApiModelProperty(value="检查说明", example = "Group延迟")
private String configDesc;
@ApiModelProperty(value="权重", example = "10")
private Float weight;
@ApiModelProperty(value="检查配置", example = "100")
private String value;
}

View File

@@ -18,9 +18,6 @@ public class HealthScoreBaseResultVO extends BaseTimeVO {
@ApiModelProperty(value="检查维度", example = "1")
private Integer dimension;
@ApiModelProperty(value="检查维度名称", example = "cluster")
private String dimensionName;
@ApiModelProperty(value="检查名称", example = "Group延迟")
private String configName;
@@ -30,6 +27,9 @@ public class HealthScoreBaseResultVO extends BaseTimeVO {
@ApiModelProperty(value="检查说明", example = "Group延迟")
private String configDesc;
@ApiModelProperty(value="权重百分比[0-100]", example = "10")
private Integer weightPercent;
@ApiModelProperty(value="得分", example = "100")
private Integer score;

View File

@@ -1,12 +1,16 @@
package com.xiaojukeji.know.streaming.km.common.bean.vo.metrics.line;
import com.xiaojukeji.know.streaming.km.common.bean.vo.metrics.point.MetricPointVO;
import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;
/**
* @author didi
@@ -22,4 +26,19 @@ public class MetricMultiLinesVO {
@ApiModelProperty(value = "指标名称对应的指标线")
private List<MetricLineVO> metricLines;
public List<MetricPointVO> getMetricPoints(String resName) {
if (ValidateUtils.isNull(metricLines)) {
return new ArrayList<>();
}
List<MetricLineVO> voList = metricLines.stream().filter(elem -> elem.getName().equals(resName)).collect(Collectors.toList());
if (ValidateUtils.isEmptyList(voList)) {
return new ArrayList<>();
}
// only take the line at idx = 0
return voList.get(0).getMetricPoints();
}
}
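
The getMetricPoints helper added above never returns null and silently takes the first line when several share a name. A minimal usage sketch (the VO instance and the line name come from the metrics query layer and are placeholders here):

List<MetricPointVO> points = multiLinesVO.getMetricPoints("broker-1");  // empty list when the name is absent
for (MetricPointVO point : points) {
    // plot or aggregate the single selected series
}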

View File

@@ -1,29 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.vo.zookeeper;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
/**
* @author wyc
* @date 2022/9/23
*/
@Data
@ApiModel(description = "Zookeeper overview")
public class ClusterZookeepersOverviewVO {
@ApiModelProperty(value = "Host IP", example = "121.0.0.1")
private String host;
@ApiModelProperty(value = "Host liveness: 1 = Live, 0 = Down", example = "1")
private Integer status;
@ApiModelProperty(value = "Port", example = "2416")
private Integer port;
@ApiModelProperty(value = "Version", example = "1.1.2")
private String version;
@ApiModelProperty(value = "Role", example = "Leader")
private String role;
}

View File

@@ -1,47 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.vo.zookeeper;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
/**
* @author wyc
* @date 2022/9/23
*/
@Data
@ApiModel(description = "ZK state info")
public class ClusterZookeepersStateVO {
@ApiModelProperty(value = "Health check state", example = "1")
private Integer healthState;
@ApiModelProperty(value = "Health checks passed", example = "1")
private Integer healthCheckPassed;
@ApiModelProperty(value = "Health checks total", example = "1")
private Integer healthCheckTotal;
@ApiModelProperty(value = "ZK leader host", example = "127.0.0.1")
private String leaderNode;
@ApiModelProperty(value = "Watch count", example = "123456")
private Integer watchCount;
@ApiModelProperty(value = "Alive server count", example = "8")
private Integer aliveServerCount;
@ApiModelProperty(value = "Total server count", example = "10")
private Integer totalServerCount;
@ApiModelProperty(value = "Alive follower count", example = "8")
private Integer aliveFollowerCount;
@ApiModelProperty(value = "Total follower count", example = "10")
private Integer totalFollowerCount;
@ApiModelProperty(value = "Alive observer count", example = "3")
private Integer aliveObserverCount;
@ApiModelProperty(value = "Total observer count", example = "3")
private Integer totalObserverCount;
}

View File

@@ -1,44 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.vo.zookeeper;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
/**
* @author wyc
* @date 2022/9/23
*/
@Data
public class ZnodeStatVO {
@ApiModelProperty(value = "节点被创建时的事物的ID", example = "0x1f09")
private Long czxid;
@ApiModelProperty(value = "创建时间", example = "Sat Mar 16 15:38:34 CST 2019")
private Long ctime;
@ApiModelProperty(value = "节点最后一次被修改时的事物的ID", example = "0x1f09")
private Long mzxid;
@ApiModelProperty(value = "最后一次修改时间", example = "Sat Mar 16 15:38:34 CST 2019")
private Long mtime;
@ApiModelProperty(value = "子节点列表最近一次呗修改的事物ID", example = "0x31")
private Long pzxid;
@ApiModelProperty(value = "子节点版本号", example = "0")
private Integer cversion;
@ApiModelProperty(value = "数据版本号", example = "0")
private Integer version;
@ApiModelProperty(value = "ACL版本号", example = "0")
private Integer aversion;
@ApiModelProperty(value = "创建临时节点的事物ID持久节点事物为0", example = "0")
private Long ephemeralOwner;
@ApiModelProperty(value = "数据长度,每个节点都可保存数据", example = "22")
private Integer dataLength;
@ApiModelProperty(value = "子节点的个数", example = "6")
private Integer numChildren;
}

View File

@@ -1,25 +0,0 @@
package com.xiaojukeji.know.streaming.km.common.bean.vo.zookeeper;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
/**
* @author wyc
* @date 2022/9/23
*/
@Data
public class ZnodeVO {
@ApiModelProperty(value = "节点名称", example = "broker")
private String name;
@ApiModelProperty(value = "节点数据", example = "saassad")
private String data;
@ApiModelProperty(value = "节点属性", example = "")
private ZnodeStatVO stat;
@ApiModelProperty(value = "节点路径", example = "/cluster")
private String namespace;
}

Some files were not shown because too many files have changed in this diff.