Compare commits

...

26 Commits

Author SHA1 Message Date
Ve Lee
0388612a54 Merge main branch (#1241) 2024-10-12 20:26:14 +08:00
Ve Lee
4c10b4ce9c Fix node download url (#1240) 2024-10-12 19:59:58 +08:00
liwei
e5836dc29f Fix node download url 2024-10-12 19:44:18 +08:00
Ve Lee
99e086c1c5 [Bugfix] Fix Topic-level metric query (#1239)
1: Topic-level queries of BytesIn and BytesOut should use the sum aggregation type, not avg.
2: The getAggListMetrics DSL needs the additional condition brokerAgg = 1.
2024-10-12 14:37:56 +08:00
ruanliang-hualun
a4085adf10 [Bugfix] Fix Topic-level metric query 2024-10-07 20:51:22 +08:00
Peng
bfc6999c93 Update README.md 2024-08-23 15:38:04 +08:00
chang-wd
260cbb92d2 [Feature] Support filtering consumed messages by key or value alone, not both (#1157)
close #1155

Support filtering consumed messages by key or value alone, rather than requiring both.

---------

Co-authored-by: weidong_chang <weidong_chang@intsig.net>
2024-06-30 22:56:36 +08:00
Peng
232f06e5c2 Update README.md 2024-06-25 17:19:25 +08:00
jiangminbing
fcf0a08e0a [Bugfix] Fix BrokerConfigServiceImpl.getBrokerConfigByZKClient always returning an empty result (#1198)
Fix the empty list returned when fetching broker configs from ZooKeeper

Co-authored-by: jiangmb <jiangmb@televehicle.com>
2024-01-06 16:40:11 +08:00
fang
68839a6725 [DOC] Add documentation for storing and using the MySQL password in encrypted form (#1135) 2023-12-10 01:15:46 +08:00
ZQKC
2390ae8941 Bump version to 3.4.0 2023-12-03 15:21:49 +08:00
EricZeng
4ae34d0030 Merge enterprise-edition development branch (#1206) 2023-12-03 14:32:51 +08:00
EricZeng
95bce89ce5 Merge master branch (#1205) 2023-12-03 14:31:47 +08:00
erge
49d3d078d3 Merge main branch (#1199) (#1201)
Please do not create a Pull Request without first creating an Issue.

## What is the purpose of the change

XXXXX

## Brief changelog

- [Bugfix] Fix the reset-offset API being called too many times
- [Bugfix] Fix consumer-group offset resets reporting success while the frontend does not refresh and the offset shows no change
- [Optimize] Control real-time data refresh on the consumer-group detail page

## Verifying this change

XXXX

Please follow this checklist to help us integrate your contribution quickly and easily:

* [ ] One PR (short for Pull Request) solves exactly one problem; a PR that solves multiple problems is not allowed;
* [ ] Make sure the PR has a corresponding Issue (usually created before you start working on it), unless it is a trivial change such as a typo that needs no Issue;
* [ ] Format the title and content of the PR and the Commit-Log, e.g. #861. PS: the Commit-Log has to be written when you commit in Git; it cannot be modified on GitHub afterwards;
* [ ] Write a PR description detailed enough to understand what the PR does, how, and why;
* [ ] Write the unit tests necessary to verify your logic correction. If a new feature or significant change is submitted, remember to add an integration-test in the test module;
* [ ] Make sure compilation passes and the integration tests pass;

2023-11-30 21:56:42 +08:00
EricZeng
2339a6f0cd Merge main branch for testing (#1197) 2023-11-27 21:08:53 +08:00
EricZeng
2744f5b6dd Verify that the features work correctly (#1193) 2023-11-27 13:51:21 +08:00
qiao.zeng
6e9dc4f807 Merge branch 'fix_1043' into ve_3.x_dev 2023-11-12 15:31:18 +08:00
qiao.zeng
a8be274ca6 Merge master branch 2023-11-12 15:30:08 +08:00
qiao.zeng
251f7f7110 [Bugfix] Fix topic truncate not taking effect 2023-11-12 15:06:10 +08:00
ZQKC
b1aa12bfa5 Merge Master branch 2023-07-07 13:09:28 +08:00
zhaoli
9f6882cf0d [Bugfix] Ignore ElectionNotNeededException during leader re-election and return success 2023-04-03 11:49:06 +08:00
ZQKC
d3cc0cb687 [Bugfix] Fix the ES password not taking effect in the Balance feature (#992) 2023-04-02 20:30:19 +08:00
zengqiao
77b87f1dbe Upgrade to enterprise edition 3.3.0 2023-02-24 17:52:27 +08:00
zengqiao
a82d7f594e Merge 3.3.0 enterprise-edition changes 2023-02-24 17:49:26 +08:00
zengqiao
cca7246281 Merge 3.3.0 branch 2023-02-24 17:13:50 +08:00
zengqiao
c56d8cfb0f Add rebalance / testing / license capabilities 2023-02-23 11:56:46 +08:00
147 changed files with 10647 additions and 77 deletions

View File

@@ -136,7 +136,7 @@
👍 We are building the largest and most authoritative **[Kafka Chinese community](https://z.didi.cn/5gSF9)** in China
Here you can meet Kafka experts from major internet companies, share knowledge with 4000+ Kafka enthusiasts, and keep up with the latest industry news in real time. We look forward 👏 &nbsp; to having you join~ https://z.didi.cn/5gSF9
Here you can meet Kafka experts from major internet companies, share knowledge with 6200+ Kafka enthusiasts, and keep up with the latest industry news in real time. We look forward 👏 &nbsp; to having you join~ https://z.didi.cn/5gSF9
Every question gets answered~ and there are perks for joining in~
@@ -146,7 +146,7 @@ PS: When asking a question, please try to describe the problem completely in one message and include your environment details
**`2. WeChat group`**
To join via WeChat: add the WeChat accounts `PenceXie` `szzdzhp001` and leave the note "KnowStreaming" to be added to the group.
To join via WeChat: add the WeChat account `PenceXie` and leave the note "KnowStreaming" to be added to the group.
<br/>
Before joining, please take a moment to click star; a small star is what motivates the KnowStreaming authors to keep building the community.

View File

@@ -0,0 +1,115 @@
## Manual: storing the MySQL password encrypted in the YML file
### 1. Encryption for local deployment
**Step 1: generate the ciphertext**
Find jasypt-1.9.3.jar in your local repository (by default under org/jasypt/jasypt/1.9.3) and generate the ciphertext with `java -cp`.
```bash
java -cp jasypt-1.9.3.jar org.jasypt.intf.cli.JasyptPBEStringEncryptionCLI input=<mysql-password> password=<encryption-salt> algorithm=PBEWithMD5AndDES
```
```bash
## the resulting ciphertext
DYbVDLg5D0WRcJSCUGWjiw==
```
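To sanity-check the result, jasypt also ships a matching decryption CLI; decrypting the ciphertext with the same salt and algorithm should print the original password (the salt value here is a placeholder):
```bash
java -cp jasypt-1.9.3.jar org.jasypt.intf.cli.JasyptPBEStringDecryptionCLI input=DYbVDLg5D0WRcJSCUGWjiw== password=<encryption-salt> algorithm=PBEWithMD5AndDES
```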
**Step 2: configure jasypt**
Configure jasypt in the YML file, for example:
```yaml
jasypt:
encryptor:
algorithm: PBEWithMD5AndDES
iv-generator-classname: org.jasypt.iv.NoIvGenerator
```
**Step 3: configure the ciphertext**
Replace the plaintext password in the YML file with ENC(ciphertext), e.g. the MySQL password in [application.yml](https://github.com/didi/KnowStreaming/blob/master/km-rest/src/main/resources/application.yml).
```yaml
know-streaming:
username: root
password: ENC(DYbVDLg5D0WRcJSCUGWjiw==)
```
**Step 4: configure the encryption salt (choose one of the following)**
- In the YML file (not recommended):
```yaml
jasypt:
encryptor:
password: salt
```
- As a command-line argument when starting the program:
```bash
java -jar xxx.jar --jasypt.encryptor.password=salt
```
- As an environment variable when starting the program:
```bash
export JASYPT_PASSWORD=salt
java -jar xxx.jar --jasypt.encryptor.password=${JASYPT_PASSWORD}
```
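Note that Spring Boot's relaxed binding can also pick the salt up straight from the environment, which keeps it out of the visible process argument list; this relies on standard Spring Boot property binding, assuming jasypt's Spring integration is on the classpath:
```bash
export JASYPT_ENCRYPTOR_PASSWORD=salt
java -jar xxx.jar
```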
### 2. Encryption for container deployment
Use the secret mechanism provided by docker swarm to store the password encrypted, and let docker swarm manage it.
#### 2.1 Encrypted storage with a swarm secret
**Step 1: initialize docker swarm**
```bash
docker swarm init
```
**Step 2: create the secret**
```bash
echo "admin2022_" | docker secret create mysql_password -
# the secret id is printed
f964wi4gg946hu78quxsh2ge9
```
**Step 3: use the secret**
```yaml
# MySQL user password
SERVER_MYSQL_USER: root
SERVER_MYSQL_PASSWORD: mysql_password
knowstreaming-mysql:
# root user password
MYSQL_ROOT_PASSWORD: mysql_password
secrets:
mysql_password:
external: true
```
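For context, a complete swarm stack file wiring the secret into the MySQL service might look roughly like the following; the service layout and image tag are illustrative, not copied from the KnowStreaming compose files (the official mysql image reads `*_FILE` variables and loads the value from the mounted secret):
```yaml
version: "3.7"
services:
  knowstreaming-mysql:
    image: mysql:5.7
    environment:
      # The official mysql image reads the root password from the file the secret is mounted at
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/mysql_password
    secrets:
      - mysql_password
secrets:
  mysql_password:
    external: true
```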
#### 2.2 Encryption using a secret file
**Step 1: create the secret file**
```bash
echo "admin2022_" > password
```
**Step 2: use the secret**
```yaml
# MySQL user password
SERVER_MYSQL_USER: root
SERVER_MYSQL_PASSWORD: mysql_password
secrets:
mysql_password:
file: ./password
```

View File

@@ -29,6 +29,11 @@
<artifactId>km-core</artifactId>
<version>${project.parent.version}</version>
</dependency>
<dependency>
<groupId>com.xiaojukeji.kafka</groupId>
<artifactId>km-rebalance</artifactId>
<version>${project.parent.version}</version>
</dependency>
<!-- spring -->
<dependency>

View File

@@ -15,6 +15,7 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.result.Result;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.ClusterPhyBaseVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.cluster.ClusterPhyDashboardVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.metrics.line.MetricMultiLinesVO;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.common.converter.ClusterVOConverter;
import com.xiaojukeji.know.streaming.km.common.enums.health.HealthStateEnum;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
@@ -24,6 +25,10 @@ import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterMetricService;
import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
import com.xiaojukeji.know.streaming.km.core.service.version.metrics.kafka.ClusterMetricVersionItems;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.Resource;
import com.xiaojukeji.know.streaming.km.rebalance.common.BalanceMetricConstant;
import com.xiaojukeji.know.streaming.km.rebalance.common.bean.entity.ClusterBalanceItemState;
import com.xiaojukeji.know.streaming.km.rebalance.core.service.ClusterBalanceService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
@@ -40,6 +45,9 @@ public class MultiClusterPhyManagerImpl implements MultiClusterPhyManager {
@Autowired
private ClusterMetricService clusterMetricService;
@Autowired
private ClusterBalanceService clusterBalanceService;
@Override
public ClusterPhysState getClusterPhysState() {
List<ClusterPhy> clusterPhyList = clusterPhyService.listAllClusters();
@@ -153,6 +161,11 @@ public class MultiClusterPhyManagerImpl implements MultiClusterPhyManager {
ClusterMetrics clusterMetrics = clusterMetricService.getLatestMetricsFromCache(vo.getId());
clusterMetrics.getMetrics().putIfAbsent(ClusterMetricVersionItems.CLUSTER_METRIC_HEALTH_STATE, (float) HealthStateEnum.UNKNOWN.getDimension());
Result<ClusterMetrics> balanceMetricsResult = this.getClusterLoadReBalanceInfo(vo.getId());
if (balanceMetricsResult.hasData()) {
clusterMetrics.putMetric(balanceMetricsResult.getData().getMetrics());
}
metricsList.add(clusterMetrics);
}
@@ -174,4 +187,21 @@ public class MultiClusterPhyManagerImpl implements MultiClusterPhyManager {
dto.setClusterPhyIds(clusterIdList);
return dto;
}
private Result<ClusterMetrics> getClusterLoadReBalanceInfo(Long clusterPhyId) {
Result<ClusterBalanceItemState> stateResult = clusterBalanceService.getItemStateFromCacheFirst(clusterPhyId);
if (stateResult.failed()) {
return Result.buildFromIgnoreData(stateResult);
}
ClusterBalanceItemState state = stateResult.getData();
ClusterMetrics metric = ClusterMetrics.initWithMetrics(clusterPhyId, BalanceMetricConstant.CLUSTER_METRIC_LOAD_RE_BALANCE_ENABLE, state.getEnable()? Constant.YES: Constant.NO);
metric.putMetric(BalanceMetricConstant.CLUSTER_METRIC_LOAD_RE_BALANCE_CPU, state.getResItemState(Resource.CPU).floatValue());
metric.putMetric(BalanceMetricConstant.CLUSTER_METRIC_LOAD_RE_BALANCE_NW_IN, state.getResItemState(Resource.NW_IN).floatValue());
metric.putMetric(BalanceMetricConstant.CLUSTER_METRIC_LOAD_RE_BALANCE_NW_OUT, state.getResItemState(Resource.NW_OUT).floatValue());
metric.putMetric(BalanceMetricConstant.CLUSTER_METRIC_LOAD_RE_BALANCE_DISK, state.getResItemState(Resource.DISK).floatValue());
return Result.buildSuc(metric);
}
}

View File

@@ -7,6 +7,7 @@ import com.xiaojukeji.know.streaming.km.common.bean.dto.topic.TopicCreateDTO;
import com.xiaojukeji.know.streaming.km.common.bean.dto.topic.TopicExpansionDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.broker.Broker;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
import com.xiaojukeji.know.streaming.km.common.bean.entity.param.config.KafkaTopicConfigParam;
import com.xiaojukeji.know.streaming.km.common.bean.entity.param.topic.TopicCreateParam;
import com.xiaojukeji.know.streaming.km.common.bean.entity.param.topic.TopicParam;
import com.xiaojukeji.know.streaming.km.common.bean.entity.param.topic.TopicPartitionExpandParam;
@@ -17,17 +18,17 @@ import com.xiaojukeji.know.streaming.km.common.bean.entity.result.ResultStatus;
import com.xiaojukeji.know.streaming.km.common.bean.entity.topic.Topic;
import com.xiaojukeji.know.streaming.km.common.constant.KafkaConstant;
import com.xiaojukeji.know.streaming.km.common.constant.MsgConstant;
import com.xiaojukeji.know.streaming.km.common.utils.BackoffUtils;
import com.xiaojukeji.know.streaming.km.common.utils.FutureUtil;
import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
import com.xiaojukeji.know.streaming.km.common.utils.*;
import com.xiaojukeji.know.streaming.km.common.utils.kafka.KafkaReplicaAssignUtil;
import com.xiaojukeji.know.streaming.km.core.service.broker.BrokerService;
import com.xiaojukeji.know.streaming.km.core.service.cluster.ClusterPhyService;
import com.xiaojukeji.know.streaming.km.core.service.partition.PartitionService;
import com.xiaojukeji.know.streaming.km.core.service.topic.OpTopicService;
import com.xiaojukeji.know.streaming.km.core.service.topic.TopicConfigService;
import com.xiaojukeji.know.streaming.km.core.service.topic.TopicService;
import kafka.admin.AdminUtils;
import kafka.admin.BrokerMetadata;
import org.apache.kafka.common.config.TopicConfig;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;
@@ -61,6 +62,9 @@ public class OpTopicManagerImpl implements OpTopicManager {
@Autowired
private PartitionService partitionService;
@Autowired
private TopicConfigService topicConfigService;
@Override
public Result<Void> createTopic(TopicCreateDTO dto, String operator) {
log.info("method=createTopic||param={}||operator={}.", dto, operator);
@@ -160,10 +164,27 @@ public class OpTopicManagerImpl implements OpTopicManager {
@Override
public Result<Void> truncateTopic(Long clusterPhyId, String topicName, String operator) {
// Add the delete config if not already present
Result<Tuple<Boolean, String>> rt = this.addDeleteConfigIfNotExist(clusterPhyId, topicName, operator);
if (rt.failed()) {
log.error("method=truncateTopic||clusterPhyId={}||topicName={}||operator={}||result={}||msg=get config from kafka failed", clusterPhyId, topicName, operator, rt);
return Result.buildFromIgnoreData(rt);
}
// Truncate the topic
Result<Void> rv = opTopicService.truncateTopic(new TopicTruncateParam(clusterPhyId, topicName, KafkaConstant.TOPICK_TRUNCATE_DEFAULT_OFFSET), operator);
if (rv.failed()) {
return rv;
log.error("method=truncateTopic||clusterPhyId={}||topicName={}||originConfig={}||operator={}||result={}||msg=truncate topic failed", clusterPhyId, topicName, rt.getData().v2(), operator, rv);
// If the config was modified, the error message should include a reminder; otherwise return the error directly
return rt.getData().v1() ? Result.buildFailure(rv.getCode(), rv.getMessage() + "\t\n" + String.format("The topic's CleanupPolicy has been modified and needs to be restored manually to %s", rt.getData().v2())) : rv;
}
// Restore the compact config
rv = this.recoverConfigIfChanged(clusterPhyId, topicName, rt.getData().v1(), rt.getData().v2(), operator);
if (rv.failed()) {
log.error("method=truncateTopic||clusterPhyId={}||topicName={}||originConfig={}||operator={}||result={}||msg=truncate topic success but recover config failed", clusterPhyId, topicName, rt.getData().v2(), operator, rv);
// If the config was modified, the error message should include a reminder; otherwise return the error directly
return Result.buildFailure(rv.getCode(), String.format("The topic truncate succeeded, but restoring the CleanupPolicy config failed; it needs to be restored manually to %s.", rt.getData().v2()) + "\t\n" + rv.getMessage());
}
return Result.buildSuc();
@@ -171,6 +192,44 @@ public class OpTopicManagerImpl implements OpTopicManager {
/**************************************************** private method ****************************************************/
private Result<Tuple<Boolean, String>> addDeleteConfigIfNotExist(Long clusterPhyId, String topicName, String operator) {
// Fetch the topic config
Result<Map<String, String>> configMapResult = topicConfigService.getTopicConfigFromKafka(clusterPhyId, topicName);
if (configMapResult.failed()) {
return Result.buildFromIgnoreData(configMapResult);
}
String cleanupPolicyValue = configMapResult.getData().getOrDefault(TopicConfig.CLEANUP_POLICY_CONFIG, "");
List<String> cleanupPolicyValueList = CommonUtils.string2StrList(cleanupPolicyValue);
if (cleanupPolicyValueList.size() == 1 && cleanupPolicyValueList.contains(TopicConfig.CLEANUP_POLICY_DELETE)) {
// No change needed
return Result.buildSuc(new Tuple<>(Boolean.FALSE, cleanupPolicyValue));
}
Map<String, String> changedConfigMap = new HashMap<>(1);
changedConfigMap.put(TopicConfig.CLEANUP_POLICY_CONFIG, TopicConfig.CLEANUP_POLICY_DELETE);
Result<Void> rv = topicConfigService.modifyTopicConfig(new KafkaTopicConfigParam(clusterPhyId, topicName, changedConfigMap), operator);
if (rv.failed()) {
// The modification failed
return Result.buildFromIgnoreData(rv);
}
return Result.buildSuc(new Tuple<>(Boolean.TRUE, cleanupPolicyValue));
}
private Result<Void> recoverConfigIfChanged(Long clusterPhyId, String topicName, Boolean changed, String originValue, String operator) {
if (!changed) {
// Nothing was changed, return directly
return Result.buildSuc();
}
// Restore the config
Map<String, String> changedConfigMap = new HashMap<>(1);
changedConfigMap.put(TopicConfig.CLEANUP_POLICY_CONFIG, originValue);
return topicConfigService.modifyTopicConfig(new KafkaTopicConfigParam(clusterPhyId, topicName, changedConfigMap), operator);
}
private Seq<BrokerMetadata> buildBrokerMetadataSeq(Long clusterPhyId, final List<Integer> selectedBrokerIdList) {
// Pick the broker list

View File

@@ -2,6 +2,7 @@ package com.xiaojukeji.know.streaming.km.common.enums.operaterecord;
import com.google.common.collect.Lists;
import com.google.common.collect.Maps;
import com.xiaojukeji.know.streaming.km.common.annotations.enterprise.EnterpriseLoadReBalance;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import java.util.List;
@@ -40,6 +41,9 @@ public enum ModuleEnum {
JOB_KAFKA_REPLICA_REASSIGN(110, "Job-KafkaReplica Migration"),
@EnterpriseLoadReBalance
JOB_CLUSTER_BALANCE(111, "Job-ClusterBalance"),
;
ModuleEnum(int code, String desc) {

View File

@@ -1,2 +1,2 @@
BUSINESS_VERSION='false'
BUSINESS_VERSION='true'
PUBLIC_PATH=''

View File

@@ -31,12 +31,7 @@ export const { Provider, Consumer } = React.createContext('zh');
const defaultLanguage = 'zh';
const AppContent = (props: {
getLicenseInfo?: (cbk: (msg: string) => void) => void | undefined;
licenseEventBus?: Record<string, any> | undefined;
}) => {
const { getLicenseInfo, licenseEventBus } = props;
const AppContent = (props: any) => {
return (
<div className="config-system">
<DProLayout.Sider prefixCls={'dcd-two-columns'} width={200} theme={'light'} systemKey={systemKey} menuConf={leftMenus} />
@@ -44,7 +39,7 @@ const AppContent = (props: {
<RouteGuard
routeList={pageRoutes}
beforeEach={() => {
getLicenseInfo?.((msg) => licenseEventBus?.emit('licenseError', msg));
// getLicenseInfo?.((msg) => licenseEventBus?.emit('licenseError', msg));
return Promise.resolve(true);
}}
noMatch={() => <Redirect to="/404" />}
@@ -55,7 +50,6 @@ const AppContent = (props: {
};
const App = (props: any) => {
const { getLicenseInfo, licenseEventBus } = props;
const intlMessages = _.get(localeMap[defaultLanguage], 'intlMessages', intlZhCN);
const locale = _.get(localeMap[defaultLanguage], 'intl', 'zh-CN');
const antdLocale = _.get(localeMap[defaultLanguage], 'dantd', dantdZhCN);
@@ -65,7 +59,7 @@ const App = (props: any) => {
<AppContainer intlProvider={{ locale, messages: intlMessages }} antdProvider={{ locale: antdLocale }}>
<Router basename={systemKey}>
<Switch>
<AppContent getLicenseInfo={getLicenseInfo} licenseEventBus={licenseEventBus} />
<AppContent />
</Switch>
</Router>
</AppContainer>

View File

@@ -73,44 +73,6 @@ const logout = () => {
localStorage.removeItem('userInfo');
};
const LicenseLimitModal = () => {
const [visible, setVisible] = useState<boolean>(false);
const [msg, setMsg] = useState<string>('');
useLayoutEffect(() => {
licenseEventBus.on('licenseError', (desc: string) => {
!visible && setVisible(true);
setMsg(desc);
});
return () => {
licenseEventBus.removeAll('licenseError');
};
}, []);
return (
<Modal
visible={visible}
centered={true}
width={400}
zIndex={10001}
title={
<>
<IconFont type="icon-yichang" style={{ marginRight: 10, fontSize: 18 }} />
</>
}
footer={null}
onCancel={() => setVisible(false)}
>
<div style={{ margin: '0 28px', lineHeight: '24px' }}>
<div>
{msg}<a></a>
</div>
</div>
</Modal>
);
};
const AppContent = (props: { setlanguage: (language: string) => void }) => {
const { pathname } = useLocation();
const history = useHistory();
@@ -186,7 +148,7 @@ const AppContent = (props: { setlanguage: (language: string) => void }) => {
}}
onMount={(customProps: any) => {
judgePage404();
registerApps(systemsConfig, { ...customProps, getLicenseInfo, licenseEventBus }, () => {
registerApps(systemsConfig, { ...customProps }, () => {
// postMessage();
});
}}
@@ -207,7 +169,6 @@ const AppContent = (props: { setlanguage: (language: string) => void }) => {
}}
/>
</Switch>
<LicenseLimitModal />
</>
</DProLayout.Container>
);
@@ -241,7 +202,6 @@ export default function App(): JSX.Element {
<BrowserRouter basename="">
<Switch>
<Route path="/login" component={Login} />
<Route path="/no-license" exact component={NoLicense} />
<Route render={() => <AppContent setlanguage={setlanguage} />} />
</Switch>
</BrowserRouter>

View File

@@ -33,6 +33,7 @@ interface PropsType {
};
onChange: (options: KsHeaderOptions) => void;
openMetricFilter: () => void;
setScreenType?: any;
}
interface ScopeData {
@@ -56,12 +57,29 @@ const GRID_SIZE_OPTIONS = [
},
];
// additional filtering logic for connect
const CONNECT_OPTIONS = [
{
label: 'All',
value: 'all',
},
{
label: 'Cluster',
value: 'Connect',
},
{
label: 'Connector',
value: 'Connector',
},
];
const MetricOperateBar = ({
nodeSelect = {},
hideNodeScope = false,
hideGridSelect = false,
onChange: onChangeCallback,
openMetricFilter,
setScreenType,
}: PropsType): JSX.Element => {
const [gridNum, setGridNum] = useState<number>(GRID_SIZE_OPTIONS[1].value);
const [rangeTime, setRangeTime] = useState<[number, number]>(() => {
@@ -139,6 +157,17 @@ const MetricOperateBar = ({
<DRangeTime timeChange={timeChange} rangeTimeArr={rangeTime} />
</div>
<div className="header-right">
{/* standalone logic for connect */}
{setScreenType && (
<Select
style={{ width: 120, marginRight: 10 }}
defaultValue="all"
options={CONNECT_OPTIONS}
onChange={(e) => {
setScreenType(e);
}}
/>
)}
{/* node scope */}
{!hideNodeScope && (
<NodeSelect name={nodeSelect.name || ''} onChange={nodeScopeChange}>

View File

@@ -72,7 +72,7 @@ const ChartList = (props: ChartListProps) => {
const { metricName, metricType, metricUnit, metricLines, showLegend } = data;
return (
<div key={metricName} className="dashboard-drag-item-box">
<div key={metricName + metricType} className="dashboard-drag-item-box">
<div className="dashboard-drag-item-box-title">
<Tooltip
placement="topLeft"

View File

@@ -252,6 +252,7 @@ const ClusterList = (props: { searchParams: SearchParams; showAccessCluster: any
const {
Brokers: brokers,
Zookeepers: zks,
// ConnectionsCount: connect,
HealthCheckPassed: healthCheckPassed,
HealthCheckTotal: healthCheckTotal,
HealthState: healthState,
@@ -352,6 +353,18 @@ const ClusterList = (props: { searchParams: SearchParams; showAccessCluster: any
<div className="indicator-left-item-value">{zookeepersAvailable === -1 ? '-' : zks}</div>
</div>
)}
{/* <div className="indicator-left-item">
<div className="indicator-left-item-title">
<span
className="indicator-left-item-title-dot"
style={{
background: itemData.latestMetrics?.metrics?.BrokersNotAlive ? '#FF7066' : '#34C38F',
}}
></span>
Connect
</div>
<div className="indicator-left-item-value">{connect}</div>
</div> */}
</div>
<div className="indicator-right">
{metricPoints.map((row, index) => {

View File

@@ -144,6 +144,8 @@ const ConsumeClientTest = () => {
...configInfo,
needFilterKeyValue: changeValue === 1 || changeValue === 2,
needFilterSize: changeValue === 3 || changeValue === 4 || changeValue === 5,
needFilterKey: changeValue === 6,
needFilterValue: changeValue === 7,
});
break;
}

View File

@@ -16,19 +16,19 @@ export const cardList = [
export const filterList = [
{
label: 'none',
label: 'None',
value: 0,
},
{
label: 'contains',
label: 'Contains',
value: 1,
},
{
label: 'does not contains',
label: 'Does Not Contains',
value: 2,
},
{
label: 'equals',
label: 'Equals',
value: 3,
},
{
@@ -39,6 +39,14 @@ export const filterList = [
label: 'Under Size',
value: 5,
},
{
label: 'Key Contains',
value: 6,
},
{
label: 'Value Contains',
value: 7,
}
];
export const untilList = [
@@ -324,10 +332,10 @@ export const getFormConfig = (topicMetaData: any, info = {} as any, partitionLis
key: 'filterKey',
label: 'Key',
type: FormItemType.input,
invisible: !info?.needFilterKeyValue,
invisible: !info?.needFilterKeyValue && !info?.needFilterKey,
rules: [
{
required: info?.needFilterKeyValue,
required: info?.needFilterKeyValue || info?.needFilterKey,
message: 'Please enter a Key',
},
],
@@ -336,10 +344,10 @@ export const getFormConfig = (topicMetaData: any, info = {} as any, partitionLis
key: 'filterValue',
label: 'Value',
type: FormItemType.input,
invisible: !info?.needFilterKeyValue,
invisible: !info?.needFilterKeyValue && !info?.needFilterValue,
rules: [
{
required: info?.needFilterKeyValue,
required: info?.needFilterKeyValue || info?.needFilterValue,
message: 'Please enter a Value',
},
],

View File

@@ -44,7 +44,7 @@ const ExpandPartition = (props: { record: any; onConfirm: () => void }) => {
setLoading(true);
const metricParams = {
aggType: 'avg',
aggType: 'sum',
endTime: Math.round(endStamp),
metricsNames: ['BytesIn', 'BytesOut'],
startTime: Math.round(startStamp),

View File

@@ -61,7 +61,6 @@ const LayoutContainer = () => {
// Route guard that runs before each navigation
const routeBeforeEach = useCallback(
(path: string, permissionNode: string | number) => {
getLicenseInfo((msg) => licenseEventBus.emit('licenseError', msg));
// Check whether the preconditions for entering the page are met; if not, show a loading state
const isClusterNotExist = path.includes(':clusterId') && !global.clusterInfo;
const isNotLoadedPermissions = typeof global.hasPermission !== 'function';

View File

@@ -32,8 +32,6 @@
<configuration>
<nodeVersion>v12.22.12</nodeVersion>
<npmVersion>6.14.16</npmVersion>
<nodeDownloadRoot>https://npm.taobao.org/mirrors/node/</nodeDownloadRoot>
<npmDownloadRoot>https://registry.npm.taobao.org/npm/-/</npmDownloadRoot>
</configuration>
</execution>
<execution>

View File

@@ -37,6 +37,7 @@ import scala.jdk.javaapi.CollectionConverters;
import javax.annotation.PostConstruct;
import java.util.*;
import java.util.stream.Collectors;
import static com.xiaojukeji.know.streaming.km.common.enums.version.VersionEnum.*;
@@ -154,9 +155,11 @@ public class BrokerConfigServiceImpl extends BaseKafkaVersionControlService impl
if (propertiesResult.failed()) {
return Result.buildFromIgnoreData(propertiesResult);
}
List<String> configKeyList = propertiesResult.getData().keySet().stream().map(Object::toString).collect(Collectors.toList());
return Result.buildSuc(KafkaConfigConverter.convert2KafkaBrokerConfigDetailList(
new ArrayList<>(),
configKeyList,
propertiesResult.getData()
));
}

View File

@@ -15,6 +15,9 @@ public class KSConfigUtils {
private KSConfigUtils() {
}
@Value("${cluster-balance.ignored-topics.time-second:300}")
private Integer clusterBalanceIgnoredTopicsTimeSecond;
@Value(value = "${request.api-call.timeout-unit-ms:8000}")
private Integer apiCallTimeoutUnitMs;
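For reference, these `@Value` placeholders resolve configuration keys that would look like the following in application.yml (a sketch; 300 and 8000 are just the fallback defaults declared above):
```yaml
cluster-balance:
  ignored-topics:
    time-second: 300
request:
  api-call:
    timeout-unit-ms: 8000
```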

View File

@@ -143,7 +143,7 @@ public class HealthStateServiceImpl implements HealthStateService {
// If absent from the DB, default to alive
metrics.getMetrics().put(BROKER_METRIC_HEALTH_STATE, (float)HealthStateEnum.GOOD.getDimension());
} else if (!broker.alive()) {
metrics.getMetrics().put(BROKER_METRIC_HEALTH_STATE, (float)HealthStateEnum.DEAD.getDimension());
metrics.getMetrics().put(BROKER_METRIC_HEALTH_STATE, (float) HealthStateEnum.DEAD.getDimension());
} else {
metrics.getMetrics().put(BROKER_METRIC_HEALTH_STATE, (float)this.calHealthState(aggResultList).getDimension());
}

View File

@@ -19,6 +19,7 @@ import org.apache.kafka.clients.admin.ElectLeadersOptions;
import org.apache.kafka.clients.admin.ElectLeadersResult;
import org.apache.kafka.common.ElectionType;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.errors.ElectionNotNeededException;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import scala.jdk.javaapi.CollectionConverters;
@@ -108,12 +109,17 @@ public class OpPartitionServiceImpl extends BaseKafkaVersionControlService imple
return Result.buildSuc();
} catch (Exception e) {
if(e.getCause() instanceof ElectionNotNeededException) {
// ignore ElectionNotNeededException
return Result.buildSuc();
}
LOGGER.error(
"method=preferredReplicaElectionByKafkaClient||clusterPhyId={}||errMsg=exception",
partitionParam.getClusterPhyId(), e
);
return Result.buildFromRSAndMsg(ResultStatus.ZK_OPERATE_FAILED, e.getMessage());
return Result.buildFromRSAndMsg(ResultStatus.KAFKA_OPERATE_FAILED, e.getMessage());
}
}
}

View File

@@ -0,0 +1,64 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<parent>
<artifactId>km</artifactId>
<groupId>com.xiaojukeji.kafka</groupId>
<version>${revision}</version>
<relativePath>../../pom.xml</relativePath>
</parent>
<modelVersion>4.0.0</modelVersion>
<artifactId>km-rebalance</artifactId>
<dependencies>
<!-- packages the algorithm depends on -->
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
</dependency>
<dependency>
<groupId>org.elasticsearch.client</groupId>
<artifactId>elasticsearch-rest-client</artifactId>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
</dependency>
<dependency>
<groupId>com.google.guava</groupId>
<artifactId>guava</artifactId>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
</dependency>
<dependency>
<groupId>commons-io</groupId>
<artifactId>commons-io</artifactId>
</dependency>
<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-lang3</artifactId>
</dependency>
<dependency>
<groupId>net.sf.jopt-simple</groupId>
<artifactId>jopt-simple</artifactId>
</dependency>
<!-- packages the application layer depends on -->
<dependency>
<groupId>com.xiaojukeji.kafka</groupId>
<artifactId>km-common</artifactId>
<version>${project.parent.version}</version>
</dependency>
<dependency>
<groupId>com.xiaojukeji.kafka</groupId>
<artifactId>km-core</artifactId>
<version>${project.parent.version}</version>
</dependency>
</dependencies>
</project>

View File

@@ -0,0 +1,143 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm;
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.executor.ExecutionRebalance;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.executor.common.BalanceParameter;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.executor.common.HostEnv;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.executor.common.OptimizerResult;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.utils.CommandLineUtils;
import joptsimple.OptionParser;
import joptsimple.OptionSet;
import org.apache.commons.io.FileUtils;
import org.apache.kafka.clients.CommonClientConfigs;
import java.io.File;
import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import java.util.Properties;
public class KafkaRebalanceMain {
public void run(OptionSet options) {
try {
BalanceParameter balanceParameter = new BalanceParameter();
if (options.has("excluded-topics")) {
balanceParameter.setExcludedTopics(options.valueOf("excluded-topics").toString());
}
if (options.has("offline-brokers")) {
balanceParameter.setOfflineBrokers(options.valueOf("offline-brokers").toString());
}
if (options.has("disk-threshold")) {
Double diskThreshold = (Double) options.valueOf("disk-threshold");
balanceParameter.setDiskThreshold(diskThreshold);
}
if (options.has("cpu-threshold")) {
Double cpuThreshold = (Double) options.valueOf("cpu-threshold");
balanceParameter.setCpuThreshold(cpuThreshold);
}
if (options.has("network-in-threshold")) {
Double networkInThreshold = (Double) options.valueOf("network-in-threshold");
balanceParameter.setNetworkInThreshold(networkInThreshold);
}
if (options.has("network-out-threshold")) {
Double networkOutThreshold = (Double) options.valueOf("network-out-threshold");
balanceParameter.setNetworkOutThreshold(networkOutThreshold);
}
if (options.has("balance-brokers")) {
balanceParameter.setBalanceBrokers(options.valueOf("balance-brokers").toString());
}
if (options.has("topic-leader-threshold")) {
Double topicLeaderThreshold = (Double) options.valueOf("topic-leader-threshold");
balanceParameter.setTopicLeaderThreshold(topicLeaderThreshold);
}
if (options.has("topic-replica-threshold")) {
Double topicReplicaThreshold = (Double) options.valueOf("topic-replica-threshold");
balanceParameter.setTopicReplicaThreshold(topicReplicaThreshold);
}
if (options.has("ignored-topics")) {
balanceParameter.setIgnoredTopics(options.valueOf("ignored-topics").toString());
}
String path = options.valueOf("output-path").toString();
String goals = options.valueOf("goals").toString();
balanceParameter.setGoals(Arrays.asList(goals.split(",")));
balanceParameter.setCluster(options.valueOf("cluster").toString());
Properties kafkaConfig = new Properties();
kafkaConfig.setProperty(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, options.valueOf("bootstrap-servers").toString());
balanceParameter.setKafkaConfig(kafkaConfig);
if (options.has("es-password")) {
balanceParameter.setEsInfo(options.valueOf("es-rest-url").toString(), options.valueOf("es-password").toString(), options.valueOf("es-index-prefix").toString());
} else {
balanceParameter.setEsInfo(options.valueOf("es-rest-url").toString(), "", options.valueOf("es-index-prefix").toString());
}
balanceParameter.setBeforeSeconds((Integer) options.valueOf("before-seconds"));
String envFile = options.valueOf("hardware-env-file").toString();
String envJson = FileUtils.readFileToString(new File(envFile), "UTF-8");
List<HostEnv> env = new ObjectMapper().readValue(envJson, new TypeReference<List<HostEnv>>() {
});
balanceParameter.setHardwareEnv(env);
ExecutionRebalance exec = new ExecutionRebalance();
OptimizerResult optimizerResult = exec.optimizations(balanceParameter);
FileUtils.write(new File(path.concat("/overview.json")), optimizerResult.resultJsonOverview(), "UTF-8");
FileUtils.write(new File(path.concat("/detailed.json")), optimizerResult.resultJsonDetailed(), "UTF-8");
FileUtils.write(new File(path.concat("/task.json")), optimizerResult.resultJsonTask(), "UTF-8");
} catch (IOException e) {
e.printStackTrace();
}
}
public static void main(String[] args) {
OptionParser parser = new OptionParser();
parser.accepts("bootstrap-servers", "Kafka cluster boot server").withRequiredArg().ofType(String.class);
parser.accepts("es-rest-url", "The url of elasticsearch").withRequiredArg().ofType(String.class);
parser.accepts("es-password", "The password of elasticsearch").withRequiredArg().ofType(String.class);
parser.accepts("es-index-prefix", "The Index Prefix of elasticsearch").withRequiredArg().ofType(String.class);
parser.accepts("goals", "Balanced goals include TopicLeadersDistributionGoal,TopicReplicaDistributionGoal,DiskDistributionGoal,NetworkInboundDistributionGoal,NetworkOutboundDistributionGoal").withRequiredArg().ofType(String.class);
parser.accepts("cluster", "Balanced cluster name").withRequiredArg().ofType(String.class);
parser.accepts("excluded-topics", "Topic does not perform data balancing").withOptionalArg().ofType(String.class);
parser.accepts("ignored-topics","Topics that do not contain model calculations").withOptionalArg().ofType(String.class);
parser.accepts("offline-brokers", "Broker does not perform data balancing").withOptionalArg().ofType(String.class);
parser.accepts("balance-brokers", "Balanced brokers list").withOptionalArg().ofType(String.class);
parser.accepts("disk-threshold", "Disk data balance threshold").withOptionalArg().ofType(Double.class);
parser.accepts("topic-leader-threshold","topic leader threshold").withOptionalArg().ofType(Double.class);
parser.accepts("topic-replica-threshold","topic replica threshold").withOptionalArg().ofType(Double.class);
parser.accepts("cpu-threshold", "Cpu utilization balance threshold").withOptionalArg().ofType(Double.class);
parser.accepts("network-in-threshold", "Network inflow threshold").withOptionalArg().ofType(Double.class);
parser.accepts("network-out-threshold", "Network outflow threshold").withOptionalArg().ofType(Double.class);
parser.accepts("before-seconds", "Query es data time").withRequiredArg().ofType(Integer.class);
parser.accepts("hardware-env-file", "Machine environment information includes cpu, disk and network").withRequiredArg().ofType(String.class);
parser.accepts("output-path", "Cluster balancing result file directory").withRequiredArg().ofType(String.class);
OptionSet options = parser.parse(args);
if (args.length == 0) {
CommandLineUtils.printUsageAndDie(parser, "Running parameters need to be configured to perform cluster balancing");
}
if (!options.has("bootstrap-servers")) {
CommandLineUtils.printUsageAndDie(parser, "bootstrap-servers cannot be empty");
}
if (!options.has("es-rest-url")) {
CommandLineUtils.printUsageAndDie(parser, "es-rest-url cannot be empty");
}
if (!options.has("es-index-prefix")) {
CommandLineUtils.printUsageAndDie(parser, "es-index-prefix cannot be empty");
}
if (!options.has("goals")) {
CommandLineUtils.printUsageAndDie(parser, "goals cannot be empty");
}
if (!options.has("cluster")) {
CommandLineUtils.printUsageAndDie(parser, "cluster name cannot be empty");
}
if (!options.has("before-seconds")) {
CommandLineUtils.printUsageAndDie(parser, "before-seconds cannot be empty");
}
if (!options.has("hardware-env-file")) {
CommandLineUtils.printUsageAndDie(parser, "hardware-env-file cannot be empty");
}
if (!options.has("output-path")) {
CommandLineUtils.printUsageAndDie(parser, "output-path cannot be empty");
}
KafkaRebalanceMain rebalanceMain = new KafkaRebalanceMain();
rebalanceMain.run(options);
}
}
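Putting the required options together, a standalone run of this tool might look like the following; the jar name, addresses, and paths are placeholders, and the flags are exactly the ones the parser above treats as mandatory:
```bash
java -cp km-rebalance.jar com.xiaojukeji.know.streaming.km.rebalance.algorithm.KafkaRebalanceMain \
  --bootstrap-servers broker-1:9092 \
  --es-rest-url http://es-host:9200 \
  --es-index-prefix ks_kafka_ \
  --goals DiskDistributionGoal,NetworkInboundDistributionGoal \
  --cluster my-cluster \
  --before-seconds 300 \
  --hardware-env-file ./hardware-env.json \
  --output-path ./balance-result
```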

View File

@@ -0,0 +1,15 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.exception;
public class OptimizationFailureException extends Exception {
public OptimizationFailureException(String message, Throwable cause) {
super(message, cause);
}
public OptimizationFailureException(String message) {
super(message);
}
public OptimizationFailureException(Throwable cause) {
super(cause);
}
}

View File

@@ -0,0 +1,78 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.executor;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.executor.common.BalanceGoal;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.executor.common.BalanceParameter;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.executor.common.BalanceThreshold;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.executor.common.BrokerBalanceState;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.ClusterModel;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.Load;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.Resource;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.GoalOptimizer;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.OptimizationOptions;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.executor.common.OptimizerResult;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.utils.GoalUtils;
import org.apache.commons.lang3.StringUtils;
import org.apache.commons.lang3.Validate;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.HashMap;
import java.util.Map;
public class ExecutionRebalance {
private static final Logger logger = LoggerFactory.getLogger(ExecutionRebalance.class);
public OptimizerResult optimizations(BalanceParameter balanceParameter) {
Validate.isTrue(StringUtils.isNotBlank(balanceParameter.getCluster()), "cluster is empty");
Validate.isTrue(balanceParameter.getKafkaConfig() != null, "Kafka config properties is empty");
Validate.isTrue(balanceParameter.getGoals() != null, "Balance goals is empty");
Validate.isTrue(StringUtils.isNotBlank(balanceParameter.getEsIndexPrefix()), "EsIndexPrefix is empty");
Validate.isTrue(StringUtils.isNotBlank(balanceParameter.getEsRestURL()), "EsRestURL is empty");
Validate.isTrue(balanceParameter.getHardwareEnv() != null, "HardwareEnv is empty");
logger.info("Cluster balancing start");
ClusterModel clusterModel = GoalUtils.getInitClusterModel(balanceParameter);
GoalOptimizer optimizer = new GoalOptimizer();
OptimizerResult optimizerResult = optimizer.optimizations(clusterModel, new OptimizationOptions(balanceParameter));
logger.info("Cluster balancing completed");
return optimizerResult;
}
public static Map<Resource, Double> getClusterAvgResourcesState(BalanceParameter balanceParameter) {
ClusterModel clusterModel = GoalUtils.getInitClusterModel(balanceParameter);
Load load = clusterModel.load();
Map<Resource, Double> avgResource = new HashMap<>();
avgResource.put(Resource.DISK, load.loadFor(Resource.DISK) / clusterModel.brokers().size());
avgResource.put(Resource.CPU, load.loadFor(Resource.CPU) / clusterModel.brokers().size());
avgResource.put(Resource.NW_OUT, load.loadFor(Resource.NW_OUT) / clusterModel.brokers().size());
avgResource.put(Resource.NW_IN, load.loadFor(Resource.NW_IN) / clusterModel.brokers().size());
return avgResource;
}
public static Map<Integer, BrokerBalanceState> getBrokerResourcesBalanceState(BalanceParameter balanceParameter) {
Map<Integer, BrokerBalanceState> balanceState = new HashMap<>();
ClusterModel clusterModel = GoalUtils.getInitClusterModel(balanceParameter);
double[] clusterAvgResource = clusterModel.avgOfUtilization();
Map<String, BalanceThreshold> balanceThreshold = GoalUtils.getBalanceThreshold(balanceParameter, clusterAvgResource);
clusterModel.brokers().forEach(i -> {
BrokerBalanceState state = new BrokerBalanceState();
if (balanceParameter.getGoals().contains(BalanceGoal.DISK.goal())) {
state.setDiskAvgResource(i.load().loadFor(Resource.DISK));
state.setDiskUtilization(i.utilizationFor(Resource.DISK));
state.setDiskBalanceState(balanceThreshold.get(BalanceGoal.DISK.goal()).state(i.utilizationFor(Resource.DISK)));
}
if (balanceParameter.getGoals().contains(BalanceGoal.NW_IN.goal())) {
state.setBytesInAvgResource(i.load().loadFor(Resource.NW_IN));
state.setBytesInUtilization(i.utilizationFor(Resource.NW_IN));
state.setBytesInBalanceState(balanceThreshold.get(BalanceGoal.NW_IN.goal()).state(i.utilizationFor(Resource.NW_IN)));
}
if (balanceParameter.getGoals().contains(BalanceGoal.NW_OUT.goal())) {
state.setBytesOutAvgResource(i.load().loadFor(Resource.NW_OUT));
state.setBytesOutUtilization(i.utilizationFor(Resource.NW_OUT));
state.setBytesOutBalanceState(balanceThreshold.get(BalanceGoal.NW_OUT.goal()).state(i.utilizationFor(Resource.NW_OUT)));
}
balanceState.put(i.id(), state);
});
return balanceState;
}
}
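As a usage sketch, a caller might wire a balance run up like this; the broker address, index prefix, cluster name, and `hostEnvList` are placeholders, and the setters are the ones `BalanceParameter` defines below:
```java
// Hypothetical caller; all concrete values are placeholders.
Properties kafkaConfig = new Properties();
kafkaConfig.setProperty(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, "broker-1:9092");

BalanceParameter parameter = new BalanceParameter();
parameter.setCluster("my-cluster");
parameter.setKafkaConfig(kafkaConfig);
parameter.setEsInfo("http://es-host:9200", "", "ks_kafka_");   // URL, password, index prefix
parameter.setGoals(Arrays.asList(BalanceGoal.DISK.goal(), BalanceGoal.NW_IN.goal()));
parameter.setBeforeSeconds(300);                               // metrics lookback window in seconds
parameter.setHardwareEnv(hostEnvList);                         // List<HostEnv> describing each broker's cpu/disk/network

OptimizerResult result = new ExecutionRebalance().optimizations(parameter);
```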

View File

@@ -0,0 +1,76 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.executor.common;
public class BalanceActionHistory {
// Balance goal
private String goal;
// Balance action type
private String actionType;
// Topic being balanced
private String topic;
// Partition being balanced
private int partition;
// Source broker
private int sourceBrokerId;
// Destination broker
private int destinationBrokerId;
public String getGoal() {
return goal;
}
public void setGoal(String goal) {
this.goal = goal;
}
public String getActionType() {
return actionType;
}
public void setActionType(String actionType) {
this.actionType = actionType;
}
public String getTopic() {
return topic;
}
public void setTopic(String topic) {
this.topic = topic;
}
public int getPartition() {
return partition;
}
public void setPartition(int partition) {
this.partition = partition;
}
public int getSourceBrokerId() {
return sourceBrokerId;
}
public void setSourceBrokerId(int sourceBrokerId) {
this.sourceBrokerId = sourceBrokerId;
}
public int getDestinationBrokerId() {
return destinationBrokerId;
}
public void setDestinationBrokerId(int destinationBrokerId) {
this.destinationBrokerId = destinationBrokerId;
}
@Override
public String toString() {
return "BalanceActionHistory{" +
"goal='" + goal + '\'' +
", actionType='" + actionType + '\'' +
", topic='" + topic + '\'' +
", partition='" + partition + '\'' +
", sourceBrokerId='" + sourceBrokerId + '\'' +
", destinationBrokerId='" + destinationBrokerId + '\'' +
'}';
}
}

View File

@@ -0,0 +1,173 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.executor.common;
public class BalanceDetailed {
private int brokerId;
private String host;
// Current CPU utilization
private double currentCPUUtilization;
// Latest CPU utilization
private double lastCPUUtilization;
// Current disk utilization
private double currentDiskUtilization;
// Latest disk utilization
private double lastDiskUtilization;
// Current network inbound utilization
private double currentNetworkInUtilization;
// Latest network inbound utilization
private double lastNetworkInUtilization;
// Current network outbound utilization
private double currentNetworkOutUtilization;
// Latest network outbound utilization
private double lastNetworkOutUtilization;
// Balance state
private int balanceState = 0;
// Disk size moved in
private double moveInDiskSize;
// Disk size moved out
private double moveOutDiskSize;
// Number of replicas moved in
private double moveInReplicas;
// Number of replicas moved out
private double moveOutReplicas;
public int getBrokerId() {
return brokerId;
}
public void setBrokerId(int brokerId) {
this.brokerId = brokerId;
}
public double getCurrentCPUUtilization() {
return currentCPUUtilization;
}
public void setCurrentCPUUtilization(double currentCPUUtilization) {
this.currentCPUUtilization = currentCPUUtilization;
}
public double getLastCPUUtilization() {
return lastCPUUtilization;
}
public void setLastCPUUtilization(double lastCPUUtilization) {
this.lastCPUUtilization = lastCPUUtilization;
}
public double getCurrentDiskUtilization() {
return currentDiskUtilization;
}
public void setCurrentDiskUtilization(double currentDiskUtilization) {
this.currentDiskUtilization = currentDiskUtilization;
}
public double getLastDiskUtilization() {
return lastDiskUtilization;
}
public void setLastDiskUtilization(double lastDiskUtilization) {
this.lastDiskUtilization = lastDiskUtilization;
}
public double getCurrentNetworkInUtilization() {
return currentNetworkInUtilization;
}
public void setCurrentNetworkInUtilization(double currentNetworkInUtilization) {
this.currentNetworkInUtilization = currentNetworkInUtilization;
}
public double getLastNetworkInUtilization() {
return lastNetworkInUtilization;
}
public void setLastNetworkInUtilization(double lastNetworkInUtilization) {
this.lastNetworkInUtilization = lastNetworkInUtilization;
}
public double getCurrentNetworkOutUtilization() {
return currentNetworkOutUtilization;
}
public void setCurrentNetworkOutUtilization(double currentNetworkOutUtilization) {
this.currentNetworkOutUtilization = currentNetworkOutUtilization;
}
public double getLastNetworkOutUtilization() {
return lastNetworkOutUtilization;
}
public void setLastNetworkOutUtilization(double lastNetworkOutUtilization) {
this.lastNetworkOutUtilization = lastNetworkOutUtilization;
}
public int getBalanceState() {
return balanceState;
}
public void setBalanceState(int balanceState) {
this.balanceState = balanceState;
}
public double getMoveInDiskSize() {
return moveInDiskSize;
}
public void setMoveInDiskSize(double moveInDiskSize) {
this.moveInDiskSize = moveInDiskSize;
}
public double getMoveOutDiskSize() {
return moveOutDiskSize;
}
public void setMoveOutDiskSize(double moveOutDiskSize) {
this.moveOutDiskSize = moveOutDiskSize;
}
public double getMoveInReplicas() {
return moveInReplicas;
}
public void setMoveInReplicas(double moveInReplicas) {
this.moveInReplicas = moveInReplicas;
}
public double getMoveOutReplicas() {
return moveOutReplicas;
}
public void setMoveOutReplicas(double moveOutReplicas) {
this.moveOutReplicas = moveOutReplicas;
}
public String getHost() {
return host;
}
public void setHost(String host) {
this.host = host;
}
@Override
public String toString() {
return "BalanceDetailed{" +
"brokerId=" + brokerId +
", host='" + host + '\'' +
", currentCPUUtilization=" + currentCPUUtilization +
", lastCPUUtilization=" + lastCPUUtilization +
", currentDiskUtilization=" + currentDiskUtilization +
", lastDiskUtilization=" + lastDiskUtilization +
", currentNetworkInUtilization=" + currentNetworkInUtilization +
", lastNetworkInUtilization=" + lastNetworkInUtilization +
", currentNetworkOutUtilization=" + currentNetworkOutUtilization +
", lastNetworkOutUtilization=" + lastNetworkOutUtilization +
", balanceState=" + balanceState +
", moveInDiskSize=" + moveInDiskSize +
", moveOutDiskSize=" + moveOutDiskSize +
", moveInReplicas=" + moveInReplicas +
", moveOutReplicas=" + moveOutReplicas +
'}';
}
}

View File

@@ -0,0 +1,20 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.executor.common;
public enum BalanceGoal {
// Used when KM passes parameters
TOPIC_LEADERS("TopicLeadersDistributionGoal"),
TOPIC_REPLICA("TopicReplicaDistributionGoal"),
DISK("DiskDistributionGoal"),
NW_IN("NetworkInboundDistributionGoal"),
NW_OUT("NetworkOutboundDistributionGoal");
private final String goal;
BalanceGoal(String goal) {
this.goal = goal;
}
public String goal() {
return goal;
}
}

View File

@@ -0,0 +1,102 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.executor.common;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.Resource;
import java.util.Map;
public class BalanceOverview {
// Task type
private String taskType;
// Node range
private String nodeRange;
// Total size to move
private double totalMoveSize;
// Topic blacklist
private String topicBlacklist;
// Number of replicas to move
private int moveReplicas;
// Topics to move
private String moveTopics;
// Balance thresholds
private Map<Resource, Double> balanceThreshold;
// Nodes to remove
private String removeNode;
public String getTaskType() {
return taskType;
}
public void setTaskType(String taskType) {
this.taskType = taskType;
}
public String getNodeRange() {
return nodeRange;
}
public void setNodeRange(String nodeRange) {
this.nodeRange = nodeRange;
}
public double getTotalMoveSize() {
return totalMoveSize;
}
public void setTotalMoveSize(double totalMoveSize) {
this.totalMoveSize = totalMoveSize;
}
public String getTopicBlacklist() {
return topicBlacklist;
}
public void setTopicBlacklist(String topicBlacklist) {
this.topicBlacklist = topicBlacklist;
}
public int getMoveReplicas() {
return moveReplicas;
}
public void setMoveReplicas(int moveReplicas) {
this.moveReplicas = moveReplicas;
}
public String getMoveTopics() {
return moveTopics;
}
public void setMoveTopics(String moveTopics) {
this.moveTopics = moveTopics;
}
public Map<Resource, Double> getBalanceThreshold() {
return balanceThreshold;
}
public void setBalanceThreshold(Map<Resource, Double> balanceThreshold) {
this.balanceThreshold = balanceThreshold;
}
public String getRemoveNode() {
return removeNode;
}
public void setRemoveNode(String removeNode) {
this.removeNode = removeNode;
}
@Override
public String toString() {
return "BalanceOverview{" +
"taskType='" + taskType + '\'' +
", nodeRange='" + nodeRange + '\'' +
", totalMoveSize=" + totalMoveSize +
", topicBlacklist='" + topicBlacklist + '\'' +
", moveReplicas=" + moveReplicas +
", moveTopics='" + moveTopics + '\'' +
", balanceThreshold=" + balanceThreshold +
", removeNode='" + removeNode + '\'' +
'}';
}
}

View File

@@ -0,0 +1,207 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.executor.common;
import java.util.List;
import java.util.Properties;
public class BalanceParameter {
// Cluster name
private String cluster;
// Cluster access configuration
private Properties kafkaConfig;
// ES access URL
private String esRestURL;
// ES password
private String esPassword;
// ES index prefix
private String esIndexPrefix;
// Balance goals
private List<String> goals;
// Topic blacklist; these topics still take part in model calculation
private String excludedTopics = "";
// Ignored topics; these topics do not take part in model calculation
private String ignoredTopics = "";
// Brokers being taken offline
private String offlineBrokers = "";
// Brokers that need balancing
private String balanceBrokers = "";
// Default topic replica distribution threshold
private double topicReplicaThreshold = 0.1;
// Disk fluctuation threshold
private double diskThreshold = 0.1;
// CPU fluctuation threshold
private double cpuThreshold = 0.1;
// Inbound traffic fluctuation threshold
private double networkInThreshold = 0.1;
// Outbound traffic fluctuation threshold
private double networkOutThreshold = 0.1;
// Balance time window
private int beforeSeconds = 300;
// Hardware environment of every broker in the cluster: cpu, disk, bytesIn, bytesOut
private List<HostEnv> hardwareEnv;
// Minimum leader fluctuation threshold; do not chase a perfectly even spread, to avoid cluster traffic jitter
private double topicLeaderThreshold = 0.1;
public String getCluster() {
return cluster;
}
public void setCluster(String cluster) {
this.cluster = cluster;
}
public String getEsRestURL() {
return esRestURL;
}
public void setEsInfo(String esRestURL, String esPassword, String esIndexPrefix) {
this.esRestURL = esRestURL;
this.esPassword = esPassword;
this.esIndexPrefix = esIndexPrefix;
}
public String getEsPassword() {
return esPassword;
}
public List<String> getGoals() {
return goals;
}
public void setGoals(List<String> goals) {
this.goals = goals;
}
public String getExcludedTopics() {
return excludedTopics;
}
public void setExcludedTopics(String excludedTopics) {
this.excludedTopics = excludedTopics;
}
public String getIgnoredTopics() {
return ignoredTopics;
}
public void setIgnoredTopics(String ignoredTopics) {
this.ignoredTopics = ignoredTopics;
}
public double getTopicReplicaThreshold() {
return topicReplicaThreshold;
}
public void setTopicReplicaThreshold(double topicReplicaThreshold) {
this.topicReplicaThreshold = topicReplicaThreshold;
}
public double getDiskThreshold() {
return diskThreshold;
}
public void setDiskThreshold(double diskThreshold) {
this.diskThreshold = diskThreshold;
}
public double getCpuThreshold() {
return cpuThreshold;
}
public void setCpuThreshold(double cpuThreshold) {
this.cpuThreshold = cpuThreshold;
}
public double getNetworkInThreshold() {
return networkInThreshold;
}
public void setNetworkInThreshold(double networkInThreshold) {
this.networkInThreshold = networkInThreshold;
}
public double getNetworkOutThreshold() {
return networkOutThreshold;
}
public void setNetworkOutThreshold(double networkOutThreshold) {
this.networkOutThreshold = networkOutThreshold;
}
public List<HostEnv> getHardwareEnv() {
return hardwareEnv;
}
public void setHardwareEnv(List<HostEnv> hardwareEnv) {
this.hardwareEnv = hardwareEnv;
}
public String getBalanceBrokers() {
return balanceBrokers;
}
public void setBalanceBrokers(String balanceBrokers) {
this.balanceBrokers = balanceBrokers;
}
public Properties getKafkaConfig() {
return kafkaConfig;
}
public void setKafkaConfig(Properties kafkaConfig) {
this.kafkaConfig = kafkaConfig;
}
public String getEsIndexPrefix() {
return esIndexPrefix;
}
public String getOfflineBrokers() {
return offlineBrokers;
}
public void setOfflineBrokers(String offlineBrokers) {
this.offlineBrokers = offlineBrokers;
}
public int getBeforeSeconds() {
return beforeSeconds;
}
public void setBeforeSeconds(int beforeSeconds) {
this.beforeSeconds = beforeSeconds;
}
public double getTopicLeaderThreshold() {
return topicLeaderThreshold;
}
public void setTopicLeaderThreshold(double topicLeaderThreshold) {
this.topicLeaderThreshold = topicLeaderThreshold;
}
@Override
public String toString() {
return "BalanceParameter{" +
"cluster='" + cluster + '\'' +
", kafkaConfig=" + kafkaConfig +
", esRestURL='" + esRestURL + '\'' +
", esPassword='" + esPassword + '\'' +
", esIndexPrefix='" + esIndexPrefix + '\'' +
", goals=" + goals +
", excludedTopics='" + excludedTopics + '\'' +
", ignoredTopics='" + ignoredTopics + '\'' +
", offlineBrokers='" + offlineBrokers + '\'' +
", balanceBrokers='" + balanceBrokers + '\'' +
", topicReplicaThreshold=" + topicReplicaThreshold +
", diskThreshold=" + diskThreshold +
", cpuThreshold=" + cpuThreshold +
", networkInThreshold=" + networkInThreshold +
", networkOutThreshold=" + networkOutThreshold +
", beforeSeconds=" + beforeSeconds +
", hardwareEnv=" + hardwareEnv +
", topicLeaderThreshold=" + topicLeaderThreshold +
'}';
}
}

View File

@@ -0,0 +1,43 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.executor.common;
import java.util.List;
public class BalanceTask {
private String topic;
private int partition;
// Replica assignment list
private List<Integer> replicas;
public String getTopic() {
return topic;
}
public void setTopic(String topic) {
this.topic = topic;
}
public int getPartition() {
return partition;
}
public void setPartition(int partition) {
this.partition = partition;
}
public List<Integer> getReplicas() {
return replicas;
}
public void setReplicas(List<Integer> replicas) {
this.replicas = replicas;
}
@Override
public String toString() {
return "BalanceTask{" +
"topic='" + topic + '\'' +
", partition=" + partition +
", replicas=" + replicas +
'}';
}
}

View File

@@ -0,0 +1,41 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.executor.common;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.Resource;
public class BalanceThreshold {
private final Resource _resource;
private final double _upper;
private final double _lower;
public BalanceThreshold(Resource resource, double threshold, double avgResource) {
_resource = resource;
_upper = avgResource * (1 + threshold);
_lower = avgResource * (1 - threshold);
}
public Resource resource() {
return _resource;
}
public boolean isInRange(double utilization) {
return utilization > _lower && utilization < _upper;
}
public int state(double utilization) {
if (utilization <= _lower) {
return -1;
} else if (utilization >= _upper) {
return 1;
}
return 0;
}
@Override
public String toString() {
return "BalanceThreshold{" +
"_resource=" + _resource +
", _upper=" + _upper +
", _lower=" + _lower +
'}';
}
}
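To make the threshold semantics concrete, a small illustration (the numbers are arbitrary): with an average utilization of 0.5 and a threshold of 0.1, the balanced range is (0.45, 0.55):
```java
BalanceThreshold t = new BalanceThreshold(Resource.DISK, 0.1, 0.5);
t.state(0.40); // -1: below the balanced range
t.state(0.50); //  0: within the balanced range
t.state(0.60); //  1: above the balanced range
```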

View File

@@ -0,0 +1,144 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.executor.common;
public class BrokerBalanceState {
// Average CPU resource
private Double cpuAvgResource;
// CPU utilization
private Double cpuUtilization;
// -1: below the balanced range
//  0: within the balanced range
//  1: above the balanced range
private Integer cpuBalanceState;
// Average disk resource
private Double diskAvgResource;
// Disk utilization
private Double diskUtilization;
// Disk balance state
private Integer diskBalanceState;
// Average inbound (bytes-in) resource
private Double bytesInAvgResource;
// Inbound utilization
private Double bytesInUtilization;
// Inbound balance state
private Integer bytesInBalanceState;
// Average outbound (bytes-out) resource
private Double bytesOutAvgResource;
// Outbound utilization
private Double bytesOutUtilization;
// Outbound balance state
private Integer bytesOutBalanceState;
public Double getCpuAvgResource() {
return cpuAvgResource;
}
public void setCpuAvgResource(Double cpuAvgResource) {
this.cpuAvgResource = cpuAvgResource;
}
public Double getCpuUtilization() {
return cpuUtilization;
}
public void setCpuUtilization(Double cpuUtilization) {
this.cpuUtilization = cpuUtilization;
}
public Integer getCpuBalanceState() {
return cpuBalanceState;
}
public void setCpuBalanceState(Integer cpuBalanceState) {
this.cpuBalanceState = cpuBalanceState;
}
public Double getDiskAvgResource() {
return diskAvgResource;
}
public void setDiskAvgResource(Double diskAvgResource) {
this.diskAvgResource = diskAvgResource;
}
public Double getDiskUtilization() {
return diskUtilization;
}
public void setDiskUtilization(Double diskUtilization) {
this.diskUtilization = diskUtilization;
}
public Integer getDiskBalanceState() {
return diskBalanceState;
}
public void setDiskBalanceState(Integer diskBalanceState) {
this.diskBalanceState = diskBalanceState;
}
public Double getBytesInAvgResource() {
return bytesInAvgResource;
}
public void setBytesInAvgResource(Double bytesInAvgResource) {
this.bytesInAvgResource = bytesInAvgResource;
}
public Double getBytesInUtilization() {
return bytesInUtilization;
}
public void setBytesInUtilization(Double bytesInUtilization) {
this.bytesInUtilization = bytesInUtilization;
}
public Integer getBytesInBalanceState() {
return bytesInBalanceState;
}
public void setBytesInBalanceState(Integer bytesInBalanceState) {
this.bytesInBalanceState = bytesInBalanceState;
}
public Double getBytesOutAvgResource() {
return bytesOutAvgResource;
}
public void setBytesOutAvgResource(Double bytesOutAvgResource) {
this.bytesOutAvgResource = bytesOutAvgResource;
}
public Double getBytesOutUtilization() {
return bytesOutUtilization;
}
public void setBytesOutUtilization(Double bytesOutUtilization) {
this.bytesOutUtilization = bytesOutUtilization;
}
public Integer getBytesOutBalanceState() {
return bytesOutBalanceState;
}
public void setBytesOutBalanceState(Integer bytesOutBalanceState) {
this.bytesOutBalanceState = bytesOutBalanceState;
}
@Override
public String toString() {
return "BrokerBalanceState{" +
"cpuAvgResource=" + cpuAvgResource +
", cpuUtilization=" + cpuUtilization +
", cpuBalanceState=" + cpuBalanceState +
", diskAvgResource=" + diskAvgResource +
", diskUtilization=" + diskUtilization +
", diskBalanceState=" + diskBalanceState +
", bytesInAvgResource=" + bytesInAvgResource +
", bytesInUtilization=" + bytesInUtilization +
", bytesInBalanceState=" + bytesInBalanceState +
", bytesOutAvgResource=" + bytesOutAvgResource +
", bytesOutUtilization=" + bytesOutUtilization +
", bytesOutBalanceState=" + bytesOutBalanceState +
'}';
}
}

View File

@@ -0,0 +1,76 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.executor.common;
public class HostEnv {
// Broker ID
private int id;
// Host IP
private String host;
// Rack ID
private String rackId;
// Number of CPU cores
private int cpu;
// Total disk capacity
private double disk;
// Network interface capacity
private double network;
public int getId() {
return id;
}
public void setId(int id) {
this.id = id;
}
public String getHost() {
return host;
}
public void setHost(String host) {
this.host = host;
}
public String getRackId() {
return rackId;
}
public void setRackId(String rackId) {
this.rackId = rackId;
}
public int getCpu() {
return cpu;
}
public void setCpu(int cpu) {
this.cpu = cpu;
}
public double getDisk() {
return disk;
}
public void setDisk(double disk) {
this.disk = disk;
}
public double getNetwork() {
return network;
}
public void setNetwork(double network) {
this.network = network;
}
@Override
public String toString() {
return "HostEnv{" +
"id=" + id +
", host='" + host + '\'' +
", rackId='" + rackId + '\'' +
", cpu=" + cpu +
", disk=" + disk +
", network=" + network +
'}';
}
}

View File

@@ -0,0 +1,218 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.executor.common;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.Broker;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.ClusterModel;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.ReplicaPlacementInfo;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.Resource;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.ExecutionProposal;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.OptimizationOptions;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.utils.GoalUtils;
import org.apache.kafka.common.TopicPartition;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.*;
import java.util.stream.Collectors;
public class OptimizerResult {
private static final Logger logger = LoggerFactory.getLogger(OptimizerResult.class);
private Set<ExecutionProposal> _proposals;
private final BalanceParameter parameter;
private Set<Broker> _balanceBrokersBefore;
private Set<Broker> _balanceBrokersAfter;
private final ClusterModel clusterModel;
private final Map<TopicPartition, List<BalanceActionHistory>> balanceActionHistory;
private final Map<String, BalanceThreshold> balanceThreshold;
public OptimizerResult(ClusterModel clusterModel, OptimizationOptions optimizationOptions) {
this.clusterModel = clusterModel;
balanceActionHistory = clusterModel.balanceActionHistory();
parameter = optimizationOptions.parameter();
double[] clusterAvgResource = clusterModel.avgOfUtilization();
balanceThreshold = GoalUtils.getBalanceThreshold(parameter, clusterAvgResource);
}
/**
* Plan overview
*/
public BalanceOverview resultOverview() {
BalanceOverview overview = new BalanceOverview();
overview.setTopicBlacklist(parameter.getExcludedTopics());
overview.setMoveReplicas(_proposals.size());
overview.setNodeRange(parameter.getBalanceBrokers());
overview.setRemoveNode(parameter.getOfflineBrokers());
Map<Resource, Double> thresholds = new HashMap<>();
thresholds.put(Resource.CPU, parameter.getCpuThreshold());
thresholds.put(Resource.DISK, parameter.getDiskThreshold());
thresholds.put(Resource.NW_IN, parameter.getNetworkInThreshold());
thresholds.put(Resource.NW_OUT, parameter.getNetworkOutThreshold());
overview.setBalanceThreshold(thresholds);
Set<String> moveTopicsSet = _proposals.stream().map(j -> j.tp().topic()).collect(Collectors.toSet());
String moveTopics = String.join(",", moveTopicsSet);
overview.setMoveTopics(moveTopics);
// Leader-only switches are excluded from the moved-data total
double totalMoveSize = _proposals.stream().filter(i -> Integer.max(i.replicasToAdd().size(), i.replicasToRemove().size()) != 0).mapToDouble(ExecutionProposal::partitionSize).sum();
overview.setTotalMoveSize(totalMoveSize);
return overview;
}
/**
* Plan details
*/
public Map<Integer, BalanceDetailed> resultDetailed() {
Map<Integer, BalanceDetailed> details = new HashMap<>();
_balanceBrokersBefore.forEach(i -> {
BalanceDetailed balanceDetailed = new BalanceDetailed();
balanceDetailed.setBrokerId(i.id());
balanceDetailed.setHost(i.host());
balanceDetailed.setCurrentCPUUtilization(i.utilizationFor(Resource.CPU));
balanceDetailed.setCurrentDiskUtilization(i.utilizationFor(Resource.DISK));
balanceDetailed.setCurrentNetworkInUtilization(i.utilizationFor(Resource.NW_IN));
balanceDetailed.setCurrentNetworkOutUtilization(i.utilizationFor(Resource.NW_OUT));
details.put(i.id(), balanceDetailed);
});
Map<Integer, Double> totalAddReplicaCount = new HashMap<>();
Map<Integer, Double> totalAddDataSize = new HashMap<>();
Map<Integer, Double> totalRemoveReplicaCount = new HashMap<>();
Map<Integer, Double> totalRemoveDataSize = new HashMap<>();
_proposals.forEach(i -> {
i.replicasToAdd().forEach((k, v) -> {
totalAddReplicaCount.merge(k, v[0], Double::sum);
totalAddDataSize.merge(k, v[1], Double::sum);
});
i.replicasToRemove().forEach((k, v) -> {
totalRemoveReplicaCount.merge(k, v[0], Double::sum);
totalRemoveDataSize.merge(k, v[1], Double::sum);
});
});
_balanceBrokersAfter.forEach(i -> {
BalanceDetailed balanceDetailed = details.get(i.id());
balanceDetailed.setLastCPUUtilization(i.utilizationFor(Resource.CPU));
balanceDetailed.setLastDiskUtilization(i.utilizationFor(Resource.DISK));
balanceDetailed.setLastNetworkInUtilization(i.utilizationFor(Resource.NW_IN));
balanceDetailed.setLastNetworkOutUtilization(i.utilizationFor(Resource.NW_OUT));
balanceDetailed.setMoveInReplicas(totalAddReplicaCount.getOrDefault(i.id(), 0.0));
balanceDetailed.setMoveOutReplicas(totalRemoveReplicaCount.getOrDefault(i.id(), 0.0));
balanceDetailed.setMoveInDiskSize(totalAddDataSize.getOrDefault(i.id(), 0.0));
balanceDetailed.setMoveOutDiskSize(totalRemoveDataSize.getOrDefault(i.id(), 0.0));
for (String str : parameter.getGoals()) {
BalanceThreshold threshold = balanceThreshold.get(str);
if (!threshold.isInRange(i.utilizationFor(threshold.resource()))) {
balanceDetailed.setBalanceState(-1);
break;
}
}
});
return details;
}
/**
* Plan tasks
*/
public List<BalanceTask> resultTask() {
List<BalanceTask> balanceTasks = new ArrayList<>();
_proposals.forEach(proposal -> {
BalanceTask task = new BalanceTask();
task.setTopic(proposal.tp().topic());
task.setPartition(proposal.tp().partition());
List<Integer> replicas = proposal.newReplicas().stream().map(ReplicaPlacementInfo::brokerId).collect(Collectors.toList());
task.setReplicas(replicas);
balanceTasks.add(task);
});
return balanceTasks;
}
public Map<TopicPartition, List<BalanceActionHistory>> resultBalanceActionHistory() {
return Collections.unmodifiableMap(balanceActionHistory);
}
public String resultJsonOverview() {
try {
return new ObjectMapper().writeValueAsString(resultOverview());
} catch (Exception e) {
logger.error("result overview json process error", e);
}
return "{}";
}
public String resultJsonDetailed() {
try {
return new ObjectMapper().writeValueAsString(resultDetailed());
} catch (Exception e) {
logger.error("result detailed json process error", e);
}
return "{}";
}
public String resultJsonTask() {
try {
Map<String, Object> reassign = new HashMap<>();
reassign.put("partitions", resultTask());
reassign.put("version", 1);
return new ObjectMapper().writeValueAsString(reassign);
} catch (Exception e) {
logger.error("result task json process error", e);
}
return "{}";
}
public List<TopicChangeHistory> resultTopicChangeHistory() {
List<TopicChangeHistory> topicChangeHistoryList = new ArrayList<>();
for (ExecutionProposal proposal : _proposals) {
TopicChangeHistory changeHistory = new TopicChangeHistory();
changeHistory.setTopic(proposal.tp().topic());
changeHistory.setPartition(proposal.tp().partition());
changeHistory.setOldLeader(proposal.oldLeader().brokerId());
changeHistory.setNewLeader(proposal.newReplicas().get(0).brokerId());
List<Integer> balanceBefore = proposal.oldReplicas().stream().map(ReplicaPlacementInfo::brokerId).collect(Collectors.toList());
List<Integer> balanceAfter = proposal.newReplicas().stream().map(ReplicaPlacementInfo::brokerId).collect(Collectors.toList());
changeHistory.setBalanceBefore(balanceBefore);
changeHistory.setBalanceAfter(balanceAfter);
topicChangeHistoryList.add(changeHistory);
}
return topicChangeHistoryList;
}
public String resultJsonTopicChangeHistory() {
try {
return new ObjectMapper().writeValueAsString(resultTopicChangeHistory());
} catch (Exception e) {
logger.error("result balance topic change history json process error", e);
}
return "{}";
}
public String resultJsonBalanceActionHistory() {
try {
return new ObjectMapper().writeValueAsString(balanceActionHistory);
} catch (Exception e) {
logger.error("result balance action history json process error", e);
}
return "{}";
}
public void setBalanceBrokersFormBefore(Set<Broker> balanceBrokersBefore) {
_balanceBrokersBefore = new HashSet<>();
balanceBrokersBefore.forEach(i -> {
Broker broker = new Broker(i.rack(), i.id(), i.host(), false, i.capacity());
broker.load().addLoad(i.load());
_balanceBrokersBefore.add(broker);
});
}
public void setBalanceBrokersFormAfter(Set<Broker> balanceBrokersAfter) {
_balanceBrokersAfter = balanceBrokersAfter;
}
public void setExecutionProposal(Set<ExecutionProposal> proposals) {
_proposals = proposals;
}
// exposed for testing
public ClusterModel clusterModel() {
return clusterModel;
}
}
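
One note on the output (an observation, not a change): resultJsonTask() emits the same JSON layout that Kafka's kafka-reassign-partitions.sh consumes, so the result can be fed to that tool directly. With a single hypothetical task it would look like:

{"version":1,"partitions":[{"topic":"topicA","partition":0,"replicas":[1,2,3]}]}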

View File

@@ -0,0 +1,78 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.executor.common;
import java.util.List;
public class TopicChangeHistory {
// Topic being rebalanced
private String topic;
// Partition being rebalanced
private int partition;
// Broker ID of the old leader
private int oldLeader;
// Replica placement before rebalancing
private List<Integer> balanceBefore;
// Broker ID of the new leader
private int newLeader;
// Replica placement after rebalancing
private List<Integer> balanceAfter;
public String getTopic() {
return topic;
}
public void setTopic(String topic) {
this.topic = topic;
}
public int getPartition() {
return partition;
}
public void setPartition(int partition) {
this.partition = partition;
}
public int getOldLeader() {
return oldLeader;
}
public void setOldLeader(int oldLeader) {
this.oldLeader = oldLeader;
}
public List<Integer> getBalanceBefore() {
return balanceBefore;
}
public void setBalanceBefore(List<Integer> balanceBefore) {
this.balanceBefore = balanceBefore;
}
public int getNewLeader() {
return newLeader;
}
public void setNewLeader(int newLeader) {
this.newLeader = newLeader;
}
public List<Integer> getBalanceAfter() {
return balanceAfter;
}
public void setBalanceAfter(List<Integer> balanceAfter) {
this.balanceAfter = balanceAfter;
}
@Override
public String toString() {
return "TopicChangeHistory{" +
"topic='" + topic + '\'' +
", partition='" + partition + '\'' +
", oldLeader=" + oldLeader +
", balanceBefore=" + balanceBefore +
", newLeader=" + newLeader +
", balanceAfter=" + balanceAfter +
'}';
}
}

View File

@@ -0,0 +1,51 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.metric;
/**
* @author leewei
* @date 2022/5/12
*/
public class Metric {
private String topic;
private int partition;
private double cpu;
private double bytesIn;
private double bytesOut;
private double disk;
public Metric() {
}
public Metric(String topic, int partition, double cpu, double bytesIn, double bytesOut, double disk) {
this.topic = topic;
this.partition = partition;
this.cpu = cpu;
this.bytesIn = bytesIn;
this.bytesOut = bytesOut;
this.disk = disk;
}
public String topic() {
return topic;
}
public int partition() {
return partition;
}
public double cpu() {
return cpu;
}
public double bytesIn() {
return bytesIn;
}
public double bytesOut() {
return bytesOut;
}
public double disk() {
return disk;
}
}

View File

@@ -0,0 +1,9 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.metric;
/**
* @author leewei
* @date 2022/4/29
*/
public interface MetricStore {
Metrics getMetrics(String clusterName, int beforeSeconds);
}

View File

@@ -0,0 +1,46 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.metric;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.Load;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.Resource;
import org.apache.kafka.common.TopicPartition;
import java.util.*;
/**
* @author leewei
* @date 2022/4/29
*/
public class Metrics {
private final Map<TopicPartition, Metric> metricByTopicPartition;
public Metrics() {
this.metricByTopicPartition = new HashMap<>();
}
public void addMetrics(Metric metric) {
TopicPartition topicPartition = new TopicPartition(metric.topic(), metric.partition());
this.metricByTopicPartition.put(topicPartition, metric);
}
public List<Metric> values() {
return Collections.unmodifiableList(new ArrayList<>(this.metricByTopicPartition.values()));
}
public Metric metric(TopicPartition topicPartition) {
return this.metricByTopicPartition.get(topicPartition);
}
public Load load(TopicPartition topicPartition) {
Metric metric = this.metricByTopicPartition.get(topicPartition);
if (metric == null) {
return null;
}
Load load = new Load();
load.setLoad(Resource.CPU, metric.cpu());
load.setLoad(Resource.NW_IN, metric.bytesIn());
load.setLoad(Resource.NW_OUT, metric.bytesOut());
load.setLoad(Resource.DISK, metric.disk());
return load;
}
}
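
A short sketch (hypothetical values) of how one partition's metric turns into a per-resource Load:

import com.xiaojukeji.know.streaming.km.rebalance.algorithm.metric.Metric;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.metric.Metrics;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.Load;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.Resource;
import org.apache.kafka.common.TopicPartition;

public class MetricsSketch {
    public static void main(String[] args) {
        Metrics metrics = new Metrics();
        // Metric(topic, partition, cpu, bytesIn, bytesOut, disk) -- hypothetical numbers
        metrics.addMetrics(new Metric("topicA", 0, 0.1, 1024.0, 2048.0, 4096.0));
        Load load = metrics.load(new TopicPartition("topicA", 0));
        System.out.println(load.loadFor(Resource.NW_IN)); // 1024.0 (bytesIn)
        System.out.println(load.loadFor(Resource.DISK));  // 4096.0 (disk)
    }
}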

View File

@@ -0,0 +1,124 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.metric.elasticsearch;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.metric.Metric;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.metric.MetricStore;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.metric.Metrics;
import org.apache.commons.io.IOUtils;
import org.apache.commons.lang3.StringUtils;
import org.apache.http.Header;
import org.apache.http.HttpHost;
import org.apache.http.message.BasicHeader;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.util.*;
/**
* @author leewei
* @date 2022/4/29
*/
public class ElasticsearchMetricStore implements MetricStore {
private final Logger logger = LoggerFactory.getLogger(ElasticsearchMetricStore.class);
private final ObjectMapper objectMapper = new ObjectMapper();
private final String hosts;
private final String password;
private final String indexPrefix;
private final String format;
public ElasticsearchMetricStore(String hosts, String password, String indexPrefix) {
this(hosts, password, indexPrefix, "yyyy-MM-dd");
}
public ElasticsearchMetricStore(String hosts, String password, String indexPrefix, String format) {
this.hosts = hosts;
this.password = password;
this.indexPrefix = indexPrefix;
this.format = format;
}
@Override
public Metrics getMetrics(String clusterName, int beforeSeconds) {
Metrics metrics = new Metrics();
try {
String metricsQueryJson = IOUtils.resourceToString("/MetricsQuery.json", StandardCharsets.UTF_8);
metricsQueryJson = metricsQueryJson.replaceAll("<var_before_time>", Integer.toString(beforeSeconds))
.replaceAll("<var_cluster_name>", clusterName);
List<Header> defaultHeaders = new ArrayList<>();
if (StringUtils.isNotBlank(password)) {
String encode = Base64.getEncoder().encodeToString(this.password.getBytes(StandardCharsets.UTF_8));
Header header = new BasicHeader("Authorization", "Basic " + encode);
defaultHeaders.add(header);
}
Header[] headers = new Header[defaultHeaders.size()];
defaultHeaders.toArray(headers);
try (RestClient restClient = RestClient.builder(toHttpHosts(this.hosts)).setDefaultHeaders(headers).build()) {
Request request = new Request(
"GET",
"/" + indices(beforeSeconds) + "/_search");
request.setJsonEntity(metricsQueryJson);
logger.debug("Es metrics query for cluster: {} request: {} dsl: {}", clusterName, request, metricsQueryJson);
Response response = restClient.performRequest(request);
if (response.getStatusLine().getStatusCode() == 200) {
JsonNode rootNode = objectMapper.readTree(response.getEntity().getContent());
JsonNode topics = rootNode.at("/aggregations/by_topic/buckets");
for (JsonNode topic : topics) {
String topicName = topic.path("key").asText();
JsonNode partitions = topic.at("/by_partition/buckets");
for (JsonNode partition : partitions) {
int partitionId = partition.path("key").asInt();
// double cpu = partition.at("/avg_cpu/value").asDouble();
double cpu = 0D;
double bytesIn = partition.at("/avg_bytes_in/value").asDouble();
double bytesOut = partition.at("/avg_bytes_out/value").asDouble();
double disk = partition.at("/lastest_disk/hits/hits/0/_source/metrics/LogSize").asDouble();
// collect this partition's metric
metrics.addMetrics(new Metric(topicName, partitionId, cpu, bytesIn, bytesOut, disk));
}
}
}
}
} catch (IOException e) {
throw new IllegalArgumentException("Cannot get metrics of cluster: " + clusterName, e);
}
logger.debug("Es metrics query for cluster: {} result count: {}", clusterName, metrics.values().size());
return metrics;
}
private String indices(long beforeSeconds) {
Set<String> indices = new TreeSet<>();
DateFormat df = new SimpleDateFormat(this.format);
long endTime = System.currentTimeMillis();
long time = endTime - (beforeSeconds * 1000);
while (time < endTime) {
indices.add(this.indexPrefix + df.format(new Date(time)));
time += 24 * 60 * 60 * 1000; // add 24h
}
indices.add(this.indexPrefix + df.format(new Date(endTime)));
return String.join(",", indices);
}
private static HttpHost[] toHttpHosts(String url) {
String[] nodes = url.split(",");
HttpHost[] hosts = new HttpHost[nodes.length];
for (int i = 0; i < nodes.length; i++) {
String[] ipAndPort = nodes[i].split(":");
hosts[i] = new HttpHost(ipAndPort[0], ipAndPort.length > 1 ? Integer.parseInt(ipAndPort[1]) : 9200);
}
return hosts;
}
}
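
A worked example of indices() under the default "yyyy-MM-dd" format (prefix and dates are hypothetical): with a 48-hour lookback the query fans out to one daily index per day in the window, plus the current day, comma-joined:

// indexPrefix = "ks_kafka_metric_", beforeSeconds = 172800 (48h), queried on 2022-05-12
// -> "ks_kafka_metric_2022-05-10,ks_kafka_metric_2022-05-11,ks_kafka_metric_2022-05-12"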

View File

@@ -0,0 +1,222 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.model;
import org.apache.kafka.common.TopicPartition;
import java.util.*;
import java.util.function.Predicate;
import java.util.stream.Collectors;
/**
* @author leewei
* @date 2022/4/29
*/
public class Broker implements Comparable<Broker> {
public static final Broker NONE = new Broker(new Rack("-1"), -1, "localhost", true, new Capacity());
private final Rack rack;
private final int id;
private final String host;
private final boolean isOffline;
private final Set<Replica> replicas;
private final Set<Replica> leaderReplicas;
private final Map<String, Map<Integer, Replica>> topicReplicas;
private final Load load;
private final Capacity capacity;
public Broker(Rack rack, int id, String host, boolean isOffline, Capacity capacity) {
this.rack = rack;
this.id = id;
this.host = host;
this.isOffline = isOffline;
this.replicas = new HashSet<>();
this.leaderReplicas = new HashSet<>();
this.topicReplicas = new HashMap<>();
this.load = new Load();
this.capacity = capacity;
}
public Rack rack() {
return rack;
}
public int id() {
return id;
}
public String host() {
return host;
}
public boolean isOffline() {
return isOffline;
}
public Set<Replica> replicas() {
return Collections.unmodifiableSet(this.replicas);
}
public SortedSet<Replica> sortedReplicasFor(Resource resource, boolean reverse) {
return sortedReplicasFor(null, resource, reverse);
}
public SortedSet<Replica> sortedReplicasFor(Predicate<? super Replica> filter, Resource resource, boolean reverse) {
Comparator<Replica> comparator =
Comparator.<Replica>comparingDouble(r -> r.load().loadFor(resource))
.thenComparingInt(Replica::hashCode);
if (reverse)
comparator = comparator.reversed();
SortedSet<Replica> sortedReplicas = new TreeSet<>(comparator);
if (filter == null) {
sortedReplicas.addAll(this.replicas);
} else {
sortedReplicas.addAll(this.replicas.stream()
.filter(filter).collect(Collectors.toList()));
}
return sortedReplicas;
}
public Set<Replica> leaderReplicas() {
return Collections.unmodifiableSet(this.leaderReplicas);
}
public Load load() {
return load;
}
public Capacity capacity() {
return capacity;
}
public double utilizationFor(Resource resource) {
return this.load.loadFor(resource) / this.capacity.capacityFor(resource);
}
public double expectedUtilizationAfterAdd(Resource resource, Load loadToChange) {
return (this.load.loadFor(resource) + ((loadToChange == null) ? 0 : loadToChange.loadFor(resource)))
/ this.capacity.capacityFor(resource);
}
public double expectedUtilizationAfterRemove(Resource resource, Load loadToChange) {
return (this.load.loadFor(resource) - ((loadToChange == null) ? 0 : loadToChange.loadFor(resource)))
/ this.capacity.capacityFor(resource);
}
public Replica replica(TopicPartition topicPartition) {
Map<Integer, Replica> replicas = this.topicReplicas.get(topicPartition.topic());
if (replicas == null) {
return null;
}
return replicas.get(topicPartition.partition());
}
void addReplica(Replica replica) {
// Add replica to list of all replicas in the broker.
if (this.replicas.contains(replica)) {
throw new IllegalStateException(String.format("Broker %d already has replica %s", this.id,
replica.topicPartition()));
}
this.replicas.add(replica);
// Add topic replica.
this.topicReplicas.computeIfAbsent(replica.topicPartition().topic(), t -> new HashMap<>())
.put(replica.topicPartition().partition(), replica);
// Add leader replica.
if (replica.isLeader()) {
this.leaderReplicas.add(replica);
}
// Add replica load to the broker load.
this.load.addLoad(replica.load());
}
Replica removeReplica(TopicPartition topicPartition) {
Replica replica = replica(topicPartition);
if (replica != null) {
this.replicas.remove(replica);
Map<Integer, Replica> replicas = this.topicReplicas.get(topicPartition.topic());
if (replicas != null) {
replicas.remove(topicPartition.partition());
}
if (replica.isLeader()) {
this.leaderReplicas.remove(replica);
}
this.load.subtractLoad(replica.load());
}
return replica;
}
Load makeFollower(TopicPartition topicPartition) {
Replica replica = replica(topicPartition);
Load leaderLoadDelta = replica.makeFollower();
// Remove leadership load from load.
this.load.subtractLoad(leaderLoadDelta);
this.leaderReplicas.remove(replica);
return leaderLoadDelta;
}
void makeLeader(TopicPartition topicPartition, Load leaderLoadDelta) {
Replica replica = replica(topicPartition);
replica.makeLeader(leaderLoadDelta);
// Add leadership load to load.
this.load.addLoad(leaderLoadDelta);
this.leaderReplicas.add(replica);
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
Broker broker = (Broker) o;
return id == broker.id;
}
@Override
public int hashCode() {
return Objects.hash(id);
}
@Override
public int compareTo(Broker o) {
return Integer.compare(id, o.id());
}
@Override
public String toString() {
return "Broker{" +
"id=" + id +
", host='" + host + '\'' +
", rack=" + rack.id() +
", replicas=" + replicas +
", leaderReplicas=" + leaderReplicas +
", topicReplicas=" + topicReplicas +
", load=" + load +
", capacity=" + capacity +
'}';
}
public int numLeadersFor(String topicName) {
return (int) replicasOfTopicInBroker(topicName).stream().filter(Replica::isLeader).count();
}
public Set<String> topics() {
return topicReplicas.keySet();
}
public int numReplicasOfTopicInBroker(String topic) {
Map<Integer, Replica> replicaMap = topicReplicas.get(topic);
return replicaMap == null ? 0 : replicaMap.size();
}
public Collection<Replica> replicasOfTopicInBroker(String topic) {
Map<Integer, Replica> replicaMap = topicReplicas.get(topic);
return replicaMap == null ? Collections.emptySet() : replicaMap.values();
}
public Set<Replica> currentOfflineReplicas() {
return replicas.stream().filter(Replica::isCurrentOffline).collect(Collectors.toSet());
}
}
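
A minimal sketch (hypothetical numbers) of the utilization projections that the balancing goals rely on when deciding whether a replica move helps:

import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.*;

public class BrokerUtilizationSketch {
    public static void main(String[] args) {
        Capacity capacity = new Capacity();
        capacity.setCapacity(Resource.DISK, 1_000.0);
        Broker broker = new Broker(new Rack("r1"), 1, "host-1", false, capacity);
        Load delta = new Load();
        delta.setLoad(Resource.DISK, 250.0);
        System.out.println(broker.utilizationFor(Resource.DISK));                     // 0.0, no replicas yet
        System.out.println(broker.expectedUtilizationAfterAdd(Resource.DISK, delta)); // 0.25 if the load moved in
    }
}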

View File

@@ -0,0 +1,36 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.model;
import java.util.Arrays;
/**
* @author leewei
* @date 2022/5/9
*/
public class Capacity {
private final double[] values;
public Capacity() {
this.values = new double[Resource.values().length];
}
public void setCapacity(Resource resource, double capacity) {
this.values[resource.id()] = capacity;
}
public double capacityFor(Resource resource) {
return this.values[resource.id()];
}
public void addCapacity(Capacity capacityToAdd) {
for (Resource resource : Resource.values()) {
this.setCapacity(resource, this.capacityFor(resource) + capacityToAdd.capacityFor(resource));
}
}
@Override
public String toString() {
return "Capacity{" +
"values=" + Arrays.toString(values) +
'}';
}
}

View File

@@ -0,0 +1,236 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.model;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.executor.common.BalanceActionHistory;
import org.apache.kafka.common.TopicPartition;
import java.util.*;
import java.util.function.Predicate;
import java.util.stream.Collectors;
/**
* @author leewei
* @date 2022/4/29
*/
public class ClusterModel {
private final Map<String, Rack> racksById;
private final Map<Integer, Broker> brokersById;
private final Map<String, Map<TopicPartition, Partition>> partitionsByTopic;
private Map<TopicPartition, List<BalanceActionHistory>> balanceActionHistory;
public ClusterModel() {
this.racksById = new HashMap<>();
this.brokersById = new HashMap<>();
this.partitionsByTopic = new HashMap<>();
this.balanceActionHistory = new HashMap<>();
}
public Rack rack(String rackId) {
return this.racksById.get(rackId);
}
public Rack addRack(String rackId) {
Rack rack = new Rack(rackId);
this.racksById.putIfAbsent(rackId, rack);
return this.racksById.get(rackId);
}
public SortedSet<Broker> brokers() {
return new TreeSet<>(this.brokersById.values());
}
public Set<String> topics() {
return this.partitionsByTopic.keySet();
}
public SortedSet<Partition> topic(String name) {
return new TreeSet<>(this.partitionsByTopic.get(name).values());
}
public SortedSet<Broker> sortedBrokersFor(Resource resource, boolean reverse) {
return sortedBrokersFor(null, resource, reverse);
}
public SortedSet<Broker> sortedBrokersFor(Predicate<? super Broker> filter, Resource resource, boolean reverse) {
Comparator<Broker> comparator =
Comparator.<Broker>comparingDouble(b -> b.utilizationFor(resource))
.thenComparingInt(Broker::id);
if (reverse)
comparator = comparator.reversed();
SortedSet<Broker> sortedBrokers = new TreeSet<>(comparator);
if (filter == null) {
sortedBrokers.addAll(this.brokersById.values());
} else {
sortedBrokers.addAll(this.brokersById.values().stream()
.filter(filter).collect(Collectors.toList()));
}
return sortedBrokers;
}
public Load load() {
Load load = new Load();
for (Broker broker : this.brokersById.values()) {
load.addLoad(broker.load());
}
return load;
}
public Capacity capacity() {
Capacity capacity = new Capacity();
for (Broker broker : this.brokersById.values()) {
capacity.addCapacity(broker.capacity());
}
return capacity;
}
public double utilizationFor(Resource resource) {
return load().loadFor(resource) / capacity().capacityFor(resource);
}
public double[] avgOfUtilization() {
Load load = load();
Capacity capacity = capacity();
double[] utilizations = new double[Resource.values().length];
for (Resource resource : Resource.values()) {
utilizations[resource.id()] = load.loadFor(resource) / capacity.capacityFor(resource);
}
return utilizations;
}
public Broker broker(int brokerId) {
return this.brokersById.get(brokerId);
}
public Broker addBroker(String rackId, int brokerId, String host, boolean isOffline, Capacity capacity) {
Rack rack = rack(rackId);
if (rack == null)
throw new IllegalArgumentException("Rack: " + rackId + "is not exists.");
Broker broker = new Broker(rack, brokerId, host, isOffline, capacity);
rack.addBroker(broker);
this.brokersById.put(brokerId, broker);
return broker;
}
public Replica addReplica(int brokerId, TopicPartition topicPartition, boolean isLeader, Load load) {
return addReplica(brokerId, topicPartition, isLeader, false, load);
}
public Replica addReplica(int brokerId, TopicPartition topicPartition, boolean isLeader, boolean isOffline, Load load) {
Broker broker = broker(brokerId);
if (broker == null) {
throw new IllegalArgumentException("Broker: " + brokerId + "is not exists.");
}
Replica replica = new Replica(broker, topicPartition, isLeader, isOffline);
replica.setLoad(load);
// add to broker
broker.addReplica(replica);
Map<TopicPartition, Partition> partitions = this.partitionsByTopic
.computeIfAbsent(topicPartition.topic(), k -> new HashMap<>());
Partition partition = partitions.computeIfAbsent(topicPartition, Partition::new);
if (isLeader) {
partition.addLeader(replica, 0);
} else {
partition.addFollower(replica, partition.replicas().size());
}
return replica;
}
public Replica removeReplica(int brokerId, TopicPartition topicPartition) {
Broker broker = broker(brokerId);
return broker.removeReplica(topicPartition);
}
public void relocateLeadership(String goal, String actionType, TopicPartition topicPartition, int sourceBrokerId, int destinationBrokerId) {
relocateLeadership(topicPartition, sourceBrokerId, destinationBrokerId);
addBalanceActionHistory(goal, actionType, topicPartition, sourceBrokerId, destinationBrokerId);
}
public void relocateLeadership(TopicPartition topicPartition, int sourceBrokerId, int destinationBrokerId) {
Broker sourceBroker = broker(sourceBrokerId);
Replica sourceReplica = sourceBroker.replica(topicPartition);
if (!sourceReplica.isLeader()) {
throw new IllegalArgumentException("Cannot relocate leadership of partition " + topicPartition + "from broker "
+ sourceBrokerId + " to broker " + destinationBrokerId
+ " because the source replica isn't leader.");
}
Broker destinationBroker = broker(destinationBrokerId);
Replica destinationReplica = destinationBroker.replica(topicPartition);
if (destinationReplica.isLeader()) {
throw new IllegalArgumentException("Cannot relocate leadership of partition " + topicPartition + "from broker "
+ sourceBrokerId + " to broker " + destinationBrokerId
+ " because the destination replica is a leader.");
}
Load leaderLoadDelta = sourceBroker.makeFollower(topicPartition);
destinationBroker.makeLeader(topicPartition, leaderLoadDelta);
Partition partition = this.partitionsByTopic.get(topicPartition.topic()).get(topicPartition);
partition.relocateLeadership(destinationReplica);
}
public void relocateReplica(String goal, String actionType, TopicPartition topicPartition, int sourceBrokerId, int destinationBrokerId) {
relocateReplica(topicPartition, sourceBrokerId, destinationBrokerId);
addBalanceActionHistory(goal, actionType, topicPartition, sourceBrokerId, destinationBrokerId);
}
public void relocateReplica(TopicPartition topicPartition, int sourceBrokerId, int destinationBrokerId) {
Replica replica = removeReplica(sourceBrokerId, topicPartition);
if (replica == null) {
throw new IllegalArgumentException("Replica is not in the cluster.");
}
Broker destinationBroker = broker(destinationBrokerId);
replica.setBroker(destinationBroker);
destinationBroker.addReplica(replica);
}
private void addBalanceActionHistory(String goal, String actionType, TopicPartition topicPartition, int sourceBrokerId, int destinationBrokerId) {
BalanceActionHistory history = new BalanceActionHistory();
history.setActionType(actionType);
history.setGoal(goal);
history.setTopic(topicPartition.topic());
history.setPartition(topicPartition.partition());
history.setSourceBrokerId(sourceBrokerId);
history.setDestinationBrokerId(destinationBrokerId);
this.balanceActionHistory.computeIfAbsent(topicPartition, k -> new ArrayList<>()).add(history);
}
public Map<String, Integer> numLeadersPerTopic(Set<String> topics) {
Map<String, Integer> leaderCountByTopicNames = new HashMap<>();
topics.forEach(topic -> leaderCountByTopicNames.put(topic, partitionsByTopic.get(topic).size()));
return leaderCountByTopicNames;
}
public Map<TopicPartition, List<ReplicaPlacementInfo>> getReplicaDistribution() {
Map<TopicPartition, List<ReplicaPlacementInfo>> replicaDistribution = new HashMap<>();
for (Map<TopicPartition, Partition> tp : partitionsByTopic.values()) {
tp.values().forEach(i -> {
i.replicas().forEach(j -> replicaDistribution.computeIfAbsent(j.topicPartition(), k -> new ArrayList<>())
.add(new ReplicaPlacementInfo(j.broker().id(), "")));
});
}
return replicaDistribution;
}
public Replica partition(TopicPartition tp) {
return partitionsByTopic.get(tp.topic()).get(tp).leader();
}
public Map<TopicPartition, ReplicaPlacementInfo> getLeaderDistribution() {
Map<TopicPartition, ReplicaPlacementInfo> leaderDistribution = new HashMap<>();
for (Broker broker : brokersById.values()) {
broker.leaderReplicas().forEach(i -> leaderDistribution.put(i.topicPartition(), new ReplicaPlacementInfo(broker.id(), "")));
}
return leaderDistribution;
}
public int numTopicReplicas(String topic) {
return partitionsByTopic.get(topic).size();
}
public Map<TopicPartition, List<BalanceActionHistory>> balanceActionHistory() {
return this.balanceActionHistory;
}
}
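
A small end-to-end sketch (hypothetical brokers and loads) showing that relocating leadership moves only the leader-side NW_OUT load between brokers:

import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.*;
import org.apache.kafka.common.TopicPartition;

public class LeadershipRelocationSketch {
    public static void main(String[] args) {
        ClusterModel model = new ClusterModel();
        model.addRack("r1");
        Capacity capacity = new Capacity();
        capacity.setCapacity(Resource.NW_OUT, 10_000.0);
        model.addBroker("r1", 1, "host-1", false, capacity);
        model.addBroker("r1", 2, "host-2", false, capacity);
        TopicPartition tp = new TopicPartition("topicA", 0);
        Load leaderLoad = new Load();
        leaderLoad.setLoad(Resource.NW_OUT, 2_000.0);
        model.addReplica(1, tp, true, leaderLoad);   // leader on broker 1
        model.addReplica(2, tp, false, new Load());  // follower on broker 2
        model.relocateLeadership(tp, 1, 2);          // the 2000 NW_OUT follows the leader
        System.out.println(model.broker(2).utilizationFor(Resource.NW_OUT)); // 0.2
    }
}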

View File

@@ -0,0 +1,42 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.model;
import java.util.Arrays;
/**
* @author leewei
* @date 2022/5/9
*/
public class Load {
private final double[] values;
public Load() {
this.values = new double[Resource.values().length];
}
public void setLoad(Resource resource, double load) {
this.values[resource.id()] = load;
}
public double loadFor(Resource resource) {
return this.values[resource.id()];
}
public void addLoad(Load loadToAdd) {
for (Resource resource : Resource.values()) {
this.setLoad(resource, this.loadFor(resource) + loadToAdd.loadFor(resource));
}
}
public void subtractLoad(Load loadToSubtract) {
for (Resource resource : Resource.values()) {
this.setLoad(resource, this.loadFor(resource) - loadToSubtract.loadFor(resource));
}
}
@Override
public String toString() {
return "Load{" +
"values=" + Arrays.toString(values) +
'}';
}
}

View File

@@ -0,0 +1,148 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.model;
import org.apache.kafka.common.TopicPartition;
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;
import java.util.stream.Collectors;
/**
* @author leewei
* @date 2022/5/11
*/
public class Partition implements Comparable<Partition> {
private final TopicPartition topicPartition;
private final List<Replica> replicas;
public Partition(TopicPartition topicPartition) {
this.topicPartition = topicPartition;
this.replicas = new ArrayList<>();
}
public TopicPartition topicPartition() {
return topicPartition;
}
public List<Replica> replicas() {
return replicas;
}
public Broker originalLeaderBroker() {
return replicas.stream().filter(r -> r.original().isLeader())
.findFirst().orElseThrow(IllegalStateException::new).broker();
}
public Replica leader() {
return replicas.stream()
.filter(Replica::isLeader)
.findFirst()
.orElseThrow(() ->
new IllegalArgumentException("Not found leader of partition " + topicPartition)
);
}
public Replica leaderOrNull() {
return replicas.stream()
.filter(Replica::isLeader)
.findFirst()
.orElse(null);
}
public List<Replica> followers() {
return replicas.stream()
.filter(r -> !r.isLeader())
.collect(Collectors.toList());
}
Replica replica(long brokerId) {
return replicas.stream()
.filter(r -> r.broker().id() == brokerId)
.findFirst()
.orElseThrow(() ->
new IllegalArgumentException("Requested replica " + brokerId + " is not a replica of partition " + topicPartition)
);
}
public boolean isLeaderChanged() {
// return originalLeaderBroker() != this.leader().broker();
return replicas.stream().anyMatch(Replica::isLeaderChanged);
}
public boolean isChanged() {
return replicas.stream().anyMatch(Replica::isChanged);
}
void addLeader(Replica leader, int index) {
if (leaderOrNull() != null) {
throw new IllegalArgumentException(String.format("Partition %s already has a leader replica %s. Cannot "
+ "add a new leader replica %s", this.topicPartition, leaderOrNull(), leader));
}
if (!leader.isLeader()) {
throw new IllegalArgumentException("Inconsistent leadership information. Trying to set " + leader.broker()
+ " as the leader for partition " + this.topicPartition + " while the replica is not marked "
+ "as a leader.");
}
this.replicas.add(index, leader);
}
void addFollower(Replica follower, int index) {
if (follower.isLeader()) {
throw new IllegalArgumentException("Inconsistent leadership information. Trying to add follower replica "
+ follower + " while it is a leader.");
}
if (!follower.topicPartition().equals(this.topicPartition)) {
throw new IllegalArgumentException("Inconsistent topic partition. Trying to add follower replica " + follower
+ " to partition " + this.topicPartition + ".");
}
this.replicas.add(index, follower);
}
void relocateLeadership(Replica newLeader) {
if (!newLeader.isLeader()) {
throw new IllegalArgumentException("Inconsistent leadership information. Trying to set " + newLeader.broker()
+ " as the leader for partition " + this.topicPartition + " while the replica is not marked "
+ "as a leader.");
}
int leaderPos = this.replicas.indexOf(newLeader);
swapReplicaPositions(0, leaderPos);
}
void swapReplicaPositions(int index1, int index2) {
Replica replica1 = this.replicas.get(index1);
Replica replica2 = this.replicas.get(index2);
this.replicas.set(index2, replica1);
this.replicas.set(index1, replica2);
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
Partition partition = (Partition) o;
return topicPartition.equals(partition.topicPartition);
}
@Override
public int hashCode() {
return Objects.hash(topicPartition);
}
@Override
public String toString() {
return "Partition{" +
"topicPartition=" + topicPartition +
", replicas=" + replicas +
", originalLeaderBroker=" + originalLeaderBroker().id() +
", leader=" + leaderOrNull() +
'}';
}
@Override
public int compareTo(Partition o) {
return Integer.compare(topicPartition.partition(), o.topicPartition.partition());
}
}

View File

@@ -0,0 +1,67 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.model;
import java.util.*;
/**
* @author leewei
* @date 2022/5/9
*/
public class Rack {
private final String id;
private final SortedSet<Broker> brokers;
public Rack(String id) {
this.id = id;
this.brokers = new TreeSet<>();
}
public String id() {
return id;
}
public SortedSet<Broker> brokers() {
return Collections.unmodifiableSortedSet(this.brokers);
}
public Load load() {
Load load = new Load();
for (Broker broker : this.brokers) {
load.addLoad(broker.load());
}
return load;
}
public List<Replica> replicas() {
List<Replica> replicas = new ArrayList<>();
for (Broker broker : this.brokers) {
replicas.addAll(broker.replicas());
}
return replicas;
}
Broker addBroker(Broker broker) {
this.brokers.add(broker);
return broker;
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
Rack rack = (Rack) o;
return Objects.equals(id, rack.id);
}
@Override
public int hashCode() {
return Objects.hash(id);
}
@Override
public String toString() {
return "Rack{" +
"id='" + id + '\'' +
", brokers=" + brokers +
'}';
}
}

View File

@@ -0,0 +1,129 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.model;
import org.apache.kafka.common.TopicPartition;
import java.util.Objects;
/**
* @author leewei
* @date 2022/4/29
*/
public class Replica {
private final Load load;
private final Replica original;
private final TopicPartition topicPartition;
private Broker broker;
private boolean isLeader;
private boolean isOffline;
public Replica(Broker broker, TopicPartition topicPartition, boolean isLeader, boolean isOffline) {
this(broker, topicPartition, isLeader, isOffline, false);
}
private Replica(Broker broker, TopicPartition topicPartition, boolean isLeader, boolean isOffline, boolean isOriginal) {
if (isOriginal) {
this.original = null;
} else {
this.original = new Replica(broker, topicPartition, isLeader, isOffline, true);
}
this.load = new Load();
this.topicPartition = topicPartition;
this.broker = broker;
this.isLeader = isLeader;
this.isOffline = isOffline;
}
public TopicPartition topicPartition() {
return topicPartition;
}
public Replica original() {
return original;
}
public Broker broker() {
return broker;
}
public void setBroker(Broker broker) {
checkOriginal();
this.broker = broker;
}
public boolean isLeader() {
return isLeader;
}
public Load load() {
return load;
}
void setLoad(Load load) {
checkOriginal();
this.load.addLoad(load);
}
Load makeFollower() {
checkOriginal();
this.isLeader = false;
// TODO: recalculate the CPU share of the load
Load leaderLoadDelta = new Load();
leaderLoadDelta.setLoad(Resource.NW_OUT, this.load.loadFor(Resource.NW_OUT));
this.load.subtractLoad(leaderLoadDelta);
return leaderLoadDelta;
}
void makeLeader(Load leaderLoadDelta) {
checkOriginal();
this.isLeader = true;
this.load.addLoad(leaderLoadDelta);
}
public boolean isLeaderChanged() {
checkOriginal();
return this.original.isLeader != this.isLeader;
}
public boolean isChanged() {
checkOriginal();
return this.original.broker != this.broker || this.original.isLeader != this.isLeader;
}
private void checkOriginal() {
if (this.original == null) {
throw new IllegalStateException("This is a original replica, this operation is not supported.");
}
}
@Override
public boolean equals(Object o) {
checkOriginal();
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
Replica replica = (Replica) o;
return topicPartition.equals(replica.topicPartition) && this.original.broker.equals(replica.original.broker);
}
@Override
public int hashCode() {
checkOriginal();
return Objects.hash(topicPartition, this.original.broker);
}
@Override
public String toString() {
checkOriginal();
return "Replica{" +
"topicPartition=" + topicPartition +
", originalBroker=" + this.original.broker.id() +
", broker=" + broker.id() +
", originalIsLeader=" + this.original.isLeader +
", isLeader=" + isLeader +
", load=" + load +
'}';
}
// TODO: replica state handling, to be revisited
public boolean isCurrentOffline() {
return isOffline;
}
}

View File

@@ -0,0 +1,48 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.model;
import java.util.Objects;
public class ReplicaPlacementInfo {
private final int _brokerId;
private final String _logdir;
public ReplicaPlacementInfo(int brokerId, String logdir) {
_brokerId = brokerId;
_logdir = logdir;
}
public ReplicaPlacementInfo(Integer brokerId) {
this(brokerId, null);
}
public Integer brokerId() {
return _brokerId;
}
public String logdir() {
return _logdir;
}
@Override
public boolean equals(Object o) {
if (!(o instanceof ReplicaPlacementInfo)) {
return false;
}
ReplicaPlacementInfo info = (ReplicaPlacementInfo) o;
return _brokerId == info._brokerId && Objects.equals(_logdir, info._logdir);
}
@Override
public int hashCode() {
return Objects.hash(_brokerId, _logdir);
}
@Override
public String toString() {
if (_logdir == null) {
return String.format("{Broker: %d}", _brokerId);
} else {
return String.format("{Broker: %d, Logdir: %s}", _brokerId, _logdir);
}
}
}

View File

@@ -0,0 +1,29 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.model;
/**
* @author leewei
* @date 2022/5/10
*/
public enum Resource {
CPU("cpu", 0),
NW_IN("bytesIn", 1),
NW_OUT("bytesOut", 2),
DISK("disk", 3);
private final String resource;
private final int id;
Resource(String resource, int id) {
this.resource = resource;
this.id = id;
}
public String resource() {
return this.resource;
}
public int id() {
return this.id;
}
}

View File

@@ -0,0 +1,112 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.model;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.metric.MetricStore;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.metric.Metrics;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.metric.elasticsearch.ElasticsearchMetricStore;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.utils.MetadataUtils;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.Node;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;
import java.util.*;
import java.util.stream.Collectors;
/**
* @author leewei
* @date 2022/5/12
*/
public class Supplier {
public static Map<String, String> subConfig(Map<String, String> config, String prefix, boolean stripPrefix) {
return config.entrySet().stream()
.filter(e -> e.getKey().startsWith(prefix))
.collect(Collectors.toMap(e -> stripPrefix ? e.getKey().substring(prefix.length()) : e.getKey(),
Map.Entry::getValue));
}
public static ClusterModel load(String clusterName, int beforeSeconds, String kafkaBootstrapServer, String esUrls, String esPassword, String esIndexPrefix, Map<Integer, Capacity> capacitiesById, Set<String> ignoredTopics) {
Properties kafkaProperties = new Properties();
kafkaProperties.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaBootstrapServer);
return load(clusterName, beforeSeconds, kafkaProperties, esUrls, esPassword, esIndexPrefix, capacitiesById, ignoredTopics);
}
public static ClusterModel load(String clusterName, int beforeSeconds, Properties kafkaProperties, String esUrls, String esPassword, String esIndexPrefix, Map<Integer, Capacity> capacitiesById, Set<String> ignoredTopics) {
MetricStore store = new ElasticsearchMetricStore(esUrls, esPassword, esIndexPrefix);
Metrics metrics = store.getMetrics(clusterName, beforeSeconds);
return load(kafkaProperties, capacitiesById, metrics, ignoredTopics);
}
public static ClusterModel load(Properties kafkaProperties, Map<Integer, Capacity> capacitiesById, Metrics metrics, Set<String> ignoredTopics) {
ClusterModel model = new ClusterModel();
Cluster cluster = MetadataUtils.metadata(kafkaProperties);
// nodes
for (Node node: cluster.nodes()) {
addBroker(node, false, model, capacitiesById);
}
// replicas
cluster.topics()
.stream()
.filter(topic -> !ignoredTopics.contains(topic))
.forEach(topic -> {
List<PartitionInfo> partitions = cluster.partitionsForTopic(topic);
for (PartitionInfo partition : partitions) {
// TODO: partitions without a leader are skipped for now
if (partition.leader() == null) {
continue;
}
TopicPartition topicPartition = new TopicPartition(partition.topic(), partition.partition());
Load leaderLoad = metrics.load(topicPartition);
if (leaderLoad == null) {
if (partition.leader() == null) {
// set empty load
leaderLoad = new Load();
} else {
throw new IllegalArgumentException("Cannot get leader load of topic partiton: " + topicPartition);
}
}
// leader nw out + follower nw out
leaderLoad.setLoad(Resource.NW_OUT,
leaderLoad.loadFor(Resource.NW_OUT) +
leaderLoad.loadFor(Resource.NW_IN) * (partition.replicas().length - 1));
Load followerLoad = new Load();
followerLoad.addLoad(leaderLoad);
followerLoad.setLoad(Resource.NW_OUT, 0);
List<Node> offlineReplicas = Arrays.asList(partition.offlineReplicas());
for (Node n : partition.replicas()) {
boolean isLeader = partition.leader() != null && partition.leader().equals(n);
boolean isOffline = offlineReplicas.contains(n);
if (isOffline) {
if (model.broker(n.id()) == null) {
// add offline broker
addBroker(n, true, model, capacitiesById);
}
}
model.addReplica(n.id(), topicPartition, isLeader, isOffline, isLeader ? leaderLoad : followerLoad);
}
}
});
return model;
}
private static String rack(Node node) {
return (node.rack() == null || "".equals(node.rack())) ? node.host() : node.rack();
}
private static void addBroker(Node node, boolean isOffline, ClusterModel model, Map<Integer, Capacity> capacitiesById) {
// rack
Rack rack = model.addRack(rack(node));
// broker
Capacity capacity = capacitiesById.get(node.id());
if (capacity == null)
throw new IllegalArgumentException("Cannot get capacity of node: " + node);
model.addBroker(rack.id(), node.id(), node.host(), isOffline, capacity);
}
}
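
A wiring sketch for Supplier.load (the cluster name, endpoints and index prefix are placeholders; running it needs a reachable Kafka broker and Elasticsearch, plus a Capacity entry for every broker id in the cluster):

import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.*;
import java.util.*;

public class SupplierSketch {
    public static void main(String[] args) {
        Capacity capacity = new Capacity();
        capacity.setCapacity(Resource.CPU, 8);
        capacity.setCapacity(Resource.DISK, 2_000_000.0);
        capacity.setCapacity(Resource.NW_IN, 100_000.0);
        capacity.setCapacity(Resource.NW_OUT, 100_000.0);
        Map<Integer, Capacity> capacities = new HashMap<>();
        capacities.put(1, capacity); // one entry per broker id
        ClusterModel model = Supplier.load("demo-cluster", 3600,
                "127.0.0.1:9092", "127.0.0.1:9200", "", "ks_kafka_metric_",
                capacities, Collections.emptySet());
    }
}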

View File

@@ -0,0 +1,5 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer;
public enum ActionAcceptance {
ACCEPT, REJECT;
}

View File

@@ -0,0 +1,18 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer;
public enum ActionType {
REPLICA_MOVEMENT("REPLICA"),
LEADERSHIP_MOVEMENT("LEADER");
// REPLICA_SWAP("SWAP");
private final String _balancingAction;
ActionType(String balancingAction) {
_balancingAction = balancingAction;
}
@Override
public String toString() {
return _balancingAction;
}
}

View File

@@ -0,0 +1,73 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.ClusterModel;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.Replica;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.ReplicaPlacementInfo;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.Resource;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.goals.Goal;
import org.apache.commons.lang3.StringUtils;
import org.apache.kafka.common.TopicPartition;
import java.util.*;
import java.util.stream.Collectors;
import static com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.ActionAcceptance.ACCEPT;
public class AnalyzerUtils {
public static Set<String> getSplitTopics(String value) {
if (StringUtils.isBlank(value)) {
return new HashSet<>();
}
String[] arr = value.split(",");
return Arrays.stream(arr).collect(Collectors.toSet());
}
public static Set<Integer> getSplitBrokers(String value) {
if (StringUtils.isBlank(value)) {
return new HashSet<>();
}
String[] arr = value.split(",");
return Arrays.stream(arr).map(Integer::valueOf).collect(Collectors.toSet());
}
public static Set<ExecutionProposal> getDiff(Map<TopicPartition, List<ReplicaPlacementInfo>> initialReplicaDistribution,
Map<TopicPartition, ReplicaPlacementInfo> initialLeaderDistribution,
ClusterModel optimizedClusterModel) {
Map<TopicPartition, List<ReplicaPlacementInfo>> finalReplicaDistribution = optimizedClusterModel.getReplicaDistribution();
if (!initialReplicaDistribution.keySet().equals(finalReplicaDistribution.keySet())) {
throw new IllegalArgumentException("diff distributions with different partitions.");
}
Set<ExecutionProposal> diff = new HashSet<>();
for (Map.Entry<TopicPartition, List<ReplicaPlacementInfo>> entry : initialReplicaDistribution.entrySet()) {
TopicPartition tp = entry.getKey();
List<ReplicaPlacementInfo> initialReplicas = entry.getValue();
List<ReplicaPlacementInfo> finalReplicas = finalReplicaDistribution.get(tp);
Replica finalLeader = optimizedClusterModel.partition(tp);
ReplicaPlacementInfo finalLeaderPlacementInfo = new ReplicaPlacementInfo(finalLeader.broker().id(), "");
if (finalReplicas.equals(initialReplicas) && initialLeaderDistribution.get(tp).equals(finalLeaderPlacementInfo)) {
continue;
}
if (!finalLeaderPlacementInfo.equals(finalReplicas.get(0))) {
int leaderPos = finalReplicas.indexOf(finalLeaderPlacementInfo);
finalReplicas.set(leaderPos, finalReplicas.get(0));
finalReplicas.set(0, finalLeaderPlacementInfo);
}
double partitionSize = optimizedClusterModel.partition(tp).load().loadFor(Resource.DISK);
diff.add(new ExecutionProposal(tp, partitionSize, initialLeaderDistribution.get(tp), initialReplicas, finalReplicas));
}
return diff;
}
public static ActionAcceptance isProposalAcceptableForOptimizedGoals(Set<Goal> optimizedGoals,
BalancingAction proposal,
ClusterModel clusterModel) {
for (Goal optimizedGoal : optimizedGoals) {
ActionAcceptance actionAcceptance = optimizedGoal.actionAcceptance(proposal, clusterModel);
if (actionAcceptance != ACCEPT) {
return actionAcceptance;
}
}
return ACCEPT;
}
}

View File

@@ -0,0 +1,40 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer;
import org.apache.kafka.common.TopicPartition;
public class BalancingAction {
private final TopicPartition _tp;
private final Integer _sourceBrokerId;
private final Integer _destinationBrokerId;
private final ActionType _actionType;
public BalancingAction(TopicPartition tp,
Integer sourceBrokerId,
Integer destinationBrokerId,
ActionType actionType) {
_tp = tp;
_sourceBrokerId = sourceBrokerId;
_destinationBrokerId = destinationBrokerId;
_actionType = actionType;
}
public Integer sourceBrokerId() {
return _sourceBrokerId;
}
public Integer destinationBrokerId() {
return _destinationBrokerId;
}
public ActionType balancingAction() {
return _actionType;
}
public TopicPartition topicPartition() {
return _tp;
}
public String topic() {
return _tp.topic();
}
}

View File

@@ -0,0 +1,72 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.ReplicaPlacementInfo;
import org.apache.kafka.common.TopicPartition;
import java.util.*;
import java.util.stream.Collectors;
public class ExecutionProposal {
private final TopicPartition _tp;
private final double _partitionSize;
private final ReplicaPlacementInfo _oldLeader;
private final List<ReplicaPlacementInfo> _oldReplicas;
private final List<ReplicaPlacementInfo> _newReplicas;
private final Set<ReplicaPlacementInfo> _replicasToAdd;
private final Set<ReplicaPlacementInfo> _replicasToRemove;
public ExecutionProposal(TopicPartition tp,
double partitionSize,
ReplicaPlacementInfo oldLeader,
List<ReplicaPlacementInfo> oldReplicas,
List<ReplicaPlacementInfo> newReplicas) {
_tp = tp;
_partitionSize = partitionSize;
_oldLeader = oldLeader;
_oldReplicas = oldReplicas == null ? Collections.emptyList() : oldReplicas;
_newReplicas = newReplicas;
Set<Integer> newBrokerList = _newReplicas.stream().mapToInt(ReplicaPlacementInfo::brokerId).boxed().collect(Collectors.toSet());
Set<Integer> oldBrokerList = _oldReplicas.stream().mapToInt(ReplicaPlacementInfo::brokerId).boxed().collect(Collectors.toSet());
_replicasToAdd = _newReplicas.stream().filter(r -> !oldBrokerList.contains(r.brokerId())).collect(Collectors.toSet());
_replicasToRemove = _oldReplicas.stream().filter(r -> !newBrokerList.contains(r.brokerId())).collect(Collectors.toSet());
}
public TopicPartition tp() {
return _tp;
}
public double partitionSize() {
return _partitionSize;
}
public ReplicaPlacementInfo oldLeader() {
return _oldLeader;
}
public List<ReplicaPlacementInfo> oldReplicas() {
return _oldReplicas;
}
public List<ReplicaPlacementInfo> newReplicas() {
return _newReplicas;
}
public Map<Integer, Double[]> replicasToAdd() {
Map<Integer, Double[]> addData = new HashMap<>();
_replicasToAdd.forEach(i -> {
Double[] total = {1d, _partitionSize};
addData.put(i.brokerId(), total);
});
return Collections.unmodifiableMap(addData);
}
public Map<Integer, Double[]> replicasToRemove() {
Map<Integer, Double[]> removeData = new HashMap<>();
_replicasToRemove.forEach(i -> {
Double[] total = {1d, _partitionSize};
removeData.put(i.brokerId(), total);
});
return Collections.unmodifiableMap(removeData);
}
}
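Editor's sketch: the constructor above derives replicasToAdd/replicasToRemove purely from the broker-id sets of the old and new assignments. A minimal, hypothetical usage, assuming ReplicaPlacementInfo exposes an int-argument constructor (only its brokerId() accessor appears in this diff); topic name and size are made up:

package com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer;

import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.ReplicaPlacementInfo;
import org.apache.kafka.common.TopicPartition;
import java.util.Arrays;
import java.util.List;

public class ExecutionProposalSketch {
    public static void main(String[] args) {
        // Move demo-topic-0 (1024 bytes) from brokers [1, 2] to [2, 3].
        TopicPartition tp = new TopicPartition("demo-topic", 0);
        List<ReplicaPlacementInfo> oldReplicas = Arrays.asList(new ReplicaPlacementInfo(1), new ReplicaPlacementInfo(2));
        List<ReplicaPlacementInfo> newReplicas = Arrays.asList(new ReplicaPlacementInfo(2), new ReplicaPlacementInfo(3));
        ExecutionProposal proposal = new ExecutionProposal(tp, 1024d, oldReplicas.get(0), oldReplicas, newReplicas);
        // Broker 3 appears only in the new assignment, broker 1 only in the old one;
        // each map value is {replica count, partition size}.
        proposal.replicasToAdd().forEach((id, t) -> System.out.println("add on broker " + id + ": " + t[1] + " bytes"));
        proposal.replicasToRemove().forEach((id, t) -> System.out.println("remove from broker " + id + ": " + t[1] + " bytes"));
    }
}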


@@ -0,0 +1,48 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.executor.common.OptimizerResult;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.ClusterModel;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.ReplicaPlacementInfo;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.goals.Goal;
import org.apache.kafka.common.TopicPartition;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.*;
/**
* @author leewei
* @date 2022/4/29
*/
public class GoalOptimizer {
private static final Logger logger = LoggerFactory.getLogger(GoalOptimizer.class);
public OptimizerResult optimizations(ClusterModel clusterModel, OptimizationOptions optimizationOptions) {
Set<Goal> optimizedGoals = new HashSet<>();
OptimizerResult optimizerResult = new OptimizerResult(clusterModel, optimizationOptions);
optimizerResult.setBalanceBrokersFormBefore(clusterModel.brokers());
Map<TopicPartition, List<ReplicaPlacementInfo>> initReplicaDistribution = clusterModel.getReplicaDistribution();
Map<TopicPartition, ReplicaPlacementInfo> initLeaderDistribution = clusterModel.getLeaderDistribution();
try {
Map<String, Goal> goalMap = new HashMap<>();
ServiceLoader<Goal> serviceLoader = ServiceLoader.load(Goal.class);
for (Goal goal : serviceLoader) {
goalMap.put(goal.name(), goal);
}
for (String g : optimizationOptions.goals()) {
Goal goal = goalMap.get(g);
if (goal != null) {
logger.info("Start {} balancing", goal.name());
goal.optimize(clusterModel, optimizedGoals, optimizationOptions);
optimizedGoals.add(goal);
}
}
} catch (Exception e) {
logger.error("Cluster balancing goal error", e);
}
Set<ExecutionProposal> proposals = AnalyzerUtils.getDiff(initReplicaDistribution, initLeaderDistribution, clusterModel);
optimizerResult.setBalanceBrokersFormAfter(clusterModel.brokers());
optimizerResult.setExecutionProposal(proposals);
return optimizerResult;
}
}
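Two wiring details worth noting: goals are discovered through java.util.ServiceLoader, so every Goal implementation must be listed in META-INF/services/com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.goals.Goal, and only goals whose name() matches an entry in optimizationOptions.goals() run, in the order listed. A hypothetical invocation sketch; the BalanceParameter setters are assumed (this patch only shows its getters):

BalanceParameter parameter = new BalanceParameter();
parameter.setGoals(Arrays.asList("DiskDistributionGoal", "TopicLeadersDistributionGoal")); // assumed setter
ClusterModel clusterModel = GoalUtils.getInitClusterModel(parameter); // builds the model from ES metrics
OptimizerResult result = new GoalOptimizer().optimizations(clusterModel, new OptimizationOptions(parameter));
// result now carries the before/after broker snapshots and the ExecutionProposal set from AnalyzerUtils.getDiff(...)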


@@ -0,0 +1,60 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.executor.common.BalanceParameter;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.Resource;
import java.util.*;
public class OptimizationOptions {
private final Set<String> _excludedTopics;
private final Set<Integer> _offlineBrokers;
private final Set<Integer> _balanceBrokers;
private final Map<Resource, Double> _resourceBalancePercentage;
private final List<String> _goals;
private final BalanceParameter _parameter;
public OptimizationOptions(BalanceParameter parameter) {
_parameter = parameter;
_goals = parameter.getGoals();
_excludedTopics = AnalyzerUtils.getSplitTopics(parameter.getExcludedTopics());
_offlineBrokers = AnalyzerUtils.getSplitBrokers(parameter.getOfflineBrokers());
_balanceBrokers = AnalyzerUtils.getSplitBrokers(parameter.getBalanceBrokers());
_resourceBalancePercentage = new HashMap<>();
_resourceBalancePercentage.put(Resource.CPU, parameter.getCpuThreshold());
_resourceBalancePercentage.put(Resource.DISK, parameter.getDiskThreshold());
_resourceBalancePercentage.put(Resource.NW_IN, parameter.getNetworkInThreshold());
_resourceBalancePercentage.put(Resource.NW_OUT, parameter.getNetworkOutThreshold());
}
public Set<String> excludedTopics() {
return Collections.unmodifiableSet(_excludedTopics);
}
public Set<Integer> offlineBrokers() {
return Collections.unmodifiableSet(_offlineBrokers);
}
public Set<Integer> balanceBrokers() {
return Collections.unmodifiableSet(_balanceBrokers);
}
public double resourceBalancePercentageFor(Resource resource) {
return _resourceBalancePercentage.get(resource);
}
public List<String> goals() {
return Collections.unmodifiableList(_goals);
}
public double topicReplicaThreshold() {
return _parameter.getTopicReplicaThreshold();
}
public BalanceParameter parameter() {
return _parameter;
}
public double topicLeaderThreshold() {
return _parameter.getTopicLeaderThreshold();
}
}


@@ -0,0 +1,129 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.goals;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.Broker;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.ClusterModel;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.Replica;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.*;
import java.util.*;
import java.util.stream.Collectors;
public abstract class AbstractGoal implements Goal {
/**
* Balancing-algorithm logic for a single broker
*/
protected abstract void rebalanceForBroker(Broker broker, ClusterModel clusterModel, Set<Goal> optimizedGoals, OptimizationOptions optimizationOptions);
/**
* Run the balancing algorithm over every broker in the cluster
*/
@Override
public void optimize(ClusterModel clusterModel, Set<Goal> optimizedGoals, OptimizationOptions optimizationOptions) {
initGoalState(clusterModel, optimizationOptions);
SortedSet<Broker> brokersToBalance = clusterModel.brokers().stream()
.filter(b -> optimizationOptions.balanceBrokers().isEmpty()
|| optimizationOptions.balanceBrokers().contains(b.id()))
.collect(Collectors.toCollection(TreeSet::new));
for (Broker broker : brokersToBalance) {
rebalanceForBroker(broker, clusterModel, optimizedGoals, optimizationOptions);
}
}
protected abstract void initGoalState(ClusterModel clusterModel, OptimizationOptions optimizationOptions);
/**
* Given a replica chosen for balancing, the candidate destination brokers and the
* action type, apply the corresponding change to the cluster model
*/
protected Broker maybeApplyBalancingAction(ClusterModel clusterModel,
Replica replica,
Collection<Broker> candidateBrokers,
ActionType action,
Set<Goal> optimizedGoals,
OptimizationOptions optimizationOptions) {
List<Broker> eligibleBrokers = eligibleBrokers(replica, candidateBrokers, action, optimizationOptions);
for (Broker broker : eligibleBrokers) {
BalancingAction proposal = new BalancingAction(replica.topicPartition(), replica.broker().id(), broker.id(), action);
//If the move is not legitimate (e.g. the replica already exists on this broker), try the next broker
if (!legitMove(replica, broker, action)) {
continue;
}
//If the proposal does not satisfy this goal's own constraints, try the next broker
if (!selfSatisfied(clusterModel, proposal)) {
continue;
}
//If the action conflicts with any already-optimized goal, it is rejected
ActionAcceptance acceptance = AnalyzerUtils.isProposalAcceptableForOptimizedGoals(optimizedGoals, proposal, clusterModel);
if (acceptance == ActionAcceptance.ACCEPT) {
if (action == ActionType.LEADERSHIP_MOVEMENT) {
clusterModel.relocateLeadership(name(), action.toString(), replica.topicPartition(), replica.broker().id(), broker.id());
} else if (action == ActionType.REPLICA_MOVEMENT) {
clusterModel.relocateReplica(name(), action.toString(), replica.topicPartition(), replica.broker().id(), broker.id());
}
return broker;
}
}
return null;
}
/**
* Legality check for a replica action:
* 1. Replica movement: the destination broker must not already hold the replica
* 2. Leadership movement: the destination broker must already hold a replica of the partition
*/
private static boolean legitMove(Replica replica,
Broker destinationBroker, ActionType actionType) {
switch (actionType) {
case REPLICA_MOVEMENT:
return destinationBroker.replica(replica.topicPartition()) == null;
case LEADERSHIP_MOVEMENT:
return replica.isLeader() && destinationBroker.replica(replica.topicPartition()) != null;
default:
return false;
}
}
protected abstract boolean selfSatisfied(ClusterModel clusterModel, BalancingAction action);
/**
* Filter the candidate broker list
*/
public static List<Broker> eligibleBrokers(Replica replica,
Collection<Broker> candidates,
ActionType action,
OptimizationOptions optimizationOptions) {
List<Broker> eligibleBrokers = new ArrayList<>(candidates);
filterOutBrokersExcludedForLeadership(eligibleBrokers, optimizationOptions, replica, action);
filterOutBrokersExcludedForReplicaMove(eligibleBrokers, optimizationOptions, action);
return eligibleBrokers;
}
/**
* For leadership movement, remove the excluded brokers from the candidate list
*/
public static void filterOutBrokersExcludedForLeadership(List<Broker> eligibleBrokers,
OptimizationOptions optimizationOptions,
Replica replica,
ActionType action) {
Set<Integer> excludedBrokers = optimizationOptions.offlineBrokers();
if (!excludedBrokers.isEmpty() && (action == ActionType.LEADERSHIP_MOVEMENT || replica.isLeader())) {
eligibleBrokers.removeIf(broker -> excludedBrokers.contains(broker.id()));
}
}
/**
* For replica movement, remove the excluded brokers from the candidate list
*/
public static void filterOutBrokersExcludedForReplicaMove(List<Broker> eligibleBrokers,
OptimizationOptions optimizationOptions,
ActionType action) {
Set<Integer> excludedBrokers = optimizationOptions.offlineBrokers();
if (!excludedBrokers.isEmpty() && action == ActionType.REPLICA_MOVEMENT) {
eligibleBrokers.removeIf(broker -> excludedBrokers.contains(broker.id()));
}
}
}
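The extension contract, in short: initGoalState precomputes thresholds, rebalanceForBroker proposes moves through maybeApplyBalancingAction, and every proposal must pass legitMove, selfSatisfied, and the veto of all previously optimized goals before the cluster model is touched. A hypothetical skeleton of a new goal (illustration only, not part of this patch):

package com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.goals;

import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.Broker;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.ClusterModel;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.*;
import java.util.Set;

public class ExampleDistributionGoal extends AbstractGoal {
    @Override
    protected void initGoalState(ClusterModel clusterModel, OptimizationOptions optimizationOptions) {
        // Precompute whatever thresholds this goal needs.
    }
    @Override
    protected void rebalanceForBroker(Broker broker, ClusterModel clusterModel, Set<Goal> optimizedGoals, OptimizationOptions optimizationOptions) {
        // Pick replicas and candidate brokers, then delegate, e.g.:
        // maybeApplyBalancingAction(clusterModel, replica, candidates, ActionType.REPLICA_MOVEMENT, optimizedGoals, optimizationOptions);
    }
    @Override
    protected boolean selfSatisfied(ClusterModel clusterModel, BalancingAction action) {
        return true; // this sketch accepts every proposal
    }
    @Override
    public String name() {
        return ExampleDistributionGoal.class.getSimpleName();
    }
    @Override
    public ActionAcceptance actionAcceptance(BalancingAction action, ClusterModel clusterModel) {
        return ActionAcceptance.ACCEPT; // this sketch never vetoes other goals
    }
}

To take part in optimization it would also need a ServiceLoader registration and an entry in the configured goal list.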


@@ -0,0 +1,31 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.goals;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.ClusterModel;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.Resource;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.ActionAcceptance;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.ActionType;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.BalancingAction;
/**
* @author leewei
* @date 2022/5/24
*/
public class DiskDistributionGoal extends ResourceDistributionGoal {
@Override
protected Resource resource() {
return Resource.DISK;
}
@Override
public String name() {
return DiskDistributionGoal.class.getSimpleName();
}
@Override
public ActionAcceptance actionAcceptance(BalancingAction action, ClusterModel clusterModel) {
// Leadership movement won't cause disk utilization change.
return action.balancingAction() == ActionType.LEADERSHIP_MOVEMENT ? ActionAcceptance.ACCEPT : super.actionAcceptance(action, clusterModel);
}
}


@@ -0,0 +1,17 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.goals;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.ClusterModel;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.ActionAcceptance;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.BalancingAction;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.OptimizationOptions;
import java.util.Set;
public interface Goal {
void optimize(ClusterModel clusterModel, Set<Goal> optimizedGoals, OptimizationOptions optimizationOptions);
String name();
ActionAcceptance actionAcceptance(BalancingAction action, ClusterModel clusterModel);
}


@@ -0,0 +1,30 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.goals;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.ClusterModel;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.Resource;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.ActionAcceptance;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.ActionType;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.BalancingAction;
/**
* @author leewei
* @date 2022/5/20
*/
public class NetworkInboundDistributionGoal extends ResourceDistributionGoal {
@Override
protected Resource resource() {
return Resource.NW_IN;
}
@Override
public String name() {
return NetworkInboundDistributionGoal.class.getSimpleName();
}
@Override
public ActionAcceptance actionAcceptance(BalancingAction action, ClusterModel clusterModel) {
// Leadership movement won't cause inbound network utilization change.
return action.balancingAction() == ActionType.LEADERSHIP_MOVEMENT ? ActionAcceptance.ACCEPT : super.actionAcceptance(action, clusterModel);
}
}


@@ -0,0 +1,22 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.goals;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.Resource;
/**
* @author leewei
* @date 2022/5/24
*/
public class NetworkOutboundDistributionGoal extends ResourceDistributionGoal {
@Override
protected Resource resource() {
return Resource.NW_OUT;
}
@Override
public String name() {
return NetworkOutboundDistributionGoal.class.getSimpleName();
}
}


@@ -0,0 +1,227 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.goals;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.*;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.ActionAcceptance;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.ActionType;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.BalancingAction;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.OptimizationOptions;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.Collections;
import java.util.Iterator;
import java.util.Set;
import java.util.SortedSet;
/**
* @author leewei
* @date 2022/5/20
*/
public abstract class ResourceDistributionGoal extends AbstractGoal {
private static final Logger logger = LoggerFactory.getLogger(ResourceDistributionGoal.class);
private double balanceUpperThreshold;
private double balanceLowerThreshold;
@Override
protected void initGoalState(ClusterModel clusterModel, OptimizationOptions optimizationOptions) {
double avgUtilization = clusterModel.utilizationFor(resource());
double balancePercentage = optimizationOptions.resourceBalancePercentageFor(resource());
this.balanceUpperThreshold = avgUtilization * (1 + balancePercentage);
this.balanceLowerThreshold = avgUtilization * (1 - balancePercentage);
}
@Override
protected void rebalanceForBroker(Broker broker,
ClusterModel clusterModel,
Set<Goal> optimizedGoals,
OptimizationOptions optimizationOptions) {
double utilization = broker.utilizationFor(resource());
boolean requireLessLoad = utilization > this.balanceUpperThreshold;
boolean requireMoreLoad = utilization < this.balanceLowerThreshold;
if (!requireMoreLoad && !requireLessLoad) {
return;
}
// First try leadership movement
if (resource() == Resource.NW_OUT || resource() == Resource.CPU) {
if (requireLessLoad && rebalanceByMovingLoadOut(broker, clusterModel, optimizedGoals,
ActionType.LEADERSHIP_MOVEMENT, optimizationOptions)) {
logger.debug("Successfully balanced {} for broker {} by moving out leaders.", resource(), broker.id());
requireLessLoad = false;
}
if (requireMoreLoad && rebalanceByMovingLoadIn(broker, clusterModel, optimizedGoals,
ActionType.LEADERSHIP_MOVEMENT, optimizationOptions)) {
logger.debug("Successfully balanced {} for broker {} by moving in leaders.", resource(), broker.id());
requireMoreLoad = false;
}
}
boolean balanced = true;
if (requireLessLoad) {
if (!rebalanceByMovingLoadOut(broker, clusterModel, optimizedGoals,
ActionType.REPLICA_MOVEMENT, optimizationOptions)) {
balanced = rebalanceBySwappingLoadOut(broker, clusterModel, optimizedGoals, optimizationOptions);
}
} else if (requireMoreLoad) {
if (!rebalanceByMovingLoadIn(broker, clusterModel, optimizedGoals,
ActionType.REPLICA_MOVEMENT, optimizationOptions)) {
balanced = rebalanceBySwappingLoadIn(broker, clusterModel, optimizedGoals, optimizationOptions);
}
}
if (balanced) {
logger.debug("Successfully balanced {} for broker {} by moving leaders and replicas.", resource(), broker.id());
}
}
private boolean rebalanceByMovingLoadOut(Broker broker,
ClusterModel clusterModel,
Set<Goal> optimizedGoals,
ActionType actionType,
OptimizationOptions optimizationOptions) {
SortedSet<Broker> candidateBrokers = sortedCandidateBrokersUnderThreshold(clusterModel, this.balanceUpperThreshold, optimizationOptions, broker, false);
SortedSet<Replica> replicasToMove = sortedCandidateReplicas(broker, actionType, optimizationOptions, true);
for (Replica replica : replicasToMove) {
Broker acceptedBroker = maybeApplyBalancingAction(clusterModel, replica, candidateBrokers, actionType, optimizedGoals, optimizationOptions);
if (acceptedBroker != null) {
if (broker.utilizationFor(resource()) < this.balanceUpperThreshold) {
return true;
}
// Remove and reinsert the broker so the order stays correct; removal is by id because
// the broker's utilization (the sort key) has just changed.
candidateBrokers.removeIf(b -> b.id() == acceptedBroker.id());
if (acceptedBroker.utilizationFor(resource()) < this.balanceUpperThreshold) {
candidateBrokers.add(acceptedBroker);
}
}
}
return false;
}
private boolean rebalanceByMovingLoadIn(Broker broker,
ClusterModel clusterModel,
Set<Goal> optimizedGoals,
ActionType actionType,
OptimizationOptions optimizationOptions) {
SortedSet<Broker> candidateBrokers = sortedCandidateBrokersOverThreshold(clusterModel, this.balanceLowerThreshold, optimizationOptions, broker, true);
Iterator<Broker> candidateBrokersIt = candidateBrokers.iterator();
Broker nextCandidateBroker = null;
while (true) {
Broker candidateBroker;
if (nextCandidateBroker != null) {
candidateBroker = nextCandidateBroker;
nextCandidateBroker = null;
} else if (candidateBrokersIt.hasNext()) {
candidateBroker = candidateBrokersIt.next();
} else {
break;
}
SortedSet<Replica> replicasToMove = sortedCandidateReplicas(candidateBroker, actionType, optimizationOptions, true);
for (Replica replica : replicasToMove) {
Broker acceptedBroker = maybeApplyBalancingAction(clusterModel, replica, Collections.singletonList(broker), actionType, optimizedGoals, optimizationOptions);
if (acceptedBroker != null) {
if (broker.utilizationFor(resource()) > this.balanceLowerThreshold) {
return true;
}
if (candidateBrokersIt.hasNext() || nextCandidateBroker != null) {
if (nextCandidateBroker == null) {
nextCandidateBroker = candidateBrokersIt.next();
}
if (candidateBroker.utilizationFor(resource()) < nextCandidateBroker.utilizationFor(resource())) {
break;
}
}
}
}
}
return false;
}
private boolean rebalanceBySwappingLoadOut(Broker broker,
ClusterModel clusterModel,
Set<Goal> optimizedGoals,
OptimizationOptions optimizationOptions) {
// Swap-based balancing is not implemented yet; always report failure.
return false;
}
private boolean rebalanceBySwappingLoadIn(Broker broker,
ClusterModel clusterModel,
Set<Goal> optimizedGoals,
OptimizationOptions optimizationOptions) {
// Swap-based balancing is not implemented yet; always report failure.
return false;
}
private SortedSet<Broker> sortedCandidateBrokersUnderThreshold(ClusterModel clusterModel,
double utilizationThreshold,
OptimizationOptions optimizationOptions,
Broker excludedBroker,
boolean reverse) {
return clusterModel.sortedBrokersFor(
b -> b.utilizationFor(resource()) < utilizationThreshold
&& !excludedBroker.equals(b)
// filter brokers
&& (optimizationOptions.balanceBrokers().isEmpty() || optimizationOptions.balanceBrokers().contains(b.id()))
, resource(), reverse);
}
private SortedSet<Broker> sortedCandidateBrokersOverThreshold(ClusterModel clusterModel,
double utilizationThreshold,
OptimizationOptions optimizationOptions,
Broker excludedBroker,
boolean reverse) {
return clusterModel.sortedBrokersFor(
b -> b.utilizationFor(resource()) > utilizationThreshold
&& !excludedBroker.equals(b)
// filter brokers
&& (optimizationOptions.balanceBrokers().isEmpty() || optimizationOptions.balanceBrokers().contains(b.id()))
, resource(), reverse);
}
private SortedSet<Replica> sortedCandidateReplicas(Broker broker,
ActionType actionType,
OptimizationOptions optimizationOptions,
boolean reverse) {
return broker.sortedReplicasFor(
// skip excluded topics
r -> !optimizationOptions.excludedTopics().contains(r.topicPartition().topic())
&& r.load().loadFor(resource()) > 0.0
// LEADERSHIP_MOVEMENT, and any NW_OUT move, requires a leader replica
&& ((actionType != ActionType.LEADERSHIP_MOVEMENT && resource() != Resource.NW_OUT) || r.isLeader())
, resource(), reverse);
}
protected abstract Resource resource();
@Override
protected boolean selfSatisfied(ClusterModel clusterModel, BalancingAction action) {
Broker destinationBroker = clusterModel.broker(action.destinationBrokerId());
Broker sourceBroker = clusterModel.broker(action.sourceBrokerId());
Replica sourceReplica = sourceBroker.replica(action.topicPartition());
Load loadToChange;
if (action.balancingAction() == ActionType.LEADERSHIP_MOVEMENT) {
Replica destinationReplica = destinationBroker.replica(action.topicPartition());
Load delta = new Load();
delta.addLoad(sourceReplica.load());
delta.subtractLoad(destinationReplica.load());
loadToChange = delta;
} else {
loadToChange = sourceReplica.load();
}
double sourceUtilization = sourceBroker.expectedUtilizationAfterRemove(resource(), loadToChange);
double destinationUtilization = destinationBroker.expectedUtilizationAfterAdd(resource(), loadToChange);
return sourceUtilization >= this.balanceLowerThreshold && destinationUtilization <= this.balanceUpperThreshold;
}
@Override
public ActionAcceptance actionAcceptance(BalancingAction action, ClusterModel clusterModel) {
return this.selfSatisfied(clusterModel, action) ? ActionAcceptance.ACCEPT : ActionAcceptance.REJECT;
}
}
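The band computed in initGoalState is symmetric around the cluster average. A worked instance with illustrative numbers:

// Assuming average disk utilization 0.50 and a disk threshold of 0.10:
double avgUtilization = 0.50;      // clusterModel.utilizationFor(Resource.DISK)
double balancePercentage = 0.10;   // optimizationOptions.resourceBalancePercentageFor(Resource.DISK)
double upper = avgUtilization * (1 + balancePercentage); // 0.55: brokers above this shed load
double lower = avgUtilization * (1 - balancePercentage); // 0.45: brokers below this take load

selfSatisfied then rejects any move that would drop the source below 0.45 or push the destination above 0.55, so accepted moves keep both endpoints inside the band.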


@@ -0,0 +1,222 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.goals;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.Broker;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.ClusterModel;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.Replica;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.ActionAcceptance;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.BalancingAction;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.OptimizationOptions;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.utils.GoalUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.*;
import java.util.stream.Collectors;
import static com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.ActionAcceptance.ACCEPT;
import static com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.ActionAcceptance.REJECT;
import static com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.ActionType.REPLICA_MOVEMENT;
import static com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.ActionType.LEADERSHIP_MOVEMENT;
public class TopicLeadersDistributionGoal extends AbstractGoal {
private static final Logger logger = LoggerFactory.getLogger(TopicLeadersDistributionGoal.class);
private Map<String, Integer> _mustHaveTopicMinLeadersPerBroker;
/**
* Run topic-leader balancing for one broker
*/
@Override
protected void rebalanceForBroker(Broker broker, ClusterModel clusterModel, Set<Goal> optimizedGoals, OptimizationOptions optimizationOptions) {
moveAwayOfflineReplicas(broker, clusterModel, optimizedGoals, optimizationOptions);
if (_mustHaveTopicMinLeadersPerBroker.isEmpty()) {
return;
}
if (optimizationOptions.offlineBrokers().contains(broker.id())) {
return;
}
for (String topicName : _mustHaveTopicMinLeadersPerBroker.keySet()) {
maybeMoveLeaderOfTopicToBroker(topicName, broker, clusterModel, optimizedGoals, optimizationOptions);
}
}
/**
* Initialize the balancing conditions:
* 1. Exclude unwanted brokers and topics
* 2. Compute each topic's average leader count across the cluster's brokers
*/
@Override
protected void initGoalState(ClusterModel clusterModel, OptimizationOptions optimizationOptions) {
_mustHaveTopicMinLeadersPerBroker = new HashMap<>();
Set<String> excludedTopics = optimizationOptions.excludedTopics();
Set<Integer> excludedBrokers = optimizationOptions.offlineBrokers();
Set<String> mustHaveTopicLeadersPerBroker = GoalUtils.getNotExcludeTopics(clusterModel, excludedTopics);
Map<String, Integer> numLeadersByTopicNames = clusterModel.numLeadersPerTopic(mustHaveTopicLeadersPerBroker);
Set<Broker> allBrokers = GoalUtils.getNotExcludeBrokers(clusterModel, excludedBrokers);
for (String topicName : mustHaveTopicLeadersPerBroker) {
int topicNumLeaders = numLeadersByTopicNames.get(topicName);
int avgLeaders = allBrokers.size() == 0 ? 0 : (int) Math.ceil(topicNumLeaders / (double) allBrokers.size() * (1 + optimizationOptions.topicLeaderThreshold()));
_mustHaveTopicMinLeadersPerBroker.put(topicName, avgLeaders);
}
}
/**
* Conditions under which the action is already acceptable:
* 1. The replica on the source broker is offline, or
* 2. The source broker hosts more leaders of the topic than the per-broker target
*/
@Override
protected boolean selfSatisfied(ClusterModel clusterModel, BalancingAction action) {
Broker sourceBroker = clusterModel.broker(action.sourceBrokerId());
Replica replicaToBeMoved = sourceBroker.replica(action.topicPartition());
if (replicaToBeMoved.broker().replica(action.topicPartition()).isCurrentOffline()) {
return action.balancingAction() == REPLICA_MOVEMENT;
}
String topicName = replicaToBeMoved.topicPartition().topic();
return sourceBroker.numLeadersFor(topicName) > minTopicLeadersPerBroker(topicName);
}
/**
* Minimum number of leaders of the topic that each broker should host
*/
private int minTopicLeadersPerBroker(String topicName) {
return _mustHaveTopicMinLeadersPerBroker.get(topicName);
}
@Override
public String name() {
return TopicLeadersDistributionGoal.class.getSimpleName();
}
/**
* Decide whether a topic-leader balancing action may be executed
*/
@Override
public ActionAcceptance actionAcceptance(BalancingAction action, ClusterModel clusterModel) {
if (_mustHaveTopicMinLeadersPerBroker.containsKey(action.topic())) {
return ACCEPT;
}
switch (action.balancingAction()) {
case LEADERSHIP_MOVEMENT:
case REPLICA_MOVEMENT:
Replica replicaToBeRemoved = clusterModel.broker(action.sourceBrokerId()).replica(action.topicPartition());
return doesLeaderRemoveViolateOptimizedGoal(replicaToBeRemoved) ? REJECT : ACCEPT;
default:
throw new IllegalArgumentException("Unsupported balancing action " + action.balancingAction() + " is provided.");
}
}
/**
* Decide, for the given replica, whether removing it would violate this goal
*/
private boolean doesLeaderRemoveViolateOptimizedGoal(Replica replicaToBeRemoved) {
if (!replicaToBeRemoved.isLeader()) {
return false;
}
String topic = replicaToBeRemoved.topicPartition().topic();
if (!_mustHaveTopicMinLeadersPerBroker.containsKey(topic)) {
return false;
}
int topicLeaderCountOnSourceBroker = replicaToBeRemoved.broker().numLeadersFor(topic);
return topicLeaderCountOnSourceBroker <= minTopicLeadersPerBroker(topic);
}
/**
* Concrete balancing logic:
* try leadership movement first; if the target is still not met, fall back to replica movement
*/
private void maybeMoveLeaderOfTopicToBroker(String topicName,
Broker broker,
ClusterModel clusterModel,
Set<Goal> optimizedGoals,
OptimizationOptions optimizationOptions) {
int topicLeaderCount = broker.numLeadersFor(topicName);
//Nothing to do if the broker already hosts at least the per-broker minimum of leaders for this topic
if (topicLeaderCount >= minTopicLeadersPerBroker(topicName)) {
return;
}
//Collect all follower replicas of this topic on the current broker
List<Replica> followerReplicas = broker.replicas().stream().filter(i -> !i.isLeader() && i.topicPartition().topic().equals(topicName)).collect(Collectors.toList());
for (Replica followerReplica : followerReplicas) {
//Look up the partition's current leader replica in the cluster model
Replica leader = clusterModel.partition(followerReplica.topicPartition());
//If the leader's broker hosts more leaders of this topic than the minimum, switch leadership to this broker
if (leader.broker().numLeadersFor(topicName) > minTopicLeadersPerBroker(topicName)) {
if (maybeApplyBalancingAction(clusterModel, leader, Collections.singleton(broker),
LEADERSHIP_MOVEMENT, optimizedGoals, optimizationOptions) != null) {
topicLeaderCount++;
//Stop balancing once this broker reaches the per-broker minimum of leaders
if (topicLeaderCount >= minTopicLeadersPerBroker(topicName)) {
return;
}
}
}
}
//Collect the brokers hosting more leaders of this topic than the minimum, as sources to move from
PriorityQueue<Broker> brokersWithExcessiveLeaderToMove = getBrokersWithExcessiveLeaderToMove(topicName, clusterModel);
while (!brokersWithExcessiveLeaderToMove.isEmpty()) {
Broker brokerWithExcessiveLeaderToMove = brokersWithExcessiveLeaderToMove.poll();
List<Replica> leadersOfTopic = brokerWithExcessiveLeaderToMove.leaderReplicas().stream()
.filter(i -> i.topicPartition().topic().equals(topicName)).collect(Collectors.toList());
boolean leaderMoved = false;
int leaderMoveCount = leadersOfTopic.size();
for (Replica leaderOfTopic : leadersOfTopic) {
Broker destinationBroker = maybeApplyBalancingAction(clusterModel, leaderOfTopic, Collections.singleton(broker),
REPLICA_MOVEMENT, optimizedGoals, optimizationOptions);
if (destinationBroker != null) {
leaderMoved = true;
break;
}
}
if (leaderMoved) {
//Stop balancing once this broker reaches the per-broker minimum of leaders
topicLeaderCount++;
if (topicLeaderCount >= minTopicLeadersPerBroker(topicName)) {
return;
}
//If the overloaded broker still exceeds the minimum after the move, keep it as a source
leaderMoveCount--;
if (leaderMoveCount > minTopicLeadersPerBroker(topicName)) {
brokersWithExcessiveLeaderToMove.add(brokerWithExcessiveLeaderToMove);
}
}
}
}
/**
* For the given topic, collect every broker whose leader count exceeds the per-broker minimum, in descending order
*/
private PriorityQueue<Broker> getBrokersWithExcessiveLeaderToMove(String topicName, ClusterModel clusterModel) {
PriorityQueue<Broker> brokersWithExcessiveLeaderToMove = new PriorityQueue<>((broker1, broker2) -> {
int broker1LeaderCount = broker1.numLeadersFor(topicName);
int broker2LeaderCount = broker2.numLeadersFor(topicName);
int leaderCountCompareResult = Integer.compare(broker2LeaderCount, broker1LeaderCount);
return leaderCountCompareResult == 0 ? Integer.compare(broker1.id(), broker2.id()) : leaderCountCompareResult;
});
clusterModel.brokers().stream().filter(broker -> broker.numLeadersFor(topicName) > minTopicLeadersPerBroker(topicName))
.forEach(brokersWithExcessiveLeaderToMove::add);
return brokersWithExcessiveLeaderToMove;
}
/**
* Offline replicas are moved away first
*/
private void moveAwayOfflineReplicas(Broker srcBroker,
ClusterModel clusterModel,
Set<Goal> optimizedGoals,
OptimizationOptions optimizationOptions) {
if (srcBroker.currentOfflineReplicas().isEmpty()) {
return;
}
SortedSet<Broker> eligibleBrokersToMoveOfflineReplicasTo = new TreeSet<>(
Comparator.comparingInt((Broker broker) -> broker.replicas().size()).thenComparingInt(Broker::id));
Set<Replica> offlineReplicas = new HashSet<>(srcBroker.currentOfflineReplicas());
for (Replica offlineReplica : offlineReplicas) {
if (maybeApplyBalancingAction(clusterModel, offlineReplica, eligibleBrokersToMoveOfflineReplicasTo,
REPLICA_MOVEMENT, optimizedGoals, optimizationOptions) == null) {
logger.error(String.format("[%s] offline replica %s from broker %d (has %d replicas) move error", name(),
offlineReplica, srcBroker.id(), srcBroker.replicas().size()));
}
}
}
}
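Despite the "min" naming, the per-broker target from initGoalState is a ceiling of the average leader count inflated by the threshold. A worked instance with illustrative numbers:

// Assuming a topic with 10 leaders, 4 eligible brokers and topicLeaderThreshold = 0.1:
int topicNumLeaders = 10;
int brokerCount = 4;
double topicLeaderThreshold = 0.1;
int avgLeaders = (int) Math.ceil(topicNumLeaders / (double) brokerCount * (1 + topicLeaderThreshold));
// ceil(2.5 * 1.1) = ceil(2.75) = 3: brokers below 3 leaders pull leadership in, brokers above 3 may give it up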


@@ -0,0 +1,287 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.goals;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.Broker;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.ClusterModel;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.Replica;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.ActionAcceptance;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.ActionType;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.BalancingAction;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.OptimizationOptions;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.utils.GoalUtils;
import java.util.*;
import java.util.stream.Collectors;
public class TopicReplicaDistributionGoal extends AbstractGoal {
private final Map<String, Integer> _balanceUpperLimitByTopic;
private final Map<String, Integer> _balanceLowerLimitByTopic;
private Set<Broker> _brokersAllowedReplicaMove;
private final Map<String, Double> _avgTopicReplicasOnBroker;
public TopicReplicaDistributionGoal() {
_balanceUpperLimitByTopic = new HashMap<>();
_balanceLowerLimitByTopic = new HashMap<>();
_avgTopicReplicasOnBroker = new HashMap<>();
}
@Override
protected void rebalanceForBroker(Broker broker, ClusterModel clusterModel, Set<Goal> optimizedGoals, OptimizationOptions optimizationOptions) {
for (String topic : broker.topics()) {
if (isTopicExcludedFromRebalance(topic)) {
continue;
}
Collection<Replica> replicas = broker.replicasOfTopicInBroker(topic);
int numTopicReplicas = replicas.size();
boolean isExcludedForReplicaMove = isExcludedForReplicaMove(broker);
int numOfflineTopicReplicas = GoalUtils.retainCurrentOfflineBrokerReplicas(broker, replicas).size();
boolean requireLessReplicas = numOfflineTopicReplicas > 0 || (numTopicReplicas > _balanceUpperLimitByTopic.get(topic) && !isExcludedForReplicaMove);
boolean requireMoreReplicas = !isExcludedForReplicaMove && numTopicReplicas - numOfflineTopicReplicas < _balanceLowerLimitByTopic.get(topic);
if (requireLessReplicas) {
rebalanceByMovingReplicasOut(broker, topic, clusterModel, optimizedGoals, optimizationOptions);
}
if (requireMoreReplicas) {
rebalanceByMovingReplicasIn(broker, topic, clusterModel, optimizedGoals, optimizationOptions);
}
}
}
/**
* Initialize the balancing conditions:
* 1. Average number of replicas of each topic per broker
* 2. Upper limit: the average floated upward by the threshold
* 3. Lower limit: the average floated downward by the threshold
*/
@Override
protected void initGoalState(ClusterModel clusterModel, OptimizationOptions optimizationOptions) {
Set<String> excludedTopics = optimizationOptions.excludedTopics();
Set<Integer> excludedBrokers = optimizationOptions.offlineBrokers();
Set<String> topicsAllowedRebalance = GoalUtils.getNotExcludeTopics(clusterModel, excludedTopics);
_brokersAllowedReplicaMove = GoalUtils.getNotExcludeBrokers(clusterModel, excludedBrokers);
if (_brokersAllowedReplicaMove.isEmpty()) {
return;
}
for (String topic : topicsAllowedRebalance) {
int numTopicReplicas = clusterModel.numTopicReplicas(topic);
_avgTopicReplicasOnBroker.put(topic, numTopicReplicas / (double) _brokersAllowedReplicaMove.size());
_balanceUpperLimitByTopic.put(topic, balanceUpperLimit(topic, optimizationOptions));
_balanceLowerLimitByTopic.put(topic, balanceLowerLimit(topic, optimizationOptions));
}
}
/**
* The topic's per-broker average floated downward, 10% by default
*/
private Integer balanceLowerLimit(String topic, OptimizationOptions optimizationOptions) {
return (int) Math.floor(_avgTopicReplicasOnBroker.get(topic)
* Math.max(0, (1 - optimizationOptions.topicReplicaThreshold())));
}
/**
* The topic's per-broker average floated upward, 10% by default
*/
private Integer balanceUpperLimit(String topic, OptimizationOptions optimizationOptions) {
return (int) Math.ceil(_avgTopicReplicasOnBroker.get(topic)
* (1 + optimizationOptions.topicReplicaThreshold()));
}
@Override
protected boolean selfSatisfied(ClusterModel clusterModel, BalancingAction action) {
Broker sourceBroker = clusterModel.broker(action.sourceBrokerId());
if (sourceBroker.replica(action.topicPartition()).isCurrentOffline()) {
return action.balancingAction() == ActionType.REPLICA_MOVEMENT;
}
Broker destinationBroker = clusterModel.broker(action.destinationBrokerId());
String sourceTopic = action.topic();
return isReplicaCountAddUpperLimit(sourceTopic, destinationBroker)
&& (isExcludedForReplicaMove(sourceBroker) || isReplicaCountRemoveLowerLimit(sourceTopic, sourceBroker));
}
@Override
public String name() {
return TopicReplicaDistributionGoal.class.getSimpleName();
}
@Override
public ActionAcceptance actionAcceptance(BalancingAction action, ClusterModel clusterModel) {
Broker sourceBroker = clusterModel.broker(action.sourceBrokerId());
Broker destinationBroker = clusterModel.broker(action.destinationBrokerId());
String sourceTopic = action.topic();
switch (action.balancingAction()) {
case LEADERSHIP_MOVEMENT:
return ActionAcceptance.ACCEPT;
case REPLICA_MOVEMENT:
return (isReplicaCountAddUpperLimit(sourceTopic, destinationBroker)
&& (isExcludedForReplicaMove(sourceBroker)
|| isReplicaCountRemoveLowerLimit(sourceTopic, sourceBroker))) ? ActionAcceptance.ACCEPT : ActionAcceptance.REJECT;
default:
throw new IllegalArgumentException("Unsupported balancing action " + action.balancingAction() + " is provided.");
}
}
/**
* Move replicas out of the broker when it holds more replicas of the topic than the threshold
*/
private boolean rebalanceByMovingReplicasOut(Broker broker,
String topic,
ClusterModel clusterModel,
Set<Goal> optimizedGoals,
OptimizationOptions optimizationOptions) {
//Candidate destinations: all brokers currently below the topic's upper limit
SortedSet<Broker> candidateBrokers = new TreeSet<>(
Comparator.comparingInt((Broker b) -> b.numReplicasOfTopicInBroker(topic)).thenComparingInt(Broker::id));
Set<Broker> filterUpperLimitBroker = clusterModel.brokers().stream().filter(b -> b.numReplicasOfTopicInBroker(topic) < _balanceUpperLimitByTopic.get(topic)).collect(Collectors.toSet());
candidateBrokers.addAll(filterUpperLimitBroker);
Collection<Replica> replicasOfTopicInBroker = broker.replicasOfTopicInBroker(topic);
int numReplicasOfTopicInBroker = replicasOfTopicInBroker.size();
int numOfflineTopicReplicas = GoalUtils.retainCurrentOfflineBrokerReplicas(broker, replicasOfTopicInBroker).size();
int balanceUpperLimitForSourceBroker = isExcludedForReplicaMove(broker) ? 0 : _balanceUpperLimitByTopic.get(topic);
boolean wasUnableToMoveOfflineReplica = false;
for (Replica replica : replicasToMoveOut(broker, topic)) {
//If offline replicas could not be moved but the remaining online replicas are within the upper limit, stop here
if (wasUnableToMoveOfflineReplica && !replica.isCurrentOffline() && numReplicasOfTopicInBroker <= balanceUpperLimitForSourceBroker) {
return false;
}
boolean wasOffline = replica.isCurrentOffline();
Broker b = maybeApplyBalancingAction(clusterModel, replica, candidateBrokers, ActionType.REPLICA_MOVEMENT,
optimizedGoals, optimizationOptions);
// Only check if we successfully moved something.
if (b != null) {
if (wasOffline) {
numOfflineTopicReplicas--;
}
if (--numReplicasOfTopicInBroker <= (numOfflineTopicReplicas == 0 ? balanceUpperLimitForSourceBroker : 0)) {
return false;
}
// Remove and reinsert the broker so the order is correct.
candidateBrokers.remove(b);
if (b.numReplicasOfTopicInBroker(topic) < _balanceUpperLimitByTopic.get(topic)) {
candidateBrokers.add(b);
}
} else if (wasOffline) {
wasUnableToMoveOfflineReplica = true;
}
}
return !broker.replicasOfTopicInBroker(topic).isEmpty();
}
/**
* Move-out ordering:
* 1. Offline replicas first
* 2. Then smaller partition ids first
*/
private SortedSet<Replica> replicasToMoveOut(Broker broker, String topic) {
SortedSet<Replica> replicasToMoveOut = new TreeSet<>((r1, r2) -> {
boolean r1Offline = broker.currentOfflineReplicas().contains(r1);
boolean r2Offline = broker.currentOfflineReplicas().contains(r2);
if (r1Offline && !r2Offline) {
return -1;
} else if (!r1Offline && r2Offline) {
return 1;
}
if (r1.topicPartition().partition() > r2.topicPartition().partition()) {
return 1;
} else if (r1.topicPartition().partition() < r2.topicPartition().partition()) {
return -1;
}
return 0;
});
replicasToMoveOut.addAll(broker.replicasOfTopicInBroker(topic));
return replicasToMoveOut;
}
/**
* Move replicas of the topic into the given broker from brokers above the lower threshold
*/
private boolean rebalanceByMovingReplicasIn(Broker broker,
String topic,
ClusterModel clusterModel,
Set<Goal> optimizedGoals,
OptimizationOptions optimizationOptions) {
PriorityQueue<Broker> eligibleBrokers = new PriorityQueue<>((b1, b2) -> {
Collection<Replica> replicasOfTopicInB2 = b2.replicasOfTopicInBroker(topic);
int numReplicasOfTopicInB2 = replicasOfTopicInB2.size();
int numOfflineTopicReplicasInB2 = GoalUtils.retainCurrentOfflineBrokerReplicas(b2, replicasOfTopicInB2).size();
Collection<Replica> replicasOfTopicInB1 = b1.replicasOfTopicInBroker(topic);
int numReplicasOfTopicInB1 = replicasOfTopicInB1.size();
int numOfflineTopicReplicasInB1 = GoalUtils.retainCurrentOfflineBrokerReplicas(b1, replicasOfTopicInB1).size();
int resultByOfflineReplicas = Integer.compare(numOfflineTopicReplicasInB2, numOfflineTopicReplicasInB1);
if (resultByOfflineReplicas == 0) {
int resultByAllReplicas = Integer.compare(numReplicasOfTopicInB2, numReplicasOfTopicInB1);
return resultByAllReplicas == 0 ? Integer.compare(b1.id(), b2.id()) : resultByAllReplicas;
}
return resultByOfflineReplicas;
});
//Movement sources: brokers above the topic's lower limit, with offline replicas, or excluded from replica moves
for (Broker sourceBroker : clusterModel.brokers()) {
if (sourceBroker.numReplicasOfTopicInBroker(topic) > _balanceLowerLimitByTopic.get(topic)
|| !sourceBroker.currentOfflineReplicas().isEmpty() || isExcludedForReplicaMove(sourceBroker)) {
eligibleBrokers.add(sourceBroker);
}
}
Collection<Replica> replicasOfTopicInBroker = broker.replicasOfTopicInBroker(topic);
int numReplicasOfTopicInBroker = replicasOfTopicInBroker.size();
//The current broker is the single destination
Set<Broker> candidateBrokers = Collections.singleton(broker);
while (!eligibleBrokers.isEmpty()) {
Broker sourceBroker = eligibleBrokers.poll();
SortedSet<Replica> replicasToMove = replicasToMoveOut(sourceBroker, topic);
int numOfflineTopicReplicas = GoalUtils.retainCurrentOfflineBrokerReplicas(sourceBroker, replicasToMove).size();
for (Replica replica : replicasToMove) {
boolean wasOffline = replica.isCurrentOffline();
Broker b = maybeApplyBalancingAction(clusterModel, replica, candidateBrokers, ActionType.REPLICA_MOVEMENT,
optimizedGoals, optimizationOptions);
if (b != null) {
if (wasOffline) {
numOfflineTopicReplicas--;
}
if (++numReplicasOfTopicInBroker >= _balanceLowerLimitByTopic.get(topic)) {
return false;
}
if (!eligibleBrokers.isEmpty() && numOfflineTopicReplicas == 0
&& sourceBroker.numReplicasOfTopicInBroker(topic) < eligibleBrokers.peek().numReplicasOfTopicInBroker(topic)) {
eligibleBrokers.add(sourceBroker);
break;
}
}
}
}
return true;
}
/**
* After adding one replica, the destination broker's replica count for the topic stays <= the upper threshold
*/
private boolean isReplicaCountAddUpperLimit(String topic, Broker destinationBroker) {
int numTopicReplicas = destinationBroker.numReplicasOfTopicInBroker(topic);
int brokerBalanceUpperLimit = _balanceUpperLimitByTopic.get(topic);
return numTopicReplicas + 1 <= brokerBalanceUpperLimit;
}
/**
* After removing one replica, the source broker's replica count for the topic stays >= the lower threshold
*/
private boolean isReplicaCountRemoveLowerLimit(String topic, Broker sourceBroker) {
int numTopicReplicas = sourceBroker.numReplicasOfTopicInBroker(topic);
int brokerBalanceLowerLimit = _balanceLowerLimitByTopic.get(topic);
return numTopicReplicas - 1 >= brokerBalanceLowerLimit;
}
/**
* Whether the broker is excluded from replica movement (not in the allowed broker set)
*/
private boolean isExcludedForReplicaMove(Broker broker) {
return !_brokersAllowedReplicaMove.contains(broker);
}
/**
* Whether the topic is excluded from rebalancing (absent from the balanceable list)
*/
private boolean isTopicExcludedFromRebalance(String topic) {
return _avgTopicReplicasOnBroker.get(topic) == null;
}
}
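The replica band uses floor on the low side and ceil on the high side of the per-broker average, so even small topics get a non-empty band. A worked instance with illustrative numbers:

// Assuming a topic with 10 replicas, 4 allowed brokers and topicReplicaThreshold = 0.1:
double avgTopicReplicasOnBroker = 10 / 4.0;                                    // 2.5
int upper = (int) Math.ceil(avgTopicReplicasOnBroker * (1 + 0.1));             // ceil(2.75) = 3
int lower = (int) Math.floor(avgTopicReplicasOnBroker * Math.max(0, 1 - 0.1)); // floor(2.25) = 2
// Brokers holding more than 3 replicas of the topic shed replicas; brokers under 2 gain replicas.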


@@ -0,0 +1,7 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm;
/**
* Code of the rebalance algorithm module
*/


@@ -0,0 +1,21 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.utils;
import joptsimple.OptionParser;
import java.io.IOException;
public class CommandLineUtils {
/**
* Print usage and exit
*/
public static void printUsageAndDie(OptionParser parser, String message) {
try {
System.err.println(message);
parser.printHelpOn(System.err);
System.exit(1);
} catch (IOException e) {
e.printStackTrace();
}
}
}


@@ -0,0 +1,67 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.utils;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.executor.common.BalanceGoal;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.executor.common.BalanceParameter;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.executor.common.BalanceThreshold;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.executor.common.HostEnv;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.*;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.optimizer.AnalyzerUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.*;
import java.util.stream.Collectors;
public class GoalUtils {
private static final Logger logger = LoggerFactory.getLogger(GoalUtils.class);
public static Set<String> getNotExcludeTopics(ClusterModel clusterModel, Set<String> excludedTopics) {
return clusterModel.topics().stream().filter(topicName -> !excludedTopics.contains(topicName)).collect(Collectors.toSet());
}
public static Set<Broker> getNotExcludeBrokers(ClusterModel clusterModel, Set<Integer> excludedBrokers) {
return clusterModel.brokers().stream().filter(broker -> !excludedBrokers.contains(broker.id())).collect(Collectors.toSet());
}
/**
* From the given replicas, retain only those currently offline on the broker
*/
public static Set<Replica> retainCurrentOfflineBrokerReplicas(Broker broker, Collection<Replica> replicas) {
Set<Replica> offlineReplicas = new HashSet<>(replicas);
offlineReplicas.retainAll(broker.currentOfflineReplicas());
return offlineReplicas;
}
public static ClusterModel getInitClusterModel(BalanceParameter parameter) {
logger.info("Cluster model initialization");
List<HostEnv> hostsEnv = parameter.getHardwareEnv();
Map<Integer, Capacity> capacities = new HashMap<>();
for (HostEnv env : hostsEnv) {
Capacity capacity = new Capacity();
capacity.setCapacity(Resource.CPU, env.getCpu());
capacity.setCapacity(Resource.DISK, env.getDisk());
capacity.setCapacity(Resource.NW_IN, env.getNetwork());
capacity.setCapacity(Resource.NW_OUT, env.getNetwork());
capacities.put(env.getId(), capacity);
}
return Supplier.load(
parameter.getCluster(),
parameter.getBeforeSeconds(),
parameter.getKafkaConfig(),
parameter.getEsRestURL(),
parameter.getEsPassword(),
parameter.getEsIndexPrefix(),
capacities,
AnalyzerUtils.getSplitTopics(parameter.getIgnoredTopics())
);
}
public static Map<String, BalanceThreshold> getBalanceThreshold(BalanceParameter parameter, double[] clusterAvgResource) {
Map<String, BalanceThreshold> balanceThreshold = new HashMap<>();
balanceThreshold.put(BalanceGoal.DISK.goal(), new BalanceThreshold(Resource.DISK, parameter.getDiskThreshold(), clusterAvgResource[Resource.DISK.id()]));
balanceThreshold.put(BalanceGoal.NW_IN.goal(), new BalanceThreshold(Resource.NW_IN, parameter.getNetworkInThreshold(), clusterAvgResource[Resource.NW_IN.id()]));
balanceThreshold.put(BalanceGoal.NW_OUT.goal(), new BalanceThreshold(Resource.NW_OUT, parameter.getNetworkOutThreshold(), clusterAvgResource[Resource.NW_OUT.id()]));
return balanceThreshold;
}
}


@@ -0,0 +1,92 @@
package com.xiaojukeji.know.streaming.km.rebalance.algorithm.utils;
import org.apache.kafka.clients.*;
import org.apache.kafka.clients.consumer.internals.NoAvailableBrokersException;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.Node;
import org.apache.kafka.common.network.ChannelBuilder;
import org.apache.kafka.common.network.NetworkReceive;
import org.apache.kafka.common.network.Selector;
import org.apache.kafka.common.requests.MetadataRequest;
import org.apache.kafka.common.requests.MetadataResponse;
import org.apache.kafka.common.utils.LogContext;
import org.apache.kafka.common.utils.Time;
import org.apache.kafka.common.utils.Utils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.util.Collections;
import java.util.List;
import java.util.Properties;
/**
* @author leewei
* @date 2022/5/27
*/
public class MetadataUtils {
private static final Logger logger = LoggerFactory.getLogger(MetadataUtils.class);
public static Cluster metadata(Properties props) {
props.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.BytesSerializer");
props.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.BytesSerializer");
ProducerConfig config = new ProducerConfig(props);
Time time = Time.SYSTEM;
LogContext logContext = new LogContext("Metadata client");
ChannelBuilder channelBuilder = ClientUtils.createChannelBuilder(config, time, logContext);
Selector selector = new Selector(
NetworkReceive.UNLIMITED,
config.getLong(ProducerConfig.CONNECTIONS_MAX_IDLE_MS_CONFIG),
new org.apache.kafka.common.metrics.Metrics(),
time,
"metadata-client",
Collections.singletonMap("client", "metadata-client"),
false,
channelBuilder,
logContext
);
NetworkClient networkClient = new NetworkClient(
selector,
new ManualMetadataUpdater(),
"metadata-client",
1,
config.getLong(ProducerConfig.RECONNECT_BACKOFF_MS_CONFIG),
config.getLong(ProducerConfig.RECONNECT_BACKOFF_MAX_MS_CONFIG),
config.getInt(ProducerConfig.SEND_BUFFER_CONFIG),
config.getInt(ProducerConfig.RECEIVE_BUFFER_CONFIG),
config.getInt(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG),
config.getLong(ProducerConfig.SOCKET_CONNECTION_SETUP_TIMEOUT_MS_CONFIG),
config.getLong(ProducerConfig.SOCKET_CONNECTION_SETUP_TIMEOUT_MAX_MS_CONFIG),
ClientDnsLookup.DEFAULT,
time,
true,
new ApiVersions(),
logContext
);
try {
List<String> nodes = config.getList(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG);
for (int i = 0; i < nodes.size(); i++) {
Node sourceNode = new Node(i, Utils.getHost(nodes.get(i)), Utils.getPort(nodes.get(i)));
try {
if (NetworkClientUtils.awaitReady(networkClient, sourceNode, time, 10 * 1000)) {
ClientRequest clientRequest = networkClient.newClientRequest(String.valueOf(i), MetadataRequest.Builder.allTopics(),
time.milliseconds(), true);
ClientResponse clientResponse = NetworkClientUtils.sendAndReceive(networkClient, clientRequest, time);
MetadataResponse metadataResponse = (MetadataResponse) clientResponse.responseBody();
return metadataResponse.buildCluster();
}
} catch (IOException e) {
logger.warn("Connection to " + sourceNode + " error", e);
}
}
throw new NoAvailableBrokersException();
} finally {
networkClient.close();
}
}
}
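MetadataUtils reuses the producer's network stack to take a one-shot, full-topic metadata snapshot without instantiating a producer. A minimal caller sketch; the broker addresses are placeholders:

Properties props = new Properties();
props.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-1:9092,broker-2:9092"); // placeholder addresses
Cluster cluster = MetadataUtils.metadata(props);
for (Node node : cluster.nodes()) {
    System.out.println("broker " + node.id() + " @ " + node.host() + ":" + node.port());
}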


@@ -0,0 +1,35 @@
package com.xiaojukeji.know.streaming.km.rebalance.common;
public class BalanceMetricConstant {
public static final String CLUSTER_METRIC_LOAD_RE_BALANCE_ENABLE = "LoadReBalanceEnable";
public static final String CLUSTER_METRIC_LOAD_RE_BALANCE_CPU = "LoadReBalanceCpu";
public static final String CLUSTER_METRIC_LOAD_RE_BALANCE_NW_IN = "LoadReBalanceNwIn";
public static final String CLUSTER_METRIC_LOAD_RE_BALANCE_NW_OUT = "LoadReBalanceNwOut";
public static final String CLUSTER_METRIC_LOAD_RE_BALANCE_DISK = "LoadReBalanceDisk";
// Cluster-level rebalance metrics
// itemList.add( buildAllVersionsItem()
// .name(CLUSTER_METRIC_LOAD_RE_BALANCE_ENABLE).unit("yes/no").desc("Whether rebalancing is enabled, 1: yes, 0: no").category(CATEGORY_CLUSTER)
// .extend( buildMethodExtend( CLUSTER_METHOD_GET_CLUSTER_LOAD_RE_BALANCE_INFO )));
//
// itemList.add( buildAllVersionsItem()
// .name(CLUSTER_METRIC_LOAD_RE_BALANCE_CPU).unit("yes/no").desc("Whether CPU is balanced, 1: yes, 0: no").category(CATEGORY_CLUSTER)
// .extend( buildMethodExtend( CLUSTER_METHOD_GET_CLUSTER_LOAD_RE_BALANCE_INFO )));
//
// itemList.add( buildAllVersionsItem()
// .name(CLUSTER_METRIC_LOAD_RE_BALANCE_NW_IN).unit("yes/no").desc("Whether BytesIn is balanced, 1: yes, 0: no").category(CATEGORY_CLUSTER)
// .extend( buildMethodExtend( CLUSTER_METHOD_GET_CLUSTER_LOAD_RE_BALANCE_INFO )));
//
// itemList.add( buildAllVersionsItem()
// .name(CLUSTER_METRIC_LOAD_RE_BALANCE_NW_OUT).unit("yes/no").desc("Whether BytesOut is balanced, 1: yes, 0: no").category(CATEGORY_CLUSTER)
// .extend( buildMethodExtend( CLUSTER_METHOD_GET_CLUSTER_LOAD_RE_BALANCE_INFO )));
//
// itemList.add( buildAllVersionsItem()
// .name(CLUSTER_METRIC_LOAD_RE_BALANCE_DISK).unit("yes/no").desc("Whether Disk is balanced, 1: yes, 0: no").category(CATEGORY_CLUSTER)
// .extend( buildMethodExtend( CLUSTER_METHOD_GET_CLUSTER_LOAD_RE_BALANCE_INFO )));
}


@@ -0,0 +1,26 @@
package com.xiaojukeji.know.streaming.km.rebalance.common.bean.dto;
import com.xiaojukeji.know.streaming.km.common.annotations.enterprise.EnterpriseLoadReBalance;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import javax.validation.constraints.NotBlank;
import javax.validation.constraints.NotNull;
@Data
@EnterpriseLoadReBalance
public class ClusterBalanceIntervalDTO {
@NotBlank(message = "clusterBalanceIntervalDTO.type不允许为空")
@ApiModelProperty("均衡维度:cpu,disk,bytesIn,bytesOut")
private String type;
@NotNull(message = "clusterBalanceIntervalDTO.intervalPercent不允许为空")
@ApiModelProperty("平衡区间百分比")
private Double intervalPercent;
@NotNull(message = "clusterBalanceIntervalDTO.priority不允许为空")
@ApiModelProperty("优先级")
private Integer priority;
}


@@ -0,0 +1,20 @@
package com.xiaojukeji.know.streaming.km.rebalance.common.bean.dto;
import com.xiaojukeji.know.streaming.km.common.annotations.enterprise.EnterpriseLoadReBalance;
import com.xiaojukeji.know.streaming.km.common.bean.dto.pagination.PaginationBaseDTO;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import java.util.Map;
@Data
@EnterpriseLoadReBalance
public class ClusterBalanceOverviewDTO extends PaginationBaseDTO {
@ApiModelProperty("host")
private String host;
@ApiModelProperty("key:disk,bytesOut,bytesIn value:均衡状态 0已均衡2未均衡")
private Map<String, Integer> stateParam;
}


@@ -0,0 +1,43 @@
package com.xiaojukeji.know.streaming.km.rebalance.common.bean.dto;
import com.xiaojukeji.know.streaming.km.common.annotations.enterprise.EnterpriseLoadReBalance;
import com.xiaojukeji.know.streaming.km.common.bean.dto.BaseDTO;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import java.util.List;
/**
* @author zengqiao
* @date 22/02/24
*/
@Data
@EnterpriseLoadReBalance
public class ClusterBalancePreviewDTO extends BaseDTO {
@ApiModelProperty("集群id")
private Long clusterId;
@ApiModelProperty("均衡节点")
private List<Integer> brokers;
@ApiModelProperty("topic黑名单")
private List<String> topicBlackList;
@ApiModelProperty("均衡区间详情")
private List<ClusterBalanceIntervalDTO> clusterBalanceIntervalList;
@ApiModelProperty("指标计算周期,单位分钟")
private Integer metricCalculationPeriod;
@ApiModelProperty("任务并行数")
private Integer parallelNum;
@ApiModelProperty("执行策略, 1优先最大副本2优先最小副本")
private Integer executionStrategy;
@ApiModelProperty("限流值")
private Long throttleUnitB;
}


@@ -0,0 +1,66 @@
package com.xiaojukeji.know.streaming.km.rebalance.common.bean.dto;
import com.xiaojukeji.know.streaming.km.common.annotations.enterprise.EnterpriseLoadReBalance;
import com.xiaojukeji.know.streaming.km.common.bean.dto.BaseDTO;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import javax.validation.constraints.Min;
import javax.validation.constraints.NotBlank;
import javax.validation.constraints.NotNull;
import java.util.List;
/**
* @author zengqiao
* @date 22/02/24
*/
@Data
@EnterpriseLoadReBalance
public class ClusterBalanceStrategyDTO extends BaseDTO {
@ApiModelProperty("是否是周期性任务")
private boolean scheduleJob;
@NotBlank(message = "scheduleCron不允许为空")
@ApiModelProperty("如果是周期任务那么任务的周期cron表达式")
private String scheduleCron;
@NotNull(message = "status不允许为空")
@ApiModelProperty("周期任务状态0:不开启1开启")
private Integer status;
@NotNull(message = "clusterId不允许为空")
@ApiModelProperty("集群id")
private Long clusterId;
@ApiModelProperty("均衡节点")
private List<Integer> brokers;
@ApiModelProperty("topic黑名单")
private List<String> topicBlackList;
@NotNull(message = "clusterBalanceIntervalDTO不允许为空")
@ApiModelProperty("均衡区间详情")
private List<ClusterBalanceIntervalDTO> clusterBalanceIntervalList;
@NotNull(message = "metricCalculationPeriod不允许为空")
@ApiModelProperty("指标计算周期,单位秒")
private Integer metricCalculationPeriod;
@NotNull(message = "parallelNum不允许为空")
@ApiModelProperty("任务并行数0代表不限")
private Integer parallelNum;
@NotNull(message = "executionStrategy不允许为空")
@ApiModelProperty("执行策略, 1优先最大副本2优先最小副本")
private Integer executionStrategy;
@Min(value = 1, message = "throttleUnitB不允许小于1")
@ApiModelProperty("限流值")
private Long throttleUnitB;
@ApiModelProperty("备注说明")
private String description;
}


@@ -0,0 +1,27 @@
package com.xiaojukeji.know.streaming.km.rebalance.common.bean.entity;
import com.xiaojukeji.know.streaming.km.common.annotations.enterprise.EnterpriseLoadReBalance;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
@Data
@NoArgsConstructor
@AllArgsConstructor
@EnterpriseLoadReBalance
public class ClusterBalanceInterval {
/**
* Balance dimension: cpu, disk, bytesIn, bytesOut
*/
private String type;
/**
* Balance interval percentage
*/
private Double intervalPercent;
/**
* Priority
*/
private Integer priority;
}


@@ -0,0 +1,41 @@
package com.xiaojukeji.know.streaming.km.rebalance.common.bean.entity;
import com.xiaojukeji.know.streaming.km.common.annotations.enterprise.EnterpriseLoadReBalance;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.Resource;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.util.Map;
@Data
@NoArgsConstructor
@AllArgsConstructor
@EnterpriseLoadReBalance
public class ClusterBalanceItemState {
/**
* Whether cluster balancing is configured. true: configured, false: not configured
*/
private Boolean configureBalance;
/**
* Whether balancing is enabled. true: enabled, false: disabled
*/
private Boolean enable;
/**
* Balance state per item. key: disk, bytesIn, bytesOut, cpu; value: true: balanced, false: unbalanced
* @see com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.Resource
*/
private Map<String, Boolean> itemState;
public Integer getResItemState(Resource res) {
if (itemState == null || !itemState.containsKey(res.resource())) {
return Constant.INVALID_CODE;
}
return itemState.get(res.resource()) ? Constant.YES: Constant.NO;
}
}


@@ -0,0 +1,91 @@
/*
* Copyright (c) 2015, WINIT and/or its affiliates. All rights reserved. Use, Copy is subject to authorized license.
*/
package com.xiaojukeji.know.streaming.km.rebalance.common.bean.entity.job;
import com.xiaojukeji.know.streaming.km.common.annotations.enterprise.EnterpriseLoadReBalance;
import com.xiaojukeji.know.streaming.km.common.bean.entity.BaseEntity;
import lombok.Data;
/**
* Cluster balance job configuration entity
*
* @author fengqiongfeng
* @date 2022-05-23
*/
@Data
@EnterpriseLoadReBalance
public class ClusterBalanceJobConfig extends BaseEntity {
/**
* Serialization version
*/
private static final long serialVersionUID = 1L;
/**
* Cluster id
*/
private Long clusterId;
/**
* Brokers to balance
*/
private String brokers;
/**
* Topic blacklist
*/
private String topicBlackList;
/**
* 1: immediate balance, 2: periodic balance
*/
private Integer type;
/**
* Task schedule (cron)
*/
private String taskCron;
/**
* Balance interval details
*/
private String balanceIntervalJson;
/**
* Metric calculation period, in minutes
*/
private Integer metricCalculationPeriod;
/**
* Reassignment script
*/
private String reassignmentJson;
/**
* Task parallelism
*/
private Integer parallelNum;
/**
* Execution strategy: 1: largest replicas first, 2: smallest replicas first
*/
private Integer executionStrategy;
/**
* Throttle value
*/
private Long throttleUnitByte;
/**
* Operator
*/
private String creator;
/**
* Task status. 0: disabled, 1: enabled
*/
private Integer status;
}


@@ -0,0 +1,64 @@
package com.xiaojukeji.know.streaming.km.rebalance.common.bean.entity.job;
import com.xiaojukeji.know.streaming.km.common.annotations.enterprise.EnterpriseLoadReBalance;
import lombok.Data;
import java.util.Date;
/**
* @author zengqiao
* @date 22/05/06
*/
@Data
@EnterpriseLoadReBalance
public class ClusterBalanceReassign {
/**
* Job ID
*/
private Long jobId;
/**
* Cluster ID
*/
private Long clusterId;
/**
* Topic name
*/
private String topicName;
/**
* Partition ID
*/
private Integer partitionId;
/**
* Source broker ID list
*/
private String originalBrokerIds;
/**
* Target broker ID list
*/
private String reassignBrokerIds;
/**
* Task start time
*/
private Date startTime;
/**
* Task finish time
*/
private Date finishedTime;
/**
* Extended data (JSON)
*/
private String extendData;
/**
* Task status
*/
private Integer status;
}


@@ -0,0 +1,36 @@
package com.xiaojukeji.know.streaming.km.rebalance.common.bean.entity.job;
import com.xiaojukeji.know.streaming.km.common.annotations.enterprise.EnterpriseLoadReBalance;
import com.xiaojukeji.know.streaming.km.rebalance.common.bean.entity.job.detail.ClusterBalanceDetailDataGroupByTopic;
import lombok.Data;
import java.util.Date;
import java.util.List;
/**
* @author zengqiao
* @date 22/05/06
*/
@Data
@EnterpriseLoadReBalance
public class ClusterBalanceReassignDetail {
/**
* Throttle, in bytes
*/
private Long throttleUnitB;
/**
* Start time
*/
private Date startTime;
/**
* Finish time
*/
private Date finishedTime;
/**
* Detailed info grouped by topic
*/
private List<ClusterBalanceDetailDataGroupByTopic> reassignTopicDetailsList;
}


@@ -0,0 +1,47 @@
package com.xiaojukeji.know.streaming.km.rebalance.common.bean.entity.job;
import com.xiaojukeji.know.streaming.km.common.annotations.enterprise.EnterpriseLoadReBalance;
import lombok.Data;
/**
* @author zengqiao
* @date 22/05/06
*/
@Data
@EnterpriseLoadReBalance
public class ClusterBalanceReassignExtendData {
/**
* Original retention time, in ms
*/
private Long originalRetentionTimeUnitMs;
/**
* Retention time during reassignment, in ms
*/
private Long reassignRetentionTimeUnitMs;
/**
* Log size still to reassign, in bytes
*/
private Long needReassignLogSizeUnitB;
/**
* Log size already reassigned, in bytes
*/
private Long finishedReassignLogSizeUnitB;
/**
* Estimated remaining time, in ms
*/
private Long remainTimeUnitMs;
/**
* Current replica count
*/
private Integer originReplicaNum;
/**
* New replica count
*/
private Integer reassignReplicaNum;
}


@@ -0,0 +1,43 @@
package com.xiaojukeji.know.streaming.km.rebalance.common.bean.entity.job.content;
import com.xiaojukeji.know.streaming.km.common.annotations.enterprise.EnterpriseLoadReBalance;
import com.xiaojukeji.know.streaming.km.common.bean.entity.job.content.BaseJobCreateContent;
import com.xiaojukeji.know.streaming.km.rebalance.common.bean.dto.ClusterBalanceIntervalDTO;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import javax.validation.constraints.Min;
import java.util.List;
@Data
@EnterpriseLoadReBalance
public class JobClusterBalanceContent extends BaseJobCreateContent {
@Min(value = 1, message = "clusterId不允许为null或者小于0")
@ApiModelProperty(value = "集群ID, 默认为逻辑集群ID", example = "6")
private Long clusterId;
@Min(value = 1, message = "throttle不允许为null或者小于0")
@ApiModelProperty(value = "限流值", example = "102400000")
private Long throttleUnitB;
@ApiModelProperty("topic黑名单")
private List<String> topicBlackList;
@ApiModelProperty("均衡区间详情")
private List<ClusterBalanceIntervalDTO> clusterBalanceIntervalList;
@ApiModelProperty("指标计算周期,单位分钟")
private Integer metricCalculationPeriod;
@ApiModelProperty("任务并行数")
private Integer parallelNum;
@ApiModelProperty("执行策略, 1优先最大副本2优先最小副本")
private Integer executionStrategy;
@ApiModelProperty("备注说明")
private String description;
@ApiModelProperty("是否是周期性任务")
private boolean scheduleJob;
}
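For orientation, a minimal sketch of filling this content for an immediate (non-scheduled) balance job; all values are illustrative only, and the @Min constraints above still apply:

// Illustrative only: example values, not recommendations.
JobClusterBalanceContent content = new JobClusterBalanceContent();
content.setClusterId(6L);
content.setThrottleUnitB(102400000L);               // throttle in bytes
content.setTopicBlackList(java.util.Arrays.asList("__consumer_offsets"));
content.setClusterBalanceIntervalList(java.util.Collections.emptyList()); // fill with real intervals
content.setMetricCalculationPeriod(60);             // minutes, per the property above
content.setParallelNum(2);
content.setExecutionStrategy(1);                    // 1 = largest replicas first
content.setScheduleJob(false);                      // immediate, not periodic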


@@ -0,0 +1,79 @@
package com.xiaojukeji.know.streaming.km.rebalance.common.bean.entity.job.detail;
import com.xiaojukeji.know.streaming.km.common.annotations.enterprise.EnterpriseLoadReBalance;
import lombok.Data;
import java.util.List;
/**
* @author zengqiao
* @date 22/05/06
*/
@Data
@EnterpriseLoadReBalance
public abstract class AbstractClusterBalanceDetailData {
/**
* Physical cluster ID
*/
private Long clusterPhyId;
/**
* Topic name
*/
private String topicName;
/**
* Source broker list
*/
private List<Integer> originalBrokerIdList;
/**
* Target broker list
*/
private List<Integer> reassignBrokerIdList;
/**
* Log size still to reassign, in bytes
*/
private Long needReassignLogSizeUnitB;
/**
* Log size already reassigned, in bytes
*/
private Long finishedReassignLogSizeUnitB;
/**
* Estimated remaining time, in ms
*/
private Long remainTimeUnitMs;
/**
* Current replica count
*/
private Integer presentReplicaNum;
/**
* Old replica count
*/
private Integer oldReplicaNum;
/**
* New replica count
*/
private Integer newReplicaNum;
/**
* Original retention time, in ms
*/
private Long originalRetentionTimeUnitMs;
/**
* Retention time during reassignment, in ms
*/
private Long reassignRetentionTimeUnitMs;
/**
* Status
*/
private Integer status;
}


@@ -0,0 +1,17 @@
package com.xiaojukeji.know.streaming.km.rebalance.common.bean.entity.job.detail;
import com.xiaojukeji.know.streaming.km.common.annotations.enterprise.EnterpriseLoadReBalance;
import lombok.Data;
/**
* @author zengqiao
* @date 22/05/06
*/
@Data
@EnterpriseLoadReBalance
public class ClusterBalanceDetailDataGroupByPartition extends AbstractClusterBalanceDetailData {
/**
* Partition ID
*/
private Integer partitionId;
}


@@ -0,0 +1,22 @@
package com.xiaojukeji.know.streaming.km.rebalance.common.bean.entity.job.detail;
import com.xiaojukeji.know.streaming.km.common.annotations.enterprise.EnterpriseLoadReBalance;
import lombok.Data;
import java.util.List;
/**
* @author zengqiao
* @date 22/05/06
*/
@Data
@EnterpriseLoadReBalance
public class ClusterBalanceDetailDataGroupByTopic extends AbstractClusterBalanceDetailData {
/**
* Partition ID list
*/
private List<Integer> partitionIdList;
/**
* Per-partition reassignment details
*/
private List<ClusterBalanceDetailDataGroupByPartition> reassignPartitionDetailsList;
}


@@ -0,0 +1,76 @@
package com.xiaojukeji.know.streaming.km.rebalance.common.bean.entity.job.detail;
import com.xiaojukeji.know.streaming.km.common.annotations.enterprise.EnterpriseLoadReBalance;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import java.io.Serializable;
/**
* Cluster balance plan detail
* @author zengqiao
* @date 22/02/23
*/
@Data
@EnterpriseLoadReBalance
@ApiModel(description = "Cluster balance detail information")
public class ClusterBalancePlanDetail implements Serializable {
@ApiModelProperty(value = "Overall balance status: 1 = balanced, 2 = unbalanced")
private Integer status;
@ApiModelProperty(value = "brokerId")
private Integer brokerId;
@ApiModelProperty(value = "broker host")
private String host;
@ApiModelProperty(value = "CPU before balancing")
private Double cpuBefore;
@ApiModelProperty(value = "Disk before balancing")
private Double diskBefore;
@ApiModelProperty(value = "bytesIn before balancing")
private Double byteInBefore;
@ApiModelProperty(value = "bytesOut before balancing")
private Double byteOutBefore;
@ApiModelProperty(value = "CPU after balancing")
private Double cpuAfter;
@ApiModelProperty(value = "CPU balance status: 1 = balanced, 2 = unbalanced")
private Integer cpuStatus;
@ApiModelProperty(value = "Disk after balancing")
private Double diskAfter;
@ApiModelProperty(value = "Disk balance status: 1 = balanced, 2 = unbalanced")
private Integer diskStatus;
@ApiModelProperty(value = "bytesIn after balancing")
private Double byteInAfter;
@ApiModelProperty(value = "bytesIn balance status: 1 = balanced, 2 = unbalanced")
private Integer byteInStatus;
@ApiModelProperty(value = "bytesOut after balancing")
private Double byteOutAfter;
@ApiModelProperty(value = "bytesOut balance status: 1 = balanced, 2 = unbalanced")
private Integer byteOutStatus;
@ApiModelProperty(value = "Moved-in size")
private Double inSize;
@ApiModelProperty(value = "Moved-in replica count")
private Double inReplica;
@ApiModelProperty(value = "Moved-out size")
private Double outSize;
@ApiModelProperty(value = "Moved-out replica count")
private Double outReplica;
}


@@ -0,0 +1,85 @@
/*
* Copyright (c) 2015, WINIT and/or its affiliates. All rights reserved. Use, Copy is subject to authorized license.
*/
package com.xiaojukeji.know.streaming.km.rebalance.common.bean.po;
import com.baomidou.mybatisplus.annotation.TableName;
import com.xiaojukeji.know.streaming.km.common.annotations.enterprise.EnterpriseLoadReBalance;
import com.xiaojukeji.know.streaming.km.common.bean.po.BasePO;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import lombok.Data;
import lombok.NoArgsConstructor;
/**
* Cluster balance job configuration PO
*
* @author fengqiongfeng
* @date 2022-05-23
*/
@Data
@EnterpriseLoadReBalance
@NoArgsConstructor
@TableName(Constant.MYSQL_TABLE_NAME_PREFIX + "cluster_balance_job_config")
public class ClusterBalanceJobConfigPO extends BasePO {
/**
* Serialization version UID
*/
private static final long serialVersionUID = 1L;
/**
* Cluster ID
*/
private Long clusterId;
/**
* Topic blacklist
*/
private String topicBlackList;
/**
* Task schedule (cron expression)
*/
private String taskCron;
/**
* Balance interval details (JSON)
*/
private String balanceIntervalJson;
/**
* Metric calculation period, in minutes
*/
private Integer metricCalculationPeriod;
/**
* Reassignment plan (JSON)
*/
private String reassignmentJson;
/**
* Task parallelism
*/
private Integer parallelNum;
/**
* Execution strategy: 1 = largest replicas first, 2 = smallest replicas first
*/
private Integer executionStrategy;
/**
* Throttle, in bytes
*/
private Long throttleUnitB;
/**
* Operator
*/
private String creator;
/**
* Job status: 0 = disabled, 1 = enabled
*/
private Integer status;
}


@@ -0,0 +1,125 @@
/*
* Copyright (c) 2015, WINIT and/or its affiliates. All rights reserved. Use, Copy is subject to authorized license.
*/
package com.xiaojukeji.know.streaming.km.rebalance.common.bean.po;
import com.baomidou.mybatisplus.annotation.TableName;
import com.xiaojukeji.know.streaming.km.common.bean.po.BasePO;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.util.Date;
/**
* Cluster balance job PO
*
* @author fengqiongfeng
* @date 2022-05-23
*/
@Data
@NoArgsConstructor
@TableName(Constant.MYSQL_TABLE_NAME_PREFIX + "cluster_balance_job")
public class ClusterBalanceJobPO extends BasePO {
/**
* Serialization version UID
*/
private static final long serialVersionUID = 1L;
/**
* Cluster ID
*/
private Long clusterId;
/**
* Brokers to balance
*/
private String brokers;
/**
* Topic blacklist
*/
private String topicBlackList;
/**
* 1 = immediate balance, 2 = periodic balance
*/
private Integer type;
/**
* Balance interval details (JSON)
*/
private String balanceIntervalJson;
/**
* Metric calculation period, in minutes
*/
private Integer metricCalculationPeriod;
/**
* Reassignment plan (JSON)
*/
private String reassignmentJson;
/**
* Task parallelism
*/
private Integer parallelNum;
/**
* Execution strategy: 1 = largest replicas first, 2 = smallest replicas first
*/
private Integer executionStrategy;
/**
* Throttle, in bytes
*/
private Long throttleUnitB;
/**
* Total reassignment size, in bytes
*/
private Double totalReassignSize;
/**
* Total number of replicas to reassign
*/
private Integer totalReassignReplicaNum;
/**
* Topics moved in
*/
private String moveInTopicList;
/**
* Per-broker balance detail (JSON)
*/
private String brokerBalanceDetail;
/**
* Job status: 1 = running, 2 = preparing, 3 = success, 4 = failed, 5 = canceled
*/
private Integer status;
/**
* Operator
*/
private String creator;
/**
* Job start time
*/
private Date startTime;
/**
* Job finish time
*/
private Date finishedTime;
/**
* Remarks
*/
private String description;
}


@@ -0,0 +1,80 @@
/*
* Copyright (c) 2015, WINIT and/or its affiliates. All rights reserved. Use, Copy is subject to authorized license.
*/
package com.xiaojukeji.know.streaming.km.rebalance.common.bean.po;
import com.baomidou.mybatisplus.annotation.TableName;
import com.xiaojukeji.know.streaming.km.common.bean.po.BasePO;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.util.Date;
/**
* Cluster balance reassignment detail PO
*
* @author fengqiongfeng
* @date 2022-05-23
*/
@Data
@NoArgsConstructor
@TableName(Constant.MYSQL_TABLE_NAME_PREFIX + "cluster_balance_reassign")
public class ClusterBalanceReassignPO extends BasePO {
/**
* Serialization version UID
*/
private static final long serialVersionUID = 1L;
/**
* Job ID
*/
private Long jobId;
/**
* Cluster ID
*/
private Long clusterId;
/**
* Topic name
*/
private String topicName;
/**
* Partition ID
*/
private Integer partitionId;
/**
* Source broker ID list
*/
private String originalBrokerIds;
/**
* Target broker ID list
*/
private String reassignBrokerIds;
/**
* Task start time
*/
private Date startTime;
/**
* Task finish time
*/
private Date finishedTime;
/**
* Extended data (JSON)
*/
private String extendData;
/**
* Task status
*/
private Integer status;
}


@@ -0,0 +1,24 @@
package com.xiaojukeji.know.streaming.km.rebalance.common.bean.vo;
import com.xiaojukeji.know.streaming.km.common.annotations.enterprise.EnterpriseLoadReBalance;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import java.io.Serializable;
/**
* Cluster balance history sub info
* @author zengqiao
* @date 22/02/23
*/
@Data
@EnterpriseLoadReBalance
@ApiModel(description = "Cluster balance history information")
public class ClusterBalanceHistorySubVO implements Serializable {
@ApiModelProperty(value = "Number of brokers balanced successfully")
private Long successNu;
@ApiModelProperty(value = "Number of brokers not balanced successfully")
private Long failedNu;
}


@@ -0,0 +1,34 @@
package com.xiaojukeji.know.streaming.km.rebalance.common.bean.vo;
import com.xiaojukeji.know.streaming.km.common.annotations.enterprise.EnterpriseLoadReBalance;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import java.io.Serializable;
import java.util.Date;
import java.util.Map;
/**
* Cluster balance history info
* @author zengqiao
* @date 22/02/23
*/
@Data
@EnterpriseLoadReBalance
@ApiModel(description = "Cluster balance history information")
public class ClusterBalanceHistoryVO implements Serializable {
@ApiModelProperty(value = "Balance execution start time")
private Date begin;
@ApiModelProperty(value = "Balance execution end time")
private Date end;
@ApiModelProperty(value = "Balance job ID")
private Long jobId;
@ApiModelProperty(value = "Per-dimension balance history", example = "cpu, disk")
private Map<String, ClusterBalanceHistorySubVO> sub;
}


@@ -0,0 +1,20 @@
package com.xiaojukeji.know.streaming.km.rebalance.common.bean.vo;
import com.xiaojukeji.know.streaming.km.common.annotations.enterprise.EnterpriseLoadReBalance;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
@Data
@EnterpriseLoadReBalance
public class ClusterBalanceIntervalVO {
@ApiModelProperty("均衡维度:cpu,disk,bytesIn,bytesOut")
private String type;
@ApiModelProperty("平衡区间百分比")
private Double intervalPercent;
@ApiModelProperty("优先级")
private Integer priority;
}


@@ -0,0 +1,55 @@
/*
* Copyright (c) 2015, WINIT and/or its affiliates. All rights reserved. Use, Copy is subject to authorized license.
*/
package com.xiaojukeji.know.streaming.km.rebalance.common.bean.vo;
import com.xiaojukeji.know.streaming.km.common.annotations.enterprise.EnterpriseLoadReBalance;
import com.xiaojukeji.know.streaming.km.rebalance.common.bean.entity.ClusterBalanceInterval;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import java.util.List;
/**
* Cluster balance job configuration VO
*
* @author fengqiongfeng
* @date 2022-05-23
*/
@Data
@EnterpriseLoadReBalance
public class ClusterBalanceJobConfigVO {
/**
* Serialization version UID
*/
private static final long serialVersionUID = 1L;
@ApiModelProperty("Cluster ID")
private Long clusterId;
@ApiModelProperty("Topic blacklist")
private List<String> topicBlackList;
@ApiModelProperty("Task schedule (cron expression)")
private String scheduleCron;
@ApiModelProperty("Balance interval details")
private List<ClusterBalanceInterval> clusterBalanceIntervalList;
@ApiModelProperty("Metric calculation period, in minutes")
private Integer metricCalculationPeriod;
@ApiModelProperty("Task parallelism")
private Integer parallelNum;
@ApiModelProperty("Execution strategy: 1 = largest replicas first, 2 = smallest replicas first")
private Integer executionStrategy;
@ApiModelProperty("Throttle, in bytes")
private Long throttleUnitB;
@ApiModelProperty("Job status: 0 = disabled, 1 = enabled")
private Integer status;
}


@@ -0,0 +1,32 @@
package com.xiaojukeji.know.streaming.km.rebalance.common.bean.vo;
import com.xiaojukeji.know.streaming.km.common.annotations.enterprise.EnterpriseLoadReBalance;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.io.Serializable;
/**
* Cluster balance overview sub info
* @author zengqiao
* @date 22/02/23
*/
@Data
@AllArgsConstructor
@NoArgsConstructor
@EnterpriseLoadReBalance
@ApiModel(description = "Cluster balance overview information")
public class ClusterBalanceOverviewSubVO implements Serializable {
@ApiModelProperty(value = "Average value", example = "average cpu value, e.g. 43.4")
private Double avg;
@ApiModelProperty(value = "Capacity spec", example = "1000")
private Double spec;
@ApiModelProperty(value = "Balance status", example = "0 = balanced, -1 = below balance range, 1 = above balance range")
private Integer status;
}

View File

@@ -0,0 +1,37 @@
package com.xiaojukeji.know.streaming.km.rebalance.common.bean.vo;
import com.xiaojukeji.know.streaming.km.common.annotations.enterprise.EnterpriseLoadReBalance;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import java.io.Serializable;
import java.util.Map;
/**
* Cluster balance overview info
* @author zengqiao
* @date 22/02/23
*/
@Data
@EnterpriseLoadReBalance
@ApiModel(description = "Cluster balance overview information")
public class ClusterBalanceOverviewVO implements Serializable {
@ApiModelProperty(value = "brokerId", example = "123")
private Integer brokerId;
@ApiModelProperty(value = "broker host")
private String host;
@ApiModelProperty(value = "Rack of the broker")
private String rack;
@ApiModelProperty(value = "leader")
private Integer leader;
@ApiModelProperty(value = "replicas")
private Integer replicas;
@ApiModelProperty(value = "Per-dimension statistics", example = "cpu, disk")
private Map<String, ClusterBalanceOverviewSubVO> sub;
}

View File

@@ -0,0 +1,64 @@
package com.xiaojukeji.know.streaming.km.rebalance.common.bean.vo;
import com.xiaojukeji.know.streaming.km.common.annotations.enterprise.EnterpriseLoadReBalance;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import java.io.Serializable;
/**
* Cluster balance plan detail VO
* @author zengqiao
* @date 22/02/23
*/
@Data
@EnterpriseLoadReBalance
@ApiModel(description = "Cluster balance plan detail information")
public class ClusterBalancePlanDetailVO implements Serializable {
@ApiModelProperty(value = "Balance status: 0 = balanced, 2 = unbalanced")
private Integer status;
@ApiModelProperty(value = "brokerId")
private Integer brokerId;
@ApiModelProperty(value = "broker host")
private String host;
@ApiModelProperty(value = "CPU before balancing")
private Double cpuBefore;
@ApiModelProperty(value = "Disk before balancing")
private Double diskBefore;
@ApiModelProperty(value = "bytesIn before balancing")
private Double byteInBefore;
@ApiModelProperty(value = "bytesOut before balancing")
private Double byteOutBefore;
@ApiModelProperty(value = "CPU after balancing")
private Double cpuAfter;
@ApiModelProperty(value = "Disk after balancing")
private Double diskAfter;
@ApiModelProperty(value = "bytesIn after balancing")
private Double byteInAfter;
@ApiModelProperty(value = "bytesOut after balancing")
private Double byteOutAfter;
@ApiModelProperty(value = "Moved-in size")
private Double inSize;
@ApiModelProperty(value = "Moved-in replica count")
private Double inReplica;
@ApiModelProperty(value = "Moved-out size")
private Double outSize;
@ApiModelProperty(value = "Moved-out replica count")
private Double outReplica;
}


@@ -0,0 +1,49 @@
package com.xiaojukeji.know.streaming.km.rebalance.common.bean.vo;
import com.xiaojukeji.know.streaming.km.common.annotations.enterprise.EnterpriseLoadReBalance;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import java.io.Serializable;
import java.util.List;
/**
* Cluster balance plan VO
* @author zengqiao
* @date 22/02/23
*/
@Data
@EnterpriseLoadReBalance
@ApiModel(description = "Cluster balance information")
public class ClusterBalancePlanVO implements Serializable {
@ApiModelProperty(value = "Balance plan type: 1 = immediate, 2 = periodic")
private Integer type;
@ApiModelProperty(value = "Brokers in scope for balancing")
private List<String> brokers;
@ApiModelProperty(value = "Topic blacklist for balancing")
private List<String> blackTopics;
@ApiModelProperty(value = "Topics moved in by balancing")
private List<String> topics;
@ApiModelProperty(value = "Total disk size moved by balancing, in bytes")
private Double moveSize;
@ApiModelProperty(value = "Total number of replicas moved by balancing")
private Integer replicas;
@ApiModelProperty(value = "Balance threshold")
private String threshold;
@ApiModelProperty(value = "reassignment json")
private String reassignmentJson;
@ApiModelProperty(value = "Balance interval info")
private List<ClusterBalanceIntervalVO> clusterBalanceIntervalList;
@ApiModelProperty(value = "Balance plan details")
private List<ClusterBalancePlanDetailVO> detail;
}


@@ -0,0 +1,32 @@
package com.xiaojukeji.know.streaming.km.rebalance.common.bean.vo;
import com.xiaojukeji.know.streaming.km.common.annotations.enterprise.EnterpriseLoadReBalance;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
@Data
@EnterpriseLoadReBalance
@AllArgsConstructor
@NoArgsConstructor
@ApiModel(description = "集群均衡状态子项的详细统计信息")
public class ClusterBalanceStateSubVO {
@ApiModelProperty(value = "平均值", example = "cpu的平均值43.4")
private Double avg;
@ApiModelProperty(value = "周期均衡时的均衡区间", example = "cpu的均衡值")
private Double interval;
@ApiModelProperty(value = "处于周期均衡时的均衡区间的最小值以下的broker个数", example = "4")
private Long smallNu;
@ApiModelProperty(value = "处于周期均衡时的均衡区间的broker个数", example = "4")
private Long betweenNu;
@ApiModelProperty(value = "处于周期均衡时的均衡区间的最大值以上的broker个数", example = "4")
private Long bigNu;
}


@@ -0,0 +1,32 @@
package com.xiaojukeji.know.streaming.km.rebalance.common.bean.vo;
import com.xiaojukeji.know.streaming.km.common.annotations.enterprise.EnterpriseLoadReBalance;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;
import java.io.Serializable;
import java.util.Date;
import java.util.Map;
/**
* Cluster balance state info
* @author zengqiao
* @date 22/02/23
*/
@Data
@EnterpriseLoadReBalance
@ApiModel(description = "Cluster balance state information")
public class ClusterBalanceStateVO implements Serializable {
@ApiModelProperty(value = "Balance status", example = "0 = balanced, 2 = unbalanced")
private Integer status;
@ApiModelProperty(value = "Whether balancing is enabled", example = "true = enabled, false = disabled")
private Boolean enable;
@ApiModelProperty(value = "Next balance start time")
private Date next;
@ApiModelProperty(value = "Per-dimension statistics", example = "cpu, disk")
private Map<String, ClusterBalanceStateSubVO> sub;
}


@@ -0,0 +1,494 @@
package com.xiaojukeji.know.streaming.km.rebalance.common.converter;
import com.xiaojukeji.know.streaming.km.common.annotations.enterprise.EnterpriseLoadReBalance;
import com.xiaojukeji.know.streaming.km.persistence.es.template.TemplateConstant;
import com.xiaojukeji.know.streaming.km.rebalance.common.bean.dto.ClusterBalanceIntervalDTO;
import com.xiaojukeji.know.streaming.km.rebalance.common.bean.dto.ClusterBalancePreviewDTO;
import com.xiaojukeji.know.streaming.km.rebalance.common.bean.dto.ClusterBalanceStrategyDTO;
import com.xiaojukeji.know.streaming.km.common.bean.entity.broker.Broker;
import com.xiaojukeji.know.streaming.km.common.bean.entity.broker.BrokerSpec;
import com.xiaojukeji.know.streaming.km.common.bean.entity.cluster.ClusterPhy;
import com.xiaojukeji.know.streaming.km.rebalance.common.bean.entity.ClusterBalanceInterval;
import com.xiaojukeji.know.streaming.km.rebalance.common.bean.entity.job.ClusterBalanceReassignExtendData;
import com.xiaojukeji.know.streaming.km.rebalance.common.bean.entity.job.detail.ClusterBalancePlanDetail;
import com.xiaojukeji.know.streaming.km.rebalance.common.bean.entity.job.content.JobClusterBalanceContent;
import com.xiaojukeji.know.streaming.km.common.bean.entity.topic.Topic;
import com.xiaojukeji.know.streaming.km.rebalance.common.bean.po.ClusterBalanceJobConfigPO;
import com.xiaojukeji.know.streaming.km.rebalance.common.bean.po.ClusterBalanceJobPO;
import com.xiaojukeji.know.streaming.km.rebalance.common.bean.po.ClusterBalanceReassignPO;
import com.xiaojukeji.know.streaming.km.common.constant.Constant;
import com.xiaojukeji.know.streaming.km.rebalance.common.bean.vo.*;
import com.xiaojukeji.know.streaming.km.rebalance.common.enums.ClusterBalanceStateEnum;
import com.xiaojukeji.know.streaming.km.rebalance.common.enums.ClusterBalanceTypeEnum;
import com.xiaojukeji.know.streaming.km.common.enums.job.JobStatusEnum;
import com.xiaojukeji.know.streaming.km.common.enums.job.JobTypeEnum;
import com.xiaojukeji.know.streaming.km.common.utils.CommonUtils;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import com.xiaojukeji.know.streaming.km.common.utils.ValidateUtils;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.executor.common.*;
import com.xiaojukeji.know.streaming.km.rebalance.algorithm.model.Resource;
import org.apache.kafka.clients.CommonClientConfigs;
import java.util.*;
import java.util.function.Function;
import java.util.stream.Collectors;
@EnterpriseLoadReBalance
public class ClusterBalanceConverter {
private ClusterBalanceConverter() {
}
public static BalanceParameter convert2BalanceParameter(ClusterBalanceJobConfigPO configPO,
Map<Integer, Broker> brokerMap,
Map<Integer, BrokerSpec> brokerSpecMap,
ClusterPhy clusterPhy,
String esUrl,
String esPassword,
List<String> topicNames) {
BalanceParameter balanceParameter = new BalanceParameter();
List<ClusterBalanceIntervalDTO> clusterBalanceIntervalDTOS = ConvertUtil.str2ObjArrayByJson(configPO.getBalanceIntervalJson(), ClusterBalanceIntervalDTO.class);
List<String> goals = new ArrayList<>();
for (ClusterBalanceIntervalDTO clusterBalanceIntervalDTO : clusterBalanceIntervalDTOS) {
if (Resource.DISK.resource().equals(clusterBalanceIntervalDTO.getType())) {
balanceParameter.setDiskThreshold(clusterBalanceIntervalDTO.getIntervalPercent() / 100);
goals.add(BalanceGoal.DISK.goal());
} else if (Resource.CPU.resource().equals(clusterBalanceIntervalDTO.getType())) {
balanceParameter.setCpuThreshold(clusterBalanceIntervalDTO.getIntervalPercent() / 100);
// TODO: CPU goal is not yet implemented in the underlying algorithm, so no goal is added for now
} else if (Resource.NW_IN.resource().equals(clusterBalanceIntervalDTO.getType())) {
balanceParameter.setNetworkInThreshold(clusterBalanceIntervalDTO.getIntervalPercent() / 100);
goals.add(BalanceGoal.NW_IN.goal());
} else if (Resource.NW_OUT.resource().equals(clusterBalanceIntervalDTO.getType())) {
balanceParameter.setNetworkOutThreshold(clusterBalanceIntervalDTO.getIntervalPercent() / 100);
goals.add(BalanceGoal.NW_OUT.goal());
}
}
balanceParameter.setGoals(goals);
balanceParameter.setCluster(clusterPhy.getId().toString());
balanceParameter.setExcludedTopics(configPO.getTopicBlackList());
balanceParameter.setEsInfo(esUrl, esPassword, TemplateConstant.PARTITION_INDEX + "_");
balanceParameter.setBalanceBrokers(CommonUtils.intSet2String(brokerMap.keySet()));
balanceParameter.setHardwareEnv(convert2ListHostEnv(brokerMap, brokerSpecMap));
balanceParameter.setBeforeSeconds(configPO.getMetricCalculationPeriod());
balanceParameter.setIgnoredTopics(CommonUtils.strList2String(topicNames));
Properties kafkaConfig = new Properties();
kafkaConfig.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, clusterPhy.getBootstrapServers());
kafkaConfig.putAll(ConvertUtil.str2ObjByJson(clusterPhy.getClientProperties(), Properties.class));
balanceParameter.setKafkaConfig(kafkaConfig);
return balanceParameter;
}
public static BalanceParameter convert2BalanceParameter(ClusterBalanceJobPO clusterBalanceJobPO,
Map<Integer, Broker> brokerMap,
Map<Integer, BrokerSpec> brokerSpecMap,
ClusterPhy clusterPhy,
String esUrl,
String esPassword,
List<String> topicNames) {
BalanceParameter balanceParameter = new BalanceParameter();
List<ClusterBalanceIntervalDTO> clusterBalanceIntervalDTOS = ConvertUtil.str2ObjArrayByJson(clusterBalanceJobPO.getBalanceIntervalJson(), ClusterBalanceIntervalDTO.class);
List<String> goals = new ArrayList<>();
for (ClusterBalanceIntervalDTO clusterBalanceIntervalDTO : clusterBalanceIntervalDTOS) {
if (Resource.DISK.resource().equals(clusterBalanceIntervalDTO.getType())) {
balanceParameter.setDiskThreshold(clusterBalanceIntervalDTO.getIntervalPercent() / 100);
goals.add(BalanceGoal.DISK.goal());
} else if (Resource.CPU.resource().equals(clusterBalanceIntervalDTO.getType())) {
balanceParameter.setCpuThreshold(clusterBalanceIntervalDTO.getIntervalPercent() / 100);
// TODO: CPU goal is not yet implemented in the underlying algorithm, so no goal is added for now
} else if (Resource.NW_IN.resource().equals(clusterBalanceIntervalDTO.getType())) {
balanceParameter.setNetworkInThreshold(clusterBalanceIntervalDTO.getIntervalPercent() / 100);
goals.add(BalanceGoal.NW_IN.goal());
} else if (Resource.NW_OUT.resource().equals(clusterBalanceIntervalDTO.getType())) {
balanceParameter.setNetworkOutThreshold(clusterBalanceIntervalDTO.getIntervalPercent() / 100);
goals.add(BalanceGoal.NW_OUT.goal());
}
}
balanceParameter.setGoals(goals);
balanceParameter.setCluster(clusterPhy.getId().toString());
balanceParameter.setExcludedTopics(clusterBalanceJobPO.getTopicBlackList());
balanceParameter.setEsInfo(esUrl, esPassword, TemplateConstant.PARTITION_INDEX + "_");
balanceParameter.setBalanceBrokers(clusterBalanceJobPO.getBrokers());
balanceParameter.setHardwareEnv(convert2ListHostEnv(brokerMap, brokerSpecMap));
balanceParameter.setBeforeSeconds(clusterBalanceJobPO.getMetricCalculationPeriod());
balanceParameter.setIgnoredTopics(CommonUtils.strList2String(topicNames));
Properties kafkaConfig = new Properties();
kafkaConfig.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, clusterPhy.getBootstrapServers());
kafkaConfig.putAll(ConvertUtil.str2ObjByJson(clusterPhy.getClientProperties(), Properties.class));
balanceParameter.setKafkaConfig(kafkaConfig);
return balanceParameter;
}
public static BalanceParameter convert2BalanceParameter(JobClusterBalanceContent dto,
List<Broker> brokers,
Map<Integer, BrokerSpec> brokerSpecMap,
ClusterPhy clusterPhy,
String esUrl,
String esPassword,
List<String> topicNames) {
BalanceParameter balanceParameter = new BalanceParameter();
List<ClusterBalanceIntervalDTO> clusterBalanceIntervalDTOS = dto.getClusterBalanceIntervalList().stream()
.sorted(Comparator.comparing(ClusterBalanceIntervalDTO::getPriority)).collect(Collectors.toList());
List<String> goals = new ArrayList<>();
for (ClusterBalanceIntervalDTO clusterBalanceIntervalDTO : clusterBalanceIntervalDTOS) {
if (Resource.DISK.resource().equals(clusterBalanceIntervalDTO.getType())) {
balanceParameter.setDiskThreshold(clusterBalanceIntervalDTO.getIntervalPercent() / 100);
goals.add(BalanceGoal.DISK.goal());
} else if (Resource.CPU.resource().equals(clusterBalanceIntervalDTO.getType())) {
balanceParameter.setCpuThreshold(clusterBalanceIntervalDTO.getIntervalPercent() / 100);
// TODO: CPU goal is not yet implemented in the underlying algorithm, so no goal is added for now
} else if (Resource.NW_IN.resource().equals(clusterBalanceIntervalDTO.getType())) {
balanceParameter.setNetworkInThreshold(clusterBalanceIntervalDTO.getIntervalPercent() / 100);
goals.add(BalanceGoal.NW_IN.goal());
} else if (Resource.NW_OUT.resource().equals(clusterBalanceIntervalDTO.getType())) {
balanceParameter.setNetworkOutThreshold(clusterBalanceIntervalDTO.getIntervalPercent() / 100);
goals.add(BalanceGoal.NW_OUT.goal());
}
}
Map<Integer, Broker> brokerMap = brokers.stream().collect(Collectors.toMap(Broker::getBrokerId, Function.identity()));
balanceParameter.setGoals(goals);
balanceParameter.setCluster(clusterPhy.getId().toString());
balanceParameter.setExcludedTopics(CommonUtils.strList2String(dto.getTopicBlackList()));
balanceParameter.setEsInfo(esUrl, esPassword, TemplateConstant.PARTITION_INDEX + "_");
balanceParameter.setBalanceBrokers(CommonUtils.intSet2String(brokerMap.keySet()));
balanceParameter.setHardwareEnv(convert2ListHostEnv(brokerMap, brokerSpecMap));
balanceParameter.setBeforeSeconds(dto.getMetricCalculationPeriod());
balanceParameter.setIgnoredTopics(CommonUtils.strList2String(topicNames));
Properties kafkaConfig = new Properties();
kafkaConfig.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, clusterPhy.getBootstrapServers());
kafkaConfig.putAll(ConvertUtil.str2ObjByJson(clusterPhy.getClientProperties(), Properties.class));
balanceParameter.setKafkaConfig(kafkaConfig);
return balanceParameter;
}
public static BalanceParameter convert2BalanceParameter(ClusterBalancePreviewDTO dto,
Map<Integer, Broker> brokerMap,
Map<Integer, BrokerSpec> brokerSpecMap,
ClusterPhy clusterPhy,
String esUrl,
String esPassword,
List<String> topicNames) {
BalanceParameter balanceParameter = new BalanceParameter();
List<ClusterBalanceIntervalDTO> clusterBalanceIntervalDTOS = dto.getClusterBalanceIntervalList().stream()
.sorted(Comparator.comparing(ClusterBalanceIntervalDTO::getPriority)).collect(Collectors.toList());
List<String> goals = new ArrayList<>();
for (ClusterBalanceIntervalDTO clusterBalanceIntervalDTO : clusterBalanceIntervalDTOS) {
if (Resource.DISK.resource().equals(clusterBalanceIntervalDTO.getType())) {
balanceParameter.setDiskThreshold(clusterBalanceIntervalDTO.getIntervalPercent() / 100);
goals.add(BalanceGoal.DISK.goal());
} else if (Resource.CPU.resource().equals(clusterBalanceIntervalDTO.getType())) {
balanceParameter.setCpuThreshold(clusterBalanceIntervalDTO.getIntervalPercent() / 100);
// TODO: CPU goal is not yet implemented in the underlying algorithm, so no goal is added for now
} else if (Resource.NW_IN.resource().equals(clusterBalanceIntervalDTO.getType())) {
balanceParameter.setNetworkInThreshold(clusterBalanceIntervalDTO.getIntervalPercent() / 100);
goals.add(BalanceGoal.NW_IN.goal());
} else if (Resource.NW_OUT.resource().equals(clusterBalanceIntervalDTO.getType())) {
balanceParameter.setNetworkOutThreshold(clusterBalanceIntervalDTO.getIntervalPercent() / 100);
goals.add(BalanceGoal.NW_OUT.goal());
}
}
balanceParameter.setGoals(goals);
balanceParameter.setCluster(clusterPhy.getId().toString());
balanceParameter.setExcludedTopics(CommonUtils.strList2String(dto.getTopicBlackList()));
balanceParameter.setEsInfo(esUrl, esPassword, TemplateConstant.PARTITION_INDEX + "_");
balanceParameter.setBalanceBrokers(CommonUtils.intList2String(dto.getBrokers()));
balanceParameter.setHardwareEnv(convert2ListHostEnv(brokerMap, brokerSpecMap));
balanceParameter.setBeforeSeconds(dto.getMetricCalculationPeriod());
balanceParameter.setIgnoredTopics(CommonUtils.strList2String(topicNames));
Properties kafkaConfig = new Properties();
kafkaConfig.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, clusterPhy.getBootstrapServers());
kafkaConfig.putAll(ConvertUtil.str2ObjByJson(clusterPhy.getClientProperties(), Properties.class));
balanceParameter.setKafkaConfig(kafkaConfig);
return balanceParameter;
}
public static ClusterBalanceJobPO convert2ClusterBalanceJobPO(Long jobId, JobClusterBalanceContent jobDTO, OptimizerResult optimizerResult, List<Broker> brokers, String operator, String json) {
if (ValidateUtils.anyNull(jobDTO, optimizerResult, optimizerResult.resultJsonOverview(),
optimizerResult.resultJsonDetailed(), optimizerResult.resultDetailed(), optimizerResult.resultJsonTask())){
return null;
}
ClusterBalanceJobPO clusterBalanceJobPO = new ClusterBalanceJobPO();
clusterBalanceJobPO.setId(jobId);
clusterBalanceJobPO.setType(jobDTO.isScheduleJob() ?
ClusterBalanceTypeEnum.CYCLE.getType() : ClusterBalanceTypeEnum.IMMEDIATELY.getType());
clusterBalanceJobPO.setStatus(JobStatusEnum.WAITING.getStatus());
clusterBalanceJobPO.setCreator(operator);
clusterBalanceJobPO.setParallelNum(jobDTO.getParallelNum());
clusterBalanceJobPO.setThrottleUnitB(jobDTO.getThrottleUnitB());
clusterBalanceJobPO.setDescription(jobDTO.getDescription());
clusterBalanceJobPO.setBrokers(CommonUtils.intList2String(brokers.stream().map(Broker::getBrokerId).collect(Collectors.toList())));
clusterBalanceJobPO.setClusterId(jobDTO.getClusterId());
clusterBalanceJobPO.setTopicBlackList(CommonUtils.strList2String(jobDTO.getTopicBlackList()));
clusterBalanceJobPO.setMoveInTopicList(optimizerResult.resultOverview().getMoveTopics());
clusterBalanceJobPO.setExecutionStrategy(jobDTO.getExecutionStrategy());
clusterBalanceJobPO.setBalanceIntervalJson(ConvertUtil.obj2Json(jobDTO.getClusterBalanceIntervalList()));
clusterBalanceJobPO.setBrokerBalanceDetail(ConvertUtil.obj2Json(convert2ClusterBalancePlanDetail(optimizerResult.resultDetailed())));
clusterBalanceJobPO.setMetricCalculationPeriod(jobDTO.getMetricCalculationPeriod());
clusterBalanceJobPO.setReassignmentJson(json);
clusterBalanceJobPO.setTotalReassignSize(optimizerResult.resultOverview().getTotalMoveSize());
clusterBalanceJobPO.setTotalReassignReplicaNum(optimizerResult.resultOverview().getMoveReplicas());
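// note: this overwrites the description copied from jobDTO above with the balance action history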
clusterBalanceJobPO.setDescription(optimizerResult.resultJsonBalanceActionHistory());
return clusterBalanceJobPO;
}
public static ClusterBalanceReassignPO convert2ClusterBalanceReassignPO(BalanceTask balanceTask, Topic topic, Long jobId, Long clusterId) {
ClusterBalanceReassignPO reassignPO = new ClusterBalanceReassignPO();
reassignPO.setClusterId(clusterId);
reassignPO.setJobId(jobId);
reassignPO.setPartitionId(balanceTask.getPartition());
reassignPO.setOriginalBrokerIds(CommonUtils.intList2String(topic.getPartitionMap().get(balanceTask.getPartition())));
reassignPO.setReassignBrokerIds(CommonUtils.intList2String(balanceTask.getReplicas()));
reassignPO.setTopicName(balanceTask.getTopic());
ClusterBalanceReassignExtendData extendData = new ClusterBalanceReassignExtendData();
extendData.setOriginalRetentionTimeUnitMs(topic.getRetentionMs());
extendData.setReassignRetentionTimeUnitMs(topic.getRetentionMs());
extendData.setOriginReplicaNum(topic.getReplicaNum());
extendData.setReassignReplicaNum(balanceTask.getReplicas().size());
reassignPO.setExtendData(ConvertUtil.obj2Json(extendData));
reassignPO.setStatus(JobStatusEnum.WAITING.getStatus());
return reassignPO;
}
public static List<ClusterBalanceReassignPO> convert2ListClusterBalanceReassignPO(List<BalanceTask> balanceTasks, Map<String, Topic> topicMap, Long jobId, Long clusterId) {
List<ClusterBalanceReassignPO> reassignPOs = new ArrayList<>();
// Build the reassignment details
Map<String, List<BalanceTask>> balanceTaskMap = balanceTasks.stream().collect(Collectors.groupingBy(BalanceTask::getTopic));
for (Map.Entry<String, List<BalanceTask>> entry : balanceTaskMap.entrySet()){
Topic topic = topicMap.get(entry.getKey());
if (topic == null || topic.getPartitionMap() == null){
continue;
}
for (BalanceTask balanceTask : entry.getValue()){
reassignPOs.add(ClusterBalanceConverter.convert2ClusterBalanceReassignPO(balanceTask, topic, jobId, clusterId));
}
}
return reassignPOs;
}
public static ClusterBalanceJobConfigPO convert2ClusterBalanceJobConfigPO(ClusterBalanceStrategyDTO dto, String operator) {
ClusterBalanceJobConfigPO jobConfigPO = new ClusterBalanceJobConfigPO();
jobConfigPO.setCreator(operator);
jobConfigPO.setParallelNum(dto.getParallelNum());
jobConfigPO.setThrottleUnitB(dto.getThrottleUnitB());
jobConfigPO.setClusterId(dto.getClusterId());
jobConfigPO.setExecutionStrategy(dto.getExecutionStrategy());
jobConfigPO.setBalanceIntervalJson(ConvertUtil.obj2Json(dto.getClusterBalanceIntervalList()));
jobConfigPO.setTaskCron(dto.getScheduleCron());
jobConfigPO.setMetricCalculationPeriod(dto.getMetricCalculationPeriod());
jobConfigPO.setStatus(dto.getStatus());
return jobConfigPO;
}
public static JobClusterBalanceContent convert2JobClusterBalanceContent(ClusterBalanceJobConfigPO configPO) {
JobClusterBalanceContent content = new JobClusterBalanceContent();
content.setType(JobTypeEnum.CLUSTER_BALANCE.getType());
content.setParallelNum(configPO.getParallelNum());
content.setThrottleUnitB(configPO.getThrottleUnitB());
content.setClusterId(configPO.getClusterId());
content.setExecutionStrategy(configPO.getExecutionStrategy());
content.setClusterBalanceIntervalList(ConvertUtil.str2ObjArrayByJson(configPO.getBalanceIntervalJson(), ClusterBalanceIntervalDTO.class));
content.setMetricCalculationPeriod(configPO.getMetricCalculationPeriod());
content.setTopicBlackList(CommonUtils.string2StrList(configPO.getTopicBlackList()));
content.setScheduleJob(Boolean.TRUE);
return content;
}
public static List<ClusterBalancePlanDetail> convert2ClusterBalancePlanDetail(Map<Integer, BalanceDetailed> detailedMap) {
List<ClusterBalancePlanDetail> details = new ArrayList<>();
for(Map.Entry<Integer, BalanceDetailed> entry : detailedMap.entrySet()){
BalanceDetailed balanceDetailed = entry.getValue();
if (balanceDetailed == null){
continue;
}
ClusterBalancePlanDetail planDetail = new ClusterBalancePlanDetail();
planDetail.setStatus(ClusterBalanceStateEnum.BALANCE.getState().equals(balanceDetailed.getBalanceState()) ? ClusterBalanceStateEnum.BALANCE.getState() : ClusterBalanceStateEnum.UNBALANCED.getState());
planDetail.setHost(balanceDetailed.getHost());
planDetail.setBrokerId(entry.getKey());
planDetail.setCpuBefore(balanceDetailed.getCurrentCPUUtilization()*Constant.ONE_HUNDRED);
planDetail.setCpuAfter(balanceDetailed.getLastCPUUtilization()*Constant.ONE_HUNDRED);
planDetail.setDiskBefore(balanceDetailed.getCurrentDiskUtilization()*Constant.ONE_HUNDRED);
planDetail.setDiskAfter(balanceDetailed.getLastDiskUtilization()*Constant.ONE_HUNDRED);
planDetail.setByteInBefore(balanceDetailed.getCurrentNetworkInUtilization()*Constant.ONE_HUNDRED);
planDetail.setByteInAfter(balanceDetailed.getLastNetworkInUtilization()*Constant.ONE_HUNDRED);
planDetail.setByteOutBefore(balanceDetailed.getCurrentNetworkOutUtilization()*Constant.ONE_HUNDRED);
planDetail.setByteOutAfter(balanceDetailed.getLastNetworkOutUtilization()*Constant.ONE_HUNDRED);
planDetail.setInReplica(balanceDetailed.getMoveInReplicas());
planDetail.setOutReplica(balanceDetailed.getMoveOutReplicas());
planDetail.setInSize(balanceDetailed.getMoveInDiskSize());
planDetail.setOutSize(balanceDetailed.getMoveOutDiskSize());
details.add(planDetail);
}
return details;
}
// Update the per-broker balance state after the balance job finishes
public static List<ClusterBalancePlanDetail> convert2ClusterBalancePlanDetail(List<ClusterBalancePlanDetail> details, Map<Integer, BrokerBalanceState> stateMap) {
details.forEach(planDetail ->{
BrokerBalanceState state = stateMap.get(planDetail.getBrokerId());
if (state == null){
return;
}
planDetail.setCpuStatus(state.getCpuBalanceState());
planDetail.setDiskStatus(state.getDiskBalanceState());
planDetail.setByteInStatus(state.getBytesInBalanceState());
planDetail.setByteOutStatus(state.getBytesOutBalanceState());
if ((state.getCpuBalanceState() == null || ClusterBalanceStateEnum.BALANCE.getState().equals(state.getCpuBalanceState()))
&& (state.getDiskBalanceState() == null || ClusterBalanceStateEnum.BALANCE.getState().equals(state.getDiskBalanceState()))
&& (state.getBytesInBalanceState() == null || ClusterBalanceStateEnum.BALANCE.getState().equals(state.getBytesInBalanceState()))
&& (state.getBytesOutBalanceState() == null || ClusterBalanceStateEnum.BALANCE.getState().equals(state.getBytesOutBalanceState()))) {
planDetail.setStatus(ClusterBalanceStateEnum.BALANCE.getState());
}else {
planDetail.setStatus(ClusterBalanceStateEnum.UNBALANCED.getState());
}
});
return details;
}
public static List<ClusterBalancePlanDetailVO> convert2ClusterBalancePlanDetailVO(List<Integer> balanceBrokerIds, Map<Integer, BalanceDetailed> detailedMap) {
List<ClusterBalancePlanDetailVO> detailVOS = new ArrayList<>();
for(Map.Entry<Integer, BalanceDetailed> entry : detailedMap.entrySet()){
BalanceDetailed value = entry.getValue();
if (value == null || !balanceBrokerIds.contains(entry.getKey())){
continue;
}
ClusterBalancePlanDetailVO planDetailVO = new ClusterBalancePlanDetailVO();
planDetailVO.setStatus(ClusterBalanceStateEnum.BALANCE.getState().equals(value.getBalanceState()) ? ClusterBalanceStateEnum.BALANCE.getState() : ClusterBalanceStateEnum.UNBALANCED.getState());
planDetailVO.setHost(value.getHost());
planDetailVO.setBrokerId(entry.getKey());
planDetailVO.setCpuBefore(value.getCurrentCPUUtilization()*Constant.ONE_HUNDRED);
planDetailVO.setCpuAfter(value.getLastCPUUtilization()*Constant.ONE_HUNDRED);
planDetailVO.setDiskBefore(value.getCurrentDiskUtilization()*Constant.ONE_HUNDRED);
planDetailVO.setDiskAfter(value.getLastDiskUtilization()*Constant.ONE_HUNDRED);
planDetailVO.setByteInBefore(value.getCurrentNetworkInUtilization()*Constant.ONE_HUNDRED);
planDetailVO.setByteInAfter(value.getLastNetworkInUtilization()*Constant.ONE_HUNDRED);
planDetailVO.setByteOutBefore(value.getCurrentNetworkOutUtilization()*Constant.ONE_HUNDRED);
planDetailVO.setByteOutAfter(value.getLastNetworkOutUtilization()*Constant.ONE_HUNDRED);
planDetailVO.setInReplica(value.getMoveInReplicas());
planDetailVO.setOutReplica(value.getMoveOutReplicas());
planDetailVO.setInSize(value.getMoveInDiskSize());
planDetailVO.setOutSize(value.getMoveOutDiskSize());
detailVOS.add(planDetailVO);
}
return detailVOS;
}
public static ClusterBalancePlanVO convert2ClusterBalancePlanVO(ClusterBalancePreviewDTO jobDTO, OptimizerResult optimizerResult, List<Broker> allBrokers) {
if (ValidateUtils.anyNull(jobDTO, optimizerResult, optimizerResult.resultJsonOverview(),
optimizerResult.resultJsonDetailed(), optimizerResult.resultDetailed(), optimizerResult.resultJsonTask())){
return null;
}
ClusterBalancePlanVO planVO = new ClusterBalancePlanVO();
planVO.setTopics(CommonUtils.string2StrList(optimizerResult.resultOverview().getMoveTopics()));
planVO.setType(ClusterBalanceTypeEnum.IMMEDIATELY.getType());
planVO.setReplicas(optimizerResult.resultOverview().getMoveReplicas());
planVO.setBlackTopics(jobDTO.getTopicBlackList());
planVO.setMoveSize(optimizerResult.resultOverview().getTotalMoveSize());
planVO.setThreshold(ConvertUtil.obj2Json(jobDTO.getClusterBalanceIntervalList()));
planVO.setBrokers(convert2HostList(allBrokers, optimizerResult.resultOverview().getNodeRange()));
planVO.setDetail(convert2ClusterBalancePlanDetailVO(jobDTO.getBrokers(), optimizerResult.resultDetailed()));
planVO.setClusterBalanceIntervalList(ConvertUtil.list2List(jobDTO.getClusterBalanceIntervalList(), ClusterBalanceIntervalVO.class));
planVO.setReassignmentJson(optimizerResult.resultJsonTask());
return planVO;
}
public static ClusterBalancePreviewDTO convert2ClusterBalancePreviewDTO(ClusterBalanceJobPO clusterBalanceJobPO) {
ClusterBalancePreviewDTO planVO = new ClusterBalancePreviewDTO();
planVO.setBrokers(CommonUtils.string2IntList(clusterBalanceJobPO.getBrokers()));
planVO.setClusterBalanceIntervalList(ConvertUtil.str2ObjArrayByJson(clusterBalanceJobPO.getBalanceIntervalJson(), ClusterBalanceIntervalDTO.class));
planVO.setClusterId(clusterBalanceJobPO.getClusterId());
planVO.setExecutionStrategy(clusterBalanceJobPO.getExecutionStrategy());
planVO.setParallelNum(clusterBalanceJobPO.getParallelNum());
planVO.setThrottleUnitB(clusterBalanceJobPO.getThrottleUnitB());
planVO.setMetricCalculationPeriod(clusterBalanceJobPO.getMetricCalculationPeriod());
planVO.setTopicBlackList(CommonUtils.string2StrList(clusterBalanceJobPO.getTopicBlackList()));
return planVO;
}
public static Map<String, ClusterBalanceOverviewSubVO> convert2MapClusterBalanceOverviewSubVO(BrokerSpec brokerSpec, BrokerBalanceState state) {
Map<String, ClusterBalanceOverviewSubVO> subVOMap = new HashMap<>();
if (brokerSpec == null){
brokerSpec = new BrokerSpec();
}
if (state == null){
state = new BrokerBalanceState();
}
Double cpuSpec = brokerSpec.getCpu() != null ? brokerSpec.getCpu() * Constant.ONE_HUNDRED : null; // convert to the base unit
subVOMap.put(Resource.DISK.resource(),
new ClusterBalanceOverviewSubVO(
state.getDiskAvgResource(), brokerSpec.getDisk(),
state.getDiskBalanceState() == null || state.getDiskBalanceState().equals(ClusterBalanceStateEnum.BALANCE.getState())?state.getDiskBalanceState():ClusterBalanceStateEnum.UNBALANCED.getState()));
subVOMap.put(Resource.CPU.resource(),
new ClusterBalanceOverviewSubVO(state.getCpuAvgResource(), cpuSpec,
state.getCpuBalanceState() == null || state.getCpuBalanceState().equals(ClusterBalanceStateEnum.BALANCE.getState())?state.getCpuBalanceState():ClusterBalanceStateEnum.UNBALANCED.getState()));
subVOMap.put(Resource.NW_IN.resource(),
new ClusterBalanceOverviewSubVO(
state.getBytesInAvgResource(), brokerSpec.getFlow(),
state.getBytesInBalanceState() == null || state.getBytesInBalanceState().equals(ClusterBalanceStateEnum.BALANCE.getState())?state.getBytesInBalanceState():ClusterBalanceStateEnum.UNBALANCED.getState()));
subVOMap.put(Resource.NW_OUT.resource(),
new ClusterBalanceOverviewSubVO(
state.getBytesOutAvgResource(), brokerSpec.getFlow(),
state.getBytesOutBalanceState() == null || state.getBytesOutBalanceState().equals(ClusterBalanceStateEnum.BALANCE.getState())?state.getBytesOutBalanceState():ClusterBalanceStateEnum.UNBALANCED.getState()));
return subVOMap;
}
public static ClusterBalanceJobConfigVO convert2ClusterBalanceJobConfigVO(ClusterBalanceJobConfigPO clusterBalanceJobConfigPO){
ClusterBalanceJobConfigVO configVO = new ClusterBalanceJobConfigVO();
configVO.setScheduleCron(clusterBalanceJobConfigPO.getTaskCron());
configVO.setClusterBalanceIntervalList(ConvertUtil.str2ObjArrayByJson(clusterBalanceJobConfigPO.getBalanceIntervalJson(), ClusterBalanceInterval.class));
configVO.setClusterId(clusterBalanceJobConfigPO.getClusterId());
configVO.setExecutionStrategy(clusterBalanceJobConfigPO.getExecutionStrategy());
configVO.setParallelNum(clusterBalanceJobConfigPO.getParallelNum());
configVO.setMetricCalculationPeriod(clusterBalanceJobConfigPO.getMetricCalculationPeriod());
configVO.setThrottleUnitB(clusterBalanceJobConfigPO.getThrottleUnitB());
configVO.setTopicBlackList(CommonUtils.string2StrList(clusterBalanceJobConfigPO.getTopicBlackList()));
configVO.setStatus(clusterBalanceJobConfigPO.getStatus());
return configVO;
}
public static List<String> convert2HostList(List<Broker> allBrokers, String brokerIdStr){
if (allBrokers.isEmpty() || ValidateUtils.isBlank(brokerIdStr)){
return new ArrayList<>();
}
List<Integer> brokerIds = CommonUtils.string2IntList(brokerIdStr);
return allBrokers.stream().filter(broker -> brokerIds.contains(broker.getBrokerId()))
.map(Broker::getHost).collect(Collectors.toList());
}
private static List<HostEnv> convert2ListHostEnv(Map<Integer, Broker> brokerMap, Map<Integer, BrokerSpec> brokerSpecMap) {
List<HostEnv> hostEnvs = new ArrayList<>();
for (Map.Entry<Integer, Broker> entry : brokerMap.entrySet()) {
HostEnv hostEnv = new HostEnv();
hostEnv.setId(entry.getKey());
hostEnv.setHost(entry.getValue().getHost());
hostEnv.setRackId(entry.getValue().getRack());
BrokerSpec brokerSpec = brokerSpecMap.get(entry.getKey());
if (brokerSpec == null){
continue;
}
hostEnv.setCpu(brokerSpec.getCpu().intValue() * Constant.ONE_HUNDRED);
hostEnv.setDisk(brokerSpec.getDisk() * Constant.B_TO_GB);
hostEnv.setNetwork(brokerSpec.getFlow() * Constant.B_TO_MB);
hostEnvs.add(hostEnv);
}
return hostEnvs;
}
}
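The four convert2BalanceParameter overloads above duplicate the interval-to-threshold/goal mapping verbatim. As a refactoring sketch only (using just the types and calls already present in this converter), that loop body could be extracted into a single helper:

// Sketch only: applies one interval entry to the parameter and records the matching goal.
private static void applyInterval(BalanceParameter balanceParameter, ClusterBalanceIntervalDTO dto, List<String> goals) {
    double threshold = dto.getIntervalPercent() / 100;
    if (Resource.DISK.resource().equals(dto.getType())) {
        balanceParameter.setDiskThreshold(threshold);
        goals.add(BalanceGoal.DISK.goal());
    } else if (Resource.CPU.resource().equals(dto.getType())) {
        balanceParameter.setCpuThreshold(threshold);
        // CPU goal not yet implemented in the underlying algorithm
    } else if (Resource.NW_IN.resource().equals(dto.getType())) {
        balanceParameter.setNetworkInThreshold(threshold);
        goals.add(BalanceGoal.NW_IN.goal());
    } else if (Resource.NW_OUT.resource().equals(dto.getType())) {
        balanceParameter.setNetworkOutThreshold(threshold);
        goals.add(BalanceGoal.NW_OUT.goal());
    }
}

Each overload would then reduce to a sorted loop over its interval list followed by balanceParameter.setGoals(goals).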


@@ -0,0 +1,218 @@
package com.xiaojukeji.know.streaming.km.rebalance.common.converter;
import com.xiaojukeji.know.streaming.km.common.annotations.enterprise.EnterpriseLoadReBalance;
import com.xiaojukeji.know.streaming.km.rebalance.common.bean.entity.job.ClusterBalanceReassignDetail;
import com.xiaojukeji.know.streaming.km.rebalance.common.bean.entity.job.ClusterBalanceReassignExtendData;
import com.xiaojukeji.know.streaming.km.rebalance.common.bean.entity.job.detail.ClusterBalanceDetailDataGroupByPartition;
import com.xiaojukeji.know.streaming.km.rebalance.common.bean.entity.job.detail.ClusterBalanceDetailDataGroupByTopic;
import com.xiaojukeji.know.streaming.km.common.bean.entity.job.Job;
import com.xiaojukeji.know.streaming.km.common.bean.entity.job.JobStatus;
import com.xiaojukeji.know.streaming.km.common.bean.entity.job.detail.JobDetail;
import com.xiaojukeji.know.streaming.km.common.bean.entity.job.detail.SubJobReplicaMoveDetail;
import com.xiaojukeji.know.streaming.km.common.bean.entity.reassign.strategy.ReplaceReassignSub;
import com.xiaojukeji.know.streaming.km.rebalance.common.bean.po.ClusterBalanceJobPO;
import com.xiaojukeji.know.streaming.km.rebalance.common.bean.po.ClusterBalanceReassignPO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.job.sub.SubJobClusterBalanceReplicaMoveVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.job.sub.SubJobPartitionDetailVO;
import com.xiaojukeji.know.streaming.km.common.bean.vo.job.sub.SubJobVO;
import com.xiaojukeji.know.streaming.km.common.enums.job.JobStatusEnum;
import com.xiaojukeji.know.streaming.km.common.utils.CommonUtils;
import com.xiaojukeji.know.streaming.km.common.utils.ConvertUtil;
import java.util.*;
import java.util.stream.Collectors;
@EnterpriseLoadReBalance
public class ClusterBalanceReassignConverter {
private ClusterBalanceReassignConverter() {
}
public static JobDetail convert2JobDetail(Job job, ClusterBalanceReassignDetail reassignDetail) {
JobDetail jobDetail = new JobDetail();
jobDetail.setId(job.getId());
jobDetail.setJobType(job.getJobType());
jobDetail.setJobName(job.getJobName());
jobDetail.setJobStatus(job.getJobStatus());
jobDetail.setPlanTime(job.getPlanTime());
jobDetail.setStartTime(reassignDetail.getStartTime());
jobDetail.setEndTime(reassignDetail.getFinishedTime());
jobDetail.setFlowLimit(reassignDetail.getThrottleUnitB().doubleValue());
JobStatus jobStatus = new JobStatus(reassignDetail.getReassignTopicDetailsList().stream().map(elem -> elem.getStatus()).collect(Collectors.toList()));
jobDetail.setTotal(jobStatus.getTotal());
jobDetail.setSuccess(jobStatus.getSuccess());
jobDetail.setFail(jobStatus.getFailed());
jobDetail.setDoing(jobStatus.getDoing());
List<SubJobVO> subJobDetailList = new ArrayList<>();
subJobDetailList.addAll(
ConvertUtil.list2List(convert2SubJobReplicaMoveDetailList(reassignDetail.getReassignTopicDetailsList()), SubJobClusterBalanceReplicaMoveVO.class)
);
jobDetail.setSubJobs(subJobDetailList);
return jobDetail;
}
public static ClusterBalanceReassignDetail convert2ClusterBalanceReassignDetail(ClusterBalanceJobPO jobPO, List<ClusterBalanceReassignPO> reassignPOS) {
// Aggregate by topic
Map<String, List<ClusterBalanceReassignPO>> topicJobPOMap = new HashMap<>();
reassignPOS.forEach(elem -> {
topicJobPOMap.putIfAbsent(elem.getTopicName(), new ArrayList<>());
topicJobPOMap.get(elem.getTopicName()).add(elem);
});
List<ClusterBalanceDetailDataGroupByTopic> reassignTopicDetailsList = new ArrayList<>();
for (Map.Entry<String, List<ClusterBalanceReassignPO>> entry: topicJobPOMap.entrySet()) {
reassignTopicDetailsList.add(convert2ClusterBalanceDetailDataGroupByTopic(entry.getValue()));
}
ClusterBalanceReassignDetail jobDetail = new ClusterBalanceReassignDetail();
jobDetail.setThrottleUnitB(jobPO.getThrottleUnitB());
jobDetail.setReassignTopicDetailsList(reassignTopicDetailsList);
jobDetail.setStartTime(jobPO.getStartTime());
if (JobStatusEnum.isFinished(jobPO.getStatus())) {
jobDetail.setFinishedTime(jobPO.getFinishedTime());
}
return jobDetail;
}
private static ClusterBalanceDetailDataGroupByTopic convert2ClusterBalanceDetailDataGroupByTopic(List<ClusterBalanceReassignPO> reassigns) {
Set<Integer> originalBrokerIdSet = new HashSet<>();
Set<Integer> reassignBrokerIdSet = new HashSet<>();
// Partition-level information
List<ClusterBalanceDetailDataGroupByPartition> partitionDetailList = new ArrayList<>();
for (ClusterBalanceReassignPO reassignPO : reassigns) {
ClusterBalanceDetailDataGroupByPartition detail = new ClusterBalanceDetailDataGroupByPartition();
detail.setPartitionId(reassignPO.getPartitionId());
detail.setClusterPhyId(reassignPO.getClusterId());
detail.setTopicName(reassignPO.getTopicName());
detail.setOriginalBrokerIdList(CommonUtils.string2IntList(reassignPO.getOriginalBrokerIds()));
detail.setReassignBrokerIdList(CommonUtils.string2IntList(reassignPO.getReassignBrokerIds()));
detail.setStatus(reassignPO.getStatus());
ClusterBalanceReassignExtendData extendData = ConvertUtil.str2ObjByJson(reassignPO.getExtendData(), ClusterBalanceReassignExtendData.class);
if (extendData != null) {
detail.setNeedReassignLogSizeUnitB(extendData.getNeedReassignLogSizeUnitB());
detail.setFinishedReassignLogSizeUnitB(extendData.getFinishedReassignLogSizeUnitB());
detail.setRemainTimeUnitMs(extendData.getRemainTimeUnitMs());
detail.setPresentReplicaNum(extendData.getOriginReplicaNum());
detail.setNewReplicaNum(extendData.getReassignReplicaNum());
detail.setOriginalRetentionTimeUnitMs(extendData.getOriginalRetentionTimeUnitMs());
detail.setReassignRetentionTimeUnitMs(extendData.getReassignRetentionTimeUnitMs());
}
originalBrokerIdSet.addAll(detail.getOriginalBrokerIdList());
reassignBrokerIdSet.addAll(detail.getReassignBrokerIdList());
partitionDetailList.add(detail);
}
// Topic-level aggregate information
ClusterBalanceDetailDataGroupByTopic topicDetail = new ClusterBalanceDetailDataGroupByTopic();
topicDetail.setPartitionIdList(partitionDetailList.stream().map(elem -> elem.getPartitionId()).collect(Collectors.toList()));
topicDetail.setReassignPartitionDetailsList(partitionDetailList);
topicDetail.setClusterPhyId(reassigns.get(0).getClusterId());
topicDetail.setTopicName(reassigns.get(0).getTopicName());
topicDetail.setOriginalBrokerIdList(new ArrayList<>(originalBrokerIdSet));
topicDetail.setReassignBrokerIdList(new ArrayList<>(reassignBrokerIdSet));
List<Long> needSizeList = partitionDetailList
.stream()
.filter(elem -> elem.getNeedReassignLogSizeUnitB() != null)
.map(item -> item.getNeedReassignLogSizeUnitB()).collect(Collectors.toList());
topicDetail.setNeedReassignLogSizeUnitB(needSizeList.isEmpty()? null: needSizeList.stream().reduce(Long::sum).get());
List<Long> finishedSizeList = partitionDetailList
.stream()
.filter(elem -> elem.getFinishedReassignLogSizeUnitB() != null)
.map(item -> item.getFinishedReassignLogSizeUnitB()).collect(Collectors.toList());
topicDetail.setFinishedReassignLogSizeUnitB(finishedSizeList.isEmpty()? null: finishedSizeList.stream().reduce(Long::sum).get());
List<Long> remainList = partitionDetailList
.stream()
.filter(elem -> elem.getRemainTimeUnitMs() != null)
.map(item -> item.getRemainTimeUnitMs()).collect(Collectors.toList());
topicDetail.setRemainTimeUnitMs(remainList.isEmpty()? null: remainList.stream().reduce(Long::max).get());
topicDetail.setPresentReplicaNum(partitionDetailList.get(0).getPresentReplicaNum());
topicDetail.setNewReplicaNum(partitionDetailList.get(0).getNewReplicaNum());
topicDetail.setOriginalRetentionTimeUnitMs(partitionDetailList.get(0).getOriginalRetentionTimeUnitMs());
topicDetail.setReassignRetentionTimeUnitMs(partitionDetailList.get(0).getReassignRetentionTimeUnitMs());
topicDetail.setStatus(
new JobStatus(
partitionDetailList.stream().map(elem -> elem.getStatus()).collect(Collectors.toList())
).getStatus()
);
return topicDetail;
}
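
    /**
     * Converts each per-partition reassignment detail into a view object (VO) for
     * the front end. Sizes are exposed as Double and stay null when the extend
     * data was absent.
     */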
    public static List<SubJobPartitionDetailVO> convert2SubJobPartitionDetailVOList(ClusterBalanceDetailDataGroupByTopic detailDataGroupByTopic) {
        List<SubJobPartitionDetailVO> voList = new ArrayList<>();
        for (ClusterBalanceDetailDataGroupByPartition groupByPartition : detailDataGroupByTopic.getReassignPartitionDetailsList()) {
            SubJobPartitionDetailVO vo = new SubJobPartitionDetailVO();
            vo.setPartitionId(groupByPartition.getPartitionId());
            vo.setSourceBrokerIds(groupByPartition.getOriginalBrokerIdList());
            vo.setDesBrokerIds(groupByPartition.getReassignBrokerIdList());
            vo.setTotalSize(groupByPartition.getNeedReassignLogSizeUnitB() != null ? groupByPartition.getNeedReassignLogSizeUnitB().doubleValue() : null);
            vo.setMovedSize(groupByPartition.getFinishedReassignLogSizeUnitB() != null ? groupByPartition.getFinishedReassignLogSizeUnitB().doubleValue() : null);
            vo.setStatus(groupByPartition.getStatus());
            vo.setRemainTime(groupByPartition.getRemainTimeUnitMs());
            voList.add(vo);
        }
        return voList;
    }
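
    /**
     * Builds one replica-move summary per topic; the total/success/fail/doing
     * counters are derived from the partition statuses via {@link JobStatus}.
     */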
    private static List<SubJobReplicaMoveDetail> convert2SubJobReplicaMoveDetailList(List<ClusterBalanceDetailDataGroupByTopic> reassignTopicDetailsList) {
        List<SubJobReplicaMoveDetail> detailList = new ArrayList<>();
        for (ClusterBalanceDetailDataGroupByTopic detailDataGroupByTopic : reassignTopicDetailsList) {
            SubJobReplicaMoveDetail detail = new SubJobReplicaMoveDetail();
            detail.setTopicName(detailDataGroupByTopic.getTopicName());
            detail.setPartitions(detailDataGroupByTopic.getPartitionIdList());
            detail.setCurrentTimeSpent(detailDataGroupByTopic.getOriginalRetentionTimeUnitMs());
            detail.setMoveTimeSpent(detailDataGroupByTopic.getReassignRetentionTimeUnitMs());
            detail.setSourceBrokers(detailDataGroupByTopic.getOriginalBrokerIdList());
            detail.setDesBrokers(detailDataGroupByTopic.getReassignBrokerIdList());
            detail.setStatus(detailDataGroupByTopic.getStatus());
            if (detailDataGroupByTopic.getNeedReassignLogSizeUnitB() != null) {
                detail.setTotalSize(detailDataGroupByTopic.getNeedReassignLogSizeUnitB().doubleValue());
            }
            if (detailDataGroupByTopic.getFinishedReassignLogSizeUnitB() != null) {
                detail.setMovedSize(detailDataGroupByTopic.getFinishedReassignLogSizeUnitB().doubleValue());
            }

            // Derive the total/success/fail/doing counters from the partition statuses
            JobStatus jobStatus = new JobStatus(
                    detailDataGroupByTopic.getReassignPartitionDetailsList().stream()
                            .map(ClusterBalanceDetailDataGroupByPartition::getStatus)
                            .collect(Collectors.toList())
            );
            detail.setTotal(jobStatus.getTotal());
            detail.setSuccess(jobStatus.getSuccess());
            detail.setFail(jobStatus.getFailed());
            detail.setDoing(jobStatus.getDoing());
            detail.setRemainTime(detailDataGroupByTopic.getRemainTimeUnitMs());
            detailList.add(detail);
        }
        return detailList;
    }
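
    /**
     * Batch variant of {@link #convert2ReplaceReassignSub(ClusterBalanceReassignPO)}.
     */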
    public static List<ReplaceReassignSub> convert2ReplaceReassignSubList(List<ClusterBalanceReassignPO> reassignPOList) {
        List<ReplaceReassignSub> voList = new ArrayList<>();
        for (ClusterBalanceReassignPO reassignPO : reassignPOList) {
            voList.add(convert2ReplaceReassignSub(reassignPO));
        }
        return voList;
    }
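
    /**
     * Maps a persisted reassignment record (PO) to a ReplaceReassignSub command;
     * the broker-id strings stored on the PO are parsed into integer lists via
     * CommonUtils.string2IntList.
     */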
    public static ReplaceReassignSub convert2ReplaceReassignSub(ClusterBalanceReassignPO reassignPO) {
        ReplaceReassignSub reassignSub = new ReplaceReassignSub();
        reassignSub.setClusterPhyId(reassignPO.getClusterId());
        reassignSub.setOriginalBrokerIdList(CommonUtils.string2IntList(reassignPO.getOriginalBrokerIds()));
        reassignSub.setReassignBrokerIdList(CommonUtils.string2IntList(reassignPO.getReassignBrokerIds()));
        reassignSub.setPartitionId(reassignPO.getPartitionId());
        reassignSub.setTopicName(reassignPO.getTopicName());
        return reassignSub;
    }
}
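
For context, a minimal sketch of how these converters fit together, written as if from inside this converter class (the topic-level rollup above is private). The jobId value and the loadReassignPOsForTopic step are hypothetical stand-ins for however the caller obtains the records; only the converter calls themselves come from the code above:

// Assumed input: one topic's reassignment records (loadReassignPOsForTopic is a hypothetical helper)
List<ClusterBalanceReassignPO> reassignPOList = loadReassignPOsForTopic(jobId, "demo-topic");

// Roll the per-partition records up into one topic-level detail object
// (requires a non-empty list where all records share one topic)
ClusterBalanceDetailDataGroupByTopic topicDetail = convert2ClusterBalanceDetailDataGroupByTopic(reassignPOList);

// Per-partition view objects for the front end
List<SubJobPartitionDetailVO> partitionVOList = convert2SubJobPartitionDetailVOList(topicDetail);

// Command objects used when re-submitting the reassignment
List<ReplaceReassignSub> reassignSubList = convert2ReplaceReassignSubList(reassignPOList);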

Some files were not shown because too many files have changed in this diff.