mirror of
https://github.com/didi/KnowStreaming.git
synced 2025-12-26 05:22:14 +08:00
Compare commits
2 Commits
ve_prd
...
ve_kafka_d
| Author | SHA1 | Date | |
|---|---|---|---|
|
|
da978cd25f | ||
|
|
0b8160a714 |
51
.github/ISSUE_TEMPLATE/bug_report.md
vendored
51
.github/ISSUE_TEMPLATE/bug_report.md
vendored
@@ -1,51 +0,0 @@
|
||||
---
|
||||
name: 报告Bug
|
||||
about: 报告KnowStreaming的相关Bug
|
||||
title: ''
|
||||
labels: bug
|
||||
assignees: ''
|
||||
|
||||
---
|
||||
|
||||
- [ ] 我已经在 [issues](https://github.com/didi/KnowStreaming/issues) 搜索过相关问题了,并没有重复的。
|
||||
|
||||
你是否希望来认领这个Bug。
|
||||
|
||||
「 Y / N 」
|
||||
|
||||
### 环境信息
|
||||
|
||||
* KnowStreaming version : <font size=4 color =red> xxx </font>
|
||||
* Operating System version : <font size=4 color =red> xxx </font>
|
||||
* Java version : <font size=4 color =red> xxx </font>
|
||||
|
||||
|
||||
### 重现该问题的步骤
|
||||
|
||||
1. xxx
|
||||
|
||||
|
||||
|
||||
2. xxx
|
||||
|
||||
|
||||
3. xxx
|
||||
|
||||
|
||||
|
||||
### 预期结果
|
||||
|
||||
<!-- 写下应该出现的预期结果?-->
|
||||
|
||||
### 实际结果
|
||||
|
||||
<!-- 实际发生了什么? -->
|
||||
|
||||
|
||||
---
|
||||
|
||||
如果有异常,请附上异常Trace:
|
||||
|
||||
```
|
||||
Just put your stack trace here!
|
||||
```
|
||||
8
.github/ISSUE_TEMPLATE/config.yml
vendored
8
.github/ISSUE_TEMPLATE/config.yml
vendored
@@ -1,8 +0,0 @@
|
||||
blank_issues_enabled: true
|
||||
contact_links:
|
||||
- name: 讨论问题
|
||||
url: https://github.com/didi/KnowStreaming/discussions/new
|
||||
about: 发起问题、讨论 等等
|
||||
- name: KnowStreaming官网
|
||||
url: https://knowstreaming.com/
|
||||
about: KnowStreaming website
|
||||
26
.github/ISSUE_TEMPLATE/detail_optimizing.md
vendored
26
.github/ISSUE_TEMPLATE/detail_optimizing.md
vendored
@@ -1,26 +0,0 @@
|
||||
---
|
||||
name: 优化建议
|
||||
about: 相关功能优化建议
|
||||
title: ''
|
||||
labels: Optimization Suggestions
|
||||
assignees: ''
|
||||
|
||||
---
|
||||
|
||||
- [ ] 我已经在 [issues](https://github.com/didi/KnowStreaming/issues) 搜索过相关问题了,并没有重复的。
|
||||
|
||||
你是否希望来认领这个优化建议。
|
||||
|
||||
「 Y / N 」
|
||||
|
||||
### 环境信息
|
||||
|
||||
* KnowStreaming version : <font size=4 color =red> xxx </font>
|
||||
* Operating System version : <font size=4 color =red> xxx </font>
|
||||
* Java version : <font size=4 color =red> xxx </font>
|
||||
|
||||
### 需要优化的功能点
|
||||
|
||||
|
||||
### 建议如何优化
|
||||
|
||||
20
.github/ISSUE_TEMPLATE/feature_request.md
vendored
20
.github/ISSUE_TEMPLATE/feature_request.md
vendored
@@ -1,20 +0,0 @@
|
||||
---
|
||||
name: 提议新功能/需求
|
||||
about: 给KnowStreaming提一个功能需求
|
||||
title: ''
|
||||
labels: feature
|
||||
assignees: ''
|
||||
|
||||
---
|
||||
|
||||
- [ ] 我在 [issues](https://github.com/didi/KnowStreaming/issues) 中并未搜索到与此相关的功能需求。
|
||||
- [ ] 我在 [release note](https://github.com/didi/KnowStreaming/releases) 已经发布的版本中并没有搜到相关功能.
|
||||
|
||||
你是否希望来认领这个Feature。
|
||||
|
||||
「 Y / N 」
|
||||
|
||||
|
||||
## 这里描述需求
|
||||
<!--请尽可能的描述清楚您的需求 -->
|
||||
|
||||
12
.github/ISSUE_TEMPLATE/question.md
vendored
12
.github/ISSUE_TEMPLATE/question.md
vendored
@@ -1,12 +0,0 @@
|
||||
---
|
||||
name: 提个问题
|
||||
about: 问KnowStreaming相关问题
|
||||
title: ''
|
||||
labels: question
|
||||
assignees: ''
|
||||
|
||||
---
|
||||
|
||||
- [ ] 我已经在 [issues](https://github.com/didi/KnowStreaming/issues) 搜索过相关问题了,并没有重复的。
|
||||
|
||||
## 在这里提出你的问题
|
||||
23
.github/PULL_REQUEST_TEMPLATE.md
vendored
23
.github/PULL_REQUEST_TEMPLATE.md
vendored
@@ -1,23 +0,0 @@
|
||||
请不要在没有先创建Issue的情况下创建Pull Request。
|
||||
|
||||
## 变更的目的是什么
|
||||
|
||||
XXXXX
|
||||
|
||||
## 简短的更新日志
|
||||
|
||||
XX
|
||||
|
||||
## 验证这一变化
|
||||
|
||||
XXXX
|
||||
|
||||
请遵循此清单,以帮助我们快速轻松地整合您的贡献:
|
||||
|
||||
* [ ] 一个 PR(Pull Request的简写)只解决一个问题,禁止一个 PR 解决多个问题;
|
||||
* [ ] 确保 PR 有对应的 Issue(通常在您开始处理之前创建),除非是书写错误之类的琐碎更改不需要 Issue ;
|
||||
* [ ] 格式化 PR 及 Commit-Log 的标题及内容,例如 #861 。PS:Commit-Log 需要在 Git Commit 代码时进行填写,在 GitHub 上修改不了;
|
||||
* [ ] 编写足够详细的 PR 描述,以了解 PR 的作用、方式和原因;
|
||||
* [ ] 编写必要的单元测试来验证您的逻辑更正。如果提交了新功能或重大更改,请记住在 test 模块中添加 integration-test;
|
||||
* [ ] 确保编译通过,集成测试通过;
|
||||
|
||||
158
.gitignore
vendored
158
.gitignore
vendored
@@ -1,116 +1,62 @@
|
||||
### Intellij ###
|
||||
# Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio and Webstorm
|
||||
|
||||
.gradle/
|
||||
dist
|
||||
*classes
|
||||
*.class
|
||||
target/
|
||||
build/
|
||||
build_eclipse/
|
||||
out/
|
||||
.gradle/
|
||||
lib_managed/
|
||||
src_managed/
|
||||
project/boot/
|
||||
project/plugins/project/
|
||||
patch-process/*
|
||||
.idea
|
||||
.svn
|
||||
.classpath
|
||||
/.metadata
|
||||
/.recommenders
|
||||
*~
|
||||
*#
|
||||
.#*
|
||||
rat.out
|
||||
TAGS
|
||||
*.iml
|
||||
|
||||
## Directory-based project format:
|
||||
.idea/
|
||||
# if you remove the above rule, at least ignore the following:
|
||||
|
||||
# User-specific stuff:
|
||||
# .idea/workspace.xml
|
||||
# .idea/tasks.xml
|
||||
# .idea/dictionaries
|
||||
# .idea/shelf
|
||||
|
||||
# Sensitive or high-churn files:
|
||||
.idea/dataSources.ids
|
||||
.idea/dataSources.xml
|
||||
.idea/sqlDataSources.xml
|
||||
.idea/dynamic.xml
|
||||
.idea/uiDesigner.xml
|
||||
|
||||
|
||||
# Mongo Explorer plugin:
|
||||
.idea/mongoSettings.xml
|
||||
|
||||
## File-based project format:
|
||||
.project
|
||||
.settings
|
||||
*.ipr
|
||||
*.iws
|
||||
|
||||
## Plugin-specific files:
|
||||
|
||||
# IntelliJ
|
||||
/out/
|
||||
|
||||
# mpeltonen/sbt-idea plugin
|
||||
.idea_modules/
|
||||
|
||||
# JIRA plugin
|
||||
atlassian-ide-plugin.xml
|
||||
|
||||
# Crashlytics plugin (for Android Studio and IntelliJ)
|
||||
com_crashlytics_export_strings.xml
|
||||
crashlytics.properties
|
||||
crashlytics-build.properties
|
||||
fabric.properties
|
||||
|
||||
|
||||
### Java ###
|
||||
*.class
|
||||
|
||||
# Mobile Tools for Java (J2ME)
|
||||
.mtj.tmp/
|
||||
|
||||
# Package Files #
|
||||
*.jar
|
||||
*.war
|
||||
*.ear
|
||||
*.tar.gz
|
||||
|
||||
# virtual machine crash logs, see http://www.java.com/en/download/help/error_hotspot.xml
|
||||
hs_err_pid*
|
||||
|
||||
|
||||
### OSX ###
|
||||
.vagrant
|
||||
Vagrantfile.local
|
||||
/logs
|
||||
.DS_Store
|
||||
.AppleDouble
|
||||
.LSOverride
|
||||
|
||||
# Icon must end with two \r
|
||||
Icon
|
||||
config/server-*
|
||||
config/zookeeper-*
|
||||
core/data/*
|
||||
#gradle/wrapper/*.jar
|
||||
gradlew.bat
|
||||
|
||||
results
|
||||
tests/results
|
||||
.ducktape
|
||||
tests/.ducktape
|
||||
tests/venv
|
||||
.cache
|
||||
|
||||
# Thumbnails
|
||||
._*
|
||||
docs/generated/
|
||||
|
||||
# Files that might appear in the root of a volume
|
||||
.DocumentRevisions-V100
|
||||
.fseventsd
|
||||
.Spotlight-V100
|
||||
.TemporaryItems
|
||||
.Trashes
|
||||
.VolumeIcon.icns
|
||||
.release-settings.json
|
||||
|
||||
# Directories potentially created on remote AFP share
|
||||
.AppleDB
|
||||
.AppleDesktop
|
||||
Network Trash Folder
|
||||
Temporary Items
|
||||
.apdisk
|
||||
|
||||
/target
|
||||
target/
|
||||
*.log
|
||||
*.log.*
|
||||
*.bak
|
||||
*.vscode
|
||||
*/.vscode/*
|
||||
*/.vscode
|
||||
*/velocity.log*
|
||||
*/*.log
|
||||
*/*.log.*
|
||||
node_modules/
|
||||
node_modules/*
|
||||
workspace.xml
|
||||
/output/*
|
||||
.gitversion
|
||||
out/*
|
||||
dist/
|
||||
dist/*
|
||||
km-rest/src/main/resources/templates/
|
||||
*dependency-reduced-pom*
|
||||
#filter flattened xml
|
||||
*/.flattened-pom.xml
|
||||
.flattened-pom.xml
|
||||
*/*/.flattened-pom.xml
|
||||
kafkatest.egg-info/
|
||||
systest/
|
||||
*.swp
|
||||
clients/src/generated
|
||||
clients/src/generated-test
|
||||
jmh-benchmarks/generated
|
||||
streams/src/generated
|
||||
kafka-logs
|
||||
clients/src/generated-test/
|
||||
clients/src/generated/
|
||||
|
||||
11
CONTRIBUTING.md
Normal file
11
CONTRIBUTING.md
Normal file
@@ -0,0 +1,11 @@
|
||||
## Contributing to Kafka
|
||||
|
||||
*Before opening a pull request*, review the [Contributing](https://kafka.apache.org/contributing.html) and [Contributing Code Changes](https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes) pages.
|
||||
|
||||
It lists steps that are required before creating a PR.
|
||||
|
||||
When you contribute code, you affirm that the contribution is your original work and that you
|
||||
license the work to the project under the project's open source license. Whether or not you
|
||||
state this explicitly, by submitting any copyrighted material via pull request, email, or
|
||||
other means you agree to license the material under the project's open source license and
|
||||
warrant that you have the legal authority to do so.
|
||||
14
HEADER
Normal file
14
HEADER
Normal file
@@ -0,0 +1,14 @@
|
||||
Licensed to the Apache Software Foundation (ASF) under one or more
|
||||
contributor license agreements. See the NOTICE file distributed with
|
||||
this work for additional information regarding copyright ownership.
|
||||
The ASF licenses this file to You under the Apache License, Version 2.0
|
||||
(the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
8
NOTICE
Normal file
8
NOTICE
Normal file
@@ -0,0 +1,8 @@
|
||||
Apache Kafka
|
||||
Copyright 2020 The Apache Software Foundation.
|
||||
|
||||
This product includes software developed at
|
||||
The Apache Software Foundation (https://www.apache.org/).
|
||||
|
||||
This distribution has a binary dependency on jersey, which is available under the CDDL
|
||||
License. The source code of jersey can be found at https://github.com/jersey/jersey/.
|
||||
14
PULL_REQUEST_TEMPLATE.md
Normal file
14
PULL_REQUEST_TEMPLATE.md
Normal file
@@ -0,0 +1,14 @@
|
||||
*More detailed description of your change,
|
||||
if necessary. The PR title and PR message become
|
||||
the squashed commit message, so use a separate
|
||||
comment to ping reviewers.*
|
||||
|
||||
*Summary of testing strategy (including rationale)
|
||||
for the feature or bug fix. Unit and/or integration
|
||||
tests are expected for any behaviour change and
|
||||
system tests should be considered for larger changes.*
|
||||
|
||||
### Committer Checklist (excluded from commit message)
|
||||
- [ ] Verify design and implementation
|
||||
- [ ] Verify test coverage and CI build status
|
||||
- [ ] Verify documentation (including upgrade notes)
|
||||
220
README.md
Normal file
220
README.md
Normal file
@@ -0,0 +1,220 @@
|
||||
Apache Kafka
|
||||
=================
|
||||
See our [web site](https://kafka.apache.org) for details on the project.
|
||||
|
||||
You need to have [Java](http://www.oracle.com/technetwork/java/javase/downloads/index.html) installed.
|
||||
|
||||
Java 8 should be used for building in order to support both Java 8 and Java 11 at runtime.
|
||||
|
||||
Scala 2.12 is used by default, see below for how to use a different Scala version or all of the supported Scala versions.
|
||||
|
||||
### Build a jar and run it ###
|
||||
./gradlew jar
|
||||
|
||||
Follow instructions in https://kafka.apache.org/documentation.html#quickstart
|
||||
|
||||
### Build source jar ###
|
||||
./gradlew srcJar
|
||||
|
||||
### Build aggregated javadoc ###
|
||||
./gradlew aggregatedJavadoc
|
||||
|
||||
### Build javadoc and scaladoc ###
|
||||
./gradlew javadoc
|
||||
./gradlew javadocJar # builds a javadoc jar for each module
|
||||
./gradlew scaladoc
|
||||
./gradlew scaladocJar # builds a scaladoc jar for each module
|
||||
./gradlew docsJar # builds both (if applicable) javadoc and scaladoc jars for each module
|
||||
|
||||
### Run unit/integration tests ###
|
||||
./gradlew test # runs both unit and integration tests
|
||||
./gradlew unitTest
|
||||
./gradlew integrationTest
|
||||
|
||||
### Force re-running tests without code change ###
|
||||
./gradlew cleanTest test
|
||||
./gradlew cleanTest unitTest
|
||||
./gradlew cleanTest integrationTest
|
||||
|
||||
### Running a particular unit/integration test ###
|
||||
./gradlew clients:test --tests RequestResponseTest
|
||||
|
||||
### Running a particular test method within a unit/integration test ###
|
||||
./gradlew core:test --tests kafka.api.ProducerFailureHandlingTest.testCannotSendToInternalTopic
|
||||
./gradlew clients:test --tests org.apache.kafka.clients.MetadataTest.testMetadataUpdateWaitTime
|
||||
|
||||
### Running a particular unit/integration test with log4j output ###
|
||||
Change the log4j setting in either `clients/src/test/resources/log4j.properties` or `core/src/test/resources/log4j.properties`
|
||||
|
||||
./gradlew clients:test --tests RequestResponseTest
|
||||
|
||||
### Generating test coverage reports ###
|
||||
Generate coverage reports for the whole project:
|
||||
|
||||
./gradlew reportCoverage
|
||||
|
||||
Generate coverage for a single module, i.e.:
|
||||
|
||||
./gradlew clients:reportCoverage
|
||||
|
||||
### Building a binary release gzipped tar ball ###
|
||||
./gradlew clean releaseTarGz
|
||||
|
||||
The above command will fail if you haven't set up the signing key. To bypass signing the artifact, you can run:
|
||||
|
||||
./gradlew clean releaseTarGz -x signArchives
|
||||
|
||||
The release file can be found inside `./core/build/distributions/`.
|
||||
|
||||
### Cleaning the build ###
|
||||
./gradlew clean
|
||||
|
||||
### Running a task with one of the Scala versions available (2.12.x or 2.13.x) ###
|
||||
*Note that if building the jars with a version other than 2.12.x, you need to set the `SCALA_VERSION` variable or change it in `bin/kafka-run-class.sh` to run the quick start.*
|
||||
|
||||
You can pass either the major version (eg 2.12) or the full version (eg 2.12.7):
|
||||
|
||||
./gradlew -PscalaVersion=2.12 jar
|
||||
./gradlew -PscalaVersion=2.12 test
|
||||
./gradlew -PscalaVersion=2.12 releaseTarGz
|
||||
|
||||
### Running a task with all the scala versions enabled by default ###
|
||||
|
||||
Append `All` to the task name:
|
||||
|
||||
./gradlew testAll
|
||||
./gradlew jarAll
|
||||
./gradlew releaseTarGzAll
|
||||
|
||||
### Running a task for a specific project ###
|
||||
This is for `core`, `examples` and `clients`
|
||||
|
||||
./gradlew core:jar
|
||||
./gradlew core:test
|
||||
|
||||
### Listing all gradle tasks ###
|
||||
./gradlew tasks
|
||||
|
||||
### Building IDE project ####
|
||||
*Note that this is not strictly necessary (IntelliJ IDEA has good built-in support for Gradle projects, for example).*
|
||||
|
||||
./gradlew eclipse
|
||||
./gradlew idea
|
||||
|
||||
The `eclipse` task has been configured to use `${project_dir}/build_eclipse` as Eclipse's build directory. Eclipse's default
|
||||
build directory (`${project_dir}/bin`) clashes with Kafka's scripts directory and we don't use Gradle's build directory
|
||||
to avoid known issues with this configuration.
|
||||
|
||||
### Publishing the jar for all version of Scala and for all projects to maven ###
|
||||
./gradlew uploadArchivesAll
|
||||
|
||||
Please note for this to work you should create/update `${GRADLE_USER_HOME}/gradle.properties` (typically, `~/.gradle/gradle.properties`) and assign the following variables
|
||||
|
||||
mavenUrl=
|
||||
mavenUsername=
|
||||
mavenPassword=
|
||||
signing.keyId=
|
||||
signing.password=
|
||||
signing.secretKeyRingFile=
|
||||
|
||||
### Publishing the streams quickstart archetype artifact to maven ###
|
||||
For the Streams archetype project, one cannot use gradle to upload to maven; instead the `mvn deploy` command needs to be called at the quickstart folder:
|
||||
|
||||
cd streams/quickstart
|
||||
mvn deploy
|
||||
|
||||
Please note for this to work you should create/update user maven settings (typically, `${USER_HOME}/.m2/settings.xml`) to assign the following variables
|
||||
|
||||
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
|
||||
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
|
||||
xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
|
||||
https://maven.apache.org/xsd/settings-1.0.0.xsd">
|
||||
...
|
||||
<servers>
|
||||
...
|
||||
<server>
|
||||
<id>apache.snapshots.https</id>
|
||||
<username>${maven_username}</username>
|
||||
<password>${maven_password}</password>
|
||||
</server>
|
||||
<server>
|
||||
<id>apache.releases.https</id>
|
||||
<username>${maven_username}</username>
|
||||
<password>${maven_password}</password>
|
||||
</server>
|
||||
...
|
||||
</servers>
|
||||
...
|
||||
|
||||
|
||||
### Installing the jars to the local Maven repository ###
|
||||
./gradlew installAll
|
||||
|
||||
### Building the test jar ###
|
||||
./gradlew testJar
|
||||
|
||||
### Determining how transitive dependencies are added ###
|
||||
./gradlew core:dependencies --configuration runtime
|
||||
|
||||
### Determining if any dependencies could be updated ###
|
||||
./gradlew dependencyUpdates
|
||||
|
||||
### Running code quality checks ###
|
||||
There are two code quality analysis tools that we regularly run, spotbugs and checkstyle.
|
||||
|
||||
#### Checkstyle ####
|
||||
Checkstyle enforces a consistent coding style in Kafka.
|
||||
You can run checkstyle using:
|
||||
|
||||
./gradlew checkstyleMain checkstyleTest
|
||||
|
||||
The checkstyle warnings will be found in `reports/checkstyle/reports/main.html` and `reports/checkstyle/reports/test.html` files in the
|
||||
subproject build directories. They are also printed to the console. The build will fail if Checkstyle fails.
|
||||
|
||||
#### Spotbugs ####
|
||||
Spotbugs uses static analysis to look for bugs in the code.
|
||||
You can run spotbugs using:
|
||||
|
||||
./gradlew spotbugsMain spotbugsTest -x test
|
||||
|
||||
The spotbugs warnings will be found in `reports/spotbugs/main.html` and `reports/spotbugs/test.html` files in the subproject build
|
||||
directories. Use -PxmlSpotBugsReport=true to generate an XML report instead of an HTML one.
|
||||
|
||||
### Common build options ###
|
||||
|
||||
The following options should be set with a `-P` switch, for example `./gradlew -PmaxParallelForks=1 test`.
|
||||
|
||||
* `commitId`: sets the build commit ID as .git/HEAD might not be correct if there are local commits added for build purposes.
|
||||
* `mavenUrl`: sets the URL of the maven deployment repository (`file://path/to/repo` can be used to point to a local repository).
|
||||
* `maxParallelForks`: limits the maximum number of processes for each task.
|
||||
* `showStandardStreams`: shows standard out and standard error of the test JVM(s) on the console.
|
||||
* `skipSigning`: skips signing of artifacts.
|
||||
* `testLoggingEvents`: unit test events to be logged, separated by comma. For example `./gradlew -PtestLoggingEvents=started,passed,skipped,failed test`.
|
||||
* `xmlSpotBugsReport`: enable XML reports for spotBugs. This also disables HTML reports as only one can be enabled at a time.
|
||||
|
||||
### Dependency Analysis ###
|
||||
|
||||
The gradle [dependency debugging documentation](https://docs.gradle.org/current/userguide/viewing_debugging_dependencies.html) mentions using the `dependencies` or `dependencyInsight` tasks to debug dependencies for the root project or individual subprojects.
|
||||
|
||||
Alternatively, use the `allDeps` or `allDepInsight` tasks for recursively iterating through all subprojects:
|
||||
|
||||
./gradlew allDeps
|
||||
|
||||
./gradlew allDepInsight --configuration runtime --dependency com.fasterxml.jackson.core:jackson-databind
|
||||
|
||||
These take the same arguments as the builtin variants.
|
||||
|
||||
### Running system tests ###
|
||||
|
||||
See [tests/README.md](tests/README.md).
|
||||
|
||||
### Running in Vagrant ###
|
||||
|
||||
See [vagrant/README.md](vagrant/README.md).
|
||||
|
||||
### Contribution ###
|
||||
|
||||
Apache Kafka is interested in building the community; we would welcome any thoughts or [patches](https://issues.apache.org/jira/browse/KAFKA). You can reach us [on the Apache mailing lists](http://kafka.apache.org/contact.html).
|
||||
|
||||
To contribute follow the instructions here:
|
||||
* https://kafka.apache.org/contributing.html
|
||||
189
TROGDOR.md
Normal file
189
TROGDOR.md
Normal file
@@ -0,0 +1,189 @@
|
||||
Trogdor
|
||||
========================================
|
||||
Trogdor is a test framework for Apache Kafka.
|
||||
|
||||
Trogdor can run benchmarks and other workloads. Trogdor can also inject faults in order to stress test the system.
|
||||
|
||||
Quickstart
|
||||
=========================================================
|
||||
First, we want to start a single-node Kafka cluster with a ZooKeeper and a broker.
|
||||
|
||||
Running ZooKeeper:
|
||||
|
||||
> ./bin/zookeeper-server-start.sh ./config/zookeeper.properties &> /tmp/zookeeper.log &
|
||||
|
||||
Running Kafka:
|
||||
|
||||
> ./bin/kafka-server-start.sh ./config/server.properties &> /tmp/kafka.log &
|
||||
|
||||
Then, we want to run a Trogdor Agent, plus a Trogdor Coordinator.
|
||||
|
||||
To run the Trogdor Agent:
|
||||
|
||||
> ./bin/trogdor.sh agent -c ./config/trogdor.conf -n node0 &> /tmp/trogdor-agent.log &
|
||||
|
||||
To run the Trogdor Coordinator:
|
||||
|
||||
> ./bin/trogdor.sh coordinator -c ./config/trogdor.conf -n node0 &> /tmp/trogdor-coordinator.log &
|
||||
|
||||
Let's confirm that all of the daemons are running:
|
||||
|
||||
> jps
|
||||
116212 Coordinator
|
||||
115188 QuorumPeerMain
|
||||
116571 Jps
|
||||
115420 Kafka
|
||||
115694 Agent
|
||||
|
||||
Now, we can submit a test job to Trogdor.
|
||||
|
||||
> ./bin/trogdor.sh client createTask -t localhost:8889 -i produce0 --spec ./tests/spec/simple_produce_bench.json
|
||||
Sent CreateTaskRequest for task produce0.
|
||||
|
||||
We can run showTask to see what the task's status is:
|
||||
|
||||
> ./bin/trogdor.sh client showTask -t localhost:8889 -i produce0
|
||||
Task bar of type org.apache.kafka.trogdor.workload.ProduceBenchSpec is DONE. FINISHED at 2019-01-09T20:38:22.039-08:00 after 6s
|
||||
|
||||
To see the results, we use showTask with --show-status:
|
||||
|
||||
> ./bin/trogdor.sh client showTask -t localhost:8889 -i produce0 --show-status
|
||||
Task bar of type org.apache.kafka.trogdor.workload.ProduceBenchSpec is DONE. FINISHED at 2019-01-09T20:38:22.039-08:00 after 6s
|
||||
Status: {
|
||||
"totalSent" : 50000,
|
||||
"averageLatencyMs" : 17.83388,
|
||||
"p50LatencyMs" : 12,
|
||||
"p95LatencyMs" : 75,
|
||||
"p99LatencyMs" : 96,
|
||||
"transactionsCommitted" : 0
|
||||
}
|
||||
|
||||
Trogdor Architecture
|
||||
========================================
|
||||
Trogdor has a single coordinator process which manages multiple agent processes. Each agent process is responsible for a single cluster node.
|
||||
|
||||
The Trogdor coordinator manages tasks. A task is anything we might want to do on a cluster, such as running a benchmark, injecting a fault, or running a workload. In order to implement each task, the coordinator creates workers on one or more agent nodes.
|
||||
|
||||
The Trogdor agent process implements the tasks. For example, when running a workload, the agent process is the process which produces and consumes messages.
|
||||
|
||||
Both the coordinator and the agent expose a REST interface that accepts objects serialized via JSON. There is also a command-line program which makes it easy to send messages to either one without manually crafting the JSON message body.
|
||||
|
||||
All Trogdor RPCs are idempotent except the shutdown requests. Sending an idempotent RPC twice in a row has the same effect as sending the RPC once.
|
||||
|
||||
Tasks
|
||||
========================================
|
||||
Tasks are described by specifications containing:
|
||||
|
||||
* A "class" field describing the task type. This contains a full Java class name.
|
||||
* A "startMs" field describing when the task should start. This is given in terms of milliseconds since the UNIX epoch.
|
||||
* A "durationMs" field describing how long the task should last. This is given in terms of milliseconds.
|
||||
* Other fields which are task-specific.
|
||||
|
||||
The task specification is usually written as JSON. For example, this task specification describes a network partition between nodes 1 and 2, and 3:
|
||||
|
||||
{
|
||||
"class": "org.apache.kafka.trogdor.fault.NetworkPartitionFaultSpec",
|
||||
"startMs": 1000,
|
||||
"durationMs": 30000,
|
||||
"partitions": [["node1", "node2"], ["node3"]]
|
||||
}
|
||||
|
||||
This task runs a simple ProduceBench test on a cluster with one producer node, 5 topics, and 10,000 messages per second.
|
||||
The keys are generated sequentially and the configured partitioner (DefaultPartitioner) is used.
|
||||
|
||||
{
|
||||
"class": "org.apache.kafka.trogdor.workload.ProduceBenchSpec",
|
||||
"durationMs": 10000000,
|
||||
"producerNode": "node0",
|
||||
"bootstrapServers": "localhost:9092",
|
||||
"targetMessagesPerSec": 10000,
|
||||
"maxMessages": 50000,
|
||||
"activeTopics": {
|
||||
"foo[1-3]": {
|
||||
"numPartitions": 10,
|
||||
"replicationFactor": 1
|
||||
}
|
||||
},
|
||||
"inactiveTopics": {
|
||||
"foo[4-5]": {
|
||||
"numPartitions": 10,
|
||||
"replicationFactor": 1
|
||||
}
|
||||
},
|
||||
"keyGenerator": {
|
||||
"type": "sequential",
|
||||
"size": 8,
|
||||
"offset": 1
|
||||
},
|
||||
"useConfiguredPartitioner": true
|
||||
}
|
||||
|
||||
Tasks are submitted to the coordinator. Once the coordinator determines that it is time for the task to start, it creates workers on agent processes. The workers run until the task is done.
|
||||
|
||||
Task specifications are immutable; they do not change after the task has been created.
|
||||
|
||||
Tasks can be in several states:
|
||||
* PENDING, when task is waiting to execute,
|
||||
* RUNNING, when the task is running,
|
||||
* STOPPING, when the task is in the process of stopping,
|
||||
* DONE, when the task is done.
|
||||
|
||||
Tasks that are DONE also have an error field which will be set if the task failed.
|
||||
|
||||
Workloads
|
||||
========================================
|
||||
Trogdor can run several workloads. Workloads perform operations on the cluster and measure their performance. Workloads fail when the operations cannot be performed.
|
||||
|
||||
### ProduceBench
|
||||
ProduceBench starts a Kafka producer on a single agent node, producing to several partitions. The workload measures the average produce latency, as well as the median, 95th percentile, and 99th percentile latency.
|
||||
It can be configured to use a transactional producer which can commit transactions based on a set time interval or number of messages.
|
||||
|
||||
### RoundTripWorkload
|
||||
RoundTripWorkload tests both production and consumption. The workload starts a Kafka producer and consumer on a single node. The consumer will read back the messages that were produced by the producer.
|
||||
|
||||
### ConsumeBench
|
||||
ConsumeBench starts one or more Kafka consumers on a single agent node. Depending on the passed in configuration (see ConsumeBenchSpec), the consumers either subscribe to a set of topics (leveraging consumer group functionality and dynamic partition assignment) or manually assign partitions to themselves.
|
||||
The workload measures the average produce latency, as well as the median, 95th percentile, and 99th percentile latency.
|
||||
|
||||
Faults
|
||||
========================================
|
||||
Trogdor can run several faults which deliberately break something in the cluster.
|
||||
|
||||
### ProcessStopFault
|
||||
ProcessStopFault stops a process by sending it a SIGSTOP signal. When the fault ends, the process is resumed with SIGCONT.
|
||||
|
||||
### NetworkPartitionFault
|
||||
NetworkPartitionFault sets up an artificial network partition between one or more sets of nodes. Currently, this is implemented using iptables. The iptables rules are set up on the outbound traffic from the affected nodes. Therefore, the affected nodes should still be reachable from outside the cluster.
|
||||
|
||||
External Processes
|
||||
========================================
|
||||
Trogdor supports running arbitrary commands in external processes. This is a generic way to run any configurable command in the Trogdor framework - be it a Python program, bash script, docker image, etc.
|
||||
|
||||
### ExternalCommandWorker
|
||||
ExternalCommandWorker starts an external command defined by the ExternalCommandSpec. It essentially allows you to run any command on any Trogdor agent node.
|
||||
The worker communicates with the external process via its stdin, stdout and stderr in a JSON protocol. It uses stdout for any actionable communication and only logs what it sees in stderr.
|
||||
On startup the worker will first send a message describing the workload to the external process in this format:
|
||||
```
|
||||
{"id":<task ID string>, "workload":<configured workload JSON object>}
|
||||
```
|
||||
and will then listen for messages from the external process, again in a JSON format.
|
||||
Said JSON can contain the following fields:
|
||||
- status: If the object contains this field, the status of the worker will be set to the given value.
|
||||
- error: If the object contains this field, the error of the worker will be set to the given value. Once an error occurs, the external process will be terminated.
|
||||
- log: If the object contains this field, a log message will be issued with this text.
|
||||
An example:
|
||||
```json
|
||||
{"log": "Finished successfully.", "status": {"p99ProduceLatency": "100ms", "messagesSent": 10000}}
|
||||
```
|
||||
|
||||
Exec Mode
|
||||
========================================
|
||||
Sometimes, you just want to run a test quickly on a single node. In this case, you can use "exec mode." This mode allows you to run a single Trogdor Agent without a Coordinator.
|
||||
|
||||
When using exec mode, you must pass in a Task specification to use. The Agent will try to start this task.
|
||||
|
||||
For example:
|
||||
|
||||
> ./bin/trogdor.sh agent -n node0 -c ./config/trogdor.conf --exec ./tests/spec/simple_produce_bench.json
|
||||
|
||||
When using exec mode, the Agent will exit once the task is complete.
|
||||
199
Vagrantfile
vendored
Normal file
199
Vagrantfile
vendored
Normal file
@@ -0,0 +1,199 @@
|
||||
# Licensed to the Apache Software Foundation (ASF) under one or more
|
||||
# contributor license agreements. See the NOTICE file distributed with
|
||||
# this work for additional information regarding copyright ownership.
|
||||
# The ASF licenses this file to You under the Apache License, Version 2.0
|
||||
# (the "License"); you may not use this file except in compliance with
|
||||
# the License. You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
# -*- mode: ruby -*-
|
||||
# vi: set ft=ruby :
|
||||
|
||||
require 'socket'
|
||||
|
||||
# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
|
||||
VAGRANTFILE_API_VERSION = "2"
|
||||
|
||||
# General config
|
||||
enable_dns = false
|
||||
# Override to false when bringing up a cluster on AWS
|
||||
enable_hostmanager = true
|
||||
enable_jmx = false
|
||||
num_zookeepers = 1
|
||||
num_brokers = 3
|
||||
num_workers = 0 # Generic workers that get the code, but don't start any services
|
||||
ram_megabytes = 1280
|
||||
base_box = "ubuntu/trusty64"
|
||||
|
||||
# EC2
|
||||
ec2_access_key = ENV['AWS_ACCESS_KEY']
|
||||
ec2_secret_key = ENV['AWS_SECRET_KEY']
|
||||
ec2_session_token = ENV['AWS_SESSION_TOKEN']
|
||||
ec2_keypair_name = nil
|
||||
ec2_keypair_file = nil
|
||||
|
||||
ec2_region = "us-east-1"
|
||||
ec2_az = nil # Uses set by AWS
|
||||
ec2_ami = "ami-29ebb519"
|
||||
ec2_instance_type = "m3.medium"
|
||||
ec2_spot_instance = ENV['SPOT_INSTANCE'] ? ENV['SPOT_INSTANCE'] == 'true' : true
|
||||
ec2_spot_max_price = "0.113" # On-demand price for instance type
|
||||
ec2_user = "ubuntu"
|
||||
ec2_instance_name_prefix = "kafka-vagrant"
|
||||
ec2_security_groups = nil
|
||||
ec2_subnet_id = nil
|
||||
# Only override this by setting it to false if you're running in a VPC and you
|
||||
# are running Vagrant from within that VPC as well.
|
||||
ec2_associate_public_ip = nil
|
||||
|
||||
jdk_major = '8'
|
||||
jdk_full = '8u202-linux-x64'
|
||||
|
||||
local_config_file = File.join(File.dirname(__FILE__), "Vagrantfile.local")
|
||||
if File.exists?(local_config_file) then
|
||||
eval(File.read(local_config_file), binding, "Vagrantfile.local")
|
||||
end
|
||||
|
||||
# TODO(ksweeney): RAM requirements are not empirical and can probably be significantly lowered.
|
||||
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
|
||||
config.hostmanager.enabled = enable_hostmanager
|
||||
config.hostmanager.manage_host = enable_dns
|
||||
config.hostmanager.include_offline = false
|
||||
|
||||
## Provider-specific global configs
|
||||
config.vm.provider :virtualbox do |vb,override|
|
||||
override.vm.box = base_box
|
||||
|
||||
override.hostmanager.ignore_private_ip = false
|
||||
|
||||
# Brokers started with the standard script currently set Xms and Xmx to 1G,
|
||||
# plus we need some extra head room.
|
||||
vb.customize ["modifyvm", :id, "--memory", ram_megabytes.to_s]
|
||||
|
||||
if Vagrant.has_plugin?("vagrant-cachier")
|
||||
override.cache.scope = :box
|
||||
end
|
||||
end
|
||||
|
||||
config.vm.provider :aws do |aws,override|
|
||||
# The "box" is specified as an AMI
|
||||
override.vm.box = "dummy"
|
||||
override.vm.box_url = "https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box"
|
||||
|
||||
cached_addresses = {}
|
||||
# Use a custom resolver that SSH's into the machine and finds the IP address
|
||||
# directly. This lets us get at the private IP address directly, avoiding
|
||||
# some issues with using the default IP resolver, which uses the public IP
|
||||
# address.
|
||||
override.hostmanager.ip_resolver = proc do |vm, resolving_vm|
|
||||
if !cached_addresses.has_key?(vm.name)
|
||||
state_id = vm.state.id
|
||||
if state_id != :not_created && state_id != :stopped && vm.communicate.ready?
|
||||
contents = ''
|
||||
vm.communicate.execute("/sbin/ifconfig eth0 | grep 'inet addr' | tail -n 1 | egrep -o '[0-9\.]+' | head -n 1 2>&1") do |type, data|
|
||||
contents << data
|
||||
end
|
||||
cached_addresses[vm.name] = contents.split("\n").first[/(\d+\.\d+\.\d+\.\d+)/, 1]
|
||||
else
|
||||
cached_addresses[vm.name] = nil
|
||||
end
|
||||
end
|
||||
cached_addresses[vm.name]
|
||||
end
|
||||
|
||||
override.ssh.username = ec2_user
|
||||
override.ssh.private_key_path = ec2_keypair_file
|
||||
|
||||
aws.access_key_id = ec2_access_key
|
||||
aws.secret_access_key = ec2_secret_key
|
||||
aws.session_token = ec2_session_token
|
||||
aws.keypair_name = ec2_keypair_name
|
||||
|
||||
aws.region = ec2_region
|
||||
aws.availability_zone = ec2_az
|
||||
aws.instance_type = ec2_instance_type
|
||||
aws.ami = ec2_ami
|
||||
aws.security_groups = ec2_security_groups
|
||||
aws.subnet_id = ec2_subnet_id
|
||||
# If a subnet is specified, default to turning on a public IP unless the
|
||||
# user explicitly specifies the option. Without a public IP, Vagrant won't
|
||||
# be able to SSH into the hosts unless Vagrant is also running in the VPC.
|
||||
if ec2_associate_public_ip.nil?
|
||||
aws.associate_public_ip = true unless ec2_subnet_id.nil?
|
||||
else
|
||||
aws.associate_public_ip = ec2_associate_public_ip
|
||||
end
|
||||
aws.region_config ec2_region do |region|
|
||||
region.spot_instance = ec2_spot_instance
|
||||
region.spot_max_price = ec2_spot_max_price
|
||||
end
|
||||
|
||||
# Exclude some directories that can grow very large from syncing
|
||||
override.vm.synced_folder ".", "/vagrant", type: "rsync", rsync__exclude: ['.git', 'core/data/', 'logs/', 'tests/results/', 'results/']
|
||||
end
|
||||
|
||||
def name_node(node, name, ec2_instance_name_prefix)
|
||||
node.vm.hostname = name
|
||||
node.vm.provider :aws do |aws|
|
||||
aws.tags = {
|
||||
'Name' => ec2_instance_name_prefix + "-" + Socket.gethostname + "-" + name,
|
||||
'JenkinsBuildUrl' => ENV['BUILD_URL']
|
||||
}
|
||||
end
|
||||
end
|
||||
|
||||
def assign_local_ip(node, ip_address)
|
||||
node.vm.provider :virtualbox do |vb,override|
|
||||
override.vm.network :private_network, ip: ip_address
|
||||
end
|
||||
end
|
||||
|
||||
## Cluster definition
|
||||
zookeepers = []
|
||||
(1..num_zookeepers).each { |i|
|
||||
name = "zk" + i.to_s
|
||||
zookeepers.push(name)
|
||||
config.vm.define name do |zookeeper|
|
||||
name_node(zookeeper, name, ec2_instance_name_prefix)
|
||||
ip_address = "192.168.50." + (10 + i).to_s
|
||||
assign_local_ip(zookeeper, ip_address)
|
||||
zookeeper.vm.provision "shell", path: "vagrant/base.sh", env: {"JDK_MAJOR" => jdk_major, "JDK_FULL" => jdk_full}
|
||||
zk_jmx_port = enable_jmx ? (8000 + i).to_s : ""
|
||||
zookeeper.vm.provision "shell", path: "vagrant/zk.sh", :args => [i.to_s, num_zookeepers, zk_jmx_port]
|
||||
end
|
||||
}
|
||||
|
||||
(1..num_brokers).each { |i|
|
||||
name = "broker" + i.to_s
|
||||
config.vm.define name do |broker|
|
||||
name_node(broker, name, ec2_instance_name_prefix)
|
||||
ip_address = "192.168.50." + (50 + i).to_s
|
||||
assign_local_ip(broker, ip_address)
|
||||
# We need to be careful about what we list as the publicly routable
|
||||
# address since this is registered in ZK and handed out to clients. If
|
||||
# host DNS isn't setup, we shouldn't use hostnames -- IP addresses must be
|
||||
# used to support clients running on the host.
|
||||
zookeeper_connect = zookeepers.map{ |zk_addr| zk_addr + ":2181"}.join(",")
|
||||
broker.vm.provision "shell", path: "vagrant/base.sh", env: {"JDK_MAJOR" => jdk_major, "JDK_FULL" => jdk_full}
|
||||
kafka_jmx_port = enable_jmx ? (9000 + i).to_s : ""
|
||||
broker.vm.provision "shell", path: "vagrant/broker.sh", :args => [i.to_s, enable_dns ? name : ip_address, zookeeper_connect, kafka_jmx_port]
|
||||
end
|
||||
}
|
||||
|
||||
(1..num_workers).each { |i|
|
||||
name = "worker" + i.to_s
|
||||
config.vm.define name do |worker|
|
||||
name_node(worker, name, ec2_instance_name_prefix)
|
||||
ip_address = "192.168.50." + (100 + i).to_s
|
||||
assign_local_ip(worker, ip_address)
|
||||
worker.vm.provision "shell", path: "vagrant/base.sh", env: {"JDK_MAJOR" => jdk_major, "JDK_FULL" => jdk_full}
|
||||
end
|
||||
}
|
||||
|
||||
end
|
||||
45
bin/connect-distributed.sh
Executable file
45
bin/connect-distributed.sh
Executable file
@@ -0,0 +1,45 @@
|
||||
#!/bin/sh
|
||||
# Licensed to the Apache Software Foundation (ASF) under one or more
|
||||
# contributor license agreements. See the NOTICE file distributed with
|
||||
# this work for additional information regarding copyright ownership.
|
||||
# The ASF licenses this file to You under the Apache License, Version 2.0
|
||||
# (the "License"); you may not use this file except in compliance with
|
||||
# the License. You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
if [ $# -lt 1 ];
|
||||
then
|
||||
echo "USAGE: $0 [-daemon] connect-distributed.properties"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
base_dir=$(dirname $0)
|
||||
|
||||
if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then
|
||||
export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/connect-log4j.properties"
|
||||
fi
|
||||
|
||||
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
|
||||
export KAFKA_HEAP_OPTS="-Xms256M -Xmx2G"
|
||||
fi
|
||||
|
||||
EXTRA_ARGS=${EXTRA_ARGS-'-name connectDistributed'}
|
||||
|
||||
COMMAND=$1
|
||||
case $COMMAND in
|
||||
-daemon)
|
||||
EXTRA_ARGS="-daemon "$EXTRA_ARGS
|
||||
shift
|
||||
;;
|
||||
*)
|
||||
;;
|
||||
esac
|
||||
|
||||
exec $(dirname $0)/kafka-run-class.sh $EXTRA_ARGS org.apache.kafka.connect.cli.ConnectDistributed "$@"
|
||||
45
bin/connect-mirror-maker.sh
Executable file
45
bin/connect-mirror-maker.sh
Executable file
@@ -0,0 +1,45 @@
|
||||
#!/bin/sh
|
||||
# Licensed to the Apache Software Foundation (ASF) under one or more
|
||||
# contributor license agreements. See the NOTICE file distributed with
|
||||
# this work for additional information regarding copyright ownership.
|
||||
# The ASF licenses this file to You under the Apache License, Version 2.0
|
||||
# (the "License"); you may not use this file except in compliance with
|
||||
# the License. You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
if [ $# -lt 1 ];
|
||||
then
|
||||
echo "USAGE: $0 [-daemon] mm2.properties"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
base_dir=$(dirname $0)
|
||||
|
||||
if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then
|
||||
export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/connect-log4j.properties"
|
||||
fi
|
||||
|
||||
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
|
||||
export KAFKA_HEAP_OPTS="-Xms256M -Xmx2G"
|
||||
fi
|
||||
|
||||
EXTRA_ARGS=${EXTRA_ARGS-'-name mirrorMaker'}
|
||||
|
||||
COMMAND=$1
|
||||
case $COMMAND in
|
||||
-daemon)
|
||||
EXTRA_ARGS="-daemon "$EXTRA_ARGS
|
||||
shift
|
||||
;;
|
||||
*)
|
||||
;;
|
||||
esac
|
||||
|
||||
exec $(dirname $0)/kafka-run-class.sh $EXTRA_ARGS org.apache.kafka.connect.mirror.MirrorMaker "$@"
|
||||
45
bin/connect-standalone.sh
Executable file
45
bin/connect-standalone.sh
Executable file
@@ -0,0 +1,45 @@
|
||||
#!/bin/sh
|
||||
# Licensed to the Apache Software Foundation (ASF) under one or more
|
||||
# contributor license agreements. See the NOTICE file distributed with
|
||||
# this work for additional information regarding copyright ownership.
|
||||
# The ASF licenses this file to You under the Apache License, Version 2.0
|
||||
# (the "License"); you may not use this file except in compliance with
|
||||
# the License. You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
if [ $# -lt 1 ];
|
||||
then
|
||||
echo "USAGE: $0 [-daemon] connect-standalone.properties"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
base_dir=$(dirname $0)
|
||||
|
||||
if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then
|
||||
export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/connect-log4j.properties"
|
||||
fi
|
||||
|
||||
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
|
||||
export KAFKA_HEAP_OPTS="-Xms256M -Xmx2G"
|
||||
fi
|
||||
|
||||
EXTRA_ARGS=${EXTRA_ARGS-'-name connectStandalone'}
|
||||
|
||||
COMMAND=$1
|
||||
case $COMMAND in
|
||||
-daemon)
|
||||
EXTRA_ARGS="-daemon "$EXTRA_ARGS
|
||||
shift
|
||||
;;
|
||||
*)
|
||||
;;
|
||||
esac
|
||||
|
||||
exec $(dirname $0)/kafka-run-class.sh $EXTRA_ARGS org.apache.kafka.connect.cli.ConnectStandalone "$@"
|
||||
17
bin/kafka-acls.sh
Executable file
17
bin/kafka-acls.sh
Executable file
@@ -0,0 +1,17 @@
|
||||
#!/bin/bash
|
||||
# Licensed to the Apache Software Foundation (ASF) under one or more
|
||||
# contributor license agreements. See the NOTICE file distributed with
|
||||
# this work for additional information regarding copyright ownership.
|
||||
# The ASF licenses this file to You under the Apache License, Version 2.0
|
||||
# (the "License"); you may not use this file except in compliance with
|
||||
# the License. You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
exec $(dirname $0)/kafka-run-class.sh kafka.admin.AclCommand "$@"
|
||||
17
bin/kafka-broker-api-versions.sh
Executable file
17
bin/kafka-broker-api-versions.sh
Executable file
@@ -0,0 +1,17 @@
|
||||
#!/bin/bash
|
||||
# Licensed to the Apache Software Foundation (ASF) under one or more
|
||||
# contributor license agreements. See the NOTICE file distributed with
|
||||
# this work for additional information regarding copyright ownership.
|
||||
# The ASF licenses this file to You under the Apache License, Version 2.0
|
||||
# (the "License"); you may not use this file except in compliance with
|
||||
# the License. You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
exec $(dirname $0)/kafka-run-class.sh kafka.admin.BrokerApiVersionsCommand "$@"
|
||||
17
bin/kafka-configs.sh
Executable file
17
bin/kafka-configs.sh
Executable file
@@ -0,0 +1,17 @@
|
||||
#!/bin/bash
|
||||
# Licensed to the Apache Software Foundation (ASF) under one or more
|
||||
# contributor license agreements. See the NOTICE file distributed with
|
||||
# this work for additional information regarding copyright ownership.
|
||||
# The ASF licenses this file to You under the Apache License, Version 2.0
|
||||
# (the "License"); you may not use this file except in compliance with
|
||||
# the License. You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
exec $(dirname $0)/kafka-run-class.sh kafka.admin.ConfigCommand "$@"
|
||||
21
bin/kafka-console-consumer.sh
Executable file
21
bin/kafka-console-consumer.sh
Executable file
@@ -0,0 +1,21 @@
|
||||
#!/bin/bash
|
||||
# Licensed to the Apache Software Foundation (ASF) under one or more
|
||||
# contributor license agreements. See the NOTICE file distributed with
|
||||
# this work for additional information regarding copyright ownership.
|
||||
# The ASF licenses this file to You under the Apache License, Version 2.0
|
||||
# (the "License"); you may not use this file except in compliance with
|
||||
# the License. You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
|
||||
export KAFKA_HEAP_OPTS="-Xmx512M"
|
||||
fi
|
||||
|
||||
exec $(dirname $0)/kafka-run-class.sh kafka.tools.ConsoleConsumer "$@"
|
||||
20
bin/kafka-console-producer.sh
Executable file
20
bin/kafka-console-producer.sh
Executable file
@@ -0,0 +1,20 @@
|
||||
#!/bin/bash
|
||||
# Licensed to the Apache Software Foundation (ASF) under one or more
|
||||
# contributor license agreements. See the NOTICE file distributed with
|
||||
# this work for additional information regarding copyright ownership.
|
||||
# The ASF licenses this file to You under the Apache License, Version 2.0
|
||||
# (the "License"); you may not use this file except in compliance with
|
||||
# the License. You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
|
||||
export KAFKA_HEAP_OPTS="-Xmx512M"
|
||||
fi
|
||||
exec $(dirname $0)/kafka-run-class.sh kafka.tools.ConsoleProducer "$@"
|
||||
17
bin/kafka-consumer-groups.sh
Executable file
17
bin/kafka-consumer-groups.sh
Executable file
@@ -0,0 +1,17 @@
|
||||
#!/bin/bash
|
||||
# Licensed to the Apache Software Foundation (ASF) under one or more
|
||||
# contributor license agreements. See the NOTICE file distributed with
|
||||
# this work for additional information regarding copyright ownership.
|
||||
# The ASF licenses this file to You under the Apache License, Version 2.0
|
||||
# (the "License"); you may not use this file except in compliance with
|
||||
# the License. You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
exec $(dirname $0)/kafka-run-class.sh kafka.admin.ConsumerGroupCommand "$@"
|
||||
20
bin/kafka-consumer-perf-test.sh
Executable file
20
bin/kafka-consumer-perf-test.sh
Executable file
@@ -0,0 +1,20 @@
|
||||
#!/bin/bash
|
||||
# Licensed to the Apache Software Foundation (ASF) under one or more
|
||||
# contributor license agreements. See the NOTICE file distributed with
|
||||
# this work for additional information regarding copyright ownership.
|
||||
# The ASF licenses this file to You under the Apache License, Version 2.0
|
||||
# (the "License"); you may not use this file except in compliance with
|
||||
# the License. You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
|
||||
export KAFKA_HEAP_OPTS="-Xmx512M"
|
||||
fi
|
||||
exec $(dirname $0)/kafka-run-class.sh kafka.tools.ConsumerPerformance "$@"
|
||||
17
bin/kafka-delegation-tokens.sh
Executable file
17
bin/kafka-delegation-tokens.sh
Executable file
@@ -0,0 +1,17 @@
|
||||
#!/bin/bash
|
||||
# Licensed to the Apache Software Foundation (ASF) under one or more
|
||||
# contributor license agreements. See the NOTICE file distributed with
|
||||
# this work for additional information regarding copyright ownership.
|
||||
# The ASF licenses this file to You under the Apache License, Version 2.0
|
||||
# (the "License"); you may not use this file except in compliance with
|
||||
# the License. You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
exec $(dirname $0)/kafka-run-class.sh kafka.admin.DelegationTokenCommand "$@"
|
||||
17
bin/kafka-delete-records.sh
Executable file
17
bin/kafka-delete-records.sh
Executable file
@@ -0,0 +1,17 @@
|
||||
#!/bin/bash
|
||||
# Licensed to the Apache Software Foundation (ASF) under one or more
|
||||
# contributor license agreements. See the NOTICE file distributed with
|
||||
# this work for additional information regarding copyright ownership.
|
||||
# The ASF licenses this file to You under the Apache License, Version 2.0
|
||||
# (the "License"); you may not use this file except in compliance with
|
||||
# the License. You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
exec $(dirname $0)/kafka-run-class.sh kafka.admin.DeleteRecordsCommand "$@"
|
||||
18
bin/kafka-diskload-protector.sh
Executable file
18
bin/kafka-diskload-protector.sh
Executable file
@@ -0,0 +1,18 @@
|
||||
#!/bin/bash
|
||||
# Licensed to the Apache Software Foundation (ASF) under one or more
|
||||
# contributor license agreements. See the NOTICE file distributed with
|
||||
# this work for additional information regarding copyright ownership.
|
||||
# The ASF licenses this file to You under the Apache License, Version 2.0
|
||||
# (the "License"); you may not use this file except in compliance with
|
||||
# the License. You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
exec $(dirname $0)/kafka-run-class.sh kafka.admin.DiskLoadProtectorCommand "$@"
|
||||
|
||||
17
bin/kafka-dump-log.sh
Executable file
@@ -0,0 +1,17 @@
#!/bin/bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

exec $(dirname $0)/kafka-run-class.sh kafka.tools.DumpLogSegments "$@"
18
bin/kafka-exmetrics.sh
Executable file
@@ -0,0 +1,18 @@
#!/bin/bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

exec $(dirname $0)/kafka-run-class.sh kafka.admin.KafkaExMetricsCommand "$@"
17
bin/kafka-leader-election.sh
Executable file
@@ -0,0 +1,17 @@
#!/bin/bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

exec $(dirname $0)/kafka-run-class.sh kafka.admin.LeaderElectionCommand "$@"
17
bin/kafka-log-dirs.sh
Executable file
@@ -0,0 +1,17 @@
#!/bin/bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

exec $(dirname $0)/kafka-run-class.sh kafka.admin.LogDirsCommand "$@"
17
bin/kafka-mirror-maker.sh
Executable file
@@ -0,0 +1,17 @@
#!/bin/bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

exec $(dirname $0)/kafka-run-class.sh kafka.tools.MirrorMaker "$@"
17
bin/kafka-preferred-replica-election.sh
Executable file
@@ -0,0 +1,17 @@
#!/bin/bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

exec $(dirname $0)/kafka-run-class.sh kafka.admin.PreferredReplicaLeaderElectionCommand "$@"
20
bin/kafka-producer-perf-test.sh
Executable file
@@ -0,0 +1,20 @@
#!/bin/bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
export KAFKA_HEAP_OPTS="-Xmx512M"
fi
exec $(dirname $0)/kafka-run-class.sh org.apache.kafka.tools.ProducerPerformance "$@"
17
bin/kafka-reassign-partitions.sh
Executable file
@@ -0,0 +1,17 @@
#!/bin/bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

exec $(dirname $0)/kafka-run-class.sh kafka.admin.ReassignPartitionsCommand "$@"
17
bin/kafka-replica-verification.sh
Executable file
@@ -0,0 +1,17 @@
#!/bin/bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

exec $(dirname $0)/kafka-run-class.sh kafka.tools.ReplicaVerificationTool "$@"
316
bin/kafka-run-class.sh
Executable file
@@ -0,0 +1,316 @@
#!/bin/bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

if [ $# -lt 1 ];
then
echo "USAGE: $0 [-daemon] [-name servicename] [-loggc] classname [opts]"
exit 1
fi

# CYGWIN == 1 if Cygwin is detected, else 0.
if [[ $(uname -a) =~ "CYGWIN" ]]; then
CYGWIN=1
else
CYGWIN=0
fi

if [ -z "$INCLUDE_TEST_JARS" ]; then
INCLUDE_TEST_JARS=false
fi

# Exclude jars not necessary for running commands.
regex="(-(test|test-sources|src|scaladoc|javadoc)\.jar|jar.asc)$"
should_include_file() {
if [ "$INCLUDE_TEST_JARS" = true ]; then
return 0
fi
file=$1
if [ -z "$(echo "$file" | egrep "$regex")" ] ; then
return 0
else
return 1
fi
}

base_dir=$(dirname $0)/..

if [ -z "$SCALA_VERSION" ]; then
SCALA_VERSION=2.12.10
fi

if [ -z "$SCALA_BINARY_VERSION" ]; then
SCALA_BINARY_VERSION=$(echo $SCALA_VERSION | cut -f 1-2 -d '.')
fi

# run ./gradlew copyDependantLibs to get all dependant jars in a local dir
shopt -s nullglob
if [ -z "$UPGRADE_KAFKA_STREAMS_TEST_VERSION" ]; then
for dir in "$base_dir"/core/build/dependant-libs-${SCALA_VERSION}*;
do
CLASSPATH="$CLASSPATH:$dir/*"
done
fi

for file in "$base_dir"/examples/build/libs/kafka-examples*.jar;
do
if should_include_file "$file"; then
CLASSPATH="$CLASSPATH":"$file"
fi
done

if [ -z "$UPGRADE_KAFKA_STREAMS_TEST_VERSION" ]; then
clients_lib_dir=$(dirname $0)/../clients/build/libs
streams_lib_dir=$(dirname $0)/../streams/build/libs
streams_dependant_clients_lib_dir=$(dirname $0)/../streams/build/dependant-libs-${SCALA_VERSION}
else
clients_lib_dir=/opt/kafka-$UPGRADE_KAFKA_STREAMS_TEST_VERSION/libs
streams_lib_dir=$clients_lib_dir
streams_dependant_clients_lib_dir=$streams_lib_dir
fi

for file in "$clients_lib_dir"/kafka-clients*.jar;
do
if should_include_file "$file"; then
CLASSPATH="$CLASSPATH":"$file"
fi
done

for file in "$streams_lib_dir"/kafka-streams*.jar;
do
if should_include_file "$file"; then
CLASSPATH="$CLASSPATH":"$file"
fi
done

if [ -z "$UPGRADE_KAFKA_STREAMS_TEST_VERSION" ]; then
for file in "$base_dir"/streams/examples/build/libs/kafka-streams-examples*.jar;
do
if should_include_file "$file"; then
CLASSPATH="$CLASSPATH":"$file"
fi
done
else
VERSION_NO_DOTS=`echo $UPGRADE_KAFKA_STREAMS_TEST_VERSION | sed 's/\.//g'`
SHORT_VERSION_NO_DOTS=${VERSION_NO_DOTS:0:((${#VERSION_NO_DOTS} - 1))} # remove last char, ie, bug-fix number
for file in "$base_dir"/streams/upgrade-system-tests-$SHORT_VERSION_NO_DOTS/build/libs/kafka-streams-upgrade-system-tests*.jar;
do
if should_include_file "$file"; then
CLASSPATH="$file":"$CLASSPATH"
fi
done
if [ "$SHORT_VERSION_NO_DOTS" = "0100" ]; then
CLASSPATH="/opt/kafka-$UPGRADE_KAFKA_STREAMS_TEST_VERSION/libs/zkclient-0.8.jar":"$CLASSPATH"
CLASSPATH="/opt/kafka-$UPGRADE_KAFKA_STREAMS_TEST_VERSION/libs/zookeeper-3.4.6.jar":"$CLASSPATH"
fi
if [ "$SHORT_VERSION_NO_DOTS" = "0101" ]; then
CLASSPATH="/opt/kafka-$UPGRADE_KAFKA_STREAMS_TEST_VERSION/libs/zkclient-0.9.jar":"$CLASSPATH"
CLASSPATH="/opt/kafka-$UPGRADE_KAFKA_STREAMS_TEST_VERSION/libs/zookeeper-3.4.8.jar":"$CLASSPATH"
fi
fi

for file in "$streams_dependant_clients_lib_dir"/rocksdb*.jar;
do
CLASSPATH="$CLASSPATH":"$file"
done

for file in "$streams_dependant_clients_lib_dir"/*hamcrest*.jar;
do
CLASSPATH="$CLASSPATH":"$file"
done

for file in "$base_dir"/tools/build/libs/kafka-tools*.jar;
do
if should_include_file "$file"; then
CLASSPATH="$CLASSPATH":"$file"
fi
done

for dir in "$base_dir"/tools/build/dependant-libs-${SCALA_VERSION}*;
do
CLASSPATH="$CLASSPATH:$dir/*"
done

for cc_pkg in "api" "transforms" "runtime" "file" "mirror" "mirror-client" "json" "tools" "basic-auth-extension"
do
for file in "$base_dir"/connect/${cc_pkg}/build/libs/connect-${cc_pkg}*.jar;
do
if should_include_file "$file"; then
CLASSPATH="$CLASSPATH":"$file"
fi
done
if [ -d "$base_dir/connect/${cc_pkg}/build/dependant-libs" ] ; then
CLASSPATH="$CLASSPATH:$base_dir/connect/${cc_pkg}/build/dependant-libs/*"
fi
done

# classpath addition for release
for file in "$base_dir"/libs/*;
do
if should_include_file "$file"; then
CLASSPATH="$CLASSPATH":"$file"
fi
done

for file in "$base_dir"/core/build/libs/kafka_${SCALA_BINARY_VERSION}*.jar;
do
if should_include_file "$file"; then
CLASSPATH="$CLASSPATH":"$file"
fi
done
shopt -u nullglob

if [ -z "$CLASSPATH" ] ; then
echo "Classpath is empty. Please build the project first e.g. by running './gradlew jar -PscalaVersion=$SCALA_VERSION'"
exit 1
fi

# JMX settings
if [ -z "$KAFKA_JMX_OPTS" ]; then
KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false "
fi

# JMX port to use
if [ $JMX_PORT ]; then
KAFKA_JMX_OPTS="$KAFKA_JMX_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT -Dcom.sun.management.jmxremote.rmi.port=$JMX_PORT"
fi

# Log directory to use
if [ "x$LOG_DIR" = "x" ]; then
LOG_DIR="$base_dir/logs"
fi

# Log4j settings
if [ -z "$KAFKA_LOG4J_OPTS" ]; then
# Log to console. This is a tool.
LOG4J_DIR="$base_dir/config/tools-log4j.properties"
# If Cygwin is detected, LOG4J_DIR is converted to Windows format.
(( CYGWIN )) && LOG4J_DIR=$(cygpath --path --mixed "${LOG4J_DIR}")
KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:${LOG4J_DIR}"
else
# create logs directory
if [ ! -d "$LOG_DIR" ]; then
mkdir -p "$LOG_DIR"
fi
fi

# If Cygwin is detected, LOG_DIR is converted to Windows format.
(( CYGWIN )) && LOG_DIR=$(cygpath --path --mixed "${LOG_DIR}")
KAFKA_LOG4J_OPTS="-Dkafka.logs.dir=$LOG_DIR $KAFKA_LOG4J_OPTS"

# Generic jvm settings you want to add
if [ -z "$KAFKA_OPTS" ]; then
KAFKA_OPTS=""
fi

# Set Debug options if enabled
if [ "x$KAFKA_DEBUG" != "x" ]; then

# Use default ports
DEFAULT_JAVA_DEBUG_PORT="5005"

if [ -z "$JAVA_DEBUG_PORT" ]; then
JAVA_DEBUG_PORT="$DEFAULT_JAVA_DEBUG_PORT"
fi

# Use the defaults if JAVA_DEBUG_OPTS was not set
DEFAULT_JAVA_DEBUG_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=${DEBUG_SUSPEND_FLAG:-n},address=0.0.0.0:$JAVA_DEBUG_PORT"
if [ -z "$JAVA_DEBUG_OPTS" ]; then
JAVA_DEBUG_OPTS="$DEFAULT_JAVA_DEBUG_OPTS"
fi

echo "Enabling Java debug options: $JAVA_DEBUG_OPTS"
KAFKA_OPTS="$JAVA_DEBUG_OPTS $KAFKA_OPTS"
fi

# Which java to use
if [ -z "$JAVA_HOME" ]; then
JAVA="java"
else
JAVA="$JAVA_HOME/bin/java"
fi

# Memory options
if [ -z "$KAFKA_HEAP_OPTS" ]; then
KAFKA_HEAP_OPTS="-Xmx256M"
fi

# JVM performance options
# MaxInlineLevel=15 is the default since JDK 14 and can be removed once older JDKs are no longer supported
if [ -z "$KAFKA_JVM_PERFORMANCE_OPTS" ]; then
KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:InitiatingHeapOccupancyPercent=35 -XX:G1HeapRegionSize=16m -XX:+ExplicitGCInvokesConcurrent -XX:MaxInlineLevel=15 -Djava.awt.headless=true"
fi

while [ $# -gt 0 ]; do
COMMAND=$1
case $COMMAND in
-name)
DAEMON_NAME=$2
CONSOLE_OUTPUT_FILE=$LOG_DIR/$DAEMON_NAME.out
shift 2
;;
-loggc)
if [ -z "$KAFKA_GC_LOG_OPTS" ]; then
GC_LOG_ENABLED="true"
fi
shift
;;
-daemon)
DAEMON_MODE="true"
shift
;;
*)
break
;;
esac
done

# GC options
GC_FILE_SUFFIX='-gc.log'
GC_LOG_FILE_NAME=''
if [ "x$GC_LOG_ENABLED" = "xtrue" ]; then
GC_LOG_FILE_NAME=$DAEMON_NAME$GC_FILE_SUFFIX

# The first segment of the version number, which is '1' for releases before Java 9
# it then becomes '9', '10', ...
# Some examples of the first line of `java --version`:
# 8 -> java version "1.8.0_152"
# 9.0.4 -> java version "9.0.4"
# 10 -> java version "10" 2018-03-20
# 10.0.1 -> java version "10.0.1" 2018-04-17
# We need to match to the end of the line to prevent sed from printing the characters that do not match
JAVA_MAJOR_VERSION=$($JAVA -version 2>&1 | sed -E -n 's/.* version "([0-9]*).*$/\1/p')
if [[ "$JAVA_MAJOR_VERSION" -ge "9" ]] ; then
KAFKA_GC_LOG_OPTS="-Xlog:gc*:file=$LOG_DIR/$GC_LOG_FILE_NAME:time"
else
KAFKA_GC_LOG_OPTS="-Xloggc:$LOG_DIR/$GC_LOG_FILE_NAME -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M"
fi
fi

# Remove a possible colon prefix from the classpath (happens at lines like `CLASSPATH="$CLASSPATH:$file"` when CLASSPATH is blank)
# Syntax used on the right side is native Bash string manipulation; for more details see
# http://tldp.org/LDP/abs/html/string-manipulation.html, specifically the section titled "Substring Removal"
CLASSPATH=${CLASSPATH#:}

# If Cygwin is detected, classpath is converted to Windows format.
(( CYGWIN )) && CLASSPATH=$(cygpath --path --mixed "${CLASSPATH}")

# Launch mode
if [ "x$DAEMON_MODE" = "xtrue" ]; then
nohup $JAVA $KAFKA_HEAP_OPTS $KAFKA_JVM_PERFORMANCE_OPTS $KAFKA_GC_LOG_OPTS $KAFKA_JMX_OPTS $KAFKA_LOG4J_OPTS -cp $CLASSPATH $KAFKA_OPTS "$@" > "$CONSOLE_OUTPUT_FILE" 2>&1 < /dev/null &
else
exec $JAVA $KAFKA_HEAP_OPTS $KAFKA_JVM_PERFORMANCE_OPTS $KAFKA_GC_LOG_OPTS $KAFKA_JMX_OPTS $KAFKA_LOG4J_OPTS -cp $CLASSPATH $KAFKA_OPTS "$@"
fi
51
bin/kafka-server-start.sh
Executable file
@@ -0,0 +1,51 @@
#!/bin/bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

if [ $# -lt 1 ];
then
echo "USAGE: $0 [-daemon] server.properties [--override property=value]*"
exit 1
fi
base_dir=$(dirname $0)

if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then
export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/log4j.properties"
fi

if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
export KAFKA_HEAP_OPTS="-Xmx8G -Xms8G"
export JMX_PORT=8099
#export KAFKA_DEBUG=debug
#export DAEMON_MODE=true
export KAFKA_OPTS="-Djava.security.auth.login.config=$base_dir/../config/kafka_server_jaas.conf"
export DEBUG_SUSPEND_FLAG="n"
export JAVA_DEBUG_PORT="8096"
export GC_LOG_ENABLED=true
fi

EXTRA_ARGS=${EXTRA_ARGS-'-name kafkaServer -loggc'}

COMMAND=$1
case $COMMAND in
-daemon)
EXTRA_ARGS="-daemon "$EXTRA_ARGS
shift
;;
*)
;;
esac

exec $base_dir/kafka-run-class.sh $EXTRA_ARGS kafka.Kafka "$@"
24
bin/kafka-server-stop.sh
Executable file
@@ -0,0 +1,24 @@
#!/bin/sh
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
SIGNAL=${SIGNAL:-TERM}
PIDS=$(ps ax | grep -i 'kafka\.Kafka' | grep java | grep -v grep | awk '{print $1}')

if [ -z "$PIDS" ]; then
echo "No kafka server to stop"
exit 1
else
kill -s $SIGNAL $PIDS
fi
21
bin/kafka-streams-application-reset.sh
Executable file
@@ -0,0 +1,21 @@
#!/bin/bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
export KAFKA_HEAP_OPTS="-Xmx512M"
fi

exec $(dirname $0)/kafka-run-class.sh kafka.tools.StreamsResetter "$@"
17
bin/kafka-topics.sh
Executable file
@@ -0,0 +1,17 @@
#!/bin/bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

exec $(dirname $0)/kafka-run-class.sh kafka.admin.TopicCommand "$@"
20
bin/kafka-verifiable-consumer.sh
Executable file
@@ -0,0 +1,20 @@
#!/bin/bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
export KAFKA_HEAP_OPTS="-Xmx512M"
fi
exec $(dirname $0)/kafka-run-class.sh org.apache.kafka.tools.VerifiableConsumer "$@"
20
bin/kafka-verifiable-producer.sh
Executable file
@@ -0,0 +1,20 @@
#!/bin/bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
export KAFKA_HEAP_OPTS="-Xmx512M"
fi
exec $(dirname $0)/kafka-run-class.sh org.apache.kafka.tools.VerifiableProducer "$@"
50
bin/trogdor.sh
Executable file
@@ -0,0 +1,50 @@
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

usage() {
cat <<EOF
The Trogdor fault injector.

Usage:
$0 [action] [options]

Actions:
agent: Run the trogdor agent.
coordinator: Run the trogdor coordinator.
client: Run the client which communicates with the trogdor coordinator.
agent-client: Run the client which communicates with the trogdor agent.
help: This help message.
EOF
}

if [[ $# -lt 1 ]]; then
usage
exit 0
fi
action="${1}"
shift
CLASS=""
case ${action} in
agent) CLASS="org.apache.kafka.trogdor.agent.Agent";;
coordinator) CLASS="org.apache.kafka.trogdor.coordinator.Coordinator";;
client) CLASS="org.apache.kafka.trogdor.coordinator.CoordinatorClient";;
agent-client) CLASS="org.apache.kafka.trogdor.agent.AgentClient";;
help) usage; exit 0;;
*) echo "Unknown action '${action}'. Type '$0 help' for help."; exit 1;;
esac

export INCLUDE_TEST_JARS=1
exec $(dirname $0)/kafka-run-class.sh "${CLASS}" "$@"
34
bin/windows/connect-distributed.bat
Normal file
@@ -0,0 +1,34 @@
@echo off
rem Licensed to the Apache Software Foundation (ASF) under one or more
rem contributor license agreements. See the NOTICE file distributed with
rem this work for additional information regarding copyright ownership.
rem The ASF licenses this file to You under the Apache License, Version 2.0
rem (the "License"); you may not use this file except in compliance with
rem the License. You may obtain a copy of the License at
rem
rem http://www.apache.org/licenses/LICENSE-2.0
rem
rem Unless required by applicable law or agreed to in writing, software
rem distributed under the License is distributed on an "AS IS" BASIS,
rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
rem See the License for the specific language governing permissions and
rem limitations under the License.

IF [%1] EQU [] (
echo USAGE: %0 connect-distributed.properties
EXIT /B 1
)

SetLocal
rem Using pushd popd to set BASE_DIR to the absolute path
pushd %~dp0..\..
set BASE_DIR=%CD%
popd

rem Log4j settings
IF ["%KAFKA_LOG4J_OPTS%"] EQU [""] (
set KAFKA_LOG4J_OPTS=-Dlog4j.configuration=file:%BASE_DIR%/config/tools-log4j.properties
)

"%~dp0kafka-run-class.bat" org.apache.kafka.connect.cli.ConnectDistributed %*
EndLocal
34
bin/windows/connect-standalone.bat
Normal file
@@ -0,0 +1,34 @@
@echo off
rem Licensed to the Apache Software Foundation (ASF) under one or more
rem contributor license agreements. See the NOTICE file distributed with
rem this work for additional information regarding copyright ownership.
rem The ASF licenses this file to You under the Apache License, Version 2.0
rem (the "License"); you may not use this file except in compliance with
rem the License. You may obtain a copy of the License at
rem
rem http://www.apache.org/licenses/LICENSE-2.0
rem
rem Unless required by applicable law or agreed to in writing, software
rem distributed under the License is distributed on an "AS IS" BASIS,
rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
rem See the License for the specific language governing permissions and
rem limitations under the License.

IF [%1] EQU [] (
echo USAGE: %0 connect-standalone.properties
EXIT /B 1
)

SetLocal
rem Using pushd popd to set BASE_DIR to the absolute path
pushd %~dp0..\..
set BASE_DIR=%CD%
popd

rem Log4j settings
IF ["%KAFKA_LOG4J_OPTS%"] EQU [""] (
set KAFKA_LOG4J_OPTS=-Dlog4j.configuration=file:%BASE_DIR%/config/tools-log4j.properties
)

"%~dp0kafka-run-class.bat" org.apache.kafka.connect.cli.ConnectStandalone %*
EndLocal
17
bin/windows/kafka-acls.bat
Normal file
@@ -0,0 +1,17 @@
@echo off
rem Licensed to the Apache Software Foundation (ASF) under one or more
rem contributor license agreements. See the NOTICE file distributed with
rem this work for additional information regarding copyright ownership.
rem The ASF licenses this file to You under the Apache License, Version 2.0
rem (the "License"); you may not use this file except in compliance with
rem the License. You may obtain a copy of the License at
rem
rem http://www.apache.org/licenses/LICENSE-2.0
rem
rem Unless required by applicable law or agreed to in writing, software
rem distributed under the License is distributed on an "AS IS" BASIS,
rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
rem See the License for the specific language governing permissions and
rem limitations under the License.

"%~dp0kafka-run-class.bat" kafka.admin.AclCommand %*
17
bin/windows/kafka-broker-api-versions.bat
Normal file
@@ -0,0 +1,17 @@
@echo off
rem Licensed to the Apache Software Foundation (ASF) under one or more
rem contributor license agreements. See the NOTICE file distributed with
rem this work for additional information regarding copyright ownership.
rem The ASF licenses this file to You under the Apache License, Version 2.0
rem (the "License"); you may not use this file except in compliance with
rem the License. You may obtain a copy of the License at
rem
rem http://www.apache.org/licenses/LICENSE-2.0
rem
rem Unless required by applicable law or agreed to in writing, software
rem distributed under the License is distributed on an "AS IS" BASIS,
rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
rem See the License for the specific language governing permissions and
rem limitations under the License.

%~dp0kafka-run-class.bat kafka.admin.BrokerApiVersionsCommand %*
17
bin/windows/kafka-configs.bat
Normal file
@@ -0,0 +1,17 @@
@echo off
rem Licensed to the Apache Software Foundation (ASF) under one or more
rem contributor license agreements. See the NOTICE file distributed with
rem this work for additional information regarding copyright ownership.
rem The ASF licenses this file to You under the Apache License, Version 2.0
rem (the "License"); you may not use this file except in compliance with
rem the License. You may obtain a copy of the License at
rem
rem http://www.apache.org/licenses/LICENSE-2.0
rem
rem Unless required by applicable law or agreed to in writing, software
rem distributed under the License is distributed on an "AS IS" BASIS,
rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
rem See the License for the specific language governing permissions and
rem limitations under the License.

"%~dp0kafka-run-class.bat" kafka.admin.ConfigCommand %*
20
bin/windows/kafka-console-consumer.bat
Normal file
@@ -0,0 +1,20 @@
@echo off
rem Licensed to the Apache Software Foundation (ASF) under one or more
rem contributor license agreements. See the NOTICE file distributed with
rem this work for additional information regarding copyright ownership.
rem The ASF licenses this file to You under the Apache License, Version 2.0
rem (the "License"); you may not use this file except in compliance with
rem the License. You may obtain a copy of the License at
rem
rem http://www.apache.org/licenses/LICENSE-2.0
rem
rem Unless required by applicable law or agreed to in writing, software
rem distributed under the License is distributed on an "AS IS" BASIS,
rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
rem See the License for the specific language governing permissions and
rem limitations under the License.

SetLocal
set KAFKA_HEAP_OPTS=-Xmx512M
"%~dp0kafka-run-class.bat" kafka.tools.ConsoleConsumer %*
EndLocal
20
bin/windows/kafka-console-producer.bat
Normal file
@@ -0,0 +1,20 @@
@echo off
rem Licensed to the Apache Software Foundation (ASF) under one or more
rem contributor license agreements. See the NOTICE file distributed with
rem this work for additional information regarding copyright ownership.
rem The ASF licenses this file to You under the Apache License, Version 2.0
rem (the "License"); you may not use this file except in compliance with
rem the License. You may obtain a copy of the License at
rem
rem http://www.apache.org/licenses/LICENSE-2.0
rem
rem Unless required by applicable law or agreed to in writing, software
rem distributed under the License is distributed on an "AS IS" BASIS,
rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
rem See the License for the specific language governing permissions and
rem limitations under the License.

SetLocal
set KAFKA_HEAP_OPTS=-Xmx512M
"%~dp0kafka-run-class.bat" kafka.tools.ConsoleProducer %*
EndLocal
17
bin/windows/kafka-consumer-groups.bat
Normal file
@@ -0,0 +1,17 @@
@echo off
rem Licensed to the Apache Software Foundation (ASF) under one or more
rem contributor license agreements. See the NOTICE file distributed with
rem this work for additional information regarding copyright ownership.
rem The ASF licenses this file to You under the Apache License, Version 2.0
rem (the "License"); you may not use this file except in compliance with
rem the License. You may obtain a copy of the License at
rem
rem http://www.apache.org/licenses/LICENSE-2.0
rem
rem Unless required by applicable law or agreed to in writing, software
rem distributed under the License is distributed on an "AS IS" BASIS,
rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
rem See the License for the specific language governing permissions and
rem limitations under the License.

"%~dp0kafka-run-class.bat" kafka.admin.ConsumerGroupCommand %*
20
bin/windows/kafka-consumer-perf-test.bat
Normal file
@@ -0,0 +1,20 @@
@echo off
rem Licensed to the Apache Software Foundation (ASF) under one or more
rem contributor license agreements. See the NOTICE file distributed with
rem this work for additional information regarding copyright ownership.
rem The ASF licenses this file to You under the Apache License, Version 2.0
rem (the "License"); you may not use this file except in compliance with
rem the License. You may obtain a copy of the License at
rem
rem http://www.apache.org/licenses/LICENSE-2.0
rem
rem Unless required by applicable law or agreed to in writing, software
rem distributed under the License is distributed on an "AS IS" BASIS,
rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
rem See the License for the specific language governing permissions and
rem limitations under the License.

SetLocal
set KAFKA_HEAP_OPTS=-Xmx512M -Xms512M
"%~dp0kafka-run-class.bat" kafka.tools.ConsumerPerformance %*
EndLocal
17
bin/windows/kafka-delegation-tokens.bat
Normal file
@@ -0,0 +1,17 @@
@echo off
rem Licensed to the Apache Software Foundation (ASF) under one or more
rem contributor license agreements. See the NOTICE file distributed with
rem this work for additional information regarding copyright ownership.
rem The ASF licenses this file to You under the Apache License, Version 2.0
rem (the "License"); you may not use this file except in compliance with
rem the License. You may obtain a copy of the License at
rem
rem http://www.apache.org/licenses/LICENSE-2.0
rem
rem Unless required by applicable law or agreed to in writing, software
rem distributed under the License is distributed on an "AS IS" BASIS,
rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
rem See the License for the specific language governing permissions and
rem limitations under the License.

"%~dp0kafka-run-class.bat" kafka.admin.DelegationTokenCommand %*
17
bin/windows/kafka-delete-records.bat
Normal file
@@ -0,0 +1,17 @@
@echo off
rem Licensed to the Apache Software Foundation (ASF) under one or more
rem contributor license agreements. See the NOTICE file distributed with
rem this work for additional information regarding copyright ownership.
rem The ASF licenses this file to You under the Apache License, Version 2.0
rem (the "License"); you may not use this file except in compliance with
rem the License. You may obtain a copy of the License at
rem
rem http://www.apache.org/licenses/LICENSE-2.0
rem
rem Unless required by applicable law or agreed to in writing, software
rem distributed under the License is distributed on an "AS IS" BASIS,
rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
rem See the License for the specific language governing permissions and
rem limitations under the License.

"%~dp0kafka-run-class.bat" kafka.admin.DeleteRecordsCommand %*
17
bin/windows/kafka-dump-log.bat
Normal file
@@ -0,0 +1,17 @@
@echo off
rem Licensed to the Apache Software Foundation (ASF) under one or more
rem contributor license agreements. See the NOTICE file distributed with
rem this work for additional information regarding copyright ownership.
rem The ASF licenses this file to You under the Apache License, Version 2.0
rem (the "License"); you may not use this file except in compliance with
rem the License. You may obtain a copy of the License at
rem
rem http://www.apache.org/licenses/LICENSE-2.0
rem
rem Unless required by applicable law or agreed to in writing, software
rem distributed under the License is distributed on an "AS IS" BASIS,
rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
rem See the License for the specific language governing permissions and
rem limitations under the License.

"%~dp0kafka-run-class.bat" kafka.tools.DumpLogSegments %*
17
bin/windows/kafka-leader-election.bat
Normal file
@@ -0,0 +1,17 @@
@echo off
rem Licensed to the Apache Software Foundation (ASF) under one or more
rem contributor license agreements. See the NOTICE file distributed with
rem this work for additional information regarding copyright ownership.
rem The ASF licenses this file to You under the Apache License, Version 2.0
rem (the "License"); you may not use this file except in compliance with
rem the License. You may obtain a copy of the License at
rem
rem http://www.apache.org/licenses/LICENSE-2.0
rem
rem Unless required by applicable law or agreed to in writing, software
rem distributed under the License is distributed on an "AS IS" BASIS,
rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
rem See the License for the specific language governing permissions and
rem limitations under the License.

"%~dp0kafka-run-class.bat" kafka.admin.LeaderElectionCommand %*
17
bin/windows/kafka-log-dirs.bat
Normal file
@@ -0,0 +1,17 @@
@echo off
rem Licensed to the Apache Software Foundation (ASF) under one or more
rem contributor license agreements. See the NOTICE file distributed with
rem this work for additional information regarding copyright ownership.
rem The ASF licenses this file to You under the Apache License, Version 2.0
rem (the "License"); you may not use this file except in compliance with
rem the License. You may obtain a copy of the License at
rem
rem http://www.apache.org/licenses/LICENSE-2.0
rem
rem Unless required by applicable law or agreed to in writing, software
rem distributed under the License is distributed on an "AS IS" BASIS,
rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
rem See the License for the specific language governing permissions and
rem limitations under the License.

"%~dp0kafka-run-class.bat" kafka.admin.LogDirsCommand %*
17
bin/windows/kafka-mirror-maker.bat
Normal file
@@ -0,0 +1,17 @@
@echo off
rem Licensed to the Apache Software Foundation (ASF) under one or more
rem contributor license agreements. See the NOTICE file distributed with
rem this work for additional information regarding copyright ownership.
rem The ASF licenses this file to You under the Apache License, Version 2.0
rem (the "License"); you may not use this file except in compliance with
rem the License. You may obtain a copy of the License at
rem
rem http://www.apache.org/licenses/LICENSE-2.0
rem
rem Unless required by applicable law or agreed to in writing, software
rem distributed under the License is distributed on an "AS IS" BASIS,
rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
rem See the License for the specific language governing permissions and
rem limitations under the License.

"%~dp0kafka-run-class.bat" kafka.tools.MirrorMaker %*
17
bin/windows/kafka-preferred-replica-election.bat
Normal file
@@ -0,0 +1,17 @@
@echo off
rem Licensed to the Apache Software Foundation (ASF) under one or more
rem contributor license agreements. See the NOTICE file distributed with
rem this work for additional information regarding copyright ownership.
rem The ASF licenses this file to You under the Apache License, Version 2.0
rem (the "License"); you may not use this file except in compliance with
rem the License. You may obtain a copy of the License at
rem
rem http://www.apache.org/licenses/LICENSE-2.0
rem
rem Unless required by applicable law or agreed to in writing, software
rem distributed under the License is distributed on an "AS IS" BASIS,
rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
rem See the License for the specific language governing permissions and
rem limitations under the License.

"%~dp0kafka-run-class.bat" kafka.admin.PreferredReplicaLeaderElectionCommand %*
20
bin/windows/kafka-producer-perf-test.bat
Normal file
@@ -0,0 +1,20 @@
@echo off
rem Licensed to the Apache Software Foundation (ASF) under one or more
rem contributor license agreements. See the NOTICE file distributed with
rem this work for additional information regarding copyright ownership.
rem The ASF licenses this file to You under the Apache License, Version 2.0
rem (the "License"); you may not use this file except in compliance with
rem the License. You may obtain a copy of the License at
rem
rem http://www.apache.org/licenses/LICENSE-2.0
rem
rem Unless required by applicable law or agreed to in writing, software
rem distributed under the License is distributed on an "AS IS" BASIS,
rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
rem See the License for the specific language governing permissions and
rem limitations under the License.

SetLocal
set KAFKA_HEAP_OPTS=-Xmx512M
"%~dp0kafka-run-class.bat" org.apache.kafka.tools.ProducerPerformance %*
EndLocal
17
bin/windows/kafka-reassign-partitions.bat
Normal file
@@ -0,0 +1,17 @@
@echo off
rem Licensed to the Apache Software Foundation (ASF) under one or more
rem contributor license agreements. See the NOTICE file distributed with
rem this work for additional information regarding copyright ownership.
rem The ASF licenses this file to You under the Apache License, Version 2.0
rem (the "License"); you may not use this file except in compliance with
rem the License. You may obtain a copy of the License at
rem
rem http://www.apache.org/licenses/LICENSE-2.0
rem
rem Unless required by applicable law or agreed to in writing, software
rem distributed under the License is distributed on an "AS IS" BASIS,
rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
rem See the License for the specific language governing permissions and
rem limitations under the License.

"%~dp0kafka-run-class.bat" kafka.admin.ReassignPartitionsCommand %*
17
bin/windows/kafka-replica-verification.bat
Normal file
@@ -0,0 +1,17 @@
@echo off
rem Licensed to the Apache Software Foundation (ASF) under one or more
rem contributor license agreements. See the NOTICE file distributed with
rem this work for additional information regarding copyright ownership.
rem The ASF licenses this file to You under the Apache License, Version 2.0
rem (the "License"); you may not use this file except in compliance with
rem the License. You may obtain a copy of the License at
rem
rem http://www.apache.org/licenses/LICENSE-2.0
rem
rem Unless required by applicable law or agreed to in writing, software
rem distributed under the License is distributed on an "AS IS" BASIS,
rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
rem See the License for the specific language governing permissions and
rem limitations under the License.

"%~dp0kafka-run-class.bat" kafka.tools.ReplicaVerificationTool %*
191
bin/windows/kafka-run-class.bat
Executable file
@@ -0,0 +1,191 @@
@echo off
rem Licensed to the Apache Software Foundation (ASF) under one or more
rem contributor license agreements. See the NOTICE file distributed with
rem this work for additional information regarding copyright ownership.
rem The ASF licenses this file to You under the Apache License, Version 2.0
rem (the "License"); you may not use this file except in compliance with
rem the License. You may obtain a copy of the License at
rem
rem http://www.apache.org/licenses/LICENSE-2.0
rem
rem Unless required by applicable law or agreed to in writing, software
rem distributed under the License is distributed on an "AS IS" BASIS,
rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
rem See the License for the specific language governing permissions and
rem limitations under the License.

setlocal enabledelayedexpansion

IF [%1] EQU [] (
    echo USAGE: %0 classname [opts]
    EXIT /B 1
)

rem Using pushd popd to set BASE_DIR to the absolute path
pushd %~dp0..\..
set BASE_DIR=%CD%
popd

IF ["%SCALA_VERSION%"] EQU [""] (
    set SCALA_VERSION=2.12.10
)

IF ["%SCALA_BINARY_VERSION%"] EQU [""] (
    for /f "tokens=1,2 delims=." %%a in ("%SCALA_VERSION%") do (
        set FIRST=%%a
        set SECOND=%%b
        if ["!SECOND!"] EQU [""] (
            set SCALA_BINARY_VERSION=!FIRST!
        ) else (
            set SCALA_BINARY_VERSION=!FIRST!.!SECOND!
        )
    )
)

rem Classpath addition for kafka-core dependencies
for %%i in ("%BASE_DIR%\core\build\dependant-libs-%SCALA_VERSION%\*.jar") do (
    call :concat "%%i"
)

rem Classpath addition for kafka-examples
for %%i in ("%BASE_DIR%\examples\build\libs\kafka-examples*.jar") do (
    call :concat "%%i"
)

rem Classpath addition for kafka-clients
for %%i in ("%BASE_DIR%\clients\build\libs\kafka-clients*.jar") do (
    call :concat "%%i"
)

rem Classpath addition for kafka-streams
for %%i in ("%BASE_DIR%\streams\build\libs\kafka-streams*.jar") do (
    call :concat "%%i"
)

rem Classpath addition for kafka-streams-examples
for %%i in ("%BASE_DIR%\streams\examples\build\libs\kafka-streams-examples*.jar") do (
    call :concat "%%i"
)

for %%i in ("%BASE_DIR%\streams\build\dependant-libs-%SCALA_VERSION%\rocksdb*.jar") do (
    call :concat "%%i"
)

rem Classpath addition for kafka tools
for %%i in ("%BASE_DIR%\tools\build\libs\kafka-tools*.jar") do (
    call :concat "%%i"
)

for %%i in ("%BASE_DIR%\tools\build\dependant-libs-%SCALA_VERSION%\*.jar") do (
    call :concat "%%i"
)

for %%p in (api runtime file json tools) do (
    for %%i in ("%BASE_DIR%\connect\%%p\build\libs\connect-%%p*.jar") do (
        call :concat "%%i"
    )
    if exist "%BASE_DIR%\connect\%%p\build\dependant-libs\*" (
        call :concat "%BASE_DIR%\connect\%%p\build\dependant-libs\*"
    )
)

rem Classpath addition for release
for %%i in ("%BASE_DIR%\libs\*") do (
    call :concat "%%i"
)

rem Classpath addition for core
for %%i in ("%BASE_DIR%\core\build\libs\kafka_%SCALA_BINARY_VERSION%*.jar") do (
    call :concat "%%i"
)

rem JMX settings
IF ["%KAFKA_JMX_OPTS%"] EQU [""] (
    set KAFKA_JMX_OPTS=-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false
)

rem JMX port to use
IF ["%JMX_PORT%"] NEQ [""] (
    set KAFKA_JMX_OPTS=%KAFKA_JMX_OPTS% -Dcom.sun.management.jmxremote.port=%JMX_PORT%
)

rem Log directory to use
IF ["%LOG_DIR%"] EQU [""] (
    set LOG_DIR=%BASE_DIR%/logs
)

rem Log4j settings
IF ["%KAFKA_LOG4J_OPTS%"] EQU [""] (
    set KAFKA_LOG4J_OPTS=-Dlog4j.configuration=file:%BASE_DIR%/config/tools-log4j.properties
) ELSE (
    rem create logs directory
    IF not exist "%LOG_DIR%" (
        mkdir "%LOG_DIR%"
    )
)

set KAFKA_LOG4J_OPTS=-Dkafka.logs.dir="%LOG_DIR%" "%KAFKA_LOG4J_OPTS%"

rem Generic jvm settings you want to add
IF ["%KAFKA_OPTS%"] EQU [""] (
    set KAFKA_OPTS=
)

set DEFAULT_JAVA_DEBUG_PORT=5005
set DEFAULT_DEBUG_SUSPEND_FLAG=n
rem Set Debug options if enabled
IF ["%KAFKA_DEBUG%"] NEQ [""] (

    IF ["%JAVA_DEBUG_PORT%"] EQU [""] (
        set JAVA_DEBUG_PORT=%DEFAULT_JAVA_DEBUG_PORT%
    )

    IF ["%DEBUG_SUSPEND_FLAG%"] EQU [""] (
        set DEBUG_SUSPEND_FLAG=%DEFAULT_DEBUG_SUSPEND_FLAG%
    )
    set DEFAULT_JAVA_DEBUG_OPTS=-agentlib:jdwp=transport=dt_socket,server=y,suspend=!DEBUG_SUSPEND_FLAG!,address=!JAVA_DEBUG_PORT!

    IF ["%JAVA_DEBUG_OPTS%"] EQU [""] (
        set JAVA_DEBUG_OPTS=!DEFAULT_JAVA_DEBUG_OPTS!
    )

    echo Enabling Java debug options: !JAVA_DEBUG_OPTS!
    set KAFKA_OPTS=!JAVA_DEBUG_OPTS! !KAFKA_OPTS!
)

rem Which java to use
IF ["%JAVA_HOME%"] EQU [""] (
    set JAVA=java
) ELSE (
    set JAVA="%JAVA_HOME%/bin/java"
)

rem Memory options
IF ["%KAFKA_HEAP_OPTS%"] EQU [""] (
    set KAFKA_HEAP_OPTS=-Xmx256M
)

rem JVM performance options
IF ["%KAFKA_JVM_PERFORMANCE_OPTS%"] EQU [""] (
    set KAFKA_JVM_PERFORMANCE_OPTS=-server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -Djava.awt.headless=true
)

IF not defined CLASSPATH (
    echo Classpath is empty. Please build the project first e.g. by running 'gradlew jarAll'
    EXIT /B 2
)

set COMMAND=%JAVA% %KAFKA_HEAP_OPTS% %KAFKA_JVM_PERFORMANCE_OPTS% %KAFKA_JMX_OPTS% %KAFKA_LOG4J_OPTS% -cp "%CLASSPATH%" %KAFKA_OPTS% %*
rem echo.
rem echo %COMMAND%
rem echo.
%COMMAND%

goto :eof
:concat
IF not defined CLASSPATH (
    set CLASSPATH="%~1"
) ELSE (
    set CLASSPATH=%CLASSPATH%;"%~1"
)

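The launcher above is the common entry point for every other Windows wrapper: the :concat subroutine appends each quoted jar path to CLASSPATH (semicolon-separated), and the assembled JMX, log4j, heap and debug options are passed to the requested class. As a rough illustration only (the class, port and heap value are placeholders, not part of the diff), it can also be exercised directly once the project has been built with 'gradlew jarAll':

    set JMX_PORT=9999
    set KAFKA_HEAP_OPTS=-Xmx512M
    bin\windows\kafka-run-class.bat kafka.admin.TopicCommand --help
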
38
bin/windows/kafka-server-start.bat
Normal file
@@ -0,0 +1,38 @@
@echo off
rem Licensed to the Apache Software Foundation (ASF) under one or more
rem contributor license agreements. See the NOTICE file distributed with
rem this work for additional information regarding copyright ownership.
rem The ASF licenses this file to You under the Apache License, Version 2.0
rem (the "License"); you may not use this file except in compliance with
rem the License. You may obtain a copy of the License at
rem
rem http://www.apache.org/licenses/LICENSE-2.0
rem
rem Unless required by applicable law or agreed to in writing, software
rem distributed under the License is distributed on an "AS IS" BASIS,
rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
rem See the License for the specific language governing permissions and
rem limitations under the License.

IF [%1] EQU [] (
    echo USAGE: %0 server.properties
    EXIT /B 1
)

SetLocal
IF ["%KAFKA_LOG4J_OPTS%"] EQU [""] (
    set KAFKA_LOG4J_OPTS=-Dlog4j.configuration=file:%~dp0../../config/log4j.properties
)
IF ["%KAFKA_HEAP_OPTS%"] EQU [""] (
    rem detect OS architecture
    wmic os get osarchitecture | find /i "32-bit" >nul 2>&1
    IF NOT ERRORLEVEL 1 (
        rem 32-bit OS
        set KAFKA_HEAP_OPTS=-Xmx512M -Xms512M
    ) ELSE (
        rem 64-bit OS
        set KAFKA_HEAP_OPTS=-Xmx1G -Xms1G
    )
)
"%~dp0kafka-run-class.bat" kafka.Kafka %*
EndLocal

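A minimal sketch of how the broker wrapper above is typically invoked on Windows; the properties path assumes the standard Kafka layout and is not part of the diff, and the heap default chosen by the wmic architecture check can be overridden beforehand:

    set KAFKA_HEAP_OPTS=-Xmx1G -Xms1G
    bin\windows\kafka-server-start.bat config\server.properties
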
18
bin/windows/kafka-server-stop.bat
Normal file
@@ -0,0 +1,18 @@
@echo off
rem Licensed to the Apache Software Foundation (ASF) under one or more
rem contributor license agreements. See the NOTICE file distributed with
rem this work for additional information regarding copyright ownership.
rem The ASF licenses this file to You under the Apache License, Version 2.0
rem (the "License"); you may not use this file except in compliance with
rem the License. You may obtain a copy of the License at
rem
rem http://www.apache.org/licenses/LICENSE-2.0
rem
rem Unless required by applicable law or agreed to in writing, software
rem distributed under the License is distributed on an "AS IS" BASIS,
rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
rem See the License for the specific language governing permissions and
rem limitations under the License.

wmic process where (commandline like "%%kafka.Kafka%%" and not name="wmic.exe") delete
rem ps ax | grep -i 'kafka.Kafka' | grep -v grep | awk '{print $1}' | xargs kill -SIGTERM

23
bin/windows/kafka-streams-application-reset.bat
Normal file
@@ -0,0 +1,23 @@
@echo off
rem Licensed to the Apache Software Foundation (ASF) under one or more
rem contributor license agreements. See the NOTICE file distributed with
rem this work for additional information regarding copyright ownership.
rem The ASF licenses this file to You under the Apache License, Version 2.0
rem (the "License"); you may not use this file except in compliance with
rem the License. You may obtain a copy of the License at
rem
rem http://www.apache.org/licenses/LICENSE-2.0
rem
rem Unless required by applicable law or agreed to in writing, software
rem distributed under the License is distributed on an "AS IS" BASIS,
rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
rem See the License for the specific language governing permissions and
rem limitations under the License.

SetLocal
IF ["%KAFKA_HEAP_OPTS%"] EQU [""] (
    set KAFKA_HEAP_OPTS=-Xmx512M
)

"%~dp0kafka-run-class.bat" kafka.tools.StreamsResetter %*
EndLocal

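For reference, the reset tool wrapped above is normally pointed at a Streams application id and a broker. The flag names below belong to kafka.tools.StreamsResetter; the concrete application id, broker address and topic are placeholders, not part of the diff:

    bin\windows\kafka-streams-application-reset.bat --application-id my-streams-app --bootstrap-servers localhost:9092 --input-topics my-input-topic
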
17
bin/windows/kafka-topics.bat
Normal file
@@ -0,0 +1,17 @@
@echo off
rem Licensed to the Apache Software Foundation (ASF) under one or more
rem contributor license agreements. See the NOTICE file distributed with
rem this work for additional information regarding copyright ownership.
rem The ASF licenses this file to You under the Apache License, Version 2.0
rem (the "License"); you may not use this file except in compliance with
rem the License. You may obtain a copy of the License at
rem
rem http://www.apache.org/licenses/LICENSE-2.0
rem
rem Unless required by applicable law or agreed to in writing, software
rem distributed under the License is distributed on an "AS IS" BASIS,
rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
rem See the License for the specific language governing permissions and
rem limitations under the License.

"%~dp0kafka-run-class.bat" kafka.admin.TopicCommand %*

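A typical invocation of the topic wrapper above; the broker address is a placeholder and the exact set of TopicCommand options depends on the Kafka version being built:

    bin\windows\kafka-topics.bat --bootstrap-server localhost:9092 --list
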
30
bin/windows/zookeeper-server-start.bat
Normal file
@@ -0,0 +1,30 @@
@echo off
rem Licensed to the Apache Software Foundation (ASF) under one or more
rem contributor license agreements. See the NOTICE file distributed with
rem this work for additional information regarding copyright ownership.
rem The ASF licenses this file to You under the Apache License, Version 2.0
rem (the "License"); you may not use this file except in compliance with
rem the License. You may obtain a copy of the License at
rem
rem http://www.apache.org/licenses/LICENSE-2.0
rem
rem Unless required by applicable law or agreed to in writing, software
rem distributed under the License is distributed on an "AS IS" BASIS,
rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
rem See the License for the specific language governing permissions and
rem limitations under the License.

IF [%1] EQU [] (
    echo USAGE: %0 zookeeper.properties
    EXIT /B 1
)

SetLocal
IF ["%KAFKA_LOG4J_OPTS%"] EQU [""] (
    set KAFKA_LOG4J_OPTS=-Dlog4j.configuration=file:%~dp0../../config/log4j.properties
)
IF ["%KAFKA_HEAP_OPTS%"] EQU [""] (
    set KAFKA_HEAP_OPTS=-Xmx512M -Xms512M
)
"%~dp0kafka-run-class.bat" org.apache.zookeeper.server.quorum.QuorumPeerMain %*
EndLocal

17
bin/windows/zookeeper-server-stop.bat
Normal file
@@ -0,0 +1,17 @@
@echo off
rem Licensed to the Apache Software Foundation (ASF) under one or more
rem contributor license agreements. See the NOTICE file distributed with
rem this work for additional information regarding copyright ownership.
rem The ASF licenses this file to You under the Apache License, Version 2.0
rem (the "License"); you may not use this file except in compliance with
rem the License. You may obtain a copy of the License at
rem
rem http://www.apache.org/licenses/LICENSE-2.0
rem
rem Unless required by applicable law or agreed to in writing, software
rem distributed under the License is distributed on an "AS IS" BASIS,
rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
rem See the License for the specific language governing permissions and
rem limitations under the License.

wmic process where (commandline like "%%zookeeper%%" and not name="wmic.exe") delete

22
bin/windows/zookeeper-shell.bat
Normal file
@@ -0,0 +1,22 @@
@echo off
rem Licensed to the Apache Software Foundation (ASF) under one or more
rem contributor license agreements. See the NOTICE file distributed with
rem this work for additional information regarding copyright ownership.
rem The ASF licenses this file to You under the Apache License, Version 2.0
rem (the "License"); you may not use this file except in compliance with
rem the License. You may obtain a copy of the License at
rem
rem http://www.apache.org/licenses/LICENSE-2.0
rem
rem Unless required by applicable law or agreed to in writing, software
rem distributed under the License is distributed on an "AS IS" BASIS,
rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
rem See the License for the specific language governing permissions and
rem limitations under the License.

IF [%1] EQU [] (
    echo USAGE: %0 zookeeper_host:port[/path] [-zk-tls-config-file file] [args...]
    EXIT /B 1
)

"%~dp0kafka-run-class.bat" org.apache.zookeeper.ZooKeeperMainWithTlsSupportForKafka -server %*

17
bin/zookeeper-security-migration.sh
Executable file
@@ -0,0 +1,17 @@
#!/bin/bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

exec $(dirname $0)/kafka-run-class.sh kafka.admin.ZkSecurityMigrator "$@"

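As a hedged example, kafka.admin.ZkSecurityMigrator wrapped above is usually run against an existing ZooKeeper ensemble to switch znode ACLs; the flag names below are the tool's documented options and the connect string is a placeholder, not part of the diff:

    bin/zookeeper-security-migration.sh --zookeeper.acl=secure --zookeeper.connect=localhost:2181
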
44
bin/zookeeper-server-start.sh
Executable file
@@ -0,0 +1,44 @@
#!/bin/bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

if [ $# -lt 1 ];
then
    echo "USAGE: $0 [-daemon] zookeeper.properties"
    exit 1
fi
base_dir=$(dirname $0)

if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then
    export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/log4j.properties"
fi

if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-Xmx512M -Xms512M"
fi

EXTRA_ARGS=${EXTRA_ARGS-'-name zookeeper -loggc'}

COMMAND=$1
case $COMMAND in
  -daemon)
    EXTRA_ARGS="-daemon "$EXTRA_ARGS
    shift
    ;;
  *)
    ;;
esac

exec $base_dir/kafka-run-class.sh $EXTRA_ARGS org.apache.zookeeper.server.quorum.QuorumPeerMain "$@"

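A minimal usage sketch for the start script above, exercising the -daemon branch and overriding the default heap; the properties path assumes the stock Kafka config directory and is not part of the diff:

    KAFKA_HEAP_OPTS="-Xmx1G -Xms1G" bin/zookeeper-server-start.sh -daemon config/zookeeper.properties
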
24
bin/zookeeper-server-stop.sh
Executable file
@@ -0,0 +1,24 @@
#!/bin/sh
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
SIGNAL=${SIGNAL:-TERM}
PIDS=$(ps ax | grep java | grep -i QuorumPeerMain | grep -v grep | awk '{print $1}')

if [ -z "$PIDS" ]; then
  echo "No zookeeper server to stop"
  exit 1
else
  kill -s $SIGNAL $PIDS
fi

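Because the stop script above honours SIGNAL=${SIGNAL:-TERM}, a harder stop can be requested by supplying a different signal name on the command line, for example:

    SIGNAL=KILL bin/zookeeper-server-stop.sh
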
23
bin/zookeeper-shell.sh
Executable file
@@ -0,0 +1,23 @@
#!/bin/sh
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

if [ $# -lt 1 ];
then
    echo "USAGE: $0 zookeeper_host:port[/path] [-zk-tls-config-file file] [args...]"
    exit 1
fi

exec $(dirname $0)/kafka-run-class.sh org.apache.kafka.ZooKeeperMainWithTlsSupportForKafka -server "$@"

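An illustrative session with the shell wrapper above; the host, optional chroot path and znode are placeholders, and any trailing arguments are passed straight through to the ZooKeeper shell:

    bin/zookeeper-shell.sh localhost:2181 ls /brokers/ids
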
1201
build.gradle
Normal file
File diff suppressed because it is too large
20
checkstyle/.scalafmt.conf
Normal file
@@ -0,0 +1,20 @@
// Licensed to the Apache Software Foundation (ASF) under one or more
// contributor license agreements. See the NOTICE file distributed with
// this work for additional information regarding copyright ownership.
// The ASF licenses this file to You under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance with
// the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
docstrings = JavaDoc
maxColumn = 120
continuationIndent.defnSite = 2
assumeStandardLibraryStripMargin = true
danglingParentheses = true
rewrite.rules = [SortImports, RedundantBraces, RedundantParens, SortModifiers]

142
checkstyle/checkstyle.xml
Normal file
@@ -0,0 +1,142 @@
|
||||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<!DOCTYPE module PUBLIC
|
||||
"-//Puppy Crawl//DTD Check Configuration 1.3//EN"
|
||||
"http://www.puppycrawl.com/dtds/configuration_1_3.dtd">
|
||||
<!--
|
||||
// Licensed to the Apache Software Foundation (ASF) under one or more
|
||||
// contributor license agreements. See the NOTICE file distributed with
|
||||
// this work for additional information regarding copyright ownership.
|
||||
// The ASF licenses this file to You under the Apache License, Version 2.0
|
||||
// (the "License"); you may not use this file except in compliance with
|
||||
// the License. You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
-->
|
||||
<module name="Checker">
|
||||
<property name="localeLanguage" value="en"/>
|
||||
|
||||
<module name="FileTabCharacter"/>
|
||||
|
||||
<!-- header -->
|
||||
<module name="Header">
|
||||
<property name="headerFile" value="${headerFile}" />
|
||||
</module>
|
||||
|
||||
<module name="TreeWalker">
|
||||
|
||||
<!-- code cleanup -->
|
||||
<module name="UnusedImports">
|
||||
<property name="processJavadoc" value="true" />
|
||||
</module>
|
||||
<module name="RedundantImport"/>
|
||||
<module name="IllegalImport" />
|
||||
<module name="EqualsHashCode"/>
|
||||
<module name="SimplifyBooleanExpression"/>
|
||||
<module name="OneStatementPerLine"/>
|
||||
<module name="UnnecessaryParentheses" />
|
||||
<module name="SimplifyBooleanReturn"/>
|
||||
|
||||
<!-- style -->
|
||||
<module name="DefaultComesLast"/>
|
||||
<module name="EmptyStatement"/>
|
||||
<module name="ArrayTypeStyle"/>
|
||||
<module name="UpperEll"/>
|
||||
<module name="LeftCurly"/>
|
||||
<module name="RightCurly"/>
|
||||
<module name="EmptyStatement"/>
|
||||
<module name="ConstantName">
|
||||
<property name="format" value="(^[A-Z][A-Z0-9]*(_[A-Z0-9]+)*$)|(^log$)"/>
|
||||
</module>
|
||||
<module name="LocalVariableName"/>
|
||||
<module name="LocalFinalVariableName"/>
|
||||
<module name="MemberName"/>
|
||||
<module name="ClassTypeParameterName">
|
||||
<property name="format" value="^[A-Z][a-zA-Z0-9]*$$"/>
|
||||
</module>
|
||||
<module name="MethodTypeParameterName">
|
||||
<property name="format" value="^[A-Z][a-zA-Z0-9]*$$"/>
|
||||
</module>
|
||||
<module name="InterfaceTypeParameterName">
|
||||
<property name="format" value="^[A-Z][a-zA-Z0-9]*$$"/>
|
||||
</module>
|
||||
<module name="PackageName"/>
|
||||
<module name="ParameterName"/>
|
||||
<module name="StaticVariableName"/>
|
||||
<module name="TypeName"/>
|
||||
<module name="AvoidStarImport"/>
|
||||
|
||||
<!-- variables that can be final should be final (suppressed except for Streams) -->
|
||||
<module name="FinalLocalVariable">
|
||||
<property name="tokens" value="VARIABLE_DEF,PARAMETER_DEF"/>
|
||||
<property name="validateEnhancedForLoopVariable" value="true"/>
|
||||
</module>
|
||||
|
||||
<!-- dependencies -->
|
||||
<module name="ImportControl">
|
||||
<property name="file" value="${importControlFile}"/>
|
||||
</module>
|
||||
|
||||
<!-- whitespace -->
|
||||
<module name="GenericWhitespace"/>
|
||||
<module name="NoWhitespaceBefore"/>
|
||||
<module name="WhitespaceAfter" />
|
||||
<module name="NoWhitespaceAfter"/>
|
||||
<module name="WhitespaceAround">
|
||||
<property name="allowEmptyConstructors" value="true"/>
|
||||
<property name="allowEmptyMethods" value="true"/>
|
||||
</module>
|
||||
<module name="Indentation"/>
|
||||
<module name="MethodParamPad"/>
|
||||
<module name="ParenPad"/>
|
||||
<module name="TypecastParenPad"/>
|
||||
|
||||
<!-- locale-sensitive methods should specify locale -->
|
||||
<module name="Regexp">
|
||||
<property name="format" value="\.to(Lower|Upper)Case\(\)"/>
|
||||
<property name="illegalPattern" value="true"/>
|
||||
<property name="ignoreComments" value="true"/>
|
||||
</module>
|
||||
|
||||
<!-- code quality -->
|
||||
<module name="MethodLength"/>
|
||||
<module name="ParameterNumber">
|
||||
<!-- default is 8 -->
|
||||
<property name="max" value="13"/>
|
||||
</module>
|
||||
<module name="ClassDataAbstractionCoupling">
|
||||
<!-- default is 7 -->
|
||||
<property name="max" value="25"/>
|
||||
</module>
|
||||
<module name="BooleanExpressionComplexity">
|
||||
<!-- default is 3 -->
|
||||
<property name="max" value="5"/>
|
||||
</module>
|
||||
|
||||
<module name="ClassFanOutComplexity">
|
||||
<!-- default is 20 -->
|
||||
<property name="max" value="50"/>
|
||||
</module>
|
||||
<module name="CyclomaticComplexity">
|
||||
<!-- default is 10-->
|
||||
<property name="max" value="16"/>
|
||||
</module>
|
||||
<module name="JavaNCSS">
|
||||
<!-- default is 50 -->
|
||||
<property name="methodMaximum" value="100"/>
|
||||
</module>
|
||||
<module name="NPathComplexity">
|
||||
<!-- default is 200 -->
|
||||
<property name="max" value="500"/>
|
||||
</module>
|
||||
</module>
|
||||
|
||||
<module name="SuppressionFilter">
|
||||
<property name="file" value="${suppressionsFile}"/>
|
||||
</module>
|
||||
</module>
|
||||
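The checkstyle.xml above is parameterised through ${headerFile}, ${importControlFile} and ${suppressionsFile}, which the build is expected to supply (see checkstyle/java.header, the import-control files and suppressions.xml below). Assuming the standard Gradle Checkstyle plugin tasks are wired up in build.gradle (an assumption, since that diff is suppressed above), the checks would typically be run with something like:

    ./gradlew checkstyleMain checkstyleTest
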
57
checkstyle/import-control-core.xml
Normal file
@@ -0,0 +1,57 @@
|
||||
<!DOCTYPE import-control PUBLIC
|
||||
"-//Puppy Crawl//DTD Import Control 1.1//EN"
|
||||
"http://www.puppycrawl.com/dtds/import_control_1_1.dtd">
|
||||
<!--
|
||||
// Licensed to the Apache Software Foundation (ASF) under one or more
|
||||
// contributor license agreements. See the NOTICE file distributed with
|
||||
// this work for additional information regarding copyright ownership.
|
||||
// The ASF licenses this file to You under the Apache License, Version 2.0
|
||||
// (the "License"); you may not use this file except in compliance with
|
||||
// the License. You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
-->
|
||||
|
||||
<import-control pkg="kafka">
|
||||
|
||||
<!-- THINK HARD ABOUT THE LAYERING OF THE PROJECT BEFORE CHANGING THIS FILE -->
|
||||
|
||||
<!-- common library dependencies -->
|
||||
<allow pkg="java" />
|
||||
<allow pkg="scala" />
|
||||
<allow pkg="javax.management" />
|
||||
<allow pkg="org.slf4j" />
|
||||
<allow pkg="org.junit" />
|
||||
<allow pkg="org.easymock" />
|
||||
<allow pkg="java.security" />
|
||||
<allow pkg="javax.net.ssl" />
|
||||
<allow pkg="javax.security" />
|
||||
<allow pkg="com.didichuxing.datachannel.kafka" />
|
||||
|
||||
<allow pkg="kafka.common" />
|
||||
<allow pkg="kafka.utils" />
|
||||
<allow pkg="kafka.serializer" />
|
||||
<allow pkg="org.apache.kafka.common" />
|
||||
|
||||
<subpackage name="tools">
|
||||
<allow pkg="org.apache.kafka.clients.admin" />
|
||||
<allow pkg="kafka.admin" />
|
||||
<allow pkg="joptsimple" />
|
||||
<allow pkg="org.apache.kafka.clients.consumer" />
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="coordinator">
|
||||
<allow class="kafka.server.MetadataCache" />
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="examples">
|
||||
<allow pkg="org.apache.kafka.clients" />
|
||||
</subpackage>
|
||||
|
||||
</import-control>
|
||||
47
checkstyle/import-control-jmh-benchmarks.xml
Normal file
@@ -0,0 +1,47 @@
|
||||
<!DOCTYPE import-control PUBLIC
|
||||
"-//Puppy Crawl//DTD Import Control 1.1//EN"
|
||||
"http://www.puppycrawl.com/dtds/import_control_1_1.dtd">
|
||||
<!--
|
||||
// Licensed to the Apache Software Foundation (ASF) under one or more
|
||||
// contributor license agreements. See the NOTICE file distributed with
|
||||
// this work for additional information regarding copyright ownership.
|
||||
// The ASF licenses this file to You under the Apache License, Version 2.0
|
||||
// (the "License"); you may not use this file except in compliance with
|
||||
// the License. You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
-->
|
||||
|
||||
<import-control pkg="org.apache.kafka.jmh">
|
||||
|
||||
<allow pkg="java"/>
|
||||
<allow pkg="scala"/>
|
||||
<allow pkg="javax.management"/>
|
||||
<allow pkg="org.slf4j"/>
|
||||
<allow pkg="org.openjdk.jmh.annotations"/>
|
||||
<allow pkg="org.openjdk.jmh.runner"/>
|
||||
<allow pkg="org.openjdk.jmh.infra"/>
|
||||
<allow pkg="java.security"/>
|
||||
<allow pkg="javax.net.ssl"/>
|
||||
<allow pkg="javax.security"/>
|
||||
<allow pkg="org.apache.kafka.common"/>
|
||||
<allow pkg="org.apache.kafka.clients.producer"/>
|
||||
<allow pkg="kafka.cluster"/>
|
||||
<allow pkg="kafka.log"/>
|
||||
<allow pkg="kafka.server"/>
|
||||
<allow pkg="kafka.api"/>
|
||||
<allow class="kafka.utils.Pool"/>
|
||||
<allow class="kafka.utils.KafkaScheduler"/>
|
||||
<allow class="org.apache.kafka.clients.FetchSessionHandler"/>
|
||||
<allow pkg="org.mockito"/>
|
||||
|
||||
|
||||
<subpackage name="cache">
|
||||
</subpackage>
|
||||
</import-control>
|
||||
457
checkstyle/import-control.xml
Normal file
@@ -0,0 +1,457 @@
|
||||
<!DOCTYPE import-control PUBLIC
|
||||
"-//Puppy Crawl//DTD Import Control 1.1//EN"
|
||||
"http://www.puppycrawl.com/dtds/import_control_1_1.dtd">
|
||||
<!--
|
||||
// Licensed to the Apache Software Foundation (ASF) under one or more
|
||||
// contributor license agreements. See the NOTICE file distributed with
|
||||
// this work for additional information regarding copyright ownership.
|
||||
// The ASF licenses this file to You under the Apache License, Version 2.0
|
||||
// (the "License"); you may not use this file except in compliance with
|
||||
// the License. You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
-->
|
||||
|
||||
<import-control pkg="org.apache.kafka">
|
||||
|
||||
<!-- THINK HARD ABOUT THE LAYERING OF THE PROJECT BEFORE CHANGING THIS FILE -->
|
||||
|
||||
<!-- common library dependencies -->
|
||||
<allow pkg="java" />
|
||||
<allow pkg="javax.management" />
|
||||
<allow pkg="org.slf4j" />
|
||||
<allow pkg="org.junit" />
|
||||
<allow pkg="org.hamcrest" />
|
||||
<allow pkg="org.mockito" />
|
||||
<allow pkg="org.easymock" />
|
||||
<allow pkg="org.powermock" />
|
||||
<allow pkg="java.security" />
|
||||
<allow pkg="javax.net.ssl" />
|
||||
<allow pkg="javax.security" />
|
||||
<allow pkg="org.ietf.jgss" />
|
||||
|
||||
<!-- no one depends on the server -->
|
||||
<disallow pkg="kafka" />
|
||||
|
||||
<!-- anyone can use public classes -->
|
||||
<allow pkg="org.apache.kafka.common" exact-match="true" />
|
||||
<allow pkg="org.apache.kafka.common.security" />
|
||||
<allow pkg="org.apache.kafka.common.serialization" />
|
||||
<allow pkg="org.apache.kafka.common.utils" />
|
||||
<allow pkg="org.apache.kafka.common.errors" exact-match="true" />
|
||||
<allow pkg="org.apache.kafka.common.memory" />
|
||||
|
||||
<subpackage name="common">
|
||||
<disallow pkg="org.apache.kafka.clients" />
|
||||
<allow pkg="org.apache.kafka.common" exact-match="true" />
|
||||
<allow pkg="org.apache.kafka.common.annotation" />
|
||||
<allow pkg="org.apache.kafka.common.config" exact-match="true" />
|
||||
<allow pkg="org.apache.kafka.common.internals" exact-match="true" />
|
||||
<allow pkg="org.apache.kafka.test" />
|
||||
|
||||
<subpackage name="acl">
|
||||
<allow pkg="org.apache.kafka.common.annotation" />
|
||||
<allow pkg="org.apache.kafka.common.acl" />
|
||||
<allow pkg="org.apache.kafka.common.resource" />
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="config">
|
||||
<allow pkg="org.apache.kafka.common.config" />
|
||||
<!-- for testing -->
|
||||
<allow pkg="org.apache.kafka.common.metrics" />
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="message">
|
||||
<allow pkg="com.fasterxml.jackson" />
|
||||
<allow pkg="org.apache.kafka.common.protocol" />
|
||||
<allow pkg="org.apache.kafka.common.protocol.types" />
|
||||
<allow pkg="org.apache.kafka.common.message" />
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="metrics">
|
||||
<allow pkg="org.apache.kafka.common.metrics" />
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="memory">
|
||||
<allow pkg="org.apache.kafka.common.metrics" />
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="network">
|
||||
<allow pkg="org.apache.kafka.common.security.auth" />
|
||||
<allow pkg="org.apache.kafka.common.protocol" />
|
||||
<allow pkg="org.apache.kafka.common.config" />
|
||||
<allow pkg="org.apache.kafka.common.metrics" />
|
||||
<allow pkg="org.apache.kafka.common.security" />
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="resource">
|
||||
<allow pkg="org.apache.kafka.common.annotation" />
|
||||
<allow pkg="org.apache.kafka.common.resource" />
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="security">
|
||||
<allow pkg="org.apache.kafka.common.annotation" />
|
||||
<allow pkg="org.apache.kafka.common.network" />
|
||||
<allow pkg="org.apache.kafka.common.config" />
|
||||
<allow pkg="org.apache.kafka.common.protocol" />
|
||||
<allow pkg="org.apache.kafka.common.errors" />
|
||||
<subpackage name="authenticator">
|
||||
<allow pkg="org.apache.kafka.common.message" />
|
||||
<allow pkg="org.apache.kafka.common.protocol.types" />
|
||||
<allow pkg="org.apache.kafka.common.requests" />
|
||||
<allow pkg="org.apache.kafka.clients" />
|
||||
</subpackage>
|
||||
<subpackage name="scram">
|
||||
<allow pkg="javax.crypto" />
|
||||
</subpackage>
|
||||
<subpackage name="oauthbearer">
|
||||
<allow pkg="com.fasterxml.jackson.databind" />
|
||||
</subpackage>
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="protocol">
|
||||
<allow pkg="org.apache.kafka.common.errors" />
|
||||
<allow pkg="org.apache.kafka.common.message" />
|
||||
<allow pkg="org.apache.kafka.common.protocol" />
|
||||
<allow pkg="org.apache.kafka.common.protocol.types" />
|
||||
<allow pkg="org.apache.kafka.common.record" />
|
||||
<allow pkg="org.apache.kafka.common.requests" />
|
||||
<allow pkg="org.apache.kafka.common.resource" />
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="record">
|
||||
<allow pkg="net.jpountz" />
|
||||
<allow pkg="org.apache.kafka.common.header" />
|
||||
<allow pkg="org.apache.kafka.common.record" />
|
||||
<allow pkg="org.apache.kafka.common.network" />
|
||||
<allow pkg="org.apache.kafka.common.protocol" />
|
||||
<allow pkg="org.apache.kafka.common.protocol.types" />
|
||||
<allow pkg="org.apache.kafka.common.errors" />
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="header">
|
||||
<allow pkg="org.apache.kafka.common.header" />
|
||||
<allow pkg="org.apache.kafka.common.record" />
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="requests">
|
||||
<allow pkg="org.apache.kafka.common.acl" />
|
||||
<allow pkg="org.apache.kafka.common.protocol" />
|
||||
<allow pkg="org.apache.kafka.common.message" />
|
||||
<allow pkg="org.apache.kafka.common.network" />
|
||||
<allow pkg="org.apache.kafka.common.requests" />
|
||||
<allow pkg="org.apache.kafka.common.resource" />
|
||||
<allow pkg="org.apache.kafka.common.record" />
|
||||
<!-- for AuthorizableRequestContext interface -->
|
||||
<allow pkg="org.apache.kafka.server.authorizer" />
|
||||
<!-- for testing -->
|
||||
<allow pkg="org.apache.kafka.common.errors" />
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="serialization">
|
||||
<allow class="org.apache.kafka.common.errors.SerializationException" />
|
||||
<allow class="org.apache.kafka.common.header.Headers" />
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="utils">
|
||||
<allow pkg="org.apache.kafka.common" />
|
||||
</subpackage>
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="clients">
|
||||
<allow pkg="org.slf4j" />
|
||||
<allow pkg="org.apache.kafka.common" />
|
||||
<allow pkg="org.apache.kafka.clients" exact-match="true"/>
|
||||
<allow pkg="org.apache.kafka.test" />
|
||||
|
||||
<subpackage name="consumer">
|
||||
<allow pkg="org.apache.kafka.clients.consumer" />
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="producer">
|
||||
<allow pkg="org.apache.kafka.clients.consumer" />
|
||||
<allow pkg="org.apache.kafka.clients.producer" />
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="admin">
|
||||
<allow pkg="org.apache.kafka.clients.admin" />
|
||||
<allow pkg="org.apache.kafka.clients.consumer.internals" />
|
||||
<allow pkg="org.apache.kafka.clients.consumer" />
|
||||
</subpackage>
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="server">
|
||||
<allow pkg="org.slf4j" />
|
||||
<allow pkg="org.apache.kafka.common" />
|
||||
<allow pkg="org.apache.kafka.test" />
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="tools">
|
||||
<allow pkg="org.apache.kafka.common"/>
|
||||
<allow pkg="org.apache.kafka.clients.admin" />
|
||||
<allow pkg="org.apache.kafka.clients.producer" />
|
||||
<allow pkg="org.apache.kafka.clients.consumer" />
|
||||
<allow pkg="com.fasterxml.jackson" />
|
||||
<allow pkg="net.sourceforge.argparse4j" />
|
||||
<allow pkg="org.apache.log4j" />
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="trogdor">
|
||||
<allow pkg="com.fasterxml.jackson" />
|
||||
<allow pkg="javax.servlet" />
|
||||
<allow pkg="javax.ws.rs" />
|
||||
<allow pkg="net.sourceforge.argparse4j" />
|
||||
<allow pkg="org.apache.kafka.clients" />
|
||||
<allow pkg="org.apache.kafka.clients.admin" />
|
||||
<allow pkg="org.apache.kafka.clients.consumer" exact-match="true"/>
|
||||
<allow pkg="org.apache.kafka.clients.producer" exact-match="true"/>
|
||||
<allow pkg="org.apache.kafka.common" />
|
||||
<allow pkg="org.apache.kafka.test"/>
|
||||
<allow pkg="org.apache.kafka.trogdor" />
|
||||
<allow pkg="org.apache.log4j" />
|
||||
<allow pkg="org.eclipse.jetty" />
|
||||
<allow pkg="org.glassfish.jersey" />
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="message">
|
||||
<allow pkg="com.fasterxml.jackson" />
|
||||
<allow pkg="com.fasterxml.jackson.annotation" />
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="streams">
|
||||
<allow pkg="org.apache.kafka.common"/>
|
||||
<allow pkg="org.apache.kafka.test"/>
|
||||
<allow pkg="org.apache.kafka.clients"/>
|
||||
<allow pkg="org.apache.kafka.clients.producer" exact-match="true"/>
|
||||
<allow pkg="org.apache.kafka.clients.consumer" exact-match="true"/>
|
||||
|
||||
<allow pkg="org.apache.kafka.streams"/>
|
||||
|
||||
<subpackage name="examples">
|
||||
<allow pkg="com.fasterxml.jackson" />
|
||||
<allow pkg="org.apache.kafka.connect.json" />
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="perf">
|
||||
<allow pkg="com.fasterxml.jackson.databind" />
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="integration">
|
||||
<allow pkg="kafka.admin" />
|
||||
<allow pkg="kafka.api" />
|
||||
<allow pkg="kafka.server" />
|
||||
<allow pkg="kafka.tools" />
|
||||
<allow pkg="kafka.utils" />
|
||||
<allow pkg="kafka.log" />
|
||||
<allow pkg="scala" />
|
||||
<allow class="kafka.zk.EmbeddedZookeeper"/>
|
||||
<allow pkg="com.fasterxml.jackson" />
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="test">
|
||||
<allow pkg="kafka.admin" />
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="tools">
|
||||
<allow pkg="kafka.tools" />
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="state">
|
||||
<allow pkg="org.rocksdb" />
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="processor">
|
||||
<subpackage name="internals">
|
||||
<allow pkg="com.fasterxml.jackson" />
|
||||
<allow pkg="kafka.utils" />
|
||||
<allow pkg="org.apache.zookeeper" />
|
||||
<allow pkg="org.apache.zookeeper" />
|
||||
<allow pkg="org.apache.log4j" />
|
||||
<subpackage name="testutil">
|
||||
<allow pkg="org.apache.log4j" />
|
||||
</subpackage>
|
||||
</subpackage>
|
||||
</subpackage>
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="jmh">
|
||||
<allow pkg="org.openjdk.jmh.annotations" />
|
||||
<allow pkg="org.openjdk.jmh.runner" />
|
||||
<allow pkg="org.openjdk.jmh.runner.options" />
|
||||
<allow pkg="org.openjdk.jmh.infra" />
|
||||
<allow pkg="org.apache.kafka.common" />
|
||||
<allow pkg="org.apache.kafka.clients" />
|
||||
<allow pkg="org.apache.kafka.streams" />
|
||||
<allow pkg="org.github.jamm" />
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="log4jappender">
|
||||
<allow pkg="org.apache.log4j" />
|
||||
<allow pkg="org.apache.kafka.clients" />
|
||||
<allow pkg="org.apache.kafka.common" />
|
||||
<allow pkg="org.apache.kafka.test" />
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="test">
|
||||
<allow pkg="org.apache.kafka" />
|
||||
<allow pkg="org.bouncycastle" />
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="connect">
|
||||
<allow pkg="org.apache.kafka.common" />
|
||||
<allow pkg="org.apache.kafka.connect.data" />
|
||||
<allow pkg="org.apache.kafka.connect.errors" />
|
||||
<allow pkg="org.apache.kafka.connect.header" />
|
||||
<allow pkg="org.apache.kafka.connect.components"/>
|
||||
<allow pkg="org.apache.kafka.clients" />
|
||||
<allow pkg="org.apache.kafka.test"/>
|
||||
|
||||
<subpackage name="source">
|
||||
<allow pkg="org.apache.kafka.connect.connector" />
|
||||
<allow pkg="org.apache.kafka.connect.storage" />
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="sink">
|
||||
<allow pkg="org.apache.kafka.clients.consumer" />
|
||||
<allow pkg="org.apache.kafka.connect.connector" />
|
||||
<allow pkg="org.apache.kafka.connect.storage" />
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="converters">
|
||||
<allow pkg="org.apache.kafka.connect.storage" />
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="connector.policy">
|
||||
<allow pkg="org.apache.kafka.connect.health" />
|
||||
<allow pkg="org.apache.kafka.connect.connector" />
|
||||
<!-- for testing -->
|
||||
<allow pkg="org.apache.kafka.connect.runtime" />
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="rest">
|
||||
<allow pkg="org.apache.kafka.connect.health" />
|
||||
<allow pkg="javax.ws.rs" />
|
||||
<allow pkg= "javax.security.auth"/>
|
||||
<subpackage name="basic">
|
||||
<allow pkg="org.apache.kafka.connect.rest"/>
|
||||
</subpackage>
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="mirror">
|
||||
<allow pkg="org.apache.kafka.clients.consumer" />
|
||||
<allow pkg="org.apache.kafka.connect.source" />
|
||||
<allow pkg="org.apache.kafka.connect.sink" />
|
||||
<allow pkg="org.apache.kafka.connect.storage" />
|
||||
<allow pkg="org.apache.kafka.connect.connector" />
|
||||
<allow pkg="org.apache.kafka.connect.runtime" />
|
||||
<allow pkg="org.apache.kafka.connect.runtime.distributed" />
|
||||
<allow pkg="org.apache.kafka.connect.util" />
|
||||
<allow pkg="org.apache.kafka.connect.converters" />
|
||||
<allow pkg="net.sourceforge.argparse4j" />
|
||||
<!-- for tests -->
|
||||
<allow pkg="org.apache.kafka.connect.integration" />
|
||||
<allow pkg="org.apache.kafka.connect.mirror" />
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="runtime">
|
||||
<allow pkg="org.apache.kafka.connect" />
|
||||
<allow pkg="org.reflections"/>
|
||||
<allow pkg="org.reflections.util"/>
|
||||
<allow pkg="javax.crypto"/>
|
||||
|
||||
<subpackage name="rest">
|
||||
<allow pkg="org.eclipse.jetty" />
|
||||
<allow pkg="javax.ws.rs" />
|
||||
<allow pkg="javax.servlet" />
|
||||
<allow pkg="org.glassfish.jersey" />
|
||||
<allow pkg="com.fasterxml.jackson" />
|
||||
<allow pkg="org.apache.http"/>
|
||||
<subpackage name="resources">
|
||||
<allow pkg="org.apache.log4j" />
|
||||
</subpackage>
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="isolation">
|
||||
<allow pkg="com.fasterxml.jackson" />
|
||||
<allow pkg="org.apache.maven.artifact.versioning" />
|
||||
<allow pkg="javax.tools" />
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="distributed">
|
||||
<allow pkg="javax.ws.rs.core" />
|
||||
</subpackage>
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="cli">
|
||||
<allow pkg="org.apache.kafka.connect.runtime" />
|
||||
<allow pkg="org.apache.kafka.connect.storage" />
|
||||
<allow pkg="org.apache.kafka.connect.util" />
|
||||
<allow pkg="org.apache.kafka.common" />
|
||||
<allow pkg="org.apache.kafka.connect.connector.policy" />
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="storage">
|
||||
<allow pkg="org.apache.kafka.connect" />
|
||||
<allow pkg="org.apache.kafka.common.serialization" />
|
||||
<allow pkg="javax.crypto.spec"/>
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="util">
|
||||
<allow pkg="org.apache.kafka.connect" />
|
||||
<allow pkg="org.reflections.vfs" />
|
||||
<!-- for annotations to avoid code duplication -->
|
||||
<allow pkg="com.fasterxml.jackson.annotation" />
|
||||
<allow pkg="com.fasterxml.jackson.databind" />
|
||||
<subpackage name="clusters">
|
||||
<allow pkg="kafka.server" />
|
||||
<allow pkg="kafka.zk" />
|
||||
<allow pkg="kafka.utils" />
|
||||
<allow class="javax.servlet.http.HttpServletResponse" />
|
||||
<allow class="javax.ws.rs.core.Response" />
|
||||
<allow pkg="com.fasterxml.jackson.core.type" />
|
||||
</subpackage>
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="integration">
|
||||
<allow pkg="org.apache.kafka.connect.util.clusters" />
|
||||
<allow pkg="org.apache.kafka.connect" />
|
||||
<allow pkg="org.apache.kafka.tools" />
|
||||
<allow pkg="javax.ws.rs" />
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="json">
|
||||
<allow pkg="com.fasterxml.jackson" />
|
||||
<allow pkg="org.apache.kafka.common.serialization" />
|
||||
<allow pkg="org.apache.kafka.common.errors" />
|
||||
<allow pkg="org.apache.kafka.connect.storage" />
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="file">
|
||||
<allow pkg="org.apache.kafka.connect" />
|
||||
<allow pkg="org.apache.kafka.clients.consumer" />
|
||||
<!-- for tests -->
|
||||
<allow pkg="org.easymock" />
|
||||
<allow pkg="org.powermock" />
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="tools">
|
||||
<allow pkg="org.apache.kafka.connect" />
|
||||
<allow pkg="org.apache.kafka.tools" />
|
||||
<allow pkg="com.fasterxml.jackson" />
|
||||
</subpackage>
|
||||
|
||||
<subpackage name="transforms">
|
||||
<allow class="org.apache.kafka.connect.connector.ConnectRecord" />
|
||||
<allow class="org.apache.kafka.connect.source.SourceRecord" />
|
||||
<allow class="org.apache.kafka.connect.sink.SinkRecord" />
|
||||
<allow pkg="org.apache.kafka.connect.transforms.util" />
|
||||
</subpackage>
|
||||
</subpackage>
|
||||
|
||||
</import-control>
|
||||
16
checkstyle/java.header
Normal file
@@ -0,0 +1,16 @@
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

265
checkstyle/suppressions.xml
Normal file
@@ -0,0 +1,265 @@
|
||||
|
||||
|
||||
<!DOCTYPE suppressions PUBLIC
|
||||
"-//Puppy Crawl//DTD Suppressions 1.1//EN"
|
||||
"http://www.puppycrawl.com/dtds/suppressions_1_1.dtd">
|
||||
|
||||
<suppressions>
|
||||
|
||||
<!-- Note that [/\\] must be used as the path separator for cross-platform support -->
|
||||
|
||||
<!-- Generator -->
|
||||
<suppress checks="CyclomaticComplexity|BooleanExpressionComplexity"
|
||||
files="(SchemaGenerator|MessageDataGenerator|FieldSpec).java"/>
|
||||
<suppress checks="NPathComplexity"
|
||||
files="(MessageDataGenerator|FieldSpec).java"/>
|
||||
<suppress checks="JavaNCSS"
|
||||
files="(ApiMessageType).java|MessageDataGenerator.java"/>
|
||||
<suppress checks="MethodLength"
|
||||
files="MessageDataGenerator.java"/>
|
||||
|
||||
<!-- Clients -->
|
||||
<suppress checks="ClassFanOutComplexity"
|
||||
files="(Fetcher|Sender|SenderTest|ConsumerCoordinator|KafkaConsumer|KafkaProducer|Utils|TransactionManager|TransactionManagerTest|KafkaAdminClient|NetworkClient|Admin).java"/>
|
||||
<suppress checks="ClassFanOutComplexity"
|
||||
files="(SaslServerAuthenticator|SaslAuthenticatorTest).java"/>
|
||||
<suppress checks="ClassFanOutComplexity"
|
||||
files="Errors.java"/>
|
||||
<suppress checks="ClassFanOutComplexity"
|
||||
files="Utils.java"/>
|
||||
<suppress checks="ClassFanOutComplexity"
|
||||
files="AbstractRequest.java"/>
|
||||
<suppress checks="ClassFanOutComplexity"
|
||||
files="AbstractResponse.java"/>
|
||||
|
||||
<suppress checks="MethodLength"
|
||||
files="KerberosLogin.java|RequestResponseTest.java|ConnectMetricsRegistry.java|KafkaConsumer.java"/>
|
||||
|
||||
<suppress checks="ParameterNumber"
|
||||
files="NetworkClient.java|FieldSpec.java"/>
|
||||
<suppress checks="ParameterNumber"
|
||||
files="KafkaConsumer.java"/>
|
||||
<suppress checks="ParameterNumber"
|
||||
files="Fetcher.java"/>
|
||||
<suppress checks="ParameterNumber"
|
||||
files="Sender.java"/>
|
||||
<suppress checks="ParameterNumber"
|
||||
files="ConfigDef.java"/>
|
||||
<suppress checks="ParameterNumber"
|
||||
files="DefaultRecordBatch.java"/>
|
||||
<suppress checks="ParameterNumber"
|
||||
files="Sender.java"/>
|
||||
|
||||
<suppress checks="ClassDataAbstractionCoupling"
|
||||
files="(KafkaConsumer|ConsumerCoordinator|Fetcher|KafkaProducer|AbstractRequest|AbstractResponse|TransactionManager|Admin|KafkaAdminClient).java"/>
|
||||
<suppress checks="ClassDataAbstractionCoupling"
|
||||
files="(Errors|SaslAuthenticatorTest|AgentTest|CoordinatorTest).java"/>
|
||||
|
||||
<suppress checks="BooleanExpressionComplexity"
|
||||
files="(Utils|Topic|KafkaLZ4BlockOutputStream|AclData|JoinGroupRequest).java"/>
|
||||
|
||||
<suppress checks="CyclomaticComplexity"
|
||||
files="(ConsumerCoordinator|Fetcher|Sender|KafkaProducer|BufferPool|ConfigDef|RecordAccumulator|KerberosLogin|AbstractRequest|AbstractResponse|Selector|SslFactory|SslTransportLayer|SaslClientAuthenticator|SaslClientCallbackHandler|SaslServerAuthenticator|AbstractCoordinator|TransactionManager|AbstractStickyAssignor).java"/>
|
||||
|
||||
<suppress checks="JavaNCSS"
|
||||
files="(AbstractRequest|KerberosLogin|WorkerSinkTaskTest|TransactionManagerTest|SenderTest|KafkaAdminClient|ConsumerCoordinatorTest).java"/>
|
||||
|
||||
<suppress checks="NPathComplexity"
|
||||
files="(ConsumerCoordinator|BufferPool|Fetcher|MetricName|Node|ConfigDef|RecordBatch|SslFactory|SslTransportLayer|MetadataResponse|KerberosLogin|Selector|Sender|Serdes|TokenInformation|Agent|Values|PluginUtils|MiniTrogdorCluster|TasksRequest|KafkaProducer|AbstractStickyAssignor).java"/>
|
||||
|
||||
<suppress checks="(JavaNCSS|CyclomaticComplexity|MethodLength)"
|
||||
files="CoordinatorClient.java"/>
|
||||
<suppress checks="(UnnecessaryParentheses|BooleanExpressionComplexity|CyclomaticComplexity|WhitespaceAfter|LocalVariableName)"
|
||||
files="Murmur3.java"/>
|
||||
|
||||
<suppress checks="(NPathComplexity|ClassFanOutComplexity|CyclomaticComplexity|ClassDataAbstractionCoupling|LocalVariableName|MemberName|ParameterName|MethodLength|JavaNCSS)"
|
||||
files="clients[\\/]src[\\/](generated|generated-test)[\\/].+.java$"/>
|
||||
|
||||
<suppress checks="NPathComplexity"
|
||||
files="MessageTest.java"/>
|
||||
|
||||
<!-- clients tests -->
|
||||
<suppress checks="ClassDataAbstractionCoupling"
|
||||
files="(Sender|Fetcher|KafkaConsumer|Metrics|RequestResponse|TransactionManager|KafkaAdminClient|Message|KafkaProducer)Test.java"/>
|
||||
|
||||
<suppress checks="ClassFanOutComplexity"
|
||||
files="(ConsumerCoordinator|KafkaConsumer|RequestResponse|Fetcher|KafkaAdminClient|Message|KafkaProducer)Test.java"/>
|
||||
|
||||
<suppress checks="ClassFanOutComplexity"
|
||||
files="MockAdminClient.java"/>
|
||||
|
||||
<suppress checks="JavaNCSS"
|
||||
files="RequestResponseTest.java|FetcherTest.java"/>
|
||||
|
||||
<suppress checks="NPathComplexity"
|
||||
files="MemoryRecordsTest|MetricsTest"/>
|
||||
|
||||
<suppress checks="(WhitespaceAround|LocalVariableName|ImportControl|AvoidStarImport)"
|
||||
files="Murmur3Test.java"/>
|
||||
|
||||
<!-- Connect -->
|
||||
<suppress checks="ClassFanOutComplexity"
|
||||
files="DistributedHerder(|Test).java"/>
|
||||
<suppress checks="ClassFanOutComplexity"
|
||||
files="Worker.java"/>
|
||||
<suppress checks="MethodLength"
|
||||
files="(KafkaConfigBackingStore|IncrementalCooperativeAssignor|RequestResponseTest|WorkerSinkTaskTest).java"/>
|
||||
|
||||
<suppress checks="ParameterNumber"
|
||||
files="Worker(SinkTask|SourceTask|Coordinator).java"/>
|
||||
<suppress checks="ParameterNumber"
|
||||
files="ConfigKeyInfo.java"/>
|
||||
|
||||
<suppress checks="ClassDataAbstractionCoupling"
|
||||
files="(RestServer|AbstractHerder|DistributedHerder).java"/>
|
||||
|
||||
<suppress checks="BooleanExpressionComplexity"
|
||||
files="JsonConverter.java"/>
|
||||
|
||||
<suppress checks="CyclomaticComplexity"
|
||||
files="ConnectRecord.java"/>
|
||||
<suppress checks="CyclomaticComplexity"
|
||||
files="JsonConverter.java"/>
|
||||
<suppress checks="CyclomaticComplexity"
|
||||
files="FileStreamSourceTask.java"/>
|
||||
<suppress checks="CyclomaticComplexity"
|
||||
files="DistributedHerder.java"/>
|
||||
<suppress checks="CyclomaticComplexity"
|
||||
files="KafkaConfigBackingStore.java"/>
|
||||
<suppress checks="CyclomaticComplexity"
|
||||
files="(Values|ConnectHeader|ConnectHeaders).java"/>
|
||||
<suppress checks="CyclomaticComplexity"
|
||||
files="RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapterTest.java"/>
|
||||
|
||||
<suppress checks="JavaNCSS"
|
||||
files="KafkaConfigBackingStore.java"/>
|
||||
<suppress checks="JavaNCSS"
|
||||
files="Values.java"/>
|
||||
|
||||
<suppress checks="NPathComplexity"
|
||||
files="(DistributedHerder|RestClient|JsonConverter|KafkaConfigBackingStore|FileStreamSourceTask).java"/>
|
||||
|
||||
<suppress checks="MethodLength"
|
||||
files="Values.java"/>
|
||||
|
||||
<!-- connect tests-->
|
||||
<suppress checks="ClassDataAbstractionCoupling"
|
||||
files="(DistributedHerder|KafkaBasedLog)Test.java"/>
|
||||
|
||||
<suppress checks="ClassFanOutComplexity"
|
||||
files="(WorkerSinkTask|WorkerSourceTask)Test.java"/>
|
||||
|
||||
<!-- Streams -->
|
||||
<suppress checks="ClassFanOutComplexity"
|
||||
files="(TopologyBuilder|KafkaStreams|KStreamImpl|KTableImpl|StreamThread|StreamTask).java"/>
|
||||
|
||||
<suppress checks="MethodLength"
|
||||
files="(KTableImpl|StreamsPartitionAssignor.java)"/>
|
||||
|
||||
<suppress checks="ParameterNumber"
|
||||
files="StreamTask.java"/>
|
||||
<suppress checks="ParameterNumber"
|
||||
files="RocksDBWindowStoreSupplier.java"/>
|
||||
|
||||
<suppress checks="ClassDataAbstractionCoupling"
|
||||
files="(TopologyBuilder|KStreamImpl|StreamsPartitionAssignor|KafkaStreams|KTableImpl).java"/>
|
||||
|
||||
<suppress checks="CyclomaticComplexity"
|
||||
files="TopologyBuilder.java"/>
|
||||
<suppress checks="CyclomaticComplexity"
|
||||
files="StreamsPartitionAssignor.java"/>
|
||||
<suppress checks="CyclomaticComplexity"
|
||||
files="StreamThread.java"/>
|
||||
|
||||
<suppress checks="JavaNCSS"
|
||||
files="StreamsPartitionAssignor.java"/>
|
||||
|
||||
<suppress checks="NPathComplexity"
|
||||
files="(AssignorConfiguration|InternalTopologyBuilder|KafkaStreams|ProcessorStateManager|StreamsPartitionAssignor|StreamThread|TaskManager).java"/>
|
||||
|
||||
<suppress checks="(FinalLocalVariable|UnnecessaryParentheses|BooleanExpressionComplexity|CyclomaticComplexity|WhitespaceAfter|LocalVariableName)"
|
||||
files="Murmur3.java"/>
|
||||
|
||||
<!-- suppress FinalLocalVariable outside of the streams package. -->
|
||||
<suppress checks="FinalLocalVariable"
|
||||
files="^(?!.*[\\/]org[\\/]apache[\\/]kafka[\\/]streams[\\/].*$)"/>
|
||||
|
||||
<!-- generated code -->
|
||||
<suppress checks="(NPathComplexity|ClassFanOutComplexity|CyclomaticComplexity|ClassDataAbstractionCoupling|FinalLocalVariable|LocalVariableName|MemberName|ParameterName|MethodLength|JavaNCSS)"
|
||||
files="streams[\\/]src[\\/](generated|generated-test)[\\/].+.java$"/>
|
||||
|
||||
|
||||
<!-- Streams tests -->
|
||||
<suppress checks="ClassFanOutComplexity"
|
||||
files="(StreamThreadTest|StreamTaskTest|ProcessorTopologyTestDriver).java"/>
|
||||
|
||||
<suppress checks="MethodLength"
|
||||
files="KStreamKTableJoinIntegrationTest.java"/>
|
||||
<suppress checks="MethodLength"
|
||||
files="KStreamKStreamJoinTest.java"/>
|
||||
<suppress checks="MethodLength"
|
||||
files="KStreamWindowAggregateTest.java"/>
|
||||
<suppress checks="MethodLength"
|
||||
files="RocksDBWindowStoreTest.java"/>
|
||||
|
||||
<suppress checks="MemberName"
|
||||
files="StreamsPartitionAssignorTest.java"/>
|
||||
|
||||
<suppress checks="ClassDataAbstractionCoupling"
|
||||
files=".*[/\\]streams[/\\].*test[/\\].*.java"/>
|
||||
|
||||
<suppress checks="BooleanExpressionComplexity"
|
||||
files="SmokeTestDriver.java"/>
|
||||
|
||||
<suppress checks="CyclomaticComplexity"
|
||||
files="KStreamKStreamJoinTest.java|KTableKTableForeignKeyJoinIntegrationTest.java"/>
|
||||
<suppress checks="CyclomaticComplexity"
|
||||
files="RelationalSmokeTest.java|SmokeTestDriver.java"/>
|
||||
|
||||
<suppress checks="JavaNCSS"
|
||||
files="KStreamKStreamJoinTest.java"/>
|
||||
<suppress checks="JavaNCSS"
|
||||
files="SmokeTestDriver.java"/>
|
||||
|
||||
<suppress checks="NPathComplexity"
|
||||
files="EosTestDriver|KStreamKStreamJoinTest.java|RelationalSmokeTest.java|SmokeTestDriver.java|KStreamKStreamLeftJoinTest.java|KTableKTableForeignKeyJoinIntegrationTest.java"/>
|
||||
|
||||
<suppress checks="(FinalLocalVariable|WhitespaceAround|LocalVariableName|ImportControl|AvoidStarImport)"
|
||||
files="Murmur3Test.java"/>
|
||||
|
||||
|
||||
<!-- Streams Test-Utils -->
|
||||
<suppress checks="ClassFanOutComplexity"
|
||||
files="TopologyTestDriver.java"/>
|
||||
<suppress checks="ClassDataAbstractionCoupling"
|
||||
files="TopologyTestDriver.java"/>
|
||||
|
||||
<!-- Tools -->
|
||||
<suppress checks="ClassDataAbstractionCoupling"
|
||||
files="VerifiableConsumer.java"/>
|
||||
|
||||
<suppress checks="CyclomaticComplexity"
|
||||
files="(StreamsResetter|ProducerPerformance|Agent).java"/>
|
||||
<suppress checks="BooleanExpressionComplexity"
|
||||
files="StreamsResetter.java"/>
|
||||
<suppress checks="NPathComplexity"
|
||||
files="(ProducerPerformance|StreamsResetter|Agent|TransactionalMessageCopier).java"/>
|
||||
<suppress checks="ImportControl"
|
||||
files="SignalLogger.java"/>
|
||||
<suppress checks="IllegalImport"
|
||||
files="SignalLogger.java"/>
|
||||
<suppress checks="ParameterNumber"
|
||||
files="ProduceBenchSpec.java"/>
|
||||
<suppress checks="ParameterNumber"
|
||||
files="SustainedConnectionSpec.java"/>
|
||||
|
||||
<!-- Log4J-Appender -->
|
||||
<suppress checks="CyclomaticComplexity"
|
||||
files="KafkaLog4jAppender.java"/>
|
||||
|
||||
<suppress checks="NPathComplexity"
|
||||
files="KafkaLog4jAppender.java"/>
|
||||
<suppress checks="JavaNCSS"
|
||||
files="RequestResponseTest.java"/>
|
||||
|
||||
</suppressions>
|
||||
1
clients/.gitignore
vendored
Normal file
1
clients/.gitignore
vendored
Normal file
@@ -0,0 +1 @@
|
||||
/bin/
|
||||
@@ -0,0 +1,56 @@
|
||||
/*
|
||||
* Licensed to the Apache Software Foundation (ASF) under one or more
|
||||
* contributor license agreements. See the NOTICE file distributed with
|
||||
* this work for additional information regarding copyright ownership.
|
||||
* The ASF licenses this file to You under the Apache License, Version 2.0
|
||||
* (the "License"); you may not use this file except in compliance with
|
||||
* the License. You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
package org.apache.kafka.clients;
|
||||
|
||||
import org.apache.kafka.common.message.ApiVersionsResponseData.ApiVersionsResponseKey;
|
||||
import org.apache.kafka.common.protocol.ApiKeys;
|
||||
|
||||
/**
|
||||
* Represents the min version and max version of an api key.
|
||||
*
|
||||
* NOTE: This class is intended for INTERNAL usage only within Kafka.
|
||||
*/
|
||||
public class ApiVersion {
|
||||
public final short apiKey;
|
||||
public final short minVersion;
|
||||
public final short maxVersion;
|
||||
|
||||
public ApiVersion(ApiKeys apiKey) {
|
||||
this(apiKey.id, apiKey.oldestVersion(), apiKey.latestVersion());
|
||||
}
|
||||
|
||||
public ApiVersion(short apiKey, short minVersion, short maxVersion) {
|
||||
this.apiKey = apiKey;
|
||||
this.minVersion = minVersion;
|
||||
this.maxVersion = maxVersion;
|
||||
}
|
||||
|
||||
public ApiVersion(ApiVersionsResponseKey apiVersionsResponseKey) {
|
||||
this.apiKey = apiVersionsResponseKey.apiKey();
|
||||
this.minVersion = apiVersionsResponseKey.minVersion();
|
||||
this.maxVersion = apiVersionsResponseKey.maxVersion();
|
||||
}
|
||||
|
||||
@Override
|
||||
public String toString() {
|
||||
return "ApiVersion(" +
|
||||
"apiKey=" + apiKey +
|
||||
", minVersion=" + minVersion +
|
||||
", maxVersion= " + maxVersion +
|
||||
")";
|
||||
}
|
||||
}
|
||||
@@ -0,0 +1,66 @@
|
||||
/*
|
||||
* Licensed to the Apache Software Foundation (ASF) under one or more
|
||||
* contributor license agreements. See the NOTICE file distributed with
|
||||
* this work for additional information regarding copyright ownership.
|
||||
* The ASF licenses this file to You under the Apache License, Version 2.0
|
||||
* (the "License"); you may not use this file except in compliance with
|
||||
* the License. You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
package org.apache.kafka.clients;
|
||||
|
||||
import org.apache.kafka.common.protocol.ApiKeys;
|
||||
import org.apache.kafka.common.record.RecordBatch;
|
||||
import org.apache.kafka.common.requests.ProduceRequest;
|
||||
|
||||
import java.util.HashMap;
|
||||
import java.util.Map;
|
||||
|
||||
/**
|
||||
* Maintains node api versions for access outside of NetworkClient (which is where the information is derived).
|
||||
* The pattern is akin to the use of {@link Metadata} for topic metadata.
|
||||
*
|
||||
* NOTE: This class is intended for INTERNAL usage only within Kafka.
|
||||
*/
|
||||
public class ApiVersions {
|
||||
|
||||
private final Map<String, NodeApiVersions> nodeApiVersions = new HashMap<>();
|
||||
private byte maxUsableProduceMagic = RecordBatch.CURRENT_MAGIC_VALUE;
|
||||
|
||||
public synchronized void update(String nodeId, NodeApiVersions nodeApiVersions) {
|
||||
this.nodeApiVersions.put(nodeId, nodeApiVersions);
|
||||
this.maxUsableProduceMagic = computeMaxUsableProduceMagic();
|
||||
}
|
||||
|
||||
public synchronized void remove(String nodeId) {
|
||||
this.nodeApiVersions.remove(nodeId);
|
||||
this.maxUsableProduceMagic = computeMaxUsableProduceMagic();
|
||||
}
|
||||
|
||||
public synchronized NodeApiVersions get(String nodeId) {
|
||||
return this.nodeApiVersions.get(nodeId);
|
||||
}
|
||||
|
||||
private byte computeMaxUsableProduceMagic() {
|
||||
// use a magic version which is supported by all brokers to reduce the chance that
|
||||
// we will need to convert the messages when they are ready to be sent.
|
||||
byte maxUsableMagic = RecordBatch.CURRENT_MAGIC_VALUE;
|
||||
for (NodeApiVersions versions : this.nodeApiVersions.values()) {
|
||||
byte nodeMaxUsableMagic = ProduceRequest.requiredMagicForVersion(versions.latestUsableVersion(ApiKeys.PRODUCE));
|
||||
maxUsableMagic = (byte) Math.min(nodeMaxUsableMagic, maxUsableMagic);
|
||||
}
|
||||
return maxUsableMagic;
|
||||
}
|
||||
|
||||
public synchronized byte maxUsableProduceMagic() {
|
||||
return maxUsableProduceMagic;
|
||||
}
|
||||
|
||||
}
|
||||
@@ -0,0 +1,41 @@
|
||||
/*
|
||||
* Licensed to the Apache Software Foundation (ASF) under one or more
|
||||
* contributor license agreements. See the NOTICE file distributed with
|
||||
* this work for additional information regarding copyright ownership.
|
||||
* The ASF licenses this file to You under the Apache License, Version 2.0
|
||||
* (the "License"); you may not use this file except in compliance with
|
||||
* the License. You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
package org.apache.kafka.clients;
|
||||
|
||||
import java.util.Locale;
|
||||
|
||||
public enum ClientDnsLookup {
|
||||
|
||||
DEFAULT("default"),
|
||||
USE_ALL_DNS_IPS("use_all_dns_ips"),
|
||||
RESOLVE_CANONICAL_BOOTSTRAP_SERVERS_ONLY("resolve_canonical_bootstrap_servers_only");
|
||||
|
||||
private String clientDnsLookup;
|
||||
|
||||
ClientDnsLookup(String clientDnsLookup) {
|
||||
this.clientDnsLookup = clientDnsLookup;
|
||||
}
|
||||
|
||||
@Override
|
||||
public String toString() {
|
||||
return clientDnsLookup;
|
||||
}
|
||||
|
||||
public static ClientDnsLookup forConfig(String config) {
|
||||
return ClientDnsLookup.valueOf(config.toUpperCase(Locale.ROOT));
|
||||
}
|
||||
}
|
||||
@@ -0,0 +1,119 @@
|
||||
/*
|
||||
* Licensed to the Apache Software Foundation (ASF) under one or more
|
||||
* contributor license agreements. See the NOTICE file distributed with
|
||||
* this work for additional information regarding copyright ownership.
|
||||
* The ASF licenses this file to You under the Apache License, Version 2.0
|
||||
* (the "License"); you may not use this file except in compliance with
|
||||
* the License. You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
package org.apache.kafka.clients;
|
||||
|
||||
import org.apache.kafka.common.message.RequestHeaderData;
|
||||
import org.apache.kafka.common.protocol.ApiKeys;
|
||||
import org.apache.kafka.common.requests.AbstractRequest;
|
||||
import org.apache.kafka.common.requests.RequestHeader;
|
||||
|
||||
/**
|
||||
* A request being sent to the server. This holds both the network send as well as the client-level metadata.
|
||||
*/
|
||||
public final class ClientRequest {
|
||||
|
||||
private final String destination;
|
||||
private final AbstractRequest.Builder<?> requestBuilder;
|
||||
private final int correlationId;
|
||||
private final String clientId;
|
||||
private final long createdTimeMs;
|
||||
private final boolean expectResponse;
|
||||
private final int requestTimeoutMs;
|
||||
private final RequestCompletionHandler callback;
|
||||
|
||||
/**
|
||||
* @param destination The brokerId to send the request to
|
||||
* @param requestBuilder The builder for the request to make
|
||||
* @param correlationId The correlation id for this client request
|
||||
* @param clientId The client ID to use for the header
|
||||
* @param createdTimeMs The unix timestamp in milliseconds for the time at which this request was created.
|
||||
* @param expectResponse Should we expect a response message or is this request complete once it is sent?
|
||||
* @param callback A callback to execute when the response has been received (or null if no callback is necessary)
|
||||
*/
|
||||
public ClientRequest(String destination,
|
||||
AbstractRequest.Builder<?> requestBuilder,
|
||||
int correlationId,
|
||||
String clientId,
|
||||
long createdTimeMs,
|
||||
boolean expectResponse,
|
||||
int requestTimeoutMs,
|
||||
RequestCompletionHandler callback) {
|
||||
this.destination = destination;
|
||||
this.requestBuilder = requestBuilder;
|
||||
this.correlationId = correlationId;
|
||||
this.clientId = clientId;
|
||||
this.createdTimeMs = createdTimeMs;
|
||||
this.expectResponse = expectResponse;
|
||||
this.requestTimeoutMs = requestTimeoutMs;
|
||||
this.callback = callback;
|
||||
}
|
||||
|
||||
@Override
|
||||
public String toString() {
|
||||
return "ClientRequest(expectResponse=" + expectResponse +
|
||||
", callback=" + callback +
|
||||
", destination=" + destination +
|
||||
", correlationId=" + correlationId +
|
||||
", clientId=" + clientId +
|
||||
", createdTimeMs=" + createdTimeMs +
|
||||
", requestBuilder=" + requestBuilder +
|
||||
")";
|
||||
}
|
||||
|
||||
public boolean expectResponse() {
|
||||
return expectResponse;
|
||||
}
|
||||
|
||||
public ApiKeys apiKey() {
|
||||
return requestBuilder.apiKey();
|
||||
}
|
||||
|
||||
public RequestHeader makeHeader(short version) {
|
||||
short requestApiKey = requestBuilder.apiKey().id;
|
||||
return new RequestHeader(
|
||||
new RequestHeaderData().
|
||||
setRequestApiKey(requestApiKey).
|
||||
setRequestApiVersion(version).
|
||||
setClientId(clientId).
|
||||
setCorrelationId(correlationId),
|
||||
ApiKeys.forId(requestApiKey).requestHeaderVersion(version));
|
||||
}
|
||||
|
||||
public AbstractRequest.Builder<?> requestBuilder() {
|
||||
return requestBuilder;
|
||||
}
|
||||
|
||||
public String destination() {
|
||||
return destination;
|
||||
}
|
||||
|
||||
public RequestCompletionHandler callback() {
|
||||
return callback;
|
||||
}
|
||||
|
||||
public long createdTimeMs() {
|
||||
return createdTimeMs;
|
||||
}
|
||||
|
||||
public int correlationId() {
|
||||
return correlationId;
|
||||
}
|
||||
|
||||
public int requestTimeoutMs() {
|
||||
return requestTimeoutMs;
|
||||
}
|
||||
}
|
||||
@@ -0,0 +1,126 @@
|
||||
/*
|
||||
* Licensed to the Apache Software Foundation (ASF) under one or more
|
||||
* contributor license agreements. See the NOTICE file distributed with
|
||||
* this work for additional information regarding copyright ownership.
|
||||
* The ASF licenses this file to You under the Apache License, Version 2.0
|
||||
* (the "License"); you may not use this file except in compliance with
|
||||
* the License. You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
package org.apache.kafka.clients;
|
||||
|
||||
import org.apache.kafka.common.errors.AuthenticationException;
|
||||
import org.apache.kafka.common.errors.UnsupportedVersionException;
|
||||
import org.apache.kafka.common.requests.AbstractResponse;
|
||||
import org.apache.kafka.common.requests.RequestHeader;
|
||||
|
||||
/**
|
||||
* A response from the server. Contains both the body of the response as well as the correlated request
|
||||
* metadata that was originally sent.
|
||||
*/
|
||||
public class ClientResponse {
|
||||
|
||||
private final RequestHeader requestHeader;
|
||||
private final RequestCompletionHandler callback;
|
||||
private final String destination;
|
||||
private final long receivedTimeMs;
|
||||
private final long latencyMs;
|
||||
private final boolean disconnected;
|
||||
private final UnsupportedVersionException versionMismatch;
|
||||
private final AuthenticationException authenticationException;
|
||||
private final AbstractResponse responseBody;
|
||||
|
||||
/**
|
||||
* @param requestHeader The header of the corresponding request
|
||||
* @param callback The callback to be invoked
|
||||
* @param createdTimeMs The unix timestamp when the corresponding request was created
|
||||
* @param destination The node the corresponding request was sent to
|
||||
* @param receivedTimeMs The unix timestamp when this response was received
|
||||
* @param disconnected Whether the client disconnected before fully reading a response
|
||||
* @param versionMismatch Whether there was a version mismatch that prevented sending the request.
|
||||
* @param responseBody The response contents (or null) if we disconnected, no response was expected,
|
||||
* or if there was a version mismatch.
|
||||
*/
|
||||
public ClientResponse(RequestHeader requestHeader,
|
||||
RequestCompletionHandler callback,
|
||||
String destination,
|
||||
long createdTimeMs,
|
||||
long receivedTimeMs,
|
||||
boolean disconnected,
|
||||
UnsupportedVersionException versionMismatch,
|
||||
AuthenticationException authenticationException,
|
||||
AbstractResponse responseBody) {
|
||||
this.requestHeader = requestHeader;
|
||||
this.callback = callback;
|
||||
this.destination = destination;
|
||||
this.receivedTimeMs = receivedTimeMs;
|
||||
this.latencyMs = receivedTimeMs - createdTimeMs;
|
||||
this.disconnected = disconnected;
|
||||
this.versionMismatch = versionMismatch;
|
||||
this.authenticationException = authenticationException;
|
||||
this.responseBody = responseBody;
|
||||
}
|
||||
|
||||
public long receivedTimeMs() {
|
||||
return receivedTimeMs;
|
||||
}
|
||||
|
||||
public boolean wasDisconnected() {
|
||||
return disconnected;
|
||||
}
|
||||
|
||||
public UnsupportedVersionException versionMismatch() {
|
||||
return versionMismatch;
|
||||
}
|
||||
|
||||
public AuthenticationException authenticationException() {
|
||||
return authenticationException;
|
||||
}
|
||||
|
||||
public RequestHeader requestHeader() {
|
||||
return requestHeader;
|
||||
}
|
||||
|
||||
public String destination() {
|
||||
return destination;
|
||||
}
|
||||
|
||||
public AbstractResponse responseBody() {
|
||||
return responseBody;
|
||||
}
|
||||
|
||||
public boolean hasResponse() {
|
||||
return responseBody != null;
|
||||
}
|
||||
|
||||
public long requestLatencyMs() {
|
||||
return latencyMs;
|
||||
}
|
||||
|
||||
public void onComplete() {
|
||||
if (callback != null)
|
||||
callback.onComplete(this);
|
||||
}
|
||||
|
||||
@Override
|
||||
public String toString() {
|
||||
return "ClientResponse(receivedTimeMs=" + receivedTimeMs +
|
||||
", latencyMs=" +
|
||||
latencyMs +
|
||||
", disconnected=" +
|
||||
disconnected +
|
||||
", requestHeader=" +
|
||||
requestHeader +
|
||||
", responseBody=" +
|
||||
responseBody +
|
||||
")";
|
||||
}
|
||||
|
||||
}
|
||||
131
clients/src/main/java/org/apache/kafka/clients/ClientUtils.java
Normal file
131
clients/src/main/java/org/apache/kafka/clients/ClientUtils.java
Normal file
@@ -0,0 +1,131 @@
|
||||
/*
|
||||
* Licensed to the Apache Software Foundation (ASF) under one or more
|
||||
* contributor license agreements. See the NOTICE file distributed with
|
||||
* this work for additional information regarding copyright ownership.
|
||||
* The ASF licenses this file to You under the Apache License, Version 2.0
|
||||
* (the "License"); you may not use this file except in compliance with
|
||||
* the License. You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
package org.apache.kafka.clients;
|
||||
|
||||
import org.apache.kafka.common.config.AbstractConfig;
|
||||
import org.apache.kafka.common.config.ConfigException;
|
||||
import org.apache.kafka.common.config.SaslConfigs;
|
||||
import org.apache.kafka.common.network.ChannelBuilder;
|
||||
import org.apache.kafka.common.network.ChannelBuilders;
|
||||
import org.apache.kafka.common.security.JaasContext;
|
||||
import org.apache.kafka.common.security.auth.SecurityProtocol;
|
||||
import org.apache.kafka.common.utils.LogContext;
|
||||
import org.apache.kafka.common.utils.Time;
|
||||
import org.slf4j.Logger;
|
||||
import org.slf4j.LoggerFactory;
|
||||
|
||||
import java.net.InetAddress;
|
||||
import java.net.InetSocketAddress;
|
||||
import java.net.UnknownHostException;
|
||||
import java.util.ArrayList;
|
||||
import java.util.Collections;
|
||||
import java.util.List;
|
||||
|
||||
import static org.apache.kafka.common.utils.Utils.getHost;
|
||||
import static org.apache.kafka.common.utils.Utils.getPort;
|
||||
|
||||
public final class ClientUtils {
|
||||
private static final Logger log = LoggerFactory.getLogger(ClientUtils.class);
|
||||
|
||||
private ClientUtils() {
|
||||
}
|
||||
|
||||
public static List<InetSocketAddress> parseAndValidateAddresses(List<String> urls, String clientDnsLookupConfig) {
|
||||
return parseAndValidateAddresses(urls, ClientDnsLookup.forConfig(clientDnsLookupConfig));
|
||||
}
|
||||
|
||||
public static List<InetSocketAddress> parseAndValidateAddresses(List<String> urls, ClientDnsLookup clientDnsLookup) {
|
||||
List<InetSocketAddress> addresses = new ArrayList<>();
|
||||
for (String url : urls) {
|
||||
if (url != null && !url.isEmpty()) {
|
||||
try {
|
||||
String host = getHost(url);
|
||||
Integer port = getPort(url);
|
||||
if (host == null || port == null)
|
||||
throw new ConfigException("Invalid url in " + CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG + ": " + url);
|
||||
|
||||
if (clientDnsLookup == ClientDnsLookup.RESOLVE_CANONICAL_BOOTSTRAP_SERVERS_ONLY) {
|
||||
InetAddress[] inetAddresses = InetAddress.getAllByName(host);
|
||||
for (InetAddress inetAddress : inetAddresses) {
|
||||
String resolvedCanonicalName = inetAddress.getCanonicalHostName();
|
||||
InetSocketAddress address = new InetSocketAddress(resolvedCanonicalName, port);
|
||||
if (address.isUnresolved()) {
|
||||
log.warn("Couldn't resolve server {} from {} as DNS resolution of the canonical hostname {} failed for {}", url, CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, resolvedCanonicalName, host);
|
||||
} else {
|
||||
addresses.add(address);
|
||||
}
|
||||
}
|
||||
} else {
|
||||
InetSocketAddress address = new InetSocketAddress(host, port);
|
||||
if (address.isUnresolved()) {
|
||||
log.warn("Couldn't resolve server {} from {} as DNS resolution failed for {}", url, CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, host);
|
||||
} else {
|
||||
addresses.add(address);
|
||||
}
|
||||
}
|
||||
|
||||
} catch (IllegalArgumentException e) {
|
||||
throw new ConfigException("Invalid port in " + CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG + ": " + url);
|
||||
} catch (UnknownHostException e) {
|
||||
throw new ConfigException("Unknown host in " + CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG + ": " + url);
|
||||
}
|
||||
}
|
||||
}
|
||||
if (addresses.isEmpty())
|
||||
throw new ConfigException("No resolvable bootstrap urls given in " + CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG);
|
||||
return addresses;
|
||||
}
|
||||
|
||||
/**
|
||||
* Create a new channel builder from the provided configuration.
|
||||
*
|
||||
* @param config client configs
|
||||
* @param time the time implementation
|
||||
* @param logContext the logging context
|
||||
*
|
||||
* @return configured ChannelBuilder based on the configs.
|
||||
*/
|
||||
public static ChannelBuilder createChannelBuilder(AbstractConfig config, Time time, LogContext logContext) {
|
||||
SecurityProtocol securityProtocol = SecurityProtocol.forName(config.getString(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG));
|
||||
String clientSaslMechanism = config.getString(SaslConfigs.SASL_MECHANISM);
|
||||
return ChannelBuilders.clientChannelBuilder(securityProtocol, JaasContext.Type.CLIENT, config, null,
|
||||
clientSaslMechanism, time, true, logContext);
|
||||
}
|
||||
|
||||
static List<InetAddress> resolve(String host, ClientDnsLookup clientDnsLookup) throws UnknownHostException {
|
||||
InetAddress[] addresses = InetAddress.getAllByName(host);
|
||||
if (ClientDnsLookup.USE_ALL_DNS_IPS == clientDnsLookup) {
|
||||
return filterPreferredAddresses(addresses);
|
||||
} else {
|
||||
return Collections.singletonList(addresses[0]);
|
||||
}
|
||||
}
|
||||
|
||||
static List<InetAddress> filterPreferredAddresses(InetAddress[] allAddresses) {
|
||||
List<InetAddress> preferredAddresses = new ArrayList<>();
|
||||
Class<? extends InetAddress> clazz = null;
|
||||
for (InetAddress address : allAddresses) {
|
||||
if (clazz == null) {
|
||||
clazz = address.getClass();
|
||||
}
|
||||
if (clazz.isInstance(address)) {
|
||||
preferredAddresses.add(address);
|
||||
}
|
||||
}
|
||||
return preferredAddresses;
|
||||
}
|
||||
}
|
||||
@@ -0,0 +1,427 @@
|
||||
/*
|
||||
* Licensed to the Apache Software Foundation (ASF) under one or more
|
||||
* contributor license agreements. See the NOTICE file distributed with
|
||||
* this work for additional information regarding copyright ownership.
|
||||
* The ASF licenses this file to You under the Apache License, Version 2.0
|
||||
* (the "License"); you may not use this file except in compliance with
|
||||
* the License. You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
package org.apache.kafka.clients;
|
||||
|
||||
import java.util.concurrent.ThreadLocalRandom;
|
||||
|
||||
import org.apache.kafka.common.errors.AuthenticationException;
|
||||
import org.apache.kafka.common.utils.LogContext;
|
||||
import org.slf4j.Logger;
|
||||
|
||||
import java.net.InetAddress;
|
||||
import java.net.UnknownHostException;
|
||||
import java.util.Collections;
|
||||
import java.util.HashMap;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
|
||||
/**
|
||||
* The state of our connection to each node in the cluster.
|
||||
*
|
||||
*/
|
||||
final class ClusterConnectionStates {
|
||||
private final long reconnectBackoffInitMs;
|
||||
private final long reconnectBackoffMaxMs;
|
||||
private final static int RECONNECT_BACKOFF_EXP_BASE = 2;
|
||||
private final double reconnectBackoffMaxExp;
|
||||
private final Map<String, NodeConnectionState> nodeState;
|
||||
private final Logger log;
|
||||
|
||||
public ClusterConnectionStates(long reconnectBackoffMs, long reconnectBackoffMaxMs, LogContext logContext) {
|
||||
this.log = logContext.logger(ClusterConnectionStates.class);
|
||||
this.reconnectBackoffInitMs = reconnectBackoffMs;
|
||||
this.reconnectBackoffMaxMs = reconnectBackoffMaxMs;
|
||||
this.reconnectBackoffMaxExp = Math.log(this.reconnectBackoffMaxMs / (double) Math.max(reconnectBackoffMs, 1)) / Math.log(RECONNECT_BACKOFF_EXP_BASE);
|
||||
this.nodeState = new HashMap<>();
|
||||
}
|
||||
|
||||
/**
|
||||
* Return true iff we can currently initiate a new connection. This will be the case if we are not
|
||||
* connected and haven't been connected for at least the minimum reconnection backoff period.
|
||||
* @param id the connection id to check
|
||||
* @param now the current time in ms
|
||||
* @return true if we can initiate a new connection
|
||||
*/
|
||||
public boolean canConnect(String id, long now) {
|
||||
NodeConnectionState state = nodeState.get(id);
|
||||
if (state == null)
|
||||
return true;
|
||||
else
|
||||
return state.state.isDisconnected() &&
|
||||
now - state.lastConnectAttemptMs >= state.reconnectBackoffMs;
|
||||
}
|
||||
|
||||
/**
|
||||
* Return true if we are disconnected from the given node and can't re-establish a connection yet.
|
||||
* @param id the connection to check
|
||||
* @param now the current time in ms
|
||||
*/
|
||||
public boolean isBlackedOut(String id, long now) {
|
||||
NodeConnectionState state = nodeState.get(id);
|
||||
return state != null
|
||||
&& state.state.isDisconnected()
|
||||
&& now - state.lastConnectAttemptMs < state.reconnectBackoffMs;
|
||||
}
|
||||
|
||||
/**
|
||||
* Returns the number of milliseconds to wait, based on the connection state, before attempting to send data. When
|
||||
* disconnected, this respects the reconnect backoff time. When connecting or connected, this handles slow/stalled
|
||||
* connections.
|
||||
* @param id the connection to check
|
||||
* @param now the current time in ms
|
||||
*/
|
||||
public long connectionDelay(String id, long now) {
|
||||
NodeConnectionState state = nodeState.get(id);
|
||||
if (state == null) return 0;
|
||||
if (state.state.isDisconnected()) {
|
||||
long timeWaited = now - state.lastConnectAttemptMs;
|
||||
return Math.max(state.reconnectBackoffMs - timeWaited, 0);
|
||||
} else {
|
||||
// When connecting or connected, we should be able to delay indefinitely since other events (connection or
|
||||
// data acked) will cause a wakeup once data can be sent.
|
||||
return Long.MAX_VALUE;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Return true if a specific connection establishment is currently underway
|
||||
* @param id The id of the node to check
|
||||
*/
|
||||
public boolean isConnecting(String id) {
|
||||
NodeConnectionState state = nodeState.get(id);
|
||||
return state != null && state.state == ConnectionState.CONNECTING;
|
||||
}
|
||||
|
||||
/**
|
||||
* Check whether a connection is either being established or awaiting API version information.
|
||||
* @param id The id of the node to check
|
||||
* @return true if the node is either connecting or has connected and is awaiting API versions, false otherwise
|
||||
*/
|
||||
public boolean isPreparingConnection(String id) {
|
||||
NodeConnectionState state = nodeState.get(id);
|
||||
return state != null &&
|
||||
(state.state == ConnectionState.CONNECTING || state.state == ConnectionState.CHECKING_API_VERSIONS);
|
||||
}
|
||||
|
||||
/**
|
||||
* Enter the connecting state for the given connection, moving to a new resolved address if necessary.
|
||||
* @param id the id of the connection
|
||||
* @param now the current time in ms
|
||||
* @param host the host of the connection, to be resolved internally if needed
|
||||
* @param clientDnsLookup the mode of DNS lookup to use when resolving the {@code host}
|
||||
*/
|
||||
public void connecting(String id, long now, String host, ClientDnsLookup clientDnsLookup) {
|
||||
NodeConnectionState connectionState = nodeState.get(id);
|
||||
if (connectionState != null && connectionState.host().equals(host)) {
|
||||
connectionState.lastConnectAttemptMs = now;
|
||||
connectionState.state = ConnectionState.CONNECTING;
|
||||
// Move to next resolved address, or if addresses are exhausted, mark node to be re-resolved
|
||||
connectionState.moveToNextAddress();
|
||||
return;
|
||||
} else if (connectionState != null) {
|
||||
log.info("Hostname for node {} changed from {} to {}.", id, connectionState.host(), host);
|
||||
}
|
||||
|
||||
// Create a new NodeConnectionState if nodeState does not already contain one
|
||||
// for the specified id or if the hostname associated with the node id changed.
|
||||
nodeState.put(id, new NodeConnectionState(ConnectionState.CONNECTING, now,
|
||||
this.reconnectBackoffInitMs, host, clientDnsLookup));
|
||||
}
|
||||
|
||||
/**
|
||||
* Returns a resolved address for the given connection, resolving it if necessary.
|
||||
* @param id the id of the connection
|
||||
* @throws UnknownHostException if the address was not resolvable
|
||||
*/
|
||||
public InetAddress currentAddress(String id) throws UnknownHostException {
|
||||
return nodeState(id).currentAddress();
|
||||
}
|
||||
|
||||
/**
|
||||
* Enter the disconnected state for the given node.
|
||||
* @param id the connection we have disconnected
|
||||
* @param now the current time in ms
|
||||
*/
|
||||
public void disconnected(String id, long now) {
|
||||
NodeConnectionState nodeState = nodeState(id);
|
||||
nodeState.state = ConnectionState.DISCONNECTED;
|
||||
nodeState.lastConnectAttemptMs = now;
|
||||
updateReconnectBackoff(nodeState);
|
||||
}
|
||||
|
||||
/**
|
||||
* Indicate that the connection is throttled until the specified deadline.
|
||||
* @param id the connection to be throttled
|
||||
* @param throttleUntilTimeMs the throttle deadline in milliseconds
|
||||
*/
|
||||
public void throttle(String id, long throttleUntilTimeMs) {
|
||||
NodeConnectionState state = nodeState.get(id);
|
||||
// The throttle deadline should never regress.
|
||||
if (state != null && state.throttleUntilTimeMs < throttleUntilTimeMs) {
|
||||
state.throttleUntilTimeMs = throttleUntilTimeMs;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Return the remaining throttling delay in milliseconds if throttling is in progress. Return 0, otherwise.
|
||||
* @param id the connection to check
|
||||
* @param now the current time in ms
|
||||
*/
|
||||
public long throttleDelayMs(String id, long now) {
|
||||
NodeConnectionState state = nodeState.get(id);
|
||||
if (state != null && state.throttleUntilTimeMs > now) {
|
||||
return state.throttleUntilTimeMs - now;
|
||||
} else {
|
||||
return 0;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Return the number of milliseconds to wait, based on the connection state and the throttle time, before
|
||||
* attempting to send data. If the connection has been established but being throttled, return throttle delay.
|
||||
* Otherwise, return connection delay.
|
||||
* @param id the connection to check
|
||||
* @param now the current time in ms
|
||||
*/
|
||||
public long pollDelayMs(String id, long now) {
|
||||
long throttleDelayMs = throttleDelayMs(id, now);
|
||||
if (isConnected(id) && throttleDelayMs > 0) {
|
||||
return throttleDelayMs;
|
||||
} else {
|
||||
return connectionDelay(id, now);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Enter the checking_api_versions state for the given node.
|
||||
* @param id the connection identifier
|
||||
*/
|
||||
public void checkingApiVersions(String id) {
|
||||
NodeConnectionState nodeState = nodeState(id);
|
||||
nodeState.state = ConnectionState.CHECKING_API_VERSIONS;
|
||||
}
|
||||
|
||||
/**
|
||||
* Enter the ready state for the given node.
|
||||
* @param id the connection identifier
|
||||
*/
|
||||
public void ready(String id) {
|
||||
NodeConnectionState nodeState = nodeState(id);
|
||||
nodeState.state = ConnectionState.READY;
|
||||
nodeState.authenticationException = null;
|
||||
resetReconnectBackoff(nodeState);
|
||||
}
|
||||
|
||||
/**
|
||||
* Enter the authentication failed state for the given node.
|
||||
* @param id the connection identifier
|
||||
* @param now the current time in ms
|
||||
* @param exception the authentication exception
|
||||
*/
|
||||
public void authenticationFailed(String id, long now, AuthenticationException exception) {
|
||||
NodeConnectionState nodeState = nodeState(id);
|
||||
nodeState.authenticationException = exception;
|
||||
nodeState.state = ConnectionState.AUTHENTICATION_FAILED;
|
||||
nodeState.lastConnectAttemptMs = now;
|
||||
updateReconnectBackoff(nodeState);
|
||||
}
|
||||
|
||||
/**
|
||||
* Return true if the connection is in the READY state and currently not throttled.
|
||||
*
|
||||
* @param id the connection identifier
|
||||
* @param now the current time in ms
|
||||
*/
|
||||
public boolean isReady(String id, long now) {
|
||||
return isReady(nodeState.get(id), now);
|
||||
}
|
||||
|
||||
private boolean isReady(NodeConnectionState state, long now) {
|
||||
return state != null && state.state == ConnectionState.READY && state.throttleUntilTimeMs <= now;
|
||||
}
|
||||
|
||||
/**
|
||||
* Return true if there is at least one node with connection in the READY state and not throttled. Returns false
|
||||
* otherwise.
|
||||
*
|
||||
* @param now the current time in ms
|
||||
*/
|
||||
public boolean hasReadyNodes(long now) {
|
||||
for (Map.Entry<String, NodeConnectionState> entry : nodeState.entrySet()) {
|
||||
if (isReady(entry.getValue(), now)) {
|
||||
return true;
|
||||
}
|
||||
}
|
||||
return false;
|
||||
}
|
||||
|
||||
/**
|
||||
* Return true if the connection has been established
|
||||
* @param id The id of the node to check
|
||||
*/
|
||||
public boolean isConnected(String id) {
|
||||
NodeConnectionState state = nodeState.get(id);
|
||||
return state != null && state.state.isConnected();
|
||||
}
|
||||
|
||||
/**
|
||||
* Return true if the connection has been disconnected
|
||||
* @param id The id of the node to check
|
||||
*/
|
||||
public boolean isDisconnected(String id) {
|
||||
NodeConnectionState state = nodeState.get(id);
|
||||
return state != null && state.state.isDisconnected();
|
||||
}
|
||||
|
||||
/**
|
||||
* Return authentication exception if an authentication error occurred
|
||||
* @param id The id of the node to check
|
||||
*/
|
||||
public AuthenticationException authenticationException(String id) {
|
||||
NodeConnectionState state = nodeState.get(id);
|
||||
return state != null ? state.authenticationException : null;
|
||||
}
|
||||
|
||||
/**
|
||||
* Resets the failure count for a node and sets the reconnect backoff to the base
|
||||
* value configured via reconnect.backoff.ms
|
||||
*
|
||||
* @param nodeState The node state object to update
|
||||
*/
|
||||
private void resetReconnectBackoff(NodeConnectionState nodeState) {
|
||||
nodeState.failedAttempts = 0;
|
||||
nodeState.reconnectBackoffMs = this.reconnectBackoffInitMs;
|
||||
}
|
||||
|
||||
/**
|
||||
* Update the node reconnect backoff exponentially.
|
||||
* The delay is reconnect.backoff.ms * 2**(failures - 1) * (+/- 20% random jitter)
|
||||
* Up to a (pre-jitter) maximum of reconnect.backoff.max.ms
|
||||
*
|
||||
* @param nodeState The node state object to update
|
||||
*/
|
||||
private void updateReconnectBackoff(NodeConnectionState nodeState) {
|
||||
if (this.reconnectBackoffMaxMs > this.reconnectBackoffInitMs) {
|
||||
nodeState.failedAttempts += 1;
|
||||
double backoffExp = Math.min(nodeState.failedAttempts - 1, this.reconnectBackoffMaxExp);
|
||||
double backoffFactor = Math.pow(RECONNECT_BACKOFF_EXP_BASE, backoffExp);
|
||||
long reconnectBackoffMs = (long) (this.reconnectBackoffInitMs * backoffFactor);
|
||||
// Actual backoff is randomized to avoid connection storms.
|
||||
double randomFactor = ThreadLocalRandom.current().nextDouble(0.8, 1.2);
|
||||
nodeState.reconnectBackoffMs = (long) (randomFactor * reconnectBackoffMs);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Remove the given node from the tracked connection states. The main difference between this and `disconnected`
|
||||
* is the impact on `connectionDelay`: it will be 0 after this call whereas `reconnectBackoffMs` will be taken
|
||||
* into account after `disconnected` is called.
|
||||
*
|
||||
* @param id the connection to remove
|
||||
*/
|
||||
public void remove(String id) {
|
||||
nodeState.remove(id);
|
||||
}
|
||||
|
||||
/**
|
||||
* Get the state of a given connection.
|
||||
* @param id the id of the connection
|
||||
* @return the state of our connection
|
||||
*/
|
||||
public ConnectionState connectionState(String id) {
|
||||
return nodeState(id).state;
|
||||
}
|
||||
|
||||
/**
|
||||
* Get the state of a given node.
|
||||
* @param id the connection to fetch the state for
|
||||
*/
|
||||
private NodeConnectionState nodeState(String id) {
|
||||
NodeConnectionState state = this.nodeState.get(id);
|
||||
if (state == null)
|
||||
throw new IllegalStateException("No entry found for connection " + id);
|
||||
return state;
|
||||
}
|
||||
|
||||
/**
|
||||
* The state of our connection to a node.
|
||||
*/
|
||||
private static class NodeConnectionState {
|
||||
|
||||
ConnectionState state;
|
||||
AuthenticationException authenticationException;
|
||||
long lastConnectAttemptMs;
|
||||
long failedAttempts;
|
||||
long reconnectBackoffMs;
|
||||
// Connection is being throttled if current time < throttleUntilTimeMs.
|
||||
long throttleUntilTimeMs;
|
||||
private List<InetAddress> addresses;
|
||||
private int addressIndex;
|
||||
private final String host;
|
||||
private final ClientDnsLookup clientDnsLookup;
|
||||
|
||||
private NodeConnectionState(ConnectionState state, long lastConnectAttempt, long reconnectBackoffMs,
|
||||
String host, ClientDnsLookup clientDnsLookup) {
|
||||
this.state = state;
|
||||
this.addresses = Collections.emptyList();
|
||||
this.addressIndex = -1;
|
||||
this.authenticationException = null;
|
||||
this.lastConnectAttemptMs = lastConnectAttempt;
|
||||
this.failedAttempts = 0;
|
||||
this.reconnectBackoffMs = reconnectBackoffMs;
|
||||
this.throttleUntilTimeMs = 0;
|
||||
this.host = host;
|
||||
this.clientDnsLookup = clientDnsLookup;
|
||||
}
|
||||
|
||||
public String host() {
|
||||
return host;
|
||||
}
|
||||
|
||||
/**
|
||||
* Fetches the current selected IP address for this node, resolving {@link #host()} if necessary.
|
||||
* @return the selected address
|
||||
* @throws UnknownHostException if resolving {@link #host()} fails
|
||||
*/
|
||||
private InetAddress currentAddress() throws UnknownHostException {
|
||||
if (addresses.isEmpty()) {
|
||||
// (Re-)initialize list
|
||||
addresses = ClientUtils.resolve(host, clientDnsLookup);
|
||||
addressIndex = 0;
|
||||
}
|
||||
|
||||
return addresses.get(addressIndex);
|
||||
}
|
||||
|
||||
/**
|
||||
* Jumps to the next available resolved address for this node. If no other addresses are available, marks the
|
||||
* list to be refreshed on the next {@link #currentAddress()} call.
|
||||
*/
|
||||
private void moveToNextAddress() {
|
||||
if (addresses.isEmpty())
|
||||
return; // Avoid div0. List will initialize on next currentAddress() call
|
||||
|
||||
addressIndex = (addressIndex + 1) % addresses.size();
|
||||
if (addressIndex == 0)
|
||||
addresses = Collections.emptyList(); // Exhausted list. Re-resolve on next currentAddress() call
|
||||
}
|
||||
|
||||
public String toString() {
|
||||
return "NodeState(" + state + ", " + lastConnectAttemptMs + ", " + failedAttempts + ", " + throttleUntilTimeMs + ")";
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -0,0 +1,168 @@
|
||||
/*
|
||||
* Licensed to the Apache Software Foundation (ASF) under one or more
|
||||
* contributor license agreements. See the NOTICE file distributed with
|
||||
* this work for additional information regarding copyright ownership.
|
||||
* The ASF licenses this file to You under the Apache License, Version 2.0
|
||||
* (the "License"); you may not use this file except in compliance with
|
||||
* the License. You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
package org.apache.kafka.clients;
|
||||
|
||||
import org.apache.kafka.common.config.AbstractConfig;
|
||||
import org.apache.kafka.common.security.auth.SecurityProtocol;
|
||||
import org.apache.kafka.common.utils.Utils;
|
||||
import org.slf4j.Logger;
|
||||
import org.slf4j.LoggerFactory;
|
||||
|
||||
import java.util.HashMap;
|
||||
import java.util.Map;
|
||||
|
||||
/**
|
||||
* Configurations shared by Kafka client applications: producer, consumer, connect, etc.
|
||||
*/
|
||||
public class CommonClientConfigs {
|
||||
private static final Logger log = LoggerFactory.getLogger(CommonClientConfigs.class);
|
||||
|
||||
/*
|
||||
* NOTE: DO NOT CHANGE EITHER CONFIG NAMES AS THESE ARE PART OF THE PUBLIC API AND CHANGE WILL BREAK USER CODE.
|
||||
*/
|
||||
|
||||
public static final String BOOTSTRAP_SERVERS_CONFIG = "bootstrap.servers";
|
||||
public static final String BOOTSTRAP_SERVERS_DOC = "A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping—this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form "
|
||||
+ "<code>host1:port1,host2:port2,...</code>. Since these servers are just used for the initial connection to "
|
||||
+ "discover the full cluster membership (which may change dynamically), this list need not contain the full set of "
|
||||
+ "servers (you may want more than one, though, in case a server is down).";
|
||||
|
||||
public static final String CLIENT_DNS_LOOKUP_CONFIG = "client.dns.lookup";
|
||||
public static final String CLIENT_DNS_LOOKUP_DOC = "Controls how the client uses DNS lookups. If set to <code>use_all_dns_ips</code> then, when the lookup returns multiple IP addresses for a hostname,"
|
||||
+ " they will all be attempted to connect to before failing the connection. Applies to both bootstrap and advertised servers."
|
||||
+ " If the value is <code>resolve_canonical_bootstrap_servers_only</code> each entry will be resolved and expanded into a list of canonical names.";
|
||||
|
||||
public static final String METADATA_MAX_AGE_CONFIG = "metadata.max.age.ms";
|
||||
public static final String METADATA_MAX_AGE_DOC = "The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions.";
|
||||
|
||||
public static final String SEND_BUFFER_CONFIG = "send.buffer.bytes";
|
||||
public static final String SEND_BUFFER_DOC = "The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used.";
|
||||
public static final int SEND_BUFFER_LOWER_BOUND = -1;
|
||||
|
||||
public static final String RECEIVE_BUFFER_CONFIG = "receive.buffer.bytes";
|
||||
public static final String RECEIVE_BUFFER_DOC = "The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used.";
|
||||
public static final int RECEIVE_BUFFER_LOWER_BOUND = -1;
|
||||
|
||||
public static final String CLIENT_ID_CONFIG = "client.id";
|
||||
public static final String CLIENT_ID_DOC = "An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.";
|
||||
|
||||
public static final String CLIENT_RACK_CONFIG = "client.rack";
|
||||
public static final String CLIENT_RACK_DOC = "A rack identifier for this client. This can be any string value which indicates where this client is physically located. It corresponds with the broker config 'broker.rack'";
|
||||
|
||||
public static final String RECONNECT_BACKOFF_MS_CONFIG = "reconnect.backoff.ms";
|
||||
public static final String RECONNECT_BACKOFF_MS_DOC = "The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker.";
|
||||
|
||||
public static final String RECONNECT_BACKOFF_MAX_MS_CONFIG = "reconnect.backoff.max.ms";
|
||||
public static final String RECONNECT_BACKOFF_MAX_MS_DOC = "The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms.";
|
||||
|
||||
public static final String RETRIES_CONFIG = "retries";
|
||||
public static final String RETRIES_DOC = "Setting a value greater than zero will cause the client to resend any request that fails with a potentially transient error.";
|
||||
|
||||
public static final String RETRY_BACKOFF_MS_CONFIG = "retry.backoff.ms";
|
||||
public static final String RETRY_BACKOFF_MS_DOC = "The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.";
|
||||
|
||||
public static final String METRICS_SAMPLE_WINDOW_MS_CONFIG = "metrics.sample.window.ms";
|
||||
public static final String METRICS_SAMPLE_WINDOW_MS_DOC = "The window of time a metrics sample is computed over.";
|
||||
|
||||
public static final String METRICS_NUM_SAMPLES_CONFIG = "metrics.num.samples";
|
||||
public static final String METRICS_NUM_SAMPLES_DOC = "The number of samples maintained to compute metrics.";
|
||||
|
||||
public static final String METRICS_RECORDING_LEVEL_CONFIG = "metrics.recording.level";
|
||||
public static final String METRICS_RECORDING_LEVEL_DOC = "The highest recording level for metrics.";
|
||||
|
||||
public static final String METRIC_REPORTER_CLASSES_CONFIG = "metric.reporters";
|
||||
public static final String METRIC_REPORTER_CLASSES_DOC = "A list of classes to use as metrics reporters. Implementing the <code>org.apache.kafka.common.metrics.MetricsReporter</code> interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.";
|
||||
|
||||
public static final String SECURITY_PROTOCOL_CONFIG = "security.protocol";
|
||||
public static final String SECURITY_PROTOCOL_DOC = "Protocol used to communicate with brokers. Valid values are: " +
|
||||
Utils.join(SecurityProtocol.names(), ", ") + ".";
|
||||
public static final String DEFAULT_SECURITY_PROTOCOL = "PLAINTEXT";
|
||||
|
||||
public static final String CONNECTIONS_MAX_IDLE_MS_CONFIG = "connections.max.idle.ms";
|
||||
public static final String CONNECTIONS_MAX_IDLE_MS_DOC = "Close idle connections after the number of milliseconds specified by this config.";
|
||||
|
||||
public static final String REQUEST_TIMEOUT_MS_CONFIG = "request.timeout.ms";
|
||||
public static final String REQUEST_TIMEOUT_MS_DOC = "The configuration controls the maximum amount of time the client will wait "
|
||||
+ "for the response of a request. If the response is not received before the timeout "
|
||||
+ "elapses the client will resend the request if necessary or fail the request if "
|
||||
+ "retries are exhausted.";
|
||||
|
||||
public static final String GROUP_ID_CONFIG = "group.id";
|
||||
public static final String GROUP_ID_DOC = "A unique string that identifies the consumer group this consumer belongs to. This property is required if the consumer uses either the group management functionality by using <code>subscribe(topic)</code> or the Kafka-based offset management strategy.";
|
||||
|
||||
public static final String GROUP_INSTANCE_ID_CONFIG = "group.instance.id";
|
||||
public static final String GROUP_INSTANCE_ID_DOC = "A unique identifier of the consumer instance provided by the end user. "
|
||||
+ "Only non-empty strings are permitted. If set, the consumer is treated as a static member, "
|
||||
+ "which means that only one instance with this ID is allowed in the consumer group at any time. "
|
||||
+ "This can be used in combination with a larger session timeout to avoid group rebalances caused by transient unavailability "
|
||||
+ "(e.g. process restarts). If not set, the consumer will join the group as a dynamic member, which is the traditional behavior.";
|
||||
|
||||
public static final String MAX_POLL_INTERVAL_MS_CONFIG = "max.poll.interval.ms";
|
||||
public static final String MAX_POLL_INTERVAL_MS_DOC = "The maximum delay between invocations of poll() when using "
|
||||
+ "consumer group management. This places an upper bound on the amount of time that the consumer can be idle "
|
||||
+ "before fetching more records. If poll() is not called before expiration of this timeout, then the consumer "
|
||||
+ "is considered failed and the group will rebalance in order to reassign the partitions to another member. "
|
||||
+ "For consumers using a non-null <code>group.instance.id</code> which reach this timeout, partitions will not be immediately reassigned. "
|
||||
+ "Instead, the consumer will stop sending heartbeats and partitions will be reassigned "
|
||||
+ "after expiration of <code>session.timeout.ms</code>. This mirrors the behavior of a static consumer which has shutdown.";
|
||||
|
||||
public static final String REBALANCE_TIMEOUT_MS_CONFIG = "rebalance.timeout.ms";
|
||||
public static final String REBALANCE_TIMEOUT_MS_DOC = "The maximum allowed time for each worker to join the group "
|
||||
+ "once a rebalance has begun. This is basically a limit on the amount of time needed for all tasks to "
|
||||
+ "flush any pending data and commit offsets. If the timeout is exceeded, then the worker will be removed "
|
||||
+ "from the group, which will cause offset commit failures.";
|
||||
|
||||
public static final String SESSION_TIMEOUT_MS_CONFIG = "session.timeout.ms";
|
||||
public static final String SESSION_TIMEOUT_MS_DOC = "The timeout used to detect client failures when using "
|
||||
+ "Kafka's group management facility. The client sends periodic heartbeats to indicate its liveness "
|
||||
+ "to the broker. If no heartbeats are received by the broker before the expiration of this session timeout, "
|
||||
+ "then the broker will remove this client from the group and initiate a rebalance. Note that the value "
|
||||
+ "must be in the allowable range as configured in the broker configuration by <code>group.min.session.timeout.ms</code> "
|
||||
+ "and <code>group.max.session.timeout.ms</code>.";
|
||||
|
||||
public static final String HEARTBEAT_INTERVAL_MS_CONFIG = "heartbeat.interval.ms";
|
||||
public static final String HEARTBEAT_INTERVAL_MS_DOC = "The expected time between heartbeats to the consumer "
|
||||
+ "coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the "
|
||||
+ "consumer's session stays active and to facilitate rebalancing when new consumers join or leave the group. "
|
||||
+ "The value must be set lower than <code>session.timeout.ms</code>, but typically should be set no higher "
|
||||
+ "than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances.";
|
||||
|
||||
public static final String DEFAULT_API_TIMEOUT_MS_CONFIG = "default.api.timeout.ms";
|
||||
public static final String DEFAULT_API_TIMEOUT_MS_DOC = "Specifies the timeout (in milliseconds) for client APIs. " +
|
||||
"This configuration is used as the default timeout for all client operations that do not specify a <code>timeout</code> parameter.";
|
||||
|
||||
    /**
     * Postprocess the configuration so that exponential backoff is disabled when reconnect backoff
     * is explicitly configured but the maximum reconnect backoff is not explicitly configured.
     *
     * @param config       The config object.
     * @param parsedValues The parsedValues as provided to postProcessParsedConfig.
     *
     * @return The new values which have been set as described in postProcessParsedConfig.
     */
    public static Map<String, Object> postProcessReconnectBackoffConfigs(AbstractConfig config,
                                                                         Map<String, Object> parsedValues) {
        HashMap<String, Object> rval = new HashMap<>();
        if ((!config.originals().containsKey(RECONNECT_BACKOFF_MAX_MS_CONFIG)) &&
                config.originals().containsKey(RECONNECT_BACKOFF_MS_CONFIG)) {
            log.debug("Disabling exponential reconnect backoff because {} is set, but {} is not.",
                    RECONNECT_BACKOFF_MS_CONFIG, RECONNECT_BACKOFF_MAX_MS_CONFIG);
            rval.put(RECONNECT_BACKOFF_MAX_MS_CONFIG, parsedValues.get(RECONNECT_BACKOFF_MS_CONFIG));
        }
        return rval;
    }
}
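For context on how the group-membership timeouts documented above interact, here is a minimal consumer configuration sketch. The broker address, group id, and the concrete millisecond values are illustrative assumptions, not values taken from this change.

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class TimeoutConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");           // assumed group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        // session.timeout.ms bounds how long the broker waits for heartbeats before
        // evicting the member; heartbeat.interval.ms should stay well below it
        // (roughly a third, as the doc string above suggests).
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 45_000);
        props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, 15_000);

        // max.poll.interval.ms bounds the time between poll() calls; exceeding it makes
        // the member leave the group even though heartbeats are still being sent.
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 300_000);

        // default.api.timeout.ms is the fallback timeout for client calls that do not
        // take an explicit timeout parameter.
        props.put(ConsumerConfig.DEFAULT_API_TIMEOUT_MS_CONFIG, 60_000);

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // subscribe/poll loop would go here
        }
    }
}
```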
@@ -0,0 +1,38 @@
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.kafka.clients;

/**
 * The states of a node connection
 *
 * DISCONNECTED: connection has not been successfully established yet
 * CONNECTING: connection is in progress
 * CHECKING_API_VERSIONS: connection has been established and the API versions check is in progress. Failure of this check will cause the connection to close
 * READY: connection is ready to send requests
 * AUTHENTICATION_FAILED: connection failed due to an authentication error
 */
public enum ConnectionState {
    DISCONNECTED, CONNECTING, CHECKING_API_VERSIONS, READY, AUTHENTICATION_FAILED;

    public boolean isDisconnected() {
        return this == AUTHENTICATION_FAILED || this == DISCONNECTED;
    }

    public boolean isConnected() {
        return this == CHECKING_API_VERSIONS || this == READY;
    }
}
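A minimal sketch of how connection-managing code might consult this enum; the ConnectionTracker class below is hypothetical and only illustrates the distinction between isConnected() (socket established, possibly still checking API versions), READY (requests may be sent), and isDisconnected().

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.ConnectionState;

// Hypothetical helper, only to illustrate how ConnectionState is typically queried.
public class ConnectionTracker {
    private final Map<Integer, ConnectionState> states = new HashMap<>();

    public void transition(int nodeId, ConnectionState next) {
        states.put(nodeId, next);
    }

    /** Requests should only be sent once the API versions check has completed. */
    public boolean canSendRequests(int nodeId) {
        return states.getOrDefault(nodeId, ConnectionState.DISCONNECTED) == ConnectionState.READY;
    }

    /** Both DISCONNECTED and AUTHENTICATION_FAILED count as disconnected. */
    public boolean needsReconnect(int nodeId) {
        return states.getOrDefault(nodeId, ConnectionState.DISCONNECTED).isDisconnected();
    }
}
```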
@@ -0,0 +1,484 @@
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.kafka.clients;

import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.protocol.Errors;
import org.apache.kafka.common.requests.FetchMetadata;
import org.apache.kafka.common.requests.FetchRequest.PartitionData;
import org.apache.kafka.common.requests.FetchResponse;
import org.apache.kafka.common.utils.LogContext;
import org.apache.kafka.common.utils.Utils;
import org.slf4j.Logger;

import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;
import java.util.Set;

import static org.apache.kafka.common.requests.FetchMetadata.INVALID_SESSION_ID;

/**
 * FetchSessionHandler maintains the fetch session state for connecting to a broker.
 *
 * Using the protocol outlined by KIP-227, clients can create incremental fetch sessions.
 * These sessions allow the client to fetch information about a set of partitions over
 * and over, without explicitly enumerating all the partitions in the request and the
 * response.
 *
 * FetchSessionHandler tracks the partitions which are in the session. It also
 * determines which partitions need to be included in each fetch request, and what
 * the attached fetch session metadata should be for each request. The corresponding
 * class on the receiving broker side is FetchManager.
 */
public class FetchSessionHandler {
    private final Logger log;

    private final int node;

    /**
     * The metadata for the next fetch request.
     */
    private FetchMetadata nextMetadata = FetchMetadata.INITIAL;

    public FetchSessionHandler(LogContext logContext, int node) {
        this.log = logContext.logger(FetchSessionHandler.class);
        this.node = node;
    }

    /**
     * All of the partitions which exist in the fetch request session.
     */
    private LinkedHashMap<TopicPartition, PartitionData> sessionPartitions =
        new LinkedHashMap<>(0);

    public static class FetchRequestData {
        /**
         * The partitions to send in the fetch request.
         */
        private final Map<TopicPartition, PartitionData> toSend;

        /**
         * The partitions to send in the request's "forget" list.
         */
        private final List<TopicPartition> toForget;

        /**
         * All of the partitions which exist in the fetch request session.
         */
        private final Map<TopicPartition, PartitionData> sessionPartitions;

        /**
         * The metadata to use in this fetch request.
         */
        private final FetchMetadata metadata;

        FetchRequestData(Map<TopicPartition, PartitionData> toSend,
                         List<TopicPartition> toForget,
                         Map<TopicPartition, PartitionData> sessionPartitions,
                         FetchMetadata metadata) {
            this.toSend = toSend;
            this.toForget = toForget;
            this.sessionPartitions = sessionPartitions;
            this.metadata = metadata;
        }

        /**
         * Get the set of partitions to send in this fetch request.
         */
        public Map<TopicPartition, PartitionData> toSend() {
            return toSend;
        }

        /**
         * Get a list of partitions to forget in this fetch request.
         */
        public List<TopicPartition> toForget() {
            return toForget;
        }

        /**
         * Get the full set of partitions involved in this fetch request.
         */
        public Map<TopicPartition, PartitionData> sessionPartitions() {
            return sessionPartitions;
        }

        public FetchMetadata metadata() {
            return metadata;
        }

        @Override
        public String toString() {
            if (metadata.isFull()) {
                StringBuilder bld = new StringBuilder("FullFetchRequest(");
                String prefix = "";
                for (TopicPartition partition : toSend.keySet()) {
                    bld.append(prefix);
                    bld.append(partition);
                    prefix = ", ";
                }
                bld.append(")");
                return bld.toString();
            } else {
                StringBuilder bld = new StringBuilder("IncrementalFetchRequest(toSend=(");
                String prefix = "";
                for (TopicPartition partition : toSend.keySet()) {
                    bld.append(prefix);
                    bld.append(partition);
                    prefix = ", ";
                }
                bld.append("), toForget=(");
                prefix = "";
                for (TopicPartition partition : toForget) {
                    bld.append(prefix);
                    bld.append(partition);
                    prefix = ", ";
                }
                bld.append("), implied=(");
                prefix = "";
                for (TopicPartition partition : sessionPartitions.keySet()) {
                    if (!toSend.containsKey(partition)) {
                        bld.append(prefix);
                        bld.append(partition);
                        prefix = ", ";
                    }
                }
                bld.append("))");
                return bld.toString();
            }
        }
    }

    public class Builder {
        /**
         * The next partitions which we want to fetch.
         *
         * It is important to maintain the insertion order of this list by using a LinkedHashMap rather
         * than a regular Map.
         *
         * One reason is that when dealing with FULL fetch requests, if there is not enough response
         * space to return data from all partitions, the server will only return data from partitions
         * early in this list.
         *
         * Another reason is because we make use of the list ordering to optimize the preparation of
         * incremental fetch requests (see below).
         */
        private LinkedHashMap<TopicPartition, PartitionData> next;
        private final boolean copySessionPartitions;

        Builder() {
            this.next = new LinkedHashMap<>();
            this.copySessionPartitions = true;
        }

        Builder(int initialSize, boolean copySessionPartitions) {
            this.next = new LinkedHashMap<>(initialSize);
            this.copySessionPartitions = copySessionPartitions;
        }

        /**
         * Mark that we want data from this partition in the upcoming fetch.
         */
        public void add(TopicPartition topicPartition, PartitionData data) {
            next.put(topicPartition, data);
        }

        public FetchRequestData build() {
            if (nextMetadata.isFull()) {
                if (log.isDebugEnabled()) {
                    log.debug("Built full fetch {} for node {} with {}.",
                        nextMetadata, node, partitionsToLogString(next.keySet()));
                }
                sessionPartitions = next;
                next = null;
                Map<TopicPartition, PartitionData> toSend =
                    Collections.unmodifiableMap(new LinkedHashMap<>(sessionPartitions));
                return new FetchRequestData(toSend, Collections.emptyList(), toSend, nextMetadata);
            }

            List<TopicPartition> added = new ArrayList<>();
            List<TopicPartition> removed = new ArrayList<>();
            List<TopicPartition> altered = new ArrayList<>();
            for (Iterator<Entry<TopicPartition, PartitionData>> iter =
                     sessionPartitions.entrySet().iterator(); iter.hasNext(); ) {
                Entry<TopicPartition, PartitionData> entry = iter.next();
                TopicPartition topicPartition = entry.getKey();
                PartitionData prevData = entry.getValue();
                PartitionData nextData = next.remove(topicPartition);
                if (nextData != null) {
                    if (!prevData.equals(nextData)) {
                        // Re-add the altered partition to the end of 'next'
                        next.put(topicPartition, nextData);
                        entry.setValue(nextData);
                        altered.add(topicPartition);
                    }
                } else {
                    // Remove this partition from the session.
                    iter.remove();
                    // Indicate that we no longer want to listen to this partition.
                    removed.add(topicPartition);
                }
            }
            // Add any new partitions to the session.
            for (Entry<TopicPartition, PartitionData> entry : next.entrySet()) {
                TopicPartition topicPartition = entry.getKey();
                PartitionData nextData = entry.getValue();
                if (sessionPartitions.containsKey(topicPartition)) {
                    // In the previous loop, all the partitions which existed in both sessionPartitions
                    // and next were moved to the end of next, or removed from next. Therefore,
                    // once we hit one of them, we know there are no more unseen entries to look
                    // at in next.
                    break;
                }
                sessionPartitions.put(topicPartition, nextData);
                added.add(topicPartition);
            }
            if (log.isDebugEnabled()) {
                log.debug("Built incremental fetch {} for node {}. Added {}, altered {}, removed {} " +
                    "out of {}", nextMetadata, node, partitionsToLogString(added),
                    partitionsToLogString(altered), partitionsToLogString(removed),
                    partitionsToLogString(sessionPartitions.keySet()));
            }
            Map<TopicPartition, PartitionData> toSend = Collections.unmodifiableMap(next);
            Map<TopicPartition, PartitionData> curSessionPartitions = copySessionPartitions
                ? Collections.unmodifiableMap(new LinkedHashMap<>(sessionPartitions))
                : Collections.unmodifiableMap(sessionPartitions);
            next = null;
            return new FetchRequestData(toSend, Collections.unmodifiableList(removed),
                curSessionPartitions, nextMetadata);
        }
    }

    public Builder newBuilder() {
        return new Builder();
    }

    /**
     * A builder that allows for presizing the PartitionData hashmap, and avoiding making a
     * secondary copy of the sessionPartitions, in cases where this is not necessary.
     * This builder is primarily for use by the Replica Fetcher.
     *
     * @param size                  the initial size of the PartitionData hashmap
     * @param copySessionPartitions boolean denoting whether the builder should make a deep copy of
     *                              session partitions
     */
    public Builder newBuilder(int size, boolean copySessionPartitions) {
        return new Builder(size, copySessionPartitions);
    }

    private String partitionsToLogString(Collection<TopicPartition> partitions) {
        if (!log.isTraceEnabled()) {
            return String.format("%d partition(s)", partitions.size());
        }
        return "(" + Utils.join(partitions, ", ") + ")";
    }

    /**
     * Return the partitions which are expected to be in a particular set, but which are not.
     *
     * @param toFind   The partitions to look for.
     * @param toSearch The set of partitions to search.
     * @return         The partitions in toFind which are not present in toSearch
     *                 (empty if all of them were found).
     */
    static Set<TopicPartition> findMissing(Set<TopicPartition> toFind, Set<TopicPartition> toSearch) {
        Set<TopicPartition> ret = new LinkedHashSet<>();
        for (TopicPartition partition : toFind) {
            if (!toSearch.contains(partition)) {
                ret.add(partition);
            }
        }
        return ret;
    }

    /**
     * Verify that a full fetch response contains all the partitions in the fetch session.
     *
     * @param response The response.
     * @return         null if the full fetch response partitions are valid; a string
     *                 describing the problem otherwise.
     */
    String verifyFullFetchResponsePartitions(FetchResponse<?> response) {
        StringBuilder bld = new StringBuilder();
        Set<TopicPartition> extra =
            findMissing(response.responseData().keySet(), sessionPartitions.keySet());
        Set<TopicPartition> omitted =
            findMissing(sessionPartitions.keySet(), response.responseData().keySet());
        if (!omitted.isEmpty()) {
            bld.append("omitted=(").append(Utils.join(omitted, ", ")).append(", ");
        }
        if (!extra.isEmpty()) {
            bld.append("extra=(").append(Utils.join(extra, ", ")).append(", ");
        }
        if ((!omitted.isEmpty()) || (!extra.isEmpty())) {
            bld.append("response=(").append(Utils.join(response.responseData().keySet(), ", ")).append(")");
            return bld.toString();
        }
        return null;
    }

    /**
     * Verify that the partitions in an incremental fetch response are contained in the session.
     *
     * @param response The response.
     * @return         null if the incremental fetch response partitions are valid; a string
     *                 describing the problem otherwise.
     */
    String verifyIncrementalFetchResponsePartitions(FetchResponse<?> response) {
        Set<TopicPartition> extra =
            findMissing(response.responseData().keySet(), sessionPartitions.keySet());
        if (!extra.isEmpty()) {
            StringBuilder bld = new StringBuilder();
            bld.append("extra=(").append(Utils.join(extra, ", ")).append("), ");
            bld.append("response=(").append(
                Utils.join(response.responseData().keySet(), ", ")).append("), ");
            return bld.toString();
        }
        return null;
    }

    /**
     * Create a string describing the partitions in a FetchResponse.
     *
     * @param response The FetchResponse.
     * @return         The string to log.
     */
    private String responseDataToLogString(FetchResponse<?> response) {
        if (!log.isTraceEnabled()) {
            int implied = sessionPartitions.size() - response.responseData().size();
            if (implied > 0) {
                return String.format(" with %d response partition(s), %d implied partition(s)",
                    response.responseData().size(), implied);
            } else {
                return String.format(" with %d response partition(s)",
                    response.responseData().size());
            }
        }
        StringBuilder bld = new StringBuilder();
        bld.append(" with response=(").
            append(Utils.join(response.responseData().keySet(), ", ")).
            append(")");
        String prefix = ", implied=(";
        String suffix = "";
        for (TopicPartition partition : sessionPartitions.keySet()) {
            if (!response.responseData().containsKey(partition)) {
                bld.append(prefix);
                bld.append(partition);
                prefix = ", ";
                suffix = ")";
            }
        }
        bld.append(suffix);
        return bld.toString();
    }

    /**
     * Handle the fetch response.
     *
     * @param response The response.
     * @return         True if the response is well-formed; false if it can't be processed
     *                 because of missing or unexpected partitions.
     */
    public boolean handleResponse(FetchResponse<?> response) {
        if (response.error() != Errors.NONE) {
            log.info("Node {} was unable to process the fetch request with {}: {}.",
                node, nextMetadata, response.error());
            if (response.error() == Errors.FETCH_SESSION_ID_NOT_FOUND) {
                nextMetadata = FetchMetadata.INITIAL;
            } else {
                nextMetadata = nextMetadata.nextCloseExisting();
            }
            return false;
        }
        if (nextMetadata.isFull()) {
            if (response.responseData().isEmpty() && response.throttleTimeMs() > 0) {
                // Normally, an empty full fetch response would be invalid. However, KIP-219
                // specifies that if the broker wants to throttle the client, it will respond
                // to a full fetch request with an empty response and a throttleTimeMs
                // value set. We don't want to log this with a warning, since it's not an error.
                // However, the empty full fetch response can't be processed, so it's still appropriate
                // to return false here.
                if (log.isDebugEnabled()) {
                    log.debug("Node {} sent an empty full fetch response to indicate that this " +
                        "client should be throttled for {} ms.", node, response.throttleTimeMs());
                }
                nextMetadata = FetchMetadata.INITIAL;
                return false;
            }
            String problem = verifyFullFetchResponsePartitions(response);
            if (problem != null) {
                log.info("Node {} sent an invalid full fetch response with {}", node, problem);
                nextMetadata = FetchMetadata.INITIAL;
                return false;
            } else if (response.sessionId() == INVALID_SESSION_ID) {
                if (log.isDebugEnabled())
                    log.debug("Node {} sent a full fetch response{}", node, responseDataToLogString(response));
                nextMetadata = FetchMetadata.INITIAL;
                return true;
            } else {
                // The server created a new incremental fetch session.
                if (log.isDebugEnabled())
                    log.debug("Node {} sent a full fetch response that created a new incremental " +
                        "fetch session {}{}", node, response.sessionId(), responseDataToLogString(response));
                nextMetadata = FetchMetadata.newIncremental(response.sessionId());
                return true;
            }
        } else {
            String problem = verifyIncrementalFetchResponsePartitions(response);
            if (problem != null) {
                log.info("Node {} sent an invalid incremental fetch response with {}", node, problem);
                nextMetadata = nextMetadata.nextCloseExisting();
                return false;
            } else if (response.sessionId() == INVALID_SESSION_ID) {
                // The incremental fetch session was closed by the server.
                if (log.isDebugEnabled())
                    log.debug("Node {} sent an incremental fetch response closing session {}{}",
                        node, nextMetadata.sessionId(), responseDataToLogString(response));
                nextMetadata = FetchMetadata.INITIAL;
                return true;
            } else {
                // The incremental fetch session was continued by the server.
                // We don't have to do anything special here to support KIP-219, since an empty incremental
                // fetch request is perfectly valid.
                if (log.isDebugEnabled())
                    log.debug("Node {} sent an incremental fetch response with throttleTimeMs = {} " +
                        "for session {}{}", node, response.throttleTimeMs(), response.sessionId(),
                        responseDataToLogString(response));
                nextMetadata = nextMetadata.nextIncremental();
                return true;
            }
        }
    }

    /**
     * Handle an error sending the prepared request.
     *
     * When a network error occurs, we close any existing fetch session on our next request,
     * and try to create a new session.
     *
     * @param t The exception.
     */
    public void handleError(Throwable t) {
        log.info("Error sending fetch request {} to node {}: {}.", nextMetadata, node, t);
        nextMetadata = nextMetadata.nextCloseExisting();
    }
}
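A minimal sketch of the request/response loop a client would drive around FetchSessionHandler, assuming some transport that actually sends the fetch request and returns a FetchResponse (the RequestSender interface and FetchLoopSketch class below are hypothetical). It shows where newBuilder(), build(), handleResponse(), and handleError() fit.

```java
import java.util.Map;

import org.apache.kafka.clients.FetchSessionHandler;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.requests.FetchRequest;
import org.apache.kafka.common.requests.FetchResponse;
import org.apache.kafka.common.utils.LogContext;

public class FetchLoopSketch {
    private final FetchSessionHandler handler;

    public FetchLoopSketch(int nodeId) {
        // One handler per broker node; it keeps the session metadata between requests.
        this.handler = new FetchSessionHandler(new LogContext("[Sketch] "), nodeId);
    }

    /**
     * One iteration of the fetch loop. The caller supplies the partitions it currently
     * wants (with their PartitionData) and a way to send the request; both are
     * assumptions of this sketch, not part of the class above.
     */
    public void fetchOnce(Map<TopicPartition, FetchRequest.PartitionData> wanted,
                          RequestSender sender) {
        FetchSessionHandler.Builder builder = handler.newBuilder();
        wanted.forEach(builder::add);

        // build() decides which partitions go on the wire (all of them for a full request,
        // only added/altered ones for an incremental request) and which go in the forget list.
        FetchSessionHandler.FetchRequestData data = builder.build();

        try {
            FetchResponse<?> response = sender.send(data);
            if (!handler.handleResponse(response)) {
                // The response could not be applied (for example a session-level error);
                // the handler has already adjusted its metadata so the next build()
                // falls back to a full request where necessary.
                return;
            }
            // process response.responseData() here
        } catch (Exception e) {
            // A send error closes the fetch session on the next request.
            handler.handleError(e);
        }
    }

    /** Hypothetical transport abstraction used only for this sketch. */
    public interface RequestSender {
        FetchResponse<?> send(FetchSessionHandler.FetchRequestData data) throws Exception;
    }
}
```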
Some files were not shown because too many files have changed in this diff.