Initialize version 3.0.0

zengqiao
2022-08-18 17:04:05 +08:00
parent 462303fca0
commit 51832385b1
2446 changed files with 93177 additions and 127211 deletions

km-dist/helm/.helmignore Normal file

@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

km-dist/helm/Chart.yaml Normal file

@@ -0,0 +1,27 @@
apiVersion: v2
name: knowstreaming-manager
description: A Helm chart for Kubernetes
type: application
version: 0.1.0
maintainers:
- email: didicloud@didiglobal.com
name: didicloud
appVersion: "1.0.0"
dependencies:
- name: knowstreaming-web
version: 0.1.0
repository: https://docker.nginx.com/
condition: knowstreaming-manager.knowstreaming-web.enabled,web.enabled
- name: elasticsearch
version: 7.6.0
repository: https://docker.elastic.co/
condition: knowstreaming-manager.elasticsearch.enabled,elasticsearch.enabled
- name: ksmysql
version: 5.7.38-1
repository: https://docker.mysql.co/
condition: knowstreaming-manager.ksmysql.enabled,ksmysql.enabled

km-dist/helm/README.md Normal file

@@ -0,0 +1,22 @@
- [Requirements](#requirements)
- [Installing](#installing)
## Requirements
* Kubernetes >= 1.14
* [Helm][] >= 2.17.0
## Installing
* The default configuration installs everything: elasticsearch + mysql + knowstreaming
* If you already have an elasticsearch (7.6.x) and mysql (5.7), you only need to adjust a few parameters in values.yaml (a minimal override sketch follows below);
* Install it:
- with Helm 3: `helm install knowstreaming knowstreaming-manager/`
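For example, a minimal values override for reusing existing services could look like the sketch below. The `enabled` flags follow the dependency conditions declared in `Chart.yaml`; the connection settings for the external Elasticsearch and MySQL are not shown here and must be filled in according to `values.yaml`:
```yaml
# my-values.yaml -- sketch only; verify the exact keys against values.yaml
elasticsearch:
  enabled: false   # reuse an existing Elasticsearch 7.6.x cluster
ksmysql:
  enabled: false   # reuse an existing MySQL 5.7 instance
```
It can then be applied with `helm install knowstreaming knowstreaming-manager/ --values my-values.yaml`.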


@@ -0,0 +1,12 @@
apiVersion: v1
description: Official Elastic helm chart for Elasticsearch
home: https://github.com/elastic/helm-charts
maintainers:
- email: helm-charts@elastic.co
name: Elastic
name: elasticsearch
version: 7.6.0
appVersion: 7.6.0
sources:
- https://github.com/elastic/elasticsearch
icon: https://helm.elastic.co/icons/elasticsearch.png


@@ -0,0 +1 @@
include ../helpers/common.mk


@@ -0,0 +1,484 @@
# Elasticsearch Helm Chart
[![Build Status](https://img.shields.io/jenkins/s/https/devops-ci.elastic.co/job/elastic+helm-charts+main.svg)](https://devops-ci.elastic.co/job/elastic+helm-charts+main/) [![Artifact HUB](https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/elastic)](https://artifacthub.io/packages/search?repo=elastic)
This Helm chart is a lightweight way to configure and run our official
[Elasticsearch Docker image][].
<!-- development warning placeholder -->
**Warning**: This branch is used for development; please use the latest [7.x][] release for a released version.
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
- [Requirements](#requirements)
- [Installing](#installing)
- [Install released version using Helm repository](#install-released-version-using-helm-repository)
- [Install development version using main branch](#install-development-version-using-main-branch)
- [Upgrading](#upgrading)
- [Usage notes](#usage-notes)
- [Configuration](#configuration)
- [Deprecated](#deprecated)
- [FAQ](#faq)
- [How to deploy this chart on a specific K8S distribution?](#how-to-deploy-this-chart-on-a-specific-k8s-distribution)
- [How to deploy dedicated nodes types?](#how-to-deploy-dedicated-nodes-types)
- [Coordinating nodes](#coordinating-nodes)
- [Clustering and Node Discovery](#clustering-and-node-discovery)
- [How to deploy clusters with security (authentication and TLS) enabled?](#how-to-deploy-clusters-with-security-authentication-and-tls-enabled)
- [How to migrate from helm/charts stable chart?](#how-to-migrate-from-helmcharts-stable-chart)
- [How to install plugins?](#how-to-install-plugins)
- [How to use the keystore?](#how-to-use-the-keystore)
- [Basic example](#basic-example)
- [Multiple keys](#multiple-keys)
- [Custom paths and keys](#custom-paths-and-keys)
- [How to enable snapshotting?](#how-to-enable-snapshotting)
- [How to configure templates post-deployment?](#how-to-configure-templates-post-deployment)
- [Contributing](#contributing)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
<!-- Use this to update TOC: -->
<!-- docker run --entrypoint doctoc --rm -it -v $(pwd):/usr/src jorgeandrada/doctoc README.md --github --no-title -->
## Requirements
* Kubernetes >= 1.14
* [Helm][] >= 2.17.0
* Minimum cluster requirements include the following to run this chart with
default settings. All of these settings are configurable.
* Three Kubernetes nodes to respect the default "hard" affinity settings
* 1GB of RAM for the JVM heap
See [supported configurations][] for more details.
## Installing
### Install released version using Helm repository
* Add the Elastic Helm charts repo:
`helm repo add elastic https://helm.elastic.co`
* Install it:
- with Helm 3: `helm install elasticsearch elastic/elasticsearch`
- with Helm 2 (deprecated): `helm install --name elasticsearch elastic/elasticsearch`
### Install development version using main branch
* Clone the git repo: `git clone git@github.com:elastic/helm-charts.git`
* Install it:
- with Helm 3: `helm install elasticsearch ./helm-charts/elasticsearch --set imageTag=8.1.0`
- with Helm 2 (deprecated): `helm install --name elasticsearch ./helm-charts/elasticsearch --set imageTag=8.1.0`
## Upgrading
Please always check [CHANGELOG.md][] and [BREAKING_CHANGES.md][] before
upgrading to a new chart version.
## Usage notes
* This repo includes a number of [examples][] configurations which can be used
as a reference. They are also used in the automated testing of this chart.
* Automated testing of this chart is currently only run against GKE (Google
Kubernetes Engine).
* The chart deploys a StatefulSet and by default will do an automated rolling
update of your cluster. It does this by waiting for the cluster health to become
green after each instance is updated. If you prefer to update manually you can
set `OnDelete` [updateStrategy][].
* It is important to verify the JVM heap size in `esJavaOpts` and to set
the CPU/Memory `resources` to something suitable for your cluster.
* To simplify chart maintenance, each set of node groups is deployed as a
separate Helm release. Take a look at the [multi][] example to get an idea of
how this works. Without doing this it isn't possible to resize persistent
volumes in a StatefulSet. Setting it up this way makes it possible to add
more nodes with a new storage size and then drain the old ones. It also lets
the user determine which node groups to update first when doing upgrades or
changes (a sketch of this pattern follows these usage notes).
* We have designed this chart to be very un-opinionated about how to configure
Elasticsearch. It exposes ways to set environment variables and mount secrets
inside of the container. Doing this makes it much easier for this chart to
support multiple versions with minimal changes.
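As a sketch of the node-group pattern mentioned above (the group name and storage size are illustrative, not chart defaults), an additional node group with larger volumes can be deployed as its own release and the old group drained afterwards:
```yaml
# data-v2.yaml -- illustrative values for a new node group with bigger disks
clusterName: "elasticsearch"
nodeGroup: "data-v2"
roles:
  - data
volumeClaimTemplate:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 100Gi
```
It would be installed alongside the existing releases, e.g. `helm install es-data-v2 elastic/elasticsearch --values data-v2.yaml`.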
## Configuration
| Parameter | Description | Default |
|------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------|
| `antiAffinityTopologyKey` | The [anti-affinity][] topology key. By default this will prevent multiple Elasticsearch nodes from running on the same Kubernetes node | `kubernetes.io/hostname` |
| `antiAffinity` | Setting this to hard enforces the [anti-affinity][] rules. If it is set to soft it will be done "best effort". Other values will be ignored | `hard` |
| `clusterHealthCheckParams` | The [Elasticsearch cluster health status params][] that will be used by readiness [probe][] command | `wait_for_status=green&timeout=1s` |
| `clusterName` | This will be used as the Elasticsearch [cluster.name][] and should be unique per cluster in the namespace | `elasticsearch` |
| `createCert` | This will automatically create the SSL certificates | `true` |
| `enableServiceLinks`               | Set to false to disable service links, which can cause slow pod startup times when there are many services in the current namespace. | `true` |
| `envFrom` | Templatable string to be passed to the [environment from variables][] which will be appended to the `envFrom:` definition for the container | `[]` |
| `esConfig` | Allows you to add any config files in `/usr/share/elasticsearch/config/` such as `elasticsearch.yml` and `log4j2.properties`. See [values.yaml][] for an example of the formatting | `{}` |
| `esJavaOpts` | [Java options][] for Elasticsearch. This is where you could configure the [jvm heap size][] | `""` |
| `esJvmOptions` | [Java options][] for Elasticsearch. Override the default JVM options by adding custom options files . See [values.yaml][] for an example of the formatting | `{}` |
| `esMajorVersion` | Deprecated. Instead, use the version of the chart corresponding to your ES minor version. Used to set major version specific configuration. If you are using a custom image and not running the default Elasticsearch version you will need to set this to the version you are running (e.g. `esMajorVersion: 6`) | `""` |
| `extraContainers` | Templatable string of additional `containers` to be passed to the `tpl` function | `""` |
| `extraEnvs` | Extra [environment variables][] which will be appended to the `env:` definition for the container | `[]` |
| `extraInitContainers` | Templatable string of additional `initContainers` to be passed to the `tpl` function | `""` |
| `extraVolumeMounts` | Templatable string of additional `volumeMounts` to be passed to the `tpl` function | `""` |
| `extraVolumes` | Templatable string of additional `volumes` to be passed to the `tpl` function | `""` |
| `fullnameOverride` | Overrides the `clusterName` and `nodeGroup` when used in the naming of resources. This should only be used when using a single `nodeGroup`, otherwise you will have name conflicts | `""` |
| `healthNameOverride` | Overrides `test-elasticsearch-health` pod name | `""` |
| `hostAliases` | Configurable [hostAliases][] | `[]` |
| `httpPort` | The http port that Kubernetes will use for the healthchecks and the service. If you change this you will also need to set [http.port][] in `extraEnvs` | `9200` |
| `imagePullPolicy` | The Kubernetes [imagePullPolicy][] value | `IfNotPresent` |
| `imagePullSecrets` | Configuration for [imagePullSecrets][] so that you can use a private registry for your image | `[]` |
| `imageTag` | The Elasticsearch Docker image tag | `8.1.0` |
| `image` | The Elasticsearch Docker image | `docker.elastic.co/elasticsearch/elasticsearch` |
| `ingress` | Configurable [ingress][] to expose the Elasticsearch service. See [values.yaml][] for an example | see [values.yaml][] |
| `initResources` | Allows you to set the [resources][] for the `initContainer` in the StatefulSet | `{}` |
| `keystore`                         | Allows you to map Kubernetes secrets into the keystore. See the [config example][] and [how to use the keystore][] | `[]` |
| `labels` | Configurable [labels][] applied to all Elasticsearch pods | `{}` |
| `lifecycle` | Allows you to add [lifecycle hooks][]. See [values.yaml][] for an example of the formatting | `{}` |
| `masterService` | The service name used to connect to the masters. You only need to set this if your master `nodeGroup` is set to something other than `master`. See [Clustering and Node Discovery][] for more information | `""` |
| `maxUnavailable` | The [maxUnavailable][] value for the pod disruption budget. By default this will prevent Kubernetes from having more than 1 unhealthy pod in the node group | `1` |
| `minimumMasterNodes` | The value for [discovery.zen.minimum_master_nodes][]. Should be set to `(master_eligible_nodes / 2) + 1`. Ignored in Elasticsearch versions >= 7 | `2` |
| `nameOverride` | Overrides the `clusterName` when used in the naming of resources | `""` |
| `networkHost` | Value for the [network.host Elasticsearch setting][] | `0.0.0.0` |
| `networkPolicy` | The [NetworkPolicy](https://kubernetes.io/docs/concepts/services-networking/network-policies/) to set. See [`values.yaml`](./values.yaml) for an example | `{http.enabled: false,transport.enabled: false}` |
| `nodeAffinity` | Value for the [node affinity settings][] | `{}` |
| `nodeGroup` | This is the name that will be used for each group of nodes in the cluster. The name will be `clusterName-nodeGroup-X` , `nameOverride-nodeGroup-X` if a `nameOverride` is specified, and `fullnameOverride-X` if a `fullnameOverride` is specified | `master` |
| `nodeSelector` | Configurable [nodeSelector][] so that you can target specific nodes for your Elasticsearch cluster | `{}` |
| `persistence` | Enables a persistent volume for Elasticsearch data. Can be disabled for nodes that only have [roles][] which don't require persistent data | see [values.yaml][] |
| `podAnnotations` | Configurable [annotations][] applied to all Elasticsearch pods | `{}` |
| `podManagementPolicy` | By default Kubernetes [deploys StatefulSets serially][]. This deploys them in parallel so that they can discover each other | `Parallel` |
| `podSecurityContext` | Allows you to set the [securityContext][] for the pod | see [values.yaml][] |
| `podSecurityPolicy`                | Configuration for creating a pod security policy with minimal permissions to run this Helm chart with `create: true`. Also can be used to reference an external pod security policy with `name: "externalPodSecurityPolicy"` | see [values.yaml][] |
| `priorityClassName` | The name of the [PriorityClass][]. No default is supplied as the PriorityClass must be created first | `""` |
| `protocol` | The protocol that will be used for the readiness [probe][]. Change this to `https` if you have `xpack.security.http.ssl.enabled` set | `http` |
| `rbac` | Configuration for creating a role, role binding and ServiceAccount as part of this Helm chart with `create: true`. Also can be used to reference an external ServiceAccount with `serviceAccountName: "externalServiceAccountName"`, or automount the service account token | see [values.yaml][] |
| `readinessProbe` | Configuration fields for the readiness [probe][] | see [values.yaml][] |
| `replicas` | Kubernetes replica count for the StatefulSet (i.e. how many pods) | `3` |
| `resources` | Allows you to set the [resources][] for the StatefulSet | see [values.yaml][] |
| `roles` | A list with the specific [roles][] for the `nodeGroup` | see [values.yaml][] |
| `schedulerName` | Name of the [alternate scheduler][] | `""` |
| `secret.enabled` | Enable Secret creation for Elasticsearch credentials | `true` |
| `secret.password` | Initial password for the elastic user | `""` (generated randomly) |
| `secretMounts`                     | Allows you to easily mount a secret as a file inside the StatefulSet. Useful for mounting certificates and other secrets. See [values.yaml][] for an example | `[]` |
| `securityContext` | Allows you to set the [securityContext][] for the container | see [values.yaml][] |
| `service.annotations` | [LoadBalancer annotations][] that Kubernetes will use for the service. This will configure load balancer if `service.type` is `LoadBalancer` | `{}` |
| `service.enabled` | Enable non-headless service | `true` |
| `service.externalTrafficPolicy` | Some cloud providers allow you to specify the [LoadBalancer externalTrafficPolicy][]. Kubernetes will use this to preserve the client source IP. This will configure load balancer if `service.type` is `LoadBalancer` | `""` |
| `service.httpPortName` | The name of the http port within the service | `http` |
| `service.labelsHeadless` | Labels to be added to headless service | `{}` |
| `service.labels` | Labels to be added to non-headless service | `{}` |
| `service.loadBalancerIP` | Some cloud providers allow you to specify the [loadBalancer][] IP. If the `loadBalancerIP` field is not specified, the IP is dynamically assigned. If you specify a `loadBalancerIP` but your cloud provider does not support the feature, it is ignored. | `""` |
| `service.loadBalancerSourceRanges` | The IP ranges that are allowed to access | `[]` |
| `service.nodePort` | Custom [nodePort][] port that can be set if you are using `service.type: nodePort` | `""` |
| `service.transportPortName` | The name of the transport port within the service | `transport` |
| `service.publishNotReadyAddresses` | Consider all endpoints "ready" even if the Pods themselves are not | `false` |
| `service.type` | Elasticsearch [Service Types][] | `ClusterIP` |
| `sysctlInitContainer` | Allows you to disable the `sysctlInitContainer` if you are setting [sysctl vm.max_map_count][] with another method | `enabled: true` |
| `sysctlVmMaxMapCount` | Sets the [sysctl vm.max_map_count][] needed for Elasticsearch | `262144` |
| `terminationGracePeriod` | The [terminationGracePeriod][] in seconds used when trying to stop the pod | `120` |
| `tests.enabled` | Enable creating test related resources when running `helm template` or `helm test` | `true` |
| `tolerations` | Configurable [tolerations][] | `[]` |
| `transportPort` | The transport port that Kubernetes will use for the service. If you change this you will also need to set [transport port configuration][] in `extraEnvs` | `9300` |
| `updateStrategy` | The [updateStrategy][] for the StatefulSet. By default Kubernetes will wait for the cluster to be green after upgrading each pod. Setting this to `OnDelete` will allow you to manually delete each pod during upgrades | `RollingUpdate` |
| `volumeClaimTemplate` | Configuration for the [volumeClaimTemplate for StatefulSets][]. You will want to adjust the storage (default `30Gi` ) and the `storageClassName` if you are using a different storage class | see [values.yaml][] |
### Deprecated
| Parameter | Description | Default |
|-----------|---------------------------------------------------------------------------------------------------------------|---------|
| `fsGroup` | The Group ID (GID) for [securityContext][] so that the Elasticsearch user can read from the persistent volume | `""` |
## FAQ
### How to deploy this chart on a specific K8S distribution?
This chart is designed to run on production scale Kubernetes clusters with
multiple nodes, lots of memory and persistent storage. For that reason it can be
a bit tricky to run it against local Kubernetes environments such as
[Minikube][].
This chart is heavily tested with [GKE][], but some K8S distributions also
require specific configuration.
We provide examples of configuration for the following K8S providers:
- [Docker for Mac][]
- [KIND][]
- [Minikube][]
- [MicroK8S][]
- [OpenShift][]
### How to deploy dedicated nodes types?
All the Elasticsearch pods deployed share the same configuration. If you need to
deploy dedicated [nodes types][] (for example dedicated master and data nodes),
you can deploy multiple releases of this chart with different configurations
while they share the same `clusterName` value.
For each Helm release, the node types can then be defined using the `roles` value.
An example of an Elasticsearch cluster using 3 different Helm releases for master,
data and coordinating nodes can be found in [examples/multi][].
#### Coordinating nodes
Every node is implicitly a coordinating node. This means that a node that has an
explicit empty list of roles will only act as a coordinating node.
When deploying a coordinating-only node with the Elasticsearch chart, you must
define an empty list of roles in both the `roles` value and the `node.roles`
setting:
```yaml
roles: []
esConfig:
elasticsearch.yml: |
node.roles: []
```
More details in [#1186 (comment)][]
#### Clustering and Node Discovery
This chart facilitates Elasticsearch node discovery and services by creating two
`Service` definitions in Kubernetes, one with the name `$clusterName-$nodeGroup`
and another named `$clusterName-$nodeGroup-headless`.
Only `Ready` pods are a part of the `$clusterName-$nodeGroup` service, while all
pods ( `Ready` or not) are a part of `$clusterName-$nodeGroup-headless`.
If your group of master nodes has the default `nodeGroup: master` then you can
just add new groups of nodes with a different `nodeGroup` and they will
automatically discover the correct master. If your master nodes have a different
`nodeGroup` name then you will need to set `masterService` to
`$clusterName-$masterNodeGroup`.
The chart value for `masterService` is used to populate
`discovery.zen.ping.unicast.hosts` , which Elasticsearch nodes will use to
contact master nodes and form a cluster.
Therefore, to add a group of nodes to an existing cluster, setting
`masterService` to the desired `Service` name of the related cluster is
sufficient.
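As an illustration (the group names below are placeholders), a values file that joins a data-only group to a cluster whose masters were deployed with a non-default `nodeGroup` could look like this:
```yaml
# data-nodes.yaml -- illustrative sketch for joining an existing cluster
clusterName: "elasticsearch"
nodeGroup: "data"
roles:
  - data
# The masters were deployed with nodeGroup "controller", so discovery has to
# point at the "$clusterName-$masterNodeGroup" service.
masterService: "elasticsearch-controller"
```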
### How to deploy clusters with security (authentication and TLS) enabled?
This Helm chart can generate [Kubernetes secrets][] or use existing ones to
set up Elastic credentials.
This Helm chart can also use existing [Kubernetes secrets][] to set up Elastic
certificates, for example. These secrets should be created outside of this chart
and accessed using [environment variables][] and volumes.
This chart enables TLS and creates a certificate by default, but you can also provide your own certs as a K8S secret. An example configuration for providing existing certificates can be found in [examples/security][].
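A hedged sketch of pointing the chart at pre-existing certificates follows; the secret and file names are assumptions here, and complete, tested configurations can be found in [examples/security][] and in the multi example shipped with this chart:
```yaml
# values snippet -- assumes a Secret "elastic-certificates" containing
# tls.crt, tls.key and ca.crt, created outside of this chart
createCert: false
protocol: https
secretMounts:
  - name: elastic-certificates
    secretName: elastic-certificates
    path: /usr/share/elasticsearch/config/certs
extraEnvs:
  - name: xpack.security.enabled
    value: "true"
  - name: xpack.security.http.ssl.enabled
    value: "true"
  - name: xpack.security.http.ssl.key
    value: "/usr/share/elasticsearch/config/certs/tls.key"
  - name: xpack.security.http.ssl.certificate
    value: "/usr/share/elasticsearch/config/certs/tls.crt"
  - name: xpack.security.http.ssl.certificate_authorities
    value: "/usr/share/elasticsearch/config/certs/ca.crt"
```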
### How to migrate from helm/charts stable chart?
If you currently have a cluster deployed with the [helm/charts stable][] chart
you can follow the [migration guide][].
### How to install plugins?
The recommended way to install plugins into our Docker images is to create a
[custom Docker image][].
The Dockerfile would look something like:
```
ARG elasticsearch_version
FROM docker.elastic.co/elasticsearch/elasticsearch:${elasticsearch_version}
RUN bin/elasticsearch-plugin install --batch repository-gcs
```
Then update the `image` in your values to point to your custom image.
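For instance (the registry name and tag below are placeholders, not chart defaults):
```yaml
image: "myregistry.example.com/elasticsearch-with-plugins"
imageTag: "8.1.0"
```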
There are a couple of reasons we recommend this.
1. Tying the availability of Elasticsearch to the download service to install
plugins is not a great idea or something that we recommend. Especially in
Kubernetes where it is normal and expected for a container to be moved to
another host at random times.
2. Mutating the state of a running Docker image (by installing plugins) goes
against best practices of containers and immutable infrastructure.
### How to use the keystore?
#### Basic example
Create the secret; the key name needs to be the keystore key path. In this
example we will create a secret from a file and from a literal string.
```
kubectl create secret generic encryption-key --from-file=xpack.watcher.encryption_key=./watcher_encryption_key
kubectl create secret generic slack-hook --from-literal=xpack.notification.slack.account.monitoring.secure_url='https://hooks.slack.com/services/asdasdasd/asdasdas/asdasd'
```
To add these secrets to the keystore:
```
keystore:
- secretName: encryption-key
- secretName: slack-hook
```
#### Multiple keys
All keys in the secret will be added to the keystore. To create the previous
example in one secret you could also do:
```
kubectl create secret generic keystore-secrets --from-file=xpack.watcher.encryption_key=./watcher_encryption_key --from-literal=xpack.notification.slack.account.monitoring.secure_url='https://hooks.slack.com/services/asdasdasd/asdasdas/asdasd'
```
```
keystore:
- secretName: keystore-secrets
```
#### Custom paths and keys
If you are using these secrets for other applications (besides the Elasticsearch
keystore) then it is also possible to specify the keystore path and which keys
you want to add. Everything specified under each `keystore` item will be passed
through to the `volumeMounts` section for mounting the [secret][]. In this
example we will only add the `slack_hook` key from a secret that also has other
keys. Our secret looks like this:
```
kubectl create secret generic slack-secrets --from-literal=slack_channel='#general' --from-literal=slack_hook='https://hooks.slack.com/services/asdasdasd/asdasdas/asdasd'
```
We only want to add the `slack_hook` key to the keystore at path
`xpack.notification.slack.account.monitoring.secure_url`:
```
keystore:
- secretName: slack-secrets
items:
- key: slack_hook
path: xpack.notification.slack.account.monitoring.secure_url
```
You can also take a look at the [config example][] which is used as part of the
automated testing pipeline.
### How to enable snapshotting?
1. Install your [snapshot plugin][] into a custom Docker image following the
[how to install plugins guide][].
2. Add any required secrets or credentials into an Elasticsearch keystore
following the [how to use the keystore][] guide (a sketch for a GCS repository follows this list).
3. Configure the [snapshot repository][] as you normally would.
4. To automate snapshots you can use [Snapshot Lifecycle Management][] or a tool
like [curator][].
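As a sketch for step 2, assuming a GCS snapshot repository (the plugin, secret name and key are assumptions; the keystore path follows the GCS client secure settings): create a secret from the service-account JSON, e.g. `kubectl create secret generic gcs-credentials --from-file=gcs_credentials.json=./gcs_credentials.json`, and map it into the keystore:
```yaml
# values snippet -- assumes the repository-gcs plugin is installed in the image
keystore:
  - secretName: gcs-credentials
    items:
      - key: gcs_credentials.json
        path: gcs.client.default.credentials_file
```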
### How to configure templates post-deployment?
You can use `postStart` [lifecycle hooks][] to run code triggered after a
container is created.
Here is an example of `postStart` hook to configure templates:
```yaml
lifecycle:
postStart:
exec:
command:
- bash
- -c
- |
#!/bin/bash
# Add a template to adjust number of shards/replicas
TEMPLATE_NAME=my_template
INDEX_PATTERN="logstash-*"
SHARD_COUNT=8
REPLICA_COUNT=1
ES_URL=http://localhost:9200
while [[ "$(curl -s -o /dev/null -w '%{http_code}\n' $ES_URL)" != "200" ]]; do sleep 1; done
curl -XPUT "$ES_URL/_template/$TEMPLATE_NAME" -H 'Content-Type: application/json' -d'{"index_patterns":['\""$INDEX_PATTERN"\"'],"settings":{"number_of_shards":'$SHARD_COUNT',"number_of_replicas":'$REPLICA_COUNT'}}'
```
## Contributing
Please check [CONTRIBUTING.md][] before any contribution or for any questions
about our development and testing process.
[7.x]: https://github.com/elastic/helm-charts/releases
[#63]: https://github.com/elastic/helm-charts/issues/63
[#1186 (comment)]: https://github.com/elastic/helm-charts/pull/1186#discussion_r631166442
[7.9.2]: https://github.com/elastic/helm-charts/blob/7.9.2/elasticsearch/README.md
[BREAKING_CHANGES.md]: https://github.com/elastic/helm-charts/blob/main/BREAKING_CHANGES.md
[CHANGELOG.md]: https://github.com/elastic/helm-charts/blob/main/CHANGELOG.md
[CONTRIBUTING.md]: https://github.com/elastic/helm-charts/blob/main/CONTRIBUTING.md
[alternate scheduler]: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/#specify-schedulers-for-pods
[annotations]: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
[anti-affinity]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
[cluster.name]: https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster.name.html
[clustering and node discovery]: https://github.com/elastic/helm-charts/blob/main/elasticsearch/README.md#clustering-and-node-discovery
[config example]: https://github.com/elastic/helm-charts/blob/main/elasticsearch/examples/config/values.yaml
[curator]: https://www.elastic.co/guide/en/elasticsearch/client/curator/current/snapshot.html
[custom docker image]: https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html#_c_customized_image
[deploys statefulsets serially]: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-management-policies
[discovery.zen.minimum_master_nodes]: https://www.elastic.co/guide/en/elasticsearch/reference/current/discovery-settings.html#minimum_master_nodes
[docker for mac]: https://github.com/elastic/helm-charts/tree/main/elasticsearch/examples/docker-for-mac
[elasticsearch cluster health status params]: https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html#request-params
[elasticsearch docker image]: https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
[environment variables]: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/#using-environment-variables-inside-of-your-config
[environment from variables]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#configure-all-key-value-pairs-in-a-configmap-as-container-environment-variables
[examples]: https://github.com/elastic/helm-charts/tree/main/elasticsearch/examples/
[examples/multi]: https://github.com/elastic/helm-charts/tree/main/elasticsearch/examples/multi
[examples/security]: https://github.com/elastic/helm-charts/tree/main/elasticsearch/examples/security
[gke]: https://cloud.google.com/kubernetes-engine
[helm]: https://helm.sh
[helm/charts stable]: https://github.com/helm/charts/tree/master/stable/elasticsearch/
[how to install plugins guide]: https://github.com/elastic/helm-charts/blob/main/elasticsearch/README.md#how-to-install-plugins
[how to use the keystore]: https://github.com/elastic/helm-charts/blob/main/elasticsearch/README.md#how-to-use-the-keystore
[http.port]: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-http.html#_settings
[imagePullPolicy]: https://kubernetes.io/docs/concepts/containers/images/#updating-images
[imagePullSecrets]: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-pod-that-uses-your-secret
[ingress]: https://kubernetes.io/docs/concepts/services-networking/ingress/
[java options]: https://www.elastic.co/guide/en/elasticsearch/reference/current/jvm-options.html
[jvm heap size]: https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html
[hostAliases]: https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/
[kind]: https://github.com/elastic/helm-charts/tree/main//elasticsearch/examples/kubernetes-kind
[kubernetes secrets]: https://kubernetes.io/docs/concepts/configuration/secret/
[labels]: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
[lifecycle hooks]: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/
[loadBalancer annotations]: https://kubernetes.io/docs/concepts/services-networking/service/#ssl-support-on-aws
[loadBalancer externalTrafficPolicy]: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
[loadBalancer]: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
[maxUnavailable]: https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget
[migration guide]: https://github.com/elastic/helm-charts/blob/main/elasticsearch/examples/migration/README.md
[minikube]: https://github.com/elastic/helm-charts/tree/main/elasticsearch/examples/minikube
[microk8s]: https://github.com/elastic/helm-charts/tree/main/elasticsearch/examples/microk8s
[multi]: https://github.com/elastic/helm-charts/tree/main/elasticsearch/examples/multi/
[network.host elasticsearch setting]: https://www.elastic.co/guide/en/elasticsearch/reference/current/network.host.html
[node affinity settings]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature
[node-certificates]: https://www.elastic.co/guide/en/elasticsearch/reference/current/configuring-tls.html#node-certificates
[nodePort]: https://kubernetes.io/docs/concepts/services-networking/service/#nodeport
[nodes types]: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html
[nodeSelector]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
[openshift]: https://github.com/elastic/helm-charts/tree/main/elasticsearch/examples/openshift
[priorityClass]: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
[probe]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
[resources]: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
[roles]: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html
[secret]: https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets
[securityContext]: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
[service types]: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
[snapshot lifecycle management]: https://www.elastic.co/guide/en/elasticsearch/reference/current/snapshot-lifecycle-management.html
[snapshot plugin]: https://www.elastic.co/guide/en/elasticsearch/plugins/current/repository.html
[snapshot repository]: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-snapshots.html
[supported configurations]: https://github.com/elastic/helm-charts/blob/main/README.md#supported-configurations
[sysctl vm.max_map_count]: https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html#vm-max-map-count
[terminationGracePeriod]: https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods
[tolerations]: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
[transport port configuration]: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-transport.html#_transport_settings
[updateStrategy]: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
[values.yaml]: https://github.com/elastic/helm-charts/blob/main/elasticsearch/values.yaml
[volumeClaimTemplate for statefulsets]: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-storage


@@ -0,0 +1,21 @@
default: test
include ../../../helpers/examples.mk
RELEASE := helm-es-config
TIMEOUT := 1200s
install:
helm upgrade --wait --timeout=$(TIMEOUT) --install --values values.yaml $(RELEASE) ../../
secrets:
kubectl delete secret elastic-config-credentials elastic-config-secret elastic-config-slack elastic-config-custom-path || true
kubectl create secret generic elastic-config-credentials --from-literal=password=changeme --from-literal=username=elastic
kubectl create secret generic elastic-config-slack --from-literal=xpack.notification.slack.account.monitoring.secure_url='https://hooks.slack.com/services/asdasdasd/asdasdas/asdasd'
kubectl create secret generic elastic-config-secret --from-file=xpack.watcher.encryption_key=./watcher_encryption_key
kubectl create secret generic elastic-config-custom-path --from-literal=slack_url='https://hooks.slack.com/services/asdasdasd/asdasdas/asdasd' --from-literal=thing_i_don_tcare_about=test
test: secrets install goss
purge:
helm del $(RELEASE)


@@ -0,0 +1,27 @@
# Config
This example deploys a single-node Elasticsearch 8.1.0 cluster with authentication and
custom [values][].
## Usage
* Create the required secrets: `make secrets`
* Deploy Elasticsearch chart with the default values: `make install`
* You can now set up a port forward to query the Elasticsearch API:
```
kubectl port-forward svc/config-master 9200
curl -u elastic:changeme http://localhost:9200/_cat/indices
```
## Testing
You can also run [goss integration tests][] using `make test`
[goss integration tests]: https://github.com/elastic/helm-charts/tree/main/elasticsearch/examples/config/test/goss.yaml
[values]: https://github.com/elastic/helm-charts/tree/main/elasticsearch/examples/config/values.yaml


@@ -0,0 +1,31 @@
http:
https://localhost:9200/_cluster/health:
status: 200
timeout: 2000
allow-insecure: true
username: elastic
password: "{{ .Env.ELASTIC_PASSWORD }}"
body:
- "green"
- '"number_of_nodes":1'
- '"number_of_data_nodes":1'
https://localhost:9200:
status: 200
timeout: 2000
username: elastic
allow-insecure: true
password: "{{ .Env.ELASTIC_PASSWORD }}"
body:
- '"cluster_name" : "config"'
- "You Know, for Search"
command:
"elasticsearch-keystore list":
exit-status: 0
stdout:
- keystore.seed
- bootstrap.password
- xpack.notification.slack.account.monitoring.secure_url
- xpack.notification.slack.account.otheraccount.secure_url
- xpack.watcher.encryption_key


@@ -0,0 +1,29 @@
---
clusterName: "config"
replicas: 1
extraEnvs:
- name: ELASTIC_PASSWORD
valueFrom:
secretKeyRef:
name: elastic-config-credentials
key: password
# This is just a dummy file to make sure that
# the keystore can be mounted at the same time
# as a custom elasticsearch.yml
esConfig:
elasticsearch.yml: |
xpack.security.enabled: true
path.data: /usr/share/elasticsearch/data
keystore:
- secretName: elastic-config-secret
- secretName: elastic-config-slack
- secretName: elastic-config-custom-path
items:
- key: slack_url
path: xpack.notification.slack.account.otheraccount.secure_url
secret:
enabled: false


@@ -0,0 +1 @@
supersecret


@@ -0,0 +1,14 @@
default: test
include ../../../helpers/examples.mk
RELEASE := helm-es-default
TIMEOUT := 1200s
install:
helm upgrade --wait --timeout=$(TIMEOUT) --install $(RELEASE) ../../
test: install goss
purge:
helm del $(RELEASE)


@@ -0,0 +1,25 @@
# Default
This example deploys a 3-node Elasticsearch 8.1.0 cluster using the
[default values][].
## Usage
* Deploy Elasticsearch chart with the default values: `make install`
* You can now set up a port forward to query the Elasticsearch API:
```
kubectl port-forward svc/elasticsearch-master 9200
curl localhost:9200/_cat/indices
```
## Testing
You can also run [goss integration tests][] using `make test`
[goss integration tests]: https://github.com/elastic/helm-charts/tree/main/elasticsearch/examples/default/test/goss.yaml
[default values]: https://github.com/elastic/helm-charts/tree/main/elasticsearch/values.yaml


@@ -0,0 +1,19 @@
#!/usr/bin/env bash
set -x
kubectl proxy || true &
make &
PROC_ID=$!
while kill -0 "$PROC_ID" >/dev/null 2>&1; do
echo "PROCESS IS RUNNING"
if curl --fail 'http://localhost:8001/api/v1/proxy/namespaces/default/services/elasticsearch-master:9200/_search' ; then
echo "cluster is healthy"
else
echo "cluster not healthy!"
exit 1
fi
sleep 1
done
echo "PROCESS TERMINATED"
exit 0


@@ -0,0 +1,44 @@
kernel-param:
vm.max_map_count:
value: "262144"
http:
https://elasticsearch-master:9200/_cluster/health:
status: 200
timeout: 2000
username: elastic
allow-insecure: true
password: "{{ .Env.ELASTIC_PASSWORD }}"
body:
- "green"
- '"number_of_nodes":3'
- '"number_of_data_nodes":3'
https://localhost:9200:
status: 200
timeout: 2000
allow-insecure: true
username: elastic
password: "{{ .Env.ELASTIC_PASSWORD }}"
body:
- '"number" : "8.1.0"'
- '"cluster_name" : "elasticsearch"'
- "You Know, for Search"
file:
/usr/share/elasticsearch/data:
exists: true
mode: "2775"
owner: root
group: elasticsearch
filetype: directory
mount:
/usr/share/elasticsearch/data:
exists: true
user:
elasticsearch:
exists: true
uid: 1000
gid: 1000


@@ -0,0 +1,13 @@
default: test
RELEASE := helm-es-docker-for-mac
TIMEOUT := 1200s
install:
helm upgrade --wait --timeout=$(TIMEOUT) --install --values values.yaml $(RELEASE) ../../
test: install
helm test $(RELEASE)
purge:
helm del $(RELEASE)


@@ -0,0 +1,23 @@
# Docker for Mac
This example deploys a 3-node Elasticsearch 8.1.0 cluster on [Docker for Mac][]
using [custom values][].
Note that this configuration should be used for testing only and isn't recommended
for production.
## Usage
* Deploy Elasticsearch chart with the default values: `make install`
* You can now set up a port forward to query the Elasticsearch API:
```
kubectl port-forward svc/elasticsearch-master 9200
curl localhost:9200/_cat/indices
```
[custom values]: https://github.com/elastic/helm-charts/tree/main/elasticsearch/examples/docker-for-mac/values.yaml
[docker for mac]: https://docs.docker.com/docker-for-mac/kubernetes/


@@ -0,0 +1,23 @@
---
# Permit co-located instances for solitary minikube virtual machines.
antiAffinity: "soft"
# Shrink default JVM heap.
esJavaOpts: "-Xmx128m -Xms128m"
# Allocate smaller chunks of memory per pod.
resources:
requests:
cpu: "100m"
memory: "512M"
limits:
cpu: "1000m"
memory: "512M"
# Request smaller persistent volumes.
volumeClaimTemplate:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "hostpath"
resources:
requests:
storage: 100M


@@ -0,0 +1,17 @@
default: test
RELEASE := helm-es-kind
TIMEOUT := 1200s
install:
helm upgrade --wait --timeout=$(TIMEOUT) --install --values values.yaml $(RELEASE) ../../
install-local-path:
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
helm upgrade --wait --timeout=$(TIMEOUT) --install --values values-local-path.yaml $(RELEASE) ../../
test: install
helm test $(RELEASE)
purge:
helm del $(RELEASE)


@@ -0,0 +1,36 @@
# KIND
This example deploys a 3-node Elasticsearch 8.1.0 cluster on [Kind][]
using [custom values][].
Note that this configuration should be used for testing only and isn't recommended
for production.
Note that Kind versions < 0.7.0 are affected by a [kind issue][] where mount points
created from PVCs are not writable by non-root users. [kubernetes-sigs/kind#1157][]
fixed this in Kind 0.7.0.
The workaround for Kind < 0.7.0 is to manually install the
[Rancher Local Path Provisioner][] and use the `local-path` storage class for
Elasticsearch volumes (see the [Makefile][] instructions).
## Usage
* For Kind >= 0.7.0: Deploy Elasticsearch chart with the default values: `make install`
* For Kind < 0.7.0: Deploy Elasticsearch chart with `local-path` storage class: `make install-local-path`
* You can now set up a port forward to query the Elasticsearch API:
```
kubectl port-forward svc/elasticsearch-master 9200
curl localhost:9200/_cat/indices
```
[custom values]: https://github.com/elastic/helm-charts/blob/main/elasticsearch/examples/kubernetes-kind/values.yaml
[kind]: https://kind.sigs.k8s.io/
[kind issue]: https://github.com/kubernetes-sigs/kind/issues/830
[kubernetes-sigs/kind#1157]: https://github.com/kubernetes-sigs/kind/pull/1157
[rancher local path provisioner]: https://github.com/rancher/local-path-provisioner
[Makefile]: https://github.com/elastic/helm-charts/blob/main/elasticsearch/examples/kubernetes-kind/Makefile


@@ -0,0 +1,23 @@
---
# Permit co-located instances for solitary minikube virtual machines.
antiAffinity: "soft"
# Shrink default JVM heap.
esJavaOpts: "-Xmx128m -Xms128m"
# Allocate smaller chunks of memory per pod.
resources:
requests:
cpu: "100m"
memory: "512M"
limits:
cpu: "1000m"
memory: "512M"
# Request smaller persistent volumes.
volumeClaimTemplate:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "local-path"
resources:
requests:
storage: 100M


@@ -0,0 +1,23 @@
---
# Permit co-located instances for solitary minikube virtual machines.
antiAffinity: "soft"
# Shrink default JVM heap.
esJavaOpts: "-Xmx128m -Xms128m"
# Allocate smaller chunks of memory per pod.
resources:
requests:
cpu: "100m"
memory: "512M"
limits:
cpu: "1000m"
memory: "512M"
# Request smaller persistent volumes.
volumeClaimTemplate:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "local-path"
resources:
requests:
storage: 100M


@@ -0,0 +1,13 @@
default: test
RELEASE := helm-es-microk8s
TIMEOUT := 1200s
install:
helm upgrade --wait --timeout=$(TIMEOUT) --install --values values.yaml $(RELEASE) ../../
test: install
helm test $(RELEASE)
purge:
helm del $(RELEASE)


@@ -0,0 +1,32 @@
# MicroK8S
This example deploys a 3-node Elasticsearch 8.1.0 cluster on [MicroK8S][]
using [custom values][].
Note that this configuration should be used for testing only and isn't recommended
for production.
## Requirements
The following MicroK8S [addons][] need to be enabled:
- `dns`
- `helm`
- `storage`
## Usage
* Deploy Elasticsearch chart with the default values: `make install`
* You can now set up a port forward to query the Elasticsearch API:
```
kubectl port-forward svc/elasticsearch-master 9200
curl localhost:9200/_cat/indices
```
[addons]: https://microk8s.io/docs/addons
[custom values]: https://github.com/elastic/helm-charts/tree/main/elasticsearch/examples/microk8s/values.yaml
[MicroK8S]: https://microk8s.io


@@ -0,0 +1,32 @@
---
# Disable privileged init Container creation.
sysctlInitContainer:
enabled: false
# Restrict the use of the memory-mapping when sysctlInitContainer is disabled.
esConfig:
elasticsearch.yml: |
node.store.allow_mmap: false
# Permit co-located instances for solitary minikube virtual machines.
antiAffinity: "soft"
# Shrink default JVM heap.
esJavaOpts: "-Xmx128m -Xms128m"
# Allocate smaller chunks of memory per pod.
resources:
requests:
cpu: "100m"
memory: "512M"
limits:
cpu: "1000m"
memory: "512M"
# Request smaller persistent volumes.
volumeClaimTemplate:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "microk8s-hostpath"
resources:
requests:
storage: 100M


@@ -0,0 +1,10 @@
PREFIX := helm-es-migration
TIMEOUT := 1200s
data:
helm upgrade --wait --timeout=$(TIMEOUT) --install --values data.yaml $(PREFIX)-data ../../
master:
helm upgrade --wait --timeout=$(TIMEOUT) --install --values master.yaml $(PREFIX)-master ../../
client:
helm upgrade --wait --timeout=$(TIMEOUT) --install --values client.yaml $(PREFIX)-client ../../


@@ -0,0 +1,167 @@
# Migration Guide from helm/charts
There are two viable options for migrating from the community Elasticsearch Helm
chart in the [helm/charts][] repo.
1. Restoring from Snapshot to a fresh cluster
2. Live migration by joining a new cluster to the existing cluster.
## Restoring from Snapshot
This is the recommended and preferred option. The downside is that it will
involve a period of write downtime during the migration. If you have a way to
temporarily stop writes to your cluster then this is the way to go. This is also
a lot simpler as it just involves launching a fresh cluster and restoring a
snapshot following the [restoring to a different cluster guide][].
## Live migration
If restoring from a snapshot is not possible due to the write downtime then a
live migration is also possible. It is very important to first test this in a
testing environment to make sure you are comfortable with the process and fully
understand what is happening.
This process will involve joining a new set of master, data and client nodes to
an existing cluster that has been deployed using the [helm/charts][] community
chart. Nodes will then be replaced one by one in a controlled fashion to
decommission the old cluster.
This example will be using the default values for the existing helm/charts
release and for the Elastic helm-charts release. If you have changed any of the
default values then you will need to first make sure that your values are
configured in a compatible way before starting the migration.
The process will involve a re-sync and a rolling restart of all of your data
nodes. Therefore it is important to disable shard allocation and perform a synced
flush like you normally would during any other rolling upgrade. See the
[rolling upgrades guide][] for more information.
* The default image for this chart is
`docker.elastic.co/elasticsearch/elasticsearch` which contains the default
distribution of Elasticsearch with a [basic license][]. Make sure to update the
`image` and `imageTag` values to the correct Docker image and Elasticsearch
version that you currently have deployed.
* Convert your current helm/charts configuration into something that is
compatible with this chart.
* Take a fresh snapshot of your cluster. If something goes wrong you want to be
able to restore your data no matter what.
* Check that your cluster's health is green. If not, abort and make sure your
cluster is healthy before continuing:
```
curl localhost:9200/_cluster/health
```
* Deploy new data nodes which will join the existing cluster. Take a look at the
configuration in [data.yaml][]:
```
make data
```
* Check that the new nodes have joined the cluster (run this and any other curl
commands from within one of your pods):
```
curl localhost:9200/_cat/nodes
```
* Check that your cluster is still green. If so, we can now start to scale down
the existing data nodes. Assuming you have the default number of data nodes (2),
we now want to scale down to 1:
```
kubectl scale statefulsets my-release-elasticsearch-data --replicas=1
```
* Wait for your cluster to become green again:
```
watch 'curl -s localhost:9200/_cluster/health'
```
* Once the cluster is green we can scale down again:
```
kubectl scale statefulsets my-release-elasticsearch-data --replicas=0
```
* Wait for the cluster to be green again.
* We now have all data nodes running in the new cluster. Time to replace the
masters, starting by scaling down the masters from 3 to 2. Between each step make
sure to wait for the cluster to become green again, and check with
`curl localhost:9200/_cat/nodes` that you see the correct number of master
nodes. During this process we will always keep at least 2 master
nodes so as not to lose quorum:
```
kubectl scale statefulsets my-release-elasticsearch-master --replicas=2
```
* Now deploy a single new master so that we have 3 masters again. See
[master.yaml][] for the configuration:
```
make master
```
* Scale down old masters to 1:
```
kubectl scale statefulsets my-release-elasticsearch-master --replicas=1
```
* Edit the replicas in [master.yaml][] to 2 and redeploy:
```
make master
```
* Scale down the old masters to 0:
```
kubectl scale statefulsets my-release-elasticsearch-master --replicas=0
```
* Edit [master.yaml][] to have 3 replicas, remove the
`discovery.zen.ping.unicast.hosts` entry from `extraEnvs`, and then redeploy the
masters. This will make sure all 3 masters are running in the new cluster and
are pointing at each other for discovery:
```
make master
```
* Remove the `discovery.zen.ping.unicast.hosts` entry from `extraEnvs` then
redeploy the data nodes to make sure they are pointing at the new masters:
```
make data
```
* Deploy the client nodes:
```
make client
```
* Update any processes that are talking to the existing client nodes and point
them to the new client nodes. Once this is done you can scale down the old
client nodes:
```
kubectl scale deployment my-release-elasticsearch-client --replicas=0
```
* The migration should now be complete. After verifying that everything is
working correctly, you can clean up leftover resources from your old cluster.
[basic license]: https://www.elastic.co/subscriptions
[data.yaml]: https://github.com/elastic/helm-charts/blob/main/elasticsearch/examples/migration/data.yaml
[helm/charts]: https://github.com/helm/charts/tree/master/stable/elasticsearch
[master.yaml]: https://github.com/elastic/helm-charts/blob/main/elasticsearch/examples/migration/master.yaml
[restoring to a different cluster guide]: https://www.elastic.co/guide/en/elasticsearch/reference/6.8/modules-snapshots.html#_restoring_to_a_different_cluster
[rolling upgrades guide]: https://www.elastic.co/guide/en/elasticsearch/reference/6.8/rolling-upgrades.html


@@ -0,0 +1,19 @@
---
replicas: 2
clusterName: "elasticsearch"
nodeGroup: "client"
esMajorVersion: 6
roles: []
volumeClaimTemplate:
accessModes: ["ReadWriteOnce"]
storageClassName: "standard"
resources:
requests:
storage: 1Gi # Currently needed till pvcs are made optional
persistence:
enabled: false


@@ -0,0 +1,14 @@
---
replicas: 2
esMajorVersion: 6
extraEnvs:
- name: discovery.zen.ping.unicast.hosts
value: "my-release-elasticsearch-discovery"
clusterName: "elasticsearch"
nodeGroup: "data"
roles:
- data


@@ -0,0 +1,23 @@
---
# Temporarily set to 1 so we can scale up/down the old and new clusters
# one at a time whilst always keeping 3 masters running in total
replicas: 1
esMajorVersion: 6
extraEnvs:
- name: discovery.zen.ping.unicast.hosts
value: "my-release-elasticsearch-discovery"
clusterName: "elasticsearch"
nodeGroup: "master"
roles:
- master
volumeClaimTemplate:
accessModes: ["ReadWriteOnce"]
storageClassName: "standard"
resources:
requests:
storage: 4Gi


@@ -0,0 +1,13 @@
default: test
RELEASE := helm-es-minikube
TIMEOUT := 1200s
install:
helm upgrade --wait --timeout=$(TIMEOUT) --install --values values.yaml $(RELEASE) ../../
test: install
helm test $(RELEASE)
purge:
helm del $(RELEASE)


@@ -0,0 +1,38 @@
# Minikube
This example deploys a 3-node Elasticsearch 8.1.0 cluster on [Minikube][]
using [custom values][].
If helm or kubectl timeouts occur, you may consider creating a minikube VM with
more CPU cores or memory allocated.
Note that this configuration should be used for testing only and isn't recommended
for production.
## Requirements
In order to properly support the required persistent volume claims for the
Elasticsearch StatefulSet, the `default-storageclass` and `storage-provisioner`
minikube addons must be enabled.
```
minikube addons enable default-storageclass
minikube addons enable storage-provisioner
```
## Usage
* Deploy Elasticsearch chart with the default values: `make install`
* You can now set up a port forward to query the Elasticsearch API:
```
kubectl port-forward svc/elasticsearch-master 9200
curl localhost:9200/_cat/indices
```
[custom values]: https://github.com/elastic/helm-charts/tree/main/elasticsearch/examples/minikube/values.yaml
[minikube]: https://minikube.sigs.k8s.io/docs/


@@ -0,0 +1,23 @@
---
# Permit co-located instances for solitary minikube virtual machines.
antiAffinity: "soft"
# Shrink default JVM heap.
esJavaOpts: "-Xmx128m -Xms128m"
# Allocate smaller chunks of memory per pod.
resources:
requests:
cpu: "100m"
memory: "512M"
limits:
cpu: "1000m"
memory: "512M"
# Request smaller persistent volumes.
volumeClaimTemplate:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "standard"
resources:
requests:
storage: 100M


@@ -0,0 +1,19 @@
default: test
include ../../../helpers/examples.mk
PREFIX := helm-es-multi
RELEASE := helm-es-multi-master
TIMEOUT := 1200s
install:
helm upgrade --wait --timeout=$(TIMEOUT) --install --values master.yaml $(PREFIX)-master ../../
helm upgrade --wait --timeout=$(TIMEOUT) --install --values data.yaml $(PREFIX)-data ../../
helm upgrade --wait --timeout=$(TIMEOUT) --install --values client.yaml $(PREFIX)-client ../../
test: install goss
purge:
helm del $(PREFIX)-master
helm del $(PREFIX)-data
helm del $(PREFIX)-client


@@ -0,0 +1,29 @@
# Multi
This example deploys an Elasticsearch 8.1.0 cluster composed of 3 different Helm
releases:
- `helm-es-multi-master` for the 3 master nodes using [master values][]
- `helm-es-multi-data` for the 3 data nodes using [data values][]
- `helm-es-multi-client` for the 3 client nodes using [client values][]
## Usage
* Deploy the 3 Elasticsearch releases: `make install`
* You can now set up a port forward to query the Elasticsearch API:
```
kubectl port-forward svc/multi-master 9200
curl -u elastic:changeme http://localhost:9200/_cat/indices
```
## Testing
You can also run [goss integration tests][] using `make test`
[client values]: https://github.com/elastic/helm-charts/tree/main/elasticsearch/examples/multi/client.yaml
[data values]: https://github.com/elastic/helm-charts/tree/main/elasticsearch/examples/multi/data.yaml
[goss integration tests]: https://github.com/elastic/helm-charts/tree/main/elasticsearch/examples/multi/test/goss.yaml
[master values]: https://github.com/elastic/helm-charts/tree/main/elasticsearch/examples/multi/master.yaml


@@ -0,0 +1,50 @@
---
clusterName: "multi"
nodeGroup: "client"
extraEnvs:
- name: ELASTIC_PASSWORD
valueFrom:
secretKeyRef:
name: multi-master-credentials
key: password
- name: xpack.security.enabled
value: "true"
- name: xpack.security.transport.ssl.enabled
value: "true"
- name: xpack.security.http.ssl.enabled
value: "true"
- name: xpack.security.transport.ssl.verification_mode
value: "certificate"
- name: xpack.security.transport.ssl.key
value: "/usr/share/elasticsearch/config/certs/tls.key"
- name: xpack.security.transport.ssl.certificate
value: "/usr/share/elasticsearch/config/certs/tls.crt"
- name: xpack.security.transport.ssl.certificate_authorities
value: "/usr/share/elasticsearch/config/certs/ca.crt"
- name: xpack.security.http.ssl.key
value: "/usr/share/elasticsearch/config/certs/tls.key"
- name: xpack.security.http.ssl.certificate
value: "/usr/share/elasticsearch/config/certs/tls.crt"
- name: xpack.security.http.ssl.certificate_authorities
value: "/usr/share/elasticsearch/config/certs/ca.crt"
roles: []
persistence:
enabled: false
# For client nodes, we also need to add an empty node.roles in elasticsearch.yml
# This is due to https://github.com/elastic/helm-charts/pull/1186#discussion_r631225687
esConfig:
elasticsearch.yml: |
node.roles: []
secret:
enabled: false
createCert: false
secretMounts:
- name: elastic-certificates
secretName: multi-master-certs
path: /usr/share/elasticsearch/config/certs


@@ -0,0 +1,48 @@
---
clusterName: "multi"
nodeGroup: "data"
extraEnvs:
- name: ELASTIC_PASSWORD
valueFrom:
secretKeyRef:
name: multi-master-credentials
key: password
- name: xpack.security.enabled
value: "true"
- name: xpack.security.transport.ssl.enabled
value: "true"
- name: xpack.security.http.ssl.enabled
value: "true"
- name: xpack.security.transport.ssl.verification_mode
value: "certificate"
- name: xpack.security.transport.ssl.key
value: "/usr/share/elasticsearch/config/certs/tls.key"
- name: xpack.security.transport.ssl.certificate
value: "/usr/share/elasticsearch/config/certs/tls.crt"
- name: xpack.security.transport.ssl.certificate_authorities
value: "/usr/share/elasticsearch/config/certs/ca.crt"
- name: xpack.security.http.ssl.key
value: "/usr/share/elasticsearch/config/certs/tls.key"
- name: xpack.security.http.ssl.certificate
value: "/usr/share/elasticsearch/config/certs/tls.crt"
- name: xpack.security.http.ssl.certificate_authorities
value: "/usr/share/elasticsearch/config/certs/ca.crt"
roles:
- data
- data_content
- data_hot
- data_warm
- data_cold
- data_frozen
- ingest
secret:
enabled: false
createCert: false
secretMounts:
- name: elastic-certificates
secretName: multi-master-certs
path: /usr/share/elasticsearch/config/certs


@@ -0,0 +1,6 @@
---
clusterName: "multi"
nodeGroup: "master"
roles:
- master


@@ -0,0 +1,12 @@
http:
https://localhost:9200/_cluster/health:
status: 200
timeout: 2000
allow-insecure: true
username: elastic
password: "{{ .Env.ELASTIC_PASSWORD }}"
body:
- "green"
- '"cluster_name":"multi"'
- '"number_of_nodes":9'
- '"number_of_data_nodes":3'


@@ -0,0 +1,14 @@
default: test
include ../../../helpers/examples.mk
RELEASE := helm-es-networkpolicy
TIMEOUT := 1200s
install:
helm upgrade --wait --timeout=$(TIMEOUT) --install --values values.yaml $(RELEASE) ../../
test: install goss
purge:
helm del $(RELEASE)


@@ -0,0 +1,37 @@
networkPolicy:
http:
enabled: true
explicitNamespacesSelector:
# Accept from namespaces with all those different rules (from whitelisted Pods)
matchLabels:
role: frontend-http
matchExpressions:
- {key: role, operator: In, values: [frontend-http]}
additionalRules:
- podSelector:
matchLabels:
role: frontend-http
- podSelector:
matchExpressions:
- key: role
operator: In
values:
- frontend-http
transport:
enabled: true
allowExternal: true
explicitNamespacesSelector:
matchLabels:
role: frontend-transport
matchExpressions:
- {key: role, operator: In, values: [frontend-transport]}
additionalRules:
- podSelector:
matchLabels:
role: frontend-transport
- podSelector:
matchExpressions:
- key: role
operator: In
values:
- frontend-transport


@@ -0,0 +1,13 @@
default: test
include ../../../helpers/examples.mk
RELEASE := elasticsearch
install:
helm upgrade --wait --timeout=$(TIMEOUT) --install --values values.yaml $(RELEASE) ../../
test: install goss
purge:
helm del $(RELEASE)


@@ -0,0 +1,24 @@
# OpenShift
This example deploys a 3-node Elasticsearch 8.1.0 cluster on [OpenShift][]
using [custom values][].
## Usage
* Deploy the Elasticsearch chart with the OpenShift custom values: `make install`
* You can now set up a port forward to query the Elasticsearch API:
```
kubectl port-forward svc/elasticsearch-master 9200
curl localhost:9200/_cat/indices
```
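To confirm the 3 nodes formed a healthy cluster, you can also query the health endpoint (assuming the same port forward):
```
curl "localhost:9200/_cluster/health?pretty"
```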
## Testing
You can also run [goss integration tests][] using `make test`
[custom values]: https://github.com/elastic/helm-charts/tree/main/elasticsearch/examples/openshift/values.yaml
[goss integration tests]: https://github.com/elastic/helm-charts/tree/main/elasticsearch/examples/openshift/test/goss.yaml
[openshift]: https://www.openshift.com/


@@ -0,0 +1,20 @@
http:
https://localhost:9200/_cluster/health:
status: 200
timeout: 2000
username: elastic
password: "{{ .Env.ELASTIC_PASSWORD }}"
body:
- "green"
- '"number_of_nodes":3'
- '"number_of_data_nodes":3'
https://localhost:9200:
status: 200
timeout: 2000
username: elastic
password: "{{ .Env.ELASTIC_PASSWORD }}"
body:
- '"number" : "8.1.0"'
- '"cluster_name" : "elasticsearch"'
- "You Know, for Search"


@@ -0,0 +1,11 @@
---
securityContext:
runAsUser: null
podSecurityContext:
fsGroup: null
runAsUser: null
sysctlInitContainer:
enabled: false


@@ -0,0 +1,36 @@
default: test
include ../../../helpers/examples.mk
RELEASE := helm-es-security
ELASTICSEARCH_IMAGE := docker.elastic.co/elasticsearch/elasticsearch:$(STACK_VERSION)
TIMEOUT := 1200s
install:
helm upgrade --wait --timeout=$(TIMEOUT) --install --values values.yaml $(RELEASE) ../../
test: secrets install goss
purge:
kubectl delete secrets elastic-certificates elastic-certificate-pem elastic-certificate-crt || true
helm del $(RELEASE)
pull-elasticsearch-image:
docker pull $(ELASTICSEARCH_IMAGE)
secrets:
docker rm -f elastic-helm-charts-certs || true
rm -f elastic-certificates.p12 elastic-certificate.pem elastic-certificate.crt elastic-stack-ca.p12 || true
docker run --name elastic-helm-charts-certs -i -w /tmp \
$(ELASTICSEARCH_IMAGE) \
/bin/sh -c " \
elasticsearch-certutil ca --out /tmp/elastic-stack-ca.p12 --pass '' && \
elasticsearch-certutil cert --name security-master --dns security-master --ca /tmp/elastic-stack-ca.p12 --pass '' --ca-pass '' --out /tmp/elastic-certificates.p12" && \
docker cp elastic-helm-charts-certs:/tmp/elastic-certificates.p12 ./ && \
docker rm -f elastic-helm-charts-certs && \
openssl pkcs12 -nodes -passin pass:'' -in elastic-certificates.p12 -out elastic-certificate.pem && \
openssl x509 -outform der -in elastic-certificate.pem -out elastic-certificate.crt && \
kubectl create secret generic elastic-certificates --from-file=elastic-certificates.p12 && \
kubectl create secret generic elastic-certificate-pem --from-file=elastic-certificate.pem && \
kubectl create secret generic elastic-certificate-crt --from-file=elastic-certificate.crt && \
rm -f elastic-certificates.p12 elastic-certificate.pem elastic-certificate.crt elastic-stack-ca.p12


@@ -0,0 +1,29 @@
# Security
This example deploys a 3-node Elasticsearch 8.1.0 cluster with authentication and
autogenerated certificates for TLS (see [values][]).
Note that this configuration should be used for testing only. For a production
deployment you should generate SSL certificates following the [official docs][].
## Usage
* Create the required secrets: `make secrets`
* Deploy the Elasticsearch chart with the default values: `make install`
* You can now set up a port forward to query the Elasticsearch API:
```
kubectl port-forward svc/security-master 9200
curl -u elastic:changeme https://localhost:9200/_cat/indices
```
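The password for the `elastic` user is stored in a Kubernetes Secret created by the chart; assuming the naming used in this example (`clusterName: security`, `nodeGroup: master`), you can retrieve it with:
```
kubectl get secret security-master-credentials -o jsonpath='{.data.password}' | base64 -d
```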
## Testing
You can also run [goss integration tests][] using `make test`
[goss integration tests]: https://github.com/elastic/helm-charts/tree/main/elasticsearch/examples/security/test/goss.yaml
[official docs]: https://www.elastic.co/guide/en/elasticsearch/reference/current/configuring-tls.html#node-certificates
[values]: https://github.com/elastic/helm-charts/tree/main/elasticsearch/examples/security/values.yaml


@@ -0,0 +1,44 @@
http:
https://security-master:9200/_cluster/health:
status: 200
timeout: 2000
allow-insecure: true
username: elastic
password: "{{ .Env.ELASTIC_PASSWORD }}"
body:
- "green"
- '"number_of_nodes":3'
- '"number_of_data_nodes":3'
https://localhost:9200/:
status: 200
timeout: 2000
allow-insecure: true
username: elastic
password: "{{ .Env.ELASTIC_PASSWORD }}"
body:
- '"cluster_name" : "security"'
- "You Know, for Search"
https://localhost:9200/_license:
status: 200
timeout: 2000
allow-insecure: true
username: elastic
password: "{{ .Env.ELASTIC_PASSWORD }}"
body:
- "active"
- "basic"
file:
/usr/share/elasticsearch/config/elasticsearch.yml:
exists: true
contains:
- "xpack.security.enabled: true"
- "xpack.security.transport.ssl.enabled: true"
- "xpack.security.transport.ssl.verification_mode: certificate"
- "xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12"
- "xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12"
- "xpack.security.http.ssl.enabled: true"
- "xpack.security.http.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12"
- "xpack.security.http.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12"


@@ -0,0 +1,28 @@
---
clusterName: "security"
nodeGroup: "master"
createCert: false
roles:
- master
- ingest
- data
protocol: https
esConfig:
elasticsearch.yml: |
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
xpack.security.http.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
secretMounts:
- name: elastic-certificates
secretName: elastic-certificates
path: /usr/share/elasticsearch/config/certs


@@ -0,0 +1,19 @@
default: test
include ../../../helpers/examples.mk
CHART := elasticsearch
RELEASE := helm-es-upgrade
FROM := 7.17.1 # upgrading from versions earlier than 7.17.1 isn't compatible with 8.x
install:
../../../helpers/upgrade.sh --chart $(CHART) --release $(RELEASE) --from $(FROM)
# Rolling upgrade doesn't work when upgrading from clusters with security disabled.
# This is because nodes with security enabled can't join a cluster with security disabled.
# Every node needs to be recreated at the same time so the pods can form a new cluster with security enabled
kubectl delete pod --selector=app=upgrade-master
test: install goss
purge:
helm del $(RELEASE)


@@ -0,0 +1,17 @@
# Upgrade
This example deploys a 3-node Elasticsearch cluster using an old chart
version, then upgrades it.
## Usage
* Deploy and upgrade Elasticsearch chart with the default values: `make install`
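Under the hood, `make install` calls the upgrade helper with the chart, release name and starting version defined in the Makefile:
```
../../../helpers/upgrade.sh --chart elasticsearch --release helm-es-upgrade --from 7.17.1
```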
## Testing
You can also run [goss integration tests][] using `make test`.
[goss integration tests]: https://github.com/elastic/helm-charts/tree/main/elasticsearch/examples/upgrade/test/goss.yaml


@@ -0,0 +1,22 @@
http:
https://localhost:9200/_cluster/health:
status: 200
username: elastic
password: "{{ .Env.ELASTIC_PASSWORD }}"
allow-insecure: true
timeout: 2000
body:
- "green"
- '"number_of_nodes":3'
- '"number_of_data_nodes":3'
https://localhost:9200:
status: 200
username: elastic
password: "{{ .Env.ELASTIC_PASSWORD }}"
allow-insecure: true
timeout: 2000
body:
- '"number" : "8.1.0"'
- '"cluster_name" : "upgrade"'
- "You Know, for Search"


@@ -0,0 +1,6 @@
---
clusterName: upgrade
# Rolling upgrade doesn't work when upgrading from clusters with security disabled.
# This is because nodes with security enabled can't join a cluster with security disabled.
# Every node needs to be recreated at the same time so the pods can form a new cluster with security enabled
updateStrategy: OnDelete
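# With OnDelete, updated pods are only replaced once deleted manually, e.g.:
#   kubectl delete pod --selector=app=upgrade-master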


@@ -0,0 +1,8 @@
1. Watch all cluster members come up.
$ kubectl get pods --namespace={{ .Release.Namespace }} -l app={{ template "elasticsearch.uname" . }} -w
2. Retrieve elastic user's password.
$ kubectl get secrets --namespace={{ .Release.Namespace }} {{ template "elasticsearch.uname" . }}-credentials -ojsonpath='{.data.password}' | base64 -d
{{- if .Values.tests.enabled }}
3. Test cluster health using Helm test.
$ helm --namespace={{ .Release.Namespace }} test {{ .Release.Name }}
{{- end -}}


@@ -0,0 +1,90 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "elasticsearch.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "elasticsearch.fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- define "elasticsearch.uname" -}}
{{- if empty .Values.fullnameOverride -}}
{{- if empty .Values.nameOverride -}}
{{ .Values.clusterName }}-{{ .Values.nodeGroup }}
{{- else -}}
{{ .Values.nameOverride }}-{{ .Values.nodeGroup }}
{{- end -}}
{{- else -}}
{{ .Values.fullnameOverride }}
{{- end -}}
{{- end -}}
{{/*
Generate certificates
*/}}
{{- define "elasticsearch.gen-certs" -}}
{{- $altNames := list ( include "elasticsearch.masterService" . ) ( printf "%s.%s" (include "elasticsearch.masterService" .) .Release.Namespace ) ( printf "%s.%s.svc" (include "elasticsearch.masterService" .) .Release.Namespace ) -}}
{{- $ca := genCA "elasticsearch-ca" 365 -}}
{{- $cert := genSignedCert ( include "elasticsearch.masterService" . ) nil $altNames 365 $ca -}}
tls.crt: {{ $cert.Cert | toString | b64enc }}
tls.key: {{ $cert.Key | toString | b64enc }}
ca.crt: {{ $ca.Cert | toString | b64enc }}
{{- end -}}
{{- define "elasticsearch.masterService" -}}
{{- if empty .Values.masterService -}}
{{- if empty .Values.fullnameOverride -}}
{{- if empty .Values.nameOverride -}}
{{ .Values.clusterName }}-master
{{- else -}}
{{ .Values.nameOverride }}-master
{{- end -}}
{{- else -}}
{{ .Values.fullnameOverride }}
{{- end -}}
{{- else -}}
{{ .Values.masterService }}
{{- end -}}
{{- end -}}
{{- define "elasticsearch.endpoints" -}}
{{- $replicas := int (toString (.Values.replicas)) }}
{{- $uname := (include "elasticsearch.uname" .) }}
{{- range $i, $e := untilStep 0 $replicas 1 -}}
{{ $uname }}-{{ $i }},
{{- end -}}
{{- end -}}
{{- define "elasticsearch.roles" -}}
{{- range $.Values.roles -}}
{{ . }},
{{- end -}}
{{- end -}}
{{- define "elasticsearch.esMajorVersion" -}}
{{- if .Values.esMajorVersion -}}
{{ .Values.esMajorVersion }}
{{- else -}}
{{- $version := int (index (.Values.imageTag | splitList ".") 0) -}}
{{- if and (contains "docker.elastic.co/elasticsearch/elasticsearch" .Values.image) (not (eq $version 0)) -}}
{{ $version }}
{{- else -}}
8
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Use the fullname if the serviceAccount value is not set
*/}}
{{- define "elasticsearch.serviceAccount" -}}
{{- .Values.rbac.serviceAccountName | default (include "elasticsearch.uname" .) -}}
{{- end -}}


@@ -0,0 +1,34 @@
{{- if .Values.esConfig }}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "elasticsearch.uname" . }}-config
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}"
app: "{{ template "elasticsearch.uname" . }}"
data:
{{- range $path, $config := .Values.esConfig }}
{{ $path }}: |
{{ $config | indent 4 -}}
{{- end -}}
{{- end -}}
{{- if .Values.esJvmOptions }}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "elasticsearch.uname" . }}-jvm-options
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}"
app: "{{ template "elasticsearch.uname" . }}"
data:
{{- range $path, $config := .Values.esJvmOptions }}
{{ $path }}: |
{{ $config | indent 4 -}}
{{- end -}}
{{- end -}}


@@ -0,0 +1,64 @@
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "elasticsearch.uname" . -}}
{{- $httpPort := .Values.httpPort -}}
{{- $pathtype := .Values.ingress.pathtype -}}
{{- $ingressPath := .Values.ingress.path -}}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ $fullName }}
labels:
app: {{ .Chart.Name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- with .Values.ingress.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.className }}
ingressClassName: {{ .Values.ingress.className | quote }}
{{- end }}
{{- if .Values.ingress.tls }}
tls:
{{- if $ingressPath }}
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- else }}
{{ toYaml .Values.ingress.tls | indent 4 }}
{{- end }}
{{- end}}
rules:
{{- range .Values.ingress.hosts }}
{{- if $ingressPath }}
- host: {{ . }}
http:
paths:
- path: {{ $ingressPath }}
pathType: {{ $pathtype }}
backend:
service:
name: {{ $fullName }}
port:
number: {{ $httpPort }}
{{- else }}
- host: {{ .host }}
http:
paths:
{{- range .paths }}
- path: {{ .path }}
pathType: {{ $pathtype }}
backend:
service:
name: {{ $fullName }}
port:
number: {{ .servicePort | default $httpPort }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}


@@ -0,0 +1,61 @@
{{- if (or .Values.networkPolicy.http.enabled .Values.networkPolicy.transport.enabled) }}
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: {{ template "elasticsearch.uname" . }}
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}"
app: "{{ template "elasticsearch.uname" . }}"
spec:
podSelector:
matchLabels:
app: "{{ template "elasticsearch.uname" . }}"
ingress: # Allow inbound connections
{{- if .Values.networkPolicy.http.enabled }}
# For HTTP access
- ports:
- port: {{ .Values.httpPort }}
from:
# From authorized Pods (having the correct label)
- podSelector:
matchLabels:
{{ template "elasticsearch.uname" . }}-http-client: "true"
{{- with .Values.networkPolicy.http.explicitNamespacesSelector }}
# From authorized namespaces
namespaceSelector:
{{ toYaml . | indent 12 }}
{{- end }}
{{- with .Values.networkPolicy.http.additionalRules }}
# Or from custom additional rules
{{ toYaml . | indent 8 }}
{{- end }}
{{- end }}
{{- if .Values.networkPolicy.transport.enabled }}
# For transport access
- ports:
- port: {{ .Values.transportPort }}
from:
# From authorized Pods (having the correct label)
- podSelector:
matchLabels:
{{ template "elasticsearch.uname" . }}-transport-client: "true"
{{- with .Values.networkPolicy.transport.explicitNamespacesSelector }}
# From authorized namespaces
namespaceSelector:
{{ toYaml . | indent 12 }}
{{- end }}
{{- with .Values.networkPolicy.transport.additionalRules }}
# Or from custom additional rules
{{ toYaml . | indent 8 }}
{{- end }}
# Or from other Elasticsearch Pods
- podSelector:
matchLabels:
app: "{{ template "elasticsearch.uname" . }}"
{{- end }}
{{- end }}


@@ -0,0 +1,15 @@
{{- if .Values.maxUnavailable }}
{{- if .Capabilities.APIVersions.Has "policy/v1" -}}
apiVersion: policy/v1
{{- else}}
apiVersion: policy/v1beta1
{{- end }}
kind: PodDisruptionBudget
metadata:
name: "{{ template "elasticsearch.uname" . }}-pdb"
spec:
maxUnavailable: {{ .Values.maxUnavailable }}
selector:
matchLabels:
app: "{{ template "elasticsearch.uname" . }}"
{{- end }}


@@ -0,0 +1,18 @@
{{- if .Values.podSecurityPolicy.create -}}
{{- $fullName := include "elasticsearch.uname" . -}}
{{- if .Capabilities.APIVersions.Has "policy/v1" -}}
apiVersion: policy/v1
{{- else}}
apiVersion: policy/v1beta1
{{- end }}
kind: PodSecurityPolicy
metadata:
name: {{ default $fullName .Values.podSecurityPolicy.name | quote }}
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app: {{ $fullName | quote }}
spec:
{{ toYaml .Values.podSecurityPolicy.spec | indent 2 }}
{{- end -}}


@@ -0,0 +1,25 @@
{{- if .Values.rbac.create -}}
{{- $fullName := include "elasticsearch.uname" . -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ $fullName | quote }}
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app: {{ $fullName | quote }}
rules:
- apiGroups:
- extensions
resources:
- podsecuritypolicies
resourceNames:
{{- if eq .Values.podSecurityPolicy.name "" }}
- {{ $fullName | quote }}
{{- else }}
- {{ .Values.podSecurityPolicy.name | quote }}
{{- end }}
verbs:
- use
{{- end -}}


@@ -0,0 +1,20 @@
{{- if .Values.rbac.create -}}
{{- $fullName := include "elasticsearch.uname" . -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ $fullName | quote }}
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app: {{ $fullName | quote }}
subjects:
- kind: ServiceAccount
name: "{{ template "elasticsearch.serviceAccount" . }}"
namespace: {{ .Release.Namespace | quote }}
roleRef:
kind: Role
name: {{ $fullName | quote }}
apiGroup: rbac.authorization.k8s.io
{{- end -}}


@@ -0,0 +1,17 @@
{{- if .Values.createCert }}
apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
name: {{ template "elasticsearch.uname" . }}-certs
labels:
app: {{ template "elasticsearch.uname" . }}
chart: "{{ .Chart.Name }}"
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
annotations:
"helm.sh/hook": "pre-install,pre-upgrade"
"helm.sh/hook-delete-policy": "before-hook-creation"
data:
{{ ( include "elasticsearch.gen-certs" . ) | indent 2 }}
{{- end }}


@@ -0,0 +1,23 @@
{{- if .Values.secret.enabled -}}
{{- $passwordValue := (randAlphaNum 16) | b64enc | quote }}
apiVersion: v1
kind: Secret
metadata:
name: {{ template "elasticsearch.uname" . }}-credentials
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}"
app: "{{ template "elasticsearch.uname" . }}"
{{- range $key, $value := .Values.labels }}
{{ $key }}: {{ $value | quote }}
{{- end }}
type: Opaque
data:
username: {{ "elastic" | b64enc }}
{{- if .Values.secret.password }}
password: {{ .Values.secret.password | b64enc }}
{{- else }}
password: {{ $passwordValue }}
{{- end }}
{{- end }}


@@ -0,0 +1,78 @@
{{- if .Values.service.enabled -}}
---
kind: Service
apiVersion: v1
metadata:
{{- if eq .Values.nodeGroup "master" }}
name: {{ template "elasticsearch.masterService" . }}
{{- else }}
name: {{ template "elasticsearch.uname" . }}
{{- end }}
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}"
app: "{{ template "elasticsearch.uname" . }}"
{{- if .Values.service.labels }}
{{ toYaml .Values.service.labels | indent 4}}
{{- end }}
annotations:
{{ toYaml .Values.service.annotations | indent 4 }}
spec:
type: {{ .Values.service.type }}
selector:
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}"
app: "{{ template "elasticsearch.uname" . }}"
publishNotReadyAddresses: {{ .Values.service.publishNotReadyAddresses }}
ports:
- name: {{ .Values.service.httpPortName | default "http" }}
protocol: TCP
port: {{ .Values.httpPort }}
{{- if .Values.service.nodePort }}
nodePort: {{ .Values.service.nodePort }}
{{- end }}
- name: {{ .Values.service.transportPortName | default "transport" }}
protocol: TCP
port: {{ .Values.transportPort }}
{{- if .Values.service.loadBalancerIP }}
loadBalancerIP: {{ .Values.service.loadBalancerIP }}
{{- end }}
{{- with .Values.service.loadBalancerSourceRanges }}
loadBalancerSourceRanges:
{{ toYaml . | indent 4 }}
{{- end }}
{{- if .Values.service.externalTrafficPolicy }}
externalTrafficPolicy: {{ .Values.service.externalTrafficPolicy }}
{{- end }}
{{- end }}
---
kind: Service
apiVersion: v1
metadata:
{{- if eq .Values.nodeGroup "master" }}
name: {{ template "elasticsearch.masterService" . }}-headless
{{- else }}
name: {{ template "elasticsearch.uname" . }}-headless
{{- end }}
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}"
app: "{{ template "elasticsearch.uname" . }}"
{{- if .Values.service.labelsHeadless }}
{{ toYaml .Values.service.labelsHeadless | indent 4 }}
{{- end }}
annotations:
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
clusterIP: None # This is needed for statefulset hostnames like elasticsearch-0 to resolve
# Create endpoints also if the related pod isn't ready
publishNotReadyAddresses: true
selector:
app: "{{ template "elasticsearch.uname" . }}"
ports:
- name: {{ .Values.service.httpPortName | default "http" }}
port: {{ .Values.httpPort }}
- name: {{ .Values.service.transportPortName | default "transport" }}
port: {{ .Values.transportPort }}


@@ -0,0 +1,16 @@
{{- if .Values.rbac.create -}}
{{- $fullName := include "elasticsearch.uname" . -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: "{{ template "elasticsearch.serviceAccount" . }}"
annotations:
{{- with .Values.rbac.serviceAccountAnnotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app: {{ $fullName | quote }}
{{- end -}}


@@ -0,0 +1,429 @@
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ template "elasticsearch.uname" . }}
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}"
app: "{{ template "elasticsearch.uname" . }}"
{{- range $key, $value := .Values.labels }}
{{ $key }}: {{ $value | quote }}
{{- end }}
annotations:
esMajorVersion: "{{ include "elasticsearch.esMajorVersion" . }}"
spec:
serviceName: {{ template "elasticsearch.uname" . }}-headless
selector:
matchLabels:
app: "{{ template "elasticsearch.uname" . }}"
replicas: {{ .Values.replicas }}
podManagementPolicy: {{ .Values.podManagementPolicy }}
updateStrategy:
type: {{ .Values.updateStrategy }}
{{- if .Values.persistence.enabled }}
volumeClaimTemplates:
- metadata:
name: {{ template "elasticsearch.uname" . }}
{{- if .Values.persistence.labels.enabled }}
labels:
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}"
app: "{{ template "elasticsearch.uname" . }}"
{{- range $key, $value := .Values.labels }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
{{- with .Values.persistence.annotations }}
annotations:
{{ toYaml . | indent 8 }}
{{- end }}
spec:
{{ toYaml .Values.volumeClaimTemplate | indent 6 }}
{{- end }}
template:
metadata:
name: "{{ template "elasticsearch.uname" . }}"
labels:
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}"
app: "{{ template "elasticsearch.uname" . }}"
{{- range $key, $value := .Values.labels }}
{{ $key }}: {{ $value | quote }}
{{- end }}
annotations:
{{- range $key, $value := .Values.podAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{/* This forces a restart if the configmap has changed */}}
{{- if or .Values.esConfig .Values.esJvmOptions }}
configchecksum: {{ include (print .Template.BasePath "/configmap.yaml") . | sha256sum | trunc 63 }}
{{- end }}
spec:
{{- if .Values.schedulerName }}
schedulerName: "{{ .Values.schedulerName }}"
{{- end }}
securityContext:
{{ toYaml .Values.podSecurityContext | indent 8 }}
{{- if .Values.fsGroup }}
fsGroup: {{ .Values.fsGroup }} # Deprecated value, please use .Values.podSecurityContext.fsGroup
{{- end }}
{{- if or .Values.rbac.create .Values.rbac.serviceAccountName }}
serviceAccountName: "{{ template "elasticsearch.serviceAccount" . }}"
{{- end }}
automountServiceAccountToken: {{ .Values.rbac.automountToken }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 6 }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- if or (eq .Values.antiAffinity "hard") (eq .Values.antiAffinity "soft") .Values.nodeAffinity }}
{{- if .Values.priorityClassName }}
priorityClassName: {{ .Values.priorityClassName }}
{{- end }}
affinity:
{{- end }}
{{- if eq .Values.antiAffinity "hard" }}
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- "{{ template "elasticsearch.uname" .}}"
topologyKey: {{ .Values.antiAffinityTopologyKey }}
{{- else if eq .Values.antiAffinity "soft" }}
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
podAffinityTerm:
topologyKey: {{ .Values.antiAffinityTopologyKey }}
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- "{{ template "elasticsearch.uname" . }}"
{{- end }}
{{- with .Values.nodeAffinity }}
nodeAffinity:
{{ toYaml . | indent 10 }}
{{- end }}
terminationGracePeriodSeconds: {{ .Values.terminationGracePeriod }}
volumes:
{{- range .Values.secretMounts }}
- name: {{ .name }}
secret:
secretName: {{ .secretName }}
{{- if .defaultMode }}
defaultMode: {{ .defaultMode }}
{{- end }}
{{- end }}
{{- if .Values.esConfig }}
- name: esconfig
configMap:
name: {{ template "elasticsearch.uname" . }}-config
{{- end }}
{{- if .Values.esJvmOptions }}
- name: esjvmoptions
configMap:
name: {{ template "elasticsearch.uname" . }}-jvm-options
{{- end }}
{{- if .Values.createCert }}
- name: elasticsearch-certs
secret:
secretName: {{ template "elasticsearch.uname" . }}-certs
{{- end }}
{{- if .Values.keystore }}
- name: keystore
emptyDir: {}
{{- range .Values.keystore }}
- name: keystore-{{ .secretName }}
secret: {{ toYaml . | nindent 12 }}
{{- end }}
{{ end }}
{{- if .Values.extraVolumes }}
# Currently some extra blocks accept strings
# to continue with backwards compatibility this is being kept
# whilst also allowing for yaml to be specified too.
{{- if eq "string" (printf "%T" .Values.extraVolumes) }}
{{ tpl .Values.extraVolumes . | indent 8 }}
{{- else }}
{{ toYaml .Values.extraVolumes | indent 8 }}
{{- end }}
{{- end }}
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
{{ toYaml .Values.imagePullSecrets | indent 8 }}
{{- end }}
enableServiceLinks: {{ .Values.enableServiceLinks }}
{{- if .Values.hostAliases }}
hostAliases: {{ toYaml .Values.hostAliases | nindent 8 }}
{{- end }}
{{- if or (.Values.extraInitContainers) (.Values.sysctlInitContainer.enabled) (.Values.keystore) }}
initContainers:
{{- if .Values.sysctlInitContainer.enabled }}
- name: configure-sysctl
securityContext:
runAsUser: 0
privileged: true
image: "{{ .Values.image }}:{{ .Values.imageTag }}"
imagePullPolicy: "{{ .Values.imagePullPolicy }}"
command: ["sysctl", "-w", "vm.max_map_count={{ .Values.sysctlVmMaxMapCount}}"]
resources:
{{ toYaml .Values.initResources | indent 10 }}
{{- end }}
{{ if .Values.keystore }}
- name: keystore
securityContext:
{{ toYaml .Values.securityContext | indent 10 }}
image: "{{ .Values.image }}:{{ .Values.imageTag }}"
imagePullPolicy: "{{ .Values.imagePullPolicy }}"
command:
- bash
- -c
- |
set -euo pipefail
elasticsearch-keystore create
for i in /tmp/keystoreSecrets/*/*; do
key=$(basename $i)
echo "Adding file $i to keystore key $key"
elasticsearch-keystore add-file "$key" "$i"
done
# Add the bootstrap password since otherwise the Elasticsearch entrypoint tries to do this on startup
if [ ! -z ${ELASTIC_PASSWORD+x} ]; then
echo 'Adding env $ELASTIC_PASSWORD to keystore as key bootstrap.password'
echo "$ELASTIC_PASSWORD" | elasticsearch-keystore add -x bootstrap.password
fi
cp -a /usr/share/elasticsearch/config/elasticsearch.keystore /tmp/keystore/
env: {{ toYaml .Values.extraEnvs | nindent 10 }}
envFrom: {{ toYaml .Values.envFrom | nindent 10 }}
resources: {{ toYaml .Values.initResources | nindent 10 }}
volumeMounts:
- name: keystore
mountPath: /tmp/keystore
{{- range .Values.keystore }}
- name: keystore-{{ .secretName }}
mountPath: /tmp/keystoreSecrets/{{ .secretName }}
{{- end }}
{{ end }}
{{- if .Values.extraInitContainers }}
# Currently some extra blocks accept strings
# to continue with backwards compatibility this is being kept
# whilst also allowing for yaml to be specified too.
{{- if eq "string" (printf "%T" .Values.extraInitContainers) }}
{{ tpl .Values.extraInitContainers . | indent 6 }}
{{- else }}
{{ toYaml .Values.extraInitContainers | indent 6 }}
{{- end }}
{{- end }}
{{- end }}
containers:
- name: "{{ template "elasticsearch.name" . }}"
securityContext:
{{ toYaml .Values.securityContext | indent 10 }}
image: "{{ .Values.image }}:{{ .Values.imageTag }}"
imagePullPolicy: "{{ .Values.imagePullPolicy }}"
readinessProbe:
exec:
command:
- bash
- -c
- |
set -e
# Exit if ELASTIC_PASSWORD is unset
if [ -z "${ELASTIC_PASSWORD}" ]; then
echo "ELASTIC_PASSWORD variable is missing, exiting"
exit 1
fi
# If the node is starting up, wait for the cluster to be ready (request params: "{{ .Values.clusterHealthCheckParams }}" )
# Once it has started only check that the node itself is responding
START_FILE=/tmp/.es_start_file
# Disable nss cache to avoid filling dentry cache when calling curl
# This is required with Elasticsearch Docker using nss < 3.52
export NSS_SDB_USE_CACHE=no
http () {
local path="${1}"
local args="${2}"
set -- -XGET -s
if [ "$args" != "" ]; then
set -- "$@" $args
fi
set -- "$@" -u "elastic:${ELASTIC_PASSWORD}"
curl --output /dev/null -k "$@" "{{ .Values.protocol }}://127.0.0.1:{{ .Values.httpPort }}${path}"
}
if [ -f "${START_FILE}" ]; then
echo 'Elasticsearch is already running, checking that the node is healthy'
HTTP_CODE=$(http "/" "-w %{http_code}")
RC=$?
if [[ ${RC} -ne 0 ]]; then
echo "curl --output /dev/null -k -XGET -s -w '%{http_code}' \${BASIC_AUTH} {{ .Values.protocol }}://127.0.0.1:{{ .Values.httpPort }}/ failed with RC ${RC}"
exit ${RC}
fi
# ready if HTTP code 200, 503 is tolerable if ES version is 6.x
if [[ ${HTTP_CODE} == "200" ]]; then
exit 0
elif [[ ${HTTP_CODE} == "503" && "{{ include "elasticsearch.esMajorVersion" . }}" == "6" ]]; then
exit 0
else
echo "curl --output /dev/null -k -XGET -s -w '%{http_code}' \${BASIC_AUTH} {{ .Values.protocol }}://127.0.0.1:{{ .Values.httpPort }}/ failed with HTTP code ${HTTP_CODE}"
exit 1
fi
else
echo 'Waiting for Elasticsearch cluster to become ready (request params: "{{ .Values.clusterHealthCheckParams }}" )'
if http "/_cluster/health?{{ .Values.clusterHealthCheckParams }}" "--fail" ; then
touch ${START_FILE}
exit 0
else
echo 'Cluster is not yet ready (request params: "{{ .Values.clusterHealthCheckParams }}" )'
exit 1
fi
fi
{{ toYaml .Values.readinessProbe | indent 10 }}
ports:
- name: http
containerPort: {{ .Values.httpPort }}
- name: transport
containerPort: {{ .Values.transportPort }}
resources:
{{ toYaml .Values.resources | indent 10 }}
env:
- name: node.name
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: TZ
value: Asia/Shanghai
{{- if has "master" .Values.roles }}
- name: cluster.initial_master_nodes
value: "{{ template "elasticsearch.endpoints" . }}"
{{- end }}
#- name: node.roles
# value: "{{ template "elasticsearch.roles" . }}"
{{- if lt (int (include "elasticsearch.esMajorVersion" .)) 7 }}
- name: discovery.zen.ping.unicast.hosts
value: "{{ template "elasticsearch.masterService" . }}-headless"
{{- else }}
- name: discovery.seed_hosts
value: "{{ template "elasticsearch.masterService" . }}-headless"
{{- end }}
- name: cluster.name
value: "{{ .Values.clusterName }}"
- name: network.host
value: "{{ .Values.networkHost }}"
{{- if .Values.secret.enabled }}
- name: ELASTIC_PASSWORD
valueFrom:
secretKeyRef:
name: {{ template "elasticsearch.uname" . }}-credentials
key: password
{{- end }}
{{- if .Values.esJavaOpts }}
- name: ES_JAVA_OPTS
value: "{{ .Values.esJavaOpts }}"
{{- end }}
{{- if not .Values.createCert }}
- name: xpack.security.enabled
value: "false"
{{- end }}
{{- if .Values.createCert }}
- name: xpack.security.enabled
value: "true"
- name: xpack.security.transport.ssl.enabled
value: "true"
- name: xpack.security.http.ssl.enabled
value: "true"
- name: xpack.security.transport.ssl.verification_mode
value: "certificate"
- name: xpack.security.transport.ssl.key
value: "/usr/share/elasticsearch/config/certs/tls.key"
- name: xpack.security.transport.ssl.certificate
value: "/usr/share/elasticsearch/config/certs/tls.crt"
- name: xpack.security.transport.ssl.certificate_authorities
value: "/usr/share/elasticsearch/config/certs/ca.crt"
- name: xpack.security.http.ssl.key
value: "/usr/share/elasticsearch/config/certs/tls.key"
- name: xpack.security.http.ssl.certificate
value: "/usr/share/elasticsearch/config/certs/tls.crt"
- name: xpack.security.http.ssl.certificate_authorities
value: "/usr/share/elasticsearch/config/certs/ca.crt"
{{- end }}
{{- if .Values.extraEnvs }}
{{ toYaml .Values.extraEnvs | indent 10 }}
{{- end }}
{{- if .Values.envFrom }}
envFrom:
{{ toYaml .Values.envFrom | indent 10 }}
{{- end }}
volumeMounts:
{{- if .Values.persistence.enabled }}
- name: "{{ template "elasticsearch.uname" . }}"
mountPath: /usr/share/elasticsearch/data
{{- end }}
{{- if .Values.createCert }}
- name: elasticsearch-certs
mountPath: /usr/share/elasticsearch/config/certs
readOnly: true
{{- end }}
{{ if .Values.keystore }}
- name: keystore
mountPath: /usr/share/elasticsearch/config/elasticsearch.keystore
subPath: elasticsearch.keystore
{{ end }}
{{- range .Values.secretMounts }}
- name: {{ .name }}
mountPath: {{ .path }}
{{- if .subPath }}
subPath: {{ .subPath }}
{{- end }}
{{- end }}
{{- range $path, $config := .Values.esConfig }}
- name: esconfig
mountPath: /usr/share/elasticsearch/config/{{ $path }}
subPath: {{ $path }}
{{- end -}}
{{- range $path, $config := .Values.esJvmOptions }}
- name: esjvmoptions
mountPath: /usr/share/elasticsearch/config/jvm.options.d/{{ $path }}
subPath: {{ $path }}
{{- end -}}
{{- if .Values.extraVolumeMounts }}
# Currently some extra blocks accept strings
# to continue with backwards compatibility this is being kept
# whilst also allowing for yaml to be specified too.
{{- if eq "string" (printf "%T" .Values.extraVolumeMounts) }}
{{ tpl .Values.extraVolumeMounts . | indent 10 }}
{{- else }}
{{ toYaml .Values.extraVolumeMounts | indent 10 }}
{{- end }}
{{- end }}
{{- if .Values.lifecycle }}
lifecycle:
{{ toYaml .Values.lifecycle | indent 10 }}
{{- end }}
{{- if .Values.extraContainers }}
# Currently some extra blocks accept strings
# to continue with backwards compatibility this is being kept
# whilst also allowing for yaml to be specified too.
{{- if eq "string" (printf "%T" .Values.extraContainers) }}
{{ tpl .Values.extraContainers . | indent 6 }}
{{- else }}
{{ toYaml .Values.extraContainers | indent 6 }}
{{- end }}
{{- end }}


@@ -0,0 +1,36 @@
{{- if .Values.tests.enabled -}}
---
apiVersion: v1
kind: Pod
metadata:
{{- if .Values.healthNameOverride }}
name: {{ .Values.healthNameOverride | quote }}
{{- else }}
name: "{{ .Release.Name }}-{{ randAlpha 5 | lower }}-test"
{{- end }}
annotations:
"helm.sh/hook": test
"helm.sh/hook-delete-policy": hook-succeeded
spec:
securityContext:
{{ toYaml .Values.podSecurityContext | indent 4 }}
containers:
{{- if .Values.healthNameOverride }}
- name: {{ .Values.healthNameOverride | quote }}
{{- else }}
- name: "{{ .Release.Name }}-{{ randAlpha 5 | lower }}-test"
{{- end }}
image: "{{ .Values.image }}:{{ .Values.imageTag }}"
imagePullPolicy: "{{ .Values.imagePullPolicy }}"
command:
- "sh"
- "-c"
- |
#!/usr/bin/env bash -e
curl -XGET --fail '{{ template "elasticsearch.uname" . }}:{{ .Values.httpPort }}/_cluster/health?{{ .Values.clusterHealthCheckParams }}'
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
{{ toYaml .Values.imagePullSecrets | indent 4 }}
{{- end }}
restartPolicy: Never
{{- end -}}

File diff suppressed because it is too large


@@ -0,0 +1,361 @@
---
clusterName: "elasticsearch"
nodeGroup: "master"
# The service that non-master groups will try to connect to when joining the cluster
# This should be set to clusterName + "-" + nodeGroup for your master group
masterService: ""
# Elasticsearch roles that will be applied to this nodeGroup
# These will be set as environment variables. E.g. node.roles=master
# https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html#node-roles
roles:
- master
- data
- data_content
- data_hot
- data_warm
- data_cold
- ingest
- ml
- remote_cluster_client
- transform
replicas: 3
minimumMasterNodes: 2
esMajorVersion: ""
# Allows you to add any config files in /usr/share/elasticsearch/config/
# such as elasticsearch.yml and log4j2.properties
esConfig: {}
# elasticsearch.yml: |
# key:
# nestedkey: value
# log4j2.properties: |
# key = value
createCert: false
esJvmOptions: {}
# processors.options: |
# -XX:ActiveProcessorCount=3
# Extra environment variables to append to this nodeGroup
# This will be appended to the current 'env:' key. You can use any of the kubernetes env
# syntax here
extraEnvs: []
# - name: MY_ENVIRONMENT_VAR
# value: the_value_goes_here
# Allows you to load environment variables from kubernetes secret or config map
envFrom: []
# - secretRef:
# name: env-secret
# - configMapRef:
# name: config-map
# Disable it to use your own elastic-credentials Secret.
secret:
enabled: true
password: "knowstreaming" # generated randomly if not defined
# A list of secrets and their paths to mount inside the pod
# This is useful for mounting certificates for security and for mounting
# the X-Pack license
secretMounts: []
# - name: elastic-certificates
# secretName: elastic-certificates
# path: /usr/share/elasticsearch/config/certs
# defaultMode: 0755
hostAliases: []
#- ip: "127.0.0.1"
# hostnames:
# - "foo.local"
# - "bar.local"
image: "docker.elastic.co/elasticsearch/elasticsearch"
imageTag: "7.6.0"
imagePullPolicy: "IfNotPresent"
podAnnotations: {}
# iam.amazonaws.com/role: es-cluster
# additional labels
labels: {}
esJavaOpts: "" # example: "-Xmx1g -Xms1g"
#resources:
# requests:
# cpu: "1000m"
# memory: "1Gi"
# limits:
# cpu: "1000m"
# memory: "1Gi"
initResources: {}
# limits:
# cpu: "25m"
# memory: "128Mi"
# requests:
# cpu: "25m"
# memory: "128Mi"
networkHost: "0.0.0.0"
volumeClaimTemplate:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 30Gi
#storageClassName: sc-lvmpv
rbac:
create: false
serviceAccountAnnotations: {}
serviceAccountName: ""
automountToken: true
podSecurityPolicy:
create: false
name: ""
spec:
privileged: true
fsGroup:
rule: RunAsAny
runAsUser:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
volumes:
- secret
- configMap
- persistentVolumeClaim
- emptyDir
persistence:
enabled: true
labels:
# Add default labels for the volumeClaimTemplate of the StatefulSet
enabled: false
annotations: {}
extraVolumes: []
# - name: extras
# emptyDir: {}
extraVolumeMounts: []
# - name: extras
# mountPath: /usr/share/extras
# readOnly: true
extraContainers: []
# - name: do-something
# image: busybox
# command: ['do', 'something']
extraInitContainers: []
# - name: do-something
# image: busybox
# command: ['do', 'something']
# This is the PriorityClass settings as defined in
# https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
priorityClassName: ""
# By default this will make sure two pods don't end up on the same node
# Changing this to a region would allow you to spread pods across regions
antiAffinityTopologyKey: "kubernetes.io/hostname"
# Hard means that by default pods will only be scheduled if there are enough nodes for them
# and that they will never end up on the same node. Setting this to soft will do this "best effort"
antiAffinity: "hard"
# This is the node affinity settings as defined in
# https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature
nodeAffinity: {}
# The default is to deploy all pods serially. By setting this to parallel all pods are started at
# the same time when bootstrapping the cluster
podManagementPolicy: "Parallel"
# The environment variables injected by service links are not used, but can lead to slow Elasticsearch boot times when
# there are many services in the current namespace.
# If you experience slow pod startups you probably want to set this to `false`.
enableServiceLinks: true
protocol: http
httpPort: 9200
transportPort: 9300
service:
enabled: true
labels: {}
labelsHeadless: {}
type: ClusterIP
# Consider that all endpoints are considered "ready" even if the Pods themselves are not
# https://kubernetes.io/docs/reference/kubernetes-api/service-resources/service-v1/#ServiceSpec
publishNotReadyAddresses: false
nodePort: ""
annotations: {}
httpPortName: http
transportPortName: transport
loadBalancerIP: ""
loadBalancerSourceRanges: []
externalTrafficPolicy: ""
updateStrategy: RollingUpdate
# This is the max unavailable setting for the pod disruption budget
# The default value of 1 will make sure that Kubernetes won't allow more than 1
# of your pods to be unavailable during maintenance
maxUnavailable: 1
podSecurityContext:
fsGroup: 1000
runAsUser: 1000
securityContext:
capabilities:
drop:
- ALL
# readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
# How long to wait for Elasticsearch to stop gracefully
terminationGracePeriod: 120
sysctlVmMaxMapCount: 262144
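# When sysctlInitContainer is enabled, a privileged init container applies this
# value on each node before Elasticsearch starts, equivalent to running:
#   sysctl -w vm.max_map_count=262144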
readinessProbe:
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 3
timeoutSeconds: 5
# https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html#request-params wait_for_status
clusterHealthCheckParams: "wait_for_status=green&timeout=1s"
## Use an alternate scheduler.
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
schedulerName: ""
imagePullSecrets: []
nodeSelector: {}
tolerations: []
# Enabling this will publicly expose your Elasticsearch instance.
# Only enable this if you have security enabled on your cluster
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
className: "nginx"
pathtype: ImplementationSpecific
hosts:
- host: chart-example.local
paths:
- path: /
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
nameOverride: ""
fullnameOverride: ""
healthNameOverride: ""
lifecycle: {}
# preStop:
# exec:
# command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
# postStart:
# exec:
# command:
# - bash
# - -c
# - |
# #!/bin/bash
# # Add a template to adjust number of shards/replicas
# TEMPLATE_NAME=my_template
# INDEX_PATTERN="logstash-*"
# SHARD_COUNT=8
# REPLICA_COUNT=1
# ES_URL=http://localhost:9200
# while [[ "$(curl -s -o /dev/null -w '%{http_code}\n' $ES_URL)" != "200" ]]; do sleep 1; done
# curl -XPUT "$ES_URL/_template/$TEMPLATE_NAME" -H 'Content-Type: application/json' -d'{"index_patterns":['\""$INDEX_PATTERN"\"'],"settings":{"number_of_shards":'$SHARD_COUNT',"number_of_replicas":'$REPLICA_COUNT'}}'
sysctlInitContainer:
enabled: true
keystore: []
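# Example (assumption: a Secret named elastic-s3-credentials already exists);
# an init container adds every file in each listed Secret to the Elasticsearch
# keystore under its key name:
# keystore:
#   - secretName: elastic-s3-credentials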
networkPolicy:
## Enable creation of NetworkPolicy resources. Only Ingress traffic is filtered for now.
## In order for a Pod to access Elasticsearch, it needs to have the following label:
## {{ template "uname" . }}-client: "true"
## Example for default configuration to access HTTP port:
## elasticsearch-master-http-client: "true"
## Example for default configuration to access transport port:
## elasticsearch-master-transport-client: "true"
http:
enabled: false
## If explicitNamespacesSelector is not set or set to {}, only client Pods in the networkPolicy's namespace
## that match all criteria can reach Elasticsearch.
## But sometimes, we want the Pods to be accessible to clients from other namespaces, in this case, we can use this
## parameter to select these namespaces
##
# explicitNamespacesSelector:
# # Accept from namespaces with all those different rules (only from whitelisted Pods)
# matchLabels:
# role: frontend
# matchExpressions:
# - {key: role, operator: In, values: [frontend]}
## Additional NetworkPolicy Ingress "from" rules to set. Note that all rules are OR-ed.
##
# additionalRules:
# - podSelector:
# matchLabels:
# role: frontend
# - podSelector:
# matchExpressions:
# - key: role
# operator: In
# values:
# - frontend
transport:
## Note that all Elasticsearch Pods can talk to each other over the transport port even when this is enabled.
enabled: false
# explicitNamespacesSelector:
# matchLabels:
# role: frontend
# matchExpressions:
# - {key: role, operator: In, values: [frontend]}
# additionalRules:
# - podSelector:
# matchLabels:
# role: frontend
# - podSelector:
# matchExpressions:
# - key: role
# operator: In
# values:
# - frontend
tests:
enabled: true
# Deprecated
# please use the above podSecurityContext.fsGroup instead
fsGroup: ""


@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/


@@ -0,0 +1,13 @@
apiVersion: v2
name: knowstreaming-web
description: A Helm chart for Kubernetes
type: application
version: 0.1.0
maintainers:
- email: didicloud@didiglobal.com
name: didicloud
appVersion: "1.0.0"


@@ -0,0 +1 @@
knowstreaming-web


@@ -0,0 +1,55 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "knowstreaming-web.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "knowstreaming-web.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "knowstreaming-web.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "knowstreaming-web.labels" -}}
helm.sh/chart: {{ include "knowstreaming-web.chart" . }}
{{ include "knowstreaming-web.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "knowstreaming-web.selectorLabels" -}}
app.kubernetes.io/name: {{ include "knowstreaming-web.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}


@@ -0,0 +1,58 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "knowstreaming-web.fullname" . }}
labels:
app: {{ template "knowstreaming-web.name" . }}
chart: {{ template "knowstreaming-web.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
component: server
data:
knowStreaming.conf: |
server {
listen 80;
server_name localhost;
gzip on;
gzip_buffers 16 8k;
gzip_comp_level 4;
gzip_http_version 1.0;
gzip_min_length 1280;
gzip_types text/plain text/css text/xml application/x-javascript application/xml application/xml+rss application/json application/javascript text/*;
gzip_vary on;
root /pub;
location / {
root /pub;
if ($request_filename ~* .*\.(?:htm|html|json)$) {
add_header Cache-Control "private, no-store, no-cache, must-revalidate, proxy-revalidate";
}
try_files $uri /layout/index.html;
}
location ~* \.(json)$ {
add_header Cache-Control no-cache;
}
location @kmfallback {
}
#location ~ ^/(clusters|config|cluster|login) {
# rewrite ^.*$ /;
#}
location ~ ^/ks-km/api/v3 {
#rewrite ^/ks-km/api/v3/(.*)$ /ks-km/ks-km/api/v3/$1 break;
proxy_pass http://{{ .Release.Name }}-knowstreaming-manager;
#proxy_pass localhost;
#proxy_cookie_path /ks-km/ /;
#proxy_set_header Host $host;
#proxy_set_header Referer $http_referer;
#proxy_set_header Cookie $http_cookie;
#proxy_set_header X-Real-Ip $remote_addr;
#proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
location ~ ^/logi-security/api/v1 {
#rewrite ^/logi-security/api/v1/(.*)$ /ks-km/logi-security/api/v1/$1 break;
proxy_pass http://{{ .Release.Name }}-knowstreaming-manager;
#proxy_pass localhost;
}
location ~ ^/(401|403|404|500){
rewrite ^.*$ /;
}
}


@@ -0,0 +1,77 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "knowstreaming-web.fullname" . }}
labels:
app: {{ template "knowstreaming-web.name" . }}
chart: {{ template "knowstreaming-web.chart" . }}
{{- include "knowstreaming-web.labels" . | nindent 4 }}
spec:
{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}
selector:
matchLabels:
{{- include "knowstreaming-web.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
release: {{ .Release.Name | quote }}
{{- include "knowstreaming-web.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: TZ
value: Asia/Shanghai
ports:
- name: http
containerPort: 80
protocol: TCP
livenessProbe:
httpGet:
path: /
port: http
readinessProbe:
httpGet:
path: /
port: http
resources:
{{- toYaml .Values.resources | nindent 12 }}
volumeMounts:
- name: configmap
mountPath: /etc/nginx/conf.d
volumes:
- name: configmap
configMap:
name: {{ include "knowstreaming-web.fullname" . }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}


@@ -0,0 +1,28 @@
{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: {{ include "knowstreaming-web.fullname" . }}
labels:
{{- include "knowstreaming-web.labels" . | nindent 4 }}
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ include "knowstreaming-web.fullname" . }}
minReplicas: {{ .Values.autoscaling.minReplicas }}
maxReplicas: {{ .Values.autoscaling.maxReplicas }}
metrics:
{{- if .Values.autoscaling.targetCPUUtilizationPercentage }}
- type: Resource
resource:
name: cpu
targetAverageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
{{- end }}
{{- if .Values.autoscaling.targetMemoryUtilizationPercentage }}
- type: Resource
resource:
name: memory
targetAverageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }}
{{- end }}
{{- end }}


@@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
name: {{ include "knowstreaming-web.fullname" . }}
labels:
{{- include "knowstreaming-web.labels" . | nindent 4 }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port }}
targetPort: http
protocol: TCP
name: http
selector:
{{- include "knowstreaming-web.selectorLabels" . | nindent 4 }}


@@ -0,0 +1,15 @@
apiVersion: v1
kind: Pod
metadata:
name: "{{ include "knowstreaming-web.fullname" . }}-test-connection"
labels:
{{- include "knowstreaming-web.labels" . | nindent 4 }}
annotations:
"helm.sh/hook": test
spec:
containers:
- name: wget
image: busybox
command: ['wget']
args: ['{{ include "knowstreaming-web.fullname" . }}:{{ .Values.service.port }}']
restartPolicy: Never


@@ -0,0 +1,61 @@
# Default values for web.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 2
image:
repository: knowstreaming/knowstreaming-ui
pullPolicy: IfNotPresent
# Overrides the image tag whose default is the chart appVersion.
tag: "latest"
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
# Annotations to be added to the pods
podAnnotations: {}
podSecurityContext: {}
# fsGroup: 2000
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
service:
type: NodePort
#type: ClusterIP
port: 80
resources:
# Default resource requests and limits for the web container. Adjust them as
# necessary for your environment; on resource-constrained clusters such as
# Minikube you may want to lower them.
limits:
cpu: 1000m
memory: 1Gi
requests:
cpu: 1000m
memory: 1Gi
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 100
targetCPUUtilizationPercentage: 80
# targetMemoryUtilizationPercentage: 80
nodeSelector: {}
tolerations: []
affinity: {}


@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/


@@ -0,0 +1,9 @@
apiVersion: v2
name: ksmysql
description: MySQL for KnowStreaming
type: application
version: 0.1.0
appVersion: "5.7.38-1"


@@ -0,0 +1 @@
knowstreaming-mysql


@@ -0,0 +1,55 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "ksmysql.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "ksmysql.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "ksmysql.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "ksmysql.labels" -}}
helm.sh/chart: {{ include "ksmysql.chart" . }}
{{ include "ksmysql.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "ksmysql.selectorLabels" -}}
app.kubernetes.io/name: {{ include "ksmysql.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}

View File

@@ -0,0 +1,29 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "ksmysql.fullname" . }}
labels:
app: {{ template "ksmysql.name" . }}
chart: {{ template "ksmysql.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
component: server
data:
my.cnf: |
[client]
default-character-set=utf8
[mysql]
default-character-set=utf8
[mysqld]
skip-host-cache
skip-name-resolve
datadir=/data/mysql
socket=/var/lib/mysql/mysql.sock
secure-file-priv=/var/lib/mysql-files
character-set-server=utf8
user=mysql
symbolic-links=0
pid-file=/var/run/mysqld/mysqld.pid
sql_mode=ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION

View File

@@ -0,0 +1,10 @@
apiVersion: v1
kind: Secret
metadata:
name: {{ include "ksmysql.fullname" . }}
labels:
{{- include "ksmysql.labels" . | nindent 4 }}
stringData:
rootUser: {{ .Values.mysql.username }}
rootHost: '%'
rootPassword: {{ .Values.mysql.password }}

View File

@@ -0,0 +1,14 @@
apiVersion: v1
kind: Service
metadata:
name: {{ .Values.service.name }}
labels:
{{- include "ksmysql.labels" . | nindent 4 }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port }}
protocol: TCP
name: mysql
selector:
{{- include "ksmysql.selectorLabels" . | nindent 4 }}

View File

@@ -0,0 +1,73 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ include "ksmysql.fullname" . }}
labels:
app: {{ template "ksmysql.name" . }}
tier: {{ template "ksmysql.name" . }}
chart: {{ template "ksmysql.name" . }}
release: {{ .Release.Name | quote }}
{{- include "ksmysql.labels" . | nindent 4 }}
spec:
serviceName: "ksmysql"
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
{{- include "ksmysql.selectorLabels" . | nindent 6 }}
template:
metadata:
labels:
release: {{ .Release.Name | quote }}
{{- include "ksmysql.selectorLabels" . | nindent 8 }}
spec:
containers:
- image: knowstreaming/knowstreaming-mysql:latest
name: {{ .Chart.Name }}
env:
- name: MYSQL_DATABASE
value: {{ .Values.mysql.dbname }}
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
key: rootPassword
name: {{ include "ksmysql.fullname" . }}
- name: MYSQL_ROOT_HOST
valueFrom:
secretKeyRef:
key: rootHost
name: {{ include "ksmysql.fullname" . }}
- name: TZ
value: Asia/Shanghai
resources:
{{- toYaml .Values.resources | nindent 12 }}
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: data
mountPath: /data
- name: configmap
mountPath: /etc/my.cnf
subPath: my.cnf
volumes:
- name: configmap
configMap:
name: {{ include "ksmysql.fullname" . }}
{{- if .Values.persistence.enabled }}
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes:
- {{ .Values.persistence.accessMode | quote }}
resources:
requests:
storage: {{ .Values.persistence.size | quote }}
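# A storageClass of "-" renders an empty storageClassName (dynamic provisioning disabled);
# leaving it unset omits the field so the cluster default storage class is used.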
{{- if .Values.persistence.storageClass }}
{{- if (eq "-" .Values.persistence.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: "{{ .Values.persistence.storageClass }}"
{{- end }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,30 @@
# Default values for ksmysql.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
mysql:
dbname: k11g
username: root
password: "admin2022_"
replicaCount: 1
resources:
limits:
cpu: "1000m"
memory: "2Gi"
requests:
cpu: "1000m"
memory: "2Gi"
service:
name: k11gmysql-server
type: ClusterIP
port: 3306
persistence:
enabled: true
## Set storageClass to "-" to disable dynamic provisioning; leave it unset to use the cluster default storage class.
# storageClass: "-"
accessMode: ReadWriteOnce
size: 30Gi

View File

@@ -0,0 +1,10 @@
############knowstreaming-manager############
1. Watch all cluster members come up. Initialization takes a few minutes, please wait.
$ kubectl get pods --namespace={{ .Release.Namespace }} -l release={{ .Release.Name }} -w
2. Get the KnowStreaming web UI service. It is exposed as a NodePort by default (http://nodeIP:nodePort).
$ kubectl get service --namespace={{ .Release.Namespace }} {{ .Release.Name }}-knowstreaming-web
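3. Optionally, check the manager health endpoint once the pods are Running (assumes the default release/service naming):
  $ kubectl port-forward --namespace={{ .Release.Namespace }} svc/{{ .Release.Name }}-knowstreaming-manager 8080:80
  $ curl http://127.0.0.1:8080/ks-km/api/v3/open/health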

View File

@@ -0,0 +1,55 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "knowstreaming-manager.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "knowstreaming-manager.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "knowstreaming-manager.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "knowstreaming-manager.labels" -}}
helm.sh/chart: {{ include "knowstreaming-manager.chart" . }}
{{ include "knowstreaming-manager.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "knowstreaming-manager.selectorLabels" -}}
app.kubernetes.io/name: {{ include "knowstreaming-manager.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}

View File

@@ -0,0 +1,158 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "knowstreaming-manager.fullname" . }}
labels:
app: {{ template "knowstreaming-manager.name" . }}
chart: {{ template "knowstreaming-manager.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
component: server
data:
application-test.yml: |
server:
port: 80 # service port
tomcat:
accept-count: 1000
max-connections: 10000
spring:
application:
name: know-streaming
profiles:
active: dev
main:
allow-bean-definition-overriding: true
jackson:
time-zone: GMT+8
datasource:
know-streaming:
{{ if .Values.ksmysql.enabled }}
jdbc-url: jdbc:mariadb://{{ .Values.ksmysql.service.name }}:{{ .Values.ksmysql.service.port }}/{{ .Values.ksmysql.mysql.dbname }}?useUnicode=true&characterEncoding=utf8&jdbcCompliantTruncation=true&allowMultiQueries=true&useSSL=false&alwaysAutoGeneratedKeys=true&serverTimezone=GMT%2B8&allowPublicKeyRetrieval=true
username: {{ .Values.ksmysql.mysql.username }}
password: {{ .Values.ksmysql.mysql.password }}
{{- else }}
jdbc-url: jdbc:mariadb://{{ .Values.ksmysql.mysqlAddress }}:{{ .Values.ksmysql.mysqlProt }}/{{ .Values.ksmysql.databasename }}?useUnicode=true&characterEncoding=utf8&jdbcCompliantTruncation=true&allowMultiQueries=true&useSSL=false&alwaysAutoGeneratedKeys=true&serverTimezone=GMT%2B8&allowPublicKeyRetrieval=true
username: {{ .Values.ksmysql.username }}
password: {{ .Values.ksmysql.password }}
{{- end }}
driver-class-name: org.mariadb.jdbc.Driver
maximum-pool-size: 20
idle-timeout: 30000
connection-test-query: SELECT 1
logi-job:
{{ if .Values.ksmysql.enabled }}
jdbc-url: jdbc:mariadb://{{ .Values.ksmysql.service.name }}:{{ .Values.ksmysql.service.port }}/{{ .Values.ksmysql.mysql.dbname }}?useUnicode=true&characterEncoding=utf8&jdbcCompliantTruncation=true&allowMultiQueries=true&useSSL=false&alwaysAutoGeneratedKeys=true&serverTimezone=GMT%2B8&allowPublicKeyRetrieval=true
username: {{ .Values.ksmysql.mysql.username }}
password: {{ .Values.ksmysql.mysql.password }}
{{- else }}
jdbc-url: jdbc:mariadb://{{ .Values.ksmysql.mysqlAddress }}:{{ .Values.ksmysql.mysqlProt }}/{{ .Values.ksmysql.databasename }}?useUnicode=true&characterEncoding=utf8&jdbcCompliantTruncation=true&allowMultiQueries=true&useSSL=false&alwaysAutoGeneratedKeys=true&serverTimezone=GMT%2B8&allowPublicKeyRetrieval=true
username: {{ .Values.ksmysql.username }}
password: {{ .Values.ksmysql.password }}
{{- end }}
driver-class-name: org.mariadb.jdbc.Driver
max-lifetime: 60000
init-sql: true
init-thread-num: 50
max-thread-num: 100
log-expire: 3 # number of days to keep logs
app-name: know-stream
claim-strategy: com.didiglobal.logi.job.core.consensual.RandomConsensual
logi-security:
{{ if .Values.ksmysql.enabled }}
jdbc-url: jdbc:mariadb://{{ .Values.ksmysql.service.name }}:{{ .Values.ksmysql.service.port }}/{{ .Values.ksmysql.mysql.dbname }}?useUnicode=true&characterEncoding=utf8&jdbcCompliantTruncation=true&allowMultiQueries=true&useSSL=false&alwaysAutoGeneratedKeys=true&serverTimezone=GMT%2B8&allowPublicKeyRetrieval=true
username: {{ .Values.ksmysql.mysql.username }}
password: {{ .Values.ksmysql.mysql.password }}
{{- else }}
jdbc-url: jdbc:mariadb://{{ .Values.ksmysql.mysqlAddress }}:{{ .Values.ksmysql.mysqlProt }}/{{ .Values.ksmysql.databasename }}?useUnicode=true&characterEncoding=utf8&jdbcCompliantTruncation=true&allowMultiQueries=true&useSSL=false&alwaysAutoGeneratedKeys=true&serverTimezone=GMT%2B8&allowPublicKeyRetrieval=true
username: {{ .Values.ksmysql.username }}
password: {{ .Values.ksmysql.password }}
{{- end }}
driver-class-name: org.mariadb.jdbc.Driver
app-name: know-streaming
resource-extend-bean-name: myResourceExtendImpl
logging:
config: classpath:logback-spring.xml
thread-pool:
scheduled:
thread-num: 2 # thread pool size for @Scheduled tasks; default is 1
collector:
future-util:
num: 1
thread-num: 8
queue-size: 10000
select-suitable-enable: true
suitable-queue-size: 1000
task:
heaven:
thread-num: 20
queue-size: 1000
client-pool:
kafka-consumer:
min-idle-client-num: 2 # minimum number of idle clients
max-idle-client-num: 20 # maximum number of idle clients
max-total-client-num: 20 # maximum total number of clients
borrow-timeout-unit-ms: 5000 # borrow timeout, in milliseconds
{{ if .Values.elasticsearch.enabled }}
es.client.address: elasticsearch-master:9200
#es.client.address: {{ .Release.Name }}-elasticsearch:9200
{{- else }}
es.client.address: {{ .Values.elasticsearch.esClientAddress }}:{{ .Values.elasticsearch.esProt }}
{{- end }}
# es.client.pass: knowstreaming-manager
# cluster auto-balancing configuration
cluster-balance:
ignored-topics:
time-second: 300
# Prometheus metrics export configuration
management:
endpoints:
web:
base-path: /metrics
exposure:
include: '*'
metrics:
export:
prometheus:
descriptions: true
enabled: true
tags:
application: know-streaming
init_es_index.sh: |
#!/bin/bash
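# Resolve the Elasticsearch address: use the bundled elasticsearch-master service when
# elasticsearch.enabled is true, otherwise the external address from values.yaml.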
{{ if .Values.elasticsearch.enabled }}
esaddr=elasticsearch-master
port=9200
{{- else }}
esaddr={{ .Values.elasticsearch.esClientAddress }}
port={{ .Values.elasticsearch.esProt }}
{{- end }}
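# Create the index templates for the KnowStreaming metric indices; abort if any request fails.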
curl -s --connect-timeout 10 -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: application/json' http://${esaddr}:${port}/_template/ks_kafka_broker_metric -d '{"order":10,"index_patterns":["ks_kafka_broker_metric*"],"settings":{"index":{"number_of_shards":"10"}},"mappings":{"properties":{"brokerId":{"type":"long"},"routingValue":{"type":"text","fields":{"keyword":{"ignore_above":256,"type":"keyword"}}},"clusterPhyId":{"type":"long"},"metrics":{"properties":{"NetworkProcessorAvgIdle":{"type":"float"},"UnderReplicatedPartitions":{"type":"float"},"BytesIn_min_15":{"type":"float"},"HealthCheckTotal":{"type":"float"},"RequestHandlerAvgIdle":{"type":"float"},"connectionsCount":{"type":"float"},"BytesIn_min_5":{"type":"float"},"HealthScore":{"type":"float"},"BytesOut":{"type":"float"},"BytesOut_min_15":{"type":"float"},"BytesIn":{"type":"float"},"BytesOut_min_5":{"type":"float"},"TotalRequestQueueSize":{"type":"float"},"MessagesIn":{"type":"float"},"TotalProduceRequests":{"type":"float"},"HealthCheckPassed":{"type":"float"},"TotalResponseQueueSize":{"type":"float"}}},"key":{"type":"text","fields":{"keyword":{"ignore_above":256,"type":"keyword"}}},"timestamp":{"format":"yyyy-MM-dd HH:mm:ss Z||yyyy-MM-dd HH:mm:ss||yyyy-MM-dd HH:mm:ss.SSS Z||yyyy-MM-dd HH:mm:ss.SSS||yyyy-MM-dd HH:mm:ss,SSS||yyyy/MM/dd HH:mm:ss||yyyy-MM-dd HH:mm:ss,SSS Z||yyyy/MM/dd HH:mm:ss,SSS Z||epoch_millis","index":true,"type":"date","doc_values":true}}},"aliases":{}}' && \
curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: application/json' http://${esaddr}:${port}/_template/ks_kafka_cluster_metric -d '{"order":10,"index_patterns":["ks_kafka_cluster_metric*"],"settings":{"index":{"number_of_shards":"10"}},"mappings":{"properties":{"routingValue":{"type":"text","fields":{"keyword":{"ignore_above":256,"type":"keyword"}}},"clusterPhyId":{"type":"long"},"metrics":{"properties":{"Connections":{"type":"double"},"BytesIn_min_15":{"type":"double"},"PartitionURP":{"type":"double"},"HealthScore_Topics":{"type":"double"},"EventQueueSize":{"type":"double"},"ActiveControllerCount":{"type":"double"},"GroupDeads":{"type":"double"},"BytesIn_min_5":{"type":"double"},"HealthCheckTotal_Topics":{"type":"double"},"Partitions":{"type":"double"},"BytesOut":{"type":"double"},"Groups":{"type":"double"},"BytesOut_min_15":{"type":"double"},"TotalRequestQueueSize":{"type":"double"},"HealthCheckPassed_Groups":{"type":"double"},"TotalProduceRequests":{"type":"double"},"HealthCheckPassed":{"type":"double"},"TotalLogSize":{"type":"double"},"GroupEmptys":{"type":"double"},"PartitionNoLeader":{"type":"double"},"HealthScore_Brokers":{"type":"double"},"Messages":{"type":"double"},"Topics":{"type":"double"},"PartitionMinISR_E":{"type":"double"},"HealthCheckTotal":{"type":"double"},"Brokers":{"type":"double"},"Replicas":{"type":"double"},"HealthCheckTotal_Groups":{"type":"double"},"GroupRebalances":{"type":"double"},"MessageIn":{"type":"double"},"HealthScore":{"type":"double"},"HealthCheckPassed_Topics":{"type":"double"},"HealthCheckTotal_Brokers":{"type":"double"},"PartitionMinISR_S":{"type":"double"},"BytesIn":{"type":"double"},"BytesOut_min_5":{"type":"double"},"GroupActives":{"type":"double"},"MessagesIn":{"type":"double"},"GroupReBalances":{"type":"double"},"HealthCheckPassed_Brokers":{"type":"double"},"HealthScore_Groups":{"type":"double"},"TotalResponseQueueSize":{"type":"double"},"Zookeepers":{"type":"double"},"LeaderMessages":{"type":"double"},"HealthScore_Cluster":{"type":"double"},"HealthCheckPassed_Cluster":{"type":"double"},"HealthCheckTotal_Cluster":{"type":"double"}}},"key":{"type":"text","fields":{"keyword":{"ignore_above":256,"type":"keyword"}}},"timestamp":{"format":"yyyy-MM-dd HH:mm:ss Z||yyyy-MM-dd HH:mm:ss||yyyy-MM-dd HH:mm:ss.SSS Z||yyyy-MM-dd HH:mm:ss.SSS||yyyy-MM-dd HH:mm:ss,SSS||yyyy/MM/dd HH:mm:ss||yyyy-MM-dd HH:mm:ss,SSS Z||yyyy/MM/dd HH:mm:ss,SSS Z||epoch_millis","type":"date"}}},"aliases":{}}' && \
curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: application/json' http://${esaddr}:${port}/_template/ks_kafka_group_metric -d '{"order":10,"index_patterns":["ks_kafka_group_metric*"],"settings":{"index":{"number_of_shards":"10"}},"mappings":{"properties":{"group":{"type":"keyword"},"partitionId":{"type":"long"},"routingValue":{"type":"text","fields":{"keyword":{"ignore_above":256,"type":"keyword"}}},"clusterPhyId":{"type":"long"},"topic":{"type":"keyword"},"metrics":{"properties":{"HealthScore":{"type":"float"},"Lag":{"type":"float"},"OffsetConsumed":{"type":"float"},"HealthCheckTotal":{"type":"float"},"HealthCheckPassed":{"type":"float"}}},"groupMetric":{"type":"keyword"},"key":{"type":"text","fields":{"keyword":{"ignore_above":256,"type":"keyword"}}},"timestamp":{"format":"yyyy-MM-dd HH:mm:ss Z||yyyy-MM-dd HH:mm:ss||yyyy-MM-dd HH:mm:ss.SSS Z||yyyy-MM-dd HH:mm:ss.SSS||yyyy-MM-dd HH:mm:ss,SSS||yyyy/MM/dd HH:mm:ss||yyyy-MM-dd HH:mm:ss,SSS Z||yyyy/MM/dd HH:mm:ss,SSS Z||epoch_millis","index":true,"type":"date","doc_values":true}}},"aliases":{}}' && \
curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: application/json' http://${esaddr}:${port}/_template/ks_kafka_partition_metric -d '{"order":10,"index_patterns":["ks_kafka_partition_metric*"],"settings":{"index":{"number_of_shards":"10"}},"mappings":{"properties":{"brokerId":{"type":"long"},"partitionId":{"type":"long"},"routingValue":{"type":"text","fields":{"keyword":{"ignore_above":256,"type":"keyword"}}},"clusterPhyId":{"type":"long"},"topic":{"type":"keyword"},"metrics":{"properties":{"LogStartOffset":{"type":"float"},"Messages":{"type":"float"},"LogEndOffset":{"type":"float"}}},"key":{"type":"text","fields":{"keyword":{"ignore_above":256,"type":"keyword"}}},"timestamp":{"format":"yyyy-MM-dd HH:mm:ss Z||yyyy-MM-dd HH:mm:ss||yyyy-MM-dd HH:mm:ss.SSS Z||yyyy-MM-dd HH:mm:ss.SSS||yyyy-MM-dd HH:mm:ss,SSS||yyyy/MM/dd HH:mm:ss||yyyy-MM-dd HH:mm:ss,SSS Z||yyyy/MM/dd HH:mm:ss,SSS Z||epoch_millis","index":true,"type":"date","doc_values":true}}},"aliases":{}}' && \
curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: application/json' http://${esaddr}:${port}/_template/ks_kafka_replication_metric -d '{"order":10,"index_patterns":["ks_kafka_replication_metric*"],"settings":{"index":{"number_of_shards":"10"}},"mappings":{"properties":{"timestamp":{"format":"yyyy-MM-dd HH:mm:ss Z||yyyy-MM-dd HH:mm:ss||yyyy-MM-dd HH:mm:ss.SSS Z||yyyy-MM-dd HH:mm:ss.SSS||yyyy-MM-dd HH:mm:ss,SSS||yyyy/MM/dd HH:mm:ss||yyyy-MM-dd HH:mm:ss,SSS Z||yyyy/MM/dd HH:mm:ss,SSS Z||epoch_millis","index":true,"type":"date","doc_values":true}}},"aliases":{}}' && \
curl -s -o /dev/null -X POST -H 'cache-control: no-cache' -H 'content-type: application/json' http://${esaddr}:${port}/_template/ks_kafka_topic_metric -d '{"order":10,"index_patterns":["ks_kafka_topic_metric*"],"settings":{"index":{"number_of_shards":"10"}},"mappings":{"properties":{"brokerId":{"type":"long"},"routingValue":{"type":"text","fields":{"keyword":{"ignore_above":256,"type":"keyword"}}},"topic":{"type":"keyword"},"clusterPhyId":{"type":"long"},"metrics":{"properties":{"BytesIn_min_15":{"type":"float"},"Messages":{"type":"float"},"BytesRejected":{"type":"float"},"PartitionURP":{"type":"float"},"HealthCheckTotal":{"type":"float"},"ReplicationCount":{"type":"float"},"ReplicationBytesOut":{"type":"float"},"ReplicationBytesIn":{"type":"float"},"FailedFetchRequests":{"type":"float"},"BytesIn_min_5":{"type":"float"},"HealthScore":{"type":"float"},"LogSize":{"type":"float"},"BytesOut":{"type":"float"},"BytesOut_min_15":{"type":"float"},"FailedProduceRequests":{"type":"float"},"BytesIn":{"type":"float"},"BytesOut_min_5":{"type":"float"},"MessagesIn":{"type":"float"},"TotalProduceRequests":{"type":"float"},"HealthCheckPassed":{"type":"float"}}},"brokerAgg":{"type":"keyword"},"key":{"type":"text","fields":{"keyword":{"ignore_above":256,"type":"keyword"}}},"timestamp":{"format":"yyyy-MM-dd HH:mm:ss Z||yyyy-MM-dd HH:mm:ss||yyyy-MM-dd HH:mm:ss.SSS Z||yyyy-MM-dd HH:mm:ss.SSS||yyyy-MM-dd HH:mm:ss,SSS||yyyy/MM/dd HH:mm:ss||yyyy-MM-dd HH:mm:ss,SSS Z||yyyy/MM/dd HH:mm:ss,SSS Z||epoch_millis","index":true,"type":"date","doc_values":true}}},"aliases":{}}' || \
exit 1
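# Pre-create the dated metric indices for today and the previous six days.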
for i in {0..6};do
logdate=_$(date -d "${i} day ago" +%Y-%m-%d)
curl -s --connect-timeout 10 -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_broker_metric${logdate} && \
curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_cluster_metric${logdate} && \
curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_group_metric${logdate} && \
curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_partition_metric${logdate} && \
curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_replication_metric${logdate} && \
curl -s -o /dev/null -X PUT http://${esaddr}:${port}/ks_kafka_topic_metric${logdate} || \
exit 2
done

View File

@@ -0,0 +1,85 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "knowstreaming-manager.fullname" . }}
labels:
app: {{ template "knowstreaming-manager.name" . }}
chart: {{ template "knowstreaming-manager.chart" . }}
{{- include "knowstreaming-manager.labels" . | nindent 4 }}
spec:
{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}
selector:
matchLabels:
{{- include "knowstreaming-manager.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
release: {{ .Release.Name | quote }}
{{- include "knowstreaming-manager.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
initContainers:
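# Runs the init_es_index.sh script from the ConfigMap to create the Elasticsearch index templates and dated indices before the manager starts.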
- name: init-config
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
command: ['/bin/bash', '/conf/init_es_index.sh']
volumeMounts:
- name: configmap
mountPath: /conf
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
command: ['/usr/bin/java', '-Xmx8g', '-Xms8g', '-jar', '/km-rest.jar', '--spring.config.location=/conf/application-test.yml']
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: TZ
value: Asia/Shanghai
ports:
- name: http
containerPort: 80
protocol: TCP
livenessProbe:
httpGet:
path: /ks-km/api/v3/open/health
port: http
readinessProbe:
httpGet:
path: /ks-km/api/v3/open/health
port: http
resources:
{{- toYaml .Values.resources | nindent 12 }}
volumeMounts:
- name: configmap
mountPath: /conf
volumes:
- name: configmap
configMap:
name: {{ include "knowstreaming-manager.fullname" . }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}

View File

@@ -0,0 +1,28 @@
{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: {{ include "knowstreaming-manager.fullname" . }}
labels:
{{- include "knowstreaming-manager.labels" . | nindent 4 }}
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ include "knowstreaming-manager.fullname" . }}
minReplicas: {{ .Values.autoscaling.minReplicas }}
maxReplicas: {{ .Values.autoscaling.maxReplicas }}
metrics:
{{- if .Values.autoscaling.targetCPUUtilizationPercentage }}
- type: Resource
resource:
name: cpu
targetAverageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
{{- end }}
{{- if .Values.autoscaling.targetMemoryUtilizationPercentage }}
- type: Resource
resource:
name: memory
targetAverageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,16 @@
apiVersion: v1
kind: Service
metadata:
name: {{ include "knowstreaming-manager.fullname" . }}
#name: knowstreaming-manager-km
labels:
{{- include "knowstreaming-manager.labels" . | nindent 4 }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port }}
targetPort: http
protocol: TCP
name: http
selector:
{{- include "knowstreaming-manager.selectorLabels" . | nindent 6 }}

View File

@@ -0,0 +1,16 @@
{{- if .Values.servicemonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ include "knowstreaming-manager.fullname" . }}
labels:
{{- include "knowstreaming-manager.labels" . | nindent 4 }}
spec:
endpoints:
- port: http
scheme: http
path: /metrics/prometheus
selector:
matchLabels:
{{- include "knowstreaming-manager.selectorLabels" . | nindent 4 }}
{{- end }}

View File

@@ -0,0 +1,15 @@
apiVersion: v1
kind: Pod
metadata:
name: "{{ include "knowstreaming-manager.fullname" . }}-test-connection"
labels:
{{- include "knowstreaming-manager.labels" . | nindent 4 }}
annotations:
"helm.sh/hook": test
spec:
containers:
- name: wget
image: busybox
command: ['wget']
args: ['{{ include "knowstreaming-manager.fullname" . }}:{{ .Values.service.port }}']
restartPolicy: Never

170
km-dist/helm/values.yaml Normal file
View File

@@ -0,0 +1,170 @@
replicaCount: 2
image:
repository: knowstreaming/knowstreaming-manager
pullPolicy: IfNotPresent
tag: "latest"
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
# Annotations to be added to the manager pods
podAnnotations: {}
podSecurityContext: {}
# fsGroup: 2000
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
service:
type: ClusterIP
port: 80
# The current minimum configuration is 4 CPU cores and 8 GiB of memory
resources:
limits:
cpu: "4000m"
memory: "8Gi"
requests:
cpu: "4000m"
memory: "8Gi"
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 100
targetCPUUtilizationPercentage: 80
# targetMemoryUtilizationPercentage: 80
nodeSelector: {}
tolerations: []
affinity: {}
servicemonitor:
enabled: false
#-------------------------------------------------------------------
# Web: KnowStreaming front-end UI
#-------------------------------------------------------------------
knowstreaming-web:
enabled: true
replicaCount: 2
resources:
requests:
cpu: "1000m"
memory: "1Gi"
limits:
cpu: "1000m"
memory: "1Gi"
image:
repository: knowstreaming/knowstreaming-ui
pullPolicy: IfNotPresent
tag: "latest"
service:
type: NodePort
#type: ClusterIP
port: 80
#-------------------------------------------------------------------
# elasticsearch: currently version 7.6.0, deployed as a cluster
#-------------------------------------------------------------------
elasticsearch:
enabled: true
#-------------------------------------------------------------------
# Note:
# To use an existing Elasticsearch, configure the IP and port below and set enabled above to false.
# The Elasticsearch version is restricted to 7.6.0.
#------------------------------------------------------------------
esClientAddress: 10.96.64.13
esProt: 8061
#------------------------------------------------------------------
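# Example: to reuse an existing Elasticsearch 7.6.x cluster (a minimal sketch with an
# illustrative address and port, adjust to your environment), set:
#   enabled: false
#   esClientAddress: 192.168.0.100
#   esProt: 9200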
replicas: 3
minimumMasterNodes: 2
image: "docker.elastic.co/elasticsearch/elasticsearch"
imageTag: "7.6.0"
imagePullPolicy: "IfNotPresent"
#esJavaOpts: "-Xmx30g -Xms30g"
#esJavaOpts: "-Xmx16g -Xms16g"
esJavaOpts: ""
resources:
requests:
cpu: "1000m"
memory: "2Gi"
limits:
cpu: "1000m"
memory: "2Gi"
#requests:
# cpu: "8000m"
# memory: "31Gi"
#limits:
# cpu: "8000m"
# memory: "31Gi"
volumeClaimTemplate:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 30Gi
#storageClassName: sc-lvmpv
#
#-------------------------------------------------------------------
# ksMysql: restricted to MySQL 5.7; the quick deployment only provides a standalone MySQL instance, so using your own database is recommended
#-------------------------------------------------------------------
ksmysql:
enabled: true
#------------------------------------------------------------------
# Note:
# To use an existing MySQL, configure the IP and port below and set enabled above to false.
# The MySQL version must be 5.7.
# Initialize the database schema in advance.
#------------------------------------------------------------------
mysqlAddress: 10.96.64.13
mysqlProt: 3306
databasename: test
username: test
password: test
#------------------------------------------------------------------
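# Example: to reuse an existing MySQL 5.7 instance (a minimal sketch with illustrative
# address and credentials, adjust to your environment), set:
#   enabled: false
#   mysqlAddress: 192.168.0.101
#   mysqlProt: 3306
#   databasename: know_streaming
#   username: ks_user
#   password: ks_password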
mysql:
dbname: k11g
username: root
password: "admin2022_"
resources:
requests:
cpu: "1000m"
memory: "2Gi"
limits:
cpu: "1000m"
memory: "2Gi"
service:
name: k11gmysql-server
type: ClusterIP
port: 3306
persistence:
enabled: true
## Set storageClass to "-" to disable dynamic provisioning; leave it unset to use the cluster default storage class.
# storageClass: "-"
accessMode: ReadWriteOnce
size: 30Gi