add helm charts

Ybehrooz
2025-11-09 13:22:40 +03:30
parent 282c3e52d0
commit 38e4d749ad
1352 changed files with 190457 additions and 0 deletions

View File: .helmignore

@@ -0,0 +1,2 @@
tests/
.pytest_cache/

View File: Chart.yaml

@@ -0,0 +1,12 @@
apiVersion: v1
appVersion: 8.5.1
description: Official Elastic helm chart for Elasticsearch
home: https://github.com/elastic/helm-charts
icon: https://helm.elastic.co/icons/elasticsearch.png
maintainers:
- email: helm-charts@elastic.co
name: Elastic
name: elasticsearch
sources:
- https://github.com/elastic/elasticsearch
version: 8.5.1

View File: Makefile

@@ -0,0 +1 @@
include ../helpers/common.mk

View File: README.md

@@ -0,0 +1,490 @@
# Elasticsearch Helm Chart
[![Build Status](https://img.shields.io/jenkins/s/https/devops-ci.elastic.co/job/elastic+helm-charts+main.svg)](https://devops-ci.elastic.co/job/elastic+helm-charts+main/) [![Artifact HUB](https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/elastic)](https://artifacthub.io/packages/search?repo=elastic)
This Helm chart is a lightweight way to configure and run our official
[Elasticsearch Docker image][].
> **Warning**
> When it comes to running the Elastic Stack on Kubernetes, we
> recommend [Elastic Cloud on Kubernetes][] (ECK) as the best way to run and manage
> the Elastic Stack.
>
> ECK offers many operational benefits for both our basic-tier and our
> enterprise-tier customers, such as spinning up cluster nodes that were lost on
> failed infrastructure, seamless upgrades, rolling cluster changes, and much
> much more.
>
> With the release of the Elastic Stack Helm charts for Elastic version 8.5.1,
> we are handing over the ongoing maintenance of our Elastic Stack Helm charts
> to the community and contributors. This repository will be archived after six
> months. Elastic Stacks deployed on Kubernetes through Helm charts
> will still be fully supported under EOL limitations.
>
> Since we want to provide an even better experience for our customers by
> running the Elastic Stack on Kubernetes, we will continue maintaining the
> Helm charts applicable to ECK Custom Resources. These charts can be found in
> the [ECK repository][eck-charts].
>
> Helm charts will currently be maintained for ECK Enterprise-tier customers;
> however, we encourage the community to engage with the existing Helm charts
> for the Elastic Stack and continue supporting their ongoing maintenance.
>
> See <https://github.com/elastic/helm-charts/issues/1731> for more details.
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
- [Requirements](#requirements)
- [Installing](#installing)
- [Install a released version using the Helm repository](#install-a-released-version-using-the-helm-repository)
- [Install a development version using the main branch](#install-a-development-version-using-the-main-branch)
- [Upgrading](#upgrading)
- [Usage notes](#usage-notes)
- [Configuration](#configuration)
- [FAQ](#faq)
- [How to deploy this chart on a specific K8S distribution?](#how-to-deploy-this-chart-on-a-specific-k8s-distribution)
- [How to deploy dedicated node types?](#how-to-deploy-dedicated-node-types)
- [Coordinating nodes](#coordinating-nodes)
- [Clustering and Node Discovery](#clustering-and-node-discovery)
- [How to deploy clusters with security (authentication and TLS) enabled?](#how-to-deploy-clusters-with-security-authentication-and-tls-enabled)
- [How to migrate from helm/charts stable chart?](#how-to-migrate-from-helmcharts-stable-chart)
- [How to install plugins?](#how-to-install-plugins)
- [How to use the keystore?](#how-to-use-the-keystore)
- [Basic example](#basic-example)
- [Multiple keys](#multiple-keys)
- [Custom paths and keys](#custom-paths-and-keys)
- [How to enable snapshotting?](#how-to-enable-snapshotting)
- [How to configure templates post-deployment?](#how-to-configure-templates-post-deployment)
- [Contributing](#contributing)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
<!-- Use this to update TOC: -->
<!-- docker run --entrypoint doctoc --rm -it -v $(pwd):/usr/src jorgeandrada/doctoc README.md --github --no-title -->
## Requirements
* Minimum cluster requirements include the following to run this chart with
default settings. All of these settings are configurable.
* Three Kubernetes nodes to respect the default "hard" affinity settings
* 1GB of RAM for the JVM heap
See [supported configurations][] for more details.
## Installing
### Install a released version using the Helm repository
* Add the Elastic Helm charts repo:
`helm repo add elastic https://helm.elastic.co`
* Install it: `helm install elasticsearch elastic/elasticsearch`
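To customize the deployment, you can pass a values file at install time; a
minimal sketch (the file name is an assumption):

```sh
# Install the chart with overrides from a local values file.
helm install elasticsearch elastic/elasticsearch -f my-values.yaml
```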
### Install a development version using the main branch
* Clone the git repo: `git clone git@github.com:elastic/helm-charts.git`
* Install it: `helm install elasticsearch ./helm-charts/elasticsearch --set imageTag=8.5.1`
## Upgrading
Please always check [CHANGELOG.md][] and [BREAKING_CHANGES.md][] before
upgrading to a new chart version.
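A typical upgrade could then look like this sketch (the release and chart names
assume the install command above):

```sh
# Refresh the local repo index, then upgrade the release to a pinned version.
helm repo update
helm upgrade elasticsearch elastic/elasticsearch --version 8.5.1
```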
## Usage notes
* This repo includes several [examples][] of configurations that can be used
as a reference. They are also used in the automated testing of this chart.
* Automated testing of this chart is currently only run against GKE (Google
Kubernetes Engine).
* The chart deploys a StatefulSet and by default will do an automated rolling
update of your cluster. It does this by waiting for the cluster health to become
green after each instance is updated. If you prefer to update manually you can
set the [updateStrategy][] to `OnDelete`.
* It is important to set the JVM heap size in `esJavaOpts` and the CPU/memory
`resources` to values suitable for your cluster (see the sketch after this
list).
* To simplify the chart and its maintenance, each set of node groups is deployed as a
separate Helm release. Take a look at the [multi][] example to get an idea for
how this works. Without doing this it isn't possible to resize persistent
volumes in a StatefulSet. Setting it up this way makes it possible to add
more nodes with a new storage size and then drain the old ones. It also solves the
problem of allowing the user to determine which node groups to update first when
doing upgrades or changes.
* We have designed this chart to be very un-opinionated about how to configure
Elasticsearch. It exposes ways to set environment variables and mount secrets
inside of the container. Doing this makes it much easier for this chart to
support multiple versions with minimal changes.
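As a starting point, here is a minimal sketch of a values file that sizes the
heap and the container resources together (the file name and sizes are
illustrative assumptions, not recommendations):

```yaml
# my-values.yaml (hypothetical): keep the heap at roughly half of the
# container memory limit.
esJavaOpts: "-Xmx2g -Xms2g"
resources:
  requests:
    cpu: "1000m"
    memory: "4Gi"
  limits:
    cpu: "2000m"
    memory: "4Gi"
```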
## Configuration
| Parameter | Description | Default |
|------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------|
| `antiAffinityTopologyKey` | The [anti-affinity][] topology key. By default this will prevent multiple Elasticsearch nodes from running on the same Kubernetes node | `kubernetes.io/hostname` |
| `antiAffinity` | Setting this to hard enforces the [anti-affinity][] rules. If it is set to soft it will be done "best effort". Other values will be ignored | `hard` |
| `clusterHealthCheckParams` | The [Elasticsearch cluster health status params][] that will be used by readiness [probe][] command | `wait_for_status=green&timeout=1s` |
| `clusterName` | This will be used as the Elasticsearch [cluster.name][] and should be unique per cluster in the namespace | `elasticsearch` |
| `createCert` | This will automatically create the SSL certificates | `true` |
| `enableServiceLinks` | Set to `false` to disable service links, which can cause slow pod startup times when there are many services in the current namespace | `true` |
| `envFrom` | Templatable string to be passed to the [environment from variables][] which will be appended to the `envFrom:` definition for the container | `[]` |
| `esConfig` | Allows you to add any config files in `/usr/share/elasticsearch/config/` such as `elasticsearch.yml` and `log4j2.properties`. See [values.yaml][] for an example of the formatting | `{}` |
| `esJavaOpts` | [Java options][] for Elasticsearch. This is where you could configure the [jvm heap size][] | `""` |
| `esJvmOptions` | [Java options][] for Elasticsearch. Override the default JVM options by adding custom options files . See [values.yaml][] for an example of the formatting | `{}` |
| `esMajorVersion` | Deprecated. Instead, use the version of the chart corresponding to your ES minor version. Used to set major version specific configuration. If you are using a custom image and not running the default Elasticsearch version you will need to set this to the version you are running (e.g. `esMajorVersion: 6`) | `""` |
| `extraContainers` | Templatable string of additional `containers` to be passed to the `tpl` function | `""` |
| `extraEnvs` | Extra [environment variables][] which will be appended to the `env:` definition for the container | `[]` |
| `extraInitContainers` | Templatable string of additional `initContainers` to be passed to the `tpl` function | `""` |
| `extraVolumeMounts` | Templatable string of additional `volumeMounts` to be passed to the `tpl` function | `""` |
| `extraVolumes` | Templatable string of additional `volumes` to be passed to the `tpl` function | `""` |
| `fullnameOverride` | Overrides the `clusterName` and `nodeGroup` when used in the naming of resources. This should only be used when using a single `nodeGroup`, otherwise you will have name conflicts | `""` |
| `healthNameOverride` | Overrides `test-elasticsearch-health` pod name | `""` |
| `hostAliases` | Configurable [hostAliases][] | `[]` |
| `httpPort` | The http port that Kubernetes will use for the healthchecks and the service. If you change this you will also need to set [http.port][] in `extraEnvs` | `9200` |
| `imagePullPolicy` | The Kubernetes [imagePullPolicy][] value | `IfNotPresent` |
| `imagePullSecrets` | Configuration for [imagePullSecrets][] so that you can use a private registry for your image | `[]` |
| `imageTag` | The Elasticsearch Docker image tag | `8.5.1` |
| `image` | The Elasticsearch Docker image | `docker.elastic.co/elasticsearch/elasticsearch` |
| `ingress` | Configurable [ingress][] to expose the Elasticsearch service. See [values.yaml][] for an example | see [values.yaml][] |
| `initResources` | Allows you to set the [resources][] for the `initContainer` in the StatefulSet | `{}` |
| `keystore` | Allows you to map Kubernetes secrets into the keystore. See the [config example][] and [how to use the keystore][] | `[]` |
| `labels` | Configurable [labels][] applied to all Elasticsearch pods | `{}` |
| `lifecycle` | Allows you to add [lifecycle hooks][]. See [values.yaml][] for an example of the formatting | `{}` |
| `masterService` | The service name used to connect to the masters. You only need to set this if your master `nodeGroup` is set to something other than `master`. See [Clustering and Node Discovery][] for more information | `""` |
| `maxUnavailable` | The [maxUnavailable][] value for the pod disruption budget. By default this will prevent Kubernetes from having more than 1 unhealthy pod in the node group | `1` |
| `minimumMasterNodes` | The value for [discovery.zen.minimum_master_nodes][]. Should be set to `(master_eligible_nodes / 2) + 1`. Ignored in Elasticsearch versions >= 7 | `1` |
| `nameOverride` | Overrides the `clusterName` when used in the naming of resources | `""` |
| `networkHost` | Value for the [network.host Elasticsearch setting][] | `0.0.0.0` |
| `networkPolicy` | The [NetworkPolicy](https://kubernetes.io/docs/concepts/services-networking/network-policies/) to set. See [`values.yaml`](./values.yaml) for an example | `{http.enabled: false,transport.enabled: false}` |
| `nodeAffinity` | Value for the [node affinity settings][] | `{}` |
| `nodeGroup` | This is the name that will be used for each group of nodes in the cluster. The name will be `clusterName-nodeGroup-X`, `nameOverride-nodeGroup-X` if a `nameOverride` is specified, and `fullnameOverride-X` if a `fullnameOverride` is specified | `master` |
| `nodeSelector` | Configurable [nodeSelector][] so that you can target specific nodes for your Elasticsearch cluster | `{}` |
| `persistence` | Enables a persistent volume for Elasticsearch data. Can be disabled for nodes that only have [roles][] which don't require persistent data | see [values.yaml][] |
| `podAnnotations` | Configurable [annotations][] applied to all Elasticsearch pods | `{}` |
| `podManagementPolicy` | By default Kubernetes [deploys StatefulSets serially][]. This deploys them in parallel so that they can discover each other | `Parallel` |
| `podSecurityContext` | Allows you to set the [securityContext][] for the pod | see [values.yaml][] |
| `podSecurityPolicy` | Configuration to create a pod security policy with minimal permissions to run this Helm chart with `create: true`. Can also be used to reference an external pod security policy with `name: "externalPodSecurityPolicy"` | see [values.yaml][] |
| `priorityClassName` | The name of the [PriorityClass][]. No default is supplied as the PriorityClass must be created first | `""` |
| `protocol` | The protocol that will be used for the readiness [probe][]. Change this to `http` if you don't have `xpack.security.http.ssl.enabled` set | `https` |
| `rbac` | Configuration for creating a role, role binding and ServiceAccount as part of this Helm chart with `create: true`. Also can be used to reference an external ServiceAccount with `serviceAccountName: "externalServiceAccountName"`, or automount the service account token | see [values.yaml][] |
| `readinessProbe` | Configuration fields for the readiness [probe][] | see [values.yaml][] |
| `replicas` | Kubernetes replica count for the StatefulSet (i.e. how many pods) | `2` |
| `resources` | Allows you to set the [resources][] for the StatefulSet | see [values.yaml][] |
| `roles` | A list with the specific [roles][] for the `nodeGroup` | see [values.yaml][] |
| `schedulerName` | Name of the [alternate scheduler][] | `""` |
| `secret.enabled` | Enable Secret creation for Elasticsearch credentials | `true` |
| `secret.password` | Initial password for the elastic user | `""` (generated randomly) |
| `secretMounts` | Allows you easily mount a secret as a file inside the StatefulSet. Useful for mounting certificates and other secrets. See [values.yaml][] for an example | `[]` |
| `securityContext` | Allows you to set the [securityContext][] for the container | see [values.yaml][] |
| `service.annotations` | [LoadBalancer annotations][] that Kubernetes will use for the service. This will configure load balancer if `service.type` is `LoadBalancer` | `{}` |
| `service.enabled` | Enable non-headless service | `true` |
| `service.externalTrafficPolicy` | Some cloud providers allow you to specify the [LoadBalancer externalTrafficPolicy][]. Kubernetes will use this to preserve the client source IP. This will configure load balancer if `service.type` is `LoadBalancer` | `""` |
| `service.httpPortName` | The name of the http port within the service | `http` |
| `service.labelsHeadless` | Labels to be added to headless service | `{}` |
| `service.labels` | Labels to be added to non-headless service | `{}` |
| `service.loadBalancerIP` | Some cloud providers allow you to specify the [loadBalancer][] IP. If the `loadBalancerIP` field is not specified, the IP is dynamically assigned. If you specify a `loadBalancerIP` but your cloud provider does not support the feature, it is ignored. | `""` |
| `service.loadBalancerSourceRanges` | The IP ranges that are allowed to access the service | `[]` |
| `service.nodePort` | Custom [nodePort][] port that can be set if you are using `service.type: NodePort` | `""` |
| `service.transportPortName` | The name of the transport port within the service | `transport` |
| `service.publishNotReadyAddresses` | Consider all endpoints "ready", even if the Pods themselves are not | `false` |
| `service.type` | Elasticsearch [Service Types][] | `ClusterIP` |
| `sysctlInitContainer` | Allows you to disable the `sysctlInitContainer` if you are setting [sysctl vm.max_map_count][] with another method | `enabled: true` |
| `sysctlVmMaxMapCount` | Sets the [sysctl vm.max_map_count][] needed for Elasticsearch | `262144` |
| `terminationGracePeriod` | The [terminationGracePeriod][] in seconds used when trying to stop the pod | `120` |
| `tests.enabled` | Enable creating test related resources when running `helm template` or `helm test` | `true` |
| `tolerations` | Configurable [tolerations][] | `[]` |
| `transportPort` | The transport port that Kubernetes will use for the service. If you change this you will also need to set [transport port configuration][] in `extraEnvs` | `9300` |
| `updateStrategy` | The [updateStrategy][] for the StatefulSet. By default Kubernetes will wait for the cluster to be green after upgrading each pod. Setting this to `OnDelete` will allow you to manually delete each pod during upgrades | `RollingUpdate` |
| `volumeClaimTemplate` | Configuration for the [volumeClaimTemplate for StatefulSets][]. You will want to adjust the storage (default `30Gi`) and the `storageClassName` if you are using a different storage class | see [values.yaml][] |
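Individual parameters from this table can also be overridden on the command
line; a small sketch:

```sh
# Override a few chart values without editing a values file.
helm install elasticsearch elastic/elasticsearch \
  --set clusterName=logging \
  --set replicas=3
```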
## FAQ
### How to deploy this chart on a specific K8S distribution?
This chart is designed to run on production scale Kubernetes clusters with
multiple nodes, lots of memory and persistent storage. For that reason it can be
a bit tricky to run against local Kubernetes environments such as
[Minikube][].
This chart is well tested with [GKE][], but some K8S distributions also
require specific configuration.
We provide examples of configuration for the following K8S providers:
- [Docker for Mac][]
- [KIND][]
- [Minikube][]
- [MicroK8S][]
- [OpenShift][]
### How to deploy dedicated node types?
All the Elasticsearch pods deployed share the same configuration. If you need to
deploy dedicated [node types][nodes types] (for example dedicated master and data nodes),
you can deploy multiple releases of this chart with different configurations
while they share the same `clusterName` value.
For each Helm release, the node types can then be defined using the `roles` value.
An example of an Elasticsearch cluster using separate Helm releases for master,
data, and coordinating nodes can be found in [examples/multi][].
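A minimal sketch of installing such a cluster (the values file names follow
the [examples/multi][] layout and are assumptions):

```sh
# One Helm release per node group, all sharing the same clusterName.
helm install es-master elastic/elasticsearch -f master.yaml
helm install es-data elastic/elasticsearch -f data.yaml
helm install es-client elastic/elasticsearch -f client.yaml
```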
#### Coordinating nodes
Every node is implicitly a coordinating node. This means that a node that has an
explicit empty list of roles will only act as a coordinating node.
When deploying a coordinating-only node with this chart, you must define an
empty list of roles in both the `roles` value and the `node.roles`
setting:
```yaml
roles: []
esConfig:
elasticsearch.yml: |
node.roles: []
```
More details can be found in [#1186 (comment)][].
#### Clustering and Node Discovery
This chart facilitates Elasticsearch node discovery and services by creating two
`Service` definitions in Kubernetes, one with the name `$clusterName-$nodeGroup`
and another named `$clusterName-$nodeGroup-headless`.
Only `Ready` pods are a part of the `$clusterName-$nodeGroup` service, while all
pods (`Ready` or not) are a part of `$clusterName-$nodeGroup-headless`.
If your group of master nodes has the default `nodeGroup: master` then you can
just add new groups of nodes with a different `nodeGroup` and they will
automatically discover the correct master. If your master nodes have a different
`nodeGroup` name then you will need to set `masterService` to
`$clusterName-$masterNodeGroup`.
The chart value for `masterService` is used to populate
`discovery.seed_hosts` (`discovery.zen.ping.unicast.hosts` on 6.x), which
Elasticsearch nodes will use to contact master nodes and form a cluster.
Therefore, to add a group of nodes to an existing cluster, setting
`masterService` to the desired `Service` name of the related cluster is
sufficient.
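For example, a sketch of a values file for a data-only group joining an
existing cluster whose masters use the default `nodeGroup: master`:

```yaml
# data.yaml (hypothetical)
clusterName: "elasticsearch"
nodeGroup: "data"
# The masters' service is clusterName + "-" + masterNodeGroup.
masterService: "elasticsearch-master"
roles:
  - data
  - ingest
```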
### How to deploy clusters with security (authentication and TLS) enabled?
This Helm chart can generate a [Kubernetes Secret][] or use an existing one to
set up Elastic credentials.
It can also use existing [Kubernetes Secret][]s to set up Elastic certificates,
for example. These secrets should be created outside of this chart and accessed
using [environment variables][] and volumes.
This chart enables TLS and creates a certificate by default, but you can also
provide your own certs as a K8S secret. An example of configuration for
providing existing certificates can be found in [examples/security][].
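For example, a sketch of values that mount an existing certificate secret and
enable transport TLS (the secret name and PKCS#12 file name are assumptions
modeled on [examples/security][]):

```yaml
createCert: false
secretMounts:
  - name: elastic-certificates
    secretName: elastic-certificates
    path: /usr/share/elasticsearch/config/certs
esConfig:
  elasticsearch.yml: |
    xpack.security.enabled: true
    xpack.security.transport.ssl.enabled: true
    xpack.security.transport.ssl.verification_mode: certificate
    xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
    xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
```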
### How to migrate from helm/charts stable chart?
If you currently have a cluster deployed with the [helm/charts stable][] chart
you can follow the [migration guide][].
### How to install plugins?
The recommended way to install plugins into our Docker images is to create a
[custom Docker image][].
The Dockerfile would look something like:
```dockerfile
ARG elasticsearch_version
FROM docker.elastic.co/elasticsearch/elasticsearch:${elasticsearch_version}
RUN bin/elasticsearch-plugin install --batch repository-gcs
```
Then update the `image` in your values to point to your custom image, as sketched below.
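A sketch of building and referencing such an image (the registry, image name,
and tag are placeholders):

```sh
# Build the custom image with the plugin baked in.
docker build --build-arg elasticsearch_version=8.5.1 \
  -t my-registry.example.com/elasticsearch-gcs:8.5.1 .
```

```yaml
# Values override pointing at the custom image.
image: "my-registry.example.com/elasticsearch-gcs"
imageTag: "8.5.1"
```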
There are a couple of reasons we recommend this.
1. Tying the availability of Elasticsearch to the download service used to
install plugins is not something we recommend, especially in Kubernetes, where
it is normal and expected for a container to be moved to another host at random
times.
2. Mutating the state of a running Docker image (by installing plugins) goes
against best practices of containers and immutable infrastructure.
### How to use the keystore?
#### Basic example
Create the secret; the key name needs to be the keystore key path. In this
example we will create one secret from a file and another from a literal string.
```sh
kubectl create secret generic encryption-key --from-file=xpack.watcher.encryption_key=./watcher_encryption_key
kubectl create secret generic slack-hook --from-literal=xpack.notification.slack.account.monitoring.secure_url='https://hooks.slack.com/services/asdasdasd/asdasdas/asdasd'
```
To add these secrets to the keystore:
```yaml
keystore:
- secretName: encryption-key
- secretName: slack-hook
```
#### Multiple keys
All keys in the secret will be added to the keystore. To create the previous
example in one secret you could also do:
```sh
kubectl create secret generic keystore-secrets --from-file=xpack.watcher.encryption_key=./watcher_encryption_key --from-literal=xpack.notification.slack.account.monitoring.secure_url='https://hooks.slack.com/services/asdasdasd/asdasdas/asdasd'
```
```yaml
keystore:
- secretName: keystore-secrets
```
#### Custom paths and keys
If you are using these secrets for other applications (besides the Elasticsearch
keystore) then it is also possible to specify the keystore path and which keys
you want to add. Everything specified under each `keystore` item will be passed
through to the `volumeMounts` section for mounting the [secret][]. In this
example we will only add the `slack_hook` key from a secret that also has other
keys. Our secret looks like this:
```sh
kubectl create secret generic slack-secrets --from-literal=slack_channel='#general' --from-literal=slack_hook='https://hooks.slack.com/services/asdasdasd/asdasdas/asdasd'
```
We only want to add the `slack_hook` key to the keystore at path
`xpack.notification.slack.account.monitoring.secure_url`:
```yaml
keystore:
- secretName: slack-secrets
items:
- key: slack_hook
path: xpack.notification.slack.account.monitoring.secure_url
```
You can also take a look at the [config example][] which is used as part of the
automated testing pipeline.
### How to enable snapshotting?
1. Install your [snapshot plugin][] into a custom Docker image following the
[how to install plugins guide][].
2. Add any required secrets or credentials into an Elasticsearch keystore
following the [how to use the keystore][] guide.
3. Configure the [snapshot repository][] as you normally would (see the sketch after this list).
4. To automate snapshots you can use [Snapshot Lifecycle Management][] or a tool
like [curator][].
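For example, registering a GCS repository by hand could look like this sketch
(the repository and bucket names are placeholders, and it assumes the
`repository-gcs` plugin and its credentials are already installed):

```sh
# Register a snapshot repository named "my_backups".
curl -X PUT "http://localhost:9200/_snapshot/my_backups" \
  -H 'Content-Type: application/json' \
  -d '{"type": "gcs", "settings": {"bucket": "my-es-snapshots"}}'
```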
### How to configure templates post-deployment?
You can use `postStart` [lifecycle hooks][] to run code triggered after a
container is created.
Here is an example of `postStart` hook to configure templates:
```yaml
lifecycle:
postStart:
exec:
command:
- bash
- -c
- |
#!/bin/bash
# Add a template to adjust number of shards/replicas
TEMPLATE_NAME=my_template
INDEX_PATTERN="logstash-*"
SHARD_COUNT=8
REPLICA_COUNT=1
ES_URL=http://localhost:9200
while [[ "$(curl -s -o /dev/null -w '%{http_code}\n' $ES_URL)" != "200" ]]; do sleep 1; done
curl -XPUT "$ES_URL/_template/$TEMPLATE_NAME" -H 'Content-Type: application/json' -d'{"index_patterns":['\""$INDEX_PATTERN"\"'],"settings":{"number_of_shards":'$SHARD_COUNT',"number_of_replicas":'$REPLICA_COUNT'}}'
```
## Contributing
Please check [CONTRIBUTING.md][] before any contribution or for any questions
about our development and testing process.
[#1186 (comment)]: https://github.com/elastic/helm-charts/pull/1186#discussion_r631166442
[alternate scheduler]: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/#specify-schedulers-for-pods
[annotations]: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
[anti-affinity]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
[BREAKING_CHANGES.md]: https://github.com/elastic/helm-charts/blob/main/BREAKING_CHANGES.md
[CHANGELOG.md]: https://github.com/elastic/helm-charts/blob/main/CHANGELOG.md
[cluster.name]: https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster.name.html
[clustering and node discovery]: https://github.com/elastic/helm-charts/blob/main/elasticsearch/README.md#clustering-and-node-discovery
[config example]: https://github.com/elastic/helm-charts/blob/main/elasticsearch/examples/config/values.yaml
[CONTRIBUTING.md]: https://github.com/elastic/helm-charts/blob/main/CONTRIBUTING.md
[curator]: https://www.elastic.co/guide/en/elasticsearch/client/curator/current/snapshot.html
[custom docker image]: https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html#_c_customized_image
[deploys statefulsets serially]: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-management-policies
[discovery.zen.minimum_master_nodes]: https://www.elastic.co/guide/en/elasticsearch/reference/current/discovery-settings.html#minimum_master_nodes
[docker for mac]: https://github.com/elastic/helm-charts/tree/main/elasticsearch/examples/docker-for-mac
[eck-charts]: https://github.com/elastic/cloud-on-k8s/tree/master/deploy
[elastic cloud on kubernetes]: https://github.com/elastic/cloud-on-k8s
[elasticsearch cluster health status params]: https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html#request-params
[elasticsearch docker image]: https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
[environment from variables]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#configure-all-key-value-pairs-in-a-configmap-as-container-environment-variables
[environment variables]: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/#using-environment-variables-inside-of-your-config
[examples]: https://github.com/elastic/helm-charts/tree/main/elasticsearch/examples/
[examples/multi]: https://github.com/elastic/helm-charts/tree/main/elasticsearch/examples/multi
[examples/security]: https://github.com/elastic/helm-charts/tree/main/elasticsearch/examples/security
[gke]: https://cloud.google.com/kubernetes-engine
[helm]: https://helm.sh
[helm/charts stable]: https://github.com/helm/charts/tree/master/stable/elasticsearch/
[hostAliases]: https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/
[how to install plugins guide]: https://github.com/elastic/helm-charts/blob/main/elasticsearch/README.md#how-to-install-plugins
[how to use the keystore]: https://github.com/elastic/helm-charts/blob/main/elasticsearch/README.md#how-to-use-the-keystore
[http.port]: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-http.html#_settings
[imagePullPolicy]: https://kubernetes.io/docs/concepts/containers/images/#updating-images
[imagePullSecrets]: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-pod-that-uses-your-secret
[ingress]: https://kubernetes.io/docs/concepts/services-networking/ingress/
[java options]: https://www.elastic.co/guide/en/elasticsearch/reference/current/jvm-options.html
[jvm heap size]: https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html
[kind]: https://github.com/elastic/helm-charts/tree/main/elasticsearch/examples/kubernetes-kind
[kubernetes secret]: https://kubernetes.io/docs/concepts/configuration/secret/
[labels]: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
[lifecycle hooks]: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/
[loadBalancer annotations]: https://kubernetes.io/docs/concepts/services-networking/service/#ssl-support-on-aws
[loadBalancer externalTrafficPolicy]: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
[loadBalancer]: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
[maxUnavailable]: https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget
[microk8s]: https://github.com/elastic/helm-charts/tree/main/elasticsearch/examples/microk8s
[migration guide]: https://github.com/elastic/helm-charts/blob/main/elasticsearch/examples/migration/README.md
[minikube]: https://github.com/elastic/helm-charts/tree/main/elasticsearch/examples/minikube
[multi]: https://github.com/elastic/helm-charts/tree/main/elasticsearch/examples/multi/
[network.host elasticsearch setting]: https://www.elastic.co/guide/en/elasticsearch/reference/current/network.host.html
[node affinity settings]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature
[nodePort]: https://kubernetes.io/docs/concepts/services-networking/service/#nodeport
[nodes types]: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html
[nodeSelector]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
[openshift]: https://github.com/elastic/helm-charts/tree/main/elasticsearch/examples/openshift
[priorityClass]: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
[probe]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
[resources]: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
[roles]: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html
[secret]: https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets
[securityContext]: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
[service types]: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
[snapshot lifecycle management]: https://www.elastic.co/guide/en/elasticsearch/reference/current/snapshot-lifecycle-management.html
[snapshot plugin]: https://www.elastic.co/guide/en/elasticsearch/plugins/current/repository.html
[snapshot repository]: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-snapshots.html
[supported configurations]: https://github.com/elastic/helm-charts/blob/main/README.md#supported-configurations
[sysctl vm.max_map_count]: https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html#vm-max-map-count
[terminationGracePeriod]: https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods
[tolerations]: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
[transport port configuration]: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-transport.html#_transport_settings
[updateStrategy]: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
[values.yaml]: https://github.com/elastic/helm-charts/blob/main/elasticsearch/values.yaml
[volumeClaimTemplate for statefulsets]: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-storage

View File: values.yaml

@@ -0,0 +1,356 @@
---
clusterName: "elasticsearch"
nodeGroup: "master"
# The service that non-master groups will try to connect to when joining the cluster
# This should be set to clusterName + "-" + nodeGroup for your master group
masterService: ""
# Elasticsearch roles that will be applied to this nodeGroup
# These will be set as environment variables. E.g. node.roles=master
# https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html#node-roles
roles:
- master
- data
- data_content
- data_hot
- data_warm
- data_cold
- ingest
- ml
- remote_cluster_client
- transform
replicas: 2
minimumMasterNodes: 1
esMajorVersion: ""
# Allows you to add any config files in /usr/share/elasticsearch/config/
# such as elasticsearch.yml and log4j2.properties
esConfig: {}
# elasticsearch.yml: |
# key:
# nestedkey: value
# log4j2.properties: |
# key = value
createCert: true
esJvmOptions: {}
# processors.options: |
# -XX:ActiveProcessorCount=3
# Extra environment variables to append to this nodeGroup
# This will be appended to the current 'env:' key. You can use any of the kubernetes env
# syntax here
extraEnvs: []
# - name: MY_ENVIRONMENT_VAR
# value: the_value_goes_here
# Allows you to load environment variables from kubernetes secret or config map
envFrom: []
# - secretRef:
# name: env-secret
# - configMapRef:
# name: config-map
# Disable it to use your own elastic-credential Secret.
secret:
enabled: true
password: "" # generated randomly if not defined
# A list of secrets and their paths to mount inside the pod
# This is useful for mounting certificates for security and for mounting
# the X-Pack license
secretMounts: []
# - name: elastic-certificates
# secretName: elastic-certificates
# path: /usr/share/elasticsearch/config/certs
# defaultMode: 0755
hostAliases: []
#- ip: "127.0.0.1"
# hostnames:
# - "foo.local"
# - "bar.local"
image: "docker.elastic.co/elasticsearch/elasticsearch"
imageTag: "8.5.1"
imagePullPolicy: "IfNotPresent"
podAnnotations: {}
# iam.amazonaws.com/role: es-cluster
# additional labels
labels: {}
esJavaOpts: "" # example: "-Xmx1g -Xms1g"
resources:
requests:
cpu: "1000m"
memory: "2Gi"
limits:
cpu: "1000m"
memory: "2Gi"
initResources: {}
# limits:
# cpu: "25m"
# # memory: "128Mi"
# requests:
# cpu: "25m"
# memory: "128Mi"
networkHost: "0.0.0.0"
volumeClaimTemplate:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 30Gi
rbac:
create: false
serviceAccountAnnotations: {}
serviceAccountName: ""
automountToken: true
podSecurityPolicy:
create: false
name: ""
spec:
privileged: true
fsGroup:
rule: RunAsAny
runAsUser:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
volumes:
- secret
- configMap
- persistentVolumeClaim
- emptyDir
persistence:
enabled: true
labels:
# Add default labels for the volumeClaimTemplate of the StatefulSet
enabled: false
annotations: {}
extraVolumes: []
# - name: extras
# emptyDir: {}
extraVolumeMounts: []
# - name: extras
# mountPath: /usr/share/extras
# readOnly: true
extraContainers: []
# - name: do-something
# image: busybox
# command: ['do', 'something']
extraInitContainers: []
# - name: do-something
# image: busybox
# command: ['do', 'something']
# This is the PriorityClass settings as defined in
# https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
priorityClassName: ""
# By default this will make sure two pods don't end up on the same node
# Changing this to a region would allow you to spread pods across regions
antiAffinityTopologyKey: "kubernetes.io/hostname"
# Hard means that by default pods will only be scheduled if there are enough nodes for them
# and that they will never end up on the same node. Setting this to soft will do this "best effort"
antiAffinity: "hard"
# This is the node affinity settings as defined in
# https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature
nodeAffinity: {}
# The default is to deploy all pods serially. By setting this to parallel all pods are started at
# the same time when bootstrapping the cluster
podManagementPolicy: "Parallel"
# The environment variables injected by service links are not used, but can lead to slow Elasticsearch boot times when
# there are many services in the current namespace.
# If you experience slow pod startups you probably want to set this to `false`.
enableServiceLinks: true
protocol: https
httpPort: 9200
transportPort: 9300
service:
enabled: true
labels: {}
labelsHeadless: {}
type: ClusterIP
# Consider all endpoints "ready", even if the Pods themselves are not
# https://kubernetes.io/docs/reference/kubernetes-api/service-resources/service-v1/#ServiceSpec
publishNotReadyAddresses: false
nodePort: ""
annotations: {}
httpPortName: http
transportPortName: transport
loadBalancerIP: ""
loadBalancerSourceRanges: []
externalTrafficPolicy: ""
updateStrategy: RollingUpdate
# This is the max unavailable setting for the pod disruption budget
# The default value of 1 will make sure that kubernetes won't allow more than 1
# of your pods to be unavailable during maintenance
maxUnavailable: 1
podSecurityContext:
fsGroup: 1000
runAsUser: 1000
securityContext:
capabilities:
drop:
- ALL
# readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
# How long to wait for elasticsearch to stop gracefully
terminationGracePeriod: 120
sysctlVmMaxMapCount: 262144
readinessProbe:
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 3
timeoutSeconds: 5
# https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html#request-params wait_for_status
clusterHealthCheckParams: "wait_for_status=green&timeout=1s"
## Use an alternate scheduler.
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
schedulerName: ""
imagePullSecrets: []
nodeSelector: {}
tolerations: []
# Enabling this will publicly expose your Elasticsearch instance.
# Only enable this if you have security enabled on your cluster
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
className: "nginx"
pathtype: ImplementationSpecific
hosts:
- host: chart-example.local
paths:
- path: /
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
nameOverride: ""
fullnameOverride: ""
healthNameOverride: ""
lifecycle: {}
# preStop:
# exec:
# command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
# postStart:
# exec:
# command:
# - bash
# - -c
# - |
# #!/bin/bash
# # Add a template to adjust number of shards/replicas
# TEMPLATE_NAME=my_template
# INDEX_PATTERN="logstash-*"
# SHARD_COUNT=8
# REPLICA_COUNT=1
# ES_URL=http://localhost:9200
# while [[ "$(curl -s -o /dev/null -w '%{http_code}\n' $ES_URL)" != "200" ]]; do sleep 1; done
# curl -XPUT "$ES_URL/_template/$TEMPLATE_NAME" -H 'Content-Type: application/json' -d'{"index_patterns":['\""$INDEX_PATTERN"\"'],"settings":{"number_of_shards":'$SHARD_COUNT',"number_of_replicas":'$REPLICA_COUNT'}}'
sysctlInitContainer:
enabled: true
keystore: []
networkPolicy:
## Enable creation of NetworkPolicy resources. Only Ingress traffic is filtered for now.
## In order for a Pod to access Elasticsearch, it needs to have the following label:
## {{ template "uname" . }}-client: "true"
## Example for default configuration to access HTTP port:
## elasticsearch-master-http-client: "true"
## Example for default configuration to access transport port:
## elasticsearch-master-transport-client: "true"
http:
enabled: false
## If explicitNamespacesSelector is not set or is set to {}, only client Pods in the
## networkPolicy's namespace that match all criteria can reach Elasticsearch.
## Sometimes, however, the Pods should be accessible to clients from other namespaces;
## in that case, use this parameter to select those namespaces
##
# explicitNamespacesSelector:
# # Accept from namespaces with all those different rules (only from whitelisted Pods)
# matchLabels:
# role: frontend
# matchExpressions:
# - {key: role, operator: In, values: [frontend]}
## Additional NetworkPolicy Ingress "from" rules to set. Note that all rules are OR-ed.
##
# additionalRules:
# - podSelector:
# matchLabels:
# role: frontend
# - podSelector:
# matchExpressions:
# - key: role
# operator: In
# values:
# - frontend
transport:
## Note that all Elasticsearch Pods can still talk to each other over the transport port even if this is enabled.
enabled: false
# explicitNamespacesSelector:
# matchLabels:
# role: frontend
# matchExpressions:
# - {key: role, operator: In, values: [frontend]}
# additionalRules:
# - podSelector:
# matchLabels:
# role: frontend
# - podSelector:
# matchExpressions:
# - key: role
# operator: In
# values:
# - frontend
tests:
enabled: true

View File: templates/NOTES.txt

@@ -0,0 +1,8 @@
1. Watch all cluster members come up.
$ kubectl get pods --namespace={{ .Release.Namespace }} -l app={{ template "elasticsearch.uname" . }} -w
2. Retrieve elastic user's password.
$ kubectl get secrets --namespace={{ .Release.Namespace }} {{ template "elasticsearch.uname" . }}-credentials -ojsonpath='{.data.password}' | base64 -d
{{- if .Values.tests.enabled }}
3. Test cluster health using Helm test.
$ helm --namespace={{ .Release.Namespace }} test {{ .Release.Name }}
{{- end -}}

View File: templates/_helpers.tpl

@@ -0,0 +1,97 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "elasticsearch.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "elasticsearch.fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- define "elasticsearch.uname" -}}
{{- if empty .Values.fullnameOverride -}}
{{- if empty .Values.nameOverride -}}
{{ .Values.clusterName }}-{{ .Values.nodeGroup }}
{{- else -}}
{{ .Values.nameOverride }}-{{ .Values.nodeGroup }}
{{- end -}}
{{- else -}}
{{ .Values.fullnameOverride }}
{{- end -}}
{{- end -}}
{{/*
Generate certificates when the secret doesn't exist
*/}}
{{- define "elasticsearch.gen-certs" -}}
{{- $certs := lookup "v1" "Secret" .Release.Namespace ( printf "%s-certs" (include "elasticsearch.uname" . ) ) -}}
{{- if $certs -}}
tls.crt: {{ index $certs.data "tls.crt" }}
tls.key: {{ index $certs.data "tls.key" }}
ca.crt: {{ index $certs.data "ca.crt" }}
{{- else -}}
{{- $altNames := list ( include "elasticsearch.masterService" . ) ( printf "%s.%s" (include "elasticsearch.masterService" .) .Release.Namespace ) ( printf "%s.%s.svc" (include "elasticsearch.masterService" .) .Release.Namespace ) -}}
{{- $ca := genCA "elasticsearch-ca" 365 -}}
{{- $cert := genSignedCert ( include "elasticsearch.masterService" . ) nil $altNames 365 $ca -}}
tls.crt: {{ $cert.Cert | toString | b64enc }}
tls.key: {{ $cert.Key | toString | b64enc }}
ca.crt: {{ $ca.Cert | toString | b64enc }}
{{- end -}}
{{- end -}}
{{- define "elasticsearch.masterService" -}}
{{- if empty .Values.masterService -}}
{{- if empty .Values.fullnameOverride -}}
{{- if empty .Values.nameOverride -}}
{{ .Values.clusterName }}-master
{{- else -}}
{{ .Values.nameOverride }}-master
{{- end -}}
{{- else -}}
{{ .Values.fullnameOverride }}
{{- end -}}
{{- else -}}
{{ .Values.masterService }}
{{- end -}}
{{- end -}}
{{- define "elasticsearch.endpoints" -}}
{{- $replicas := int (toString (.Values.replicas)) }}
{{- $uname := (include "elasticsearch.uname" .) }}
{{- range $i, $e := untilStep 0 $replicas 1 -}}
{{ $uname }}-{{ $i }},
{{- end -}}
{{- end -}}
{{- define "elasticsearch.roles" -}}
{{- range $.Values.roles -}}
{{ . }},
{{- end -}}
{{- end -}}
{{- define "elasticsearch.esMajorVersion" -}}
{{- if .Values.esMajorVersion -}}
{{ .Values.esMajorVersion }}
{{- else -}}
{{- $version := int (index (.Values.imageTag | splitList ".") 0) -}}
{{- if and (contains "docker.elastic.co/elasticsearch/elasticsearch" .Values.image) (not (eq $version 0)) -}}
{{ $version }}
{{- else -}}
8
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Use the fullname if the serviceAccount value is not set
*/}}
{{- define "elasticsearch.serviceAccount" -}}
{{- .Values.rbac.serviceAccountName | default (include "elasticsearch.uname" .) -}}
{{- end -}}

View File: templates/configmap.yaml

@@ -0,0 +1,34 @@
{{- if .Values.esConfig }}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "elasticsearch.uname" . }}-config
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}"
app: "{{ template "elasticsearch.uname" . }}"
data:
{{- range $path, $config := .Values.esConfig }}
{{ $path }}: |
{{ $config | indent 4 -}}
{{- end -}}
{{- end -}}
{{- if .Values.esJvmOptions }}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "elasticsearch.uname" . }}-jvm-options
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}"
app: "{{ template "elasticsearch.uname" . }}"
data:
{{- range $path, $config := .Values.esJvmOptions }}
{{ $path }}: |
{{ $config | indent 4 -}}
{{- end -}}
{{- end -}}

View File: templates/ingress.yaml

@@ -0,0 +1,64 @@
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "elasticsearch.uname" . -}}
{{- $httpPort := .Values.httpPort -}}
{{- $pathtype := .Values.ingress.pathtype -}}
{{- $ingressPath := .Values.ingress.path -}}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ $fullName }}
labels:
app: {{ .Chart.Name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- with .Values.ingress.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.className }}
ingressClassName: {{ .Values.ingress.className | quote }}
{{- end }}
{{- if .Values.ingress.tls }}
tls:
{{- if $ingressPath }}
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- else }}
{{ toYaml .Values.ingress.tls | indent 4 }}
{{- end }}
{{- end}}
rules:
{{- range .Values.ingress.hosts }}
{{- if $ingressPath }}
- host: {{ . }}
http:
paths:
- path: {{ $ingressPath }}
pathType: {{ $pathtype }}
backend:
service:
name: {{ $fullName }}
port:
number: {{ $httpPort }}
{{- else }}
- host: {{ .host }}
http:
paths:
{{- range .paths }}
- path: {{ .path }}
pathType: {{ $pathtype }}
backend:
service:
name: {{ $fullName }}
port:
number: {{ .servicePort | default $httpPort }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}

View File: templates/networkpolicy.yaml

@@ -0,0 +1,61 @@
{{- if (or .Values.networkPolicy.http.enabled .Values.networkPolicy.transport.enabled) }}
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: {{ template "elasticsearch.uname" . }}
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}"
app: "{{ template "elasticsearch.uname" . }}"
spec:
podSelector:
matchLabels:
app: "{{ template "elasticsearch.uname" . }}"
ingress: # Allow inbound connections
{{- if .Values.networkPolicy.http.enabled }}
# For HTTP access
- ports:
- port: {{ .Values.httpPort }}
from:
# From authorized Pods (having the correct label)
- podSelector:
matchLabels:
{{ template "elasticsearch.uname" . }}-http-client: "true"
{{- with .Values.networkPolicy.http.explicitNamespacesSelector }}
# From authorized namespaces
namespaceSelector:
{{ toYaml . | indent 12 }}
{{- end }}
{{- with .Values.networkPolicy.http.additionalRules }}
# Or from custom additional rules
{{ toYaml . | indent 8 }}
{{- end }}
{{- end }}
{{- if .Values.networkPolicy.transport.enabled }}
# For transport access
- ports:
- port: {{ .Values.transportPort }}
from:
# From authorized Pods (having the correct label)
- podSelector:
matchLabels:
{{ template "elasticsearch.uname" . }}-transport-client: "true"
{{- with .Values.networkPolicy.transport.explicitNamespacesSelector }}
# From authorized namespaces
namespaceSelector:
{{ toYaml . | indent 12 }}
{{- end }}
{{- with .Values.networkPolicy.transport.additionalRules }}
# Or from custom additional rules
{{ toYaml . | indent 8 }}
{{- end }}
# Or from other Elasticsearch Pods
- podSelector:
matchLabels:
app: "{{ template "elasticsearch.uname" . }}"
{{- end }}
{{- end }}

View File: templates/poddisruptionbudget.yaml

@@ -0,0 +1,15 @@
{{- if .Values.maxUnavailable }}
{{- if .Capabilities.APIVersions.Has "policy/v1" -}}
apiVersion: policy/v1
{{- else}}
apiVersion: policy/v1beta1
{{- end }}
kind: PodDisruptionBudget
metadata:
name: "{{ template "elasticsearch.uname" . }}-pdb"
spec:
maxUnavailable: {{ .Values.maxUnavailable }}
selector:
matchLabels:
app: "{{ template "elasticsearch.uname" . }}"
{{- end }}

View File: templates/podsecuritypolicy.yaml

@@ -0,0 +1,14 @@
{{- if .Values.podSecurityPolicy.create -}}
{{- $fullName := include "elasticsearch.uname" . -}}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: {{ default $fullName .Values.podSecurityPolicy.name | quote }}
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app: {{ $fullName | quote }}
spec:
{{ toYaml .Values.podSecurityPolicy.spec | indent 2 }}
{{- end -}}

View File: templates/role.yaml

@@ -0,0 +1,25 @@
{{- if .Values.rbac.create -}}
{{- $fullName := include "elasticsearch.uname" . -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ $fullName | quote }}
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app: {{ $fullName | quote }}
rules:
- apiGroups:
- extensions
resources:
- podsecuritypolicies
resourceNames:
{{- if eq .Values.podSecurityPolicy.name "" }}
- {{ $fullName | quote }}
{{- else }}
- {{ .Values.podSecurityPolicy.name | quote }}
{{- end }}
verbs:
- use
{{- end -}}

View File: templates/rolebinding.yaml

@@ -0,0 +1,20 @@
{{- if .Values.rbac.create -}}
{{- $fullName := include "elasticsearch.uname" . -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ $fullName | quote }}
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app: {{ $fullName | quote }}
subjects:
- kind: ServiceAccount
name: "{{ template "elasticsearch.serviceAccount" . }}"
namespace: {{ .Release.Namespace | quote }}
roleRef:
kind: Role
name: {{ $fullName | quote }}
apiGroup: rbac.authorization.k8s.io
{{- end -}}

View File: templates/secret-cert.yaml

@@ -0,0 +1,14 @@
{{- if .Values.createCert }}
apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
name: {{ template "elasticsearch.uname" . }}-certs
labels:
app: {{ template "elasticsearch.uname" . }}
chart: "{{ .Chart.Name }}"
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
data:
{{ ( include "elasticsearch.gen-certs" . ) | indent 2 }}
{{- end }}

View File: templates/secret.yaml

@@ -0,0 +1,23 @@
{{- if .Values.secret.enabled -}}
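{{/* NOTE: when .Values.secret.password is empty, randAlphaNum generates a new
value on every render, so each `helm upgrade` rotates the password. */}}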
{{- $passwordValue := (randAlphaNum 16) | b64enc | quote }}
apiVersion: v1
kind: Secret
metadata:
name: {{ template "elasticsearch.uname" . }}-credentials
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}"
app: "{{ template "elasticsearch.uname" . }}"
{{- range $key, $value := .Values.labels }}
{{ $key }}: {{ $value | quote }}
{{- end }}
type: Opaque
data:
username: {{ "elastic" | b64enc }}
{{- if .Values.secret.password }}
password: {{ .Values.secret.password | b64enc }}
{{- else }}
password: {{ $passwordValue }}
{{- end }}
{{- end }}

View File: templates/service.yaml

@@ -0,0 +1,78 @@
{{- if .Values.service.enabled -}}
---
kind: Service
apiVersion: v1
metadata:
{{- if eq .Values.nodeGroup "master" }}
name: {{ template "elasticsearch.masterService" . }}
{{- else }}
name: {{ template "elasticsearch.uname" . }}
{{- end }}
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}"
app: "{{ template "elasticsearch.uname" . }}"
{{- if .Values.service.labels }}
{{ toYaml .Values.service.labels | indent 4}}
{{- end }}
annotations:
{{ toYaml .Values.service.annotations | indent 4 }}
spec:
type: {{ .Values.service.type }}
selector:
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}"
app: "{{ template "elasticsearch.uname" . }}"
publishNotReadyAddresses: {{ .Values.service.publishNotReadyAddresses }}
ports:
- name: {{ .Values.service.httpPortName | default "http" }}
protocol: TCP
port: {{ .Values.httpPort }}
{{- if .Values.service.nodePort }}
nodePort: {{ .Values.service.nodePort }}
{{- end }}
- name: {{ .Values.service.transportPortName | default "transport" }}
protocol: TCP
port: {{ .Values.transportPort }}
{{- if .Values.service.loadBalancerIP }}
loadBalancerIP: {{ .Values.service.loadBalancerIP }}
{{- end }}
{{- with .Values.service.loadBalancerSourceRanges }}
loadBalancerSourceRanges:
{{ toYaml . | indent 4 }}
{{- end }}
{{- if .Values.service.externalTrafficPolicy }}
externalTrafficPolicy: {{ .Values.service.externalTrafficPolicy }}
{{- end }}
{{- end }}
---
kind: Service
apiVersion: v1
metadata:
{{- if eq .Values.nodeGroup "master" }}
name: {{ template "elasticsearch.masterService" . }}-headless
{{- else }}
name: {{ template "elasticsearch.uname" . }}-headless
{{- end }}
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}"
app: "{{ template "elasticsearch.uname" . }}"
{{- if .Values.service.labelsHeadless }}
{{ toYaml .Values.service.labelsHeadless | indent 4 }}
{{- end }}
annotations:
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
clusterIP: None # This is needed for statefulset hostnames like elasticsearch-0 to resolve
# Create endpoints also if the related pod isn't ready
publishNotReadyAddresses: true
selector:
app: "{{ template "elasticsearch.uname" . }}"
ports:
- name: {{ .Values.service.httpPortName | default "http" }}
port: {{ .Values.httpPort }}
- name: {{ .Values.service.transportPortName | default "transport" }}
port: {{ .Values.transportPort }}

View File: templates/serviceaccount.yaml

@@ -0,0 +1,16 @@
{{- if .Values.rbac.create -}}
{{- $fullName := include "elasticsearch.uname" . -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: "{{ template "elasticsearch.serviceAccount" . }}"
annotations:
{{- with .Values.rbac.serviceAccountAnnotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app: {{ $fullName | quote }}
{{- end -}}

View File: templates/statefulset.yaml

@@ -0,0 +1,427 @@
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ template "elasticsearch.uname" . }}
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}"
app: "{{ template "elasticsearch.uname" . }}"
{{- range $key, $value := .Values.labels }}
{{ $key }}: {{ $value | quote }}
{{- end }}
annotations:
esMajorVersion: "{{ include "elasticsearch.esMajorVersion" . }}"
spec:
serviceName: {{ template "elasticsearch.uname" . }}-headless
selector:
matchLabels:
app: "{{ template "elasticsearch.uname" . }}"
replicas: {{ .Values.replicas }}
podManagementPolicy: {{ .Values.podManagementPolicy }}
updateStrategy:
type: {{ .Values.updateStrategy }}
{{- if .Values.persistence.enabled }}
volumeClaimTemplates:
- metadata:
name: {{ template "elasticsearch.uname" . }}
{{- if .Values.persistence.labels.enabled }}
labels:
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}"
app: "{{ template "elasticsearch.uname" . }}"
{{- range $key, $value := .Values.labels }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
{{- with .Values.persistence.annotations }}
annotations:
{{ toYaml . | indent 8 }}
{{- end }}
spec:
{{ toYaml .Values.volumeClaimTemplate | indent 6 }}
{{- end }}
template:
metadata:
name: "{{ template "elasticsearch.uname" . }}"
labels:
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}"
app: "{{ template "elasticsearch.uname" . }}"
{{- range $key, $value := .Values.labels }}
{{ $key }}: {{ $value | quote }}
{{- end }}
annotations:
{{- range $key, $value := .Values.podAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{/* This forces a restart if the configmap has changed */}}
{{- if or .Values.esConfig .Values.esJvmOptions }}
configchecksum: {{ include (print .Template.BasePath "/configmap.yaml") . | sha256sum | trunc 63 }}
{{- end }}
spec:
{{- if .Values.schedulerName }}
schedulerName: "{{ .Values.schedulerName }}"
{{- end }}
securityContext:
{{ toYaml .Values.podSecurityContext | indent 8 }}
{{- if .Values.fsGroup }}
fsGroup: {{ .Values.fsGroup }} # Deprecated value, please use .Values.podSecurityContext.fsGroup
{{- end }}
{{- if or .Values.rbac.create .Values.rbac.serviceAccountName }}
serviceAccountName: "{{ template "elasticsearch.serviceAccount" . }}"
{{- end }}
automountServiceAccountToken: {{ .Values.rbac.automountToken }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 6 }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
      {{- if .Values.priorityClassName }}
      priorityClassName: {{ .Values.priorityClassName }}
      {{- end }}
      {{- if or (eq .Values.antiAffinity "hard") (eq .Values.antiAffinity "soft") .Values.nodeAffinity }}
      affinity:
      {{- end }}
{{- if eq .Values.antiAffinity "hard" }}
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- "{{ template "elasticsearch.uname" .}}"
topologyKey: {{ .Values.antiAffinityTopologyKey }}
{{- else if eq .Values.antiAffinity "soft" }}
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
podAffinityTerm:
topologyKey: {{ .Values.antiAffinityTopologyKey }}
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- "{{ template "elasticsearch.uname" . }}"
{{- end }}
{{- with .Values.nodeAffinity }}
nodeAffinity:
{{ toYaml . | indent 10 }}
{{- end }}
terminationGracePeriodSeconds: {{ .Values.terminationGracePeriod }}
volumes:
{{- range .Values.secretMounts }}
- name: {{ .name }}
secret:
secretName: {{ .secretName }}
{{- if .defaultMode }}
defaultMode: {{ .defaultMode }}
{{- end }}
{{- end }}
{{- if .Values.esConfig }}
- name: esconfig
configMap:
name: {{ template "elasticsearch.uname" . }}-config
{{- end }}
{{- if .Values.esJvmOptions }}
- name: esjvmoptions
configMap:
name: {{ template "elasticsearch.uname" . }}-jvm-options
{{- end }}
{{- if .Values.createCert }}
- name: elasticsearch-certs
secret:
secretName: {{ template "elasticsearch.uname" . }}-certs
{{- end }}
{{- if .Values.keystore }}
- name: keystore
emptyDir: {}
{{- range .Values.keystore }}
- name: keystore-{{ .secretName }}
secret: {{ toYaml . | nindent 12 }}
{{- end }}
{{ end }}
{{- if .Values.extraVolumes }}
      # For backwards compatibility, extra blocks accept a string (which is
      # passed through tpl) as well as structured YAML.
{{- if eq "string" (printf "%T" .Values.extraVolumes) }}
{{ tpl .Values.extraVolumes . | indent 8 }}
{{- else }}
{{ toYaml .Values.extraVolumes | indent 8 }}
{{- end }}
{{- end }}
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
{{ toYaml .Values.imagePullSecrets | indent 8 }}
{{- end }}
enableServiceLinks: {{ .Values.enableServiceLinks }}
{{- if .Values.hostAliases }}
hostAliases: {{ toYaml .Values.hostAliases | nindent 8 }}
{{- end }}
{{- if or (.Values.extraInitContainers) (.Values.sysctlInitContainer.enabled) (.Values.keystore) }}
initContainers:
{{- if .Values.sysctlInitContainer.enabled }}
- name: configure-sysctl
securityContext:
runAsUser: 0
privileged: true
image: "{{ .Values.image }}:{{ .Values.imageTag }}"
imagePullPolicy: "{{ .Values.imagePullPolicy }}"
command: ["sysctl", "-w", "vm.max_map_count={{ .Values.sysctlVmMaxMapCount}}"]
resources:
{{ toYaml .Values.initResources | indent 10 }}
{{- end }}
{{ if .Values.keystore }}
- name: keystore
securityContext:
{{ toYaml .Values.securityContext | indent 10 }}
image: "{{ .Values.image }}:{{ .Values.imageTag }}"
imagePullPolicy: "{{ .Values.imagePullPolicy }}"
command:
- bash
- -c
- |
set -euo pipefail
elasticsearch-keystore create
for i in /tmp/keystoreSecrets/*/*; do
key=$(basename $i)
echo "Adding file $i to keystore key $key"
elasticsearch-keystore add-file "$key" "$i"
done
# Add the bootstrap password since otherwise the Elasticsearch entrypoint tries to do this on startup
if [ ! -z ${ELASTIC_PASSWORD+x} ]; then
echo 'Adding env $ELASTIC_PASSWORD to keystore as key bootstrap.password'
echo "$ELASTIC_PASSWORD" | elasticsearch-keystore add -x bootstrap.password
fi
cp -a /usr/share/elasticsearch/config/elasticsearch.keystore /tmp/keystore/
env: {{ toYaml .Values.extraEnvs | nindent 10 }}
envFrom: {{ toYaml .Values.envFrom | nindent 10 }}
resources: {{ toYaml .Values.initResources | nindent 10 }}
volumeMounts:
- name: keystore
mountPath: /tmp/keystore
{{- range .Values.keystore }}
- name: keystore-{{ .secretName }}
mountPath: /tmp/keystoreSecrets/{{ .secretName }}
{{- end }}
{{ end }}
{{- if .Values.extraInitContainers }}
      # For backwards compatibility, extra blocks accept a string (which is
      # passed through tpl) as well as structured YAML.
{{- if eq "string" (printf "%T" .Values.extraInitContainers) }}
{{ tpl .Values.extraInitContainers . | indent 6 }}
{{- else }}
{{ toYaml .Values.extraInitContainers | indent 6 }}
{{- end }}
{{- end }}
{{- end }}
containers:
- name: "{{ template "elasticsearch.name" . }}"
securityContext:
{{ toYaml .Values.securityContext | indent 10 }}
image: "{{ .Values.image }}:{{ .Values.imageTag }}"
imagePullPolicy: "{{ .Values.imagePullPolicy }}"
readinessProbe:
exec:
command:
- bash
- -c
- |
set -e
                # Exit if ELASTIC_PASSWORD is unset
if [ -z "${ELASTIC_PASSWORD}" ]; then
echo "ELASTIC_PASSWORD variable is missing, exiting"
exit 1
fi
                # If the node is starting up, wait for the cluster to become ready (request params: "{{ .Values.clusterHealthCheckParams }}")
                # Once it has started, only check that the node itself is responding
START_FILE=/tmp/.es_start_file
# Disable nss cache to avoid filling dentry cache when calling curl
# This is required with Elasticsearch Docker using nss < 3.52
export NSS_SDB_USE_CACHE=no
http () {
local path="${1}"
local args="${2}"
set -- -XGET -s
if [ "$args" != "" ]; then
set -- "$@" $args
fi
set -- "$@" -u "elastic:${ELASTIC_PASSWORD}"
curl --output /dev/null -k "$@" "{{ .Values.protocol }}://127.0.0.1:{{ .Values.httpPort }}${path}"
}
if [ -f "${START_FILE}" ]; then
                  echo 'Elasticsearch is already running, checking that the node is healthy'
HTTP_CODE=$(http "/" "-w %{http_code}")
RC=$?
if [[ ${RC} -ne 0 ]]; then
echo "curl --output /dev/null -k -XGET -s -w '%{http_code}' \${BASIC_AUTH} {{ .Values.protocol }}://127.0.0.1:{{ .Values.httpPort }}/ failed with RC ${RC}"
exit ${RC}
fi
# ready if HTTP code 200, 503 is tolerable if ES version is 6.x
if [[ ${HTTP_CODE} == "200" ]]; then
exit 0
elif [[ ${HTTP_CODE} == "503" && "{{ include "elasticsearch.esMajorVersion" . }}" == "6" ]]; then
exit 0
else
echo "curl --output /dev/null -k -XGET -s -w '%{http_code}' \${BASIC_AUTH} {{ .Values.protocol }}://127.0.0.1:{{ .Values.httpPort }}/ failed with HTTP code ${HTTP_CODE}"
exit 1
fi
else
echo 'Waiting for elasticsearch cluster to become ready (request params: "{{ .Values.clusterHealthCheckParams }}" )'
if http "/_cluster/health?{{ .Values.clusterHealthCheckParams }}" "--fail" ; then
touch ${START_FILE}
exit 0
else
echo 'Cluster is not yet ready (request params: "{{ .Values.clusterHealthCheckParams }}" )'
exit 1
fi
fi
{{ toYaml .Values.readinessProbe | indent 10 }}
ports:
- name: http
containerPort: {{ .Values.httpPort }}
- name: transport
containerPort: {{ .Values.transportPort }}
resources:
{{ toYaml .Values.resources | indent 10 }}
env:
- name: node.name
valueFrom:
fieldRef:
fieldPath: metadata.name
{{- if has "master" .Values.roles }}
- name: cluster.initial_master_nodes
value: "{{ template "elasticsearch.endpoints" . }}"
{{- end }}
{{- if gt (len (include "elasticsearch.roles" .)) 0 }}
- name: node.roles
value: "{{ template "elasticsearch.roles" . }}"
{{- end }}
{{- if lt (int (include "elasticsearch.esMajorVersion" .)) 7 }}
- name: discovery.zen.ping.unicast.hosts
value: "{{ template "elasticsearch.masterService" . }}-headless"
{{- else }}
- name: discovery.seed_hosts
value: "{{ template "elasticsearch.masterService" . }}-headless"
{{- end }}
- name: cluster.name
value: "{{ .Values.clusterName }}"
- name: network.host
value: "{{ .Values.networkHost }}"
{{- if .Values.secret.enabled }}
- name: ELASTIC_PASSWORD
valueFrom:
secretKeyRef:
name: {{ template "elasticsearch.uname" . }}-credentials
key: password
{{- end }}
{{- if .Values.esJavaOpts }}
- name: ES_JAVA_OPTS
value: "{{ .Values.esJavaOpts }}"
{{- end }}
{{- if .Values.createCert }}
- name: xpack.security.enabled
value: "true"
- name: xpack.security.transport.ssl.enabled
value: "true"
- name: xpack.security.http.ssl.enabled
value: "true"
- name: xpack.security.transport.ssl.verification_mode
value: "certificate"
- name: xpack.security.transport.ssl.key
value: "/usr/share/elasticsearch/config/certs/tls.key"
- name: xpack.security.transport.ssl.certificate
value: "/usr/share/elasticsearch/config/certs/tls.crt"
- name: xpack.security.transport.ssl.certificate_authorities
value: "/usr/share/elasticsearch/config/certs/ca.crt"
- name: xpack.security.http.ssl.key
value: "/usr/share/elasticsearch/config/certs/tls.key"
- name: xpack.security.http.ssl.certificate
value: "/usr/share/elasticsearch/config/certs/tls.crt"
- name: xpack.security.http.ssl.certificate_authorities
value: "/usr/share/elasticsearch/config/certs/ca.crt"
{{- end }}
{{- if .Values.extraEnvs }}
{{ toYaml .Values.extraEnvs | indent 10 }}
{{- end }}
{{- if .Values.envFrom }}
envFrom:
{{ toYaml .Values.envFrom | indent 10 }}
{{- end }}
volumeMounts:
{{- if .Values.persistence.enabled }}
- name: "{{ template "elasticsearch.uname" . }}"
mountPath: /usr/share/elasticsearch/data
{{- end }}
{{- if .Values.createCert }}
- name: elasticsearch-certs
mountPath: /usr/share/elasticsearch/config/certs
readOnly: true
{{- end }}
{{ if .Values.keystore }}
- name: keystore
mountPath: /usr/share/elasticsearch/config/elasticsearch.keystore
subPath: elasticsearch.keystore
{{ end }}
{{- range .Values.secretMounts }}
- name: {{ .name }}
mountPath: {{ .path }}
{{- if .subPath }}
subPath: {{ .subPath }}
{{- end }}
{{- end }}
{{- range $path, $config := .Values.esConfig }}
- name: esconfig
mountPath: /usr/share/elasticsearch/config/{{ $path }}
subPath: {{ $path }}
{{- end -}}
{{- range $path, $config := .Values.esJvmOptions }}
- name: esjvmoptions
mountPath: /usr/share/elasticsearch/config/jvm.options.d/{{ $path }}
subPath: {{ $path }}
{{- end -}}
{{- if .Values.extraVolumeMounts }}
          # For backwards compatibility, extra blocks accept a string (which is
          # passed through tpl) as well as structured YAML.
{{- if eq "string" (printf "%T" .Values.extraVolumeMounts) }}
{{ tpl .Values.extraVolumeMounts . | indent 10 }}
{{- else }}
{{ toYaml .Values.extraVolumeMounts | indent 10 }}
{{- end }}
{{- end }}
{{- if .Values.lifecycle }}
lifecycle:
{{ toYaml .Values.lifecycle | indent 10 }}
{{- end }}
{{- if .Values.extraContainers }}
      # For backwards compatibility, extra blocks accept a string (which is
      # passed through tpl) as well as structured YAML.
{{- if eq "string" (printf "%T" .Values.extraContainers) }}
{{ tpl .Values.extraContainers . | indent 6 }}
{{- else }}
{{ toYaml .Values.extraContainers | indent 6 }}
{{- end }}
{{- end }}

@@ -0,0 +1,50 @@
{{- if .Values.tests.enabled -}}
---
apiVersion: v1
kind: Pod
metadata:
{{- if .Values.healthNameOverride }}
name: {{ .Values.healthNameOverride | quote }}
{{- else }}
name: "{{ .Release.Name }}-{{ randAlpha 5 | lower }}-test"
{{- end }}
annotations:
"helm.sh/hook": test
"helm.sh/hook-delete-policy": hook-succeeded
spec:
securityContext:
{{ toYaml .Values.podSecurityContext | indent 4 }}
containers:
{{- if .Values.healthNameOverride }}
- name: {{ .Values.healthNameOverride | quote }}
{{- else }}
- name: "{{ .Release.Name }}-{{ randAlpha 5 | lower }}-test"
{{- end }}
env:
- name: ELASTIC_PASSWORD
valueFrom:
secretKeyRef:
name: {{ template "elasticsearch.uname" . }}-credentials
key: password
image: "{{ .Values.image }}:{{ .Values.imageTag }}"
imagePullPolicy: "{{ .Values.imagePullPolicy }}"
command:
- "sh"
- "-c"
- |
#!/usr/bin/env bash -e
curl -XGET --fail --cacert /usr/share/elasticsearch/config/certs/tls.crt -u "elastic:${ELASTIC_PASSWORD}" https://'{{ template "elasticsearch.uname" . }}:{{ .Values.httpPort }}/_cluster/health?{{ .Values.clusterHealthCheckParams }}'
volumeMounts:
- name: elasticsearch-certs
mountPath: /usr/share/elasticsearch/config/certs
readOnly: true
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
{{ toYaml .Values.imagePullSecrets | indent 4 }}
{{- end }}
restartPolicy: Never
volumes:
- name: elasticsearch-certs
secret:
secretName: {{ template "elasticsearch.uname" . }}-certs
{{- end -}}

@@ -0,0 +1,356 @@
---
clusterName: "elasticsearch"
nodeGroup: "master"
# The service that non-master groups will try to connect to when joining the cluster.
# This should be set to clusterName + "-" + nodeGroup for your master group.
masterService: ""
# Elasticsearch roles that will be applied to this nodeGroup
# These will be set as environment variables. E.g. node.roles=master
# https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html#node-roles
roles:
- master
- data
- data_content
- data_hot
- data_warm
- data_cold
- ingest
- ml
- remote_cluster_client
- transform
replicas: 3
minimumMasterNodes: 2
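# Overrides the major version used for version-specific behaviour (e.g. the
# discovery settings below). If left empty, the chart derives it from imageTag.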
esMajorVersion: ""
# Allows you to add any config files in /usr/share/elasticsearch/config/
# such as elasticsearch.yml and log4j2.properties
esConfig: {}
# elasticsearch.yml: |
# key:
# nestedkey: value
# log4j2.properties: |
# key = value
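# When true, the chart's generated TLS certificate Secret ("<name>-certs") is
# mounted and the xpack.security.* settings in the StatefulSet are enabled.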
createCert: true
esJvmOptions: {}
# processors.options: |
# -XX:ActiveProcessorCount=3
# Extra environment variables to append to this nodeGroup
# This will be appended to the current 'env:' key. You can use any of the kubernetes env
# syntax here
extraEnvs: []
# - name: MY_ENVIRONMENT_VAR
# value: the_value_goes_here
# Allows you to load environment variables from kubernetes secret or config map
envFrom: []
# - secretRef:
# name: env-secret
# - configMapRef:
# name: config-map
# Disable it to use your own elastic-credential Secret.
secret:
enabled: true
password: "" # generated randomly if not defined
# A list of secrets and their paths to mount inside the pod
# This is useful for mounting certificates for security and for mounting
# the X-Pack license
secretMounts: []
# - name: elastic-certificates
# secretName: elastic-certificates
# path: /usr/share/elasticsearch/config/certs
# defaultMode: 0755
hostAliases: []
# - ip: "127.0.0.1"
#   hostnames:
#     - "foo.local"
#     - "bar.local"
image: "docker.elastic.co/elasticsearch/elasticsearch"
imageTag: "8.5.1"
imagePullPolicy: "IfNotPresent"
podAnnotations: {}
# iam.amazonaws.com/role: es-cluster
# Additional labels
labels: {}
esJavaOpts: "" # example: "-Xmx1g -Xms1g"
resources:
requests:
cpu: "1000m"
memory: "2Gi"
limits:
cpu: "1000m"
memory: "2Gi"
initResources: {}
# limits:
# cpu: "25m"
#   memory: "128Mi"
# requests:
# cpu: "25m"
# memory: "128Mi"
networkHost: "0.0.0.0"
volumeClaimTemplate:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 30Gi
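  # Other PersistentVolumeClaim spec fields can be set here as well, e.g. a
  # storage class (the name below is illustrative):
  # storageClassName: "standard"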
rbac:
create: false
serviceAccountAnnotations: {}
serviceAccountName: ""
automountToken: true
podSecurityPolicy:
create: false
name: ""
spec:
privileged: true
fsGroup:
rule: RunAsAny
runAsUser:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
volumes:
- secret
- configMap
- persistentVolumeClaim
- emptyDir
persistence:
enabled: true
labels:
# Add default labels for the volumeClaimTemplate of the StatefulSet
enabled: false
annotations: {}
extraVolumes: []
# - name: extras
# emptyDir: {}
extraVolumeMounts: []
# - name: extras
# mountPath: /usr/share/extras
# readOnly: true
extraContainers: []
# - name: do-something
# image: busybox
# command: ['do', 'something']
extraInitContainers: []
# - name: do-something
# image: busybox
# command: ['do', 'something']
# This is the PriorityClass settings as defined in
# https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
priorityClassName: ""
# By default this will make sure two pods don't end up on the same node
# Changing this to a region would allow you to spread pods across regions
antiAffinityTopologyKey: "kubernetes.io/hostname"
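# For example, to spread pods across zones instead of individual hosts:
# antiAffinityTopologyKey: "topology.kubernetes.io/zone"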
# "hard" means that pods will only be scheduled if there are enough nodes for them
# and that they will never end up on the same node. Setting this to "soft" applies the rule on a "best effort" basis.
antiAffinity: "hard"
# This is the node affinity settings as defined in
# https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature
nodeAffinity: {}
# The default is to deploy all pods serially. By setting this to parallel, all pods are started at
# the same time when bootstrapping the cluster.
podManagementPolicy: "Parallel"
# The environment variables injected by service links are not used, but can lead to slow Elasticsearch boot times when
# there are many services in the current namespace.
# If you experience slow pod startups you probably want to set this to `false`.
enableServiceLinks: true
protocol: https
httpPort: 9200
transportPort: 9300
service:
enabled: true
labels: {}
labelsHeadless: {}
type: ClusterIP
  # When enabled, all endpoints are considered "ready" even if the Pods themselves are not
# https://kubernetes.io/docs/reference/kubernetes-api/service-resources/service-v1/#ServiceSpec
publishNotReadyAddresses: false
nodePort: ""
annotations: {}
httpPortName: http
transportPortName: transport
loadBalancerIP: ""
loadBalancerSourceRanges: []
externalTrafficPolicy: ""
updateStrategy: RollingUpdate
# This is the max unavailable setting for the pod disruption budget.
# The default value of 1 will make sure that Kubernetes won't allow more than 1
# of your pods to be unavailable during maintenance.
maxUnavailable: 1
podSecurityContext:
fsGroup: 1000
runAsUser: 1000
securityContext:
capabilities:
drop:
- ALL
# readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
# How long to wait for elasticsearch to stop gracefully
terminationGracePeriod: 120
sysctlVmMaxMapCount: 262144
readinessProbe:
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 3
timeoutSeconds: 5
# https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html#request-params wait_for_status
clusterHealthCheckParams: "wait_for_status=green&timeout=1s"
## Use an alternate scheduler.
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
schedulerName: ""
imagePullSecrets: []
nodeSelector: {}
tolerations: []
# Enabling this will publicly expose your Elasticsearch instance.
# Only enable this if you have security enabled on your cluster
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
className: "nginx"
pathtype: ImplementationSpecific
hosts:
- host: chart-example.local
paths:
- path: /
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
nameOverride: ""
fullnameOverride: ""
healthNameOverride: ""
lifecycle: {}
# preStop:
# exec:
# command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
# postStart:
# exec:
# command:
# - bash
# - -c
# - |
# #!/bin/bash
# # Add a template to adjust number of shards/replicas
# TEMPLATE_NAME=my_template
# INDEX_PATTERN="logstash-*"
# SHARD_COUNT=8
# REPLICA_COUNT=1
# ES_URL=http://localhost:9200
# while [[ "$(curl -s -o /dev/null -w '%{http_code}\n' $ES_URL)" != "200" ]]; do sleep 1; done
# curl -XPUT "$ES_URL/_template/$TEMPLATE_NAME" -H 'Content-Type: application/json' -d'{"index_patterns":['\""$INDEX_PATTERN"\"'],"settings":{"number_of_shards":'$SHARD_COUNT',"number_of_replicas":'$REPLICA_COUNT'}}'
sysctlInitContainer:
enabled: true
keystore: []
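# Each entry references a Secret that the keystore init container mounts and
# whose files are added to the Elasticsearch keystore (keyed by file name).
# A sketch with illustrative secret names:
# keystore:
#   - secretName: elasticsearch-s3-credentials
#   - secretName: slack-hook
#     items:
#       - key: slack_url
#         path: xpack.notification.slack.account.monitoring.secure_url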
networkPolicy:
## Enable creation of NetworkPolicy resources. Only Ingress traffic is filtered for now.
## In order for a Pod to access Elasticsearch, it needs to have the following label:
## {{ template "uname" . }}-client: "true"
## Example for default configuration to access HTTP port:
## elasticsearch-master-http-client: "true"
## Example for default configuration to access transport port:
## elasticsearch-master-transport-client: "true"
http:
enabled: false
    ## If explicitNamespacesSelector is not set (or set to {}), only client Pods in the networkPolicy's
    ## namespace that match all criteria can reach Elasticsearch.
    ## To allow client Pods from other namespaces as well, use this parameter to select those
    ## namespaces.
##
# explicitNamespacesSelector:
# # Accept from namespaces with all those different rules (only from whitelisted Pods)
# matchLabels:
# role: frontend
# matchExpressions:
# - {key: role, operator: In, values: [frontend]}
## Additional NetworkPolicy Ingress "from" rules to set. Note that all rules are OR-ed.
##
# additionalRules:
# - podSelector:
# matchLabels:
# role: frontend
# - podSelector:
# matchExpressions:
# - key: role
# operator: In
# values:
# - frontend
transport:
    ## Note that all Elasticsearch Pods can always talk to each other over the transport port, even when this is enabled.
enabled: false
# explicitNamespacesSelector:
# matchLabels:
# role: frontend
# matchExpressions:
# - {key: role, operator: In, values: [frontend]}
# additionalRules:
# - podSelector:
# matchLabels:
# role: frontend
# - podSelector:
# matchExpressions:
# - key: role
# operator: In
# values:
# - frontend
tests:
enabled: true