This article demonstrates how you can use the Operator Lifecycle Manager to deploy a Kubernetes Operator to your cluster. Then, you'll use the Operator to spin up an Elastic Cloud on Kubernetes (ECK) cluster.

An operator is a software extension that uses custom resources (extensions of the Kubernetes API) to manage complex applications on behalf of users. The artifacts that come with an operator are:

- A set of Custom Resource Definitions (CRDs) that extend the behavior of the cluster without making any change to its code
- A controller that supervises the CRDs and performs various activities, such as spinning up a pod, taking a backup, and so on

The complexity encapsulated within an Operator can vary, as shown in the diagram below:

[Diagram: the range of complexity an Operator can encapsulate]

## Prerequisites

A Kubernetes cluster (v1.7 or newer) with a control plane and two workers. If you don't have a running Kubernetes cluster, refer to the "Create a Kubernetes Cluster with Kind" section below.

## Create a Kubernetes Cluster with Kind (Optional)

Kind is a tool for running local Kubernetes clusters using Docker container "nodes". Follow the steps in this section if you don't have a running Kubernetes cluster.

1. Install kind by following the steps from the Kind Quick Start page.

2. Place the following spec into a file named kind-es-cluster.yaml:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
```

3. Create a cluster with a control plane and two worker nodes by running the kind create cluster command followed by the --config flag and the name of the configuration file:

```
kind create cluster --config kind-es-cluster.yaml
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.16.3) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
 ✓ Joining worker nodes 🚜
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Have a nice day! 👋
```

4. At this point, you can retrieve the list of services that were started on your cluster:

```
kubectl cluster-info
Kubernetes master is running at https://127.0.0.1:53519
KubeDNS is running at https://127.0.0.1:53519/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```
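As a quick sanity check that isn't part of the original walkthrough, you can also confirm that the control plane and both workers registered with the cluster:

```
kubectl get nodes
```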
## Install the Operator Lifecycle Manager

In this section, you'll install the Operator Lifecycle Manager (OLM), a tool that helps you manage the Operators deployed to your cluster in an automated fashion.

1. Run the following commands to install OLM:

```
curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/0.13.0/install.sh | bash -s 0.13.0
customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
namespace/olm created
namespace/operators created
clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
serviceaccount/olm-operator-serviceaccount created
clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
deployment.apps/olm-operator created
deployment.apps/catalog-operator created
clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
operatorgroup.operators.coreos.com/global-operators created
operatorgroup.operators.coreos.com/olm-operators created
clusterserviceversion.operators.coreos.com/packageserver created
catalogsource.operators.coreos.com/operatorhubio-catalog created
Waiting for deployment "olm-operator" rollout to finish: 0 of 1 updated replicas are available...
deployment "olm-operator" successfully rolled out
deployment "catalog-operator" successfully rolled out
Package server phase: Installing
Package server phase: Succeeded
deployment "packageserver" successfully rolled out
```

## Install the ECK Operator

The ECK Operator provides support for managing and monitoring multiple clusters, upgrading to new stack versions, scaling cluster capacity, and so on. This section walks through installing the ECK Operator to your Kubernetes cluster.

1. Enter the following kubectl apply command to install the ECK Operator:

```
kubectl apply -f https://download.elastic.co/downloads/eck/1.0.0/all-in-one.yaml
customresourcedefinition.apiextensions.k8s.io/apmservers.apm.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/elasticsearches.elasticsearch.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/kibanas.kibana.k8s.elastic.co created
clusterrole.rbac.authorization.k8s.io/elastic-operator created
clusterrolebinding.rbac.authorization.k8s.io/elastic-operator created
namespace/elastic-system created
statefulset.apps/elastic-operator created
serviceaccount/elastic-operator created
validatingwebhookconfiguration.admissionregistration.k8s.io/elastic-webhook.k8s.elastic.co created
service/elastic-webhook-server created
secret/elastic-webhook-server-cert created
```

2. Remember that the ECK Operator registers its types as Custom Resource Definitions. Thus, you can display them by using the following command:

```
kubectl get CustomResourceDefinition
NAME                                           CREATED AT
apmservers.apm.k8s.elastic.co                  2020-01-29T07:02:24Z
catalogsources.operators.coreos.com            2020-01-29T06:59:21Z
clusterserviceversions.operators.coreos.com    2020-01-29T06:59:20Z
elasticsearches.elasticsearch.k8s.elastic.co   2020-01-29T07:02:24Z
installplans.operators.coreos.com              2020-01-29T06:59:20Z
kibanas.kibana.k8s.elastic.co                  2020-01-29T07:02:25Z
operatorgroups.operators.coreos.com            2020-01-29T06:59:21Z
subscriptions.operators.coreos.com             2020-01-29T06:59:20Z
```
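Tip: the list above mixes OLM's own CRDs with the ones ECK registered. As an addition to the original article, you can filter the list down to just the Elastic CRDs:

```
# List only the Elastic CRDs (the apm, elasticsearch, and kibana groups)
kubectl get crd -o name | grep elastic
```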
3. To see more details about a specific CRD, run the kubectl describe CustomResourceDefinition command followed by the name of the CRD:

```
kubectl describe CustomResourceDefinition elasticsearches.elasticsearch.k8s.elastic.co
Name:         elasticsearches.elasticsearch.k8s.elastic.co
Namespace:
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"apiextensions.k8s.io/v1beta1","kind":"CustomResourceDefinition","metadata":{"annotations":{},"creationTimestamp":null,"name...
API Version:  apiextensions.k8s.io/v1
Kind:         CustomResourceDefinition
Metadata:
  Creation Timestamp:  2020-01-29T07:02:24Z
  Generation:          1
  Resource Version:    1074
  Self Link:           /apis/apiextensions.k8s.io/v1/customresourcedefinitions/elasticsearches.elasticsearch.k8s.elastic.co
  UID:                 2332769c-ead3-4208-b6bd-68b8cfcb3692
Spec:
  Conversion:
    Strategy:  None
  Group:       elasticsearch.k8s.elastic.co
  Names:
    Categories:
      elastic
    Kind:       Elasticsearch
    List Kind:  ElasticsearchList
    Plural:     elasticsearches
    Short Names:
      es
    Singular:                elasticsearch
  Preserve Unknown Fields:   true
  Scope:                     Namespaced
  Versions:
    Additional Printer Columns:
      Json Path:    .status.health
      Name:         health
      Type:         string
      Description:  Available nodes
      Json Path:    .status.availableNodes
...
```

The above output was truncated for brevity.

4. You can check the progress of the installation with:

```
kubectl -n elastic-system logs -f statefulset.apps/elastic-operator
{"level":"info","@timestamp":"2020-01-27T14:57:57.656Z","logger":"controller-runtime.controller","message":"Starting workers","ver":"1.0.0-6881438d","controller":"license-controller","worker count":1}
{"level":"info","@timestamp":"2020-01-27T14:57:57.757Z","logger":"controller-runtime.controller","message":"Starting EventSource","ver":"1.0.0-6881438d","controller":"elasticsearch-controller","source":"kind source: /, Kind="}
{"level":"info","@timestamp":"2020-01-27T14:57:57.758Z","logger":"controller-runtime.controller","message":"Starting EventSource","ver":"1.0.0-6881438d","controller":"elasticsearch-controller","source":"kind source: /, Kind="}
{"level":"info","@timestamp":"2020-01-27T14:57:57.759Z","logger":"controller-runtime.controller","message":"Starting EventSource","ver":"1.0.0-6881438d","controller":"elasticsearch-controller","source":"channel source: 0xc00003a870"}
{"level":"info","@timestamp":"2020-01-27T14:57:57.759Z","logger":"controller-runtime.controller","message":"Starting Controller","ver":"1.0.0-6881438d","controller":"elasticsearch-controller"}
{"level":"info","@timestamp":"2020-01-27T14:57:57.760Z","logger":"controller-runtime.controller","message":"Starting workers","ver":"1.0.0-6881438d","controller":"elasticsearch-controller","worker count":1}
```

Note that the above output was truncated for brevity.

5. List the pods running in the elastic-system namespace with:

```
kubectl get pods -n elastic-system
NAME                 READY   STATUS    RESTARTS   AGE
elastic-operator-0   1/1     Running   0          11m
```

Make sure the status is Running before moving on.
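Tip: rather than polling kubectl get pods, you can make kubectl block until the operator pod reports ready. This optional command is our addition; the pod name elastic-operator-0 matches the output above:

```
# Block until the operator pod is Ready, or fail after two minutes
kubectl -n elastic-system wait --for=condition=Ready pod/elastic-operator-0 --timeout=120s
```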
## Deploy an Elasticsearch Cluster

In this section, we'll walk you through the process of deploying an Elasticsearch cluster with the Kubernetes Operator.

1. Create a file called elastic-search-cluster.yaml with the following content:

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.5.2
  nodeSets:
  - name: default
    count: 2
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
```

Things to note in the above spec:

- the version parameter specifies the Elasticsearch version the Operator will deploy
- the count parameter sets the number of Elasticsearch nodes. Make sure it's not greater than the number of nodes in your Kubernetes cluster.

2. Create a two-node Elasticsearch cluster by entering the following command:

```
kubectl apply -f elastic-search-cluster.yaml
elasticsearch.elasticsearch.k8s.elastic.co/quickstart created
```

Behind the scenes, the Operator automatically creates and manages the resources needed to achieve the desired state.

3. You can now run the following command to see the status of the newly created Elasticsearch cluster:

```
kubectl get elasticsearch
NAME         HEALTH    NODES   VERSION   PHASE   AGE
quickstart   unknown           7.5.2             3m51s
```

Note that the HEALTH status has not been reported yet. It takes a few minutes for the process to complete. Then, the HEALTH status will show as green:

```
kubectl get elasticsearch
NAME         HEALTH   NODES   VERSION   PHASE   AGE
quickstart   green    2       7.5.2     Ready   8m47s
```

4. Check the status of the pods running in your cluster with:

```
kubectl get pods --selector='elasticsearch.k8s.elastic.co/cluster-name=quickstart'
NAME                      READY   STATUS    RESTARTS   AGE
quickstart-es-default-0   1/1     Running   0          9m18s
quickstart-es-default-1   1/1     Running   0          9m18s
```

## Verify Your Elasticsearch Installation

To verify the installation, follow these steps.

1. The Operator exposes the service with a static IP address. Run the following kubectl get service command to see it:

```
kubectl get service quickstart-es-http
NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
quickstart-es-http   ClusterIP   10.103.196.28   <none>        9200/TCP   15m
```

2. To forward all connections made to localhost:9200 to port 9200 of the pod running the quickstart-es-http service, type the following command in a new terminal window:

```
kubectl port-forward service/quickstart-es-http 9200
Forwarding from 127.0.0.1:9200 -> 9200
Forwarding from [::1]:9200 -> 9200
```

3. Move back to the first terminal window. The password for the elastic user is stored in a Kubernetes secret. Use the following command to retrieve the password and save it into an environment variable called PASSWORD:

```
PASSWORD=$(kubectl get secret quickstart-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode)
```

4. At this point, you can use curl to make a request:

```
curl -u "elastic:$PASSWORD" -k "https://localhost:9200"
{
  "name" : "quickstart-es-default-0",
  "cluster_name" : "quickstart",
  "cluster_uuid" : "g0_1Vk9iQoGwFWYdzUqfig",
  "version" : {
    "number" : "7.5.2",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "8bec50e1e0ad29dad5653712cf3bb580cd1afcdf",
    "build_date" : "2020-01-15T12:11:52.313576Z",
    "build_snapshot" : false,
    "lucene_version" : "8.3.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
```
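While the port-forward is still running, you can also query the standard _cluster/health endpoint to confirm that both nodes joined and the cluster is green. This extra check is our addition to the walkthrough:

```
# Expect "status" : "green" and "number_of_nodes" : 2
curl -u "elastic:$PASSWORD" -k "https://localhost:9200/_cluster/health?pretty"
```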
## Deploy Kibana

This section walks through creating a new Kibana instance using the Kubernetes Operator.

1. Create a file called kibana.yaml with the following content:

```yaml
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 7.5.1
  count: 1
  elasticsearchRef:
    name: quickstart
  podTemplate:
    metadata:
      labels:
        foo: kibana
    spec:
      containers:
      - name: kibana
        resources:
          requests:
            memory: 1Gi
            cpu: 0.5
          limits:
            memory: 1Gi
            cpu: 1
```

2. Enter the following kubectl apply command to create the Kibana instance:

```
kubectl apply -f kibana.yaml
kibana.kibana.k8s.elastic.co/quickstart created
```

3. During the installation, you can check on the progress by running:

```
kubectl get kibana
NAME         HEALTH   NODES   VERSION   AGE
quickstart                    7.5.1     3s
```

Note that in the above output, the HEALTH status hasn't been reported yet. Once the installation is completed, the HEALTH status will show as green:

```
kubectl get kibana
NAME         HEALTH   NODES   VERSION   AGE
quickstart   green    1       7.5.1     104s
```

4. At this point, you can list the Kibana pods by entering the following command:

```
kubectl get pod --selector='kibana.k8s.elastic.co/name=quickstart'
NAME                             READY   STATUS    RESTARTS   AGE
quickstart-kb-7578b8d8fc-ftvbz   1/1     Running   0          70s
```

## Verify Your Kibana Installation

Follow these steps to verify your Kibana installation.

1. The Kubernetes Operator has created a ClusterIP service for Kibana. You can retrieve it like this:

```
kubectl get service quickstart-kb-http
NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
quickstart-kb-http   ClusterIP   10.98.126.75   <none>        5601/TCP   11m
```

2. To make the service available on your host, type the following command in a new terminal window:

```
kubectl port-forward service/quickstart-kb-http 5601
Forwarding from 127.0.0.1:5601 -> 5601
Forwarding from [::1]:5601 -> 5601
```

3. To access Kibana, you need the password for the elastic user. You've already saved it into an environment variable called PASSWORD in Step 3 of the "Verify Your Elasticsearch Installation" section. You can now display it with:

```
echo $PASSWORD
vrfr6b6v4687hnldrc72kb4q
```

In our example, the password is vrfr6b6v4687hnldrc72kb4q, but yours will be different.

4. Now, you can access Kibana by pointing your browser to https://localhost:5601.

5. Log in using the elastic username and the password you retrieved earlier.

## Manage Your ECK Cluster with the Kubernetes Operator

In this section, you'll learn how to scale your ECK cluster down and up.

1. To scale down, modify the number of nodes running Elasticsearch by specifying nodeSets.count: 1 in your elastic-search-cluster.yaml file. Your spec should look like this:

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.5.2
  nodeSets:
  - name: default
    count: 1
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
```

2. You can apply the spec with:

```
kubectl apply -f elastic-search-cluster.yaml
elasticsearch.elasticsearch.k8s.elastic.co/quickstart configured
```

Behind the scenes, the Operator makes the changes required to reach the desired state. This can take a bit of time.

3. In the meantime, you can display the status of your cluster by entering the following command:

```
kubectl get elasticsearch
NAME         HEALTH   NODES   VERSION   PHASE             AGE
quickstart   green    1       7.5.2     ApplyingChanges   56m
```

In the above output, note that there's only one node running Elasticsearch.

4. You can list the pods running Elasticsearch:

```
kubectl get pods --selector='elasticsearch.k8s.elastic.co/cluster-name=quickstart'
NAME                      READY   STATUS    RESTARTS   AGE
quickstart-es-default-0   1/1     Running   0          58m
```
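Tip: you can also confirm the scale-down from Elasticsearch itself through the standard _cat/nodes endpoint. This check is our addition; if the earlier port-forward stopped because the pod it targeted was removed during the scale-down, simply restart it first:

```
# Should list a single node once the scale-down completes
curl -u "elastic:$PASSWORD" -k "https://localhost:9200/_cat/nodes?v"
```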
5. Similarly, you can scale up your Elasticsearch cluster by specifying nodeSets.count: 2 in your elastic-search-cluster.yaml file and applying it again:

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.5.2
  nodeSets:
  - name: default
    count: 2
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
```

6. You can monitor the progress with:

```
kubectl get elasticsearch
NAME         HEALTH   NODES   VERSION   PHASE             AGE
quickstart   green    1       7.5.2     ApplyingChanges   61m
```

Once the desired state is reached, the PHASE column will show Ready:

```
kubectl get elasticsearch
NAME         HEALTH   NODES   VERSION   PHASE   AGE
quickstart   green    2       7.5.2     Ready   68m
```

Congratulations! You've covered a lot of ground, and now you're familiar with the basic principles behind Kubernetes Operators. In a future post, we'll walk through the process of writing our own Operator.

Thanks for reading!
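As a final, optional step, here is a minimal cleanup sketch, assuming you followed the tutorial exactly as written (a quickstart Elasticsearch cluster and Kibana instance, the ECK 1.0.0 all-in-one manifest, and a Kind cluster created with the default name):

```
# Remove the Kibana and Elasticsearch custom resources
kubectl delete kibana quickstart
kubectl delete elasticsearch quickstart

# Remove the ECK Operator and its CRDs
kubectl delete -f https://download.elastic.co/downloads/eck/1.0.0/all-in-one.yaml

# If you created the cluster with Kind, delete it (default name "kind")
kind delete cluster
```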