
First Steps with the Kubernetes Operator


This article walks through the initial steps of installing a Kubernetes Operator, along with some additional tools that support it.



This blog post demonstrates how you can use the Operator Lifecycle Manager to deploy a Kubernetes Operator to your cluster. Then, you will use the Operator to spin up an Elastic Cloud on Kubernetes (ECK) cluster.

An operator is a software extension that uses custom resources (extensions of the Kubernetes API) to manage complex applications on behalf of users.

The artifacts that come with an operator are:

  • A set of CRDs that extend the behavior of the cluster without making any change to its code
  • A controller that supervises the CRDs and performs various activities, such as spinning up a pod, taking a backup, and so on
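To make the first bullet concrete, here is a hedged sketch of such a pairing: a minimal CRD that teaches the cluster a new resource type, followed by an instance of it. The Backup kind and the example.com group are purely illustrative, not taken from any real operator.

```yaml
# Illustrative CRD: registers a new "Backup" resource type with the API server,
# extending the cluster without changing any Kubernetes code.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com          # must be <plural>.<group>
spec:
  group: example.com                 # hypothetical API group
  scope: Namespaced
  names:
    kind: Backup
    plural: backups
    singular: backup
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object                 # minimal schema; real CRDs validate spec fields
---
# An instance of the new type; a controller watching Backup objects would
# react to it, for example by spinning up a backup pod.
apiVersion: example.com/v1
kind: Backup
metadata:
  name: nightly
```

Once the CRD is applied, `kubectl get backups` works just like it does for built-in resources.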

The complexity encapsulated within an Operator can vary, as shown in the diagram below:

Complexity in Operator


Prerequisites

A Kubernetes cluster  (v1.7 or newer) with a control plane and two workers. If you don’t have a running Kubernetes cluster, refer to the “Create a Kubernetes Cluster with Kind” section below.

Create a Kubernetes Cluster with Kind (Optional)

Kind is a tool for running local Kubernetes clusters using Docker container "nodes." Follow the steps in  this section if you don't have a running Kubernetes cluster:

  1. Install kind by following the steps from the Kind Quick Start page.
  2. Place the following spec into a file named kind-es-cluster.yaml:
YAML

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker

  3. Create a cluster with a control plane and two worker nodes by running the kind create cluster command with the --config flag and the name of the configuration file:

Shell

kind create cluster --config kind-es-cluster.yaml


Shell

Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.16.3)
 ✓ Preparing nodes
 ✓ Writing configuration
 ✓ Starting control-plane
 ✓ Installing CNI
 ✓ Installing StorageClass
 ✓ Joining worker nodes
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Have a nice day!


  4. At this point, you can retrieve the list of services that were started on your cluster:

Shell

kubectl cluster-info


Shell

Kubernetes master is running at https://127.0.0.1:53519
KubeDNS is running at https://127.0.0.1:53519/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.


Install the Operator Lifecycle Manager

In this section, you'll install the Operator Lifecycle Manager ("OLM"), a tool that helps you manage the Operators deployed to your cluster in an automated fashion.
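Under the hood, OLM works declaratively: you create a Subscription resource, and OLM resolves and installs the matching operator from a catalog. As a hedged sketch of that shape (the operator name below is illustrative; real package names come from OperatorHub):

```yaml
# Hypothetical Subscription: asks OLM to install an operator from a catalog.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-operator            # illustrative name
  namespace: operators         # default namespace OLM creates for operators
spec:
  channel: stable              # update channel published by the operator author
  name: my-operator            # package name as listed in the catalog
  source: operatorhubio-catalog
  sourceNamespace: olm
```

OLM then keeps the operator up to date by following the chosen channel.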

  1. Run the following commands to install OLM:
Shell

curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/0.13.0/install.sh | bash -s 0.13.0


Shell

customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
namespace/olm created
namespace/operators created
clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
serviceaccount/olm-operator-serviceaccount created
clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
deployment.apps/olm-operator created
deployment.apps/catalog-operator created
clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
operatorgroup.operators.coreos.com/global-operators created
operatorgroup.operators.coreos.com/olm-operators created
clusterserviceversion.operators.coreos.com/packageserver created
catalogsource.operators.coreos.com/operatorhubio-catalog created
Waiting for deployment "olm-operator" rollout to finish: 0 of 1 updated replicas are available...
deployment "olm-operator" successfully rolled out
deployment "catalog-operator" successfully rolled out
Package server phase: Installing
Package server phase: Succeeded
deployment "packageserver" successfully rolled out


Install the ECK Operator

The ECK Operator provides support for managing and monitoring multiple clusters, upgrading to new stack versions, scaling cluster capacity, etc. This section walks through installing the ECK Operator to your Kubernetes cluster:

  1. Enter the following kubectl apply command to install the ECK Operator:
Shell

kubectl apply -f https://download.elastic.co/downloads/eck/1.0.0/all-in-one.yaml


Shell

customresourcedefinition.apiextensions.k8s.io/apmservers.apm.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/elasticsearches.elasticsearch.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/kibanas.kibana.k8s.elastic.co created
clusterrole.rbac.authorization.k8s.io/elastic-operator created
clusterrolebinding.rbac.authorization.k8s.io/elastic-operator created
namespace/elastic-system created
statefulset.apps/elastic-operator created
serviceaccount/elastic-operator created
validatingwebhookconfiguration.admissionregistration.k8s.io/elastic-webhook.k8s.elastic.co created
service/elastic-webhook-server created
secret/elastic-webhook-server-cert created


2. Remember that the CRDs registered by the ECK Operator are themselves Kubernetes resources. Thus, you can display them by using the following command:

Shell

kubectl get CustomResourceDefinition


Shell

NAME                                           CREATED AT
apmservers.apm.k8s.elastic.co                  2020-01-29T07:02:24Z
catalogsources.operators.coreos.com            2020-01-29T06:59:21Z
clusterserviceversions.operators.coreos.com    2020-01-29T06:59:20Z
elasticsearches.elasticsearch.k8s.elastic.co   2020-01-29T07:02:24Z
installplans.operators.coreos.com              2020-01-29T06:59:20Z
kibanas.kibana.k8s.elastic.co                  2020-01-29T07:02:25Z
operatorgroups.operators.coreos.com            2020-01-29T06:59:21Z
subscriptions.operators.coreos.com             2020-01-29T06:59:20Z


3. To see more details about a specific CRD, run the kubectl describe CustomResourceDefinition command followed by the name of the CRD:

Shell

kubectl describe CustomResourceDefinition elasticsearches.elasticsearch.k8s.elastic.co


Shell

Name:         elasticsearches.elasticsearch.k8s.elastic.co
Namespace:
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"apiextensions.k8s.io/v1beta1","kind":"CustomResourceDefinition","metadata":{"annotations":{},"creationTimestamp":null,"name...
API Version:  apiextensions.k8s.io/v1
Kind:         CustomResourceDefinition
Metadata:
  Creation Timestamp:  2020-01-29T07:02:24Z
  Generation:          1
  Resource Version:    1074
  Self Link:           /apis/apiextensions.k8s.io/v1/customresourcedefinitions/elasticsearches.elasticsearch.k8s.elastic.co
  UID:                 2332769c-ead3-4208-b6bd-68b8cfcb3692
Spec:
  Conversion:
    Strategy:  None
  Group:       elasticsearch.k8s.elastic.co
  Names:
    Categories:
      elastic
    Kind:       Elasticsearch
    List Kind:  ElasticsearchList
    Plural:     elasticsearches
    Short Names:
      es
    Singular:               elasticsearch
  Preserve Unknown Fields:  true
  Scope:                    Namespaced
  Versions:
    Additional Printer Columns:
      Json Path:    .status.health
      Name:         health
      Type:         string
      Description:  Available nodes
      Json Path:    .status.availableNodes


This output was truncated for brevity.

  4. You can check the progress of the installation with:
Shell

kubectl -n elastic-system logs -f statefulset.apps/elastic-operator


Shell

{"level":"info","@timestamp":"2020-01-27T14:57:57.656Z","logger":"controller-runtime.controller","message":"Starting workers","ver":"1.0.0-6881438d","controller":"license-controller","worker count":1}
{"level":"info","@timestamp":"2020-01-27T14:57:57.757Z","logger":"controller-runtime.controller","message":"Starting EventSource","ver":"1.0.0-6881438d","controller":"elasticsearch-controller","source":"kind source: /, Kind="}
{"level":"info","@timestamp":"2020-01-27T14:57:57.758Z","logger":"controller-runtime.controller","message":"Starting EventSource","ver":"1.0.0-6881438d","controller":"elasticsearch-controller","source":"kind source: /, Kind="}
{"level":"info","@timestamp":"2020-01-27T14:57:57.759Z","logger":"controller-runtime.controller","message":"Starting EventSource","ver":"1.0.0-6881438d","controller":"elasticsearch-controller","source":"channel source: 0xc00003a870"}
{"level":"info","@timestamp":"2020-01-27T14:57:57.759Z","logger":"controller-runtime.controller","message":"Starting Controller","ver":"1.0.0-6881438d","controller":"elasticsearch-controller"}
{"level":"info","@timestamp":"2020-01-27T14:57:57.760Z","logger":"controller-runtime.controller","message":"Starting workers","ver":"1.0.0-6881438d","controller":"elasticsearch-controller","worker count":1}


Note that the above output was truncated for brevity.

  5. List the pods running in the elastic-system namespace with:
Shell

kubectl get pods -n elastic-system


Shell

NAME                 READY   STATUS    RESTARTS   AGE
elastic-operator-0   1/1     Running   0          11m


Make sure the status is Running before moving on.

Deploy an Elasticsearch Cluster

In this section, we'll walk you through the process of deploying an Elasticsearch cluster with the Kubernetes Operator.

  1. Create a file called elastic-search-cluster.yaml with the following content:
YAML

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.5.2
  nodeSets:
  - name: default
    count: 2
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false


Things to note in the above spec:

  • The version parameter specifies the Elasticsearch version the Operator will deploy.
  • The count parameter sets the number of Elasticsearch nodes. Make sure it's not greater than the number of nodes in your Kubernetes cluster.
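Should you later need dedicated node roles, nodeSets accepts multiple entries. A hedged sketch of what that could look like (the nodeSet names are illustrative; the config keys match the ones used in this article's spec):

```yaml
# Fragment of an Elasticsearch spec with role-separated node sets (illustrative).
spec:
  version: 7.5.2
  nodeSets:
  - name: master                  # illustrative: master-eligible nodes only
    count: 1
    config:
      node.master: true
      node.data: false
      node.store.allow_mmap: false
  - name: data                    # illustrative: data-only nodes
    count: 2
    config:
      node.master: false
      node.data: true
      node.store.allow_mmap: false
```

Each nodeSet becomes its own group of pods, so roles can be scaled independently.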
  2. Create a two-node Elasticsearch cluster by entering the following command:
Shell

kubectl apply -f elastic-search-cluster.yaml


Shell

elasticsearch.elasticsearch.k8s.elastic.co/quickstart created


Behind the scenes, the Operator automatically creates and manages the resources needed to achieve the desired state.

  3. You can now run the following command to see the status of the newly created Elasticsearch cluster:
Shell

kubectl get elasticsearch


Shell

NAME         HEALTH    NODES   VERSION   PHASE   AGE
quickstart   unknown           7.5.2             3m51s


Note that the HEALTH status has not been reported yet. It takes a few minutes for the process to complete, after which the HEALTH status will show as green:

Shell

kubectl get elasticsearch


Shell

NAME         HEALTH   NODES   VERSION   PHASE   AGE
quickstart   green    2       7.5.2     Ready   8m47s


4. Check the status of the pods running in your cluster with:

Shell

kubectl get pods --selector='elasticsearch.k8s.elastic.co/cluster-name=quickstart'


Shell

NAME                      READY   STATUS    RESTARTS   AGE
quickstart-es-default-0   1/1     Running   0          9m18s
quickstart-es-default-1   1/1     Running   0          9m18s

Verify Your Elasticsearch Installation

To verify the installation, follow these steps.

  1. The Operator exposes Elasticsearch through a ClusterIP service with a stable cluster-internal IP address. Run the following kubectl get service command to see it:
Shell

kubectl get service quickstart-es-http


Shell

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
quickstart-es-http   ClusterIP   10.103.196.28   <none>        9200/TCP   15m


2. To forward all connections made to localhost:9200 to port 9200 of the pod running the quickstart-es-http service, type the following command in a new terminal window:

Shell

kubectl port-forward service/quickstart-es-http 9200


Shell

Forwarding from 127.0.0.1:9200 -> 9200
Forwarding from [::1]:9200 -> 9200


3. Move back to the first terminal window. The password for the elastic user is stored in a Kubernetes secret. Use the following command to retrieve the password, and save it into an environment variable called PASSWORD:

Shell

PASSWORD=$(kubectl get secret quickstart-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode)
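The pipe through base64 --decode is needed because Kubernetes stores Secret data base64-encoded. The round-trip can be sketched locally without a cluster; the value below is the example password shown later in this article, not a real credential:

```shell
# Mimic the decode step above: encode a known value, then decode it back.
ENCODED=$(printf '%s' 'vrfr6b6v4687hnldrc72kb4q' | base64)   # what the API returns
DECODED=$(printf '%s' "$ENCODED" | base64 --decode)          # what ends up in $PASSWORD
echo "$DECODED"
```

On a real cluster, the encoded value comes from the jsonpath query instead of printf.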


4. At this point, you can use curl to make a request:

Shell

curl -u "elastic:$PASSWORD" -k "https://localhost:9200"


Shell

{
  "name" : "quickstart-es-default-0",
  "cluster_name" : "quickstart",
  "cluster_uuid" : "g0_1Vk9iQoGwFWYdzUqfig",
  "version" : {
    "number" : "7.5.2",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "8bec50e1e0ad29dad5653712cf3bb580cd1afcdf",
    "build_date" : "2020-01-15T12:11:52.313576Z",
    "build_snapshot" : false,
    "lucene_version" : "8.3.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
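In scripts you would normally parse this response with jq, but when only standard tools are available, sed suffices. A sketch against an abridged copy of the response shape shown above:

```shell
# Abridged response (same shape as the real curl output above).
RESPONSE='{"name":"quickstart-es-default-0","version":{"number":"7.5.2"}}'
# Extract the value of the "number" field, tolerating optional whitespace.
VERSION=$(printf '%s' "$RESPONSE" | sed -n 's/.*"number"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p')
echo "Elasticsearch version: $VERSION"
```

Against the live endpoint, the same pipeline works with RESPONSE=$(curl -u "elastic:$PASSWORD" -k "https://localhost:9200").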


Deploy Kibana

This section walks through creating a new Kibana cluster using the Kubernetes Operator.

  1. Create a file called kibana.yaml with the following content:
YAML

apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 7.5.1
  count: 1
  elasticsearchRef:
    name: quickstart
  podTemplate:
    metadata:
      labels:
        foo: kibana
    spec:
      containers:
        - name: kibana
          resources:
            requests:
              memory: 1Gi
              cpu: 0.5
            limits:
              memory: 1Gi
              cpu: 1



2. Enter the following kubectl apply command to create a Kibana cluster: 

Shell

kubectl apply -f kibana.yaml


Shell

kibana.kibana.k8s.elastic.co/quickstart created



3. During the installation, you can check on the progress by running:

Shell

kubectl get kibana


Shell

NAME         HEALTH   NODES   VERSION   AGE
quickstart                    7.5.1     3s


Note that in the above output, the HEALTH status hasn't been reported yet.

Once the installation is completed, the HEALTH status will show as green:

Shell

kubectl get kibana


Shell

NAME         HEALTH   NODES   VERSION   AGE
quickstart   green    1       7.5.1     104s



4. At this point, you can list the Kibana pods by entering the following kubectl get pods command:

Shell

kubectl get pod --selector='kibana.k8s.elastic.co/name=quickstart'


Shell

NAME                             READY   STATUS    RESTARTS   AGE
quickstart-kb-7578b8d8fc-ftvbz   1/1     Running   0          70s


Verify Your Kibana Installation

Follow these steps to verify your Kibana installation.

  1. The Kubernetes Operator has created a ClusterIP service for Kibana. You can retrieve it like this:
Shell

kubectl get service quickstart-kb-http


Shell

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
quickstart-kb-http   ClusterIP   10.98.126.75   <none>        5601/TCP   11m



2. To make the service available on your host, type the following command in a new terminal window:

Shell

kubectl port-forward service/quickstart-kb-http 5601


Shell

Forwarding from 127.0.0.1:5601 -> 5601
Forwarding from [::1]:5601 -> 5601



3. To access Kibana, you need the password for the elastic user. You've already saved it into an environment variable called PASSWORD in Step 3 of the Verify Your Elasticsearch Installation section. You can now display it with:

Shell

echo $PASSWORD


Shell

vrfr6b6v4687hnldrc72kb4q



In our example, the password is vrfr6b6v4687hnldrc72kb4q but yours will be different.

  4. Now, you can access Kibana by pointing your browser to https://localhost:5601.

Kibana welcome screen



5. Log in using the elastic username and the password you retrieved earlier:

Add data to Kibana

Manage Your ECK Cluster with the Kubernetes Operator

In this section, you'll learn how to scale down and up your ECK Cluster.

  1. To scale down, modify the number of nodes running Elasticsearch by specifying nodeSets.count: 1 in your elastic-search-cluster.yaml file. Your spec should look like this:
YAML

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.5.2
  nodeSets:
  - name: default
    count: 1
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false



2. You can apply the spec with:

Shell

kubectl apply -f elastic-search-cluster.yaml


Shell

elasticsearch.elasticsearch.k8s.elastic.co/quickstart configured



Behind the scenes, the Operator makes required changes to reach the desired state. This can take a bit of time.

  3. In the meantime, you can display the status of your cluster by entering the following command:
Shell

kubectl get elasticsearch


Shell

NAME         HEALTH   NODES   VERSION   PHASE             AGE
quickstart   green    1       7.5.2     ApplyingChanges   56m



In the above output, note that there's only one node running Elasticsearch.

  4. You can list the pods running Elasticsearch:
Shell

kubectl get pods --selector='elasticsearch.k8s.elastic.co/cluster-name=quickstart'


Shell

NAME                      READY   STATUS    RESTARTS   AGE
quickstart-es-default-0   1/1     Running   0          58m



5. Similarly, you can scale up your Elasticsearch cluster by specifying nodeSets.count: 2 in your elasticsearch.yaml file:

YAML

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.5.2
  nodeSets:
  - name: default
    count: 2
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false



6. You can monitor the progress with:

Shell

kubectl get elasticsearch


Shell

NAME         HEALTH   NODES   VERSION   PHASE             AGE
quickstart   green    1       7.5.2     ApplyingChanges   61m



Once the desired state is reached, the PHASE column will show as Ready:

Shell

kubectl get elasticsearch


Shell

NAME         HEALTH   NODES   VERSION   PHASE   AGE
quickstart   green    2       7.5.2     Ready   68m
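Rather than re-running kubectl get by hand, the wait can be scripted with a small polling loop. The sketch below stubs kubectl with a hypothetical function so the control flow is runnable anywhere; on a real cluster, delete the stub and the loop queries the live resource:

```shell
# Hypothetical stub: reports ApplyingChanges twice, then Ready (remove on a
# real cluster, where kubectl queries the live Elasticsearch resource).
echo 0 > /tmp/phase_checks
kubectl() {
  n=$(($(cat /tmp/phase_checks) + 1))
  echo "$n" > /tmp/phase_checks
  if [ "$n" -ge 3 ]; then echo "Ready"; else echo "ApplyingChanges"; fi
}

# Poll the resource's phase until the Operator reports Ready.
PHASE=""
while [ "$PHASE" != "Ready" ]; do
  PHASE=$(kubectl get elasticsearch quickstart -o 'jsonpath={.status.phase}')
done
echo "phase=$PHASE after $(cat /tmp/phase_checks) checks"
```

In production you would also add a sleep between iterations and a timeout so the loop cannot spin forever.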



Congratulations, you've covered a lot of ground, and now you are familiar with the basic principles behind the Kubernetes Operator! In a future post, we'll walk through the process of writing our own Operator. Until then, here are some more Kubernetes and Docker best practices for managing and deploying containers.

Thanks for reading!

Topics:
cloud, cloud native, devops, kind, kubernetes, kubernetes architecture, kubernetes operator

Published at DZone with permission of Sudip Sengupta . See the original article here.

Opinions expressed by DZone contributors are their own.
