EFK Stack on Kubernetes (Part 1)

By Sudip Sengupta · Sep. 16, 20 · Tutorial


This is the first post in a two-part series in which we will set up production-grade Kubernetes logging, both for applications deployed in the cluster and for the cluster itself. We will use Elasticsearch as the logging backend. The Elasticsearch setup will be highly scalable and fault tolerant.

1. Deployment Architecture

[Image: Deployment architecture]

  • Elasticsearch data node pods are deployed as a StatefulSet with a headless service to provide stable network identities.
  • Elasticsearch master node pods are deployed as a ReplicaSet with a headless service, which helps with auto-discovery.
  • Elasticsearch client node pods are deployed as a ReplicaSet with an internal service, which allows access to the data nodes for R/W requests.
  • Kibana and ElasticHQ pods are deployed as ReplicaSets with services accessible outside the Kubernetes cluster but still internal to your subnetwork (not publicly exposed unless required). An HPA (Horizontal Pod Autoscaler) is deployed for the client nodes to enable auto-scaling under high load.

Important things to keep in mind:

  1. Setting the ES_JAVA_OPTS env variable.
  2. Setting the CLUSTER_NAME env variable.
  3. Setting the NUMBER_OF_MASTERS env variable (to avoid the split-brain problem) for the master deployment. For three masters, we set it to 2 (see the quorum sketch just below).
  4. Setting correct pod anti-affinity policies among similar pods to ensure high availability if a worker node fails.
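
As a quick sanity check for item 3: with ES 6.x Zen discovery, the quorum of master-eligible nodes is floor(N / 2) + 1. A minimal sketch (N = 3 matches the master replica count in the manifest below):

Shell

# Quorum of master-eligible nodes needed to avoid split-brain: floor(N / 2) + 1
# For N = 3 masters this yields 2, which is what NUMBER_OF_MASTERS is set to
echo $(( 3 / 2 + 1 ))   # prints 2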

Let’s jump right into deploying these services to our GKE cluster.

1.1 Deployment and Headless Service for Master Nodes

Deploy the following manifest to create master nodes and the headless service.

YAML

apiVersion: v1
kind: Namespace
metadata:
  name: elasticsearch
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: es-master
  namespace: elasticsearch
  labels:
    component: elasticsearch
    role: master
spec:
  replicas: 3
  template:
    metadata:
      labels:
        component: elasticsearch
        role: master
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: role
                  operator: In
                  values:
                  - master
              topologyKey: kubernetes.io/hostname
      initContainers:
      - name: init-sysctl
        image: busybox:1.27.2
        command:
        - sysctl
        - -w
        - vm.max_map_count=262144
        securityContext:
          privileged: true
      containers:
      - name: es-master
        image: quay.io/pires/docker-elasticsearch-kubernetes:6.2.4
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: CLUSTER_NAME
          value: my-es
        - name: NUMBER_OF_MASTERS
          value: "2"
        - name: NODE_MASTER
          value: "true"
        - name: NODE_INGEST
          value: "false"
        - name: NODE_DATA
          value: "false"
        - name: HTTP_ENABLE
          value: "false"
        - name: ES_JAVA_OPTS
          value: -Xms256m -Xmx256m
        - name: PROCESSORS
          valueFrom:
            resourceFieldRef:
              resource: limits.cpu
        resources:
          limits:
            cpu: 2
        ports:
        - containerPort: 9300
          name: transport
        volumeMounts:
        - name: storage
          mountPath: /data
      volumes:
      - emptyDir:
          medium: ""
        name: "storage"
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-discovery
  namespace: elasticsearch
  labels:
    component: elasticsearch
    role: master
spec:
  selector:
    component: elasticsearch
    role: master
  ports:
  - name: transport
    port: 9300
    protocol: TCP
  clusterIP: None


If you follow the logs of any of the master-node pods, you will witness the master election, in which the master-node pods choose which one leads the group. You will also see log entries as new data and client nodes are added.

Shell

root$ kubectl -n elasticsearch logs -f po/es-master-594b58b86c-9jkj2 | grep ClusterApplierService
[2018-10-21T07:41:54,958][INFO ][o.e.c.s.ClusterApplierService] [es-master-594b58b86c-9jkj2] detected_master {es-master-594b58b86c-bj7g7}{1aFT97hQQ7yiaBc2CYShBA}{Q3QzlaG3QGazOwtUl7N75Q}{10.9.126.87}{10.9.126.87:9300}, added {{es-master-594b58b86c-lfpps}{wZQmXr5fSfWisCpOHBhaMg}{50jGPeKLSpO9RU_HhnVJCA}{10.9.124.81}{10.9.124.81:9300},{es-master-594b58b86c-bj7g7}{1aFT97hQQ7yiaBc2CYShBA}{Q3QzlaG3QGazOwtUl7N75Q}{10.9.126.87}{10.9.126.87:9300},}, reason: apply cluster state (from master [master {es-master-594b58b86c-bj7g7}{1aFT97hQQ7yiaBc2CYShBA}{Q3QzlaG3QGazOwtUl7N75Q}{10.9.126.87}{10.9.126.87:9300} committed version [3]])


As seen above, the es-master pod named es-master-594b58b86c-bj7g7 was elected as the leader, and the other two pods were added to the cluster. The headless service named elasticsearch-discovery is set by default as an env variable in the Docker image and is used for discovery among the nodes. This can, of course, be overridden.
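
For example, a minimal sketch of such an override, assuming the pires image reads the discovery endpoint from a DISCOVERY_SERVICE environment variable (the custom service name here is hypothetical):

YAML

# Hypothetical entry for the es-master container's env list
- name: DISCOVERY_SERVICE
  value: my-custom-discovery   # a differently named headless service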

1.2 Data Nodes Deployment

We will use the following manifest to deploy the StatefulSet and headless service for the data nodes:

YAML

apiVersion: v1
kind: Namespace
metadata:
  name: elasticsearch
---
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  fsType: xfs
allowVolumeExpansion: true
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: es-data
  namespace: elasticsearch
  labels:
    component: elasticsearch
    role: data
spec:
  serviceName: elasticsearch-data
  replicas: 3
  template:
    metadata:
      labels:
        component: elasticsearch
        role: data
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: role
                  operator: In
                  values:
                  - data
              topologyKey: kubernetes.io/hostname
      initContainers:
      - name: init-sysctl
        image: busybox:1.27.2
        command:
        - sysctl
        - -w
        - vm.max_map_count=262144
        securityContext:
          privileged: true
      containers:
      - name: es-data
        image: quay.io/pires/docker-elasticsearch-kubernetes:6.2.4
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: CLUSTER_NAME
          value: my-es
        - name: NODE_MASTER
          value: "false"
        - name: NODE_INGEST
          value: "false"
        - name: HTTP_ENABLE
          value: "false"
        - name: ES_JAVA_OPTS
          value: -Xms256m -Xmx256m
        - name: PROCESSORS
          valueFrom:
            resourceFieldRef:
              resource: limits.cpu
        resources:
          limits:
            cpu: 2
        ports:
        - containerPort: 9300
          name: transport
        volumeMounts:
        - name: storage
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: storage
      annotations:
        volume.beta.kubernetes.io/storage-class: "fast"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: fast
      resources:
        requests:
          storage: 10Gi
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-data
  namespace: elasticsearch
  labels:
    component: elasticsearch
    role: data
spec:
  ports:
  - port: 9300
    name: transport
  clusterIP: None
  selector:
    component: elasticsearch
    role: data


In the case of the data nodes, the headless service provides stable network identities to the nodes and also helps with data transfer among them. It is important to format the persistent volume before attaching it to the pod. This can be done by specifying the volume type when creating the storage class. We can also set a flag to allow volume expansion on the fly. More can be read about that here.

YAML

...
parameters:
  type: pd-ssd
  fsType: xfs
allowVolumeExpansion: true
...

1.3 Client Nodes Deployment

We will use the following manifest to create the Deployment and external service for the client nodes:

YAML

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: es-client
  namespace: elasticsearch
  labels:
    component: elasticsearch
    role: client
spec:
  replicas: 2
  template:
    metadata:
      labels:
        component: elasticsearch
        role: client
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: role
                  operator: In
                  values:
                  - client
              topologyKey: kubernetes.io/hostname
      initContainers:
      - name: init-sysctl
        image: busybox:1.27.2
        command:
        - sysctl
        - -w
        - vm.max_map_count=262144
        securityContext:
          privileged: true
      containers:
      - name: es-client
        image: quay.io/pires/docker-elasticsearch-kubernetes:6.2.4
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: CLUSTER_NAME
          value: my-es
        - name: NODE_MASTER
          value: "false"
        - name: NODE_DATA
          value: "false"
        - name: HTTP_ENABLE
          value: "true"
        - name: ES_JAVA_OPTS
          value: -Xms256m -Xmx256m
        - name: NETWORK_HOST
          value: _site_,_lo_
        - name: PROCESSORS
          valueFrom:
            resourceFieldRef:
              resource: limits.cpu
        resources:
          limits:
            cpu: 1
        ports:
        - containerPort: 9200
          name: http
        - containerPort: 9300
          name: transport
        volumeMounts:
        - name: storage
          mountPath: /data
      volumes:
      - emptyDir:
          medium: ""
        name: storage
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: elasticsearch
  annotations:
    # Keeps the load balancer internal to the VPC (see the note below)
    cloud.google.com/load-balancer-type: "Internal"
  labels:
    component: elasticsearch
    role: client
spec:
  selector:
    component: elasticsearch
    role: client
  ports:
  - name: http
    port: 9200
  type: LoadBalancer


The purpose of the service deployed here is to access the ES cluster from outside the Kubernetes cluster while keeping it internal to our subnet. The annotation “cloud.google.com/load-balancer-type: Internal” on the service metadata ensures this.

However, if the application reading from or writing to our ES cluster is deployed within the cluster, then the Elasticsearch service can be accessed at http://elasticsearch.elasticsearch:9200.
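
The short name works because elasticsearch.elasticsearch is just <service>.<namespace> resolved by cluster DNS; the fully qualified form is equivalent. A quick sketch, run from any pod in the cluster:

Shell

# <service>.<namespace>.svc.cluster.local is the fully qualified form of the same endpoint
curl http://elasticsearch.elasticsearch.svc.cluster.local:9200/_cluster/health?pretty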

Once all components are deployed, we should verify the following:

  1. Elasticsearch deployment from inside the Kubernetes cluster using an Ubuntu container.
Shell

root$ kubectl run my-shell --rm -i --tty --image ubuntu -- bash
root@my-shell-68974bb7f7-pj9x6:/# curl http://elasticsearch.elasticsearch:9200/_cluster/health?pretty
{
  "cluster_name" : "my-es",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 7,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}


2. Elasticsearch deployment from outside the cluster, using the GCP internal load balancer IP (in this case 10.9.120.8). When we check the health using curl http://10.9.120.8:9200/_cluster/health?pretty, the output should be the same as above.

3. Anti-affinity rules for our ES pods

Shell

root$ kubectl -n elasticsearch get pods -o wide
NAME                         READY     STATUS    RESTARTS   AGE       IP            NODE
es-client-69b84b46d8-kr7j4   1/1       Running   0          10m       10.8.14.52   gke-cluster1-pool1-d2ef2b34-t6h9
es-client-69b84b46d8-v5pj2   1/1       Running   0          10m       10.8.15.53   gke-cluster1-pool1-42b4fbc4-cncn
es-data-0                    1/1       Running   0          12m       10.8.16.58   gke-cluster1-pool1-4cfd808c-kpx1
es-data-1                    1/1       Running   0          12m       10.8.15.52   gke-cluster1-pool1-42b4fbc4-cncn
es-master-594b58b86c-9jkj2   1/1       Running   0          18m       10.8.15.51   gke-cluster1-pool1-42b4fbc4-cncn
es-master-594b58b86c-bj7g7   1/1       Running   0          18m       10.8.16.57   gke-cluster1-pool1-4cfd808c-kpx1
es-master-594b58b86c-lfpps   1/1       Running   0          18m       10.8.14.51   gke-cluster1-pool1-d2ef2b34-t6h9

1.4 Scaling Considerations

We can deploy autoscalers for our client nodes based on CPU thresholds. A sample HPA for the client nodes might look like this:

YAML

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: es-client
  namespace: elasticsearch
spec:
  maxReplicas: 5
  minReplicas: 2
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: es-client
  targetCPUUtilizationPercentage: 80


Whenever the autoscaler kicks in, we can watch the new client-node pods being added to the cluster by observing the logs of any of the master-node pods.
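
Another way to follow the scaling events is to watch the HPA object itself; a minimal sketch using the HPA name from the manifest above:

Shell

# Watch replica counts and CPU utilization as the autoscaler reacts to load
kubectl -n elasticsearch get hpa es-client --watch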

In the case of the data-node pods, all we have to do is increase the number of replicas using the Kubernetes dashboard or the GKE console. The newly created data node will automatically join the cluster and start replicating data from the other nodes. Master-node pods do not require autoscaling, as they only store cluster-state information. If you add more master nodes, make sure the cluster does not end up with an even number of them, and update the NUMBER_OF_MASTERS environment variable accordingly.
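
Scaling the data tier can also be done from the command line; a sketch using the StatefulSet name from the manifest above:

Shell

# Add a fourth data node; it will join the cluster and start receiving shards
kubectl -n elasticsearch scale statefulset es-data --replicas=4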

Shell

# Check logs of the es-master leader pod
root$ kubectl -n elasticsearch logs po/es-master-594b58b86c-bj7g7 | grep ClusterApplierService
[2018-10-21T07:41:53,731][INFO ][o.e.c.s.ClusterApplierService] [es-master-594b58b86c-bj7g7] new_master {es-master-594b58b86c-bj7g7}{1aFT97hQQ7yiaBc2CYShBA}{Q3QzlaG3QGazOwtUl7N75Q}{10.9.126.87}{10.9.126.87:9300}, added {{es-master-594b58b86c-lfpps}{wZQmXr5fSfWisCpOHBhaMg}{50jGPeKLSpO9RU_HhnVJCA}{10.9.124.81}{10.9.124.81:9300},}, reason: apply cluster state (from master [master {es-master-594b58b86c-bj7g7}{1aFT97hQQ7yiaBc2CYShBA}{Q3QzlaG3QGazOwtUl7N75Q}{10.9.126.87}{10.9.126.87:9300} committed version [1] source [zen-disco-elected-as-master ([1] nodes joined)[{es-master-594b58b86c-lfpps}{wZQmXr5fSfWisCpOHBhaMg}{50jGPeKLSpO9RU_HhnVJCA}{10.9.124.81}{10.9.124.81:9300}]]])

[2018-10-21T07:41:55,162][INFO ][o.e.c.s.ClusterApplierService] [es-master-594b58b86c-bj7g7] added {{es-master-594b58b86c-9jkj2}{x9Prp1VbTq6_kALQVNwIWg}{7NHUSVpuS0mFDTXzAeKRcg}{10.9.125.81}{10.9.125.81:9300},}, reason: apply cluster state (from master [master {es-master-594b58b86c-bj7g7}{1aFT97hQQ7yiaBc2CYShBA}{Q3QzlaG3QGazOwtUl7N75Q}{10.9.126.87}{10.9.126.87:9300} committed version [3] source [zen-disco-node-join[{es-master-594b58b86c-9jkj2}{x9Prp1VbTq6_kALQVNwIWg}{7NHUSVpuS0mFDTXzAeKRcg}{10.9.125.81}{10.9.125.81:9300}]]])

[2018-10-21T07:48:02,485][INFO ][o.e.c.s.ClusterApplierService] [es-master-594b58b86c-bj7g7] added {{es-data-0}{SAOhUiLiRkazskZ_TC6EBQ}{qirmfVJBTjSBQtHZnz-QZw}{10.9.126.88}{10.9.126.88:9300},}, reason: apply cluster state (from master [master {es-master-594b58b86c-bj7g7}{1aFT97hQQ7yiaBc2CYShBA}{Q3QzlaG3QGazOwtUl7N75Q}{10.9.126.87}{10.9.126.87:9300} committed version [4] source [zen-disco-node-join[{es-data-0}{SAOhUiLiRkazskZ_TC6EBQ}{qirmfVJBTjSBQtHZnz-QZw}{10.9.126.88}{10.9.126.88:9300}]]])

[2018-10-21T07:48:21,984][INFO ][o.e.c.s.ClusterApplierService] [es-master-594b58b86c-bj7g7] added {{es-data-1}{fiv5Wh29TRWGPumm5ypJfA}{EXqKGSzIQquRyWRzxIOWhQ}{10.9.125.82}{10.9.125.82:9300},}, reason: apply cluster state (from master [master {es-master-594b58b86c-bj7g7}{1aFT97hQQ7yiaBc2CYShBA}{Q3QzlaG3QGazOwtUl7N75Q}{10.9.126.87}{10.9.126.87:9300} committed version [5] source [zen-disco-node-join[{es-data-1}{fiv5Wh29TRWGPumm5ypJfA}{EXqKGSzIQquRyWRzxIOWhQ}{10.9.125.82}{10.9.125.82:9300}]]])

[2018-10-21T07:50:51,245][INFO ][o.e.c.s.ClusterApplierService] [es-master-594b58b86c-bj7g7] added {{es-client-69b84b46d8-v5pj2}{MMjA_tlTS7ux-UW44i0osg}{rOE4nB_jSmaIQVDZCjP8Rg}{10.9.125.83}{10.9.125.83:9300},}, reason: apply cluster state (from master [master {es-master-594b58b86c-bj7g7}{1aFT97hQQ7yiaBc2CYShBA}{Q3QzlaG3QGazOwtUl7N75Q}{10.9.126.87}{10.9.126.87:9300} committed version [6] source [zen-disco-node-join[{es-client-69b84b46d8-v5pj2}{MMjA_tlTS7ux-UW44i0osg}{rOE4nB_jSmaIQVDZCjP8Rg}{10.9.125.83}{10.9.125.83:9300}]]])


The logs of the leading master pod clearly show when each node joins the cluster, which is extremely useful when debugging issues.

2. Deploying Kibana and ES-HQ

Kibana is a simple tool for visualizing ES data, and ES-HQ helps with the administration and monitoring of Elasticsearch clusters. For our Kibana and ES-HQ deployment, we keep the following things in mind:

  • We must provide the name of the ES cluster as an environment variable to the Docker image.
  • The service for accessing the Kibana/ES-HQ deployment is internal to our organization only, i.e., no public IP is created. We will need to use a GCP internal load balancer.

2.1 Kibana Deployment

We will use the following manifest to create the Kibana Deployment and Service:

YAML

apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: logging
  name: kibana
  labels:
    component: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      component: kibana
  template:
    metadata:
      labels:
        component: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana-oss:6.2.2
        env:
        - name: CLUSTER_NAME
          value: my-es
        - name: ELASTICSEARCH_URL
          value: http://elasticsearch.elasticsearch:9200
        resources:
          limits:
            cpu: 200m
          requests:
            cpu: 100m
        ports:
        - containerPort: 5601
          name: http
---
apiVersion: v1
kind: Service
metadata:
  namespace: logging
  name: kibana
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
  labels:
    component: kibana
spec:
  selector:
    component: kibana
  ports:
  - name: http
    port: 5601
  type: LoadBalancer

2.2 ES-HQ Deployment

We will use the following manifest to create the ES-HQ Deployment and Service:

YAML

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: es-hq
  namespace: elasticsearch
  labels:
    component: elasticsearch
    role: hq
spec:
  replicas: 1
  template:
    metadata:
      labels:
        component: elasticsearch
        role: hq
    spec:
      containers:
      - name: es-hq
        image: elastichq/elasticsearch-hq:release-v3.4.0
        env:
        - name: HQ_DEFAULT_URL
          value: http://elasticsearch:9200
        resources:
          limits:
            cpu: 0.5
        ports:
        - containerPort: 5000
          name: http
---
apiVersion: v1
kind: Service
metadata:
  name: hq
  namespace: elasticsearch
  annotations:
    # Keeps the ES-HQ endpoint on an internal load balancer (no public IP)
    cloud.google.com/load-balancer-type: "Internal"
  labels:
    component: elasticsearch
    role: hq
spec:
  selector:
    component: elasticsearch
    role: hq
  ports:
  - name: http
    port: 5000
  type: LoadBalancer


We can access both of these services using the newly created internal load balancers.
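
To look up the IPs assigned to these internal load balancers (service names and namespaces as defined in the manifests above):

Shell

# The EXTERNAL-IP column holds the internal load balancer address
kubectl -n logging get svc kibana
kubectl -n elasticsearch get svc hq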

Go to http://<External-Ip-Kibana-Service>/app/kibana#/home?_g=()

[Image: Kibana dashboard]


Go to http://<External-Ip-ES-Hq-Service>/#!/clusters/my-es. 

[Image: ElasticHQ dashboard for cluster monitoring and management]

3. Conclusion

This concludes the deployment of the Elasticsearch backend for logging. The Elasticsearch cluster we deployed can be used by other applications as well. The client nodes should scale automatically under high load, and data nodes can be added by incrementing the replica count in the StatefulSet. We will also have to tweak a few env vars, but that is fairly straightforward. In the next post, we will learn about deploying a Filebeat DaemonSet to send logs to the Elasticsearch backend. Stay tuned :)
