
Running Couchbase Autonomous Operator 2.0 With Prometheus (Part 2)


In this article, see part 2 on how to run Couchbase Autonomous Operator 2.0 with Prometheus.


Prerequisites

As mentioned in Part 1 of this blog series, we need to run Prometheus and Grafana in our Kubernetes environment on Amazon EKS. The recommended way is to use kube-prometheus, an open-source project. Not only does this simplify the deployment, it also adds several more components, such as the Prometheus Node Exporter, which monitors Linux host metrics and is typically used in a Kubernetes environment.

Clone the https://github.com/coreos/kube-prometheus repository from GitHub, but do not create any manifests just yet.
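If you have not cloned it before, the clone step is simply (assuming git is installed locally):

PowerShell

git clone https://github.com/coreos/kube-prometheus.git
cd kube-prometheus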

Components included in this package:

PowerShell

kube-prometheus git:(master) $ ls
DCO         README.md   examples        jsonnet scripts
LICENSE     build.sh    experimental    jsonnetfile.json
sync-to-internal-registry.jsonnet
Makefile    code-of-conduct.md          go.mod                           jsonnetfile.lock.json  test.sh
NOTICE      docs        go.sum          kustomization.yaml
tests       OWNERS      example.jsonnet hack
manifests


Note:

This tutorial assumes that the manifests which bring up the relevant resources for the Prometheus Operator are still located in the `manifests` folder.

Please adjust accordingly if changes have been made since, as the repository is experimental and subject to change.

Create the Couchbase `ServiceMonitor`

The `ServiceMonitor` tells Prometheus to monitor a `Service` resource that defines the endpoints Prometheus scrapes for incoming metrics provided by the couchbase-exporter. This file, couchbase-serviceMonitor.yaml, should be placed in the kube-prometheus/manifests directory.

YAML

kube-prometheus/manifests git:(master) $ cat couchbase-serviceMonitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: couchbase
  namespace: default # <1>
  labels:
    app: couchbase
spec:
  endpoints:
  - port: metrics       # <2>
    interval: 5s        # <3>
  namespaceSelector:
    matchNames:
    - default # <4>
  selector:
    matchLabels:
      app: couchbase # <5>



Legend:

  1. You may wish to include the Couchbase `ServiceMonitor` in the `monitoring` namespace along with the other `ServiceMonitors`. For the purposes of this tutorial, we have left it in the `default` namespace for ease of use.
  2. The `port` can be a string value and refers to the service port by name, so it will work for different port numbers as long as the name matches.
  3. `interval` tells Prometheus how often to scrape the endpoint.
  4. Here we want to match the namespace of the `Service` we will be creating in the next step. Note that the namespace our `Service` runs in must be the same as that of the Couchbase cluster we wish to scrape metrics from.
  5. Similar to the `namespaceSelector`, this is a simple label selector that selects the service we will be creating.

Create the Couchbase Metrics `Service`

The `Service` defines the `metrics` port that we referenced in our `ServiceMonitor` at `spec.endpoints[0].port` earlier. This file, couchbase-service.yaml, should be placed in the kube-prometheus/manifests directory.

YAML

kube-prometheus/manifests git:(master) $ cat couchbase-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: couchbase-metrics
  namespace: default # <1>
  labels:
    app: couchbase
spec:
  ports:
  - name: metrics
    port: 9091 # <2>
    protocol: TCP
  selector:
    app: couchbase
    couchbase_cluster: cb-example # <3>



Legend:

  1. As mentioned previously, make sure that the `Service` is in the same namespace as the Couchbase cluster that you wish to scrape metrics from, otherwise no pods will be selected and no endpoints will be displayed in Prometheus Targets. Also make sure this value matches up with `spec.namespaceSelector` in the `ServiceMonitor`.

  2. Keep this port at its default value of 9091, as this is the port the Couchbase Exporter exposes metrics on.

  3. A further level of granularity can be added to your selector in case you have more than one Couchbase cluster running in the same namespace.
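If you want to confirm which pods the selector actually matches, a quick check (not part of the original walkthrough) is to list the pods carrying the same labels:

PowerShell

# Hypothetical verification step: these labels mirror the Service's spec.selector above.
kubectl get pods -n default -l app=couchbase,couchbase_cluster=cb-example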

Prometheus Dynamic Service Discovery

Prometheus discovers the monitoring endpoints dynamically by matching the labels on the `ServiceMonitor` to the `Service`, which in turn selects the pods exposing the metrics endpoint, port 9091 in our case.
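Another optional check, not from the original post: inspect the Endpoints object behind the metrics Service. If it is empty, the label selectors do not match and Prometheus will have nothing to scrape.

PowerShell

# The Endpoints object should list one address per Couchbase pod on port 9091.
kubectl get endpoints couchbase-metrics -n default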

Create the Manifests

Follow the specific command given in the GitHub README to bring up our created resources along with the other provided default manifests.

Components such as Prometheus, Alertmanager, Node Exporter, and Grafana should then start up, and we can confirm this by inspecting the pods in the `monitoring` namespace.

Create the Kubernetes Namespace and CRDs

PowerShell

kube-prometheus git:(master) $ kubectl create -f manifests/setup
namespace/monitoring created
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com created
clusterrole.rbac.authorization.k8s.io/prometheus-operator created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created
deployment.apps/prometheus-operator created
service/prometheus-operator created
serviceaccount/prometheus-operator created



Wait a few minutes before the next step; it may be necessary to run the command more than once for all components to be created successfully.
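Instead of re-running the command blindly, you can poll until the ServiceMonitor CRD is registered, along the lines of the loop suggested in the kube-prometheus README (a sketch; adjust to taste):

PowerShell

# Poll until the servicemonitors CRD has been registered and can be listed.
until kubectl get servicemonitors --all-namespaces ; do date; sleep 1; echo ""; done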

Create the Remaining Resources

PowerShell

kube-prometheus git:(master) $ kubectl create -f manifests/
alertmanager.monitoring.coreos.com/main created
secret/alertmanager-main created
service/alertmanager-main created
serviceaccount/alertmanager-main created
servicemonitor.monitoring.coreos.com/alertmanager created
service/couchbase-metrics created
servicemonitor.monitoring.coreos.com/couchbase created
...
servicemonitor.monitoring.coreos.com/kubelet created



Check the Monitoring Namespace

Components such as Prometheus, Alertmanager, Node Exporter, and Grafana should now be running; confirm this by inspecting the pods in the `monitoring` namespace.

PowerShell

$ kubectl get pods -n monitoring
NAME                                   READY   STATUS    RESTARTS   AGE
alertmanager-main-0                    2/2     Running   0          69m
alertmanager-main-1                    2/2     Running   0          69m
alertmanager-main-2                    2/2     Running   0          69m
grafana-75d8c76bdd-4l284               1/1     Running   0          69m
kube-state-metrics-54dc88ccd8-nntts    3/3     Running   0          69m
node-exporter-pk65z                    2/2     Running   0          69m
node-exporter-s9k9n                    2/2     Running   0          69m
node-exporter-vhjpw                    2/2     Running   0          69m
prometheus-adapter-8667948d79-vfcbv    1/1     Running   0          69m
prometheus-k8s-0                       3/3     Running   1          69m
prometheus-k8s-1                       3/3     Running   0          69m
prometheus-operator-696554666f-9cnnv   2/2     Running   0          89m



Check that Our `ServiceMonitor` Has Been Created

PowerShell

$ kubectl get servicemonitors --all-namespaces
NAMESPACE    NAME                      AGE
default      couchbase                 2m33s
monitoring   alertmanager              2m33s
monitoring   coredns                   2m22s
monitoring   grafana                   2m26s
monitoring   kube-apiserver            2m22s
monitoring   kube-controller-manager   2m22s
monitoring   kube-scheduler            2m21s
monitoring   kube-state-metrics        2m25s
monitoring   kubelet                   2m21s
monitoring   node-exporter             2m25s
monitoring   prometheus                2m22s
monitoring   prometheus-operator       2m23s



Check that Our `Service` Has Been Created

PowerShell

$ kubectl get svc --all-namespaces
NAMESPACE     NAME                           PORT(S)
default       cb-example                     8091/TCP,8092/TCP,8093/TCP,
default       cb-example-srv                 11210/TCP,11207/TCP
default       couchbase-metrics              9091/TCP
default       couchbase-operator             8080/TCP,8383/TCP
default       couchbase-operator-admission   443/TCP
default       kubernetes                     443/TCP
kube-system   kube-dns                       53/UDP,53/TCP
kube-system   kubelet                        10250/TCP,10255/TCP,4194/TCP,...
monitoring    alertmanager-main              9093/TCP
monitoring    alertmanager-operated          9093/TCP,9094/TCP,9094/UDP
monitoring    grafana                        3000/TCP
monitoring    kube-state-metrics             8443/TCP,9443/TCP
monitoring    node-exporter                  9100/TCP
monitoring    prometheus-adapter             443/TCP
monitoring    prometheus-k8s                 9090/TCP
monitoring    prometheus-operated            9090/TCP
monitoring    prometheus-operator            8443/TCP



In the above output, we see not only the services but also their ports. We will use this information to forward these ports, as we did with the Couchbase Administration UI, in order to access these services.

To check that all is working correctly with the Prometheus Operator deployment, run the following command to view the logs:

PowerShell

$ kubectl logs -f deployments/prometheus-operator -n monitoring prometheus-operator



Port Forwarding

We already forwarded the Couchbase Admin UI port 8091 from one Couchbase node earlier, but it is included again here, this time from the service.

In addition to that port, we actually only need access to the Grafana service on port 3000. However, let's forward the Prometheus service on port 9090 as well, so we can take a look at all the metrics from the different exporters and try a little PromQL, the Prometheus Query Language.

The three ports above should be sufficient, but there is some additional value in looking at the metrics from each individual service as well. The Couchbase exporter exposes the Couchbase metrics on port 9091, so we can forward that port too. Note that you really only need access to Grafana.

PowerShell

kubectl --namespace default port-forward svc/cb-example 8091 &
kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090 &
kubectl --namespace monitoring port-forward svc/grafana 3000 &
kubectl --namespace monitoring port-forward svc/alertmanager-main 9093 &
kubectl --namespace monitoring port-forward svc/node-exporter 9100 &
kubectl --namespace default port-forward svc/couchbase-metrics 9091 &



Check out Prometheus Targets

Access: http://localhost:9090/targets

All Prometheus targets should be UP. There are quite a few of them, since kube-prometheus deploys a number of exporters.
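If you prefer the command line, the same information is available from the Prometheus HTTP API; this is an optional check that assumes jq is installed:

PowerShell

# List each active target's job and health ("up" means the scrape is working).
curl -s http://localhost:9090/api/v1/targets | jq '.data.activeTargets[] | {job: .labels.job, health: .health}'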

Check out the Raw Couchbase Metrics

Access: http://localhost:9091/metrics

Plain Text

# HELP cbbucketinfo_basic_dataused_bytes basic_dataused
# TYPE cbbucketinfo_basic_dataused_bytes gauge
cbbucketinfo_basic_dataused_bytes{bucket="pillow"} 1.84784896e+08
cbbucketinfo_basic_dataused_bytes{bucket="travel-sample"} 1.51648256e+08
# HELP cbbucketinfo_basic_diskfetches basic_diskfetches
# TYPE cbbucketinfo_basic_diskfetches gauge
cbbucketinfo_basic_diskfetches{bucket="pillow"} 0
cbbucketinfo_basic_diskfetches{bucket="travel-sample"} 0
# HELP cbbucketinfo_basic_diskused_bytes basic_diskused
# TYPE cbbucketinfo_basic_diskused_bytes gauge
cbbucketinfo_basic_diskused_bytes{bucket="pillow"} 1.98967788e+08
cbbucketinfo_basic_diskused_bytes{bucket="travel-sample"} 1.91734038e+08



This output is useful as you can rapidly search through the list.
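For example, the raw endpoint can also be filtered from the command line; this is just an illustrative one-liner, with the bucket name taken from the pillow-fight example:

PowerShell

# Show only the metric series for the "pillow" bucket.
curl -s http://localhost:9091/metrics | grep 'bucket="pillow"'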

Try a Basic PromQL Query

In the above UI, click on Graph first.

The drop-down box gives you the list of metrics scraped. This is the complete list of all the metrics scraped by all the exporters, and that's a pretty daunting list. One way to narrow the list down to just the Couchbase metrics is, of course, to access the 9091 endpoint as previously described.
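As a first query to try, the following sums the current item count across nodes and groups it by bucket; this is just a sketch, assuming the `couchbase-metrics` service name used earlier:

Plain Text

sum(cbpernodebucket_curr_items{service="couchbase-metrics"}) by (bucket)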

Check out Grafana

Access: http://localhost:3000

The default user ID and password are admin/admin.

The kube-prometheus deployment of Grafana already has the Prometheus data source defined and a large set of default dashboards. Let's check out the default Node dashboard.

Build a Sample Grafana Dashboard to Monitor Couchbase Metrics

We will not build a complete dashboard, but rather a small sample with a few panels to show how it's done. This dashboard will monitor the number of items in a bucket and the number of GET and SET operations.

Note: Please have the pillow-fight application running as described in Part 1. This will generate the operations which we are interested in monitoring.

Prometheus Metrics

Access: http://localhost:9090/graph

We are interested in the current items in a bucket. There are a couple of metrics that supply that, cluster-wide and per node. Let's use the per-node metric and allow Prometheus to handle all aggregations, as per best practice. Another advantage is that we can show the current items in the bucket on a per-node basis, just to check whether our data set is skewed.

Let's take a look at one element:

Plain Text

cbpernodebucket_curr_items{bucket="pillow",endpoint="metrics",instance="192.168.2.93:9091",job="couchbase-metrics",namespace="default",node="cb-example-0000.cb-example.default.svc:8091",pod="cb-example-0000",service="couchbase-metrics"}




In the above example, we are interested in these labels: `bucket`, part of `node` (the middle part, cb-example, which is the cluster name), and `pod`. We are also interested in `service` in order to filter. These will help us design a dashboard where we can view the metrics by bucket, node, or cluster.
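For instance, to eyeball whether the data is evenly distributed, grouping by both bucket and pod is enough; the query below is a sketch using the labels just described:

Plain Text

sum(cbpernodebucket_curr_items{service="couchbase-metrics"}) by (bucket, pod)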

The Sample Dashboard

Let's create a new blank sample dashboard.

Adding Variables

Since we want the metrics per bucket, node, and cluster, let's add these variables so that they can be selected in a drop box.

The above example creates the variable `bucket`. Note the Query and Regex expressions. Let's create two more so that we have three variables.

The Query does not change for these three, but here are the Regex expressions:

Plain Text

Query: {service="couchbase-metrics"}
$node: Regex= .*pod="(.*?)".*
$bucket: Regex= .*bucket="(.*?)".*
$cluster: Regex= .*node=\".*\.(.*)\..*\..*:8091\".*



Creating a Panel

Create 3 Panels for Current Items, GETs, and SETs

You can duplicate the first panel and edit the copies. These are the queries:

Plain Text

Items Panel: sum(cbpernodebucket_curr_items{bucket=~"$bucket",pod=~"$node"}) by (bucket)
GETs Panel: sum(cbpernodebucket_cmd_get{bucket=~"$bucket",pod=~"$node"}) by (bucket)
SETs Panel: sum(cbpernodebucket_cmd_set{bucket=~"$bucket",pod=~"$node"}) by (bucket)



The Completed Sample Grafana Dashboard

This is how our final sample dashboard looks.

Clean Up

Finally, clean up your deployment:

PowerShell

kube-prometheus git:(master)$ kubectl delete --ignore-not-found=true -f manifests/ -f manifests/setup
cao-2$ kubectl delete -f pillowfight-data-loader.yaml
cao-2$ kubectl delete -f my-cluster.yaml
cao-2$ bin/cbopcfg | kubectl delete -f -
cao-2$ kubectl delete -f crd.yaml
cao-2$ eksctl delete cluster --region=us-east-1 --name=prasadCAO2



Thanks for reading!

