
How to Install Percona Monitoring and Management on Google Container Engine

Since GKE runs on Kubernetes, we had to make some interesting changes to the server install instructions so we could use GKE to manage the Docker container that pmm-server uses.



This blog discusses installing Percona Monitoring and Management on Google Container Engine.

I am working with a client that is on Google Cloud Services (GCS) and wants to use Percona Monitoring and Management (PMM). They liked the idea of using Google Container Engine (GKE) to manage the Docker container that pmm-server uses.

The regular install instructions are here.
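
For context, the regular install boils down to two Docker commands: create a data-only container, then run the server against it. A rough sketch, based on the PMM 1.0 documentation (your image tag and options may differ):

# Create a persistent data-only container to hold collected data
docker create -v /opt/prometheus/data -v /opt/consul-data -v /var/lib/mysql -v /var/lib/grafana --name pmm-data percona/pmm-server:1.0.6 /bin/true

# Run the server container, reusing the pmm-data volumes
docker run -d -p 80:80 --volumes-from pmm-data --name pmm-server --restart always percona/pmm-server:1.0.6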

Since Google Container Engine runs on Kubernetes, we had to make some interesting changes to the standard server install instructions.

First, you will want to open Google Cloud Shell. Do this by clicking the Cloud Shell button at the top right of the screen when logged into your GCS project.

Installing Percona Monitoring and Management

Once you are in the shell, you just need to run some commands to get up and running.

Let’s set our compute zone:

manjot_singh@googleproject:~$ gcloud config set compute/zone asia-east1-c
Updated property [compute/zone].

Then let’s set up our auth:


manjot_singh@googleproject:~$ gcloud auth application-default login
...
These credentials will be used by any library that requests
Application Default Credentials.

Now we are ready to go.

Normally, we create a persistent container called pmm-data to hold the data the server collects, so that the data survives container deletions and upgrades. For GCS, we will instead create persistent disks, using the minimum recommended size (according to Google) for each.

manjot_singh@googleproject:~$ gcloud compute disks create --size=200GB --zone=asia-east1-c pmm-prom-data-pv
Created [https://www.googleapis.com/compute/v1/projects/googleproject/zones/asia-east1-c/disks/pmm-prom-data-pv].
NAME              ZONE          SIZE_GB  TYPE         STATUS
pmm-prom-data-pv  asia-east1-c  200      pd-standard  READY
manjot_singh@googleproject:~$ gcloud compute disks create --size=200GB --zone=asia-east1-c pmm-consul-data-pv
Created [https://www.googleapis.com/compute/v1/projects/googleproject/zones/asia-east1-c/disks/pmm-consul-data-pv].
NAME                ZONE          SIZE_GB  TYPE         STATUS
pmm-consul-data-pv  asia-east1-c  200      pd-standard  READY
manjot_singh@googleproject:~$ gcloud compute disks create --size=200GB --zone=asia-east1-c pmm-mysql-data-pv
Created [https://www.googleapis.com/compute/v1/projects/googleproject/zones/asia-east1-c/disks/pmm-mysql-data-pv].
NAME               ZONE          SIZE_GB  TYPE         STATUS
pmm-mysql-data-pv  asia-east1-c  200      pd-standard  READY
manjot_singh@googleproject:~$ gcloud compute disks create --size=200GB --zone=asia-east1-c pmm-grafana-data-pv
Created [https://www.googleapis.com/compute/v1/projects/googleproject/zones/asia-east1-c/disks/pmm-grafana-data-pv].
NAME                 ZONE          SIZE_GB  TYPE         STATUS
pmm-grafana-data-pv  asia-east1-c  200      pd-standard  READY
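
As a quick sanity check, you can list the project's disks and confirm that all four pmm-*-data-pv volumes show STATUS READY (output will vary with your project):

manjot_singh@googleproject:~$ gcloud compute disks list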

You can ignore the messages about disk formatting. We are now ready to create our Kubernetes cluster:

manjot_singh@googleproject:~$ gcloud container clusters create pmm-server --num-nodes 1 --machine-type n1-standard-2
Creating cluster pmm-server...done.
Created [https://container.googleapis.com/v1/projects/googleproject/zones/asia-east1-c/clusters/pmm-server].
kubeconfig entry generated for pmm-server.
NAME        ZONE          MASTER_VERSION  MASTER_IP       MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
pmm-server  asia-east1-c  1.4.6           999.911.999.91  n1-standard-2  1.4.6         1          RUNNING

You should now see something like:

manjot_singh@googleproject:~$ gcloud compute instances list
NAME                                       ZONE          MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP      STATUS
gke-pmm-server-default-pool-73b3f656-20t0  asia-east1-c  n1-standard-2               10.14.10.14  911.119.999.11  RUNNING

Now that our container manager is up, we need to create two configs for the pod that will run our container. The first is used only to initialize the server and move its data directories onto the persistent disks; the second is the actual running server.

manjot_singh@googleproject:~$ vi pmm-server-init.json
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
      "name": "pmm-server",
      "labels": {
          "name": "pmm-server"
      }
  },
  "spec": {
    "containers": [{
        "name": "pmm-server",
        "image": "percona/pmm-server:1.0.6",
        "env": [{
                "name":"SERVER_USER",
                "value":"http_user"
            },{
                "name":"SERVER_PASSWORD",
                "value":"http_password"
            },{
                "name":"ORCHESTRATOR_USER",
                "value":"orchestrator"
            },{
                "name":"ORCHESTRATOR_PASSWORD",
                "value":"orch_pass"
            }
        ],
        "ports": [{
            "containerPort": 80
            }
        ],
        "volumeMounts": [{
          "mountPath": "/opt/prometheus/d",
          "name": "pmm-prom-data"
        },{
          "mountPath": "/opt/c",
          "name": "pmm-consul-data"
        },{
          "mountPath": "/var/lib/m",
          "name": "pmm-mysql-data"
        },{
          "mountPath": "/var/lib/g",
          "name": "pmm-grafana-data"
        }]
      }
    ],
    "restartPolicy": "Always",
    "volumes": [{
      "name":"pmm-prom-data",
      "gcePersistentDisk": {
          "pdName": "pmm-prom-data-pv",
          "fsType": "ext4"
      }
    },{
      "name":"pmm-consul-data",
      "gcePersistentDisk": {
          "pdName": "pmm-consul-data-pv",
          "fsType": "ext4"
      }
    },{
      "name":"pmm-mysql-data",
      "gcePersistentDisk": {
          "pdName": "pmm-mysql-data-pv",
          "fsType": "ext4"
      }
    },{
      "name":"pmm-grafana-data",
      "gcePersistentDisk": {
          "pdName": "pmm-grafana-data-pv",
          "fsType": "ext4"
      }
    }]
  }
}
manjot_singh@googleproject:~$ vi pmm-server.json
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
      "name": "pmm-server",
      "labels": {
          "name": "pmm-server"
      }
  },
  "spec": {
    "containers": [{
        "name": "pmm-server",
        "image": "percona/pmm-server:1.0.6",
        "env": [{
                "name":"SERVER_USER",
                "value":"http_user"
            },{
                "name":"SERVER_PASSWORD",
                "value":"http_password"
            },{
                "name":"ORCHESTRATOR_USER",
                "value":"orchestrator"
            },{
                "name":"ORCHESTRATOR_PASSWORD",
                "value":"orch_pass"
            }
        ],
        "ports": [{
            "containerPort": 80
            }
        ],
        "volumeMounts": [{
          "mountPath": "/opt/prometheus/data",
          "name": "pmm-prom-data"
        },{
          "mountPath": "/opt/consul-data",
          "name": "pmm-consul-data"
        },{
          "mountPath": "/var/lib/mysql",
          "name": "pmm-mysql-data"
        },{
          "mountPath": "/var/lib/grafana",
          "name": "pmm-grafana-data"
        }]
      }
    ],
    "restartPolicy": "Always",
    "volumes": [{
      "name":"pmm-prom-data",
      "gcePersistentDisk": {
          "pdName": "pmm-prom-data-pv",
          "fsType": "ext4"
      }
    },{
      "name":"pmm-consul-data",
      "gcePersistentDisk": {
          "pdName": "pmm-consul-data-pv",
          "fsType": "ext4"
      }
    },{
      "name":"pmm-mysql-data",
      "gcePersistentDisk": {
          "pdName": "pmm-mysql-data-pv",
          "fsType": "ext4"
      }
    },{
      "name":"pmm-grafana-data",
      "gcePersistentDisk": {
          "pdName": "pmm-grafana-data-pv",
          "fsType": "ext4"
      }
    }]
  }
}
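
The two files are deliberately identical except for the four mountPath values: the init config mounts the disks at short scratch paths, while the final config mounts them at the real data directories. A quick diff confirms that nothing else differs:

manjot_singh@googleproject:~$ diff pmm-server-init.json pmm-server.json

The only lines reported should be the four mountPath pairs.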

Then create it:

manjot_singh@googleproject:~$ kubectl create -f pmm-server-init.json
pod "pmm-server" created

Now we need to move data to persistent disks:

manjot_singh@googleproject:~$ kubectl exec -it pmm-server bash
root@pmm-server:/opt# supervisorctl stop grafana
grafana: stopped
root@pmm-server:/opt# supervisorctl stop prometheus
prometheus: stopped
root@pmm-server:/opt# supervisorctl stop consul
consul: stopped
root@pmm-server:/opt# supervisorctl stop mysql
mysql: stopped
root@pmm-server:/opt# mv consul-data/* c/
root@pmm-server:/opt# chown pmm.pmm c
root@pmm-server:/opt# cd prometheus/
root@pmm-server:/opt/prometheus# mv data/* d/
root@pmm-server:/opt/prometheus# chown pmm.pmm d
root@pmm-server:/opt/prometheus# cd /var/lib
root@pmm-server:/var/lib# mv mysql/* m/
root@pmm-server:/var/lib# chown mysql.mysql m
root@pmm-server:/var/lib# mv grafana/* g/
root@pmm-server:/var/lib# chown grafana.grafana g
root@pmm-server:/var/lib# exit
manjot_singh@googleproject:~$ kubectl delete pods pmm-server
pod "pmm-server" deleted

Now recreate the pmm-server container with the actual configuration:

manjot_singh@googleproject:~$ kubectl create -f pmm-server.json
pod "pmm-server" created

It’s up!
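
If you want to confirm the pod is healthy before exposing it, check its status (output below is illustrative):

manjot_singh@googleproject:~$ kubectl get pods
NAME         READY     STATUS    RESTARTS   AGE
pmm-server   1/1       Running   0          1m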

Now let’s get access to it by exposing it to the internet:

manjot_singh@googleproject:~$ kubectl expose deployment pmm-server --type=LoadBalancer
service "pmm-server" exposed

You can get more information by describing the service:

manjot_singh@googleproject:~$ kubectl describe services pmm-server
Name:                   pmm-server
Namespace:              default
Labels:                 run=pmm-server
Selector:               run=pmm-server
Type:                   LoadBalancer
IP:                     10.3.10.3
Port:                   <unset> 80/TCP
NodePort:               <unset> 31757/TCP
Endpoints:              10.0.0.8:80
Session Affinity:       None
Events:
  FirstSeen     LastSeen        Count   From                    SubobjectPath   Type            Reason                  Message
  ---------     --------        -----   ----                    -------------   --------        ------                  -------
  22s           22s             1       {service-controller }                   Normal          CreatingLoadBalancer    Cr

To find the public IP of your PMM server, look under “EXTERNAL-IP”:

manjot_singh@googleproject:~$ kubectl get services
NAME         CLUSTER-IP    EXTERNAL-IP      PORT(S)   AGE
kubernetes   10.3.10.3     <none>           443/TCP   7m
pmm-server   10.3.10.99    999.911.991.91   80/TCP    1m

That’s it! Just visit the external IP in your browser and you should see the PMM landing page.
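
From here, pointing a PMM client at the new server is a single command. A sketch, assuming pmm-client is already installed on the monitored host; <EXTERNAL-IP> is the address from the previous step, and the user and password match the SERVER_USER and SERVER_PASSWORD values from the pod config:

root@client-host:~# pmm-admin config --server <EXTERNAL-IP> --server-user http_user --server-password http_password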

One thing we didn’t resolve was accessing the pmm-server container from within the VPC: the client had to go out over the open internet and reach PMM via the public IP. I hope to work on this some more and resolve it in the future.

I have also talked to our team about simplifying the persistent disk mounts, so that we can use fewer of them and make the configuration and setup easier.



Published at DZone with permission of Manjot Singh, DZone MVB. See the original article here.
