
Deploy More Complex Microservice Apps Using Platform9 Managed Kubernetes Free Tier


In this example, we are going to see the deployment of a Redis master, Redis slaves, and a sample guestbook application that uses Redis as a store.


In this tutorial, we are going to expand our examples by deploying a more complex microservice on Platform9 Managed Kubernetes Free Tier (PMKFT). The idea is to make you more comfortable with the platform and to show you how you can leverage it for more advanced scenarios.

In this example, we are going to see the deployment of:

  • A Redis master
  • Multiple Redis slaves
  • A sample guestbook application that uses Redis as a store

We assume that you have already set up a Platform9 cluster with at least one node and that the cluster is ready. If you haven't, sign up now for free: https://platform9.com/signup/

Let's start with the Redis parts.

Deploying and Exposing a Redis Cluster

Redis is an in-memory key-value store that is used mainly as a cache service. To set up clustering for data replication, we need a Redis instance that acts as the master, together with additional instances acting as slaves; the master propagates writes to the slave nodes. The guestbook application can then use this cluster to store its data.

We can initiate a Redis master deployment in a few different ways: using the kubectl tool, the Platform9 UI, or the Kubernetes UI. For convenience, we use kubectl, as it is the tool most commonly used in tutorials.

First, we need to create a Redis cluster deployment. Looking at the Redis documentation here, we need some configuration properties to set up a cluster. We can leverage Kubernetes ConfigMaps to store them and reference them in the deployment spec.

We need to save a script and a redis.conf file that will be used to configure the master and slave nodes.

Create the following config file, redis-cluster.config.yml, with these values:

$ cat redis-cluster.config.yml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-cluster-config
data:
  update-ip.sh: |
    #!/bin/sh
    sed -i -e "/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${IP}/" /data/nodes.conf
    exec "$@"
  redis.conf: |+
    cluster-enabled yes
    cluster-config-file /data/nodes.conf
    appendonly yes


We define a script that inserts an IP value into the nodes.conf file. This fixes a known Redis issue, as referenced here. The script runs every time a new Redis pod starts.

Then we have the redis.conf, which applies the minimal cluster configuration.
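To see what the update-ip.sh substitution does, we can run the same sed expression locally against a sample nodes.conf line (the line content below is hypothetical, for illustration only):

```shell
# Hypothetical sample nodes.conf line, as Redis would write it on startup.
printf '07c3 10.1.0.5:6379@16379 myself,master - 0 0 1 connected 0-5460\n' > /tmp/nodes.conf

# The pod IP that Kubernetes would inject via the IP environment variable.
IP=10.1.0.9

# Same substitution as in the configmap: on the line containing "myself",
# replace the first IPv4 address with the pod's current IP.
sed -i -e "/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${IP}/" /tmp/nodes.conf

cat /tmp/nodes.conf
# 07c3 10.1.0.9:6379@16379 myself,master - 0 0 1 connected 0-5460
```

This matters because pods get a new IP on every restart, while nodes.conf persists on the data volume; without the rewrite, the node would advertise a stale address to the cluster.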

Apply this spec into the cluster:

$ kubectl apply -f redis-cluster.config.yml



Then verify that it exists in the list of configmaps:

$ kubectl get configmaps



Next, we need to define a spec for the Redis cluster instances. We can use a Deployment or a StatefulSet; here we use a StatefulSet with six replicas (three masters, each with one slave).

Here is the spec: redis-cluster.statefulset.yml

$ cat redis-cluster.statefulset.yml
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-cluster
spec:
  serviceName: redis-cluster
  replicas: 6
  selector:
    matchLabels:
      app: redis-cluster
  template:
    metadata:
      labels:
        app: redis-cluster
    spec:
      containers:
      - name: redis
        image: redis:5.0.7-alpine
        ports:
        - containerPort: 6379
          name: client
        - containerPort: 16379
          name: gossip
        command: ["/conf/update-ip.sh", "redis-server", "/conf/redis.conf"]
        env:
        - name: IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        volumeMounts:
        - name: conf
          mountPath: /conf
          readOnly: false
        - name: data
          mountPath: /data
          readOnly: false
      volumes:
      - name: conf
        configMap:
          name: redis-cluster-config
          defaultMode: 0755
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi



In the above step we defined a few things:

  • An IP environment variable, consumed by the update-ip.sh script that we defined in the configmap earlier. It holds the pod's own IP address, obtained via the Downward API.
  • Shared volumes, including the configmap that we defined earlier.
  • Two container ports: 6379 for client connections and 16379 for the cluster gossip protocol.

With this spec we can deploy the Redis cluster instances:

$ kubectl apply -f redis-cluster.statefulset.yml



Once we verify that the deployment is ready, we need to perform the last step: bootstrapping the cluster. Consulting the documentation here for creating the cluster, we need to exec into one of the instances and run the redis-cli cluster create command. For example, taken from the docs:

$ redis-cli --cluster create 127.0.0.1:7000 127.0.0.1:7001 \
127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 \
--cluster-replicas 1



To do that in our case, we need to get the local pod IPs of the instances and feed them to that command.

We can query the IP using this command:

$ kubectl get pods -l app=redis-cluster -o jsonpath='{range .items[*]}{.status.podIP}:6379 {end}'



So if we save them in a variable, we can pass them to the redis-cli command:

$ POD_IPS=$(kubectl get pods -l app=redis-cluster -o jsonpath='{range .items[*]}{.status.podIP}:6379 {end}')



Then we can run the following command:

$ kubectl exec -it redis-cluster-0 -- redis-cli --cluster create --cluster-replicas 1 $POD_IPS



If everything is OK, you will see the following prompt. Enter 'yes' to accept and continue:

Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
........
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.



Then we can verify the cluster state by running the cluster info command:

$ kubectl exec -it redis-cluster-0 -- redis-cli cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:28
cluster_stats_messages_pong_sent:34
cluster_stats_messages_sent:62
cluster_stats_messages_ping_received:29
cluster_stats_messages_pong_received:28
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:62



Before we continue deploying the guestbook app, we need to offer a unified service frontend for the Redis Cluster so that it's easily discoverable in the cluster.

Here is the service spec: redis-cluster.service.yml

$ cat redis-cluster.service.yml
---
apiVersion: v1
kind: Service
metadata:
  name: redis-master
spec:
  type: ClusterIP
  ports:
  - port: 6379
    targetPort: 6379
    name: client
  - port: 16379
    targetPort: 16379
    name: gossip
  selector:
    app: redis-cluster



We expose the cluster as redis-master here, because the guestbook app looks up a service with that name to connect to.

Once we apply this service spec, we can move on to deploying and exposing the Guestbook Application:

$ kubectl apply -f redis-cluster.service.yml



Deploying and Exposing a GuestBook Application

The guestbook application is a simple PHP script that shows a form for submitting a message. On startup, it attempts to connect to the redis-master host for writes and the redis-slave hosts for reads.

It reads the GET_HOSTS_FROM environment variable; when that is set to env, the hosts are resolved from the following environment variables:

  • REDIS_MASTER_SERVICE_HOST: the hostname of the master
  • REDIS_SLAVE_SERVICE_HOST: the hostname of the slaves
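Since the deployment spec sets GET_HOSTS_FROM to env, the lookup amounts to reading environment variables. A quick shell sketch of that resolution, using the values from the deployment spec:

```shell
# These three variables are injected into the container by the deployment spec.
export GET_HOSTS_FROM=env
export REDIS_MASTER_SERVICE_HOST=redis-master
export REDIS_SLAVE_SERVICE_HOST=redis-master

# The app's host resolution, reduced to its essence: when GET_HOSTS_FROM
# is "env", read the hostnames straight from the environment.
if [ "$GET_HOSTS_FROM" = "env" ]; then
  echo "master host: $REDIS_MASTER_SERVICE_HOST"
  echo "slave host:  $REDIS_SLAVE_SERVICE_HOST"
fi
```

Note that both variables point at redis-master here, since our single service fronts the whole Redis cluster.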

First, let's define the deployment spec below:

php-guestbook.deployment.yml

$ cat php-guestbook.deployment.yml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: guestbook
spec:
  replicas: 1
  selector:
    matchLabels:
      app: guestbook
  template:
    metadata:
      labels:
        app: guestbook
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 150m
            memory: 150Mi
        env:
        - name: GET_HOSTS_FROM
          value: env
        - name: REDIS_MASTER_SERVICE_HOST
          value: "redis-master"
        - name: REDIS_SLAVE_SERVICE_HOST
          value: "redis-master"
        ports:
        - containerPort: 80



The code of the gb-frontend image is located here.

Next is the associated service spec:
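A minimal sketch of such a service, assuming the file name php-guestbook.service.yml and the service name guestbook (both names are illustrative, not from the original specs):

```yaml
# Hypothetical php-guestbook.service.yml: exposes the guestbook pods
# on a random high port of each node's IP.
---
apiVersion: v1
kind: Service
metadata:
  name: guestbook
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: guestbook
```

With type: NodePort, Kubernetes picks a port in the 30000-32767 range on every node; running kubectl get service guestbook shows the assigned port.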

Note: NodePort will assign a random port on the public IP of each node. Either way, we get a public host:port pair where we can inspect the application. Here is a screenshot of the app after we deployed it:

[Screenshot: the guestbook application]

Cleaning Up

Once we have finished experimenting with the application, we can clean up the resources by issuing kubectl delete statements. Deleting by label is convenient when the objects carry labels; otherwise we delete by name.
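For example, using the object names from the specs above, the cleanup could look like the following sketch (it requires a configured kubectl context; the guestbook service name is an assumption, adjust it to whatever you used):

```shell
# Delete the Redis StatefulSet, its service, and its configmap by name
# (the Service and ConfigMap objects above carry no labels of their own).
kubectl delete statefulset redis-cluster
kubectl delete service redis-master
kubectl delete configmap redis-cluster-config

# Delete the guestbook deployment and its service (service name assumed).
kubectl delete deployment guestbook
kubectl delete service guestbook

# PVCs created by volumeClaimTemplates are named data-<statefulset>-<ordinal>
# and are not removed automatically when the StatefulSet is deleted.
kubectl delete pvc data-redis-cluster-0 data-redis-cluster-1 data-redis-cluster-2 \
  data-redis-cluster-3 data-redis-cluster-4 data-redis-cluster-5
```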

Next Steps

Deploying multi-container systems on Kubernetes with Platform9 is no different than usual. The big difference is that you gain quality of service and a maintenance-free Kubernetes experience, with first-class support for troubleshooting issues. On top of that, you can host the cluster on your own bare-metal servers or on AWS, eliminating vendor lock-in. For more information, visit the Platform9 Managed Kubernetes page.


Published at DZone with permission of Kamesh Pemmeraju, DZone MVB. See the original article here.

