
Microservices in Practice: Deployment Shouldn't Be an Afterthought


Follow this article to learn more about the set of cloud-native abstractions in Ballerina’s built-in Kubernetes support for deploying your microservices.


Microservice architecture is one of the most popular software architecture styles, enabling the rapid, frequent, and reliable delivery of large, complex applications. There are numerous learning materials on the benefits, design, and implementation of microservices. However, there are very few resources that discuss how to write your code for cloud-native platforms like Kubernetes in a way that just works. In this article, I am going to use the same e-commerce microservices sample used in the Rethinking Programming: Automated Observability article and discuss Ballerina’s built-in Kubernetes support to extend it to run on Kubernetes platforms.

The sample code covers the implementation of an e-commerce backend that simulates the microservices required to implement searching for goods, adding them to a shopping cart, making payments, and shipping.

E-Commerce Backend Microservices Architecture


Code to Kubernetes

Docker helps to package the application with its dependencies, while Kubernetes helps to automate the deployment, scaling, and management of containerized applications. Kubernetes defines a set of unique building blocks that collectively provide mechanisms to deploy, maintain, and scale applications.

On the other hand, the developer has to write code in a certain way for it to work well in a given execution environment. Microservices have to be designed, architected, and implemented so that they perform well on a platform like Kubernetes; otherwise, the application code will not fit the Kubernetes building blocks. In other words, deployment should not be an afterthought: we should design and write our code to run in Kubernetes.

Let’s look at a potential Kubernetes deployment architecture for the above e-commerce application.

Kubernetes Deployment Architecture for E-commerce Backend Microservices

One of the main challenges that developers face is the lack of tools and programming language abstractions to design and implement microservices that work well in Kubernetes. As a solution to this problem, Ballerina has introduced a set of cloud-native abstractions and tools to write microservices that just work on platforms like Kubernetes.

Let’s look at how we can use Ballerina’s Kubernetes abstraction to extend the e-commerce microservices to run in Kubernetes.

Order Management Microservice

The order management microservice, named OrderMgt, is the simplest microservice: it provides a set of functionality for billing, shipping, and admins, but it does not depend on any other microservice to complete its tasks. Let’s see how we can extend the OrderMgt microservice to support running in Kubernetes.

Listing 1: OrderMgt Microservice
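Listing 1 did not survive extraction here. As a hedged reconstruction only: the image name, ports, base path, and HPA values below are inferred from the listings later in this article, and the exact annotation field names are assumptions, so verify against the linked source repository.

```ballerina
import ballerina/http;
import ballerina/kubernetes;

// Properties described in the bullet list below; the username and
// password placeholders are assumptions, not the sample's actual values.
@kubernetes:Deployment {
    image: "index.docker.io/lakwarus/ecommerce-ordermgt:1.0",
    username: "<DOCKER_USERNAME>",
    password: "<DOCKER_PASSWORD>",
    push: true,
    livenessProbe: true,
    readinessProbe: true,
    prometheus: true
}
@kubernetes:Service {
    name: "ordermgt-svc"
}
@kubernetes:HPA {
    minReplicas: 2,
    maxReplicas: 4,
    cpuPercentage: 75
}
@http:ServiceConfig {
    basePath: "/OrderMgt"
}
service OrderMgt on new http:Listener(8081) {
    // Resource functions for billing, shipping, and admin
    // operations are elided; see the linked source repository.
}
```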

In the code snippet above, I have added three Kubernetes annotations on top of the OrderMgt service code block. I have set some properties in @kubernetes:Deployment to extend the code to run in Kubernetes.

  • image: Name, registry, and tag of the Docker image
  • username: Username for the Docker registry
  • password: Password for the Docker registry
  • push: Push the created Docker image to the registry
  • livenessProbe: Enable a liveness probe for health checks
  • readinessProbe: Enable a readiness probe for health checks
  • prometheus: Enable Prometheus for observability

When you build the OrderMgt microservice, in addition to the application binaries, it will automatically create the Docker image by bundling the application binaries with their dependencies, all while following Docker image building best practices. Since we have set the push property to true, it will automatically push the created Docker image to the Docker registry.

Shell

> export DOCKER_USERNAME=<username>
> export DOCKER_PASSWORD=<password>
> ballerina build ordermgt

Generating artifacts...

    @kubernetes:Service              - complete 1/1
    @kubernetes:Deployment           - complete 1/1
    @kubernetes:HPA                  - complete 1/1
    @kubernetes:Docker               - complete 3/3
    @kubernetes:Helm                 - complete 1/1

    Run the following command to deploy the Kubernetes artifacts:
    kubectl apply -f target/kubernetes/ordermgt

    Run the following command to install the application using Helm:
    helm install --name ordermgt target/kubernetes/ordermgt/ordermgt


The following Dockerfile is created and used to build the corresponding Docker image.

Dockerfile

# Auto Generated Dockerfile
FROM ballerina/jre8:v1

LABEL maintainer="dev@ballerina.io"

RUN addgroup troupe \
    && adduser -S -s /bin/bash -g 'ballerina' -G troupe -D ballerina \
    && apk add --update --no-cache bash \
    && chown -R ballerina:troupe /usr/bin/java \
    && rm -rf /var/cache/apk/*

WORKDIR /home/ballerina

COPY ordermgt.jar /home/ballerina

EXPOSE  8081 9797
USER ballerina

CMD java -jar ordermgt.jar --b7a.observability.enabled=true --b7a.observability.metrics.prometheus.port=9797


Listing 2: Autogenerated OrderMgt Dockerfile

As you can see, the Ballerina compiler read the source code and constructed the Dockerfile used to build the image. It sets up adequate security by creating a dedicated user and group and copying the application binary with the correct permissions. It has also correctly exposed the service ports: 8081 is the OrderMgt service port, and 9797 is exposed because we have enabled observability by setting prometheus to true. It has also correctly constructed the CMD, including all additional parameters required to run properly in a containerized environment.

Docker image is uploaded to the Docker registry


In addition to the Docker image, the compiler has generated the Kubernetes artifacts based on the annotations we defined.

YAML

apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  annotations: {}
  labels:
    app: "ordermgt"
  name: "ordermgt"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "ordermgt"
  template:
    metadata:
      annotations: {}
      labels:
        app: "ordermgt"
    spec:
      containers:
      - image: "index.docker.io/lakwarus/ecommerce-ordermgt:1.0"
        imagePullPolicy: "IfNotPresent"
        livenessProbe:
          initialDelaySeconds: 10
          periodSeconds: 5
          tcpSocket:
            port: 8081
        name: "ordermgt"
        ports:
        - containerPort: 8081
          protocol: "TCP"
        - containerPort: 9797
          protocol: "TCP"
        readinessProbe:
          initialDelaySeconds: 3
          periodSeconds: 1
          tcpSocket:
            port: 8081
      nodeSelector: {}


Listing 3: Autogenerated OrderMgt Kubernetes deployment YAML

You can see that the generated deployment YAML contains the properties we have set in the annotations, while the rest is correctly populated by reading the source code and applying meaningful defaults.

In the @kubernetes:Service annotation, I have set the service name to ordermgt-svc. Other microservices can access the OrderMgt microservice by using this Kubernetes service with the defined service name. The Kubernetes service acts as an internal load balancer with the default ClusterIP type, making it available only to the internal network and blocking all external traffic to it.

YAML

apiVersion: "v1"
kind: "Service"
metadata:
  annotations: {}
  labels:
    app: "ordermgt"
  name: "ordermgt-svc"
spec:
  ports:
  - name: "http-ordermgt-svc"
    port: 8081
    protocol: "TCP"
    targetPort: 8081
  selector:
    app: "ordermgt"
  type: "ClusterIP"
---
apiVersion: "v1"
kind: "Service"
metadata:
  annotations: {}
  labels:
    app: "ordermgt"
  name: "ordermgt-svc-prometheus"
spec:
  ports:
  - name: "http-prometheus-ordermgt-svc"
    port: 9797
    protocol: "TCP"
    targetPort: 9797
  selector:
    app: "ordermgt"
  type: "ClusterIP"


Listing 4: Autogenerated OrderMgt Kubernetes service YAML

In addition to the ordermgt-svc Kubernetes service, the compiler has generated ordermgt-svc-prometheus because we have set prometheus: true. This generated service name can be used to configure Prometheus to pull the observability stats. The Prometheus setup is discussed in a later part of this article.

One of the main benefits of microservice architecture is that individual services can be scaled independently. We can configure our microservices deployment to automatically scale depending on the load running on the service. Kubernetes Horizontal Pod Autoscaler (HPA) is the way to do it. Our @kubernetes:HPA annotation will correctly generate the required HPA YAML to run in Kubernetes.

YAML

apiVersion: "autoscaling/v1"
kind: "HorizontalPodAutoscaler"
metadata:
  annotations: {}
  labels:
    app: "ordermgt"
  name: "ordermgt-hpa"
spec:
  maxReplicas: 4
  minReplicas: 2
  scaleTargetRef:
    apiVersion: "apps/v1"
    kind: "Deployment"
    name: "ordermgt"
  targetCPUUtilizationPercentage: 75


Listing 5: Autogenerated OrderMgt Kubernetes HPA YAML

In addition to the Docker image and Kubernetes YAMLs, the compiler has generated Helm charts in case you want to deploy the application using Helm.

The deployment commands are also printed, and a simple copy and paste will work with the configured Kubernetes cluster. The Kubernetes cluster can run locally or in a public cloud like Google GKE, Azure AKS, or Amazon EKS.

Shipping and Billing Microservices

Like the OrderMgt microservice, we have set the following annotations in the shipping and billing microservices.

  • @kubernetes:Deployment
  • @kubernetes:Service
  • @kubernetes:HPA

You can find the full source code of the shipping and billing microservices here. In addition to annotations, we have used the ordermgt-svc name and port as the URL in the orderMgtClient config.

Ballerina

http:Client orderMgtClient = new("http://ordermgt-svc:8081/OrderMgt");


Listing 6: orderMgtClient configuration

When you build these modules, the compiler will generate all Kubernetes and Docker artifacts corresponding to these two services. 

Cart and Inventory Microservices

Both the Cart and Inventory microservices connect to a MySQL database to perform database operations. The dbClient uses the Ballerina config API to read the MySQL database username and password.

Ballerina

jdbc:Client dbClient = new ({
   url: "jdbc:mysql://mysql-svc:3306/ECOM_DB?serverTimezone=UTC",
   username: config:getAsString("db.username"),
   password: config:getAsString("db.password"),
   poolOptions: { maximumPoolSize: 5 },
   dbOptions: { useSSL: false }
});


Listing 7: dbClient configuration

The MySQL database username and password are set in a ballerina.conf file that resides in the corresponding module directory. In the Kubernetes world, these config files can be passed into pods by using Kubernetes ConfigMaps. We have set the corresponding ConfigMap annotations in both the Cart (as seen below) and the Inventory microservices code.

Ballerina

@kubernetes:ConfigMap {
   conf: "src/cart/ballerina.conf"
}


Listing 8: Cart microservice configMap annotation

When you compile the Cart module, you can see the following line in the generated Dockerfile.

Dockerfile

CMD java -jar cart.jar --b7a.observability.enabled=true --b7a.observability.metrics.prometheus.port=9797 --b7a.config.file=${CONFIG_FILE}


Listing 9: CMD command of the Cart Dockerfile

You can see that it’s passing --b7a.config.file=${CONFIG_FILE}, in addition to the other required properties, so that the application reads the config file provided through the ConfigMap.

The compiler generates the following Kubernetes configMap along with the other Kubernetes artifacts.

YAML

apiVersion: "v1"
kind: "ConfigMap"
metadata:
  annotations: {}
  labels: {}
  name: "shoppingcart-ballerina-conf-config-map"
binaryData: {}
data:
  ballerina.conf: "[db]\nusername=\"xxxx\"\npassword=\"xxxx\"\n"


Listing 10: Autogenerated Cart microservice Kubernetes configMap YAML

Also, the generated Kubernetes deployment YAML for the Cart microservice has been updated with the corresponding ConfigMap volume and the volume-mount properties.

YAML

spec:
      containers:
      - env:
        - name: "CONFIG_FILE"
          value: "/home/ballerina/conf/ballerina.conf"
        image: "index.docker.io/lakwarus/ecommerce-cart:1.0"
        imagePullPolicy: "IfNotPresent"
        livenessProbe:
          initialDelaySeconds: 10
          periodSeconds: 5
          tcpSocket:
            port: 8080
        name: "cart"
        ports:
        - containerPort: 8080
          protocol: "TCP"
        - containerPort: 9797
          protocol: "TCP"
        readinessProbe:
          initialDelaySeconds: 3
          periodSeconds: 1
          tcpSocket:
            port: 8080
        volumeMounts:
        - mountPath: "/home/ballerina/conf/"
          name: "shoppingcart-ballerina-conf-config-map-volume"
          readOnly: false
      nodeSelector: {}
      volumes:
      - configMap:
          name: "shoppingcart-ballerina-conf-config-map"
        name: "shoppingcart-ballerina-conf-config-map-volume"


Listing 11: Spec section of the autogenerated Cart deployment YAML

Admin Microservice

The Admin microservice connects to the other five microservices and exposes the admin functionality to the administration actor. It communicates with the other microservices via the corresponding Kubernetes service names that we have defined in the annotations.

Ballerina

http:Client cartClient = new("http://cart-svc:8080/ShoppingCart");
http:Client orderMgtClient = new("http://ordermgt-svc:8081/OrderMgt");
http:Client billingClient = new("http://billing-svc:8082/Billing");
http:Client shippingClient = new("http://shipping-svc:8083/Shipping");
http:Client invClient = new("http://inventory-svc:8084/Inventory");


Listing 12: Client configurations in the Admin microservice

The Admin microservice needs to be exposed to the external network, while all the other microservices only allow internal traffic. This is achieved by setting the correct serviceType in the @kubernetes:Service annotation.

Ballerina

@kubernetes:Service {
   name: "admin-svc",
   serviceType: "NodePort",
   nodePort: 30300
}


Listing 13: Kubernetes service annotation for Admin microservice

When compiling the source code, it generates a Kubernetes service of the NodePort type, which allows access to the Admin microservice by using the node IP and the port defined in the annotation. If you are running the Kubernetes cluster in a cloud provider, you can set it to the LoadBalancer type instead; the compiler will then generate the corresponding cloud load balancer configuration to expose the admin service.
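For illustration, a hedged sketch of that LoadBalancer variant, patterned on Listing 13 (this snippet is not taken from the sample repository):

```ballerina
// Hypothetical variant for cloud providers: expose the Admin
// microservice through a cloud load balancer instead of a node port.
@kubernetes:Service {
   name: "admin-svc",
   serviceType: "LoadBalancer"
}
```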

Running in a Kubernetes Cluster

Now that we have looked at how to extend the e-commerce microservices to generate the artifacts required to run well in Kubernetes by using Ballerina’s built-in Kubernetes support, let’s test the deployment.

Step 1: Compile the source code

Clone the Git repo and run the following command to compile the source code.

Shell

> ballerina build -a


This will generate all necessary artifacts to run in Kubernetes.

Step 2: Set up MySQL

Shell

> kubectl apply -f resources/mysql.yaml


Step 3: Deploy e-commerce microservices

You can use the compiler output command to deploy all the microservices. 

Shell

> kubectl apply -f target/kubernetes/ordermgt
> kubectl apply -f target/kubernetes/billing
> kubectl apply -f target/kubernetes/shipping
> kubectl apply -f target/kubernetes/inventory
> kubectl apply -f target/kubernetes/cart
> kubectl apply -f target/kubernetes/admin


Step 4: Set up Prometheus and Grafana

Deploy Prometheus

Shell

> kubectl create configmap ecommerce --from-file resources/prometheus.yml
> kubectl apply -f resources/prometheus-deploy.yaml


Note: I have configured prometheus.yml to pull stats from all six microservices by using the Kubernetes service names generated by the compiler.
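As a hedged sketch of what such a prometheus.yml might look like (the actual file lives in the repository's resources directory; the scrape interval here is an assumption, while the targets follow the -prometheus service names and port 9797 shown earlier):

```yaml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'ecommerce-microservices'
    static_configs:
      - targets:
          - 'admin-svc-prometheus:9797'
          - 'billing-svc-prometheus:9797'
          - 'cart-svc-prometheus:9797'
          - 'inventory-svc-prometheus:9797'
          - 'ordermgt-svc-prometheus:9797'
          - 'shipping-svc-prometheus:9797'
```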

Deploy Grafana

Shell

> kubectl apply -f resources/grafana-datasource-config.yaml
> kubectl apply -f resources/grafana-deploy.yaml


Step 5: Verify the deployment

Shell

> kubectl get all
NAME                                        READY   STATUS    RESTARTS   AGE
pod/admin-d8d598584-txfnk                   1/1     Running   0          35s
pod/billing-5d9fc8f5c-7frcs                 1/1     Running   0          22s
pod/cart-68c8786f5c-smpbl                   1/1     Running   0          14s
pod/grafana-6d7cc69ffb-2zws7                1/1     Running   0          8m23s
pod/inventory-848d9dbd7c-j2vk2              1/1     Running   0          9s
pod/mysql-574cf47d6b-fpmbp                  1/1     Running   0          47s
pod/ordermgt-54fb955d47-cz2m6               1/1     Running   0          4s
pod/prometheus-deployment-d97c996b6-jg5t2   1/1     Running   0          8m34s
pod/shipping-6c6df49cb7-x7lcm               1/1     Running   0          28s

NAME                               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
service/admin-svc                  NodePort    10.98.142.252    <none>        8085:30300/TCP   35s
service/admin-svc-prometheus       ClusterIP   10.104.157.241   <none>        9797/TCP         35s
service/billing-svc                ClusterIP   10.99.0.148      <none>        8082/TCP         22s
service/billing-svc-prometheus     ClusterIP   10.106.106.169   <none>        9797/TCP         22s
service/cart-svc                   ClusterIP   10.99.37.61      <none>        8080/TCP         14s
service/cart-svc-prometheus        ClusterIP   10.102.26.25     <none>        9797/TCP         14s
service/grafana                    NodePort    10.96.73.66      <none>        3000:32000/TCP   8m23s
service/inventory-svc              ClusterIP   10.111.77.230    <none>        8084/TCP         9s
service/inventory-svc-prometheus   ClusterIP   10.101.1.91      <none>        9797/TCP         9s
service/kubernetes                 ClusterIP   10.96.0.1        <none>        443/TCP          27d
service/mysql-svc                  ClusterIP   None             <none>        3306/TCP         47s
service/ordermgt-svc               ClusterIP   10.100.221.87    <none>        8081/TCP         4s
service/ordermgt-svc-prometheus    ClusterIP   10.106.39.3      <none>        9797/TCP         4s
service/prometheus                 ClusterIP   10.106.45.109    <none>        9090/TCP         8m35s
service/shipping-svc               ClusterIP   10.109.118.101   <none>        8083/TCP         28s
service/shipping-svc-prometheus    ClusterIP   10.102.198.42    <none>        9797/TCP         28s

NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/admin                   1/1     1            1           35s
deployment.apps/billing                 1/1     1            1           22s
deployment.apps/cart                    1/1     1            1           14s
deployment.apps/grafana                 1/1     1            1           8m23s
deployment.apps/inventory               1/1     1            1           9s
deployment.apps/mysql                   1/1     1            1           47s
deployment.apps/ordermgt                1/1     1            1           4s
deployment.apps/prometheus-deployment   1/1     1            1           8m35s
deployment.apps/shipping                1/1     1            1           28s

NAME                                              DESIRED   CURRENT   READY   AGE
replicaset.apps/admin-d8d598584                   1         1         1       35s
replicaset.apps/billing-5d9fc8f5c                 1         1         1       22s
replicaset.apps/cart-68c8786f5c                   1         1         1       14s
replicaset.apps/grafana-6d7cc69ffb                1         1         1       8m23s
replicaset.apps/inventory-848d9dbd7c              1         1         1       9s
replicaset.apps/mysql-574cf47d6b                  1         1         1       47s
replicaset.apps/ordermgt-54fb955d47               1         1         1       4s
replicaset.apps/prometheus-deployment-d97c996b6   1         1         1       8m35s
replicaset.apps/shipping-6c6df49cb7               1         1         1       28s

NAME                                               REFERENCE             TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/billing-hpa    Deployment/billing    <unknown>/75%   1         2         1          22s
horizontalpodautoscaler.autoscaling/ordermgt-hpa   Deployment/ordermgt   <unknown>/75%   1         4         0          4s
horizontalpodautoscaler.autoscaling/shipping-hpa   Deployment/shipping   <unknown>/75%   1         2         1          28s


Step 6: Run the simulator

Shell

> ballerina run target/bin/simulator.jar 100 1000


Step 7: Test observability in the Grafana dashboard

You can access the Grafana dashboard at http://localhost:32000/ (if you deployed in a cloud provider cluster, use the external node IP and the 32000 port).

Grafana Dashboard


Here, I have not set up Jaeger, but if you set it up, you will be able to see tracing stats collected from the deployed microservices.

Summary

To get the real benefits of microservice architecture, cloud-native deployment is as important as the microservice application code itself. Yet one of the common mistakes seen in many projects is not considering the deployment aspect while developing the application. One reason for this is the lack of tools and abstraction support in traditional programming languages that would allow developers to write code that just works on these cloud-native platforms.

Ballerina is a new programming language that fills these gaps by providing a rich set of language abstractions and a unique developer experience when integrating with cloud-native platforms. This ultimately improves productivity across the whole microservice development life cycle.

Source Code: https://github.com/lakwarus/microservices-in-practice-deployment

Topics:
ballerina, deploying microservices, deploying to kubernetes, kubernetes, microservice, microservice architecture, programming language

Opinions expressed by DZone contributors are their own.
