Java EE Microservices on Kubernetes With KumuluzEE
This tutorial will show you how to use the KumuluzEE framework to deploy microservices, plus how to perform health checks and scaling.
Today, Kubernetes is one of the most commonly used runtime platforms for containerized applications. Providing automatic bin-packing, horizontal scaling, automated rollouts and rollbacks, self-healing, service discovery, load balancing, and other services out of the box, Kubernetes is a platform that suits microservices down to the ground.
However, equally important as the utilities provided by the runtime environment is the ability of the microservice framework to exploit them. In this post, we demonstrate how the KumuluzEE microservice framework makes use of Kubernetes services for optimal execution of microservices. In the first part, we focus on deploying KumuluzEE microservices that use service discovery and service configuration to a Kubernetes cluster; in the second part, we move our focus to health checks and scaling of KumuluzEE microservices.
Prerequisites and Sample Microservices
Our sample comprises two KumuluzEE microservices that communicate with one another. Both microservices use service discovery and service configuration, which means we need a working etcd cluster before we can start deploying and configuring them. To demonstrate service discovery and service configuration, we first deploy our own etcd cluster.
Both microservices read configuration from the service configuration cluster. In our sample, we also demonstrate how to use a Kubernetes ConfigMap to store configuration that does not change during the lifetime of the microservice, e.g. the URL of the service discovery and configuration cluster. The service configuration cluster allows us to live-update the configuration of a microservice; we cannot achieve the same using Kubernetes ConfigMaps alone, as any change would require a restart of the container.
We use two KumuluzEE microservices: customers and orders. Both microservices are comprised of three maven modules:
- api: REST API serving/exposing data,
- business-logic: CDI beans implementing business logic, and
- persistence: JPA module handling data persistence.
As we are focused on the packaging and deployment of KumuluzEE microservices to Kubernetes, we will skip further implementation details. Each microservice has its own PostgreSQL database, which must be deployed beforehand.
Deploying etcd Service Discovery and Configuration Cluster
The following code snippet shows the configuration for etcd Deployment and Service:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: etcd-deployment
  namespace: kumuluzee-blog
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: etcd
    spec:
      containers:
        - command:
            - /usr/local/bin/etcd
            - --name
            - etcd0
            - --initial-advertise-peer-urls
            - http://etcd:2380
            - --listen-peer-urls
            - http://0.0.0.0:2380
            - --listen-client-urls
            - http://0.0.0.0:2379
            - --advertise-client-urls
            - http://etcd:2379
            - --initial-cluster-state
            - new
            - -cors
            - "*"
            - --data-dir
            - /etcd-data
          image: quay.io/coreos/etcd:latest
          name: etcd
          ports:
            - containerPort: 2379
              name: client
              protocol: TCP
            - containerPort: 2380
              name: server
              protocol: TCP
          volumeMounts:
            - mountPath: /etcd-data
              name: etcddata
      volumes:
        - name: etcddata
          emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: etcd
  namespace: kumuluzee-blog
spec:
  type: NodePort
  ports:
    - name: client
      port: 2379
      protocol: TCP
      targetPort: 2379
  selector:
    app: etcd
Step 1: Building Docker Images
Each microservice is packaged into a Docker image. Below is the content of the Dockerfile for the customers microservice:
FROM openjdk:8-jre-alpine
RUN mkdir /app
WORKDIR /app
ADD ./api/target/customers-api-1.0.0-SNAPSHOT.jar /app
EXPOSE 8080
CMD ["java", "-jar", "customers-api-1.0.0-SNAPSHOT.jar"]
The orders microservice has the same Dockerfile; the only difference is the name of the jar.
Both images are available on public Docker Hub:
- customers: zvoneg/kubernetes-customers:v1.0.5
- orders: zvoneg/kubernetes-orders:v1.0.5
Step 2: Preparing Kubernetes Configuration Files
So far, we have deployed the etcd cluster for service discovery and configuration, databases for each microservice, and prepared Docker images for our microservices. Next, we provide the configuration that does not change over the lifetime of the microservice, using a Kubernetes ConfigMap.
Static Configuration Using ConfigMap
A ConfigMap is a Kubernetes object used to decouple configuration artifacts from image content. For our microservices, we use ConfigMaps to provide the discovery cluster name, the etcd hosts for discovery and configuration, the connection URL of the data source, the server base URL (the service address used for service registration), and the connection URL of the data source for the health check.
ConfigMap of the kubernetes-customers microservice (customer-cm.yaml):
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubernetes-customer-config
  namespace: kumuluzee-blog
data:
  KUMULUZEE_DISCOVERY_CLUSTER: private-coreos
  KUMULUZEE_DISCOVERY_ETCD_HOSTS: http://etcd:2379
  KUMULUZEE_CONFIG_ETCD_HOSTS: http://etcd:2379
  KUMULUZEE_DATASOURCES0_CONNECTIONURL: jdbc:postgresql://postgres-customers:5432/customer
  KUMULUZEE_SERVER_BASEURL: http://192.168.29.246:32600
  KUMULUZEE_HEALTH_CHECKS_DATASOURCEHEALTHCHECK_CONNECTIONURL: jdbc:postgresql://postgres-customers:5432/customer
and ConfigMap of the kubernetes-orders microservice (order-cm.yaml):
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubernetes-order-config
  namespace: kumuluzee-blog
data:
  KUMULUZEE_DISCOVERY_CLUSTER: private-coreos
  KUMULUZEE_DISCOVERY_ETCD_HOSTS: http://etcd:2379
  KUMULUZEE_CONFIG_ETCD_HOSTS: http://etcd:2379
  KUMULUZEE_DATASOURCES0_CONNECTIONURL: jdbc:postgresql://postgres-orders:5432/order
  KUMULUZEE_SERVER_BASEURL: http://192.168.29.246:32583
  KUMULUZEE_HEALTH_CHECKS_DATASOURCEHEALTHCHECK_CONNECTIONURL: jdbc:postgresql://postgres-orders:5432/order
Kubernetes Deployment Configuration
First, we define the deployment configuration for the orders microservice. This microservice is configured to register itself with the service registry when the pod (replica) is started. The KumuluzEE framework obtains the pod IP address and inserts a record into the service registry. Remember, the data needed to access the service registry is provided through the properties defined in the ConfigMap, which we reference in the deployment configuration. The KumuluzEE configuration extension automatically reads the configuration from container environment variables on startup.
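To picture this environment-first lookup, here is a minimal plain-Java sketch: values are resolved from container environment variables (the keys from the ConfigMaps above), with a fallback default for local runs. This is illustrative only, not the actual implementation of the KumuluzEE configuration extension.

```java
import java.util.Map;
import java.util.Optional;

// Illustrative env-first configuration lookup: prefer the container
// environment variable, fall back to a default. The key name below is
// one of the keys defined in the ConfigMaps; the fallback is made up.
public class EnvConfig {

    private final Map<String, String> env;

    public EnvConfig(Map<String, String> env) {
        this.env = env;
    }

    // Return the environment value if present, otherwise the default.
    public String get(String key, String defaultValue) {
        return Optional.ofNullable(env.get(key)).orElse(defaultValue);
    }

    public static void main(String[] args) {
        EnvConfig config = new EnvConfig(System.getenv());
        // In the cluster this resolves to the ConfigMap value injected
        // via envFrom; locally it falls back to the default.
        String etcdHosts = config.get("KUMULUZEE_DISCOVERY_ETCD_HOSTS",
                "http://localhost:2379");
        System.out.println(etcdHosts);
    }
}
```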
The content of the deployment file for the orders microservice (order-deployment.yaml):
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: order-deployment
  namespace: kumuluzee-blog
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: order
    spec:
      containers:
        - image: zvoneg/kubernetes-orders:v1.0.5
          name: kubernetes-orders
          envFrom:
            - configMapRef:
                name: kubernetes-order-config
          ports:
            - containerPort: 8081
              name: server
              protocol: TCP
We proceed by defining the customers microservice deployment configuration. This microservice consumes the orders microservice, i.e. it performs service discovery. To provide the information needed to reach the service registry, we again use properties defined in a ConfigMap.
Deployment configuration for the customers microservice (customer-deployment.yaml):
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: customer-deployment
  namespace: kumuluzee-blog
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: customer
    spec:
      containers:
        - image: zvoneg/kubernetes-customers:v1.0.5
          name: kubernetes-customer
          envFrom:
            - configMapRef:
                name: kubernetes-customer-config
          ports:
            - containerPort: 8080
              name: server
              protocol: TCP
During startup, the customers microservice performs service discovery to obtain the IP address of the orders pod (or Kubernetes Service). This is handled by the kumuluzee-discovery extension.
Kubernetes Service Configuration
With kumuluzee-discovery, we can overcome the problem of pod mortality, as each microservice registers its pod IP address in the service registry (which is taken care of by the kumuluzee-discovery extension). If multiple replicas of a microservice are running, there will be multiple entries in the service registry. KumuluzEE discovery also handles the scenario where a replica is killed, by removing its entry from the registry. This means we do not need to create Kubernetes Services if our microservices only need to be accessible inside the Kubernetes cluster. The advantage of this approach is that the step where a Service IP is translated into a pod IP is skipped, as we know the IP of the actual pod.
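The client side of this can be pictured with a small plain-Java sketch: given the pod base URLs currently registered for a service, a caller picks one entry per request. Round-robin selection is used here purely for illustration; this is not the kumuluzee-discovery implementation, and the URLs are hypothetical pod addresses.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative client-side load balancing over service registry entries.
// kumuluzee-discovery does this work for you; this sketch only shows the
// idea: each replica registers its pod URL, and a client rotates through
// the registered entries.
public class RegistryLookup {

    private final List<String> instances; // registered pod base URLs
    private final AtomicInteger counter = new AtomicInteger();

    public RegistryLookup(List<String> instances) {
        this.instances = instances;
    }

    // Round-robin selection across all registered replicas.
    public String next() {
        int index = Math.floorMod(counter.getAndIncrement(), instances.size());
        return instances.get(index);
    }
}
```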
However, for our demo, we also want to expose our microservices to applications living outside the Kubernetes cluster. We do so by creating a Service object of type NodePort.
Service configuration for the customers microservice (customer-service.yaml):
apiVersion: v1
kind: Service
metadata:
  name: customer
  namespace: kumuluzee-blog
spec:
  type: NodePort
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: customer
Service configuration for the orders microservice (order-service.yaml):
apiVersion: v1
kind: Service
metadata:
  name: order
  namespace: kumuluzee-blog
  labels:
    app: order
spec:
  type: NodePort
  ports:
    - name: server
      port: 8081
      protocol: TCP
      targetPort: 8081
  selector:
    app: order
With the provided Service configuration, both microservices are exposed on all nodes of the Kubernetes cluster.
Step 3: Deploying Microservices to Kubernetes
With deployment configuration files defined, we can now start deploying microservices to the Kubernetes cluster.
Deploying the Orders Microservice
We execute the following commands in the given order:
kubectl create -f order-service.yaml
Checking the created Service with kubectl get svc -n kumuluzee-blog shows the assigned NodePort:
NAME      CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
order     10.3.0.82    <nodes>       8081:32583/TCP   5m
With the creation of the order Service, we get the external port of the service, and we can set KUMULUZEE_SERVER_BASEURL in order-cm.yaml to http://192.168.29.246:32583. This will be used by the kumuluzee-discovery extension when registering the replica instance on startup.
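The base URL is simply the node IP plus the NodePort that Kubernetes assigned, which appears after the colon in the PORT(S) column of kubectl get svc (e.g. 8081:32583/TCP). A small plain-Java sketch of that derivation (illustrative, with the node IP from this demo):

```java
// Illustrative helper: derive the externally reachable base URL from a
// node IP and the PORT(S) column of `kubectl get svc` output, e.g.
// "8081:32583/TCP", where the number after the colon is the NodePort.
public class BaseUrl {

    // Extract the NodePort from a "port:nodePort/protocol" string.
    public static int nodePort(String portSpec) {
        String afterColon = portSpec.substring(portSpec.indexOf(':') + 1);
        return Integer.parseInt(afterColon.substring(0, afterColon.indexOf('/')));
    }

    public static String of(String nodeIp, String portSpec) {
        return "http://" + nodeIp + ":" + nodePort(portSpec);
    }
}
```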
Next, we create the orders microservice ConfigMap, and after that the deployment:
kubectl create -f order-cm.yaml
kubectl create -f order-deployment.yaml
Kubernetes should create order-deployment with a single replica, which we can verify by executing kubectl get deploy -n kumuluzee-blog:
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
order-deployment   1         1         1            1           1m
At this point, we can check the service registry to see if any instances are registered; etcd-browser shows the newly registered instance.
With the orders microservice up and running, we proceed with the deployment of the customers microservice.
Deploying the Customers Microservice
To deploy the customers microservice to the Kubernetes cluster, we run the following commands:
kubectl create -f customer-service.yaml
kubectl create -f customer-cm.yaml
kubectl create -f customer-deployment.yaml
We follow the same procedure as for the orders microservice: after the Kubernetes Service object is created, we check the external port of the Service and enter it into the KUMULUZEE_SERVER_BASEURL property of the customer-cm.yaml ConfigMap. Then we create the ConfigMap and Deployment objects.
To check whether customer-deployment was created successfully, we execute kubectl get deploy -n kumuluzee-blog. The result of the command shows:
NAME                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
customer-deployment   1         1         1            1           1m
order-deployment      1         1         1            1           20m
Monitoring the KumuluzEE Microservice's Health
Because of their distributed nature, microservices have to be able to deal with unexpected failures caused by node crashes, deadlocks, connectivity issues, etc. To build truly resilient systems, failures have to be tackled with several mechanisms at once, such as circuit breakers, retries/timeouts, and health checks, in order to cover all aspects of failure.
Now we shift our focus to the health checks of the microservices and explain how to implement a microservice using the KumuluzEE framework together with the KumuluzEE Health extension to expose its health status. We show how Kubernetes Deployments should be configured to let Kubernetes monitor the health status of the containers and initiate the self-healing process (a restart). Finally, we also describe how to create a Horizontal Pod Autoscaler (HPA) to automatically scale a KumuluzEE microservice when the load exceeds the configured limits.
KumuluzEE provides a fully Kubernetes-compliant extension named KumuluzEE Health that offers an easy, consistent, and unified way of performing health checks and exposing health information. Health checks are enabled on a KumuluzEE microservice by including the kumuluzee-health Maven dependency:
<dependency>
    <groupId>com.kumuluz.ee.health</groupId>
    <artifactId>kumuluzee-health</artifactId>
    <version>${kumuluzee-health.version}</version>
</dependency>
The dependency automatically adds a /health endpoint providing the health status of the microservice. Before running a microservice with the kumuluzee-health dependency included, we have to define the configuration for the health checks that should be used to validate its status. For the list of built-in health checks, see the KumuluzEE Health documentation.
In our demonstration, we define health checks for both microservices, customers and orders. We enable DataSourceHealthCheck and DiskSpaceHealthCheck with the following configuration (in config.yaml):
kumuluzee:
  health:
    checks:
      data-source-health-check:
        connection-url: jdbc:postgresql://postgres-customers:5432/customer
        username: dbuser
        password: postgres
      disk-space-health-check:
        threshold: 100000000
When the /health endpoint is called, the microservice invokes data-source-health-check to get the status of database connectivity and disk-space-health-check to get the status of disk usage. If both checks return status OK, the overall health of the microservice is considered OK.
If we send a health check request to http://192.168.29.246:32583/health, the response is:
{
  "outcome" : "UP",
  "checks" : [ {
    "name" : "OrderServiceHealthCheck",
    "state" : "UP"
  }, {
    "name" : "DiskSpaceHealthCheck",
    "state" : "UP"
  }, {
    "name" : "DataSourceHealthCheck",
    "state" : "UP"
  } ]
}
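The aggregation rule behind the outcome field is simple: the overall status is UP only if every individual check reports UP. A plain-Java sketch of that rule (an illustration, not the KumuluzEE Health implementation):

```java
import java.util.Map;

// Illustrative health aggregation: the overall outcome is "UP" only if
// every individual check reports "UP", otherwise "DOWN". This mirrors
// the JSON responses shown in the article.
public class HealthAggregator {

    public static String outcome(Map<String, String> checks) {
        boolean allUp = checks.values().stream().allMatch("UP"::equals);
        return allUp ? "UP" : "DOWN";
    }
}
```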
Updating Kubernetes Deployments
The main purpose of exposing the health status of KumuluzEE microservices is to allow Kubernetes to monitor the health of the container in which the microservice is running and detect unexpected failures. Kubernetes provides a built-in liveness probe, which is used to detect when a container should be restarted. Applications running for long periods of time can be expected to eventually transition into a broken state from which they can only be recovered by a restart.
A liveness probe for the container is enabled by adding the following configuration to the container specification inside the deployment configuration:
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 20
  periodSeconds: 5
The above configuration specifies that Kubernetes checks the status of the container by sending an HTTP GET request to the /health endpoint on container port 8080 every 5 seconds. It also tells Kubernetes to wait 20 seconds for the container to start before sending the first health check request.
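Conceptually, the kubelet restarts the container only after a number of consecutive probe failures (the failureThreshold field, which defaults to 3 when not set, as here); a single success resets the count. A plain-Java sketch of that decision logic, for illustration only:

```java
// Illustrative model of the kubelet's liveness decision: a restart is
// triggered only after `failureThreshold` consecutive probe failures
// (3 by default); any successful probe resets the counter.
public class LivenessTracker {

    private final int failureThreshold;
    private int consecutiveFailures;

    public LivenessTracker(int failureThreshold) {
        this.failureThreshold = failureThreshold;
    }

    // Record one probe result; returns true when a restart is due.
    public boolean recordProbe(boolean success) {
        if (success) {
            consecutiveFailures = 0;
            return false;
        }
        consecutiveFailures++;
        return consecutiveFailures >= failureThreshold;
    }
}
```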
To demonstrate how Kubernetes responds to a NOK status from the health endpoint of the microservice, we intentionally infect the health of the orders microservice by sending a request to the /management/healthy endpoint, setting the health status to false.
Before we infect the service, let us check the pod status of the orders microservice using the command kubectl get pods -n kumuluzee-blog. The command produces the following output:
NAME                                READY   STATUS    RESTARTS   AGE
order-deployment-6bddc44584-kt4lh   1/1     Running   0          35m
We can see that there have been no restarts of the container since the start, i.e. the health check has always been successful. Now let us infect the microservice by sending the following request:
curl -X POST -d "false" -H "Content-Type: application/json" http://192.168.29.246:32583/v1/management/healthy
If we check the health status manually, we get the following response:
{
  "outcome" : "DOWN",
  "checks" : [ {
    "name" : "OrderServiceHealthCheck",
    "state" : "DOWN"
  }, {
    "name" : "DiskSpaceHealthCheck",
    "state" : "UP"
  }, {
    "name" : "DataSourceHealthCheck",
    "state" : "UP"
  } ]
}
By checking the output of kubectl get pods -n kumuluzee-blog, we can see that Kubernetes performed a restart of the pod. Once the pod is restarted, the health status is back to OK, as the infection is "destroyed" by the restart.
NAME                                READY   STATUS    RESTARTS   AGE
order-deployment-6bddc44584-kt4lh   1/1     Running   1          1h
Auto-Scaling KumuluzEE Microservices
We showed how to expose the health status of a microservice using the KumuluzEE Health extension and how to configure Kubernetes to monitor it by defining a liveness probe for the container. Deploying only one instance of the microservice would be a problem, because we would experience downtime if that instance failed. We can overcome this by deploying multiple replicas of the pod and configuring a Horizontal Pod Autoscaler (HPA).
Before we create the HPA for our deployments, we have to define CPU resource limits for each container; the configuration below specifies that a pod cannot use more than one unit of CPU. We add it to the deployment YAML:
resources:
  limits:
    cpu: 1
Now we can create an HPA using the following command:
kubectl autoscale deploy order-deployment -n kumuluzee-blog --cpu-percent=50 --min=2 --max=4
To check the result of the command, we execute kubectl get hpa -n kumuluzee-blog:
NAME               REFERENCE                     TARGETS           MINPODS   MAXPODS   REPLICAS   AGE
order-deployment   Deployment/order-deployment   <unknown> / 50%   2         4         0          11s
The configuration says Kubernetes should always run at least two replicas of order-deployment and should not scale beyond four replicas. When the HPA is created, Kubernetes immediately starts another replica to match these requirements. The HPA configuration also says order-deployment should be scaled when the average CPU usage of the pods exceeds 50%. If we overload the microservice by sending a few requests to the /load endpoint of the customers microservice, the HPA should start scaling the orders microservice.
curl -X POST -d "42" -H "Content-Type: application/json" http://192.168.29.246:32600/v1/load
If we check the status of the HPA, we see that the CPU usage exceeds the target:
NAME               REFERENCE                     TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
order-deployment   Deployment/order-deployment   65% / 50%  1         4         1          28m
and if the target is exceeded for some period of time, Kubernetes starts creating a new replica:
NAME               REFERENCE                     TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
order-deployment   Deployment/order-deployment   99% / 50%  1         4         2          29m
Conclusions
We have demonstrated how easy it is to deploy KumuluzEE microservices to a Kubernetes cluster, how microservice configuration can be provided both by Kubernetes ConfigMaps and by the service configuration cluster, how to perform service discovery, and finally, how to expose the health status of a microservice using the KumuluzEE Health extension. We also showed how to configure a Kubernetes liveness probe to monitor the health of a KumuluzEE microservice and how to define an HPA for auto-scaling. KumuluzEE is a perfect example of how a microservice framework for Java EE can provide seamless integration with runtime platforms.
The source code of the sample is available at KumuluzEE Kubernetes.