Spring Cloud Config Server on Kubernetes (Part 2)
Time to bring your services to Kubernetes.
This is the second of a two-part article about building centralised configuration with Spring Cloud Config Server. In this post we’ll take the two Spring Boot services created in part one and run them on Kubernetes.
We’ll initially deploy to a local cluster before stepping things up and deploying to Azure’s managed Kubernetes Service, AKS. By the end of this post you should have two Spring Boot services deployed to an AKS cluster, as shown in the diagram below.
To build and run the sample source code you’ll need Java, Docker and a local Kubernetes install. I use minikube locally, so if you want to follow the exact steps in this article you’ll need the same. If you’re using something other than minikube you’ll still be able to follow along, you’ll just need to be familiar with whatever local Kubernetes install you’re using.
Later we’re going to create a Kubernetes Deployment and a Kubernetes Service object for both the Config Service and the Config Consumer Service. Before we do anything else, let’s take a quick look at these objects and what they actually do.
A Deployment object describes the desired state of an application running in a cluster. When you define a Deployment object you specify one or more container images and the number of instances you want to run. When a Deployment is created on the cluster, it creates Pods for running the containers. For example, if we create a Deployment that specifies an nginx image with 3 replicas, then Kubernetes will create 3 Pods, each one running an nginx container. If for some reason one of these Pods dies, Kubernetes will recognise that the number of running Pods is less than the specified value in the Deployment. Kubernetes will then take action and create a new Pod, ensuring that the actual state is the same as the desired state described by the Deployment.
A Service object provides a stable network address for accessing a group of Pods. The Service maintains a list of active Pod IP addresses and load balances incoming requests across those Pods. This means that clients don’t need to worry about maintaining a list of active Pod replicas and their IP addresses. Instead, clients simply call the Service, and the Service takes care of routing the request to one of the Pod replicas.
There are 3 main types of Service:

- ClusterIP – the default Service type. It exposes the Service on an IP that is only accessible from within the cluster (hence ClusterIP).
- NodePort – exposes the Service on each node’s IP, using a fixed port. This allows the Service to be accessed from outside the cluster using a node’s IP and the node port, for example NodeIP:NodePort.
- LoadBalancer – used by cloud providers like AWS and Azure to stand up an internet-facing load balancer for your cluster.
We’re going to create two Service objects, one for the Config Service and one for the Config Consumer Service. For the Config Service we’ll create a ClusterIP Service object. As shown in the architecture diagram earlier, this will be used to route traffic within the cluster, from the Config Consumer Service to the Config Service.
For the Config Consumer Service we’ll create a LoadBalancer Service so that we can access the Config Consumer Service from outside the cluster. When running locally we’ll use minikube to access the Service. When we deploy to AKS, an internet-facing load balancer will be stood up for the Service and we’ll be able to access it over the internet.
Config Service
In this section we’ll define the Deployment and Service objects for the Config Service.
Below is the Config Service Deployment definition. I’ll explain each section in detail below.
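Pulling together the attributes described below, a minimal sketch of the Config Service Deployment manifest looks something like this. The probe path assumes Spring Boot’s default actuator health endpoint, and the probe timing values are illustrative placeholders rather than the article’s exact numbers:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: config-service
spec:
  replicas: 2                      # two instances of the Config Service
  selector:
    matchLabels:
      app: config-service          # manage Pods carrying this label
  template:
    metadata:
      labels:
        app: config-service        # label applied to each Pod
    spec:
      containers:
        - name: config-service
          image: briansjavablog/config-service:k8
          ports:
            - containerPort: 8888  # port the Config Service listens on
          readinessProbe:
            httpGet:
              path: /actuator/health   # Spring Boot health endpoint (assumed path)
              port: 8888
            initialDelaySeconds: 30
            timeoutSeconds: 5
            periodSeconds: 10
            failureThreshold: 3
          livenessProbe:
            httpGet:
              path: /actuator/health
              port: 8888
            initialDelaySeconds: 30
            timeoutSeconds: 5
            periodSeconds: 10
            failureThreshold: 3
```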
- kind indicates the type of object being created.
- metadata.name is the name of the Deployment. It can be used to inspect the object using kubectl.
- spec.replicas specifies the number of Pod replicas Kubernetes will create; in other words, the number of instances of the Config Service we want to run.
- spec.selector.matchLabels is used to specify which Pods are managed by this Deployment.
- The Pods are labelled via template.metadata.labels. This corresponds to the label specified above in spec.selector.matchLabels.
- template.spec.containers lists the containers to run in the Pod.
- containers.name specifies the name of the container, config-service in this instance.
- containers.image specifies the name of the image to run as a container. The Dockerfile for creating the briansjavablog/config-service:k8 image is described in part one.
- containers.ports.containerPort – the Config Service is exposed from the container on port 8888.
- container.readinessProbe and container.livenessProbe are used to configure endpoints that are called by Kubernetes to verify the health of the Config Service.
- readinessProbe defines the endpoint Kubernetes will call after a container starts. A successful response indicates the container is ready to receive requests.
- livenessProbe defines the endpoint Kubernetes will call periodically to check the health of a container. A successful response indicates the container is healthy and can continue receiving requests.
- The attributes of the livenessProbe and readinessProbe are the same and are described below.
- xxxProbe.httpGet.path is the URI of the HTTP GET health check endpoint running in the container. This points at the Spring Boot health check endpoint.
- xxxProbe.httpGet.port specifies the health check port, 8888.
- initialDelaySeconds tells Kubernetes how long to wait, in seconds, before calling the endpoint.
- timeoutSeconds is the number of seconds Kubernetes will wait for a response before timing out.
- periodSeconds defines how often (in seconds) Kubernetes should perform the probe.
- failureThreshold defines the number of failed probes before Kubernetes considers the container unhealthy. If this happens for a liveness probe, Kubernetes will restart the container.
Below is the Kubernetes Service definition for the Config Service. I’ll explain each section in detail below.
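Based on the attributes described below, a minimal sketch of the Config Service Service manifest looks something like this. Note that no spec.type is set, so the type defaults to ClusterIP:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: config-service
spec:
  ports:
    - protocol: TCP
      port: 8888        # port the Service is accessible on
      targetPort: 8888  # containerPort of the Config Service Pods
  selector:
    app: config-service # route traffic to Pods with this label
```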
- kind indicates the type of object being created.
- metadata.name is the name of the Service.
- spec.ports defines the exposed port of the Service object and the port of the Pod that the Service will route traffic to.
- spec.ports.protocol specifies the protocol used by the Service. You can find the various options here.
- port is the port that the Service is accessible on. Anything calling the Config Service via this Service calls it on port 8888.
- targetPort is the port used to call the container. This value should be the same as the containerPort specified in the Deployment. The Service receives requests on port 8888 and routes them to one of the Config Service containers on port 8888.
- selector.app defines the set of Pods that the Service will route traffic to. This value aligns with the selector specified in the Deployment.
Note that we didn’t specify spec.type. As a result, the type defaults to ClusterIP.
Config Consumer Service
In this section, we’ll define the Deployment and Service objects for the Config Consumer Service. Note that these objects are very similar to those defined for the Config Service (particularly the Deployment), so I’m not going to repeat the attribute descriptions above.
Below is the Config Consumer Service Deployment definition.
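A minimal sketch of that Deployment follows. The image name is an assumption based on the naming used for the Config Service image in part one, and the probe path again assumes Spring Boot’s default actuator health endpoint:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: config-consumer-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: config-consumer-service
  template:
    metadata:
      labels:
        app: config-consumer-service
    spec:
      containers:
        - name: config-consumer-service
          # image name assumed; build it from the part one Dockerfile
          image: briansjavablog/config-consumer-service:k8
          ports:
            - containerPort: 8080  # port the consumer service listens on
          readinessProbe:
            httpGet:
              path: /actuator/health   # assumed health endpoint path
              port: 8080
            initialDelaySeconds: 30
            timeoutSeconds: 5
          livenessProbe:
            httpGet:
              path: /actuator/health
              port: 8080
            initialDelaySeconds: 30
            timeoutSeconds: 5
```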
Below is the Service definition for the Config Consumer Service.
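A minimal sketch of that Service; the name matches the config-service-consumer-lb Service referenced later in the article, and port 8080 matches the port used to curl the consumer’s API:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: config-service-consumer-lb
spec:
  type: LoadBalancer    # exposes the Service outside the cluster
  ports:
    - protocol: TCP
      port: 8080        # external port
      targetPort: 8080  # containerPort of the consumer Pods
  selector:
    app: config-consumer-service
```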
Note that there is one fundamental difference between the Service above and the Service defined earlier for the Config Service. The Config Service’s Service is a ClusterIP, which means its IP is only accessible within the cluster. The Service above is a LoadBalancer, which means it’s accessible from outside the cluster. Later, when we deploy to AKS, Azure will stand up an internet-facing load balancer and expose this Service on a public IP. This will allow us to test the Config Consumer Service by calling its public-facing API.
Deploying to a Local Cluster
This section describes how to deploy your services to a local minikube cluster. Begin by starting minikube with minikube start. You should see a start-up sequence like this.
If you look at the sample code you’ll see a file called config-server-cluster.yml containing the Deployment and Service objects we defined earlier. Using kubectl you can apply the contents of config-server-cluster.yml to your local cluster.
If everything goes to plan you should see the resources created as follows.
Viewing the Resources
To check the status of the resources that were created, you can run kubectl get all. This will list all the objects created in the cluster and provide a summary status for each.
As you can see from the snippet above, the Deployment and Service objects were created as expected. Note that two ReplicaSets were also created, even though these were not explicitly defined in the manifest. So where did these ReplicaSets come from?
A Deployment creates a ReplicaSet object to ensure that the required set of Pod replicas is running at any given time. Remember that we declared 2 replicas in both Deployment objects earlier. The ReplicaSet is responsible for making sure that the requested number of Pods is running. In the event of a Pod failure, it’s the ReplicaSet that ensures a new Pod is created.
You may have noticed that the External-IP of the config-service-consumer-lb is marked as pending. In a cloud environment like Azure or AWS you’ll see a pending status while the cloud provider is spinning up an internet-facing load balancer. However, in a local cluster that’s not going to happen, so how do we access the Service?
Thankfully minikube allows you to expose a Service from the cluster using the minikube tunnel command. After running minikube tunnel, in a new window run kubectl get service. You’ll see that config-service-consumer-lb now has an EXTERNAL-IP of 127.0.0.1 assigned.
We can use this IP to call the Config Consumer Service from outside the cluster.
If there was a problem creating any of the objects, you’ll need to drill in and figure out what’s going on. You can inspect objects using the describe command and the name of the object you want to inspect. For example, running kubectl describe pod/config-consumer-service-746586bf77-w7pbn will provide detailed metadata for the specified Config Consumer Service Pod. As well as a view of the object’s configuration, you may also be able to see useful event data associated with the object.
Testing the Config Consumer Service
It’s time to test the Config Consumer Service by calling the timeout-config endpoint with curl 127.0.0.1:8080/timeout-config. You should see a JSON response containing the sample config as follows.
This proves that the Config Consumer Service and the Config Service are stood up in the cluster and that we have end-to-end connectivity. If you’ve made it this far, good job!
Deploying to Azure AKS
To deploy to AKS you’ll need an Azure account and the Azure CLI installed. If you’ve pulled the source code you’ll see a script called createAKSClusterAndDeploy.sh in the scripts directory. This script uses the Azure CLI to create a cluster and install the Config Service and Config Consumer Service. You’ll need to make sure you’ve authenticated the CLI by running az login before running the script.
The script is pretty straightforward and essentially does the following.
- az group create creates a new Resource Group in the region specified.
- az aks create creates an AKS cluster with one node using VM type Standard_B2s.
- az aks get-credentials pulls back the access credentials for the cluster you just created. The credentials are saved to .kube/config so that they’re available to kubectl.
- kubectl apply -f config-server-cluster.yml installs the contents of the manifest in the cluster.
The script takes a few minutes to run and when it finishes you should see output like the following.
Run kubectl get all to list the resources that were created. You’ll notice that the config-service-consumer-lb Service has a public IP.
You should be able to access the Config Consumer Service via the public IP of the config-service-consumer-lb Service. curl 220.127.116.11:8080/timeout-config should return the JSON payload shown below.
If you’ve made it this far, well done. You now have the Config Service and the Config Consumer Service running on AKS. Here’s a reminder of what you’ve deployed.
Published at DZone with permission of Brian Hannaway, DZone MVB. See the original article here.