From Zero to Kubernetes: The Fast Track
This is the starter Kubernetes article all you Java developers have been waiting for.
Historically, enterprise Java development was known (and feared) for its steep learning curve. Dozens of lines of XML were necessary just to deploy a simple application or to configure an application server. With the rise of DevOps, this configuration hassle became merely the beginning of a long and painful process — the developer (or DevOps engineer, if you wish) is responsible not only for configuring the application but also for running it. Historically, this required either curator scripts that ensure the application is running (and restart it when it is not), or manual intervention in case of application failure.
Modern Trends to the Rescue
Fortunately for us, the times of laborious configuration are over. It is now possible to use tools that automate most of the tasks that used to be time-intensive or require active monitoring. In this tutorial, we demonstrate how to create a simple Java (Spring Boot) application, containerize it, and deploy it to a Kubernetes cluster. Our application, even though simple in functionality, will have resiliency built in and will recover automatically in the event of failure. Following this example, the whole application can be created and deployed into a testing cluster in under 30 minutes, without much configuration.

About this Tutorial
This tutorial is intended for Java programmers with basic knowledge of server-side development who are interested in running their code in a Kubernetes cluster. Even though we will use Gradle to build the project, knowledge of this build system is not required — a developer with Maven-only experience will find the build code very familiar.
Prerequisites
Before we start, there are some prerequisites that you will need in your toolbox in addition to your favorite IDE and JDK. Those are:
- Docker
- Gradle
- Helm
- Kubernetes
We will use these tools when building our solution (Gradle) and when testing our deployment (Docker, Helm, Kubernetes). If you already have them installed, skip ahead to the section The Application.
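If you are unsure whether the tools are already present, a quick sanity check is to print their versions. The exact output varies by machine, and any reasonably recent versions should work for this tutorial:

```shell
# Verify that each prerequisite is installed and on the PATH.
docker version     # client and server info; the server section confirms the daemon is running
kubectl version    # prints the client version, and the server version if a cluster is reachable
helm version       # with Helm 2, prints both the client and the Tiller (server) version
java -version      # JDK 11 is assumed later in this tutorial
```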
Docker and Kubernetes
If you are running Mac or Windows, the most convenient way to get both Docker and Kubernetes is to install Docker Desktop.
Docker.com requires you to register and log in before downloading Docker Desktop. To bypass this requirement, see this GitHub issue: https://github.com/docker/docker.github.io/issues/6910.
Once installed, enable Kubernetes in the Settings dialog.
Linux
For Linux users the installation process will be a bit more painful — you will need to install Docker CE using the package manager of your distribution. For Kubernetes experiments try Minikube.
Helm
Helm is a package manager. It is to Kubernetes what yum is to CentOS or apt-get is to Debian. We will be creating a Helm chart in this tutorial, which is essentially a package/deployment descriptor containing information about which container should be installed into the Kubernetes cluster, how many instances should be running, which ports are to be exposed, etc.

The installation process for the Helm binary depends on your system, but generally there are two ways: use a package manager or download the binary manually (and link it into your path). Both approaches are described in these docs.
Gradle
We will use Gradle as the build tool. There is no need to install it manually, though — the Gradle Wrapper is a simple script that downloads the right Gradle version for us during the build.
The Application
Now that we have our toolbox ready, we can proceed with the application itself. As this is an entry-level article, we will not do anything fancy — a simple Hello World will be enough. As we are on the fast track, we will use Spring Initializr, which will generate the scaffold for us. On the web page, choose a Gradle project with Java 11 and select Spring Web and Spring Actuator as dependencies. Then download the project.
If your IDE (such as IntelliJ IDEA) supports Initializr out of the box, feel free to use this feature.

Once you open the project in your IDE, your build.gradle file should appear as follows:
plugins {
    id 'java'
    id 'org.springframework.boot' version '2.1.3.RELEASE'
}

apply plugin: 'io.spring.dependency-management'

group = 'com.zoomint'
version = '1.0.0'
sourceCompatibility = '11'

repositories {
    mavenCentral()
}

dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-actuator'
    implementation 'org.springframework.boot:spring-boot-starter-web'
    testImplementation 'org.springframework.boot:spring-boot-starter-test'
}
And the application class Initializr created for us:
package com.zoomint;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class HelloWorldKubernetesApplication {

    public static void main(String[] args) {
        SpringApplication.run(HelloWorldKubernetesApplication.class, args);
    }
}
Let's extend it a bit to serve a /greeting endpoint:
package com.zoomint;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class HelloWorldKubernetesApplication {

    public static void main(String[] args) {
        SpringApplication.run(HelloWorldKubernetesApplication.class, args);
    }

    @GetMapping("/greeting")
    public String getGreeting() {
        return "Hello world";
    }
}
And that's it.
Now when we run the application (either directly from the IDE, using java -jar build/libs/hello-world-kubernetes-1.0.0.jar, or via ./gradlew bootRun), it will start up on port 8080 (the default port in Spring Boot). When we access (GET) localhost:8080/greeting, we will retrieve our message. When we access (GET) localhost:8080/actuator/health, we will retrieve the following message with status 200:

{"status":"UP"}
This health endpoint is important for us, as we will use it as a liveness probe in Kubernetes. Simply put: if the application is unhealthy (the endpoint is not accessible or the status code is outside the 200–399 range), Kubernetes will kill the container and spin up a new instance.
Containerization
As you may have already noticed, the application is not containerized by default. To do so, we need to alter our build.gradle a bit (see the docker section of the code listing):
plugins {
    id 'java'
    id 'org.springframework.boot' version '2.1.3.RELEASE'
    id 'com.bmuschko.docker-spring-boot-application' version '4.6.2'
}

apply plugin: 'io.spring.dependency-management'

group = 'com.zoomint'
version = '1.0.0'
sourceCompatibility = '11'

repositories {
    mavenCentral()
}

dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-actuator'
    implementation 'org.springframework.boot:spring-boot-starter-web'
    testImplementation 'org.springframework.boot:spring-boot-starter-test'
}

docker {
    springBootApplication {
        baseImage = 'adoptopenjdk/openjdk11:jdk-11.0.2.9-alpine-slim'
        tag = "${project.name}:${project.version}"
    }
}
In the plugins section, we have added the docker-spring-boot-application plugin. The other change is the addition of the docker section (extension), in which we have specified the baseImage and a tag. In this case, the baseImage is OpenJDK 11 slim with Alpine Linux (slim means that parts of the JDK distribution that are generally not necessary for cloud deployment are removed in order to make the image smaller). The tag, a user-friendly name for the image, is composed of the name of our project and its version.
Now we can execute ./gradlew dockerBuildImage. Once it finishes, we can list the locally present images by executing docker images -a and start the container using docker run -p 8080:8080 hello-world-kubernetes:1.0.0 (-p 8080:8080 maps the inner port 8080 to port 8080 on our localhost). Once we verify that everything works as expected, we can list the running containers using docker ps and stop ours using docker kill {containerId}.
Behind the Scenes
The plugin did some magic for us, so let's dive into what happened behind the scenes. If we take a look into the build/docker directory that the plugin generated for us, we will find a file named Dockerfile and a couple of directories: classes, libs, and resources. These directories contain exactly what you would expect based on their names — our application classes, jar dependencies, and static resources. The important part for us is the Dockerfile, which tells the Docker binary how to construct our container:
FROM adoptopenjdk/openjdk11:jdk-11.0.2.9-alpine-slim
WORKDIR /app
COPY libs libs/
COPY resources resources/
COPY classes classes/
ENTRYPOINT ["java", "-cp", "/app/resources:/app/classes:/app/libs/*", "com.zoomint.HelloWorldKubernetesApplication"]
EXPOSE 8080
As you can see from the code, we first declare that we are extending the base image (OpenJDK 11). The WORKDIR instruction tells Docker that we want to execute all subsequent commands relative to the /app directory in the container. Then come the instructions to copy all the libs, resources, and classes into the /app directory inside the container. Perhaps the most important instruction, ENTRYPOINT, tells Docker which command it should execute in the container once it is started. And lastly, we tell Docker that the container exposes port 8080.
You may have noticed that the plugin uses an expanded version of our project (libs, resources, classes), while our original approach was to use a fat jar (everything bundled into a single jar file). This is intentional, as Docker is able to share layers across multiple images, where a layer roughly corresponds to an instruction in the Dockerfile.
This behavior is useful during development because, generally speaking, we do not change the libraries as often as we change our application code. The layering mechanism makes sure that all of our images share the common library and resource layers, so they only need to be stored on disk once. This saves a significant amount of resources, as the dependencies of a project tend to be quite bulky.
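If you are curious, you can inspect this layering on your own machine (assuming the image built above exists locally) — Docker can list the layers of an image together with the instruction that created each of them:

```shell
# Show the layers of our image, newest first; each layer roughly maps
# to one Dockerfile instruction. The layer created by "COPY libs libs/"
# is by far the largest, and it is reused across rebuilds as long as
# the dependencies do not change.
docker history hello-world-kubernetes:1.0.0
```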
Helm
Now that we have created a Docker container and uncovered how it is built, our last step is to make the application run on a Kubernetes cluster. Kubernetes itself is declarative, which means that we only need to specify the qualities of our deployment: run 1 instance, require this amount of resources (CPU, RAM), use rolling deployment, expose this port, etc. Once the application is deployed, Kubernetes will make sure that it always runs in accordance with these requirements.
A deployment descriptor containing the requirements for our particular application can be written as a Helm chart. To make things easy, we will use the scaffolding command helm create hello-world-kubernetes-chart. Now we must customize the scaffold to support our application. A good place to start is the values.yaml file, which contains the variables that we can tweak.
values.yaml
# Default values for hello-world-kubernetes-chart.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: nginx
  tag: stable
  pullPolicy: IfNotPresent

nameOverride: ""
fullnameOverride: ""

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  paths: []
  hosts:
    - chart-example.local
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

nodeSelector: {}

tolerations: []

affinity: {}
First, we notice the replicaCount variable, which is set to 1. This means that Kubernetes will always ensure that one instance of our application is running. If we intentionally kill the instance, Kubernetes will automatically spin up a new one. For this demonstration, we will leave the value at 1. The thing we do want to change is the image, as it defaults to an installation of the Nginx web server. To prevent this, we will alter the repository and tag in the following way:
image:
  repository: hello-world-kubernetes
  tag: 1.0.0
Rest of the values.yaml File
We will not alter anything else in the file, but let's look at the purpose of some of the other variables, just to be clear about the options available.
- The service section (line 15) is closely related to replicaCount and high availability in our application. We can view it as an internal load balancer with its DNS name managed by Kubernetes. By accessing our application through this DNS name, we abstract ourselves from the number of instances running; Kubernetes will make sure that the request is routed to some healthy instance. And if an instance fails, it will be evicted from the routing table automatically.
- Ingress is for routing external traffic into our cluster. Simply put, its role is to act as a reverse proxy. We will not use it for our testing service, so we'll keep it disabled.
- Resources are a thing to keep in mind, as we can use them to instruct Kubernetes to limit the maximum amount of RAM and CPU available to the application (useful under memory pressure), or to schedule (install) our application only onto a node that has a defined amount of memory available (and reserve it).
- NodeSelector and tolerations are used to schedule (or not schedule) the application on some particular node. This is useful when you have specialized hardware and want to make sure that an application uses it.
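To make the resources option more concrete, here is a sketch of what a filled-in resources section could look like in values.yaml. The numbers are illustrative placeholders, not recommendations for this application:

```yaml
resources:
  limits:
    cpu: 500m        # never use more than half a CPU core
    memory: 256Mi    # the container is restarted if it exceeds this amount
  requests:
    cpu: 100m        # the scheduler reserves this much CPU on the node
    memory: 128Mi    # ...and this much memory
```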
deployment.yaml
The second file we will change is deployment.yaml, which describes how Kubernetes should handle the installation of our application. The lines we are most interested in are the following:
containers:
  - name: {{ .Chart.Name }}
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
    imagePullPolicy: {{ .Values.image.pullPolicy }}
    ports:
      - name: http
        containerPort: 80
        protocol: TCP
    livenessProbe:
      httpGet:
        path: /
        port: http
    readinessProbe:
      httpGet:
        path: /
        port: http
Specifically, we want to change containerPort to 8080, as that is the port of our Spring Boot application. Next, we need to change the paths of the livenessProbe and readinessProbe to /actuator/health. Liveness means that the application is running, although it may not yet be in a state to accept requests (for example, it is still warming up caches); the readiness probe indicates that the application is ready to accept requests. This should be the end result:
containers:
  - name: {{ .Chart.Name }}
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
    imagePullPolicy: {{ .Values.image.pullPolicy }}
    ports:
      - name: http
        containerPort: 8080
        protocol: TCP
    livenessProbe:
      httpGet:
        path: /actuator/health
        port: http
    readinessProbe:
      httpGet:
        path: /actuator/health
        port: http
NOTES.txt
The last change is purely aesthetic. NOTES.txt contains instructions that are printed out once the chart is installed, including a port-forwarding command. By default, the chart expects our container to expose port 80, but in our case we expose port 8080, so let's fix it:
{{- else if contains "ClusterIP" .Values.service.type }}
  export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "hello-world-kubernetes-chart.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl port-forward $POD_NAME 8080:80
{{- end }}
The proper command (on line 4) is kubectl port-forward $POD_NAME 8080:8080.
Running the Application
Now that all the configuration steps are completed, it is time for our moment of truth! Let's run helm install hello-world-kubernetes-chart. We can immediately harvest the fruits of our labor, as the correct notes are (hopefully) printed out:
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=hello-world-kubernetes,app.kubernetes.io/instance=bailing-guppy" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl port-forward $POD_NAME 8080:8080
We will follow these instructions to port-forward traffic to our instance. Our application is now containerized and running inside Kubernetes, and the endpoint on localhost:8080/greeting returns the expected result.
For Inquisitive Readers
Helm
Two Helm commands to start with are helm list (to list our Helm deployments) and helm delete {name} (to get rid of those we no longer need). There are plenty of stable Helm charts available for popular applications in this repo. These stable charts cover the whole spectrum of applications, from infrastructure components such as Redis, PostgreSQL, and RabbitMQ to top-level applications like GitLab or WordPress. All of this software can be installed as simply as helm install {chart_name}.
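For example, with Helm 2 and the default stable repository configured, trying one of these charts could look like the following sketch (the chart name and the {release_name} placeholder are illustrative):

```shell
helm search redis            # find charts matching "redis" in the configured repos
helm install stable/redis    # install the chart; Helm generates a release name
helm list                    # note the generated release name
helm delete {release_name}   # remove the release again when done experimenting
```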
Application Failure Resiliency
If you wonder about the declarative properties of Kubernetes, the easiest way to verify them is to delete the pod (the container with our application). First, list the available pods using the command kubectl get pods. Then execute kubectl delete pods {pod_name}. Try to list the pods again: you will see that Kubernetes reacted immediately by spawning a new instance. This behavior is driven by the deployment configuration (see our Helm chart above), which determined that one replica must always be running.
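The whole self-healing experiment boils down to three commands (the pod name will differ on your cluster; substitute the one printed by the first command for the {pod_name} placeholder):

```shell
kubectl get pods                # note the name of our application's pod
kubectl delete pods {pod_name}  # kill the pod on purpose
kubectl get pods                # a replacement pod is already being created
```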
Conclusion
In this tutorial, we have demonstrated how to easily containerize a Spring Boot application and how to run it on a Kubernetes cluster. Furthermore, we are now familiar with the basic usage of Kubernetes, which, together with a convenient Helm chart, gives us simple access to advanced deployment features such as automatic recovery, replication, load balancing, and resource management. Isn't it amazing how all of this can be achieved with only a few lines of code?
Sources
The source code for this tutorial can be found in this GitHub repository.
Published at DZone with permission of Pavel Micka. See the original article here.