The Latest Containers Topics

Spring Boot Docker Best Practices
In this blog, you will learn some Docker best practices mainly focused on Spring Boot applications. You will learn these practices by applying them to a sample application. Enjoy!

1. Introduction

This blog continues where the previous blog about Docker Best Practices left off. However, this blog can be read independently from the previous one. The goal is to provide some best practices that can be applied to Dockerized Spring Boot applications. The Dockerfile that will be used as a starting point is the following:

Dockerfile
FROM eclipse-temurin:17.0.5_8-jre-alpine@sha256:02c04793fa49ad5cd193c961403223755f9209a67894622e05438598b32f210e
WORKDIR /opt/app
RUN addgroup --system javauser && adduser -S -s /usr/sbin/nologin -G javauser javauser
ARG JAR_FILE
COPY target/${JAR_FILE} app.jar
RUN chown -R javauser:javauser .
USER javauser
ENTRYPOINT ["java", "-jar", "app.jar"]

This Dockerfile is doing the following:
  • FROM: Take the eclipse-temurin:17 Java Docker image as base image;
  • WORKDIR: Set /opt/app as the working directory;
  • RUN: Create a system group and system user;
  • ARG: Provide an argument JAR_FILE so that you do not have to hard code the jar file name into the Dockerfile;
  • COPY: Copy the jar file into the Docker image;
  • RUN: Change the owner of the WORKDIR to the previously created system user;
  • USER: Ensure that the previously created system user is used;
  • ENTRYPOINT: Start the Spring Boot application.

In the next sections, you will change this Dockerfile to adhere to best practices. The resulting Dockerfile of each paragraph is available in the git repository in the directory Dockerfiles. At the end of each paragraph, the name of the corresponding final Dockerfile will be mentioned where applicable. The code used in this blog is available on GitHub.

2. Prerequisites

The following prerequisites apply to this blog:
  • Basic Linux knowledge
  • Basic Java and Spring Boot knowledge
  • Basic Docker knowledge

3. Sample Application

A sample application is needed in order to demonstrate the best practices. Therefore, a basic Spring Boot application is created containing the Spring Web and Spring Actuator dependencies. The application can be run by invoking the following command from within the root of the repository:

Shell
$ mvn spring-boot:run

Spring Actuator will provide a health endpoint for your application. By default, it will always return the UP status.

Shell
$ curl http://localhost:8080/actuator/health
{"status":"UP"}

In order to alter the health status of the application, a custom health indicator is added. Every five invocations, the health of the application will be set to DOWN.

Java
@Component
public class DownHealthIndicator implements HealthIndicator {

    private int counter;

    @Override
    public Health health() {
        counter++;
        Health.Builder status = Health.up();
        if (counter == 5) {
            status = Health.down();
            counter = 0;
        }
        return status.build();
    }
}

For building the Docker image, a fork of the dockerfile-maven-plugin of Spotify will be used. The following snippet is therefore added to the pom file:

XML
<plugin>
    <groupId>com.xenoamess.docker</groupId>
    <artifactId>dockerfile-maven-plugin</artifactId>
    <version>1.4.25</version>
    <configuration>
        <repository>mydeveloperplanet/dockerbestpractices</repository>
        <tag>${project.version}</tag>
        <buildArgs>
            <JAR_FILE>${project.build.finalName}.jar</JAR_FILE>
        </buildArgs>
    </configuration>
</plugin>

The advantage of using this plugin is that you can easily reuse the configuration. Creating the Docker image can be done with a single Maven command.
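The blog builds the image with an explicit dockerfile:build invocation. If you would rather have the image built as part of the normal Maven lifecycle, the plugin also supports binding its build goal to an execution. This is not used in the blog; a sketch of such a binding, placed inside the <plugin> element shown above, could look like this:

XML
<!-- Optional: bind the image build to the Maven lifecycle (not used in this blog) -->
<executions>
    <execution>
        <id>build-image</id>
        <goals>
            <goal>build</goal>
        </goals>
    </execution>
</executions>

With this in place, the image should be built as part of mvn package as well.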
Building the jar file is done by invoking the following command:

Shell
$ mvn clean verify

Building the Docker image can be done by invoking the following command:

Shell
$ mvn dockerfile:build

Run the Docker image:

Shell
$ docker run --name dockerbestpractices mydeveloperplanet/dockerbestpractices:0.0.1-SNAPSHOT

Find the IP address of the running container:

Shell
$ docker inspect dockerbestpractices | grep IPAddress
    "SecondaryIPAddresses": null,
    "IPAddress": "172.17.0.3",
            "IPAddress": "172.17.0.3"

In the above example, the IP address is 172.17.0.3. The application also contains a HelloController which just responds with a hello message. The Hello endpoint can be invoked as follows:

Shell
$ curl http://172.17.0.3:8080/hello
Hello Docker!

Everything is now explained to get started!

4. Best Practices

4.1 Healthcheck

A healthcheck can be added to your Dockerfile in order to expose the health of your container. Based on this status, the container can be restarted. This can be done by means of the HEALTHCHECK command. Add the following healthcheck:

Dockerfile
HEALTHCHECK --interval=30s --timeout=3s --retries=1 CMD wget -qO- http://localhost:8080/actuator/health/ | grep UP || exit 1

This healthcheck is doing the following:
  • interval: Every 30 seconds the healthcheck is executed. For production use, it is better to choose something like five minutes. In order to do some tests, a smaller value is easier; this way you do not have to wait for five minutes each time.
  • timeout: A timeout of three seconds for executing the health check.
  • retries: The number of consecutive failed checks needed before the health status changes. This defaults to three, which is a good number for production. For testing purposes, you set it to one, meaning that after one unsuccessful check, the health status changes to unhealthy.
  • command: The Spring Actuator endpoint will be used as a healthcheck. The response is retrieved and piped to grep in order to verify whether the health status is UP. It is advised not to use curl for this purpose because not every image has curl available. You would need to install curl on top of the image, and this enlarges the image by several MBs.

Build and run the container. Take a closer look at the status of the container. In the first 30 seconds, the health status indicates starting because the first health check is only executed after the interval has passed.

Shell
$ docker ps
CONTAINER ID   IMAGE                                                  COMMAND                  CREATED         STATUS                            PORTS   NAMES
ddffb5a9cbf0   mydeveloperplanet/dockerbestpractices:0.0.1-SNAPSHOT   "java -jar /opt/app/…"   8 seconds ago   Up 6 seconds (health: starting)           dockerbestpractices

After 30 seconds, the health status indicates healthy.

Shell
$ docker ps
CONTAINER ID   IMAGE                                                  COMMAND                  CREATED          STATUS                    PORTS   NAMES
ddffb5a9cbf0   mydeveloperplanet/dockerbestpractices:0.0.1-SNAPSHOT   "java -jar /opt/app/…"   33 seconds ago   Up 32 seconds (healthy)           dockerbestpractices

After 2-5 minutes, the health status indicates unhealthy because of the custom health indicator you added to the sample application.

Shell
$ docker ps
CONTAINER ID   IMAGE                                                  COMMAND                  CREATED         STATUS                     PORTS   NAMES
ddffb5a9cbf0   mydeveloperplanet/dockerbestpractices:0.0.1-SNAPSHOT   "java -jar /opt/app/…"   2 minutes ago   Up 2 minutes (unhealthy)           dockerbestpractices

Again, 30 seconds after the unhealthy status, the status reports healthy. Did you notice that the container did not restart due to the unhealthy status? That is because the Docker engine does not do anything based on this status.
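You can see the status the Docker engine records (and ignores) by inspecting the container. A quick check, assuming the dockerbestpractices container from above is still running; the output depends on the moment you run it:

Shell
# Show only the current health status (starting, healthy or unhealthy)
$ docker inspect --format '{{.State.Health.Status}}' dockerbestpractices
unhealthy

# Or dump the full health structure, including the log of the most recent probes
$ docker inspect --format '{{json .State.Health}}' dockerbestpractices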
A container orchestrator like Kubernetes will do a restart. Is it not possible to restart the container when running with the Docker engine? Yes, it is: you can use the autoheal Docker image for this purpose. Let's start the autoheal container.

Shell
docker run -d \
  --name autoheal \
  --restart=always \
  -e AUTOHEAL_CONTAINER_LABEL=all \
  -v /var/run/docker.sock:/var/run/docker.sock \
  willfarrell/autoheal

Verify whether it is running.

Shell
$ docker ps
CONTAINER ID   IMAGE                                                  COMMAND                  CREATED          STATUS                    PORTS   NAMES
ddffb5a9cbf0   mydeveloperplanet/dockerbestpractices:0.0.1-SNAPSHOT   "java -jar /opt/app/…"   10 minutes ago   Up 10 minutes (healthy)           dockerbestpractices
d40243eb242a   willfarrell/autoheal                                   "/docker-entrypoint …"   5 weeks ago      Up 9 seconds (healthy)            autoheal

Wait until the health is unhealthy again, or just invoke the health actuator endpoint in order to speed it up. When the status reports unhealthy, the container is restarted. You can verify this in the STATUS column, where you can see the uptime of the container.

Shell
$ docker ps
CONTAINER ID   IMAGE                                                  COMMAND                  CREATED          STATUS                            PORTS   NAMES
ddffb5a9cbf0   mydeveloperplanet/dockerbestpractices:0.0.1-SNAPSHOT   "java -jar /opt/app/…"   12 minutes ago   Up 6 seconds (health: starting)           dockerbestpractices

You have to decide for yourself whether you want this, or whether you want to monitor the health status yourself by means of a monitoring tool. The autoheal image provides you with the means to automatically restart your Docker container(s) without manual intervention.

The resulting Dockerfile is available in the git repository with the name 6-Dockerfile-healthcheck.

4.2 Docker Compose

Docker Compose gives you the opportunity to start multiple containers at once with a single command. Besides that, it also enables you to document your services, even when you only have one service to manage. Docker Compose used to be installed separately from Docker, but nowadays it is part of Docker itself. You need to write a compose.yml file that contains this configuration. Let's see what this looks like for the two containers you used during the healthcheck.

YAML
services:
  dockerbestpractices:
    image: mydeveloperplanet/dockerbestpractices:0.0.1-SNAPSHOT
  autoheal:
    image: willfarrell/autoheal:1.2.0
    restart: always
    environment:
      AUTOHEAL_CONTAINER_LABEL: all
    volumes:
      - type: bind
        source: /var/run/docker.sock
        target: /var/run/docker.sock

Two services (read: containers) are configured: one for the dockerbestpractices image and one for the autoheal image. The autoheal container will restart after a reboot, has an environment variable defined, and has a volume mounted. Execute the following command from the directory where the compose.yml file can be found:

Shell
$ docker compose up

In the logging, you will see that both containers are started. Open another terminal window and navigate to the directory where the compose.yml can be found. A lot of commands can be used in combination with Docker Compose, e.g., show the status of the running containers:

Shell
$ docker compose ps
NAME                                                 COMMAND                  SERVICE               STATUS              PORTS
mydockerbestpracticesplanet-autoheal-1               "/docker-entrypoint …"   autoheal              running (healthy)
mydockerbestpracticesplanet-dockerbestpractices-1    "java -jar /opt/app/…"   dockerbestpractices   running (healthy)

Or stop the containers:

Shell
$ docker compose stop
[+] Running 2/2
 ⠿ Container mydockerbestpracticesplanet-autoheal-1             Stopped  4.3s
 ⠿ Container mydockerbestpracticesplanet-dockerbestpractices-1  Stopped  0.3s

Or easily remove the containers:

Shell
$ docker compose rm
? Going to remove mydockerbestpracticesplanet-dockerbestpractices-1, mydockerbestpracticesplanet-autoheal-1 Yes
[+] Running 2/0
 ⠿ Container mydockerbestpracticesplanet-autoheal-1             Removed  0.0s
 ⠿ Container mydockerbestpracticesplanet-dockerbestpractices-1  Removed

As you can see, Docker Compose provides quite a few advantages, and you should definitely consider using it.

4.3 Multi-Stage Builds

Sometimes it can be handy to build your application inside a Docker container. The advantage is that you do not need to install a complete development environment onto your system and that you can interchange the development environment more easily. However, there is a problem with building the application inside your container, especially when you want to use the same container for running your application. The sources and the complete development environment will be available in your production container, and this is not a good idea from a security perspective. You could write separate Dockerfiles to circumvent this issue: one for the build and one for running the application. But this is quite cumbersome. The solution is to use multi-stage builds. With multi-stage builds, you can separate the building stage from the running stage. The Dockerfile looks as follows:

Dockerfile
FROM maven:3.8.6-eclipse-temurin-17-alpine@sha256:e88c1a981319789d0c00cd508af67a9c46524f177ecc66ca37c107d4c371d23b AS builder
WORKDIR /build
COPY . .
RUN mvn clean package -DskipTests

FROM eclipse-temurin:17.0.5_8-jre-alpine@sha256:02c04793fa49ad5cd193c961403223755f9209a67894622e05438598b32f210e
WORKDIR /opt/app
RUN addgroup --system javauser && adduser -S -s /usr/sbin/nologin -G javauser javauser
COPY --from=builder /build/target/mydockerbestpracticesplanet-0.0.1-SNAPSHOT.jar app.jar
RUN chown -R javauser:javauser .
USER javauser
HEALTHCHECK --interval=30s --timeout=3s --retries=1 CMD wget -qO- http://localhost:8080/actuator/health/ | grep UP || exit 1
ENTRYPOINT ["java", "-jar", "app.jar"]

As you can see, this Dockerfile contains two FROM statements. The first one is used for building the application:
  • FROM: A Docker image containing Maven and Java 17; this is needed for building the application;
  • WORKDIR: Set the working directory;
  • COPY: Copy the current directory to the working directory in the container;
  • RUN: The command to build the jar file.

Something else is also added to the first FROM statement: at the end, AS builder is added. This way, this build stage is named and can be referenced when building the image for running the application. The second part is identical to the Dockerfile you used to have before, except for two lines. The following lines are removed:

Dockerfile
ARG JAR_FILE
COPY target/${JAR_FILE} app.jar

These lines ensured that the jar file from our local build was copied into the image. They are replaced with the following line:

Dockerfile
COPY --from=builder /build/target/mydockerbestpracticesplanet-0.0.1-SNAPSHOT.jar app.jar

With this line, you indicate that you want to copy a file from the builder stage into the new image. When you build this Dockerfile, you will notice that the builder stage executes the build and finally, the image for running the application is created. While building the image, you will also notice that all Maven dependencies are downloaded.

The resulting Dockerfile is available in the git repository with the name 7-Dockerfile-multi-stage-build.
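Because the COPY . . instruction invalidates the build cache whenever any file in the repository changes, those dependencies are downloaded again on every build. A common refinement, which is not part of this blog's Dockerfiles, is to copy the pom.xml first and resolve the dependencies in their own cached layer. A sketch of such a builder stage (digest omitted for brevity):

Dockerfile
FROM maven:3.8.6-eclipse-temurin-17-alpine AS builder
WORKDIR /build
# Resolve dependencies in a separate layer; it is only rebuilt when pom.xml changes
COPY pom.xml .
RUN mvn dependency:go-offline
# Copy the sources afterwards and build the jar
COPY src ./src
RUN mvn clean package -DskipTests

Note that dependency:go-offline does not always resolve every plugin dependency, so the first build after a pom change may still download a few extra artifacts.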
4.4 Spring Boot Docker Layers

A Docker image consists of layers. If you are not familiar with Docker layers, you can check out a previous post. Every command in a Dockerfile will result in a new layer. When you initially pull a Docker image, all layers will be retrieved and stored. If you update your Docker image and only change, for example, the jar file, the other layers will not be retrieved anew. This way, your Docker images are stored more efficiently. However, when you are using Spring Boot, a fat jar is created. This means that when you only change some of your code, a new fat jar is created containing the unchanged dependencies as well. So each time you create a new Docker image, megabytes are added in a new layer without any necessity. For this purpose, Spring Boot Docker layers can be used. A detailed explanation can be found here. In short, Spring Boot can split the fat jar into several directories:
  • /dependencies
  • /spring-boot-loader
  • /snapshot-dependencies
  • /application

The application code will reside in the directory application, whereas, for example, the dependencies will reside in the directory dependencies. In order to achieve this, you will use a multi-stage build. The first part will copy the jar file into a Java Docker image and extract the fat jar.

Dockerfile
FROM eclipse-temurin:17.0.4.1_1-jre-alpine@sha256:e1506ba20f0cb2af6f23e24c7f8855b417f0b085708acd9b85344a884ba77767 AS builder
WORKDIR application
ARG JAR_FILE
COPY target/${JAR_FILE} app.jar
RUN java -Djarmode=layertools -jar app.jar extract

The second part will copy the split directories into a new image. The COPY commands replace the single jar file.

Dockerfile
FROM eclipse-temurin:17.0.4.1_1-jre-alpine@sha256:e1506ba20f0cb2af6f23e24c7f8855b417f0b085708acd9b85344a884ba77767
WORKDIR /opt/app
RUN addgroup --system javauser && adduser -S -s /usr/sbin/nologin -G javauser javauser
COPY --from=builder application/dependencies/ ./
COPY --from=builder application/spring-boot-loader/ ./
COPY --from=builder application/snapshot-dependencies/ ./
COPY --from=builder application/application/ ./
RUN chown -R javauser:javauser .
USER javauser
HEALTHCHECK --interval=30s --timeout=3s --retries=1 CMD wget -qO- http://localhost:8080/actuator/health/ | grep UP || exit 1
ENTRYPOINT ["java", "org.springframework.boot.loader.JarLauncher"]

Build and run the container. You will not notice any difference when running the container. The main advantage is the way the Docker image is stored.

The resulting Dockerfile is available in the git repository with the name 8-Dockerfile-spring-boot-docker-layers.

5. Conclusion

In this blog, some best practices are covered when creating Dockerfiles for Spring Boot applications. Learn to apply these practices, and you will end up with much better Docker images.
December 20, 2022
by Gunter Rotsaert CORE
· 6,069 Views · 10 Likes
How To Use HashiCorp Tools To Create a Secured Edge Infrastructure
This article is the missing step-by-step tutorial needed to glue all the pieces of Consul, Nomad, and Vault together.
December 19, 2022
by Daniele Santoro
· 5,853 Views · 2 Likes
A Brief Introduction to SBOM and How to Use It With CI
Learn more about SBOM (Software Bill of Materials), why you should use it, what the standards are, and how to automate it with Continuous Integration.
December 19, 2022
by Tiexin Guo
· 5,783 Views · 1 Like
Improve Microservices Security by Applying Zero-Trust Principles
Discover how the zero-trust principles can be applied in a microservices environment and what security controls should be implemented on the back end.
December 18, 2022
by Apostolos Giannakidis
· 6,293 Views · 7 Likes
Kubernetes Remote Development in Java Using Kubernetes Maven Plugin
Learn how to effectively develop and debug Java applications in a Kubernetes cluster using Eclipse JKube's remote development functionality.
December 16, 2022
by Rohan Kumar
· 4,202 Views · 5 Likes
Securing Your Containers—Top 3 Challenges
There are several pitfalls while securing containers and containerized ecosystems. Let's discuss the top three challenges in detail so you can manage them.
December 16, 2022
by Komal J Prabhakar
· 4,493 Views · 1 Like
Apache Ranger and AWS EMR Automated Installation and Integration Series (5): Windows AD + Open-Source Ranger
In the last article of the five-part series, readers will understand the last high applicability scenario: Scenario 4: Windows AD + Open-Source Ranger.
December 16, 2022
by Laurence Geng
· 4,378 Views · 2 Likes
Apache Ranger and AWS EMR Automated Installation and Integration Series (4): OpenLDAP + Open-Source Ranger
This article of the series will allow readers to understand the open-source Ranger integration solution against “Scenario 3: OpenLDAP + Open-Source Ranger.”
December 14, 2022
by Laurence Geng
· 3,520 Views · 2 Likes
Building a 24-Core Docker Swarm Cluster on Banana Pi Zero
One alternative to the Raspberry Pi is the Banana Pi M2 Zero. In this tutorial, take on the challenge of creating a cheap cluster using Banana Pi devices.
December 14, 2022
by Alejandro Duarte CORE
· 48,770 Views · 7 Likes
Apache Ranger and AWS EMR Automated Installation and Integration Series (3): Windows AD + EMR-Native Ranger
This article of the series will allow readers to understand EMR and Ranger integration solutions against “Scenario 2: Windows AD + EMR-Native Ranger.”
December 12, 2022
by Laurence Geng
· 3,751 Views · 2 Likes
Kubernetes-Native Inner Loop Development With Quarkus
How do you develop and test an individual microservice that is part of a larger system? Quarkus and several other technologies help.
December 12, 2022
by Eric Deandrea
· 3,787 Views · 2 Likes
3 Reasons for the Mounting Demand for Smart Cloud-Native Application Development
The demand for smart cloud-native applications rises as businesses realize that cloud-native application development helps them become agile.
December 12, 2022
by Mike Kelvin
· 2,434 Views · 1 Like
Developing Cloud-Native Applications With Containerized Databases
Learn how to use Kustomize and Tekton to provide Kube-Native automated workflows using parameters such as database operators, StorageClass, and PVC.
December 10, 2022
by Sylvain Kalache
· 7,479 Views · 1 Like
Dockerizing a MERN Stack Web Application
Learn how to dockerize an entire MERN Stack application.
December 9, 2022
by Avik Kundu
· 3,505 Views · 1 Like
Control Your Kubernetes Cluster Compute Resources With ResourceQuota
This article will give a glimpse of how to handle resources within a Kubernetes cluster maturely.
December 7, 2022
by Aditya Bhuyan
· 13,213 Views · 4 Likes
Apache Ranger and AWS EMR Automated Installation and Integration Series (2): OpenLDAP + EMR-Native Ranger
This article of the series will allow readers to understand EMR and Ranger integration solutions against Scenario 1: OpenLDAP + EMR-Native Ranger.
December 6, 2022
by Laurence Geng
· 3,545 Views · 2 Likes
Docker Best Practices
In this blog, you will learn some Docker best practices mainly focused on Java applications. This is not only a theoretical exercise; you will learn how to apply the best practices to your Dockerfiles. Enjoy!

1. Introduction

Writing Dockerfiles seems easy: just pick an example from the internet and customize it to fit your needs. However, many examples are good for a development environment but are not production worthy. A production environment has stricter requirements, especially concerning security. Besides that, Docker also provides guidelines for writing good Dockerfiles. It is just like writing code: you may know the syntax, but that does not mean you can write clean and good code in that specific programming language. The same applies to Dockerfiles. With this blog, you will learn some best practices and guidelines you can apply when writing Dockerfiles. The previous sentence deliberately says can apply and not must apply. It all depends on your use case. The example Dockerfile that can often be found when searching for a Dockerfile for Java applications is the following:

Dockerfile
FROM eclipse-temurin:17
RUN mkdir /opt/app
ARG JAR_FILE
ADD target/${JAR_FILE} /opt/app/app.jar
CMD ["java", "-jar", "/opt/app/app.jar"]

This Dockerfile is doing the following:
  • FROM: Take the eclipse-temurin:17 Java Docker image as base image;
  • RUN: Create a directory for the application jar file;
  • ARG: Provide an argument JAR_FILE so that you do not have to hard code the jar file name into the Dockerfile;
  • ADD: Add the jar file to the Docker image;
  • CMD: The command that has to be executed when running the container; in this case, just start the Java application.

In the next sections, you will change this Dockerfile to adhere to best practices. The resulting Dockerfile of each paragraph is available in the git repository in the directory Dockerfiles. At the end of each paragraph, the name of the corresponding final Dockerfile will be mentioned where applicable. This post is inspired by the CIS Docker Benchmarks, the blog 10 best practices to containerize Java applications with Docker by Brian Vermeer, and my own experiences. The code used in this blog is available on GitHub.

2. Prerequisites

The following prerequisites apply to this blog:
  • Basic Linux knowledge
  • Basic Java and Spring Boot knowledge
  • Basic Docker knowledge

3. Sample Application

A sample application is needed in order to demonstrate the best practices. Therefore, a basic Spring Boot application is created containing the Spring Web dependency. The application can be run by invoking the following command from within the root of the repository:

Shell
$ mvn spring-boot:run

For building the Docker image, a fork of the dockerfile-maven-plugin of Spotify will be used. The following snippet is therefore added to the pom file:

XML
<plugin>
    <groupId>com.xenoamess.docker</groupId>
    <artifactId>dockerfile-maven-plugin</artifactId>
    <version>1.4.25</version>
    <configuration>
        <repository>mydeveloperplanet/dockerbestpractices</repository>
        <tag>${project.version}</tag>
        <buildArgs>
            <JAR_FILE>${project.build.finalName}.jar</JAR_FILE>
        </buildArgs>
    </configuration>
</plugin>

The advantage of using this plugin is that you can easily reuse the configuration. Creating the Docker image can be done with a single Maven command.
Building the jar file is done by invoking the following command:

Shell
$ mvn clean verify

Building the Docker image can be done by invoking the following command:

Shell
$ mvn dockerfile:build

Run the Docker image:

Shell
$ docker run --name dockerbestpractices mydeveloperplanet/dockerbestpractices:0.0.1-SNAPSHOT

Find the IP address of the running container:

Shell
$ docker inspect dockerbestpractices | grep IPAddress
    "SecondaryIPAddresses": null,
    "IPAddress": "172.17.0.3",
            "IPAddress": "172.17.0.3"

In the above example, the IP address is 172.17.0.3. The application also contains a HelloController which just responds with a hello message. The Hello endpoint can be invoked as follows:

Shell
$ curl http://172.17.0.3:8080/hello
Hello Docker!

Everything is now explained to get started!

4. Best Practices

4.1 Which Image to Use

The image used in the Dockerfile is eclipse-temurin:17. What kind of image is this exactly? To find out, you need to check how this image is built:
  • Navigate to DockerHub;
  • Search for 'eclipse-temurin';
  • Navigate to the Tags tab;
  • Search for 17;
  • Sort by A-Z;
  • Click the tag 17.

This will bring you to the page where the layers are listed. If you look closely at the details of every layer and compare this to the tag 17-jre, you will notice that the tag 17 contains a complete JDK and tag 17-jre only contains the JRE. The latter is enough for running a Java application; you do not need the whole JDK for running applications in production. It is even a security issue when the JDK is used, because the development tools could be misused. Besides that, the compressed size of the tag 17 image is almost 235MB, and for 17-jre it is only 89MB.

In order to reduce the size of the image even further, you can use a slimmed image. The 17-jre-alpine image is such a slimmed image. The compressed size of this image is 59MB, which reduces the compressed size by another 30MB compared to 17-jre. The advantage is that it will be faster to distribute the image because of its reduced size.

Be explicit in the image you use. The tags used above are general tags which point to the latest version. This might be OK in a development environment, but for production it is better to be explicit about the version being used. The tag being used in this case will be 17.0.5_8-jre-alpine. And if you want to be even more secure, you add the SHA256 hash to the image version. The SHA256 hash can be found at the page containing the layers. When the SHA256 hash does not correspond to the one you defined in your Dockerfile, building the Docker image will fail.

The first line of the Dockerfile was:

Dockerfile
FROM eclipse-temurin:17

With the above knowledge, you change this line into:

Dockerfile
FROM eclipse-temurin:17.0.5_8-jre-alpine@sha256:02c04793fa49ad5cd193c961403223755f9209a67894622e05438598b32f210e

Build the Docker image and you will notice that the (uncompressed) size of the image is drastically reduced. It was 475MB and now it is 188MB.

Shell
$ docker images
REPOSITORY                              TAG              IMAGE ID       CREATED         SIZE
mydeveloperplanet/dockerbestpractices   0.0.1-SNAPSHOT   0b8d89616602   3 seconds ago   188MB

The resulting Dockerfile is available in the git repository with the name 1-Dockerfile-specific-image.
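Besides the DockerHub page with the layers, you can also read the digest from the command line. A small sketch, assuming the image is pulled locally (the digest shown is the one used above):

Shell
# docker pull prints the digest the tag currently resolves to
$ docker pull eclipse-temurin:17.0.5_8-jre-alpine
...
Digest: sha256:02c04793fa49ad5cd193c961403223755f9209a67894622e05438598b32f210e
...

# For images that are already present locally, list their digests
$ docker images --digests eclipse-temurin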
4.2 Do Not Run As Root

By default, the application runs as user root inside the container. This exposes many vulnerability risks and is not something you want. Therefore, it is better to define a system user for your application. You can see in the first log line when starting the container that the application is started by root.

Shell
2022-11-26 09:03:41.210  INFO 1 --- [           main] m.MyDockerBestPracticesPlanetApplication : Starting MyDockerBestPracticesPlanetApplication v0.0.1-SNAPSHOT using Java 17.0.5 on 3b06feee6c65 with PID 1 (/opt/app/app.jar started by root in /)

Creating a system user can be done by adding a group javauser and a user javauser to the Dockerfile. The javauser is a system user which cannot log in. This is achieved by adding the following instruction to the Dockerfile. Notice that creating the group and the user is combined in one line by means of the ampersand signs, in order to create only one layer.

Dockerfile
RUN addgroup --system javauser && adduser -S -s /usr/sbin/nologin -G javauser javauser

The complete list of arguments that can be used for adduser is the following:
  • -h DIR: Home directory
  • -g GECOS: GECOS field
  • -s SHELL: Login shell
  • -G GRP: Group
  • -S: Create a system user
  • -D: Don't assign a password
  • -H: Don't create home directory
  • -u UID: User id
  • -k SKEL: Skeleton directory (/etc/skel)

You will also need to change the owner of the directory /opt/app to this new javauser, otherwise the javauser will not be able to access this directory. This can be achieved by adding the following line:

Dockerfile
RUN chown -R javauser:javauser /opt/app

And lastly, you need to ensure that the javauser is actually used in the container by means of the USER command. The complete Dockerfile is the following:

Dockerfile
FROM eclipse-temurin:17.0.5_8-jre-alpine@sha256:02c04793fa49ad5cd193c961403223755f9209a67894622e05438598b32f210e
RUN mkdir /opt/app
RUN addgroup --system javauser && adduser -S -s /usr/sbin/nologin -G javauser javauser
ARG JAR_FILE
ADD target/${JAR_FILE} /opt/app/app.jar
RUN chown -R javauser:javauser /opt/app
USER javauser
CMD ["java", "-jar", "/opt/app/app.jar"]

In order to test this new image, you first need to stop and remove the running container. You can do so with the following commands:

Shell
$ docker stop dockerbestpractices
$ docker rm dockerbestpractices

Build and run the container again. The first log line now mentions that the application is started by javauser. Before, it stated that it was started by root.

Shell
2022-11-26 09:06:45.227  INFO 1 --- [           main] m.MyDockerBestPracticesPlanetApplication : Starting MyDockerBestPracticesPlanetApplication v0.0.1-SNAPSHOT using Java 17.0.5 on ab1bcd38dff7 with PID 1 (/opt/app/app.jar started by javauser in /)

The resulting Dockerfile is available in the git repository with the name 2-Dockerfile-do-not-run-as-root.
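Besides the log line, you can also ask the running container directly which user the Java process runs as. A quick check, assuming the dockerbestpractices container from above is still running (the uid/gid numbers below are only illustrative):

Shell
$ docker exec dockerbestpractices whoami
javauser

# The numeric ids depend on the base image; the names are what matters here
$ docker exec dockerbestpractices id
uid=100(javauser) gid=101(javauser) groups=101(javauser)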
4.3 Use WORKDIR

In the Dockerfile you are using, a directory /opt/app is created. After that, the directory is repeated several times, because this is actually your working directory. However, Docker has the WORKDIR instruction for this purpose. When the WORKDIR does not exist, it will be created for you. Every instruction after the WORKDIR instruction will be executed inside the specified WORKDIR, so you do not have to repeat the path every time. The second line contains the instruction "RUN mkdir /opt/app". Replace it with the WORKDIR instruction "WORKDIR /opt/app". Now you can also remove every /opt/app reference, because the WORKDIR instruction ensures that you are in this directory. The new Dockerfile is the following:

Dockerfile
FROM eclipse-temurin:17.0.5_8-jre-alpine@sha256:02c04793fa49ad5cd193c961403223755f9209a67894622e05438598b32f210e
WORKDIR /opt/app
RUN addgroup --system javauser && adduser -S -s /usr/sbin/nologin -G javauser javauser
ARG JAR_FILE
ADD target/${JAR_FILE} app.jar
RUN chown -R javauser:javauser .
USER javauser
CMD ["java", "-jar", "app.jar"]

Build and run the container. As you can see in the logging, the jar file is still executed from within the directory /opt/app:

Shell
2022-11-26 16:07:18.503  INFO 1 --- [           main] m.MyDockerBestPracticesPlanetApplication : Starting MyDockerBestPracticesPlanetApplication v0.0.1-SNAPSHOT using Java 17.0.5 on fe5cf9223143 with PID 1 (/opt/app/app.jar started by javauser in /opt/app)

The resulting Dockerfile is available in the git repository with the name 3-Dockerfile-use-workdir.

4.4 Use ENTRYPOINT

There is a difference between the CMD instruction and the ENTRYPOINT instruction. More detailed information can be found in this blog. In short, use:
  • ENTRYPOINT: when you build an executable Docker image using commands that always need to be executed. You can append arguments to the command if you like;
  • CMD: when you want to provide a default set of arguments which are allowed to be overridden from the command line when the container runs.

So, for running a Java application, it is better to use ENTRYPOINT. The last line of the Dockerfile is:

Dockerfile
CMD ["java", "-jar", "app.jar"]

Change it into the following:

Dockerfile
ENTRYPOINT ["java", "-jar", "app.jar"]

Build and run the container. You will not notice any specific difference; the container just runs as it did before.

The resulting Dockerfile is available in the git repository with the name 4-Dockerfile-use-entrypoint.

4.5 Use COPY instead of ADD

The COPY and ADD instructions seem to be similar. However, COPY is preferred over ADD. COPY does what it says: it just copies the file into the image. ADD has some extra features, like adding a file from a remote resource. The line in the Dockerfile with the ADD command:

Dockerfile
ADD target/${JAR_FILE} app.jar

Change it to use the COPY command:

Dockerfile
COPY target/${JAR_FILE} app.jar

Build and run the container again. You will not see a big change, besides that the build log now shows the COPY command instead of the ADD command.

The resulting Dockerfile is available in the git repository with the name 5-Dockerfile-use-copy-instead-of-add.

4.6 Use .dockerignore

In order to prevent accidentally adding files to your Docker image, you can use a .dockerignore file. With a .dockerignore file, you can specify which files may be sent to the Docker daemon or may be used in your image. A good practice is to ignore all files and to explicitly add the files you allow. This can be achieved by adding an asterisk pattern to the .dockerignore file which excludes all subdirectories and files. However, you do need the jar file in the build context. The jar file can be excluded from being ignored by means of an exclamation mark. The .dockerignore file looks as follows. You add it to the directory where you run the Docker commands from; in this case, you add it to the root of the git repository.

Plain Text
**/**
!target/*.jar

Build and run the container. Again, you will not notice a big change, but when you are developing with npm, you will notice that creating the Docker image will be much faster because the node_modules directory is not copied into the Docker build context anymore.
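If you want to see the effect, you can compare the build context size that Docker reports when building with plain docker build instead of the Maven plugin. This is only a sketch: the size shown is made up, the JAR_FILE value is just an example jar name, and BuildKit phrases the context transfer differently than the classic builder shown here.

Shell
# Run from the repository root, once with and once without the .dockerignore file,
# and compare the reported build context size
$ docker build --build-arg JAR_FILE=mydockerbestpracticesplanet-0.0.1-SNAPSHOT.jar .
Sending build context to Docker daemon  16.91MB
...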
The .dockerignore file is available in the Dockerfiles directory of the git repository.

4.7 Run Docker Daemon Rootless

The Docker daemon runs as root by default. However, this causes some security issues, as you can imagine. Since Docker v20.10, it is also possible to run the Docker daemon as a non-root user. More information about how this can be achieved can be found here. An alternative way to accomplish this is to make use of Podman. Podman is a daemonless container engine which runs as non-root by default. However, although you will read that Podman is a drop-in replacement for Docker, there are some major differences. One of them is how you mount volumes in the container. More information about this topic can be read here.

5. Conclusion

In this blog, some best practices for writing Dockerfiles and running containers are covered. Writing Dockerfiles seems to be easy, but do take the effort to learn how to write them properly. Understand the instructions and when to use them.
December 6, 2022
by Gunter Rotsaert CORE
· 8,725 Views · 5 Likes
Hosting .NET Core Web API Image With Docker Compose Over HTTPS
This article explains SSL certificate configuration for secure communication over HTTPS using .NET Core Web API and Docker.
December 6, 2022
by Jaydeep Patil
· 1,899 Views · 2 Likes
Platform Engineering Trends You Need to Know
PlatformCon 2022, the first-ever conference by and for platform engineers, dove into the latest trends in best practices. Here are the highlights you won’t want to miss.
December 6, 2022
by Aeris Stewart
· 7,807 Views · 4 Likes
Progressive Delivery in Kubernetes: Analysis
An analysis of Progressive Delivery options in the Cloud Native landscape, exploring how this enhancement can be added in a Kubernetes environment.
December 5, 2022
by Ramiro Alvarez Fernandez
· 2,351 Views · 2 Likes