Java EE Deployment Scenarios for Docker Containers
Learn more about how to use Java EE, Docker, and Maven together for different deployment scenarios. You'll see how Java EE applications should be distributed.
I've been posting content around Docker for a while now, and I like to play around with containers in general. You can find more information about how to run Docker Machine on Windows, and I have also shown you how to use the Docker 1.6 client. One of my first blog posts was a compilation of resources around Java EE, Docker, and Maven for Java EE developers. Working with containers more closely and more often raises the question of how Java EE applications should be distributed and how developers should use containers. This post tries to clarify that a little and gives you an overview of the different options.
Base Image vs. Custom Image and Some Basics
Most likely, your application server of choice is available on the public registry, known as Docker Hub. This is true for WildFly. The first decision you have to make is whether you want to use one of the base images or whether you are going to bake your own image. Running with the base image is pretty much:
docker run -p 8080:8080 -it jboss/wildfly
Your instance is up and running in a second, once the base image has been downloaded. But what does that mean? And how does it work? At the heart of every container is Linux. A teensy one. On a normal Linux system, the kernel first mounts the root file system as read-only, checks its integrity, and then switches the whole rootfs volume to read-write mode. The teensy Linux in a Docker container does that differently. Instead of changing the file system to read-write mode, it takes advantage of a union mount to add a read-write file system on top of the read-only file system. In fact, there may be multiple read-only file systems stacked on top of each other. If you look at the jboss/wildfly image, this is what you get at first sight:
You see four different levels in this picture. Let's not call them layers, because they aren't yet. This is the hierarchy of images which form the base of our jboss/wildfly image. Each of those images is built from a Dockerfile. This is a simple text file with a bunch of instructions in it. You can think of it as a sort of pom file which needs to be processed by a tool called the "Builder". It can contain a variety of commands and options to add users and volumes, install software, trigger downloads, and many more. If you look at the jboss/wildfly Dockerfile, you see the commands that compose the image:
# Use latest jboss/base-jdk:7 image as the base
FROM jboss/base-jdk:7

# Set the WILDFLY_VERSION env variable
ENV WILDFLY_VERSION 8.2.0.Final

# Add the WildFly distribution to /opt, and make wildfly the owner of the extracted tar content
# Make sure the distribution is available from a well-known place
RUN cd $HOME && curl http://download.jboss.org/wildfly/$WILDFLY_VERSION/wildfly-$WILDFLY_VERSION.tar.gz | tar zx && mv $HOME/wildfly-$WILDFLY_VERSION $HOME/wildfly

# Set the JBOSS_HOME env variable
ENV JBOSS_HOME /opt/jboss/wildfly

# Expose the ports we're interested in
EXPOSE 8080

# Set the default command to run on boot
# This will boot WildFly in the standalone mode and bind to all interfaces
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0"]
Now imagine that every single one of those lines does something to the file system. The most obvious example is a download: it adds something to it. But instead of writing to an already mounted partition, the change gets stacked up as a new layer. Looking at all the layers of jboss/wildfly, this sums up to 19 unique layers with a total size of 951 MB.
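You can inspect those layers yourself. The docker history command lists each layer of an image together with the instruction that created it and the size it added (a sketch, assuming Docker is installed and can reach Docker Hub):

```shell
# Pull the image, then list its layers with the instruction that created each one
docker pull jboss/wildfly
docker history jboss/wildfly
```

Each line of the output corresponds to one instruction from a Dockerfile somewhere in the image hierarchy, which makes it easy to see where the megabytes come from.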
Using the base image, you can expect to have a default configuration at hand, and this is a great place to start. We at JBoss try to make our projects (and products, too!) usable out of the box for as many use cases as we can, but there is no way that one configuration could satisfy everyone's needs. For example, we ship four flavors of the standalone.xml configuration file with WildFly, since there are so many different deployment scenarios. But this is still not enough. We need to be able to tweak it at any point. The jboss/wildfly image is no exception here.
Creating a custom image with
# Use latest jboss/wildfly as a base FROM jboss/wildfly
is your first step into the world of customized images. If you want to know how to do that, there is an amazing blog post which covers almost all the details.
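As a sketch of what such a customization could look like, the following Dockerfile extends the base image, adds a management user, and swaps in a tweaked configuration. The user name, password, and the customized standalone.xml are assumptions for illustration, not part of the official image:

```dockerfile
# Start from the official WildFly base image
FROM jboss/wildfly

# Add a management user so the admin console and CLI are reachable
# (user name and password here are placeholders - pick your own)
RUN /opt/jboss/wildfly/bin/add-user.sh admin admin123! --silent

# Replace the default configuration with a tweaked copy
# (assumes a customized standalone.xml next to this Dockerfile)
COPY standalone.xml /opt/jboss/wildfly/standalone/configuration/standalone.xml

# Expose the management port in addition to the HTTP port
EXPOSE 8080 9990
```

Every instruction again adds a layer on top of the jboss/wildfly stack, exactly as described above.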
Java EE Applications on Docker
One of the main principles behind Docker is "Recreate, Do Not Modify". With a container being a read-only, immutable piece of infrastructure with very limited capabilities for change at runtime, you might be interested in the different options you have to deploy your application.
This is mostly referred to as a "custom image". Besides the needed configuration changes, you also add the binary of your application as a new layer to your image:
RUN curl -L https://github.com/user/project/myapp.war?raw=true -o /opt/jboss/wildfly/standalone/deployments/myapp.war
Done. Build, push, and run your custom image. If one instance dies, don't worry; just fire up a new one.
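A minimal sketch of that workflow, assuming your Dockerfile lives in the current directory and myorg/myapp is a hypothetical repository name on your registry:

```shell
# Build the custom image from the Dockerfile in the current directory
docker build -t myorg/myapp:1.0 .

# Push it to the registry so every host can pull it
docker push myorg/myapp:1.0

# Run an instance; if it dies, just start a new one from the same image
docker run -d -p 8080:8080 myorg/myapp:1.0
```

Note that the application version is baked into the image tag: releasing a new version of the application means building and tagging a new image, never modifying a running container.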
Cons:
- No re-deployments: every new application version means a new image version.
- No changes at runtime.
- Not the typical operations model for now.
- Might need additional tooling (plugins for Maven/Jenkins).
Pros:
- The Docker Way.
- Easy to integrate into your project build.
- Easy to roll out and configure.
Containers as Infrastructure
There's no real term for this approach. It basically means that you don't care how your infrastructure is run; this might be called the old-fashioned operations model. You will need some kind of access to the infrastructure: either a file system shared with the host, so you can deploy applications via the deployment scanner, or open management ports, so you can use the CLI or other tooling to deploy your applications into the container.
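Both variants can be sketched with the stock jboss/wildfly image. The host path /srv/deployments and the myapp.war file are assumptions for illustration:

```shell
# Variant 1: share the deployments directory with the host, so the
# deployment scanner picks up any archive you drop into /srv/deployments
docker run -d -p 8080:8080 \
  -v /srv/deployments:/opt/jboss/wildfly/standalone/deployments \
  jboss/wildfly

# Variant 2: also bind the management interface, then deploy via the CLI
# (a management user must have been added to the image first)
docker run -d -p 8080:8080 -p 9990:9990 jboss/wildfly \
  /opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0

/opt/jboss/wildfly/bin/jboss-cli.sh --connect \
  --controller=localhost:9990 --command="deploy myapp.war"
```

In both variants the container itself stays generic; the application arrives from the outside, which is exactly what makes this the non-Docker way.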
Cons:
- More complex layering to keep state in containers.
- Not the Docker Way.
- Not fancy.
Pros:
- Centralized ops and administration.
- Hardly any change to what you're used to as a developer in enterprises today.
- Doesn't need to be integrated into your build at all; it's just an instance running somewhere.
- No additional tooling.
This decision is hard. I'd suggest that you look into what fits best for your situation. Most enterprise companies might tend to stick with the containers-as-infrastructure solution, which has a lot of drawbacks for now when you look at it from a developer's perspective. A decent intermediate solution might be OpenShift v3, which will optimize operations for containers and bring a lot of usability and comfort to the Java EE PaaS world.
If you are free to make a choice, you can totally go with the "Dockerize" way. Keep in mind that this is a vendor lock-in as of today, with some more promising and open solutions already on the horizon.
Published at DZone with permission of Markus Eisele, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.