Continuous Deployment With Hippo CMS, Tutum and Docker

A tutorial on containerizing a Hippo CMS application with Docker and continuously deploying it with Tutum.

· DevOps Zone

The DevOps Zone is brought to you in partnership with Sonatype Nexus. The Nexus Suite helps scale your DevOps delivery with continuous component intelligence integrated into development tools, including Eclipse, IntelliJ, Jenkins, Bamboo, SonarQube and more. Schedule a demo today


Brands are investing in digital experiences to compete for customers and revenue. Those digital customer experiences are made up of the latest features and functions, delivered and deployed as part of an agile software development lifecycle. With so much at stake, these companies must minimize time to market and address their customers' expectations more frequently than ever before: continuously.

For leading brands, automated, rapid, and no-risk releases require a collaborative focus by infrastructure, operations, application development, and business leaders to organize into an agile and modern service delivery model. This organizational discipline is often referred to as "DevOps" (Development and Operations) and has given rise to concepts like "Continuous Deployment", "Continuous Delivery" and "Continuous Integration".

In this post, we examine an approach using Hippo CMS, Docker, and Tutum.


Docker is an open-source technology used to package, ship, and run applications. Docker has become synonymous with the concept of containers, which are used to create a complete environment for a software application to run, including code, runtime, system tools, and libraries. These containers are highly portable and are used to deploy applications across environments. Containers are not new. However, Docker has made containers easier and safer to use, standardizing their use and integration with other DevOps technology.

Specifically, Docker makes it possible to set up local development environments that are exactly like a live production server, run multiple development environments from the same host that each have unique software, operating systems, and configurations, test projects on new or different servers, and allow anyone to work on the same project with the exact same settings, regardless of the local host environment.


Using Docker to implement Continuous Deployment for Hippo CMS is relatively straightforward. We have to make some minor modifications to the application so that it runs successfully in Docker, set up the application stack, and then automate it with some simple scripts.

Implementation Details

1. Dockerize Hippo

Starting from the Hippo Maven project, there are a few steps required to build Hippo to a Docker image:

  1. Add context and repository configurations for the Docker image.
  2. Add a Dockerfile to the repository.
  3. Create a new assembly definition for adding the right files to the Docker image.
  4. Create a new Maven profile that uses the docker-maven-plugin to build a Docker image.

The application running in Docker requires slightly different settings than one running locally. To allow for this, create two new files: conf/repository.xml and conf/docker-context.xml. By default, Hippo uses the filesystem as a backing store for the Jackrabbit repository, which is slower and less robust than a database-backed store. These files configure Hippo to use MySQL instead, which is more appropriate for a production system.
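The full contents of these files are not reproduced here. As a rough sketch, the heart of a MySQL-backed repository configuration is Jackrabbit's pooled persistence manager. The class and parameter names below are from stock Jackrabbit; the JDBC URL, credentials, and schema prefix are hypothetical placeholders chosen to line up with the MySQL container we define later:

```xml
<!-- Hypothetical fragment of conf/repository.xml: point Jackrabbit's
     persistence manager at MySQL instead of the default filesystem store.
     The hostname "hippo-mysql" matches the linked database container. -->
<PersistenceManager class="org.apache.jackrabbit.core.persistence.pool.MySqlPersistenceManager">
  <param name="driver" value="com.mysql.jdbc.Driver"/>
  <param name="url" value="jdbc:mysql://hippo-mysql:3306/hippo"/>
  <param name="user" value="hippo"/>
  <param name="password" value="hippo"/>
  <param name="schemaObjectPrefix" value="repo_"/>
</PersistenceManager>
```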

Next, create a Dockerfile that defines the Docker image we want to build:

FROM tomcat:jre8
ENV CATALINA_OPTS "-Djava.security.egd=file:/dev/./urandom -Drepo.bootstrap=true -Drepo.config=file:/usr/local/tomcat/conf/repository.xml -Djava.rmi.server.hostname= "
ENV JAVA_ENDORSED_DIRS "/usr/local/tomcat/endorsed"

ADD <YOUR ARTIFACT NAME HERE>-1.01.00-SNAPSHOT-distribution.tar.gz /usr/local/tomcat/

Make sure to replace <YOUR ARTIFACT NAME HERE> with the name of the Maven artifact you are building. This defines a new Docker image based on Tomcat 8, with the proper environment set up and the assembly deployed. Dockerfiles allow you to specify a rich set of directives that can modify the behavior of a Docker image. For more information, see the Dockerfile reference.

To generate the assembly, create src/main/assembly/docker-distribution.xml. This file defines the set of files that will get deployed into the Docker image.
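The exact descriptor depends on your project layout. A minimal sketch, assuming the Docker-specific configuration files from the previous step and WAR dependencies deployed into Tomcat's webapps folder (the directory names here are illustrative, not from the original project):

```xml
<assembly>
  <id>distribution</id>
  <formats>
    <format>tar.gz</format>
  </formats>
  <includeBaseDirectory>false</includeBaseDirectory>
  <fileSets>
    <!-- Ship the Docker-specific repository and context configuration -->
    <fileSet>
      <directory>conf</directory>
      <outputDirectory>conf</outputDirectory>
    </fileSet>
  </fileSets>
  <dependencySets>
    <!-- Drop the project's WAR artifacts into Tomcat's webapps folder -->
    <dependencySet>
      <outputDirectory>webapps</outputDirectory>
      <includes>
        <include>*:*:war</include>
      </includes>
    </dependencySet>
  </dependencySets>
</assembly>
```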

Finally, create a new Maven profile to generate the Docker image:


This build profile uses the docker-maven-plugin to build the Docker image, copying the docker assembly tar to the right location in the target folder so that it can be copied into the image.
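The profile listing itself is not shown above; a minimal sketch, assuming the Spotify docker-maven-plugin (the image name labs/hippo matches the tag command used in section 3, while the plugin version and the Dockerfile location src/main/docker are illustrative assumptions):

```xml
<profile>
  <id>docker</id>
  <build>
    <plugins>
      <plugin>
        <groupId>com.spotify</groupId>
        <artifactId>docker-maven-plugin</artifactId>
        <version>0.4.13</version>
        <configuration>
          <imageName>labs/hippo</imageName>
          <!-- Assumed directory containing the Dockerfile -->
          <dockerDirectory>src/main/docker</dockerDirectory>
          <resources>
            <!-- Copy the assembly tar into the build context so the
                 Dockerfile's ADD directive can find it -->
            <resource>
              <targetPath>/</targetPath>
              <directory>${project.build.directory}</directory>
              <include>*-distribution.tar.gz</include>
            </resource>
          </resources>
        </configuration>
      </plugin>
    </plugins>
  </build>
</profile>
```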

2. Set Up Tutum

Tutum combines the services of Docker Registry, Docker Engine, and Docker Compose, provides a management GUI and API, and can also provision and manage cloud nodes through AWS, Azure, DigitalOcean, and other cloud providers. As such, it is a very handy tool for deploying and managing Docker images, and the API makes automation very straightforward (as we will see in section 6). To set up Tutum:

  1. Create an account at tutum.co. If you already have a Docker Hub account, you can use that.
  2. Provision a node. The full details of how to do so are beyond the scope of this tutorial, but Tutum allows you to either link a cloud provider and provision a node that way, or use a system you control with the "Bring your own node" option.
  3. Create a repository for the Hippo application. In the "Repositories" tab, click "Create new repository" and call it "hippo-cd". This repository will store the Hippo Docker image we configured in section 1.

3. Push Hippo Docker Image

Now that we have created a repository for the Docker image, we should build and push the image:

  1. In the Maven project we set up in section 1, run:
    mvn clean package && mvn docker:build -P docker
    This will build the Docker image.
  2. Tag the Docker image:
    docker tag -f labs/hippo tutum.co/<YOUR TUTUM USERNAME>/hippo-cd
    Replace <YOUR TUTUM USERNAME> with your Tutum account username. This "marks" the image as belonging to the repository we created in the previous step.
  3. Log into the Tutum repository:
    docker login tutum.co/<YOUR TUTUM USERNAME>/hippo-cd
  4. Push the image:
    docker push tutum.co/<YOUR TUTUM USERNAME>/hippo-cd

After running all of these steps, you can check on the Tutum repository tab to confirm that the image has been successfully pushed.

4. Deploy Tutum Application Stack

One of the most useful features of Tutum is the ability to create application stacks. Similar to Docker Compose, stacks let you declaratively define multiple Docker containers, configure them, and configure the links between them.

In Tutum, go to the "Stacks" tab and create a new Stack. Paste the following into the Stackfile editor. Replace the text in brackets as appropriate:

hippo:
  image: 'tutum.co/<YOUR TUTUM USERNAME>/hippo-cd'
  autoredeploy: true
  ports:
    - '8080:8080'
  links:
    - hippo-mysql
hippo-mysql:
  image: 'mysql/mysql-server:latest'
  environment:
    - MYSQL_DATABASE=hippo
    - MYSQL_PASSWORD=hippo
    - MYSQL_USER=hippo
  expose:
    - '3306'

This creates a simple two-service stack. The first service is a container running the hippo-cd Docker image we built in the previous section. The autoredeploy: true setting specifies that the service will be automatically re-deployed any time Tutum detects that a new version of the image has been pushed to the registry. The ports directive defines the ports that this container will expose on the host machine; in order to access Hippo from the outside world, we have to expose port 8080. The links directive sets up the internal network to allow communication between containers in the stack. By specifying hippo-mysql, the Hippo container will be able to access the hippo-mysql container over its local network.

The second service is a container running the latest version of MySQL, taken from Docker Hub. The environment directive allows us to define environment variables that control the behavior of the container. In this case, we define a new database called "hippo", as well as the credentials to access it. The expose directive specifies the ports exposed to the internal network. By exposing port 3306, the Hippo container will be able to access the database on hippo-mysql.

After creating this stack, deploy it to the node. Once it is done deploying, go to the "Stacks" tab, and select the stack we just deployed. Here, open the "Endpoints" tab and you will see an endpoint on port 8080. You should be able to access Hippo there.

We have now set up a simple continuous deployment system. Try it out by making a change to the code or configuration in the Maven project, build the project, build the Docker image, and tag and push it. The Hippo container in Tutum will automatically re-deploy from the latest image and your change will take effect.

A Word About Volumes

Open up the service definition for hippo-mysql. Under the "Configuration" tab we see a section called "Volumes", where a volume for /var/lib/mysql is defined. Volumes are Docker's way of providing semi-persistent storage for containers. In this context it means that, in general, the content entered into Hippo (and hence into the MySQL database stored at /var/lib/mysql) will not be deleted when the MySQL container is re-deployed. To see this in action, make a change in Hippo. Then, in Tutum, re-deploy the hippo-mysql service, making sure that "Reuse existing container volumes?" is set to "ON" in the popup. After the MySQL image finishes deploying, check Hippo again and confirm that your change is still there. The net result is that content entry changes will not be lost when containers are re-deployed for builds, unless the re-deploy is explicitly set not to reuse the volumes.
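The same volume can also be declared directly in the Stackfile. A sketch (the volumes syntax follows Docker Compose; /var/lib/mysql is MySQL's data directory):

```yaml
hippo-mysql:
  image: 'mysql/mysql-server:latest'
  volumes:
    # Container path only: Docker manages an anonymous volume that
    # survives re-deploys as long as volume reuse is left on.
    - /var/lib/mysql
```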

For more information see the Docker volumes documentation.

5. Implement Blue/Green Deployments

The system we have set up so far is sufficient for development or staging environments where downtime is not a concern. In production, even two minutes of downtime per build is certainly going to be a problem. To overcome this, we will now modify the stack to support zero-downtime deployments.

The first step is to modify the Stackfile so that it can support a Blue/Green deployment:

hippo-blue:
  image: 'tutum.co/<YOUR TUTUM USERNAME>/kis:latest'
  environment:
    - 'HTTP_CHECK=OPTIONS / HTTP/1.1\r\nHost:\ www.<YOUR PRODUCTION SITE NAME>.com:8080/site'
  expose:
    - '8080'
  links:
    - hippo-mysql
hippo-green:
  image: 'tutum.co/<YOUR TUTUM USERNAME>/kis:latest'
  expose:
    - '8080'
  links:
    - hippo-mysql
haproxy:
  image: 'tutum/haproxy:latest'
  environment:
    - 'HEALTH_CHECK=check inter 500 rise 2 fall 4'
  links:
    - hippo-blue
    - hippo-green
  ports:
    - '1936:1936'
    - '8080:8080'
  restart: always
  roles:
    - global
hippo-mysql:
  image: 'mysql/mysql-server:latest'
  environment:
    - MYSQL_DATABASE=hippo
    - MYSQL_PASSWORD=hippo
    - MYSQL_USER=hippo
  expose:
    - '3306'

The idea behind a Blue/Green deployment is that there are two container definitions, "Blue" and "Green", and generally only one of them is active. During a deployment, we spin up the inactive container, wait for it to stabilize, and then bring down the previously active container. To implement this with Docker containers, we create two services, hippo-blue and hippo-green, with very similar definitions. Note that port 8080 is no longer exposed on the host, as that would prevent both containers from running simultaneously.

We use HAProxy as a load balancer in front of the containers. The tutum/haproxy image is a special build of HAProxy that automatically reconfigures itself based on the currently running containers: when hippo-blue (or hippo-green) becomes available, HAProxy will detect that change and start routing traffic to it. Note the HEALTH_CHECK and HTTP_CHECK options defined in environment variables. These tell HAProxy how to health-check the two Hippo containers to make sure they are actually serving content before passing requests to them. Also note that HAProxy serves a status page on port 1936. This is useful for automating the Blue/Green deployment, as we can poll the status page to detect when servers are ready.

Now that this new stack is in place, we can manually walk through the stages of a Blue/Green deployment. First, ensure that only hippo-blue is running. hippo-green should be stopped.

  1. Start hippo-green.
  2. Check the status page on port 1936. The username and password are both 'stats'.
  3. hippo-green should appear on the status page, and show status "DOWN". Keep refreshing the page until both hippo-blue and hippo-green show status "UP".
  4. Stop hippo-blue.

6. Automate

To automate the Blue/Green deployment, we can use the tutum-cli tool to run through the steps outlined in the previous section. Note that the Tutum API could be used for this as well:



#!/bin/bash
# Usage: ./swap.sh <blue service name> <green service name> <node IP or hostname>
BLUE=$1
GREEN=$2
HAPROXY=$3

wait_up() {
  SERVER=$1
  i=0
  UP=""
  echo "Waiting for $SERVER to come up"
  while [[ ! $UP ]]; do
    UP=$(curl -u stats:stats --silent "http://$HAPROXY:1936/;csv" | grep "$SERVER" | grep L7OK)
    sleep 1
    ((i++))

    # Wait at most 10 minutes
    if (( i > 600 )); then
      echo "$SERVER failed to start up"
      return 1
    fi
  done
}

ISGREEN=$(tutum service inspect "$GREEN" | grep Running)
if [[ $ISGREEN ]]; then
  echo "Switching to Blue"
  tutum service redeploy --sync "$BLUE"
  wait_up "$BLUE"
  if [ $? -ne 0 ]; then
    tutum service stop "$BLUE"
    exit 1
  fi
  tutum service stop "$GREEN"
else
  echo "Switching to Green"
  tutum service redeploy --sync "$GREEN"
  wait_up "$GREEN"
  if [ $? -ne 0 ]; then
    tutum service stop "$GREEN"
    exit 1
  fi
  tutum service stop "$BLUE"
fi
This script first uses tutum service inspect to figure out which service is currently active. It then redeploys the inactive service and, once it is healthy, stops the other. The wait_up function polls the HAProxy status page, waiting for the target service to show "UP" status; this ensures zero downtime. The script is run with three arguments: the name of the Blue service, the name of the Green service, and the IP address (or hostname) of the node.
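The grep chain at the heart of wait_up can be tried locally against a mocked-up sample of HAProxy's CSV stats output (the service names and abridged columns below are hypothetical):

```shell
# Abridged, hypothetical sample of what "http://$HAPROXY:1936/;csv" returns.
# Real output has many more columns; only svname and check_status matter here.
csv='default_service,HIPPO-BLUE,UP,L7OK
default_service,HIPPO-GREEN,DOWN,L7STS'

# Same filter as wait_up: a service only counts as up when its row
# carries the L7OK layer-7 health check result.
for server in HIPPO-BLUE HIPPO-GREEN; do
  up=$(printf '%s\n' "$csv" | grep "$server" | grep L7OK || true)
  if [ -n "$up" ]; then
    echo "$server is UP"
  else
    echo "$server is not ready"
  fi
done
```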

Now that all the individual pieces are in place, the build can be automated with a CI tool such as Jenkins or Bamboo. The basic steps are:

  1. Run the Maven build.
  2. Run unit and integration tests.
  3. If tests pass, build the Docker image and push it to Tutum.
  4. Run the Blue/Green swap script.

Conclusion and Further Reading

The use of containers is seeing mainstream adoption even in the most risk-averse industries, suggesting that it should be considered by any organization looking to be more agile and responsive in its software development and deployment.

For additional details on Continuous Delivery, Continuous Integration, and Continuous Deployment, check out our blog on the agile experience delivery model.

For further technical reading, check out the Docker documentation, the Tutum Stackfile reference, and the Tutum API documentation.



Published at DZone with permission of Mike Marmar. See the original article here.

