
Continuous Delivery for Containerized Applications

An article from DZone's Research Guide to Continuous Delivery, Volume III, out now!



Application containers, like Docker, are enabling new levels of DevOps productivity by supporting best practices like immutable images and allowing a proper separation of concerns across development and platform teams. In this article, we will discuss how containers make it easier to fully automate a continuous delivery pipeline, from a commit to running code in production environments. We will also examine best practices for defining and managing containers across a CI/CD pipeline, as well as best practices for deploying, testing, and releasing new application features.

Container Images and Tags

The lifecycle of an application container spans development and operations, and the container image acts as a contract between development and operations. In a typical cycle, code is updated, unit tested, and built into a container image during the development phase. The container image can then be pushed to a central repository. Next, while testing or deploying the application, the container image is pulled from the central repository.

Since the image can be updated several times, changes need to be versioned and managed in an efficient way. For example, Docker images use layers and copy-on-write semantics to push and pull only the updated portions of an image.

In Docker terminology, container images are stored in an Image Registry, or simply a registry (e.g. Docker Hub, Google Container Registry, or Quay). Within a registry, each application container has its own Image Repository, which can contain multiple tags.


Docker allows multiple tags to be applied to a container image. Think of a tag as a named pointer to an image ID. Tags provide the basic primitive for managing container images across the delivery pipeline.

As of today, Docker tags are mutable, meaning that a tag can point to different images over time. For example, the tag “latest” is commonly used to refer to the most recently available image. While it's convenient to be able to change which image a tag points to, this also means that pulling a tag does not guarantee getting the same image each time.

There are pending requests from the community to introduce immutable tags as a feature in Docker, or to provide the ability to pull an image by its image ID, which is immutable. Until these are addressed, a best practice is to automate the management of tags and to establish a strict naming convention that separates mutable from immutable tags.
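As a sketch of such a convention, each build can carry both an immutable, build-specific tag and a mutable pipeline tag. The image name and build number below are hypothetical, and the docker commands are echoed as a dry run so the sketch can be read without a Docker daemon:

```shell
# Hypothetical image name and build number; the tag convention is the point.
IMAGE="registry.example.com/myapp/web"
BUILD_NUM=42

IMMUTABLE_TAG="$(date +%Y%m%d)_${BUILD_NUM}"  # e.g. 20160315_42; never reused
MUTABLE_TAG="dev_latest"                      # moves to each new dev build

# Dry run: the docker commands are echoed rather than executed.
echo docker build -t "${IMAGE}:${IMMUTABLE_TAG}" .
echo docker tag "${IMAGE}:${IMMUTABLE_TAG}" "${IMAGE}:${MUTABLE_TAG}"
echo docker push "${IMAGE}:${IMMUTABLE_TAG}"
echo docker push "${IMAGE}:${MUTABLE_TAG}"
```

With this scheme, anyone pulling the immutable tag is guaranteed the same image, while the mutable tag remains a convenient moving pointer.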

Building Immutable Container Images

When using application containers, a developer will typically write code and run unit tests locally on their laptop. The developer may also build container images, but these are not ready to be consumed by other team members, and so will not be pushed to the Image Registry.

A best practice is to maintain the automated steps to containerize the application as part of the code repository. For Docker, these steps are defined in a Dockerfile [2], which can be checked in alongside the code. When changes are committed, a build orchestration tool, like Jenkins or Bamboo, can build and tag the container image, and then push the image to a shared Image Registry.
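As an illustration, a minimal Dockerfile for a hypothetical Java service might look like the following; the base image, jar name, and port are assumptions, not from the article:

```dockerfile
# Illustrative Dockerfile, checked in alongside the code.
# Base image, jar name, and port are hypothetical.
FROM openjdk:8-jre
COPY target/myapp.jar /opt/myapp/myapp.jar
EXPOSE 8080
CMD ["java", "-jar", "/opt/myapp/myapp.jar"]
```

Because the Dockerfile lives in the repository, every commit carries the exact recipe for its own image.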


With this approach, each build creates an immutable image for your application component, which packages everything necessary to run the component on any system that can host containers. The image should not require any additional configuration management or installation steps. While it may seem wasteful to create an immutable image with every build, container engines like Docker optimize image management using techniques like copy-on-write, so that only the changes across builds are actually stored and transferred.

Even though the application component does not need to be re-configured each time it is deployed, there may be configuration data that is necessary to run the component. This configuration is best externalized and injected into the container at runtime. Container deployment and operations tools should allow injecting configuration as environment variables, dynamically assigning bindings for storage and networking, and dynamically injecting configuration for services that depend on each other. For example, while creating an environment in Nirmata you can assign environment variables to one or more services.
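A minimal sketch of run-time configuration injection with Docker follows; the database settings and image name are hypothetical, and the command is echoed as a dry run:

```shell
# Hypothetical configuration values, injected at run time rather than
# baked into the image.
DB_HOST="db.internal.example.com"
DB_PORT=5432

# Dry run: echo the command instead of executing it.
echo docker run -d \
  -e "DB_HOST=${DB_HOST}" \
  -e "DB_PORT=${DB_PORT}" \
  -p 8080:8080 \
  registry.example.com/myapp/web:dev_latest
```

The same immutable image can then run unchanged in every environment, with only the injected values differing.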


Container-aware Deployment Pipelines

A deployment pipeline consists of the various steps that need to be performed to build, test, and release code to production. The steps can be organized in stages, and stages may be fully automated or may require manual steps.

Once you start using application containers, your deployment pipeline needs to be aware of container images and tags. It is important to know which phase of the deployment pipeline a container image is in. This can be done as follows:

  1. Identify the stages and environment types in your pipeline

  2. Define a naming convention for immutable image tags that are applied to each image that is built by the build tool. This tag should never be changed:
    e.g. {YYYYMMDD}_{build number}

  3. Define naming conventions for image tags that will be accepted into an environment:
    e.g. {environment name}_latest

  4. Define naming conventions for image tags that will be promoted from an environment to the next stage in the deployment pipeline:
    e.g. {next environment name}_latest

Using these rules, each container image will have at least two tags, which are used to identify and track the progress of a container image across the deployment pipeline:

  1. A unique immutable tag that is applied when the image is built and is never changed

  2. A mutable tag that identifies the stage of the image in the deployment pipeline
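Putting the two-tag scheme together, promotion between stages amounts to re-pointing the next environment's mutable tag at the immutable build, without rebuilding anything. The image name and environment names below are illustrative, and the docker commands are echoed as a dry run:

```shell
IMAGE="registry.example.com/myapp/web"

# promote <immutable_tag> <next_environment>: re-points the mutable
# "{environment}_latest" tag at an existing immutable build.
promote() {
  local immutable="$1" next_env="$2"
  echo docker pull "${IMAGE}:${immutable}"
  echo docker tag "${IMAGE}:${immutable}" "${IMAGE}:${next_env}_latest"
  echo docker push "${IMAGE}:${next_env}_latest"
}

promote 20160315_42 staging
```

Because the immutable tag never moves, the exact image that passed one stage is the image that enters the next.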


Application container delivery and lifecycle management tools can now use this information to govern and manage an automated deployment pipeline. For example, in Nirmata you can define environment types that represent each phase in your deployment pipeline, and a tag naming scheme is used to identify which tags are allowed into each environment type and how to name tags for images that are promoted from an environment type.


Updating Application Containers

So far, we have discussed how to build container images and manage them across a deployment pipeline. The next step is to update the application in one or more environments. In this section, we will discuss how containers ease the adoption of best practices for updating applications across environments.


Microservices

Microservices is an architectural style where an application is composed of several independent services, and each service is designed to be elastic, resilient, composable, minimal, and complete [3]. Microservices enable agility at scale as organizations and software code bases grow, and are becoming increasingly popular as an architectural choice for enterprise applications.

Containers are fast to deploy and run. Due to their lightweight packaging and runtime characteristics, containers are an ideal delivery vehicle for microservices: each individual service has its own container image, and each instance of the service can run in its own container.

One of the benefits of a Microservices style application is granular versioning and release management: each service can be versioned and updated independently. With the Microservices approach, rather than testing and re-deploying the entire system with a large batch of changes, small incremental changes can be safely made to a production system. And with the proper tooling, it is also possible to run multiple versions of the same service and manage requests across the different versions.

Blue-Green Deployments

A blue-green deployment (sometimes also called a red-black deployment) is a release management best practice that allows fast recovery in case of issues [4]. When performing a blue-green update, a new version (“green”) is rolled out alongside the existing version (“blue”), and an upstream load balancer, or DNS service, is updated to start sending traffic to the “green” version. The advantage of this style of deployment is that if something fails, you simply revert traffic to the “blue” version, which is still running as a standby.

Containers make blue-green deployments faster and easier to perform. Since container images are immutable, it is always possible to revert to a prior image version. And, due to optimized image management, this can be done in a few seconds.
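A blue-green update with containers can be sketched as follows; the image name, tags, and ports are illustrative, and the docker command is echoed as a dry run:

```shell
# Blue-green sketch; image name, tags, and ports are hypothetical.
IMAGE="registry.example.com/myapp/web"
BLUE_TAG="20160314_41"   # currently serving traffic
GREEN_TAG="20160315_42"  # candidate version

# Dry run: start the green version alongside blue on a different port.
echo docker run -d --name web-green -p 8081:8080 "${IMAGE}:${GREEN_TAG}"

# After smoke tests pass, repoint the load balancer from :8080 to :8081.
# Rollback is just pointing it back at blue, which is still running.
echo "switch load balancer upstream: blue(:8080) -> green(:8081)"
```

Note that rollback requires no rebuild or redeploy, only a load balancer change, because the blue container was never stopped.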

However, the real value of containers comes through as you start combining blue-green deployments with microservices style applications. Now individual services can leverage this best practice, which further helps reduce the scope of changes and potential errors.

Canary Launches

A canary launch goes a step further than a blue-green deployment and provides even greater safety when deploying changes to a production system [5]. With blue-green deployments, users are typically using either the blue or the green version of the application component; with a canary launch, the new version runs alongside the older versions and only select users are exposed to the new version. This allows verifying the correct operation of the new version before additional users are exposed to it.

For example, you can upgrade a service to a new version (v6.1) and initially only allow calls from internal or test users to that service. When the new version of the service looks stable, a small percentage of production traffic can be directed to the new version. Over time, the percentage of production traffic can be increased and the old version decommissioned.
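The traffic split itself is typically implemented at the load balancer rather than in the container engine. As an illustration only, assuming nginx as the load balancer and hypothetical hostnames, a weighted upstream could send roughly 5% of traffic to the canary:

```nginx
# Illustrative only: ~5% of requests go to the canary instance (v6.1),
# the rest to the stable version. Hostnames and weights are placeholders.
upstream myapp {
    server web-stable.internal:8080 weight=95;
    server web-canary.internal:8080 weight=5;
}
```

Ramping the canary then means adjusting the weights over time, until the old version receives no traffic and can be decommissioned.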

While containers are not necessary for implementing and managing canary launches, they can be an enabler in standardizing and automating update policies. For example, Nirmata allows users to select on a per Environment basis how to handle service updates. Users can choose to simply be notified and manually trigger a rolling upgrade, can choose to add the new version alongside existing versions, or can choose to replace existing versions via a rolling upgrade.


Environments are Now Disposable

Cloud computing has enabled software-defined infrastructure and allows us to treat servers as disposable entities [6]. Containers take this a step further: containers are very fast to deploy and launch, and with the right orchestration and automation tools, you can now treat entire environments as on-demand, disposable entities.

While production environments are likely to be long-lived, this approach provides several benefits for development and test environments, which can now be recreated with a single click. A deployment pipeline can incorporate automated tests that spin up an environment, run the tests, and automatically dispose of the environment once the tests complete.
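A disposable test environment can be sketched with Docker Compose; this assumes a docker-compose.yml describing the application's services is checked in with the code, and a hypothetical run-integration-tests.sh test script. The commands are echoed as a dry run:

```shell
# Disposable test environment sketch (dry run). docker-compose.yml and
# run-integration-tests.sh are assumptions, not from the article.
PROJECT="test_${BUILD_NUMBER:-1}"   # unique environment name per CI build

echo docker-compose -p "$PROJECT" up -d    # create the environment
echo ./run-integration-tests.sh            # exercise it
echo docker-compose -p "$PROJECT" down -v  # dispose of it, volumes included
```

Because each CI build gets its own project name, environments never collide and can be created and destroyed in parallel.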


Summary

Containers are rapidly being embraced as a foundational tool for DevOps and continuous delivery. Along with microservices style architectures, containers enable, and even help enforce, best practices for managing application components across a delivery pipeline, from a commit to running in production.

While containers solve several key problems, they also require new tooling for the automation and management of applications. Next-generation application delivery and management solutions can leverage containers as a standardized building block to fully automate the application lifecycle. I believe that this combination of technologies will help unleash a new level of productivity and advance software engineering to fulfill the ever-growing need for software-enabled products and devices across all domains.

You can try Nirmata's automated container delivery pipelines for free at:





[1] Docker Architecture

[2] Dockerfile Reference

[3] Microservices: The Five Architectural Constraints

[4] Blue-green Deployments

[5] Using Canary Launches to Test in Production

[6] Pets vs. Cattle

Explore Nirmata's multi-cloud container services at: http://nirmata.io




Opinions expressed by DZone contributors are their own.
