Historically, we have seen waves of innovation hit the IT industry. Typically, these have happened separately in the areas of infrastructure (mainframe to distributed to virtual), application architecture (monolithic to client-server to n-tier web) and process/methodology. But if I look around, I see that right now we are in the midst of what is not just another wave in one of these areas, but a drastic transformation that cuts across multiple areas at once. We are watching the infrastructure space be completely disrupted by lightweight container technology (currently best represented by Docker). We are seeing application architectures moving to a distributed microservices model to allow value-added business logic to be quickly added or changed in order to better serve the end-user. Additionally, the way we deliver software is changing rapidly, with emphasis on techniques that were unthinkable just a few years ago, such as “A/B testing in production” and “feature flags.” The really interesting part is that these three waves are feeding on each other and amplifying the ultimate effect on IT: the ability to provide more value faster to the business/consumer/user.
Containers and continuous delivery (CD) are coming together to accelerate the innovation that can happen in microservices-based applications. This radical change in IT tooling and process has the potential to have a huge impact on all of us, because these innovations can amplify each other and make the whole bigger than the sum of the individual parts. The combination can allow us to see, in enterprise IT, the exponential growth in innovation that we have seen in consumer and mobile applications over the past five years or so.
One of the most interesting things about the Docker phenomenon is how it improves the way development and operations teams work together, by raising the level of abstraction from the application binary to the container.
One way to view Docker is simply as a different, improved packaging approach, in much the same way that applications have been packaged in RPMs or other mechanisms. But if you focus solely on Docker as a packaging mechanism, you might think the impact will merely be about the last mile of how your application gets pushed to production. Yet, since Docker fundamentally improves the way you package, maintain, store, test, stage and deploy your applications, the target runtime environment isn’t an afterthought that’s left to the IT Ops team at a late stage of the CD pipeline. Instead, the runtime environment is closely associated with the application source code from the start. At the beginning of any CD pipeline, you’ll find a set of code repositories as well as a set of binary repositories containing a number of IT Ops-approved Docker images for the various environments required (operating system, application servers, databases, load-balancers, etc.).
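As a minimal sketch of what that looks like in practice, a Dockerfile at the start of the pipeline might pin the application to one of those IT Ops-approved base images (the registry host, image name and tag here are hypothetical examples, not real artifacts):

```dockerfile
# Build on an IT Ops-approved base image from the internal registry
# (registry host, image name and tag are hypothetical)
FROM registry.example.com/approved/app-server:1.8

# Add the application binary produced earlier in the pipeline
COPY target/myapp.war /opt/appserver/webapps/

# The runtime environment travels with the application source from day one
EXPOSE 8080
CMD ["/opt/appserver/bin/start.sh"]
```

The point is that the environment choice is made explicitly, in version control, at the very beginning of the pipeline rather than at deploy time.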
Because Docker encapsulates both the application and the application’s environment or infrastructure configuration, it provides several key benefits:
Docker Makes it Easier to Test Exactly What You Deploy
Developers deliver Docker images, or elements that are consumed by other containers; IT operations then deploys those images as containers. The opportunity to screw up in a handoff or reassembly process is reduced or eliminated. Docker containers encourage a central tenet of continuous delivery: reuse the same binaries at each step of the pipeline to ensure no errors are introduced in the build process itself.
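A sketch of that tenet in CLI terms (the registry and tag scheme are hypothetical): the image is built exactly once, tagged with the commit that produced it, and later stages promote the very same image by adding tags rather than rebuilding.

```shell
# Build once, at the start of the pipeline, tagged with the commit ID
docker build -t registry.example.com/myapp:3f9c2d1 .
docker push registry.example.com/myapp:3f9c2d1

# Later stages promote the *same* image by retagging - never rebuilding
docker tag registry.example.com/myapp:3f9c2d1 registry.example.com/myapp:staging
docker push registry.example.com/myapp:staging
```

Because every stage pulls the identical binary artifact, what you tested is, bit for bit, what you deploy.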
Docker Containers Provide the Basis for Immutable Infrastructure
Applications can be added, removed, or cloned, and their constituent parts can change, without leaving any residue behind. Whatever mess a failed deployment causes is contained within the container. Deleting and adding become so much easier that you stop thinking about how to update the state of a running application.

In addition, when infrastructure can be changed (and it must change) independently of the applications that the infrastructure hosts - a very traditional line between development and operations responsibilities - there are inevitable problems. Again, the container abstraction provides an opportunity to reduce or eliminate that exposure. This becomes particularly important as enterprises move from traditional virtualization to private or public cloud infrastructure.

None of these benefits appears by magic. Your software and infrastructure still need to be created, integrated with other software, configured, updated and managed throughout their lifetimes. Docker gives you improved tools to do that.
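The “delete and recreate” habit can be sketched in two commands (the container and image names are hypothetical): an upgrade is a replacement, never an in-place mutation.

```shell
# Upgrading is replacement, not mutation: remove the old container...
docker stop myapp && docker rm myapp

# ...and start a fresh one from the new image; any mess left by the old
# deployment disappears along with its container
docker run -d --name myapp registry.example.com/myapp:1.3.0
```

Nothing from the previous deployment survives into the new one, which is exactly what immutable infrastructure asks for.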
Updating the Environment Itself is Much More Formalized, Yet Simplified
In a typical software delivery process, the main trigger of a new CD pipeline will be a change in the source code of the application. This will initiate a web of tests, integrations, approvals and so on, which, taken together, comprise the software pipeline. However, if you want to update the environment itself (such as patching the operating system), this happens separately, in parallel to the application build process, and the updated bits are only picked up the next time the pipeline executes. This could happen late in the pipeline execution; hence, an application could end up not going through all tests with that new environment.

With Docker, not only will a code change initiate a CD pipeline execution, but uploading a new Docker base image (such as an operating system) will also trigger the execution of any CD pipeline that consumes this image. Since Docker images can depend on each other, patching an operating system might result in the automatic update of database and application server images, which will in turn initiate the execution of any pipeline that consumes those database/application server images! A CD pipeline is no longer just for developers and their source code. Developers and IT Ops now share the exact same pipeline for all of their changes.

This has the potential to hugely improve the safety and security of an IT organization. For example, when facing a critical and widely deployed security issue (such as the Heartbleed bug), IT Ops teams often struggle to make sure that absolutely ALL machines in production have been patched. How do you make sure that no server gets forgotten? With a Docker-based CD pipeline, every environment dependency is explicitly and declaratively stated as part of the CD pipeline.
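The cascading rebuilds described above amount to a walk over the image dependency graph. This toy Python sketch (the image names and graph are hypothetical illustrations, not a real Docker API) shows which images - and therefore which pipelines - must be triggered when a base image changes:

```python
from collections import deque

# Hypothetical FROM relationships: child image -> the base image it builds on
BASE_OF = {
    "app-server": "os",
    "database": "os",
    "shop-frontend": "app-server",
    "orders-service": "app-server",
}

def images_to_rebuild(changed_base):
    """Return every image that (transitively) builds FROM changed_base."""
    # Invert the graph: base image -> images built directly on it
    children = {}
    for child, base in BASE_OF.items():
        children.setdefault(base, []).append(child)
    # Breadth-first walk starting from the changed base image
    affected, queue = [], deque(children.get(changed_base, []))
    while queue:
        image = queue.popleft()
        affected.append(image)
        queue.extend(children.get(image, []))
    return affected

# Patching the OS image ripples out to every downstream pipeline
print(images_to_rebuild("os"))
# → ['app-server', 'database', 'shop-frontend', 'orders-service']
```

Because every dependency is declared, nothing can be “forgotten”: the graph walk finds every consumer of the patched image automatically.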
Even if your application has nothing to do with Docker or will not be delivered as a Docker image itself, you can still benefit from it by building and testing inside a container. Your testing will behave as if you have a whole computer to yourself, oblivious to the fact that it actually is confined in a jail cell. The Docker image essentially becomes an ephemeral executor entity that gets efficiently created and discarded 100 times a day. Each job executes in a fully isolated environment that’s not visible or accessible by any other concurrent build, and each executor gets thrown away at the end of each build (or reused for a later build, if that’s what you want).
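A minimal sketch of that pattern (the mount path and build command are hypothetical examples): the workspace is mounted into a throwaway container, the build runs inside it, and the --rm flag discards the container the moment it exits.

```shell
# Run the build inside a disposable container; --rm throws the container
# (and any mess the build made) away as soon as the command finishes
docker run --rm -v "$(pwd)":/workspace -w /workspace maven:3 mvn test
```

Every build starts from the same pristine image, so no state leaks from one job to the next.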
Another benefit of building and testing inside a container is that IT Ops no longer needs to be in charge of managing build environments and keeping them clean, a tedious but critical task in a well-run IT environment. Developers and DevOps can build and maintain their customized images while IT Ops provides generic vanilla environments. Moving to Docker images represents low-hanging fruit that comes with very little disruption, but lots of advantages.
As experience with Docker-based applications grows, the industry will quickly evolve to a place where a single container delivers an application or service within a microservices-based architecture. In that microservices world, fleet management tools like Docker Swarm+Compose, Mesos and Kubernetes will use Docker containers as building blocks to deliver complex applications. As they evolve to this level of sophistication, the need to build, test and ship a set of containers will become acute.
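With a tool like Docker Compose, that set of containers is declared as a single unit. A hedged sketch of such a multi-container application (service names, image names and the registry are hypothetical):

```yaml
# docker-compose.yml: one microservices application, several containers
web:
  image: registry.example.com/shop-frontend:1.2.0
  ports:
    - "80:8080"
  links:
    - orders
orders:
  image: registry.example.com/orders-service:2.0.1
  links:
    - db
db:
  image: postgres:9.4
```

The whole application - not just one container - becomes the thing a pipeline builds, tests and ships.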