Continuous Delivery: From Theory to Practice
[This article was written by Yaron Parasol, VP Product Management at GigaSpaces, as part of the DZone 2015 Guide to Continuous Delivery, which is available for download now.]
When we discuss continuous delivery (CD), the starting point needs to be the motivation that has driven this IT evolution. The demand for continuous delivery was brought on by businesses' need for more agility and faster time to market, where time to market is the primary motivation and agility is the means of achieving it. If manual delivery processes used to involve running a compiler by hand, then creating a binary, then copying it manually to a server, and then restarting that server, then time to market in such scenarios was clearly long and complex, and, even more so, the process was error prone due to the manual involvement.
Two other important driving factors are the need for tighter and faster feedback loops, as well as the reduction of work in progress (WIP). WIP is a term from the inventory world, a concept that refers to a company's partially finished goods awaiting completion and eventual sale; in the interim, these goods cannot reach the market and simply consume company resources such as storage and bound capital.
This is why the agile methodology, with sprints and Scrum, was started in the first place. It then led to the development of continuous integration (CI), which automates build processes to speed up delivery and minimizes human involvement to prevent error. However, even with CI, most deployments still remained manual. At best, parts of the deployment process were automated, but only in the context of small, closed, essentially non-configurable environments.
However, if the ultimate goal is to get the new feature that the developer writes to production in a matter of hours rather than months or half a year, there was still a need to fill the gap between agile development plus CI and actually pushing frequent, small updates to production.
Getting the code to production is where continuous delivery comes in to complete the entire cycle from development to market. To do that, you want to be able to push a minimal scope of new code in a way that is easily testable and has limited exposure, to monitor it closely, and to roll it back if it's not good.
Continuous Integration in the Real World
While the motivation and methodology are generally clear, the actual execution is another story entirely. With CI/CD we're talking about a streamlined process that's quite complex. This means we don't want human intervention, or at the very least we want to minimize it significantly, and we want to extend the automation all the way to production, all while combining this with testing and intelligent monitoring.
Since CI was essentially the starting point, let's take a look at CI, then at where we need to get to, and at the ideal way to bridge that gap.
Diagram 1 - Typical CI Process
IaaS and Containers in the CI Process
After you've built a working automated build process, there is another important challenge to overcome before being able to push to production: controlled environments. This is the primary reason IaaS and containers have become so popular in the world of CI. The three leading challenges with controlling environments have been working in clean environments, ensuring proper utilization of resources, and parallelizing processes:
- "Dirty" machines (those with packages already installed on them, unclean configuration, and files spread around) are bound to cause inconsistencies when running tests and building binaries.
- Environments that are set up manually and occupied unnecessarily for extended periods prevent others from using them, which in turn leads to buying additional hardware and software to compensate for the shortage; on-demand resources ensure more efficient utilization.
- Parallelizing the process means spawning a set of VMs/containers and running different test suites/builds on each.
For these reasons, it has become popular to use IaaS for typical integration and testing. With IaaS you can build the on-demand resources you need (compute, storage, and network) within just a few minutes, and if something goes wrong you can easily wipe the entire environment and do it all over again.
Of course, the ideal way to do this is to automate these processes. You can choose the do-it-yourself method of using scripts, installing the application stack with configuration management tools or even more scripts. This means starting the application components, cloning a Git repo, and running the build process, which yields a binary or package with the updated code. At that point you have your binary, which is really only half the process; you still need to install the binaries on the environment you just created and run the integration tests.
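To make this concrete, here is a minimal Python sketch of the build half of that process, assuming a hypothetical repository URL, a `make package` build command, and a `dist` artifact directory; a real stack would substitute its own tooling:

```python
# A minimal sketch of the "build" half: clone the repo into a clean
# workspace, run the build, and return the artifact location.
# The repo URL, build command, and artifact path are placeholders.
import pathlib
import subprocess
import tempfile

def build(repo_url="https://example.com/myapp.git",
          build_cmd=("make", "package")):
    workdir = pathlib.Path(tempfile.mkdtemp())  # fresh, "clean" workspace
    subprocess.run(["git", "clone", repo_url, str(workdir)], check=True)
    subprocess.run(list(build_cmd), cwd=workdir, check=True)
    return workdir / "dist"  # assumed location of the resulting package

# The artifact would then be installed on the freshly provisioned
# environment, where the integration tests run against it.
```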
The benefit of this is a more robust process that is less error prone, thanks to the minimization of human involvement.
This solves the issue of clean environments and enables parallelization, since resources are on demand. The disadvantage of using IaaS, again, is under-utilization of resources; that's where containers come into the mix.
Containers basically provide the same benefits as VMs, but are more lightweight: where a VM takes minutes to load, a container can be spawned almost instantaneously. A container is a fresh environment, utilizes minimal resources, and enables parallelization; what's more, since it's so lightweight, the binaries are also released much more quickly.
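As a rough illustration of that parallelization, the following sketch uses the Docker SDK for Python to run two hypothetical test suites in parallel, each in its own throwaway container; the image name and test commands are placeholders:

```python
# A sketch of parallelized test runs in throwaway containers, using
# the Docker SDK for Python (pip install docker).
from concurrent.futures import ThreadPoolExecutor

import docker

client = docker.from_env()
TEST_SUITES = ["pytest tests/unit", "pytest tests/integration"]

def run_suite(command):
    # Each suite gets a fresh container; remove=True wipes it when
    # done, so no state leaks between runs.
    return client.containers.run("myapp-ci:latest", command, remove=True)

with ThreadPoolExecutor() as pool:
    for logs in pool.map(run_suite, TEST_SUITES):
        print(logs.decode())
```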
This is where CI processes become interesting: a new form of CI that combines application CI and automation CI.
Diagram 2 - Combined CI Process
Now let's take a look at CI in yet another flavor: the containerization flavor.
Diagram 3 - Containerized CI Process
There is also a need for container orchestration: timing the creation of the containers and tying the different application components together. However, that is a whole post unto itself, so I can only suggest further reading on container/Docker orchestration.
Continuous Delivery...From Theory to Practice
Having looked at typical CI, and then at CI in an IaaS environment or with a containerization flavor, we can segue into CD.
You would assume that once you have the application packaged and tested, it should be pretty easy to deploy to production, right? Well, the answer is actually no. Unfortunately, this has proven quite difficult in real production environments.
There are several obstacles when it comes to zero-downtime deployments.
After you've automated your build process, it's not just a matter of automatically deploying your code to production. For starters, the build process doesn't happen in production, so it carries no business risk. When deploying your code, the process matters more than the actual pressing of the button at the end of the day -- or a thousand times a day, for that matter. Before you can achieve this level of continuous delivery, you need to make sure that you aren't jeopardizing your system, your users, and ultimately your business in the process.
This means you need to know what makes the entire system work and where the potential problems are, and make sure that your deployment stack and new package take all of this into consideration in advance. To do so, you need to properly test your code prior to deploying to production, and then, once your code is deployed, actually monitor the right things. The "right things" means being aware of the changes that are being made and how they can affect the system as a whole. For example, if the deployment contains a change to your database schema, you need a process in place that ensures the schema stays intact when you deploy (and that's just one small example).
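As a toy illustration of that one safeguard, the sketch below (using Python's standard-library sqlite3, with an invented schema_version table and version number) refuses to deploy when the database schema is not the one the new package expects:

```python
# Refuse to deploy if the schema version doesn't match what this
# release was built against. Table name and version are invented.
import sqlite3

EXPECTED_SCHEMA_VERSION = 42

def schema_is_compatible(db_path="app.db"):
    with sqlite3.connect(db_path) as conn:
        row = conn.execute("SELECT version FROM schema_version").fetchone()
    return row is not None and row[0] == EXPECTED_SCHEMA_VERSION

if not schema_is_compatible():
    raise SystemExit("Schema mismatch -- aborting deployment")
```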
Monitoring deployments is a whole bible unto itself, too lengthy to discuss in detail here. However, it is probably the most important factor, since the ability to deploy 1,000 times a day is worthless if you are unable to monitor how these changes affect your system. Because, let's be honest, you're not looking to deploy new features a thousand times a day; deploying a thousand times a day gives you the ability to fix things really fast and make small changes quickly.
That's why you need to set up the entire process in a way that enables you to quickly understand exactly what's going on in your system; only then will you have the ability to deploy as many times as you want. This means monitoring the right places and the right KPIs and metrics (CPU, load, memory), watching for any performance lapse, and, when you reach the level of numerous deployments a day, noting whether there is also a gradual performance lapse, which is often overlooked. And this is just the tip of the iceberg.
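One sketch of what such an automated gate might look like: compare a handful of KPIs against thresholds after each deployment and flag a rollback when they regress. Here `fetch_metrics()` is a stand-in for a query against a real monitoring backend, and the thresholds are invented:

```python
# Post-deployment health gate: invented thresholds for a few KPIs.
THRESHOLDS = {"cpu_percent": 80.0, "load_avg": 4.0, "p95_latency_ms": 250.0}

def fetch_metrics():
    # Placeholder: in practice, query your monitoring system here.
    return {"cpu_percent": 55.2, "load_avg": 1.8, "p95_latency_ms": 310.0}

def deployment_healthy(metrics):
    return all(metrics[name] <= limit for name, limit in THRESHOLDS.items())

if not deployment_healthy(fetch_metrics()):
    print("KPI regression detected -- rolling back")  # hook rollback here
```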
This has become exponentially more difficult these days with autoscaling capabilities and frequent changes to servers and their locations. Where once upon a time you had two servers and everything was simple, these days you have thousands of servers distributed around the world and multiple processes to deploy all the time.
Reaching the Continuous Delivery Promised Land
When you're ready to deploy your code to production, you need to write a process that takes all aspects of the deployment into account. This typically includes:
- Choosing the right tool for the job. (As an aside, the tool isn't really the problem with deployment; the process just needs to be ironclad. That said, some tools, like common CM tools, are less deployment-oriented, while scripting tools such as Fabric or Ansible are more deployment-oriented and may do a better job.)
- Automating the process of pulling your server list. (Taking Amazon as an example, you can use tags to mark the different server types and then deploy to the right servers based on their type; see the sketch after this list.)
- Choosing the type of deployment process. (There are a few common controlled deployment methods, such as canary and blue-green, a.k.a. A/B. There is much to be said about these; the most important aspect of each lies in the next bullet.)
- Monitoring the right things, so you know when and what to roll back.
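Here is a rough sketch tying several of these bullets together, assuming AWS and Fabric: pull the server list by tag with boto3, then push the new build to a single canary host first. The tag names, paths, and restart command are all illustrative:

```python
import boto3
from fabric import Connection

def servers_by_role(role):
    # Pull the running servers that carry a given (hypothetical) Role tag.
    ec2 = boto3.client("ec2")
    resp = ec2.describe_instances(Filters=[
        {"Name": "tag:Role", "Values": [role]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ])
    return [i["PublicDnsName"]
            for r in resp["Reservations"] for i in r["Instances"]]

def deploy(host, artifact="dist/myapp.tar.gz"):
    conn = Connection(host)
    conn.put(artifact, remote="/opt/myapp/release.tar.gz")
    conn.run("sudo systemctl restart myapp")  # assumed service name

hosts = servers_by_role("web")
deploy(hosts[0])  # canary first; watch the KPIs before touching the rest
# ...and only if the canary stays healthy, roll out to hosts[1:]
```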
And while we can use these methods to deploy code, if we want to continuously deploy applications as a whole, including the infrastructure and not just the code (a.k.a. infrastructure as code), that's where orchestration comes in. We would still need to perform the same initial steps:
- Understanding the process and how it affects our system
- Creating the binaries/logical containers in a clean environment
- Testing
- Monitoring
However, on top of all of this, we could add code that creates the infrastructure, including:
- Loading the resources
- Keeping the infrastructure in source control
- Taking the binaries created and deploying them with the infrastructure when the deployment is run (sketched below)
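A minimal sketch of that infrastructure step, again assuming AWS via boto3 (the AMI ID, instance type, and key name are placeholders): create the compute resource in code, so the whole environment is reproducible from source control, then deploy the binaries onto it:

```python
import boto3

ec2 = boto3.resource("ec2")
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="deploy-key",             # assumed existing key pair
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Role", "Value": "web"}],
    }],
)
instances[0].wait_until_running()
# The binaries built earlier would now be installed on this instance,
# e.g. with the same Fabric-based deploy() sketched above.
```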
So whether you choose to do a canary or blue-green deployment, an orchestrator will come in very handy in either scenario to manage business continuity and data integrity. Given the complexity involved in pushing code to live systems in production, the orchestrator should be built in a manner that enables you to address the entire application lifecycle. A good example of this is TOSCA (Topology and Orchestration Specification for Cloud Applications), an open cloud standard language from the OASIS organization (the same organization that brought us XML) that is based on YAML.
TOSCA combines declarative descriptions of the application topology with all its components (including the load balancer, network, compute resources, software, and everything else) with an imperative set of workflows that describe the logic of any process we need to automate. From a continuous delivery perspective, this means that in a TOSCA topology each application component has lifecycle hooks, and more hooks can be added to cope with new processes, such as invoking A/B testing of deployments, testing, and monitoring.
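TOSCA blueprints themselves are written in YAML; to keep the examples here in one language, the sketch below illustrates the lifecycle-hook idea in plain Python. This is a conceptual illustration only, not TOSCA syntax:

```python
# Each component exposes named lifecycle operations, and the CD
# pipeline attaches extra handlers (A/B switchover, monitoring
# setup) to them -- the same idea TOSCA expresses declaratively.
class NodeLifecycle:
    def __init__(self):
        self.hooks = {"create": [], "configure": [], "start": []}

    def on(self, phase, handler):
        self.hooks[phase].append(handler)

    def run(self, phase):
        for handler in self.hooks[phase]:
            handler()

web_server = NodeLifecycle()
web_server.on("start", lambda: print("switching the LB to the canary"))
web_server.on("start", lambda: print("wiring up post-deploy monitoring"))
web_server.run("start")
```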
With these new CI/CD capabilities, a line of business that once upon a time had to request a feature from engineering based on business-level requirements, and then spend another year waiting for it to actually reach the market, can now expect a new feature to be shipped to production within just a few weeks. Needless to say, the business impact of such processes is driving an unprecedented evolution in IT that will only progress and gain momentum in the near future. However, as with all new technology, continuous delivery needs to be implemented with safe measures, taking all of the complexities into account, as the negative business impact of CD gone wrong can far outweigh the positive aspects.