5 Things You Should Know About Continuous Deployment... by the Man Who Coined the Term
Tech Guru Timothy Fitz on making the jump to Continuous Deployment, the buzz around DevOps, and why GitHub has set back Software Departments by 5-10 years…
How would you define continuous deployment, and how is it different from continuous delivery?
Timothy Fitz: This is a great question that isn’t frequently asked, since it is often assumed that they both mean the same thing. Usually, when people refer to continuous delivery, they actually mean continuous deployment.
Martin Fowler has a great blog post that clearly defines continuous delivery, along with a list of practices. I think that continuous deployment includes those practices, and then some. The big jump to continuous deployment involves going fully automatic in order to deploy on every single commit, safely, and without many deployment failures.
“Continuous Delivery is sometimes confused with Continuous Deployment. Continuous Deployment means that every change goes through the pipeline and automatically gets put into production, resulting in many production deployments every day. Continuous Delivery just means that you are able to do frequent deployments but may choose not to do it, usually due to businesses preferring a slower rate of deployment. In order to do Continuous Deployment you must be doing Continuous Delivery.” Check out Martin Fowler’s blog on this subject
DevOps didn’t exist five years ago. What does it mean today?
TF: DevOps is a buzzword that’s thrown around a lot. I see a pattern where startups and corporations adopted the cloud and got to the point where they needed to redefine operations. A big part of operations’ responsibility was racking servers and designing physical issues, which is no longer the case with the cloud. DevOps is what’s left after you remove the physical nature of operations, and then incorporate coding and a higher level process that involves a different set of skills.
In large corporations there’s often a mandate from the top level to move to the cloud, which percolates throughout the organization. As companies try to define what the cloud means to them, they add products such as Chef and Puppet to their tool sets. These make processes more automatic and ensure that their environments can be rebuilt at a moment’s notice. These companies also try to define DevOps, which guides them in the direction of continuous deployment. This buzz leads people to question what it means to them, which I love. The combination of buzzword compliance along with engineers really wanting to adopt these concepts is leading to a much faster adoption than I would have ever predicted. You can even see widespread adoption at really large organizations with complicated deployment scenarios like large legacy codebases, embedded hardware, and regulated industries. Everything is going the Continuous Deployment way much faster than I expected.
Which practice is the weakest in organizations today: Continuous Integration, Continuous Delivery, Continuous Deployment, or Continuous Testing?
TF: It’s two things in rapid succession. It turns out that building the deployment pipeline is pretty straightforward once you have buy-in from everyone. Going from commit to live is not difficult and often requires only a couple of changes to your Jenkins configuration.
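Conceptually, a commit-to-live pipeline boils down to a handful of gated stages. The sketch below is a minimal, hypothetical illustration in Python; the `make` targets are assumptions rather than real project commands, and in Jenkins each stage would be a pipeline step:

```python
import subprocess
import sys

def run(cmd):
    """Run one pipeline stage; abort the whole deploy on the first failure."""
    print("--> " + " ".join(cmd))
    result = subprocess.run(cmd)
    if result.returncode != 0:
        sys.exit("stage failed: " + " ".join(cmd))
    return True

def commit_to_live():
    # Hypothetical stages -- substitute your project's real build commands.
    run(["make", "test"])        # unit tests gate every commit
    run(["make", "package"])     # build a deployable artifact
    run(["make", "deploy"])      # push the artifact live, fully automatic
    run(["make", "smoke-test"])  # verify the live site right after deploy

if __name__ == "__main__":
    commit_to_live()
```

The key property is that no stage waits for a human: a commit either flows all the way to production or stops the line.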
The problem is that people aren’t doing real Continuous Integration; they’re following GitHub’s model and using short-lived “branches” (three to five days). I would go as far as saying that GitHub has set back most software departments five to ten years in terms of Continuous Integration, because it made branching cheap and made it the default flow in Git and GitHub. This process makes perfect sense for open source, but not for products with one deployment target. Anything that lives on a branch and hasn’t made it into that single deployment target will cost you: it’s either potential waste, will cause merge conflicts, or just won’t deploy because it’s wrong.
Continuous Deployment eliminates these issues by saying “No” to branching. Instead, it requires you to work on trunk and commit at least once a day. It’s easy once you start doing it. But you need to acquire a specific skill set and knowledge for Continuous Deployment, and it’s hard for many to understand it if they haven’t been doing it.
If I’m teaching Continuous Deployment, I need a week to explain everything I just said, and then ten whole years to teach automated testing practices. The meat of the whole thing is in the tools, frameworks, skill-sets, people, knowledge and sharing. Everything is hard and new, and no one really understands it if they haven’t been doing it for many years.
The BlazeMeter team evangelize Continuous Testing. How do you think it fits into the Deployment process?
TF: Again, it’s important to acknowledge that there’s a big difference between Continuous Delivery and Continuous Deployment. If you’re doing two or three deploys a week – which is considered fairly aggressive but is also the standard for continuous delivery – it’s still easy to say “OK, before we do a release, we’ll do two hours of baking or testing.” However, as soon as you say “every commit has to go live right away,” you can’t have a two-hour automated test that says “yes, this scales and it performs well.”
With Continuous Deployment, you need to switch from big, slow, and expensive testing methods to small, frequent, and continuous ones. You need to know that testing happens automatically on every commit, without having to stop and worry about whether a given change was safe to deploy.
The “Continuous” trend is happening everywhere. It’s not just load and performance testing, but this is a big one that causes problems as people don’t innately see how to switch from a big discrete process to a continuous one. Performance Testing is hard for people to wrap their heads around. They ask: “How do I do continuous testing? Don’t I need to build a whole cluster, and then a big testing infrastructure, and fire a bunch of tests, wait an hour and see if it all worked?” No. Instead, you need to outfit production with enough headroom to allow you to fire fake traffic, measure warning signs, receive early indicators that something’s failing, and incorporate other system building elements that guarantee that all commits go through that testing. You don’t ever need to stop or slow down development to get it done.
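One way to make that concrete: instead of a one-hour load test, keep a small, continuous probe that fires a little synthetic traffic at production headroom and checks the worst-case latency against a budget. This is an illustrative sketch, not a tool the interview names; the endpoint and the latency budget are placeholder assumptions:

```python
import time
import urllib.request

def sample_latencies(url, samples=5):
    """Fire a small burst of synthetic requests and time each one."""
    latencies = []
    for _ in range(samples):
        start = time.monotonic()
        with urllib.request.urlopen(url) as resp:
            resp.read()
        latencies.append(time.monotonic() - start)
    return latencies

def within_budget(latencies, budget_seconds):
    """Early-warning check: does the worst sample stay inside our headroom?"""
    return max(latencies) <= budget_seconds

# Hypothetical usage against a production endpoint with spare capacity:
# if not within_budget(sample_latencies("https://example.com/health"), 0.5):
#     ...page someone about a latency regression on the latest commit...
```

Because the probe is cheap, it can run on every commit rather than as a discrete, scheduled event, which is exactly the shift from a big process to a continuous one.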
What 3 tips would you give DevOps engineering teams?
TF: First, your goal should always be to go from commit to live in production, automatically. With that in mind, you’ll quickly see what you can’t afford to do in a Continuous Deployment world, which will tease out issues that are normally papered over. By adopting this mindset even before actually making the switch to Continuous Deployment, you’ll be able to uncover holes that you need to patch up.
Secondly, use feature flags or flippers instead of branching. This may diverge from the GitHub best practices that so many love and evangelize, but it’s the right way to go. Because of this switch, I never have merge conflicts that take more than a few minutes to resolve. These types of issues go away and everything becomes transparent when everyone commits really frequently.
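A feature flag can be as simple as a dictionary lookup that lets unfinished code ship dark on trunk. The sketch below is a minimal, hypothetical illustration: the flag name and checkout functions are invented, and real teams often use a flag service rather than an in-process dict:

```python
# Minimal in-process feature-flag registry -- names here are illustrative,
# not taken from any specific flag library.
FLAGS = {
    "new_checkout_flow": False,  # code ships dark until the flag flips
}

def is_enabled(flag_name):
    return FLAGS.get(flag_name, False)

def legacy_checkout(cart):
    return sum(cart)

def new_checkout(cart):
    # Hypothetical new behavior: a 5% promotional discount.
    return round(sum(cart) * 0.95, 2)

def checkout(cart):
    if is_enabled("new_checkout_flow"):
        return new_checkout(cart)  # unfinished work hides behind the flag
    return legacy_checkout(cart)
```

Flipping the flag in production (or for a subset of users) replaces the merge of a long-lived branch, so half-done features can live on trunk without being exposed.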
Finally, you should accept the reality that you’re not going to trust Continuous Deployment until you use it and see that nothing goes wrong. I’ve noticed that before making the jump to Continuous Deployment, many teams want to spend weeks writing automated test scripts because they don’t trust their existing test coverage. Instead, you should run a bunch of really small production tests that perform real interactions with your live site, such as creating new user accounts with Selenium or just curling some URLs, and then add them to your deployment pipeline. If any of the tests fail, roll back to the old version and immediately contact the developer who just deployed. Dollar for dollar, you can write those tests in a day or two, and they’ll catch 95% of your failures and 99% of your business issues regarding customer impact and dollars lost.
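The kind of small production test he describes can literally be a few URL checks bolted onto the end of the pipeline. This is a hedged sketch with placeholder URLs; a real setup would hit your own endpoints and trigger the rollback on failure:

```python
import urllib.request

# Hypothetical post-deploy smoke tests: a handful of real interactions
# against the live site, run as the last stage of the pipeline.
SMOKE_URLS = [
    "https://example.com/",        # placeholder URLs -- substitute your own
    "https://example.com/login",
    "https://example.com/signup",
]

def smoke_ok(url):
    """Return True if the live URL answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

def run_smoke_tests(urls):
    """Return the list of failing URLs; non-empty means roll back and
    contact the developer who just deployed."""
    return [u for u in urls if not smoke_ok(u)]
```

Tests this shallow are deliberately cheap to write; the point is broad coverage of real user paths, not depth.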
Published at DZone with permission of Ofir Nachmani, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.