Implementing Continuous Delivery is Easy... Isn't It?
Last week I saw the following email, quite rightly thanking a bunch of developers for their herculean efforts in managing, against the odds, to launch a web app on time for an industry show. It went something like this:
“We've had late nights, all nights, plans A - E, last minute AAARRRGGGHHHs, frantic calls to the server company, web servers going down on launch day, cancelled holidays, tension and early stage Stockholm Syndrome. Yet somehow we did it and are still talking to each other. The following people should stand up and take the applause they justly deserve because it has been a truly amazing effort: Laura, Mo, Sir Chris, Jason, Alistair, Andy, Jess, Victoria, Greg, Ben and Bradley 1.”
...and therein lies the problem. Emails like this should be the exception rather than the rule, because releasing code into the production environment should be a non-event. To my way of thinking, if you continually work weekends and late nights, suffering stress and cancelled holidays so that you can painfully and manually push your app into the production environment, then there’s something very wrong with your development process. I read somewhere once that you should “work smart and not hard”, which seems to echo the sentiments of @jezhumble, who recently gave an inspiring talk in Berlin on continuous delivery. During the talk Jez went through the fundamentals of continuous delivery, the crux of which seemed to be:
- When delivering, deliver small chunks of work.
- Deliver often.
- Be able to rollback deliveries quickly and simply.
- Do the above by automating everything.
- People are key - talk to other teams / departments and work with them.
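The first four points above can be sketched as a symlink-switch deploy. This is my own minimal, hedged illustration rather than anything from Jez’s talk: it assumes releases are pre-built into versioned directories, and the names `APP_ROOT`, `deploy` and `rollback` are hypothetical.

```shell
#!/bin/sh
# Minimal sketch of "automate everything, roll back quickly": each release
# lives in its own versioned directory, and a "current" symlink points at
# whichever one is live. (Illustrative only; names are hypothetical.)
set -eu

APP_ROOT="${APP_ROOT:-./app}"
mkdir -p "$APP_ROOT/releases"

deploy() {
    release="$1"                             # e.g. a build number or git SHA
    mkdir -p "$APP_ROOT/releases/$release"   # ...copy build artefacts in here...
    # Repoint the live symlink in one step; the old release stays on disk,
    # so rolling back is just pointing the symlink at it again.
    ln -sfn "releases/$release" "$APP_ROOT/current"
    echo "deployed $release"
}

rollback() {
    release="$1"                             # a known-good earlier release
    ln -sfn "releases/$release" "$APP_ROOT/current"
    echo "rolled back to $release"
}

deploy 42
deploy 43       # suppose release 43 misbehaves in production...
rollback 42     # ...recovery is one symlink switch, not an all-nighter
```

Because deployment is a script rather than a checklist, it can run as often as you like, and the rollback path is exercised as routinely as the deploy path.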
In making these (paraphrased) points Jez made it all sound incredibly easy, but is it as easy as he makes out? I think that the answer is both yes and no... In implementing continuous delivery, what you’re actually doing is undertaking a series of process changes within a corporate environment, and managing that change can be incredibly hard, often intensely frustrating, and hugely political. I think that it’s one of those times where “size matters”.
John Allspaw, formerly of flickr and now of Etsy, recounts his experiences of continuous delivery at flickr and how the company was so small that the developers had no choice but to do everything themselves, from code to QA to operations. This led to developers pushing small incremental code changes to the production environment. This process can still be seen if you look at the bottom of http://code.flickr.com, where you’ll see something like this:
...which is very cool. If the email at the start of this blog was sent out by a flickr manager then last week he’d have sent out 84 emails!
But I digress. The point of the John Allspaw / flickr example is that it’s relatively easy to implement procedures in a new company, or to change existing procedures in a very small one. Large companies, on the other hand, are a different kettle of fish, where implementing change is more difficult. Apart from the logistics of managing more people, often over multiple sites around the world, there are also ingrained, established and cherished procedures, which sometimes result in a resistance to change and a ‘jobsworth’ attitude.
I’ve been in several projects where the dev team has organized their internal procedures to include writing tests, committing frequently, running continuous builds, adding automation and releasing frequently to the test team. This is usually driven by team members who hate the hassle of stressful, infrequent, manual releases, want to make their lives easier, and are enthusiastic enough to embrace the latest ideas and take responsibility for what they do. In management speak this would be ‘bottom up’ instigation of change.
Although this idea of ‘bottom up’ change works well in both small companies and at team level, my experience is that it’s pretty ineffectual when it comes to dealing with large organizations. In larger companies it seems normal to have several development teams working on a multitude of different applications, while quite often there’s a single database team and operations team responsible for managing the databases and the deployment of all applications. This is where developer driven, bottom up continuous delivery fails. It’s easy to change the way your team works, but then you have to change how other development teams work. If you manage that, then you’ll have to start changing the way the database guys work, and so too the operations team. This makes implementing change interdepartmental and, of course, at this point someone will usually suggest forming a committee that will evaluate the current situation, come up with a list of alternatives and, in the fullness of time, at the appropriate juncture, recommend that another committee is required. My experience is that when it comes to software delivery, you as a developer are expected to follow the operations team’s procedures and guidelines however antiquated they may be, and often because “that’s the way it’s always been done”.
So, the question is how do you effect corporate change? This is not a trivial question. Jez Humble states that “people are key”, which is of course true. He recommends that a good start would be for developers to talk to the ops guys and build up a relationship with them by, for example, inviting them to your release parties.
However, as they say, “it’s not what you know, but who you know” 3, so it’s a good idea to talk to the right people or person. It’s no good getting the operations team’s junior tea-boy on side, you’ve got to get the manager, director and general head honcho on side. If, as a junior developer, you think that suggesting a new approach to the head honcho is a daunting task then, as a wise man once said 4, imagine them on the toilet before you start your conversation.
When the head honcho changes things then in management speak this is known as ‘top down’ change management.
I did wonder why @jezhumble made implementing continuous delivery sound so simple and thought that maybe it’s because he works for ThoughtWorks, who according to their website are either “building a new custom system for a client, fixing a project that's tied up in knots, or helping to make a software development organization more productive” 5. ThoughtWorks are consultants and there’s nothing wrong with ThoughtWorks or consultants; however, companies pay a lot of money for consultants and because a company’s management have paid out a lot of money they often more readily listen to, and implement, the consultant’s advice even if it’s wrong or reiterates what the company’s own faithful, permanent employees have been saying all along. I’ve heard this called the Consultancy Syndrome.
That there is a need for continuous delivery is undeniable. For some this may be purely on cost grounds, the implication being that continuous delivery is cheap, whilst monolithic manual releases are very expensive. For developers it’s generally a matter of work/life balance: being able to see your partner and family, take your children swimming, read them bedtime stories, or meet friends for drinks and the cinema, all without suffering from developer burnout. As the old story goes, no one lying on their deathbed ever thinks “I wish I’d spent more time at work”.
1. Not their real names.
2. See Bob Dylan, 1964, on the Columbia record label.
5. See About ThoughtWorks.
Published at DZone with permission of Roger Hughes, DZone MVB. See the original article here.