Continuous Delivery, Warts and All
the “warts and all” title was meant as a caveat that they don’t claim to have got everything perfectly right, and that there were problems along the way on this project. the client for this particular project was “springer” (a publishing company) and the job was to redesign the website (basically). one of the problems they were aiming to fix was the “time to release”, which was in the region of months rather than hours, and so they decided to go all-in on continuous delivery from the outset. another thing worth mentioning was that this was a greenfield project, which has its advantages and disadvantages, as outlined here in my incredibly pointless table:
i did that table in powerpoint, thus highlighting my potential as a senior manager.
why continuous delivery?
the fact that they chose to follow the continuous delivery path right from the outset was an important decision. in my experience, continuous delivery isn’t something you can easily retrofit into an existing system – it’s certainly nowhere near as easy as when you set out to follow it right from the start. tom put it like this:
you can’t sell continuous delivery as a bolt-on
which, as usual, is a much better way of putting it than i just did.
one of the reasons why they went for the continuous delivery approach with this client was to sell more of jez humble’s continuous delivery book (available on amazon at a very reasonable price). just kidding! they would never do that. they actually chose continuous delivery because of the good practices (i’m trying to stop using the term “best practices” as i’ve learned that it’s evil) it enforces on a project. continuous delivery allows you to have fast, frequent releases, which forces small changes rather than big ones, and also forces you to automate pretty much everything. they even automated the release notes, which is something we’ve also done on a project i’m working on currently! our release notes are populated from a template, the content is pulled in from jira, and they’re packaged up in every single build. neat, no? well, tom seemed pretty impressed with the idea, and i’m quite chuffed that we’re doing the same stuff.
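as an aside, here’s roughly how that kind of release-notes automation can look. this is a minimal sketch assuming the jira issues for the build have already been fetched from jira’s rest api – the template and issue fields here are my own invention, not the actual setup from either project:

```python
# minimal sketch: render release notes from a template, with the change
# list built from jira issues. fetching the issues from jira's rest api
# is deliberately left out; the "key"/"summary" fields are hypothetical.
from string import Template

NOTES_TEMPLATE = Template(
    "release notes for build $version\n"
    "--------------------------------\n"
    "$changes\n"
)

def render_release_notes(version, issues):
    # one bullet per jira issue included in this build
    changes = "\n".join(
        "- [%s] %s" % (issue["key"], issue["summary"]) for issue in issues
    )
    return NOTES_TEMPLATE.substitute(version=version, changes=changes)
```

in a real pipeline this would run as a build step, which is what gets you the “packaged up in every single build” part for free.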
another reason they opted for a continuous delivery approach was to overcome the it bottleneck problem.
it would seem that there was an it black hole which was unable to deliver as quickly as the business demanded. i usually hear people say “agile” is the solution to the it bottleneck, rather than continuous delivery, but tom made a point of saying that they were agile as well. i think continuous delivery helps teams to focus on the delivery aspect of agile, and gives us a way of surfacing delivery issues much earlier, where they can be addressed more easily, rather than at the last minute. as i mentioned earlier, time-to-market was an important driving factor in choosing continuous delivery. i would also add that, in my experience, having a predictable time to market is of great importance to the business. you tend to find that project sponsors don’t mind waiting a couple of weeks, maybe longer, for a change to go live, as long as that estimate is realistic.
i won’t go into too much technical detail about the project they were working on, so i’ll summarise it like this:
- local virtualisation was done using vagrant and virtualbox, so devs could easily spin up new environments locally.
- they used git, and it wasn’t easy – steep learning curve, and using submodules didn’t help either.
- they had on-site git go-to people, which helped with the git learning curve.
- devs could deploy to any environment – this was useful for building up environments, but is scary as hell.
- they kept branches to a minimum – only for bugfixes or when doing feature-toggle releasing.
- they do check-in stats analysis to “incentivize” people – small and frequent commits were rewarded.
- they used go, the thoughtworks ci server (they have my sympathy).
- they deploy using capistrano.
- they deploy to a versioned directory and use symlinks, which helps with rollbacks (i’d say this is a pretty standard practice).
- they use kickstart and chef to build workstations, and chef-solo for other environments.
- the servers are provisioned with vmware, the base os installed with cobbler/kickstart, and the configuration applied by chef.
- even the qa environment was load balanced!
- this is a long list of bullet points.
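the versioned-directory-plus-symlink trick is worth spelling out, since it’s what makes rollbacks cheap. here’s a rough sketch of the idea in python rather than capistrano (the paths and function name are mine, purely for illustration):

```python
# sketch of versioned deploys with a "current" symlink - the same idea
# capistrano uses. each release lands in its own directory, and a symlink
# switch (not a file copy) makes it live. rolling back is just pointing
# the symlink at the previous release directory again.
import os

def activate_release(releases_dir, version, current_link):
    target = os.path.join(releases_dir, version)
    if not os.path.isdir(target):
        raise ValueError("unknown release: %s" % version)
    # build the new symlink next to the old one, then rename it into
    # place - rename is atomic on posix, so "current" never half-exists
    tmp_link = current_link + ".tmp"
    if os.path.lexists(tmp_link):
        os.remove(tmp_link)
    os.symlink(target, tmp_link)
    os.replace(tmp_link, current_link)
```

the web server only ever serves from `current`, so a release (or a rollback) is one atomic symlink swap rather than a slow copy.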
i was pretty interested in the idea of load balancing the test environment because it reminded me of a problem i had at a company i was working for a few years ago. we didn’t have a load balanced test environment, but we did have a load balanced live environment, and one night we did a scheduled production release which just wouldn’t work. it was about 4am and things weren’t looking good. luckily for me, a particularly bright developer by the name of andy butterworth was on hand, and he got to the bottom of the problem and dug us out of a hole. the problem was load-balance related, of course: our new code hadn’t been written for a load balanced cluster, but we never picked that up until it was too late. i’m not sure what past experiences drove tom and marc to implement a load balanced test environment, but it’s a good job they did, as tom testified that it has saved their bacon a few times.
load balancing qa has saved our bacon a few times!
one of the other things that i was interested in was the idea of using vagrant and virtualbox for local vm stuff. i was surprised at this because they are also using vmware. i wondered why, if they’re already using vmware, they don’t just use vmware player for their local vms?
i was also interested in the way they’d configured go, which, at a glance, looked totally different to how we’ve got ours set up here where i’m currently working. i’m hoping tom will shed some light on this in due course!
i loved the idea of using check-in stats to incentivize the team! i’m really keen on the whole gamification thing at the moment, and i’m trying to think of some cool gamified way of incentivizing teams where i work. the check-in stats approach that tom talked about looked cool: they analyse the number of check-ins per person, look at the devs’ comments too, and produce a scoreboard.
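for what it’s worth, the commit-counting half of a scoreboard like that is easy to knock together. this is a sketch that assumes you feed it author names straight out of `git log --format=%an` (the comment analysis is the cleverer part and isn’t shown here):

```python
# sketch: turn a list of commit authors (e.g. the lines produced by
# `git log --format=%an`, one author per commit) into a scoreboard.
# properly rewarding *small, frequent* commits would also need diff
# sizes per commit; this only counts commits per person.
from collections import Counter

def commit_scoreboard(author_lines):
    # most_common() gives (author, commit_count) pairs, highest first
    return Counter(author_lines).most_common()
```

you’d typically wire this up to run in ci and publish the result on a team dashboard, which is where the gamification kicks in.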
more than tools
i’ve been to a few talks and conferences recently and one of the underlying messages i’ve got from most of them is that people and relationships are more important than tools, and by that i mean that it’s more important to get relationships right than it is to pick the right tools. bringing in a new amazing tool isn’t going to fix the big problems if the big problems are down to relationships.
i can think of a few examples: tools like vmware and chef are great at helping to speed up the provisioning and configuring of environments, but if you don’t actually work on the relationships between the development and operations teams, then the tools won’t have any effect – the operations team might not buy into them, or maybe they’ll use them but not in the way the developers want them to. another example: bringing in a new build tool because your old build system was unreliable. this isn’t going to fix your problem if your old system was unreliable because developers weren’t communicating clearly with the build engineers.
so relationships are key. but how do we make sure we’ve got good relationships? well, i think if anyone knew the answer to that one they’d bottle it and sell it for millions. the truth is that it’s different for every situation, but there are things which can make sure you’re all on the same page, which is a start:
have shared goals! i’m often banging on about this. everyone has to push in the same direction. for me, in reality, this often means trying to educate people that we don’t make any money from having reliable builds on developers’ laptops if the builds are unreliable in the ci/build system. we don’t make money out of finishing all our story points on time. we don’t make money out of writing new features. we make money by delivering quality software to customers!
so i think that is exactly what we should all be focused on.
be agile! i know this might seem a bit like it’s the wrong way around, but i actually think that being agile helps to build relationships. it’s a practice and a mindset as much as a process, and so if people share that mindset they’re naturally going to work better together. in my experience, operations teams have been quite slow to adopt agile in comparison to other teams. it’s time for this to change. tom said that on the project he’s working on, the ops team are agile, and he identified that as one of the success areas.
pair up. there’s nothing quite like sitting next to someone for a couple of days to help you see things from their perspective! on tom & marc’s project at springer they paired the ops guys with devs. i would recommend going further and pairing devs with support engineers, qa (obvs!) and build/release management on a regular basis. pairing them with users/customers would be even better!
skill up. tom & marc talked about cross-pollination of skills, and by this they mean different people (possibly from different teams) learning parts of each other’s trade and skills. increasing your skillset helps you understand other people’s issues and problems better, as well as making you more valuable, of course!
i became a better developer by understanding how things ran in production – marc hofer
in summary – tools are important, people and relationships are importanter (new word), you should automate everything, take little steps instead of big ones, stick to the principles of continuous delivery, and the new snow white movie is bollocks.
Published at DZone with permission of James Betteley, DZone MVB. See the original article here.