CI, Breaking Builds, Bisecting, and Reverting
Looking at when you should revert your commits if or when the build breaks, based on how many people commit to the same branch, how often they do so, how often it breaks, how long a build takes, and your tooling.
Revert on Red (Build Breakage)?
A few days ago, Henry Lawson asked when you should revert a commit if the build breaks: Revert on Red. (There was a Reddit discussion, too.) This is a real problem for many enterprises trying to push ahead with DevOps things, and worth bringing up for discussion. In the Reddit discussion, Henry links to a relevant Martin Fowler article, and ex-colleague Andrew Care dives in too, with some solid insights.
The answer is really a function of how many people commit to the same branch/trunk, how often they commit each day, how often the build breaks, how long the build takes, and what tooling you have to accelerate the dissemination of build-breakage news.
A Tiny Team
At the lower end, a team of five people, each committing once or twice a day, could easily take a "let's investigate" attitude to build breakage. That team could make a follow-up commit to fix the break. They might not even have a continuous integration daemon (like Jenkins), even if they are doing trunk-based development. A build breakage, for that team, might be something determined on a developer workstation by a human, within earshot of other developers.
Many Thousands Divided Into Many Teams
At the higher end, you're always going to auto-revert the first commit that breaks a build. The higher end could be like Google's 20,000 developers in one big trunk. Proving which commit to revert is non-trivial in that situation. I say that because the trunk might be moving forward at one commit a second, and builds take between one and fifteen minutes to complete (or fail). In this case, a sequence of 300 builds might all go red after you discover the break (red build), out of order, because they were all running in parallel on a scaled, cloudy infrastructure. If they were running in series, you'd never keep up with one commit a second. It's only the earliest one that you want to revert, so if there are any earlier commits still running, you should delay your auto-rollback determination until you're absolutely sure which one it is. Delay, in this parallelized design, means minutes. After the applicable robot has made that determination and done the revert, the other 299 builds need to be kicked off again to make sure they did not separately break the build too. That is probably 599 commits by now.

For Google, that situation is exceptionally rare: they separately verify the pending commit before it arrives in a place from which the other 19,999 developers could pull/sync it. It's only the unfortunate timing of two commits that could possibly cause a break. Key to their "the build never breaks" success is their use of Blaze (Bazel in its open-source guise), which is a directed-graph build system that ties classes to test classes much more tightly than historical build systems do.
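The "wait for every earlier build before auto-reverting" rule above can be sketched as a toy model. This is an invented illustration, not Google's actual tooling; `commit_to_revert` and the status names are made up for the sketch:

```python
# Toy model: builds run in parallel and finish out of order. We may only
# auto-revert the EARLIEST red commit, and only once every commit before
# it has a completed build - an earlier 'running' build might turn out
# to be the real culprit.
def commit_to_revert(results):
    """results: list of (commit_seq, status) in commit order,
    with status in {'green', 'red', 'running'}."""
    for seq, status in results:
        if status == 'running':
            return None   # an earlier verdict is still pending: wait
        if status == 'red':
            return seq    # earliest confirmed breakage: revert this one
    return None           # everything so far is green

# Build 3 looks broken, but build 2 is still running, so we must wait:
print(commit_to_revert([(1, 'green'), (2, 'running'), (3, 'red')]))  # None
# Once build 2 finishes green, commit 3 is confirmed as the one to revert:
print(commit_to_revert([(1, 'green'), (2, 'green'), (3, 'red')]))    # 3
```

The delay the article mentions falls out of the first branch: the robot cannot act while any earlier build is still in flight.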
Somewhere in Between?
Of course, your enterprise is somewhere between these two extremes. You are not Google, and you have basic CI which, at best, batches up many commits per build job, meaning one of, say, twenty commits broke the build, and it is going to take some sleuthing to work out which one.
Bisecting Towards the Commit That Broke the Build
It does not matter whether your source-control tool has bisect (a binary-search capability) built in or not; the process is quite easy to understand. Somewhere between the last known good commit and the most recent commit in the batch that the CI daemon first determined was broken is the causal break. That is one of the commits in build 8235, in the diagram above. You need to home in on it before you can revert it. To find out which one it is: go halfway into that sequence, test that commit, go halfway back or forward from there based on whether that test was red or green, and continue until you identify the actual breaking commit. Maybe that is a one-person job. Maybe several people each take a different commit to test, but that is much less methodical, even if it is potentially faster. Everyone in a dev team should know how to bisect, and be prepared to do it if requested.
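Git automates exactly this halving process. Here is a self-contained demonstration in a throwaway repository (all file names and commit messages are invented for the demo): ten commits, where commit 7 "breaks the build", and `git bisect run` binary-searches to it unattended:

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name Demo

# check.sh is the stand-in "build": exit 0 means green, non-zero means red.
printf 'exit 0\n' > check.sh
chmod +x check.sh
for i in 1 2 3 4 5 6 7 8 9 10; do
  [ "$i" -eq 7 ] && printf 'exit 1\n' > check.sh   # commit 7 breaks the build
  echo "change $i" >> file.txt
  git add -A
  git commit -qm "commit $i"
done

# bad = newest commit, good = oldest known-good commit (HEAD~9 is commit 1).
git bisect start HEAD HEAD~9
# Run the check at each bisection point; ~3 builds instead of up to 9.
git bisect run ./check.sh > /dev/null
bad_subject=$(git show -s --format=%s refs/bisect/bad)
git bisect reset > /dev/null
echo "first bad: $bad_subject"
```

With a real build that takes minutes, each halving step costs one build, so a batch of twenty suspect commits needs about five steps rather than twenty.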
Even after all that, you still need to determine whether rolling it back is best or not. You probably have time to try rolling it back in your dev workstation's workspace while also compiling and testing all the following commits. I mean you have time to predict the ultimate CI red or greenness before doing the actual commit/push. That is a high-confidence prediction, as you expect the CI daemon to rubber-stamp what you determined on your workstation, given you moved up to HEAD.
Git's bisect came in handy for me today. Something subtly broke a few months ago. I was able to identify one good build from back then, and knew that now was bad. It took three commands and 90 seconds for each iteration, so it wasn't costly to do the six or so steps to get back to the root cause. As it happens, the fix was not a revert: too much has changed since then that depends on that precise commit. It is at least data that feeds into work that fixes the subtle break at HEAD, and introduces a test to make sure it does not happen again.
Trunk-Based Development and Continuous Integration
Some skinny definitions:
Continuous Integration (CI)
CI is an intention for a development team. From Wikipedia:
"Continuous integration is the practice, in software engineering, of merging all developer working copies to a shared mainline several times a day. It was first named and proposed by Grady Booch in his 1991 method, although Booch did not advocate integrating several times a day. It was adopted as part of extreme programming (XP), which did advocate integrating more than once per day, perhaps as many as tens of times per day."
Continuous Integration Daemons
Continuous integration daemons (like Jenkins) are servers that purport to deliver on the promise of continuous integration. They verify that commits made to a source-control system are all good: the "build" passes. Apache's Gump was an early CI daemon (1999), and it concentrated on verifying that the multiple Apache projects that built JARs would work with sibling projects that also built JARs (many interdependencies). Not only JARs, of course, and not only Apache projects. It was an early warning system for incompatibilities. At Apache, each project's own leads set direction, and that could include backward-incompatible changes. Something like Gump is invaluable, if incredibly daunting in terms of permutations.
A couple of years later, CruiseControl was launched for enterprise developers. It was similar enough to Gump, but a lot more installable and much better known. A few years after that, Jenkins (originally Hudson) took over from CruiseControl. It had a polished user interface, but did not store its configuration in source control (sadly). There's been an explosion of choices since, SaaS offerings too.
Trunk-Based Development (TBD)
TBD focuses on source-control practices. Have one trunk; don't make branches you intend developers to commit to over time. Branch by abstraction (BBA) was always part of TBD, even if it only gained a name later. Feature toggles/flags, too. For teams not doing BBA, a technique leveraging #ifdef, #else, and #endif (and equivalents) was how you effected longer-to-achieve changes in a single branch/trunk. Inferior, of course. TBD is really about branching, merging, and your fine-grained enthusiasm for those.
TBD has had to adapt over the years. With the advent of Git, and pull requests that are effectively organized on a branch, the only remaining concerns are how long that branch lives before it can be deleted, and how many developers contributed to it. You hope (or mandate) just one pair, and for just one day. You try to remind people that lengthening the life of a "one day, I promise" branch risks it becoming a sunk-cost-fallacy trainwreck. At the start of the article, I alluded to a Martin Fowler article that Henry linked to. Here it is: "Pending Head". It is worth a read, because it chronicles the experiments that came before pull requests became a mainstream SCM feature.
Google's Big Trunk
Google scaled their trunk to 20,000 developers (as mentioned), and they share code (dependencies) at the source level, while allowing devs to not check out anything unrelated to the application or service that will go into production on some cadence. There's no doubt at all that this is the highest-throughput configuration for 20,000 developers: hats off to Google.
Sharing binary dependencies is inferior to Google's way: sharing at the source level with a composite checkout. Having used it, I dream of that tooling being made readily accessible. Among its many benefits, the ability to do atomic commits (and atomic rollbacks) stands out, especially when you consider you're never losing history through sophisticated refactorings. Compare that to moving code from one repository to another, that being two commits, and potentially two nudges for two CI daemons.
Trunk-Based Development vs. Continuous Integration
A gratuitous Venn diagram:
Continuous integration is a practice that purports to encompass trunk-based development. You know this because the modern translation of "merging all developer working copies to a shared mainline" is "a single trunk", despite the science and advances of source control being almost entirely defined after Grady Booch's 1991 OOAD book.
Continuous integration (CI) is inextricably linked with continuous integration daemons, though. Notable implementations included CruiseControl 13 years ago (from ThoughtWorks) and Hudson/Jenkins 10 years ago (still a force). Indeed, most people think of a Jenkins-like server when you say CI. Just look at the original Wikipedia page for evidence of that. Moreover, the explosion of choices in that field after Jenkins became the enterprise gorilla means that multi-branch development can be aggressively guarded by CI daemons. Deliberate multi-branch development is not CI at all (I mean the original intent), yet it is popular.
Back to TBD again. As mentioned before, it is possible to do TBD without a CI server/daemon guarding your commits at all.
Monorepo Versus One Big Trunk
Of course, you can do a trunk-based development model on your 22 smaller GitHub repositories. Binaries output from one could be the input for another. "Monorepo" has been a meme in the last couple of years, driven by details of Google's and Facebook's large repos. That is unfortunate, as what is really valuable is the single, large branch (a trunk) to which all developers directly contribute. The distinction is key, as a monorepo could have any branch design (in practice, nobody varies it). Some companies, like Pixar and Nvidia, may have incredibly large trees of files in a single repo, but these are a bunch of dissimilar files that have no merge connotation with other parts of the tree. That's definitely a monorepo usage, but not trunk-based development. One-big-trunk teams may make a branch (hopefully lightweight) to support a prior release, if they're not doing a roll-forward strategy for production bugs; branches are not totally avoided.
One Last Thing
Martin mentioned his semantic diffusion bliki article recently. There are a few names of things above that are shifting in meaning over time. They'll shift some more. I remember the late '90s, when developers had used only two SCM tools, and both were bad, say CVS and VSS. They'd hang on to the less bad one like their life depended on it, and not be open to a third. Opinions as to what is acceptable move over time; names for the best practices are going to do so too.
Published at DZone with permission of Paul Hayden. See the original article here.