Using Jenkins for Disparate Feedback on GitHub
Tracking changes and fixing problems in code is easier with an SCM tool like GitHub, but sometimes that alone doesn't provide enough information or feedback. Read on to find out how a CI tool like Jenkins can improve this process.
Picking a pear from a basket is straightforward when you can hold it in your hand, feel its weight, perhaps give a gentle squeeze, observe its color, and look more closely at any bruises. If the only information we had was a photograph from one angle, we'd have to do some educated guessing.
As developers, we don't get a photograph; we get a green checkmark or a red X, and we use that to decide whether we need to switch gears and go back to a pull request we submitted recently. At edX, we take advantage of some Jenkins features that give us more granularity on GitHub pull requests and make that decision less of a guessing game.
Multiple Contexts Reporting Back When They're Available
Rather than a single pass/fail status, each suite (linting, unit tests, acceptance tests) reports its own status context on the pull request. For example, if I made adjustments to my branch and know more requirements are coming, I may not be as worried about passing the linter; however, if my unit tests have failed, I likely have a problem I need to address regardless of when the new requirements arrive. Timing is important as well: splitting out the contexts means we can run tests in parallel and report results faster.
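To make the idea concrete, here is a minimal sketch of how separate contexts appear on a commit via the GitHub commit status API, which accepts a `context` field so each suite shows up as its own row on the PR. The context names, URLs, and `build_status` helper are illustrative assumptions, not edX's actual setup.

```python
# Sketch: one GitHub commit status per test context, so each suite
# (lint, unit, acceptance) reports to the PR independently.
# Context names and URLs below are illustrative, not edX's real config.

def build_status(context, state, build_url):
    """Build the JSON payload for POST /repos/{owner}/{repo}/statuses/{sha}."""
    assert state in ("pending", "success", "failure", "error")
    return {
        "state": state,
        "context": context,        # e.g. "jenkins/unit" shows as its own row on the PR
        "target_url": build_url,   # links back to the Jenkins build for details
        "description": f"{context}: {state}",
    }

# Each parallel Jenkins job can post its own context as soon as it finishes,
# instead of waiting for one monolithic build to report a single result:
statuses = [
    build_status("jenkins/lint", "success", "https://jenkins.example.com/job/lint/42"),
    build_status("jenkins/unit", "failure", "https://jenkins.example.com/job/unit/42"),
    build_status("jenkins/bokchoy", "pending", "https://jenkins.example.com/job/bokchoy/42"),
]
```

Because each payload carries a distinct `context`, GitHub renders them as separate checks, so a failing unit suite is visible even while the acceptance suite is still pending.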
Developers Can Re-Run Specific Contexts
Occasionally, the feedback mechanism fails. Often the culprit is a flaky condition in a test or its setup. (Solving flakiness is a different discussion I'm sidestepping; for the purposes of this post, accept that the system sometimes fails.) Engineers are armed with the power of re-running specific contexts, also available through the PR plugin. A developer can say "Jenkins, run bokchoy" to re-run the acceptance tests, for example, or re-run everything with "Jenkins run all." These phrases are set through the GitHub Pull Request Builder configuration.
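The trigger-phrase mechanism can be sketched as simple pattern matching over PR comments. The phrases below mirror the ones in the post; the job names and the shape of the mapping are assumptions for illustration, not the plugin's internals.

```python
import re

# Sketch: map trigger phrases found in PR comments to the contexts
# (jobs) to re-run, in the style of the GitHub Pull Request Builder's
# configurable trigger phrases. Job names here are illustrative.
TRIGGERS = [
    (re.compile(r"jenkins,?\s+run\s+bokchoy", re.IGNORECASE), ["bokchoy"]),
    (re.compile(r"jenkins,?\s+run\s+all", re.IGNORECASE), ["lint", "unit", "bokchoy"]),
]

def jobs_to_rerun(comment):
    """Return the list of contexts a PR comment asks Jenkins to re-run."""
    for pattern, jobs in TRIGGERS:
        if pattern.search(comment):
            return jobs
    return []  # not a trigger phrase; do nothing
```

The key design point is that a targeted phrase re-runs only the failed context, so a flaky acceptance run doesn't force the whole pipeline to repeat.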
More Granular Data Is Easier to Find for Our Tools Team
Splitting the contexts has also given us important data points for our Tools team, helping to highlight things like flaky tests, time to feedback, and other metrics that help the org prioritize what's important. We feed this data into a log aggregator (in our case, Splunk) to produce valuable reports.
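One metric that per-context data makes easy is a flakiness signal: a context that flips from failure to success on a re-run of the same commit is probably flaky. Here is a minimal sketch of that calculation; the record shape is an assumption for illustration, standing in for the kind of events a log aggregator would index.

```python
from collections import Counter

def flaky_contexts(records):
    """Count failure -> success flips per context.

    records: iterable of (sha, context, state) tuples in build order.
    A re-run that turns a failure into a success on the same commit
    is treated as a flakiness signal; a repeated failure is not.
    """
    last_state = {}   # (sha, context) -> most recent state
    flips = Counter()
    for sha, context, state in records:
        key = (sha, context)
        if last_state.get(key) == "failure" and state == "success":
            flips[context] += 1
        last_state[key] = state
    return flips

runs = [
    ("abc123", "jenkins/bokchoy", "failure"),
    ("abc123", "jenkins/bokchoy", "success"),  # re-run passed: likely flaky
    ("abc123", "jenkins/unit", "failure"),
    ("abc123", "jenkins/unit", "failure"),     # consistent failure: a real bug
]
flips = flaky_contexts(runs)
```

Aggregated over many pull requests, this kind of count is exactly the sort of report a Tools team can pull out of a log aggregator to decide which suites need stabilizing first.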
I could go on! The short answer is that we have an intuitive way of divvying up our tests, one that not only optimizes the overall time it takes to get build results but also makes the experience friendlier for developers.
Published at DZone with permission of Hannah Inman, DZone MVB. See the original article here.