Implementation of Duke's Choice Award Winning Clover

By Nick Pellow · Jun. 09, 2009


Don't you hate committing code and then waiting hours to find out you broke the build? Even worse is when other people commit code at around the same time as you, and you get dragged into the 'who broke the build' witch-hunt by pure circumstance.

If your build times are blowing out because of long test runs (longer than ten minutes), you are most likely suffering from CI (continuous integration) latency, and the problems above are real problems for you and your team.

Atlassian's code coverage analysis product and recent Duke's Choice Award winner, Clover, can help alleviate these problems by optimizing both unit and acceptance tests, drastically reducing the feedback time for each commit. Below is a case study of how Clover's test optimization is run during the development of another Atlassian product: Confluence.

Serious CI


With about 55 different CI plans set up in Bamboo, a continuous integration server also made by Atlassian, the Confluence team is very serious about CI. So serious, in fact, that if each build were to run end-to-end, a single commit would take over two days to be tested by every one of those plans. Fortunately, Bamboo provides a pretty impressive CI cloud of 20 agents that run build plans in parallel. This makes it possible to get feedback in a couple of hours, as opposed to days.

Often, those few hours can mean the difference between one changeset being included in the build and many. The main build can run for up to 40 minutes before a failure is detected. In that time, several other commits may have been made, making it more difficult to track down the root cause of the failure. Forty minutes is also long enough for a developer to be tempted by the ultimate SCM sin: commit-and-run.

For the past month, the Clover team has run a shadow plan of the Confluence trunk build that only runs the acceptance tests covering code modified since the previous build. This is made possible by Clover's per-test coverage data, which records which tests hit which lines of code during a test run.

Results


The optimized build (charted on the left) is configured to do a complete test run every 10 builds to refresh the per-test coverage data.
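
In other words, the plan keeps its coverage snapshot fresh with a periodic full run. As a rough sketch of that policy in Java (the build-number check below is an assumption for illustration, not actual Bamboo or Clover configuration):

    // Hypothetical scheduling rule: every 10th build runs the complete suite so
    // the per-test coverage data used for optimization does not go stale.
    final class RefreshPolicy {
        static boolean runFullSuite(int buildNumber) {
            return buildNumber % 10 == 0;
        }
    }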

The optimized build provides faster feedback on average than the main build, and because it completes so much sooner, more builds get run in the same amount of time.

Faster Feedback


A specific case where the Clover optimized build failed before the full Confluence build can be seen in CCD-CONFDF-164: it took 7 minutes to detect the acceptance test failure, as opposed to 38 minutes in CONFFUNC-MAIN-5247. This was one case where the changeset that triggered the optimized build was identical to the changeset that triggered the main build. Quite often, the optimized build was started on another agent while the main build was still churning through each and every JWebUnit acceptance test.

Clover optimized build failing in 7 minutes

The full build took 38 minutes to fail

On average, the Clover optimized build takes just 7 minutes. It currently runs all unit and integration tests and optimizes the long-running acceptance tests. The main Confluence build takes 40 minutes on average to complete.

The Clover optimized build is a 'gateway' build for the Confluence CI pipeline: the canary down the CI mineshaft, if you will. If the optimized build smells danger, it fails, preventing other builds from being triggered and hogging valuable CI cycles.

Greater CI Throughput == Clearer CI Results


The faster a build can run, the greater its throughput will be. What does this mean for a build that typically takes 40 minutes to run? It means you get a much clearer picture of exactly which changeset caused a build failure.

These next two screenshots show the Bamboo build history page for the full build plan and the optimized build plan:

Full build results for the past two hours

You can see that build 5285 failed fairly spectacularly. However, who do we blame for the failure? Three developers made changes that triggered the build, which disturbs all three devs as each tries to clear their name.

The Clover optimized build paints a clearer picture of the situation:

Optimized build results for the past two hours

Over the past seven days, 74 full Confluence builds were triggered. Over the same period there were 116 optimized builds, roughly a 57% increase in build throughput.

Where's the Catch?


Of course, test optimization of acceptance tests is not a silver bullet.

A full build that runs all tests should still play an important role in the CI stack, because there are still cases where an optimized build may pass but a full build will fail. Since Clover tracks per-test coverage for Java source files only, it cannot detect which tests to run when a non-Java source file is modified. This means that if the only modification in a changeset is to a non-Java file, such as web.xml, a Velocity macro, pom.xml, build.xml, or a .jsp, then possibly no tests will be run, causing the build to pass when it should have failed!

The aim of an optimized build plan is to fail faster than a full build and to run more often than a full build does. An optimized build should be about quantity, whereas the main build is there for quality.
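
One simple way to guard against the non-Java blind spot described above, sketched in Java under the assumption that the build can see the list of changed file paths, is to fall back to the full suite whenever a changeset touches anything other than Java sources. The OptimizationGuard class below is hypothetical and not part of Clover or Bamboo:

    import java.util.List;

    // Hypothetical guard: only trust test optimization when every changed file
    // is a Java source; otherwise schedule the full, unoptimized build.
    final class OptimizationGuard {
        static boolean safeToOptimize(List<String> changedPaths) {
            for (String path : changedPaths) {
                if (!path.endsWith(".java")) {
                    // web.xml, pom.xml, .vm, .jsp, etc. are invisible to per-test coverage
                    return false;
                }
            }
            return true;
        }
    }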

What Is Per-Test Coverage?


Per-test coverage is a mapping from each line of code that was covered by one or more tests back to the tests that covered it. As an example, here is a screenshot of the Clover report for a test run of Confluence's acceptance tests, showing all the tests that covered the "AllQuery" class:

Per-test coverage for the "AllQuery" class

This shows us that "AllQuery" was covered by 16 test methods across 6 test cases. If the class containing this code is modified in any way, Clover will ensure that only those 6 test cases get run. This means the Confluence acceptance tests can complete in just a few minutes, as opposed to 40.

Per-test coverage data is also excellent for answering the question: "Is there already a test for this class, and if so, which one?"
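
To make the idea concrete, here is a minimal Java sketch of a class-to-tests mapping and how it could drive test selection. The PerTestCoverage class and its method names are illustrative only and are not Clover's actual API:

    import java.util.*;

    // Illustrative sketch of per-test coverage: a map from each covered class
    // back to the tests that exercised it during the last full, instrumented run.
    class PerTestCoverage {
        private final Map<String, Set<String>> coveringTests = new HashMap<>();

        // Record that a test hit a class during the test run.
        void record(String className, String testName) {
            coveringTests.computeIfAbsent(className, k -> new HashSet<>()).add(testName);
        }

        // Given the classes modified since the previous build, return only the
        // tests that covered any of them; everything else can be skipped.
        Set<String> selectTests(Collection<String> modifiedClasses) {
            Set<String> selected = new TreeSet<>();
            for (String cls : modifiedClasses) {
                selected.addAll(coveringTests.getOrDefault(cls, Collections.emptySet()));
            }
            return selected;
        }
    }

In the "AllQuery" example above, selectTests(Collections.singletonList("AllQuery")) would return just the six covering test cases, and the rest of the acceptance suite would not be scheduled.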

The Benefits of Test Optimization


The Confluence team benefits from having a Clover optimized build in the following ways:


* On average, they are alerted earlier to build breakages.

* When a build does break, fewer committers are involved with the breakage, making it easier to discern who broke the build.

* Fewer CI resources on our build server are consumed by Confluence's long-running main build, thereby reducing the latency of other builds.


