
Reducing Test Times by Only Running Impacted Tests - Python Edition



This article follows on from my Omnipresent, Infallible, Omnipotent, and Instantaneous Build Technologies article ten days ago, and last week's Reducing Test Times by Only Running Impacted Tests, which covered Java.

The last few sections of last week’s blog entry apply to this one too, but I’m not going to copy/paste them here.

Proof of concept

I forked a GitHub project that Kenneth Reitz had made to show the repository structure for a Python + Nose project.

If you want to run it, clone my fork – there are a few pip-installable dependencies.

Running testimpact.sh there rebuilds a map of which tests cover which production classes (sources). I've checked in the previous results from running this script; that's what a team would do. It runs Nose against one test at a time to calculate coverage, then stores that per test. As before, Kenneth's example project is not launching Selenium (or an equivalent slow test), so you're going to have to imagine very slow tests to warrant this subsetting.
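The per-test coverage step could be sketched in Python rather than shell. This is a hypothetical helper, not the repo's actual script: it assumes the standard plain-text table that coverage.py (as driven by Nose's --with-coverage plugin) prints, and keeps any project source the test actually touched.

```python
def covered_sources(coverage_report, package_prefixes=("sample", "sample2")):
    """Parse a coverage.py plain-text report (as printed when running Nose
    with --with-coverage against a single test) and return the project
    sources that test exercised (coverage above 0%)."""
    sources = []
    for line in coverage_report.splitlines():
        parts = line.split()
        # Expected row shape: Name  Stmts  Miss  Cover  [Missing...]
        if len(parts) >= 4 and parts[0].startswith(package_prefixes):
            cover = parts[3].rstrip("%")
            if cover.isdigit() and int(cover) > 0:
                sources.append(parts[0])
    return sources
```

Run per test, each call yields the list that would land in one of the per-test text files under meta/tests.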

The testimpact.sh script uses Python plus ack, sed, and awk (you'll have to install those if you want to run it – NuGet and Chocolatey will give you them for Windows). The coverage data comes out of Nose via stderr, and that is post-processed into the text files in meta/tests and meta/tests2 (showing the sources covered by each test). Yes, they are checked in. Those files end in .py but are actually plain text rather than Python (sorry):

After running each test individually and picking out the pertinent coverage data, I needed to collect all of that and make a map. That is the last line of testimpact.sh, and it's a chain of ack, sed, awk, and sort into another file: meta/impact-map.txt. Inside that file is a list of production sources and the tests that would exercise them. Actually, it's a map of sources vs. tests:
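The inversion that the final ack/sed/awk/sort chain performs – turning per-test source lists into a source-to-tests map – could equally be a few lines of Python. A sketch, with made-up test and source names:

```python
def build_impact_map(sources_per_test):
    """Invert {test: [sources it covers]} into {source: [tests that cover it]},
    which is the shape of meta/impact-map.txt."""
    impact = {}
    for test, sources in sources_per_test.items():
        for source in sources:
            impact.setdefault(source, set()).add(test)
    # Sorted lists keep the regenerated map stable and diff-friendly
    # when it is checked in alongside the code.
    return {source: sorted(tests) for source, tests in impact.items()}
```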

Experimenting with what I’ve done

Remember, I've checked everything in, so you only need Python to play with it (ack, sed, and awk are only used to regenerate the impact map).

Change sample/core.py. Don't commit that pending change; just leave it showing up as modified in git status.

Run python tests.py and watch it run one or two tests in the same invocation:

Now undo that change, and change sample2/core.py, and run python tests.py again.

Different tests ran, right? Like so:

Undo that change, and run python tests.py once more.

No tests ran, right? The same would be true for changed sources/classes that no tests exercise (cover under test invocation) at all.
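The selection logic a tests.py like this needs can be sketched as two small functions – one to pull modified files out of `git status --porcelain` output, one to look them up in the impact map. The function names and the map's keys here are assumptions for illustration, not the repo's actual code:

```python
def modified_sources(porcelain_output):
    """Pull the paths of modified/added files out of `git status --porcelain`
    output; untracked ('??') and clean files are ignored."""
    files = []
    for line in porcelain_output.splitlines():
        status, path = line[:2], line[3:]
        if status.strip() in {"M", "A", "AM", "MM"}:
            files.append(path)
    return files


def impacted_tests(changed_files, impact_map):
    """Union of all tests the impact map says exercise the changed sources.
    Sources absent from the map contribute no tests, matching the
    'no tests ran' behavior above."""
    selected = set()
    for path in changed_files:
        selected.update(impact_map.get(path, []))
    return sorted(selected)
```

With nothing modified, `impacted_tests` returns an empty list and the runner has nothing to do, which is exactly the third experiment above.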

Of course, these few tests are really quick, but each run could just as well have been a subset of hundreds or thousands of tests with an elapsed time of many hours. This is for shaving 9.5 hours off your 10-hour suite.



Published at DZone with permission of Paul Hammant, DZone MVB. See the original article here.

