I am looking forward to sharing my thoughts on Reinventing Performance Testing at the imPACt performance and capacity conference by CMG, to be held on November 7-10, 2016 in La Jolla, CA. I decided to publish a few parts here to see if anything triggers a discussion.
It will be published as separate posts:
Continuous Integration (this post)
In more and more cases, performance testing should not be just an independent step of the software development lifecycle, where you get the system shortly before release. In Agile development and DevOps environments, it should be interwoven with the whole development process. There are no easy answers here that fit all situations. While Agile development and DevOps have become mainstream nowadays, their integration with performance testing is only taking its first steps.
Integration support becomes increasingly important as we start to talk about continuous integration (CI) and Agile methodologies. Until recently, when vendors claimed that their load testing tools fit Agile processes better, it usually meant only that the tool was a little easier to handle (and, unfortunately, often just because it offered less functionality).
What makes Agile projects really different is the need to run a large number of tests repeatedly, which creates the need for tools that support performance testing automation. The situation has started to change recently as Agile support became a main theme for load testing tools. Several tools have announced integration with continuous integration servers (such as Jenkins or Hudson). While the initial integration may be minimal, it is definitely an important step toward real automation support.
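To make that CI-server integration concrete, here is a minimal sketch of the kind of pass/fail gate a pipeline step might run after a short load test. The metric names and threshold values are hypothetical illustrations; a real pipeline would parse them from the load testing tool's output rather than hard-code them.

```python
# Minimal sketch of a CI performance gate: compare summary metrics from a
# short load test against pass/fail thresholds and report any violations.
# Metric names and threshold values here are hypothetical.

def check_thresholds(metrics, thresholds):
    """Return a list of violations; an empty list means the build may pass."""
    violations = []
    for name, limit in thresholds.items():
        value = metrics.get(name)
        if value is None:
            violations.append(f"{name}: metric missing from results")
        elif value > limit:
            violations.append(f"{name}: {value} exceeds limit {limit}")
    return violations

if __name__ == "__main__":
    # In a real pipeline these numbers would come from the tool's results file.
    metrics = {"p90_response_ms": 850, "error_rate_pct": 0.2}
    thresholds = {"p90_response_ms": 1000, "error_rate_pct": 1.0}
    problems = check_thresholds(metrics, thresholds)
    if problems:
        # A non-zero exit code is what makes the CI server fail the build.
        raise SystemExit("Performance gate failed: " + "; ".join(problems))
    print("Performance gate passed")
```

The key design point is the exit code: CI servers generally treat a non-zero exit status as a failed step, so even this crude script is enough to make a build "red" on a performance problem.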
It doesn’t look like we will have standard solutions here, as Agile and DevOps approaches differ significantly, and proper integration of performance testing can’t be done without considering such factors as the development and deployment processes, the system, the workload, and the ability to automate tests and automatically analyze their results.
The continuum here runs from old-style traditional load testing (which basically means no real integration: it is a step in the project schedule, started as soon as the system is ready, but otherwise executed separately as a sub-project) to full integration into CI, when tests are run and analyzed automatically for every change in the system.
Automation here means not only using tools (tools are used in most performance testing anyway), but automating the whole process, including setting up the environment, running tests, and reporting and analyzing results. However, full performance testing automation doesn’t look like a feasible option in most cases. Automation in performance testing helps only with finding regressions and checking against requirements, and it should fit the CI process (being reasonable in length and in the amount of resources required). So large-scale, large-scope, and long tests would probably not fit, nor would all kinds of exploratory tests (as explained in the Agile part of this series). What would probably be needed is a combination of shorter automated tests inside CI with periodic larger and longer tests outside of, or maybe in parallel to, the critical CI path, as well as exploratory tests.
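The regression-finding part of such automation can be sketched as a simple comparison of the current run against a stored baseline. The metric names and the 15% tolerance below are hypothetical illustrations; a real setup would tune the tolerance per system and workload to keep noise from failing builds.

```python
# Sketch of automated regression detection for CI: compare the current
# run's metrics to a stored baseline and flag anything that degraded by
# more than a tolerance. Names and the 15% tolerance are hypothetical.

def find_regressions(baseline, current, tolerance=0.15):
    """Return metrics whose current value exceeds baseline * (1 + tolerance)."""
    regressions = {}
    for name, base_value in baseline.items():
        cur_value = current.get(name)
        if cur_value is not None and cur_value > base_value * (1 + tolerance):
            regressions[name] = (base_value, cur_value)
    return regressions

if __name__ == "__main__":
    baseline = {"p90_response_ms": 800, "error_rate_pct": 0.5}
    current = {"p90_response_ms": 1100, "error_rate_pct": 0.4}
    # 1100 is more than 15% above 800, so only p90_response_ms is flagged.
    print(find_regressions(baseline, current))
```

Because the comparison is relative to a baseline rather than to absolute requirements, this kind of check catches gradual degradation between builds even while every build still formally "passes".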
Other capabilities of load testing tools are also important for proper integration, such as cloud integration, support for new technologies, and integrated monitoring and analysis. Cloud integration (including public clouds, private clouds, and cloud services) simplifies deployment automation. Support for new technologies minimizes the amount of manual work needed. Integrated monitoring and analysis allow us to collect information and evaluate the results of performance tests (which may be quite sophisticated).
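As a rough illustration of what integrated monitoring adds to analysis, this sketch pairs per-interval response times with CPU utilization sampled over the same intervals, to hint at whether slowness coincides with resource saturation. The thresholds and interval data are hypothetical.

```python
# Sketch of joint analysis of load test results and monitoring data:
# for each measurement interval, pair the response time with the CPU
# utilization sampled at the same time. Slowness with a busy CPU hints
# at a resource bottleneck; slowness with an idle CPU points elsewhere
# (locks, I/O waits, downstream systems). Thresholds are hypothetical.

def correlate(response_ms, cpu_pct, slow_ms=1000, busy_pct=85):
    """Label each interval by whether it was slow and whether CPU was busy."""
    labels = []
    for rt, cpu in zip(response_ms, cpu_pct):
        if rt >= slow_ms and cpu >= busy_pct:
            labels.append("slow, CPU-bound")
        elif rt >= slow_ms:
            labels.append("slow, CPU idle: look elsewhere")
        else:
            labels.append("ok")
    return labels

if __name__ == "__main__":
    response_ms = [500, 1200, 1500]
    cpu_pct = [40, 90, 30]
    for i, label in enumerate(correlate(response_ms, cpu_pct)):
        print(f"interval {i}: {label}")
```

This is of course a toy version of what the paragraph calls sophisticated analysis, but it shows why monitoring data needs to be collected alongside the load test rather than examined separately afterward.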