In the recent Forrester Research webinar, The Forrester Wave: Functional Test Automation (FTA), Q2 2015: Test Fast To Deliver Fast!, Diego Lo Giudice addresses a question that is now critical to every organization’s long-term success: How do you deliver software at speed without compromising quality?
The most successful organizations today are those that can deliver value on business or user demands, and then constantly keep their solutions up-to-date as those demands inevitably change. Such organizations are similar in many respects to the “unicorns”: companies that enter a market, bring innovation, and rapidly gain the largest market share. Organizations with more traditional business models are then forced either to reinvent themselves or to move out of the space.
Lo Giudice rightly points out how the continuous delivery of software is key to achieving this constant innovation. He describes how, in the “age of the customer,” delivery cycle time has been driven down by an unprecedented demand for the speedy delivery of quality software, making the adoption of DevOps a necessity.
Within this delivery cycle, however, he describes a fundamental problem with testing: it is recognized as necessary to guarantee quality, but it is also seen as something that slows projects down. Teams are therefore forced to compromise between delivering software on time and testing it properly. This is especially true when testing is heavily manual and test cases are not optimized; here, testing is too slow and is “suboptimized.”
We believe that to properly achieve both speed and quality in the “age of the customer,” organizations must shift the effort of testing left and automate more, earlier. It is only by creating a close link between user requirements, test cases, data, expected results, and automated tests that organizations can create and execute optimized tests at the pace with which user requirements change.
This can be achieved using model-based testing (for example, modeling user requirements and change requests as “active” flowcharts). With this approach, each path through the flowchart becomes a test case, so that every possible test can be identified automatically. The logical steps (blocks) in the flowchart are, in effect, the test components which can be executed in an automation engine, with different combinations and orders of blocks providing maximum test coverage.
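The idea of treating each path through a flowchart as a test case can be sketched in a few lines. In this illustrative example (the flowchart, block names, and login scenario are hypothetical, not taken from any specific tool), the model is a directed acyclic graph and a depth-first traversal enumerates every possible test:

```python
# Sketch: enumerating test cases as paths through a flowchart model.
# The flowchart is a hypothetical login flow, represented as a
# directed acyclic graph (adjacency list); all names are illustrative.
flowchart = {
    "start": ["enter_credentials"],
    "enter_credentials": ["valid", "invalid"],
    "valid": ["dashboard"],
    "invalid": ["error_message"],
    "dashboard": [],
    "error_message": [],
}

def all_paths(graph, node, path=None):
    """Depth-first enumeration of every path from `node` to a leaf."""
    path = (path or []) + [node]
    if not graph[node]:          # leaf reached: this path is one test case
        return [path]
    paths = []
    for nxt in graph[node]:
        paths.extend(all_paths(graph, nxt, path))
    return paths

test_cases = all_paths(flowchart, "start")
# Two paths, e.g. ['start', 'enter_credentials', 'valid', 'dashboard']
```

Because the traversal is exhaustive, no behavior described in the model can be missed, which is what allows every possible test to be identified automatically.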
Mathematical optimization algorithms can then be used to reduce this to the smallest set of tests that provides 100% coverage, so that tests are “optimized,” in contrast to traditional testing methods. Lo Giudice observes, for example, how test design optimization tools like Grid-Tools’ Agile Designer and Tricentis Tosca offer the ability to quickly generate optimized test cases from requirements.
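The optimization step is essentially a set-cover problem: choose the fewest test cases whose combined coverage includes every element of the model. The algorithms inside commercial tools are proprietary; the sketch below uses a simple greedy approximation (which finds a small, though not always provably minimal, covering set) purely to illustrate the principle. The test names and covered elements are invented:

```python
# Sketch: greedy set-cover approximation for test optimization.
# Given candidate test cases and the model elements each one exercises,
# pick a small subset that still covers every element. Real tools use
# more sophisticated algorithms; this is illustrative only.

def optimize(candidates):
    """candidates: dict mapping test name -> set of covered elements.
    Returns a list of test names that together cover everything."""
    uncovered = set().union(*candidates.values())
    selected = []
    while uncovered:
        # Greedily pick the test covering the most still-uncovered elements.
        best = max(candidates, key=lambda t: len(candidates[t] & uncovered))
        selected.append(best)
        uncovered -= candidates[best]
    return selected

candidates = {
    "T1": {"A", "B"},
    "T2": {"B", "C"},
    "T3": {"A", "B", "C", "D"},
    "T4": {"D"},
}
print(optimize(candidates))  # ['T3'] — one test already gives 100% coverage
```

Here four candidate tests collapse to a single test with full coverage, which is exactly the reduction in execution effort that makes optimized test design faster than traditional methods.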
To then derive automated tests quickly enough from these optimized test cases, test cases and test scripts must be treated, in practice, as one and the same. If they are treated as separate assets, a great deal of manual effort is required to convert the test cases into test scripts. Instead, to keep up with changing user requirements, test cases should be created in a format from which they can be pushed out to execution engines without any additional manual effort.
Tests generated from a formal flowchart model can be produced with the data and expected results needed to execute them, and in a format suitable for execution engines. As the tests are generated, data can be populated automatically from default values, based on the names, values, and attributes defined in the flowchart. Expected results can similarly be defined as the requirements are modeled, being exported automatically with the test cases.
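One way to picture this is to imagine each block in the model declaring default input values and an expected result; when a path is exported as a test, the defaults of its blocks are merged into one executable data row. The block names, fields, and defaults below are hypothetical, chosen only to illustrate the mechanism:

```python
# Sketch: deriving test data and expected results from defaults
# declared in the model. Each block on a path contributes its default
# values; later blocks override earlier ones. All names are illustrative.

block_definitions = {
    "enter_credentials": {"username": "user@example.com", "password": "Pa55w0rd"},
    "valid": {"expected_result": "login succeeds"},
    "invalid": {"password": "wrong-password", "expected_result": "error shown"},
}

def build_test_data(path):
    """Merge the defaults of each block on the path into one data row."""
    data = {}
    for block in path:
        data.update(block_definitions.get(block, {}))
    return data

row = build_test_data(["start", "enter_credentials", "invalid", "error_message"])
# The 'invalid' block overrides the password and sets the expected result.
```

Because the data and expected result fall out of the model itself, the exported test needs no manual completion before an execution engine can run it.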
Crucially, in the “age of the customer,” the traceability introduced between automated tests, test cases, and user requirements means that change can be implemented in minutes – not days or weeks. In other words, because automated tests, data, and expected results “fall out” of the requirements themselves, the tests can be updated automatically when the requirements change. The updated tests can then be executed automatically, so that test teams do not need to choose between quality and speed when faced with constantly changing requirements.
Want to learn more? Check out the webcast Transform Your Testing for Digital Assurance.