While “Test Automation” does not define Continuous Testing, it is certainly part of it. In fact, of the five “It’s not” statements mentioned earlier, “It’s not a QA-led activity” is the only one that doesn’t represent some part of Continuous Testing. Instead, a true Continuous Testing initiative would come from the DevOps management level. And like any good discipline, concept, or initiative, achieving a “true” Continuous Testing practice rests on 10 tenets for success. They are better described as 10 key elements, spread across the SDLC, representing processes and technology that have not yet reached their potential inside most organizations or, at the very least, have not been combined to such a degree.
1. Automatically generate test automation scripts from requirements
Quality is owned by everyone in the SDLC, and each stage of the life cycle needs quality controls. Building quality into an application, however, should start at the source of it all: the requirements. That’s where a requirements model comes into play. Models enable requirements to be defined in an unambiguous, testable manner so that test cases and automated test scripts can be generated automatically from them. That happens as soon as requirements are defined, before a single line of code is written. Additionally, models are self-healing: if you change the design of your requirements, you’ve already changed your test cases and automation scripts, and you retain a record of versioning, previous test cases, and metrics indicating the application’s functional test case coverage.
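As a minimal sketch of the idea, the hypothetical model below describes a login requirement as fields with their valid and invalid inputs; expanding the model yields concrete test cases, including the expected outcome, with no application code in sight. The model's shape and field names are assumptions for illustration, not any particular tool's format.

```python
from itertools import product

# Hypothetical requirements model: each field maps to its possible inputs.
# Expanding every combination approximates how model-based tools derive
# test cases directly from requirements, before any code is written.
login_model = {
    "username": ["valid_user", ""],           # empty string = invalid case
    "password": ["correct_pw", "wrong_pw"],
}

def generate_test_cases(model):
    """Expand a requirements model into concrete test cases."""
    fields = sorted(model)
    cases = []
    for combo in product(*(model[f] for f in fields)):
        case = dict(zip(fields, combo))
        # The expected outcome is itself derived from the model's rules,
        # so redesigning the model regenerates the expectations too.
        case["expect_success"] = (case["username"] == "valid_user"
                                  and case["password"] == "correct_pw")
        cases.append(case)
    return cases

cases = generate_test_cases(login_model)
```

Because the cases are a pure function of the model, changing a requirement and regenerating is what gives the "self-healing" property described above.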
2. Simulate your test environment
With tests in hand after defining requirements, the next step is to wait for the application code to be deployed. But once that occurs, the team quickly realizes the first roadblock to overcome is the lack of available environments and interfaces needed to run the previously created tests. So, you need to be able to virtualize those environments and interfaces to remove those testing constraints, whether you’re testing at the unit, integration, or functional level. Virtual endpoints and request/response pairs can also be part of the requirements model, pushing the elimination of environmental roadblocks even further to the left.
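To make the idea concrete, here is a toy virtual service built with nothing but the standard library: canned request/response pairs stand in for a dependency that isn't deployed yet, so tests can run against it immediately. The `/accounts/42` endpoint and its payload are invented for illustration.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical request/response pairs recorded (or modeled) for a
# dependency that is not yet available in the test environment.
CANNED = {
    "/accounts/42": {"id": 42, "status": "active"},
}

class VirtualService(BaseHTTPRequestHandler):
    def do_GET(self):
        body = CANNED.get(self.path)
        self.send_response(200 if body else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(body or {"error": "no stub"}).encode())

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), VirtualService)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/accounts/42"
with urllib.request.urlopen(url) as resp:
    payload = json.loads(resp.read())
server.shutdown()
```

A real service-virtualization tool adds recording, latency simulation, and stateful behavior, but the constraint it removes is exactly this one: tests no longer wait on an environment.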
3. Access test data on-demand & ensure PII compliance
With the tests ready to run and interfaces available (real or virtual), the next roadblock the team has to tackle is test data. It takes too much time to find or make the right test data and ensure it’s available to be consumed during testing. While masking and subsetting, or creating virtualized, individual, archived copies of production, are helpful, it’s often like being handed a haystack when all you need is a needle. To tackle the challenges of test data, we need to think about test data differently. We tend to assume the best place to get test data is production, but if the purpose of test data is testing, then we should set up generation rules to create fit-for-purpose, synthetic test data and give testers ownership of executing those rules. Taking it further, test data must be fully correlated across systems and databases and tied to the test cases. This means capturing data generation rules at the requirements and planning phases. The data is then captured by design, desensitized, and ready once the code hits the QA environment.
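A hedged sketch of what a generation rule might look like: the rule is data (so a tester can own and run it), and the output is synthetic, so no production PII is ever involved. The `synth_customer` function, the field names, and the rule shape are all assumptions for illustration.

```python
import random
import string

# Deterministic seed so the generated data is repeatable across runs.
random.seed(7)

def synth_customer(rule):
    """Generate one fit-for-purpose, synthetic customer record from a rule."""
    return {
        "id": random.randint(rule["id_min"], rule["id_max"]),
        # Synthetic email: never derived from a real person, so no PII risk.
        "email": "".join(random.choices(string.ascii_lowercase, k=8)) + "@example.test",
        "balance": round(random.uniform(*rule["balance_range"]), 2),
    }

# Hypothetical generation rule, which would be captured at the
# requirements/planning phase and tied to the relevant test cases.
rule = {"id_min": 1000, "id_max": 9999, "balance_range": (0.0, 500.0)}
customers = [synth_customer(rule) for _ in range(3)]
```

Because the rule produces exactly the records a test case needs, it yields the needle directly instead of a masked copy of the haystack.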
4. Start multi-layer / backend “request & response” testing prior to UI
Now that we have the tests, interfaces, and test data ready to go, the next step is to run the tests (finally!). That’s when teams will need a test execution engine. Executing tests and consuming test data across multiple applications and system layers (backend, API, UI, etc.) is essential to achieving the speed and quality all teams are looking for.
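As a rough illustration of "multi-layer" execution, the runner below drives the same test data through checks at two layers, a backend-style constraint and a stubbed API round-trip, and collects one result per layer. The layer names and checks are invented; a real engine would dispatch to actual databases, APIs, and UIs.

```python
import json

def backend_test(row):
    # Stand-in for a backend/database constraint check.
    return row["amount"] >= 0

def api_test(row):
    # Stand-in for an API round-trip: serialize and deserialize the payload
    # and confirm nothing is lost, as a request/response would require.
    return json.loads(json.dumps(row)) == row

# Hypothetical layer registry; a real engine would also cover the UI layer.
LAYERS = {"backend": backend_test, "api": api_test}

def run_suite(test_data):
    """Execute every layer's tests against the same shared test data."""
    return {name: all(test(row) for row in test_data)
            for name, test in LAYERS.items()}

report = run_suite([{"amount": 10}, {"amount": 0}])
```

The point of the single runner is the shared test data: every layer consumes the same rows, so a failure isolates the layer rather than the data.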
5. Democratize performance testing
Up to this point we’ve framed the conversation from an Agile development and testing perspective. Performance testing was historically reserved for pre-production and perhaps QA, but now it can be done far earlier. By democratizing performance and load testing, we can shift further to both the left and the right: we can know at the unit level whether our code will degrade the overall system, and continue testing it through to post-production.
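What unit-level performance testing can look like, in a minimal sketch: a latency budget asserted alongside the functional tests, so a slow change fails the build long before a pre-production load test. The function, call count, and budget below are assumed values for illustration.

```python
import timeit

def normalize(values):
    """The hot function under a performance budget."""
    total = sum(values)
    return [v / total for v in values]

# Hypothetical budget: 500 calls on 1,000-element inputs must finish
# within 2 seconds, or the unit-level performance gate fails.
BUDGET_SECONDS = 2.0
elapsed = timeit.timeit(lambda: normalize(list(range(1, 1001))), number=500)
within_budget = elapsed < BUDGET_SECONDS
```

Tools like JMeter, Gatling, or Locust then extend the same idea outward to whole-system load, but the gate shown here is cheap enough to run on every commit.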
6. Leverage open source testing tools
Great open source testing tools are out there. Think Cucumber, Selenium, JMeter, Gatling, and Locust, to name a few. Developers and testers love them. Like smartphones, they come with the employees, and we should work with them wherever possible. For one, it saves money, but it also means we can keep pace with the industry and embrace new technologies. It’s important to not only include them in our tool boxes but expand our integration with them so that we can automate the interactions and test at scale.
7. Ensure comprehensive cloud-based API testing
With virtually limitless scalability, teams can throttle their testing of Web applications and APIs up or down according to their needs. The rapid adoption of microservices architecture is accelerating the proliferation of both private and public APIs and turning them into revenue-generating opportunities. Lack of test coverage results in broken APIs, and the difficulty of pinpointing issues once they’re deployed reduces development velocity. SaaS-based functional API testing solutions can auto-generate and execute comprehensive tests for each API, allowing teams to test earlier and more often, and to prevent defects from occurring.
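One way such auto-generation can work, sketched under assumptions: walk a machine-readable spec fragment (OpenAPI-like, simplified here) and emit one test case per endpoint, method, and documented status code. The spec content and test-case shape are invented for illustration.

```python
import json

# Hypothetical, heavily simplified spec fragment for two endpoints.
SPEC = json.loads("""{
  "paths": {
    "/users":      {"get": {"responses": ["200"]}},
    "/users/{id}": {"get": {"responses": ["200", "404"]}}
  }
}""")

def generate_api_tests(spec):
    """Emit one executable test case per path/method/response combination."""
    tests = []
    for path, methods in spec["paths"].items():
        for method, meta in methods.items():
            for status in meta["responses"]:
                tests.append({"method": method.upper(),
                              "path": path,
                              "expected_status": int(status)})
    return tests

api_tests = generate_api_tests(SPEC)
```

Generating coverage from the spec means every new endpoint arrives with tests attached, which is what lets teams test earlier and more often without hand-writing each case.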
8. Build in automated application security testing
Even with code that works and performs perfectly, before putting it in the hands of users, teams must ensure it is secure. That’s where static and dynamic security tests come into play. It’s critical to treat the remediation of security risks the same way we treat defects. Therefore, we must test application security at the development, pre-production, and production levels to lower the risk of significant financial losses. Security must be at the core of all we do.
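To give a feel for the static side, here is a toy static-analysis pass: a couple of rules flag risky patterns in source text, and each finding is a record that can be triaged exactly like a functional defect. The rule set is deliberately naive and invented; real SAST tools parse code rather than grep it.

```python
import re

# Hypothetical rule set: each rule is a named pattern flagging a risky idiom.
RULES = {
    "hardcoded secret": re.compile(r"password\s*=\s*['\"]"),
    "sql string concat": re.compile(r"SELECT .* \+"),
}

def scan(source):
    """Return one defect-like finding per rule hit, with its line number."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for name, pattern in RULES.items():
            if pattern.search(line):
                findings.append({"line": lineno, "rule": name})
    return findings

sample = 'password = "hunter2"\nquery = "SELECT * FROM t WHERE id=" + uid\n'
findings = scan(sample)
```

Running such a pass in the build, alongside dynamic tests in pre-production, is what moves security remediation into the same defect workflow as everything else.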
9. Orchestrate the testing activities across the pipeline
For teams to coordinate all of those different activities, we need an orchestration engine. It is responsible for defining what is supposed to happen and when, as well as capturing the data from all the involved tools so the team can quickly understand what is going on across the different parts of the lifecycle. It’s more than just a release engine; it’s a manager of managers that automates all the tools within the entire pipeline. For example, the orchestration engine would:
- Launch the virtual services mentioned in Element 2 based on the models in Element 1, and provision the test data from Element 3, which is also tied to the models.
- Activate Element 4’s automation, which picks up the test scripts created from the same models in Element 1, feeds them into the test automation engine, and executes them.
- After receiving the all-clear, start the performance, API, and any other testing in Elements 5, 6, and 7; and so on.
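The sequence in the bullets above can be sketched as a tiny manager-of-managers: each element's step runs in order, writes into a shared context, and is logged so the team can see what happened across the pipeline. Stage names and the context keys are assumptions for illustration.

```python
# Shared audit log: the orchestration engine records every stage it runs.
log = []

def step(name):
    """Decorator that logs a stage before executing it."""
    def deco(fn):
        def wrapped(ctx):
            log.append(name)
            return fn(ctx)
        return wrapped
    return deco

@step("launch_virtual_services")       # Element 2, driven by Element 1's models
def launch_virtual_services(ctx):
    ctx["services_up"] = True

@step("provision_test_data")           # Element 3, tied to the same models
def provision_test_data(ctx):
    ctx["data_ready"] = True

@step("execute_functional_tests")      # Element 4's automation
def execute_functional_tests(ctx):
    ctx["functional_passed"] = ctx["services_up"] and ctx["data_ready"]

@step("run_performance_and_api_tests") # Elements 5-7, after the all-clear
def run_performance_and_api_tests(ctx):
    ctx["all_green"] = ctx["functional_passed"]

PIPELINE = [launch_virtual_services, provision_test_data,
            execute_functional_tests, run_performance_and_api_tests]

context = {}
for stage in PIPELINE:
    stage(context)
```

The shared context is the point: each stage consumes what earlier stages produced, which is what makes the engine a coordinator of tools rather than just a release trigger.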
10. Harness application insight across the SDLC to improve user experience
To top it off, before and after the application is released to users, teams must continuously monitor the technology, user, and business metrics to correlate them and learn whether users are perceiving the value the team originally intended to deliver. Based on those lessons learned, new hypotheses are created, and ideas for new features or changes are defined and added to the next sprint’s backlog so the team can go at it again.