The goal of software testing — whether automated or manual — is to ensure that our application or system adheres to its specifications. A specification can be as simple as requiring that a component produce a specific result, or as complex as requiring that a component complete its execution within a statistically bounded period of time.
In most cases, our systems will have multiple levels of specifications, with each successively lower level focusing on a smaller subsection of the system. For example, our highest-level specifications will likely focus on the system as a whole, executing in a scaled-down clone of the production environment, while the lowest-level specifications may focus on a single class or method. The tests that we create must, therefore, reflect the granularity of the specifications being tested.
Generally, our development will have four phases — or levels of specifications and corresponding tests:
- Unit – Focuses on individual classes and methods.
- Integration – Focuses on interfaces and interactions between units.
- System – Focuses on the entire system while mocking external interactions.
- Acceptance – Focuses on the entire system executing in an environment that mimics the production environment.
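To make the lowest level concrete, here is a minimal sketch of a unit test using Python's built-in `unittest` module. The `add` function is a hypothetical unit under test, not something from the text above.

```python
import unittest

# Hypothetical unit under test: a single, isolated function.
def add(a, b):
    return a + b

class AddTests(unittest.TestCase):
    # A unit test verifies one method against its specification:
    # add must return the sum of its arguments.
    def test_add_returns_sum(self):
        self.assertEqual(add(2, 3), 5)

# Run the suite on demand (exit=False lets the script continue afterward).
unittest.main(argv=["ignored"], exit=False)
```

An integration test at the next level up would instead exercise how `add` interacts with the components that call it.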
We can visualize these phases using the V-Model: development activities descend the left side of the V, and the corresponding test levels ascend the right side, with each test level verifying the specifications written at its matching development level.
Tests can be executed either manually or automatically. While manual tests have their place — often in visual or User Interface (UI) testing — they have been largely phased out, and for good reason. Manual tests are usually:
- Slow – Tests can only be executed as fast as a human can complete a test and record the results.
- Inconsistent – It is difficult to repeat a test the same way twice, which introduces variance and irregularities.
- Tedious – Following the same steps each time a test is executed can become monotonous over time and cause testers to lose concentration.
In contrast, automated testing — creating repeatable tests that can be executed on demand — has significant advantages over manual testing, including:
- Speed – Tests execute at machine speed, far faster than a human can follow a script.
- Consistency – Tests are executed in the same environment every time.
- Repeatability – Tests are executed the same way every time.
- Integration – Tests and results can be easily integrated with other tools.
When a significant number of tests, called a suite, are accumulated for each phase, we can further automate our testing by creating a pipeline. A pipeline is a set of steps that we can automatically execute to exercise our system. Each stage in the pipeline generally corresponds to a phase of testing. For example, a pipeline with unit, integration, system, and acceptance stages executed in sequence (or a variation of it) is common when testing larger systems.
A single change to our system — encapsulated as a commit to our version control system — triggers a new execution of our pipeline. Each stage, from left to right, climbs higher up the right side of the V-Model. Such a pipeline may even include performance or accessibility tests (e.g., user acceptance testing, capacity testing, and staging stages), as well as a deployment stage that ultimately releases our system into its production environment.
Regardless of the specifics of each stage and the combination of stages we utilize, when one stage of the pipeline successfully completes, the next stage executes. Once the entire pipeline completes, we know that the changes we made in our commit meet all of our specifications. The faster the pipeline executes, the faster we get feedback about our commit. Therefore, the more stages that we automate, the quicker we can get feedback about our change.
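The stage-by-stage behavior described above can be sketched in a few lines: each stage runs only if the previous one succeeded, and a failure halts the pipeline and reports where it stopped. The stage names and pass/fail checks here are hypothetical placeholders, not a real CI configuration.

```python
# Minimal sketch of a pipeline: stages run in order, and the
# first failing stage stops the run.
def run_pipeline(stages):
    for name, check in stages:
        if not check():
            return f"failed at {name}"
    return "all stages passed"

# Hypothetical stages; each check stands in for a test suite.
stages = [
    ("unit", lambda: True),
    ("integration", lambda: True),
    ("system", lambda: False),   # simulated failure
    ("acceptance", lambda: True),
]

print(run_pipeline(stages))  # prints "failed at system"
```

Because the acceptance stage never runs after the system stage fails, the feedback arrives as soon as the earliest failing stage completes, which is exactly why automating the lower, faster stages pays off first.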
Lower-level stages, such as unit and integration testing, are usually easy to automate. At these points in the V-Model, tests are typically written by the development team in the same programming language as the components being exercised. As we work our way further up the V-Model toward system and acceptance testing, this is no longer the case. As the tests become more abstract, more often than not, non-developers will write the tests. For example, it is common that the stakeholders in a product may write the acceptance tests in a natural language, such as English.
Since these tests are written by non-developers, automating them can be difficult. To solve this problem, we can use automated testing tools that translate natural language into automated actions, or even allow a tester to drag and drop actions to create tests. One of the best tooling options available is cloud-based test automation software.
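One way such tools translate natural language into actions is by matching each sentence against a library of step patterns, each bound to an automated handler. The patterns and handlers below are hypothetical stand-ins for what a real tool would provide.

```python
import re

# Minimal sketch: each natural-language pattern maps to an
# automated action. Both are hypothetical illustrations.
STEPS = [
    (re.compile(r'the user enters "(.+)"'), lambda text: f"typed {text}"),
    (re.compile(r'the user clicks "(.+)"'), lambda label: f"clicked {label}"),
]

def run_step(sentence):
    # Find the first pattern that matches and run its action
    # with the captured arguments.
    for pattern, action in STEPS:
        match = pattern.match(sentence)
        if match:
            return action(*match.groups())
    raise ValueError(f"no step matches: {sentence}")

print(run_step('the user clicks "Submit"'))  # prints "clicked Submit"
```

A stakeholder writes the sentence; the tool (or a developer, once per pattern) supplies the matching action, so acceptance tests stay in natural language while remaining fully automatable.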