Refcard #343

Automated Testing at Scale

Test code and test tools are as critical to a software application as the application code itself. This Refcard will explore the fundamentals of testing in an Agile world and how automated tests can drastically improve the quality of our applications. We will then look at two critical strategies for reducing the execution time of our automated tests to keep our builds lean.

Published: Jan. 08, 2021

Brought to you by

Eggplant Software

Written by

author avatar Justin Albano Software Engineer, Catalogic Software, Inc.
Table of Contents

Introduction

Testing Fundamentals

Automated Testing

Scaling Tests

Conclusion

Section 1

Introduction

Test code and test tools are as critical to a software application as the application code itself. In practice, we often neglect tests because they add overhead to our development and build times. These slowdowns affect our release cadence and disrupt our short-term schedules (in exchange for long-term gains that are hard to see during a crunch). 

In this Refcard, we will look at the fundamentals of testing in an Agile world and how automated tests can drastically improve the quality of our applications. We will then look at two critical strategies for reducing the execution time of our automated tests to keep our builds lean. 


This is a preview of the Automated Testing at Scale Refcard. To read the entire Refcard, please download the PDF from the link above.

Section 2

Testing Fundamentals

Rarely does a team outright reject tests, but in many cases, tests are treated as second-class citizens — and sometimes even as a stretch goal — which comes at the price of: 

  • Instability 
  • Lack of confidence 
  • Hesitancy to change others’ code 

The benefits of first-class testing are the opposite: stability, increased confidence, and increased interaction between developers. This enables us to create new features and work on different parts of the application without worrying that we will create regressions. 

Regardless of the development process we choose, we will work through a basic set of steps when building an application: 

  1. Eliciting requirements from the customer or stakeholders and capturing them in specifications (such as formal requirements or use cases) 
  2. Creating architectural and low-level designs based on our requirements 
  3. Writing code abiding by our designs 

The testing we perform correlates to these generalized steps and is captured in the V-Model. 

The V-Model pairs each of the four non-coding artifacts — requirements, specifications, architectural designs, and low-level designs — with an accompanying type of test: 

  1. Acceptance tests: Ensure the application meets the needs of the customer and stakeholders 
  2. System tests: Ensure the application meets its specification when deployed in a production-like environment 
  3. Integration tests: Ensure the components of the application function according to their designs 
  4. Unit tests: Ensure each unit that makes up a component functions according to its design 
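As a minimal sketch of the bottom of the V, a unit test exercises a single unit against its low-level design. The `add` function here is a hypothetical stand-in for a real unit; in practice, it would be imported from the application code (shown here in pytest-style Python):

```python
# Hypothetical unit under test; in a real project this would be
# imported from the application code rather than defined inline.
def add(a: int, b: int) -> int:
    return a + b

# Unit tests: verify the unit's behavior against its low-level design.
def test_adds_positive_numbers():
    assert add(2, 3) == 5

def test_handles_negatives():
    assert add(-2, 3) == 1
```

Tests at the other levels of the V follow the same assert-on-behavior shape, but against progressively larger slices of the system.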

While the V-Model provides a sound basis, the Agile movement has brought some much-needed additions to this foundation. 



Section 3

Automated Testing

Prior to the Agile movement, tests were commonly performed manually, with a Quality Assurance (QA) team deploying the system, injecting inputs, and inspecting outputs. While this did suffice for some time, it was: 

  • Tedious 
  • Monotonous 
  • Error-prone 
  • Difficult to repeat 

This manual process changed with the introduction of an automated mindset: Tests were instead written in code so that they could be executed on demand. This brought numerous benefits, including: 

  • Reduced execution time 
  • Quick feedback 
  • Repeatability 
  • Version-controlled configuration 

This automation also led to the introduction of fully automated integration and deployment pipelines, called Continuous Integration (CI) and Continuous Deployment (CD) pipelines. 

Each commit to the application repository can be automatically tested and deployed, which allows us to determine whether each change breaks our product. As beneficial as this is, automated testing is not a silver bullet: It is only as good as the tests we create, and poor tests can lead to overconfidence and a false sense of security.  

Additionally, the quality and speed of each test tend to diminish in proportion to the number of tests. In the following sections, we will look at some of the practices for creating quick unit, integration, system, and acceptance tests, as well as some strategies for quickly executing a large number of tests. 



Section 4

Scaling Tests

While automated testing is essential, the number of tests can grow to the point where builds take hours and developers cannot obtain quick feedback. To remedy this situation, we must focus on scaling our tests properly by optimizing individual tests and parallelizing test execution. 
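As a rough sketch of the parallelization half of this strategy, independent test cases can be executed concurrently. In a real build, a parallel test runner would shard tests across workers; the hand-rolled runner and test functions below are purely illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical independent test functions; a real suite would be
# discovered by the test framework rather than listed by hand.
def test_login():
    assert "user".upper() == "USER"

def test_checkout():
    assert sum([1, 2, 3]) == 6

def test_search():
    assert "needle" in "finding a needle"

def run_in_parallel(tests, workers=4):
    """Run independent tests concurrently and collect any failures."""
    failures = []
    def run_one(test):
        try:
            test()
        except AssertionError as exc:
            failures.append((test.__name__, exc))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map submits every test; leaving the with-block waits for all.
        pool.map(run_one, tests)
    return failures

failures = run_in_parallel([test_login, test_checkout, test_search])
```

Note that this only works when tests are independent: any shared mutable state would turn the concurrent runs into a source of flakiness.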

Optimizing Tests 

One of the most effective strategies for reducing the execution time of tests is to optimize individual test cases. As with any performance optimization, we must target the portion of the execution time that will provide us with the most reward for our effort. Generally, test execution time can be divided into three stages: 

  1. Setup: The time required to acquire and create the resources that the test case will use and configure the environment 
  2. Execution: The time required to run the test case 
  3. Tear down: The time required to release and destroy any resources created during setup and leave the environment in a stable state 
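These three stages map directly onto the hooks that most test frameworks provide. A sketch using Python's built-in unittest, with a hypothetical in-memory repository standing in for a real resource:

```python
import unittest

class InMemoryRepository:
    """Hypothetical resource acquired during setup."""
    def __init__(self):
        self.items = []
    def save(self, item):
        self.items.append(item)
    def close(self):
        self.items.clear()

class RepositoryTest(unittest.TestCase):
    def setUp(self):
        # Setup: acquire and configure the resources the test will use.
        self.repo = InMemoryRepository()

    def test_save_stores_item(self):
        # Execution: run the behavior under test.
        self.repo.save("order-1")
        self.assertIn("order-1", self.repo.items)

    def tearDown(self):
        # Tear down: release resources and restore a stable state.
        self.repo.close()
```

Measuring how long each hook takes (rather than only the whole test) tells us which of the three stages is worth optimizing.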

This three-part division of execution time applies to an entire test phase as well. For example, deploying our system to a production-like environment for system tests can be thought of as the setup portion of the system test phase, and the execution of the test cases — each with its own setup, execution, and tear down parts — can be considered the execution stage. 

For unit and integration tests, the most time-consuming stage will commonly be execution, as the setup and tear down phases of the test should only create simple objects or small sets of objects. For system and acceptance tests, on the other hand, setup (and possibly tear down) will commonly take as much time as, or more than, the test case execution. 

When setup and tear down are the long poles in the tent, we can do the following to reduce their execution time: 

  1. Favor lightweight orchestration: Lightweight objects, such as containers (like Docker), are generally faster to deploy and should be favored over heavier-weight objects, such as Virtual Machines (VMs). Additionally, container orchestration frameworks (such as Kubernetes) can generally configure networks and file systems more quickly than VM orchestration tools. 
  2. Stub live services when possible: Mock any external services whose behavior is known but whose implementation is not directly needed. This may be more viable for unit and integration tests, since system and acceptance tests will commonly use the actual services needed in production, but it is still possible to stub some system services. For example, a system test for a shopping application can stub the financial institution so that the payment processing portion of the application can be tested, but the payment itself is not made to a real financial institution. 
  3. Share startup procedures between tests: In some cases, resources such as databases can be shared between tests. This removes the need for each test case to start up and tear down the shared resource. When possible, the interactions with shared resources should be independent (see the following section) and should not cause contention (e.g., writing to the same collection in a database). 
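The financial-institution example above can be sketched with a hand-rolled stub. All names here (`PaymentGateway`, `CheckoutService`, and so on) are hypothetical, standing in for whatever interfaces the application actually defines:

```python
class PaymentGateway:
    """Interface the application depends on; the real implementation
    would call out to a financial institution."""
    def charge(self, amount):
        raise NotImplementedError

class StubPaymentGateway(PaymentGateway):
    """Stub with known behavior: approves every charge without
    contacting a real financial institution."""
    def __init__(self):
        self.charges = []
    def charge(self, amount):
        self.charges.append(amount)
        return "approved"

class CheckoutService:
    """Hypothetical unit under test: processes a payment via whatever
    gateway it is given."""
    def __init__(self, gateway):
        self.gateway = gateway
    def checkout(self, amount):
        return self.gateway.charge(amount) == "approved"

# The test exercises the real checkout logic, while the payment
# itself never leaves the test environment.
gateway = StubPaymentGateway()
service = CheckoutService(gateway)
assert service.checkout(49.99)
assert gateway.charges == [49.99]
```

Because the stub responds instantly, the setup and execution stages shrink to the cost of constructing a few in-memory objects.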


Section 5

Conclusion

Testing is often relegated to an inferior position behind application code, but this demotion can lead to a serious lack of quality. While automated testing has reduced the burden of testing, it is not a complete solution. Instead, we need to combine efficient tests with parallelization to balance sufficient test coverage against reasonable build times. 


