Reinventing QA in the DevOps era
DevOps testers need to leverage best practices in Agile testing, Continuous Integration, and test-driven development if they want to accelerate their QA processes.
Documentation and testing are two activities that have traditionally been given short shrift on software development teams. Documentation in the form of text or illustrations serves the underappreciated role of explaining how a program operates or how to use it. Unfortunately, many developers have learned the hard way that detailed documentation often can't be trusted because it falls out of sync with the code it's meant to describe, especially on dynamic projects with changing requirements, so the effort spent maintaining it is largely wasted.
Testing is undervalued on traditional software teams, too, especially among developers who think the tester's job is to break the code they've spent hours crafting. One of the broad guidelines of the Agile Manifesto is to value working software over comprehensive documentation, since "working software is the primary measure of progress" on Agile projects. On Agile DevOps projects, the importance of testing has, if anything, only increased.
Agile development takes a test-first approach rather than the test-at-the-end approach of traditional development. In Agile testing, code is developed and tested in small increments of functionality. Almost all DevOps environments use an Agile project management and product development methodology that promotes frequent interaction between IT departments and business users and tries to regularly build and deliver software that meets users' changing requirements. This means, in effect, building a continuous, two-way DevOps software pipeline between you and your customers.
Building a successful Continuous Delivery pipeline means creating a DevOps culture of collaboration among the various teams involved in software delivery (developers, operations, quality assurance, business analysts, management, etc.), as well as reducing the cost, time, and risk of delivering software changes by allowing for more incremental updates to applications in production. In practice, this means teams produce software in short cycles, ensuring that the software can be reliably released at any time.
Agile development recognizes that testing is not a separate phase from coding, but an integral part of the software development process. Because Agile is an iterative development methodology, testing and coding are done incrementally and interactively. Features can evolve in response to changing customer requirements. Agile testing covers all types of testing, e.g., unit, functional, load, and performance tests. The following Agile Testing Quadrants diagram is a useful tool for cross-functional Agile development teams to use to plan and execute testing activities.
Figure 1. Agile testing quadrant (Source).
Agile expert Lisa Crispin developed these four Agile testing quadrants as a guide for managers and development teams to use to create test strategies. It's important to realize that the Agile Testing Quadrants diagram is simply a taxonomy to help teams plan their testing — there are no hard and fast rules about which tests belong in which quadrant or in which order the different tests need to be done. (For example, it's not necessary to work through the quadrants from Q1 to Q4 in a Waterfall style.)
Crispin's four quadrants are based on Brian Marick's Agile testing matrix, which makes a distinction between tests that are either business-facing or technology-facing (see the top and bottom labels on Figure 1).
A business-facing test is one you can describe to a business expert in business terms, such as, "If your user's account is overdrawn, will the system add a service fee?"
A technology-facing test is one that uses language that developers might understand, such as "PersistentUser#overdrawn adds service fee."
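The technology-facing wording above maps naturally onto a unit test. Here is a minimal sketch in Python; the `Account` class, its method names, and the $35 fee are invented for illustration, not taken from any real banking API:

```python
class Account:
    """Toy bank account used to illustrate a technology-facing test."""
    SERVICE_FEE = 35.00  # illustrative flat overdraft fee (an assumption)

    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        self.balance -= amount
        if self.balance < 0:          # overdrawn: apply the service fee
            self.balance -= self.SERVICE_FEE

def test_overdrawn_account_is_charged_service_fee():
    account = Account(balance=100.00)
    account.withdraw(150.00)          # overdraws by $50
    assert account.balance == -85.00  # -50 overdraft minus $35 fee

test_overdrawn_account_is_charged_service_fee()
```

A developer reads this test in implementation terms (class, method, balance field), while a business expert would read the same rule in the Given/When/Then wording of the business-facing example above.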
Marick also recognizes a difference between tests that support the development team and tests that critique the product (see the left and right labels on Figure 1). By tests that "support the team," he means tests like component or unit tests where testable parts of an application are individually and independently scrutinized for proper operation. Tests that "critique the product" are those that are not focused on the development process but look at inadequacies in the finished product, such as not fulfilling a business requirement.
The four quadrants are described in more detail below.
Quadrant 1 (Q1)
These are technology-facing tests that guide development, such as unit tests, API tests, web services testing, and component tests that improve product design. Tests in Q1 are often associated with automated testing and Continuous Integration.

Quadrant 2 (Q2)
These are business-facing tests that guide development (e.g., functional tests, story tests, prototypes, and simulations) and make sure your software products are properly aligned with the business. Tests in Q2 are often associated with a mix of automated and manual testing.

Quadrant 3 (Q3)
These are business-facing tests used to evaluate or critique the product. Q3 covers tests such as exploratory testing, scenario-based testing, usability testing, user acceptance testing, and alpha/beta testing. It can involve product demos designed to get feedback from actual users. Tests in Q3 are often associated with manual testing.

Quadrant 4 (Q4)
These are technology-facing tests used to evaluate or critique the product. Q4 covers performance, load, stress, scalability, recovery, security, maintainability, memory management, compatibility and interoperability, data migration, and infrastructure tests. These tests are often automated.
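To make the Q4 category concrete, an automated performance check can be as simple as measuring average latency against a budget. This is only a rough sketch, not a substitute for a real load-testing tool; the operation and the 50 ms budget are invented for illustration:

```python
import time

def measure_response_time(operation, runs=100):
    """Crude Q4-style check: average latency of an operation over N runs."""
    start = time.perf_counter()
    for _ in range(runs):
        operation()
    return (time.perf_counter() - start) / runs

def fast_operation():
    # Stand-in for the code path under test (an assumption for the sketch)
    sum(range(1000))

average = measure_response_time(fast_operation)
# The assertion turns the performance requirement into an automated test
assert average < 0.05, f"latency budget exceeded: {average:.6f}s"
```

In practice, teams would use dedicated load and performance tools for this quadrant, but the principle is the same: a machine-checkable pass/fail criterion that can run in the CI pipeline.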
The clouds at the quadrant corners signify whether tests in that quadrant generally require automation, manual testing, or specialized tools. The division of tests into quadrants allows teams to strategize whether they have the right skills to accomplish each of the different types of testing, or if they have the necessary hardware, software, data, and test environments. It also makes it easier to customize your agile testing process on a project-by-project or skill-by-skill basis.
For example, if you don't have a tester on your QA team with appropriate load or performance testing skills, it helps you see the need to bring in a contractor or outsource that particular test. A testing strategy based on the Agile Testing Quadrants requires effective workgroup communication, which is made easier by a test management tool that allows the team to work collaboratively in real-time.
DevOps Test Automation
Testers on DevOps teams also need to leverage best practices in Agile testing, Continuous Integration, and test-driven development to accelerate their QA processes and reduce cycle time. This includes automating as many tests as possible (e.g., GUI, API, integration, component, and unit tests). Crispin and other Agile testing experts favor automating unit tests and component tests before other tests, since these represent the highest return on investment.
The Agile test automation pyramid is a strategy guide for automating software tests (Source).
If developers are doing test-driven development (TDD), they'll have written unit tests before the application code is written. A unit test in TDD is designed to fail until application code is written to fulfill the conditions of the test. Writing the test first ensures that the developer understands the required behavior of the new code. TDD unit tests are the easiest to automate, since they can be stored and rerun as regression tests whenever a new build is done.
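The red-green cycle can be sketched in a few lines. The function name, rate, and rounding rule below are invented for illustration; the point is the order of the steps, not the domain logic:

```python
# Step 1 (red): the test is written first. Running it at this point fails,
# because apply_monthly_interest has not been implemented yet.
def test_monthly_interest_is_added_to_balance():
    assert apply_monthly_interest(balance=1000.00, annual_rate=0.12) == 1010.00

# Step 2 (green): the minimal implementation that satisfies the test.
def apply_monthly_interest(balance, annual_rate):
    return round(balance * (1 + annual_rate / 12), 2)

# Step 3: the test now passes and is kept as an automated regression test
# that reruns on every new build.
test_monthly_interest_is_added_to_balance()
```

Because the test encodes the required behavior before any code exists, it doubles as executable documentation of that behavior.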
In addition to doing bottom-up testing for unit and components, many Agile DevOps teams also practice test-first approaches such as acceptance test-driven development (ATDD) and behavior-driven development (BDD) for tests higher up on the Test Automation Pyramid. This allows testing to be repeated in increments as software components are assembled upon each other.
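Higher up the pyramid, ATDD/BDD acceptance tests express a business rule before the code exists, typically in a Given/When/Then shape. The sketch below uses plain Python rather than a real BDD framework such as Cucumber, and the cart and discount rule are invented for illustration:

```python
class Cart:
    """Toy shopping cart used to illustrate a BDD-style acceptance test."""
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        subtotal = sum(price for _, price in self.items)
        # Business rule under test: orders of $100 or more get 10% off
        return round(subtotal * 0.9, 2) if subtotal >= 100 else subtotal

def test_discount_applies_to_large_orders():
    # Given a cart with $120 worth of items
    cart = Cart()
    cart.add("headphones", 80.00)
    cart.add("keyboard", 40.00)
    # When the total is calculated
    total = cart.total()
    # Then a 10% discount is applied
    assert total == 108.00

test_discount_applies_to_large_orders()
```

Note that the test reads in business terms (cart, discount, order size), which is what makes it business-facing even though it is fully automated.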
Test Automation Frameworks
Test automation works by running a large number of tests repeatedly to make sure an application doesn’t break whenever new changes are introduced. For most Agile DevOps development teams, these automated tests are usually executed as part of a Continuous Integration build process. To simplify the automation effort, many DevOps teams rely on test automation frameworks made of function libraries, test data sources, and other reusable modules that can be assembled like building blocks so teams can create automation tests specific to different business needs.
For example, a team might use a specific test automation framework to automate GUI tests if their software end users expect a fast, rich, easy UI experience. If the team is developing an app for an IoT device that primarily talks to other IoT devices, they would likely use a different test automation framework.
Frameworks enable teams to build and reuse blocks of code for future tests but they're not a panacea that solves all test automation problems. Selecting, implementing, and updating a test automation framework should be done with the same care and thought that goes into writing production code. While test automation frameworks can dramatically cut test suite maintenance costs and improve productivity on DevOps projects, their proper implementation still takes time, skill and a lot of experimentation.
The image below shows how different parts of a test automation suite work together.
Starting at the top, the framework has the following components:
Custom code: This is code specific to the teams’ needs and may include abstractions for interacting with page- or view-level objects, communicating to web services, checking the database, etc.
Framework: Frameworks like Robot or Cucumber allow teams to write code that focuses on the business problem being tested versus the specific UI technology. In some cases, this enables the same test to be reused across different web browsers, mobile apps, etc.
Driver: The driver is the lowest-level component. It knows how to interact with the application’s specific UI. For example, Selenium WebDriver has different drivers that know how to manipulate Chrome, Firefox, Microsoft Edge, etc.
Application: This is the actual UI technology being tested (i.e., a web browser, native iOS or Windows desktop application).
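The layering above, with custom code sitting on a framework that sits on a driver, can be sketched as follows. Everything here is illustrative: `FakeDriver` stands in for a real driver such as Selenium WebDriver, and `LoginPage` is the custom page-object layer a team would write for its own application:

```python
class FakeDriver:
    """Driver layer: stands in for a real UI driver (e.g., a WebDriver)."""
    def __init__(self):
        self.fields = {}

    def type_into(self, element_id, text):
        self.fields[element_id] = text

    def click(self, element_id):
        # A real driver would dispatch a click event to the browser here
        self.fields["last_clicked"] = element_id

class LoginPage:
    """Custom-code layer: a page object that hides UI details from tests."""
    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver.type_into("username", user)
        self.driver.type_into("password", password)
        self.driver.click("submit")

# The test itself reads in business terms; swapping the driver (Chrome,
# Firefox, a mobile driver) would not change this code, only the bottom layer.
driver = FakeDriver()
LoginPage(driver).log_in("alice", "s3cret")
assert driver.fields["username"] == "alice"
assert driver.fields["last_clicked"] == "submit"
```

This separation is what lets a team replace the UI technology or driver without rewriting the tests themselves, which is the main maintenance payoff of a framework.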
The majority of surveyed projects use Agile/Scrum methodology.
As part of a yearly survey titled How the World Tests, Zephyr recently surveyed over 10,000 customers in more than 100 different countries. One of the main questions asked was how many projects were being done using Agile methodology. In 2016, a clear majority of projects across the board were run in an Agile/Scrum way, and almost 30% used a hybrid/customized version of Agile.
The survey found that one of the biggest barriers to using Agile effectively had to do with test automation. Over 50% of respondents stated that their organizations either did not have enough test automation or did not have enough time to run all the tests needed on fast-paced Agile projects.
Another interesting survey finding: in 2016, 75% of customers used multiple automation tools, up from 58% just a year earlier. Having multiple automation tools means multiple sets of test scripts, plans, execution runs, and results, which can impede test automation efforts and cause serious maintenance issues. Properly designed and constructed test automation frameworks such as the one described above can help by giving Agile teams one place, instead of many, to go to fix or extend test cases.
DevOps Doesn't Do Away With Manual Tests
Agile DevOps projects still need manual testers to engage in exploratory test sessions while the automation test suite runs. In addition to revising and fine-tuning the automated tests, exploratory testers are important on DevOps projects since developers and other team members often get used to following a defined process and can stop thinking outside the box.
Because of the desire for fast consensus among self-organizing Agile teams (including globally distributed ones), collaboration can devolve into groupthink. Exploratory testing combats this tendency by allowing a team member to play the devil's advocate role and ask tough, "what if" testing questions. Because of the adaptable nature of exploratory testing, it can also be run in parallel with automated testing and doesn’t have to slow deployment down on DevOps projects committed to delivering software rapidly, frequently, and more reliably.
Published at DZone with permission of Sanjay Zalavadia, DZone MVB. See the original article here.