API Testing Is Great, So Why Isn't Everyone Doing It?
Well? Why ISN'T everyone doing it? Let's take a look at some likely reasons why and what can be done about it.
The move to microservices- and API-driven architectures is driving significant innovation across industries, but it has also exposed businesses to a hidden layer of risk. The human interfaces (web and mobile UIs) are no longer where the primary business risks lie. Instead, the biggest vulnerabilities hide in the non-human interface: the API.
For this reason, API testing has increasingly become a focus, but we still hear all the time, "What is API testing again, and why do I need it?"
The quick summary: Application Programming Interfaces (APIs) are how applications communicate with each other through a common interface governed by a defined contract. The driving force behind API testing adoption is the ability to test the business logic of an application in a stable manner, independent of the UI. API testing also allows for more comprehensive testing than testing solely at the front end, enabling performance and security tests, for example. Industry analysts and agile experts such as Martin Fowler and Mike Cohn agree that API testing is the way to go. So what's holding us back?
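To make "testing the business logic independent of the UI" concrete, here is a minimal sketch of an API-level test. The endpoint, `FakeAPI` handler, and account payload are all hypothetical stand-ins; the fake service runs in-process so the example is self-contained, but the test itself looks exactly like one you would point at a real service.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical stand-in for a real service: a tiny local endpoint
# that returns an account record as JSON.
class FakeAPI(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/accounts/42":
            body = json.dumps({"id": 42, "balance": 100.0}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), FakeAPI)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_port}"

# The API test: exercise the contract directly, no browser involved.
with urllib.request.urlopen(f"{base}/accounts/42") as resp:
    assert resp.status == 200
    data = json.load(resp)

assert data["balance"] == 100.0
server.shutdown()
```

Note what the test never touches: no page loads, no selectors, no rendering. It validates only the contract (status code, payload shape, values), which is why such tests stay stable while the UI churns.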
The Impact of Ineffective Testing
Software teams want to spend the ideal amount of time, and no more, on testing and debugging to maximize the chances of a successful project. Traditionally, however, it has been difficult to reduce that time because many serious bugs and security vulnerabilities are found late in the software lifecycle, including after release.
The chart below illustrates when defects are introduced into an application and the impact of timing on the cost to repair a defect at each stage. As you can see, the cost of late-cycle defects is significant. This increase comes from many factors: the time it takes to diagnose the issue and identify the root cause, the number of people involved in the process, and the increasing complexity (and therefore risk) associated with defect remediation.
If you are thinking to yourself, "I've seen this before," you probably have! In 1996, Capers Jones released the research behind this chart and, even with the changes in software development practices over the last 20 years, the updated research says it is still relevant today.
Where We Want to Be: the Testing Pyramid
So where are we going wrong? With our approach to quality. We need to look at the chart above and find ways to shift defect detection left, catching defects earlier, when they are easier to diagnose and cheaper to remediate. Techniques like deep code analysis can uncover security and reliability issues embedded in the codebase as soon as the code is written, but to validate runtime or functional behavior, we need to invest time in creating and maintaining automated tests. In an Agile world, those tests must then run continuously so they can shift detection left and catch regressions as soon as new functionality is implemented.
The ideal way to invest time and organize your portfolio of tests is often represented as the "test pyramid," as shown below, by pushing as much testing effort as early as possible in the development timeline. You start with a foundation of unit tests, where bugs found are cheap to fix, and then API, integration, component testing, and system and UI testing round out the levels of the pyramid.
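The foundation of that pyramid can be very small indeed. As a sketch, here is what a unit-level test looks like; `apply_discount` is a hypothetical business-logic function invented for illustration, not from the article.

```python
# Hypothetical business-logic function, for illustration only.
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Unit tests: fast, isolated, and cheap to run on every commit,
# which is why they form the base of the pyramid.
assert apply_discount(100.0, 25) == 75.0
assert apply_discount(19.99, 0) == 19.99
try:
    apply_discount(10.0, 150)
except ValueError:
    pass  # out-of-range input is rejected, as expected
```

Bugs caught here cost minutes to fix; the same logic error surfacing in a system test days later costs far more to trace back.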
But while practices such as test-driven development (TDD), early-stage unit testing, and test automation are becoming more mainstream, software teams are still spending too much time on late-cycle UI and system testing. The current state of affairs is often described as an inverted pyramid (an ice cream cone); however, a closer look at the data reveals a significant lack of API testing in the industry, so a martini glass is more apt:
It's unfortunate that this middle layer of the test pyramid goes largely unused, because the advantages of investing in API testing are real. API tests can be created earlier in the software development lifecycle (as soon as the API contracts/definitions are available), they are more easily automated, and they are fundamentally less brittle to changes in the UI/UX of the application.
It's possible to create scenario-level tests by organizing API tests into common use cases, and automation of API testing paves the way for early-stage and scenario-driven performance and security testing. Ultimately, investment in API testing enables teams to manage change better and get the agility promised from modern development methods. Testing early and often, what's not to like? Unfortunately, teams are struggling to implement API testing for various reasons.
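A scenario-level API test chains individual calls into a realistic use case. Here is a minimal sketch; `OrderService` is a hypothetical in-memory stand-in for a real HTTP client, used so the example stays self-contained, and the create/get/cancel flow is an invented example rather than anything from the article.

```python
# Hypothetical in-memory stand-in for an orders API.
class OrderService:
    def __init__(self):
        self._orders = {}
        self._next_id = 1

    def create_order(self, item: str, qty: int) -> dict:
        order = {"id": self._next_id, "item": item, "qty": qty, "status": "open"}
        self._orders[self._next_id] = order
        self._next_id += 1
        return order

    def get_order(self, order_id: int) -> dict:
        return self._orders[order_id]

    def cancel_order(self, order_id: int) -> dict:
        self._orders[order_id]["status"] = "cancelled"
        return self._orders[order_id]

# Scenario test: create -> verify -> cancel -> verify,
# mirroring what a real user would do through the UI.
api = OrderService()
order = api.create_order("widget", 3)
assert api.get_order(order["id"])["status"] == "open"
api.cancel_order(order["id"])
assert api.get_order(order["id"])["status"] == "cancelled"
```

The same chained structure is also what makes these scenarios reusable for performance testing (replay the scenario under load) and security testing (replay it with malformed inputs or missing credentials).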
What's Constraining API Testing?
The biggest impediment to broader adoption of API testing is the creation of the tests themselves. It's not easy to create meaningful tests at the API level, let alone string them together into proper test scenarios. Equally fundamental is the knowledge gap between developers and testers: API tests require knowledge and abilities that testers often lack, and managers don't want to assign developers to integration or API testing.
Developers work from the bottom of the pyramid up, and are comfortable working at the unit level. It's their code (or at least, their realm of responsibility), and unit testing seems a natural fit in their workflow. Unit test automation and guided unit test creation have improved the efficiency at this level, and the software industry understands the need for thorough testing here.
Testers, on the other hand, are working from the top of the pyramid at the UI level, where the use cases and interfaces are intuitive and easy to map to original business requirements. Their view of the application is on the outside looking in.
API testing sits between these two roles and requires knowledge of both the design of the interfaces and how they are used. Testers usually don't work at this level because they view the API as code. Developers understand interfaces and APIs, but they typically lack the complete picture of how an interface will be used in conjunction with other subsystems, so they view API testing as functional testing that falls outside their role.
Until recently, there has been a lack of test tool automation to help bridge this gap between unit and system testing, developers and testers. To help the software industry get closer to the ideal test pyramid and evolve from the martini glass we see today, we introduced the Smart API Test Generator to Parasoft SOAtest, our functional test automation tool that is easy to adopt and use.
Give Up the Martini Glass
The Smart API Test Generator is a plugin for the Google Chrome web browser that monitors manual testing and uses artificial intelligence to create automated API scenario tests, lowering the technical skills required to adopt API testing and helping you build a comprehensive API testing strategy that scales across the team and organization. It works like this:
The Smart API Test Generator monitors background traffic while you're executing manual tests and sends it to its artificial intelligence engine, which identifies API calls, discovers patterns, and analyzes the relationships between the calls, automatically generating complete, meaningful API test scenarios rather than just a series of API test steps.
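To build intuition for what "discovering patterns in recorded traffic" means, here is a deliberately naive sketch that clusters recorded calls by the resource they touch. This is not Parasoft's actual algorithm; the recorded calls and the grouping rule are invented purely to illustrate the general idea.

```python
import re

# Hypothetical recorded traffic, as "VERB /path" strings.
recorded_calls = [
    "POST /users",
    "GET /users/7",
    "POST /orders",
    "GET /orders/12",
    "PUT /users/7",
]

def scenario_key(call: str) -> str:
    # Drop the verb and any trailing numeric ID to find the resource family.
    path = call.split(" ", 1)[1]
    return re.sub(r"/\d+$", "", path)

# Group calls that touch the same resource into one candidate scenario.
scenarios = {}
for call in recorded_calls:
    scenarios.setdefault(scenario_key(call), []).append(call)

assert scenarios["/users"] == ["POST /users", "GET /users/7", "PUT /users/7"]
assert scenarios["/orders"] == ["POST /orders", "GET /orders/12"]
```

A real engine must do far more, such as tracing data dependencies (an ID returned by one call feeding a later call), but even this toy grouping shows why raw traffic needs analysis before it becomes a meaningful scenario.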
Making API Testing More Accessible
The Smart API Test Generator makes API testing more accessible to test teams because these API test scenarios are created using testing practices they are already doing. And unlike manual or even automated UI tests, the recorded API activity helps testers collaborate better with developers, with a single artifact that can be easily shared and understood by both teams, and is better at diagnosing the root cause of defects than a complex UI test that requires the entire application to be assembled.
With only UI testing, on the other hand, developers and testers tend to remain siloed in their communication and debugging techniques, often leading to long wait times and many iterations between defect introduction, detection, and resolution.
The Power of API Test Scenarios
API interactions recorded during UI testing require some sort of organization into scenarios or use cases. SOAtest Smart API Test Generator's artificial intelligence helps by creating scenarios based on the relationships between the different API calls.
Without the Smart API Test Generator, users would have to spend time investigating their test cases, looking for patterns, and manually building the relationships that form each test scenario. In addition, Parasoft SOAtest provides an intuitive, UI-driven method for describing assertions, allowing testers to express complex assertion logic without writing any code. Without it, users would code each assertion by hand and might miss one or build it incorrectly. These API tests can then be extended with visual tooling and tool-assisted logic to create larger-scale test suites.
Although software teams acknowledge the goal of reaching an ideal distribution of unit, API, and UI tests, the reality is that the average team is doing an average job of unit testing and still relying on late-stage UI and system tests. API testing provides an ideal communication mechanism between developers and testers, with a high level of maintainable automation that can be extended into performance and security testing.
Shifting these tests left and executing them earlier in the software lifecycle means catching critical security and architectural defects early, where they are easier to diagnose and less risky to fix. Leveraging the automation provided by Parasoft SOAtest's Smart API Test Generator, API testing is more accessible and the time associated with creating meaningful test scenarios can be significantly reduced.
Published at DZone with permission of Mark Lambert , DZone MVB. See the original article here.