Why Record and Replay Isn't Enough for Automated API Testing

Discover how AI can convert manual UI tests into automated API tests, so you don't need expertise in API testing, or even the ability to write any code, to get started.

By Chris Colosimo · Jul. 12, 2018 · Opinion

A couple of weeks ago, we released a new capability in Parasoft SOAtest called the Smart API Test Generator. I was geeked. This technology is legitimately groundbreaking: it uses artificial intelligence to convert manual UI tests into automated API tests, so you don't need expertise in API testing, or even the ability to write any code, to get started. It's all scriptless, and it's activated through a simple plugin for Chrome, so you don't have to install a large toolset to use it.

However, at the STAREAST testing conference back in May, where I gave a long talk about how awesome this technology is, people kept asking me how it was different from the record and replay technologies that already exist on the market. Of course, artificial intelligence is the answer, but AI for AI's sake is meaningless. Why do we even care?

We care because record and replay testing just isn't enough. We needed to make API testing easier than that! To really scale API testing adoption and tackle the problems testing teams have keeping pace with development, we needed more. Instead of just collecting traffic, recording it, and playing it back, we needed to automatically help users identify and organize captured API activity into meaningful, reusable, and extensible tests. That's how we lower the bar for API testing adoption and get more testers involved.

Why We Even Need API Testing

First, let me just make sure you understand how important this is.

Historically, organizations have relied on UI testing as the primary testing practice because it is easy and intuitive to define and execute, and (at least initially) easy to automate. There is a low barrier to entry, and it can scale across a large team of testers.

The challenge with this exclusive reliance on manual and UI testing lies in the hidden costs. Anyone who has worked with Selenium knows that things get difficult when the UI changes and you need to update your scripts. In fact, we've found that up to 80% of testing time is spent either executing manual UI tests or fixing automated UI tests that have broken as a result of application change. On top of that, UI testing can't be executed until the full application is available, and if a defect is discovered, there is a high cost of rework because the application needs to be torn apart, fixed, and reassembled before testing can continue. Often, this late-cycle defect detection leads to significant release delays and raises the total cost of testing.
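
To make that concrete, here's a minimal Selenium sketch in Java (the page URL, element IDs, and locators are hypothetical): the test works until someone renames an ID or restructures the DOM, at which point a locator throws and the whole script has to be reworked.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class CheckoutUiTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            // Hypothetical storefront page and element IDs.
            driver.get("https://shop.example.com/checkout");

            // Each locator is welded to the current DOM. Rename the ID,
            // move the button into a new container, or rework the page
            // layout and these lines throw NoSuchElementException,
            // breaking the whole script until someone updates it.
            driver.findElement(By.id("coupon-code")).sendKeys("SAVE10");
            driver.findElement(By.cssSelector("#checkout-form button.submit")).click();

            String total = driver.findElement(By.id("order-total")).getText();
            if (!total.contains("$")) {
                throw new AssertionError("Unexpected total: " + total);
            }
        } finally {
            driver.quit();
        }
    }
}
```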

Complementing UI Testing With API Testing

To complement and reduce the reliance on UI testing, organizations can leverage API testing, which solves many of these issues by providing maintainable, end-to-end scenarios that can be reused for more than just functional testing. API tests create a good communication channel between developers and testers since they help document the API's behavior in concrete, realistic terms. Shifting the diagnosis and fix of bugs and security vulnerabilities found by API testing to earlier in the lifecycle has a big pay-off in reaching schedule and quality goals.

Organizations, however, have struggled to adopt API testing because even awesome API testing tools just haven't historically provided enough help. To use API testing tools effectively, testers have needed intimate knowledge of the APIs they are trying to test, including how the application in question uses them, which requires specialized skills and expertise. And developers don't have the time to test them, so this extremely beneficial practice ends up avoided: untenable for testers and undesirable for developers.
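
To illustrate that barrier, consider what even a small hand-written API test demands. The sketch below uses Java's built-in HTTP client against a hypothetical /api/orders endpoint (none of these routes, headers, or fields come from a real service): the tester has to know the verb, the payload shape, the authentication scheme, and which parts of the response constitute a pass.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OrderApiTest {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // The tester has to know all of this up front: the route, the verb,
        // the content type, the auth header, and the exact JSON the service expects.
        String payload = "{\"sku\":\"ABC-123\",\"quantity\":2}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://shop.example.com/api/orders"))
                .header("Content-Type", "application/json")
                .header("Authorization", "Bearer <token>")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // ...and which parts of the response mean the test passed.
        if (response.statusCode() != 201 || !response.body().contains("\"status\":\"CREATED\"")) {
            throw new AssertionError("Order creation failed: "
                    + response.statusCode() + " " + response.body());
        }
        System.out.println("Order API test passed");
    }
}
```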

Why Build API Tests From Traffic ("Record and Replay Testing")

To solve this challenge, functional test automation companies many years ago came up with the idea of recording API activity and creating API tests from traffic. This was powerful: by simply recording the transactions between the application and back-end systems, you could capture the activities of the APIs, including how the API calls structured the data being passed.

With this technology, you were able to record the scenarios that were taking place in the back-end systems. This helped non-technical users understand which APIs were called and get a basic sense of the data being used as each one was called; however, simple traffic collecting didn't help them skill up or learn how to maintain or scale their tests. It couldn't teach them the technical skills required to build different tests with all the different message formats and protocols utilized by APIs, and it didn't provide enough help on its own to allow a non-technical user to approach the practice. It's a long road from a traffic recording to a fully functioning API test scenario.

Why Record and Replay Isn't Enough

That's where we started thinking about the next step in lowering the barriers to adopting API testing. Simply recording network traffic between the tester's UI and the target application isn't sufficient to help automate API testing to the point where its usefulness is realized. It's analogous to an MP3 audio recording: you can play it back to hear the song, but it doesn't contain any information about how the song was created or what instruments were used. The song can't be modified or extended.

Consider the following issues with simple record and replay testing:

What If My UI Changes?

UIs are in constant flux during development, and maintaining UI-based test automation is time-consuming. UIs only expose a certain, possibly limited, representation of the underlying business logic of the application, and relying on record and replay is both limiting and susceptible to breakage from frequent changes.

What Is the Right Traffic?

Application testing at the system level from the UI is going to create lots of network traffic. It's difficult, even for the trained eye, to decipher which traffic is part of an actual test scenario happening at the UI level. Relying on human interpretation of network traffic is both time-consuming and error-prone. Moreover, it's typically not a skill testers have, so they have to rely on developers to help.
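
As a rough illustration of that triage problem, suppose the recorded traffic has been reduced to a list of method/URL/content-type entries (the filtering rules and URLs below are made up): most of a UI session is static assets, analytics beacons, and third-party calls, and someone, or something, has to decide which requests actually belong to the business scenario.

```java
import java.util.List;

public class TrafficTriage {

    // A captured exchange reduced to the fields that matter for triage.
    record RecordedCall(String method, String url, String contentType) {}

    // Heuristic only: keep JSON calls to our own API, drop assets and trackers.
    static boolean looksLikeScenarioTraffic(RecordedCall call) {
        return call.url().contains("/api/")
                && call.contentType().startsWith("application/json")
                && !call.url().contains("analytics");
    }

    public static void main(String[] args) {
        List<RecordedCall> session = List.of(
                new RecordedCall("GET",  "https://shop.example.com/static/app.js", "text/javascript"),
                new RecordedCall("POST", "https://analytics.example.com/collect",  "application/json"),
                new RecordedCall("GET",  "https://shop.example.com/api/cart",      "application/json"),
                new RecordedCall("POST", "https://shop.example.com/api/orders",    "application/json"),
                new RecordedCall("GET",  "https://cdn.example.com/img/logo.png",   "image/png"));

        // Even this toy session is mostly noise; real recordings have hundreds of entries.
        session.stream()
                .filter(TrafficTriage::looksLikeScenarioTraffic)
                .forEach(call -> System.out.println(call.method() + " " + call.url()));
    }
}
```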

How Do I Connect These Test Steps Into Scenarios?

Creating test scenarios from basic traffic recordings is difficult, and if multiple tests are needed to build a scenario, that difficulty multiplies. Replaying a traffic recording in place of a scenario is often difficult because it relies on the exact preconditions of the original test. Moreover, it can be impossible to replay the same test repeatedly, which is important for creating performance or security-related tests.
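
Here's a hedged sketch of why verbatim replay breaks down, assuming a hypothetical two-step orders scenario: step one creates an order and the server assigns a fresh ID, and step two has to use that new ID rather than the one baked into yesterday's recording. A raw replay sends the stale value; a real scenario has to extract the value from one response and chain it into the next request.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class OrderScenarioTest {
    private static final HttpClient CLIENT = HttpClient.newHttpClient();
    private static final String BASE = "https://shop.example.com/api"; // hypothetical service

    public static void main(String[] args) throws Exception {
        // Step 1: create an order. The server generates a fresh ID each run,
        // so the ID captured in yesterday's recording is already stale.
        HttpResponse<String> created = CLIENT.send(HttpRequest.newBuilder()
                        .uri(URI.create(BASE + "/orders"))
                        .header("Content-Type", "application/json")
                        .POST(HttpRequest.BodyPublishers.ofString("{\"sku\":\"ABC-123\",\"quantity\":2}"))
                        .build(),
                HttpResponse.BodyHandlers.ofString());

        // Correlation step that a raw replay simply does not have:
        // pull the generated ID out of step 1's response...
        Matcher m = Pattern.compile("\"id\"\\s*:\\s*\"([^\"]+)\"").matcher(created.body());
        if (!m.find()) {
            throw new AssertionError("No order id in response: " + created.body());
        }
        String orderId = m.group(1);

        // ...and feed it into step 2, so the scenario can be rerun any number of times.
        HttpResponse<String> fetched = CLIENT.send(HttpRequest.newBuilder()
                        .uri(URI.create(BASE + "/orders/" + orderId))
                        .GET()
                        .build(),
                HttpResponse.BodyHandlers.ofString());

        if (fetched.statusCode() != 200) {
            throw new AssertionError("Could not fetch order " + orderId);
        }
        System.out.println("Scenario passed for order " + orderId);
    }
}
```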

How Can I Capture and Reuse the Knowledge?

A traffic recording is simply the sum of all the network activity during a test session. There's no inherent understanding of the underlying message passing or of its relationship to API services. Without this, it's impossible to extend these recordings for other purposes or even make changes to adapt to new requirements. They're often frozen in time and only useful for the period in which they were recorded.

Let's Get Back to Artificial Intelligence

This is where artificial intelligence comes into play, so that the traffic recording can not only take place but also be extended into real, actionable value for its users. That's why we developed the Smart API Test Generator: to give novice API testers a place to get started with API testing without writing a single line of code, so they can quickly build full, meaningful test scenarios and even extend those API tests into security and performance tests, leveraging the simple, intuitive interface of Parasoft SOAtest.
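
To give a sense of what "extending into performance tests" can mean in practice (this is a generic Java illustration, not SOAtest output, and the endpoint is hypothetical), the same request that backs a functional API test can be fired concurrently to get a crude latency baseline.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class CartLoadProbe {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://shop.example.com/api/cart")) // hypothetical endpoint
                .GET()
                .build();

        int users = 20; // simulated concurrent users
        ExecutorService pool = Executors.newFixedThreadPool(users);
        List<Future<Long>> timings = new ArrayList<>();

        // Reuse the functional request as a load probe: same call, many threads.
        for (int i = 0; i < users; i++) {
            timings.add(pool.submit(() -> {
                long start = System.nanoTime();
                HttpResponse<String> resp = client.send(request, HttpResponse.BodyHandlers.ofString());
                if (resp.statusCode() != 200) {
                    throw new IllegalStateException("Unexpected status " + resp.statusCode());
                }
                return (System.nanoTime() - start) / 1_000_000; // elapsed time in ms
            }));
        }

        long worst = 0;
        for (Future<Long> t : timings) {
            worst = Math.max(worst, t.get());
        }
        pool.shutdown();
        System.out.println("Worst response time across " + users + " concurrent calls: " + worst + " ms");
    }
}
```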

How Does It Work?

As you are testing your UI, the Smart API Test Generator monitors the underlying API calls that are made to your application, just like a traffic collector might, and then uses artificial intelligence to discover patterns and understand relationships between those API calls. It can then generate automated API test scenarios that perform the same actions as your UI tests but are fully automated and easily extendable.

But why does that matter? Here are some of the benefits this method provides:

  • Reduces time spent determining the right way to build API tests by automatically converting the actions you perform in your browser into automated API tests that model the same actions you performed in the UI (in the right order).
  • Makes it easier to build meaningful, comprehensive API tests by automatically creating full testing scenarios based on the relationships between the different API calls. (Without this, users have to spend time investigating test cases, looking for patterns, and manually building the relationships to form each test scenario.)
  • Automatically adds assertions and validations to ensure your APIs work as intended so you can perform even the most complex type of assertion logic without having to write any code (or risk building them wrong).
  • Reduces the time spent maintaining tests. Because it's scriptless, users don't have to spend time rewriting code for test cases whenever a service changes.
  • Helps development and test teams collaborate with a single artifact that can be easily shared and understood by both teams (and is better at diagnosing the root cause of defects than a UI test).
  • Lays the foundation for a scalable API testing strategy by helping users extend tests, test flow logic, and data solutions to accomplish the full scope of API test coverage needed to fully validate applications in the short time given.

To summarize, the tool both automatically creates tests based on a meaningful interpretation of the captured API activity and supports the easy extension and maintenance of these tests so their value is multiplied throughout the software lifecycle.

Let's Take It a Step Further

All of this is good in its own right, but what I get even more excited about is how the Smart API Test Generator helps users understand the relationships between UI actions and API calls, making it easier for testers to "skill up" and adopt a comprehensive API testing practice. Since API testing can be fully automated and easily scales, teams can lower the total cost of quality while avoiding delayed releases.

Let's break that down a little. Because the Smart API Test Generator takes on the heavy lifting, it gives testers an easy, scriptless place to start building API tests and lowers the technical entry point to API testing. It brings beginners into the API testing world and into the user-friendly Parasoft SOAtest ecosystem, where they benefit from powerful visual tools that are easy to adopt and use.

That's Why I'm Excited

Oh, the implications! Traffic collection of API activity during system and UI testing is insufficient for automating API testing, but that's all the industry has had until now. The dependency on preconditions makes these recordings less reusable and almost impossible to extend for other purposes. Not to mention the difficulty of creating meaningful test scenarios from complex traffic, something most testers are not skilled at.

But that doesn't matter anymore! Now that we have the Parasoft SOAtest Smart API Test Generator, users can leverage artificial intelligence for the heavy lifting. Beginning API testers can use it to get started and learn how API testing works, and experienced API testers can use it to be wildly more efficient (that's one of the main ways we use it now, here at Parasoft). At the end of the day, organizations save time and money by letting a machine build meaningful, extensible, and reusable tests. It is 2018, right?


Published at DZone with permission of , DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
