Turn Databases Into APIs for Data-Driven Testing
Use dynamic data rather than fixed or pre-built data to leverage a better breed of Data-driven Testing (DDT) that can validate full API consumer flows.
The most effective method of testing an API program involves creating multi-step integration tests that validate common API consumer flows. API endpoints are meant to work together, so it follows that test data coming from one API that feeds another API should not be fixed or pre-built. This matters because the less you rely on fixed data, the more unpredictable, and therefore thorough, the testing path becomes.
Moreover, API mutation operations may have side effects that cannot be evaluated by simply validating the same endpoint. Side effects propagate throughout the system, and their correctness can only be validated by querying other endpoints and comparing the results.
The simplest example of this can be found in the relationship between the "Add to Cart" and the "Shopping Cart" endpoints. Only by accounting for the price of every added item can you tell whether the shopping cart "Total" is correct.
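As a minimal sketch of that idea, assuming hypothetical /products, /cart/items, and /cart endpoints and Python's requests library (all names here are illustrative, not a real API):

import requests

BASE = "https://api.example.com"  # hypothetical API under test

def test_cart_total_matches_item_prices():
    # Step 1: pick items dynamically from the live catalog instead of
    # relying on fixed or pre-built test data.
    picked = requests.get(f"{BASE}/products", timeout=10).json()[:3]
    for item in picked:
        requests.post(f"{BASE}/cart/items", json={"id": item["id"]}, timeout=10)

    # Step 2: validate the side effect on a different endpoint: the
    # cart "Total" must equal the sum of the added items' prices.
    cart = requests.get(f"{BASE}/cart", timeout=10).json()
    assert cart["total"] == sum(item["price"] for item in picked)

Because the items are chosen at run time, the test keeps working as the catalog changes, which is exactly the resilience this approach is after.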
While this dynamic data methodology does not adhere exactly to the classic Data-Driven Testing (DDT) definition, it is an advanced variation of it. The test code simply describes the comparison methodology, while the data comes from the API itself, which makes the test resilient to environmental changes and API upgrades. In my experience, this is the ideal approach. However, it is not always feasible to apply, and classic data-driven testing then becomes a necessity.
Ultimately, the goal is to create API tests that are as dynamic and unpredictable as your users.
Beware the buzz that keeps telling you DDT is just "using a big database of inputs." This incomplete definition doesn't convey the full benefits of the methodology. DDT involves using a large database of inputs along with the related outputs (questions and answers) that are considered the correct responses. This becomes very handy when the correctness of an API result cannot be determined by calling another API to compare, or by applying straightforward reverse engineering.
The role of the data-driven test is to perform the call with the given input and validate whether the output is the expected one. If an API were meant to tell you the meaning of life, the universe, and everything, the payload would be "42," and the only way to test it would be to use a database that already knows the answer.
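In code, a classic data-driven test is little more than a loop over question/answer pairs. Here is a minimal sketch in Python, with a hypothetical endpoint and a hard-coded dataset standing in for the test-data database:

import requests

# Question/answer pairs; in real DDT these rows would come from a
# large test-data database rather than being hard-coded.
CASES = [
    ({"question": "life, the universe, and everything"}, "42"),
    ({"question": "six times seven"}, "42"),
]

def test_deep_thought_api():
    for params, expected in CASES:
        resp = requests.get("https://api.example.com/answer", params=params, timeout=10)
        # The stored answer is the only oracle we have for this API.
        assert resp.json()["answer"] == expected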
Consider the following true story of how a fast-food chain detected and solved a substantial API bug by using true DDT (from the eBook: "How Many Costly Software Bugs Are Due to API Flaws"):
A large fast-food chain was concerned that their "Find a restaurant" API that used geolocation was failing to return all the points of presence correctly. They had been extracting the points of presence by ZIP code and loading them into a dedicated "test data" database.
To simplify the QA work of writing meaningful tests, they deployed Bloodhound to convert that database into APIs, so that they didn't need to get into the details of the database, and could treat the test data as any other API.
The resulting multi-step API test was straightforward. First, it called the database as a regular API; then, for each ZIP code, it called the "Find a restaurant" API and compared the returned entries with the expected ones. It didn't take long to spot a bug that severely impacted one of their newest locations, and it was all due to GPS calculation rounding.
In the example, the fast-food chain simplified their QA work by converting the database into APIs. Next, let's see how you can easily do the same thing.
Databases as APIs
Recently, we released a free, open-source micro gateway on GitHub called Bloodhound that captures, tracks, and transforms API calls for simplified debugging. Additionally, Bloodhound can be configured to connect with any database that can be accessed with JDBC or MongoDB, and turn that connection into an API. Bloodhound can then execute a SELECT query and return the result as a JSON array payload. This is an easy way to turn hidden or inaccessible datasets into an API for testing.
In the following example, we'll use the Bloodhound Template found on GitHub, with PostgreSQL as the database. The template will deploy a PostgreSQL database to experiment with, but you will want to connect it to your own database once you understand the mechanics.
- First, deploy the entire package found in the templates folder. This will deploy Bloodhound as well as the databases.
sudo docker-compose up -d
- Next, to experiment with the demo setup, point your browser to “http://localhost:8081” and log in with the following settings:
System: PostgreSQL
Server: Postgres
Username: apipulse
Password: jk5112
Database: apipulse
- Load the test data in the UI:
- To do this, click on “Import,” load the “postgres.sql” file, and click on “Execute.” Note that this will throw an error, but it has no effect on the loaded data.
Now that we have our database populated with test data, we can query it using the API “localhost:8080/postgres.” Let’s try a “select” query by executing the following curl command:
curl -H 'content-type:text/plain' -d 'select * from orders' localhost:8080/postgres
All of the other query operations are also supported via the API call localhost:8080/postgres, which can be the first step in your data-driven multi-step API test. Use this API in your automated API testing suite. Once you understand the mechanics, you can configure the module to connect to your own JDBC databases by modifying the flow YAML. Visit the Bloodhound Wiki to learn more about the module and its configuration options.
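Here is a minimal sketch of such a test in Python, using the requests library, mirroring the fast-food example from earlier; the points_of_presence table, its column names, and the “Find a restaurant” endpoint are all assumptions for illustration:

import requests

DATA_API = "http://localhost:8080/postgres"          # Bloodhound database-as-API
FIND_API = "https://api.example.com/find-restaurant" # hypothetical API under test

def query_test_data(sql):
    # Step 1 of the multi-step test: fetch the expected data through
    # Bloodhound, which returns the result set as a JSON array.
    resp = requests.post(DATA_API, headers={"content-type": "text/plain"},
                         data=sql, timeout=10)
    resp.raise_for_status()
    return resp.json()

def test_find_restaurant_by_zip():
    rows = query_test_data("select zip, restaurant_id from points_of_presence")
    expected = {}
    for row in rows:
        expected.setdefault(row["zip"], set()).add(row["restaurant_id"])

    # Step 2: for each ZIP code, call the API under test and compare
    # the returned restaurants against the expected set.
    for zip_code, ids in expected.items():
        found = requests.get(FIND_API, params={"zip": zip_code}, timeout=10).json()
        assert {r["id"] for r in found} == ids, f"mismatch for ZIP {zip_code}"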
New Tools, New Capabilities, New Testing Goals
There are tools today that make it easy to move beyond the limited methods of providing test data (such as CSV files) and achieve truly robust data-driven testing. API testing platforms and other tools can even generate fake data if you don’t have a database handy. Ultimately, the goal is to create tests that are as dynamic and unpredictable as your users.
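If you take the generated-data route, a library like Python's Faker can synthesize a realistic payload on every run; the user-registration endpoint and expected status below are purely illustrative:

from faker import Faker  # pip install Faker
import requests

fake = Faker()

# A fresh, realistic payload on every run instead of the same CSV rows.
payload = {
    "name": fake.name(),
    "email": fake.email(),
    "address": fake.address(),
}
resp = requests.post("https://api.example.com/users", json=payload, timeout=10)
assert resp.status_code == 201  # hypothetical endpoint and expected status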