
5 Steps to Keep APIs From Sabotaging Performance


Learn how to measure API performance, understand how your APIs impact your application's performance, and prevent these issues in your application.



Many of the applications we all use every day leverage a number of third-party solutions via application programming interfaces, or, as they're more commonly known, APIs. The steady transition from monolithic application architectures to distributed microservices has played a big role in the explosion of APIs. Once relegated to back-office processing, API calls are now mission-critical, enabling development teams to focus on the core differentiation of their application while leveraging state-of-the-art external services for everything else. The API economy is alive and well, and with the explosive growth of API usage, it has become essential to ensure your API performance is up to snuff.

Your application might be extremely well designed, with a well-architected front end, but that means little if your API takes a long time to respond or throws an exception. A reliable, well-defined, and scalable API has become a cornerstone of great software.

The most important aspect of API performance is whether the API can perform well at scale, that is, when it receives thousands of concurrent connections. With an exponentially growing, 24×7 user base and chatty workloads like IoT, the magnitude at which APIs need to be served and consumed is staggering.

Whether you publish an API service for others to consume, or you consume API services in your application, you need to make sure those services work flawlessly and don't throw any of the dreaded exceptions. The following guide offers a straightforward, step-by-step approach to accomplish this mission by load testing and performance-tuning your business-critical APIs.

Step 1. Understand Metrics: How to Measure API Performance

The most common workload today, for websites and many mobile applications, is transactional HTTP traffic, often secured with SSL/TLS. The most important metrics for this workload are the usual ones: RPS (requests per second), TPS (transactions per second), and throughput.

To gauge the user experience for web and mobile apps, a statistical measurement of response time is also critical, since it directly affects business outcomes like conversion rates and brand advocacy. For response time, averages and distributions matter, as do statistical parameters such as percentiles. Being able to see response times both for complete transactions and for individual HTTP requests is also important, and it helps when troubleshooting performance problems.
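To make these statistics concrete, here is a minimal Python sketch of turning raw response times into the average and percentile figures discussed above; the sample latencies are made up purely for illustration.

```python
import statistics

# Hypothetical response times in milliseconds, e.g. collected from one test run.
response_times_ms = [112, 98, 143, 380, 101, 95, 127, 640, 110, 105]

avg = statistics.mean(response_times_ms)
# quantiles() with n=100 returns the 1st..99th percentile cut points.
percentiles = statistics.quantiles(response_times_ms, n=100)
p50, p95, p99 = percentiles[49], percentiles[94], percentiles[98]

print(f"avg={avg:.0f} ms  p50={p50:.0f} ms  p95={p95:.0f} ms  p99={p99:.0f} ms")
```

Averages alone hide outliers; a handful of slow requests can ruin the experience for real users even when the mean looks healthy, which is why the percentiles matter.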

For APIs, a very similar set of metrics applies. REST APIs, which expose services over HTTP, have become the most common style and are widely used in machine-to-machine communication. Because a REST API is built on HTTP, the key metrics are the same: RPS and throughput. Response latency is also very important; milliseconds add up quickly, and you don't want the API to become the bottleneck of your transaction.

Note that, unlike web and mobile application testing, the concept of 'virtual users' is not very relevant for API testing, since what we are measuring is not the end-to-end user experience but a single step within a transaction. As such, the most important performance factor for an API is usually throughput.
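The arithmetic behind the throughput figure is simple; the snippet below converts requests per minute into RPS using illustrative numbers of the same magnitude as the results in Step 5.

```python
# Illustrative only: throughput expressed as requests per second.
requests_completed = 14_688          # e.g. measured over a one-minute window
window_seconds = 60

rps = requests_completed / window_seconds
print(f"{rps:.0f} requests/second")  # roughly 245 RPS
```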

Step 2. Gather API Usage Data

Before you start testing your APIs, it is important to gather some usage data from your API endpoints in typical high-traffic scenarios. If you are the consumer of the API, this could be quite easy; if you are the publisher, it may be more complex. Basically, it means understanding how developers are going to use the API in their applications.

What is the maximum throughput? Do all the endpoints receive more or less the same traffic, or are some endpoints used more than others? Is the RPS fairly steady, or erratic? Is there any pattern in the traffic? Which combination of endpoints is used the most?

And most important of all: what data is used with your APIs?

This information can easily be derived from your monitoring solution, like New Relic, and is a very important piece of data to start your test. You can also log a set of API calls with live traffic (with data) and use that as a basis for your tests.
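If you do start from logged traffic rather than a monitoring tool, even a small script can surface the busiest endpoints and the peak RPS. The sketch below assumes a hypothetical log file and line format; adapt the parsing to whatever your server or gateway actually emits.

```python
from collections import Counter

endpoint_hits = Counter()   # requests per endpoint
per_second = Counter()      # requests per timestamp (second resolution)

with open("access.log") as log:  # hypothetical log file
    for line in log:
        # Assumed format: "2023-05-01T12:00:01 GET /v1/orders 200 123ms"
        parts = line.split()
        if len(parts) < 3:
            continue
        timestamp, _method, path = parts[0], parts[1], parts[2]
        endpoint_hits[path] += 1
        per_second[timestamp] += 1

print("Busiest endpoints:", endpoint_hits.most_common(5))
print("Peak RPS observed:", max(per_second.values(), default=0))
```

The output gives you both the traffic mix across endpoints and a realistic peak rate to aim for in your tests.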

Step 3. Test Your Endpoints in Isolation First

We highly recommend testing your API endpoints in isolation first. Start with the basics: test one endpoint at a time, and once you have a good baseline for each endpoint, proceed to create sequences of requests that mimic complex transactions.

It used to be quite complex to test a single endpoint — one basically had to develop an application for that purpose. But now with modern tools, it has become possible to test individual endpoints at scale with real data.
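As a rough illustration of what "testing one endpoint in isolation" means in code, here is a minimal Python sketch that hammers a single endpoint concurrently and records latencies. The URL, concurrency, and request count are placeholders; a real load testing tool does this at much larger scale and from multiple regions.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://api.example.com/v1/orders"   # hypothetical endpoint


def timed_call(_):
    start = time.perf_counter()
    resp = requests.get(URL, timeout=10)
    elapsed_ms = (time.perf_counter() - start) * 1000
    return resp.status_code, elapsed_ms


# 500 requests through 50 worker threads against one endpoint.
with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(timed_call, range(500)))

latencies = [ms for status, ms in results if status == 200]
errors = len(results) - len(latencies)
print(f"errors={errors}  avg={statistics.mean(latencies):.0f} ms  "
      f"p95={statistics.quantiles(latencies, n=100)[94]:.0f} ms")
```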

Postman is widely considered the state-of-the-art tool for building and testing API requests. We love Postman here at Nouvola, as it enables you to test single endpoints or sequences of endpoints with a variety of input options, and to easily inspect the results.

Step 4. Performance Testing Setup and Tool Selection

You need a load testing solution that makes it simple to create tests based on your API and easily derive insights from the results. The best solution of this type is Nouvola. With Nouvola, you can import directly from Postman and convert your Postman collections into a test, seamlessly, with a single click. You'll have a great UI with results already processed into nice graphs and reports, shareable with the rest of the team and with management. And Nouvola is API-centric so you can automate your tests with Jenkins, CodePipeline or the system of your choice.

To get the most out of your load testing, set up the amount of traffic you want to test with a linear ramp. It's also important to verify that your APIs perform well across multiple geographies, so don't forget to set up multiple geographic locations. You may also want to configure the proper authentication for your endpoints. Nouvola allows you to easily incorporate all of these elements into your tests and to validate the response data.
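If you want to prototype the same ideas in code before (or alongside) a hosted tool, here is a rough sketch using the open-source Locust load testing framework, not Nouvola's API; the host, path, and auth token are placeholders.

```python
from locust import HttpUser, task, between


class ApiUser(HttpUser):
    # Each instance simulates one API client; the ramp is set on the command line.
    wait_time = between(1, 2)
    host = "https://api.example.com"      # hypothetical API host

    def on_start(self):
        # Placeholder auth header; swap in whatever scheme your API uses.
        self.client.headers.update({"Authorization": "Bearer <token>"})

    @task
    def get_orders(self):
        # Validate the response body, not just the status code.
        with self.client.get("/v1/orders", catch_response=True) as resp:
            if resp.status_code != 200 or "orders" not in resp.text:
                resp.failure("unexpected status or body")
            else:
                resp.success()
```

Running it headless with something like `locust --headless -u 200 -r 10` ramps up linearly to 200 simulated clients at 10 new clients per second, which approximates the linear-ramp setup described above.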

Step 5. Run Your Tests

Here you go! You are ready to run your first test. Here is an example of the results from an API endpoint performance test. You can increase the load until you find the point at which the API no longer responds well and no longer meets your performance goals.

Test parameters and results:

  • Avg. response latency: 585 ms
  • Throughput: 14,688 requests/min (~244 RPS)
  • Geographic locations: London, Oregon, Australia, Mumbai, São Paulo

As you can see from these results, for this API endpoint the fastest regions are Oregon and London, and the slowest are Mumbai and São Paulo.

And this is not only about load testing or stress testing. It is about making sure that recent code changes don't degrade the performance of the API. We live in a world where data volumes double every 18 months, so the scale at which APIs need to be served will continue to increase rapidly. That's why automating the steps above and running them for every code change, or at regular intervals, is the best approach to promote strong API performance.
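One way to automate this in Jenkins, CodePipeline, or a similar system is to gate the build on the latest test results. The sketch below assumes a hypothetical exported results file and field names; substitute whatever your load testing tool actually reports.

```python
import json
import sys

TARGET_P95_MS = 800        # assumed performance budget
TARGET_ERROR_RATE = 0.01   # assumed acceptable error rate

with open("latest_run.json") as f:   # hypothetical exported results
    run = json.load(f)

p95 = run["p95_latency_ms"]
error_rate = run["errors"] / max(run["requests"], 1)

if p95 > TARGET_P95_MS or error_rate > TARGET_ERROR_RATE:
    print(f"FAIL: p95={p95} ms, error rate={error_rate:.2%}")
    sys.exit(1)                      # non-zero exit fails the pipeline stage

print(f"PASS: p95={p95} ms, error rate={error_rate:.2%}")
```

Wired into your pipeline, a check like this turns the five steps above into a regression guard that runs on every code change.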


