
Next Generation API Management


 What is API management?

API management is an umbrella term covering the entire lifecycle of an API: publishing it, governing it, and exposing it to internal or external systems in a secure and scalable environment.

API Management Software suites

Several vendor products make API management easier for organizations. Popular suites include Layer 7, Apigee, and Intel Mashery; some organizations have also built API management in-house. These suites provide a wide range of functionality: easing interconnection with disparate systems, securely exposing APIs from inherently outdated or even insecure systems, versioning APIs, managing traffic, and configuring throttling.

The architecture below is a very simple representation of an API manager; in practice there could be hundreds of API providers and thousands or even millions of API consumers.


Why the current generation of API management suites is not enough

Current API management enables throttling an API call request based on per-user or per-system throttling rates, but it is ignorant of the performance characteristics of the API provider system. Current API management tools cannot address the following:

  •   Optimal utilization of an API
  •   Predictive crash prevention of an API provider
  •   Real-time crash prevention of an API provider
  •   Predictive throttling rate configuration
  •   Peak/off-peak throttling configuration
 

What the next generation of API management suites needs

Current API management tools provide only static configuration. The next generation will have to do more: integrate with APM (Application Performance Monitoring) tools and Big Data analytics, and build elasticity into their very behavior. This will enable more business-centric, real-time use cases and better monetization of APIs. Such a capability will give a huge advantage in the API management market.





Capabilities for the Next Generation API platforms
Current API management tools have come a long way, but with the market getting crowded, they should focus on next-generation capabilities, some of which can be achieved by leveraging existing platforms like Big Data and APM. Such integration will provide real-time throttling, predictive throttling, and much richer customization options. Here are some of the features that will define the next generation of API management platforms.

  •   Mining performance & historical data
  •   Understanding the performance behavior
  •   Optimal utilization instead of safe throttling rates
  •   Predictive and real-time crash prevention
  •   Predictive throttling configuration
  •   Peak-Time throttling

Mining performance & historical data
The current generation of API management tools does collect performance information about an API, but mostly for reporting. This data can provide powerful insights into the performance characteristics of the API provider: using Big Data analytics on the historical data, an API manager can derive the throughput characteristics of an API.
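As a minimal sketch of this mining step (the function name and sample numbers are illustrative assumptions, not from any real product), raw per-request measurements can be aggregated into a throughput curve keyed by concurrent client count:

```python
from collections import defaultdict
from statistics import mean

def throughput_characteristic(samples):
    """Aggregate raw (client_count, requests_per_sec) samples mined from
    historical API logs into a mean-throughput curve per client count."""
    buckets = defaultdict(list)
    for clients, rps in samples:
        buckets[clients].append(rps)
    return {clients: mean(rates) for clients, rates in sorted(buckets.items())}

# Hypothetical samples mined from access logs for one API provider.
samples = [(100, 95.0), (100, 105.0), (200, 190.0), (200, 210.0), (400, 360.0)]
curve = throughput_characteristic(samples)
# curve → {100: 100.0, 200: 200.0, 400: 360.0}
```

A curve like this is the raw material for every capability discussed below.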

Understanding the performance behavior

This is a critical step in achieving the next-generation capabilities. Every API provider has a specific performance characteristic; understanding it helps the API manager predict when the API provider can scale linearly and when its performance might degrade sharply. Consider the performance characteristics of a few sample API providers shown in Figure 1.

  Figure 1: Reference throughput chart

  •   API-1 saturates at around 1250 clients but shows no degradation of throughput.
  •   API-2 shows linear scalability even at 2000 clients.
  •   API-3 saturates at 1550 clients, degrades after that, and eventually crashes at 1900.
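These three behaviors can be recognized programmatically from a throughput curve. The sketch below (illustrative numbers only, loosely modeled on the Figure 1 shapes; the 0.1 gain threshold is an assumption) finds the saturation point, where the marginal throughput gain per added client falls below a threshold, and the degradation point, where throughput starts to fall:

```python
def classify_curve(curve, gain_threshold=0.1):
    """Find the saturation point (marginal requests/sec gained per added
    client drops below gain_threshold) and the degradation point
    (throughput starts falling) of a clients -> throughput curve."""
    points = sorted(curve.items())
    saturation = degradation = None
    for (c0, t0), (c1, t1) in zip(points, points[1:]):
        gain = (t1 - t0) / (c1 - c0)
        if degradation is None and t1 < t0:
            degradation = c0
        if saturation is None and gain < gain_threshold:
            saturation = c0
    return saturation, degradation

# Curve loosely shaped like API-3 in Figure 1 (illustrative numbers).
api3 = {500: 100, 1000: 180, 1550: 240, 1700: 230, 1900: 0}
sat, deg = classify_curve(api3)  # both land at 1550 clients
```

An API-2-style linearly scaling curve would yield `(None, None)`, telling the manager there is still headroom.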
 
Optimal utilization instead of safe throttling rates

Throttling rates for an API are currently static and pre-defined, usually set to a safe rate based on the performance characteristics of the API provider. But any business wants maximum ROI (return on investment), so it is best that APIs are used at optimal rates to generate maximum revenue. Moreover, any API provider system has variable behavior, sometimes increasing its throughput and, in the worst case, decreasing it. This could be due to many factors, such as:

  •   API version changes
  •   Patch /defect updates
  •   Performance upgrades

Of course, each of these changes can be followed by reconfiguring the throttling rates, but when you integrate APM with the API manager, you effectively get better intelligence about how much load the system can take. If the API provider exposes a paid API service, it is better to allow as many API requests as possible to achieve optimal utilization. This yields the maximum possible revenue for the API provider, instead of the lesser revenue produced by static throttling rates.

When an API provider is hosted in the cloud, it can scale up and down as needed, but organizations subscribe to a plan with a maximum number of virtual CPUs, which limits the maximum throughput. In those cases, too, it is best for the organization that the API is used at optimal rates for maximum ROI.
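Choosing an optimal limit rather than a conservative safe one can be as simple as taking the client count that maximized observed throughput, backed off by a small safety margin. A sketch, reusing the illustrative API-3 curve (the 0.9 margin is an assumption a business would tune):

```python
def optimal_client_limit(curve, safety_margin=0.9):
    """Pick the client limit that maximized observed throughput in the
    historical curve, backed off by a safety margin, instead of a fixed
    conservative limit."""
    best_clients = max(curve, key=curve.get)
    return int(best_clients * safety_margin)

api3 = {500: 100, 1000: 180, 1550: 240, 1700: 230, 1900: 0}
limit = optimal_client_limit(api3)  # 1550 * 0.9 = 1395 clients
```

A static safe configuration might have stopped at, say, 1000 clients; the data-driven limit admits significantly more paid requests.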


Predictive and Real-time crash prevention

Let us go back to Figure 1. For API-3, if the API manager adds more clients beyond 1700, the API provider enters a critical state and eventually crashes. In such scenarios, the API manager can predict an eventual crash from the throughput rates and throttle dynamically to prevent the API provider system from crashing, as long as the real-time throughput follows the historical throughput curve that leads to a crash. This is predictive crash prevention. Current API platforms can approximate it by setting a static maximum number of clients per API provider, but we can move away from static configuration toward optimal utilization and build smartness and auto-configuration into API management platforms.
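In code, predictive crash prevention amounts to an admission gate derived from the historical crash threshold. A minimal sketch, assuming API-3's critical point of 1700 clients from Figure 1 (the class name and 0.9 headroom factor are illustrative):

```python
class PredictiveThrottle:
    """Admit new clients only while the active count stays below the
    historically observed crash threshold, with some headroom."""

    def __init__(self, crash_threshold, headroom=0.9):
        self.limit = int(crash_threshold * headroom)
        self.active_clients = 0

    def admit(self):
        if self.active_clients >= self.limit:
            return False  # throttle: adding more clients risks a crash
        self.active_clients += 1
        return True

# API-3 enters its critical state around 1700 clients (Figure 1).
throttle = PredictiveThrottle(crash_threshold=1700)  # effective limit: 1530
```

Unlike a hand-set static maximum, this limit can be recomputed automatically whenever the mined throughput curve changes.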



For real-time crash prevention, consider API-1 and API-2, which do not show degenerative throughput: the API manager can keep adding clients up to 2000. But suppose there is an issue in the API-2 provider, caused by a bad patch or a recent update that makes it unstable. Its real-time throughput deviates from the historical rates, and the API manager can automatically detect this behavior as the real-time throughput drops drastically. Throttling can then be dynamically tightened to prevent the API provider from crashing, adding a real-time crash prevention feature.



This real-time throttling feature can also be used to increase clients: when the API provider shows better real-time throughput as the number of clients approaches the previous maximum, it is demonstrating additional scaling capability.
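Both directions of this feedback loop can be sketched as one adjustment function that compares live APM throughput against the historical baseline (the ratios and 10% step are illustrative assumptions):

```python
def adjust_limit(current_limit, live_rps, expected_rps,
                 drop_ratio=0.7, boost_ratio=1.1, step=0.1):
    """Real-time throttling: shrink the client limit when live throughput
    falls drastically below the historical baseline, and grow it when the
    provider shows extra scaling headroom."""
    if live_rps < expected_rps * drop_ratio:
        return int(current_limit * (1 - step))   # back off to prevent a crash
    if live_rps > expected_rps * boost_ratio:
        return int(current_limit * (1 + step))   # provider can take more load
    return current_limit

# A bad patch halves API-2's live throughput: back off the limit by 10%.
new_limit = adjust_limit(current_limit=1500, live_rps=100.0, expected_rps=200.0)
```

Run periodically against APM metrics, this replaces a one-time static throttle with a control loop.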

Predictive throttling configuration
The admin of the API manager can arrive at an optimal throttling configuration based on user or time, but over-provisioning can easily happen. For example, suppose the source API can only support 2000 client systems at a throughput of 200 requests/sec, but during configuration an administrator, through an erroneous calculation, arrives at 500 requests/sec and configures that rate. Such a configuration would crash the API provider, as the system cannot scale that far. If the API manager already has analytics telling it the configuration is over-provisioned, it can alert the administrator that the configuration overshoots the API provider's maximum capability. Such intelligence can also suggest the optimal throttling rates for a given API provider. This feature would equip administrators and business users to manage APIs better and with much more ease.
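The validation step described above is a straightforward check of the proposed rate against the mined curve. A sketch using the article's example numbers (function name and message format are assumptions):

```python
def validate_rate(proposed_rps, curve):
    """Check an administrator's proposed throttling rate against the
    maximum throughput ever observed for this provider; return a warning
    string on over-provisioning, else None."""
    max_rps = max(curve.values())
    if proposed_rps > max_rps:
        return (f"over-provisioned: {proposed_rps} req/s exceeds the "
                f"provider's observed maximum of {max_rps} req/s")
    return None

# Provider tops out at 200 req/s around 2000 clients (article's example).
curve = {1000: 150.0, 2000: 200.0}
warning = validate_rate(500.0, curve)   # the admin's erroneous 500 req/s
```

The same lookup can power a "suggested rate" field in the admin console instead of just an alert.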
 
Peak-Time throttling
The current generation of API management does not provide peak-time throttling. Many enterprise systems have peak load periods. Businesses can define separate throttling rates for peak and off-peak periods and better monetize the peak rates.
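Such a policy reduces to a time-windowed rate lookup. A minimal sketch (the 09:00-18:00 window and the two rates are hypothetical business values):

```python
from datetime import time

def throttling_rate(now, peak_start=time(9), peak_end=time(18),
                    peak_rps=500, off_peak_rps=150):
    """Return the throttling rate for a given time of day, using a
    higher (separately monetized) rate during the peak window."""
    return peak_rps if peak_start <= now < peak_end else off_peak_rps

# Mid-morning request falls in the peak window; late evening does not.
morning_rate = throttling_rate(time(10))   # 500 req/s
evening_rate = throttling_rate(time(22))   # 150 req/s
```

Real deployments would also handle time zones and calendars (weekends, sale events), but the lookup shape stays the same.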

Credits
Thanks to Damodaran who helped me in getting information for this article.
–  Ramachandran, Damodaran (Syntel, Ltd)


Thanks to the reviewers who reviewed this article.
Kannan, Aravinth (Syntel, Ltd)
Patil, Bhalchandra (Syntel, Ltd)
Naidu, Narendra (Syntel, Ltd)

Glossary
APM – Application Performance Monitoring
ROI – Return on investment

References
http://apigee.com/docs/
http://www.layer7tech.com/products/layer-7-api-portal
http://www.mashery.com/
