Architecting API Management Solutions With WSO2 API Manager, Part 1
Designing the Solution Architecture, Planning Capacity Requirements, Designing the Deployment Architecture, and Selecting Infrastructure.
At WSO2, we work with enterprises around the globe on implementing API management solutions for various scenarios. These include use cases ranging from managing APIs used within organizations, APIs exposed to business partners, and third-party APIs consumed by internal systems, to APIs publicly exposed to the internet, along with many other business-specific requirements. Organizations that need APIs only for internal and partner usage may initially deploy such systems in their own data centers. Some deployments may also span public cloud infrastructure, depending on needs such as scalability, geographical distribution, disaster recovery, and cost savings. On the other hand, businesses that run their existing systems on public cloud infrastructure may deploy API management solutions on the same platform.
In any of the above scenarios, we first start with the business use cases and design a high-level solution architecture that illustrates the internal systems involved, the integrations among them, the messaging channels required, the external services used, the mediations needed, identity and access management, analytics, monitoring, and every other aspect of the solution needed for exposing and managing APIs at different levels of the organization. This process allows all stakeholders of the project to understand the high-level requirements of the solution and to design the detailed solution architecture accordingly.
Designing the High-Level Solution Architecture
Figure 1: A Sample High-Level Solution Architecture
Most large organizations will have a collection of systems for managing the information of employees, finance, inventory, manufacturing, distribution, maintenance, etc., depending on the nature of their business. Organization-wide client applications that let internal users interact with business functions may need to consume services from multiple systems, either by invoking them directly or via integrated services. In addition, these internal systems may also need to talk to third-party systems such as Salesforce, Google Apps, Concur, and JIRA using organization-level API subscriptions. Moreover, such organizations may also need to expose APIs to the external world, allowing their partners and third-party application developers to consume their business services and build value-added services.
The high-level solution architecture illustrated in Figure 1 addresses all of the above requirements. A collection of internal systems is exposed via a central API management solution, while some systems are integrated using an integration system. Communication channels with external systems are identified on the left and right sides: the consumption of public APIs by external client applications is shown on the left, and the usage of external APIs for implementing internal business functions is shown on the right. The external APIs are also routed through the central API management solution, rather than being integrated directly with the internal systems, for better management.
According to this architecture, internal applications can consume services exposed by internal systems, internal integration systems, and third-party systems via a central API gateway. At the same time, external client applications can also subscribe and consume an organization’s public APIs.
Designing the Detailed Solution Architecture
Figure 2: A Sample Detailed Solution Architecture
Once the high-level solution architecture diagram is in place, enterprise architects may design a detailed solution architecture diagram to visualize the actual interactions between systems; define services, protocols, and messaging channels; list APIs; and identify the communication channels between internal and external networks. The integrations between systems can be implemented according to Enterprise Integration Patterns (EIPs) using an ESB or a lightweight integration services platform such as BallerinaLang. Internal and external communication channels can be isolated by introducing a separate API gateway for public-facing APIs. This approach also allows each API gateway cluster to be scaled independently according to its own capacity requirements.
Moreover, all of the above components may need to be integrated with organization-wide systems that provide identity and access management, analytics, and monitoring capabilities. Such centralized systems matter because, as the business grows, it becomes difficult to manage users and permission models, analyze data, and monitor production deployments if these functions are scattered. Most importantly, the infrastructure platforms required for each system and their geographical distribution need to be carefully designed. These decisions may directly affect API latency, and ultimately user experience, depending on the latencies between data centers, the time taken to replicate databases and filesystems, and so on. Therefore, it is important to capture this information at this stage of the project when designing an API management solution.
Planning Capacity Requirements
The second key phase of designing an API management solution is identifying the throughput required by each API Manager component. Throughput is conventionally measured as the number of requests a component can handle per second, also known as transactions per second (TPS).
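As a back-of-the-envelope starting point, expected request volumes can be converted into TPS figures. The sketch below is illustrative only: the daily volume and the peak-to-average ratio are hypothetical placeholders, and real traffic should be profiled before sizing anything.

```python
# Rough TPS estimation from an expected daily request volume.
# The volume and peak_factor below are hypothetical placeholders;
# substitute figures measured for your own APIs.

SECONDS_PER_DAY = 24 * 60 * 60


def estimate_tps(requests_per_day: int, peak_factor: float = 3.0):
    """Return (average TPS, estimated peak TPS).

    peak_factor approximates how much busier the busiest period is
    than the daily average; real traffic patterns should be measured.
    """
    avg_tps = requests_per_day / SECONDS_PER_DAY
    return avg_tps, avg_tps * peak_factor


avg, peak = estimate_tps(10_000_000)  # e.g. a hypothetical 10M requests/day
print(f"average: {avg:.0f} TPS, estimated peak: {peak:.0f} TPS")
```

Capacity planning should always target the estimated peak rather than the daily average, since sustained bursts are what exhaust a gateway cluster.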
WSO2 API Manager consists of six main components: API Gateway, Publisher, Store, Key Manager, Traffic Manager, and Analytics. In almost all scenarios, the API Gateway receives the highest number of requests. For each API request received by the API Gateway, a statistics message is published to the Analytics server, and API usage data is published to the Traffic Manager. The Key Manager only receives access-token management requests from the API Gateway when they cannot be satisfied from the cache. Similarly, the UI components would not receive a considerable load unless the API portal is exposed to a wider audience as an API-management-as-a-service offering; otherwise, the web UIs are used only for managing API subscriptions and for administrative purposes.
The WSO2 research team, together with the API Manager team, conducts regular performance benchmark tests on WSO2 API Manager to identify the throughput it can handle for different message sizes, backend latencies, and numbers of concurrent users. The results of the latest test, carried out on WSO2 API Manager v2.1.0, are shown below:
Figure 3: Throughput comparison of WSO2 API Manager v2.1.0 (see References)
The performance test shown above was performed on AWS using a compute-optimized EC2 instance (c3.xlarge) with 7.5 GB of memory and 4 vCPUs. The backend service latency is labeled in the graphs as “Sleep Time.” The JVM heap size of API Manager was set to 4 GB, and it was started in all-in-one mode, with all components running in a single JVM. When the API Manager components are instead deployed separately, each component may perform better than shown in these graphs, since dedicated resources are allocated to each.
According to the third graph, one instance of the API Gateway can handle around 3,000 TPS for an API that receives a payload of nearly 10 KB, has no mediation policies, and calls a backend service that takes nearly 500 ms to respond. An API that contains mediation policies would be able to handle around 900 TPS, depending on the complexity of the policies used. However, when the message size increases to around 100 KB, the throughput may decrease to around 400 TPS with the same parameters.
In general, an API Gateway contains a collection of APIs that handle messages of different sizes and communicate with backend services of various latencies. Moreover, backend latency may also vary based on the request parameters. Therefore, these figures should be treated only as rough estimates; when implementing an API management solution for a specific requirement, a performance benchmark test should be executed to identify the actual throughput it will need to handle in production.
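Once a per-node benchmark figure and a target peak load are known, a rough node count for the gateway cluster can be sketched as below. The per-node throughput, target load, utilization ceiling, and minimum node count are all illustrative assumptions, not recommendations, and should be replaced with figures from your own benchmarks.

```python
import math

# Back-of-the-envelope gateway sizing from benchmark figures.
# per_node_tps might come from a benchmark such as the one above
# (e.g. ~3,000 TPS for a 10 KB payload with no mediation policies);
# target_tps is your expected peak load. Both are illustrative.


def gateway_nodes(target_tps: float, per_node_tps: float,
                  utilization: float = 0.6, min_nodes: int = 2) -> int:
    """Return the number of gateway instances to provision.

    utilization keeps each node below a fraction of its benchmark
    throughput to leave headroom; min_nodes enforces a floor for
    high availability.
    """
    needed = math.ceil(target_tps / (per_node_tps * utilization))
    return max(needed, min_nodes)


# A hypothetical 5,000 TPS peak against a ~3,000 TPS-per-node benchmark:
print(gateway_nodes(target_tps=5000, per_node_tps=3000))
```

The utilization factor matters because a node running at its benchmark maximum has no headroom for traffic spikes, garbage-collection pauses, or the loss of a peer node.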
That's all for Part 1, tune back in tomorrow to learn about the four ways we can design the deployment architecture!
References for the Series
WSO2 API Manager Performance and Capacity Planning, WSO2: https://docs.wso2.com/display/AM210/WSO2+API-M+Performance+and+Capacity+Planning
Securing Microservices (Part 1), Prabath Siriwardena: https://medium.facilelogin.com/securing-microservices-with-oauth-2-0-jwt-and-xacml-d03770a9a838
WSO2 API Manager Deployment Patterns, WSO2: https://docs.wso2.com/display/AM2xx/Deployment+Patterns
WSO2 API Manager Customer Stories, WSO2: https://wso2.com/api-management/customer-stories/
Compatibility of WSO2 Products, WSO2: https://docs.wso2.com/display/compatibility/Compatibility+of+WSO2+Products
Benefits of a Multi-regional API Management Solution for a Global Enterprise, Lakmal Warusawithana: https://wso2.com/library/article/2017/10/benefits-of-a-multi-regional-api-management-solution-for-a-global-enterprise/
Published at DZone with permission of Imesh Gunaratne. See the original article here.