
API Gateway vs. Istio Service Mesh

Learn the difference between the traditional API gateway and Istio service mesh. Find out which one you need for cloud-native applications and app modernization.

By Debasree Panda · Jun. 23, 23 · Analysis

Architects, DevOps engineers, and cloud engineers increasingly face a question: should they continue their journey with an API gateway, or adopt an entirely new service mesh technology? In this article, we will examine the difference between the two capabilities and lay out reasons for software teams to consider (or not consider) a service mesh, using Istio as the example because it is the most widely used service mesh.

Please note that our thinking is heavily driven by the concept of MASA, a mesh architecture of apps, APIs, and services that gives software development and delivery teams infrastructure agility, flexibility, and room for innovation.

Also, keep in mind that we may be critical of the API gateway because we are looking toward the future; even so, our evaluation aims to be honest and useful for enterprises planning a significant transformation.

Let us start by analyzing current trends in app modernization.

Trends of App Modernization

  1. Microservices: Nearly every mid-sized to large company uses microservices to deliver services to end customers. Compared with monoliths, microservices are better suited to building smaller apps and getting them to market quickly.
  2. Cloud: This needs little explanation; it is hard to imagine delivering services today without the cloud. Because of regulations, policies, and data-security requirements, companies also adopt a hybrid cloud: a mix of public/private cloud and on-prem VMs.
  3. Containers (Kubernetes): Containers are used to deploy microservices, and their adoption keeps rising; Gartner predicts that 15% of workloads worldwide will run in containers by 2026. Most enterprises use managed container services in the cloud for these workloads.

Adoption of these technologies will only grow in the coming years (Dynatrace predicts container and cloud adoption will grow almost 127% year over year).

The implication of all of these trends is that far more data now moves over the network. The question is: will the API gateway and load balancers at the center of app modernization be sufficient for the future?

Before we answer that, let us look at some typical API gateway implementations.

Sample Scenarios of API Gateway and Load Balancer Implementation 

From discussions with many clients, we have seen the various ways API gateways and load balancers are configured. Let us quickly walk through a few scenarios and the limitations of each.

Almost every microservices, cloud, and container journey starts with an API gateway. Take a practical example of two microservices, an author service and an echo-server service, hosted on an EKS cluster. (In reality, things can be more complicated; we will see that in another example later.) Suppose architects or the DevOps team want to expose these two services in the private cluster to the public (through the DNS name http://abc.com). One way is to place a network load balancer in front of each service (assuming both are hosted in the same cluster but on different nodes). An API gateway such as AWS API Gateway can then be configured to route traffic to these network load balancers.

If the cluster is in a private VPC, a VPC Link needs to be introduced to carry traffic from the API gateway into the private subnet and on to the network load balancers. Each network load balancer then forwards the traffic to its respective service.
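As a minimal sketch of the Kubernetes side of this setup, each of the two services could be exposed through its own internal NLB by creating a Service of type LoadBalancer. The names (author, echo-server), ports, and annotations below are illustrative and assume the legacy AWS load balancer annotations; the exact annotation set depends on which load balancer controller version the cluster runs. The API Gateway and VPC Link themselves would be configured on the AWS side.

apiVersion: v1
kind: Service
metadata:
  name: author
  annotations:
    # Ask AWS to provision an internal Network Load Balancer for this Service
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: author          # route to pods labeled app=author
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: echo-server
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: echo-server     # route to pods labeled app=echo-server
  ports:
    - port: 80
      targetPort: 8080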

Fig A: API gateway and multiple network load balancer implementation

The downside of such an architecture is that it requires multiple load balancers, and costs can climb quickly. Thus, some architects use another implementation: a single load balancer with multiple ports opened to serve the various services (in our case, author and echo-server). Refer to Fig B:

Fig B: API gateway and single network load balancer implementation

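One way this single-load-balancer pattern is often realized on EKS is to expose each service as a NodePort and attach both node ports as listeners on one externally provisioned NLB. The sketch below shows only the Kubernetes side under that assumption (names, ports, and node ports are illustrative); the NLB itself, with listeners such as :8081 → 30081 and :8082 → 30082, would be created outside the cluster via the console, CLI, or Terraform.

apiVersion: v1
kind: Service
metadata:
  name: author
spec:
  type: NodePort
  selector:
    app: author
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30081    # NLB listener for the author service forwards here
---
apiVersion: v1
kind: Service
metadata:
  name: echo-server
spec:
  type: NodePort
  selector:
    app: echo-server
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30082    # NLB listener for the echo-server service forwards here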
One widely used architecture uses an application load balancer instead of a network load balancer (refer to Fig C). In such scenarios, some responsibilities, such as L7 traffic routing, can be shifted from the API gateway to the application load balancer, while the API gateway continues to handle advanced traffic management such as retries and protocol translation.

Fig C: API gateway and application load balancer implementation
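A rough sketch of the Fig C approach, assuming the AWS Load Balancer Controller is installed in the cluster: a single Ingress provisions an internet-facing ALB and performs the L7 path-based routing to the two services, while the API gateway in front of it keeps the higher-level responsibilities. The hostname, paths, and ports are illustrative.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: abc-com
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing   # public ALB
    alb.ingress.kubernetes.io/target-type: ip           # send traffic straight to pod IPs
spec:
  ingressClassName: alb
  rules:
    - host: abc.com
      http:
        paths:
          - path: /author
            pathType: Prefix
            backend:
              service:
                name: author
                port:
                  number: 80
          - path: /echo
            pathType: Prefix
            backend:
              service:
                name: echo-server
                port:
                  number: 80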

We have looked at simple use cases, but in practice there can be many microservices hosted across multiple clouds and container services, which can look something like the figure below:

Fig D: API gateway in an interdependent microservices setup (banking application)

But regardless of the architecture, there can be a few limitations of using an API gateway in your app modernization journey. 

Limitations of API Gateways

Traffic Handling Limited to Edge

An API gateway is good enough for handling traffic at the edge, but if you want to manage the traffic between services (as in Fig D), it can soon become complicated.

Sometimes the same API gateway is used to handle communication between two services, primarily for peer-to-peer connections. However, when two services residing in the same network communicate through external IP addresses, the traffic makes unnecessary hops: requests take longer and consume more bandwidth.

This pattern is known as U-turn NAT or NAT loopback (network hairpinning), and such designs should be avoided.

Inability to Handle Multi-Cluster Traffic

When services run in different clusters or data centers (for example, public cloud and on-prem VMs), enabling communication between all of them through an API gateway is tricky. A workaround is to federate multiple API gateways, but the implementation is complicated, and the project's cost can outweigh the benefits.

Lack of Visibility into Internal Communication

Consider an example where the API gateway routes a request to Service A, which needs to call Services B and C to respond. If there is an error in Service B, Service A cannot respond, and it is difficult to locate the fault using only the API gateway.

No Control Over Network Inside the Cluster

DevOps and cloud teams often want to control internal communication or create network policies between microservices; an API gateway cannot be used for such scenarios.
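For illustration, this is the kind of rule such teams typically want to express. The sketch below is a plain Kubernetes NetworkPolicy (it requires a CNI that enforces network policies; the author and echo-server labels are the illustrative services from earlier) that only allows author pods to reach echo-server pods, a constraint an edge-only API gateway has no way to enforce.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-author-to-echo
spec:
  podSelector:
    matchLabels:
      app: echo-server       # the policy protects echo-server pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: author    # only author pods may connect
      ports:
        - protocol: TCP
          port: 8080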

East-West Traffic Is Not Secured 

Since API gateway usage is limited to the edge, the traffic inside a cluster (say, an EC2 or EKS cluster) is not secured. If an attacker compromises one service in the cluster, they can quickly move laterally and take control of the other services (the compromised service becomes an attack vector for the rest of the cluster). Architects can use workarounds such as issuing certificates and configuring TLS between services, but that is an additional project that can consume a lot of your DevOps team's time.

Cannot Implement Progressive Delivery 

API gateways are not designed to understand application subsets in Kubernetes. For example, suppose Service A has two versions, V1 and V2, running simultaneously (the former being the stable release and the latter the canary); an API gateway cannot split traffic between these subsets to implement a canary deployment. The bottom line is that you cannot extend an API gateway to implement progressive delivery techniques such as canary or blue-green deployments.

Your DevOps team can develop workarounds for these limitations (such as multiple API gateways implemented in a federated manner), but maintaining such a setup does not scale and quickly becomes technical debt.

Hence, a service mesh should be considered. Open-source Istio, developed by Google and IBM, is a widely used service mesh. Organizations such as Airbnb, Splunk, Amazon.com, and Salesforce have used Istio to gain agility and security in their networks.

Introducing Istio Service Mesh

Istio is open-source service mesh software that abstracts networking and security away from the fleet of microservices.

From an implementation point of view, Istio injects an Envoy proxy (a sidecar) alongside each service and handles L4 and L7 traffic through it. DevOps and cloud teams can then define network and security policies from a central control plane.
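As a small illustration of how this usually starts, labeling a namespace is the standard way to turn on automatic sidecar injection; every pod created in that namespace then gets an Envoy proxy injected alongside it (the namespace name demo is illustrative).

apiVersion: v1
kind: Namespace
metadata:
  name: demo
  labels:
    istio-injection: enabled   # Istio's mutating webhook injects the Envoy sidecar into new pods here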

Fig E: Istio service mesh diagram

Since Istio can control application, session, and transport traffic (L7/L5/L4), it is easy to manage and secure both north-south and east-west traffic. You can apply fine-grained network and security policies to east-west and north-south traffic in multi-cloud and hybrid cloud applications. The best part is that Istio provides central observability of network performance in a single plane.
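As one small example of such a policy, the sketch below enforces strict mutual TLS for every workload in a namespace, securing east-west traffic without changing application code (the namespace demo is illustrative).

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: demo
spec:
  mtls:
    mode: STRICT   # sidecars accept only mutually authenticated TLS connections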

DevOps teams can use Istio to implement canary or blue-green deployment strategies alongside their CI/CD tools.
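For instance, a canary rollout for the Service A example above might look roughly like the following sketch: a DestinationRule defines the v1 and v2 subsets, and a VirtualService splits traffic 90/10 between them (host names and labels are illustrative).

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: service-a
spec:
  host: service-a
  subsets:
    - name: v1
      labels:
        version: v1    # pods labeled version=v1 (stable)
    - name: v2
      labels:
        version: v2    # pods labeled version=v2 (canary)
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: service-a
spec:
  hosts:
    - service-a
  http:
    - route:
        - destination:
            host: service-a
            subset: v1
          weight: 90   # stable version keeps most of the traffic
        - destination:
            host: service-a
            subset: v2
          weight: 10   # canary receives a small slice

Shifting more traffic to the canary is then just a matter of adjusting the weights, typically driven by the CI/CD pipeline.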

Read more about the features here: Istio service mesh.

Tabular Comparison of API Gateway and Istio Service Mesh

The table below compares the API gateway and the Istio service mesh across a few dimensions: network management, security management, observability, and extensibility.

[Table: API gateway vs. Istio service mesh comparison]

Conclusion

The API gateway has served well for load balancing and other network management at the edge. Still, as you adopt microservices, cloud, and container technologies at scale, architects need to rethink how they achieve network agility and security. The Istio service mesh is compelling because it is open source and receives heavy contributions from Google, IBM, and Red Hat. There are also various architectural scenarios for combining an API gateway with Istio in an app modernization journey.


Published at DZone with permission of Debasree Panda. See the original article here.

Opinions expressed by DZone contributors are their own.
