
Rule Your Microservices With an API Gateway: Part II


After discussing the purpose of an API gateway for microservices, this article shows how to use the well-known Kong API gateway, as well as the easier-to-use Kontena Load Balancer.


In the previous part of this blog post series, I talked about microservices and the purpose of API gateways.

We're rather obsessed with microservices here at Kontena, so here are some other microservices-related articles that may interest you: Event Sourcing Microservices with Kafka, and how to implement Event-Driven Microservices with RabbitMQ.

Now, I'm going to show how to run API gateways in practice and set up and configure services.

As stated in my previous blog post, an API gateway provides a single, unified API entry point across one or more internal APIs. Rather than invoking different services, clients simply talk to the gateway.

The gateway enables support for mixing communication protocols and decreases microservice complexity by providing a single point to handle cross-cutting concerns such as authorization using API tokens, access control enforcement, and rate limiting.
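To make one of those cross-cutting concerns concrete, here is a minimal token-bucket sketch of the per-client rate-limiting logic a gateway applies. This is illustrative only, not how Kong or HAProxy implement it; the `TokenBucket` class and its injectable clock are inventions for this example:

```ruby
# Minimal token-bucket rate limiter: a gateway would keep one
# bucket per API token and reject requests when it runs dry.
class TokenBucket
  def initialize(capacity:, refill_per_sec:, clock: -> { Time.now.to_f })
    @capacity = capacity            # maximum burst size
    @refill_per_sec = refill_per_sec
    @clock = clock                  # injectable for deterministic tests
    @tokens = capacity.to_f
    @last = clock.call
  end

  # Returns true if the request may pass, consuming one token.
  def allow?
    now = @clock.call
    @tokens = [@capacity.to_f, @tokens + (now - @last) * @refill_per_sec].min
    @last = now
    return false if @tokens < 1
    @tokens -= 1
    true
  end
end
```

A gateway would respond with HTTP 429 whenever `allow?` returns false for a client's token.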

API gateways can be divided roughly into three categories. In this post, I'll cover two of them in practice: using an L7 proxy (Kontena Load Balancer) and using a full-featured gateway (Kong).

Using Kontena Load Balancer as an API Gateway

Kontena is the developer-friendly container and microservices management platform. Kontena Load Balancer is an L7 proxy based on HAProxy. It's a powerful and fully automated load balancer for any number of Kontena Stacks and Services.

You can install it as a ready-made Kontena Stack with Kontena CLI:

kontena stack install kontena/ingress-lb 

Or use it with your own stack:

...
services:  
  api-gateway:
    image: kontena/lb:latest
    ports:
      - 80:80
...

The Kontena Load Balancer does not do anything unless some Kontena Services are linked to it. Any Kontena Service may be linked to the Kontena Load Balancer simply by adding a link variable with the name of the Kontena Load Balancer. Load-balancing options for the Kontena Service may be configured via environment variables.

...
services:  
  api-gateway:
    image: kontena/lb:latest
    ports:
      - 80:80
  images-api:
    image: nginx:latest
    environment:
      - KONTENA_LB_INTERNAL_PORT=80
      - KONTENA_LB_VIRTUAL_PATH=/images
    links:
      - api-gateway
  products-api:
    image: nginx:latest
    environment:
      - KONTENA_LB_INTERNAL_PORT=80
      - KONTENA_LB_VIRTUAL_PATH=/products
    links:
      - api-gateway
...

Typically, with L7 proxies, you can configure only load-balancing rules; more advanced configuration options are quite limited. Kontena does, however, let you configure Basic Authentication for the Kontena Load Balancer.
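As a sketch of how that looks, and based on my reading of the Kontena Load Balancer documentation (double-check the exact secret name against the docs), basic auth is set up by writing HAProxy userlist entries into Kontena Vault and exposing them to the balanced service as the `KONTENA_LB_BASIC_AUTH_SECRETS` secret:

```yaml
# First, store the credentials (HAProxy userlist syntax) in Kontena Vault:
#   kontena vault write basic_auth "user admin insecure-password secret"
services:
  images-api:
    image: nginx:latest
    environment:
      - KONTENA_LB_INTERNAL_PORT=80
      - KONTENA_LB_VIRTUAL_PATH=/images
    secrets:
      - secret: basic_auth
        name: KONTENA_LB_BASIC_AUTH_SECRETS
        type: env
    links:
      - api-gateway
```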

If you are using other container management tools, check out Traefik for Docker Compose and the NGINX Ingress Controller for Kubernetes.

Using Kong as an API Gateway

When an L7 proxy is not enough, a traditional API gateway steps in. It provides more functionality and configuration options for securing microservices. One out-of-the-box solution is Kong.

Kong is a microservice API gateway. Some of the popular features deployed through Kong include authentication, security, traffic control, serverless, analytics & monitoring, request/response transformations, and logging.

With Kontena, you can install Kong with a single command:

kontena stack install kontena/kong 

The installer deploys the Kong service and, optionally, a PostgreSQL database.

For other installation methods, please refer to the Kong installation guides.

Service Configuration

With the Kontena Load Balancer, configuration is very straightforward: you register services to the load balancer and define load-balancing rules in Kontena Stack files, and Kontena updates the HAProxy configuration on the fly when changes are made. Unfortunately, this is not possible with Kong at the moment. For Kubernetes, there is an ongoing effort to provide an official Kong Ingress Controller, which should automate much of this. Meanwhile, to configure APIs and plugins, you need to use Kong's Admin API. You can still automate this, though.

With Kontena, you can create a post-start script that configures all required settings when a service is deployed. Kong's Admin API is not exposed to the outside world, but services can still reach it over Kontena's internal network.

Those scripts can be, for example, basic bash scripts that execute curl requests against the Kong Admin API. However, bash scripts can easily become messy and complicated, so I created the Kong API client library for Ruby to make things easier.
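For reference, a bash variant would boil down to idempotent calls like these against Kong's pre-1.0 Admin API on port 8001; the `kong` hostname and the upstream address are placeholders for your own environment:

```
# Register (or update) the images API in Kong's legacy Admin API.
# PUT is idempotent, so re-running this on every deploy is safe.
curl -sS -X PUT http://kong:8001/apis \
  --data name=images \
  --data uris=/images \
  --data upstream_url=http://images-api.mystack.kontena.local:3000

# Enable rate limiting for that API (10 requests per minute).
curl -sS -X POST http://kong:8001/apis/images/plugins \
  --data name=rate-limiting \
  --data config.minute=10
```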

We use Ruby heavily internally and already run database migrations with Rake tasks, so why not register Kong APIs the same way?

namespace :kong do  
  desc "Register Kong configurations"
  task register: :environment do
    # Find or create the API definition in Kong
    api = Kong::Api.find_by_name('images')
    unless api
      api = Kong::Api.new(name: 'images')
    end
    api.uris = ['/images']
    # Route requests to this service via Kontena's internal DNS
    api.upstream_url = "http://#{ENV['KONTENA_STACK_NAME']}.#{ENV['KONTENA_GRID_NAME']}.kontena.local:3000"
    api.save

    # Attach (or update) a rate-limiting plugin for the API
    rate_limiting_plugin = api.plugins.find { |p| p.name == 'rate-limiting' }
    unless rate_limiting_plugin
      rate_limiting_plugin = Kong::Plugin.new(name: 'rate-limiting')
      rate_limiting_plugin.api = api
    end
    rate_limiting_plugin.config = {
      minute: (ENV['RATE_LIMIT_PER_MINUTE'] || 10).to_i
    }
    rate_limiting_plugin.save
  end
end  

This task registers the images API with Kong and configures a rate-limiting plugin for it.

Then we can add a post-start hook to the stack file:

hooks:  
  - name: register Kong APIs
    cmd: bundle exec rake kong:register
    instances: 1  

Now, every time we deploy a new version of the service, the hook updates the Kong configuration related to that service.

You can find the complete example in the GitHub repo.

Want to Learn More?

Register for my upcoming webinar, Why Do Microservices Need an API Gateway?, to find out how an API gateway can provide a uniform interface and a central connection point for the various microservices behind it, and how those services can then be handled dynamically.



Published at DZone with permission of
