Load Balancing Containers With Docker Swarm and NGINX or NGINX Plus

The latest version of Docker integrates Docker Engine and Swarm and adds some new orchestration features. You can use NGINX and NGINX Plus for load balancing with Docker Swarm.

At nginx.conf 2016 in Austin this September, I gave a presentation on using NGINX and NGINX Plus in a Docker Swarm cluster. In this post, I discuss how to use NGINX and NGINX Plus for Docker Swarm load balancing in conjunction with the features introduced in Docker 1.12. All files I used during my demo at nginx.conf (and more) are available on GitHub for you to experiment with.

Overview

Docker version 1.12, released in late July 2016, integrates Docker Engine and Swarm and adds some new orchestration features to create a platform similar to other container platforms such as Kubernetes. In Docker 1.12, Swarm Mode allows you to combine a set of Docker hosts into a swarm, providing a fault-tolerant, self-healing, decentralized architecture. The new platform also makes it easier to set up a Swarm cluster, secures all nodes with a key, and encrypts all communications between nodes with TLS.

At the same time, the Docker API has been expanded to be aware of services, which are sets of containers that use the same image (similar to services in Docker Compose, but with more features). You can create and scale services, do rolling updates, create health checks, and more. DNS service discovery and load balancing are built in, and you can also set up cluster‑wide overlay networks.
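To make that workflow concrete, here is a minimal sketch of the Swarm and service commands involved; the IP address, service name, and image are placeholder values rather than the ones used in my demo.

    # On the node that will become the Swarm master (manager):
    docker swarm init --advertise-addr 192.0.2.10

    # On each worker, join with the token printed by "swarm init":
    docker swarm join --token <worker-token> 192.0.2.10:2377

    # Create a replicated service, scale it, and roll out a new image version:
    docker service create --name service-a --replicas 3 myorg/service-a:1.0
    docker service scale service-a=5
    docker service update --image myorg/service-a:1.1 service-a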

Topology for Docker Swarm Load Balancing

For this discussion and demo, I have three Swarm nodes – a master and two workers. The master node is where the Swarm commands are run. Swarm handles the scheduling, DNS service discovery, scaling, and container load balancing (represented in the figure by the small boxes) on all nodes.

Figure 1. A Docker Swarm cluster with a master and two worker nodes.

To provide private network communications between containers inside a cluster, containers can be connected to multiple internal overlay networks that span across all nodes in the cluster. Containers can be exposed outside of the cluster through the Swarm load balancer.

Figure 2. Internal and external network connectivity in a Docker Swarm cluster.
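As a hedged example of how this connectivity is set up, the commands below create a cluster-wide overlay network, attach a backend service to it for internal-only communication, and publish a port on another service so that the Swarm load balancer exposes it outside the cluster; all names are illustrative.

    # Create an overlay network spanning all nodes in the cluster:
    docker network create --driver overlay my-overlay

    # A backend reachable only on the overlay network (no published ports):
    docker service create --name service-a --network my-overlay --replicas 3 myorg/service-a:1.0

    # Publishing a port exposes the service on every node through the
    # Swarm load balancer (the routing mesh):
    docker service create --name frontend --network my-overlay -p 80:80 myorg/frontend:1.0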

The Docker Swarm load balancer runs on every node and can load balance requests across any of the containers on any of the hosts in the cluster. In a Swarm deployment without NGINX or NGINX Plus, the Swarm load balancer handles inbound client requests (represented by the green arrows in Figure 3) as well as internal service-to-service requests (represented by the red arrows).

Figure 3. Load balancing of client and service-to-service requests in a Swarm cluster without NGINX or NGINX Plus.

Now that Swarm includes load balancing, why would you need another load balancer? One reason is that the Swarm load balancer is a basic Layer 4 (TCP) load balancer. Many applications require additional features, like these, to name just a few:

  • SSL/TLS termination.
  • Content‑based routing (based, for example, on the URL or a header).
  • Access control and authorization.
  • Rewrites and redirects.

In addition, you might already have experience with a load balancer, and being able to use it with Swarm lets you take advantage of the tooling and knowledge you are already using.

Using Open Source NGINX and NGINX Plus

Open source NGINX and NGINX Plus are two load balancers that provide application‑critical features that are missing from the native Swarm load balancer.

Using Open Source NGINX

The open source NGINX software provides the features previously mentioned (SSL/TLS termination, etc.) and more, including:

  • A choice of load‑balancing algorithms.
  • More protocols, for example, HTTP/2 and WebSocket.
  • Configurable logging.
  • Traffic limits, including request rate, bandwidth, and connections.
  • Scripting for advanced use cases, using Lua, Perl, and JavaScript (with the nginScript dynamic module).
  • Security features such as whitelists and blacklists.

The simplest way to use open source NGINX is to deploy it as a service, with one or more containers. The necessary ports for the NGINX service are exposed on the cluster, and the Swarm load balancers distribute requests on these ports to the NGINX containers.

Figure 4. Swarm load balancers distribute requests for the NGINX services across instances.
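A sketch of deploying NGINX this way might look like the following. I'm assuming a custom image (here called myorg/nginx-ssl) with the NGINX configuration and certificates baked in, since that is the simplest way to distribute them to every node.

    # Run NGINX as a Swarm service with the SSL/TLS port published on the cluster:
    docker service create --name nginx --replicas 2 \
        --network my-overlay -p 443:443 myorg/nginx-ssl:1.0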

For the purposes of this example, the service that NGINX provides is SSL/TLS termination. To illustrate how this works, we deploy a backend service A in the cluster and scale it to have three containers (two instances on one node and one instance on another, as shown in Figure 5). Swarm assigns a virtual IP address (VIP) to service A for use inside the cluster. We use this VIP in the NGINX configuration of the upstream group for service A, rather than listing the individual IP addresses of the containers. That way we can scale service A without having to change the NGINX configuration.

As shown in Figure 5, when a client makes a request for service A to the first Swarm node, the Swarm load balancer on that node routes the request to NGINX. NGINX processes the request, in this example doing SSL/TLS decryption, and routes it to the VIP for service A. The Swarm load balancer routes the (now unencrypted) request to one of the containers for service A, on any of the Swarm nodes.

Figure 5. NGINX provides SSL/TLS termination for external client requests.
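The corresponding NGINX configuration can be sketched roughly as follows. Here I reference service A by its Swarm DNS name, which resolves to the service VIP when NGINX starts (the demo files on GitHub are the authoritative version); the certificate paths, port, and names are illustrative.

    upstream backend_a {
        # "service-a" resolves to the Swarm virtual IP for the service, so
        # individual container addresses never appear in this file and the
        # service can be scaled without reloading NGINX.
        server service-a:8080;
    }

    server {
        listen 443 ssl;
        ssl_certificate     /etc/nginx/certs/example.crt;
        ssl_certificate_key /etc/nginx/certs/example.key;

        location / {
            # Decrypted traffic is passed to the service VIP; the Swarm load
            # balancer then picks one of the service A containers.
            proxy_pass http://backend_a;
        }
    }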

It is possible to have NGINX load balance requests directly to the backend containers, and also handle internal service‑to‑service requests, but only with a more complex solution that requires changing and reloading the NGINX configuration each time service A is scaled. I will discuss later how this is easily accomplished with NGINX Plus.

Using NGINX Plus

Some of the additional features provided by NGINX Plus are the following.

Active Application Health Checks

NGINX Plus continuously checks the backend nodes to make sure they are healthy and responding properly and removes unhealthy nodes from the load‑balancing rotation.

Session Persistence

Session persistence is also known as sticky sessions, and is needed by applications that require a client's requests to keep going to the same backend. NGINX Plus also supports session draining, for when you need to take backend servers offline without impacting clients that have open sessions.
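Here is a hedged NGINX Plus configuration snippet showing what active health checks and session persistence look like together; the backend addresses, cookie name, and timings are illustrative.

    upstream backend_a {
        # A shared memory zone is required for active health checks.
        zone backend_a 64k;
        server 10.0.0.5:8080;
        server 10.0.0.6:8080;

        # Session persistence: pin each client to the backend that set the cookie.
        sticky cookie srv_id expires=1h path=/;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend_a;

            # Active health check: probe each backend every 5 seconds and take
            # it out of rotation after 2 consecutive failures.
            health_check interval=5 fails=2 passes=2;
        }
    }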

Dynamic Reconfiguration

Dynamic reconfiguration is the ability to scale backends up and down without changing and reloading the NGINX Plus configuration. This is especially useful when doing service discovery with a microservices platform such as Swarm, and it is one of the most important features that allows NGINX Plus to be fully integrated with these platforms.

There are two dynamic reconfiguration methods: an API that lets you push changes to NGINX Plus, and DNS, which NGINX Plus queries continually to detect changes in the set of addresses attached to a domain name. The DNS method is the one used in this NGINX Plus demo to integrate with Swarm's built-in service discovery.
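In a Swarm overlay network, that DNS-based integration can be sketched like this (placed in the http context of the NGINX Plus configuration). 127.0.0.11 is Docker's embedded DNS server; the service name, port, and re-resolution interval are illustrative, and the demo may additionally rely on Swarm's DNS returning per-container addresses (for example via a tasks.<service> name or DNS round-robin endpoint mode) rather than the single VIP.

    # Query Docker's embedded DNS server and treat answers as valid for 5 seconds.
    resolver 127.0.0.11 valid=5s;

    upstream backend_a {
        zone backend_a 64k;

        # "resolve" tells NGINX Plus to periodically re-resolve the name and
        # update the set of backend addresses without a configuration reload.
        server service-a:8080 resolve;
    }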

Live Activity Monitoring

Live activity monitoring provides an API for getting extensive metrics from NGINX Plus, as well as a web dashboard built on that API where you can view the metrics and add, remove, or modify backend servers.

In the open source NGINX configuration described above, the Swarm load balancer both distributes external requests to the backend containers and handles service‑to‑service requests among them. The role of NGINX is SSL/TLS offloading.

When using NGINX Plus, client requests from external clients hit the Swarm load balancer first, but NGINX Plus does the actual load balancing to the backend containers (Figure 6). Having client requests hit the Swarm load balancer first provides an easy way of making NGINX Plus highly available.

Figure 6. The Docker Swarm load balancer forwards client requests to NGINX Plus for load balancing among service instances.

Similarly, the Swarm load balancer receives interservice requests but NGINX Plus actually distributes them among the services (Figure 7).

Figure 7. The Docker Swarm load balancer forwards interservice requests to NGINX Plus for load balancing among service instances.

Demonstrations

To provide some examples of using Swarm for Docker load balancing with and without NGINX, I have created three demonstrations. All the files for these demonstrations are available on GitHub, along with detailed instructions. The three demonstrations are:

Docker Swarm Load Balancing

This demonstrates Docker Swarm load balancing of requests to a simple web app backend, without NGINX or NGINX Plus.

Docker Swarm Load Balancing With Open Source NGINX

This demo adds open source NGINX to provide SSL/TLS offload for external requests. The Swarm load balancer distributes requests to the same simple web app backend as in the previous demo and handles internal service‑to‑service requests.

Docker Swarm Load Balancing With NGINX Plus

This demo has two parts. The first part uses NGINX Plus which, in addition to doing SSL/TLS offload, load balances requests directly to the backend containers and handles internal service‑to‑service requests. It is integrated with Swarm service discovery, using dynamic DNS to frequently re‑resolve the domain name associated with the backends. NGINX Plus load balances two backend services, Service1 and Service2, and demonstrates internal service‑to‑service requests by having Service1 make a request to Service2.

Figure 8. NGINX Plus uses Swarm’s dynamic DNS service discovery mechanism when load balancing backend services.

The second part shows how you can combine the NGINX Plus Status API with the Docker Service API to automatically scale the backend containers. A Python program uses the NGINX Plus Status API to monitor the load on Service1 and Service2, and the Docker Swarm Service API to scale the backend containers up or down.

Figure 9. Docker Swarm uses NGINX Plus live activity monitoring to track service load for autoscaling purposes.
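As a rough illustration of how such a program could be put together, here is a minimal, hypothetical Python sketch that polls the NGINX Plus Status API and shells out to docker service scale (the actual demo uses the Docker Service API). The status URL, upstream name, thresholds, and scaling policy are assumptions for illustration, and the exact JSON layout of the Status API has varied across NGINX Plus releases, so treat the parsing as a sketch rather than the demo's code.

    #!/usr/bin/env python
    # Hypothetical autoscaler sketch: poll the NGINX Plus Status API for the
    # request count of one upstream group and scale the matching Swarm service.
    import subprocess
    import time

    import requests

    STATUS_URL = "http://localhost:8081/status"  # NGINX Plus Status API (assumed port)
    UPSTREAM = "service1"                        # upstream group to watch (assumed name)
    SERVICE = "service1"                         # Swarm service to scale (assumed name)
    MIN_REPLICAS, MAX_REPLICAS = 2, 10
    INTERVAL = 10                                # seconds between samples

    def total_requests():
        # Sum the requests handled so far by every peer in the upstream group.
        status = requests.get(STATUS_URL, timeout=5).json()
        peers = status["upstreams"][UPSTREAM]["peers"]
        return sum(peer["requests"] for peer in peers)

    def scale(replicas):
        subprocess.check_call(["docker", "service", "scale",
                               "%s=%d" % (SERVICE, replicas)])

    def main():
        replicas = MIN_REPLICAS
        previous = total_requests()
        while True:
            time.sleep(INTERVAL)
            current = total_requests()
            delta = current - previous
            previous = current
            # Naive policy: add a replica under heavy load, remove one when idle.
            if delta > 500 and replicas < MAX_REPLICAS:
                replicas += 1
                scale(replicas)
            elif delta < 50 and replicas > MIN_REPLICAS:
                replicas -= 1
                scale(replicas)

    if __name__ == "__main__":
        main()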

Summary

The new features introduced in Docker 1.12 make Swarm a more powerful platform, but it can be enhanced by taking advantage of open source NGINX, and even more so by using NGINX Plus. The ability of NGINX Plus to use DNS to dynamically reconfigure the set of backend containers it load balances, and the visibility provided by the Status API, make for a very powerful container solution.

Try out the demonstrations available at our GitHub repository.

Published at DZone with permission of Rick Nelson, DZone MVB. See the original article here.
