
The Role of a Container Cluster Manager

A better approach for deploying containers in production.


Deploying Containers on a Single Container Host

Almost all container runtimes have been designed to run containers on a single container host. This is by design: containers share the host operating system kernel and rely on features such as cgroups, namespaces, chroot, SELinux, and seccomp for isolation and security. Therefore, a given set of containers may need to run on a single container host. At the moment, none of the available container runtimes provides a mechanism for integrating multiple container hosts to share the workload; that is the role of a container cluster manager. Figure 1 illustrates how a software solution deployed on a set of VMs can be moved to a containerized environment using a single container host:

Figure 1: Moving a solution (components: A1, A2, An) deployed on a set of VMs to a single container host.
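Concretely, these kernel features surface as options on the container runtime. Below is a minimal sketch of starting a container with explicit resource and security controls; the image name, limit values, and profile path are illustrative assumptions, not from the original article:

```sh
# Hypothetical image and limits, shown only to point at the kernel
# features involved: -m and --cpu-shares are enforced through cgroups,
# --security-opt applies a seccomp system-call filter, and Docker sets
# up PID, network, and mount namespaces for the container by default.
docker run -d --name app-a1 \
  -m 256m \
  --cpu-shares 512 \
  --security-opt seccomp=/etc/docker/seccomp-default.json \
  my-app:1.0
```

All of these controls are scoped to one kernel, which is why the runtime alone cannot stretch a deployment across hosts.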

This deployment model is straightforward, easy to set up, and simple to use. It fits very well for setting up a development environment on developer machines. However, when moving the software solution beyond the dev environment into QA, performance testing, pre-production, and production environments, the following limitations of deploying containers on a single container host must be considered:

  • Single point of failure: Because all of the containers are running on a single host, if the host fails at some point, all the containers will also fail. As a result, the entire software system that ran on those containers will become unavailable.
  • Resource constraints: Because there is only one container host available, containers will hit the host's resource limits (CPU, memory, disk) at some point. Afterward, the system will not be able to scale unless the host is vertically scaled.
  • No auto-healing and auto-scaling features: Currently, none of the container runtimes provide auto-healing and auto-scaling features for containers. Therefore, those might need to be managed by a human or automated using additional software components.
  • Limited container orchestration features: Containers may need some level of orchestration features for container grouping, container cluster grouping, handling dependencies, health checking, etc. when deploying a composite application. Docker provides a solution for this with Docker Compose. However, it has some limitations, even when used with Docker's own container cluster manager, Docker Swarm.
  • Limited service discovery features: Components of a composite application need some mechanism to discover and interact with each other. The easiest method is to use domain names to discover their dependencies. Docker Compose solves this problem with Docker links: if container A depends on container B, a link to container B can be declared when starting container A, and Docker generates an /etc/hosts entry in container A pointing to container B's IP address (see the sketch after this list). This works for 1:1 scenarios but may not work for 1:M use cases.
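To make the Compose-based orchestration and linking concrete, here is a minimal docker-compose.yml sketch; the service and image names are hypothetical:

```yaml
# docker-compose.yml (Compose v1 format); hypothetical services.
# The link makes Docker inject an /etc/hosts entry into container-a
# so it can reach container-b by name. Note that both containers
# still run on the same single host.
container-a:
  image: my-app:1.0
  links:
    - container-b
container-b:
  image: my-db:1.0
```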

Deploying Containers on a Collection of Container Hosts

The easiest way to solve the above problems might be to use a collection of container hosts. Please refer to the diagram below:

Figure 2: A container host collection

This approach may look simple and straightforward, but it has the following implications:

  • Independently operated container hosts: Even though a collection of container hosts is used, the hosts have no knowledge of each other for distributing the workload.
  • Container management overhead: A human would need to manually interact with the system and distribute the containers among the container hosts to ensure the high availability of each application component. This is called container scheduling. In real life, this might not be practical with immutable, short-lived containers. A programmatic approach might be needed for scheduling containers, auto-healing, and auto-scaling them.
  • Disconnected bridge container networks: Because each container host would have its own container bridge network, the container IP addresses will get leased by the container host. As a result, container IP addresses would get duplicated across the hosts. More importantly, there would be no direct routing between containers. This might become a problem when deploying a composite application, which needs internal routing among application component containers.
  • No dynamic load balancing: Let's assume that containers expose their ports using host ports. A given application component's containers may be available on multiple container hosts. In such a situation, a load balancer needs to be manually configured according to container availability, pointing to container host IPs and host ports (see the configuration sketch after this list).
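A hedged sketch of what that manual load balancer configuration looks like with Nginx; the IP addresses and published host ports are hypothetical:

```nginx
# nginx.conf fragment: every container host IP and published host port
# is listed by hand, and the file must be edited and reloaded whenever
# a container is added, moved, or removed.
upstream component_a {
    server 10.0.0.11:49153;
    server 10.0.0.12:49161;
}
server {
    listen 80;
    location / {
        proxy_pass http://component_a;
    }
}
```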

Deploying Applications on a Container Cluster Manager

Figure 3: A reference architecture for a container cluster manager

The above diagram illustrates a reference architecture for a container cluster manager. In this approach, almost all of the issues identified with plain container host collections are solved by programmatically managing the container host cluster and the container clusters that run on top of it:

  • Scheduler

The scheduler is the brain of the container cluster manager. It monitors the entire container cluster by analyzing the resource utilization of each host and makes container scheduling decisions to optimize resource usage and the high availability of the containers.
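The exact algorithm differs between cluster managers; the following Python sketch (not any particular scheduler's implementation) shows the general idea of placing a container on a host that satisfies its resource request while spreading replicas of the same application for availability:

```python
# A hedged scheduling sketch; all field names and values are illustrative.
def schedule(request, hosts):
    """Pick a host for a container request using agent-reported stats."""
    candidates = [
        h for h in hosts
        if h["free_memory_mb"] >= request["memory_mb"]  # request must fit
        and request["app"] not in h["running_apps"]     # spread replicas
    ]
    if not candidates:
        raise RuntimeError("no host can satisfy this request")
    # Prefer the least-loaded host to balance resource usage.
    return max(candidates, key=lambda h: h["free_memory_mb"])

hosts = [
    {"name": "host-1", "free_memory_mb": 512,  "running_apps": {"app-a"}},
    {"name": "host-2", "free_memory_mb": 2048, "running_apps": set()},
]
print(schedule({"app": "app-a", "memory_mb": 256}, hosts)["name"])  # host-2
```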

  • Agent

The agent component runs on each container host, providing host management capabilities and sending resource usage statistics to the scheduler. Whenever the scheduler wants to create or terminate a container instance on a host, it talks to the relevant agent and lets it execute the container management command.
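A minimal sketch of the agent's reporting loop in Python; the scheduler endpoint, payload shape, and interval are assumptions made for illustration:

```python
import json
import time
import urllib.request

SCHEDULER_URL = "http://scheduler.internal:8080/stats"  # hypothetical endpoint

def read_usage():
    # A real agent would read cgroup and /proc statistics here; fixed
    # values keep the sketch short.
    return {"host": "host-1", "cpu_percent": 42.0, "free_memory_mb": 512}

while True:
    payload = json.dumps(read_usage()).encode("utf-8")
    request = urllib.request.Request(
        SCHEDULER_URL, data=payload,
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(request)  # push this host's stats to the scheduler
    time.sleep(10)                   # then wait for the next reporting cycle
```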

  • Overlay Network

The overlay network can be implemented using a software-defined networking (SDN) framework such as Flannel, Open vSwitch, or Weave. The main advantage of using such a solution is that all of the containers in the container cluster get connected to a single network with container-to-container routing. More importantly, containers get unique IP addresses across the container hosts, leased by the SDN, and, if needed, can integrate with the physical network of the container host cluster.
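For example, Flannel takes its network configuration from etcd; a minimal sketch, with an illustrative subnet range:

```sh
# Store Flannel's cluster-wide network configuration in etcd. Each
# host's flanneld leases a per-host subnet out of this range, so every
# container receives a cluster-unique IP address.
etcdctl set /coreos.com/network/config \
  '{ "Network": "10.1.0.0/16", "Backend": { "Type": "vxlan" } }'
```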

  • DNS

DNS is another key element of a container cluster management system. It mainly serves two purposes:

    • Providing domain names for containers and container clusters. For example, if an application server is deployed on a set of containers, each container and the container cluster may need domain names for accessibility.
    • Service discovery with DNS round robin. If application-layer routing is not needed, a DNS server can be used for round-robin load balancing at the network layer of the OSI model (a zone-file sketch follows this list).
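A zone-file fragment illustrating DNS round robin; the names and addresses are made up for the example:

```
; Three containers of the same component behind one name. A resolver
; rotates through the A records, spreading connections across them,
; and the short TTL limits how long a dead container receives traffic.
app-a.cluster.internal.  60  IN  A  10.1.4.2
app-a.cluster.internal.  60  IN  A  10.1.7.5
app-a.cluster.internal.  60  IN  A  10.1.9.3
```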

  • Load Balancer

The container cluster manager can dynamically configure a third-party load balancer, such as Nginx or HAProxy, to provide load balancing for containers at the application layer. This suits routing HTTP traffic well when session affinity is needed for UI components. Moreover, it provides the ability to do hostname-based routing while exposing well-known HTTP ports such as 80 and 443, without having to expose dynamic host ports.
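A hedged HAProxy fragment showing hostname-based routing and cookie-based session affinity; all hostnames, addresses, and ports are hypothetical, and in practice the cluster manager would regenerate this file and reload HAProxy as containers come and go:

```
# haproxy.cfg fragment (illustrative values only).
frontend http-in
    bind *:80
    acl is_ui hdr(host) -i ui.example.com
    use_backend ui_containers if is_ui
    default_backend api_containers

backend ui_containers
    balance roundrobin
    cookie SRV insert indirect nocache       # session affinity for the UI
    server ui-1 10.1.4.2:8080 cookie ui-1
    server ui-2 10.1.7.5:8080 cookie ui-2

backend api_containers
    server api-1 10.1.9.3:8080
```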

Key Features Required in a Container Cluster Manager

The following key features would be needed in a container cluster manager for deploying composite, complex applications in production (a manifest sketch illustrating several of them follows this list):

  • Container grouping: Grouping a set of containers together so they can share disk, processes, users, etc. via Linux namespaces.
  • Container cluster management: Managing a set of container groups as a cluster that forms one application component.
  • Application health checking: This is essential for maintaining the list of active containers in the load balancer's routing rules and for auto-healing.
  • Auto-healing: Application components can try to auto-heal from catastrophic situations by restarting containers.
  • Horizontal auto-scaling: An application component cluster can be scaled horizontally by increasing the number of containers.
  • Domain naming and service discovery: Domain naming and service discovery are important for deploying a composite application on containers.
  • Dynamic load balancing: A container cluster manager needs to dynamically configure a load balancer, as container ports and host ports can change at runtime according to the deployment.
  • Centralized log access: Accessing the logs of hundreds of containers on the containers themselves would be nearly impossible. Therefore, the cluster manager needs to provide a mechanism to access logs from a central location.
  • Multi-tenancy: Multi-tenancy might be essential for sharing a single container cluster manager instance among multiple tenants.
  • Identity and authorization: Identity and authorization management would be needed both for the cluster manager and for applications deployed on top of it.
  • Mounting storage systems: Applications that need persistent storage would need to use volume mounts to avoid losing data written to disk when containers restart.
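As one concrete illustration, a container cluster manager such as Kubernetes expresses several of these features declaratively. The following ReplicationController manifest is a sketch only; the names, image, health endpoint, and paths are assumptions:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: app-a
spec:
  replicas: 3              # horizontal scaling; failed containers are replaced
  selector:
    app: app-a
  template:
    metadata:
      labels:
        app: app-a
    spec:
      containers:
        - name: app-a
          image: my-app:1.0    # hypothetical image
          ports:
            - containerPort: 8080
          livenessProbe:       # application health checking
            httpGet:
              path: /health    # hypothetical health endpoint
              port: 8080
          volumeMounts:        # storage mounting; a production setup would
            - name: data       # use a persistent volume instead of emptyDir
              mountPath: /var/data
      volumes:
        - name: data
          emptyDir: {}
```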


In summary, containers can be run on a single container host while bearing some limitations: a single point of failure, resource constraints, no auto-healing or auto-scaling, limited container orchestration features, limited service discovery features, etc. On the positive side, this avoids the overhead of setting up a container cluster manager. Docker has solved the problem of deploying composite applications on a single container host with Docker Compose, which also works with Docker's own container cluster manager, Docker Swarm, but with some limitations.

Therefore, a production-grade composite application deployment may need a container cluster manager that can handle complex deployment requirements such as container grouping, container cluster management, application health checking, auto-healing, horizontal auto-scaling, domain naming, service discovery, dynamic load balancing, centralized log access, multi-tenancy, identity and authorization, and mounting storage systems.


