Run 10,000 Docker Containers In Less Than 45 Minutes On 30 Rackspace Cloud Servers With 4GB Of Memory Each
Achieve a deployment of 10,000 Nginx Docker containers in less than 45 minutes on a cluster of 30 Rackspace cloud servers, each with 4GB of memory.
While application portability (i.e. being able to run the same application on any Linux host) is still the leading driver for the adoption of Linux Containers, another key advantage is being able to optimize server utilization so that you can use every bit of compute. Of course, for upstream environments like PROD, you may still want to dedicate more than enough CPU & Memory for your workload – but in DEV/TEST environments, which typically represent the majority of compute resource consumption in an organization, optimizing server utilization can lead to significant cost savings.
This all sounds good on paper -- but DevOps engineers and infrastructure operators still struggle with the following questions:
- How can I group servers across different clouds into clusters that map to business groups, development teams, or application projects?
- How do I monitor these clusters and get insight into the resource consumption by different groups or users?
- How do I set up networking across servers in a cluster so that containers across multiple hosts can communicate with each other?
- How do I define my own capacity-based placement policy so that I can use every bit of compute in a cluster?
- How can I automatically scale out the cluster to meet the demands of the developers for new container-based application deployments?
DCHQ, available in hosted and on-premise versions, addresses all of these challenges and provides the most advanced infrastructure provisioning, auto-scaling, clustering and placement policies for infrastructure operators or DevOps engineers.
- A user can register any Linux host running anywhere by running an auto-generated script to install the DCHQ agent, along with Docker and the software-defined networking layer (optional). This task can be automated programmatically using our REST APIs for creating “Docker Servers” (https://dchq.readme.io/docs/dockerservers) – see the sketch after this list.
- Alternatively, DCHQ integrates with 13 cloud providers, allowing users to automatically spin up virtual infrastructure on vSphere, OpenStack, CloudStack, Amazon Elastic Compute Cloud (EC2), Google Compute Engine, Rackspace, DigitalOcean, SoftLayer, Microsoft Azure, and many others.
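For the API route in the first option, a registration call might look roughly like the sketch below. The endpoint path and payload fields are assumptions rather than the documented contract; consult the “Docker Servers” API reference linked above for the real request format.

```bash
# Hypothetical sketch only: the endpoint path and payload fields are assumptions,
# not the documented DCHQ contract. See https://dchq.readme.io/docs/dockerservers
# for the actual "Docker Servers" API.
curl -u "user@example.com:password" \
     -H "Content-Type: application/json" \
     -X POST "https://dchq.io/api/1.0/dockerservers" \
     -d '{"name": "dev-host-01", "ip": "192.168.1.10"}'
```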
Servers across hybrid clouds or local development machines can be associated with a cluster, which is a logical mapping of infrastructure. This cluster has advanced options, like:
- Networking – a user can select either Docker networking or a software-defined networking layer to facilitate cross-container communication across multiple hosts.
- Lease – a user can specify when the servers in this cluster expire so that DCHQ can automatically destroy those servers.
- Placement Policy – a user can select from a number of placement policies like a proximity-based policy, round robin, or the default policy, which is a capacity-based placement policy that will place the Docker workload on the host that has sufficient compute resources.
- Quota – a user can indicate whether or not this cluster adheres to the quota profiles that are assigned to users and groups. For example, in DCHQ.io, all users are assigned a quota of 8GB of Memory.
- Auto-Scale Policy – a user can define an auto-scale policy to automatically add servers if the cluster runs out of compute resources to meet the developer’s demands for new container-based application deployments.
- Granular Access Controls – a tenant admin can define access controls on a cluster to dictate who is able to deploy container-based applications to it. For example, a developer may register his/her local machine and mark it as private. A tenant admin, on the other hand, may share a cluster with a specific group of users or with all tenant users.
In addition to the advanced infrastructure provisioning & clustering capabilities, DCHQ simplifies the containerization of enterprise applications through an advanced application composition framework that extends Docker Compose with cross-image environment variable bindings, extensible BASH script plug-ins that can be invoked at request time or post-provision, and application clustering for high availability across multiple hosts or regions with support for auto scaling.
Once an application is provisioned, a user can monitor the CPU, Memory, & I/O of the running containers, get notifications & alerts, and perform day-2 operations like Scheduled Backups, Container Updates using BASH script plug-ins, and Scale In/Out. Moreover, out-of-box workflows that facilitate Continuous Delivery with Jenkins allow developers to refresh the Java WAR file of a running application without disrupting the existing dependencies & integrations.
In this blog, we will go over the deployment of 10,000 Nginx containers in less than 45 minutes on a cluster of 30 cloud servers on Rackspace, each with 4GB of Memory. We will cover:
- Building the application template for the clustered Nginx that can be re-used on any Linux host running anywhere
- Provisioning the underlying infrastructure on any cloud (with Rackspace being the example in this blog)
- Deploying the Nginx cluster programmatically using DCHQ’s REST APIs
- Monitoring the CPU, Memory & I/O of the Running Containers
In order to simulate a realistic scenario for an enterprise deploying 10,000 Docker Nginx containers, we used the following configuration:
- We created 10 different users on DCHQ.io
- We created 10 Clusters, each having 3 Cloud Servers on Rackspace. Each Cloud Server had 4GB of Memory and 2 CPUs.
- Each of the 10 users was assigned one of the Clusters as the default cluster for application deployment.
- The application template was shared with all 10 users.
In total, the test pool comprised 30 Cloud Servers (10 clusters x 3 servers) with 120GB of memory combined – so a successful run averages roughly 333 containers per 4GB host.
Building the Application Template for the Nginx Cluster
Once logged in to DCHQ (either the hosted DCHQ.io or on-premise version), a user can navigate to Manage > Templates and then click on the + button to create a new Docker Compose template.
We have created a simple Nginx Cluster for the sake of this scalability test. You will notice that the cluster_size parameter allows you to specify the number of containers to launch (with the same application dependencies).
The mem_min parameter allows you to specify the minimum amount of Memory you would like to allocate to the container.
The host parameter allows you to specify the host you would like to use for container deployments. That way you can ensure high availability for your application server clusters across different hosts (or regions), and you can apply affinity rules to ensure, for example, that the database runs on a separate host. Here are the values supported for the host parameter:
- host1, host2, host3, etc. – selects a host randomly within a data-center (or cluster) for container deployments
- <IP Address 1, IP Address 2, etc.> – allows a user to specify the actual IP addresses to use for container deployments
- <Hostname 1, Hostname 2, etc.> – allows a user to specify the actual hostnames to use for container deployments
- Wildcards (e.g. “db-*”, or “app-srv-*”) – to specify the wildcards to use within a hostname
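Putting these parameters together, here is a minimal, Compose-style sketch of what a clustered Nginx template might look like. The parameter names (cluster_size, mem_min, host) come from the description above; the exact keys, value formats and overall syntax of a real DCHQ template may differ.

```yaml
# Hypothetical sketch of a clustered Nginx template in a Compose-style layout.
# cluster_size, mem_min and host are the DCHQ parameters described above;
# the value formats shown here are illustrative assumptions.
nginx:
  image: nginx:latest
  cluster_size: 10   # launch 10 identical Nginx containers per deployment
  mem_min: 50m       # minimum amount of memory to allocate to each container
  host: host1        # pick a host at random within the cluster
```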
Provisioning the Underlying Infrastructure on Any Cloud
Once an application template is saved, a user can register a Cloud Provider to automate the provisioning and auto-scaling of clusters on 13 different cloud end-points including vSphere, OpenStack, CloudStack, Amazon Web Services, Rackspace, Microsoft Azure, DigitalOcean, HP Public Cloud, IBM SoftLayer, Google Compute Engine, and many others.
First, a user can register a Cloud Provider for Rackspace (for example) by navigating to Manage > Repo & Cloud Provider and then clicking on the + button to select Rackspace. The Rackspace API Key needs to be provided – which can be retrieved from the Account Settings section.
A user can then create a cluster with an auto-scale policy to automatically spin up new Cloud Servers. This can be done by navigating to Manage > Clusters page and then clicking on the + button. You can select a capacity-based placement policy and then Weave as the networking layer in order to facilitate secure, password-protected cross-container communication across multiple hosts within a cluster.
A user can now provision a number of Cloud Servers on the newly created cluster by navigating to Manage > Hosts and then clicking on the + button to select Rackspace. Once the Cloud Provider is selected, a user can select the region, size and image needed. A Cluster is then selected and the number of Cloud Servers can be specified.
Deploying the Nginx Cluster Programmatically Using DCHQ’s REST APIs
Once the Cloud Servers are provisioned, a user can deploy the Nginx cluster programmatically using DCHQ’s REST APIs. To simplify the use of the APIs, a user will need to select the cluster created earlier as the default cluster. This can be done by navigating to User’s Name > My Profile, and then selecting the default cluster needed.
Once the default cluster is selected, then a user can simply execute the following curl script that invokes the “deploy” API (https://dchq.readme.io/docs/deployid).
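A minimal sketch of such a script is shown below. The endpoint path and HTTP method are assumptions inferred from the linked documentation rather than the documented contract, the passwords are placeholders, and <id> must be replaced with the actual application ID.

```bash
#!/bin/bash
# Sketch of the deploy loop (endpoint path and HTTP method are assumptions
# based on https://dchq.readme.io/docs/deployid; passwords are placeholders).
for i in $(seq 1 100); do          # 100 iterations
  for u in $(seq 1 10); do         # 10 users, each with its own default cluster
    # user${u}%40dchq.io is user${u}@dchq.io with the @ percent-encoded as %40;
    # the @ between the password and the host is left as-is.
    curl -X POST "https://user${u}%40dchq.io:password@dchq.io/api/1.0/apps/deploy/<id>"
  done
  sleep 22                         # 100 x 22s = 2,200 seconds, roughly 37 minutes
done
```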
In this simple curl script, we have the following:
- A for loop, from 1 to 100
- With each iteration we’re deploying the 10-node (container) Nginx cluster application using the default cluster assigned to each of the 10 users. This means that each iteration will deploy 10x10 (or 100) containers across 10 different clusters.
- user1%40dchq.io is used for user1@dchq.io, where the @ symbol is percent-encoded as %40
- The @ between the password and the host is not percent-encoded
- <id> refers to the Nginx cluster application ID. This can be retrieved by navigating to Library > Customize for the Nginx cluster; the ID appears in the URL
- A sleep 22 is used between iterations. Across the 100 iterations, this accounts for 2,200 seconds – or roughly 37 minutes
You can try out this curl script yourself. You can either install DCHQ On-Premise (http://dchq.co/dchq-on-premise.html) or sign up on DCHQ.io Hosted PaaS (http://dchq.io).
Monitoring the CPU, Memory & I/O Utilization of the Cluster, Servers & Running Containers
DCHQ allows users to monitor the CPU, Memory, Disk and I/O of the clusters, hosts and containers.
- To monitor clusters, you can just navigate to Manage > Clusters
- To monitor hosts, you can just navigate to Manage > Hosts > Monitoring Icon
- To monitor containers, you can just navigate to Live Apps > Monitoring Icon
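As an aside, the per-host numbers in DCHQ’s charts can be spot-checked on any registered host with the stock Docker CLI. The commands below are generic Docker commands, not DCHQ features:

```bash
# Generic Docker CLI spot checks on a single host (not DCHQ-specific):
docker ps -q | wc -l                  # number of running containers on this host
docker images -q | sort -u | wc -l    # number of distinct images pulled locally
docker stats --no-stream              # one-shot CPU / memory / I/O snapshot per container
```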
We tracked the performance of the hosts and cluster before and after we launched the 10,000 containers.
Before spinning up the containers, we’ve captured a screenshot of the performance charts for the hosts. You can see that the CPU utilization was negligible and the Memory utilization was at 16%.
The following screenshot is from the Rackspace account showing the 30 VMs after being successfully provisioned. We had provisioned 3 hosts per DCHQ user across the 10 clusters (30 in total) using the same Rackspace account.
After spinning up 5,000 containers, we’ve captured screenshots of the performance charts for the hosts. You can see that the highest Memory utilization was at 48%.
After reaching 6,000 containers, we drilled down into one of the 3 hosts in one of the clusters to see more details like the # of containers running on that particular host, the number of images pulled and of course, the CPU/Memory/Disk Utilization.
After spinning up 10,000 containers, we’ve captured screenshots of the performance charts for the hosts. You can see that the highest Memory utilization was at 74%.
When we drilled down into one of the 3 hosts in one of the clusters, we saw more details like the # of containers running on that particular host, the number of images pulled and of course, the CPU/Memory/Disk Utilization.
Here’s a view of all 1,000 running Nginx clusters (where each cluster had 10 containers).
After we deleted all our container-based applications, we captured other screenshots for the cluster. The Memory Utilization was at 19%.
We then drilled down into one of the servers to view the historical performance – where the Memory Utilization decreased from close to 75% all the way down to 19%.
Failure Rate
Only 4 out of the 10,000 containers failed to provision during this test – putting the failure rate at 0.04%.
Conclusion
Orchestrating Docker-based application deployments is still a challenge for many DevOps engineers and infrastructure operators, as they often struggle to manage pools of servers across multiple development teams where access controls, monitoring, networking, capacity-based placement, auto-scale-out policies and quotas are key aspects that need to be configured.
DCHQ, available in hosted and on-premise versions, addresses all of these challenges and provides the most advanced infrastructure provisioning, auto-scaling, clustering and placement policies for infrastructure operators or DevOps engineers.
In addition to the advanced infrastructure provisioning & clustering capabilities, DCHQ simplifies the containerization of enterprise applications through an advanced application composition framework that extends Docker Compose with cross-image environment variable bindings, extensible BASH script plug-ins that can be invoked at request time or post-provision, and application clustering for high availability across multiple hosts or regions with support for auto scaling.
Sign Up for FREE on http://DCHQ.io or download DCHQ On-Premise to get access to out-of-box multi-tier Java application templates along with application lifecycle management functionality like monitoring, container updates, scale in/out and continuous delivery.
Published at DZone with permission of Amjad Afanah, DZone MVB.
Opinions expressed by DZone contributors are their own.