Docker Swarm Lessons from Swarm3K

150,000 containers. Five continents. See the methods, challenges, and lessons learned from trying to set up a massive Docker Swarm.


This is a guest post by Prof. Chanwit Kaewkasi, the Docker Captain who organized Swarm3K – the largest Docker Swarm cluster to date.

Swarm3K Review

Swarm3K was the second collaborative project to form a very large Docker cluster with Swarm mode. It took place on Oct. 28, 2016, with more than 50 individuals and companies taking part.

Sematext was one of the very first companies to offer help, providing their Docker monitoring and logging solution. They became the official monitoring system for Swarm3K. Stefan, Otis, and their team gave us wonderful support from the very beginning.

Swarm3K public dashboard by Sematext


To my knowledge, Sematext is currently the only Docker monitoring company that lets you deploy its monitoring agents as a global Docker service. This deployment model greatly simplifies the monitoring setup.
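
Running an agent as a global service means Swarm places exactly one agent task on every node, including nodes that join later. A minimal sketch of the pattern, with a placeholder image name, service name, and token (the actual Sematext agent image and options are not shown here):

    # Run one monitoring-agent task on every node in the swarm (placeholder image and token).
    docker service create \
      --name monitoring-agent \
      --mode global \
      --mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
      -e MONITORING_TOKEN=$MONITORING_TOKEN \
      some-vendor/monitoring-agent:latest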

Swarm3K Setup and Workload

There were two planned workloads:

  1. MySQL with WordPress cluster
  2. C1M (one million containers)

The 25 nodes formed a MySQL cluster. We experienced some mixing of IP addresses from both the mynet and ingress networks. This was the same issue we had found when forming an Apache Spark cluster in the past (see https://github.com/docker/docker/issues/24637). We prevented it by binding the cluster to a single overlay network, as sketched below.
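
The workaround amounts to creating one dedicated overlay network and attaching the MySQL service only to it. The network name, image, and settings below are illustrative; a plain MySQL image stands in for the actual clustering setup we used:

    # Create a dedicated overlay network so tasks only see addresses from this one network.
    docker network create --driver overlay mysql-net

    # Attach the MySQL service to that single overlay network (illustrative image and settings).
    docker service create \
      --name mysql \
      --network mysql-net \
      --replicas 25 \
      -e MYSQL_ROOT_PASSWORD=changeme \
      mysql:5.7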

A WordPress node was scheduled somewhere on our huge cluster, and we intentionally didn't control where. When we tried to connect the WordPress node to the backend MySQL cluster, the connection kept timing out. We concluded that a WordPress/MySQL combo would only run correctly if we put both in the same data center.
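
One way to enforce that co-location with Swarm itself is a placement constraint on a node label. The label name, node name, and flags below are assumptions for illustration, not the exact setup we used:

    # Label the nodes of the chosen data center (node name is hypothetical).
    docker node update --label-add dc=dc1 node-dc1-01

    # Pin the WordPress service to that data center and to the MySQL overlay network.
    docker service create \
      --name wordpress \
      --constraint 'node.labels.dc == dc1' \
      --network mysql-net \
      wordpress:latest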

We aimed for 3,000 nodes, but in the end we successfully formed a working, geographically distributed 4,700-node Docker Swarm cluster.

Swarm3K Observations

We also learned from this issue that overlay network performance depends greatly on correctly tuning the network configuration on each host.
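
We did not settle on a single recipe, but the kind of per-host tuning in question looks roughly like the sketch below. The parameters and values are illustrative assumptions rather than a verified configuration; measure before and after on your own hosts:

    # Illustrative host tuning for large overlay networks (example values only).
    sysctl -w net.ipv4.neigh.default.gc_thresh1=4096    # enlarge the ARP/neighbor cache ...
    sysctl -w net.ipv4.neigh.default.gc_thresh2=8192    # ... so thousands of overlay peers fit
    sysctl -w net.ipv4.neigh.default.gc_thresh3=16384
    sysctl -w net.core.rmem_max=16777216                # larger socket buffers for VXLAN traffic
    sysctl -w net.core.wmem_max=16777216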

When the MySQL/WordPress test failed, we changed the plan to try NGINX on Routing Mesh.

The ingress network is a /16 network, so it supports up to 64K IP addresses. At Alex Ellis's suggestion, we then started 4,000 NGINX containers on the formed cluster. During this test, nodes were still coming and going. The NGINX service started and the Routing Mesh was formed. It kept serving correctly even as some nodes failed.
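
The test itself boils down to publishing a port on a replicated service, which is what activates the Routing Mesh. A minimal sketch (the exact image and flags we used may have differed):

    # Publish port 80 through the routing mesh on a 4,000-replica NGINX service.
    docker service create --name nginx --publish 80:80 --replicas 4000 nginx:alpine

    # Any node in the swarm now answers on port 80 and load-balances to the NGINX tasks,
    # even nodes that are not running an NGINX task themselves.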

We concluded that the Routing Mesh in 1.12 is rock solid and production ready.

We then stopped the NGINX service and started to test the scheduling of as many containers as possible.

This time we simply used "alpine top" as we did for Swarm2K. However, the scheduling rate was quite slow: we reached 47,000 containers in approximately 30 minutes, so filling the cluster with 1M containers would have taken ~10.6 hours. Because that would take too long, we decided to shut down the managers, as there was no point in going further.

Swarm3K Task Status


Scheduling a huge batch of containers stressed the cluster. We scheduled the launch of a large number of containers with "docker service scale alpine=70000". This created a large scheduling queue that would not commit until all 70,000 containers had finished scheduling. That is why, when we shut down the managers, all scheduling tasks disappeared and the cluster became unstable: the Raft log got corrupted.
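
For reference, the whole workload amounts to the two commands sketched below; the service name follows the example above, and scaling in smaller increments would presumably have kept the queue shorter, though we did not verify that here:

    # A do-nothing workload: each task runs "top" inside an Alpine container.
    docker service create --name alpine alpine top

    # Ask the scheduler for 70,000 replicas in a single batch.
    docker service scale alpine=70000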

One of the most interesting things was that we were able to collect enough CPU profile information to show us what was keeping the cluster busy.
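
The profiles came from the Go pprof endpoints that dockerd exposes when it runs with debugging enabled (which lesson 3 below warns against in production, so do this on a test manager). The duration and output paths here are illustrative:

    # Capture a 30-second CPU profile from the daemon over the Docker socket.
    curl --unix-socket /var/run/docker.sock \
      -o dockerd.pprof \
      "http://localhost/debug/pprof/profile?seconds=30"

    # Render a call graph with the standard Go tooling (graphviz required for SVG output).
    go tool pprof -svg $(which dockerd) dockerd.pprof > dockerd-profile.svg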

dockerd-flamegraph-01

Here we can see that only 0.42% of the CPU was spent on the scheduling algorithm. I think we can say with certainty: 

The Docker Swarm scheduling algorithm in version 1.12 is quite fast.

This means that there is an opportunity to introduce a more sophisticated scheduling algorithm that could result in even better resource utilization.

dockerd-flamegraph-02

We found that a lot of CPU cycles were spent on node communication. Here we see Libnetwork's memberlist layer, which used ~12% of the overall CPU.

dockerd-flamegraph-03

Another major CPU consumer was Raft communication, which also triggered garbage collection here. This used ~30% of the overall CPU.

Docker Swarm Lessons Learned

Here’s the summarized list of what we learned together.

  1. For a set of nodes this large, managers require a lot of CPU. CPU usage spikes whenever the Raft recovery process kicks in.
  2. If the leading manager dies, you had better stop the Docker daemon on that node and wait until the cluster becomes stable again with n-1 managers.
  3. Don't use "dockerd -D" (debug mode) in production. Of course, I know you won't do that.
  4. Keep snapshot retention as small as possible; the default Docker Swarm configuration will do. Persisting Raft snapshots uses extra CPU (see the sketch after this list).
  5. Thousands of nodes require a huge amount of resources to manage, both in terms of CPU and network bandwidth. In contrast, hundreds of thousands of tasks require high-memory nodes.
  6. 500-1,000 nodes are recommended for production. I'm guessing you won't need more than this in most cases, unless you're planning on being the next Twitter.
  7. If managers seem to be stuck, wait for them. They'll recover eventually.
  8. The parameter --advertise-addr is mandatory for the Routing Mesh to work (see the sketch after this list).
  9. Put your compute nodes as close to your data nodes as possible. The overlay network is great, but you'll need to tweak the Linux network configuration on all hosts to make it work best.
  10. Despite slow scheduling, Docker Swarm mode is robust. There were no task failures this time, even with an unpredictable network connecting this huge cluster together.
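
To make lessons 4 and 8 concrete, here is a short sketch of the relevant flags. The address and values are examples, and the snapshot flags may not exist on the oldest Swarm-mode releases:

    # Lesson 8: advertise an address the other nodes can actually reach,
    # otherwise the Routing Mesh (and gossip) will not work across hosts.
    docker swarm init --advertise-addr 203.0.113.10

    # Lesson 4: keep Raft snapshot retention small (example values; the defaults are fine too).
    docker swarm update --max-snapshots 1 --snapshot-interval 10000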

Credits

Finally, I would like to thank all Swarm3K heroes: @FlorianHeigl, @jmaitrehenry from PetalMD, @everett_toews from Rackspace, Internet Thailand, @squeaky_pl, @neverlock, @tomwillfixit from Demonware, @sujaypillai from Jabil, @pilgrimstack from OVH, @ajeetsraina from Collabnix, @AorJoa and @PNgoenthai from Aiyara Cluster, @f_soppelsa, @GroupSprint3r, @toughIQ, @mrnonaki, @zinuzoid from HotelQuickly, @_EthanHunt_, @packethost from Packet.io, @ContainerizeT – ContainerizeThis The Conference, @_pascalandy from FirePress, @lucjuggery from TRAXxs, @alexellisuk, @svega from Huli, @BretFisher, @voodootikigod from Emerging Technology Advisors, @AlexPostID, @gianarb from ThumpFlow, @Rucknar, @lherrerabenitez, @abhisak from Nipa Technology, and @enlamp from NexwayGroup.

I would like to thank Sematext again for the best-of-class Docker monitoring system, DigitalOcean for providing all resources for huge Docker Swarm managers, and the Docker Engineering team for making this great software and supporting us during the run.

While this time around we didn’t manage to launch all 150,000 containers we wanted to have, we did manage to create a nearly 5,000-node Docker Swarm cluster distributed over several continents. The lessons we’ve learned from this experiment will help us launch another huge Docker Swarm cluster next year.  Thank you all, and I’m looking forward to the new run!


Topics:
docker swarm ,orchestration tool ,cloud ,infrastructure as code

Published at DZone with permission of Chanwit Kaewkasi. See the original article here.

Opinions expressed by DZone contributors are their own.
