In this two-part write-up, I'm going to look at container networking and how we enable it as part of the Cloud 66 service. This first piece breaks down the main components of the network, and my follow-up tomorrow will take a more detailed look at container port mapping and traffic routing.
The Need for Networking
While Docker is ideal for creating distributed applications that run in the cloud, the containers and the services running inside them still need a way to communicate with one another.
The thing to remember here is that each of your container-based microservices behaves like a self-contained site, which other containers call via an API to get work done. For this reason, a secure and reliable container network layer is fundamental to your architecture.
Supported by ContainerNet
One of the service components we've built for Cloud 66 is ContainerNet, a private and secure network based on Weave. Weave is the common container ‘glue’ between all your servers and the components of your stack, including databases. The networking technology we implement creates a virtual network to connect containers across multiple hosts, in order to enable automatic service discovery.
ContainerNet allows resources outside your containers to reach them over a secure connection as if they were on the same local network, and vice versa. The network provides an internal IP address to each container, updates automatically via DHCP and DNS, and is fully integrated with the life-cycle management of your services.
As covered in a previous article, Cloud 66 provides DNS hostnames for each server you deploy through the service, which allows us to assign a new IP address to your application on your behalf if need be, while still maintaining the same hostname.
Integrated With ElasticDNS
ElasticDNS sits on top of ContainerNet to provide simple DNS-based service discovery. It consists of two parts: a small client and a central service. The client has a DNS server and a local cache, and runs in a container on your server(s). It serves DNS queries ending with .cloud66.local by querying the central service and caching the results for their TTL duration, so you can simply call api.cloud66.local to contact a container running your API service.
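To make the client's behavior concrete, here's a minimal sketch of a TTL-based lookup cache like the one described above. Everything here is illustrative: the names (CachingResolver, lookup_central) and the returned address and TTL are invented for the example and are not Cloud 66 APIs.

```python
import time

def lookup_central(hostname):
    # Stand-in for the round trip to the central ElasticDNS service,
    # which would return an (ip_address, ttl_seconds) pair.
    return ("10.0.1.12", 30)

class CachingResolver:
    """Caches answers locally and only asks the central service on a miss."""

    def __init__(self, central_lookup=lookup_central):
        self._central_lookup = central_lookup
        self._cache = {}  # hostname -> (ip, expiry timestamp)

    def resolve(self, hostname):
        if not hostname.endswith(".cloud66.local"):
            raise ValueError("only *.cloud66.local names are handled here")
        now = time.monotonic()
        entry = self._cache.get(hostname)
        if entry and entry[1] > now:
            return entry[0]  # cache hit, still within its TTL
        ip, ttl = self._central_lookup(hostname)
        self._cache[hostname] = (ip, now + ttl)  # cache for the TTL duration
        return ip

resolver = CachingResolver()
resolver.resolve("api.cloud66.local")  # first call goes to the central service
resolver.resolve("api.cloud66.local")  # second call is served from the cache
```

The key property is that repeated lookups within the TTL never leave the local container, which keeps service discovery fast and cheap.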
As ElasticDNS is centrally backed, it also knows about the caller, which is important when you have multiple versions of your application running at the same moment (for example, during deployment). When a deployment happens, the load balancer for the externally available services is instructed to switch new traffic to the new containers while still serving existing traffic from the old containers.
ElasticDNS is clever enough to know which version of the app is running in a container. So if an old web container asks for api.cloud66.local, it'll be given the address of an old API container, but if a new web container asks for the same name, it will get the address of a new API container.
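The version-aware behavior can be sketched as a lookup keyed on both the service name and the caller's deployment version. This is a simplified illustration under assumed data: the registry contents, version labels, and function name are all hypothetical, not how Cloud 66 stores this internally.

```python
# Hypothetical registry: which container address serves each
# (service, deployment version) pair. Addresses are invented.
SERVICE_REGISTRY = {
    ("api", "v1"): "10.0.1.12",  # old api container
    ("api", "v2"): "10.0.2.34",  # new api container
}

def resolve_for_caller(service, caller_version):
    # The central service knows the caller's own deployment version,
    # so an old web container asking for the api service is pointed at
    # an old api container, and a new one at a new api container.
    return SERVICE_REGISTRY[(service, caller_version)]

resolve_for_caller("api", "v1")  # old caller -> old api container
resolve_for_caller("api", "v2")  # new caller -> new api container
```

Keeping old and new versions resolving only within their own generation is what lets a deployment drain gracefully instead of mixing incompatible versions mid-flight.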
With Built-in Encryption
Weave includes a secure, performant authenticated encryption mechanism, which we configure automatically on your behalf, so you don't have to set up any custom encryption yourself.
That's part one complete. Check in again tomorrow, as I'll be publishing the follow-up to this piece. In part two, I'll summarize service configuration options and how we enable port mapping to connect traffic from inside your container with the outside world.
Thanks, and happy coding.