Microservices Networking With Nirmata And Docker
Nirmata’s mission is to fully automate the operations and management of multi-cloud applications packaged in containers.
Docker, the popular application container technology, has networking features that provide the basic building blocks for an orchestration system to provision and manage multi-host networks for complex applications, without requiring any new overlay protocols or devices.
In this post, I will describe how Nirmata uses default Docker networking to enable containers to communicate across multiple hosts, and to provide several advanced value-added features such as service registration and discovery, dynamic and distributed load balancing, service gateway functions, and programmable routing.
Docker Networking Basics
By default, the Docker engine sets up a virtual interface on the host machine using a private address space. This interface acts as a bridge, forwarding packets across all interfaces connected to it. When you run a Docker container, its interface attaches to the bridge and can communicate with the host interface, which also allows containers to talk to each other. To enable external connectivity, you can specify port mappings for the container. For each port mapping configured, Docker automatically creates a NAT rule that maps the container port to a host port.
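For example, here is roughly how this looks with plain Docker commands (the image name and ports are arbitrary choices for illustration; a running Docker daemon is assumed):

```shell
# Run a container, mapping container port 80 to host port 8080.
# Docker attaches the container to the default bridge (docker0)
# and installs a NAT rule for the mapping.
docker run -d --name web -p 8080:80 nginx

# The container gets a private address on the bridge network:
docker inspect --format '{{.NetworkSettings.IPAddress}}' web

# The NAT port mapping Docker created:
docker port web
# 80/tcp -> 0.0.0.0:8080
```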
There are several options you can use to tune the basic setup. See the Docker documentation for details.
As a best practice, cloud native applications should be decoupled from the underlying infrastructure (see cloud native application maturity model).
When application containers are deployed across a pool of hosts, we don’t know which IP address and port will get assigned to a container. So how can application services running in containers interconnect? And how do external systems address application services running in a container?
The Nirmata Solution
At Nirmata, our solution design goals were:
- Easy to use and manage at scale
- Works with standard Docker networking
- Works on any public or private cloud
- Allows administrators full visibility and control of the host networks
- Does not require an overlay protocol, external controllers, or devices
Based on your application, here are two approaches you can leverage in Nirmata to interconnect services:
Service Dependency Injection
Nirmata’s Application Blueprints allow modeling of dependencies across services. A dependency is used to manage service deployment order but, more importantly, also provides each service with important runtime information about the others. Below is an example, where I have configured a Service Dependency and injected information across services.
In my application blueprint, the “orders” service depends on the “catalog” service:
In the run settings for my orders service, I am injecting two variables that are dynamically assigned and resolved:
- CATALOG_ADDRESS: the IP Address of the catalog service instance
- CATALOG_PORT: the host port for the HTTP port of the catalog service instance
When I deploy my application to an environment, Nirmata will first deploy the catalog service, and then deploy the orders service, passing in the IP Address and HTTP port of the catalog service into the container.
Once the services are deployed, I can connect to the hosts and view the environment variables in my containers:
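As a sketch of what this looks like from inside the orders container (the variable values here are hypothetical; Nirmata assigns the real ones at deployment time):

```shell
# Hypothetical values of the injected variables:
CATALOG_ADDRESS=10.0.1.5
CATALOG_PORT=49153

# The orders application can compose the catalog endpoint from them:
CATALOG_URL="http://${CATALOG_ADDRESS}:${CATALOG_PORT}"
echo "$CATALOG_URL"
# http://10.0.1.5:49153
```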
My orders service can now use this information, from within the application container, to connect to the catalog service. You can read more about Nirmata’s Service Dependency Injection feature in the Nirmata documentation.
Service dependency injection works great for some applications. However, there are a few limitations to consider:
- Dependencies need to be managed across restarts. Nirmata helps with this, by providing a service restart option that optionally recomputes the service dependencies. But it is something you need to be aware of and manage in your application.
- If there are multiple instances of application services, it gets tricky to inject dependencies and manage connections in the application.
For dynamic and distributed applications, the Nirmata Service Networking feature suite may be a better choice. These features are described below:
Service Naming, Registration, and Discovery
Each Service in Nirmata has a name that is unique within the application. The service names, application names, and environment names are DNS compliant.
As service instances are deployed, Nirmata automatically tracks the runtime data for each service and populates this in a distributed service registry that is managed across hosts, within the Nirmata Host Agent container.
Enabling Service Networking is easy – it’s a single checkbox!
When Service Networking is enabled, Nirmata will resolve DNS and TCP requests originating from the application container. Only DNS requests that end with the “.local” domain, and TCP requests for declared Service Ports, are handled by Nirmata; all other requests are propagated upstream.
Dynamic Load Balancing
When Service Networking is enabled, applications can connect to each other using service names. For example, the orders service can connect to the catalog service using the well-known name: catalog.shopme.local.
As shown in the CMD shell output, an HTTP/S request can simply be made as “https://catalog.shopme.local”, and Nirmata will dynamically resolve the IP address and port for the service. If multiple instances of the catalog service are running, Nirmata will automatically load balance requests across them, keeping track of instances that are added, deleted, or unreachable. The service load balancing is also fully integrated with service health checks.
For other TCP protocols, you can connect using the service name and the well-known Service Port, which is the port you configure as the Container Port in the application blueprint. For example: service.app.local:3596
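Putting the naming convention together: a service name is composed as service.application.local, and a client simply connects to that name (the service and application names below are the examples from this post):

```shell
# Compose the well-known DNS name for a service:
service=catalog
app=shopme
name="${service}.${app}.local"
echo "$name"
# catalog.shopme.local

# From inside an application container, a client would then connect
# to that name (this part requires the Nirmata resolver), e.g.:
# curl "https://${name}/"          # HTTP/S by service name
# nc "$name" 3596                  # another TCP protocol, by Service Port
```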
As instances are stopped or started, Nirmata keeps track of the placement and will automatically update the service data on each (relevant) host.
It is important to note that all of this happens for east-west traffic, across service instances and hosts, in the application tier. There is no hair-pinning of traffic to an external load balancer or proxy. Load balancing of client requests (north-south) is also available, via the Service Gateway.
Service Gateway
A Service Gateway provides HTTP request routing as well as load-balancing capabilities, and is used as the entry point to a microservices-style application. Most load balancers provide server- or host-name-based load balancing. However, a Service Gateway solves a slightly different problem:
For a microservices-style application, a client connects to a backend service, but requests from that client may need to be routed to different services within the application. A Service Gateway uses information in the HTTP request to determine which backend application service should be targeted. Once the application service is selected, the Service Gateway must choose an available instance and resolve its IP address and port.
For example, in the figure below the application has three services, and each service has multiple instances. The Service Gateway acts as a single client endpoint for all front-end services. This allows a single client endpoint to dynamically address multiple services, on the same connection, using HTTP information such as the URL path.
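The URL-path routing described above can be sketched as a simple rule table. This shell function only illustrates the logic (the paths and backend names are hypothetical), not Nirmata’s actual implementation:

```shell
# Map a request's URL path prefix to a backend service name,
# mimicking what a Service Gateway does before load balancing:
route() {
  case "$1" in
    /catalog/*) echo "catalog.shopme.local"  ;;
    /orders/*)  echo "orders.shopme.local"   ;;
    *)          echo "frontend.shopme.local" ;;
  esac
}

route /catalog/items/42
# catalog.shopme.local
route /orders/1001
# orders.shopme.local
```

Once a backend service is selected this way, the gateway still has to pick an available instance and resolve its IP address and port, as described above.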
Here is the corresponding Service Gateway configuration in Nirmata:
In addition to HTTP URL routing and load balancing, Nirmata’s Service Gateway also supports DNS routes to handle WebSocket connections to a service. Port-based TCP routing at the Service Gateway is currently in alpha testing and expected to be released soon.
For more information, see: http://docs.nirmata.io/en/latest/Services.html
Programmable Routing
Nirmata allows fine-grained control of which service instances can communicate with each other. You can allow or deny traffic across services and versions using the Routing Policy in an Application Blueprint or Environment.
Here I am configuring two deny rules: one to block traffic from catalog to orders, and the other to block traffic from the gateway to orders. Note that I have not chosen a tag, but could use one to control traffic to different versions of services in my environment:
Nirmata provides a rich set of features to make it easy to network application containers. Nirmata uses Docker’s standard networking, and works on any public or private cloud. With the Nirmata solution, administrators have full control of the host networking and security.
Nirmata extends Docker’s networking to provide Service Registration and Discovery, Dynamic Load Balancing, Service Gateways, and programmable routing.
Docker networking is evolving in Release 1.7. At Nirmata we are keeping a close watch on the Container Network Model, and are looking forward to further enhancements that support an even richer set of use cases.
Let me know if you have any questions, comments, or want more information on Nirmata.
You can try Nirmata’s Service Networking for free at: http://signup.nirmata.io
Follow us: @NirmataCloud