About Choria Network Federation
Learn about the networking model across geographically distributed data centers that Choria takes advantage of and how NATS has enabled simple, scalable network operations for Choria.
R.I. Pienaar is the creator of MCollective, which was sold to Puppet Labs. He is now working on Choria, a new set of utilities and tools to greatly simplify the MCollective configuration and orchestration process. In this post, he shares the networking model across geographically distributed data centers that Choria takes advantage of and how NATS has enabled simple, scalable network operations for Choria.
Running large or distributed MCollective networks has always been a pain. As much as middleware is an enabler, it starts actively working against you as you grow and as latency increases — and this is felt especially when you have geographically distributed networks.
Federation has been discussed often in the past but nothing ever happened. NATS ended up forcing my hand because it only supports a full mesh mode, something that would not be suitable for a globe-spanning network.
I spent the last week or two building in Federation, first into the Choria network protocol, and later adding a Federation Broker. Federation can be used to connect entirely separate collectives into one from the perspective of a client.
Consider a distributed Federation of Collectives where London, Tokyo, and New York are entirely standalone collectives. They are smaller, they have their own middleware infrastructure, and they each function just like a normal collective; clients can communicate with those isolated collectives like always.
In each region, I run a five-node NATS mesh. We then add a Federation Broker cluster (I'd suggest one instance on every NATS box), and these provide bridging services to a central Federation network.
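A regional NATS mesh of this kind is just a standard NATS cluster. A minimal configuration sketch for one member of a five-node London mesh might look like this (the hostnames and ports are illustrative, not from the article):

```conf
# Hypothetical nats-server config for one member of a five-node
# regional mesh; every node lists its peers as routes.
port: 4222

cluster {
  listen: 0.0.0.0:4248
  routes = [
    nats-route://nats2.london.example.net:4248
    nats-route://nats3.london.example.net:4248
    nats-route://nats4.london.example.net:4248
    nats-route://nats5.london.example.net:4248
  ]
}
```

Each node carries the same cluster block with the other four peers listed, giving the full mesh NATS requires within a region while Federation handles the inter-region links.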
Clients that connect to the central Federation network and are configured correctly will interact with all the isolated collectives as if they were one. All current MCollective features keep working, and Sub-Collectives can span the entire Federation.
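On the client side, joining the Federation amounts to listing the member collectives. A sketch of the relevant client configuration, assuming the `plugin.choria.federation.collectives` setting name from the pre-release docs and illustrative collective names:

```ini
# client.cfg sketch: setting name taken from the Choria pre-release
# docs; the collective names are illustrative.
plugin.choria.federation.collectives = london, tokyo, new_york
```

With this set, requests fan out through the Federation Brokers to every listed collective; without it, the client behaves as a normal single-collective client.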
The advantages in large networks are obvious. Instead of one giant 100,000-node middleware network, you now build ten 10,000-node networks, which is a lot easier to do; with NATS, it's more or less trivial.
Not so obvious is how this scales with regard to MCollective. MCollective has a mode called Direct Addressing, where the client creates one message for every node targeted in the request. Very large requests are generally discouraged, so this works okay.
Each of these messages, created on the client, has to travel individually all across the globe, and this is where it starts to hurt.
With Federation, though, since the Federation Brokers are Choria Network Protocol-aware, the client divides the task of producing these per-node messages into groups of 200 and passes each group to the Federation Broker Cluster, which then does the work for the client in a load-shared fashion. Since the Federation Brokers tend to be near the individual Collectives, this yields a massive reduction in work and traffic. The Federation Broker Instances are entirely state-free, so you can run as many as you like and they will share the workload more or less evenly.
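The batching and load-sharing described above can be sketched as follows. This is a minimal illustration, not Choria's actual implementation; the function names and round-robin assignment are assumptions for the sketch:

```python
from itertools import islice

def chunk_targets(targets, size=200):
    """Split a direct-addressed target list into groups of `size`,
    mirroring how the client hands per-node work to the brokers."""
    it = iter(targets)
    while True:
        batch = list(islice(it, size))
        if not batch:
            return
        yield batch

def share_load(batches, broker_count):
    """Assign batches across stateless broker instances round-robin;
    because the brokers keep no state, any instance can take any batch."""
    assignments = {i: [] for i in range(broker_count)}
    for n, batch in enumerate(batches):
        assignments[n % broker_count].append(batch)
    return assignments

# 1,000 targeted nodes become five groups of 200, shared over 3 brokers
nodes = [f"node{i}.example.net" for i in range(1000)]
batches = list(chunk_targets(nodes))
work = share_load(batches, broker_count=3)
```

The client's cost drops from producing and sending 1,000 individual messages across the globe to handing off five batches; the brokers near each collective then expand the batches into per-node messages locally.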
In my tests against large collectives, this speeds up requests significantly and greatly reduces client load. In the simple broadcast case there is no speed-up, but when doing 10,000 requests in a loop, the overhead of Federation was about two seconds across the 10,000 requests (roughly 0.2 ms per request), so hardly noticeable.
The Choria protocol supports Federation in a way that is not tied to its specific Federation Broker implementation. The basic POC Federation Broker was around 200 lines, so not really a great challenge to write. I imagine that, in time, we might see a few options here:
- You can use different CAs in various places in your Federated network. A Federation Broker using Choria Security super-user certificates can provide user ID mapping and rewriting between the Collectives.
- If you want to build a SaaS management service on top of Choria, a Federated network is a really safe way to reach into managed networks without exposing the collectives to each other in any way. A client in one member Collective cannot use the Federation Brokers to access another Collective.
- Custom RBAC and auditing schemes can be built at the Federation Broker layer, where requests can be introspected and only those matching policy are passed to the managed Collective.
- Federation is tailor-made to provide protocol translation, so Collectives speaking different protocols can be bridged together. An older MCollective SSL-based collective can be reached from a Choria collective via a Federation Broker providing translation capabilities. Similarly, a WebSocket interface to Collectives could be a Federation Broker listening on WebSocket while speaking NATS on the other end.
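The brokers can stay entirely state-free because everything needed to route a message travels inside the message itself. A minimal sketch of that forwarding step, with a callback standing in for a NATS publish and header names that are illustrative, not Choria's actual wire format:

```python
def federate(msg, publish):
    """Forward one message using only data carried in the message;
    the broker holds no routing tables or session state.
    Header names here are illustrative, not Choria's wire format."""
    headers = msg["headers"]
    # The next hop is recorded in the message itself, so any broker
    # instance can process it and then forget it entirely.
    target = headers["federation"]["target"]
    # Copy the reply path into the federation headers so responses
    # can find their way back across the bridge.
    headers["federation"]["reply-to"] = headers["reply-to"]
    publish(target, msg)

# An in-memory list standing in for a NATS connection
sent = []
msg = {
    "headers": {
        "reply-to": "choria.reply.client123",
        "federation": {"target": "choria.federation.london.federation"},
    },
    "body": "rpc request",
}
federate(msg, lambda subject, m: sent.append((subject, m)))
```

Because no instance remembers anything between messages, adding or losing a broker instance never loses routing information, which is what makes the horizontal scaling and no-SPOF properties fall out naturally.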
The security implications are huge: isolated collectives with isolated CAs, and unique user auditing, authorization, and authentication needs, bridged together via a custom RBAC layer that is horizontally scalable, are quite a big deal. And I needed to do this in a way where the Federation would not be a SPOF.
Protocol translation is equally massive. As I look at ways to fork MCollective, given the lack of cooperation from Puppet, Inc., this gives me a very solid way to move forward without throwing away people's investments in older MCollective.
This will be released in version 0.0.25 of the Choria module, which should be sometime this week; I've published pre-release docs already. Expect it to be deployable with very little effort via Puppet: given a good DNS setup, it needs almost no configuration at all.
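A "good DNS setup" here means SRV records that let nodes and brokers discover the middleware without per-host configuration. A hypothetical zone-file fragment, where the record names follow the `_mcollective-*._tcp` convention and all hosts are illustrative:

```zone
; Hypothetical zone fragment: hosts and names are illustrative of
; the kind of SRV records Choria can discover instead of config.
_mcollective-server._tcp.london.example.net.            300 IN SRV 0 0 4222 nats1.london.example.net.
_mcollective-server._tcp.london.example.net.            300 IN SRV 0 0 4222 nats2.london.example.net.
_mcollective-federation_server._tcp.london.example.net. 300 IN SRV 0 0 4222 fed1.london.example.net.
```

With records like these in place, machines in a region find both their local NATS mesh and their Federation Brokers by lookup alone.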
I’ll make a follow-up post that explores the network protocol that made this possible to build with zero stored state in the Federation Broker Instances — a major achievement in my book.
Published at DZone with permission of R.I. Pienaar.