Over the past few years I have had opportunities to learn, adopt, and extend different configuration management systems, and I have used them for a wide variety of use cases: from conventional enterprise-wide change management to (more recently) enabling continuous delivery. Each of these use cases points to certain strengths and pitfalls of individual configuration management systems. But as I start thinking about the arena as a whole, these systems are evolving: in their design, in their capacity to scale, in the kinds of use cases they solve, and in the kinds of end users who will adopt them.
But aside from this obvious and somewhat predictable evolution, there's a more interesting development happening in how you model your system, how you perceive configuration, and whether you correlate the state of an infrastructure with its configuration.
The Flavors of Config Management
Most of the established configuration management systems (like Chef, Puppet, and CFEngine 3) take a system, determine its current state, and compile the list of configurations that need to be applied to produce a desired state. Then they do the bare minimum of work to bring the current state to the desired state. If that succeeds, the system is declared 'converged'. Though there are certain assumptions for the convergence to be ideal (individual resources need to be idempotent), these systems are pretty straightforward to understand and to use.
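The convergence model above can be sketched in a few lines. This is a minimal illustration, not how any real tool is implemented; the resource type and names are hypothetical. The key property is that each resource compares current state to desired state and acts only on the difference, which is what makes it idempotent:

```python
# A minimal sketch of convergence: each resource knows its desired
# state, can inspect the current state, and applies a change only
# when the two differ -- that check is what makes it idempotent.

class FileResource:
    def __init__(self, path, content):
        self.path = path
        self.desired = content

    def current_state(self):
        try:
            with open(self.path) as f:
                return f.read()
        except FileNotFoundError:
            return None

    def converge(self):
        """Do the bare minimum: write only if current != desired."""
        if self.current_state() == self.desired:
            return False              # already converged, no work done
        with open(self.path, "w") as f:
            f.write(self.desired)
        return True                   # a change was applied

def converge_all(resources):
    """Converge every resource; report whether anything changed."""
    return any([r.converge() for r in resources])
```

Running `converge_all` twice in a row should report a change only on the first run; the second run is a no-op precisely because every resource is idempotent.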
The variety among these systems is mostly at the implementation level.
- Puppet is fairly declarative (unless you want to extend the RAL), hence it can be easily adopted by ops folks (assuming the ops folks code less than the devs).
- Chef is pretty flexible and uses vanilla Ruby as its DSL, hence it is pretty popular in the dev community.
- CFEngine 3 is comparatively easy to use if you want the configurations to be generated by another automated system.
- Pallet uses Clojure, which runs on the JVM; so if you are a J2EE shop fascinated with FP, Pallet might be a good choice.
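The declarative-versus-programmable split in that list can be illustrated with a toy sketch. These are Python stand-ins, not the actual Puppet or Chef syntax: the declarative style states desired state as plain data and leaves the actions to an engine, while a Chef-style approach hands you the full host language to compute that state:

```python
# Toy illustration of the declarative vs. programmable split
# (hypothetical structures; not real Puppet or Chef code).

# Declarative style: desired state expressed as plain data.
desired = {
    "packages": ["ntp", "curl"],
    "services": {"ntp": "running"},
}

# Programmable style: the full host language is available, so the
# desired state can be computed with loops, conditionals, functions.
def desired_for(role):
    packages = ["ntp"]
    if role == "web":
        packages += ["nginx", "curl"]
    return {"packages": packages,
            "services": {"ntp": "running"}}
```

The trade-off is roughly the one the list describes: data is easier for non-programmers (and other tools) to read and generate, while a real language is more flexible for developers.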
There are other points of comparison, like how easily you can scale them or how easily they mingle with other infrastructure frameworks etc. But I think there are ample resources available on those topics.
Here's a start: Wikipedia Comparison
But there are other systems out there, like babushka, which is much lighter. It doesn't assume the whole configuration that needs to be applied lives in one place. Individual systems can have their own dependencies modeled in babushka ('deps', as they are called), and each machine can apply them on itself. So the configs are neither kept in a central place nor applied all at once; they are applied on demand.
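Babushka's model can be sketched like this. The `met?`/`meet` pair is the real babushka concept (babushka itself is Ruby); the class and names below are a hypothetical Python stand-in. A dep declares its requirements, checks whether it is already met, and meets itself only if needed, on demand, on the machine that runs it:

```python
# A sketch of babushka-style deps (illustrative names; babushka is
# actually Ruby). A dep = requirements + a met? check + a meet
# action. Deps resolve on demand, depth-first, on the local machine,
# with no central config store.

class Dep:
    def __init__(self, name, requires=(), met=lambda: True, meet=lambda: None):
        self.name = name
        self.requires = requires   # other Dep objects
        self.is_met = met          # the "met?" check in babushka terms
        self.meet = meet           # the action that satisfies the dep

    def run(self, log):
        for dep in self.requires:  # satisfy requirements first
            dep.run(log)
        if not self.is_met():
            self.meet()
            log.append(self.name)  # record that work was done
```

A dep that is already met does nothing, so re-running a dep tree, like re-running a converged configuration, is a no-op.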
Also, in the same vein, it's a common practice to run the configuration management system periodically, or as a service, to ensure the system stays in the desired state and to audit against unexpected changes. But I have seen scenarios where this is overkill, and running it only when required is more than enough.
It is definitely possible to run Puppet- or Chef-like systems in the latter, on-demand mode as well. But generally they are not designed to be used that way.
Automated, Multi-Level Testing
One common theme that's emerging is the importance of testing. Automated, multi-level testing (unit testing, integration testing) and testing frameworks are of immense value. But there's no easy way to do it. Even though virtualization and a few other tools have made it easier, the state of the art is still not at a point where you can first address your problem and then improvise on the solution. We're still struggling to solve even 'happy path' testing.
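Even the 'happy path' can be covered with one simple pattern: apply a configuration, assert the desired state, then apply it again and assert that nothing changed. Here's a pytest-style sketch; `apply_config` is a hypothetical stand-in for whatever applies your configuration, and the pattern, not the function, is the point:

```python
# A sketch of the most basic checks you'd want from a config test
# suite: correctness of the first run, then idempotency of the second
# (hypothetical apply_config; real suites would drive a real tool).

def apply_config(state, desired):
    """Apply a desired key/value config to a state dict.
    Returns the list of keys that actually changed."""
    changed = [k for k, v in desired.items() if state.get(k) != v]
    state.update(desired)
    return changed

def test_happy_path():
    state = {}
    desired = {"ntp": "running", "motd": "hello"}
    # First level: applying produces exactly the desired state.
    assert apply_config(state, desired) == ["ntp", "motd"]
    assert state == desired
    # Second level: a repeated run must be a no-op (idempotency).
    assert apply_config(state, desired) == []
```

Integration testing, against real machines or virtualized ones, is where it gets hard; this unit-level pattern is only the floor.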