Today, there are two broad categories of provisioning and deployment automation. The first is convergent tools such as Puppet and Chef. The second is directed automation tools. You are using a directed automation tool when you deploy with your build server, or with an application release automation tool like UrbanCode’s uDeploy.
Both convergent and directed automation tools tend to use agents to do the actual work of a deployment. The agents are installed on the deployment target (or somewhere with access to it) and carry out the deployment tasks. A handful of directed automation tools are agentless – a single master server deploys using web service calls and remote secure shells. Those tools tend to be more limited than agent-driven systems, which can either run agents directly on the target or use the same web service and secure shell strategies from one agent to reach another target.
The Convergent Deployment Model
In a convergent tool, agents poll policy/manifest master servers for updated definitions of what their server should be like. An administrator who wants to deploy a patch or new version of some infrastructure posts that version as the new, correct version for a type of server. Agents that know they belong to that role eventually discover the change, and execute the appropriate automation. Agents on servers that are offline eventually come back online, wake up, discover the new manifest and come back into compliance. Convergent models do a good job of guaranteeing that any machine that is online will have the right stuff on it within some time period related to the polling interval.
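The polling loop at the heart of a convergent agent can be sketched in a few lines. This is a minimal illustration, not any real tool's API: the manifest store, the package query, and all names below are assumptions standing in for the master server and the local package database.

```python
# Hypothetical manifest store; a real agent would fetch this over HTTPS
# from the policy/manifest master.
MASTER_MANIFESTS = {
    "web-server": {"nginx": "1.24", "openssl": "3.0.13"},
}

def installed_packages():
    # Stand-in for querying the local package database.
    return {"nginx": "1.22", "openssl": "3.0.13"}

def converge_once(role):
    """One polling cycle: compare desired state to actual state and
    return the actions needed to bring this machine into compliance."""
    desired = MASTER_MANIFESTS.get(role, {})
    actual = installed_packages()
    return [f"install {pkg} {ver}"
            for pkg, ver in desired.items()
            if actual.get(pkg) != ver]

# Only nginx is out of date here, so only one action is produced.
print(converge_once("web-server"))
```

A real agent runs this cycle forever on a polling interval, which is why an offline machine needs no special handling: it simply converges on its next cycle after coming back online.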
The Directed Deployment Model
In a directed deployment tool, the agents do not poll for new manifests. Instead, knowledge of what each target should look like is managed centrally. Good directed deployment tools also keep track of what should be on any given machine and what has actually been deployed there. When something is deployed, the central server directs the agents to run the automation and tracks how each agent is faring in its task. This is especially useful when multiple related components are being deployed – for instance, when there are database changes that a new version of an application relies on. It is natural for a directed tool to force the application changes to wait on the successful application of the database changes.
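That ordering guarantee can be sketched as a central server walking a plan in dependency order and halting the rollout if a prerequisite step fails. Everything here is illustrative – `run_on_agent` stands in for whatever RPC or secure shell call a real tool makes to its agents.

```python
def run_on_agent(host, task):
    # Stand-in for the central server's call out to the agent on `host`.
    print(f"{host}: {task}")
    return "success"

def deploy(plan):
    """Run steps in order, recording each agent's result; stop the
    rollout as soon as a prerequisite step fails."""
    results = {}
    for host, task in plan:
        status = run_on_agent(host, task)
        results[(host, task)] = status
        if status != "success":
            break  # e.g. don't deploy the app if the DB migration failed
    return results

# Database changes run first; the app servers wait on their success.
plan = [
    ("db01", "apply schema migration v42"),
    ("app01", "deploy app 2.3.0"),
    ("app02", "deploy app 2.3.0"),
]
deploy(plan)
```

The central results map is also what lets the tool report, per agent, what should be on a machine versus what actually made it there.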
When to Choose Directed
There are many tasks that both convergent and directed tools do an excellent job with. Applying a security patch can be done either way quite comfortably. Look to directed tools when more coordination is needed, especially where the timing of things happening on different servers is important. To over-simplify, it is much easier for a directed automation system to have application deployments on one set of servers wait on database deployments on others. Directed tools have a sweet spot in more complex or multi-tier application deployments.
Convergent automation tools tend to do well where machines are often offline, or are frequently created from stale images and need to be brought into line with newer standards. Perhaps a standard virtual image is brought online and needs its middleware and patches applied. Convergent tools have a sweet spot in this type of infrastructure provisioning.
These tools can work well together. It’s not uncommon for a new server to come online and have the convergent tool layer on infrastructure, including the agent and agent meta-data for a directed deployment tool. The directed deployment tool can then detect that new deployment target and assign it to the right roles and environments based on that meta-data. Recurring, scheduled, or push-button directed deployments would then install the appropriate applications and add the new server to the appropriate load balancers.
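The hand-off step can be sketched as the directed tool reading the meta-data the convergent run left behind and filing the new target under the right role and environment. The field names and structure here are invented for illustration; real tools have their own registration schemes.

```python
def register_new_target(agent_metadata, environments):
    """Map a freshly provisioned server into the right role and
    environment, based on meta-data the convergent tool laid down."""
    role = agent_metadata["role"]
    env = agent_metadata["environment"]
    environments.setdefault(env, {}).setdefault(role, []).append(
        agent_metadata["hostname"])
    return environments

envs = {}
# Meta-data dropped alongside the directed tool's agent during provisioning:
meta = {"hostname": "web-042", "role": "web", "environment": "qa"}
register_new_target(meta, envs)
print(envs)
```

From there, the directed tool's scheduled or push-button deployments know exactly which applications and load balancers the new server belongs to.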