The Perfect Combination: Imperative Orchestration and Declarative Automation
An imperative or workflow-based orchestrator allows you to execute not only declarative automation but also autonomous imperative units of automation, should you need to.
There are two fundamental approaches to designing automation processes:
Declarative, also often known as model-based.
Imperative, also often known as workflow- or procedural-based.
The purpose of this article is to explain the principles of each, explore a little of how they work, try to highlight some of the strengths and weaknesses, and explore how they can be used together. I don’t think one size fits all. Based on your requirements and some of what is explored below, you may use a combination of the two — even if some of the tooling you use leans toward one way or the other.
Declarative or Model-Based Automation
The fundamental state of the declarative model is the desired state. The principle is that we declare, using a model, what a system should look like. The model is data-driven, which allows data in the form of attributes or variables to be injected into the model at runtime to bring it to the desired state. A declarative process should not require the user to be aware of the current state; it will usually bring the system to the required state using a concept known as idempotence.
What this basically means is that if you deploy Version 10 of a component to a development environment and it is currently at Version 8, changes 9 and 10 will be applied. If you deploy the same release to a test environment where Version 5 is installed, changes 6 through 10 will be applied, and if you deploy it to a production system where it has never been deployed, it will apply changes 1 through 10. Each of the deployment processes brings it to the same state regardless of where they were initially. The user, therefore, does not need to be aware of the current state.
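The version-convergence logic described above can be sketched in a few lines. This is a minimal illustration, not any particular tool's implementation; the function name and the convention that version 0 means "never deployed" are assumptions for the example.

```python
def changes_to_apply(current_version, target_version):
    """Return the ordered list of change numbers needed to reach the target.

    A current_version of 0 means the component has never been deployed.
    """
    if target_version <= current_version:
        return []  # already at (or beyond) the desired state: nothing to do
    return list(range(current_version + 1, target_version + 1))

# The three environments from the example all converge on version 10:
assert changes_to_apply(8, 10) == [9, 10]             # development (at 8)
assert changes_to_apply(5, 10) == [6, 7, 8, 9, 10]    # test (at 5)
assert changes_to_apply(0, 10) == list(range(1, 11))  # fresh production
```

Note that calling the function again once the target is at version 10 returns an empty list, which is the idempotence property in miniature: re-applying the same desired state changes nothing.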
Once we have a model for our infrastructure or applications, any future changes required are made to the model. A model therefore changes over time and may have a current, past, or future state. This raises the question of how a model can be applied to a target system and how the automation knows what changes to make. This is not always easy to answer, and the answer is not the same for every platform and runtime you are managing.
It’s hard to cover a broad subject like this without making some generalizations, and there are some in this section. There are broadly three methods used to keep an environment or system in line with its desired-state model.
1. Maintain an Inventory of What Has Been Deployed
This is where we maintain a record of what has been deployed, compare it with the desired state of a release, and apply only the difference.
- My release or desired state contains database updates: 1.sql, 2.sql, 3.sql, 4.sql, and 5.sql.
- My example test system has had 1.sql, 2.sql, and 3.sql already deployed (recorded in my inventory).
- The automation determines that only 4.sql and 5.sql need to run.
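The steps above amount to a set difference between the desired state and the inventory, preserving the order of the release. A minimal sketch (the function name and list-based inventory are assumptions for illustration):

```python
def scripts_to_run(desired, inventory):
    """Return scripts in the desired state that the inventory has not recorded,
    preserving the release order."""
    applied = set(inventory)
    return [script for script in desired if script not in applied]

desired_state = ["1.sql", "2.sql", "3.sql", "4.sql", "5.sql"]
already_deployed = ["1.sql", "2.sql", "3.sql"]  # recorded in the inventory
assert scripts_to_run(desired_state, already_deployed) == ["4.sql", "5.sql"]
```

After the run, a real tool would record 4.sql and 5.sql in the inventory so the next deployment computes an empty difference.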
Use Case
This type of state management is particularly useful for things such as databases, or for systems that require updates through a proprietary API where the state of the target cannot always be easily determined.
2. Validate and Compare the Desired State With What Has Been Deployed
As a simple illustration, we may want to determine whether important_file.xml from my desired-state deployment exists on the target system and whether it looks the same as the one in my package.
- If it doesn’t exist, create it, copy it, etc.
- If it does, determine whether it is different.
- If it is different, how do I update it (converge, copy over, etc.)?
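The exists/differs/converge decision tree above can be sketched as a single idempotent function. This is an illustrative example, not a real tool's logic; the function name and the simple "copy over" convergence strategy are assumptions.

```python
import hashlib
import tempfile
from pathlib import Path

def converge_file(desired: Path, target: Path) -> str:
    """Bring the target file into line with the desired-state copy.

    Returns what was done, so a run can report created/updated/unchanged.
    """
    if not target.exists():
        target.write_bytes(desired.read_bytes())   # doesn't exist: create it
        return "created"

    def digest(path):
        return hashlib.sha256(path.read_bytes()).hexdigest()

    if digest(target) != digest(desired):
        target.write_bytes(desired.read_bytes())   # different: copy over
        return "updated"
    return "unchanged"  # already in the desired state: idempotent no-op

# Usage against a throwaway directory:
tmp = Path(tempfile.mkdtemp())
(tmp / "pkg").mkdir()
src = tmp / "pkg" / "important_file.xml"
src.write_text("<config/>")
dst = tmp / "important_file.xml"

assert converge_file(src, dst) == "created"
assert converge_file(src, dst) == "unchanged"
dst.write_text("<config drift='true'/>")           # simulate drift on the target
assert converge_file(src, dst) == "updated"
```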
Probably the most widely used implementation is simply to manage the files that many cloud-native runtimes use to store configuration definitions, such as SSH keys for a server or the XML files that configure a Java runtime such as Tomcat. The same approach can also maintain runtimes that are managed via an API. For instance: what is the heap size or config item for my WebSphere server? Is it what it should be? If not, set it to the correct desired-state value.
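The check-then-set pattern for API-managed runtimes looks roughly like this. The client class and its method names below are purely illustrative stand-ins, not a real WebSphere (or any vendor) interface.

```python
class RuntimeConfig:
    """Hypothetical admin-API client; a real one would talk to the server."""
    def __init__(self, settings):
        self.settings = settings
    def get(self, key):
        return self.settings.get(key)
    def set(self, key, value):
        self.settings[key] = value

def enforce(config, desired):
    """Set only the items that differ from the desired state; report changes."""
    changed = {}
    for key, value in desired.items():
        if config.get(key) != value:  # compare before writing: avoids churn
            config.set(key, value)
            changed[key] = value
    return changed

runtime = RuntimeConfig({"heap_max_mb": 512, "threads": 50})
desired = {"heap_max_mb": 1024, "threads": 50}
assert enforce(runtime, desired) == {"heap_max_mb": 1024}
assert enforce(runtime, desired) == {}  # second run is a no-op: idempotent
```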
3. Don’t Care About the Current State
There are occasions when we don’t really care about what exists. The most obvious example here is anything that is stateless, such as a container. There are probably very few examples of why you would want to run any automation within a container to change its state; you would just want to instantiate a new one that has the new configuration you require.
Imperative, Procedural, or Workflow-Based Automation
The fundamental principle of what is often referred to as the workflow, procedural, or imperative approach is that a series of actions is executed in a specific order to achieve an outcome. For an application deployment, this is where the process of “how the application needs to be deployed” is defined, and a series of steps in the workflow is executed to deploy the entire application.
A standard example might include:
- Some pre-install/validation steps.
- Some install/update steps.
- Finally, some validation to verify what we have automated has worked as expected.
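The three phases above can be sketched as an ordered workflow runner that stops at the first failing step, which is the essence of the imperative style. Names and structure here are assumptions for illustration, not any specific tool's model.

```python
def deploy_workflow(steps):
    """Run (name, action) steps strictly in order; stop at the first failure.

    Returns the completed step names and the name of the failed step, if any.
    """
    completed = []
    for name, action in steps:
        if not action():
            return completed, name  # a failed step halts the workflow
        completed.append(name)
    return completed, None

log = []
steps = [
    ("pre-install validation", lambda: log.append("validated") or True),
    ("install",                lambda: log.append("installed") or True),
    ("post-install check",     lambda: log.append("verified") or True),
]
done, failed = deploy_workflow(steps)
assert done == ["pre-install validation", "install", "post-install check"]
assert failed is None
```

Note that nothing here inspects the target's current state: the workflow encodes how to deploy, which is exactly the property the criticism below takes aim at.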
The often-cited criticism of this approach is that we end up with lots of separate workflows that layer changes onto our infrastructure and applications, and the relationships between these procedures are not maintained. The user therefore needs to be aware of the current state of a target before they know which workflow to execute. That makes the principle of desired state and idempotence hard to maintain, and each of the workflows becomes tightly coupled to its application.
The reality here is that the situation is not black and white. Puppet is an example of what is seen as a declarative automation tool, while Chef is said to be imperative. Do they both support the concept of desired state, and are they both idempotent? The answer is, of course, yes. Is it possible to use a workflow tool to design a tightly coupled release that is not idempotent? The answer is, again, yes.
What Are the Benefits of Workflows?
The benefit of using a workflow is that we are able to define relationships and dependencies for our units of automation and orchestrate them together. In the example above, we can create a workflow to perform pre-installation, installation, and post-installation steps, perhaps adding conditional steps if it’s in a particular environment such as production (disable a monitoring system, apply additional security configurations, etc.).
Procedural workflows also allow us to, for example, deploy components A and B on Server1, then deploy component C on Server2, and then continue to deploy component D on Server1. This gives us much greater control in orchestrating multi-component releases.
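That cross-server ordering is naturally expressed as an explicit, ordered deployment plan that the orchestrator walks through. A minimal sketch, with the plan structure and server/component names taken from the example above:

```python
# Ordered plan: each entry is (server, component); order encodes dependencies.
plan = [
    ("Server1", "A"),
    ("Server1", "B"),
    ("Server2", "C"),   # C must land on Server2 before D goes to Server1
    ("Server1", "D"),
]

def run_plan(plan, deploy):
    """Execute the plan in order, delegating each step to a deploy callable."""
    for server, component in plan:
        deploy(server, component)

trace = []
run_plan(plan, lambda server, component: trace.append(f"{component}@{server}"))
assert trace == ["A@Server1", "B@Server1", "C@Server2", "D@Server1"]
```

The `deploy` callable is the seam where a declarative, idempotent unit of automation can be plugged in per component.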
It is often suggested that the choice is between imperative- and declarative-based deployment. However, these two approaches are not mutually exclusive.
Workflow for Applications
One of the issues in the world of DevOps is the lack of a consistent lexicon or terminology. Just ask someone who works in the industry what their definition of an environment is or what the term application means to them. My guess is that you will get differing answers.
To me, an application is made up of components, and a release generally takes the form of an application created from multiple versioned components. Depending on the organization you work in, you may or may not recognize this definition; if everything in your world is a stateless, loosely coupled microservice on a cloud-native platform, you may well not. If you work in a more traditional enterprise organization, you will more likely recognize the requirement for multi-component release orchestration, which requires procedural or workflow processes to handle the coordination and dependencies between the different components.
Model for Components
When it comes to deploying a specific component, I think it’s hard to argue against the benefits of using a declarative or model-based approach. Idempotence means I can deploy any version to any target system, even one it has never been deployed to before. Whether we want to introduce a new change to an existing environment or roll a unit of automation back, the consistency and simplicity of this approach are hard to argue against.
Imperative Orchestration and Declarative Automation
The future of IT operations is doubtless based on things such as stateless microservices running in fungible containers and using Kubernetes watchers to trigger event-driven automation, much as the future of retail is for Amazon to deliver milk to my IoT-connected refrigerator using a drone.
That being said, most of us still live in the present and need, for some time to come at least, to address some of the automation and orchestration challenges that exist in today’s IT landscape. With this in mind, a combination of declarative (or model-driven) units of automation, coordinated by imperative (or workflow-based) orchestration, can be used to address them. An imperative or workflow-based orchestrator also allows you to execute not only declarative automation but also autonomous imperative units of automation, should you need to.
My recommendation is that an application workflow defines the order and dependencies of components being deployed. The declarative model for a component determines what action needs to be taken to bring a target system into compliance with the model.
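The recommended split can be shown end to end in a few lines: an imperative workflow owns order and dependencies, while each component is converged by an idempotent, declarative unit. This is a minimal sketch; the dictionary-as-environment and the function names are assumptions for illustration.

```python
def declarative_deploy(state, component, version):
    """Declarative unit: converge one component to its desired version.

    Acts only if the current state differs from the model (idempotent).
    """
    if state.get(component) != version:
        state[component] = version
    return state[component]

def orchestrate(state, release):
    """Imperative workflow: the release defines the order and dependencies."""
    for component, version in release:
        declarative_deploy(state, component, version)

env = {"web": 3}                               # current state of the target
release = [("db", 2), ("api", 5), ("web", 4)]  # ordered, versioned components
orchestrate(env, release)
assert env == {"web": 4, "db": 2, "api": 5}

orchestrate(env, release)                      # re-running changes nothing
assert env == {"web": 4, "db": 2, "api": 5}
```

The workflow never inspects component internals, and the declarative unit never cares where it sits in the release order: each layer handles the concern it is best at.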
It is hard to overstate the importance of the desired state: it has changed the way we look at system and application updates and has drastically improved the visibility, audit, and control of IT systems. Any progressive organization looking to move to a Continuous Delivery or deployment model, or to implement things such as autoscaling or “deploy and destroy” for development or testing purposes, will benefit from the use of declarative or model-based automation.
However, a model-based approach does not answer all the difficult questions around orchestration. That, in my opinion (for the moment, at least), is best achieved using imperative, procedural, or workflow processes.
Published at DZone with permission of David Sayers . See the original article here.