There’s an old joke about somebody going into a restaurant, opening a menu, and saying, “Everything looks so good, I think I’ll have ‘em all!” That’s what it can feel like when you look at your options for managing multiple clouds. Make no mistake, multi-cloud is today’s reality for companies large and small.
In the cloud orchestration field, you no longer “select” a particular approach and toolset to the exclusion of all others. For years, the difference between two of the most popular orchestration tools, Chef and Puppet, has been that Chef is “imperative” while Puppet is “declarative.” That’s why developers are said to prefer Chef’s recipes, which let them script the steps required to reach the desired state (similar to programming), while the operations side favors Puppet’s approach of defining the target state and letting the tool work out the path from the current state to that target (akin to project management).
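To make the distinction concrete, here is a minimal Ruby sketch. This is a hypothetical illustration, not actual Chef or Puppet code: the imperative style spells out each step to run, while the declarative style states only the desired end state and relies on a generic convergence function to compute the steps.

```ruby
# Hypothetical illustration of the two styles -- not real Chef or Puppet code.

# Imperative (Chef-like): the author lists each step in order.
def deploy_imperative(log)
  log << "install nginx"
  log << "write /etc/nginx/nginx.conf"
  log << "start nginx"
  log
end

# Declarative (Puppet-like): the author states only the target state;
# a converge function derives the steps from current state -> desired state.
def converge(current, desired)
  steps = []
  desired.each do |resource, state|
    steps << "set #{resource} to #{state}" unless current[resource] == state
  end
  steps
end

log   = deploy_imperative([])
steps = converge({ "nginx" => "absent" }, { "nginx" => "running" })
```

Note that in the declarative sketch, a resource already in its desired state produces no steps at all, which is exactly why declarative runs are idempotent.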
Just as the walls separating the dev side from the ops side have been removed with the adoption of continuous delivery, the two principal orchestration methods are coming to resemble each other as each is enhanced. CIMI Corp.’s Tom Nolle points out on the Server Side blog that, despite their “radically different architectures,” Chef and Puppet both rely on scripting and the client/server model. Both also consist of repositories of reusable DevOps elements, modularized to support multiple clouds, hybrid clouds, and cloud data center environments.
A side-by-side comparison of Chef and Puppet, two popular orchestration tools, shows the rivals have as many similarities as they have differences. Source: OpenSourceForU.com
Giant Steps Toward ‘Infrastructure as Code’
To accommodate the application change management that is the heart of continuous delivery, Chef and Puppet are converging: Chef Delivery supports code base management, version control, development collaboration control, and development pipelining; while Puppet’s Code Manager covers the entire development/change cycle, and Node Manager makes it possible to support software-defined networks. The key is for both approaches to be ready ahead of time for any new models or new infrastructures as they arise.
Attempts to future-proof your application operations center on the concept of “infrastructure as code,” which moves deployment into an intermediate layer built on an abstract hosting model. The model works with any cloud, multi-cloud, or hybrid environment: to add a new cloud provider, you simply define it in the deployment layer as infrastructure.
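That intermediate layer can be sketched in a few lines of Ruby. The class and provider names here are hypothetical, chosen only to show the shape of the idea: applications deploy against one abstract interface, and adding a cloud means registering one more adapter in the layer.

```ruby
# Hypothetical sketch of an abstract hosting model: application code deploys
# against one interface; each cloud provider is an adapter registered with it.
class DeploymentLayer
  def initialize
    @providers = {}
  end

  # Adding a new cloud is just defining it in the deployment layer.
  def register(name, &deploy_proc)
    @providers[name] = deploy_proc
  end

  def deploy(app, provider:)
    handler = @providers.fetch(provider) { raise "unknown provider #{provider}" }
    handler.call(app)
  end
end

layer = DeploymentLayer.new
layer.register(:aws)   { |app| "deployed #{app} to aws" }
layer.register(:azure) { |app| "deployed #{app} to azure" }

result = layer.deploy("billing-service", provider: :azure)
```

The application never names a cloud API directly; swapping or adding providers touches only the registration step, which is the portability the article is describing.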
Any time you add a layer to the operation stack, you introduce complexity, which can negate the operational benefits you intend to realize. To ensure your software-defined network delivers on its performance and efficiency promises, you have to maximize automation and minimize manual work. Use as few abstract hosting models as possible, take advantage of infrastructure-as-code toolkits, and eliminate as many resource dependencies as you can.
Combine Puppet With Docker to Automate Cloud Configuration Management
Some DevOps pros believe that if they use Docker’s container orchestration, they no longer need Puppet or Chef. TechTarget’s Beth Pariseau describes how an Australian healthcare provider has turned this idea on its ear. Scott Coulton, the architect of Healthdirect Australia’s Docker-Puppet solution, explains that “Docker does build, ship, run, [and] Puppet is the ship” in which the container code is delivered. Coulton described the process in a recent PuppetConf keynote presentation.
The healthcare company used Puppet automation to harden the Docker REST API, which was then directed to deploy the container infrastructure. The process joined app development and infrastructure as code in a single continuous delivery process, according to Coulton. "If Puppet sees that one of the containers that's part of a service is not running, Puppet will actually send an API call to update the service to make sure it is running," Coulton added.
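Coulton’s description amounts to a reconciliation loop: check each service’s container, and if one is not running, issue an API call to restore it. Here is a minimal Ruby sketch of that loop. The `FakeDockerAPI` class is a stand-in invented for illustration; the real integration would talk to Docker’s REST endpoint over HTTP.

```ruby
# Minimal reconciliation sketch: if a service's container is not running,
# send an API call to bring it back. FakeDockerAPI is a stand-in for
# illustration; the real integration calls Docker's REST API over HTTP.
class FakeDockerAPI
  attr_reader :calls

  def initialize(states)
    @states = states  # e.g. { "web" => "running", "db" => "exited" }
    @calls  = []
  end

  def container_state(name)
    @states[name]
  end

  def update_service(name)
    @calls << "POST /services/#{name}/update"
    @states[name] = "running"
  end
end

def reconcile(api, services)
  services.each do |svc|
    api.update_service(svc) unless api.container_state(svc) == "running"
  end
end

api = FakeDockerAPI.new("web" => "running", "db" => "exited")
reconcile(api, %w[web db])
```

Running `reconcile` repeatedly is safe: once every container reports “running,” no further API calls are made.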
Forrester Research analyst Robert Stroud states that as containers are used in more complex environments, the need for efficient configuration management will increase. Puppet, Chef, and other orchestration tools will have to coexist with Kubernetes, Docker Swarm, and Mesos. One such tool is Gareth Rushgrove’s recently released Docker module for Puppet, which allows Puppet to communicate with the Docker REST API. According to Coulton, “as long as you write your Ruby code to understand the responses when Puppet runs, it will look for the resource on a cluster of nodes."
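Coulton’s point about writing Ruby that “understands the responses” can be illustrated with a small parser. The response below is a hypothetical payload loosely modeled on the container listing Docker’s REST API returns; provider code like this is how Puppet locates a resource across a cluster of nodes.

```ruby
require "json"

# Hypothetical payload loosely modeled on a Docker REST API container
# listing; real responses carry many more fields.
response = <<~JSON
  [
    { "Names": ["/web"], "State": "running" },
    { "Names": ["/db"],  "State": "exited"  }
  ]
JSON

# Parse the API response and return the names of running containers,
# stripping Docker's leading "/" from each name.
def running_containers(json_body)
  JSON.parse(json_body)
      .select { |c| c["State"] == "running" }
      .flat_map { |c| c["Names"] }
      .map { |n| n.delete_prefix("/") }
end

names = running_containers(response)
```

Code that interprets the response this way can decide, node by node, whether the resource it manages actually exists and is in the desired state.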
Orchestration will take a front-line role in enterprises as cloud IP traffic represents the lion’s share of data-center traffic, reaching 92 percent by 2020. Source: Gartner, via Equinix.
Are You Ready for the Multi-Cloud Era?
The future of data management is in the clouds, as evidenced by the findings of research conducted by Cisco Systems that forecasts global cloud IP traffic will almost quadruple between 2015 and 2020, to a total of 14.1 zettabytes. In addition, a recent IDC study reports that 85 percent of enterprises will commit to multi-cloud architectures in 2017, as reported by Tony Bishop on the Equinix blog. Bishop recommends the use of “cloud exchanges” that offer “fast and cost-effective, direct, and secure provisioning of virtualized connections to multiple cloud vendors and services.”
As your company’s data assets are dispersed far and wide in multiple clouds, tools and services that let you develop, deploy, maintain, and update critical apps and databases from a single window will become the command center your operation relies on for smooth sailing.