Where’s the Beef in DevOps?
“Where’s the Beef?” Readers of a certain age (and waistline…) will recall this catchphrase from a 1980s TV commercial for the fast-food chain Wendy’s. It’s fine to serve customers a big bun, but where’s the meat in the middle? It’s a battle cry that might just as easily be applied to DevOps.
You’re no doubt as exhausted as I am by DevOps coverage that is either shallow or casts its net so wide that it encapsulates a broad assortment of tools, methodologies, and concepts.
For all the talk about continuous this and that in DevOps, one core technology stack seems to be absent from the discussion. I ask: where’s the beef in DevOps? I’m talking about the data. To take the Wendy’s analogy one step further, the beef of every application is the data. And in this age of ever more agile development, we risk losing sight of how we manage the data that every application artifact relies on.
Let me explain. Traditionally, developers are provided small subsets of production data because providing larger, more comprehensive datasets is often wildly unmanageable, since production databases can comprise terabytes upon terabytes of data. However, the quality of application code built by developers depends on the quality of the data that the code is tested against at every step of the continuous delivery lifecycle: from prototype, build, and test, to QA and staging. Continuous delivery of data for development is needed.
Dramatic Improvements in Agility
Copy data virtualization might be the answer. This approach frees your increasingly strategic data from your increasingly commoditized infrastructure, eliminating the many siloed systems you rely on to protect and access copies of the same production data. It replaces all the software licensing and costly hardware tied up in Dev and Test with a single, radically simple approach that does one thing: Make whatever data, from whenever it was created, available wherever you need it.
Some years back, server virtualization freed massive unused capacity in computing; today, copy data virtualization frees more of your data from its legacy physical infrastructure.
Here’s how it works. Production data is captured non-disruptively and in its native form, to make it instantly available when needed. A single physical copy—a “golden master,” kept current through an “incremental forever” model—is used to spawn unlimited virtual copies, across any use case where a copy of production data is required.
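The mechanics can be illustrated with a toy sketch. The class names and API below are hypothetical, not actual product code, and real copy data virtualization operates at the storage-block level rather than on key-value records; but the core idea, one golden master feeding lightweight copy-on-write virtual copies, looks roughly like this:

```python
class GoldenMaster:
    """Single physical copy of production data, kept current
    through an incremental-forever capture model (hypothetical sketch)."""

    def __init__(self, initial_data):
        self.data = dict(initial_data)   # the one full physical copy

    def apply_increment(self, changes):
        # Incremental forever: after the first full capture,
        # only deltas are applied to keep the master current.
        self.data.update(changes)

    def virtual_copy(self):
        # Spawning a copy is cheap: no data is duplicated up front.
        return VirtualCopy(self)


class VirtualCopy:
    """Lightweight virtual copy: reads fall through to the golden
    master; writes land in a private overlay (copy-on-write)."""

    def __init__(self, master):
        self._master = master
        self._overlay = {}               # holds only this copy's local writes

    def read(self, key):
        return self._overlay.get(key, self._master.data.get(key))

    def write(self, key, value):
        self._overlay[key] = value       # never touches the master


# One golden master can spawn unlimited virtual copies for dev, test, QA...
master = GoldenMaster({"orders": 100})
dev = master.virtual_copy()
qa = master.virtual_copy()
dev.write("orders", 0)     # a destructive test in the dev copy
print(qa.read("orders"))   # QA still sees the production value: 100
```

Because each virtual copy stores only its own changes, dozens of environments can share one physical dataset, which is where the licensing and hardware savings come from.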
Couple copy data virtualization with Automic Release Automation and you blend versioned application artifacts with versioned, synthetic-full virtualized copies of databases in one cohesive package. Automic provides the automation backbone for your enterprise continuous delivery pipeline, while copy data virtualization manages the beef of the applications in that pipeline: the data itself, delivered through database virtualization.
The result? An accelerated development lifecycle, higher-quality releases, and reduced risk of unplanned production outages.
Imagine, for example, being able to provide complete self-contained or “app-in-a-box” environments that include the requisite infrastructure, versioned app artifacts, and a configured app host, along with a virtualized, versioned, and obfuscated copy of a production database. All of this is possible with Automic and copy data virtualization, even on demand. Moreover, when QA has completed its testing, the “app-in-a-box” environment can simply be de-provisioned. No more VM or container sprawl. No more data-related bugs in production. Use only what you need, when you need it.
Roll-backs are straightforward too. Changes to application artifacts and configuration items are simply undone, and the database is reverted to a pre-deployment virtual copy.
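In rough pseudocode terms (the names below are illustrative, not the Automic API), a rollback amounts to restoring the artifact version and the pre-deployment database snapshot together, as one unit:

```python
class Environment:
    """Hypothetical "app-in-a-box" environment: versioned app
    artifacts paired with a virtual database copy."""

    def __init__(self, artifact_version, db_snapshot):
        self.artifact_version = artifact_version
        self.db_snapshot = db_snapshot   # the current virtual copy
        self.history = []                # stack of pre-deployment states

    def deploy(self, new_artifacts, new_snapshot):
        # Record the current state first, so a rollback can restore it.
        self.history.append((self.artifact_version, self.db_snapshot))
        self.artifact_version = new_artifacts
        self.db_snapshot = new_snapshot

    def rollback(self):
        # Undo the artifact changes and revert the database in one step:
        # both halves of the release travel together.
        self.artifact_version, self.db_snapshot = self.history.pop()


env = Environment("v1.0", "db-snap-v1")
env.deploy("v1.1", "db-snap-v2")      # release goes out...
env.rollback()                        # ...and is backed out cleanly
print(env.artifact_version, env.db_snapshot)   # back to v1.0 / db-snap-v1
```

The design point is that code and data are versioned as a pair, so a rollback never leaves a new schema running against old artifacts or vice versa.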
Where’s the beef in DevOps? Copy data virtualization delivered with Automic can beef up the quality and flexibility of the applications that your DevOps practice delivers. Automic can provide continuous everything, continually on-demand.