Why the Database Needs to Be Part of the Continuous Delivery Pipeline
Continuous delivery requires data stores to be updated and migrated alongside the application, so databases must be part of your CD pipeline. Applying changes to a database continuously, however, is difficult.
Speed matters. Deploying, delivering, and integrating working production releases almost guarantees the upper hand against the competition. According to James Smith, co-founder of DevOps Guys, “High performing IT organizations deploy 30 times more frequently, with 50% fewer failures, and 8000x faster lead times than their peers. They are also two times more likely to exceed profitability, market share, and productivity goals. They experience 50% higher market capitalization growth over 3 years.”
However, there is a certain apprehension when applying these “continuous” principles to the database. Data stored in the database is not simple to restore or reinstall if incorrectly tinkered with. On top of that, database changes are difficult to test. Taking these issues into consideration, it is logical for many to avoid applying continuous delivery principles to the database. On the surface, manual review seems like the safe route to take.
The problem is that time-consuming steps, such as manual review before the execution of database updates, cause bottlenecks that are becoming unacceptable. The expectation is the delivery of new functionality and applications as fast as possible, without unnecessary delays.
How to Apply Continuous Principles to the Database Safely
Not only is it possible for the database to be part of the continuous delivery pipeline, it’s necessary. To apply it correctly and avoid an error-prone, partly automated, partly manual process, ensure the database follows practices such as enforced database version control. The idea is the same as traditional code version control, but designed to give you a reliable source of truth and solid foundations, without having to deal with out-of-process changes, undocumented database updates, and so on.
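To make the version-control idea concrete, here is a minimal sketch, assuming a SQLite database and illustrative migration names: each migration is applied exactly once and recorded with a checksum, so an out-of-process edit to an already-applied script is detected rather than silently deployed.

```python
import hashlib
import sqlite3

# Hypothetical migration scripts; in practice these live in version
# control alongside the application code (names and SQL are illustrative).
MIGRATIONS = [
    ("001_create_customers",
     "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)"),
    ("002_add_email",
     "ALTER TABLE customers ADD COLUMN email TEXT"),
]

def apply_migrations(conn):
    """Apply pending migrations in order, recording a checksum for each.

    A stored checksum that no longer matches the script signals an
    out-of-process change, so the run aborts instead of silently
    drifting away from the source of truth.
    """
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_version "
        "(name TEXT PRIMARY KEY, checksum TEXT)"
    )
    for name, sql in MIGRATIONS:
        checksum = hashlib.sha256(sql.encode()).hexdigest()
        row = conn.execute(
            "SELECT checksum FROM schema_version WHERE name = ?", (name,)
        ).fetchone()
        if row is None:
            conn.execute(sql)  # first run: apply and record
            conn.execute(
                "INSERT INTO schema_version VALUES (?, ?)", (name, checksum)
            )
        elif row[0] != checksum:
            raise RuntimeError(f"Migration {name} was changed after being applied")
    conn.commit()

conn = sqlite3.connect(":memory:")
apply_migrations(conn)
```

Running `apply_migrations` a second time is a no-op, which is what makes the same script safe to execute in every environment the pipeline promotes through.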
Then, efficiently package all change sets that need to be deployed for both the application and the database. Database changes can be deployed either based on labels or on specific change sets. Perform a trustworthy impact analysis of database changes prior to the actual continuous database deployments. The analysis must identify and prevent code overrides and conflicts, making sure that nothing is deployed automatically to production that shouldn’t be.
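The impact analysis described above can be sketched with plain data structures (the change-set names and object lists are hypothetical): deployment is blocked when two pending change sets touch the same database object, or when a pending change would override a fix made directly in the target environment.

```python
pending_changes = {        # change set -> database objects it alters
    "CS-101": {"customers", "orders"},
    "CS-102": {"orders"},
}
hotfixed_in_target = {"orders"}   # objects patched directly in production

def analyze_impact(pending, hotfixed):
    """Return the conflicts that must block an automated deployment."""
    conflicts = []
    # 1. Two pending change sets modifying the same object deploy in an
    #    order that is undefined relative to each other.
    seen = {}
    for cs, objects in pending.items():
        for obj in objects:
            if obj in seen:
                conflicts.append(f"{cs} and {seen[obj]} both modify {obj}")
            seen[obj] = cs
    # 2. A pending change would silently override a fix that was made
    #    directly in the target environment.
    for cs, objects in pending.items():
        for obj in objects & hotfixed:
            conflicts.append(f"{cs} would override a production hotfix on {obj}")
    return conflicts

for conflict in analyze_impact(pending_changes, hotfixed_in_target):
    print(conflict)
```

A real tool would compare actual schema objects and script revisions rather than string labels, but the gate is the same: an empty conflict list is the precondition for automated execution.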
Finally, ensure the database impact analysis and database change execution are invoked at the right time in the overall deployment plan, enabling controlled, secure, database updates as part of the Continuous Delivery pipelines.
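How that gating fits into the overall plan can be sketched as follows, with hypothetical step functions standing in for real pipeline tasks: the impact analysis runs first, and database changes are executed only when it reports no conflicts, before the application itself is deployed.

```python
def run_impact_analysis():
    # In a real pipeline this would call the database automation tool;
    # an empty list means "no conflicts, safe to proceed".
    return []

def deployment_plan():
    """Order the deployment steps so the database gate runs first."""
    executed = []
    conflicts = run_impact_analysis()   # gate before touching the database
    if conflicts:
        raise SystemExit(f"Deployment blocked: {conflicts}")
    executed.append("database")         # schema changes go out first...
    executed.append("application")      # ...so the new code finds the new schema
    return executed

executed = deployment_plan()
print(executed)
```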
We know that the goal of continuous delivery is to establish an optimized end-to-end process, enhance the development-to-production cycle, lower the risk of release problems, and provide a quicker time to market. However, more often than not, the database gets left behind. According to a DBmaestro survey, only 13% of companies that practice continuous delivery for their applications also apply continuous delivery principles to their databases. The rest still use manual processes, making it impossible to maximize efficiency. DBmaestro’s Team Work enables automated database updates as part of the Continuous Delivery pipeline while giving teams and organizations the control, insight, and traceability that they require.
Published at DZone with permission of Yaniv Yehuda, DZone MVB. See the original article here.