How to Address Database Change Scripts and Other Challenges of Continuous Delivery
Effective DevOps and continuous delivery must encompass the database, but the database represents some unique challenges not faced in the application.
DevOps (a portmanteau of "development" and "operations") is a practice that emphasizes collaboration between software developers and other information-technology (IT) professionals while automating the process of software delivery and infrastructure changes.
DevOps focuses on organizational culture, while continuous delivery and continuous integration are mainly about automation and tests, which require a trustworthy source control. An ever-increasing number of organizations are implementing DevOps and continuous delivery processes, fueled by reports of the benefits, which include quicker time to market, reduced costs, and higher-quality products.
Here we will address some of these challenges and provide best practices for handling them.
In order to generate the correct database change scripts, the build phase must have information on both the current structure and the source control structure. But having only the current and source control structures, as is the case with standard compare-and-sync tools, is not enough.
Simply comparing two environments does not provide insight into the nature of the differences, for example:
- A case where the difference conflicts with an emergency fix.
- The trunk/stash/QA environment was already updated with other changes from a different branch.
- The later environment (trunk/stash/QA) is more up to date regarding specific objects, so the difference should not be part of the delta change script.
This missing information is only available with baseline-aware analysis. The input for the database build phase should absolutely be taken from the source control repository, which includes only changes that were checked in and excludes changes that are still in work-in-progress mode. This brings us to the starting point of the process: the source control, and how to make sure the build process retrieves the relevant changes.
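The decision logic of a baseline-aware comparison can be sketched as a three-way check per object: the baseline (last known deployed state), the source-control revision, and the live target. This is a minimal illustration, not any particular tool's API; the function name and state labels are assumptions for the example.

```python
# A minimal sketch of baseline-aware analysis. For each database object we
# compare three revisions: the baseline (last known deployed state), the
# source-control revision, and the live target environment.

def decide_action(baseline, source_control, target):
    """Return what the build should do for one database object."""
    if source_control == target:
        return "no-op"            # already identical: nothing to generate
    if target == baseline:
        return "deploy"           # only source control changed: safe to deploy
    if source_control == baseline:
        return "protect-target"   # target changed out of band (e.g. hotfix): keep it
    return "conflict"             # both sides changed: needs a human merge

# Example: a hotfix altered an index directly in QA while source control
# is unchanged. A plain two-way compare would generate a script that
# reverts the hotfix; the baseline reveals the change should be protected.
print(decide_action(baseline="ix_v1", source_control="ix_v1", target="ix_hotfix"))
# → protect-target
```

A two-way compare can only ever answer "equal or different"; the baseline is what turns a difference into a decision.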
Develop Using a Reliable Database Source Control
In this phase, developers introduce changes to the database structure, reference lookup content, or logic in the database (procedures, functions, etc.).
The two common approaches to database development are:
- Using a shared database environment for the team.
- Using a private database environment for each developer.
Both methods have advantages and challenges.
Using a shared database environment reduces code merges for database code, and also reduces the complexity and cost of updating the database structure from source control. Using a private database environment requires many merges of database code, but reduces the risk of code being overridden by a colleague. In addition, a private database environment has other factors to consider, such as management overhead, licenses, hardware, and cost.
The primary reason the private-environment method is not commonly used relates to how developers publish changes from their private workspace environment to the integration environment. Publishing changes should not revert changes made by someone else, and updating the private environment from the source control repository should not revert work in progress.
The same process used to build native code, using only changes documented in the source control repository, should be applied to database code changes. Developers work on native code in the IDE and then check the changes in to the source control repository without any additional manual steps. Having a file-based script that each developer maintains for his or her changes creates challenges that are difficult to resolve and consume a lot of time:
How do you guarantee that the version control repository correctly represents the database structure that was tested?
If developer A makes a number of changes to a script and developer B makes other changes to the same script, neither developer can execute the entire script, because the script overrides (or reverts) the changes introduced by the other developer.
In addition, there are other challenges that surface in the deployment phase but originate in earlier phases:
- Controlling the order of execution of scripts created by several developers.
- Maintaining the change scripts when the release scope changes.
Instead of running many small scripts (in the same order they were executed in QA), which may change the same object several times, it is preferable to execute fewer scripts that change each object only once. This is difficult to practice, because generating the script from scratch and testing it is expensive.
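The ordering challenge above can be sketched briefly. The idea, under the assumption that each deployment to QA records a sequence number, is to replay change scripts in the order QA executed them rather than in file-name or check-in order; the class and field names here are invented for illustration.

```python
# Illustrative sketch: order per-developer change scripts by the order in
# which they were applied in QA, so the production deploy replays them
# consistently. The sequence number is assumed to be recorded at QA deploy time.
from dataclasses import dataclass

@dataclass
class ChangeScript:
    author: str
    qa_applied_at: int   # deployment sequence number recorded in QA (assumption)
    sql: str

scripts = [
    ChangeScript("bob",   2, "ALTER TABLE orders ADD COLUMN priority INT;"),
    ChangeScript("alice", 1, "ALTER TABLE orders ADD COLUMN status VARCHAR(20);"),
    ChangeScript("carol", 3, "CREATE INDEX ix_orders_status ON orders(status);"),
]

# Replay in the same order QA executed them, not the order the files arrived.
deploy_plan = [s.sql for s in sorted(scripts, key=lambda s: s.qa_applied_at)]
for stmt in deploy_plan:
    print(stmt)
```

Note that this only solves ordering; consolidating several ALTERs on the same object into one statement still requires regenerating the script from the source-control state, as the text describes.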
Source Control: Single Source of Truth?
Anyone with sufficient database credentials may log in to the database, introduce a change, and forget to apply the change to the relevant script in the file-based version control. This has reportedly happened in finance, insurance, online travel, algo-trading, gaming, and other industries.
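One way to catch such out-of-band changes is to fingerprint each object's definition in the live database and compare it against source control. This is a hedged sketch: the object names, definitions, and normalization rule are all invented for the example.

```python
# Sketch of "drift" detection: hash each object's definition in the live
# database and compare against what source control says it should be.
import hashlib

def fingerprint(definition: str) -> str:
    # Crude normalization (assumption): real tools parse DDL properly.
    return hashlib.sha256(definition.strip().lower().encode()).hexdigest()

source_control = {
    "orders":    "CREATE TABLE orders (id INT, status VARCHAR(20));",
    "get_order": "CREATE PROCEDURE get_order AS SELECT 1;",
}
live_database = {
    "orders":    "CREATE TABLE orders (id INT, status VARCHAR(40));",  # changed by hand
    "get_order": "CREATE PROCEDURE get_order AS SELECT 1;",
}

drift = [name for name in source_control
         if fingerprint(source_control[name]) != fingerprint(live_database.get(name, ""))]
print(drift)  # → ['orders']
```

Objects flagged this way are exactly the ones where source control has silently stopped being the single source of truth.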
Database Deployment Logic
Another challenge unique to the database is how deployment is done. Can database deployment work like native code deployment, replacing the existing database/table in production with the new database/table from development? Or must it alter the existing database structure in production from the current state to the target state in order to preserve the data?
Deploying native code artifacts (binaries of Java, C#, C++) is done by copying the new binaries over the existing ones; the current state has no effect on the binary content. Deploying database code changes is done by altering the structure of the database or schema from the given state (current state) to the target state (end point). When executing a script against the database, the given state must be the same as it was when the script was generated; otherwise the outcome is not predictable.
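The precondition above, namely that a change script is only valid against the state it was generated from, can be enforced with a simple guard. This is a minimal sketch under assumed names: checksumming the raw DDL text stands in for whatever state fingerprint a real pipeline would record.

```python
# Sketch: refuse to run a change script unless the current structure matches
# the structure it was generated against, making the outcome predictable.
import hashlib

def schema_checksum(ddl: str) -> str:
    return hashlib.sha256(ddl.encode()).hexdigest()

def safe_apply(current_ddl: str, expected_checksum: str, change_script: str) -> str:
    if schema_checksum(current_ddl) != expected_checksum:
        raise RuntimeError("Current state differs from the state the script "
                           "was generated against; regenerate the script.")
    return change_script  # in a real pipeline this would be executed against the DB

current = "CREATE TABLE t (id INT);"
expected = schema_checksum("CREATE TABLE t (id INT);")  # recorded at generation time
print(safe_apply(current, expected, "ALTER TABLE t ADD COLUMN name VARCHAR(50);"))
```

If someone altered the table between generation and deployment, the checksum check fails fast instead of producing an unpredictable result.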
When using the correct method, all steps of automated database continuous delivery are possible, from build, to deploy, to test.
Published at DZone with permission of Yaniv Yehuda, DZone MVB. See the original article here.