Keeping 'Hotfix' and Production Database Schemas in Sync
Hotfix servers are often a necessity, but they can create a drift between themselves and your production database. Let's close the gap and keep your data in sync.
In this article, I'll be showing you how to automatically compare the schema of two versions of the same database, and then deploy to the target database any differences detected in the source database. In other words, for any database object that exists in both databases but with differences, the object's definition in the target will be altered to match its definition in the source. Any objects that exist in the source but not in the target will be created, and any that exist in the target but not in the source will be dropped.
We can do this even without direct access to the source database. I show how to do it using both a batch script and PowerShell, with the SQL Compare command line. Finally, I schedule the script to run from a SQL Agent job.
Dealing With Database Drift
Ideally, for any version of a database, there should be no difference between the source of the database schema in the VCS and any deployed database at the same version. However, we've all come across database systems that stray from this ideal, where, for example, 'wild west' style hotfixes are applied directly to the production database.
Such 'drift' is dangerous and shouldn't happen, but it does. When a critical bug occurs in a busy OLTP database, DBAs are often forced to short-circuit the usual dev-test-prod model for deployments. They apply the fix directly to the production database, often then pushing the applied change back down to the test and development environments in the aftermath.
It's a dangerous approach, with many unknowns, but organizations often feel that setting up a parallel hotfix-testing environment, which mimics production in terms of both schema and data, would cost too much in resources, time, and money, and would slow down urgent fixes.
Sometimes, the wise DBA needs to deal with this messy reality first, in order to move work practices, gradually, towards a more orderly approach.
Maintaining a Hotfix Server
The first step away from this reckless shoot-first approach to maintaining production systems is to ensure that there are adequate facilities for testing changes. In fact, setting up a database for reproducing bugs and testing hotfixes doesn't necessarily have to involve powerful, high-cost servers and Enterprise Edition SQL Server. In our case, we use SQL Server Express running on a virtual machine. Our process to implement, test and deploy hotfixes is as follows:
- Ensure that a database on the hotfix server is regularly updated with the latest schema changes from production, taken directly, or from the latest full backup.
- When a bug occurs, import the appropriate data set that will allow us to reproduce the bug conditions. We have standard data sets that contain 'production-like' data for each of the major areas of functionality supported by the database.
- Implement and test a fix.
- Deploy the fix to the VCS and to production.
- Truncate the test data, having completed the fix and root-cause analysis.
The first task on this list can be achieved by having a scheduled job in SQL Server Agent regularly run a SQL Compare command line script to update the hotfix environment's database with any new changes introduced to the production database. Over time, we hope to eradicate all direct changes to production, so that changes to any environment will be deployed from source control. However, for now, we continue to accept that some small changes, made directly to production, are inevitable and that we need to capture them by comparing to the live database or the most recent backup.
For step 2, we can import the data set using a tool such as bcp, or by executing a command line script for SQL Data Compare (I won't show this here but will cover it in a separate article). You might wonder why we don't just restore the production database into the hotfix environment and use that. If you have a very small database that has no personal or financial data, then that is fine. Otherwise, it is better to maintain the hotfix server with just the data you need, obfuscated where appropriate.
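For illustration, a bcp import of one such data set might look like the following; the data file path, table name, and hotfix instance are all hypothetical, and the file is assumed to have been exported earlier in native format (bcp ... out -n):

```batch
rem Import a 'production-like' data set into the hotfix database (all names hypothetical)
rem -n = native format, -T = Windows (trusted) authentication, -S = target instance
bcp adventureworks2014.sales.salesorderdetail in "c:\testdata\salesorderdetail.dat" -n -T -S .
```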
Creating a Scheduled Batch Job for Syncing Database Schemas
Let's step through how to set up a SQL Server Agent job that keeps two database schemas in sync. I'll run through the example twice, first creating a job that runs a batch script to deploy the changes, and then creating a job that runs a PowerShell script.
The Batch Script
In this example, we have access to the production database. The Agent job will run on a schedule, comparing the schema of the AdventureWorks2014 database on my production instance (the DW instance on my local machine) with the schema of the same database on my hotfix instance (the default instance on my local machine), and deploying any changes from the former to the latter.
In my previous article, How to Automate Database Synchronization Using the SQL Compare Command Line, I described the basic syntax for calling the SQL Compare executable from the command line, to synchronize two databases, as follows:
sqlcompare.exe /s1:sourceserver\sourceinstance /database1:sourcedatabase /s2:targetserver\targetinstance /database2:targetdatabase /include:identical /sync
It compares the source and target databases, and use of the /sync parameter means that SQL Compare will generate and run a script to apply to the target database any schema changes necessary to synchronize it with the source. The /include:identical parameter allows us to avoid raising an error in cases where the source and target databases are identical, which in our example we hope is the case.
Therefore, for my example, the batch script will look as shown in Listing 1:
"c:\program files (x86)\red gate\sql compare 12\sqlcompare.exe" /s1:.\dw /database1:adventureworks2014 /s2:. /database2:adventureworks2014 /include:identical /sync
Create the Agent Job
Let's set up a new SQL Agent job on the hotfix server, with a single job step to run the script. For the job step type, select Operating system (CmdExec) and paste the code from Listing 1 into the command text box.
Beyond the raw code for the batch script, there are several other considerations around creating and running Agent jobs, including security, logging and error handling, alerting and more. I won't delve into any of these in detail, but I'll offer a few pointers.
Security and the Agent Proxy Account
The SQL Agent job will need to run using an account that has the appropriate permissions to execute cmd scripts on behalf of the SQL Server Agent, access the required databases, and so on; if not, the job will fail with a permissions error.
You will not want Agent jobs to run under an account with sysadmin privileges. Instead, set up a proxy account that uses a Windows login with only the necessary privileges to perform the task. The basic steps follow, but for more information, see the SQL Server documentation on proxies.
- Use an existing AD account (or create one) that has permissions compliant with the security policies of the organization.
- Use this AD account to create a credential in the SQL Server instance.
- Use the credential to create a proxy under the SQL Agent for the SQL Server instance.
- Use the proxy as the 'Run as' option in the job step that will be executing the task.
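Steps 2 and 3 above can be sketched in T-SQL; here, the domain account, password, and proxy/credential names are all hypothetical:

```sql
-- Create a credential from the AD account (identity and secret are hypothetical)
CREATE CREDENTIAL HotfixSyncCredential
    WITH IDENTITY = N'MYDOMAIN\HotfixAgent', SECRET = N'StrongPasswordHere';

-- Create an Agent proxy that uses the credential
EXEC msdb.dbo.sp_add_proxy
    @proxy_name = N'HotfixSyncProxy',
    @credential_name = N'HotfixSyncCredential',
    @enabled = 1;

-- Allow the proxy to run CmdExec (operating system) job steps
EXEC msdb.dbo.sp_grant_proxy_to_subsystem
    @proxy_name = N'HotfixSyncProxy',
    @subsystem_name = N'CmdExec';
```

The proxy then appears in the 'Run as' drop-down of the job step dialog (or can be assigned via sp_add_jobstep's @proxy_name parameter).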
Error Handling and Logging
If you were running this as a Windows Scheduler batch job, then you'd need a version of the batch script (Listing 1) that had decent error handling and logging (see, for example, Listing 4 in my previous article on deploying schema changes to multiple databases using the SQL Compare command line).
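As a rough illustration (not the listing from that article), such a batch script might check SQL Compare's exit code, which is 0 on success, and write output to a dated log file; the log path and file naming are assumptions:

```batch
@echo off
rem Run the schema sync, writing SQL Compare's output to a dated log file (path hypothetical)
set LOGFILE=c:\temp\sqlcompare_%DATE:/=-%.log
"c:\program files (x86)\red gate\sql compare 12\sqlcompare.exe" /s1:.\dw /database1:adventureworks2014 /s2:. /database2:adventureworks2014 /include:identical /sync /out:%LOGFILE%
rem A non-zero exit code indicates an error (0 = success)
if errorlevel 1 (
    echo Schema sync failed with exit code %errorlevel% >> %LOGFILE%
    exit /b %errorlevel%
)
```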
However, with a SQL Server Agent job, if you set it up properly, a lot of the logging, error handling and alerting is done for you. For starters, the output will be retained in the job history, so that is the first place to look.
The Agent job history, however, may not provide enough data for debugging. This is why it is better to add the /out parameter to the script, which will write the output to a log file on disk.
For example, for a detailed log file with the results of the comparison, the script in Listing 1 can be amended at the end like this:
"c:\program files (x86)\red gate\sql compare 12\sqlcompare.exe" /s1:.\dw /database1:adventureworks2014 /s2:. /database2:adventureworks2014 /include:identical /sync /out:sqlcompareagentjoblog.txt
Listing 2 will produce a sqlcompareagentjoblog.txt file in the sqlcompare.exe directory. To save the log file elsewhere, just add a custom path to the /out parameter setting at the end, as demonstrated in Listing 3:
"c:\program files (x86)\red gate\sql compare 12\sqlcompare.exe" /s1:.\dw /database1:adventureworks2014 /s2:. /database2:adventureworks2014 /include:identical /sync /out:c:\temp\sqlcompareagentjoblog.txt
If a job like this fails, the relevant people need to be notified immediately. You'll want to set up email notification for errors in any step.
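For example, you might create an Agent operator and attach it to the job for notification on failure; the operator name, email address, and job name below are hypothetical, and Database Mail must already be configured:

```sql
-- Create an operator to receive failure notifications (name and address hypothetical)
EXEC msdb.dbo.sp_add_operator
    @name = N'HotfixDBA',
    @email_address = N'dba-team@example.com';

-- Notify the operator by email whenever the job fails (notify level 2 = on failure)
EXEC msdb.dbo.sp_update_job
    @job_name = N'Sync hotfix schema',
    @notify_level_email = 2,
    @notify_email_operator_name = N'HotfixDBA';
```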
Test the Job
Run the job and, if you receive an error, check the job history for details, or SQL Compare's log file if you've set up logging as described above.
If the job succeeds, you'll see log output like this:
However, in order to really make sure that the job works, simply create a new testcmd table in the source database:
create table [dbo].[testcmd] ( [id] [int] null, [testcmd] [nchar](10) null );
Now, run the job again and check that the table exists in the destination database.
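A quick way to check, run against the hotfix (target) instance:

```sql
-- Returns a non-NULL object id once the sync job has deployed the table
SELECT OBJECT_ID(N'dbo.testcmd', N'U') AS testcmd_object_id;
```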
Schedule the Job
The job can be scheduled to run on a regular basis, and this is straightforward using the Schedules tab of the Agent job setup screen:
Setting Up the Agent Job With PowerShell
We can perform the same task with PowerShell, using the following general syntax:
set-alias sqlcompare 'c:\program files (x86)\red gate\sql compare 12\sqlcompare.exe' -scope script
sqlcompare /s1:sourceserver\sourceinstance /database1:sourcedatabase /s2:targetserver\targetinstance /database2:targetdatabase /include:identical /sync
Therefore, for my example, the PowerShell script will look as shown in Listing 5.
set-alias sqlcompare 'c:\program files (x86)\red gate\sql compare 12\sqlcompare.exe' -scope script
sqlcompare /s1:.\dw /database1:adventureworks2014 /s2:. /database2:adventureworks2014 /include:identical /sync
We'll set up a new SQL Agent job with a PowerShell job step, and then simply paste Listing 5 into the command text box for the job step. All the previous discussion about security, logging, and alerting applies equally here.
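If you want the PowerShell step itself to fail the job when the sync errors out, a minimal sketch is to check $LASTEXITCODE after the call; the log file path here is an assumption:

```powershell
Set-Alias SqlCompare 'c:\program files (x86)\red gate\sql compare 12\sqlcompare.exe' -Scope Script
SqlCompare /s1:.\dw /database1:adventureworks2014 /s2:. /database2:adventureworks2014 /include:identical /sync /out:c:\temp\sqlcompareagentjoblog.txt
# SQL Compare returns 0 on success; throwing causes the Agent job step to report failure
if ($LASTEXITCODE -ne 0) { throw "SQL Compare failed with exit code $LASTEXITCODE" }
```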
Run the job and verify in the job history that it ran successfully. If desired, you can repeat the test from before: create a new table, or change an existing table, in the source database, run the job, and ensure the changes are deployed to the target.
Syncing Schemas When You Don't Have Direct Access to Production
Depending on the environment and the industry, access to the production system may be very restricted, and thus SQL Compare may not have the access rights to perform direct comparisons between a production database and the hotfix database.
In such cases, SQL Compare can work with a backup of the source database, or with the DDL scripts in the VCS, provided we can trust that the VCS is always in sync with the production environment.
A third option is to use a DACPAC file as a source for SQL Compare, as described in one of my previous articles.
In this article, we saw how easy it is to set up automated, scheduled SQL Server Agent jobs to run schema comparisons between two databases. We used two different methods, a cmd script and a PowerShell script, to achieve the same result.
Often, there will be good operational reasons to prevent this sort of scheduled task being run on a production server. In the example in this article, we ran the task on the hotfix server, from the hotfix server's SQL Agent, using a proxy account that had the right to access the metadata of the production system. Alternatively, it could be run from Windows Scheduler, but you'd lose the SQL Agent's features for handling errors, logging and alerting.
In cases where direct access to production is prohibited, you'd want to compare from a backup, or a DACPAC, rather than from a live server.
If you're not already using SQL Compare, download a fully functional 14-day trial and discover how it lets you save time comparing and deploying SQL Server database schemas.
Published at DZone with permission of Feodor Georgiev, DZone MVB. See the original article here.