
Database Continuous Integration With SQL Clone and SQL Change Automation


Explore database continuous integration with SQL Clone and SQL Change Automation.


When you are working as part of an application development team, it is worth settling down into a routine of doing a daily build from source control, and then provisioning test and development instances of the database. One major advantage is that the code and database can be scrutinized as early as possible, and you can prove that the working database can be built entirely from the code. This means that management can be confident that the assets are all in source control. It also means that tests can pick up problems as early as possible.

This requires automation. The most difficult part is in assembling individual object scripts in dependency order or, in the case of a migration approach, in the order dictated by the manifest. The most tedious part is in loading the data into the database. Of course, the time-consuming part is preparing the data, but this is out of scope for this article. We will assume that you have a standard dataset for testing, and all servers will share it.

I use SQL Change Automation (SCA) to build the database. Microsoft's SMO has a minor integration role, and I also use it to create the database. I use SQL Clone, a component of SQL Provision, to distribute the finished database to the various team-based servers that require the latest build. Once you've honed the process to your requirements, you can run it every time someone commits a database change.

A Brief Introduction to SQL Clone

There is always a slight culture shock when tackling SQL Clone for the first time, so I'll explain briefly (see How Clone Works for more). SQL Clone uses the Virtual Disk Service in x64 Windows to allow the same bytes (the 'data image') to be reused as 'clone' databases, as many times and on as many SQL Server instances as you need. SQL Server is entirely unaware that the Windows operating system is engaging in any 'smoke and mirrors'. The data image is held only once, in a single location on the network, and never changes. Any changes made on each clone are stored in a differencing disk on the local clone server.

Setting up a clone takes little time: just the time needed to create the Virtual Hard Disk (.vhdx) and mount it. Whatever the size of the actual database, the initial clone created from this image requires only about 40MB of disk space.

You'll need one central server, the SQL Clone Server, on which to install the SQL Clone web application, which must be visible over the network to all other machines. Your servers don't need to be part of a Windows Domain to run SQL Clone, but if you are running in a mixed development environment, the simplest approach is to provide the shared directory, for the images, on the same machine as the Clone Server. The clones can go on any SQL Server where the user assigned to the Clone Agent has network access to that shared image directory.

Any instance of SQL Server in the network can be used to host clones. To do this, each needs a SQL Clone Agent installed, running under a local user with sysadmin rights.
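To give a feel for the cmdlet workflow before we get to the full script, here is a minimal sketch of creating an image and a clone. The server name, URL, paths and database names are placeholders, and it assumes the SQL Clone PowerShell cmdlets are installed and a Clone Agent is running on the target instance:

```powershell
# Connect to the SQL Clone Server (placeholder URL)
Connect-SqlClone -ServerUrl 'http://CloneServer:14145'

# Find an instance that has a Clone Agent installed
$instance = Get-SqlCloneSqlServerInstance -MachineName 'OurServer'

# Create a data image of an existing database in the shared image directory
$destination = Get-SqlCloneImageLocation -Path '\\CloneServer\Clone'
New-SqlCloneImage -Name 'OurBuild' -SqlServerInstance $instance `
    -DatabaseName 'OurBuild' -Destination $destination |
  Wait-SqlCloneOperation

# Deploy a clone of that image to any instance with an agent
Get-SqlCloneImage -Name 'OurBuild' |
  New-SqlClone -Name 'OurBuild_Dev' -Location $instance |
  Wait-SqlCloneOperation
```

The clone appears on the target instance as an ordinary database, though its data pages are being read from the shared image.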

Once a database starts to require a lot of data for development and testing work, the chore of provisioning all the necessary database instances with the current version of the database and data can get out-of-hand. You can do it via schema synchronization if you can be certain that the clones have the correct data, but this simply isn't realistic. Any test run is bound to make data changes. To do a series of integration tests, you need to restore the database to a known state after each test. Traditionally, this was done by backup/restore or repeated detach and attach.

Instead, I'll use SQL Clone, which makes it very quick and easy to delete and reinstate the clone as part of the tear-down, and this process can be easily automated in one line of PowerShell.
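As a sketch of that one-liner (the clone name is a placeholder, and it assumes you're already connected to the Clone server), Reset-SqlClone reverts a clone to the state of its parent image by discarding the local differencing file:

```powershell
# Revert the clone to its parent image, discarding all local changes
Get-SqlClone -Name 'OurBuild_Dev' | Reset-SqlClone | Wait-SqlCloneOperation
```

Alternatively, you can remove the clone with Remove-SqlClone and recreate it from the image with New-SqlClone, which is the approach the full script takes.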

PowerShell-Automated CI

The PowerShell script I present next will allow you to provision the various databases required for test and development with the latest version of the schema, from source control, and with one or more versions of the data for testing or development. I've provided an AdventureBuild zip file that contains both the source of the AdventureWorks database and the data we need for development and testing (as bcp files). If you're starting from a dataset that contains sensitive or personal data, you'll need to mask the data before provisioning the development and test servers.

I am assuming that the origin database is one under active development, with its source kept in source control, so I don't back up existing clones, save changes before deleting clones, or check whether anyone still has a session running on a clone.

The script uses SCA to do the build (but you could also use SQL Compare for this), by sync'ing to an empty database, created using SMO. You'll want to build the database on an instance running a SQL Server version that is equal to or lower than the oldest version to which you need to deploy clones. A clone can't be made to run on a version of SQL Server that is older than its parent.

We use BCP to import the data into the empty build, and then SQL Clone to create an image of the database and deploy clones to a list of 'clone' databases on one or more servers. The clones will either be created or, if they already exist, refreshed with copies of the database as built by the build process, so that all the databases end up with identical metadata and data. You'll need to have the latest versions of the Clone PowerShell cmdlets installed (Settings | PowerShell, in the Clone application).

If you have SCA but no SQL Clone, you can still do this, but it will be much slower and it will take more space. I've shown how to do this in two previous articles, one using BCP, and the other using JSON.

The solution omits some of the finer details, most of which you can find in other articles. For example, I don't generate a documentation website for the database, run a check on the code with SQL Code Guard, or send out information about the progress of the build. I don't show how you can apply scripts and templates to each individual clone to customize the installation, or to make the database conform with the server's settings.

The Config File

To keep things simple, I've put all the settings in a separate script. This contains a list of clones to create or update, the details of the database that must be built, where the data and source code are to be found, and the location of the Clone server and shared image directory. Obviously, you'll need to fill it all in.

$Database = 'OurBuild'
  $Repository = 'OurPathToTheScriptAndData'
  @{
      "Source" = @{
          #specify the various directories you want in order to store files and logs
          #The location of the executable SQL data insertion script.
          'DataSyncPath' = "$Repository\$Database\data";
          #where you want to put the reports for this particular database.
          'ReportPath' = "$Repository\$Database";
          #where you have (or want to put) the source of the current database.
          'DatabasePath' = "$Repository\$Database\Build";
      }
     "Build" = @{
          #now we'll specify where we want the new build. We will clone from this.
          'NewBuildServer' = 'OurServer'; #The Server 
          'NewBuildInstance' = ''; #The SQL Server instance
          'NewDatabase' = "New$Database"; #The name of the database
          'username' = 'Phil Factor';#leave blank if windows authentication
          'SQLCompareOptions' ='NoTransactions'# you sometimes need these for a complex build
      }
      "Image" = @{
          #create an image of what we built
          'Name' = "$($database)";
          'ServerURL' = 'http://CloneServer:14145';
          'ImageDirectoryURL'='\\CloneServer\Clone'
      }
      "Clones" = @(
          @{ "NetName" = "AServer"; "Database" = "$($database)1" },
          @{ "NetName" = "AnotherServer"; "Database" = "$($database)2" },
          @{ "NetName" = "YetAnotherServer"; "Database" = "$($database)3" },
          @{ "NetName" = "StillAnotherServer"; "Database" = "$($database)4" },
          @{ "NetName" = "AndYetAnotherServer"; "Database" = "$($database)5" },
          @{ "NetName" = "YeGodsAnotherServer"; "Database" = "$($database)6" }
      )
  }

This needs to be in the same directory as the following PowerShell script.

The CI Script

Here's the PowerShell script to build and fill the database and deploy the clones:

$VerbosePreference = "Continue"
  #-- just to save typing ----
  $MS = 'Microsoft.SQLServer'
  $My = "$MS.Management.Smo"
  $Mc = "$MS.Management.Common"
  $popVerbosity = $VerbosePreference
  $VerbosePreference = "Silentlycontinue"
  # the import process is very noisy if you are in verbose mode
  Import-Module sqlserver -DisableNameChecking #load the SQLPS functionality
  $VerbosePreference = $popVerbosity
  # set "Option Explicit" to catch subtle errors
  set-psdebug -strict
  $ErrorActionPreference = "stop"
  <# just to make it easier to understand, the various parameter values are structured in a 
  hierarchy. We iterate over the clones when making or updating them #>
  #First we read in the configuration from a file (do it so we can use the ISE as well)
  $Data = &"$(If($psISE) 
    {Split-Path -Path $psISE.CurrentFile.FullPath} 
    Else {$global:PSScriptRoot})\MySCACloneConfig.ps1"
  <# we read in the data as a structure. 
  Then we do some sanity checking to make sure that the data is reasonably viable.
  We apply defaults if possible #>
  $Errors = @()
  # the fourth value means: -1 = provide a blank default, 0 = not a directory - must be provided,
  # 1 = directory must already exist, 2 = create it if it doesn't exist
  @(($data.source.DatabasePath, 'source', 'DatabasePath', 2),
    ($data.source.DataSyncpath, 'source', 'DataSyncPath', 2),
    ($data.source.ReportPath, 'source', 'ReportPath', 2),
    ($data.Build.NewBuildServer, 'Build', 'NewBuildServer', 0),
    ($data.Build.NewBuildInstance, 'Build', 'NewBuildInstance', 0),
    ($data.Build.NewDatabase, 'Build', 'NewDatabase', 0),
    ($data.Build.username, 'Build', 'username', -1),
    ($data.Build.SQLCompareOptions, 'build', 'SQLCompareOptions', -1),
    ($data.Image.Name, 'Image', 'Name', 0),
    ($data.Image.ImageDirectoryURL, 'Image', 'ImageDirectoryURL', 0),
    ($data.Image.ServerURL, 'Image', 'ServerURL', 0)
  ) | foreach{
    if ($_[0] -eq $null) #if the parameter hasn't been provided
    {# we give a default '' else flag up an error
      if ($_[3] -eq -1) { $data.$_[1].$_[2] = '' }
      else
      { $Errors += "There is no $($_[1]).$($_[2]) defined" }
    }
    elseif ($_[3] -ge 1) #it is a directory that needs to be tested
    {
      if (-not (Test-Path -PathType Container $_[0]))
      {
        if ($_[3] -eq 2)
        {
          New-Item -ItemType Directory -Force -Path $_[0] `
               -ErrorAction silentlycontinue -ErrorVariable +Errors | Out-Null;
        }
        else { $Errors += "the path '$($_[0])' in $($_[1]).$($_[2]) does not exist" }
      }
    }
  }
  $TheLogfile="$($data.source.ReportPath)\ReportFile.txt"
  if ($data.build.NewBuildInstance -eq '')
      {$NewBuildServerInstance= "$($data.build.NewBuildServer)"}
  else    
      {$NewBuildServerInstance= "$($data.build.NewBuildServer)\$($data.build.NewBuildInstance)"}
  if ($NewBuildServerInstance -eq '') {$errors += 'No build server specified'}
  if ($errors.Count -eq 0) #any errors are displayed at the end
  {
    #first make sure we can connect
    $conn = new-object "$Mc.ServerConnection" $NewBuildServerInstance `
               -ErrorAction silentlycontinue -ErrorVariable +Errors;
    if ($data.build.username -ieq '')
    {
      # Crikey, this is easy, Windows passwords. Don't you love 'em?
      $conn.LoginSecure = $true;
    }
    else
    {
    <# This is more elaborate a process than you might expect because we can't assume 
    that we can use Windows authentication, because of Azure, remote servers outside 
    the domain, and other such complications. We can't ever keep passwords for SQL
    Server authentication as part of the static script data. At this stage, we ask 
    for passwords if they aren't known, and otherwise store them as secure strings 
    on file in the user area, protected by the workstation security.
    #>
      #create a connection object to manage credentials
      $conn = new-object "$Mc.ServerConnection"
      $conn.ServerInstance = $NewBuildServerInstance
      $encryptedPasswordFile = `
      "$env:USERPROFILE\$($data.build.Username)-$($conn.ServerInstance).txt"
      # test to see if we know about the password in a secure string stored in the user area
      if (Test-Path -path $encryptedPasswordFile -PathType leaf)
      {
        #has already got this set for this login so fetch it
        $encrypted = Get-Content $encryptedPasswordFile | ConvertTo-SecureString
        $Credentials = `
        New-Object System.Management.Automation.PsCredential($data.build.Username, $encrypted)
      }
      else #then we have to ask the user for it
      {
        #hasn't got this set for this login
        $Credentials = get-credential -Credential $data.build.username
        $Credentials.Password | ConvertFrom-SecureString |
        Set-Content $encryptedPasswordFile
      }
      $conn.LoginSecure = $false;
      $conn.Login = $Credentials.UserName;
      $conn.SecurePassword = $Credentials.Password;
    }
    $data.build.ServerConnection = $conn; #this sets our server connection for the build database
  }
  <# now that we've established all the connections we need, we now build the database #>
  if ($errors.Count -eq 0)
  {
    #if the database already exists, then kill it
    $TheBuildDatabase = $data.build.NewDatabase
    $BuildServer = new-object ("$My.Server") $data.build.ServerConnection
    if ($BuildServer.Databases[$TheBuildDatabase] -ne $null)
    { $BuildServer.KillDatabase($TheBuildDatabase) }
    $BuildDatabase = `
    New-Object ("$My.Database") ($BuildServer, $TheBuildDatabase)
    $BuildDatabase.Create()
    if ($BuildDatabase.name -ne $TheBuildDatabase)
    {
      $Errors += "Can't create the database '$($TheBuildDatabase)' in '$NewBuildServerInstance'"
    };

    if ($data.build.ServerConnection.LoginSecure)
    {
      $data.build.Connection = New-DatabaseConnection `
        -ServerInstance $NewBuildServerInstance  `
        -Database $TheBuildDatabase `
        -ErrorAction silentlycontinue -ErrorVariable +Errors;
    }
    else
    {
      $data.build.Connection = New-DatabaseConnection `
        -ServerInstance $NewBuildServerInstance  `
        -Database $TheBuildDatabase `
        -Username $data.build.ServerConnection.Login  `
        -Password $data.build.ServerConnection.Password `
        -ErrorAction silentlycontinue  `
        -ErrorVariable +Errors;
    }
    $syncResult = Sync-DatabaseSchema  `
        -Source $data.source.DatabasePath  `
        -Target $data.build.Connection `
        -AbortOnWarningLevel None  `
        -SQLCompareOptions $data.build.SQLCompareOptions   `
        -ErrorAction silentlycontinue  `
        -ErrorVariable +Errors

  }
  if ($errors.Count -eq 0) # we can put the data in the database
  {
       <# Now we BCP all the table data in. As we are using native mode the utility
              disables constraints for the table before doing the import #>
    $BuildDatabase.Tables | #for every table
    foreach {
      $filename = "$($_.schema)_$($_.Name)" -replace '[\\\/\:\.]', '-';
      $TheScriptPath = "$($data.source.DataSyncPath)\$($filename).bcp";
      if (-not ($data.build.ServerConnection.LoginSecure))
      {
        $whatHappened = "`"$TheBuildDatabase`".`"$($_.Schema)`".`"$($_.Name)`""
        $WhatHappened +=
        BCP "`"$TheBuildDatabase`".`"$($_.Schema)`".`"$($_.Name)`""    `
          in "`"$TheScriptPath`"" -q -n -N -E "-U$($data.build.ServerConnection.Login)"    `
          "-P$($data.build.ServerConnection.Password)" "-S$($data.build.ServerConnection.ServerInstance)";
      }
      else
      {
        $WhatHappened =
        BCP "`"$TheBuildDatabase`".`"$($_.Schema)`".`"$($_.Name)`""   `
          in "`"$TheScriptPath`"" -q -N -T -E "-S$($data.build.ServerConnection.ServerInstance)";
      }
      if ($WhatHappened -like '*Error *')
      { throw ("$whatHappened adding data to $TheBuildDatabase.$filename on  $TheScriptPath") };
    }
    $result = $BuildServer.ConnectionContext.ExecuteNonQuery(
      "EXEC sp_msforeachtable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT all'");
    "$($data.build.ServerConnection.ServerInstance) $TheBuildDatabase has been stocked with data"

  }
  <# we get through to Clone #>
  if ($Errors.Count -eq 0)
  {
    #initialise SQL Clone by attempting to initiate a connection with the SQL Clone Server.
    Connect-SqlClone -ServerUrl $data.image.ServerURL -ErrorAction SilentlyContinue `
             -ErrorVariable +Errors
  }
  <# Now we slaughter all existing versions of the database. It's so great that you've put everything worthwhile in Source Control and have SQL Prompt Tab Magic as well. #>
  if ($Errors.Count -eq 0)
  {
    $image = Get-SqlCloneImage  `
           -Name $data.image.Name -ErrorAction SilentlyContinue
    if ($image -ne $null) #if it already exists
    {
      $clones = Get-SqlClone  `
           -Image $image -ErrorAction SilentlyContinue

      if (!($clones -eq $null)) #delete all existing clones (normally, you'd save changes first)
      {
        $clones | Remove-SqlClone | Wait-SqlCloneOperation
      }
      Remove-SqlCloneImage -Image $image | Wait-SqlCloneOperation
    }
  }
  if ($Errors.Count -eq 0)
  {
   Write-Verbose " Creating new image of $NewBuildServerInstance.$($data.build.NewDatabase)"
   #create an image of what we built. We name it whatever we have specified
   $SqlServerInstance = [RedGate.SqlClone.Client.Api.Objects.SqlServerInstanceResource](Get-SqlCloneSqlServerInstance |
      Where-Object{ ($_.Serveraddress -ieq $data.build.NewBuildServer.Trim()) -and ($_.Instance.Trim() -ieq $data.build.NewBuildInstance.Trim())})
   $ImageFileDestination = Get-SqlCloneImageLocation  `
        -Path $data.Image.ImageDirectoryURL
   New-SqlCloneImage -Name $data.image.Name `
       -SqlServerInstance $SqlServerInstance `
       -DatabaseName $data.build.NewDatabase `
       -Destination $ImageFileDestination  `
       -ErrorAction silentlycontinue  `
       -ErrorVariable +Errors | Wait-SqlCloneOperation
   $ourCloneImage = Get-SqlCloneImage  `
       -Name $data.image.Name  `
       -ErrorAction SilentlyContinue
   if ($ourCloneImage -eq $null)
   {
     $Errors += "couldn't find the image $($data.image.Name) that has just been created"
   }
   if ($ourCloneImage.State -ne 'Created')
   {$Errors += "We hit a problem with the image. Its state is $($ourCloneImage.State)"}
  }
  #clone it as whatever database is specified to whatever SQL Clone servers are specified
  $data.clones | foreach { "$($_.Netname) $($_.Database) is ok" } #list the clones to be provisioned
  if ($Errors.Count -eq 0)
  {
    $data.clones | foreach {
      $clone = $null; $Thedatabase = $_.Database;
      #get the correct instance that has an agent installed on it.
      $sqlServerInstance = (Get-SqlCloneSqlServerInstance | Where server -ieq $_.NetName); 
      if ($sqlServerInstance -eq $null) { Throw "Unable to find the clone agent for $($_.NetName)" }
      write-verbose "Cloning $($_.Database) on $($_.NetName)"
      $clone = Get-SqlClone  `
         -ErrorAction silentlyContinue  `
         -Name "$($TheDatabase)"  `
         -Location $sqlServerInstance
      if (($clone) -ne $null)
      {
        write-warning  "Removing Clone $Thedatabase that already existed on $($_.NetName)"
        Remove-SqlClone $clone | Wait-SqlCloneOperation
      }
      Get-SqlCloneImage -Name $data.Image.Name |
      New-SqlClone -Name "$($Thedatabase)" -Location $SqlServerInstance   |
             Wait-SqlCloneOperation 
      if ($errors.Count -gt 0)
      {
        break
      }
    }
  }
  if ($errors.Count -gt 0)
    {
      #log every error before reporting, as Write-Error will halt the script ($ErrorActionPreference = "stop")
      $errors | foreach { "$((Get-Date).ToString()): $($_) the build process was aborted" >> $TheLogFile };
      $errors | foreach { Write-Error $_ }
    }
  else
      {
      "$((Get-Date).ToString()): the build process had no errors">>$TheLogFile
      }

Conclusions

There are a remarkable number of ways of doing automated CI with SQL Server databases. This is just as well considering the rich variety of team methodologies. For me, the great advantage of SQL Clone is not just the ease of automation and the saving of disk space, but the ease of jettisoning a database you are testing and reverting to the build version. The larger the size of the database, the greater the advantage of being able to do this.

The script I've shown here illustrates the possibilities. There are many things that you can add, such as data generation, script checking, and data masking, but the basic facilities of SQL Clone are certainly a big help in providing Continuous Integration for the larger team and database.
