Safely Deleting Clones and Images During Database Development and Testing
Learn how to safely delete clones and images during database development and testing.
This is the third article in a series that explains how to use SQL Clone, part of SQL Provision, plus a collection of PowerShell scripts, all with a shared configuration data file, to deploy, revert, customize, delete, and refresh clones for database development and testing work.
The first article, Deploying and Reverting Clones for Database Development and Testing, mapped out a Clone installation consisting of a source database, an image, and some clones. It presented a shared configuration data file and then a PowerShell clone installation script that used this config file to create a suite of clones for testing or development work. A second PowerShell script showed how to revert or roll back a clone to its original state, ready for the next set of tests or development work to begin, first ensuring that any changes made since the clone was created were saved to source control.
The second article, Scripting Custom SQL Server Clones for Database Development and Testing, showed how to apply T-SQL modification scripts during image and clone creation to customize all clones, or individual clones, prior to use. Simply by adapting the data in the config file, we could use image modification scripts to alter the image before the clones are taken from it, and use both clone templates and SQL scripts to check and alter the individual clones. These scripted modifications then run automatically as part of installing the clones, without any need to touch the code in the installation script.
Here, I present a PowerShell script that you can use to safely delete all clones and then the parent image, in readiness for refreshing all development and test instances with the latest version of the database. As with the rollback process, this script aims to manage deletion so as to ensure that work doesn't get lost. By combining the deletion script with the installation script, you can, in effect, refresh all clones when the image is updated to reflect changes in the original database. Taken together, the scripts are intended to manage a typical provisioning cycle that keeps the clones in sync with the current version of the database.
Updating Clones to the Latest Build
In testing or development, it is important to be sure of working with a known version of the database, usually the latest build. With SQL Clone, you take an image from the database that represents that build, plus data, and clone identical copies of it. Whenever the version changes, these clones need to be migrated to the new version. When you are working with a group of test and development servers, this can present a challenge.
You can certainly apply a synchronization script to bring each clone up to the latest version, but this isn't always practical: each clone could be in a different state, so each clone would require its own synchronization script. Moreover, because the changes are held in a differencing file on the server's local storage, you gradually lose one of SQL Clone's great advantages, the huge saving in disk space. You would also be working around a problem that has ceased to exist, namely the length of time it takes to copy a database; with SQL Clone, it is a matter of seconds.
When working with clones, it is much better — once a new build has been successfully made and stocked with appropriate data and tested — to delete all the clones and then the image that they used. Once that is done, you can create a new image from the successful build with the same name and recreate the new clones under their previous names. As a refinement of this, with a large development or test database, you can create the new image first — under a different name — before dropping the old clones and recreating them with the new image and then finally dropping the old image.
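To make the "image first" refinement concrete, here is a minimal sketch using the standard SQL Clone cmdlets. The server URL, instance, database, and image names are illustrative assumptions; a production version would take them from the shared config file, as the scripts in this series do.

```powershell
<# A sketch of the 'image first' refresh: create the new image under a different name,
   recreate each clone from it under its previous name, then drop the old image.
   All names and the server URL here are illustrative. #>
Connect-SqlClone -ServerUrl 'http://MyCloneServer:14145'
$buildInstance = Get-SqlCloneSqlServerInstance | Where-Object Server -ieq 'MyBuildServer'
$oldImage = Get-SqlCloneImage -Name 'MyDatabaseImage'
$destination = Get-SqlCloneImageLocation | Select-Object -First 1
# take the new image from the successful build, under a different name
$operation = New-SqlCloneImage -Name 'MyDatabaseImage-New' `
    -SqlServerInstance $buildInstance -DatabaseName 'MyDatabase' -Destination $destination
Wait-SqlCloneOperation -Operation $operation
$newImage = Get-SqlCloneImage -Name 'MyDatabaseImage-New'
# drop each old clone and recreate it, under its previous name, from the new image
Get-SqlClone -Image $oldImage | foreach {
    $location = Get-SqlCloneSqlServerInstance | Where-Object Id -eq $_.LocationId
    $_ | Remove-SqlClone | Wait-SqlCloneOperation
    $newImage | New-SqlClone -Name $_.Name -Location $location | Wait-SqlCloneOperation
}
# finally, drop the old image
Remove-SqlCloneImage -Image $oldImage
```

Because the clones are recreated before the old image is dropped, the period during which developers have no clone to work on is kept to a minimum, at the cost of temporarily storing two images.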
Deleting the old clones and parent image is, in fact, a very simple process, either in the GUI or in PowerShell, using the Remove-SqlClone and Remove-SqlCloneImage cmdlets.
<# now we very simply delete every clone #>
$image = Get-SqlCloneImage -Name $data.Image.Name
# with the image object, we can now delete the clones
Get-SqlClone -Image $image | foreach {
    $_ | Remove-SqlClone | Wait-SqlCloneOperation
};
$null = Remove-SqlCloneImage -Image $image
Is this likely to meet your needs? If so, read no further.
However, the main problem for developers is that it is all too easy to do some scripting on a clone — such as creating a routine or redesigning some tables — and forget to save the work to source control. Worse, someone might still be using their local clone, when suddenly it disappears! Yes, it is perfectly possible for the clone to be deleted while there are active users on the database, unless you check first to see when the last read or write took place, or some other measure of activity. The deletion of the image is also remarkably easy once the clones are deleted, but this is less likely to cause collateral damage.
This means that, to be safe, we'll probably want to run some checks before the clones are deleted. The Remove-SqlClone cmdlet does not support the use of clone templates, so we can't run any SQL queries, either through these cmdlets or through the GUI, before a clone is destroyed.
This is fine if you feel lucky or don't have tight timescales for test-runs. However, I'll show how to get around these problems and delete clones safely.
Safety Checks Before Deleting the Old Clones and Image
Deleting clones can require additional SQL scripts to be executed first, such as scripts that export the results of automated tests. You can specify these in the config file as a BeforeDeleteScript. You might also need an external tool to save off any recent schema changes. As an example, we'll use SQL Compare to save all the differences between each current clone and the image. We can't compare directly to the image, and wouldn't want to. Instead, we create a "reference" clone, which I refer to as the "original," set it to read-only, compare each clone to the original, and save the resulting synchronization script. This is the same technique that I used in the script to revert clones to their original state, as described in the first article.
Here, we also need to find out if there is anyone who is still actively using a clone. It is no use just checking for open SPIDs; we need to find out how recently they've used the system, which we can do using a query like this (you need to provide the name of the database of course!):
USE master;
SELECT Coalesce(Min(DateDiff(MINUTE, last_read, GetDate())), 20000) AS MinsSinceLastRead,
       Coalesce(Min(DateDiff(MINUTE, last_write, GetDate())), 20000) AS MinsSinceLastWrite
  FROM sys.dm_exec_connections A
    INNER JOIN sys.dm_exec_sessions B
      ON A.session_id = B.session_id
  WHERE database_id = Db_Id('<NameOfDatabase>');
We run this query from the RemoveClonesandImage PowerShell script (presented shortly) and terminate that script after the initial checks if the result returned by the query indicates recent activity in any of the clones. I use the last thirty minutes as the threshold, but you can change this in the PowerShell script.
The RemoveClonesandImage Script
This script is used where your requirement is for a regular process that can automate more complex provisioning tasks. If you prefer just a simple deletion process via basic PowerShell scripts, then look at the SQL Clone documentation, which has several examples.
This script uses the same PowerShell configuration data file that we've used in the previous two articles. This allows you to opt clones out of the "backstop comparison process" that uses SQL Compare to save any potentially unsaved work. The structure of the config file is as follows:
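The published file is in the GitHub repository mentioned below; what follows is a sketch of its shape, reconstructed from the fields that the deletion script reads, with all names and paths being illustrative assumptions rather than the published values.

```powershell
<# A sketch of the shape of the shared config data file (CloneConfig.ps1), reconstructed
   from the fields the scripts read. The file is a script that returns this hashtable. #>
@{
    'Image' = @{
        'Name'      = 'MyDatabaseImage';           # the parent image
        'ServerURL' = 'http://MyCloneServer:14145' # the SQL Clone server
    };
    'Clones' = @(
        @{ 'NetName' = 'MyDevServer'; 'Database' = 'MyDatabase';
           'username' = '';          # blank means Windows authentication
           'IsOriginal' = $true },   # the read-only 'reference' clone
        @{ 'NetName' = 'MyTestServer'; 'Database' = 'MyDatabase';
           'username' = 'TestUser';  # SQL Server authentication; the password is requested once
           'NoCheck' = $false;       # set $true to opt out of the SQL Compare backstop
           'BeforeDeleteScripts' = @('MyPathTo\SaveTestResults.sql') }
    );
    'Tools' = @{ 'SQLCompare' = 'C:\Program Files (x86)\Red Gate\SQL Compare 13\sqlcompare.exe' };
    'WorkDirectory' = 'C:\CloneWorkDirectory'
}
```

Note that the Credentials member used by the script is attached to each clone entry at runtime, not stored in this file.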
You can access the RemoveClonesandImage.ps1 script, shown below, and all the other scripts for this series of articles about working with clones for development and testing work at my GitHub repository: https://github.com/Phil-Factor/SQLCloneFamily.
$VerbosePreference = "Continue"
<#
This PowerShell script removes an image, deleting all its clones first, and backing up
the changes to the metadata if you require it. It also checks each clone before deleting it
to make sure that there is no current activity. It allows you to specify one or more
other SQL scripts that you wish to run before a clone is deleted.
#>
# set "Option Explicit" to catch subtle errors
set-psdebug -strict
$ErrorActionPreference = "stop"
<# first, find out where we were executed from so we can be sure of getting the data#>
try
{ $executablepath = [System.IO.Path]::GetDirectoryName($myInvocation.MyCommand.Definition) }
catch
{
$executablepath = "$(If ($psISE)
{ Split-Path -Path $psISE.CurrentFile.FullPath }
Else { $global:PSScriptRoot })"
}
<# just to make it easier to understand, the various parameter values are structured in a
hierarchy. We iterate over the clones when checking and deleting them #>
$Errors = @()
#First we read in the configuration from a file (do it so we can use the ISE as well)
try {
$Data = &"$executablePath\CloneConfig.ps1"
}
catch
{
$Errors +="Could not access the config file at $executablePath\CloneConfig.ps1"
}
<# we read in the data as a structure. #>
<# now we need to find the clone that serves as the 'original', which we compare with each
clone that is to be deleted, in order to save any differences. #>
$originalClone = @()
$data.clones | foreach {
if ($_.IsOriginal -eq $true)
{ $originalClone = $_ };
}
<# check that we have got everything correctly #>
if ($originalClone.IsOriginal -ne $true)
{
$errors += 'You have not defined which clone represents the original'
}
Connect-SQLClone -ServerUrl $data.Image.ServerURL `
-ErrorAction silentlyContinue `
-ErrorVariable +Errors
if ($Errors.count -eq 0)
{
$image = Get-SqlCloneImage -Name $data.Image.Name `
-ErrorAction silentlycontinue `
-ErrorVariable +Errors
if ($Errors.Count -gt 0)
{ Write-Warning "The image $($data.Image.Name) can't be found" }
}
# we need to get hold of the passwords for any connection that has a userid and attach it
# to each clone object as a credential for use later.
# We save these credentials in a file within the user area, relying on NTFS security and
# encryption (gulp)
# We only ask for the password once for each server. If you change the password
# you need to delete the corresponding file in your user area.
if ($Errors.count -eq 0)
{
$data.clones | foreach {
if ($_.username -ine '')
{
#create a connection object to manage credentials
$encryptedPasswordFile = "$env:USERPROFILE\$($_.username)-$($_.Netname).txt"
# test to see if we know the password, stored as a secure string in the user area
if (Test-Path -path $encryptedPasswordFile -PathType leaf)
{
#has already got this set for this login so fetch it
$encrypted = Get-Content $encryptedPasswordFile | ConvertTo-SecureString
$_.Credentials = New-Object System.Management.Automation.PsCredential($_.username, $encrypted)
}
else #then we have to ask the user for it
{
#hasn't got this set for this login
$_.Credentials = Get-Credential -Credential $_.username
$_.Credentials.Password | ConvertFrom-SecureString |
Set-Content $encryptedPasswordFile
}
}
}
}
if ($data.tools.SQLCompare -ne $null)
<# we define the SQLCompare alias to make calling it easier. If the user hasn't defined the
location of the tool, we simply don't do the comparison #>
{
Set-Alias SQLCompare $data.tools.SQLCompare -Scope Script;
$NoSQLCompare = $false
}
else
{ $NoSQLCompare = $true }
<# now we iterate through the clones, other than the original one, and do a SQL Compare where
one is required. We also check to make sure that none of the clones are being used. Finally, we
run any scripts specified to be run before the clone is destroyed. #>
if ($Errors.count -eq 0)
{
$data.clones |
Where { (-not (($_.database -eq $originalClone.Database) -and ($_.NetName -eq $originalclone.NetName))) } |
# don't do the original because it can't be written anyway.
foreach {
# we use this string so much it is worth calculating it just once
$OurDB="$($_.Database) on $($_.NetName)"
write-verbose "Checking $OurDB"
$CloneIsThere = $True; #assume yes until proven otherwise
$sqlServerInstance = (Get-SqlCloneSqlServerInstance -ErrorAction SilentlyContinue |
Where server -ieq $_.NetName); #test if it is there
if ($sqlServerInstance -eq $null)
{
write-verbose "The Clone $OurDB was not found"; $CloneIsThere = $false
}
else
{
$clone = Get-SqlClone `
-ErrorAction silentlyContinue `
-Name "$($_.Database)" `
-Location $sqlServerInstance
if ($clone -eq $null)
{ #because it isn't there
write-verbose "The Clone $OurDB was not there";
$CloneIsThere = $false };
}
#We only do the compare if SQL Compare is available, the clone is there, and a check is wanted for this clone
if ($_.nocheck -ne $true -and $CloneIsThere -eq $true -and $NoSQLCompare -eq $false)
{
write-verbose "checking whether anything has changed on clone $OurDB compared with $($OriginalClone.Netname) $($OriginalClone.Database)"
<# we check to make sure that the path exists to the work directory#>
if (-not (Test-Path -PathType Container "$($data.WorkDirectory)"))
{ #if the path doesn't exist, we create it
New-Item -ItemType Directory -Force -Path "$($data.WorkDirectory)" `
-ErrorAction silentlycontinue -ErrorVariable +Errors;
}
# we calculate the name of the file where to put the script that shows the changes
$OutputMigrationScript = "$($data.WorkDirectory)\$($_.Database)-$($OriginalClone.Database)"
# if there is already a script file there, we rename it
if (Test-Path -PathType Leaf "$OutputMigrationScript.sql")
{
rename-item -literalpath "$OutputMigrationScript.sql" -NewName "$OutputMigrationScript$(Get-Date -format FileDateTime).sql" -Force `
-ErrorAction silentlycontinue -ErrorVariable +Errors;
}
<# We assemble all the commandline arguments required for SQL Compare#>
$AllArgs = @("/server1:$($OriginalClone.Netname)", # The source server
"/database1:$($OriginalClone.Database)", #The name of the source database on the source server
"/server2:$($_.Netname)", #the clone
"/database2:$($_.Database)", #The name of the database on the clone server
"/scriptfile:$($OutputMigrationScript).sql",
"/include:Identical")
<# We add in extra parameters if necessary to deal with sql server authentication #>
if ($OriginalClone.username -ne '')
{
$AllArgs += "/password1:$($OriginalClone.Credentials.GetNetworkCredential().Password)"
$AllArgs += "/username1:$($OriginalClone.username)"
}
if ($_.username -ne '') # it must be SQL Server authentication
{
$AllArgs += "/password2:$($_.Credentials.GetNetworkCredential().Password)"
$AllArgs += "/username2:$($_.username)"
}
<# now we can at last run SQL Compare to save the script changes just in case #>
SQLCompare @AllArgs > "$($OutputMigrationScript).txt" #save the output
if ($?) { "The clones have now been compared (see $($OutputMigrationScript).txt)" }
else
{
if ($LASTEXITCODE -eq 63) { 'Databases were identical' }
else { $errors += "we had a comparison error! (code $LASTEXITCODE)" }
}
}
<# now we check for recent activity, and run any scripts specified to be run before deletion #>
if ($CloneIsThere -eq $true) #we only do it if the clone is still there
{# we create a connection string to run some SQL
$ConnectionString = "Data Source=$($_.Netname);Initial Catalog=$($_.Database);"
if ($_.username -ieq '') #no user name. Windows authentication
{
$ConnectionString += 'Integrated Security=SSPI;'
}
else # we need to get that password.
{
$ConnectionString += "uid=$($_.username);pwd=""$($_.Credentials.GetNetworkCredential().Password)"";"
}
$SqlConnection = New-Object System.Data.SqlClient.SqlConnection($connectionString)
# open a connection
$SqlConnection.Open()
# create a command
$sqlCommand = $sqlConnection.CreateCommand()
# Firstly, we do a query to see what activity there has been on this database recently
$sqlCommand.CommandText = "USE master
SELECT Coalesce(Min (DateDiff(MINUTE,last_read, GetDate())), 20000)
AS MinsSinceLastRead,
Coalesce(Min (DateDiff(MINUTE,last_write, GetDate())), 20000)
AS MinsSinceLastwrite
FROM sys.dm_exec_connections A
INNER JOIN sys.dm_exec_sessions B ON
A.session_id = B.session_id
WHERE database_id =Db_Id('$($_.Database)')"
$reader=$sqlCommand.ExecuteReader()
if ($reader.HasRows) #we read what data was returned.
{
while ($reader.Read())
{
$MinsSinceLastRead=$reader.GetInt32(0);
if ($MinsSinceLastRead -lt 30)
{$errors+="A user read data only $MinsSinceLastRead minutes ago on $OurDB"}
$MinsSinceLastWrite=$reader.GetInt32(1);
if ($MinsSinceLastWrite -lt 30)
{$errors+="A user wrote data only $MinsSinceLastWrite minutes ago on $OurDB"}
}
}
$reader.Close() # close the reader, or the command can't be reused for the scripts below
<# now we execute any extra SQL scripts specified by the data #>
if ($_.BeforeDeleteScripts -ne $null)
{
$_.BeforeDeleteScripts.GetEnumerator() | foreach { # do each script
$sqlCommand.CommandText = ([IO.File]::ReadAllText($_))
$sqlCommand.ExecuteNonQuery()
}
}
$SqlConnection.Close()
}
}
}
<# now we remove the clones and the image #>
If ($Errors.count -eq 0)
{<# now we very simply delete every clone #>
$image = Get-SqlCloneImage -Name $data.Image.Name
#with the image object, we can now delete the clones
Get-SqlClone -Image $image | foreach {
write-verbose "Now deleting $($_.Name) on $((Get-SqlCloneSqlServerInstance | where Id -eq $_.LocationId).ServerAddress)"
$_ | Remove-SqlClone | Wait-SqlCloneOperation
};
write-verbose "Now removing the image $($Image.Name) taken from $($Image.OriginServerName).$($Image.OriginDatabaseName) "
$null = Remove-SqlCloneImage -Image $Image
};
<# We collect all the soft errors and deal with them here.#>
if ($errors.Count -gt 0)
{
$errors | foreach {
"$((Get-Date).ToString()): $($_) The image deletion was aborted" >> "$($Data.WorkDirectory)\Errors.log";
Write-Error "$($_)"
}
};
Recreating the New Image and Clones
Having deleted all the clones, without disrupting ongoing development work or losing any unsaved changes, the next part of the process is to take an up-to-date image of the latest build and create a batch of fresh clones, using the CreateOrRenew.ps1 script (described here).
It is very likely that each new clone will need to be customized for the needs and access control requirements of the user of that clone, so the use of a shared configuration file ensures that these differences are maintained when the new clone is created. Every time you run this "refresh" process, the CreateOrRenew script will use the image modifications and clone templates specified in the configuration data file, as described here.
When you need to revert or roll back a clone to its original state, such as after a test run that affected the schema or data, you can use the RollbackClone.ps1 script, which will reapply any existing clone templates.
Conclusion
SQL Clone is easily managed via the GUI that is provided, and the features of the PowerShell scripting are generally kept in sync, so that whatever you can do in the GUI is also possible in PowerShell. This means that any routine process can be automated easily once it has settled down.
The method of producing a series of scripts that share a data structure, rather than using a procedural approach based on functions, can be a culture shock for an application developer. From my own work with Ops people, and from my time on an Ops team, I found it to be the preferred approach, which is why the art of Bash and DOS scripting survives in a healthy state to this day. It means that a script can be scheduled and (hopefully) left alone, allowing the team to make any changes purely through the data structure and by checking logs.
There is a great deal of choice in the way that all this is automated in PowerShell, and here, I'm illustrating just one approach, but it should be easy to take what I've done as an example or a starting point.
Published at DZone with permission of Phil Factor. See the original article here.