SQL Change Automation with PowerShell Scripts: Getting Up-and-Running
In this article, I'll demonstrate how to use SQL Change Automation to take the contents of a source control directory for a database, check that it is possible to build from it, document it, and then create a NuGet package.
In this article, I'll demonstrate how to use SQL Change Automation, with a PowerShell script, to take the contents of a source control directory for a database, check that it is possible to build from it, document it, and then create a NuGet package and place the code in it along with the documentation. Finally, I'll show how to synchronize the database schema of an existing database so that it matches the schema of the source control version that we just built and validated.
I'm on the liberal wing of opinion about how to do database delivery as part of the application. No sooner does one person decide on the royal road to stress-free database delivery than someone else comes up with a wildly different idea that is just as successful. Tools that assist with this process have become a lot more flexible to use, and SQL Change Automation (SCA) is typical of this trend, having evolved from the DLM Automation suite by adding support for migration-based development.
I will be writing a series of articles to illustrate how to use the PowerShell cmdlets of SCA to achieve parts of the deployment process. I'll assume you have everything installed, such as PowerShell and SCA, and that you've tried out the basic tutorials in the documentation; they are all worth working through, and once you have them running successfully, you'll be ready for more.
In this series, we'll go a lot further than a very simple install of a test database, so you'll need to install the SqlServer PowerShell module too. I'll also show how to do robust error checks so that you have more confidence that things went well.
PowerShell Scripting
When you are automating a process, the scripts that you use, whether Bash, command-line, or PowerShell, share several principles. Unlike a demonstration example of the use of a Cmdlet or an illustration of a process in an article, a real script must run unattended, and in it, you mustn't mix data, especially passwords, with process. The script must not report progress and errors to the screen because a process that runs on the scheduler doesn't have a screen. Even if there is a screen, the information provided is too ephemeral, and the significance of a strange message seen by the early morning shift operator may be missed. No, it is far better that progress, information, and errors should be logged to a file along with the time and date.
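As a minimal sketch, the logging idiom used in the scripts later in this article amounts to appending a timestamped line to a file of your choosing (the path here is illustrative):

#A sketch of the file-logging idiom used throughout these scripts
$TheLogFile = "$env:TEMP\BuildLog.txt" #put this somewhere sensible, but not in a Git directory
"$((Get-Date).ToString()) - build started">>$TheLogFile
"$((Get-Date).ToString()) - something noteworthy happened">>$TheLogFile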
A script should either run to completion or fail, leaving the state as it was. Whether it completes or fails, it should mop up (tear down) afterwards. Where possible, it should never just "bomb out."
To present a script like this in an article would make the listing very long and extremely dull, but to present a script without any indication of how a production script should look is misleading. I've tried to sketch in some of the requirements and show how one might deal with them, but the main objective is to get you past the pain threshold of getting something working.
What the Script Does
The PowerShell script uses the SCA PowerShell cmdlets, along with the SqlServer PowerShell module, to do part of a continuous build process. This script keeps all the information we need in a single PowerShell System.Collections.Hashtable called $config. This keeps things neat and allows us to store the data separately from the code when we get to the point that we need to have one script for several tasks, each with its own data values.
The project will be in a local GitHub "project" directory with the object-level SQL source files in it. So that we don't tax ourselves unduly, we'll use Northwind. A copy of this project will be attached to the article. SCA will also work happily with a single build script or an SCA offline schema model, but we'll use a SQL Compare scripts folder.
Our aim is to test that the project can be built into a database. We could also run some tSQLt unit tests to make sure that the source is good, and maybe also a SQL CodeGuard check that the source contains no code smells, but I thought that at this point we should keep the complexity down.
Checking for Connections
We'll start by checking that all the servers can be reached, with the connection information provided. The scripts use the New-DatabaseConnection cmdlet to get an SCA connection object, which we store for use later on, and the Test-DatabaseConnection cmdlet to check the connection.
Checking connections is a more elaborate process than you might expect because we can't, in real life, assume that we can use Windows authentication, because of Azure, remote servers outside the domain, and other such complications. Connection information will, therefore, include credentials, which must be stored securely.
We can't ever keep passwords for SQL Server authentication as part of the static script data. At this stage, we ask for passwords if they aren't known, and otherwise store them as secure strings on file, in the user area, protected by the workstation security. We can make the connection checks, and abort on failure, rather than wasting time starting the process before discovering a problem that is a complete job-stopper.
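The full script, later in this article, handles this in detail; as a focused sketch, the secure-string round trip is worth seeing on its own (the file name convention here is mine, and the saved file can only be decrypted by the same user on the same workstation):

#Sketch: keep a SQL Server password as a DPAPI-protected secure string in the user area
$passwordFile = "$env:USERPROFILE\MyLogin-MyServer.txt"
if (Test-Path -Path $passwordFile -PathType Leaf)
{ #we already know this password, so fetch it
$encrypted = Get-Content $passwordFile | ConvertTo-SecureString
$Credentials = New-Object System.Management.Automation.PsCredential ('MyLogin', $encrypted)
}
else
{ #ask the user just once, then save it for unattended runs
$Credentials = Get-Credential -Credential 'MyLogin'
$Credentials.Password | ConvertFrom-SecureString | Set-Content $passwordFile
}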
Checking for Valid Paths
Next, we will verify that the file locations you specify exist. It is always best to check that paths are valid and react accordingly. If it is the path to the project, then you'd probably want to abandon the build right there, whereas files for logs and data can be created before use.
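In outline, with illustrative paths, the check is a Test-Path with a different reaction for each kind of location:

#Sketch: a missing project directory is a job-stopper; a missing log directory can just be created
if (-not (Test-Path -PathType Container 'MyPathToTheProjectDirectory'))
{ Throw 'The project directory is missing, so the build cannot proceed' }
if (-not (Test-Path -PathType Container 'MyPathToTheLogDirectory'))
{ New-Item -ItemType Directory -Force -Path 'MyPathToTheLogDirectory' | Out-Null }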
Checking the Source
To check the source, SCA builds the empty database using Invoke-DatabaseBuild. It will be happy to use a LocalDB version of SQL Server to do this, but I'd much rather not do it that way, because this needs to be the same server installation (version and features) as the one the database needs. Instead, we supply a SQL Server instance to act as host for a temporary version of the database. Invoke-DatabaseBuild builds the database and, if it succeeds without any problems, then we can proceed. Whatever else happens, any problems are logged.
Writing the Documentation
Once we know that the database is valid, in the sense that it can be built from source without breaking, the next step is to produce database documentation with the New-DatabaseDocumentation cmdlet, which uses the code from SQL Doc. It takes the extended properties in the database and uses the dependency tracker to provide more useful documentation of the database than is possible just by inspecting the source code.
Creating the NuGet Package
Next, we use New-DatabaseBuildArtifact to produce the build artifact, a NuGet package containing the database source and all the documentation, which we can then view or install on a web server. The NuGet package is exported to a file directory because we don't have a NuGet server. The NuGet file can be used by SCA to update a database to the same version.
Updating the Target Database to the Same Version
Now that we have a valid build, which we have used to create the NuGet package, we can then synchronize a target database to the same version and save the script that did it using Sync-DatabaseSchema. For a build, you'll need to create the empty database on the target server, and then sync it, to update it to match the version in source control.
If this is successful, the script is saved; it will show the team exactly what was changed to migrate the database from the previous version to the current version, and they may also want to amend it. It can be used as the basis of a migration script that converts a database from one specific version of the database to another. The advantage of doing a sync is that we can sync with a development server that already has the test data, and ancillaries such as SSIS packages, in place, saving time.
Importing the Project Configuration Information
I don't demonstrate this in the script because it would be a distraction, but you can read PSON files as PowerShell. You save a file like the one shown in Listing 1, but with the real parameter values filled in.
@{
# The SQL Change Automation project (in source control directory) to validate, test and sync
'Project' = 'MyPathToTheProjectDirectory';
# The directory to store the NuGet Package in.
'BuildArtifact' = 'MyPathToTheArtifactDirectory';
# The directory to store the log in to record any errors
'LogDirectory' = 'MyPathToTheLogDirectory';
# the name of the project and Nuget Package
'PackageId' = 'MyBigProject';
# the version in the Nuget Package
'PackageVersion' = '1.0.0';
'Databases' = @{
'Temporary' = #the temporary database. (leave blank for local)
@{
'ServerInstance' = 'MyServer'; 'Database' = 'master'; 'Username' = 'NeverYouMind';'SQLCompareOptions'='';
};
'Target' = #the target database.
@{
'ServerInstance' = 'MyDevServer'; 'Database' = 'MyDatabase'; 'Username' = 'NeverYouMind';'SQLCompareOptions'='';
}
}
}
Listing 1: BuildData.ps1
Then, you read it into your script and do whatever checks you please.
$config=&'S:\work\programs\powershell\BuildData.ps1'
<# we read in the data as a structure. Then we do some sanity checking to make sure that the data is reasonably viable. #>
$DataError = ''
if ($config.BuildArtifact -eq $null) { $DataError += 'no $config.BuildArtifact, ' };
if ($config.Databases -eq $null) { $DataError += 'no $config.Databases, ' };
if ($config.PackageVersion -eq $null) { $DataError += 'no $config.PackageVersion, ' };
if ($config.PackageId -eq $null) { $DataError += 'no $config.PackageId, ' };
if ($config.LogDirectory -eq $null) { $DataError += 'no $config.LogDirectory, ' };
if ($config.Project -eq $null) { $DataError += 'no $config.Project, ' };
if ($DataError -ne '') { Throw "Cannot run the application because there is $DataError" }
Listing 2: Running checks for missing build data
One thing that I've left out is the PowerShell routine for taking the database version number from source control, or from whatever tool maintains it. I have a dummy version number in the script, but this ought to be overwritten with the source-control-specific value (git rev-parse HEAD in Git, for example).
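As a sketch of how you might do that, assuming git.exe is on the path and the project directory is a Git working copy, you could stamp the short commit hash into the pre-release suffix of the package version (NuGet accepts an alphanumeric pre-release suffix, such as '1.0.0-a1b2c3d'):

#Sketch: overwrite the dummy version with a value derived from source control
$commit = git -C $config.Project rev-parse --short HEAD
if ($LASTEXITCODE -eq 0 -and $commit)
{ $config.PackageVersion = "$($config.PackageVersion)-$commit" }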
The PowerShell Code
Let's start with a simple version of the script, one that performs the same basic functions as the more complicated (but also more robust) version I'll show shortly.
$errorActionPreference = "stop"
#variables that you need to fill in
$TemporaryDatabaseServer = 'MyConnectionStringToTheServer'
$TargetServerInstance = 'MyTargetServer'
$TargetDatabase = 'MyTargetDatabase'
$TargetUserName = 'MyLogin'
$TargetPassword = 'MyPassword'
$project = "MyPathTotheProject" # The SQL Change Automation project to validate, test and sync
# Validate the SQL Change Automation project
$validatedProject = Invoke-DatabaseBuild $project -TemporaryDatabaseServer $TemporaryDatabaseServer
#this builds the server temporarily to check that it can be done
#produce documentation and the nuget package
$documentation = $validatedProject | New-DatabaseDocumentation -TemporaryDatabaseServer $TemporaryDatabaseServer
$buildArtifact = $validatedProject | New-DatabaseBuildArtifact -PackageId MyDatabase -PackageVersion 1.0.0 -Documentation $documentation
$buildArtifact | Export-DatabaseBuildArtifact -Path "$project\buildArtifacts"
# Sync a database
$deploymentTargetConnection = New-DatabaseConnection -ServerInstance $TargetServerInstance -Database $TargetDatabase -Username $TargetUserName -Password $TargetPassword # Update this to use the blank database created earlier
$ConnectionErrors = @() # to store any connection errors in
$TestResult = Test-DatabaseConnection $deploymentTargetConnection -ErrorAction silentlycontinue -ErrorVariable ConnectionErrors
if ($ConnectionErrors.count -eq 0) #if we connected successfully
{ $syncResult = Sync-DatabaseSchema -Source $validatedProject -Target $deploymentTargetConnection
$syncResult.UpdateSql
}
else
{write-warning $ConnectionErrors[0]}
Listing 3: The simplified PowerShell script
Obviously, I've just assumed a SQL Server login with a (gulp) embedded password in the script and we're a bit short of error checking. We will need to do better, which sadly will make it all seem rather more complicated. However, as we move on to do more powerful things, the value of the framework we use here will kick in.
In putting values into the script, you will need some caution. Do not put the log file in a Git directory, because Git might lock the file and prevent the script from reporting errors. If you are using Windows authentication for a server, put an empty string into the username field.
Trap
{
# Handle the exception
$err = $_.Exception
$MyErrorMessage= $err.Message
while ($err.InnerException)
{
$err = $err.InnerException #walk down the chain of inner exceptions
$MyErrorMessage += $err.Message
};
"$((Get-Date).ToString()) - Unhandled error '$MyErrorMessage'">>$TheLogFile;
# End the script.
break
}
set-psdebug -strict
$ErrorActionPreference = "stop"
<# variables that you need to fill in for each project. Normally, I have this as a separate file and read the relevant data file in according to the project being built. The script adds to the config object as it is executed. #>
$config = @{
# The SQL Change Automation project (in source control directory) to validate, test and sync
'Project' = 'MyPathToTheProjectDirectory';
# The directory to store the NuGet Package in.
'BuildArtifact' = 'MyPathToTheArtifactDirectory';
# The directory in which to store the build error log.
'LogDirectory' = 'MyPathToTheLogDirectory';
# the name of the project and Nuget Package
'PackageId' = 'MyBigProject';
# the version in the Nuget Package
'PackageVersion' = '1.0.0';
'Databases' = @{
'Temporary' = #the temporary database. (leave blank for local)
@{
'ServerInstance' = 'MyServer'; 'Database' = 'master'; 'Username' = 'NeverYouMind';'SQLCompareOptions'='';
};
'Target' = #the target database.
@{
'ServerInstance' = 'MyDevServer'; 'Database' = 'MyDatabase'; 'Username' = 'NeverYouMind';'SQLCompareOptions'='';
}
}
}
# and some handy constants
$MS = 'Microsoft.SQLServer'
$My = "$MS.Management.Smo"
$errors = 0 #keep a count of the errors we encounter
$TheLogFile = "$($config.LogDirectory)\logfile.txt"
#Load SMO assemblies
$ImportErrors = @()
Import-Module SqlChangeAutomation -ErrorAction silentlycontinue -ErrorVariable +ImportErrors
Import-Module sqlserver -DisableNameChecking -ErrorAction silentlycontinue -ErrorVariable +ImportErrors
if ($ImportErrors.count -gt 0) #if we couldn't import something
{
$ImportErrors | foreach{
"The $($config.'PackageId') build gave an error '$($_)'"
"$((Get-Date).ToString()) - $_">>$TheLogFile;
write-warning $_;
$errors++ #keep a count of the errors
}
}
#check and if necessary create all directories specified by the config
@("$($config.LogDirectory)", "$($config.BuildArtifact)") | foreach{
# If necessary, create the directory for the artefact
if (-not (Test-Path -PathType Container $_))
{
# we create the directory if it doesn't already exist
New-Item -ItemType Directory -Force -Path $_;
}
}
@("$($config.Project)") | Foreach{
if (-not (Test-Path -PathType Container $_))
{
"$((Get-Date).ToString()) - the project file directory '$($config.Project)' isn't there">>$TheLogFile
Write-error "the project file directory for $($config.'PackageId'), '$($config.Project)' isn't there"
$errors++ #keep a count of the errors
}
}
<# We'll start by checking that all the servers can be reached with the connection information provided, so that we can quickly ascertain that all the server connection information is OK. We can make the connection checks, and abort on failure, before wasting time starting the process. #>
$config.Databases.GetEnumerator() | foreach{
$ConnectionErrors = @() # to store any connection errors in
$Database = $_
$db = $Database.Value;
if ($db.username -ieq '')
{
# Crikey, this is easy, Windows passwords. Don't you love 'em?
$db.Connection =
New-DatabaseConnection -ServerInstance $db.ServerInstance -Database $db.Database
}
else
{
<# This is more elaborate a process than you might expect because we can't assume that we can use Windows authentication, because of Azure, remote servers outside the domain, and other such complications. We can't ever keep passwords for SQL Server authentication as part of the static script data. At this stage, we ask for passwords if they aren't known, and otherwise store them as secure strings on file in the user area, protected by the workstation security. #>
#create a connection object to manage credentials
$conn = new-object "$MS.Management.Common.ServerConnection"
$conn.ServerInstance = $db.ServerInstance
$encryptedPasswordFile="$env:USERPROFILE\$($db.Username)-$($db.ServerInstance).txt"
# test to see if we know about the password in a secure string stored in the user area
if (Test-Path -path $encryptedPasswordFile -PathType leaf)
{
#has already got this set for this login so fetch it
$encrypted = Get-Content $encryptedPasswordFile | ConvertTo-SecureString
$Credentials = New-Object System.Management.Automation.PsCredential($db.Username, $encrypted)
}
else #then we have to ask the user for it
{
#hasn't got this set for this login
$Credentials = get-credential -Credential $db.Username
$Credentials.Password | ConvertFrom-SecureString | Set-Content $encryptedPasswordFile
}
$conn.LoginSecure = $false
$conn.Login = $Credentials.UserName
$conn.SecurePassword = $Credentials.Password
#We get the password on the fly for the Cmdlet that creates the SCA connection object for each database
$db.Connection = New-DatabaseConnection -ServerInstance $db.ServerInstance -Database $db.Database -Username $conn.Login -Password $conn.Password
}
#now we can test that the credentials get to the server
$TestResult = Test-DatabaseConnection $db.Connection -ErrorAction silentlycontinue -ErrorVariable ConnectionErrors
if ($ConnectionErrors.count -gt 0) #if we couldn't connect to something
{
write-warning "$($Database.Name) gave an error $($connectionErrors[0])";
"$((Get-Date).ToString()) - $($Database.Name) of $($config.'PackageId') gave an error $($connectionErrors[0])">>$TheLogFile;
$errors++
}
}
if ($errors -eq 0) #if there were errors, then it gives up at this stage and reports the errors.
{
# we now check whether we can build this without errors
$tempServerConnectionString = $config.Databases.Temporary.Connection.ConnectionString.UnmaskedValue
# Validate the SQL Change Automation project and import it into a ScriptsFolder object
try
{
$validatedProject = Invoke-DatabaseBuild $config.Project -TemporaryDatabaseServer $tempServerConnectionString -SQLCompareOptions $config.Databases.Temporary.SQLCompareOptions 3>>$TheLogFile
}
catch #could not get the -ErrorAction silentlycontinue -ErrorVariable ConnectionErrors to work
{
$_.Exception.Message
"$($Database.Name;) of of $($config.'PackageId') couldn't be validated because $($_.Exception.Message)" | Foreach{
write-warning $_
"$((Get-Date).ToString()) - $_">>$TheLogFile;
$errors++;
}
}
}
if ($errors -eq 0) #if there were errors, then it gives up at this stage and reports the errors.
{
# OK we can now build it
#this section builds the server temporarily to check that it can be done
#Then it produces documentation and the nuget package
if ($validatedProject.GetType().Name -ne 'ScriptsFolder')
{
$TheError = "$($config.PackageId) could not be verified."
write-warning $TheError;
"$((Get-Date).ToString()) - $TheError">>$TheLogFile;
}
else
{
#get the SchemaDocumentation object that we can then add to the nuget package
$documentation = $validatedProject |
New-DatabaseDocumentation -TemporaryDatabaseServer $tempServerConnectionString
if ($documentation.GetType().Name -ne 'SchemaDocumentation')
{
$TheError = "$($config.PackageId) could not be documented."
write-warning $TheError;
"$((Get-Date).ToString()) - $TheError">>$TheLogFile;
}
$buildArtifact = $validatedProject |
New-DatabaseBuildArtifact -PackageId $config.PackageId -PackageVersion $config.PackageVersion -Documentation $documentation
if ($buildArtifact.GetType().Name -ne 'SocBuildArtifact')
{
$TheError = "$($config.PackageId) build artefact could not be created."
write-warning $TheError;
"$((Get-Date).ToString()) - $TheError">>$TheLogFile;
}
$buildArtifact | Export-DatabaseBuildArtifact -Path "$($config.BuildArtifact)" -force
}
# Sync a database
$syncResult = Sync-DatabaseSchema -Source $validatedProject -Target $config.Databases.Target.Connection
# And Save the script
$syncResult.UpdateSql>"$($config.BuildArtifact)\$($config.PackageId)$($config.PackageVersion).SYNCSQL"
}
if ($errors -gt 0)
{
"$((Get-Date).ToString()) - the build process was aborted">>$TheLogFile;
}
Listing 4: The full PowerShell code (SCABuildAndSync.ps1)
Conclusions
We've looked at just a small aspect of SCA, the PowerShell Cmdlets, and just tried a small part of the functionality. There is a lot of power in there. Most of the work in the script is in checking things such as server connections, paths, and build scripts. The actual SCA tasks represented by the SCA cmdlets will be as easy as Lego to add in once there is a viable framework for doing so. To prove this point, I'll next use the same principles demonstrated here to update an entire test cell, creating the databases wherever necessary and stocking them with masked test data. I've already added the module that gives us access to SMO so I'll be showing you some examples of how we'll use that to extend SCA's functionality.
Published at DZone with permission of Phil Factor.