By: Tim Smith | Updated: 2017-12-29
Problem
We maintain an environment with thousands of databases supporting different applications and internal functions, and we have recently faced challenges deploying some database changes to specific environments or environment groups without affecting other groups, while still allowing for some automation. Since multiple teams are involved - from developers to DBAs - and these changes do not affect the objects our applications need (tables, views, etc.), we want this process to be separate from our application deployment changes. We would like to dictate which database environments the scripts execute against and when they execute, while keeping the records and script files for tracking purposes in case we need to review or revise them.
Solution
To frame a solution for this problem, let's consider three questions:
- What happens if an invalid change occurs? In the case of developers, do they write another script to revert the change? Do the DBAs restore from a transaction log backup? Was a backup table saved so the change can be reverted from it? If the change is DBA maintenance, is reversion needed at all?
- Who is in charge of reverting an erroneous change? Think of an example where a development team adds invalid configuration data to application-specific data within a database. Who reverts those data changes?
- How are our databases delineated? In the case of configuration data, is it stored in databases separate from those holding customer information? This matters because reverting a change by restoring a transaction log may complicate an application if a database stores both configuration and customer data.
This tip will show one way we can dictate where scripts are executed by tracking the scripts and the time they're executed, and the above questions will determine what we do next from here. For example, if a script changes configuration data in a database that also holds customer data, as the DBA, I would require the configuration data to be backed up before the change, because reverting through a transaction log restore is not compatible with that database's use case (a mix of configuration and customer data), unless the DBAs are willing to restore a copy and compare the configuration data changes.
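As a minimal sketch of that backup step, the snippet below copies a configuration table to a dated backup table before the change runs; tbAppConfig, OurAppServer, and OurAppDB are hypothetical names for illustration, not part of this tip's schema:

#Sketch: back up a configuration table before changing it.
#tbAppConfig, OurAppServer and OurAppDB are hypothetical names.
$backupTable = "tbAppConfig_bak_" + (Get-Date -Format "yyyyMMddHHmm")
$backupQuery = "SELECT * INTO $backupTable FROM tbAppConfig"
Invoke-SqlCmd -ServerInstance "OurAppServer" -Database "OurAppDB" -Query $backupQuery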
Create our table for scripts and environments
We'll start off by creating a table with a unique Id field (Id), server and database information, and a datetime field (ExecuteDate) that begins as NULL. The ExecuteDate field will only be populated when a script has been successfully executed, and storing the server and database information as rows allows for new servers without requiring column expansion. Since the ExecuteDate and Id fields will be used either for finding new scripts to execute (ExecuteDate) or for updating the table after a script runs (Id), these fields will be indexed.
CREATE TABLE tbExecScript(
 Id INT IDENTITY(1,1),
 ServerName VARCHAR(100) NOT NULL,
 DatabaseName VARCHAR(100) NOT NULL,
 ScriptName VARCHAR(100) NOT NULL,
 ExecuteDate DATETIME NULL
)

ALTER TABLE tbExecScript
ADD CONSTRAINT PK_tbExecScript_Id PRIMARY KEY CLUSTERED (Id);

CREATE NONCLUSTERED INDEX IX_tbExecScript_ExecuteDate ON tbExecScript (ExecuteDate)

INSERT INTO tbExecScript (ServerName,DatabaseName,ScriptName)
VALUES ('OurDevServer','OurDevDB1','permissions1')
 , ('OurQAServer','OurQADB1','permissions1')
 , ('OurDevServer','OurDevDB2','addindexdata')
 , ('OurPreProdServer','OurPreProdDB1','updatetable')

SELECT Id, ServerName, DatabaseName, 'C:\OurLocation\' + ScriptName + '.sql' AS ScriptName
FROM tbExecScript
WHERE ExecuteDate IS NULL
We will use the above query in our PowerShell function. The C:\OurLocation\ path is where the scripts are stored; adjust this to wherever your scripts live. Since the files are SQL files, we append the .sql extension to each script name.
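Because the function assumes each file exists in C:\OurLocation\, we may also want a quick existence check before running anything; the below is a small sketch using Test-Path with one of the sample script names:

#Sketch: verify a script file exists before attempting to execute it.
$scriptfile = "C:\OurLocation\permissions1.sql"
if (-not (Test-Path $scriptfile))
{
    Write-Warning "Script file $scriptfile was not found."
}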
Create our build function to execute scripts
With the above query, we load the results into a DataTable and loop through each record, calling the Invoke-SqlCmd cmdlet and passing in the server, database, and file name. In the below example, we use integrated security to connect to other instances. Once a script has been executed on the appropriate server, we update the ExecuteDate in the table using the Id field.
Function Get-ExecuteScripts {
    Param(
        [Parameter(Mandatory=$true)][string]$server
        , [Parameter(Mandatory=$true)][string]$database
    )
    Process
    {
        #Select all scripts that have not been executed yet:
        $getscriptinfo = "
        SELECT Id, ServerName, DatabaseName, 'C:\OurLocation\' + ScriptName + '.sql' AS Script
        FROM tbExecScript
        WHERE ExecuteDate IS NULL
        "
        $scriptscon = New-Object System.Data.SqlClient.SqlConnection
        $scriptscon.ConnectionString = "Data Source=$server;Initial Catalog=$database;Integrated Security=true"

        $scriptcmd = New-Object System.Data.SqlClient.SqlCommand
        $scriptcmd.Connection = $scriptscon
        $scriptcmd.CommandText = $getscriptinfo
        $scriptcmd.CommandTimeout = 0

        try
        {
            $scriptscon.Open()
            #Load the pending scripts into a DataTable:
            $scriptdetails = $scriptcmd.ExecuteReader()
            $scriptdatatable = New-Object System.Data.DataTable
            $scriptdatatable.Load($scriptdetails)

            foreach ($scriptrow in $scriptdatatable.Rows)
            {
                $scriptid = $scriptrow["Id"]
                $scriptsrv = $scriptrow["ServerName"]
                $scriptdb = $scriptrow["DatabaseName"]
                $scriptfile = $scriptrow["Script"]
                Write-Host "Execute $scriptfile on server $scriptsrv.$scriptdb (id of $scriptid)"
                #Execute the script file against its target server and database:
                Invoke-SqlCmd -ServerInstance $scriptsrv -Database $scriptdb -InputFile $scriptfile
            }
        }
        catch [Exception]
        {
            Write-Warning "Get-ExecuteScripts (Connection: [$server].[$database])"
            Write-Warning $_.Exception.Message
            Write-Warning "Query: $getscriptinfo --Id $scriptid"
        }
        finally
        {
            $scriptscon.Dispose()
            $scriptcmd.Dispose()
        }
    }
}

#Fill in the server and database that host tbExecScript before calling:
Get-ExecuteScripts -server "" -database ""
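Calling the function only requires the server and database where tbExecScript lives; the names below are placeholders for whatever administration server tracks the scripts in your environment:

#Example call; OurAdminServer and OurScriptTrackDB are hypothetical names.
Get-ExecuteScripts -server "OurAdminServer" -database "OurScriptTrackDB"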
Add our command to the script for updating values in our table
Since we already have an open connection to the server holding the table we read from when executing scripts on other servers, we will reuse that connection to update the table once each script has been executed. Under the existing command, we'll add our update command:
$scriptcmd = New-Object System.Data.SqlClient.SqlCommand
$scriptcmd.Connection = $scriptscon
$scriptcmd.CommandText = $getscriptinfo
$scriptcmd.CommandTimeout = 0

$scriptupdate = New-Object System.Data.SqlClient.SqlCommand
$scriptupdate.Connection = $scriptscon
Next, inside each loop iteration we'll add the command text that updates the row for the current Id:
$scriptupdate.CommandText = "UPDATE tbExecScript SET ExecuteDate = GETDATE() WHERE Id = $scriptid"
try
{
    $scriptupdate.ExecuteNonQuery() | Out-Null
}
catch [Exception]
{
    Write-Warning "Get-ExecuteScripts (Connection: [$server].[$database])"
    Write-Warning $_.Exception.Message
    Write-Warning "Query: UPDATE tbExecScript SET ExecuteDate = GETDATE() WHERE Id = $scriptid"
}
The above code updates the table once the script has been executed on the appropriate server and database. If the script execution raises an error, this step is skipped, since the script did not complete successfully. Within a try-catch, the first failure stops the try block and moves to the catch; in this case, we want to know where the process stopped - on the execution or on the update - so we wrap the update in its own try-catch block. The below code sample demonstrates try-catch behavior with multiple statements; notice how the first failure within a try block moves control to the catch.
try
{
    Write-Host (10/0).ToString()
    Write-Host "Example one: trying ..."
    Write-Host (20/2).ToString()
}
catch [Exception]
{
    Write-Host $_.Exception.Message
}

Write-Host ([Environment]::NewLine)

try
{
    Write-Host (10/2).ToString()
    Write-Host "Example two: trying ..."
    Write-Host (20/0).ToString()
}
catch [Exception]
{
    Write-Host $_.Exception.Message
}

Write-Host ([Environment]::NewLine)

try
{
    Write-Host "Try 1"
    Write-Host (10/2).ToString()
    try
    {
        Write-Host (20/0).ToString()
    }
    catch [Exception]
    {
        Write-Host "Catch 2"
        Write-Host $_.Exception.Message
    }
}
catch [Exception]
{
    Write-Host $_.Exception.Message
}
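Running the above, the first try block fails before printing anything, the second prints its first two lines before failing, and the nested example is handled by the inner catch; the output should look roughly like this (the exact message text can vary by PowerShell version and locale):

#Expected output (approximate):
#Attempted to divide by zero.
#
#5
#Example two: trying ...
#Attempted to divide by zero.
#
#Try 1
#5
#Catch 2
#Attempted to divide by zero.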
The update can fail even if the scripts run, and if a script fails, the function's outer catch terminates the loop. In our example, once each of the four scripts has been executed on its appropriate server, all four rows show a populated ExecuteDate.
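Putting the pieces together, the loop inside Get-ExecuteScripts now runs each script and then records the execution time, with the update wrapped in its own try-catch as discussed above; this is the same code from the earlier sections assembled in one place:

foreach ($scriptrow in $scriptdatatable.Rows)
{
    $scriptid = $scriptrow["Id"]
    $scriptsrv = $scriptrow["ServerName"]
    $scriptdb = $scriptrow["DatabaseName"]
    $scriptfile = $scriptrow["Script"]
    Write-Host "Execute $scriptfile on server $scriptsrv.$scriptdb (id of $scriptid)"
    #Run the script on the target server; if this throws, the outer catch
    #fires and the update below never runs, leaving ExecuteDate NULL.
    Invoke-SqlCmd -ServerInstance $scriptsrv -Database $scriptdb -InputFile $scriptfile
    #Record the execution in its own try-catch so an update failure is
    #reported separately from a script failure.
    $scriptupdate.CommandText = "UPDATE tbExecScript SET ExecuteDate = GETDATE() WHERE Id = $scriptid"
    try
    {
        $scriptupdate.ExecuteNonQuery() | Out-Null
    }
    catch [Exception]
    {
        Write-Warning "Get-ExecuteScripts (Connection: [$server].[$database])"
        Write-Warning $_.Exception.Message
        Write-Warning "Query: UPDATE tbExecScript SET ExecuteDate = GETDATE() WHERE Id = $scriptid"
    }
}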
Finally, in some environments we will also want to consider security practices when thinking about this design:
- If our scripts execute from an administration server, consider separating development environments from production environments. In other words, we may have an administration server for lower environments and another for higher environments.
- Only allow port connections from the administration server to the SQL Servers where scripts will run. Disable all other connections to and from those servers, with the exception of a port opened temporarily for moving script files and then disabled afterward.
- Add a validation step when a developer submits a script; this adds a layer of security by putting the DBA in charge of migrating the script and adds oversight (a sketch of one approach follows this list).
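As one hedged sketch of that validation step - the approval columns below are an assumption layered onto this tip's table, not part of its original design - a DBA sign-off could gate which rows qualify for execution:

#Assumed extension to tbExecScript for DBA sign-off; these columns are
#not part of the original schema in this tip.
$approvalChange = "ALTER TABLE tbExecScript ADD ApprovedBy VARCHAR(100) NULL, ApprovedDate DATETIME NULL"
Invoke-SqlCmd -ServerInstance "OurAdminServer" -Database "OurScriptTrackDB" -Query $approvalChange
#The selection query in Get-ExecuteScripts would then also filter on approval:
# WHERE ExecuteDate IS NULL AND ApprovedDate IS NOT NULL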
Next Steps
- Scripts involved in changing the design or architecture should follow a normal deployment process, where the change occurs with the application and both are tested. For adding, updating, and/or removing data, and possibly some DBA architectural changes (indexing, stats, etc.), this is one way to help track these scripts.
- If you group servers by function, application, or another pattern, you can create a table with the server groups and join to it from the script table (see the sketch after this list). If the groupings stay consistent, this reduces complexity; if they change often, it may add complexity.
- Consider the most appropriate security design for your environment which allows flexibility for running scripts across multiple servers.
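As a rough sketch of the grouping idea from the second bullet - tbServerGroup, its columns, and the 'LowerEnvironments' group name are all assumptions for illustration - the script table joins to a group table so one call can target a single group:

#Hypothetical grouping table; the name and columns are assumptions.
$groupTable = "
CREATE TABLE tbServerGroup(
 ServerName VARCHAR(100) NOT NULL,
 GroupName VARCHAR(50) NOT NULL
)
"
#A group-filtered version of the selection query used in Get-ExecuteScripts:
$groupedScripts = "
SELECT s.Id, s.ServerName, s.DatabaseName, 'C:\OurLocation\' + s.ScriptName + '.sql' AS Script
FROM tbExecScript s
INNER JOIN tbServerGroup g ON g.ServerName = s.ServerName
WHERE s.ExecuteDate IS NULL
 AND g.GroupName = 'LowerEnvironments'
"
Invoke-SqlCmd -ServerInstance "OurAdminServer" -Database "OurScriptTrackDB" -Query $groupTable
Invoke-SqlCmd -ServerInstance "OurAdminServer" -Database "OurScriptTrackDB" -Query $groupedScripts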
About the author
This author pledges the content of this article is based on professional experience and not AI generated.