By: Chad Boyd | Updated: 2007-10-15
So, here's an interesting scenario I ran into a few weeks back. A customer was using a read-only database (to serve reports and ad-hoc queries during production hours) that was loaded nightly with incremental data from a variety of systems. The data was imported nightly from flat files (in a variety of formats) using a BULK INSERT operation against each file. Outside of load windows the database was marked "read-only", and only during the load window was it marked "read-write". The database was starting to grow quite large, so to cut down on backup time and backup size, the customer switched from taking a full backup nightly to taking a single full backup once a week (Sunday) plus a single log backup after each nightly bulk-load operation. To accommodate this, the database's recovery model was switched from SIMPLE to BULK_LOGGED (where it stayed at all times). The customer figured (correctly) that the nightly log backups would be smaller and quicker to complete than an entire full backup each night.
So, the high-level outline for a week of processing was similar to the following:
1) Full backup on Sunday after the daily data-load
2) Database marked as "read-only"
3) Nightly, the database is temporarily marked as "read-write"
4) All flat-file data is bulk-loaded into the database (including a drop/create of an index or two)
5) Database is set back to "read-only"
6) Transaction log backup is taken
The customer was surprised to see that as the week progressed, their log backups kept getting larger and larger until Sunday came around again and a full backup was taken; at that point the log backup for Monday would be the typical size, and each night's log backup thereafter would again grow larger and larger. The customer was a bit confused as to why this was occurring, since they figured each nightly log backup should only contain the data that was imported in that night's load. That would in fact be correct under typical circumstances; however, in this case there is a logical error in the processing order that is leading to the bloated log backups.
Before I discuss the logic issue, I'll first give a very brief overview of how minimally logged operations work in relation to what is logged during the operation and how these operations impact transaction log backups. If you already understand these concepts, you can skip this section and jump down a few paragraphs to where I start discussing the issue at hand.
When you perform a minimally logged operation in SQL Server, you are required to meet a few prerequisites, one of which is that the database must be using the BULK_LOGGED recovery model (I won't discuss the others here, but Books Online contains a detailed list under the "Minimally Logged Operations" and "Prerequisites for Minimal Logging in Bulk Import" topics). When you perform a minimally logged operation, only space allocations are logged during the operation itself (along with the fact that a minimally logged operation occurred). However, as extents in the data files are modified by the bulk operation, bits are flipped in the BCM (Bulk Changed Map) pages within the file - special pages SQL Server uses to track extents that have been modified by a bulk operation (for more information, see the topic "Tracking Modified Extents" in Books Online). Then, when the next log backup runs, the backup operation scans these BCM pages looking for any set bits; if it finds any, it writes the contents of those extents out to the log backup currently taking place, and then clears the bits in the BCM pages so that the next log backup doesn't need to push those same extents out again.
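To make those prerequisites a bit more concrete, here's a minimal sketch of a bulk load that qualifies for minimal logging under BULK_LOGGED (the database, table, and file names here are placeholders for illustration only, not the customer's actual objects):
-- Minimal sketch only - hypothetical database/table/file names.
-- 1) The database must be using the BULK_LOGGED (or SIMPLE) recovery model...
alter database SomeReportDb set recovery bulk_logged;
go
-- 2) ...and the load itself must meet the bulk-import prerequisites
--    (non-replicated target, table lock held via TABLOCK, and so on - see the
--    Books Online topics mentioned above for the full list).
bulk insert SomeReportDb.dbo.SomeStagingTable
    from 'c:\loads\nightly_extract.bcp'
    with (datafiletype = 'native', tablock);
go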
So, having said that, you may already be able to determine what the problem with the above logic is. Given that a log backup operation reads the BCM pages for modified extents and then flips the bits in those pages, the log backup operation obviously needs to be able to WRITE to the BCM pages. If the database is in read-only mode, the log backup can't clear those bits, and hence the next log backup will pull the same extents (plus any others that have changed since) into that backup as well. In this case, the logic was to set the database back to "read-only" mode before taking the log backup. The simple solution is to take the log backup while the database is still in "read-write" mode, and only then set the database to "read-only". Once this simple change was made, all nightly log backups were the expected size and no longer grew indefinitely. The new high-level logic became the following (a condensed T-SQL version of this sequence follows the list below):
1) Full backup on Sunday after the daily data-load
2) Database marked as "read-only"
3) Nightly, the database is temporarily marked as "read-write"
4) All flat-file data is bulk-loaded into the database (including a drop/create of an index or two)
5) Transaction log backup is taken
6) Database is set back to "read-only"
Notice the subtle but important change - simply swapping the order of steps 5 and 6 from the original logic...
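Here's that corrected nightly sequence as a minimal T-SQL sketch (the database name and backup path are placeholders, not the customer's actual code, and the bulk-load step is elided):
-- Minimal sketch of the corrected nightly job - placeholder names/paths.
alter database SomeReportDb set read_write;
go
-- ...bulk-load the flat files here (BULK INSERT ... WITH (TABLOCK), plus the
-- index drop/create), exactly as before...
-- Take the log backup while the database is still read-write, so the backup
-- can clear the BCM bits for the extents it just copied out...
backup log SomeReportDb to disk = 'c:\backups\SomeReportDb_nightly.trn';
go
-- ...and only then flip the database back to read-only for the day.
alter database SomeReportDb set read_only;
go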
I've included sample code below to reproduce this same type of situation if you're interested in seeing it in action. It's important to note that a similar pattern would show up if you were using a differential backup model instead of a log backup model, although the nightly differential backups wouldn't grow as dramatically (differential backups use special pages called DCM pages to determine which modified extents to include); it just so happens that in this case the customer was using a log backup model.
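For reference, the differential flavor of the weekly schedule would simply swap the nightly log backup for a differential database backup, something like the following sketch (placeholder names and paths again, not the customer's code):
-- Hypothetical differential alternative - placeholder names/paths.
-- Weekly (Sunday) base backup:
backup database SomeReportDb to disk = 'c:\backups\SomeReportDb_full.bak';
-- Nightly, after the load:
backup database SomeReportDb to disk = 'c:\backups\SomeReportDb_diff.bak'
    with differential;
The full repro script for the log-backup scenario follows.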
use master;
go
-- Create a test database...
if db_id('bulkLoad_logBackup_sample') > 0
    drop database bulkLoad_logBackup_sample;
-- create...
create database bulkLoad_logBackup_sample
    on primary (
        name = 'bulkLoad_logBackup_sample_primary', size = 25mb,
        filename = 'c:\temp\bulkLoad_logBackup_sample.mdf'),
    filegroup data1 default (
        name = 'bulkLoad_logBackup_sample_data1a', size = 1gb,
        filename = 'c:\temp\bulkLoad_logBackup_sample_data1a.ndf')
    log on (
        name = 'bulkLoad_logBackup_sample_log', size = 128mb,
        filename = 'c:\temp\bulkLoad_logBackup_sample.ldf');
go
-- Set to simple recovery temporarily...
alter database bulkLoad_logBackup_sample
set recovery simple;
go
use bulkLoad_logBackup_sample;
go
-- Create a sample table pre-loaded with a bunch of data that we'll use only
-- to export data out to create a sample bulk-load file...
-- This table will hold records with id values between roughly 200,000 and
-- 3,000,000...
if object_id('dbo.t2') > 0
    drop table dbo.t2;
-- Load...
select top 2500000 tmp.id, tmp.col1, tmp.filler
into dbo.t2
from (
    select top 3000000
         row_number() over(order by a.object_id) as id
        ,cast(newid() as char(36)) as col1
        ,cast(a.name as char(100)) as filler
    from sys.columns a
    cross join sys.columns b
    cross join sys.columns c
) tmp
where tmp.id between 200000 and 3500000;
go
-- bcp the data out to a file...this is what we'll use later to bulk import...
-- should result in about 350mb of data...
exec master.dbo.xp_cmdshell 'del /F /Q c:\temp\bulkData.bcp', no_output;
declare @s varchar(200);
select @s = 'bcp bulkLoad_logBackup_sample.dbo.t2 out c:\temp\bulkData.bcp -n -T -S ' +
    @@servername;
exec master.dbo.xp_cmdshell @s, no_output;
go
-- Remove the temp usage table...
if object_id('dbo.t2') > 0
    drop table dbo.t2;
go
-- Create the table that will be used for bulk loading...
if object_id('dbo.t1') > 0
    drop table dbo.t1;
-- Load data...
select top 100000
     row_number() over(order by a.object_id) as id
    ,cast(newid() as char(36)) as col1
    ,cast(a.name as char(100)) as filler
into dbo.t1
from sys.columns a
cross join sys.columns b
cross join sys.columns c
go
-- View the data in the t1 table currently...should be 100,000 records with
-- id's from 1 to 100,000...
select min(id), max(id), count(*) from dbo.t1;
go
-- Set the database to a full recovery mode...
alter database bulkLoad_logBackup_sample
set recovery full;
checkpoint;
waitfor delay '00:00:05.000';
go
-- Perform a full backup of the database to set the log in a non-auto-truncating
-- mode...
exec master.dbo.xp_cmdshell 'del /F /Q c:\temp\bulkLoad_logBackup_sample_fullBackup1.bak', no_output;
backup database bulkLoad_logBackup_sample to disk = 'c:\temp\bulkLoad_logBackup_sample_fullBackup1.bak';
go
-- On my system, the full backup is about 18mb in size total...
-- Let's run some simple logged statements to generate some log activity...
if object_id('dbo.t2') > 0
drop table dbo.t2;
-- create...
create table dbo.t2 (id int, col1 char(36), col2 char(200)) on data1;
go
-- insert some data...
insert dbo.t2 (id, col1, col2)
select top 100000
     row_number() over(order by a.object_id) as id
    ,cast(newid() as char(36)) as col1
    ,cast(a.name as char(100)) as col2
from sys.columns a
cross join sys.columns b;
go
-- update some data...
update dbo.t2
set col1 = cast(newid() as char(36));
go
-- delete some data...
delete dbo.t2
where (id between 100 and 500)
   or (id between 1100 and 1500)
   or (id between 2100 and 2500)
   or (id between 3100 and 3500)
   or (id between 4100 and 4500)
   or (id between 7000 and 9000);
go
-- Perform a log backup...
exec master.dbo.xp_cmdshell 'del /F /Q c:\temp\bulkLoad_logBackup_sample_logBackup1.trn', no_output;
backup log bulkLoad_logBackup_sample to disk = 'c:\temp\bulkLoad_logBackup_sample_logBackup1.trn';
go
-- On my system, the log backup is a little over 50mb in size...
-- At this point, the log should be auto truncated to basically empty (usage
-- space that is, not on-disk space)...
-- Take another log backup to show this...
exec master.dbo.xp_cmdshell 'del /F /Q c:\temp\bulkLoad_logBackup_sample_logBackup2.trn', no_output;
go
-- On my system, this log backup is about 100kb in size...
-- Ok, now let's mimic a bulk-load situation...
-- First, we'll mark the database as writable...
alter database bulkLoad_logBackup_sample set read_write;
go
-- Next, move from full recovery to bulk-logged recovery...
alter database bulkLoad_logBackup_sample set recovery bulk_logged;
go
-- Next, bulk-load a bunch of data into the existing t1 table...minimal logging
-- should occur here...
bulk insert bulkLoad_logBackup_sample.dbo.t1 from 'c:\temp\bulkData.bcp'
with (datafiletype = 'native', tablock);
go
-- Now we should have much more data in our t1 table...
select min(id), max(id), count(*) from dbo.t1;
go
-- Cluster the bulk-loaded table...this too will be minimally logged...
create unique clustered index ix_test on dbo.t1 (id) with sort_in_tempdb;
go
-- At this point, the log file for the "bulkLoad_logBackup_sample" database
-- should still not be any larger than the initial 128mb in size, despite
-- much more than that amount of data being modified (due to minimal logging
-- operations...).
-- Now, since we performed a bulk-logged operation in the bulk_logged recovery
-- model, we'd typically see the next log backup be much larger than the
-- existing usage size of the active transaction log (since a bulk-logged
-- operation causes minimally logged operations, which simply output space
-- allocations during the operation to the tlog, but flip extent bits in the
-- BCM pages for the affected extents in the operation - then during the next
-- log backup, any of these marked extents would have their data output to
-- the log backup, and the BCM page bits would be flipped back);
-- however, instead of taking a log backup at this point, let's instead set
-- our database back to being "read-only" prior to taking the log backup...
alter database bulkLoad_logBackup_sample set read_only;
go
-- Now, let's take a log backup and see how big it is...
exec master.dbo.xp_cmdshell 'del /F /Q c:\temp\bulkLoad_logBackup_sample_logBackup3.trn', no_output;
backup log bulkLoad_logBackup_sample to disk = 'c:\temp\bulkLoad_logBackup_sample_logBackup3.trn';
go
-- On my system, this log backup is over 775mb in size...
-- Let's take another log backup immediately and see what we get for a size...
exec master.dbo.xp_cmdshell 'del /F /Q c:\temp\bulkLoad_logBackup_sample_logBackup4.trn', no_output;
backup log bulkLoad_logBackup_sample to disk = 'c:\temp\bulkLoad_logBackup_sample_logBackup4.trn';
go
-- Notice that we get yet another large log backup that is over 775mb in
-- size...
-- This will continue to be the case until we perform a log backup with the
-- database in read_write mode to allow the BCM pages to be marked (or more
-- accurately, un-marked)...
-- Now, let's pretend we were going through yet another bulk-load cycle and see what happens...
-- First, set the db to read_write...
alter database bulkLoad_logBackup_sample
set read_write;
go
-- We're already in bulk_logged recovery, no need to set that...
go
-- Drop the cluster so we can bulk load in a non-ordered fashion...
drop index dbo.t1.ix_test;
go
-- Run a bulk insert again (use the same data)...
bulk insert bulkLoad_logBackup_sample.dbo.t1 from 'c:\temp\bulkData.bcp'
with (datafiletype = 'native', tablock);
go
-- We should have even more data in our t1 table...(min/max id's will be
-- the same as the last time though, since we just re-imported the same data)...
select min(id), max(id), count(*) from dbo.t1;
go
-- Cluster the bulk-loaded table...this too will be minimally logged...
-- this time we'll remove the unique attribute, since that won't work...
create clustered index ix_test on dbo.t1 (id) with sort_in_tempdb;
go
-- Again, the tlog for the "bulkLoad_logBackup_sample" database should be
-- no larger than the initial 128mb in size...
-- Set the database back to read-only...
alter database bulkLoad_logBackup_sample
set read_only;
go
-- Take another log backup...
exec master.dbo.xp_cmdshell 'del /F /Q c:\temp\bulkLoad_logBackup_sample_logBackup5.trn', no_output;
backup log bulkLoad_logBackup_sample to disk = 'c:\temp\bulkLoad_logBackup_sample_logBackup5.trn';
go
-- How large was this one? Much larger...1.5gb or so on my machine...not
-- only does it include the same 785mb or so from the prior bulk operation,
-- but it also includes all the data from the last round of bulk operations
-- (in this case, since we're dropping/creating an index every time, this
-- will include a lot of redundant data in each backup)...
-- If this pattern were to continue, the log backups would continue to get
-- larger and larger in a neverending pattern...
-- To see how it should act (i.e. the way it acted initially), simply set
-- the db back to read_write:
alter database bulkLoad_logBackup_sample set read_write;
-- Then take a log backup (this will be yet another 1.5gb-ish in size)...
exec master.dbo.xp_cmdshell 'del /F /Q c:\temp\bulkLoad_logBackup_sample_logBackup6.trn', no_output;
backup log bulkLoad_logBackup_sample to disk = 'c:\temp\bulkLoad_logBackup_sample_logBackup6.trn';
-- Now every log backup will be small again...even if you set the db back
-- to read_only first...
alter database bulkLoad_logBackup_sample set read_only;
-- This log backup will be small...100kb or so on my machine...
exec master.dbo.xp_cmdshell 'del /F /Q c:\temp\bulkLoad_logBackup_sample_logBackup7.trn', no_output;
backup log bulkLoad_logBackup_sample to disk = 'c:\temp\bulkLoad_logBackup_sample_logBackup7.trn';
-- Cleanup...
/*
use master;
exec master.dbo.xp_cmdshell 'del /F /Q c:\temp\bulkData.bcp', no_output;
exec master.dbo.xp_cmdshell 'del /F /Q c:\temp\bulkLoad_logBackup_sample_fullBackup1.bak', no_output;
exec master.dbo.xp_cmdshell 'del /F /Q c:\temp\bulkLoad_logBackup_sample_logBackup1.trn', no_output;
exec master.dbo.xp_cmdshell 'del /F /Q c:\temp\bulkLoad_logBackup_sample_logBackup2.trn', no_output;
exec master.dbo.xp_cmdshell 'del /F /Q c:\temp\bulkLoad_logBackup_sample_logBackup3.trn', no_output;
exec master.dbo.xp_cmdshell 'del /F /Q c:\temp\bulkLoad_logBackup_sample_logBackup4.trn', no_output;
exec master.dbo.xp_cmdshell 'del /F /Q c:\temp\bulkLoad_logBackup_sample_logBackup5.trn', no_output;
exec master.dbo.xp_cmdshell 'del /F /Q c:\temp\bulkLoad_logBackup_sample_logBackup6.trn', no_output;
exec master.dbo.xp_cmdshell 'del /F /Q c:\temp\bulkLoad_logBackup_sample_logBackup7.trn', no_output;
drop database bulkLoad_logBackup_sample;
go
--*/