Minimize performance impact of SQL Server DBCC CHECKDB

By: Robert Pearl   |   Updated: 2011-06-02   |   Related: Database Console Commands (DBCC)


Problem

Every company wants to ensure the integrity of its database systems. If you are not running a regular consistency check against your databases, they could be corrupt without your knowledge.  Therefore, it is important to run DBCC CHECKDB as part of your daily, weekly, or monthly database maintenance routine.  In fact, in my previous article on this topic, I showed how to set up a process to Capture and Store SQL Server Database Integrity History using DBCC CHECKDB.  However, there is a cost to running DBCC CHECKDB as well. Clearly the pros outweigh the cons, but there are considerations.  What is the performance overhead of running DBCC CHECKDB on your production systems, given that it is unavoidably an IO-intensive operation?  I need to run DBCC CHECKDB without any performance implications.  Do you have any suggestions?  Read this tip to learn more.

Solution

The purpose of this tip is to educate the community on the following:

  • How DBCC CHECKDB works
  • The potential performance impact
  • Considerations when running the command
  • Possible workarounds and solutions to
    • Ensure there is no database corruption
    • Minimize the impact of executing DBCC CHECKDB in a 24x7 production environment

DBCC CHECKDB Overview

In general, DBCC CHECKDB should be executed while the database is ONLINE, but it is recommended that there is minimal activity on the SQL Server during this operation. By default it does not acquire table locks. As of SQL Server 2005, the checks run against an internal database snapshot instead, so concurrent data and metadata changes on the source database are not blocked; locks only come into play if you force them with the WITH TABLOCK option, discussed below.

The current version of DBCC CHECKDB performs CHECKALLOC, CHECKTABLE, and CHECKCATALOG within the one command, rather than requiring you to run them separately. It is a very thorough database consistency check: it validates the critical system tables first (everything else depends on them), checks page allocation on disk, verifies that the page structures are properly linked, and performs logical checks of every table and index. Here is a simple sample command:

DBCC CHECKDB (SQLCentric);
GO

Assuming the command above completes successfully and you see a message similar to "CHECKDB found 0 allocation errors and 0 consistency errors in database 'YourDB'", you can be confident that, as of the time of the check, the database was consistent, free of corruption, and properly stored on your disk.
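If you want to confirm when CHECKDB last completed cleanly against a database, the last-known-good date is recorded on the database boot page. The undocumented (but widely used) DBCC DBINFO command will display it; this is a sketch, and because the command is undocumented its output can vary between versions:

-- Undocumented command; look for the dbi_dbccLastKnownGood field in the output
DBCC DBINFO ('SQLCentric') WITH TABLERESULTS;
GO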

Execute DBCC CHECKDB on a Database Snapshot

As of SQL Server 2005, all the DBCC validation commands use database snapshots. Therefore, this is considered an ONLINE operation, and it keeps the validation from interfering with ongoing database activity. Using a snapshot allows the validation operation to see a transactionally consistent view of the data. The snapshot is created at the beginning of the CHECK command, and no locks are acquired on any of the objects being checked; the actual check is performed against the snapshot. Unlike regular database snapshots, the internal snapshot that DBCC CHECKDB creates cannot be configured and is invisible to the end user. This in itself does not minimize IO, but the advantage is that it avoids any locking contention on the production database. The caveat is that the snapshot always uses space on the same disk volume as the database being checked, so you must ensure there is adequate free space to perform the operation.
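When scheduling the online check, many DBAs also suppress the informational messages and ask for every error to be reported, so the job output contains only what matters. Both are standard, documented options; a minimal sketch:

DBCC CHECKDB (SQLCentric) WITH NO_INFOMSGS, ALL_ERRORMSGS;
GO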

You can avoid creating a snapshot, and save the disk space it would consume, by using the WITH TABLOCK option of the DBCC command. This should essentially only be done OFFLINE: the WITH TABLOCK option forces DBCC to try to grab an exclusive database lock, which will prevent users from accessing the database if you attempt this command during production hours.

DBCC CHECKDB (SQLCentric) WITH TABLOCK; -- This option does NOT use a snapshot
GO

You can see from the results of using WITH TABLOCK that, without a snapshot, certain consistency checks cannot be performed, as per the message below:

"DBCC CHECKDB will not check SQL Server catalog or Service Broker consistency because a database snapshot could not be created or because WITH TABLOCK was specified." Source - http://support.microsoft.com/kb/928518.

Also, keep in mind that during regular maintenance DBCC CHECKDB should NOT be run with any of the repair options, because the database must be put into single-user mode so that no other transactions can alter data. In this case too, a snapshot is not created.

As a point of reference, here are the Repair options available with DBCC CHECKDB:

  • REPAIR_ALLOW_DATA_LOSS
    • Tries to repair all reported errors. These repairs can cause some data loss.
  • REPAIR_FAST
    • Maintains syntax for backward compatibility only. No repair actions are performed.
  • REPAIR_REBUILD
    • Performs repairs that have no possibility of data loss. This can include quick repairs, such as repairing missing rows in non-clustered indexes, and more time-consuming repairs, such as rebuilding an index.

For example, if you wish to use DBCC Repair, you must first put the database in single user mode as shown below:

ALTER DATABASE SQLCentric
SET single_user WITH ROLLBACK IMMEDIATE;
GO
DBCC CHECKDB (SQLCentric, repair_allow_data_loss);
GO
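Once the repair completes, remember to bring the database back online for general use; returning it to multi-user mode is an easy step to overlook:

ALTER DATABASE SQLCentric
SET multi_user;
GO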

TempDB and IO Intensity

Just by the nature of the command, DBCC CHECKDB is an IO-intensive operation. Above, we learned that by using an internal snapshot we can avoid potential locking and blocking.  However, another concern is the IO caused by spooling to the TempDB database. Because the DBCC CHECKDB process makes very heavy use of TempDB, Microsoft suggests storing TempDB on its own separate set of spindles. With properly configured disks, throughput will be much better. 
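Before running the check, you can get a feel for how much TempDB space it will need. DBCC CHECKDB supports a documented ESTIMATEONLY option that reports the estimated TempDB space required without performing the check itself (treat the number as a rough guide, as the estimate is not always accurate):

DBCC CHECKDB (SQLCentric) WITH ESTIMATEONLY;
GO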

Duration and Performance Impact of DBCC CHECKDB ONLINE

If the decision is to run DBCC CHECKDB online, then in addition to the above strategies I would recommend recording and measuring the performance impact and duration. First, test the duration of DBCC CHECKDB against a restored copy of the database(s), so you can record a baseline duration.  Next, schedule the process to run in a maintenance window where there is minimal user activity.  You may already have statistics on peak vs. non-peak hours, and can schedule the check to run at the lowest level of activity.  Based on this information you should have a good sense of the time needed.
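A simple way to capture the duration is to bracket the command with timestamps and log or print the elapsed time; a minimal sketch:

DECLARE @start datetime, @end datetime;
SET @start = GETDATE();
DBCC CHECKDB (SQLCentric) WITH NO_INFOMSGS;
SET @end = GETDATE();
PRINT 'DBCC CHECKDB duration (seconds): '
    + CAST(DATEDIFF(second, @start, @end) AS varchar(10));
GO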

DBCC CHECKDB WITH PHYSICAL_ONLY

One alternative to minimize contention is to use the WITH PHYSICAL_ONLY option of DBCC CHECKDB. This option limits the processing to checking the integrity of the physical structure of the page and record headers, along with the consistency between the pages and the allocation structures (data and indexes).  It is specifically designed to provide a low-overhead check of the physical consistency of the database, and it also detects torn pages and common hardware failures that can compromise a user's data. Perhaps you would run this more frequently and then schedule a full DBCC CHECKDB during periods of low activity (a sketch of this rotation follows the sample command below). The full run of DBCC CHECKDB will take considerably longer to complete, due to the more comprehensive logical structure checks. 

Here is the sample command:

DBCC CHECKDB (SQLCentric) WITH PHYSICAL_ONLY;
GO
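One way to implement the rotation described above is to branch on the day of the week, running the cheap physical check daily and the full check in your quietest window. A minimal sketch; the choice of Sunday is an assumption, so substitute your own low-activity day (and note that DATENAME(weekday, ...) is language-dependent):

IF DATENAME(weekday, GETDATE()) = 'Sunday'
    DBCC CHECKDB (SQLCentric) WITH NO_INFOMSGS;    -- weekly full check
ELSE
    DBCC CHECKDB (SQLCentric) WITH PHYSICAL_ONLY;  -- quick daily check
GO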

Backup, Restore and Run DBCC CHECKDB

Another alternative is to run DBCC CHECKDB on another SQL Server.  You can set up a process where you restore the database to another server and run DBCC CHECKDB against it.  Since a backup is a bit-by-bit copy of the database, the restored database will be in exactly the same state as your online database: if the database is inconsistent or there are storage or allocation problems, the backup preserves these too, so any corruption should surface on the restored copy.  This completely eliminates the IO issue on the production server.  However, the backup, copy, and restore process can be quite lengthy, and it also requires sufficient disk space on the second server to accommodate the database.
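The offload process is straightforward to script on the secondary server. A minimal sketch, in which the restored database name, file paths, and logical file names are assumptions - verify the logical names with RESTORE FILELISTONLY first:

RESTORE DATABASE SQLCentric_Check
FROM DISK = 'd:\sqldbbackups\sqlcentric.bak'
WITH MOVE 'SQLCentric' TO 'e:\sqldata\SQLCentric_Check.mdf',         -- logical data file name assumed
     MOVE 'SQLCentric_log' TO 'e:\sqldata\SQLCentric_Check_log.ldf', -- logical log file name assumed
     REPLACE;
GO
DBCC CHECKDB (SQLCentric_Check) WITH NO_INFOMSGS;
GO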

Backup WITH CHECKSUM

The BACKUP WITH CHECKSUM option was introduced with SQL Server 2005 and offers some protection that occurs along with your regular backup routine. Although the WITH CHECKSUM option causes all existing page checksums on allocated pages to be verified, some database corruption may still go undetected. Even after the database's page verification option is set to CHECKSUM, not all pages in the database will carry a page checksum, because a checksum is only written the next time a page is modified and written to disk. Furthermore, the I/O subsystem does not distinguish between pages with and without page checksums when it causes corruption. Therefore, it is NOT a replacement for DBCC CHECKDB; however, this option will give you some insight into the integrity of your database backup.

In addition, backing up with the checksum option does not perform the same set of verifications as CHECKDB.  However, reading and checking all the page checksums can identify some corruption: the checksums are verified as the backup runs, and the backup will fail if any page checksum is wrong, which is an indication of corruption. With this option the backup should not take significantly longer, but it does not eliminate the need for regular DBCC CHECKDB routines. 

Below is a simple example of backing up the database with the checksum option:

BACKUP DATABASE SQLCentric TO DISK = 'd:\sqldbbackups\sqlcentric.bak' WITH CHECKSUM;
GO
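Because a backup taken WITH CHECKSUM also carries a checksum over the backup stream itself, you can later validate the backup file without restoring it. A minimal sketch using RESTORE VERIFYONLY; note this verifies the backup media and page checksums, not the full logical consistency that DBCC CHECKDB checks:

RESTORE VERIFYONLY
FROM DISK = 'd:\sqldbbackups\sqlcentric.bak'
WITH CHECKSUM;
GO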

DBCC CHECKDB and Database Mirroring

If you have database mirroring set up, you can create a database snapshot explicitly from the mirror and run DBCC CHECKDB on that version of the database. Be aware that the I/O load from this could make the REDO queue on the mirror larger and cause a failover to take longer, which could put your SLA at risk.  In addition, Microsoft may require a license for this SQL Server, even though you are not using the mirror for any other data processing.
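To run the check this way, you create the snapshot explicitly on the mirror server and point DBCC CHECKDB at it. A minimal sketch; the snapshot name and file path are assumptions, and the logical file name must match your database's actual data file (check on the principal):

-- Run on the mirror server
CREATE DATABASE SQLCentric_Snap ON
    (NAME = SQLCentric,  -- logical data file name assumed
     FILENAME = 'e:\snapshots\SQLCentric_Snap.ss')
AS SNAPSHOT OF SQLCentric;
GO
DBCC CHECKDB (SQLCentric_Snap) WITH NO_INFOMSGS;
GO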

Furthermore, the mirror would have to actually *hit* the corrupt page as part of REDO for automatic page repair to occur - otherwise you would never know there is corruption until a mirroring failover occurs and a CHECKDB runs on the new principal database.  Note too that a clean CHECKDB on the mirror implies nothing about the integrity of the principal, since the two copies sit on separate I/O subsystems.  For more information, check out this article - "Mirror, Mirror".

With respect to corruption, one great feature introduced in SQL Server 2008 is the ability to automatically repair corrupt pages during database mirroring.  See this MSDN article on Automatic Page Repair During a Database Mirroring Session.  Automatic page repair is an asynchronous process that runs in the background.

You can actually view stored history on suspect pages by querying the 'suspect_pages' table, introduced in SQL Server 2005:

SELECT * 
FROM msdb..suspect_pages;
GO

You can look specifically for bad pages, such as bad-checksum and torn-page errors, by filtering on event type:

-- event_type 1 = 823/824 errors other than a bad checksum or torn page,
-- 2 = bad checksum, 3 = torn page
SELECT * 
FROM msdb..suspect_pages
WHERE (event_type = 1 OR event_type = 2 OR event_type = 3);
GO

Third Party Tools

A final option is to use a third-party tool that can "restore" a database from your backup and run DBCC checks against it.  You can do so without ever actually allocating disk space for the restored copy, because it is a virtual database.  Using one of these tools effectively off-loads DBCC CHECKDB from the production system.  In general, the advantages are that you reduce the storage overhead (of having to run DBCC against a large restored database), eliminate the load on the production system, and confirm that you have a consistent backup. 

The reality is that although we seek ways to minimize the performance overhead when running DBCC CHECKDB, there is NO way to run consistency checks on a database without IO impact. Also be aware that running CHECKDB, even on the production database, does not give you an absolute guarantee that there is no corruption; it just says that at the time the individual pages were read and processed, they had no issues. As soon as a page has been read, the IO subsystem could corrupt it. Therefore, regular consistency checks are essential.  Hopefully, this tip will help you think about the importance of running DBCC CHECKDB, and provide some ways to minimize the performance impact on your database systems.

Next Steps





About the author
Robert Pearl is a SQL MVP and President of Pearl Knowledge Solutions, Inc., offering SQLCentric monitoring and DBA services.




Article Last Updated: 2011-06-02

Comments For This Article




Thursday, September 25, 2014 - 12:44:18 PM - Allen Back To Top (34718)

Robert, 


Would you please respond to a comment posted by Paul Randal? 

Do you agree with what Paul has called "Factual Errors"?


Thanks....


Saturday, February 8, 2014 - 10:32:07 AM - Paul Randal Back To Top (29373)

There are a few factual errors in this post - for example, CHECKDB does *not* take schema locks on tables, and it checks system tables as the very first thing it does, not the last. You also should point out that running CHECKDB on a mirror does not imply anything about the integrity of the principal (you don't say it does, but many people assume it does). Thanks


Friday, August 2, 2013 - 3:42:22 AM - SQLDBA Back To Top (26094)

Hi, there is one database where CHECKDB is performed after 10 minutes.

CHECKDB for database 'PrinterMGR' finished without errors on 2013-06-27 12:35:15.937 (local time). This is an informational message only; no user action is required.

Can you tell me, what is the reason for it?

 

Regards,

SQLDBA

 


Thursday, January 17, 2013 - 6:34:47 AM - Jack Whittaker Back To Top (21508)

A very helpful article - thanks!  May I ask a supplementary question about the Physical_Only option?  If I do this, it runs quicker and reduces the impact on the server.  But what does it miss out?  Can I still be confident that "the database is consistent, has no corruption, and properly stored on your disk"?


Sunday, June 5, 2011 - 8:51:45 AM - Hank Freeman Back To Top (13977)

I really am glad to see this post, for it is like a view into the Magic Box of SQL Server. What an excellent post, and one to keep.
I personally really like the DBCC UPDATEUSAGE command, as shown in the script below, which I use to watch production database and log file growth. It is a simple but very useful script which I have shared with others, so I decided to post it here for all who visit this post.

Enjoy !!

Hank Freeman


  /***  This is the script to obtain table row and size counts and output them to two results
      File name: SCRIPT_Stable_Size_Info(2000).sql
      SS-2000 cannot use @Tables ... not yet implemented
      ***/
  --use msdb
  go
      DBCC UPDATEUSAGE(0)
      go
      /***  This is the script to obtain table row and size counts and output them to two results
      File name: SCRIPT_Stable_Size_Info.sql
      ***/
      SET NOCOUNT ON
      Declare
  @TableName            sysname
, @var_SERVER_NAME      VARCHAR(150)
, @var_DBName           nvarchar(150)
, @var_Schema           nvarchar(120)
, @var_Table_Name       nvarchar(250)
, @var_create_date      datetime
, @var_modify_date      datetime
SET @var_SERVER_NAME    = (SELECT   CONVERT(VARchar(25), SERVERPROPERTY('servername')))
set @var_DBName = db_Name()
IF NOT EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[#hank_temp]') AND type in (N'U'))
BEGIN
CREATE TABLE [dbo].[#hank_temp] (
[id]              int IDENTITY(1, 1) NOT NULL,
[Server_Name]     varchar(150) NULL,
[DatabaseName]    nvarchar(150) NULL,
[SchemaName]      varchar(120) NULL,
[TableName]       VARCHAR(250) NULL,
[Create_Date]     datetime,
[Modify_Date]     datetime,
[NoOfRows]        bigint NULL,
[reserved_kb]     varchar(20) null,
[data_kb]         varchar(20) null,
[index_size_kb]   varchar(20) null,
[unused_kb]       varchar(20) null,
[TimeStamp]       datetime NULL,
[TimeStamp_Numb_Str] varchar(29) NULL)
ON [PRIMARY];
END;
  /**
      DBCC UPDATEUSAGE(0)  --Please reference this page for explainatin of negative values for your need to run this command first
      http://sqlserver2000.databases.aspfaq.com/why-does-sp-spaceused-return-inaccurate-values.html
 
  http://stackoverflow.com/questions/917431/sql-server-return-schema-for-sysobjects
  this site has the info on using this on SS-2000 s
      **/
      declare c cursor for
SELECT 
    f2.name AS schema_name ,f1.name,
    f1.CREATE_DATE,
    f1.MODIFY_DATE
    FROM    sys.objects  f1
    INNER JOIN sys.schemas f2 ON f1.schema_id = f2.schema_id
    WHERE f2.name != 'sys'
    AND TYPE_DESC = 'USER_TABLE'
    AND f1.is_ms_shipped <> 1
open c
      fetch Next from c into @var_Schema, @var_Table_Name,@var_create_date, @var_Modify_date
      while @@Fetch_Status = 0
      begin
    SET @TableName = 'use [' + @var_DBName + ']; exec sp_spaceused  ''' + @var_Schema + '.' + @var_Table_Name + ''''
    --SET @TableName = 'use [' + @var_DBName + ']; exec sp_spaceused  '''  + @var_Table_Name + ''''
      Insert #hank_temp(Tablename,NoOfRows,reserved_kb,data_kb,index_size_kb,unused_kb) Exec (@TableName);
   update #hank_temp 
   set [SchemaName]   = @var_Schema
      ,[Server_Name]  = @var_SERVER_NAME
      ,[DatabaseName] = @var_DBName
      ,[Create_Date]  = @var_create_date
      ,[Modify_Date]  = @var_modify_date
   where [Tablename]  = @var_Table_Name and isnull([SchemaName],'') = '' ;
          fetch Next from c into @var_Schema, @var_Table_Name,@var_create_date, @var_Modify_date
        end
      close c
      deallocate c
  /*****/
      -- select * , getdate() as TimeStamp from #hank_temp Order By NoOfRows DESC ,Tablename ASC
      --SUBSTRING ( value_expression ,start_expression , length_expression )
  /*****/
      select id, SERVER_NAME AS 'Server_Name',DatabaseName as 'DatabaseName'
  ,SchemaName
  ,TableName, NoOfRows,
      --CAST ( expression AS data_type [ (length ) ])
      cast(substring(reserved_kb,1,(len(reserved_kb)-3)) as bigint) as 'reserved_kb',
      cast(substring(data_kb,1,(len(data_kb)-3)) as bigint) as 'data_kb',
      cast(substring(index_size_kb,1,(len(index_size_kb)-3)) as bigint) as 'index_size_kb',
      cast(substring(unused_kb,1,(len(unused_kb)-3)) as bigint) as 'unused_kb',
  [create_date],
  [modify_date],
      getdate() as 'Now_TimeStamp',
      cast(datepart(yyyy,getdate()) as varchar(4))
      +
      case len(cast(datepart(mm,getdate()) as varchar(4)))
        when '1' then '0' + cast(datepart(mm,getdate()) as varchar(4))
        else cast(datepart(mm,getdate()) as varchar(4))
      end
      +
      case len(cast(datepart(dd,getdate()) as varchar(4)) )
        when '1' then '0' + cast(datepart(dd,getdate()) as varchar(4)) 
        else cast(datepart(dd,getdate()) as varchar(4))
        end
      +
      case len(cast(datepart(hh,getdate()) as varchar(4)) )
        when '1' then '0' + cast(datepart(hh,getdate()) as varchar(4)) 
        else cast(datepart(hh,getdate()) as varchar(4))
        end
      +
      case len(cast(datepart(mi,getdate()) as varchar(4)) )
        when '1' then '0' + cast(datepart(mi,getdate()) as varchar(4)) 
        else cast(datepart(mi,getdate()) as varchar(4))
        end
      +
      case len(cast(datepart(ss,getdate()) as varchar(4)) )
        when '1' then '0' + cast(datepart(ss,getdate()) as varchar(4)) 
        else cast(datepart(ss,getdate()) as varchar(4))
        end as TimeStamp_Numb_Str
      from #hank_temp Order By NoOfRows DESC ,Tablename ASC
  end_of_it:
drop table #hank_temp
SET NOCOUNT OFF
/***** end ****/


--Second Script Below
/***********************************************************************************
Script File: Ref_Cur_File_Size_Base_SS-2000_WIP.sql
Author Hank Freeman
Date Created 31-Aug-2009
*************************************************************************************
Modified Date: Modified by: Modification:
31-Aug-2009 H_freeman Created
30-Sep-2009 H_freeman Updated Added PCT fields
20-Jan-2010 H_freeman Fix Group calculations pct
08-FeB-2010 H_freeman Fix the @version to properly figure out the SS Product Version
22-Mar-2010 h_freeman Added Servername and System_DTE (Because all System dates may not be Calendar date)
*****************************************************
Validated by H_freeman
*****************************************************
*/
begin
DECLARE @err int
declare @SQL_String1 nvarchar(2000)
declare @SQL_String2 nvarchar(2000)
declare @hf_var1 varchar(250)
declare @version varchar(50)
declare @position int
set @version = (select @@version)
set @position = (select charindex('200',@version))
--select @position
set @version = rtrim(substring(@@VERSION,@position,4))
select @version as '@version'
--xp_msver productversion
--SELECT SERVERPROPERTY('productversion'), SERVERPROPERTY ('productlevel'), SERVERPROPERTY ('edition')
end
/** Script how to drop a Temp Table ***/
--if exists(select 1 from tempdb..sysobjects so with (nolock) where [name] like '#file_size%')
if exists(select 1 from tempdb..sysobjects so with (nolock) where [name] like '#file_size%')
begin
drop table #file_size
end
create table [#file_size]
(
[database_name] [sysname] NULL,
[groupid] [smallint] NULL,
[groupname] sysname NULL,
[fileid] [smallint] NULL,
[file_size] [decimal](12, 2) NULL,
[space_used] [decimal](12, 2) NULL,
[free_space] [decimal](12, 2) NULL,
[pct_used] [decimal] (12, 2) NULL,
[pct_free_space] [decimal] (12, 2) NULL,
[name] [sysname] NOT NULL,
[filename] [nvarchar](260) NOT NULL
);
--
set @SQL_String2 = ' insert into [#file_size]
SELECT
DB_NAME() as ''Use_Name''
,sysfilegroups.groupid
,sysfilegroups.groupname
,fileid
,convert(decimal(12,2),round(sysfiles.size/128.000,2)) as file_size
,convert(decimal(12,2),round(fileproperty(sysfiles.name,''SpaceUsed'')/128.000,2)) as space_used
, convert(decimal(12,2),round((sysfiles.size-fileproperty(sysfiles.name,''SpaceUsed''))/128.000,2)) as free_space
,(convert(decimal(12,2),round(fileproperty(sysfiles.name,''SpaceUsed'')/128.000,2))/ convert(decimal(12,2),round(sysfiles.size/128.000,2))) as pct_used
,(convert(decimal(12,2),round((sysfiles.size-fileproperty(sysfiles.name,''SpaceUsed''))/128.000,2)) / convert(decimal(12,2),round(sysfiles.size/128.000,2)) ) as pct_free_space
,sysfiles.name
,sysfiles.filename
FROM dbo.sysfiles sysfiles
LEFT OUTER JOIN dbo.sysfilegroups sysfilegroups
ON sysfiles.groupid = sysfilegroups.groupid;'
/*****/
begin
if @version = '2000'
begin --- This is sql server 2000
Print 'SQL Server 2000 system'
declare hf_cursor Cursor for
(
SELECT name from master.dbo.sysdatabases
-- SELECT * from master.dbo.sysdatabases
)
end
else
-- SS 2005 OR 2008 AND ABOVE
begin
Print 'SQL Server 2005 / 2008 system'
declare hf_cursor Cursor for
(
SELECT name FROM sys.databases
--select * from sys.databases
WHERE STATE = 0
-- STATUS 6 = off-line
-- NON SS2000 systems
)
end
Open_Cursor:
Open hf_cursor
fetch next from hf_cursor into @hf_var1
while (@@fetch_Status <> -1)
if ((@@FETCH_STATUS) <> -2)
Begin
if ((@@FETCH_STATUS) <> -2)
Set @SQL_String1 = 'USE [' + @hf_var1 + ']'
set @SQL_String1 = @SQL_String1 + @SQL_String2
EXEC sp_executesql @SQL_String1
/********/
fetch next from hf_cursor into @hf_var1
/********/
end
--File Sizes
--select * from [#file_size]
begin --File Sizes
SELECT
@@SERVERNAME as 'ServerName'
, database_name, groupid
, ISNULL(groupname,'TLOG') groupname
, fileid, Rtrim(name) 'Name'
, file_size as 'FileSize(mb)', space_used as 'SpaceUsed(mb)'
, free_space as 'FreeSpace(mb)'
, (pct_used * 100) as 'PCT_Used'
, (pct_free_space *100) as 'PCT_Free'
, Rtrim(filename) as 'Disk Location and Filename'
, getdate() as 'System_DTE'
FROM [#file_size]
order by database_name asc, groupname asc
end
--File Group Sizes
begin
SELECT
@@SERVERNAME as 'ServerName'
,database_name, groupid
, ISNULL(groupname,'TLOG') groupname
, SUM(file_size) as 'Grp_Filesize(mb)'
, Rtrim(name) 'Name'
, SUM(space_used) as 'Grp_SpaceUsed(mb)'
, SUM(free_space) as 'Grp_FreeSpace(mb)'
, (pct_used * 100) as 'PCT_Used'
, (pct_free_space *100) as 'PCT_Free'
, Rtrim(filename) as 'Disk Location and Filename'
, getdate() as 'System_DTE'
FROM [#file_size]
GROUP BY database_name, groupid, groupname, filename, name, pct_used,pct_free_space
order by database_name asc, groupname asc
end
close hf_cursor
deallocate hf_Cursor
-- drop table [#file_size]
end


Friday, June 3, 2011 - 4:56:30 PM - Kyle Back To Top (13975)

Robert,

 

Thanks for the great article. Since running CheckDB against a restored backup provides the same results, does that mean running it against a log shipped database would also provide the same results as its primary? I could swear I read that somewhere before but can't find any supporting documentation now. Thanks,

 

Kyle


Friday, June 3, 2011 - 9:18:05 AM - Robert Pearl Back To Top (13973)

Thanks for the comments, all.

Dave,

You don't have to, but running the backup with checksum gives you a few things.
The backup checks all the page checksums as it backs the database up, and will fail if it encounters a page with an invalid checksum - hence it won't back up a corrupt database.
The backup process also writes a checksum over the backup file, so if anything damages any portion of the backup file, it will be detected if you do a RESTORE VERIFYONLY (without the backup checksum, RESTORE VERIFYONLY just checks headers).

Also, how reliable is Page Verify?  Check this link out that may shed some light on your question:

http://www.yohz.com/help/sqlverify/howreliableisrestoreverify.htm

Database checksum and backup checksum are different things.  HTH

Hank, I have a feeling you're using IE 9 to post comments - there are some known problems with this on MSSQLTips.  Use a different browser version to post comments.  Thx

 


Thursday, June 2, 2011 - 4:47:44 PM - Dave Wanta Back To Top (13969)

Thanks for some great information.

Can you shed some light on the difference between what you describe along the lines of running checkDB against a backup versus running a restore/verify?

My understanding is that they don't check the integrity of the same things. 

Also, what is the benefit of running a backup with CheckSum if you have the DB option of PageVerify set to checksum?

Have a great day,
Dave

 

 

 


Thursday, June 2, 2011 - 1:13:52 PM - Anu Back To Top (13968)

Good Article


Thursday, June 2, 2011 - 10:30:29 AM - Moinu Back To Top (13966)

Thanks Robert, this is a very good article for reference. Simple to follow.


Thursday, June 2, 2011 - 9:06:52 AM - Jay Bergen Back To Top (13959)

Thanks for this great article! 

I was actually just considering this for one of my prod servers, and the question came up about the impact of DBCC Checkdb on the databases.  Great timing.  I also may look into some of those tools you reference.














