Identify and Correct SQL Server Forwarded Records

By: Brady Upton   |   Updated: 2013-06-25   |   Comments (4)   |   Related: Database Design


Problem

Forwarded records in SQL Server can cause performance issues on heap tables: once a record outgrows the page it lives on, the database engine moves it and uses a pointer to reference the relocated data. For those who don't know what a heap is, it is a table without a clustered index. The best practice is to have a clustered index on every table; however, there are cases when a clustered index is not needed. In those cases, DBAs should be aware of all heap tables and should watch for forwarded records causing poor performance. In this tip, we'll discuss forwarded records and how to fix them.

Solution

Forwarded records are records in a SQL Server table that have grown too large for the page they currently reside on. They can only occur in heaps, because tables with a clustered index keep the data sorted based on the clustered index key. With a heap, once a record outgrows its page, the record is moved to a new page and a forwarding pointer is left in the original location to point to the new location.
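Before hunting for forwarded records, it helps to know which tables in a database are heaps in the first place. Here is a minimal sketch (not from the original tip; it relies on the fact that a heap is simply a table whose row in sys.indexes has index_id = 0):

SELECT s.name AS SchemaName,
       t.name AS TableName
FROM sys.tables AS t
INNER JOIN sys.schemas AS s
    ON t.schema_id = s.schema_id
INNER JOIN sys.indexes AS i
    ON t.object_id = i.object_id
WHERE i.index_id = 0  -- index_id 0 means the table is a heap
ORDER BY s.name, t.name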

Let's look at an example of a forwarded record.

First, let's create a table and insert some records:

CREATE TABLE [dbo].[Demo](
 [ID] [int] IDENTITY(1,1),
 [Server] [nvarchar](50) NULL,
 [DatabaseName] [nvarchar](100) NULL,
 [timestamp] [datetime] DEFAULT GETDATE())
GO
CREATE INDEX idx_Server ON Demo(Server)
GO
INSERT INTO Demo (Server, DatabaseName) VALUES ('Server1', 'DB1')
INSERT INTO Demo (Server, DatabaseName) VALUES ('Server2', 'DB2')
INSERT INTO Demo (Server, DatabaseName) VALUES ('Server3', 'DB3')
-- GO 100 repeats the previous batch 100 times, inserting 300 rows in total
GO 100


Next, let's use the following DMV query to check our table:

SELECT
    OBJECT_NAME(ps.object_id) as TableName,
    i.name as IndexName,
    ps.index_type_desc,
    ps.page_count,
    ps.avg_fragmentation_in_percent,
    ps.forwarded_record_count
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ps
INNER JOIN sys.indexes AS i
    ON ps.OBJECT_ID = i.OBJECT_ID  
    AND ps.index_id = i.index_id
WHERE OBJECT_NAME(ps.object_id) = 'Demo'


Notice that the HEAP index type has 0 forwarded records.
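One note on the scan mode: the query above uses 'DETAILED', which reads every page of the table. On large tables a lighter-weight option is 'SAMPLED', which still populates forwarded_record_count, whereas the default 'LIMITED' mode returns NULL for that column. A sketch of the lighter variant, scoped to just the Demo table:

SELECT
    OBJECT_NAME(ps.object_id) as TableName,
    ps.index_type_desc,
    ps.page_count,
    ps.forwarded_record_count
FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('dbo.Demo'), NULL, NULL, 'SAMPLED') AS ps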

Next we will change the table definition by modifying the DatabaseName column.

ALTER TABLE Demo ALTER COLUMN [DatabaseName] nvarchar(MAX)
        

If we run the DMV query again, you will see that the forwarded_record_count has changed, along with a lot of fragmentation.


The page count on the HEAP increased to 3, indicating that a new page was allocated, and 50 forwarded records were created.

In a real-world situation you will probably not know exactly when this happens, so by changing the WHERE clause in the DMV query you can find all of the forwarded records in a database. I would monitor heap tables at least monthly, and if you notice high I/O on a heap, that should be a hint that something needs to be looked at and possibly modified.

SELECT
    OBJECT_NAME(ps.object_id) as TableName,
    i.name as IndexName,
    ps.index_type_desc,
    ps.page_count,
    ps.avg_fragmentation_in_percent,
    ps.forwarded_record_count
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ps
INNER JOIN sys.indexes AS i
    ON ps.OBJECT_ID = i.OBJECT_ID  
    AND ps.index_id = i.index_id
WHERE ps.forwarded_record_count > 0

On my Test DB, you will notice I have two HEAP tables with forwarded records, including the one I just created.
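If you want to run this check across every database on the instance rather than one database at a time, one option (not from the original tip) is to wrap the query in the undocumented sp_MSforeachdb procedure; keep in mind the 'DETAILED' scan can be expensive on large databases:

EXEC sp_MSforeachdb N'
USE [?];
SELECT DB_NAME() AS DatabaseName,
       OBJECT_NAME(ps.object_id) AS TableName,
       ps.index_type_desc,
       ps.forwarded_record_count
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, ''DETAILED'') AS ps
WHERE ps.forwarded_record_count > 0;'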


As discussed earlier, forwarded records can cause performance issues because the record locator points to the original page, so the database engine has to go to that page, read the forwarding pointer, and then go to the new page, which adds extra I/O.
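If you want to see how often those extra hops are actually happening at run time, sys.dm_db_index_operational_stats exposes a forwarded_fetch_count column that counts rows fetched by following a forwarding pointer. A minimal sketch (again, not part of the original tip):

SELECT
    OBJECT_NAME(ios.object_id) AS TableName,
    ios.forwarded_fetch_count
FROM sys.dm_db_index_operational_stats (DB_ID(), NULL, NULL, NULL) AS ios
WHERE ios.forwarded_fetch_count > 0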

There are two quick ways to fix forwarded records.

  1. Add a clustered index so the table is no longer a heap (a sketch is shown after the REBUILD statement below).
  2. If adding a clustered index is not an option, use ALTER TABLE tablename REBUILD (available in SQL Server 2008 and later) to rebuild the table, which also removes the fragmentation:

ALTER TABLE Demo REBUILD
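For option 1, a minimal sketch of adding a clustered index to the Demo table used in this tip (clustering on the ID identity column is just an illustration; choose a key that fits your workload):

CREATE CLUSTERED INDEX IX_Demo_ID ON dbo.Demo(ID)

Either approach removes the forwarding pointers; rerun the DMV query afterwards and forwarded_record_count should be back to 0.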

Next Steps





About the author
Brady Upton is a Database Administrator and SharePoint superstar in Nashville, TN.

This author pledges the content of this article is based on professional experience and not AI generated.




Comments For This Article




Wednesday, August 2, 2023 - 10:38:43 AM - Jeff Moden Back To Top (91449)
Yep... I know this article is over a decade old but it's still useful.

As a bit of a sidebar, remember that for HEAPS, all non-clustered indexes contain the RID (row ID) from the HEAP.

That means that when you convert one to a Clustered Index, the RIDs will be replaced by the keys of the Clustered Index, and THAT will result in all non-clustered indexes being rebuilt as well. Unless the NCIs are being used as FK targets, you might as well just disable all the NCIs, then add the clustered index, and then rebuild the NCIs to re-enable them. If nothing else, it'll save you from additional unwanted growth of the MDF file.

Remember that if you just do an ALTER TABLE REBUILD, a whole lot of RIDs are going to change and that will affect every NCI. It could cause logical fragmentation in the indexes.
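A rough sketch of the disable / add clustered index / rebuild sequence described above, using the Demo table and idx_Server index from this tip (the clustered index key is illustrative):

-- Disable the nonclustered index so it is not maintained and rebuilt twice
ALTER INDEX idx_Server ON dbo.Demo DISABLE

-- Add the clustered index; the heap RIDs are replaced by the clustering key
CREATE CLUSTERED INDEX IX_Demo_ID ON dbo.Demo(ID)

-- Rebuilding the disabled nonclustered index also re-enables it
ALTER INDEX idx_Server ON dbo.Demo REBUILD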

Thursday, July 11, 2013 - 10:50:41 AM - William Back To Top (25799)

Another thing that stands out in the After section is the end of line 3 "DROPPED NULL".

The NULL/NOT NULL was not specified in the ALTER TABLE/ALTER COLUMN statement. (Watch your ANSI settings!)

However, it doesn't make a difference with using nvarchar(MAX). The number of Forwarded records is the same.

If using nvarchar(4000) NULL, rebuilding the table is not necessary. If the NULL setting is changed, rebuild is recommended.


Wednesday, June 26, 2013 - 3:30:39 PM - Brady Back To Top (25582)

DBAdmin,

If you use DBCC PAGE before and after the ALTER COLUMN statement, two things stand out:

1. The record size changes from 45 to 53 when adding MAX

2. A BLOB Inline Data row is added to the page:

Before:

ParentObject | Object | Field | VALUE
Slot 0 Offset 0x60 Length 45 | Slot 0 Column 1 Offset 0x4 Length 4 Length (physical) 4 | ID | 1
Slot 0 Offset 0x60 Length 45 | Slot 0 Column 2 Offset 0x19 Length 14 Length (physical) 14 | Server | Server1
Slot 0 Offset 0x60 Length 45 | Slot 0 Column 3 Offset 0x27 Length 6 Length (physical) 6 | DatabaseName | DB1
Slot 0 Offset 0x60 Length 45 | Slot 0 Column 4 Offset 0x8 Length 8 Length (physical) 8 | timestamp | 2013-06-26 09:12:09.793
PAGE HEADER: | Slot 1 Offset 0x8d Length 45 | Record Type | PRIMARY_RECORD
PAGE HEADER: | Slot 1 Offset 0x8d Length 45 | Record Attributes | NULL_BITMAP VARIABLE_COLUMNS
PAGE HEADER: | Slot 1 Offset 0x8d Length 45 | Record Size | 45

After:

ParentObject | Object | Field | VALUE
Slot 0 Offset 0x11d Length 53 | Slot 0 Column 1 Offset 0x4 Length 4 Length (physical) 4 | ID | 1
Slot 0 Offset 0x11d Length 53 | Slot 0 Column 2 Offset 0x1b Length 14 Length (physical) 14 | Server | Server1
Slot 0 Offset 0x11d Length 53 | Slot 0 Column 67108865 Offset 0x29 Length 0 Length (physical) 6 | DROPPED | NULL
Slot 0 Offset 0x11d Length 53 | Slot 0 Column 4 Offset 0x8 Length 8 Length (physical) 8 | timestamp | 2013-06-26 09:13:51.137
Slot 0 Offset 0x11d Length 53 | DatabaseName = [BLOB Inline Data] Slot 0 Column 3 Offset 0x2f Length 6 Length (physical) 6 | DatabaseName | 0x440042003100
PAGE HEADER: | Slot 1 Offset 0x152 Length 53 | Record Type | PRIMARY_RECORD
PAGE HEADER: | Slot 1 Offset 0x152 Length 53 | Record Attributes | NULL_BITMAP VARIABLE_COLUMNS
PAGE HEADER: | Slot 1 Offset 0x152 Length 53 | Record Size | 53

This is found by using DBCC IND and DBCC PAGE
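For anyone who wants to reproduce this, the commands look roughly like the following; the database name and page number are placeholders (take the file and page IDs from the DBCC IND output), and both commands are undocumented:

-- List the pages that belong to the Demo heap (-1 = all levels/indexes)
DBCC IND ('TestDB', 'Demo', -1)

-- Dump one data page: database, file ID, page ID, print option 3 (per-row detail)
DBCC PAGE ('TestDB', 1, 264, 3) WITH TABLERESULTS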


Tuesday, June 25, 2013 - 9:02:47 PM - DBAdmin Back To Top (25565)

Please comment on why changing the data type from nvarchar(100) to nvarchar(max) caused the rows to grow in size. Thanks!














