Using SQL Server Indexes to Bypass Locks

By: Tal Olier   |   Updated: 2008-04-25   |   Comments (10)   |   Related: Locking and Blocking


Problem

One of the issues you will face with SQL Server is blocking, which is caused by other processes holding locks on objects. Until those locks are released, the next process must wait before proceeding. Locking is a normal mechanism SQL Server uses to ensure data integrity, but depending on how transactions are run it can cause real problems. Are there ways to get around blocking by using different indexes to cover the queries that may be running?

Solution

In order to explain my point I am going to use one table and run queries from two different sessions.

The table that we will be using has the following columns:

  • display name - my application uses this name to display customer-related information.
  • current quota - a number stating the customer's current quota (of some item that is irrelevant here); in my scenario this column is monitored by a specific process for specific customers.
  • next month's quota - a number stating the planned quota for next month; note this column is used in long-running transactions that calculate quotas for all customers.
  • support level - a number between 1 and 3 stating the level of support granted to a customer (1=high, 3=low); we have several applications, processes and transactions (business transactions) that deal with different customers according to their support level.

Create Table Customers

IF OBJECT_ID ('customers') IS NOT NULL DROP TABLE customers  
GO   
CREATE TABLE customers  
(  
   id INT NOT NULL,  
   display_name VARCHAR(10) NOT NULL,   
   current_quota BIGINT NOT NULL,  
   next_month_quota BIGINT NOT NULL,  
   support_level SMALLINT NOT NULL,  
   some_other_fields_size_1k CHAR(1000) NULL  
)  
GO   
ALTER TABLE customers ADD CONSTRAINT customers_pk PRIMARY KEY (id)  
GO 

As you can see I have added an extra column called "some_other_fields_size_1k". This column simulates an additional 1K of customer data per row; I like adding it to my tests so the optimizer faces more realistic row sizes.

Let's fill the table with some data. We will do the following:

  • Insert 1,000 records.
  • Every 150th customer gets support level = 1; the rest will have to settle for support level = 3.
  • Give each customer a current quota and set next month's quota to 0.

Table Customers - fill with data

SET NOCOUNT ON  
DECLARE @i AS INT  
SET @i = 0  
WHILE @i<1000  
BEGIN  
   SET @i = @i + 1  
   INSERT INTO customers (  
   id,   
   display_name,   
   current_quota,   
   support_level,   
   next_month_quota,   
   some_other_fields_size_1k)   
   VALUES (  
   @i,   
   'name-' + CAST (@i AS VARCHAR (10)),   
   100000 + @i,   
   --making customer with id 150, 300, 450, 600, 750   
   -- and 900 with support level 1  
   CASE @i%150 WHEN 0 THEN 1 ELSE 3 END,   
   0,   
   'some values ...')  
END  
SET NOCOUNT OFF  
GO    

As I mentioned earlier, I have two processes that will be running:

  1. A general dashboard application that checks the status of the current quota of our top customers.
  2. A planning module that performs various calculations and changes quotas of customers based on calculations done by the application.

Dashboard Application SELECT Statement

SELECT display_name, current_quota  
FROM customers  
WHERE support_level = 1 ORDER BY 1  

Planning Module UPDATE Statement

UPDATE customers  
SET next_month_quota = <some quota>
WHERE id=<some id> 

Note that this scenario assumes we are working with Microsoft SQL Server's default isolation level, READ COMMITTED.

The READ COMMITTED isolation level allows good concurrency with reasonable integrity (some might argue it is neither that good nor that reasonable - and we'll get to that - but hey, most of the OLTP applications we know use it).
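If you want to double-check what your session is actually running with, here is a minimal sketch (assuming SQL Server 2005's dynamic management views) that decodes the current session's isolation level:

SELECT CASE transaction_isolation_level 
          WHEN 0 THEN 'Unspecified' 
          WHEN 1 THEN 'READ UNCOMMITTED' 
          WHEN 2 THEN 'READ COMMITTED' 
          WHEN 3 THEN 'REPEATABLE READ' 
          WHEN 4 THEN 'SERIALIZABLE' 
          WHEN 5 THEN 'SNAPSHOT' 
       END AS isolation_level 
FROM sys.dm_exec_sessions 
WHERE session_id = @@SPID 
GO 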

The core of the READ COMMITTED isolation level is composed of two building blocks called exclusive locks and shared locks, which comply with the following guidelines:

  1. An exclusive lock (A.K.A. "X lock") is taken on any resource that requires a write.
  2. A shared lock (A.K.A. "S lock") is taken on any resource that requires a read.
  3. An S lock can be taken on a specific resource unless an X lock is already held on it.
  4. An X lock can be taken on a specific resource only as long as no other lock (not even an S lock) is held on it.
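To make guidelines 3 and 4 concrete, here is a minimal two-window sketch (using a hypothetical table t with columns id and c; the full customers walk-through follows below):

-- Window 1: take and hold an X lock 
BEGIN TRAN 
UPDATE t SET c = 1 WHERE id = 1   -- guideline 1: the write takes an X lock on the row 
-- (no COMMIT yet) 

-- Window 2: a READ COMMITTED reader 
SELECT c FROM t WHERE id = 1   -- guideline 3: the S lock cannot be granted, so this blocks 
-- ...and stays blocked until window 1 issues COMMIT or ROLLBACK 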

Now to the challenge: there are cases where the planning module (a long-running transaction) locks a specific record and prevents the dashboard application from viewing specific information for a long time. Believe me, it happens - I know, I have built this scenario :).

We all know that indexes boost database performance by keeping an ordered (tree-based) list of keys and a link to the actual data location. So when our application has performance issues, the intuitive response is to look at the SQL statement that performs poorly and possibly add indexes. Here is the statement we are trying to run that is getting blocked.

Dashboard Application SELECT Statement

SELECT display_name, current_quota  
FROM customers  
WHERE support_level = 1 ORDER BY 1 

We could add an index on support_level, but since that index alone will not solve our blocking issue, we will hold off on adding it for now.

Let's start from the beginning...

Phase 1 - Check for bottlenecks

In short, we first check the database machine's CPU, disk and memory. When we see it is none of those, some of us turn to the event log. Not finding anything unusual there, and after scratching our heads long enough, the locks issue may finally pop up (hard to admit, but the locks part is somehow always a surprise :). I usually turn to check the locks when I see that some SQL is stuck while the machine is saying: "I am going to sleep, please wake me when you need something".
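One shortcut past the head-scratching is the server's cumulative wait statistics, where lock waits surface as LCK_M_* wait types. A quick sketch against the SQL Server 2005 DMVs:

-- Top waits since the instance last restarted; LCK_M_* entries point at blocking 
SELECT TOP 10 wait_type, waiting_tasks_count, wait_time_ms 
FROM sys.dm_os_wait_stats 
ORDER BY wait_time_ms DESC 
GO 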

Phase 2 - Drill down into locks

To view the locks there are of course the famous sp_lock and even sp_lock2 procedures, but I like to use my own get_locks procedure, which also returns additional information about the locked objects (e.g. the owner object, index name, etc.). It can be created on any SQL Server 2005 server with the following SQL script:

get_locks stored procedure - creation script

CREATE PROCEDURE get_locks 
AS 
-- resource_associated_entity_id holds an object_id for OBJECT resources and a 
-- bigint hobt_id for page/key resources; the modulo keeps the value within the 
-- int range that OBJECT_NAME() accepts 
SELECT OBJECT_NAME(tl.resource_associated_entity_id%2147483647) obj, tl.request_session_id,  
   tl.request_mode, tl.request_status, tl.resource_type, OBJECT_NAME (pa.OBJECT_ID) owner_obj,  
   pa.rows, tl.resource_description, si.name index_name, si.type_desc index_type_desc 
FROM sys.dm_tran_locks tl LEFT OUTER JOIN sys.partitions pa  
   ON (tl.resource_associated_entity_id = pa.hobt_id)   -- resolve page/key locks to their partition 
   LEFT OUTER JOIN sys.indexes si  
   ON (pa.OBJECT_ID = si.OBJECT_ID AND pa.index_id = si.index_id)   -- and from there to the index 
WHERE resource_type NOT LIKE 'DATABASE'   -- skip the session-level database locks 
ORDER BY tl.request_session_id, tl.resource_type 
GO 

Now let's check whether there are any locks by running:

get_locks 
GO 

We get the following result, which shows there are no current locks.

[Result: get_locks returns no rows - no current locks]

Now let's open two sessions (SQL Server Management Studio query windows). In the first (we'll call it Session A) we'll run the following SQL:

Session A - Dashboard Application SELECT

SELECT @@SPID 
GO  
SELECT display_name, current_quota FROM customers WHERE support_level = 1 ORDER BY 1 
GO 

This statement (when not blocked by others) fetches the display_name and current_quota of customers with support level = 1:

[Result grid: display_name and current_quota for the support level 1 customers]

In the second session (we'll call it Session B) let's run the following update statement:

Session B - Planning Module UPDATE

SELECT @@SPID --added just for session identification in the dm views. 
GO  
BEGIN TRAN 
UPDATE customers SET next_month_quota = 2500 WHERE id=150 

This is returned:

[Result: 1 row affected - the transaction remains open]

The statement above opened a transaction, updated a record and did not close the transaction (with either a COMMIT or ROLLBACK command), so it is currently holding locks on some resources.

Let's review the locks with get_locks:

get_locks 
GO 

This time the result would be:

[get_locks output: Session B holds IX locks on the customers table and page 1:334, and an X lock on the key]

What we can see here is that process #56 (our Session B) has locked:

  • The object 'customers' (a table) with an "IX lock".
  • The page '1:334' (page 334 in file #1) with an "IX lock".
  • The key '(96009b9e9046)' (a hash identifying the record with id = 150 in 'customers_pk') with an "X lock".
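If you are curious what actually lives on that page, the long-standing (though undocumented) DBCC PAGE command can dump it; the page number below comes from my run and will differ on yours:

DBCC TRACEON (3604)   -- route DBCC output to the client session 
GO 
DBCC PAGE ('<your database>', 1, 334, 3)   -- file 1, page 334, maximum detail 
GO 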

"IX lock" is a way to notify the database not to allow any shared locking that will block my update later on, when an object is locked with "IX lock" it:

  • Assumed that a lower granularity will acquire a specific "X Lock".
  • It allows only IX and IS locks to be acquired on the same resource.

For more information please refer to Lock Compatibility (Database Engine) in SQL Server 2005 Books Online.
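For quick reference, here is the corner of the lock compatibility matrix that matters for our scenario (abridged; 'yes' means the two locks can coexist on the same resource):

        IS    S    IX    X 
  IS   yes  yes   yes   no 
  S    yes  yes    no   no 
  IX   yes   no   yes   no 
  X     no   no    no   no 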

Now let's re-run the dashboard application query within Session A:

Session A - Dashboard Application SELECT

SELECT @@SPID 
GO  
SELECT display_name, current_quota FROM customers WHERE support_level = 1 ORDER BY 1 
GO 

This time we can see that the query is blocked - it just keeps executing and returns nothing:

[The SELECT hangs - still executing, returning no results]
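As a side note, a quick way to see the blocking chain itself (who waits on whom) is a query like this against sys.dm_exec_requests:

-- Sessions currently blocked, and the session blocking each of them 
SELECT session_id, blocking_session_id, wait_type, wait_resource 
FROM sys.dm_exec_requests 
WHERE blocking_session_id <> 0 
GO 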

Re-checking the locks with get_locks, we can see the following lock status:

get_locks 
GO 
[get_locks output: Session B's locks plus Session A's IS locks and an S lock request in WAIT status]

As you can see, the previous locks from Session B (spid = 56) are still there, and we have some new locks from Session A (spid = 55):

  • Like Session B, this session succeeded in placing intent locks (this time "IS locks") on:
    • The 'customers' object (table).
    • Page '1:334' (page 334 in file #1).
  • Unlike Session B, when trying to place an "S lock" on the key '(96009b9e9046)' (the record with id = 150 in 'customers_pk'), this session finds an "X lock" already held there and enters a 'WAIT' state (see the request_status column).

Phase 3 - Trying an index

Well, I started the 'Solution' section by saying that adding an index on the support_level column to boost performance still won't help, but it will get us closer - let's try it:

-- Before doing that we'll need to: 
-- 1. Stop Session A (which is still running). 
-- 2. Rollback Session B (which is still locking). 
-- 3. Run:  
CREATE INDEX customers_ix1 ON customers (support_level) 
GO  
-- 4. Rerun Session B (for locking) 
-- 5. Re-run Session A (to be blocked) 

OK, now that we have done the above, if we run Session B's UPDATE, Session A's SELECT and then get_locks, we get the following:

[get_locks output: the same locks as before, plus Session A's IS lock on a customers_ix1 page and an S lock on one of its keys]

As you can see, the same locks as before are taken, plus the customers_ix1 index got a new page "IS lock" (page 77 in file 1) and a new "S lock" on key '(970081334a1d)', both placed by Session A.

Some might ask why Session A succeeded in placing a lock on customers_ix1 (line 1 in the above table). Let's check the execution plan for the SELECT statement by running:

-- 1. Stop running Session B 
-- 2. Press Ctrl-T to change results to text: 
-- 3. Run:  
SET SHOWPLAN_TEXT ON 
GO  
SELECT display_name, current_quota FROM customers WHERE support_level = 1 ORDER BY 1  
GO  
SET SHOWPLAN_TEXT OFF 
GO 

This will yield:

[Query plan: the SELECT accesses the data through customers_ix1]

As you can see, Session A accesses the data through customers_ix1, so it tries to place an "S lock" on it (and of course succeeds).

Well, this encapsulates two great hints for our solution:

  1. When performing a write operation, SQL Server does not lock related indexes (e.g. our Session B did not lock customers_ix1; note it would not have locked it whether the update used it or not!), only the relevant data row. One important caveat: if the update changes a column that an index covers, that index is locked as well (see the demonstration in the comments below) - here next_month_quota is not part of customers_ix1, so the index stays untouched.
  2. When performing a read operation, SQL Server locks only the objects (e.g. indexes, data rows, etc.) that it found and used within its access path.

So the current status is that we can use indexes to serve Session A's activity, as long as they spare us the need to access the actual data row.

Voila!!!

By adding an index that covers all the columns of the table required by Session A's specific query (A.K.A. a covering index), we get a solid bypass that requires no access to the actual data row. Having achieved this, Session A will not be blocked while fetching data from a row that is locked for writing (as done by Session B).

Adding the index is done as follows:

-- Before doing that we'll need to: 
-- 1. Rollback Session B which is still locking. 
-- 2. Run:  
CREATE INDEX customers_ix2 ON customers (support_level, display_name, current_quota) 
GO 

Now let's re-run the locking update of Session B and the select for Session A.

Session B

SELECT @@SPID --added just for session identification in the dm views. 
GO  
BEGIN TRAN 
UPDATE customers SET next_month_quota = 2500 WHERE id=150 

Session A

SELECT display_name, current_quota  
FROM customers  
WHERE support_level = 1 ORDER BY 1  

Trying to run Session A's SELECT statement (while Session B's transaction is still locking) will produce the following results without being blocked:

[Result grid: the SELECT returns immediately - no blocking]

Review New Query Plan

SET SHOWPLAN_TEXT ON 
GO  
SELECT display_name, current_quota FROM customers WHERE support_level = 1 ORDER BY 1  
GO  
SET SHOWPLAN_TEXT OFF 
GO 

The result is:

[Query plan: the SELECT is satisfied entirely by customers_ix2]

As you can see, the only database object participating in fetching the data is customers_ix2, which we have just created. By using this covering index we read through a completely different structure than the one the UPDATE statement locks, and therefore the statement completes without issue.

Some might argue that we did not need a whole article just to add a covering index, and they might be correct. Still, there are situations where a covering index is not that straightforward, and it can be used to overcome locking challenges such as the example above.

Next Steps
  • Reviewing your database for blocking issues can be done in more than one way; allow me to suggest a few:
    • Ad-hoc usage of the get_locks procedure above, or any other T-SQL script that identifies blocking sessions.
    • SQL Server Profiler (using the Lock:Timeout event).
    • Other third-party tools.
  • Indexing with all columns as part of the index key is not a must, as you can learn from Improve Performance with SQL Server 2005 Covering Index Enhancements; see the sketch after this list.
  • Finally, when modeling our data in the database for some applications (e.g. a standard OLTP application), we think and design by gathering entity attributes of the same granularity into the same database table (A.K.A. normalizing our database tables); we might want to consider going a bit further than that and designing according to data access, not only according to data belonging.
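As a sketch of the covering index point above, here is what a narrower variant of customers_ix2 might look like, using SQL Server 2005's INCLUDE clause so that display_name and current_quota are carried at the leaf level without widening the index key (customers_ix3 is a name introduced here for illustration):

-- Covers the same query as customers_ix2, but only support_level is part of the key 
CREATE INDEX customers_ix3 ON customers (support_level) 
   INCLUDE (display_name, current_quota) 
GO 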





About the author
Tal Olier is a database expert currently working for HP, having held various positions in IT and R&D departments.




Article Last Updated: 2008-04-25

Comments For This Article




Tuesday, January 22, 2019 - 10:16:42 AM - B ROBERTSON

Appreciate the detailed scenario-based explanations. Regards, Robertson (Director - Oracle database), Bangalore


Monday, September 23, 2013 - 6:02:18 PM - David

I found this article while looking for answers as to why our database was locking up during SELECTs within transactions. This is an excellent read, and I found the solution in the fact that I just had to add an index.

Thank you for the detailed information; most of the online results were not as deep as this article!


Monday, May 12, 2008 - 7:31:21 PM - tal.olier

Hello ahains,

First I'd like to thank you for commenting on my article - it is a great pleasure to know all this "writing stuff" does make a difference (I also read your blog post...).

Second, well, your observation (i.e. "update *will* take a lock on the index if the update changes any of the columns that the index covers") is correct but a bit confusing from the reader's POV (I have a case with a given update and cannot change that...). The thing I wanted to show here was that an update will "lock" your reading statement even when the data being read was not updated, and to show there is a cure for that (and also to imply that some of us practice this cure by adding a covering index without knowing it solves locking issues - I did).

So when I built the article I made sure it covered the reader's POV, i.e. first we have an update, then we have a read that is blocked. So I actually tried embedding your statement in two phrases in my article:
1. When performing a write operation, SQL Server does not lock related indexes (e.g. our Session B did not lock customers_ix1; note it would not have locked it regardless if we used it or not!), only the relevant data row.
2. By adding an index that will cover all columns of the table that are required for Session A's specific query (A.K.A. Covering Index), we will have a solid bypass without requiring specific access to the actual data row.

Regarding the update of a value to the same value (i.e. 3=3), I noticed that also and do think it is intentional on SQL Server's part - a great idea for future research :)

Hope I covered all the points you've raised.

Thanks,

--Tal ()


Monday, May 12, 2008 - 7:03:52 PM - tal.olier

 mentioned and quota (sorry, was too enthusiastic to complete my answer)

:)

--Tal.


Monday, May 12, 2008 - 7:01:09 PM - tal.olier

Hi,

As mentioed above using NOLOCK hint affects transaction read consistency i.e. when using it you can never be sure that you are reading correct information; I specially chose quota example since we do not like to see our qouta incorrect (think of you private bank account quota ;)...).

Anyway, the point of the article was not "to read columns that were not changed" but rather to give an example of how we can bypass a lock: a writer, even one that has not updated our "going to be read" data, still blocks it. In the other case, if the writer had updated our "going to be read" data, we would still want SQL Server to lock it and not let us read it until the writer is done; using the NOLOCK hint would not give us this and we might read wrong information.

 Hope this helps,

 --Tal ()

 


Tuesday, May 6, 2008 - 1:02:04 AM - BrianJones

re WITH(NOLOCK) 605 error : http://support.microsoft.com/default.aspx?scid=kb;en-us;235880

Reading this, I think that this would only occur if the data is being moved due to an update of a clustered index. I tend to make my clustered indexes non-updatable so that may answer why I have not come across this problem.

However, I can see merit in using the covering index from a performance perspective as the server will read the index tree and not look at the table. And this is probably why a covering index is not subject to the row lock. I will certainly incorporate this idea into future designs should adding an index be a viable possibility. I think, in actual fact, I probably do exactly this but didn't realise that the covering index was the reason behind it all.

As a side thought, would using WITH(NOLOCK) together with the covering index increase performance again (no intent locks would be created, for a start), while also not suffering from the 605 error? Might be worth a look.


Friday, May 2, 2008 - 9:33:28 AM - ahains

For info on the exception you may hit when using with(nolock), google for: nolock data movement

I think the point can be summed up as:

IF

Access pattern of column A is lots of DML, and access pattern of column B is lots of reads and few DML

THEN

IF you have a covering index, the reads from column B will experience zero blocking from DML to column A because the DML operation will only lock the base table and not the index. Note that you still have full transactional consistency in your reads -- if there is an infrequent DML to column B you will see any appropriate isolation.

ELSE IF you do not have a covering index, column B will experience significant blocking from DML to column A because the DML operation will lock the whole row of the base table and the reads against column B must also use the base table.

I had a need for this kind of pattern for a project, so I made a blog entry with an overview.


Friday, May 2, 2008 - 8:44:25 AM - BrianJones

I agree with the comment that the data will not be transactionally consistent with WITH(NOLOCK), however as the point of this was to read columns that were not changed then I don't see the reason for using the index. As you quite rightly point out if the data changed is on a column in the index then the index will be locked anyway. Also, the point of the article I thought was to enable you to read rows that are locked for a long period of time, and if this data affects the index you are using then it looks to me that it doesn't answer the problem - getting the data consistently.

I've not heard of queries erroring on a WITH(NOLOCK) and have not come across it on any of the databases I've written. I guess if the data is heavily used and the query you are executing is very slow then you will get discrepancies (data updated whilst the query takes place - not sure what that would do), but I would then suggest that you are writing heavy reports against a transactional database and should think about creating a replicated reporting database to remove the load from data entry.

I think I must be missing something here, because I can't really see how this will help over WITH(NOLOCK).


Friday, May 2, 2008 - 8:06:04 AM - ahains

Using the with(nolock) hint means you can/will get dirty reads that are not transactionally consistent. The other common warning against it is the query may error out due to data movement if the page your query is processing moves. I always use with nolock hint when running big / long running reports that don't need to be 100% accurate.

Indexes certainly increase overhead of dml so like everything in sql the decision of whether or not to add one is 'it depends'. If it is a busy table with lots of reads then certainly it can increase query throughput.

I think the article fails to bring up the important point that the update *will* take a lock on the index if the update changes any of the columns that the index covers. Here is an example that demonstrates this:

create table t1 (id int, lastAgg int, pending int, currentAgg as lastAgg+pending)
insert t1 values (1, 20, 2)
insert t1 values (2, 30, 3)
insert t1 values (3, 40, 4)
go
--create clustered index t1_cidx_id_agg on t1(id)
--go
create unique nonclustered index t1_idx_id_lastAgg on t1(id) include (lastAgg)
go
/*in session 1 update a column not in the covering index*/
begin tran
update t1 set pending = 1 where id = 2;

/*in session 2 following, using index hint since table and data is compact so query plan may otherwise use the clustered index and invalidate the test*/
select lastAgg from t1 with (index = t1_idx_id_lastAgg) where id = 2
/*result: not blocked by session 1*/
go

/*rollback or commit the previous session before starting the next test*/
/*in session 1 update the column that is covered by the index*/
begin tran
    update t1 set lastAgg = 21 where id = 2;
/*in session 2 following, using index hint since table and data is compact so query plan may otherwise use the clustered index and invalidate the test*/
select lastAgg from t1 with (index = t1_idx_id_lastAgg) where id = 2
/*result: session 2 is blocked by session 1*/
go

This table/index pattern can be used to implement a deferred update aggregation table. All of the readers that can afford a time lag query the lastAgg column. All readers that require up-to-date info query the currentAgg column and take the hit that they will be blocked by concurrent writers. All of the updates write to the pending column. A scheduled task or other background process occasionally goes through the table and does: set lastAgg=lastAgg+pending, pending=0, dirty=0.

Side note 1: Note that it does not matter if the base table has a clustered index or is a heap. 

Side note 2: I don't know if this always holds true, but I observe that if your query does an update/set on a column that is covered by the index but does *not* actually change the value (i.e. set to value 3 and current value is already 3), then the index is not locked.


Friday, May 2, 2008 - 5:04:26 AM - BrianJones

Just wondering if there was any reason that the table hint WITH(NOLOCK) could not have been used ? That way you will use existing indexes and can retrieve all of the data, and not need to create another index. I try to only create indexes where a performance gain is required, especially if it is a busy table.














