Top 5 Reasons for Wrong Results in SQL Server

By: Aaron Bertrand   |   Updated: 2016-11-17   |   Comments (6)   |   Related: > TSQL


Problem

Many people have come across situations where their queries are returning incorrect results - sometimes this is due to a bug in SQL Server, but it is far more common that it is less sinister than that.

Solution

Having spent quite a bit of time supporting a wide variety of users with an assortment of "wrong results" problems, I've found that there are actually only a few common scenarios that lead to incorrect results - and usually it is a logic or assumption problem.

The five things we will cover are:

  1. Date Literals
  2. BETWEEN / EOMONTH()
  3. Binary, case-sensitive, or wrong collation search
  4. Mixing up join / filter criteria
  5. Using READ UNCOMMITTED / NOLOCK

Let's get started.

1 - Date Literals

SQL queries are often constructed within application code or dynamic SQL, and date strings typed by the user are copied and pasted, as is, into the query text. The problem is that the user does not always share the language or regional settings of the server. So when a Canadian types 9/8/2016 into the textbox, meaning August 9th, the query ends up like this:

 ... WHERE EventDate = '9/8/2016';

Since the server is set up with US English, SQL Server has to interpret that as September 8th. The user might not even notice that the events returned are from the wrong date, since often internationalization is missed at the presentation layer too - so it will display 9/8/2016, just like the user typed.
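
You can watch the interpretation flip with nothing more than a language setting (us_english and British are both standard languages that ship with SQL Server):

 SET LANGUAGE us_english;
 SELECT us = CONVERT(datetime, '9/8/2016'); -- 2016-09-08 (September 8th)

 SET LANGUAGE British;
 SELECT uk = CONVERT(datetime, '9/8/2016'); -- 2016-08-09 (August 9th)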

A slight variation on this theme is when 9/8/2016 is passed directly into the query, without proper delimiters, so it ends up looking like this:

 ... WHERE EventDate = 9/8/2016;

This can't possibly return the right data, even for a user expecting September 8th. The problem is that SQL Server allows for all kinds of sloppy implicit conversions; in this case, it thinks you are intentionally writing an expression that divides three integers. With integer division, 9/8 is 1, and 1/2016 is 0, so SQL Server implicitly converts your expression to the base date (1900-01-01) plus zero days. Evidence:

 SELECT a = 9/8/2016, b = CONVERT(datetime, 9/8/2016);

 /* 
     a      b
     ----   -----------------------
     0      1900-01-01 00:00:00.000 
 */

Moral: Always use proper date/time parameters, and whenever using a string representation at any layer, use a culture-invariant format like yyyymmdd. This may mean forcing your users to select from a calendar control or set of drop-downs, because if you're letting them type, you can never be absolutely certain whether they meant August 9th or September 8th.
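
As a rough sketch of what that looks like in practice - the dbo.Events table and its columns here are purely illustrative - pass a real date parameter instead of pasting the string into the statement:

 DECLARE @EventDate date = '20160809'; -- unambiguous yyyymmdd

 EXEC sys.sp_executesql
   N'SELECT EventID, EventDate FROM dbo.Events WHERE EventDate = @EventDate;',
   N'@EventDate date',
   @EventDate = @EventDate;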

2 - BETWEEN / EOMONTH()

One query strategy that I have always found problematic is the use of BETWEEN for date range queries, and Microsoft didn't do many people any favors by adding the EOMONTH() function in SQL Server 2012.

BETWEEN is problematic in part because it is ambiguous - what exactly does "between" mean? If I say "between 4 PM and 8 PM," it may seem clear, but should something that happens at exactly 8 PM be included? What about at 8:01 PM? And 8:00:00.0000001? Even in plain English, it's not always clear whether the start and end of the range should be included.
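
For the record, T-SQL's BETWEEN is inclusive at both ends, so something that happens at exactly 8 PM is included - a quick check:

 SELECT verdict = CASE
     WHEN CONVERT(time, '20:00') BETWEEN '16:00' AND '20:00'
     THEN 'included' ELSE 'excluded'
   END; -- 'included'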

Next, BETWEEN forces query writers to determine the end of a period. If you are writing a report that covers the month of February, how exactly do you determine the end of February? Leap year might seem to be the biggest complication, but it's actually harder to determine the last instant of the last day of the month. With datetime, that's usually calculated by subtracting 3 milliseconds from March 1st. But if the data type of the underlying column - or of any parameter or variable used along the way - changes, this value can round up or down unexpectedly.

Example:

 DECLARE @start datetime = '20160201',
         @end   datetime = DATEADD(MILLISECOND, -3, '20160301');

 DECLARE @d date            = @end,
         @eom datetime      = EOMONTH(@end),
         @sdt smalldatetime = @end,
         @dt2 datetime2(7)  = @end;

 SELECT @end, -- 2016-02-29 23:59:59.997
        @d,   -- 2016-02-29
        @eom, -- 2016-02-29 00:00:00.000
        @sdt, -- 2016-03-01 00:00:00
        @dt2; -- 2016-02-29 23:59:59.9966667

If the column is a datetime column, but the parameters or variables are manipulated in this way, then you will have incorrect results:

  • If the parameter is date, or is implicitly converted to a day boundary by EOMONTH(), then BETWEEN @start AND @d will miss anything from February 29th with a time other than midnight.
  • If the parameter is smalldatetime, then BETWEEN @start AND @sdt will include any data from March 1st with a time of midnight.
  • If the parameter is datetime2, the end of the range is rounded down (to 23:59:59.9966667), so in theory you could miss data stamped 23:59:59.997 (or as late as 23:59:59.9999999 if the underlying column is also datetime2).

EOMONTH(), in particular, seems to trick a lot of people into thinking it returns the end of the month, when in fact - as demonstrated above - it returns midnight on the last day of the month, which is of course not the same thing. Now, BETWEEN would work here if the underlying data type is date, but I strongly recommend against treating that as an exception to the rule while using other methods elsewhere.
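
If you do use EOMONTH(), treat its result as a day rather than an instant; here is a small sketch of deriving open-ended boundaries from it (the variable name is illustrative):

 DECLARE @anyDayInFeb date = '20160214';

 SELECT first_of_month = DATEFROMPARTS(YEAR(@anyDayInFeb), MONTH(@anyDayInFeb), 1),
        first_of_next  = DATEADD(DAY, 1, EOMONTH(@anyDayInFeb));
 -- 2016-02-01, 2016-03-01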

Moral: Always use an open-ended range for date range queries, even when using the date data type.

It is always easier to find the beginning of the next day, week, month, what have you, than it is to find the end of the current period. And, more importantly, an open-ended range is not vulnerable to rounding and conversion issues. The following can never return the wrong results (i.e. it will always return all of the data for February, and only the data for February), regardless of what you do to the data types of the parameters or underlying columns:

     DECLARE @start datetime = '20160201',
             @end   datetime = DATEADD(MONTH, 1, '20160201');
    
     SELECT ... WHERE datetime_column >= @start
       AND datetime_column < @end;
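
If the report month arrives as parameters rather than literals, the same open-ended range is just as easy to derive (a sketch; @year and @month are illustrative):

 DECLARE @year int = 2016, @month int = 2;

 DECLARE @start datetime = DATEFROMPARTS(@year, @month, 1),
         @end   datetime = DATEADD(MONTH, 1, DATEFROMPARTS(@year, @month, 1));

 SELECT @start, @end; -- 2016-02-01 00:00:00.000, 2016-03-01 00:00:00.000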

3 - Binary, case-sensitive, or wrong collation search

I often recommend people develop on a case-sensitive or binary collation. Like learning to drive using a stick shift in the winter, this can prevent surprises and make sure your code can be deployed anywhere and continue to function.

Let me illustrate with a simple example. Let's create two databases, one with case sensitive collation, and the other case insensitive:

 CREATE DATABASE CaseInsensitive COLLATE Latin1_General_CI_AS;
 GO
 CREATE DATABASE CaseSensitive COLLATE Latin1_General_CS_AS;
 GO

Now, let's create a basic table, something you might do without having any idea about the collation of the database (or the server-level collation of the instance).

 USE CaseInsensitive;
 GO
 CREATE TABLE dbo.x(LastName nvarchar(32) PRIMARY KEY);
 INSERT dbo.x(LastName) VALUES(N'Bertrand'),(N'ߐ');

 DECLARE @ln_lower   nvarchar(32) = N'bertrand', -- wrong case
         @ln_varchar  varchar(32) = 'ߐ';         -- not Unicode

 SELECT LastName FROM dbo.x WHERE LastName = @ln_lower;   -- 1 row
 SELECT LastName FROM dbo.x WHERE LastName = @ln_varchar; -- 0 rows

Because the column uses a case-insensitive collation, bertrand and Bertrand still match. However, regardless of case sensitivity, not all Unicode strings will match an "equivalent" varchar string - the strings simply won't be the same, because the conversion to varchar can lose information.

If we repeat this exact same code sequence in the case-sensitive database, bertrand no longer matches Bertrand, and the Unicode problem still exists.
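
The information loss is easy to observe directly - in a database whose default collation maps to a Latin1 code page, a character with no representation there comes back as a question mark:

 SELECT converted = CONVERT(varchar(32), N'ߐ'); -- '?'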

Another example

Let's say we have a simple table with a Unicode column, and we want to find all the names where the first letter is in lower case. We'd expect we could do this with a case sensitive collation, but collations are funny:

 DECLARE @x TABLE(LastName nvarchar(32));
 
 INSERT @x(LastName) VALUES(N'berthelette'),(N'Bertrand');
 
 -- now, use COLLATE to get all lower case letters:
 
 SELECT LastName FROM @x
   WHERE LastName COLLATE Latin1_General_CS_AS LIKE N'[a-z]%';

 SELECT LastName FROM @x
   WHERE LastName LIKE N'[a-z]%' COLLATE Latin1_General_CS_AS;
 
 -- wait a minute, why is Bertrand returned?
 -- This collation uses dictionary sorting, which means aAbBcC...
 
 -- A workaround is to use a BIN/BIN2 collation, which sorts by code point,
 -- meaning abc...ABC...
 -- Now only berthelette is returned:
 
 SELECT LastName FROM @x
   WHERE LastName COLLATE Latin1_General_BIN LIKE N'[a-z]%';
 
 SELECT LastName FROM @x
   WHERE LastName COLLATE Latin1_General_BIN2 LIKE N'[a-z]%';

 SELECT LastName FROM @x
   WHERE LastName LIKE N'[a-z]%' COLLATE Latin1_General_BIN;
 
 SELECT LastName FROM @x
   WHERE LastName LIKE N'[a-z]%' COLLATE Latin1_General_BIN2;

Moral: For search columns where you want to support case-insensitive searches, define them using an explicit, case-insensitive collation. Also, always use the N prefix when dealing with Unicode strings, not just to ensure correct results, but also to avoid costly implicit conversions. Other workarounds include applying LOWER() or UPPER() to both sides of the comparison, but these approaches can kill sargability (as can the COLLATE clause shown above).
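
For example, here is a minimal sketch of pinning the collation at the column level, so the database (or server) default can't surprise you later; dbo.People is hypothetical:

 CREATE TABLE dbo.People
 (
   PersonID int PRIMARY KEY,
   LastName nvarchar(32) COLLATE Latin1_General_CI_AS NOT NULL
 );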

4 - Mixing up join / filter criteria

A situation I see a lot out on forums is a user performing an outer join where the query behaves like an inner join. This is often because some of the criteria meant for the outer table are expressed in the WHERE clause instead of the ON clause of the join. It will be easier to explain with an example; let's say we have a table representing students, and another table representing how many pets a student has:

 CREATE TABLE dbo.Students
 (
   StudentID int PRIMARY KEY,
   FirstName nvarchar(32)
 );
 
 CREATE TABLE dbo.PetCount
 (
   StudentID int PRIMARY KEY
     FOREIGN KEY REFERENCES dbo.Students(StudentID),
   PetCount tinyint
 );
 
 INSERT dbo.Students(StudentID, FirstName)
   VALUES(1, N'Aaron'),(2, N'Madeline'),(3, N'Nora');
   
 INSERT dbo.PetCount(StudentID, PetCount)
   VALUES(1, 1), (2, 5);

Here we see that Aaron has one pet, Madeline has five pets, and Nora has no pets (no row in the PetCount table).

Now, let's say you need a query that would return all students with fewer than three pets. You would start with a left join:

 SELECT s.FirstName, pc.PetCount
   FROM dbo.Students AS s
   LEFT OUTER JOIN dbo.PetCount AS pc
   ON s.StudentID = pc.StudentID;
   
   /* which returns:

   FirstName   PetCount
   ---------   --------
   Aaron       1
   Madeline    5
   Nora        NULL 
   */

This looks good so far, but we still need to add a filter to remove Madeline from the result set. What I see most people try first is adding a WHERE clause:

 SELECT s.FirstName, pc.PetCount
   FROM dbo.Students AS s
   LEFT OUTER JOIN dbo.PetCount AS pc
   ON s.StudentID = pc.StudentID
   WHERE pc.PetCount < 3;
   
   /* which returns:

   FirstName   PetCount
   ---------   --------
   Aaron       1
   */

Well, that didn't work so well. Sure, we got rid of Madeline, who has 5 pets, but we also got rid of Nora, who has 0 pets. Many people struggle to understand this, but the easiest way to explain it is this: when you put a filter against the outer table in the WHERE clause, you change the outer join to an inner join. From a logical perspective, the filter can only return true for values of pc.PetCount that are known to be less than 3. Since PetCount for Nora is NULL, not 0, it is unknown whether that is less than 3, so the row is rejected.
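
You can see the three-valued logic in isolation - a comparison against NULL is neither true nor false, so the filter rejects the row:

 SELECT verdict = CASE WHEN NULL < 3 THEN 'included' ELSE 'filtered out' END;
 -- 'filtered out'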

Next, they try moving the filter to the join clause:

 SELECT s.FirstName, pc.PetCount
   FROM dbo.Students AS s
   LEFT OUTER JOIN dbo.PetCount AS pc
   ON s.StudentID = pc.StudentID
   AND pc.PetCount < 3;

   /* which returns:
   
   FirstName   PetCount
   ---------   --------
   Aaron       1
   Madeline    NULL
   Nora        NULL 
   */

This might be even more confusing - what happened to Madeline's pets? Changing the WHERE to AND means that, essentially, the filter only applies to the data that comes from the outer table. All rows are returned from Students but data from PetCount is only included when it matches the filter.

The way to solve this particular query is to use a WHERE clause that tests whether PetCount - or 0, when no PetCount row exists - is less than 3:

 SELECT s.FirstName, PetCount = COALESCE(pc.PetCount, 0)
   FROM dbo.Students AS s
   LEFT OUTER JOIN dbo.PetCount AS pc
   ON s.StudentID = pc.StudentID
   WHERE COALESCE(pc.PetCount, 0) < 3;
   
   /* which finally returns what we expect:
   
   FirstName   PetCount
   ---------   --------
   Aaron       1
   Nora        0 
   */
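
An equivalent formulation, if you'd rather not wrap the column in a function, is to handle the missing row explicitly:

 SELECT s.FirstName, PetCount = COALESCE(pc.PetCount, 0)
   FROM dbo.Students AS s
   LEFT OUTER JOIN dbo.PetCount AS pc
   ON s.StudentID = pc.StudentID
   WHERE pc.PetCount < 3 OR pc.PetCount IS NULL;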

Typically, the problem I see with outer joins is that a filter is mistakenly placed in the WHERE clause; in this case, that's exactly where it belonged.

Moral: Strive to understand the different logical join mechanisms, and make sure to test against all edge cases before being satisfied that your query logic is sound.  

5 - Using READ UNCOMMITTED / NOLOCK

Last but not least, NOLOCK is a really popular table hint, but most people treat it as a "go faster" button and don't realize (or ignore) that it still takes some locks (such as schema stability locks), so its name is not entirely accurate. And as I explained in Bad habits : Putting NOLOCK everywhere, it can lead to corruption issues and exceptions, but incorrect results can happen too. These can manifest in four different ways, all due to the way SQL Server can scan the data:

  1. Dirty read: You read an uncommitted row that later gets rolled back.
  2. Skipped row: You miss a row entirely, because it moves from ahead of your current scan position to behind it.
  3. Re-read row: You read a row twice, because it moves from behind your current scan position to ahead of it.
  4. Half-updated row: You read a value from a column that is literally in the middle of an update.

The last is the most interesting case to me, because it is not obvious (and I didn't realize it could happen until I saw it demonstrated by Paul White). The other three scenarios are often known trade-offs - sure, for the benefits, it's worth it to occasionally see a row that never existed, miss a row, or read the same row twice. But seeing a single value that has been partially updated borders on corruption to me. Let's illustrate with an example using an nvarchar(max) column. In one query window, create the following table, and start this infinite loop:

 CREATE TABLE dbo.PixieDust
 (
   ID int PRIMARY KEY,
   LOB nvarchar(max) NOT NULL
 );
 
 -- insert a value that will span at least two pages:
 INSERT dbo.PixieDust(ID, LOB)
 VALUES(1, REPLICATE(CONVERT(nvarchar(max), N'A'), 4097));

 SET NOCOUNT ON;
 
 DECLARE 
   @AllAs nvarchar(max) = REPLICATE(CONVERT(nvarchar(max), N'A'), 4097),
   @AllZs nvarchar(max) = REPLICATE(CONVERT(nvarchar(max), N'Z'), 4097);
 
 WHILE 1 = 1
 BEGIN
   UPDATE dbo.PixieDust
     SET LOB = @AllAs
     WHERE ID = 1;
 
   UPDATE dbo.PixieDust
     SET LOB = @AllZs
     WHERE ID = 1;
 END;

Now, in a second query window, run this loop, which will halt as soon as the NOLOCK query reads a value that has been half updated. It shouldn't take long:

 SET NOCOUNT ON;
 
 DECLARE 
   @ValueRead nvarchar(max) = N'',
   @AllAs nvarchar(max) = REPLICATE(CONVERT(nvarchar(max), N'A'), 4097),
   @AllZs nvarchar(max) = REPLICATE(CONVERT(nvarchar(max), N'Z'), 4097);
 
 WHILE 1 = 1
 BEGIN
   SELECT @ValueRead = LOB
     FROM dbo.PixieDust WITH (NOLOCK)
     WHERE ID = 1;
 
   IF @ValueRead NOT IN (@AllAs, @AllZs)
   BEGIN
     PRINT LEFT (@ValueRead, 32);
     PRINT RIGHT(@ValueRead, 32);
     BREAK;
   END
 END;

Before too long, you should see this output, meaning that the value read by the NOLOCK query consisted of some As and some Zs:

 AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
 ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ

Just one simple example where NOLOCK can make a real mess of your data. (Don't forget to go back to the first query window, stop that infinite loop, and drop the table!)
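
For reference, the cleanup is simply:

 DROP TABLE dbo.PixieDust;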

Moral: Only use NOLOCK where the risk of the above side effects is acceptable (and where this method actually improves performance or concurrency in a meaningful way). I would strongly consider READ COMMITTED SNAPSHOT as an alternative.
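
A sketch of enabling that alternative - the database name is a placeholder, the change requires no other sessions in the database, and the row versioning it enables adds tempdb overhead worth testing first:

 ALTER DATABASE YourDatabase
   SET READ_COMMITTED_SNAPSHOT ON
   WITH ROLLBACK IMMEDIATE;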

Now, I am not saying NOLOCK doesn't have its place - it is certainly fine for ballpark-style queries, ad hoc queries against large tables, non-production systems, and other scenarios. But I have witnessed many shops where it is dictated by policy that all queries must use NOLOCK. I would be very careful about this in production systems where accuracy is important; in my experience, accuracy is usually pretty important.

Conclusion

These are five of the most common reasons I see for "wrong results" issues in SQL Server. Do you see a common scenario that I haven't mentioned? Let me know in the comments below!






About the author
Aaron Bertrand (@AaronBertrand) is a passionate technologist with industry experience dating back to Classic ASP and SQL Server 6.5. He is editor-in-chief of the performance-related blog SQLPerformance.com, and also blogs at sqlblog.org.





Comments For This Article




Tuesday, December 20, 2016 - 12:23:52 PM - George

I see issues with multiple clauses in the WHERE statement and they consist of a mixture of OR and AND statements.  Without proper hierarchy with parentheses, the wrong results are all but guaranteed to be returned.  These are really tough for new programmers to find and to understand.  And unformatted/unaligned code makes it even harder to find them.


Friday, November 18, 2016 - 9:13:40 AM - Aaron Bertrand

Barry, now you're talking scope creep. I didn't want to get into the NULL / unknown / 0 conversation as that would have made the tip twice as long. Discussion for a different day.


Thursday, November 17, 2016 - 7:38:15 PM - Barry Goldman

 

I like these tips. 

But in your case 4 - Mixing up join / filter criteria, you state "Well, that didn't work so well. Sure, we got rid of Madeline, who has 5 pets, but we also got rid of Nora, who has 0 pets."

My comment is that Nora doesn't have 0 pets!

Nora has 'not recorded' or 'unknown' pets!  Which is different.

It therefore would have been helpful to expand that discussion to make the reader think more about the difference between 'Null' and 'zero'.

Personally, I would not expect to see Nora returned if the question was 'How many students have fewer than 3 pets'.

If this were a live database that I had written and the answer was critical, I would return two answers: one student has fewer than 3 pets and one student has an unknown number of pets.

Anyway - just some thoughts. Otherwise great reading as I still have so much to learn.


Thursday, November 17, 2016 - 5:36:02 PM - AZJim

Aaron ... very valuable tips.  Thank you.  I just wanted to bring up the subject of Uncommitted Read in respect to my 30+ years as a DBA.  I started with mainframes, went to Unix databases, and ended up with SQL Server.  Uncommitted Read (aka dirty read) has been around for a long time (much longer than SQL Server).  I have worked in aerospace, transportation, financial, and government industries.  The shops all had strict production controls.  We never had a problem with Uncommitted Read -- we even used it routinely in credit card applications.  While I believe that Uncommitted Read poses a slight risk, the risk is so small that the benefits outweigh these risks.  I would not say this if the shop I worked in had a high degree of rollbacks (due to poor testing) or if the OLTP database design was denormalized.  But sometimes you don't have a choice for the database design so you have to go with what you are comfortable with.  Nevertheless, what you shared was comprehensive enough to give anyone enough information to make a good decision.


Thursday, November 17, 2016 - 11:05:06 AM - Jeremy Kadlec

Aaron,

Congrats on your 100th tip!  On behalf of the community we sincerely thank you for all that you have done.  Your examples, explanations and commentary are second to none.  We sincerely appreciate all that you do!  Best wishes on another 100 tips!

Thank you,
Jeremy Kadlec
MSSQLTips.com Community Co-Leader


Thursday, November 17, 2016 - 10:06:07 AM - Kris Maly

Awesome

Hats off for your experience and keep educating the community

Thanks a lot














