Compare SQL Server Performance of In-Memory vs Disk Tables

By: Ben Snaidero   |   Updated: 2014-09-25   |   Comments (6)   |   Related: > In Memory OLTP


Problem

The release of SQL Server 2014 brought a lot of new features, and one of the more interesting ones is the “In-Memory OLTP” feature. One would assume that storing a table in memory would definitely be faster than using the conventional disk-based storage engine, but just how much faster is it? This tip uses a few sample SELECT, INSERT, UPDATE and DELETE statements to compare the performance of each type of table storage.

Solution

For those who are not familiar with the In-Memory OLTP feature, the following two links give a good overview of the feature, outline the requirements for using it, and provide some sample T-SQL to get you started.

Sample Table Setup

In order to run our test scenario, let's create two identical tables, with the only difference being that one will be stored the traditional way on disk and the other will be stored in memory using the new feature.

-- Table creation logic for disk-based storage
CREATE TABLE testtable_regular ([col1] [int] NOT NULL primary key nonclustered,
                        [col2] [int] NULL,
                        [col3] [int] NULL,
                        [col4] [varchar](50) NULL);

-- Table creation logic for in-memory storage                        
CREATE TABLE testtable_inmemory ([col1] [int] NOT NULL primary key nonclustered,
                        [col2] [int] NULL,
                        [col3] [int] NULL,
                        [col4] [varchar](50) NULL) WITH (MEMORY_OPTIMIZED=ON);

Now that we have a couple of tables, let's run our first test. Since the tables are empty, we'll test INSERTs first, loading the same data into each table with SQL Profiler running, and then compare the results. Here is the T-SQL to perform the inserts.

DECLARE @val INT
SELECT @val=1
WHILE @val <= 200000
BEGIN  
   INSERT INTO testtable_regular (col1, col2, col3, col4) 
       VALUES (@val,@val % 10,@val,'TEST' + CAST(@val AS VARCHAR))
   SELECT @val=@val+1
END
GO

DECLARE @val INT
SELECT @val=1
WHILE @val <= 200000
BEGIN  
   INSERT INTO testtable_inmemory (col1, col2, col3, col4) 
       VALUES (@val,@val % 10,@val,'TEST' + CAST(@val AS VARCHAR))
   SELECT @val=@val+1
END
GO

Looking at the results, we can see that, as one would expect, the inserts into the In-Memory table generate almost no disk activity, while the disk-based table generates quite a bit. There are, however, two interesting things to note from this test. First, even though the disk-based table had to perform much more IO, it was only marginally slower in overall duration. Second, the disk-based table actually used less CPU than the In-Memory table to perform the inserts.

Table Type  | Duration (ms) | CPU (ms) | Reads   | Writes
Disk Based  | 117260        | 10078    | 1014317 | 1364
In Memory   | 112693        | 11266    | 3       | 1
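
If you don't want to run SQL Profiler yourself, a rough duration comparison for the insert loops can also be captured directly in T-SQL. This is just a sketch, and it only measures elapsed time, not the reads, writes and CPU shown above.

-- Rough timing of one of the insert loops above
-- (keep everything in the same batch, with no GO in between, so the variable stays in scope)
DECLARE @start DATETIME2 = SYSDATETIME();

-- ... insert loop goes here ...

SELECT DATEDIFF(MILLISECOND, @start, SYSDATETIME()) AS duration_ms;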

Next we'll take a look at how some SELECT statements perform. We'll query both a single record from each table and a range of records. Here is the T-SQL for each of these test cases.

SELECT * FROM testtable_regular WHERE [col1]=11493
SELECT * FROM testtable_regular WHERE [col1]=51943
SELECT * FROM testtable_regular WHERE [col1]=101873
SELECT * FROM testtable_regular WHERE [col1]=131320
SELECT * FROM testtable_regular WHERE [col1]=181623
GO

SELECT * FROM testtable_regular WHERE [col1] BETWEEN 11493 AND 51943
SELECT * FROM testtable_regular WHERE [col1] BETWEEN 131320 AND 181623
GO

SELECT * FROM testtable_inmemory WHERE [col1]=11493
SELECT * FROM testtable_inmemory WHERE [col1]=51943
SELECT * FROM testtable_inmemory WHERE [col1]=101873
SELECT * FROM testtable_inmemory WHERE [col1]=131320
SELECT * FROM testtable_inmemory WHERE [col1]=181623
GO

SELECT * FROM testtable_inmemory WHERE [col1] BETWEEN 11493 AND 51943
SELECT * FROM testtable_inmemory WHERE [col1] BETWEEN 131320 AND 181623
GO

As with the first test, we ran these statements with SQL Profiler running. Looking at the results below, we see that the In-Memory table outperformed the disk-based table in every respect. The only metric that was at all close was the overall query duration, which can probably be explained by the fact that both queries have to return the same number of records to the client.

Table Type  | Query Type   | Duration (ms) | CPU (ms) | Reads | Writes
Disk Based  | Single Row   | 208           | 0        | 15    | 0
In Memory   | Single Row   | 0             | 0        | 0     | 0
Disk Based  | Multiple Row | 887           | 110      | 1796  | 0
In Memory   | Multiple Row | 690           | 62       | 0     | 0
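
As a side note, if you prefer not to use SQL Profiler for the SELECT tests, SET STATISTICS TIME and SET STATISTICS IO give similar per-query numbers. Keep in mind that STATISTICS IO only reports logical reads for the disk-based table; the memory-optimized table does not go through the buffer pool, so no read counts show up for it.

-- Per-query timing and IO without SQL Profiler
SET STATISTICS TIME ON;
SET STATISTICS IO ON;

SELECT * FROM testtable_regular  WHERE [col1] BETWEEN 11493 AND 51943;
SELECT * FROM testtable_inmemory WHERE [col1] BETWEEN 11493 AND 51943;

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;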

Finally, let's take a look at some UPDATE and DELETE statements and see how they compare. Below are some T-SQL statements to perform each of these operations.

UPDATE testtable_regular SET col3=col3+10 WHERE [col1]=21493
UPDATE testtable_regular SET col3=col3+10 WHERE [col1]=61943
UPDATE testtable_regular SET col3=col3+10 WHERE [col1]=111873
UPDATE testtable_regular SET col3=col3+10 WHERE [col1]=141320
UPDATE testtable_regular SET col3=col3+10 WHERE [col1]=191623
GO

UPDATE testtable_inmemory SET col3=col3+10 WHERE [col1]=21493
UPDATE testtable_inmemory SET col3=col3+10 WHERE [col1]=61943
UPDATE testtable_inmemory SET col3=col3+10 WHERE [col1]=111873
UPDATE testtable_inmemory SET col3=col3+10 WHERE [col1]=141320
UPDATE testtable_inmemory SET col3=col3+10 WHERE [col1]=191623
GO

DELETE FROM testtable_regular WHERE [col1]=12494
DELETE FROM testtable_regular WHERE [col1]=31944
DELETE FROM testtable_regular WHERE [col1]=91874
DELETE FROM testtable_regular WHERE [col1]=131321
DELETE FROM testtable_regular WHERE [col1]=171624
GO


DELETE FROM testtable_inmemory WHERE [col1]=12494
DELETE FROM testtable_inmemory WHERE [col1]=31944
DELETE FROM testtable_inmemory WHERE [col1]=91874
DELETE FROM testtable_inmemory WHERE [col1]=131321
DELETE FROM testtable_inmemory WHERE [col1]=171624
GO

Looking at the SQL Profiler results from these statements we again see that the In-Memory table outperforms the disk-based table in every single category.

Table Type  | Query Type | Duration (ms) | CPU (ms) | Reads | Writes
Disk Based  | UPDATE     | 5             | 0        | 25    | 0
In Memory   | UPDATE     | 3             | 0        | 0     | 0
Disk Based  | DELETE     | 5             | 0        | 45    | 0
In Memory   | DELETE     | 3             | 0        | 0     | 0
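
If you want to reset the tables and rerun any of these tests, keep in mind that TRUNCATE TABLE is not supported on memory-optimized tables, so the in-memory table has to be emptied with a DELETE (or dropped and recreated).

-- Reset between test runs
TRUNCATE TABLE testtable_regular;
DELETE FROM testtable_inmemory;   -- TRUNCATE TABLE is not supported for memory-optimized tables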

Summary

It's easy to see from these simple test results why the "In-Memory OLTP" feature of SQL Server 2014 has received so much attention. With the performance benefits we saw above, the only question left is: how much memory can I get into my server?
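
On that note, one way to see how much memory a memory-optimized table is actually consuming is the sys.dm_db_xtp_table_memory_stats DMV. A quick sketch against our test table:

-- Memory currently used by the in-memory table and its indexes
SELECT OBJECT_NAME(object_id) AS table_name,
       memory_used_by_table_kb,
       memory_used_by_indexes_kb
FROM sys.dm_db_xtp_table_memory_stats
WHERE object_id = OBJECT_ID('testtable_inmemory');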




About the author
Ben Snaidero has been a SQL Server and Oracle DBA for over 10 years and focuses on performance tuning.





Comments For This Article




Thursday, June 30, 2016 - 4:53:54 AM - panos

Hi,

 

Does anyone have any results with 2016? It is supposed to be more efficient than 2014 with in-memory tables.

Thanks,

Panos.


Tuesday, September 29, 2015 - 10:57:30 AM - Peter Boswell

Ben,

I just completed an experiment with a large, simple table. I converted it from disk to In Memory. However, this table contains important tax data and data durability is required.

In my experiment, the In Memory table is about 1 gig. The guides and wizards provided by Microsoft with SQL Server 2014 were excellent. In this experiment the code writes thousands of rows of tax data into this table as part of a tax calculation.

Here are the results:

The In Memory operations for a large tax calculation took twice as long with the In Memory table as with the disk table. SQL Profiler indicated that the In Memory table was faster event by event. But SQL Profiler does not track the time SQL Server takes to write out the In Memory file stream files to persist the changes. So there is just a lot of time between SQL Profiler events (up to 500 milliseconds). I could find no SQL Profiler event to track this. Using the Disk Resource Monitor, I could see these files, ranging from 16 meg to 208 meg, being rewritten. Now not all of them are updated, just those that change.

Selected complex queries that included this table, converted to In Memory, took 5x as long and performed 5x the reads on the disk-based tables. So the Query Optimizer needs to be improved when working with both In Memory and disk-based tables. Not surprising for something so new.

As it stands today, my recommendation within my company is to limit In Memory Tables to items like Tax Rates which are read with simple queries and do not get updated often.

I hope Microsoft continues to find ways to let DBAs push selected tables into memory for selected operations (even primary key ranges). 

Thanks,

Peter


Friday, May 22, 2015 - 5:31:45 PM - Bob

Have you ever noticed in-memory tables performing more slowly than a regular, permanent table?

 

Bob


Friday, October 10, 2014 - 11:16:43 AM - Cristiano Ghersi

Ben, thank you for your post.

Like you, I'm evaluating this great new feature.

Unfortunately, in real scenarios the limitations of In-Memory tables require a much deeper analysis: e.g. clustered indexes on disk-based tables can definitely achieve better performance than nonclustered ones, not to mention tables with composite PKs, which are tough for hash indexes in in-memory tables...

So, I think this article is a very good starting point and I would love to read other articles of yours where other kinds of tables are compared (multi-column PKs, clustered PK in a disk table vs. nonclustered PK in a memory table, persisting only schema vs. schema and data in in-memory tables, hash index tuning and performance, etc.)

Thank you

Cristiano

 


Friday, October 10, 2014 - 7:01:35 AM - Colin Allen

AZJim, if you read the links that Ben provides, the second describes why it isn't the same as having a normal table permanently in memory. The queries will use the Hekaton database engine, which is optimised for in-memory tables and uses lock- and latch-free structures. It introduces a lot of restrictions as well, so if you are not prepared to redesign the application the easy option is more memory or SSDs.


Thursday, September 25, 2014 - 4:28:01 PM - AZJim

I would suspect that the real-world elapsed/duration times would be closer to the multi-row numbers. Most of the time in transaction-based systems the data page will already be in memory for disk-based tables (not always, but a majority of the time). The question that has to be asked is whether the cost of manually directing memory is a better cost proposition than keeping the memory under the control of the database manager. Perhaps in some cases smaller tables would normally get their data pages overwritten and thus would be good candidates for in-memory tables. Now if you have a heavily used non-clustered/covering index that could be directed there, I could see great value.

For a star-schema data warehouse, I could see greater value. Large queries (lots of merge joins, order bys, avgs, etc.) would overwrite the entire buffer pool. Keeping the "non-monster" dimension tables in memory would make sense.














