By: Aaron Bertrand | Updated: 2019-01-29
Problem
As a third-party vendor, I am exposed to a large number of queries that look at SQL Server metadata – catalog views and Dynamic Management Views (DMVs). These views expose a lot of crucial and useful information about the system, but they can be a double-edged sword, as they are updated frequently and your code must adapt to match. The problem is that, as soon as you start taking advantage of a new column, that query will fail on older versions. Unless you force all of your customers to upgrade to the latest version of SQL Server, you then need to implement some type of branching or conditional logic in your code to run the right form against each version.
Solution
A simple example is sys.sql_modules. This view was introduced in SQL Server 2005 as a replacement for the legacy view syscomments, and contains information about views, functions, triggers, and procedures. SQL Server 2014 added uses_native_compilation, and now in SQL Server 2019, we have inline_type and is_inlineable.
In order to pull this information when it is available, and not fail with an error
when the columns aren't present, we need to do something like this:
DECLARE @MajorVersion int;

-- 9  = 2005       13 = 2016
-- 10 = 2008/R2    14 = 2017
-- 11 = 2012       15 = 2019
-- 12 = 2014

SELECT @MajorVersion = PARSENAME(CONVERT(varchar(32), SERVERPROPERTY('ProductVersion')), 4);

IF @MajorVersion >= 15
BEGIN
  SELECT [object_id],
    uses_native_compilation,
    inline_type,
    is_inlineable
  FROM sys.sql_modules;
END

IF @MajorVersion >= 12 AND @MajorVersion < 15
BEGIN
  SELECT [object_id],
    uses_native_compilation,
    inline_type = CONVERT(bit, NULL),   -- or 0
    is_inlineable = CONVERT(bit, NULL)  -- or 0
  FROM sys.sql_modules;
END

IF @MajorVersion < 12
BEGIN
  SELECT [object_id],
    uses_native_compilation = CONVERT(bit, NULL), -- or 0
    inline_type = CONVERT(bit, NULL),             -- or 0
    is_inlineable = CONVERT(bit, NULL)            -- or 0
  FROM sys.sql_modules;
END
That looks like valid code, but it won't run as is on older versions, because the parser will recognize the invalid column references before evaluating the conditionals (even in a stored procedure, deferred name resolution won't forgive these compilation issues):
Msg 207, Level 16, State 1
Invalid column name 'inline_type'.
Msg 207, Level 16, State 1
Invalid column name 'is_inlineable'.
We actually have to use conditional logic and dynamic SQL:
DECLARE @MajorVersion int, @sql nvarchar(max);

-- 9  = 2005           13 = 2016
-- 10 = 2008/2008 R2   14 = 2017
-- 11 = 2012           15 = 2019
-- 12 = 2014

SET @MajorVersion = CONVERT(int, PARSENAME(CONVERT(varchar(32),
  SERVERPROPERTY('ProductVersion')), 4));

IF @MajorVersion >= 15
BEGIN
  SET @sql = N'SELECT [object_id],
    uses_native_compilation,
    inline_type,
    is_inlineable
  FROM sys.sql_modules;';
END

IF @MajorVersion >= 12 AND @MajorVersion < 15
BEGIN
  SET @sql = N'SELECT [object_id],
    uses_native_compilation,
    inline_type = CONVERT(bit, NULL),   -- or 0
    is_inlineable = CONVERT(bit, NULL)  -- or 0
  FROM sys.sql_modules;';
END

IF @MajorVersion < 12
BEGIN
  SET @sql = N'SELECT [object_id],
    uses_native_compilation = CONVERT(bit, NULL), -- or 0
    inline_type = CONVERT(bit, NULL),             -- or 0
    is_inlineable = CONVERT(bit, NULL)            -- or 0
  FROM sys.sql_modules;';
END

EXEC sys.sp_executesql @sql;
Still, this has multiple problems. In addition to becoming largely unreadable spaghetti code, it will be tedious to maintain a library of which columns were added in which release (and a mapping of release to ProductVersion). If a column is added in 2019, then back-ported to 2017 CU16 and 2016 SP2 CU12, this logic just became a lot more convoluted, because you'll need full ProductVersion evaluation, as opposed to just the major build number. And this is not really much easier if you avoid stored procedures and generate these as ad hoc queries in your application code; the branching logic will just potentially be in a different language.
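If that branching does end up in application code, it might look something like this minimal Python sketch (the function name and query strings here are illustrative, not part of any particular library):

```python
# Pick the right form of the sys.sql_modules query based on the major
# version parsed from SERVERPROPERTY('ProductVersion'), e.g. "15.0.2000.5".
# The helper name and query fragments below are illustrative only.

QUERY_2019 = """SELECT [object_id], uses_native_compilation,
  inline_type, is_inlineable FROM sys.sql_modules;"""

QUERY_2014 = """SELECT [object_id], uses_native_compilation,
  inline_type = CONVERT(bit, NULL),
  is_inlineable = CONVERT(bit, NULL) FROM sys.sql_modules;"""

QUERY_OLDER = """SELECT [object_id],
  uses_native_compilation = CONVERT(bit, NULL),
  inline_type = CONVERT(bit, NULL),
  is_inlineable = CONVERT(bit, NULL) FROM sys.sql_modules;"""

def pick_query(product_version: str) -> str:
    """Return the query variant appropriate for this ProductVersion."""
    major = int(product_version.split(".")[0])
    if major >= 15:      # SQL Server 2019 and later
        return QUERY_2019
    if major >= 12:      # 2014 through 2017
        return QUERY_2014
    return QUERY_OLDER   # 2012 and older
```

It is the same maintenance burden in a different language: any back-ported column still forces you to replace the simple major-version test with full build-number comparisons.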
Rather than test by version, I think it's safer to test by presence (a commandment I learned back during the first set of browser wars). This way, you don't need to remember which specific build introduced a given column. You'll still need dynamic SQL, though:
DECLARE @sql nvarchar(max);

IF EXISTS (SELECT 1 FROM sys.all_columns AS c
    INNER JOIN sys.all_objects AS o
    ON c.[object_id] = o.[object_id]
    AND o.name = N'sql_modules'
    AND o.[schema_id] = 4
    AND c.name = N'inline_type'
)
BEGIN
  SET @sql = N'SELECT [object_id],
    uses_native_compilation,
    inline_type,
    is_inlineable
  FROM sys.sql_modules;';
END
ELSE IF EXISTS (SELECT 1 FROM sys.all_columns AS c
    INNER JOIN sys.all_objects AS o
    ON c.[object_id] = o.[object_id]
    AND o.name = N'sql_modules'
    AND o.[schema_id] = 4
    AND c.name = N'uses_native_compilation'
)
BEGIN
  SET @sql = N'SELECT [object_id],
    uses_native_compilation,
    inline_type = CONVERT(bit, NULL),   -- or 0
    is_inlineable = CONVERT(bit, NULL)  -- or 0
  FROM sys.sql_modules;';
END
ELSE
BEGIN
  SET @sql = N'SELECT [object_id],
    uses_native_compilation = CONVERT(bit, NULL), -- or 0
    inline_type = CONVERT(bit, NULL),             -- or 0
    is_inlineable = CONVERT(bit, NULL)            -- or 0
  FROM sys.sql_modules;';
END

EXEC sys.sp_executesql @sql;
And I would shorthand to the following, to avoid some of the repetition (whether this is better or worse is rather subjective):
DECLARE @sql nvarchar(max), @cols nvarchar(max) = N'[object_id]';

SET @cols += N',
  uses_native_compilation'
    + CASE WHEN NOT EXISTS (SELECT 1 FROM sys.all_columns AS c
        INNER JOIN sys.all_objects AS o
        ON c.[object_id] = o.[object_id]
        AND o.name = N'sql_modules'
        AND o.[schema_id] = 4
        AND c.name = N'uses_native_compilation'
      )
      THEN N' = CONVERT(bit, NULL)' ELSE N'' END
    + CASE WHEN NOT EXISTS (SELECT 1 FROM sys.all_columns AS c
        INNER JOIN sys.all_objects AS o
        ON c.[object_id] = o.[object_id]
        AND o.name = N'sql_modules'
        AND o.[schema_id] = 4
        AND c.name = N'inline_type'
      )
      -- column is missing, so project dummies:
      THEN N',
  inline_type = CONVERT(bit, NULL),
  is_inlineable = CONVERT(bit, NULL)'
      -- column exists, so project the real thing:
      ELSE N',
  inline_type,
  is_inlineable'
    END;

SET @sql = N'SELECT ' + @cols + N'
FROM sys.sql_modules;';

EXEC sys.sp_executesql @sql;
That works, but it is an equally ugly and unmaintainable mess.
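The test-by-presence idea can also live on the application side. Here is a minimal Python sketch using SQLite's PRAGMA table_info as a stand-in for the metadata check (against SQL Server you would run the sys.all_columns query shown above through your driver of choice; the stub table name is invented for the demo):

```python
import sqlite3

def column_exists(conn, table: str, column: str) -> bool:
    # PRAGMA table_info returns one row per column; the name is at index 1.
    rows = conn.execute(f"PRAGMA table_info({table})").fetchall()
    return any(row[1] == column for row in rows)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sql_modules_stub (object_id INTEGER, inline_type INTEGER)")

# Build the column list based on what actually exists right now:
cols = ["object_id"]
cols.append("inline_type" if column_exists(conn, "sql_modules_stub", "inline_type")
            else "NULL AS inline_type")
query = f"SELECT {', '.join(cols)} FROM sql_modules_stub"
# query -> "SELECT object_id, inline_type FROM sql_modules_stub"
```

The shape is the same as the T-SQL version: probe the metadata first, then assemble a query string that only ever references columns that exist.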
I think there is a better solution that allows you to have a single query, without dynamic SQL. It involves a quirky but definitely intentional feature of the SQL language, where you can use unqualified column references to point to an object at a different scope. This is better illustrated with a quick and simple example:
CREATE TABLE dbo.TableA(a tinyint);
INSERT dbo.TableA(a) VALUES(1),(2);

CREATE TABLE dbo.TableB(b tinyint);
INSERT dbo.TableB(b) VALUES(3),(4);
GO

-- this should error, but it does not:
SELECT * FROM dbo.TableA
WHERE EXISTS
(
  SELECT a FROM dbo.TableB
);
Obviously, there is no column named a in TableB. What happens here is SQL Server sees there is no such column at that scope, so it traverses up to the next scope and checks whether there is such a column there. This is a pretty forgiving aspect of the language, and it almost defies the intent of the code as written, but we can capitalize on it.
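This scope traversal is not a T-SQL oddity; it is how correlated subqueries resolve names in standard SQL, and you can reproduce it outside SQL Server. A quick sketch with Python's built-in sqlite3, using the same TableA/TableB shapes:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE TableA(a INTEGER);
    INSERT INTO TableA(a) VALUES (1),(2);
    CREATE TABLE TableB(b INTEGER);
    INSERT INTO TableB(b) VALUES (3),(4);
""")

# TableB has no column a, so the inner reference silently resolves to
# TableA.a in the outer scope -- no error, and every outer row matches.
rows = conn.execute(
    "SELECT * FROM TableA WHERE EXISTS (SELECT a FROM TableB)"
).fetchall()
print(rows)  # -> [(1,), (2,)]
```

This is also why an unqualified column in a `DELETE ... WHERE id IN (SELECT id FROM other)` can surprise you in any engine: if `other` lacks the column, the reference quietly binds to the outer table.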
Imagine you have a schema change that is rolling out – you're adding the column c to TableB, and you have to deploy code to your application that supports column c, even before it exists (in many scenarios, especially with distributed applications, you can't synchronize the app and database releases). Using the above example of projection, you could say:
;WITH new_columns AS
(
SELECT c = CONVERT(tinyint, NULL)
)
SELECT b,c FROM dbo.TableB
CROSS APPLY new_columns;
When TableB does not have the column c, the output is the following:
b c
---- ----
3 NULL
4 NULL
The query didn't find c in the table, so it went up a level and found the dummy column we created, instead of returning the parsing error you probably expected. Then, after we add the column to TableB:
ALTER TABLE dbo.TableB ADD c tinyint NOT NULL DEFAULT(5);
We try the query again, and now we *do* get the error we expected, because the reference became ambiguous:
Msg 209, Level 16, State 1
Ambiguous column name 'c'.
But if we apply an additional layer of nesting, we can make this query work both before and after the column is added:
;WITH new_columns AS
(
SELECT c = CONVERT(tinyint, NULL)
)
SELECT b.* FROM new_columns
CROSS APPLY (SELECT b, c FROM dbo.TableB) b;
In the current state, with the column added, the output is now:
b c
---- ----
3 5
4 5
But if we drop or rename column c, the output reverts to:
b c
---- ----
3 NULL
4 NULL
So now our query is forward- and backward-compatible, and we can deploy it to the application ahead of the actual schema change.
Now, let's apply this to our DMV query above.
;WITH new_columns AS
(
SELECT
uses_native_compilation = CONVERT(bit, NULL),
inline_type = CONVERT(bit, NULL),
is_inlineable = CONVERT(bit, NULL)
)
SELECT m.*
FROM new_columns
CROSS APPLY
(
SELECT /* other cols, */
-- repeat those dummy column names here:
uses_native_compilation, inline_type, is_inlineable
FROM sys.sql_modules
) AS m;
Note that I'm using SELECT * here, to keep the code nice and tidy, but this is definitely not a best practice. You could easily name all the columns both inside and outside the derived table m, e.g.:
;WITH new_columns AS
(
-- build your list of "maybe supported" dummy columns here
SELECT
uses_native_compilation = CONVERT(bit, NULL),
inline_type = CONVERT(bit, NULL),
is_inlineable = CONVERT(bit, NULL)
)
SELECT
-- must reference the table alias here:
m.[object_id],
/* , ... other cols in the view but NOT in the CTE ... , */
m.uses_native_compilation,
m.inline_type,
m.is_inlineable
FROM new_columns
CROSS APPLY
(
SELECT
-- can't reference the table alias here:
[object_id],
/* , ... other cols in the view but NOT in the CTE... , */
uses_native_compilation,
inline_type,
is_inlineable
FROM sys.sql_modules
) AS m;
…just don't try to qualify the columns inside of m that you have also defined in new_columns, otherwise they will never traverse to the outer scope.
A more interesting case is the DMV sys.dm_os_sys_info, which has added new columns over time and has had some columns renamed (and changed in magnitude, but that's a different story). Quite a few columns were changed or added after SQL Server 2008 was released, across several versions.
This is a lot more complex than the previous example; imagine maintaining all that conditional logic for seven different states, and having to revisit it for every new release. This gives my approach more value. We can use the exact same pattern to create a query (or our own view) that will pull info from this DMV on any version. I'm going to leave the latter part of the query as SELECT * for brevity, but please feel free to expand those column lists as you see fit:
;WITH potential_columns AS
(
SELECT
-- in case we don't encounter the new columns:
affinity_type = CONVERT(int , NULL),
affinity_type_desc = CONVERT(varchar(60) , NULL),
process_kernel_time_ms = CONVERT(bigint , NULL),
process_user_time_ms = CONVERT(bigint , NULL),
time_source = CONVERT(int , NULL),
time_source_desc = CONVERT(nvarchar(60) , NULL),
virtual_machine_type = CONVERT(int , NULL),
virtual_machine_type_desc = CONVERT(nvarchar(60) , NULL),
virtual_memory_kb = CONVERT(bigint , NULL),
physical_memory_kb = CONVERT(bigint , NULL),
committed_kb = CONVERT(int , NULL),
committed_target_kb = CONVERT(int , NULL),
visible_target_kb = CONVERT(int , NULL),
sql_memory_model = CONVERT(int , NULL),
sql_memory_model_desc = CONVERT(nvarchar(120) , NULL),
softnuma_configuration = CONVERT(int , NULL),
softnuma_configuration_desc = CONVERT(nvarchar(60) , NULL),
socket_count = CONVERT(int , NULL),
cores_per_socket = CONVERT(int , NULL),
numa_node_count = CONVERT(int , NULL),
process_physical_affinity = CONVERT(nvarchar(3072), NULL),
-- in case we do encounter the old columns:
physical_memory_in_bytes = CONVERT(bigint , NULL),
virtual_memory_in_bytes = CONVERT(bigint , NULL),
bpool_committed = CONVERT(int , NULL),
bpool_commit_target = CONVERT(int , NULL),
bpool_visible = CONVERT(int , NULL)
)
SELECT m.*
FROM potential_columns
CROSS APPLY
(
SELECT cpu_ticks, ms_ticks, cpu_count,
/* ... other columns *not* in the list above... , */
affinity_type,
affinity_type_desc,
process_kernel_time_ms,
process_user_time_ms,
time_source,
time_source_desc,
virtual_machine_type,
virtual_machine_type_desc,
physical_memory_kb,
virtual_memory_kb,
committed_kb,
committed_target_kb,
visible_target_kb,
sql_memory_model,
sql_memory_model_desc,
softnuma_configuration,
softnuma_configuration_desc,
socket_count,
cores_per_socket,
numa_node_count,
process_physical_affinity,
physical_memory_in_bytes,
virtual_memory_in_bytes,
bpool_committed,
bpool_commit_target,
bpool_visible
FROM sys.dm_os_sys_info
) AS m;
That is not pretty to look at, and if the ordinal position of the columns is important to your application, you will have to tinker with that as well. Some will argue that old-school conditional logic and branching will be easier to maintain. In any case, by defining dummy columns up front, these queries will work against any version of SQL Server that has the sql_modules and dm_os_sys_info views (I don't suggest trying them against SQL Server 2000; that's a different problem). I want to thank Martin Smith, Andriy M, and Ypercube for making this solution obvious to me.
Summary
It is possible, through a little-known property of the SQL language, to make catalog view and DMV queries backward-compatible, and even insulate them from future changes. This may not be the most intuitive approach, and it even violates a best practice that should be used elsewhere ("always properly qualify any column reference"), but for this niche use case it might be the lesser of many evils.
Next Steps
Read on for related tips and other resources:
- Make your SQL Server database changes backward compatible when adding a new column, dropping a column, renaming an entity, and changing a relationship
- Understanding Catalog Views in SQL Server 2005 and 2008
- SQL Server Monitoring Scripts with the DMVs
- SQL Server Collation and Case Sensitive DMVs
- Dynamic Management Views
- All Dynamic Management Views / Functions tips