Can switching compatibility level from 100 to 130 lead to locking or deadlock problems?


We are currently testing a switch to compatibility level (CL) 130, up from 100, in our development environment (SQL Server 2016). After the switch we noticed some errors:

could not execute batch command.[SQL: SQL not available] Transaction (Process ID 54) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.

After some research, tracing and debugging I was able to confirm that we actually had a deadlock issue. We use a .NET application which accesses the database via NHibernate. Some internal tasks (within the .NET application) can be set up to use parallelism to finish quicker. These tasks usually split their workload in a way that a (row) deadlock should be impossible, i.e. Task 1 and Task 2 may access Table A and Table B at roughly the same time, but they will never access the same rows within each table.

The tasks call some stored procedures which do something simple like:

UPDATE t1
SET t1.Col1 = 'ValueY'
FROM dbo.Tab1 AS t1
JOIN dbo.Tab2 ON …
JOIN dbo.Tab3 ON …
JOIN dbo.Tab4 ON …
…
WHERE t1.Col2 = 'ValueX'

In essence this will run through Tab1, search for rows to be updated and update those.

This was all running fine under compatibility level (CL) 100. After the switch to CL 130 we sometimes get deadlocks that we did not encounter before.

The deadlock graph shows two key locks on the same object id / hobt id, where two different server processes are each holding an X lock and requesting a U lock.

If I add irrelevant rows to table Tab1 for this particular test, the number of pages increases to 23 and there are no more issues.

I have read that this whole issue might be caused by the small number of rows/pages: the optimizer/server behaves differently than it would with a table of millions of rows, which leads to different lock behaviour and might result in deadlocks.
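
For reference, one way to check how many pages Tab1 occupies is sys.dm_db_partition_stats (a sketch, using the table name from the example above):

-- Sketch: total used pages for dbo.Tab1 across all partitions/indexes
SELECT SUM(used_page_count) AS used_pages
FROM sys.dm_db_partition_stats
WHERE object_id = OBJECT_ID(N'dbo.Tab1');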

But this does not answer my question: does the compatibility level, when switching from 100 to 130, directly affect locking, and can it even cause deadlock issues where there were none before?

PS: This is not a lock escalation issue, as I have turned lock escalation off for table Tab1.
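
For reference, that was done along these lines (table name taken from the example above):

-- Disable lock escalation for the table
ALTER TABLE dbo.Tab1 SET (LOCK_ESCALATION = DISABLE);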


1 Answer

Dan Guzman

Does the compatibility level, when switching from 100 to 130, directly affect locking and may even cause deadlock issues, where there were none before?

Directly, no. Indirectly, yes. Blocking and deadlocking are often the result of suboptimal execution plans that touch more data than needed for the task at hand.

When the database compatibility level is changed to SQL Server 2014 (120) or later, the new cardinality estimator (CE) is used by default. This can result in different execution plans, better or worse, than with the legacy CE used under the lower compatibility level. It may be that some of your queries suffer from plan regression.
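
If you want to confirm which CE model a cached plan actually compiled under, one way is to inspect the CardinalityEstimationModelVersion attribute of the showplan XML; a minimal sketch (the TOP (10) and the ordering are only for illustration):

-- Sketch: list cached plans with the CE model version they compiled under
SELECT TOP (10)
    qp.query_plan.value('(//@CardinalityEstimationModelVersion)[1]', 'int') AS ce_model_version,
    qs.execution_count,
    qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY qs.execution_count DESC;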

Try using ALTER DATABASE SCOPED CONFIGURATION to use the legacy CE even with the newer compatibility level:

USE YourDatabase;
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;

If this mitigates the blocking and deadlocking, take a look at the query plans for query and index tuning opportunities. It is not uncommon for queries already in need of tuning to regress further after an upgrade. Also, you might try turning on the QUERY_OPTIMIZER_HOTFIXES option, which is off by default out of an abundance of caution.
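
That option is set through the same database-scoped configuration mechanism, e.g.:

ALTER DATABASE SCOPED CONFIGURATION SET QUERY_OPTIMIZER_HOTFIXES = ON;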

If you find only a few cases where the legacy CE is needed, consider adding the OPTION (USE HINT('FORCE_LEGACY_CARDINALITY_ESTIMATION')) query hint to those queries. That will allow you to use the latest CE by default.
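
As a sketch, applied to the update pattern from the question (the join predicate below is an assumed placeholder, not from the original):

-- Force the legacy CE for this statement only; the join columns are
-- assumed placeholders for illustration
UPDATE t1
SET t1.Col1 = 'ValueY'
FROM dbo.Tab1 AS t1
JOIN dbo.Tab2 AS t2
    ON t2.Tab1Id = t1.Id
WHERE t1.Col2 = 'ValueX'
OPTION (USE HINT('FORCE_LEGACY_CARDINALITY_ESTIMATION'));

Note that USE HINT requires SQL Server 2016 SP1 or later.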