I have a long-running stored procedure with a lot of statements. After analyzing it, I identified a few statements that take most of the time. They are all UPDATE statements.

Looking at the execution plan, the query scans the source table in parallel in a few seconds, then passes the rows through a Gather Streams operator into the update itself, which runs serially.

[screenshot: execution plan]

This is similar to the behavior described in the article below, and we see the same slowness with index creation statements too:

https://brentozar.com/archive/2019/01/why-do-some-indexes-create-faster-than-others/

The table has 60 million records and is a heap, since we do a lot of data loads, updates, and deletes.

Reading the source is not the problem, as it completes in a few seconds; the actual update, which runs serially, takes most of the time.


There are 2 answers

Stu

A few suggestions to try:

If you have indexes on the target table, dropping them before the load and recreating them afterwards should improve insert performance, as sketched below.
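A minimal sketch of that pattern; the table, index, and column names are placeholders:

    -- Drop the nonclustered index before the load ...
    DROP INDEX IX_TargetTable_Col1 ON dbo.TargetTable;

    -- ... run the inserts/updates here ...

    -- ... then recreate the index afterwards.
    CREATE NONCLUSTERED INDEX IX_TargetTable_Col1
        ON dbo.TargetTable (Col1);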

Add a WITH (TABLOCK) hint to the INSERT INTO [Table] statement for the table you are inserting into; this lets SQL Server take an exclusive lock on the table and allows the insert itself to run in parallel.

Alternatively, if that doesn't yield an improvement, try adding an OPTION (MAXDOP 1) hint to the query. Both variants are sketched below.
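A rough sketch of both variants, with placeholder table and column names (note that parallel INSERT ... SELECT with TABLOCK requires SQL Server 2016 or later):

    -- TABLOCK takes an exclusive table lock and makes the insert
    -- itself eligible for a parallel plan (SQL Server 2016+):
    INSERT INTO dbo.TargetTable WITH (TABLOCK) (Col1, Col2)
    SELECT Col1, Col2
    FROM dbo.SourceTable;

    -- Alternatively, force a serial plan for the whole statement:
    INSERT INTO dbo.TargetTable (Col1, Col2)
    SELECT Col1, Col2
    FROM dbo.SourceTable
    OPTION (MAXDOP 1);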

Max Emanuelsson

How often do you UPDATE the rows in this heap? Unlike a clustered index, a heap uses a RID to find specific rows. The catch is that (unless you specifically rebuild the table) when an update moves a row, the old location remains behind as a forwarding pointer to the new location, increasing the number of lookups needed every time that row is touched. I don't really think that is the main factor here, but could you possibly add a clustered index to the table and see how the update times are affected?
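For example, something like this sketch (the index and key column names are placeholders, and rebuilding a 60-million-row table is itself a heavy operation, so test it off-hours):

    -- Converting the heap to a clustered index eliminates RID lookups
    -- and forwarding pointers:
    CREATE CLUSTERED INDEX CIX_TargetTable_Id
        ON dbo.TargetTable (Id);

    -- Or, to keep the heap but clean up existing forwarded records:
    ALTER TABLE dbo.TargetTable REBUILD;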

Also, I assume you don't have some heavy trigger on the table doing a bunch of extra work as well, right?

Additionally, since you are referring to an article by Brent Ozar: he advocates breaking updates into batches of no more than 4,000 rows at a time, both because that has proven to be fastest and because it stays below the roughly 5,000-lock threshold at which SQL Server escalates to an exclusive table-level lock during updates. A sketch of that pattern follows.
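A minimal sketch of that batching pattern, with placeholder table and column names; it assumes the WHERE clause excludes rows that have already been updated, so each iteration makes progress:

    WHILE 1 = 1
    BEGIN
        -- Update at most 4000 rows per iteration to stay below the
        -- ~5000-lock threshold where SQL Server escalates to a table lock.
        UPDATE TOP (4000) dbo.TargetTable
        SET StatusCode = 'Processed'
        WHERE StatusCode <> 'Processed';

        IF @@ROWCOUNT = 0
            BREAK;  -- no rows left to update
    END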