Does the sort order of a SQL Server 2008+ clustered index impact the insert performance?
The data type in this specific case is integer, and the inserted values are ascending (IDENTITY). With a descending clustered index, the sort order of the index would therefore be opposite to the sort order of the values being inserted.
My guess is that it will have an impact, but I don't know; maybe SQL Server has some optimization for this case, or its internal data storage format is indifferent to it.
Please note that the question is about INSERT performance, not SELECT performance.
Update
To be more clear about the question: what happens when the values to be inserted (integer) arrive in ascending order (ASC) while the clustered index is ordered descending (DESC)?
There is a difference: inserting out of cluster order causes massive fragmentation.
When you run the following code, the DESC clustered index generates additional UPDATE operations at the non-leaf level. The two INSERT statements produce exactly the same execution plan, but the operational stats reveal the difference in [nonleaf_update_count].
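The code referred to here wasn't carried over into this excerpt; the following is a minimal sketch of the kind of repro described, with illustrative table and index names (dbo.AscTable, dbo.DescTable):

```sql
-- Two identical tables, differing only in the direction of the
-- clustered index on the IDENTITY column.
CREATE TABLE dbo.AscTable  (Id INT IDENTITY(1,1) NOT NULL, Filler CHAR(200) NOT NULL DEFAULT 'x');
CREATE TABLE dbo.DescTable (Id INT IDENTITY(1,1) NOT NULL, Filler CHAR(200) NOT NULL DEFAULT 'x');

CREATE UNIQUE CLUSTERED INDEX CIX_Asc  ON dbo.AscTable  (Id ASC);
CREATE UNIQUE CLUSTERED INDEX CIX_Desc ON dbo.DescTable (Id DESC);

-- Insert enough rows to allocate many pages.
INSERT INTO dbo.AscTable (Filler)
SELECT TOP (100000) 'x' FROM sys.all_objects a CROSS JOIN sys.all_objects b;

INSERT INTO dbo.DescTable (Filler)
SELECT TOP (100000) 'x' FROM sys.all_objects a CROSS JOIN sys.all_objects b;

-- Compare non-leaf activity for the two clustered indexes.
SELECT OBJECT_NAME(ios.object_id)  AS table_name,
       ios.leaf_insert_count,
       ios.nonleaf_insert_count,
       ios.nonleaf_update_count
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS ios
WHERE ios.object_id IN (OBJECT_ID('dbo.AscTable'), OBJECT_ID('dbo.DescTable'));
```

On a test run the DESC table should show a noticeably higher [nonleaf_update_count] for the same number of leaf inserts.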
There is an extra under-the-hood operation going on when SQL Server works with a DESC index that runs against the IDENTITY order. The DESC table becomes fragmented (rows are inserted at the start of each page), and additional updates occur to maintain the B-tree structure.
The most noticeable thing about this example is that the DESC clustered index ends up over 99% fragmented. This recreates the same bad behaviour as using a random GUID for a clustered index key. The code below demonstrates the fragmentation.
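The fragmentation-checking code itself isn't in this excerpt; a sketch using sys.dm_db_index_physical_stats, with dbo.AscTable and dbo.DescTable as illustrative names for the two test tables:

```sql
-- Report logical fragmentation for the two test tables
-- (dbo.AscTable / dbo.DescTable are illustrative names).
SELECT OBJECT_NAME(ips.object_id)       AS table_name,
       ips.index_type_desc,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
WHERE ips.object_id IN (OBJECT_ID('dbo.AscTable'), OBJECT_ID('dbo.DescTable'));
```

With the DESC index running against the IDENTITY order, [avg_fragmentation_in_percent] for that table should approach 99%, while the ASC table stays near zero.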
UPDATE:
On some test environments I'm also seeing that the DESC table is subject to more waits, with an increase in [page_io_latch_wait_count] and [page_io_latch_wait_in_ms].
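Those wait counters come from the same operational-stats DMV; a sketch of the query, again with illustrative table names:

```sql
-- PAGE_IO_LATCH wait statistics per index, from the operational-stats DMV
-- (dbo.AscTable / dbo.DescTable are illustrative names).
SELECT OBJECT_NAME(ios.object_id)   AS table_name,
       ios.page_io_latch_wait_count,
       ios.page_io_latch_wait_in_ms
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS ios
WHERE ios.object_id IN (OBJECT_ID('dbo.AscTable'), OBJECT_ID('dbo.DescTable'));
```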
UPDATE:
Some discussion has arisen about what the point of a descending index is when SQL Server can perform backward scans. Please read this article about the limitations of backward scans.