Do wide indexes slow queries down?

I got asked this question during one of my TechEd sessions. It was a bit complex to answer there and then, so I answered by mail a few days later, and I thought it would make a good blog post if expanded a bit more.

Assume a table with 5 columns (imaginatively called A, B, C, D and Z). A and B are ints, C is a datetime and D is a char(50). Let’s assume that Z is an int identity, is the primary key and has the clustered index.

If I put an index on A and B, the size of the index key (ignoring headers and other overhead) is 8 bytes. Add in the clustering key for the row locator and each index leaf record is 12 bytes wide. That means that (at 100% fill factor) there are around 660 index rows per leaf page. Since I’m ignoring headers, this rough calc will not exactly match reality, but it’s close enough.

If the table has 100000 rows, the leaf level of the index consists of 152 pages.

To calculate the number of pages above, one should know that for each leaf page, the next level higher has one index row, the first key value on that page. Hence, for 152 leaf pages, the level above will have 152 index rows at 12 bytes each, totalling 1824 bytes. This will fit on one page, so the level above the leaf is the index root, and this index has only two levels.
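
A rough sketch of that arithmetic (assuming roughly 8096 usable bytes on an 8KB page; the ~660 figure above rounds down to allow for row and page overhead):

-- Narrow index: 12-byte leaf rows, 100000 rows in the table
SELECT 8096 / 12                   AS ApproxRowsPerLeafPage,  -- ~674, call it ~660 with overhead
       CEILING(100000 / 660.0)     AS LeafPages,              -- 152
       CEILING(152 * 12 / 8096.0)  AS PagesInNextLevel        -- 1, so that single page is the root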

To seek a single row from this index hence requires 2 page reads.

Now, the clustered index has row sizes of around (again ignoring row headers) 70 bytes. At that size, there will be 115 rows per page, for a total of 870 pages. For the level above, only the clustering key (4 bytes) and the page pointer (which is a binary(6)) need to be stored, and 870 index entries are required. At 10 bytes per row, that means that 2 pages are required at this level. That means that there is one more level to the index, the root, making the clustered index a three-level index.
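
The same sort of sketch for the clustered index, under the same assumption about usable space per page:

-- Clustered index: ~70-byte leaf rows, ~10-byte intermediate rows
SELECT 8096 / 70                   AS ApproxRowsPerLeafPage,  -- ~115
       CEILING(100000 / 115.0)     AS LeafPages,              -- 870
       CEILING(870 * 10 / 8096.0)  AS IntermediatePages       -- 2, so a root page sits above them (3 levels)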

To seek a single row from the clustered index hence requires 3 page reads.
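
Rather than calculating the depth by hand, you can also ask SQL Server directly. A quick sketch, using the demo table and indexes created further down in this post:

-- Depth of each index on the demo table, as reported by SQL Server
SELECT name, INDEXPROPERTY(object_id, name, 'IndexDepth') AS IndexDepth
FROM sys.indexes
WHERE object_id = OBJECT_ID('TestingWideIndexes')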

Let’s take a quick example query.
SELECT A,B FROM TestTable WHERE A=2 and B=16
Let’s further assume that this returns 100 rows, and that those 100 rows are situated on two neighbouring pages of the nonclustered index (worst case). To completely satisfy this query requires 3 page reads: 2 to find the beginning of the range and a third for the next page (index leaf pages have next and previous page pointers). Very fast query, very few IOs.

Now a second query
SELECT A,D FROM TestTable WHERE A=2 and B=16

The index rows needed for this query are on the same two leaf pages of the nonclustered index. The problem is, the nonclustered index doesn’t contain D, and so a bookmark/key lookup is required for each row. 100 key lookups at 3 reads each means that in total this query will take 303 page reads.
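
Spelled out, since it’s easy to miss where the extra 3 comes from:

-- 3 reads for the range seek on the nonclustered index, plus a 3-read clustered index seek per row returned
SELECT 3 + (100 * 3) AS TotalPageReads  -- 303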

What if we widened the nonclustered index? I’ll put D as an include column.

Now, the leaf index rows are around 62 bytes in size, meaning that the nonclustered index has 130 rows per page and a total of 770 leaf pages.

The upper-level rows of the index still only require 12 bytes each, and so the 770 leaf pages need 770 index entries at 12 bytes each, totalling 9240 bytes, which takes 2 pages. This means that another level of the index is required, so this nonclustered index is three levels deep and a single-row seek requires 3 reads.
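
The same back-of-envelope sketch for the widened index:

-- Widened index: ~62-byte leaf rows
SELECT 8096 / 62                   AS ApproxRowsPerLeafPage,  -- ~130
       CEILING(100000 / 130.0)     AS LeafPages,              -- 770
       CEILING(770 * 12 / 8096.0)  AS IntermediatePages       -- 2, so a root sits above them (3 levels)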

Back to the two examples.

SELECT A,B FROM TestTable WHERE A=2 and B=16
This still returns 100 rows and those 100 rows are still across two index leaf pages (again assuming worst case). Now this query requires 4 reads: 3 to find the start of the range and 1 to get the next page.

SELECT A,D FROM TestTable WHERE A=2 and B=16
This also still returns 100 rows and those 100 rows are also still across two index leaf pages. The difference now is that D is contained within the index and so no bookmark/key lookup is required. This query also requires only 4 page reads, 3 to find the start and 1 to get the next page.

So, the width of the index rows has gone from 12 bytes to 62 bytes (roughly five times wider), and the queries that use index seeks only require one more read.

The theory looks nice, let’s see if reality agrees with it.

Create Table TestingWideIndexes (
A int not null,
B int not null,
C datetime not null,
D char(50) not null,
Z int identity primary key -- defaults to clustered
)
GO

Insert into TestingWideIndexes (A,B,C,D)
SELECT ID/15, FLOOR((RAND(ID*1422)*10)), DATEADD(hh, ID, '1990/01/01'), 'x' FROM (
SELECT top 100000 a.number*100+b.number AS ID
from master..spt_values a cross join master..spt_values b
where a.name is null and b.name is null) x
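
A quick optional sanity check on the sample data:

-- The insert above should have produced 100000 rows
SELECT COUNT(*) AS RowCnt FROM TestingWideIndexes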

CREATE INDEX Testing_1 ON TestingWideIndexes (A, B)

SELECT * from sys.dm_db_index_physical_stats(db_id(), object_id('TestingWideIndexes'), 4, 0, DEFAULT )
-- page count 186
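
The 4 there is just the index_id that Testing_1 happened to get; if it differs on your system, look it up rather than hard-coding it:

-- Look up the index_id instead of hard-coding it
SELECT name, index_id FROM sys.indexes WHERE object_id = OBJECT_ID('TestingWideIndexes')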

SET STATISTICS IO ON
GO

SELECT A,B FROM TestingWideIndexes WHERE A=180 and B=3
-- Table 'TestingWideIndexes'. Scan count 1, logical reads 2
-- SQL Server Execution Times:
--   CPU time = 0 ms,  elapsed time = 0 ms.

SELECT A,D FROM TestingWideIndexes WHERE A=180 and B=3
-- Table 'TestingWideIndexes'. Scan count 1, logical reads 268
-- SQL Server Execution Times:
--   CPU time = 16 ms,  elapsed time = 0 ms.

DROP INDEX Testing_1 ON TestingWideIndexes
CREATE INDEX Testing_2 ON TestingWideIndexes (A, B) INCLUDE (D)
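
To see the extra level and the extra leaf pages directly, the DETAILED mode of sys.dm_db_index_physical_stats returns one row per index level:

-- One row per level per index; index_depth should show 3 for the widened nonclustered index
SELECT index_id, index_level, index_depth, page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('TestingWideIndexes'), NULL, NULL, 'DETAILED')
ORDER BY index_id, index_level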

SELECT A,B FROM TestingWideIndexes WHERE A=180 and B=3
-- Table 'TestingWideIndexes'. Scan count 1, logical reads 3
-- SQL Server Execution Times:
--   CPU time = 0 ms,  elapsed time = 0 ms.

SELECT A,D FROM TestingWideIndexes WHERE A=180 and B=3
-- Table 'TestingWideIndexes'. Scan count 1, logical reads 3
-- SQL Server Execution Times:
--   CPU time = 0 ms,  elapsed time = 0 ms.

So to answer the question, the wider index hasn’t slowed the query down, it’s made it faster.

All well and good. What about scans though? With more index leaf pages, they’ll require more IOs and should be slower.

-- with both indexes created (Testing_1 recreated, since it was dropped above)
select A, B from TestingWideIndexes WITH (Index = Testing_1)
select A, B from TestingWideIndexes WITH (Index = Testing_2)

Results:

Table 'TestingWideIndexes'. Scan count 1, logical reads 188
SQL Server Execution Times:
CPU time = 32 ms,  elapsed time = 15141 ms.

Table 'TestingWideIndexes'. Scan count 1, logical reads 813
SQL Server Execution Times:
CPU time = 31 ms,  elapsed time = 15660 ms.

I ran it several times and the CPU times varied on each run; sometimes the wide index had the higher CPU time, sometimes the narrow one did. They were always very close. So again here, the wider index has not slowed the query down.

9 Comments

  1. Martin

    Very nice article. Will make one think before creating indexes just for the sake of creating indexes next time.

  2. Michelle

    Great article, thank you for sharing! 🙂

  3. Erik

    Thanks for writing the well explained articles on wide indexes. I have a related question concerning placement of primary key fields in a table. Given that row width dictates the number of data pages, does it make a performance difference depending on whether primary key columns are located on the left or right side of a table’s definition? I have a client with an existing database and all of the key columns are on the far left of the table definition. In many cases a text field precedes a primary key IDENTITY int column. Would this database perform better if the key columns were moved to the far left?

  4. Gail

    The primary key should be on the column (or set of columns) that uniquely identifies a row. The clustered index is a different matter, it may be enforcing the pk or it may be elsewhere with the pk being enforced by a nonclustered index.

    The defined position of the columns is completely irrelevant. SQL doesn’t even necessarily store the columns on the page in the order that they’re defined in the table.

  5. Erik

    Thanks for your reply. In my post, I meant to say “far right” not far left, but it looks like you understood anyway.

    So even if the table has a clustered index on the pk, and the pk field(s) is(are) to the far right in the table def, it will not have ANY effect on performance, right? For some reason, it seems like if the keys were far to the right, SQL Server may have to read additional pages before it encounters the key data.

  6. Gail

    The order you specify the columns is not necessarily the order they are stored in. I don’t understand your concern about reading additional pages. A row’s data is stored in a single page (excluding row overflow and LOB), hence the reason for the 8kb limit on row size.
    In the non-leaf levels of the index, only the index key is stored, not the other columns.

    There’s an article on SQLServerCentral today on basics of clustered indexes, may help you. http://www.sqlservercentral.com/articles/Indexing/68563/

  7. jesi

    Hi,
    from the article
    “To calculate the number of pages above, one should know that for each leaf page, the next level higher has two index rows in – the first value on the page and the last. Hence, for 152 leaf pages, the level above will have 304 index rows.”

    so i created a table with just one column (int), inserted 100000 records into it (unique values) and i used dbcc page to look at one of the intermediate level pages. from the output i found that for each leaf level page there is only one entry in the intermediate level page, and it stores only the starting key value for that leaf page. below is the output of the intermediate page

    FileId PageId Row Level ChildFileId ChildPageId a (key) UNIQUIFIER (key) KeyHashValue
    ------ ------ --- ----- ----------- ----------- ------- ---------------- ----------------
    1      1704   0   1     1           1640        NULL    NULL             (030014bf17fb)
    1      1704   1   1     1           1641        72      NULL             (030014bf17fb)
    1      1704   2   1     1           1642        143     NULL             (030014bf17fb)
    1      1704   3   1     1           1643        214     NULL             (030014bf17fb)
    1      1704   4   1     1           1644        285     NULL             (030014bf17fb)
    1      1704   5   1     1           1645        356     NULL             (030014bf17fb)
    1      1704   6   1     1           1646        427     NULL             (030014bf17fb)
    1      1704   7   1     1           1647        498     NULL             (030014bf17fb)

    am i doing something wrong here?

    1. Gail

      No, you didn’t. I need to edit that post, it’s wrong.

  8. Gail

    Fixed. Sorry about that, been meaning to come back and fix this for a few weeks.

