# Syndication

## Index selectivity and index scans

A question was raised some time back: ‘If an index is not selective, will the query operators that use it always be index scans?’

It’s an interesting question, and it requires a look at what’s going on behind the scenes to answer properly.

### Selectivity

Selectivity is a measure of what portion of the table satisfies a particular query predicate. The Microsoft whitepaper on statistics as used by the query optimiser defines selectivity as follows.

“The fraction of rows from the input set of the predicate that satisfy the predicate. More sophisticated selectivity measures are also used to estimate the number of rows produced by joins, DISTINCT, and other operators.”

Bart Duncan wrote a nice detailed blog post a while back explaining the difference between density, selectivity and cardinality. In summary, indexes have density, a measure of how unique the left-based column subsets within them are; predicates have selectivity, a measure of what portion of the table they affect; operators have cardinality, a measure of how many rows the operator processes.

Indexes cannot be said to be selective or not; they can only be said to have a high or low density. It is possible for a predicate on a very low density (unique) column to have very poor selectivity (a large percentage of the table affected). Imagine ID > 0 where ID is an int identity column. The column is unique, but the predicate affects the entire table. Low density (which is good), but poor selectivity.
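As an aside, the density that an index actually has can be seen in the density vector of its statistics. A quick sketch, using hypothetical table and index names:

```sql
-- View the density vector for an index's statistics
-- (dbo.SomeTable and idx_SomeTable_SomeColumn are hypothetical names).
DBCC SHOW_STATISTICS ('dbo.SomeTable', 'idx_SomeTable_SomeColumn') WITH DENSITY_VECTOR;
```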

So let’s alter the original question. “If an index has a high density (not very unique, lots of duplicate values), will query operators against it always be index scans rather than index seeks?”

### Seek and Scan

Before we go on, I want to quickly look at the main difference between a seek operation and a scan operation.

A seek is an operation which navigates down the index’s b-tree looking for a row or for the start/end of a range of rows. A seek requires a predicate, and that predicate must be of a form that can be used as a search argument (SARGable).

A scan is a read of part or all of the leaf level of an index.

### High-density indexes

So what exactly is the problem with a high density index? In short, it returns a lot of rows for any predicate filtered against it (unless there’s a TOP involved, but let’s ignore those cases here). If the index has a high density (and let’s assume for simplicity that there’s no data skew here), any predicate using that index automatically has poor selectivity: it returns a large portion of the table.

If we take as an example a 100 000 row table with a column called Status that has only 4 values, then, assuming that the distribution of those values is equal, a query with a predicate searching for one of those values will read 25 000 rows. If we have a nonclustered index on that integer column, it works out that the nonclustered index has 223 pages at the leaf level and is 2 levels deep in total. Given that the four values have an equal distribution, an index seek to retrieve the rows for one of those status values will require approximately 57 pages to be read.
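Those figures aren’t guesses; once the test table below has been built, the index depth and per-level page counts can be confirmed with something like this quick sketch:

```sql
-- Confirm the depth and per-level page counts of the indexes on the test
-- table (the table and index are created in the setup script below).
SELECT index_id, index_level, index_depth, page_count
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.TestingIndexSeeks'), NULL, NULL, 'DETAILED');
```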

Is the index scan better?

The scan will read all the leaf pages, that’s what a scan does (ignoring cases like min, max and top where it can scan and read only part of the index). So if SQL decided to use an index scan because of the high density of the index, it would have to read all 100 000 rows on all 223 pages (plus the index root page).

57 pages for the index seek vs 224 pages for the index scan. Looks pretty obvious which is better. To prove that I’m not making things up, let me test this and get actual numbers.

First the setup:

```sql
CREATE TABLE TestingIndexSeeks (
    Status INT,
    Filler CHAR(795) DEFAULT ''
);
GO

INSERT INTO TestingIndexSeeks (Status)
SELECT NTILE(4) OVER (ORDER BY (SELECT 1)) AS Status
FROM (
    SELECT TOP (100000) 1 AS Number
    FROM sys.columns a CROSS JOIN sys.columns b
) sub;
GO

CREATE NONCLUSTERED INDEX idx_Testing_Status ON dbo.TestingIndexSeeks (Status);
GO
```

Then the test:

```sql
SELECT Status FROM dbo.TestingIndexSeeks WITH (FORCESEEK) WHERE Status = 3;

SELECT Status FROM dbo.TestingIndexSeeks WITH (FORCESCAN) WHERE Status = 3;
```
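For anyone wanting to reproduce the reads, statistics IO can be switched on before running the two test queries; a minimal sketch:

```sql
-- Capture the page reads for the forced seek and the forced scan.
SET STATISTICS IO ON;

SELECT Status FROM dbo.TestingIndexSeeks WITH (FORCESEEK) WHERE Status = 3;
SELECT Status FROM dbo.TestingIndexSeeks WITH (FORCESCAN) WHERE Status = 3;

SET STATISTICS IO OFF;
```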

Statistics IO for the two queries, the seek first and then the scan:

Yup, the index seek is better, and it’s the one that the optimiser chooses if it is allowed to choose.

### High density indexes and the clustered index

So why the confusion around index scans on high density indexes? I suspect it’s because of the way the optimiser handles noncovering indexes where the predicates are not selective. This has nothing to do with the efficiency of the seek or scan operators on the nonclustered index though, it’s got to do with the mechanism used for the key lookup.

If a nonclustered index that SQL could use for a query is not covering, then for each row in that resultset it has to do a lookup back to the cluster/heap for the rest of the columns. Those key (or RID) lookups are expensive operations. If too many of them are needed, the optimiser switches to a scan, not of the nonclustered index (that would be pointless, it’s still not covering), but of the clustered index, because that at least has all the columns needed for the query. (It could also switch to a scan of a different nonclustered index if there is one that’s covering but with columns in the wrong order to be seekable.)
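As an illustrative sketch of the usual fix (table and column names here are hypothetical, not from the test above): adding the remaining query columns as included columns makes the nonclustered index covering, so no lookups are needed at all.

```sql
-- With the extra columns INCLUDEd, the nonclustered index covers the query
-- below, so the plan needs neither key lookups nor a clustered index scan.
CREATE NONCLUSTERED INDEX idx_Orders_OrderStatus
ON dbo.Orders (OrderStatus)
INCLUDE (OrderDate, TotalDue);

SELECT OrderDate, TotalDue
FROM dbo.Orders
WHERE OrderStatus = 'Pending';
```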

### Summary

In summary, does having a high density nonclustered index result in index scans of that index? No (unless the predicate is not SARGable). It can, however, result in scans of a different index (probably the clustered index) if that high density index is not covering for a query, in which case the high density index simply goes unused.

## Repairing a replication subscriber

It’s a question which has come up a couple of times: if a subscriber of a transactional replication publication becomes corrupt, is running CheckDB with REPAIR_ALLOW_DATA_LOSS safe?

The theory is, since the subscriber is a copy of another database, allowing CheckDB to discard data in the process of repairing won’t actually cause data loss (the data is still there in the publisher).

As with many things, it’s partially true; however, there’s a little more to it than just that.

To see why, let’s set up some replication and intentionally corrupt some data in the subscriber and run a repair.

First, the source DB. I’m going to keep it really simple so that we can see the effects. A database with a single table called Customers, populated with 100 000 rows via SQLDataGenerator. The filler column simulates other columns that would be in a real table.

```sql
CREATE TABLE Customers (
    CustomerID INT IDENTITY PRIMARY KEY,
    FirstName VARCHAR(50),
    Surname VARCHAR(50),
    AccountNumber CHAR(15) UNIQUE,
    AccountStatus VARCHAR(25),
    Filler CHAR(200) DEFAULT ''
);
```

Once populated, we set up a transactional replication publication with default settings and, for simplicity, have the subscriber be a second database on the same instance. When we query the table on the subscriber, we have exactly the same number of rows as the publisher has.

To simulate a misbehaving IO subsystem, I’m going to take the subscription DB offline, open the mdf in a hex editor, scribble on top of a few pages in the user table, then bring the DB back online. A CheckDB returns a couple of pages of errors, ending with:

CHECKDB found 0 allocation errors and 16 consistency errors in table ‘Customers’ (object ID 421576540).

CHECKDB found 0 allocation errors and 18 consistency errors in database ‘ReplicationSubscriber’.

repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (ReplicationSubscriber).

Minimum level to repair is REPAIR_ALLOW_DATA_LOSS and, based on the error messages, doing so will deallocate five pages. Let’s run the repair and then see what’s left in the table.
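For completeness, a minimal sketch of that repair (not the exact script used here, just the standard pattern; the repair options require single user mode):

```sql
-- Repair the subscriber database named in the CheckDB output above.
-- REPAIR_ALLOW_DATA_LOSS may discard data in order to fix the errors.
ALTER DATABASE ReplicationSubscriber SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

DBCC CHECKDB ('ReplicationSubscriber', REPAIR_ALLOW_DATA_LOSS);

ALTER DATABASE ReplicationSubscriber SET MULTI_USER;
```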

We’ve lost 155 rows from the Customers table. The replication publisher still has them, but the subscriber now does not. So what happens if someone makes some data modifications on the publisher and changes one of the rows that was lost at the subscriber?

```sql
UPDATE dbo.Customers
SET AccountStatus = 'Closed'
WHERE CustomerID = 48700;
GO

INSERT INTO dbo.Customers
    (FirstName,
     Surname,
     AccountNumber,
     AccountStatus)
VALUES ('Mary',
        'White',
        '4985563214AN',
        'Pending');
GO
```

Give it a few seconds to replicate and then let’s check the table on the subscriber. The new customer isn’t there… If we go and have a look at the replication, it’s throwing errors.

“The row was not found at the Subscriber when applying the replicated command”, which is quite true, the row’s not there any longer.

What’s happened is that one of the rows updated on the publisher was lost when CheckDB repaired the database. The rows lost due to the repair weren’t automatically fetched from the publisher; there’s no mechanism in transactional replication for that to happen. Hence, when the row was updated on the publisher, the update statement was replicated but the row on the subscriber couldn’t be found, causing the replication to fail, retry and fail again. It will keep failing until the subscription is either reinitialised or the missing row is manually added to the subscriber.

What we needed to have done, to prevent this outcome, was to sync the subscriber with the publisher after CheckDB deallocated the pages, either manually or with something like SQLDataCompare, and explicitly add back the rows which were on the deallocated pages. The primary key values must be kept the same, as that’s what replication uses to identify which rows in the subscriber need to be updated.
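A hedged sketch of what that manual sync might look like for one lost row, assuming the publisher database is named ReplicationPublisher and sits on the same instance:

```sql
-- Re-insert a lost row at the subscriber, keeping the original primary key
-- value so that replicated updates and deletes can find the row again.
SET IDENTITY_INSERT dbo.Customers ON;

INSERT INTO dbo.Customers (CustomerID, FirstName, Surname, AccountNumber, AccountStatus)
SELECT CustomerID, FirstName, Surname, AccountNumber, AccountStatus
FROM ReplicationPublisher.dbo.Customers
WHERE CustomerID = 48700;

SET IDENTITY_INSERT dbo.Customers OFF;
```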

So, to go back to the original question: yes, it’s probably fine to run CheckDB with repair on a replication subscriber, provided that the data deallocated as part of the repair is manually synced from the publisher afterwards. If it isn’t, the replication will break as soon as one of the rows affected by the repair is updated or deleted at the publisher.

## Unusual errors with Distributed Replay

I’ve come to really like Distributed Replay in the last couple of years. I’ve used it to do a scale test, I’ve used it to test a workload for performance regressions before upgrading. It has a lot of possibilities.

One problem with it is there’s no GUI, so configuring it requires a fair bit of time spent playing around on the command line or in the depths of Component Services.

Another problem is that there aren’t a lot of people using it, so there isn’t a lot of detail on what to do if something goes wrong.

Make that ‘when something goes wrong’, DReplay is a little finicky.

The more common errors are firewall and COM related, and they appear in several blog posts; a search for the error code usually turns up a resolution. However, there was an error which I ran into recently which turned up exactly zero results on Google. So, to fix that problem, here’s a description of the error, the circumstances, and what turned out to be the cause of the error.

To start, the scenario. The preprocess of the trace files had been done, the firewall configured, the COM setting changes made. The services were running, no errors showing in the logs. I’d used DReplay on the machine previously with the same processed trace file and it had worked fine. This time, however…

“Error DReplay   Failed to get client information from controller.”

After turning up nothing on Google, I spent half the afternoon checking logs, restarting the services, restarting the computer, and checking and rechecking the firewall and the COM settings. I finally went and checked the details of the controller and client services.

Anyone spotted the problem yet? For those who haven’t, let me highlight pieces of those last two screenshots.

Distributed Replay is not instanced. If there are two versions of SQL Server installed on the machine (this laptop has SQL 2012 and SQL 2014) and the replay controller and client were installed with both, then the service points to the executables from the most recent installation. The older version’s executables are still there, however, and they still execute. They throw errors, but the errors do not, in any way, indicate that there’s a version problem.

The above error is what the Replay option of DReplay returns. If the preprocess is run from the incorrect directory, the error returned is “Error DReplay   Object reference not set to an instance of an object.”

The fix is as simple as changing to the correct directory and running the correct version of DReplay, the one that matches the version which the services point to.

## How not to do a Masters degree

There are many, many guides to successfully completing a postgrad degree, so I am not going to add to them. Instead, this is a list of things, based on personal experience, that you can do to make it somewhere between very difficult and impossible to complete that Masters or Doctorate degree.

To be clear, I’m talking about the degrees which are mostly, if not entirely, based on research and a dissertation, not coursework.

### Do the degree while holding down a full-time job

Initially this looks like it’ll work out fine. Work Monday to Friday, work on the degree on Saturday and Sunday. For the first few months it does work fine.

But there’s one thing that a dissertation requires and that’s a large amount of dedicated time. Time to read the literature. Time to come up with the hypothesis or research questions. Time to design an experiment. Time to conduct that experiment. Time to revise the experiment, conduct it again, revise again, conduct again… Time to analyse the results. Time to write up the results. Time to edit, rewrite, edit, rewrite, edit…

As the months pass, and as the initial enthusiasm and fun fade, it becomes harder and harder to spend most of the weekend on the research, weekend after weekend after weekend, for a couple of years.

Sure, it’s possible to complete a Masters degree while working full time, but it’s like playing a game on Insane difficulty level.

### Decide that you want to get heavily involved in the SQL community

The SQL community are a great bunch of people and there’s a lot of encouragement to jump in and get involved, for many good reasons.

Posting on forums is great in many ways: it boosts your confidence, it gets you recognition, and it’s a good way to get to know features you’ll never use in your regular job.

Blogging and writing are a great way to share knowledge, and there’s a thrill from watching the page hit count go up, from the first comment, from the compliments, especially when your blog post gets referenced by others as the authoritative article on a subject.

But it takes time. Lots of time. Articles can take days of work, blog posts can take anything from a few minutes to many hours depending on the subject and the depth of the post. Presenting takes lots and lots of prep time. An hour-long presentation can easily require a day of prep, and that’s once you’ve done several presentations. The first one can take many days of writing slides, rewriting slides, writing demos and rehearsing the presentation several times. Forums will take every minute you’re willing to give to them and more, and there’s the constant temptation of ‘just one more post…’

And where’s that time going to come from? The time that would otherwise have been spent on the research and dissertation.

### Get burned out, and don’t seek help

I’m probably going to get flak for this, but it has to be said.

As an industry in general, we are too reluctant to ask for help. I don’t mean technical help (though that too in many cases).

We are too eager to put on a pedestal the person who works 12 hours a day, 6 days a week, the person who pulls all-nighters on a regular basis, the person who never takes vacation because he ‘doesn’t have time’. We’re conditioned to see extreme hours worked as a sign of dedication, of what is needed to get ahead.

It’s not heroic. It’s not required. It’s not something to be admired.

It’s stupid.

Long days are sometimes required, weekend work is sometimes necessary, but they both should be the exception, not the norm. Excessive overtime, if needed to meet deadlines, should be followed with a discussion on what went wrong such that the overtime was required. Was the estimation inadequate? Was the project analysis flawed? Did the scope creep (or gallop)? Were people working on multiple projects at the same time while the project plan expected them to be dedicated? Were assumptions not met or essential infrastructure delayed?

If overtime is frequently required, then management has failed at their job. A developer working 12 hours a day, 6 days a week for months on end is not a sign of dedication or heroics, it’s a sign that someone, somewhere in management is not doing their job properly.

This kind of workload and these hours are not sustainable. They lead to mistakes and buggy code, they lead to burnout, and they lead to people, top people, walking away and never coming back.

I wrote about burnout a few years ago. Looking back now, after another recent brush with it, I don’t think I went far enough in what I said there.

It is not a flaw, it is not a weakness, it is not admitting failure, to seek professional help for burnout. It’s far better to do so than to suffer for years with the effects.

And to get this back on topic, trying to work through burnout is like trying to swim through syrup. Lots of effort, little progress, very easy to give up and stop trying. That’s not going to do wonders for that research and the couple-hundred page dissertation that needs to be done.

Despite all of that, in case anyone is wondering, graduation is mid-September.

## SQL University: Advanced Indexing – Indexing Strategies

Right, I know it’s Friday and everyone’s tired and looking forward to the weekend, but I do need to finish off this indexing section and I’ll try to keep this short and interesting and hopefully keep everyone awake.

There’s no shortage of information available on how to create indexes. Hell, I’ve written a copious amount myself. Most of these many articles, however, are written from the point of view of indexing single queries: what you choose for the where clause, what has to go into the include to create the perfect index for this query. Now that’s all well and good, but I’ve never met a system that had only one query per table (maybe there is such a system out there, but I’ve never found it).

So what I’m going to try to do today is address the topic of a strategy for indexing. How to approach indexing, not for a single query, but for the system as a whole. I won’t be able to cover this in-depth, this is material worthy of an entire book chapter, if not an entire book, but I can at least touch on the essential portions.

Now, there are two main positions that we could be in when considering indexing strategies for an entire system:
1) A brand new system that’s still in development
2) An existing system that’s being used actively.

One at a time…

### Indexing strategies for a brand new system

Start by choosing a good clustered index. What makes a good clustered index? Well, it depends 🙂

The clustered index is the base; it will affect each and every nonclustered index, and it’s not trivial to change once the system is in use, so choose carefully. I’m not saying another word on the subject of a clustered index, not today.

Once that’s done…

## SQL University: Advanced Indexing – Filtered Indexes

Welcome back to day 2 of Advanced Indexing. Today we’re going to look at a feature that was added in SQL Server 2008 – filtered indexes.

In previous versions, indexes were always on the entire table. An index would always have the same number of rows as the table it was built on (which is why COUNT(*) can just scan the smallest index on the table).

With filtered indexes, it’s possible to have an index that’s built on a subset of the rows in the table. The definition for a filtered index contains a WHERE clause predicate that determines if a row in the table will be in the index or not.

This can be a major advantage on really large tables where most queries are only interested in a small fraction of the table. A normal index would be based on the entire table regardless of the fact that most of the table is of no interest, meaning the index would be larger than necessary, deeper than necessary and take up more space than would be ideal. With a filtered index on just the interesting portion of the table, the index size is kept to a minimum, meaning it’s shallower than an index on the entire table and hence more efficient.

A simple example of a filtered index would be

```sql
CREATE NONCLUSTERED INDEX idx_Example
ON Account (AccountNumber)
WHERE Active = 1;
```

There are two main uses of a filtered index:
1) Enforcing moderately complex uniqueness requirements
2) Supporting queries that filter on common subsets of a table

### Filtered indexes and unique columns

One very interesting use of filtered indexes is in enforcing uniqueness over portions of a table. One requirement that comes up again and again is to have a nullable column that must have unique entries in it, but whose entries are optional. Basically, the column must be unique or null. Sounds easy, but the problem is that a unique index allows only one null. So much for nulls not being equal to anything, including other nulls.

Prior to SQL 2008, implementing such a constraint required computed columns, indexed views or triggers. With SQL 2008’s filtered indexes, it’s trivial.
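A quick sketch of the filtered approach, against a hypothetical Accounts table where AccountNumber is optional but must be unique when present:

```sql
-- Uniqueness is enforced only for rows that have an AccountNumber;
-- any number of NULLs is allowed because those rows fall outside the filter.
CREATE UNIQUE NONCLUSTERED INDEX idx_Accounts_AccountNumber
ON dbo.Accounts (AccountNumber)
WHERE AccountNumber IS NOT NULL;
```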

## SQL University: Advanced indexing – Sorting and Grouping

Good day everyone and welcome to another week of SQL University. I know we’re getting close to the end of the year and everyone’s looking forward to a nice long vacation soaking up the sun at the beach, but a little bit of attention would be nice. Thank you.

This week is Advanced Indexing, and I mean advanced: none of that selectivity, SARGable, predicate stuff that gets repeated all over the place. If you need a refresher on the basics before we get started, the following can be considered prerequisite reading for this course.

There’s also some additional background material available for more enthusiastic students:

Right, now that the admin has been handled, let’s get straight into things. Nothing like starting at the deep end…

Most people would rightly associate indexes with where clause predicates and joins, after all, the main usage of an index is to reduce the rows in consideration for a query as fast as possible. However there’s another portion of your queries that indexes can, if appropriately designed, help with – grouping and sorting.

Sorting is an extremely expensive operation, especially on large numbers of rows. For the academics in the audience, the algorithmic complexity of sorting is above linear: the time to sort a set of data increases faster than the number of items in the list. The common sorting algorithms have an average time complexity of O(n log n). It’s better than O(n²), but it can still hurt at the higher row counts.

O(n²) on the left, O(n log n) on the right (thanks to QuickMath)

Right, the non-academics can wake up now.

The other reason that sorting hurts is that it needs a memory grant to do the sort. If there’s a shortage of memory the query could have to wait a while for the memory to be granted and, if the optimiser mis-estimates the number of rows to be sorted, the memory grant could be insufficient and the sort would have to spill into TempDB. You don’t want that happening.

Finally, sort is a blocking operator in the execution plan (all rows must have been fetched from the source before any output can be returned), and so the client won’t start seeing rows returned until the entire sort has been completed. This can make the query feel like it takes longer than it really does.

Grouping and aggregation are much the same. To aggregate one set of values based on another set of values, SQL has to get all the like values of the grouping columns together so that it can do the aggregation. That sounds suspiciously like a sort doesn’t it?

SQL doesn’t always sort to do aggregations, but the alternative – hash tables – isn’t exactly free (homework exercise: read up on hash tables).

So for both sorting and grouping, the query processor’s job would be a lot easier if there was some way that it could get the data ordered by the sorting or grouping columns without having to do the work of actually sorting. Sounds impossible? No.
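As a small taste of where this is going, here’s a hedged sketch (table and column names are hypothetical): if the index key order matches the ORDER BY, the rows come back from the index already sorted and the Sort operator disappears from the plan.

```sql
CREATE NONCLUSTERED INDEX idx_Orders_CustomerID_OrderDate
ON dbo.Orders (CustomerID, OrderDate)
INCLUDE (TotalDue);

-- The seek on CustomerID returns rows already ordered by OrderDate,
-- so no sort is needed to satisfy the ORDER BY.
SELECT OrderDate, TotalDue
FROM dbo.Orders
WHERE CustomerID = 42
ORDER BY OrderDate;
```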

## 24 Hours of PASS Questions

I finally found the time to work through the questions from the 24 Hours of PASS session that I did. Thanks to everyone that attended the event

Q1: Can you filter execution plans for sort warnings?

No. The sort and hash warnings don’t appear in the execution plan. You’d have to trace for the hash and sort warning events and correlate that with either batch/statement started and completed events or with the run-time plan events.

Q2: To get the query executions can you just add the statement completed or batch completed events?

The Statement Completed and Batch Completed events have durations and can be filtered on that duration. The problem, however, is that the execution plan events (e.g. Showplan All, Showplan XML, Statistics Profile, Statistics XML) have no duration column. Hence the execution plan events can’t be filtered on duration even though the statement completed and batch completed events can.

Q3: If you run a Profiler trace and all the plans are being pulled from cache will that mean that no ShowPlan event data will be shown in the trace?

Depends which event is being traced. There are events for query compile (showplan all for query compile and showplan xml for query compile) that only fire when the query compiles, so those will not fire if the plan is being pulled from cache. The other execution plan events are all fired each time the query executes.

Q4: Why can you sometimes get nulls or blank strings for the query plan from sys.dm_exec_query_plan?

From Books Online:

Under the following conditions, no Showplan output is returned in the query_plan column of the returned table for sys.dm_exec_query_plan:

• If the query plan that is specified by using plan_handle has been evicted from the plan cache, the query_plan column of the returned table is null.
• Some Transact-SQL statements are not cached, such as bulk operation statements or statements containing string literals larger than 8 KB in size. XML Showplans for such statements cannot be retrieved by using sys.dm_exec_query_plan unless the batch is currently executing because they do not exist in the cache.

Q5: Do the execution plans from the DMVs contain execution information?

No. The plans extracted from cache contain compile-time information only.

If you enjoyed this and are going to be at PASS Summit this year, it’s not too late to sign up for the all-day precon (All about Execution Plans) that Grant Fritchey (blog|twitter) and I are doing.

## Compiles and recompiles

I want to spend some time over the next few months looking at query compilation and the plan cache, and there are a couple of concepts that I want to get cleared up beforehand. The first of those is around two terms that are often used interchangeably: compile and recompile.

### Compile

A compile occurs when a query is given to the query optimiser and, when it does a lookup into the plan cache, no matching plan is found. The optimiser must then compile the query, generating an execution plan; it must add that plan to the plan cache (in most cases) and then pass that plan on to the query execution engine so that the query can be executed. (http://technet.microsoft.com/en-us/library/Cc966425)

### Recompile

A recompile is something slightly different. For a recompile, the optimiser must find a matching plan when it queries the plan cache and must hand that cached plan over to the query execution engine; then, while doing validation checks, the execution engine must determine that the query plan is no longer valid and request that the optimiser partially or completely recompile the query. (http://technet.microsoft.com/en-us/library/Cc966425)

Subtle difference. Both cases result in the optimiser generating an execution plan, but the reasons can be different. Also worth noting is that a compile results in a new plan in the cache, while a recompile simply replaces an existing plan.

Another difference since SQL 2005 – a compile is for the entire batch, but a recompile can be for just a single statement within the batch.

Now the theory’s dealt with, let’s look at some examples and see how we can track these two events and try and get a better understanding of which occurs when and how they look.

The tools I’m going to use to track these are Performance Monitor, with the compiles/sec and recompiles/sec counters, and SQL Profiler with the SP:StmtRecompile event (there’s no Profiler event for compilation). I’ll also check what’s in the plan cache after each test.

The first one’s going to be very simplistic, a query run against an empty plan cache.

```sql
DBCC FREEPROCCACHE
GO

EXEC dbo.OutStandingTotalByStatus
GO
```

What we get from that is a non-zero value for SQL Compilations/sec (perfmon) and the following from Profiler (the SQL Recompiles/sec counter remains 0):

and the plan cache now contains one plan with one use. (for more info on how the CacheMiss and CacheInsert events work, see http://www.sqlinthewild.co.za/index.php/2010/07/27/hit-and-miss/ and http://www.sqlinthewild.co.za/index.php/2010/08/31/come-and-gone/)

In this case, I hope it was clear, we had a compile occur (empty plan cache before, new plan added to cache).

Now what happens if, with no clearing of the cache nor anything else being done, I mark that procedure for recompile and run it again?
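That marking is presumably done with sp_recompile; a minimal sketch of the step, reusing the procedure from the test above:

```sql
-- Mark the procedure so that its cached plan is invalidated, then run it again.
EXEC sp_recompile 'dbo.OutStandingTotalByStatus';
GO

EXEC dbo.OutStandingTotalByStatus
GO
```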

Coming to the PASS Summit in October this year? Excellent!

I say excellent, because Grant (blog|twitter) and I are presenting a full-day seminar on the Monday, all about execution plans. Excited already? If not, let me give you a taste of what we’ll be discussing.

Grant will be kicking things off with a look at how the optimiser works. Not 500 level deep internals (that’d take all day by itself), but an overview of how the optimiser turns this

```sql
SELECT t.Title, u.UserName, u.UserLevel, t.LastModified, t.LastPoster
```

into this (an execution plan), as well as some DMVs that give a view on what the optimiser is doing.

After that, I’ll be digging into what the plan cache is, how it’s structured and how plans are matched to incoming queries. I’ll also cover how you might go about monitoring SQL’s usage of the cache.

After that we’ll discuss the myriad and varied ways to get execution plans out of SQL, what the various options return, and what some of the terms involved really mean (estimated execution plans, anyone?).

Once all that’s out there (which will probably take the entire morning) it’ll be onto the fun of reading the execution plans, what those various icons represent and what all their arcane properties are telling you. Maybe, just maybe we’ll also have a look at the raw XML form of the plan, just for fun.

And because theory’s not very useful without something to apply it to, we’ll be ending off the day by seeing how you can use the information from the execution plan to tune your queries. After all, everyone wants better performance, right?

Still not completely convinced? Then check out the 24 hours of PASS coming in September. Both Grant and I are doing a pre-con preview. I’m doing extracting execution plans and Grant’s doing a portion of the reading execution plans section.

Right, now that everyone’s dying to attend this, I’m going to toss out a little bit of a challenge. If you’re coming to our pre-con and you have an execution plan that you don’t understand (maybe a strange operator, maybe a property with a value you’ve never seen), mail it to me (gail@<blog domain>) with a note as to what you want clarified and we might discuss it during the pre-con. Some rules, just to keep things sane:

• No confidential stuff. I don’t want legal trouble, neither do you.
• If the exec plan takes 5 minutes to open in SSMS, I’m deleting it.
• If it’s multiple pages in SSMS, I’m deleting it.
• I don’t promise to look at anyone’s plans, it depends how we’re doing on time.