Author Archive: Gail

Day 4 – Monitoring, availability and tough problems

I skipped the keynote on Thursday to spend some time in the SQL Lounge. One of the people there did a demo of a set of scripts, jobs and reports called DMVStat. It’s up on the net somewhere. I don’t have the link right now, but I’ll see if I can dig it up in a day or so.

The first session was on analysing the plan cache. It wasn’t a particularly deep session, just covering how to get execution plans in SQL 2005 (the plan cache DMVs).
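For anyone who hasn't played with those DMVs, a minimal sketch of pulling plans out of the cache (the DMV and function names are the real SQL 2005 ones; the ordering and TOP filter are just one example of how to slice the data):

```sql
-- Top 10 cached statements by total worker time, with their plans
SELECT TOP (10)
    qs.execution_count,
    qs.total_worker_time,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(st.text)
            ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text,
    qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
ORDER BY qs.total_worker_time DESC;
```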

The SQL CAT team did a presentation on high availability in the afternoon. Not as good as the session on MySpace, but that would be hard to top.

Bob Ward ran the only level 500 session of the conference, covering debugging difficult problems, the kind of problems that he sees as a senior escalation engineer at PSS. He discussed latch waits, slow IOs, corrupt databases, access violations, memory problems and unexpected shutdowns. It felt something like standing under a waterfall, but it was a brilliant session.

The afternoon wrapped up with a discussion on practical performance monitoring by Andrew Kelly. He went over perfmon, profiler, wait stats, disk stats and showed some techniques for managing the load of data.
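On the wait stats side, the basic idea looks something like this (not Andrew's exact script, just a sketch; the exclusion list is an example, not exhaustive):

```sql
-- Top waits since the stats were last cleared, ignoring some benign waits
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       wait_time_ms - signal_wait_time_ms AS resource_wait_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP',
                        'SQLTRACE_BUFFER_FLUSH', 'WAITFOR',
                        'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;

-- Clear the stats to start a fresh measurement window
-- DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR);
```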

All in all, that was a very successful day. One more day to go… 

Day 3

Wednesday at PASS is the first day of the real conference. The day started off with the usual keynote. Ted Kummert of Microsoft went through the data vision that Microsoft has, complete with a whole lot of demos.

The part that most caught my eye was the demo of some new SQL 2008 features, including the resource governor with its ability to restrict resource usage depending on properties of the connection (e.g. application name, host name, login name). The policy-based management should make policy enforcement much easier, especially since policies can be applied across multiple servers in one operation.
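From what was shown, the resource governor classification works roughly like this (syntax as demoed in the pre-release bits, so it may change before RTM; the pool, group and application names are made up for illustration):

```sql
-- A pool that caps CPU, and a workload group that uses it
CREATE RESOURCE POOL ReportPool WITH (MAX_CPU_PERCENT = 20);
CREATE WORKLOAD GROUP ReportGroup USING ReportPool;
GO
-- Classifier function: route incoming connections by application name
CREATE FUNCTION dbo.rg_classifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    IF APP_NAME() LIKE 'ReportApp%'
        RETURN N'ReportGroup';
    RETURN N'default';
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rg_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```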

The new spatial data types look cool. I can’t see immediate uses for them myself, but I do like them.

Finally, something that had the entire audience cheering: IntelliSense in Management Studio. About time. Something that I also saw, but that wasn't mentioned, was what appeared to be syntax checking as you type, much like Visual Studio has. I'm not sure how far that goes (to objects or just to keywords), but it does look interesting.



Precon – Query plans

The second pre-conference day that I attended was by Kalen Delaney, all about query plans.

The first part of the session was an overview of the various methods of getting a query plan, from the showplan options for estimated plans, to the profile options for actual execution plans, the graphical options and the usage of SQL Profiler to get both actual and estimated plans.
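The T-SQL side of those options boils down to a handful of SET commands. A quick sketch (the query itself is just a placeholder against an AdventureWorks table):

```sql
-- Estimated plan: the batch is compiled but never executed
SET SHOWPLAN_XML ON;
GO
SELECT SalesOrderID, OrderDate
FROM Sales.SalesOrderHeader
WHERE SalesOrderID = 43659;
GO
SET SHOWPLAN_XML OFF;
GO

-- Actual plan: the batch runs, and the plan includes runtime row counts
SET STATISTICS XML ON;
GO
SELECT SalesOrderID, OrderDate
FROM Sales.SalesOrderHeader
WHERE SalesOrderID = 43659;
GO
SET STATISTICS XML OFF;
GO
```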

She then briefly covered sub-optimal plans, without going into detail on query tuning: things like cardinality estimation errors and potentially slow operators (scans, sorts, hashes).

After lunch we delved into details on the plan cache, including what constitutes a plan, how to view them and what conditions there are around plan reuse. This covered adhoc plans, prepared plans and object plans (stored procedures), as well as recompiles and the downsides of plan reuse.
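Seeing how well plans are actually being reused is a one-query job; something like this (just a sketch, ordering by whatever you care about):

```sql
-- How often cached plans are being reused, by plan type
SELECT cp.objtype, cp.cacheobjtype, cp.usecounts, st.text
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
ORDER BY cp.usecounts DESC;
```

A cache full of single-use adhoc plans (usecounts = 1) is usually the first sign that plan reuse isn't happening.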

Finally there was a section on query hints and plan guides, for use when the optimiser just won’t do what you want it to do.
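As an example of the plan guide side, attaching a hint to a query without touching the application looks something like this (the table, parameter and guide names are hypothetical):

```sql
-- Attach an OPTIMIZE FOR hint to a parameterised query via a plan guide
EXEC sp_create_plan_guide
    @name            = N'Guide_CustomerLookup',
    @stmt            = N'SELECT * FROM dbo.Customers WHERE Region = @region',
    @type            = N'SQL',
    @module_or_batch = NULL,
    @params          = N'@region nvarchar(30)',
    @hints           = N'OPTION (OPTIMIZE FOR (@region = N''ZA''))';
```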

The evening was a great deal of fun, with the opening reception and the SQLServerCentral party. I had the opportunity to take part in the quiz bowl. Got eliminated in the first round (damn movie questions), but it was still good fun. Won a couple of books. More reading material is always a good thing.

Preconference – Performance Toolset workshop

Spent the first day of the conference at the PSS Bootcamp. The PSS guys always put on a good show as they take people through what they do to solve customers' problems.

The first part of the day was devoted to a performance tuning methodology. What do you do when the users are complaining that the server's slow? The presenter went through the methodology that the PSS engineers use when presented with a performance problem.

Most of the process is aimed at finding the problem query or identifying a resource bottleneck on the server.

If the problem is currently occurring, one of the main tools is the performance dashboard, a new report introduced into Management Studio with SQL 2005 SP2.

If the problem is not currently occurring, then it's necessary to use SQLDiag, Profiler, perfmon or a combination of them. A very interesting new tool that they introduced is a data aggregation and reporting tool for performance data, SQL Nexus. The updated version is supposed to be available by the end of November.

The session finished with a brief look at some of the new features of SQL 2008 that would help out with performance issues. One of the big ones, at least for me, is the performance warehouse. SQL can be configured to collect performance-related data continuously in the background and save it into a data warehouse. There is a collection of reports built into Management Studio that report off this data. Used properly, that should make finding performance problems much easier than it is currently.

The other feature in 2008 that looks fantastic: a dependency checker that actually works. Sounds great.

Off to PASS

Well, I'm off to PASS tonight. A total of 18 hours of flying and 7 or so hours sitting around in London Heathrow airport. What fun.

Looking forward to the conference. Hopefully I’ll get a chance to chat with some people I met last year.

I'll probably be reporting on some of the sessions while I'm there. If there's anyone who reads this blog that's going to be at PASS, look me up and say hi. Just look for someone wearing a nametag with the name 'Gail' and country 'South Africa'.

On a haunted house

The second session of the haunted house adventure went down far better than I could have ever hoped. In fact, the players asked to stay late so that they could finish it, they were having so much fun.

They survived the haunted house and uncovered the reason behind all the strange occurrences. They couldn't prevent a thug from making off with the knife that had been the focus of all the strange events, but that's fine. It adds possibilities for the future.

Everyone was enthusiastic, interested and most importantly, involved in the story. I’m still on a bit of a buzz from the game and I’m very psyched for the campaign.

Next up, depending on the players, either investigating the happenings at the cathedral, visiting a museum exhibit, or attending the cultural festival.

Shrinking databases

Or "Order the pages, shuffle the pages."

Do you ever shrink your data files? I’ve personally never been fond of it, especially for production databases. After all, they’ll simply have to grow again and, especially if the data files are on independent drives, there’s little difference between space free on the drive or space free in the data file. There is also a more insidious reason for not shrinking a database.

Let's take a very simple database (the creation code is at the end of the post). I have two tables, both with tens of thousands of rows. Both tables have a clustered index on a uniqueidentifier and are heavily fragmented (>99%).

DBCC SHOWCONTIG(LargeTable1) -- 99.30%
DBCC SHOWCONTIG(LargeTable2) -- 99.21%

To fix the fragmentation, rebuild both indexes. That sorts out the fragmentation, but now the data file is using almost twice the space necessary.
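The rebuilds themselves are nothing special; something like this (rebuilding all indexes on each table, since the creation code with the index names is at the end of the post):

```sql
-- Rebuild all indexes on both demo tables, then re-check fragmentation
ALTER INDEX ALL ON dbo.LargeTable1 REBUILD;
ALTER INDEX ALL ON dbo.LargeTable2 REBUILD;

DBCC SHOWCONTIG(LargeTable1);
DBCC SHOWCONTIG(LargeTable2);
```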

DBCC ShowFileStats -- 3363 extents total, 1697 used (215 MB total, 106 MB free)

So, shrink the database to release the wasted space back to the OS.

DBCC SHRINKDATABASE (TestingShrink, 10) -- Shrink to 10% free

That’s fixed the space issue. But now, have another look at those two indexes that were just rebuilt.


Execution plans, estimated vs actual

This is the second post on execution plans. I’m going to briefly discuss estimated execution plans and actual execution plans, the differences between them and when you would want to use which.

First however, a bit on query execution, just so that I know everyone’s on the same page.

When a query is submitted to SQL Server (and for simplicity I'm going to assume it's a straight select statement, not a procedure) the query is parsed, then bound to the underlying objects (tables, views, functions, etc). Once the binding is complete, the query passes to the query optimiser. The optimiser produces one or more suitable execution plans for the query (more on that in a later post). The query is then passed to the query execution engine, which handles the memory grants, picks a parallelism option if necessary, and executes the various query operations.

Estimated execution plans

When an estimated execution plan is requested for a query, the query goes through the parsing, binding and optimisation phases, but does not get executed.
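In T-SQL terms, that looks like this (the table in the query is hypothetical; remember to switch the option off again, or every following batch gets compiled but not run):

```sql
-- Request an estimated plan: the batch below is compiled but never executed
SET SHOWPLAN_XML ON;
GO
SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE CustomerID = 42;
GO
SET SHOWPLAN_XML OFF;
GO
-- SHOWPLAN_TEXT and SHOWPLAN_ALL work the same way, returning the plan
-- as rows of text rather than as XML
```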


Back in the GM chair

This last Sunday saw me taking back the GM chair for the group that I play with. For the past year we've been playing my friend Phillip's Per-rune game (details and an in-character journal are on my web site).

My campaign is a modern day supernatural game, a bit like Buffy, but darker. It's set in the historical city of Oxford, in England, in the year 2002. More details are available on the campaign web site. The pages aren't finished; there are a lot of links that go nowhere.

All in all, the game went off without a hitch. Lots of admin-type stuff to start, reminders of clues, shopping, etc, etc but less than I expected.

Now let’s see if the characters can unravel the mysteries of a haunted house, and if they can survive to tell the tale.

I'll probably comment here occasionally on significant bits of the campaign as they happen.