Latest Posts

Books of 2017

Right, beginning of a new year, so time to look at what I read in the last year and what of it I can recommend.

Please ignore that it’s almost February. I’m going to pretend that the year starts with February, that way I don’t have to think about where January went.

First thing to mention is a change of tracking method. The blog plugin I was using to track my books doesn’t work under HTTPS; it gives odd errors when adding or updating books. I wasn’t in the mood for debugging PHP, so the entire tracking of books and reading dates has been moved to GoodReads (with a few hiccups along the way).

That does make it easy to get the retrospective of 2017 at least: https://www.goodreads.com/user/year_in_books/2017/19743140

77 books read. My goal was 75, and I was surprised that this time I made it (helped partially by a week’s vacation in December, during which I read 10 books).

A few of the year’s books stood out.

Caliban’s War, Abaddon’s Gate and Cibola Burn

The Expanse series is fantastic, and in these three books things go from bad (a protomolecule-infested asteroid station crashes into Venus) to worse, to weird, to really, really bad.

Complex characters, complex plot, very little black and white, and every solution to a problem creates a few more problems along the way.

Ordination and Stillbright

I’m a sucker for paladins. A friend pointed out a few years back that most of my D&D characters have paladin tendencies. Case in point, my current character is a 5e LG Fighter with 2 levels of Cleric.

These tell the story of a battle-weary knight who, after being chosen by a forgotten goddess, strives to become a beacon of hope in a war-torn world.

Stiger’s Tigers, The Tiger, The Tiger’s Fate

Roman-style empire in a fantasy world. A junior legion officer is thrown to the wolves and manages to change the world in the process.

I like the ‘Roman legions in a fantasy world’ idea and, other than this series and Codex Alera, I haven’t seen good treatments of it.

Arcanum Unbounded

Short stories by Brandon Sanderson in the Cosmere shared universe.
Enough said.


For 2018 I’m setting the same goal, 75 books. I have piles of unread books all over the house, so I’m not short on material, just on time.

Hunting for the True Location, with Machine Learning

Some context first.

My company puts on a year-end function every year. It’s at some resort or other, and the important thing for this post is that we’re not told the location in advance. We find out when we get there (by bus).

What we are told, about a month ahead of the event, is approximate distances from 3-4 locations. These are where the bus pickup sites are. The locations are:

  • Head Office
  • Near Clearwater Mall
  • Fourways
  • Centurion

The distances given aren’t correct and, as a result, there are usually several attempts by various people to figure out in advance where the year-end function will be.

I thought I’d join in this year, using some machine learning on those distances.

Now, I should mention that this is a very poor use for ML. Mainly because of a lack of data. I should have hundreds of data points for a decent prediction. I have 2 or 3 data points, for 4 different locations. Still, it’s what I have to work with.

First, the starting data. The distances for this year are:

  • Clearwater Mall: 63 KM
  • Centurion: 56 KM
  • Fourways: 43 KM
  • HQ: 20 KM
  • Cape Town: 1447 KM

I’m going to ignore Cape Town for training, as a distance from it was only given in 2015, so I have just one data point for it.

Plotting this on a map makes it clear that the distances have been ‘massaged’. (I’m plotting ‘as the crow flies’ distances rather than driving distances for ease of plotting; I’ll use driving distances for the training.)

[Map: this year’s given distances plotted from each pickup point]

Let’s look at previous years.

2016

Actual location: Seasons Sport and Spa (pin on the map below)

Actual distances calculated with Google Maps, driving distance, shortest route.

  • Clearwater: Given distance – 80KM. Actual distance – 67KM
  • Centurion : Given distance – 88KM. Actual distance – 47KM
  • Fourways : Given distance – 68KM. Actual distance – 52KM
  • HQ: Given distance – 115KM. Actual distance – 75KM

[Map: 2016 given vs actual distances – Seasons Sport and Spa]

2015

Actual location: Vaal River Country Lodge.

Actual distances calculated with Google Maps, driving distance, shortest route.

  • Clearwater Mall: Given distance – 51KM. Actual distance – 79KM
  • Centurion: Given distance – 110KM. Actual distance – 118KM
  • Fourways: Given distance – 89KM. Actual distance – 97KM

[Map: 2015 given vs actual distances – Vaal River Country Lodge]

2014

Actual location: Askari Game Lodge.

Actual distances calculated with Google Maps, driving distance, shortest route.

  • Clearwater Mall: Given distance – 52KM. Actual distance – 79KM
  • HQ: Given distance – 90KM. Actual distance – 118KM

[Map: 2014 given vs actual distances – Askari Game Lodge]

With that, I have the following training data:

Location          Given Distance   Error in Distance (Given – Actual)
Clearwater Mall        80                13
Clearwater Mall        51               -28
Clearwater Mall        52               -27
Centurion              88                41
Centurion             110                -8
Fourways               68                16
Fourways               89                -8
HQ                    115                40
HQ                     90               -28

Now to stick those into a linear regression and see if I can predict the error on this year’s measurements.

I need to mention that with so little data, the accuracy of the linear regression is going to be very low. I’m as likely to get the correct results from linear regression as I am to get correct results from rolling a couple of d20s.

That said, onwards to untrustworthy results.

Once the starting values are loaded into R, creating a simple model is as easy as

m <- lm(Error ~ Location + Distance, data=YEF)

Then load up this year’s values into another data frame, and predict.

predict(m, YEFPredict)
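
For anyone wanting to play along, here’s a minimal sketch of the whole flow, with the data frames built from the tables above (the column and data frame names are my assumption, since I haven’t shown the full script):

# Training data from previous years: given distances and their errors
YEF <- data.frame(
  Location = c(rep("Clearwater Mall", 3), rep("Centurion", 2), rep("Fourways", 2), rep("HQ", 2)),
  Distance = c(80, 51, 52, 88, 110, 68, 89, 115, 90),
  Error    = c(13, -28, -27, 41, -8, 16, -8, 40, -28)
)

# Model the error as a function of pickup location and given distance
m <- lm(Error ~ Location + Distance, data = YEF)

# This year's given distances
YEFPredict <- data.frame(
  Location = c("Clearwater Mall", "Centurion", "Fourways", "HQ"),
  Distance = c(63, 56, 43, 20)
)

# Predicted error for each location
predict(m, YEFPredict)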

The errors come out as:

  • Clearwater Mall: -12
  • Centurion: -18
  • Fourways: -35
  • HQ: -60

Giving final estimated distances (Given – Error) as

  • Clearwater Mall: 75KM
  • Centurion: 74KM
  • Fourways: 78KM
  • HQ: 80KM

[Map: the predicted 2017 distances plotted]

Maybe I should have stuck to using dice.

Books of 2016

I set myself a reading goal of 75 books for last year, and managed 73. I’m not overly happy about that; there were months where I barely managed to read anything.

[Image: 2016 reading summary]

The full list, with Amazon links, is at the end of this post; first, I’ll mention a few of the standout books.

Dust and Light, and its sequel Ash and Silver

A novel magic system, complex politics, a war, an ancient mystery and the main character is slap in the middle of all of them, and he doesn’t remember why.

An interesting theme in these is on memory and what we are if our memory is stripped away.

Halting State

Near-future Scotland. The book starts with a bank robbery, and the suspects are a bunch of orcs and a dragon. The robbery occurred in a persistent, online world, and the police are a little out of their depth. It gets more complicated from there.

Song for Arbonne

A beautifully written story of the land of Arbonne, land of troubadours and joglars and courtly love, worshipping a goddess and ruled by a Queen; and a land to the north where only the warrior god is worshipped and the king and high priest have sworn to conquer Arbonne.

Pandora’s Star

The first story of the Commonwealth saga, a futuristic society where space travel is almost unknown as wormholes link the worlds of the commonwealth together, and where people can live forever thanks to memory implants and rejuvenation techniques.

It all starts when an astronomer observes a star disappearing, enveloped in an instant by some form of Dyson sphere.

The Bands of Mourning

The last in the sequel series to Mistborn, we return to the world of Allomancy and mists. It’s hundreds of years after the end of “Hero of Ages”, the world is in an early Industrial Age.

This book completes the adventures of Wax and Wayne, started in Alloy of Law and continued in Shadows of Self.

City of Stairs and its sequel City of Blades

Another completely different fantasy setting. For centuries the Divinities had ruled and protected the continent, their miracles feeding the people, protecting them, etc. Then, one day, the Divinities were killed and civilisation on the continent collapsed.

Almost 100 years later, strange things with a divine feel to them are happening and must be investigated.

What If?

A book full of strange questions and well-researched answers, such as “What would happen if the Earth stopped spinning?” (Hint: Bad things would happen), or “What would happen if you tried to hit a baseball travelling at 90 percent of the speed of light?” (Hint: Bad things would happen).

It’s hilarious, it’s well-researched, it’s fantastic.

Full list:

Dust and Light: A Sanctuary Novel by Carol Berg
Skin Game: A Novel of the Dresden Files by Jim Butcher
Sacrifice (Star Wars: Legacy of the Force, Book 5) by Karen Traviss
Inferno (Star Wars: Legacy of the Force, Book 6) by Troy Denning
Fury (Star Wars: Legacy of the Force, Book 7) by Aaron Allston
Revelation (Star Wars: Legacy of the Force, Book 8) by Karen Traviss
The Republic of Thieves (Gentleman Bastards) by Scott Lynch
Ash and Silver: A Sanctuary Novel by Carol Berg
Arthur (The Pendragon Cycle, Book 3) by Stephen R. Lawhead
City of Stairs (The Divine Cities) by Robert Jackson Bennett
This May Go On Your Permanent Record by Kelly Swails
Words of Radiance: Part Two (The Stormlight Archive) by Brandon Sanderson
Rookie Privateer (Privateer Tales) (Volume 1) by Jamie McFarlane
Invincible (Star Wars: Legacy of the Force, Book 9) by Troy Denning
Shattered: The Iron Druid Chronicles by Kevin Hearne
White Tiger (Dark Heavens, Book 1) by Kylie Chan
Something More Than Night by Ian Tregillis
Crown of Renewal (Legend of Paksenarrion) by Elizabeth Moon
Halting State (Ace Science Fiction) by Charles Stross
The Crimson Campaign (The Powder Mage Trilogy) by Brian McClellan
The Long Way Down (Daniel Faust) (Volume 1) by Craig Schaefer
London Falling by Paul Cornell
Learning R by Richard Cotton
Song for Arbonne by Guy Gavriel Kay
Pendragon (The Pendragon Cycle, Book 4) by Stephen R. Lawhead
The First Casualty by Mike Moscoe
Dragons In The Stars (Star Rigger) by Jeffrey A. Carver
Girl on the Moon by Jack McDonald Burnett
Skinwalker (Jane Yellowrock, Book 1) by Faith Hunter
Terms of Enlistment (Frontlines) by Marko Kloos
Rath’s Deception (The Janus Group) (Volume 1) by Piers Platt
Valour by John Gwynne
Death from the Skies!: The Science Behind the End of the World by Philip Plait Ph.D.
The Fabric of the Cosmos: Space, Time, and the Texture of Reality by Brian Greene
Flex by Ferrett Steinmetz
Sword Coast Adventurer’s Guide by Wizards RPG Team
Lines of Departure (Frontlines) by Marko Kloos
The Dark Ability (Volume 1) by D.K. Holmberg
Shadows of Self: A Mistborn Novel by Brandon Sanderson
ATLAS (ATLAS Series) by Isaac Hooke
Pandora’s Star (The Commonwealth Saga) by Peter F. Hamilton
Interim Errantry: Three Tales of the Young Wizards by Diane Duane
Leviathan Wakes (The Expanse Book 1) by James S.A. Corey
Path of Destruction (Star Wars: Darth Bane, Book 1) by Drew Karpyshyn
Manifold: Time by Stephen Baxter
The Bands of Mourning: A Mistborn Novel by Brandon Sanderson
Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100 by Michio Kaku
Ruin (The Faithful and the Fallen) by John Gwynne
Virtual Destruction: Craig Kreident (Craig Kreident Thrillers) (Volume 1) by Kevin J Anderson, Doug Beason
Before the Awakening (Star Wars) by Greg Rucka
The Weapon of a Jedi: A Luke Skywalker Adventure by Jason Fry
Parley (Privateer Tales) (Volume 3) by Jamie McFarlane
Calamity (The Reckoners) by Brandon Sanderson
Grail (The Pendragon Cycle, Book 5) by Stephen R. Lawhead
Into the Black (Odyssey One) by Evan Currie
Avalon: The Return of King Arthur by Stephen R. Lawhead
Meeting Infinity by Gregory Benford, James S.A. Corey, Madeline Ashby, Aliette de Bodard, Kameron Hurley, John Barnes, S
Desert Rising by Kelley Grant
Deepsix by Jack McDevitt
The Steel Remains by Richard Morgan
Child of the Daystar (The Wings of War Book 1) by Bryce O’Connor
The Terran Privateer (Duchy of Terra) (Volume 1) by Glynn Stewart
Throne Of Jade by Naomi Novik
Wireless by Charles Stross
Outriders by Jay Posey
The Vorrh by Brian Catling
The Engines Of God by Jack McDevitt
Parallel Worlds: A Journey Through Creation, Higher Dimensions, and the Future of the Cosmos by Michio Kaku
What If?: Serious Scientific Answers to Absurd Hypothetical Questions by Randall Munroe
City of Blades (The Divine Cities) by Robert Jackson Bennett
To Hold the Bridge: Tales from the Old Kingdom and Beyond by Garth Nix
Footfall by Larry Niven, Jerry Pournelle
Brandon Sanderson’s White Sand Volume 1 by Brandon Sanderson, Rik Hoskin

What is a SARGable predicate?

‘SARGable’ is a weird term. It gets bandied around a lot when talking about indexes and whether queries can seek on indexes. The term’s an abbreviation: ‘SARG’ stands for Search ARGument, and it means that the predicate can be executed using an index seek.

Lovely. So a predicate must be SARGable to be able to use an index seek, and it must be able to use an index seek to be SARGable. A completely circular definition.

So what does it actually mean for a predicate to be SARGable? (and we’ll assume for this discussion that there are suitable indexes available)

The most general form for a predicate is <expression> <operator> <expression>. To be SARGable, a predicate must, on one side, have a column, not an expression on a column. So, <column> <operator> <expression>.

SELECT * FROM Numbers
WHERE Number = 42;

[Execution plan: index seek]

SELECT * FROM Numbers
WHERE Number + 0 = 42;

[Execution plan: index scan]

SELECT * FROM Numbers
WHERE Number = 42 + 0;

[Execution plan: index seek]

Any(1) function on a column will prevent an index seek from happening, even if the function would not change the column’s value or the way the operator is applied, as seen in the above case. Zero added to an integer doesn’t change the value of the column, but is still sufficient to prevent an index seek operation from happening.

While I haven’t yet found any production code where the predicate is literally of the form ‘Column + 0 = @Value’, I have seen many less obvious functions on columns that do nothing other than prevent index seeks.

UPPER(Column) = UPPER(@Variable) in a case-insensitive database is one of them, RTRIM(COLUMN) = @Variable is another. SQL ignores trailing spaces when comparing strings.
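
Both of those can be rewritten to leave the column bare. A quick sketch (the variable is my own, assumed to be of a matching string type):

DECLARE @Variable VARCHAR(50) = 'abc';

-- Non-SARGable: the function on the column forces a scan
SELECT 1 FROM SomeTable WHERE RTRIM(StringColumn) = @Variable;

-- SARGable: SQL Server ignores trailing spaces in string comparisons anyway
SELECT 1 FROM SomeTable WHERE StringColumn = @Variable;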

The other requirement for a predicate to be SARGable, for SQL Server at least, is that the column and expression are of the same data type or, if the data types differ, such that the expression will be implicitly converted to the data type of the column.

SELECT 1 FROM SomeTable
WHERE StringColumn = 0;

[Execution plan: index scan]

SELECT 1 FROM SomeTable
WHERE StringColumn = '0';

[Execution plan: index seek]

There are some exceptions here. Comparing a DATE column to a DATETIME value would normally implicitly convert the column to DATETIME (the more precise data type), but that doesn’t cause index scans. Neither does comparing an ASCII column to a Unicode string, at least in some collations.

In general though, conversions should be explicit and decided on by the developer, not left up to what SQL Server decides.
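
Where the types genuinely differ, the conversion can usually be moved to the expression side, leaving the column untouched. A sketch (the variable and its type are made up for illustration):

DECLARE @IntValue INT = 0;

-- Implicit conversion of the column: index scan
SELECT 1 FROM SomeTable WHERE StringColumn = @IntValue;

-- Explicit conversion of the variable instead: index seek
SELECT 1 FROM SomeTable WHERE StringColumn = CAST(@IntValue AS VARCHAR(20));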

What about operators?

The majority are fine. Equality, inequality, IN (with a list of values) and IS NULL all allow index usage. EXISTS and IN with a subquery are treated like joins, which may or may not use indexes depending on the join type chosen.

LIKE is a slight special case. Predicates with LIKE are only SARGable if the wildcard is not at the start of the string.

SELECT 1 FROM SomeStrings
WHERE ASCIIString LIKE 'A%'

[Execution plan: index seek]

SELECT 1 FROM SomeStrings
WHERE ASCIIString LIKE '%A'

[Execution plan: index scan]

There are blog posts that claim that adding NOT makes a predicate non-SARGable. In the general case that’s not true.

SELECT * FROM Numbers
WHERE NOT Number > 100;

[Execution plan: index seek]

SELECT * FROM Numbers
WHERE NOT Number <= 100;

[Execution plan: index seek]

SELECT * FROM Numbers
WHERE NOT Number = 137;

[Execution plan: index seek]

These index seeks are returning most of the table, but there’s nothing in the definition of ‘SARGable’ that requires small portions of the table to be returned.

That’s mostly that for SARGable in SQL Server. It comes down to having no functions on the column and no implicit conversions of the column.
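
The classic example that combines both rules is a date predicate: rather than wrapping the column in a function, express the filter as a range on the bare column. A sketch, with a hypothetical Transactions table:

-- Non-SARGable: function on the column
SELECT 1 FROM Transactions WHERE YEAR(TransactionDate) = 2016;

-- SARGable: the same filter as a range on the bare column
SELECT 1 FROM Transactions
WHERE TransactionDate >= '20160101' AND TransactionDate < '20170101';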

(1) An explicit CAST of a DATE column to DATETIME still leaves the predicate SARGable. This is an exception that’s been specifically coded into the optimiser.

I’m a Pluralsight author!

My first course, Identifying & Fixing Performance Issues Caused by Parameter Sniffing, was published two weeks ago. It won’t be the last.

Recording the course was a voyage of discovery. Until then, I’d only ever done live presentations, blog posts and articles. I initially thought that the recording would be similar to live presentations, but it’s nothing close.

For presentations, I do them off-the-cuff. Oh, I rehearse them beforehand, but I have no script, no speakers notes, no cue cards. My slides are as much for me, to direct what I’m talking about at each point, as they are for the audience. Two presentations using the same slide deck are not going to be the same.

I initially tried that method for recording, and it was a mess, because the mistakes, half-sentences and corrections that are fine in a live presentation are not fine in a recording. The recording has to be near-perfect; all those little mistakes have to be edited out and re-recorded.

The first recording I did took three attempts, and there were still places that were wrong.

So I switched to scripting the entire thing, and then just ‘reading’ the script, taking care that it didn’t sound like reading a script. That works much better, but the time to write the script is huge. I speak at roughly 160 words/minute. A full page in Word, with default font and spacing, is around 600 words. 30 minutes of recording means around 8-9 pages of script.

The recording, I’ve found, is the least time-consuming part of the exercise (which is good, because it’s only quiet enough to record after 8PM; I live just off a busy road with a school a block away).

The editing is the most tedious. 20 minutes of finished video requires around 40 minutes or more of recording and probably 1.5-2 hours of editing, more if it was a demo.

The demos are still a problem and one where I need to figure out a good process. What I did for this course was to record the video of the demo, do a quick edit to take out mistakes, then record the voice over the top, then edit them together. While it works, it’s a monumental pain. A 15 minute demo took 3 hours of editing to put together.

For the next one I’ll try recording the demos, video and audio, in smaller chunks, which should make it easier to piece the audio and video together; the finished clips can then be edited together at the end. Hopefully that’ll make the demos less of a pain.

Pass Summit 2016 abstract reviews

Following on Steve’s blog post on summit abstracts, I decided to publish mine.

My comments are not intended as an attack on the program committee; it’s an incredibly hard job that you couldn’t pay me to do (I’ve done similar work before and hated it). What I hope comes out of this, and the other posts which Steve reviewed, is that comments become more constructive and can be used to improve the abstracts.

When the initial mails were sent out, I had one general session and one lightning talk accepted. Since then, I’ve been asked to also present one of the sessions that was listed as alternate.

So, without any editing, the abstracts and comments:

All about Indexes (half-day, level 300, declined)

Indexes are essential to good database performance, but it can be hard to decide what indexes to create and the &#39;rules&#39; around indexes often appear to be vague or downright contradictory.

In this session we&#39;ll dive deep into indexes, have a look at their architecture and internal structure and how that affects the way that indexes are used in query execution. We’ll look at why clustered indexes are recommended on almost all tables and how their architecture affects the choice of columns. We’ll look at nonclustered indexes; their architecture and how query design affects what indexes should be created to support various queries.

Comments

Abstract starts off really well, but last sentence has some structure/repetition issues. Level might be appropriate, but I wonder if in a 3 hour session you can cover enough detail to justify a 300 level.

Nice abstract and overall flow between title, abstract and goals.

Abstract seems well written and gives decent insight into session contents. It would be better to mention 2014 & 2016 features to make the session more attractive. The topic of indexes is somewhat overdone and may have some difficulty attracting attendees. Goals are decent but somewhat generic and could benefit from being more tangible. 25% demo on this topic seems a bit low to keep attendees engaged for a 3 hour session.

Need more meat in abstract. Are you going to talk about column, filtered or other index types? Only 25% demos, you have 3 hours???

Could cover more breadth in 3 hours. Filtered, sparse, columnstore etc.

I don’t see this for a half-day session. The goal are just too weak. And only 25% demo’s? You mean you are going to be talking for 3 hours and demoing for an hour? zzzzz. Need to flesh this abstract out a lot more.

Subjective: For a 3h long session, even at lvl300 , would expect a bit more demos to keep the attendees from nodding off…

Interesting combination. One reviewer wonders if I can cover enough material in 3 hours to justify a level 300, while others imply that there’s not enough material to spend 3 hours on.

Thing is, I’ve given this presentation twice before, as a 3 hour session. Pass Summit in Charlotte and SQLBits in London. It fits into 3 hours, providing there aren’t too many questions, that is. If I was going to add clustered and nonclustered columnstores and Hekaton’s range and hash indexes (and the interactions between them, eg non-clustered rowstore index on a clustered columnstore), then it would be a full day precon at least.

There are no 2014 or 2016 features mentioned, because the rowstore indexes didn’t change much in either version. Sure, I could call out the increased key size, but that’s not exactly abstract material. Maybe a footnote on one slide.

As for the topic being overdone and likely to have difficulty attracting attendees: there are only two sessions on rowstore indexes this year, a precon (Kendra Little) and a 100-level general session (Kathi Kellenberger), plus a few internals presentations and performance-related presentations that include indexing. Every time I’ve given an indexing presentation at a SQLSaturday, Pass Summit or SQLBits I’ve had a packed room. Sure, it isn’t shiny and new, but it’s probably relevant to most systems we design, develop and administer.

“You mean you are going to be talking for 3 hours and demoing for an hour? “ <snark_mode = on> No, I planned to do the middle hour as a mime act. </snark_mode>

Last time I checked, a 3 hour session meant talking for 3 hours. I’m sorry that indexing sounds so dry and boring that attendees are sure to fall asleep during it. I haven’t yet had anyone falling asleep in my indexing sessions. Maybe everyone who did, did so quietly and I didn’t notice. Maybe I should bring fireworks next time.

Go, Go, Query Store! (general session, level 200, accepted)

One of the hardest things to do in SQL is to identify the cause of a sudden degradation in performance. The DMVs don’t persist information over a restart of the instance and, unless there was already some query benchmarking (and there almost never is), answering the question of how the queries behaved last week needs a time machine. Up until now, that is. The addition of the Query Store to SQL Server 2016 makes identifying and resolving performance regressions a breeze.

In this session we’ll take a look at what the Query Store is and how it works, before diving into a scenario where overall performance suddenly degraded, and we’ll see why Query Store is the best new feature in SQL Server 2016, bar none.

Comments

Topic: Great topic. SQL Server 2016 and new shiny features like QS will definitely draw a crowd.
Abstract: Very well written with clear supportive goals.
Subjective: This sounds like a great session I would like to attend. The one issue I have is that it is listed as only 25% demo. Demos are the best way to who off QS functionality. 50% demo would be better.

I would like to attend this session

The outline seems to clearly describe the contents of the presentation. The topic and goals should be compelling to attendees. The target audience should be big enough to support this session. There appears to be a reasonable amount of live demonstrations in relation to the topic being presented.

Abstract: detailed, clear
topic: relevant and new, one of the more interesting feature of sql server 2016
Subjective rating: a good session

Abstract: well written, good topic
Topic: good topic, eye catching
Subjective: good session, would be interested in this

Nothing much to say here. The session will likely be closer to 50% demos; the original plan was 25%, but when I built the slides for SQLSaturday Iceland, I ended up putting in more demos and fewer slides.

Is that a parameter I smell? (general session, level 300, declined)

All too often a forum post on erratic query performance is met with a reply &quot;Oh, it&#39;s parameter sniffing. You can fix it with &lt;insert random solution here&gt;.&quot; The problem with answer, even if it has identified the cause, is that it’s only part true. Parameter sniffing is not simply a problem that needs fixing; it&#39;s an essential part of well-performing queries. In most cases.

Come to this session to learn what Parameter Sniffing really is and why it’s a good thing, most of the time. Learn how to identify the scenarios where it’s not good, why a feature that is supposed to improve query performance sometimes degrades it, and what your options are for resolving the problems when they do occur.

Comments

Abstract: Well presented, explains purpose of session.
Topic: catchy title, still timely and relevant.
Subjective: Nice mix of demo for this topic.

title not cute, if someone does not know about this they would not come Good abstract and good goals

Abstract: Grammar is a bit rough making the abstract difficult to read.

topic is decent . while it not “latest” or “hot”, it address a real life problem that developers can face. Abstract is a bit too generic and could benefit from more tangible details. Goals are okay but somewhat generic. demo % is decent.

Thanks for submitting a session that covers a really hot button topic in database queries! Looking forward to seeing it.

Again, fair enough, nothing much I can say about these comments.

On Transactions and Atomic Operations (general session, level 200, declined)

If there’s one thing that we, as SQL developers, do, it&#39;s not use enough transactions.

Transactions are critical when multiple changes need to be made entirely or not at all, but even given that it’s rare to see transactions used at all in most production code

In this session, we&#39;ll look at what transactions are and why we should use them. We&#39;ll explore the effects transactions have on locking and the transaction log. We&#39;ll investigate methods of handling errors and undoing data modifications, and we&#39;ll see why nested transactions are a lie.

Comments

Abstract: Grammar in first sentence is difficult to read. Too many commas.

good topic, need more meat in abstract and goals.

topic may not be “latest” but its a foundation subject and of relevance to most database developers. abstract clearly conveys what to expect from the session. goals are somewhat terse and could benefit from tangible details. demo % is a bit low.

Atomic Operations is in the title, but not used/defined in the abstract itself.

Abstract: Second sentence is a bit run-on and incoherent.
Topic: Fit for purpose, but likely a bit niche, as the author already seems to fears.
Subjective: Would have expected at least 50% demo – after all the easiest way to understand the need for TRANS is to see data buggered up by colliding updates.

Yes, I do have a tendency to torture my commas (they always confess at the end) and play with grammar. I will try to remember next year to write with simple, plain grammar. I do need to flesh this out a bit for next time it’s submitted anywhere.

I’m not sure what in the abstract conveys my fear that it’s a niche session; I don’t think it is. From working on various clients’ production systems over the last few years, I very seldom see explicit transactions. It’s something we don’t do enough of, not a niche topic.

I’m also not sure what the reviewer meant by ‘colliding updates’. Updates, like all statements, are always part of a transaction, even if it is just one automatically started and committed. If two sessions were to try to update the same column of the same rows at the same time to different values, the outcome would be as if one or the other had run, not a mixture of the two. And wrapping the update in an explicit transaction won’t change that behaviour.

The Many Latencies of TempDB (general session, level 400, originally alternate, later accepted)

TempDB gets a bad rap when it comes to performance and scalability, and it’s all-too-often well deserved. A badly configured TempDB can have devastating effects on throughput. Combine that with poor queries and, well, you didn’t have any plans for the weekend, right?

In this session we’ll look at some common causes of TempDB contention, both query-based and configuration-based. We’ll look at guidelines for configuring TempDB and when and why you’d make various changes, and we’ll cover more ways to monitor TempDB than you can shake a stick at.

Now, about those weekend plans…

Comments

Abstract: Clear and well written abstract. No prerequisite listed.
Topic: Topic and goals seem like they would be appealing to attendees.
Subjective: Sounds like a good session that people would benefit from.

Abstract: interesting
Topic: relevant
Subjecttive rating: compelling, high level

Abstract: The outline and details of this abstract are well written
Topic: This is a good topic
Subjective: I may attend this session

The session prerequisites are helpful. A great topic and the Abstract is almost good. It is funny but not enough information about the session.

Session prerequisites: TBC?
Goal 1: Correct the “&#39;”
Objective: I would like to attend this session.

The pre-reqs here were a mistake on my part. Despite checking and re-checking the abstracts, that mistake slipped through (tbc = to be completed).

I would love to have corrected the “&#39;”, as well as the &quot; and &gt; that appeared scattered through the abstracts, but they appeared on submit and editing afterwards didn’t help. Something, somewhere in the website messed up the HTML encoding a bit.

Watch a query run. Run, query, run! (Lightning talk, level 100, declined)

Previously if you wanted to get any run-time statistics for a query, you had to include the actual execution plan and run the query to completion. No more! New in SQL 2016 is the live query statistics that let you watch the execution of a query, in real time, and see when the operators run and where the data flows.

In this lightning talk we’ll look at how the live query statistics works and discuss some scenarios where this will really help debugging strange query behaviour.

Comments

Perfect topic/abstract for Lightning Talk, sign me up!

interesting and current topic. well written abstract with good details of session contents. Goals are terse but clear.

Sounds like a good topic for a Lightning Talk.

Great topic and good intro to query store. Hopefully this talk will provide concise knowledge.
Topic is good, will interest a large amount of attendess.
Level is excellent – it’s a new feature so 100 level is great. I would have liked to see mention of query store in the title.

good topic

New topic on new technology. Small enough to be interesting

Interesting and great topic. Abstract and Goals talks to each other. New features is always a good place to grab peoples attention.

The comment on wanting Query Store to appear in the title is mystifying, because this session has nothing to do with Query Store. It’s showing off Live Query Statistics (which, while nice, is too small a feature to warrant anything more than a lightning session).

Why Temporal Tables and Stretch Database Are Best Friends (Lightning talk, level 200, accepted)

SQL Server 2016 introduced, among several other new features, Temporal tables and Stretch database.

From a distance, they appear not to have much to do with each other. Temporal tables allow you to query the table as it was at a point in time. Stretch allows for tables to span the earthed SQL Server and the cloud.

In this short session we’ll look at why these two features work spectacularly well together and why a stretched temporal table makes perfect sense.

Comments

Clear understanding of the objective, and as a “new” feature people will drawn to this.

Perfect Lightning Talk abstract, focused topic and something new with the latest version of SQL Server.

Sounds interesting!

interesting topic – new and relevant. Abstract is brief but gives good insight into session contents. Goals are terse, but clear and tangible. No demos may be off-putting to some attendees.

Earthed SQL is an interesting expression and certainly made me think.
There’s two new features here – is that too much for a 10 minute lightening talk?
If good connection is made early between the two then this could be good
Three goals seem to sum up the presentation well.

Try just one topic in 10 minutes or create a 75 or 3 hour session on new features of 2016

No real “pull” for a short session.

“No demos may be off-putting”. Um, what? This is a 10-minute lightning talk, I’m not demoing features in 10 minutes and I’d be very surprised if attendees expected demos in a 10 minute session.

I can’t take credit for the “earthed SQL” expression; that one’s from Rimma’s keynote presentation in 2014.

SQL Server 2016 features: R services

One of the more interesting features in SQL 2016 is the integration of the R language.

For those who haven’t seen it before, R is a statistical and data analysis language. It’s been around for ages, and has become popular in recent years.

R looks something like this (and I make no promises that this is well-written R); it’s taken from a Morse code-related challenge:

library(stringr)  # str_split and str_c below come from the stringr package

# Message and MaxCharacterLength are defined earlier in the challenge code
MessageLetters <- str_split(Message, "")

MessageEncoded <- list(1:length(MessageLetters))

ListOfDots <- lapply(lapply(c(MaxCharacterLength:1), function(x) rep.int(".", times = x)), function(x) str_c(x, collapse=''))
ListOfDashes <- lapply(lapply(c(MaxCharacterLength:1), function(x) rep.int("-", times = x)), function(x) str_c(x, collapse=''))

If you’re interested in learning R, I found the Learning R book to be very good.

SQL 2016 offers the ability to run R from a SQL Server session. It’s not that SQL suddenly understands R; it doesn’t. Instead, it can call out to the R runtime, pass data to it and get data back.

Installing the R components is very easy.

[Screenshot: SQL Server setup with the R services feature selected]

And there’s an extra licence to accept.

[Screenshot: the additional R licence agreement in setup]

It’s worth noting that the pre-installed Azure gallery image for RC3 does not include the R services. Whether the RTM one will or not remains to be seen, but I’d suggest installing manually for now.

Once installed, it has to be enabled with sp_configure.

EXEC sp_configure 'external scripts enabled', 1
RECONFIGURE

It’s not currently very intuitive to use. The current way R code is run is similar to dynamic SQL, with the same inherent difficulties in debugging.

EXEC sp_execute_external_script
  @language = N'R',
  @script = N'data(iris)
    OutputDataSet <- head(iris)'
  WITH RESULT SETS (([Sepal.Length] NUMERIC(4,2) NOT NULL, [Sepal.Width] NUMERIC(4,2) NOT NULL, [Petal.Length] NUMERIC(4,2) NOT NULL, [Petal.Width]  NUMERIC(4,2) NOT NULL, [Species] VARCHAR(30)));
go

It’s possible to pass data in as well, using a parameter named @input_data_1 (there’s no @input_data_2) and, from what I can tell from the documentation, @parameter1, which takes a comma-delimited list of values for parameters defined with @params. There are no examples using these that I can find, so it’s a little unclear how precisely they work.

See https://msdn.microsoft.com/en-us/library/mt604368.aspx and https://msdn.microsoft.com/en-us/library/mt591993.aspx for more details.
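
Based on those pages, here’s my best guess at how the parameter passing would look; treat it as an untested sketch rather than gospel (the Numbers table and the parameter name are mine):

EXEC sp_execute_external_script
  @language = N'R',
  @script = N'OutputDataSet <- head(InputDataSet, HowMany)',  -- @HowMany surfaces in R as HowMany
  @input_data_1 = N'SELECT Number FROM Numbers',              -- arrives in R as InputDataSet
  @params = N'@HowMany INT',
  @HowMany = 3
  WITH RESULT SETS ((Number INT));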

It’s not fast. The head(iris) example above took ~4 seconds to execute. This is on an Azure A3 VM. Not a great machine admittedly, but the R code, which just returns the first 6 rows of a built-in data set, ran in under a second on my desktop. This is likely not something you’ll be doing as part of an OLTP process.

I hope this external_script method is temporary. It’s ugly, hard to troubleshoot, and it means I have to write my R somewhere else, probably R Studio, maybe Visual Studio, and move it over once tested and working. I’d much rather see something like

CREATE PROCEDURE GetIrisData
  WITH Language = 'R' -- or USQL or Python or …
  AS
…
GO

Maybe in SQL Server 2020?

SQL Server 2016 features: Temporal Tables

Another new feature in SQL 2016 is the Temporal Table (or System Versioning, as it’s referred to in the documentation). It allows a table to be versioned, in terms of data, and for queries to access rows of the table as they were at some earlier point in time.

I’m quite excited about this, because while we’ve always been able to do this manually, with triggers or CDC or CT, it’s been anything but trivial. I remember trying to implement a form of temporal tables back in SQL 2000, using triggers, and it was an absolute pain in the neck.

So how does it work? Let’s start with a normal un-versioned table.

CREATE TABLE dbo.Stock (
  StockReferenceID INT IDENTITY(1, 1) NOT NULL,
  IssueID INT NULL,
  Condition VARCHAR(10) NULL,
  AvailableQty SMALLINT NULL,
  Price NUMERIC(8, 2) NULL,
  PRIMARY KEY CLUSTERED (StockReferenceID ASC)
);

To make that a temporal table, we need to add two columns, a row start date and a row end date.

ALTER TABLE Stock
  ADD PERIOD FOR SYSTEM_TIME (RowStartDate, RowEndDate),
    RowStartDate DATETIME2 GENERATED ALWAYS AS ROW START HIDDEN NOT NULL DEFAULT GETDATE(),
    RowEndDate DATETIME2 GENERATED ALWAYS AS ROW END HIDDEN NOT NULL DEFAULT CONVERT(DATETIME2, '9999-12-31 23:59:59.9999999');

It’s a little complicated. From my, admittedly limited, testing, the NOT NULL and the DEFAULT are required. The start time’s default needs to be GETDATE() and the end time’s default needs to be the max value of the data type used.

Hidden is an interesting property, it means that the columns won’t appear if SELECT * FROM Stock… is run. The only way to see the column values is to explicitly state them in the SELECT clause.
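
In other words (using the Stock table from above):

SELECT * FROM Stock; -- RowStartDate and RowEndDate are not returned

SELECT StockReferenceID, RowStartDate, RowEndDate
FROM Stock; -- named explicitly, they are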

[Screenshot: SELECT * omits the hidden period columns; naming them explicitly returns them]

I wonder if that property will be available for other columns in the future. It would be nice to be able to mark large blob columns as HIDDEN so that SELECT * doesn’t end up pulling many MB per row back.

That ALTER adds the row’s two timestamps. Enabling the versioning then just requires

ALTER TABLE Stock
  SET (SYSTEM_VERSIONING = ON);

Once that’s done, the table gains a second, linked, table that contains the history of the rows.

[Screenshot: Object Explorer showing the linked history table beneath the Stock table]

421576540 is the object_id for the Stock table. It is also possible to specify the name for the history table in the ALTER TABLE statement, if preferred.
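
Something along these lines; the history table name here is my own choice:

ALTER TABLE Stock
  SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.StockHistory));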

The history table can be queried directly. The start and end times aren’t hidden in this one.

[Screenshot: querying the history table directly, period columns visible]

Or, the temporal table can be queried with a new clause added to the FROM: FOR SYSTEM_TIME… Full details at https://msdn.microsoft.com/en-us/library/mt591018.aspx
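
For example, to pull every version of a single row (the reference ID is made up):

SELECT StockReferenceID, Price, RowStartDate, RowEndDate
FROM Stock FOR SYSTEM_TIME ALL
WHERE StockReferenceID = 1;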

[Screenshot: the full history of a row returned with FOR SYSTEM_TIME]

Very neat.

One thing does need mentioning: this is not an audit solution. If the table is altered, history can be lost. If the table is dropped, history is definitely lost. Auditing requirements are such that audit records should survive both. Use this for historical views of how the data looked; if you need an audit as well, look at something like SQL Audit.

Upcoming conferences

It’s shaping up to be a busy year for conferences, well, busy by my standards that is. While I’m unfortunately missing SQLBits, I’ll still be getting a chance to enjoy an English summer.

InsideSQL

The InsideSQL conference is a new conference organised by Neil Hambly. It’s a deep-dive conference, with longer sessions than many conferences offer, and an opportunity to dig deep into topics.

I’m presenting two sessions there, first a look at SQL waits, why there are waits and what various waits types mean, second a nice deep discussion on SQL Server indexes.

SQLSaturday Iceland

Iceland has been on my to-visit list for some time, so a SQLSaturday there? Perfect excuse for a short visit and a bit of exploration. I wonder if there’s any chance I’ll get to see the Aurora Borealis.

I’m doing a full-day precon on the Thursday (Friday is an Icelandic public holiday) on execution plans, as well as a regular session on Query Store on the Saturday.

South African SQLSaturdays

Fast forward to September: the South African SQLSaturdays are again running on back-to-back weekends. Johannesburg on the 3rd of September, Cape Town on the 10th and Durban planned for the 17th.

We’d love to have more international speakers join us for these. The local weather is lovely in September, and the exchange rate to the dollar/pound so poor that you won’t believe how cheap things are here. Come down for a two week African holiday, and get three SQLSaturday presentations in on the side.

SQL 2016 features: Stretch Database

Stretch database allows for a table to span an ‘earthed’ SQL Server instance and an Azure SQL Database. It allows for parts (or all) of a table, presumably older, less used parts, to be stored in Azure instead of on local servers. This could be very valuable for companies that are obliged to retain transactional data for long periods of time, but don’t want that data filling up the SAN/flash array.

After having played with it, as it is in RC2, I have some misgivings. It’s still a useful feature, but probably not as useful as I initially assumed when it was announced.

To start with, the price. Stretch is advertised as an alternative to expensive enterprise-grade storage. The storage part is cheap; it’s costed as ‘Read-Access Geographically Redundant Storage’ blob storage.

[Screenshot: Stretch database storage pricing]

Then there are the compute costs.

[Screenshot: Stretch database compute pricing]

The highest tier is 2000 DSU at $25/hour. To compare the costs to SQL Database, a P2 has the same compute costs as the lowest tier of Stretch, and that’s with a preview discount applied to Stretch. It’s going to be a hard sell to my clients at that price (though that may be partially because of the R15=$1 exchange rate).

The restrictions on what tables are eligible are limiting too. The documented forbidden data types aren’t too much of a problem; this feature’s intended for transactional tables, maybe audit tables, and the disallowed data types are complex ones: HierarchyID, Geography, XML, SQL_Variant.

A bigger concern is the disallowed features. No computed columns, no defaults, no check constraints, and the table can’t be referenced by a foreign key. I can’t think of too many transactional tables I’ve seen that don’t have one or more of those.

It’s looking more like an archive table, specifically designed to be stretchable, will be needed, rather than stretching the transactional table itself. I haven’t tested whether it’s possible to stretch a partitioned table (or partition a stretched table) in order to partition-switch into a stretched table. If it is, that may be the way to go.

I have another concern about Stretch, related to debugging it. When I tested in RC2, my table was listed as valid by the stretch wizard, but when I tried, the ALTER TABLE succeeded and no data was moved. It turned out that the NUMERIC data type wasn’t allowed (a bug in RC2, I suspect, not an intentional limitation), but the problem wasn’t clear from the stretch-related DMVs. The problem is still present in RC3.

[Screenshot: stretch-related DMVs with no useful error details]

The actual error message was nowhere to be found. The new built-in extended events session specifically for stretch tables was of no additional help.

[Screenshot: the stretch extended events session output]

The error log contained a different message, but still not one that pinpointed the problem.

This blog post was based on RC2 and written before the release of RC3; however, post-RC3 testing has shown no change yet. I hope that at least the DMVs are expanded before RTM to include actual error messages and more details. We don’t need new features that are hard to diagnose.

As for the other limitations, I’m hoping that Stretch will be like Hekaton, very limited in its first version and expanded out in the next major version. It’s an interesting feature with potential, I’d hate to see that potential go to waste.