Posted on

The BREAK Keyword in Transact SQL

BREAK is a useful piece of control-of-flow language in SQL Server, but it only works in specific situations.

Want to take the quiz first?

Take the quiz on BREAK here, then head back here to see how you compare to other folks who took the quiz.

Break Questions, Answers, and Explanations

1) If you run this code in a SQL Server Management Studio session, what will appear in the Messages window?
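The quiz code isn't preserved in this copy. A minimal script in that spirit, using BREAK with no WHILE loop anywhere in sight (a reconstruction, not the original quiz code):

```sql
IF 1 = 1
BEGIN
    PRINT 1;
    BREAK;  -- no WHILE loop encloses this BREAK
END;

PRINT 2;
```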




Msg 135, Level 15, State 1, Line 6
Cannot use a BREAK statement outside the scope of a WHILE statement.

How’d people do?

  • Correct: 237 (77%)
  • Incorrect: 70 (23%)

I know, it seems like this should work! But, in fact, BREAK only breaks you out of a WHILE loop.

For other purposes, you may want to use RETURN.

2) If you run this code in a SQL Server Management Studio session, what will appear in the Messages window?

WHILE 1 = 1
    BREAK;

PRINT 2;

Answer: 2

This one was a little trickier…

  • Correct: 202 (66%)
  • Incorrect: 105 (34%)

In this case BREAK will cause us to exit that WHILE loop (so we won’t be stuck in it forever). BREAK doesn’t cause the execution of the whole batch to terminate, though, so we go on to the next statement, and PRINT 2 is executed.

3) If you run this code in a SQL Server Management Studio session, what will appear in the Messages window?

WHILE 1 = 1
    IF 1 = 1
        PRINT 2;

BREAK;


Msg 135, Level 15, State 1, Line 6
Cannot use a BREAK statement outside the scope of a WHILE statement.

This one foiled even more folks:

  • Correct: 135 (44%)
  • Incorrect: 172 (56%)

The key to this one is that we haven’t defined a statement block. That means that the WHILE statement applies to the next statement only. The IF statement also applies to the next statement only. The BREAK is left out in the cold, and it will throw an error if it is not part of a WHILE loop.

But it’s a tricky question, because it might seem like SQL Server would get to the PRINT and be stuck in an infinite loop, never getting to the point that it throws the error. That doesn’t happen – as soon as we execute, the parse process sees the mistake and we get the error right away.

4) If you run this code in a SQL Server Management Studio session, what will appear in the Messages window?

WHILE 1 = 1
BEGIN
    BREAK;
    PRINT 2;
END;

Answer: Commands completed successfully.

  • Correct: 239 (78%)
  • Incorrect: 68 (22%)

In this case, the BREAK and the PRINT are within a defined BEGIN / END statement block together, inside a WHILE loop. When we hit the BREAK it exits the loop, never getting to the PRINT statement.


Posted on

Using RETURN in Transact SQL: Quiz Results

RETURN is a “control of flow” language keyword in Transact-SQL. It’s very useful for managing when code does — and does NOT — get executed in procedural code, and it can also be used to communicate the status from procedural code.

But not everyone knows how to use RETURN, and learning about it can be awkward because of some quirks of SQL Server Management Studio. It is well worth taking the time to get used to it, though!

Want to test your knowledge?

Take this quiz on RETURN, then head back here to see how you compare to other folks who took the quiz.

Note: I’m going to implement a change soon so that only the four most recent quizzes are open to everyone. I am still going to post questions and answers in follow up posts, but the fun interactive bit won’t be as open (after a month for each quiz).

So now is a good time to go through my quiz archive if you haven’t already!

Quiz results and Discussion of RETURN

I changed the wording on question #4 on July 4 after Nick posted a message about the wording — and he was totally right, there was more than one defensible correct answer.

Changing the wording did not significantly change the average score, it turns out. The average percentage dropped from 65% to 64% once that question was clarified. (That’s not a bad percentage at all — I think a lot of folks aren’t used to using RETURN, and hopefully the quiz made them curious about how it can be useful.)


  • Correct: 292 (78%)
  • Incorrect: 84 (22%)

Answer: 2

The key to this question is that RETURN controls flow within a given batch, and the default batch separator is the word “GO”.

If you want to stop the execution of a script you’re running in SSMS, the RETURN keyword won’t stop everything if you have more than one batch.
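The question's code isn't preserved in this copy; a two-batch script matching the explanation above would look something like this (a reconstruction, not the original):

```sql
RETURN;   -- ends this batch: PRINT 1 never runs
PRINT 1;
GO        -- batch separator: a new batch begins

PRINT 2;  -- this batch still executes, so "2" appears in Messages
```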


  • Correct: 249 (66%)
  • Incorrect: 127 (34%)

Answer: Commands completed successfully.

In this question, RETURN appears inside a single batch. It stops the flow of execution before the PRINT statement, and SSMS simply reports, “Commands completed successfully.”
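A minimal single-batch script matching this description (a reconstruction, not the original quiz code):

```sql
RETURN;   -- ends the batch right here
PRINT 1;  -- never reached
```

Nothing is printed, because the batch ends at the RETURN.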

IF 1 = 1
BEGIN
    RETURN;
END;

PRINT 2;

  • Correct: 256 (68%)
  • Incorrect: 120 (32%)

Answer: Commands completed successfully.

This time everything is in a single batch again, but the RETURN is inside a BEGIN / END block.

The gist of this question is: does RETURN exit only the BEGIN / END block, or the entire batch?

The answer is that it exits the entire batch, so execution never reaches the PRINT statement.

CREATE PROCEDURE dbo.test
AS
    RETURN 1;
GO

EXEC dbo.test;
  • Correct: 29 (58%)
  • Incorrect: 21 (42%)

Answer: Commands completed successfully.

The wording in this post is the “corrected” wording. The original wording was less clear, so I’ve reported the correct/incorrect ratios for July 5 – 9, after the wording was changed.

This one may seem silly, but I think it’s very important to know that SSMS will not report anything in the Messages tab in this case. It doesn’t mean that RETURN didn’t work, it simply means that the SSMS interface doesn’t show it!

If you want to capture the value that RETURN sends back and do something with it, you need to assign it to something, like this:

CREATE OR ALTER PROCEDURE dbo.test
AS
    RETURN 1;
GO

DECLARE @RESULT INT;
EXEC @RESULT = dbo.test;
PRINT @RESULT;


Used in this way, RETURN can be incredibly useful when it comes to returning status from procedural code.

Posted on

Find the Scalar Function Call: What it Means If It’s Hiding in Probe Residual

User defined functions are fairly simple to create in SQL Server, but figuring out exactly how they are being used can take a little time – even in a simple execution plan. This is because the functions can be tucked away into lots of different operators in the plan, even join operators.

The good news: once you learn what to look for, it’s not terribly complicated. There are some fancy terms involved, but the definitions are pretty simple.

Here’s a quick look at a function call hidden in a hash match operator, which is being used to implement an inner join.

The estimated query execution plan

This query uses the Microsoft WideWorldImporters sample database. It calls Website.CalculateCustomerPrice, a scalar user-defined function that ships with the sample database:

SELECT COUNT(*)
FROM Sales.Invoices AS i
JOIN Sales.InvoiceLines AS il
    ON i.InvoiceID = il.InvoiceID
WHERE il.UnitPrice <> [Website].[CalculateCustomerPrice] ( i.CustomerID, il.StockItemID, i.InvoiceDate );

Here’s the query execution plan:

There’s not a lot of places for the function call to hide in this particular plan, right? No Compute Scalars to look into at all.

If we look at the properties of the Hash Match join, we can find the function call:

How does the hash join work?

The description at the top of this tool tip is quite helpful when it comes to remembering how hash matches work: SQL Server is building a table in memory from the Sales.Invoices table. It’s then “probing” into it with rows from the Sales.InvoiceLines table.

The “Hash Keys Probe” part of the tool tip tells us that it is looking for matches based on the InvoiceID column (our join column).

What about the ‘Probe Residual’?

After we find matches based on the InvoiceID, we have more work “left over” — that’s the “residual” bit.

For every row that matches, SQL Server is plugging values into the Website.CalculateCustomerPrice() function and comparing the result to the UnitPrice column, just like we asked for in the WHERE clause.

In other words, this is happening for every row in Sales.InvoiceLines that has a matching row in Sales.Invoices.

Which is every single invoice & invoice line, as it turns out.


If you’re ever looking for an example of a query where collecting an actual execution plan dramatically slows down execution (observer effect), this is a good one. On instances I’ve tested, this query takes 14-18 seconds to run when no plan is collected, and about a minute longer when you enable actual plan collection.


When you’re looking at execution plans, it can be very tricky to spot user defined functions in the plan:

  • You not only need to check Compute Scalar operators when they’re present, but also look carefully at the properties of other operators, including joins
  • If you spot the function call as part of a “probe residual” on a hash join, it is being applied to every row that meets the initial join criteria
Posted on


My most recent Quizletter featured a quiz on ORDER BY in TSQL, complete with questions on the OFFSET and FETCH clauses which we got in SQL Server 2012.

One great thing about teaching TSQL School is that it reminds me of these cool options that can be easy to forget.

Want to take the quiz before you read the answers?

Take the quiz here first, if you’d like.  If you get 100%, you probably don’t need to read this post 😉

Overall Quiz Stats

Folks did pretty well on this quiz, but lots of people did fall into one mean little trap that I left in one question. (Sorry! I can be tricksy.) Here are the stats at the time of writing this post:

Entries: 436
Average Score (out of 4): 2.64
Average Percentage: 66%

Now for the fun stuff, let’s talk about answers!

Question 1

1) How will the results of this query be ordered (and why)?
SELECT PersonID, FullName
FROM Application.People;

Correct: 76%
Answer: No ordering is guaranteed in this case

This is very important to know: although the answers may come back ordered by PersonID in this case, you cannot count on that always being the case, and it could change at any point. Ordering of results is never guaranteed unless you explicitly use an ORDER BY in your query.

Question 2

2) What will this ORDER BY do?*
SELECT PersonID, FullName
FROM Application.People
ORDER BY 1;

Correct: 90%
Answer: Order the results by the first column listed, PersonID

Numbers used like this in the ORDER BY clause represent the column position. I was surprised that so many people knew the answer to this one, to be honest. It is a little bit troubling, because Microsoft calls out using numbers as positional representations in ORDER BY as an anti-pattern that should be avoided.

But, I confess, I personally do use this myself when I’m writing quick and dirty code (and sometimes forget to fix it, too).

Question 3

3) What will this OFFSET / FETCH clause do?*
SELECT PersonID, FullName
FROM Application.People
OFFSET 10 ROWS FETCH NEXT 5 ROWS ONLY;

Correct: 26%
Answer: This query will throw a syntax error

Why? You can only use OFFSET with an ORDER BY. (This ain’t no TOP operator which doesn’t require ORDER BY, this is a part of ORDER BY.)

I asked this question to try to get a sense of how many people are currently using the OFFSET clause already in code and would pick out the goof based on familiarity. Seems like not many! That’s OK, but it’s good to know how this works for those times when it may come in handy.
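For reference, here's a version that parses and runs, with an ORDER BY added (the column names are assumptions, not from the original quiz):

```sql
SELECT PersonID, FullName
FROM Application.People
ORDER BY PersonID            -- OFFSET / FETCH are part of ORDER BY
OFFSET 10 ROWS FETCH NEXT 5 ROWS ONLY;
```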

Question 4

4) What will this OFFSET clause do?*
SELECT PersonID, FullName
FROM Application.People
ORDER BY PersonID
OFFSET 10 ROWS;

Correct: 72%
Answer: Return all the rows except for the 10 with the lowest PersonIDs

You are allowed to specify OFFSET without a FETCH NEXT. In this case, we are ordering by PersonID. We didn’t specify ascending or descending, so the default ascending sort order is used for the order. We are OFFSET-ing the first 10 rows ASC, so that’s the lowest 10 PersonIDs. We’ll get all the other rows, because we haven’t limited how many will be fetched.

And now, for a terrible joke

Unlike cursors, order by has finally made fetch happen. (Sorry.)

Nice work, folks!

Posted on

Perks for My SQLPASS Summit Precon: Prevent ‘Fake News’ in Your Data! Isolation Levels Demystified

I’m proud, giddy, and just plain overly excited to be giving a pre-conference session at the SQL PASS Summit in Seattle, Washington on November 5, 2018.

This is going to be a fun, demo-packed day which will teach you skills you can use throughout your career as a developer, database administrator, or data professional.

Why attend?

While features and tricks change from year to year (or even month to month), understanding isolation levels is critical for your long term future.

The Session: Prevent ‘Fake News’ in Your Data! Isolation Levels Demystified

Are your users seeing bad data? If you’re using the default isolation level of read committed in SQL Server, chances are that sometimes your users get incorrect results. If NOLOCK hints lurk in your code, the odds are even higher that sometimes your customers see information that just isn’t right.

In this day-long, demo-packed session, you’ll learn why a single statement may read rows twice, miss rows entirely, or return combinations of data that never existed in the database — and why that’s not a bug. You’ll learn what “read phenomena” are, which isolation levels are vulnerable to them, and the performance trade-offs which come from raising your isolation level to protect your users from bad data.

You’ll see how isolation levels work with newer technologies such as columnstore indexes, In-Memory OLTP, and Always On Availability Groups. You’ll discover why version-based isolation levels can be awesome, and what you need to look out for with these isolation levels to avoid race conditions that produce– you guessed it– incorrect results.

At the end of the day, we’ll pull together all this information into a guide. You’ll leave the seminar with the tools and knowledge to choose the right isolation levels for new and existing applications based on business and performance requirements.

What Does it Cost?

A single pre-conference session is $499. You can also bundle pre-conference sessions with registration at the full Summit.

The Perks

Attendees at my pre-conference session will not only get all the demos we show live, and a download of the PDF of slides, but also….

Perk: Watch It Again Later

Access to a recorded online session of my 8 hour seminar, “Conquer Blocking & Isolation Levels” for a full year.

Want to revisit a topic and get the nitty gritty details? You’ll be able to watch anytime, online.

Note: the live pre-conference sessions at PASS aren’t recorded for purchase anymore– however, you can buy recordings of “regular” sessions during the week from PASS in different formats.

Perk: A Month of SQLChallenges!

For four weeks following the conference, I’ll be hosting SQLChallenges on blocking and isolation level problems.

  • Attendees will get access to the problems and a chance to apply the skills they’ve learned.
  • I’ll hold a live, online session going over sample solutions for the problem. Attend, share your solution in Slack if you’d like, and ask questions.
  • Can’t make the SQLChallenge live? The recording will be posted and you’ll get online access for a year. All SQLChallenges support Q&A in the course pages.

Perk: Private Follow-Up Quizzes

Want even more learning? Following the SQLChallenges, attendees will get access to a series of four weekly quizzes about blocking and isolation levels to confirm your knowledge.

Combining this with the month of SQLChallenges will help cement your new knowledge, getting you the most value for your dollar.

Register here

I hope to see you at the PASS Summit! Sign up here to join my pre-conference session.

Postscript: SQL Symposium subscribers get access to all these perks as well. As sustaining members during the first year of SQLWorkbooks, SQL Symposium members get it all 🙂 

Posted on

Row Width Impact on Version Store Usage under Snapshot Isolation

A question came up in my webcast today for subscribers on the topic of Snapshot and Read Committed Snapshot Isolation – what impact will enabling these have on my SQL Server instance?

I talked a bit about how, for tempdb, this can vary widely depending on the number of updates and deletes running against the instance. (This is a simplification, there is more overhead, but this is often the biggest component.)

But after the webcast, I realized that the impact on the version store isn’t driven simply by the number of updates and deletes run. It will also vary based on the width of your rows — how many columns are in the table, what data types the columns have, and what data you have in the table.

Even if you change one very small column, the entire row is versioned – sort of

My favorite thing about SQL Server is that it includes rich tools to let us learn how the engine works by setting up simple demos. It’s truly awesome.

So I went to work to demonstrate row width impact on the version store — when only a tiny bit column is changed in the row.

Here’s how I did the test:

  • I created two tables, dbo.Narrow and dbo.Wide. They each have a bit column named bitsy, along with some other columns.
  • I inserted one row in each table, but I put a lot more data into the row in dbo.Wide.
  • I allowed snapshot isolation on the database
  • I began a transaction in another session under snapshot isolation and left the transaction open (so version store cleanup wouldn’t kick in while I looked around)
  • I updated the bit column named bitsy for the single row in each table, thereby generating a row-version in tempdb for each table

The code I ran to test this is here, if you’d like to play around with it.
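In outline, the setup looks something like this. The column names other than bitsy, and the exact value sizes, are illustrative sketches rather than the original script:

```sql
CREATE TABLE dbo.Narrow (
    i INT IDENTITY PRIMARY KEY,
    bitsy BIT NOT NULL DEFAULT 0
);
CREATE TABLE dbo.Wide (
    i INT IDENTITY PRIMARY KEY,
    bitsy BIT NOT NULL DEFAULT 0,
    j CHAR(8000) NOT NULL DEFAULT 'a',  -- fixed width: pads the row near the 8,060-byte in-row limit
    k NVARCHAR(MAX) NOT NULL DEFAULT REPLICATE(N'b', 4000)  -- large value pushed off-row to LOB pages
);

INSERT dbo.Narrow DEFAULT VALUES;
INSERT dbo.Wide DEFAULT VALUES;

ALTER DATABASE CURRENT SET ALLOW_SNAPSHOT_ISOLATION ON;

/* In a second session, open a snapshot transaction and leave it open
   so version store cleanup doesn't kick in:
   SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
   BEGIN TRAN;
   SELECT * FROM dbo.Narrow;
*/

-- Back in the first session, update only the bit column in each table:
UPDATE dbo.Narrow SET bitsy = 1;
UPDATE dbo.Wide SET bitsy = 1;
```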

The row in dbo.Narrow is 38 bytes of in-row data

I measured the width of the largest row in dbo.Narrow using sys.dm_db_index_physical_stats. It only has one row, so the max width IS our row width. It’s a lil bitty 38 bytes.
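I used something along these lines (the DETAILED mode is needed to get max_record_size_in_bytes):

```sql
SELECT index_level, alloc_unit_type_desc, max_record_size_in_bytes
FROM sys.dm_db_index_physical_stats(
    DB_ID(), OBJECT_ID('dbo.Narrow'), NULL, NULL, 'DETAILED');
```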

The row in dbo.Wide is 8,054 bytes of in-row data, and 8,012 bytes off row

On the other side of the scale is dbo.Wide. It takes up 8,054 bytes of in-row data (this happened because the ‘j’ column is a char column, which is fixed width). It also uses 8,012 bytes of LOB_DATA storage, because I used the REPLICATE function to put a lot of garbage into the ‘k’ column in dbo.Wide.

Let’s look at the version store!

We can query the version store through sys.dm_tran_version_store. Querying this in detail on a production system may not be great if there are a lot of versions, but this is an idle test instance and I’ve only updated two rows.
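For example, a lightweight look at the versions and their sizes:

```sql
SELECT transaction_sequence_num,
       record_length_first_part_in_bytes,
       record_length_second_part_in_bytes
FROM sys.dm_tran_version_store;
```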

There are two rows in my version store. The Narrow table has 24 bytes of versioning data, the Wide table has 8,040 bytes of versioning data.

Observation 1: my whole row was versioned except for the LOB data

Large object data stored on LOB pages, like that in my NVARCHAR(MAX) column, can be versioned. Snapshot Isolation and Read Committed Snapshot Isolation work with LOB data too, and it’s easy to verify that it can be versioned by updating the column named ‘k’ in this demo.

But in this case, I don’t see them in the version store.

That’s awesome– because I didn’t update the column that’s stored on LOB pages! I only updated a bit type column on that row. SQL Server is being clever about this.

Observation 2: my versioned rows are 14 bytes smaller

Comparing the size of my row-versions with the rows in the table:

  • dbo.Narrow: 38 bytes in the table row, 24 bytes in the row version = 14 bytes difference
  • dbo.Wide: 8,054 bytes of in-row data in the table row, 8,040 bytes in the version store = 14 bytes difference

That 14 bytes is a transaction timestamp and pointer stored in the row of the data table itself. It’s an additional part of the overhead of row versioning, but it doesn’t have to be copied into tempdb as well.

Summing up: Wider rows will generally mean more tempdb usage for versioning

Even if you’re just updating one tiny column in your table, the entire row will be versioned — with one important exception.

My testing shows that data stored on LOB pages isn’t appearing in the version store unless I modify that data itself (not simply modifying another column in the row). Note: I haven’t found this explicitly documented while searching, and I only tested this against SQL Server 2017, so there may well be times when this is not the case– but it does make sense that SQL Server could do this quite elegantly for LOB pages.

What should I do if I am considering enabling Snapshot Isolation?

If you have the ability to load test or run a ‘replay’ workload outside of your production environment, that’s absolutely the best bet.

If that isn’t realistic in your world, you can baseline performance on your current production workload, then test allowing snapshot isolation against the database before anything uses it at all. Versioning will still take place, and you can compare performance with your established baseline.

Want to learn more about transaction isolation?

Join me for my upcoming online SQL Seminar on isolation levels and blocking (a few seats are currently on sale) – or come to my in-person pre-conference session at the SQLPASS Summit on preventing fake news in your data.

Posted on

Code: Fixing a Filtered Index Which is Causing Data Modifications to Fail

This question came up in a webcast recently: if a filtered index is causing data modifications to fail, does disabling the filtered index fix the problem?

I wasn’t 100% sure — I couldn’t remember the last time I’d tried, if I ever had. So….

Let’s test it!

First, let’s reproduce the problem: we’ll create a filtered index on a table, then set up a session that can’t read from it due to an ANSI settings conflict. (A list of required SET options for filtered indexes is documented here.)

Here’s my filtered index

I created this in the WideWorldImporters sample database.

CREATE INDEX ix_SalesOrders_filtered_IsUndersupplyBackordered_OrderDate_INCLUDES ON
  Sales.Orders ( OrderDate )
  INCLUDE ( OrderID, CustomerID, SalespersonPersonID, PickedByPersonID, ContactPersonID, BackorderOrderID, ExpectedDeliveryDate, CustomerPurchaseOrderNumber, IsUndersupplyBackordered, Comments, DeliveryInstructions, InternalComments, PickingCompletedWhen, LastEditedBy, LastEditedWhen)
  WHERE (IsUndersupplyBackordered = 0);

Now, let’s make our session incompatible with the index

All I have to do to make an insert fail is violate one of those required ANSI settings in my session.

Here we go!


  SET ANSI_NULLS OFF;

  INSERT Sales.Orders ( [CustomerID], [SalespersonPersonID], [PickedByPersonID], [ContactPersonID], [BackorderOrderID], [OrderDate], [ExpectedDeliveryDate], [CustomerPurchaseOrderNumber], [IsUndersupplyBackordered], [Comments], [DeliveryInstructions], [InternalComments], [PickingCompletedWhen], [LastEditedBy], [LastEditedWhen])
  SELECT TOP (1) [CustomerID], [SalespersonPersonID], [PickedByPersonID], [ContactPersonID], [BackorderOrderID], [OrderDate], [ExpectedDeliveryDate], [CustomerPurchaseOrderNumber], [IsUndersupplyBackordered], [Comments], [DeliveryInstructions], [InternalComments], [PickingCompletedWhen], [LastEditedBy], [LastEditedWhen]
  FROM Sales.Orders;

This results in Error 1934:

Msg 1934, Level 16, State 1, Line 25
INSERT failed because the following SET options have incorrect settings: 'ANSI_NULLS'. Verify that SET options are correct for use with indexed views and/or indexes on computed columns and/or filtered indexes and/or query notifications and/or XML data type methods and/or spatial index operations.

OK! Now to disable the filtered index

I suck all the pages out of the filtered index, leaving only the metadata behind:

ALTER INDEX ix_SalesOrders_filtered_IsUndersupplyBackordered_OrderDate_INCLUDES ON
  Sales.Orders DISABLE;

And now, when I retry my insert….

(1 row affected)

It worked! As long as this filtered index is disabled, the insert works again for the session with the incompatible ANSI_NULLS setting.

Personally, I’d rather drop the filtered index

Rebuilding the filtered index will cause this problem to happen again — until all sessions modifying the table have the required session settings in place.

So to prevent actual rebuilds, I’d rather drop the index until everything is in order for the index to be recreated.