
Using RETURN in Transact SQL: Quiz Results

RETURN is a “control of flow” language keyword in Transact-SQL. It’s very useful for managing when code does — and does NOT — get executed in procedural code, and it can also be used to communicate the status from procedural code.

But not everyone knows how to use RETURN, and learning about it can be awkward because of some quirks of SQL Server Management Studio. It is well worth taking the time to get used to it, though!

Want to test your knowledge?

Take this quiz on RETURN, then head back here to see how you compare to other folks who took the quiz.

Note: I’m going to implement a change soon so that only the four most recent quizzes are open to everyone. I will still post questions and answers in follow-up posts, but the fun interactive bit won’t be as open (after a month for each quiz).

So now is a good time to go through my quiz archive if you haven’t already!

Quiz results and Discussion of RETURN

I changed the wording on question #4 on July 4 after Nick posted a message about the wording — and he was totally right, there was more than one defensible correct answer.

Changing the wording did not significantly change the average score, as it turns out. The average percentage dropped from 65% to 64% once that question was clarified. (That’s not a bad percentage at all — I think a lot of folks aren’t used to using RETURN, and hopefully the quiz made them curious about how it can be useful.)


  • Correct: 292 (78%)
  • Incorrect: 84 (22%)

Answer: 2

The key to this question is that RETURN controls flow within a given batch, and the default batch separator is the word “GO”.

If you want to stop the execution of a script you’re running in SSMS, the RETURN keyword won’t stop everything if you have more than one batch.
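
The quiz script itself isn’t reproduced above, but the pattern it tested looks like this (the PRINT messages are my own placeholders):

    PRINT 'Batch one';
    RETURN;
    PRINT 'Never reached';
    GO

    PRINT 'Batch two still runs';

RETURN ends the first batch before the second PRINT, but the GO separator starts a fresh batch, so the final PRINT executes anyway.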


  • Correct: 249 (66%)
  • Incorrect: 127 (34%)

Answer: Commands completed successfully.

In this question, RETURN is shown inside a single batch. It will stop the flow of execution before the PRINT statement, and SSMS will simply report, “Commands completed successfully.”

    IF 1 = 1

  • Correct: 256 (68%)
  • Incorrect: 120 (32%)

Answer: Commands completed successfully.

This time we have everything in one batch again, but the RETURN is inside a BEGIN / END block.

The gist of this question is: does RETURN exit only the BEGIN / END block, or the entire batch?

The answer is that it exits the entire batch: execution never reaches the PRINT statement.
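
The original quiz script isn’t shown in full above, but the pattern looks roughly like this (the PRINT text is my own placeholder):

    IF 1 = 1
    BEGIN
        RETURN;
    END;
    PRINT 'Never reached';

Even though RETURN sits inside the BEGIN / END block, it ends the whole batch, so the PRINT never runs.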

    CREATE PROCEDURE dbo.test
    AS
        RETURN 1;
    GO

    EXEC dbo.test;
  • Correct: 29 (58%)
  • Incorrect: 21 (42%)

Answer: Commands completed successfully.

The wording in this post is the “corrected” wording. The original wording was less clear, so I’ve reported the correct/incorrect ratios for July 5 – 9, after the wording was changed.

This one may seem silly, but I think it’s very important to know that SSMS will not report anything in the Messages tab in this case. That doesn’t mean RETURN didn’t work; it simply means the SSMS interface doesn’t show it!

If you want to capture the value that RETURN sends back and do something with it, you need to assign it to something, like this:

    -- assuming dbo.test contains: RETURN 1;
    DECLARE @RESULT INT;
    EXEC @RESULT = dbo.test;
    SELECT @RESULT AS ReturnValue;


Used in this way, RETURN can be incredibly useful when it comes to returning status from procedural code.

ORDER BY in TSQL: Quiz Results


My most recent Quizletter featured a quiz on ORDER BY in TSQL, complete with questions on the OFFSET and FETCH clauses which we got in SQL Server 2012.

One great thing about teaching TSQL School is that it reminds me of these cool options that can be easy to forget.

Want to take the quiz before you read the answers?

Take the quiz here first, if you’d like.  If you get 100%, you probably don’t need to read this post 😉

Overall Quiz Stats

Folks did pretty well on this quiz, but lots of people did fall into one mean little trap that I left in one question. (Sorry! I can be tricksy.) Here are the stats at the time of writing this post:

Entries: 436
Average Score (Out of 4): 2.64
Average Percentage: 66%

Now for the fun stuff, let’s talk about answers!

Question 1

1) How will the results of this query be ordered (and why)?
    FROM Application.People;

Correct: 76%
Answer: No ordering is guaranteed in this case

This is very important to know: although the answers may come back ordered by PersonID in this case, you cannot count on that always being the case, and it could change at any point. Ordering of results is never guaranteed unless you explicitly use an ORDER BY in your query.
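
If the order matters, say so explicitly. A minimal example (column names assume the WideWorldImporters sample database that Application.People comes from):

    SELECT PersonID, FullName
    FROM Application.People
    ORDER BY PersonID;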

Question 2

2) What will this ORDER BY do?*
    FROM Application.People
    ORDER BY 1;

Correct: 90%
Answer: Order the results by the first column listed, PersonID

Numbers used like this in the ORDER BY clause represent the column position. I was surprised that so many people knew the answer to this one, to be honest. It is a little bit troubling, because Microsoft calls out using numbers as positional references in the ORDER BY clause as an anti-pattern that should be avoided.

But, I confess, I personally do use this myself when I’m writing quick and dirty code (and sometimes forget to fix it, too).
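
Here’s a sketch of why the positional form is fragile: if someone later reorders the SELECT list, the sort silently changes (columns assume the WideWorldImporters sample):

    -- Sorts by FullName, because FullName is now column 1:
    SELECT FullName, PersonID FROM Application.People ORDER BY 1;

    -- Sorts by PersonID no matter how the SELECT list is arranged:
    SELECT FullName, PersonID FROM Application.People ORDER BY PersonID;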

Question 3

3) What will this OFFSET / FETCH clause do?*
    FROM Application.People

Correct: 26%
Answer: This query will throw a syntax error

Why? You can only use OFFSET with an ORDER BY. (This ain’t no TOP operator which doesn’t require ORDER BY, this is a part of ORDER BY.)

I asked this question to try to get a sense of how many people are currently using the OFFSET clause already in code and would pick out the goof based on familiarity. Seems like not many! That’s OK, but it’s good to know how this works for those times when it may come in handy.
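
For reference, a version that parses pairs OFFSET / FETCH with an ORDER BY (the row counts here are my own example values):

    SELECT PersonID, FullName
    FROM Application.People
    ORDER BY PersonID
    OFFSET 10 ROWS FETCH NEXT 5 ROWS ONLY;

This skips the 10 lowest PersonIDs and returns the next 5.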

Question 4

4) What will this OFFSET clause do?*
    FROM Application.People
    ORDER BY PersonID
    OFFSET 10 ROWS;

Correct: 72%
Answer: Return all the rows except for the 10 with the lowest PersonIDs

You are allowed to specify OFFSET without a FETCH NEXT. In this case, we are ordering by PersonID. We didn’t specify ascending or descending, so the default ascending sort order is used for the order. We are OFFSET-ing the first 10 rows ASC, so that’s the lowest 10 PersonIDs. We’ll get all the other rows, because we haven’t limited how many will be fetched.

And now, for a terrible joke

Unlike cursors, order by has finally made fetch happen. (Sorry.)

Nice work, folks!


When Should I Use Explicit Transactions for Single Statements?

I got a great question about transactions by email recently.

Here it is:

When should you use transactions?

From what I have read online the answer is whenever you can, but I haven’t found a good why answer.

I understand that you should be using them when you want a set of events to succeed or fail. But what if you just have a single statement that doesn’t depend on other statements – why should you use transactions?

We have experienced some errors when we used them and so I was wondering what merits does it have that make it worth the code maintenance, especially since SQL Server implicitly creates transactions.

Part of why this is extra confusing is that there are three types of transactions. First up, let’s clarify a few terms.

Explicit vs. Implicit vs. Autocommit Transactions

1. Explicit transactions

This is the type of transaction where you explicitly start the transaction with BEGIN TRAN or BEGIN TRANSACTION, and later complete the work with either a COMMIT statement or undo it with a ROLLBACK statement.

As our questioner writes, if you have multiple statements that should either succeed or fail as an individual unit, BEGIN TRAN allows you to bundle them together.
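
A minimal sketch of that pattern (the table and values are placeholders, not from the question):

    BEGIN TRAN;

        UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountID = 1;
        UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountID = 2;

    COMMIT;
    -- Or, on error: ROLLBACK; undoes both updates as a unit.

Either both updates become permanent, or (after a ROLLBACK) neither does.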

2. Implicit transactions

Implicit transactions are a bit weird, and I typically only run into them when applications have been written for a different relational database and then ported to SQL Server.

To use implicit transactions, you run the statement: SET IMPLICIT_TRANSACTIONS ON;

This puts you in a mode where running a statement which accesses a table implies that you ran BEGIN TRAN. You must explicitly COMMIT or ROLL BACK when you’re done with a unit of work, whether it’s one or more statements.
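
A sketch of the mode in action (table name is a placeholder):

    SET IMPLICIT_TRANSACTIONS ON;

    -- This statement implicitly begins a transaction...
    UPDATE dbo.Accounts SET Balance = 0 WHERE AccountID = 1;

    -- ...and nothing is durable until you explicitly end it:
    COMMIT;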

I find implicit transactions confusing, and I don’t use them.

3. Autocommit transactions

If you do not enable implicit transactions, and you don’t start an explicit transaction, you are in the default “autocommit” mode.

This mode means that individual statements are automatically committed or rolled back as whole units. You can’t end up in a place where only half your statement is committed.
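
In other words, with autocommit a single statement is its own transaction (table name is a placeholder):

    -- Commits automatically on success; rolls back entirely on failure:
    DELETE FROM dbo.StaleRows WHERE CreatedDate < '2010-01-01';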

Our question is really about whether there are unseen problems with this default mode of autocommit for single-statement units of work.

So, to rephrase the question…


Are there times when we should use an explicit transaction for single statements, instead of relying on autocommit?

Generally, you’re fine staying in the default autocommit mode for singleton statements that are entirely their own unit of work.

I’ve run into a few instances where creating a larger unit of work for modifications helped performance, however! Example: are you doing a lot of tiny modifications?

Sometimes you get better performance bundling multiple statements into a transaction

Way back when I was a Junior DBA, there was a project to modify a lot of data in our OLTP database. We had a big change coming up which required a lot of new data to be present, and some data to be modified for customers.

We couldn’t have much downtime for the change, so a plan was hatched to get the data all set up in production while the system was live, then to cut over to the new feature as the actual change. This was tested out in staging, and everything was great, but slow.

The staging environment was generally slow, so that didn’t mean much by itself.

But by the time we got to deploy the change to production, it was slow as well. Luckily, we had a plan to make it faster.

  • Indexes had been deployed to help make the modifications as fast as possible
  • The scripts to make the data changes were set up to be able to be safely stopped at any time and restarted, in case of blocking problems or any concern about performance
  • The scripts had a @BatchSize parameter, which allowed the DBAs to determine how many modification statements were run inside a single transaction
  • The scripts had a @WaitDuration parameter, which allowed the DBAs to determine how many milliseconds to wait between batches

Each modification statement was a single unit of work, but the script ran faster when multiple modification statements were committed at once!  @BatchSize = 1 was not awesome, it was very slow.
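
The batching pattern looked roughly like this (table, column, and values are illustrative, not the original scripts):

    DECLARE @BatchSize INT = 1000;
    DECLARE @WaitDuration CHAR(12) = '00:00:00.200';

    WHILE 1 = 1
    BEGIN
        BEGIN TRAN;

            DELETE TOP (@BatchSize)
            FROM dbo.StaleRows
            WHERE Processed = 1;

            IF @@ROWCOUNT = 0
            BEGIN
                COMMIT;
                BREAK;  -- nothing left to delete
            END;

        COMMIT;  -- one commit per @BatchSize rows, not per row

        WAITFOR DELAY @WaitDuration;  -- breathing room between batches
    END;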

This is because lots of tiny little commits can put a lot of pressure on your transaction log, causing a bottleneck.

It’s like wanting to read someone a long story, but making a phone call for every single word.

If the person is very busy and can’t talk to only you for a whole hour, to the exclusion of everyone else, then you want to figure out how long they can spare for a call, and how many words you can fit into a single call.

Amit Banerjee does a demo and breaks everything down in a great way in this classic post.

Explicit and autocommit transactions show up differently in some performance counters

I don’t think this is a reason to use explicit transactions, it’s just something to be aware of.

The basics are this:

  • Modifications in autocommit mode and select statements in explicit transactions both show up under Transactions/sec and Batch Requests/sec
  • Select statements in autocommit mode do not show up under the Transactions/sec counter, but they DO show up under Batch Requests/sec

You can watch a demo of it on my post here.

Generally speaking, I think this is a reason to use the Batch requests/sec counter over Transactions/sec (not a reason to go adding transactions to all your singleton select statements).

What about marked transactions?

The idea behind marked transactions sounds pretty cool: you can mark a transaction as a recovery point.

Let’s say I’m releasing a big change that touches multiple databases. I could use marked transactions to allow me to restore those databases all to the mark, so that I could perhaps restore them to right before the change was deployed, or right after it was complete (depending on which I used).
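
In code, a marked transaction looks like this (database, table, and mark names are my own):

    BEGIN TRAN ReleaseTran WITH MARK 'Before the big release';

        UPDATE dbo.Config SET ConfigValue = '2.0' WHERE ConfigName = 'Version';

    COMMIT;

    -- Later, during a restore, you can stop the log restore at the mark:
    -- RESTORE LOG MyDatabase FROM DISK = N'MyDatabase.trn'
    --     WITH STOPATMARK = 'ReleaseTran';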

The reason I’m not crazy about using these “just in case,” or for very frequent use, is that there is some overhead:

  • They use space in your transaction logs in each database they touch
  • They use space in msdb, and if you use a ton of these, you may need to do some maintenance there
  • There is some complexity around using this with multiple instances (details here)

I’m not against marked transactions if they meet a real need and the team has the time to monitor their overhead. For large releases that touch multiple databases, SAN snapshots are often more convenient (although obviously they are not tied to a transaction).

Do you have a question?

Send it to me here!