Query Store Cleanup Can be Part of a Blocking Chain

Forgetfulness can lead to learning something new. This is a bit of a nightmare when it happens in production, but a treat when it happens in an isolated test system, and that’s how I learned this.

I left a bit of blocking open on my test VM, and forgot about it.

I was using the BabbyNames sample database. In one session, I had run:

begin tran
    alter table ref.FirstName add foo int null

I left this session sleeping, its open transaction holding a schema modification lock against the ref.FirstName table.

In another session, I ran:

select * from ref.FirstName

I did my testing with this setup, then went back to editing video, and forgot about it.

I forgot to unblock it.

Later, I came back and wanted to measure something in Query Store

This is an isolated test system, so I went to clean out Query Store as a reset. I didn’t need any of the old information in there, so I ran:

ALTER DATABASE BabbyNames SET QUERY_STORE CLEAR ALL;
GO

I was surprised when I didn’t see this complete very quickly, as it normally does.

I checked for blocking in sp_WhoIsActive, and found that my cleanup was blocked

I used Adam Machanic’s free sp_WhoIsActive procedure to check what was going on. Here is the blocking chain:

Clearing Query Store created two sessions

Session 40 shows no sql_text, but it appears when I run the QUERY_STORE CLEAR ALL command, and disappears when I cancel it. It also shows no “sql_command” if I run sp_WhoIsActive with @get_outer_command=1, and it shows no lock information if I use @get_locks=1.
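
For reference, here’s roughly how I was calling it (a sketch; @get_outer_command and @get_locks are the parameters mentioned above, and @find_block_leaders is another handy option the procedure offers):

EXEC sp_WhoIsActive
    @get_outer_command = 1,  /* show the outer batch or procedure text */
    @get_locks = 1,          /* include an XML snapshot of the locks each session holds */
    @find_block_leaders = 1; /* flag the sessions at the head of blocking chains */
GO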

It’s clearly getting locks, because it’s blocking its little friend, session 74, but it’s just not showing what it’s doing.

Meanwhile, session 74 is waiting on a lock on sys.sysschobjs

Although we can’t see the locks directly on session 40, I can see that session 74 (which is blocked by session 40, which is in turn blocked by the alter table) is waiting for a shared key lock on the clustered index of the sys.sysschobjs system table.

sys.sysschobjs is documented: it “Exists in every database. Each row represents an object in the database.”

If I connect using the Dedicated Admin Connection, I can query the sys.sysschobjs table, and I can read it if I allow dirty reads (seeing the uncommitted data):

USE BabbyNames;
GO
select * from sys.sysschobjs WITH (NOLOCK) /* dirty read */
where name = 'FirstName';
GO

There is a modified date on this table (column name modified), which was updated around the time I started the schema modification on the table.

I don’t think this is a bad thing – I’m writing this post just to document it

One cool thing in Query Store is that it has some knowledge of the objects in a database. It will tell you if a query is part of a procedure, for example.

That’s a good thing, but it is probably one of the reasons that Query Store reads from the same system tables that may be locked if we modify the schema of objects.

Hopefully, most folks don’t:

  • Have long running transactions that modify objects in production
  • Clear out Query Store data often in production (it’s useful information, and the basis for some cool features)

If you are unfortunate enough to have BOTH of these patterns, you may run into blocking.

This could also occur if you are trying to clear out Query Store when an offline index rebuild is running.

Auto-Tuning: Automatic Plan Correction is Cleared on Restart

The most delicious plans are mauve, but teal is a close second

The new Enterprise Automatic Tuning feature in SQL Server 2017 may sound intimidating at first — one question I get a lot lately is whether or not there’s a future for DBAs. Will Auto-Tune mean we don’t need any more human tuning?

Well, not anytime super soon.

I’m a big fan of the Automatic Plan Correction feature in SQL Server 2017, but it reminds me a bit of the missing index DMVs we got in SQL Server 2005: the suggestions are a fantastic indication of where you need to look and use your powerful human brain.

Automatic plan correction is temporary — and that’s a good thing

One of the things I love about this feature is that it wisely includes some caution.

If you’ve enabled automatic tuning and it finds what looks like a plan regression, it won’t just force a plan and forget about it.
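
For reference, enabling the feature on a database looks something like this (a sketch, using ALTER DATABASE CURRENT as a stand-in for your database name):

ALTER DATABASE CURRENT
    SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);
GO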

Let’s say we have a query where the average CPU time with Plan A is 2 seconds, and the average CPU time for Plan B is 10 seconds. If Automatic Plan Correction kicks in, we go into a state where Plan A is being verified.

Trust, but verify

Is Plan A really always faster? It may be that after Plan A is ever so gently forced (ok, that’s not gentle), the query is run with different parameters that make Plan A skew horribly wrong, and our average CPU time goes to 12 seconds. SQL Server is looking for that.

Un-force when things change

Automatic Plan Correction is also pretty skittish when it comes to change.

Change indexes on the table? Whoops, we better un-force everything in case there’s something that could be faster! Update statistics on the table? Same thing!

And also…

Automatic Plan Corrections don’t persist over restarts

I did a little demo of this, just to prove it to myself.

I have an Automatic Plan Correction in verification state. Here’s a view of it in sys.dm_db_tuning_recommendations:
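
If you’d like to produce a similar view yourself, a query along these lines works (a sketch that pulls just a few of the DMV’s columns; state is a JSON column):

SELECT name,
    reason,
    score,
    JSON_VALUE(state, '$.currentValue') AS current_state,
    JSON_VALUE(state, '$.reason') AS state_reason
FROM sys.dm_db_tuning_recommendations;
GO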

But then, I restart my SQL Server instance:

Don’t tease me about instance stacking on my test machine! I keep the Standard Edition instance off most of the time 😀

Once the instance comes back up, Automatic Plan Correction is no longer in place.

Here’s a different view, after restart. Looking at Query Id 25 in the Queries with High Variation report, there are NO check boxes in those bubbles to the right. Nothing is being forced.

Everything is also gone from view in sys.dm_db_tuning_recommendations. That data isn’t persisted across restarts.

The good news: we don’t completely start from scratch

My Query Store still has data about past performance of the plan.

After restart, if my query happens to compile with the “slow plan”, the fast plan can be identified from the history captured in Query Store before the restart.

In other words, Automatic Plan Correction doesn’t only consider information since the last restart when making its suggestions. I tested, and just by repeatedly running the “slow plan” for Query Id 25, a recommendation popped up. Since I have automatic tuning enabled for this database, it was automatically applied.

Check boxes indicate that plan forcing is currently active

Think of Automatic Tuning as an assistant DBA

And it’s better than just a random assistant! It’s an assistant that can help point you in the direction of problems like bad parameter sniffing, which are really hard to track down.

If your environment is so sensitive that you don’t trust your assistant to just go tuning things, that’s fine! You don’t have to enable Automatic Tuning, and you can still look at the suggestions manually (as long as you’ve got Enterprise Edition).

But at this point, and for the foreseeable future, your assistant still needs you to dig into the code, indexes, and architecture to find a better long term solution than just freezing the plan. Because if just freezing the plan was enough, this feature would be a whole lot simpler.

What is a “Morally Equivalent Execution Plan,” and Why Is It Good?

I recently wrote a fairly complicated post, called “Forced Plan Confusion: Is_Forced vs Use Plan = True.”

You do NOT need to go read that post to understand this one. I think I’ve found a simpler way to explain the most important part of that post, and why it’s A Very Good Thing.

Plan forcing in Query Store is clever

The gist of this post is that plan forcing in Query Store is not the same as just putting duct-tape on an execution plan.

Query Store is a little more flexible than that. It has an understanding that if you force something too tightly, it may break.

It leaves a little wiggle room: SQL Server may compile a “Morally Equivalent Plan” when a query plan has been forced, but the optimizer looks at the forced plan and the “compiled for” values and sees that the plan is the wrong size pants for that query.

Step 1: I put a plan in Query Store

To get this party started, I restore the WideWorldImporters sample database and enable Query Store. I run a query and get its plan into cache.
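
Enabling Query Store on the restored database is roughly this (a sketch; your Query Store settings may vary):

ALTER DATABASE WideWorldImporters SET QUERY_STORE = ON;
GO
ALTER DATABASE WideWorldImporters SET QUERY_STORE (OPERATION_MODE = READ_WRITE);
GO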

The plan looks like this — notice that @packagetypeid is parameterized…

This plan has an estimated cost of .0067516. It thinks one row is going to flow through the whole query.

Step 2: I force that plan

I run a query against the Query Store DMVs to figure out the query_id and plan_id for this query.
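
That lookup is just a join across the Query Store catalog views, something like this (a sketch; filtering on PackageTypeID is a convenient way to find the demo query):

SELECT qsq.query_id,
    qsp.plan_id,
    qst.query_sql_text
FROM sys.query_store_query AS qsq
JOIN sys.query_store_query_text AS qst ON qsq.query_text_id = qst.query_text_id
JOIN sys.query_store_plan AS qsp ON qsq.query_id = qsp.query_id
WHERE qst.query_sql_text LIKE N'%PackageTypeID%'
    AND qst.query_sql_text NOT LIKE N'%query_store%'; /* don't match this lookup itself */
GO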

Then I force it by running:

exec sp_query_store_force_plan @query_id=1, @plan_id=1;
GO

That one-row plan you see above is now forced!

Step 3: I cause a recompile to happen

The forced plan is currently in my execution plan cache. I want to cause a recompile to happen, so I clear out the cache for this database on my test instance:

ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;
GO

The next time my query runs, it’s going to get a fresh compile. Yes, it has a forced plan, but let’s see what happens.

Step 4: I run the same query, but with a different value for @packagetypeid
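
The re-run is the same parameterized query, just with a new value (the query text comes from the demo; the specific value of 7 here is my assumption, chosen because it matches many more rows):

EXEC sp_executesql N'select avg([UnitPrice]*[Quantity])
            from Sales.OrderLines
            where PackageTypeID = @packagetypeid', N'@packagetypeid int',
          @packagetypeid = 7; /* assumed value that hits far more rows than the forced plan was compiled for */
GO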

When SQL Server compiles my query, it doesn’t blindly give me the exact forced plan with its one row estimates.

Instead, it sniffs the value of my parameters. It uses that sniffed value to look at the statistics. And it sees that I’m pretty likely to have more than one row flowing through this plan. Based on the statistics, it looks more like 221,241 rows!

So, clever SQL Server gives me a “morally equivalent plan”.

Meet the “morally equivalent plan”

Notice that this plan has the same shape as the plan above. That’s the “moral equivalency.” We’ve got our Seek -> Nested Loops -> Key Lookups. Compute Scalar feeds into Stream Aggregate, etc.

But there are some differences!

  • We have thicker row estimate bars, based on that statistic that was sniffed
  • My plan’s estimated subtree cost is 51.0062 (the forced plan’s is .0067516)
  • My plan has a green hint that points out my indexing on OrderLines isn’t so great — and my costs are different on my operators!
    • The Key Lookup is now estimated at 97% of the work, whereas before it was 50%
    • SQL Server isn’t thrilled about doing that lookup 221,241 times. It was no big deal to just do it once, so there was no missing index request when this was optimized for a different parameter.

The “morally equivalent plan” has its own plan_id and query_plan_hash, and shows up separately in Query Store

Both the forced plan and the “morally equivalent” plan are visible in Query Store. It can be a little confusing to identify them if you haven’t seen them before.

  • The forced plan has is_forced_plan = 1 in sys.query_store_plan and will have a check mark in the Query Store reports.
  • The “morally equivalent plan” will have “Use Plan=true” in the properties of the left-most operator in the plan.

The good news: if you open the “Queries with forced plans” report, both of them show up there.
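
If you’d rather check from T-SQL, a query along these lines helps tell them apart (a sketch; searching the plan XML text for the UsePlan property is an approximation):

SELECT plan_id,
    query_id,
    is_forced_plan,
    CASE WHEN query_plan LIKE N'%UsePlan%' THEN 1 ELSE 0 END AS has_use_plan_property
FROM sys.query_store_plan
WHERE query_id = 1; /* the query_id from this demo */
GO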

Same shape, sized to fit the parameters it compiled with

I think this “wiggle room” is a terrific feature for forced plans.

Forcing plans is a somewhat crude thing to do — just because a forced plan works better today, doesn’t mean that the forced plan is going to be great tomorrow, if a bunch of my data changes. Or if a different customer is running the procedure, one who has quite a different data profile.

This feature is kind of like a stretchy waistband for the sweatpants of an execution plan. Maybe you need a bigger memory grant than the one forced with the original plan? Here’s a little wiggle room.

Plus, if I happen to check up on my forced queries in Query Store, I can notice that often when this runs it’s saying it could use a little indexing help. And just the fact that I’m sometimes getting a “morally equivalent plan” is an indication to me that I haven’t forced something that’s perfect, I should look deeper.

I love that!

I didn’t come up with the term “morally equivalent plan”

I heard this term from Erland Sommarskog, who said that he heard it from Conor Cunningham. I love that it’s unique, not an acronym (yet), and it seems like a good description to me – so let’s popularize it among those of us who talk about plans 🙂

And if you’d like to play with the code for this, it is based on a sample from Jovan Popovic published under the MIT license: grab it here.

Forced Plan Confusion: Is_Forced vs Use Plan = True

Identifying that a query plan has been bossed around in Query Store can be a bit tricky, because it can appear in different ways.

This is a long post, but there is a recap at the bottom if you’re short on time. Or just vote up my suggestion to make this easier to see: vote here

1. The “Original FORCER” plan

This type of forced plan will have a check mark in its bubble on the graphical Query Store reports, and will also have is_forced_plan = 1 in sys.query_store_plan.

However, this type of forced plan will NOT have “Use plan” = true in the properties of the top leftmost operator.

2. The “Slightly Different FORCEE” plan

This type of forced plan will NOT have a check mark in its bubble on the Query Store reports, and will NOT have is_forced_plan = 1 in sys.query_store_plan.

But it WILL have “Use plan” = true in the properties of the top leftmost operator.

I see these differences whether a plan is manually forced or forced by Automatic Tuning

Whether you have the SQL Server 2017 Automatic Plan Correction feature enabled (part of Auto-Tuning), you’re clicking the graphical “Force Plan” button in the Query Store reports, or you’re running sys.sp_query_store_force_plan, you will see these differences.

Let’s take a look at a demo

In this post, I’m going to show you how this works if I manually run sys.sp_query_store_force_plan. The demo code is at the bottom of the post and also supports letting Automatic Plan Correction force the plan, so you can run it either way and play with it for yourself.

Thanks to Jovan Popovic of Microsoft for publishing the original code under the MIT license. I adapted his code for this demo.

This is a parameter sniffing problem. First we get the ‘fast plan’ in cache

I’m using manual plan forcing in Query Store for this demo, so first up I:

  • Restore the WideWorldImporters sample database from WideWorldImporters-Full.bak
  • Configure Query Store and make sure it’s enabled
  • Make sure AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = OFF) is set, so plans aren’t forced automatically (see the sketch after this list)
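
That last step looks roughly like this (a sketch; run it against the restored database):

/* Make sure automatic plan correction is NOT forcing plans during this demo */
ALTER DATABASE WideWorldImporters
    SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = OFF);
GO

/* Verify Query Store is on and read/write */
SELECT actual_state_desc, readonly_reason
FROM sys.database_query_store_options;
GO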

Then I start a workload. I have changed the demo slightly from Jovan’s original, and in our version we run a SELECT query against Sales.OrderLines where PackageTypeID = 5.

EXEC sp_executesql N'select avg([UnitPrice]*[Quantity])
            from Sales.OrderLines
            where PackageTypeID = @packagetypeid', N'@packagetypeid int',
          @packagetypeid = 5;
GO 60

We run this query 60 times.

This gets a nested loop plan in cache for our query with an average CPU time of .06 milliseconds. Here’s what it looks like in the Top Resource Consumers Report:

But now… we have a recompile, and a different parameter is sniffed

Lots of things can cause a recompile: memory pressure, data in the tables changing, or someone running ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE.

In this case, it’s the third one.

ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;
GO

/* Get slow plan in cache */
EXEC sp_executesql N'select avg([UnitPrice]*[Quantity])
            from Sales.OrderLines
            where PackageTypeID = @packagetypeid', N'@packagetypeid int',
          @packagetypeid = 0;
GO

The query that happens to compile now for our query has @packagetypeid = 0.

This query doesn’t get a nested loop join, it gets a hash match join. It doesn’t take too long to run by itself, but unfortunately this plan is slow as dirt when it is reused for a value like @packagetypeid = 7.

That runs 20 times, and this plan has an average CPU time of 389 seconds.

EXEC sp_executesql N'select avg([UnitPrice]*[Quantity])
            from Sales.OrderLines
            where PackageTypeID = @packagetypeid', N'@packagetypeid int', @packagetypeid = 7;
go 20

I haven’t enabled Automatic Plan Correction, but I get a suggestion

Because I’m on SQL Server 2017 and I have Query Store configured, sys.dm_db_tuning_recommendations has a little heads-up for me, if I know to look there.
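
The heads-up includes the T-SQL to force the recommended plan, which you can pull out of the details JSON column with something like this (a sketch along the lines of the documented sample query):

SELECT reason,
    score,
    JSON_VALUE(details, '$.implementationDetails.script') AS force_script,
    JSON_VALUE(details, '$.planForceDetails.queryId') AS query_id,
    JSON_VALUE(details, '$.planForceDetails.recommendedPlanId') AS recommended_plan_id
FROM sys.dm_db_tuning_recommendations;
GO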

It looks like I’ve got a plan re-use problem, aka ‘Bad Parameter Sniffing’.

The right thing to do would be to look into why I’m getting different plans

If I were to do the right thing, I would try to figure out how to stabilize the plans without forcing anything.

But this post isn’t about doing the right thing, it’s about forcing!

Let’s say I take the script from the suggestion, and manually force this plan with TSQL

exec sys.sp_query_store_force_plan @query_id = 18, @plan_id = 1
GO

I have now forced plan_id 1!

If I look at Top Resource Consumers, plan_id 1 now has a check mark by it:

After forcing the plan, the query runs some more

Our query just runs once, again with @packagetypeid = 7.

This is the first time the query has run with @packagetypeid = 7 while being forced to use a plan compiled for @packagetypeid = 5.

EXEC sp_executesql N'select avg([UnitPrice]*[Quantity])
            from Sales.OrderLines
            where PackageTypeID = @packagetypeid', N'@packagetypeid int', @packagetypeid = 7;
GO

Whoa, now we have THREE plans

Our query uses the forced plan, but, wow, it gets a new plan_id: 10. Here’s what plan_id 10 looks like with its plan:

Plan_id 10 has no check-mark, but it has been forced to use the nested loop plan shape.

However, notice that it got a missing index request, and the bars in the plan are much thicker than the nested loop plan in the screenshot above.

Plan_id 10 is a forced plan, but it was compiled for @packagetypeid=7

If we look in the properties of the SELECT operator on plan_id 10, I can see a couple of things:

Parameter compiled value

In the Parameter List, @packagetypeid = 7. That is the value this was compiled for. That influenced the costing. That’s good, because there WERE more rows flowing into that stream aggregate. The higher estimates are also responsible for the green hint notifying us that we should take a look at our indexes, because it could be better.

Use plan = true

This plan was generated with a directive to use this plan shape. It didn’t just happen.

Plan 10[not forced]

Look right above the SELECT operator in the plan. It says “not forced”.

But, uh….. Use plan = true.

If it wasn’t forced, it was leaned on pretty heavily!

I read that “not forced” as, “This is not the exact plan that was forced.” To know if it was forced but has different costs, you have to look for that “Use plan” = true.

Here’s what it looks like in sys.query_store_plan

The check mark in the graphical Query Store reports reflects the plan which has is_forced_plan = 1.

For plan_id 10, if you dig into the plan stored in the query_plan column in the DMV, you will find the “Use plan=true” property, just like we saw in the report.

Recap: Plan FORCER and Plan FORCEE

This isn’t easy to summarize, but I’m going to give it my best shot!

When plans are forced in Query Store, whether it’s done by a user or by Automatic Tuning, there is more than one type of “forced” plan.

Plan FORCER: The exact plan which is forced will show with a check mark in Query Store reports, and will have is_forced_plan = 1 in sys.query_store_plan.

Sometimes (probably often), you will see Slightly Different Plan FORCEEs. Once a plan is forced, either that exact plan can be compiled (it keeps its check mark, is_forced_plan = 1, and the same plan_id), or a slightly different FORCEE can be compiled with different costs and different row estimates. The FORCEE has the same shape as the forced plan, but you’ll see different-sized bars in it, perhaps a missing index suggestion, and different memory grants. This FORCEE plan does not show with a check mark and has is_forced_plan = 0, but it does have a plan property of ‘Use plan’ = true on the top leftmost operator.

I totally made up the words “FORCER” and “Slightly Different Plan FORCEE”, for lack of better terms. And they make me laugh.

Wanna play with it yourself?

Here’s the demo code in a gist.

Want this to be easier to see?

Vote up my suggestion to make this easier to see: vote here.

Subscribers: watch live demos of this with more scripts in the course on Automatic Plan Correction! New users, subscription options are here.

Can I Force Multiple Plans for a Query in Query Store?

Nope.

At least, not right now.

I started thinking about this when I noticed that the sys.sp_query_store_unforce_plan requires you to specify both a @query_id and a @plan_id.

If there’s only ever one plan that can be forced for a query, why would I need to specify the @plan_id?

I’ve got no insider knowledge on this, I just started thinking about it.

Current behavior: SQL Server 2017 allows you to run sys.sp_query_store_force_plan multiple times for the same @query_id, but…

Only the plan specified in the most recent statement you ran ends up forced.

If I have a query with id = 4 which has two plans with plan ids 3 and 5, and I run this TSQL all at once:

EXEC sys.sp_query_store_force_plan @query_id = 4, @plan_id= 3;
GO

EXEC sys.sp_query_store_force_plan @query_id = 4, @plan_id= 5;
GO

For a brief moment after the first statement runs, @plan_id 3 will be forced.

After both statements complete, only @plan_id 5 will be forced.
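
You can verify this in the DMVs: only one row for the query ends up flagged (a sketch, using this example’s query_id of 4):

SELECT query_id, plan_id, is_forced_plan
FROM sys.query_store_plan
WHERE query_id = 4
    AND is_forced_plan = 1; /* returns only plan_id 5 once both statements have run */
GO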

There can only be one!

Why would it be cool to be able to force more than one plan?

We might have a parameterized query that we want multiple plans for, depending on how it’s executed.

If the query is executed with @parameter_x = null, I want Plan A.

Otherwise I want Plan B.

Could be nifty!

It is possible that @plan_id is required to prevent accidents

Perhaps it’s required to specify the @plan_id in case I make a mistake, and specify the wrong query id for a plan id, and it wants to make sure I really have it right?

It’s possible, but doesn’t seem super likely to me. I’d rather optimistically hope that some day plan forcing will become even more flexible.

Will it ever happen? I have no idea 😀

Attempting to Force a ‘Narrow’ Plan on an Update with Query Store

When you force a plan with Query Store, you may not get what you expect.

Here’s an example.

My update statement has two plans recorded by Query Store

I have an update statement for a user database table which is known as “Query 15” in Query Store. Query 15 has two plans: Plan Id 17, and Plan Id 19.

Here’s Plan Id 17 – we’ll call it the “wide plan”. It has an estimated subtree cost of 401.579. I generate this plan if I compile the procedure dbo.UserDatabaseTableTest with @RowID = 800000. This plan gets ‘FULL’ optimization.

Here’s Plan Id 19 – we’ll call it the “narrow plan”. It has an estimated subtree cost of .272. I generate this plan if I compile the procedure dbo.UserDatabaseTableTest with @RowID = 8000. This is considered a ‘TRIVIAL’ plan.

I want to force the “narrow”/ trivial plan

I decide that I want to force Plan Id 19. I click the Force Plan button, confirm, and immediately get a check mark that Query Store is going to try to force this plan:
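
Clicking the button does essentially the same thing as forcing the plan with T-SQL, roughly:

EXEC sys.sp_query_store_force_plan @query_id = 15, @plan_id = 19;
GO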

My narrow plan is nowhere to be found anymore!

Now that I’ve forced this plan, I can’t generate Plan Id 19 anymore. No matter how I compile my procedure, I end up with Plan Id 17. Even if I clear my procedure cache. Even if I restart my SQL Server.

Not only do I get Plan Id 17 (wide plan), when I look at the actual execution plan, it shows that “Use plan = true”… in other words, that it is being forced.

This update statement’s query plan shows an estimated subtree cost of .1896… (less than 1):

How does this look in Query Store’s Top 25 resource consumers report?

Good news: the report accurately shows that I’m getting Plan Id 17 consistently, although Plan Id 19 is still forced (indicated by the check mark):

I say this is good news because it clearly shows that I’m trying to force something that I’m not getting, which is something very useful for me to figure out.

This is not considered a “failure”

When I check the Query Store DMVs, force_failure_count is 0. The last_force_failure_reason_desc is NONE.
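
Here’s roughly the check I ran against sys.query_store_plan (a sketch, using this post’s query_id of 15):

SELECT plan_id,
    is_forced_plan,
    force_failure_count,
    last_force_failure_reason_desc
FROM sys.query_store_plan
WHERE query_id = 15;
GO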

Query Store didn’t fail to apply the narrow plan. Instead, it’s just deciding not to give it to me, now that I’ve forced that plan.

Seems kinda like an adolescent, doesn’t it?

Could something more be going on here? Further tests…

I decided to do some more testing to make sure the behavior I was seeing wasn’t due to something else.

Dropping the table at the beginning of the procedure didn’t matter

My procedure recreated the table being updated at the beginning of each run, using a DROP IF EXISTS / CREATE TABLE pattern. I changed this to a CREATE TABLE (if not exists) / TRUNCATE TABLE pattern.

The behavior remained the same: once I forced the trivial “narrow” plan, I always got a wide plan — no matter what @RowID value I compiled the procedure for.

Adding an ‘OPTIMIZE FOR’ hint to the UPDATE statement didn’t matter

I was quite surprised by the result of this test, but I probably shouldn’t have been!

I added an OPTIMIZE FOR hint to the end of my update statement, like this:

As long as the plan is NOT forced in Query Store, this query reliably gets the narrow update plan — because no matter what @RowID is passed in, the low estimate of 800 is “sniffed” every time the procedure is compiled.

However, once I force that narrow plan in Query Store, guess what happens?

Yep, I get a new “wide plan” which shows full optimization…

For you observant folks out there, I recreated the procedure and I changed this statement by adding the OPTIMIZE FOR hint… so the object_id for the procedure and the query number / plan id numbers have all changed.

Query Store isn’t supposed to force exact plans

When you force a plan in Query Store, you aren’t guaranteed to get the exact plan that you’re forcing. For another example of this, check out Andrew Kelly’s post here. I haven’t found an example of folks from Microsoft documenting this, oddly, but it does seem to be true based on experimentation.

This experiment does raise the question for me: can you force other TRIVIAL plans with Query Store? Or will a forced plan always have FULL optimization? I am adding that to my research list for the future… but even if I identify an answer, it will very likely be an answer “for now” which could change at some point.

What to do with this information

When it comes to plan forcing, whether it’s manually done or “automatically tuned”…

  • Consider plan forcing a temporary fix. Look at changing code or indexes to find a permanently stable fast plan that doesn’t require forcing.
  • Look at whether the plan being forced is the plan that you’re getting after it is forced! It may not be identical, even if it’s not failing 😀

 

sp_query_store_flush_db, Query Store, and Backups

An old fashioned Qual-E Store

One of the cool things about Query Store is that you can back it up with the database.

This isn’t perfect for every situation, because you must back up the whole database. If you’ve got a multi-terabyte database, your Query Store isn’t super portable! But still, it’s great to have an option.

There are a few things worth knowing about how Query Store works with backups and restores.

If Query Store is read/write when you back up a database, it restores the same way

Let’s say your production database is collecting data with Query Store. When you restore a full backup elsewhere, the Query Store data from production restores with it, and Query Store stays in the same read-write state.

Maybe you want this, maybe you don’t!

It’s a good idea to check whether Query Store is enabled, and what its settings are, after a restore. You can do this with the following query:

SELECT *
FROM sys.database_query_store_options;
GO

This is a database level query, so use the database before you run it.

If you want to back up very recent activity in Query Store, run sys.sp_query_store_flush_db before the backup

Query Store was designed to be clever and to minimize its impact on your performance: it flushes its data from memory to disk only periodically. You get to control this by setting the data flush interval in the Query Store settings for a database. (Read more about this in the Microsoft documentation here.)

The default value for Query Store data flush is 15 minutes. That means that in the case of a crash, you might lose up to around 15 minutes of activity.
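
If you want a different trade-off, the interval is adjustable; a sketch (900 seconds is the 15 minute default):

ALTER DATABASE CURRENT
    SET QUERY_STORE (DATA_FLUSH_INTERVAL_SECONDS = 900);
GO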

Even if you’ve lowered this, you might want to make sure that a backup contains the very latest activity, particularly if you’re taking the backup to get Query Store data for someone to look at.

You can manually flush Query Store data to disk before the backup by running the built-in procedure sys.sp_query_store_flush_db in that database.
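
A sketch of the pattern (the database name and backup path here are placeholders):

USE BabbyNames; /* placeholder: use the database you're about to back up */
GO
EXEC sys.sp_query_store_flush_db;
GO
BACKUP DATABASE BabbyNames
    TO DISK = N'S:\Backups\BabbyNames.bak'; /* placeholder path */
GO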

Want to play around with some sample code?

Here is some very simple code that creates a database, enables Query Store, and lets you easily play around with and test sys.sp_query_store_flush_db.
