SQL Server DBA

Quantifying Query Tuning Performance

Query tuning is a lot of fun. At times it can be challenging and frustrating, but ultimately getting good performance gains is immensely satisfying. I have been doing a fair bit of this at work recently, and one of the challenges I have been facing is quantifying the improvements. Total running time is simple to measure, but I have been looking for a way to quantify the improvements in I/O and CPU usage as well. Thankfully, SQL Profiler does a great job of this.

In the past I have queried sys.dm_exec_query_stats to extract CPU and I/O statistics. However, the current query I am working on is particularly troublesome due to nested cursors. The nested cursors make it difficult to capture all the statements from sys.dm_exec_query_stats, and they also make it impractical to generate an execution plan (roughly 1 million iterations, which produce an enormous plan). Even in SQL Profiler, this query produced 300 MB of trace data in about 3 minutes! Thankfully, trimming back the trace helped limit the amount of data captured.
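For reference, this is roughly the kind of query I would normally run against sys.dm_exec_query_stats — a sketch only, with the column list trimmed to the CPU and I/O metrics discussed here:

```sql
-- Top cached statements by CPU, with I/O totals
-- (worker time and elapsed time are reported in microseconds, hence / 1000)
SELECT TOP (20)
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text,
    qs.execution_count,
    qs.total_worker_time / 1000  AS total_cpu_ms,
    qs.total_logical_reads,
    qs.total_logical_writes,
    qs.total_elapsed_time / 1000 AS total_elapsed_ms
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```

This works well for set-based code, but with the nested cursors each of the ~1 million iterations shows up as its own tiny statement, which is exactly why it falls short here.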


Figure 1. SQL Server Profiler



I have been reading some other posts and decided to give SQL Profiler a go. Setting up a trace is simple. From SSMS, click on the Tools menu and select SQL Server Profiler (see Fig. 1).





Figure 2. Event Selection


Create a new trace, name it, and save it to a file. Then click on Event Selection. Here I have chosen to show all events and all columns. There is a huge array of events and metrics to choose from, but in most cases you may find that the default trace is sufficient.




Figure 3. SQL: Batch Completed


From the default trace, the event I was most interested in was SQL:BatchCompleted. This event captures the overall CPU, reads, writes and running time of the stored procedure. When I present the solution to the client, I want to be able to quantify an overall performance improvement. These metrics are ideal for this, and help to justify the changes we have made to the code.



Figure 4. SP: Stmt Completed


It is possible to drill down even further if you like. In this example I am interested in one particular stored procedure, so I captured some more events related to stored procedures. In particular, I focused on the SP:StmtCompleted event, which gives detailed information about each statement in the procedure. From this you can highlight the most expensive statements as further justification for the changes made.


Before running the trace, I filtered it to include only events run by my own login. To do this, click on Column Filters, scroll down to LoginName and enter your login name under Like. Finally, the trace itself yields the information you want, as shown in Fig. 5:

Figure 5. Trace Output

We can see from the trace that the stored procedure ran in approximately 11 seconds, with CPU time close to 5,000 ms and reads and writes totalling well over 1 million! When you take figures like these back to your client (and their development team, if there is one!) it is not difficult to quantify and justify the changes made.
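As a final step, the saved .trc file can be loaded back into T-SQL with fn_trace_gettable, which makes it easy to total the batch-level figures and rank the individual statements. A sketch, assuming the trace was saved to a file (the path and file name below are placeholders):

```sql
-- Total the batch-level metrics from the saved trace file
-- (EventClass 12 = SQL:BatchCompleted; trace Duration is in microseconds)
SELECT
    SUM(CPU)             AS total_cpu_ms,
    SUM(Reads)           AS total_reads,
    SUM(Writes)          AS total_writes,
    SUM(Duration) / 1000 AS total_duration_ms
FROM sys.fn_trace_gettable(N'C:\Traces\MyTuningTrace.trc', DEFAULT)
WHERE EventClass = 12;

-- Rank the individual statements by CPU cost
-- (EventClass 45 = SP:StmtCompleted)
SELECT TOP (10)
    CAST(TextData AS nvarchar(max)) AS statement_text,
    CPU             AS cpu_ms,
    Reads,
    Writes,
    Duration / 1000 AS duration_ms
FROM sys.fn_trace_gettable(N'C:\Traces\MyTuningTrace.trc', DEFAULT)
WHERE EventClass = 45
ORDER BY CPU DESC;
```

Running the same two queries against a "before" and an "after" trace gives a like-for-like comparison you can put straight into the report for the client.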


