Microsoft Internal Training Material: SQL Performance Optimization (1)


Contents
Overview
Lesson 1: Gathering and Evaluating Core Information
Lesson 2: Using Performance Monitor
Lesson 3: Using SQL Profiler
Lesson 4: Using Index Tuning Wizard
Lesson 5: Using Other System Tools


Module 2: Tools – Exploring the Conditions


 
 
 
Overview
	 

At the end of this module, you will be able to:
	List the basic set of information to collect to help narrow down and identify the problem.
	Explore and confirm conditions or messages presented or already known.
	Describe Performance Monitor and how to use it to troubleshoot performance issues.
	Analyze a Performance Monitor log to identify potential performance bottlenecks.
	List DBCC commands relevant to performance troubleshooting and describe how they are used.
	Analyze DBCC command output to identify potential performance bottlenecks.
	List Trace Flags relevant to performance troubleshooting and describe how they are used.
	Analyze Trace Flags output, if any, to identify potential performance bottlenecks.
	List Profiler Events and their respective data columns relevant to performance troubleshooting and describe how they are used.
	Choose and log the necessary events to troubleshoot performance issues.
	Analyze Profiler Log to identify potential performance bottlenecks.
	Describe the Index Tuning Wizard’s architecture.
	List the command arguments for the ITWiz executable.
	Discuss considerations when using the Index Tuning Wizard.
	List other tools and commands relevant to performance troubleshooting.
	Describe what information can be collected using TList, Pviewer, Pstat, and Vmstat.
	Analyze information collected using TList, Pviewer, Pstat, and Vmstat.
	Explain why scripts are used to collect information.
	Discuss examples of how scripts can be used.
	Describe the use of Microsoft® SQL Server™ Profiler to replay trace and simulate stress.
	Describe the use of OStress tool to simulate stress.
	List external load simulation tools.

 
Lesson 1: Gathering and Evaluating Core Information
 

What You Will Learn
After completing this lesson, you will be able to:
	List the basic set of information to collect to help narrow down and identify the problem.
	Explore and confirm conditions or messages presented or already known.

Recommended Reading
 	Q272688 – PRB: Sqldiag Utility defaults to SQL 7.0 path if 7.0 is default instance
Q115519 - INF: How to Scan SQL Errorlog or DBCC Output for errors
Q232060 - HOWTO: MDAC Setup Troubleshooting Guide
Q200103 - INF: SQL Server 7.0 Startup Parameters
Q247710 – INF: How to Identify Which Edition of SQL Server 7.0 Is Running
Q255861 – “Event Log File Is Corrupt” Error When Opening an Event Log File

 
Purposes of Gathering and Evaluating Core Information
 

Verify Information Provided by Customer
Typically, you get a statement from the customer indicating what he or she perceives as the problem. As more information is gathered from the customer directly, it is important to make sure that all the information provided is accurate and up-to-date so that you do not get distracted and waste time and effort chasing the problem down the wrong path.  
For example, one of the questions you typically would ask the customer is, “What has changed recently?” If the customer tells you that they have recently applied a Service Pack to the product, you might have a very reasonable belief that the Service Pack is contributing to their problem. However, before calling you, in his or her own effort to correct the problem, the customer might have already removed the Service Pack. In this situation, the Service Pack is very unlikely to have anything to do with the problem. While the customer might not have told you specifically what version of SQL Server they are running, you might have made the incorrect assumption based on the answer they have provided.
This kind of miscommunication and mistake is very easy to make. Collecting the basic or core information helps you catch any false assumptions or misinformation.
 
Look for Evidence to Support Your Hypothesis
After the customer has described their problem to you, you might have a rough idea or hypothesis of what they might be running into. In this situation, in order to verify your theory, you need to gather more information.
If the customer tells you that the application is running slow, and if you believe the customer’s problem is caused by poor query performance of a particular stored procedure, it may be wise to collect a SQL Profiler Trace and a Performance Monitor Log to make sure there aren’t any other issues that are actually more pressing than the performance of a particular stored procedure. For example, the stored procedure might not run very frequently and the real culprit is another stored procedure that does not take too long to finish but is executed very frequently.
Identify Other Causes that can be Easily Overlooked
Although many performance issues are very difficult to resolve, some of them are fairly straightforward. It is more effective to identify or eliminate common and easily identifiable causes of the problem first, before delving deep into the problem.
For example, when your customer tells you that their server is running sluggishly and they have recently upgraded their SQL Server from version 6.5, you might be tempted to conclude that their problem is related to their recent upgrade. In situations like this, it is always a good idea to look at the basic information to make sure you have not missed something. In addition to asking more questions about other changes to the server as a whole, you might want to check the errorlog to make sure SQL Server has not generated a lot of stack dumps, that SQL Server is configured appropriately, and so on before diving into the case looking for upgrade related issues.
Gathering Hints to Help Formulate Your Hypothesis
If your customer tells you that their server is running slowly but does not give you any hints as far as what might be causing the problem, you need to gather information to help you figure out how to start attacking the problem. Gathering and analyzing the core information gives you a starting point from which you can formulate a course of action.

 
Gathering Core Information
 

Same Time Frame
All information must be gathered at approximately the same time. This is important because you need to be able to cross-reference information from various sources. For example, suppose the performance monitor log you gathered shows steadily increasing memory usage for SQL Server. By cross-referencing the errorlog, you find out that SQL Server has just recently been started, and it is normal for SQL Server to continue to commit more memory. However, the same increase in memory usage might signal a problem if SQL Server has been running for a long time. In this situation, you might want to find out what SQL Server is doing by investigating the SQL Profiler Trace to see what query might be running at this time to help explain the memory usage pattern. Therefore, the information gathered is truly useful when all information can be cross-referenced.
 
SQLDiag.Exe
This utility makes it easier to gather useful and relevant information. However, you can gather the same information manually when necessary if for some reason SQLDiag.exe cannot be run.
The default file name for the output of sqldiag.exe is sqldiag.txt, which is located in the \LOG directory of your SQL Server installation.
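For illustration, here is a hypothetical invocation that uses Windows authentication and an explicit output file. The -E and -O switch names are assumptions based on the SQL Server 2000 version of the utility; confirm them with sqldiag -? before sending instructions to a customer.
sqldiag -E -O c:\temp\sqldiag.txt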
 	For more information, see the following Knowledge Base article: Q272688
PRB: Sqldiag Utility defaults to SQL 7.0 path if 7.0 is default instance

Although SQLDiag calls MSINFO32, there are some categories not reported. As such, you may find it advantageous to run MSINFO32 yourself, reporting on some of the other categories such as running tasks, loaded modules, and drivers.
	
	Note
	MSINFO32 is fully documented in the Windows® 2000 help.
	
The following table shows the types of information SQLDiag provides.
SQLDiag Output
Information	Useful For
Errorlogs	Errors, Configuration Changes, Xp Calls, SP_OA* Calls, Net Libraries, Thread/Fiber Mode
sp_configure	Locating Non-Standard Options
sp_who	Current Users, SPID, ECID
sp_lock	Current Lock Status
sp_helpdb	Databases, Size, Status, Recovery Model, DB Options Enabled (Auto Statistics, Torn Page Detection and so on)
xp_msver	SQL Server Version, Numbers Of Processors, Processor Mask, Physical Memory
sp_helpextendedproc	All Extended Procedures
sysprocesses	SPID, KPID, Wait Type/Time, Open Tran, CPU, I/O, Memory Usage, Status And Command
DBCC inputbuffer()	Event Type, Parameters And Command
Head Blockers	Identifying Blocking Spids
MSINFO32	System Configuration Issues

 

MSINFO32 Output
Categories Reported by SQLDiag:
SystemSummary
ResourcesConflicts
ResourcesIRQS
ComponentsNetwork
ComponentsStorage
ComponentsProblemDevices
SWEnvEnvVars
SWEnvNetConn
SWEnvServices
SWEnvProgramGroup
SWEnvStartupPrograms

Useful Categories Not Reported:
SWEnvRunningTasks
SWEnvLoadedModules
SWEnvDrivers

	
	Note
	You can also get similar information by exporting the information provided by the MMC snap-in Computer Management: System Information.
	
Event Log
It is useful to save both the application and system event logs as text files. Saving the logs as text files makes them easier to read and search.
Startup Parameters
SQL Server startup parameters can be obtained by running SQL Server Enterprise Manager and going to the General tab of the particular SQL Server instance’s Properties, and clicking Startup Parameters.
You can also find this out by looking in the registry:
 	HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\<Instance Name>\MSSQLServer\Parameters
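For example, here is a minimal sketch that reads the first two numbered parameter values with xp_regread. The path shown is for a default instance (the exact registry path varies for named instances), and SQLArg0/SQLArg1 are simply the first entries in the numbered SQLArg<n> series:

EXEC master..xp_regread 'HKEY_LOCAL_MACHINE', 
  'SOFTWARE\Microsoft\MSSQLServer\MSSQLServer\Parameters', 'SQLArg0'
EXEC master..xp_regread 'HKEY_LOCAL_MACHINE', 
  'SOFTWARE\Microsoft\MSSQLServer\MSSQLServer\Parameters', 'SQLArg1'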

DBCC TRACESTATUS (-1)
This command shows you all the trace flags that have been turned on globally for a particular SQL Server instance.
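For example (trace flag 1204, the deadlock reporting flag, is used here purely as an illustration):

DBCC TRACESTATUS(-1)    -- all trace flags enabled globally
DBCC TRACESTATUS(1204)  -- status of one specific trace flag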
 
Performance Monitor Log, SQL Profiler Trace, and Monitoring Script Output
Once you have discussed the problem with the customer, you should have a good understanding of what information to gather. If the customer indicates that it is a slow query performance issue, then it would make sense to collect SQL Profiler Trace. If the customer indicates that blocking is occurring on the server, then you might want to collect locking and blocking information using a monitoring script. You might also decide to collect a performance monitor log, a SQL Profiler Trace, and locking and blocking information just to make sure you have covered all the bases if you have very few hints on what might be happening to the server.
Sometimes, the criticality of the issue might dictate the amount of information you gather initially. If the issue is so urgent that it needs to be resolved immediately, then the more information you gather up front, the more likely it is that the issue can be resolved quickly. This must be balanced against the need not to overly burden the customer with tasks that might not contribute to the resolution of the problem. A more customer-friendly approach would be to collect some information to give you a better idea of what might be the problem, formulate a course of action, test your hypothesis, and so on (see Module 1 for a discussion of the DETECT method).
 
Evaluating Core Information - Overview
 

The next few pages detail how to evaluate and analyze the core information you have gathered.

 
Scan Errorlog and Event Log
 

Looking for Anything Unusual
There are a number of things you want to look at in the errorlog and event logs. For example, what SQL Server edition is the customer running and is their performance issue perhaps related to them running the Personal Edition installed by some third party consultant instead of the Enterprise Edition they have purchased? Did they turn on Priority Boost? Does the server name match the Instance Name? Are they using the Multi-protocol net library and if so, is encryption turned on and thus slowing down network traffic? Did the performance problem have anything to do with the long database recovery they experienced? Were all the databases recovered successfully? Are there any stack dumps in the errorlog? What global trace flags have been turned on? Are there any OS errors or hardware errors such as those related to SCSI devices? Are there any unusual messages in the errorlog?
 	For more information about how to scan the errorlog efficiently,
see the following Knowledge Base article: Q115519
INF: How to Scan SQL Errorlog or DBCC Output for errors

 
Note Whether Service Pack or Hot Fix is Applied
It is important to know which version of SQL Server the customer is running, as there are performance related bugs that are fixed in certain Service Packs or Hot Fixes. However, it is also true that some regression bugs are introduced in a specific Service Pack or Hot Fix version.
In addition, if the customer had problems with installation, there might be incompatible or older versions of DLLs lying around that can negatively impact SQL Server. You need to review the DLL versions shown in the errorlog to make sure they are compatible. For SQL Server DLLs, the DLL version should be at least 2000.80.194. For MDAC components, SQL Server 2000 requires MDAC 2.6 and above.
 	For more information about how to troubleshoot the MDAC installation,
see the following Knowledge Base article: Q232060
HOWTO: MDAC Setup Troubleshooting Guide

In addition, the Component Checker Tool downloadable from http://msdn.microsoft.com can help you determine which DLL versions and components are installed. Here is a copy of the tool’s description from the Web site:
The Component Checker is designed to help you determine installed version information and diagnose installation issues with the Microsoft Data Access Components (MDAC). Component Checker runs on the following operating systems: Microsoft Windows 95, Windows 98, Windows NT® 4.0, and Windows 2000. Only 32-bit and 64-bit platforms are supported. Component Checker has been updated to include MDAC 2.6 data.
Getting a Good Overall Picture
Looking at the errorlog and event logs gives you a general picture of how “healthy” the server is. If there are a lot of errors and messages displayed, the customer might be running into other problems that manifest themselves as performance issues. It is critical to resolve any platform errors or problems first before we can effectively troubleshoot performance issues.
 
Check SQL Server and Database Configuration
 

Output from sp_configure and sp_helpdb shows you how the server and databases are configured. A good starting point on how to evaluate the output is to note which configuration values are changed from the default value. After the non-default values are identified, you can then make a quick judgment on how they can impact performance. This should not be your final judgment because you must look at other factors to decide whether they make sense or not. You need to come back and revisit them after you have a better understanding of how the applications, the users, and other competing demands for resources play against the server.
For example, a fixed memory configuration may be a good idea if multiple instances are running on the same computer or if the instance is a clustered instance.
Another thing to note is that many of these configuration values can be set dynamically so that what you are seeing here may have been adjusted already. You may have to request new output for these configuration values later to see if they have been changed or not.
 	For more discussion about server and database configurations, see Module 7 Lesson 1 topics “Improperly Tuned SQL Server Configuration” and “Inappropriate Database Configuration.”
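As a quick sketch of checking this manually, enabling advanced options makes every configuration value visible so that non-default settings stand out:

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
-- Compare the run_value column against the documented defaults
EXEC sp_configure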

 
Review SQL Server Startup Parameters and Trace Flags Used
 

The typical SQL Server startup parameters include -d, -e, and -l, which indicate respectively the location of the master data file (master.mdf), the errorlog file, and the master database log file. Any other startup parameters should be noted.
 	For more information about what other startup options are available and how they are used, see also… Books Online topic “Startup Options” (adminsql.chm::/ad_1_start_8m43.htm)

 	For more information, see the following Knowledge Base article: Q200103
INF: SQL Server 7.0 Startup Parameters

In addition, it is important to find out what global trace flags are turned on so that you can evaluate their impact on performance.
 	See appendix for more information about the use of trace flags.

 
Review Output from sysprocesses, sp_who, sp_lock
 

Outputs Represent a Snapshot in Time
Unless you have a series of outputs from sysprocesses, sp_who, and sp_lock, you may not be able to decide whether any issues such as blocking and high levels of server activity should really concern you. Blocking is a natural part of an active relational database implementation, so there is no need to worry about it except when prolonged blocking occurs. In those cases, the user response time usually shoots up significantly. You cannot tell whether extended blocking is occurring when you are looking at just a snapshot in time.
Server Activity
To see how busy SQL Server was when the outputs were collected, from the output from sysprocesses or sp_who, you can count how many SPIDs or processes are connected to SQL Server at that point in time. Also, you can check to see how many of the connections are actually doing work by looking at their status and counting how many of the processes have the status of “runnable.” Having a runnable status indicates that a process is actually doing work at that moment in time.
 
Check for Processes with Open Transactions
Scan the open_tran column of sysprocesses to see which processes have an open transaction. For those processes that have an open transaction, check to see if any of them has a sleeping status. This is a good indication of a badly coded application or that a user, using a tool such as Query Analyzer, has started an explicit transaction but has not yet committed this transaction.
Check for Blocking Occurrences
Scan the BLK column (from sp_who) or the blocked column of sysprocesses for any non-zero value. This should show you quickly whether any processes are blocked.
Look for Process with High waittime Value
Scan the waittime column of sysprocesses for high values (when comparing them to other processes). This helps you identify those processes that have waited for resources for a long period of time.
Look for Process with High CPU and Physical IO
Scan the cpu column and the physical_io column of sysprocesses for high values (when comparing them to other processes). This helps you gain some insight into whether SQL Server is running some queries that are very resource intensive. You can also look at the memusage column of sysprocesses for high memory usage processes also.
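Here is a minimal sketch that combines these checks against master..sysprocesses (the waittime threshold is an arbitrary assumption; adjust it to the situation at hand):

SELECT spid, status, blocked, open_tran, waittype, waittime,
       cpu, physical_io, memusage, cmd
FROM master..sysprocesses
WHERE open_tran > 0       -- processes holding open transactions
   OR blocked <> 0        -- processes blocked by another spid
   OR waittime > 10000    -- long waits; assumed threshold in milliseconds
ORDER BY cpu DESC, physical_io DESC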
 
Identify Third-Party Extended Stored Procedures
 

Check Output from sp_helpExtendedProc
Compare the output received to see if any extended stored procedures listed are not shipped with SQL Server 2000. When the problem reported has anything to do with CPU spin, server “hang”, or SQL Server shutting down unexpectedly (an unhandled exception may have occurred), and third-party extended stored procedures are present, special attention should be paid to those procedures.
Check for OLE DB Provider
An OLE DB provider, if loaded in-process, can cause the same problems as an extended stored procedure. You can check to see if an OLE DB provider is loaded in-process or not by doing one of the following:
	Using Enterprise Manager, right-click a linked server definition and click Properties. On the General tab, click Provider Options. If the Allow InProcess option is selected, then the OLE DB provider is loaded in-process.
	Use Tlist.exe to see what DLLs are loaded under the process space of SQL Server. Some common OLE DB providers have names similar to MSDAORA.DLL, which represents an OLE DB provider for Oracle RDBMS.
 	For more information about how to use Tlist.exe, see also…Lesson 5
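A third option is to read the provider’s settings directly from the registry. This is a hedged sketch for a default instance: the Providers key path and the AllowInProcess value name are assumptions based on SQL Server 2000 behavior, where a value of 1 means the provider is loaded in-process:

EXEC master..xp_regread 'HKEY_LOCAL_MACHINE', 
  'SOFTWARE\Microsoft\MSSQLServer\Providers\MSDAORA', 'AllowInProcess'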


 

 
Lesson 2: Using Performance Monitor
 

What You Will Learn
After completing this lesson, you will be able to:
	Describe Performance Monitor and how to use it to troubleshoot performance issues.
	Analyze a Performance Monitor log to identify potential performance bottlenecks.

Recommended Reading
 	Q150934 - How to Create a Performance Monitor Log for NT Troubleshooting
Q175658 - How to gather Information for effective troubleshooting of Performance
Q174482 - Performance Monitor Chart View Limited to 100 Data Points
Q177655 - Negative Values in Performance Monitor Data
Q195654 - Methods for Condensing a Performance Monitor Log File
Q248993 - Performance Object Is Not Displayed in Performance Monitor
Q253264 - Creating sequentially numbered performance monitor logs 
Q167050 - CPU Usage Limited to 100% on Multiprocessor Computer (NT40)


 
Performance Monitor Overview
 

Windows NT and Windows 2000 provide tools to monitor the system’s performance. The tools allow you to track nearly every aspect of your system’s performance, including memory and disk performance, processor usage, and network statistics. In addition, applications can provide custom counters to track their performance features. SQL Server 2000 provides many specific counters that allow you to monitor a variety of functional areas within SQL Server.
Although the performance monitor tool can be used for a number of purposes (such as detecting trends and changes to allow proactive system upgrades or to monitor effects of hardware or software additions), in this module we are focusing on the ability of performance monitor to uncover resource limits.
Uncovering Resource Limits
Performance issues can typically be attributed to limitations on resources available on a particular system. These resource limitations can be thought of as bottlenecks. As was discussed in Module 1 of this course, a bottleneck represents an area of the system that restricts the flow of work. Removing a particular bottleneck ultimately exposes another bottleneck within the same system. Resource limits, or bottlenecks, can exist in any one of the following major areas and should be investigated in this order.
Memory
Memory represents the most likely place for a bottleneck to appear. A limitation in memory resources can manifest in other areas such as disk and processor. This is particularly true for a system running SQL Server. For example, when SQL Server runs out of buffers (memory), the lazywriter must be activated to free buffers for reuse, causing CPU cycles to be consumed by lazywriter routines and possibly physical disk I/O to occur when buffers have to be written back to disk.
Disk I/O
SQL Server uses disk resources extensively. Contention in disk resources can significantly degrade SQL Server’s performance. For example, placing a database log file on the same physical disk as an operating system paging file has a high potential of degrading system performance. When both the database log files and the paging files are actively used, by SQL Server and the operating system respectively, the result can be a high level of disk head movement. High disk head movement slows down how quickly log records can be written. Because database log files are written synchronously, transactions must wait for the disk to finish writing the log records before they are truly committed. Thus, disk usage by SQL Server must be carefully planned to avoid resource contention and limitations. The discovery and resolution of disk resource contention can greatly improve performance of SQL Server.
Processor
SQL Server uses processors heavily. Starting with SQL Server 7.0, the query optimizer evaluates a significant number of options to find the best query plan available within a reasonable amount of time. It is common to see processor usage go up compared to SQL Server 6.5, which had relatively limited options for query optimization.
Network I/O
There can be a lot of network traffic between SQL Server and its clients. Application implementation can have a significant impact on the amount of network traffic that goes between clients and server.
 
Purposes of Using Performance Monitor
 

Gathering Evidence
The Performance Monitor and System Monitor enable you to gather data to support your theory about what might be happening to your server. Because the Performance Monitor log is an overview of the internals of the computer and does not tell you what the users observe, it would obviously be helpful if the customer has already reported to you his or her observations as far as what actions are slow. If you are not quite sure what is going on with the server, this provides you with a starting point from which to launch further investigations.
Capturing Overall State of the Computer
The Performance Monitor log does not show you why data points for a certain counter are high or low; it just shows you the values of the data points captured within the log.
Capturing Changes in the State of the Computer
The Performance Monitor captures changes in the state of the computer, not just a snapshot of the currently running state of the computer. One aspect of troubleshooting is knowing what is normal and what is abnormal. It is always a good practice to track performance of all the major components of the system to compare to when problems begin emerging. This gives some additional perspective on the problem. Any changes to the state of the system can be compared against this baseline.
If these baseline logs are not available, you just need to look for changes within the log itself: look at relative counters instead of absolute counters; or compare the log received from your customer to other computers you have logged previously.
 
Sysmon vs. Perfmon
 

Most of the topics discussed in this lesson are applicable to both System Monitor and Performance Monitor. You can start system monitor and performance monitor under Windows 2000 and Windows NT by running the executable Perfmon.exe.
If you are more comfortable with the Performance Monitor interface, you can use the Perfmon4.exe from the Windows 2000 Resource Kit to get the same user interface as Performance Monitor when running under the Windows 2000 operating system.
 
Preparing to Capture Performance Monitor Log
 

If Appropriate, Disable all Third-Party Services and Services not Currently Needed
If the issue occurs randomly with different time intervals between occurrences, then a Performance (System) Monitor log should be just one of many troubleshooting steps; the other steps should be tried while data is being gathered. One such step is to disable ALL unnecessary drivers and services that may impact the performance of the computer.
IIS and content indexing are examples of services that can be disabled if not needed. Examples of third-party software to disable include screen savers, virus scanners, and file replication agents. This can simplify your troubleshooting effort.
Synchronize time Between Computers if Monitoring Remotely
The performance tools can be used locally or remotely. From one central location, it is possible to gather data on all the important systems in an enterprise. If remote monitoring is done, it is important to make sure that the remote computer you use to monitor SQL Server has its time synchronized with the computer running SQL Server. Because time is used as a reference point in most cases, if time is not synchronized between the computers, you cannot effectively use the Performance Monitor log to cross-reference other information collected.
Log remotely if crash is possible
If the target computer may potentially crash, it is better to log remotely, thereby avoiding having to recover a potentially lost or damaged log file.
Log locally for a network
If logging network data, however, it is best to log locally. Logging remotely creates network traffic that may affect the results.
 
Logging Information Rather Than Charting
 

Logging is a more reliable way to capture the whole state of the computer for the following reasons.
	You never know ahead of time exactly what counters you may need to troubleshoot the problem.
	Charting captures and displays at most 100 data points before it draws over previous information.

 
Time Interval
 

Long Enough Time Interval
Make sure that the log runs long enough to capture the problem. However, if a computer is having a problem that eventually blue-screens a system, the log normally does not have to run all the way up to the blue screen. Normally the symptoms that lead to the blue screen are visible before the system actually goes down. For example, if a blue screen occurs every three days, there might be enough data after two days.
Choose Appropriate Sample Interval
In general, if data is being sampled in less than four second intervals, the log itself impacts system performance. Use the following as a guideline:
Time to Exhibit Problem	Sample Interval
1 to 2 hours	4 seconds
1 day	30 seconds
5 days	180 seconds

	
	Tip
	It is rare to capture Performance Monitor logs for 5 days in a row. In those situations where you are considering logging for such a long period of time, consider creating five logs—each logging for one day. This way, you can use a more frequent sampling interval.
	

 
Objects to Capture
 

Diskperf –y
In order for the diskperf –y command to take effect, the computer must be rebooted. The -y switch turns on both logical and physical disk counters for both Windows NT and Windows 2000.
Although all disk counters (physical and logical) are turned off on Windows NT by default, physical disk counters are turned on by default on Windows 2000. You can use the -YV switch to turn on logical disk counters (or use the -Y switch for both logical and physical disk counters).
Capture All Objects if Possible
Although capturing a Performance Monitor log is an iterative process, in the interest of reducing the time to resolve the issue, most of the time it is best if data for all objects is captured. This reduces the number of times you request that the customer capture additional logs.
 
Objects to Log if All Objects is Not Feasible
Although capturing a Performance Monitor log typically does not have a major impact on the system being monitored (unless you sample too frequently), if the customer is really concerned about the performance impact of capturing a Performance Monitor log, you should at least capture the following objects:
	Cache, Memory, Objects, Paging File, Physical Disk, Logical Disk, Process, Thread, Processor, Server, System
	All SQL Server objects
	If troubleshooting network related problems, you also want to capture all objects related to the protocol stacks installed

Include Object and Thread Objects
When creating a log it is beneficial to always include the Object and Thread objects in the log. Some counters may not appear unless these objects are included.
More Log Please
As previously indicated, capturing a Performance Monitor log is typically an iterative process; you may have to go back and ask the customer to capture additional log information if you are unable to determine the cause of the problem with the logs already captured.
To make it easier for the customer, you need to explain to the customer before the first performance log is gathered that more information may need to be gathered later. You can also help the customer collect the performance monitor log by sending explicit instructions and/or templates, as well as by addressing the customer’s concerns regarding disk space requirements and the performance impact of data collection.
 
Other Logging Considerations
 

One Log per File
When creating a log file, always create one log per file. It can become very confusing if data from several different log sessions are logged together.
Check Disk Space
Always remember to check the disk space available for log files.
	
	Important
	If the log is planned to run for many hours or days, it may exceed the free space available on disk.
	
Re-logging a Log
If the log file is too big, it may be easier to condense the log.
 	For more information, see the following Knowledge Base article: Q195654
Methods for Condensing a Performance Monitor Log File

One Gigabyte Maximum Log Size Limit
The maximum log size is 1 gigabyte, which is already too big to handle easily. If you are capturing a Performance Monitor log for a long period of time, for a Windows NT server it may be best to use a batch file to stop and restart logging to reduce the size of each log file captured. For Windows 2000 servers, you can use the Sysmon schedule feature to control when to start and stop logging to control the size of the log file.
 
Re-logging Events Using Relog.exe
 

Relog.exe
There may be times when you would like to reduce the data set of a log file or convert the log file to a different format. Although Windows 2000 does not provide a comprehensive conversion tool, Windows XP provides a tool called Relog.exe for re-logging System Monitor events.
To help you leverage this tool, follow these steps:
1.	View log file data: import the .blg file captured during testing.
2.	Select counters that you would like to chart.
3.	Right-click chart and Save As HTML.
4.	Open the newly created html file in Microsoft NotePad.
5.	Delete everything except the counters you would like to chart.
Example: \\server_name\Active Server Pages\Requests Total.
6.	Save the file as a text (.txt) file with ANSI encoding.
7.	Run Relog.exe against the captured log, using the counters file to reduce the data set:
relog /Input:Logfile.blg /Output:Output.csv /Settings:Counters.txt

	
	Note
	The tool provided with Windows XP will NOT work on Windows 2000. You should not send Relog.exe to the customer. You use it to reduce the log file size after you have received the log from the customer.
	

 
Displaying and Charting Log File
 

Vertical Scale
Perfmon is limited to displaying a vertical scale of 100 percent; Sysmon does not have such a limitation.
Sysmon Compensates for a Different Time Zone
If the Performance Monitor log is captured in a time zone different from the time zone in which you are located, Sysmon compensates for this. If you are using information in the log to cross-reference other information captured, such as a SQL Profiler Trace, you need to be careful when trying to match the time of occurrence with what you see in the log. You may have to adjust your computer’s time zone to match your customer’s computer in order to easily cross-reference the performance monitor log with other information.
You can send the following T-SQL script snippet to the customer to find out the time zone of the customer’s server:
PRINT '-- Server time bias (ActiveTimeBias, includes daylight savings -- minutes)'
EXEC xp_regread 'HKEY_LOCAL_MACHINE', 
  'SYSTEM\CurrentControlSet\Control\TimeZoneInformation', 'ActiveTimeBias'
PRINT '-- Server time bias (Bias, ignores daylight savings -- minutes)'
EXEC xp_regread 'HKEY_LOCAL_MACHINE', 
  'SYSTEM\CurrentControlSet\Control\TimeZoneInformation', 'Bias'
	
The output of this should show you the number of minutes away from Greenwich Mean Time (GMT). For example, an ActiveTimeBias of 480 corresponds to 8 hours behind GMT (Pacific Standard Time).
For Trends Always Chart the Whole Log
To begin an examination of a log file, chart the whole log file to spot any overall trends. After a trend has been identified, more focus can be put on that particular time frame.
Use Multiple Charts against One Log
It is beneficial to use multiple charts against one log file in order to keep the amount of information viewed at the same time to a manageable level. Interesting areas of the chart can be matched up with Event Log entries or a SQL Profiler Trace to give a clearer picture on what might be happening at that particular point in time.
Adjust the Scale
To keep the counters on the chart viewable, adjust the range on each counter individually so that it falls within the chart scale. For Sysmon, this is done by right-clicking the chart pane; select Properties, select the Data tab, select the counter to modify, and then select the scale from the Scale list box. Select the appropriate scale so that the counters all can be viewed on the same chart.
Use the Time Range Slider 
Use the Time Range slider on the Source tab of the Properties dialog box to focus in on a particular time frame. It might be easier to see some of the data when extraneous data is removed from the graph. The arrow keys can be used to move one period at a time. Remember that only 100 data points can be viewed and factored into the statistical calculations at any given time, so use the slider to examine all of the data.
Out of Bounds Data
Data that ends before the rightmost border indicates that the display is showing all the available data. A point or two over the top of the chart usually means a divide by zero or a wrap in a counter. This is especially true if the value is 4294967296 (2^32) or 65535 (2^16 - 1). These values usually can be ignored.
Sysmon and Perfmon
Although Sysmon can display log files captured by Perfmon, there have been problems with some counters not being displayed correctly. You should consider using Perfmon4.exe, provided by the Windows 2000 Resource Kit, when using computers running Windows 2000 to display and chart log files captured by Perfmon.

 
Analyzing the Data
 

Patterns and High/Low Counter Values
It is helpful to look at patterns when charting the counters. For example, a chart might show a correlation between different counters that is found simply by looking at the patterns displayed: Pool Nonpaged Bytes tracking SNMP Private Bytes and, more importantly, both values increasing only when % Processor Time is active for the SNMP process.
 
It also helps to look at abnormally high or abnormally low values for different counters to spot potential problems. Of course, knowing what “normal” is can help tremendously. That is why having a baseline is helpful in troubleshooting issues.
Look for Other Evidence to Support Your Theory
After you have found a counter that you suspect is an indicator of the problem, it is time to look for other evidence to support your theory. For example, if the % Processor Time value for a process remains high, it may not represent a CPU bottleneck. This counter might actually reflect a low physical memory condition. When memory consumption is high, the operating system memory manager might have to service a large number of hard page faults. A high number of hard page faults can lead to a high % Privileged Time value, which is part of the % Processor Time value.
Here is another example to illustrate this point. If you see SQL Server Total Server Memory climbing steadily, typically this does not represent a memory leak. Buffer Pool continues to increase when SQL Server has not reached its target memory and there is no external memory pressure.
Look at Other Information to Corroborate Your Findings
In addition to finding evidence within the data of the Performance Monitor log file, you also want to check other sources of information to corroborate your findings. Other sources of information include: errorlog, SQL Profiler Trace, DBCC command output, and so on.
 
Lesson 3: Using SQL Profiler
 

What You Will Learn
After completing this lesson, you will be able to:
	List Profiler Events and their respective data columns relevant to performance troubleshooting and describe how they are used.
	Choose and log the necessary events to troubleshoot performance issues.
	Analyze Profiler Log to identify potential performance bottlenecks.

Recommended Reading
 	Q214799 – INF: SQL Profiler From Command Line Parameters to Temp File Usage
Q279033 – INF: Identifying Cascading Referential Integrity in SQL Profiler
Q258990 – INF: Trace in SQL Server by Using Extended Stored Procedures
Q237932 – The function fn_trace_gettable cannot read rollover files generated by SQL Profiler
Q270599 – INF: How to Programmatically Load Trace Files into Tables
Q270629 – BUG: Profiler trace files not generated when files exist
Q283696 – INF: Job to Monitor SQL Server 2000 Performance and Activity
Q283725 – INF: How to View SQL Server 2000 Blocking Data
Q283784 – INF: How to View SQL Server 2000 Activity Data
Q283786 – INF: How to Monitor SQL Server 2000 Traces
Q283886 – INF: How to View SQL Server 2000 Performance Data



 
SQL Profiler Overview
 


 
SQL Profiler Features
 

Users can create and save trace definitions – horizontal and vertical filtering options – to a file with .tdf extension. This file is called a trace template. The template file can later be used to set filtering for a new trace.

 
Events, Event Classes, and Event Categories
 

Connect and Disconnect events in SQL Server 7.0 are replaced with Security Audit: Audit Login and Audit Logout events in SQL Server 2000.
Only Capture Events Needed
Capturing events that are not needed not only increases the size of the trace file but can also significantly impact the performance of SQL Server. Events should be chosen with care. This is particularly true when dealing with the Locks and Scans events.
	
	Caution
The Locks and Scans events should be used with caution because they can generate voluminous trace output and degrade performance of SQL Server.
	
 Starting vs. Completed Events
Starting events such as SQL:BatchStarting and RPC:Starting should be captured when you do not expect the event to actually finish; for example, if you are investigating SQL Server exceptions (stack dumps are found in the errorlog) or when you are checking to see if some batches may be aborted due to parsing errors.
Completed events such as SQL:BatchCompleted and RPC:Completed provide you with additional information such as the Duration of the events. Completed events are usually preferred over starting events.
 
Data Columns
 

Data Columns
SQL Profiler allows you to select data columns that represent the information you would like returned when a trace is running. The data displayed in SQL Profiler can be displayed either in the order the events occur or in a group based on one or a combination of data columns.
Data Columns and Events
It is important to note that different events place information in different sets of data columns. The data columns describe the data collected for each of the event classes captured in the trace. Because the event class determines the type of data collected, not all data columns are applicable to all event classes. For example, the Binary Data data column, when captured for the Lock:Acquired event class, contains the value of the locked page ID or row, but the same column carries no value for event classes that do not use it.
Events and Data Columns to Capture
The following table shows the major performance related events and their corresponding data columns.
 	For more information regarding the relationships between events and the corresponding data columns, see also… Books Online

 

Event Category	Events	Data Columns
Cursors	CursorOpen, CursorClose, CursorImplicitConversion	Event Class, Integer Data, Binary Data, Handle
Database	Data & Log File: Auto grow, Auto shrink	Event Class, Duration, File Name, End Time, Integer Data
Errors and Warnings	Attention, Errorlog, Eventlog, Exception, Hash warning, Missing Column Statistics, Sort warning	Event Class, Event Sub Class, Error, Severity, Text Data, Binary Data, Integer Data
Locks	Time Out, Deadlock, Deadlock Chain	Event Class, Mode, Binary Data, Integer Data, Object ID, Index ID, Event Sub Class, End Time
Objects	Auto Stats	Event Class
Performance	Degree of Parallelism, Execution Plan, Show Plan All, Show Plan Text, Show Plan Statistics	Event Class, Event Subclass, Integer Data, Binary Data, Text Data, Object ID
Scans	Started, Stopped	Event Class, Mode, Index ID, Object ID, Transaction ID, Duration, Reads, End Time
Security Audit	Audit Login Event, Audit Logout Event, Audit Login Failed Event, Audit DBCC Event, Audit Add DB User Event	Event Class, Event Sub Class, Text Data, Binary Data, Success, Duration, Reads, Writes, CPU, DBUserName, Database Name
Sessions	ExistingConnection	Event Class, Binary Data
Stored Procedures	RPC:Starting, RPC:Completed, SP:Starting, SP:Completed, SP:Recompile, SP:StmtStarting, SP:StmtCompleted	Event Class, Event Sub Class, Start Time, End Time, Duration, CPU, Reads, Writes, Text Data, Object Name, Object ID, Object Type, Nested Level
Transactions	DTCTransaction, SQLTransaction, TransactionLog	Event Class, Event Sub Class, Binary Data, Integer Data, End Time, Reads, Writes, CPU, Duration, Transaction ID, Object Name
TSQL	Exec Prepared SQL, Prepare SQL, Unprepare SQL, SQL:BatchStarting, SQL:BatchCompleted, SQL:StmtStarting, SQL:StmtCompleted	Event Class, Start Time, End Time, Duration, CPU, Reads, Writes, Text Data, Integer Data, Binary Data, Handle, Object Name, Object ID, Nest Level

	
	Important
	The Locks and Scans events should be used with caution because they can generate voluminous trace output and degrade performance of SQL Server.
	
 
Creating Traces
 

Unlike SQL Server 7.0, when using SQL Profiler in SQL Server 2000 to create a trace, by default all “Event Classes” and all “Data Columns” are available.
 
Saving Trace Output
 

To File:
Max File Size – 1 GB limit. When maximum file size is reached, tracing stops unless you have enabled the File Rollover option.
Enable File Rollover (default) – Creates multiple files automatically when maximum file size is reached. Disk size is the limit.
•	Bug 235534: Profiler may not generate the trace files if the files with the same name already exist in the specified directory.
 	For more information, see the following Knowledge Base article: Q270629
BUG: Profiler trace files not generated when files exist

•	Bug 235519: ::fn_trace_gettable generating error 567 when 'DEFAULT' is the second parameter after tracing the server using file Max Size and Rollover option.

Server Processes Trace data – Use this to avoid missing events. Although this option is not turned on by default, it is recommended that you use this option whenever possible. When this option is selected, all of the tracing and file writing is done on SQL Server. It is limited to a file share point that the Server login can access. For remote SQL Servers, you can only specify the file using UNC naming convention.
To Table
Max Rows – When this is reached, the tracing continues without logging the events.
 
Filtering Data
 

Typically, both pre-filtering and post-filtering are done when capturing data. Pre-filtering is done to reduce the system overhead of creating the trace file; post-filtering is done to analyze the captured data.

 
Creating Trace Using System Stored Procedures (1 of 2)
 

Black Box Option in sp_trace_create
Unlike in SQL Server 7.0, the black box trace file, blackbox.trc, is created in the \Data directory of the SQL Server installation. You cannot specify the file name, nor can you specify specific events to capture (black box trace files only contain TSQL:Batch events).
The black box trace file is created as soon as you start the trace with the following command:
DECLARE @RC int, @TraceID int
EXEC sp_trace_create @TraceID output, 8

The black box trace file by default has a 5-megabyte (MB) maximum file size with the rollover option turned on. Information is written to the trace in 128-kilobyte (KB) increments. Therefore, you might not see the latest information in the black box trace file unless you either stop and remove the trace or stop SQL Server.
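Here is a minimal sketch of the full lifecycle, assuming you want to stop and remove the trace so that the latest events are flushed to the file (for sp_trace_setstatus, status 1 starts the trace, 0 stops it, and 2 closes it and deletes its definition):

DECLARE @TraceID int
EXEC sp_trace_create @TraceID output, 8  -- option 8 = black box trace
EXEC sp_trace_setstatus @TraceID, 1      -- start the trace
-- ... reproduce the problem ...
EXEC sp_trace_setstatus @TraceID, 0      -- stop the trace
EXEC sp_trace_setstatus @TraceID, 2      -- close the trace and delete its definition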
 	For more information about how to use these system stored procedures, see also… Books Online

 
Creating Trace Using System Stored Procedures (2 of 2)
 


::fn_trace_gettable
Assuming that there is a trace file called my_trace.trc in the C:\ directory, here is an example of how to use the ::fn_trace_gettable function:
USE tempdb 
SELECT * 
  INTO temp_trc 
  FROM ::fn_trace_gettable('c:\my_trace.trc', default)
	
 	For more information, see the following Knowledge Base article: Q258990
INF: Trace in SQL Server by Using Extended Stored Procedures

 
Creating Trace Using System Stored Procedures – An Example
 

To assist the customer in creating a trace it is often best to supply them with a script that creates the trace to your specifications.
	
	Tip
	You can easily create a script by tracing the SQL Profiler application itself, but in SQL Server 2000 you can also simply script a trace by using the File / Script Trace / For SQL Server 2000 menu command.
	
	To use system stored procedures for tracing
1.	Create the trace by running the sp_trace_create procedure.
2.	After a trace is created and a TraceID is returned, add events using the stored procedure sp_trace_setevent, specifying each event by its event_id (event number) and adding every data column you are interested in capturing.
3.	Then use the sp_trace_setfilter procedure to specify filters for the events.

Always make sure that the Return Code and the Trace IDs are selected so that you know the status of the trace.
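The following is a minimal sketch of such a script. The file path, maximum file size, and the particular events and filter are assumptions chosen for illustration (event ID 10 is RPC:Completed and 12 is SQL:BatchCompleted; data column ID 1 is TextData and 13 is Duration; in the filter, column 10 is ApplicationName, logical operator 0 is AND, and comparison operator 7 is NOT LIKE):

DECLARE @rc int, @TraceID int, @maxfilesize bigint
SET @maxfilesize = 100  -- megabytes; assumed value
-- Create the trace; the server appends .trc to the file name
EXEC @rc = sp_trace_create @TraceID output, 0, N'C:\Temp\perf_trace', @maxfilesize
SELECT ReturnCode = @rc, TraceID = @TraceID
-- Add the events and the data columns of interest
EXEC sp_trace_setevent @TraceID, 10, 1, 1    -- RPC:Completed, TextData
EXEC sp_trace_setevent @TraceID, 10, 13, 1   -- RPC:Completed, Duration
EXEC sp_trace_setevent @TraceID, 12, 1, 1    -- SQL:BatchCompleted, TextData
EXEC sp_trace_setevent @TraceID, 12, 13, 1   -- SQL:BatchCompleted, Duration
-- Filter out SQL Profiler's own activity
EXEC sp_trace_setfilter @TraceID, 10, 0, 7, N'SQL Profiler'
-- Start the trace
EXEC sp_trace_setstatus @TraceID, 1

To stop and close the trace later, call sp_trace_setstatus with status 0 and then 2, as shown in the black box example earlier.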
	
	Important
All parameters to these system stored procedures are strongly typed.
	
 
Tracing for Performance Issues
 

Create a SQL Profiler Baseline
You want to create a SQL Profiler baseline just as you create a Performance Monitor log baseline. A baseline is used as a basis for comparison when troubleshooting performance issues. You should create the SQL Profiler Trace baseline and the Performance Monitor log baseline at the same time.
You can create a baseline for a specific application that runs against a particular SQL Server by filtering on the specific application name. If the application only runs on a few computers, you can filter for specific host names or computer names.
You should also try to create a baseline for:
	Peak and off-peak hours of operation.
	Production query or batch command response times.
	Database backup and restore completion times.

 
Using Profiler to Trap Error Messages 
 

Profiler can be used to capture errors and informational messages returned to a client from SQL Server.
The Raiserror command is not traced as an exception unless it is written to the Event Log and/or Errorlog. When Raiserror is not written to the Event Log or Errorlog it appears like any other statement under SQL:StmtCompleted and the SQL:BatchCompleted event classes. When the Raiserror command is used with the WITH LOG option, or if the error is configured to always be written to the Event Log, the Raiserror command is traced in Profiler under the Errorlog and Event Log event classes in the Errors and Warnings category.
Compile time errors are trapped in Profiler in the Exception event class in the Errors and Warnings category. This can be easily demonstrated by passing a statement that you know will fail at compile time, like “Sel * from xxx.”
KB article Q199037 states that trace flag 3602 can be used with trace flag 3605 as SQL Server startup parameters or command line arguments to send all messages that are returned to the client to the Errorlog and Event Log. However, in SQL Server 2000 if these trace flags are used, the actual message is not displayed at the client, that is, the values assigned to each parameter are not visible at the client (which the KB article does not specify).
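As a quick sketch you can run to see the difference (the message text, severity, and parameter value are arbitrary):

RAISERROR ('Informational message %d', 10, 1, 42)   -- traced only within the statement/batch events
RAISERROR ('Logged error %d', 16, 1, 42) WITH LOG   -- also traced under the Errorlog and Event Log event classes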

 
Analyzing SQL Profiler Trace
 

Group by Event Class
Grouping by event class allows you to gain some insight as to how events are distributed. For example, grouping by the event class ‘Attention’ allows you to see if there are many queries being cancelled by users or applications. Excessive attention events might be an indication of poor query performance – users tired of waiting for their queries to return. If you are also seeing excessive blocking in addition to finding large number of attention events, then you might have an application that does not properly handle query cancellation.
Grouping by event class also allows you to easily find events that are typically problematic. For example, if you find “Missing Column Statistics” events, you can assume that the query associated with this event might not have the best query plan, since column statistics were not available as part of its optimization.
Limiting Events
This may allow you to focus your attention on the events that interest you. For example, if you are looking for slow performing queries or stored procedures, you can reduce your scope of analysis by looking at the SQL:StmtCompleted and RPC:Completed events. Once you have isolated those events that interest you, you can then look for the longest running event by scanning for the query with the highest duration column value. You can also use this method to find out which query is being executed over and over again.
 
Group by Data Column
Grouping by data column can help you identify resource intensive queries. You can find out which query takes the longest to run, which query consumes the most CPU time, and which query does the most reads or writes.
When you GROUP BY data columns in the Profiler trace file using the SQL Profiler client application, you do not see the nesting that was available in SQL Server 7.0 Profiler. The rows are only displayed in the order of the GROUP BY data columns. There is no expanding of the grouped events as there was in SQL Server 7.0.
	
	Note
	The operations Group By, Order By, and Aggregate cannot be used against the TextData Column.
	

 
Further Analysis
 

Because it is generally a bad practice to capture events that you do not think you need (in order to minimize the impact of tracing), sometimes you might not have captured the events needed to identify the cause of the problem. In these situations, you need to capture additional information to verify your previous findings.

 
Analyzing Trace Using T-SQL Commands
 

The fn_trace_gettable function can be used to load a single trace file into a table if you have disabled the rollover option. You can load multiple trace files using this function if you have enabled the rollover option.
	
	Note
	If you are using rollover trace files generated by the SQL Profiler application, the function fn_trace_gettable cannot load the rollover files into a table. It can only read rollover files generated by system stored procedures and load them into a table.
	
 	For more information, see Knowledge Base article Q270599 “INF: How to Programmatically Load Trace Files into Tables”.
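For example, here is a hedged sketch that loads a trace file and surfaces the longest-running statements. The file path is an assumption; event class 10 is RPC:Completed and 12 is SQL:BatchCompleted, and TextData is cast because ntext columns cannot be grouped or sorted directly:

SELECT TOP 10 Duration, CPU, Reads, Writes,
       CAST(TextData AS nvarchar(4000)) AS TextData
FROM ::fn_trace_gettable('c:\my_trace.trc', default)
WHERE EventClass IN (10, 12)  -- RPC:Completed, SQL:BatchCompleted
ORDER BY Duration DESC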

 
Known Issues
 

Loading Large Trace File Takes a Long Time
When compared to SQL Server 7.0 Profiler, using Profiler from SQL Server 2000 can take significantly longer if the trace file is large. However, after the file is loaded, you can scroll the display with ease and better speed than was available for Profiler from SQL Server 7.0.
Number of Reads – Profiler versus SET STATISTICS IO ON
Number of READS in Profiler output is very different from READS as seen in the result set after running SET STATISTICS IO ON. Statistics I/O records reads for an individual table. It counts reads of the particular table and has very fine control. The Profiler (trace) reads I/Os out of the PSS data structure. These are basically incremented every time the function bufget is called. This function is called for many things including worktables, page allocation, and other things. Therefore, they have different values.
Profiler Will Trace SQL:StmtStarting When SP:StmtStarting is Specified


 
Tips and Hints
 

When examining trace/file/table contents, you can set bookmarks and navigate between them using the F2 / CTRL+F2 sequence. You can see how many events are in a trace/file/table by looking at the status bar below, and you can examine the total number of Profiler connections displayed in the main window status bar.
Because all parameters to the system stored procedures for SQL Trace are strongly typed, all string parameters should be Unicode (N'...') rather than simple ASCII.
In Filtering, “Include” has precedence over “Exclude.”
Loading the trace file is asynchronous, unlike in SQL Server 7.0.
In the case of large trace files, after post filtering a trace, save the new, concise, and scenario-specific trace to a new file.
For RPC events, the Completed events show values for OUTPUT values going back to the client in the DECLARE/SET syntax.
For the Audit Login event class, the “Integer Data” data column stores the value for Network Packet Size.
The “Text Data” data column shows the Network Library as a comment for the Audit Login and Existing Connection event classes.

 
Other Tips and Issues:
	Bug # 230802: Profiler taking up a lot of space in the specified TMP location.
	Bug # 234655: Profiler should have an option to change the location of temporary files. Trace files opened in Profiler use disk space in the Windows default TMP directory, not memory. This DCR asks for the temporary file location to be changeable at will; postponed to Yukon.
	You can set an option to start tracing automatically upon making a connection using Profiler Options.
	You can start Profiler using command line arguments.
	When saving to a file you can enable the file rollover option so you can back up files written to the server without interrupting the trace.
	From the menu, you can extract and save trace definitions from an existing trace file or table into a new template.
	If Profiler is displaying data on the screen only, you can later choose “Save As” to persist the data to either a local file or a table.
	Screen buffer size is limited by the TMP directory’s disk space, not by available free memory. Two temporary files are created; this way, Profiler can open a file of any practical size without consuming any real memory. Disk space is the only requirement. Both files are necessary: the first contains the fixed-size columns (int4, int8, datetime), so all of its rows have a fixed length; the second contains all variable-size columns (text, binary data).
	Tracing stops when you exit the SQL Profiler client application; to create a trace that keeps running even when Profiler is not, use sp_trace_create (see the sketch after this list).
	You can specify to stop tracing on an error, but this is not possible for C2 audit traces.
	When a trace file is saved to a table, it is not possible to Group By “textdata” column because it is of type ntext.
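A minimal sketch of a server-side trace created with sp_trace_create, as mentioned above, follows; the file name is illustrative, and the event and column IDs are the documented values for SQL:BatchCompleted (12), TextData (1), and Duration (13):

-- Create a trace with file rollover (option 2) that keeps running
-- even after the Profiler client exits.
DECLARE @TraceID int
DECLARE @maxfilesize bigint
DECLARE @on bit
SET @maxfilesize = 50
SET @on = 1
EXEC sp_trace_create @TraceID OUTPUT, 2, N'C:\Traces\ServerSide', @maxfilesize

-- Capture TextData and Duration for SQL:BatchCompleted.
EXEC sp_trace_setevent @TraceID, 12, 1, @on
EXEC sp_trace_setevent @TraceID, 12, 13, @on

-- Start the trace (status 1 = start, 0 = stop, 2 = close and delete).
EXEC sp_trace_setstatus @TraceID, 1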

 

 
Lesson 4: Using Index Tuning Wizard
 

What You Will Learn
After completing this lesson, you will be able to:
	Describe the Index Tuning Wizard’s architecture.
	List the command arguments for the ITWiz executable.
	Discuss considerations when using the Index Tuning Wizard.

Recommended Reading 
	“Index Tuning Wizard for SQL Server 2000” white paper on the Microsoft Developer’s Network (MSDN) at the following URL: http://msdn.microsoft.com/library/default.asp?URL=/library/techart/itwforsql.htm 

 
Index Tuning Wizard Overview
 

The Index Tuning Wizard allows you to select and create an optimal set of indexes and statistics for a SQL Server 2000 database without requiring an expert understanding of the structure of the database, the workload, or the internals of SQL Server.
A workload consists of a SQL script or a SQL Profiler trace saved to a file or table containing SQL batch or remote procedure call (RPC) event classes and at least the Event Class and Text data columns.
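For example, a workload can be as simple as a script of representative statements saved as a .sql file; the table names below are only illustrative:

-- workload.sql (hypothetical contents)
SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE CustomerID = 'ALFKI'

SELECT ProductID, SUM(Quantity) AS TotalQty
FROM dbo.[Order Details]
GROUP BY ProductID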
A recommendation consists of SQL statements that can be executed to create new, more effective indexes and, if desired, to drop existing indexes that are ineffective. Indexed views can be recommended on platforms that support their use.
 	For more information, see the Microsoft Developer’s Network (MSDN) white paper “Index Tuning Wizard for SQL Server 2000.”


 
Index Tuning Architecture
 

The Index Tuning Wizard takes as input a workload on a specified database. The tool iterates through several alternative sets of indexes and indexed views called configurations. It then chooses the configuration that results in the lowest cost for the given workload.
Evaluating a configuration by materializing it physically is not practical because this approach requires adding and dropping indexes, which can be resource-intensive and affect operational queries on the system. Therefore, the Index Tuning Wizard must simulate a configuration without materializing it.
SQL Server 2000 has been extended to support the ability to simulate a configuration and estimate the cost of evaluating a query for a simulated configuration.

 
ITWiz.EXE
 

Beginning with SQL Server 2000, the Index Tuning Wizard can be run from the command line rather than interactively. This allows index analysis to be automatically performed during off-peak hours.
ITWiz command line options:
-D database_name {-S server_name}
{-U user_name -P password | -E}
{-i workload_file | -t workload_table_name}
[-f tuning_feature_set (0, 1, 2)]
[-K keep_existing_indexes (0, 1)] 
[-M recommendation_quality (0, 1, 2)] 
[-B storage_bound (in MB)]
[-n number_of_queries (default 200)] 
[-C max_multicolumn_width (default 16)] 
[-T table_list_file]
[-o script_file_name] 
[-m minimum_improvement (x%)] 
[-F overwrite output file] 
[-v verbose]
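For example, an unattended run might look like the following; the server, database, and file names are illustrative:

REM Analyze a captured Profiler workload against the PierrotImports
REM database using Windows authentication, and write the index
REM recommendations to a script file.
itwiz -D PierrotImports -S MYSERVER -E -i C:\Traces\workload.trc -o C:\Temp\itw_recommendations.sql -v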

 	For detailed discussion of the command line parameters of ITWiz.exe, see SQL Server 2000 Books Online.

 
Tips and Hints
 

Temporary Objects and ITWiz
Because temporary objects do not have persisted storage, by design the Index Tuning Wizard cannot effectively sample the data of temporary objects. Therefore, stored procedures that rely on temporary objects cannot be effectively tuned by the wizard.
Perform Against Test Version
The Index Tuning Wizard can consume significant CPU and memory resources during analysis. Tuning should therefore be performed against a test server loaded with the production data and schema, rather than against the actual production server.
Hints and ITWiz
The Index Tuning Wizard includes any index hint or query hint in the final recommendation, even if the index is not optimal for the table. Indexes on other tables referenced in the query may be proposed and recommended; however, all indexes specified as hints are always part of the final recommendation. Hints can prevent the Index Tuning Wizard from choosing a better execution plan. Consider removing any index hint from queries before analyzing the workload.
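For example, a query such as the following (hypothetical table and index names) forces an index through a table hint, so that index will always appear in the wizard's final recommendation:

-- The WITH (INDEX(...)) hint pins IX_Orders_OrderDate into the
-- recommendation, even if a better index could be proposed.
SELECT OrderID, CustomerID
FROM dbo.Orders WITH (INDEX(IX_Orders_OrderDate))
WHERE OrderDate >= '20000101'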
Triggers and ITWiz
If the trace contains the SQL:StmtCompleted event class, then any triggers that fire are also logged in the trace file. However, such events are never passed to the Index Tuning Wizard by the Profiler code (currently only SQL:BatchCompleted events are passed in).
 
Other Considerations
The Index Tuning Wizard may not make index suggestions if: 
	There is not enough data in the tables being sampled.
	The suggested indexes do not offer enough projected improvement in query performance over existing indexes.

In addition, the Index Tuning Wizard does not recommend indexes on: 
	Tables referenced by cross-database queries that do not exist in the currently selected database.
	System tables.
	PRIMARY KEY constraints and unique indexes.

The recommendations made by the Index Tuning Wizard are based only on the information contained in the workload file; if the workload is not a representative sample of all the work performed on the database, the recommendations may have a detrimental effect on other operations.
For example, if the workload consists solely of the SELECT statements needed for several end-of-month reports, the recommended indexes do not take the daily INSERT, UPDATE, and DELETE operations into consideration, and the index maintenance those operations incur may be too expensive a price to pay for faster end-of-month reporting. Be especially careful in this regard when allowing ITWiz to recommend dropping indexes.
 
Lab 2.1 Index Tuning Wizard
 

Lab 2.1 Index Tuning Wizard
Exercise 1 – Using Index Tuning Wizard
Students will use the Index Tuning Wizard to analyze the PierrotImports database to see whether additional indexes need to be created to support a specific query as well as a captured workload (SQL Profiler trace) file.
 
Overview
What You Will Learn
After completing this lab, you will be able to:
	Use the Index Tuning Wizard to check if indexes need to be created to support queries and workloads.

Before You Begin
Prerequisites
To complete this lab, you need the following:
	Windows 2000 Server
	C:\LABS\M2\LAB1\PROBLEMQUERY.SQL file 
	SQL Server 2000 with “PierrotImports” database attached.
	To attach the PierrotImports database
1.	Run PierrotImports.exe in the C:\Labs\Backup directory.
2.	Unzip the files to C:\Labs\M2\LAB1 and attach the database. The self-extracting file should attach the database for you automatically.

Estimated time to complete this lab:  20 minutes

 
Exercise 1 
Using the Index Tuning Wizard
In this exercise, you will use the Index Tuning Wizard to create indexes that improve the performance of a query.
Goal
Use the Index Tuning Wizard to analyze query and workload and review the recommendations presented.
Scenario
As a novice DBA, you have had some difficulty determining which indexes to create on the Orders table to improve query performance. The Index Tuning Wizard allows you to select and create an optimal set of indexes and statistics for the database without requiring an expert understanding of its structure or workload.