Monitoring the Default Trace

Did you know there is a trace enabled and always running on every SQL Server 2005/2008 instance by default?!

“Default trace provides troubleshooting assistance to database administrators by ensuring that they have the log data necessary to diagnose problems the first time they occur.” – SQL Server 2008 R2 Books Online

It is easy to forget the Default Trace is there: it runs with no effort on your part and has almost no visibility in the SSMS UI. Nonetheless, the Default Trace is loaded with information that can help you, and I highly recommend getting to know it. It does not produce large trace output files, yet it provides valuable insight – mostly about changes that have already happened – that can be referenced when an issue arises. That gives you a much better chance to catch a problem the first time it happens, even when you aren’t looking.

What is in the Default Trace?

The Default Trace targets the kinds of changes on your database server that have been common problems for folks in the past. There is not much in the way of query execution performance data in here. The events collected by the Default Trace are revealed by this query:

SELECT DISTINCT evt.name
FROM sys.traces trc
CROSS APPLY fn_trace_geteventinfo(trc.id) info
INNER JOIN sys.trace_events evt
ON info.eventid = evt.trace_event_id
WHERE trc.is_default = 1;

My active SQL 2008 R2 Developer Edition shows 17 Audit events and 18 other trace events written to this log. That is nowhere near the complete server change monitoring coverage provided by SQLClue, and it is only a subset of the Audit events described in the SQLAudit_Application_Event_Log.sql script at the end of this article – the ones I found could be collected without too much output volume. Nonetheless, with the Default Trace a useful snapshot of the most recent change history is always available: enough to identify application layer changes, serious index problems, auto-grows, full-text, mirroring, DBCC activity and error activity all in one spot. A mighty handy resource for any live environment:

Audit Add DB User Event
Audit Add Login to Server Role Event
Audit Add Member to DB Role Event
Audit Add Role Event
Audit Addlogin Event
Audit Backup/Restore Event
Audit Change Audit Event
Audit Change Database Owner
Audit Database Scope GDR Event
Audit DBCC Event
Audit Login Change Property Event
Audit Login Failed
Audit Login GDR Event
Audit Schema Object GDR Event
Audit Schema Object Take Ownership Event
Audit Server Alter Trace Event
Audit Server Starts And Stops
Data File Auto Grow
Data File Auto Shrink
Database Mirroring State Change
ErrorLog
FT:Crawl Aborted
FT:Crawl Started
FT:Crawl Stopped
Hash Warning
Log File Auto Grow
Log File Auto Shrink
Missing Column Statistics
Missing Join Predicate
Object:Altered
Object:Created
Object:Deleted
Plan Guide Unsuccessful
Server Memory Change
Sort Warnings

January 2012 update: My SQL Server 2012 RC0 instance is similarly configured, but with only 16 trace events – FT:Crawl Aborted is gone. As Amit correctly points out in his great comment (thank you, thank you, thank you Amit! And Mark! The only comments to any of my posts. Much appreciated!), changes on the server captured by the default trace also surface in the Schema Change History standard report, but keep in mind that that report includes only the three Object trace event classes and excludes tempdb. There are a couple of other redundancies between the report in this post and the standard reports, but this one gives you a bird’s-eye view of everything the default trace collects, and that can be helpful in a crisis. Finally, there is a ReportViewer 2010 Beta compatible version of the report in the SQLClue open source project on github – the readme.txt in the SQLClue/ReportViewerReports folder explains how to convert from .rdl to .rdlc; just reverse that process to make an .rdl file out of an .rdlc file that happens to include data set information. I am currently adding this report to the SQLClue UI and hoping they don’t take too long to get the 2010 ReportViewer control out of beta.
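If all you want is what that standard report shows, a quick query against the default trace gets you the same three object-change event classes, without the tempdb exclusion. This is only a minimal sketch, using the event class IDs for Object:Created (46), Object:Deleted (47) and Object:Altered (164):

SELECT evt.name AS EventClass
     , trc.DatabaseName
     , trc.ObjectName
     , trc.LoginName
     , trc.ApplicationName
     , trc.StartTime
FROM sys.traces t
CROSS APPLY sys.fn_trace_gettable(t.[path], DEFAULT) AS trc
INNER JOIN sys.trace_events evt
ON trc.EventClass = evt.trace_event_id
WHERE t.is_default = 1
AND trc.EventClass IN (46, 47, 164) -- Object:Created, Object:Deleted, Object:Altered
ORDER BY trc.StartTime DESC;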

Any principal with the ALTER TRACE server permission (or CONTROL SERVER – the container of ALTER TRACE) can open traces in SQL Profiler. By default this includes all members of the sysadmin fixed server role on the SQL Server instance. See the Books Online topic “Profiler, permissions” for more details. To verify your access, try the fn_my_permissions() system function:

SELECT *
FROM fn_my_permissions(NULL, 'SERVER')
WHERE permission_name = 'ALTER TRACE';

or the inline system function HAS_PERMS_BY_NAME():

SELECT HAS_PERMS_BY_NAME(NULL, NULL, 'ALTER TRACE');

There is almost always some interesting stuff in the Default Trace.

In fact, if your default trace files are large and you have never looked inside, you could be in for quite a surprise. The root of your problems might have been right there in the Default Trace all along. Check out Ajmer Dhariwal’s tips for making use of Default Trace data and Amit Banerjee’s summary of how CSS works with Default Trace data.
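Before opening any files, sys.traces will tell you how much the default trace has been collecting. A quick sanity check along these lines:

SELECT t.[path]
     , t.max_size            -- rollover file size in MB (20 by default)
     , t.max_files           -- rollover files retained (5 by default)
     , t.event_count
     , t.dropped_event_count
     , t.start_time
     , t.last_event_time
FROM sys.traces t
WHERE t.is_default = 1;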

Output from the Default Trace is written to the same folder as the SQL Server ErrorLog. If you do not know this path, it is written near the top of the ErrorLog each time SQL Server starts. The path can also be found in the Startup Parameters on the “Advanced” tab of the SQL Server instance’s Properties dialog in SQL Server Configuration Manager, by querying the [path] column of the sys.traces catalog view, or by using the fn_trace_getinfo() system function.

SELECT path FROM sys.traces WHERE is_default = 1;
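Or, using fn_trace_getinfo() as mentioned above – property 2 in the output is the trace file path:

SELECT info.traceid
     , info.property         -- property 2 = trace file path
     , info.value
FROM sys.fn_trace_getinfo(DEFAULT) AS info;  -- DEFAULT returns rows for every running trace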

Almost always the default trace will be trace_id = 1 in the sys.traces view, but that is not guaranteed. Better to filter on is_default, especially when automating the path fetch from the view.

To check the status of the default trace on any 2005 or better instance run:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'default trace enabled';

Obviously, the trace can be disabled. For most scenarios, though, what needs to change is not whether the Default Trace is active but our habit of forgetting that the resource is there at all.
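If someone has switched it off, turning the Default Trace back on is just the same advanced option flipped to 1 – a dynamic setting, so no restart is needed:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'default trace enabled', 1;
RECONFIGURE;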

The Default Trace uses the roll-over option for trace output to limit file growth to 20MB. Also, each time the SQL Server service starts a new Default Trace file is added to the sequence. This results in a sequentially numbered series of Default Trace output files that SQL Server maintains in a manner similar to the database engine’s ErrorLog and SQL Agent log files. If you restart the service frequently the available files may all be small; if you never restart, each file could contain 20MB of data. Likewise, the number of logged events will vary from one scenario to the next just by the nature of the different scenarios.

The last few files are kept; the oldest file ages out once a new file is created. By default the last 5 trace output files remain available. Unlike the ErrorLog and SQL Agent log, there is no mechanism in SSMS to modify the number of trace files to keep. (Vote for a more configurable feature at connect.microsoft.com.)
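To see how far back the retained files actually reach, the same trick the report query below uses works in a couple of lines: strip the rollover file name from the path in sys.traces and hand the base log.trc name to fn_trace_gettable() with DEFAULT so it walks every log_N.trc file left in the folder. A minimal sketch:

DECLARE @base nvarchar(260);

-- build "<trace folder>\log.trc" from the current rollover file path
SELECT @base = LEFT(t.[path], LEN(t.[path]) - CHARINDEX(N'\', REVERSE(t.[path]))) + N'\log.trc'
FROM sys.traces t
WHERE t.is_default = 1;

SELECT COUNT(*)           AS EventsOnHand
     , MIN(trc.StartTime) AS OldestEvent
     , MAX(trc.StartTime) AS NewestEvent
FROM sys.fn_trace_gettable(@base, DEFAULT) AS trc;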

The currently retained trace files can then be queried in full, decoding categories, subclasses, object types and the Permissions bitmask along the way:

SELECT cat.name AS [EventCategory]
     , evt.name AS [EventClass]
     , ISNULL( sub.subclass_name, 'n/a') AS [EventSubClass]
     , COALESCE( trc.DatabaseName, DB_NAME(trc.DatabaseID), 'n/a' ) AS [Database]
     , COALESCE( SUSER_SNAME(trc.LoginSid), trc.NtUserName, 'n/a' ) AS [Login]
     , ISNULL( trc.ApplicationName, 'n/a') AS Application
     , trc.StartTime
     , ISNULL( trc.HostName, 'n/a') AS Host
     , ISNULL( type.ObjectType, '' ) AS ObjType
     , ISNULL( 'Severity: ' + CAST( trc.Severity AS NCHAR(4) ), '' ) 
     + ISNULL( 'State: ' + CAST( trc.State AS NCHAR(4) ), '' ) 
     + ISNULL( 'Line: ' + CAST( trc.LineNumber AS NVARCHAR(11) ), '' ) 
     + ISNULL( ' Error: ' + CAST( trc.Error AS NCHAR(11) ) + CRLF, '' )
     + ISNULL( 'TextData: ' + CAST( trc.TextData AS NVARCHAR(MAX) ) + CRLF, '')
     + ISNULL( 'Parent Object: ' + trc.ParentName + CRLF, '' )
     + ISNULL( CASE WHEN trc.EventClass = 55 
                    THEN 'Plan Node: ' ELSE 'Object Id: ' END
               + CAST( trc.ObjectId AS NVARCHAR(11) ) + CRLF, '')
     + ISNULL( 'Object: ' + ISNULL( OBJECT_SCHEMA_NAME( trc.ObjectId, trc.DatabaseId ) + '.', '' )
     + ISNULL(trc.ObjectName, OBJECT_NAME( trc.ObjectId, trc.DatabaseId ) ) + CRLF, '')
     + ISNULL( 'IndexId: ' + CAST( trc.IndexId AS NCHAR(11) ) + BLANK + CRLF, '' )
     + CASE WHEN trc.Permissions IS NOT NULL
            THEN 'Permissions Checked: ' 
                + CASE WHEN trc.Permissions & 1 = 1 
                       THEN TAB + N'SELECT ALL' + CRLF 
                       ELSE BLANK END 
                + CASE WHEN trc.Permissions & 2 = 2 
                       THEN TAB + N'UPDATE ALL'  + CRLF 
                       ELSE BLANK END 
                + CASE WHEN trc.Permissions & 4 = 4 
                       THEN TAB + N'REFERENCES ALL' + CRLF 
                       ELSE BLANK END 
                + CASE WHEN trc.Permissions & 8 = 8 
                       THEN TAB + N'INSERT' + CRLF 
                       ELSE BLANK END 
                + CASE WHEN trc.Permissions & 16 = 16 
                       THEN TAB + N'DELETE'  + CRLF 
                       ELSE BLANK END 
                + CASE WHEN trc.Permissions & 32 = 32 
                       THEN TAB + N'EXECUTE'  + CRLF 
                       ELSE BLANK END 
                + CASE WHEN trc.Permissions & 4096 = 4096 
                       THEN TAB + N'SELECT ANY' + CRLF 
                       ELSE BLANK END 
                + CASE WHEN trc.Permissions & 8192 = 8192 
                       THEN TAB + N'UPDATE ANY' + CRLF 
                       ELSE BLANK END 
                + CASE WHEN trc.Permissions & 16384 = 16384 
                       THEN TAB + N'REFERENCES ANY' + CRLF 
                       ELSE BLANK END 
            ELSE ''
            END 
+ ISNULL( 'SPID: ' + CAST(trc.SPID AS NCHAR(10)) + CRLF, '')
+ ISNULL( 'PID: ' + CAST(trc.ClientProcessId AS NCHAR(10)) + CRLF, '') AS [ExtendedInfo]
FROM sys.traces t
CROSS APPLY sys.fn_trace_gettable( LEFT( t.[path]
                                       , LEN(t.[path]) - PATINDEX( '%\%'
                                       , REVERSE(t.[path]) ) ) + '\log.trc'
                                 , DEFAULT ) As trc 
INNER JOIN sys.trace_events evt
ON trc.EventClass = evt.trace_event_id
CROSS JOIN ( SELECT CHAR(13) + CHAR(10) AS [CRLF]
            , CHAR(32) AS [BLANK]
            , CHAR(9) AS [TAB] ) AS sym
INNER JOIN sys.trace_categories cat
ON evt.category_id = cat.category_id
LEFT JOIN sys.trace_subclass_values AS sub
ON trc.EventClass = sub.trace_event_id
AND trc.EventSubClass = sub.subclass_value 
LEFT JOIN ( SELECT DISTINCT sub.subclass_name AS ObjectType
                          , sub.subclass_value 
            FROM sys.trace_events AS evt 
            JOIN sys.trace_subclass_values AS sub 
            ON evt.trace_event_id = sub.trace_event_id 
            JOIN sys.trace_columns AS col 
            ON sub.trace_column_id = col.trace_column_id 
            WHERE col.[name] = 'ObjectType' ) AS type
ON trc.ObjectType = type.subclass_value
WHERE t.is_default = 1
AND trc.StartTime BETWEEN GETDATE() - 1 AND GETDATE()
ORDER BY StartTime;

I use this query in a report that makes looking at the default trace quick and easy. The data sources are embedded queries of sys.fn_trace_gettable(), so nothing else is needed to make the report run: no user database objects like stored procedures – not even a user database – are required or used by the report. The report is coded so it will work in SSMS 2008, though I prefer to run it from the Report Builder Preview pane. That means I need BIDS 2008 or Visual Studio 2008 installed wherever I want to see the report. The advantage is flexibility; more on that below.

[Screenshot: DefaultTrace report]

Or, with a time-line chart and graphical column headers in the tablix, built with Report Builder 3 and usable in SSMS 2012:

[Screenshot: DefaultTrace11 report with time-line chart]

The report is intended to provide an overview of recent entries at a glance. The row of data bars embedded in the Tablix provides a visual indication of the number and distribution of events at each level in the Category->Class->Subclass trace event model. Drilldown is optional but available. In the screen shot, the Security Audit category is where most of the rows are in this sample: the red bar indicates 1224 events in the specified time interval for that category, and the blue bar next to it in the Class column indicates that all of those events come from a single class in that category. Hover on a bar in the [Event Class] or [Event Subclass] columns for a quick view of the class and/or subclass depicted by that bar.

Upon expanding the Database category, it becomes obvious that not all classes have subclasses; the placeholder “<empty>” is shown where there is none. Also obvious is that tempdb seems to need to grow often – definitely worth digging into. Both a larger tempdb log size at startup and a verification that there is nothing whacky about tempdb usage seem advisable.

[Screenshot: DefaultTraceExpanded – Database category expanded]

The use of Visual Studio (VS) or Business Intelligence Development Studio (BIDS) as a monitoring interface is not adequately leveraged in the wild. Executing this report from Report Builder’s Preview window in BIDS (or Visual Studio 2008), in lieu of deploying it to an application Report Server, is well suited for administrative reporting – especially when only one, occasionally two, person(s) will ever use the report. This is not a suggestion to support or allow ad hoc report development in your production environment, nor a premise for skipping the testing of any query before use in production, nor am I advocating that you run reports on the console of production database servers. Visual Studio can crash unexpectedly and may have other undesirable side effects on the Windows host that could require a reboot; plan accordingly. The tool only needs to be somewhere it can see the live database servers, not directly on any production server.

A host system other than a live database server is advised for running these administrative reports from BIDS, or – if BIDS is not an option – from a Report Server deployment. The report development environment makes it simple to change the target SQL Server data source or tweak the report layout as might be helpful. With a Report Server, the report is easily modified to run against one of a pre-selected list of servers, or against a server chosen from a list fetched at run time. Books Online has pretty good information on dynamic data sources for the Report Server method.

Within VS/BIDS, changing the Source SQL Server instance in Report Builder is quite easy.

      1. Locate the Report Data panel.
      2. Expand the Data Sources folder.
      3. Open the properties dialog for 'DataSource1'.
      4. Replace the SQL instance (shown as ".\SQLEXPRESS" below) with the desired target SQL Server instance.
      5. Hit the OK button to save the changes.

[Screenshot: DefaultTraceDataSource – Data Source properties dialog]

Before displaying a 1200 row trace output for browsing, it may be better to limit the number of rows further if possible. To limit the rows in the Tablix, pop open the “Filter” expander, which provides click-through filtering to show only one Database, Application, Host and/or Login. Shown below, the cursor is hovering over “.Net SQLClient Data Provider”; the tooltip indicates 1234 events – still too many to browse.

[Screenshot: DefaultTraceFilters – filter expander]

I could select that filter anyway and then look for a database filter to go with it. However, without doing any of that I can see that the Security Audit events are all of one subclass and land mostly in a single bar of the logarithmically scaled filter charts. Instead of browsing those 1200 rows in the Tablix, in this case it may be more productive to query the system function directly from SSMS, as sketched below.
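Something like this narrows the result to one database and application before browsing. The database name, application name and four hour window here are only placeholders to swap for whatever the report filters and tooltips suggest:

SELECT evt.name AS EventClass
     , trc.StartTime
     , trc.HostName
     , trc.LoginName
     , trc.TextData
FROM sys.traces t
CROSS APPLY sys.fn_trace_gettable(t.[path], DEFAULT) AS trc
INNER JOIN sys.trace_events evt
ON trc.EventClass = evt.trace_event_id
WHERE t.is_default = 1
AND trc.DatabaseName = N'tempdb'                          -- placeholder database
AND trc.ApplicationName = N'.Net SqlClient Data Provider' -- placeholder application
AND trc.StartTime > DATEADD(HOUR, -4, GETDATE())          -- placeholder time window
ORDER BY trc.StartTime DESC;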

Get the report’s .rdl here. There are a few versions in the repository for different releases of SSMS.

Bill

SQLAudit_Application_Event_Log.sql


USE [master];

GO

IF NOT EXISTS (SELECT * FROM sys.server_audits 
               WHERE [name] = 'ApplicationEventLogAudits')
  CREATE SERVER AUDIT [ApplicationEventLogAudits]
  TO APPLICATION_LOG
  WITH ( QUEUE_DELAY = 1000
       , ON_FAILURE = CONTINUE );

GO

IF NOT EXISTS (SELECT * FROM sys.server_audit_specifications 
               WHERE [name] = 'ServerApplicationEventLogAudits')
  CREATE SERVER AUDIT SPECIFICATION [ServerApplicationEventLogAudits]
  FOR SERVER AUDIT [ApplicationEventLogAudits]
  ADD (AUDIT_CHANGE_GROUP),
  ADD (BACKUP_RESTORE_GROUP),
  ADD (BROKER_LOGIN_GROUP),
  ADD (DATABASE_CHANGE_GROUP),
  --ADD (DATABASE_OPERATION_GROUP), -- too much noise
  ADD (DATABASE_MIRRORING_LOGIN_GROUP),
  --ADD (DATABASE_OBJECT_ACCESS_GROUP), -- too much noise
  --ADD (DATABASE_OBJECT_CHANGE_GROUP), -- too much noise
  ADD (DATABASE_OBJECT_OWNERSHIP_CHANGE_GROUP),
  ADD (DATABASE_OBJECT_PERMISSION_CHANGE_GROUP),
  ADD (DATABASE_OWNERSHIP_CHANGE_GROUP),
  ADD (DATABASE_PRINCIPAL_CHANGE_GROUP),
  --ADD (DATABASE_PRINCIPAL_IMPERSONATION_GROUP), -- too much noise
  ADD (DATABASE_PERMISSION_CHANGE_GROUP),
  ADD (DATABASE_ROLE_MEMBER_CHANGE_GROUP),
  --ADD (DBCC_GROUP), -- too much noise
  ADD (FAILED_LOGIN_GROUP),
  ADD (LOGIN_CHANGE_PASSWORD_GROUP),
  --ADD (LOGOUT_GROUP), -- too much noise
  --ADD (SCHEMA_OBJECT_ACCESS_GROUP), -- too much noise
  --ADD (SCHEMA_OBJECT_CHANGE_GROUP), -- too much noise
  ADD (SCHEMA_OBJECT_OWNERSHIP_CHANGE_GROUP),
  ADD (SCHEMA_OBJECT_PERMISSION_CHANGE_GROUP),
  ADD (SERVER_OBJECT_CHANGE_GROUP),
  ADD (SERVER_OBJECT_OWNERSHIP_CHANGE_GROUP),
  ADD (SERVER_OBJECT_PERMISSION_CHANGE_GROUP),
  --ADD (SERVER_OPERATION_GROUP), -- too much noise
  ADD (SERVER_PERMISSION_CHANGE_GROUP),
  ADD (SERVER_PRINCIPAL_CHANGE_GROUP), 
  --ADD (SERVER_PRINCIPAL_IMPERSONATION_GROUP), -- too much noise
  ADD (SERVER_ROLE_MEMBER_CHANGE_GROUP),
  ADD (SERVER_STATE_CHANGE_GROUP),
  --ADD (SUCCESSFUL_LOGIN_GROUP), -- too much noise
  ADD (TRACE_CHANGE_GROUP)
  WITH (STATE = OFF);

GO

ALTER SERVER AUDIT SPECIFICATION [ServerApplicationEventLogAudits]
WITH (STATE = ON);

GO

ALTER SERVER AUDIT [ApplicationEventLogAudits]
WITH (STATE = ON);

GO
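Once both ALTER statements have run, a quick look at sys.server_audits joined to sys.dm_server_audit_status confirms the audit created above is on and healthy:

SELECT a.name
     , a.is_state_enabled
     , s.status_desc
     , s.status_time
FROM sys.server_audits AS a
INNER JOIN sys.dm_server_audit_status AS s
ON a.audit_id = s.audit_id
WHERE a.name = N'ApplicationEventLogAudits';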


2 Responses to Monitoring the Default Trace

  1. Good example of how you can use RDL files to make reporting easier for data collected using the default trace. This can be deployed to the SSMS report folder. The Schema Change History standard report does exactly that to report schema changes on the instance.

  2. mssqldude says:

    Nice job! I am using your base RDL on my instances and it works great. Thanks for sharing!
