Transaction Log Overview (Part Two)

One of the most performance-intensive operations in SQL Server is an auto-grow event of the online transaction log file.

Auto Growth events occur when a data or log file runs out of space and has to expand. The amount by which the file grows depends on the file growth option configured for the database. Recently, in one of our projects, we observed the Auto Growth event fire four times for one of the data files, along with a noticeable delay in response time.

What's the problem with Auto Growth?

Auto Growth cannot be stopped completely; however, careful configuration of the growth option can control the number of Auto Growth events. The problem with Auto Growth is that, during the growth event, activity in the database is halted until the growth completes. The larger the growth increment, the longer it takes, which can add a significant delay to the request being processed and lead to other performance issues. In addition, the space being allocated may not be contiguous, resulting in physical fragmentation of the database file on disk. The more Auto Growth events there are, the more physical fragmentation you will see.

As a rule, pre-size the transaction log file large enough that auto-grow events rarely occur.

Increase the FILEGROWTH parameter to a reasonably large, fixed value. The right value depends on workload analysis.
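Before changing anything, it helps to see what the current settings are. A minimal sketch using sys.database_files (run it in the database you are interested in; size and growth are stored in 8KB pages):

SELECT  name,
        size * 8 / 1024 AS CurrentSizeMB,
        CASE WHEN is_percent_growth = 1
             THEN CAST(growth AS varchar(10)) + ' %'
             ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
        END AS GrowthSetting,
        max_size
FROM    sys.database_files
WHERE   type_desc = 'LOG';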

We’ll start with 8 Steps to Optimizing your Transaction Log – not necessarily in any order. It’s best to review all of these to make sure you have a comprehensive view of how to improve both performance and recovery of your transaction logs!

1) Try to dedicate the transaction log portion of your database to its own physical disk. In a high-volume OLTP system, isolating the transaction log can allow the disk head to be ready for the next write by not having other files contend for the physical disk resource. If your database already exists, the best way to "move" your transaction log is to detach your database (sp_detach_db) and then reattach it (sp_attach_db). There's a good KB article on this titled: Moving SQL Server databases to a new location with Detach/Attach.
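For reference, the detach/attach sequence might look like the following – a minimal sketch, assuming a hypothetical database name and file paths (the .ldf file is moved at the operating-system level between the two steps):

EXEC sp_detach_db @dbname = N'my_database';

-- move my_database_log.ldf to the dedicated log disk, then reattach
EXEC sp_attach_db @dbname    = N'my_database',
                  @filename1 = N'D:\Data\my_database.mdf',
                  @filename2 = N'L:\Log\my_database_log.ldf';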

2) Defrag the disk(s) on which your transaction logs reside. This will get rid of external fragmentation of the transaction log – better known as disk file fragmentation. This will require that your server be taken offline, but if your files have had a lot of autogrowth and/or they reside on a disk with a lot of other files that have been modified, then all of your files (including the transaction log file) are likely to be interleaved and fragmented. The good news is that this is a one-time operation (unless the files continue to grow and shrink excessively – but we'll fix that as well). The bad news is that you'll need to shut down SQL Server and then use some form of system tool – Microsoft or third party – to defrag. On Windows XP and Windows Server 2003 machines, the disk defragmenter tool can be launched by running: dfrg.msc.

3) Create only ONE transaction log file. Even though you can create multiple transaction log files, you only need one… SQL Server does NOT "stripe" across multiple transaction log files; instead, it uses them sequentially. While this might sound bad – it's not. If you want to get better performance out of the transaction log, place it on faster disks and/or a more performant disk (RAID) configuration. You could place the transaction log on a RAID 0 array, but you'll likely want fault tolerance as well as performance. Isolation, as per number 1, is a good start, but if you also need increased capacity then using hardware-based RAID to combine disks for both better performance and redundancy is your next choice. See step 4.
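If you are not sure whether any of your databases already have more than one log file, a quick check against sys.master_files – a minimal sketch:

-- Databases that currently have more than one transaction log file
SELECT  d.name   AS DatabaseName,
        COUNT(*) AS LogFileCount
FROM    sys.master_files AS mf
        INNER JOIN sys.databases AS d ON d.database_id = mf.database_id
WHERE   mf.type_desc = 'LOG'
GROUP BY d.name
HAVING  COUNT(*) > 1;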

4) Not only should you try to isolate the transaction log to its own physical disk but you should make sure that the logical/physical disk configuration is as efficient as possible. Try to use an isolated RAID 1 mirroring set if you don’t need significant capacity. If you need a greater capacity OR you want better performance, consider a combination of RAID 0 and RAID 1 (either RAID 0 + 1 or RAID 1 + 0). While RAID 0 + 1 can often offer better performance, RAID 1 + 0 offers better reliability.


5) Don't get caught up in nothing but transaction log speed; you'll also want to make sure that your transaction log is always available, as this can help you in times of disaster. Even if the data is damaged, if the transaction log is available and you have a series of backups up to and including the last transaction log backup, then you can make a final backup of the transaction log that will represent all of the changes since your last transaction log backup. If this backup is possible (it's called backing up the "tail" of the log), then you can achieve up-to-the-minute recovery. This is only possible when the log is available. By placing the log on mirrored disks you can increase your chances of recovering data and minimize data loss!
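A tail-of-the-log backup might look like the following – a minimal sketch, assuming a hypothetical database name and backup path; NO_TRUNCATE is what allows the log to be backed up even when the data files are damaged:

BACKUP LOG [my_database]
TO DISK = N'M:\my_database_tail.trn'
WITH NO_TRUNCATE, INIT;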

6) Create transaction log files with a reasonable initial size. When you create a database, it's ideal to PRE-ALLOCATE both your data files and your transaction log file. A little bit of capacity planning goes a long way… Now, if you think you've got absolutely no idea how large a transaction log you're going to need, here are the things that have the greatest impact:

  • Type of activity – transaction processing or decision support
  • Frequency of that activity – the more frequent the changes, the faster the transaction log will grow
  • Recovery Model – the recovery model of the database
  • Frequency of transaction log backups
  • Whether or not replication is used (since the log reader relies on the transaction log)

I wish I could give you a rough idea on sizing, but if you look only at database size and none of these other factors, you could end up with a transaction log that's seriously oversized or seriously undersized. I've seen recommendations of 10-25% of the size of the data and you can use that, but I would also add a bit of common sense. A larger database with very frequent transaction log backups may not need a transaction log that's even 1% of the size of the data… The best way is to set up your development environment similar to that of your production environment (including backup jobs) and then see how the transaction log grows. If you have a lot of autogrowth (because your guess was wrong), you can later clean up the fragmentation that has occurred and get back to a reasonable, intact, and optimal transaction log.
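Whatever sizes your analysis points to, pre-allocating them when the database is created might look like this – a minimal sketch, assuming hypothetical names, paths, and sizes:

-- Pre-allocate both the data file and the log file at creation time
CREATE DATABASE [my_database]
ON PRIMARY
    ( NAME = my_database_data, FILENAME = N'D:\Data\my_database.mdf',
      SIZE = 10240MB, FILEGROWTH = 1024MB )
LOG ON
    ( NAME = my_database_log, FILENAME = N'L:\Log\my_database_log.ldf',
      SIZE = 2048MB, FILEGROWTH = 512MB );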

7) Don't let autogrowth get out of control. As important as capacity planning is, you're not likely to be spot-on in your estimates. I don't recommend completely turning off autogrowth, but in general I also don't like SQL Server 2000's default growth rate (or max size). In general, I would recommend setting ALL of the transaction log file's attributes: initial size, growth rate AND maximum size. For the growth rate, I recommend something that can be allocated somewhat quickly and of a fixed size. In general, I recommend a value which is less than or equal to 1GB (based on total size) but large enough that you're not going to autogrow again soon. So, for databases whose transaction logs are under 1GB, you might set autogrow to somewhere between 20 and 100 MB. For those of you who have transaction logs measured in GB, I'd set the autogrowth to 500MB or 1GB. In general, I don't like the percentage setting because it needs to be calculated (which I realize isn't really all that big of a deal), but the larger the file, the larger the autogrowth and the longer it takes. In SQL Server 2000, autogrowth can create blocking… so it's best to minimize this in general.
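Setting a fixed growth increment and a maximum size explicitly might look like this – a minimal sketch, assuming a hypothetical database and log file name; each property is changed in its own statement to keep the example conservative:

ALTER DATABASE [my_database]
MODIFY FILE ( NAME = my_database_log, FILEGROWTH = 512MB );

ALTER DATABASE [my_database]
MODIFY FILE ( NAME = my_database_log, MAXSIZE = 16384MB );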

8) Check/fix your internal fragmentation. OK, so this is the one that really helped the customer in Turkey. Often, when transaction logs are not pre-allocated and/or when there have been a lot of autogrowths, the transaction log can become internally fragmented. Internally, your transaction logs are broken down into smaller, more granular chunks called VLFs (Virtual Log Files). The size and number of VLFs you'll have depends largely on the size of the chunk when it's added to your transaction log. If you add a new chunk to the transaction log which is 20MB (through autogrowth or through manual growth), then the number of VLFs that are added is 4. If you add a chunk which is greater than 64MB but less than or equal to 1GB, you'll add 8 VLFs. If you add more than 1GB, then you'll add 16 VLFs. In general, most transaction logs will only have 20 or 30 VLFs – even 50 could be reasonable depending on the total size of the transaction log. However, in many cases excessive autogrowths can cause an excessive number of VLFs to be added – sometimes resulting in hundreds of VLFs. Having an excessive number of VLFs can negatively impact all transaction log related activities, and you may even see degradation in performance when transaction log backups occur. To see how many VLFs you have, simply look at the number of rows returned by DBCC LOGINFO; the number of rows returned equals the number of VLFs your transaction log file has. If you have more than 50, I would recommend fixing it and adjusting your autogrowth so that it doesn't occur as frequently. To get rid of all of the excessive VLFs, follow these easy steps to shrink off the fragmented chunk and add a new, clean chunk to your transaction log:

  1. Wait for an inactive time of day (ideally, it would be best to put the database into single user mode first) and then clear all transaction log activity through a regular transaction log backup. If you’re using the simple recovery model then you don’t need to do a log backup… Instead, just clear the transaction log by running a checkpoint.

BACKUP LOG databasename TO devicename
-- for example: BACKUP LOG [my_database] TO DISK = N'M:\mydb.bak'

  2. Shrink the log to as small a size as possible (TRUNCATEONLY)

DBCC SHRINKFILE(transactionloglogicalfilename, TRUNCATEONLY)

NOTE: if you don’t know the logical filename of your transaction log use sp_helpfile to list all of your database files.

  3. Alter the database to modify the transaction log file to the appropriate size – in one step

ALTER DATABASE databasename
MODIFY FILE
(
      NAME = transactionloglogicalfilename
    , SIZE = newtotalsize
    , FILEGROWTH = newfilegrowthsize
)
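Once the log has been regrown, you can re-run the check from step 8 to confirm that the VLF count has come back down – a minimal sketch, assuming a hypothetical database name:

USE [my_database];
DBCC LOGINFO;   -- the number of rows returned = the number of VLFs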

As noted above, the more Auto Growth events there are, the more physical fragmentation you end up with – so it is worth knowing how often they actually occur.

How do you find the Auto Growth events?

This blog post is not intended to explain the cause of the growth or how to resolve it; we will look at that later. Here, we will look at some common methods to identify Auto Growth events in SQL Server.

Using the default trace

The default trace is a very powerful way of capturing issues in SQL Server. Some of the common EventClasses and their descriptions are given below.

/*

EventClass Description
********************************************************
18 -Audit Server Starts And Stops
20 -Audit Login Failed
22 -Error log
46 -Object:Created
47 -Object:Deleted
55 -Hash Warning
69 -Sort Warnings
79 -Missing Column Statistics
80 -Missing Join Predicate
81 -Server Memory Change
92 -Data File Auto-Grow
93 -Log File Auto-Grow
116 -Audit DBCC Event
164 -Object:Altered
175 -Audit Server Alter Trace Event
*/

Now, let's look at how to capture Auto Growth events from the default trace, as shown below.

DECLARE @filename NVARCHAR(1000);

-- Get the name of the current default trace
SELECT @filename = CAST(value AS NVARCHAR(1000))
FROM ::fn_trace_getinfo(DEFAULT)
WHERE traceid = 1 AND property = 2;

--To find the filename without rollover number
SET @filename = REVERSE(@filename);
SET @filename = REVERSE(SUBSTRING(@filename,CHARINDEX('_',@filename)+1,LEN(@filename))) 
					+ REVERSE(SUBSTRING(@filename,1,CHARINDEX('.',@filename)));

-- Get the details
SELECT 
  Trace.StartTime, Trace.EndTime,TraceEvent.name AS EventName,Trace.DatabaseName  
,Trace.Filename,Trace.EventClass,(Trace.IntegerData*8)/1024.0 AS GrowthMB 
,(Trace.duration/1000)AS DurationInMS
FROM ::fn_trace_gettable(@filename, DEFAULT) AS Trace 
INNER JOIN sys.trace_events AS TraceEvent ON Trace.EventClass = TraceEvent.trace_event_id  
WHERE (Trace.EventClass = 92  
    OR Trace.EventClass = 93) 
ORDER BY Trace.StartTime

Using the SQL Server Error Log

SQL Server reports Auto Growth events in the Error Log, as shown in the snapshot below. The error log entry contains the file name and database, along with the time taken to process the Auto Growth.

[Snapshot: Auto Growth entries in the SQL Server Error Log]
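Rather than scrolling through the log by hand, you can also search it from T-SQL – a minimal sketch using sp_readerrorlog; the search term is an assumption, so adjust it to match the exact wording you see in your error log:

-- Parameters: log number (0 = current), log type (1 = SQL Server error log), search string
EXEC master.dbo.sp_readerrorlog 0, 1, N'autogrow';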

Sources: sqlshack, sqlskills, sqlzealots, sqlauthority
