
Are your database backups completing as quickly as they could?

Les Reading

Maintaining database backups is an essential part of most DBAs' or sysadmins' job descriptions, yet it's common to find deployments of larger databases where the time taken to run a full backup means they are not being taken as regularly as they should be. Differential backups are all well and good, but for some databases they grow almost as large as a full backup, especially after three or four days. In this post I will run through some optimisation tips that may help you keep backup and verification times to a minimum.

Let's start with some best practices that are sometimes overlooked:
  1. Ensure you are backing up to a separate disk or set of disks from those where the database lives. This is vital to ensuring a single failure does not wipe out both the data and the most recent backup, and you should check you really are using separate disks, not just a different logical drive on the same physical disks or RAID array.
  2. If you have it, use backup compression. It has been available in Enterprise Edition since SQL Server 2008 and in Standard Edition since SQL Server 2008 R2, but it is disabled by default.
  3. Backup jobs should have two steps: the actual backup, followed by a backup file verification. The verification step benefits from compression even more than the backup itself does, as the sketch after this list shows.
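To make points 2 and 3 concrete, a minimal sketch of such a two-step job might look like this (the database name and backup path are just placeholders, not from the original post):

-- Step 1: take a full backup with compression enabled
BACKUP DATABASE [TestDB]
TO DISK = N'F:\Backups\TestDB_Full.bak'
WITH COMPRESSION, STATS = 10;

-- Step 2: confirm the backup file can be read end to end, without restoring it
RESTORE VERIFYONLY
FROM DISK = N'F:\Backups\TestDB_Full.bak';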

To illustrate the difference these make, I ran some tests under different conditions to show how the backup and verification duration of even a small database can be optimised. These tests were carried out on my laptop, with extra storage provided by external USB 3.0 devices.

My laptop has SQL Server 2014 Developer Edition installed and is configured with:
  CPU: Core i7 dual core with Hyper-Threading (4 logical processors)
  RAM: 8 GB DDR3 1600 MHz
  Drive 1: USB 3.0 5400 rpm SATA
  Drive 2: USB 3.0 5400 rpm SATA
  Drive 3: USB 3.0 5400 rpm SATA
  SSD: Samsung 512 GB mSATA
I ran two test scenarios: in the first, the 6 GB test database is held on a single slow USB 3.0 drive; in the second, it is held on the low-latency internal SSD. Backups were taken to a mixture of single and multiple drives, with compression enabled and disabled. It should be noted that the SSD is considerably faster than the mechanical drives connected via USB 3.0.
To create the slowest possible baseline I first backed up the test database, without any compression, onto the same drive it is hosted on. This took 5 minutes and 15 seconds to back up and verify the file.
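As an aside, if you want to see how long your own backups are taking (and how much compression is saving you), the backup history SQL Server keeps in msdb is a quick place to look; something along these lines:

-- Recent full backups with duration and compressed size, from msdb history
SELECT TOP (20)
       bs.database_name,
       bs.backup_start_date,
       DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date) AS duration_seconds,
       bs.backup_size / 1048576.0            AS backup_size_mb,
       bs.compressed_backup_size / 1048576.0 AS compressed_size_mb
FROM msdb.dbo.backupset AS bs
WHERE bs.type = 'D'   -- 'D' = full database backups only
ORDER BY bs.backup_start_date DESC;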
Test Set 1 - Data held on single slow SATA Drive (Drive 1)
  Destination    | Compression Enabled | # Backup Files | Backup Duration | Verification Duration | Total Duration
  Drive 2        | N                   | 1              | 1 min 35 sec    | 1 min 28 sec          | 3 min 3 sec
  Drive 2        | Y                   | 1              | 1 min 31 sec    | 23 sec                | 1 min 54 sec
  Drives 2 + 3   | Y                   | 2              | 1 min 29 sec    | 13 sec                | 1 min 42 sec
  SSD            | Y                   | 4              | 1 min 25 sec    | 6 sec                 | 1 min 31 sec
At the end of the first set of tests it is apparent that the average read speed of the drive holding the database will not let the backup duration fall much further: there is only a 10 second reduction in backup time across the runs. We can see that the gains come from compression, as the read-only verification process now has a much smaller file to scan.
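Incidentally, if compression consistently pays off like this, it can be made the instance-wide default rather than specified on every backup, for example:

-- Make compression the default for all backups on this instance
-- (individual BACKUP statements can still override it with NO_COMPRESSION)
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;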


In the second set of tests I moved the test database onto the fast SSD to remove that bottleneck. This is also closer to the real world, where backup storage is often slower (and therefore cheaper) than the data and log drives in order to keep costs down.
Test Set 2 - Data held on single fast SSD
  Destination        | Compression Enabled | # Backup Files | Backup Duration | Verification Duration | Total Duration
  Drive 2            | N                   | 1              | 1 min 18 sec    | 1 min 13 sec          | 2 min 31 sec
  Drives 1 + 2 + 3   | N                   | 3              | 25 sec          | 25 sec                | 50 sec
  Drive 2            | Y                   | 1              | 21 sec          | 22 sec                | 43 sec
  Drives 1 + 2 + 3   | Y                   | 3              | 23 sec          | 9 sec                 | 32 sec
Note how striping the backup across three files onto three separate disks without compression yielded a similar result to compressing the backup to a single file on a single drive. Compression was still faster but not by as much as you might expect. The final test shows that the biggest gain comes from enabling compression and striping the backup across multiple drives, although this is largely from the verification process.
The encouraging point here is that if you have an edition of SQL Server without backup compression, splitting the backup across several disks clearly makes a difference. It may not be necessary to split the backup into multiple files if you have a single RAID array made up of several physical disks, but skimping on cheap, slow storage for your database backups can definitely have a negative impact on backup duration.
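One thing to bear in mind if you do stripe a backup across multiple files: the verification (and any restore) has to reference every file in the set, for example (paths are illustrative):

-- Verify a backup that was striped across three files on three separate drives
RESTORE VERIFYONLY
FROM DISK = N'G:\Backups\TestDB_1.bak',
     DISK = N'H:\Backups\TestDB_2.bak',
     DISK = N'I:\Backups\TestDB_3.bak';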
In conclusion:
  1. You should never back up to the same physical disks that your data sits on.
  2. You should always verify that your backup files are valid at the end of each backup.
  3. If you have an edition of SQL Server that supports compression, you should use it - it will help reduce the backup file verification time considerably.
  4. If you do not have compression, just using multiple backup files may greatly reduce backup and backup file verification times.
  5. For very large databases, combining backup compression with multiple backup files is likely to yield a noticeable gain.
For reference, a basic backup command that uses multiple files and enables compression looks something like the following (the database name and file paths are illustrative):
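-- Full backup striped across three files on three separate drives, with compression
BACKUP DATABASE [TestDB]
TO DISK = N'G:\Backups\TestDB_1.bak',
   DISK = N'H:\Backups\TestDB_2.bak',
   DISK = N'I:\Backups\TestDB_3.bak'
WITH COMPRESSION, STATS = 10;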

