Testing ZFS Compression Levels

We’re about to replace one of the backup servers at work, and in the process get our feet wet with ZFS. In addition to RAIDZ, something that got my attention was ZFS compression. With our existing backup procedure, the mail servers [as large as 600GB with millions of files] are simply fed through tar with gzip compression and stored as a single massive, unfriendly backup.tar.gz file. Between the now relatively small 1.2 TB partition, which limited us to only a couple of days’ worth of backups, and backup runs that took in excess of 48 hours, the old scheme was showing its age.
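For reference, the legacy procedure amounts to something like the following one-liner (paths and hostnames here are hypothetical, not our actual layout):

```shell
# Old-style monolithic backup: the whole mail spool is streamed
# through tar with gzip into a single dated archive.
tar -czf /backups/mail-$(date +%Y-%m-%d).tar.gz /var/mail
```

Restoring a single mailbox from an archive like this means decompressing and scanning the whole file, which is part of why it is so unfriendly at this scale.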

For years I had been itching to use rsync to back up the mail servers, but the main problem was that the backups would not be compressed, and a secondary problem was how to store more than a single copy of the filesystem. ZFS compression solved the main issue, and a quick spin through the man pages showed that the --delete, --backup, and --backup-dir options would solve the remaining hurdle.
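As a rough sketch of how those rsync options fit together (dataset paths and the source host are hypothetical): --delete keeps the mirror current, while --backup with a dated --backup-dir shunts changed and deleted files into a per-run directory instead of discarding them, giving you multiple restorable generations.

```shell
# Mirror the mail spool into a live copy on the (compressed) ZFS dataset;
# anything rsync would overwrite or delete is preserved under a dated
# history directory rather than lost.
DATE=$(date +%Y-%m-%d)
rsync -a --delete \
      --backup --backup-dir=/tank/mail-history/$DATE \
      mailhost:/var/mail/ /tank/mail-current/
```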

Given that we would need ZFS to compress several terabytes of data we wanted to test out the various levels of compression available to us in advance. To this end I set up a test machine with the following specs:

  • Dell PE2950
  • 2x Dual-Core Xeon CPUs
  • 4GB RAM
  • 5x SAS 10k disks
  • FreeBSD 8.2 AMD64

One disk is used for the OS, and the other 4 were used to create a RAIDZ1 pool. Within this pool I initially created 3 ZFS datasets with GZIP compression levels of 1, 4, and 8. For test data I pulled 29GB worth of mail data [maildir format, approx 435k files] from one of our servers to an uncompressed section of the pool, and then copied it to each of the compressed datasets in turn. After seeing the results I created 3 more datasets for GZIP-2, GZIP-3, and LZJB compression.
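The setup above would look roughly like this (pool name, dataset names, and device names are illustrative; your da* numbering will depend on the controller):

```shell
# 4-disk RAIDZ1 pool on the SAS drives.
zpool create tank raidz1 da1 da2 da3 da4

# One dataset per compression setting under test, plus an
# uncompressed staging area for the source data.
zfs create -o compression=off    tank/plain
zfs create -o compression=gzip-1 tank/gzip1
zfs create -o compression=gzip-4 tank/gzip4
zfs create -o compression=gzip-8 tank/gzip8
zfs create -o compression=lzjb   tank/lzjb

# After copying the test data in, compare the on-disk savings.
zfs get compressratio tank/gzip1 tank/gzip4 tank/gzip8 tank/lzjb
```

The compressratio property reports the achieved ratio per dataset, which makes the level-to-level comparison a one-command affair.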