I was about to give up on ZFS because I was having some drive issues, but I think that was just because the drives were behind a cut-rate SATA port multiplier; I never had issues with the ones connected to my other computer through a quality (well, still budget, but good) SAS card. It was also tempting to try aufs again, with SnapRAID, so I could make use of some smaller drives, but ultimately the extra space was a marginal benefit compared with the features I'd gain from ZFS.
So I’m still on ZFS, though just with raidz2 this time. I’d been reading that the optimal number of drives per vdev is 2^N+1 for raidz1, 2^N+2 for raidz2, and 2^N+3 for raidz3, so I figured I’d try to do the optimal setup with what I had.
ZFS is so compelling because its snapshot and backup features are brilliant. Having installed zfs-auto-snapshot, I get hourly, daily, weekly, monthly, and every-15-minutes snapshots automatically, meaning if I accidentally delete anything I can quickly copy it out of a hidden snapshot, and these are differential so they take up hardly any space.
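Recovering a file from one of those snapshots really is just a copy out of the hidden .zfs directory. A rough sketch (the pool name "storage" and the snapshot and file names here are only examples from my setup):

```shell
# See which snapshots exist for the pool and its datasets
zfs list -t snapshot -r storage

# Every snapshot is browsable read-only under .zfs/snapshot at the
# dataset's mountpoint, so restoring a deleted file is a plain cp
# (snapshot and file names are illustrative)
cp /storage/.zfs/snapshot/zfs-auto-snap_hourly-2013-09-04-0800/some-file /storage/
```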
Furthermore, zfs send/receive, built into the tools, lets me send the whole filesystem, snapshots included, to my backup server, and incrementally at that, for better-than-rsync backups. You can do this manually with one snapshot to get a straight copy backup, piping through pv and netcat for a progress meter (modified from here):
On sending machine:
zfs send storage@zfs-auto-snap_daily-2013-09-04-0759 | pv -s 4597499264K | nc -l 3333
On receiving machine:
nc 192.168.47.2 3333 | pv -s 4597499264K | zfs receive storage
The number after “pv -s” is the size of the data to be transferred, in kilobytes, which you can obtain by running “df -Bk” on the sending machine and copying the “Used” figure given for your pool. If you have a number of snapshots, as I do, this will actually be higher than the data you’ll really be transferring, but not grossly so; anyway, that just means you’ll be done sooner than the indicator says, so that’s exciting, no?
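If you’d rather not copy the “Used” figure by hand, you can pull it straight out of df, since pv understands the K suffix that “df -Bk” prints. A sketch, assuming the pool is mounted at /storage and the snapshot name is the one from above:

```shell
# Grab the "Used" column (second line of df output) for the pool;
# df -Bk prints values with a trailing K, which pv -s accepts as-is
USED=$(df -Bk /storage | awk 'NR==2 {print $3}')

# Then the sending side becomes:
zfs send storage@zfs-auto-snap_daily-2013-09-04-0759 | pv -s "$USED" | nc -l 3333
```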
That’s just for a quick-and-dirty filesystem copy from one machine to another. For daily use I found I was able to modify and use this backup script for nightly backups of the day’s changes, including snapshots.
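The incremental flavor of the same trick looks roughly like this, assuming the previous day’s snapshot already exists on both ends (snapshot names are illustrative):

```shell
# Sending machine: -i takes the older snapshot as a starting point,
# so only the changes between the two snapshots cross the wire
zfs send -i storage@zfs-auto-snap_daily-2013-09-03-0759 \
    storage@zfs-auto-snap_daily-2013-09-04-0759 | nc -l 3333

# Receiving machine: apply the delta on top of the existing copy
nc 192.168.47.2 3333 | zfs receive storage
```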
So, easy backups, plus ZFS is built to checksum your data, so with weekly checks and daily scripts to monitor your hard disks and data sets, you’re pretty safe from data loss.
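By “weekly checks” I mean a zpool scrub, which walks every block in the pool and verifies its checksum against what was stored. A cron entry along these lines does it (pool name and schedule are just my setup):

```shell
# /etc/cron.d/zfs-scrub: verify every block's checksum weekly,
# Sunday at 3am ("storage" is the pool from above)
0 3 * * 0  root  /sbin/zpool scrub storage
```

Afterwards, `zpool status storage` shows when the last scrub finished and whether it found (and repaired) anything.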