ZFS came up in the previous article on Proxmox as a virtualization storage solution, and it works well in other virtualization environments too (it makes excellent down-and-dirty storage for Citrix).
Originally developed by Sun Microsystems and open-sourced through OpenSolaris, ZFS has since been ported to FreeBSD and later to Linux. When Oracle purchased Sun Microsystems, OpenSolaris development became closed source, and in turn so did ZFS. Unhappy with the decision to make ZFS closed source, it is reported, two-thirds of the ZFS development team left Oracle to continue the project in the spirit of open source licensing. Open source ZFS development now continues in the form of the OpenZFS Project, which underpins storage platforms such as NexentaStor (a proprietary product built on illumos, the community fork of OpenSolaris) and FreeNAS.
ZFS (the Z File System) was designed to improve data integrity and to work with pooled storage, using multiple caching mechanisms (memory and disk read caches, plus a synchronous disk write cache) to increase performance. What really sets ZFS apart is that it combines the roles of volume manager and file system, which were traditionally separate data management layers. This lets a single file system span multiple disks in one pool, whereas traditional RAID spans data across multiple individual volumes, each with its own file system layered on top. Likewise, data in a ZFS pool is distributed across the underlying disk structure. ZFS has a huge advantage in that storage can be increased automatically when additional hard drives are added to the pool – the file system simply increases its capacity.
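As a rough sketch of how pooled storage looks in practice, the commands below create a pool and then grow it by adding more disks (the pool name and device paths like /dev/sdb are illustrative placeholders – substitute your own):

```shell
# Create a pool named "tank" from a mirrored pair of disks;
# ZFS creates and mounts the file system in the same step.
zpool create tank mirror /dev/sdb /dev/sdc

# Later, grow the pool by adding another mirrored pair;
# the file system's capacity increases automatically,
# with no partitioning or resizing step.
zpool add tank mirror /dev/sdd /dev/sde

# Confirm the new pool size and health.
zpool list tank
zpool status tank
```

Note that there is no separate mkfs or mount step – because ZFS is both the volume manager and the file system, creating the pool is all it takes.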
In addition to pooled storage, ZFS also features copy-on-write, snapshots, data integrity verification and repair, and RAID-Z – and it can store a single file of up to 16 exabytes in a pool with a maximum of 256 quadrillion zettabytes of storage. Check out Raul Rubens’ 2009 article “10 Reasons Why ZFS Rocks”. He gives an excellent rundown of each of these features and explains how ZFS overcomes the RAID 5 write-hole flaw.
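A few of those features can be sketched with the standard zpool/zfs tooling (again, pool, dataset, and device names here are illustrative placeholders):

```shell
# Create a single-parity RAID-Z pool -- similar in spirit to
# RAID 5, but copy-on-write means no RAID 5 write hole.
zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd

# Take an instant, space-efficient snapshot of a dataset...
zfs snapshot tank/home@before-upgrade

# ...and roll the dataset back to it if something goes wrong.
zfs rollback tank/home@before-upgrade

# Kick off an integrity scrub: ZFS checksums every block and
# repairs any corruption it finds from the RAID-Z parity.
zpool scrub tank
zpool status tank
```

Snapshots are nearly free to create because copy-on-write means only blocks changed after the snapshot consume additional space.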
While OpenZFS is full featured, the proprietary NexentaStor project’s community edition limits use to 10 TB of used disk space in a non-production environment and locks out features if the system exceeds 18 TB – limits lifted with the purchase of their enterprise edition. Given its stability and the availability of support, I have personally deployed NexentaStor with success and am quite impressed with its performance and reliability. I would expect no less from any of the other OpenZFS flavors. Beyond its expansive storage capabilities, OpenZFS has the decided advantage of operating on many platforms: illumos, FreeBSD, Linux, and OS X. Deploy a copy of either and enjoy free NAS storage (less the cost of hardware)!
*Speaking of FreeNAS, check out the comparison of the two systems. FreeNAS claims to be “The World’s #1 Storage OS” with over 10 million downloads. Having likewise deployed this flavor of OpenZFS, I can say that, like NexentaStor, it simply rocks as a storage solution. Put simply, ZFS is hard to kill – and as such it can’t be beat.