KVM vs ZFS

Bruce Dawson jbd at codemeta.com
Fri Aug 21 17:47:15 EDT 2015


On 08/21/2015 05:30 PM, Tom Buskey wrote:
>
>
> On Fri, Aug 21, 2015 at 3:33 PM, Bruce Dawson <jbd at codemeta.com> wrote:
>
>     For this rainy weekend, please consider the following:
>
>     I'm constructing a new server and want 2 KVM guest systems on it.
>     There are 3 4TB drives on it. At the moment, assume one 4TB drive
>     will be reserved for the KVM host. The server has 16GB of RAM.
>
>
> I've been running ZFSonLinux for a while.  Now on CentOS 7, but 
> previously on Ubuntu.  And OpenSolaris before that.
>
> I typically do a minimal OS install on 2 smaller disks using RAID1 
> mdadm.  I like to keep my OS disks independent of any drivers or OS 
> add-ons.  I 
> don't know how good Linux booting on ZFS is either.  Actually, I don't 
> even know if it's possible.  I think it is with BSD.

Ubuntu 14.04 will supposedly boot from a ZFS root.

>
> I do ZFS on my data disks (no dedup!).  ZFS could do a RAIDZ of the 
> unused space in a partition of the OS drive + the same partitions of 
> the other drives, but ZFS really prefers whole disks and works better 
> with them.  Plus, all the drives should be the same size.
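
For what it's worth, the whole-disk version looks like a one-liner. 
Guessing that the data drives show up as /dev/sdb and /dev/sdc on this 
box:

    zpool create tank mirror /dev/sdb /dev/sdc

or, if all 3 4TB drives went into the pool instead:

    zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd

("tank" is just the usual example pool name).
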
>
>     What are the advantages/disadvantages of:
>
>      1. Putting all disks in a ZFS pool on the host and dividing the
>         pool among the guests. Or:
>
> So you're going to use one drive for the OS w/o ZFS? Then 2 drives for 
> ZFS & data?
> Then using zfs commands to allocate space to the guests?  I do this 
> all the time.
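
That's the plan. From the docs, carving up the pool per guest looks 
simple enough - something like (the dataset names are just my guess at 
a layout):

    zfs create tank/vm-mail
    zfs create tank/vm-lamp
    zfs set quota=500G tank/vm-mail

and then the qcow2 images would live under /tank/vm-mail and 
/tank/vm-lamp.
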
>
>      2. Giving each guest its own disk. (At least one of the guests
>         will be running ZFS).
>
> I wouldn't ever run ZFS on a single disk if I cared about the data.  
> It's like running RAID0; get an error and you lose all your data.  
> Actually, you might recover some data from a non-ZFS RAID0.

Oh - but I thought ZFS would mirror "filesystems" within the pool 
(probably with much poorer performance)? At any rate, I'm thinking the 
first approach is the best.
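
(Checking the man page, I think the knob I was half-remembering is the 
copies property:

    zfs set copies=2 tank/vm-mail

but as far as I can tell that just stores extra copies of each block 
within the same pool - it helps against bad sectors, not a whole dead 
disk.)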

>
> You can use iSCSI on ZFS to give your KVMs a raw block device 
> instead of a zfs partition w/ a QCOW2 file.  I've only done the zfs 
> partition & qcow2, not the iSCSI block.

I didn't know ZFS would provide that. Guess I've got more reading to 
do - I wonder if it'll be faster.
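
From what I've found so far, if the guest is on the same host I may not 
even need iSCSI - a zvol shows up as a local block device that libvirt 
can use directly. Something like (size and name are just an example):

    zfs create -V 100G tank/vm-mail-disk

and then point the guest at /dev/zvol/tank/vm-mail-disk instead of a 
qcow2 file.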

>
> I'd do the 1st setup and get the benefits of ZFS's error correction 
> and on-the-fly partitioning.  I'd imagine the snapshots would be big 
> for either qcow or an iSCSI block.  I think you'd have to benchmark 
> qcow vs the iSCSI block to see which is faster w/ various compression 
> settings (in qcow, in ZFS, etc).
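
The ZFS side of that benchmark looks easy to set up, at least - 
compression is a per-dataset property (using my hypothetical 
tank/vm-mail dataset again):

    zfs set compression=lz4 tank/vm-mail
    zfs get compressratio tank/vm-mail

(assuming a recent enough ZFSonLinux for lz4; older pools may only 
offer lzjb or gzip).
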
>
> ZFS will eat up unused RAM, but Linux does that for filesystem caching 
> already, so we're used to that.  I don't see any huge performance hits 
> with modern multicore systems.
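
That was my understanding too. If the guests end up needing the RAM, it 
looks like the ARC can be capped on ZFSonLinux with a module option, 
e.g. 4GB:

    echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf

(takes effect after reloading the module or rebooting).
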
>
>     The guests will be:
>
>        * Both guests will be running DNS servers
>        * One guest will be running a Postfix/Dovecot mail server
>     (including mailman)
>        * The other guest will be running a LAMP stack.
>
>     Hints:
>        * I don't particularly like option 2 as I'll lose the benefits
>     of ZFS (snapshot backups, striping, ...)
>        * I don't know if the performance benefits of ZFS will outweigh
>     the overhead of KVM/libvirt.
>
>     --Bruce
