<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Aug 21, 2015 at 3:33 PM, Bruce Dawson <span dir="ltr"><<a href="mailto:jbd@codemeta.com" target="_blank">jbd@codemeta.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000">
> For this rainy weekend, please consider the following:
>
> I'm constructing a new server and want 2 KVM guest systems on it.
> There are 3 4TB drives on it. At the moment, assume one 4TB drive
> will be reserved for the KVM host. The server has 16GB of RAM.

I've been running ZFSonLinux for a while: on CentOS 7 now, previously on
Ubuntu, and on OpenSolaris before that.

I typically do a minimal OS install on two smaller disks in an mdadm RAID1. I
like to keep my OS disks independent of any extra drivers or OS add-ons. I
also don't know how good Linux booting from ZFS is; I don't even know if it's
possible. I think it is with BSD.

I do ZFS on my data disks (no dedup!). ZFS could do a RAIDZ out of the unused
space in a partition on the OS drive plus the same partitions on the other
drives, but it really prefers whole disks and works better with them. Plus,
all the drives should be the same size.
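
For example, a minimal sketch of that layout (device names, partition numbers,
and the pool name "tank" are placeholders, not anything from your hardware):

    # OS on two small disks, mirrored with mdadm
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

    # Data pool on whole disks: a two-disk mirror here, or raidz1 if you
    # end up dedicating three whole disks to it
    zpool create tank mirror /dev/sdc /dev/sdd
    zpool status tank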

> What are the advantages/disadvantages of:
>
>   1. Putting all disks in a ZFS pool on the host and dividing the
>      pool between each guest. Or:

So you're going to use one drive for the OS w/o ZFS? Then 2 drives for ZFS and
data? Then using zfs commands to allocate space to the guests? I do this all
the time.
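
For example (an illustrative sketch only; the pool name "tank" and the guest
dataset names are made up):

    # One dataset per guest, each with its own quota; snapshots work per guest
    zfs create -o quota=500G tank/guest-mail
    zfs create -o quota=500G tank/guest-lamp
    zfs snapshot tank/guest-mail@pre-upgrade

    # Or carve out a zvol if a guest should see a raw block device
    zfs create -V 200G tank/guest-lamp-disk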

>   2. Giving each guest its own disk. (At least one of the guests
>      will be running ZFS).

I wouldn't ever run ZFS on a single disk if I cared about the data. It's like
running RAID0: hit an error and you lose all your data. Actually, you might
still recover some data from a non-ZFS RAID0.

You can use iSCSI on ZFS to give your KVM guests a raw block device instead of
a ZFS dataset holding a qcow2 file. I've only done the dataset-and-qcow2
setup, not the iSCSI block device.

I'd do the first setup and get the benefits of ZFS's error checking and
correction plus on-the-fly partitioning. I'd imagine the snapshots would be
big for either a qcow2 file or an iSCSI block device. I think you'd have to
benchmark qcow2 vs. an iSCSI block device to see which is faster with the
various compression options (in qcow2, in ZFS, etc.).

ZFS will eat up unused RAM, but Linux already does that for filesystem caching,
so we're used to that. I don't see any huge performance hits with modern
multicore systems.
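
If you want the raw-block-device feel without iSCSI, a local zvol per guest is
the usual shortcut. A rough sketch with made-up pool/guest names; the ARC cap
is only an example value for a 16GB host:

    # A zvol the host exposes under /dev/zvol/tank/lamp-disk
    zfs create -V 200G -o compression=lz4 tank/lamp-disk

    # Attach it to an existing libvirt guest as a virtio disk
    virsh attach-disk lampguest /dev/zvol/tank/lamp-disk vdb \
        --sourcetype block --targetbus virtio --persistent

    # Cap the ARC (here 4GiB) if ZFS's RAM appetite crowds out the guests
    echo "options zfs zfs_arc_max=4294967296" >> /etc/modprobe.d/zfs.conf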

> The guests will be:
>
>   * Both guests will be running DNS servers
>   * One guest will be running a Postfix/Dovecot mail server
>     (including mailman)
>   * The other guest will be running a LAMP stack.
>
> Hints:
>
>   * I don't particularly like option 2 as I'll lose the benefits
>     of ZFS (snapshot backups, striping, ...)
>   * I don't know if the performance benefits of ZFS will outweigh
>     the overhead of KVM/libvirt.
>
> --Bruce