ZFS is cool

Tom Buskey tom at buskey.name
Tue Nov 24 10:11:33 EST 2009


If you're going to hijack the thread, you should change the subject... :-)

On Mon, Nov 23, 2009 at 1:42 PM, Alan Johnson <alan at datdec.com> wrote:

> On Mon, Nov 23, 2009 at 1:09 PM, Tom Buskey <tom at buskey.name> wrote:
>
>>
>
>> I once replaced my 120 GB drives with 500 GB drives to increase the pool.
>> It didn't seem slow to me, but..  You'll have to google :-/ to get real
>> numbers.  I suspect the speed is similar to RAID 5/6 rebuilds
>>
>
> Yes, I've heard that performance is really good while rebuilding and it is
> nice to have it confirmed.  However, on a busy system, I expect the trade
> off is even longer rebuild times.
>

Yes, ZFS can have some pausing.  If realtime-type stuff is important, it
might not be a good fit.  Sun also has hardware accelerators (SSDs used as
read caches and intent-log devices) that run hot sections of ZFS out of RAM/SSD.
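
You can get a similar effect with plain SSDs by adding cache and log devices
to a pool.  A rough sketch (the pool and device names are just placeholders):

  zpool add mypool cache c2t0d0   # SSD as an L2ARC read cache
  zpool add mypool log c3t0d0     # SSD as a separate intent-log (ZIL) device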


>
>
>> ZFS will work on top of iSCSI SAN drives.  Or you can share out a partition
>> from a ZFS pool as an iSCSI target.
>>
>
> Now THAT sounds like the stuff!  So, you can make and share ZFS
> partitions?  Is that functionally similar to LVM partitions?  I'm a bit
> confused.  More below...
>

You make a pool out of your disks.  Then you carve it up into filesystems,
dynamically, similar to LVM.  It's more like a NetApp if you've used one of those.
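
For example, carving out a new filesystem and resizing it later is just a
couple of commands, with no unmount or LVM-style resize step (the names here
are placeholders):

  zfs create mypool/scratch
  zfs set quota=5G mypool/scratch
  zfs set quota=50G mypool/scratch   # grow (or shrink) the limit later, live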


>
>
>>
>> zpool create mypool raidz c0t0d0 c0t1d0 c0t2d0   # create a RAIDZ pool from 3 disks
>>
>> # Create a home with a 10GB max, share it on NFS, and compress the data as it comes in
>> zfs create mypool/home
>> zfs set quota=10G mypool/home
>> zfs set compression=on mypool/home
>> zfs set sharenfs=on mypool/home
>>
>> # Another one, but as a 10GB volume (zvol) shared over iSCSI
>> zfs create -V 10G mypool/iSC
>> zfs set compression=on mypool/iSC
>> zfs set shareiscsi=on mypool/iSC
>>
>> They really got the CLI stuff right!
>>
>
> Nice!  But then what does it look like to the client?  Doesn't iSCSI appear
> like a block device that still needs a file system on top of it?  Does the
> client need ZFS support?  That's the
>

Yes.  It's a chunk of disk that the client accesses over iSCSI and needs to
put a filesystem on.  But on the ZFS host (the SAN) you can turn on
compression, deduplication, and snapshots for that chunk.

The client doesn't need to support ZFS.  Just iSCSI.  Or NFS.  Or CIFS.
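
On a Linux client that looks roughly like this (the host name and device name
are hypothetical; pick whatever filesystem you like):

  iscsiadm -m discovery -t sendtargets -p san-host
  iscsiadm -m node --login     # log in to the discovered target(s)
  mkfs.ext3 /dev/sdb           # the LUN shows up as an ordinary block device
  mount /dev/sdb /mnt/data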


> rub if I want to boot Linux clients from it. NFS removes the need for ZFS
> on the client, but I am concerned about network overhead for some of our
> heavier needs.
>

My take is to use NFS unless you need file locking or CIFS.  A database
needs to be on local disk (which iSCSI/Fibre Channel effectively is).  NFS is
very bad at file locking.
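
For the NFS case the client side is just a mount (the server name and paths
are placeholders):

  mount -t nfs san-host:/mypool/home /mnt/home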


>
>
>>
>>> I'm tempted to try Fuse+ZFS for our database servers, or even just go
>>> right to FreeBSD, but
>>
>>
>> I wouldn't touch *anything* FUSE for production work.  Well, I've used
>> NTFS-3G because I had to.
>>
>
> Yeah, I keep trying to put it out of my head, but it keeps sneaking back in
> there.  I know it is not what I want it to be, but I can't stop wanting it!
>
>
>>
>> Get VirtualBox and play with FreeBSD/FreeNAS/Solaris/OpenSolaris inside
>> it.
>>
>
> I've got a blade chassis coming out of production after some upgrades in
> Q1.  I can use it for a lab, so I'm just going to hold off until then so I can
> get all the pieces of our production cloud going at once and see what breaks
> when I do nutty stuff to get ZFS in the mix.
>

FWIW, I've been getting some good compression on some of my systems:
 MOUNTPOINT            QUOTA   USED  AVAIL  COMPRESS  RATIO  SHARENFS
/emcsan/raw            none   486G   199G        on  1.53x  off
/onroot                none  33.8G  18.9G       off  1.07x  off
/onroot/archive        none  54.4M  18.9G        on  8.03x  on
/export                none  23.7G  18.9G        on  1.01x  off
/var/sadm/pkg          none  1.79G  18.9G       off  1.00x  off
/export/temp            19G  4.91G  14.1G        on  1.36x  on
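
That listing comes straight from zfs list; something like this should
reproduce it (the exact column set is my guess):

  zfs list -o mountpoint,quota,used,avail,compression,compressratio,sharenfs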