Software RAID issues (was Re: Suggestions solicited, server bring up)

Alan Johnson alan at datdec.com
Mon Nov 23 10:31:16 EST 2009


On Mon, Nov 23, 2009 at 9:25 AM, Tom Buskey <tom at buskey.name> wrote:

> I think the RAID 5 write hole refers to the slowdown on writes with RAID
> 5.  In order to lose data, a 2nd drive needs to fail (as opposed to only 1
> drive on a RAID 0 or JBOD).
>

According to
http://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_5_performance:
"In the event of a system failure while there are active writes, the parity
of a stripe may become inconsistent with the data. If this is not detected
and repaired before a disk or block fails, data loss may ensue as incorrect
parity will be used to reconstruct the missing block in that stripe. This
potential vulnerability is sometimes known as the *write hole*.
Battery-backed cache and similar techniques are commonly used to reduce the
window of opportunity for this to occur. The same issue occurs for RAID-6."
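
To make the failure mode concrete, here is a toy Python sketch (purely
illustrative, not how any real RAID code works): parity is the XOR of the
data blocks in a stripe, and a crash that lands between the data write and
the parity write leaves stale parity behind, so a later rebuild quietly
reconstructs garbage.

def xor_blocks(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def parity_of(blocks):
    p = bytes(len(blocks[0]))          # all-zero block
    for b in blocks:
        p = xor_blocks(p, b)
    return p

stripe = [b"AAAA", b"BBBB", b"CCCC"]   # one stripe across three data disks
parity = parity_of(stripe)             # contents of the parity chunk

stripe[1] = b"XXXX"                    # new data hits the platter...
                                       # ...crash! the parity update is lost

# Disk 0 dies later.  Rebuild its block from the survivors plus parity:
rebuilt = xor_blocks(xor_blocks(stripe[1], stripe[2]), parity)
assert rebuilt != b"AAAA"              # quietly wrong: no error, just bad data

Note that nothing raises an error; the array hands back a corrupted block as
if it were good, which is what makes the hole so nasty.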


> I think most software RAID only does mirrors for boot.  RAID 1, not 5.
>

I have an Ubuntu 9.10 box that boots a RAID6 with GRUB2.  I expect that is
very new, eh?


>
> RAID5 will have faster read performance than RAID 1 or a single disk.  It
> might be faster for reads than RAID-0 (striping) also.
>

If the disks are a severe bottleneck, RAID5 can match RAID0 read speeds in
theory.  However, I've never seen this in practice.  RAID5 cannot be faster
than RAID0 unless something outside those definitions is at play.
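
Some back-of-the-envelope arithmetic in Python makes the sequential case
concrete (the 100 MB/s per-spindle figure is an assumption picked for round
numbers, not a measurement): an n-disk RAID5 streams like an (n-1)-disk
RAID0, because the heads still pass over one parity chunk per stripe.

PER_DISK = 100.0  # MB/s per spindle (assumed)

def raid0_seq_read(n):
    return n * PER_DISK              # every chunk on every disk is data

def raid5_seq_read(n):
    return (n - 1) * PER_DISK        # one chunk per stripe is parity

for n in (3, 4, 8):
    print("%d disks:  RAID0 %4.0f MB/s   RAID5 %4.0f MB/s"
          % (n, raid0_seq_read(n), raid5_seq_read(n)))

Seek-bound random reads are the one case where the two can nearly match,
since both layouts spread reads across all n spindles.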

ZFS's RAIDZ ...RAIDZ2 ... RAIDZ3 which has 3 parity disks.
>

I know what you mean, but I'm just nit-picking here for clarification so as
not to confuse the uninitiated: parity disks are a thing of RAID3.  RAID5/6/Z
all use distributed parity, so no one disk is dedicated to parity.  This
is a big part of what makes rebuilds so slow on RAID5/6: the process is not
as linear as a mirror or a RAID3 with a dedicated parity drive.  How does
RAIDZ do on a rebuild?
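
For anyone who hasn't seen it drawn out, a quick Python sketch of the chunk
layout shows why "parity disk" is the wrong mental model for RAID5 (I'm
assuming the common left-asymmetric rotation here; real implementations
vary):

def raid5_layout(disks, stripes):
    rows = []
    for s in range(stripes):
        p = (disks - 1 - s) % disks  # parity moves one disk left per stripe
        rows.append(["P" if d == p else "D" for d in range(disks)])
    return rows

for s, row in enumerate(raid5_layout(disks=4, stripes=4)):
    print("stripe %d:   %s" % (s, "  ".join(row)))

# stripe 0:   D  D  D  P
# stripe 1:   D  D  P  D
# stripe 2:   D  P  D  D
# stripe 3:   P  D  D  D

RAID3 would print P in the same column on every stripe, so rebuilding that
one disk is a straight sequential pass; in RAID5 every surviving disk holds
a mix of data and parity, and the rebuild touches all of it.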


>
> ... ZFS ... ZFS ... ZFS fanboy and I'm very disappointed it won't be
> adopted in Linux due to its license.  It's in FreeBSD (and FreeNAS). btrfs
> looks like it has some nice improvements so I'm hoping to see it succeed
> alongside ZFS.
>

Weeeee!  From all the theory I've read and watched, ZFS is the end game.
I'm still trying to figure out how to work it into cloud storage.  Does
FreeNAS somehow enable ZFS over iSCSI?  I can't wrap my mind around that,
but the benefits of ZFS on the minimal overhead of iSCSI (vs. NFS) would be
ideal, if not impossible.

I'm tempted to try FUSE+ZFS for our database servers, or even just to go
right to FreeBSD, but that would be a hard sell in my company, and I don't
even want to try it without some lab work to back it up, which is not in
the cards in the near future.