Hardware vs software RAID (was: RAID Controllers and Linux)

Ben Scott dragonhawk at gmail.com
Fri Jun 29 18:34:28 EDT 2007


Subject line changed to reflect threadjack.

On 6/29/07, Tom Buskey <tom at buskey.name> wrote:
> I used to be all for hardware raid but my thinking has changed over the
> years.  I prefer software RAID that the OS supports w/o extra drivers

  There's no doubt that both have their pluses and minuses.
Portability between disparate hardware platforms is my big favorite
for software RAID.  There are some counter-points worth making though:

  It is true that CPUs are getting faster, faster than anything else
in the system.  However, that doesn't necessarily translate to "no
benefit to offloading RAID calculations".  That's because you can't
"bank" idle CPU time.  When a CPU is busy, it still tends to be pinned
at 100% for the duration of that busy task.  Offloading as much as you
can to dedicated co-processors during such periods will still yield a
benefit.  Whether that benefit is significant depends on the specific
circumstances.
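
  To make concrete what "RAID calculations" means here: for RAID-5,
it's byte-wise XOR across the data blocks in each stripe.  A minimal
sketch in Python (the function names are my own, not from any real
RAID implementation) of the work a dedicated controller offloads:

```python
# RAID-5 parity: XOR the corresponding bytes of every data block in a
# stripe.  A lost block is recovered by XOR-ing the survivors with the
# parity block.  Illustrative sketch only.

def parity(blocks):
    """XOR corresponding bytes of equal-length data blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

def rebuild(surviving_blocks, parity_block):
    """Recover a lost block: XOR the survivors with the parity."""
    return parity(surviving_blocks + [parity_block])

d0 = b"\x01\x02\x03\x04"
d1 = b"\x10\x20\x30\x40"
d2 = b"\xaa\xbb\xcc\xdd"
p = parity([d0, d1, d2])
assert rebuild([d0, d2], p) == d1   # d1 recovered from the other blocks
```

Trivial on one stripe, but a rebuild runs this over every stripe on
the array, which is where offloading it to a co-processor can matter.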

  The proliferation of SMP ("multi-core") may or may not make the
above point obsolete.  Software hasn't really changed to be more
multi-threaded.  That often leaves those extra cores sitting idle,
even under heavier (single threaded) workloads.  As long as that
persists, you lose nothing by letting a main core handle RAID work.
If software sophistication catches up and starts making use of
multi-core, the point above becomes valid again.

  During a rebuild or consistency check, hardware RAID can give you a
real benefit.  It keeps all that I/O out of the main system, which can
be very significant.  One feature of recent MegaRAID controllers is
called "patrol read", which is basically a continuous consistency
check.  This helps find disks that have bad sectors but which haven't
read those sectors recently.  Putting this on the CPU could easily
drag performance down.  Again, specifics depend on the circumstances.
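
  For comparison, Linux md has an analogous consistency scan, but the
compare/XOR work runs on the host CPU.  A sketch of kicking one off by
hand (the device name md0 is an assumption; requires root):

```shell
# Start a consistency check on a Linux md array -- the software-RAID
# analog of "patrol read", reading every member to catch bad sectors.
echo check > /sys/block/md0/md/sync_action

# Watch progress and the running mismatch count:
cat /proc/mdstat
cat /sys/block/md0/md/mismatch_cnt
```

Many distributions schedule exactly this via a monthly cron job, so
the I/O and CPU cost lands at a (hopefully) quiet time.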

  If you're working in a multi-OS environment (dual-boot workstation
or home PC, say), hardware RAID is the only way to go.  This is the
other side of the "software RAID, multi-hardware" coin.

  In my experience, hardware RAID has been more robust in the face of
disk failures than software RAID.  I've had disk failures hang
software RAID systems until the system was cold booted, or sometimes
until the dead disk was physically disconnected.  Sometimes this only
shows up when you try to reboot the system, and the BIOS/firmware
insists on trying to boot from the dead disk.

  I suspect this has been mostly because the software RAID setups were
typically two IDE disks hanging off one multi-port IDE chip, and a lot
of IDE chips do not handle failure well.  At the same time, the
hardware RAID controllers were presumably designed to cope with dying
hardware.  So I suspect it doesn't have to be this way.  But it is my
experience.  FWIW.  YMMV.
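
  If you want to see how a given software RAID setup copes before a
disk actually dies, mdadm lets you fail a member by hand.  A sketch
(device names are assumptions; do this on a test array, as root):

```shell
# Simulate a member failure and exercise the recovery path.
mdadm --fail   /dev/md0 /dev/sdb1   # mark the disk as failed
mdadm --detail /dev/md0             # array should now show degraded
mdadm --remove /dev/md0 /dev/sdb1   # detach the "dead" disk
mdadm --add    /dev/md0 /dev/sdb1   # re-add it and trigger a rebuild
```

This tests the md layer's handling, though not the controller-hangs-
on-a-dying-IDE-disk failure mode described above.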

  Finally, I find it's easier to manage hardware RAID.  The OS sees
one big device, and all the OS tools are happy with that.  Meanwhile,
the RAID management software handles the RAID management well.  Plus
you get those nifty disk status indicator lights.  But again, this may
just be a reflection of the tools and software I use.  Where you stand
depends on where you sit.

-- Ben


More information about the gnhlug-discuss mailing list