RAID Controllers and Linux: Ugh!

Tom Buskey tom at buskey.name
Fri Jun 29 20:57:07 EDT 2007


On 6/29/07, Warren Luebkeman <warren at resara.com> wrote:
>
> I have setup a software RAID once in Debian, however I was under the
> impression that hardware raid was the way to go because the recovery is
> better.  I'm using RAID 1, so I need to make sure the system doesn't go
> down if a hard drive crashes, and that it boots properly if a hard drive
> crashes.


Software RAID in Linux handles this quite well.  I have a server that has
replaced 3 boot drives and 1 data drive.  These were RAID 1 setups.  The
original boot drives were 4 GB.  When one failed I replaced it with a 20 GB
drive, because a 4 GB drive cost three times as much.

The process was:
  * see the drive fail in the logs
  * order a new drive & keep running on the remaining drive
  * shut down the system when the drive arrives
  * remove the failed drive & replace it with the new drive
  * boot
  * tell Linux a new drive is there (details left out)
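For the curious, that last step might look something like this with mdadm.  This is a rough sketch, not a recipe: the device names (/dev/md0, /dev/sda, /dev/sdb) are placeholders for your actual array and disks, and a boot drive would also need the bootloader reinstalled.

```shell
# Sketch of re-adding a replacement disk to a degraded Linux md mirror.
# Assumes the array is /dev/md0, the survivor /dev/sda, the new disk /dev/sdb.
cat /proc/mdstat                       # confirm the array is running degraded
sfdisk -d /dev/sda | sfdisk /dev/sdb   # copy the partition table to the new disk
mdadm /dev/md0 --add /dev/sdb1         # add the new partition; resync starts
cat /proc/mdstat                       # watch the rebuild progress
```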


> That being said, I wouldn't have a problem using an OS supported software
> raid if it was as robust as hardware RAID.  It would certainly save a few
> bucks on the controller.


IMHO the main advantages of hardware RAID are speed and hot swap.


On Friday 29 June 2007 5:06 pm, Tom Buskey wrote:
> > On 6/29/07, Warren Luebkeman <warren at resara.com> wrote:
> > > Since we are on the subject of servers, I am now dealing with an issue
> > > that I always face when using a new server configuration:  Is the RAID
> > > Card supported in Linux?  I usually like to go with Adaptec RAID cards
> > > because they provide Linux driver sources so we can compile the driver
> > > ourselves.
> >
> >  ....
> >
> > > I guess this is more of a frustrated rant rather than a question.  It
> > > looks like I will have to use the Adaptec card, which I have no problem
> > > with, except that it's about $400.  Just seems silly (and incredibly
> > > annoying) to me.
> >
> > I used to be all for hardware raid but my thinking has changed over the
> > years.  I prefer software RAID that the OS supports w/o extra drivers.
> >
> > I saw someone reconfigure a hardware RAID on Solaris and make a mistake
> > in fstab.  Solaris was installed on the RAID and wouldn't boot.  The
> > install CD would boot, but it didn't have the drivers for the RAID, so
> > he couldn't mount /etc to edit fstab.  That mistake cost him a few hours
> > to fix.
> >
> > Hardware RAID has a dedicated CPU to handle the XORing for RAID 5.  That
> > was important when you had a 486.  Now you have a multicore, multi-GHz
> > CPU with spare capacity, so the extra computation adds little overhead.
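To illustrate the quoted point about XORing: RAID 5 parity is just the XOR of the data blocks in a stripe, so any one lost block can be rebuilt by XORing the survivors with the parity.  A toy sketch in plain shell arithmetic (made-up values, nothing RAID-specific):

```shell
# Toy RAID 5 parity demo: three "data blocks" and their XOR parity.
d1=42; d2=17; d3=99
parity=$(( d1 ^ d2 ^ d3 ))

# Pretend the disk holding d2 died: rebuild it from the rest plus parity.
rebuilt=$(( d1 ^ d3 ^ parity ))
echo "$rebuilt"    # prints 17, the lost value of d2
```

That's the whole per-stripe computation, which is why a dedicated XOR engine matters far less on a modern CPU than it did on a 486.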
> >
> > What if the RAID card dies?  Can you get a replacement that can read
> > the RAID disks you have now?
> >
> > Sun's 3510FC RAID box had a firmware bug that would make disks
> > disappear.  It happened to 2 drives in a RAID 5 configuration I had.  I
> > had a corrupted RAID and lost the data.  There have been 2 patches
> > since then.
> >
> > I'm now using ZFS on Solaris, which needs the hardware RAID disabled so
> > it can do its own error correction and compression.  It gets patches
> > automatically with the OS (similar to yum/apt-get).
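For reference, a minimal ZFS setup along those lines might look like the following.  The disk names are Solaris-style placeholders, and "tank" is just a conventional pool name; this is a sketch, not the exact configuration described above.

```shell
# Hypothetical ZFS mirror on two whole disks.  ZFS wants the raw disks
# (hardware RAID off) so its checksums cover the full path end to end.
zpool create tank mirror c1t0d0 c1t1d0

# Enable compression on the pool's top-level filesystem.
zfs set compression=on tank

# Verify every block against its checksum, repairing from the mirror copy.
zpool scrub tank
zpool status tank
```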
>
> --
> Warren Luebkeman
> Founder, COO
> Resara LLC
> 1.888.357.9195
> www.resara.com
>

