Software vs Hardware RAID

It’s a commonly held myth that hardware RAID is unconditionally better than software RAID. The claim doesn’t hold in all cases and is particularly wrong at the low end.

Really Cheap Hardware RAID

The cheapest so-called hardware RAID does RAID in the BIOS and relies on an OS driver for support when running in protected mode. This is essentially another sort of software RAID, but with BIOS support for booting from it. Using a disk format different from your OS’s standard software RAID format makes recovery more difficult when things go wrong, and there’s no benefit to it. If you use software RAID-1 from your OS and set things up correctly then you can boot from either disk. Using software RAID-1 for booting and RAID-5 or RAID-6 for the OS and data is a viable option.

Cheap Hardware RAID

Cheap hardware RAID doesn’t have write-back caching and therefore can’t give any significant performance benefit over software RAID. Note that there are different options for how RAID stripes are laid out which can affect performance, so if a cheap hardware RAID device does give a significant performance benefit over software RAID then it’s probably because the block layout happens to work well with your filesystem, which is of course a benefit you could get by tuning software RAID.
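To see why block placement matters, here’s a minimal sketch of how a logical block maps to a disk in a left-symmetric RAID-5 layout (the Linux md default); the 4-disk array and 64-block chunk size are hypothetical values for illustration:

```python
def raid5_map(block, num_disks, chunk_blocks):
    """Map a logical block number to (disk index, stripe number)
    for a left-symmetric RAID-5 layout (the Linux md default)."""
    chunk = block // chunk_blocks           # which data chunk overall
    stripe = chunk // (num_disks - 1)       # each stripe holds N-1 data chunks
    parity_disk = (num_disks - 1) - (stripe % num_disks)  # parity rotates down
    data_index = chunk % (num_disks - 1)
    # left-symmetric: data chunks start just after the parity disk and wrap
    disk = (parity_disk + 1 + data_index) % num_disks
    return disk, stripe

# With 4 disks and 64-block chunks, sequential chunks rotate across all disks:
print([raid5_map(b, 4, 64)[0] for b in range(0, 256, 64)])  # → [0, 1, 2, 3]
```

Whether sequential filesystem writes line up with full stripes, or straddle them and force extra parity work, depends on exactly this mapping, and with software RAID the chunk size and layout are tunable.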

The Mythical CPU Benefits of Hardware RAID

It’s widely believed that hardware RAID is faster because it takes the processing away from the CPU. But the truth is that for at least the last 10 years CPUs have been fast enough, and in fact it’s often been the RAID controllers that were the bottleneck.

When I loaded the Linux RAID-5/RAID-6 driver on my Thinkpad T61, its 2.2GHz T7500 CPU (which isn’t a particularly new or powerful laptop CPU) was tested and shown to be capable of 3227MB/s of RAID-6 calculations. The fastest SATA disk I’ve benchmarked could sustain almost 120MB/s on its outer tracks. If we assume that newer disks are capable of 150MB/s then my Thinkpad could handle the RAID calculations for an array of 20 such disks.

An old P3-1GHz desktop system I use as a low-end server can do 591MB/s of RAID-6 calculations in software; if I were able to connect SATA disks to that old system then it could drive four of them in a RAID array at full speed!

Avoiding CPU use is often claimed as a benefit of hardware RAID. Contiguous IO can use a moderate amount of CPU power: with four disks running at once I could potentially use 20% of one core of a T7500. But contiguous IO usually isn’t that common. If you are transferring data over a Gigabit Ethernet port then you are limited to a little over 100MB/s, and most applications don’t involve large contiguous data transfers anyway, so the amount of data actually transferred is lower still.
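The estimates above are easy enough to sanity-check. A quick sketch of the arithmetic, using the measured rates from the text and the assumed 150MB/s per-disk figure:

```python
raid6_rate = 3227   # MB/s of RAID-6 calculations on the T7500 (measured)
p3_rate = 591       # MB/s on the old P3-1GHz system (measured)
disk_rate = 150     # assumed sustained MB/s for a newer SATA disk

# How many disks each CPU could keep up with for RAID-6:
print(raid6_rate // disk_rate)       # → 21 (the text rounds down to 20)
print(round(p3_rate / disk_rate, 1)) # → 3.9 (so four disks, near enough)

# CPU share for four disks doing contiguous IO on the T7500:
print(round(4 * disk_rate / raid6_rate * 100))  # → 19 (roughly 20% of one core)
```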

One way that hardware RAID can save CPU time is when the interface to the hard drives is inefficient. The IDE interface never seemed particularly efficient, and large transfers to IDE disks often required more CPU time than expected. For such disks, putting them behind a RAID controller that emulated a giant SCSI disk could save some CPU time.

Back in 2000 I did some tests on a Mylex DAC 960 hardware RAID controller that was only capable of sustaining 10MB/s. This wasn’t a problem as the applications were seek-intensive and the Mylex performed well at that task. But for contiguous IO software RAID would have given much better performance.

The Real Benefits of Hardware RAID

A good hardware RAID system will have NVRAM for a write-back cache. This can dramatically improve write performance, which is very important on RAID-5 and RAID-6 systems as they perform really badly on small writes.
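The reason small writes hurt so much on RAID-5 is the read-modify-write cycle: updating one chunk requires reading the old data and old parity, recomputing the parity, and writing both back, so one logical write costs four disk operations. A minimal sketch of the RAID-5 parity update, using single-byte chunks for illustration:

```python
def xor(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings (the parity operation)."""
    return bytes(x ^ y for x, y in zip(a, b))

def raid5_small_write(old_data, old_parity, new_data):
    """Read-modify-write: new parity = old parity XOR old data XOR new data.
    One logical write costs two reads and two writes on the array."""
    new_parity = xor(xor(old_parity, old_data), new_data)
    return new_data, new_parity

# Three data chunks and their parity:
d = [b'\x01', b'\x02', b'\x04']
p = xor(xor(d[0], d[1]), d[2])                    # parity is the XOR of the data
new_d0, new_p = raid5_small_write(d[0], p, b'\x08')
assert new_p == xor(xor(b'\x08', d[1]), d[2])     # parity stays consistent
```

An NVRAM write-back cache lets the controller acknowledge the write immediately and schedule those four operations later, possibly merging several small writes into a full-stripe write, which is why it’s the feature that actually matters.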

Good hardware RAID controllers will often support many more disks than a non-RAID controller. If you want to have more than 4 disks then hardware RAID has some serious benefits, but only if it has an NVRAM write-back cache; otherwise you get no useful benefit and you might as well use software RAID.

Conclusion

If you can’t afford a high-end RAID system such as an HP CCISS then use software RAID. Software RAID will be faster and more reliable than cheap hardware RAID.

If you need more than four disks then you can probably benefit a lot from hardware RAID with write-back caching.

3 comments to Software vs Hardware RAID

  • Hi,
    We have tested HP CCISS for a while, and the strange thing is that software RAID in Linux RedHat 5.8 outperformed the hardware RAID on the controller card (P812).

    I will start to check whether we can use MetaTool (MD devices), or whether we should aim for LVM RAID in RedHat 6.

    If someone has some tips on how to improve this performance, then please contact me.

    Regards, Tomas

  • archenroot

    Exactly, I also can’t find any real reason for hardware RAID these days. Almost any modern CPU will outperform a RAID controller, and I saw very interesting results with software RAID just by switching to a more powerful CPU.

    Regarding the write-back cache, you can enable it using RAM. Of course a power outage can then cause problems, which is why hardware RAID cards have batteries. In my case I’d rather go with a UPS: on power failure I unmount all arrays and switch off all devices until power has been back on for more than 10 minutes (a stable interval in my case).

    I would like to ask whether there is some RAID calculation simulator (C, Perl, Python) where one can see simulated CPU load and performance?

    Thanks.

  • archenroot

    Well, to be honest :-)), I do see one real reason for using a hardware card (in my case): these cards can connect many more disks (16 and more) than standard cheap SATA cards (1-4). So I use hardware cards only for connecting many disks together, while running software RAID on top of them.
