# software vs hardware raid - YOUR experiences wanted!

## chashab

wanted: experiences with proprietary/opensource tools, rebuilding disk images, speed and performance comparisons

software raid (linux md) claims: faster than low to mid range raid cards

software tools and drivers open source and well documented (compare EVMS w/ your proprietary software)

hardware raid claims:faster in real world situations where the proc is busy doing something else

logic resides in the card, won't break when you mess with the kernel

as an example, consider these two scenarios; software or hardware?

server 1:  busy lamp style web server w/ raid 5.  perhaps software raid if you have enough ram and dual procs?

server 2: server w/ storage array (many TBs) w/ raid 5. one storage raid performance test has software raid winning even w/ 1 proc and 1gb ram!

Last edited by chashab on Fri Jul 22, 2005 5:43 pm; edited 3 times in total

----------

## makton3g

https://forums.gentoo.org/viewtopic-t-244941-highlight-fake+raid.html

This has some benchmarks for "fake raid to MD". Fake won.

I used to run software Raid 0, but I wanted to set up XP on my raid drives, so I have migrated to hardware raid 0 now (dmraid, dmsetup).

There is a noticeable difference between the two raids in my Linux. The fake raid does feel faster when accessing a bunch of information for a bit of time. "Dmraid" picks up my fake raid and maps it for me, so I have mapped devices just like normal harddrives. I set up my system just like I would do on a normal system. I think the "fake raid" was easier, now that the software has caught up to the hardware (driver wise).

----------

## lbrtuk

It's not just about speed - it's also about having to rely on weird binary-only proprietary drivers that aren't guaranteed to be updated for kernel version xyz in 2 years' time.

----------

## 1U

I always avoid software raid like the plague. I just recently had to set up a 2 drive raid1 (3WARE Escalade 7006-2 ATA RAID Controller Card), and a 4 drive raid 10 (LSI MegaRAID SATA 150-6) before that. Both are supported by the kernel in the scsi section. I like how easy it is to do with high end hardware raid controllers, and also their support in the linux kernel. Also, I don't know what happens with software raid, but at least with hardware raid, if you get failures you usually get good tools to help you fix the problem and replace the drives, along with rebuilding the raid. I haven't used software raid in Linux, but I'm sure it can't make up for those features.

The only software raid I ever used was an adaptec card in a Winblows box. The difference between the same hard drives performing as a standalone typical drive and as a software raid of any sort on that computer was like night and day. It really degraded the performance.

The only factor that should hold you back when choosing is cost. Yeah, they're expensive, but definitely worth it for storing anything other than pr0n. Like they say, you get what you pay for.

As I've mentioned above, the controllers I've worked with are by 3ware and LSI. I like both companies, but 3ware tends to be too expensive. LSI makes fairly cheap lower-end hardware controllers that perform decently and give you a lot of bang per buck. I try to avoid Adaptec, as most of theirs are poor in performance and cost more than the others. Check eBay especially, as they tend to go cheap sometimes, but they aren't sold often. I tried looking at benchmarks of different controllers, but now I don't even bother, as pretty much every hardware review site out there (a) has different results than the others and (b) uses winblows benchmarks.

----------

## chashab

 *1U wrote:*   

> I always avoid software raid like the plague.
> 
> 8<------------------------------
> 
> I haven't used software raid in Linux, but I"m sure it can't make up for those features.
> ...

 

I find it interesting that you avoid software raid when you haven't even tried it.  One of the reasons I'm actually going with software raid is the abundance of features available compared to the hardware raid utilities that I'm also considering.  As an example, check out EVMS.  Of course you can use EVMS with hardware raid solutions as well, but its software raid options (using md) are complete.

Now if you've skewered your OS and your MOBO has raid utilities built in, or you have a rescue disk for your raid, that would come in handy.  Of course the software raid guys could also whip out a knoppix disk and repair both their skewered disks AND yours, but i digress.   :Smile: 

----------

## 1U

Well, that was only one of the reasons; I don't see what there is to even compare between them. Performance, ease of use, hardware support, and reliability are all superior with hardware raid. Like I said, the only downside is cost, though in my opinion all those benefits are worth the money. I could see someone using software raid for storing random stuff on just a home system, but for a LAMP server I wouldn't even give it a second thought.

----------

## NeddySeagoon

1U,

When your hardware raid card dies and it's obsolete, you are dead. Get your backups out.

With kernel software raid, put the drives on another controller or another motherboard and all is well again.

Also, in a 33MHz 32bit PCI bus box, hardware raid is strangled by the 133MB/sec PCI bus bandwidth.

On-board SATA chipsets (not on a PCI bus) can achieve sustained data rates of at least twice that.

Of course, you need drives that can do better than the typical 54MB/sec head-to-platter data rate.

----------

## 1U

Good point. Although all hardware raid controllers have a 64bit pci interface, and if you were serious about it you would use it on a board that supports that, thus eliminating the pci bottleneck. Since drives can't utilize all that speed, it doesn't really matter anyways, like you said. The thing that does matter, though, is that one way or another, software raid will require more system resources than hardware raid.

And since when do raid controllers die? It's possible, but I'm sure it happens even less often than it does to motherboards. And I'm sure there's some way to recover from that too. Nobody would use, pay for, and rely on such expensive hardware raid controllers if the death of the controller meant unrecoverable data.

Though I may seem a bit one-sided towards hardware raid, I'll have to check out software raid sometime soon, as I like minimalistic setups. The more uses I can get out of the same hardware the better. But how would it function if, let's say, you had 4 onboard sata ports and you decided to utilize them all in a raid 10? Would that slow down the computer at all, or is that even possible?

Btw NeddySeagoon, nice signature  :Smile:  I remember hearing that quote before somewhere else but it was slightly different, it was "Those who have had drive failure and those who will experience it". Though you gotta admit I'm sure there's still plenty of people who don't do backups after failures hehe.

----------

## NeddySeagoon

1U,

All hardware dies. The random failure rate is proportional to the junction temperature. It doubles for every 10C rise.

The point is that the data format on the disks, for hardware raid, is, well, hardware specific. With software raid, it's the same for all kernels. You can even swap the drives around and the kernel won't mind.

With software raid0, I get an 84MB/sec sustained transfer rate, but my SATA controller (SIL 3112A) pretends to be a PCI device. The overhead in raid0 is minimal; data is read/written from two (or more) drives instead of one. I've not tried timing raid1; my /boot (which is raid1 to keep grub happy) is too small to time.
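
A quick way to check that kind of sustained-read figure yourself, if you want to compare the array against a single member disk (a rough sketch; the device names are assumptions for illustration, run as root):

```shell
# Timed sequential reads from the md array (device name assumed)
hdparm -t /dev/md1

# Same test against one member disk for comparison
hdparm -t /dev/sda
```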

----------

## 1U

Is raid 10 supported in software? I just hope the difference between normal drive speed and 2 of them in a raid 1 won't be like my experience with software raid in winblows, where it was terribly slow.

----------

## NeddySeagoon

1U,

You can nest kernel raid as much as you like.

You define, say, /dev/sda1 and /dev/sdb1 as /dev/md0 in raid0,

and a few more, then define /dev/md0 and /dev/md1 as a raid set at some other level.

Windows (BIOS RAID) raids whole drives, unlike kernel raid, which uses partitions.
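
A minimal sketch of that nesting with mdadm (the device names are hypothetical, not from this thread):

```shell
# Two striped (raid0) pairs built from partitions
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdc1 /dev/sdd1

# Then define the two stripes as a raid set at another level, e.g. a mirror
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/md0 /dev/md1
```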

----------

## lbrtuk

1U, windows software raid sucks because it's always weird, cheap, proprietary software. This does not mean that all software raid sucks.

On linux, it is generally recommended to use software raid in all but the very high end. The reliability is far better. It means you aren't tied to one hardware vendor for your data integrity (RAID controllers do go wrong), and you aren't tied to one vendor's closed drivers and updates. The tools are generally much better and are standardised (mdadm and raidtools).

Edit: Oh, and software raid usually has superior speed, simply because the drivers for hardware raid cards are often badly written. What's more, once the hw raid cards are shipped, that's basically it; the vendor can't make (m)any performance enhancements / bugfixes once it's out the door. Software raid gets bugfixes and speedups almost every kernel revision.

----------

## 1U

Thanks for explaining all that, I'll definitely give it a try on a test machine  :Smile: .

----------

## chashab

NeddySeagoon; thanks for the nesting tip, i'll remember that.

one of the servers i have is a dell: dual xeon 2.8, 1 GB ram, 2 SCSI drives using software raid 1.  it's used to serve a JSP site with about 1.2GB of traffic daily.  i just emerged apache and ran top; md3_raid1 barely shows at the bottom of the top list.  the pages still came up fairly fast.

anyone have any suggestions on how to go about benchmarking this machine?

thanks for your help guys.  i've decided to go with software raid on my new servers coming next month.  i'll test them out and let you know how they perform!

----------

## chashab

 *NeddySeagoon wrote:*   

> 
> 
> You can nest kernel raid as much you like.

 

so what you're saying is that i should get 6 drives and configure them in 3 pairs of raid 0, then use the three md devices to create a raid 5?  that sounds like fun!   :Smile: 

Last edited by chashab on Fri Jul 22, 2005 6:27 pm; edited 1 time in total

----------

## NeddySeagoon

chashab,

You can do that if you want.

----------

## simeli

anyone here know of a solution to grow an existing raid 5 by adding more disks and then using xfs_growfs? i believe this cannot currently be done in software, but i'd like to grow my /home over time by adding more disks.

----------

## dstutz97

 *simeli wrote:*   

> anyone here know of a solution to grow an existing raid 5 by adding more disks and then using xfs_growfs? i believe this cannot currently be done in software, but i'd like to grow my /home over time by adding more disks.

 

LVM is what you want...

Add 50GB to a logical volume named "share" in the volume group vg0:

```
# lvextend -L+50G /dev/vg0/share
```

grow the xfs filesystem that lives on /dev/vg0/share:

```
# xfs_growfs /mnt/share
```

You don't even need to unmount the filesystem!  :Smile: 
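
The step the above skips is getting the new disk into the volume group in the first place. A rough sketch, assuming the new disk shows up as /dev/sdc (the device name is hypothetical):

```shell
# Label the new disk as an LVM physical volume
pvcreate /dev/sdc

# Add it to the existing volume group, making its space available to lvextend
vgextend vg0 /dev/sdc
```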

----------

## obrut<-

from make xconfig:

 *Quote:*   

> Support adding drives to a raid-5 array (MD_RAID5_RESHAPE)
> 
> A RAID-5 set can be expanded by adding extra drives. This
> 
> requires "restriping" the array which means (almost) every
> ...
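
With that kernel option enabled, the grow itself is driven from userspace with mdadm. A hedged sketch (the device names and final disk count are assumptions):

```shell
# Add the new disk to the array as a spare
mdadm --add /dev/md0 /dev/sdd1

# Reshape the raid5 from 3 to 4 active devices (restripes the whole array)
mdadm --grow /dev/md0 --raid-devices=4

# Once the reshape finishes, grow the filesystem on top (xfs example)
xfs_growfs /home
```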

 

----------

## Mad Merlin

 *chashab wrote:*   

>  *NeddySeagoon wrote:*   
> 
> You can nest kernel raid as much you like. 
> 
> so what your saying is that i should get 6 drives and configure them in 3 pairs of raid 0, then use the three md devices and create a raid 5?  that sounds like fun!  

 

You can, but it's generally best to put your redundancy at the lowest level. A RAID 0 of 3x(2 disk RAID 1) would probably be a better choice, in this case.
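
As a side note, md also has a native raid10 personality that handles the mirrored-stripe layout in a single array, with no manual nesting. A sketch, assuming four hypothetical disks:

```shell
# One-step raid10 across four partitions (device names assumed)
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
```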

----------

## Cyker

My take:

3 RAID types:

Hardware

Accelerated

Software.

Of the three, Hardware RAID is by far the most expensive.

However, it requires the least data bandwidth of all the RAID interfaces. (You CAN use Hardware RAID across a PCI bus without saturating it! Unlike the other RAIDs, it only sends the data to the array 'drive' ONCE, not 3+ times; the RAID controller splits that data stream up between the different drives attached to it.)

Hardware RAID cards are the easiest to set up. They generally require, at minimum, generic IDE support, and the array will show up as a single drive to the OS. Proper drivers are needed to utilise the array fully 'tho, especially for array management.

Accelerated RAID is most commonly found on motherboards and cheap RAID cards (If your RAID card is less than £200, it's probably 'Accelerated' RAID and not Hardware!)

All the hard work (parity calculations etc.) is done by the main CPU, and it requires an OS driver to work. These are usually proprietary and very buggy. Accelerated RAID is by far the slowest of the three.

It's simpler to set up than Software RAID, as it often has a BIOS setup system similar to Hardware RAID, but getting it to work in Linux is a lesson in pain because you either have to use the buggy proprietary kernel module drivers, or dmraid.

The RAID format is usually the same between Linux and DOS drivers, so both Linux and Windows can access the same RAID array.

Software RAID is by far the cheapest, but it also has the heaviest data overhead, as duplicate/stripe data has to be sent to each drive individually. Don't even think about using it with IDE; SATA/SCSI is virtually a requirement for Software RAID! And even then, you don't want a shared bus in the way. (If you want to use Software RAID, stay away from shared-bus systems - this means PCIe over PCI-X/PCI, and HyperTransport over Intel's shared bus!)

It's considerably scarier to set up than Hardware RAID ('tho slightly easier than Accelerated RAID, on Linux at least  :Wink: ). You can do it with about 7 commands, but you REALLY NEED TO KNOW what you're doing in order to tailor the commands properly.

It's not hard, but the art of reading is on the decline, and many a newbie has messed up their RAID array by blindly following a guide!

Software RAID has the biggest disparity in performance - A properly configured Software RAID on a decent system will blow the socks off Accelerated RAID, and probably out-perform Hardware RAID too.

A poorly configured one will limp along painfully.

Compatibility is a big advantage of Software RAID. Unlike the others, you can take all the disks and put them in a totally different machine, then boot it up in a Linux with RAIDx support, and it'll automatically detect them and offer them up for mounting  :Smile: 

Software RAID is the most flexible - You can even run RAID6 given enough drives - Something which is only available on the really expensive Hardware RAID controllers.

Software RAID does use the most resources (CPU power, I/O bandwidth, IRQ spamming etc.) of all three, but on modern machines the performance hit is tiny. On a dual-core/cpu system, it's practically non-existent!

I definitely think Software RAID is nothing to be sneezed at - It offers very good performance and the best value, since the cost of a Hardware RAID controller can easily be the same as half the array's disks!

You do need a good disk subsystem to cope with the considerably higher I/O traffic, so a good mobo or some multi-lane PCIe controller card is highly recommended.

Hardware RAID is good if you've got a chunk of money burning a hole in your pocket, and want to get the array up with the minimum of fuss. Make sure you buy from a reputable, and pref. Linux friendly, company. 3ware seems to have a good rep here  :Smile: 

Accelerated RAID is just pants. Unless there is some critical reason to use it, stay away. In Linux it is at least as hard to set up as Software RAID, if not harder, and if the RAID chip on the mobo goes, you're screwed. At least with a daughterboard you can buy a new one, because they aren't revised as quickly. With the rapid changing of the mobo world, finding a similar mobo might be hard, esp. for older ones!

Some people insist on using it because they feel it is there (Aye! A bit like them damned AMD64 users! (j/k guys!  :Mr. Green: )), but seriously, don't fall into this trap.

----------

## zeek

Software Raid: played up to be better than it is.  MD RAID1 will come up unsynced and broken after a power failure if it was being written to when the power failed, which means it needs manual attention to fix.  I've never had this problem with Driver Accelerated Raid or HW Raid.  MD Raid0 doesn't give real-world performance benefits either: I moved a busy mysql DB to an MD Raid0 and saw no performance improvement.  The next step was to spend some $$$ and move it to a Megaraid solution, where the performance went through the roof.  Yes, it's not a fair comparison, but it shows you get what you pay for.

Driver Accelerated Raid aka Fake Raid: my experience here is limited to HighPoint controllers.  linux support used to be poor, the initial 2.6 kernel dropped all support for this type of raid, and I haven't tried it since.  On Windows it works great.  If this is what you have, I'd suggest getting on the Google and finding out what others have to say, as I'm sure the performance is all over the spectrum.

HW Raid: experience here is limited to server-class HW.  It's expensive and performs well.  Yes, it means you're tied to a vendor -- isn't that what you want?  If you can afford this, it means you have a warranty and vendor support.

To sum it up, software raid is ok; I use it on my workstation and at home, but I've had disappointing results with it when running servers.  If you're buying rack servers, spend a few extra $$$ and get the HW raid gear.  It's well worth the investment.

----------

## bunder

my experiences:

sw raid:  works alright, but i don't like rebuilding my mirror every boot.

hw raid: works alright, but i can't do any diagnostic stuff on it without installing the kernel modules and software for the lsi raid bios.

i don't really notice a speed difference, but my sw is ide and my hw is scsi, ymmv.  

one pita about raid:  if you can't afford all your discs at once, mirror first before doing a striped mirror.  i've lost 2 drives the hard way (one, then 6 months later, the other).   :Wink: 

----------

## Cyker

I would have thought ALL types of RAID would need to check/resync the drives if they got powered off during a write?

Heck, even my non-RAID drives will be forced to fsck themselves on boot-up if the drive is being written to during power loss.

One REALLY REALLY cool thing about high-end Hardware RAID is they have a battery-pack connected to them, and if there's a power outage it can (If setup properly...) quickly dump all the data to the disks before shutting down the drives, or just maintain the write-memory and complete the write-op when you power back on.

It's a little redundant if you have a UPS (And if you're spending that kind of money on a daughterboard, you WILL be using a UPS. RIGHT?!), but having an extra safety net is never a bad thing  :Wink: 

I've had a total of 2 power-failures since I've been using mdadm RAID5 on my server.

The first time, it noticed this and executed a resync op.

It does this in the background however, and I've found it doesn't impact the system that much unless you really hammer the disk I/O, and even then the re-sync will defer to the data op and let it have priority.

The second time, I'd read about the Write-Intent Bitmap extension and activated it. This is kinda like journalling-lite for software RAID, and it meant that post-power-fail, Linux ONLY re-syncs the parts that were in use. This is much faster than a full re-sync, and the array was back in a consistent state before the kernel handed over to init  :Smile: .
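
Turning that on is a one-liner against a live array (a sketch; /dev/md0 is an assumed device name):

```shell
# Add an internal write-intent bitmap to an existing md array
mdadm --grow /dev/md0 --bitmap=internal
```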

It's all these little touches that have really impressed me with the mdadm software RAID in Linux.

Most of my Software RAID experience has been with RAID5.

I must admit, I've not touched RAID0 and 1 in Software mode. I'd be more inclined to find a pure hardware RAID1 card that needs no drivers and run it off that. That way, you could boot off it a lot more easily than with the kludgey methods for Software RAID1/0.

I recently remembered one area where I've found Software RAID (5) really gets thrashed by Hardware RAID5: lots of small (sub-stripe-sized) writes. That really butchers the performance, and your IOWAITs go through the roof.

----------

## zeek

 *Cyker wrote:*   

> 
> 
> One REALLY REALLY cool thing about high-end Hardware RAID is they have a battery-pack connected to them, and if there's a power outage it can (If setup properly...) quickly dump all the data to the disks before shutting down the drives, or just maintain the write-memory and complete the write-op when you power back on.
> 
> It's a little redundant if you have a UPS (And if you're spending that kind of money on a daughterboard, you WILL be using a UPS. RIGHT?!), but having an extra safety net is never a bad thing 
> ...

 

I've been in a server room of hundreds of computers during a power failure.  There was a problem with getting the generators online when I heard one of the most eerie sounds ever: hundreds of computers and 3 HUGE AC units all switching off simultaneously.  Battery backup is not redundant.  :Smile: 

----------

## Suicidal

I have used software and hardware raid under Linux.

My software raid was mirroring /boot and striping everything else. The results were excellent; it didn't feel like any SW RAID I had ever used under windows. It was fast and responsive.

I had 13 servers configured this way at my previous job, which I left in December, and so far I have received no calls from my previous employer about raid failures.

----------

