# Hardware vs Software RAID

## JeffBlair

Hey all. Well, I found a good deal on an older 3ware 9500S-8 card. I've got it set up as a 5x2TB RAID5 right now, and I might be adding another 2TB to it in the near future.

So, what I was wondering is this: I know that card's old. Would it be better to just forget it, back up what I have on there now, and convert it over to a software RAID5? I mainly use it for streaming my DVD/Blu-ray collection and movies, and for backups of my PCs. Sooner is better than later, so I don't have to re-do a lot of stuff.  :Wink: 

Thanks all.

Here's what I got really quick from my tests on the 3ware card:

```
hdparm -tT /dev/media/media

/dev/media/media:
 Timing cached reads:   3296 MB in  2.00 seconds = 1648.82 MB/sec
 Timing buffered disk reads: 210 MB in  3.02 seconds =  69.51 MB/sec

grep MemTotal /proc/meminfo
MemTotal:        2035188 kB

grep bogo /proc/cpuinfo
bogomips        : 5866.76
bogomips        : 5866.46
```

----------

## cst

Hardware RAID controllers should in theory be better, but just for comparison, this is what I get on a "software" (motherboard) RAID:

```
 hdparm -tT /dev/dm-0

/dev/dm-0:

 Timing cached reads:   8212 MB in  2.00 seconds = 4107.74 MB/sec

 Timing buffered disk reads: 664 MB in  3.00 seconds = 221.00 MB/sec
```

----------

## NeddySeagoon

JeffBlair,

hdparm is not a benchmark.  It carries out sequential reads from the outside edge (where it's fastest) of whatever device you point it at.

What sort of bus is your 3ware 9500s-8 card on ?

If it's a 33MHz, 32-bit PCI bus, the theoretical maximum is 133 MB/sec shared by all connected devices, and in practice you will be doing well to get 100 MB/sec.
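That 133 MB/sec figure falls straight out of the bus clock and width; as a quick sanity check:

```shell
# 33.33 MHz clock * 4 bytes (32 bits) per transfer
# = ~133 MB/s, shared by every device on the bus.
pci_bw_mb=$((33333333 * 4 / 1000000))
echo "PCI 33MHz/32-bit theoretical peak: ${pci_bw_mb} MB/s"
```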

Chipsets on the motherboard are not constrained by bus specifications and often operate at much faster clock speeds, and concurrently with one another.

Check drive transfer rates with bonnie.
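If bonnie isn't to hand, a quick (and equally crude) stand-in is timing dd against a scratch file. This is only a sketch: mktemp defaults to /tmp, which may be tmpfs, so create the scratch file on the array you actually want to measure.

```shell
# Write then read back a 64 MB scratch file; dd reports throughput
# on stderr. conv=fsync forces the write to actually reach the disk.
# NOTE: point mktemp at the array under test, not a RAM-backed /tmp.
tmpfile=$(mktemp)
dd if=/dev/zero of="$tmpfile" bs=1M count=64 conv=fsync
dd if="$tmpfile" of=/dev/null bs=1M
rm -f "$tmpfile"
```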

----------

## Veldrin

What happens if the card dies? Do you have a replacement card to recover your array?

IMO software RAID is better for home use (not for professional use), as the slowdown is much less of a worry than the vendor lock-in you get with hardware RAID.

OTOH, though I never really tested it, hardware RAID is supposedly faster than software RAID, while software RAID's biggest advantage is its independence from any vendor.

Looking at your setup, at least the parts you mention:

I assume a network connection; at what speed?

If you are running at 100 Mbit or wireless, then definitely convert to software RAID.

If you are running gigabit, how is the card connected? PCIe, then you should be fine. PCI, then you again have the bus to worry about, as it could be shared with the RAID card.

Considering software RAID: what CPU and how much RAM are we talking about?

I am trying to find the bottleneck, or the weakest link, in the setup.

just my .02$

V

----------

## frostschutz

If it's a server with several thousand dollars in hardware and several hundred in monthly hosting costs, with several TB of disk space where every bit of speed counts, then sure, why not spend another $800 on a hardware RAID controller with battery backup. For home use or cheap desktop server hardware, software RAID will offer practically the same performance and much more flexibility at a lower price. As long as you're fine with it working in Linux only, there's no reason to use anything other than mdadm...
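For reference, the mdadm side of such a conversion is only a couple of commands. This is a sketch with assumed device names (/dev/sdb through /dev/sdf); the commands are stored and printed as a dry run rather than executed.

```shell
# Dry run: print the commands instead of running them.
# Device names are assumptions; run them as root to do it for real.
create="mdadm --create /dev/md0 --level=5 --raid-devices=5 /dev/sd[b-f]"
scan="mdadm --detail --scan >> /etc/mdadm.conf"
echo "$create"
echo "$scan"
```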

----------

## Mad Merlin

I wrote this up to address this very common question: http://skrypuch.com/raid/

----------

## krinn

 *JeffBlair wrote:*   

> 
> 
>  Timing cached reads:   3296 MB in  2.00 seconds = 1648.82 MB/sec
> 
>  Timing buffered disk reads: 210 MB in  3.02 seconds =  69.51 MB/sec
> ...

 

It's actually simple to estimate how much you will get: disk_rate * number_of_disks_in_stripe <= bus_max_bandwidth (more or less).

Any SATA1 controller can drive a SATA1 disk at ~65-100 MB/s (a big interval, I know, but just check the results users post here; the average is ~90).

I think your bus is showing its limits there.

So your card isn't helping there: 69 MB/s is poor in absolute terms, but quite good for a PCI interface, which must be maxed out. You could say it's a good card on an old, limited interface.
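Plugging the thread's numbers into that rule of thumb (the per-disk rate and bus ceiling are assumed values):

```shell
disks=5        # drives in the stripe
per_disk=90    # MB/s per drive, typical SATA1-era figure (assumed)
bus=100        # MB/s, realistic 33MHz PCI ceiling (assumed)
raw=$((disks * per_disk))           # what the drives could deliver
est=$raw
[ "$raw" -gt "$bus" ] && est=$bus   # the bus caps it first
echo "drives: ${raw} MB/s, bus-limited estimate: ${est} MB/s"
```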

And a hardware RAID controller can't show what you'll gain over software or fakeraid with just an hdparm run, because during the test the software RAID has the disks and all the CPU it wants to itself.

When your CPU gets busy doing something else, a hardware controller will keep going at the same rate, while soft/fake RAID throughput drops because of the busy CPU.

You can also easily guess that cst is using 2 striped disks at ~110 MB/s each; those must be good SATA2 or poor SATA3 drives (or 4 good ATA-5/6 drives).

It's up to you what you do with that. The bus must be maxed out (or else this time the card sucks), but even when the CPU is busy your throughput will stay stable, and for streaming without hiccups and the like, that's really good.

Considering you're using 2TB drives, I suppose speed isn't what you were targeting so much as safety and space. That's what this card will give you.

If your CPU is big enough, it might be able to handle decoding your movies and still have spare time for software RAID. But again, remember it's software: the more disks, the more CPU you need, whereas hardware RAID controllers are built to control a given number of disks and have only that one task.

----------

## JeffBlair

Yeah, I just have it in a regular PCI slot. Later on I might swap out the motherboard for a server one so I can get the full PCI-X use out of it. 

As for decoding the movies, my media PC should be doing that, since I just have the drive mapped there. The storage server doesn't do a whole lot other than store all my data and run my torrent program. 

If I do convert it back over to software, I'll have to end up swapping the motherboard out for an older AM2+ board; the one I have in there now doesn't have enough SATA connectors. Or I might see if I can find a cheap newer one for the Phenom II I have. 

Currently I have 2 GB of memory in it, but again, the board is limited to only 2 slots... It's looking more and more like I should upgrade that board. 

I currently have dual gigabit NICs in there, bonded: one onboard and one PCI NIC. Again, on my next board I'm hoping to have dual onboard NICs. Ugh... off to Fry's I go.  :Smile: 

----------

## wildbug

 *JeffBlair wrote:*   

> If I do convert it back over to software, I'll have to end up swapping out the motherboard to an older AM2+ board. The one I have in there now doesn't have enough SATA connectors. Or, I might see if I can find a cheap newer one for the Phenom II I have. 

 

What about the 8 SATA connectors on the 3ware?  You can set it up in "JBOD" mode so it passes unused disks through to the OS.  If you enter the BIOS during boot, I think it's under the "Policy" menu (Alt-P, then Enter).

2GB of RAM should be *plenty* for a fileserver.

----------

## JeffBlair

Right, but if I'm just using the JBOD option, the data is still going through the PCI bus rather than the motherboard's SATA ports. Just wondering whether that would be quicker or not.

----------

## NeddySeagoon

JeffBlair,

Using your RAID card in JBOD mode will be significantly slower. At present your RAID card gets the data over the PCI bus once and does all the work of distributing it over the multiple drives.

If you move that work to the CPU, the data for every drive (parity included) has to cross the bus individually, so the bus carries several times as much traffic as the payload you actually write.

Reads are similar.

Using a bond with one NIC in a PCI slot is a bad idea.  A PCI slot cannot keep a 1G NIC busy, and depending on how your bond is working, you may be passing data to/from your RAID over the PCI bus twice: once from the RAID to main RAM, then from RAM to the NIC.
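Since both hops share the same ~100 MB/sec of practical PCI bandwidth, a back-of-the-envelope ceiling for serving files over the PCI NIC (numbers assumed from earlier in the thread) looks like:

```shell
bus=100   # MB/s, practical 33MHz PCI ceiling (assumed)
# Serving a file crosses the bus twice (RAID to RAM, then RAM to
# the PCI NIC), so the two transfers split the bandwidth.
serve_limit=$((bus / 2))
echo "best case over the PCI NIC: ~${serve_limit} MB/s"
```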

I would not be surprised to learn that your RAID gets faster if it's the only PCI plug-in card you have.

Post your lspci output; it would be useful to see how your PCI hardware is distributed over your PCI buses.  You will have more than one.

----------

## JeffBlair

Yeah, that's why if I do move it back to a software RAID, I won't use the 3ware card, and I'll just have them all plugged directly into the motherboard. 

The other motherboard that I'd use has dual gig NIC's onboard, so I wouldn't have to use my PCI NIC card.

And, here's the output for my lspci:

```
00:00.0 Host bridge: Intel Corporation 4 Series Chipset DRAM Controller (rev 03)
00:02.0 VGA compatible controller: Intel Corporation 4 Series Chipset Integrated Graphics Controller (rev 03)
00:1b.0 Audio device: Intel Corporation 82801G (ICH7 Family) High Definition Audio Controller (rev 01)
00:1c.0 PCI bridge: Intel Corporation 82801G (ICH7 Family) PCI Express Port 1 (rev 01)
00:1c.1 PCI bridge: Intel Corporation 82801G (ICH7 Family) PCI Express Port 2 (rev 01)
00:1d.0 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #1 (rev 01)
00:1d.1 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #2 (rev 01)
00:1d.2 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #3 (rev 01)
00:1d.3 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #4 (rev 01)
00:1d.7 USB Controller: Intel Corporation 82801G (ICH7 Family) USB2 EHCI Controller (rev 01)
00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev e1)
00:1f.0 ISA bridge: Intel Corporation 82801GB/GR (ICH7 Family) LPC Interface Bridge (rev 01)
00:1f.2 IDE interface: Intel Corporation 82801GB/GR/GH (ICH7 Family) SATA IDE Controller (rev 01)
00:1f.3 SMBus: Intel Corporation 82801G (ICH7 Family) SMBus Controller (rev 01)
02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 02)
03:00.0 Ethernet controller: Linksys Gigabit Network Adapter (rev 10)
03:01.0 RAID bus controller: 3ware Inc 9xxx-series SATA-RAID
```

----------

## NeddySeagoon

JeffBlair,

The numbers at the start of every lspci line describe the topology of the bus arrangement.

For example 

```
00:00.0 Host bridge: Intel Corporation 4 Series Chipset DRAM Controller (rev 03)
```

shows bus 00, device 00, function 0.

Everything that starts with 00: is on PCI bus 00. As that is all internal devices in the motherboard chipset, it is not constrained to 33MHz operation.

You cannot plug anything into that bus, so Intel can do as they please.

I suspect you have a bus 01 somewhere, but it has no devices on it, so it's not shown. Perhaps it's an AGP port; you may not have a connector that lets you use it yourself.

```
02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 02) 
```

is your internal/onboard NIC.  It's the only device on bus 02, so it does not share bus bandwidth with other devices.  As you can't add anything to that bus, it may well operate faster than 33MHz too.

Bus 03, which contains

```
03:00.0 Ethernet controller: Linksys Gigabit Network Adapter (rev 10)

03:01.0 RAID bus controller: 3ware Inc 9xxx-series SATA-RAID
```

is your 33MHz PCI bus, the one with connectors you can plug devices into.  It must operate in compliance with the PCI specification, or many plug-in cards would fail.

The bad news here is that your plug-in NIC, which is in your bond, is competing for PCI bus bandwidth with the RAID card. Removing the NIC may improve RAID performance, as the PCI bus will be the bottleneck.
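One quick way to see that per-bus grouping at a glance is to count devices per bus number; here it is run against a pasted sample, but real `lspci` output can be piped through the same filter:

```shell
# Count devices per PCI bus: the bus number is the field before the
# first colon on each lspci line. Sample lines stand in for the
# real output above.
sample='00:1e.0 PCI bridge
02:00.0 Ethernet controller
03:00.0 Ethernet controller
03:01.0 RAID bus controller'
counts=$(printf '%s\n' "$sample" | cut -d: -f1 | sort | uniq -c)
echo "$counts"
```

The two devices sharing bus 03 (the plug-in NIC and the 3ware card) show up immediately.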

----------

## JeffBlair

Yeah, I do have a PCI-Express slot that I'm not using, so that's why it's not showing up.  

Yeah, the more I look at it, the better software looks. And I won't have to worry if the card dies on me. Fun times switching out boards.  :Smile: 

----------

## dE_logics

I have no real experience with RAID (I only tried software RAID once), but given today's quad- and six-core CPUs, software RAID performance should be identical if not better.

----------

