# Is hardware RAID-1 faster than Linux software RAID-1?

## Bash[DevNull]

Hello everyone,

I have one simple question and can't find the right answer.

Is hardware RAID-1 faster than Linux software RAID-1?

In my simple configuration, Linux software RAID-1 gives no performance benefit on READ.

I don't think that's right, because RAID-1 should speed up reads (am I right?).

So I am wondering: can hardware RAID-1 help me with READ operations or not? :)

read from single sata disk:

```
dd if=/dev/sda1 of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 14.5046 s, 74.0 MB/s
```

read from software raid-1 (sda1+sdb1) "disk":

```
dd if=/dev/md0 of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 13.9585 s, 76.9 MB/s
```
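One caveat about numbers like these: a plain `dd` of the raw device is a single sequential reader, which is exactly the workload md RAID-1 serves from one disk, and on repeat runs the page cache can inflate the result. A sketch of a slightly fairer rerun (device name from this thread; needs root, adapt to your own setup):

```shell
# Drop the page cache so a repeat run isn't served from RAM (needs root);
# iflag=direct additionally bypasses the cache for this read.
sync
echo 3 > /proc/sys/vm/drop_caches
dd if=/dev/md0 of=/dev/null bs=1M count=1024 iflag=direct
```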

----------

## NeddySeagoon

Bash[DevNull],

It all depends. 

What do you mean by hardware raid?

How are the devices contributing to your software raid set arranged?

IDE, SATA, or some other interface?

A plug-in card, or the fakeraid provided on most motherboards these days? Fakeraid is just another variation on software RAID.

I would not expect any difference in sequential reads, but random reads *may* be faster, depending on your answers to the above. Get Bonnie to do the speed testing.

----------

## Cyker

Not sure with RAID1. With RAID5 it depends on the system powering it:

On multi-socket (and to a lesser extent multi-core) systems, software RAID is generally faster than fakeraid and on par with entry-level hardware RAID. On single-core and slower systems, fakeraid will probably be slightly faster.

Linux RAID1 is weird; it is SUPPOSED to double read speeds by alternating between drives for read requests, but either that's not working or it isn't done concurrently, because as you say, there appears to be no read performance boost in Linux's software RAID1. :(

----------

## Dairinin

AFAIK, md RAID-1 does not "balance" sequential I/O the way RAID-0 does. A sequential single-threaded read is always served from a single disk. With random reads, the disk whose head is closest to the needed sector is selected. Run two "hdparm -t" in parallel against a single disk and against the RAID-1 array, and you should see the difference.
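The suggested test can be sketched like this (device names are the ones from this thread; needs root, and the numbers are only indicative):

```shell
# Two concurrent readers on one disk fight over a single spindle;
# on the md RAID-1 array each reader can be served from a different member.
hdparm -t /dev/sda & hdparm -t /dev/sda & wait   # single-disk baseline
hdparm -t /dev/md0 & hdparm -t /dev/md0 & wait   # raid1 array
```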

----------

## Cyker

Ahh, I didn't realise that! No wonder hdparm doesn't see any boost!

----------

## zotalore

Software RAID on a system with high bandwidth to the disk controller (e.g. PCI Express) and a multicore CPU with large caches can be a lot faster than a hardware RAID built around a slower embedded CPU. However, a good hardware RAID controller will be faster, since it usually has a dedicated data path to the disks.

----------

## Bash[DevNull]

*NeddySeagoon wrote:*

> What do you mean by hardware raid?

I can order a 3Ware RAID controller (no additional info from my hoster).

*NeddySeagoon wrote:*

> How are the devices contributing to your software raid set arranged?
>
> IDE, SATA or some other interface ?

Two plain, identical SATA disks:

```
md0 : active raid1 sdb1[1] sda1[0]
      308367552 blocks [2/2] [UU]
```

*NeddySeagoon wrote:*

> I would not expect there to be any difference in sequential reads but random reads *may* be faster, depending on your answers to the above. Get Bonnie to do the speed testing.

I can't use Bonnie, because it would require me to detach one disk and format it with a new filesystem, and that's not possible on my production web server.

From the other posts I conclude that I need RAID-5, because my dual-core server averages 100% CPU idle (out of 200% for two cores) and about 50% of CPU time in iowait (130% under load).

What is your opinion? Here is a graph of the daily CPU load: http://img222.imageshack.us/img222/3116/cpuday.png

----------

## NeddySeagoon

Bash[DevNull],

I would expect raid1 on a plug in hardware raid card to be slower than kernel raid using the on board SATA connectors.

The key here is 'on board'

The CPU overhead of RAID-1 is almost zero, as no redundant data is calculated. Writes do have to go to both drives, but that just means setting up the DMA controller twice, which is only a few extra bytes for the CPU to write.

The on-board SATA interfaces can be operated much faster than the 33 MHz PCI bus, so with a plug-in card the limiting factor becomes bus bandwidth. The situation is slightly different with a RAID-5 card, as the parity calculations are offloaded from the main CPU, but a multi-core CPU should not need that sort of help.
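The bus arithmetic here is easy to check; a quick sketch, assuming classic 32-bit / 33 MHz PCI and the ~74 MB/s single-disk figure measured in the first post:

```python
# Theoretical peak of classic 32-bit / 33 MHz PCI, shared by everything
# on the bus (real-world throughput is lower still).
pci_width_bytes = 4            # 32-bit bus
pci_clock_hz = 33_000_000      # 33 MHz
pci_peak_mb_s = pci_width_bytes * pci_clock_hz / 1e6

disk_mb_s = 74.0               # measured single-disk read from the first post
print(pci_peak_mb_s)           # 132.0
print(2 * disk_mb_s)           # 148.0 -- two disks already exceed the bus peak
```

So even in theory, two of these disks reading concurrently through a plain PCI card would be bus-limited, while the on-board SATA ports are not.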

If you have a PCI-E raid card, the bus is faster but there are almost no savings to be had for raid1. With this sort of motherboard, your SATA interfaces may be spread over several buses too, so you need to take that into consideration when you do the arithmetic.

In short, you will see no benefit from a hardware raid card working in raid1.

----------

## Bash[DevNull]

Thank you for detailed answer.

I am going to try to find hosting with RAID-5.
