# RAID 5 / RAID 10 - do I get something wrong?

## mulda

Hi,

I currently have a Gentoo box running kernel 2.6.20.4. I have put four SP2504C drives (250 GB SATA, connected via the ICH7 bridge) into a software RAID10 provided by the kernel, using the XFS filesystem.

Now I'm somewhat confused about the write speed:

 *Quote:*   

> 
> 
> dd if=/dev/zero of=testdatei count=2097152
> 
> 2097152+0 records in
> ...

 

47.4 MB/s seems very slow to me; a single SP2504C already gets around 45.6 MB/s.
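(Aside: dd's default 512-byte block size and the page cache can both skew this figure; a fairer variant of the same test, with `testfile` as a scratch name, would be:)

```shell
# Same test with 1 MiB blocks instead of the 512-byte default;
# conv=fdatasync forces the data to disk before dd reports its rate,
# so the page cache can't inflate the number.
dd if=/dev/zero of=testfile bs=1M count=64 conv=fdatasync
```

Remove `testfile` afterwards; larger counts give a more stable figure, at the cost of a longer run.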

I asked a friend of mine to perform the same command on his box, and he gets a write speed of 166.4 MB/s (!) - he has four 150 GB WD Raptors in a software RAID5.

Well, I know Raptors typically have higher write speeds (~70 MB/s), but I simply don't understand why he gets such an amazing write speed, even though he uses RAID5, which actually has not-so-good write performance and should therefore lower the result.

As far as I know, RAID10 increases both write and read performance. In my case, read and write speeds only seem to be slightly increased (write: 47.4 MB/s vs. 45.6 MB/s; reading 1 GB: 17 seconds vs. 24 seconds), which makes the loss of about 50% of the disk space unjustified for such a small performance gain.

My question: do I just get the whole RAID10 story wrong, is the kernel software RAID performance just lousy, are the hard disks too slow, ...?  :Smile: 

Thanks!

Max

----------

## Veldrin

I am not sure about file creation speed on XFS... but IIRC it seemed a little slow.

I use hdparm to test performance. 

```
# hdparm -Tt /dev/<md-partition/hdd>
```

cheers

V.

----------

## mulda

Results are almost the same:

```
hdparm -tT /dev/sda

/dev/sda:
 Timing cached reads:   3626 MB in  2.00 seconds = 1813.78 MB/sec
 Timing buffered disk reads:  188 MB in  3.01 seconds =  62.44 MB/sec

hdparm -tT /dev/md0

/dev/md0:
 Timing cached reads:   3348 MB in  2.00 seconds = 1674.40 MB/sec
 Timing buffered disk reads:  264 MB in  3.03 seconds =  87.25 MB/sec
```

----------

## Veldrin

Well, almost... I have a SATA1 system and get speeds around 60-70 MB/s. I am not sure which one it was, RAID10 or RAID01: one gives you a nice speedup, the other makes more sense security-wise. From your problems, I guess that RAID10, though stable, does not (or only slightly) improve performance.

WD Raptor HDDs are known to be very fast, and on top of that they are SATA2-based. Just looking at the specifications of the protocols/interfaces, SATA1 should give you 150 MB/s and SATA2 300 MB/s; even the old PATA interface was designed for transfer rates of 133 MB/s.

cheers

V.

----------

## NeddySeagoon

mulda,

For the RAID1 part of your array, you must write the data twice, once to each half of the mirror, so your write speed is normally reduced.

You get the speed of a single drive, which is actually pretty good.

edit ========

For files bigger than the drive's on-board buffer, you can ignore the difference between the 1.5 Gb/s and 3 Gb/s SATA interfaces, since the data rate limit becomes the head/platter interface. That limit is determined by the size of the data bits the head produces and how fast the platter moves past the head.

----------

## mulda

I understand, but how come a RAID5 config, also with 4 devices, literally outperforms my RAID10? I'm losing 50% of my space, with a RAID5 config just 25%, but obviously it's much faster, even though RAID10 only writes the data twice, while RAID5 also has to calculate parity and spread it across the disks.

The read speed is also disappointing; it should actually be twice as fast as a single device.

I recently had a RAID0+1 config under Windows with the nvraid controller and 4 devices, and I definitely had much better read (+200%) and write speeds. The Linux kernel RAID isn't really performing well.  :Sad: 

----------

## Veldrin

As I mentioned above, RAID10 != RAID01. 

RAID10 mirrors the disks first, then stripes those mirrors. Meanwhile, RAID01 stripes the disks first, then mirrors those stripes.
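With four drives, the two layouts look like this (device names are only illustrative):

```
RAID10:  RAID0( RAID1(sda, sdb), RAID1(sdc, sdd) )
RAID01:  RAID1( RAID0(sda, sdb), RAID0(sdc, sdd) )
```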

 *Quote:*   

>  Generally, this level [RAID level 0 + 1] provides better performance than RAID5.

 

So RAID01 gives you a performance boost at the cost of security.

[edit] typos

----------

## NeddySeagoon

mulda,

With 4 devices in raid10, you have 100% redundancy - everything is written twice.

The same 4 devices in raid5 only store 33% extra data for redundancy, so there is a lot less data to write.

I don't know the implementation of raid in the kernel, but it's possible it does not do parallel reads and writes.

I can see that dividing and merging the data for raid1 is non-trivial.

The overhead of calculating parity for raid5 is very small for a modern CPU. It won't have any problem keeping the drives busy.
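The parity is just an XOR across the data disks, which is why it is so cheap; a toy sketch with single "disk" bytes (values arbitrary):

```shell
# RAID5 parity: for each stripe, parity = d1 ^ d2 ^ d3.
d1=0xA5; d2=0x3C; d3=0x0F
parity=$(( d1 ^ d2 ^ d3 ))
printf 'parity byte:  0x%02X\n' "$parity"

# If any one disk dies, its byte is recovered by XOR-ing the survivors
# with the parity - here we "lose" d2 and get it back:
recovered_d2=$(( d1 ^ d3 ^ parity ))
printf 'recovered d2: 0x%02X\n' "$recovered_d2"
```

Scaled up, that is one XOR per byte per stripe, which a modern CPU does far faster than any disk can deliver data.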

I'm not sure how you can compare times between Windows and kernel raid, since you don't know that you are measuring the same things.

----------

## mulda

 *Quote:*   

> 
> 
> As I mentioned above, RAID10 != RAID01. 
> 
> 

 

Yes, I read that; now I know the difference better  :Smile:  (good German explanation: http://www.blogs.uni-erlangen.de/anfalas/stories/75/)

Basically, I get the idea now. It would be nice to see RAID0+1 support in the kernel, as then I'd just have to get up and replace the affected device in case of a failure - no need to wait until a second drive gets screwed *g*

Thanks!

----------

## Veldrin

Can't you just do this by hand? I.e. creating the mirrored drives and then striping those mirrors? Or is there a problem in creating an md device consisting of md devices?

----------

## NeddySeagoon

Veldrin,

You just make an md consisting of other md devices.
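In mdadm terms, that could look like this (device names, partitioning, and ordering are only illustrative; this builds two mirrors, then stripes them):

```
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
# mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1
```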

----------

## Veldrin

I was thinking of doing that, but I was not sure whether it would work.

So it can be done.

----------

## mulda

So basically I can have a RAID0+1 with 2 devices?

/dev/md0 @ /dev/sda1, /dev/sdb1, each 125 GB, RAID0

/dev/md1 @ /dev/sda2, /dev/sdb2, each 125 GB, RAID0

Final result: /dev/md2 RAID 1 set, including /dev/md0 and /dev/md1 as members

If that works I'll be very astonished; I was already about to buy enough drives to build a RAID5  :Very Happy: 

----------

## Veldrin

I don't really see the point in doing that. The mirroring will cause the heads to move back and forth to write all the data, which will end in a massive loss of speed.

If you care about data redundancy, just use a plain simple RAID1.

/dev/md0 @ /dev/sda1, /dev/sdb1 each 250GB, RAID1

You can build a RAID5 array with just 3 disks, and you would have the space of 2 disks at your disposal. [Or in general, a RAID5 array with n disks will have n-1 times the space of each disk.]
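As a quick sanity check on that formula, using the 250 GB drives from this thread:

```shell
# RAID5 usable capacity = (n - 1) * disk_size; one disk's worth goes to parity.
disk_gb=250
for n in 3 4; do
    echo "RAID5 with $n disks: $(( (n - 1) * disk_gb )) GB usable"
done
```

So three disks already give 500 GB usable, versus the 500 GB a four-disk RAID10 leaves from 1000 GB raw.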

----------

## NeddySeagoon

mulda,

Yes, you can do that, but the speed will be poor because of all the head movement between the two partitions on the same drive. Think about how the writes will be laid out on the platter surface.

You can do it all on one drive if you just want to play, but that's of no practical value whatsoever.

----------

## ferg

As far as I understand it, Linux kernel RAID10 (as opposed to nested RAID 1+0 or RAID 0+1) is fairly close to RAID0 in both write and read speeds. You may be writing twice, but each of those writes is spread across two devices, so it runs at roughly twice the normal write speed (or rather, the other way around). I guess read speed should be similar to RAID1.

Bear in mind that kernel RAID10 is neither RAID0 over RAID1 nor RAID1 over RAID0. It's somewhere in between, and can do things like run an array over an odd number of drives.
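For instance, the md raid10 personality will happily build an array on three drives (device names illustrative; `f2` selects the "far" layout with two copies of every block):

```
# mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
```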

To be honest, though, in my real-life experience on my own desktop, I've not noticed much difference between RAID5 and RAID10 (kernel RAID10, that is). I use RAID10 for my home directories (for speed and redundancy), and RAID5 for media that is generally just read.

Cheers

Ferg

----------

## ferg

BTW, hdparm is not a great tool for benchmarking RAID arrays.

dd is OK, but running multiple instances of Bonnie++ or iozone is the best way to really see how fast your array is performing.
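A typical Bonnie++ run might look like this (the mount point is a placeholder; the -s size in MB should be at least twice your RAM so the page cache can't mask the disks, -n 0 skips the file-creation tests, and -u is required when running as root):

```
# bonnie++ -d /mnt/raid -s 2048 -n 0 -u nobody
```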

Cheers

Ferg

----------

## eccerr0r

I'm using a 4-disk RAID5 (Maxtor + Seagate 120 GB PATA), one disk per channel, on an AMD XP2200+ with a SiS735 chipset (K7S5A board), using ext3.

Single disk performance is around 53MB/sec:

hdparm:

 Timing buffered disk reads:   440 MB in  3.01 seconds = 146.31 MB/sec

dd read:

1073741824 bytes (1.1 GB) copied, 9.77426 s, 110 MB/s

dd write:

1073741824 bytes (1.1 GB) copied, 17.899 s, 60.0 MB/s

When I was using the same RAID setup on my P3 Celeron 1.2 GHz (@1.364 GHz), 440BX + PIIX4, it was a _LOT_ slower, despite the fact that single-disk performance was comparable. I was barely getting hdparm scores at the speed of a single disk out of the array, and dd read/write were miserable. It seems the hardware makes a big difference.

Unfortunately I haven't/can't try it on my C2D/i965/ICH8 board... but the RAID0 I've tried there seemed pretty good, and I expect it to perform even better than the SiS735.

----------

