# soft raid5 slower than the individual disks

## zatalian

Hi,

I noticed my software RAID5 is very slow. I have four 250GB SATA drives making up one /dev/md0.

When I try hdparm, I get the following results:

```
 hdparm -t /dev/sda

/dev/sda:

 Timing buffered disk reads:  192 MB in  3.02 seconds =  63.55 MB/sec

hdparm -t /dev/sdb

/dev/sdb:

 Timing buffered disk reads:  198 MB in  3.02 seconds =  65.53 MB/sec

hdparm -t /dev/sdc

/dev/sdc:

 Timing buffered disk reads:  200 MB in  3.00 seconds =  66.59 MB/sec

hdparm -t /dev/sdd

/dev/sdd:

 Timing buffered disk reads:  200 MB in  3.02 seconds =  66.32 MB/sec

hdparm -t /dev/md0

/dev/md0:

 Timing buffered disk reads:  134 MB in  5.75 seconds =  23.32 MB/sec
```

From what I can find on this forum, the individual hdparm results look normal. However, hdparm on md0 seems way too slow.

Any ideas how I can fix this?

----------

## mrybczyn

Try running a bonnie++ test to compare (use a 10GB test size to keep the cache from polluting your results).

hdparm is not a good tool for measuring RAID performance.

Also, RAID5 doesn't necessarily equal high performance; there's quite a bit of overhead in the parity calculation.
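A minimal invocation might look like the following sketch (assumptions on my part: bonnie++ is installed and the array is mounted at /mnt/md0; -d sets the test directory, -s the test size, -u the user to run as):

```shell
# Hypothetical example: benchmark the array's mount point with a 10GB test
# size so the 4GB of RAM cannot cache the whole working set. The guard
# makes this a no-op on machines without bonnie++.
if command -v bonnie++ >/dev/null 2>&1; then
    bonnie++ -d /mnt/md0 -s 10g -u nobody
else
    echo "bonnie++ not installed"
fi
```

Note that bonnie++ refuses to run as root unless a user is given, hence the -u nobody.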

----------

## NeddySeagoon

zatalian,

You cannot determine read speed very accurately with hdparm. The values it returns depend on what else the CPU is doing at the time and, worse, what else is using the disk (or RAID set) under test.

I would expect software RAID5 to be slower than an individual disk for small files, as three (or more) drives need to be read to run the inverse RAID algorithm. It's not clear to me what the hdparm result means either, as it's a raw 'partition' read and /dev/md0 is a compound object.

----------

## RaceTM

In addition to what Neddy mentioned, a more accurate way of benchmarking your array would be to time a large file transfer. You could use dd to generate a block of data, then time the copy:

```
dd if=/dev/random of=/mnt/md0/random.bin bs=51200000 count=1

time cp /mnt/md0/random.bin /root/
```

and compare this with the time it takes to copy from a single drive.

Just be careful using the dd command: you can easily destroy your system if your syntax is not correct...

----------

## NeddySeagoon

RaceTM,

Ouch!  /dev/random blocks when it runs out of entropy, so that command could take a long time to complete.

/dev/zero is probably a good enough data source for this test, but if you want a reasonable approximation to random data, use /dev/urandom: it uses pseudo-random number generation, so it never blocks. It's still quite slow, though.
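Putting that together, a minimal sketch of the corrected test (TESTDIR is a placeholder I'm introducing; it defaults to /tmp only so the sketch runs anywhere, and would point at the array's mount point for a real measurement):

```shell
# Write a file from /dev/zero (which never blocks, unlike /dev/random),
# then time reading it back.
TESTDIR="${TESTDIR:-/tmp}"
dd if=/dev/zero of="$TESTDIR/zero.bin" bs=1M count=64 2>/dev/null
time cat "$TESTDIR/zero.bin" > /dev/null
rm -f "$TESTDIR/zero.bin"
```

For a meaningful number the file has to be bigger than RAM (the 10GB mrybczyn suggested), otherwise the read comes straight back out of the page cache.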

----------

## star882

 *NeddySeagoon wrote:*   

> RaceTM,
> 
> Ouch!  /dev/random blocks when it runs out of entropy, so that command could take a long time to complete.
> 
> /dev/zero is probably a good enough data source for this test, but if you want a reasonable approximation to random data, use /dev/urandom: it uses pseudo-random number generation, so it never blocks. It's still quite slow, though.

 

Even better than that is to use a video camera pointed at anything that changes significantly (fish tank, lava lamp, busy street, etc.) and capture uncompressed video.

----------

## RaceTM

 *NeddySeagoon wrote:*   

> RaceTM,
> 
> Ouch!  /dev/random blocks when it runs out of entropy, so that command could take a long time to complete.
> 
> /dev/zero is probably a good enough data source for this test, but if you want a reasonable approximation to random data, use /dev/urandom: it uses pseudo-random number generation, so it never blocks. It's still quite slow, though.

 

lol good to know  :Very Happy: 

----------

## zatalian

I only have this RAID array and a boot partition (which is on one of those drives) in my machine, so I tried the following commands:

```
time dd if=/dev/zero of=/boot/zero.bin bs=1048576 count=2048          

2048+0 records in

2048+0 records out

2147483648 bytes (2,1 GB) copied, 28,1935 s, 76,2 MB/s

real   0m28.196s

user   0m0.003s

sys   0m2.987s

time dd if=/dev/zero of=/tmp/zero.bin bs=1048576 count=2048

2048+0 records in

2048+0 records out

2147483648 bytes (2,1 GB) copied, 119,205 s, 18,0 MB/s

real   1m59.207s

user   0m0.005s

sys   0m13.507s
```

For some reason /dev/random did not start with those big block sizes, so I used /dev/zero, but I guess the results are much the same. As you can see, this test confirms what I saw with the hdparm tests. It's even worse...

I also noticed that the first test makes a lot less noise, but maybe that's normal given that only one disk is working?

I didn't try to copy the zero.bin files because I would have to write them back to the same RAID array.

----------

## Cyker

As someone mentioned, writing to RAID5 can be slow because it's very expensive computationally.

Before I upgraded the CPU in this system to a dual-core Opteron from an A64 3000+, RAID5 was just too damned slow (between the parity calculations and the IRQ spam/IO wait, the poor thing could barely do anything else without huge amounts of latency!)

With the dual-core opty it performs much better!

Reading, however, is much faster and should be near RAID0 speeds.

I know with mine, I can saturate the two bridged gigabit links quite easily if two people are pulling a backup recovery or some other gargantuan read  :Mr. Green: 

(And since the .23-mm patched sky2 driver I have seems more stable, these dumps actually complete without me needing to reset the server! \o/)
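The "near RAID0 speeds" claim can be sanity-checked against the numbers at the top of the thread (back-of-envelope only):

```shell
# Four disks at ~65 MB/s each; RAID5 sequential reads pull data from the
# n-1 data chunks per stripe, so the rough ceiling is (n-1) * per-disk
# speed -- nowhere near the 23 MB/s hdparm reported for md0.
disks=4
per_disk=65
echo "rough RAID5 read ceiling: $(( (disks - 1) * per_disk )) MB/s"
```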

----------

## HeissFuss

Cyker is correct that reads will be much faster with RAID5.  hdparm is a bad benchmark, but it should at least report much higher read speeds from the RAID than from single disks.

Could you post your processor, memory, kernel version, and lspci output? Also, watch your CPU usage while copying a file off the RAID to see if it's spiking.

Also post the output from cat /proc/mdstat and mdadm -D /dev/md0.
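One way to generate a pure read load for that CPU check, without writing to the array (a sketch: it assumes root and that /dev/md0 exists; the guard makes it harmless elsewhere):

```shell
# Read 1GB straight off the md device while watching CPU usage in another
# terminal (top or similar). iflag=direct bypasses the page cache so the
# disks, not RAM, are what gets measured.
if [ -r /dev/md0 ]; then
    dd if=/dev/md0 of=/dev/null bs=1M count=1024 iflag=direct
else
    echo "/dev/md0 not present"
fi
```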

----------

## Cyker

For comparison (System under a bit of load so not optimal, but still...):

 *Quote:*   

> hdparm -tT /dev/md0
> 
> /dev/md0:
> 
>  Timing cached reads:   1146 MB in  2.00 seconds = 572.74 MB/sec
> ...

 

----------

## zatalian

I have an AMD64 X2 3800+ with 4GB RAM. Right now I'm running kamikaze3, but I don't think it has anything to do with the kernel. I could try running vanilla for a test, though...

```

uname -a

Linux drgnfly.dyn-o-saur.com 2.6.23-kamikaze3 #2 SMP Mon Oct 22 16:33:28 CEST 2007 x86_64 AMD Athlon(tm) 64 X2 Dual Core Processor 3800+ AuthenticAMD GNU/Linux

lspci

00:00.0 Memory controller: nVidia Corporation CK804 Memory Controller (rev a3)

00:01.0 ISA bridge: nVidia Corporation CK804 ISA Bridge (rev a3)

00:01.1 SMBus: nVidia Corporation CK804 SMBus (rev a2)

00:02.0 USB Controller: nVidia Corporation CK804 USB Controller (rev a2)

00:02.1 USB Controller: nVidia Corporation CK804 USB Controller (rev a3)

00:04.0 Multimedia audio controller: nVidia Corporation CK804 AC'97 Audio Controller (rev a2)

00:06.0 IDE interface: nVidia Corporation CK804 IDE (rev f2)

00:07.0 IDE interface: nVidia Corporation CK804 Serial ATA Controller (rev f3)

00:08.0 IDE interface: nVidia Corporation CK804 Serial ATA Controller (rev f3)

00:09.0 PCI bridge: nVidia Corporation CK804 PCI Bridge (rev a2)

00:0a.0 Bridge: nVidia Corporation CK804 Ethernet Controller (rev a3)

00:0b.0 PCI bridge: nVidia Corporation CK804 PCIE Bridge (rev a3)

00:0c.0 PCI bridge: nVidia Corporation CK804 PCIE Bridge (rev a3)

00:0d.0 PCI bridge: nVidia Corporation CK804 PCIE Bridge (rev a3)

00:0e.0 PCI bridge: nVidia Corporation CK804 PCIE Bridge (rev a3)

00:18.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration

00:18.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map

00:18.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller

00:18.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control

01:00.0 VGA compatible controller: nVidia Corporation NV43 [GeForce 6600 GT] (rev a2)

05:0b.0 FireWire (IEEE 1394): Texas Instruments TSB43AB22/A IEEE-1394a-2000 Controller (PHY/Link)

05:0c.0 Ethernet controller: Marvell Technology Group Ltd. 88E8001 Gigabit Ethernet Controller (rev 13)

cat /proc/mdstat 

Personalities : [raid6] [raid5] [raid4] 

md0 : active raid5 sdd2[3] sdc2[2] sdb2[1] sda2[0]

      720563136 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

      

mdadm -D /dev/md0 

/dev/md0:

        Version : 00.90.03

  Creation Time : Wed Jun 21 21:46:48 2006

     Raid Level : raid5

     Array Size : 720563136 (687.18 GiB 737.86 GB)

  Used Dev Size : 240187712 (229.06 GiB 245.95 GB)

   Raid Devices : 4

  Total Devices : 4

Preferred Minor : 0

    Persistence : Superblock is persistent

    Update Time : Thu Oct 25 08:31:03 2007

          State : clean

 Active Devices : 4

Working Devices : 4

 Failed Devices : 0

  Spare Devices : 0

         Layout : left-symmetric

     Chunk Size : 64K

           UUID : e8e250b8:481392d5:d6514c94:86ac6be6

         Events : 0.270862

    Number   Major   Minor   RaidDevice State

       0       8        2        0      active sync   /dev/sda2

       1       8       18        1      active sync   /dev/sdb2

       2       8       34        2      active sync   /dev/sdc2

       3       8       50        3      active sync   /dev/sdd2

```

The CPU isn't doing much when testing my RAID volume (max. 10%), so I guess I have enough CPU power. Or is there a way to tell the CPU to do some more work and make things faster? Are there any tweaks for dual-core processors that can boost my RAID?

On the other hand, I'm starting to think that what I see is normal. After all, I'm unable to test my RAID volume without writing to it (except for hdparm), so maybe it's expected that I get these low results. But this does not explain the low hdparm readings.

Edit: wow, just rereading this thread and I don't know how I missed the remarks on /dev/random, /dev/zero and /dev/urandom...

----------

## HeissFuss

What you're seeing isn't normal.  With your system specs, you should be seeing much faster reads and at least equivalent write speed for large files.

I'd try a different kernel (vanilla or gentoo-sources 2.6.22). If that doesn't help, we should take a look at which kernel options you're using to see if there may be a problem there.
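Before rebuilding kernels, it may also be worth checking two common md tunables. This is only a sketch with illustrative values (it needs root on the real box, and the guard makes it a no-op elsewhere):

```shell
# Read-ahead and the stripe cache are frequent culprits for slow RAID5
# reads; the stripe_cache_size default is often only 256 pages.
if [ -e /sys/block/md0/md/stripe_cache_size ]; then
    blockdev --getra /dev/md0                # read-ahead, in 512-byte sectors
    cat /sys/block/md0/md/stripe_cache_size  # pages (per device) in the stripe cache
    # Illustrative bumps, commented out so nothing is changed by accident:
    # blockdev --setra 4096 /dev/md0
    # echo 4096 > /sys/block/md0/md/stripe_cache_size
else
    echo "no md0 stripe cache on this machine"
fi
```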

----------

## zatalian

I'm running vanilla 2.6.23.1 now, but I don't see any improvement...

You can find my config settings here.

Also, how am I supposed to use bonnie++? I can't find any examples. I have, however, also been testing with large files (copying VMware files), and that is what made me think something was wrong in the first place.

----------

