# How do I check if my RAID10 is performing OK?

## arndawg

/dev/md3 is four 250GB SATA drives in RAID10 with mdadm. What should the expected performance be? 

Running `hdparm -tT /dev/md3` gives me this:

/dev/md3:

 Timing cached reads:   4336 MB in  2.00 seconds = 2169.98 MB/sec

 Timing buffered disk reads:  282 MB in  3.00 seconds =  93.94 MB/sec

Is this good, crap, or in between? Or perhaps I'm using the wrong check for the task? 

thanks.

edit:

more information:

 *Quote:*   

>  # cat /proc/mdstat
> 
> Personalities : [linear] [raid0] [raid1] [raid10] [raid5] [raid4] [raid6]
> 
> md1 : active raid1 sdd1[3] sdc1[2] sdb1[1] sda1[0]
> ...

 

 *Quote:*   

>  # cat /proc/interrupts
> 
>            CPU0       CPU1
> 
>   0:      14422    2377072    IO-APIC-edge  timer
> ...

 

----------

## Mad Merlin

hdparm doesn't really test a RAID properly. For a quick estimate, try something like this:

```
dd if=/dev/zero of=/file/somewhere/in/raid bs=1024 count=$[1024*1024*20]
```

That will write 20G of zeros into /file/somewhere/in/raid. dd prints throughput stats about the copy when it's done.

I would expect sequential read speeds to be ~3-4x faster than a single drive, and write speeds to be ~2x faster than a single drive.
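One caveat with the dd run above (my addition, not from the original post): bs=1024 adds a lot of per-block syscall overhead, and unless dd is told to flush, part of the data may still be sitting in the page cache when it reports the rate. A variant that avoids both, with a placeholder path:

```shell
# Write-test variant: bs=1M cuts syscall overhead, and conv=fdatasync makes
# dd flush to disk before reporting throughput, so the page cache can't
# inflate the number. /tmp/ddtest is a placeholder path -- point it at a
# file on the RAID filesystem, and use a count larger than your RAM.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=256 conv=fdatasync
```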

----------

## arndawg

 *Mad Merlin wrote:*   

> hdparm doesn't really test a RAID properly, for a quick estimate, try something like this:
> 
> ```
> dd if=/dev/zero of=/file/somewhere/in/raid bs=1024 count=$[1024*1024*20]
> ```
> ...

 

Ok thanks. Will test it now and edit this post when it's finished.

edit:

 *Quote:*   

> 
> 
> 20971520+0 records in
> 
> 20971520+0 records out
> ...

 

And this is okay?

----------

## arndawg

Any comments?

----------

## Mad Merlin

That looks about right to me. Try testing the read speed like so: 

```
dd if=/big/file/in/raid of=/dev/null
```
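One thing to watch out for (my note, not from the post above): reading a file you just wrote will be served largely from the page cache, which makes the number meaningless. Use a file bigger than RAM, or drop the caches first (as root: `sync; echo 3 > /proc/sys/vm/drop_caches`). A sketch with placeholder paths:

```shell
# Read-test variant. bs=1M avoids dd's 512-byte default block size.
# /tmp/readtest is a placeholder -- use a big file on the RAID, larger
# than RAM, or drop the page cache before the read (needs root):
#   sync; echo 3 > /proc/sys/vm/drop_caches
dd if=/dev/zero of=/tmp/readtest bs=1M count=128 conv=fdatasync
dd if=/tmp/readtest of=/dev/null bs=1M
```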

----------

## tgh

In general, sequential read performance in RAID0 increases with the number of spindles, and the same holds for RAID10.  So if a single drive can give you N throughput on sequential reads/writes, two drives will give you 2N, and adding a 3rd mirrored pair to the set should bump you to 3N.  After that you may start running into other bottlenecks, even though the disks are capable of supplying the data.  The 93MB/s sounds about right for 500GB/750GB SATA drives striped RAID0 across 2 disks (or across 2 sets of RAID1 mirrors, which is what a 4-disk RAID10 is).

Random access times won't change much, even with the additional spindles.

You should also use atop and watch your disk utilization numbers and CPU wait times.  That will show you some of the effects.  It seems that mdadm prefers to do all reads from one side of the RAID10 array (i.e. using all of the left-hand side mirrored drives), so you'll see high utilizations on 2 of the 4 drives during reads.  During writes, you'll see that the first disk in the array gets overloaded (mdadm inefficiency?).

Blinky lights on the front of the drive bays indicating drive access (e.g. a 4-drive SATA hot-plug bay that takes up three 5.25" bays) are also fun to watch.  They show you the same information as atop, but in a more immediate and visible way.
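If atop isn't installed, `iostat -x 1` from the sysstat package shows the same per-drive `%util` column. A dependency-free sketch of the same idea using /proc/diskstats (the sda..sdd device names are assumptions taken from the mdstat output above; adjust to your array members):

```shell
# Field 13 of /proc/diskstats is cumulative milliseconds each device spent
# doing I/O, so the delta over a 1-second window, divided by 10,
# approximates percent utilization. Sample twice and diff.
snap() { awk '$3 ~ /^sd[a-d]$/ { print $3, $13 }' /proc/diskstats; }
before=$(snap)
sleep 1
snap | while read dev ticks; do
    prev=$(echo "$before" | awk -v d="$dev" '$1 == d { print $2 }')
    echo "$dev: ~$(( (ticks - prev) / 10 ))% busy"
done
```

Run it while a dd or bonnie pass is going; only two of the four drives pegging near 100% during reads matches the one-sided read behaviour described above.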

----------

## nianderson

What would you expect for SW RAID1?

This is what I'm getting on a 2-drive SATA RAID1:

```

dd if=/dev/zero of=test bs=1024 count=$[1024*1024*20]

20971520+0 records in

20971520+0 records out

21474836480 bytes (21 GB) copied, 644.256 seconds, 33.3 MB/s

```

----------

## arndawg

 *Quote:*   

>  dd if=/root/test of=/dev/null
> 
> 41943040+0 records in
> 
> 41943040+0 records out
> ...

 

My read speed seems a bit low?

----------

## Mad Merlin

RAID1 does not improve write speeds; in fact, you may notice that writes are a little slower than with a single drive. Read speeds should be about double that of a single drive, however.

arndawg: Yes, perhaps a little low. You could try atop as another poster mentioned.

----------

## tgh

Not sure; you show dd for a write test in one post and dd for a read test in another post.

Bonnie results for a 4-drive 5400rpm 300GB PATA RAID10 setup:

```
nogitsune backup # bonnie -s 16384 -m raid10

File './Bonnie.6159', size: 17179869184

Writing with putc()...done

Rewriting...done

Writing intelligently...done

Reading with getc()...done

Reading intelligently...done

Seeker 1...Seeker 2...Seeker 3...start 'em...done...done...done...

              -------Sequential Output-------- ---Sequential Input-- --Random--

              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---

Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU

raid10   16384 29357 70.4 32018 12.9 20499  4.8 29166 57.5 80455 13.6 195.3  1.2
```

As you can see, block writes are 32MB/s, rewrites 20MB/s, block reads 80MB/s.  

Here's the same system, but using Bonnie on a 2-drive RAID1 set (also 5400rpm 300GB drives).  The system has 4GB of RAM and a single Athlon64 CPU (3200+?).

```
# bonnie -s 16384 -m raid1

File './Bonnie.15625', size: 17179869184

Writing with putc()...done

Rewriting...done

Writing intelligently...done

Reading with getc()...done

Reading intelligently...done

Seeker 1...Seeker 2...Seeker 3...start 'em...done...done...done...

              -------Sequential Output-------- ---Sequential Input-- --Random--

              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---

Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU

raid1    16384 16047 39.5 17445  7.9 12025  4.2 42487 82.5 56282  4.3 181.8  0.7
```

RAID1: write 17MB/s rewrite 12MB/s read 56MB/s

Bonnie results for a more powerful system (Athlon64 X2 4200+, 2GB RAM, 7200rpm 750GB drives).  The RAID1 is 2x750GB and the RAID10 is a 4x750GB drive set.

```
san1-azure snapshot-daily1 # bonnie -s 10000 -m raid10    

File './Bonnie.10706', size: 10485760000

Writing with putc()...done

Rewriting...done

Writing intelligently...done

Reading with getc()...done

Reading intelligently...done

Seeker 1...Seeker 2...Seeker 3...start 'em...done...done...done...

              -------Sequential Output-------- ---Sequential Input-- --Random--

              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---

Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU

raid10   10000 52079 93.8 119733 56.8 41244 17.1 52885 73.5 114913 28.0 141.7  1.1

san1-azure snapshot-daily1 # cd /backup/system

san1-azure system # bonnie -s 10000 -m raid1 

File './Bonnie.10732', size: 10485760000

Writing with putc()...done

Rewriting...done

Writing intelligently...done

Reading with getc()...done

Reading intelligently...done

Seeker 1...Seeker 2...Seeker 3...start 'em...done...done...done...

              -------Sequential Output-------- ---Sequential Input-- --Random--

              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---

Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU

raid1    10000 53150 94.7 63171 29.0 33675  8.0 39835 52.7 69100  7.4 166.1  1.1

san1-azure system # 
```

RAID1: 63MB/s write 34MB/s rewrite 69MB/s read

RAID10: 120MB/s write 41MB/s rewrite 115MB/s read

----------

## arndawg

108MB/s write, 31MB/s rewrite, and 90MB/s read. 

So this confirms that the read is a bit slow I guess?

```
 

# bonnie -s 10000 -m raid10

File './Bonnie.25171', size: 10485760000

Writing with putc()...done

Rewriting...done

Writing intelligently...done

Reading with getc()...done

Reading intelligently...done

Seeker 1...Seeker 2...Seeker 3...start 'em...done...done...done...

              -------Sequential Output-------- ---Sequential Input-- --Random--

              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---

Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU

raid10   10000 56596 95.4 108386 48.8 31577  8.7 41661 58.0 90748 11.3 216.3  1.2

```

----------

## tgh

Not from what I've seen.  It's pretty rare that RAID1 speeds up read rates.  (At least with PATA/SATA drives.)

Your numbers look pretty reasonable to me.

----------

## arndawg

 *tgh wrote:*   

> Not from what I've seen.  It's pretty rare that RAID1 speeds up read rates.  (At least with PATA/SATA drives.)
> 
> Your numbers look pretty reasonable to me.

 

But I use RAID10. ???

----------

## tgh

RAID10 is just a RAID0 stripe across RAID1 mirrored pairs.  So RAID10 read speeds are simply going to be N times the RAID1 speed, where N is the number of mirrored pairs in the stripe.  So a 4-disk RAID10 should be about 2x faster than a 2-disk RAID1.

So if a 2-disk RAID1 set provides you with roughly 45MB/s read/write speed (pretty normal) and 15MB/s rewrite speed, you'll see 90MB/s read/write and 30MB/s rewrite in a 4-disk RAID10 setup, and 135MB/s read/write with 45MB/s rewrite in a 6-disk setup, assuming you haven't run into another bottleneck somewhere (such as the PCI bus at ~130MB/s).
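That back-of-the-envelope scaling can be jotted down in the shell (the 45MB/s and 15MB/s per-pair figures are the hypothetical numbers from the paragraph above, not measurements):

```shell
# Rough mdadm RAID10 scaling estimate: throughput ~ pairs * per-pair speed.
# Per-pair figures are the illustrative numbers from the text.
pair_seq=45       # MB/s sequential read/write for one 2-disk RAID1 pair
pair_rewrite=15   # MB/s rewrite for one pair
for pairs in 2 3; do
    echo "$((pairs * 2))-disk RAID10: ~$((pairs * pair_seq)) MB/s seq, ~$((pairs * pair_rewrite)) MB/s rewrite"
done
```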

Other than that, I'm not sure what else to tell you.  

Note: PCIe x1 slots have roughly double the bandwidth (256MB/s) of standard PCI (128MB/s), and of course x4 and x16 slots have 4x (1GB/s) and 16x (4GB/s) the bandwidth of a PCIe x1 slot.  So on a PCIe motherboard, you can have a lot more disks in the RAID before you run into bus limitations (assuming the chipset can handle it).  IIRC, 66MHz 64-bit PCI-X slots in server motherboards topped out at 512MB/s (roughly PCIe x2).  In fact, on modern PCIe motherboards the limit for software RAID seems to be the CPU: you'll want at least a dual-core CPU, if not a quad-core, for the really heavy-activity servers.  (Or use a hardware RAID card and offload everything from the CPU.)

----------

