# Strange hdparm -T performance

## alinmesser

I stumbled across this incidentally and I'm trying to understand what could cause it: while installing a new Gentoo box, I created a 4-disk md RAID 0 array.

While still in the chroot, after booting the 2007.0 LiveCD (kernel 2.6.19-r5), I run:

```
# hdparm -tT /dev/md0

/dev/md0:
 Timing cached reads:   4132 MB in  2.00 seconds = 2065.97 MB/sec
 Timing buffered disk reads:  716 MB in  3.00 seconds = 238.66 MB/sec
```

I finish installing, compile a new 2.6.23-r3 SMP kernel (using genkernel) and reboot. The new, pristine system boots with no services running, and I run the command again:

```
# hdparm -tT /dev/md0

/dev/md0:
 Timing cached reads:   2070 MB in  2.00 seconds = 1034.98 MB/sec
 Timing buffered disk reads:  438 MB in  3.01 seconds = 145.51 MB/sec
```

I copied the config from the LiveCD SMP kernel (via /proc/config.gz) and compiled different kernels on the new system, 2.6.19-r5 and 2.6.23-r3. I also tried again using just -pipe and -O2 as CFLAGS. I compared the outputs of dmesg and lsmod and don't see anything that gives me a clue.
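For reference, reusing the running kernel's config works like this (paths assume the standard /usr/src/linux symlink, and /proc/config.gz is only present when CONFIG_IKCONFIG_PROC is enabled):

```shell
# Reuse the running (LiveCD) kernel's config as a starting point
zcat /proc/config.gz > /usr/src/linux/.config
# Answer prompts for any options new in the target kernel version
cd /usr/src/linux && make oldconfig
```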

I repeated the tests: booting with the LiveCD always gives twice the performance, at least in hdparm. I don't care that much about the buffered disk reads, which are more than enough; it's the cached reads that worry me, because they suggest something is misconfigured in the controller-northbridge-CPU-memory chain.

Where else should I look?

Note: I think HW config is irrelevant, but:

Tyan S2925 w/ Nvidia MCP55, onboard XGI Volari Z7, AMD X2 BE-2350, 2 GB DDR2-800 RAM, sata_nv, forcedeth, everything on board.
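One thing that could halve cached-read numbers without the disks being involved at all is CPU frequency scaling: if the new kernel enabled cpufreq with the ondemand or powersave governor, the short `hdparm -T` burst may run at a reduced clock. A quick sanity check (a sketch; the sysfs path assumes a cpufreq-enabled kernel):

```shell
# Show the active cpufreq governor and current clock, if cpufreq is built in
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor 2>/dev/null
grep 'cpu MHz' /proc/cpuinfo
# Rough memory-bandwidth check that involves no disk at all:
# pushes 4 GB of zeros through the kernel's copy path
dd if=/dev/zero of=/dev/null bs=1M count=4096
```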

----------

## bunder

Did you choose a different I/O scheduler between compiles?

----------

## alinmesser

Great idea, I hadn't thought of checking that. I looked, and the default I/O scheduler in 2.6.19 is "deadline", whereas the one in 2.6.23 is "anticipatory".

HOWEVER, I did compile a 2.6.19-r5 kernel with the exact same config as the one on the LiveCD, and I verified in /etc/kernels that the config hadn't changed after the successful compilation (using genkernel). I think that should rule it out... ;(
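For what it's worth, the scheduler can be inspected and switched at runtime without rebuilding anything, so both kernels can be compared with the same scheduler (device names here assume the four array members are sda through sdd):

```shell
# The active scheduler is the bracketed entry, e.g. "noop [deadline] cfq"
cat /sys/block/sda/queue/scheduler
# Extract just the active one
active=$(sed 's/.*\[\(.*\)\].*/\1/' /sys/block/sda/queue/scheduler)
echo "active scheduler: $active"
# Switch all four member disks to deadline (needs root)
for d in sda sdb sdc sdd; do
    echo deadline > /sys/block/$d/queue/scheduler
done
```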

----------

## lightvhawk0

I have this problem too; I'm not sure what's causing it.

----------

## schachti

Could you try it with 2.6.24?

----------

## Paapaa

Could you try this with something better suited to performance testing? I have a gut feeling that hdparm doesn't give very comparable results.
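As one alternative, a dd read with O_DIRECT bypasses the page cache and tends to give a steadier sequential-read figure; dropping the caches first also matters if you benchmark without O_DIRECT. A sketch (sizes and the device path are examples; iflag=direct needs GNU dd):

```shell
# Flush dirty pages and drop the page cache so nothing is served from RAM
sync
echo 3 > /proc/sys/vm/drop_caches
# Sequential read straight from the array, bypassing the page cache
dd if=/dev/md0 of=/dev/null bs=1M count=1024 iflag=direct
```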

----------

## Cyker

I've been playing around with the block device params a bit (things like 'echo "deadline" > /sys/block/sd[abcd]/queue/scheduler' and 'blockdev --setra 2048 /dev/sda /dev/sdb /dev/sdc /dev/sdd && blockdev --setra 8192 /dev/md0').

My tests have been things like 'dd if=/dev/md/0/BP07_Concert_PPOT_XViD.avi of=/dev/null bs=64k' and 'hdparm -t /dev/md0'.

In the course of my messing about, I've found hdparm -t is fairly unreliable as a benchmark, especially if the filesystem is in use (which it is, since I'm in X).

Sample of 5 runs of hdparm -t /dev/md0 in short succession:

```
 Timing buffered disk reads:  578 MB in  3.00 seconds = 192.40 MB/sec
 Timing buffered disk reads:  508 MB in  3.02 seconds = 168.39 MB/sec
 Timing buffered disk reads:  488 MB in  3.01 seconds = 162.39 MB/sec
 Timing buffered disk reads:  516 MB in  3.01 seconds = 171.63 MB/sec
 Timing buffered disk reads:  434 MB in  3.01 seconds = 144.07 MB/sec
```

By comparison, the dd test looked like this:

```
1307346378 bytes (1.3 GB) copied, 29.7797 s, 43.9 MB/s
1307346378 bytes (1.3 GB) copied, 29.8094 s, 43.9 MB/s
1307346378 bytes (1.3 GB) copied, 28.797 s, 45.4 MB/s
1307346378 bytes (1.3 GB) copied, 29.3707 s, 44.5 MB/s
1307346378 bytes (1.3 GB) copied, 31.3525 s, 41.7 MB/s
```

So... the tests are good for ballpark figures, but aren't exactly what you'd call reliable...
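Given that variance, averaging several runs smooths out a lot of the noise. A small loop like this (a sketch; it parses the "= N MB/sec" tail of hdparm's output line) does the job:

```shell
# Run hdparm -t several times and average the MB/sec figures
runs=5
for i in $(seq $runs); do
    hdparm -t /dev/md0 | grep 'buffered disk reads'
done | awk -F'= ' '{print $2}' \
     | awk '{s+=$1; n++} END {printf "mean: %.2f MB/sec\n", s/n}'
```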

----------

## lightvhawk0

Well, for instance, I have 3 disks: two 120 GB disks and one 80 GB disk.

I have a RAID 0 that I boot from using the two 120s, and the 80 for my backup. (Will be buying a bigger backup drive lol.)

When I create the RAID and test it with hdparm I get twice the speed; after a reboot it's back down to half. Any ideas? Both drives work fine, so I don't know what's going on. I think I might try different configurations to narrow down the problem.

Yeah, performance is why I'm going RAID 0 in the first place. If I'm only getting the performance of one disk, I might just go with RAID 1 so at least my reads will be faster.

----------

## lightvhawk0

I will also post some bonnie++ results comparing the LiveCD-mounted RAID to the system-mounted RAID.

----------

