# SATA - hdparm - performance

## rbr28

I've seen a lot of comments about drive performance and a good bit about SATA, but I have yet to see a good summary.  If I missed the thread someone please point me there.

I'm using the 2.6.1 kernel right now, from gentoo-dev-sources.  I have two 80GB Seagate SATA drives and I get an outrageous number of around 1400MB/sec on hdparm -T and 55MB/sec on hdparm -t.  Anyway, from what I have seen those are good numbers, but I'm really not sure since I haven't tested many other SATA systems yet.  I have tested other SCSI and ATA workstations and I have never seen above 38MB/s for the lower number.  I would be interested in how others are making out with the 2.6 kernel and SATA, and whether there are any optimizations that have worked well.

Additionally, I disabled the DMA-by-default option when I configured the kernel, to see if it was related to an early problem I was having.  I no longer have the problem, but I changed several things at once, so I was wondering whether disabling DMA by default is necessary and/or the preferred setting for SATA drives and the 2.6 kernel.

----------

## Dr_b_

What liveCD did you use to get your system to recognize your SATA drive?

What is the drive showing up as? /dev/hd?  On one of the liveCDs it's showing up as a SCSI device.  The latest experimentals won't boot my system; they seem to hang at some point.

----------

## arska

My config is the following:

- DMA enabled in the 2.6.3-rc2 kernel

- Linux assigns /dev/sda for the HDD

- Same hdparm results

----------

## rbr28

I'm going to post a more complete mini-howto to help other people, but for now here are some pointers.  I used the Mandrake Move CD to build my system; the latest one recognized my SATA drives without any problem.  My setup is an Asus P4C800-E Deluxe with two SATA drives on the Intel ICH5R controller, not in a RAID setup.  The BIOS is set to run the drives in native mode.  I have two CD-ROMs connected, and the Promise RAID controller is disabled in the BIOS.

When I boot with the MandrakeMove CD, my SATA drives are mounted as hde and hdf.  One note: when I used the Mandrake CD and followed the instructions for Knoppix, I had problems with the install.  Follow the instructions for using the LiveCD for the most part.  Make sure you mount /proc and /dev the way they tell you to do it for the LiveCD, not the way they say to do it for the Knoppix-based install.

Anyway, you can find info all over the forum about what you need to compile into your kernel, depending on which one you are using.  I had no problem with anything from 2.4.22 on, and I finally settled on the gentoo-dev-sources when they were using the 2.6.1 kernel.  I also compiled the mm-sources when they were using the 2.6.3 kernel, but I had some minor issues that I didn't want to bother working through, since I had the 2.6.1 kernel working flawlessly.  The drives worked fine with all the kernels I tried, though.  I measured performance and found no significant difference between the gentoo-dev-sources with the 2.6.1 kernel and the mm-sources with a slightly newer kernel.  My final performance stats were around 55MB/s and 2000MB/s for the two different hdparm -tT /dev/sda numbers.  I ran IOZone and came up with very similar numbers.

Anyway, follow everyone else's recommendations about compiling SCSI emulation, scsi-generic, ATA, Serial ATA, and so on into your kernel; don't compile these as modules.  Make sure you compile in the specific drivers for your setup too.  For mine, it was the PIIX driver, or something like that, and the Intel driver under SATA (which is under SCSI low-level drivers).

One of the problems I had was that Mandrake Move recognizes the drives as hde and hdf, as I mentioned.  The 2.6.x kernels will recognize your drives as /dev/sdX.  Make sure you set up your fstab and your grub.conf to point to the appropriate partitions, such as /dev/sda1 rather than /dev/hde1.  If your grub.conf is not configured correctly you will never get it to work.  Remember too, if you mess it up, you can get a command shell from grub to edit the boot lines before you boot.  If your fstab is set up incorrectly, though, you will still have problems, and you will need to do something like booting with whatever CD-based distro you used and editing the file from there.
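For example, a minimal sketch (the partition layout here, with /dev/sda3 as root, is an assumption; substitute your own):

```
# /boot/grub/grub.conf -- the kernel line points at the sd name, not hde
title Gentoo 2.6.1
root (hd0,0)
kernel /boot/bzImage root=/dev/sda3

# /etc/fstab -- same devices
/dev/sda1   /boot   ext2   noauto,noatime   1 2
/dev/sda2   none    swap   sw               0 0
/dev/sda3   /       ext3   noatime          0 1
```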

What I've seen is that most people are getting the right drivers compiled in, because there is lots of info on that all over the forum.  Most of the mistakes are in setting up grub.conf or fstab.  Also, don't use genkernel; it just introduces another point of failure and makes troubleshooting more difficult.

Finally, if you get all that working, you'll probably have trouble with the CD-ROM drives and/or CD-writers if you have those too.  I had to put hdc=ide-scsi and hdd=ide-scsi on my kernel line in grub for both of my CD-ROM drives to work correctly.  I also had to edit my fstab to mount /dev/cdroms/cdrom0 and /dev/cdroms/cdrom1 respectively.  This allowed my burners to work both with burning software like K3B and with programs like XMMS.
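Something like this, as a sketch (hdc/hdd and the mount points are from my setup; yours may differ):

```
# grub.conf kernel line, with the ide-scsi mappings appended
kernel /boot/bzImage root=/dev/sda3 hdc=ide-scsi hdd=ide-scsi

# fstab entries for the two drives (devfs naming)
/dev/cdroms/cdrom0   /mnt/cdrom0   iso9660   noauto,ro,user   0 0
/dev/cdroms/cdrom1   /mnt/cdrom1   iso9660   noauto,ro,user   0 0
```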

I wish kernel developers would get the whole IDE/SCSI thing worked out.  It's very confusing using different kernels and trying to figure out whether something needs to be loaded as a module or not, whether to use SCSI, SCSI emulation, ATA, SATA, etc., how things are referenced and mounted, and so on.  It seems to change all the time with different kernels and different hardware.  It gets easier the more you have messed with it, but I can imagine how frustrating it is for first-timers.

Anyway, don't give up.  It took me weeks, but I got the gentoo-dev-sources with the 2.6.1 kernel running flawlessly on my board.  It's probably the best-performing machine I have ever used, and definitely the best-performing Linux machine I have ever used (not including servers, of course).  I have every single thing working flawlessly, no errors whatsoever on bootup.  Gigabit ethernet was no problem, sound works great, video is awesome (Nvidia GeForce), drive performance is great (including the burners), etc.  It's taken a lot of work, but it's been worth it.

----------

## Dr_b_

That's an awesome post, thanks for the reply.  I have by no means given up.

Look forward to your SATA howto.

The gentoo-2004.0-x86-20040121.iso liveCD experimental image actually works too.  I also have the Asus P4C800-E Deluxe mainboard and WD Raptors.

I agree with you, this whole SCSI-IDE thing is a bit confusing, as I'm not yet very familiar with how to set it up; for instance, I couldn't figure out why an IDE drive would need to be recognized as a SCSI device.

----------

## DarrenM

I just got a couple of SATA Raptors and wasn't too impressed at my test hdparm figures. I'm hoping I've missed something somewhere.

ATA-66 drive -t 34MB/s

ATA-100 drive -t 38MB/s

SATA Raptor -t 40MB/s

all get 900MB/s with -T

----------

## nmcsween

Maybe it's just me, but all those numbers seem a little low.  I get about 980MB and 43MB with hdparm on one SATA disk.

----------

## serotonin

I'm a noob when it comes to tweaking hard drives - do I have this thing running well?

Here's my result with the onboard VIA SATA controller on an XP 2700+, 512MB DDR3200:

Seagate 160GB Barracuda

```
hdparm -i /dev/hde

/dev/hde:

 Model=ST3160023AS, FwRev=3.18, SerialNo=3JS2E9YT
 Config={ HardSect NotMFM HdSw>15uSec Fixed DTR>10Mbs RotSpdTol>.5% }
 RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=4
 BuffType=unknown, BuffSize=8192kB, MaxMultSect=16, MultSect=16
 CurCHS=65535/1/63, CurSects=4128705, LBA=yes, LBAsects=268435455
 IORDY=on/off, tPIO={min:240,w/IORDY:120}, tDMA={min:120,rec:120}
 PIO modes:  pio0 pio1 pio2 pio3 pio4
 DMA modes:  mdma0 mdma1 mdma2
 UDMA modes: udma0 udma1 udma2
 AdvancedPM=no WriteCache=enabled
 Drive conforms to: ATA/ATAPI-6 T13 1410D revision 2:

 * signifies the current active mode

hdparm -T /dev/hde

/dev/hde:
 Timing buffer-cache reads:   980 MB in  2.00 seconds = 488.85 MB/sec

hdparm -t /dev/hde

/dev/hde:
 Timing buffered disk reads:  152 MB in  3.01 seconds =  50.56 MB/sec
```

Lastly,

```
hdparm -I /dev/hde

/dev/hde:

ATA device, with non-removable media
        Model Number:       ST3160023AS
        Serial Number:      3JS2E9YT
        Firmware Revision:  3.18
Standards:
        Used: ATA/ATAPI-6 T13 1410D revision 2
        Supported: 6 5 4 3
Configuration:
        Logical         max     current
        cylinders       16383   65535
        heads           16      1
        sectors/track   63      63
        --
        CHS current addressable sectors:    4128705
        LBA    user addressable sectors:  268435455
        LBA48  user addressable sectors:  312581808
        device size with M = 1024*1024:      152627 MBytes
        device size with M = 1000*1000:      160041 MBytes (160 GB)
Capabilities:
        LBA, IORDY(can be disabled)
        bytes avail on r/w long: 4      Queue depth: 1
        Standby timer values: spec'd by Standard
        R/W multiple sector transfer: Max = 16  Current = 16
        Recommended acoustic management value: 254, current value: 0
        DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6
             Cycle time: min=120ns recommended=120ns
        PIO: pio0 pio1 pio2 pio3 pio4
             Cycle time: no flow control=240ns  IORDY flow control=120ns
Commands/features:
        Enabled Supported:
           *    READ BUFFER cmd
           *    WRITE BUFFER cmd
           *    Host Protected Area feature set
           *    Look-ahead
           *    Write cache
           *    Power Management feature set
                Security Mode feature set
           *    SMART feature set
           *    FLUSH CACHE EXT command
           *    Mandatory FLUSH CACHE command
           *    Device Configuration Overlay feature set
           *    48-bit Address feature set
                SET MAX security extension
           *    DOWNLOAD MICROCODE cmd
           *    SMART self-test
           *    SMART error logging
Security:
                supported
        not     enabled
        not     locked
        not     frozen
        not     expired: security count
        not     supported: enhanced erase
Checksum: correct
```

I have yet to try with the SCSI drivers.  Any good?

----------

## Moled

I get 2000+ / 50-55

----------

## Crg

 *DarrenM wrote:*   

> I just got a couple of SATA Raptors and wasn't too impressed at my test hdparm figures. I'm hoping I've missed something somewhere.
> 
> ATA-66 drive -t 34MB/s
> 
> ATA-100 drive -t 38MB/s
> ...

 

Not surprising that they all get 900MB/s with -T, as it is testing the speed of reading directly from Linux's buffer cache - i.e. it isn't testing the drives at all.

The SATA drivers for Linux are quite new, so there might be more performance tweaking that can be done, but I wouldn't expect it to be much faster than ATA-133, and the performance really comes down to the disk drive itself.

For example, a Barracuda 7200RPM 120GB ATA-100 has a sustained avg transfer rate of >58MBytes/sec, whereas a Barracuda 7200RPM Serial ATA 120GB only has an avg sustained rate of >44MBytes/sec.

----------

## TheJackal

Just thought I'd add my 50 cents' worth:

```
/dev/sda:

 Timing buffer-cache reads:   1180 MB in  2.01 seconds = 588.32 MB/sec

 Timing buffered disk reads:  166 MB in  3.00 seconds =  55.29 MB/sec
```

Seagate 7200.7 80GB (ST380013AS) 8MB cache SATA and Kernel 2.6.5 (gentoo-dev-sources)

With my old Seagate Barracuda 80GB ATAIV (ATA100, 7200rpm, 2MB cache) - same kernel, I used to get :

```
/dev/hda:

Timing buffer-cache reads:  728 MB in  2.01 seconds = 362.60 MB/sec

Timing buffered disk reads:  120 MB in  3.04 seconds =  39.43 MB/sec
```

----------

## torklingberg

A SATA HOWTO would be really nice.  It really is a mess.

----------

## eNTi

```
/dev/sda:

 Timing buffer-cache reads:   1012 MB in  2.00 seconds = 505.07 MB/sec

 Timing buffered disk reads:  156 MB in  3.01 seconds =  51.80 MB/sec

```

Is there a way to optimize a SATA drive?

I've got a Seagate 160GB 7200rpm SATA UDMA6 (on a PCI Promise SATA150 TX2plus controller).

----------

## serendipity

I have a working ebuild to emerge patched 2.4.26 sources that handle SATA and SATA RAID 0 on the ICH5-R. I'm still working on the genkernel and grub mods, which is why nothing is posted yet, but with these three ebuilds, it should be possible to 

```
emerge i875p-iswraid-sources
emerge grub-iswraid
grub-install /dev/ataraid/d0
emerge genkernel-iswraid
genkernel
```

... then modify grub.conf to load the new kernel and initrd, and reboot.

Kernel 2.4.26 with:

- device mapper patches

- libata patch

- iswraid patch

- i2c and lm_sensors 2.8.7

- video4linux patches

The kernel source is the easy part, although the config (modules vs compiled in) can be tricky. The difficult part is genkernel, which needs to be hacked to create an appropriate initrd (problems with device files, module probing, errors in devfsd.conf). I'm working on a genkernel-iswraid ebuild.

Also, if you boot off a raid array, grub needs to be patched. I'm working on an ebuild for a grub-iswraid. 

If you want the kernel source ebuild, and you are technically oriented (I'll send you the genkernel diffs), let me know and I'll send you the ebuild. Don't expect much support. If you need support, then wait until I 've finished the ebuilds so that you can just emerge everything blindly.

Oh, and I also posted a nasty hack to hdparm for it to benchmark ataraid drives.  Look on this forum for "hdparam ataraid patch".  I'm getting 95MB/s off my two Raptors in RAID0 on the ICH5-R controller.

----------

## BlinkEye

It's about time someone started a thread about it.  I almost started one, but I don't have enough time right now.  So, just briefly (some of us already started a dispute here: https://forums.gentoo.org/viewtopic.php?t=8813&postdays=0&postorder=asc&start=75):

I don't think hdparm returns any accurate results at all about your hardware performance.  As suggested in the link above, try out IOZone to measure your performance and compare the results with others.  I collected 2 different IOZone files from other systems and users with a RAID and SATA HDs, and of course made some tests myself.  Even though my drives seem to be much slower according to "hdparm -t -T blabla", I won most of the tests in IOZone (http://www.iozone.org/ - have a look at the above-mentioned thread, as I posted some commands on how to use it).  We could still compare our IOZone files - I'm still in the mood to compete!
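For reference, an IOZone run along those lines might look like this (a sketch, not the exact command from that thread; the flag choices and scratch-file path are assumptions, so check iozone -h first):

```
# automatic mode, capped at a 1GB maximum file size, with the scratch
# file placed on the filesystem under test and an Excel-style report
iozone -a -g 1G -f /mnt/data/iozone.tmp -R -b results.xls
```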

----------

## Nephren-Ka

I have two sata drives:

Seagate 160:

hdparm -Tt /dev/sda

```

/dev/sda:

 Timing buffer-cache reads:   3872 MB in  2.00 seconds = 1935.33 MB/sec

 Timing buffered disk reads:  158 MB in  3.03 seconds =  52.19 MB/sec

```

TheJackal, eNTi, and others: note, this is the same drive you guys have, and your buffer-cache read results are WAY too slow.  You have something set up improperly; I'd check that out....

and Seagate 200:

```

hdparm -Tt /dev/sdb

/dev/sdb:

 Timing buffer-cache reads:   3840 MB in  2.00 seconds = 1920.29 MB/sec

 Timing buffered disk reads:  188 MB in  3.03 seconds =  62.06 MB/sec

```

Setting them up was really quite simple (I have an ABIT IS7-G, Intel i865-PE chipset)... simply build the SATA support into the kernel (in the SCSI setup part of the kernel config), and make sure you have i865 and ICH5 support compiled in as well... and you're golden  :Smile:  If anyone has a similar setup and needs help, don't hesitate to ask  :Smile:
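For reference, the relevant bits of a 2.6-era .config on an i865/ICH5 board look roughly like this (option names are from early libata, which lives under SCSI low-level drivers; verify against your own kernel version):

```
CONFIG_SCSI=y               # SCSI core (SATA drives show up as sdX)
CONFIG_BLK_DEV_SD=y         # SCSI disk support
CONFIG_SCSI_SATA=y          # Serial ATA (libata) support
CONFIG_SCSI_ATA_PIIX=y      # Intel PIIX/ICH5 SATA driver
```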

----------

## BlinkEye

I repeat: I do have 3 Seagate SATA drives in a RAID5 too, and I do not get anywhere near your results.  This isn't a settings problem but a testing issue.  hdparm doesn't say whether your drives are fast or not (although it does say whether DMA is enabled, which results in a faster or slower drive - but don't rely on the result).  Use IOZone, a simple, small piece of software which really tests your drives, and post your results here (maybe do the same BIG test as I did - it takes about an hour or so; follow the link to the thread in the Gentoo forum) so we can really compare, and after that we'll discuss settings.  Or do you really rely on a test which takes about 5 seconds to tell whether your drive is fast or not?

----------

## Corona688

 *BlinkEye wrote:*   

> I repeat: I do have 3 Seagate SATA drives in a RAID5 too, and I do not get anywhere near your results.  This isn't a settings problem but a testing issue.  hdparm doesn't say whether your drives are fast or not (although it does say whether DMA is enabled, which results in a faster or slower drive - but don't rely on the result).  Use IOZone, a simple, small piece of software which really tests your drives, and post your results here (maybe do the same BIG test as I did - it takes about an hour or so; follow the link to the thread in the Gentoo forum) so we can really compare, and after that we'll discuss settings.  Or do you really rely on a test which takes about 5 seconds to tell whether your drive is fast or not?

There's a line at which benchmarks turn into pointless hard-drive torture.  I'd rather err on the side of possible inaccuracy than the side of unnecessary wear.

----------

## Nephren-Ka

I'm not sure if you were talking to me or not; however, I am not here to discuss the merits (or lack thereof) of hdparm testing.  All I was saying is that people comparing apples to apples (all of us with the same drives, or close, and using the same tool) are getting very different results.

It is indeed a settings problem, because I was getting the same low performance numbers they were getting when I did not have the proper options compiled into my kernel.

 *BlinkEye wrote:*   

> I repeat: I do have 3 Seagate SATA drives in a RAID5 too, and I do not get anywhere near your results.  This isn't a settings problem but a testing issue.  hdparm doesn't say whether your drives are fast or not (although it does say whether DMA is enabled, which results in a faster or slower drive - but don't rely on the result).  Use IOZone, a simple, small piece of software which really tests your drives, and post your results here (maybe do the same BIG test as I did - it takes about an hour or so; follow the link to the thread in the Gentoo forum) so we can really compare, and after that we'll discuss settings.  Or do you really rely on a test which takes about 5 seconds to tell whether your drive is fast or not?

 

----------

## BlinkEye

 *Nephren-Ka wrote:*   

> I'm not sure if you were talking to me or not; however, I am not here to discuss the merits (or lack thereof) of hdparm testing.  All I was saying is that people comparing apples to apples (all of us with the same drives, or close, and using the same tool) are getting very different results.
> 
> It is indeed a settings problem, because I was getting the same low performance numbers they were getting when I did not have the proper options compiled into my kernel.

 

Well, that sounds interesting, but I honestly don't know what could cause the low buffer-cache readings.  Would you please tell us how you fixed it...?  You mentioned some posts above that you had the same problems and fixed them by enabling some motherboard-specific SATA driver and the RAID stuff, but I can't believe that this is the issue.  How would someone manage to run a SATA RAID without having enabled SATA and RAID support?

----------

## BlinkEye

 *Corona688 wrote:*   

> There's a line at which benchmarks turn into pointless hard-drive torture.  I'd rather err on the side of possible inaccuracy than the side of unnecessary wear.

 

That's a point.  I just wanted to suggest that before someone loses a restless night over a bad benchmark from hdparm, he should use another benchmark.  It annoyed me in the end, I admit   :Wink:

----------

## Nephren-Ka

I'm not running SATA RAID, I just have 2 standalone drives.  However, the problem I had is that I didn't have the proper SATA controller driver compiled in, so the kernel wasn't able to use DMA properly with the drives... I'll post my kernel .config if you guys want?

 *BlinkEye wrote:*   

>  *Nephren-Ka wrote:*   I'm not sure if you were talking to me or not; however, I am not here to discuss the merits (or lack thereof) of hdparm testing.  All I was saying is that people comparing apples to apples (all of us with the same drives, or close, and using the same tool) are getting very different results. 
> 
> It is indeed a settings problem, because I was getting the same low performance numbers they were getting when I did not have the proper options compiled into my kernel. 
> 
> Well, that sounds interesting, but I honestly don't know what could cause the low buffer-cache readings.  Would you please tell us how you fixed it...?  You mentioned some posts above that you had the same problems and fixed them by enabling some motherboard-specific SATA driver and the RAID stuff, but I can't believe that this is the issue.  How would someone manage to run a SATA RAID without having enabled SATA and RAID support?

 

----------

## taskara

bonnie is your friend.. swap hdparm for it  :Smile:

----------

## serendipity

I liked this iozone thing. Here are the results of 45 non-stop minutes of my two maxtor 120GBs being thrashed by iozone, max file size specified as 2GB. I'm not too sure how often I'd like to run it, because the disks really do take a beating....

http://perso.wanadoo.fr/ic/iozoneresults.html

----------

## lbrtuk

The problem with iozone is that it's not filesystem-independent: it works on top of the filesystem.  Therefore someone using reiserfs will get totally different results from someone using ext3, and it will have little to do with the hardware.

----------

## BlinkEye

 *serendipity wrote:*   

> I liked this iozone thing. Here are the results of 45 non-stop minutes of my two maxtor 120GBs being thrashed by iozone, max file size specified as 2GB. I'm not too sure how often I'd like to run it, because the disks really do take a beating....
> 
> http://perso.wanadoo.fr/ic/iozoneresults.html

 

Gnah, I did the test with a file size of 1GB.  I'd like to compare, but to get useful results I really suggest not doing anything else while IOZone runs.  If you want to compare and are willing to do another test, please mail me your results (not only the graphics) along with the command you executed (please use a file size of 1GB - I have 2 other test results from other users).  I'll PM you my email address.

----------

## BlinkEye

 *lbrtuk wrote:*   

> The problem with iozone is that it's not filesystem-independent: it works on top of the filesystem.  Therefore someone using reiserfs will get totally different results from someone using ext3, and it will have little to do with the hardware.

 

I don't see a problem there, because I want to know how fast my drives are, configured as they are.  I'm not interested in how fast they could be if everything were perfect - what's the use of that?  That's in fact another reason why one SHOULD use IOZone to benchmark his system.

----------

## lbrtuk

 *BlinkEye wrote:*   

> I don't see a problem there, because I want to know how fast my drives are, configured as they are.  I'm not interested in how fast they could be if everything were perfect - what's the use of that?  That's in fact another reason why one SHOULD use IOZone to benchmark his system.

 

It's because if you say "Hey, I'm getting 38MB/s with my setup: xyz" and someone comes back and says "Hi, I've got a very similar setup: xyz, but I'm getting 53MB/s.  You must have configured something wrong," that can be very useful information.  But if you use iozone and you're both using different filesystems, it's completely worthless information when it comes to setting up drivers and udma modes.

----------

## BlinkEye

 *lbrtuk wrote:*   

> It's because if you say "Hey, I'm getting 38MB/s with my setup: xyz" and someone comes back and says "Hi, I've got a very similar setup: xyz, but I'm getting 53MB/s.  You must have configured something wrong," that can be very useful information.  But if you use iozone and you're both using different filesystems, it's completely worthless information when it comes to setting up drivers and udma modes.

 

Your first point may be right if hdparm -t -T brings up useful and accurate results, so here's an example:

system specs #1: AMD64 3200+, 3x512MB DDR 400MHz, 3x120GB Seagate SATA drives (7200 RPM) in a RAID5

```
# hdparm -t -T /dev/md1

/dev/md1:

 Timing buffer-cache reads:   1756 MB in  2.00 seconds = 671.77 MB/sec

 Timing buffered disk reads:   56 MB in  3.02 seconds =  55.58 MB/sec

```

system specs #2: Intel Pentium M 1200MHz, 1x512MB SDRAM, 1x 40GB ATA drive (5400 RPM)

```

/dev/hda:

 Timing buffer-cache reads:   1756 MB in  2.00 seconds = 876.82 MB/sec

 Timing buffered disk reads:   56 MB in  3.02 seconds =  18.53 MB/sec

```

according to the manual of hdparm:

 *Quote:*   

>        -T     Perform timings of cache reads for benchmark and comparison  purposes. For meaningful results, this operation should be repeated 2-3 times on an otherwise inactive system (no other active processes) with at least a  couple  of megabytes  of free memory.  This displays the speed of reading directly from the Linux buffer cache without disk access.  This measurement is essentially an  indication  of the throughput of the processor, cache, and memory of the system under test.

 

So one result doesn't really say anything about your drive, and what it does say is that my laptop is a lot faster than my server?  That's not a useful result.  The other result may be OK, but it doesn't say anything at all about your RAID or your drives when you're working with them.

So, what should be configured wrong if someone gets different transfer rates?  As we are talking about SATA drives, the misconfiguration of IDE drives on the same IDE channel does not apply (if you're running a RAID; otherwise it doesn't matter for the test).  As most of us get unsatisfying results from RAID devices, you MUST have enabled the right kernel settings, or your RAID wouldn't run at all.  So, I want to know how fast my drives are; hence I do a test with IOZone and compare the results to yours.  Maybe my drives are slower, which would be a result of the filesystem or the settings of the RAID (I guess you know the drill).  Then I'd be totally persuaded that I'm not getting the utmost out of my RAID and would either change the filesystem or change some settings, because now I know: I made some hard tests which reflect the daily situation of using my drives, so it MUST be a settings problem.

----------

## lbrtuk

That's entirely what I'm talking about!

Count the number of threads on this forum which are "I'm not sure udma is working properly" or "My hard disk is making clicking sounds, and hdparm -tT says this...".  When you're trying to troubleshoot problems like that, you want a tool that has nothing to do with the filesystem; involving it would just overcomplicate things.

 *Quote:*   

> doesn't say anything at all about your RAID or your drives when you're working with them.

 

I know, I'm not talking about that. I'm talking about troubleshooting hardware problems.

----------

## BlinkEye

 *lbrtuk wrote:*   

> I'm talking about troubleshooting hardware problems

 

I agree!  This is what I forgot to mention in my previous post: for quick and easy troubleshooting there's no better way than to fire up hdparm.  But from the thread I thought it was all about REALLY benchmarking your drives...

----------

## lbrtuk

Well, no.  When you're asking about SATA performance, what you're asking is "Hi guys, I've got a SATA system and here are the numbers I'm getting.  Do you think I've got it set up right?" and not "Hi, what real-life performance should I expect to get with SATA?".

He's not asking a filesystem question.

Anyway, this has gotten gravely off topic.

----------

## BlinkEye

Now you've given me two reasons to back down   :Wink:

----------

## srs5694

 *lbrtuk wrote:*   

> The problem with iozone is it's not filesystem independant. It works on top of the filesystem. Therefore someone using reiserfs will get totally different results to someone using ext3 and it will have little to do with the hardware.

 

That's the impression I get. Whatever its flaws, hdparm is a fairly direct test of hardware performance, and in particular, sustained (on computer timescales) raw read operations. IOzone, from what I've seen in the documentation, is a filesystem tester. As such, it's dependent on hardware, but it's also dependent on the filesystem implementation, data structures, and maybe even stuff like how full or fragmented a specific disk is. (I've not looked into it in enough depth to know what might influence its results.) IOzone's hardware dependency will also test somewhat different features than hdparm; for instance, I'd expect IOzone performance to be more influenced by head seeks.

In sum, my impression is that hdparm is the superior tool for testing whether your kernel parameters and drive DMA features are set reasonably; it's quick and directly tests the drive performance factors that'll be most influenced by kernel settings. IOzone might be a superior tool for comparing different brands or models of drives or even disk controllers if you perform sufficiently controlled tests. If you just compare your disk to your neighbor's using your existing installations, there are likely to be too many variables to draw valid conclusions about your hardware -- or your kernel settings, for that matter.

As to the mention of buffer-cache readings in the hdparm output, that's mostly a measure of your computer's memory subsystem; it's the performance of the buffer cache that the kernel maintains. Disk hardware has little or no influence on this measure, as I understand it. Low values might result because of a weak CPU, poor motherboard memory subsystem, slower-than-optimal RAM, etc. This value can vary much more dramatically across systems than actual disk performance. For instance, my Athlon 64 3000+ system gets values of about 1180 MB/s for buffer-cache reads and 30MB/s for disk throughput on an older IDE disk, whereas my 266MHz iMac gets values of 71MB/s and 13MB/s. Clearly, the Athlon 64's memory performance blows away the iMac's, but the actual disk subsystem, although better, isn't nearly so dramatically better.

----------

## BlinkEye

How do you explain this result?

system specs #1: AMD64 3200+, 3x512MB DDR 400MHz, 3x120GB Seagate SATA drives (7200 RPM) in a RAID5

```
# hdparm -t -T /dev/md1

/dev/md1:

 Timing buffer-cache reads:   1756 MB in  2.00 seconds = 671.77 MB/sec

 Timing buffered disk reads:   56 MB in  3.02 seconds =  55.58 MB/sec

```

system specs #2: Intel Pentium M 1200MHz, 1x512MB SDRAM, 1x 40GB ATA drive (5400 RPM)

```

/dev/hda:

 Timing buffer-cache reads:   1756 MB in  2.00 seconds = 876.82 MB/sec

 Timing buffered disk reads:   56 MB in  3.02 seconds =  18.53 MB/sec

```

according to the manual of hdparm:

 *Quote:*   

>        -T     Perform timings of cache reads for benchmark and comparison  purposes. For meaningful results, this operation should be repeated 2-3 times on an otherwise inactive system (no other active processes) with at least a  couple  of megabytes  of free memory.  This displays the speed of reading directly from the Linux buffer cache without disk access.  This measurement is essentially an  indication  of the throughput of the processor, cache, and memory of the system under test.

 

----------

## Gherald2

In that particular system, the md1 RAID5 cannot keep up with >670MB/sec cache speeds.  Note, however, that it is plenty fast enough to keep up with the ~55MB/s of actual drive throughput.

On System #1 do:

hdparm -tT /dev/hdX

You should run it 3 times on each of your SATA drives (9 times total) and round your figures....
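Averaging repeated runs by hand is error-prone, so here's a small Python sketch that pulls the MB/sec figures out of hdparm -tT output and averages them (the sample outputs below are invented for illustration):

```python
import re
from statistics import mean

def rates(hdparm_output):
    """Return (cache_MB_s, disk_MB_s) parsed from one `hdparm -tT` run."""
    found = [float(x) for x in re.findall(r"=\s*([\d.]+)\s*MB/sec", hdparm_output)]
    return found[0], found[1]

# Three hypothetical runs on the same drive (numbers invented):
runs = [
    " Timing buffer-cache reads:   1756 MB in  2.00 seconds = 878.00 MB/sec\n"
    " Timing buffered disk reads:   168 MB in  3.02 seconds =  55.63 MB/sec",
    " Timing buffer-cache reads:   1740 MB in  2.00 seconds = 870.00 MB/sec\n"
    " Timing buffered disk reads:   165 MB in  3.01 seconds =  54.82 MB/sec",
    " Timing buffer-cache reads:   1764 MB in  2.00 seconds = 882.00 MB/sec\n"
    " Timing buffered disk reads:   166 MB in  3.00 seconds =  55.33 MB/sec",
]

cache_avg = mean(rates(r)[0] for r in runs)
disk_avg = mean(rates(r)[1] for r in runs)
print(f"cache: {cache_avg:.1f} MB/s, disk: {disk_avg:.1f} MB/s")
```

Feed it the captured output of each run and compare the averages rather than any single run.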

----------

## carpman

Just for comparison, I have two Maxtor DiamondMax Plus 9 40 GB (not SATA) drives in RAID0 on an ITE controller with kernel 2.6.7, and I get:

```

hdparm -t /dev/sda 

/dev/sda:

 Timing buffered disk reads:  216 MB in  3.01 seconds =  71.80 MB/sec

```

----------

## arsen

My software RAID0, 2x Maxtor SATA 80 GB.

With /dev/md3 mounted:

```

hdparm -tT /dev/md3 

/dev/md3:

 Timing buffer-cache reads:   1316 MB in  2.00 seconds = 657.44 MB/sec

 Timing buffered disk reads:  238 MB in  3.02 seconds =  78.87 MB/sec

```

With /dev/md3 unmounted:

```

hdparm -tT /dev/md3

/dev/md3:

 Timing buffer-cache reads:   1276 MB in  2.00 seconds = 637.14 MB/sec

 Timing buffered disk reads:  306 MB in  3.01 seconds = 101.71 MB/sec

```

Hmmm, mounted slow, unmounted fast....

----------

## rkrenzis

I have a Maxtor 6Y250M0 SATA drive on a SOYO CY-K8 Plus (nForce3-based) board.

Two questions:

1. Is it an error that the system recognizes it as an ATA drive rather than a SATA drive?

2. Can my performance be tuned? (from hdparm -tT /dev/hdc)

/dev/hdc:

 Timing buffer-cache reads:   1960 MB in  2.00 seconds = 979.17 MB/sec

 Timing buffered disk reads:   46 MB in  3.11 seconds =  14.80 MB/sec

I'm quite impressed to see individuals running raid-0 with reads in the mid-200s to low-300s.

Entries in /etc/conf.d/hdparm:

disc0_args="-d1 -A1 -m16 -u1 -a256 -X69"

TIA.
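For anyone wondering, here's that conf.d line again with each flag annotated from my reading of the hdparm man page (worth double-checking against your hdparm version; most of these only apply to drives on the legacy IDE driver):

```shell
# /etc/conf.d/hdparm -- flag-by-flag annotation (per the hdparm man page):
disc0_args="-d1 -A1 -m16 -u1 -a256 -X69"
#            |   |   |    |   |     `-- -X69: set transfer mode 69 = UDMA mode 5 (64+5)
#            |   |   |    |   `------ -a256: filesystem read-ahead of 256 sectors
#            |   |   |    `---------- -u1:   unmask other IRQs during disk I/O
#            |   |   `--------------- -m16:  transfer 16 sectors per interrupt (multcount)
#            |   `------------------- -A1:   enable the drive's read-lookahead feature
#            `----------------------- -d1:   enable DMA
```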

----------

## Dr_b_

My SATA Drive:

```
ENROL-V2 ~ # hdparm -tT /dev/sda

/dev/sda:

 Timing buffer-cache reads:   3724 MB in  2.00 seconds = 1862.28 MB/sec

 Timing buffered disk reads:  164 MB in  3.01 seconds =  54.53 MB/sec

```

It wouldn't work for me without SCSI support on top; I couldn't figure out how to get it working with the PIIX driver alone. Still, not bad performance.

Drive is a WD Raptor, 36G, board is an Asus P4C800E

----------

## rkrenzis

Short answer is to wait for 2.6.8. I'm going to grab a prepatch and see if it improves the overall speed.

----------

## c0balt

Hi,

is there any way to improve performance on SATA with hdparm?

I.e., applying settings like on IDE.

```
[mybox ~]# hdparm -Tt /dev/sda /dev/hda

/dev/sda:

 Timing buffer-cache reads:   3736 MB in  2.00 seconds = 1867.35 MB/sec

 Timing buffered disk reads:  206 MB in  3.02 seconds =  68.18 MB/sec

/dev/hda:

 Timing buffer-cache reads:   3784 MB in  2.00 seconds = 1890.40 MB/sec

 Timing buffered disk reads:   64 MB in  3.01 seconds =  21.28 MB/sec

```

Not bad, but maybe it can get better with improved settings?

----------

## rkrenzis

Greetings, c0balt!

My understanding from the newsgroups is that Linux can identify the optimum settings for most new drives, so unless you are having problems, tweaking isn't necessary. You are getting much better disk reads than I am. I'm getting worse benchmarks than your IDE drive (mine is SATA) because SATA drivers for the nVidia chipset aren't incorporated in the Linux kernel yet.

I'm going to have to kiss bootsplash goodbye until the final 2.6.8 comes out; then I should get the desired speed (I'm almost sure of this).

I'll send an update later this evening regarding 2.6.8-rc2 and the nVidia SATA drivers.

...BTW, c0balt, what part of Germany are you in? I have family in Augsburg.

----------

## rkrenzis

Okay, I've gotten a vanilla 2.6.7 kernel and patched it to 2.6.8-rc2. My drive still shows up as an ATA drive.

This is very annoying.  I even get this lovely message:

 *Quote:*   

> hdc: Speed warnings UDMA 3/4/5 is not functional.

 

Any thoughts or ideas?

I have SCSI, SCSI disk, SATA, and nVidia SATA driver support statically compiled into the kernel.

----------

## rkrenzis

I tried 2.6.8-rc2-bk12 and still no go on the nVidia SATA driver. I tried disabling all IDE controllers in the BIOS and disabling all IDE disks, but it still shows up as an IDE disk. I also tried the SATA driver under the "IDE" menu, but still to no avail. The disk in my Pentium 200 connected to a UDMA133 controller is faster than this heap of junk.

Any ideas?

----------

## rkrenzis

dmesg output

```

Bootdata ok (command line is root=/dev/hde3 vga=795)

Linux version 2.6.8-rc2-bk12 (root@clawhammer) (gcc version 3.3.4 20040623 (Gentoo Linux 3.3.4-r1, ssp-3.3.2-2, pie-8.7.6)) #3 Mon Aug 2 22:26:21 GMT 2004

BIOS-provided physical RAM map:

 BIOS-e820: 0000000000000000 - 000000000009f800 (usable)

 BIOS-e820: 000000000009f800 - 00000000000a0000 (reserved)

 BIOS-e820: 00000000000f0000 - 0000000000100000 (reserved)

 BIOS-e820: 0000000000100000 - 000000001fff0000 (usable)

 BIOS-e820: 000000001fff0000 - 000000001fff3000 (ACPI NVS)

 BIOS-e820: 000000001fff3000 - 0000000020000000 (ACPI data)

 BIOS-e820: 00000000fec00000 - 00000000fec01000 (reserved)

 BIOS-e820: 00000000fee00000 - 00000000fef00000 (reserved)

 BIOS-e820: 00000000fefffc00 - 00000000ff000000 (reserved)

 BIOS-e820: 00000000ffff0000 - 0000000100000000 (reserved)

No mptable found.

On node 0 totalpages: 131056

  DMA zone: 4096 pages, LIFO batch:1

  Normal zone: 126960 pages, LIFO batch:16

  HighMem zone: 0 pages, LIFO batch:1

PCI bridge 00:0a from 10de found. Setting "noapic". Overwrite with "apic"

ACPI: RSDP (v000 Nvidia                                    ) @ 0x00000000000f62c0

ACPI: RSDT (v001 Nvidia AWRDACPI 0x42302e31 AWRD 0x00000000) @ 0x000000001fff3000

ACPI: FADT (v001 Nvidia AWRDACPI 0x42302e31 AWRD 0x00000000) @ 0x000000001fff3040

ACPI: MADT (v001 Nvidia AWRDACPI 0x42302e31 AWRD 0x00000000) @ 0x000000001fff8000

ACPI: DSDT (v001 NVIDIA AWRDACPI 0x00001000 MSFT 0x0100000e) @ 0x0000000000000000

ACPI: Local APIC address 0xfee00000

ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)

Processor #0 15:4 APIC version 16

ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])

ACPI: Skipping IOAPIC probe due to 'noapic' option.

Using ACPI for processor (LAPIC) configuration information

Intel MultiProcessor Specification v1.1

    Virtual Wire compatibility mode.

OEM ID: OEM00000 <6>Product ID: PROD00000000 <6>APIC at: 0xFEE00000

I/O APIC #2 Version 17 at 0xFEC00000.

Processors: 1

Checking aperture...

CPU 0: aperture @ c0000000 size 256 MB

Built 1 zonelists

Kernel command line: root=/dev/hde3 vga=795 console=tty0

Initializing CPU#0

PID hash table entries: 16 (order 4: 256 bytes)

time.c: Using 1.193182 MHz PIT timer.

time.c: Detected 2000.025 MHz processor.

Console: colour dummy device 80x25

Dentry cache hash table entries: 131072 (order: 8, 1048576 bytes)

Inode-cache hash table entries: 65536 (order: 7, 524288 bytes)

Memory: 510932k/524224k available (1864k kernel code, 12536k reserved, 990k data, 432k init)

Calibrating delay loop... 3964.92 BogoMIPS

Mount-cache hash table entries: 256 (order: 0, 4096 bytes)

CPU: L1 I Cache: 64K (64 bytes/line), D cache 64K (64 bytes/line)

CPU: L2 Cache: 1024K (64 bytes/line)

CPU: AMD Athlon(tm) 64 Processor 3200+ stepping 08

Using local APIC NMI watchdog using perfctr0

Using local APIC timer interrupts.

Detected 12.500 MHz APIC timer.

NET: Registered protocol family 16

PCI: Using configuration type 1

mtrr: v2.0 (20020519)

ACPI: Subsystem revision 20040326

ACPI: IRQ9 SCI: Level Trigger.

ACPI: Interpreter enabled

ACPI: Using PIC for interrupt routing

ACPI: PCI Root Bridge [PCI0] (00:00)

PCI: Probing PCI hardware (bus 00)

ACPI: PCI Interrupt Routing Table [\_SB_.PCI0._PRT]

ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.HUB0._PRT]

ACPI: Power Resource [ISAV] (on)

ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.AGPB._PRT]

ACPI: PCI Interrupt Link [LNK1] (IRQs 3 4 5 6 7 9 10 11 12 14 15) *0, disabled.

ACPI: PCI Interrupt Link [LNK2] (IRQs 3 4 5 6 7 9 10 11 12 14 15) *0, disabled.

ACPI: PCI Interrupt Link [LNK3] (IRQs 3 4 5 6 7 *9 10 11 12 14 15)

ACPI: PCI Interrupt Link [LNK4] (IRQs 3 4 5 6 7 9 10 *11 12 14 15)

ACPI: PCI Interrupt Link [LNK5] (IRQs 3 4 5 6 7 *9 10 11 12 14 15)

ACPI: PCI Interrupt Link [LUBA] (IRQs 3 4 5 6 7 9 10 *11 12 14 15)

ACPI: PCI Interrupt Link [LUBB] (IRQs 3 4 5 6 7 9 10 *11 12 14 15)

ACPI: PCI Interrupt Link [LMAC] (IRQs 3 4 *5 6 7 9 10 11 12 14 15)

ACPI: PCI Interrupt Link [LAPU] (IRQs 3 4 5 6 7 9 10 11 12 14 15) *0, disabled.

ACPI: PCI Interrupt Link [LACI] (IRQs 3 4 *5 6 7 9 10 11 12 14 15)

ACPI: PCI Interrupt Link [LMCI] (IRQs 3 4 5 6 7 9 10 11 12 14 15) *0, disabled.

ACPI: PCI Interrupt Link [LSMB] (IRQs 3 4 *5 6 7 9 10 11 12 14 15)

ACPI: PCI Interrupt Link [LUB2] (IRQs 3 4 5 6 7 9 10 *11 12 14 15)

ACPI: PCI Interrupt Link [LFIR] (IRQs 3 4 5 6 7 9 10 11 12 14 15) *0, disabled.

ACPI: PCI Interrupt Link [L3CM] (IRQs 3 4 5 6 7 9 10 11 12 14 15) *0, disabled.

ACPI: PCI Interrupt Link [LIDE] (IRQs 3 4 5 6 7 9 10 11 12 14 15) *0, disabled.

ACPI: PCI Interrupt Link [LSID] (IRQs 3 4 5 6 7 9 10 *11 12 14 15)

ACPI: PCI Interrupt Link [APC1] (IRQs *16), disabled.

ACPI: PCI Interrupt Link [APC2] (IRQs *17), disabled.

ACPI: PCI Interrupt Link [APC3] (IRQs *18), disabled.

ACPI: PCI Interrupt Link [APC4] (IRQs *19), disabled.

ACPI: PCI Interrupt Link [APC5] (IRQs *16), disabled.

ACPI: PCI Interrupt Link [APCF] (IRQs 20 21 22) *0, disabled.

ACPI: PCI Interrupt Link [APCG] (IRQs 20 21 22) *0, disabled.

ACPI: PCI Interrupt Link [APCH] (IRQs 20 21 22) *0, disabled.

ACPI: PCI Interrupt Link [APCI] (IRQs 20 21 22) *0, disabled.

ACPI: PCI Interrupt Link [APCJ] (IRQs 20 21 22) *0, disabled.

ACPI: PCI Interrupt Link [APCK] (IRQs 20 21 22) *0, disabled.

ACPI: PCI Interrupt Link [APCS] (IRQs *23), disabled.

ACPI: PCI Interrupt Link [APCL] (IRQs 20 21 22) *0, disabled.

ACPI: PCI Interrupt Link [APCM] (IRQs 20 21 22) *0, disabled.

ACPI: PCI Interrupt Link [AP3C] (IRQs 20 21 22) *0, disabled.

ACPI: PCI Interrupt Link [APCZ] (IRQs 20 21 22) *0, disabled.

ACPI: PCI Interrupt Link [APSI] (IRQs 20 21 22) *0, disabled.

SCSI subsystem initialized

usbcore: registered new driver usbfs

usbcore: registered new driver hub

PCI: Using ACPI for IRQ routing

ACPI: PCI Interrupt Link [LSMB] enabled at IRQ 5

ACPI: PCI interrupt 0000:00:01.1[A] -> GSI 5 (level, low) -> IRQ 5

ACPI: PCI Interrupt Link [LUBA] enabled at IRQ 11

ACPI: PCI interrupt 0000:00:02.0[A] -> GSI 11 (level, low) -> IRQ 11

ACPI: PCI Interrupt Link [LUBB] enabled at IRQ 11

ACPI: PCI interrupt 0000:00:02.1[B] -> GSI 11 (level, low) -> IRQ 11

ACPI: PCI Interrupt Link [LUB2] enabled at IRQ 11

ACPI: PCI interrupt 0000:00:02.2[C] -> GSI 11 (level, low) -> IRQ 11

ACPI: PCI Interrupt Link [LMAC] enabled at IRQ 5

ACPI: PCI interrupt 0000:00:05.0[A] -> GSI 5 (level, low) -> IRQ 5

ACPI: PCI Interrupt Link [LACI] enabled at IRQ 5

ACPI: PCI interrupt 0000:00:06.0[A] -> GSI 5 (level, low) -> IRQ 5

ACPI: PCI Interrupt Link [LSID] enabled at IRQ 11

ACPI: PCI interrupt 0000:00:09.0[A] -> GSI 11 (level, low) -> IRQ 11

ACPI: PCI Interrupt Link [LNK3] enabled at IRQ 9

ACPI: PCI interrupt 0000:02:06.0[A] -> GSI 9 (level, low) -> IRQ 9

ACPI: PCI Interrupt Link [LNK4] enabled at IRQ 11

ACPI: PCI interrupt 0000:02:07.0[A] -> GSI 11 (level, low) -> IRQ 11

ACPI: PCI Interrupt Link [LNK5] enabled at IRQ 9

ACPI: PCI interrupt 0000:01:00.0[A] -> GSI 9 (level, low) -> IRQ 9

agpgart: Detected AGP bridge 0

agpgart: Setting up Nforce3 AGP.

agpgart: Maximum main memory to use for agp memory: 439M

agpgart: AGP aperture is 256M @ 0xc0000000

PCI-DMA: Disabling IOMMU.

vesafb: framebuffer at 0xb0000000, mapped to 0xffffff000008e000, size 10240k

vesafb: mode is 1280x1024x32, linelength=5120, pages=0

vesafb: scrolling: redraw

vesafb: directcolor: size=8:8:8:8, shift=24:16:8:0

fb0: VESA VGA frame buffer device

IA32 emulation $Id: sys_ia32.c,v 1.32 2002/03/24 13:02:28 ak Exp $

Total HugeTLB memory allocated, 0

devfs: 2004-01-31 Richard Gooch (rgooch@atnf.csiro.au)

devfs: boot_options: 0x1

Console: switching to colour frame buffer device 160x64

Real Time Clock Driver v1.12

Linux agpgart interface v0.100 (c) Dave Jones

Hangcheck: starting hangcheck timer 0.5.0 (tick is 180 seconds, margin is 60 seconds).

Serial: 8250/16550 driver $Revision: 1.90 $ 8 ports, IRQ sharing disabled

ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A

ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A

Using anticipatory io scheduler

floppy0: no floppy controllers found

RAMDISK driver initialized: 16 RAM disks of 4096K size 1024 blocksize

loop: loaded (max 8 devices)

forcedeth.c: Reverse Engineered nForce ethernet driver. Version 0.28.

ACPI: PCI interrupt 0000:00:05.0[A] -> GSI 5 (level, low) -> IRQ 5

PCI: Setting latency timer of device 0000:00:05.0 to 64

eth0: forcedeth.c: subsystem: 010de:0c11 bound to 0000:00:05.0

Uniform Multi-Platform E-IDE driver Revision: 7.00alpha2

ide: Assuming 33MHz system bus speed for PIO modes; override with idebus=xx

NFORCE3-150: IDE controller at PCI slot 0000:00:08.0

NFORCE3-150: chipset revision 165

NFORCE3-150: not 100% native mode: will probe irqs later

NFORCE3-150: 0000:00:08.0 (rev a5) UDMA133 controller

    ide0: BM-DMA at 0xf000-0xf007, BIOS settings: hda:DMA, hdb:DMA

    ide1: BM-DMA at 0xf008-0xf00f, BIOS settings: hdc:DMA, hdd:DMA

hda: Hewlett-Packard DVD Writer 100, ATAPI CD/DVD-ROM drive

ide0 at 0x1f0-0x1f7,0x3f6 on irq 14

NFORCE3-150: IDE controller at PCI slot 0000:00:09.0

ACPI: PCI interrupt 0000:00:09.0[A] -> GSI 11 (level, low) -> IRQ 11

NFORCE3-150: chipset revision 245

NFORCE3-150: 0000:00:09.0 (rev f5) UDMA133 controller

NFORCE3-150: 100% native mode on irq 11

    ide2: BM-DMA at 0xd000-0xd007, BIOS settings: hde:DMA, hdf:pio

hde: Maxtor 6Y250M0, ATA DISK drive

ide2 at 0x9f0-0x9f7,0xbf2 on irq 11

hde: max request size: 1024KiB

hde: 490234752 sectors (251000 MB) w/7936KiB Cache, CHS=30515/255/63, UDMA(33)

 /dev/ide/host2/bus0/target0/lun0: p1 p2 p3 p4 < p5 p6 p7 p8 >

hda: ATAPI 32X DVD-ROM CD-R/RW drive, 2048kB Cache, UDMA(33)

Uniform CD-ROM driver Revision: 3.20

libata version 1.02 loaded.

ACPI: PCI interrupt 0000:00:02.2[C] -> GSI 11 (level, low) -> IRQ 11

ehci_hcd 0000:00:02.2: nVidia Corporation nForce3 USB 2.0

PCI: Setting latency timer of device 0000:00:02.2 to 64

ehci_hcd 0000:00:02.2: irq 11, pci mem ffffff0000af1000

ehci_hcd 0000:00:02.2: new USB bus registered, assigned bus number 1

PCI: cache line size of 64 is not supported by device 0000:00:02.2

ehci_hcd 0000:00:02.2: USB 2.0 enabled, EHCI 1.00, driver 2004-May-10

hub 1-0:1.0: USB hub found

hub 1-0:1.0: 6 ports detected

ohci_hcd: 2004 Feb 02 USB 1.1 'Open' Host Controller (OHCI) Driver (PCI)

ohci_hcd: block sizes: ed 80 td 96

ACPI: PCI interrupt 0000:00:02.0[A] -> GSI 11 (level, low) -> IRQ 11

ohci_hcd 0000:00:02.0: nVidia Corporation nForce3 USB 1.1

PCI: Setting latency timer of device 0000:00:02.0 to 64

ohci_hcd 0000:00:02.0: irq 11, pci mem ffffff0000af3000

ohci_hcd 0000:00:02.0: new USB bus registered, assigned bus number 2

hub 2-0:1.0: USB hub found

hub 2-0:1.0: 3 ports detected

ACPI: PCI interrupt 0000:00:02.1[B] -> GSI 11 (level, low) -> IRQ 11

ohci_hcd 0000:00:02.1: nVidia Corporation nForce3 USB 1.1 (#2)

PCI: Setting latency timer of device 0000:00:02.1 to 64

ohci_hcd 0000:00:02.1: irq 11, pci mem ffffff0000af5000

ohci_hcd 0000:00:02.1: new USB bus registered, assigned bus number 3

hub 3-0:1.0: USB hub found

hub 3-0:1.0: 3 ports detected

usbcore: registered new driver usblp

drivers/usb/class/usblp.c: v0.13: USB Printer Device Class driver

Initializing USB Mass Storage driver...

usbcore: registered new driver usb-storage

USB Mass Storage support registered.

usbcore: registered new driver usbhid

drivers/usb/input/hid-core.c: v2.0:USB HID core driver

mice: PS/2 mouse device common for all mice

serio: i8042 AUX port at 0x60,0x64 irq 12

input: ImPS/2 Generic Wheel Mouse on isa0060/serio1

serio: i8042 KBD port at 0x60,0x64 irq 1

input: AT Translated Set 2 keyboard on isa0060/serio0

NET: Registered protocol family 2

IP: routing cache hash table of 4096 buckets, 32Kbytes

TCP: Hash tables configured (established 32768 bind 32768)

NET: Registered protocol family 1

NET: Registered protocol family 17

VFS: Mounted root (jfs filesystem) readonly.

Mounted devfs on /dev

Freeing unused kernel memory: 432k freed

Adding 2008116k swap on /dev/hde2.  Priority:-1 extents:1

ACPI: PCI interrupt 0000:00:06.0[A] -> GSI 5 (level, low) -> IRQ 5

PCI: Setting latency timer of device 0000:00:06.0 to 64

intel8x0_measure_ac97_clock: measured 49553 usecs

intel8x0: clocking to 47413

Linux video capture interface: v1.00

bttv: driver version 0.9.15 loaded

bttv: using 8 buffers with 2080k (520 pages) each for capture

i2c /dev entries driver

tvaudio: TV audio decoder + audio/video mux driver

tvaudio: known chips: tda9840,tda9873h,tda9874h/a,tda9850,tda9855,tea6300,tea6420,tda8425,pic16c54 (PV951),ta8874z

ohci1394: $Rev: 1223 $ Ben Collins <bcollins@debian.org>

ACPI: PCI interrupt 0000:02:06.0[A] -> GSI 9 (level, low) -> IRQ 9

ohci1394: fw-host0: OHCI-1394 1.0 (PCI): IRQ=[9]  MMIO=[d6004000-d60047ff]  Max Packet=[2048]

ieee1394: raw1394: /dev/raw1394 device initialized

video1394: Installed video1394 module

ieee1394: Host added: ID:BUS[0-00:1023]  GUID[00308d012000038e]

hde: Speed warnings UDMA 3/4/5 is not functional.

```

uname -a output

```
Linux clawhammer 2.6.8-rc2-bk12 #3 Mon Aug 2 22:26:21 GMT 2004 x86_64 4  GNU/Linux

```

hdparm -tT /dev/hde output

```
/dev/hde:

 Timing buffer-cache reads:   2288 MB in  2.00 seconds = 1142.46 MB/sec

 Timing buffered disk reads:   46 MB in  3.03 seconds =  15.20 MB/sec
```

----------

## c0balt

Hi,

are you sure you've disabled "Support for SATA" in the ATA/ATAPI submenu?

If that is active, every SCSI SATA driver will be deactivated!

```

#

# Please see Documentation/ide.txt for help/info on IDE drives

#

# CONFIG_BLK_DEV_IDE_SATA is not set

# CONFIG_BLK_DEV_HD_IDE is not set

CONFIG_BLK_DEV_IDEDISK=y

# CONFIG_IDEDISK_MULTI_MODE is not set

CONFIG_BLK_DEV_IDECD=y

# CONFIG_BLK_DEV_IDETAPE is not set

# CONFIG_BLK_DEV_IDEFLOPPY is not set

# CONFIG_BLK_DEV_IDESCSI is not set

# CONFIG_IDE_TASK_IOCTL is not set

# CONFIG_IDE_TASKFILE_IO is not set

```

Edit: just to be sure, you've got this too?

```

#

# SCSI low-level drivers

#

# CONFIG_BLK_DEV_3W_XXXX_RAID is not set

# CONFIG_SCSI_3W_9XXX is not set

# CONFIG_SCSI_ACARD is not set

# CONFIG_SCSI_AACRAID is not set

# CONFIG_SCSI_AIC7XXX is not set

# CONFIG_SCSI_AIC7XXX_OLD is not set

# CONFIG_SCSI_AIC79XX is not set

# CONFIG_SCSI_DPT_I2O is not set

# CONFIG_SCSI_MEGARAID is not set

CONFIG_SCSI_SATA=y

# CONFIG_SCSI_SATA_SVW is not set

# CONFIG_SCSI_ATA_PIIX is not set

CONFIG_SCSI_SATA_NV=y

# CONFIG_SCSI_SATA_PROMISE is not set

# CONFIG_SCSI_SATA_SX4 is not set

...

```

If there is no IDE driver in the kernel, then it's rather impossible for the drive to be recognized as /dev/hd*.

----------

## c0balt

I've just checked my dmesg; somehow this doesn't sound good:

```

libata version 1.02 loaded.

ata_piix version 1.02

ata1: SATA max UDMA/133 cmd 0xEFE0 ctl 0xEFAE bmdma 0xEF90 irq 18

ata2: SATA max UDMA/133 cmd 0xEFA0 ctl 0xEFAA bmdma 0xEF98 irq 18

ata1: dev 0 cfg 49:2f00 82:74eb 83:7f63 84:4003 85:74e9 86:3c43 87:4003 88:207f

ata1: dev 0 ATA, max UDMA/133, 145226112 sectors: lba48

ata1: dev 0 configured for UDMA/133

scsi0 : ata_piix

ata2: SATA port has no device.

scsi1 : ata_piix

```

Configured for UDMA/133?! WTH?

Edit: I'm on 2.6.8-rc2-mm1-reiser4; maybe you should try rc2-mm2.

----------

## Dr_b_

I get the same thing...

```
ata1: dev 0 configured for UDMA/133

Linux enrolv2 2.6.7-gentoo-r11 #9 SMP Mon Jul 26 04:14:14 UTC 2004 i686 Intel(R) Pentium(R) 4 CPU 3.20GHz GenuineIntel GNU/Linux
```

----------

## rkrenzis

I did check that.  SATA is only enabled in the SCSI submenus.

Namely:

1. SCSI

2. SCSI disk

3. SATA

4. nVidia SATA

All statically compiled into the kernel. At least your system recognizes that your drive is connected via SATA. I'm getting piss-poor performance; I'm ready to go back to SCSI disks after this bout with IDE disks.

----------

## rkrenzis

So far the findings indicate that native SATA was only incorporated in the nForce3-250 chipset.

I have the nForce3-150 chipset.

Thus, I will be reduced to using a SATA interface that is transparently mapped to a third IDE interface. I'm going to purchase a 3Ware SATA HBA and hope that it addresses my issues if I can't make any further progress with this. Go figure. Leave it up to Soyo to make some half-a$$ bastardized version of a *wannabe* SATA implementation.

I've also had problems with the nForce audio drivers: I can't control the volume. Apparently I paid extra for this feature.

This is the last time I buy Soyo.  I have always bought ASUS and never had such problems.

Caveat emptor! Stay away from the SY-CK8 Plus. It is nothing but a hacked attempt to make something decent.

----------

## Dr_b_

I believe Soyo went belly up; there's a news item that they were bought out by a chip consortium.

Either way, should we disable SATA support in the kernel everywhere else, except under the SCSI section?

----------

## rkrenzis

Yes, you should only have SATA support under the SCSI subsystem menu.  SATA support under the ATA/ATAPI menu should be disabled as the two conflict each other.  The SATA support under ATA/ATAPI is only there for compatibility purposes.

----------

## Stolz

 *c0balt wrote:*   

> ive just checked my dmesg, somehow this doesnt sound good:
> 
> ```
> 
> ata1: SATA max UDMA/133 cmd 0xEFE0 ctl 0xEFAE bmdma 0xEF90 irq 18
> ...

 

I'm getting the same statement. Can someone explain it?

Thanks.

----------

## rkrenzis

What chipset are you using? You need to verify that you have a true SATA controller. Many IDE-era drives and motherboards claim that they have "SATA" support; this pseudo-SATA support is accomplished by adding an additional IDE controller and then bridging it to the SATA connector. The hard drive you have may also not be a true SATA drive. Look near the connectors of the drive: if you see an "M" logo, your interface is bridged to an IDE interface.

I think there are only a handful of drives that actually have native SATA support.  You should verify this.

Also, can you share with us your kernel configuration (the obvious sections regarding the SCSI configuration; yes, SATA is under SCSI)?

And a dead giveaway of whether you are actually using SATA: in your fstab, are your raw disk devices /dev/hd* or /dev/sd*?

/dev/hd* = ide

/dev/sd* = sata or scsi

What about hdparm -iI /dev/hd* or hdparm -iI /dev/sd*?
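The device-name rule above amounts to a trivial lookup; here it is as a tiny shell helper, purely illustrative:

```shell
#!/bin/sh
# Classify a device node name per the rule above:
# /dev/hd* = legacy IDE driver path, /dev/sd* = libata/SCSI path.
bus_type() {
    case "$1" in
        /dev/hd*) echo "ide" ;;
        /dev/sd*) echo "sata-or-scsi" ;;
        *)        echo "unknown" ;;
    esac
}

bus_type /dev/hde   # -> ide
bus_type /dev/sda   # -> sata-or-scsi
```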

----------

## elvisthedj

I don't have onboard SATA. I'm using an Adaptec SATA Connect PCI card (Silicon Image chipset). Per some threads I've read here and elsewhere, I did the following:

 *Quote:*   

> 
> 
> Edit : /usr/src/linux/drivers/ide/ide-io.c and change the lines as indicated :
> 
> - if (hwif->irq != masked_irq)
> ...

 

Here are my before and after stats (both tests done while the system was idle):

```

bash-2.05b# hdparm -tT /dev/sda

/dev/sda:

 Timing buffer-cache reads:   764 MB in  2.00 seconds = 381.11 MB/sec

 Timing buffered disk reads:  164 MB in  3.03 seconds =  54.06 MB/sec

```

new kernel:

```

bash-2.05b# hdparm -tT /dev/sda

/dev/sda:

 Timing buffer-cache reads:   996 MB in  2.00 seconds = 497.58 MB/sec

 Timing buffered disk reads:  170 MB in  3.03 seconds =  56.06 MB/sec

```

Is anybody else running this patch? (Now I wish I hadn't skipped 4 pages of the thread.) Guess I'll go read it.

OK, I tested my IDE drive and... yuck. Practically like a floppy:

```

/dev/hdb:

 Timing buffer-cache reads:   816 MB in  2.00 seconds = 407.86 MB/sec

 Timing buffered disk reads:   12 MB in  3.45 seconds =   3.47 MB/sec

```

----------

## yottabit

Time to wake this thread up, I guess.

Running 2.6.11-mm2 with default anticipatory I/O scheduler.

Config is two Hitachi 80 GB SATA drives using Linux RAID-1 and two Hitachi 250 GB SATA drives using Linux RAID-0. All four drives are on a Promise FastTrak S150 TX4 (not the motherboard's SiI controller) and use the kernel's Promise driver.

Other pertinent info: ASUS A7N8X-Deluxe, AMD Athlon XP 2100+, 1024 MB RAM (3 DIMMs), nVidia nForce2.

```
hal linux # hdparm -t /dev/sda  # Hitachi 80 GB native drive

/dev/sda:

 Timing buffered disk reads:  174 MB in  3.03 seconds =  57.47 MB/sec

hal linux # hdparm -t /dev/md1  # Linux RAID-1 (mirror) array of two Hitachi 80 GB drives

/dev/md1:

 Timing buffered disk reads:  166 MB in  3.02 seconds =  54.94 MB/sec

hal linux # hdparm -t /dev/sdc  # Hitachi 250 GB native drive

/dev/sdc:

 Timing buffered disk reads:  172 MB in  3.02 seconds =  56.96 MB/sec

hal linux # hdparm -t /dev/md3  # Linux RAID-0 (striped) array of two Hitachi 250 GB drives

/dev/md3:

 Timing buffered disk reads:  250 MB in  3.02 seconds =  82.85 MB/sec
```

I'm quite happy with it. I have a D-Link DGE-530T Gigabit Ethernet adapter installed (set to a 9000-byte jumbo-frame MTU) and Samba set to a 64 KB window size, and I can actually max out the I/O to the disk array! Crazy.

I'm using the GigE card as a second NIC in the server to store all of the DVR & DVD video data on the server from the media computer attached to the TV. Quite excellent.

----------

## Dr_b_

Can you tell us a little bit about your kernel config, or how you got your RAID working? 

Thanks,

-Dr_b_

----------

## yottabit

 *Dr_b_ wrote:*   

> Can you tell us a little bit about your kernel config, or how you got your RAID working?

 

Sure, no problem. I'm pretty much running the stock 2.6.11-mm kernel available in Gentoo (~x86 keyword). I am not using a preemptible kernel (this is a server, not a workstation). I was using the default anticipatory I/O scheduler (more on this later). My RAID setup is pretty simple, using a mirror (RAID-1) for the system and striping (RAID-0) for the big video array. All disks are Hitachi SATA and I'm using a Promise S150 TX4 SATA controller with the 2.6 kernel's promise driver. I'm using the Reiser 3.6 filesystem on both arrays. Here's my /etc/raidtab:

```
# /boot (RAID 1)

raiddev                 /dev/md0

raid-level              1

nr-raid-disks           2

chunk-size              32

persistent-superblock   1

device                  /dev/sda1

raid-disk               0

device                  /dev/sdb1

raid-disk               1

# / (RAID 1)

raiddev                 /dev/md1

raid-level              1

nr-raid-disks           2

chunk-size              32

persistent-superblock   1

device                  /dev/sda3

raid-disk               0

device                  /dev/sdb3

raid-disk               1

# swap (RAID 1)

raiddev                 /dev/md2

raid-level              1

nr-raid-disks           2

chunk-size              32

persistent-superblock   1

device                  /dev/sda2

raid-disk               0

device                  /dev/sdb2

raid-disk               1

# big disk (RAID 0 striping)

raiddev                 /dev/md3

raid-level              0

nr-raid-disks           2

chunk-size              32

persistent-superblock   1

device                  /dev/sdc1

raid-disk               0

device                  /dev/sdd1

raid-disk               1
```

I actually have two NICs in the system. The primary NIC is the nForce2 onboard using the 2.6 kernel's reverse-engineered driver (forcedeth) and is on the primary network space. The secondary NIC is the D-Link DGE-530T Gigabit Ethernet, using the 2.6 kernel's sk98lin driver, and on a separate network space. I enabled Jumbo Frames on the second NIC by setting the MTU to 9000 (put in /etc/conf.d/local.start for change on boot since it defaults to the standard MTU of 1500). The gigabit link is directly connected to the HTPC upstairs with crossover Cat5 UTP cable. The HTPC unfortunately uses Windows XP since there are no Linux drivers available for my ATSC digital tuner. Jumbo Frame support was enabled in Windows XP through the network driver, and I changed the MTU to 9000 with the Dr. TCP utility.
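The jumbo-frame bit on the Linux side boils down to a one-liner; this is the sort of fragment I mean for /etc/conf.d/local.start (the interface name is an example, substitute your own):

```shell
# /etc/conf.d/local.start fragment -- enable jumbo frames on the second NIC.
# The NIC, its driver, the peer, and any switch in between must all accept
# a 9000-byte MTU, or large packets will be silently dropped.
ifconfig eth1 mtu 9000
```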

So, now that I've bored you with my details, it must be said that I'm having some performance degradation. I've spent quite a lot of time diagnosing this, and more time is going to be spent as soon as the data finishes transferring off the array so I can try destructive testing.

I have already changed a few kernel parameters; namely, I've enabled the new deadline I/O scheduler, though it may not actually be active since I haven't booted with the "elevator=deadline" kernel option yet. At present, the striped RAID-0 array seems to be suffering performance problems while under heavy read conditions. Yes, I said read conditions, not write conditions.

My on-going struggle with bizarro performance is being discussed in this thread.

As soon as my data finishes transferring off the array (slowly, I might add), I'll start some more tests, including using the deadline and anticipatory schedulers, changing the stripe size between 4k and 512k, and trying the JFS, XFS, and Reiser filesystems. I'll try to be methodical in my procedures and document the tests well. I'm going to use iozone for the performance tests, and I've thought mostly about using -aMop -i 0 -i 1 -g 2g -+u as my iozone parameters.
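For anyone curious, here's my flag-by-flag reading of that iozone invocation; these are my annotations from the iozone docs, so verify them against your version's -h output:

```shell
# The iozone invocation above, annotated:
#
#   iozone -aMop -i 0 -i 1 -g 2g -+u
#
#   -a      full automatic mode (varies record and file sizes)
#   -M      include `uname -a` output in the report
#   -o      open files O_SYNC (writes are synchronous)
#   -p      purge the processor cache before each file operation
#   -i 0    run the write/rewrite test
#   -i 1    run the read/reread test
#   -g 2g   cap the maximum file size at 2 GB
#   -+u     also report CPU utilization
```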

If you want to continue the discussion of parameters and tests and such, please do so in the above-referenced thread, since this one is pretty much dedicated to hdparm statistics, which are useless for my problem.

Cheers!

J

----------

