# Lower SATA hard disk performance in AHCI mode?

## Paapaa

I have an AHCI compatible motherboard (Asus P5B Deluxe) with a new SATA hard disk (Western Digital WD1600YS). In normal IDE mode my SATA gives about 60MB/s. With AHCI enabled I get only 50MB/s. Another SATA drive gives the exact same performance in both modes so something is wrong. One difference in dmesg is this:

With AHCI:

```
ata2: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
ata2.00: ATA-7, max UDMA/133, 321672960 sectors: LBA48 NCQ (depth 31/32)
ata2.00: ata2: dev 0 multi count 1
ata2.00: configured for UDMA/133
```

Without AHCI:

```
scsi1 : ata_piix
ata2.00: ATA-7, max UDMA/133, 321672960 sectors: LBA48 NCQ (depth 0/32)
ata2.00: ata2: dev 0 multi count 16
ata2.00: configured for UDMA/133
```

NCQ should not decrease transfer rates by 20%. It should not affect transfer rates at all. What does "multi count" mean? Any ideas?

The rates were measured roughly with "hdparm -t". I'm using gentoo-sources-2.6.20-r4.
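For anyone reproducing this: "hdparm -t" numbers fluctuate with background I/O, so averaging a few runs gives a steadier figure. A minimal sketch, assuming the drive under test is /dev/sda (a placeholder) and root privileges:

```shell
# Run hdparm -t three times and average the MB/sec figures.
# /dev/sda is a placeholder for the drive under test; requires root.
avg=$(for i in 1 2 3; do hdparm -t /dev/sda; done \
      | awk '/Timing buffered/ { sum += $(NF-1); n++ } END { printf "%.2f\n", sum/n }')
echo "average: $avg MB/sec"
```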

----------

## SoylentGreen

Hmm, works fine for me   :Shocked: 

I still get the same performance in both modes.

ASUS A8N-E (nForce4 chipset), SATA II Samsung drives.

```
/dev/sdb1:
 Timing cached reads:   1576 MB in  2.00 seconds = 787.68 MB/sec
 Timing buffered disk reads:  228 MB in  3.02 seconds =  75.53 MB/sec
```

Are you sure there wasn't a cron job or something else running while you tested the speed with hdparm? Or that you weren't compiling something, etc.?

----------

## Paapaa

I investigated this a bit further and tried an older kernel: 2.6.19-r5. It gave perfect speed with AHCI enabled. So I knew something was wrong with 2.6.20-r4. (No, this had nothing to do with cron jobs or other background tasks). Then I noticed:

2.6.20-r4, AHCI=enabled, 50MB/s:

 *Quote:*   

> ata2.00: ATA-7, max UDMA/133, 321672960 sectors: LBA48 NCQ (depth 31/32) 

 

2.6.20-r4, AHCI=disabled, 60MB/s:

 *Quote:*   

> ata2.00: ATA-7, max UDMA/133, 321672960 sectors: LBA48 NCQ (depth 0/32) 

 

2.6.19-r5, AHCI=enabled, 60MB/s:

 *Quote:*   

> ata2.00: ATA-7, max UDMA/133, 321672960 sectors: LBA48 NCQ (depth 0/32) 

 

I think 2.6.20 has different default settings for NCQ, or at least something is done differently. I don't know if that is intentional, but I'd like to alter the setting, as I'm not running a server and don't benefit from NCQ.

Does anyone know how to alter NCQ depth?

----------

## PrakashP

An NCQ depth of 0 means NCQ is off. It seems this drive/controller combination has performance trouble with NCQ. I'd turn NCQ off (it is possible via the sysfs interface; see the kernel docs or search LKML). But I'd report this to LKML anyway.
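To illustrate the sysfs knob mentioned above: the queue depth can be changed at runtime, no recompile needed. A sketch, assuming the problem drive is /dev/sdb (a placeholder) and root privileges:

```shell
# Inspect and lower the NCQ queue depth through sysfs.
# sdb is a placeholder for the problem drive; needs root.
cat /sys/block/sdb/device/queue_depth        # e.g. 31 with NCQ active
echo 1 > /sys/block/sdb/device/queue_depth   # depth 1 effectively disables NCQ
cat /sys/block/sdb/device/queue_depth        # should now read 1
```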

----------

## widan

You can add it to the NCQ blacklist in drivers/ata/libata-core.c:

```
static const struct ata_blacklist_entry ata_device_blacklist [] = {
        ...
        /* Devices where NCQ should be avoided */
        /* NCQ is slow */
        { "WDC WD740ADFD-00",   NULL,      ATA_HORKAGE_NONCQ },
        ...
        /* End Marker */
        { }
};
```

You will need to find out the model number to put in the list (run "hdparm -I" on the problematic drive; the "Model Number" field is the string you need), and add an entry for it with the same parameters as the one shown above (which should already be in the file). After recompiling and installing the kernel, it should tell you:

```
ata2.00: ATA-7, max UDMA/133, 321672960 sectors: LBA48 NCQ (not used)
```
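To go with the instructions above, a sketch for pulling out the exact model string with "hdparm -I" (sdb is a placeholder for the problem drive; needs root):

```shell
# Extract the model string hdparm reports, formatted ready to paste
# into the blacklist entry. /dev/sdb is a placeholder device.
model=$(hdparm -I /dev/sdb | sed -n 's/.*Model Number:[[:space:]]*//p')
echo "{ \"$model\",   NULL,      ATA_HORKAGE_NONCQ },"
```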

----------

## Paapaa

 *PrakashP wrote:*   

> NCQ depth of 0 means NCQ off. It seems HD controller combo with NCQ gives performance troubles. I'd turn NCQ off (it is possible via sys or proc interface, see kernel docs or search lkml). But I'd report this to lkml anyway.

 

I tested a bit more with this command:

```
rm temp && sync && time sh -c 'cat quite_big_file > temp && sync'
```

quite_big_file was 540 MiB and 700 MiB, alternately. In both modes the time (real time; user time was 0s) was about 19s and 23s, respectively. So it seems that NCQ doesn't affect the raw transfer rate after all. Go figure.

Maybe hdparm is doing something very silly and gives totally bogus results?
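As a cross-check on both hdparm and the cat test, a direct-I/O read with dd bypasses the page cache and reports throughput on its own. A sketch (sdb is a placeholder for the drive; iflag=direct needs GNU dd, and reading the raw device needs root):

```shell
# Read 512 MiB straight from the disk, bypassing the page cache,
# and let dd report the throughput. /dev/sdb is a placeholder device.
dd if=/dev/sdb of=/dev/null bs=1M count=512 iflag=direct
```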

----------

## Paapaa

I posted to the Linux Kernel Mailing List and it seems something _might_ be wrong with the CFQ I/O scheduler:

 *Nick Piggin wrote:*   

> Thanks. I believe CFQ contains some code to keep NCQ depths managable, which might be causing the problem. I've cc'ed the CFQ author (Jens) who might be able to give some more ideas. 

 

When I tested with the deadline scheduler I got no transfer rate issues with "hdparm -t": 60MB/s both with and without NCQ. I'll post here if I get more info on this.

Thanks for suggesting to report this to LKML.
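For others who want to try the same comparison: the I/O scheduler can be switched per disk at runtime through sysfs, no reboot needed. A sketch (sdb is a placeholder; needs root):

```shell
# Show the available schedulers (the active one is in brackets),
# then switch this disk to deadline. sdb is a placeholder device.
cat /sys/block/sdb/queue/scheduler           # e.g. "noop anticipatory deadline [cfq]"
echo deadline > /sys/block/sdb/queue/scheduler
cat /sys/block/sdb/queue/scheduler           # deadline should now be bracketed
```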

----------

