# Share your experience with SATA vs PATA performance

## zaai

I always believed that SATA performance is somewhere between PATA and SCSI. At home and at work I found that during heavy SATA disk access a system seems to be much less responsive than during PATA disk access. I don't know how to measure this properly though.

I noticed this especially when running VMware. When I run VMware from a PATA drive it is twice (or more) as fast as running it from a SATA drive. It's not only VMware though; any heavy disk access will show it.

I found this to be the case on several different motherboards, on Gentoo x86 with kernels 2.6.16-2.6.22, using P4, dual-core and AMD processors, and even on a MIPS system. In fact I have not found a single machine where SATA performs better than PATA.

Is the idea that SATA performs better than PATA a myth?

A colleague from work came to the same conclusion and he is running Windows XP.

Your vote and comments please.

----------

## NeddySeagoon

zaai,

In terms of interface technology, PATA, SATA and SCSI can all provide a higher data rate than the drive itself can sustain.

The limit is the head/platter data rate within the drive. That's true even for 15,000 rpm drives.

Where SCSI and SATA both score over PATA is that they are capable of queuing commands and executing them out of order to minimise head movement.

The result of this is to reduce some transaction times at the expense of increasing others. Under heavy disk I/O you get more useful data transferred per unit time. SCSI drives have had this feature for a long time, but it has only recently been introduced to SATA drives.

SCSI and SATA should be equal if they both have this native command queuing (NCQ) feature.
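If you want to check whether a given drive is actually using NCQ under libata, the queue depth exposed in sysfs is a quick indicator (a sketch; sda is just an example device, and the path assumes a libata-based kernel):

```shell
# A queue_depth greater than 1 means the kernel is issuing queued (NCQ)
# commands to the drive; a depth of 1 means plain one-at-a-time access.
depth=$(cat /sys/block/sda/device/queue_depth)
if [ "$depth" -gt 1 ]; then
    echo "NCQ active (queue depth $depth)"
else
    echo "NCQ not in use"
fi
```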

----------

## eccerr0r

I've found that most SATA performance issues were rooted in improper configuration of the host adapter drivers. Maybe I'm missing something, but my ICH8/G965 board with SATA and SATA-with-PATA-bridge disks seems to work fairly well; I don't notice any difference between it and a pure PATA solution, except that SATA seems slightly faster, and by so little that it's insignificant.

I ran my ICH5/i865 board with both SATA (160G Samsung, SATA 3Gb/s on a 1.5Gb/s link) and PATA (250G HGST) primary root disks separately at different times and did not notice either to be faster than the other. The PATA 250G is a better disk, so possibly it was *slightly* faster, but I didn't notice much. I was using the libata driver both times.

----------

## zaai

Thank you for the reply NeddySeagoon,

Your description of the interfaces is indeed how I thought they differ. SATA has NCQ, which handles commands more efficiently and should theoretically be faster.

I found however that system performance as a whole degrades more during SATA disk access than during PATA disk access. It is almost as if SATA bus access has a side effect that slows down the system. I've observed this on five systems, each with a different motherboard and CPU. So I'd like to see whether I'm going crazy, or whether others have noticed the same thing or the opposite.

Yes, it's quite subjective, which is why I'd like to know if others have similar findings. Maybe someone can recommend a good benchmark. I tried dbench, which shows 144MB/s throughput for the SATA disk and 85MB/s for the PATA disk. I can assure you that this doesn't reflect the behavior of the system as a whole.

Update:

Thanks eccerr0r as well. This is the first time I've heard someone confirm that SATA works as well as or better than PATA.

Maybe it is the drivers, but I've seen this effect on an Intel ICH7/975X board, an AMD board with an nVidia nForce3 chipset, and other widely different boards.

By the way, it's not plain disk access, it's disk access from multiple sources. For example, opening a file for the first time while making a backup sometimes takes seconds longer on SATA than in the same situation on PATA. The PATA disk's responsiveness also degrades more when the SATA disk is in use than when only the PATA disk is used.

----------

## micmac

My gut tells me that when using AHCI/NCQ it's better to set up your disk with the deadline or no-op scheduler than with cfq. It feels faster. When I first set up this box with cfq I thought, "Man, this disk is slow." Now, with no-op and NCQ doing the scheduling, it's all good, although I haven't used it extensively yet.

Is this plausible?

----------

## snIP3r

hi all!

i must agree with zaai's first comment. i also encountered the same behaviour. i bought a new server with a hw raid controller, 4 sata 3gb ncq discs, 2gb ram and a dual core amd64 cpu. it replaced my former gentoo box with pata drives, sw-raid, 1.25gb ram and an amd athlon xp 1100.

i have the feeling that the older system's _overall_ response was a little bit faster than the new machine's. of course things like compiling the kernel are much faster than with the old system. but under some heavy load the older system seems to respond faster than the new one. this might be an issue caused by the amd64 arch or the 3ware hw raid controller drivers or the mentioned scheduler type or the pci vs. pci-e chipset, i don't know - it's only a feeling.

but i always argued that (harddisc) i/o is the bottleneck of _all_ pc systems...

just my 2 cents...

greets

snIP3r

----------

## Cyker

 *micmac wrote:*   

> My gut tells me that when using AHCI/NCQ it's better to set up your disk with the deadline or no-op scheduler than with cfq. It feels faster. When I first set up this box with cfq I thought, "Man, this disk is slow." Now, with no-op and NCQ doing the scheduling, it's all good, although I haven't used it extensively yet.
> 
> Is this plausible?

 

I'd like to try that on the mdadm RAID5 array in this box.

Can anyone remember how to change the IO scheduler for a software RAID array?

I'm vaguely aware I'd have to echo noop or something similar to somewhere, but the details escape me...

----------

## zaai

 *micmac wrote:*   

> My gut tells me that when using AHCI/NCQ it's better to set up your disk with the deadline or no-op scheduler than with cfq. It feels faster. When I first set up this box with cfq I thought, "Man, this disk is slow." Now, with no-op and NCQ doing the scheduling, it's all good, although I haven't used it extensively yet.
> 
> Is this plausible?

 

Easy enough to try. I'll give it a shot and let you know how that went.

Cyker, the I/O scheduler can be found in the kernel config under Block Layer -> IO Schedulers

Does anyone have an idea for a good testcase to expose performance under heavy load?
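For lack of a standard tool, one crude way to expose the "responsiveness under load" effect is to time a small foreground operation while a large sequential write is in flight. This is only a sketch (file names and sizes are arbitrary); run it from a directory on the disk under test and compare the reported times between the SATA and PATA drives:

```shell
#!/bin/sh
# Start a heavy sequential write in the background to load the disk.
dd if=/dev/zero of=./load.tmp bs=1M count=256 conv=fdatasync 2>/dev/null &
writer=$!
sleep 1
# Then time a small synchronous operation in the foreground; a responsive
# system should complete this quickly even under the write load.
time sh -c 'echo hello > ./small.tmp && sync'
wait $writer
rm -f ./load.tmp ./small.tmp
```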

----------

## Cyker

 *zaai wrote:*   

> Cyker, the I/O scheduler can be found in the kernel config under Block Layer -> IO Schedulers

 

I want to change them WITHOUT re-compiling the kernel and re-booting.

I *know* there is a way (Why else would they let you compile all 4??), but I just can't remember the *how* part. Just that it involves echo'ing the appropriate scheduler to somewhere in /proc...

----------

## eccerr0r

 *Cyker wrote:*   

> I want to change them WITHOUT re-compiling the kernel and re-booting.
> 
> I *know* there is a way (Why else would they let you compile all 4??), but I just can't remember the *how* part. Just that it involves echo'ing the appropriate scheduler to somewhere in /proc...

 

You can change the scheduler of the underlying disks:

```
modprobe (scheduler)   # if it's not already compiled in
```

To change hda's queue policy to deadline:

```
echo deadline > /sys/block/hda/queue/scheduler
```

If you cat that file, the bracketed entry is the active one.
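Since that file lists every compiled-in scheduler with the active one in brackets, a small helper can pull out just the active name (a sketch; `active_scheduler` is a made-up function name):

```shell
# Print only the bracketed (active) scheduler from the sysfs format,
# e.g. "noop [deadline] cfq" -> "deadline".
active_scheduler() {
    sed -n 's/.*\[\([^]]*\)\].*/\1/p'
}

echo 'noop [deadline] cfq' | active_scheduler
# On a real system:
# active_scheduler < /sys/block/hda/queue/scheduler
```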

----------

## incabolocabus

```
echo "noop" > /sys/block/sda/queue/scheduler
```

changed the scheduler for sda.

```
cat /sys/block/sda/queue/scheduler

[noop] deadline cfq
```

Seems like if you have different drives you want configured with different schedulers, you could add some lines to local.start.
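For instance, a hypothetical /etc/conf.d/local.start fragment might look like this (the device names and scheduler choices are just examples; adjust them to your drives):

```shell
# /etc/conf.d/local.start -- runs at the end of boot on Gentoo
echo cfq      > /sys/block/sda/queue/scheduler   # system disk: fair multitasking
echo deadline > /sys/block/sdb/queue/scheduler   # backup disk: raw throughput
```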

----------

## zaai

 *incabolocabus wrote:*   

> 
> 
> ```
> echo "noop" > /sys/block/sda/queue/scheduler
> ```
> ...

 

Good one. I found a post at http://lists.openwall.net/linux-kernel/2007/02/12/321 that seems to confirm this. 

 *Quote:*   

> > It looks like the SATA driver simply blocks the CPU while doing whatever...
> 
> The system sleeps while waiting for the disk (actually, for the SATA
> ...

 

Choosing the noop I/O scheduler lets the drive handle it. Correct?

----------

## micmac

 *zaai wrote:*   

> Choosing the noop I/O scheduler lets the drive handle it. Correct?

 

Sounds right. But the drive doesn't handle process priorities (how would it?). The anticipatory (probably) and cfq (definitely) schedulers do. So if one wants disk-access priority to depend on process priority, one should prefer the anticipatory or cfq scheduler without NCQ over deadline or no-op with NCQ.
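As an aside, with cfq the per-process I/O priority can also be set explicitly via ionice (this assumes util-linux's ionice is installed; the command being wrapped and the priority values are just examples):

```shell
# Class 2 is "best effort"; within it, priority 0 is highest and 7 lowest.
# Run a backup job at the lowest best-effort priority so it yields disk
# access to interactive processes:
ionice -c 2 -n 7 echo "backup would run here"
```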

Btw. NCQ can be disabled like this according to the libata FAQ:

```
echo 1 > /sys/block/sdX/device/queue_depth
```

Edit: This is just my impression, I don't know that it works like this for sure. Hope someone can clear this up.

----------

## splurben

Just from State of NooB:

Possibly step back a bit. Certainly sustained large transfers (i.e. > 100MB) are faster on well-configured SATA drives.

I'm only now considering whether there exists a testing tool to emulate the kind of disc access that is typical of contemporary desktop utilisation. (Excuse the Australian spelling.) You know: launching one app (say Gimp) while ripping a DVD, surfing the web, playing Ogg Vorbis music, copying files via NFS and downloading torrents.

Users seem to be guessing and proposing. What about a real world testing utility? Is there one?

Are users considering the impact of the hard drives' cache sizes when comparing? Does a large disc cache impede moving mixtures of large and small files? I generally prefer large caches when purchasing large drives, but I can't actually say I've seen evidence that large disc caches improve performance.

Just a few thoughts along these lines.

K

----------

## snIP3r

hi all!

so i should change my scheduler to "noop" when using ncq drives?? currently i have the "deadline" scheduler...

greets

snIP3r

----------

## Cyker

Ah, those are the commands I wanted  :Smile: 

I wonder what performs better; CFQ with NCQ or noop with NCQ...

----------

## eccerr0r

I'd venture to guess that you should still use CFQ with NCQ. Despite the intelligence in the drive, its queue is only so deep, and the host OS knows more about pending transfers, so it can optimize across them. Theoretically, if the OS is multitasking a whole mess of disk transfers for a backup while also doing demand fetches because the user is trying to load Firefox, the OS should prioritize one or the other depending on the use case (Server? Backup. Workstation? Firefox!). The disk has no way of determining which is which.

That is, you should still try both ways.

----------

## Cyker

Laziness wins - I'll stick with CFQ+NCQ  :Razz: 

----------

