# Poor SATA I/O performance

## chix4mat

Hi all: 

I've been wanting to run reliable network transfer tests for a while, but in testing I discovered that I/O performance in Gentoo (and other Linux distributions) is noticeably slower than in Windows, for some reason. The goal of the tests is simple: copy a file or folder from my OS SSD (capable of 200+ MB/s reads and writes) over to a NAS. With poor I/O (or at least write) performance, though, that's proving to be an impossible challenge.

Here's the lspci -vv output for the SATA chipset: 

```
00:1f.2 SATA controller: Intel Corporation 82801JI (ICH10 Family) SATA AHCI Controller (prog-if 01 [AHCI 1.0])
        Subsystem: Giga-byte Technology GA-EP45-DS5 Motherboard
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
        Status: Cap+ 66MHz+ UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0
        Interrupt: pin B routed to IRQ 41
        Region 0: I/O ports at f900 [size=8]
        Region 1: I/O ports at f800 [size=4]
        Region 2: I/O ports at f700 [size=8]
        Region 3: I/O ports at f600 [size=4]
        Region 4: I/O ports at f500 [size=32]
        Region 5: Memory at fbffc000 (32-bit, non-prefetchable) [size=2K]
        Capabilities: [80] MSI: Enable+ Count=1/16 Maskable- 64bit-
                Address: fee00000  Data: 4051
        Capabilities: [70] Power Management version 3
                Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot+,D3cold-)
                Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
        Capabilities: [a8] SATA HBA v1.0 BAR4 Offset=00000004
        Capabilities: [b0] PCI Advanced Features
                AFCap: TP+ FLR+
                AFCtrl: FLR-
                AFStatus: TP-
        Kernel driver in use: ahci
```

For testing, I took the same single 20.0 GB file (located on an NTFS partition) and transferred it from the SSD to the NAS box (over a 1 Gbit/s network using a server-grade NIC) on three Linux distributions and on Windows. These were the results:

Gentoo [2.6.38]: 49,515,475 b/s

Fedora 15 [2.6.38]: 53,977,445 b/s

Ubuntu 11.04 [2.6.38]: 55,080,433 b/s

Windows 7 x64: 95,718,073 b/s

I can expect some normal variation when transferring anything to a hard drive, like the spread across the Linux tests. But Windows consistently completes the transfer almost twice as fast, and these results are repeatable.
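For context, here's a quick conversion of those rates (assuming the figures are bytes per second, which matches what my file manager reports) into MiB/s and into a fraction of the ~125 MB/s gigabit line rate:

```shell
# Convert the measured rates above (assumed to be bytes per second)
# to MiB/s and to a percentage of the ~125 MB/s gigabit line rate.
for rate in 49515475 53977445 55080433 95718073; do
    awk -v r="$rate" 'BEGIN {
        printf "%9d B/s = %5.1f MiB/s (%4.1f%% of 1 Gbit/s)\n",
               r, r / 1048576, r / 125000000 * 100
    }'
done
```

Under that assumption, the Linux runs sit around 40% of line rate while Windows reaches roughly 77%, so the gigabit link itself isn't what's limiting the Linux transfers.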

The NAS box runs on Linux and is currently using 4x2TB in a RAID5 setup (ext4).

I'm stuck in an odd place: I could just use Windows for the sake of testing, but the closest rsync alternative there, robocopy, causes its own set of problems (likely due to the Windows filesystem).

If anyone has any ideas about this problem, or how to fix it, I'd greatly appreciate it. Since I see the same sort of performance across all the Linux distributions, it could be that the chipset just doesn't have a well-optimized driver, though ICH10R is a fairly recent chipset.
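One way I could check whether the SATA/AHCI path itself is slow would be to time a raw sequential read locally, taking the network out of the picture entirely. A minimal sketch of the idea (it creates and reads a small sample file in /tmp so it's safe to run as-is; for a real measurement, point FILE at the 20 GB file and clear the page cache first with `sync; echo 3 > /proc/sys/vm/drop_caches` as root):

```shell
# Time a sequential read of FILE and print the throughput in MiB/s.
# The /tmp sample file here is only a placeholder for the real test file.
FILE=/tmp/io_sample
dd if=/dev/zero of="$FILE" bs=1M count=64 2>/dev/null    # create sample data
SIZE=$(stat -c %s "$FILE")
START=$(date +%s.%N)
dd if="$FILE" of=/dev/null bs=1M 2>/dev/null             # timed read
END=$(date +%s.%N)
awk -v s="$SIZE" -v a="$START" -v b="$END" \
    'BEGIN { printf "read %.1f MiB/s\n", s / ((b - a) * 1048576) }'
rm -f "$FILE"
```

If the local read comes in well above 100 MiB/s, the AHCI driver is probably fine and the bottleneck is elsewhere.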

Thanks!

----------

## Mad Merlin

 *chix4mat wrote:*   

> Hi all: 
> 
> I have been wanting to perform reliable network-related tests for a while (transfer), but in testing, I discovered that the I/O performance in Gentoo (and other Linux's) is slower than it is in Windows, for some reason. The goal of the tests I will be running is simple... to copy a file or folder from my OS SSD (capable of 200MB+s read and write) over to a NAS. However, with poor I/O (or at least write) performance, that's proving to be an impossible challenge.
> 
> Here's the lspci -vv output for the SATA chipset: 
> ...

 

NTFS support on Linux (ntfs-3g, which runs in userspace via FUSE) is far from fast. Try putting the file on ext4 or ext3 and repeating the test.
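To quantify that, you could compare raw read throughput of the same data on the NTFS mount versus an ext4 one. A rough sketch of the pattern (the /tmp paths are stand-ins so it runs anywhere; for the real comparison, substitute files on the NTFS and ext4 mounts and drop the page cache between runs with `sync; echo 3 > /proc/sys/vm/drop_caches` as root):

```shell
# dd prints its own throughput summary on stderr; compare the two lines.
for f in /tmp/fs_a /tmp/fs_b; do
    dd if=/dev/zero of="$f" bs=1M count=32 2>/dev/null   # placeholder files
done
for f in /tmp/fs_a /tmp/fs_b; do
    printf '%s: ' "$f"
    dd if="$f" of=/dev/null bs=1M 2>&1 | tail -n 1       # summary line
done
rm -f /tmp/fs_a /tmp/fs_b
```

If the ext4 read is dramatically faster than the NTFS one, the bottleneck is ntfs-3g rather than the AHCI driver.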

----------

## krinn

- Copying to a Linux NAS from a Windows machine: with what? Samba? Its speed is poor. Why? Reimplementations of closed protocols often perform badly, because not everything gets implemented correctly.

- Gigabit Ethernet will cap your RAID bandwidth at roughly 125 MB/s.

- If the NAS uses fakeraid, the software layer will also cap your speed (or at minimum eat your CPU; not really a problem, but strange usage for a server with a "server-grade NIC").

- If the Gigabit Ethernet and SATA controllers share the same lane, your PCI Express bandwidth might also get capped; refer to the motherboard manual.

So your performance cap on Windows should come from fakeraid + Gigabit Ethernet,

and your performance cap on Linux should come from Samba.

----------

