# Maximum Speed over Gigabit NFS?

## Kai Hvatum

I was wondering what speeds are possible over a Gigabit connection using NFS?

I have two systems with built-in Gigabit controllers on the motherboard, which therefore overcome the PCI bus's speed limitation. The server has a RAID-5 array, so disk access speed will also not be a limitation unless NFS can break 60 MB/s.

Also, if anyone has NFS server configuration files optimized for Gigabit they would be much appreciated.
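For reference, the sort of thing I'm after would look something like this - the path, network, and block sizes below are just my guesses, not a known-good config:

```shell
# /etc/exports on the server -- /data and the 192.168.1.0/24 network are
# placeholders; async trades crash safety for speed, so it's a judgment call
/data 192.168.1.0/24(rw,async,no_subtree_check)

# On the client: mount with large read/write block sizes (rsize/wsize);
# 8192 seems to be a common starting point for fast links
mount -t nfs -o rw,hard,intr,rsize=8192,wsize=8192 server:/data /mnt/data
```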

----------

## adaptr

 *Kai Hvatum wrote:*   

> I was wondering what speeds are possible over a Gigabit connection using NFS?
> 
> I have two systems with built-in Gigabit controllers on the motherboard which therefore overcome the PCI bus's speed limitation.

 

No they don't.

They may be on a different PCI bus, but they are still connected to a PCI bus.

Nevertheless, this should not be a problem since the theoretical PCI transfer limit is double that of your drives...

 *Kai Hvatum wrote:*   

> The server has a RAID-5 array, so disk access speed will also not be a limitation unless NFS can break 60 MB/s.

 

It might... theoretically (again) if you connect the systems with a FD cross cable you might even double that.

Theoretically.

----------

## nevynxxx

You don't mention whether the two systems are identical, but you imply they are not (by only referring to the "server" drives as being RAIDed).

The easy way to answer this is to work out which components are involved in the transfer and find out their transfer speeds; the slowest one will be your limit.

I would expect something like:

hard drive -> RAID controller -> PCI(-X?) bus -> memory(?) -> PCI bus -> Gbit link -> PCI bus -> memory(?) -> PCI(-X?) bus -> RAID controller(?) -> disk.

Remember that the bandwidth of the controller will be split amongst the drives attached to it, and this can get quite strange in some RAID set-ups.
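To illustrate with some made-up numbers (every figure below is a placeholder - plug in whatever your own hardware actually measures):

```python
# Hypothetical per-stage throughputs in MB/s for the chain above;
# these are illustrative guesses, not measurements
stages = {
    "raid array (sequential read)": 60,
    "pci-x bus": 533,
    "gigabit link (theoretical)": 125,
    "client sata disk (write)": 50,
}

def bottleneck(stages):
    """The end-to-end rate is capped by the slowest stage in the chain."""
    name = min(stages, key=stages.get)
    return name, stages[name]

name, limit = bottleneck(stages)
print(f"limited by the {name} at about {limit} MB/s")
```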

----------

## Kai Hvatum

 *adaptr wrote:*   

>  *Kai Hvatum wrote:*   I was wondering what speeds are possible over a Gigabit connection using NFS?
> 
> I have two systems with built-in Gigabit controllers on the motherboard which therefore overcome the PCI bus's speed limitation. 
> 
> No they don't.
> ...

 

Yeah, sorry, I should have explained that - they're both running over the PCI-X (64-bit) bus instead of the slower PCI bus that cheap consumer Gigabit cards use. I've also got a third computer running an aged Tyan Tiger board with an Intel Gigabit card in a 64-bit PCI slot.

You're completely right that it's still PCI, though. 

 *adaptr wrote:*   

>  *Kai Hvatum wrote:*   The server has a RAID-5 array, so disk access speed will also not be a limitation unless NFS can break 60 MB/s.  
> 
> It might... theoretically (again) if you connect the systems with a FD cross cable you might even double that.
> 
> Theoretically.

 

Will performance be impacted if I use a decent switch which supports large frames, such as this one, compared to a crossover cable? If crossover is that much faster, then I can leave the third computer on the old 100Base-T network. 

The above-named switch supports jumbo frames of up to 9000 bytes.
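(I assume enabling jumbo frames is just a matter of raising the MTU on each interface, something like the following - eth0 is a placeholder for whatever the interface is actually called, and I gather both NICs and the switch have to support it:)

```shell
# Raise the MTU on each end of the link; the NIC drivers and the switch
# must all support jumbo frames, or large packets will simply be dropped
ifconfig eth0 mtu 9000

# Check that the new MTU actually took effect
ifconfig eth0 | grep -i mtu
```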

----------

## Kai Hvatum

 *nevynxxx wrote:*   

> You don't mention whether the two systems are identical, but you imply they are not (by only referring to the "server" drives as being RAIDed).
> 
> The easy way to answer this is to work out what systems are involved in the transfer, find out their transfer speeds, and the lowest one will be your limit.
> 
> I would expect something like:
> ...

 

My setup is as follows:

Identical in both computers

PCI = PCI-E on both computers so that won't be a bottleneck, right?

Server

CPU = 2x Athlon MP 2600+

Memory = 512 MB ECC DDR 266

Raid Controller = Linux software RAID. I don't think this should be a problem, as it reaches 60 MB/s in hdparm. 

Hard Drive = 5x 200 GB Seagate 7200 RPM Ultra-ATA 

Client

CPU = AMD64 3200+

Memory = 1 GB DDR 400

Hard Drive = 160 GB Seagate SATA

I'm going to be primarily doing work on the client which will not require transferring the files from the server. More often I'll be transferring them to the server from the client or editing them without removing them from the server. My hope is that Gigabit will allow me to store important files remotely without having to pay a large penalty in performance or deal with the pain of transferring them every time I want to work on them. This way the performance hit of software RAID is also put on the server instead of my workstation.  :Mr. Green: 

Yeah, it's a crazy setup, but I got the server pretty much for free. It was my brother's old workstation (he earns way too much money) and he gave it to me since he's now upgraded to dual Opterons. He was too cheap to leave me his 3ware RAID card, though.   :Evil or Very Mad: 

Oh yeah a few more questions:

- Could the CPU usage on the server cut into network performance? When doing large transfers from its RAID array, CPU usage sometimes reaches 60%.

- Will NFS act nicely (like a local hard drive) if I'm editing files while leaving them on the server? In other words, will my programs stall and act funny if they're constantly grabbing information over the network?

----------

## adaptr

 *Quote:*   

> - Could the CPU usage on the server cut into network performance?

 

Not likely - NIC transfers are typically done via DMA.

 *Quote:*   

> When doing large transfers from its RAID array, CPU usage sometimes reaches 60%. 

 

Then you don't actually have hardware RAID, do you ?

 *Quote:*   

> - Will NFS act nicely (like a local hard drive) if I'm editing files while leaving them on the server? In other words, will my programs stall and act funny if they're constantly grabbing information over the network?

 

Why wouldn't it ?

In what way is it different from using any other network-based FS ?

The whole idea behind a network FS is that you can handle remote files as if they were local.

 *Quote:*   

> PCI = PCI-E on both computers

 

Just seconds ago it was PCI-X - so which is it ?

PCI Express is not the same as PCI-X.

For one thing, PCI-E is a serial interface.

----------

## nevynxxx

 *Kai Hvatum wrote:*   

> 
> 
> Server
> 
> CPU = 2x Athlon MP 2600+
> ...

 

I doubt you'll notice the difference between Gb and 100Mb. Try this: the server I have been playing with recently has a dual-channel U320 SCSI card on PCI-X, and downloading (quite large) files from that to my workstation (with a 10k rpm Raptor SATA drive) doesn't use up all of the available network connection.

What you'll worry about more, using NFS as a 'live' filesystem, is the seek time, not the transfer rate; I've noticed that going from ATA to the Raptor, which has SCSI-esque seek times.

Also look at the internal transfer speeds of your disks: 

 *Quote:*   

> 
> 
> Performance:
> 
> Drive Transfer Rate: 150 MB/s (external) / 102 MB/s (internal)
> ...

 

That's for said Western Digital Raptor... you'd need 9 or 10 of these (on separate channels) to strain a Gbit link...

<edit> Missed the big 'B': 102 MB/s internal is 816 Mb/s, so make that 2, not 9 or 10... </edit>
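The arithmetic, for anyone who wants to check it:

```python
import math

GIGABIT_MBIT = 1000  # gigabit ethernet link rate in megabits per second

def mb_to_mbit(mb_per_s):
    """Convert a drive rate in MB/s (bytes) to Mb/s (bits)."""
    return mb_per_s * 8

def drives_to_saturate(drive_mb_s):
    """Drives needed (on separate channels) to fill a gigabit link."""
    return math.ceil(GIGABIT_MBIT / mb_to_mbit(drive_mb_s))

print(mb_to_mbit(102))          # the Raptor's 102 MB/s internal = 816 Mb/s
print(drives_to_saturate(102))  # 2 drives
```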

----------

## Kai Hvatum

 *adaptr wrote:*   

>  *Quote:*   When doing large transfers from its RAID array, CPU usage sometimes reaches 60%.  Then you don't actually have hardware RAID, do you ?

 

Nope, sadly not, as my brother took his 3ware controller with him to his new workstation. The sixty percent is more of a worst-case scenario and doesn't happen under normal use. The software RAID setup is built using the md module in kernel 2.6. 

 *adaptr wrote:*   

>  *Quote:*   - Will NFS act nicely (like a local hard drive) if I'm editing files while leaving them on the server? In other words, will my programs stall and act funny if they're constantly grabbing information over the network? 
> 
> Why wouldn't it ?
> 
> In what way is it different from using any other network-based FS ?
> ...

 

That's the idea, I know - the implementation can be better or worse, though. I've noticed stalling and very slow seek times when playing video files over the current network connection, and I wondered whether NFS over Gigabit would be more comparable to a local drive. I wouldn't copy the files over for editing if the performance were perfect. 

 *Quote:*   

>  *Quote:*   PCI = PCI-E on both computers 
> 
> Just seconds ago it was PCI-X - so which is it ?
> 
> PCI Express is not the same as PCI-X.
> ...

 

Ah, I had thought that the longer slots on the Tyan board were PCI 3.0 slots and that everything else was either PCI 1.0 or PCI-E/PCI-X. I didn't realize that PCI-X referred to the 64-bit server PCI slots and that PCI-E was the new serial interface - I thought they were the same thing. Sorry for the mistake, and thanks for clearing it up and being patient with all these newbie questions.  :Very Happy: 

(I'm very glad I posted now. I thought I already knew most of the important information about networking, but it looks like about half of it was wrong, including the idea that Gigabit would actually be any faster.)

----------

## Kai Hvatum

 *nevynxxx wrote:*   

> Also look at the internal transfer speeds of your disks, 
> 
>  *Quote:*   
> 
> Performance	
> ...

 

I'm confused as to why Gigabit won't offer a performance benefit. When I run hdparm against the md device it gives me a 60 MB/s (megabytes) transfer rate. I've tweaked my current 100Base-T network connection, and the best I could get out of either NFS or Samba was around 10 MB/s. I suppose this makes sense, as 100Base-T is capable of sending 100 million bits per second, which translates into 12.5 MB/s, some of which is lost to overhead. 

Gigabit is capable (in theory) of sending 1,000,000,000 bits per second, which would be 125 MB/s, right? With even half of that bandwidth I would see transfer rates six times faster, which I think would be pretty decent. 
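My back-of-the-envelope arithmetic, in case I've slipped a digit somewhere (the 50% efficiency figure is only my guess at protocol overhead):

```python
def link_mb_per_s(bits_per_s, efficiency=1.0):
    """Raw link rate in MB/s, optionally scaled by an assumed efficiency."""
    return bits_per_s / 8 / 1e6 * efficiency

print(link_mb_per_s(100e6))     # 12.5 MB/s -- 100Base-T ceiling
print(link_mb_per_s(1e9))       # 125.0 MB/s -- gigabit ceiling
print(link_mb_per_s(1e9, 0.5))  # 62.5 MB/s -- gigabit at 50% efficiency
```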

I completely agree about the seek times, that's another thing I don't like about network filesystems. Maybe that would get a bit better with Gigabit though. 

Thanks a lot for all the help.

----------

## nevynxxx

 *Kai Hvatum wrote:*   

> 
> 
> Gigabit is capable (in theory) of sending 1,000,000,000 bits per second which would be 125 MB/s right? With even half of that bandwidth I would see transfer rates six times faster which I think would be pretty decent. 
> 
> I completely agree about the seek times, that's another thing I don't like about network filesystems. Maybe that would get a bit better with Gigabit though. 
> ...

 

Yeah, I realised that after I posted when I was on my way home from work.

The Gb link is slightly faster than a fast hard drive, so your performance will probably end up depending on the hard drive at the other end of the link.

----------

## adaptr

If you intend to do such wizardry as working with remote video files over a network FS then I'd seriously consider pumping in as much RAM as you can afford - upwards of 1 GB in any case.

Caching everything that passes between the disks and the kernel will probably improve performance more than anything else.

Also, if you're concerned about seek times over an NFS link then whatever you do don't use jumbo Ethernet frames!

As you can figure out for yourself this will invariably increase latency, so it's a definite no-no.

----------

## Kai Hvatum

 *adaptr wrote:*   

> If you intend to do such wizardry as working with remote video files over a network FS then I'd seriously consider pumping in as much RAM as you can afford - upwards of 1 GB in any case.
> 
> Caching anything the disks pass through the kernel will probably improve performance more than anything else.
> 
> Also, if you're concerned about seek times over an NFS link then whatever you do don't use jumbo Ethernet frames!
> ...

 

Ah, so there's really an inverse relationship between transfer speed and responsiveness - a large packet size increases transfer speed but decreases responsiveness. 

Thanks for pointing that out.

----------

## Sparohok

 *adaptr wrote:*   

> If you intend to do such wizardry as working with remote video files over a network FS then I'd seriously consider pumping in as much RAM as you can afford - upwards of 1 GB in any case.

 

If, and only if, you expect a high degree of locality in your data access.

Video streaming is the case where you would be least likely to observe locality, and hence a big buffer cache will give you the least benefit, compared with any other application I can think of.

 *adaptr wrote:*   

> Also, if you're concerned about seek times over an NFS link then whatever you do don't use jumbo Ethernet frames!
> 
> As you can figure out for yourself this will invariably increase latency, so it's a definite no-no.

 

Wrong. You really don't know what you're talking about. The maximum jumbo frame of 9216 bytes will take less than 100ns to transmit over gigabit ethernet. Compared to disk latencies which are measured in milliseconds, jumbo frames add no significant latency. However the larger frame improves throughput significantly and reduces CPU load. NFS benefits a great deal from jumbo frames with no downside whatsoever.

The only application where jumbo frames may be questionable is a cluster running fine grained parallel message passing such as MPI. In that case, small synchronization packets can get stuck behind jumbo packets. I can't think of any other case where the latency of a jumbo frame outweighs the reduced overhead and increased bandwidth.

Martin

----------

## adaptr

 *Sparohok wrote:*   

>  *adaptr wrote:*   If you intend to do such wizardry as working with remote video files over a network FS then I'd seriously consider pumping in as much RAM as you can afford - upwards of 1 GB in any case. 
> 
> If, and only if, you expect a high degree of locality in your data access.
> 
> Video streaming

 

I never mentioned video streaming.

I said "such wizardry as working with remote video files over a network FS" - replace "video files" with "large data files" if you like.

In other words, large remote datafiles that need high-speed random access.

"Locality" is not a factor.

 *Sparohok wrote:*   

> Wrong. You really don't know what you're talking about. The maximum jumbo frame of 9216 bytes will take less than 100ns to transmit over gigabit ethernet.

 

Really ?

9216 bytes * 10^7 frames per second = 92.16 billion bytes/second, or over 85GB/sec.

That's some pretty fast ethernet you have there.

I think you mean 100 microseconds - which is the typical latency of gigabit ethernet.

The minimum transfer time for a normal 1518-byte frame is roughly 12 microseconds, given a maximum speed of 82K frames per second.

Or, 8 times faster than your jumbo frame.

Yes, jumbo frames transfer more bytes per second - the overhead is lower.

That's not in question.

Both request and reply for a normal packet will inevitably be faster - especially since the request will typically be only a few hundred bytes.

That's a total of maybe 2K for a request and the first response packet - assuming the server can dish it up that fast.

Of course this is a factor - which is why lots of cache RAM is always a plus.

If you think the transfer speed is limited by the packet handling of the network, then use better equipment - there is a world of difference between cheap Gbit switches that do maybe 50K frames/second and decent equipment with cut-through and store-and-forward options that can reach up to 90K frames/second sustained over multiple segments.

Same with gigabit adapters.

 *Sparohok wrote:*   

>  Compared to disk latencies which are measured in milliseconds,

 

That's with physical disks, yes.

Anybody serious about pumping this much data will opt for a sub-millisecond 300MB/sec+ RAID-6 with 15K rpm drives.

And a GByte of cache on the controller.

In typical applications, cache hits all but eliminate the disk latencies.

 *Sparohok wrote:*   

> jumbo frames add no significant latency.

 

They add latency in the network packet handling, i.e. responsiveness, as I said.

 *Sparohok wrote:*   

>  However the larger frame improves throughput significantly and reduces CPU load. NFS benefits a great deal from jumbo frames with no downside whatsoever.

 

I am guessing this is only for disk subsystems that cannot match the speed of the network.

Turn those two around and see what happens.

----------

## Sparohok

 *Quote:*   

> I think you mean 100 microseconds - which is the typical latency of gigabit ethernet.

 

Of course. My apologies.

 *Quote:*   

> Or, 8 times faster than your jumbo frame.

 

So what? Those frames contain 1/8 as much data.

The IP stack latency is at least 20 or 30 microseconds per packet. For 1500 byte frames the IP stack is taking at least as long as transmitting the packet itself. You are likely to saturate a CPU before you saturate the wire. That is why jumbo frames are so important for gigabit ethernet.
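The wire times themselves are easy enough to check (pure arithmetic on payload sizes only - framing overhead is ignored):

```python
def wire_time_us(frame_bytes, link_bps=1e9):
    """Serialization time of one frame on the wire, in microseconds."""
    return frame_bytes * 8 / link_bps * 1e6

print(round(wire_time_us(1518), 1))  # 12.1 us for a standard frame
print(round(wire_time_us(9216), 1))  # 73.7 us for a maximum jumbo frame
```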

Faster network hardware will not help with this problem, IP reassembly is done by the destination CPU.

 *Quote:*   

> Both request and reply for a normal packet will inevitably be faster - especially since the request will typically be only a few 100 bytes.

 

A 100 byte packet will be 100 bytes long, whether jumbo frames are enabled or not. It's not like turning on jumbo frames causes all packets to be 9k long. Ethernet frames are no larger than they need to be.

Turning off jumbo frames will increase the latency of any transfer larger than 1500 bytes, and will have no effect on any packet smaller than that.

 *Quote:*   

> Anybody serious about pumping this much data will opt for a sub-millisecond 300MB/sec+ RAID-6 with 15K rpm drives.

 

Everyone benefits from jumbo frames. You can saturate a gigabit ethernet with a few commodity hard drives.

 *Quote:*   

> [jumbo frames] add latency in the network packet handling, i.e. responsiveness, as I said.

 

Nope. In a properly functioning gigabit network, NFS performance (both latency and throughput) is better with jumbo frames than without. This is true whether or not the data is in cache. If you don't believe me, Google "nfs mtu performance", or measure it yourself.

Martin

----------

## Jovana

 *Kai Hvatum wrote:*   

> I was wondering what speeds are possible over a Gigabit connection using NFS?
> 
> Also, if anyone has NFS server configuration files optimized for Gigabit they would be much appreciated.

 

Okay, but back to the question. I'm also wondering the same things Kai Hvatum asked.

----------

