# Gentoo NFS Client > Windows NFS Server Performance Issues

## Crimjob

Hey guys,

I've been racking my brain over this one for months and I've finally given up and made the (perhaps temporary) switch to CIFS.

I've found plenty of posts around the internet about NFS issues, but my situation appears to be unique(ish), as I haven't found any posts about it elsewhere (only the other way around).

I have a Windows Small Business Server 2011 machine running Server for NFS and Services for Unix. It took me a while to get NIS and username mapping and everything set up, which is partly why I want to keep running it, but it's always run like crap.

Small file transfers are doable, but very slow:

```
Aurora ~ # dd if=/dev/zero of=/mnt/nfs/testfile_10MB bs=10M count=1
1+0 records in
1+0 records out
10485760 bytes (10 MB) copied, 3.93638 s, 2.7 MB/s
Aurora ~ # dd if=/dev/zero of=/mnt/nfs/testfile_50MB bs=50M count=1
1+0 records in
1+0 records out
52428800 bytes (52 MB) copied, 26.0095 s, 2.0 MB/s
```

Large file transfers are pretty much out of the question. They do eventually succeed, but they take absolutely *forever*, and to boot, they completely lock up not one but both machines to the point of being unusable. These are file transfers between two of my main servers, so this doesn't work out well for my network when two out of four main components are unresponsive  :Smile: 

CIFS on the other hand seems to be much quicker and does not cause these lockups, but I feel there is yet more speed available to my network (all Cisco gigabit wired gear, less than 0.1ms latency across the board, no errors or discards).

```
Aurora ~ # dd if=/dev/zero of=/mnt/cifs/TV/testfile_10MB bs=10M count=1
1+0 records in
1+0 records out
10485760 bytes (10 MB) copied, 0.170322 s, 61.6 MB/s
Aurora ~ # dd if=/dev/zero of=/mnt/cifs/TV/testfile_50MB bs=50M count=1
1+0 records in
1+0 records out
52428800 bytes (52 MB) copied, 1.10896 s, 47.3 MB/s
Aurora ~ # dd if=/dev/zero of=/mnt/cifs/TV/testfile_500MB bs=500M count=1
1+0 records in
1+0 records out
524288000 bytes (524 MB) copied, 9.31721 s, 56.3 MB/s
```

One thing I really wanted to try but never figured out is async transfers: there doesn't appear to be an option for it on the Windows server, and setting it client-side has no effect.
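For reference, this is the kind of client-side mount entry I've been testing (a sketch only: the hostname, share path, and rsize/wsize values below are examples, not my exact setup). Worth noting that on a Linux NFS server sync/async is an export-side setting, so a client-side "async" flag is effectively a no-op for NFS, which would explain why it has no effect here:

```
# /etc/fstab sketch -- hostname, share, and transfer sizes are examples
sbs2011:/export/share  /mnt/nfs  nfs  rw,tcp,hard,intr,rsize=32768,wsize=32768  0 0
```

The rsize/wsize values are just starting points; bumping or lowering them is one of the few client-side knobs available.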

I'm just wondering if anyone else has tried something similar, whatever the outcome? I'm unable to find much information on the web, other than the likely explanation that Microsoft doesn't implement NFS for Windows well. I just can't imagine why they'd even bother putting it out there if the performance is always this bad, though  :Sad: . Perhaps there are other options / settings I can try?

----------

## Rexilion

The post below might be considered 'throwing raw stuff over the wall and never looking back', since this seems to be more of a Windows-related issue than a Linux one. However, I did find some links:

http://technet.microsoft.com/en-us/library/bb463205.aspx (optimizations)

This mentions the following as well (besides the other probably useful stuff):

 *Quote:*   

> NFS-Only Mode
> 
> Enhanced NFS performance can be achieved by using the NFSONLY.EXE application. This allows a share to be modified to do more aggressive caching to improve performance. This may be set on a share-by-share basis. NFS-Only mode should not be used on any share that can be accessed by any means other than NFS, because data corruption can occur. However, as much as a 15% improvement has been observed when using an NFS-only share. The syntax of this command is:
> 
> NfsOnly <resourcename|sharename> [/enable|/disable]
> ...

 

I also found this:

http://www.suacommunity.com/forum/tm.aspx?m=18142 (post mentioning somewhat identical problems)

http://www.oreillynet.com/onlamp/blog/2008/04/nfs_vs_cifs_for_vmware.html (mentions the same outcome: use CIFS)

And as a final remark:

Did you try jumbo frames (ifconfig eth0 mtu 9000)?

----------

## Crimjob

 *Rexilion wrote:*   

> The post below might be considered 'throwing raw stuff over the wall and never looking back', since this seems to be more of a Windows-related issue than a Linux one. However, I did find some links:
> 
> http://technet.microsoft.com/en-us/library/bb463205.aspx (optimizations)
> 
> This mentions the following as well (besides the other probably useful stuff):
> ...

 

Interesting, those are very, very close to my situation, and I had no luck finding them myself  :Razz: .

I've tried the majority of the optimizations listed there, with mixed results. It looks like some of the changes have made the client a bit more responsive, but the server still practically locks up. The speeds got worse as well. Wishing I'd backed up the defaults first  :Smile: 

Sounds like I'll be sticking with CIFS for this situation, though, as it seems to work much better. One of those posts suggests that VMware supports CIFS as well, so I'll have to give that a try to get better performance for my VMs, which are currently hobbling along on NFS.

 *Quote:*   

> 
> 
> And as a final remark:
> 
> Did you try jumbo frames (ifconfig eth0 mtu 9000)?

 

I have not. I've actually been pretty hesitant about doing so. I have some things on the network that require specific MTU settings, and everything outside of my network must match 1500. My primary concern with enabling jumbo frames is the amount of work my router (a Gentoo box) will have to do if the traffic is headed to the internet. Mind you, I'm not really hurting in the power department (it's overkill for a router: dual dual-core AMD Opteron 280s, 8GB RAM, 10K RPM SAS), but between the lack of time to "play around" with it and the fear, I just haven't gotten to it  :Smile: .

Would you happen to know of any potential downsides to enabling jumbo frames? Should I even worry about the extra overhead on the router in this day and age, with the power I have?

----------

## Crimjob

I couldn't even get a 1GB file to complete successfully  :Sad: 

```
Aurora ~ # dd if=/dev/zero of=/mnt/nfs/testfile_10MB bs=10M count=1
1+0 records in
1+0 records out
10485760 bytes (10 MB) copied, 10.2177 s, 1.0 MB/s
Aurora ~ # dd if=/dev/zero of=/mnt/nfs/testfile_50MB bs=50M count=1
1+0 records in
1+0 records out
52428800 bytes (52 MB) copied, 36.353 s, 1.4 MB/s
Aurora ~ # dd if=/dev/zero of=/mnt/nfs/testfile_100MB bs=100M count=1
1+0 records in
1+0 records out
104857600 bytes (105 MB) copied, 23.3514 s, 4.5 MB/s
```

----------

## Rexilion

To be honest, I find CIFS a lot better than NFS too. Strange hangs, shares that wouldn't unmount, file operations that took ages, file transfers that never completed (or only half did), and finally unexplainable unresponsiveness.

About the jumbo frames: enable them only on the card serving your internal network. Judging by how it works, it should reduce the load on the router for the same data rate; instead of having to detect/repair/check/verify/handle 9000/1500 = 6 packets, it only has to handle 1. Make sure your hardware supports it, and do try it; you're hurting yourself if you don't.
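A rough back-of-the-envelope sketch of that packet-count argument (assuming plain IPv4/TCP with 40 bytes of headers per frame and no options; real traffic adds Ethernet framing and occasional options on top):

```python
import math

IP_TCP_HEADERS = 40  # 20-byte IPv4 header + 20-byte TCP header, no options

def segments_needed(transfer_bytes: int, mtu: int) -> int:
    """Ethernet frames needed to carry transfer_bytes of TCP payload."""
    payload_per_frame = mtu - IP_TCP_HEADERS
    return math.ceil(transfer_bytes / payload_per_frame)

transfer = 500 * 1024 * 1024  # the 500 MB test file from earlier in the thread
std = segments_needed(transfer, 1500)    # standard frames
jumbo = segments_needed(transfer, 9000)  # jumbo frames
print(std, jumbo, round(std / jumbo, 2))
```

The ratio comes out at roughly 6x fewer frames for the router to handle, which is where the 9000/1500 figure comes from.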

I'm out of ideas here, though I did notice some other NFS server implementations for Windows. Maybe those are better? Services for Unix isn't exactly 'optimized', according to that VMware page, and considering the validity of that argument, it could well be true. They have CIFS; why would they put effort into NFS?

----------

## Crimjob

Well, so far CIFS seems to be functioning, so I guess I can live with it. I've also switched my VMware box to iSCSI, which is much faster than Windows NFS was.

As for the jumbo frames, all the server equipment supports them. My concern is that some devices on the network will access content on one of the servers, and some of them are hard-coded to an MTU of 1500. Is that going to cause a problem if I have the server running at 9000? Otherwise, I'm excited for the test  :Smile: 

----------

## Rexilion

Unfortunately that is not possible; I found this on Wikipedia:

 *Quote:*   

> Internet Protocol subnetworks require that all hosts in a subnet have an identical MTU. As a result, interfaces using the standard frame size and interfaces using the jumbo frame size should not be in the same subnet. To reduce interoperability issues, network interface cards capable of jumbo frames require explicit configuration to use jumbo frames.

 

However, a nice alternative is setting up two networks, one with MTU 1500 and another with 9000. One of the servers could simply hold *another* LAN card with a small separate network for the units that can't do jumbo MTU, and do forwarding for that network.
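As a sketch (the interface names and addresses here are invented, not from anyone's actual setup), that could look something like this on the dual-homed server:

```shell
# Jumbo-frame leg for the storage/transfer network
ip link set eth1 mtu 9000
ip addr add 10.0.9.1/24 dev eth1

# Standard-MTU leg for the devices stuck at 1500
ip link set eth2 mtu 1500
ip addr add 10.0.15.1/24 dev eth2

# Route between the two subnets so both sides can reach the shares
sysctl -w net.ipv4.ip_forward=1
```

Each subnet then has a uniform MTU, which is what the Wikipedia quote requires.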

----------

## Crimjob

Interesting! Likely a problem I ran into when I attempted this years back  :Razz: 

Luckily I have a spare gigabit line card for my Cisco 4006 I should be able to dedicate to such a task.

I'm going to pick up two Intel PCI-X dual-gigabit NICs and give this all a whirl on a separate subnet just for large file transfers, hopefully allowing for expandability in the future (maybe Mr. VMware can have split port channels, one for iSCSI and one for regular LAN). You've really got my mind going on optimization; not sure where I can stop now!

Thanks again for all your assistance  :Smile:  I'll post back if I have any problems once the gear arrives.

----------

