# NFS Recommendations

## Telamon

What do people out there recommend for NFS settings to optimize reliability and performance on a 100 Mbit LAN, where the client machines may often be rebooted? I'm currently using

```
timeo=14,retry=1,rsize=32768,wsize=32768,sync,intr 
```

 and I sometimes run into problems with reads/writes failing when I copy large files (200+ megs) to or from NFS partitions, or when I unrar large archives from an NFS mount.

Both machines are running kernel 2.6.1.

----------

## meowsqueak

I just use the defaults (with 'sync') and it works 100% perfectly. Never had a problem.

----------

## Ateo

 *meowsqueak wrote:*   

> I just use the defaults (with 'sync') and it works 100% perfectly. Never had a problem.

 

Same here.

----------

## jondkent

Hi,

I find that you don't usually have to be clever with NFS; it's pretty straightforward.

Jon

----------

## Cossins

I set wsize and rsize to 8192, which gave me a clear performance boost compared to the default.

- Simon

----------

## nielchiano

 *Cossins wrote:*   

> I set wsize and rsize to 8192, which gave me a clear performance boost compared to the default.

 

What do you mean by "clear"? Any numbers? Like, a 100 MB file took xx seconds on default, xx seconds on my settings...

And what type of network do you use? 100Base-T?

----------

## Cossins

 *nielchiano wrote:*   

>  *Cossins wrote:*   I set wsize and rsize to 8192, which gave me a clear performance boost compared to the default. 
> 
> What do you mean by "clear"? Any numbers? Like, a 100 MB file took xx seconds on default, xx seconds on my settings...
> 
> And what type of network do you use? 100Base-T?

 

100Base-T, yes. I didn't benchmark the boost exactly, but before the change I was transferring files at 5-6 MB/s, and afterwards about 9-10 MB/s.

[OT]

Can someone confirm this:

b = bit

B = byte

[/OT]

- Simon

----------

## nielchiano

 *Cossins wrote:*   

> 
> 
> [OT]
> 
> Can someone confirm this:
> ...

 

I can; officially it's b for bit, B for byte...

and k for kilo, M for mega.

However, there is one exception that is widely accepted: if a capital K is used for kilo, it means 1024 instead of 1000:

1 Mbps = 1,000,000 bits per second

2 KB = 2048 bytes

1 MB = (depending) 1,000,000 bytes or 1,048,576 bytes...
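The arithmetic above can be checked with plain shell arithmetic (a quick sketch, nothing NFS-specific):

```shell
# 100 Mbps link: divide bits by 8 to get bytes per second
echo $((100000000 / 8))   # 12500000 bytes/s, i.e. roughly 12.5 MB/s

# 2 KB with the binary-kilo convention
echo $((2 * 1024))        # 2048 bytes

# 1 MB, binary interpretation
echo $((1024 * 1024))     # 1048576 bytes
```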

----------

## krunk

I'm experiencing the same problems. Transfers of large files always fail. Any sustained throughput eventually fails as well, i.e. playing MP3s over the LAN, which is the main purpose of the server I put together.

The only solution is to `/etc/init.d/nfs restart` on the server and `/etc/init.d/nfsmount restart` on the client. (If there is another way, please let me know.)

These are the pertinent config files, let me know if I left anything out or if there are any logs, etc, that would be helpful:

```
#Server:

# /etc/exports: NFS file systems being exported.  See exports(5).

/mnt/mnt/share 192.168.0.0/255.255.255.0(ro,sync)

#Client:

192.168.1.45:/mnt/share         /mnt/share      nfs     rw,hard,intr,user                       0 0

```

Of course I have several PCs on the network and they all export and mount NFS shares, but they all look like the above. Nothing fancy. How can I resolve this problem?

----------

## dreamer

@krunk:

I use NFS for the same purposes (playing MP3s over the LAN) and am not experiencing any problems.

Basically I have the same mount options as you do; I only added rsize=8192,wsize=8192 to them. As stated, this gives a performance boost of about 40-50% (depending on the quality of your LAN). Maybe your problem has something to do with this? It's worth a try  :Smile: 

Do you use NFS v3, or still NFS v2?
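Applied to an fstab entry like krunk's above, the tweak would look something like this (a sketch only; the server address and paths are just examples from earlier in the thread):

```
192.168.1.45:/mnt/share   /mnt/share   nfs   rw,hard,intr,user,rsize=8192,wsize=8192   0 0
```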

----------

## meowsqueak

 *dreamer wrote:*   

> i only added rsize=8192,wsize=8192 to them

 

So this is just a client-side tweak? No changes required to server?

----------

## eNTi

```

rsize=8192,wsize=8192,timeo=14,intr,hard,tcp

```

these are my settings. it works very well, even though you have to enable NFS-over-TCP support in your kernel. it's still considered experimental, but works well.

----------

## dreamer

 *meowsqueak wrote:*   

>  *dreamer wrote:*   i only added rsize=8192,wsize=8192 to them 
> 
> So this is just a client-side tweak? No changes required to server?

 

Only client-side indeed; it's just another mount option.

----------

## krunk

Here is some output from mpg123 after an NFS failure. It only played one song:

```

Playing MPEG stream from 02_feeling_so_real.mp3 ...

MPEG 1.0 layer III, 128 kbit/s, 44100 Hz joint-stereo

 

[0:36] Decoding of 02_feeling_so_real.mp3 finished.

03_all_that_i_need.mp3: Stale NFS file handle

04_lets_go_free.mp3: Stale NFS file handle

05_everytime_you_touch_me.mp3: Stale NFS file handle

06_bring_back_my_happiness.mp3: Stale NFS file handle

07_what_love.mp3: Stale NFS file handle

08_first_cool_hive.mp3: Stale NFS file handle

09_into_the_blue.mp3: Stale NFS file handle

-z: Stale NFS file handle
```

Stale NFS handle?

 *eNTi wrote:*   

> 
> 
> ```
> 
> rsize=8192,wsize=8192,timeo=14,intr,hard,tcp
> ...

 

Could you elaborate on the benefits of intr, hard, and tcp?

----------

## dreamer

Can I see your /etc/exports file? (server-side)

Does it contain the sync or async keywords?

And I advise you to read the manpages on nfs and exports; they can be quite enlightening.

----------

## Cossins

Strange - I have just upgraded some kernels here and there (only minor upgrades, like 2.6.1 to 2.6.2-rc2 or something), and suddenly {rsize,wsize}=8192 gives really crappy performance. Increasing the values to 32768 improves it dramatically.

I benchmarked it: Writing exactly 64 MB* took nearly 4 minutes with my old settings, and around 7 seconds with {rsize,wsize} changed.

*) To do this: dd if=/dev/zero of=outfile bs=1M count=64

Also, don't be confused by the {rsize,wsize}-style of writing, it simply means "both rsize and wsize"...  :Smile: 

- Simon

----------

## krunk

 *dreamer wrote:*   

> can i see your /etc/exports file? ( serverside )
> 
> Does it contain sync or async keywords?
> 
> And i advise you to read the manpages on nfs ans exports, they could be quite enligthning.

 

My original server-side settings are in my first post.

I read all the docs when I set it up.... I was just being lazy.   :Embarassed: 

Anyhow, I recompiled my kernels to support NFS v4 and modified my */exports and fstab to reflect the settings suggested here, and I have been playing MP3s for hours, and transferred a ~700 MB file at the same time as a test, with no problems, disconnects, or even MP3 skippage.

Could you elaborate on the benchmarking technique?

```
dd if=/dev/zero of=outfile bs=1M count=64
```

doesn't do anything except drop down to a `>` prompt.

I suppose I'm supposed to substitute my own settings for "zero" and "outfile", but what?

cheers

james

----------

## Cossins

 *krunk wrote:*   

> Could you elaborate on the benchmarking technique?
> 
> ```
> dd if=/dev/zero of=outfile bs=1M count=64
> ```
> ...

 

Oh sorry, of course you will need to time the process with the `time` utility.

You don't have to replace anything with your own values. /dev/zero is a dummy device created by the kernel which outputs an endless stream of binary zeroes. It is mostly used for wiping hard drives, making it impossible to undelete files. You could also use /dev/random, but that uses some CPU power too, which is not desirable in this situation.

I don't understand why it gives you a `>` prompt. That usually happens when you have an unfinished quotation or something else bash doesn't understand. If you copy-paste the command I wrote, it shouldn't happen (it doesn't here, at least).

The full command for benchmarking write-operations on larger files with NFS:

```
# time dd if=/dev/zero of=testfile bs=1M count=64
```

The "count"-value can be anything of reasonable size, really.

To test the speed of read-operations, you can simply time copying the testfile to a local harddrive.
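Put together, a write-then-read benchmark might look like this (a sketch; /mnt/share stands in for wherever your NFS share is mounted):

```shell
# Write test: time writing 64 MB of zeroes onto the NFS mount
time dd if=/dev/zero of=/mnt/share/testfile bs=1M count=64

# Read test: time copying the file back to a local disk
time cp /mnt/share/testfile /tmp/testfile

# Clean up
rm /mnt/share/testfile /tmp/testfile
```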

- Simon

----------

## Cossins

On a side note:

Beware the dd utility!

For instance, you can use it to directly overwrite your whole hard drive with meaningless rambles from /dev/random in a single command, without warning.

See the man-page for more information.  :Smile: 

- Simon

----------

## dreamer

 *krunk wrote:*   

>  *dreamer wrote:*   can i see your /etc/exports file? ( serverside )
> 
> Does it contain sync or async keywords?
> 
> And i advise you to read the manpages on nfs ans exports, they could be quite enligthning. 
> ...

 

oops   :Embarassed:   :Smile: 

But I'm glad it works fine for you. In your post you are referring to NFS v4. I suppose you mean v3?

( now it's ME being lazy for not checking it myself   :Razz:   )

----------

## eNTi

 *krunk wrote:*   

> 
> 
>  *eNTi wrote:*   
> 
> ```
> ...

 

```

4.3. Mount options

4.3.1. Soft vs. Hard Mounting

There are some options you should consider adding at once. They govern the way the NFS client handles a server crash or network outage. One of the cool things about NFS is that it can handle this gracefully. If you set up the clients right. There are two distinct failure modes: 

soft

If a file request fails, the NFS client will report an error to the process on the client machine requesting the file access. Some programs can handle this with composure, most won't. We do not recommend using this setting; it is a recipe for corrupted files and lost data. You should especially not use this for mail disks --- if you value your mail, that is. 

hard

The program accessing a file on a NFS mounted file system will hang when the server crashes. The process cannot be interrupted or killed (except by a "sure kill") unless you also specify intr. When the NFS server is back online the program will continue undisturbed from where it was. We recommend using hard,intr on all NFS mounted file systems.

```

```

5.4. NFS over TCP

A new feature, available for both 2.4 and 2.5 kernels but not yet integrated into the mainstream kernel at the time of this writing, is NFS over TCP. Using TCP has a distinct advantage and a distinct disadvantage over UDP. The advantage is that it works far better than UDP on lossy networks. When using TCP, a single dropped packet can be retransmitted, without the retransmission of the entire RPC request, resulting in better performance on lossy networks. In addition, TCP will handle network speed differences better than UDP, due to the underlying flow control at the network level. 

The disadvantage of using TCP is that it is not a stateless protocol like UDP. If your server crashes in the middle of a packet transmission, the client will hang and any shares will need to be unmounted and remounted.

The overhead incurred by the TCP protocol will result in somewhat slower performance than UDP under ideal network conditions, but the cost is not severe, and is often not noticeable without careful measurement. If you are using gigabit ethernet from end to end, you might also investigate the usage of jumbo frames, since the high speed network may allow the larger frame sizes without encountering increased collision rates, particularly if you have set the network to full duplex.

```

I snipped that information from the NFS-HOWTO.
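Following the HOWTO's advice, an fstab entry combining hard,intr with TCP might look like this (a sketch; the server name and paths are placeholders, and the kernel needs NFS-over-TCP support enabled):

```
server:/export   /mnt/share   nfs   rw,hard,intr,tcp,rsize=8192,wsize=8192   0 0
```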

----------

## krunk

 *dreamer wrote:*   

> In your post you are referring to NFS v4. I suppose you mean v3 ? 
> 
> ( now it's ME being lazy for not checking it myself    )

 

```
[*]   Provide NFSv4 client support (EXPERIMENTAL)
```

----------

