# NFS completely fucked, misc. questions.

## dE_logics

On my side NFS is working just horribly.

1) If I set timeo to 3 seconds, the NFS client should retry requests every 3 seconds in case the server is too slow. In my case, while one heavy operation is running, any other client application using the NFS share hangs completely -- even if that second application is just bash completion. Yup, you got it right: dd on one side and bash completion on the other completely hang the clients, with a lot of - 

```
...
nfs: server 192.168.1.3 not responding, still trying
nfs: server 192.168.1.3 OK
nfs: server 192.168.1.3 OK
nfs: server 192.168.1.3 not responding, still trying
...
```

scrolling past. And at times it hangs everything except the kernel, even the ttys, forcing a hard reboot.

2) HDD activity on the server is too high -- I don't understand why a simple dd read from the network share makes the HDD light blink as if the dd operation were running locally. As a result I get 127 Kbps on a 100 Mbps network, with an HDD capable of 100 MB/s. The dd operation only performs well initially.

To top it off, the dd operation refuses to die, even if I try to kill the process or umount the share.

All of this happens regardless of the timeo value; it happened recently even with timeo=60.

mount options - 

rsize=200,wsize=200,async,hard,timeo=60,retrans=2,actimeo=900,retry=2,lookupcache=pos,nfsvers=4,nolock,intr,noacl,rdirplus
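For reference, a full invocation with those options might look like the following sketch (192.168.1.3 is the server from the log messages above; the export path and mount point are placeholders):

```shell
# Hypothetical mount command using the options quoted above.
# /mnt/nfs is a made-up mount point; adjust the export path to taste.
mount -t nfs -o rsize=200,wsize=200,async,hard,timeo=60,retrans=2,actimeo=900,retry=2,lookupcache=pos,nfsvers=4,nolock,intr,noacl,rdirplus \
    192.168.1.3:/ /mnt/nfs
```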

----------

## Hu

Those are really tiny read/write windows.  Larger windows are generally better for performance.  I mount a share with rw,rsize=524288,wsize=524288,timeo=600,retrans=2 (among other options) and can stream MythTV recordings over the NFS share with no stutter.  A full read of a 1.6G file takes ~20 seconds:

```
time cat <test.mpg>/dev/null

real    0m20.549s
user    0m0.000s
sys     0m0.469s
```

Both machines are negotiated to 1000Mb/s, according to ethtool.
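A quick way to confirm the negotiated link speed with ethtool (the interface name here is an assumption):

```shell
# Show the negotiated link speed; substitute your actual interface name.
ethtool eth0 | grep -i speed
```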

----------

## whig

It may help to test the network speed in each direction, e.g. with net-misc/iperf. Ping loss, perhaps?
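In case it helps, basic iperf usage looks like this (the server address is taken from the log messages earlier in the thread):

```shell
# On the server (192.168.1.3): listen for test connections.
iperf -s

# On the client: run a TCP throughput test against the server.
iperf -c 192.168.1.3
```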

----------

## dE_logics

 *Hu wrote:*   

> Those are really tiny read/write windows.  Larger windows are generally better for performance.  I mount a share with rw,rsize=524288,wsize=524288,timeo=600,retrans=2 (among other options) and can stream MythTV recordings over the NFS share with no stutter.  A full read of a 1.6G file takes ~20 seconds:
> 
> ```
> time cat <test.mpg>/dev/null
> 
> ...
> ```

 

Wow. However, in my case mounting with rsize/wsize of 80000 does not work, and going above 200 results in hangs on heavy file transfers (unsquashfs, for example).

If I don't specify the rsize/wsize options, data transfer is a lot faster but still very slow: at best 4 MB/s with dd. With other operations I get 88 Mb/s. But I/O on the server is still a problem; at least with dd, the disk rather than the bandwidth is still the bottleneck. Does increasing rsize/wsize help reduce I/O on the server?
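One rough way to separate disk I/O from network throughput is to time a sequential read over the share (the file path here is a placeholder):

```shell
# Read 100 MiB sequentially from a file on the NFS mount; dd reports throughput.
dd if=/mnt/nfs/testfile of=/dev/null bs=1M count=100
# Run the same read locally on the server: if both are slow,
# the disk rather than the network is the bottleneck.
```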

 *whig wrote:*   

> It may help to test the network speed in each direction, e.g. with net-misc/iperf. Ping loss, perhaps?

 

Nope, everything else works fine, including X over ssh, which would definitely trip up if even a bit went wrong.

----------

## dE_logics

OK, my fault -- I thought rsize/wsize was the packet size, so I assumed I was limited by the hardware's MTU and initially set it to 1500. Now it's 1048576 and working well. This also solves the I/O issue on the server.
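Note that the client and server may silently negotiate rsize/wsize down from what was requested; the values actually in effect can be checked after mounting:

```shell
# Show the mount options actually in effect, including rsize/wsize.
nfsstat -m

# Or read them straight from the kernel's mount table.
grep nfs /proc/mounts
```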

Thanks!!!

----------

## blackwhite

 *dE_logics wrote:*   

> OK, my fault -- I thought rsize/wsize was the packet size, so I assumed I was limited by the hardware's MTU and initially set it to 1500. Now it's 1048576 and working well. This also solves the I/O issue on the server.
> 
> Thanks!!!

 

Would you give me some details of your system: kernel version, nfs-utils version, the NFS mount options in /etc/fstab, and /etc/conf.d/nfs?

In my case, several clients hang randomly when executing or copying files shared from the NFS server, but each client can still copy other files from the server.

Thanks.

----------

## dE_logics

 *blackwhite wrote:*   

>  *dE_logics wrote:*   OK, my fault -- I thought rsize/wsize was the packet size, so I assumed I was limited by the hardware's MTU and initially set it to 1500. Now it's 1048576 and working well. This also solves the I/O issue on the server.
> 
> Thanks!!! 
> 
> Would you give me your system some details: kernel version, nfs-utils version? /etc/fstab nfs drive mount option, and /etc/conf.d/nfs?
> ...

 

Client is Gentoo with nfs-utils 1.2.3-r1 and kernel 2.6.37-zen; mount options as follows - 

mount -o rsize=1048576,wsize=1048576,async,hard,timeo=60,retrans=2,actimeo=900,retry=2,lookupcache=pos,nfsvers=4,nolock,intr,noacl,rdirplus

I do not use fstab for mounting, since I only mount occasionally.

Server is Debian testing with kernel 2.6.32, running nfs-kernel-server v1.2.3; the export options are - 

anonuid=1000,anongid=1000,secure,subtree_check,rw,fsid=0
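Put together as an /etc/exports line, that might look like the following sketch (the exported path and client subnet are placeholders, not from the original post):

```
/srv/nfs4  192.168.1.0/24(rw,fsid=0,secure,subtree_check,anonuid=1000,anongid=1000)
```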

----------

## blackwhite

Thanks.

Would you give me the output of emerge nfs-utils -pv? Thanks.

 *Quote:*   

> emerge nfs-utils -pv
> 
>  * Last emerge --sync was 41d 0h 59m 40s ago.
> ...

 

----------

