# NFS rpc.mountd - memory leak?

## Oxydius

I have an NFS server with 512MB of RAM.

When it boots, the memory usage is around 100MB and the rest is used for caching and buffering.

However, after a few days the rpc.mountd process grows to the point where the RAM no longer caches anything. rpc.mountd consumes over 50% of the RAM and forces memory into swap. After a week I had over 1 GB of swap used, and rpc.mountd's virtual memory usage keeps growing. Restarting the nfs daemon once in a while fixes it before the kernel runs out of memory and the whole box crashes, but this really looks like a leak. My memory usage for the last month looks like a sawtooth, with weekly peaks over 1.5 GB.

Here are the options I use:

    # /etc/exports (server)
    /usr/portage 192.168.0.0/16(rw,sync,no_root_squash,no_subtree_check)

    # /etc/fstab (client)
    server:/usr/portage  /usr/portage  nfs  async,soft,intr,rw,lock,rsize=8192,wsize=8192  0 0

Did anyone else experience that?

I tried both nfs-utils-1.0.10 and nfs-utils-1.0.12-r3. Apparently, a leak was fixed between nfs-utils-1.0.12 and nfs-utils-1.0.12-r3 but it might be a different one.
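For anyone who wants to track this between restarts, here is a minimal sketch (assuming procps `ps` on Linux, which reports VSZ/RSS in KB; the `mountd_rss_kb` helper name is my own, not from any tool):

```shell
#!/bin/sh
# Quick look at rpc.mountd's footprint. Prints a notice if it is not running.
ps -o pid,vsz,rss,etime,args -C rpc.mountd || echo "rpc.mountd not running"

# Total resident KB across all rpc.mountd processes (0 when none) -- a single
# number that is easy to append to a log and graph later.
mountd_rss_kb() {
    ps -o rss= -C rpc.mountd 2>/dev/null | awk '{ s += $1 } END { print s + 0 }'
}
mountd_rss_kb
```

Running it right after boot and again a few days later should make the growth obvious.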

----------

## RiBBiT

I haven't noticed anything out of the ordinary, although my NFS server is rarely on for more than 24 hours in a row. Current RAM usage for rpc.mountd is about 3 MB after an uptime of 8 hours. I have nfs-utils 1.0.12.

----------

## Oxydius

I guess I'll submit a bug.   :Surprised: 

[Attached graph: memory usage, last month]

[Attached graph: memory usage, last day]

----------

## jjlawren

I've seen the same thing, running 2.6.19-r7 and nfs-utils-1.0.12-r3.

----------

## RAPHEAD

I've been successfully using NFSv4 with uptimes of several days and several clients, and it does not show such behaviour. Kernel 2.6.20.

----------

## dspgen

The system was very sluggish, and /usr/sbin/rpc.mountd was using 3 GB of virtual memory on a system with 2 GB of RAM.

I did /etc/init.d/nfs restart and all is well.

    uname -a
    Linux blue 2.6.21-gentoo #1 Mon Apr 30 06:20:30 EDT 2007 i686 AMD Athlon(tm) 64 Processor 3000+ AuthenticAMD GNU/Linux
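Until the underlying leak is fixed, a cron-driven watchdog along these lines could run that restart automatically; the 256 MB threshold, the `logger` message, and the helper names are assumptions for illustration, not anything from this thread:

```shell
#!/bin/sh
# Hypothetical watchdog: restart NFS when rpc.mountd's RSS crosses a limit.
LIMIT_KB=262144   # 256 MB in KB (ps reports RSS in KB); an arbitrary choice

# Sum the resident set of all rpc.mountd processes; prints 0 if none exist.
mountd_rss_kb() {
    ps -o rss= -C rpc.mountd 2>/dev/null | awk '{ s += $1 } END { print s + 0 }'
}

# True when the first argument (KB) exceeds the second (KB).
over_limit() {
    [ "$1" -gt "$2" ]
}

rss=$(mountd_rss_kb)
if over_limit "$rss" "$LIMIT_KB"; then
    logger "rpc.mountd RSS ${rss} KB over ${LIMIT_KB} KB, restarting nfs"
    /etc/init.d/nfs restart
fi
```

Dropped into cron hourly, it would keep the box from swapping itself to death between manual restarts.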

----------

## RAPHEAD

Very strange, I suggest you report bugs regarding NFS4 to these people:

http://bugzilla.linux-nfs.org/

Here you can see some stats of my server:

    dbserver1 ~ # uptime
     18:21:19 up 8 days, 14:18,  5 users,  load average: 2.91, 2.15, 1.69
    USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
    root      5649  0.0  0.0      0     0 ?        S<   Apr28   0:00 [nfsd4]
    root      5650  0.0  0.0      0     0 ?        S    Apr28   7:22 [nfsd]
    root      5651  0.0  0.0      0     0 ?        S    Apr28   8:49 [nfsd]
    root      5652  0.0  0.0      0     0 ?        S    Apr28   8:21 [nfsd]
    root      5653  0.0  0.0      0     0 ?        S    Apr28   9:33 [nfsd]
    root      5654  0.0  0.0      0     0 ?        S    Apr28   9:39 [nfsd]
    root      5655  0.0  0.0      0     0 ?        S    Apr28   9:00 [nfsd]
    root      5656  0.0  0.0      0     0 ?        S    Apr28   8:09 [nfsd]
    root      5657  0.0  0.0      0     0 ?        S    Apr28   7:28 [nfsd]
    root     23301  0.0  0.0   1696   620 pts/8    S+   18:24   0:00 grep --colour=auto nfs
    root     24585  0.0  0.0      0     0 ?        S    Apr28   0:00 [nfsv4-svc]

As you can see, it has almost no CPU or memory consumption.

----------

## veal

Same here. I had several out-of-memory reports in dmesg running 2.6.18-gentoo-r3 and nfs-utils-1.0.12 (rpc.mountd used 37% of memory and swap was full; a restart fixed it).

I upgraded nfs-utils on the following dates:

    Wed Apr  6 13:01:29 2005 >>> net-fs/nfs-utils-1.0.6-r6
    Mon Feb 26 16:49:12 2007 >>> net-fs/nfs-utils-1.0.10
    Wed Mar 28 14:51:59 2007 >>> net-fs/nfs-utils-1.0.12

My uptime was 11 days until yesterday, when the out-of-memory condition occurred.

So 1.0.12 had previously run for ~30 days without any noticeable problems (I use NFS quite intensively, keeping mp3s/movies on my server).

I will update now to nfs-utils-1.0.12-r3.

A memory leak seems to have been fixed according to the nfs-utils-1.0.12-r1 changelog, but apparently there is still a problem? I guess I'll upgrade the kernel as well if the problem persists.

----------

## dspgen

I just upgraded from net-fs/nfs-utils-1.0.12 to net-fs/nfs-utils-1.1.0 (on both server and client), and it looks like it is fixed.

Previously, rpc.mountd's RSS was 165312 after the client had mounted 12 shares, then it jumped to 291680 when I shut down the client, and it kept slowly going up.

With the new version it started at 708, then 740 after mounting 12 shares, then 748 after rebooting and remounting the client. Maybe still leaking, but much more slowly. Anyway, I am now logging it every minute, and will post anything useful I find.
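A per-minute logger of the kind described above might look like this (a sketch; the log path and the helper name are my assumptions, not the poster's actual setup):

```shell
#!/bin/sh
# Append "timestamp rss_kb" for rpc.mountd to a log file once per invocation.
LOG=${LOG:-/tmp/mountd-rss.log}

# rss is 0 when rpc.mountd is not running, so gaps in service are visible too.
log_mountd_rss() {
    rss=$(ps -o rss= -C rpc.mountd 2>/dev/null | awk '{ s += $1 } END { print s + 0 }')
    printf '%s %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$rss" >> "$LOG"
}

log_mountd_rss
```

From cron, something like `* * * * * LOG=/var/log/mountd-rss.log /usr/local/bin/log-mountd-rss.sh` produces a file that is trivial to graph and makes the sawtooth (or its absence) easy to see.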

----------

