# NFSv4 on Gentoo - has anyone got it to work?

## brankob

Hi to all.

I am using NFSv{2,3} on Gentoo for quite some time now and like it very well.

It's much faster and less CPU intensive than Samba, and it works with practically all applications.

But there are a few features from NFSv4 I sorely miss, so I have made quite a few attempts at NFSv4, almost all of them unsuccessful so far.

I have a couple of machines, one server and two workstations, connected through Gbit Ethernet.

All machines are dual Opteron, running 64-bit Gentoo Linux with a fresh gentoo-sources 2.6.16 kernel.

I have tried exporting a couple of directories through NFSv4 on the server, and I run into basically two problems:

- I have to export some share with fsid=0 (the "root share"), otherwise my clients can't mount any share from the server.

- The fsid=0 share (the root share) must contain all other exported shares.

This contradicts the NFSv4 protocol documentation, which basically says that although NFSv4 presents all exports as one filesystem, with the fsid=0 share as the root of that FS, there is no requirement that the root share actually be exported, or that it physically contain all other shares on the server's filesystem.

The scarce documentation basically says that I can export whatever I want and NFS on the server will "glue" all exported shares into one FS.

It ain't so in the real world.

Not only do I have to actually export the fsid=0 share, it has to be the parent of all other exported shares on the server's FS, and it has to be accessible (readable, writable, etc.).

Since the least common denominator of all my exported shares is the root directory, "/", that puts me in an awkward position: I have to export "/", and it has to be world readable, writable and executable.
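For concreteness, this is the shape of /etc/exports the server effectively forces on me (the client subnet and the second path are hypothetical placeholders):

```

# /etc/exports - what the server ends up requiring (subnet is hypothetical)
/             192.168.1.0/24(rw,fsid=0,sync,no_subtree_check)
/home/shared  192.168.1.0/24(rw,sync,no_subtree_check)

```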

Another weird thing is that even if I make it work, the client can't access anything on the share unless it is world readable/writable/executable.

After looking at the attributes of files uploaded from the client, it seems that NFS gets the uid and gid totally wrong. I get them as 32-bit integer quantities with an absurd negative value instead of e.g. 1001/100 (uid=my_username, gid=users).
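That "absurd negative value" is likely 4294967294, the nobody fallback uid NFSv4 assigns when id mapping fails, which displays as -2 in any tool that treats it as a signed 32-bit int. A quick sketch of the arithmetic:

```python
def to_signed32(u):
    """Interpret an unsigned 32-bit value as signed two's complement."""
    return u - (1 << 32) if u >= (1 << 31) else u

# The fallback uid seen when NFSv4 id mapping fails, reinterpreted as signed:
print(to_signed32(4294967294))  # -2
print(to_signed32(1001))        # 1001 - a correctly mapped uid is unchanged
```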

As I understand the specs, client and server should be able to agree on an alphanumeric username/group.

Both exist on the server and the client, although the username's uid differs slightly between them (IIRC 1001 vs 1000).
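One thing worth checking for the uid/gid translation: NFSv4 maps ids through rpc.idmapd, which must be running on both sides, and the Domain setting in /etc/idmapd.conf must be identical on client and server. A sketch (the domain value here is an assumption):

```

# /etc/idmapd.conf - Domain must match on client and server
[General]
Domain = example.lan

[Mapping]
Nobody-User = nobody
Nobody-Group = nobody

```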

I have tried with several kernels from gentoo-sources as well as the latest vanilla sources, all with the same result...

I don't understand it. On the official NFSv4 site there are many NFSv4-on-Linux success stories, stories about robustness testing at "connectathons" etc., but I can't find a single soul anywhere who could muster an educated guess about a solution to my problem, or even share his/her NFSv4 experiences.

Can anyone here shed some light on this? Is anyone here using NFSv4?

----------

## tkdfighter

I haven't bothered to set up NFSv4 yet, as v3 is sufficient at the moment for my purposes, but there's a good guide on gentoo-wiki.org.

Maybe you could keep us up to date on your progress?

----------

## brankob

That how-to is elementary, doesn't work in practice, and is outdated.

It doesn't go much beyond compiling the right modules, loading them, writing /etc/exports and mounting the exported partitions.

In theory, things should work like shown in that how-to. In practice, they don't.

----------

## brankob

It seems it now works a bit better than before, although I would have to have a death wish to even contemplate using it for anything semi-useful.

Now at least the client and server can agree on username and group without external means, but the root (fsid=0) share still has to be explicitly exported, it has to physically contain all other exported shares on the server, and it can't be read-only if other shared directories need read-write access. It is also very unstable.

When something breaks, the share can't be effectively unmounted, and the only way to mount anything again with "mount -t nfs4" is to reboot both server and client.

Still, it is a step forward...


----------

## Herring42

What I have learned so far:

As you said, with NFSv4 all exports have to come from a single directory, but that doesn't (and shouldn't) have to be root!

Simply make a directory /exports, then mount the directories you wish to export underneath it with mount --bind:

```
mkdir -p /exports/shared
mount --bind /home/shared /exports/shared
```
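To make such a bind mount survive a reboot, it can also go in /etc/fstab (paths assume the /exports layout described above):

```

# /etc/fstab - recreate the bind mount at boot
/home/shared   /exports/shared   none   bind   0 0

```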

----------

## brankob

O.K.

But now I have a new set of problems with NFSv3/v4:

I can't export a directory whose subdirs are Samba mounts.

I mean, I can export it, but all clients see only empty subdirs.

There is an option "no_subtree_check" or some such, but it doesn't have any effect...

----------

## brankob

Sorry, my error. 

The subdirs are not mounted through cifs (Samba etc.). They are mounts of separate LVM partitions on that machine.

So through NFS I can only see things on the same physical mount. Once some other partition is mounted on a dir, all I can see is an empty dir...

I didn't check whether that dir is always empty or just showing its "unmounted" content...
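This matches documented exportfs behavior: an export only covers the filesystem it lives on, so child mounts (the LVM partitions here) stay hidden unless the export says otherwise. The exports(5) options for this are nohide on each child export, or crossmnt on the parent; a sketch with hypothetical paths:

```

# /etc/exports - let clients see filesystems mounted below the export point
/export        *(rw,fsid=0,crossmnt,sync,no_subtree_check)
# ...or export each child mount explicitly with nohide:
/export/data   *(rw,nohide,sync,no_subtree_check)

```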

----------

## zecora

I am also using NFSv4, and the server running it is also running Samba. When I look at the mount dirs on the client I cannot view what is in them, but when I run an ftp server on the client I can view the dirs. I also cannot get svcgssd to load when I start nfs. Is this a problem you are having as well?

thx,

Benjamin

EDIT:  Here is the thread I started about this problem

https://forums.gentoo.org/viewtopic-t-523470.html

----------

## Herring42

Ahh!

Just found a solution for me. It might help you too:

Make sure you have nohide as an option in your /etc/exports:

```
/exports    gss/krb5(rw,fsid=0,insecure,no_subtree_check,sync)
/exports    gss/krb5i(rw,fsid=0,insecure,no_subtree_check,sync)
/exports    gss/krb5p(rw,fsid=0,insecure,no_subtree_check,sync)
/exports/music    gss/krb5(rw,nohide,insecure,no_subtree_check,sync)
/exports/music    gss/krb5i(rw,nohide,insecure,no_subtree_check,sync)
/exports/music    gss/krb5p(rw,nohide,insecure,no_subtree_check,sync)
/exports/shared    gss/krb5(rw,nohide,insecure,no_subtree_check,sync)
/exports/shared    gss/krb5i(rw,nohide,insecure,no_subtree_check,sync)
/exports/shared    gss/krb5p(rw,nohide,insecure,no_subtree_check,sync)
```

Also, ensure that rpc.gssd is running on the client.
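For reference, the matching client side for exports like these can live in /etc/fstab. Note that an NFSv4 mount path is relative to the fsid=0 root, so the music share above is mounted as /music, not /exports/music (the hostname and mount point here are hypothetical):

```

# /etc/fstab - Kerberos-authenticated NFSv4 mount (hostname hypothetical);
# the remote path is relative to the fsid=0 root
fileserver:/music   /mnt/music   nfs4   sec=krb5   0 0

```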

----------

## chris.c.hogan

NFSv4 is still very much a work in progress. However, I've basically gotten it working.

For Gentoo:

The nfsmount init script is still being worked on; it currently does not start all of the needed services. Bug 101624 has been filed. I haven't tested the patches at that bug. However, I did upload an init script to bug 150006 that works. I just haven't had the time to break it up into separate scripts.

For ACL support, the sys-apps/acl (up to acl-2.2.39-r1) package needs a patch from http://www.citi.umich.edu/projects/nfsv4/linux/. I seem to remember a bug being filed. However, I'm not sure what it is.

----------

## Herring42

That explains a lot.

Out of curiosity, I'm mounting my nfs mounts using fstab. Should they be listed elsewhere if using nfsmount?

----------

## depontius

I recently got nfsv4 running, though not without a few glitches, one of which still exists.

I'd started at "x86", nfs-utils-1.0.6 with nfsv3, and that's what I still have on my production server. Then one day nfs-utils-1.0.10 went stable, which brings with it the NFSv4 stuff, and it got upgraded on my systems (except the amd64, where it's still not stable) along with other packages.

Things started up just fine, but none of the mounts would stay up longer than 15-20 minutes. In fact, they'd pretty reliably drop without symptoms in 15-20 minutes, leaving a bunch of "???????" in "df" reports as about the only clue, other than not being able to access data. I took all systems back to nfs-utils-1.0.6 and got things running again. Then I realized that since the amd64 system had stayed at that level and still had problems, the fault must lie with the server, so I brought everything but the amd64 back to 1.0.10 again.

On bugzilla someone suggested nfs-utils-1.0.12, so I tried that on a test server. With 1.0.12 on the server and 1.0.10 on the clients, I was prepared for NFSv4.

Note:

Probably the biggest issue with nfsv4 is getting your head around the whole "virtual root" concept, along with the right nohide options. At least until I try to beef up my authentication, that is.

Beyond that, nfsv4 works and the mounts appear stable. I haven't done any benchmarking, but I'd swear that editing a file pops up faster grabbing the data from an nfsv4 mount (of a mirrored copy) than grabbing the same data over nfsv3.

There's just one oddity left. After a clean boot, the first attempt to start rpc.statd fails. If I then "/etc/init.d/nfs restart" everything starts up OK and works. I get:

```
Mar  8 07:35:44 testserver rpc.statd[13352]: Version 1.0.11 Starting
Mar  8 07:35:45 testserver rpc.statd[13352]: unable to register (statd, 1, udp).
Mar  8 07:35:45 testserver rc-scripts: Error starting NFS statd
```

I've done "ps -ef" at various points and found nothing hanging around from the first attempt that might make the second attempt pass. Nor am I using any tweaked stuff, just stock stuff in portage. I'm also not running any special security yet, so svcgssd isn't an issue (yet). The mounts have been sufficiently reliable that I'd feel comfortable moving 1.0.12/nfsv4 to my production server. I could always put "/etc/init.d/nfs restart" in "/etc/conf.d/local.start", but I'd like to understand the bad start first.

----------

## tnt

is there any simple way to get the same UID and GID on the client and the server?

I just want user 0 on the server to be user 0 on the client, and user 1000 on the server to be user 1000 on the client - just as it was with nfsv3...

----------

## tnt

 *tnt wrote:*   

> is there any simple way to get the same UID and GID on the client and the server?
> 
> I just want that user 0 on server is user 0 on client and user 1000 on server is user 1000 on client - just as it was with nfs3... 

 

anyone?

----------

## depontius

 *tnt wrote:*   

>  *tnt wrote:*   is there any simple way to get the same UID and GID on the client and the server?
> 
> I just want that user 0 on server is user 0 on client and user 1000 on server is user 1000 on client - just as it was with nfs3...  
> 
> anyone?

 

I didn't notice that the uid rules were any different on nfsv4 vs nfs3. I have anonuid= and anongid= in my /etc/exports to handle root, and for the rest, I just try to "harmonize" my /etc/passwd and /etc/group files. (They're not perfect, but they cover the real users.)
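Harmonizing by hand can at least be checked mechanically. Here is a small sketch (a hypothetical helper, with made-up sample passwd lines) that flags usernames whose numeric uids differ between two passwd-format files:

```python
def passwd_uids(text):
    """Map username -> numeric uid from passwd-format text (name:pw:uid:gid:...)."""
    uids = {}
    for line in text.splitlines():
        if line and not line.startswith("#"):
            fields = line.split(":")
            uids[fields[0]] = int(fields[2])
    return uids

def uid_mismatches(server_text, client_text):
    """Usernames present on both sides whose numeric uids differ."""
    s, c = passwd_uids(server_text), passwd_uids(client_text)
    return {u: (s[u], c[u]) for u in s.keys() & c.keys() if s[u] != c[u]}

# Made-up sample data illustrating the 1001-vs-1000 skew from earlier in the thread:
server = "root:x:0:0::/root:/bin/bash\nbranko:x:1001:100::/home/branko:/bin/bash\n"
client = "root:x:0:0::/root:/bin/bash\nbranko:x:1000:100::/home/branko:/bin/bash\n"
print(uid_mismatches(server, client))  # {'branko': (1001, 1000)}
```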

----------

## chucks

 *tnt wrote:*   

>  *tnt wrote:*   is there any simple way to get the same UID and GID on the client and the server?
> 
> I just want that user 0 on server is user 0 on client and user 1000 on server is user 1000 on client - just as it was with nfs3...  
> 
> anyone?

 

It is not necessary on NFSv4, but I can heartily sympathize with wanting this as a feature of your network.

Two services can provide this, each with its own hitches:

NIS - historically insecure, but has a long history of being coupled with Sun & NFS.

LDAP - newer, quite secure.  This can provide the authentication & account directory for NFS and your entire network in general.  Hard to configure, but well worth it if you do.  Also, look into the smbldap tools for ldap-oriented replacements for useradd and groupadd and others.
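Either way, once the directory service is up, clients and server pull accounts from it through /etc/nsswitch.conf; for LDAP with nss_ldap that looks roughly like:

```

# /etc/nsswitch.conf - consult local files first, then LDAP
passwd: files ldap
group:  files ldap
shadow: files ldap

```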

----------

