# Mounting NFS share is refused

## Ingenieur

I am trying to mount a shared folder from a NAS over NFS on my Gentoo server. The command I am using is:

```
# mount -t nfs -vvv 192.168.1.2:/volume1/S_shared /home/S_shared/
```

This gives me several connection refused errors, like so:

```
mount.nfs: trying text-based options 'vers=4,addr=192.168.1.2,clientaddr=192.168.1.1'
mount.nfs: mount(2): Connection refused
```

and eventually a connection timeout.

Running showmount -e against the server also gives an error:

```
# showmount -e 192.168.1.2
clnt_create: RPC: Port mapper failure - Unable to send: errno 1 (Operation not permitted)
```

I have installed nfs-utils-1.2.3-r1, so no separate portmapper is required, and NFS is running.

When I mount the same share on an HP-UX workstation connected to the same network, there is no problem, and showmount -e shows the exports file of the NAS correctly.

I have a firewall (Shorewall) which allows connections to and from the ports shown by rpcinfo -p:

```
# rpcinfo -p
   program vers proto   port
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  32765  status
    100024    1   tcp  32765  status
    100005    1   udp  32767  mountd
    100005    1   tcp  32767  mountd
    100005    2   udp  32767  mountd
    100005    2   tcp  32767  mountd
    100005    3   udp  32767  mountd
    100005    3   tcp  32767  mountd
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100021    1   udp   4001  nlockmgr
    100021    3   udp   4001  nlockmgr
    100021    4   udp   4001  nlockmgr
    100021    1   tcp   4001  nlockmgr
    100021    3   tcp   4001  nlockmgr
    100021    4   tcp   4001  nlockmgr
```

Please let me know if you need any additional information to solve this problem.

----------

## NeddySeagoon

Ingenieur,

Welcome to Gentoo.

```
# mount -t nfs -vvv 192.168.1.2:/volume1/S_shared /home/S_shared/ 
```

A few things: you are trying to mount an NFS share as root. If it mounts, it is likely to be read-only, as root_squash is normally a default option.

Secondly, you don't give any mount point in the command. That's not always an issue, as /etc/fstab will be consulted for the mount point.

Sight of the /etc/exports on the server would be useful.

----------

## Ingenieur

Thanks for the quick reply.

I think I do give a mount point: it is "/home/S_shared". I am trying to mount /volume1/S_shared from the NAS (at IP 192.168.1.2) on /home/S_shared. Or is my command wrong?

The /etc/exports file on the NAS shows the following:

```
/volume1/S_shared       192.168.1.*(rw,sync,no_wdelay,no_root_squash,insecure_locks,anonuid=0,anongid=0)
```

As you can see, root is not squashed.

But I think my problem is related to something else, as the connection is refused when trying to mount the folder.

I should mention that the NAS generates its own exports file; I can only select a few options (like root_squash) through the NAS's GUI. I can, however, adapt the file over an SSH connection, which is also how the insecure_locks option was added. That was a recommendation from the NAS manufacturer's support staff.

Could it be that I did not configure Shorewall correctly and that it is not allowing traffic through the specified ports?

In the "SECTION NEW" of my /etc/shorewall/rules I have put the following lines:

```
# portmap
ACCEPT      loc             $FW        tcp     111
ACCEPT      loc             $FW        udp     111
# nfs over TCP and UDP
ACCEPT      loc             $FW        tcp     2049
ACCEPT      loc             $FW        udp     2049
#
ACCEPT      loc             $FW        tcp     892
ACCEPT      loc             $FW        udp     892
# rpc.quotad
ACCEPT      loc             $FW        tcp     32764
# rpc.statd
ACCEPT      loc             $FW        tcp     32765
ACCEPT      loc             $FW        tcp     32766
# rpc.mountd
ACCEPT      loc             $FW        tcp     32767
```

Can you spot anything out of the ordinary here?

----------

## Ingenieur

Anybody any idea?

Thanks

----------

## NeddySeagoon

Ingenieur,

Poke about with tcpdump at both ends to see what's coming and going.

Shorewall's show command, which displays the traffic being controlled by the various rules, is useful too.

```
Common firewall configurations block the well-known rpcbind port. In the
absence of an rpcbind service, the server administrator fixes the port
number of NFS-related services so that the firewall can allow access to
specific NFS service ports. Client administrators then specify the port
number for the mountd service via the mount(8) command's mountport
option. It may also be necessary to enforce the use of TCP or UDP if the
firewall blocks one of those transports.
```

So things may well be getting dropped.

Given the above, your firewall rules look right if the NFS share is on the firewall host itself.

I use NFS and Shorewall too, but as my NFS server is on my protected network, its traffic does not need to pass through Shorewall.

I use some non-standard Shorewall zone names, so my protected-zone-to-firewall-zone chain looks like this:

```
shorewall show green2fw
Shorewall 4.4.22.1 Chain green2fw at router - Fri Sep 23 18:00:16 BST 2011
Counters reset Sat Aug 27 18:19:46 BST 2011

Chain green2fw (1 references)
 pkts bytes target     prot opt in     out     source               destination
 198K   12M dynamic    all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate INVALID,NEW
 198K   12M smurfs     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate INVALID,NEW
  643  214K ACCEPT     udp  --  *      *       0.0.0.0/0            0.0.0.0/0            udp dpts:67:68
8964K  664M ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    2   120 ACCEPT     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:22
    5   165 ACCEPT     icmp --  *      *       0.0.0.0/0            0.0.0.0/0            icmptype 8
 115K 5866K ACCEPT     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:3128 ctorigdstport 80
    0     0 ACCEPT     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:22
82060 6122K Reject     all  --  *      *       0.0.0.0/0            0.0.0.0/0
   36  1440 LOG        all  --  *      *       0.0.0.0/0            0.0.0.0/0            LOG flags 0 level 6 prefix "Shorewall:green2fw:REJECT:"
   36  1440 reject     all  --  *      *       0.0.0.0/0            0.0.0.0/0           [goto]
```

You would use 

```
shorewall show loc2fw
```

on your system.

Looking in your logs may be useful too. Where to look depends on your choice of logger and its configuration.

Look in /var/log/...
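If your logger writes Shorewall's kernel log lines to a file there, something like this will pull out the rejects. This is only a sketch: the sample line, the chain name fw2loc, and the log prefix are illustrative, so check what your own LOG rules actually emit.

```shell
# Sample syslog-style line in the format Shorewall's LOG target emits.
# On a real system, grep /var/log/messages (or wherever kernel messages go).
cat > /tmp/sample.log <<'EOF'
Sep 23 18:05:01 gentoo kernel: Shorewall:fw2loc:REJECT:IN= OUT=eth1 SRC=192.168.1.1 DST=192.168.1.2 PROTO=TCP SPT=745 DPT=2049
EOF

# Print source, destination and destination port of each rejected packet.
grep 'Shorewall:.*:REJECT:' /tmp/sample.log |
  awk '{ s = ""; for (i = 1; i <= NF; i++) if ($i ~ /^(SRC|DST|DPT)=/) s = s (s ? " " : "") $i; print s }'
```

A reject with DPT=2049 or DPT=111 in such output would point straight at the firewall rules.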

----------

## krinn

You should also be aware that your NAS does not respect the NFS version requirements: this is not a valid NFSv4 exports file.

Depending on your NFS server and client, you can then only expect random issues.

I have seen these cases, though I suppose it is not limited to them: the server transforms the whole path and uses it as its NFS root (see below; the server exports it as /), or the server uses the first directory as the root and the rest as subdirectories (the server exports it as /S_shared).

There are also random issues depending on the nfs-utils version: a denied connection because you end up trying to mount server:/volume1/S_shared/volume1/S_shared, which has no rule in the exports file; a failure because /volume1/S_shared doesn't exist either (as it is /); a failure that keeps retrying NFSv4; or a failure with NFSv4 followed by a retry with NFSv3, then NFSv2 (not your case, or you would have had success)...

NFSv4 needs a root filesystem, and subdirectories are mounted within that root.

The NFSv4 root filesystem must be unique.

That NFS root gets fsid=0, and any directory attached to it has its path set relative to this root filesystem.

So if your server exports /volume1/S_shared and treats that directory as fsid=0 (the NFS root), then you will always fail when trying to mount it by that path, as it will be seen as server:/ (unlike NFSv3, which will see it as server:/volume1/S_shared).
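For comparison, a conventional NFSv4 exports layout marks the pseudo-root explicitly with fsid=0 and attaches the shares under it. This is only a sketch of what a hand-written /etc/exports might look like; whether this NAS's NFS server honours fsid= at all is an assumption:

```
# Hypothetical /etc/exports with an explicit NFSv4 pseudo-root
/volume1            192.168.1.0/24(ro,fsid=0,sync)
/volume1/S_shared   192.168.1.0/24(rw,sync,no_root_squash)
```

An NFSv4 client would then mount the share relative to the fsid=0 root, i.e. as 192.168.1.2:/S_shared rather than 192.168.1.2:/volume1/S_shared.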

Also, your server exports with anonuid=0 and anongid=0. The anonymous user and group depend on your Linux distro (or the user's choice) and cannot really be guessed, but 0 is always the root ID (and while I am not certain, I am confident this is the case on all distros).

So using anonuid/anongid=0 is totally stupid and insecure; I am not sure it is, but it should be defined as "a bad idea" in any NFS draft.

You can find out what your anonymous group ID is by looking in /etc/group. If you don't know, just assume 65535; on my Gentoo box I have nogroup set to 65533 and nobody set to 65534, which would be better candidates to match.
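As a concrete illustration of that lookup, run against a made-up file here; on a real system you would read /etc/group itself:

```shell
# Sample /etc/group-style data; substitute /etc/group on a real box.
cat > /tmp/group.sample <<'EOF'
root:x:0:
nogroup:x:65533:
nobody:x:65534:
EOF

# Print the name and numeric GID of the usual anonymous-account candidates.
awk -F: '$1 == "nobody" || $1 == "nogroup" { print $1, $3 }' /tmp/group.sample
```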

You might also tell nfs-utils on the client side not to use the bad NFSv4 the server announces, and force it to use the v3 the server also claims to support.

Considering how lame the NFSv4 support on that NAS is, you are better off using v3 instead (or updating your server software to get real NFSv4 support). And since NFSv3 also supports anonuid/anongid, you should alter those yourself so they stop matching ID 0; and take a hard look at your NAS, because that is a really insecure and weird server configuration.

```
mount -t nfs 192.168.1.2:/volume1/S_shared /home/S_shared -o vers=3,nfsvers=3
```

Of course, the above command might still fail if you have trouble with your firewall...
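If forcing v3 does work, the same options can be made permanent with an fstab entry. A sketch, using both option spellings as in the command above since the accepted names vary across nfs-utils versions:

```
# Hypothetical /etc/fstab entry forcing NFSv3 for this share
192.168.1.2:/volume1/S_shared  /home/S_shared  nfs  vers=3,nfsvers=3  0  0
```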

----------

