# Can't understand why the NFS service is writing so slowly

## LIsLinuxIsSogood

I am not sure why writes to my NFS share are so slow when I use it for an rsync file copy; I want to use it for a full system backup from one machine to another.  Usually I would use rsync over SSH, and that goes faster, but this time I was trying to avoid setting up SSH access for the root user, so I went with NFS and the no_root_squash option to get the backup started.  It is working, but really very slowly: after 2 hours only 3.5 GB has been copied.  Over SSH with plain rsync I'm used to the entire partition taking maybe <10 minutes.  So what is going on here?

Can there be some other secure way that doesn't involve SSH?

----------

## krinn

 *LIsLinuxIsSogood wrote:*   

> I am not sure why writes to my NFS share are so slow when I use it for an rsync file copy; I want to use it for a full system backup from one machine to another.  Usually I would use rsync over SSH, and that goes faster, but this time I was trying to avoid setting up SSH access for the root user, so I went with NFS and the no_root_squash option to get the backup started.  It is working, but really very slowly: after 2 hours only 3.5 GB has been copied.  Over SSH with plain rsync I'm used to the entire partition taking maybe <10 minutes.  So what is going on here?
> 
> Can there be some other secure way that doesn't involve SSH?

 

You probably have mistaken options or configuration, and the slowness is the result.

For example, no_root_squash is not a secure option at all: with root_squash, operations by root are performed as the nobody user; with no_root_squash, they are performed as root.
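
For illustration, here is a hypothetical /etc/exports fragment showing the two options side by side (the path and client network are made up):

```
# root_squash (the default): requests from the client's root user
# are remapped to nobody on the server
/Backups  192.168.1.0/24(rw,sync,root_squash)

# no_root_squash: the client's root writes as root on the server --
# convenient for backups, but any root on that client owns the export
/Backups  192.168.1.0/24(rw,sync,no_root_squash)
```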

----------

## bunder

Can you post your export/mount options for the share?

----------

## steve_v

 *LIsLinuxIsSogood wrote:*   

> Usually I would use rsync over ssh and that goes faster, but at the moment I was trying to also not set up ssh access for root user, and decided to go with NFS and use the no_root_squash option to get the backup started.

 

NFS is inherently insecure unless it's part of a full Kerberos setup. NFS trusts the client to check user credentials, hence the root_squash thing.

To avoid allowing root login over SSH and still have a secure backup system, the general solution is rsync over SSH, with PermitRootLogin set to forced-commands-only and the backup command in /root/.ssh/authorized_keys.
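
A sketch of that arrangement (the key, the rsync flags and the paths below are placeholders; the forced command must match exactly what the client-side rsync invocation sends to the server):

```
# /etc/ssh/sshd_config on the backup target:
PermitRootLogin forced-commands-only

# /root/.ssh/authorized_keys -- this key can only run the one command
command="rsync --server -logDtpre.iLsfxC . /mnt/backup",no-port-forwarding,no-pty ssh-ed25519 AAAA... backup@client
```

With that in place, a stolen key gets an attacker exactly one rsync target and no shell.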

 *LIsLinuxIsSogood wrote:*   

> Can there be some other secure way that doesn't involve SSH?

 If you want secure and are not part of a larger network, SSH or maybe SMB are the only options I can think of immediately.

----------

## LIsLinuxIsSogood

First, thank you to those responding; it is always nice to get the help of others when troubleshooting a home computer.  In response to the SSH-related posts, it was probably a mistake to make direct reference to any security needs when I am just running a local backup.  Of course I had to figure it out, so I turned back to SSH with PermitRootLogin enabled in order to complete the backup.  That is exactly what I was trying to avoid (my idea was that a malicious SSH attack would be far more likely to target the built-in root account).  So I still need to decide whether the PermitRootLogin setting should be part of the solution, but if it is, then the forced-commands option sounds like a great one.  It is unclear to me whether I would be able to refer to a shell script for that command, but I assume yes.

Going back to NFS and what could be causing its slowness: is it a misconfiguration, or some other problem, maybe use-case related?  Here are the mounts on both client and server for NFS.

 *Quote:*   

> You probably have mistaken options or configuration, and the slowness is the result.

 

krinn, I have no doubt this is the case, but just FYI, I intentionally set it up the way you describe, with the root user able to write to the backup destination with the correct file permissions and access control, consistent with the entire tree of source files.  If I had used root_squash, then it sounds like I could end up with several different ownerships, depending on whether files were originally owned by root or another user.  I think that might get kind of messy.  Do you know?
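
For reference, on the rsync side that ownership consistency is controlled by flags like these (a sketch; --numeric-ids is the one that matters when the two machines' UID/GID tables differ):

```
# -a preserves permissions, ownership and timestamps (implies -rlptgoD);
# -H keeps hard links, -A ACLs, -X extended attributes.
# --numeric-ids copies raw UID/GID numbers instead of mapping by name.
rsync -aHAX --numeric-ids /source/ /mnt/backup.remote/
```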

NFS client mount:

```
192.168.1.201:/Backups/operating-system.syncs/ on /mnt/backup.remote type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.1.201,mountvers=3,mountport=59956,mountproto=udp,local_lock=none,addr=192.168.1.201)
```

Relevant NFS server mounts (or what I deem relevant in this case):

```
/dev/sda4 on / type ext4 (rw,relatime,errors=remount-ro)
/dev/sdb2 on /mnt/restore type xfs (rw,noexec,relatime,attr2,inode64,noquota)
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
nfsd on /proc/fs/nfsd type nfsd (rw,nosuid,nodev,noexec,relatime)
```

----------

## mike155

NFS is slow if you copy many small files. For example, if you copy /usr/portage to an NFS mount, it will take minutes. It's much faster to tar /usr/portage, send the tar file to the remote machine using NFS, SSH or some other protocol, and untar it there. rsync is also much faster than a plain file-by-file copy.
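
The difference comes from per-file round trips; tar turns the whole tree into one sequential stream. A minimal local sketch of the idea (paths are throwaway temp dirs; in the real case the pipe would go through ssh, or the tarball would be written onto the NFS mount):

```
# In practice the pipe would cross the network, e.g. (hosts/paths made up):
#   tar -C / -cf - usr/portage | ssh backuphost 'tar -C /mnt/restore -xf -'
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/etc"
printf 'hello\n' > "$src/etc/motd"
# Pack the source tree into one tar stream and unpack at the destination
tar -C "$src" -cf - . | tar -C "$dst" -xf -
cat "$dst/etc/motd"   # prints "hello"
```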

----------

## LIsLinuxIsSogood

OK, so that makes sense.  Is it worth trying another archiving utility, maybe tar, to write to the NFS share?

When I try to use rsync with forced-commands-only in sshd_config, I get this message:

```
Use "rsync --daemon --help" to see the daemon-mode command-line options.
Please see the rsync(1) and rsyncd.conf(5) man pages for full documentation.
See http://rsync.samba.org/ for updates, bug reports, and answers
rsync error: syntax or usage error (code 1) at main.c(1644) [Receiver=3.1.3]
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(226) [sender=3.1.3]
HPNotebook ~/Bash_Scripts # 
```
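
That error pattern typically means the forced command on the server did not match the `rsync --server ...` invocation the client sent, so rsync printed its usage text and the protocol stream collapsed. One hedged sketch of a fix, assuming rsync's bundled rrsync restriction script is available (the install path is distro-dependent and a guess here):

```
# /root/.ssh/authorized_keys -- rrsync accepts whatever rsync --server
# command the client sends, but confines it to the given directory
command="/usr/share/rsync/scripts/rrsync /mnt/backup",no-pty,no-agent-forwarding ssh-ed25519 AAAA... backup@client
```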

----------

## krinn

You're using an NFSv3 mount; the problem is that you presumably have an NFSv4 server, no?

While an NFSv4 server can serve NFSv3 clients, that doesn't mean it's legitimate to configure the mount as NFSv3. NFSv4 introduced an nfsroot with a directory hierarchy attached to it for security, and the configuration needs to respect this, even if NFSv4 "may" work with a badly configured exports file.

An NFSv4 server should be configured as an NFSv4 server; it's the NFS client that tells it whether the share should be mounted as NFSv4 or NFSv3.

So the question is: did you read how to configure an NFSv4 server and set it up as it should be, or did you, like many users, mistakenly configure it as if it were an NFSv3 server?
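
To illustrate (paths and the client network are hypothetical), an NFSv4-style /etc/exports declares a pseudo-root with fsid=0 and hangs the real shares beneath it:

```
# /export is the NFSv4 pseudo-root (fsid=0); data directories are
# placed (often bind-mounted) underneath it
/export          192.168.1.0/24(rw,fsid=0,no_subtree_check)
/export/Backups  192.168.1.0/24(rw,no_subtree_check,no_root_squash)
```

An NFSv4 client then mounts relative to that pseudo-root, not the full server-side path.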

----------

## LIsLinuxIsSogood

Thanks krinn, I'm glad you pointed out this difference, but I am not exactly sure how to tell which one my system is using based on its configuration.

Can we start by checking whether my kernel is configured for it correctly?  To me it looks like both yes and no, because at least CONFIG_ROOT_NFS is not set, which may also be in question here.

```
jonathanr@******** ~ $ zcat /proc/config.gz | grep -i nfs
# CONFIG_USB_FUNCTIONFS is not set
CONFIG_KERNFS=y
CONFIG_NFS_FS=y
CONFIG_NFS_V2=y
CONFIG_NFS_V3=y
CONFIG_NFS_V3_ACL=y
CONFIG_NFS_V4=y
# CONFIG_NFS_SWAP is not set
CONFIG_NFS_V4_1=y
CONFIG_NFS_V4_2=y
CONFIG_PNFS_FILE_LAYOUT=y
CONFIG_PNFS_FLEXFILE_LAYOUT=m
CONFIG_NFS_V4_1_IMPLEMENTATION_ID_DOMAIN="kernel.org"
CONFIG_NFS_V4_1_MIGRATION=y
# CONFIG_ROOT_NFS is not set
CONFIG_NFS_FSCACHE=y
# CONFIG_NFS_USE_LEGACY_DNS is not set
CONFIG_NFS_USE_KERNEL_DNS=y
CONFIG_NFSD=y
CONFIG_NFSD_V2_ACL=y
CONFIG_NFSD_V3=y
CONFIG_NFSD_V3_ACL=y
CONFIG_NFSD_V4=y
CONFIG_NFSD_PNFS=y
CONFIG_NFSD_BLOCKLAYOUT=y
CONFIG_NFSD_SCSILAYOUT=y
CONFIG_NFSD_FLEXFILELAYOUT=y
CONFIG_NFSD_FAULT_INJECTION=y
CONFIG_NFS_ACL_SUPPORT=y
CONFIG_NFS_COMMON=y
```

----------

## LIsLinuxIsSogood

Also, just to double-check: is this what was referred to as the possible issue of using NFSv3?

```
mountvers=3
```

The entire line in the mount output:

```
192.168.1.201:/Backups/operating-system.syncs/ on /mnt/backup.remote type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.1.201,mountvers=3,mountport=59956,mountproto=udp,local_lock=none,addr=192.168.1.201)
```

----------

## LIsLinuxIsSogood

Got it...

thanks for the help

The problem I was experiencing was that I was not paying close enough attention to the different mount option in NFSv4, which no longer uses the export root folder (e.g. /exports) in the mount path but instead starts with the first nested folder underneath it.  Oh well, I've got it now.
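
For anyone hitting the same thing, the difference looks roughly like this (server address and directory names here are illustrative):

```
# NFSv3-style mount: the full server-side path is given
mount -t nfs  192.168.1.201:/export/Backups /mnt/backup.remote

# NFSv4 mount: the path is relative to the fsid=0 pseudo-root,
# so the /export prefix drops out
mount -t nfs4 192.168.1.201:/Backups /mnt/backup.remote
```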

Marking as solved as soon as I can do some more tests on the use of rsync over NFS and post back with results.

Thanks

----------

