# Cryptoloop on a NFS mount

## pa4wdh

Hi All,

I have a quite odd setup here which gives problems. It seems to be NFS-related, so I thought this would be the right place to post it  :Smile: 

My Setup:

```

Client ------- Router ------------------ Server + Storage

        LAN            Internet+OpenVPN

```

The server has some storage which I would like to use for backups. However, because this is a Xen-based virtual server, hardware resources are shared with 100+ other systems, so I don't want the server to handle unencrypted data. I also can't encrypt everything on the client first and ftp/scp/whatever it over, because the storage on the client is not large enough for that.

This is what I've done for now:

The server exports the storage via NFS, which gets mounted on the client. On this NFS mount there's a large file which I mount via an aes256 cryptoloop (this is what I think is the odd part of the story  :Smile:  ). Because NFS by itself is not secure enough to run over the public internet, it runs through an OpenVPN tunnel.
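For reference, the client side looks roughly like this. All names and sizes are examples, and the exact `losetup -e` syntax depends on your util-linux version and kernel cryptoloop support:

```

# On the client, over the OpenVPN tunnel:
mount -t nfs server:/storage /mnt/backup

# Create a large container file on the NFS mount and attach it as an
# encrypted loop device (requires the cryptoloop and aes kernel modules):
dd if=/dev/zero of=/mnt/backup/container.img bs=1M count=1900
losetup -e aes256 /dev/loop0 /mnt/backup/container.img
mkfs.ext2 /dev/loop0
mount /dev/loop0 /mnt/crypt

```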

This worked nicely until I started using files larger than 2 GB for the cryptoloop. NFSv2 failed first because it can't handle files that big. NFSv3 failed too, not because it couldn't handle the file size, but because it simply hangs the client, even with a simple operation like mkfs.ext2 on the loop device. NFSv4 doesn't have those problems, but it has other trouble:

During mkfs.ext2 I get an error, though not always the same one.

The first is:

 *Quote:*   

> 
> 
> Writing inode tables: done                            
> 
> Writing superblocks and filesystem accounting information: 
> ...

 

With files < 2 GB I also get this sometimes, but the filesystem works. However, with files > 2 GB, fsck confirms the failure:

 *Quote:*   

> 
> 
> e2fsck 1.41.9 (22-Aug-2009)
> 
> fsck.ext2: Attempt to read block from filesystem resulted in short read while trying to open /dev/loop0
> ...

 

If I increase the file size even further (e.g. to 2.5 GB) it gets worse; then mkfs.ext2 says:

 *Quote:*   

> 
> 
> Writing inode tables: done                            
> 
> ext2fs_mkdir: Attempt to read block from filesystem resulted in short read while creating root dir
> ...

 

I've tried NFS v2, v3 and v4 with different rsize/wsize values, TCP and UDP transports, and the intr, nolock and sync mount options.
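For completeness, those attempts correspond to mount invocations along these lines (server name and block sizes are examples):

```

# NFSv3 over TCP with explicit block sizes and conservative options:
mount -t nfs -o vers=3,tcp,rsize=8192,wsize=8192,intr,nolock,sync \
    server:/storage /mnt/backup

# NFSv4 equivalent (locking is built into the v4 protocol):
mount -t nfs4 -o rsize=8192,wsize=8192,intr,sync server:/storage /mnt/backup

```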

Any suggestions to solve this? (Other setups and/or protocols are welcome too  :Smile:  )

Best regards.

pa4wdh

----------

## HeissFuss

I'm surprised that this setup even works as well as it does.  You should be getting killed by the latency of treating a remote (as in internet remote) filesystem mounted as local.

If I have this straight, you don't trust the server you're backing up to at all, to the point of not even having the unencrypted data in its memory.  I'll pretend I'm not wondering why you're backing up your sensitive data to a server you don't trust.

NFS isn't the most robust protocol.  iSCSI might survive the trip over the internet a bit better.  You could try making a large empty file on the server, export it as an iSCSI LUN, then mount it on your client and encrypt a filesystem on top of it.
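A rough sketch of that iSCSI approach, using the iSCSI Enterprise Target (ietd) on the server and open-iscsi on the client; target name, paths and the resulting device name are all illustrative:

```

# Server: create a backing file and export it as a LUN via /etc/ietd.conf:
#   Target iqn.2010-01.net.example:backup
#       Lun 0 Path=/storage/backup.lun,Type=fileio
dd if=/dev/zero of=/storage/backup.lun bs=1M count=4096

# Client: discover the target, log in, then put the encryption layer on top
iscsiadm -m discovery -t sendtargets -p server
iscsiadm -m node -T iqn.2010-01.net.example:backup -p server --login
cryptsetup luksFormat /dev/sdb        # the new iSCSI disk; name may differ
cryptsetup luksOpen /dev/sdb backup
mkfs.ext2 /dev/mapper/backup

```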

A much better approach, though, would be to encrypt your files with the output going to the NFS share, or to an ssh pipe that cats to a file on the server.

I suggest you read up on gpg or openssl.
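The ssh-pipe variant could look something like this (host, key file and paths are hypothetical); the data is compressed and encrypted on the client, so the server only ever sees ciphertext and no local temporary file is needed:

```

# Backup: stream a compressed, encrypted archive straight to the server
tar cz /data/to/backup \
  | openssl enc -aes-256-cbc -salt -pass file:/etc/backup.key \
  | ssh user@server 'cat > /storage/backup.tar.gz.enc'

# Restore: pull it back and reverse the pipeline
ssh user@server 'cat /storage/backup.tar.gz.enc' \
  | openssl enc -d -aes-256-cbc -pass file:/etc/backup.key \
  | tar xz -C /restore

```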

----------

## pa4wdh

Hi,

Thanks for your comments HeissFuss.

In terms of trust it's indeed a bit strange. For me it's an additional off-site backup, which really does have its value; however, since everything on the VPS is shared, you probably can't trust it with the plaintext data. And even if you allow it to do the encryption, there's the problem of the key ...

I've seen iSCSI, but didn't try it yet, thanks for the tip.

What I did try:

- Use dm-crypt instead of a cryptoloop: same results.

- Use dm-crypt or cryptoloop as an output for tar instead of making a filesystem on top of it. My idea was that tar might be better suited to handling slow devices. Writing worked, but reading gave problems.

- eCryptfs seems to be the most sane thing to do; sadly, it doesn't work well over NFS yet.

- For now I've made a script called "cryptar" which basically uses tar's --use-compress-program option to pipe data through openssl and gzip. It seems to play nice with NFS; I'm running some basic tests now and it works. The only thing exposed is the actual filename I use for tar's output, which I don't see as a big issue.

Best regards,

pa4wdh

----------

## HeissFuss

 *pa4wdh wrote:*   

> 
> 
> - For now I've made a script called "cryptar" which basically uses tar's --use-compress-program option to pipe data through openssl and gzip. It seems to play nice with NFS; I'm running some basic tests now and it works. The only thing exposed is the actual filename I use for tar's output, which I don't see as a big issue.
> 
> 

 

This seems like the only kind of thing that will work reliably.

You could also do this on an individual file level.

```

gzip -c <myfile> | openssl enc -aes-256-cbc -salt -pass file:/path/to/passfile > /my/nfs/location/<filename> 

```

Decrypt would be:

```

openssl enc -d -aes-256-cbc -pass file:/path/to/passfile -in /my/nfs/location/<filename> | gunzip -c > <decrypted file name>

```

----------

## papahuhn

You could try LVM on a bunch of those 2GB files.
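Sketched with loop devices and the LVM tools (paths and names illustrative; requires root), the idea is to keep every backing file below the problematic 2 GB mark and let LVM present them as one device:

```

# Attach several <2 GB files on the NFS mount as loop devices
for i in 0 1 2 3; do
    dd if=/dev/zero of=/mnt/backup/pv$i.img bs=1M count=1900
    losetup /dev/loop$i /mnt/backup/pv$i.img
done

# Combine them into one volume group, carve out a logical volume,
# then put the encryption layer on top as before
pvcreate /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3
vgcreate backupvg /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3
lvcreate -l 100%FREE -n backuplv backupvg
cryptsetup luksFormat /dev/backupvg/backuplv

```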

----------

## pa4wdh

Hi,

Thanks for the replies.

@HeissFuss:

As always, the simplest solutions seem to work best. I'm so happy with this solution that I'm thinking of leaving it like this.

For your information:

The "cryptar" script just does:

```

tar --use-compress-program /usr/local/bin/do_crypt.sh "$@"

```

It could actually have been just a bash alias, but a "real" command works better, since the script is then a drop-in replacement for normal tar.

The "do_crypt.sh" script isn't too complex either:

```

#!/bin/sh

KEY=/etc/cryptar.key

if [ "$1" = "-d" ]
then
    openssl enc -d -aes256 -kfile "$KEY" | gzip -d
else
    gzip | openssl enc -aes256 -kfile "$KEY"
fi

```
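Since the wrapper passes its arguments straight through to tar, usage mirrors plain tar (paths are examples); on extraction, tar invokes the compress program with -d, which is why the script checks for that flag:

```

# Back up to an encrypted archive on the NFS mount
cryptar -cf /mnt/backup/home.tar.crypt /home

# Restore from it
cryptar -xf /mnt/backup/home.tar.crypt -C /restore

```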

I've tested it for a few days and written a complete backup script around it, and it works like a charm.

@papahuhn:

 *Quote:*   

> 
> 
> You could try LVM on a bunch of those 2GB files
> 
> 

 

It could indeed be a nice experiment, but I'm done with that kind of solution for my backups. Using a remote file as a block device seems to give all kinds of strange side effects, which is something you don't want with a backup.

Anyway, thanks for your suggestion.

Best regards,

pa4wdh

----------

