# HOWTO: Encrypt partitions using LVM on software RAID

## squarebug


Introduction

There are a lot of good posts out there about how to set up LVM2, file system encryption and software RAID. I have used them to string it all together for my system. That is to say, most of the actual work has been done by others, and my thanks go to these people (forgive me if I don't quote). This turned out to be a rather lengthy post, and although I have taken great care to list everything I did, something may have slipped my mind, so please be kind.

What is described in the following is not necessarily for the faint of heart, and it gives only a rather brief description of LVM, encryption and software RAID. You probably want to read up on all of these topics. And you definitely want to back up your system if you don't start from scratch. You have been warned.

What this post will give you:

 A Gentoo system running completely on a bootable, mirrored software RAID (RAID level 1).

 One (or more) volume groups on the RAID partitions.

 A partly encrypted file system using dm-crypt.

 Encrypted swap space.

 Automatic boot of the encrypted partitions using a keyfile located on a USB stick.

What this post will not give you:

 A completely encrypted root file system. You can find information about that  here and here

 A root file system on LVM. It is possible but not worth the trouble for me.

 Any other RAID level than RAID1. As far as I know, RAID1 is the only (software) level that can be booted automatically.

 An initrd setup for RAID, LVM and crypt support.

What you will need:

This post is designed for kernel 2.6 (I used 2.6.5). You will need device mapper support compiled into the kernel.

You need to emerge sys-fs/cryptsetup, sys-fs/lvm2, and sys-fs/raidtools.

Before we start

Some clarification might be in order. I've come across several posts claiming that you cannot boot from a software RAID system and that your /boot partition has to be non-RAID. I don't know if this used to be true for older kernels; it certainly isn't true now. We need to keep two things in mind, however. When we partition our disks for the RAID system, we have to set the partition type to fd (Linux raid autodetect, see below). And we have to compile RAID support into the kernel. If we compile it as a module, we have to create an initrd with appropriate RAID support (not covered here).

Also, we can use the new dm-crypt mapper without binding it to a loop device.

And last but not least, it does not make sense to put your swap space on a mirrored partition.

The layout will be as follows:

One small RAID partition holding /boot.

One small RAID partition holding /, /etc, /bin, /sbin, /mnt, /proc, /lib.

A large RAID partition holding a volume group with several logical volumes for /usr, /opt, /tmp, /var and /home (and whatever else you need).

An encrypted /home logical volume.

Encrypted swap space on separate partitions.

There are several ways to install this system. You can boot from a LiveCD and start from scratch. Or you can use an existing install if you have enough empty space on your drives (obviously you'll need 2 drives for the mirrored RAID). I used the latter approach, my 2 ATA drives are /dev/hda and /dev/hdc with enough space available.

Compiling the kernel

Start by compiling and installing a kernel with LVM and RAID support. You need to compile with the following options:

```
Device Drivers  --->
     Multi-device support (RAID and LVM)  --->
           [*] Multiple devices driver support (RAID and LVM)
                <*>   RAID support
                < >     Linear (append) mode
                < >     RAID-0 (striping) mode
                <*>     RAID-1 (mirroring) mode
                < >     RAID-4/RAID-5 mode
                < >     RAID-6 mode (EXPERIMENTAL)
                < >     Multipath I/O support
                <*>   Device mapper support
                         <*>     Crypt target support

Cryptographic options  --->
 --- Cryptographic API
     [*]   HMAC support
          <*>   Null algorithms
          <*>   MD4 digest algorithm
          <*>   MD5 digest algorithm
          <*>   SHA1 digest algorithm
          <*>   SHA256 digest algorithm
          <*>   SHA384 and SHA512 digest algorithms
          <*>   DES and Triple DES EDE cipher algorithms
          <*>   Blowfish cipher algorithm
          <*>   Twofish cipher algorithm
          <*>   Serpent cipher algorithm
          <*>   AES cipher algorithms
          <*>   CAST5 (CAST-128) cipher algorithm
          <*>   CAST6 (CAST-256) cipher algorithm
          <*>   ARC4 cipher algorithm
          <*>   Deflate compression algorithm
          <*>   Michael MIC keyed digest algorithm
          < >   UCL nrv2e compression algorithm
          < >   Testing module
```

Do not compile the RAID support as modules unless you know how to create the appropriate initrd file (which is not part of this post). Also, if you would like to use a USB memory stick, you should compile in appropriate USB support. Install the kernel and boot it.

Preparing the disks

Use your favorite partitioning tool and create identical partitions on drive 1 and 2. The following is a sample layout and needs to be adjusted for your system. If you start from an existing install (like I did) your partition numbers most likely will be different:

```
/dev/hda1   100MB  (will be /boot)
/dev/hdc1   100MB  (will be /boot)
/dev/hda2   500MB  (will be /)
/dev/hdc2   500MB  (will be /)
/dev/hda3   swap   (size according to your memory)
/dev/hdc3   swap   (size according to your memory)
/dev/hda4   large  (will be LVM; 5 if you set up an extended partition)
/dev/hdc4   large  (will be LVM; 5 if you set up an extended partition)
```

I used the remaining space on my drives for /dev/hda(c)5.

In order to be able to boot from the RAID device you have to set the partition type to Linux raid autodetect (type fd). You can do this with fdisk:

```
fdisk /dev/hda

Command (m for help): t
Partition number (1-5): 1
Hex code (type L to list codes): fd
Command (m for help): w
```

Do this for all partitions that comprise the RAID; in our example, partitions hda(c)1, 2 and 5.

Creating the RAID devices

First, we need to create the RAID definitions. These go into the /etc/raidtab file.

```
raiddev /dev/md0                # this will be /boot
        raid-level      1
        nr-raid-disks   2
        chunk-size      32
        nr-spare-disks  0
        persistent-superblock   1
        device          /dev/hda1
        raid-disk       0
        device          /dev/hdc1
        raid-disk       1

raiddev /dev/md1                # this will be /
        raid-level      1
        nr-raid-disks   2
        chunk-size      32
        nr-spare-disks  0
        persistent-superblock   1
        device          /dev/hda2
        raid-disk       0
        device          /dev/hdc2
        raid-disk       1

raiddev /dev/md2                # this will be for the LVM group
        raid-level      1
        nr-raid-disks   2
        chunk-size      32
        nr-spare-disks  0
        persistent-superblock   1
        device          /dev/hda5
        raid-disk       0
        device          /dev/hdc5
        raid-disk       1
```

Now we need to activate the RAID devices.

```
mkraid /dev/md0 /dev/md1 /dev/md2
```

This creates the RAID signatures and brings the partitions online; you can watch the mirrors sync with cat /proc/mdstat. And finally we create file systems on the partitions (except for the LVM partition).

```
mkfs.ext3 /dev/md0
mkreiserfs /dev/md1
```

Next, we mount these partitions and copy our existing data over.

```
mkdir -p /mnt/tmp/newroot
mount /dev/md1 /mnt/tmp/newroot
mkdir /mnt/tmp/newroot/boot
mount /dev/md0 /mnt/tmp/newroot/boot
mkdir /mnt/tmp/newroot/etc /mnt/tmp/newroot/lib /mnt/tmp/newroot/bin /mnt/tmp/newroot/sbin
cp -a /boot/* /mnt/tmp/newroot/boot
cp -a /etc/* /mnt/tmp/newroot/etc
cp -a /lib/* /mnt/tmp/newroot/lib
cp -a /bin/* /mnt/tmp/newroot/bin
cp -a /sbin/* /mnt/tmp/newroot/sbin
mkdir /mnt/tmp/newroot/proc /mnt/tmp/newroot/mnt
```

Note that we mount /dev/md1 first and only then create the mount points and target directories on it; the new file system starts out empty.

Creating the LVM group

This is a three-step process. First, we need to tag the physical volumes for the volume group.

```
pvcreate /dev/md2
```

Next, we create a volume group (or several if you want) on the physical devices. I will call the group volume00.

```
vgcreate volume00 /dev/md2
```

And finally, we create our logical volumes /home, /usr, /var, /opt and /tmp. With the option -L you specify the size of the volume and with -n you name it. For example:

```
lvcreate -L2G -ntmp volume00
```

creates a 2 GB logical volume /dev/volume00/tmp. We do this for all the volumes listed above.
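Since the same lvcreate call is repeated for each volume, the whole set can be scripted. A dry-run sketch (the sizes here are made up; adjust them to your disk) that only echoes the commands for review:

```shell
# Print one lvcreate command per volume (sizes are examples only).
# Remove the echo to actually create the volumes.
plan_lvs() {
    for vol in usr:6G var:4G opt:2G tmp:2G home:20G; do
        name=${vol%%:*}
        size=${vol##*:}
        echo "lvcreate -L${size} -n${name} volume00"
    done
}
plan_lvs
```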

Next we need to create file systems on the logical volumes (I use ReiserFS, but others work equally well; you probably want to consult the LVM HOWTO before you decide on a file system), mount them and copy our existing data.

```
mkreiserfs /dev/volume00/tmp
mkreiserfs /dev/volume00/opt
mkreiserfs /dev/volume00/var
mkreiserfs /dev/volume00/usr
mkdir /mnt/tmp/newroot/tmp /mnt/tmp/newroot/opt /mnt/tmp/newroot/var /mnt/tmp/newroot/usr /mnt/tmp/newroot/home
chmod 1777 /mnt/tmp/newroot/tmp
mount /dev/volume00/tmp /mnt/tmp/newroot/tmp
mount /dev/volume00/opt /mnt/tmp/newroot/opt
mount /dev/volume00/var /mnt/tmp/newroot/var
mount /dev/volume00/usr /mnt/tmp/newroot/usr
cp -a /opt/* /mnt/tmp/newroot/opt
cp -a /var/* /mnt/tmp/newroot/var
cp -a /usr/* /mnt/tmp/newroot/usr
```

Creating the encrypted file system on /home

Now we are only missing the encrypted /home partition and our encrypted swap space. I wanted to be able to mount the encrypted partition automatically on boot, so I created a random keyfile which is stored on a USB memory stick. Without this keyfile, the partition will not be mounted. If you'd rather do without the stick, you can modify the setup so that you are prompted for a password when the partition is mounted.

First, we create a passphrase and write it into the key file (Thanks to linux_girl).

```
tr -cd '[:graph:]' < /dev/urandom | head -c128 > key
```

With the tr command above there is no header or footer to strip; only if you generate the key with uuencode instead do you need to edit it and remove the header (begin-base64 644 -) and footer (====).

If this is too paranoid for you, you can simply write your favorite password to the file.

```
echo "mypassword" > key
```
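Whichever way you generate it, the key ends up in a plain file, so it is worth creating it with root-only permissions from the start. A small sketch (the same tr pipeline, with the character class quoted so the shell cannot glob-expand it):

```shell
# Restrict permissions before any key data is written.
umask 077
tr -cd '[:graph:]' < /dev/urandom | head -c128 > key
```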

Let's write some random data to the partition before we actually encrypt it (this may take quite some time for larger partitions).

```
dd if=/dev/urandom of=/dev/volume00/home bs=1024
```

Next, we use our new key to encrypt our /home partition (at this point, at the latest, you need to emerge cryptsetup). I use the AES cipher, but you can choose whatever you like.

```
cryptsetup -c aes -d key create home /dev/volume00/home
```

The encrypted partition will be available from /dev/mapper/home (home being the name we specified after the create instruction).

Create a file system on the new partition.

```
mkreiserfs /dev/mapper/home
```

Let's mount the partition and copy our data over. Note that the partition is now /dev/mapper/home and not /dev/volume00/home.

```
mount /dev/mapper/home /mnt/tmp/newroot/home
cp -a /home/* /mnt/tmp/newroot/home
```

We copy the key file to the memory stick (in my case mounted at /mnt/usb).

```
cp key /mnt/usb/
```

A word of caution. If you lose this key, you cannot recover your data. You want to be very careful with it, and certainly have a backup somewhere.

Wrapping up

What is left to do is to set up some encrypted swap space and do some cleaning up. If we rebooted now, we would not be able to use our logical volumes or our encrypted /home partition. Both have to be activated upon every reboot. We put the necessary commands into a little script which will be called from /etc/conf.d/local.start.

In order to use the logical volume group we have to activate it first. The command to do this is

```
vgchange -a y
```

Only then are the partitions available. To boot smoothly, we have to issue this command before Gentoo attempts to mount the local file systems listed in /etc/fstab. There are two ways to do this. Either you modify your /etc/init.d/localmount script and add the vgchange command there, or you put it in /etc/conf.d/local.start and change the startup order so that local starts before localmount. I opted for the first way, mainly because I may need to include other commands in my local.start at a later point which rely on certain services being started already (that is, they may require local to start as late as possible).

OK, let's change /etc/init.d/localmount.

```
start() {
        # Mount local filesystems in /etc/fstab.
        ebegin "Mounting local filesystems"
        vgchange -a y
        mount -at nocoda,nonfs,noproc,noncpfs,nosmbfs,noshm >/dev/null
        eend $? "Some local filesystem failed to mount"
}
```

Next we create a little startup script which we will also use to set up the swap. I call it startrc.

```
#!/bin/sh

# Create the encrypted swap space
cryptsetup -d /dev/random create hda9 /dev/hda9
cryptsetup -d /dev/random create hdc9 /dev/hdc9
mkswap /dev/mapper/hda9
mkswap /dev/mapper/hdc9
swapon /dev/mapper/hda9
swapon /dev/mapper/hdc9

# Mount the key file
mount -t vfat /dev/sda1 /mnt/keyfile

# Activate the encrypted partitions
cryptsetup -c aes -d /mnt/keyfile/key create home /dev/volume00/home

# Mount the encrypted partitions
mount /dev/mapper/home /home

# Unmount the key file
umount /mnt/keyfile
```

The first set of commands creates two encrypted swap partitions on /dev/hda(c)9 (the partition numbers on my system; in the sample layout above these would be hda(c)3). They will be available as /dev/mapper/hda(c)9. We grab random data from /dev/random as the password, because we do not want to be able to recover these partitions after we are done using them. Next we mark the partitions as swap and activate them as usual.

Now, before we can mount our encrypted home partition, we need to make the key available. We mount our USB stick (in my case sda1, at mount point /mnt/keyfile), create the device mapper handle as before, and finally mount the encrypted partition at /home. The last step unmounts the memory stick; we can now remove it.
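One refinement worth considering (not part of the original script): if the stick is missing or unreadable, startrc fails silently and /home stays unmounted. A sketch of a guard that falls back to an interactive passphrase prompt; the echos make this a dry run:

```shell
# KEY is the path used in the HOWTO; remove the echos to run for real.
KEY=/mnt/keyfile/key
crypt_cmd() {
    if [ -r "$KEY" ]; then
        echo "cryptsetup -c aes -d $KEY create home /dev/volume00/home"
    else
        # No key file found: cryptsetup will prompt for a passphrase.
        echo "cryptsetup -c aes create home /dev/volume00/home"
    fi
}
crypt_cmd
```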

You can either include the above commands directly in /etc/conf.d/local.start, or put them in a little executable file (don't forget chmod +x) stored anywhere and call that file from local.start.

```
startrc 1>&2
```

We should also do a little housecleaning when we shut the system down. Again, I include the necessary commands in a script named stoprc which will be called from /etc/conf.d/local.stop. 

```
stoprc 1>&2
```

All we need to do is to destroy the device mapper handles for the home partition and the swap.

```
#!/bin/sh

# Unmount the encrypted partitions
umount /home

# Destroy the mapper node
cryptsetup remove home

# Destroy the encrypted swapspace
swapoff /dev/mapper/hda9
swapoff /dev/mapper/hdc9
cryptsetup remove hda9
cryptsetup remove hdc9
```

OK, we are almost done. The last step is to modify our file system table to include the logical volumes and the home directory. Note the noauto option for /dev/mapper/home, because /home will be mounted through our startrc script.

```
/dev/md0             /boot   ext3       defaults           1 1
/dev/md1             /       reiserfs   defaults,noatime   0 0
/dev/mapper/home     /home   reiserfs   noauto,noatime     0 0
/dev/volume00/opt    /opt    reiserfs   defaults,noatime   0 0
/dev/volume00/tmp    /tmp    reiserfs   defaults,noatime   0 0
/dev/volume00/usr    /usr    reiserfs   defaults,noatime   0 0
/dev/volume00/var    /var    reiserfs   defaults,noatime   0 0
```

Reinstalling the boot loader

If you followed the above, you have now moved your /boot partition to /dev/md0. For your system to boot, you need to reinstall the boot loader onto this partition. I prefer GRUB, but LILO works just as well. To do so, we have to chroot into our new environment. You should have your new root mounted at /mnt/tmp/newroot and all the partitions you created under this root. Let's mount the proc file system and chroot into the new environment.

```
mount -t proc none /mnt/tmp/newroot/proc
chroot /mnt/tmp/newroot /bin/sh
env-update
```

To install Grub to the MBR do

```
grub
```

and at the prompt

```
root (hd0,0)
setup (hd0)
```

If your boot partition is not the first one then you have to adjust the root (hd0,x) command.
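For completeness, the matching grub.conf entry simply points root= at the RAID device. A sample entry (the kernel file name below is an assumption; use whatever you installed):

```
default 0
timeout 10

title Gentoo Linux (RAID1)
root (hd0,0)
kernel /kernel-2.6.5 root=/dev/md1
```

It may also be worth repeating the setup on the second disk (map /dev/hdc to (hd0) with GRUB's device command, then run root and setup again), so the box can still boot if the first drive dies.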

And that should do it. Exit the chroot environment, unmount the partitions, hold your breath and reboot.

Last edited by squarebug on Fri Feb 04, 2005 4:50 pm; edited 1 time in total

----------

## linux_girl

Bump. Nice. I just added a new hard disk (/dev/hdb, 120GB) to my working Gentoo box, and all my users' home directories will be on /dev/hdb1 (120GB).

I want to avoid the situation where one user crashing the box makes all users lose their $HOME (reiserfsck sucks a lot).

And I don't want to split /dev/hdb into static partitions, since fixed sizes won't allow a user to use free space from another partition.

Can LVM be of use here?  :Razz: 

----------

## squarebug

Hello linux_girl,

you could define your new drive as a volume group and create as many logical volumes on it as you have users. If you use all of the drive's capacity, then you would need to shrink, say, user 1's volume before you can increase the size of user 2's.

Or you could leave part of the volume group unused; this space can then be allocated, in whole or in part, to a user's volume when they need more disk space.

Thirdly, if you are really running out of space, you could add a third disk (hdc) and add it to the same volume group. The volume group then appears as one big volume, even though it consists of more than one hard drive.

One thing I found slightly confusing in the beginning is that when you shrink a volume, you have to resize the file system first, before you resize the underlying logical volume. If you are using ReiserFS, you can do this with resize_reiserfs:

```
resize_reiserfs -<size> <partition name>
```

The same is true for ext2/3 (you would use resize2fs instead).
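This ordering rule is easy to get backwards, so here is a dry-run sketch that just prints the commands in the correct order for each direction (the volume name and sizes are hypothetical):

```shell
# Shrink: file system first, then the logical volume.
# Grow:   logical volume first, then the file system.
resize_plan() {
    lv=$1; size=$2; direction=$3
    if [ "$direction" = shrink ]; then
        echo "resize_reiserfs -s $size $lv"
        echo "lvreduce -L $size $lv"
    else
        echo "lvextend -L $size $lv"
        echo "resize_reiserfs $lv"
    fi
}
resize_plan /dev/volume00/home 10G shrink
resize_plan /dev/volume00/home 12G grow
```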

Personally, I prefer to re-create the file system, but that's just me. In this case you would copy the data from the volume that you want to resize to a temporary location, resize the logical volume, create a new file system on it, and copy the data back:

```
cp -a <logical volume> <temp location>
lvresize -L <new size> <logical volume>
mkreiserfs <logical volume>
cp -a <temp location> <logical volume>
```

Last edited by squarebug on Tue May 24, 2005 1:59 am; edited 2 times in total

----------

## linux_girl

LVM is quite confusing for a newbie  :Laughing: 

0)

```
pvcreate /dev/hdb1
vgcreate HOME01 /dev/hdb1
for i in `ls /home`; do
    lvcreate -L1G -n$i HOME01
    dd if=/dev/urandom of=/dev/HOME01/$i bs=1024 count=10000
done
```

How do I mkfs.reiser4 /dev/????? ?

uuencode is a poor way to make keys; update your code:

```
tr -cd [:graph:] </dev/urandom | head -c44
OB>;~r'gzdA?H'3<Noj634ni[!2{T2jc}Pk1We52;|\^
```

1)

I have 5 users. Each will start with 1GB, leaving 115GB free in the volume group HOME01.

Does the above mean that when a user needs more than 1GB, LVM automatically resizes their logical volume using the rest of the volume group, or do I need to do it by hand?

2) I need crypto support for HOME01 in case the hard disk is stolen. Users should be able to share files (chmod g+rwx ...). If user1 steals the hard disks (hda, hdb), his key shouldn't be usable to decrypt the drive.

3) user1 and user2 are logged in and writing to the disk when the system crashes. Since LVM != straight partitions, will user1's and user2's logical volumes be the only ones corrupted? I don't have RAID.

thx  :Rolling Eyes: 

----------

## squarebug

I'm not sure I understand you correctly. Once you've created the volume group and the logical volume, the device nodes will be /dev/<volume group>/<logical volume>. You can create a file system like so:

```
mkfs.reiser4 /dev/HOME01/$i
```

Thanks for the updated code for the random source, I'm not much of a security expert. 

 LVM doesn't dynamically resize the volume for you (that would be nice, though); you have to do it by hand.

 I'm assuming the users don't mount the volumes themselves. If you want to secure access to the logical volumes individually, you would need to create a different key for each volume. You have to ensure that the users can't get access to the keys, of course. That is, if the server is not physically secured and a user is able to steal the hard drive, make sure that at least the memory stick with the keys is not plugged in right next to it.   :Wink: 

Once the volumes are mounted, the encrypted file system is transparent to the users; they see it like any other file system. Access rights are governed by file system permissions, so file sharing is possible.

 During testing, I managed to botch one of my logical volumes. This didn't affect the others in the volume group; they still worked fine. Don't take this as a guarantee though, you should probably ask this question in the LVM developers forum.

In my understanding, the mapper devices for encryption and LVM work rather like a pipe to the file system, and the crash resistance of the system equals the crash resistance of the file system in use. So you should be relatively safe with a journaling file system. Again, this is my interpretation, and it would be great if anybody with a better understanding could shed some light on this issue.

If I find some time I will try to simulate a crash during I/O on a test system.
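For the per-volume keys mentioned above, a small sketch: one random 128-byte key per user, created with root-only permissions. The user names and the keys/ directory are hypothetical; on the real system the keys would live on the USB stick.

```shell
# Root-only from the start, so users can never read each other's keys.
umask 077
mkdir -p keys
# alice and bob stand in for your real user names.
for user in alice bob; do
    tr -cd '[:graph:]' < /dev/urandom | head -c128 > "keys/$user.key"
done
```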

----------

## daemonflower

Hi squarebug,

great HOWTO. I've got a few questions though, just to make sure I understood everything well.

1) Is it correct that, once I have set up the encrypted /home partition, I cannot change the key for this partition without destroying all the data on it?

2) Why would I want to encrypt the swap partitions without encrypting the system partitions? Isn't this paranoia in the wrong place, i.e. either you encrypt all of it or nothing? "A little encryption" seems to me as good as none at all.

2a) Are there any measurements or estimates as to how much encryption affects swap performance? While I'm at it, how about disk performance?

3) Can I be certain that my USB stick is reachable under the same device (/dev/sda1) on every reboot?

4) Can you (or anybody else) give me a link to a discussion of the various encryption methods available in the linux kernel, regarding security and speed?

This is an interesting field. I have seen a few HOWTOS on this topic and related ones, but little in-depth coverage or discussion of topics like the ones I mentioned. This would be a place as good as any to start some discussion. And if you come up with any good links I'd be grateful as well.

----------

## squarebug

Hello daemonflower,

glad that you found it useful. Regarding your questions

 Yes that's correct.

 That depends on what you want to achieve. The goal of the concept described above is to protect your data against tampering at the console/machine and against theft (whole machine or disk sets). Whether it will protect you against an analysis done in a forensic computer lab might be questionable. At the same time, I would like to keep the core system as accessible as possible in case of an OS crash. By not encrypting the root file system, I am able to boot from an external source and access the OS while my data is still protected. The idea was to add encryption everywhere data is stored, hence the encrypted swap. Reading your post brought to mind another such location, the /tmp directory. I am not sure whether it holds only temporary program files or actual data (fragments) as well. If in doubt, encrypt.

And if you feel the need for increased security, it is possible to encrypt the root filesystem. You need to create an initrd image which holds all the necessary programs and libraries (https://forums.gentoo.org/viewtopic.php?t=191052&highlight=)

I remember having read posts about performance hits due to encryption (can't remember where, a newsgroup search will probably help), and the general consensus seemed to be that it is negligible, at least for workstations. But my memory is bad  :Confused: , please correct me if I'm wrong. Of course this is a very general statement, so take it with a grain of salt. I know for sure that the algorithm used has an effect on performance, and so, of course, does your CPU (speed and type). And if you really mean disk performance (vs overall performance of the encrypted disk system), this shouldn't be affected, because the disk doesn't care whether it writes encrypted garbage or any other data. All I can tell you is that my encrypted disk system doesn't "feel" any different than before.

 You can ensure that the USB stick always gets the same device node by using udev (see http://www.gentoo.org/doc/en/udev-guide.xml)

 A nice archive of cryptographic research papers can be found at http://eprint.iacr.org. I'm sure if you're inclined to dig through their papers you'll find answers to these questions. Also, search the sci.crypt newsgroups (a little less hardcore than the above).

I believe AES won the NIST contest (www.nist.gov) for the next-generation cryptographic algorithm, intended to replace DES and 3DES. But Twofish seems to be considered even more secure, although slower than AES. I left it to the experts and am happily using AES    :Wink:  

----------

## daemonflower

Thanks for clearing up some issues for me.

Just quickly, because I'm short of time now: I'd like to add that, given your motivation, you'd certainly want to encrypt /var as well. It contains databases, mails, logs, and whatnot. If you're sensitive about your data, there's a lot in there you wouldn't want an intruder or thief to see.

I'll have a look at your links later.

----------

## squarebug

daemonflower,

thanks for pointing out the /var directory. I forgot to mention it, or rather forgot to mention that I redirect all data pools (like postgres etc.) which would go into /var by default to my home directory as well. If you don't do this, then encrypting the /var directory is a very good idea.

----------

## lunarg

To let you know: I roughly followed this HOWTO and got it to work on Debian.

I will need to alter a few things to get the keyfile onto a separate USB device rather than on disk  :Wink: 

But so far, it's working really nicely.

----------

## tfh

What would be the procedure to use LVM2 features like extending my home partition in this crypt setup?

Anyone have an idea? I saw that cryptsetup has a resize action. Is it that simple?

----------

## tfh

Hello all, I don't know if anyone is still following this thread. Here is a little contribution:

Since I don't like using local.start and local.stop, I wrote a (dirty) init script based on the scripts provided in this HOWTO to create and mount my encrypted partition on boot. Here is my script:

```
#!/sbin/runscript
# Copyright 1999-2005 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# $Header: $

depend() {
        after *
}

start() {
        einfo 'Mount the hash key'
        mount -t auto /dev/cdrom /mnt/keyfile
        eend $?
        einfo 'Activate the encrypted partitions'
        cryptsetup -c aes -d /mnt/keyfile/key create stuff /dev/vgb/stuff
        eend $?
        einfo 'Mount the encrypted partitions'
        mount /dev/mapper/stuff /mnt/stuff
        eend $?
        einfo 'Unmount the hash key'
        umount /mnt/keyfile
        eend $?
}

stop() {
        einfo 'Unmount the encrypted partitions'
        umount /mnt/stuff
        eend $?
        einfo 'Destroy the mapper node'
        cryptsetup remove stuff
        eend $?
}
```

You will need to tailor it to your setup, then add it to the default runlevel.

This script is still "dirty"; ideally it should be generic and parse some values from an /etc/conf.d/cryptostuff file. But I'm very bad at regexps, so I can't do it on my own.

I also don't know what should be included in depend(), but it works like that.
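No regexps are actually needed for the generic version: conf.d files are plain shell, so the init script can simply source the file and use the variables. A sketch (the file name and variable names here are made up):

```shell
# In real use this content would live in /etc/conf.d/cryptostuff and the
# init script would load it with: . /etc/conf.d/cryptostuff
cat > cryptostuff.conf <<'EOF'
CRYPT_NAME="stuff"
CRYPT_DEV="/dev/vgb/stuff"
CRYPT_CIPHER="aes"
KEY_MOUNT="/mnt/keyfile"
MOUNT_POINT="/mnt/stuff"
EOF

. ./cryptostuff.conf
# The init script would then build its commands from the variables:
echo "cryptsetup -c $CRYPT_CIPHER -d $KEY_MOUNT/key create $CRYPT_NAME $CRYPT_DEV"
```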

----------

## squarebug

I noticed that cryptsetup has a resize option, but so far I haven't had to resize my encrypted partition, and I haven't tested it.

One tried way of resizing is to back up the data of the encrypted partition (always a good start), unmount the partition, resize the logical volume (for example, add 2 GB) and then recreate the partition as described above:

```
umount /home
lvextend -L +2G /dev/volume00/home
dd if=/dev/urandom of=/dev/volume00/home bs=1024
cryptsetup -c aes -d key create home /dev/volume00/home
mkreiserfs /dev/mapper/home
```

----------

## tfh

Resizing an encrypted partition is possible without any dd usage or shuffling of data. Here is how I did it; it's pretty self-explanatory. Basically, see the whole setup as a stack:

TOP: ReiserFS

MID1: dm-crypt

MID2: LVM

BOTTOM: physical disk

So in order to extend the ReiserFS partition, one must go down to the LVM2 level (umount, then cryptsetup remove), extend the logical volume, stack crypt on it again (cryptsetup create), extend the crypt mapping (cryptsetup resize), then extend the ReiserFS:

```
umount /mnt/stuff
cryptsetup remove stuff
lvextend -L +2G /dev/vgb/stuff
cryptsetup -c aes -d /mnt/keyfile/key create stuff /dev/vgb/stuff
cryptsetup resize stuff
resize_reiserfs /dev/mapper/stuff
mount /mnt/stuff
```

As always, I'm afraid I wasn't clear enough; let's hope someone speaking proper English understood it.

----------

## lunarg

Does this resizing also work when decreasing sizes?

I always assumed ReiserFS would go crazy when not first resizing the file system.

Grts

----------

## squarebug

I would first decrease the size of the ReiserFS, and then shrink the partition. I once tried it without resizing the file system first (although on an unencrypted partition), and it wrecked the file system.

----------

## lunarg

 *squarebug wrote:*   

> I would first decrease the size of the ReiserFS, and then shrink the partition. I once tried it without resizing the file system first (although on an unencrypted partition), and it wrecked the file system.

 

That's why I asked.

I never experienced it first hand, but I assumed this would be the case.

----------

## Uwe

What about performance? Should it even be considered on a PIII 1000 MHz that has 4 HDDs (2x RAID 1)?

----------

## daemonflower

I can't give you hard facts, just:

 A gut feeling that while a performance hit is noticeable when doing heavy I/O, e.g. copying huge files, in day-to-day work I can't see the difference.

 This number: my desktop box has been up for 24 hours, emerging -e world, and kcryptd has a total CPU time of 2 minutes. I have an AMD64 3000+.

Maybe, if you have two identical disks (or RAIDs), you'd like to make a few benchmarks and post them here?

----------

## Uwe

The disks are properly set up; I'm mainly thinking about encrypting the second RAID set as one 160 GB partition. I'm just a little afraid that the poor PIII (chosen for energy saving reasons; a P4 2.0 consumes 3x more energy, and a Pentium M would be too expensive) could be a bit overloaded with that task. Let's say I want to get at least 100 MBit/s performance, as the machine is attached to 100 MBit/s LAN at the moment. The problem is that it already has several other jobs (mldonkey, vdr, file/print/domain server etc.)...

What about "crash safety"? What do I do if, let's say, the power cord is unplugged or the power generally fails (no UPS present)? What if one HDD crashes (I once got file system errors on the second HDD because the software RAID also mirrored the bad block data)?

And besides, what is so bad about swap files? I use a swap file for the reason that my system won't crash when one HDD fails (I heard it could do so when the swap partition is on the failed drive). What about simply using no swap at all (512 MB RAM, ~300 used)?

----------

## lunarg

 *Uwe wrote:*   

> The disks are properly set up; I'm mainly thinking about encrypting the second RAID set as one 160 GB partition. I'm just a little afraid that the poor PIII (chosen for energy saving reasons; a P4 2.0 consumes 3x more energy, and a Pentium M would be too expensive) could be a bit overloaded with that task. Let's say I want to get at least 100 MBit/s performance, as the machine is attached to 100 MBit/s LAN at the moment. The problem is that it already has several other jobs (mldonkey, vdr, file/print/domain server etc.)...

 

I did some tests on a PIII 500 MHz and didn't really notice a performance hit. I do want to point out that I haven't exactly benchmarked them, but I believe performance is not affected in a notable way if it's a system for everyday use.

 *Uwe wrote:*   

> What about "crash safety"? What do I do if, let's say, the power cord is unplugged or the power generally fails (no UPS present)? What if one HDD crashes (I once got file system errors on the second HDD because the software RAID also mirrored the bad block data)?

 

Did a bit of testing (although not extensively) on this, and did not get corruption on the disk. The ReiserFS I used did complain about being unclean and replayed the journal. Other than that, so far, I have not had any problems with it. I could imagine that in some cases problems may occur, though.

About the HD crashing: well, it doesn't really matter as the RAID1 is at the "bottom layer". If an HD goes, the mirror and encryption will continue on one disk. Rebuilding works the same way as it would on a regular (unencrypted, no LVM) RAID1.
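With the raidtools package this HOWTO uses, putting a replacement disk back into the mirror would look roughly like this (device names are examples, not from the original post):

```shell
# Partition the new disk exactly like the surviving one (type fd),
# then hot-add it; the kernel resyncs the mirror in the background.
raidhotadd /dev/md0 /dev/hdc1
cat /proc/mdstat        # watch the reconstruction progress
```

The crypt and LVM layers on top never notice any of this, since they sit on /dev/md0 rather than on the physical disks.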

 *Uwe wrote:*   

> And besides, what is so bad about swap files? I use a swap file for the reason that my system won't crash when one HDD fails (I heard it could do so when the swap partition is on the failed drive). What about simply using no swap at all (512 MB RAM, ~300 used)?

 

It's a long discussion, swap files vs. swap partitions. In theory, swap partitions are faster than files - so I've been told. A swap file/partition isn't strictly required: the kernel makes do with whatever memory (be it RAM or swap) you throw at it. However, it's recommended to have at least a little swap available for flushing "dirty pages" (= data which has not been committed to disk yet). The less swap, the more of this data the kernel needs to keep in RAM, leaving less memory for other applications and disk caching.

Of course, you could still continue to use a swap file and keep that file on an encrypted partition, or use cryptoloop to encrypt the file and mount it as a loopback device, or... Well, the possibilities are endless here.
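One of the simpler variants is an encrypted swap partition keyed from /dev/urandom, as in the encrypted-swap part of the HOWTO. A sketch, with /dev/md2 as an example device:

```shell
# The key comes from /dev/urandom, so swap contents are unrecoverable
# after every reboot and there is no keyfile to protect.
cryptsetup -c aes -d /dev/urandom create swap /dev/md2
mkswap /dev/mapper/swap
swapon /dev/mapper/swap
```

Since the key is random each boot, mkswap has to be run on every boot as well (typically from an init script), before swapon.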

----------

## daemonflower

Here are some timings I made with bonnie:

Without encryption, ATA-133 disk:

```
    -------Sequential Output-------- ---Sequential Input-- --Random--

    -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---

 MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU

100 54475 98.1 315383 95.2 643167 99.9 64325 99.6 1743542 97.0 90733.8 97.5
```

With encryption, SATA-150 disk:

```
    -------Sequential Output-------- ---Sequential Input-- --Random--

    -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---

 MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU

100 60517 99.5 676014 98.4 710785 98.6 64122 99.5 1662823 97.4 92964.9 99.9
```

The numbers show that a modern disk with encryption outperforms an old disk without encryption in almost all benchmarks. There is certainly no problem supplying 100 MBit/s. Even if your CPU is much slower than mine, it shouldn't be the bottleneck.

If you have many other services running, they could slow disk performance down, of course. But that's the same with or without encryption. I think the main bottleneck will be the disks (again).

I can't tell you for sure whether the encryption affects crash safety. As I understand it, if you use a journaling file system, the encryption layer doesn't pose any additional threat. Write operations can be viewed as atomic: they either succeed or fail completely. Maybe encryption raises the probability that the last write operation before a power-down fails, but it isn't dangerous to the file system.

I think if you had bad blocks mirrored, you may have made a mistake setting up the RAID or during reconstruction after a crash. Again, this is not affected by the additional encryption layer.

 *Quote:*   

> And besides, what is so bad about swap files? I use a swap file for the reason that my system won't crash when one HDD fails (I heard it could do so when the swap partition is on the failed drive).

 I think that is a nice idea, because AFAIK Linux does not support swapping on RAIDs and yes, if swap on a dead partition is in use, it will bring the system down. I don't know what's wrong with swap files, except that swap partitions are better optimized for swap use and thus more efficient.

----------

## lunarg

 *daemonflower wrote:*   

> I think that is a nice idea, because AFAIK Linux does not support swapping on RAIDs and yes, if swap on a dead partition is in use, it will bring the system down. I don't know what's wrong with swap files, except that swap partitions are better optimized for swap use and thus more efficient.

 

Using a RAID1 partition for swap works just as it normally would, provided your RAID devices are started before swap is enabled (which they usually are when your partitions have an md superblock). Create the /dev/mdX device, run mkswap on it, and modify your fstab to point to the md device.
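In other words, something like this, with /dev/md2 as an example device:

```shell
mkswap /dev/md2
swapon /dev/md2
# and in /etc/fstab:
# /dev/md2   none   swap   sw   0 0
```

With the mirror underneath, a single failed disk no longer takes the active swap down with it.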

----------

## Uwe

Okay, it seems I've got a lot of work to do this weekend  :Wink: 

----------

