# Some Raid5 LVM and encryption questions

## kevintshaver

I am going to set up a RAID 5 array on an existing Gentoo installation. I currently have a 1 TB disk full of data. The OS is on a separate hard drive. My plan is to set up a 3 x 1 TB RAID 5 array, copy over the 1 TB of existing data, and then add the original 1 TB disk to the new array and grow it. I don't really want separate partitions on the RAID array. I also want the array encrypted with TrueCrypt (if I can resize with it) or dm-crypt/LUKS. So the questions:

I plan on using ext3. Is there any really good reason to use a different filesystem with this setup? It's mainly just being used as a NAS.

Is there any reason at all to use LVM here? I originally thought no, since I only want one partition, but will it help with resizing the partition after the array is expanded? Does LVM have any effect on expanding an encrypted partition?

Does anyone know if I'll run into problems having a partition of 3TB+?

Any guesses on how much processor I'll need to handle the encryption and RAID? Right now it's a P4 3.2GHz, but I'd really like to use a slower, less power-hungry processor if possible.
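For reference, the grow step I have in mind would look roughly like this with dm-crypt/LUKS (a sketch only; I haven't run it, and the device names `/dev/md0`, `/dev/sde1` and the mapping name `data` are just placeholders):

```shell
# Sketch: grow an existing md RAID5 by one disk, then resize the
# LUKS mapping and the ext3 filesystem that sits on top of it.
# All device names here are placeholders.
mdadm --add /dev/md0 /dev/sde1           # add the old 1TB disk as a new member
mdadm --grow /dev/md0 --raid-devices=4   # reshape from 3 to 4 devices
# wait for the reshape to finish: watch cat /proc/mdstat
cryptsetup luksOpen /dev/md0 data        # open the LUKS container if not open
cryptsetup resize data                   # extend the mapping to the new md size
resize2fs /dev/mapper/data               # grow ext3 online to fill the device
```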

----------

## nurachi

 *kevintshaver wrote:*   

> I plan on using ext3. Is there any really good reason to use a different filesystem with this setup? It's mainly just being used as a NAS.

 

If you're brave enough, you might want to try ext4, which looks a little more performant.

 *kevintshaver wrote:*   

> Is there any reason at all to use LVM here?

 

Snapshots, to back up your data smoothly.

 *kevintshaver wrote:*   

> I originally thought no, since I only want one partition, but will it help with resizing the partition after the array is expanded? Does LVM have any effect on expanding an encrypted partition?

 

I haven't tried it, but have a look at this: http://ubuntuforums.org/showthread.php?t=726724

 *kevintshaver wrote:*   

> Does anyone know if I'll run into problems having a partition of 3TB+?

 

That depends on the filesystem you choose; see http://en.wikipedia.org/wiki/Comparison_of_file_systems and check your block size.

 *kevintshaver wrote:*   

> Any guesses on how much processor I'll need to handle the encryption and RAID? Right now it's a P4 3.2GHz, but I'd really like to use a slower, less power-hungry processor if possible.

 Atom?

----------

## drescherjm

 *Quote:*   

> Right now it's a P4 3.2GHz, but I'd really like to use a slower, less power-hungry processor if possible.

 

Just about any current dual core processor will be both significantly faster and use less power than your 3.2GHz P4.

----------

## drescherjm

 *Quote:*   

> Does anyone know if I'll run into problems having a partition of 3TB+? 

 

I have a few 4TB+ Linux software RAID setups at work using 6 to 8 Seagate 7200.11 750 GB drives, and they work great. I am using XFS on them, though, and no encryption. With 6 drives and an Athlon X2 5200+ I get 200+ MB/s writes and 300 MB/s reads at around 5% CPU usage.

----------

## unic.ori

Hi,

I have an AMD X2 3800+ and use it with a 3 TB RAID 5 device encrypted with TrueCrypt. I used ext4 for the filesystem, because I was not sure whether a TrueCrypt-encrypted partition can be grown with mdadm/TrueCrypt, so I put a TrueCrypt container on the RAID partition instead. Note that if you need a container larger than 2 TB, you can't use ext3 as the host filesystem (its maximum file size is 2 TB).

Even so, the AMD dual-core is too slow to get a write speed faster than 20 MB/s on the TrueCrypt container.

look here: https://forums.gentoo.org/viewtopic-t-734504.html

If someone knows how I can resize a TrueCrypt-encrypted RAID 5 partition without risk, let me know  :Smile: 

----------

## mbar

I use an AMD X2 3800+ on a 2 TB RAID 0 array with LUKS/dm-crypt and I think it is sufficient. I'm getting ~40 MB/s write speed over the Ethernet network on client (Samba) computers. But RAID 5 is much more CPU hungry...

----------

## unic.ori

@mbar: which LAN card do you have?

I only get 25 MB/s copying over Samba, even to an unencrypted, non-RAID partition; with hdparm I get over 90 MB/s.

40 MB/s would be a dream  :Smile: 

Maybe you can post your Samba config/filesystem config, or you have some kernel tips for me  :Smile: 

Do you use 32-bit or 64-bit?

Thanks for the help...

----------

## drescherjm

 *Quote:*   

> But RAID 5 is much more CPU hungry

 

At work I have been using software RAID 5 and RAID 6 for many years across many TB (I have over 50 disks spinning in RAID 6 at the moment), and it never uses more than 10% CPU (unless rebuilding on 6-year-old systems), even on processors slower than your 3800. I have RAID systems using a single-core Athlon64 3200 that read at 266 MB/s and write at 145 MB/s. The slowdown must be from the encryption, or possibly from a stripe cache that is too small.

https://forums.gentoo.org/viewtopic-t-673067-highlight-stripe.html

https://forums.gentoo.org/viewtopic-t-709075-highlight-.html

Or your chunk size may be too big. Do not set it larger than 64k, as that will hurt write performance.
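As a rough illustration of the stripe-cache tuning mentioned above (the numbers and the `md0` device name are assumptions, not values from this thread), the cache is sized in pages per member disk, so its RAM cost is easy to estimate:

```shell
# Hypothetical numbers: estimate the RAM used by md's stripe cache.
# stripe_cache_size is counted in 4 KiB pages per member device.
stripe_cache_size=4096   # tuned value; the kernel default is 256
ndisks=4                 # e.g. a 4-disk RAID5 after growing
page_kib=4
mem_mib=$(( stripe_cache_size * ndisks * page_kib / 1024 ))
echo "stripe cache would use ${mem_mib} MiB of RAM"
# To apply the value (as root; md0 is an assumed device name):
#   echo 4096 > /sys/block/md0/md/stripe_cache_size
```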

----------

## mbar

 *unic.ori wrote:*   

> @mbar: which lancard du u have ?

 

I installed an Intel PRO/1000 (or something like that) in a regular PCI slot:

```
01:08.0 Ethernet controller: Intel Corporation 82541PI Gigabit Ethernet Controller (rev 05)
```

A separate PCI Express SATA controller also helped:

```
03:00.0 RAID bus controller: Silicon Image, Inc. SiI 3132 Serial ATA Raid II Controller (rev 01)
```

This setup relieves the internal Nvidia southbridge buses; I read about it a couple of years ago, but I can't find the details now.

In short, "desktop" mainboards used in a server role quickly saturate their internal buses (PCI-SATA-PCIe), and this is the main cause of network write slowdowns. The worst case is using the internal Gigabit NIC with internal RAID/SATA.

Edit: I also use tweaked XFS and 8 MB readahead.

EDIT 2: I checked again and my config uses the internal Gigabit (nforce) NIC and a PCI Express SATA card, so it uses two buses instead of saturating one (which happened when I was using the internal nforce SATA controller).

----------

## unic.ori

Thanks. I use a dual-port Intel gigabit controller, but it's plain PCI, together with the onboard SATA2 controller. I have tested with a PCI SATA controller too. Next I want to test a PCI-E gigabit controller or a SATA controller...

If I test the write speed with dd, I get 100% CPU load.

----------

## drescherjm

Using PCI for either of these will kill performance, because the bus is limited to around 100 MB/s of total bandwidth. The delays waiting for PCI interrupts also waste CPU cycles.

----------

## kevintshaver

 *unic.ori wrote:*   

> 
> 
> If someone know that i can rezize raid5 truecrypt encrypted partitions without risk let me know 

 

I don't think you can. Here's a quote from the TrueCrypt FAQ:

 *Quote:*   

> Q: Can I resize a TrueCrypt partition?
> 
> A: Unfortunately, TrueCrypt does not support this. Resizing a TrueCrypt partition using a program such as PartitionMagic will, in most cases, corrupt its contents.

 

However, couldn't you work around it by removing the encryption, adding your RAID disk, resizing, and then re-encrypting?

http://www.truecrypt.org/docs/?s=removing-encryption

----------

## kevintshaver

Thanks for your replies.

In my original post, I was going to use ext3, maybe LVM, RAID 5, and TrueCrypt. I'd really like to go with OpenSolaris, mainly to use ZFS with its built-in RAID-Z and easy portability (for when I "upgrade" my processor). However, this won't work, because you can't add a disk to a RAID-Z vdev, and the ZFS encryption project is still incomplete. Those are pretty much deal breakers.

Now I'll try XFS (mainly for the larger maximum volume size), RAID 5, dm-crypt+LUKS, and no LVM. I'll be using the P4 3.2GHz for now, but I'll eventually switch to either a dual-core Atom or, ideally, the VIA 7800 if it can handle the tasks.
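For reference, the initial build of that stack would look roughly like this (a sketch; the device names and the `nas` mapping name are placeholders, not a tested recipe):

```shell
# Sketch: create the 3-disk RAID5, encrypt it with LUKS, format XFS.
# Device names are placeholders; double-check them before running.
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
      --chunk=64 /dev/sdb1 /dev/sdc1 /dev/sdd1   # 64k chunk, per advice above
cryptsetup luksFormat /dev/md0           # encrypt the whole array
cryptsetup luksOpen /dev/md0 nas         # map it as /dev/mapper/nas
mkfs.xfs /dev/mapper/nas                 # one big filesystem, no LVM
```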

Again, thanks for the help and I'll update this if I learn anything useful.

----------

## drescherjm

 *Quote:*   

> I'll try and do xfs

 

Are your files mainly large or small? XFS will slow down operations on small files (compared to ext3) but speed up operations on large files.

ext4 seems to be better than both, but it's not 100% stable yet, although after using it for 3 months I have not lost any data.

----------

## kevintshaver

Almost exclusively large files (~3GB). I considered ext4 because it would be an easy upgrade to btrfs when it's ready, and because I can see the filesystem filling up to >85% from time to time, but I decided XFS might be better for me since it should be more stable and will probably work better with large files.

What happens, though, if you share a large file out over Samba or NFS? It breaks it down into smaller files, right? If my array is being used entirely as a NAS, will I get any XFS large-file advantage?

----------

## drescherjm

 *Quote:*   

> It breaks it down into smaller files, right?

 

I do not think so. I believe you will still have better performance with XFS or ext4 (with extents). However, if you are on 100 Mbit it probably will not matter as much, since the array will easily read faster than the network can send data.

----------

## fangorn

 *drescherjm wrote:*   

>  *Quote:*   Does anyone know if I'll run into problems having a partition of 3TB+?  
> 
> I have a few 4TB+ Linux software RAID setups at work using 6 to 8 Seagate 7200.11 750 GB drives, and they work great. I am using XFS on them, though, and no encryption. With 6 drives and an Athlon X2 5200+ I get 200+ MB/s writes and 300 MB/s reads at around 5% CPU usage.

 

One annotation: 

If you plan on using XFS (a good choice in my opinion; yes, I know, I am an old-fashioned stability fan  :Wink:  ) and want to append space later: use LVM  :Exclamation:   You cannot resize XFS partitions, at least not with any tool I know of. With LVM there is no problem: just add the new drive to the volume group, add extents to the logical volume, then mount the volume and resize online.
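The LVM growth path described above would look roughly like this (a sketch; the volume group `vg0`, logical volume `data`, new disk `/dev/sde1`, and mount point `/mnt/data` are all assumed names):

```shell
# Sketch: extend an XFS logical volume online after adding a disk.
# All names below are placeholders.
pvcreate /dev/sde1                    # prepare the new disk as a physical volume
vgextend vg0 /dev/sde1                # add it to the volume group
lvextend -l +100%FREE /dev/vg0/data   # give all free extents to the LV
xfs_growfs /mnt/data                  # grow the mounted XFS filesystem online
```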

 *Quote:*   

> If my array is being used entirely as NAS, will I get any xfs large file advantage?

 

Only on the server machine. For instance, when a client sends a command to delete a file, the server will be pretty quick about the job.   :Twisted Evil: 

----------

## tallica

Hello,

I'm going to set up Linux RAID 5 + LVM2. I use ~amd64. At boot I get:

```
lvm              |*   lvm uses addon code which is deprecated

lvm              |*   and may not be available in the future.
```

```
device-mapper    |*   device-mapper uses addon code which is deprecated

device-mapper    |*   and may not be available in the future.
```

So is it recommended to use LVM2 on new installs? Which code is deprecated? I hope DM/LVM2/MD RAID will still be supported by the kernel/userspace in the near future...

----------

## drescherjm

 *Quote:*   

> You cannot resize partitions with xfs

 

You cannot shrink XFS filesystems, but you can grow them with xfs_growfs:

```
 # xfs_growfs 

Usage: xfs_growfs [options] mountpoint

Options:

   -d          grow data/metadata section

   -l          grow log section

   -r          grow realtime section

   -n          don't change anything, just show geometry

   -I          allow inode numbers to exceed 32 significant bits

   -i          convert log from external to internal format

   -t          alternate location for mount table (/etc/mtab)

   -x          convert log from internal to external format

   -D size     grow data/metadata section to size blks

   -L size     grow/shrink log section to size blks

   -R size     grow realtime section to size blks

   -e size     set realtime extent size to size blks

   -m imaxpct  set inode max percent to imaxpct

   -V          print version information

```

```
# equery list lvm

[ Searching for package 'lvm' in all categories among: ]

 * installed packages

[I--] [ ~] sys-fs/lvm2-2.02.42 (0)
```

```
jmd0 john # equery belongs xfs_growfs

[ Searching for file(s) xfs_growfs in *... ]

sys-fs/xfsprogs-2.9.7 (/usr/bin/xfs_growfs)
```

----------

