# Striping size?

## Gankfest

What would be the best stripe size for each partition listed, on RAID 5:

```
/dev/sda1 /boot     - 1G Gentoo boot partition, ext2
/dev/sda2 /         - 250G Gentoo filesystem, ext3, maybe ext4 (not sure which is best)
/dev/sda3 swap      - 8G Swap partition for both Gentoo and Debian
/dev/sda4 /         - 50G Debian filesystem, ext3
/dev/sda5 C:/       - 250G Windows 7 filesystem (just system files, no downloaded files), NTFS
/dev/sda6 page-file - 8G Windows page-file partition (like Linux swap), NTFS
/dev/sda7 D:/       - About 1.5TB, where all the downloads, movies, music, and games go. Holds really big files and some small files such as pics and MP3s, but the majority are over 700MB, up to 15GB. NTFS
```

Any advice on the optimal stripe sizes for each partition listed would be greatly appreciated. This will be my first RAID so I'm not quite sure how it works, but thanks in advance for the advice!  :Smile: 

----------

## NeddySeagoon

paradox6996,

Rule 1: /boot must be unraided or raid1, as grub ignores raid and cannot decode raid5, and it chokes at the end of the first stripe on raid0.

Stick with the default stripe sizes, then use lvm2 on top of raid5 for everything except /boot and maybe root. Root on lvm needs an initrd.
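For anyone who does want the filesystem aligned with the raid geometry anyway, mke2fs accepts stride and stripe-width hints, and the arithmetic behind them is simple. A sketch in Python, assuming a 64k mdadm chunk, 4k filesystem blocks, and a 4-drive raid5 (none of these numbers come from the thread):

```python
# Stride/stripe-width arithmetic used by mke2fs -E stride=...,stripe-width=...
# Assumed values (not stated in the thread): 64 KiB chunk, 4 KiB blocks,
# a 4-drive raid5, which stores one chunk of parity per stripe.

chunk_kib = 64                 # raid chunk size
block_kib = 4                  # ext3/ext4 filesystem block size
drives = 4                     # drives in the raid5 set
data_drives = drives - 1       # one chunk per stripe is parity

stride = chunk_kib // block_kib        # filesystem blocks per chunk
stripe_width = stride * data_drives    # blocks per full data stripe

print(f"mke2fs -E stride={stride},stripe-width={stripe_width}")
```

With those assumptions it prints `mke2fs -E stride=16,stripe-width=48`; the same formula works for any chunk/block/drive combination.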

----------

## Gankfest

 *NeddySeagoon wrote:*   

> paradox6996,
> 
> Rule 1: /boot must be unraided or raid1, as grub ignores raid and cannot decode raid5, and it chokes at the end of the first stripe on raid0.
> 
> Stick with the default stripe sizes, then use lvm2 on top of raid5 for everything except /boot and maybe root. Root on lvm needs an initrd.

 

Can a 1G section for the boot be on the same hard drives as the raid 5, so the first partition is un-raided for boot and then the raid is created with the remaining space? Why the default size for striping? From what I've read, it's better to use small stripes for small files such as system files, and large stripes for bigger files like games, movies, and ISOs. If default is the best way to go, then I'll take your word for it, but I'd like to know why it's not a good idea to use custom stripe sizes.

----------

## NeddySeagoon

paradox6996,

You donate partitions to kernel raid sets, not whole drives, so you can mix and match raid levels on the same drive.

I think you can even make a raid5 set on 3 or more partitions on the same drive. You wouldn't want to but you could.

I have four drives partitioned as 

```
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0553caf4

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1           5       40131   fd  Linux raid autodetect
/dev/sda2               6          70      522112+  82  Linux swap / Solaris
/dev/sda4              71      121601   976197757+   5  Extended
/dev/sda5              71         724     5253223+  fd  Linux raid autodetect
/dev/sda6             725      121601   970944471   fd  Linux raid autodetect
```

sd[abcd]1 is /boot in raid 1

sd[abcd]2 is four swaps - not raided at all, so the kernel will use them as something like raid0

sd[abcd]5 is root in raid5

sd[abcd]6 is the rest of the space, in raid5, then donated to lvm2

This gives me

```
$ df -h
Filesystem            Size  Used Avail Use% Mounted on
rootfs                 15G  740M   14G   6% /
/dev/root              15G  740M   14G   6% /
rc-svcdir             1.0M   84K  940K   9% /lib64/rc/init.d
udev                   10M  388K  9.7M   4% /dev
shm                   4.0G  840K  4.0G   1% /dev/shm
/dev/mapper/vg-home  1008G  610G  348G  64% /home
/dev/mapper/vg-opt    9.9G  902M  8.5G  10% /opt
/dev/mapper/vg-tmp    2.0G   24M  1.9G   2% /tmp
/dev/mapper/vg-var     29G   24G  3.3G  88% /var
/dev/mapper/vg-usr     40G   21G   18G  55% /usr
/dev/mapper/vg-local 1008M   40M  918M   5% /usr/local
/dev/mapper/vg-portage
                      2.0G  268M  1.6G  15% /usr/portage
/dev/mapper/vg-distfiles
                       30G   11G   18G  37% /usr/portage/distfiles
/dev/mapper/vg-packages
                       30G  7.9G   21G  29% /usr/portage/packages
/dev/mapper/vg-vmware
                       82G   25G   53G  33% /mnt/vmware
/dev/shm              4.0G     0  4.0G   0% /var/tmp/portage
/dev/md1               38M   24M   13M  65% /boot
```
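A quick sanity check on that layout, sketched in Python using the sd[abcd]6 figure from the fdisk output (raid5 keeps one drive's worth of parity, so usable space is n - 1 drives):

```python
# RAID 5 usable capacity: one drive's worth of space goes to parity.
# blocks_per_drive is the sd[abcd]6 size from the fdisk output above,
# in 1 KiB blocks; four drives are in the set.

blocks_per_drive = 970_944_471
drives = 4

usable_kib = blocks_per_drive * (drives - 1)
usable_gib = usable_kib / 1024**2
print(f"usable raid5 space: ~{usable_gib:.0f} GiB")
```

That comes out to roughly 2.7 TiB for the volume group; the volumes listed allocate only part of it, which is exactly the point of being able to grow them later.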

I could have combined root and lvm2 and put root inside the lvm2 space but to make that work, you must have an initrd.

What you say about optimising the stripe size to suit the file size is becoming less important with read-ahead and large on-drive caches.

Also, with lvm2 it's more difficult to allocate things to physical volumes ... the whole idea is that you don't, so you can grow and shrink 'partitions' on the fly ... no reboot required. That's worth more to me than the small amount of extra speed from optimising the stripe size in the underlying raid.

I do match filesystems to purpose. /dev/mapper/vg-portage and /dev/mapper/vg-tmp are both ext2 with a 1k block size to avoid the wasted space with storing a large number of small files on a filesystem with a 4k block size. Both filesystems are completely expendable, so why have the extra writes associated with a journal?

/dev/mapper/vg-tmp is wiped every boot and /dev/mapper/vg-portage is just the current portage tree.
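The slack-space argument for the 1k block size is easy to put numbers on. A sketch with a made-up file count (the real portage tree will differ); on average each file wastes half of its final block:

```python
# Rough slack-space estimate for many small files: every file's last
# block is partly empty, wasting on average half a block.
# The file count below is illustrative, not measured from a real tree.

files = 150_000  # hypothetical number of small files

for block_bytes in (1024, 4096):
    avg_waste = block_bytes / 2            # expected slack per file
    total_mib = files * avg_waste / 2**20  # total slack in MiB
    print(f"{block_bytes} B blocks: ~{total_mib:.0f} MiB lost to slack")
```

At 4k blocks the slack is roughly four times the 1k figure, which is the waste being avoided.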

Hmm ... I suppose I could delete /dev/mapper/vg-tmp and return the space to lvm2 and put /tmp in /dev/shm ... that's in RAM.

----------

## Cyker

In theory, smaller stripe chunks are faster for many small files (e.g. browser cache, source code trees) while larger stripe chunks are better for larger files.

However, I've been reading up about it and it seems that in practice this isn't the case.

Check out https://raid.wiki.kernel.org and

http://blog.jamponi.net/2008/07/raid56-and-10-benchmarks-on-26255_10.html

Apparently 256-1024k chunks give quite good boosts?
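The theory in the first paragraph is easy to illustrate: a read touches ceil(file_size / chunk_size) chunks, capped at the number of data drives. A sketch with made-up sizes, assuming a 4-drive RAID 5 (3 data chunks per stripe):

```python
# How many chunks (and hence drives) a single sequential read touches
# for a given chunk size. Sizes below are illustrative assumptions.
import math

data_drives = 3  # e.g. a 4-drive RAID 5 has 3 data chunks per stripe

for chunk_kib in (64, 256, 1024):
    for file_kib in (16, 4096):  # a small file vs a ~4 MiB file
        chunks = math.ceil(file_kib / chunk_kib)
        drives = min(chunks, data_drives)
        print(f"chunk {chunk_kib:>4}K, file {file_kib:>4}K: "
              f"{chunks} chunk(s), up to {drives} drive(s)")
```

A large chunk keeps a small file's read on one drive, while any reasonably large file still spans every drive regardless, which is part of why big chunks can still benchmark well.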

----------

