# Gentoo on HP MediaSmart EX485 - software Raid-5

## MarcusXP

Hey guys,

I managed to successfully install Gentoo on my EX485, using a single hard drive.

I have a fully blown OS running right now (except GUI which is not needed).

Now my next challenge is to migrate this installation (which was done on a 160GB drive) to 4x 2TB WD Green hard drives.

I'd like to setup software raid using these 4 drives.

I don't have much experience with software raid. I've read a little and it doesn't seem too complicated, but there are things I don't fully understand yet.

How should I create the partitions? I was thinking to have a layout like this:

  - raid-5 for /boot partition (about 100MB or so)

  - raid-5 for / (root) partition (about 60GB or so)

  - raid-5 for 'data'  partition (about 5.5TB or so)

I've read that grub isn't able to read striped partitions (like raid-0 and raid-5). Is that still true? If so, should I make my /boot partition raid-1? And can raid-1 span all 4 drives, or does it only work with 2? My concern is that I'll waste space anyway if I only mirror 2 drives, so I'd rather use the spare space on the other 2 drives to mirror them as well; that way, if any of the 4 drives fails, my system will still boot. What about the root partition: can grub read it, or will I have to make it raid-1 as well?

Is what I want possible without using an initramfs (initrd), or will I be obliged to use one? (I have no experience with it.)

thanks a lot,

Marcus

----------

## NeddySeagoon

MarcusXP,

I have a 4-way mirror for my /boot; it's only 32MB on each drive.

You may as well install grub on the MBR of two of the four drives; that way, if one of them dies, you can still boot.

There is no point installing grub on the MBR of more than two drives: with two dead drives, your raid5 is dead too.
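With grub legacy (what Gentoo shipped at the time), installing to two MBRs might look like the sketch below. The `device` lines map each physical disk to hd0 in turn, so whichever drive the BIOS picks can find /boot on its own copy of the mirror. Device names are examples, not from the thread:

```
grub --batch <<EOF
device (hd0) /dev/sda
root (hd0,0)
setup (hd0)
device (hd0) /dev/sdb
root (hd0,0)
setup (hd0)
quit
EOF
```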

Where will you put swap? 

I have four identical kernel-managed swaps at equal priority ... that's like raid0. It's not so good if you lose a drive that has used swap space, as the applications with data there will die a horrible death. Swap on raid works.
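Those equal-priority swaps are just fstab entries; equal `pri=` values make the kernel stripe across them. A sketch, assuming the swap partitions happen to be sda2..sdd2 (hypothetical numbering):

```
# /etc/fstab: four swaps at equal priority (stripes like raid0)
/dev/sda2   none   swap   sw,pri=1   0 0
/dev/sdb2   none   swap   sw,pri=1   0 0
/dev/sdc2   none   swap   sw,pri=1   0 0
/dev/sdd2   none   swap   sw,pri=1   0 0
```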

I have a small / (root), then everything else in LVM.  This avoids the use of an initrd, but if you don't mind an initrd, root can be in LVM too.

LVM allows dynamic partition resizing: no reboot required, no mess.  Just choose filesystems that support resizing, and be aware that some filesystems can only be made larger. Notice how less than 1G of my / is used. /boot is raid1, / is raid5, and everything else is in LVM on top of raid5:

```
df -Th
Filesystem                Type    Size  Used Avail Use% Mounted on
rootfs                    rootfs   15G  743M   14G   6% /
/dev/root                 ext4     15G  743M   14G   6% /
rc-svcdir                 tmpfs   1.0M   84K  940K   9% /lib64/rc/init.d
udev                      tmpfs    10M  388K  9.7M   4% /dev
shm                       tmpfs   4.0G  100K  4.0G   1% /dev/shm
/dev/mapper/vg-home       ext4   1008G  679G  278G  71% /home
/dev/mapper/vg-opt        ext4    9.9G  994M  8.4G  11% /opt
/dev/mapper/vg-tmp        ext2    2.0G   36M  1.9G   2% /tmp
/dev/mapper/vg-var        ext4     29G   25G  3.1G  89% /var
/dev/mapper/vg-usr        ext4     40G   21G   17G  56% /usr
/dev/mapper/vg-local      ext4   1008M   39M  918M   5% /usr/local
/dev/mapper/vg-portage    ext2    2.0G  269M  1.6G  15% /usr/portage
/dev/mapper/vg-distfiles  ext4     30G   12G   17G  43% /usr/portage/distfiles
/dev/mapper/vg-packages   ext4     30G  9.2G   19G  33% /usr/portage/packages
/dev/mapper/vg-vmware     ext4     82G   26G   53G  33% /mnt/vmware
/dev/shm                  tmpfs   4.0G     0  4.0G   0% /var/tmp/portage
/dev/md1                  ext2     38M   24M   13M  65% /boot
```
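For example, growing a volume like vg-home is a two-step, online operation (a sketch using the names above; ext4 supports growing while mounted):

```
lvextend -L +10G /dev/mapper/vg-home   # grow the logical volume
resize2fs /dev/mapper/vg-home          # grow the filesystem to fill it
```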

Most users doing the install / migrate-to-raid route use one of the drives that will eventually belong to the raid for the single drive install. They then create the raid in degraded mode and copy the install over; lastly, they add the single drive install's space to the degraded raids to bring them up to strength. That way they get to practice replacing a drive before they really need to.  You won't have to do that.
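As a sketch, that degraded-then-add sequence with mdadm might look like this (device names are examples; the keyword `missing` holds the fourth slot open until the old install drive is freed up):

```
# Build the raid5 with one member deliberately absent
mdadm --create /dev/md2 --level=5 --raid-devices=4 \
      /dev/sdb3 /dev/sdc3 /dev/sdd3 missing
# ... copy the single-drive install onto it, then repartition
# the old drive and add it to complete the array:
mdadm --add /dev/md2 /dev/sda3
cat /proc/mdstat   # watch the resync
```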

I have heard a few horror stories about WD Green hard drives and raid: because of the green features, drives keep dropping out. You want to research that before you build your raid.

One other quirk is that they have a 4k physical sector size and fake the more usual 512-byte sector size by doing read/modify/write cycles, which is horribly slow.

To avoid this you must make sure your partitions are aligned on 4k boundaries. Again more research is required. I don't have WD Greens.
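A quick rule of thumb: with 512-byte logical sectors, a partition is 4k-aligned when its start sector is divisible by 8. A small sketch that checks example start sectors; on the real box you would read the values from /sys/block/sda/sda*/start:

```
check_aligned() {
    # a start sector is 4k-aligned when divisible by 8
    if [ $(( $1 % 8 )) -eq 0 ]; then
        echo "sector $1: aligned"
    else
        echo "sector $1: NOT aligned"
    fi
}

check_aligned 63      # classic old fdisk default first sector
check_aligned 2048    # 1 MiB boundary, always 4k-aligned
```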

----------

## MarcusXP

Hi Neddy,

Thanks a lot for taking the time to write.

I didn't think about swap yet. I might not use swap at all... this is supposed to be just a NAS box with some limited functions (NFS server, Samba and maybe an rtorrent client), so I might never need more than the 2GB of memory that is installed right now.

I thought it would be better to have grub on the MBR of all 4 drives, since I am not sure which one will be the next to boot if one drive fails. (This box doesn't have a VGA output, so I can't access its BIOS settings.)

So my idea was to have grub installed on all 4 drives; this way any of them can boot.

I thought about LVM as well, but I haven't used it at all until now, so I don't know how it works. I guess I will have to read more about that.

My last idea was to have the boot and / partitions mirrored on all 4 drives.

And the last partition on each drive (which will be very large) would be part of a software raid-5 (using mdadm).

Using LVM might be nicer, but I am not good with that at all...

What would be the command to create the /boot mirror on 4 drives? Would this work?

  mdadm --create /dev/md0 --level=mirror --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

I would use a similar command for the / partition, right? Except with /dev/md1 and /dev/sda2, /dev/sdb2 and so on.

then for the 'data' partition I would create a raid-5 using the following command:

  mdadm --create /dev/md2 --level=raid5 --raid-devices=4 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3

another challenge is how to partition the hard drives identically. I thought I would partition the first one (sda) then use dd to clone it to the other drives, but that may be too time consuming.. do you have another idea?

Can I trust fdisk to create identical partitions if I use the same commands on each drive?

then another issue would be how I mount the partitions in fstab, and how I configure grub.conf file..
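For what it's worth, a sketch of those two files, assuming md0 is /boot, md1 is /, and md2 is the data array (all hypothetical names; booting root=/dev/md1 without an initrd relies on kernel raid autodetect, i.e. partition type fd):

```
# /etc/fstab (sketch)
/dev/md0   /boot   ext2   noauto,noatime   1 2
/dev/md1   /       ext4   noatime          0 1
/dev/md2   /data   ext4   noatime          0 2

# /boot/grub/grub.conf (grub legacy, sketch; kernel file name is a placeholder)
default 0
timeout 5
title Gentoo Linux
root (hd0,0)
kernel /boot/kernel-x.y.z root=/dev/md1
```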

Does it make sense what I am trying to do? The purpose is to have a NAS box that will be pretty robust (if any of the drives fail, the box will still boot and the data will still be intact).

Then what happens if one drive fails? How do I rebuild all the partitions and the raid?
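For reference, replacing a failed drive (say sdb) with mdadm generally goes like this sketch (hypothetical devices; the replacement disk gets the same partition layout first):

```
# Drop the dead drive's partitions from each array
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
mdadm /dev/md1 --fail /dev/sdb2 --remove /dev/sdb2
mdadm /dev/md2 --fail /dev/sdb3 --remove /dev/sdb3
# Swap in the new disk, recreate the partitions, then rebuild:
mdadm /dev/md0 --add /dev/sdb1
mdadm /dev/md1 --add /dev/sdb2
mdadm /dev/md2 --add /dev/sdb3
cat /proc/mdstat   # resync progress
```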

----------

## MarcusXP

oh, and my WD Green drives have TLER enabled (like the RAID-edition ones).

Would the comment regarding aligning the partitions on 4k boundaries still be applicable?

I also have 4x Hitachi 2TB drives that I could use, do they still need to be "aligned" ?

----------

## NeddySeagoon

MarcusXP,

Large drives vary in physical sector size; you need to read the data sheets.

dd is fine for duplicating MSDOS partition tables, provided you have no more than four partitions, numbered 1..4. It will not work for partitions numbered 5 or more.

```
dd if=/dev/sda of=/dev/sdb count=1 bs=512 
```

 This copies the entire MBR from sda to sdb, including the partition table for the four primary partitions.
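If you ever do need more than four partitions, sfdisk can dump the table as text and replay it on another disk (a sketch; this overwrites sdb's table, so back it up first):

```
sfdisk -d /dev/sda | sfdisk /dev/sdb
```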

If you need 4kB alignment, you probably don't want fdisk as your partition tool. It allocates space in 'cylinders', a now mythical unit from the days of DOS, and those cylinders may not be 4kB aligned.

The old MSDOS partition table format breaks at exactly 2TB (2^32 sectors of 512 bytes).  That's as much space as it can describe; consider using gparted and GPT partition tables. dd will not copy GPT partition tables, as there are several copies on the disk. You will need to enable GPT partition table support in your kernel.
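If you go the GPT route, sgdisk (from the gptfdisk package) can replicate the table and then randomize the GUIDs so the two disks don't clash. A sketch with example devices:

```
sgdisk --replicate=/dev/sdb /dev/sda   # copy sda's GPT onto sdb
sgdisk --randomize-guids /dev/sdb      # give sdb unique GUIDs
```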

I don't know if grub works with GPT or not.

Everything else looks ok

----------

## MarcusXP

Is it safe to use ext4 on /dev/md1, which will be my / ? (/dev/md1 consists of 4 mirrors: /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2)

Or should I go with ext3?

I've read somewhere that there are problems using journalled filesystems on software raid; I am not sure if it is still true or not.

thank you,

----------

## NeddySeagoon

MarcusXP,

ext3 and ext4 are both journalled.  I'm not aware of problems with journalled filesystems on RAID5, just the RAID5 write hole that occurs if a write is in progress when you get a power failure or a disk dies.

----------

