# [SOLVED] What is the best way to use raid on Gentoo?

## FizzyWidget

This is all now solved, but I thought I would edit this post in case anyone else was interested.

First off, clear the disc in the raid utility (if coming from Windows) and then destroy the array. Then go back into the BIOS (a restart is more than likely required), and under the disc you have set to raid, change the option to disabled, then disable raid.

Once you have booted into Gentoo, whether through the desktop or SSH, open a terminal as root and type

fdisk /dev/sdX (X being the disc(s) you used) and wipe the drive. After this step I had to reboot, as it said it couldn't update the partition table. Then partition the drives as normal Linux, not raid auto-detect (unless you want to use raid auto-detect).
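As a rough sketch (using a hypothetical disk /dev/sdb), the interactive fdisk session looks like this; the keystrokes create a fresh partition table and one full-disk partition:

```
fdisk /dev/sdb
   o       # create a new empty DOS partition table
   n       # new partition; accept the defaults for one full-disk partition
   t, 83   # set the type to 83 (Linux), or fd if you want raid auto-detect
   w       # write the table and exit
```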

To make the raid you should type (obviously you need to have mdadm compiled and installed; if not, just emerge mdadm):

```
mdadm --create /dev/md0 --verbose --chunk=64 --level=raid5 --raid-devices=3 /dev/sd[bcd]1
```

Obviously, change this to reflect the number of discs you have and the partitions you are using.

You can, like me, wait until the raid has been made; you can check the progress of this by typing

```
cat /proc/mdstat
```

although you don't have to. I had to go out, so I just left it to do what it had to, and luckily by the time I got back it was done.
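For reference, while the array is building, /proc/mdstat shows a progress bar. The output looks roughly like this (illustrative values, not from a real run):

```
md0 : active raid5 sdd1[3] sdc1[1] sdb1[0]
      976767872 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
      [==>..................]  recovery = 12.5% (61047992/488383936) finish=95.2min speed=74800K/sec
```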

Next, the conf file for mdadm, which is a rather simple setup:

```
mdadm --detail --scan >> /etc/mdadm.conf
```

Make your chosen filesystem on there (mkfs.ext3/4/xfs/jfs /dev/md0), adding whatever performance tweaks you like. To find out which mkfs options are best for you, I found the script at http://goo.gl/oCUfb to be useful, although you will have to change a few of the lines at the top to suit your setup.
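One tweak worth knowing about is aligning the filesystem to the raid geometry. As a sketch of the arithmetic (assuming the 64K chunk and 3-drive raid5 from the mdadm command above, and ext4's default 4K block size):

```shell
# Hypothetical geometry: 64K chunk, 3-disk raid5 (2 data disks + parity).
chunk_kb=64
block_kb=4                              # ext4 default block size
data_disks=2                            # raid5 on 3 drives: one disk's worth is parity
stride=$((chunk_kb / block_kb))         # filesystem blocks per chunk
stripe_width=$((stride * data_disks))   # blocks per full stripe
echo "mkfs.ext4 -E stride=${stride},stripe-width=${stripe_width} /dev/md0"
```

Recent mkfs tools can often detect md geometry on their own, so treat this as a worked example for checking the numbers rather than something you must pass by hand.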

Put mdraid in the boot runlevel: rc-update add mdraid boot

And last but not least, add an entry to fstab so that it is mounted when you boot.
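A minimal fstab entry might look like this (the mount point /mnt/storage and the ext4 filesystem are assumptions; use whatever you actually chose):

```
# /etc/fstab
/dev/md0   /mnt/storage   ext4   noatime   0 0
```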

Reboot, and hopefully your raid should be ready for you to use.

----------

## NeddySeagoon

Dark Foo,

What raid level do you want to use?

How many real cores does your CPU have?

What is the proposed workload?

The only excuse for using fakeraid is that Windows and Linux must share the raid set.

----------

## FizzyWidget

Mainly I use raid0, as I have redundant backups of everything I have and I sync all of them once a day.

As to the cores, one is a Q6600 (4 cores / 4 threads); the other is a Core i7 2600 (4 cores / 8 threads).

Workload will be general day-to-day normal PC usage.

edit - a bit moot on the nvidia system, as one of the drives just died  :Sad: 

Still, it will be good to have the info for when I get a replacement drive  :Smile: 

----------

## krinn

 *Dark Foo wrote:*   

> 
> 
> 1, use linux software raid,
> 
> 2, get proper hardware raid card and watch out for "proper" raidcard that are fakeraid too 
> ...

 

----------

## NeddySeagoon

Dark Foo,

First, understand that RAID0 is really AID0, as there is no redundancy; the reliability is that of a single disk divided by n, where n is the number of drives in the array.

Raid0 does not have any CPU overhead to calculate redundant data (there is none), so go with mdadm and kernel raid.

This will give you an n times disk access speedup, for small values of n.
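To put an illustrative number on the reliability point above (assuming independent failures and a made-up 5% annual failure chance per drive), the chance that a 3-drive raid0 loses everything in a year is 1 - (1 - p)^n:

```shell
# Illustrative arithmetic only - p and n are made-up example values.
awk 'BEGIN { p = 0.05; n = 3; printf "%.3f\n", 1 - (1 - p)^n }'
# prints 0.143, i.e. roughly triple the single-drive risk
```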

Traps for the unwary:-

/boot must be raid1 or unraided.

/boot must be raid superblock version 0.90 or grub will not work.

If you want raid auto assembly, to make it easy to do root on raid, you must use raid superblock version 0.90, and the partitions to be donated to raid sets must be type 0xfd.

Raid auto assembly is going away one day, when that happens you will need to move to an initrd to have root on raid.
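Putting those traps together, a hypothetical auto-assembled raid1 for /boot or root would be created along these lines (device names are examples only):

```
# sda1 and sdb1 set to partition type 0xfd in fdisk beforehand
mdadm --create /dev/md1 -e 0.90 --level=raid1 --raid-devices=2 /dev/sd[ab]1
```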

----------

## FizzyWidget

raid will only be for storage drives, the system will be on a separate non raided drive

----------

## krinn

Actually it should not be quite that, Neddy, no  :Smile: 

Disk access should go to, let's say, the average: (fastest disk + slowest disk) / 2

and the transfer rate should rise to: slowest disk * n, as long as that stays under the controller bandwidth (n = number of disks in the array)

----------

## NeddySeagoon

krinn,

I've only ever used identical drives in raid sets, but your arithmetic is more generally correct than mine, for the case of two spindles.

Dark Foo, 

You will lose the entire content of your storage, if any device in your raid0 set fails. I tend to value my data more than my Gentoo installs.

The installs are easy to recreate. It's your data.

----------

## FizzyWidget

 *NeddySeagoon wrote:*   

> 
> 
> Dark Foo, 
> 
> You will lose the entire content of your storage, if any device in your raid0 set fails. I tend to value my data more than my Gentoo installs.
> ...

 

As I said in my first reply to you, I have redundant backups of all information on the raid: an external 1TB drive, as well as copies of stuff on a 500GB laptop, a 500GB server, a Mac, and a Mac backup drive, all synced daily.   :Smile: 

I would go raid5 if you could rebuild an encrypted raid system, but from all I have read you can't.   :Shocked: 

I wonder if it would be better to get some thumbdrives, encrypt them and keep personal stuff on them, and use un-encrypted raid5   :Confused: 

I wonder if I should just run un-encrypted on all machines   :Confused: 

----------

## NeddySeagoon

Dark Foo,

See this page

If you are really paranoid, /boot can be on a USB pen drive.

That article applies to any raid level.

----------

## FizzyWidget

So to rather simply get raid5 working, I just have to

fdisk sdb, sdc, sdd to raid auto detect

emerge mdadm

mdadm --create /dev/md0 --level=raid5 --raid-devices=3 /dev/sd[bcd]1

sit back and wait?

mkfs.xfs /dev/md0

One thing that has me confused is that I see this on the wiki page:

```
Note: mdadm v3.1.4 uses version 1.2 superblocks by default. You will need to add the '-e 0.90' switch to these lines if you want to use version 0.9 superblocks and have auto-detection at startup!
```

I left it as it was, thinking this may have been sorted since I'm using mdadm 3.1.5 - should I kill the array and re-create it with the switch as it suggests? I'm using ~amd64.

----------

## NeddySeagoon

Dark Foo,

The kernel devs want to drop raid autoassemble, so if you want to use it you must tell mdadm to use version 0.90 raid superblocks.

You do not need to wait for the sync - the raid set is usable while it syncs.  The sync deliberately does not use all the disk bandwidth.

The idea being you can hot swap and hot add drives to replace a failed drive with zero downtime. Not all hardware and driver combinations support this.

You don't even have to sync all in one go; it's OK to shut down and restart.

----------

## FizzyWidget

So which way would you use to make a raid5?

----------

## NeddySeagoon

Dark Foo,

You only need auto assemble if your root will be on the raid.  For a data volume, you can have mdadm assemble and start it during boot.

----------

## FizzyWidget

Yes it will be a data volume

So the drives do not need to be set as raid auto detect (fd)?

And I use the command I posted above?

----------

## NeddySeagoon

Dark Foo,

Correct - you need to have mdadm assemble the raid as part of the boot process.

----------

## FizzyWidget

OK, so last question: I just make the drives as normal Linux partitions in fdisk and use the command above. Do I need to run mdadm --detail --scan > /etc/mdadm.conf to update the conf file, or will mdadm do this on boot?

----------

## NeddySeagoon

Dark Foo,

You will need an mdadm.conf file to tell mdadm which raid sets you have so it tries to assemble them.  

```
mdadm --detail --scan > /etc/mdadm.conf 
```

is as good a way as any.
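The resulting file ends up with one ARRAY line per raid set, roughly of this shape (the UUID here is a placeholder, not a real value):

```
ARRAY /dev/md0 metadata=0.90 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
```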

----------

## FizzyWidget

Okay, I have the main part of the system back up and running now.

Going to do the raid in the morning, far too late for me to do it now. I just want to run this checklist past you, to see if I have everything set right.

Done

Turn off raid in bios

partition drives (not raid auto detect - fd)

emerge mdadm

Not Done

make raid5 using - mdadm --create /dev/md0 --level=raid5 --raid-devices=3 /dev/sd[bcd]1 - is there any block size I should add to this? 32K - 64K?

make the file system - which is good for raid5, or does it not matter? Some files are large, some are small: your usual txt, doc, avi, mkv type of stuff, day-to-day things.

I noticed when I did a test format before, xfs bitched at me about the AG size, saying I had to use a size less than the drive, and that it changed it to 32K; and ext4 said I had 14GB less than what I should have.
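On the chunk size question: 64K is a common default, and whatever you pick, xfs can be told the matching stripe geometry. A sketch of the arithmetic, assuming a 64K chunk and the 3-drive raid5 from the checklist above:

```shell
# Hypothetical values matching the mdadm command in the checklist above.
chunk_kb=64
disks=3
data_disks=$((disks - 1))   # raid5 sacrifices one disk's worth of capacity to parity
echo "mkfs.xfs -d su=${chunk_kb}k,sw=${data_disks} /dev/md0"
```

mkfs.xfs usually detects md geometry on its own, so this is mainly useful for checking what it picked.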

----------

