# Boot from RAID?

## nosenseofhumor1

Hello wise gentoo community,

So the other day I thought I was being so proactive and clever by buying 4 identical 1TB SATA hard drives and the Promise TX4310 RAID controller card. I figured that since there is a BIOS RAID configurator that exposes the logical disk you create to your main BIOS so that you may boot from it, it would also expose the logical disk to the OS.

Wrong. 

At first nothing showed up in udev, so I recompiled my kernel with the TX2/TX4 support and suddenly the drives showed up, but as four distinct drives: sda, sdb, sdc, sdd.

I called Promise and spoke to somebody who had no idea what was going on. He put me on hold for about an hour while he asked somebody who did, and came back to tell me it was a driver issue: I needed to use the driver they offer on their webpage, which is only supported on SUSE or Red Hat. Knowing this couldn't possibly be true, I went out and tried to install the SUSE partial open-source driver anyway. It didn't work... but about halfway through my attempt it occurred to me: if the OS needs to load this module in order to see my RAID array, and I intend to boot from the array, I have a chicken-and-egg scenario.

Okay, research shows me that this card is FAKE RAID. Bah! Garbage.

So then, where can I get my hands on a REAL RAID card? One supported by Gentoo Linux, which uses the PCI bus, and can support at least 4 SATA drives for less than a thousand dollars?

----------

## NeddySeagoon

nosenseofhumor1,

You really don't want a real raid card on an ordinary 32-bit 33MHz PCI bus.  The bus bandwidth is totally inadequate, so it's a waste of money.

If you can avoid it, you don't want any disk drives hanging off an ordinary PCI bus as they can easily use over half the available bandwidth.

Your TX4 card is a halfway house between real hardware raid and software raid, in that it provides a hardware XOR engine for calculating the redundant data needed for raid5 (and raid6, if the card supports it). However, the open source Linux driver does not use it.

Hardware raid cards are whole computers on a card, often with lots of RAM for buffers and battery backup so the data in the RAM is not lost in the event of a power failure. Expect to pay for a whole computer if you buy hardware raid.

Meanwhile, play with kernel raid, test to see if you can take the performance hit. You may not even notice.

Hint: with kernel raid, /boot must be raid1 or not raided at all, as grub knows nothing of raid and just ignores it.
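For example, a raid1 /boot is usually made with the old 0.90 superblock format, which sits at the end of the partition so legacy grub can read a member as though it were a plain filesystem. The device names below are just an illustration, not a prescription:

```
# Sketch: 2-way raid1 for /boot with the 0.90 superblock, which lives at
# the end of the partition so legacy grub can read the member directly.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --metadata=0.90 /dev/sda1 /dev/sdb1
mke2fs /dev/md0    # ext2 is a common choice for /boot
```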

How do you boot a TX4 raid with the Red Hat driver?

You need to add the module to your initrd, which is a temporary root filesystem loaded by grub at the same time as it loads the kernel.

You also need to make the init script in the initrd load your module so it's available for mounting your real root.
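A minimal init script inside such an initrd might look something like this. The module and device names are assumptions for illustration, not the actual Promise driver:

```
#!/bin/sh
# Hypothetical initrd init script: load the driver, then mount the real root.
mount -t proc proc /proc
insmod /lib/modules/sata_promise.ko   # controller driver, before any mounting
mount -o ro /dev/sda3 /newroot        # now the drives/array are visible
exec switch_root /newroot /sbin/init  # hand control to the real init
```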

----------

## nosenseofhumor1

thanks for the fast response!

That is true... PCI is a bit slow, but I'm less concerned about speed and more concerned about keeping my whole system backed up without a lot of overhead. That's why raid5 is so enticing. If a drive dies with a real raid, all I have to do is buy a new drive, boot into the raid BIOS, and tell it to fix it. With software raid I still need to be able to get into my Linux environment... which seems to be a bit of a disadvantage.

Understanding that a real raid is a bit of a computer, why does it need a bus faster than what is required for one drive? As far as the operating system is concerned, there should appear to be one device sitting on this controller card. I tell it I want to write onto that device, and the raid controller does all the maneuvering necessary to stripe it across four disks with parity. It doesn't seem to me like a great big bus is really necessary for that. In fact, I would think you'd need a bigger bus for software-controlled raid... right?

So then... in order to do this properly, I need a new mobo with PCIe slots and a beefy raid controller to sit in it, right?

----------

## NeddySeagoon

nosenseofhumor1,

The only way you can tell whether you have software raid or hardware raid is by looking at your setup.

If you lose a drive from software raid5, the raid set reverts to degraded mode and continues to work.

You add a new drive and it rebuilds as you would expect.

As you say, you need more bus bandwidth for more drives: the kernel manages them as separate drives, reading and writing to them independently, so you have the raid overhead of writing the redundant data. With motherboard-mounted hard drive controllers this is normally not a problem, for two reasons: a) they are on their own private PCI bus; b) because the bus is private, it is not constrained to 33MHz operation, so the drive controller chip can be run at its rated speed, which is usually much higher than 33MHz.
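The back-of-the-envelope numbers make the point. The drive throughput figure below is a rough assumption about hardware of that era:

```
# Shared 32-bit/33MHz PCI peaks at 32 bits x 33 MHz / 8 bytes,
# for the whole bus, before any protocol overhead.
echo $(( 32 * 33000000 / 8 / 1000000 ))   # prints 132 (MB/s)
# A single SATA drive of the period sustains very roughly 60-80 MB/s,
# so two busy drives can already saturate a shared PCI bus.
```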

raid is not a backup. 

```
rm very_important_file
```

removes it from every drive in the raid set. You still need your backups.

Raid provides higher reliability and availability by providing some redundancy. However, there are many other single points of failure to consider when you are looking to do that.

----------

## DeathCarrot

I run gentoo on a 2x500GB RAID0 set through the motherboard's RAID controller (Intel ICH8R on an Asus P5B Deluxe), which I presume is the same halfway RAID as your Promise controller. I think this was the guide I followed. It's a bit of extra hassle, but once it's up everything works as you'd expect; you just need to remember to use /dev/mapper/* instead of /dev/sda* whenever you want to access a partition device.

----------

## energyman76b

I run a 2x500GB Raid 1 on software.

And it works great. Really. Despite the fact that I need a g*d d* initrd now. But I can live with that. It is even a lot nicer to be able to boot and have mdadm do the rebuilding than to reboot and wait out the hours the BIOS needs.

----------

## NeddySeagoon

DeathCarrot,

For raid0 there is no redundant information so kernel raid is preferred over dmraid.

You can move a kernel raid set to a different hard drive controller and it just works. By contrast, you would need another *identical* Intel controller to recover the data from your raid0 set if your motherboard dies.

This is true of kernel raid vs dmraid regardless of the raid level.
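That portability comes from mdadm writing its superblock onto the member partitions themselves, so a set can be inspected and reassembled on whatever controller the disks land on. A sketch, with assumed device names:

```
mdadm --examine /dev/sdb1   # show the raid superblock stored on a member
mdadm --assemble --scan     # find and assemble every array it can see
cat /proc/mdstat            # confirm the set came up
```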

----------

## DeathCarrot

*NeddySeagoon wrote:*

> DeathCarrot,
> 
> For raid0 there is no redundant information so kernel raid is preferred over dmraid.
> 
> You can move a kernel raid raid set to another different hard drive controller and it just works. You will need another *identical* intel controller to recover the data from your raid0 set if your motherboard dies.
> ...

 

I was under the impression that a purely software implemented RAID would introduce performance overheads. Any truth to that?

When I decided to run RAID0 I made a conscious choice to run a system that I wouldn't mind reinstalling. I make a point to backup anything I'd miss and my world and configuration files are almost identical on my laptop so installation would primarily be a question of compile time.

I'm also dual-booting with Windows and I've heard bad things about MS's software RAID implementation, so that also needs to be taken into account.

Thanks for the input, hope I didn't sound like I was irrationally defending my decision.  :Smile:  I did try to look at all the options I had when setting up the system.

----------

## energyman76b

There are old benchmarks showing that Linux software raid beats all that fakeraid stuff (mobo onboard 'raid') into a bloody pulp.

----------

## NeddySeagoon

DeathCarrot,

raid0 (of any sort) has almost no performance overhead at all, as data blocks are directed to each member drive in turn.

Indeed, as long as you are not using two drives on the same IDE cable, you see a performance increase using raid0 of almost N x one drive, where N is the number of spindles in the set.

Since you need to share your raid set with Windows, dmraid (BIOS raid) was the right choice.

You seem to be aware of the risks associated with your decision - that's the main thing.

----------

## DeathCarrot

Thanks for the heads up guys - I did a bit more reading on mdraid, and I'm certainly more impressed with it now. If I was running a linux-only machine I'd certainly go mdraid, but sadly I'm still lugging around those old MS shackles so it's either mobo-RAID or nothing  :Sad: .

----------

## nosenseofhumor1

So what should my partition scheme be? I have four identical 1TB HDs, 1 of which is in use right now.

I need a partition for /boot on raid1, then the remainder can be raid5, and I will use LVM2 on the raid5 to create my file structure (so that I can have easily resizable partitions)... migrate my data off the 1TB HD that is currently in use to the array... wipe and add that disk to the array... expand the raid5 LD onto it... wait, I'm getting confused.

So I have

sda (unformatted)

sdb (unformatted)

sdc (unformatted)

sdd1 (formatted 1tb partition)

I need to make

sda1 (part a of raid 1 LD 500MB filesystem=FD bootable)

sda2 (part A of raid 5 LD the-rest-of-itGB filesystem=FD)

sdb1 (part b of raid 1 LD 500MB bootable filesystem=FD)

sdb2 (part B of raid 5 LD the-rest-of-itGB filesystem=FD)

sdc1 (part C of raid 5 LD 1tb? no, 999.5GB right? 500MB gone? filesystem=FD)

then migrate sdd1 to the raid 5 LD and format it

sdd1 (part D of raid 5 LD ???size? filesystem=FD)

hmm...

I guess I just have to lose the size of my boot partition on the other two disks... right?

OR should I just put /boot and swap on a separate disk that isn't part of the array and keep a CD backup of /boot?

----------

## NeddySeagoon

nosenseofhumor1,

As you have 4 drives, you can have a 4-way raid1 /boot, or you could raid1 two of them and use the same space on the other two for swap, depending on how big you need swap to be.

The kernel will stripe (like raid0) across multiple swap areas of equal priority, but your system will die a horrible death if you have swap on a single drive and lose that drive.
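The striping happens automatically when the swap areas share the same priority. A hypothetical /etc/fstab fragment (partition names are assumptions):

```
# Equal "pri=" values make the kernel stripe page-outs across both areas.
/dev/sda2   none   swap   sw,pri=1   0 0
/dev/sdb2   none   swap   sw,pri=1   0 0
```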

With kernel raid, you donate partitions to the raid set, not drives, so mix and match any way you want.

When you make your raid5, tell mdadm that it will be 4 partitions but that the one on your in-use drive is missing.

This makes the raid set run in degraded mode, 3 of 4 drives (no redundant data). Copy the data over, test that it's OK, then partition (not format) your last drive and add its partitions to your raid set. The raid code will rebuild the added drive in the background.
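As a sketch of that sequence (the device names are assumptions; substitute your own layout):

```
# Create the raid5 degraded, with the in-use drive's slot marked "missing".
mdadm --create /dev/md1 --level=5 --raid-devices=4 \
      /dev/sda2 /dev/sdb2 /dev/sdc1 missing
# ...copy your data over and verify it, then partition the old drive and:
mdadm --add /dev/md1 /dev/sdd1
cat /proc/mdstat    # watch the background rebuild
```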

You get to use your raid and practice fixing a failed drive.

----------

## nosenseofhumor1

that is incredibly sound advice, thank you!

----------

