# Moving to RAID5 = Hell

## Ceniza

Hi.

I got a lovely new PC recently, and I decided to buy a couple of new HDDs to set up as RAID. The board has an Intel X58 chipset that supports RAID0, RAID1, RAID5 and RAID10. I copied my old partitions to the new disk partitions, changed everything to point to the right devices, compiled the kernel with support for all RAID levels following all the advice in the Wiki about RAID, I even created the initrd thing using genkernel by specifying my .config, set up GRUB, and... nothing.

GRUB started by spitting out an Error 17. After lots of Googling, many posts suggested that it was not possible to use GRUB to boot from a RAID5 setup. I decided to give up with that approach.

I kept searching and found some lists where it was said that GRUB2 supports RAID5. I emerged it (9999) and followed the steps from the Wiki, but I could not get it to install itself in the disk. grub2-install kept saying that it had no idea about the device/volume, that I should check my device.map, but it was right there as (hd0), pointing to /dev/mapper/isw_blahblahblahVolume0.

Failing to get GRUB2 to get installed, I decided to try to boot my system by creating a bootable flash drive with my kernel, initrd and all suggested things from the Wiki. With this approach I was able to get the kernel started, but then it failed to mount the root partition because it could not detect its type. What seemed to be happening was that the kernel was not detecting the RAID, thus being unable to mount the root partition.

I connected the old drive again (which is non-RAID), configured everything, and I got it to boot... into the old installation on that drive. Now I am trying to, at least, be able to see the RAID, but that fails as well. All support is enabled in the kernel (2.6.35), dmraid is up to date, but when I try to use it (dmraid -ay) I get:

 *Quote:*   

> ERROR: device-mapper target type "raid45" is not in the kernel
> 
> RAID set "isw_bajbajfcej_Volume0" was not activated
> 
> ERROR: device "isw_bajbajfcej_Volume0" could not be found

 

Searching for this kind of problem, I find that dmraid is looking for raid45 when the kernel has renamed it to raid456. However, all those posts are from years ago. It is even reported in the bug tracker for the 2008 Gentoo LiveCD/DVD. I would expect it to work after all these years.

I am completely clueless now as to what to do in order to get this thing working. Buying a real RAID card is expensive. It is also worth saying that I want to have Windows 7 in the same RAID setup (it is installed already, and it works just lovely in there).

Am I really the only person trying to get this working and failing miserably? Why can't I find any recent information about it (everything I find is from about 2007)?

I hope someone can help me figure this out. Using Gentoo was part of the idea when updating this PC, but now I cannot do it properly.

Thanks.

----------

## XQYZ

Is putting /boot on a raid1 not an option for you? I don't see you benefiting much from a larger, distributed /boot anyway.

----------

## John R. Graham

It's really not reasonable to expect a small piece of software like a bootloader to support all storage options.  This is why the Gentoo Handbook typically recommends setting up a small boot partition that is simple and uses a very simple filesystem (ext2).  Build whatever exotic filesystem support you want into your kernel, but load the kernel from a non-exotic partition & filesystem.  You'll save yourself a world of pain.
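As a rough sketch of that layout (device names and sizes are just examples, and these commands would need root against real disks, so take them as illustration only):

```shell
# Small, plain first partition for /boot; legacy GRUB only has to read ext2.
mkfs.ext2 /dev/sda1

# Example /etc/fstab entry for it:
#   /dev/sda1   /boot   ext2   noauto,noatime   1 2

# GRUB goes in the MBR and loads the kernel from the simple partition;
# the kernel then assembles whatever exotic storage root lives on.
grub-install /dev/sda
```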

Regarding getting Windows and Linux to share the same software RAID partition, I've never seen any indication that this can be made to work.  Best thing for you to do if you must make this work is to look for an inexpensive used hardware RAID card on eBay.  I really like the Adaptec RAID cards and have several in use on my home server infrastructure, including a 4-drive 1.4TiB RAID5 array with hot spare.  Boot, swap, and root partitions are all on the same controller-constructed RAID5 virtual drive.

- John

----------

## dE_logics

Maybe fdisk -l will be useful here.

----------

## Ceniza

Thing is: I am using Intel's FakeRAID for the whole thing, so I do not have the flexibility to say that I want RAID1 here and RAID5 there using the same disks. All I could do, at best, is reduce RAID5 to use 3 disks and leave the other one out of it, or get a fifth disk to use without RAID... or use the old one (as I am doing right now).

It is for that reason that I decided to go with the flash drive approach. It has an ext2 partition, I put the kernel and initramfs files in there along with GRUB's menu.lst and GRUB setup as well to boot from it, but the kernel seems to be unable to see the RAID, even after following all steps from the Wiki. In any case, the end result is that the kernel from that flash drive cannot mount the root partition.

I also tried, as I said, to get access to the RAID using my old hard drive with the working installation (what I am doing right now), but it is not working. dmraid keeps complaining about that raid45 module not being there (raid456 is built into the kernel, though). I tried with a Kubuntu LiveCD, and it is able to see the RAID. Looking for modules I found it has one in a sub-directory 'ubuntu' called raid4-5.

By the way, if I try to use the initramfs thing with the old disk, the kernel has the same exact problem as when trying from the flash drive. That suggests the problem is not trying to boot from the flash drive, but instead something with the kernel/dmraid/initrd thing.

What I find kind of curious is that some people in multiple forums and mailing lists have said that booting is not possible if /boot is on a RAID0 or RAID5, but a week ago I was able to get that working right away with RAID0 (2 disks only) using Kubuntu. I used their installer, and it just worked. They use GRUB2, by the way, which, according to a mailing list, is supposed to understand RAID5 as well. The strange thing is that the installer does not detect the RAID5, but it did detect the RAID0 before (maybe it knows it will not work like that?).

Using fdisk -l helped me get GRUB to find /boot in the RAID5 (by specifying the geometry as suggested in the Wiki), but it would not boot afterwards (Error 17).

Buying a RAID card would be nice, but they are quite expensive from what I have seen. At least a new one of a good brand that supports RAID5 is almost €400. That does not look like a very good solution to me, and I am not so sure about looking for a used one.

Do you have any ideas how could I at least get my installation to see the RAID5?

----------

## XQYZ

If it's a fakeraid anyway, what's the advantage over linux software raid? That is unless you are dual booting the machine. Both run on your CPU afaik.

----------

## NeddySeagoon

 *Ceniza wrote:*   

>  ... compiled the kernel with support for all RAID levels ...

 

The kernel options are for kernel raid. You need device-mapper and dmraid for fakeraid.

With fakeraid, the BIOS should hide the underlying structure of the raid from the boot loader, so it should work.

Kernel raid and fakeraid are both different (incompatible) implementations of software raid.

Kernel raid is faster and more mature, so is preferred over fakeraid. The only reason to use fake raid is to share the raid set with Windows.
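The difference shows up directly in the tooling. A rough illustration (device names are examples; these commands need root and real hardware, so they are shown for orientation only):

```shell
# Fakeraid: the BIOS writes vendor metadata (e.g. Intel's isw format);
# dmraid reads it and builds device-mapper targets on top of the disks.
dmraid -ay            # activate all discovered fakeraid sets
ls /dev/mapper/       # sets appear as e.g. isw_..._Volume0

# Kernel (md) raid: mdadm writes its own metadata and the md driver
# assembles the array; this is incompatible with the fakeraid metadata.
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[a-d]1
cat /proc/mdstat      # md arrays show up here
```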

----------

## s4e8

Intel onboard RAID5 requires dmraid + the dm-raid45 module, which is not in the mainline kernel, is no longer maintained, and has no replacement. You should dump it and use Linux mdraid, but you lose the on-RAID /boot capability.

----------

## Ceniza

 *XQYZ wrote:*   

> If it's a fakeraid anyway, what's the advantage over linux software raid? That is unless you are dual booting the machine. Both run on your CPU afaik.

 

The 'unless' part applies to my case. I want to dual boot the machine.

NeddySeagoon: I tried to install device-mapper, but it blocks udev. What can I do here?

I am reading now raid.wiki.kernel.org, which seems to be a more up to date document, and by using mdadm I was able to get access to my RAID. They have this comment in there:

 *Quote:*   

> Starting with Linux kernel v2.6.27 and mdadm v3.0, external metadata are supported. These formats have been long supported with DMRAID and allow the booting of RAID volumes from OptionROM depending on the vendor.
> 
> ...
> 
> The second format is the Intel(r) Matrix Storage Manager metadata format. This also creates a container that is managed similar to DDF. And on some platforms (depending on vendor), this format is supported by option-ROM in order to allow booting.

(from https://raid.wiki.kernel.org/index.php/RAID_setup)

That applies to my case as I am using Intel's solution, which is reported by mdadm --detail-platform. Now, how do I use, where do I find, that OptionROM thing?
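For what it's worth, the mdadm side of that can be sketched as follows (device names are examples; this needs root on a machine whose option-ROM writes imsm metadata, so it is illustration only):

```shell
mdadm --detail-platform       # shows what the Intel option-ROM supports
mdadm --examine /dev/sda      # look for "imsm" metadata on a member disk
mdadm --assemble --scan       # assemble the imsm container and its volumes
cat /proc/mdstat              # the container and the volume appear here
```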

I keep searching and searching, but I just cannot find a definitive guide/answer to how to get this thing working. From what I am finding today it looks like it is possible somehow by using that OptionROM thing. There is even someone in a mailing list querying about this whole thing, but I cannot find more. It is quite recent as well (Aug 2010), and the links are:

http://www.spinics.net/lists/raid/msg29639.html

http://marc.info/?l=linux-raid&m=128206374710903&w=2

http://www.spinics.net/lists/raid/msg29667.html

What is the way to go with it using mdraid in Gentoo to get this thing booting?

Thanks.

----------

## NeddySeagoon

Ceniza,

My bad.  device-mapper functionality has moved to lvm2.

As you are dual booting, you won't use the Logical Volume Manager but you will need the device-mapper functions.

The kernel can access Windows Dynamic Disks. That's software raid for Windows, so if you can't make fakeraid work, that's another way to have a raid set shared between Linux and Windows.

The OptionROM, if you have it, is your BIOS.

You do have Linux kernel v2.6.27 and mdadm v3.0 or later don't you?

You must have mdadm > 3 as all earlier versions have been removed from portage.

----------

## Ceniza

 *NeddySeagoon wrote:*   

> You do have Linux kernel v2.6.27 and mdadm v3.0 or later don't you?
> 
> You must have mdadm > 3 as all earlier versions have been removed from portage.

 

The kernel is 2.6.35 and mdadm is 3.1.3.

I decided to delete the RAID and re-create it with a 200MB RAID0 using the 4 disks, followed by a RAID5 using the remaining space. I wanted to try RAID1, but then I was unable to use RAID5 (RAID1 took 2 disks, RAID5 would only take the same 2 disks or the other 2). RAID0 was bootable with Kubuntu, so I thought it was a decent choice to put /boot into it. Since Kubuntu uses GRUB2, I tried GRUB2 9999, but I could not get grub-install to work. It kept complaining about not finding a disk. Googling shows that I am not the only one with the problem, but there is no solution (when trying to use mdadm directly with Intel's FakeRAID). It looks like it would work with traditional mdadm, but not with imsm.

I also got the initramfs to work with the non-RAID installation. However, if I try mdadm it does nothing, and if I try dmraid I get the raid45-not-in-kernel error. I hoped I could at least boot into the RAID5 using this approach, but mdadm did not want to work. I tried to get it working from busybox, but it said it could not understand imsm.

I just looked for mdadm in the initramfs, but it is not there, so I guess its functionality is in busybox. I just recreated the initramfs myself, including my mdadm + dependencies. Maybe it will work (I need to reboot first).

Any ideas about getting GRUB2 to install? Once again, it worked with Kubuntu, but using dmraid, though.

----------

## Ceniza

Almost there, but not quite.

Using the Kubuntu LiveCD and chroot'ing into my Gentoo system I was able to get GRUB2 installed (9999 did not want to build again, so I had to use 1.98). Kubuntu, however, uses dmraid while I am trying to use mdadm in Gentoo.

I used 'cp -a' to copy my original / to the new / in the RAID5 while I was in Kubuntu. I set the copied fstab to mount the right devices as well.

Trying to boot from the RAID0 (where I installed GRUB2 and have my /boot) failed because it could not find a disk given a UUID, but at least it tried to boot. All I got was a 'grub rescue' prompt, which I have no idea how to use.

I tried to boot again using the old non-RAID disk with my custom initramfs and pointing to the new / using real_root, and I got even closer. The error this time was that /dev/console and /dev/null were not in the real_root's /dev, and for that reason it would fail, then I got a kernel panic.

I booted into the old disk to see if the /dev of the new / had console and null, and it did. However, I suspect that mixing dmraid and mdadm is corrupting the file system in the RAID5. When the kernel failed to boot, it also said something bad about an inode. Running a fsck in the new / gave me error after error. As a matter of fact I got tired of pressing y. Does it really mean that mixing dmraid and mdadm is dangerous?

Running grub2-install from GRUB2 1.98 with mdadm still fails, but with other messages:

 *Quote:*   

> grub2-install /dev/md125
> 
> /sbin/grub-setup: warn: Attempting to install GRUB to a partitionless disk.  This is a BAD idea..
> 
> /sbin/grub-setup: error: embedding is not possible, but this is required when the root device is on a RAID array or LVM volume.

 

It looks like I will have to use dmraid to install the boot loader, but mdadm for everything else. It is late now, so I will just leave it copying the old / to the new / using only mdadm after re-formatting the partition. Tomorrow I will check whether it gets corrupted as well.

If I cannot get it to work like that, I think I will just install it in a non-RAID disk and use the other 3 for RAID5 with Windows. I just hope the dual boot does not become another issue.

If you get any ideas of what else I can try, let me know... please.

Thanks.

----------

## Ceniza

I am almost there now. I got initramfs to continue booting, but I got a read-only /. It is nice to be able to boot into the new / in a RAID5, but it is useless if I cannot change anything in the filesystem. Checking /proc/mdstat shows that the array is marked (read-only). I have no idea why it is happening, and I cannot find a solution for it. mdadm --readwrite does not solve the issue either.

Even though I am quite close, I have wasted so many hours trying to get this working, that I think I will just go for the easier solution of installing it in a non-RAID disk and let Windows just work in the RAID5 with the other 3 disks.

It is a real shame I could not get this to work.

[edit]

Just for the record:

Assembling the array from the initramfs always sets the RAID5 in read-only mode. The reason is unknown. Assembling the array once the system has started from an old non-RAID'ed disk sets it to auto-read-only, which is right (it will switch to read-write at the first write attempt).

Time to delete the RAIDs and go the old way for Linux...

[/edit]

[edit2]

I can no longer try this (I decided to use RAID0 instead for Windows), but someone pointed me to this link: http://lwn.net/Articles/332809/

It looks like having mdmon is also important, and probably running mdadm with that -I option as well.

[/edit2]
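Going by that LWN article, the missing piece for external (imsm) metadata seems to be mdmon, the userspace metadata manager: without it nothing can update the on-disk metadata, so md keeps the array read-only. A rough sketch of what an initramfs might need to run (device and volume names here are guesses, not something I was able to verify):

```shell
# Incremental assembly: feed each member disk to mdadm as it appears.
mdadm -I /dev/sda
mdadm -I /dev/sdb
mdadm -I /dev/sdc
mdadm -I /dev/sdd

# Start the metadata manager on the imsm container so writes are allowed.
mdmon /dev/md/imsm0

# Now the volume inside the container can be switched to read-write.
mdadm --readwrite /dev/md/Volume0
```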

----------

## NeddySeagoon

Ceniza,

I suspect your read-only root is caused by an error in /etc/fstab not allowing rootfsck to run.

You get a message like "Press Ctrl-D to continue or give the root password for maintenance".

Give the root password to get into your system read only.

Do 

```
mount -o remount,rw /
```

to make root read/write, then fix your /etc/fstab.

Raid1 need not be on only two drives. You can have as many mirror copies as you like, in your case four, as all the partitions in your container must span the same number of drives.

If /dev is missing console and null they should be made statically. However, udev does this for you now, so is udev old or is it not running?

----------

## Ceniza

 *NeddySeagoon wrote:*   

> I suspect your read only root is caused by an error in /etc/fstab not allowing rootfsck to to run.
> 
> You get a message like "Press Ctrl-D to continue or give the root password for maintainance".
> 
> Give the root password to get into your system read only.
> ...

 

That is not the issue. What is happening is that the array is assembled in read-only mode and never gets into read/write mode. The system boots "fine" as long as it does not need to write anything. It even starts all init scripts (some of them complain because they cannot create some files, like .pid files), takes me to a vt and lets me log in. Since the array is read-only, I cannot do much with it. It is as if I were trying to use a CD for root. I even tried getting into busybox before the array was assembled and root mounted, and ran mdadm myself. The array is assembled just fine, but the RAID5 is read-only (which can be seen from /proc/mdstat). The link from 'edit2' may be the answer, but I no longer have a RAID5 to try it.

 *Quote:*   

> 
> 
> Raid1 need not be on only two drives. You can have as many mirror copies as you like, in your case four, as all the partitions in your container must span the same number of drives.
> 
> 

 

With the BIOS thing I was only able to select 2 drives, nothing more. That made creating a RAID5 impossible, as I could only select the same RAID1 drives or the other 2, and 2 are not enough for RAID5.

 *Quote:*   

> 
> 
> If /dev is missing console and null they should be made statically. However, udev does this for you now, so is udev old or is it not running?

 

The problem was that the filesystem got damaged. I copied everything using dmraid from a Kubuntu LiveCD, but I tried to use it with mdadm in Gentoo inside an initramfs. Once I copied it using mdadm and tried to boot it using mdadm as well, that problem went away and no more filesystem issues were detected.

----------

## adramalech707

The big problem with using onboard BIOS RAID is that you either have to use a non-custom kernel build, or jump through many flaming hoops and pray a lot that you can build an initrd using genkernel with a custom bzImage. I had this issue with RAID0 on 2 HDDs: a pain in the arse. It is even said to be equal to or less powerful than software RAID, which is supposedly faster (I don't know, I haven't tried that yet).

I have one question: why not use RAID 1+0 (more commonly known as RAID10), which is a hybrid, also known as nested, RAID? It would only be one more HDD, for a total of 4 instead of 3...

raid5:

 *Quote:*   

> 
> 
> Block-level striping with distributed parity.
> 
> Distributed parity requires all drives but one to be present to operate; drive failure requires replacement, but the array is not destroyed by a single drive failure. Upon drive failure, any subsequent reads can be calculated from the distributed parity such that the drive failure is masked from the end user. The array will have data loss in the event of a second drive failure and is vulnerable until the data that was on the failed drive is rebuilt onto a replacement drive. A single drive failure in the set will result in reduced performance of the entire set until the failed drive has been replaced and rebuilt.
> ...

 

raid 1+0 (raid10):

 *Quote:*   

> 
> 
>     * RAID10 provides superior data security and can survive multiple disk failures
> 
>     * RAID10 is fast
> ...

 

dmraid quote from gentoo wiki page:

 *Quote:*   

> 
> 
> This howto assumes that your boot partition is the first partition on the raid. If you set your /boot  partition as the first partition on the disk, you will likely save yourself the need of a floppy disk and a reboot later (Method 2 of installing GRUB mentioned later in this guide). This will not confuse Windows as long as you install Windows onto your second partition (or whichever partition you like). Here is a sample layout for dual-booting with Windows
> 
>     * Partition 1: /boot 
> ...

 

/boot should be ext2 format with about 20-100MB of space. You can also use LVM2 to help with more partitions like /home, /swap, etc., if you want to control the size of each one; otherwise just a / is fine.

What I used was a SystemRescueCD with dodmraid as one of my many boot flags; that way I could mount the RAID to perform the partitioning. What you have to remember is that if you are doing just Linux, software RAID is better, but if you are also dual booting with another non-Linux operating system then you have to do BIOS RAID. Software RAID won't work because Linux software RAID cannot communicate with your Windows partition.

One more thing:

http://en.gentoo-wiki.com/wiki/RAID/Onboard

This should help with understanding onboard RAID with Gentoo.

The problem with doing RAID dual booting with Windows is the custom kernel building, with having to work around tricks or build your own initrd image.

----------

## Ceniza

I also considered using RAID10, but I could not get it to be recognized as such by dmraid. Once I started to toy around with mdadm, I never gave RAID10 another chance (which could have worked).

In my last attempt I created a RAID0 as the first array with the 4 disks to put /boot into. RAID1 would not let me use the four disks making the RAID5 impossible to create later (all using the BIOS thing).

The link that you gave me in the Gentoo Wiki was one of the many pages I checked. The problem is that most pages show you how to get this thing working with RAID levels other than RAID5, or RAID5 created with mdadm itself. Most documents also use dmraid instead of mdadm, which would not work in Gentoo for RAID5 due to the old kernel module not being found.

It was a really painful experience and I wasted a lot of time, but I was quite close to getting there as well. Customizing genkernel's initramfs was what took me almost there.

Right now I am using one of the new disks in non-RAID mode for Linux (so it was quite easy to get working), and the other 3 disks are in RAID0 for Windows for the sake of performance and good luck not getting a failing drive.

I just hope this thread can help someone to get to where I was quickly, and, hopefully, finish the job and get it working. If you are that person... please share your experience here.

----------

## adramalech707

Here, I found this...

Since you can build a custom kernel and save the .config as usual, you then want to do this. I forgot about this because I had to spend an hour searching on Google for it:

If you do not want to go the hard way but still want a custom kernel, then emerge genkernel and execute it as follows:

```
genkernel --dmraid --install --kernel-config=/root/kernel-config all
```

NOTE: With this, the kernel and initramfs will be automatically installed; be sure to use the correct file names later in the grub config.

This will allow you to build a custom kernel but still not have to worry about building a custom initial ramdisk.

----------

## Ceniza

 *adramalech707 wrote:*   

> Here, I found this...
> 
> Since you can build a custom kernel and save the .config as usual, you then want to do this. I forgot about this because I had to spend an hour searching on Google for it:
> 
> ```
> ...

 

I tried that, but dmraid refused to work due to the missing raid45 in the kernel (it should work for RAID0 and RAID1, though). mdadm was also 'available', but it was actually a symbolic link to busybox, which did not understand imsm. It is for that reason that I had to customize the initramfs produced by genkernel. A feature request to include the real mdadm (and, most likely, mdmon as well) in the initramfs produced by genkernel would be a good idea... I think. Thing is, it is not enough just to include it if it is not guaranteed to work (like what happened to me, where the RAID5 would stay read-only).

----------

## CrazyCasta

Not sure if anyone's interested anymore (it is >1 year later). I've been running a dm raid5 for a while now. I've had to keep patching my patchset so I've just put it up on google code now. If anyone wants to use dm-raid45 just get the patch from http://code.google.com/p/dmraid-patches/ . I'm running plain old gentoo-sources, so anyone doing the same, and using a version that matches up with one of my patches *SHOULD* have absolutely no problem. It should also work with other kernel sources, but I don't know how much tweaking would be required.

----------

