# root on RAID & genkernel's initrd

## thyrihad

I have a server with all partitions, including root, on LVM2 on top of kernel RAID10 (not dmraid).  /boot is a separate drive with one standard partition on it.

I have just switched from an archaic initrd, cobbled together from various sources, to genkernel's initrd. genkernel works fine for the most part, except that it fails to activate my RAID device prior to mounting root.

The problem is that /dev/md0 doesn't exist at the time the initrd tries to run "mdstart" (I'm assuming it does that?).

I am prompted to enter a shell.  I do that and type:

```
mknod /dev/md0 b 9 0
mdstart /dev/md0
vgscan
vgchange -a y
exit
```

then give it the root device again:

```
/dev/mapper/my-lvm2-root
```

and booting continues happily.

I'm a bit perplexed as to why the kernel doesn't autodetect the RAID device.  The partitions are all type 0xfd and, as far as I can remember, previous kernels/initrds didn't need to start it explicitly either, so I'm left suspecting the missing /dev/md0 node.
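For what it's worth, one quick way to confirm the member partitions really are type 0xfd is to filter an `sfdisk -d` dump. A minimal sketch (the function name is mine, and you would substitute your own disks for the example device):

```
# Print any partition from an `sfdisk -d` dump whose type is not 0xfd
# (Linux raid autodetect).  Reads the dump on stdin.
non_autodetect() {
    awk '/^\/dev\// && !/Id=fd/ { print $1 }'
}

# usage (example device): sfdisk -d /dev/sda | non_autodetect
```

Any partition it prints will be skipped by the kernel's autodetect pass.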

Is this udev at play?  I have tried passing both udev and noudev on the kernel command line, but I get the same result.  My kernel parameters are currently:

```
noudev dolvm2 root=/dev/ram0 init=/linuxrc real_root=/dev/mapper/vg0-lv_root
```

I'm hoping I don't need to edit the initrd to create the device nodes at start; if I do, I'd recommend that be added to genkernel.

As a last thought, the server has no /etc/raidtab because I have never needed one in the past (autodetect runs fine and mdadm does the rest).  Does genkernel use raidtab to decide what to do?
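A side note on raidtab: that file belongs to the old raidtools package, and mdadm ignores it in favour of /etc/mdadm.conf. An initrd that carries a copy of mdadm.conf could assemble the array without relying on kernel autodetect. A hypothetical example (the device names, level and device count are made up; `mdadm --detail --scan` prints correct ARRAY lines for a running array):

```
# /etc/mdadm.conf -- hypothetical example
DEVICE /dev/sda3 /dev/sdb3
ARRAY /dev/md0 level=raid10 num-devices=2
```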

Cheers all!

----------

## bunder

have you tried compiling without genkernel (ie: do you really need an initrd?)

cheers

edit: do you have MD support built into the kernel?  that would prevent the nodes from being created.

----------

## NeddySeagoon

thyrihad,

Let's explain a little of the boot sequence.

With your raid personalities built into the kernel (not as modules) and the underlying partition types set to 0xfd, the raid should be started automatically at boot. You can see this in the boot messages:

```
[   69.500501] md: autorun ...
<lots of raid detect messages>
[   70.080769] md: ... autorun DONE.
```

If you don't see <lots of raid detect messages> between those two lines, then autodetect isn't working.
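One way to check this without scrolling the whole log is to count what falls between the two markers. A small sketch, assuming `dmesg` still holds the boot messages (the function name is mine):

```
# Count the lines between "md: autorun ..." and "md: ... autorun DONE."
# in a kernel log read on stdin; 2 or fewer means autodetect found nothing.
autorun_lines() {
    sed -n '/md: autorun \.\.\./,/autorun DONE/p' | wc -l
}

# usage: dmesg | autorun_lines
```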

You also need a persistent raid superblock, which has been the default for some time.

As your mdadm.conf or raidtab are stored on the root partition, they can't be read until root is mounted, so they don't participate in auto-forming the raids. udev is also on the root partition (unless it's in your initrd too) and therefore can't be started until root is mounted.

I have a plain raid (no LVM2) and it all just works. However, your root is on LVM2 on raid, so to read your root you have to form the raid set, start LVM2 on it, then mount root.  Since udev is on root, it can only help if it's in the initrd too.
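That sequence could be sketched as a few commands. This is only an illustration, not what genkernel's linuxrc actually runs: the member partitions, volume group and mount point are assumptions, and DRYRUN defaults to printing each step rather than executing it:

```
#!/bin/sh
# Sketch of an initrd's job when root is LVM2 on kernel RAID.
# All device names are assumptions; DRYRUN=1 (the default here)
# prints each step instead of executing it.
DRYRUN=${DRYRUN:-1}
run() { if [ "$DRYRUN" = 1 ]; then echo "$*"; else "$@"; fi; }

run mknod /dev/md0 b 9 0                           # md devices are block major 9
run mdadm --assemble /dev/md0 /dev/sda3 /dev/sdb3  # form the raid set first
run vgscan                                         # find volume groups on md0
run vgchange -a y                                  # activate logical volumes
run mount /dev/mapper/vg0-lv_root /newroot         # mount the real root
```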

----------

## thyrihad

 *NeddySeagoon wrote:*   

> With your raid personalities built into the kernel (not modules) and the underlying partition types as 0xfd the raid should be started automatically at boot. You can see this in the boot messages
> ...

Indeed, it isn't.  Why?

 *NeddySeagoon wrote:*   

> As your mdadm.conf or raidtools.conf are stored on the root partition, they can't be read until root is mounted, so they don't participate in auto forming the raids.

If you're referring to my mention of raidtab, I was supposing that genkernel might take the system raidtab and include it in the initramfs image it creates, thus using it to decide how to bring the array up.  I'm guessing not.

 *NeddySeagoon wrote:*   

> udev is also on the root partition (unless its in your initrd too) and therefore can't be started until root is mounted.

I was under the impression that the genkernel initrd compiled in a udev binary, which I can see is not the case now that I have unpacked the initramfs image.

 *NeddySeagoon wrote:*   

> I have a plain raid (no LVM2) and it all just works. However, your root is on LVM2 on raid, so to read your root, you have to form the raid set, mount it somewhere then start LVM2.  Since
> ...

And creating an initrd with udev, kernel RAID and LVM2 support is exactly why I have started using genkernel.  I run machines with root on EVMS on RAID, root on LVM2 on dmraid, and root on EVMS alone - I'm not lost with any of these concepts.  It is just this combination - root on LVM2 on RAID (for which I can't use dmraid because it has no RAID10 support) - which is problematic when it comes to initrds.

Now, back to the business of the RAID array not auto-assembling.  Anyone?

----------

## NeddySeagoon

thyrihad,

There have been a few posts about raid auto-assembly being broken with kernel 2.6.20.  If that's your kernel version, you may want to drop back to 2.6.19.

Like I say, I use kernel raid, and gentoo-sources-2.6.20 works for me.

----------

## thyrihad

Thanks, but I'm still on 2.6.19-gentoo-r5.

NeddySeagoon, are you using disk controller drivers from the new ATA layer in your kernel?

----------

## NeddySeagoon

thyrihad,

Yes. I'm using the new SATA drivers for SIL 3112A and the old PATA driver. It works for me in both 2.6.19 and 2.6.20.

make oldconfig broke things when moving from 2.6.18; I had to select the SATA support by hand to make it work.

You can see my kernel config at http://62.3.120.141/~roy

----------

## RayDude

 *thyrihad wrote:*   

> Thanks, but I'm still on 2.6.19-gentoo-r5 
> 
> NeddySeagoon, are you using disk controller drivers from the new ATA layer in your kernel?

 

I have the same problem with 2.6.19-gentoo-r5 on my serial ATA RAID1.

I haven't found a fix, but I have found the bug report:

http://bugzilla.kernel.org/show_bug.cgi?id=7208

It suggests using an initrd; I'm figuring that out now.

FYI

Raydude

----------

