# [SOLVED] How to initialize "composite" RAID

## MikeHartman

This is unrelated to my other RAID thread, but I discovered this issue when I was forced to hard restart due to the other one.

My main raid (md0) is a RAID 5 composite that looks like this:

- partition on hard drive A (1.5TB)

- partition on hard drive B (1.5TB)

- partition on hard drive C (1.5TB)

- partition on RAID 0 array md1 (1.5TB)

md1 is a RAID 0 used to combine two 750GB drives I already had so that they could be fit into the larger RAID 5 (since all the RAID 5 components need to be the same size).
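For reference, a nested layout like this is built by creating the inner array first and then handing it to the outer one as an ordinary component. A rough sketch of the idea — the device names (`sda1` through `sde1`) are placeholders, not taken from the thread, and these commands are destructive:

```shell
# Combine the two 750GB drives into a 1.5TB RAID 0 (the inner array, md1).
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdd1 /dev/sde1

# Then use md1 as the fourth 1.5TB component of the RAID 5 (md0).
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/md1
```

The same nesting is why assembly order matters at boot: md0 cannot be assembled cleanly until md1 exists.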

This seems to be a fairly standard approach that is more or less endorsed by the various RAID tutorials I've read through, and it works fine when I start all my arrays manually, with md1 started before md0.

But when the system boots up it tries to start all my arrays automatically and the timeline looks like:

Detecting md0. Can't start md0 because it's missing a component (md1) and thus wouldn't be in a clean state.

Detecting md1. md1 started.

Then I use mdadm to stop md0 and restart it (mdadm --assemble md0), which works fine at that point because md1 is up. 

But aside from the fact that I don't want to do that manually every time I reboot, because md0 was started without the md1 component and then had it re-added, the array decides it needs to go through a resync, which takes 10 hours. And that will only get worse as I continue to add more drives.
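The manual recovery described above amounts to something like this (a sketch using the array names from the thread; it assumes md1 is already up by the time it runs):

```shell
# md0 came up degraded because md1 wasn't assembled yet at detection time.
# Stop it, then reassemble now that md1 exists.
mdadm --stop /dev/md0
mdadm --assemble /dev/md0
```

It is the stop-and-readd cycle, not the commands themselves, that triggers the long resync.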

Is there any way to exercise more control over the array initialization order while still having everything start automatically at bootup? Right now I've done no setup like that at all - it all just works. I've been keeping /etc/mdadm.conf updated, but as I understand it that's more for my own reference than the system's.

Mike

*Last edited by MikeHartman on Wed Mar 23, 2011 3:38 am; edited 2 times in total*

----------

## NeddySeagoon

MikeHartman,

You can be in complete manual control if you use an initrd with mdadm and a script to assemble the raid. 

While you use kernel auto-assembly there is no such control.

----------

## MikeHartman

I saw a (relatively complicated looking) initrd method mentioned on the Linux RAID Wiki (although the site seems to be down at the moment so I can't refer back to it). The main benefit there seemed to be that since you did everything in the initrd the RAID arrays were available before the main bootup sequence started, so you could do things like put your / on a RAID or include RAID partitions in /etc/fstab.

If none of those benefits are particularly important to me (this array is only for media storage and I could set the actual mounting up with autofs or something) do I really need to go through all the hassle of an initrd? Isn't there some way to disable the autodetect that's happening now and just put my manual commands in an rc script or something?

Either way, definitely appreciate the suggestion.

----------

## NeddySeagoon

MikeHartman,

Is your root not on RAID?

If not, it's much easier. An initrd is not needed for root on RAID provided you use kernel auto-assembly.

If you don't use auto-assembly but you want root on RAID, you must have an initrd, since you need mdadm to assemble your RAID.

None of this has any bearing on what goes in /etc/fstab as by the time it's read, root is mounted anyway.

If your root is not on raid you have control over the raid assembly order.

----------

## MikeHartman

No, my root isn't on RAID. The RAID is just a big array used for storage of media files. That's why I'm not very worried about how exactly it gets started up, as long as I don't have to do it by hand every time. 

> If your root is not on raid you have control over the raid assembly order.

How can I control it?

Mike

----------

## NeddySeagoon

MikeHartman,

With fdisk, change the partition types for the raid partitions from 0xfd to 0x83.

This will prevent the kernel from auto-assembling the RAID.
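In fdisk this is the `t` (change type) command; it can also be done non-interactively with sfdisk. A sketch with a placeholder device — the flag name depends on the util-linux version:

```shell
# Change partition 1 on /dev/sda from 0xfd (Linux raid autodetect)
# to 0x83 (plain Linux) so the kernel no longer auto-assembles it.
# Newer util-linux:
sfdisk --part-type /dev/sda 1 83
# Older sfdisk versions used --change-id instead of --part-type.
```

Repeat for each RAID member partition.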

Fill in /etc/mdadm.conf to suit. It's well commented.

Run 

```
/etc/init.d/mdadm start 
```

to test

To automate it, 

```
rc-update add mdadm default
```

This will assemble the raids after the mounts in /etc/fstab have been run.

I think you can add the mounts/umounts  to /etc/conf.d/local so they happen before the login prompt and first thing after the shutdown command.
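On Gentoo of that era the hooks were /etc/conf.d/local.start and local.stop (newer baselayout uses /etc/local.d/*.start and *.stop instead). A sketch, with /mnt/media as a placeholder mount point:

```shell
# /etc/conf.d/local.start -- runs last at boot, after mdadm has
# assembled the arrays, so md0 exists by the time this mounts it.
mount /dev/md0 /mnt/media

# /etc/conf.d/local.stop -- runs first at shutdown, before mdadm
# stops the arrays.
umount /mnt/media
```

This keeps the array out of /etc/fstab entirely, which avoids boot-time ordering problems if assembly fails.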

----------

## MikeHartman

I took the question to the linux-raid mailing list, since it really wasn't a Gentoo-specific issue. I think we've got it resolved. Thread is here for anyone interested: http://marc.info/?t=128415696100001&r=1&w=2

NeddySeagoon:

My partition types are already set to 0xDA (as suggested in the Linux RAID wiki), not 0xFD. I believe the intention there is to avoid the kernel autodetection.

From the discussion linked above, it appears that it's not the old kernel raid autodetect that is initializing my RAID, but mdadm itself, even though I don't see it in "rc-update show". I didn't get a clear explanation why that would be the case, but it seems to be.

The feeling is also that my devices being built out of order wasn't the reason why md0 failed to start, but rather that it was already marked dirty (probably from the hard reboot I needed to do). That seems a little less certain to me, since the message I got only complained about the md1-based partition not being found. But at any rate, Neil suggested I swap the ordering of the lines in /etc/mdadm.conf.
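The suggested reordering just lists the inner array before the outer one, so that `mdadm --assemble --scan` brings up md1 first. A sketch of the relevant mdadm.conf fragment — the UUIDs are placeholders; the real values come from `mdadm --detail --scan`:

```
# /etc/mdadm.conf -- inner array listed first so it already exists
# when md0 is assembled.
ARRAY /dev/md1 UUID=<uuid-of-md1>
ARRAY /dev/md0 UUID=<uuid-of-md0>
```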

While I'm not 100% sold on all of their explanations, I made the change in mdadm.conf, stopped the arrays, and restarted them with a simple "mdadm --assemble --scan"; both came back up correctly. I haven't tried a full reboot yet though, so that will be the real test.

Your solution is the kind of thing I was planning on trying, but it will only work if I can figure out a way to disable the autodetection that's happening now. Since I haven't seen a solid explanation for how it's happening - it doesn't seem to be the kernel, but I can't figure out what's starting mdadm either - I don't know if I'd have much luck with that. So I'll keep my fingers crossed that this mdadm.conf change works just as well during a reboot.

Thanks for your help!

Mike

----------

## NeddySeagoon

MikeHartman,

Do you use an initrd and does it contain mdadm ?

----------

## MikeHartman

I didn't set up any kind of initrd, that's what confuses me.

All I did when I started to create these RAIDs was:

- update to most recent stable kernel

- emerge mdadm

- start running mdadm commands to create/grow/reshape arrays

No adding boot scripts, no initrd, nothing like that.

I was planning to get them set up now, and worry about automounting them at boot later. I only found out something was automounting them when I was forced to reboot ahead of schedule.

But again, I'm cautiously optimistic now. Just have to wait until the next time I can afford a reboot and see if things work better now. If nothing else, Neil over at linux-raid gave me a tip about adding an intent bitmap to the array. So if I hit the same problem and need to reassemble it again manually the resync should go much faster.
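The write-intent bitmap Neil suggested can be added to an existing array in place; a one-line sketch:

```shell
# Add an internal write-intent bitmap so an interrupted array only
# resyncs the regions marked dirty, instead of the whole array.
mdadm --grow --bitmap=internal /dev/md0
```

There is a small write-performance cost, which is usually a fair trade on a media-storage array like this one.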

----------

## MikeHartman

Wanted to follow up on this in case anyone else finds it useful.

> From the discussion linked above, it appears that it's not the old kernel raid autodetect that is initializing my RAID, but mdadm itself, even though I don't see it in "rc-update show". I didn't get a clear explanation why that would be the case, but it seems to be.

I just figured out what is actually causing the auto-assembly. Googling and asking questions in forums failed me, both when I originally started this thread and when I revisited the issue today. But grepping the files in /etc/ netted me this gem in /etc/conf.d/rc:

```
# RC_VOLUME_ORDER allows you to specify, or even remove the volume setup
# for various volume managers (MD, EVMS2, LVM, DM, etc).  Note that they are
# stopped in reverse order.
RC_VOLUME_ORDER="raid evms lvm dm"
```

The "even remove" part caught my eye. I removed "raid" from that list and rebooted. Sure enough, my arrays are no longer started automatically, but are still picked up by "mdadm --assemble". So even with RAID autodetection disabled in the kernel, arrays using the non-autodetect partition type, no initrd, and no dedicated init scripts, Gentoo will still automatically scan for RAID arrays and try to start them. Which is fine, I guess; I just wish it were documented somewhere more obvious.

For users with simpler setups, or users who need to boot from a RAID array, that kind of userspace autodetection is probably sufficient. My nested array setup definitely complicates things, but since it's not used for actually running the system I was more comfortable just breaking it out into a separate initialization script anyway, which I'm now able to do. This lets me do stuff like verify the presence of all the drives by UUID before trying to initialize the array. I'm more worried about data integrity than uptime, so if one of the drives is offline coming back from a reboot I want to know about it before everything is activated.
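A minimal sketch of the kind of pre-assembly check described above — the UUIDs are placeholders for the real component UUIDs, and blkid is assumed to be available:

```shell
#!/bin/sh
# Verify every component drive is present (looked up by UUID) before
# assembling anything, so a missing disk is caught early rather than
# after the degraded array is already active.
for uuid in <uuid-of-drive-a> <uuid-of-drive-b> <uuid-of-drive-c>; do
    blkid -U "$uuid" >/dev/null || {
        echo "RAID component with UUID $uuid is missing; not assembling" >&2
        exit 1
    }
done

# All drives present: assemble the inner array first, then the RAID 5
# that contains it.
mdadm --assemble /dev/md1
mdadm --assemble /dev/md0
```

Run from a local boot hook (or a custom init script), this gives exactly the integrity-before-uptime behavior the post describes.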

----------

