# Raid device gets autoassembled but without partitions

## mangueireiro

Hello, everyone. I decided to install Gentoo on a RAID device inside a VirtualBox virtual machine, and I have the following to report:

First, I made a RAID level 0 device, /dev/md1, out of /dev/sdb and /dev/sdc.

Then I made a primary ext4 partition on it (/dev/md1p1).
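The setup was presumably something like the following (a transcript sketch, run as root; the exact fdisk steps are assumed):

```shell
# Create the RAID0 array with the old 0.90 metadata format
# (the mdadm.conf below shows metadata=0.90)
mdadm --create /dev/md1 --metadata=0.90 --level=0 --raid-devices=2 /dev/sdb /dev/sdc

fdisk /dev/md1        # create one primary partition -> /dev/md1p1
mkfs.ext4 /dev/md1p1  # format it ext4
```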

I compiled the kernel and built the initramfs with /etc/mdadm.conf integrated.

If I don't explicitly list the member devices in that file, the RAID device doesn't get assembled; instead I get an error saying that the superblock of /dev/sdb is equal to the superblock of /dev/sdb1. I never created a /dev/sdb1, but it somehow appears. I think it has to do with the RAID metadata, /dev/sdb being the "first" part of the RAID device.
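A likely explanation: 0.90 metadata stores the superblock near the end of the device, so if a leftover partition table makes the kernel create a /dev/sdb1 that ends near the end of the disk, both device nodes expose the same superblock. A sketch of how to check (run as root; the wipefs line is destructive and only for a disk you mean to reuse whole):

```shell
# Compare superblocks on the whole disk and the phantom partition
mdadm --examine /dev/sdb
mdadm --examine /dev/sdb1   # same UUID here means both nodes see one superblock

# Optionally wipe stale signatures so /dev/sdb1 is never created:
# wipefs --all /dev/sdb     # DESTRUCTIVE: erases all signatures on the disk
```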

Now, I've the following /etc/mdadm.conf file:

```

DEVICE partitions

CREATE owner=root group=disk mode=0660 auto=yes

HOMEHOST <system>

ARRAY /dev/md1 metadata=0.90 UUID=7513fe63:71b17520:cb201669:f728008a level=raid0 num-devices=2 devices=/dev/sdb,/dev/sdc

```
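For reference, ARRAY lines like the one above don't have to be written by hand; mdadm can generate them by scanning the member devices (run as root):

```shell
# Print ARRAY lines for all arrays found on scanned devices
mdadm --examine --scan

# Append them to the config file used by the initramfs
mdadm --examine --scan >> /etc/mdadm.conf
```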

Now, with the kernel compiled by genkernel, /etc/mdadm.conf integrated into the initramfs, and "domdadm" on the GRUB kernel line together with root=/dev/md1p1, I get:

```

mdadm: /dev/md1 has been started with two drives.

>> Determining root device...

!! Block device /dev/md1p1 is not a valid root device...

...

```

I dropped into the rescue shell and confirmed that /dev/md1 exists but /dev/md1p1 doesn't.

But, if I do 

```

mdadm --detail /dev/md1

```

the partition /dev/md1p1 appears. By exiting the shell and entering /dev/md1p1 as the root device, I can boot into the system, and it's a nice feeling to be in it.

That's it. What can I do so that /dev/md1p1 gets created automatically?
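For the record, from the rescue shell any of these should make the kernel register the array's partitions (run as root; partprobe only if parted is in the initramfs):

```shell
mdadm --detail /dev/md1        # querying the array makes partition nodes appear
blockdev --rereadpt /dev/md1   # ask the kernel to re-read the partition table
# partprobe /dev/md1           # same idea, via parted's helper
```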

----------

## Goverp

Your analysis is correct; I have a similar setup but wrote my own initramfs script, so I could add 

```
mdadm --detail --scan
```

after the first call to mdadm. That makes the partition devices /dev/md1p1 etc.

So you need to persuade the genkernel people to include such a call in the init script it builds, or edit that script and insert the call manually.
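The addition Goverp describes could look something like this inside a custom init script (a sketch; `wait_for_node` is a made-up helper, not genkernel code, and the mdadm calls are commented out because they need real hardware and root):

```shell
# Hypothetical initramfs helper: wait up to $2 seconds for device node $1
wait_for_node() {
    n=0
    while [ ! -e "$1" ] && [ "$n" -lt "$2" ]; do
        sleep 1
        n=$((n + 1))
    done
    [ -e "$1" ]   # succeed only if the node exists now
}

# mdadm --assemble --scan            # first mdadm call: assemble the arrays
# mdadm --detail --scan > /dev/null  # side effect: md partition nodes show up
# wait_for_node /dev/md1p1 10 || echo "root device never appeared"
```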

----------

## szatox

I'm using LVM on top of RAID, but if you want to use partitions on the RAID device itself, you might also want to have a look at genkernel's options:

 *Quote:*   

> 
> 
>        --[no-]mdadm
> 
>            Includes or excludes mdadm/mdmon support. Without sys-fs/mdadm[static] installed, this will compile mdadm for you.
> ...

 
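Putting that option together with the config file, the rebuild might look like this (a sketch; the flags are from genkernel's man page, the paths are assumed):

```shell
# Rebuild only the initramfs, with mdadm support and the config embedded
genkernel --mdadm --mdadm-config=/etc/mdadm.conf initramfs
```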

----------

## Roman_Gruber

 *mangueireiro wrote:*   

> 
> 
> First, I made a raid device level 0, /dev/md1, out of /dev/sdb and /dev/sdc.
> 
> 

 

Well, if you use LVM anyway, it is sufficient to just make the LVM container out of these two hard drives. You do not need RAID for that; LVM handles it for you.

It looks more like a boot or init issue from my point of view.

Your init script fails to mount it in a proper way.

Personally, I think LVM is enough for you in this regard. I never touched the software RAID thing because LVM does these things for me anyway. If it is a new install, redo it with only an LVM container below, then make your volume groups and, inside them, your filesystems. For myself, the genkernel initramfs works quite well with a custom-made kernel.

It could also be a kernel-related issue; you need the kernel, userspace utils, and config files working together.
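The LVM-only alternative described here might look like this (run as root; the names vg0/root and the stripe size are made up for illustration; a striped LV plays the role of RAID0):

```shell
pvcreate /dev/sdb /dev/sdc                    # mark both disks as physical volumes
vgcreate vg0 /dev/sdb /dev/sdc                # one volume group spanning both
lvcreate -i 2 -I 64 -l 100%FREE -n root vg0   # -i 2: stripe across both PVs
mkfs.ext4 /dev/vg0/root                       # filesystem goes on the LV directly
```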

----------

## mangueireiro

 *Goverp wrote:*   

> Your analysis is correct; I have a similar setup but wrote my own initramfs script, so I could add 
> 
> ```
> mdadm --detail --scan
> ```
> ...

 

Ok thanks   :Smile:  .

 *szatox wrote:*   

> I'm using lvm on top of raid, but if you want to use partitions on raid itself you might also want to have a look at genkernel's options
> 
>  *Quote:*   
> 
>        --[no-]mdadm
> ...

 

I have used the first two parameters, but not that last one. I don't know if it helps in this case.

 *tw04l124 wrote:*   

>  *mangueireiro wrote:*   
> 
> First, I made a raid device level 0, /dev/md1, out of /dev/sdb and /dev/sdc.
> 
>  
> ...

 

But will it RAID?

The virtual drives are backed by files on two different physical disks, so I want them to work in parallel; that's why I made them RAID level 0.

----------

## Roman_Gruber

Hi,

I think so, according to this:

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Logical_Volume_Manager_Administration/raid_volumes.html#create-raid

 *Quote:*   

> To create a RAID logical volume, you specify a raid type as the --type argument of the lvcreate command. Usually when you create a logical volume with the lvcreate command, the --type argument is implicit. For example, when you specify the -i stripes argument, the lvcreate command assumes the --type stripe option. When you specify the -m mirrors argument, the lvcreate command assumes the --type mirror option. When you create a RAID logical volume, however, you must explicitly specify the segment type you desire. The possible RAID segment types are described in Table 4.1, “RAID Segment Types”. 

 

As I understood it, with LVM you can specify how big the data chunks are and how they are laid out, and whether they are saved in duplicate (mirrored) or otherwise. LVM is a very powerful tool with many different features.
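The lvcreate variants from the quoted Red Hat documentation could be sketched like this (the volume group name vg0 and sizes are made up; run as root):

```shell
lvcreate -i 2 -I 64 -L 10G -n striped_lv vg0       # -i implies --type striped
lvcreate -m 1 -L 10G -n mirrored_lv vg0            # -m implies --type mirror
lvcreate --type raid1 -m 1 -L 10G -n raid1_lv vg0  # explicit RAID segment type
```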

----------

## mangueireiro

 *tw04l124 wrote:*   

> Hi,
> 
> I think yes according to this
> 
> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Logical_Volume_Manager_Administration/raid_volumes.html#create-raid
> ...

 

Ok thanks   :Smile: 

----------

