# Using mdadm for sw RAID creates spurious md0 device

## sirlark

Hi,

I was trying to create a RAID-1 device out of two SATA hard drives (not the root partition, just a data partition). I already had data on one drive before purchasing an identical one. I was following the Gentoo wiki guide on creating a software RAID, leaving out all the boot-specific stuff. Here is my HDD layout:

/dev/sda   (500GB SATA drive; boot, swap and root partitions)

/dev/sdb   (1.5TB SATA drive; data partition, 2/3 full)

/dev/sdc   (1.5TB SATA drive; new and unformatted)

Using cfdisk, I partitioned /dev/sdc with type 'Linux raid autodetect'. I rebuilt my kernel with RAID-1 support and rebooted. I then saw that I had both a /dev/md and a /dev/md0. Nothing in the guide mentioned /dev/md0, so I left it alone and continued to create the RAID on /dev/sdc, specifying a two-disk RAID with one disk missing initially. This created /dev/md1. I copied the data from /dev/sdb to /dev/md1, unmounted and repartitioned /dev/sdb, and added /dev/sdb to the /dev/md1 array using mdadm again. I now have a working RAID-1 array, which was surprisingly simple. However, I still have two problems.
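For reference, the steps I describe above look roughly like this (device names are from this post; the exact flags are my reading of the wiki guide, so double-check them against your own layout before running anything):

```shell
# Create a two-disk RAID-1 with the second member deliberately missing.
# Destructive on /dev/sdc1 -- run only against the new, empty disk:
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 missing

# ...copy the data from /dev/sdb onto the degraded /dev/md1, then
# repartition /dev/sdb and add it as the second member:
mdadm --add /dev/md1 /dev/sdb1

# Watch the resync progress:
cat /proc/mdstat
```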

1. I've added mdraid to the boot runlevel and put an entry in my fstab to mount /dev/md1 at /mnt/data1500, but /dev/md1 does not mount at boot. Actually, I can't even mount it manually until I restart the mdraid service, which never actually 'stops', i.e. it doesn't seem to be started during boot at all. There are no boot messages to that effect either, yet `rc-update show` indicates it is active in the boot runlevel.

2. What is the /dev/md0 device, and can I get rid of it? I'd be happy to move /dev/md1 over to /dev/md0 if necessary, e.g. if a /dev/md0 is required in order to have a /dev/md1, a name change is fine by me. I suspect it was created because the kernel auto-detected /dev/sdc1 as a RAID partition and hence created the device before mdadm created /dev/md1, but I'm guessing. /proc/mdstat makes no reference to /dev/md0 at all.

Thanks

----------

## gazj

I also have an md0, even though my arrays start at 1 (and I prefer them to start at 1).

I do not start mdraid in any runlevel. Let the kernel detect the RAID arrays instead, with the following kernel options:

Device Drivers > Multiple devices driver support (RAID and LVM)

Make sure you have 'Autodetect RAID arrays during kernel boot' built in. I don't use an initrd, but if you do, I guess you can build it as a module.
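If you want to confirm those options actually made it into the running kernel, one quick check (this assumes the kernel was built with CONFIG_IKCONFIG_PROC so /proc/config.gz exists; otherwise grep /usr/src/linux/.config instead):

```shell
# All of these should report =y for in-kernel autodetect of RAID-1 members:
zgrep -E 'CONFIG_BLK_DEV_MD|CONFIG_MD_AUTODETECT|CONFIG_MD_RAID1' /proc/config.gz
```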

Hope that helps

EDIT: In fact, I don't even have mdadm installed! I used it to set up my arrays from the minimal install CD and never installed it when I chrooted into my install environment, or at any time after.

----------

## sirlark

RAID autodetect during boot is already on in my kernel; /dev/md1 only shows up AFTER I start the mdraid service. dmesg reports:

```
[    1.034970] md: invalid raid superblock magic on sdc1
[    1.035058] md: sdc1 does not have a valid v0.90 superblock, not importing!
[    1.059787] md: invalid raid superblock magic on sdb1
[    1.059866] md: sdb1 does not have a valid v0.90 superblock, not importing!
[    1.059948] md: Scanned 2 and added 0 devices.
```

----------

## Mad Merlin

You say that you added /dev/sdb and /dev/sdc to your RAID array, but did you actually add the whole drive, or did you create a single partition on each drive (/dev/sdb1 and /dev/sdc1, both with type fd) and then add the single partition to the array? Autodetect will only work in the latter case, not the former. The reason for that is that autodetect looks for partitions of type fd, not for whole disks.
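One way to check which case you're in (plain fdisk output; the grep is just a convenience and assumes your fdisk version labels the type as 'raid'):

```shell
# Partitions of type fd show up as 'Linux raid autodetect';
# a whole-disk member would have no such partition table entry at all:
fdisk -l /dev/sdb /dev/sdc | grep -i raid
```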

----------

## gazj

Back up your data, repartition your disks with 'Linux raid autodetect' partitions again, then run the following on each partition:

```
mdadm --misc --zero-superblock /dev/sdb1
mdadm --misc --zero-superblock /dev/sdc1
```

    If the device contains a valid md superblock, the block is over-written with zeros. With --force the block where the superblock would be is over-written even if it doesn't appear to be valid.

Then carry on as normal.

When the array is recreated, you will hopefully have new, valid superblocks.

----------

## sirlark

@Mad Merlin: Sorry, taking shortcuts whilst posting never helps. I partitioned both drives with a single partition each of type 'linux raid autodetect'. So I have /dev/sdb1 and /dev/sdc1, which I added to the array /dev/md1. I created an ext4 filesystem on /dev/md1.

The autodetect is not picking up the partitions because they have newer-than-0.90 superblocks. I saw this mentioned as an issue in the Gentoo Software RAID howto, but only in the context of booting off the array. With mdraid running, using the newer superblock versions is not a problem. What I don't understand is why mdraid does not start at boot unless I start it manually.
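To confirm the superblock-version theory, mdadm can report the metadata version directly (read-only, non-destructive):

```shell
# In-kernel autodetect only understands the old 0.90 format; anything
# newer (1.0/1.1/1.2) must be assembled in userspace by mdadm/mdraid:
mdadm --examine /dev/sdb1 | grep -i version
mdadm --detail /dev/md1 | grep -i version
```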

----------

## jawsdaws

I just set up a RAID-0 and had the same issue. I started over with a clean array and forced mdadm to use 0.90 metadata. I'm not sure what the difference between 0.90 and the newer versions is, but the array is getting recognized by the kernel now.

```
mdadm --create --metadata=0.9 BLAH BLAH BLAH  
```

I don't know if this is a good, bad, or ugly approach, but it works.
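An alternative that avoids downgrading the metadata, if I'm reading the earlier posts right, is to let the mdraid init script assemble the array at boot from a config file (the /etc/mdadm.conf path is what Gentoo's mdadm package uses, as far as I know; check your install):

```shell
# Record the existing array so the init script can assemble it at boot:
mdadm --detail --scan >> /etc/mdadm.conf
rc-update add mdraid boot
```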

----------

