# Problem assembling RAID with mdadm

## anselm84

Hello everyone,

I have a problem (re)assembling a RAID with mdadm. When I boot up the machine the RAID gets properly assembled by the kernel:

```
[   11.724020] md/raid:md127: device sde1 operational as raid disk 0
[   11.724024] md/raid:md127: device sdg1 operational as raid disk 1
[   11.724408] md/raid:md127: allocated 3250kB
[   11.727432] md/raid:md127: raid level 5 active with 2 out of 3 devices, algorithm 2
[   11.727437] RAID conf printout:
[   11.727439]  --- level:5 rd:3 wd:2
[   11.727441]  disk 0, o:1, dev:sde1
[   11.727443]  disk 1, o:1, dev:sdg1
[   11.727469] md127: detected capacity change from 0 to 5999994863616
```

```
files ~ # cat /proc/mdstat 
Personalities : [raid1] [raid6] [raid5] [raid4] 
md127 : active (auto-read-only) raid5 sde1[0] sdg1[1]
      5859369984 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
```

Now when I stop the RAID and try to reassemble it with mdadm I get an error:

```
files ~ # mdadm --stop /dev/md127 
mdadm: stopped /dev/md127
files ~ # mdadm --assemble /dev/sde1 /dev/sdg1
mdadm: device /dev/sde1 exists but is not an md array.
```

Which is strange, because everything looks fine when examining the drives, as far as I can tell:

```
files ~ # mdadm --misc --examine /dev/sdg1 
/dev/sdg1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : fb775930:201599d9:dbfbfcd0:3a715afa
           Name : files:2  (local to host files)
  Creation Time : Sat Apr 13 18:08:07 2013
     Raid Level : raid5
   Raid Devices : 3
 Avail Dev Size : 5859371008 (2793.97 GiB 3000.00 GB)
     Array Size : 11718739968 (5587.93 GiB 5999.99 GB)
  Used Dev Size : 5859369984 (2793.97 GiB 3000.00 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : ef15f85a:c013dcd4:70c500c0:9ddcf53b
    Update Time : Mon Apr 15 09:02:22 2013
       Checksum : 6c746bd2 - correct
         Events : 45390
         Layout : left-symmetric
     Chunk Size : 512K
   Device Role : Active device 1
   Array State : AA. ('A' == active, '.' == missing)

files ~ # mdadm --misc --examine /dev/sde1 
/dev/sde1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : fb775930:201599d9:dbfbfcd0:3a715afa
           Name : files:2  (local to host files)
  Creation Time : Sat Apr 13 18:08:07 2013
     Raid Level : raid5
   Raid Devices : 3
 Avail Dev Size : 5859371008 (2793.97 GiB 3000.00 GB)
     Array Size : 11718739968 (5587.93 GiB 5999.99 GB)
  Used Dev Size : 5859369984 (2793.97 GiB 3000.00 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 34b25ce4:bd4859b4:608351f2:b5d31464
    Update Time : Mon Apr 15 09:02:22 2013
       Checksum : 2fc5f21c - correct
         Events : 45390
         Layout : left-symmetric
     Chunk Size : 512K
   Device Role : Active device 0
   Array State : AA. ('A' == active, '.' == missing)

Model: ATA WDC WD30EZRX-00D (scsi)
Disk /dev/sde: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  3000GB  3000GB               primary  raid

Model: ATA ST3000DM001-1CH1 (scsi)
Disk /dev/sdg: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  3000GB  3000GB               primary  raid
```

Currently I suspect a bug in mdadm, because the kernel has no problem assembling the RAID. I already tried the latest version of mdadm from portage, but with the same result. Then again, maybe I am just doing something completely wrong, so I would really appreciate your help, since I am sure you are much more experienced with Linux and RAID than I am.

----------

## s4e8

Just use "mdadm -As"; it will assemble the RAID in degraded mode. You should add a replacement disk quickly, though; it's not good to keep running a RAID5 in degraded mode.
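For reference, the assemble-then-replace sequence would look roughly like this (the replacement partition name /dev/sdf1 is hypothetical; use whatever device the new disk gets):

```shell
# -A = --assemble, -s = --scan: assemble all arrays found via
# mdadm.conf / superblock scanning, even if degraded
mdadm -As

# Once a replacement disk is partitioned, add it so the array
# can rebuild onto it (device name is an example only):
mdadm /dev/md127 --add /dev/sdf1

# Watch the rebuild progress:
cat /proc/mdstat
```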

----------

## anselm84

There is no data on the RAID yet, because of this problem, so it's OK to experiment with it and let it run in degraded mode...

Using mdadm -As also doesn't assemble the RAID. I had already tried it in the beginning and just ran it again to be sure. No change...

----------

## s4e8

Then add

```
DEVICE /dev/sd[b-z]1
ARRAY /dev/md0 NAME=files:2
AUTO +1.x homehost -all
```

to /etc/mdadm.conf and try again.

----------

## Mad Merlin

You need to name the md device you want to assemble the array as; your command was telling mdadm to assemble /dev/sde1 as an array with one component drive (/dev/sdg1). Try this:

```
mdadm /dev/md1 --assemble /dev/sde1 /dev/sdg1
```

----------

## Goverp

 *anselm84 wrote:*   

> ...
> 
> I have a problem (re)assembling a RAID with mdadm. When I boot up the machine the RAID gets properly assembled by the kernel:
> 
> ...
> ...

 

As noted by Mad Merlin, your mdadm command had incorrect parameters.

But there's a second problem here. You are mixing kernel RAID assembly with mdadm assembly; in my experience, that leads to pain. I presume you are using auto-assembly or kernel command-line parameters to assemble your array. The kernel's assembly code appears to be based on an old version of mdadm that only supports v0.90 superblocks, and that kernel code is definitely deprecated. Your --examine output shows your array components were created with v1.2 superblocks, so using mdadm to repair or modify the array will do subtly different things from what the kernel does when assembling.

I had a RAID 5 array created with v1.0 superblocks where one drive was flagged bad due to a transient I/O error, and I was using kernel assembly. When I tried to recover the array, mdadm reported either v0.90 or v1.0 superblocks, depending on when I called it.

I suggest that if you are set on using kernel assembly, you reformat everything to v0.90 superblocks. Better, though, is to move off the deprecated kernel code and build yourself an initramfs that does explicit mdadm assembly of your v1.2 superblocks. That has the advantage of letting you set up write-intent bitmaps, which really speed up re-syncing a drive in recovery situations.
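As a rough sketch (assuming the array ends up assembled as /dev/md1, per the earlier post), a write-intent bitmap can be added to an existing array with mdadm's grow mode:

```shell
# Add an internal write-intent bitmap to an assembled array.
# After a crash or a transiently failed disk, only the regions
# marked dirty in the bitmap need re-syncing, not the whole array.
mdadm --grow --bitmap=internal /dev/md1

# Verify it took effect (look for a "bitmap:" line under the array):
cat /proc/mdstat
```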

If you're worried that an initramfs makes things more complex and slows the boot: true, but it's not as bad as it sounds. For a start, you already have an initramfs - if you don't explicitly include one in the kernel, it builds a dummy one. Second, you don't need to add and maintain a new file in /boot; the better way is to incorporate the initramfs into the kernel itself. Another advantage of this is that it gets rebuilt whenever you make a new kernel, keeping the initramfs code current. You'll find instructions for doing this in the Gentoo wiki.

----------

## anselm84

Thank you guys very much!

```
mdadm /dev/md1 --assemble /dev/sde1 /dev/sdg1
```

did the job. I am currently hitting my head against the wall for not seeing it.

@Goverp:

Thanks for the tip. Strangely, the kernel was already ignoring my two other RAIDs because they have v1.2 superblocks. But I have now completely removed the auto-assembly feature from the kernel, just to be on the safe side. Since I don't boot from the RAID and also have an additional encryption layer, I already do a lot of work before the "final mount", so adding the assembly of the RAIDs there is no problem.
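If it helps anyone scripting that pre-mount assembly: assembling by the array UUID (taken from the --examine output earlier in this thread) is more robust than listing component devices, since /dev/sdX names can change between boots:

```shell
# Assemble by array UUID rather than by component device names,
# so the script survives disks being re-enumerated on a later boot.
# The UUID is the "Array UUID" from mdadm --examine; /dev/md1 is
# just the name chosen earlier in this thread.
mdadm --assemble /dev/md1 --uuid=fb775930:201599d9:dbfbfcd0:3a715afa
```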

Once again thank you very much everyone!

----------

