# Update mdadm - Devices md[0,1,2] changed to [125,126,127]

## KaterGonzo

Hello dear community,

I have a big problem with my software RAID-1, because the raid device names have changed (old: /dev/md[0,1,2], new: /dev/md[125,126,127])!

Now I need your help, because I do not want to damage my system for good (at the moment it runs with one missing hard drive). Here is some general information:

kernel version: 2.6.34-gentoo-r6

mdadm version: mdadm-3.0

software raid: Raid-1 (mirror)

past history

Some time ago I created the RAID-1 with the following commands:

```
mdadm -C /dev/md0 -l 1 -n 2 /dev/sda1 /dev/sdb1
mdadm -C /dev/md1 -l 1 -n 2 /dev/sda2 /dev/sdb2
mdadm -C /dev/md2 -l 1 -n 2 /dev/sda3 /dev/sdb3
```

This is the old /etc/mdadm.conf:

```
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=0.90 UUID=cb28a94e:40d93554:4fc4e4ff:e7b86def
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=860e225c:1d59d285:9224bc57:2ab24b23
ARRAY /dev/md2 level=raid1 num-devices=2 metadata=0.90 UUID=f93c01cd:70d9be28:a740eabd:3d89ce85
```

In the past I regularly updated world and system, but I did not update the kernel because there was no reason to reboot. Result: new software versions but an old kernel.

A few days ago I had to reboot this system, which forced me to update the kernel (udev made trouble with the old kernel) with the help of a Gentoo live CD.

Problem: the RAID devices come up with changing names!

The new kernel works, but the raid devices changed from md[0,1,2] to 125,126,127 (I saw it from the live CD). The system did not boot because it could not find /dev/md[0,1,2]. Then I thought: "OK, change it in /etc/fstab and grub.conf and everything is OK". But it was not OK, because the names /dev/md125, /dev/md126, /dev/md127 changed again after each reboot! Example: /dev/md125 was my / partition; after the next reboot /dev/md125 was my /boot partition.

Then I made some changes (and I think some mistakes, too!), and in the end I disconnected the second hard drive, so I have a backup disk.

current status

My system boots properly (the raid device names no longer change), but one hard drive is missing. I'm wondering about the current names, because there is a /dev/md1 instead of /dev/md125 and /dev/md0 still exists. It seems that I have a mix of old and new names.

```
# ls -l /dev/md*
brw-rw---- 1 root disk 9,   0 Dec  3 14:46 /dev/md0
brw-rw---- 1 root disk 9,   1 Dec  3 14:46 /dev/md1
brw-rw---- 1 root disk 9, 126 Dec  3 14:46 /dev/md126
brw-rw---- 1 root disk 9, 127 Dec  3 14:46 /dev/md127

/dev/md:
total 0
lrwxrwxrwx 1 root root 8 Dec  3 14:46 126_0 -> ../md126
lrwxrwxrwx 1 root root 8 Dec  3 14:46 127_0 -> ../md127
lrwxrwxrwx 1 root root 6 Dec  3 14:46 1_0 -> ../md1
```

I do not understand it, but these names seem to be stable/permanent. Have a look at /proc/mdstat:

```
# cat /proc/mdstat
Personalities : [raid1]
md126 : active raid1 sda1[0]
      104320 blocks [2/1] [U_]

md1 : active raid1 sda2[0]
      1003968 blocks [2/1] [U_]

md127 : active raid1 sda3[0]
      77039616 blocks [2/1] [U_]

unused devices: <none>
```
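The [2/1] [U_] columns at least show what is wrong: each array is configured for 2 devices but only 1 is active. A little awk sketch (run on a copy of the output above, not on the live system) that flags the degraded arrays:

```shell
# Flag degraded arrays in saved /proc/mdstat output.  "[2/1]" means the
# array is configured for 2 devices but only 1 is currently active.
mdstat='md126 : active raid1 sda1[0]
      104320 blocks [2/1] [U_]
md1 : active raid1 sda2[0]
      1003968 blocks [2/1] [U_]
md127 : active raid1 sda3[0]
      77039616 blocks [2/1] [U_]'
result=$(printf '%s\n' "$mdstat" | awk '
    /^md/ { dev = $1 }
    /blocks/ {
        split($3, a, /[\[\]\/]/)   # "[2/1]" -> a[2]=2 configured, a[3]=1 active
        if (a[2] != a[3]) print dev " is degraded (" a[3] " of " a[2] " devices)"
    }')
printf '%s\n' "$result"
```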

1st question: Fix/change names?

How do I configure mdadm so the raid device names will not change in the future?

2nd question: add a new hard drive?

How do I add a new drive to this "corrupted" raid array? I need a safe step-by-step guide, because I do not want to crash my system!

3rd question: How does mdadm recognize the hard drives which belong to an array?

Is that information stored in the superblock? I see the UUID and the Preferred Minor:

```
# mdadm -E /dev/sda3
/dev/sda3:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 860e225c:1d59d285:9224bc57:2ab24b23 <--------------- !!!!!!!!!!!!!!
  Creation Time : Wed Sep  5 08:36:21 2007
     Raid Level : raid1
  Used Dev Size : 77039616 (73.47 GiB 78.89 GB)
     Array Size : 77039616 (73.47 GiB 78.89 GB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 127 <--------------------------------- !!!!!!!
    Update Time : Mon Dec  6 18:29:42 2010
          State : clean
 Active Devices : 1
Working Devices : 1
 Failed Devices : 1
  Spare Devices : 0
       Checksum : a6b08bd2 - correct
         Events : 40478144

      Number   Major   Minor   RaidDevice State
this     0       8        3        0      active sync   /dev/sda3

   0     0       8        3        0      active sync   /dev/sda3
   1     1       0        0        1      faulty removed
```
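As far as I understand, the two marked fields are exactly what matters here: the UUID identifies the array, and "Preferred Minor" is the md number the 0.90 superblock would like to get. A small sketch that pulls both fields out of saved examine output (the sample is copied from the output above):

```shell
# Extract the UUID and "Preferred Minor" fields from saved `mdadm -E` output.
# The sample is copied from the /dev/sda3 examine output above.
sample='          UUID : 860e225c:1d59d285:9224bc57:2ab24b23
Preferred Minor : 127'
uuid=$(printf '%s\n' "$sample" | awk -F' : ' '/UUID/ {print $2}')
minor=$(printf '%s\n' "$sample" | awk -F' : ' '/Preferred Minor/ {print $2}')
echo "array UUID: $uuid"
echo "preferred minor: $minor"
```

Those are the values to compare against the UUID= fields in mdadm.conf.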

4th question: Why did the system ignore mdadm.conf?

Have a look at the mdadm.conf: there are md0, md1 and md2 with the correct UUIDs.

```
## excerpt from mdadm.conf
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=0.90 UUID=cb28a94e:40d93554:4fc4e4ff:e7b86def
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=860e225c:1d59d285:9224bc57:2ab24b23
ARRAY /dev/md2 level=raid1 num-devices=2 metadata=0.90 UUID=f93c01cd:70d9be28:a740eabd:3d89ce85
```

thank you very much!

----------

## vitoriung

Obviously an issue with kernel autodetection?

I am currently having issues with superblocks after updating to mdadm 3.1, and one of my RAID 0 arrays changed to md127.

Do you see anything in the syslog?

----------

## Lebkoungcity

Hi schmidtsmikey,

I also had the problem of raid devices changing their names from e.g. md1 to md125 and so on after using the live CD. I found a solution that worked for me here: https://bugzilla.novell.com/show_bug.cgi?id=638532#c1:

I booted the live CD, mounted my root device (in my case it was md3, or md126 under the "new" name), copied /etc/mdadm.conf from the HDD to the live system's /etc, and unmounted md126 again. Then I executed these lines (adjusted for each of my devices):

```
mdadm -S /dev/md125
mdadm -A /dev/md1 --update=super-minor
```
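With several arrays the same two steps can be scripted. A dry-run sketch - the old:new name pairs below are made-up examples, so take your real mapping from /proc/mdstat and drop the echo only once the printed commands look right:

```shell
# Dry-run: print the stop/reassemble commands instead of executing them.
# The old:new name pairs are made-up examples; adjust them to your own
# md125 -> md1 etc. mapping before running anything for real.
pairs='md125:md1 md126:md2 md127:md3'
for p in $pairs; do
    stray=${p%%:*}     # current (wrong) name, e.g. md125
    wanted=${p##*:}    # name recorded in mdadm.conf, e.g. md1
    echo mdadm -S "/dev/$stray"
    echo mdadm -A "/dev/$wanted" --update=super-minor
done
```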

And now my devices are named md1, md3, ... again.

Maybe this could help you with your questions #1, #3 and #4 - but I'm not 100% sure!

Good luck!

Saludos,

Andy

----------

## augury

I ended up with a /dev/md127. This is contrary to my mdadm.conf.

In /dev/md/ it looks like this ("l" is aliased in my bashrc: l="ls -oaFilhq --color"):

```
601:38:14 ribbon /a/var # l /dev/md
md/    md0    md127
```

```
601:38:14 ribbon /a/var # l /dev/md/
total 0
3590 drwxr-xr-x  2 root  60 Dec 20 20:31 ./
  72 drwxr-xr-x 17 root 14K Dec 21 01:31 ../
3591 lrwxrwxrwx  1 root   8 Dec 20 20:31 ribbon:0 -> ../md127
```

wtf is that? mdraid is out of the runlevels, md0 is still there, and the only consistency is that 127 still exists.

In the "creation" of md127, has the udev monitoring service knocked out my active mounted raid disc?

WTF i now say!  /dev/md/127

what did i do?

I ran a round of sformat on my disc (did nothing but knock the stink off a disc wired for 512b sectors), then:

```
fdisk -b 2048   # this on a and b (with the o for dos partitioning; sformat gave me sun--ironic)
mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 --chunk=4 --delay=16 /dev/sda1 /dev/sdb1
mkfs.xfs -b size=4096 -i size=2048 -i maxpct=10 -l size=4096b -s size=2048 -n size=2b -d agsize=131067b -f /dev/md0
```

I have a /dev/md/ribbon:0.

Is this syntactically correct in the fstab? I want the mount.

md127 is logically, um, 127 more than 0.

----------

## augury

 *KaterGonzo wrote:*   

> 
> 
> 1st question: Fix/change names?
> 
> How do I configure mdadm so the raid device names will not change in the future?
> ...

 

It seems to me md127 just showed up, like a hot-plugged device.

I could make the assumption and go with the 127, or I could use this {$HOSTNAME}: link I've been provided (of course we're back to the logic of 0 1 2 3 etc.).

Grub should understand me and my logic before the kernel gets there...

----------

## augury

ok  */etc/fstab wrote:*   

> 
> 
> /dev/md/ribbon:0        /a                      xfs     noatime   0 0
> 
> 

 

mounts

```
ribbon / # echo $HOSTNAME
ribbon
```

----------

## augury

 *Quote:*   

> fdisk -b 2048

 

Don't do that.  It only makes fdisk create the wrong partition size.

----------

## drescherjm

I keep getting this at work and it's driving me nuts. Before a reboot I had md0, md1 and md2. Now I have this mess:

```
Personalities : [raid6] [raid5] [raid4] [raid0] [raid1] [raid10]
md125 : active raid6 sda5[0] sdh5[6] sdg5[5] sdb5[1] sdj5[8] sdd5[9](S) sdi5[7] sdf5[4] sde5[3] sdc5[2]
      13617515520 blocks super 1.2 level 6, 128k chunk, algorithm 2 [9/9] [UUUUUUUUU]

md126 : active raid1 sda1[0] sdb1[2] sde1[5] sdh1[4] sdg1[7] sdf1[3] sdc1[1] sdd1[8](S) sdj1[6]
      272960 blocks [8/8] [UUUUUUUU]

md127 : inactive sdi1[8](S)
      272960 blocks

md1 : active raid6 sdg3[5] sdh3[2] sdc3[0] sdf3[7] sdb3[3] sde3[4] sdd3[9](S) sda3[6] sdi3[1] sdj3[8]
      51447424 blocks level 6, 64k chunk, algorithm 2 [9/9] [UUUUUUUUU]

unused devices: <none>
```

md126 and md127 were md0, md125 was md2, and md1 was md1.

Edit: Ahhh I see my /etc/mdadm.conf is out of sync with the arrays:

```
cat /etc/mdadm.conf
ARRAY /dev/md1 level=raid6 num-devices=9 metadata=0.90 spares=1 UUID=0efd4edd:3d9f01aa:48122f25:357f81a7
   devices=/dev/sdc3,/dev/sdm3,/dev/sdh3,/dev/sdb3,/dev/sde3,/dev/sdg3,/dev/sda3,/dev/sdf3,/dev/sdk3,/dev/sdd3
ARRAY /dev/md2 level=raid6 num-devices=9 metadata=0.90 spares=1 UUID=f7db2a98:a8f55b45:01d44904:bd9194f4
   devices=/dev/sdh4,/dev/sdb4,/dev/sdc4,/dev/sda4,/dev/sdm4,/dev/sdf4,/dev/sdg4,/dev/sde4,/dev/sdk4,/dev/sdd4
ARRAY /dev/md0 level=raid1 num-devices=8 metadata=0.90 spares=2 UUID=d542ee7c:44d92786:d27241be:30e22931
   devices=/dev/sda1,/dev/sdc1,/dev/sdb1,/dev/sdf1,/dev/sdh1,/dev/sde1,/dev/sdk1,/dev/sdg1,/dev/sdd1,/dev/sdm1
```

Could that be causing this mess? I am using a 2.6.38 gentoo-sources kernel built with genkernel and mdadm support.

----------

## dacid

Has anyone found a solution for this? These minor numbers being changed by the kernel are driving me crazy....

Thanks

Dave

----------

## neofutur

I just had the same problem: two of my raid arrays were split, and md126 and md127 appeared.

It seems fixed after stopping the "new" arrays and re-adding the partitions to the "normal" arrays.
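Roughly what I did, as a dry-run sketch that only prints the commands - all device names here are hypothetical, so read your own from /proc/mdstat and `mdadm -E` first:

```shell
# Dry-run: print (do not run) the stop and re-add commands.  All names are
# hypothetical: md126/md127 stand for the stray arrays, md0/md1 for the
# "normal" ones, and sdb1/sdb2 for the partitions that were split off.
for stray in /dev/md126 /dev/md127; do
    echo mdadm --stop "$stray"
done
echo mdadm /dev/md0 --add /dev/sdb1
echo mdadm /dev/md1 --add /dev/sdb2
```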

I'm also using kernel 2.6.38.2.

Another thread says that upgrading mdadm from 3.1.4 to 3.2.1 should help:

http://ubuntuforums.org/showpost.php?p=11605115&postcount=5

Could it also be a bug in the 2.6.38 kernel?

Another clue is here:

http://www.issociate.de/board/post/508621/Question_on_md126_/_md127_issues.html

----------

## wrs4

No doubt everyone here has already solved their problems, but since I ran into this thread looking for answers, I figured I'd document what happened to me when I updated portage and re-emerged the system two days ago.

Configuration:

```
 ~ # fdisk -l /dev/sda
Disk /dev/sda: 40.0 GB, 40020664320 bytes, 78165360 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x796ff828

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048      309247      153600   fd  Linux raid autodetect
/dev/sda2          309248    78165359    38928056    5  Extended
/dev/sda5          311296     2408447     1048576   82  Linux swap / Solaris
/dev/sda6         2410496     4507647     1048576   fd  Linux raid autodetect
/dev/sda7         4509696    77910015    36700160   fd  Linux raid autodetect
/dev/sda8        77912064    78165359      126648   82  Linux swap / Solaris

 ~ # fdisk -l /dev/sdb
Disk /dev/sdb: 40.0 GB, 40020664320 bytes, 78165360 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xe0e8e0e8

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048      309247      153600   fd  Linux raid autodetect
/dev/sdb2          309248    78165359    38928056    5  Extended
/dev/sdb5          311296     2408447     1048576   82  Linux swap / Solaris
/dev/sdb6         2410496     4507647     1048576   fd  Linux raid autodetect
/dev/sdb7         4509696    77910015    36700160   fd  Linux raid autodetect
/dev/sdb8        77912064    78165359      126648   82  Linux swap / Solaris
```

sd[ab]1 are /boot

sd[ab]6 are /

sd[ab]7 are /dev/vg[home|opt|portage|tmp|usr|var]

The first thing I did was boot my system using a live CD, referencing http://www.gentoo.org/doc/en/gentoo-x86+raid+lvm2-quickinstall.xml.

Instead of creating my devices (which already existed), I did the following (modified from http://superuser.com/questions/346719/how-to-change-the-name-of-an-md-device-mdadm):

```
mdadm --stop /dev/md125
mdadm --stop /dev/md126
mdadm --stop /dev/md127
mdadm --assemble --update=super-minor /dev/md1 /dev/sda1 /dev/sdb1
mdadm --assemble --update=super-minor /dev/md6 /dev/sda6 /dev/sdb6
mdadm --assemble --update=super-minor /dev/md7 /dev/sda7 /dev/sdb7
```

(don't mind my weird md device names; I did it that way so I know which volumes they're on).

Next, I mounted the mountable RAID volumes:

```
mount /dev/md6 /mnt/gentoo
mount /dev/md1 /mnt/gentoo/boot
```

Then I activated the LVM volumes on my md7 device:

```
vgchange -a y
```

Next, I mounted the LVs (home, opt, tmp, usr, and var). What you do and how you do it will be unique to you, but I got tired of typing the same thing over and over again while I was debugging this:

```
for v in `lvs |grep vg |cut -d' ' -f3 |grep -v portage`; do mount /dev/vg/$v /mnt/gentoo/$v; done;
mount /dev/vg/portage /mnt/gentoo/portage
```

Then I mounted some special devices:

```
mount -t proc proc /mnt/gentoo/proc
for v in dev sys; do mount --rbind /$v /mnt/gentoo/$v; done;
```

Then I chroot'd:

```
chroot /mnt/gentoo /bin/bash
```

Then, per http://en.wikipedia.org/wiki/Mdadm, I did:

```
mdadm -Es >>/etc/mdadm/mdadm.conf
```

Once that was done, I made sure that the relevant entries in /boot/grub/grub.conf, /etc/fstab, /etc/mtab, and /etc/mdadm.conf were correct.
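That cross-check can also be scripted. A sketch on stand-in data - the fstab and mdadm.conf lines below are simplified examples, not my real files, so point it at your own files before trusting it:

```shell
# Check that every md device mentioned in fstab has an ARRAY line in
# mdadm.conf.  The samples below stand in for the real /etc/fstab and
# /etc/mdadm.conf contents.
fstab_sample='/dev/md6 / ext3 noatime 0 1
/dev/md1 /boot ext2 noauto,noatime 1 2'
conf_sample='ARRAY /dev/md1 metadata=0.90 UUID=cb28a94e:40d93554:4fc4e4ff:e7b86def
ARRAY /dev/md6 metadata=0.90 UUID=860e225c:1d59d285:9224bc57:2ab24b23'
missing=""
for dev in $(printf '%s\n' "$fstab_sample" | awk '$1 ~ /^\/dev\/md/ {print $1}'); do
    printf '%s\n' "$conf_sample" | grep -q "ARRAY $dev " || missing="$missing $dev"
done
echo "missing from mdadm.conf:${missing:- none}"
```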

That seems to have fixed it for me.

----------

## drescherjm

 *Quote:*   

> No doubt everyone here already has their problems solved

 

In some ways this is still driving me nuts, especially with Nagios.

----------

