# [solved] raid0 raid1 mdadm problem md125 etc

## gabrielm79

Hello everybody, happy holidays to all.

I have an old PCI SSD, an OCZ RevoDrive. I had managed to install Gentoo on it and used it for almost 5-6 years, but with GRUB legacy only.

Now I had to do a fresh install. At #gentoo (irc.freenode.org) people advised me to use software RAID instead of fakeraid. That's what I did.

Everything works fine apart from the names of the RAID devices.

I only have md125, md126, etc., and not the device names I gave during the installation.

Here is the listing of /dev:

```
# ls /dev/md*

/dev/md125  /dev/md126  /dev/md127  /dev/mdev.seq

/dev/md:

livecd:1  livecd:2  livecd:3

```

Here is /etc/mdadm.conf:

```
cat /etc/mdadm.conf 

ARRAY /dev/md1 metadata=1.2 name=livecd:1 UUID=73a70668:67af19d1:3560b2b0:d28fee64

ARRAY /dev/md2 metadata=1.2 name=livecd:2 UUID=a4a8870f:e0512cd5:bdd8334f:48931434

ARRAY /dev/md3 metadata=1.2 name=livecd:3 UUID=dbc9024a:53a66fed:a87e0939:5fa7898a

```

Output of /proc/mdstat 

```
cat /proc/mdstat 

Personalities : [linear] [raid0] [raid1] 

md125 : active raid1 sdf1[0] sdg1[1]

      306880 blocks super 1.2 [2/2] [UU]

      

md126 : active raid1 sdf2[0] sdg2[1]

      12574720 blocks super 1.2 [2/2] [UU]

      

md127 : active raid0 sdf3[0] sdg3[1]

      91382784 blocks super 1.2 512k chunks

      

unused devices: <none>

```

fstab 

```
/dev/md1                /boot           ext3            noauto,noatime  1 2 

/dev/md2                none            swap            sw              0 0

/dev/md3                /               ext4            noatime         0 1 

```

I want to say here that, according to /etc/mdadm.conf and fstab, I really don't have /dev/md[1,2,3]; instead I have /dev/md/livecd[1,2,3], which are symlinks to /dev/md[125,126,127].

I want to ask whether the symlinks are generated from /etc/mdadm.conf, whether it is fine to change the names in /etc/mdadm.conf to md[1,2,3], and whether those should then be symlinks to /dev/md[125,126,127].

I have used RAID5 once before, but there was no such device as /dev/md125 back then.

Which is the safest way to assemble the devices?

```
mount |grep -i md

/dev/md127 on / type ext4 (rw,noatime,stripe=256,data=ordered)

```

Thanks!

Last edited by gabrielm79 on Thu Dec 31, 2015 3:56 pm; edited 1 time in total

----------

## frostschutz

Remove metadata= and name= from your mdadm.conf so only the UUID is left, update your initramfs, and see if that helps.

----------

## gabrielm79

frostschutz, no luck...

New /etc/mdadm.conf:

```
cat /etc/mdadm.conf 

ARRAY /dev/md1 UUID=73a70668:67af19d1:3560b2b0:d28fee64

ARRAY /dev/md2 UUID=a4a8870f:e0512cd5:bdd8334f:48931434

ARRAY /dev/md3 UUID=dbc9024a:53a66fed:a87e0939:5fa7898a

```

I recreated the initramfs with genkernel and nothing changed.

/dev/md[1,2,3] are the node names I gave when I was creating the arrays during installation.

Should I boot with a live CD and assemble the arrays again?

The SSD is partitioned like this:

```

fdisk -l /dev/sdg

Disk /dev/sdg: 55.9 GiB, 60022480896 bytes, 117231408 sectors

Units: sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disklabel type: dos

Disk identifier: 0x60411d19

Device     Boot    Start       End  Sectors  Size Id Type

/dev/sdg1  *        2048    616447   614400  300M 83 Linux

/dev/sdg2         616448  25782271 25165824   12G 83 Linux

/dev/sdg3       25782272 117231407 91449136 43.6G 83 Linux

fdisk -l /dev/sdf

Disk /dev/sdf: 55.9 GiB, 60022480896 bytes, 117231408 sectors

Units: sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disklabel type: dos

Disk identifier: 0x6b410923

Device     Boot    Start       End  Sectors  Size Id Type

/dev/sdf1  *        2048    616447   614400  300M 83 Linux

/dev/sdf2         616448  25782271 25165824   12G 83 Linux

/dev/sdf3       25782272 117231407 91449136 43.6G 83 Linux

```

where sd*1 is /boot, sd*2 is swap, and sd*3 is /.

EDIT: The symlinks at

```
ls /dev/md/livecd\:*

/dev/md/livecd:1  /dev/md/livecd:2  /dev/md/livecd:3
```

still exist even after changing the mdadm.conf.

----------

## NeddySeagoon

gabrielm79,

During RAID assembly you can force an md node if you prefer; otherwise, it's picked up from the Preferred Minor shown below.

```
$ sudo /sbin/mdadm -E /dev/sda1

Password: 

/dev/sda1:

          Magic : a92b4efc

        Version : 0.90.00

           UUID : 9392926d:64086e7a:86638283:4138a597

  Creation Time : Sat Apr 11 16:34:40 2009

     Raid Level : raid1

  Used Dev Size : 40064 (39.13 MiB 41.03 MB)

     Array Size : 40064 (39.13 MiB 41.03 MB)

   Raid Devices : 4

  Total Devices : 4

Preferred Minor : 125
```

You can edit Preferred Minor too, but I've not bothered, since booting a rescue system will change it back again.

That's a piece of my /boot.
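Not from the thread, but a hedged sketch of what "forcing an md node" during assembly could look like, using the node and device names from the mdstat output earlier in this thread (requires root, and the array must not be in use -- e.g. run from a rescue environment):

```shell
# Hedged sketch only -- device/node names taken from this thread's
# mdstat output; adjust to your own system before running.
mdadm --stop /dev/md125                        # stop the auto-assembled node
mdadm --assemble /dev/md1 /dev/sdf1 /dev/sdg1  # reassemble under /dev/md1
```

The node you pass to --assemble is only sticky if the initramfs later assembles the array the same way, which is what the rest of this thread works out.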

----------

## s4e8

Did you add mdadm.conf to genkernel?

----------

## gabrielm79

 *NeddySeagoon wrote:*   

> gabrielm79,
> 
> During raid assembly you can force an md node if you prefer, otherwise, its picked up from Preferred Minor below.
> 
> ```
> ...

 

NeddySeagoon, I don't know what Preferred Minor is  :Smile: 

Here is my output of examine 

```

mdadm -E /dev/sdf1 

/dev/sdf1:                                                                                                                                

          Magic : a92b4efc                                                                                                                

        Version : 1.2                                                                                                                     

    Feature Map : 0x0                                                                                                                     

     Array UUID : 73a70668:67af19d1:3560b2b0:d28fee64                                                                                     

           Name : livecd:1                                                                                                                

  Creation Time : Wed Dec  2 16:37:12 2015                                                                                                

     Raid Level : raid1                                                                                                                   

   Raid Devices : 2                                                                                                                       

                                                                                                                                          

 Avail Dev Size : 613856 (299.73 MiB 314.29 MB)                                                                                           

     Array Size : 306880 (299.69 MiB 314.25 MB)                                                                                           

  Used Dev Size : 613760 (299.69 MiB 314.25 MB)                                                                                           

    Data Offset : 544 sectors                                                                                                             

   Super Offset : 8 sectors

   Unused Space : before=456 sectors, after=96 sectors

          State : clean

    Device UUID : b8fbf585:43c11e80:c63cbf4f:c1981ebb

    Update Time : Mon Dec 28 00:19:39 2015

  Bad Block Log : 512 entries available at offset 72 sectors

       Checksum : 4c4c1a6d - correct

         Events : 17

   Device Role : Active device 0

   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)

```

I did some tests with the /boot RAID1: I stopped the array and re-assembled it. I didn't regenerate the initramfs though, and on the next reboot everything was the same.

s4e8 

How do I add /etc/mdadm.conf to genkernel? I use my own kernel and just use genkernel (the easy way) to generate the initramfs.

----------

## frostschutz

You could unpack the initramfs and check the contents of initramfs/etc/mdadm* to verify it's the correct initramfs with correct config inside.

https://wiki.gentoo.org/wiki/Custom_Initramfs#Extracting_the_cpio_archive

I'm not too familiar with genkernel; does it also need additional parameters?

If mdadm.conf says /dev/md1 but it ends up being /dev/md127, then that's odd. Also check that mdadm.conf is using Unix newlines rather than MS-DOS ones, and that the UUIDs are correct.

----------

## gabrielm79

frostschutz

I am not familiar with making custom initramfs. I use genkernel to generate one with a command like:

```
# genkernel --mdadm initramfs
```

I don't know if after this I need to update grub2...

I will check when we are back at that machine.

What are the Unix/MS-DOS newlines? How could I check for them?

Thank you!

----------

## NeddySeagoon

gabrielm79,

If the name of your initrd file has changed, you need to update grub.cfg so the new initrd is used.

DOS/Windows etc. use CR/LF as an end-of-line marker.

*NIX uses just LF

You can run a text file through dos2unix or look at it with hexedit. 

CR is 0x0d LF is 0x0a 

In a DOS text file, each line of text ends in 0x0d 0x0a

This dates back to the days of typewriter-like functionality, where it really was two operations: carriage return, then line feed.

A *NIX file will only have 0x0a between the lines.  Here are the first few lines of my make.conf:

```
00000000   23 23 20 54  68 65 73 65  20 73 65 74  74 69 6E 67  ## These setting

00000010   73 20 77 65  72 65 20 73  65 74 20 62  79 20 74 68  s were set by th

00000020   65 20 63 61  74 61 6C 79  73 74 20 62  75 69 6C 64  e catalyst build

00000030   20 73 63 72  69 70 74 20  74 68 61 74  20 61 75 74   script that aut

00000040   6F 6D 61 74  69 63 61 6C  6C 79 0A 23  20 62 75 69  omatically.# bui

00000050   6C 74 20 74  68 69 73 20  73 74 61 67  65 2E 0A 23  lt this stage..#
```

----------

## s4e8

That way it won't add mdadm.conf to the initramfs. You should use "genkernel --mdadm --mdadm-config=/etc/mdadm.conf initramfs", or edit /etc/genkernel.conf to uncomment MDADM and MDADM_CONFIG.
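The genkernel.conf route mentioned here would look something like the fragment below (variable names as they appear in genkernel's shipped example config; verify against your own /etc/genkernel.conf):

```shell
# Fragment of /etc/genkernel.conf -- set these two so every future
# "genkernel initramfs" run bundles mdadm and your array config:
MDADM="yes"
MDADM_CONFIG="/etc/mdadm.conf"
```

With that in place, a plain "genkernel initramfs" should behave like the command-line form above.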

 *gabrielm79 wrote:*   

> frostschutz
> 
> I am not familiar with making custom initramfs. I use genkernel to generate one with command like 
> 
> #genkernel --mdadm initramfs 
> ...

 

----------

## gabrielm79

Because of the holidays I was late to check back.

s4e8 THANKS!

Your advice on --mdadm-config solved the problem!!!

After a new initramfs:

```

$ cat /proc/mdstat 

Personalities : [linear] [raid0] [raid1] 

md3 : active raid0 sdf3[0] sdg3[1]

      91382784 blocks super 1.2 512k chunks

      

md2 : active raid1 sdf2[0] sdg2[1]

      12574720 blocks super 1.2 [2/2] [UU]

      

md1 : active raid1 sdf1[0] sdg1[1]

      306880 blocks super 1.2 [2/2] [UU]

      

unused devices: <none>

```

I only get an error during reboot/shutdown:

```

* Shutting down RAID devices (mdadm) ...

 * mdadm: Cannot get exclusive access to /dev/md3:Perhaps a running process, mounted filesystem or active volume group?

 [ !! ]

 * ERROR: mdraid failed to stop

 * Stopping udev ...

 [ ok ]

```

but according to this https://forums.gentoo.org/viewtopic-p-6884792.html it seems to be all right, since /dev/md3 is the root /.

Thank you all for all the help.

Happy new Year!

----------

