# Searching for HowTo Migrate single disk to raid

## ebnerjoh

Hi All!

I am searching for a good documentation on how to migrate a single disk to Raid 1 for Gentoo with Grub2.

I have been playing around for days with the following wikis:

https://www.howtoforge.com/how-to-set-up-software-raid1-on-a-running-system-incl-grub2-configuration-ubuntu-10.04

https://feeding.cloud.geek.nz/posts/setting-up-raid-on-existing/

https://wiki.gentoo.org/wiki/Complete_Handbook/Software_RAID

But somehow I was not able to boot the raid system. The biggest difference was that I didn't sync md back to sda, because I wanted to replace sda once booting works.

So I installed md on sdb, rebooted, took out sda and finished the raid config with sda as md. Maybe this is not allowed?

Finally I was running "Boot Repair Disk" and got the following output: https://paste.ubuntu.com/12037798

Best Regards,

Johannes

----------

## frostschutz

Your disk has physical 4K sector size but the partitions are not 4K aligned. If you're still in the setup phase, I'd re-do it with MiB-aligned partitions.

Also, you're using an MSDOS partition table on a 3TB disk; likely you will only be able to use 2TB of it, as that is the limit of msdos partitions with 512 byte logical sector size. You should switch to GPT.

With GPT, you have two options. If your system is UEFI capable and you want UEFI style booting you need an ESP partition. This can be the same as your /boot partition but VFAT instead of ext2.

If you want to stick with legacy BIOS grub booting, you need a bios_grub partition, which is a place for GRUB to embed itself in. grub only uses a few kilobytes of it, so you could make one from 1MiB-2MiB or you could just squeeze it in before the 1st MiB, from sector 64s to 2047s.

Example: (adapt partition sizes to your liking - but don't make your /boot too small for comfort)

```
#  parted -s -a none sda unit mib mklabel gpt mkpart grub 64s 2047s mkpart boot 1 512 mkpart swap 512 4096 mkpart linux 4096 100% toggle 1 bios_grub toggle 2 raid toggle 3 raid toggle 4 raid print free

Model:  (file)

Disk /dev/shm/sda: 2861588MiB

Sector size (logical/physical): 512B/512B

Partition Table: gpt

Disk Flags: 

Number  Start    End         Size        File system  Name   Flags

        0.02MiB  0.03MiB     0.01MiB     Free Space

 1      0.03MiB  1.00MiB     0.97MiB                  grub   bios_grub

 2      1.00MiB  512MiB      511MiB                   boot   raid

 3      512MiB   4096MiB     3584MiB                  swap   raid

 4      4096MiB  2861588MiB  2857492MiB               linux  raid
```

Your RAID is using old 0.90 metadata. For a /boot partition that is fine, as it allows a boot loader to use that partition w/o being aware of RAID. For your other partitions it should be using the default 1.2 metadata. 0.90 metadata also has a 2TB limit so you won't be able to use it with your 3TB disk either (if you're going to use one large partition for RAID).

Your swap should be RAID as well. If you're going to use LVM on the RAID, you could remove the swap partition and use a swap LV instead. Although a swap partition might be simpler for hibernate / resume-from-disk scenarios.

----------

## NeddySeagoon

ebnerjoh,

The migration, in outline, is as follows.

Rebuild your kernel with raid support.

Using your new disc only, partition it as you want your raid set to be.  This need not be the same as your single drive install.

Indeed, you could do the changes suggested by  frostschutz here.

Donate the partitions to a raid1 set that has the other partition missing.  This will be a degraded raid1, which is OK, raid1 can work that way.

Ignore the underlying /dev/sdX from now on - it's no longer yours.

Make filesystems on the raid devices, /dev/mdX ...

Boot with a liveCD, since you cannot copy a filesystem that is in use.

mount your existing install at /mnt/gentoo, taking care to mount it all read only.

mount your raid sets at /mnt/raid - you will need to create that directory first.

Copy everything from /mnt/gentoo to  /mnt/raid.  

You now have a single drive install of your gentoo, unharmed, and a copy of it on a degraded raid1.

Fix /mnt/raid/etc/fstab so it refers to the raid volumes

Chroot into /mnt/raid/ and install and configure your boot loader.

Do 

```
touch /raid_install
```

so you can tell the installs apart.

Shut down the system.  In the BIOS, choose to boot from the new drive.  This will boot your degraded raid1 install.

Check that ls /  shows raid_install.

look at /proc/mdstat - it will show your raid sets with one device missing.

Test and fix anything you need to while the raid is degraded.  The next step will destroy the original install.

Repartition the drive holding the original install to match the degraded raid.

Add these partitions to the raid set they belong to.  The kernel will sync the raid sets, which will bring them up to full strength.

You can continue to use the system while the sync is in progress.

If your original kernel supports raid, you can boot into your original install and use it to fix the raid install if you need to.
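As a rough sketch, the outline above might look like this. Device names, partition numbers, metadata versions and filesystem types are assumptions (following frostschutz's example layout: sdb1=bios_grub, sdb2=boot, sdb3=swap, sdb4=root); adapt everything to your own setup:

```shell
# Degraded raid1 sets on the new disk only - "missing" stands in for the
# old drive's partitions, which join the arrays later.
mdadm --create /dev/md1 --level=1 --raid-devices=2 --metadata=0.90 /dev/sdb2 missing
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdb3 missing
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdb4 missing

# Filesystems go on the md devices, not on /dev/sdbN
mkfs.ext2 /dev/md1      # /boot
mkswap    /dev/md2      # swap
mkfs.ext4 /dev/md3      # root

# From the liveCD: copy the old install onto the degraded raid
mkdir -p /mnt/gentoo /mnt/raid
mount -o ro /dev/sda3 /mnt/gentoo
mount /dev/md3 /mnt/raid
cp -a /mnt/gentoo/. /mnt/raid/
```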

----------

## ebnerjoh

Hi!

I think I have done all of this, but booting with GRUB2 is still not working:

I am getting "error: disk `md0' not found". Google doesn't tell me very much.

Br,

Johannes

----------

## NeddySeagoon

ebnerjoh,

Post your grub.cfg file please.

----------

## ebnerjoh

 *frostschutz wrote:*   

> Your disk has physical 4K sector size but the partitions are not 4K aligned. If you're still in the setup phase, I'd re-do it with MiB-aligned partitions.
> 
> Also, you're using an MSDOS partition table on a 3TB disk; likely you will only be able to use 2TB of it, as that is the limit of msdos partitions with 512 byte logical sector size. You should switch to GPT.
> 
> 

 

Hmm, I have to dig into this topic again. No, I do not use dual boot. It is a pure Linux server.

 *frostschutz wrote:*   

> 
> 
> With GPT, you have two options. If your system is UEFI capable and you want UEFI style booting you need an ESP partition. This can be the same as your /boot partition but VFAT instead of ext2.
> 
> If you want to stick with legacy BIOS grub booting, you need a bios_grub partition, which is a place for GRUB to embed itself in. grub only uses a few kilobytes of it, so you could make one from 1MiB-2MiB or you could just squeeze it in before the 1st MiB, from sector 64s to 2047s.
> ...

 

It's an HP MicroServer N40L, so no UEFI.

 *frostschutz wrote:*   

> 
> 
> Your swap should be RAID as well. If you're going to use LVM on the RAID, you could remove the swap partition and use a swap LV instead. Although a swap partition might be simpler for hibernate / resume-from-disk scenarios.

 

This is interesting, as somewhere else I read that RAID1 on swap is not necessary and that an fstab entry with PRIO=1 on sda and sdb does the same thing but simpler.

Br,

Johannes

----------

## ebnerjoh

Let me try to summarize the needed steps in my own (not a native English speaker  :Smile:  ) words:

Initial Situation:

I have a HomeServer (HP MicroServer N40L) which is used for home automation, music streaming, OwnCloud, Backup, ... and many more functions. So I want to prevent a complete new installation.

The whole setup was already running on a Raid1 (2x 1TB).

Due to a faulty drive I had to replace one disk. So I bought a new 3TB HDD, but for the moment I use only 1TB of it (until I replace the second drive as well).

While installing the new HDD I made a mistake and somehow destroyed the boot loader. During recovery I switched back from the degraded Raid to a single sda drive, which is now productive again with all data on the server. So the system itself is prepared for Raid.

It is a three-year-old Gentoo, but with system and world updated, as well as an upgrade from Grub to Grub2.

What needs to be done:

1) Install new 3TB disk (sdb) and boot up the current system

2) GPT with BIOS boot partition, boot, swap and root partitions on sdb

3) Create Raid1 with missing sda on boot, swap and root

4) Boot from Gentoo Live CD and copy sda[13] to sdb[13]

5) Change /etc/fstab on sdb3

6) Change /etc/mdadm.conf on sdb3

7) Configure /etc/default/grub --> GRUB_PRELOAD_MODULES="raid mdraid"

8) Some HowTos mention that "etc/grub.d/09_swraid_install" is needed. Do I need this???

9) chroot into the new system

10) grub2-install /dev/sdb

11) grub2-mkconfig -o /boot/grub/grub.cfg

12) Shut down the system

13) Boot from sdb

14) GPT sda the same as done for sdb

15) Add sda[123] to the raid

16) grub2-install /dev/sdb

Is this correct? And please advise on the grub2 config file, as this is completely new and strange to me.

Br,

Johannes

----------

## frostschutz

 *ebnerjoh wrote:*   

> No, I do not use DualBoot.

 

I didn't say anything about dual boot.

 *ebnerjoh wrote:*   

> I read that Raid1 on SWAP is not neccessary

 

It depends how you define necessary. It is necessary if you don't want your system to crash when a disk dies.

As for grub: if its raid support gives you any trouble you could also try without - given raid1 with 0.90 or 1.0 metadata, the boot loader does not have to be RAID aware.

----------

## ebnerjoh

 *frostschutz wrote:*   

>  *ebnerjoh wrote:*   No, I do not use DualBoot. 
> 
> I didn't say anything about dual boot.
> 
> 

 

I missunderstood your statement 

 *Quote:*   

> Also you're using MSDOS Partition 

 

What do you mean with that?

I have to dig much more in this partitioning topic...

Br,

Johannes

----------

## frostschutz

Maybe I'm using the wrong terms…

https://en.wikipedia.org/wiki/Disk_partitioning#Partitioning_schemes

by msdos partition I mean the old style that uses 3-4 primary / 0-1 extended / X logical partitions. With a logical sector size of 512 byte it has a 2TB limit.

by gpt partition I mean guid partition table

----------

## ebnerjoh

 *frostschutz wrote:*   

> Maybe I'm using the wrong terms…
> 
> https://en.wikipedia.org/wiki/Disk_partitioning#Partitioning_schemes
> 
> by msdos partition I mean the old style that uses 3-4 primary / 0-1 extended / X logical partitions. With a logical sector size of 512 byte it has a 2TB limit.
> ...

 

ok, thanks, now its clear.

Br,

Johannes

----------

## ebnerjoh

 *NeddySeagoon wrote:*   

> ebnerjoh,
> 
> Post your grub.cfg file please.

 

Hi!

I will try the migration once again with GPT. Then I will prepare all needed files if it is not working.

May you have a look into the summary I wrote above? My biggest concern is the grub2 config, as this is not straight forward...

Br,

Johannes

----------

## NeddySeagoon

ebnerjoh,

I have a N40L too.  I use legacy grub and GPT.  More exactly, grub-static, since the install is /no-multilib/

root is raid1 with ver 0.90 superblocks.  Everything else is LVM over raid5 ver 1.2 superblocks.

It runs 5x2TB greens.

The N40L will check for the bootable flag.  With GPT in use it must be set on the protective MSDOS partition.

----------

## ebnerjoh

 *NeddySeagoon wrote:*   

> 
> 
> The N40L will check for the bootable flag.  With GPT in use it must be set on the protective MSDOS partition.

 

Is this the bios_grub partition which frostschutz mentioned?

Br,

Johannes

----------

## frostschutz

In parted, `disk_set pmbr_boot on`

----------

## ebnerjoh

I set now the GPT Partitions on /dev/sdb

```
gandalf ~ # parted /dev/sdb

GNU Parted 3.2

Using /dev/sdb

Welcome to GNU Parted! Type 'help' to view a list of commands.

(parted) print

Model: ATA ST3000VN000-1HJ1 (scsi)

Disk /dev/sdb: 3001GB

Sector size (logical/physical): 512B/4096B

Partition Table: gpt

Disk Flags: pmbr_boot

Number  Start   End     Size    File system  Name   Flags

 1      32.8kB  1049kB  1016kB               grub   bios_grub

 2      1049kB  537MB   536MB                boot   raid

 3      537MB   4832MB  4295MB               swap   raid

 4      4832MB  990GB   985GB                linux  raid

```

The end of partition 4 is a little smaller than the end of the last partition on the old 1TB drive, because for now I want to use only 1TB in total; the 1TB drive will be replaced in one or two years.

```
gandalf ~ # parted /dev/sda

GNU Parted 3.2

Using /dev/sda

Welcome to GNU Parted! Type 'help' to view a list of commands.

(parted) print

Model: ATA WDC WD1003FBYX-0 (scsi)

Disk /dev/sda: 1000GB

Sector size (logical/physical): 512B/512B

Partition Table: msdos

Disk Flags:

Number  Start   End     Size    Type     File system     Flags

 1      32.3kB  197MB   197MB   primary  ext2            boot

 2      197MB   4294MB  4096MB  primary  linux-swap(v1)

 3      4294MB  1000GB  996GB   primary  ext3

```

I hope this is correct so far...

Again the question which I already asked before:

What settings do I need for grub to be able to boot from a software raid? Grub2 was installed with devicemapper support and kernel has raid included.

-) /etc/default/grub

-) /etc/grub.d/

Br,

Johannes

----------

## frostschutz

 *ebnerjoh wrote:*   

> I set now the GPT Partitions on /dev/sdb

 

Looks fine to me.

 *Quote:*   

> 
> 
> What settings do I need for grub to be able to boot from a software raid? Grub2 was installed with devicemapper support and kernel has raid included.
> 
> -) /etc/default/grub
> ...

 

If /boot RAID1 is using 0.90 (or 1.0) metadata, it *might* work without special setup because this partition can then be used by grub as if it wasn't raid.

Otherwise try GRUB_PRELOAD_MODULES="part_gpt mdraid09 mdraid1x ext2" in /etc/default/grub. That'd be the very verbose setup that forces Grub2 to load all modules necessary for an ext2 boot partition on raid on gpt.
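As a config sketch, that is a single line in /etc/default/grub (everything else in the file can stay as it is):

```shell
# /etc/default/grub - verbose module preload for an ext2 /boot
# on md RAID on a GPT disk
GRUB_PRELOAD_MODULES="part_gpt mdraid09 mdraid1x ext2"
```

followed by re-running grub2-mkconfig -o /boot/grub/grub.cfg so the change lands in the generated config.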

What error messages do you get when you try to install grub to this disk? You're trying this from a correctly chrooted system with /proc /sys /dev bind mounted and /boot mounted?

As long as grub installed, even if everything else fails, you should at the very least get a "grub rescue>" prompt. If you don't get even that far, the problem is probably something other than lack of RAID support.

----------

## ebnerjoh

Ok, I will set the /etc/default/grub accordingly.

Regarding the other settings I am referring to the following HowTo: https://www.howtoforge.com/how-to-set-up-software-raid1-on-a-running-system-incl-grub2-configuration-ubuntu-10.04-p2

The guide says that a script in /etc/grub.d needs to be created, "09_swraid1_install", which is then removed after booting into the raid OS and finalizing the RAID1 config by adding the missing drive.

I updated now my running config (adding parted, grub2 with devicemapper,...).

The next days I will do a clone of the working disk and then I will try the next steps.

----------

## NeddySeagoon

ebnerjoh,

```
gandalf ~ # parted /dev/sdb

GNU Parted 3.2

Using /dev/sdb

Welcome to GNU Parted! Type 'help' to view a list of commands.

(parted) print

Model: ATA ST3000VN000-1HJ1 (scsi)

Disk /dev/sdb: 3001GB

Sector size (logical/physical): 512B/4096B

Partition Table: gpt 
```

you will not be able to boot from this disk.  This is your GPT partition table.  The bootable flag is not set but it doesn't matter.

To boot from sdb you must set the bootable flag on the protective msdos partition.

```
fdisk -t dos /dev/sdb  
```

will allow you to see it. Setting the bootable flag is the only thing you must do here.

The entire msdos partition table is a fake - used only by the BIOS at boot time.

----------

## frostschutz

 *NeddySeagoon wrote:*   

> The bootable flag is not set

 

It is set (setting the disk flag pmbr_boot  in parted is the same thing as using fdisk -t dos to set the boot flag). Might not hurt to check anyway but should be fine.

----------

## ebnerjoh

Hi!

Here is the output of fdisk:

```

Command (m for help): p

Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors

Units: sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 4096 bytes

I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disklabel type: dos

Disk identifier: 0x00000000

Device     Boot Start        End    Sectors Size Id Type

/dev/sdb1  *        1 4294967295 4294967295   2T ee GPT

Partition 2 does not start on physical sector boundary.
```

So it seems that the flag was set successfully with 

```
disk_set pmbr_boot on
```

Br,

Johannes

----------

## NeddySeagoon

ebnerjoh,

Yep, but it's always good to check.

----------

## ebnerjoh

Hi!

I am running now the migration process. I have copied all data to md3 (md1=boot, md2=swap, md3=root) and now chrooted with a new gentoo install cd into the Raid-System.

I was running 

```
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
```

 and see now the following entries:

```

ARRAY /dev/md/1  metadata=1.2 UUID=..... name=hostname:1

```

The same goes for md2 and md3. Where is the "/" coming from, and do I have to enter the raid name in fstab the same way?

Br,

Johannes

----------

## frostschutz

My mdadm.conf looks like this:

```

MAILADDR your@address

ARRAY /dev/md0 UUID=d8b8b4e5:e47b2e45:2093cd36:f654020d

ARRAY /dev/md1 UUID=845b2454:42a319ef:6ec5238a:358c365b

ARRAY /dev/md2 UUID=23cf90d2:c05d720e:e72e178d:414a8128

...

```

So... I don't use the /dev/md/name style; it was a nice idea but it never took hold, as far as I'm aware. And I don't add conditions such as metadata, hostname, etc. to the ARRAY line, because those can prevent the array from being assembled properly. For example, if you built this from a livecd, the hostname will be completely different from what it's supposed to be - and in most setups it should not matter.

RAID names and RAID UUIDs don't go into fstab - stick to filesystem UUIDs there.
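A minimal sketch of what that means for fstab (the UUIDs below are placeholders, not values from this system; get the real ones with blkid /dev/md1 etc., and adapt filesystem types and options):

```shell
# /etc/fstab - use the filesystem UUIDs reported by blkid,
# never the md arrays' RAID UUIDs from mdadm --examine
UUID=11111111-2222-3333-4444-555555555555   /boot   ext2   noauto,noatime   1 2
UUID=66666666-7777-8888-9999-aaaaaaaaaaaa   none    swap   sw               0 0
UUID=bbbbbbbb-cccc-dddd-eeee-ffffffffffff   /       ext4   noatime          0 1
```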

----------

## ebnerjoh

Ok, I changed my mdadm.conf accordingly.

My system boots into grub without issues, but then I get the following error:

```
>> Determining root Device

!!Could not find the root block device in UUID=cc977df2-...

--> Enter, shell or quit

```

Do I have an issue with GRUB and UUIDs? Because this UUID is not listed in mdadm.conf.

Please note: I have two other disks installed in the system. One is the still-"productive" disk, which should be added to the raid once the raid is working, and the other is a disk I am using to clone the productive one before doing any changes.

Here is the mdadm.conf:

```

# mdadm configuration file

#

# mdadm will function properly without the use of a configuration file,

# but this file is useful for keeping track of arrays and member disks.

# In general, a mdadm.conf file is created, and updated, after arrays

# are created. This is the opposite behavior of /etc/raidtab which is

# created prior to array construction.

#

#

# the config file takes two types of lines:

#

#   DEVICE lines specify a list of devices of where to look for

#     potential member disks

#

#   ARRAY lines specify information about how to identify arrays so

#     so that they can be activated

#

# You can have more than one device line and use wild cards. The first 

# example includes SCSI the first partition of SCSI disks /dev/sdb,

# /dev/sdc, /dev/sdd, /dev/sdj, /dev/sdk, and /dev/sdl. The second 

# line looks for array slices on IDE disks.

#

#DEVICE /dev/sd[bcdjkl]1

#DEVICE /dev/hda1 /dev/hdb1

#

# If you mount devfs on /dev, then a suitable way to list all devices is:

#DEVICE /dev/discs/*/*

#

#

# The AUTO line can control which arrays get assembled by auto-assembly,

# meaing either "mdadm -As" when there are no 'ARRAY' lines in this file,

# or "mdadm --incremental" when the array found is not listed in this file.

# By default, all arrays that are found are assembled.

# If you want to ignore all DDF arrays (maybe they are managed by dmraid),

# and only assemble 1.x arrays if which are marked for 'this' homehost,

# but assemble all others, then use

#AUTO -ddf homehost -1.x +all

#

# ARRAY lines specify an array to assemble and a method of identification.

# Arrays can currently be identified by using a UUID, superblock minor number,

# or a listing of devices.

#

#   super-minor is usually the minor number of the metadevice

#   UUID is the Universally Unique Identifier for the array

# Each can be obtained using

#

#    mdadm -D <md>

#

#ARRAY /dev/md0 UUID=3aaa0122:29827cfa:5331ad66:ca767371

#ARRAY /dev/md1 super-minor=1

#ARRAY /dev/md2 devices=/dev/hda1,/dev/hdb1

#

# ARRAY lines can also specify a "spare-group" for each array.  mdadm --monitor

# will then move a spare between arrays in a spare-group if one array has a failed

# drive but no spare

#ARRAY /dev/md4 uuid=b23f3c6d:aec43a9f:fd65db85:369432df spare-group=group1

#ARRAY /dev/md5 uuid=19464854:03f71b1b:e0df2edd:246cc977 spare-group=group1

#

# When used in --follow (aka --monitor) mode, mdadm needs a

# mail address and/or a program.  This can be given with "mailaddr"

# and "program" lines to that monitoring can be started using

#    mdadm --follow --scan & echo $! > /var/run/mdadm

# If the lines are not found, mdadm will exit quietly

#MAILADDR root@mydomain.tld

#PROGRAM /usr/sbin/handle-mdadm-events

MAILADDR johannes@familie-ebner.at

#ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=4d49317a:be667516:073e21cd:ed2abb54 devices=/dev/sda1,/dev/sdb1

#ARRAY /dev/md3 level=raid1 num-devices=2 metadata=0.90 UUID=9da9a034:276f2d3a:073e21cd:ed2abb54 devices=/dev/sda3,/dev/sdb3

#ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 devices=/dev/sda1,/dev/sdb1

#ARRAY /dev/md3 level=raid1 num-devices=2 metadata=0.90 devices=/dev/sda3,/dev/sdb3

#ARRAY /dev/md1 UUID=e153228e:bb28606e:e73ef6c3:51f2c869

#ARRAY /dev/md3  metadata=1.2 UUID=73450d44:fa110408:1359c12d:906df243 name=gandalf:3

ARRAY /dev/md1  UUID=543741fa:229fd201:652e20a9:5b86c2aa

ARRAY /dev/md2  UUID=798d9c97:60fc60ac:863fea75:204eecf5

ARRAY /dev/md3  UUID=4513201e:b67be639:1bd70490:b0072bc3

```

Here the grub.cfg

```

#

# DO NOT EDIT THIS FILE

#

# It is automatically generated by grub2-mkconfig using templates

# from /etc/grub.d and settings from /etc/default/grub

#

### BEGIN /etc/grub.d/00_header ###

insmod mdraid1x

if [ -s $prefix/grubenv ]; then

  load_env

fi

if [ "${next_entry}" ] ; then

   set default="${next_entry}"

   set next_entry=

   save_env next_entry

   set boot_once=true

else

   set default="0"

fi

if [ x"${feature_menuentry_id}" = xy ]; then

  menuentry_id_option="--id"

else

  menuentry_id_option=""

fi

export menuentry_id_option

if [ "${prev_saved_entry}" ]; then

  set saved_entry="${prev_saved_entry}"

  save_env saved_entry

  set prev_saved_entry=

  save_env prev_saved_entry

  set boot_once=true

fi

function savedefault {

  if [ -z "${boot_once}" ]; then

    saved_entry="${chosen}"

    save_env saved_entry

  fi

}

function load_video {

  if [ x$feature_all_video_module = xy ]; then

    insmod all_video

  else

    insmod efi_gop

    insmod efi_uga

    insmod ieee1275_fb

    insmod vbe

    insmod vga

    insmod video_bochs

    insmod video_cirrus

  fi

}

if [ x$feature_default_font_path = xy ] ; then

   font=unicode

else

insmod part_gpt

insmod diskfilter

insmod mdraid1x

insmod ext2

set root='mduuid/4513201eb67be6391bd70490b0072bc3'

if [ x$feature_platform_search_hint = xy ]; then

  search --no-floppy --fs-uuid --set=root --hint='mduuid/4513201eb67be6391bd70490b0072bc3'  cc977df2-7268-4d03-a494-4352b0abf5ec

else

  search --no-floppy --fs-uuid --set=root cc977df2-7268-4d03-a494-4352b0abf5ec

fi

    font="/usr/share/grub/unicode.pf2"

fi

if loadfont $font ; then

  set gfxmode=auto

  load_video

  insmod gfxterm

fi

terminal_output gfxterm

if [ x$feature_timeout_style = xy ] ; then

  set timeout_style=menu

  set timeout=5

# Fallback normal timeout code in case the timeout_style feature is

# unavailable.

else

  set timeout=5

fi

### END /etc/grub.d/00_header ###

### BEGIN /etc/grub.d/10_linux ###

menuentry 'Gentoo GNU/Linux' --class gentoo --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-cc977df2-7268-4d03-a494-4352b0abf5ec' {

   load_video

   if [ "x$grub_platform" = xefi ]; then

      set gfxpayload=keep

   fi

   insmod gzio

   insmod part_gpt

   insmod diskfilter

   insmod mdraid1x

   insmod ext2

   set root='mduuid/543741fa229fd201652e20a95b86c2aa'

   if [ x$feature_platform_search_hint = xy ]; then

     search --no-floppy --fs-uuid --set=root --hint='mduuid/543741fa229fd201652e20a95b86c2aa'  fc6bf9eb-e85f-4ac4-9f06-7fb8b1ff4dbc

   else

     search --no-floppy --fs-uuid --set=root fc6bf9eb-e85f-4ac4-9f06-7fb8b1ff4dbc

   fi

   echo   'Loading Linux x86_64-4.0.5-gentoo ...'

   linux   /kernel-genkernel-x86_64-4.0.5-gentoo root=UUID=cc977df2-7268-4d03-a494-4352b0abf5ec ro  

   echo   'Loading initial ramdisk ...'

   initrd   /initramfs-genkernel-x86_64-4.0.5-gentoo

}

submenu 'Advanced options for Gentoo GNU/Linux' $menuentry_id_option 'gnulinux-advanced-cc977df2-7268-4d03-a494-4352b0abf5ec' {

   menuentry 'Gentoo GNU/Linux, with Linux x86_64-4.0.5-gentoo' --class gentoo --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-x86_64-4.0.5-gentoo-advanced-cc977df2-7268-4d03-a494-4352b0abf5ec' {

      load_video

      if [ "x$grub_platform" = xefi ]; then

         set gfxpayload=keep

      fi

      insmod gzio

      insmod part_gpt

      insmod diskfilter

      insmod mdraid1x

      insmod ext2

      set root='mduuid/543741fa229fd201652e20a95b86c2aa'

      if [ x$feature_platform_search_hint = xy ]; then

        search --no-floppy --fs-uuid --set=root --hint='mduuid/543741fa229fd201652e20a95b86c2aa'  fc6bf9eb-e85f-4ac4-9f06-7fb8b1ff4dbc

      else

        search --no-floppy --fs-uuid --set=root fc6bf9eb-e85f-4ac4-9f06-7fb8b1ff4dbc

      fi

      echo   'Loading Linux x86_64-4.0.5-gentoo ...'

      linux   /kernel-genkernel-x86_64-4.0.5-gentoo root=UUID=cc977df2-7268-4d03-a494-4352b0abf5ec ro  

      echo   'Loading initial ramdisk ...'

      initrd   /initramfs-genkernel-x86_64-4.0.5-gentoo

   }

   menuentry 'Gentoo GNU/Linux, with Linux x86_64-4.0.5-gentoo (recovery mode)' --class gentoo --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-x86_64-4.0.5-gentoo-recovery-cc977df2-7268-4d03-a494-4352b0abf5ec' {

      load_video

      if [ "x$grub_platform" = xefi ]; then

         set gfxpayload=keep

      fi

      insmod gzio

      insmod part_gpt

      insmod diskfilter

      insmod mdraid1x

      insmod ext2

      set root='mduuid/543741fa229fd201652e20a95b86c2aa'

      if [ x$feature_platform_search_hint = xy ]; then

        search --no-floppy --fs-uuid --set=root --hint='mduuid/543741fa229fd201652e20a95b86c2aa'  fc6bf9eb-e85f-4ac4-9f06-7fb8b1ff4dbc

      else

        search --no-floppy --fs-uuid --set=root fc6bf9eb-e85f-4ac4-9f06-7fb8b1ff4dbc

      fi

      echo   'Loading Linux x86_64-4.0.5-gentoo ...'

      linux   /kernel-genkernel-x86_64-4.0.5-gentoo root=UUID=cc977df2-7268-4d03-a494-4352b0abf5ec ro single 

      echo   'Loading initial ramdisk ...'

      initrd   /initramfs-genkernel-x86_64-4.0.5-gentoo

   }

}

### END /etc/grub.d/10_linux ###

### BEGIN /etc/grub.d/20_linux_xen ###

### END /etc/grub.d/20_linux_xen ###

### BEGIN /etc/grub.d/30_os-prober ###

### END /etc/grub.d/30_os-prober ###

### BEGIN /etc/grub.d/40_custom ###

# This file provides an easy way to add custom menu entries.  Simply type the

# menu entries you want to add after this comment.  Be careful not to change

# the 'exec tail' line above.

### END /etc/grub.d/40_custom ###

### BEGIN /etc/grub.d/41_custom ###

if [ -f  ${config_directory}/custom.cfg ]; then

  source ${config_directory}/custom.cfg

elif [ -z "${config_directory}" -a -f  $prefix/custom.cfg ]; then

  source $prefix/custom.cfg;

fi

### END /etc/grub.d/41_custom ###

```

In /etc/default/grub I had to remove "mdraid" and "raid", because it seems these were replaced by "mdraid09" and "mdraid1x". I added "mdraid1x".

----------

## frostschutz

Did you update your initramfs after changing mdadm.conf?

When you type shell, do you get a shell and is mdadm available in this shell?

Which command are you using for that? I don't use genkernel myself, it needs some option to support RAID...

----------

## ebnerjoh

 *frostschutz wrote:*   

> Did you update your initramfs after changing mdadm.conf?
> 
> When you type shell, do you get a shell and is mdadm available in this shell?
> 
> Which command are you using for that? I don't use genkernel myself, it needs some option to support RAID...

 

Hi!

No, I was just running 

```
grub2-install /dev/sdb
```

 and 

```
grub2-mkconfig -o /boot/grub/grub.cfg
```

So I guess I have to run genkernel again.

----------

## ebnerjoh

Hi!

I was running 

```
genkernel all
```

 and I noticed, that the initramfs in /boot/ was modified.

Nevertheless i get the same error: "Could not find root block device" with wrong UUID...

Br,

Johannes

----------

## frostschutz

this should help https://wiki.gentoo.org/wiki/Genkernel#Loading_LVM_or_software-RAID

genkernel --mdadm and the domdadm kernel parameter

LVM too if you use it, instead of a filesystem directly on the md.

ditto for encryption. genkernel needs to be told
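Put together, a hedged sketch of the genkernel side (flag names as on the wiki page above; drop --lvm if you put the filesystems directly on the md devices):

```shell
# Rebuild the initramfs with mdadm (and, if used, LVM) support baked in;
# this also copies the current /etc/mdadm.conf into the initramfs
genkernel --mdadm --lvm --install initramfs

# Then make sure the kernel line in grub.cfg carries the matching
# parameters, e.g.:  linux /kernel-... root=UUID=... ro domdadm dolvm
grub2-mkconfig -o /boot/grub/grub.cfg
```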

----------

## ebnerjoh

I think the issue is somewhere else...

I created the mdadm.conf with "mdadm --examine --scan >> /etc/mdadm.conf".

There I have the 3 MD devices, each with a UUID.

But If I do an 

```
ls -la
```

 in 

```
/dev/disk/by-uuid
```

 I can see a different UUID for each MD-Device...

Maybe I should take these UUIDs and paste them into mdadm.conf?

----------

## frostschutz

You are confusing MD RAID UUIDs (you see them with mdadm --examine) with filesystem UUIDs (what's actually stored on the MD).

The way I understood your issue: grub works? You select a kernel entry and it scrolls a few kernel messages on the screen? But then you're stuck in the kernel/initramfs? Correct so far? Then it should be genkernel, or missing kernel parameters, unless there is another error in one of your mdadm.conf, fstab, ... file(s).
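One way to see the two different UUIDs side by side (a sketch; run it against your own md device):

```shell
# RAID UUID of the array itself - this is what belongs in mdadm.conf
mdadm --detail /dev/md3 | grep UUID

# Filesystem UUID stored *on* the array - this is what belongs in fstab
# and in the kernel's root=UUID=... parameter
blkid /dev/md3
```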

----------

## NeddySeagoon

ebnerjoh,

The steps to mount a root on raid are as follows.

grub loads the kernel and the initrd (you must have an initrd)

grub makes its own arrangements to read the raid to do this.

Once the kernel is loaded, grub passes control to it.  It mounts the initrd as its root filesystem.

With a genkernel kernel, all the modules for your hardware are loaded, so the kernel can now see your hard drives.

mdadm is called to assemble your raid sets.

At this point, the /dev/md* nodes are populated and the kernel can see your filesystems.

The initrd init script mounts root, tidies up and pivot-roots to the real root.

The real init script now gets started.

If any of this breaks, you should be dropped into a rescue shell.

Do 

```
ls /dev/sd* 
```

    do you see your hard drive partitions?

If not, something went wrong very early in the process.

What about 

```
cat /proc/mdstat
```

That will show the state of your raid sets if they are assembled.

If raid assembly failed, the kernel cannot see its root filesystem.

You should be able to assemble root by hand if you need to

```
mdadm --assemble ...
```

 NOT --create

```
cat init
```

will show you the end of the init script.

Execute the remaining commands to coax the box to boot.

I don't use genkernel and I hand roll my own initrd files, so I'm not sure what a genkernel initrd looks like.
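In the rescue shell, the checks above boil down to something like this (device names are examples matching the layout discussed earlier):

```shell
ls /dev/sd*        # can the kernel see the hard drive partitions?
cat /proc/mdstat   # did any raid sets assemble?

# Assemble by hand if not - always --assemble on existing members,
# NEVER --create, which would overwrite the metadata
mdadm --assemble /dev/md3 /dev/sda4
# or let mdadm find whatever it can:
mdadm --assemble --scan
```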

----------

## ebnerjoh

Hi!

Just so as not to confuse you: I have removed my real disk and my cloned disk, so the (hopefully future) raid disk is now sda.

```
ls /dev/sd*
```

shows me sda1 through sda4 (bios_boot, boot, swap, root)

```
cat /proc/mdstat
```

 shows NO RAID system

```
mdadm --assemble /dev/md3
```

 Nothing happens

```
cat init
```

 bad_msg " A fatal error has occured since ${REAL_INIT:-/sbin/init} did not boot correctly, Tryin to open a shell..."

I am out of ideas.

Btw.: I was also running 

```
genkernel --mdadm all
```

 no change

Br,

Johannes

----------

## ebnerjoh

 *frostschutz wrote:*   

> You are confusing MD RAID UUIDs (you see them with mdadm --examine) with filesystem UUIDs (what's actually stored on the MD).
> 
> 

 

Ok, clear now.

 *frostschutz wrote:*   

> 
> 
> The way I understood your issue, grub works? you select a kernel entry and it scrolls a few kernel messages on the screen? But then you're stuck in kernel/initramfs? Correct so far? Then it should be the genkernel, or missing kernel parameters, unless there is another error in one of your mdadm.conf, fstab, ... file(s).
> 
> 

 

Exactly, I am getting the Grub-Rescue.

What I am wondering about: I installed this system years ago with genkernel, on an older kernel version with RAID support. There have been only two big changes since. First, I had a degraded RAID, where I changed from RAID to non-RAID and rebuilt the boot partition of the still-working hard disk. Second, I changed with genkernel to a 4.x kernel and switched from grub to grub2...

----------

## frostschutz

Output of:

```
cat /proc/cmdline /proc/mdstat /etc/mdadm.conf /etc/mdadm/mdadm.conf
mdadm --verbose --assemble --scan
```

?

My guess is that domdadm or some other parameter is missing, but I'm not an expert on genkernel...
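One quick check is whether domdadm actually made it onto the kernel command line. A sketch using a hypothetical sample string (on the real box, read `/proc/cmdline` instead):

```shell
# Sample cmdline in the style of a genkernel boot entry (hypothetical values).
cmdline='BOOT_IMAGE=/kernel-genkernel root=UUID=... ro domdadm dolvm'

# Pad with spaces so the match cannot hit a substring of another word.
case " $cmdline " in
  *" domdadm "*) result="domdadm present" ;;
  *)             result="domdadm missing" ;;
esac
echo "$result"
```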

----------

## ebnerjoh

```
cat /proc/cmdline
```

-->

```
BOOT_IMAGE=/kernel-genkernel-x86_64-4.0.5-gentoo root=UUID=UUID-of-/dev/md3 ro domdadm dolvm
```

```
cat /proc/mdstat
```

-->

```
no raid
```

```
cat /etc/mdadm.conf
```

-->

```
# mdadm configuration file
#
# mdadm will function properly without the use of a configuration file,
# but this file is useful for keeping track of arrays and member disks.
# In general, a mdadm.conf file is created, and updated, after arrays
# are created. This is the opposite behavior of /etc/raidtab which is
# created prior to array construction.
#
#
# the config file takes two types of lines:
#
#   DEVICE lines specify a list of devices of where to look for
#     potential member disks
#
#   ARRAY lines specify information about how to identify arrays so
#     so that they can be activated
#
# You can have more than one device line and use wild cards. The first
# example includes SCSI the first partition of SCSI disks /dev/sdb,
# /dev/sdc, /dev/sdd, /dev/sdj, /dev/sdk, and /dev/sdl. The second
# line looks for array slices on IDE disks.
#
#DEVICE /dev/sd[bcdjkl]1
#DEVICE /dev/hda1 /dev/hdb1
#
# If you mount devfs on /dev, then a suitable way to list all devices is:
#DEVICE /dev/discs/*/*
#
#
# The AUTO line can control which arrays get assembled by auto-assembly,
# meaing either "mdadm -As" when there are no 'ARRAY' lines in this file,
# or "mdadm --incremental" when the array found is not listed in this file.
# By default, all arrays that are found are assembled.
# If you want to ignore all DDF arrays (maybe they are managed by dmraid),
# and only assemble 1.x arrays if which are marked for 'this' homehost,
# but assemble all others, then use
#AUTO -ddf homehost -1.x +all
#
# ARRAY lines specify an array to assemble and a method of identification.
# Arrays can currently be identified by using a UUID, superblock minor number,
# or a listing of devices.
#
#   super-minor is usually the minor number of the metadevice
#   UUID is the Universally Unique Identifier for the array
# Each can be obtained using
#
#    mdadm -D <md>
#
#ARRAY /dev/md0 UUID=3aaa0122:29827cfa:5331ad66:ca767371
#ARRAY /dev/md1 super-minor=1
#ARRAY /dev/md2 devices=/dev/hda1,/dev/hdb1
#
# ARRAY lines can also specify a "spare-group" for each array.  mdadm --monitor
# will then move a spare between arrays in a spare-group if one array has a failed
# drive but no spare
#ARRAY /dev/md4 uuid=b23f3c6d:aec43a9f:fd65db85:369432df spare-group=group1
#ARRAY /dev/md5 uuid=19464854:03f71b1b:e0df2edd:246cc977 spare-group=group1
#
# When used in --follow (aka --monitor) mode, mdadm needs a
# mail address and/or a program.  This can be given with "mailaddr"
# and "program" lines to that monitoring can be started using
#    mdadm --follow --scan & echo $! > /var/run/mdadm
# If the lines are not found, mdadm will exit quietly
#MAILADDR root@mydomain.tld
#PROGRAM /usr/sbin/handle-mdadm-events
MAILADDR ...
#ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=4d49317a:be667516:073e21cd:ed2abb54 devices=/dev/sda1,/dev/sdb1
#ARRAY /dev/md3 level=raid1 num-devices=2 metadata=0.90 UUID=9da9a034:276f2d3a:073e21cd:ed2abb54 devices=/dev/sda3,/dev/sdb3
#ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 devices=/dev/sda1,/dev/sdb1
#ARRAY /dev/md3 level=raid1 num-devices=2 metadata=0.90 devices=/dev/sda3,/dev/sdb3
#ARRAY /dev/md1 UUID=e153228e:bb28606e:e73ef6c3:51f2c869
#ARRAY /dev/md3  metadata=1.2 UUID=73450d44:fa110408:1359c12d:906df243 name=gandalf:3
ARRAY /dev/md1  UUID=543741fa:229fd201:652e20a9:5b86c2aa
ARRAY /dev/md2  UUID=798d9c97:60fc60ac:863fea75:204eecf5
ARRAY /dev/md3  UUID=4513201e:b67be639:1bd70490:b0072bc3
```

```
mdadm --verbose --assemble --scan
```

-->

This maybe has interesting output, but unfortunately I am not able to see the full output on the "Remote Management" console of my MicroServer. I was also not able to capture a picture with the Print Screen key...


----------

## NeddySeagoon

ebnerjoh,

```
mdadm --verbose --assemble --scan
```

will assemble and start all the raid sets it can find.

If it worked 

```
cat /proc/mdstat
```

 will show the assembled arrays.

I'm still not convinced you don't have your UUIDs mixed up.  /sbin/blkid will show both the UUID of the raid and the UUID of the filesystem on the raid set.

```
$ /sbin/blkid 
/dev/sda1: UUID="9392926d-6408-6e7a-8663-82834138a597" TYPE="linux_raid_member" PARTUUID="0553caf4-01"
/dev/sda2: UUID="b6633d8e-41ef-4485-9bbe-c4c2d69f4e8c" TYPE="swap" PARTUUID="0553caf4-02"
/dev/sda5: UUID="5e3cadd4-cfd2-665d-9690-1ac76d8f5a5d" TYPE="linux_raid_member" PARTUUID="0553caf4-05"
/dev/sda6: UUID="9657e667-5b60-f6a3-0391-65e6dcf662fa" TYPE="linux_raid_member" PARTUUID="0553caf4-06"
/dev/sdb1: UUID="9392926d-6408-6e7a-8663-82834138a597" TYPE="linux_raid_member" PARTUUID="0553caf4-01"
/dev/sdb2: UUID="a5d62e51-ef8c-4b9d-a4cf-faf56dcaa999" TYPE="swap" PARTUUID="0553caf4-02"
/dev/sdb5: UUID="5e3cadd4-cfd2-665d-9690-1ac76d8f5a5d" TYPE="linux_raid_member" PARTUUID="0553caf4-05"
/dev/sdb6: UUID="9657e667-5b60-f6a3-0391-65e6dcf662fa" TYPE="linux_raid_member" PARTUUID="0553caf4-06"
/dev/md125: UUID="741183c2-1392-4022-a1d3-d0af8ba4a2a8" TYPE="ext2"
/dev/md126: UUID="ff5730d5-c28d-4276-b300-5b0b0fc60300" TYPE="ext4"
/dev/md127: UUID="7b2KgY-NHef-kuNk-WBAp-VnLa-h03A-b4ehGy" TYPE="LVM2_member"
```

The UUID reported for /dev/sd[ab]1 is identical.  This is the UUID of the raid set which mdadm.conf needs.

In my case this is /boot

These UUIDs are fed to mdadm to assemble the raid set(s)

Once the raid set is assembled /dev/md125 exists and the kernel can see the filesystem with UUID="741183c2-1392-4022-a1d3-d0af8ba4a2a8" that is stored there.
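The distinction can also be scripted: the RAID-set UUIDs are exactly the ones on `TYPE="linux_raid_member"` lines. A sketch filtering a single sample line copied from the output above (pipe real `blkid` output through the same awk on your box):

```shell
# One sample line of blkid output; on a real system:
#   /sbin/blkid | awk -F'"' '/linux_raid_member/ {print $2}' | sort -u
sample='/dev/sda1: UUID="9392926d-6408-6e7a-8663-82834138a597" TYPE="linux_raid_member" PARTUUID="0553caf4-01"'

# Field 2 (between the first pair of double quotes) is the RAID-set UUID,
# i.e. the value that belongs in mdadm.conf -- not the filesystem UUID.
raid_uuid=$(echo "$sample" | awk -F'"' '/linux_raid_member/ {print $2}')
echo "$raid_uuid"
```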

----------

## frostschutz

Even if no raids were assembled, /proc/mdstat should say something like

```
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 
```

to show which RAID levels are supported.

If this is not the case the appropriate modules might not be loaded, or the kernel might not support RAID.

```
$ gunzip < /proc/config.gz | grep MD_RAID # or zgrep MD_RAID /proc/config.gz if available
CONFIG_MD_RAID0=y
CONFIG_MD_RAID1=y
CONFIG_MD_RAID10=y
CONFIG_MD_RAID456=y
```
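The y/m distinction is what matters here: `=m` means the RAID personality is a module that the initramfs must load before it can assemble anything, while `=y` means it is always available. A sketch of the check on a sample line (read the real value from /proc/config.gz):

```shell
# Sample value; on the real box, e.g.: zgrep MD_RAID1 /proc/config.gz
config_line="CONFIG_MD_RAID1=m"

# Strip everything up to and including '=' to get the setting.
case "${config_line#*=}" in
  y) state="built-in" ;;   # compiled in, nothing to load
  m) state="module"   ;;   # must be loaded by the initramfs before assembly
  *) state="missing"  ;;   # kernel cannot do RAID1 at all
esac
echo "$state"
```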

As for catching output in the initramfs shell: have a USB stick, mount it, and `somecommand > /mnt/usb/file.txt` should work - if USB and the filesystem are supported. Oh well.

I don't use modules  :Laughing:  stupid little buggers never load when you need 'em

----------

## ebnerjoh

 *frostschutz wrote:*   

> Even if no raids were assembled, /proc/mdstat should say something like
> 
> ```
> 
> Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 
> ...

 

Yes, this is shown, and on the next line 

```
unused devices: <none>
```

Your second question was also interesting. The command showed me that the RAID options were built as modules. But when running 

```
genkernel --menuconfig --mdadm all
```

 I saw all RAID options set to 

```
*
```

...

Will play around with that now

Br,

Johannes

----------

## ebnerjoh

So, it is working!!!

What have I done:

1) Copied and extracted /proc/config.gz to /usr/src/linux

2) Changed RAID Support from "m" to "*"

3) running 

```
genkernel oldconfig --mdadm all
```
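The edit in step 2 can be done with sed. A sketch on a temporary scratch file so it is safe to run anywhere; on the real box the file is /usr/src/linux/.config, filled in step 1 with `zcat /proc/config.gz > /usr/src/linux/.config`:

```shell
# Work on a scratch copy with sample module settings.
cfg=$(mktemp)
printf 'CONFIG_MD_RAID0=m\nCONFIG_MD_RAID1=m\nCONFIG_MD_RAID456=m\n' > "$cfg"

# Flip every MD_RAID* option from module (=m) to built-in (=y).
sed -i 's/^\(CONFIG_MD_RAID[0-9]*\)=m$/\1=y/' "$cfg"

result=$(cat "$cfg")
echo "$result"
rm -f "$cfg"
```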

Thank you all for your perfect support. I didn't expect a solution anymore...

Br,

Johannes

----------

## ebnerjoh

Hi!

I now have another question: 

In the past I had two swap partitions (one on sda, one on sdb), 4GB each, and striped them via equal priority ("pri=") in fstab.

Now I have changed back to software RAID with mirroring, but I forgot to enlarge the swap partition, so I have only 4GB.

Is it possible to resize the root partition with parted so that I can then enlarge the swap partition?

Br,

Johannes

----------

## NeddySeagoon

ebnerjoh,

swap needs to be raided.  Otherwise, programs that have data in swap will get a lobotomy when the drive carrying their swapped data fails.

These programs will crash when they need to use the swapped data.

If you use LVM add a swap volume, if not, use a swap file.
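A swap-file sketch, with size and path as examples only. The dd/chmod part below runs as any user on a demo file; the commented commands need root and the real path:

```shell
# Demo path -- on a real box use e.g. /swapfile on the RAID-backed root fs.
SWAPFILE=./demo.swap

# Allocate the file and lock down permissions
# (64 KiB demo; use e.g. bs=1M count=4096 for 4 GiB).
dd if=/dev/zero of="$SWAPFILE" bs=1024 count=64 2>/dev/null
chmod 600 "$SWAPFILE"

# As root, on the real file:
#   mkswap /swapfile
#   swapon /swapfile
#   echo '/swapfile none swap sw 0 0' >> /etc/fstab
```

Because the file lives on the mirrored root filesystem, the swapped data survives a single-disk failure, which is the point NeddySeagoon makes above.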

----------

