# [SOLVED] Testing RAID1 with mdadm

## cmp

Hi folks

I built a RAID1 from two 1 TB disks (Seagate and Hitachi), carrying md1 (/home), md2 (/) and md3 (/boot), all on ext3 with RAID autodetection.

Now I tested my RAID. I unplugged one disk and booted from the other one. That works without any problem,

but not the other way round: when I plugged both in again and unplugged the one I had booted from before, booting failed.

Is there a problem with my MBR or GRUB? How do I check and repair that?

In short:

Seagate disconnected

Hitachi booting

md: 1 of 2 mirrors

// I was asked if I want to boot [y/N] → Y  // booted fine

------------

Seagate booting

Hitachi disconnected

Ext3-fs (md-2) error unable to read superblock

Ext2-fs (md-2) error unable to read superblock

Ext4-fs (md-2) unable to read superblock

mount: mounting /dev/md2 on /root failed: Invalid argument

//Booting failed
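For reference, a failure like this can also be simulated in software instead of pulling cables; a sketch using the device names from this setup (run as root):

```
# mark one half of the mirror as failed and remove it
mdadm /dev/md2 --fail /dev/sda2
mdadm /dev/md2 --remove /dev/sda2

# ... test the degraded array / reboot ...

# re-add the member; this starts a resync (watch /proc/mdstat)
mdadm /dev/md2 --add /dev/sda2
```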

Bootlog

```
[    2.622431] md: bind<sda3>

[    2.622781] md: bind<sda2>

[    2.624361] md: raid6 personality registered for level 6

[    2.624394] md: raid5 personality registered for level 5

[    2.624428] md: raid4 personality registered for level 4

[    2.628938] md: raid10 personality registered for level 10

[    2.653380] md: bind<sdb2>

[    2.664213] md: bind<sdb1>

[    2.665667] md/raid1:md1: active with 1 out of 2 mirrors

[    2.665711] md/raid1:md2: active with 2 out of 2 mirrors

[    2.665716] md1: detected capacity change from 0 to 948152565760

[    2.665794] md2: detected capacity change from 0 to 51599310848

[    2.665890] md: bind<sdb3>

[    2.666560]  md1:

[    2.666735]  md2:

[    2.667475] md/raid1:md3: active with 2 out of 2 mirrors

[    2.667557] md3: detected capacity change from 0 to 451870720

[    2.667962]  unknown partition table

[    2.668474]  md3: unknown partition table

[    2.677793]  unknown partition table

[    2.746030] EXT3-fs: barriers not enabled

[    2.794232] kjournald starting.  Commit interval 5 seconds

[    2.794310] EXT3-fs (md2): mounted filesystem with ordered data mode

```

mdadm.conf

```

# by default, scan all partitions (/proc/partitions) for MD superblocks.

# alternatively, specify devices to scan, using wildcards if desired.

DEVICE partitions

# auto-create devices with Debian standard permissions

CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system

HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts

MAILADDR root

# definitions of existing MD arrays

# This file was auto-generated on Sat, 29 Jan 2011 00:12:26 +0100

# by mkconf $Id$

ARRAY /dev/md1 level=raid1 num-devices=2 UUID=d64b55ad:ed326586:dcbc05fe:012849dc

ARRAY /dev/md2 level=raid1 num-devices=2 UUID=91a53d19:9ba717c0:dcbc05fe:012849dc

ARRAY /dev/md3 level=raid1 num-devices=2 UUID=bcbba589:1c2e1e7e:dcbc05fe:012849dc

```

cat /proc/mdstat

```

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]

md1 : active raid1 sdb1[1]

      925930240 blocks [2/1] [_U]

     

md2 : active raid1 sdb2[1] sda2[0]

      50389952 blocks [2/2] [UU]

     

md3 : active raid1 sdb3[1] sda3[0]

      441280 blocks [2/2] [UU]

     

unused devices: <none>
```

mdadm --detail /dev/md1 

```

/dev/md1:

        Version : 00.90

  Creation Time : Sat Jan 29 23:18:42 2011

     Raid Level : raid1

     Array Size : 925930240 (883.04 GiB 948.15 GB)

  Used Dev Size : 925930240 (883.04 GiB 948.15 GB)

   Raid Devices : 2

  Total Devices : 1

Preferred Minor : 1

    Persistence : Superblock is persistent

    Update Time : Wed Feb  2 09:46:59 2011

          State : clean, degraded

Active Devices : 1

Working Devices : 1

Failed Devices : 0

  Spare Devices : 0

           UUID : d64b55ad:ed326586:dcbc05fe:012849dc (local to host one)

         Events : 0.6453

    Number   Major   Minor   RaidDevice State

       0       0        0        0      removed

       1       8       17        1      active sync   /dev/sdb1

```

mdadm --detail /dev/md2

```
/dev/md2:

        Version : 00.90

  Creation Time : Sat Jan 29 23:18:47 2011

     Raid Level : raid1

     Array Size : 50389952 (48.06 GiB 51.60 GB)

  Used Dev Size : 50389952 (48.06 GiB 51.60 GB)

   Raid Devices : 2

  Total Devices : 2

Preferred Minor : 2

    Persistence : Superblock is persistent

    Update Time : Wed Feb  2 09:47:32 2011

          State : clean

Active Devices : 2

Working Devices : 2

Failed Devices : 0

  Spare Devices : 0

           UUID : 91a53d19:9ba717c0:dcbc05fe:012849dc (local to host one)

         Events : 0.1221

    Number   Major   Minor   RaidDevice State

       0       8        2        0      active sync   /dev/sda2

       1       8       18        1      active sync   /dev/sdb2

```

mdadm --detail /dev/md3

```

/dev/md3:

        Version : 00.90

  Creation Time : Sat Jan 29 23:18:52 2011

     Raid Level : raid1

     Array Size : 441280 (431.01 MiB 451.87 MB)

  Used Dev Size : 441280 (431.01 MiB 451.87 MB)

   Raid Devices : 2

  Total Devices : 2

Preferred Minor : 3

    Persistence : Superblock is persistent

    Update Time : Wed Feb  2 09:24:28 2011

          State : clean

Active Devices : 2

Working Devices : 2

Failed Devices : 0

  Spare Devices : 0

           UUID : bcbba589:1c2e1e7e:dcbc05fe:012849dc (local to host one)

         Events : 0.303

    Number   Major   Minor   RaidDevice State

       0       8        3        0      active sync   /dev/sda3

       1       8       19        1      active sync   /dev/sdb3

```

The home partition was not synced yet because it is NOT needed for booting, and syncing would take a long time. I don't want to do that several times because of the risk of a stale partition.

thx in advance

----------

## honp

Show us grub.conf  :Smile: 

----------

## chiefbag

Most probably it's using the UUIDs to assemble the arrays in that order, rather than the physical controller order.

Note the config below.

```
# This file was auto-generated on Sat, 29 Jan 2011 00:12:26 +0100

# by mkconf $Id$

ARRAY /dev/md1 level=raid1 num-devices=2 UUID=d64b55ad:ed326586:dcbc05fe:012849dc

ARRAY /dev/md2 level=raid1 num-devices=2 UUID=91a53d19:9ba717c0:dcbc05fe:012849dc

ARRAY /dev/md3 level=raid1 num-devices=2 UUID=bcbba589:1c2e1e7e:dcbc05fe:012849dc 
```
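Whether the kernel actually assembled the arrays under those UUIDs can be checked with something like this (a sketch):

```
# print the currently assembled arrays with their UUIDs
mdadm --detail --scan

# compare with the configured ones (path is /etc/mdadm/mdadm.conf on Debian/Ubuntu)
grep ^ARRAY /etc/mdadm/mdadm.conf
```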

----------

## chiefbag

Also, you will need to install GRUB on the second disk if you want it to boot as the master.
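With GRUB legacy that is usually done from the grub shell, once per disk; a sketch, assuming /boot is the third partition on each disk as in this setup:

```
grub> root (hd0,2)
grub> setup (hd0)
grub> root (hd1,2)
grub> setup (hd1)
grub> quit
```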

----------

## depontius

Just an offhand silly question.  When you re-plugged the disconnected drive in, did you give mdadm time to resync the RAID before unplugging the other drive?
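For reference, resync progress can be watched like this (a sketch):

```
# live view of all md arrays; a running resync shows a progress bar and ETA
watch -n 5 cat /proc/mdstat

# or per array; during a rebuild this also prints "Rebuild Status"
mdadm --detail /dev/md1
```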

----------

## cmp

menu.lst

```

default      0

fallback        1

timeout      3

# I am not sure if this is Seagate ST31000528A5

title           kernel 2.6.35-25-generic (hd1,2) 

root      (hd1,2)

kernel      /vmlinuz-2.6.35-25-generic root=/dev/md2 ro

initrd      /initrd.img-2.6.35-25-generic

savedefault

# I am not sure if this is Hitachi HDT721010SLA36

title           kernel 2.6.35-25-generic (hd0,2) 

root            (hd0,2)

kernel          /vmlinuz-2.6.35-25-generic root=/dev/md2 ro 

initrd          /initrd.img-2.6.35-25-generic

savedefault

```

grub.cfg

```
#

# DO NOT EDIT THIS FILE

#

# It is automatically generated by grub-mkconfig using templates

# from /etc/grub.d and settings from /etc/default/grub

#

### BEGIN /etc/grub.d/00_header ###

if [ -s $prefix/grubenv ]; then

  set have_grubenv=true

  load_env

fi

set default="0"

if [ "${prev_saved_entry}" ]; then

  set saved_entry="${prev_saved_entry}"

  save_env saved_entry

  set prev_saved_entry=

  save_env prev_saved_entry

  set boot_once=true

fi

function savedefault {

  if [ -z "${boot_once}" ]; then

    saved_entry="${chosen}"

    save_env saved_entry

  fi

}

function recordfail {

  set recordfail=1

  if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi

}

function load_video {

}

if [ "${recordfail}" = 1 ]; then

  set timeout=-1

else

  set timeout=1

fi

### END /etc/grub.d/00_header ###

### BEGIN /etc/grub.d/05_debian_theme ###

set menu_color_normal=white/black

set menu_color_highlight=black/light-gray

### END /etc/grub.d/05_debian_theme ###

### BEGIN /etc/grub.d/10_linux ###

menuentry 'Ubuntu, with Linux 2.6.35-27-generic' --class ubuntu --class gnu-linux --class gnu --class os {

   recordfail

   insmod raid

   insmod mdraid

   insmod part_msdos

   insmod ext2

   set root='(md3)'

   search --no-floppy --fs-uuid --set cb1c1bd7-2a15-4493-ad93-4a5c2de6e969

   linux   /vmlinuz-2.6.35-27-generic root=UUID=11e22944-1f8c-4ec7-90e0-6d6f67941848 ro  vga=788  quiet splash

   initrd   /initrd.img-2.6.35-27-generic

}

menuentry 'Ubuntu, with Linux 2.6.35-27-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os {

   recordfail

   insmod raid

   insmod mdraid

   insmod part_msdos

   insmod ext2

   set root='(md3)'

   search --no-floppy --fs-uuid --set cb1c1bd7-2a15-4493-ad93-4a5c2de6e969

   echo   'Loading Linux 2.6.35-27-generic ...'

   linux   /vmlinuz-2.6.35-27-generic root=UUID=11e22944-1f8c-4ec7-90e0-6d6f67941848 ro single  vga=788

   echo   'Loading initial ramdisk ...'

   initrd   /initrd.img-2.6.35-27-generic

}

menuentry 'Ubuntu, with Linux 2.6.35-25-generic' --class ubuntu --class gnu-linux --class gnu --class os {

   recordfail

   insmod raid

   insmod mdraid

   insmod part_msdos

   insmod ext2

   set root='(md3)'

   search --no-floppy --fs-uuid --set cb1c1bd7-2a15-4493-ad93-4a5c2de6e969

   linux   /vmlinuz-2.6.35-25-generic root=UUID=11e22944-1f8c-4ec7-90e0-6d6f67941848 ro  vga=788  quiet splash

   initrd   /initrd.img-2.6.35-25-generic

}

menuentry 'Ubuntu, with Linux 2.6.35-25-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os {

   recordfail

   insmod raid

   insmod mdraid

   insmod part_msdos

   insmod ext2

   set root='(md3)'

   search --no-floppy --fs-uuid --set cb1c1bd7-2a15-4493-ad93-4a5c2de6e969

   echo   'Loading Linux 2.6.35-25-generic ...'

   linux   /vmlinuz-2.6.35-25-generic root=UUID=11e22944-1f8c-4ec7-90e0-6d6f67941848 ro single  vga=788

   echo   'Loading initial ramdisk ...'

   initrd   /initrd.img-2.6.35-25-generic

}

menuentry 'Ubuntu, with Linux 2.6.35-22-generic' --class ubuntu --class gnu-linux --class gnu --class os {

   recordfail

   insmod raid

   insmod mdraid

   insmod part_msdos

   insmod ext2

   set root='(md3)'

   search --no-floppy --fs-uuid --set cb1c1bd7-2a15-4493-ad93-4a5c2de6e969

   linux   /vmlinuz-2.6.35-22-generic root=UUID=11e22944-1f8c-4ec7-90e0-6d6f67941848 ro  vga=788  quiet splash

   initrd   /initrd.img-2.6.35-22-generic

}

menuentry 'Ubuntu, with Linux 2.6.35-22-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os {

   recordfail

   insmod raid

   insmod mdraid

   insmod part_msdos

   insmod ext2

   set root='(md3)'

   search --no-floppy --fs-uuid --set cb1c1bd7-2a15-4493-ad93-4a5c2de6e969

   echo   'Loading Linux 2.6.35-22-generic ...'

   linux   /vmlinuz-2.6.35-22-generic root=UUID=11e22944-1f8c-4ec7-90e0-6d6f67941848 ro single  vga=788

   echo   'Loading initial ramdisk ...'

   initrd   /initrd.img-2.6.35-22-generic

}

### END /etc/grub.d/10_linux ###

### BEGIN /etc/grub.d/20_linux_xen ###

### END /etc/grub.d/20_linux_xen ###

### BEGIN /etc/grub.d/20_memtest86+ ###

menuentry "Memory test (memtest86+)" {

   insmod raid

   insmod mdraid

   insmod part_msdos

   insmod ext2

   set root='(md3)'

   search --no-floppy --fs-uuid --set cb1c1bd7-2a15-4493-ad93-4a5c2de6e969

   linux16   /memtest86+.bin

}

menuentry "Memory test (memtest86+, serial console 115200)" {

   insmod raid

   insmod mdraid

   insmod part_msdos

   insmod ext2

   set root='(md3)'

   search --no-floppy --fs-uuid --set cb1c1bd7-2a15-4493-ad93-4a5c2de6e969

   linux16   /memtest86+.bin console=ttyS0,115200n8

}

### END /etc/grub.d/20_memtest86+ ###

### BEGIN /etc/grub.d/30_os-prober ###

### END /etc/grub.d/30_os-prober ###

### BEGIN /etc/grub.d/40_custom ###

# This file provides an easy way to add custom menu entries.  Simply type the

# menu entries you want to add after this comment.  Be careful not to change

# the 'exec tail' line above.

### END /etc/grub.d/40_custom ###

### BEGIN /etc/grub.d/41_custom ###

if [ -f  $prefix/custom.cfg ]; then

  source $prefix/custom.cfg;

fi

### END /etc/grub.d/41_custom ###
```

*Quote:*

> Just an offhand silly question. When you re-plugged the disconnected drive in, did you give mdadm time to resync the RAID before unplugging the other drive?

 

@depontius yes of course

----------

## honp

Does it work? Have you pushed grub to your second disk?

----------

## cmp

Honestly, I didn't have time for that until now. I will try today and report back, of course. I am also not sure how; something with grub-install, I guess.

I heard that there is a way to disable the internal speed limitation of any hard disk under Linux?! Any suggestions?

----------

## honp

Try this:

http://www.gentoo.org/doc/en/gentoo-x86+raid+lvm2-quickinstall.xml - part Code Listing 2.33: Install grub on both disks.

I don't know what you mean by the inner "speed-limitation", but the best tool to play with hard disks is hdparm.
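For example (a sketch; both are read-only queries, but need root):

```
hdparm -I /dev/sda   # identification data of the drive
hdparm -tT /dev/sda  # quick cached/buffered read benchmark
```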

----------

## cmp

Thx for the link; I followed Code Listing 2.33.

```
grub> root (hd0,0)

grub> setup (hd0)

Error 17: Cannot mount selected partition

grub> root (hd1,0)

grub> setup (hd1)

 Checking if "/boot/grub/stage1" exists... no

 Checking if "/grub/stage1" exists... no

Error 15: File not found

grub>

```

After that I tried to install it again:

```
grub-install --no-floppy --recheck /dev/sdb

Probing devices to guess BIOS drives. This may take a long time.

Searching for GRUB installation directory ... found: /boot/grub

Installing GRUB to /dev/sdb as (hd1)...

Installation finished. No error reported.

This is the contents of the device map /boot/grub/device.map.

Check if this is correct or not. If any of the lines is incorrect,

fix it and re-run the script `grub-install'.

(hd0)   /dev/sda

(hd1)   /dev/sdb

(hd2)   /dev/sdc
```

I did the same on sdc:

```
grub-install --no-floppy --recheck /dev/sdc
```

The speed limit is something also called sync throttle or sync speed, something like that. I found it under

/proc/sys/dev/raid/speed_limit_max

and /proc/sys/dev/raid/speed_limit_min

I will do "cat /proc/sys/dev/raid/speed_limit_max > /proc/sys/dev/raid/speed_limit_min"
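The same can also be set through sysctl; a sketch, where the 50000 (KB/s per device) is only an example value:

```
sysctl dev.raid.speed_limit_min          # show the current floor
sysctl -w dev.raid.speed_limit_min=50000 # raise the resync floor
```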

Now I will reboot and see how GRUB likes the new configuration.

----------

## cmp

Looks like I killed my GRUB; I can't boot anymore now.

with root (hd0,0) I get Error 15: File not found

with root (hd1,0) I get Error 17: Cannot mount selected partition

with root (hd2,0) I get Error 15: File not found

----------

## gimpel

You are somewhat mixing up grub legacy and grub2 it seems.

menu.lst -> grub1

grub.cfg -> grub2, an Ubuntu'ish one

grub.conf != grub.cfg

http://www.gentoo.org/doc/en/gentoo-x86+raid+lvm2-quickinstall.xml <- HowTo for grub1, this will not work with grub2.

For grub2 the grub-shell commands would look somewhat like

```
insmod linux

insmod raid

insmod mdraid  #or mdraid09, depending on grub2 version and md metadata version

insmod part_msdos

insmod ext2

set root='(md3)'  #your /boot partition

linux   /kernel-<version> root=/dev/md2 #your kernel version

```

So, which version of grub is actually installed?

----------

## cmp

It's true, I mixed them up. Shall I delete grub2? I am more familiar with grub-legacy, or do you advise me to stay with the second version?

Right now grub1 is in the MBR, but I can only boot with the Super Grub Disk rescue CD.

grub-probe -V

grub-probe (GRUB) 1.98+20100804-5ubuntu3

grub-install --version

grub-install (GNU GRUB 0.97)

mdadm --version

mdadm - v2.6.7.1 - 15th October 2008

EDIT:

I removed grub2 and did a 

```

grub-install --recheck --no-floppy /dev/md3

Probing devices to guess BIOS drives. This may take a long time.

Searching for GRUB installation directory ... found: /boot/grub

Installing GRUB to /dev/sdb as (hd1,2)...

Installing GRUB to /dev/sdc as (hd1,2)...

Installation finished. No error reported.

```

Does that mean that both disks will now boot independently?

...to /dev/sdb as (hd1,2)...

...to /dev/sdc as (hd1,2)...

----------

## gimpel

(hd1,2) should be your /-partition, judging by your first post. Did you have /boot mounted when running grub-install?

----------

## cmp

Hi, thx for the response; yes, /boot was mounted.

I have done the whole sync now; it took 210 min, not so bad. I think I will test the RAID again tomorrow and report.

----------

## cmp

So ... tomorrow never comes.

It worked well. I disconnected one HD, then booted, synced, disconnected the other one, and now I am syncing again.

Only the GRUB (hdX,Y) mapping changes when a disk is missing, but that is not a real problem.

The last thing I need to know is:

mdX: detected capacity change from 0 to BIG NR

Why does the capacity change? And which capacity? The disks are from different manufacturers.

----------

