# [solved] Recovering RAID 5 Reiserfs Partition

## Goopy

Hi All,

I am attempting to recover a RAID 5 array that was running reiserfs.

When I try to mount it using

```
mount /dev/md4
```

I get the error

```
mount: wrong fs type, bad option, bad superblock on /dev/md4,
       missing codepage or other error
       (could this be the IDE device where you in fact use
       ide-scsi so that sr0 or sda or so is needed?)
       In some cases useful info is found in syslog - try
       dmesg | tail  or so
```

Running dmesg returns the following additional information:

```
ReiserFS: md4: warning: sh-2006: read_super_block: bread failed (dev md4, block 2, size 4096)
ReiserFS: md4: warning: sh-2006: read_super_block: bread failed (dev md4, block 16, size 4096)
ReiserFS: md4: warning: sh-2021: reiserfs_fill_super: can not find reiserfs on md4
```

It appears that my reiserfs superblock is corrupt; however, when I try to recover it using

```
reiserfsck --rebuild-sb /dev/md4
```

I get the following output:

```
Do you want to run this program?[N/Yes] (note need to type Yes if you do):Yes
bread: Cannot read the block (2): (Invalid argument).
reiserfs_open: bread failed reading block 2
bread: Cannot read the block (16): (Invalid argument).
reiserfs_open: bread failed reading block 16
reiserfs_open: the reiserfs superblock cannot be found on /dev/md4.
what the version of ReiserFS do you use[1-4]
        (1)   3.6.x
        (2) >=3.5.9 (introduced in the middle of 1999) (if you use linux 2.2, choose this one)
        (3) < 3.5.9 converted to new format (don't choose if unsure)
        (4) < 3.5.9 (this is very old format, don't choose if unsure)
        (X)   exit
1
Enter block size [4096]:
4096
glamdring / #
```

At which point I am back at the console!

Does anyone have any idea what is causing the application to crash? The other RAID partitions on this box are working 100% (2x RAID 1 and another RAID 5), and they use the same HDDs. There is no physical problem with the disks either.
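As a side note, one way to tell whether those bread failures come from the filesystem or from the device underneath is to try reading the same 4096-byte blocks straight off the block device with dd. A minimal sketch; the demo runs against a scratch file so it can be tried anywhere, but on the affected box you would pass /dev/md4:

```shell
#!/bin/sh
# check_blocks DEV: try to read the same 4 KiB blocks (2 and 16) that
# reiserfsck complained about, straight off the block device. If dd also
# fails with "Invalid argument", the problem is below the filesystem.
check_blocks() {
    dev=$1
    for blk in 2 16; do
        if dd if="$dev" bs=4096 skip="$blk" count=1 of=/dev/null 2>/dev/null; then
            echo "block $blk: readable"
        else
            echo "block $blk: READ FAILED"
        fi
    done
}

# Demo against a scratch file so the sketch runs without the real array;
# substitute /dev/md4 on the affected machine.
dd if=/dev/zero of=/tmp/scratch.img bs=4096 count=32 2>/dev/null
check_blocks /tmp/scratch.img
```

If dd itself fails here, the filesystem tools never had a chance; the md device has to be fixed first.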

- Chris

*Last edited by Goopy on Mon Dec 05, 2005 11:48 pm; edited 1 time in total*

----------

## Crimson Rider

Hi, I've got some experience in these matters, so I'd like to try and help.

Could you post your /proc/mdstat?

This answer brought to you by the Adopt an unanswered post initiative

----------

## Goopy

Hi,

Thanks for the quick response   :Very Happy: 

```
Personalities : [raid0] [raid1] [raid5] [multipath]
md1 : active raid1 sdb1[1] sda1[0]
      497856 blocks [2/2] [UU]
md5 : active raid5 sdb6[1] sda6[0] hdc1[3] hdb1[2]
      234396096 blocks level 5, 32k chunk, algorithm 2 [4/4] [UUUU]
md0 : active raid1 sdb4[1] sda4[0]
      38073984 blocks [2/2] [UU]
unused devices: <none>
```

As you can see, /dev/md4 isn't even running.
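A quick way to confirm that at a glance is to pull the "name : state" pairs out of /proc/mdstat. A small sketch, run here against a pasted copy of the output above so it works anywhere (on the box itself you would read /proc/mdstat directly):

```shell
#!/bin/sh
# Extract "name state" pairs from mdstat-style output. md4 never shows up,
# confirming the kernel has not assembled that array at all.
mdstat='Personalities : [raid0] [raid1] [raid5] [multipath]
md1 : active raid1 sdb1[1] sda1[0]
md5 : active raid5 sdb6[1] sda6[0] hdc1[3] hdb1[2]
md0 : active raid1 sdb4[1] sda4[0]
unused devices: <none>'

echo "$mdstat" | awk '$1 ~ /^md/ && $2 == ":" { print $1, $3 }'
```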

Just so you can see, here is my /etc/fstab:

```
/dev/md1                /boot           reiserfs        noauto,noatime,notail  1 1
/dev/md0                /               reiserfs        noatime                0 0
/dev/sda2               none            swap            sw,pri=1               0 0
/dev/sdb2               none            swap            sw,pri=1               0 0
/dev/md4                /home           reiserfs        noatime                0 1
/dev/md5                /public         reiserfs        noatime                0 1
```

and my /etc/raidtab:

```
raiddev /dev/md0
raid-level      1
nr-raid-disks   2
chunk-size      32
persistent-superblock   1
device          /dev/sda4
raid-disk       0
device          /dev/sdb4
raid-disk       1

raiddev /dev/md1
raid-level      1
nr-raid-disks   2
chunk-size      32
persistent-superblock   1
device          /dev/sda1
raid-disk       0
device          /dev/sdb1
raid-disk       1

raiddev /dev/md4
raid-level      5
nr-raid-disks   3
chunk-size      32
persistent-superblock   1
parity-algorithm left-symmetric
device          /dev/sda5
raid-disk       0
device          /dev/sdb5
raid-disk       1
device          /dev/hdb2
raid-disk       2

raiddev /dev/md5
raid-level      5
nr-raid-disks   4
chunk-size      32
persistent-superblock   1
parity-algorithm left-symmetric
device          /dev/sda6
raid-disk       0
device          /dev/sdb6
raid-disk       1
device          /dev/hdb1
raid-disk       2
device          /dev/hdc1
raid-disk       3
```

----------

## Crimson Rider

No problem  :Smile: 

Is there a reason md4 is not running, or is that the problem itself? You can't, of course, mount an array that isn't running.

Did you stop md4 yourself, or did it stop some other way?

----------

## Goopy

I don't know why /dev/md4 is not running. I didn't manually stop it. The server stopped responding completely and I had to do a hard reboot. All the other arrays came up but were missing the partitions on disk /dev/sdb, so after checking that the disk was physically OK, I used raidhotadd to re-add the partitions to the correct arrays.

However, I couldn't add the one back to /dev/md4, and that's related to the problem I currently have.

I think the array not starting is the number one problem. If I can get the array to start, then maybe I could rebuild the superblock?

----------

## Crimson Rider

Most definitely get the array started first.

Dumb question probably, but why did you use raidhotadd? As far as I know, you should only have to raidstart the array.

----------

## Goopy

When I issue the command raidstart /dev/md4, I get the following in dmesg:

```
md: autorun ...
md: considering sdb5 ...
md:  adding sdb5 ...
md:  adding hdb2 ...
md:  adding sda5 ...
md: created md4
md: bind<sda5>
md: bind<hdb2>
md: bind<sdb5>
md: running: <sdb5><hdb2><sda5>
md: kicking non-fresh sdb5 from array!
md: unbind<sdb5>
md: export_rdev(sdb5)
md: md4: raid array is not clean -- starting background reconstruction
raid5: device hdb2 operational as raid disk 2
raid5: device sda5 operational as raid disk 0
raid5: cannot start dirty degraded array for md4
RAID5 conf printout:
 --- rd:3 wd:2 fd:1
 disk 0, o:1, dev:sda5
 disk 2, o:1, dev:hdb2
raid5: failed to run raid set md4
md: pers->run() failed ...
md: do_md_run() returned -22
md: md4 stopped.
md: unbind<hdb2>
md: export_rdev(hdb2)
md: unbind<sda5>
md: export_rdev(sda5)
md: ... autorun DONE.
```

I used raidhotadd to re-add the partitions to the degraded arrays (because the disk itself is fine).
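The telling line in that dmesg output is "kicking non-fresh sdb5 from array!": sdb5's event counter lags behind the other members' (it missed updates before the crash), so md ejects it, leaving a dirty degraded array that it then refuses to auto-start. The counters can be compared with mdadm --examine on each member and looking at the Events line. A trivial sketch of that comparison, with made-up counter values rather than real output from this box:

```shell
#!/bin/sh
# Event counters as reported by 'mdadm --examine <member>' on each device.
# These values are illustrative, not taken from the poster's machine.
events_sda5=157
events_hdb2=157
events_sdb5=142   # lagging: this member missed updates before the crash

# A member whose counter is behind is "non-fresh" and gets kicked on
# assembly unless mdadm is told to --force it back in.
if [ "$events_sdb5" -lt "$events_sda5" ]; then
    echo "sdb5 is non-fresh: it will be kicked unless assembled with --force"
fi
```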

----------

## Goopy

OK, using mdadm I have been able to partially reassemble the array:

```
glamdring / # mdadm -v --assemble --run /dev/md4 /dev/sdb5 /dev/sda5 /dev/hdb2
mdadm: looking for devices for /dev/md4
mdadm: /dev/sdb5 is identified as a member of /dev/md4, slot 1.
mdadm: /dev/sda5 is identified as a member of /dev/md4, slot 0.
mdadm: /dev/hdb2 is identified as a member of /dev/md4, slot 2.
mdadm: added /dev/sdb5 to /dev/md4 as 1
mdadm: added /dev/hdb2 to /dev/md4 as 2
mdadm: added /dev/sda5 to /dev/md4 as 0
mdadm: failed to RUN_ARRAY /dev/md4: Invalid argument
```

However, it won't start:

```
glamdring / # mdadm  --run /dev/md4
mdadm: failed to run array /dev/md4: Invalid argument
```

At least now it's listed in /proc/mdstat:

```
glamdring / # cat /proc/mdstat
Personalities : [raid0] [raid1] [raid5] [multipath]
md1 : active raid1 sdb1[1] sda1[0]
      497856 blocks [2/2] [UU]
md4 : inactive sda5[0] hdb2[2]
      156312256 blocks
md5 : active raid5 sdb6[1] sda6[0] hdc1[3] hdb1[2]
      234396096 blocks level 5, 32k chunk, algorithm 2 [4/4] [UUUU]
md0 : active raid1 sdb4[1] sda4[0]
      38073984 blocks [2/2] [UU]
unused devices: <none>
```

So we are getting somewhere.

----------

## Goopy

Oh happiness!

It's fixed!

```
glamdring / # mdadm -v --assemble --run --force /dev/md4 /dev/sdb5 /dev/sda5 /dev/hdb2
mdadm: looking for devices for /dev/md4
mdadm: /dev/sdb5 is identified as a member of /dev/md4, slot 1.
mdadm: /dev/sda5 is identified as a member of /dev/md4, slot 0.
mdadm: /dev/hdb2 is identified as a member of /dev/md4, slot 2.
mdadm: added /dev/sdb5 to /dev/md4 as 1
mdadm: added /dev/hdb2 to /dev/md4 as 2
mdadm: added /dev/sda5 to /dev/md4 as 0
mdadm: /dev/md4 has been started with 2 drives (out of 3).
```

I forgot the --force  :Smile:

I've re-added the "broken" partition and now it's busy resyncing  :Smile:

Thank you very much for your help.
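While the resync runs, /proc/mdstat gains a recovery progress line, which can be watched with something like watch -n5 cat /proc/mdstat, or the percentage pulled out directly. A sketch against an illustrative progress line (the numbers below are made up, not from this box):

```shell
#!/bin/sh
# An illustrative /proc/mdstat recovery line (values are made up):
line='      [=>...................]  recovery =  7.3% (5730944/78156128) finish=64.2min speed=18796K/sec'

# Pull out just the completion percentage:
echo "$line" | grep -o '[0-9.]*%'
```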

----------

## Crimson Rider

Glad to hear it.

"A broken array can make an atheist pray." :Smile:

----------

## charlesnadeau

 *Goopy wrote:*

> Oh happiness!
>
> Its fixed!
> ...

I am in the exact same situation. How can I modify the command above to tell mdadm that I added a drive I want to use as a "hot spare"? And how do I edit /etc/mdadm.conf afterward?

Thanks!

Charles
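For readers landing here with the same question: when the array already has its full complement of active members, mdadm's --add registers the new device as a hot spare rather than as a rebuild target, and an ARRAY line in /etc/mdadm.conf can record the expected spare count via spares=. A sketch with a hypothetical device name, to be adapted to the actual system (shown as a commented fragment, not tested against a live array):

```shell
# Hypothetical new disk /dev/hdd1; with all three active slots of md4
# already filled, --add turns it into a hot spare instead of triggering
# a rebuild:
#
#   mdadm /dev/md4 --add /dev/hdd1
#
# /etc/mdadm.conf can then record the spare, e.g.:
#
#   DEVICE /dev/sd*[0-9] /dev/hd*[0-9]
#   ARRAY  /dev/md4 level=raid5 num-devices=3 spares=1
```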

----------

