# raid5 superblock and version issues

## Cr0t

My RAID5 array had two failing drives, so I had to start over. I created a new md0 array and, over the last month, re-added the two missing drives. The problem is that during boot the wrong (old) md0 gets detected, so I have to stop the array and re-assemble it by hand.

I assume the issue is in the superblock. I guess I could pull out the old drives, kill the old superblock, and just re-add them, but the resync takes forever -- several hours. Is it possible to, I don't know... force the new one? Overwrite the old superblock with the right one?
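It should be possible to remove a stale superblock in place: `mdadm --zero-superblock` erases only the md metadata it is pointed at and by itself starts no resync. A hedged dry-run sketch of the idea -- `/dev/sdX1` is a placeholder for a partition that `mdadm --examine` shows still carrying the old array's superblock, so nothing here is run directly:

```shell
# Dry run: collect the commands first so the plan can be reviewed;
# feed the output to a root shell only once it looks right.
# /dev/sdX1 is a placeholder -- confirm the stale members with
# "mdadm --examine" before zeroing anything.
cmds='mdadm --stop /dev/md0
mdadm --zero-superblock --metadata=0.90 /dev/sdX1
mdadm -A /dev/md0 /dev/sd[bcdefgh]1'

printf '%s\n' "$cmds"
```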

dmesg from boot-up:

```
...
md: Waiting for all devices to be available before autodetect
md: If you don't use raid, use raid=noautodetect
md: Autodetecting RAID arrays.
md: invalid raid superblock magic on sdg1
md: sdg1 does not have a valid v0.90 superblock, not importing!
md: invalid raid superblock magic on sdh1
md: sdh1 does not have a valid v0.90 superblock, not importing!
md: Scanned 7 and added 5 devices.
md: autorun ...
md: considering sdb1 ...
md:  adding sdb1 ...
md:  adding sdf1 ...
md:  adding sde1 ...
md:  adding sdd1 ...
md:  adding sdc1 ...
md: created md0
md: bind<sdc1>
md: bind<sdd1>
md: bind<sde1>
md: bind<sdf1>
md: bind<sdb1>
md: running: <sdb1><sdf1><sde1><sdd1><sdc1>
raid5: device sdb1 operational as raid disk 3
raid5: device sdf1 operational as raid disk 5
raid5: device sde1 operational as raid disk 4
raid5: device sdd1 operational as raid disk 1
raid5: device sdc1 operational as raid disk 6
raid5: allocated 7430kB for md0
3: w=1 pa=0 pr=7 m=1 a=2 r=7 op1=0 op2=0
5: w=2 pa=0 pr=7 m=1 a=2 r=7 op1=0 op2=0
4: w=3 pa=0 pr=7 m=1 a=2 r=7 op1=0 op2=0
1: w=4 pa=0 pr=7 m=1 a=2 r=7 op1=0 op2=0
6: w=5 pa=0 pr=7 m=1 a=2 r=7 op1=0 op2=0
raid5: not enough operational devices for md0 (2/7 failed)
RAID5 conf printout:
 --- rd:7 wd:5
 disk 1, o:1, dev:sdd1
 disk 3, o:1, dev:sdb1
 disk 4, o:1, dev:sde1
 disk 5, o:1, dev:sdf1
 disk 6, o:1, dev:sdc1
raid5: failed to run raid set md0
md: pers->run() failed ...
md: do_md_run() returned -5
md: md0 still in use.
md: ... autorun DONE.
...
```

My workaround, run as the last boot-up step:

```
mdadm --stop /dev/md0
mdadm -A /dev/md0 /dev/sd[bcdefgh]1
mount /dev/md0 /home
mount -a
/etc/init.d/nfs zap
/etc/init.d/nfs restart
```
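Rather than re-assembling on every boot, one common approach (an assumption here, not something from this thread: it presumes userspace assembly, e.g. with `raid=noautodetect` on the kernel command line so the in-kernel v0.90 autodetect is skipped) is to pin the correct array by UUID in `mdadm.conf` -- the UUID below is the one from the `mdadm --detail` output further down, and the file may live at `/etc/mdadm/mdadm.conf` depending on the distro:

```
ARRAY /dev/md0 UUID=1408e586:f3d5b701:013a2319:73c4430d
```

`mdadm --detail --scan` prints a line in this format for each running array, which can be appended to the config file directly.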

dmesg after my script runs at the end of boot:

```
...
md: md0 stopped.
md: unbind<sdb1>
md: export_rdev(sdb1)
md: unbind<sdf1>
md: export_rdev(sdf1)
md: unbind<sde1>
md: export_rdev(sde1)
md: unbind<sdd1>
md: export_rdev(sdd1)
md: unbind<sdc1>
md: export_rdev(sdc1)
md: md0 stopped.
md: bind<sdd1>
md: bind<sde1>
md: bind<sdf1>
md: bind<sdc1>
md: bind<sdg1>
md: bind<sdh1>
md: bind<sdb1>
raid5: md0 is not clean -- starting background reconstruction
raid5: device sdb1 operational as raid disk 0
raid5: device sdh1 operational as raid disk 6
raid5: device sdg1 operational as raid disk 5
raid5: device sdc1 operational as raid disk 4
raid5: device sdf1 operational as raid disk 3
raid5: device sde1 operational as raid disk 2
raid5: device sdd1 operational as raid disk 1
raid5: allocated 7430kB for md0
0: w=1 pa=0 pr=7 m=1 a=2 r=7 op1=0 op2=0
6: w=2 pa=0 pr=7 m=1 a=2 r=7 op1=0 op2=0
5: w=3 pa=0 pr=7 m=1 a=2 r=7 op1=0 op2=0
4: w=4 pa=0 pr=7 m=1 a=2 r=7 op1=0 op2=0
3: w=5 pa=0 pr=7 m=1 a=2 r=7 op1=0 op2=0
2: w=6 pa=0 pr=7 m=1 a=2 r=7 op1=0 op2=0
1: w=7 pa=0 pr=7 m=1 a=2 r=7 op1=0 op2=0
raid5: raid level 5 set md0 active with 7 out of 7 devices, algorithm 2
RAID5 conf printout:
 --- rd:7 wd:7
 disk 0, o:1, dev:sdb1
 disk 1, o:1, dev:sdd1
 disk 2, o:1, dev:sde1
 disk 3, o:1, dev:sdf1
 disk 4, o:1, dev:sdc1
 disk 5, o:1, dev:sdg1
 disk 6, o:1, dev:sdh1
md0: detected capacity change from 0 to 3000629723136
 md0:
md: resync of RAID array md0
md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
md: using 128k window, over a total of 488383744 blocks.
 unknown partition table
...
```

Check out the version below: 1.01. During boot-up, however, since the old RAID is detected, it finds an old version. I am assuming this is due to the superblock issue.

```
/dev/md0:
        Version : 1.01
  Creation Time : Mon Dec  7 19:33:40 2009
     Raid Level : raid5
     Array Size : 2930302464 (2794.55 GiB 3000.63 GB)
  Used Dev Size : 488383744 (465.76 GiB 500.10 GB)
   Raid Devices : 7
  Total Devices : 7
    Persistence : Superblock is persistent
    Update Time : Fri Mar 26 11:55:55 2010
          State : clean
 Active Devices : 7
Working Devices : 7
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 128K
           Name : bigboy:0  (local to host bigboy)
           UUID : 1408e586:f3d5b701:013a2319:73c4430d
         Events : 142679

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       49        1      active sync   /dev/sdd1
       2       8       65        2      active sync   /dev/sde1
       3       8       81        3      active sync   /dev/sdf1
       5       8       33        4      active sync   /dev/sdc1
       6       8       97        5      active sync   /dev/sdg1
       7       8      113        6      active sync   /dev/sdh1
```
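A two-generations-of-metadata situation like this can be confirmed per component with `mdadm --examine`, which reads the superblock stored on the partition itself (unlike `--detail`, which queries the running array) -- and the on-disk superblock is what the kernel's autodetect actually sees. A dry-run sketch that just prints the per-device commands; run them in a root shell to get the real answers:

```shell
# Print the per-component inspection commands (dry run). Each one
# would report the superblock version and array UUID stored on that
# partition; a member showing the old UUID or v0.90 metadata is what
# the kernel picks up at boot.
devs="sdb1 sdc1 sdd1 sde1 sdf1 sdg1 sdh1"
for dev in $devs; do
    echo "mdadm --examine /dev/$dev"
done
```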

----------

## Cr0t

I ran

```
mdadm --zero-superblock --metadata=0.90 /dev/sd*
```

as suggested, but now I get:

```
17:57:12^root@bigboy:~ > mdadm -A /dev/md0 /dev/sd[bcdefh]1
mdadm: no RAID superblock on /dev/sdf1
mdadm: /dev/sdf1 has no superblock - assembly aborted
```
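One plausible explanation for this failure (an assumption, not confirmed in the thread): the glob `/dev/sd*` expands to the whole disks and every partition alike, so the zeroing also ran against devices that are current members of the new array, `sdf`/`sdf1` among them. A quick illustration of the expansion behaviour with plain pattern matching, no real devices needed:

```shell
# "/dev/sd*" matches whole disks and partitions alike, so an active
# member like sdf1 would be zeroed along with the stale ones.
matched=""
for name in sdb sdb1 sdf sdf1 sdg1; do
    case "/dev/$name" in
        /dev/sd*) matched="$matched $name" ;;
    esac
done
echo "matched:$matched"
```

A narrower pattern naming only the partitions that actually carry the stale superblock would avoid touching live members.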

----------

