# mdadm raid5 recovery - no superblock assembly aborted

## ali3nx

I've been working with a Xeon server that I configured with an mdadm-based RAID 5 array just a few weeks ago, but the onboard SATA controller turned out to be very unstable with mdadm and has now, through repeated ATA resets, dropped three of the four hard drives from the array. I've been trying to reassemble the array, but I'm politely informed that the superblocks on the RAID devices are missing.

So far I've tried using mdadm --assemble to start the array, but every time I'm met with the following:

```
livecd ~ # mdadm --assemble /dev/md0 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
mdadm: cannot open device /dev/sda3: Device or resource busy
mdadm: /dev/sda3 has no superblock - assembly aborted
```

I really need to approach recovering this array without doing any irreparable damage to the array configuration, as it contains critical data that must be recovered. We do have a fifth drive in the server on the workbench, connected to an Adaptec 3405 AACRAID controller, that I'm planning to use for backing up data from the recovered RAID 5 array; it could also be used during recovery if it would improve the odds of reassembling the array. All of the hard drives are Seagate SATA II disks, either new or around six months old. Three of the four are less than one month old; the fourth is not the exact same model as the other three, but its capacity in bytes and cylinder count are an exact match to the newer three, so any side effects from the drive mismatch should be negligible. The AACRAID controller is planned to replace mdadm RAID, given how unreliable the onboard controller has been.

I'm sure the failed reassembly is a result of the array degrading to the state of only having one drive shown as active, as the examples below indicate. Fortunately, I added an internal bitmap to the array after it was last fully synced, so this must be recoverable somehow. Unfortunately, the only copy of the array's mdadm.conf was on the failed array itself, or I'm sure the correct UUID would have helped with starting the array.

```
livecd ~ # mdadm -E /dev/sd[a,b,c,d]3
/dev/sda3:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 220b505f:c70cab8c:cb201669:f728008a (local to host livecd)
  Creation Time : Wed Aug  4 10:53:46 2010
     Raid Level : raid5
  Used Dev Size : 974542976 (929.40 GiB 997.93 GB)
     Array Size : 2923628928 (2788.19 GiB 2993.80 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Update Time : Mon Aug 16 02:25:57 2010
          State : active
Internal Bitmap : present
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 276632e0 - correct
         Events : 101857
         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     0       8        3        0      active sync   /dev/sda3
   0     0       8        3        0      active sync   /dev/sda3
   1     1       8       19        1      active sync   /dev/sdb3
   2     2       8       51        2      active sync   /dev/sdd3
   3     3       8       35        3      active sync   /dev/sdc3

/dev/sdb3:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 220b505f:c70cab8c:cb201669:f728008a (local to host livecd)
  Creation Time : Wed Aug  4 10:53:46 2010
     Raid Level : raid5
  Used Dev Size : 974542976 (929.40 GiB 997.93 GB)
     Array Size : 2923628928 (2788.19 GiB 2993.80 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Update Time : Tue Aug 24 18:35:31 2010
          State : active
Internal Bitmap : present
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 27869ad5 - correct
         Events : 1476201
         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     3       8       35        3      active sync   /dev/sdc3
   0     0       0        0        0      removed
   1     1       8       19        1      active sync   /dev/sdb3
   2     2       8       51        2      active sync   /dev/sdd3
   3     3       8       35        3      active sync   /dev/sdc3

/dev/sdc3:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 220b505f:c70cab8c:cb201669:f728008a (local to host livecd)
  Creation Time : Wed Aug  4 10:53:46 2010
     Raid Level : raid5
  Used Dev Size : 974542976 (929.40 GiB 997.93 GB)
     Array Size : 2923628928 (2788.19 GiB 2993.80 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Update Time : Tue Aug 24 18:35:31 2010
          State : active
Internal Bitmap : present
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 27869ac1 - correct
         Events : 1476201
         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     1       8       19        1      active sync   /dev/sdb3
   0     0       0        0        0      removed
   1     1       8       19        1      active sync   /dev/sdb3
   2     2       8       51        2      active sync   /dev/sdd3
   3     3       8       35        3      active sync   /dev/sdc3

/dev/sdd3:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 220b505f:c70cab8c:cb201669:f728008a (local to host livecd)
  Creation Time : Wed Aug  4 10:53:46 2010
     Raid Level : raid5
  Used Dev Size : 974542976 (929.40 GiB 997.93 GB)
     Array Size : 2923628928 (2788.19 GiB 2993.80 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Update Time : Tue Aug 24 18:35:31 2010
          State : active
Internal Bitmap : present
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 27869ae3 - correct
         Events : 1476201
         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     2       8       51        2      active sync   /dev/sdd3
   0     0       0        0        0      removed
   1     1       8       19        1      active sync   /dev/sdb3
   2     2       8       51        2      active sync   /dev/sdd3
   3     3       8       35        3      active sync   /dev/sdc3
```
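The event counters in the output above tell the story: /dev/sda3 stopped at 101857 while the other three members agree at 1476201, so sda3 is the stale member that was dropped first. One way to spot the odd one out programmatically is to compare the Events values across the superblocks. A minimal sketch (the sample text is trimmed from the output above; the function name is mine, not an mdadm tool):

```python
import re

# Trimmed sample of the `mdadm -E` output above: device header lines
# followed by their Events counters.
examine_output = """
/dev/sda3:
         Events : 101857
/dev/sdb3:
         Events : 1476201
/dev/sdc3:
         Events : 1476201
/dev/sdd3:
         Events : 1476201
"""

def stale_members(text):
    """Parse mdadm -E text; return ({device: events}, [stale devices])."""
    events = {}
    device = None
    for line in text.splitlines():
        m = re.match(r"(/dev/\S+):", line)
        if m:
            device = m.group(1)
            continue
        m = re.search(r"Events\s*:\s*(\d+)", line)
        if m and device:
            events[device] = int(m.group(1))
    newest = max(events.values())
    # Any member whose event count lags the newest fell out of the array.
    stale = sorted(d for d, e in events.items() if e < newest)
    return events, stale

events, stale = stale_members(examine_output)
print(stale)  # → ['/dev/sda3']
```

The members with the highest, matching event count are the ones whose data is current; the lagging member is what --force would have to override.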

----------

## ali3nx

I appreciate any follow-up replies that might help anyone with a similar issue, but stand down red alert: I managed to get the array online and recovered  :Smile: 

The array originally wouldn't start, claiming that too many devices were missing. The crucial detail is that you must stop the array after EVERY failed assembly attempt, or every following attempt will fail to start the array, even when using mdadm --assemble --force, because the partitions in the array will already have been activated, leaving you with the popular "device is busy or in use" warning...

```
mdadm -S /dev/md0
```

^ = win   :Embarassed: 
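Putting that together, the working recovery flow amounts to stopping any half-assembled array before each retry. A hedged sketch of that sequence, using the device names from the output above (run as root, and understand that --force rewrites event counters on the stale member before trying it on critical data):

```shell
# Stop any partially assembled array first; otherwise the member
# partitions stay busy and every later assemble attempt fails.
mdadm -S /dev/md0

# Retry assembly. --force lets mdadm override the lagging event
# count on the member that was dropped first (here /dev/sda3);
# the array may come up degraded without it.
mdadm --assemble --force /dev/md0 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3

# Verify the array state before mounting anything or re-adding
# the stale member.
cat /proc/mdstat
mdadm --detail /dev/md0
```

Back up the critical data first; resync or re-adding the stale member can wait until the array's contents are safe elsewhere.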

----------

