# [SOLVED] mdadm: cannot --add or --re-add

## cfgauss

mdadm 3.2.3 reported

```
A DegradedArray event had been detected on md device /dev/md2.

P.S. The /proc/mdstat file currently contains the following:

...
md2 : active raid5 sdb3[2] sda3[3](F) sdc3[1]
      195318016 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU]
...
```

so I removed sda3 from md2. 
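For reference, the removal was the usual fail-then-remove pair (reconstructed from memory, so treat it as a sketch rather than an exact transcript):

```shell
# Mark the member as faulty (md had already flagged it (F), so this
# may be a no-op), then pull it out of the array.
mdadm /dev/md2 --fail /dev/sda3
mdadm /dev/md2 --remove /dev/sda3
```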

Now,

```
~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid5 sda2[0] sdb2[2] sdc2[1]
      9767296 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

md3 : active raid5 sdb4[2] sda4[0] sdc4[1]
      771296512 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

md2 : active raid5 sdb3[2] sdc3[1]
      195318016 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU]

md0 : active raid1 sda1[0] sdb1[2](S) sdc1[1]
      192640 blocks [2/2] [UU]
```

When I try to add it back in, I get these messages:

```
~# mdadm /dev/md2 --add /dev/sda3
mdadm: /dev/sda3 reports being an active member for /dev/md2, but a --re-add fails.
mdadm: not performing --add as that would convert /dev/sda3 in to a spare.
mdadm: To make this a spare, use "mdadm --zero-superblock /dev/sda3" first.
~# mdadm /dev/md2 --re-add /dev/sda3
mdadm: --re-add for /dev/sda3 to /dev/md2 is not possible
```

Here's an excerpt from dmesg:

```
md/raid:md2: read error not correctable (sector 43992072 on sda3).
md/raid:md2: read error not correctable (sector 43992080 on sda3).
md/raid:md2: read error not correctable (sector 43992088 on sda3).
md/raid:md2: read error not correctable (sector 43992096 on sda3).
md/raid:md2: read error not correctable (sector 43992104 on sda3).
md/raid:md2: read error not correctable (sector 43992112 on sda3).
md/raid:md2: read error not correctable (sector 43992120 on sda3).
md/raid:md2: read error not correctable (sector 43992128 on sda3).
md/raid:md2: read error not correctable (sector 43992136 on sda3).
md/raid:md2: read error not correctable (sector 43992144 on sda3).
RAID conf printout:
 --- level:5 rd:3 wd:2
 disk 0, o:0, dev:sda3
 disk 1, o:1, dev:sdc3
 disk 2, o:1, dev:sdb3
RAID conf printout:
 --- level:5 rd:3 wd:2
 disk 1, o:1, dev:sdc3
 disk 2, o:1, dev:sdb3
md: unbind<sda3>
md: export_rdev(sda3)
```

But SMART doesn't flag sda as broken:

```
~# smartctl -a /dev/sda
...
SMART overall-health self-assessment test result: PASSED
...
```

I'm an mdadm neophyte and would welcome any advice on how to proceed from here.

Thanks.

[SOLVED] See druggo's solution, below. [/SOLVED]

*Last edited by cfgauss on Wed Jul 04, 2012 5:08 pm; edited 1 time in total*

----------

## druggo

That 'PASSED' is not reliable; post the full output of smartctl.
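It can also help to see what md itself thinks of the partition; both of these are read-only and safe to run:

```shell
# Dump the md superblock still sitting on the failed partition
# (event count, array UUID, member slot).
mdadm --examine /dev/sda3

# Compare against the array's own view of its members.
mdadm --detail /dev/md2
```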

----------

## cfgauss

 *druggo wrote:*   

> That 'PASSED' is not reliable; post the full output of smartctl.

 

Here is the full output of `smartctl -a /dev/sda`.

----------

## druggo

focus on these items:

 *Quote:*   

> 5 Reallocated_Sector_Ct   0x0033   182   182   140    Pre-fail  Always       -       137
> 196 Reallocated_Event_Count 0x0032   063   063   000    Old_age   Always       -       137
> 197 Current_Pending_Sector  0x0012   187   185   000    Old_age   Always       -       1128
> ...

 

Too many bad sectors; your drive is dying. Replace it ASAP.

----------

## cfgauss

 *druggo wrote:*   

> focus on these items:
> 
>  *Quote:*   5 Reallocated_Sector_Ct   0x0033   182   182   140    Pre-fail  Always       -       137
> 196 Reallocated_Event_Count 0x0032   063   063   000    Old_age   Always       -       137
> ...

 

I will do that. The dying /dev/sda has 4 partitions, all of which participate in the RAID arrays. If I take the dying drive out and partition a new one with the same layout, what mdadm commands are needed to restore the arrays?

Thanks for your help.

----------

## druggo

`mdadm /dev/mdX -a /dev/sdaY` is enough.

Remember to install grub on the new drive.
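Spelled out, the whole replacement goes roughly like this. The device names below are examples for this thread's layout; double-check which disk is the new one and which is a surviving member before running anything:

```shell
# Copy a surviving member's partition table to the blank replacement
# (MBR disks; here sdb is a survivor and sda is the NEW drive -- verify!).
sfdisk -d /dev/sdb | sfdisk /dev/sda

# Add each new partition back into its array; md starts rebuilding
# each one automatically.
mdadm /dev/md0 --add /dev/sda1
mdadm /dev/md1 --add /dev/sda2
mdadm /dev/md2 --add /dev/sda3
mdadm /dev/md3 --add /dev/sda4

# Watch the resync progress.
cat /proc/mdstat
```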

----------

## cfgauss

 *druggo wrote:*   

> `mdadm /dev/mdX -a /dev/sdaY` is enough.
> 
> Remember to install grub on the new drive.

 

This worked! Here's the article I followed in restoring the RAID arrays.

My /dev/md0 contains my /boot partition (I now suspect having /boot on a RAID partition is a bad idea), and I needed to go into the BIOS to change the hard drive boot order of the three drives; without that reordering it wouldn't boot. I ended up not having to reinstall grub.

Many thanks for saving my /home partition!

----------

## AngelKnight

The new drive, once you get it, probably still needs to have grub blocks put on it if you want it to be able to serve as BIOS disk 0x80 in the future and boot the system.

----------

## cfgauss

 *AngelKnight wrote:*   

> The new drive, once you get it, probably still needs to have grub blocks put on it if you want it to be able to serve as BIOS disk 0x80 in the future and boot the system.

 

My new disk is /dev/sdc. Is this the correct install command?

```
# grub-install /dev/sdc
```

----------

