# [SOLVED] Raid1 recovery restarts at >80%

## aapash

Raid1 recovery restarts at >80%

Raid configuration:

```
Server ~ # cat /proc/mdstat

Personalities : [raid0] [raid1]

md2 : active raid1 md0[2] md1[1]

      601537408 blocks [2/1] [_U]

      [=>...................]  recovery =  9.7% (58476032/601537408) finish=147.2min speed=61448K/sec

md1 : active raid0 sdb1[0] hdi1[2] hdg1[1]

      601537472 blocks 64k chunks

md0 : active raid0 sda1[3] hde1[2] hdc1[1] hda2[0]

      602262912 blocks 64k chunks

unused devices: <none>

```

The original system lives on md0 (raid0). md1 has been added as another raid0.

md2 (raid0+1) has been created as:

```
Server ~ # mdadm --create /dev/md2 --level=1 --raid-devices=2 missing /dev/md1
```
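Before copying data over, the degraded state can be sanity-checked. A minimal sketch using the device names from this thread; `mdadm --detail /dev/md2` is the real command, but the sample output below stands in for a live run so the snippet is self-contained:

```shell
#!/bin/sh
# Sketch: confirm the new mirror is running degraded (one half "missing").
# In practice: detail_output=$(mdadm --detail /dev/md2)
detail_output='           State : clean, degraded
    Raid Devices : 2
  Active Devices : 1'
if printf '%s\n' "$detail_output" | grep -q 'degraded'; then
    echo "md2 is degraded, as expected after creating it with 'missing'"
fi
```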

Data has been copied to md2 (in degraded mode). The system starts and works just fine from md2.

Now it's time to add the missing raid0 (md0) back into the array:

```
Server ~ # sfdisk -d /dev/md1 | sfdisk /dev/md0 
```

Just to check:

```
Server ~ # sfdisk -d /dev/md1

# partition table of /dev/md1

unit: sectors

/dev/md1p1 : start=        4, size=1203074940, Id=fd

/dev/md1p2 : start=        0, size=        0, Id= 0

/dev/md1p3 : start=        0, size=        0, Id= 0

/dev/md1p4 : start=        0, size=        0, Id= 0

Server ~ # sfdisk -d /dev/md0

# partition table of /dev/md0

unit: sectors

/dev/md0p1 : start=        4, size=1203074940, Id=fd

/dev/md0p2 : start=        0, size=        0, Id= 0

/dev/md0p3 : start=        0, size=        0, Id= 0

/dev/md0p4 : start=        0, size=        0, Id= 0

```

The missing device is added:

```
Server ~ # mdadm /dev/md2 -a /dev/md0 
```
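Recovery progress can be followed in /proc/mdstat. As a self-contained sketch, the percentage is pulled out of the status line quoted above; on a live system you would read /proc/mdstat itself (e.g. with `watch`):

```shell
#!/bin/sh
# Sketch: extract the recovery percentage from an mdstat status line.
# Sample line copied from this thread; live version would read /proc/mdstat.
mdstat_line='      [=>...................]  recovery =  9.7% (58476032/601537408) finish=147.2min speed=61448K/sec'
pct=$(printf '%s\n' "$mdstat_line" | sed -n 's/.*recovery = *\([0-9.]*\)%.*/\1/p')
echo "recovery at ${pct}%"
# Live monitoring (hypothetical invocation): watch -n 10 cat /proc/mdstat
```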

The recovery process starts, but it never completes: once it is more than 80% done, it restarts from the beginning.

I can't understand what's going on. I have tried:

```
Server ~ # mdadm /dev/md2 -f /dev/md0

Server ~ # mdadm /dev/md2 -r /dev/md0 

Server ~ # sfdisk -d /dev/md1 | sfdisk /dev/md0

Server ~ # mdadm /dev/md2 -a /dev/md0

```
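When a resync keeps restarting like this, the kernel log usually says why: a read error on one of the member disks aborts the pass. A hedged sketch of how to look for it; the sample dmesg lines are hypothetical, modeled on typical IDE error output rather than taken from this machine:

```shell
#!/bin/sh
# Sketch: scan kernel log output for read errors that would abort a resync.
# Hypothetical sample lines; live version: dmesg | grep -i error
dmesg_sample='hdg: dma_intr: status=0x51 { DriveReady SeekComplete Error }
hdg: dma_intr: error=0x40 { UncorrectableError }, LBAsect=358400973'
printf '%s\n' "$dmesg_sample" | grep -i 'error'
```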

The same result...

Any ideas?

Last edited by aapash on Sat May 17, 2008 12:10 am; edited 1 time in total

----------

## manaka

You are using quite a weird setup... the raid10 md personality would generally be preferred over the raid0 and raid1 combo.

It seems you are using partitionable raid devices... I think you should set up the partitions on the top-level array, not on the lower ones... Additionally, when creating a partitionable array, the auto=mdp option should be specified... (I don't know this for sure... I have never tried it, since I prefer LVM on top of MD)...

----------

## aapash

Thanks!

----------

## aapash

It turned out to be a few bad blocks at the end of /dev/hdg, which is part of /dev/md1.
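A read-only surface scan confirms this kind of problem. `badblocks -sv /dev/hdg` is the usual tool; as a self-contained stand-in, the sketch below dd-reads a scratch file and checks the return code:

```shell
#!/bin/sh
# Stand-in for a read-only surface scan; on the real disk you would run
# (non-destructive):  badblocks -sv /dev/hdg
tmp=$(mktemp)
dd if=/dev/zero of="$tmp" bs=1k count=16 2>/dev/null    # scratch "disk"
dd if="$tmp" of=/dev/null bs=1k 2>/dev/null             # read every block back
rc=$?
[ "$rc" -eq 0 ] && echo "read check passed"
rm -f "$tmp"
```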

----------

## manaka

Try getting the SMART information from that disk with smartctl. As today's disks do bad-block relocation, they only report a read error to the OS when they are about to fail.
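The self-test log makes the failing sector easy to spot. As a sketch, the first-error LBA can be pulled out of a log line with awk; the sample line comes from the self-test log quoted later in this thread, and the live commands would be `smartctl -t long /dev/hdg` to start a test and `smartctl -l selftest /dev/hdg` to read the log:

```shell
#!/bin/sh
# Sketch: extract the failing LBA from a smartctl self-test log line.
# Sample line taken from the log in this thread.
log_line='# 4  Extended offline    Completed: read failure       10%     40114         358400973'
lba=$(printf '%s\n' "$log_line" | awk '{print $NF}')
echo "first failing LBA: $lba"
```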

----------

## aapash

I've just followed the detailed instructions at

http://smartmontools.sourceforge.net/BadBlockHowTo.txt
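The core step in that how-to is forcing the drive to reallocate the bad sector by writing over it. A sketch, demonstrated on a scratch file because the real command is destructive; the LBA used in the comment is the one reported in the self-test log:

```shell
#!/bin/sh
# Demonstrated on a scratch file; on the real disk the (DESTRUCTIVE) fix is:
#   dd if=/dev/zero of=/dev/hdg bs=512 count=1 seek=358400973
tmp=$(mktemp)
dd if=/dev/zero of="$tmp" bs=512 count=4 2>/dev/null             # 4-sector "disk"
dd if=/dev/zero of="$tmp" bs=512 count=1 seek=2 conv=notrunc 2>/dev/null
size=$(wc -c < "$tmp")
echo "scratch disk still $size bytes after the in-place sector write"
rm -f "$tmp"
```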

Now it looks good:

```
Server ~ # smartctl -l selftest /dev/hdg

smartctl version 5.37 [i686-pc-linux-gnu] Copyright (C) 2002-6 Bruce Allen

Home page is http://smartmontools.sourceforge.net/

=== START OF READ SMART DATA SECTION ===

SMART Self-test log structure revision number 1

Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error

# 1  Short offline       Completed without error       00%     40133         -

# 2  Extended offline    Completed without error       00%     40121         -

# 3  Short offline       Completed without error       00%     40117         -

# 4  Extended offline    Completed: read failure       10%     40114         358400973

# 5  Short offline       Completed: read failure       60%     40112         358400973

# 6  Short offline       Completed: read failure       60%     40112         358400973

```

----------

