# [SOLVED] Raid 0+1 issue.

## Arrta

Ok..  Here is my issue.. md3 does not create itself at boot.

Kernel 2.6.14-rc1, built with raid0 and raid1 compiled into the kernel, not as modules.

Here is my mdadm.conf

```
ARRAY /dev/md1 level=raid0 num-devices=2 UUID=f0366206:483d3cba:ff3592e6:cfbd87f3
ARRAY /dev/md2 level=raid0 num-devices=2 UUID=c2cca308:d792889e:a60d9e62:ca9c0314
ARRAY /dev/md3 level=raid1 num-devices=2 UUID=41aaf8f1:4df527f3:33e2038e:00d00a4e
```

Here is how they are configured.

```
md1 = Raid0 of sdd1 + sde1
md2 = Raid0 of sdf1 + sdg1
md3 = Raid1 of md1 + md2
```
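For reference, the layout above could be built from scratch with commands along these lines. This is a sketch only (the device names are taken from the post, the exact flags are my assumption, and `mdadm --create` is destructive to existing data):

```shell
# Hypothetical reconstruction of the nested layout -- do NOT run
# against disks holding data you care about.
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdd1 /dev/sde1
mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/sdf1 /dev/sdg1
# md3 mirrors the two stripes, treating md1 and md2 as component devices.
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/md1 /dev/md2
```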

Here is my dmesg output

```
md: md1 stopped.
md: bind<sde1>
md: bind<sdd1>
md1: setting max_sectors to 128, segment boundary to 32767
raid0: looking at sdd1
raid0:   comparing sdd1(244195904) with sdd1(244195904)
raid0:   END
raid0:   ==> UNIQUE
raid0: 1 zones
raid0: looking at sde1
raid0:   comparing sde1(244195904) with sdd1(244195904)
raid0:   EQUAL
raid0: FINAL 1 zones
raid0: done.
raid0 : md_size is 488391808 blocks.
raid0 : conf->hash_spacing is 488391808 blocks.
raid0 : nb_zone is 1.
raid0 : Allocating 8 bytes for hash.
md: md2 stopped.
md: bind<sdg1>
md: bind<sdf1>
md2: setting max_sectors to 128, segment boundary to 32767
raid0: looking at sdf1
raid0:   comparing sdf1(244195904) with sdf1(244195904)
raid0:   END
raid0:   ==> UNIQUE
raid0: 1 zones
raid0: looking at sdg1
raid0:   comparing sdg1(244195904) with sdf1(244195904)
raid0:   EQUAL
raid0: FINAL 1 zones
raid0: done.
raid0 : md_size is 488391808 blocks.
raid0 : conf->hash_spacing is 488391808 blocks.
raid0 : nb_zone is 1.
raid0 : Allocating 8 bytes for hash.
md: md3 stopped.
```

md3 does not create itself at boot.. "md: md3 stopped." is the last line relating to md3 in the dmesg after a reboot completes.

It is not until I run 'mdadm --assemble /dev/md3' that the following lines appear in dmesg.

```
md: md3 stopped.
md: bind<md2>
md: bind<md1>
md3: bitmap initialized from disk: read 15/15 pages, set 0 bits, status: 0
created bitmap (233 pages) for device md3
raid1: raid set md3 active with 2 out of 2 mirrors
```

Note: the two 'md: md3 stopped.' lines above appear in dmesg in two different locations.

I need help making md3 create itself after boot without intervention.

*Last edited by Arrta on Fri Aug 25, 2006 1:56 am; edited 1 time in total*

----------

## sageman

Why don't you just put that "mdadm" line in /etc/init.d/local.start? It would run during the default runlevel instead of the boot runlevel, but is that a problem?

Not sure if you can just create an init script, like /etc/init.d/makemd3, with that line and do:

```
rc-update add makemd3 boot
```

Not sure if that'd work, but hey, worth a try, right? :)
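If you go the init-script route, a minimal sketch of what /etc/init.d/makemd3 might look like under Gentoo's old baselayout runscript format (the dependency names here are my assumptions and this is untested):

```shell
#!/sbin/runscript
# Hypothetical runscript sketch: assemble md3 before local filesystems
# are mounted. 'checkroot' and 'localmount' are assumed dependency names.

depend() {
    need checkroot
    before localmount
}

start() {
    ebegin "Assembling /dev/md3"
    mdadm --assemble /dev/md3
    eend $?
}
```

You'd still add it to a runlevel with `rc-update add makemd3 boot`.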

----------

## Arrta

I actually changed to an mdadm RAID10 setup.

The only issue I am having with that is I'm not 100% sure which setup that is...

Some information I read says RAID10 is RAID 0+1; other sources say it is 1+0.

So I either have mirrored stripes or striped mirrors..

Anyone know the definitive answer?

----------

## sageman

Officially, RAID10 is RAID 1+0 and RAID01 is RAID 0+1. Fairly sure.

EDIT: Thus spake Wikipedia: http://en.wikipedia.org/wiki/RAID10#RAID_10

----------

## Arrta

Thanks.. I thought it was 1+0, as there was more information leading that way.. but there were occasional writeups, and even a post by someone on the Gentoo forums, saying that RAID 10 was 0+1.

----------

## s0be

Not that this will help solve the problem, but a brief explanation of 1+0 vs 0+1.

1+0 is a stripe of mirrors.  0+1 is a mirror of stripes.  In 0+1, once a drive fails in one of the stripes, if either drive in the other stripe dies, you're boned.  That leaves you with only 1 drive that can safely fail, and it's already in a failed raid.  In 1+0, you can have 2 drives fail, as long as they're not both from the same mirror.  Here's a (hopefully) understandable pair of diagrams.

Raid 0+1

```
No Failures:
[ [ (sda) (sdb) Raid 0 {clean} ] [ (sdc) (sdd) Raid 0 {clean} ] Raid 1 {clean} ]

1 Failure, Still working:
[ [ (XXX) (sdb) Raid 0 {dirty} ] [ (sdc) (sdd) Raid 0 {clean} ] Raid 1 {dirty} ]
[ [ (sda) (XXX) Raid 0 {dirty} ] [ (sdc) (sdd) Raid 0 {clean} ] Raid 1 {dirty} ]
[ [ (sda) (sdb) Raid 0 {clean} ] [ (XXX) (sdd) Raid 0 {dirty} ] Raid 1 {dirty} ]
[ [ (sda) (sdb) Raid 0 {clean} ] [ (sdc) (XXX) Raid 0 {dirty} ] Raid 1 {dirty} ]

2 Failures, Still working:
[ [ (XXX) (XXX) Raid 0 {dead} ] [ (sdc) (sdd) Raid 0 {clean} ] Raid 1 {dirty} ]
[ [ (sda) (sdb) Raid 0 {clean} ] [ (XXX) (XXX) Raid 0 {dead} ] Raid 1 {dirty} ]

2 Failures, Dead:
[ [ (XXX) (sdb) Raid 0 {dirty} ] [ (sdc) (XXX) Raid 0 {dirty} ] Raid 1 {dead} ]
[ [ (XXX) (sdb) Raid 0 {dirty} ] [ (XXX) (sdd) Raid 0 {dirty} ] Raid 1 {dead} ]
[ [ (sda) (XXX) Raid 0 {dirty} ] [ (sdc) (XXX) Raid 0 {dirty} ] Raid 1 {dead} ]
[ [ (sda) (XXX) Raid 0 {dirty} ] [ (XXX) (sdd) Raid 0 {dirty} ] Raid 1 {dead} ]

Legend:
(sdN) physical device
(XXX) physical device, failed
[ () () Raid N {status} ] logical device with its status
```

From that, you can see that the only way the array can survive 2 drive failures is if both are in the same stripe.  Looked at from the other side of the mirror: once 1 drive fails, only 1 other drive can fail safely.  Or, from the pessimistic POV: there are 2 drives whose failure would leave your system dead.

Raid 1+0

```
No Failures:
[ [ (sda) (sdb) Raid 1 {clean} ] [ (sdc) (sdd) Raid 1 {clean} ] Raid 0 {clean} ]

1 Failure, Still working:
[ [ (XXX) (sdb) Raid 1 {dirty} ] [ (sdc) (sdd) Raid 1 {clean} ] Raid 0 {dirty} ]
[ [ (sda) (XXX) Raid 1 {dirty} ] [ (sdc) (sdd) Raid 1 {clean} ] Raid 0 {dirty} ]
[ [ (sda) (sdb) Raid 1 {clean} ] [ (XXX) (sdd) Raid 1 {dirty} ] Raid 0 {dirty} ]
[ [ (sda) (sdb) Raid 1 {clean} ] [ (sdc) (XXX) Raid 1 {dirty} ] Raid 0 {dirty} ]

2 Failures, Not working:
[ [ (XXX) (XXX) Raid 1 {dead} ] [ (sdc) (sdd) Raid 1 {clean} ] Raid 0 {dead} ]
[ [ (sda) (sdb) Raid 1 {clean} ] [ (XXX) (XXX) Raid 1 {dead} ] Raid 0 {dead} ]

2 Failures, Still working:
[ [ (XXX) (sdb) Raid 1 {dirty} ] [ (sdc) (XXX) Raid 1 {dirty} ] Raid 0 {dirty} ]
[ [ (XXX) (sdb) Raid 1 {dirty} ] [ (XXX) (sdd) Raid 1 {dirty} ] Raid 0 {dirty} ]
[ [ (sda) (XXX) Raid 1 {dirty} ] [ (sdc) (XXX) Raid 1 {dirty} ] Raid 0 {dirty} ]
[ [ (sda) (XXX) Raid 1 {dirty} ] [ (XXX) (sdd) Raid 1 {dirty} ] Raid 0 {dirty} ]

Legend:
(sdN) physical device
(XXX) physical device, failed
[ () () Raid N {status} ] logical device with its status
```

From that you should be able to see that, in a raid 1+0, you can lose 1 drive from each mirror.  After 1 drive fails, there are 2 more drives either of which could fail and you'd be fine.  Or, from the pessimist's POV: after 1 failure there is only 1 other drive (rather than 2) whose failure would bork your system.
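The two rules in the diagrams can be boiled down to a toy model (my illustration, nothing to do with mdadm itself): in 1+0 the array lives as long as every mirror keeps at least one drive; in 0+1 it lives as long as at least one stripe keeps all its drives.

```python
# Toy model of RAID 1+0 vs RAID 0+1 fault tolerance.
# 'groups' is a list of drive sets (the mirrors or the stripes);
# 'failed' is the set of dead drives.  Names are illustrative only.

def stripe_of_mirrors_alive(mirrors, failed):
    # RAID 1+0: dead only if some mirror loses ALL of its drives.
    return all(any(d not in failed for d in m) for m in mirrors)

def mirror_of_stripes_alive(stripes, failed):
    # RAID 0+1: alive only if SOME stripe still has all of its drives.
    return any(all(d not in failed for d in s) for s in stripes)

pairs = [{"sda", "sdb"}, {"sdc", "sdd"}]

# One drive from each pair fails:
print(stripe_of_mirrors_alive(pairs, {"sda", "sdc"}))  # True  (1+0 survives)
print(mirror_of_stripes_alive(pairs, {"sda", "sdc"}))  # False (0+1 is dead)

# Both drives of the same pair fail:
print(stripe_of_mirrors_alive(pairs, {"sda", "sdb"}))  # False
print(mirror_of_stripes_alive(pairs, {"sda", "sdb"}))  # True
```

Running the four cases matches the diagrams: only 1+0 survives losing one drive from each pair, and only 0+1 survives losing a whole pair.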

----------

## Arrta

Adding this for anyone else having the same issue..

MD-driver-based RAID 10 is still in the experimental stage.. I found this out when I kept getting

```
status=0x51 { DriveReady SeekComplete Error }
error=0x40 { UncorrectableError }
```

messages...

I switched to having 2 raid 1 arrays and a raid 0 across them (multilevel Raid 10), but still had the issue where md3 would not build at boot without manual interaction...

Researching the web, I found that I needed to tell mdadm that it can look at the md devices.

My mdadm.conf file looks like this.

```
DEVICE /dev/sd[defg]1
DEVICE /dev/md[12]
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=e77f4f30:3a0f629c:1dfca557:035fa1f7
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=3750759d:b8692e04:f3501f6f:71aee768
ARRAY /dev/md3 level=raid0 num-devices=2 UUID=b04ab442:61f686c6:9b226117:8a2e241a
```

Note the DEVICE /dev/md[12] line. This is what I was missing; it lets mdadm see those 'drives' and use them to build the array.
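One more tip (my addition, not something from the post above): the ARRAY lines don't have to be typed by hand. Once the arrays are assembled and running, mdadm can print them for you; double-check the output before relying on it at boot:

```shell
# Append the kernel's current view of the running arrays as ARRAY lines.
# Run AFTER the DEVICE lines are in place, and review the result.
mdadm --detail --scan >> /etc/mdadm.conf
```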

----------

## chewy_rob

I was just considering using the experimental Raid 10 driver when I read this post. Any comments regarding using the Raid1 then Raid0 approach?

----------

