# maybe a simple /dev/md0 question

## alex.blackbit

hello everybody,

i need help with my software raid.

yesterday evening i created a raid10 array which worked fine.

since the reboot it no longer works.

here are the things i know:

```
fileserver ~ # cat /proc/mdstat 
Personalities : [raid10] 
unused devices: <none>
fileserver ~ #
```

```
fileserver ~ # mdadm --examine /dev/md0 /dev/sd[abcd]1
mdadm: No md superblock detected on /dev/md0.
/dev/sda1:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 55cee7f9:70d36575:c59b072f:1e542efc
  Creation Time : Mon Nov 20 20:33:09 2006
     Raid Level : raid10
    Device Size : 312568576 (298.09 GiB 320.07 GB)
     Array Size : 625137152 (596.18 GiB 640.14 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Update Time : Mon Nov 20 23:25:48 2006
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : f1236e32 - correct
         Events : 0.6
         Layout : near=2, far=1
      Number   Major   Minor   RaidDevice State
this     0       8        1        0      active sync   /dev/sda1
   0     0       8        1        0      active sync   /dev/sda1
   1     1       8       17        1      active sync   /dev/sdb1
   2     2       8       33        2      active sync   /dev/sdc1
   3     3       8       49        3      active sync   /dev/sdd1
/dev/sdb1:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 55cee7f9:70d36575:c59b072f:1e542efc
  Creation Time : Mon Nov 20 20:33:09 2006
     Raid Level : raid10
    Device Size : 312568576 (298.09 GiB 320.07 GB)
     Array Size : 625137152 (596.18 GiB 640.14 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Update Time : Mon Nov 20 23:25:48 2006
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : f1236e44 - correct
         Events : 0.6
         Layout : near=2, far=1
      Number   Major   Minor   RaidDevice State
this     1       8       17        1      active sync   /dev/sdb1
   0     0       8        1        0      active sync   /dev/sda1
   1     1       8       17        1      active sync   /dev/sdb1
   2     2       8       33        2      active sync   /dev/sdc1
   3     3       8       49        3      active sync   /dev/sdd1
/dev/sdc1:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 55cee7f9:70d36575:c59b072f:1e542efc
  Creation Time : Mon Nov 20 20:33:09 2006
     Raid Level : raid10
    Device Size : 312568576 (298.09 GiB 320.07 GB)
     Array Size : 625137152 (596.18 GiB 640.14 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Update Time : Mon Nov 20 23:25:48 2006
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : f1236e56 - correct
         Events : 0.6
         Layout : near=2, far=1
      Number   Major   Minor   RaidDevice State
this     2       8       33        2      active sync   /dev/sdc1
   0     0       8        1        0      active sync   /dev/sda1
   1     1       8       17        1      active sync   /dev/sdb1
   2     2       8       33        2      active sync   /dev/sdc1
   3     3       8       49        3      active sync   /dev/sdd1
/dev/sdd1:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 55cee7f9:70d36575:c59b072f:1e542efc
  Creation Time : Mon Nov 20 20:33:09 2006
     Raid Level : raid10
    Device Size : 312568576 (298.09 GiB 320.07 GB)
     Array Size : 625137152 (596.18 GiB 640.14 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Update Time : Mon Nov 20 23:25:48 2006
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : f1236e68 - correct
         Events : 0.6
         Layout : near=2, far=1
      Number   Major   Minor   RaidDevice State
this     3       8       49        3      active sync   /dev/sdd1
   0     0       8        1        0      active sync   /dev/sda1
   1     1       8       17        1      active sync   /dev/sdb1
   2     2       8       33        2      active sync   /dev/sdc1
   3     3       8       49        3      active sync   /dev/sdd1
fileserver ~ #
```

if anybody needs additional information, just ask.

thanks in advance and

kind regards

--alex

----------

## Arrta

Alex,

Can you please supply your /etc/mdadm.conf?

Marc

----------

## alex.blackbit

thank you for the answer.

i did not make any changes to /etc/mdadm.conf, so it consists only of comments.

maybe this is the problem, but i did not know that i had to do something there.

http://www.gentoo.org/doc/en/gentoo-x86+raid+lvm2-quickinstall.xml did not tell me to do so, or i am blind.

anyway, what can i do now to get my array back? there is no critical data on it, maybe it is easier to start from scratch.

any hints greatly appreciated!

--alex

----------

## Arrta

Welcome. I also sent an email to the authors of the guide asking that they add a line to create your mdadm.conf file.

The simplest way is:

```
mdadm --detail --scan >> /etc/mdadm.conf
```

To recreate your array you might be able to pull the drives back in with

```
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
```

(`--add` only works on an array that is already running, so `--assemble` with the member devices listed is what you want here.) Once it is assembled, run the above command to create your conf file.

Alternatively, you should be able to cheat and create your mdadm.conf by hand.

At minimum, yours should contain the following; you should be able to copy this straight into your conf file:

```
DEVICE /dev/sd[abcd]1
ARRAY /dev/md0 level=raid10 num-devices=4 UUID=55cee7f9:70d36575:c59b072f:1e542efc
```

Then run

```
mdadm --assemble /dev/md0
```

And mdadm should assemble the array, since I copied your UUID into the example.
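If you build that conf file from a script, a quick sanity check before assembling doesn't hurt. A small sketch, assuming the UUID from the superblocks above (the /tmp path is just for illustration, not where the real file lives):

```shell
# Write the minimal conf file; the UUID is the one from the --examine output.
cat > /tmp/mdadm.conf <<'EOF'
DEVICE /dev/sd[abcd]1
ARRAY /dev/md0 level=raid10 num-devices=4 UUID=55cee7f9:70d36575:c59b072f:1e542efc
EOF

# Sanity check: exactly one ARRAY line, and the UUID matches the superblocks.
grep -c '^ARRAY' /tmp/mdadm.conf                 # -> 1
grep -o 'UUID=[0-9a-f:]*' /tmp/mdadm.conf       # -> UUID=55cee7f9:70d36575:c59b072f:1e542efc
```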

Oh! and one last thing.. Raid10 was buggy for me, so I created 2 raid1's and a raid0 to simulate Raid10. Just an fyi, your mileage may vary.

```
DEVICE /dev/sd[defg]1
DEVICE /dev/md[12]
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=e77f4f30:3a0f629c:1dfca557:035fa1f7
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=3750759d:b8692e04:f3501f6f:71aee768
ARRAY /dev/md3 level=raid0 num-devices=2 UUID=b04ab442:61f686c6:9b226117:8a2e241a
```

----------

## alex.blackbit

okay, okay, okay.

i started from scratch because it is surely a better idea not to use the built-in raid10.

now i have this configuration:

```
fileserver ~ # cat /proc/mdstat 
Personalities : [raid0] [raid1] 
md2 : active raid0 md0[0] md1[1]
      625137024 blocks 64k chunks

md1 : active raid1 sdc1[0] sdd1[1]
      312568576 blocks [2/2] [UU]

md0 : active raid1 sda1[0] sdb1[1]
      312568576 blocks [2/2] [UU]

unused devices: <none>
fileserver ~ # cat /etc/mdadm.conf
DEVICE /dev/sd[abcd]1
DEVICE /dev/md[01]
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=57d56cba:36383ca6:bb162e9c:aef490cd
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=716e4c31:2697d447:5c305eff:084bfe3f
ARRAY /dev/md2 level=raid0 num-devices=2 UUID=33b68a64:5f9465b8:08bf25c3:bd6cac13
fileserver ~ # mdadm --detail --scan
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=57d56cba:36383ca6:bb162e9c:aef490cd
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=716e4c31:2697d447:5c305eff:084bfe3f
ARRAY /dev/md2 level=raid0 num-devices=2 UUID=33b68a64:5f9465b8:08bf25c3:bd6cac13
fileserver ~ #
```

which does work!!!

BUT, shouldn't i be able to define partitions on /dev/md2?

that's what i tried with cfdisk. actually it let me... i had md2p1, md2p2, md2p3. i wrote that to the disk, exited cfdisk, opened it again, and they were still there. but i do not see them in /dev, so i can not make filesystems on those partitions. what i can do is put a filesystem on /dev/md2 without partitioning it.

now, is there a way to define partitions on /dev/md2? md2 should be something like a "harddisk" i thought.

what i want to have is a raid10 out of 4 physical disks with 3 partitions on that array...

thanks again for the active help

kind regards

--alex

----------

## Arrta

Well... 

/dev/md[012] are all virtuals to /dev/md/[012]

Since I only use 1 partition in my setup I just use my md3 as the drive I mount.

I would check your system to see what is inside your /dev/md/2 section. You may find that you have a /dev/md/2/p[123]

If that is the case, you may have to create virtuals to them to simulate a /dev/md2p1 node.

----------

## alex.blackbit

that's why i wrote the last post.

i do not have anything below /dev/md/2.

does anybody know what i can and what i can not do on md devices?

kind regards

--alex

----------

## Arrta

Well, if you can't get anything below /dev/md/2, then you might have to go another route:

Partition sd[abcd]1 to be the size of your first partition.

Partition sd[abcd]2 to be the size of your second partition.

Partition sd[abcd]3 to be the size of your third partition.

And then create 3 raid setups one for each partition.
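A dry-run sketch of that layout, one array per partition number (the commands are only echoed here, not executed; using raid10 per set is just one option -- the raid1+raid0 nesting would work the same way):

```shell
# Dry-run sketch: one RAID set per partition number across all four disks.
# Drop the 'echo' to actually run the commands (as root, on real devices).
for p in 1 2 3; do
  echo mdadm --create /dev/md$((p - 1)) --level=10 --raid-devices=4 \
       /dev/sda$p /dev/sdb$p /dev/sdc$p /dev/sdd$p
done
```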

----------

## alex.blackbit

please don't get me wrong, but i would like to get a statement from somebody who really knows: should partitioning work in this case and it just doesn't, or is it simply not possible in my setup?

that sounds like: if your computer does not work any more, buy a new one... right?

----------

## Arrta

Alex, 

This will be my last reply.

Check this page.

http://unthought.net/Software-RAID.HOWTO/Software-RAID.HOWTO-11.html

----------

## alex.blackbit

okay, this gives the answer.

i will have to use lvm for that.

that's okay. i just had to know.
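For reference, the LVM route over /dev/md2 would look roughly like this. A dry-run sketch only (commands echoed, not executed; the volume group name, LV names and sizes are made up):

```shell
# Dry-run sketch: three "partitions" on /dev/md2 as LVM logical volumes.
# Drop the 'echo' to run for real. vg0/data* names and 100G sizes are made up.
echo pvcreate /dev/md2
echo vgcreate vg0 /dev/md2
for lv in data1 data2 data3; do
  echo lvcreate -L 100G -n $lv vg0
done
echo mke2fs -j /dev/vg0/data1
```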

thanks for the help.

have a nice day

----------

