# RAID problem, no md device

## koukos

I have compiled the kernel with raid0, raid1, and sata_promise built in. I have a working raidtab in /etc. My /boot partition is RAID1 at /dev/md0 and my / is RAID0 at /dev/md1. GRUB works fine, but when I boot I get a kernel panic: no device md1. I suppose the kernel doesn't create the md devices. And yes, I have run MAKEDEV md in /dev from a bootable CD. Can anybody help me?

----------

## entemoehre

Are you using udev as device manager?

I used to have a similar problem: the RAID devices were not created at boot time.

There is a setting somewhere in /etc/rc.conf or /etc/conf.d/rc that enables saving the /dev directory on shutdown and restoring all the devices at boot. That solved it for me.

Hope that helps.

Regards,

soenke

----------

## koukos

I changed RC_DEVICES from "auto" to "udev" in /etc/conf.d/rc. Now I will reboot to see what happens.

----------

## koukos

Same error:

```
Cannot open root device "md1" or unknown-block(0,0)
Please append a correct "root=" boot option
Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
```

In GRUB I have root=/dev/md1.

----------

## widan

Check that all the partitions that are part of the RAID arrays have partition type 0xfd (you can check that with "fdisk -l", this type is "Linux Raid Autodetect"). If they do not, the md driver won't automatically detect the arrays at boot.

```
Cannot open root device "md1" or unknown-block(0,0)
```

In any case, this is not a device node problem: it is possible to boot systems that only have /dev/null and /dev/console as "static" nodes (before udev runs), as the kernel looks up the boot device directly in sysfs. Also, "unknown-block(0,0)" means the device doesn't exist in sysfs, i.e. no driver registered it, so either the md driver is not compiled in (but you say it is) or it did not autodetect the array properly.
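The partition-type check and fix widan describes could look like this from the LiveCD. This is a sketch, not from the thread, and it assumes the two disks really are /dev/sda and /dev/sdb and that sfdisk is available:

```shell
# List the partition tables; each RAID member should show
# type "fd" ("Linux raid autodetect") in the Id column:
fdisk -l /dev/sda /dev/sdb

# Change a partition's type non-interactively with sfdisk
# (here: partition 2 on /dev/sda; repeat for every RAID member):
sfdisk --change-id /dev/sda 2 fd

# Or interactively: run "fdisk /dev/sda", press "t" to change a
# partition type, enter "fd", then "w" to write the table.
```

The kernel's md autodetect code only scans partitions with type 0xfd, which is why this matters for a RAID root that has no initrd to assemble arrays manually.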

----------

## muaddib7

I think that your problem is that you have your /boot partition on raid. Can you send a copy of your fstab and your raidtab?

----------

## koukos

My fstab is this:

```
# /etc/fstab: static file system information.
# $Header: /var/cvsroot/gentoo-src/rc-scripts/etc/fstab,v 1.18.4.1 2005/01/31 23:05:14 vapier Exp $
#
# noatime turns off atimes for increased performance (atimes normally aren't
# needed; notail increases performance of ReiserFS (at the expense of storage
# efficiency).  It's safe to drop the noatime options if you want and to
# switch between notail / tail freely.
#
# See the manpage fstab(5) for more information.

# <fs>			<mountpoint>	<type>		<opts>		<dump/pass>

# NOTE: If your BOOT partition is ReiserFS, add the notail option to opts.
/dev/md0		/boot		ext3		noatime		1 2
/dev/md1		/		reiser4		noatime,notail	0 0
/dev/md4		none		swap		sw		0 0
/dev/cdroms/cdrom0	/mnt/cdrom	iso9660		noauto,ro	0 0
/dev/fd0		/mnt/floppy	auto		noauto		0 0

# NOTE: The next line is critical for boot!
proc			/proc		proc		defaults	0 0

# glibc 2.2 and above expects tmpfs to be mounted at /dev/shm for
# POSIX shared memory (shm_open, shm_unlink).
# (tmpfs is a dynamically expandable/shrinkable ramdisk, and will
#  use almost no memory if not populated with files)
shm			/dev/shm	tmpfs		nodev,nosuid,noexec	0 0
```

and my raidtab is this:

```
# /boot (RAID 1)
raiddev                 /dev/md0
raid-level              1
nr-raid-disks           2
chunk-size              32
persistent-superblock   1
device                  /dev/sda1
raid-disk               0
device                  /dev/sdb1
raid-disk               1

# / (RAID 0)
raiddev                 /dev/md1
raid-level              0
nr-raid-disks           2
chunk-size              32
persistent-superblock   1
device                  /dev/sda2
raid-disk               0
device                  /dev/sdb2
raid-disk               1

# /Files (RAID 0)
raiddev                 /dev/md2
raid-level              0
nr-raid-disks           2
chunk-size              32
persistent-superblock   1
device                  /dev/sda3
raid-disk               0
device                  /dev/sdb3
raid-disk               1

# swap (RAID 0)
raiddev                 /dev/md3
raid-level              0
nr-raid-disks           2
chunk-size              32
persistent-superblock   1
device                  /dev/sda4
raid-disk               0
device                  /dev/sdb4
raid-disk               1
```

----------

## muaddib7

I risk being corrected but...   :Very Happy: 

I think there is always a need for a partition outside the RAID with software RAID solutions. This partition is needed to host the kernel and the initrd or initramfs images, which allow the rest of the system to load. Now the odd thing is that each disk in a RAID1 configuration can be accessed on its own. But I haven't done it. Whenever I do software RAID, I put /boot on a normal partition. You may do it this way:

1. Stop using /dev/sda1 and /dev/sdb1 as RAID members.

2. Use /dev/sda1 as a normal partition and move on from there.

3. When finished, do a dd if=/dev/sda1 of=/dev/sdb1 to have an exact copy of the /boot partition.

If you have a catastrophic failure of the first disk, just swap it with the second disk... or better, buy a hardware RAID solution  :Very Happy: 

Don't forget to copy sda1 to sdb1 again whenever you make a change to /boot.
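Step 3 above could be sketched like this, assuming /boot is on /dev/sda1 and the same-sized spare partition is /dev/sdb1 (the device names are just the ones from this thread):

```shell
# Copy /boot block-for-block while it is not mounted, so the
# filesystem on sdb1 is a consistent snapshot:
umount /boot
dd if=/dev/sda1 of=/dev/sdb1 bs=1M
cmp /dev/sda1 /dev/sdb1 && echo "copies match"
mount /boot
```

Because dd copies the raw filesystem, both partitions must be the same size, and the copy must be redone after every kernel or GRUB update.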

----------

## widan

 *muaddib7 wrote:*   

> I think that there is always a need for a partition to be outside the raid with software raid solutions. This partition is needed in order to host the kernel and the initrd or initramfs images which will allow for the other parts of the system to load.

 

I am running software RAID1 on /boot with no problems. Also, if there were a problem with /boot, it would prevent loading the kernel/initrd, so the failure would occur at the bootloader stage. Here the kernel loads fine; it just can't find the root device.

To koukos: do you have lines like this in the kernel output before it panics?

```
md: considering sdb1 ...
md:  adding sdb1 ...
md:  adding sda1 ...
md: created md0
md: bind<sda1>
md: bind<sdb1>
md: running: <sdb1><sda1>
raid1: raid set md0 active with 2 out of 2 mirrors
md: ... autorun DONE.
```

If you don't (or you only get "autorun DONE", without any "raid set mdX active"), check that the partitions on which RAID arrays reside are of type 0xfd. It's needed if the root partition is a RAID array.

----------

## koukos

When I do mkraid /dev/md0 or md1 from the LiveCD, I get an error about the superblock. After some searching, I found that the superblock is what tells the kernel to assemble the RAID partition. The only way to create the md device is mkraid -R /dev/md0, where -R means force. Any idea how to correct the superblock?
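One way to inspect the superblocks before forcing anything is mdadm, the successor to the raidtools that provide mkraid. This is a suggestion, not something from the thread, and it assumes mdadm is on the LiveCD and the member partitions are those from the raidtab above:

```shell
# Dump the persistent RAID superblock of each member of /dev/md1;
# this shows what the kernel's autorun would see at boot:
mdadm --examine /dev/sda2
mdadm --examine /dev/sdb2

# Assemble the array by hand from the existing superblocks, which,
# unlike "mkraid -R", does not rewrite them:
mdadm --assemble /dev/md1 /dev/sda2 /dev/sdb2
```

If --examine reports no valid superblock on a member, that would also explain why the kernel's boot-time autodetection fails.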

----------

