# RAID 1 Not being recognized

## nekromancer

Hi guys,

I am installing Gentoo (install-x86-minimal-20090422.iso) on an Intel server (S5000VSA board), and lspci detects a RAID bus controller: Intel 631xESB/632xESB SATA RAID Controller.

I set up the RAID array from the BIOS screen: 2 disks making 1 virtual RAID 1 disk. I also initialized it, and on startup it sees 1 array.

When I boot Gentoo (default options), it sees /dev/sda and /dev/sdb as separate disks instead of 1 array. I looked in /dev/mapper and there is just a "control" file.

When checking dmesg I found this message:

 *Quote:*   

> GDT-HA Storage raid controller driver no card found, specify I/O address and IRQ using iobase= and irq= . Failed to initialize WD-700 SCSI Card

I don't know what the problem really is.

I used to work on IBM servers, and RAID always worked with Gentoo; I always saw 1 block device.

Any help is appreciated.

----------

## nekromancer

I booted with the "dodmraid" option and now I see:

```
livecd / # ls /dev/mapper
control  ddf1_MegaSR R1 #0
```

Do I use /dev/mapper/ddf1_MegaSR  as my "raid" disk?

----------

## nekromancer

lol, I always find myself answering my own posts.

Plus I hate how people just tell you to disable RAID in the BIOS and go with LVM2, etc.

First off, one big problem was spaces in the RAID volume name. I had to rename it from the BIOS RAID utility.

Second, put dodmraid in the Gentoo boot options when using the LiveCD (yes, I know it's "fakeraid", but I don't want to use LVM2).

Third, fdisk /dev/mapper/ddf1_MegaSR_R1_0, add the partitions, then run dmraid -ay to reload them.

And that's it.
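For anyone following along, the sequence above looks roughly like this on the LiveCD. This is just a sketch; the ddf1_MegaSR_R1_0 name is from my box, and yours will match whatever you renamed the volume to in the BIOS utility:

```shell
# After booting the LiveCD with the dodmraid option:
dmraid -s                            # list the fakeraid sets dmraid detected
ls /dev/mapper                       # the array device should show up here

fdisk /dev/mapper/ddf1_MegaSR_R1_0   # create partitions on the array device
dmraid -ay                           # re-activate so the partition nodes appear
ls /dev/mapper                       # the partition device nodes should now be listed too
```

From there you mkfs and mount the partition nodes under /dev/mapper, not /dev/sda or /dev/sdb.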

And to those who post doc links that just go to Google... shame on you.

The links below are perfectly fine:

http://en.gentoo-wiki.com/wiki/RAID/NVRAID_with_dmraid#Loading_dmraid_with_the_LiveCD

http://forum.soft32.com/linux/gentoo-intel-6300esb-onboard-raid-ftopict327670.html

http://linux.die.net/man/8/dmraid

----------

## gentoo-dev

 *nekromancer wrote:*   

> lol, I always find myself answering my own posts.
> 
> Plus I hate how people just tell you to disable RAID in the BIOS and go with LVM2, etc.

 You might want to invest some time in learning the difference between software RAID and LVM2... You don't need the latter to use the former.

----------

## Mad Merlin

 *gentoo-dev wrote:*   

>  *nekromancer wrote:*   lol, I always find myself answering my own posts.
> 
> Plus I hate how people just tell you to disable RAID in the BIOS and go with LVM2, etc. You might want to invest some time in learning the difference between software RAID and LVM2... You don't need the latter to use the former.

 

I agree. The only advantage of fake RAID over mdadm RAID in this case is if you need to dual boot non-Linux systems on the same machine, whereas the advantages of mdadm RAID over fake RAID are numerous...
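For comparison, here is a minimal mdadm RAID 1 sketch, which is the setup being suggested instead of fakeraid. The disk names are just examples, and the BIOS RAID would need to be disabled first so Linux sees the raw disks:

```shell
# Create a 2-disk RAID 1 array from two partitions (example devices):
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
cat /proc/mdstat                           # watch the initial resync
mkfs.ext3 /dev/md0                         # then use /dev/md0 like any block device
mdadm --detail --scan >> /etc/mdadm.conf   # record the array for boot-time assembly
```

The array metadata lives on the disks themselves, which is exactly why it survives a dead controller, as the next post illustrates.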

----------

## depontius

I currently run mdadm RAID1 with 2 drives, one on each channel of a Promise card.  Last year my old Promise card blew a channel.  To get back up I marked the drive as missing, and ran in degraded mode.  Then I unplugged the cdrom from ide2 and moved the hard drive from the blown channel on the Promise card to ide2.  Brought the system up, rebuilt the array, and all was fine.  Later I got another Promise card, and went back to the normal way.  All autodetected.
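For reference, that recovery maps onto mdadm roughly like this. The device names are assumptions for the sketch, not taken from the actual incident:

```shell
# A channel died: mark the vanished disk failed and remove it from the array
mdadm /dev/md0 --fail /dev/hdc1
mdadm /dev/md0 --remove /dev/hdc1
# ...move the drive to a working channel (ide2), reboot, then re-add it:
mdadm /dev/md0 --add /dev/hdc1
cat /proc/mdstat                 # shows the rebuild progress
```

The array runs degraded the whole time, so the system stays up throughout.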

Try doing that with fakeraid.

I have a friend who runs real RAID.  He has 2 spare controller cards, since the RAID is useless without the card that knows about it.  When you're on your last controller card of a given model/spec, it's time to either find another matching one, or to start shopping for multiple copies of a new card to migrate to.  Hardware RAID, or even fake RAID won't survive the death of the last controller hardware.

----------

