# [SOLVED] Need help with multiple RAIDs with onboard chipsets.

## Argetlam

I'm having some trouble setting up a couple of RAID arrays.

I have a GA-K8N Ultra-SLI motherboard.  It has the NVIDIA chipset for RAID and also has the Silicon Image 3114 chipset.

I have five hard drives: two 160 GB drives for a RAID 1 (OS) and three 320 GB drives I want to put in a RAID 5 (storage).

Here is what I am guessing, so correct me if it can't be done: I set up a RAID 1 in the NVIDIA BIOS during boot and a RAID 5 in the Sil3114 BIOS during boot. After messing with dmraid, I was able to get the RAID 1 (/dev/mapper/nvidia_ijffdahh). So I partitioned that all up and it seems to be working.

Now here is the issue: the RAID 5 setup. I can't get it to show up in /dev/mapper, and I am getting this message during boot:

 *Quote:*   

> [kernel] device-mapper: table: 253:0: raid45: unknown target type

 

I have checked around and found these posts:

https://forums.gentoo.org/viewtopic-t-493223-highlight-raid45.html

https://forums.gentoo.org/viewtopic-t-503797-highlight-raid45.html

They didn't lead me anywhere.

lspci:

 *Quote:*   

> 00:00.0 Memory controller: nVidia Corporation CK804 Memory Controller (rev a3)
> 
> 00:01.0 ISA bridge: nVidia Corporation CK804 ISA Bridge (rev a3)
> 
> 00:01.1 SMBus: nVidia Corporation CK804 SMBus (rev a2)
> ...

 

Any ideas?

Need any more info?

Last edited by Argetlam on Fri Mar 07, 2008 3:57 am; edited 1 time in total

----------

## irasnyd

If you're only using Linux on the system, I'd forget trying to use the fake RAID on both of those chipsets and just use Linux software RAID. It's easier to set up and use, in my experience. With software RAID you don't even have to mess around with the RAID setup in the BIOS.

There's plenty of documentation out there. See: http://www.gentoo.org/doc/en/gentoo-x86+raid+lvm2-quickinstall.xml and use google for more, if you need it.

You can ignore the stuff about LVM; you don't need it to use RAID.
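For reference, a minimal mdadm sketch of what that guide walks through. The device names and md numbers here are assumptions for illustration; substitute your own partitions (the kernel also needs md RAID 1/RAID 5 support built in or loaded as modules):

```shell
# Assumed device names -- check yours with fdisk -l first.
# RAID 1 across two partitions on the 160 GB drives:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdd3 /dev/sde3

# RAID 5 across partitions on the three 320 GB drives:
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1

# Watch the initial build/sync progress:
cat /proc/mdstat

# Then put a filesystem on the array as usual (ext3 here):
mke2fs -j /dev/md1
```

The partitions going into an array should be marked type `fd` (Linux raid autodetect) in fdisk if you want the kernel to auto-assemble them at boot.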

----------

## Argetlam

Yeah, I guess I can go ahead and do that. I'll probably leave the RAID 1 I made alone and just make the RAID 5 with software RAID. I just figured the BIOS one would be easier, but I guess not.

----------

## Jaglover

http://blog.shaf.net/?p=6

Either way you have to choose between two software RAIDs; Linux kernel RAID is definitely superior.

----------

## irasnyd

 *Jaglover wrote:*   

> http://blog.shaf.net/?p=6
> 
> Either way you have to choose between two software RAIDs; Linux kernel RAID is definitely superior.

 

In addition to being faster, as the article describes, you can move the drives between computers if the need ever arises, even if they have different SATA controllers.

----------

## Argetlam

 *Jaglover wrote:*   

> 
> 
> Either way you have to choose between two software RAIDs; Linux kernel RAID is definitely superior.

 

There are two software RAIDs? You mean mdadm or LVM? 

Thanks for the article. That is some good stuff. 

Do you think I should leave the dmraid RAID 1 on the chipset alone, or redo it with software RAID?

Also, I guess that means I should disable the onboard RAID chips if that's the case, or just the Silicon Image one?

----------

## Jaglover

The dmraid you use with fakeraid controllers is software RAID, too. Fakeraid (aka BIOS RAID) is called fake because it IS fake. This isn't something only Linux does, either: it's software RAID under Windows too, people just don't know it. They think they've got a hardware RAID ... for $150 with the motherboard.

----------

## Argetlam

 *irasnyd wrote:*   

> If you're only using Linux on the system, I'd forget trying to use the fake RAID on both of those chipsets and just use Linux software RAID. It's easier to set up and use, in my experience. With software RAID you don't even have to mess around with the RAID setup in the BIOS.
> 
> There's plenty of documentation out there. See: http://www.gentoo.org/doc/en/gentoo-x86+raid+lvm2-quickinstall.xml and use google for more, if you need it.
> 
> You can ignore the stuff about LVM; you don't need it to use RAID.

 

I went to that link and followed the steps, minus the LVM stuff, and it worked out great. The only weird thing is that it detects the drives on the motherboard's RAID ports before the regular SATA ports, so instead of my RAID 1 being sda and sdb, the drives are sdd and sde. It's not a big deal, I just thought it was odd. I don't believe it's the OS, but rather the motherboard's screwy configuration.

Why is it that when I run fdisk I get these 'md' messages?

```
Disk /dev/sdd: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1   *           1          16      128488+  fd  Linux raid autodetect
/dev/sdd2              17         564     4401810   82  Linux swap / Solaris
/dev/sdd3             565        8467    63480847+  fd  Linux raid autodetect
/dev/sdd4            8468       19457    88277175   fd  Linux raid autodetect

Disk /dev/sde: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1   *           1          16      128488+  fd  Linux raid autodetect
/dev/sde2              17         564     4401810   82  Linux swap / Solaris
/dev/sde3             565        8467    63480847+  fd  Linux raid autodetect
/dev/sde4            8468       19457    88277175   fd  Linux raid autodetect

Disk /dev/md4: 90.3 GB, 90395705344 bytes
2 heads, 4 sectors/track, 22069264 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md4 doesn't contain a valid partition table

Disk /dev/md3: 65.0 GB, 65004306432 bytes
2 heads, 4 sectors/track, 15870192 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md3 doesn't contain a valid partition table

Disk /dev/md1: 131 MB, 131465216 bytes
2 heads, 4 sectors/track, 32096 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md1 doesn't contain a valid partition table
```

----------

## irasnyd

Hard disk detection in Linux doesn't necessarily follow any fixed order. It basically works like this: whichever driver gets loaded first gets sda and so on, the second one continues where the first left off, etc. I've had the order change between kernel releases. The md system (software RAID) uses UUIDs to make sure the correct partitions are assembled together, so don't worry about the drive names changing.
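One way to see this for yourself (a sketch; the array names and paths are examples and your md numbers will differ) is to record the arrays by UUID in mdadm.conf, after which assembly no longer depends on which controller gets probed first:

```shell
# Dump ARRAY lines (including UUIDs) for all running arrays
# into the config file:
mdadm --detail --scan >> /etc/mdadm.conf

# mdadm.conf then contains lines like:
#   ARRAY /dev/md1 level=raid1 num-devices=2 UUID=...
# and the arrays can be reassembled regardless of sda/sdd naming:
mdadm --assemble --scan
```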

I wouldn't worry about the "Disk /dev/mdX doesn't contain a valid partition table" messages. You probably don't want a partition table on the md devices anyway. I use cfdisk, so I've never seen that. Actually, running "fdisk -l" right now, I see the same thing, so it's nothing to worry about.

I'd guess you get those messages because it is possible to put a partition table on an md device. If you make a RAID from whole disks (like /dev/sda and /dev/sdb) rather than from partitions (which is what you did), you can then partition the md device. Basically, you can partition the devices and then RAID the partitions, or RAID the devices and then partition the RAID. I don't think there's any real advantage to doing it either way.
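The two orderings above can be sketched like this (device names are assumptions; `--auto=part` is the older-mdadm way to ask for a partitionable array):

```shell
# Option A: partition the disks first, then RAID the partitions.
# This is what was done in this thread -- the resulting /dev/md1
# has no partition table of its own, hence the fdisk message.
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1

# Option B: RAID the whole disks, then partition the md device.
# This needs a partitionable array (md_d* device nodes):
mdadm --create /dev/md_d0 --auto=part --level=1 --raid-devices=2 \
    /dev/sdd /dev/sde
fdisk /dev/md_d0   # here a partition table on the array makes sense
```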

Congrats on getting it set up  :Smile: 

----------

## Argetlam

Great info. Thanks for all the help I appreciate it. I'm sure I will be back with more posts.

Thanks again.

----------

