# 2.6, promise pdc 20268 and dmsetup

## gipsy

I have a Promise 20268 with two 80GB discs in a stripe set and have searched the web over and over for a way to mount this set with kernel 2.6.1. I finally stumbled over a post on the LKML which describes how to use dmsetup to create the device-mapper device for accessing the stripe set. But now I'm stuck: all I can find is people saying they managed to get it working, but nowhere do they describe how they did it.

I've managed to create a device /dev/mapper/ataraid, which I can correctly fdisk:

```
# fdisk /dev/mapper/ataraid

Command (m for help): p

Disk /dev/mapper/ataraid: 160.0 GB, 160052723712 bytes
255 heads, 63 sectors/track, 19458 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

              Device Boot      Start         End      Blocks   Id  System
/dev/mapper/ataraid1               1       19458   156296353+  83  Linux
```

This is exactly the same output as in 2.4 with the ataraid driver, so it should be more or less correct, but now I don't know how to mount or mknod this /dev/mapper/ataraid1 because it's nonexistent.

Perhaps someone in here can help; I think it would also be interesting for a lot of other Linux users ...

----------

## okapi

Can you post a little howto on the way you managed to get the device mapped? I found the same thread on LKML and I did not manage to map the device.

----------

## vlad_tepes

I just managed to get my PDC20271 (TX2000) to work.

Here's what I have:

  * 2.6.3-gentoo-r1

  * Promise TX2000

  * two 40GB HDs, one partition (NTFS, whole disk)

Here's what I did:

1) install pdc202xx_new kernel module

after modprobe'ing pdc202xx_new I now have:

```
    - /dev/hde  (first disk of raid-array)
    - /dev/hdg  (second disk of raid-array)
```

2) install device mapper

```
emerge device-mapper
```

3) re-compile kernel with device-mapper support

checked options "Device Mapper support" and "ioctl interface v4" in 

Device Drivers --> Multi-device support

4) create a device for the raid-array

First we need a mapping file for dmsetup:

```
echo 0 $(expr $(blockdev --getsize /dev/hde) '*' 2) striped 2 128 /dev/hde 0 /dev/hdg 0 > devmap.raid0
```

my RAID consists of two disks (/dev/hde and /dev/hdg) and is a stripe.

the size of a chunk is 64KB -> 128 sectors (128 sectors * 512 bytes = 64KB); see "man dmsetup" for a description

of the mapping-file.
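For reference, the arithmetic behind that echo line can be sketched as follows (78165360 is a made-up per-disk sector count standing in for what `blockdev --getsize /dev/hde` would report; it is not from a real device):

```shell
#!/bin/sh
# hypothetical per-disk size in 512-byte sectors; on a real system this
# value comes from `blockdev --getsize /dev/hde`
DISK_SECTORS=78165360

# a 2-disk stripe exposes twice the sectors of a single disk
TOTAL=$(expr $DISK_SECTORS '*' 2)

# chunk size in sectors: 64 KB = 64 * 1024 / 512 = 128 sectors
CHUNK=$(expr 64 '*' 1024 / 512)

# print a table line in the same "start length striped #disks chunk dev off dev off" format
echo "0 $TOTAL striped 2 $CHUNK /dev/hde 0 /dev/hdg 0"
```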

the file should look something like this:

```
0 156330720 striped 2 128 /dev/hde 0 /dev/hdg 0
```

now create a device for the RAID

```
dmsetup create raid0 devmap.raid0
```

the new mapped device will be placed in /dev/mapper/raid0

5) create devices for the partitions

sfdisk gives us the details:

```
sfdisk -d /dev/mapper/raid0
```

should print something like this:

```
# partition table of /dev/mapper/raid0

unit: sectors

/dev/mapper/raid0p1 : start=       63, size=156328452, Id= 7, bootable

/dev/mapper/raid0p2 : start=        0, size=        0, Id= 0

/dev/mapper/raid0p3 : start=        0, size=        0, Id= 0

/dev/mapper/raid0p4 : start=        0, size=        0, Id= 0

```

So create another mapping file for each partition; in my case this is just one file (I named it devmap.raid0p1):

```
# start  size       type    destination         start
  0      156328452  linear  /dev/mapper/raid0   63
```

'start' and 'size' should be the values from sfdisk above
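The translation from sfdisk values into a partition mapping can be sketched like this (a minimal example using the start/size numbers shown above):

```shell
#!/bin/sh
# start/size in sectors, copied from the sfdisk output for raid0p1
START=63
SIZE=156328452

# a linear target line: "logical-start length linear source-device source-offset"
echo "0 $SIZE linear /dev/mapper/raid0 $START" > devmap.raid0p1
cat devmap.raid0p1
```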

...and create a device for this partition:

```
dmsetup create raid0p1 devmap.raid0p1
```

...and finally

```
mount -t ntfs /dev/mapper/raid0p1 /mnt/RAID0
```

hope that helps 

-robert  :Wink: 

----------

## gipsy

this works like a charm ... very good work and thanks a lot.

I think this post should go somewhere into the faq or docs, because I think there is more than one person interested in this solution.

----------

## tychop

Just a Q:

What needs to be done to retain this info after booting?

Edit the fstab?

Or do I need to do more?

----------

## dotcom

first - thanks for the info.

would this solution also work when trying to switch from a 2.4 kernel (with working ataraid (promise)) to a 2.6 kernel?

I might not have understood completely how this can be done  :Shocked: , but if anyone succeeded, please let me know.

thanks a lot in advance!

----------

## vlad_tepes

 *tychop wrote:*   

> What needs to be done to retain this info after booting?
> 
> Edit the fstab?
> 
> Or do I need to do more?

 

The first step I did was to add a script to /etc/init.d called ataraid0, which will create the mappings on boot.

```
#!/sbin/runscript

depend() {
   need modules
}

start() {
   ebegin "Initializing software mapped RAID devices"
   /etc/stripemaps/stripe-mapper.sh /etc/stripemaps/*.devmap
   eend $? "Error initializing software mapped RAID devices"
}

stop() {
   ebegin "Removing software mapped RAID devices"
   dmsetup remove_all
   eend $? "Failed to remove software mapped RAID devices."
}
```

this script assumes that the mapping-files (*.devmap) and a script called stripe-mapper.sh are in /etc/stripemaps

Here is stripe-mapper.sh (please forgive my poor sh- and awk-knowledge...)

```
#!/bin/sh

SELF=`basename $0`

if [ $# -lt 1 ] || [ "$1" = "--help" ]
then
   echo "usage: $SELF mapping-file ..."
   exit 1
fi

# the init script passes all *.devmap files, so handle every argument
for FNAME in "$@"
do
   # set up vars for device-name and device-path
   NAME=`basename $FNAME .devmap`
   DEV=/dev/mapper/$NAME

   # create the device using device-mapper
   dmsetup create $NAME $FNAME
   if [ ! -b $DEV ]
   then
      echo "$SELF: could not map device: $DEV"
      exit 1
   fi

   # create a linear mapping for each partition
   sfdisk -l -uS $DEV | awk '/^\// {
      start = $3; size = $5;
      if ( size == 0 ) next;
      base = substr($1,1,length($1)-2);
      ("basename " $1) | getline dev;
      print 0, size, "linear", base, start | ("dmsetup create " dev); }'
done
```

the script assumes mapping-files of the form device.devmap; it first creates a mapping for the whole disk (/dev/mapper/device) and then a mapping for each partition (/dev/mapper/devicep1, /dev/mapper/devicep2, ...)
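The naming convention can be checked without any real devices; the script derives the dm name and the device node from the file name like this (example path only):

```shell
#!/bin/sh
# example mapping-file path; nothing is created here
FNAME=/etc/stripemaps/raid0.devmap

# strip the directory and the .devmap suffix to get the dm name,
# which device-mapper then exposes under /dev/mapper
NAME=`basename $FNAME .devmap`
DEV=/dev/mapper/$NAME
echo "$NAME -> $DEV"
```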

then add the script to the boot runlevel

```
rc-update add ataraid0 boot
```

and test it:

```
/etc/init.d/ataraid0 start
```

the devices for the partitions should now be in /dev/mapper

The second step is (as you mentioned) to edit fstab and append

a line for each partition on the disk, for example:

```
/dev/mapper/raid0p1  /mnt/RAID0  ntfs  noatime,ro  0 0
```

WARNING: I haven't tested the scripts a lot before posting them here, especially with disks having more than one partition!

Also, putting scripts in /etc is not a good idea; maybe /bin or even /sbin would be better.

-robert

----------

## vlad_tepes

 *dotcom wrote:*   

> first - thanx for the info.
> 
> would this solution also work when trying to switch from a 2.4 kernel (with working ataraid (promise)) to a 2.6 kernel?
> 
> I might not have understood completely how this is can be done , but if anyone suceeded, please let me know.
> ...

 

I switched from 2.4 (SuSE), which detected my TX2000 without any problems, to 2.6.3 gentoo. As I understand it, the device-mapper does much the same in 2.6 as ataraid did in 2.4 (except that the setup is now done from user space, I think), so There-Should-Be-No-Problem(TM)  :Wink:

If you try to INSTALL gentoo on an IDE-RAID, that would be something different...

-robert

----------

## dotcom

Hi Robert,

thanks again. I think I am getting closer to understanding devicemapping.   :Confused:

Just one more question - how about grub/lilo boot/kernel parameters?

Would it look like...

```

e.g. grub

boot hd(0,x)

kernel hd(0,x) /boot/bzImage.whatever root=/dev/mapper/raid0py

```

...for a system booting from ataraid partitions?

----------

## vlad_tepes

 *dotcom wrote:*   

> Hi Robert,
> 
> thx again. I think I am getting closer to understand devicemapping.  
> 
> Just one more question - how about grub/lilo boot/kernel parameters?
> ...

 

unfortunately not. Booting from software RAIDs would need a little bit more magic (at least from a raid0; raid1 should work though),

because the /dev/mapper/whatsoever device is created by the device-mapper (which is in, or loaded by, the kernel), so it's not accessible to grub.

grub knows something about the physical drives+partitions in the system (from the BIOS) and a bit about filesystems, but nothing of device-mapping I fear.

maybe using an initrd (booting from a ramdisk) would be a solution here,

but I have no experience with that  :Sad:

-robert

----------

## linux_on_the_brain

 *Quote:*   

> So create another mapping file for each partition, in my case this is just
> 
> one file (I named it devmap.raid0p1)
> 
> Code:
> ...

 

Vlad_tepes, great work. I have one question: what command are you using to create devmap.raid0p1? I created 2 partitions, but I don't know what needs to be passed to the dmsetup command to create the 2 additional devmap files.

here is my config

```
# partition table of /dev/mapper/raid0
unit: sectors

/dev/mapper/raid0p1 : start=       63, size=585954747, Id=83
/dev/mapper/raid0p2 : start=585954810, size=390829320, Id=83
/dev/mapper/raid0p3 : start=        0, size=        0, Id= 0
/dev/mapper/raid0p4 : start=        0, size=        0, Id= 0
```

thanks
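Going by the sfdisk table above, the two mapping files would contain something like the following (a sketch: each line is `0 <size> linear /dev/mapper/raid0 <start>`, with the size/start values taken from that output):

```shell
#!/bin/sh
# one linear mapping per partition, sizes/starts taken from the sfdisk dump above
echo "0 585954747 linear /dev/mapper/raid0 63"        > devmap.raid0p1
echo "0 390829320 linear /dev/mapper/raid0 585954810" > devmap.raid0p2
cat devmap.raid0p1 devmap.raid0p2
```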

----------

## linux_on_the_brain

OK, well I'm getting a little farther, but I have hit a road block and am not sure where I went wrong. Would somebody fix me up? Thanks

Here is what I have done

```
sfdisk -d /dev/mapper/raid0

# partition table of /dev/mapper/raid0
unit: sectors

/dev/mapper/raid0p1 : start=       63, size=585954747, Id=83
/dev/mapper/raid0p2 : start=585954810, size=390829320, Id=83
/dev/mapper/raid0p3 : start=        0, size=        0, Id= 0
/dev/mapper/raid0p4 : start=        0, size=        0, Id= 0

digital_linux root # echo -ne '0 585954747 linear /dev/mapper/raid0 63' > devmap.raid0p1
digital_linux root # dmsetup create raid0p1 devmap.raid0p1
device-mapper ioctl cmd 3 failed: Device or resource busy
Command failed
```

----------

## Kileak

Well, at least this helped me a little more in getting my HPT370 RAID working with the 2.6 kernel... but it still doesn't really work  :Sad:

Everything worked and I can mount the raid partition. So I was quite happy to see it working, but then I looked into some directories, and besides the files which should be there, there were many trashy files (à la %$??ÄÄÄ"§.???  :Wink: ), and of course I couldn't access any files...

I thought it might be that I've got a different cluster size (32KB), so I tried to change it in devmap.raid0 from 128 to 64, but then I couldn't even mount the partition...

Any ideas what it could be, or what else I could try? It would be very nice to get the raid working under 2.6, because 160GB of space more or less isn't something I would like to waste  :Wink:

```
kileak raid # cat devmap.raid0
0       312736032 striped 2 128 /dev/hde 0 /dev/hdg 0

kileak raid # cat devmap.raid0p1
0       163846872       linear  /dev/mapper/raid0       63

kileak raid # cat devmap.raid0p2
0       148874355     linear  /dev/mapper/raid0       163846935

kileak raid # sfdisk -d /dev/mapper/raid0
# partition table of /dev/mapper/raid0
unit: sectors

/dev/mapper/raid0p1 : start=       63, size=163846872, Id= c
/dev/mapper/raid0p2 : start=163846935, size=148874355, Id= c
/dev/mapper/raid0p3 : start=        0, size=        0, Id= 0
/dev/mapper/raid0p4 : start=        0, size=        0, Id= 0
```

----------

## vlad_tepes

 *Kileak wrote:*   

> Well, at least this helped me a little more in getting my HPT370-Raid working with the 2.6 kernel... But still it doesn't work really 
> 
> Everything worked and I can mount the raid-Partition. So I was quite happy to see it working, but then I looked into some directories, and besides the files which should be there, there were many trashy files (àla %$??ÄÄÄ"§.??? ), and of course I couldn't access any files...
> 
> I thought it might be that I've got a different cluster size (32KB), so I tried to change it in devmap.raid0 from 128 to 64, but then I couldn't even mount the partition... 
> ...

 

have you verified the chunk size of your raid? Which filesystem do you use on these two partitions?

my first steps with this failed because I configured the wrong chunk size. I think you could even mount a FAT partition with an incorrect chunk size, as long as the FAT (the meta-information for the filesystem itself) fits in one chunk (or so...).

what's the output of fsck on these two partitions?

----------

## Kileak

Well, the chunk size is 32K and the fs is FAT32...

But I already tried to use 8, 16, 32, 64, 128, 256 as the chunk size (just to be sure  :Wink: ) and nothing worked... If I enter a chunk size less than 128 I can't even mount it, and if I choose >= 128 I can mount the partitions, but the filesystem is just trash...

Could the mistake be somewhere else?

```
kileak raid # ls /dev/hde*
/dev/hde  /dev/hde1  /dev/hde2

kileak raid # ls /dev/hdg*
/dev/hdg
```

if I choose a chunk size of 64 dmesg gives me

```
FAT: Did not find valid FSINFO signature.
     Found signature1 0x00000000 signature2 0x00000000 (sector = 1)
FAT: invalid first entry of FAT (0xffffff8 != 0x1b01)
VFS: Can't find a valid FAT filesystem on dev dm-1.
```

everything less than 64 gives

```
FAT: bogus number of reserved sectors
VFS: Can't find a valid FAT filesystem on dev dm-3.
```

----------

## vlad_tepes

 *Kileak wrote:*   

> Well, the chunk-size is 32K and the fs is FAT32...
> 
> But I already tried to use 8,16,32,64,128,256 as chunk size (just to be sure ) and nothing worked... If I enter a chunk size less than 128 I can't even mount it, and if I choose >= 128 I can mount the partitions, but the filesystem is just trash...
> 
> Could the mistake be somewhere else? 
> ...

 

sorry, i am at the end of my knowledge...

but a few more questions: 

- can you access the filesystem from DOS or Windows without errors? 

- Did you make the Filesystem through DOS/Windows or through Linux/mkfs?

- How do you mount the filesystem (which options)?

I think this is not a problem of device-mapping. Maybe the filesystem is damaged. There are also a lot of posts about problems with the HPT37x chip.

sorry, can't help...

-robert

----------

## Kileak

 *vlad_tepes wrote:*   

> 
> 
> but a few more questions: 
> 
> - can you access the filesystem from DOS or Windows without errors? 
> ...

 

well, no problem in win. But I created the fs with Partition Magic, since Windows couldn't create 80GB FAT32 partitions; maybe this is the problem?

but I had no problems with kernel 2.4.25 and the hpt370 software raid module...

to mount the partitions i made the following entry in /etc/fstab

```
/dev/mapper/raid0p1   /mnt/raid/part1   vfat      users,exec,rw,noauto,gid=users,umask=0002 0 0
/dev/mapper/raid0p2   /mnt/raid/part2   auto      users,exec,rw,noauto,gid=users,umask=0002 0 0
```

 *vlad_tepes wrote:*   

> sorry, can't help...
> 
> -robert

 

well, thanks anyway  :Smile: 

----------

## defined

 *Kileak wrote:*   

>  *vlad_tepes wrote:*   
> 
> but a few more questions: 
> 
> - can you access the filesystem from DOS or Windows without errors? 
> ...

 

i tried to do this on a highpoint 372 setup with 4 (maxtor 80gb) disks attached, and tried 16->128 chunks... only 128 produced devices which I could mount, but they didn't seem to be valid filesystems.

even though, when I check with fdisk, all chunk sizes produce the correct partition table (2 partitions, 1 ntfs and 1 ext3).

this is what my raid0 looks like:

```
0 960486912 striped 4 64 /dev/hde 0 /dev/hdf 0 /dev/hdg 0 /dev/hdh 0
```

```
# sfdisk -d /dev/mapper/raid0
# partition table of /dev/mapper/raid0
unit: sectors

/dev/mapper/raid0p1 : start=550884915, size=409593240, Id= 7
/dev/mapper/raid0p2 : start=       63, size=550884852, Id=83
/dev/mapper/raid0p3 : start=        0, size=        0, Id= 0
/dev/mapper/raid0p4 : start=        0, size=        0, Id= 0
```

so my raid0p1 looks like:

```
0       409593240       linear  /dev/mapper/raid0       550884915
```

and raid0p2 is:

```
0       550884852       linear  /dev/mapper/raid0       63
```

no success so far.. if anyone gets it working on a highpoint controller, plz let me know  :Smile: 

----------

## WL(inux)

I love linux, but what I hate is my raid controller  :Sad:

After spending 2 days surfing through forums and other infos, setting up ataraid and later device-mapper ...

WHY THE HELL is there no solution?

the situation:

2x 120GB WD stripe set on the onboard Promise RAID.

Windows can boot from this.

Bootloaders supported by linux fail  :Sad:

You always need to load device-mapper or other scripts before accessing the disk array ... why the hell is windows able to do this and linux not? I'm sure there is a way, but it's too complex ... or is there no way?

After 3 days I am now going to give up and do a normal 2-disk dual boot system: one disk linux and the other win.

If there is a solution out there without using a third disk, please tell me   :Exclamation:

----------

## zeano

I used this method on my Silicon Image Sil 3112 SATA RAID and got it working quite well, though I have been getting some strange errors when putting the array under heavy load.

I've tried both the siimage module and the sata_sil module that uses libata. The siimage module tended to lock up quite a bit, so I'm currently using the sata_sil module, although it adds the drives as SCSI devices, which means I can't use some SMART features such as hddtemp or many of hdparm's features. :/

I've edited vlad_tepes' script to create the devmaps so that it works with multiple partitions, but my bash scripting knowledge is very poor so I won't post it unless anyone requests.

Last edited by zeano on Tue Jun 15, 2004 2:11 pm; edited 2 times in total

----------

## libolt

Hi, I've followed this thread with much interest, as I have a Promise pdc20271 controller and two 200 gigabyte hard drives that are set up in a raid1 mirror with data already on them.

However, I'm having trouble figuring out the proper options to set in the table for dmsetup.

I have /dev/hde and /dev/hdg, which are identical in size.

Any help is greatly appreciated,

Mike

----------

## cyrillic

 *defined wrote:*   

> no success so far.. if anyone gets it working on a highpoint controller, plz let me know 

 

I got it working on my HPT374 after browsing the source code for the 2.4 kernel's ataraid/hptraid driver.  I found this little piece of info.

 *drivers/ide/raid/hptraid.c wrote:*   

>         /* All but the first disk have a 10 sector offset */
> 
>         if (i>0)
> 
>                 bh->b_rsector+=10; 

 

So, I adjusted my devmap files accordingly.

```
# cat devmap.raid0
0 312602720 striped 2 128 /dev/hde 0 /dev/hdg 10

# cat devmap.raid0p1
0 61432496 linear /dev/mapper/raid0 63
```

I can now mount this NTFS partition without getting errors.   :Very Happy: 

----------

## ennservogt

Thank you...

For the last three months I have tried to get my Promise PDC20271 controller in RAID mode 0 (stripe) to work under Linux. I even mailed the Promise support, but they said that it is impossible at the moment.

By the way, your little how-to is one of the best I have ever read. No joke! It's easy to understand and you can follow everything step by step!

THANK YOU

THANK YOU 

THANK YOU

----------

## c_riis

hey, 

I use a 2.6.6-rc1 kernel.

in /proc/pci I have:

RAID bus controller: Promise Technology, Inc. PDC20276 IDE (rev 1).

But following your howto I can't get modprobe pdc202xx_new working; it says:

FATAL: module pdc202xx_new not found.

How do I get the module? Is it because of kernel 2.6.6?

I'd like a bit of guidance.

- Christian

----------

## rockthesmurf

I have 2x80gig hard drives (hde and hdg) in a raid0 array. This array is then split into 3 separate partitions. I have been following the instructions, and first ran:

```
echo 0 $(expr $(blockdev --getsize /dev/hde) '*' 2) striped 2 128 /dev/hde 0 /dev/hdg 0 > devmap.raid
```

This gave me a file 'devmap.raid' containing the following:

```
0 312711168 striped 2 128 /dev/hde 0 /dev/hdg 0
```

So far so good, but when I try to activate this I get an error:

```
dmsetup create raid0 devmap.raid

device-mapper ioctl cmd 9 failed: No such device or address
Command failed
```

Could anyone suggest what I might be doing wrong? The devices hde and hdg are there, so I'm not sure why it says no such device and so on  :Confused: 

Much appreciated, 

Steven Craft

----------

## unz

I'd like to thank mr vlad_tepes, who helped me with my 80+80gb on the pdc20276 controller. But I've got some more questions   :Shocked:  :

- at booting I get some strange errors and am advised to run /sbin/depscan.sh .. I do, but nothing; the errors just continue. it's only cosmetic [ I hope]

- your script creates a mapping for every partition found, but I need only 2; how can I customize it? [ I need only raid0p5, raid0p6]

- another problem is that only root can really use those partitions; a normal user can only read [just the same icon for all the files] ... my fstab is ignored.

thanks a lot guy, without your how-to I'd not have installed 2.6.7  :Wink:

unz

----------

## danone

As I posted before: has anyone got a RAID1 mirror set working? How do I use dmsetup to create a mirror target? I looked into the man pages and so on, but no luck.

----------

## unz

ok, I've done it ...   :Very Happy:

it's not elegant ... but it works!

my way differs a bit from vlad's, 'cos his script didn't work for me ... I changed stripe-mapper.sh into an ugly

```
#!/bin/sh

echo 0 $(expr $(blockdev --getsize /dev/hde) '*' 2) striped 2 128 /dev/hde 0 /dev/hdg 0 > /etc/stripemaps/raid.devmap

dmsetup create raid0 /etc/stripemaps/raid.devmap
dmsetup create raid0p1 /etc/stripemaps/raid0p1.devmap
dmsetup create raid0p5 /etc/stripemaps/raid0p5.devmap
dmsetup create raid0p6 /etc/stripemaps/raid0p6.devmap

mount -t vfat -o uid=1000 /dev/mapper/raid0p5 /windows/E
mount -t vfat -o uid=1000 /dev/mapper/raid0p6 /windows/F
```

solving all my problems with permissions and the right size of my partitions

cheers

----------

## mauricec

Great howto.....

I've installed Gentoo with 2.4.27-gentoo-r1 and enabled the device-mapper, so I can test it with both ataraid and dev-mapper (and be able to boot from disk).

All seems great, with the expected results given in this topic, but I'm unable to mount any of my partitions....

And I really want to change to 2.6.

What I did:

Read the topic very well  :Wink:

Used vlad_tepes' scripts, which work fine

```
echo 0 $(expr $(blockdev --getsize /dev/hde) '*' 2) striped 2 128 /dev/hde 0 /dev/hdg 0 > /etc/stripemaps/raid0.devmap
```

Create disk in devmapper :

```
dmsetup create raid0 /etc/stripemaps/raid0.devmap
```

sfdisk gives me this:

```
/dev/mapper/raid0p1 : start=       63, size=   499905, Id=83
/dev/mapper/raid0p2 : start=   499968, size=  4007808, Id=83
/dev/mapper/raid0p3 : start=  4507776, size=  9773568, Id=83
/dev/mapper/raid0p4 : start= 14281344, size=305883648, Id= 5
/dev/mapper/raid0p5 : start= 14281407, size=  2007873, Id=82
/dev/mapper/raid0p6 : start= 16289343, size= 39070017, Id=83
/dev/mapper/raid0p7 : start= 55359423, size=264805569, Id=83
```

So I made the file /etc/stripemaps/raid0p1.devmap containing this:

```
0 499905 linear /dev/mapper/raid0 63
```

Using /etc/init.d/ataraid to create the mapper devices.

Now i should be able to :

```
mount -t ext3 /dev/mapper/raid0p1 /mnt/floppy
```

But it returns :

```
mount: wrong fs type, bad option, bad superblock on /dev/mapper/raid0p1,
       or too many mounted file systems
```

And that really pisses me off .....

Anyone any idea???

Because now I'm stuck with the ataraid driver (which works fine) and I'm not able to move on .... i.e. not able to use the 2.6 kernel...

----------

## smog_at

Has anyone tested it with a mirror?

It's a good how-to, but I need a mirror instead of a stripe. Do I just need to change the argument?

From:

```

echo 0 $(expr $(blockdev --getsize /dev/hde) '*' 2) striped 2 128 /dev/hde 0 /dev/hdg 0 > devmap.raid

```

to

```

echo 0 $(expr $(blockdev --getsize /dev/hde) '*' 2) mirrored 2 128 /dev/hde 0 /dev/hdg 0 > devmap.raid

```

?

regards, smog_at

----------

## Rad

You don't have the option to use mdadm instead of dmraid?

Because if you can, you should.

----------

## smog_at

Hmmm... I have the option, but I didn't find any tutorials or how-tos about mdadm.

Regards, smog_at

----------

## Rad

Creating an array with mdadm is very simple. Something like "mdadm --create --verbose /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1" does the job, provided you have the kernel modules and mdadm installed.

Gentoo-wiki also contains information, which should be helpful even if you don't try to install gentoo on a raid array.

----------

