# Yet another Hybrid Raid 0 / 1 Howto for 2.6 with dmsetup

## garlicbread

I've been experimenting a lot with setting up a Hybrid Raid Array using a couple of 200Gb Maxtor SATA disks via device mapper

and I've found a couple of things that I couldn't find on google / gentoo forums or the dm-devel list

this relates to dmsetup / dmraid and Hybrid Raid arrays (especially with regards to Raid 1 and dmsetup)

So I decided to write this little HowTo

(this is also for when I forget all about this and have to remember how I setup my Raid arrays in the first place  :Very Happy: )

In my own case I've been booting off a 3rd disk, whose contents I'll eventually copy over to the Raid Array

although it may also be possible to use the LiveCD as well, I've not actually tried this myself yet

a large amount of the info here has been pieced together from other messages on the gentoo forum and from experimentation

Please feel free to copy this, or to point out if any part of it is incorrect

(i.e. I take no responsibility for any loss of data if your PC blows up etc)

also this is my first HowTo and use of BBCode, so apologies if I haven't got the formatting right

(I'm also starting to think that I've written far too much here for one document  :Smile: )

Table of Contents

1.0 A bit about Raid in general

1.1 Hybrid Raid

1.2 dmsetup

1.3 dmraid

2.0 Determining the size of an individual disk

2.1 Setting up Raid 1

2.2 Setting up Raid 1 - Disk synchronization

2.3 Setting up Raid 1 - Without synchronization

2.4 Setting up Raid 1 - Specific options for the mirror target

3.0 Setting up Raid 0

4.0 Hiding the Raid / Bios metadata

5.0 Mapping out the partitions

6.0 Automating via scripts

7.0 Performance

1.0 A bit about Raid in general

At the moment there are 3 different types of Raid implementation available under Linux

Software Raid - this uses the Linux md driver (managed by user space tools) to present a typical /dev/md0 device. This is the Raid support native to Linux, but it's not usually compatible with Windows

True Hardware Raid - typically only seen on Servers, or machines with separate Hardware PCI Raid card add ons

There is also another type of raid that is seen on some of the new motherboards. It's not true hardware raid, as a lot of the work is still carried out by the CPU and the operating system.

For the rest of this document I'll refer to this type as Hybrid Raid. This is the type that this HowTo is concerned with

In my case the motherboard is an Asus A8V Deluxe, which means I have 2 different Hybrid Raid controllers to experiment with

a VIA VT8237 and the Fast Track Promise 20378 RAID controller   

My end Goal was to get a dual boot system up and running with Win XP and Gentoo Linux

something that could use the hybrid raid setup so that Win XP and Linux would both be raided and could co-exist

Also something that would use the VIA controller in preference, as initial indications using Windows benchmarking tools appear to show that this is a little faster than the Fastrack controller when using both disks in combination.

I've noticed that it is possible to use Linux software raid for Linux, and the Hybrid Raid for Win XP

and both will be compatible (at least with Raid 1). But this is only if the super block option is switched off for software raid

This makes it a pain to setup. Also since device mapper is closer to the kernel I was hoping that it may perform better

or at the very least be easier to implement

For Raid 1 purposes I also used the windows VIA utility after trying different setups with Linux to confirm that the disks were still considered in-sync by the Hybrid Array setup

1.1 Hybrid Raid

Typically with hybrid raid, the setup / initialization is controlled from the bios when you first boot up

With the controllers that I was using (at least in my case the VIA VT8237), typically 1 or 2 bytes are written

near to the partition table as an indicator that the hybrid raid array is there

Also the meta data which describes how the raid array is setup (size of array, number of disks, type etc)

appears to be located somewhere right at the end of the disk on each of the raid members

I believe this is what the bios usually reads / writes to when initializing the array at bootup

when you view the disk via DOS / windows / bios etc, it sees the end result of the raid array

which is a disk slightly shorter than normal

(I'd guess this is to prevent the meta data from being overwritten / affected at the end of the disk)

Linux on the other hand doesn't see the disks via the bios

it sees the disks as non-raided entities including the meta data at the end

in my case as an example with the latest kernel 2.6.9-rc2-love4 my SATA drives were showing up as /dev/sda /dev/sdb

as the newer SATA drivers now appear to be a part of the SCSI driver set

1.2 dmsetup

The new 2.6 kernel doesn't currently have support for Hybrid Arrays like 2.4 used to have

For 2.4 there were some drivers that could be used, including a proprietary driver for the VIA system

but for 2.6 this is now moving towards device mapper

Device mapper is a kernel feature that is controlled by a user space program called dmsetup

the way to imagine device mapper / dmsetup is this:

a block device is fed in, it is then manipulated, with a resultant block device out

you specify the output block device name and a map file

the map file contains what input block devices there are, the type of target: linear / stripe / mirror etc

with a couple of other parameters

block devices created from dmsetup usually end up within /dev/mapper/

For one example of this

If we imagine /dev/hda as a single hard disk, that has 10000 sectors as an example (0 - 10000)

and /dev/hda1, the first partition, starts at sector 63 and is 5000 sectors long (so its last sector is 5062)

accessing sector 0 on hda1 actually accesses sector 63 on hda

and accessing sector 4999 on hda1 actually accesses sector 5062 on hda

in the case of hda and hda1 these are both setup automatically when the kernel first boots up

but hda1 responds in the same way that a linear device map would

using the above example if we'd used dmsetup with the above disk

```
echo "0 5000 linear /dev/hda 63" | dmsetup create testpart
```

we'd end up with a block device /dev/mapper/testpart that would respond exactly the same way as hda1

1.3 dmraid

One tool I've been experimenting with is dmraid

this calls dmsetup with a map to setup Raid arrays automatically by reading the meta data off the disks in use

however it's still in beta testing at the moment

also dmraid doesn't currently recognize the VIA VT8237

for the Fastrack 20378 (dmraid pdc driver) it was able to identify the Raid 0 array I'd setup correctly

However the raid 1 array appeared to be only half the size that it should be (100Gb instead of 200Gb)

this is probably just a bug, but it's something to be aware of (the version used was dmraid-1.0.0-rc4)

homepage is here

http://people.redhat.com/~heinzm/sw/dmraid/

EDIT

I've just found that someone else has made a much better ebuild here

https://bugs.gentoo.org/show_bug.cgi?id=63041

to use with gentoo portage overlay, create the directory and place the ebuild within /usr/local/portage/sys-fs/dmraid/

make sure that PORTDIR_OVERLAY="/usr/local/portage" is set within /etc/make.conf

```
cd /usr/local/portage/sys-fs/dmraid/
ebuild dmraid-1.0.0_rc4.ebuild fetch
ebuild dmraid-1.0.0_rc4.ebuild digest
emerge dmraid
```

2.0 Determining the size of an individual disk

One of the values we may need, to create the map, is the size of one of the individual raid members (single disk)

this can be viewed in a couple of different ways

by looking at /sys/block/<block device>/size, assuming that /dev/sda is one of the raid members

```
cat /sys/block/sda/size
```

by using the blockdev command

```
blockdev --getsize /dev/sda
```

this will show the length of the disk in sectors

for the length of the disk in bytes this value can be multiplied by 512, (typically 1 sector = 512 bytes)
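As a quick sketch of that arithmetic (the sector count here is the sample figure from my own disk; substitute the output of `blockdev --getsize` for yours):

```shell
# sample sector count from this howto - substitute your own
SECTORS=398297088
BYTES=$(( SECTORS * 512 ))   # 1 sector = 512 bytes
echo "$BYTES"                # 203928109056
```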

2.1 Setting up Raid 1

Raid 1 is primarily designed for resilience, in that if one disk fails, there is still another disk with an identical copy of data 

It is possible to setup Raid 1 just using Device mapper without Software Raid

what we need to do is to create a single block device to represent the Raid 1 Array of the 2 disks

we feed 2 disks in and get 1 block device out

whatever is written to the output block device is written to both disks at the same time

anything read could be read from either disk

the below sections indicate how to put the map together

the map can be stored in a file and then called by

```
dmsetup create <device name> <map file>
```

or you can specify it manually on the command line e.g.

```
echo "0 398283480 mirror core 2 128 nosync 2 /dev/sda 0 /dev/sdb 0" | dmsetup create testdevice
```

2.2 Setting up Raid 1 - Disk synchronization

so far the only mention of this table I've seen documented for Raid 1 device mapper is the following

```
0 398283480 mirror core 1 128 2 /dev/sda 0 /dev/sdb 0
```

the only problem with this is that dmsetup will start to try and synchronize the disks and copy sda to sdb

sometimes this is something you want to happen to ensure that both disks have identical data

However for normal boot up this is not suitable

on my own system while windows takes around 1Hr to synchronize the disks, I've worked out it would take around 5Hrs to wait for the disks to synchronize using dmsetup in this way, for a pair of 200Gb disks

Also from what I've observed it would appear that data is not currently written to both disks at the same time while the synchronization is taking place

This is noticeable if synchronization is stopped half way through with dmsetup remove_all

the only parameters you'll probably need to alter (assuming you want your disks to synchronize) are the following:

The length of the Array. The figure used above 398283480 is the full length of one of the individual disks in my case, see the above section to get this value

The device nodes /dev/sda and /dev/sdb may be different for your system but represent the block device for each individual disk

also one other thing to remember is that the first disk specified will be the source and the second disk the destination for the synchronization

the full parameter list for a Raid 1 map is listed in a section further below

One other consideration is that this will present the full disk as a Raid array, which means that the meta data that the Bios uses at bootup will also be visible (if this was corrupted then potentially the system could become unbootable)

see the section relating to hiding the metadata to get around this.

2.3 Setting up Raid 1 - Without synchronization

I spent a long time trying to figure this one out. A way for dmsetup to setup a Raid 1 array without the disks synchronizing

which ideally is what's required for normal use / bootup

I finally figured something out by looking at the map that dmraid had created on my FastTrack Array with Raid 1

we can up the number of options to 2 and specify nosync as the second option

```
0 398283480 mirror core 2 128 nosync 2 /dev/sda 0 /dev/sdb 0
```

again you'll need to adjust the disk length and device nodes for your own setup

this has the effect of setting up the output device node, but doesn't do the syncing of the whole disk as above, which is what we need for normal operation

2.4 Setting up Raid 1 - Specific options for the mirror target

The mirror target uses the following syntax for the table

<output start> <output length> <target> <log type> <number of options> <option values ...> <number of devices> <device name 1> <offset 1> <device name 2> <offset 2>

the numbers used here are a measurement of the number of sectors on the disk

the first 2 parameters affect the output block device

the rest of the parameters affect the devices going into the map

the first is the offset for the output, this should always be 0

the second parameter represents the length of the output device. typically for Raid 1 this should be the size of a single Raid member

the target parameter in this case is mirror for Raid 1

for log type, the only one supported at the moment is "core" (there is also another one called disk from looking at the kernel sources, but I wouldn't try to use this at the moment)

next this is the number of options to feed into the mirror target. this can be 1 or 2 (unless someone is aware of a 3rd option)

the next 1 or 2 parameters can be specified here. first is the region size, (see below for more info on this). next if the number of options is 2, you can specify nosync here as well

next is the number of disks going into the map, in the case of Raid 1 this will always be 2

finally we specify the block devices going in and the offset (typically the offset should always be 0 for raid 1)

For the region size I've found the optimum value appears to be 128 which is what dmraid appears to use by default

I tested this by timing the amount of blocks synchronized using a synchronize map and reading off the number of blocks completed within a minute

using

```
dmsetup status /dev/mapper/<block device>
```

to read off the number of blocks completed

anything smaller than 128 appears to have no effect, while anything larger appears to slow things down
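Putting the parameters together, here is the nosync map from earlier broken down field by field (the length and device nodes are from my own setup):

```
0            output start offset (always 0)
398283480    output length - the size of a single raid member
mirror       target type for Raid 1
core         log type
2            number of options that follow
128          region size
nosync       skip the initial synchronization
2            number of devices
/dev/sda 0   first device and its offset
/dev/sdb 0   second device and its offset
```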

3.0 Setting up Raid 0

Raid 0 has the advantages of using both disks for maximum capacity (2 x 200Gb = 400Gb)

Also depending on the size of the file, (for large files) an increase in the read performance may be obtained.

The disadvantages are that if one disk fails then the whole array is lost

This means that resilience is half that of a single Disk (in other words make sure you have lots of backups)

The data is striped across both disks, e.g. with a 64K stripe size the first 64K is written to the first disk, the second 64K to the second, the third back to the first disk just after the first stripe and so on in an alternating fashion

we can use Device mapper in the same way to setup access to a Raid 0 Array

what we need to do is to create a single block device to represent the Raid 0 Array of the 2 disks

we feed 2 disks in and get 1 block device out

to keep things simple we use the full size of a single disk, e.g. 398283480 sectors (see the above sections on how to obtain this)

now we multiply it by 2 (398283480 * 2 = 796566960)

we'll use this figure as the full size of the Raid 0 array

```
0 796566960 striped 2 128 /dev/sda 0 /dev/sdb 0
```

in this example the 1st parameter should always be 0, as this is the start offset for the resultant output block device

the 2nd parameter represents the full size of the raid array when it is created; this is one that you will need to set based on the size of your own disks in the array

the 3rd parameter specifies a striped type of target, which is always required for Raid 0

the 4th parameter specifies the number of disks involved; in most cases this is 2

the 5th parameter is linked to the stripe size: if for example when creating the array in the Bios menu you've used a stripe size of 64K then this value will be (64 * 2 = 128), or for a 32K stripe (32 * 2 = 64)

finally we specify each disk followed by the offset, the offset is the number of sectors to skip before reading / writing the first stripe on the disk

the order of the source disks will be important for raid 0, usually the first disk picked up by linux will be the one with the first stripe on

e.g. if the raid members are sda and sdb then sda usually comes first

or for hde and hdg, hde would usually have the first stripe

however this might not always be the case and will depend on the raid controller that you are using

in my own case I've used an offset of 0 for both sources disks

But depending on your controller sometimes it is necessary to set the offset for the 2nd source disk to a value other than 0

e.g. on one thread within the gentoo forum I've seen one person mention that a sector offset of 10 would be required for the second disk for the HPT374 controller
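As a sanity check, the arithmetic above can be scripted; this just prints the striped table using the sample figures from this howto (substitute your own disk size and stripe size):

```shell
DISK_SECTORS=398283480    # size used for a single raid member (sample figure)
STRIPE_KB=64              # stripe size chosen in the Bios menu
ARRAY_SECTORS=$(( DISK_SECTORS * 2 ))        # full size of the Raid 0 array
CHUNK_SECTORS=$(( STRIPE_KB * 1024 / 512 ))  # stripe size in sectors
echo "0 $ARRAY_SECTORS striped 2 $CHUNK_SECTORS /dev/sda 0 /dev/sdb 0"
```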

In order to test that dmsetup has mapped the Raid 0 Array the same way as the bios

create a partition within DOS / windows with a filesystem on

use dmsetup to access the array within Linux

setup the linear maps for the partitions (see below sections for this)

attempt to mount the filesystem

this is about the only way I know of testing that the array is working correctly via dmsetup

dmraid may be a better option if it works with your system

Also if you want to set the overall size of the array to hide the Raid Bios meta data then see the below sections

4.0 Hiding the Raid / Bios meta data

In the above examples for setting up Raid 1 or Raid 0 Arrays, the full span or size of the disk has been used to present the raid array as a block device

one thing to note with this however is that in some cases the bios will store its information about the raid array at the end of the disk

If a partition has been created on the array within Linux that crosses over into this area, then you risk overwriting this data, which could potentially make the system unbootable

i.e. Grub would probably read its information via the bios, which would in turn be unable to read the Array if the meta data has been corrupted

If you've used a windows or DOS utility such as PQMagic to setup your partitions then you probably don't need to worry about this

as these utilities would see the array via the bios which would in turn display the Raid Array as slightly shorter than the physical disk

this way the last partition on the disk won't cross over into this area

If you want to make sure that the resultant device nodes for the array under Linux cannot view the meta data

then we also need to make the Raid Array seem slightly shorter than the full span in order to hide it

this way any partitioning tools used under Linux (such as sfdisk or fdisk) won't be able to see or allocate the space used by the meta data, also any backup / restore utility that may affect the entire disk won't interfere as well

There are 2 ways to do this

if your raid controller is supported, then use dmraid, as it is able to read off the specific values from the meta data and set the correct lengths

do it manually

The Manual method:

unfortunately this relies on using DOS or windows

use a Windows or DOS utility (such as PQMagic) to create a partition located right at the end of the Raid Array

boot into Linux and setup the Raid array using the full span of the disk to begin with

run sfdisk -l -uS to find the end sector (last sector used) for the partition created at the end of the Array, since the partition was created under DOS / windows it won't go right to the end of the disk

For Raid 1

in my case as an example the full size of the disk was 398297088 sectors but the last partition created under PQMagic ended at sector 398283479 on a Raid 1 Array

now add 1 to this value (as the partition needs to sit within the disk) 398283479+1 = 398283480

398283480 is now the value I use for the length of the Raid 1 Array

e.g. 

```
0 398283480 mirror core 2 128 nosync 2 /dev/sda 0 /dev/sdb 0
```

For Raid 0

you could just add 1 the same as above, but to be safe when I tried this myself

I wanted to make sure that the length of the array was a whole (even) number of stripes

this may be over complicating things a bit, but as an example

size of individual disk -  398297088

Full size of Raid 0 Array - (398297088 x 2) = 796594176

end sector of the last partition on the disk - 796583024

for 64K stripe size 65536 / 512 = 128 sectors for each stripe on the disk

796583024 / 128 = 6223304.875 stripes

rounding this up to an even whole number =  6223306 stripes

working backwards  6223306 * 128 = 796583168 sectors

which is the value I've used in the raid map for Raid 0

```
0 796583168 striped 2 128 /dev/sda 0 /dev/sdb 0
```

I'm not sure if this is necessary but it's one way of looking at it
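The rounding above can be scripted as a short sketch, using the sample figures from this section:

```shell
# sample figures from this howto - substitute your own values
END_SECTOR=796583024   # end sector of the last partition, from sfdisk -l -uS
CHUNK=128              # 64K stripe = 128 sectors per stripe
STRIPES=$(( (END_SECTOR + CHUNK - 1) / CHUNK ))   # round up to a whole stripe
STRIPES=$(( (STRIPES + 1) / 2 * 2 ))              # then up to an even count
echo $(( STRIPES * CHUNK ))                       # 796583168
```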

Realistically I've found that the Win / DOS tools won't go right to the end of the array when creating the partition, which means the array is probably slightly longer than this value, but since we're only talking about a couple of Mb or so and we want to make sure to hide the Bios Raid meta data, this appears to be a safe value to use. (dmraid is more accurate in this regard assuming it can recognize your setup)

Also something to note is that some hybrid raid array controllers have the option for a Gigabyte Boundary in the bios setup for Raid 1

All this means is that the Bios will shorten the length of the Array to the nearest Gb, that way if a replacement Disk is not exactly the same size as the old one, it will still function in the Array, as long as it is the same length in Gb

This can also have the effect of making a Raid 1 Array appear shorter than it might otherwise be, and will also affect the end sector for the last partition on the disk

5.0 Mapping out the Partitions

while the above maps for Raid 1 and Raid 0 will create device nodes for the entire array within /dev/mapper

we still need to create device nodes for the individual partitions as this is something which isn't done automatically

this is similar to sda1, sda2 for sda or hda1, hda2 for hda etc

the easy way to do this is to just use the partition-mapper script mentioned at the end of this How-to

if we assume that the raid array has been setup as /dev/mapper/raidarray

and that you've already used a partitioning tool to setup the partitions on the disk

we need to use a map with a linear target

first we run

```
sfdisk -l -uS /dev/mapper/raidarray
```

in my case with a test setup I end up with

```
   Device Boot    Start       End   #sectors  Id  System
/dev/mapper/raidarray1            63 102414374  102414312   c  W95 FAT32 (LBA)
/dev/mapper/raidarray2     102414375 204828749  102414375   c  W95 FAT32 (LBA)
/dev/mapper/raidarray3     204828750 307243124  102414375   c  W95 FAT32 (LBA)
/dev/mapper/raidarray4     307243125 398283479   91040355   c  W95 FAT32 (LBA)
```

as an example to create the device node for the first partition

```
echo "0 102414312 linear /dev/mapper/raidarray 63" | dmsetup create raidarray1
```

we've used 2 values here: the first, 102414312, is the length or #sectors for the partition

the second value, 63, is the offset from the beginning of the raidarray device node, taken from the output of sfdisk

assuming the partition has a filesystem on it we can now mount /dev/mapper/raidarray1
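The remaining partitions from the sfdisk listing follow the same pattern. As a sketch (the `linear_table` helper is hypothetical, and `/dev/mapper/raidarray` is the array node from above), this just prints the tables; each line would be piped into `dmsetup create raidarrayN` as before:

```shell
# hypothetical helper - prints the linear table for one partition, given
# its start sector and length (#sectors) from the sfdisk listing above
linear_table() {
        echo "0 $2 linear /dev/mapper/raidarray $1"
}

linear_table 102414375 102414375   # table for raidarray2
linear_table 204828750 102414375   # table for raidarray3
linear_table 307243125 91040355    # table for raidarray4
```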

If you want to be really clever you could feed the output of the linear map into cryptsetup to encrypt the partition as well

(but there are already other HowTo's for how to do that)

6.0 Automating via Scripts

There are a couple of scripts that I've spotted on another thread which can be useful for setting up raidmaps and partitionmaps

I'm not taking credit for either of these but I did modify one slightly to be more compatible with sfdisk

(sometimes sfdisk will list partitions for a device node by placing p1 at the end of the name, but for device nodes with long names it will sometimes just add a number without the p)

first I started off by creating a directory called /etc/dmmaps

the first 2 scripts I placed within this directory

while the last one was located within /etc/init.d

dm-mapper.sh script

```
#!/bin/sh
SELF=`basename $0`
BASEDIR=`dirname $0`

if [[ $# -lt 1 || $1 == "--help" ]]
then
        echo "usage: $SELF mapping-file"
        exit 1
fi

# setup vars for mapping-file, device-name and device-path
FNAME=$1
NAME=`basename $FNAME .devmap`
DEV=/dev/mapper/$NAME

# create device using device-mapper
dmsetup create $NAME $FNAME

if [[ ! -b $DEV ]]
then
        echo "$SELF: could not map device: $DEV"
        exit 1
fi

# create a linear mapping for each partition
$BASEDIR/partition-mapper.sh $DEV
```

partition-mapper.sh script

```
#!/bin/sh
SELF=`basename $0`

if [[ $# -lt 1 || $1 == "--help" ]]
then
        echo "usage: $SELF map-device"
        exit 1
fi

NAME=$1

if [[ ! -b $NAME ]]
then
        echo "$SELF: unable to access device: $NAME"
        exit 1
fi

# create a linear mapping for each partition
sfdisk -l -uS $NAME | awk '/^\// {
        if ( $2 == "*" ) {start = $3; size = $5;}
        else {start = $2; size = $4;}
        if ( size == 0 ) next;
        ("basename " $1) | getline dev;
        print 0, size, "linear", base, start | ("dmsetup create " dev); }' base=$NAME
```

This script I placed within /etc/init.d/

dmraidmapper script

```
#!/sbin/runscript

depend() {
        need modules
}

start() {
        ebegin "Initializing software mapped RAID devices"
        # dm-mapper.sh only looks at its first argument,
        # so call it once per devmap file
        for MAP in /etc/dmmaps/*.devmap
        do
                /etc/dmmaps/dm-mapper.sh $MAP
        done
        eend $? "Error initializing software mapped RAID devices"
}

stop() {
        ebegin "Removing software mapped RAID devices"
        dmsetup remove_all
        eend $? "Failed to remove software mapped RAID devices."
}
```

also you can place a text file called device.devmap (or whatever you want to call it) within the /etc/dmmaps directory

containing a raidmap

e.g. I have one called via_rd1.devmap that contains

```
0 398283480 mirror core 2 128 nosync 2 /dev/sda 0 /dev/sdb 0
```

by calling

```
cd /etc/dmmaps
./dm-mapper.sh via_rd1.devmap
```

this will setup the array and the partitions within /dev/mapper

dm-mapper.sh will by default automatically call partition-mapper.sh

partition-mapper.sh takes one parameter as input which is the block device of the raid array

e.g.

```
partition-mapper.sh /dev/mapper/via_rd1
```

this will create the device nodes for the individual partitions automatically

starting dmraidmapper as a service manually

```
/etc/init.d/dmraidmapper start
```

or adding it to your default or boot run level, will make it hunt around for any raid maps called *.devmap within the /etc/dmmaps directory and set them up automatically

although please note that if your root filesystem is on the array you'll probably need to setup a manual initrd that contains these scripts / devmaps and sfdisk to make the root filesystem available for boot

7.0 Performance

One interesting thing I've also been looking into is the performance of the different methods of accessing the disks

to see which is the fastest

zcav is a part of the bonnie++ toolset and reads 100Mb at a time from the block device and outputs the K/s per 100Mb of data

Note / disclaimer: these are not precise benchmarks; also for Raid 0 I've always used a stripe size of 64K

better results in certain circumstances may be obtained with different stripe sizes

Also, measuring the performance in this way is at a disk level, not at a particular software level

using the form

```
zcav -c3 /dev/<input device> >output_result.dat
```

I then plotted the data onto a graph using gnuplot

the steps in the graph appear to represent the different zones on the disk

from the way that I interpret this (I could be wrong)

the steps on the graph appear to indicate that there shouldn't be a bottleneck between the disk and zcav

a flat line may be an indication of a bottleneck in the Raid implementation or the SATA raid controller on the motherboard

This gave some very interesting results

Accessing a single disk from the Promise or Via chip set gives the same result - full speed of the disk used as it zones down

when accessing both disks in combination for Raid 0 via Device Mapper, the Promise controller appears to bottleneck at around 95Mb/s as a straight line

while the via controller appears to use the full capacity of the disks starting at 120Mb/s and slowly zoning down

Raid 1 via Device Mapper follows the performance of a single disk almost identically

I'm wondering if in the future this may improve, if there is an option for the data to be written to both disks at the same time but is read from different disks in a stripe fashion to improve read performance

Software Raid 1 appears to follow the zoning of the disk as a fuzzy line, just under the performance of a single disk

(in other words it's probably around 3Kb/s slower than using Device mapper, which isn't that much of a difference)

For Software Raid 0 compared to Device Mapper Raid 0 there appears to be a large difference (at least for 64K stripe)

Device Mapper appears to be around 30Mb/s better off at Raid 0 starting at the beginning of the disk

while software Raid 0 appears to flat line further down the graph

For the win XP results I used diskspeed32 to get the raw data, although this reads 10Mb at a time instead of 100Mb

Also the X axis (disk position) appeared to be to a different scale for diskspeed32, so I had to write a small C program to multiply the X axis by a certain factor to get the graphs to match up. Considering that the performance of a single disk under XP and Linux appears to match, I believe I've got this right.

I've included a picture of the graph and the raw data if anyone's interested

Graph

Raw Data

for gnuplot I just edited the gp.txt file to include / exclude different results

and used

```
load "gp.txt"
```

within gnuplot to load up / display the graph

Next I'm going to see if I can get grub to work properly, along with setting up an initrd

and to compare the bootup times for Raid 0 / Raid 1 as I'm setting the array up for final use  :Very Happy:

Last edited by garlicbread on Sun Jan 23, 2005 8:56 pm; edited 1 time in total

----------

## anderlin

Thank you very much for this extensive post! 

I have VIA VT8237 and Fast Track Promise 20378 myself, and would very much like to dualboot with raid 0.

I have some difficulties drawing conclusions, however. Maybe your post was too theoretical for me. Therefore I would like to ask some questions:

1. Is it working? At one point you wrote about a third disk - did you have to use this in the end? Is it necessary for the setup?

2. I didn't understand all of your plots. Is it best to use VIA or PROMISE? I understood I have to do more manually with VIA, but it gives greater performance, right?

3. How about grub? Have you got that working?

Again thank you for this great post!

regards, Anders Båtstrand

----------

## anderlin

I have now figured out everything except grub. Any progress there?

I use 64bit, so I can not use lilo.

----------

## garlicbread

I wrote the howto for users whereby dmraid wouldn't work and would have to use dmsetup to manually setup the array

VIA is faster than promise at the moment on the Asus board

with earlier versions of dmraid it would work with promise but not with Via

but with the newer version of dmraid this appears to now support the Via chipset as well (5f)

http://people.redhat.com/~heinzm/sw/dmraid/src/

ebuild here

https://bugs.gentoo.org/show_bug.cgi?id=63041

I've got the drives working, in that I can copy data to and from them in either Raid 0 or Raid 1, and they are still valid / recognisable using Win Raid as well

Also I was able to get grub to recognise the raid array in the boot menu

But I'm still booting off a temporary 3rd IDE disk at the moment

I need to setup a custom initrd in order to get the thing to boot and document how I did it as well (still haven't got around to it yet)

as the initrd will need to call either dmsetup or dmraid to setup the array

prior to accessing any of the partitions on the disk

Also I've heard rumours that the 5f version of dmraid had problems mapping out the 6th partition node and beyond, so we may still need to manually map the partitions out during the initrd phase

but at least if it maps the main drive node correctly this solves a whole heap of messing about

For grub a couple of things to note

1. while within linux, grub will see the disks via the Linux OS so will see the disks independently i.e. as separate disks

2. when at the boot menu (before the OS has loaded) grub will see the entire Raid array as a single disk, as it's looking through the Bios instead (something to bear in mind when setting up grub.conf)

The idea is that since grub can read via the array using the Bios at bootup, it should be able to read off the kernel and initrd image files from the disk and into memory

Once the kernel is then loaded into mem it will then call an initial script within the initrd

At this point the kernel cannot see the array, only the separate disks (the linux kernel doesn't use the Bios to access hard disks), so it only has access to the data within the initrd

so the next step for the initrd is to run dmraid or dmsetup (which needs to be included in the initrd archive) to create the device nodes for the Array before booting to the root partition

----------

## benoitc

I read the tutorial but can't figure out how to set up raid 1 exactly.

I have two hds, /dev/sda and /dev/sdb, each the same size: 241254720 sectors.

Partition table is :

```

/dev/sda1 /boot ext2 defaults 0 1

/dev/sda2 swap swap defaults 0 0

/dev/sda3 / ext3 defaults 0 1

/dev/sda4 /home ext3 defaults 0 1

```

and the raid 1 array was created by the bios.

So how do I use dmsetup to create the raid 1 device? And how do I have this device loaded each time I boot linux, and boot from it? Any more help would be appreciated, thanks in advance  :Smile: 

----------

## garlicbread

If you don't have dual booting with windows to worry about

then it may be easier just to use software Raid 1 using the md driver

as the performance for device mapper compared to software raid 1 is fairly close (although there is a fair difference for Raid 0)

Are you using the Raid support builtin to the motherboard via the Bios?

if so what chipset is it?

have you tried dmraid? (as that's a lot easier to setup)

the above is meant for those situations where dmraid doesn't work

although even if you use dmraid or dmsetup, more than likely you'll need to set up a custom initrd to boot off the array, and that's something I'm still working on

----------

## dalamar

Maybe only a stupid question ...

 *Quote:*   

> for 64K stripe size 64000 / 512 = 125 sectors for each stripe on the disk 

 

Why not this instead:

for 64K stripe size 65536 / 512 = 128 sectors for each stripe on the disk

Dalamar

----------

## anderlin

Are some of you trying with 64bit? I'm starting to think my problems are related to that...

----------

## anderlin

I now have a working initrd, and I boot windows x64 xp and gentoo amd64 from the same raid0 array!

Previously I made the initrd on a 32bit machine, but after some changes I could make it on the 64bit machine, and now it works. I will post back my settings as soon as I get time (later today or tomorrow).

Regards, Anders Båtstrand

----------

## garlicbread

 *dalamar wrote:*   

> Maybe only a stupid question ...
> 
>  *Quote:*   for 64K stripe size 64000 / 512 = 125 sectors for each stripe on the disk  
> 
> Why don't ?
> ...

 

That's a good question

I think I originally got the figure from another thread on the forum

but looking at it now I'm starting to wonder

I think I'll try writing a bit of code to compare the chunks to see how many sectors are being used per stripe chunk

----------

## anderlin

This is what I did:

I installed Windows on the second partition, and made the other partitions from within Windows. I left some free space at the end, to be sure the raid meta data didn't get overwritten.

Then I booted from a livecd (2004.1 is the only one that works for me), and got the size of the disks with the following:

```

# blockdev --getsize /dev/hde

234441648

```

This is the number of sectors on my disk. Change /dev/hde to your device. Both my disks are the same size. Be aware that the livecd and the installed kernel often give the same disks different names. For me it was hde and hdg with the livecd, and sda and sdb with the installed kernel.

Then I used dmsetup to map the disk to /dev/mapper/hdd:

```

# echo "0 468883296 striped 2 128 /dev/hde 0 /dev/hdg 0" | dmsetup create hdd

```

Change 468883296 to 2 times the number you got from blockdev. Change 128 according to your array's stripe size (2 sectors per K, so a 64K stripe = 128 sectors).
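As a sanity check, the table line above can be rebuilt from the blockdev output (a sketch using the numbers from this post; swap in your own sector count and stripe size):

```shell
# rebuild the dmsetup "striped" table line from the numbers above
disk_sectors=234441648                 # from: blockdev --getsize /dev/hde
chunk_sectors=$((64 * 1024 / 512))     # 64K stripe size -> 128 sectors
total_sectors=$((disk_sectors * 2))    # two disks in the stripe set
echo "0 ${total_sectors} striped 2 ${chunk_sectors} /dev/hde 0 /dev/hdg 0"
```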

Then I read the partition table from /dev/mapper/hdd:

```

# sfdisk -l -uS /dev/mapper/hdd 

[...]

   Device Boot    Start       End   #sectors  Id  System

/dev/mapper/hdd1   *        63    208844     208782  83  Linux

/dev/mapper/hdd2        208845  51407999   51199155   7  HPFS/NTFS

/dev/mapper/hdd3      51408000 468873089  417465090   f  W95 Ext'd (LBA)

/dev/mapper/hdd4             0         -          0   0  Empty

/dev/mapper/hdd5      51408063  53464319    2056257  82  Linux swap / Solaris

/dev/mapper/hdd6      53464383  84196664   30732282  83  Linux

/dev/mapper/hdd7      84196728 468824894  384628167  83  Linux

```

Change the following commands to match your table:

```

echo "0 208782 linear /dev/mapper/hdd 63" | dmsetup create boot

echo "0 2056257 linear /dev/mapper/hdd 51408063" | dmsetup create swap

echo "0 30732282 linear /dev/mapper/hdd 53464383" | dmsetup create root

echo "0 384628167 linear /dev/mapper/hdd 84196728" | dmsetup create media

echo "0 51199155 linear /dev/mapper/hdd 208845" | dmsetup create windows

```

Then you can install Gentoo the usual way:

```

# mkreiserfs /dev/mapper/root

# mkfs.ext3 /dev/mapper/boot

# mkswap /dev/mapper/swap

# swapon /dev/mapper/swap

# mount /dev/mapper/root /mnt/gentoo

# mkdir /mnt/gentoo/boot

# mount /dev/mapper/boot /mnt/gentoo/boot

[ ... continue as normal ... ]

```

Remember to compile into the kernel support for your sata controller, device-mapper, ramdisk, initrd and ext2. Here is my .config

Then install grub:

```

# grub --device-map=/dev/null

grub> device (hd0,0) /dev/mapper/boot

grub> device (hd0) /dev/mapper/hdd

grub> root (hd0,0)

grub> setup (hd0,0)

grub> quit

```

If this doesn't work, try with a more recent version of grub. I had to use sys-boot/grub-0.95.20040823.

Then download the following files, which are modified versions of the ones made by Gerte Hoogewerf:

http://anderlin.dyndns.org/filer/mkinitrd

http://anderlin.dyndns.org/filer/linuxrc

Change the following lines in linuxrc to suit your needs:

```

echo "0 468883296 striped 2 128 /dev/sda 0 /dev/sdb 0" | dmsetup create hdd

echo "0 208782 linear /dev/mapper/hdd 63" | dmsetup create boot

echo "0 2056257 linear /dev/mapper/hdd 51408063" | dmsetup create swap

echo "0 30732282 linear /dev/mapper/hdd 53464383" | dmsetup create root

echo "0 384628167 linear /dev/mapper/hdd 84196728" | dmsetup create media

echo "0 51199155 linear /dev/mapper/hdd 208845" | dmsetup create windows

```

Then install busybox (I used busybox-1.00-r1)  and make the initrd:

```

# USE="static" emerge busybox

# chmod +x mkinitrd

# ./mkinitrd linuxrc initrd

# cp -v linuxrc initrd /boot/

```

This is my grub.conf:

```

timeout 10

default 0

title GNU/Linux

root (hd0,0)

kernel /kernel-2.6.9-gentoo-r14 root=/dev/ram0 real_root=/dev/mapper/root init=/linuxrc

initrd /initrd

title Windows

rootnoverify (hd0,1)

chainloader +1

```

Then it worked for me.

(sorry for any bad grammar, and bad layout)

----------

## garlicbread

I've spotted a couple of things recently

1. I've checked and the number of sectors within a 64K chunk is 128 (not 125)

i.e. a chunk is 65536 bytes long

thinking about it, the option we pass to dmsetup is probably the number of sectors

and there are 2 sectors for each Kb

2. using dmraid (version 5f) in combination with the Via chipset for Raid 0

dmraid appears to always use a chunk size of 32K

so if the chunk size is set to 64K or 16K when creating the array within the Bios, it won't be mapped properly if using dmraid, but should be okay using dmsetup (as you're manually specifying the chunk size)

But at least we can use it to gain the sector length of the whole array

3. dmraid is okay mapping the 4 primary partitions

and or an extended partition

but not for mapping partitions within an extended partition (they just don't show up)
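The chunk arithmetic in point 1 is easy to verify: a sector is 512 bytes, so each K of chunk size is 2 sectors:

```shell
# sectors per chunk for common stripe sizes (1 K = 2 x 512-byte sectors)
for kib in 16 32 64; do
   echo "${kib}K chunk = $((kib * 2)) sectors"
done
```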

----------

## Deep-VI

Wonderful information - I had been trying to get this to work on my own using other resources, but then I found this and similar threads and really made some progress.

I have the Asus IC-7 which uses the ICH5 chipset.  I configured the onboard RAID to create RAID-0 across 2 80G drives.

My partition scheme is:

40G WinXP

64M BOOT

2G SWAP

(extended partition begins here)

10G ROOT

40G HOME

XXX unpartitioned for future use

Except for the partition scheme, I followed the steps outlined by anderlin closely.  I, like a lot of others, cannot use dmraid because it does not properly map some extended partitions.  Using dmsetup manually, I can properly map all partitions on the array.

One thing happened to me that I've seen others complain about.  When specifying the root partition inside GRUB, I had to use the geometry command to tell it how big my array was.  I used (total array sectors / 255 / 63).  After that, it had no complaints and could see all the mapped partitions.
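For reference, that cylinder count can be computed like this (a sketch assuming the usual 255-head / 63-sectors-per-track BIOS translation, using the array size from anderlin's post):

```shell
# cylinders for grub's `geometry` command: total sectors / heads / sectors-per-track
total_sectors=468883296
cylinders=$((total_sectors / 255 / 63))
echo "geometry (hd0) ${cylinders} 255 63"
```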

I can successfully boot XP.  However, I am fighting an init script issue with Gentoo.  The initrd runs through successfully and creates all the needed /dev/mapper devices (verified by modifying the linuxrc script to 'ls' them after creation), but they poof sometime after real_root is mounted and Gentoo starts up.  I'm running pure udev, with no /dev tarball.

I saw 1 or 2 other people having the same problem in other threads, but no solutions posted.  I got to this point late last night and had to quit to sleep.  It's been very very educational - I knew nothing about initrd scripts, dev-mapper, or RAID metadata before this.  Now I guess it's time to probe the Gentoo boot scripts!

----------

## rone69

Ciao,

is it possible to share the same disks in raid0 between 64bit Linux & XP (I need winxp for my job) without data loss??

excuse my bad english

thx

----------

## garlicbread

 *Quote:*   

> One thing happened to me that I've seen others complain about.  When specifying the root partition inside GRUB, I had to use the geometry command to tell it how big my array was.  I used (total array sectors / 255 / 63).  After that, it had no complaints and could see all the mapped partitions.

 

That's interesting

I'm guessing I've not seen that before as I usually just edit the grub.conf file directly

 *Quote:*   

> I can successfully boot XP.  However, I am fighting an init script issue with Gentoo.  The initrd runs through successfully and creates all the needed /dev/mapper devices (verified by modifying the linuxrc script to 'ls' them after creation), but they poof sometime after real_root is mounted and Gentoo starts up.  I'm running pure udev, with no /dev tarball.
> 
> I saw 1 or 2 other people having the same problem in other threads, but no solutions posted.  I got to this point late last night and had to quit to sleep.  It's been very very educational - I knew nothing about initrd scripts, dev-mapper, or RAID metadata before this.  Now I guess it's time to probe the Gentoo boot scripts!

 

I think the default configuration for udev under gentoo doesn't map out the device mapper nodes properly within /dev/mapper. There are similar problems getting the EVMS or LVM device nodes (which also use device mapper) to appear in the correct place with pure udev at bootup

I'm not sure how your initrd was created, but I think some initrd's get around the problem by just having the device nodes mapped manually using mknod when they were created initially

there's a howto here that relates to EVMS / LVM that might be relevant

https://forums.gentoo.org/viewtopic.php?t=263996&highlight=

in your case if you're not using LVM or EVMS then

you may need to use a rule such as

KERNEL="dm-[0-9]*", PROGRAM="/etc/udev/scripts/dmmapper.sh %M %m", NAME="mapper/%c{1}"

within /etc/udev/rules.d/10-local.rules

although you'll need to setup the dmmapper.sh script (see howto)

and install multipath tools for the devmap_name binary

a simpler way without multipath tools or the script is to use

KERNEL="dm-[0-9]*",     NAME="mapper/%k"

But all the device names under mapper will be called dm0 through to dm9 (instead of their actual names)

----------

## Deep-VI

Very nice, garlic - I had suspected udev in the back of my mind, and thank you for nudging me in the right direction.  It allowed me to save time and narrow down my search for answers.

Since this might help others:

In /etc/udev/rules.d/50-udev.rules, I commented out the KERNEL="dm-[0-9]*", NAME="" line and uncommented the similar line below it that calls /sbin/devmap_name...

Next, I downloaded and installed the latest multipath tools from http://christophe.varoqui.free.fr/multipath-tools/ (this package wouldn't compile without first emerging device-mapper for the libdevmapper library).

The next reboot went without a problem.  In /dev, the device mapper devices are showing up as dm-0 through 5, and in /dev/mapper are the custom devices that my initrd creates (boot, home, swap, etc).  Both have the same majors and minors.

It was a fun few days of troubleshooting and learning and the boost in speed is definitely worth it!

----------

## garlicbread

There's also an ebuild for multipath-tools located here for info

https://bugs.gentoo.org/show_bug.cgi?id=71703

I think this has the correct depends to get it to compile correctly, but you'll have to use portage overlay to use it

I'm eager to get this working with Raid 0 now that I've got a couple of those 74Gb 10K super duper Raptor drives (180MB/s according to HDTach in Win O_o)

Although I'm still getting some freaky results trying to use them with dmraid (although I think I know why). Also I've nearly finished a conversion bash script that should make it easier to create initrd / initramfs archives from an existing directory containing files

----------

## anestis

I downloaded the livecd with the dmraid support from http://tienstra4.flatnet.tudelft.nl/~gerte/gen2dmraid/ but I'm having some problems:

Hello, 

I'm new to linux and I'm trying to set up gentoo on my 2X120GB Intel ICH5-R system (RAID-0)

I wanted to install kernel 2.6 so I wanted to go with the dmraid instead of the other guides that I found in the gentoo forum for the kernel 2.4

My two RAID-0 drives have several partitions. The first one is my windows partition (NTFS). I have made a second unformatted partition which I want to install gentoo onto...

So here's what I tried  

```
livecd root # dmraid -ay 
```

(dmraid worked straight away, I didn't have to modprobe any modules?)

```
livecd root # mount /dev/mapper/isw_bfbgjhhedb_RAID_Volume1

isw_bfbgjhhedb_RAID_Volume1   isw_bfbgjhhedb_RAID_Volume11 
```

(see here I can see only two entries. However I don't understand the numbering system... Volume1 seems to be the whole raid volume and Volume11 the first partition? If that's the case why don't I see the rest?)

```
livecd root # mount /dev/mapper/isw_bfbgjhhedb_RAID_Volume1 /mnt/data

mount: /dev/mapper/isw_bfbgjhhedb_RAID_Volume1 already mounted or /mnt/data busy

mount: according to mtab, /dev/mapper/isw_bfbgjhhedb_RAID_Volume11 is already mounted on /mnt/data
```

(tried to mount Volume1 but it didn't work)

```
livecd root # mount /dev/mapper/isw_bfbgjhhedb_RAID_Volume11 /mnt/data

 

livecd root # dir /mnt/data

1             CONFIG.SYS                FlexLM     NTDETECT.COM    System\ Volume\ Information  boot.ini      

hiberfil.sys

AUTOEXEC.BAT  Config.Msi                Games      NtfsUdel.log    UsageTrack.txt               boot.lgb      log.txt

BJPrinter     Default.wallpaper         IO.SYS     Program\ Files  WINDOWS                      bootbak.bat   ntldr

BOOT.BKK      Documents\ and\ Settings  MSDOS.SYS  RECYCLER        boot.bak                     gendel32.exe  

pagefile.sys
```

(Volume11 mounted ok. It's my windows partition)

```
livecd root # fdisk -l /dev/mapper/isw_bfbgjhhedb_RAID_Volume1

omitting empty partition (5)

 

Disk /dev/mapper/isw_bfbgjhhedb_RAID_Volume1: 247.0 GB, 247044243456 bytes

255 heads, 63 sectors/track, 30034 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

                                   Device Boot      Start         End      Blocks   Id  System

/dev/mapper/isw_bfbgjhhedb_RAID_Volume1p1   *           1        4716    37881238+   7  HPFS/NTFS

/dev/mapper/isw_bfbgjhhedb_RAID_Volume1p4            4717       30034   203366835    f  W95 Ext'd (LBA)

/dev/mapper/isw_bfbgjhhedb_RAID_Volume1p5            6065       12049    48074481    7  HPFS/NTFS

/dev/mapper/isw_bfbgjhhedb_RAID_Volume1p6           12050       30034   144464481    7  HPFS/NTFS

 

livecd root # mount /dev/mapper/isw_bfbgjhhedb_RAID_Volume1p6 /mnt/test1/

mount: special device /dev/mapper/isw_bfbgjhhedb_RAID_Volume1p6 does not exist 
```

(how on earth do you access for example partition5)

Any help is greatly appreciated! I just need to understand how to format the partition I want to install gentoo onto with reiser4. From that point on I can keep up with the gentoo docs.

Thanks,

Anestis

----------

## garlicbread

dmraid can setup the main device node that represents the entire disk

and the first 4 primary partitions

but at the moment it has trouble with any partitions located inside an extended partition

so you'll need to map some of the partition nodes manually

the partition-mapper.sh script does this automatically, but relies on sfdisk (I'm not sure if sfdisk is included on the livecd)

it's probably best to add the -p option to dmraid if using partition-mapper.sh, to prevent dmraid from creating any partition nodes at the beginning, and to let partition-mapper.sh create them all

to do it manually instead

fdisk -lu /dev/mapper/isw_bfbgjhhedb_RAID_Volume1

or

sfdisk -l -uS /dev/mapper/isw_bfbgjhhedb_RAID_Volume1

should give a list of partitions for the device using sector boundaries (-u option) which is more accurate 

using the start and end sector numbers for each partition

this can be fed into dmsetup using a linear map to create the partition node (eg /dev/mapper/isw_bfbgjhhedb_RAID_Volume1p5)

see anderlin's post above on how to do this
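As a concrete sketch of that linear mapping (the start/size numbers here are just placeholders borrowed from anderlin's swap partition above; substitute the Start and #sectors columns sfdisk reports for your own partition):

```shell
# build a dmsetup "linear" table line: <logical start> <length> linear <base device> <offset>
start=51408063     # partition start sector from sfdisk -l -uS
size=2056257       # partition length in sectors
table="0 ${size} linear /dev/mapper/isw_bfbgjhhedb_RAID_Volume1 ${start}"
echo "${table}"
# then: echo "${table}" | dmsetup create isw_bfbgjhhedb_RAID_Volume1p5
```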

----------

## flipy

 *Quote:*   

> Then download the following files, which are modified versions of the ones made by Gerte Hoogewerf:
> 
> http://anderlin.dyndns.org/filer/mkinitrd
> 
> http://anderlin.dyndns.org/filer/linuxrc

 

I cannot find them... so I'm trying to figure out what to change...

can you post another URL or comment what to modify?

Thanks

edit

thanks again  :Very Happy: 

----------

## flipy

 *Deep-VI wrote:*   

> Very nice, garlic - I had suspected udev in the back of my mind and thank you for nudging me in the right direction.  I allowed me to save time and narrow down my search for answers.
> 
> Since this might help others:
> 
> In /etc/udev/rules/50-udev.rules, I commented out the KERNEL="dm-[0-9]*", NAME="" line in  and uncommented the similar line below it that calls /sbin/devmap_name...
> ...

 

did you do anything more in particular? (scripts,devmaps...)

edit

i've installed almost everything, but still it's not working:

something is wrong with dmraidmapper in the depend() clause

whenever i boot, i get something like "cannot check the fs", and it gives me a shell. if i run the mount command, i can see that everything is mounted, but if i do an ls /dev/mapper only control shows up, and the /dev/dm-X nodes are mapped (and already mounted though)

i've followed this how-to and the steps that anderlin posted, installed udev, configured /etc/conf.d/rc...

so, my system boots ok until it has to check the root fs, then it crashes with check error

so i'm a little bit lost, trying to figure out what's going wrong...

edit2

it seems that everything is mapped in the /dev directory... but i still don't see how to solve it

----------

## Ravilj

* Erm * made a mistake...

----------

## garlicbread

 *flipy wrote:*   

> 
> 
> It was a fun few days of troubleshooting and learning and the boost in speed is definitely worth it!
> 
> did you do anything more in particular? (scripts,devmaps...)
> ...

 

Not sure of your exact setup

By default the udev setup will not map device mapper entries at all

by enabling the rule within /etc/udev/rules.d/50-udev.rules you'll get basic support of /dev/dm-0 /dev/dm-1 etc

but this makes it impossible to tell which dm entry relates to what (evms / dmraid / dmsetup etc)

to get the entries to show up the same way as they do with devfs e.g. /dev/mapper/ or /dev/evms/

there's a little more work to do:

https://forums.gentoo.org/viewtopic-t-263996-highlight-.html

Also it's important to remember, when using an initrd or initramfs to boot the system, that changes or additions made to udev relating to the boot device need to be made in 2 places

first on the root filesystem and second within the initrd / initramfs image

I've recently been messing with initramfs images and evms / udev / device mapper raid etc

Some initrd scripts get around the problem by using mknod to create the device nodes, but I've found a better way using udevstart, although this means that udev has to be configured properly in the initramfs image as well

I've recently got a bootable initramfs image working with pure udev / evms / run-init (from a klibc ebuild I'm working on)

I've nearly finished writing another howto for all of this, but I need to:

a. finish off a script for creating initrd / initramfs images.

b. finish off the ebuild for klibc 

c. I need to test that I can now boot to a device-mapper based raid set

----------

## m@cCo

Good, maybe i should get my raid0 to be seen by linux with this method, very very good  :Very Happy: 

But i have a question, well two in fact...

First, is dmsetup available in the live installation cd?

If the answer is yes i could try the totally manual method to set up my disks.

Otherwise here's the second question for you  :Very Happy: 

I have the dmraid sources on the livecd, but i have to compile them according to my kernel version (i have nothing installed yet).

Unfortunately when i try to compile it in the livecd i get a "cannot create executables" error from gcc.

I tried to edit /etc/make.conf adding the proper (i think) variables for an Athlon64 processor.

CHOST="what the manual says to be here"  :Very Happy: 

CFLAGS="-march=athlon64 -pipe -O2"

CXXFLAGS="${CFLAGS}" (i don't remember the exact syntax, sorry).

Anyway i'm referring to the installation guide so i hope i was right.

I've read that somebody has had the same problem but after having the system installed.

I try to run gcc-config but it doesn't exist.

Could you help me in some way?

Thanks a lot.

----------

## garlicbread

I think (or suspect) that dmsetup does probably exist on the Live CD although I'm not entirely certain

(I know the LVM tools exist and they need device-mapper so I'm guessing that it should be on there)

it sounds like what you're trying to do is to compile dmraid while in the Live CD

but I'm not sure that's possible

usually, adding stuff into the Live CD environment involves manually creating a custom LiveCD with the tools on

one way is to use catalyst

and another way is to do it manually

https://forums.gentoo.org/viewtopic-t-244837-highlight-livecd+creation.html seems to have more info on this

This is one of those things I've not looked into properly myself yet (but will need to eventually to setup a rescue CD in the event the system becomes unbootable for any reason)

on a side note for anyone with a VIA chipset

I've recently found that the rc5f version of dmraid did not in fact support the Via chipset

what was happening was it was picking up the pdc metadata written from a different controller on the same motherboard (I was swapping the drives around experimenting)

and the Via metadata was being overlayed on top of the pdc metadata which sort of made it work on the Via chipset with dmraid (weird)

this is why my newer 74Gb drives were not being picked up (since I'd never set them up on the pdc controller)  :Embarassed: 

saying that, there's a new version of dmraid out rc6 which does appear to now support via chipsets  :Very Happy: 

https://bugs.gentoo.org/show_bug.cgi?id=63041

Edit

it looks like for Via / Raid 0 dmraid will only use a cluster size of 16K (32 sectors)

but on the plus side it does now appear to map out the extended partitions properly

Another Edit

it looks as if there is still a problem with the Promise Raid 1 array being reported half length

Last edited by garlicbread on Thu Mar 10, 2005 12:09 am; edited 2 times in total

----------

## m@cCo

So do i have to create my own livecd?

Or can i load device-mapper and use dmsetup in some other way?

Thanks again

----------

## m@cCo

Nobody?

----------

## Phk

Errrrm.... I'm having problems booting from the RAID0 partition.. 

And i don't have a clue on how to fix it...  :Crying or Very sad: 

Please take a look: here

Thanks rightaway...

----------

## garlicbread

 *m@cCo wrote:*   

> Nobody?

 

Method 1 use dmsetup on the live CD to access the raid array

this is difficult as this is the "manual" method, you'll need to figure out what maps to pass to dmsetup prior to using it

Method 2 create your own live CD with dmraid on using catalyst (assuming dmraid recognises your array)

no idea how to do this at this point

For myself, I have a few spare IDE drives knocking about so I've installed an initial system onto one of these (not Raid'd) and I'm planning on moving everything across from this spare disk to the final Raid array once it's sorted

I've nearly finished another Howto for evms / udev / dmraid on an initramfs image (using genkernel as an initial step). So I'll probably be looking into how to use catalyst next to create an emergency boot / Live CD

----------

## Gruffi

Hello garlicbread,

I have been trying to get raid0 to work on my Asus A8V Deluxe.

When i boot the gen2raid cd i can see my windows striped partition with no problem, however the cd does not support chrooting from an amd64 cpu.

When i boot from an IDE harddisk and load the same modules the gen2raid cd loads, mount says it is not a valid partition.

I think i misconfigured something in the kernel or i'm loading the wrong modules.

Would you post your .config please?

Thanks  :Very Happy: 

Gruffi Gummi

----------

## garlicbread

 *Gruffi wrote:*   

> Hello garlicbread,
> 
> I have been trying to get raid0 to work on my Asus A8V Deluxe.
> 
> When i boot the gen2raid cd i can see my windows striped partition with no problem, however the cd does not support chrooting from an amd64 cpu.
> ...

 

I've not used gen2raid yet

but I think the relevant section in the .config for the kernel is

```
#

# Multi-device support (RAID and LVM)

#

CONFIG_MD=y

CONFIG_BLK_DEV_MD=y

CONFIG_MD_LINEAR=y

CONFIG_MD_RAID0=y

CONFIG_MD_RAID1=y

CONFIG_MD_RAID10=m

CONFIG_MD_RAID5=y

# CONFIG_MD_RAID6 is not set

CONFIG_MD_MULTIPATH=y

CONFIG_MD_FAULTY=m

CONFIG_BLK_DEV_DM=y

CONFIG_DM_CRYPT=y

CONFIG_DM_SNAPSHOT=y

CONFIG_DM_MIRROR=y

CONFIG_DM_ZERO=y

# CONFIG_DM_MULTIPATH is not set

CONFIG_BLK_DEV_DM_BBR=m

# CONFIG_DM_FLAKEY is not set
```

or within "make menuconfig"

Device Drivers -> Multi-device support (RAID and LVM) -> <options here>

----------

## Erlend

I think partition-mapper.sh script might be slightly broken.

This line is giving the problem:

```

 print 0, size, "linear", base, start | ("dmsetup create " dev);

```

Not sure exactly why though, as I'm not great with sh or awk.

Erlend

----------

## garlicbread

 *Erlend wrote:*   

> I think partition-mapper.sh script might be slightly broken.
> 
> This line is giving the problem:
> 
> ```
> ...

 

One option is to grab kpartx from multipath-tools ebuild on bugzilla

kpartx -l /dev/mapper/<drive> will list the partitions it will create

kpartx -a /dev/mapper/<drive> will add them

kpartx -d /dev/mapper/<drive> will remove them

I've recently done a re-write of partition-mapper.sh so that it behaves just like kpartx

bearing in mind it uses sfdisk / bash / awk / dmsetup, so make sure these are installed

this is the new version

EDIT

this should now work without having to specify the full path to the /dev/mapper/<node>

e.g. cd /dev/mapper/

partition-mapper.sh -a pdc_rd1_gbon

should now work as well

EDIT2

New busybox friendly version

```

#!/bin/sh

# This script emulates the behavior of kpartx using sfdisk / dmsetup

# Richard Westwell <garlicbread@ntlworld.com>

SFDISK_CMD="/sbin/sfdisk"

DMSETUP_CMD="/sbin/dmsetup"

ID='$Id: partition_mapper,v 1.0 2005/20/01 00:00:00 genone Exp $'

VERSION=0.`echo ${ID} | cut -d\  -f3`

PROG=`basename ${0}`

verb="0"

mode=""

delimiter=""

map_sfdisk() {

   process_list | while read line; do

      eval "local ${line}"

      sfdev_num=${sfdev#${sfdev_base}}

      fulldevnode="${sfdev_base}${delimiter}${sfdev_num}"

      if [ "${mode}" = "list" ];then

         echo "${fulldevnode} : 0 ${sfsize} ${device_node} ${sfstart}"

      elif [ "${mode}" = "delete" ];then

         [ "${verb}" -gt "0" ] && echo "del devmap : ${fulldevnode}"

         ${DMSETUP_CMD} remove "${fulldevnode}"

      elif [ "${mode}" = "add" ];then

         [ "${verb}" -gt "0" ] && echo "add map ${fulldevnode} : 0 ${sfsize} linear ${device_node} ${sfstart}"

         echo "0 ${sfsize} linear ${device_node} ${sfstart}" | "${DMSETUP_CMD}" create ${fulldevnode}

      fi

   done

}

process_list() {

# put together a list of variables to export for each partition

base_dev_name=`basename ${device_node}`

${SFDISK_CMD} -l -uS ${device_node} 2>/dev/null | awk '/^\// {   

   if ( $2 == "*" ) {start = $3;size = $5;}

   else {start = $2;size = $4;}

   if ( size == 0 ) next;

   ("basename "  $1) | getline dev;

   print \

   "sfstart=\""start"\";", \

   "sfsize=\""size"\";", \

   "sfdev=\""dev"\";" \

   "sfdev_base=\""base_dev_name"\";" \

   }' base_dev_name=${base_dev_name}

}

usage() {

   echo "${PROG} v. ${VERSION}

usage : ${PROG} [-a|-d|-l] [-v] wholedisk

        -a add partition devmappings

        -d del partition devmappings

        -l list partitions devmappings that would be added by -a

        -p set device name-partition number delimiter

        -v verbose"

   exit 1

}

###########

#Parse Args

###########

params=${#}

while [ ${#} -gt 0 ]

do

   a=${1}

   shift

   case "${a}" in

   -a)

      mode="add"

      device_node=${1}

      shift

      ;;

   -d)

      mode="delete"

      device_node=${1}

      shift

      ;;

   -l)

      mode="list"

      device_node=${1}

      shift

      ;;

   -p)

      delimiter=${1}

      shift

      ;;

   -v)

      verb=$((verb + 1))

      ;;

   -*)

      echo "${PROG}: Invalid option ${a}" 1>&2

      usage=y

      break

      ;;

   *)

      # Anything else just ignore

      ;;

   esac

done

[ ! -n "${mode}" ] && usage=y

[ ! -n "${device_node}" ] && usage=y

[ "${usage}" ] && usage

# make sure this is the full Absolute path

bas_nm=`basename ${device_node}`

dir_nm=`dirname ${device_node}`

[ "${dir_nm}" = "." ] && dir_nm=`pwd`

device_node="${dir_nm}/${bas_nm}"

if [ ! -b "${device_node}" ];then

   echo "${PROG}: unable to access device: ${device_node}" 1>&2

   exit 1

fi

map_sfdisk

```

I'm also working on re-writing the startup scripts so that settings for dmsetup / dmraid can be pulled from /etc/dmtab which is included with some of the newer device-mapper ebuilds

(although this has involved altering the layout of /etc/dmtab a little bit, but it's not in full use anyway yet)

Last edited by garlicbread on Thu Apr 07, 2005 11:00 pm; edited 2 times in total

----------

## Erlend

That script is great, thanks.  I ran it and it works (I've been wondering for some time now, why does dmraid do a lot of fancy metadata stuff when it is possible to just use a script like this?  Is dmraid supposed to be safer?).

The only thing with your script, it didn't run without editing...

```
map_sfdisk() {

   while read line; do

      eval "local ${line}"

      sfdev_num=${sfdev#${sfdev_base}}

      fulldevnode="${sfdev_base}${delimiter}${sfdev_num}"

      if [ ${mode} = "list" ];then

         echo "${fulldevnode} : 0 ${sfsize} ${device_node} ${sfstart}"

      elif [ ${mode} = "delete" ];then

         [ "${verb}" -gt "0" ] && echo "del devmap : ${fulldevnode}"

         ${DMSETUP_CMD} remove "${fulldevnode}"

      elif [ ${mode} = "add" ];then

         [ "${verb}" -gt "0" ] && echo "add map ${fulldevnode} : 0 ${sfsize} linear ${device_node} ${sfstart}"

         echo "0 ${sfsize} linear ${device_node} ${sfstart}" | ${DMSETUP_CMD} create "${fulldevnode}"

      fi

   done <<<"$(process_list)"

}
```

As you can see I've changed:

```
echo "0 ${sfsize} linear ${sfdev_base} ${sfstart}" | ${DMSETUP_CMD} create "${fulldevnode}" 
```

to

```
echo "0 ${sfsize} linear ${device_node} ${sfstart}" | ${DMSETUP_CMD} create "${fulldevnode}"
```

As I think you need to access the device as /dev/mapper/name rather than just name.
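For reference, the table line this pipes into dmsetup ends up looking something like the following; the sector values here are hypothetical ones of the sort `sfdisk -l -uS` would report:

```
# hypothetical values for one partition, as read from `sfdisk -l -uS`
sfstart=63                               # partition start sector
sfsize=401562                            # partition length in sectors
device_node="/dev/mapper/pdc_raid1_dev"  # the whole-array node
# one linear target: map sectors 0..sfsize-1 onto the array at sfstart
echo "0 ${sfsize} linear ${device_node} ${sfstart}"
```

Piping that output into `dmsetup create <name>` is what actually creates the partition node.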

Cheers,

Erlend

----------

## garlicbread

thanks for the amendment

as far as I know I think dmraid does do some checking of the metadata

e.g. if the Bios has marked a Raid 1 Array as being out of sync then I think dmraid will refuse to activate it

for Raid 0 there is no checking as such (since Raid 0 has no redundancy anyway)

also I think it can work out which drives belong to which arrays

e.g. I have 4 drives that make up 2 arrays

/dev/sda /dev/sdb (Via Raid 0) - mapped via dmraid

/dev/sdc /dev/sdd (PDC Raid 1) - mapped via dmsetup (as dmraid thinks the array is half the size it should be)

if one of the drives in the middle, e.g. sdb, went missing then this would mess up the whole mapping for both Arrays if dmsetup was used, but dmraid should be able to distinguish which drives belong to which arrays by observing the metadata

One thing I plan on doing is using udev to create symbolic links for the individual drives based on the drive serial number, and then to pass the symbolic links to dmsetup for the mapping

that way you could completely re-arrange the drives (the Bios or Windows might not like that, but for Linux it would just auto map the correct drives to the correct links based on the serial no)

the problem is that the serial no for Sata drives is not currently visible under sysfs, so I may have to find a utility that can display the serial number based on the major / minor device number (dmraid has the ability to show this info based on the block device name so I may be able to write a wrapper script to sit in between udev and dmraid or some other util)
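As a sketch of that idea (not tested), a udev rule could call out to a helper that prints the serial number; `ata_serial` here is a hypothetical wrapper script, since no standard utility exposes the SATA serial directly, and the symlink path is just an example:

```
# /etc/udev/rules.d/60-disk-serial.rules  (sketch only)
# `ata_serial` is a hypothetical helper that prints the drive's serial
# number for the given kernel device name; %c is the program's output
KERNEL=="sd[a-z]", PROGRAM="/usr/local/bin/ata_serial %k", SYMLINK+="disk/by-serial/%c"
```

The dmsetup tables could then reference /dev/disk/by-serial/<serial> instead of /dev/sda, so re-ordering the drives wouldn't break the mapping.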

----------

## zpet731

Hi, I posted to another thread earlier, actually two but they say three is lucky. Well actually I just want to see what works best before I start installing.

I currently built a:

AMD64 system 3200+

GA-N8NF-9 motherboard

6600 GT graphics card

1GB RAM

2 SATA 160GB drives

Now, I'm only planning to run gentoo on this system so no dual boot or anything.

I've read quite a bit on SATA raid threads, most of the threads are excellent but I still need a few things answered before I start installing gentoo on it. I'm using a minimal 2005.0 image that I downloaded off the net.

Now if I am to use a Raid 0 setup, what is my best option: do I use the motherboard raid or not? I'm not sure which way is better so hopefully someone can enlighten me on this issue.

Also if I am to use the software raid and control it completely from linux do I need to disable the raid in the bios? My motherboard asks me to set up the array each time I boot up and the sata raid is enabled by default. Can someone explain what needs to be done? Thanks!!!

I would also like to know which one is faster (BIOS or Kernel) and has less strain on the CPU, or is it the same?

----------

## garlicbread

If you're planning to dual boot windows / Linux on the same array then setting up the array via the Bios is the way to go

If this is just for Linux (no Windows) then software Raid is easier to set up (especially if you use evms)

something to bear in mind is that Linux cannot access the drives through the Bios

for Linux only software Raid, you simply turn off the bios raid support and set it up within Linux

for Linux Raid support that co-exists with the Bios setup (which is needed to dual boot) you can either use device mapper to read / write the data to the disks the same way the Bios would do

or use a Linux software Raid setup that has the metadata section turned off, which essentially does the same thing (although this is actually more difficult to set up I think)

in terms of speed

I've found that a Bios raid array accessed via device mapper (dmraid / dmsetup) tends to be a bit faster than software raid

this is the same graph that's linked at the bottom of the first message

this has some results for a pair of Maxtor 200Gb sata drives (haven't got around to benchmarking my 10K raptor drives yet however)

for Raid 1 it looks as if the results are not much different between software / device-mapper

for Raid 0 it looks as if the results are a flat line at around 89MB/s for Software Raid (via) and the max speed of the disk for Device-mapper raid (the stepping on the graph represents the actual zones of the disk, which means the full capacity is being used, ranging from 120MB/s at the start down to 89MB/s)

----------

## Erlend

garlicbread, does your script work on extended partitions?

Is that graph showing read/write speed from beginning to end of drive?  If so, why does Promise Linear Raid 0 64KS 400GB Software not get slower towards the end of the drive?  Oh, and out of curiosity, since you've done benchmarks, what would you expect to get out of 2 Seagate Barracudas with Promise FastTrak raid 0?  I'm getting about 70 MB/s with that, but that is after the first 120GB (or 240GB array).

Thanks,

Erlend

----------

## garlicbread

both kpartx and the above script should map primary and extended partitions; dmraid seems to do this okay as well for the moment

I think the only difference is that while kpartx skips the 5th one (the one that normally represents the entire extended region) the script will map this in as well. Both appear to map any partitions within the extended region without a problem

I've noticed that the Promise controller appears to be slower than the VIA chipset on my own mobo; I think this is due to the Promise connecting through the PCI bus (so I think it depends on how the chipset is connected)

also dmraid has a tendency to map the main drive node for Promise to half the size it should be (which in turn stops the partitions from being mapped properly) so I still have to use dmsetup to map the main drive node for the Promise controller

for Raid 0 on the Promise controller

through device-mapper this can be seen as the turquoise line just above the red one, which starts off at approx 95Mb/s and then trails off at the end (although it's masked by the red line in front) down to about 82Mb/s

I think this means it's bottlenecking at the beginning of the disk (as it's a flat line) due to the PCI bus, and then near the end the combined drive speed is less than the available bus speed which is why it zones down

through Software Raid it seems to flat line at about 69MB/s, I suspect this doesn't zone down at the end as it's already beneath the total drive speed capability all the way through (which is approx 82Mb/s minimum from looking at the previous graph)

----------

## zpet731

Thanks Garlicbread,

I decided to go for software raid as I am only using linux, even though your benchmarks show that for bios raid 0 you get slightly better performance. After I completed everything I was very satisfied with the results and achieved 54-58MB/s for individual hard disks and 106-112MB/s for raid 0. Therefore I'm a very happy gentoo user...  :Razz: 

----------

## movrev

I have read almost the whole thread, but cannot make my mind regarding RAID.

I am dual-booting winXP and gentoo64, so I set up RAID 1 through the BIOS and used the provided drivers for the windows install without a problem (I am booting from this array). I am now ready to install gentoo64, but I don't know what exactly I should use. My array has 1 primary and 1 extended with 6 logical drives. Windows is installed in the last logical drive, and the primary drive is going to be /boot.

At this point in time, is dmraid good enough for what I need to do, or should I go with dmsetup? Also, is there any quick way of setting up an initrd or something that would let me use the array for booting? Also, from what I understand, the method that uses device mapper for RAID 1 reads and writes at the same speed as one hard drive, right? So, technically, the only good thing about this setup is data resiliency (permanent backup) and nothing more. I would love to have it read a part from each disk so as to double read speed as well.

Lamentably, I am starting to think of not using this fake RAID at all  :Sad: ...it seems too hard to maintain, plus it may be unstable.

----------

## flipy

 *movrev wrote:*   

> I have read almost the whole thread, but cannot make my mind regarding RAID.
> 
> I am dual-booting winXP and gentoo64, so I set up RAID 1 through the BIOS and used the provided drivers for the windows install without a problem (I am booting from this array). I am now ready to install gentoo64, but I don't know what exactly I should use. My array has 1 primary and 1 extended with 6 logical drives. Windows is installed in the last logical drive, and the primary drive is going to be /boot.
> 
> At this point in time, is dmraid good enough for what I need to do, or should I go with dmsetup. Also, is there any quick way of setting up an initrd or something that would let me use the array for booting? Also, from what I understand, the method that includes device mapper for RAID 1 does read and write at the same speeds of one hard drive, right? So, technically, the only good thing about this setup is data resiliency (permanent backup) and nothing more. I would love to have it read a part from each disk so as to double read speed as well.
> ...

 

well, first, you can use dmraid: try downloading gen2dmraid 0.99a (which uses pure udev) and see if it autodetects your raid, and check the size of the partitions (AFAIK dmraid had a bug with more than 4 partitions... but try it anyway).

moreover, RAID 1 is for data consistency, so if you ever have any problems with the primary disk, the raid should detect and correct that.

RAID 0 is for speed, and will increase your read/write to almost 180%.

Setting up RAID 0 following garlicbread's steps is quite easy, but I've downloaded the dmraid initrd and hacked it (I think someone posted how to do that on the 1st page).

----------

## movrev

 *flipy wrote:*   

> 
> 
> well, first, you can use dmraid, try to download the gen2dmraid 0.99a (which uses pure udev) and see if it autodetects your raid and check the size of the partitions (AFAIK dmraid had a bug with more than 4 partitions... but try it anyway).
> 
> moreover, RAID 1 is for data consistency, so if you ever have any problems with the primary disk, the raid should detect and correct that.

 

Booting with gen2dmraid 0.99a gave me all my partitions as far as I can tell:

Raid 1 Array (2 x 200GB Maxtor SATA)

nvidia_dagcbaeb (mapper device)

nvidia_dagcbaeb1

nvidia_dagcbaeb5

nvidia_dagcbaeb6

nvidia_dagcbaeb7

nvidia_dagcbaeb8

nvidia_dagcbaeb9

nvidia_dagcbaeb10

Correct me if I am wrong, but these seem to be the devices that map onto my two hard drives, right? I don't know where it got the names from, but as far as I can tell it did correctly recognize my primary partition and the 6 logical partitions inside the extended one.

I used fdisk on the nvidia_dagcbaeb device and the sizes are exactly what I had formatted the different partitions to be. So, I booted with the gen2dmraid + vga option and it autodetected my network and gave me a simple framebuffer, which I could technically use to install.

However, you guys are talking about hacking the initrd to make it recognize and set up the RAID array at boot. I usually use splashutils to make this initrd because I like to have a framebuffer + splash. Is it possible to hack this initrd to make it usable for these purposes? And I would have to hack it every time I make a new initrd, which is not that often thankfully. Would I have to mess with anything other than the initrd and, well, the usual grub.conf?

Coming back to your last points about RAID. I understand that RAID0 gives you speed, but I want data resiliency, which confines me to RAID1. You are saying that not only will RAID1 do a permanent backup, but also correct for errors, which I understand. However, will it correct for errors in linux? Because from what I can tell, since we are not identifying it as RAID, it will only save the same thing to two hard drives at the same time (which is what we want, but a power failure could make that fail). Would I have to use the BIOS to rebuild it then? Or windows?

Also, for those of you running it so far: how stable is this? Because I want RAID 1 to keep a permanent backup without data corruption and maybe with an increase in read speed, and I would die if this setup in linux ruined my data. It would just defeat the purpose of making all this effort. Thanks for your help.

----------

## garlicbread

I've been writing some scripts recently to allow easy editing / modification of initrd / initramfs files

as far as I know splashutils creates an initramfs file and just puts some config / image data into it which the splash driver in the kernel picks up on

but so far I've not seen widespread use of initramfs for booting a script at runtime to make boot devices visible (e.g. for evms or dmraid)

as there are some differences that need to be present within the script (i.e. run-init from klibc instead of pivot_root)

currently my system at the moment is using an initramfs image with evms and pure udev

but I still need to try device-mapper raid out with this as well

I should be releasing another howto on this pretty soon now that I've finished the below scripts

Note: for resilience it may be better to use linux software raid 1 at the moment as that's something that's been tried / tested

in terms of speed it's not much different than device-mapper raid 1; the only downside is lack of compatibility with windows

Raid 1 was primarily designed for servers for maximum up time

the idea is if one drive fails it can just be swapped out for another and it'll keep on ticking

However it's still no substitute for proper backups (if both drives fail at the same time you're still screwed)

by using device-mapper raid you're just trying to write to the disks under Linux the same way the bios or the official windows drivers would

but this is still fairly beta at the moment, from what I've seen of dmraid it's not entirely finished yet, and mapping things manually can be a bit tricky

here are some new scripts that may come in handy; let me know what you think

first a modified version of /etc/dmtab

I've seen this included with the latest version of device-mapper, but so far it only appears to be used by /lib/rcscripts/addons/dm-start.sh

which I think is something still being worked on

I've modified the rule set by adding another field and called the config file something else to avoid conflicts

/etc/dmrtab

```

# Format: <volume name> : <options> : <table>

# Example: isw:p: 0 312602976 striped 2 128 /dev/sda 0 /dev/sdb 0

# Modified for own use with partition-mapper and dmraid

# 1. The first field indicates the volume name for standard dmsetup type rules

#    or the program name when 'e' is present in the second option field

# 2. Second field indicates certain options for the script to process

#    valid values currently are 'e', 'p' or 'x'

#    'p' indicates that partition-mapper or kpartx should be used to map out the partitions after creation

#    'e' indicates that the rule is not for dmsetup (i.e. for dmraid)

#    'x' indicates that the rule should not be mapped with the 'all' target; it can only be mapped by its given name

# 3. Third field indicates the table to be passed to dmsetup, or the command line options for dmraid

# Example dmsetup type mapping rules

#pdc_raid1_dev:xp: 0 398296960 mirror core 2 128 nosync 2 /dev/sda 0 /dev/sdb 0

#pdc_raid1_dev:: 0 398296960 mirror core 2 128 nosync 2 /dev/sda 0 /dev/sdb 0

# non-dmsetup rule indicated by "e" in the second field

# The first field should indicate the program name, so far dmraid is the only one supported

# The third field represents any additional command line options to be passed to dmraid

# e.g. such as -f pdc to specify only the "pdc" type arrays

# command line options for -an and -ay (activate/deactivate) are added in by the script automatically

#dmraid:e: -fvia via_bjedgggbc

# -p in the third field tells dmraid not to map the partitions itself

# p in the second field will use kpartx or partition-mapper to map out the partitions instead

#dmraid:ep: -p -fvia via_bjedgggbc

```

EDIT

New busybox-friendly version

Changed the preference to partition-mapper.sh instead of kpartx

due to a timing issue between udev and kpartx

Next a script to actually use it

/usr/local/bin/dmmap

```

#!/bin/sh

DMTAB="/etc/dmrtab"

PARTMAPPER_DIR="/dev/mapper"

DMSETUP_CMD="/sbin/dmsetup"

DMRAID_CMD="/usr/sbin/dmraid"

KPARTX_CMD="/sbin/kpartx"

PARTMAPPER_CMD="/usr/local/bin/partition-mapper.sh"

PM_CMD="${PARTMAPPER_CMD}"

ID='$Id: dmmap,v 1.0 2005/20/01 00:00:00 genone Exp $'

VERSION=0.`echo ${ID} | cut -d\  -f3`

PROG=`basename ${0}`

verb="0"

# used to return string values from functions

retvalue=""

map_list_target() {   

   # Filter comments and blank lines

   #each loop equals one valid entry in ${DMTAB}

   grep -n -v -e '^[[:space:]]*\(#\|$\)' "${DMTAB}" | \

   while read line_entry; do

      local test1=""

      auto_partition="false"

      dm_raidrule="false"

      all_exclude="false"

      

      # grab line number from first field (added by grep)

      get_first_field "${line_entry}"; line_number="${retvalue}"

      line_entry="${line_entry#*:}"

      

      # now grab the volume name from the next field

      get_first_field "${line_entry}"; volume_name="${retvalue}"

      line_entry="${line_entry#*:}"

      

      # grab any volume options such as if to partition the drive as well

      get_first_field "${line_entry}"; volume_option="${retvalue}"

      line_entry="${line_entry#*:}"

      

      # grab any parms to be passed to dmsetup or dmraid (remainder of the line)

      volume_parms="${line_entry}"

      

      # check to make sure that volume_name / volume_parms fields are not empty

      if [ ! -n "${volume_name}" ] || [ ! -n "${volume_parms}" ];then

         echo "${PROG}: error fields empty or incorrect number of fields, Line ${line_number}" && continue

      fi

      

      test1=`echo "${volume_option}" | grep "p" 2>/dev/null`

      [ -n "${test1}" ] && {

         auto_partition="true"

         [ -x "${KPARTX_CMD}" ] && PM_CMD="${KPARTX_CMD}"

         [ -x "${PARTMAPPER_CMD}" ] && PM_CMD="${PARTMAPPER_CMD}"

         [ ! -n "${PM_CMD}" ] && echo "${PROG}: Error unable to locate kpartx or partition-mapper script" && exit 1

         }

      

      test1=`echo "${volume_option}" | grep "e" 2>/dev/null`

      [ -n "${test1}" ] && {

         dm_raidrule="true"

         [ ! -x "${DMRAID_CMD}" ] && echo "${PROG}: Error unable to locate ${DMRAID_CMD}" && exit 1

         }

      [ "${dm_raidrule}" = "false" ] && {

         [ ! -x "${DMSETUP_CMD}" ] && echo "${PROG}: Error unable to locate ${DMSETUP_CMD}" && exit 1

         [ ! -c "/dev/mapper/control" ] && echo "${PROG}: Error unable to locate /dev/mapper/control" && exit 1

         }

      test1=`echo "${volume_option}" | grep "x" 2>/dev/null`

      [ -n "${test1}" ] && all_exclude="true"

      

      if [ ! "${list_target}" = "all" ] && [ ! "${list_target}" = "${volume_name}" ];then

         continue

      fi

      if [ "${list_target}" = "all" ] && [ "${all_exclude}" = "true" ];then

         continue

      fi

      

      # use dmraid to process the rule

      [ "${dm_raidrule}" = "true" ] && map_dmraid

      # use dmsetup to process the rule

      [ "${dm_raidrule}" = "false" ] && map_dmsetup   

   done

   return 0

}

map_dmraid() {   

   # Strip off the -p

   local list_parms=`echo "${volume_parms}" | sed "s/-p//"`

   # List of volume names that would be created

   local dmraid_volnames=`"${DMRAID_CMD}" -s ${list_parms} | grep name | sed "s/name[ \t]*://;s/^[ \t]*//;s/[ \t]*$//"`

   

   if [ "${volume_name}" = "dmraid" ];then

      if [ "${mode}" = "add" ];then

         # Map the main drive nodes

         [ "${verb}" -gt "0" ] && echo "${PROG}: activating dmraid with parameters -ay ${volume_parms}"

         if ! ("${DMRAID_CMD}" -ay ${volume_parms}); then

            #"${DMRAID_CMD}" -ay "${volume_parms}" || {

            echo "${PROG} there was a problem with ${DMRAID_CMD}"

            return 1

         fi

      

         # If partition-mapper / kpartx is to be used

         if [ "${auto_partition}" = "true" ];then

            echo "${dmraid_volnames}" | while read x; do

               map_part_mapper -a "${PARTMAPPER_DIR}/${x}" || return 1

            done

         fi

      

      elif [ "${mode}" = "delete" ];then

         # Remove the partition nodes if partition-mapper / kpartx is to be used

         if [ "${auto_partition}" = "true" ];then

            echo "${dmraid_volnames}" | while read x; do

               map_part_mapper -d "${PARTMAPPER_DIR}/${x}" || return 1

            done

         fi

      

         # Remove the main Drive nodes

         [ "${verb}" -gt "0" ] && echo "${PROG}: deactivating dmraid with parameters -an ${volume_parms}"

         if ! ("${DMRAID_CMD}" -an ${volume_parms}); then

            echo "${PROG} there was a problem with ${DMRAID_CMD} during the removal of ${volume_parms}"

            return 1

         fi

      

      elif [ "${mode}" = "list" ];then

         local x=""

         [ "${auto_partition}" = "true" ] && x="p:"

         echo "${volume_name}:${x} Command Line Opts: ${volume_parms}"

         "${DMRAID_CMD}" -s -ccc ${list_parms} 2>/dev/null | while read x; do

            echo "   $x"

         done

         echo ""

      fi

   fi   

}

map_dmsetup() {

   if [ "${mode}" = "add" ]; then

      # Skip if already mapped

      dmvolume_exists "${volume_name}" && return 0

      [ "${verb}" -gt "0" ] && echo "${PROG}: creating volume ${volume_name}:${volume_parms}"

      if ! (echo "${volume_parms}" | "${DMSETUP_CMD}" create "${volume_name}"); then

         echo "Error creating volume: ${volume_name}"

         return 1

      fi

      [ "${auto_partition}" = "true" ] && map_part_mapper -a "${PARTMAPPER_DIR}/${volume_name}"

   elif [ "${mode}" = "delete" ] && [ "${dm_raidrule}" = "false" ];then

      # Skip if not already mapped

      dmvolume_exists "${volume_name}" || return 0

      [ "${verb}" -gt "0" ] && echo "${PROG}: removing volume ${volume_name}:${volume_parms}"

      [ "${auto_partition}" = "true" ] && map_part_mapper -d "${PARTMAPPER_DIR}/${volume_name}"

      "${DMSETUP_CMD}" remove "${volume_name}"

   elif [ "${mode}" = "list" ]; then

      local x=""

      [ "${auto_partition}" = "true" ] && x="p:"

      echo "dmsetup:${x} ${volume_name}: ${volume_parms}"

   fi

}

usage() {

   echo "${PROG} v. ${VERSION}

   ${PROG} activate a device mapper entry within the table ${DMTAB}

usage : ${PROG} [-a|-d|-l] [-v] wholedisk

        -a add device devmappings

        -d del device devmappings

        -l list device devmappings that would be added by -a

        -v verbose"

   exit 1

}

map_part_mapper() {

   [ "${verb}" -gt "0" ] && echo "${PROG}: calling ${PM_CMD} ${@}"

   "${PM_CMD}" "${@}" || {

   echo "${PROG} there was a problem with ${PM_CMD}"

   return 1

   }

}

get_first_field() {

   local temp

   # Use sed to extract the first ":" seperated field from the line

   temp=`echo ${@} | sed "s/:\(.*\)//"`

   # remove any trailing / leading spaces

   temp=`echo ${temp} | sed "s/^[ \t]*//;s/[ \t]*$//"`

   retvalue="${temp}"

   return 0

}

#   Return true if volume already exists in DM table

dmvolume_exists() {

   local test1 x line volume=$1

   [ -z "${volume}" ] && return 1

   

   test1=`${DMSETUP_CMD} ls 2>/dev/null | grep "${volume}" 2>/dev/null`

   [ -n "${test1}" ] && return 0

   return 1

}

[ ! -f "${DMTAB}" ] && echo "${PROG}: Error unable to locate ${DMTAB}" && exit 1

###########

#Parse Args

###########

params=${#}

while [ ${#} -gt 0 ]

do

   a=${1}

   shift

   case "${a}" in

   -a)

      mode="add"

      list_target=${1}

      shift

      ;;

   -d)

      mode="delete"

      list_target=${1}

      shift

      ;;

   -l)

      mode="list"

      list_target=${1}

      shift

      ;;

   -v)

      verb=$((verb + 1))

      ;;

   -*)

      echo "${PROG}: Invalid option ${a}" 1>&2

      usage=y

      break

      ;;

   *)

      # Anything else just ignore

      ;;

   esac

done

[ ! -n "${mode}" ] && usage=y

[ ! -n "${list_target}" ] && usage=y

[ "${usage}" ] && usage

[ "${verb}" -gt "0" ] && echo "${PROG}: specified targets are ${list_target}"

map_list_target

```

finally an init script (this won't be enough if you're planning on having your root device on the array)

/etc/init.d/dmraidmapper

```

#!/sbin/runscript

depend() {

        before checkfs evms

        need checkroot modules

}

start() {

        ebegin "Initializing software mapped RAID devices"

        /usr/local/bin/dmmap -a all

        eend $? "Error initializing software mapped RAID devices"

}

stop() {

        ebegin "Removing software mapped RAID devices"

        /usr/local/bin/dmmap -d all

        eend $? "Failed to remove software mapped RAID devices."

}

```

the dmmap script uses the config file to create or remove device nodes and partition nodes

the config file can contain settings for manual mappings from dmsetup or for settings for dmraid

I'm not a gentoo developer or anything so I'm not saying this is the best or standard way of doing things

but this works for me (until something better comes along at least)

EDIT

now seems to work okay under busybox

*Last edited by garlicbread on Sun Apr 10, 2005 1:49 pm; edited 2 times in total*

----------

## flipy

garlicbread, if you need some testing let me know. I've an amd64 with via raid set up (fake-raid dualboot).

----------

## movrev

Thanks for your extensive reply garlicbread, but I think I am going to desist from using RAID 1 for now until all this is at least tested. I really want to start using my new computer, which I have now been working on for about a week, and if I start testing all these scripts I will have to wait for several more weeks and still not be as stable as I would want to be.

----------

## flipy

 *movrev wrote:*   

> Thanks for your extensive reply garlicbread, but I think I am going to desist from using RAID 1 for now until all this is at least tested. I really want to start using my new computer, which I have now been working on for about a week, and if I start testing all these scripts I will have to wait for several more weeks and still not be as stable as I would want to be.

 

using RAID with gentoo is easy, just follow any how-to you like and you'll have it. Try dmraid, should detect everything (and genkernel has it).

----------

## movrev

 *flipy wrote:*   

> using RAID with gentoo is easy, just follow any how-to you like and you'll have it. Try dmraid, should detect everything (and genkernel has it).

 

Yeah, as I said before, dmraid detected everything, but my issue is using this array as my boot partition. What do you mean when you say that genkernel has it? I am usually not going with genkernel as I configure the kernel myself. Thanks.

----------

## garlicbread

arrgh, just when I thought I had this sorted

everything works fine when the system is already booted up with a full version of bash

but getting the scripts to work under a busybox version of sh is just a pain

each time I fix one thing, it throws up something else  :Sad: 

----------

## flipy

 *movrev wrote:*   

>  *flipy wrote:*   using RAID with gentoo is easy, just follow any how-to you like and you'll have it. Try dmraid, should detect everything (and genkernel has it). 
> 
> Yeah, as I said before, dmraid detected everything, but my issue is using this array as my boot partition. What do you mean when you say that genkernel has it? I am usually not going with genkernel as I configure the kernel myself. Thanks.

 

the genkernel devs have included dmraid (along with udev and some other stuff). so you just need to run 

```
genkernel --udev --dmraid all
```

 and everything will be done for you.

for the boot partition, I've run into many problems until I found my issue. I guess you know how to install grub into the boot partition, and after that the comp should boot from it...

reading again your posts...

from what I've seen dmraid works out of the box; the only issue is that in the past it just detected 4 partitions, and sometimes got messed up with the sizes (which doesn't seem to be a problem for you).

as garlicbread said, having RAID 1 is just for data consistency, but we're talking about hw issues, so if the power goes down or your sister throws a cup of coffee at your tower, your data will be lost...

in terms of speed, AFAIK, with RAID 0 I get almost the same speed under linux as under windows (have to do some more benchmarks), so I guess RAID 1 will be the same.

if you ever have any trouble and need to rebuild the raid, I think it has to be done under windows or with the bios utility.

for the initrd thing, I've been unable to find out how to have *splash and the initrd at the same time, but you can try gensplash built into the bzImage plus the initrd; that should work (mind putting everything into modules).

I hope you don't give up on gentoo64, as it's the best distro for getting involved with your system.

*Last edited by flipy on Mon Apr 04, 2005 7:41 am; edited 1 time in total*

----------

## movrev

So, technically, I shouldn't even need an initrd if I follow genkernel? I mean, it should be taken care of, right?

----------

## garlicbread

If

```
 dmraid -ay 
```

sets up your array perfectly then I think genkernel should be able to generate an initrd that will work with the bootup

if on the other hand you have to use mappings via dmsetup (e.g. for certain Promise FastTrak arrays or for Via with a Raid 0 cluster size other than 16K) then some manual fiddling may be in order
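As an illustration of that kind of manual fiddling, a hand-written striped table for a 32K cluster size would look something like this (32K is 64 sectors of 512 bytes; the total size and device names here are made up):

```
chunk=64          # stripe chunk in sectors: 64 * 512 bytes = 32K
total=781417472   # total sectors of the combined array (hypothetical)
echo "0 ${total} striped 2 ${chunk} /dev/sda 0 /dev/sdb 0"
# to activate (as root): pipe the line above into `dmsetup create <name>`
```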

I think the bug of dmraid not mapping the extended partitions has been fixed in the latest versions

the only time it seems to mess up is if it gets the overall size of the array wrong

----------

## flipy

 *garlicbread wrote:*   

> 
> 
> if on the other hand you have to use mappings via dmsetup (e.g. for certain Fast Track Promise arrays or for Via with a Raid 0 cluster size other than 16K) then some manual fiddling may be in order

 

the latest version of dmraid also detects my via raid 0 32k cluster size and maps it ok (just a little trouble with the 2005.0-r1 amd64 cd).

so, movrev, I think you will always need an initrd to start, and it's logical: first you boot the kernel, which detects only hardware, then you run the initrd, which just makes the nodes and maps them, and then you're ready to start booting all the software stuff...

----------

## AlphaHeX

I have 2x 120GB Seagate SATA harddrives set in the bios to be a RAID0 (it's an intel ICH5R). I'm booting from the Gen2dmraid LiveCD and after 

```
dmraid -ay 
```

command I have a block device at /dev/mapper/iswfsfsaf[HeX]. I know that the raid was discovered by dmraid, as [HeX] is the name of the raid which I've set up in the bios. Invoking 

```
fdisk /dev/mapper/iswfsfsaf[HeX]
```

shows me the whole RAID (in terms of disk space it's 240GB). Do I have to do the partitioning before setting up the RAID in Linux (i.e. in Windows), or can I just make the partitions under Linux using fdisk? You're all saying here that I have to do the partitioning in Windows because setting up partitions in Linux would overwrite the RAID 0 information stored at the beginning and at the end of each disk?

----------

## garlicbread

 *AlphaHeX wrote:*   

> Do I have to do the partitioning before setting up the RAID in Linux (i.e. in Windows), or can I just make the partitions under Linux using fdisk? You're all saying here that I have to do the partitioning in Windows because setting up partitions in Linux would overwrite the RAID 0 information stored at the beginning and at the end of each disk?

 

The Raid metadata for most of these types of arrays is stored in a region right at the end of each disk.

The safe, easy way is just to partition the disk under Windows (which uses the official drivers)

or with a DOS utility such as PQMagic (which uses the BIOS).

Through DOS or Windows, the drives in combination simply look a little bit shorter to the partitioning utility, which prevents the partitions from overlapping into that metadata region.

dmraid should do the same thing, i.e. map the array from the beginning of the disk only up to where the metadata starts. That way the dev node it creates has the same correct length as it would under Windows (it excludes the metadata), and you should (in theory) be able to create partitions safely with sfdisk or fdisk.

The bit where it gets tricky is if dmsetup is used instead to manually map out the array on the drive,

since it's difficult to know whereabouts the metadata starts at the end of the disk.

If dmsetup has just been used to map out the full disk (including the metadata), then you risk creating a partition under Linux that could overlap into that metadata region.

If you've got the lengths right in the parameters to dmsetup, then there shouldn't be a problem.

So in short if dmraid works then there shouldn't be a problem with partitioning under Linux (in theory)
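If you do have to fall back to dmsetup, the idea can be sketched as below. Both numbers are pure assumptions for illustration: on a real system TOTAL would come from /sys/block/&lt;disk&gt;/size, and the 1024-sector reserve is only a guess at how much tail space the metadata needs (the true offset depends on the controller's format, which dmraid works out from the signature).

```shell
#!/bin/sh
# Build a linear dmsetup table that stops short of the tail region where
# the BIOS RAID metadata is assumed to live. Example numbers only: read
# the real TOTAL from /sys/block/<disk>/size, and note the 1024-sector
# RESERVE is a guess, not the controller's actual layout.
TOTAL=390721968                   # example disk size in 512-byte sectors
RESERVE=1024                      # assumed metadata region at the end
USABLE=$(( TOTAL - RESERVE ))
echo "0 $USABLE linear /dev/sda 0"
# real use: echo "0 $USABLE linear /dev/sda 0" | dmsetup create safedisk
```

If the reserve guess is too small, the last partition can still clobber the metadata, so erring on the large side is safer.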

----------

## makton3g

Exactly where along the installation process (by the handbook) am I supposed to set up the RAID 0? I wasn't able to do it when setting up the hard disks.

dmraid is not on the minimal CD, and dmsetup said it had some files missing and asked if it was installed properly. Am I supposed to make my filesystems and bootstrap before I set up my RAID?

----------

## garlicbread

You have to set up the raid array using dmraid first (assuming dmraid recognises your array correctly)

before you write anything to the Hard disk

i.e. right at the beginning of section 4, I think

assuming you've already setup / created the array in the Bios

dmraid -ay should create the dev node /dev/mapper/<whatever>

which represents the whole disk of the array

If the disk isn't partitioned yet, then you'll need to run fdisk /dev/mapper/<whatever>

on it to setup the partitions

and perhaps run dmraid -ay a second time so that the partition nodes show up

e.g.

/dev/mapper/<whatever>1

/dev/mapper/<whatever>2

etc

the <whatever> is usually pdc or via (or whatever type of array yours is) followed by a long number

dmraid isn't included on the official livecd yet (perhaps because it isn't considered stable enough, I don't know)

but there's supposed to be one over here called gen2dmraid

but I've not got around to trying it out myself yet
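For reference, the sequence above can be sketched as one script. It is guarded to only print a message unless RUN=1 is set, since dmraid -ay and fdisk touch real devices and need root; via_xxxxxxxx is a placeholder for whatever your array's name turns out to be.

```shell
#!/bin/sh
# The partitioning sequence described above. Dry-run by default because
# dmraid -ay and fdisk operate on real devices.
if [ "$RUN" != "1" ]; then
    echo "dry run: set RUN=1 to execute"
    exit 0
fi
dmraid -ay                          # activate the BIOS RAID set(s)
ls /dev/mapper/                     # note the array node name
fdisk /dev/mapper/via_xxxxxxxx      # partition the whole-array node (interactive)
dmraid -ay                          # run again so the partition nodes appear
ls /dev/mapper/                     # now also via_xxxxxxxx1, via_xxxxxxxx2, ...
```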

----------

## flipy

 *garlicbread wrote:*   

> 
> 
> dmraid isn't included on the official livecd yet (perhaps because it isn't considered stable enough, I don't know)

 

it is included in 2005.0 (amd64 and x86, AFAIK)

----------

## makton3g

 *flipy wrote:*   

>  *garlicbread wrote:*   
> 
> dmraid isn't included on the official livecd yet (perhaps because it isn't considered stable enough, I don't know) 
> 
> it is included in 2005.0 (amd64 and x86, AFAIK)

 

When I type the command "dmraid" I get a "command not found". I am using the minimal install CD 2005.0. Is this or "gen2dmraid" on the minimal CD?

----------

## flipy

 *makton3g wrote:*   

>  *flipy wrote:*    *garlicbread wrote:*   
> 
> dmraid isn't included on the official livecd yet (perhaps because it isn't considered stable enough, I don't know) 
> 
> it is included in 2005.0 (amd64 and x86, AFAIK) 
> ...

 

Well, 2005.0 has dmraid in its initrd (linuxrc), but not in the livecd environment; so just check /dev/mapper to see if there is something there... if not, try gen2dmraid >=0.99a (a pure udev system) and you'll be able to configure everything.

Another way is to mount the initrd so you can execute dmraid, but this is more difficult if you're not an advanced Linux user...

----------

## mcfly.587

Hi everybody,

I downloaded the cd with the dmraid support from http://tienstra4.flatnet.tudelft.nl/~gerte/gen2dmraid/ but I have a problem: 

On boot :

 *Quote:*   

> 
> 
>  invalid metadata checksum on /dev/sda 
> 
>  invalid metadata checksum on /dev/sdb 
> ...

 

Configuration :

Epox 8RDA3+ with Silicon Image 3112;

2x 36GB Raptors;

If I test with two Seagates there is no problem, everything works perfectly! Does anyone have a solution, please?

With the Raptors there is nothing in /dev/mapper ...  :Sad:  no silafefazfdf ...

How can i resolve this error at boot ? Thx in advance  :Smile: 

----------

## garlicbread

It looks as if it can see the metadata is there but thinks it's invalid

I would have thought that if it works for one set of disks then it should work for another

is this on Raid 0? (could you have unplugged then re-plugged the drives in the wrong way round?)

Have you tried deleting then re-creating the Array in the bios for those disks?

(this would destroy all data mind you)

----------

## mcfly.587

Yes raid 0.

I have re-created the array twice -> 16k, 32k ... same result.

Its very strange ...

----------

## garlicbread

was the array on the Seagate drives created on the same motherboard / raid controller / set of Sata connections?

I remember something weird happening with mine when I was messing about with this.

I'd set up an array with a couple of disks on a pdc controller, then switched the drives across to a via controller and set it up again.

dmraid didn't recognise the via data at the time (as it was an old version) but picked up the pdc data that was left over from the previous array and set up the array using the pdc driver,

which sort of made it work even when it shouldn't have.

Other than that, the only other way is manually doing it through dmsetup, which is difficult to set up.

----------

## Erlend

The kpartx script doesn't run from my initrd.  Not sure if it is compatible with busybox?

Thanks,

Erlend

----------

## garlicbread

kpartx isn't a script, it's a binary included with multipath-tools that will create dev nodes for the different partitions.

devmap_name is another binary included with multipath-tools; it can identify a device-mapper device's name and can be used within udev scripts to properly set up the dev nodes.

Since the ebuild for multipath-tools can't build statically yet, you'll also need to copy across any libs that kpartx uses.

partition-mapper.sh is a script which does exactly the same thing as kpartx, except it uses sfdisk, awk and dmsetup (so these need to be present within the initrd if it is being used).

Since dmsetup (version 1.0.20 onwards) can now do the same thing as devmap_name within the udev rules, I've stopped using multipath-tools altogether on my own system and just use the partition-mapper.sh script instead.
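For completeness, typical kpartx usage looks like this. It's a sketch: the array name is a placeholder, and kpartx needs the whole-array node to be active already.

```shell
#!/bin/sh
# Add one device-mapper node per partition of an already-active array.
# "via_xxxxxxxx" is a placeholder for your array's name.
NODE=/dev/mapper/via_xxxxxxxx
if [ -b "$NODE" ]; then
    kpartx -a -v "$NODE"    # -a adds the partition mappings, -v lists them
else
    echo "array not active: $NODE"
fi
# kpartx -d "$NODE" would remove the partition mappings again
```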

----------

## Erlend

Okay thanks.  When I said kpartx script I was referring to your script above.  I'm trying to make an initrd for mapping the partitions so that I can repartition my drive easier.  After that I'll revert back to using "static mapping".

Erlend

----------

## garlicbread

I've tried the latest one under busybox inside an initrd and it seems to be okay

without knowing what the error or problem is, I'm not sure what to recommend

I know that a pure udev system by default won't map device nodes correctly for dev-mapper so I'm assuming you've already set this up (see other Evms + Udev Howto)

I've recently managed to get evms working on top of the array as well but that involved using a loopback device, so I'm looking into patching evms as well at the moment

----------

## Erlend

Actually, I'm unlikely to try this myself, but does lvm2 work on the arrays?

Erlend

----------

## Erlend

My initrd doesn't work - sfdisk isn't working.  It says 

```
"/bin/sh: /sbin/sfdisk: not found"
```

I think it needs a library, not sure which one?  Does anybody know?

Thanks,

Erlend

----------

## garlicbread

The script is looking for sfdisk in /sbin/sfdisk which I think is the default location on an installed system

for my own initrd I have all the binaries located within /bin/ on the initrd

with symbolic links from /sbin /usr/bin /usr/sbin /usr/local/bin/ that all point to /bin

so depending on your initrd, either sfdisk is not there, it's located in the wrong place, or there's no symlink from /sbin to the bin directory where sfdisk is located

to get a list of libs that a binary depends on, you can try running ldd <path to the bin>

also for sfdisk I think it's possible to get a static version by using

```
USE="static" ebuild /usr/portage/<path to sfdisk ebuild file> install
```

which should go as far as compiling / installing into /var/tmp/portage but not emerging into the main system

from there you can then copy the static version from /var/tmp/portage/sfdisk/image/sbin/sfdisk (I think that's right) into your initrd
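The lib-copying step can be sketched like this. DEST here is a scratch directory for illustration (point it at your initrd's mounted root in real use), and /sbin/sfdisk is just the example binary.

```shell
#!/bin/sh
# Copy a binary plus every shared library ldd reports into a tree that
# mirrors an initrd layout. DEST is a scratch path for illustration.
BIN=/sbin/sfdisk
DEST=/tmp/initrd-tree
if [ ! -x "$BIN" ]; then
    echo "binary not found: $BIN"
    exit 0
fi
mkdir -p "$DEST/bin" "$DEST/lib"
cp "$BIN" "$DEST/bin/"
# ldd lines look like "libc.so.6 => /lib/libc.so.6 (0x...)"; field 3 is the path
for lib in $(ldd "$BIN" | awk '/=>/ { print $3 }'); do
    [ -f "$lib" ] && cp "$lib" "$DEST/lib/"
done
echo "copied $BIN and its libraries into $DEST"
```

A statically linked binary (as discussed above) avoids all of this, which is why it's usually the easier route.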

----------

## Erlend

 *Quote:*   

> USE="static" ebuild /usr/portage/<path to sfdisk ebuild file> install 

 

That fixed it thanks.

I'll post the linuxrc and my busybox config in this thread when I'm eventually satisfied with it.  I'm sure some people will find it helpful.

Erlend

----------

## Zate

The problem I have is finding a CD with both dmraid and ntfsresize on it.. I have the gen2dmraid livecd or Knoppix. I see a bunch of links for a statically linked ntfsresize about the place, but none of them work.

Any ideas?.. my winXP NTFS is taking up the whole raid... so I can't resize it with a normal Knoppix CD (3.7) as it doesn't see the raid.. and the gen2dmraid cd, which sees the raid fine, has no resizing tool for NTFS.  :Sad: 

----------

## Erlend

There are a few things you could try.

BootIT NG is a commercial product, with a working trial version:

http://www.bootitng.com/downloads/bootitng.zip

If you have any livecd (Knoppix) with ntfsresize on it, you could use that.  Most livecds have dmsetup on them, which is the tool dmraid uses to map the drives; you can call it yourself if you know the mapping (use garlicbread's script if you're stuck).  You can actually complete the install using dmsetup (I have to use dmsetup, since dmraid doesn't work for me).  If you really must have dmraid, then finding a statically compiled version should be easy - people are always statically compiling it to put in their initrds.

In fact, you could use knoppix, but use the "Install Software" feature to install dmsetup/dmraid while you are running to CD.  I think it is located: K->KNOPPIX->Utilities->Install software.

Good luck,

Erlend

----------

## garlicbread

For info

I've noticed that an error can sometimes be generated when setting up a mirror target for Raid 1.

In the initrd it's more visible; it seems to mention something about trying to access the device outside of its bounds (even though the map given to dmsetup isn't).

While this doesn't stop it from working, mapping / unmapping several times over (like if you're experimenting with the setup) can cause a kernel oops.

I think it might be something in the kernel's dm mirror source

looking at the latest Changelog for gentoo-sources-2.6.11-r6 it seems to mention something

"Removed the dm patches as they caused oopses under certain circumstances"

but I can't compile this at the moment due to some weird make error on my amd64 system

I'll need to re-compile my toolchain and world packages to try and sort this out first

EDIT

figured out the problem and tried out 2.6.11-r6, but it still has the same problem

you should be able to map then unmap a mirror target at least once with no problems; constantly re-mapping / unmapping it, however, will probably cause an oops (at least for mirror raid1; for raid0 there seems to be no problem)

----------

## Zate

I couldn't get dmsetup down onto Knoppix (3.7)... but what I did do is get ntfsprogs onto my FC2 web server and compile it statically... I was then able to boot with the gen2dmraid disk, wget the static ntfsprogs with ntfsresize, and run dmraid -ay -v, which created nvidia*1 in /dev/mapper. I only have one partition on the raid, 150GB... so I did ntfsresize -s 90G /dev/mapper/nvidia*1 (used the full name). It said I should do a chkdsk /f under windows, and that the NTFS partition might be corrupted?

I stopped right there and booted back to windows.. too much data on here to nuke it doing something I'm not sure of.. I'm thinking of just backing up the important stuff, nuking windows, breaking the raid back into its original 100GB and 75GB drives and using one for each OS... much simpler I think... maybe I can go winXP 64 when I do that..

man if EQ2 would run under Linux i could ditch it totally... my wife is ready to switch to linux completely.. i am the one keeping myself on winXP.. her little downstairs computer gets a gentoo install this week.. lol. how sad is that.. the techy is on windows and the housewife will be running gentoo .. lol

----------

## garlicbread

If you're trying to resize a win partition, you first need to defrag the drive to get all the data located at the beginning of the partition.

The statement about chkdsk may suggest that there is something wrong with the filesystem (or perhaps something implemented in NTFS that it doesn't recognise), so it's also worth running scandisk on the drive as well.

You could try PQMagic (also called Partition Magic), which can be run via a DOS boot disk, but this is something you have to buy (unless you're willing to pirate it off p2p, which I would never recommend  :Wink: )

----------

## Zate

 *garlicbread wrote:*   

> If you're trying to resize a win partition, you first need to defrag the drive to get all the data located at the beginning of the partition.
> 
> The statement about chkdsk may suggest that there is something wrong with the filesystem (or perhaps something implemented in NTFS that it doesn't recognise), so it's also worth running scandisk on the drive as well.
> 
> You could try PQMagic (also called Partition Magic), which can be run via a DOS boot disk, but this is something you have to buy (unless you're willing to pirate it off p2p, which I would never recommend )

 

Thanks  :Smile: 

I think my plan will be to build my wifes system first... and then backup my important data and rebuild on separate drives.  I'll mess with the dmraid stuff more later when i install a couple of 36GB Raptors on SATA.

Greatly appreciate the help.

----------

## Zate

update: I was able to use BootIT NG to create a bootable CD-ROM with a partition manager that could both see and resize my nvraid NTFS partition.  I set it checking the partition last night.. set it resizing from 150GB to 75GB this morning and had my wife check on it while I was at work. She finalised it and rebooted the PC, and windows booted normally.  Tonight I use gen2dmraid to do my gentoo install.  :Smile:  YAY!

----------

## Zate

so i guess i am a little lost now.

I worked out I could only have 4 partitions.. win has one.. bootitng created another.. which left me with one for swap and one for linux.. I figured I could put it all in 1.. not the best solution.. but workable for now.

The problem I have is that to see my raid I needed to use the gen2dmraid livecd... but to chroot to my amd64 environment I need to have a 64-bit kernel.. so what do I do?.. will the usual 64-bit live CD see my raid drives? (someone mentioned the initrd has dmraid so it will see the raid.. but the livecd environment doesn't have dmsetup or dmraid) ..

what now ?

----------

## Erlend

 *Quote:*   

> worked out I could only have 4 partitions

 

You can have many more if you use an extended partition.  Just don't put your /boot on an extended partition as it could cause problems later.

 *Quote:*   

> bootitng created another

 

BootIT NG doesn't need a partition to operate - it can run from the CD.  Remove the BootIT NG partition.

 *Quote:*   

> the problem i have is to see my raid i needed to use the gen2raid

 

Any recent official gentoo livecd will work, as they have dmsetup on them.  You can work out the mapping yourself, or maybe use dmraid to find out what mappings it is using.

 *Quote:*   

> but to chroot to my amd64 environment i need to have a 64 bit kernel

 

I don't think you do.  The kernel you are running from the livecd obviously works - you used it to install stuff, chroot only changes, for example, /mnt/gentoo to be / - nothing fancy is done with the kernel.

Erlend

----------

## hwood

Howdy folks,

Well I have managed to get my pdc to recognize the full 200gb of my raid1 mirror. Here is how I did it.

First, I burned http://tienstra4.flatnet.tudelft.nl/~gerte/gen2dmraid/gen2dmraid-0.99a.iso and booted the -mm sources. As soon as I got to a root prompt I did dmraid -an. I then went and fetched this with wget:

http://tienstra4.flatnet.tudelft.nl/~gerte/dmraid-static-glibc which is an rc8 binary of dmraid. I then used the new binary to see my partitions etc...  ./dmraid-static-glibc -ay

I was able to emerge -e system and get a kernel installed. I installed grub, since I hate lilo, and tried to boot it up on its own. When it should show the grub screen, all I see is a blinking cursor. I have tried making my own initrd and even tried genkernel, and I still cannot get grub to boot it. My grub install went well following your instructions; I just can't seem to get it to boot. I have tried using the grub console off the bootcd and still cannot get it to boot, though I did get it to partially boot.

Any suggestions?

----------

## falcon_munich

Hi, I can't get this working; can anybody help, please?

First my config: I have a K8T Neo2 board with an AMD64.

rnd mapper # lspci |grep -i raid

0000:00:0d.0 RAID bus controller: Promise Technology, Inc. PDC20579 SATAII 150 IDE Controller

0000:00:0f.0 RAID bus controller: VIA Technologies, Inc. VIA VT6420 SATA RAID Controller (rev 80)

rnd mapper # emerge info

Portage 2.0.51.22-r2 (default-linux/amd64/2005.1, gcc-3.4.4, glibc-2.3.5-r1, 2.6.12-gentoo-r10 x86_64)

=================================================================

System uname: 2.6.12-gentoo-r10 x86_64 AMD Athlon(tm) 64 Processor 3500+

Gentoo Base System version 1.6.13

ccache version 2.3 [enabled]

dev-lang/python:     2.3.5-r2

sys-apps/sandbox:    1.2.12

sys-devel/autoconf:  2.13, 2.59-r6

sys-devel/automake:  1.4_p6, 1.5, 1.6.3, 1.7.9-r1, 1.8.5-r3, 1.9.6

sys-devel/binutils:  2.15.92.0.2-r10

sys-devel/libtool:   1.5.18-r1

virtual/os-headers:  2.6.11-r2

ACCEPT_KEYWORDS="amd64"

AUTOCLEAN="yes"

CBUILD="x86_64-pc-linux-gnu"

CFLAGS="-march=k8 -O3 -pipe -fomit-frame-pointer"

CHOST="x86_64-pc-linux-gnu"

CONFIG_PROTECT="/etc /usr/kde/2/share/config /usr/kde/3.4/env /usr/kde/3.4/share/config /usr/kde/3.4/shutdown /usr/kde/3/share/config /usr/lib/X11/xkb /usr/share/config /var/qmail/control"

CONFIG_PROTECT_MASK="/etc/gconf /etc/terminfo /etc/env.d"

CXXFLAGS="-march=k8 -O3 -pipe -fomit-frame-pointer"

DISTDIR="/usr/portage/distfiles"

FEATURES="autoconfig ccache distlocks sandbox sfperms strict"

GENTOO_MIRRORS="ftp://sunsite.informatik.rwth-aachen.de/pub/Linux/gentoo http://ftp.belnet.be/mirror/rsync.gentoo.org/gentoo/ ftp://ftp-stud.fht-esslingen.de/pub/Mirrors/gentoo/ ftp://194.117.143.69/mirrors/gentoo"

LINGUAS="de"

MAKEOPTS="-j2"

PKGDIR="/usr/portage/packages"

PORTAGE_TMPDIR="/var/tmp"

PORTDIR="/usr/portage"

SYNC="rsync://rsync.de.gentoo.org/gentoo-portage"

USE="X a52 aac aalib acpi alsa amd64 apache2 arts avi berkdb bitmap-fonts bzip2 cdparanoia cdr crypt cups curl dbus dedicated dga dio directfb dri dts dv dvb dvd dvdr dvdread eds emboss encode esd ethereal exif fam fbcon ffmpeg flac foomaticdb fortran ftp gif gphoto2 gpm gps gstreamer gtk gtk2 hardened ieee1394 imagemagick imap imlib ipv6 java javascript joystick jpeg kde lm_sensors lzw lzw-tiff mad memlimit mozilla mp3 mpeg nas ncurses nls nsplugin nvidia ogg oggvorbis openal opengl osc pam pda pdflib perl php png portaudio python qt quicktime readline samba scanner sdl sndfile sockets speex spell ssl tcpd tiff truetype truetype-fonts type1-fonts usb userlocales v4l vcd videos wmf wxwindows xface xine xinerama xml2 xmms xpm xprint xv xvid zlib linguas_de userland_GNU kernel_linux elibc_glibc"

Unset:  ASFLAGS, CTARGET, LANG, LC_ALL, LDFLAGS, PORTDIR_OVERLAY

My system is installed on hda (standard IDE) - no problem here.

I have built a raid 0 with the via raid controller; on this raid device I have three NTFS partitions with windows xp on them. Now I want to mount these partitions.

This is what I have done:

rnd mapper # cat /sys/block/sda/size

234441648

rnd mapper # cat /sys/block/sdb/size

234441648

```
echo "0 468883296 striped 2 128 /dev/sda 0 /dev/sdb 0"|dmsetup create raid
```

rnd mapper # sfdisk -l -uS /dev/mapper/raid

Disk /dev/mapper/raid: cannot get geometry   <------------------------ IS THIS OK ????

Disk /dev/mapper/raid: 0 cylinders, 0 heads, 0 sectors/track

Warning: The partition table looks like it was made

  for C/H/S=*/255/63 (instead of 0/0/0).

For this listing I'll assume that geometry.

Units = sectors of 512 bytes, counting from 0

   Device Boot    Start       End   #sectors  Id  System

/dev/mapper/raid1   *        63  30716279   30716217   7  HPFS/NTFS

/dev/mapper/raid2      30716280 133114589  102398310   7  HPFS/NTFS

                start: (c,h,s) expected (1023,254,63) found (1023,0,1)

/dev/mapper/raid3     133114590 468873089  335758500   7  HPFS/NTFS

                start: (c,h,s) expected (1023,254,63) found (1023,0,1)

/dev/mapper/raid4             0         -          0   0  Empty

```
echo "0 30716217 linear /dev/sda 63"|dmsetup create raid1
echo "0 102398310 linear /dev/sda 30716280"|dmsetup create raid2
echo "0 335758500 linear /dev/sda 133114590"|dmsetup create raid3
device-mapper ioctl cmd 3 failed: Device or resource busy
Command failed
```

The next strange thing: raid1 and raid2 are ok, but raid3 produces an error.

When I want to mount one of the partitions I get this error:

```
rnd mapper # mount /dev/mapper/raid1 /mnt/drive_c/
mount: wrong fs type, bad option, bad superblock on /dev/mapper/raid1,
       missing codepage or other error
       In some cases useful info is found in syslog - try
       dmesg | tail  or so
```

Could anybody help me, please? This is my first contact with gentoo.

----------

## wbreeze

Hi falcon_munich:

You should try changing 

```
echo "0 30716217 linear /dev/sda 63"|dmsetup create raid1

echo "0 102398310 linear /dev/sda 30716280"|dmsetup create raid2

echo "0 335758500 linear /dev/sda 133114590"|dmsetup create raid3 
```

to

```
echo "0 30716217 linear /dev/mapper/raid 63"|dmsetup create raid1

echo "0 102398310 linear /dev/mapper/raid 30716280"|dmsetup create raid2

echo "0 335758500 linear /dev/mapper/raid 133114590"|dmsetup create raid3 
```

I think I have gotten:

```
device-mapper ioctl cmd 3 failed: Device or resource busy

Command failed 
```

after I had tried the dmsetup create already and then tried to do it again a second time

try:

```
dmsetup remove raid3
```

and then try your dmsetup create again

EDIT:

or maybe you got that error because you ran off the end of /dev/sda. You will have more room on /dev/mapper/raid

----------

## falcon_munich

Hi wbreeze, 

thank you very much, it works now.

I think that was a very stupid mistake   :Embarassed: 

Sorry

----------

## wbreeze

That's how I learned.

By making lots and lots and lots of mistakes.

----------

## jackxh

Hi:

I was able to successfully create the raid 1 as /dev/sda and install Gentoo Linux on a motherboard with a VIA VT8237, and everything boots fine, except I can't figure out how to rebuild the broken array. The via raid bios/utilities clearly state that this is a broken array, but when I do dmraid -s

I got this:

```
name   : via_hgbicibb
size   : 312581807
stride : 0
type   : mirror
status : ok
subsets: 0
devs   : 1
spares : 0
```

Would you please give me some hint?

sincerely yours

Jack Xie

----------

## Erlend

Why do you want to rebuild the array if you can boot?

If you want to rebuild the array, I'm guessing you'll have to dd each partition (or use partimage, in portage) to make an exact backup, rebuild the array using the bios utilities, and then put your backup back on the disk from dd or partimage.

Erlend

----------

## garlicbread

With Raid 1 each disk should have identical data, with maybe a slightly different signature for each drive embedded in the raid metadata at the end of the drive.

(in other words doing a direct copy within linux from the current okay disk to the new disk may not work)

most Raid bios's allow the array to be rebuilt inside the bios

it's usually just a case of fitting the new disk, going into the Raid Bios, selecting rebuild, and selecting the correct source disk to copy from.

That last bit is really important: you don't want to copy from the blank new disk over the existing old disk with the data on it.

you should be able to boot to an individual raid 1 member if you want to, as a Raid1 member disk is pretty much the same as a normally partitioned drive with the exception of the raid metadata at the end of the disk

probably just a case of turning raid off in the bios to get the system to see it as a single disk, and fiddling around with grub so it selects the right drive

----------

## frankjr

dmsetup no longer works for me under 2.6.16.

----------

## Erlend

No, me neither.

The reason is that a new patch was introduced into the kernel which forces all striped devices to have a chunk size that exactly divides the size of the device.  Your raid BIOS created the array without this limitation, so it almost certainly didn't make the array size an exact multiple of the cluster size.

The way to fix it is to create a new linear-mapped device that cuts the end of your array off so that the cluster size divides it.

Let's assume your array is 24575000 sectors and your stripe size is 64.  24575000/64 = 383984.375, but since we want it to fit exactly we just take the quotient (383984).  Now multiply the quotient by the stripe size to get the valid size of the array: 383984 x 64 = 24574976.  Then create a new device of this size:

```

echo "0 24574976 linear /dev/mapper/raid 0" | dmsetup create dmsetupcompatibledevice

```
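That arithmetic is easy to script; a sketch using the example numbers from above (nothing here is read from a real array):

```shell
#!/bin/sh
# Truncate an array length so the stripe (cluster) size divides it
# exactly. Sizes are in 512-byte sectors; these are the example values.
ARRAY_SIZE=24575000
STRIPE=64
USABLE=$(( (ARRAY_SIZE / STRIPE) * STRIPE ))   # integer division drops the remainder
echo "$USABLE"                                 # prints 24574976
# real use: echo "0 $USABLE linear /dev/mapper/raid 0" | dmsetup create trimmed
```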

This feels like a bit of a hack to me - I don't like it.  If you have data at the very end of the array you'll lose it.  I think I'm going to just blow my array away and install on md native linux raid, since I no longer need to dual-boot with Windows.  I've heard linux raid is faster anyway.

Erlend

----------

## Erlend

I want to use the built-in linux md software raid instead of dmraid/dmsetup now.  Does anybody know how to disable the raid functionality on a Promise FastTrak S150 Tx2Plus?

----------

## Neo_0815

Is there anyone who can give an example of dmsetup for a raid10 (a mirror set on top of a stripe set)?
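For what it's worth, a rough sketch of that layout: every device name, size and chunk/region value below is an assumption, and it's guarded to only print the tables unless RUN=1 is set, since creating maps needs root.

```shell
#!/bin/sh
# RAID 10 as a mirror of two stripe sets. Assumptions: four equal disks
# sda..sdd, 128-sector (64K) chunks, a 468883200-sector set size chosen
# so the chunk size divides it exactly, and a core mirror log with
# 1024-sector regions. Dry-run by default: tables are printed, not created.
SIZE=468883200
make_map() {
    if [ "$RUN" = "1" ]; then dmsetup create "$1"; else cat; fi
}
echo "0 $SIZE striped 2 128 /dev/sda 0 /dev/sdb 0" | make_map stripe0
echo "0 $SIZE striped 2 128 /dev/sdc 0 /dev/sdd 0" | make_map stripe1
# mirror table: <log_type> <#log_args> <region_size> <#legs> <dev off> <dev off>
echo "0 $SIZE mirror core 1 1024 2 /dev/mapper/stripe0 0 /dev/mapper/stripe1 0" | make_map raid10
```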

----------

