# How to do a gentoo install on a software RAID

## chatwood2

How to do a gentoo install on a software RAID

by Chris Atwood

1. About the Install

Before you start reading this how-to, you should read the x86 install instructions several times to become familiar with the Gentoo install process. Also note that as you install following these instructions, you should keep the normal instructions close at hand; you will need to refer back to them.

This how-to assumes that you are installing on two IDE drives, and that both are masters on different IDE channels (thus they are /dev/hda and /dev/hdc). The CD-ROM you install from could be /dev/hdb or /dev/hdd (it doesn't matter).

I decided to partition my drives similarly to how the gentoo install docs suggest.

```
device         mount         size
/dev/hda1      /boot         100MB
/dev/hda2      swap          >=2*RAM
/dev/hda3      /             (big)
/dev/hda4      /home         (big) (this partition is optional)
```

/boot and / will be RAID 1 (mirror), /home will be RAID 0, and the swap partition will exist on both hda and hdc but will not be part of a RAID (more will be explained later).

At this point let me just explain the common RAID levels and their pros and cons.

RAID 0: two or more hard drives are combined into one big volume. The final volume size is the sum of all the drives. When data is written to the RAID device, it is striped across all the drives in the RAID 0. This means that reads and writes are very fast, but if one drive dies you lose all your data.

RAID 1: two hard drives are combined into one volume the size of the smaller of the two physical drives. The two hard drives in the RAID are always mirrors of each other. Thus, if a drive dies you still have all your data and your system operates as normal.

RAID 5: three or more hard drives are combined into one larger volume. The volume size is (number of drives - 1) * drive size. You lose one drive's worth of space because parity information, spread across all the drives, allows any single drive's contents to be reconstructed. Thus if one drive dies you still have all your data, but if two die you lose everything.

Some general RAID notes. Ideally, all drives in a RAID should be the same size; any difference in the drives makes it harder for the computer to manage the RAID. Also, each IDE drive in a RAID should be on its own IDE channel. With IDE, a dead drive can bring down its whole channel, so if two RAID members share a channel, one drive dying takes both down and your machine crashes.
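The capacity rules above are easy to check with a little shell arithmetic. The drive count and size below are made up for illustration:

```shell
# Hypothetical setup: four 40 GB drives.
n=4       # number of drives in the array
size=40   # capacity of each drive, in GB

echo "RAID 0: $(( n * size )) GB usable"         # sum of all drives
echo "RAID 1: $size GB usable (2-disk mirror)"   # size of one drive
echo "RAID 5: $(( (n - 1) * size )) GB usable"   # one drive's worth lost to parity
```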

2. Booting

Follow normal gentoo instructions in this section.

3. Load kernel modules

My machine uses a sis900 compatible network chip, so I use that driver. You should, of course, use your own network driver name in its place.

```
#modprobe sis900
```

We also have to load the module that allows for RAID support, so:

```
#modprobe md
```

4. Loading PCMCIA kernel modules

Follow normal gentoo instructions in this section.

5. Configure installation networking

Follow normal gentoo instructions in this section.

6. Set up partitions

You need to use fdisk to set up your partitions. There is nothing different here, except make sure you fdisk both disks and set all partitions (except swap) to partition type fd (Linux raid autodetect). If you fail to do either of these steps, your RAID will not work. Swap should be set to type 82 (Linux swap).
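For reference, the type change is done with fdisk's `t` command. A session for hda's root partition looks roughly like this (prompts paraphrased from memory, so treat it as a sketch):

```
# fdisk /dev/hda
Command (m for help): t
Partition number (1-4): 3
Hex code (type L to list codes): fd
Command (m for help): w
```

Repeat for each RAID partition on both disks, and use type 82 for the swap partitions.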

This might be a good time to play with the "hdparm" tool. It allows you to change hard drive access parameters, which might speed up disk access. There is a pretty good forum thread about hdparm; I suggest doing a search for it.

Before we put any filesystems on the disks we need to create and start the RAID devices. So, we need to create /etc/raidtab. This file defines how the virtual RAID devices map to physical partitions. If you have hard drives of different sizes in a RAID 1 (not suggested), the smaller of the two should be raid-disk 0 in this file. My raidtab file ended up looking like this:

```
# /boot (RAID 1)
raiddev                 /dev/md0
raid-level              1
nr-raid-disks           2
chunk-size              32
persistent-superblock   1
device                  /dev/hda1
raid-disk               0
device                  /dev/hdc1
raid-disk               1

# / (RAID 1)
raiddev                 /dev/md2
raid-level              1
nr-raid-disks           2
chunk-size              32
persistent-superblock   1
device                  /dev/hda3
raid-disk               0
device                  /dev/hdc3
raid-disk               1

# /home (RAID 0)
raiddev                 /dev/md3
raid-level              0
nr-raid-disks           2
chunk-size              32
persistent-superblock   1
device                  /dev/hda4
raid-disk               0
device                  /dev/hdc4
raid-disk               1
```

While I didn't have enough hard drives to do this, adding hot-spares is often a good idea. This means that if a drive in your RAID 1 goes down, you already have a spare drive in the machine that can be used as a replacement. A raidtab with the hot-spare option looks like this:

```
# / (RAID 1 with hot-spare)
raiddev                 /dev/md2
raid-level              1
nr-raid-disks           2
nr-spare-disks          1
chunk-size              32
persistent-superblock   1
device                  /dev/hda3
raid-disk               0
device                  /dev/hdc3
raid-disk               1
device                  /dev/hdd1
spare-disk              0
```

And a RAID 5 (with hot-spare) raidtab looks like this:

```
raiddev                 /dev/md4
raid-level              5
nr-raid-disks           3
nr-spare-disks          1
persistent-superblock   1
chunk-size              32
parity-algorithm        right-symmetric
device                  /dev/hda4
raid-disk               0
device                  /dev/hdb4
raid-disk               1
device                  /dev/hdc4
raid-disk               2
device                  /dev/hdd4
spare-disk              0
```

Now we need to create the RAID devices, running mkraid once for each device defined in the raidtab file:

```
# mkraid /dev/md0
# mkraid /dev/md2
# mkraid /dev/md3
```

(Don't pass a shell wildcard like /dev/md* here: /dev contains a couple hundred md entries, and mkraid gets confused.)

I decided to put an ext2 filesystem on the /boot RAID drive:

```
#mke2fs /dev/md0
```

Remember how I mentioned that I was going to set up the swap space so that it would exist on more than one drive, but would not be in a RAID? So when we make the swap space, we make two of them:

```
# mkswap /dev/hda2
# mkswap /dev/hdc2
```

And, since I want xfs on the / and /home RAIDs:

```
# mkfs.xfs -d agcount=3 -l size=32m /dev/md2
# mkfs.xfs -d agcount=3 -l size=32m /dev/md3
```

The parameters added to the mkfs.xfs command come from the suggestions made in the original x86 install guide. Both my / and /home partitions are about 9 GB, and XFS likes at least one allocation group per 4 GB, so I used an agcount of 3.
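The same agcount arithmetic works for any partition size: the size divided by 4 GB, rounded up. A quick sketch (the 9 GB figure is from my setup above; substitute your own):

```shell
size_gb=9                          # approximate size of the partition, in GB
agcount=$(( (size_gb + 3) / 4 ))   # ceiling division: one allocation group per 4 GB
echo "mkfs.xfs -d agcount=$agcount -l size=32m /dev/md2"
```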

If you want ext3 filesystems, use:

```
# mke2fs -j /dev/md2
# mke2fs -j /dev/md3
```

Or to create ReiserFS filesystems, use:

```
# mkreiserfs /dev/md2
# mkreiserfs /dev/md3
```

7. Mount partitions

Turn the swap on:

```
# swapon /dev/hda2
# swapon /dev/hdc2
```

Mount the / and /boot RAIDs:

```
# mkdir /mnt/gentoo
# mount /dev/md2 /mnt/gentoo
# mkdir /mnt/gentoo/boot
# mount /dev/md0 /mnt/gentoo/boot
```

8. Mounting the CD-ROM

Follow normal gentoo instructions in this section.

9. Unpack the stage you want to use

Follow normal gentoo instructions in this section, except for one addition: you need to copy your raidtab file over to your new gentoo root. So after you copy resolv.conf, do this:

```
# cp /etc/raidtab /mnt/gentoo/etc/raidtab
```

10. Rsync

Follow normal gentoo instructions in this section.

11. Progressing from stage1 to stage2

Follow normal gentoo instructions in this section.

12. Progressing from stage2 to stage3

Follow normal gentoo instructions in this section.

13. Final steps: timezone

Follow normal gentoo instructions in this section.

14. Final steps: kernel and system logger

When in menuconfig, be sure to compile in support for RAID devices and all the RAID levels you plan to use. And compile them into the kernel; don't build them as modules. If you build them as modules you have to load the modules before mounting the RAID devices, but if your / and /boot are on the RAID you are in a catch-22. There are work-arounds, but it is much easier to just compile all RAID support into the kernel.
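On a 2.4-series kernel, the result should look something like this in your .config (option names from memory; double-check them under "Multi-device support (RAID and LVM)" in menuconfig):

```
CONFIG_MD=y
CONFIG_BLK_DEV_MD=y
CONFIG_MD_RAID0=y
CONFIG_MD_RAID1=y
CONFIG_MD_RAID5=y
```

Note the "=y" on each line: if any of these say "=m" you are back in the module catch-22 described above.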

Also, since I put xfs filesystems on my machine I emerged the xfs-sources. Other than that, follow the instructions normally.

15. Final steps: install additional packages

Since I put xfs filesystems on my machine I emerged xfsprogs. Other than that, follow the instructions normally.

16. Final steps: /etc/fstab

Here again we need to let the computer know about our two swap partitions. You can specify two or more partitions as swap here, and if you give them all the same priority, all of them will be used at the same time.

Also, be sure to specify the RAID devices, and not the physical hard drives, in the fstab file for anything that lives on a RAID. My fstab looks like this:

```
/dev/md0             /boot        ext2      noauto,noatime     1 2
/dev/md2             /            xfs       noatime            0 1
/dev/hda2            swap         swap      defaults,pri=1     0 0
/dev/hdc2            swap         swap      defaults,pri=1     0 0
/dev/md3             /home        xfs       noatime            0 2
/dev/cdroms/cdrom0   /mnt/cdrom   iso9660   noauto,ro          0 0
proc                 /proc        proc      defaults           0 0
```

After that, follow the instructions normally until you get to grub.

Once you type

```
#grub
```

The commands are the same as the standard install if you follow my partition setup. If you have deviated, type

```
grub> find /boot/grub/stage1
```

to get the hard drive to specify in place of (hd0,0). grub does not understand the software RAID layer; it can only read the physical drives. But because /boot is a RAID 1 mirror, each member partition contains a complete, ordinary copy of the filesystem, so you can still use (hd0,0) in this step.

The menu.lst does change from the normal install. The difference is the specified root device: it is now a RAID device and no longer a physical drive. Mine looks like this:

```
default 0
timeout 30
splashimage=(hd0,0)/boot/grub/splash.xpm.gz

title=My example Gentoo Linux
root (hd0,0)
kernel /boot/bzImage root=/dev/md2
```

17. Installation complete!

Follow normal gentoo instructions in this section.

18. Misc RAID stuff

To see if your RAID is functioning properly after a reboot, do:

```
#cat /proc/mdstat
```

There should be one entry per RAID device. The RAID 1 entries should contain "[UU]", letting you know that both hard drives are "up, up". If one goes down you will see "[U_]". If this ever happens your system will still run fine, but you should replace that hard drive as soon as possible.
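Rather than eyeballing /proc/mdstat, you can script the check and run it from cron. This is only a sketch (the function takes the mdstat text as an argument so the logic can be tried out without a real array):

```shell
# Report whether any array in the given mdstat text is degraded.
# A degraded mirror shows [U_] or [_U] instead of [UU].
check_mdstat() {
    if echo "$1" | grep -q '\[[U_]*_[U_]*\]'; then
        echo "DEGRADED array found - replace the failed disk!"
        return 1
    fi
    echo "all arrays healthy"
    return 0
}

# Real use, e.g. from a daily cron job (the mail command is illustrative):
#   check_mdstat "$(cat /proc/mdstat)" || mail -s "RAID degraded" root < /proc/mdstat
```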

To rebuild a RAID 1:

Power down the system

Replace the failed disk

Power up the system once again

Use

```
raidhotadd /dev/mdX /dev/hdX
```

to re-insert the disk in the array

Watch the automatic reconstruction run

----------

## idiotprogrammer

```
default 0
timeout 30
splashimage=(hd0,0)/boot/grub/splash.xpm.gz
title=My example Gentoo Linux
root (hd0,0)
kernel /boot/bzImage root=/dev/md2
```

Shouldn't this be:

```
title=My Gentoo Linux on RAID
root (hd0,0)
kernel (hd0,0)/boot/bzImage root=/dev/md2
```

(this bottom example is from the official documentation)

----------

## mghumphrey

I forgot to set my password before doing the initial reboot after install. Upon rebooting from the LiveCD, I found I didn't know how to mount the RAID drive.

If this happens to you, here's what to do:

Boot from your Rescue device (it must have RAID support of course).

  # raidstart /dev/md0

Replace "md0" with the device that contains your root partition.

  # mount -t xfs /dev/md0 /mnt/gentoo

Of course, replace "xfs" with the appropriate filesystem type.

----------

## vikwiz

Hi,

if you want not just your data safe, but your server/workstation up and running in case of disk failure, even if you are not there, you should put your swap on RAID 1 also. We have had some machines running like this for years, and have had two disk crashes without real problems. In the first case I didn't even realise for days that it had happened   :Exclamation:   Of course it's better to also have at least a cron job checking your /proc/mdstat for a 'U' turned into '_'. My servers are not near my location and serve real tasks, so uptime is a big concern.

----------

## lurker

Thanks for the useful article. RAID works well under Gentoo except for shutdown: I get a failure to stop the RAID on the root device (because it is busy). I suspect that this does cause problems from time to time.

In contrast, a RAID-ed Red Hat system I have shuts down smoothly.

Any ideas?

----------

## vikwiz

Hi,

 *lurker wrote:*   

> RAID works well under Gentoo except for shutdown.   I get a failure to stop raid on the root device (because it is busy).    I suspect that this does cause problems from time to time.
> 
> 

 

I don't like unclean shutdowns either. I had to change the order in which LVM and RAID shut down, because I have LVM volumes on top of RAID, not the opposite. It's in an init script; I can't tell you which one, but 'grep LVM -r /etc' should find it, worst case.

It could also mean that not all processes terminate cleanly by that point. Can it remount root read-only?

----------

## lurker

 *vikwiz wrote:*   

> I had to change the order LVM and RAID shut down

Yes, me too. I still have a root partition directly on RAID that prevents a clean shutdown. I need to look at how Red Hat does the trick.

----------

## crown

 *vikwiz wrote:*   

> Hi,
> 
> if you want not just your data safe, but your server/workstation up and running in case of disk failure, even if you are not there, you should put your swap on RIAD1 also. 
> 
> 

 

If I want the swap partition to also be mirrored, should that partition be of type "fd" or does it have to be 82? If it's 82, what else do I need to do to mirror it properly?

----------

## dreamer3

 *vikwiz wrote:*   

> if you want not just your data safe, but your server/workstation up and running in case of disk failure, even if you are not there, you should put your swap on RIAD1 also.

 

That sounds slow... what about equal-sized swap partitions on each drive (2x what you need TOTAL) and a smart script that only enables swap on online/working drives at start-up...

----------

## vikwiz

 *crown wrote:*   

>  *vikwiz wrote:*   Hi,
> 
> if you want not just your data safe, but your server/workstation up and running in case of disk failure, even if you are not there, you should put your swap on RIAD1 also. 
> 
>  
> ...

 

It's a normal mirror set, say /dev/md/3, with the following line in fstab:

```
/dev/md/3      none      swap   sw
```

You cannot set a type on the md device itself; the underlying partitions get the normal Linux RAID Auto type (that's "fd").

----------

## vikwiz

 *dreamer3 wrote:*   

>  *vikwiz wrote:*   if you want not just your data safe, but your server/workstation up and running in case of disk failure, even if you are not there, you should put your swap on RIAD1 also. 
> 
> That sounds slow... what about equal sized swap partitiontion of each drive (2x what you need TOTAL) and a smart script that only enables swap on online/working drives on start-up...

 

The problem with this is that if the currently active swap goes bad, the applications that are swapped out to it will segfault or die anyhow.

Yes, it's maybe slow, but having a lot of memory saves you from swapping under normal circumstances.

----------

## dreamer3

 *vikwiz wrote:*   

>  *dreamer3 wrote:*   That sounds slow... what about equal sized swap partitiontion of each drive (2x what you need TOTAL) and a smart script that only enables swap on online/working drives on start-up... 
> 
> The problem with this is that in case of the current swap goes wrong, then actual applications, which are swapped out, will segfault or die anyhow...

 

Duh, must have not had my good brain mounted...   :Embarassed: 

Question though: how does ANY RAID configuration deal with drives that don't actually die but just start writing corrupt data, or is that only rarely the case?

----------

## vikwiz

 *dreamer3 wrote:*   

> Question though, how does ANY RAID configuration deal with drives that don't actually die but just start writing corrupt data, or is this the case only rarely?

 

Yes, it's a good question. I lost my strong belief in RAID. Earlier I thought it could save me from any disk corruption.

The reality is that it writes the blocks to both drives (I'm talking about mirrors, which I have experience with), 'hopefully' correctly. In case of a UDMA CRC error you should get a message in syslog, but there is no sign of corruption in case of a physical error. And when it reads, RAID doesn't compare the two disks; it accepts whichever block arrives first (thus it is optimized for performance, not reliability). It happened to me that about 50% of reads went wrong, and I cannot explain it any other way. And anyway, how could it decide which data is right? For that you would need 3 disks in a mirror!   :Wink:   And an appropriate RAID driver optimized for reliability (which we don't have, AFAIK). And they say it's no better with RAID 5 (I don't have much experience with that).

So RAID is ultimately a false comfort! It saves most of your data when one of your drives burns out or fails dramatically, but it doesn't help with small read/write errors. You should still have a good, up-to-date backup to sleep well. And check SMART info often.

----------

## Auka

 *vikwiz wrote:*   

>  *dreamer3 wrote:*    *vikwiz wrote:*   if you want not just your data safe, but your server/workstation up and running in case of disk failure, even if you are not there, you should put your swap on RIAD1 also. 
> 
> That sounds slow... what about equal sized swap partitiontion of each drive (2x what you need TOTAL) and a smart script that only enables swap on online/working drives on start-up... 
> 
> The problem with this is that in case of the current swap goes wrong, then actual applications, which are swapped out, will segfault or die anyhow.
> ...

 

Hi,

Yes, this is true you should  also "mirror" your swap to save you from segfaults when a disk dies at least if you really want 24x7 and 100% uptime.  :Smile: 

Swap priority should be your friend. If you mount multiple swap partitions they will usually get different priorities, i.e. they will be used "one after another".

If you are keen on performance, you can also use the swap priority settings to set partitions to the same priority; then the kernel will automatically stripe across them "raid0"-style (round-robin):

```
## SWAP
/dev/hdc1               none            swap            sw,pri=1        0 0
/dev/hdd1               none            swap            sw,pri=1        0 0
```

i.e. just use pri=1 and pri=2 to have a backup... See man 2 swapon for more information. As far as I remember, the Linux Software RAID HOWTO also has a section regarding swap (prio).

Same priorities seem acceptable to me, as I like the performance boost and accept the (IMHO negligible) potential problems. (If your server really swaps _a lot_ then you might have a bigger problem than the possibility of a dying disk.) And Linux seems quite robust to me regarding swap, in contrast to, say, Solaris, which IMHO is really sensitive to problems with low/corrupted swap. YMMV. I really do like Linux software RAID.

----------

## dreamer3

 *vikwiz wrote:*   

> The reality is that it writes the blocks on both drives (I talk about mirorrs I have experience with), 'hopefuly' right. In case of UDMA CRC error you should have a message in syslog. But no sign of corruption in case of material error. And when it reads, RIAD doesn't compare the two disks, but accepts the first block arrives (thus optimized for performance, not for reliability). It happened with me, that about 50% of reads went wrong, and I cannot explain this other way. And anyway, how could he decide which data is right? For this you would need 3 disks in mirror!

 

And for everyone who just thought RAID 5 when they read that last sentence... it doesn't compare the parity information on every read (nor could it do anything like that and preserve its speed benefits); it merely calculates parity before writing data to the disks, so if one disk were to start having corruption problems it could corrupt data all across the RAID array...

Now if you caught this after writing files ONCE, it would be possible to correct the error, as the parity information spread across the "good" drives could be used to rebuild the files of the "bad" drive... but if you've opened and saved files a few times, then the corruption will have spread all over your RAID volume into the parity information on the other drives.
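To make the parity idea concrete: RAID 5 parity is plain XOR, so any one missing block can be recomputed from the surviving blocks plus parity. A toy demonstration in shell arithmetic (the byte values are invented):

```shell
# Two data "blocks" (single bytes here, for illustration) and their XOR parity.
d0=$(( 0xA5 ))
d1=$(( 0x3C ))
parity=$(( d0 ^ d1 ))

# If the drive holding d1 fails outright, XOR of the survivors rebuilds it:
rebuilt=$(( d0 ^ parity ))
echo "d1=$d1 rebuilt=$rebuilt"
```

Note this reconstruction only works when the kernel knows which drive has failed; as the posts above point out, parity alone cannot tell you which copy is wrong when a drive silently returns bad data.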

Wow, this all sounds scary.  Can anyone jump in here and paint a happier picture...

Of course today's modern drives are very fast and reliable... until they crash without warning...  :Smile: 

----------

## ElCondor

 *chatwood2 wrote:*   

> 
> 
> We also have to load the module that allows for RAID support, so:
> 
> ```
> ...

 

I booted from a 1.4rc2 or rc3, but there is no module md  :Exclamation: 

Do I have to use another install-cd  :Question: 

* ElCondor pasa *

----------

## delta407

 *ElCondor wrote:*   

> I booted from a 1.4rc2 or rc3, but there is no module md 

 md has been mushed into EVMS and is compiled into the kernel. (Check dmesg; it loads automatically.) However, even though it is loaded, I can't figure out how to get the RAID tools to see it using 1.4rc2... I think I have to use the evms tools, but they fail with a version mismatch.

In short, I don't think software RAID works -- at least with 1.4rc2.

----------

## delta407

Okay, 1.4rc3 works. Just be sure to `mkraid /dev/md0; mkraid /dev/md1; ...` instead of /dev/md* -- /dev seems to contain 256 md entries, and mkraid gets kind of confused.  :Wink: 

----------

## ElCondor

I took livecd-basic-x86-2003011400.iso , this works fine   :Smile: 

I tried with the rc2 before, have to update my install-cds  :Wink: 

* ElCondor pasa *

----------

## ptbarnett

 *vikwiz wrote:*   

> I dont't like unclean shutdowns either, I had to change the order LVM and RAID shut down, because I have LVM volumes on top of RAID, not the opposite. It's in an init-script, can't tell you, which, but 'grep LVM -r /etc' should find it wirst case.

 

I found it:  it's in /etc/init.d/halt.sh:

```
# Try to unmount all filesystems (no /proc,tmpfs,devfs,etc).
# This is needed to make sure we dont have a mounted filesystem
# on a LVM volume when shutting LVM down ...
ebegin "Unmounting filesystems"
# Awk should still be availible (allthough we should consider
# moving it to /bin if problems arise)
for x in $(awk '!/(^#|proc|devfs|tmpfs|^none|^\/dev\/root|[[:space:]]\/[[:space:]])/ {print $2}' /proc/mounts |sort -r)
do
        umount -f -r ${x} &>/dev/null
done
eend 0

# Stop RAID
if [ -x /sbin/raidstop -a -f /etc/raidtab -a -f /proc/mdstat ]
then
        ebegin "Stopping software RAID"
        for x in $(grep -E "md[0-9]+[[:space:]]?: active raid" /proc/mdstat | awk -F ':' '{print $1}')
        do
                raidstop /dev/${x} >/dev/null
        done
        eend $? "Failed to stop software RAID"
fi
```

However, it appears that it fails on the root filesystem, because the root filesystem is still in use.

It appears to follow up by remounting it read-only (after stopping LVM). When I reboot, the filesystem has always been clean, so I'm not too worried. / is also ReiserFS, which allows any necessary recovery to go much faster.

----------

## Forse

What should I do if I want to use cfdisk instead of fdisk?   :Cool: 

----------

## gaz

very nice chatwood! I managed to create a RAID 0 with 2x 20GB HDDs, then I cloned my current gentoo install onto my newly created RAID, booting from GRUB (on a non-RAID partition). The only problem I had with the whole process was marking the partitions as RAID autodetect, which I had to go searching around to figure out how to do... but it works now  :Smile:

I'm having the same problem with bringing the RAID down on my root partition, which always fails when shutting down, but the system boots clean each time so it's not a real problem.

----------

## golemB

I have a pair of nice IBM drives that I was using as hardware RAID 0 with my motherboard's mostly-unsupported RAID chip, back when I was using Windows. Unfortunately they seem to have been corrupted and Windows won't boot.

I'm not sure if the hardware's bad, but I was wondering, if I set the stripe size to be the same, can I simply try to boot and read data off the drives as a software RAID 0 ?  In other words, using a 1.4_final boot CD, can I load software raid without creating partitions?  The drives appear (hde and hdg) in /dev when I boot from the CD.

thanks in advance,

golemB

(p.s. The raid chip on the mobo is a AMI Megaraid IDE 100, which has very little linux support - only the SCSI version is supported in SuSE.  AMI sold their RAID stuff to LSILogic - I can find a bootdisk image but only for RedHat or SuSE 7.x... not sure if these are safe.)

----------

## Lovechild

IBM drives... those babies are buggy as hell, they are prone to sudden failure and death.

My local store has a ~100% fault rate for most of the newer IBM drives - average lifetime is about 6 months. These would be GXP60 and up drives, older IBM drives are just fine - my old 13GB drive is still going strong.

I dunno a single person who would recommend those drives for any kind of usage.

So my bet is that your hds are dead or dying.

----------

## dreamer3

Just to soften the previous post a little: I've been using IBM drives for a few years (15 gig, 30 gig, and a new 120GB Hitachi/IBM) with NO problems... I just bought my new 120GB a month or two ago (it's Hitachi, since they bought the IBM data storage division or something) and haven't had any problems or signs of problems... Of course I'll let you know when I pass that 6-month point, but I expect no problems.

One of my good friends, who's a sysadmin at a private college I was web admin at for a while, backs them, and I put a lot of stock in his opinion.

Not trying to step on toes, Lovechild, just saying I haven't seen the roof cave in on IBMs yet here...

----------

## Obz

just some possible errata/corrections, re the original post:

 *Quote:*   

>  RAID 0: 2 or more hard drives are combined into one big volume. The final volume size is the sum of all the drives. When data is written to the RAID drive, the data is written to all drives in the RAID 0 . This means that drive reads and writes are very fast, but if 1 drive dies you lose all your data. 

 

While I understand that the poster may have meant something along the lines of, "the data is written across all drives" it's possible that the definition may be prone to misinterpretation such as,

"give me some data" -> "now write that data to each(all) of the drives"

which is clearly not RAID0.

Just thought someone might like to update that as those new to RAID (like I was when I first read this topic) could easily be confused.

Thanks,

Mike.

----------

## golemB

Alright, so between this helpful post and a few other webpages (see below), I was able to get these drives recognized by Linux as /dev/md0, and "fdisk -l /dev/md0" was even happy to report the existence of the vfat and ntfs partitions on the array! Woohoo Linux!

Now, maybe because I'm doing this from a Gentoo boot CD and not from a functional bootable hard drive, even after I do a mkraid there are no such devices as /dev/md0p1 and /dev/md0p2, which are mysteriously what fdisk reports as the locations of my old, old partitions. So I can't mount these "partitions" within the /dev/md0 "drive". Unlike what is recommended in this post, my initial (pre-Linux) setup had both raw drives acting as a single drive in RAID 0, so my raidtab is set accordingly. Now how do I get mount to see these partitions md0p1 and md0p2? Do I have to reboot and have the kernel see the "drive" before it will accept this? In other words, must I get a functional Linux kernel running on a separate boot drive before I can start reading data off my array?

Many thanks,

golemB

References:

Another great general Linux RAID intro / howto: 

http://www.linuxplanet.com/linuxplanet/tutorials/4349/1/

Manpage for /etc/raidtab:

http://www.linuxvalley.it/encyclopedia/ldp/manpage/man5/raidtab.5.php

----------

## bryon

I am trying to make a RAID 1 setup with a spare, all 40GB drives on a P2 (350). The problem I am having is that when I do mkraid /dev/md* I get

 *Quote:*   

> 
> 
> cdimage root # mkraid /dev/md*
> 
> detected error on line 1:
> ...

 

and /proc/mdstat gives me 

 *Quote:*   

> 
> 
> Personalities :
> 
> read_ahead not set
> ...

 

i am pretty stumped here, any suggestions?

----------

## Obz

we'll need to have a look at your raidtab, so if you could post that please it might help, thanks.

mike.

----------

## bryon

My /etc/raidtab is as follows

 *Quote:*   

> 
> 
> # / (RAID 1 with hot-spare)
> 
> raiddev                 /dev/md2
> ...

 

I have hda and hdb on the first IDE channel and hdd on the second. What I want is hda and hdd as mirrors of each other (kept on different channels in case one channel fails) and hdb as the hot spare.

I just figured out that I had some extra lines in my /etc/raidtab, but can you make sure that the above does what I am trying to do?

I seem to have a new problem now:

 *Quote:*   

> 
> 
> device /dev/md199 is not described in config file
> 
> handling MD device /dev/md2
> ...

 

mkraid: aborted is not supposed to happen, right?

----------

## Obz

your raidtab configuration looks ok, that's the correct configuration for the spare configuration you're after. i find the line

 *Quote:*   

> device /dev/md199 is not described in config file 

 

rather odd, but it doesnt seem to be fatal.

the real error is in:

 *Quote:*   

> /dev/hdb1 appears to be already part of a raid array -- use -f to
> 
> force the destruction of the old superblock

 

implying that the hard drive was previously part of a raid array.

try running

```
mkraid -f /dev/md2
```

which should force the overwriting of the previous superblock.

----------

## bryon

when i do mkraid -f /dev/md* I get this big warning.  

 *Quote:*   

> 
> 
> cdimage root # mkraid -f /dev/md*
> 
> --force and the new RAID 0.90 hot-add/hot-remove functionality should be
> ...

 

so then i run plain mkraid  /dev/md* and get the same error.

How should I get around it?

----------

## Obz

basically the warning is the raidtools covering their arses in the best way they can, _just_in_case_ something goes wrong. there are two reasons why you're getting all this.

it's possible that hdb1 was previously part of another raid array, im not sure about this because i dont know the history of your hard drives. if this is the case, then you clearly dont intend to use that previous array anymore, and so there is no reason why you cant overwrite the superblock and get on with it.

the second possibility is that earlier in this process, perhaps when you were changing your raidtab or something like that, you accidentally wrote the superblock on that drive without writing the other ones (it's easy to do things like this if your config is just slightly off etc, ive done it before). in this case, again, there's no reason why you cant overwrite the superblock now you have the correct configuration.

the bottom line is, if _you_ are absolutely sure that you want to use hdb1 in _this_ raid array, then give it the --really-force flag and it will go and create the array for you. the bit about your config being in sync with your raid array is fine - the raidtab you have setup _is_ correct for the configuration you intend.

```
mkraid --really-force /dev/md2
```

should be what you're after. i'm trusting that since you want to create a new array on these disks/partitions you dont have any vital data left on them, because you might want to move that somewhere permanent first :)

mike.

----------

## bryon

Yes, these were all 40GB disks that are getting a fresh install, so it doesn't really matter what happens to them. Thanks for your help. Hopefully the rest of the install will go smoothly.

----------

## Obz

heh, not a problem. good luck with the rest of it.

mike.

----------

## Forse

I was just wondering how much software RAID stresses the CPU. I have a Pentium III 500MHz with ~1GB of RAM. Will it be capable of software RAID 5? It's a server... and I would use 3 SCSI disks. Any ideas, comments?   :Cool: 

----------

## puke

 *Quote:*   

> I was just wondering that how much does software raid stress CPU. I have Pentium III 500Mhz with ~1gig ram. I was wondering will it be capable of softwareRAID 5? It's server...and I would use 3 scsi disk. Any ideas, comments?

 

You should be fine, provided you are not doing any intensive reads/writes.

I have moved to software RAID 1 on several Gentoo boxen and have never noticed any performance decrease.  These machines are ~800MHz with only 0.5GB RAM, but none are doing any excessive disk I/O.

Software RAID seems to be really solid.  Always make sure you have offline copies of docs, fstab and raidtab, in the event of a crash.  Test rebuilding your array etc.
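A minimal sketch of that backup habit, assuming a raidtools-era box (the BACKUP path is a placeholder -- in practice point it at removable media or another machine):

```
#!/bin/sh
# Copy the files you would need to rebuild the array after a crash.
# BACKUP is a hypothetical destination; use a floppy, another host, or paper.
BACKUP=/tmp/raid-config-backup
mkdir -p "$BACKUP"
cp /etc/fstab "$BACKUP/" 2>/dev/null
# raidtab only exists on raidtools systems; skip quietly if absent
[ -f /etc/raidtab ] && cp /etc/raidtab "$BACKUP/"
ls "$BACKUP"
```

Run it after every change to fstab or raidtab, and before kernel upgrades.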

----------

## Forse

I don't like using fdisk (too geeky); what should I do if I want to use cfdisk? I mean, is there something important to know if I am planning to use cfdisk? The linuxdoc.org guide to software RAID says:

```
The partition-types of the devices used in the RAID must be set to 0xFD (use fdisk and set the type to ``fd'')
```

How do I do that with cfdisk?

Thanks in advance   :Very Happy: 
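For what it's worth, cfdisk has a [Type] menu for exactly this: highlight the partition, select Type, and enter FD. The fdisk session the quoted guide refers to looks roughly like this (partition 1 is just an example):

```
# fdisk /dev/hda
Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): fd
Command (m for help): w
```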

----------

## Forse

 *Quote:*   

> And, since I want xfs on the / and /home RAIDs 
> 
> ```
> 
> #mkfs.xfs -d agcount=3 -l size=32m /dev/md2 
> ...

 

I am planning to have a 200GB RAID 0 with XFS; what parameters would you recommend for mkfs.xfs and for the chunk-size in raidtab?   :Twisted Evil:  Thanks a lot.

----------

## MehdiYM

Hello,

I have a KT3 Ultra motherboard with an ATA133 RAID controller.

I have bought a Maxtor 40GB ATA133 HDD and connected it to the first connector of the RAID controller.

Then I activated the RAID in the BIOS.

I activated the md, raid0 and raid1 modules in my kernel.

I booted my Linux box and my new HDD was detected as /dev/hde.

So I haven't done anything from your HOWTO and it works.

Why ?

----------

## golemB

 *MehdiYM wrote:*   

> 
> 
> I have a KT3 Ultra motherboard with an ATA133 RAID controller.
> 
> I have bought a Maxtor 40GB ATA133 HDD and connected it to the first connector of the RAID controller.
> ...

 

First of all, RAID requires at least two hard drives, and they should be the same brand, model, and size for best results.  If you don't understand why at this time, then you probably shouldn't try to use RAID.

----------

## golemB

Quick update - I put in a plain hard drive and got a reasonable Gentoo system up and running (multiboot w/ Win98 for kicks).  Then I tried again with the RAID setup in another vain attempt to resurrect my old data.  Yet again, the strange "/dev/md0p1" thing is reported by fdisk but still cannot be mounted.  I think I shall have to simply wipe out the disks and start anew.   :Sad: 

----------

## puddpunk

 *MehdiYM wrote:*   

> Hello,
> 
> I have a KT3 Ultra motherboard with an ATA133 RAID controller.
> 
> I have bought a Maxtor 40GB ATA133 HDD and connected it to the first connector of the RAID controller.
> ...

 

Because that's hardware RAID, isn't it? The hardware (the onboard controller on your motherboard) is doing all the work of RAID-ing your drives together. What is described in this HOWTO is software RAID, where the kernel does all the work of RAID-ing your hard drives together.

----------

## MehdiYM

Thanks puddpunk and golemB. Is there a way to put LILO (or GRUB) on an HDD that is connected to my RAID controller?

My aim is to have a dual boot with Linux & WinXP, but without using the NT loader.

As WinXP uses the MBR of my first HDD (IDE), I thought I could install LILO on the MBR of my second HDD (RAID). Is that possible?

----------

## paleck

I just wanted to say that those instructions were perfect for my set-up here.  Thanks for taking the time to put them up.

----------

## xunil

Some folks who don't mind losing their system in case of disk failure or who buy more reliable disks than retail or OEM IDE drives  :Razz:  might be interested in using RAID-0 on / for maximum performance. This doesn't "just work" like it does w/ RAID-1; instead you must provide some information to the kernel on the boot command line to jump-start the RAID-0 array so the kernel can mount it. At the end of your kernel line in your /boot/grub/grub.conf file, append something like the following:

```
md=0,/dev/hda1,/dev/hdc1
```

Let me explain this so you can write your own for your personal configuration: the first number is the md device you'll be jump-starting (in this case, 0). Following your md device number is a comma-separated list of the devices which make up that md device (in this case, /dev/hda1 and /dev/hdc1). So, let's say the md device you want to mount as / is 2, which corresponds to /dev/md2, and /dev/md2 is composed of /dev/hda3, /dev/hdc2, and /dev/hde1. Here's what you'd append to your kernel line in /boot/grub/grub.conf:

```
md=2,/dev/hda3,/dev/hdc2,/dev/hde1
```

One caveat: these instructions only work if your md device uses a persistent superblock. If not (there's no reason not to, BTW), read /usr/src/linux/Documentation/md.txt.

----------

## greg32

Can you guys have a look at my problem (link posted below)? I am having real trouble getting software RAID 0 to work on my system.

https://forums.gentoo.org/viewtopic.php?t=62575

thanks

----------

## wrex

Just curious why no-one seems to be using mdadm instead of raidtools?

Seems a much cleaner interface to me (the 50 or so lines in /etc/init.d/checkfs are reduced to one line: "mdadm --assemble --scan").

There is a nice O'Reilly article on mdadm.
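For anyone curious, a rough sketch of this HOWTO's raidtools workflow redone with mdadm might look like the following (the device names are only examples, not taken from the original HOWTO):

```
# Create a two-disk RAID 1 from the partitions that would have gone in raidtab
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/hda3 /dev/hdc3

# Record the array so it can be re-assembled by name later
mdadm --detail --scan >> /etc/mdadm.conf

# At boot (or from a rescue CD), bring up everything listed in mdadm.conf
mdadm --assemble --scan
```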

(I submitted bug 2437 to request that checkfs support mdadm and to change the startup order of raid and lvm -- raid should be started first, shouldn't it?!!)

----------

## wrex

 *chatwood2 wrote:*   

> /boot and / will be a RAID 1 (mirror)

 

Since grub needs a real physical disk partition to bootstrap the OS, I don't see the value of mirroring /boot (especially since gentoo is wise enough to leave /boot unmounted by default).

My preference is to do a "poor man's RAID-1" for /boot.  I make /dev/hda1 and /dev/hdb1 equal sized and periodically copy /dev/hda1 to /dev/hdb1 by hand.  The backup command is simply

```
dd if=/dev/hda1 of=/dev/hdb1 bs=8192b
```

In my case, "periodically" means every time I make a change to /boot (but not until I've successfully booted with the change!).  The copy only takes a few seconds for a normal sized (100MB or less) /boot.

Not making the copy until you've verified the change is okay (by booting) is an important point (and actually a very good argument for NOT putting /boot on a RAID 1 mirror -- human error in writing to /boot is far more likely than a disk failure, at least in my case!).   

[Auspex (RIP) explicitly did NOT mirror the OS drive in their NFS file servers  for exactly this reason (and manually "cloned" the OS drive after upgrades instead).]

 I've even tinkered with the idea of a script that runs at the very end of the default runlevel and looks something like

```
(sleep 3600; dd if=/dev/hda1 of=/dev/hdb1) &
```

I don't build a new kernel all that often though, so I'm content to just manually copy to hdb after booting a new kernel (or bootsplash image or any other change to /boot).

----------

## jerome187

I'm having trouble with step #6, the mkraid thingie.

```
cdimage root # mkraid /dev/md0

cannot determine md version: No such file or directory
```

Here's my /etc/raidtab:

```
# / (RAID 0)

raiddev                 /dev/md0

raid-level              0

nr-raid-disks           4

chunk-size              32

persistent-superblock   1

device                  /dev/sda1

raid-disk               0

device                  /dev/sdb1

raid-disk               1

device                  /dev/sdc1

raid-disk               2

device                  /dev/sdd1

raid-disk               3

# swap (RAID 0)

raiddev                 /dev/md1

raid-level              0

nr-raid-disks           4

chunk-size              32

persistent-superblock   1

device                  /dev/sda2

raid-disk               0

device                  /dev/sdb2

raid-disk               1

device                  /dev/sdc2

raid-disk               2

device                  /dev/sdd2

raid-disk               3
```

I have 4 SCSI disks (all the same size and make/model; even the partitions are the same sizes).  I only want / and swap partitions.  What's wrong?

----------

## xunil

First of all, this configuration will not work, since neither LILO nor GRUB can read from RAID-0. You'll need a partition for /boot to hold your kernel and bootloader files unless you plan to use a floppy for booting (an unreliable method at best). Second, there's no need to put your swap on a RAID-0 array, since Linux swap supports "priorities." Take the partitions you were going to use for swap, mark them as Linux swap partitions with fdisk, run mkswap on each, and then add pri=0 to the options column for each swap partition. This assigns the same priority to each swap partition, causing the kernel to distribute the swap load evenly across them.

----------

## fleed

 *Quote:*   

> i'm having trouble with step #6, the mkraid thingie.
> 
> Code:
> 
> cdimage root # mkraid /dev/md0
> ...

 

Have you done

```
modprobe md
```

?

----------

## jerome187

 *xunil wrote:*   

> First of all, this configuration will not work, since neither LILO nor GRUB can read from RAID-0. You'll need a partition for /boot to hold your kernel and bootloader files unless you plan to use a floppy for booting (an unreliable method at best). Second, there's no need to put your swap on a RAID-0 array, since Linux swap supports "priorities." Take the partitions you were going to use for swap, mark them as Linux swap partitions with fdisk, run mkswap on each, and then add pri=0 to the options column for each swap partition. This assigns the same priority to each swap partition, causing the kernel to distribute the swap load evenly across them.

 

Where do I add the pri=0? Is that something in fdisk?

----------

## xunil

 *jerome187 wrote:*   

>  *xunil wrote:*   First of all, this configuration will not work since neither LILO nor GRUB can read from RAID-0. You'll need a partition for /boot to hold your kernel and bootloader files unless you plan to use a floppy for booting (an unreliable method at best). Second, there's no need to put your swap on a RAID-0 array since Linux swap supports "priorities." Make the Linux software RAID autodetect partitions you were going to use swap partitions (mark them as Linux swap partitions w/ fdisk and mkswap each partition) and then add pri=0 to the options column for each swap partition. This will assign the same priority to each swap partition, requiring the kernel to distribute the swap load across each swap partition evenly. 
> 
> where do i add the pri=0 too?  is that something in fdisk?

 

In the options column in your /etc/fstab. Gentoo puts an <opts> header over it. The default Gentoo configuration has just "sw" in the column for the one swap partition. You want yours to look like "sw,pri=0" for each of your four swap partitions.
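Concretely (a sketch assuming the four-disk /dev/sd?2 layout from the raidtab above), the swap lines in /etc/fstab would look something like:

```
/dev/sda2    none    swap    sw,pri=0    0 0
/dev/sdb2    none    swap    sw,pri=0    0 0
/dev/sdc2    none    swap    sw,pri=0    0 0
/dev/sdd2    none    swap    sw,pri=0    0 0
```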

----------

## jerome187

I'm having trouble creating a ReiserFS filesystem on RAID 0; it does its thing up to 80% and then dies. I tried removing the last drive and trying again, and it did the same thing. I've also waited about half an hour and it still stayed at 80%.

???

----------

## jerome187

could this be a power supply problem?  I have 5 SCSI hard drives, 2 P2 233's, and 1 SCSI cdrom on a 350 watt PSU, could that be a problem?

----------

## xunil

 *jerome187 wrote:*   

> could this be a power supply problem?  I have 5 SCSI hard drives, 2 P2 233's, and 1 SCSI cdrom on a 350 watt PSU, could that be a problem?

 

Quite possibly. That or you might have duplicate SCSI IDs (although if that were the case, I'm not sure your SCSI adapter would even POST).

----------

## jerome187

I'll disconnect 2 or 3 drives and try again. I'm pretty sure all my SCSI IDs are right, because I had them messed up before and the drives wouldn't POST, like you said.

----------

## anoland

Sorry for resurrecting an old thread... but I just used it and it worked perfectly. Although I would add that you should emerge raidtools before you finish the install. I was tripped up for a little bit because I configured my mirror with a failed disk in the raidtab and when I went to add it later raidhotadd was not found.

My 2¢

----------

## axa

 :Crying or Very sad: 

First, I tried to set up software RAID on my new Gentoo box.

If I succeed, I can then install Gentoo on EVMS2 smoothly.

Unfortunately, I have been trying for many days...

I always get a kernel panic when I boot from my software RAID kernel image, named "bzImage.swraid"...

  ‧Kernel panic error message 

>%------>%-----CUT-OUT>%---->%------>%

EXT3-fs:unable to read superblock

EXT2-fs:unable to read superblock

isofs_read_super: bread failed , dev=09:02 , iso_blknum=16,block=32

romfs: unable to read superblock

read_super_block: bread failed (dev09:02,block 64,size 1024)

Kernel panic: VFS:Unable to mount root fs on 09:02

------------------END-------------------

 :Embarassed: 

The kernel image (bzImage.swraid) I built includes the following major kernel options:

Multi-device support (RAID and LVM)  --->

[*] Multiple devices driver support (RAID and LVM)

<*>  RAID support

<*>   RAID-0 (striping) mode

<*>   RAID-1 (mirroring) mode

<*>   RAID-4/RAID-5 mode 

<*>  Logical volume manager (LVM) support    

<*>  Device-mapper support (EXPERIMENTAL) (NEW)    

<*>    Bad Block Relocation Device Target             

<*>    Sparse Device Target          

File systems  --->

<*> Reiserfs support

[*] /dev file system support (EXPERIMENTAL)  

[*]   Automatically mount at boot      

[ ]   Debug devfs  

I'm using ReiserFS on all of my Gentoo box; my fstab and GRUB menu.lst are as follows:

‧/etc/fstab

 *Quote:*   

> 
> 
> /dev/md0                /boot           reiserfs        noauto,noatime          1 2
> 
> /dev/md2                /               reiserfs        noatime                 0 1
> ...

 

/boot/grub/menu.lst

 *Quote:*   

> 
> 
> default 2
> 
> timeout 3
> ...

 

I've tried to trace the problem myself, but I cannot find any configuration mistakes...

Should I NOT build devfs into the kernel, since the device names cannot be mapped from /dev/md*?  :Rolling Eyes: 

Do you want more information about my software RAID or EVMS configuration?

I'm happy to provide it.  :Rolling Eyes: 

----------

## labrador

I don't see an entry for proc in your /etc/fstab:

proc             /proc            proc     defaults        0 0

----------

## NicholasDWolfwood

Although I'm not too informed in the software RAID area, the kernel panic is because you cannot boot a software RAID 0 array... the boot partition cannot be on a RAID 0 array.

----------

## symbiat

 *chatwood2 wrote:*   

> 
> 
> Now we need to create the RAID drives so:
> 
> ```
> ...

 

I followed these instructions and have problems. I'm using SCSI disks, so I set up my /etc/raidtab as follows:

```

# /boot - RAID 1

raiddev                 /dev/md0

raid-level              1

nr-raid-disks           2

chunk-size              32

persistent-superblock   1

device                  /dev/sda1

raid-disk               0

device                  /dev/sdc1

raid-disk               1

# / - RAID 1

raiddev                 /dev/md1

raid-level              1

nr-raid-disks           2

chunk-size              32

persistent-superblock   1

device                  /dev/sda3

raid-disk               0

device                  /dev/sdc3

raid-disk               1

# /home - RAID 1

raiddev                 /dev/md2

raid-level              1

nr-raid-disks           2

chunk-size              32

persistent-superblock   1

device                  /dev/sda5

raid-disk               0

device                  /dev/sdc5

raid-disk               1

# /usr - RAID 1

raiddev                 /dev/md3

raid-level              1

nr-raid-disks           2

chunk-size              32

persistent-superblock   1

device                  /dev/sda6

raid-disk               0

device                  /dev/sdc6

raid-disk               1

```

sda1 is the same size as sdc1, and sda3 is the same size as sdc3, etc. (This machine initially ran RedHat, but I'm switching over to Gentoo.) I load the RAID module with modprobe. Then when I run mkraid /dev/md0, I get a warning saying that this partition already has an ext2 filesystem on it. So I use the -f flag to force the initialisation of the RAID:

```

DESTROYING the contents of /dev/md0 in 5 seconds, Ctrl-C if unsure!

handling MD device /dev/md0

analyzing super-block

disk 0: /dev/sda1, 104391kB, raid superblock at 104320kB

disk 1: /dev/sdc1, 104391kB, raid superblock at 104320kB

md: md0: raid array is not clean -- starting background reconstruction

raid1: raid set md0 not clean; reconstructing mirrors

```

At this point the whole machine hangs; I see no drive lights and I cannot switch consoles. I'm using the basic LiveCD to do this install. Any ideas? Has anyone come across this before?

----------

## slais-sysweb

 *symbiat wrote:*   

> 
> 
> I followed these instructions and have problems. Im using SCSI disks, so I setup my /etc/raidtab as follows:
> 
> ```
> ...

 

You appear to be trying to put /boot on md0. This will not work, as you have no software RAID until the kernel is loaded. Make sure you have a separate /boot partition. Provided the md driver is built into the kernel and your partitions are type fd (autodetect), you can use RAID for the / (root) partition.

But having said that, you do not seem to have got that far. Check your partitioning with fdisk and overwrite the beginning of each partition with

```

dd if=/dev/zero of=/dev/sdc1 bs=512 count=2 

dd if=/dev/zero of=/dev/sda1 bs=512 count=2 

 
```

----------

## symbiat

 *slais-sysweb wrote:*   

> 
> 
> You appear to be trying to put /boot on md0 This will not work as you have no software RAID until the kernel is loaded. Make sure you have a separate /boot partition. Provided the md driver is built in to the kernel and your partitions are type fd autodetect you can use RAID for the / (root) partition.
> 
> 

 

I want to use RAID 1 on /boot - I know RAID 0 doesn't work for LILO or Grub.

I am booting off the LiveCD - does this mean that the installation kernel doesn't support RAID on /boot?

 *slais-sysweb wrote:*   

> 
> 
> But having said that you do not seem to have got that far. Check you formatting with fdisk and overwrite the beginning of the partition with
> 
> ```
> ...

 

This is prob. a good idea.

----------

## ZeroS

I'm also trying to get this to work.

/boot is RAID1

/ is RAID0

My raidtab

```

# /boot (RAID 1)

raiddev                 /dev/md0

        raid-level              1

        nr-raid-disks           2

        chunk-size              32

        persistent-superblock   1

        device                  /dev/hde1

        raid-disk               0

        device                  /dev/hdf1

        raid-disk               1  

# / (RAID 0)

raiddev                 /dev/md1

        raid-level      0

        nr-raid-disks   2

        persistent-superblock   1

        chunk-size      32

        device  /dev/hde2

        raid-disk       0

        device  /dev/hdf2

        raid-disk       1

```

My GRUB.conf

```

default 0

timeout 30

splashimage=(hd0,0)/boot/grub/splash.xpm.gz

title=Gentoo Linux/WOLK4.9s

root (hd0,0)

kernel (hd0,0)/kernel-2.4.20-wolk4.9s root=/dev/md1 vga=791 md=1,/dev/hde2,/dev/hdf2

initrd (hd0,0)/initrd-2.4.20-wolk4.9s

```

my fstab

```

# NOTE: If your BOOT partition is ReiserFS, add the notail option to opts.

/dev/md0                /boot           ext2            noauto,noatime  1 1

/dev/md1                /               reiserfs        noatime         0 0

/dev/hde3               none            swap            defaults,pri=0  0 0

/dev/hdf3               none            swap            defaults,pri=0  0 0

/dev/cdroms/cdrom0      /mnt/cdrom      iso9660         noauto,ro       0 0

# NOTE: The next line is critical for boot!

none                    /proc           proc            defaults        0 0

# glibc 2.2 and above expects tmpfs to be mounted at /dev/shm for

# POSIX shared memory (shm_open, shm_unlink). 

# (tmpfs is a dynamically expandable/shrinkable ramdisk, and will

#  use almost no memory if not populated with files)

# Adding the following line to /etc/fstab should take care of this:

none                    /dev/shm        tmpfs           defaults        0 0

```

Now after rebooting I get hit with this ugly message

```

md: invalid raid superblock magic on ide/host0/bus1/target1/lun0/part2

md: ide/host0/bus1/target1/lun0/part2 has invalid sb, not imported

md: md_import_device returned -22

md: ide/host0/bus1/target1/lun0/part2's event counter: 00000004

md: former device ide/host0/bus1/target1/lun0/part2 is unavailble, removing from array!

```

After that, the kernel panics because it can't mount /dev/md1.

I can raidstart /dev/md0 and /dev/md1 and mount them just fine from the LiveCD. 

It must be something else, but I can't think of what. All of the RAID options are selected and built into my kernel.

----------

## slais-sysweb

 *symbiat wrote:*   

> 
> 
> I want to use RAID 1 on /boot - I know RAID 0 doesn't work for LILO or Grub.
> 
> I am booting off the LiveCD - does this mean that the installation kernel doesn't support RAID on /boot?
> ...

 

Well, I've never tried using RAID for /boot. My assumption is that it is not possible to have software RAID without a kernel, and that has to be loaded from a disk. True, RAID 1 duplicates the disks, so in principle you could boot from just one of the pair, but why complicate things? You only need to mount /boot to write a new kernel. As I only do that about twice a year, I simply format two partitions as /boot, duplicate the content, and if anything goes wrong use a floppy boot disk to load from the spare disk. For a server that will only need rebooting for a new kernel, that doesn't seem like too much work.

----------

## symbiat

 *slais-sysweb wrote:*   

> 
> 
> Well, I've never tried using RAID for /boot. My assumption being that it is not possible to have software RAID without a kernel and that has to be loaded from a disk.

 

True, though RedHat allows you to do this quite easily.

 *slais-sysweb wrote:*   

> 
> 
> True RAID 1 duplicates the disks so in principle you could boot from just one of the pair, but why complicate things? You only need to mount /boot to write a new kernel. As I only do that about twice a year I simply format two partitions as /boot, duplicate the content, and if anything goes wrong use a floppy boot disk to load from the spare disk.   For a server that will only need rebooting for a new kernel that dosn't seem too much work.

 

I see your point - I will try it without RAID for /boot. I just assumed Gentoo would support everything RedHat does (and then some more  :Smile: ).

----------

## slais-sysweb

 *symbiat wrote:*   

> 
> 
> I see your point - I will try it without RAID for /boot. I just assumed Gentoo would support everything RedHat does (and then some more .

 

I did read the RAID howto that was based on Red-Hat. It appeared that it required an initial ram-disk to boot. So that would still need to be somewhere on a disk that did not rely on RAID.

I'm sure Gentoo can do everything with enough effort. But then my choice of Gentoo, especially for servers, was very much motivated by the possibility of only installing what I really need.

The beauty of Gentoo is that after installing the system

```

emerge -pv ntp mysql apache mod_php

nano -w /etc/make.conf

emerge ntp mysql apache mod_php
```

provide all that I need and nothing more.

----------

## symbiat

 *slais-sysweb wrote:*   

> 
> 
> I did read the RAID howto that was based on Red-Hat. It appeared that it required an initial ram-disk to boot. So that would still need to be somewhere on a disk that did not rely on RAID.

 

I have been using /boot on a RAID partition with Redhat on three servers with no problems. Disk Druid allows you to set this up.

 *slais-sysweb wrote:*   

> 
> 
> I'm sure Gentoo can do everything with enough effort.
> 
> 

 

I'm sure it can, but I think this is a chicken-and-egg type situation for Gentoo  :Smile: 

 *slais-sysweb wrote:*   

> 
> 
> But then my choice of Gentoo, espcially for servers, was very much motivated by the possibility of only installing what I really need. The beauty of Gentoo is that after installing the system:
> 
> ```
> ...

 

I understand all this - this is one reason why I'm switching. I generally build stuff myself from source for my servers precisely because I do not want to rely on the vendor for updates and patches. So all my RAID'ed RedHat servers run custom builds of Apache, PHP and MySQL (also qmail + vpopmail + Courier IMAP). I just prefer to work from source for important stuff - you can blame my BSD background!

----------

## cybrjackle

I was just curious what people thought I should do with my system.

Dell Precision WorkStation 420

Dual p3@866MHz

512MB Rambus 800

4x36 GB scsi 10k (on the same controller)  "ch A"

&

Sun D130 w/ 3x36GB scsi 10k "ch B"

I like the idea of RAID, but I also like LVM (so I can create multiple filesystems and resize them): /usr /var /opt /home /usr/local /tmp (maybe /usr/portage). BUT I will also want to use a 2.6 kernel at some point. Any recommendations? Is LVM --> LVM2 an easy transition, and will LVM2 still work with a 2.4 kernel? Should I go with EVMS on top of RAID?

This is my "everything box" internet, work, GAMES!  Does anyone have any suggestions what you would do with this kind of setup?  I don't know how much I will use the D130, might just be backups and extra's like mp3's or something.  Anyway, I'm all :ears:

Thank you,

----------

## BrianW

 *xunil wrote:*   

> Some folks who don't mind losing their system in case of disk failure or who buy more reliable disks than retail or OEM IDE drives  might be interested in using RAID-0 on / for maximum performance. This doesn't "just work" like it does w/ RAID-1; instead you must provide some information to the kernel on the boot command line to jump-start the RAID-0 array so the kernel can mount it. At the end of your kernel line in your /boot/grub/grub.conf file, append something like the following:
> 
> ```
> md=0,/dev/hda1,/dev/hdc1
> ```
> ...

 

If this were added to the original How-To in the first post it would be very helpful. I have been caught up on this for a day....

Thanks for the great community, guys. On our 2 x 40GB Maxtor 133 drives we are getting 80+MB/s on dd and hdparm tests. A great option for increased I/O without expensive hardware.

Brian

----------

## bryon

I am trying to recover a RAID array without destroying the data. My file server had been humming along just fine for a while, and then I rebooted it to update the kernel. Upon rebooting I got a kernel panic, so I tried to switch to the old kernel, and that one did not want to work either, so now I am trying to restart it from the install CD. I have the original /etc/raidtab in place, but I get nervous when it says DESTROYING the contents of /dev/md. Should I do this? Will I lose any of my data?

I have put the /etc/raidtab in place and am about to run mkraid /dev/md*, but it says that it is about to DESTROY the contents of /dev/md, so I cancelled it.

----------

## BrianW

What RAID level are you running? If you are running RAID 0 and have no backup, I feel for you. Sorry not to be of any help, as I am such a noob with this Linux software RAID stuff... But a word to the wise: if you run RAID 0 and care about your data, make a backup regularly.

Brian

----------

## edge3281

I am having a problem with grub. When I try to boot my machine normally, it just hangs at grub and won't let me do anything; the keyboard doesn't even respond. I am doing a RAID on /boot; could that be the problem? I have double- and triple-checked all of my config files, and they match the howto with the exception that I use /dev/md1 instead of /dev/md2 for the / RAID.

Any ideas as to why this might happen this way?

Thanks

----------

## Corw|n of Amber

That howto worked for me! w00t! Now my computer is FAST even for the disk accesses!

----------

## BlinkEye

Can any of you help me out? I'd appreciate any help. I made a new topic: https://forums.gentoo.org/viewtopic.php?p=930844#930844

----------

## peaceful

 *axa wrote:*   

> 
> 
>   ‧Kernel panic error message 
> 
> >%------>%-----CUT-OUT>%---->%------>%
> ...

 

I'm having almost the exact same problem (kernel panic) using an almost identical setup with ReiserFS.  Has anyone solved this?

----------

## GNU/Duncan

I have created a RAID array, but when formatting with mkfs.xfs /dev/md1 an error occurs:

MD array /dev/md1 not clean state

If I use Reiser or ext2 all is OK. Any solution?  :Wink: 

----------

## PillowBiter

OK, so I screwed this up... I've booted back up with the Gentoo 2004.0 LiveCD and am trying to mount /dev/md1 to /mnt/gentoo, but it won't let me. I've modprobed md, but that didn't help, and raidstart doesn't work from the LiveCD. How do I get back into that RAID volume?

----------

## peaceful

 *PillowBiter wrote:*   

> ok, so I screwed this up... so I'v boot back up with the gentoo 2004.0 livecd, and am trying to mount /dev/md1 to /mnt/gentoo but it won't let me. I'v modprobe'd md but that didn't help any and raidstart dosn't work from the live cd. How do I get back into that raid volume?

 

modprobe md

mdadm --assemble [raid device] [devices in the raid volume]

example:

mdadm --assemble /dev/md0 /dev/hda1 /dev/hdc1

Now if only I could get my raid volumes to be read when I try to boot off them...

----------

## skyfolly

I often heard that BitTorrent is very good at killing hard drives (I download a lot of crap from the net). As the how-to says, with RAID 0, if one drive dies the data in the array is gone too. Do you guys think it's worth the effort?

----------

## taskara

If you are using either the 2004.0 testing or 2004.0 official LiveCDs, make sure you stop the arrays before you reboot after the initial install; otherwise the arrays fail to start.

found this out the hard way  :Wink: 

I did report this during testing.. but it wasn't fixed  :Sad: 

----------

## PenguinPower

Please don't use XFS with RAID 5 (I learned this the hard way). Although I think it's a great FS with great performance, it doesn't perform well at all with software RAID. This is because XFS writes to the disk in 4096-byte blocks and in between writes 512-byte blocks for journaling.

Also, don't use right-symmetric as the parity algorithm. Use left-symmetric!! I don't know why anybody would need right-symmetric, because all hard disks perform better with left-symmetric as the parity algorithm.

When using ext3/ext2 on your software RAID device, I recommend using mkfs.ext3 (or mkfs.ext2) with the following parameters:

mkfs.ext3 -b 4096 -R stride=8 /dev/md?

(when using a chunk-size of 32 on that raid)

stride = (Chunksize in KB) / (Blocksize in KB)
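A quick sanity check of that formula in shell arithmetic, using the values from the mkfs.ext3 line above:

```
# stride = chunk-size (KB) / filesystem block size (KB)
chunk_kb=32
block_kb=4        # mkfs.ext3 -b 4096
echo "stride=$((chunk_kb / block_kb))"   # prints stride=8
```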

All these optimizations doubled my RAID speed, and that was with 3x Maxtor Maxline II Plus 250GB (7200RPM) drives.

----------

## BumptiousBob

 *PenguinPower wrote:*   

> Please don't use XFS on RAID 5 (I learned this the hard way). Although I think it's a great FS with great performance, it doesn't perform at all on software RAID. This is because XFS writes 4096-byte blocks to the disk and in between writes 512-byte blocks for journaling.
> 
> Also, don't use right-symmetric as the parity algorithm. Use left-symmetric!! I don't know why anybody would need right-symmetric, because all hard disks perform better with left-symmetric as the parity algorithm.
> 
> When using ext2/ext3 on your software raid device, I recommend running mkfs.ext3 (or mkfs.ext2) with the following parameters:
> ...

 

Thanks for the tip. I was just about to set up a large RAID 5 array and will avoid XFS.

----------

## BlinkEye

 *PenguinPower wrote:*   

> Please don't use XFS on RAID 5 (I learned this the hard way). Although I think it's a great FS with great performance, it doesn't perform at all on software RAID. This is because XFS writes 4096-byte blocks to the disk and in between writes 512-byte blocks for journaling.
> 
> Also, don't use right-symmetric as the parity algorithm. Use left-symmetric!! I don't know why anybody would need right-symmetric, because all hard disks perform better with left-symmetric as the parity algorithm.
> 
> When using ext2/ext3 on your software raid device, I recommend running mkfs.ext3 (or mkfs.ext2) with the following parameters:
> ...

 

any chance to rebuild your raid with these parameters without losing your system?

----------

## smith84594

 *peaceful wrote:*   

>  *PillowBiter wrote:*   ok, so I screwed this up... I've booted back up with the gentoo 2004.0 livecd and am trying to mount /dev/md1 to /mnt/gentoo, but it won't let me. I've modprobe'd md but that didn't help, and raidstart doesn't work from the live cd. How do I get back into that raid volume? 
> 
> modprobe md
> 
> mdadm --assemble [raid device] [devices in the raid volume]
> ...

 

Try boot options:

gentoo doataraid

-then-

when at the command line, do:

modprobe raid*

Worked for me.

----------

## PillowBiter

Grrrr...... This one really has me racking my brain. I've tried following this how-to twice now, both times with the same problem. I know my raidtab is ok, and I've followed the directions exactly (except for the partition layout). I've got:

/dev/hde1 <---boot

/dev/hdg1 <---swap

/dev/hde2 \_____/dev/md0

/dev/hdg2 /

Both times everything seemed to go ok until I rebooted and got:

```

md: autodetecting RAID arrays.

md: autorun ...

md: ... autorun DONE.

EXT3-fs: unable to read superblock

EXT2-fs: unable to read superblock

FAT: unable to read boot sector

VFS: Cannot open root device "md0" or md0

Please append a correct "root=" boot option

Kernel panic: VFS: Unable to mount root fs on md0

```

I have my grub.conf set-up like this:

```

title=Gentoo RAID

root (hd0,0)

kernel /bzImage root=/dev/md0

```

And I turned on RAID and RAID 0 support in the kernel (I'm only using RAID 0). What could I be doing wrong?

----------

## mudrii

 *PenguinPower wrote:*   

> Please don't use XFS on RAID 5 (I learned this the hard way). Although I think it's a great FS with great performance, it doesn't perform at all on software RAID. This is because XFS writes 4096-byte blocks to the disk and in between writes 512-byte blocks for journaling.
> 
> 

 

Is this only for RAID 5, or is it just as bad for RAID 0 and RAID 1?

----------

## PenguinPower

 *Quote:*   

> Is this only for RAID 5, or is it just as bad for RAID 0 and RAID 1?

 

With RAID 0 and RAID 1 you won't have a problem with XFS, since no parity algorithm is involved there.

 *Quote:*   

> any chance to rebuild your raid with these parameters without losing your system?

 

Yes. It's called raidreconf. But be aware, it's not production ready. For instance, I screwed up by running out of memory and the kernel killed raidreconf  :Sad:  (I forgot to mount the swap partitions and totally screwed up the first 256MB of my raid; I'm still very grateful that it was only the first 256MB, since this was only gentoo stuff  :Smile: , and not my movies and MP3s, not to mention my school work)

make sure all your drives are up (see cat /proc/mdstat)

do nano -w /etc/raidtab and change right-symmetric into left-symmetric in the running environment

then reboot with the gentoo live cd.

```

modprobe md

swapon /dev/<your swappartition(s)>

hdparm -d1 -c3 -u1 -m16 -a64 -A1 /dev/<HD1>

hdparm -d1 -c3 -u1 -m16 -a64 -A1 /dev/<HD2>

hdparm -d1 -c3 -u1 -m16 -a64 -A1 /dev/<HD3>

```

Now make the raidtab again. Make sure this is an EXACT copy of the one you are using right now, thus still with right-symmetric; otherwise raidreconf will fail.

```
nano -w /etc/raidtab.old 

cp /etc/raidtab.old /etc/raidtab
```

Now change right-symmetric into left-symmetric

```
nano -w /etc/raidtab
```

```
raidreconf -o /etc/raidtab.old -n /etc/raidtab -m /dev/<your raid device>
```

Now, have a good night's sleep, because it's going to take a while, depending on your drive speed and size.

A conversion from XFS to ext3 is also possible for your raid 5: drop one hard disk out of the array and copy the data onto it, format the array as ext3, copy the data back, and hot-add the disk back into the raid.

But this only works if the total amount of data on your raid doesn't exceed the size of one hard disk in the raid.

(use df / to determine this)

make sure all your drives are up (see cat /proc/mdstat) and that you have ext3 and XFS support enabled in the kernel (cat /proc/filesystems)
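The raidtab edits above can also be done non-interactively with sed instead of nano. A sketch using stand-in files in the current directory (on a real system the files live in /etc, and the raidtab would contain the full array definition, not just this one line):

```shell
# stand-in for the old raidtab; assumes a single parity-algorithm line
printf 'parity-algorithm\tright-symmetric\n' > raidtab.old
# working copy that will describe the new layout
cp raidtab.old raidtab
# flip the parity algorithm in the working copy only
sed -i 's/right-symmetric/left-symmetric/' raidtab
cat raidtab
```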

----------

## krypt

 *GNU/Duncan wrote:*   

> I have created a raid array, but when formatting with mkfs.xfs /dev/md1 an error occur
> 
> MD array /dev/md1 not clean state
> 
> if I use raiser or ext2 all is ok. Any solution? ;)

 

Had the same problem; upgrading to xfsprogs 2.6.3 (ACCEPT_KEYWORDS="~x86" emerge xfsprogs on the x86 platform) solved it.

This is a known bug on the XFS developer mailing list.

 It happens when there had been another partition with another filesystem on the disk.

 Don't forget the -f switch to force it, even with the 2.6.3 version of xfsprogs.

bye Alex

----------

## Whitewolf

 *xunil wrote:*   

> (...)
> 
> Second, there's no need to put your swap on a RAID-0 array since Linux swap supports "priorities."

 

Yes, it does; but what about reliability?

Assuming there are 3 swap partitions in a system, each with pri=1: what happens to the processes which swapped onto one of those drives when it crashes, without RAIDed swap?
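One way to get that reliability (a sketch, not something proposed in the thread; the partition names are hypothetical) is to mirror swap itself: build a small RAID 1 from two of the swap partitions and swap on the md device instead of using priorities:

```

# /etc/raidtab fragment: swap on RAID 1

raiddev                 /dev/md3

raid-level              1

nr-raid-disks           2

chunk-size              32

persistent-superblock   1

device                  /dev/hda2

raid-disk               0

device                  /dev/hdc2

raid-disk               1

```

with a matching /etc/fstab line:

```

/dev/md3      swap      swap      defaults     0 0

```

The trade-off is that you give up the doubled swap bandwidth that equal-priority swap partitions provide, in exchange for surviving a single drive failure.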

----------

## mudrii

I have some problems booting my box  :Sad: 

So I boot and GRUB starts with the menu, but when I try to run the kernel nothing happens.

My kernel was compiled with 

```

Multi-device support (RAID and LVM) ---> 

[*] Multiple devices driver support (RAID and LVM) 

<*> RAID support 

<*> RAID-0 (striping) mode 

<*> RAID-1 (mirroring) mode 

< > RAID-4/RAID-5 mode 

File systems ---> 

<*> Reiserfs support 

[*] /dev file system support (EXPERIMENTAL) 

[*] Automatically mount at boot 

[ ] Debug devfs

```

My /etc/raidtab

```

# /boot (RAID 1)

raiddev                 /dev/md0

raid-level              1

nr-raid-disks           2

chunk-size              32

persistent-superblock   1

device                  /dev/hda1

raid-disk               0

device                  /dev/hdc1

raid-disk               1

   

# / (RAID 0)

raiddev                 /dev/md1

raid-level              0

nr-raid-disks           2

chunk-size              32

persistent-superblock   1

device                  /dev/hda2

raid-disk               0

device                  /dev/hdc2

raid-disk               1 

```

My /boot/grub/grub.conf

```

default 0

timeout 10

splashimage=(hd0,0)/grub/splash.xpm.gz

title=Gentoo

root (hd0,0)

kernel(hd0,0)/boot/kernel-2.6.5 root=/dev/md1

md=0,/dev/hda1,/dev/hdc1

md=1./dev/hda2,/dev/hdc2

```

My /etc/fstab

```

/dev/md0      /boot     reiserfs      noauto,notail,noatime     1 2

/dev/md1      /         xfs       noatime            0 1

/dev/hda3     swap      swap      defaults,pri=1     0 0

/dev/hdc3     swap      swap      defaults,pri=1     0 0

/dev/cdroms/cdrom0      /mnt/cdrom      iso9660         noauto,ro             $

/dev/fd0                /mnt/floppy     auto            noauto                $

none                    /proc           proc            defaults              $

none                    /dev/shm        tmpfs           defaults              $

```

My kernel does not boot: no error message, nothing, just the GRUB menu  :Sad: 

PLS HELP  :Wink: 

----------

## gringo

thanks for this great guide !

I'm building software raids on my sata drives and get errors when building the devices with mkraid. 

md0 is created without problems, but when I try to build md2 it says "/dev/md2: no such file". Any tips??

TIA

----------

## Ganto

 *bryon wrote:*   

> when i do mkraid -f /dev/md* I get this big warning.  
> 
>  *Quote:*   
> 
> cdimage root # mkraid -f /dev/md*
> ...

 

i just ran into the same situation. i wanted to install a gentoo box from a live-cd where the raid modules weren't compiled into the kernel. after a restart from the live-cd the arrays weren't built during the modprobing of the necessary modules (naturally i copied the valid raidtab to /etc first). a mkraid -R /dev/md* could then reconfigure my arrays without losing any data. 

is this situation normal? when i compile these modules into the kernel, will the arrays be built automatically during every startup? or is there another config i need?

ganto

----------

## BlinkEye

 *PenguinPower wrote:*   

>  *Quote:*   any chance to rebuild your raid with these parameters without loosing your system? 
> 
> Yes. Its called raidreconf. But be aware, it's not production ready. For instance, i screw'd up by running out of memory and the kernel killed raidreconf ( If forgot to mount the swappartitions, and totally screw'd up the first 256MB of my raid, i still am very gratefull that is was only with first 256MB since this was only gentoo stuff , and not my movies and MP3's, not to mention my school work)
> 
> make sure all your drives are up (see cat /proc/mdstat) 
> ...

 

i'm truly sorry i haven't replied to you; unfortunately the topic-reply notification doesn't always work. nevertheless i stumbled upon your post and tried your mini-guide (thanks a lot). the problem is that one hd is mapped as a scsi device within the livecd environment, and hence i'm only allowed to change the readahead size (which imho is already set to 64k). unfortunately the whole raid and livecd environment doesn't work properly: there are never all partitions up, and when booting back into my system i normally have to 

```
# mdadm /dev/mdX --add /dev/sdXY
```

 for the drive(s) that is (are) down, to get them all up and running again. i nevertheless tried your suggestions but it didn't work (i haven't noted the error messages though). 

i'm curious about the hdparm results of your raid arrays.

mine are (and they are only half as fast as other people's raid0 arrays):

```
 pts/9 hdparm -tT /dev/md0 /dev/md1 /dev/md2 /dev/md3 /dev/sda /dev/sdb /dev/sdc

/dev/md0:

 Timing buffer-cache reads:   128 MB in  0.38 seconds =338.68 MB/sec

 Timing buffered disk reads:  64 MB in  1.13 seconds = 56.60 MB/sec

/dev/md1:

 Timing buffer-cache reads:   128 MB in  0.38 seconds =337.78 MB/sec

 Timing buffered disk reads:  64 MB in  1.15 seconds = 55.61 MB/sec

/dev/md2:

 Timing buffer-cache reads:   128 MB in  0.38 seconds =338.68 MB/sec

 Timing buffered disk reads:  64 MB in  1.12 seconds = 57.15 MB/sec

/dev/md3:

 Timing buffer-cache reads:   128 MB in  0.38 seconds =332.52 MB/sec

 Timing buffered disk reads:  64 MB in  1.16 seconds = 55.09 MB/sec

/dev/sda:

 Timing buffer-cache reads:   128 MB in  0.37 seconds =349.78 MB/sec

 Timing buffered disk reads:  64 MB in  1.18 seconds = 54.02 MB/sec

/dev/sdb:

 Timing buffer-cache reads:   128 MB in  0.37 seconds =341.39 MB/sec

 Timing buffered disk reads:  64 MB in  1.18 seconds = 54.38 MB/sec

/dev/sdc:

 Timing buffer-cache reads:   128 MB in  0.36 seconds =355.61 MB/sec

 Timing buffered disk reads:  64 MB in  1.21 seconds = 52.77 MB/sec

```

i'm not at all satisfied with these results, but it may be related to my mistake of setting up the arrays with a chunk size of 4.

----------

## martijnkr

I had a hard time figuring out why I couldn't specify an md device in grub as a root device. It turned out that for some reason my kernel does not automatically recognize the md partitions as possible candidates for a raid array rebuild.

This is the normal type of boot:

Apr 12 07:24:02 woodpecker kernel: md: Autodetecting RAID arrays.

Apr 12 07:24:02 woodpecker kernel: md: autorun ...

Apr 12 07:24:02 woodpecker kernel: md: considering hdd10 ...

Apr 12 07:24:02 woodpecker kernel: md:  adding hdd10 ...

And this is (about) what I got:

md: Autodetecting RAID arrays.

md: autorun ...

md: ... autorun DONE.

VFS: Cannot open root device "md2" or unknown-block(0,0)

Please append a correct "root=" boot option

Kernel panic: VFS: Unable to mount root fs on unknown-block(0,0)

The solution to this is to add a specific hint to grub so that the kernel considers these partitions to be candidates:

```

# RAID boot

title root (hd0,0) 2.6.6-0 root RAID boot

root (hd0,0)

kernel /vmlinuz-2.6.6-0 root=/dev/md2 md=2,/dev/hda2,/dev/hdc2

```

See more info in the kernel documentation: /usr/src/linux/Documentation/md.txt

Cheers,

-Martijn

----------

## cornet

Hello,

I have a "Quick and dirty" guide to getting Gentoo installed with EVMS2 support thus supporting lvm and raid together under one set of tools.

The guide is here

Cornet

----------

## PenguinPower

 *BlinkEye wrote:*   

> i'm truly sorry i haven't replied to you. unfortunately the topic-reply notification doesn't allways work. nevertheless i stumbled upon your post and tried your mini-guide (thanks a lot). 

 

Same problem here with your reply on my reply...

 *BlinkEye wrote:*   

> unfortunately the whole raid and livecd environment doesn't work properly. there are never all partitions up, and when booting back into my system i normally have to 
> 
> ```
> # mdadm /dev/mdX --add /dev/sdXY
> ```
> 
> for the drive(s) that is (are) down to get them all up and running again.

This is rather strange. Are you using identical drives? Because during boot, sometimes one hd is slower to come up than the other two.

 *BlinkEye wrote:*   

> 
> 
>  i nevertheless tried your suggestions but it didn't work (i haven't noted the error messages though). 
> 
> i'm curious of a hdparm of your raid arrays.
> ...

 

sure... MD1 = RAID 1 (16MB, for boot purposes, that's why it's slow), MD0 = RAID 5:

```

server2 root # hdparm -tT /dev/md0 /dev/md1 /dev/hde /dev/hdg /dev/hdi

/dev/md0:

 Timing buffer-cache reads:   548 MB in  2.00 seconds = 274.00 MB/sec

 Timing buffered disk reads:  222 MB in  3.00 seconds =  74.00 MB/sec

/dev/md1:

 Timing buffer-cache reads:   548 MB in  2.00 seconds = 274.00 MB/sec

 Timing buffered disk reads:    6 MB in  0.10 seconds =  60.00 MB/sec

/dev/hde:

 Timing buffer-cache reads:   552 MB in  2.00 seconds = 276.00 MB/sec

 Timing buffered disk reads:  174 MB in  3.01 seconds =  57.81 MB/sec

/dev/hdg:

 Timing buffer-cache reads:   552 MB in  2.01 seconds = 274.63 MB/sec

 Timing buffered disk reads:  166 MB in  3.03 seconds =  54.79 MB/sec

/dev/hdi:

 Timing buffer-cache reads:   552 MB in  2.01 seconds = 274.63 MB/sec

 Timing buffered disk reads:  172 MB in  3.00 seconds =  57.33 MB/sec

```

As you see, I am not using SCSI drives. I use a HPT374.

----------

## BlinkEye

 *martijnkr wrote:*   

> I had a hard time figuring out why I couldn't specify an md device in grub as a root device. It turned out that for some reason my kernel does not automatically recognize the md partitions as possible candidates for a raid array rebuild.
> 
> This is the normal type of boot:
> 
> Apr 12 07:24:02 woodpecker kernel: md: Autodetecting RAID arrays.
> ...

 

i had this issue myself - but magically i don't need such a line like 

```
md=2,/dev/hda2,/dev/hdc2
```

 any more. if interested see the last two posts of https://forums.gentoo.org/viewtopic.php?t=157573&highlight=raid+uu

Last edited by BlinkEye on Thu Apr 29, 2004 3:38 pm; edited 1 time in total

----------

## BlinkEye

 *PenguinPower wrote:*   

> 
> 
> ```
>  pts/9 hdparm -tT /dev/md0 /dev/md1 /dev/md2 /dev/md3 /dev/sda /dev/sdb /dev/sdc
> 
> ...

 

gnah! this can't be possible! why are your drives that speedy? what is a HPT374? and is it possible that hdparm doesn't return correct values for drives recognized as SCSI drives? (i don't use SCSI drives, but my SATA drives get mapped to scsi devices by the two onboard raid controllers). are you using software or hardware raid? i use software raid...

i'm asking because a 

```
emerge sync
```

 is so fast that my raid arrays must be a whole lot faster than a single ide drive.

----------

## PenguinPower

 *BlinkEye wrote:*   

> 
> 
> gnah! this can't be possible! why are your drives that speedy? what is a HPT374? and is it possible that hdparm doesn't return correct values for drives recognized as SCSI drives (i don't use SCSI drives, but my SATA drives get mapped to scsi drives from the two onboard raid controller). are you using software or hardware raid? i use software raid...
> 
> i'm asking because a 
> ...

 

The HPT374 (Highpoint RocketRAID 454) is a raid controller, but it's a software raid card solution, so I don't use its raid drivers; I only use it as an IDE card, because it has 4 ports on it. (The binary drivers for the HPT374 are just software RAID, and not half as good as the linux software raid.) I have to say I use very high end IDE drives: 3x Maxtor MaXLine Plus II 250GB (7200RPM, 8MB cache, ATA 133, <9.0ms avg seek time)

Are your sata drives on one cable, or each on a separate sata cable? Because I got similar results when I was using the onboard IDE (VIA VT82XXXX) with 2 drives on 1 cable.

And you are right: hdparm is a poor testing tool for software raid. Take a look at http://www.tldp.org/HOWTO/Software-RAID-HOWTO-9.html#ss9.5 for better tools like IOzone, which should be in the portage tree but isn't!!  :Sad: 

----------

## BlinkEye

well, i have a via8237 chipset and a Promise R20376 RAID controller (both onboard), so we're probably talking about the same chip, although i've connected all 3 of my drives to separate cables. i think we're using similar hd's (although yours are twice the size of mine): 

```
120 GB, SATA, Seagate ST3120 SATA-150, 7200/ 9ms/ 8MB 
```

if you have any further suggestions on what might be wrong with my setup let me know. i'm trying to install a benchmark program you suggested above, hope one runs under a 64bit system   :Wink: 

----------

## BlinkEye

that was an excellent link you gave me: i am just running IOzone (it exists even for 64bit systems, see http://www.iozone.org/)

may i ask you to install that program too, so we could compare results? it is done quickly: download the tarball, extract and make it. after that it is run with 

```
./iozone -s 4096
```

i get the following result: 

```
./iozone -s 4096

        Iozone: Performance Test of File I/O

                Version $Revision: 3.217 $

                Compiled for 64 bit mode.

                Build: linux-AMD64

        Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins

                     Al Slater, Scott Rhine, Mike Wisner, Ken Goss

                     Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,

                     Randy Dunlap, Mark Montague, Dan Million,

                     Jean-Marc Zucconi, Jeff Blomberg,

                     Erik Habbinga, Kris Strecker.

        Run began: Fri Apr 30 00:18:31 2004

        File size set to 4096 KB

        Command line used: ./iozone -s 4096

        Output is in Kbytes/sec

        Time Resolution = 0.000001 seconds.

        Processor cache size set to 1024 Kbytes.

        Processor cache line size set to 32 bytes.

        File stride size set to 17 * record size.

                                                            random  random    bkwd  record  stride

              KB  reclen   write rewrite    read    reread    read   write    read rewrite    read   fwrite frewrite   fread  freread

            4096       4  106326  445596   893165   537815  996812  386926  897458 1487846  898443   249117   432428  635143  1016331

iozone test complete.
```

----------

## PenguinPower

 *BlinkEye wrote:*   

> this was a excellent link you gave me: i am just running IOzone (exists even for 64bit systems, see http://www.iozone.org/)
> 
> may i ask you if you could install that program too so we could exchange the results? it is done quickly: download the tar ball, extract and make it. after that it is run with 
> 
> ```
> ...

 

You beat my raid 3 to 8 times...

This means i have a performance issue  :Very Happy: 

```
        Run began: Fri Apr 30 01:20:29 2004

        File size set to 4096 KB

        Command line used: ./iozone -s 4096

        Output is in Kbytes/sec

        Time Resolution = 0.000001 seconds.

        Processor cache size set to 1024 Kbytes.

        Processor cache line size set to 32 bytes.

        File stride size set to 17 * record size.

                                                            random  random    bkwd  record  stride

              KB  reclen   write rewrite    read    reread    read   write    read rewrite    read   fwrite frewrite   fread  freread

            4096       4   94896  153202   178824   181423  172616  148083  173727  448872  170772    91647   143708  173891   175877
```

----------

## BlinkEye

hmm. i'm not yet persuaded, but thanks a lot for your results. i've read the manual, and there i found the following command: 

```
./iozone -Raz -b test.wks -g 1G
```

 which will keep your system occupied for several hours (of course the biggest file size (=1G) can be changed to something smaller). the results will be written into test.wks, which is readable by openoffice calc. let me know if you plan to run a big test like that, so we could exchange the results (they get big, so i suggest email). i hope to hear from you

----------

## PenguinPower

Sure... I am about to go to bed... I will disable my cronjobs and run it now, so I can email you tomorrow. Hopefully it will be done by then  :Smile: 

I am not sure your iozone results are correct, since 898 MB/s is a lot for a stride read, and with Serial ATA that is virtually impossible.

----------

## mudrii

Strange performance  :Sad: 

On /dev/md0 RAID 1 reiserfs

ON /dev/md1 RAID 0 xfs

```

gentoo / # hdparm -tT /dev/hda /dev/hdc dev/md0 /dev/md1

/dev/hda:

 Timing buffer-cache reads:   2144 MB in  2.00 seconds = 1071.63 MB/sec

 Timing buffered disk reads:  172 MB in  3.02 seconds =  56.98 MB/sec

/dev/hdc:

 Timing buffer-cache reads:   2144 MB in  2.00 seconds = 1071.09 MB/sec

 Timing buffered disk reads:  160 MB in  3.02 seconds =  52.94 MB/sec

dev/md0:

 Timing buffer-cache reads:   2144 MB in  2.00 seconds = 1072.70 MB/sec

 Timing buffered disk reads:  124 MB in  2.31 seconds =  53.76 MB/sec

/dev/md1:

 Timing buffer-cache reads:   2136 MB in  2.00 seconds = 1068.16 MB/sec

 Timing buffered disk reads:  138 MB in  3.02 seconds =  45.66 MB/sec

```

iozone

```

gentoo /# ./iozone -s 4096

        Iozone: Performance Test of File I/O

                Version $Revision: 3.217 $

                Compiled for 32 bit mode.

                Build: linux

        Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins

                     Al Slater, Scott Rhine, Mike Wisner, Ken Goss

                     Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,

                     Randy Dunlap, Mark Montague, Dan Million,

                     Jean-Marc Zucconi, Jeff Blomberg,

                     Erik Habbinga, Kris Strecker.

        Run began: Fri Apr 30 20:16:51 2004

        File size set to 4096 KB

        Command line used: ./iozone -s 4096

        Output is in Kbytes/sec

        Time Resolution = 0.000001 seconds.

        Processor cache size set to 1024 Kbytes.

        Processor cache line size set to 32 bytes.

        File stride size set to 17 * record size.

                                                            random  random    bkwd  record  stride

              KB  reclen   write rewrite    read    reread    read   write    read rewrite    read   fwrite frewrite   fread  freread

            4096       4  343390  451701   252186   243897  198661  321509  239323  967383  221643   140862   158207  142479   145439

iozone test complete.

```

HDD maxtor 160G X 2 7200rpm 8Mb Cache

 Why is my RAID so slow?  :Sad: 

----------

## BlinkEye

could you also do a 

```
./iozone -Raz -b test.wks -g 1G
```

 and let me know your results? if you're interested in my file, pm me with your email address (the above test takes about 2 hours).

----------

## mudrii

I am running acovea now; after it finishes I will send you a PM

----------

## mahir

!!

right

i basically followed the instructions as the howto wanted

i chose reiserfs rather than xfs (i don't know how to use xfs properly)

i used 2.6 kernel with genkernel

i booted

and it said  unable to mount /dev/md2 on /newroot

so i went back to the liveCD and now i can't even mount the raid so i can chroot and look at the settings!

i am trying to mount the physical disks but

that doesn't work either!

it is saying to me

```

sh-2005: reiserfs read_super_block : bread failed (dev 09:02, block 8, size 1024)

sh-2005: reiserfs read_super_block : bread failed (dev 09:02, block 64, size 1024)

sh-2005: reiserfs read_super_block : can not find reiserfs on md(9,2)

```

what does this mean?! plz help! i need this system up by yesterday!

----------

## mahir

ok i went back in via the liveCD and

i did a reiserfsck on the physical drives

and they are ok

this is my grub.conf

```

root (hd0,1)

kernel /kernel-2.6.5 root /dev/ram0 init=/linuxrc real_root=/dev/md/2

initrd /initrd-2.6.5

```

when booting i get messages saying

that mdXXX is too large for blablabla,

it lists through every md number..!!

and then it says

type in a path to mount as real root, as /dev/md/2 isn't valid, or something?!

any ideas people?!

----------

## mahir

bump...

i just changed my grub.conf to say

real_root=/dev/hda5 (my reiserfs / partition)

and i get the same thing!!

```

Error lstat(2)ing file "/dev/md/dXXX" Value to large for defined data type

>> Determining root device...

>> Block device /dev/hda5 is not a valid root device...

>> The root block device is unspecified or not detected.

please specify a device to boot, or "shell" for a shell.

boot() :: _

```

this is what i get!!!!!!

----------

## mahir

i just booted back with the liveCD

i did

mount /dev/md2 /mnt/gentoo

it says

/dev/md2: Invalid argument.

mount : you must specify the filesystem type.

so i do

mount  -t reiserfs /dev/md2 /mnt/gentoo

then i get this again

```

sh-2005: reiserfs read_super_block : bread failed (dev 09:02, block 8, size 1024)

sh-2005: reiserfs read_super_block : bread failed (dev 09:02, block 64, size 1024)

sh-2005: reiserfs read_super_block : can not find reiserfs on md(9,2)

```

but i can mount /dev/hda5 onto /mnt/gentoo!!!

WHAT IS GOING ON..

----------

## BlinkEye

i'm sorry i'm not able to help you. one question though: when you built your arrays, did you wait until 

```
cat /proc/mdstat
```

showed no more activity? i think this was what i neglected at first: i didn't wait for the drives to sync. 

maybe you could post your /etc/raidtab, you may have overlooked something

----------

## steved411

Hello,

I've recently tried setting up RAID1 via software raid. I've followed a couple HOW-TO's (Even this one) but still keep seeing this problem.

When I first add a disk into the array I see it sync up. However, I recently had one drive crash (the primary) and found out the data on the 2nd drive hadn't been updated since I put it into the array.

I did some testing, and sure enough it doesn't sync! I have two RAID1s made: md0 and md2. md0 is my /boot; that one seems to work fine. My / (root) is md2. This is the one that doesn't sync after it's put into the array.

Here is my mdstat:

```

cat /proc/mdstat

Personalities : [raid1]

read_ahead 1024 sectors

md0 : active raid1 scsi/host0/bus0/target8/lun0/part1[0] scsi/host0/bus0/target0/lun0/part1[1]

249472 blocks [2/2] [UU]

md2 : active raid1 scsi/host0/bus0/target8/lun0/part4[0] scsi/host0/bus0/target0/lun0/part4[1]

8092224 blocks [2/2] [UU]

unused devices: <none>

```

Here is my raidtab:

```

# /boot (RAID 1)

raiddev /dev/md0

raid-level 1

nr-raid-disks 2

chunk-size 32

persistent-superblock 1

device /dev/sda1

raid-disk 0

device /dev/sdb1

raid-disk 1

# / (RAID 1)

raiddev /dev/md2

raid-level 1

nr-raid-disks 2

chunk-size 32

persistent-superblock 1

device /dev/sda4

raid-disk 0

device /dev/sdb4

raid-disk 1

```

Here is my fstab:

```

/dev/md0 /boot ext2 noauto,noatime 1 2

/dev/md2 / ext2 noatime 1 1

```

has anyone seen this before? I can't seem to figure out where to start looking for the problem!

Thanks!

Steve

----------

## BlinkEye

i don't see the problem. your 

```
cat /proc/mdstat
```

shows that both raid arrays are running with both drives. if one drive were down you would see something like this (here md0 and md1 each have one drive down): 

```
pts/1 cat /proc/mdstat 

 Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] 

 md1 : active raid5 sdc3[1] sdb3[0] 

       39085952 blocks level 5, 4k chunk, algorithm 2 [3/2] [UU_] 

 

 md2 : active raid5 sdc5[1] sdb5[0] sda5[2] 

       97675008 blocks level 5, 4k chunk, algorithm 2 [3/3] [UUU] 

 

 md3 : active raid5 sdc6[1] sdb6[0] sda6[2] 

       94124544 blocks level 5, 4k chunk, algorithm 2 [3/3] [UUU] 

 

 md0 : active raid5 sda2[2] sdb2[0] 

       3148544 blocks level 5, 4k chunk, algorithm 0 [3/2] [U_U] 

 

 unused devices: <none>

```

if you ever run into this problem follow this topic: https://forums.gentoo.org/viewtopic.php?t=157573&highlight=software+raid+uu
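As a sketch of how to spot a degraded array mechanically: in /proc/mdstat each missing member shows up as a '_' in the status string, so checking for that character is enough (the sample line below is taken from the output above; on a real box you would read /proc/mdstat directly):

```shell
# one line of /proc/mdstat for a degraded 3-disk RAID 5 ([U_U] = middle disk gone)
status='3148544 blocks level 5, 4k chunk, algorithm 0 [3/2] [U_U]'
case "$status" in
  *'_'*) result=degraded ;;
  *)     result=healthy ;;
esac
echo "$result"
```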

----------

## blake121666

Thanks for the how-to.  I made a cheat sheet while following it to set up root on LVM2 on RAID, which had some gotchas.  I figured I'd cut and paste my cheat sheet here in case anyone else is groping for how to do this.

- Boot a live cd

- Load the md and dm-mod modules:

```

modprobe md

modprobe dm-mod

```

- Create partitions:

```

   Device Boot      Start         End      Blocks   Id  System

/dev/hda1   *           1          25      100768+  fd  Linux raid autodetect

/dev/hda2              26         969     3806208   fd  Linux raid autodetect

/dev/hdh1   *           1         200      100768+  fd  Linux raid autodetect

/dev/hdh2             201        7752     3806208   fd  Linux raid autodetect

/dev/hdh3            7753       59554    26108208   8e  Linux LVM

/dev/hde1               1       14593   117218241   fd  Linux raid autodetect

/dev/hdg1               1       14593   117218241   fd  Linux raid autodetect

```

- Create /etc/raidtab:

```

# Mirror /dev/hda with /dev/hdh

# /boot (RAID 1)

raiddev                 /dev/md0

raid-level              1

nr-raid-disks           2

chunk-size              32

persistent-superblock   1

device                  /dev/hda1

raid-disk               0

device                  /dev/hdh1

raid-disk               1

# / (RAID 1)

raiddev                 /dev/md1

raid-level              1

nr-raid-disks           2

chunk-size              32

persistent-superblock   1

device                  /dev/hda2

raid-disk               0

device                  /dev/hdh2

raid-disk               1

# Mirror /dev/hde with /dev/hdg

raiddev                 /dev/md2

raid-level              1

nr-raid-disks           2

chunk-size              32

persistent-superblock   1

device                  /dev/hde1

raid-disk               0

device                  /dev/hdg1

raid-disk               1

```

- Make the RAID

```

mkraid /dev/md0

mkraid /dev/md1

mkraid /dev/md2

```

- /boot will be ext3 (no LVM)

```

mke2fs -j -L BOOT -m 1 -v /dev/md0

```

- Create LVM PVs

```

pvcreate /dev/md1 /dev/md2 /dev/hdh3

```

- Create /etc/lvm/lvm.conf

```

echo 'devices { filter=["r/cdrom|hdh[12]|hd[a-gi-z]/"] }' >/etc/lvm/lvm.conf

```

- Create LVM VGs

```

vgcreate m1 /dev/md1   # mirrored volume group 1

                       # will hold a compact fully-functioning base system

vgcreate m2 /dev/md2   # mirrored volume group 2

                       # stuff not particularly needed to run linux

                       # such as /usr/portage, mp3 directory, ... etc

vgcreate um /dev/hdh3  # unmirrored volume group

                       # for things like /tmp, /var/tmp, ... etc

```

- Create LVM LVs

```

lvcreate -L 200M -n root    m1 # root         on m1

lvcreate -L 100M -n swap    m1 # swap         on m1

lvcreate -L 1G   -n usr     m1 # /usr         on m1

lvcreate -L 200M -n var     m1 # /var         on m1

lvcreate -L 500M -n X       m2 # /usr/X11R6   on m2

lvcreate -L 2G   -n portage m2 # /usr/portage on m2

lvcreate -L 400M -n swap    m2 # swap         on m2

lvcreate -L 2G   -n ushare  m2 # /usr/share   on m2

lvcreate -L 1G   -n usrc    m2 # /usr/src     on m2

lvcreate -L 2G   -n ccache  um # /um/ccache   on um (CCACHE_DIR)

lvcreate -L 5G   -n tmp     um # unmirrored tmp filesystem

                              # for /tmp, /var/tmp, ... etc

                              # create symbolic links to this for these

```

- Make and activate the swap files

```

mkswap /dev/m1/swap

mkswap /dev/m2/swap

swapon /dev/m1/swap

swapon /dev/m2/swap

```

- Put filesystems on LVs

```

mke2fs -j -L ROOT    -m 1 -v /dev/m1/root

mke2fs -j -L USR     -m 1 -v /dev/m1/usr

mke2fs -j -L VAR     -m 1 -v /dev/m1/var

mke2fs -j -L X       -m 1 -v /dev/m2/X

mke2fs -j -L PORTAGE -m 1 -v /dev/m2/portage

mke2fs -j -L USHARE  -m 1 -v /dev/m2/ushare

mke2fs -j -L USRC    -m 1 -v /dev/m2/usrc

mke2fs -j -L CCACHE  -m 1 -v /dev/um/ccache

mke2fs -j -L TMP     -m 1 -v /dev/um/tmp

```

- Mount the filesystems

```

mount /dev/m1/root    /mnt/gentoo

mount /dev/md0        /mnt/gentoo/boot

cd /mnt/gentoo; mkdir um usr var

mount /dev/m1/usr     usr

mount /dev/m1/var     var

cd usr; mkdir X11R6 portage share src

mount /dev/m2/X       X11R6

mount /dev/m2/portage portage

mount /dev/m2/ushare  share

mount /dev/m2/usrc    src

cd ../um; mkdir ccache tmp

mount /dev/um/ccache  ccache

mount /dev/um/tmp     tmp

cd ..

find . -exec chmod 777 {} \;

```

- Create and mount /proc

```

mkdir proc

mount -t proc proc /mnt/gentoo/proc

```

- Untar the stage file and remove it

```

tar -xvjpf stage*

rm stage*

```

- Copy over configuration files

```

cp /etc/raidtab etc

mkdir etc/lvm

cp /etc/lvm/lvm.conf etc/lvm

```

(also copy from backups: hosts, resolv.conf, make.conf, /root/*, ...)

- chroot

```

chroot /mnt/gentoo

```

- Make links to /tmp

```

rm -r /tmp;     ln -sf    um/tmp /tmp

rm -r /var/tmp; ln -sf ../um/tmp /var/tmp

rm -r /usr/tmp; ln -sf ../um/tmp /usr/tmp

```

- Create an /etc/fstab

```

# <fs>             <mountpoint>      <type> <opts>     <dump/pass>

/dev/m1/swap       none               swap  sw,pri=1       0 0

/dev/m2/swap       none               swap  sw,pri=1       0 0

/dev/md0           /boot              ext3  noatime        1 2

/dev/m1/root       /                  ext3  noatime        0 1

/dev/m1/usr        /usr               ext3  noatime        0 0

/dev/m1/var        /var               ext3  noatime        0 0

/dev/um/ccache     /um/ccache         ext3  noatime        0 0

/dev/um/tmp        /um/tmp            ext3  noatime        0 0

/dev/m2/X          /usr/X11R6         ext3  noatime        0 0

/dev/m2/portage    /usr/portage       ext3  noatime        0 0

/dev/m2/ushare     /usr/share         ext3  noatime        0 0

/dev/m2/usrc       /usr/src           ext3  noatime        0 0

/dev/cdroms/cdrom0 /mnt/cdrom         auto  noauto,ro,user 0 0

/dev/fd0           /mnt/floppy        auto  noauto,user    0 0

none               /proc              proc  defaults       0 0

none               /dev/shm           tmpfs defaults       0 0

```

- Run through the install as usual until the step to make the kernel

- Create an empty /initrd to pivot_root the initrd into

```

mkdir /initrd

```

- Create a 16MB LVM initrd

```

mkdir /tmp/initrd

cd /tmp/initrd

dd if=/dev/zero of=devram count=16384 bs=1024

mke2fs -F -m0 -L INITRD devram 16384

mkdir tmpmnt

mount -o loop devram tmpmnt

cd tmpmnt

mkdir bin dev etc lib proc root sbin tmp usr var

```

- Throw files into this tmpmnt directory that are needed to bootstrap

  the root filesystem as well as anything that you'd need to troubleshoot

  the system if something goes wrong (essentially create a minimal

  gentoo livecd).  I put all executables except for "init" in bin.

- Make sure the library files are included

```

ldd bin/* | awk '{if (/=>/) { print $3 }}' | sort -u \

          | awk 'system("cp "$1" lib")'

```
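A hedged alternative to that pipeline, using xargs instead of awk's system() call (a sketch; the `copy_libs` helper name is mine, not from the cheat sheet):

```shell
#!/bin/sh
# Alternative to the awk/system() pipeline above: parse ldd output for
# "=>" lines whose third field is an absolute path, deduplicate, and
# copy each library into the given directory. copy_libs is a made-up
# helper name for illustration.
copy_libs() {
    # $1 = destination directory; ldd output is read from stdin
    awk '/=>/ && $3 ~ /^\// { print $3 }' | sort -u | xargs -r -I{} cp {} "$1"
}
# Intended use on the initrd: ldd bin/* | copy_libs lib
```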

- Create the sbin/init file

```

#!/bin/bash

# include in the path some dirs from the real root filesystem

# for chroot, blockdev

export PATH="/sbin:/bin:/usr/sbin:/usr/bin:/initrd/bin:/initrd/sbin"

PRE="initrd:"

do_shell() {

 /bin/echo

 /bin/echo "*** Entering LVM2 rescue shell. Exit shell to continue booting. ***"

 /bin/echo

 /bin/bash

}

echo "$PRE Remounting / read/write"

mount -t ext2 -o remount,rw /dev/ram0 /

# We need /proc for device mapper

echo "$PRE Mounting /proc"

mount -t proc none /proc

# Create the /dev/mapper/control device for the ioctl

# interface using the major and minor numbers that have been allocated

# dynamically.

echo -n "$PRE Finding device mapper major and minor numbers "

MAJOR=$(sed -n 's/^ *\([0-9]\+\) \+misc$/\1/p' /proc/devices)

MINOR=$(sed -n 's/^ *\([0-9]\+\) \+device-mapper$/\1/p' /proc/misc)

if test -n "$MAJOR" -a -n "$MINOR"

then

 mkdir -p -m 755 /dev/mapper

 mknod -m 600 /dev/mapper/control c $MAJOR $MINOR

fi

echo "($MAJOR,$MINOR)"

# Device-Mapper dynamically allocates all device numbers. This means it is

# possible that the root volume specified to LILO or Grub may have a different

# number when the initrd runs than when the system was last running. In order

# to make sure the correct volume is mounted as root, the init script must

# determine what the desired root volume name is by getting the LVM2 root

# volume name from the kernel command line. In order for this to work

# correctly, "lvm_root=/dev/Volume_Group_Name/Root_Volume_Name" needs to be

# passed to the kernel command line (where Root_Volume_Name is replaced by

# your actual root volume's name).

for arg in `cat /proc/cmdline`

do

 echo $arg | grep '^lvm_root=' > /dev/null

 if [ $? -eq 0 ]

 then

  rootvol=${arg#lvm_root=}

  break

 fi

done

echo "$PRE Activating LVM2 volumes"

# run a shell if we're passed lvmrescue on commandline

grep lvmrescue /proc/cmdline 1>/dev/null 2>&1

if [ $? -eq 0 ]

then

 lvm vgscan

 lvm vgchange --ignorelockingfailure -P -a y

 do_shell

else

 lvm vgscan

 lvm vgchange --ignorelockingfailure -a y

fi

echo "$PRE Mounting root filesystem $rootvol ro"

mkdir /rootvol

if ! mount -t auto -o ro $rootvol /rootvol

then

 echo -e "\t*FAILED*";

 do_shell

fi

echo "$PRE Umounting /proc"

umount /proc

echo "$PRE Changing roots"

cd /rootvol

if ! pivot_root . initrd

then

 echo -e "\t*FAILED*"

 do_shell

fi

echo "$PRE Proceeding with boot..."

exec chroot . /bin/sh -c "/bin/umount /initrd; \

 /sbin/blockdev --flushbufs /dev/ram0; \

 exec /sbin/init $*" < dev/console > dev/console 2>&1

```

- Remove the "lost+found" directory

```

rm -r lost*

```

- Create the compressed initrd in the /boot directory

```

cd ..

umount tmpmnt

dd if=devram bs=1k count=16384 | gzip -9 >/boot/initrd-2.6.6-rc1.gz

```

- Compile the kernel as usual, make sure the following are compiled:

```

General Setup->Support for hot-pluggable devices

Block devices->Loopback device support

Block devices->RAM disk support

Block devices->Default RAM disk size = (16384)

Block devices->Initial RAM disk (initrd) support

Multi-device support (RAID and LVM)->Multiple ...

Multi-device support (RAID and LVM)->RAID support

Multi-device support (RAID and LVM)->RAID-1 (mirroring) mode

Multi-device support (RAID and LVM)->Multipath I/O support

Multi-device support (RAID and LVM)->Device mapper support

File systems->Pseudo filesystems->/proc ...

File systems->Pseudo filesystems->Virtual memory ...

Don't compile in /dev filesystem support

```

- Compile and install

```

make && make modules_install

cp arch/i386/boot/bzImage /boot/kernel-2.6.6-rc1

cp System.map /boot/System.map-2.6.6-rc1

cp .config /boot/config-2.6.6-rc1

```

- Setup grub as usual on the MBR of the boot disk

  (not the MD - it will have to resync)

Example lines in grub.conf:

```

root (hd0,0)

kernel /kernel-2.6.6-rc1 root=/dev/ram0 lvm_root=/dev/m1/root vga=788

initrd /initrd-2.6.6-rc1.gz

```

- Modify /etc/runlevels/boot/checkroot so that it creates LVM nodes

  before trying to check the root filesystem.  Add this line in an

  appropriate place:

```

/sbin/vgscan -v --mknodes --ignorelockingfailure

```

- Wrap up.  Exit out of the chroot environment, back up everything

  to an rsync server, unmount everything, and reboot.

----------

## symbiat

 *edge3281 wrote:*   

> When try to boot my machine normally it just hangs at grub and won't let me do anything.  The keyboard doesn't even respond.  I am doing a raid on /boot could that be the problem?

 

Are you using RAID 1 for /boot? RAID 0 will not work. Also which LiveCD did you use to install Gentoo?

FWIW, I have managed to get this working on three servers.

When you did the GRUB setup, did you install bootloaders on the individual disks that make up the RAID array for /boot?

It's no problem to have /dev/md? in your grub.conf, assuming your kernel has RAID support and your partitions are of type "Linux RAID autodetect" - this is how I have it set up.

Also, are you using a 2.6 kernel and pure udev?

----------

## zeek

 *PenguinPower wrote:*   

> Please don't use XFS when using RAID 5. (learned it the hard way) 

 

The solution to the problem is in the manpage:

```

       -s     Sector size options.

              This  option  specifies  the  fundamental  sector  size  of  the

              filesystem.  The valid suboptions are: log=value and size=value;

              only  one  can be supplied.  The sector size is specified either

              as a base two logarithm value with log=, or in bytes with size=.

              The  default  value  is 512 bytes.  The minimum value for sector

              size is 512; the maximum is 32768 (32 KB).  The sector size must

              be a power of 2 size and cannot be made larger than the filesys-

              tem block size.

```

Passing -s size=4096 to mkfs.xfs makes the XFS sector size match the MD block size.

----------

## zeek

 *bryon wrote:*   

> 
> 
>  PLEASE dont mention the <redacted> flag in any email, documentation or
> 
>  HOWTO, just suggest the --force flag instead. Thus everybody will read
> ...

 

What part of "Please dont mention this flag in any email, documentation or HOWTO" didn't you understand???

 :Rolling Eyes: 

----------

## petrjanda

Can anyone help me out?   :Sad: 

https://forums.gentoo.org/viewtopic.php?t=178795

Thank you

----------

## ali3nx

I'm attempting to make an md raid0 device from the 2004.1 livecd. I have modprobed md and raid0 and written /etc/raidtab correctly, duplicating a running system; however, when I mkraid /dev/md0 I receive an error stating "cannot determine md version: no MD device file in /dev". Does anyone have any suggestions?

```
livecd root # cat /proc/mdstat

Personalities : [raid0]

unused devices: <none>

livecd root # mdadm --assemble /dev/md0 /dev/hda3 /dev/hdc1

-/bin/bash: mdadm: command not found

livecd root # raidreconf --help

cannot determine md version: no MD device file in /dev.

livecd root #

livecd root # cat /etc/gentoo-release

Gentoo Base System version 1.4.3.12

livecd root # uname -a

Linux livecd 2.6.1-gentoo-r1 #1 Tue Jan 20 02:27:50 Local time zone must be set-

-see zic manu i686 AMD Duron(tm) Processor AuthenticAMD GNU/Linux

```

Author -:edit:- After some sleep, and a week later, the facts materialized: my remote colleague was using an SELinux Gentoo livecd. md is b0rked on said CD.

----------

## zeek

 *ali3nx wrote:*   

> livecd root # mdadm --assemble /dev/md0 /dev/hda3 /dev/hdc1
> 
> -/bin/bash: mdadm: command not found
> 
> 

 

emerge mdadm

Guess this is a chicken-and-egg problem if you're trying to install on a raid0 and mdadm isn't on the livecd.  If so, you need to set up an /etc/raidtab - this thread already has an excellent tutorial.

If your /etc/raidtab is already set up properly, as you say, then you need to start the raid.
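With raidtools (the toolset the raidtab examples in this thread use), starting the arrays might look like this (a sketch, assuming the md device nodes already exist):

```shell
# Start all arrays listed in /etc/raidtab, then check their state.
# Requires root and the raidtools package.
raidstart -a
cat /proc/mdstat
```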

----------

## hover

Thanks for the wonderful HOWTO at the beginning of this thread!

1) Though the author uses raid1 for /boot (which is wise), he does not put GRUB onto the second disk. As a result, the two drives are not interchangeable, and you cannot boot from the second if the first fails.

Installing GRUB to the second disk is somewhat tricky; luckily, it was thoroughly described in another excellent HOWTO at http://lists.us.dell.com/pipermail/linux-poweredge/2003-July/014331.html

I will quote a bit in case this valuable link goes down. The key point there is:

 *Quote:*   

> Grub>device (hd0) /dev/sdb (/dev/hdb for ide)
> 
> Grub>root (hd0,0) and then:
> 
> Grub>setup (hd0)
> ...

 

2) The HOWTO of this thread also does not mention the partitioning process that should be done before rebuilding a fresh disk. You cannot 'raidhotadd' a partitionless disk drive. The recommended procedure is also suggested in http://lists.us.dell.com/pipermail/linux-poweredge/2003-July/014331.html. Again, I will quote some:

 *Quote:*   

> 
> 
> 1st is to backup the drive partition tables and is rather simple command:
> 
> #sfdisk -d /dev/sda > /raidinfo/partitions.sda
> ...

 

I will also note that you should repeat the GRUB installation sequence with every fresh disk replacing a failed one.

Installation of GRUB to a /dev/hdX (or /dev/sdX) does not break the synchronized status of the active /dev/mdX running on top of it.

Warning: do not apply this advice blindly to raid0 or raid5 - neither GRUB nor LILO will boot from them directly!
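Putting hover's two points together, a disk-replacement sequence might look like the sketch below (my sketch, not from the linked HOWTO; device and array names are examples only, and raidtools syntax matches the rest of this thread):

```shell
# Sketch of rebuilding a replacement mirror member. /dev/hda is the
# surviving disk, /dev/hdc the fresh one, md0/md1 the mirrors --
# all example names, substitute your own. Destructive: run as root.
sfdisk -d /dev/hda | sfdisk /dev/hdc   # clone the partition table onto the new disk
raidhotadd /dev/md0 /dev/hdc1          # re-add the /boot mirror member; resync starts
raidhotadd /dev/md1 /dev/hdc2          # re-add the / mirror member
# ...then repeat the GRUB device/root/setup sequence for the new disk.
```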

----------

## BlackB1rd

I'm currently installing a new Gentoo system with software RAID1 and it's giving me a "dirty, no errors" state (using mdadm -D /dev/md*). Should I be worried? How to get it clean instead of dirty?

----------

## lampshad3

 *vikwiz wrote:*   

> Hi,
> 
> if you want not just your data safe, but your server/workstation up and running in case of disk failure, even if you are not there, you should put your swap on RIAD1 also. We have some machines running like this since years, and had 2 diskcrash, without real problems. In first case I didn't even realise for days that it happened    Of course better to have at least a cronjob cheking your /proc/mdstat for 'U' with '_'. My servers are not near to my location, serving real tasks, so uptime is very concerned.

 

Care to relay that script?
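vikwiz's actual script wasn't posted, but a minimal sketch of that kind of check (the `check_mdstat` helper name is mine) just looks for an underscore inside the status brackets of /proc/mdstat:

```shell
#!/bin/sh
# Minimal sketch of a cron RAID-health check -- NOT vikwiz's actual
# script. In /proc/mdstat a healthy 2-disk mirror shows "[UU]"; a
# failed member is shown as "_", e.g. "[U_]".
check_mdstat() {
    # $1 = path to an mdstat-format file (normally /proc/mdstat)
    if grep -Eq '\[[U_]*_[U_]*\]' "$1"; then
        echo DEGRADED
        return 1
    fi
    echo OK
}
# From cron, something like (mail setup assumed):
#   check_mdstat /proc/mdstat || mail -s "RAID degraded" you@example.com < /proc/mdstat
```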

----------

## dodger10k

 *BlackB1rd wrote:*   

> I'm currently installing a new Gentoo system with software RAID1 and it's giving me a "dirty, no errors" state (using mdadm -D /dev/md*). Should I be worried? How to get it clean instead of dirty?

 

Same thing here with me; if anyone knows a solution I'd like to hear it  :Wink: 

Oliver

----------

## Tazok

My raid always fails, because I have no /dev/md* files besides /dev/md0.

I'm running udev. What could be the cause for this?

Solved: I had to run "mknod /dev/mdX b 9 X" for each array, where X is the device number of the array.
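Tazok's fix generalizes to a loop. This sketch prints the mknod commands (md devices are block devices with major number 9 and minor = array number) so they can be inspected first; pipe the output to sh as root to run them:

```shell
#!/bin/sh
# Print the mknod commands for md1..md3 rather than running them, so
# the output can be checked before executing as root.
# md devices use block major 9; the minor is the array number.
for X in 1 2 3; do
    echo "mknod /dev/md$X b 9 $X"
done
```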

----------

## BlackB1rd

And my problem extends, since it's not only the dirty status it returns. It also doesn't seem to update the 'Update time', which is very concerning. I guess that means it doesn't keep the RAID array synchronized?  :Sad: 

----------

## mr.twemlow

Tazok, where do I run that command?  Off the live cd?  Wouldn't it just get overwritten when udev restarts?  My raid won't boot because it can't find /dev/md2 (or /dev/md/2).

EDIT:  Ok, I see what mknod does.  But when I reboot onto the live cd, how do I remount my raid partitions?  I've recreated /etc/raidtab, but it complains if I try to mount /dev/md2.

----------

## SkaMike

I didn't read through all the posts, but apparently you can do a raid0 for the swap which is definitely a good idea.

----------

## jasonpf

 *Quote:*   

> 
> 
> I'm currently installing a new Gentoo system with software RAID1 and it's giving me a "dirty, no errors" state (using mdadm -D /dev/md*). Should I be worried? How to get it clean instead of dirty?
> 
> 

 

Dirty, no errors is normal.  When a raid system is running it will normally be dirty (dirty means that the parity blocks haven't been updated yet).  It is normal and nothing to be concerned about.  If you wish to see the array clean, just stop the array (I use mdadm: mdadm -S /dev/mdX) and then examine it with mdadm -E /dev/hdX (the disk or partition making up part of the array).

What I'd like to know, is why I'm unable to get my root raid to work with genkernel.  I get the following message as did mahir:

```

...

Error lstat(2)ing file "/dev/md/d252" Value too large for defined data type

Error lstat(2)ing file "/dev/md/d253" Value too large for defined data type

Error lstat(2)ing file "/dev/md/d254" Value too large for defined data type

Error lstat(2)ing file "/dev/md/d255" Value too large for defined data type

>> Determining root device...

>> Mounting root...

mount: Mounting /dev/md0 on /newroot failed: Input/output error

>> Could not mount specified ROOT, try again

>> The root block device is unspecified or not detected.

       Please specify a device to boot, or "shell" for a shell...

boot() :: _ 

```

The above code was typed by hand, so please excuse any typos.

I have built the kernel with genkernel --menuconfig all and chose md support built into the kernel.  I then saved, exited and allowed genkernel to continue.  At boot I get that error.  And ideas?

Edit: I just realized that the default genkernel does not include support for my SCSI card.  I'm building a new kernel with Genkernel now with SCSI support for my card built in.

----------

## jasonpf

Ok, I've built support into my kernel for my SCSI card and also built in md (raid 0-5).  I still get this error:

```

Error lstat(2)ing file "/dev/md/d252" Value too large for defined data type

Error lstat(2)ing file "/dev/md/d253" Value too large for defined data type

Error lstat(2)ing file "/dev/md/d254" Value too large for defined data type

Error lstat(2)ing file "/dev/md/d255" Value too large for defined data type

```

but it now boots successfully.  :Wink: 

----------

## borchi

I also get this error, and at least for me it is limited to the 2.6 kernel. Should I be worried about it?

----------

## Snooper

I was wondering if anyone here has software raid working on an MSI K7N2 Delta-ILSR. I have been unable to get it to do anything: for the BIOS to see the SATA controller and boot from it, the drives have to be set up as a RAID within the controller BIOS. That does not work, because it's not true hardware RAID once you get into Linux. The other option is breaking the RAID and then using software RAID, but this too does not work, as the BIOS is unable to boot from the controller without it set up as a RAID.

By the way, the controller is a Promise 376.

----------

## Hivemind

I'm putting together a samba server box with 4 80GB disks in a RAID5 + hot spare.

How do I boot from said RAID 5 array?

----------

## SkaMike

 *Hivemind wrote:*   

> I'm putting together a samba server box with 4 80GB disks in a RAID5 + hot spare.
> 
> How do i boot from said RAID5 array?

 

If you're talking about a software raid (which I assume you are), you can't boot from a RAID5.  Only a raid1 is bootable, since each disk can be read without the others.  Just make one of the partitions a RAID1 to boot off of.

----------

## jasonpf

 *borchi wrote:*   

> i also get this error and at least for me it is limited to 2.6 kernel. should i be worried about that error?

 

I managed to get it working on mine after building the proper drivers into the kernel (genkernel --menuconfig all).  I think that for some reason the initrd has too many md devices.  They haven't had any ill effects on my system, and once it's out of the initrd that error is irrelevant anyway.

----------

## mr.twemlow

I'm getting this error when I try to boot:

```
md: raid0 personality registered as nr 2

md: raid1 personality registered as nr 2

md: md driver 0.90.0 MAX_MD_DEVS=256, MD_SB_DISKS=27

md: Autodetecting RAID arrays

md:  autorun ...

md: ... autorun DONE.

RAMDISK: Couldn't find valid RAM disk image starting at 0.

ReiserFS: md1 warning: sh-2006: read_super_block: bread failed (dev md1, block 2, size 4096)

ReiserFS: md1 warning: sh-2006: read_super_block: bread failed (dev md1, block 16, size 4096)

EXT2-fs: unable to read superblock

Kernel panic: VFS: Unable to mount root fs on unknown-block(0,0)

```

My raidtab is as follows:

```
# /mnt/backup raid1

raiddev /dev/md/0

raid-level 1

nr-raid-disks 2

chunk-size 32

persistent-superblock 1

device /dev/hde3

raid-disk 0

device /dev/hdg3

raid-disk 1

# / raid0

raiddev /dev/md/1

raid-level 0

nr-raid-disks 2

chunk-size 32

persistent-superblock 1

device /dev/hde4

raid-disk 0

device /dev/hdg4

raid-disk 1
```

My grub.conf is:

```
root (hd0,0)

kernel /kernel-2.6.7-r11 root=/dev/md/1
```

If I change that to be:

```
root (hd0,0)

kernel /kernel-2.6.7-r11 root=/dev/md/1 md=1,/dev/hde4,/dev/hdg4
```

I end up with 

```
md: raid0 personality registered as nr 2

md: raid1 personality registered as nr 2

md: md driver 0.90.0 MAX_MD_DEVS=256, MD_SB_DISKS=27

md: Autodetecting RAID arrays

md:  autorun ...

md: ... autorun DONE.

md: Unknown device name: hde4
```

The system is a couple of 80 Gig WDs on an external IDE controller, hence hde and hdg.  They are both partitioned like this:

```
Partition 1: 64 MB, ext2

Partition 2: 1 GB, swap

Partition 3: 20 GB, raid1

Partition 4: The rest, raid0
```

Both raids have reiserfs.  I originally had the boot partition raided (mirrored), but changed that to try to fix my problem.  Didn't work.

I can reboot into the livecd and mount the raids fine; they are valid and not corrupted.  The only weird thing I noticed is that since getting rid of the /boot raid partition and changing the / partition from md/2 to md/1, when I go to use raidstart I do

```
raidstart /dev/md/1
```

And it works fine, but then when I try to mount /dev/md/1 it gives me "SQUASHFS" errors.  I looked at /proc/mdstat and it said that md2 was running, though I had started md1.  I tried, and I can mount md2 fine, and it's my / partition.  I don't know why starting md/1 starts md/2.

I've messed around specifying md/2 in my grub.conf and doing the md=*,/dev/hde4,/dev/hdg4.  But nothing seems to work, it just won't boot.

I know my raid is fine, since I can mount it still.  Any ideas?  (I can provide my fstab too, if needed.)

----------

## mr.twemlow

All right, silly me.  I realized that I probably had the wrong driver for the IDE card, so I put the RAID drives on the onboard nforce2 controller.  Now the kernel boots so far that I can see that it starts the RAID device, but dies with this error:

```
VFS: Mounted root (reiserfs filesystem) readonly.

mount_devfs_fs(): unable to mount devfs, err: -2

Freeing unused kernel memory: 140k freed

Warning: unable to open an initial console.

Kernel panic: No init found.  Try passing init= option to the kernel.
```

----------

## PartyCharly

I've got some problems with my md0 raid device.

Here is my data:

```
    partycharly sbin # mount                    

/dev/md0 on / type ext3 (rw,noatime)

none on /dev type devfs (rw)

none on /proc type proc (rw)

none on /sys type sysfs (rw)

none on /dev/pts type devpts (rw)

/dev/md1 on /home type ext3 (rw,noatime)

/dev/md2 on /var type ext3 (rw,noatime)

none on /dev/shm type tmpfs (rw)

```

But my / is write-protected.

I tried:

```
    partycharly sbin # mount  -no remount,rw / 

mount: block device /dev/md0 is write-protected, mounting read-only

```

but I got the answer that md0 is write-protected.

First I checked the disks; both are fine and not write-protected.

Please help me, I have no more ideas.

----------

## VinnieNZ

I'm trying to do a new install on a server and I need this support.

I've booted from the 2004.2 live cd and when I perform 'modprobe md' I get a 'can't locate module md' error.

I've done a bit of hunting around and it appears as though the module is available under /lib/modules/2.6.7-kernel.../kernel/drivers/md/md.ko but not under the similar 2.4 kernel path.

I'm not 100% sure what kernel version the 2004.2 cd actually boots when you call for the gentoo kernel.

Does anyone have any ideas about this?

Cheers

[Edit]  Ooops, don't mind me.  I was looking for something else in the output of dmesg and it appears that md starts automatically on the new cd's.  Heh   :Embarassed:   :Smile:  [/edit]

----------

## J-ke

```
modprobe raid1
```

works for me...

----------

## Tuinslak

Hi, I just tried this on a server, and everything seems to work

till I get to grub:

```
grub-install --root-directory=/boot /dev/hda
```

What should this be for raid? I tried /dev/md and so on, which didn't work.

I use raid1 with 2 sata maxtor hdd's:

/dev/sda and /dev/sdb

Another thing I tried:

```
grub> setup (hd0)

Error 12: Invalid device requested
```

This is what I get:

```
livecd / # grub-install --root-directory=/boot /dev/sda

/dev/md1: Not found or not a block device.
```

update:

```
grub --no-floppy

root (hd0,0)

setup (hd0,0)
```

worked, and my Gentoo is booting now

but when I try to boot from the other hdd (sdb) only, nothing happens

I tried the same commands for hd1,0, which didn't help.

Thanks

Tuinslak.

----------

## JWU42

Same issue with the 

```
grub-install  --root-directory=/boot /dev/md0 
```

I will try the suggestion and do the "old" grub setup

----------

## Gentii

 *Snooper wrote:*   

> I was wondering if anyone here has software raid working on a msi k7n2 delta-ilsr. I have been unable to get it to do anything for the bios to see the sata controller and boot from it it has be setup as a raid within the controller bios. This does not work because it's not true hardware raid once you get into linux. the other option is breaking the raid then using software raid but this 2 does not work as the bios is unable to boot from the controller without it setup as a raid.
> 
> by the way the controller is a promise 376

 

Well, I have the exact same problem. I have a P4C800-E mobo, which has 2 SATA controllers, one Promise and one ICH5R. I first put my 2 drives on the Promise, and after 3 Gentoo installs I still couldn't get GRUB to load. I then put the 2 drives on the ICH5R controller and it worked fine. But it isn't as fast as it should be; maybe the Promise is faster. So did you solve your problem? Is there a way to use the Promise controller?

----------

## Naughtyus

I've followed this guide to a T, but no matter what I do, after I reboot my system, it doesn't detect that my drives are bootable.  There aren't any error messages during install, grub seems to install and setup perfectly fine, it just isn't detected as a bootable drive when I restart.

Any thoughts?

----------

## pi-cubic

 *krypt wrote:*   

>  *GNU/Duncan wrote:*   I have created a raid array, but when formatting with mkfs.xfs /dev/md1 an error occur
> 
> MD array /dev/md1 not clean state
> 
> if I use raiser or ext2 all is ok. Any solution?  
> ...

 

how can you update the xfsprogs when booting from a live-cd?

----------

## Valin

I've setup a 3 drive RAID 0, but Grub refuses to setup.

/boot = /dev/md0

swap = /dev/sda2, /dev/sdb2, /dev/sdc2

/ = /dev/md1

Additionally, the instructions at the beginning of this topic forget to mention that you have to first create the md devices in /dev before you can run mkraid.

The problem I'm getting is that Grub refuses to recognize hd0 to install itself onto the MBR.  Can I get around this by installing to the boot partition instead?  Even then, it still whines about not finding anything about my drives in the BIOS...huh?

----------

## Arainach

(Please Delete this Post)

----------

## icn

Is one single 36GB Raptor faster than 2 60GB SATA drives in RAID0?  Also, is it easy to have both Windows and Gentoo on the same drive with RAID 0, and which one do you install first, if it's possible?  Any help would be appreciated.  Thanks.

----------

## R!tman

I have serious performance problems on my new pc. I set up reiserfs on a raid 5 of 4 sata disks with a chunk size of 64K (the mdadm standard). But compared to my old pc with only 1 ata disk (and reiser4), the raid performance is horrible.

My old system:

```
# time cat /scratch/big.file > /dev/null 

real    0m49.614s

user    0m0.053s

sys     0m7.746s

# du -h /scratch/bigfile

2.2G    /scratch/big.file
```

My new system:

```
# time cat /scratch/big.file > /dev/null

real    0m55.600s

user    0m0.012s

sys     0m4.471s

# du -h /scratch/bigfile

2.2G    /scratch/big.file
```

Even a single sata drive should be faster than the rather old ata drive. The raid 5 is way too slow in my opinion.

What is really strange though is that:

Old system

```
# iozone -s 4096 

        Iozone: Performance Test of File I/O

                Version $Revision: 3.226 $

                Compiled for 64 bit mode.

                Compiled for 32 bit mode.

                Build: linux 

        Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins

                     Al Slater, Scott Rhine, Mike Wisner, Ken Goss

                     Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,

                     Randy Dunlap, Mark Montague, Dan Million, 

                     Jean-Marc Zucconi, Jeff Blomberg,

                     Erik Habbinga, Kris Strecker.

        Run began: Thu Mar 10 12:43:36 2005

        File size set to 4096 KB

        Command line used: iozone -s 4096

        Output is in Kbytes/sec

        Time Resolution = 0.000001 seconds.

        Processor cache size set to 1024 Kbytes.

        Processor cache line size set to 32 bytes.

        File stride size set to 17 * record size.

                                                            random  random    bkwd  record  stride                                   

              KB  reclen   write rewrite    read    reread    read   write    read rewrite    read   fwrite frewrite   fread  

freread

            4096       4  104049  314064   512255   486165  397089  266838  392078  385437  394798   278637   286135  468387   

451354

iozone test complete.
```

New system

```
# iozone -s 4096 

        Iozone: Performance Test of File I/O

                Version $Revision: 3.226 $

                Compiled for 64 bit mode.

                Build: linux-AMD64 

        Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins

                     Al Slater, Scott Rhine, Mike Wisner, Ken Goss

                     Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,

                     Randy Dunlap, Mark Montague, Dan Million, 

                     Jean-Marc Zucconi, Jeff Blomberg,

                     Erik Habbinga, Kris Strecker.

        Run began: Thu Mar 10 12:27:55 2005

        File size set to 4096 KB

        Command line used: iozone -s 4096

        Output is in Kbytes/sec

        Time Resolution = 0.000001 seconds.

        Processor cache size set to 1024 Kbytes.

        Processor cache line size set to 32 bytes.

        File stride size set to 17 * record size.

                                                    random  random    bkwd  record  stride
      KB  reclen   write rewrite    read    reread    read   write    read rewrite    read   fwrite frewrite   fread  freread
    4096       4  323947 1037443  2026691  2119975 2020731 1160033 1967279 1410868 1952300   297132   985321 1576018  2020731

iozone test complete.
```

Here it seems the RAID is indeed quite a bit faster. Strange...

----------

## fatalglitch

I haven't read through this whole post, so I apologize if I am repeating, but....

It should be noted that with software raid there is one thing many people forget about, but it is HUGE when it comes to both speed AND redundancy:

- Each physical drive should be on a SEPARATE ide bus.

Ok, now I will explain this...

Performance - if both drives are on the same IDE bus, you're really not gaining any speed increase from RAID0 (striping), because the bus still has a maximum throughput and will not exceed that amount...

Redundancy - this should be obvious. Yes, hard drives do go bad... but what happens when the bus goes bad? This may not matter for a desktop system, but for a server setup this is MAJOR. If the bus goes dead, then both your physical drives are gone... and the server crashes.

Just some FYI...

-Tom

----------

## fatalglitch

RAID0 will not stripe across 3 drives... your setup (if using software RAID) would be using 2 physical drives to stripe, plus a backup drive (which would be pointless).

If you have 3 drives, you're better off using RAID5, as it gives the benefits of RAID0 as well as the redundancy of RAID1.

Check your /etc/raidtab for your setup....

Tom

 *Valin wrote:*   

> I've setup a 3 drive RAID 0, but Grub refuses to setup.
> 
> /boot = /dev/md0
> 
> swap = /dev/sda2, /dev/sdb2, /dev/sdc2
> ...

 

----------

## fatalglitch

Try doing a simulated hotswap of one of the partitions... software raid will sync both disks based on the most recent timestamps on the files.

Hotswapping (simulated!!! do NOT try to do a REAL IDE HOTSWAP) will resync the drives and should remove the "dirty" error.

-Tom

 *BlackB1rd wrote:*   

> And my problem extends since it's not only the dirty status it returns. It also doesn't seem to update the 'Update time', which is very concerning. I guess that means it doesn't keep the RAID array synchronized?

 

----------

## dioxmat

FWIW, raid 5 software sucks. See this blog entry for some explanations on the subject.

----------

## BlinkEye

 *dioxmat wrote:*   

> FWIW, raid 5 software sucks. See this blog entry for some explanations on the subject.

 

i couldn't agree more. i just threw away my partitions and am setting up a raid0 system

----------

## neo_phani

ok i tried this with 2 SATA drives and everything went fine, but when i boot the new kernel i get:

checking root file system ...

ext2fs_check_if_mount: No such file or directory while determining whether /dev/md2 is mounted

fsck.ext3: No such file

what could i have done wrong? How do i fix this without having to do it all over again?

Also i noticed i had an xfs entry in my /etc/fstab for md3 which should be ext3 instead... please tell me how i can change this? Can i use the boot cd and mount /dev/md2 and edit stuff... i am lost. My fstab:

```
/dev/md0      /boot     ext2      noauto,noatime     1 2
/dev/md2      /         ext3      noatime            0 1
/dev/hda2     none      swap      sw,pri=0           0 0
/dev/hdc2     none      swap      sw,pri=0           0 0
/dev/md3      /home     xfs       noatime            0 1
/dev/cdroms/cdrom0   /mnt/cdrom   iso9660   noauto,ro   0 0
proc          /proc     proc      defaults           0 0
```


my /etc/raidtab:

```
# /boot (RAID 1)
raiddev                 /dev/md0
raid-level              1
nr-raid-disks           2
chunk-size              32
persistent-superblock   1
device                  /dev/sda1
raid-disk               0
device                  /dev/sdc1
raid-disk               1

# / (RAID 1)
raiddev                 /dev/md2
raid-level              1
nr-raid-disks           2
chunk-size              32
persistent-superblock   1
device                  /dev/sda3
raid-disk               0
device                  /dev/sdc3
raid-disk               1

# /home (RAID 1)
raiddev                 /dev/md3
raid-level              1
nr-raid-disks           2
chunk-size              32
persistent-superblock   1
device                  /dev/sda4
raid-disk               0
device                  /dev/sdc4
raid-disk               1
```

----------

## shakti

If you get this error, create a script and run it:

```
for i in 0 1 2 3 4 5 6 7 8 9 10; do
    mknod /dev/md$i b 9 $i
done
```
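The same nodes can be generated with seq; sketched here as an echo dry run (drop the echo to actually create them -- block major 9 is the md driver):

```shell
# print the mknod commands for /dev/md0 .. /dev/md10 (dry run)
for i in $(seq 0 10); do
    echo mknod /dev/md$i b 9 $i    # remove "echo" to create the node
done
```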

----------

## sro

This HowTo didn't work for me  :Confused: 

my setup:

/dev/md0 = /boot, RAID1

/dev/md1 = /, RAID1

/dev/sda2, /dev/sdb2 = SWAP, no RAID

SATA Software Raid, Onboard SATA Controller, using module SATA_NV (compiled into the kernel).

when trying to boot, i'm receiving this error:

 *Quote:*   

> 
> 
> ext2fs_check_if_mount: No such file or directory while determining whether /dev/md1 is mounted.
> 
> fsck.ext3: No such file or directory while trying to open /dev/md1
> ...

 

devfs=nomount gentoo=udev is set in my grub.conf

The /dev/md0 and /dev/md1 device nodes are successfully created by 'md' while booting, but when it's time for the kernel to access root, i'm getting an error as quoted above.

/etc/fstab is correct, the line for /dev/md1 says "ext3".

I've been working on this problem for the last 7 days, but can't get my software raid1 booting...

----------

## BlinkEye

chroot back and create a kernel with devfs support (do NOT activate "Automount at boot" though). after that boot your system with the following kernel options

```
gentoo=noudev
```

i had the same problems. this is udev  :Twisted Evil: 

after a successful boot with devfs you may switch back to udev - it will work.

----------

## sro

Its working!

I followed your steps - udev is causing this problems.

With devfs support, the system now boots perfectly.

Thank you.

----------

## dac

I am new and just learning. Therefore, I was following the directions when I ran into a problem.

Motherboard is ASUS A8V

I have setup two SATA drives using fdisk.  

I run modprobe md

I create my raidtab file.  I want to have Raid 1.

```
# / (RAID 1)
raiddev                 /dev/md2
raid-level              1
nr-raid-disks           2
chunk-size              32
persistent-superblock   1
device                  /dev/sda3
raid-disk               0
device                  /dev/sdb3
raid-disk               1
```

and then when I run "mkraid /dev/md*", I get "cannot determine md version: no MD device file in /dev"

Why?

----------

## sro

Running "mkraid /dev/md*" means you should run "mkraid" for each of your arrays.

example for your RAID1:

mkraid /dev/md2

----------

## BlinkEye

if this doesn't solve your problem you use udev and MUST do the following:

```
cd /dev

MAKEDEV md
```

after that, try

```
mkraid /dev/md*
```

again

----------

## dac

I think that helped because it is now syncing.  Thanks.

----------

## tscolari

i followed the instructions, but now when i tried to do:

mkraid /dev/md0

i got:

```
cannot determine md version: no MD device file in /dev
```

i've already modprobed md

also can I start md# with 1? 0 looks confusing :p

----------

## BlinkEye

how about reading my post two posts above yours?

----------

## fast40x

I am installing Gentoo 2005.0 over top of my Redhat installation. I copied my /etc/raidtab from my old partition. I ran MAKEDEV md and then mkraid. 

mkraid /dev/md0 says "cannot determine md version: 6." 

I have tried with and without the boot option DOSCSI. 

I read somewhere that someone had a problem with the same output, and ended up fixing it by recompiling without RAID 6. 

Is there anything I can do to get this to work? Here is my old raidtab: 

```
raiddev             /dev/md0
raid-level                  1
nr-raid-disks               2
chunk-size                  64k
persistent-superblock       1
nr-spare-disks              0
    device          /dev/sda5
    raid-disk     0
    device          /dev/sdb5
    raid-disk     1
... omitted for brevity ...
```

----------

## fast40x

I think I have my problem solved.

The first thing I did to start getting me on the right track is:

```
modprobe md
```

It was in the instructions, but I missed it.

then I realized I shouldn't be doing the mkraid in the first place, but rather:

```
raidstart
```

after that, mounting the systems was a breeze.

----------

## kevev

since this post hasn't been touched in a while, I'll add that I tried using the instructions in the first post with no success.

Here are my steps that did work.

I did everything in the first post, except where it says to edit menu.lst you should really edit grub.conf.

When it came to running grub to install the boot loader I had errors. I found a bug report on another site that helped me: you have to edit /etc/mtab to reflect the actual physical drive and partition that will hold the grub stage1. My /etc/mtab stated

/dev/md0 /boot *****

and I changed it to

/dev/hda1 /boot *****

(the stars represent the part of the line I did not change).

This allowed me to run grub-install. I also changed /dev/hda1 to /dev/hdc1 and ran grub-install again so both drives will have the boot loader. I am hoping this will allow me to still boot if the first drive fails.
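As a sketch, the mtab edit boils down to rewriting the device field of the /boot line (the sample line below is assumed; the grub-install calls are left as comments since they need the real disks):

```shell
# swap the md device for the real partition in a sample mtab line
mtab_line="/dev/md0 /boot ext2 rw 0 0"
echo "$mtab_line" | sed 's|^/dev/md0 |/dev/hda1 |'
# prints: /dev/hda1 /boot ext2 rw 0 0
# then:   grub-install /dev/hda
# repeat with /dev/hdc1 in mtab, then: grub-install /dev/hdc
```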

Here is the original page with the bug report.

https://bugzilla.redhat.com/bugzilla/long_list.cgi?buglist=138572

----------

## JeffBlair

Got a question for y'all. I am about to get a new server. It has 4 SCSI 9.1G HDD's right now. Later on I am going to add 18.2G. I was wondering 2 things. One, can I swap out the 4 drives and replace them with 6 new ones? I know if I swap out 1 at a time it should rebuild the RAID, right? Also, could you look at what I think the config should be for the 4 drives.

```

-------------cfdisk

sda1   512   /boot   RAID 1

sda2   200M   /   RAID 5

sda3   3G   /var   RAID 5

sda5   3G   /home   RAID 5

sda6   500M   /tmp   RAID 5

sda7   *   /usr   RAID 5

sdb1   512   /boot   RAID 1

sdb2   200M   /   RAID 5

sdb3   3G   /var   RAID 5

sdb5   3G   /home   RAID 5

sdb6   500M   /tmp   RAID 5

sdb7   *   /usr   RAID 5

sdc1   512   swap   NO RAID

sdc2   200M   /   RAID 5

sdc3   3G   /var   RAID 5

sdc5   3G   /home   RAID 5

sdc6   500M   /tmp   RAID 5

sdc7   *   /usr   RAID 5

sdd1   512   swap   NO RAID

sdd2   200M   /   RAID 5

sdd3   3G   /var   RAID 5

sdd5   3G   /home   RAID 5

sdd6   500M   /tmp   RAID 5

sdd7   *   /usr   RAID 5

-------------- /etc/raidtab

# /boot (RAID 1) 

raiddev                 /dev/md0 

raid-level              1 

nr-raid-disks           2 

chunk-size              32 

persistent-superblock   1 

device                  /dev/sda1 

   raid-disk               0 

device                  /dev/sdb1 

   raid-disk               1 

# / (RAID 5)

raiddev    /dev/md1

raid-level 5 

nr-raid-disks 4

persistent-superblock 1 

chunk-size 32 

parity-algorithm right-symmetric 

device /dev/sda2 

   raid-disk 0 

device /dev/sdb2 

   raid-disk 1 

device /dev/sdc2 

   raid-disk 2 

device /dev/sdd2 

   raid-disk 3 

# /var (RAID 5)

raiddev    /dev/md2

raid-level 5 

nr-raid-disks 4

persistent-superblock 1 

chunk-size 32 

parity-algorithm right-symmetric 

device /dev/sda3 

   raid-disk 0 

device /dev/sdb3 

   raid-disk 1 

device /dev/sdc3 

   raid-disk 2 

device /dev/sdd3 

   raid-disk 3

# /home (RAID 5)

raiddev    /dev/md3

raid-level 5 

nr-raid-disks 4

persistent-superblock 1 

chunk-size 32 

parity-algorithm right-symmetric 

device /dev/sda5 

   raid-disk 0 

device /dev/sdb5 

   raid-disk 1 

device /dev/sdc5 

   raid-disk 2 

device /dev/sdd5 

   raid-disk 3

# /tmp (RAID 5)

raiddev    /dev/md4

raid-level 5 

nr-raid-disks 4

persistent-superblock 1 

chunk-size 32 

parity-algorithm right-symmetric 

device /dev/sda6 

   raid-disk 0 

device /dev/sdb6 

   raid-disk 1 

device /dev/sdc6 

   raid-disk 2 

device /dev/sdd6 

   raid-disk 3

# /usr (RAID 5)

raiddev    /dev/md5

raid-level 5 

nr-raid-disks 4

persistent-superblock 1 

chunk-size 32 

parity-algorithm right-symmetric 

device /dev/sda7 

   raid-disk 0 

device /dev/sdb7 

   raid-disk 1 

device /dev/sdc7 

   raid-disk 2 

device /dev/sdd7 

   raid-disk 3

```

I am guessing I can just add on the other 2 drives later on when I get them. Also, is it possible to do LVM on top of this? If so, how would I do it?  Thanks a lot.
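For the capacity side of the question: raid5 usable space is (number of drives - 1) * drive size, so the four 9.1G disks would give roughly:

```shell
# raid5 usable capacity with four 9.1 GB drives: (4 - 1) * 9.1
awk 'BEGIN { print (4 - 1) * 9.1 }'    # prints 27.3
```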

----------

## devilrick

I have followed this how-to word for word and everything has gone smoothly, until it came to rebooting into the new kernel.

I have setup grub on a mirrored boot partition which is found and the kernel loads, but upon loading the striped root partition a kernel panic occurs where it cannot open root device "md2".

kernel support is built in for raid mirrored and striped and so is reiserfs (my only file system type). 

Running 2.6 kernel.

The striped raid volume works on the livecd after the appropriate MAKEDEV, modprobe md and mkraid, and can be accessed after mounting, so it looks like a kernel problem, although I can't think what I am missing.

Any ideas?

Thanks.

----------

## schism39401

I have setup a raid1 on my workstation but cat /proc/mdstat shows this:

```

Personalities : [raid1]

md2 : active raid1 hda3[0]

      29294912 blocks [2/1] [U_]

md3 : active raid1 hda4[0]

      8761920 blocks [2/1] [U_]

md0 : active raid1 hda1[0]

      45248 blocks [2/1] [U_]

unused devices: <none>

```

Also, when I boot I get an error about /dev/md0 having a bad superblock. That is my boot partition and it is ext3. I have run fsck and it says it's fixed, but when I reboot I get the same error.

My raidtab:

```

# /boot (RAID 1)

raiddev                 /dev/md0

raid-level              1

nr-raid-disks           2

chunk-size              32

persistent-superblock   1

device                  /dev/hda1

raid-disk               0

device                  /dev/hdd1

raid-disk               1

# / (RAID 1)

raiddev                 /dev/md2

raid-level              1

nr-raid-disks           2

chunk-size              32

persistent-superblock   1

device                  /dev/hda3

raid-disk               0

device                  /dev/hdd3

raid-disk               1

# /home (RAID 1)

raiddev                 /dev/md3

raid-level              1

nr-raid-disks           2

chunk-size              32

persistent-superblock   1

device                  /dev/hda4

raid-disk               0

device                  /dev/hdd4

raid-disk               1

```

And my fstab:

```

/dev/md0                /boot           ext3            noauto,noatime  1 2

/dev/md2                /               reiserfs        noatime         0 1

/dev/hda2               none            swap            sw              0 0

/dev/md3                /home           reiserfs        noatime         0 0

```

Any help would be appreciated.

----------

## als

I have been looking all over these forums trying to install gentoo linux on my two SATA 80gb hard drives in RAID 1. I'm also using a PCIe (ATI x700) video card; not sure if that makes a difference for this part. I have followed this guide step by step about 15 different times.

Here's my error. (When booting)

```
>> Loading modules
     :: Scanning for .......
         //
         //
>> Activating udev
>> Determining root device...
!! The root block device is unspecified or not detected.
    Please specify a device to boot, or "Shell" for a shell...
```

fdisk /dev/sda:

```
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          32      257008+  fd  Linux raid autodetect
/dev/sda2              33        9451    75658117+  fd  Linux raid autodetect
/dev/sda3            9452        9963     4112640   82  Linux swap / Solaris
```

fdisk /dev/sdb:

```
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1          32      257008+  fd  Linux raid autodetect
/dev/sdb2              33        9451    75658117+  fd  Linux raid autodetect
/dev/sdb3            9452        9963     4112640   82  Linux swap / Solaris
```

Fstab:

```
/dev/md0      /boot         ext2      noauto,noatime        1 2
/dev/md1      /             reiserfs  noatime               0 1
/dev/sda3     swap          swap      defaults,pri=1        0 0
/dev/sdb3     swap          swap      defaults,pri=1        0 0
/dev/cdroms/cdrom0   /mnt/cdrom   iso9660   noauto,ro       0 0
proc          /proc         proc      defaults              0 0
/dev/fd0      /mnt/floppy   auto      noauto                0 0
shm           /dev/shm      tmpfs     nodev,nosuid,noexec   0 0
```

Raidtab:

```
# /boot (RAID 1)
raiddev                 /dev/md0
raid-level              1
nr-raid-disks           2
chunk-size              32
persistent-superblock   1
device                  /dev/sda1
raid-disk               0
device                  /dev/sdb1
raid-disk               1

# / (RAID 1)
raiddev                 /dev/md1
raid-level              1
nr-raid-disks           2
chunk-size              32
persistent-superblock   1
device                  /dev/sda2
raid-disk               0
device                  /dev/sdb2
raid-disk               1
```

and lastly here is my grub.conf:

```
default 0
timeout 30
splashimage=(hd0,0)/boot/grub/splash.xpm.gz

title=Gentoo Linux 2.6.13-r5
root (hd0,0)
kernel /boot/kernel-genkernel-x86-2.6.13-gentoo-r5 root=/dev/md1 vga=791 splash=silent
initrd /boot/initramfs-genkernel-x86-2.6.13-gentoo-r5
md=0,/dev/sda1,/dev/sdb1
md=1./dev/sda2,/dev/sdb2
```

Suggestions and experience would be helpful. Thank you!

----------

## Massimo B.

Did you mention the problem of RAID0 on two disks with ext3? See The Software-RAID HOWTO.

There is also the possibility of giving mkfs.ext3 the stripe size, but I am not sure if that brings the same result.
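A sketch of the stride calculation (the 32k chunk and 4k block sizes are assumed values; the exact mkfs flag depends on your e2fsprogs version, so the calls are only shown as comments):

```shell
# stride = raid chunk size / filesystem block size
chunk_kb=32
block_kb=4
echo $((chunk_kb / block_kb))    # prints 8
# older e2fsprogs: mkfs.ext3 -b 4096 -R stride=8 /dev/mdX
# newer e2fsprogs: mkfs.ext3 -b 4096 -E stride=8 /dev/mdX
```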

----------

## Attila

Hiho,

Very cool Howto!

Some notes for sparc64 users:

- use a sun disklabel, not dos (sure, dos works as long as you do not try to boot from it)

- if you have problems writing an initial sun disklabel, boot a solaris-cd (you can get it for free from sun.com) and use "format" to write a correct disk-label

- while partitioning: DO NOT TOUCH PARTITION 3 (slice 2 under solaris)

- use /boot (20 mb or so)

- start /boot at cylinder 0 (i can't get silo to work on a partition not starting at cyl 0)

- do not mirror /boot (if you do so, you will mirror your partition table too)

- Set partition type 0xfd on all partitions you want to mirror (on both disks)

- to make it possible to boot from both disks, add both /boot's to the fstab (e.g. as /boot and /boot.bkp)

- copy kernel & silo.conf onto both /boot's

- Start Silo twice 

```

silo -C /boot/silo.conf

silo -C /boot.bkp/silo.conf

```

- When you compile a new kernel, do not forget to copy it to both /boot's

- All other things are the same as described in this howto 

Now you are able to boot from both disks - you can remove (or replace) one and your system will still boot without a problem. Simply use "boot diskX". 

I tested around and never got mirroring /boot *really* working. Sure you *can* mirror it, but i *always* had problems with silo, corrupt filesystems, etc... so it works sometimes, somehow ...  :Smile: 

That's all.

  Atti

----------

## ericxx2005

Thanks for the howto, now getting 93Mb/sec on two 7200.7's!

----------

## Aries-Belgium

Sorry to bump up an older thread but ...

I'm installing gentoo on a raid system as we speak, so far everything is going great, thanks to this howto!  :Wink: 

But I got a question about a worst case scenario: what if grub won't boot linux and I have to reconfigure and recompile my kernel? How do I mount the raid partitions again? I think, correct me if I'm wrong, I have to put the raidtab file in /etc again and then run raidstart /dev/md0. But what if I get an error?

----------

## BlinkEye

i had to do that a couple of times. last week my reiserfs wasn't bootable and i had to use a livecd. actually, there are two tools around:  raidtools (raidstart, raidstop etc.) and mdadm.

 *portage wrote:*   

> sys-fs/mdadm
> 
>       Latest version available: 1.12.0
> 
>       Latest version installed: 1.12.0
> ...

 

i've had better experiences with mdadm. i'm running a raid1 and one harddisk was bad, so i started the raid1 with one harddisk only:

```
cd /dev

MAKEDEV md

mdadm --assemble /dev/md3 --force /dev/sda5
```

replace /dev/md3 and /dev/sda5 according to your hardware. 

to start your raid if you have two drives use:

```
cd /dev

MAKEDEV md

mdadm --assemble /dev/md3 /dev/sda5 /dev/sdb5
```

----------

## Aries-Belgium

Okay, I was able to get it working and it boots now (had a few problems). I did it with two 80GB harddisks. My boot partition is raid1 and my root partition is raid0. I always thought raid0 combined the sizes of the two disks, but when I type "df" it shows me that there is only 74GB of space available. Was I wrong about raid0? Or did I do something wrong? And if I did something wrong, does that mean I have to start all over again?   :Confused: 
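The 74GB figure is actually the giveaway: it is one 80GB (decimal) disk expressed in binary gigabytes, which is what a raid1 would report (quick shell check, assuming an 80GB disk):

```shell
# 80 GB (decimal) converted to the binary GB that df reports
echo $((80 * 1000 * 1000 * 1000 / 1024 / 1024 / 1024))    # prints 74
# a raid0 of two such disks would report about twice that (~149)
```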

[EDIT]

Okay, I did something wrong: md2 is also raid1. I forgot to edit the raid level when I copied it from this tutorial. Can I fix it without losing data?  :Confused:  Probably not  :Crying or Very sad: 

----------

## BlinkEye

if you really have the wrong raid i do not know of a way to resync them without actually backing up everything. you may be interested in this backup howto.

----------

## Aries-Belgium

 *BlinkEye wrote:*   

> if you really have the wrong raid i do not know of a way to resync them without actually backing up everything. you may be interested in this backup howto.

 

Thanks, for your reply. I decided to use this system to test everything and testing a few applications and stuff. I will do a final reinstall next month or something.

----------

## Zee

Hi.

  I hope I'll get a reply, for I'm really desperate. I found the sw RAID how-to on the Gentoo-wiki page and I followed it to the word. I'm trying to set up a RAID 1 array that would be used as / on my home-made NAS. The installation went through without any problems, but when I try to boot, udev complains: wrong fs type, bad option, bad superblock on udev

  there are NO errors prior to this message.

can you please help me,

zee

----------

## BlinkEye

Please provide the following information:

/etc/fstab

/boot/grub/grub.conf

Just a thought: in case you use a filesystem other than ext2/3 like reiserfs or jfs did you emerge the corresponding utils described in "Chapter 9.d"?

----------

## Zee

fstab:

```
/dev/md0      /           reiserfs   noatime,notail        0 1
/dev/hdc1     /home       reiserfs   noatime,notail        0 2
/dev/hdc2     none        swap       sw                    0 0
# NOTE: The next line is critical for boot!
proc          /proc       proc       defaults              0 0
shm           /dev/shm    tmpfs      nodev,nosuid,noexec   0 0
```

instead of grub I use lilo:

```
compact
# Should work for most systems, and do not have the sector limit:
lba32
# If lba32 do not work, use linear:
#linear
# MBR to install LILO to:
boot = /dev/hda
map = /boot/.map
install = /boot/boot-menu.b
menu-scheme=Wb
#prompt
# If you always want to see the prompt with a 15 second timeout:
#timeout=150
#delay = 50
# Normal VGA console
vga = normal
# VESA console with size 1024x768x16:
#vga = 791
#default=kernel-2.6.15

image = /boot/kernel-2.6.15
        root = /dev/md0
        label = Gentoo
        read-only # read-only for checking
```
I use reiserfs and the reiserfsprogs package has already been installed.

I tried with udev version 079 and 087, but it's always the same.

The strange part is that /sys and /proc get mounted w/o problems.

thanks,

zee

----------

## Dikkiedik

I'm doing a software raid install, but in the meanwhile the software raid howto on the gentoo-wiki site has changed... It's missing some details, like the emerging of DMRAID etc... Luckily I noticed the link below the page pointing to the original wiki/howto... the current wiki stinks a bit in my opinion, but I guess that's because it's new. I love the original howto though. Thanks for the work =), I'm very grateful for it.

----------

## guid0

on my machine it was in fact udev that failed. I forgot to compile Unix domain sockets into the kernel (network section); adding that fixed it for me.

----------

