# Software RAID 0 HOW-TO

## cosjef

If you like speed, but don't like spending money, software RAID 0 is for you! 

Generally, RAID (Redundant Array of Inexpensive Disks) is found on expensive corporate servers, where performance and reliability matter most. Such servers generally use SCSI RAID cards that allow RAID modes 0-5 on SCSI disks.

For the home user, SCSI drives and SCSI RAID cards are often overkill, not to mention very expensive. The alternative is IDE RAID, which requires either an add-on RAID controller card (approx. $100 retail) or an integrated motherboard RAID controller. The latter is usually a form of software RAID (one that only works in Windows), not true hardware RAID.

There are multiple modes of RAID, each with tradeoffs between speed and data redundancy. The highest-performing mode is RAID 0, which evenly distributes disk reads and writes across all member disks of the array. While this mode offers zero redundancy, it lets two drives working together greatly outperform a single drive acting alone.
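To make the striping idea concrete, here's a quick sketch you can paste into a shell (illustrative only; the 32 KB chunk matches the raidtab we'll create later, and the offset is made up):

```shell
# With a 32 KB chunk and two disks, chunk n of the array lives on
# disk (n mod 2), so sequential I/O alternates between the drives.
chunk_kb=32
offset_kb=100                              # an arbitrary read position
chunk_index=$(( offset_kb / chunk_kb ))    # which chunk holds that offset
disk=$(( chunk_index % 2 ))                # which of the two member disks
echo "offset ${offset_kb}KB -> chunk ${chunk_index} -> disk ${disk}"
```

This is why large sequential reads roughly double in speed: each drive only has to deliver every other chunk.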

This tutorial will show you how to create a high-performance RAID-0 setup using two IDE hard drives and NO hardware RAID controller. The Linux community has created a form of software RAID that is available for free and is not tied to any specific vendor hardware. This should result in a huge jump in I/O performance on your system.

The price of this performance boost is some additional CPU usage; however, most Gentoo systems are built on modern CPUs of a gigahertz or more, which is more than enough power to drive the array without impacting system performance.

You'll need two IDE hard drives of equal (or almost equal) size. It does not matter if the drives come from different hardware vendors, but they must be as close to the same size as possible. (Note: if the drive sizes differ substantially, the array's capacity will be limited to double the size of the smallest drive; any extra space on the larger drive is wasted.)
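A purely hypothetical illustration, with invented drive sizes:

```shell
# RAID-0 capacity = number of disks * size of the smallest member.
drive1_mb=40000                 # hypothetical 40 GB drive
drive2_mb=60000                 # hypothetical 60 GB drive
smallest=$drive1_mb
if [ "$drive2_mb" -lt "$smallest" ]; then smallest=$drive2_mb; fi
echo "array capacity: $(( 2 * smallest )) MB"   # 80000 MB; 20 GB goes unused
```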

This document is meant to be used in tandem with the Gentoo Linux Installation Instructions document. You will need to swap between that document and this one during installation, paying attention to where the steps differ. (Note: I owe a great deal of credit for this document to Chris Atwood, whose RAID-1 directions got me started.)

This tutorial assumes you have both drives connected to your IDE controller. The ideal setup for maximum performance is for each drive to be set as "master" and connected to a separate IDE channel. If you have other devices (such as a CDROM) as slave drives on the same channel, you may experience some performance degradation.

Let's walk through it:

1) Boot the Gentoo LiveCD.

2) Load the module for multi-device support: 

# modprobe md

3) Configure networking.

4) Set your system date/time.

5) Filesystems, partitions and block devices step:

Follow the Gentoo guidelines for creating /boot, swap, and / partitions, but create IDENTICAL partitions of the SAME SIZE on BOTH drives. Each physical drive should look identical to the other in terms of partitioning. When choosing the partition type, be certain to select type 83 (Linux) for the /boot partition. DO NOT make it a RAID partition. Make the swap partition type 82 (Linux swap), and make the large / partition type FD (Linux RAID autodetect).

(Note: A software RAID-0 array is not bootable, so you need a separate /boot partition that is not part of the array.)

6) Create an /etc/raidtab file. This file maps virtual RAID devices to physical partitions, and is required for your array to function. If your drives are of different sizes (which is not recommended), the smaller of the two drives should be raid-disk 0 in this file.

```
# / partition
raiddev                 /dev/md0        # raid device name
raid-level              0               # raid 0
nr-raid-disks           2               # number of disks in the array
chunk-size              32              # stripe size in kilobytes
persistent-superblock   1
device                  /dev/hda3       # first device in the raid array
raid-disk               0               # disk position index in array
device                  /dev/hdc3       # second device in the raid array
raid-disk               1               # disk position index in array
```

For more information on the parameters of this configuration file, see the man 

page here: 

http://leaf.sourceforge.net/devel/cstein/Packages/man/raidtab.5.man.htm

7) Creating filesystems step:

Run the mkraid command to create the RAID device:

```
# mkraid --really-force /dev/md0
```

(You can ignore the scary warnings here.)

Run cat /proc/mdstat to verify the success of this operation:

```
# cat /proc/mdstat
Personalities : [raid0]
read_ahead 1024 sectors
md0 : active raid0 ide/host0/bus1/target0/lun0/part3[1] ide/host0/bus0/target0/lun0/part3[0]
      74100352 blocks 32k chunks
```

This output shows the detail on your RAID device.

Next, create standard ext3 filesystems on the RAID virtual device and the /boot partition:

```
# mke2fs -j /dev/md0
# mke2fs -j /dev/hda1
```

Create the swap space on each physical drive and turn it on:

```
# mkswap /dev/hda2
# swapon /dev/hda2
# mkswap /dev/hdc2
# swapon /dev/hdc2
```

(Note: swap space will not be RAID'ed, but the kernel will use both areas efficiently via an entry we will make in /etc/fstab.)

8) Mount partitions step:

```
# mount /dev/md0 /mnt/gentoo
# mkdir /mnt/gentoo/boot
# mount /dev/hda1 /mnt/gentoo/boot
```

9) Stage tarballs and chroot step:

Copy the /etc/raidtab file you created into the Gentoo chroot:

```
# cp /etc/raidtab /mnt/gentoo/etc/raidtab
```

10) Follow all the usual Gentoo bootstrapping steps until you come to the "Installing the kernel and a System Logger" stage.

In addition to what Gentoo normally requires in the kernel, you must compile in support for RAID devices and RAID level 0. Do NOT compile them as modules. These settings can be found under the "Multi-device support (RAID and LVM)" section. In this section, enable the following:

```
Multiple devices driver support (RAID and LVM)
RAID support
RAID-0 (striping) mode
```

Follow all normal kernel compilation steps.

11) Modifying /etc/fstab for your machine step:

You must alter the /etc/fstab file to account for the RAID virtual device, rather than the usual method of specifying physical devices. Your fstab should look like the one below (note that / is mounted from /dev/md0, the RAID device we created):

```
/dev/hda1            /boot        ext3      noatime          1 2
/dev/md0             /            ext3      noatime          0 1
/dev/hda2            swap         swap      defaults,pri=1   0 0
/dev/hdc2            swap         swap      defaults,pri=1   0 0
/dev/cdroms/cdrom0   /mnt/cdrom   iso9660   noauto,ro        0 0
```

Note that the two swap partitions have the "pri=1" flag set. Giving both the same priority tells the kernel to use each swap space equally, balancing the load between them on a round-robin basis.
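Here's a toy sketch of the idea (purely illustrative, not output from any real tool): with equal priorities, successive swap pages fan out across both devices, much like striping:

```shell
# Illustrative only: equal-priority swap areas receive pages in turn.
for page in 0 1 2 3; do
    if [ $(( page % 2 )) -eq 0 ]; then dev=hda2; else dev=hdc2; fi
    echo "swap page $page -> /dev/$dev"
done
```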

Continue following all standard instructions for Gentoo install.

12) The Configure a Bootloader section:

The grub.conf file needs to look like this:

```
default 0
timeout 10
splashimage=(hd0,0)/boot/grub/splash.xpm.gz

title=Gentoo Linux RAID
root (hd0,0)
kernel (hd0,0)/boot/bzImage root=/dev/md0
```

Note that we specify the kernel root as the RAID multiple device.

13) You are done!! Reboot and you will be logged into your new RAID-0 system. To verify everything is working, issue the "cat /proc/mdstat" command again and review the information provided.

You may now use your RAID'ed filesystem like any other standard Linux filesystem. Have fun with your "reborn" system!!

----------

## green sun

Very nice, but how is it significantly different from this?

How to do a gentoo install on a software RAID

On this tho, here's a question:

I have set up RAID, gotten up to installing grub & had a power loss. When I get back (reboot off of the LiveCD) & modprobe md, it works, but I cannot then

```
mount /dev/md2 /mnt/gentoo
```

to proceed with the chroot. Obviously, since I only need to install grub, I dont want to mkraid again... any ideas on how to proceed?

----------

## cosjef

Thanks. The primary difference is that Chris' tutorial is for RAID1, not RAID0. He also left out a few steps and a good deal of detail that I tried to add.

As to your problem, check the /etc/ directory in your chroot and the LIVECD root directory. If the raidtab file is not in both places, you'll need to recreate it, and copy it back into chroot. MD depends on that file being in place to map out the virtual RAID array.

----------

## Lovechild

I found that for some reason my 2.5/2.6 kernels disliked /dev/mdX and instead wanted /dev/md/X.

Anyone care to test this, and elaborate on why this is?

----------

## lurid

I don't really know much about RAID, so excuse my ignorance.  Here's my situation.  My 40 gig drive died and I'm currently making do with two old drives.  One's an 8 gig ATA33 and the other is a 9 gig ATA66.  Obviously these are really slow compared to the 40 gig ATA100.  What I'm wondering is what kind of speed increase we're talking about here, and whether this might be an option for me since I can't really afford a replacement drive.  I'm assuming the speed of the RAID will be as fast as the slowest drive, which being ATA33 is pretty slow. Current hdparm stats are

```
root@virulent lurid # hdparm -tT /dev/hda3

 

/dev/hda3:

 Timing buffer-cache reads:   128 MB in  1.03 seconds =123.67 MB/sec

 Timing buffered disk reads:  64 MB in  5.71 seconds = 11.22 MB/sec

root@virulent lurid # hdparm -tT /dev/hdc1

 

/dev/hdc1:

 Timing buffer-cache reads:   128 MB in  0.99 seconds =128.64 MB/sec

 Timing buffered disk reads:  64 MB in  6.48 seconds =  9.88 MB/sec
```

So my question is basically what kind of speedup I'm looking at, and whether or not doing this on two slow drives will give me the speed/performance of a newer, faster drive.  The only thing that might be a downside is the lack of space, since the array would be limited to double the size of the smaller drive.

----------

## green sun

 *cosjef wrote:*   

> Thanks. The primary difference is that Chris' tutorial is for RAID1, not RAID0. He also left out a few steps and a good deal of detail that I tried to add.
> 
> As to your problem, check the /etc/ directory in your chroot and the LIVECD root directory. If the raidtab file is not in both places, you'll need to recreate it, and copy it back into chroot. MD depends on that file being in place to map out the virtual RAID array.

 

Right... no criticism intended... I'm setting up RAID1, so I'll focus on his post.

As for my problem, I ended up reinstalling the machine (err... I'm reinstalling now). I was getting a kernel panic, most likely because I didn't have the correct things added to the kernel. I'm reloading & being more careful  :Wink:

BTW, both HOW-TOs are very helpful. Thanks.

----------

## Lovechild

Also, a correction:

A software RAID is perfectly bootable, _as long_ as it's RAID1.

I have my 2 40gb HDs hooked up in a RAID0 setup, where I have 2x50 megs set aside for a RAID1 /boot.

----------

## cosjef

Here's my un-RAID'ed boot partition:

```
# hdparm -tT /dev/hda1

/dev/hda1:
 Timing buffer-cache reads:   128 MB in  0.92 seconds =139.13 MB/sec
 Timing buffered disk reads:  64 MB in  1.77 seconds = 36.06 MB/sec
```

Here's the RAID-0 array, which uses the same disk:

```
# hdparm -tT /dev/md0

/dev/md0:
 Timing buffer-cache reads:   128 MB in  0.84 seconds =151.48 MB/sec
 Timing buffered disk reads:  64 MB in  1.25 seconds = 51.41 MB/sec
```

----------

## lurid

Hm.  So you're getting about a 20 MB/sec increase in performance, it seems.  How's that 'feel' to you?  Noticeably faster?  I would think that with my drives being as slow as they are, a 20 MB/sec boost would certainly be noticeable.  Kinda like how back in the old days overclocking a 200mhz to 266/300 made a huge difference, but nowadays it's hardly noticed at all.

Eh maybe I should give it a shot, just for the hell of it.    :Cool: 

Btw, you mentioned using ext3 as the file system.  Can this be done with ReiserFS as well?  It seems to make a major difference in the speed of my disk.

----------

## cosjef

The system definitely feels more responsive; things seem to "snap in" very quickly. Disk I/O is one place you'll definitely see a performance boost. Since it's free to set up and you already have the drives, why not give it a whirl? You get a cheap performance boost, and you'll learn something along the way! 

I'm certain you can use ReiserFS instead of ext3 on the RAID array, which should also help performance. You might want to do some research on the proper RAID chunk size for a ReiserFS filesystem, as I seem to recall ReiserFS being optimized for smaller file sizes.

----------

## taskara

raid howtos and info are always welcome  :Very Happy: 

----------

## green sun

 *cosjef wrote:*   

> The system definitely feels more responsive; things seem to "snap in" very quickly. Disk I/O is one place you'll definitely see a performance boost. Since its free to setup and you already have the drives, why not give it a whirl? You get a cheap performance boost, and you'll learn something along the way! 
> 
> I'm certain you can use Reiser FS instead of EXT3 on the RAID array, which should also help performance. You might want to do some research on the proper RAID chunk size for a REISER filesystem, as I seem to recall REISER being optimized for smaller file sizes.

 

FYI, I run Reiser on a RAID1 machine, a Gentoo backup DNS server. If Reiser won't work due to chunk size, etc., you can try XFS (I never have), which allows you to customize settings quite a bit with the mkfs.xfs command... but this is straying OT  :Smile:

----------

## lurid

I found this document describing raid0 with reiserfs; it's pretty old though. They suggest a chunk size of 64.  It's a rather detailed piece and they are using reiserfs in it, so I guess I should set it up like that. I think I'll give it a shot as soon as I back up some stuff.
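If I'm reading it right, the only change that suggestion implies to the raidtab from the HOW-TO is the chunk-size line (everything else stays the same):

```
chunk-size              64              # 64 KB stripes, per that reiserfs document
```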

I'll report back with my experience, good or bad.

----------

## taskara

what is the best chunk size to use in conjunction with reiserfs ?

----------

## lurid

Ok..  problems.

Install went fine, I have the raid0 setup going and it's fast... I was pretty happy until I rebooted.  On boot, fsck wants to check the filesystem... evidently, md0 doesn't look like a proper reiserfs filesystem to it, so it says it can't open the partition.  Then it tells me I can log in as root, or CTRL+D to reboot.  Logging in as root lets me into the system, but it's all mounted read-only.

So I boot the LiveCD but I can't get it to mount the raid partition(s).  It says I need to specify the filesystem type, which is reiser, but apparently it doesn't look like one.  So I'm basically locked out of my system.

Erm... any advice?  First, how can I get the raid array to mount with the LiveCD?  Second, how can I get either fsck to not attempt to check / on startup, or get it to recognise the raid?

I have 0 0 as the 5th and 6th entries in my fstab for my / partition...

----------

## lurid

Sorry about the second post... I'm doing this from lynx off the LiveCD, so bear with me.   :Wink:

Alright, I've got the raid disk mounted.  I'm using the rc3 LiveCD, so it might be different with this one, but I just had to recreate the /etc/raidtab file, then zap /etc/init.d/md and evms, then restart both.

Mount it with: mount /dev/evms/md/md0 /mnt/gentoo

Not too hard.  However, I still can't boot normally.  Since it was fsck that was causing the problems, I renamed /sbin/fsck.reiserfs and tried it that way.  Made it through the boot sequence, but a login prompt never appeared.   :Neutral:   I'm still at a complete loss.

----------

## taskara

did u emerge reiserfsprogs ?

----------

## lurid

Yes, I did.

The problem is that the scripts checkroot and checkfs mount / read-only, then have fsck look them over before remounting read-write.  For whatever reason, fsck.reiserfs is unable to open the raid array and it drops to a shell.  It's got to be an issue with fsck.reiserfs because I'm posting this on the mounted and chroot'ed raid and it all works fine.

----------

## lurid

Alright, I figured it out.  Sorry for all the posts.   :Sad: 

The answer was right in front of me.  The problem is what Lovechild said: the filesystem wasn't recognised because /dev/md0 doesn't exist.  devfs rearranges everything.  It's not a kernel issue, which from Lovechild's post I thought it was; it's a devfs issue.  I'm running 2.4.20-ck6 so I didn't pay attention to what he said... but it applies for me too.  I need to put /dev/md/0 in fstab, not /dev/md0.

So I'm all set.  Watching X compile right now.  I did an hdparm test and was pretty impressed with the results.  I'll paste them later since I'm still using lynx.   :Wink:

----------

## taskara

cool  :Very Happy: 

----------

## cosjef

Lurid--glad you got it working. Looks like a subtle nuance in /etc/fstab for a REISER filesystem. Can you post your hdparm results?

----------

## lurid

Yup, got it working great.  Boggled my brain for a few hours there, but that's a good thing in a way.  :Wink:

If you remember,  I posted my old hdparm stats as this:

```
 Timing buffer-cache reads:   128 MB in  0.99 seconds =128.64 MB/sec

Timing buffered disk reads:  64 MB in  6.48 seconds =  9.88 MB/sec
```

The new stats are:

```
root@virulent lurid # hdparm -tT /dev/md/0

 

/dev/md/0:

 Timing buffer-cache reads:   128 MB in  0.88 seconds =146.29 MB/sec

 Timing buffered disk reads:  64 MB in  2.87 seconds = 22.30 MB/sec
```

While that might not seem fast to anyone with newer, faster drives, look at the difference in seconds: 6.48 seconds down to 2.87 seconds.  That's huge and I can certainly feel it... and all it cost me were a few tense hours as I fumbled around trying to figure some stuff out, and a reinstall of Gentoo.  You can't beat that deal with a stick.  When I do eventually replace these slow drives, instead of getting an 80 gig, I'm gonna get 2 40's and do this again.  Hell, I might even get my gf a second 20 gig for her machine and raid her up too.    :Cool:

I'm very impressed.  Thanks for the how-to.    :Very Happy: 

----------

## Moled

yum

just set this up

```
/dev/md0:

 Timing buffer-cache reads:   3756 MB in  2.00 seconds = 1878.00 MB/sec

 Timing buffered disk reads:  328 MB in  3.00 seconds = 109.33 MB/sec

```

----------

## puddpunk

 *Moled wrote:*   

> yum
> 
> just set this up
> 
> ```
> ...

 

Nice! What the hell kinda HDD cache you got there?  :Very Happy: 

I'm running 2 x 40Gb Seagate 7200 RPM drives in raid0 (with a raid1 boot  :Smile: ). Here are my results, single drive, then raid:

```
# hdparm -tT /dev/hda

/dev/hda:

 Timing buffer-cache reads:   128 MB in  0.45 seconds =284.44 MB/sec

 Timing buffered disk reads:  64 MB in  1.14 seconds = 56.14 MB/sec

# hdparm -tT /dev/md1

/dev/md1:

 Timing buffer-cache reads:   128 MB in  0.45 seconds =284.44 MB/sec

 Timing buffered disk reads:  64 MB in  0.64 seconds =100.00 MB/sec

```

That's a pretty tidy increase  :Smile:  Nearly double, about a 78% increase!
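Here's the arithmetic on those two buffered-disk numbers (shell only does integers, so the rates are scaled to hundredths):

```shell
single=5614    # 56.14 MB/sec * 100 (single drive)
raid=10000     # 100.00 MB/sec * 100 (RAID-0)
pct=$(( (raid - single) * 100 / single ))
echo "about ${pct}% faster"
```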

BTW, I'm not sure why the buffer-cache reads are even included in these tests, as they don't test disk speed at all. The buffer-cache reads will be the same in or out of raid.

----------

## taskara

hey moled

WOW what the hell kinda computer are you using!?

are you using linux software raid or the silicon "hardware" raid?

As per your suggestion on another thread, I used

 *Quote:*   

> hdparm -d1 -X66 -A1 -c1 -u1 -m16 -a64

  on my hdd's and it seems stable atm.

however here are my screaming hdparm results!

RAID:

 *Quote:*   

> /dev/md5:
> 
>  Timing buffer-cache reads:   128 MB in  0.23 seconds =556.52 MB/sec
> 
>  Timing buffered disk reads:  64 MB in  2.03 seconds = 31.53 MB/sec
> ...

 

NORAID:

 *Quote:*   

> bash-2.05b# hdparm -tT /dev/hde
> 
> /dev/hde:
> 
>  Timing buffer-cache reads:   128 MB in  0.24 seconds =533.33 MB/sec
> ...

 

I know this is only with UDMA2, but u seem to get such a high score there... what's your secret!  :Wink: 

If you can help, here's the info of my drives:

 *Quote:*   

> bash-2.05b# hdparm -i /dev/hde
> 
> /dev/hde:
> 
>  Model=ST380013AS, FwRev=3.05, SerialNo=3JV32V8W
> ...

 

/dev/hdg is the same  *Quote:*   

> bash-2.05b# hdparm -i /dev/hde
> 
> /dev/hde:
> 
>  Model=ST380013AS, FwRev=3.05, SerialNo=3JV32V8W
> ...

 

as u can see they are running udma66

hey puddpunk - what chipset are u running?

----------

## Moled

those are two WD raptors connected to the intel raid controller in the ICH5R

for some reason they show up as scsi discs

individual drive:

 *Quote:*   

> /dev/sda:
> 
>  Timing buffer-cache reads:   3720 MB in  2.00 seconds = 1860.00 MB/sec
> 
>  Timing buffered disk reads:  166 MB in  3.02 seconds =  54.97 MB/sec
> ...

 

oh and im using software raid

can't get the siimage to work just yet

----------

## taskara

ahhh K  :Smile: 

damn those raptors are nice  :Very Happy: 

I am getting a 3Ware 8606 HARDWARE serial ata raid controller  :Very Happy: 

and putting on 4 seagate 8mb cache hdds  :Very Happy: 

now THAT should scream

if only I could put on 4 raptors  :Very Happy: 

----------

## puddpunk

 *taskara wrote:*   

> hey puddpunk - what chipset are u running?

 

Just the standard, run-of-the-mill IDE controllers that come with my A7V-266-VM nforce motherboard. Nothing too fancy.

----------

## taskara

kk..  :Smile: 

----------

## mdpye

 *taskara wrote:*   

> 
> 
> I know this is only with UDMA2, but u seem to get such a high score there... what's your secret! 
> 
> 

 

Well, here's *one* secret I found when setting up my Seagate 7200.7s.

You use hde and hdg as your raid devices. If you can, try swapping them to hda and hdc. On my Gigabyte GA7VXRP the raid controller has only one interrupt for both its channels. That's fine when you use the pseudo-hardware RAID cos you address the controller as one device. Use software RAID and suddenly the disks have to compete for interrupts...

Using the raid controller in ATA mode, each of the disks would pull up ~55MB/s, but combined, only ~65MB/s. Using the primary and secondary channels they get the same individually and combine for ~100MB/s.  :Smile:

Something for people disappointed with their results to check out...

MP

----------

## taskara

cool.. but I can't use hda or hdc because they are my primary parallel controllers and I have sata disks.

but good point tho.. thanks  :Smile: 

----------

## equilian

So I followed the how-to and had everything set up correctly, or so I thought.  However, when I rebooted, grub got to stage 1.5 and gave me error #15. This means some file isn't found, but I have no idea what file it's looking for.  If anyone can help me here I'd be grateful.

-Abraham

----------

## Gelfling

I also followed this howto to the letter and upon reboot, grub gave me an error 17. Can anyone tell me what I did wrong?

----------

## taskara

hmm... perhaps nothing.. have you tried lilo?

did u make a grub boot disk and then re boot, and THEN set up grub?

----------

## Gelfling

After 4 attempts to install Gentoo 1.4 on my Raptors via raid, I'm burned out, grub keeps giving me this error #17. I'll just leave it alone until Intel releases ICH5R support for linux. I've tried following two different howtos and both end results proved unsuccessful. I've tried vanilla-sources and ac-sources, with the same results.

----------

## JSharku

Good tutorial. I just tried it, backed up my existing install, RAID-ified my disks and put the backup back, all went without a problem. I've noticed one little peculiarity, if you put something like:

```
LABEL=root    /       ext3    defaults ...

LABEL=home    /home   ext3    defaults ...
```

in your fstab, and your / partition is a raid device, then even though you did specify the label when formatting, it won't accept the label, whereas before it did. You have to specifically mention /dev/mdX or the kernel fails to remount the / partition rw after initial startup, which is of course a bad thing.
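For clarity, here's roughly what that workaround looks like (a sketch; the device names and trailing options are illustrative):

```
/dev/md0      /       ext3    defaults ...    0 1
LABEL=home    /home   ext3    defaults ...    0 2
```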

Other than that I've had no problems at all.

Sharku

----------

## cosjef

What do you have in your /boot/grub/grub.conf file for the "root" attribute?

root (hd0,0)

or

root (hd1,0)

Could you post your grub.conf and your /etc/fstab? Maybe we can help you.

----------

## Gelfling

I gave up and went back to WinXP Pro. I even tried installing on a single 80GB SATA drive and I got a grub error #15. Installing on SATA just isn't working for me. Since I only have room for one PC, I am in the process of putting together another machine; I'll try again on that machine after the holiday weekend, since I don't have any more time to devote to getting Gentoo installed without so many headaches. I have 4 SATA HDs: 2 Raptors, a 120 and an 80GB. If I can't get Gentoo installed on any of them, I'll just leave it alone. RH9, SuSE 8.2 and Mandrake 9.1 freeze up as they're booting up. Gentoo's the only distro I can even get to attempt a full install. My grub.conf:

```
default 0
timeout 10
splashimage=(hd0,0)/boot/grub/splash.xpm.gz
title=Gentoo Linux 1.4 (Raid 0 ac-sources)
root (hd0,0)
kernel (hd0,0)/boot/bzImage1 root=/dev/md0 vga=791 hdc=ide-scsi
```

my fstab:

```
/dev/hda1   /boot        ext3      noatime           1 0
/dev/md0    /            ext3      noatime,noauto    1 2
/dev/hda2   none         swap      defaults,pri=1    0 0
/dev/hdb2   none         swap      defaults,pri=1    0 0
/dev/cdroms/cdrom0   /mnt/cdrom   iso9660   noauto,users,ro   0 0
/dev/fd0    /mnt/floppy  autofs    noauto,users,rw   0 0
```

Following the howto didn't help me a whole lot. I appreciate all the support, but I am not able to get a decent install. So maybe on Sept. 2 I'll try again on another machine.

----------

## taskara

HAVE you tried LILO!??????

you are insane if u give up because grub does not work properly - try lilo.. 

there is an answer out there somewhere - lots of other people run raid just fine.

good luck!

----------

## equilian

 *Gelfling wrote:*   

> I gave up and went back to WinXP Pro, I even tried installing on a single 80GB SATA drive and I got a grub error #15. Installing on SATA just isn't working for me. Since I only have room for only one PC, I am in the process of putting together another machine, I'll try again on that machine after the holiday weekend, don't have anymore time to devote to getting Gentoo installed without so many headaches. I have 4 SATA HD's: 2 Raptors , a 120 and 80GB, if I can't get Gentoo installed on any of them, I'll just leave it alone. RH9, SuSe 8.2 and Mandrake 9.1 freeze up as they're booting up. Gentoo's the distro I can even get to attempt a full install. My grub.conf:
> 
> default 0
> 
> timeout 10
> ...

 

So I had this same problem on my machine.  In order to get grub working I had to install it on the second disk of my RAID0 array and have it look for the files on the boot partition of the first disk.  Basically, just mirror your two boot partitions.  I have no idea why it was looking to boot off the second disk of the array, but after much frustration that's what got it working for me.  Let me know if you need any more help.

-Abraham

----------

## VisualPhoenix

hey guys... so i must be really dumb but i dont know what i did wrong... So.. i followed the whole guide... installed my system... installed grub... upon reboot grub hangs... So. I pop in the livecd for 1.4 - reboot - boom guess what... 

modprobe md

cat /proc/mdstat returns nada 

wtf..

/etc/init.d/md zap

/etc/init.d/evms zap

i make sure md is rmmod'ed

i remake my raidtab exactly how i made it before on the disk

restart md

try to restart evms but get "Open failed for `/dev/evms/block_device': no such file or directory"

mdstat has no mention of my md0 or md1 arrays...

WTF!? I really dont want to have to recompile this beast again.

----------

## VisualPhoenix

well i fixed the evms problem by modprobing evms and restarting the init.d service...

i get:

starting evms...

Rediscover successful.

After: Volume(s) info:

(major,minor): volume-name

devfs is running on this system

devfs will keep the evms device nodes up to date

cat /proc/mdstat still shows nothing though..

----------

## VisualPhoenix

shazam -- after a little investigatory work i found that i needed to run:

raidstart /dev/md0

raidstart /dev/md1

to restart the persistent arrays!

WOOT!

now i have to figure out why grub is hanging.

----------

## Gelfling

All I wanted to do was set up the raid 0 as 100MB - /dev/hda1 (/boot), 512MB - /dev/hda2 & /dev/hdb2 (swap), and 50+GB - /dev/hda3 & hdb3 aka /dev/md0 via mkraid ( / ). Do I need to create a /dev/md1 & /dev/md2 partition? I already have a 120GB HD to be used for file storage.

----------

## VisualPhoenix

hey guys... so look -- grub works -- i can mount the array if i boot off of the livecd however when i try to boot normally i get a kernel panic complaining it can't find my root partition...

I've tried using /dev/md0, /dev/md/0 in both /boot/grub/grub.conf and /etc/fstab...

i have raid0 support in the kernel, lvm support, and evms support...

is evms support and devfs making my md device appear in /dev/evms/md/0? i dont know because i have no way of telling... i'm going to attempt putting /dev/evms/md/0 in grub.conf and fstab and see if it works... if someone else has an idea of what i should try let me know! THANKS!

-Vis

----------

## equilian

 *VisualPhoenix wrote:*   

> hey guys... so look -- grub works -- i can mount the array if i boot off of the livecd however when i try to boot normally i get a kernel panic complaining it can't find my root partition...
> 
> I've tried using /dev/md0, /dev/md/0 in both /boot/grub/grub.conf and /etc/fstab...
> 
> i have raid0 support in the kernel, lvm support, and evms support...
> ...

 

I'd have to see your grub.conf but it sounds like you have something to the effect of 

```

kernel (hd0,0)/boot/bzImage 

```

You'll want to remove /boot from the above line so it looks like this:

```

kernel (hd0,0)/bzImage

```

Hope this helps.

-Abraham

----------

## iplayfast

I'm trying to make my root a raid device (md0).

I've got fstab with md/0, and lilo with root=/dev/md

(it won't accept /dev/md0, or /dev/md/0) 

I've also got raid modules configured into the kernel (built in). And it looks like it's running MD stuff on the kernel boot. But it can't seem to find the root file system. 

I'm not being very clear here, but I'm hoping the answer springs out to someone: how do I add raid0 to the root partition with lilo?

----------

## NicholasDWolfwood

Will this work in my setup?

I've got a situation.

Abit mobo, with an onboard HPT370 (Highpoint 370) ATA100 "RAID" controller. I believe the mobo is a KT7-RAID.

I've got this harddrive setup:

2x40GB Western Digital Caviars in RAID0 config - NTFS - 74.9GB virtually

1x45GB Western Digital Caviar - partitioned

1x80GB Western Digital Caviar - 74.9GB virtual - NTFS

/dev/hda

1 - Physical - /boot (64MB, Ext3)

2 - Physical - / (10GB, Ext3)

3 - Physical - Swap (768MB, Swap)

4 - Extended

5 - Logical - /home (5GB, Ext2)

6 - Logical - /var (5GB, Ext2)

7 - Logical - /hdb stuff (Fat32, 10GB)

/dev/hdb

1 - Physical - /mnt/hdb1 (74.9GB, NTFS)

/dev/hdc

1 - CD-RW Drive (8x4x32 Panasonic, SCSI emulation)

/dev/hdd

1 - Zip 250 Internal IDE drive

/dev/ataraid/disc1/part1

This was our RAID in hardware RAID setup on 2.4.21 kernel

I'm of course using /dev/hda as the Linux drive...the RAID of the 2 40GB HDs has multiple downloads on it, a useless Windows 2000 Adv. Server install, and other things. I'd like to be able to use it in Linux 2.6 kernels over the home network (through Samba)

Will the software RAID instructions on the first page work in my particular situation?

-NDW

----------

## NicholasDWolfwood

Does anybody know if these instructions will work in my case?

I'd like to switch to the 2.6 kernels and backup the data on the RAID before the two drives die (they're getting ready to die)

----------

## Heretic

 *Moled wrote:*   

> those are two WD raptors connected to the intel raid controller in the ICH5R
> 
> for some reason they show up as scsi discs
> 
> individual drive:
> ...

 

You don't want the siimage setup.  You have the optimal configuration.  You're getting a ~2x increase in throughput, which is as good as can be expected.  By using SATA on the ICH5R southbridge, you're completely bypassing any shared PCI bottlenecks.

This is a quick solution.  I was looking at exactly this for a gigabit-hosted, high-speed caching network device/internet gateway/firewall/network monitoring device/streaming audio and video server/WLAN authentication server for a 150-user network.  Gigabit ethernet pushes nearly 125MB/s, and these two Raptors are very close to being able to saturate that in one direction.  If you get Intel's GigE link, which bypasses PCI as well, you have no I/O restrictions whatsoever.

My question is, what do you have to do for hotplug support with SATA?  Drivers and the standard are supposed to support it.  Can you just unplug while running?  Do you have to prep with some commands?

----------

## Naughtyus

I'm having a problem with grub loading too.  It hangs before I see the splash screen (that would let me choose what to boot) - no error message, but it definitely isn't doing anything.

Someone in here suggested a boot floppy first, then loading grub..  How would I get this to work?

----------

## NicholasDWolfwood

WHY THE HELL DOESN'T ANYBODY ANSWER MY QUESTIONS? IT'S BEEN 3 WEEKS!

I WANT AN ANSWER!

I have RAID0, on two 40GB HDs. I've got a Highpoint 370 controller, which AFAIK is an IDE controller with software-BIOS assistance. I've got an array setup, and it's 95% filled. I've got hde1, but no hdg1, only hdg. How do I get RAID0 software RAID to work?

----------

## Naughtyus

If you boot the 1.4 LiveCD with the options: smp doataraid , is there a /dev/ataraid directory?

----------

## NicholasDWolfwood

Yes, I do... the problem is, software and hardware RAID are different. There's software RAID in all the current kernels, but no hardware RAID in the 2.6.x series, which I intend to use.

2.4 and hardware RAID work perfectly fine, no software raid though

----------

## Naughtyus

As someone else posted in here, you should be able to set up the software RAID with /dev/hde and /dev/hdg then.  However, since they will then both be on the same controller, they'll use the same IRQ, which leads to absolutely horrible performance.

You'd be best off to just use /dev/hda and /dev/hdc.

----------

## NicholasDWolfwood

...I'm lost here.

Apparently, you need PARTITIONS on your DRIVES.

I HAVE NONE ON /dev/hdg.

I've GOT

HDE1

HDG

NO HDG1 YOU FUCKING MORON

Sorry for my outburst, but I've been going around in circles.

----------

## kappax

Last I checked, 2.4 kernels have software raid. I know this because I have been using software RAID 0 for about the last year. 

 *NicholasDWolfwood wrote:*   

> Yes, I do....the problem is, software and hardware RAID are different. There's software RAID in all the current kernels, but no hardware RAID in the 2.6x series which I intend to use.
> 
> 2.4 and hardware RAID work perfectly fine, no software raid though

 

----------

## cayenne

I'm thinking of building a box using 4 big IDE drives, like 250GB ones, but wanting to do RAID 5. I was going to put in a 2nd IDE controller so each drive can be a master.

Will following this primer work for that? Just specify raid level 5 instead of 0 in the setup?

Also, I'm trying to figure out how to install Gentoo on this thing, with no room for a CD-ROM in it (or could I put one on as slave to one of the 4 IDE drives?).

Thanks in advance...

cayenne

----------

## ben_h

 *NicholasDWolfwood wrote:*   

> NO HDG1 YOU FUCKING MORON
> 
> Sorry for my outburst, but I've been going around in circles.

 

If you were sorry for your outburst before you clicked Submit, maybe you should have deleted it? That's what the backspace key is for. You'll get better responses if you take a few deep breaths and calm down. And the problem will be a lot easier to solve that way.

Of course you're going round in circles. You can't fix a computer by swearing at it.

As for your problem --

You're not using hardware raid. Promise and Highpoint "RAID" cards are completely driver controlled, and don't do anything in hardware that a normal IDE controller can't. It's still completely software controlled. That's why it isn't working -- the drivers aren't there.

hdg doesn't HAVE a partition on it, because the disk is the second in a Highpoint stripe. They write the disks how they please. And the hde1 device is no use either.

To access the array, leave hde* and hdg* alone. Look to /dev/ataraid (iirc).

So, boot a kernel that does support it (2.4) to get the data off, set the controller to be a standard ATA controller with no "hardware raid", and set up Linux software raid on the disks (or their replacements, if they're dying). It's faster, more reliable, more configurable, and monitorable from the OS (cat /proc/mdstat).
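For that last step, a minimal /etc/raidtab sketch for a two-disk stripe (device names and chunk size are illustrative - adjust them to match how the disks appear once the controller runs in plain ATA mode):

```
# /etc/raidtab -- hypothetical two-disk raid0
raiddev                 /dev/md0
        raid-level              0
        nr-raid-disks           2
        persistent-superblock   1
        chunk-size              32
        device                  /dev/hde1
        raid-disk               0
        device                  /dev/hdg1
        raid-disk               1
```

After running mkraid /dev/md0, cat /proc/mdstat should show the array as active raid0.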

----------

## ben_h

Cayenne:

No, there are a few more options you need for RAID5. The Software RAID HOWTO at tldp.org has very good explanations.

I used to use RAID5, and for the record here's an example from /etc/raidtab that I used to use.

```
# /

raiddev /dev/md0

        raid-level      5

        nr-raid-disks   3

        nr-spare-disks  0

        persistent-superblock   1

        parity-algorithm        left-symmetric

        chunk-size      64

        device          /dev/hda2

        raid-disk       0

        device          /dev/hde2

        raid-disk       1

        device          /dev/hdg2

        raid-disk       2

```

----------

## lucasjb

Hi All,

While not strictly related to Gentoo, I've got a question or two about Linux Software RAID.  Firstly, I have read the Software-RAID HOWTO and the Multi Disk HOWTO (both excellent and I recommend them to people wanting a better understanding of these topics) and I've successfully configured Software RAID-1 (with boot/root/swap on RAID) for some servers at work (to hopefully give me high availability --- and it seems to be doing the trick).

However, recently I've purchased a machine with two Serial ATA drives and I'd like to setup a RAID-1 across those.  The BIOS and the OS both detect the drives as secondary master and slave (or hdc and hdd), which is fine, however:

 *Software-RAID HOWTO wrote:*   

> 
> 
> It is very important, that you only use one IDE disk per IDE bus. Not only would two disks ruin the performance, but the failure of a disk often guarantees the failure of the bus, and therefore the failure of all disks on that bus.
> 
> 

 

So I'm not sure whether this applies to Serial ATA also, because in my understanding it's not an "IDE Bus" that they're connected to... somebody please confirm this for me?

Secondly, I've heard that some file systems give degraded performance in a software RAID (namely journaling file systems) anyone care to comment on the pros and cons of ext2 vs ext3 in a software RAID?

Hoping one of the RAID gurus around here can help me out...

Thanks,

Lucas

----------

## lucasjb

Hi Again

Just to follow up my own post, I've got some information about what worked for me.

AOpen  MX4SG-4DL Mainboard with Integrated ICH5 SATA Interface

2 x Seagate 80GB 7200RPM 8MB SATA HDDs

BIOS detected drives as Secondary Master and Slave.  OS installed onto hdc.  I was getting horrible performance...

```

mercury:~# hdparm -tT /dev/hdc

/dev/hdc:

 Timing buffer-cache reads:   128 MB in  0.14 seconds =914.29 MB/sec

 Timing buffered disk reads:  64 MB in 18.42 seconds =  3.47 MB/sec

```

But all that was required was a simple...

```

mercury:~# hdparm -c1 -d1 -k1 /dev/hdc

```

To yield

```

mercury:~# hdparm -tT /dev/hdc

/dev/hdc:

 Timing buffer-cache reads:   128 MB in  0.14 seconds =914.29 MB/sec

 Timing buffered disk reads:  64 MB in  1.17 seconds = 54.70 MB/sec

```

A RAID-1 across my /home partition didn't really improve the performance a great deal, but I wouldn't expect it to...

```

mercury:~# hdparm -tT /dev/md5

/dev/md5:

 Timing buffer-cache reads:   128 MB in  0.13 seconds =984.62 MB/sec

 Timing buffered disk reads:  64 MB in  1.16 seconds = 55.17 MB/sec

```

What I don't really understand is why I had such an easy time compared to others in this thread - perhaps it's the integrated ICH5?  I'm still curious about the performance of the RAID when both drives are on the same "bus"; if anyone would like to clear that up for me, that would be cool.

Lucas

----------

## NicholasDWolfwood

Okay, new progress.

I got software RAID set up; the problem is when I try to mount the md0 device, it asks for a specific filesystem. The filesystem of the RAID is NTFS, formatted in the Windows 2000 Advanced Server setup. Any suggestions?

Also, I know the device is /dev/md0, because I specified it in my /etc/raidtab file...any help? It gives me the "bad superblock or file system" error when I try to mount /dev/md0.

----------

## NicholasDWolfwood

Any ideas as to whats wrong?

----------

## arkane

 *NicholasDWolfwood wrote:*   

> Any ideas as to whats wrong?

 

The last time I checked, Linux software RAID wasn't cross-platform compatible with Windows.

If you formatted it in the 2000 server setup, you pooched it.

----------

## decker in flux

 *mdpye wrote:*   

>  *taskara wrote:*   
> 
> I know this is only with UDMA2, but u seem to get such a high score there... what's your secret! 
> 
>  
> ...

 

Could you or anyone else suggest a place to look? i.e. how?

-d

----------

## gcasillo

My hdparm scores (all Maxtor 40GB drives):

Pre-RAID setup:

```

/dev/hda:

 Timing buffer-cache reads:   1592 MB in  2.00 seconds = 794.81 MB/sec

 Timing buffered disk reads:  86 MB in  3.04 seconds =  28.34 MB/sec

/dev/hde:

 Timing buffer-cache reads:   1504 MB in  2.00 seconds = 752.00 MB/sec

 Timing buffered disk reads:  98 MB in  3.01 seconds =  32.55 MB/sec

/dev/hdg:

 Timing buffer-cache reads:   1584 MB in  2.00 seconds = 792.40 MB/sec

 Timing buffered disk reads:  86 MB in  3.03 seconds =  28.40 MB/sec

```

Software RAID-0:

```

/dev/md0:

 Timing buffer-cache reads:   1760 MB in  2.00 seconds = 879.25 MB/sec

 Timing buffered disk reads:  228 MB in  3.03 seconds =  75.36 MB/sec

```

 :Cool: 

----------

## taskara

 *gcasillo wrote:*   

> My hdparm scores (all Maxtor 40GB drives):
> 
> Pre-RAID setup:
> 
> ```
> ...

 

that's a damn nice boost!!!!  :Wink: 

----------

## TheCoop

Can't wait til I get an Athlon64 system sometime next year; then I can get a couple of SATA drives and put a raid0 system on them...

Or maybe a raid0+1 or raid5 if the drives are cheap enough...

 :Very Happy: 

----------

## gcasillo

I will say that this P4-2.4GHz box with its new RAID-0 setup is snappier. You know how you get used to the rhythm of the text as it scrolls by during compiles? It is noticeably quicker. Yes, software RAID is a wonderful thing.

----------

## janlaur

I've followed the guide and got it working without any problems, but I didn't get the performance boost I expected.

I am using two IDE disks in a software raid0; each disk is a master with no slaves attached.

As shown below, each drive performs well alone.

```
# hdparm -Tt /dev/hda /dev/hdc /dev/md0 

/dev/hda:

 Timing buffer-cache reads:   1512 MB in  2.00 seconds = 754.87 MB/sec

 Timing buffered disk reads:  138 MB in  3.03 seconds =  45.54 MB/sec

/dev/hdc:

 Timing buffer-cache reads:   1496 MB in  2.00 seconds = 747.63 MB/sec

 Timing buffered disk reads:  164 MB in  3.01 seconds =  54.47 MB/sec

/dev/md0:

 Timing buffer-cache reads:   1488 MB in  2.00 seconds = 743.26 MB/sec

 Timing buffered disk reads:  192 MB in  3.03 seconds =  63.45 MB/sec

# cat /proc/mdstat 

Personalities : [raid0] [multipath] 

read_ahead 1024 sectors

md0 : active raid0 ide/host0/bus1/target0/lun0/part5[1] ide/host0/bus0/target0/lun0/part7[0]

      135186752 blocks 32k chunks

```

Shouldn't my raid 0 be faster than 63MB/sec?

----------

## arkane

 *janlaur wrote:*   

> I've followed the guide and got it working without any problems, but I didn't get the performance boost I expected.
> 
> I am using two IDE disks in a software raid0; each disk is a master with no slaves attached.
> 
> <snip>
> ...

 

With IDE, there is limited bandwidth.  In some cases, the second IDE channel is actually a shared resource with the first.  (Lovely engineering tactic)

You can, however, experiment with different-sized stripes and so forth.  That's what I did when I made a raid0 to test its speed once on a new server.  (I ended up going with LVM simply because the first disk was going to be shared with the system on a Linux partition, and I didn't feel like messing with root-filesystem raid and so forth.)

----------

## arkane

Anybody had any luck with putting more than 2 IDE controllers into a machine to bring a max count of disks above 4 for their raid setup?

----------

## taskara

I have 4 seagate sata drives coming after the new year.. I'll post my findings if you like..

edit: oh, above 4  :Confused:  I could order in a few more ?? total of 6? we'll see

----------

## janlaur

 *arkane wrote:*   

> With IDE, there is a limited bandwidth. In some cases, the second IDE channel is actually a shared resource with the first. (Lovely engineering tactic) 

 

Seems you were right. I moved one of the disks to my onboard IDE raid controller, and went from 63 -> 82 MB/sec. 

 *arkane wrote:*   

> Anybody had any luck with putting more than 2 IDE controllers into a machine to bring a max count of disks above 4 for their raid setup?

 

That might answer that also. If you have a hardware raid controller on your motherboard, you can just use it as an IDE controller. I haven't tried with more than 4 disks though.

btw, does anybody know why the driver for the Promise FastTrak Lite (pdcraid.o) doesn't work after 2.4.20, and if it is going to be included in 2.6 at any point?

----------

## taskara

 *janlaur wrote:*   

> btw, does anybody know why the driver for the Promise FastTrak Lite (pdcraid.o) doesn't work after 2.4.20, and if it is going to be included in 2.6 at any point?

 

well ataraid was just a hack someone wrote for 2.4 kernels

2.6 does not have support for ANY ataraid controllers (unless they are true hardware raid)

but it does indeed support the raid cards, but only as standard ide.

perhaps in the future if someone writes proper ataraid support, or if manufacturers write proper linux drivers things may change..

but I assume from this thread that you are using linux software raid, which is indeed supported in 2.6, as are your IDE controllers.

----------

## Mr_D

I'm switching over to Gentoo as my main workstation.  I have time to tinker for the next month, afterwards I'll be using this machine mostly for getting things done (imagine that!) I'm doing this (A) because it's cool  (B) I want a fast workstation.

I'm preparing to set up software RAID 0 using 2 IBM 60GB Deskstar drives. I also want to partition the drives so that the /home directory has its own partition.  I need to clarify a few things:

1. Where is RAID 0 more useful: for reading or writing?  For /home or everything else?

2. If I set up two 30GB RAID 0 partitions for /, and placed a normal /home in its own partition, would that be a good compromise between speed and keeping your data relatively safe?

3. What about having two software raids, using RAID 0 for / and the more reliable RAID 1 for /home.  Is this a smart and valid setup?

hda:

/boot   (64 mb)

/swap (256mb)

/root raid0 (28 gb)

/home raid1 (30 gb)

hd1

/swap (256mb)

/root raid0  (28 gb)

/home raid1 (30 gb)

Any help is appreciated.  Also, I have an older 4gb drive sitting here, so I could make that /boot and /swap if it makes any difference.

Lastly -- once I get this Gentoo system set up, I don't plan on tinkering with it a lot.  Let me know if this RAID is just a little over the top for a power user who wants a fast, dependable and easily updateable workstation.

Thx,

----------

## taskara

 *Mr_D wrote:*   

> I'm switching over to Gentoo as my main workstation.  I have time to tinker for the next month, afterwards I'll be using this machine mostly for getting things done (imagine that!) I'm doing this (A) because it's cool  (B) I want a fast workstation.
> 
> I'm preparing to set up software RAID 0 using 2 IBM 60gb Deskstar drives. I also want to partition the drive so that the /home directory has it's own partition.  I need to clarify a few things
> 
> 1. Where is RAID 0 more useful? for reading or writing?  For /home or everything else?

 

everything, including home

 *Mr_D wrote:*   

> 2. If I set up two 30gb RAID 0 partitions for /, placed a normal /home in it's own partition,  would that be a good compromise between speed and keeping your data relatively safe?
> 
> 

 

yes, but linux raid 0 is not unsafe. it has been around for years - it's only if one of your hdds dies that you lose (possibly half) your data - but if you put all of /home on one partition and that drive dies, you lose all of it anyway. I would be more inclined to use /home on raid 0 and back up the data to an external drive (perhaps your 4gb)

 *Mr_D wrote:*   

> 3. What about having two software raids, using RAID 0 for / and the more reliable RAID 1 in for the /home.  Is this a smart and valid set up?
> 
> hda:
> 
> /boot   (64 mb)
> ...

 

this is smarter, and perfectly appropriate - however you won't get any speed increase, and remember raid 1 is only useful if one of your hard drives totally dies (it does not protect against accidental file deletion or corruption etc). So if your hard drives are good quality you can run the risk of them not dying, and use raid0.

 *Mr_D wrote:*   

> Any help is appreciated.  Also, I have an older 4gb drive sitting here, so I could make that /boot and /swap if it makes any difference.
> 
> 

 

leave the old drive out altogether - it will slow your system

 *Mr_D wrote:*   

> Lastly -- once I get this Gentoo system setup, I don't plan on tinkering with it a lot.  Let me know if this RAID is just little over the top for a Power User who wants a fast, dependable and easily updateable workstation.
> 
> Thx,

 

looks good to me  :Smile: 

edit: btw, you can raid 0 your swap partitions if you like, but it's not necessary - just give them the same priority in your /etc/fstab and they will work the same as raid0  :Smile: 
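For example, equal-priority swap entries in /etc/fstab might look like this (device names are just placeholders):

```
/dev/hda2    none    swap    sw,pri=1    0 0
/dev/hdc2    none    swap    sw,pri=1    0 0
```

With equal pri= values the kernel stripes swap pages across both devices, much like raid0.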

oh and also, I'd make sure the partitions are exactly the same on both drives, and I'd also put /boot on your second hard drive too, and use raid1.

so should be something like this:

```
hda:

/boot raid1  (64 mb)

/swap pri=1 (256mb)

/root raid0  (28 gb)

/home raid0 (30 gb)

hdc:

/boot raid1 (64 mb)

/swap pri=1 (256mb)

/root raid0  (28 gb)

/home raid0 (30 gb)

```

I assume you are not using your two hard drives on the SAME ide channel (ie master and slave)? If you are, it kinda defeats the purpose because your drives will be dog slow in this config.

----------

## Mr_D

 *taskara wrote:*   

> 
> 
> yes, but linux raid 0 is not unsafe. it has been around for years - it's only if one of your hdds dies that you lose (possibly half) your data - but if you put all of /home on one partition and that drive dies, you lose all of it anyway. I would be more inclined to use /home on raid 0 and back up the data to an external drive (perhaps your 4gb)
> 
> 

 

Ok -- I'm a little confused on this point.  You had said in the post to leave the 4gb out altogether.  What do you mean here by "external drive"?  Are you suggesting backing up to a different box altogether? Or is the 4gb in the same box, just nothing to do with the RAID array? If so, how would it be partitioned?

I've only had one disk meltdown on one of my personal systems ever, and I guess I need to think more about a backup plan -- maybe burning a CD-RW every so often.  

 *Mr_D wrote:*   

> 3. What about having two software raids, using RAID 0 for / and the more reliable RAID 1 in for the /home.  Is this a smart and valid set up?
> 
> 

 

 *taskara wrote:*   

> 
> 
> this is smarter, and perfectly appropriate - however you won't get any speed increase, and remember raid 1 is only useful if one of your hard drives totally dies (it does not protect against accidental file deletion or corruption etc). So if your harddrives are good quality you can run the risk of them not dying, and use raid0.
> 
> 

 

I don't understand why there's no speed increase for the RAID 0 partition. Wouldn't everything that operates out of root have a speed increase? 

I've read that the IBM DeskStars are pretty reliable drives. Since there's data loss if either drive fails, I think it's the doubling of risk that's bothering me a bit.  

 *taskara wrote:*   

> 
> 
> edit: btw, you can raid 0 your swap partitions if you like, but it's not neccessary - just put them on the same priority in your /etc/fstab and they will work the same as raid0 
> 
> oh and also, I'd make sure everything is at exactly the partitions, and I'd also make /boot on your second hard drive, and use raid1.
> ...

 

Thanks for the setup info -- I wouldn't have known to make the drives physically exactly the same.  I will set up both drives as masters.  That will require installing an IDE board to run the CD-ROM.  Any suggestions on that are also welcome.  Is there anything to gain by using hardware RAID (the boards in the $100 to $150 range)?

Also -- do you know of any howto or gentoo posts on creating /home in a separate partition?

----------

## Crg

 *taskara wrote:*   

> 
> 
> 1. Where is RAID 0 more useful? for reading or writing?  For /home or everything else?
> 
> 

 

Raid0 is faster for reading and writing, but it can have a higher average seek time (it has to move 2 heads) and can be slower for small amounts of data.

Mirroring has a lower average seek time than raid0, but it is less disk-space efficient and slower at writing.

 *taskara wrote:*   

> 
> 
> yes, but linux raid 0 is not unsafe. it has been around for years - it's only if one of your hdds dies that you lose (possibly half) your data - but if you put all of /home on one partition and that drive dies, you lose all of it anyway
> 
> 

 

If you think about it, raid0 is more unsafe than just 1 disk: you have 2 disks that could fail, and if one fails you lose all the data on both (because the data is striped across them). So not only is it more likely to fail - you have more data to lose.  If you used the 2 disks without raid0 and one failed, you'd still have the other one's data.

----------

## Crg

 *Mr_D wrote:*   

> 
> 
> 3. What about having two software raids, using RAID 0 for / and the more reliable RAID 1 in for the /home.  Is this a smart and valid set up?
> 
> 

 

That is a good setup.  Having data you don't want to lose on a mirrored partition for redundancy, but fast read access on your binaries, which you can easily replace.  You might want to have a small raid1 partition for /etc as well.

Also, I don't think I've seen the "stride" option mentioned on these forums.  If you're making an ext2/3 raid0 partition, you should pass the 

```
-R stride=xx
```

option to mkfs, where xx is how many filesystem blocks fit in the stripe (chunk) size.
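For instance - a hypothetical calculation, assuming the array was built with a 32k chunk size and the filesystem uses 4k blocks (both values illustrative):

```shell
# stride = (RAID chunk size) / (filesystem block size)
CHUNK_KB=32    # chunk-size from /etc/raidtab, in KB
BLOCK_KB=4     # ext2/3 block size, in KB
STRIDE=$((CHUNK_KB / BLOCK_KB))
echo "stride=$STRIDE"
# then something like: mke2fs -j -R stride=$STRIDE /dev/md0
# (left commented out here, since it needs a real array)
```

This way each filesystem block group's metadata gets spread evenly across the member disks instead of piling up on one of them.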

----------

## Mr_D

 *Crg wrote:*   

>   You might want to have a small raid1 partition for /etc as well as well.
> 
> Also I don't think I seen the "stride" option mentioned on these forums.  If you making a ext2/3 raid0 partition you should pass the 
> 
> ```
> ...

 

Can you say more about a small RAID1 partition for /etc?   What's your rationale there?

Also,  I was planning on using ReiserFS (b/c I've always used it before).  Is Reiser a good choice for RAID?  Or would ext2 or 3 be a better choice?

thx,

----------

## Crg

 *Mr_D wrote:*   

> 
> 
> Can you say more about a small RAID1 partition for /etc.   What's your rationale there?
> 
> 

 

If one of your disks died, you'd still have all the stuff in /home and the stuff in /etc, so you don't have to reconfigure everything.  You'd just need to replace the binaries.

It depends on how much stuff you configure in there, for me there's webserver, proxy, firewall, QoS, scripts etc that I wouldn't want to have to redo if a disk died  :Smile: 

 *Mr_D wrote:*   

> 
> 
> Also,  I was planning on using ReiserFS (b/c I've always used it before).  Is Reiser a good choice for RAID?  Or would EXT 2 or 3 a better choice?
> 
> 

 

Haven't benchmarked it so couldn't say  :Smile:   If you were planning on using reiserfs, it's a good choice.

----------

## taskara

 *Mr_D wrote:*   

>  *taskara wrote:*   
> 
> yes, but linux raid 0 is not unsafe. it has been around for years - it's only if one of your hdds dies that you lose (possibly half) your data - but if you put all of /home on one partition and that drive dies, you lose all of it anyway. I would be more inclined to use /home on raid 0 and back up the data to an external drive (perhaps your 4gb)
> 
>  
> ...

  sorry.. I meant perhaps use the 4gb as an external drive - buy an external USB case, put it in, plug it into your USB port, and then you can use it to back up your /home partition. Yes, it has nothing to do with the raid array; I was just suggesting a backup solution so you could use /home in raid0. Hope that is clear now  :Confused: 

 *Mr_D wrote:*   

> I've only had one disk meltdown on one of my personal systems ever, and  I guess I need to think more about backup plan -- maybe burning a CD-rw every so often.  

  yep that's fine, whatever backup solution works for you - just that CDs don't hold much when you are talking about backing up a whole /home partition!  :Wink: 

 *Mr_D wrote:*   

> 3. What about having two software raids, using RAID 0 for / and the more reliable RAID 1 in for the /home.  Is this a smart and valid set up?
> 
>  *taskara wrote:*   
> 
> this is smarter, and perfectly appropriate - however you won't get any speed increase, and remember raid 1 is only useful if one of your hard drives totally dies (it does not protect against accidental file deletion or corruption etc). So if your harddrives are good quality you can run the risk of them not dying, and use raid0.
> ...

  yes you are right, I didn't make myself clear - I was referring to raid1: if you use raid1 on your /home you will not get any speed increase, but any raid0 partition will give you a speed increase.

 *Mr_D wrote:*   

> I've read that the IBM DeskStars are pretty reliable drives. Since there's data loss if either drive fails, I think it's doubling of risk that's bothering me a bit.  

  yeah, but if you have a backup then you will be fine  :Smile:  also, what's the mean time between failure on the drives? I think you'll find it's about 10 years. That's not to say one won't die tomorrow.

 *Mr_D wrote:*   

>  *taskara wrote:*   
> 
> edit: btw, you can raid 0 your swap partitions if you like, but it's not neccessary - just put them on the same priority in your /etc/fstab and they will work the same as raid0 
> 
> oh and also, I'd make sure everything is at exactly the partitions, and I'd also make /boot on your second hard drive, and use raid1.
> ...

 

no - do not buy a "raid card"; you just want an IDE card - like a Promise Ultra100 or something.

and the installation guide shows you how to make a separate partition for /home. You just create the extra partition, and in your /etc/fstab file you tell linux that the extra partition is /home.

btw this is also what you need: the Software RAID HOWTO

good luck, and let me know if u need to know anything else  :Smile: 

----------

## taskara

oh I didn't see the other posts.

yeah I like reiserfs

use ext2 or ext3 for /boot

and as for /etc - it's the same as /home. You can run it as raid1 if you want, but in my opinion you'd be better off running the whole system in raid0 and just backing up - you won't have many config files to back up, so /etc on raid1 is a bit of overkill.

still at the end of the day, it's up to you  :Smile: 

you can always back up an entire partition, change the raid level, then copy the data back - so it's not the end of the world; if you wanna change later, you can  :Smile: 

----------

## Crg

 *taskara wrote:*   

> 
> 
> yes you are right, I didn't make myself clear - I was referring to using raid1 - if you use raid1 on your /home you will not get any speed increase, but any raid0 partition will give you speed increase.
> 
> 

 

Just to clear things up a bit with raid1.

If you're talking serial disk access - say, testing it with hdparm - then raid1 will be the speed of a single disk (it will be reading off a single disk  :Smile:  ). 

But if you have parallel reads going on (which is quite often the case when you are actually using your system), the reads will be balanced across the drives - ie you'll be serving multiple reads at once. Plus raid1 has lower average seek times, so more time is spent reading and less seeking.

In some situations, ie if you have a high read to write ratio and a high read load, raid1 can actually be faster than raid0.

If you're talking writes then raid1 is slow  :Smile: 

----------

## taskara

 *Crg wrote:*   

>  *taskara wrote:*   
> 
> yes you are right, I didn't make myself clear - I was referring to using raid1 - if you use raid1 on your /home you will not get any speed increase, but any raid0 partition will give you speed increase.
> 
>  
> ...

 

yes you are right  :Smile: 

----------

## Mr_D

Thanks, this is great information.

This may be an obvious question, but I need to ask it anyway.  I want to calculate the total GB for each RAID partition, and realized the total storage space may be different depending on whether it's RAID 0 or RAID 1.  Are the following assumptions correct?

For RAID 0

If I create a 30GB partition (one on each drive), then my total partition storage space is 60GB?

For RAID 1

If I create a 30GB partition (one on each drive), then my total partition storage space is 30GB?

Thx

----------

## taskara

 *Mr_D wrote:*   

> Thanks, this is great information.
> 
> This may be an obvious question, and I need to ask it anyway.  I'm wanting to calculate the total gb for each RAID partition, and realized the total storage space may be different depending on whether it's a RAID 0 or RAID 1.  Are the following assumptions correct?
> 
> For RAID 0
> ...

 

correct
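The arithmetic above, spelled out (a sketch; two equal-size member partitions assumed):

```shell
# RAID capacity rules of thumb for equal-size members
DISKS=2
PART_GB=30
RAID0_GB=$((DISKS * PART_GB))   # striping: member capacities add up
RAID1_GB=$PART_GB               # mirroring: capacity of a single copy
echo "raid0: ${RAID0_GB}GB  raid1: ${RAID1_GB}GB"
```

(As the head of the thread notes, with unequal members the array is limited by the smallest disk, so the "equal size" assumption matters.)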

----------

## Crg

 *Crg wrote:*   

> 
> 
> If you're talking serial disk access - say like testing it with hdparm - then raid1 will be the speed of a single disk (it will be reading off a single disk  ). 
> 
> 

 

I'd just like to correct myself - this was correct for 2.4 (hdparm showing single-disk results), but it appears 2.6 may balance sequential reads for raid1 better (not all the time - the results are quite variable - most probably depending on where the heads are and whether there are other reads/writes going on).

first the disks by themselves:

```

hdparm -t /dev/hda

/dev/hda:

 Timing buffered disk reads:   86 MB in  3.02 seconds =  28.43 MB/sec

```

```

 hdparm -t /dev/hdb

/dev/hdb:

 Timing buffered disk reads:  102 MB in  3.01 seconds =  33.90 MB/sec

```

and the raid1 results:

```

hdparm -t /dev/md0

/dev/md0:

 Timing buffered disk reads:  140 MB in  3.02 seconds =  46.29 MB/sec

hdparm -t /dev/md0

/dev/md0:

 Timing buffered disk reads:   86 MB in  3.01 seconds =  28.57 MB/sec

```

I really need to get around to using a proper benchmark to test this one of these days...  :Wink: 

EDIT:

With cfq:

```

hdparm -t /dev/md0

/dev/md0:

 Timing buffered disk reads:  144 MB in  3.02 seconds =  47.64 MB/sec

```

----------

## suhlhorn

Okay guys-

Having trouble booting my new RAID0 system. I followed the instructions (excellent how-to, BTW) and was able to build the system. I built a 2.6.1 kernel (gentoo-dev-sources), but can't boot it with grub. Here are my boot error messages:

```

md:  Autodetecting RAID arrays.

md:  autorun...

md:  ...autorun DONE.

EXT3-fs:  unable to read superblock

EXT2-fs:  unable to read superblock

FAT:  unable to read boot sector

VFS:  Cannot open root device "md/0" or unknown-block(0,0)

Please append a correct "root=" boot option

Kernel panic: VFS: Unable to mount root fs on unknown-block(0,0)
```

My root fs is reiserfs and I have reiser support built into the kernel (not module).

Here is my raidtab:

```

raiddev            /dev/md0

raid-level         0

nr-raid-disks         2

chunk-size         32

persistent-superblock   1

device            /dev/hde3

raid-disk            0

device            /dev/hdg3

raid-disk            1

```

fstab:

```

/dev/hde1      /boot      ext3      noauto,noatime      1 1

# /dev/hdg1      <mountpoint>   ext3      noauto,noatime      1 1

/dev/md/0      /      reiserfs   noatime         0 0

/dev/hde2      none      swap      defaults,pri=1      0 0

/dev/hdg2      none      swap      defaults,pri=1      0 0

/dev/cdroms/cdrom0   /mnt/cdrom   iso9660      noauto,ro      0 0

none         /proc      proc      defaults      0 0

none         /dev/shm   tmpfs      defaults      0 0

none         /dev/pts   devpts      defaults      0 0

```

And my grub.conf:

```

timeout 3

default 0

fallback 1

# splashimage=(hd0,0)/grub/splash.xpm.gz

# RAID-0 kernel

title  Gentoo Linux 2.6.1 (gentoo-dev-sources)

root (hd0,0)

kernel (hd0,0)/linux-2.6.1-gentoo  root=/dev/md/0

```

I must be missing something simple...anyone have any suggestions? Thanks!

-stephen

----------

## Dodga

 *Quote:*   

> /dev/md/0      /      reiserfs   noatime         0 0

 

Shouldn't it be /dev/md0 ???  :Smile: 

btw i am using mdadm, much easier to configure....

Dodga

----------

## taskara

 *Dodga wrote:*   

>  *Quote:*   /dev/md/0      /      reiserfs   noatime         0 0 
> 
> Shouldn't it be /dev/md0 ??? 
> 
> btw i am using mdadm, much easier to config....
> ...

 

yes Dodga is right, but sometimes it does have to be /dev/md/0

try changing it to /dev/md0 and see  :Smile: 

Dogda, tell me about this mdadm  :Smile: 

----------

## suhlhorn

As I understand it, for 2.6 kernels fstab should be /dev/md/0, but for older 2.4 kernels, it should be /dev/md0.

Anyway, I've tried both and I get the same error messages. Any other suggestions? Thanks for the input.

-stephen

----------

## suhlhorn

 *taskara wrote:*   

> 
> 
> yes Dodga is right, but sometimes it does have to be /dev/md/0
> 
> try changing it to /dev/md0 and see 
> ...

 

Just tried changing it back to /dev/md0, but got the same error messages.

Since I'm installing from Knoppix (kernel 2.4.22), could this be some conflict between the dev naming between the 2.4 knoppix kernel, and the 2.6 kernel that I'm trying to boot?

Thanks-

-stephen

----------

## taskara

 *suhlhorn wrote:*   

>  *taskara wrote:*   
> 
> yes Dodga is right, but sometimes it does have to be /dev/md/0
> 
> try changing it to /dev/md0 and see 
> ...

 

shouldn't be, because when you boot you're in your new gentoo system..

in your kernel you added devfs support? and told it to mount on boot?

----------

## suhlhorn

 *taskara wrote:*   

> 
> 
> shouldn't be, because when you boot you're in your new gentoo system..
> 
> in your kernel you added devfs support? and told it to mount on boot?

 

yes, and yes. (I already made that mistake on another box!)

----------

## taskara

hmm.. and you built all the raid drivers directly into your kernel, NOT as modules?

do you still have your raidtab file under /etc?

do you have a copy of it under /boot?

----------

## Crg

 *suhlhorn wrote:*   

> Okay guys-
> 
> Having trouble booting my new RAID0 system. I followed the instructions (excellent how-to, BTW) and was able to build the system. I built a 2.6.1 kernel (gentoo-dev-sources), but can't boot it with grub. Here are my boot error messages:
> 
> ```
> ...

 

Are your raid partitions type "fd"?  The output above shows you wouldn't be able to boot on any md device as none were found/setup.
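For anyone checking this: the type is set from fdisk's interactive prompt. A sketch of the session (the device /dev/hde and partition number 3 are just examples from this thread — use your own):

```

fdisk /dev/hde
   t        <- change a partition's system id
   3        <- partition number
   fd       <- "Linux raid autodetect"
   w        <- write the table and exit

fdisk -l /dev/hde    <- verify the partition now shows type fd

```

After that, the kernel's autodetection should actually find the members under "md: Autodetecting RAID arrays." at boot.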

----------

## suhlhorn

I finally fixed the box and it seems that the basic problem is that I am a moron. I forgot to include support for my off-board IDE controller in the kernel, so my kernel couldn't find my HD's. Oops.

Thanks for all the suggestions. I really appreciate the help.

-stephen

----------

## taskara

 *suhlhorn wrote:*   

> I finally fixed the box and it seems that the basic problem is that I am a moron. I forgot to include support for my off-board IDE controller in the kernel, so my kernel couldn't find my HD's. Oops.
> 
> Thanks for all the suggestions. I really appreciate the help.
> 
> -stephen

 

lol.. nps  :Wink: 

----------

## saskatchewan46

Hi,

I am currently using the ck-kernel 2.4.24 with my HPT370 RAID0 array.  I will be upgrading to 2.6 and software raid0 shortly, but was wondering, can I use the hpt370 controllers as controllers for my cdrom drives?  I plan on moving the two IDE drives to be software raid'ed to the IDE controllers on the mobo.  So this will free up the two previously used HPT controllers and fill up the two IDE controllers that cdroms used.  Does 2.6 support HPT controllers in NON-RAID format as basic IDE controllers?

Hopefully they will show up as /dev/hdg & hde.

Thanks for the help and the useful HOW-TO!

----------

## pulz

I need some pointers on how to get my raid array working again; I'm not able to "make" the arrays anymore.

This is due to a bad superblock on one of the hds, and I really want to get my drives back to recover the data; I'm not about to lose 260gb of data.

After some googling I have found that it should be possible to rebuild the superblock part of the array, and from what I understand the data should be intact.

The error is as following:

 *Quote:*   

> 
> 
> raid0: looking at hdf
> 
> raid0:   comparing hdf(156290816) with hdf(156290816)
> ...

 

I found a manual that says I should do an mkraid -force to rewrite the superblock.

But this command tells me to use lsraid:

 *Quote:*   

> 
> 
>  WARNING!
> 
>  NOTE: if you are recovering a double-disk error or some other failure mode
> ...

 

Okay, I looked over the manual for lsraid and I'm not able to find any information on how to rebuild the superblock.

I am able to rebuild the superblocks by doing mkraid -R /device, but in doing this I would also lose all the data on the drives.

 *Quote:*   

> 
> 
> mkraid -R /dev/md0
> 
> DESTROYING the contents of /dev/md0 in 5 seconds, Ctrl-C if unsure!

 

So now I'm stuck and need some help; is there anyone that can help me on this, or is my data lost forever?

----------

## aridhol

I have a question. I have a 2 disk raid set up. I wanna add another disk.

How?

Might be a good addition to the How to too  :Smile: 

Never mind, found it... I'll post how when/if I get the drives working using raidreconf.

----------

## taskara

pulz, I don't want to suggest anything cause I'm not 100% sure.

I have re-built an array before and it did not delete the hard drive (with mkraid)

but did you re-boot to the gentoo cd?

and modprobe md?

and re-type your /etc/raidtab

and then try raidstart /dev/mdx?

----------

## pulz

Yup, but it didn't work.

So in the end I had to give up; it's been some time since I made that post.

So the array has been recreated and all the data has been lost   :Sad: 

----------

## taskara

 :Confused: 

ouch..

----------

## Gelfling

I finally did it, RAID0 on my system is alive!!!

----------

## Haro

Anyone do this with more than 2 disks?

I have one disk per IDE connection -> IDE0, IDE1, PCI_IDE0, PCI_IDE1.

Would it be possible to include all these disks in a RAID0 array?

----------

## aridhol

Yes, just add more entries to include all your disks

----------

## Haro

Sweet.

I'll post back on how it goes.

----------

## mudrii

Can I change to a different file system on RAID 0? I'm not satisfied because my XFS speed sucks in my case  :Sad:  I would like to change it to Reiserfs,

like #convertfs /dev/md0 xfs reiserfs

Does that command work on RAID?

----------

## aridhol

It should be no problem. However, it is always risky to change filesystems, so I would definitely recommend making a backup first.

----------

## CharlieS

Is chunksize something that has to stay the same? Or can you change it and reboot and have everything work fine? How does that work? I wanted to test different chunksizes but don't know if it has to be a static value or not.

----------

## Vlad

(CharlieS: The chunk-size is set when the array is created and cannot be changed dynamically. You'll have to recreate the array, and reformat, to change it.)
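A related bit of arithmetic, as a hedged sketch (the 32k chunk and 4k block values are just the common ones from the raidtabs in this thread, not anyone's actual setup): the ext2/ext3 "stride" is derived from the chunk size, which is one reason chunk size is tied to filesystem creation.

```

# Sketch: deriving the ext2/ext3 stride from the raid chunk size.
# Values assumed: 32k chunks (as in the raidtabs above), 4k fs blocks.
chunk_kb=32
block_kb=4

stride=$(( chunk_kb / block_kb ))
echo "stride=${stride}"

```

The result would then be passed to mke2fs's stride option when formatting, so changing the chunk size means reformatting anyway.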

I have an unrelated question:

What's the performance like on a software RAID0 array of two drives on the same cable? (IE, raiding a master and slave drive.) I realize that putting two disks on one cable is detrimental to performance, but I don't think it's ever been quantified.

----------

## pinetops

Just a reminder that you must set the partition type to 0xfd if you want to use your raid as your root. If you don't it doesn't work and doesn't say why.

At least, that's what did it for me  :Smile: .

----------

## adante

alrighty, i suspect i'm probably the first person to get something fruity like this: i appear to be getting SLOWER access results on my raid 0 partitions than on the drives themselves

my setup

```

md1 : active raid0 hdd1[3] hdb1[2] hda4[1] hdc4[0]

      119295616 blocks 32k chunks

md0 : active raid0 hdc3[1] hda3[0]

      20008640 blocks 32k chunks

```

basically, i have 2x40gb (hda, hdc) + 2x30gb (hdb, hdd), and have made 2 raid 0 partitions as (10gb from hda & 10gb from hdc) (30gb from all drives) as well as assorted boot/swap

hdparm -tT results (hdparm on /dev/hd{b,c,d} give about the same as a):

```

/dev/hda:

 Timing buffer-cache reads:   592 MB in  2.00 seconds = 295.31 MB/sec

 Timing buffered disk reads:  112 MB in  3.00 seconds =  37.30 MB/sec

/dev/md0:

 Timing buffer-cache reads:   604 MB in  2.01 seconds = 300.69 MB/sec

 Timing buffered disk reads:  108 MB in  3.01 seconds =  35.93 MB/sec

/dev/md1:

 Timing buffer-cache reads:   608 MB in  2.01 seconds = 302.08 MB/sec

 Timing buffered disk reads:  104 MB in  3.03 seconds =  34.34 MB/sec

```

this really takes the cake! my raid partitions actually have reduced performance? i ran this a few times with the same results (md0/md1 is always lower)

anybody have any ideas?

my raidtab:

```
# / partition

raiddev /dev/md0 # raid device name

raid-level 0 # raid 0

nr-raid-disks 2 # number of disks in the array

chunk-size 32 # stripe size in kilobytes

persistent-superblock 1

device /dev/hda3 # device that comprises the raid array

raid-disk 0 # disk position index in array

device /dev/hdc3 # device that comprises the raid array

raid-disk 1 # disk position index in array

# / partition

raiddev /dev/md1 # raid device name

raid-level 0 # raid 0

nr-raid-disks 4 # number of disks in the array

chunk-size 32 # stripe size in kilobytes

persistent-superblock 1

device /dev/hdc4 # device that comprises the raid array

raid-disk 0 # disk position index in array

device /dev/hda4 # device that comprises the raid array

raid-disk 1 # disk position index in array

device /dev/hdb1 # device that comprises the raid array

raid-disk 2 # disk position index in array

device /dev/hdd1 # device that comprises the raid array

raid-disk 3 # disk position index in array

```

fstab entries:

```

/dev/md0                /               reiserfs        noatime,notail         0 0

/dev/md1                /mnt/deathstar  reiserfs        noatime,notail         0 0

```

ALL drives have 32-bit + dma

```

/dev/hda:

 multcount    = 16 (on)

 IO_support   =  1 (32-bit)

 unmaskirq    =  0 (off)

 using_dma    =  1 (on)

 keepsettings =  0 (off)

 readonly     =  0 (off)

 readahead    = 256 (on)

```

if anybody has any idea whats going on i'd be grateful

----------

## jmas

Hi

I'm a complete newbie in linux issues. I'd like to install Gentoo on my machine to dual boot with windows but in the same RAID 0 array. Is this possible?. My motherboard is a Asus K8V Deluxe with integrated promise and VIA RAID solutions.  I have 3 disks, two SATA seagate for the RAID array and one IDE maxtor for backing up data.

Thanks

----------

## BlindSpy

This guide is great, first off. Second, I'm pretty sure there isn't, but does anyone know of a way to set up a software raid AFTER the installation? I really don't want to lose all my work.

Also, I've got a 60gig and a 120gig; would partitioning off 60gigs from my 120gig work for the raid, and then just using the other 60 for storage?

----------

## blackwhite

Does this support scsi cards (workstation scsi card, no raid support) and scsi disks? I tried it, but I failed.

Thanks.

----------

## JWU42

 *adante wrote:*   

> alrighty i suspect i'm probably the first person to get something fruity like this, i appear to be SLOWER access results on my raid 0 partitions than the drives
> 
> <snip>
> 
> if anybody has any idea whats going on i'd be grateful

 

I wouldn't use hdparm as an accurate test.  I would try bonnie++ or tiobench.

----------

## r00d00

So I've followed the guide; at reboot grub returns an error:

kernel (hd3,0)/boot/kernel-2.6-1 root=/dev/md0

Error 15: File not found

I then changed grub.conf as hd3,0 should be the boot partition

kernel (hd3,0)/kernel-2.6-1 root=/dev/md0

Which failed with a similar error message, except no /boot

Then tried grub.conf as

kernel (hd3,0)/kernel-2.6-1 root=/dev/md/0

Again same error.

Can anyone help?

grub.conf

```

default 0

timeout 30

splashimage=(hd3,0)/grub/splash.xpm.gz

title=Gentoo

root (hd3,0)

kernel (hd3,0)/kernel-2,6-1 root=/dev/md0

```

Setup

2x scsi 32GB

sda1 64M /boot

sda2 1024M /swap

sda3 part of raid md0

sdb1 64M nothing

sdb2 1024M /swap2

sdb3 part of raid md0

2 other drives in machine both ide on primary ide as windows

----------

## tmadhavan

Still no news about using two drives on one ide cable? 

I didn't have much luck with this - formatted everything, and all I got was an increase in performance from 40ishMB/Sec to 50ishMB/Sec - not at all worth the effort. 

I have the drives as /dev/hda /dev/hdc, one drive on one IDE slot, one drive/dvd on the other IDE.

Keep fiddling, see what happens....

----------

## Dodga

Oh, forgot to answer here about mdadm

1. mdadm is in the portage  :Smile: 

2. here a good guide for mdadm http://www.linuxdevcenter.com/pub/a/linux/2002/12/05/RAID.html

3. gentoo also supports mdadm if there is an mdadm.conf in /etc
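To flesh that out a bit (a hedged sketch only; the device names and chunk size are borrowed from the raidtab examples earlier in this thread, not from Dodga's setup): with mdadm you create the array once on the command line, then record it in /etc/mdadm.conf so it can be assembled at boot:

```

# Create the same two-disk stripe the raidtab examples describe:
#   mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=32 /dev/hda3 /dev/hdc3
#
# Then record it for assembly at boot:
#   mdadm --detail --scan >> /etc/mdadm.conf

# A resulting /etc/mdadm.conf looks roughly like:
DEVICE /dev/hda3 /dev/hdc3
ARRAY /dev/md0 level=raid0 num-devices=2

```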

Have phun 

Dodga

----------

## soth

I don't know if this is the right place to post this (maybe I can get some pointers on where to post it in more detail), but the md driver in 2004.2 seems kinda broken. I tried loads of different approaches to get it to work for a RAID0 on one of my systems, but to no avail. Switched to 1.4 and it worked like a charm...

----------

## Naughtyus

I'm having problems with my linear raid setup that perhaps someone here might be able to help me with.

My raidtab is as follows:

```
#

# linear RAID setup, with no spare disks:

#

raiddev /dev/md0

    raid-level                linear

    nr-raid-disks             2

    persistent-superblock     1

    chunk-size                32

    device                    /dev/hda3

    raid-disk                 0

    device                    /dev/hdb1

    raid-disk                 1 
```

This works without error when booting to the LiveCD.  When I am done installing the system, and reboot, the raid drive is never detected.

```
VFS: Cannot open root device "md0" or unknown-block(0,0)

Please append a correct "root=" option

Kernel panic: VFS: Unable to mount root fs on unknown-block(0,0)
```

I'm running a 2.6.8 kernel with all of the raid options compiled into the kernel, and am running udev.  I've tried using '/dev/md/0' instead of '/dev/md0', with the same result.

Any suggestions?  Will this work if I use a 2.4 kernel? devfs?

----------

## tmadhavan

I've just set up a RAID-0 on my two 40GB Maxtors (btw, performance increase from 40MB/sec to 65MB/sec - shouldn't it be nearer 80???).

I compiled my kernel with genkernel, so had to add 

```
image=/boot/kernel-2.6.8-gentoo-r3

        label=gentoo

        read-only

        append="init=/linuxrc real_root=/dev/md/0"

        root=/dev/ram0

        initrd=/boot/initrd-2.6.8-gentoo-r3

```

to my lilo.conf. 

Also, in lilo.conf and in fstab, I had to change the device name from 

```
/dev/md0
```

 to 

```
/dev/md/0
```

 even tho in my raidtab it's md0. 

Here's my raidtab:

```

raiddev /dev/md0

raid-level 0

nr-raid-disks 2

chunk-size 4

persistent-superblock 1

device /dev/hda3

raid-disk 0

device /dev/hdc3

raid-disk 1

```

Not sure if that'll help at all, hope it does. 

GL,

T

----------

## stig

Thanks for the how-to. I've never done this before, and I got raid0 up running in five minutes.

----------

## TrainedChimp

Thanks for the HowTo. All went very well for me except that I had to manually create the md* devices; maybe that's because I'm running "pure" udev... 

Before:

```
# hdparm -Tt /dev/hda

/dev/hda:

 Timing cached reads:   2792 MB in  2.00 seconds = 1394.12 MB/sec

 Timing buffered disk reads:  172 MB in  3.00 seconds =  57.32 MB/sec

```

After:

```

# hdparm -Tt /dev/md0

/dev/md0:

 Timing cached reads:   2800 MB in  2.00 seconds = 1398.12 MB/sec

 Timing buffered disk reads:  348 MB in  3.01 seconds = 115.48 MB/sec
```

----------

## jkcunningham

How do you manually create the /dev/md* devices? I'm having the same problem.

-Jeff

----------

## TrainedChimp

 *jkcunningham wrote:*   

> How do you manually create the /dev/md* devices? I'm having the same problem.
> 
> -Jeff

 

```
cd /dev

mkdir /dev/md

mknod /dev/md/0 b 9 0

mknod /dev/md/1 b 9 1

mknod /dev/md/2 b 9 2

...etc.

ln -s md/0 md0

ln -s md/1 md1

ln -s md/2 md2

...etc.
```

Or you could emerge mdadm which has a way to force the creation of the devices as it does a 'makeraid'.

Good luck.

----------

## jkcunningham

Hey, thanks for the fast reply. Got it. And I discovered that you can get the "scary messages" to go away entirely if you put the persistent-superblock line before the chunksize line. Apparently, when the man page says:

 *Quote:*   

> 
> 
> The order  of  items  in the file is important. Later raiddev entries can use earlier ones (which allows RAID-10, for example), and the parsing code  isn't  overly bright, so be sure to follow the ordering in this man page for best results.
> 
> 

 

it isn't kidding. 

This is great!

----------

## stig

I tested the speed of my new md0 device the other day, and the performance results were a little different than I would expect:

I run two seagate barracuda 60 GB disks:

 *Quote:*   

> /dev/hde:
> 
>  Timing cached reads:   1100 MB in  2.00 seconds = 549.81 MB/sec
> 
>  Timing buffered disk reads:  110 MB in  3.04 seconds =  36.20 MB/sec
> ...

 

 *Quote:*   

> /dev/hdf:
> 
>  Timing cached reads:   1108 MB in  2.00 seconds = 552.70 MB/sec
> 
>  Timing buffered disk reads:  118 MB in  3.03 seconds =  38.99 MB/sec

 

And the md0-device:

 *Quote:*   

> /dev/md0:
> 
>  Timing cached reads:   1104 MB in  2.01 seconds = 550.43 MB/sec
> 
>  Timing buffered disk reads:   66 MB in  3.03 seconds =  21.77 MB/sec

 

The md0 seems to be remarkably slower than either of the two disks that make up the array. DMA is turned on. In any case, md0 should be faster than the two disks independently, regardless of the state of DMA.

----------

## jkcunningham

That is curious. 

How do you have these hooked up to your mother board? Are they both on the same IDE controller? If on separate controllers, are they both masters or slaves (or one each)? Are these SATA drives?

----------

## stig

They're master and slave on the same IDE-controller.

```
0000:00:0c.0 RAID bus controller: Promise Technology, Inc. PDC20265 (FastTrak100 Lite/Ultra100) (rev 02)

```

I used this controller in "hardware" RAID when I used the 2.4 kernel series, and the speed then was greatly improved. It's still the same physical configuration with respect to how the drives are attached.

----------

## jkcunningham

Hmmm. If they are both on the same cable, then it is by definition a serial event with respect to the two drives (i.e. one gets to write a byte, then the other, etc.). You should try putting them on two different controllers. All modern motherboards have two controllers. Switch stuff around so they are separated and I'll bet you see a big improvement. 

-Jeff

----------

## stig

Hm. Now ALL disks are on separate cables, but the result is still terrible:

```
/dev/hdf:

 Timing buffered disk reads:  122 MB in  3.03 seconds =  40.24 MB/sec
```

```
/dev/hdg:

 Timing buffered disk reads:  122 MB in  3.04 seconds =  40.18 MB/sec
```

```
/dev/md0:

 Timing buffered disk reads:  116 MB in  3.01 seconds =  38.57 MB/sec
```

----------

## jkcunningham

I don't know what to make of that. You are running raid-0, right?

----------

## stig

Indeed I am  :Smile: 

----------

## Hoshimaru

My motherboard is screwed... Is there a way I can access my md raid 0 with the livecd ? I need to get some data stored on these disks.

I guess booting the system with the old kernel for my former motherboard is a no go :S

I'm so screwed now  :Shocked: 

----------

## augury

i have a software raid0 i set up on a fasttrack that is now running on an ich5.  i didn't use the fasttrack formatter thing in the bios, so my boot partition is ext2 on only one of the disks.

in order for your kernel to mount these software raids, you need to add

   md=0,/dev/sda2,/dev/sdb2 root=/dev/md0

and it will read the raidtab, and gentoo will mount the drives that are not the root, although you might be able to pass those to the kernel too.

the bios has the option to load up raid w/ or w/out a bootPROM? i've never heard of this before.  i'm wondering if there would be more speed/less cpu with the bootPROM.  if i try to boot this way it doesn't boot grub.  i could boot the system off of another disk but i don't know if this will work or not.

sda and sdb are 2 80gb barracuda's with the new command caching (but my mb doesn't support it) and 16k chunks.   Timing buffer-cache reads are 2000 for the raid or the boot partition.  Timing buffered disk reads are ~50-70 on the boot partition and 100-113 on the raids.  kernel cpu times are very high on one processor during tests.

----------

## jackxh

Hi Everyone:

First thing: it is an excellent how-to.

I followed it to set up raid1 on this. Everything works. I use kernel 2.6.9.

/ reiserfs. AMD 2000, 512 RAM. Gigabyte MB. 

The system boots up, but it is very very slow. Even when I hit the return key, it takes .5 sec to respond. 

When I do top, I see 80 to 95% CPU use by raid-related executables. 

Can anyone please give me a hint if I need to configure anything?

Jack

----------

## Hoshimaru

Apparently I could boot the system with the new motherboard. I made a backup to another computer and I'm going to install it from scratch again... if I'm able to see the hard disks attached to the HPT372 RAID controller, at least   :Rolling Eyes: 

----------

## Phk

Hi there!

I'm going mad trying to get raid 0 working....  :Crying or Very sad: 

I'm able to make /dev/md0,

i'm able to do fdisk /dev/md0 and create partitions,

but......

How can i now format a single partition? If my raid (/dev/md0) had two partitions, how could i access each individually?

Please clear my mind or i'll reject technology from now on...  :Very Happy: 

----------

## Hoshimaru

Why on earth would you want to format individual partitions from your /dev/md0?

Just format the md0...

fdisking it? For what purpose? Once you've fdisked 2 or more hard disks and built your mdraid device on a specific partition of all the drives, it should be ok.

If you absolutely have to, just mke2fs /dev/hdX or sdX, for example.

----------

## Phk

Ok! So /dev/md0 is the first partition, not the raid itself.......

So my question is:

if i need to partition each drive individually, i don't need the raid mode set in the motherboard, right? If so, how can i have raid working on linux+windows?

I'm gettin confused...

----------

## Hoshimaru

Depends on the raid you want.

If you are going to use hardware raid, you need to create the array in your controller's bios and load the correct LKM. Then access the raided devices with /dev/ataraid/dXpX, or /dev/sda, or other namings... it depends on the module. Then it's just partitioning and formatting as if you were using a regular hard disk.

If you want to use linux software raid or (mdraid/mdadm) you might want to check this out: https://forums.gentoo.org/viewtopic-p-2053078.html#2053078

It's for use with a HPT372, but it's more or less the same for the rest.

----------

## zpet731

Hi everyone,

Just wanted to know if these instructions are still current, and whether they are the same for SATA RAID 0. Also I would like to mention that I just put a system together with a GA-K8NF-9 and two 160GB seagate HDs. Now when I boot, the bios asks me to set up the raid array. So if I'm only using linux I should use software raid, right? Does this mean that I should disable the raid for my two SATA drives within the bios?

Thanks!!

----------

## Phk

Hi there.

I've managed to build a similar system up, but i've spent 2 weeks making a huge pile of sh*t.

So, i encourage you to follow this:

https://forums.gentoo.org/viewtopic-p-2220169-highlight-.html#2220169

Instead of spending 2 weeks like i did.

(you will basically only need the gen2dmraid-0.99a.iso. Instructions at the link i gave)

----------

## zpet731

Hey Phk,

If the above note was intended for me, then I need to ask: should I be using the RAID within the motherboard, or just stick to software raid? I'm running ONLY linux.

----------

## zpet731

Hi everyone,

I am posting again to clarify a few things and ask a more general question.

I stated earlier that I have just built a:

AMD64 system 3200+

GA-N8NF-9 motherboard

6600 GT graphics card

1GB RAM 

2 SATA 160GB drives

Now, Im only planning to run gentoo on this system so no dual boot or anything.

I've read quite a bit on SATA raid threads; most of the threads are excellent, but I still need a few things answered before I start installing gentoo on it. I'm using a minimal 2005.0 image that I downloaded off the net.

Now, if I am to use a RAID 0 setup, what is my best option: do I use the motherboard raid or not? I'm not sure which way is better, so hopefully someone can enlighten me on this issue.

Also, if I am to use software raid and control it completely from linux, do I need to disable the raid in the bios? My motherboard asks me to set up the array each time I boot up, and the sata raid is enabled by default. Can someone explain what needs to be done. Thanks!!!

----------

## Phk

Use raid in the MOBO, and then use gen2dmraid-0.99a like i did to create an initrd.

It's simpler than having to define partitions by hand  :Very Happy: 

See the thread i referred to above.

Any doubts, message me (here or there)

Good luck  :Wink: 

----------

## kamikaze04

I've two  questions.

1)==================

I've noticed that if i put /dev/hda i get faster speeds than if i put only a partition.

To test really the increase of the speed, what should i do?

```
hdparm -tT /dev/hda /dev/hdc /dev/md/0
```

or

```
hdparm -tT /dev/hdaX /dev/hdcY /dev/md/0
```

2)==================

Another question is,  :Arrow:  should partitions be mounted or not to get accurate speeds?

3)==================

My results are:

```

 FOR THE TWO HARDDISKS

/dev/hda:

 Timing cached reads:   1684 MB in  2.00 seconds = 840.87 MB/sec

 Timing buffered disk reads:  164 MB in  3.02 seconds =  54.37 MB/sec

/dev/sda:

 Timing cached reads:   1648 MB in  2.00 seconds = 823.71 MB/sec

 Timing buffered disk reads:  164 MB in  3.00 seconds =  54.62 MB/sec

 FOR TWO PARTITIONS IN THE HARD DISKS

/dev/hda10:

 Timing cached reads:   1660 MB in  2.00 seconds = 828.06 MB/sec

 Timing buffered disk reads:  136 MB in  3.02 seconds =  44.98 MB/sec

/dev/sda5:

 Timing cached reads:   1628 MB in  2.00 seconds = 813.72 MB/sec

 Timing buffered disk reads:  156 MB in  3.01 seconds =  51.90 MB/sec

 FOR THE TWO PARTITIONS USED IN THE RAID

/dev/hda15:

 Timing cached reads:   1680 MB in  2.00 seconds = 838.87 MB/sec

 Timing buffered disk reads:   88 MB in  3.07 seconds =  28.68 MB/sec

/dev/sda8:

 Timing cached reads:   1688 MB in  2.00 seconds = 843.28 MB/sec

 Timing buffered disk reads:  146 MB in  3.02 seconds =  48.38 MB/sec

 FOR THE RAID

/dev/md/0:

 Timing cached reads:   1708 MB in  2.00 seconds = 852.85 MB/sec

 Timing buffered disk reads:  172 MB in  3.01 seconds =  57.19 MB/sec

```

----------

## Phk

You should not be so paranoid about the speed measurement of your HDD..

Mounted or not; link partition or whole drive....

As long as it works...!  :Very Happy: 

 *Quote:*   

> Timing cached reads: 1901 MB in 2.00 seconds = 920.85 MB/sec
> 
> Timing buffered disk reads: 298 MB in 3.10 seconds = 96.18 MB/sec

 

 :Wink: 

2xMAXTOR 200GB 8MBCache SATA; Using A7N8X-E Deluxe MOBO and it's "hardware" RAID-0  :Smile: 

See us!

----------

## kamikaze04

As you can see, if i only get 4MB/s more (from 54 to 57), maybe RAID won't be relevant for me, but if I get 15MB/s more (from 44 to 57), maybe yes.

There is another result that is amazing: from 28 to 57 = 29 MB/s more.

And the difference comes only from running hdparm with one argument or the other.

I only want to know how I should run it.

----------

## kamikaze04

Okey, after few days doing some research and tests, this is what i've learned:

tip 1 - It really matters if you have another disk as slave (speaking always about IDE devices).

As someone said in this thread, having a slave device attached makes access to the master slower (in my case by about 10MB/s). So try to have your hard disk alone on its bus.

tip 2 - But what really makes the difference is the location of your partitions. Before doing raid on the whole disk, I did raid with partitions of 1 or 2 GB.

This is not only for RAID, it's for every partition in general. If the partition is physically first, it will be accessed faster (because the disk is faster in the external zone than in the internal zone) (at least for sequential accesses).

That's why I've found that all the partitions located at the beginning are accessed faster. 

So, if you really want a really fast RAID 0, you should raid two partitions like that.

tip 3 - RAID 0 doubles speed for reading and writing. But remember, it doubles the speed of the worst partition. So if you have a partition that does X MB/s and another that does X+15 MB/s, when you raid them you will get 2*X and not 2*(X+15). So remember to choose both partitions well (see tip 2).

tip 4 - To ensure that all your tests run under the same conditions, before testing speed do: "init 1" 

Another thing I've concluded is that it really doesn't matter to hdparm whether partitions are mounted or not.
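Tip 3 above can be put into numbers (a sketch only; the speeds are rounded from the example figures below, not fresh measurements):

```

# Rough model: RAID 0 read speed ~= number of members * speed of the slowest member.
slow=49    # MB/s, the slower partition
fast=55    # MB/s, the faster partition
disks=2

raid0=$(( disks * slow ))
echo "expected roughly ${raid0} MB/s, not $(( disks * fast )) MB/s"

```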

=====

Now my results are:

FOR THE TWO PARTITIONS USED IN THE RAID 

/dev/hda2:

Timing buffered disk reads: 88 MB in 3.07 seconds = 49.1 MB/sec

/dev/sda3:

Timing buffered disk reads: 146 MB in 3.02 seconds = 55.7 MB/sec

FOR THE RAID 

/dev/md/1:

Timing buffered disk reads: 172 MB in 3.01 seconds = 93.55 MB/sec

----------

## Phk

Nice tips  :Wink: 

Really HDD tuning  :Very Happy:  :Very Happy: 

Thanks

----------

## gnychis

I followed this guide exactly and rebooted my computer; it checks for discs in my cd-rom, then just freezes and doesn't go anywhere.

(IC7-MAX3)

any ideas?

is it possible for me to go back into liveCD and remount my system to check everything?

Thanks!

----------

## snakernetb

I have a MSI board w/ the raid  controller on MOBO.  I really want to go w/ gentoo, for a learning experience and also due to the fact that it is highly configurable.  I want to run a RAID 0, and notice that they have this link at the Promise website:

http://www.promise.com/support/download/download2_eng.asp?productId=9&category=driver&os=100&go=GO  (Partial Linux source code)

I was unaware of this "software" option, but I have been using Linux in various flavors for years.  Guess I didn't dig enough... but this is my first attempt at a RAID 0 setup, so can you blame me?  My main question is: why not try to make this driver work?  This all looks simple, but are you really getting everything you can out of the onboard controller?  I also noted as I looked through here that the onboard controller is referred to as more of a driver-driven RAID controller, i.e. Windows driver related.

I am planning on using this system as my main system until I can get a 64-bit system bought and built.  Until then I want to wring all the performance I can out of it.  From what I gather, I just want to turn off the RAID controller in the BIOS and address these disks as regular IDE in Gentoo?  I also noted something about IRQ interrupts.  Has anyone ever gotten the driver option to work on the 2.6 kernel?  Any info on this is appreciated.  If not, I will try knocking out the software option.  Thanks once again.

----------

## Muddy

so from the how-to I'm getting that you're wasting space on one drive, as you only put your boot partition on one and then you have this unused partition on the other??

----------

## BernieKe

Hmmz

I'm getting hdparm results on my two (identical) disks at 40 MB/s

and hdparm results on my raid 0 array (comprising those two disks) at 79MB/s

so far so good

but time cp VTS_01_1.VOB /home (timed copy of a 1 GB file from another HD to the RAID array)

gives 54 secs, so only about 17 MB/s
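(Just to show the arithmetic behind that figure: MB/s from a timed copy is simply file size divided by elapsed time; the 900 MB size here is a hypothetical round number for a not-quite-1-GB .VOB:)

```shell
# back-of-the-envelope throughput from `time cp`: size (MB) / elapsed seconds
size_mb=900   # hypothetical size of the copied file in MB
secs=54       # wall time reported by `time`
awk -v s="$size_mb" -v t="$secs" 'BEGIN { printf "~%.1f MB/s\n", s / t }'
```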

while the same copy takes only 50 secs when I just mount one of the HDs...

these tests have been done with reiserfs, ext3 and ext2, with reiserfs and ext3 giving the same performance,

and ext2 averaging a little better

and I've tried with both a blocksize of 128 and 512, giving no difference whatsoever

so I'm wondering: why is real-life performance so poor, and why is the array even slower than using a regular harddrive?

----------

## eae

Hmm I have a couple of questions (I haven't read the whole thread):

I have two hard disks which are currently on the same IDE channel: one is master and the other is slave. I read somewhere that the optimal setup would be to have them on separate channels (both masters); is that true? The problem is that I have a DVD burner and a DVD reader too, and I heard that putting them on the same channel as a hard disk slows down hard disk performance... So if I put both hard disks as masters on the two IDE channels and the DVD drives as slaves, will I improve or decrease performance? (And by the way, can I do that without destroying the RAID 0 that I am currently using?)

By the way my hard disk speeds seem kinda bad :/

```

# hdparm -tT /dev/md0

/dev/md0:

 Timing cached reads:   2864 MB in  2.00 seconds = 1431.91 MB/sec

 Timing buffered disk reads:   90 MB in  3.01 seconds =  29.88 MB/sec

# hdparm -tT /dev/hda

/dev/hda:

 Timing cached reads:   2856 MB in  2.00 seconds = 1427.91 MB/sec

 Timing buffered disk reads:   74 MB in  3.01 seconds =  24.60 MB/sec

# hdparm -tT /dev/hdb

/dev/hdb:

 Timing cached reads:   2860 MB in  2.00 seconds = 1429.91 MB/sec

 Timing buffered disk reads:   60 MB in  3.05 seconds =  19.66 MB/sec

```

But if I try to run a real benchmark like iozone, I get worse results from the RAID 0 than from a normal partition :/

I am using reiserfs and a chunk size of 64.
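(For reference, the chunk size I mean is the per-array one set in /etc/raidtab with raidtools; the device names below are just an example, not my actual layout:)

```
raiddev /dev/md0
        raid-level              0
        nr-raid-disks           2
        persistent-superblock   1
        chunk-size              64
        device                  /dev/hda3
        raid-disk               0
        device                  /dev/hdb3
        raid-disk               1
```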

----------

