# Suggestions on RAID card

## zzaappp

Six years ago I installed a PCI add-on RAID controller in an x86 box.  It was a PCI card that you configured with the drives you plugged into it.  I plugged in two disk drives, booted into the RAID manager (this was before I installed the OS), and configured the pair of drives as a single RAID0 volume.  Then I installed Linux, and it saw only a single hard drive, not the two separate drives connected to the controller.  So I selected ext2 as the filesystem and everything worked fine; I didn't have to do anything else after that...  Or so my recollection of events goes.

Fast forward to today.  I am building a new box that has an Intel RAID controller built right onto the motherboard.  I started looking around the Gentoo forums for advice on setting up RAID, and it felt like I was opening Pandora's box when I first saw the term "fakeraid" and how much work configuring this motherboard-based RAID controller seemed to take compared to what I did six years ago.  Then I read that the motherboard RAID controller would use CPU cycles to manage the volumes, and I started wondering if this on-board raid controller was a very different animal from what I had used before.

It sounds like the PCI card of yesteryear did all the dirty work for me:  I configured the card, but that was as far as it went.  The controller exposed the RAID0 pair of drives to the Linux kernel as if it were a single drive, and I do not recall doing anything more with the kernel than partitioning and putting ext2 on the created partitions.

If I am wrong, please correct me:  I want to understand this RAID thing so I can build the fastest disk I/O for my new computer.

Are the add-on PCI cards faster than the motherboard-based RAID controllers?

The machine I'm building will be writing and reading a lot of information since it will be a database server, so I need this to be fast.  I've already got two SATA drives, so I don't want to switch to SCSI.  What I want to know is: can anyone recommend a good PCI/PCIe RAID controller?

Many thanks!

-z

----------

## drescherjm

Why not use the drives as single disks with the built-in Linux software RAID? It is considerably better and more manageable than fakeraid.

 *Quote:*   

> Are the add-on PCI cards faster than the motherboard-based RAID controllers?

 

For RAID0 the answer is generally no. RAID0 requires none of the hardware processing that a real hardware RAID card provides, so the one real benefit of a hardware card is its cache. But you could just spend that money on more RAM for your PC and have a much faster cache for a lot less. And when I talk about hardware RAID cards, I mean the ones that have a CPU on them and generally cost more than $300 US.

 *Quote:*   

> The machine I'm building will be writing and reading a lot of information since it will be a database server, so I need this to be fast.

 

Do you really need an STR of 120 to 150 MB/s? Currently a single 7200 RPM SATA drive nets an STR of about 75 MB/s (Seagate 330 GB 7200.10).

Are you concerned about data loss? With a two-drive RAID0 you are twice as likely to lose all of your data.

Also, if you put a plain PCI card (not PCIe or PCI-X) in your PC, it will definitely be slower: the maximum bandwidth of the PCI bus (about 133 MB/s for standard 32-bit/33 MHz PCI) is less than two drives can provide.

----------

## drescherjm

 *Quote:*   

> and I started wondering if this on-board raid controller was a very different animal from what I had used before. 

 

If you paid less than $400 US for the PCI card six years ago, it is very likely that you used fakeraid, but Linux had the driver for that card built in.

----------

## zzaappp

The server was a quad-Xeon P3 and cost about $25k at the time, so the card was probably more than $400.

 *drescherjm wrote:*   

>  *Quote:*   and I started wondering if this on-board raid controller was a very different animal from what I had used before.  
> 
> If you paid less than $400US for the PCI card 6 years ago it is very likely that you used fake raid but linux had the driver for that card built in.

 

----------

## drescherjm

You have many options, depending on what you want to spend and whether you can risk losing some or all of your data. You really should check out Linux software RAID. If you want a very easy solution and are willing to shell out $300, I would get a 3ware card, although as I tried to explain above, with RAID0 a hardware card is in most cases overkill: it will not significantly improve performance or reduce CPU time, but it will make setting up RAID0 simple. As for CPU usage, Linux software RAID6 only takes 5 to 7% CPU time on a 2 GHz single-core Opteron, and RAID0 would be significantly less than that.

----------

## drescherjm

BTW, please forgive me if anything I am saying does not make a lot of sense. I banged the back of my head getting into my SUV a few hours ago (no, I was not drinking; it is very cold and windy here) and it really hurts...

----------

## zzaappp

This sounds worth a try...  Is there an up-to-date FAQ on configuring and setting up a Linux software RAID1?  I've never configured Linux for RAID support, and this is a fresh Gentoo install, so anything I do has to take that into account.  And as I mentioned, the disks I have are SATA.

Thanks!

-z

 *drescherjm wrote:*   

> You have many options, depending on what you want to spend and whether you can risk losing some or all of your data. You really should check out Linux software RAID. If you want a very easy solution and are willing to shell out $300, I would get a 3ware card, although as I tried to explain above, with RAID0 a hardware card is in most cases overkill: it will not significantly improve performance or reduce CPU time, but it will make setting up RAID0 simple. As for CPU usage, Linux software RAID6 only takes 5 to 7% CPU time on a 2 GHz single-core Opteron, and RAID0 would be significantly less than that.

 

----------

## drescherjm

I would check gentoo-wiki.com. There are several guides there, and this one looks good at first glance:

http://gentoo-wiki.com/HOWTO_Install_on_Software_RAID

BTW, I have about 10 TB of data sitting at work on several Linux software RAID arrays (mostly RAID6) that I set up, so I can help here if you get stuck...

It really is not that hard to use mdadm.

You mentioned RAID1. You might consider RAID10 if you can get four disks, as that will give you the faster disk access of striping with the redundancy of RAID1.
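
A minimal sketch of what that create command looks like with mdadm (the device names are examples only; it is echoed rather than executed here, since actually running it needs root and would wipe the named partitions):

```shell
# Hypothetical 4-disk RAID10 creation command; printed, not executed.
echo "mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1"
```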

There are other options, too. I do not recommend RAID5 or RAID6 unless you have five or more disks. With Linux software RAID5 or RAID6, your write performance should be around the speed of one disk, while read performance should be nearly RAID0 speed for the number of disks that store data. If you want, I can explain this further. 3ware cards claim similar read speeds with better-than-single-drive write speeds. I have one of these at work, but I have not installed it yet, as the nVidia fakeraid RAID5 in Windows seems to be cooperating now that I switched the workstation from XP to Win2k3R2.

An example of this: at work I have a six-drive RAID6 array (using Seagate 7200.10 drives). For array read speeds (using hdparm -tT) I get ~270 MB/s, which is not bad since four drives hold data and two hold parity (although parity is striped, so it is not exactly that split). On the same array, using dd if=/dev/zero of=somefile bs=8M, I believe I got ~80 MB/s.
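
If you want to reproduce those write numbers yourself, something like the following works; the TARGET path is a placeholder so the snippet runs anywhere (point it at a file on the array you actually want to measure, and use `hdparm -tT /dev/md0` as root for the read side):

```shell
# Rough sequential-write test: write 64 MB in 8 MB blocks and let dd
# report the throughput. TARGET is a placeholder path, not your array.
TARGET=/tmp/str_test.bin
dd if=/dev/zero of="$TARGET" bs=8M count=8 conv=fsync
rm -f "$TARGET"
echo "benchmark file removed"
```

Note the `conv=fsync`: without it dd reports the speed of writing into the page cache, not to the disks.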

----------

## Cyker

IIRC, if you are trying to make a RAID1 or RAID0 system, you should be able to do it in hardware without any drivers.

It's only 'complex' RAID like RAID5 that needs drivers when using 'accelerated'/fake RAID.

You can buy 'real' hardware RAID5 cards. These are cool, and your system will just see the array as a single drive (as in your example), but they often cost more than a brand-new high-end mobo, CPU and RAM combo!!

BTW, if you do software RAID5 I highly recommend you have either a really fast CPU, 2 CPUs or a dual-core processor if it's going to get heavy use.

All that talk about RAID5 not adding much processing overhead is a damned lie. At saturation writing speed, RAID5 will actually make sshd cut me off from my system because it's too low a priority and gets swamped by the ultra-high-priority md0_raid5 thread!

For pure speed, stick with RAID0 and back up religiously  :Wink: 

RAID0+1 is also good, but expensive.

I went RAID5 because although it is a slow (and CPU-hungry!) writer, it is near RAID0 speed for reading, and it also doesn't cost me half my disk space.

If you just want RAID0 or RAID1 (or RAID0+1  :Very Happy: ), try it with your mobo's config. If it doesn't work, post back and we can get you on track for doing a software RAID (it's not too hard).

Heck, if it does work, post to let us know  :Smile: 

----------

## drescherjm

 *Quote:*   

> All that talk about RAID5 not adding much processing overhead is a damned lie.

 

I have never seen more than 7% CPU usage in top with RAID5 or RAID6 running on a 2 GHz Opteron during reads or writes, and I have looked into this many times under many different types of file load. I am running amd64 and have at least 2 GB of RAM on all my software RAID boxes. During a rebuild the CPU usage may be higher; I do not remember.
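
Besides watching the md threads in top, /proc/mdstat shows what the arrays are doing (rebuild progress, active disks). A guarded sketch, since the file only exists when the md driver is loaded:

```shell
# Show md array state if the md driver is present; otherwise say so.
if [ -r /proc/mdstat ]; then
    cat /proc/mdstat
else
    echo "no md driver loaded on this machine"
fi
```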

----------

## Cyker

Are we talking light use or full-on maximum sustained transfer 'tho?

'Cause I tell ya, when I was dumping the 1 TB I had on the two disks (really about 600-700 GB) to the RAID with Midnight Commander in a VNC window, my system basically locked up until it finished.

It's possible I have something configured weird that's making it do a lot more work than it should, but I followed all the guides I could find, right down to the optimum stride settings for mkfs and write-intent bitmaps for mdadm!

<needless off-topic rant>

This is when I discovered that Seagate are a bunch of liars too (Whisper quiet?! Maybe when they're IDLE! Those SATA Barracuda 7200.10s sound like servo-driven head ball-bearing drives from the 90's when they're seeking like crazy! I swear you could make the whole array walk across the floor if you figured out the right seek pattern! In comparison, you can't even hear the MaxLineIIIs when they're running full pelt! I really wish Seagate hadn't bought out Maxtor...)

Out of curiosity, what's your dmesg say for your raid driver? The section I'm interested in is the bit that says:

```
raid6: int32x1    713 MB/s

raid6: int32x2    746 MB/s

raid6: int32x4    658 MB/s

raid6: int32x8    510 MB/s

raid6: mmxx1     1499 MB/s

raid6: mmxx2     2707 MB/s

raid6: sse1x1    1363 MB/s

raid6: sse1x2    2251 MB/s

raid6: sse2x1    2308 MB/s

raid6: sse2x2    3004 MB/s

raid6: using algorithm sse2x2 (3004 MB/s)

md: raid6 personality registered for level 6

md: raid5 personality registered for level 5

md: raid4 personality registered for level 4

raid5: automatically using best checksumming function: pIII_sse

   pIII_sse  :  5756.000 MB/sec

raid5: using function: pIII_sse (5756.000 MB/sec)

md: md driver 0.90.3 MAX_MD_DEVS=256, MD_SB_DISKS=27

```

This is about right for an Athlon64 1.8GHz CPU (Which is what the server uses)

----------

## drescherjm

 *Quote:*   

> Are we talking light use or full-on maximum sustained transfer 'tho?

 

Yes, these benchmarks are for sequential reads and writes of large blocks (several megabytes at a time); once you go to random (or small) operations, performance drops significantly.

 *Quote:*   

> 'Cause I tell ya, when I was dumping the 1 TB I had on the two disks (really about 600-700 GB) to the RAID with Midnight Commander in a VNC window, my system basically locked up until it finished. 

 

There can be a few reasons for that. First, if two or more of your drives are on an external plain-PCI card (not a PCIe or PCI-X controller), you probably maxed out the PCI bus during the transfer. Also, if your OS is on any of the involved disks and it needs to load anything, that will be slow because of thrashing, among other reasons.

 *Quote:*   

> Out of curiosity, what's your dmesg say for your raid driver? The section I'm interested in is the bit that says: 

 

I will get this tomorrow, as I only do RAID1 at home; work is where I have the RAID5 and RAID6 arrays.

 *Quote:*   

> This is when I discovered that Seagate are a bunch of liars too 

 

Well, they do not sound as loud as WD drives, and I have around 25 x 250 GB WD and 25 x 330 GB Seagate drives at work. But at work there are many factors at play: in our server room, where we have around 15 servers and the 10 TB of RAID6, the AC unit drowns out any sound from the computers.

----------

## Cyker

ROFL Yeah, in some situations the acoustics rate rather low down the importance scale  :Wink: 

The acoustics in the newer Seagates are odd... in their older drives, you could flick them between Fast and Quiet modes; NONE of the 7200.10s have this ability for some odd reason. Maybe some issue with the perpendicular recording? I don't know why they would leave it off...

Even more strangely, it's been reported that the PATA version of the 7200.10 is MUCH quieter than the SATA version, which makes me think the SATA ones are factory-set to Fast/Loud mode, while the PATA ones are set to Slow/Quiet mode.

There doesn't seem to be a way to change this...

Ironically, the newer Western Digitals retain the fast/quiet mode switch, so you can still set it to either...

I just liked my MaxLines - They had one setting; Fast AND quiet  :Very Happy:  (But expensive...!)

I think you're right about the file types. All the disks here are attached to on-board connectors that are ostensibly connected directly to the same bridge chips, so in theory they don't even have to cross the HyperTransport bus.

However, the copy op did involve copying hundreds of thousands of sub-block/sub-stripe files, and I think that is what made the CPU usage shoot through the roof.

Copying a single 17 GB file barely fazed the system, but copying a 4 GB folder full of a few hundred thousand files, most 2 kB or less, reproduced my timing-out-from-system trick.

Weird, but there you go... I suspect it's partly the huge penalty you get for copying sub-stripe files in general, and the insane number of journal accesses ext3 needs when dealing with so many files...

----------

## drescherjm

 *Quote:*   

>  you could flick them between Fast and Quiet modes; NONE of the 7200.10s have this ability for some odd reason.

 

I have never messed with that. Are you talking about jumpers or using hdparm?

I am using reiserfs on all my RAID arrays, since reiserfs and ext3 are my only choices: XFS and JFS do not allow shrinking partitions, and I was told that ext3 is slower out of the box because of its tighter consistency guarantees.

----------

## zzaappp

I followed the WIKI page, but in the end the thing locks up on reboot, so no go.

I'm wondering about the controller on board.  According to the manual that came with the board:

 *Quote:*   

> ZCR: Zero Channel RAID.  PCI card that allows a RAID card to use the onboard
> 
> SCSI chip, thus lowering cost of RAID solution 

 

and

 *Quote:*   

> SCSI Interrupt Steering Logic (SISL): Architecture that allows a RAID controller, 
> 
> such as AcceleRAID 150, 200 or 250, to implement RAID on a system board-
> 
> embedded SCSI bus or a set of SCSI busses.  SISL: SCSI Interrupt Steering Logic 
> ...

 

and

 *Quote:*   

> You may use these six Serial ATA ports to have the support of RAID 0 and 1 through the on board Intel ESB2 chipset. 

 

When I did the Linux fakeraid I set it up for RAID1, but I didn't go into the BIOS RAID setup and clear what I had done previously: it was still set for RAID0.  Maybe that is what went wrong with my install.

So is the Intel ESB2 chipset a real RAID?  I mean, can I just load a driver of some kind in Linux without having to go through the fakeraid steps?

-z

----------

## Cyker

 *drescherjm wrote:*   

>  *Quote:*    you could flick them between Fast and Quiet modes; NONE of the 7200.10s have this ability for some odd reason. 
> 
> I have never messed with that. Are you talking about jumpers or using hdparm?

 

On early HDs it was a jumper, but more commonly it's software controlled.

I've seen BIOS options in Dell computers and some mobos for setting the 'AAM', and in the Windows world it's usually set by a DOS utility.

In Linux, I *think* you can set it with hdparm -M and a value from 128 (quietest) to 254 (fastest), but I'd have to check the man pages.
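
The commands would look like this (shown as a dry run, since applying them needs root and a real disk; /dev/sda is an example device, not necessarily yours):

```shell
# AAM (acoustic management) via hdparm; echoed rather than executed.
echo "hdparm -M /dev/sda        # query current acoustic setting"
echo "hdparm -M 128 /dev/sda    # quietest"
echo "hdparm -M 254 /dev/sda    # fastest"
```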

 *drescherjm wrote:*   

> I am using reiserfs on all my RAID arrays, since reiserfs and ext3 are my only choices: XFS and JFS do not allow shrinking partitions, and I was told that ext3 is slower out of the box because of its tighter consistency guarantees.

 Yeah, I went ext3 because I'm lazy  :Wink: 

I was thinking of switching to Reiser (On account of it benchmarking as the fastest filesystem overall), but all the scare stories put me off  :Sad: 

----------

## Cyker

 *zzaappp wrote:*   

> I followed the WIKI page, but in the end the thing locks up on reboot, so no go.
> 
> I'm wondering about the controller on board.
> 
> <snip>
> ...

 

Lemme just clarify the terms I use to avoid some confusion:

Software RAID/S-RAID - What Linux does; Drives are just normal drives and Linux does all the RAID work from stripe/parity calculations to moving the data to the right drives.

Hardware RAID/H-RAID - What those expensive RAID cards do (Not the cheap ones!); These do all the RAID work and all Linux sees is a single drive - It knows nothing of the RAID work

Accelerated RAID/A-RAID - This is what most mobos and 'cheap' (haha) RAID cards do; your CPU has to do all the RAID calculations, but the card handles all the data traffic. Linux only sees a single drive, like in H-RAID, but needs a driver so it knows to do the calculations; otherwise it won't work.

Some (all?) mobos do RAID0, 1 and 0+1 in hardware (unlike RAID5, these are very simple) - If yours is one of these and you only want RAID0/1, I'd highly recommend skipping Linux S-RAID and just using the mobo's logic to deal with the RAID.

The procedure varies, but generally you make a note of which drives you want to erase and turn into an array, then go to your BIOS (or RAID BIOS on older mobos), add the devices to the array and tell it to create it.

When you get to Linux, you see a new /dev/hd? device (or possibly a /dev/sd?), which you can just set up and work with like a single disk.

Dead easy.

If it's not one of these boards, or you want to do RAID5 (NO mobo does true H-RAID5), you can go down the A-RAID route or the S-RAID route.

Now, it's very likely you can get the correct driver for your mobo to do A-RAID using the dmraid tools, but personally I'd recommend against it UNLESS you also want to access the array from Windows.

Going down the software RAID route takes a few steps. You are right in that you MUSTMUSTMUST disable any BIOS RAID options: the disks HAVE to be presented to Linux as normally as possible; any munging of them will bugger things up.

The creation of a Linux software RAID is fairly simple, assuming all goes smoothly:

Step 1) Allocate Array Space:

```
cfdisk <device>

        -> Make partition type FD
```

Step 2) Create Array:

```
# Example for a RAID5 array:
mdadm -C /dev/md0 --verbose -l 5 -n 4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# 2-disk RAID1 array:
mdadm -C /dev/md0 --verbose -l 1 -n 2 /dev/sda1 /dev/disk/by-label/RandomUSB1

# 3-disk RAID0 array:
mdadm -C /dev/md0 --verbose -l 0 -n 3 /dev/sda1 /dev/sdb1 /dev/hda1
```

Obviously, you'd need to substitute the right RAID level (-l), the right number of drives (-n) and the correct drive devices.

This should cause a crap-load of crazed whirring from your drives; You can check on it using the commands:

```
cat /proc/mdstat

mdadm -D /dev/md0
```

NB: One or more disks may be designated as spare while the actual array is built, even if you don't specify one. This is by design, so don't worry about it.

Now, you can wait for the array to build, or dive right in and make a filesystem on it; Just use the appropriate mkfs command for the filesystem you want to use.

The stride param, as mentioned in the wiki, may be used here for better performance, but it's not mandatory.
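
For reference, stride is just (md chunk size) / (filesystem block size). A small sketch with illustrative numbers (64 KiB was the mdadm default chunk around this time; check yours in /proc/mdstat before trusting these):

```shell
# Stride arithmetic: 64 KiB chunks over 4 KiB ext3 blocks -> stride=16.
# The mke2fs command is printed, not run, since /dev/md0 is an example.
CHUNK_KB=64
BLOCK_KB=4
STRIDE=$((CHUNK_KB / BLOCK_KB))
echo "mke2fs -j -b 4096 -E stride=$STRIDE /dev/md0"
```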

Don't follow these instructions blindly; It's getting late and I haven't sanity-checked them, and also I think I missed a step out  :Razz: 

This is just meant to point you in the right direction...

----------

## zzaappp

 *Quote:*   

> Accelerated RAID/A-RAID - This is what most mobos and 'cheap' (haha) RAID cards do; your CPU has to do all the RAID calculations, but the card handles all the data traffic. Linux only sees a single drive, like in H-RAID, but needs a driver so it knows to do the calculations; otherwise it won't work.

 

This sounds like the way I want to go.  I have already configured the BIOS RAID for RAID0 mode.  I imagine, then, that there is some command I need to hand the Linux kernel at boot time (when booting the Gentoo minimal install disk) so that it uses my controller, right?  That is what I have looked for, but I cannot find anything in the forums.  I may be looking for the wrong thing:  maybe no such kernel parameter exists.

Booting from the minimal install disk and getting to the shell prompt, I can issue fdisk -l, and that lists the two individual SATA drives as /dev/sda and /dev/sdb.  I assume that if the RAID driver in the kernel were talking to my on-board RAID controller, there would be one drive available representing the pair running as RAID0.  But I don't know what it is called, or how to access it at this point.

Thoughts?

-z

----------

## drescherjm

Although you do not have an nvraid chip, the installation should be similar to this:

http://gentoo-wiki.com/HOWTO_Install_Gentoo_with_NVRAID_using_dmraid
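
The gist of the dmraid side is just two commands (shown here as a dry run, since they need root and BIOS RAID metadata on real disks to act on):

```shell
# What the dmraid tools do with BIOS fakeraid metadata; echoed only.
echo "dmraid -r     # list the raid sets found in the BIOS metadata"
echo "dmraid -ay    # activate them; devices then appear under /dev/mapper/"
```

Once the set is activated, the /dev/mapper device is the "single drive" you were expecting to see instead of /dev/sda and /dev/sdb.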

----------

