# Gentoo support for Hardware RAID

## dlambeth

Okay, so I've been working with Gentoo for a long time but somehow never got around to dealing with RAID. Does anyone know if there is support for hardware RAID with kernel 2.6.19.x? Lots of servers come with SATA RAID, but I found out most servers that have SATA RAID are not true hardware RAID; they are controlled by the BIOS. Does anyone know of any good, reasonably priced hardware RAID cards that will work with Gentoo? If so, please enlighten me.

Cheers!

PS Software RAID bites, so I won't really use it unless I have no alternative.

----------

## Suicidal

LSI seems to be well supported. While not cheap, their cards have always had good in-kernel support.

----------

## dlambeth

Great! Yeah, I'm familiar with those cards but have never used them with anything other than a Microsoft OS. I wonder if the latest Gentoo release would be much pain to get working with one of these.

Thanks,

----------

## elgato319

3ware RAID controllers also work very well under Linux.

http://www.3ware.com/support/OS-support.asp

----------

## huuan

I've just done an install of a system with a SAS5iR hardware SATA RAID controller and it seems to work OK so far.

I had a hard time figuring out which driver to use, but found some help here:

https://forums.gentoo.org/viewtopic-t-543406-highlight-poweredge.html

----------

## dlambeth

Excellent, keep me posted on your progress if you can. I am looking at the 3ware line of cards, but I want to make sure someone has been successful with a hardware RAID setup on a Gentoo 2.6 kernel.

Thanks everyone!

----------

## wyv3rn

Using 3ware RAID on multiple machines with no problems.  3ware hardware has been well supported by Linux for a number of years.

----------

## Cyker

Do Hardware RAID cards actually need Kernel support?

I thought they'd just 'work'.

I know for sure the only things that need Kernel drivers are 'accelerated' RAID cards, and possibly the management utils for accel/hardware RAID cards...?

----------

## Suicidal

 *Cyker wrote:*   

> Do Hardware RAID cards actually need Kernel support?
> 
> I thought they'd just 'work'.
> 
> I know for sure the only things that need Kernel drivers are 'accelerated' RAID cards, and possibly the management utils for accel/hardware RAID cards...?

 

They really do. Most cards get auto-detected, so it appears to be automatic, but I had one hell of a time the first time I installed Gentoo 2005.0 on a PowerEdge 4600 with a PERC 4 DI.

I had to rmmod the RAID module and all of the Ethernet modules and then modprobe them back in to get it to detect both the disks and the network; thankfully, once installed, it ran much, much, much better.

----------

## wizard69

From my own experience with hardware RAID under Gentoo: buy a 3ware card. I have about 25 Gentoo servers with RAID 1, RAID 5 and RAID 10, without any problems whatsoever. These controllers are rock solid, and you can also install the 3ware utilities to check disks and, on some models, report CRC errors via email. We recently bought a few Dell servers with LSI controllers and were very disappointed with the poor performance, high wait states and bad response times. So my recommendation has to be 3ware. There are two kernel drivers for 3ware under Device Drivers -> SCSI device support -> SCSI low-level drivers. Other than that, you have the 3ware BIOS, where you can configure your RAID settings and create a new array if necessary.
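For anyone hunting for those two drivers: as far as I remember they go by the config names below on a 2.6-era kernel (this is from memory, so double-check in menuconfig for your exact kernel version):

```
# Device Drivers -> SCSI device support -> SCSI low-level drivers
# (names from memory; verify against your kernel version)
CONFIG_SCSI_3W_9XXX=y            # 3w-9xxx driver, 9000-series SATA RAID cards
CONFIG_BLK_DEV_3W_XXXX_RAID=y    # 3w-xxxx driver, older 7000/8000-series cards
```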

----------

## neysx

There is no such thing as cheap hardware raid.

Yes, hardware RAID cards need kernel drivers, just like any IDE/SATA/SCSI interface. The RAIDed disks are seen as a normal SCSI disk by the system.

3ware RAID cards are good and well supported; the monitoring software is even in Portage.

Be aware though that your hardware raid card becomes a single point of failure. Should it break, you'll need a similar one to access your data.

IMO, software raid offers a lot more flexibility and does not depend on a single piece of hardware.

I own a 3ware 9500S but I configured it in JBOD mode and I use software raid.
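For the curious, the software RAID side is just mdadm; a hypothetical /etc/mdadm.conf for a simple two-disk mirror might look like this (device names are made up, adjust to your setup):

```
# Hypothetical /etc/mdadm.conf for a two-disk RAID1 (device names invented)
# The array itself would be created beforehand with something like:
#   mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
DEVICE /dev/sda1 /dev/sdb1
ARRAY /dev/md0 level=raid1 num-devices=2 devices=/dev/sda1,/dev/sdb1
```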

Hth

----------

## Cyker

What, Linux can't see Hardware RAID as just another IDE device?? Man, that kinda sucks. I assumed if you had the Generic ATA driver compiled in they would just *work*. For a laugh we tried installing DOS with an obsolete Adaptec PCI card - Booted off a 3x20GB RAID5 array into DOS6.22 and managed to load TIE Fighter  :Razz: 

(Of course, if we're talking about SCSI RAID, then ignore me  :Smile:   I just assume IDE because I always saw SCSI RAID as being a bit of an oxymoron, since SCSI disks are definitely *not* 'inexpensive'!  :Shocked: )

----------

## neysx

 *Cyker wrote:*   

> What, Linux can't see Hardware RAID as just another IDE device?? Man, that kinda sucks. I assumed if you had the Generic ATA driver compiled in they would just *work*. For a laugh we tried installing DOS with an obsolete Adaptec PCI card - Booted off a 3x20GB RAID5 array into DOS6.22 and managed to load TIE Fighter 

 Neither can Windows or DOS, unless the hardware offers a standard IDE interface, in which case a standard driver is enough. The bottom line is that you still need the right driver for your RAID card.

 *Cyker wrote:*   

> (Of course, if we're talking about SCSI RAID, then ignore me   I just assume IDE because I always saw SCSI RAID as being a bit of an oxymoron, since SCSI disks are definitely *not* 'inexpensive'! )

 Of course they are cheap, just not at equivalent capacity. I use old 73/36GB 10K SCSI disks that deliver >60MB/s each. IDE disks could not deliver that, and I'd need a high-end SATA drive on a 64-bit PCI interface to match that speed, which would be more expensive  :Smile: 

```
# hdparm -t /dev/sdd

/dev/sdd:

 Timing buffered disk reads:  198 MB in  3.02 seconds =  65.61 MB/sec
```

Hth

----------

## drescherjm

 *Quote:*   

>  I use old 73/36GB 10K scsi disks that deliver >60MB/s each. 

 

Current-generation SATA drives net >60MB/s. The following is a 330GB Seagate 7200.10 SATA2 drive in SATA1 mode on a dual-processor Opteron.

```
# hdparm -t /dev/sda

/dev/sda:

 Timing buffered disk reads:  236 MB in  3.01 seconds =  78.29 MB/sec
```

And a similar 400GB 7200.10 on a 5-year-old dual-processor Athlon MP 2200 system nets this:

```
# hdparm -t /dev/sda

/dev/sda:

 Timing buffered disk reads:  198 MB in  3.00 seconds =  65.91 MB/sec
```

Both systems are using a Silicon Image SiI 3114 [SATALink/SATARaid] Serial ATA controller (rev 02), which is pretty cheap...

With that said, I guarantee that your SCSI drive outperforms my SATA disks in everyday use, as the SCSI drive has a much better seek time (less than half the seek time of SATA) and is much better at random reads.

----------

## Suicidal

 *neysx wrote:*   

> There is no such thing as cheap hardware raid.
> 
> Yes, hardware raid cards need kernel drivers, just like any ide/sata/scsi interface. The raided disks are seen as a normal scsi disk by the system.
> 
> 3ware raid cards are good and supported, the monitoring software is even in Portage.
> ...

 

I have to agree there; Linux SW RAID is pretty sweet. I had about 13 PE 800s that only had 2 SCSI drives and no RAID card. I mirrored /boot and striped swap and /, and it drastically outperformed Windows with ISA.

----------

## Cyker

Agreed. The main issues with software RAID used to be the interface (you needed a crap-ton of IRQs, or you had to double up 2 IDE drives per channel, which is slow and causes massive I/O wait time) and processor usage.

These bottlenecks no longer exist - modern CPUs have long been able to handle RAID parity calculations, and on dual-core or better systems you won't even notice a performance drop vs. hardware RAID.

The biggest bottleneck, the interface, is also gone now that we have SATA. SATA is point-to-point, so there's no more of that silliness with masters/slaves blocking each other's I/O, and new APICs allow lots of IRQs and thus lots of channels!  :Very Happy: 

SATA also supports multi-level IRQs even on non-APIC hardware, not to mention SATA2 port multipliers, so you can hook in almost as many drives as you want, like SCSI.

The fact that SATA supports hotswap and long cables like SCSI means most of the benefits of SCSI over IDE are significantly reduced.

The biggest remaining benefit is reliability, IMHO. I'm not sure why SCSI drives tend to be more reliable; I suspect they just have better QA on the electronics and mechanics, plus they use smaller platters than IDE disks. Given that they run at 10-15k RPM and thus run a lot hotter than IDE drives, it never ceases to confound me!  :Smile: 

If the SCSI industry hopes to survive, they really need to cultivate that reliability. I swear IDE reliability is getting worse!  :Sad:   Especially on drives with >2 platters; the early-death syndrome seems worryingly high!

I do reckon (IMHO!) that using SCSI for RAID is stupid, though - RAID is supposed to stand for Redundant Array of INEXPENSIVE Disks, and given you can build a 2TB SATA software RAID5 array for the price of ONE 160GB SCSI disk (ONE!!  :Shocked: ), it just seems wasteful.

<Man, I'm so off-topic now!  :Shocked:  Um...>

Err, anyway, I'm not sure, but if you have a generic IDE driver installed, any *real* hardware RAID card should work (and by REAL, I mean one you can boot off without needing any drivers). They all have BIOS int13h support so that the BIOS can boot from them; something that most accelerated RAID and all software RAID setups can't do.

----------

## astor84

 *wyv3rn wrote:*   

> Using 3ware RAID on multiple machines with no problems.  3ware hardware has been well supported by Linux for a number of years.

 

Same here.

----------

## vapir

I felt I should answer before you run out and get a 3ware. Sure, they work OK, but there's a BIG problem with those: the massive IOWAIT on disk access, which is independent of the controller series (just Google 3ware and iowait). We have two 3ware setups at the company where I work; one with an 8506 controller, the other with a 9550, both equipped with WD Raptor disks. For performance reasons we migrated the 3wares from RAID5 to RAID10, which helped a little; better than nothing  :Confused:  The RAID5 acceleration they supply is inferior to every modern CPU.

The best one I've seen in this department is a SCSI controller from Adaptec (the 3805 Marauder80LP), which we use together with the Fujitsu MAX series of disks. Very low iowait compared to 3ware (but it takes ages to boot up  :Very Happy:  which means the whole system now has a startup time of about 5 minutes).

Unfortunately, SCSI disks are pretty expensive compared to their SATA counterparts; the controllers all end up with almost the same price tag, though.

We also use a Promise VTrak storage enclosure with 12 huge Seagate SATAII disks in a RAID6 setup, connected to an LSI MegaRAID controller. The performance is pretty bad (no better than a single disk), but for backups and shared storage for ~16 people it works alright. I can't tell where the bad performance comes from, though - the enclosure, the controller, or simply the RAID level  :Wink: 

Anyway, if you need high HD performance from your RAID, don't choose 3ware.

@drescherjm: AFAIK Silicon Image doesn't make any hardware RAID controllers.

----------

## bunder

here's my hardware raid setup:

```
# hdparm -Tt /dev/sda

/dev/sda:

 Timing cached reads:   302 MB in  2.01 seconds = 150.53 MB/sec

 Timing buffered disk reads:   52 MB in  3.01 seconds =  17.28 MB/sec

shell chris # hdparm -Tt /dev/sdb

/dev/sdb:

 Timing cached reads:   286 MB in  2.00 seconds = 142.71 MB/sec

 Timing buffered disk reads:   34 MB in  3.13 seconds =  10.87 MB/sec

```

```
(dmesg)

megaraid: found 0x8086:0x1960:bus 0:slot 2:func 1

scsi0:Found MegaRAID controller at 0xf8806000, IRQ:10

megaraid: [D :B ] detected 2 logical drives.

megaraid: channel[0] is raid.

megaraid: channel[1] is raid.

scsi0 : LSI Logic MegaRAID D  254 commands 16 targs 5 chans 7 luns

```

Right now it's 2 sets of disks in a 5-disk in-system enclosure with removable trays. I really should merge the last 3 disks back into the first set, as they're going unused... and I could go back to a more sane RAID level (it's just striped now).

I must say these drives are rather loud and clicky... is that typical of "older" SCSI drives?

About the IO wait and stuff... would this cause a high ext3_inode_cache and dentry_cache? Considering /home and /usr/portage/distfiles are on NFS, there shouldn't be a lot of hard drive access, except for when people log in or start applications.   :Confused: 

----------

## vapir

Those 17MB/s and 11MB/s speeds look really terrible for a striped RAID  :Neutral:  Usually you should be able to get between 40 and 70 MB/s on single disks.

Concerning the IOWait - I doubt it will affect any caches. It's just that the CPU/core responsible for the access will sit there doing nothing while waiting for an answer from the controller (which has nothing to do with DMA being enabled or disabled). So if you're doing CPU-intensive work, it's about the worst thing that could happen.

----------

## bunder

 *vapir wrote:*   

> Those 17MB/s and 11MB/s speeds look really terrible for a striped raid  usually you should be able to get between 40 and 70 on single disks.

 

Yeah, I don't know what's going on there... I think they're older SCSI drives.   :Shocked: 

----------

