# Looking for a RAID 5 card

## zecora

Title says it. 

I am looking for a good card, around $100.00, that supports SATA HDDs. I need it to work under Gentoo, and I will have 4x250 GB HDDs on it.

So please recommend me some cards.

----------

## neurosis

This page might help you out:

http://linuxmafia.com/faq/Hardware/sata.html

I'm looking for something similar myself.

----------

## zecora

Also I am still wondering how much space I will have with 4x250gig hdds. I am new to this so go easy.

----------

## eelke

For $100 you are lucky if you can get a card with four SATA ports; RAID 5 you will have to do in software, which is a bit slow when writing. Note that the sustained transfer rate of a four-disk array is higher than the transfer rate of a standard PCI bus.
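To put rough numbers on that claim (the per-disk rate and bus ceiling below are assumed ballpark figures, not measurements): classic 32-bit/33 MHz PCI tops out around 133 MB/s theoretical, while a four-disk array can stream well past that.

```shell
# Back-of-envelope arithmetic; per_disk and pci_max are assumptions.
disks=4
per_disk=60     # MB/s sustained per disk (assumed)
pci_max=133     # MB/s theoretical ceiling of 32-bit/33 MHz PCI
array=$(( disks * per_disk ))
echo "array can stream ~${array} MB/s; plain PCI caps out near ${pci_max} MB/s"
```

So even a modest four-disk array saturates a shared PCI bus on sequential reads.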

 *Quote:*   

> Also I am still wondering how much space I will have with 4x250gig hdds. I am new to this so go easy.

 

With a RAID 5 array you always lose the capacity of one disk. So in your case that would be 3 x 250 GB = 750 GB.

----------

## bexamous2

Well, I'm in the same boat... not sure if I'll go for 4 250 GB drives or just get 3 and expand as I go. I have been looking at hardware RAID 5 and it's out of my price range. The Areca PCI Express RAID cards look very nice, but I don't have $600 to spend on a controller when I'll only be spending $300-400 on the hard drives. Most of the lower-end cards are all going to be software-based, in which case I'm not sure why I'd even buy a SATA controller... I can get an A64 mobo and use the onboard SATA ports; ASUS has a few mobos with 8 SATA2 ports onboard.

I wasn't sure about software RAID, but after reading about this stuff all night: unless you can buy a high-end controller it's all about the same, and you're better off using Linux software RAID. I don't know if someone wants to comment on this... There's also the fact that if you get a PCI card you'll be limited to PCI's bandwidth, which is nothing stellar.

ASUS A8N-SLI Deluxe $159

4 ports from the nF4 chip and 4 ports from the Sil 3114 chip... the only downside is that it has that crappy fan. I hate mobo fans; they always die.

----------

## ejmiddleton

Software RAID is good.  The problem with hardware RAID is that, for low-end cards, it is really just proprietary software RAID, and for high-end cards, the card becomes a single point of failure.  With software RAID, at worst, you can throw the drives into another machine and rebuild; with hardware RAID, if you can't get a compatible card you have nothing but your backups.  If you are concerned about speed I would go with RAID 1; it is significantly faster.

----------

## drescherjm

I second the above statement, especially the part about the hardware RAID card failing. And I'll add that for $100 US you will not be able to buy a real hardware SATA RAID 5 card new unless it is stolen... These cards go for 2 to 6 times that price.

----------

## flysideways

I found this thread as I was searching for the Areca cards out of curiosity, since it seems they have support in the newer kernels. The following is a cut-and-paste from an old post of mine on LQ. I am still using the server as described, except that it now has an old Diamond MM PCI Viper V330 for the video card and 768 MB of PC133 RAM. The main point is that if you are only storing and reading files, an old hardware-based software RAID 5 will work. I am still not sure that I need anything other than software RAID. I have yet to try moving the drives to a different computer; that would be an interesting exercise. A 64-bit system would be nice to avoid the 2-terabyte address limit. Then, of course, I'd have to buy some more drives.  :Laughing:

 *Quote:*   

> 
> 
> Raid 5 Headless Server Success
> 
> Using ideas from LQ posts like http://www.linuxquestions.org/questi...217#post622217 and the raid how to at http://www.tldp.org I gathered up mostly unused parts and got a few new parts to put together a raid 5 server. My first goal was to put to use some discarded hardware and I also had a need to store some captured video and isos of DVDs that I have made.
> ...

 

----------

## drescherjm

These look like decent cards from their specs, and they do support Linux, but they are by no means $100 US. A 4-port SATA card with RAID 5 goes for $379 at Newegg: http://www.newegg.com/Product/Product.asp?Item=N82E16816131001

----------

## zecora

For a good beginner card that is going to get me through the day, how much am I looking at spending? I mean, I can go higher, but you have to start somewhere.

~Zec

----------

## drescherjm

I would buy a regular SATA card and use Linux software RAID. In my department I have several servers running this config (with Promise SX8 cards, the 8-drive Promise SATA) and Linux software RAID 5 and 6. I have 1 TB and 2 TB arrays using 5 to 10 250 GB SATA drives, with a single machine having 16 SATA disks and two 8-drive SX8 cards: one boot drive, a 1 TB 5-disk RAID 5, and a 2 TB 10-disk RAID 6. And I would like to say this works very well. Use the mdadm tools to set up your software RAID, and make sure your kernel has RAID support compiled in.

```
fileserver ~ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6]
md0 : active raid5 sx8/4[4] sx8/1[0] sx8/3[3] sx8/5[2] sx8/2[1]
      976793600 blocks level 5, 256k chunk, algorithm 2 [5/5] [UUUUU]

md1 : active raid6 sx8/6[0] sx8/15[9] sx8/14[8] sx8/13[7] sx8/12[6] sx8/11[5] sx8/10[4] sx8/9[3] sx8/8[2] sx8/7[1]
      1953587200 blocks level 6, 512k chunk, algorithm 2 [10/10] [UUUUUUUUUU]
```

```
fileserver ~ # hdparm -tT /dev/md1

/dev/md1:
 Timing cached reads:   3836 MB in  2.00 seconds = 1917.33 MB/sec
 Timing buffered disk reads:  336 MB in  3.00 seconds = 111.94 MB/sec

fileserver ~ # hdparm -tT /dev/md0

/dev/md0:
 Timing cached reads:   3976 MB in  2.00 seconds = 1988.30 MB/sec
 Timing buffered disk reads:  284 MB in  3.01 seconds =  94.30 MB/sec
```
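For what it's worth, a minimal sketch of creating an array like this with mdadm (the device names below are placeholders, not the sx8 devices above; this has to run as root on real hardware):

```shell
# Hypothetical device names -- substitute your own disks/partitions.
mdadm --create /dev/md0 --level=5 --raid-devices=5 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

# Record the layout so the array assembles at boot:
mdadm --detail --scan >> /etc/mdadm.conf

# Watch the initial build/resync:
cat /proc/mdstat
```

The array is usable while the initial resync runs; it just goes slower until the parity build finishes.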

----------

## zecora

So if I put in 4x250 GB HDDs, that would mean I would have 750 GB?

----------

## tgh

Yes, RAID 5 is always N-1 for net space (assuming you don't use a hot-spare drive).  So 4 drives gives you three drives' worth of space after the parity blocks are allocated.

RAID1 is N/2 space.

There's also RAID 6, which uses two drives' worth of parity.

----------

## drescherjm

Yes. For RAID 5 the usable space is (n-1) * the size of the smallest disk. For RAID 6 it is (n-2) * the size of the smallest disk. For RAID 1 it is (n/2) * the size of the smallest disk. Usually you want all disks to be the same size, speed, and manufacturer, and from the same lot, but that is just for optimal performance.
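The arithmetic above, as a quick sketch (equal-size disks, no hot spare assumed):

```shell
# Usable capacity in GB for n equal disks of $size GB each.
n=5
size=250
raid5=$(( (n - 1) * size ))   # one disk's worth of parity
raid6=$(( (n - 2) * size ))   # two disks' worth of parity
raid1=$(( n / 2 * size ))     # mirrored pairs (integer halves for odd n)
echo "RAID 5: ${raid5} GB, RAID 6: ${raid6} GB, RAID 1: ${raid1} GB"
```

With 5x250 GB that gives 1000 GB usable under RAID 5, which matches the "5 drives for 1 TB" plan in this thread.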

----------

## zecora

I think I am gonna have 5 drives just to make it 1 TB. Then again, how much do you think I need to spend for a good/decent RAID card? I mean, what I am using this for is a file server / home media file server.

----------

## Kai Hvatum

 *zecora wrote:*   

> I think I am gonna have 5 drives just to make it 1 TB. Then again, how much do you think I need to spend for a good/decent RAID card? I mean, what I am using this for is a file server / home media file server.

 

Upwards of $200. For anything less than that you would get better reliability and compatibility using software RAID. I've been running my large IDE software array for a few years now without any problems. My brother has been running his LSI hardware RAID and has had nothing but problems. The thing sucks at detecting and recovering from drive failure, and its text-based BIOS interface also seems to actually be broken and nearly wiped everything out a few weeks ago.

Don't skimp on a hardware RAID card, or you'll just get the worst of all worlds. Bad RAID cards will offload most of the work to the CPU anyway, and if the card fails you'll be stuck with an incompatible-array nightmare.

http://www.newegg.com/Product/Product.asp?Item=N82E16816116022

----------

## zecora

So you are saying spend the money or do not even bother?

EDIT: Also I would like to get 5x250gig hdds just to make it have 1TB of space.

----------

## drescherjm

 *Quote:*   

> So you are saying spend the money or do not even bother?

 

I have two questions before I answer this. What are the specs (CPU & memory) of the machine that you are going to install this RAID in? And do you need > 100 MB/s STR (sustained transfer rate)?

----------

## Kai Hvatum

 *zecora wrote:*   

> So you are saying spend the money or do not even bother?
> 
> EDIT: Also I would like to get 5x250gig hdds just to make it have 1TB of space.

 

Yeah, if you can't afford a good RAID card from a reputable vendor such as 3ware, you're better off not bothering with hardware RAID.

I would still encourage you to setup a software RAID 5 array as part of an LVM2 volume group. 

First get your 5x250 drives and set them up in your computer. If you end up getting any IDE drives, attach only one master drive and no slave to each IDE cable. If the master drive fails in just the wrong way, you could end up corrupting data on the slave drive as well, meaning you would have no useful parity data: the master drive would be dead and a second drive corrupted. (A software RAID 5 array with two dead drives means you have lost all of your data and have no chance of recovery!)

Once you have that set up, you can move on to adding this array, probably present as md0, to an LVM2 volume group. Then you can partition that volume group and mount it. The advantage of layering LVM2 on top of everything is, of course, expandability: later on you can add another 250 GB RAID 1 mirror, or even a second RAID 5 array, to the volume group without risking the corruption of the data already saved on your first RAID 5 array. You simply partition out the new array (separate from your first one) and expand the LVM2 volume group. Easy as pie.
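A minimal sketch of that layering, assuming the array shows up as /dev/md0 and using made-up names for the volume group, logical volume, and mount point (all root-only commands on real hardware):

```shell
# "storage", "media", and the mount point are example names, not gospel.
pvcreate /dev/md0                       # mark the RAID 5 array as an LVM2 physical volume
vgcreate storage /dev/md0               # build a volume group on it
lvcreate -l 100%FREE -n media storage   # carve out one logical volume
mke2fs -j /dev/storage/media            # make a filesystem (ext3 here)
mount /dev/storage/media /mnt/media

# Later: grow the group with a second array, without touching existing data.
pvcreate /dev/md1
vgextend storage /dev/md1
```

After the vgextend you can lvextend the logical volume and grow the filesystem into the new space.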

BTW: If you ignore my advice about using LVM2 don't ever come in here asking questions about expanding a RAID 5 array! No one will help you.

----------

## zecora

I am using software RAID 1/0 right now and do not have a problem with it. So I might just pass on the hardware card and go with software RAID.

----------

## zecora

/bump

----------

## drescherjm

 *Quote:*   

> So I might just pass on the hardware card and go with software RAID.

 

I was looking for a good source on the pros and cons of software RAID 5 vs. hardware RAID but did not find one. To me the decision depends on your hardware and needs.

Software RAID 5 is easier to recover, easier to move to a new machine, and more integrated with Linux, and you do not have to worry about being stuck with one controller if you have a hardware failure.

BIOS RAID 5 (< $100 US) is simpler than software to set up, and you can boot directly off of it easily. But it is not any faster than software, is most likely less able to recover, and is very much less integrated with Linux.

Hardware RAID is probably faster, depending on the processing power of the CPU on the controller (vs. the power of your system to perform the same calculation), but it is also probably less likely than software to recover from a severe failure, and it will require you to buy at least a card from the same manufacturer if the hardware fails. Some hardware RAID cards come with a GUI and some type of remote administration/monitoring, but you can probably do most of this with Linux software RAID if you want. As with BIOS RAID, you can boot directly off hardware RAID 5 easily.

----------

