# Recommended hardware SATA RAID controller?

## beatryder

I am building a server, and I just now discovered that I cannot use the ICH7 Raid under linux. 

Yes I know I can do software raid. but that is not an option.

I will say it again, software raid is not an option.

What I am looking for is a pure hardware raid controller for 4x SATA hard disks, that will allow me to work with the drives under linux as though they were one.

I have been looking at a couple:

The Silicon Image Sil3114 and the Promise Fasttrack SX4300

I can use PCI or PCI express.

----------

## NeddySeagoon

beatryder,

The SIL 3114 is fakeraid. It provides software RAID in its BIOS, in a similar manner to your onboard ICH7.

Both of them will work with dmraid under linux but it's still software raid.

The only reason for anyone to use fakeraid is to share the raid with Windows.

The Promise Fasttrack SX4300 is a halfway house between software raid and hardware raid.

Note the 'hardware assisted' in the fine print. The linux driver may not use this assistance, rendering it a very expensive software raid card.

What RAID level do you want and why do you say software raid is not an option?

=====  edit 

Putting any multiple disk system on a PCI bus will cripple its performance. PCI has a theoretical peak bandwidth of 133 MB/s.

Two SATA drives can max that out easily, and if your network card(s) are on the same bus, the data is moved over the bus twice, which reduces performance still further. If software RAID is not an option, PCI is not an option either.
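A quick back-of-envelope check of that claim. The per-drive rate here is an assumption (70 MB/s is a rough guess for a contemporary SATA drive, not a measured figure):

```shell
# Shared 32-bit/33 MHz PCI bus vs. two SATA drives (rough numbers).
PCI_MB_S=133          # theoretical peak for 32-bit/33 MHz PCI
DRIVE_MB_S=70         # assumed sustained rate for one SATA drive
TOTAL=$(( 2 * DRIVE_MB_S ))
echo "two drives: ${TOTAL} MB/s, bus peak: ${PCI_MB_S} MB/s"
```

Two drives alone already exceed the bus peak, before any network traffic on the same bus is counted.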

----------

## beatryder

My understanding is that hardware raid is better than software. I am not all that familiar with RAID, only ever having used it in 2K3 server. I want a RAID 1/0, and I want it to be easy to recover from a problem if one drive dies.

Can I do this with software RAID?

----------

## NeddySeagoon

beatryder,

The difference between hardware raid and software raid is in where the calculations for the redundancy are done.

A hardware raid card provides its own RAM, CPU and so on. It is really a computer on a card dedicated to RAID.

Hardware RAID therefore costs almost as much as a computer.

Hardware RAID cards are not normally made for 33 MHz 32-bit PCI buses because the bus bandwidth is too low; it becomes the limiting performance factor.

As you require RAID 0/1, there are no redundancy calculations to perform. You will keep two copies of all your raid data, one copy in each half of the mirror. This means that data must be written twice, which may reduce the write data rate compared to a single drive; however, the read data rate will be almost doubled, since the kernel will read from both mirrors.

I say the write rate may be reduced because it depends on the size of the files you write. For files that fit into the on-drive cache, which can be up to 16 MB, you will not notice the difference.

Kernel Software raid can do what you want. 

With RAID 0/1 on 4 identical drives, you will have 2 drives' worth of useful space.

Consider RAID 5 for a moment. The raid 'parity' calculations are more involved (they use more CPU time) but the useful space is now 3 drives' worth. Both systems protect against the loss of a single drive from the raid set.

RAID 0/1 also protects against the loss of some combinations of two drives. If that's a real requirement, look at RAID 6, which protects against the loss of all combinations of 2 drives.
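The usable-space arithmetic above can be sketched for 4 identical drives (the 250 GB drive size is a made-up example):

```shell
# Usable capacity of a 4-drive set at each RAID level.
DISKS=4
SIZE_GB=250                                   # hypothetical drive size
echo "RAID 10: $(( DISKS / 2 * SIZE_GB )) GB usable"
echo "RAID 5:  $(( (DISKS - 1) * SIZE_GB )) GB usable"
echo "RAID 6:  $(( (DISKS - 2) * SIZE_GB )) GB usable"
```

RAID 6 gives you the same capacity as RAID 10 here, but tolerates any two drive failures at the cost of heavier parity calculations.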

The kernel can do all these things, you need to look at what your server will be doing and where the data flow bottlenecks are to be able to make an informed decision about which is right for you.

----------

## beatryder

Well that certainly helps.

So what you are saying then is that software raid is going to be just as safe as hardware raid?

I can't divulge too much of what I am doing with this server due to some NDAs I have signed. But this is what I can tell you, maybe it will help:

I am going to be running a completely encrypted system. /boot will live on a USB flash drive and will contain the keys required to decrypt the data. The idea is that if power is lost to the server (i.e. someone physically steals it), the only way to recover the data will be to boot the server with the flash drive.

The information stored on the server is very valuable to us, so I want to have it as safe as possible, yet easy to maintain at the same time.
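A minimal sketch of the keys-on-USB scheme using LUKS via cryptsetup. The device names, mount point and key path are assumptions for illustration, not your actual layout:

```shell
# Assumed layout: RAID array at /dev/md0, USB stick mounted at /mnt/usbkey.
# Generate a random keyfile that lives only on the USB stick:
dd if=/dev/urandom of=/mnt/usbkey/luks.key bs=512 count=8
cryptsetup luksFormat /dev/md0 /mnt/usbkey/luks.key

# At boot, an init script (or initramfs) unlocks the array with the keyfile:
cryptsetup luksOpen --key-file /mnt/usbkey/luks.key /dev/md0 cryptdata
mkfs.ext3 /dev/mapper/cryptdata    # first boot only: create the filesystem
```

If the server is taken without the stick, the disks hold only LUKS ciphertext.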

----------

## skutnar

If you want an excellent RAID 5 solution, the Areca line of PCI-E SATA controllers are great.  I have the ARC-1210 and it's working fantastic in Gentoo.  The stock kernel doesn't have the driver yet.  I'm using the driver from the Areca web site.  The stock kernel will be getting the driver in 2.6.19, if I recall correctly.

The throughput of the RAID 5 I have is faster than a WD Raptor in both reads and writes.

----------

## NeddySeagoon

beatryder,

Kernel software raid is safer than any hardware raid solution.

When the hardware dies, you can move a kernel raid set to different hardware and the kernel will sort out the mess.

It knows which order to assemble the raid in, even if you connect the drives out of sequence.

With hardware raid, you need an identical controller to recover your raid set. They all have different ways of laying the data out on the disk.

Raid is no substitute for backups. When you accidentally delete a file from a raid set, it's just as gone as from a single drive. 

I didn't want to know what you were putting on the server. Think about the I/O traffic.

Encryption is CPU intensive too, so if you will have a lot of traffic, raid0/1 will be ok but raid5 will increase the CPU load further.

============= edit ============

Invest in a UPS so you will get clean shutdowns, even if power fails.

You really don't want to mess up encrypted systems.

----------

## someone19

There was a report on Tomshardware where they tested the theoretical limit of performance of hard drives and arrays.  They connected four SATA controller cards with four drives each (if memory serves - the pictures basically looked like a rat's nest of SATA cables).  HOWEVER - they had this on a dual xeon server board running 2k3 and had the drives all in a software dataset, as hardware raid was not going to be able to support the data rates they were looking for over a single bus.

This is your consideration for hardware vs software raid - your CPU and what else you plan on having the server do.  For a good quality processor (or two, four...    :Laughing: )  with tons of RAM, simply performing a fileshare level of service, software raid is the way to go.  Start throwing in compiling, databases, service hosting (www, ftp, etc.) or whatever else you plan, and for a four disk array you may start wanting hardware raid - but you'll have to shell out for a card that you can put additional RAM etc. on.  You could probably spend less money on another computer to perform the data manipulation, connected with a gigabit LAN and crossover cable (for security), and run the software raid on the second machine, distributing the load over the two - one to process the data, one to encrypt and store it.

----------

## bexamous

"HOWEVER - they had this on a dual xeon server board running 2k3 and had the drives all in a software dataset, as hardware raid was not going to be able to support the data rates they were looking for over a single bus."

What the?  What kind of speed are we talking about?  Areca cards can do almost 800MB/sec in raid5 on pci-e.  Guess if you'd want more than that you could have two of them and do software raid0 over them but really...

----------

## beatryder

NeddySeagoon, you have been a great help. I am going to configure a software RAID 1/0; as this is a 3.6 GHz Pentium D with 2 GB of RAM, I am not too worried about the overhead. And if speeds are a tad slow it's not a big deal, safety and security are my priorities.

 *NeddySeagoon wrote:*   

> beatryder,
> 
> ============= edit ============
> 
> Invest in a UPS so you will get clean shutdowns, even if power fails.
> ...

 

I have one coming that will give me 17 minutes, enough to email a warning to my cell phone and shut the server down properly.

----------

## NeddySeagoon

beatryder,

Raid 0/1 won't be slow. If it is, it's the encryption overhead, not the raid overhead.

That's good news about the UPS. Be sure to test it every month or so, you don't want to find out the batteries have failed when you need it most.

----------

## beatryder

 *NeddySeagoon wrote:*   

> beatryder,
> 
> Raid 0/1 won't be slow. If it is, it's the encryption overhead, not the raid overhead.
> 
> That's good news about the UPS. Be sure to test it every month or so, you don't want to find out the batteries have failed when you need it most.

 

Ok, so just to be clear on this one. When using a software RAID solution, will it be easier to recover from problems than with a hardware raid?

----------

## NeddySeagoon

beatryder,

For most problems there will be little or no difference.

For non-disk hardware problems, you put your raid set in a new system and it just works.

That's easier. Kernel raid is the same everywhere.

For hardware raid card problems, you need an *identical* raid card to continue using the raid set without rebuilding from backups. You cannot count on a raid card failing only while you can still buy a replacement.

Different raid cards lay the data out differently on the disks.

With encrypted systems, you should regard data recovery as not possible, so maintenance is restricted to replacing failed hardware or restoring from backups. You will still need backups, as your system has a number of single-point failure modes that can destroy all your data.

Why not practice the recovery process before you need to do it for real?

When you build your raid set, leave one drive out and tell mdadm that it's a failed drive.

It can be physically connected - just tell mdadm that it's failed.

The raid will build as normal, but in degraded mode. When it's all up and running, add the last drive.

/proc/mdstat will tell you what is happening and show progress as the raid rebuilds.

It will take a little longer, but when you really need to do it, you won't have time to practice.
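With a reasonably recent mdadm, the drill looks something like this. The device names are examples, and the `missing` keyword tells mdadm to build the array with that slot empty:

```shell
# Create a 4-device RAID 10 with one slot deliberately absent:
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 missing

# The array runs degraded; /proc/mdstat shows its state:
cat /proc/mdstat

# Later, add the held-back drive and watch the rebuild in /proc/mdstat:
mdadm /dev/md0 --add /dev/sdd1
```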

----------

## beatryder

 *NeddySeagoon wrote:*   

> beatryder,
> 
> For most problems there will be little or no difference.
> 
> For non disk hardware problems, you put your raid set in a new system and it just works.
> ...

 

Ok, I think I understand. You are suggesting that when I build the raid, I leave out 1 of the 4 disks and tell mdadm that that disk has failed; this will cause the raid to operate in degraded mode. Then I should add the last drive, and mdadm will rebuild the raid (on its own, or when I tell it)?

As for testing the UPS, I do plan on testing that by literally pulling the plug.

----------

## NeddySeagoon

beatryder,

When you write your /etc/mdadm.conf, you include one disk as failed.

It needs to be listed (as failed) so the raid is built to include it. I used raidtools when I made my raid, which is slightly different.

There will be an mdadm command similar to raidtools' raidhotadd that adds a replacement into a degraded raid set.

A word of warning - kernel raid expects to operate on partitions, not whole drives.

It will work on whole drives, but then you cannot mark the partition as type fd in the partition table, so the kernel will not auto-assemble the raid in the boot sequence. See this thread.
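With the older sfdisk syntax (current at the time of writing), the type byte can be set non-interactively; /dev/sda is a placeholder for one of your raid member disks:

```shell
# Mark partition 1 on /dev/sda as type fd (Linux raid autodetect):
sfdisk --id /dev/sda 1 fd

# Read the type byte back to verify:
sfdisk --id /dev/sda 1
```

fdisk's interactive `t` command does the same thing, one disk at a time.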

----------

