# ICH10R as Raid1 controller

## mno

Hi all,

Does anyone have any good info as to whether the ICH10R is a good/reliable raid controller for Raid 1? I've used 3ware's products before and am very happy with them, especially with their CLI, but getting one is an extra $300-500 expense depending on how many drives you have. So I'm considering going the ICH10R route instead, as it comes as part of the chipset. Also, is ICH10R support good in the 2.6.2x kernels?

Any comments/suggestions?

Thank you in advance,

Max

----------

## NeddySeagoon

mno,

The ICH10R is a fakeraid controller. It's not hardware raid, which is why it's on so many motherboards.

The only reason to use fakeraid is to allow Windows and Linux to both access the raid set.

Kernel support for fakeraid is indirect. You need the third-party dmraid module.

Kernel software raid is faster and more portable, but not compatible with Windows.

----------

## mno

Thanks! That was what I was thinking, but wanted to check. 3ware it is.

----------

## NeddySeagoon

mno,

Do the sums before you invest in a plug in controller.

What raid level do you want?

What sort of bus will it plug into?

What CPU load will the system have besides raid?

A plug in raid card adds more single points of failure and may actually be slower than kernel raid.

----------

## mno

This is for a dual E5520 server on a SuperMicro 6016T-NTRF barebones system (X8DTU-F) with 3x2GB RAM per processor and probably four 500GB RE3 drives (two distinct RAID1 arrays of 2 drives each). x16 PCI-E 2.0/QPI at 5.8 GHz.

I am concerned not so much with drive performance, as this will mostly be a db/app server, as with reliability. I've never had any issues with 3ware cards, and I am very wary of kernel raid due to its history. What do you think?

----------

## NeddySeagoon

mno,

I have been using kernel raid0 and raid1 for the last few years with no problems.

The more bits you add, the worse your reliability.

There is no CPU overhead in raid1: the kernel only has to set up the DMA controller to send the same data to all the members of the raid set, so there is no advantage in offloading this trivial amount of processing to a raid card.

Raid5 (one redundant drive) needs more CPU time but it looks like you will have plenty of that.

For some bus types (plain PCI), plug-in cards are a bottleneck. You should be able to get the same performance out of a PCI-E card as from the on-board interfaces.

I would keep my money in my pocket and use kernel raid, or maybe invest the money in a real UPS (if you don't already have one) so you always get clean shutdowns in the face of power failure.
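For raid1 the whole setup is only a couple of mdadm commands. A minimal sketch, assuming two spare matching partitions (all device names here are placeholders, not your actual layout; run as root):

```shell
# Sketch only: mirror two partitions with kernel raid1.
# /dev/sda1 and /dev/sdb1 are placeholders -- substitute your own devices.

# Create the mirror:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# Filesystem and mount as usual -- md0 behaves like any block device:
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt/data

# Watch the initial resync:
cat /proc/mdstat

# Record the array so it is assembled automatically at boot:
mdadm --detail --scan >> /etc/mdadm.conf
```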

--- Edit ---

Given that kernel raid works with partitions, not whole drives, I might even be tempted to reorganise your four drives into several raid5 sets, as you would still get one drive redundancy and have more storage space.
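As a rough sketch of that layout (partition numbers are hypothetical; each md set takes one matching partition from every drive):

```shell
# Hypothetical reorganisation: raid5 across matching partitions on all
# four drives. You lose one drive's worth of space to parity instead of
# two drives' worth to mirroring.
mdadm --create /dev/md1 --level=5 --raid-devices=4 \
    /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2

# A second set on another row of partitions works the same way:
mdadm --create /dev/md2 --level=5 --raid-devices=4 \
    /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
```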

----------

## mno

Thanks for your suggestions, NeddySeagoon.

This server would be co-located at a great facility, so I'm not worried about power issues. I currently have several servers there, never had issues with that.

Hm, I will need to consider not going the 3ware route then. It is an extra $300-500 expense that I don't necessarily need for this kind of server anyway :Smile:

Thanks,

Max

----------

## mno

 *NeddySeagoon wrote:*   

> --- Edit ---
> 
> Given that kernel raid works with partitions, not whole drives, I might even be tempted to reorganise your four drives into several raid5 sets, as you would still get one drive redundancy and have more storage space.

 

Now that could be an interesting idea. I am just a bit worried that it's getting into something more complex than I want for a server, though?

----------

## mno

I'm a bit concerned about this bug: 230950

EDIT:

Actually, as I gather, this bug is more down to using dmraid rather than mdraid. From a response on the LKML, it seems that if I run the controller in AHCI mode, I can use mdraid and avoid these issues?
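One quick way to see which mode the BIOS has left the controller in (exact output varies by board and BIOS, so treat this as a rough check rather than anything definitive):

```shell
# In AHCI mode the controller is claimed by the kernel's ahci driver;
# in RAID mode it advertises itself as a raid controller and dmraid
# picks up the BIOS raid metadata instead.
lspci | grep -i -E 'sata|raid'

# If the ahci driver bound to the controller, it shows in the kernel log:
dmesg | grep -i ahci
```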

----------

## NeddySeagoon

mno,

Kernel raid is something else to learn but you only learn it once.

You need its management tool, mdadm, and one or two extra raid modules built into the kernel.

If you were to go the kernel raid5 route, I recommend you do a single-drive install, copy it to the raid while the raid is in degraded mode (one drive missing), then add the drive hosting the single-drive install to the raid to bring it up to full strength.

This way you get to practice recovering from a drive failure before you have your own data on the server.
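A hypothetical walk-through of that migration for a four-drive raid5 (all device names are placeholders; double-check every one before running anything, since --create is destructive):

```shell
# 1. Create the raid5 set with the install drive's slot deliberately
#    "missing" -- the array comes up degraded but usable:
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    missing /dev/sdb1 /dev/sdc1 /dev/sdd1

# 2. Copy the single-drive install onto the degraded array:
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt/raid
cp -ax / /mnt/raid

# 3. After booting from the array, add the old install drive so the
#    set rebuilds to full strength:
mdadm /dev/md0 --add /dev/sda1

# 4. Watch the rebuild -- this is exactly what recovering from a real
#    drive failure looks like:
cat /proc/mdstat
```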

----------

## mno

Thanks again, NeddySeagoon, for the hints. Installing onto one drive and then rebuilding the rest is actually a very good idea; sounds like a great plan. By any chance, are there any good docs you could point me to for getting up to speed on mdadm?

----------

## NeddySeagoon

mno,

I'm still a raidtools user, but raidtools was removed a long time ago and replaced with mdadm.

I'll have to refer you to Google and the mdadm man page.

Be on your guard for the difference between the two commands that make a new raid set and assemble an existing one.

Making a new raid set destroys all the data on it; assembling one makes it ready to be mounted.
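The two commands look deceptively similar, which is exactly the trap (device names are placeholders):

```shell
# DESTRUCTIVE: writes fresh raid metadata, wiping whatever was there.
# Only for building a brand new raid set:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# Safe: re-activates an array that already exists on those members:
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1

# Or let mdadm find everything listed in /etc/mdadm.conf:
mdadm --assemble --scan
```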

This Guide gives an overview of installing onto raid with Logical Volume Manager (LVM) on top of the raid. It's not a complete guide by any means, and it does not cover the degraded-raid step.

One thing it did remind me of is that /boot must be on raid1 or an unraided partition. That's because grub ignores raid altogether.
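A sketch of why raid1 still works for /boot (device names hypothetical): with the old 0.90 metadata format the raid superblock sits at the end of the partition, so each raid1 member looks like a plain filesystem to grub.

```shell
# /boot mirror with end-of-device metadata, readable by grub:
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
    --metadata=0.90 /dev/sda1 /dev/sdb1

# Install grub into the MBR of both drives so the machine can still
# boot if either one fails:
grub --batch <<EOF
root (hd0,0)
setup (hd0)
root (hd1,0)
setup (hd1)
EOF
```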

----------

## mno

 *NeddySeagoon wrote:*   

> One thing it did remind me of is that /boot must be on raid1 or an unraided partition. That's because grub ignores raid altogether.

 

That's one advantage of having a dedicated hardware raid card  :Smile: 

Thanks for the tips and help overall!

----------

