# A7N8X SiI 3112 RAID1 How ?

## 102039

Hi

I have been searching the forum on this matter, but everyone seems to have a different situation, or the solution is not explained in detail.

My problem is that I have an A7N8X Deluxe motherboard with a SiI 3112 RAID controller onboard. I now have two 160GB Samsung SATA discs which I want to run as a RAID1 array. I only want to install Gentoo 2005.0 on those discs, with about 6 partitions: 1 boot, 1 swap, 1 /, 1 /home, 2 /storage. No dual boot, no Windows.

I got the fact that the onboard controller is not fully hardware RAID. But how do I set up the RAID1 then? Which tools to use, which way to go? I have the option to create a RAID1 mirror in the SATA chipset's BIOS. Since I have data on the 2 storage drives which I can't copy to a third HDD atm, I need to use the option "Copy from source".

I hope I gave you all the facts you need to assist me. If anyone has done this before, please help.

----------

## NeddySeagoon

Wurstteppich,

There are several things to consider.

1) Do you need Windows to share the raid array?

If so, you must set up the array in the BIOS so Windows can see it, and use dmraid in Linux. This is not the way to go if only Linux will be installed.

2) For a Linux-only box, partition the drives identically and install on one drive as per the handbook, so you have a standard non-mirror install. When you make your own kernel, be sure to include md and the raid personalities you want to use. After it boots on its own, move it over to raid1. This saves the problems of mounting a raid volume on the liveCD if things go wrong. When you are happy, go to step 3.

3) Make the partitions on the unused drive, type fd (raid autodetect), and put the partitions into a degraded raid set along with the partitions running the install. Tell raidtab that the running drive is defective and the otherwise blank one is OK. This is important, or you will wipe your install.

Now make your filesystems on the /dev/md volume.

Lastly, copy the install from the single drive to your new (degraded) raid1 set.

4) Boot the degraded raid set and make sure it works. (The single drive install is no longer being used)

5) Raidhotadd the partitions from the original install and fix the raidtab to show the raid is fully operational. Fix the partition types too. This step wipes the original install by copying from the working part of the mirror.

You now have a working raid1 system for almost the price of a normal install. 

At one time you had a raid and a normal install and could choose to boot either. That makes sorting out the mess much easier when it doesn't work quite as you thought it would.

----------

## 102039

I first want to thank you for the answer; I didn't find such an explanation of what to do anywhere before. But to be honest, the steps you listed still confuse me a little bit. Maybe you could explain it a little further, because I have not got the whole idea yet. Here you will find an entry of mine showing how my partition layout looks at the moment:

https://forums.gentoo.org/viewtopic-t-188770-start-150.html

Now I want to create a RAID1 mirror out of it with the second HDD I have (which is the same model, so 2 160GB HDDs).

From this point, what preparations do I have to take care of, and which exact steps do I have to perform?

The first thing to start off with is to prepare the raid on a software level and copy the content (so all partitions) with the RAID1 "Copy from source" option in my Silicon Image controller BIOS, if I understand it right. Sorry for being so unprepared with the steps; I have only used SATA onboard RAIDs with Windows so far, and they just worked as if you were using a real Adaptec-like SCSI RAID1. And sorry if I repeat myself, I just want to make sure we have the same idea about my plan.

 *Quote:*   

> 1) Do you need Windows to share the raid array? 

 

It is a pure linux install.

 *Quote:*   

> When you make your own kernel, be sure to include md and raid personalities you want to use.

 

So I have to add the RAID1 mode in the Multiple device support option? You were talking of md and raid here, I just want to make sure I don't miss something.

----------

## NeddySeagoon

Wurstteppich,

I'll recap my understanding. You have a one drive install, in line with your referenced post and you would like to add another drive to it to make it raid1. You will use linux kernel raid and not form the raid system in your BIOS.

The new drive is still blank (or can be) and the system works from your single drive install. If this is correct, proceed as follows. If not, stop here and fix my understanding.

In your kernel, under Device Drivers -> Multi-device support (RAID and LVM), choose the following as built in, or you will not be able to boot with the root filesystem on the raid:

```
Multiple devices driver support (RAID and LVM)

RAID support

RAID-1 (mirroring) mode 
```

Rebuild and reinstall your kernel. Reboot to run it and check the timestamp in 

```
uname -a

```

Use fdisk to make partitions on the new drive the sizes you want for your raid partitions. They need not be the same as you have for your single drive install but must be able to contain the data from that install. The partition type must be fd for raid autodetect. Do not make filesystems yet.
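The partitioning step can be scripted with sfdisk's dump format. This is a sketch, not from the post: the device names /dev/sda (existing install) and /dev/sdb (blank target) are assumptions, and writing a table with sfdisk is destructive on the target drive, so the live commands are left as comments. The runnable part only demonstrates the type change on a captured dump.

```shell
# On the real system (check the device names twice!):
#   sfdisk -d /dev/sda > /tmp/layout         # dump the source layout
#   sed 's/Id=83/Id=fd/' /tmp/layout | sfdisk /dev/sdb
#
# Demonstrated here on a captured sfdisk dump instead of a real drive:
cat > /tmp/layout <<'EOF'
/dev/sda1 : start=       63, size=   192717, Id=83, bootable
/dev/sda2 : start=   192780, size=   578340, Id=82
/dev/sda3 : start=   771120, size=155525265, Id=83
EOF
# Switch Linux partitions (83) to raid autodetect (fd); swap (82) untouched.
sed -e 's/Id=83/Id=fd/' -e 's|/dev/sda|/dev/sdb|' /tmp/layout
```

Remember NeddySeagoon's point that the new drive's partitions need not be identical, only big enough; if you change sizes, edit the dump by hand before feeding it to sfdisk.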

Create an /etc/raidtab file with entries like this (I'll show just one partition)

```
# /boot

raiddev                 /dev/md0

# Must Use RAID1 (mirror) for booting off to keep Grub Happy

raid-level              1    # it's not obvious but this *must* be

                             # right after raiddev

persistent-superblock   1    # set this to 1 if you want autostart,

                             # BUT SETTING TO 1 WILL DESTROY PREVIOUS

                             # CONTENTS if this is a RAID0 array created

                             # by older raidtools (0.40-0.51) or mdtools!

chunk-size              16   # That's 16kB

nr-raid-disks           2

nr-spare-disks          0

device                  /dev/sda1

# raid-disk               0

failed-disk           0    # create the array in degraded mode

device                  /dev/sdb1

raid-disk               1
```

 A few points to ponder. A chunk-size of 64 may be better

Get the failed-disk entry right for your system. If /dev/sda is right for your live single drive install now, that's the one that's "failed". This stops mkraid from wiping out your system when you make the raid later. Only the new partition will be included, hence the "degraded mode".

You need to 

```
emerge raidtools
```

if you haven't already and read the raidtab and mkraid manpages.

Now you can 

```
mkraid /dev/mdX
```

to make your degraded raid mdX partition. Do only one first, to show it works. Now mkfs on /dev/mdX; you choose the filesystem. You do not use the /dev/sdXY references for the underlying partitions again.
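A quick way to confirm the array really came up degraded is /proc/mdstat: raid1 reports missing members with an underscore. The sample below is canned output (device names assumed) so the check can be seen without real hardware; on the box itself `cat /proc/mdstat` is all you need.

```shell
# On the real system:  cat /proc/mdstat
# A degraded two-way raid1 shows [2/1] and [_U] (first member missing).
# Canned sample output for illustration:
cat > /tmp/mdstat.sample <<'EOF'
Personalities : [raid1]
md0 : active raid1 sdb1[1]
      96256 blocks [2/1] [_U]
unused devices: <none>
EOF
grep -q '\[_U\]' /tmp/mdstat.sample && echo "md0 is degraded, as intended"
```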

Make a mount point in /mnt like /mnt/raid and mount your new raid partition there. 

```
mkdir /mnt/raid

mount /dev/mdX /mnt/raid
```

and copy the contents of its non raided counterpart. 

```
cp -a /source/mount/point /mnt/raid
```

You cannot do the root filesystem like this; you would recursively copy everything. 

I'll need to check how to copy a live root without getting the entire filesystem tree and having problems with /proc and /dev.
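On the live-root question, one approach (my sketch, not a method from this thread) is to copy every top-level directory except the virtual/mount trees, then recreate those as empty mountpoints on the target. SRC and DST below are scratch stand-ins for / and /mnt/raid so the idea can be demonstrated safely; on a real box you would run it from single-user mode to keep the copy quiet.

```shell
# Sketch only: SRC and DST are scratch stand-ins for / and /mnt/raid.
SRC=/tmp/fakeroot
DST=/tmp/raidroot
mkdir -p "$SRC/bin" "$SRC/etc" "$SRC/proc" "$SRC/dev" "$SRC/mnt" "$DST"
echo hello > "$SRC/etc/motd"

for d in "$SRC"/*; do
    name=$(basename "$d")
    case "$name" in
        proc|dev|mnt) mkdir -p "$DST/$name" ;;   # recreate as empty mountpoints
        *)            cp -a "$d" "$DST/" ;;      # real data comes across
    esac
done

cat "$DST/etc/motd"     # prints: hello
ls "$DST/proc"          # empty - nothing copied from the virtual tree
```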

Having copied everything over except your root partition, you can modify your /etc/fstab to point to your /dev/md devices, like this:

```
# The Old Way

# /dev/hda1             /boot           ext2            noauto,noatime          1 1

# /dev/hda6             /usr            ext3            auto                    0 0

# The RAID Way

/dev/md0               /boot           ext2            noauto,noatime          1 1

/dev/md2               /usr            ext3            auto                    0 0
```
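The fstab switch can also be done with sed instead of hand-editing. A sketch, run here on a throwaway copy with the same example device pairs as above; adjust the patterns to your real layout before touching /etc/fstab.

```shell
# Work on a copy first; the device pairs are the examples from the post.
cat > /tmp/fstab <<'EOF'
/dev/hda1             /boot           ext2            noauto,noatime          1 1
/dev/hda6             /usr            ext3            auto                    0 0
EOF
sed -e 's|^/dev/hda1|/dev/md0 |' -e 's|^/dev/hda6|/dev/md2 |' /tmp/fstab
```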

A reboot, with your grub.conf unchanged, will boot from your non-raid /boot, using your non-raid root with all the supporting partitions from your degraded raid. 

Now to fix your raided /boot.

Mount your mdX boot partition on /boot and install grub on the raid drive. Get the drive and /boot partition right in grubspeak.

```
grub

root (hd1,0)

setup (hd1)

exit
```

This is the only time you need to think about the underlying drive.

It will probably be (hd1), since you are installing on the working part of your degraded raid.

Edit grub.conf and add a new block along the lines of 

```
title=Kernel 2.6.11-gentoo-r4 (RAID Boot)

root (hd1,0)

kernel (hd1,0)/2.6.11-gentoo-r4 root=/dev/hdaX
```

Fix the names to suit.

Reboot and enter the BIOS. Set the boot order to boot the Raided drive first. Note that you now have two different grub installs with two different grub.conf files. Only the one on the degraded raid has this new (RAID Boot) entry.

Boot, choosing the (RAID Boot) entry. You have now booted from your degraded raid1 /boot, using all your raided filesystems except root. I'll post back on how to copy that over.

At the moment, you have almost two complete separate installs: a degraded raid1 install, and a single drive install.

----------

## 102039

 *Quote:*   

> I'll recap my understanding. You have a one drive install, in line with your referenced post and you would like to add another drive to it to make it raid1. You will use linux kernel raid and not form the raid system in your BIOS.
> 
> The new drive is still blank (or can be) and the system works from your single drive install. If this is correct, proceed as follows. If not stop here and fix my understanding. 

 

That is correct so far, except for the fact that I would rather form the RAID1 in my BIOS. What made me ask about it was that I read many posts saying that forming a RAID1 with my onboard RAID controller's BIOS isn't enough, because it is no real RAID then. And this is also what confuses me. On Windows systems you build the RAID, and when you install and the setup wizard asks you for the partition to use, you just see one physical disk, the RAID1 array. Under Linux, when I first tried to create the RAID1 with the controller's BIOS, I had two physical HDDs, /dev/sda and /dev/sdb, which is correct, because there are two discs, but they form an array, so Linux should, like Windows, recognize the array as /dev/sda and not the two separate ones. My question is now: do I need to support the hardware RAID1 by software as well, or what's the magic behind this?

If it is not possible to make it work like it does on Windows, I will go the way you described in detail here, but for now it sounds like a complete software solution to me.

Maybe you have some ideas on that and can help me with it. By the way, I really appreciate your help and detailed descriptions so far. Thanks for that!

----------

## pilo

It seems that making a BIOS RAID is more trouble than a software one: http://www.ubuntuforums.org/showthread.php?t=2557

If NeddySeagoon is drawing the right conclusions, i.e. that you already have a drive you want to mirror, it would probably be easier to use mdadm.

If this is the case, follow these steps:

1. Make sure you have MD- and RAID1-support compiled in, not as modules. Also install mdadm, our tool of choice.

2. Make a copy of the partition table of your primary disk to your secondary disk, setting the partition type to fd. (I'm not sure, but I think that partition types should be set to fd on the primary disk as well, even though common sense says that would break data?) I would recommend that you leave swap's partition type untouched.

3. 

```
# mdadm --create /dev/mdX --level=1 --raid-devices=2 /dev/hdaX missing
```

This will create a RAID array with one disk missing, which will be added later. This is to ensure that the sync will be done from the first disk in the array. You cannot make a RAID out of a whole disk unless you only have one partition, so you need to repeat this step for each partition on your primary disk. Do _not_ make an array out of swap partitions, as this increases the possibility of crashing your system if/when your array fails.
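The "repeat for each partition" step can be wrapped in a small loop. This sketch only prints the mdadm commands for review rather than running anything; the partition names are assumptions matching an example three-partition layout (sda2, the swap, is deliberately left out, per the advice above).

```shell
# Prints the commands instead of running them - review, then paste.
i=0
for part in sda1 sda3 sda4; do   # example layout; sda2 (swap) skipped
    echo "mdadm --create /dev/md$i --level=1 --raid-devices=2 /dev/$part missing"
    i=$((i + 1))
done
```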

4. To sync the arrays:

```
# mdadm /dev/mdX --add /dev/hdbX
```

The sync process can be monitored by:

```
# watch -n1 cat /proc/mdstat
```

5. Edit /etc/mdadm.conf and add the output of:

```
mdadm --detail --scan
```

This command will return one line per array.
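Step 5 in shell form, as a sketch: on the real box the ARRAY line comes straight from `mdadm --detail --scan`, but here a canned example line (with a made-up UUID) stands in so the file-building can be shown. The DEVICE line tells mdadm which partitions to scan.

```shell
# Real box:  mdadm --detail --scan >> /etc/mdadm.conf
# Canned scan line (made-up UUID) standing in for the real output:
scan='ARRAY /dev/md0 level=raid1 num-devices=2 UUID=6b8b4567:327b23c6:643c9869:66334873'
conf=/tmp/mdadm.conf          # stand-in for /etc/mdadm.conf
echo 'DEVICE /dev/sda* /dev/sdb*' > "$conf"
echo "$scan" >> "$conf"
cat "$conf"
```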

6. 

```
rc-update add mdadm boot
```

You might have to add an email address to /etc/mdadm.conf to make it happy; you can try by starting /etc/init.d/mdadm after all arrays are completed.

To finish, I might add that I did this procedure and it worked nicely, except that I did not change the partition types of my primary disk to fd, and I haven't tried to start mdadm in the boot runlevel, which forces me to assemble the array by hand every boot.

Please ask more people about advice, or at least verify my information, so that I have not given information that is inadequate for your system.

----------

## NeddySeagoon

pilo,

You have done pretty much what I described, but with mdadm, which I believe is now preferred over the raidtools I used and described. You should not set the old single drive install's partitions to type fd until you want them autodetected as raid.

Wurstteppich,

If you have your heart set on BIOS raid (and it's still software raid), go here http://tienstra4.flatnet.tudelft.nl/~gerte/gen2dmraid/

However, you will need to install again, since this form of raid is not compatible with kernel raid. You will still have your /dev/sda, /dev/sdb and the raid volume in Linux. I've not done this setup; my reading showed that the dmraid driver is not as mature as kernel raid.

----------

## Muddy

What parts of this change with raid0?

Seems I've done some of these steps already, although I admit I'd be more at ease formatting both drives with all zeros and starting fresh.

----------

## NeddySeagoon

Muddy,

You cannot migrate an existing one drive install onto a raid0 that will include that drive, because you cannot have a raid0 in a degraded state and still have it working.

Grub will not boot from raid0, nor will lilo without a patch, so /boot must be either a standard drive or raid1. Grub is quite at home then. I prefer a raid1 /boot because it's simpler to maintain.

You need to build a kernel that can handle your chosen raid, set up the raidtab, mkraid your raid volumes, mount them to a temporary mount point, and copy things over. Install grub on each half of the mirror /boot, and fix /etc/fstab to pick up your raid partitions in place of the old /dev/hdaX.

You can find my raid specific files at http://62.3.120.141/linux_stuff/raid-bits/

----------

## Muddy

NeddySeagoon,

I have two brand new 80G SATA drives in raid0 in the BIOS, which I set up with equal partitions in fdisk: both sda1 and sdb1 are /boot in ext2 (non-raid), sda2/sdb2 are both 256M swap each, while sda3 and sdb3 are set up in raid0 as /dev/md0 with type FD in fdisk, so they do work as /dev/md0, but only after booting Linux.

```

# fdisk -l /dev/sda

Disk /dev/sda: 80.0 GB, 80026361856 bytes

255 heads, 63 sectors/track, 9729 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System

/dev/sda1   *           1          12       96358+  83  Linux

/dev/sda2              13          48      289170   82  Linux swap / Solaris

/dev/sda3              49        9729    77762632+  fd  Linux raid autodetect

# fdisk -l /dev/sdb

Disk /dev/sdb: 80.0 GB, 80026361856 bytes

255 heads, 63 sectors/track, 9729 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System

/dev/sdb1   *           1          12       96358+  83  Linux

/dev/sdb2              13          48      289170   82  Linux swap / Solaris

/dev/sdb3              49        9729    77762632+  fd  Linux raid autodetect

```

```
# cat /etc/raidtab

 #

       # sample raiddev configuration file

       # 'old' RAID0 array created with mdtools.

       #

       raiddev /dev/md0

           raid-level              0

           nr-raid-disks           2

           persistent-superblock   0

           chunk-size              16

           device                  /dev/sda3

           raid-disk               0

           device                  /dev/sdb3

           raid-disk               1

```

```

# cat /proc/mdstat

Personalities : [linear] [raid0] [raid1] [multipath] [faulty]

md0 : active raid0 sdb3[1] sda3[0]

      155525248 blocks 16k chunks

unused devices: <none>

```

if that helps

Essentially I'm wanting to move my install off my IDE drive /hda and over to my new SATA drives, leaving the /hda drive in the box but not booting off it or using it for /, just as an extra drive.

*edit*

Oh, also I know /sdb1 will most likely be useless, but I could not see a way around it, since I'll be booting off /sda1 if this whole thing works correctly.

Checking over your files, did you set up raid0 in your A7N8X BIOS at all? (just wondering)

----------

## NeddySeagoon

Muddy,

You either set up raid in the bios and use the contents of this page http://tienstra4.flatnet.tudelft.nl/~gerte/gen2dmraid/ or you use kernel raid. 

You appear to have set up both raid in your BIOS and kernel raid. That's probably harmless, because nothing will be driving the BIOS raid.

The setup differences are: you put disks (not partitions) into BIOS raid, then partition the resulting raid volume. With kernel raid, you partition the drives (normally identically) and put the resulting partitions into raid sets. This allows you to mix raid0 and raid1 on the same drives. With BIOS raid, you must use the same raid level over the whole drive.

I did not set up raid in my BIOS.

If you use a persistent superblock and set the partition types for /dev/sda3 and /dev/sdb3 to fd, your kernel raid should be detected on boot. You can then have an entry in fstab to mount it.

Note: You must mkfs the /dev/md0, not the underlying partitions.

You have your entries in raidtab in a different order to mine. Raidtools can be a bit picky about ordering.

----------

## Muddy

At this point the whole raid thing does not even matter, as I have been stuck on the Grub error 17 for days now when trying to boot my SATA drives.

I have no clue how grub views my drives; I'm looking in the forums now for a command to have grub show me what it sees so I can set it up correctly.

----------

## NeddySeagoon

Muddy,

Take your SATA drives out of the BIOS raid (I don't know if it matters), then install grub to their master boot records like this.

```
grub

root (hd0,0)

setup (hd1)

quit

grub 

root (hd0,0)

setup (hd2)

quit
```

This installs the grub stage 1 and stage 1.5 on your (hd1) and (hd2), using (hd0,0) as your /boot.

The (hdX) entries will be correct if your IDE drive is the first BIOS-detected hard drive. The root (hd0,0) will be correct if your /boot is the first partition on the first BIOS-detected hard drive. Adjust the drive and partition numbers to suit.

With this setup, you should be able to point the BIOS at any of the three drives and booting will work, but always using your IDE /boot. However, there is a catch: in some BIOSes, if you change the boot order, the drive detection order changes too. If you have such a BIOS, booting from the SATA drives will fail.

The next step is to copy the content of your IDE /boot to one of the SATA drives, (I'll call it) /newboot on /dev/sda, or (hd1) in grubspeak. If you reinstall grub on this drive as follows

```
grub

root (hd1,0)

setup (hd1)

quit 
```

then telling the BIOS to boot this drive will cause the content of /newboot to be used. You may want to edit the titles in /newboot/grub/grub.conf so you can tell the difference, and you will certainly need to change the root (hdX,y) entries in /newboot/grub/grub.conf so that you fetch kernels and initrd files from /newboot. You may also wish to change the kernel file names in /newboot and /newboot/grub/grub.conf so you can be sure which kernel file is being loaded.

----------

## Muddy

NeddySeagoon, 

What I have done thus far before seeing your post.

Removed the BIOS raid0 setup, low-level formatted both SATA drives.

Removed the raidtab in Gentoo along with anything associated with the raid setup.

My plan: get the stage 3 ISO, do a quick (as quick as possible) setup on one SATA drive to get it to boot to a console, boot back off the IDE drive, copy over everything but /proc and /dev, then remake the raid0 setup, change the fstab and grub settings, and hopefully boot off the sda1 volume.

your way seems quicker however.  :Wink: 

Don't you boot off your sata drives??

raid 1 right?

----------

## Muddy

Well, I'm up and running with /dev/md0 as my / partition and /dev/md1 as my /home now, so everything is working except booting off the SATA drives.

I'll leave that for another day.

----------

## NeddySeagoon

Muddy,

You can make your two /boot partitions on your /dev/sda and /dev/sdb raid1 if you want.

Then you copy over your IDE /boot and install grub on each sata drive.

When the running system talks to your /boot, it updates both halves of the mirror.

Grub has no idea about raid1, and will happily use each half separately to boot from.

I boot from my SATA drives - I don't have an IDE drive any longer.

----------

## Muddy

 *NeddySeagoon wrote:*   

> Muddy,
> 
> You can make your two /boot partitions on your /dev/sda and /dev/sdb raid1 if you want.
> 
> Then you copy over your IDE /boot and install grub on each sata drive.
> ...

 

I'll give that a shot, however it'll be a bit as I'm heading into a heavy work week.

----------

## 102039

NeddySeagoon wrote:

 *Quote:*   

> I've not done this setup. My reading showed that the dmraid driver is not as mature as kernel raid.

 

Does the BIOS-supported software raid have any advantages over the kernel-based raid, in CPU usage for example, or in any other area? At the moment that onboard RAID sounds like total crap to me.

What exactly did you read about the dmraid driver? What kind of problems can you get by using that driver?

I am about to reinstall the system in the next few days, since I want to change filesystems on two partitions and properly configure all USE flags again. So I am open to both solutions at the moment, but I am still asking myself which, since hardware-supported still sounds like a bit more performance to me (I only know that from the Windows OS so far). Please correct me on that if I am wrong.

----------

## NeddySeagoon

Wurstteppich,

For Linux, RAID comes in 3 forms:-

1. Real Hardware RAID. You get this on Server Motherboards or plug-in cards only.

2. Fakeraid (often mistaken for real hardware RAID by Windows users), provided by software in the BIOS.

3. Kernel Raid, in which software in the kernel does what would be done by the BIOS software.

If you had real hardware RAID, you would know. As far as I have been able to find out, there is little speed difference between the two software raid systems.

BIOS raid can be understood by both Windows and Linux.

Kernel raid is Linux only.

I selected Kernel raid because :-

1. I don't use Windows

2. It allows differing raid levels to be mixed on the same drives.

3. The LiveCD allowing an install with BIOS raid is only beta.

I have a raid1 /boot, which allows grub to be used for booting, and made all the other partitions raid0. Swap is not raided; the kernel can manage two swap partitions for itself, although there may be something to be said for raid1 swap.
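On the two-swap-partitions point: the kernel will stripe across swap partitions that are given equal priority in fstab, which gets raid0-like swap behaviour without md. A fragment (device names are examples):

```
/dev/sda2   none   swap   sw,pri=1   0 0
/dev/sdb2   none   swap   sw,pri=1   0 0
```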

----------

## 102039

Just one last question: there seem to be non-beta Linux drivers for my onboard controller. Would it be worth a try, since I would go for a complete mirror of the whole disk, not just single partitions? What are your thoughts on it?

Though I think I am going for your suggestion to use kernel raid, since it seems a really mature method to me now. Additionally, I now have a perfect description of how to do it, thanks to you!

Greetings,

Wurstteppich

----------

## NeddySeagoon

Wurstteppich,

I have an A7N8X-Deluxe (Ver 2.0) on which I run kernel raid. I presume you mean raid1 by your phrase  *Quote:*   

> a complete mirror of the whole disk

 

In that case, there is little to choose between the two methods of software raid (kernel or BIOS). The difference is in the setup steps.

However, the ordinary liveCD does not support BIOS raid installs, which may become the deciding factor.

----------

