# RAID and Grub... Suggestions?

## avieth

I just got two identical SATA drives. RAID works on the Gentoo LiveCD if I boot with `gentoo dodmraid`. The drive as a whole appears under /dev/mapper/sil_randomnumbers, and when I made two partitions they showed up as sil_randomnumbers# with a number suffix. I formatted both of them with reiserfs, then rebooted into my hard-drive installation.

I have dmraid and device-mapper installed, and the right driver in my kernel under Drivers->SCSI->Low Level Devices (the same one the LiveCD loaded). The partitions I made on the LiveCD appear as /dev/sda#, but there is also a /dev/sdb... though I don't think that matters. The main problem: xfs and reiserfs give input/output errors when formatting the large 280GB partition, but the smaller 20GB partition formatted with reiserfs successfully. ext3 works on the large one, but I want a faster fs since this drive will be used to record MythTV data.

So, any idea how to get the array to show up in /dev/mapper/sil_randomnumbers? How about booting off the array from grub? There's a gentoo-wiki guide, but the author uses genkernel and doesn't know much about setting up raid in general, just specifically for his hardware. And what about formatting the large partition with xfs or reiserfs? Is there a kernel module I need?

----------

## batistuta

avieth, based on your number of posts you probably already know everything I'm about to say, but I'll try anyway. I don't know anything about nvraid, but I can tell you some things about software raid.

Grub doesn't know anything about RAID, so you can boot from RAID1 but not from any other RAID level (RAID1 is just a mirror, so each disk looks like a plain filesystem to grub). What type of RAID are you trying to set up, RAID1 or RAID0?

Regarding formatting with xfs or reiserfs: these are somewhat contradictory choices. xfs is optimized for very large files, while reiserfs is optimized for small ones. I guess in your case you will have either very large files or, if MythTV is able to break the files up, a few medium-sized ones. In the latter case I think reiserfs or reiser4 should be fine. Both support partitions and file sizes on the order of terabytes, so that shouldn't be the problem.

I remember reading something about some filesystems not liking certain I/O schedulers. Have you checked that?

----------

## avieth

 *batistuta wrote:*   

> avieth, based on your number of posts you probably already know everything I'm about to say, but I'll try anyway. I don't know anything about nvraid, but I can tell you some things about software raid.
> 
> Grub doesn't know anything about RAID, so you can boot from RAID1 but not from any other RAID level (RAID1 is just a mirror, so each disk looks like a plain filesystem to grub). What type of RAID are you trying to set up, RAID1 or RAID0?
> 
> Regarding formatting with xfs or reiserfs: these are somewhat contradictory choices. xfs is optimized for very large files, while reiserfs is optimized for small ones. I guess in your case you will have either very large files or, if MythTV is able to break the files up, a few medium-sized ones. In the latter case I think reiserfs or reiser4 should be fine. Both support partitions and file sizes on the order of terabytes, so that shouldn't be the problem.
> ...

 

I'm using RAID-0... As for I/O schedulers, I have CFQ enabled in the kernel... I'll enable the other ones and try.

----------

## NeddySeagoon

avieth,

You appear to have a horrible mix of kernel raid and BIOS fake raid. Let's step back a little.

dmraid operates with code in your BIOS to give you /dev/mapper/...  partitions. The only reason for using dmraid is that Windows and Linux must both access the raid set. With this system, you allocate your drives to raid in the BIOS and your BIOS shows grub a single logical device. Raid 0 and Raid 1 both work this way. You do not make partitions on the underlying drives.

If Windows is not involved, kernel raid is the way to go. You do not use dmraid; you use the raid personalities in the kernel.

Now you contribute partitions from the underlying drives to the raid sets - you may have several of different raid levels on the same pair of drives. /boot must be raid1 to keep grub happy, but the rest can be whatever you like.

Think about what you want to do - and read about mdadm, which is the tool used to manage kernel raid sets.
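For reference, creating kernel raid sets with mdadm looks roughly like this - a sketch only, with device names, partition numbers and raid levels that are illustrative, not taken from your box:

```
# raid1 mirror for /boot, from the first partition of each drive
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# raid0 stripe for bulk data, from another pair of partitions
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sda4 /dev/sdb4

# check what the kernel thinks of the arrays
cat /proc/mdstat
```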

----------

## avieth

If I use software raid will I see any performance loss? 

You're saying I should compile all the software raid drivers in the kernel and use mdadm to create the arrays?

----------

## NeddySeagoon

avieth,

Both BIOS raid and kernel raid are different ways of doing software raid. Kernel raid is portable between machines; BIOS raid is not, unless the motherboards have the same BIOS and the same chipsets.

Root on BIOS raid requires an initrd to house the dmraid software.

Root on kernel raid requires that /boot be raid 1, that the raid partition type in fdisk be fd, that you choose persistent superblocks when you use mdadm to set up the mdX devices, and that you build your raid personalities into the kernel, since one of them is needed to boot.
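A sketch of those requirements in practice (partition numbers illustrative; with mdadm's default superblock format the superblock is already persistent):

```
# in fdisk, set each raid partition to type fd (Linux raid autodetect):
#   t  - change a partition's type
#   1  - partition number
#   fd - Linux raid autodetect
#   w  - write the table and quit
fdisk /dev/sda

# mdadm writes a persistent superblock by default
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
```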

----------

## avieth

OK, so I followed the raid-1 guide on gentoo-wiki as best I could (I want to use raid-0), but I can't get it set up right. Is there a guide out there for raid-0?

----------

## batistuta

The setup for RAID0 is exactly the same as for RAID1, except that you change the raid level everywhere from 1 to 0. The other difference is that you don't put /boot on RAID0 (it won't work).

For example, in my setup I have three HDs:

HD1: has boot, Windows partition, swap, and a RAID partition

HD2: has root, swap (same size as on HD1), and a RAID partition of the same size as the RAID partition on HD1. The rest goes in an LVM

HD3: is divided into 4 LVM partitions

Now the RAID partitions on HD1 and HD2 form a RAID0 array.

All LVM partitions on HD3 are combined into a single volume, used for backup and music

The RAID0 array has LVM on top, and from this volume I get /usr, /etc and so on

The LVM in HD2 is used mostly for /home

I hope this helps

----------

## NeddySeagoon

avieth,

You can't swap from raid 1 to raid 0 without another disk, unless you reinstall.

The only difference is you choose /boot as raid 1 and everything else as raid 0.

After the /dev/mdX are formed its all transparent.

What doesn't work?

Do your install and report symptoms and error messages, then we can help you fix it.

----------

## batistuta

Another difference is that with RAID1 you might want to RAID your swap as well. But if doing RAID0, from what I've read it is usually preferred to put your two swap partitions outside the RAID0 array - the kernel will stripe them for you anyway. Putting swap in the RAID0 array should also work, but from what I've read it is usually done outside. I don't know of any significant advantages. Anyone?
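From what I've read, striping swap without raid is just a matter of giving both partitions the same priority in /etc/fstab - something like this (device names illustrative):

```
/dev/sda2   none   swap   sw,pri=1   0 0
/dev/sdb2   none   swap   sw,pri=1   0 0
```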

Also, having just the two /boot partitions in a RAID1 doesn't make any sense to me, so I would simply make a single /boot partition rather than creating a RAID1 array for /boot. I mean, either you RAID1 your whole disk, or nothing. Otherwise what's the point? One disk crashes and you lose everything except your boot...   :Rolling Eyes: 

The point of RAID1 is that if one disk crashes, your system keeps going. RAID1ing anything but the whole disk sounds like doing backup the wrong way.

----------

## NeddySeagoon

batistuta,

raid is for speed or reliability or both, never for backup. When you 

```
rm <file> 
```

it's just as gone as on a single-drive install. You still need your backups.

If you make /boot 32MB (and that's plenty with ext2) you may as well raid 1 it to keep everything tidy. Otherwise you have 32MB left over on the other (non-boot) drive.

----------

## avieth

I found this page http://tldp.org/HOWTO/Software-RAID-HOWTO-5.html

It's not Gentoo specific, but I followed the RAID-0 part and I think /dev/md0 is working  :Very Happy:  I made a 32MB boot, 2GB swap, 15GB root, and... well, here's the fdisk output:

```

# fdisk /dev/md0

The number of cylinders for this disk is set to 38914.

There is nothing wrong with that, but this is larger than 1024,

and could in certain setups cause problems with:

1) software that runs at boot time (e.g., old versions of LILO)

2) booting and partitioning software from other OSs

   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/md0: 320.0 GB, 320083591168 bytes

255 heads, 63 sectors/track, 38914 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

    Device Boot      Start         End      Blocks   Id  System

/dev/md0p1   *           1           5       40131   fd  Linux raid autodetect

/dev/md0p2               6         255     2008125   fd  Linux raid autodetect

/dev/md0p3             256        2080    14659312+  fd  Linux raid autodetect

/dev/md0p4            2081       38914   295869105   fd  Linux raid autodetect

```

Does it look alright? I'll probably edit this post to add some more questions/comments.

EDIT:  :Embarassed:  The /dev/md0p* partitions didn't appear - there are no /dev/md0p-anythings. /dev/sda1, 2, 3 and 4 appeared, though... When I try to format /dev/sda4 with reiserfs I get:

```

# mkreiserfs /dev/sda4

mkreiserfs 3.6.19 (2003 www.namesys.com)

A pair of credits:

Yury Umanets  (aka Umka)  developed  libreiser4,  userspace  plugins,  and  all

userspace tools (reiser4progs) except of fsck.

Elena Gryaznova performed testing and benchmarking.

Guessing about desired format.. Kernel 2.6.17-gentoo-r8 is running.

Format 3.6 with standard journal

Count of blocks on the device: 73967264

Number of blocks consumed by mkreiserfs formatting process: 10469

Blocksize: 4096

Hash function used to sort names: "r5"

Journal Size 8193 blocks (first block 18)

Journal Max transaction length 1024

inode generation number: 0

UUID: 1f47e2e1-e216-4d29-a1eb-31e957c1bffa

ATTENTION: YOU SHOULD REBOOT AFTER FDISK!

        ALL DATA WILL BE LOST ON '/dev/sda4'!

Continue (y/n):y

Initializing journal - 0%....20%....40%....60%....80%....100%

The problem has occurred looks like a hardware problem. If you have

bad blocks, we advise you to get a new hard drive, because once you

get one bad block  that the disk  drive internals  cannot hide from

your sight,the chances of getting more are generally said to become

much higher  (precise statistics are unknown to us), and  this disk

drive is probably not expensive enough  for you to you to risk your

time and  data on it.  If you don't want to follow that follow that

advice then  if you have just a few bad blocks,  try writing to the

bad blocks  and see if the drive remaps  the bad blocks (that means

it takes a block  it has  in reserve  and allocates  it for use for

of that block number).  If it cannot remap the block,  use badblock

option (-B) with  reiserfs utils to handle this block correctly.

bread: Cannot read the block (73967263): (Input/output error).

Aborted

```

So I tried:

```

# mkfs.reiser4 /dev/sda4

mkfs.reiser4 1.0.5

Copyright (C) 2001, 2002, 2003, 2004 by Hans Reiser, licensing governed by

reiser4progs/COPYING.

Block size 4096 will be used.

Linux 2.6.17-gentoo-r8 is detected.

Uuid 490a4255-fcf7-416f-bbb1-3b693423db23 will be used.

Reiser4 is going to be created on /dev/sda4.

(Yes/No): yes

Creating reiser4 on /dev/sda4 ... done

Error: Can't synchronize device /dev/sda4.

```

EDIT2: I've been reading up some more. I made the /dev/sda and /dev/sdb partition tables identical and tried this command:

```

# mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1

mdadm: chunk size defaults to 64K

mdadm: Cannot open /dev/sda1: Device or resource busy

mdadm: Cannot open /dev/sdb1: Device or resource busy

mdadm: create aborted

```

 :Sad: 

----------

## NeddySeagoon

avieth,

The partition table you posted for /dev/md0 looks wrong on several counts.

md0 is a logical raid comprised of partitions from each of the underlying drives.

You partition your two drives as for a normal install but make sure they are partitioned identically.

For all partitions that are to be donated to raid sets, use fdisk to change the partition types to fd.

Next, form your raid sets with mdadm or raidtools (deprecated).

Make filesystems on the raid sets and mount your /dev/mdX and so on. 

Proceed with a normal install, the kernel will hide the raid from you.

You do not attempt to partition /dev/mdX further - you already have the raid sets you require from partitioning the underlying drives.
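Once the /dev/mdX devices exist, the rest is the usual make-filesystems-and-mount sequence - filesystem choices, md numbers and mount points here are illustrative, not prescriptive:

```
mke2fs /dev/md0          # /boot on raid1 - ext2 is fine here
mkreiserfs /dev/md3      # root
mkreiserfs /dev/md4      # /home or bulk data

mount /dev/md3 /mnt/gentoo
mkdir /mnt/gentoo/boot
mount /dev/md0 /mnt/gentoo/boot
# ...then continue the handbook install as normal
```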

Here is my fdisk for one drive, the other is identical.

```
Disk /dev/sda: 300.0 GB, 300090728448 bytes

255 heads, 63 sectors/track, 36483 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System

/dev/sda1   *           1           5       40131   fd  Linux raid autodetect

/dev/sda2               6         130     1004062+  82  Linux swap / Solaris

/dev/sda4             131       36483   292005472+   5  Extended

/dev/sda5             131         739     4891761   fd  Linux raid autodetect

/dev/sda6             740        4387    29302528+  fd  Linux raid autodetect

/dev/sda7            4388        4631     1959898+  fd  Linux raid autodetect

/dev/sda8            4632        5361     5863693+  fd  Linux raid autodetect

/dev/sda9            5362        7307    15631213+  fd  Linux raid autodetect

/dev/sda10           7308       36483   234356188+  fd  Linux raid autodetect
```

and /proc/mdstat shows

```
Personalities : [raid0] [raid1] 

md1 : active raid0 sdb5[1] sda5[0]

      9783296 blocks 16k chunks

      

md2 : active raid0 sdb6[1] sda6[0]

      58604928 blocks 16k chunks

      

md3 : active raid0 sdb7[1] sda7[0]

      3919616 blocks 16k chunks

      

md4 : active raid0 sdb8[1] sda8[0]

      11727232 blocks 16k chunks

      

md5 : active raid0 sdb9[1] sda9[0]

      31262208 blocks 16k chunks

      

md6 : active raid0 sdb10[1] sda10[0]

      468712192 blocks 16k chunks

      

md0 : active raid1 sdb1[1] sda1[0]

      40064 blocks [2/2] [UU]

      

unused devices: <none>
```

----------

## avieth

Thanks for all the help so far - I think I'm on the last problem! I set up software raid from the LiveCD, copied my root filesystem to a 15GB raid-1, copied /boot to a 32MB raid-1 partition, and copied my home dir to a 300GB raid-0 partition. Now I have all the files I need to boot; it's just time to install grub on the 32MB /dev/md1 partition.

The drives in the system at the time:

```

1 MAXTOR 30GB /dev/hda

1 WD 80GB /dev/hdb

2 Identical WD 160GB SATA at /dev/sda /dev/sdb

```

So I did:

```

# grub

> root (hd2,0)

> setup (hd2)

> root (hd3,0)

> setup (hd3)

```

Everything went fine, but when I told my bios to boot from scsi it gives me a grub error 17. What am I doing wrong?

----------

## NeddySeagoon

avieth.

```
17 : Cannot mount selected partition

     This error is returned if the partition requested exists, but the

     filesystem type cannot be recognized by GRUB.
```

Grub ignores the raid and reads the underlying partitions, which is why /boot must be on raid 1. Neither the BIOS nor grub understands kernel raid. I suspect your BIOS may have renumbered your drives so the SATA drives are 0 and 1 and your IDE drives are 2 and 3 because you changed to booting from SCSI. My BIOS does that.

It's likely that your (hd2,0) or (hd3,0) is now NTFS, or whatever you have on your IDE drives.

The fix is to boot with the liveCD ... assemble your raids, *not* remake them, and get into the chroot to reinstall grub.

When you get to

```
grub

root (hd
```

press the tab key and grub will list your drives; choose one and continue to

```
root (hd0, 
```

and press tab again. Grub will show the partitions on hd0. Repeat for drives 1, 2 and 3.

You can probably tell the drives apart by the partition scheme that grub shows you.

----------

## avieth

How can I boot from the livecd and not have to remake the raids with mdadm? Is there a boot option I need for the livecd?

----------

## NeddySeagoon

avieth,

Boot the liveCD and use the mdadm assemble command - see its man page. You will have mdadm installed on your raid, but you need to assemble the raid before you can read anything from it.

Be very clear about the two commands for making and assembling raid sets. One destroys all the data and gives you a clean new raid set; the other starts an existing raid set so you can mount it.
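For example, assuming the device names from your earlier posts (the exact names are an assumption on my part):

```
# --assemble starts an existing array without touching the data
mdadm --assemble /dev/md1 /dev/sda1 /dev/sdb1

# or let mdadm find all arrays from their superblocks
mdadm --assemble --scan
```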

----------

## avieth

I can't figure out a --assemble command that finds the drives, so I did:

```

mknod /dev/md1 b 9 1

mknod /dev/md3 b 9 3

mknod /dev/md4 b 9 4

mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3

mdadm --create /dev/md4 --level=0 --raid-devices=2 /dev/sda4 /dev/sdb4

```

The filesystems were kept intact.

Then:

```

grub

> root (hd0,0)

> setup (hd0)

> root (hd1,0)

> setup (hd1)

> quit

```

The two identical sata drives are the only hard drives connected. I rebooted and got the same error 17.

Oh, and could you just reassure me that software raid is just as good as the A7N8X's "hardware" raid? I understand that I don't really have the option of using the bios controlled raid, but am I really losing any speed by using software raid?

EDIT: OK, this is really confusing. I deleted my BIOS raid array - then it wouldn't even attempt to boot from SCSI. So I created a mirrored array - all data was lost. So I went back to a BIOS striped array. Don't worry, I have backups  :Very Happy: 

So what I don't get is why deleting the BIOS arrays affects my data if I'm using software raid and can see two separate disks in linux. Maybe I should just reinstall gentoo from scratch  :Confused: 

----------

## NeddySeagoon

avieth,

I have an A7N8X too. It does not have hardware raid; it provides BIOS software raid.

Your choice is between two different implementations of software raid: your BIOS or the kernel.

Speed wise, there will be little to choose between them. The data-rate limit will be the head/platter limit if you are using the onboard SIL 3112 SATA controller. If you have a card in a PCI slot, the PCI bus speed will be the limit.

The data layout on the drives is determined by the BIOS if you use its raid, and by the kernel if you use kernel raid.

You need an identical motherboard and BIOS if you ever need to move a BIOS raid to a new host. Kernel raid is hardware and BIOS independent. That's good when your motherboard dies.

Kernel raid is more mature than BIOS raid, as well as being portable.

Grub Error 17 means the partition exists but grub cannot read it. Please post your

```
fdisk -l
```

output annotated with the filesystem name and type. Just /dev/sda will do, since /dev/sdb should be identical.

You did make /boot raid1 and not raid0 ?

----------

## avieth

I can't copy it exactly, but here it is from memory. All the data (except for the start, end and blocks columns) is correct:

```

   Device Boot      Start         End      Blocks   Id  System

/dev/sda1   *           1           #       #####   fd  Linux raid autodetect

```

Yes, /dev/sda1 and /dev/sdb1 are in a raid-1 array.

----------

## NeddySeagoon

avieth,

That looks good. Did you get any errors when you installed grub, or did it say "Embedded XX sectors", where XX is a number?

Since you installed grub on both MBRs try pointing the BIOS at the other disk to boot from.

Do you have a floppy drive ?

If not, when you install grub, you need to use the --no-floppy option, as in 

```
grub --no-floppy
```

----------

## avieth

```

# grub --no-floppy

> root (hd0,0)

  Filesystem type is ext2fs. Partition type 0xfd.

> setup (hd0,0)

  #First three lines went well

  Running "embed /boot/grub/e2fs_stage1_5 (hd0,0)"... failed (This is not fatal)

  Running "embed /boot/grub/e2fs_stage1_5 (hd0,0)"... failed (This is not fatal)

  Running "install /boot/grub/stage1 (hd0,0) /boot/grub/stage2 p /boot/grub/menu.lst "... succeeded

Done.

> root (hd1,0)

  Filesystem type is ext2fs. Partition type 0xfd.

> setup (hd1,0)

  #First three lines went well

  Running "embed /boot/grub/e2fs_stage1_5 (hd1,0)"... failed (This is not fatal)

  Running "embed /boot/grub/e2fs_stage1_5 (hd1,0)"... failed (This is not fatal)

  Running "install /boot/grub/stage1 (hd1,0) /boot/grub/stage2 p /boot/grub/menu.lst "... failed

Error 16: Inconsistent filesystem structure.

```

I had to type that out myself, so excuse any spelling mistakes. That's what happens, though. These two partitions are mirrored into /dev/md1... I tried reformatting /dev/md1, and in fdisk they are both identical... What's the problem here?

EDIT: If I first do root (hd1,0) and then setup (hd1,0), the command succeeds but (hd0,0) fails - so it's only whichever drive I set up second that fails.

I also tried reformatting with xfs. The second drive still fails, but this is output at the end of the setup command:

```

Error 31: File is not sector aligned

```

EDIT 2: I tried booting from SCSI anyways... The bios pauses on Verifying DMI Pool Data...

Maybe this grub problem is here because I have a striped set in the bios  :Confused: 

----------

## NeddySeagoon

avieth,

You must turn off the RAID in the BIOS. The kernel is doing it all for you now.

----------

## avieth

 *NeddySeagoon wrote:*   

> avieth,
> 
> You must turn off the RAID in the BIOS. The kernel is doing it all for you now.

 

Ah, thanks for the help, man. I've gotta copy everything from my IDE drives to SATA again now  :Mad: 

----------

## NeddySeagoon

avieth,

From IDE to /dev/md... you mean ?

----------

## avieth

Yeah - I mount /dev/hda3, my old root partition, mount /dev/md3 on /mnt/store, and then:

```
cp -ax / /mnt/store
```

----------

