# moving to a SATA RAID 1 from single IDE drive (SOLVED)

## GoofballJM1

I have ordered two 80GB hard drives and a Promise SATA RAID 0/1/JBOD controller for my company's Linux server.  It is currently running on a 40GB IDE drive.  My plan, obviously, is to move it to a RAID 1 array for redundancy.  Since I have never done this before, here's how I am planning on doing this.

1.  Set up the RAID in the RAID bios.

2.  Compile Kernel support for the RAID controller

3.  Emerge device-mapper and dmraid

4.  Format and partition the RAID array

5.  Copy the data over to the RAID drives using cp -a or dd (which should be used here?  I would assume cp)

6.  Edit my boot loader config file (Lilo in this case), and install using /sbin/lilo

7.  Edit my /etc/fstab to reflect changes

8.  Reboot and cross my fingers.

Anything I am missing?

----------

## anonybosh

I did this kind of thing a while back, only with software RAID, so I can't speak to the BIOS setup side of things.

But as far as steps 3 and up go, it looks good to me. For cp, you might want to use -P as well. I've read that tar is supposedly the most surefire way to go, though.

Good luck!

----------

## NeddySeagoon

GoofballJM1,

If it's really a hardware RAID controller you are procuring, you should not need  *Quote:*   

> 3 Emerge device-mapper and dmraid 

 The RAID set should appear as a single drive to the kernel, and the RAID card will take care of the messy internals.

I'm not aware of any real Promise hardware RAID controllers supported by the kernel; they tend to be software RAID with the software in the BIOS. In that case you do need dmraid and friends.

The only reason for anyone to use dmraid is that it is compatible with Windows. Kernel software RAID is a much more mature product.

You cannot use dd to copy the data over; that copies the raw filesystem structures too, which is not what you want at all. You need to copy file by file, preserving permissions. tar or rsync may be better than cp. You also need to avoid copying /proc and /dev, or you will get all sorts of unwanted effects.
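To make that concrete, the copy step might look like the sketch below. The paths here are scratch placeholders so the pipeline can be demonstrated safely; in real use SRC would be / and DST the mounted RAID root (e.g. /mnt/md2), with /proc and /dev excluded and recreated empty on the target.

```shell
# Demo of a permission-preserving tar-pipe copy between two trees.
# In real use: SRC=/ and DST=/mnt/md2 (the mounted RAID root),
# excluding ./proc and ./dev, then recreating them empty afterwards.
SRC=$(mktemp -d)
DST=$(mktemp -d)
mkdir -p "$SRC/etc" "$SRC/proc"
echo "hostname=server" > "$SRC/etc/conf"
chmod 600 "$SRC/etc/conf"
# The copy itself: tar c on the source side, tar xp (preserve perms) on the other.
(cd "$SRC" && tar cf - --exclude=./proc .) | (cd "$DST" && tar xpf -)
mkdir -p "$DST/proc"    # recreate the excluded mount point, empty
```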

----------

## GoofballJM1

Thanks for the help.  I assume the controller is similar to a lot of contemporary RAID devices in the sense that they rely on software driver control more than they used to.  I will let you all know if I run into any issues  :Very Happy: 

----------

## GoofballJM1

If I don't need dmraid because I am booting Linux only, then do I really need the controller card (FastTrak TX2200)?  I configured the mirror within the RAID BIOS, but the disks show up as two separate drives.

----------

## NeddySeagoon

GoofballJM1,

If you use kernel RAID and can plug the drives into the motherboard rather than a PCI card, you will get better performance, because you will not clutter your PCI bus with all the disk traffic.

dmraid should show the drives as a single logical device, but the kernel can still see the individual drives too.
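If you want to see what dmraid makes of the disks, its discovery and activation switches look like this (a quick sketch; run as root):

```
dmraid -r     # list block devices that belong to BIOS ("fake") raid sets
dmraid -ay    # activate all sets; the logical devices appear under /dev/mapper/
```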

----------

## GoofballJM1

 *NeddySeagoon wrote:*   

> GoofballJM1,
> 
> If you use kernel RAID and can plug the drives into the motherboard rather than a PCI card, you will get better performance, because you will not clutter your PCI bus with all the disk traffic.
> 
> dmraid should show the drives as a single logical device but the kernel can still see the individual drives too.

 

Yeah, I am just going to plug them straight into the motherboard and do a kernel RAID.  I suppose I should have researched whether this controller was a true hardware RAID device.  Even after I went into the RAID BIOS and configured the array for RAID 1, the /dev/ entry for the array never showed up with dmraid.  Oh well, kernel RAID it is!

----------

## GoofballJM1

Okay, I have spent the better part of two days working on this and can't get it to work.  Since there is no exact tutorial on how to do this, I referred to this howto, as it is the closest thing I could find to what I needed to do.  Unfortunately the drives DO NOT boot.  The machine gets to searching for a bootable drive and just sits there.  No luck.  I have tried everything I can think of.  Can anyone tell me how this should be done?

UPDATE:  I can get the drives to boot, but now I am getting this error message

```
Mounting /dev/md2 on /newroot failed:  Input/Output error
Could Not mount specified ROOT, try again
The root block device is unspecified or not detected
```

Here's my grub.conf

```
timeout 5
splashimage=(hd0,0)/boot/grub/splash.xpm.gz
default 0
fallback 1

title=Gentoo Linux (hd0,0)
root (hd0,0)
kernel (hd0,0)/kernel-genkernel-x86-2.6.15-gentoo-r1 root=/dev/ram0 init=/linuxrc ramdisk=8192 real_root=/dev/md2 udev
initrd (hd0,0)/initramfs-genkernel-x86-2.6.15-gentoo-r1

title=Gentoo Linux (hd1,0)
root (hd0,0)
kernel (hd1,0)/kernel-genkernel-x86-2.6.15-gentoo-r1 root=/dev/ram0 init=/linuxrc ramdisk=8192 real_root=/dev/md2 udev
initrd (hd1,0)/initramfs-genkernel-x86-2.6.15-gentoo-r1
```

and my fstab

```
/dev/md0                /boot           ext2            defaults,noatime        1 2
/dev/md1                none            swap            sw              0 0
/dev/md2                /               reiserfs        noatime         0 1
/dev/cdroms/cdrom0      /mnt/cdrom      auto            noauto,user     0 0
#/dev/fd0               /mnt/floppy     auto            noauto          0 0

# NOTE: The next line is critical for boot!
proc                    /proc           proc            defaults        0 0

# glibc 2.2 and above expects tmpfs to be mounted at /dev/shm for
# POSIX shared memory (shm_open, shm_unlink).
# (tmpfs is a dynamically expandable/shrinkable ramdisk, and will
#  use almost no memory if not populated with files)
tmpfs                   /dev/shm        tmpfs           nodev,nosuid,noexec     0 0
```

I can get either drive to boot now, but I can't get it to find the actual ROOT devices.

----------

## NeddySeagoon

GoofballJM1,

Hmm, there are a number of errors in that howto, but nothing that should stop you booting.

You need to install grub on both drives separately. The RAID 1 applies only to the filesystems on the partitions, not to the space outside them where the grub stage 1 and stage 1.5 are stored.

That's 

```
grub
root (hd0,0)
setup (hd0)
root (hd1,0)
setup (hd1)
quit
```

to install grub on both MBRs.

Are you sure that genkernel includes the RAID modules in the kernel?

I thought it left them out, hence your kernel has no idea what /dev/md2 is.

Note that the RAID modules need to get into the initrd file for you to boot. Did you make that yourself?

It's far safer if you build your kernel by hand and build in everything you need to boot. This post may help.
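For reference, the kernel options involved live under "Multi-device support (RAID and LVM)" in menuconfig. A minimal selection for this setup, built in rather than as modules (assuming RAID 1 is the only personality needed), would look like this in .config:

```
CONFIG_MD=y
CONFIG_BLK_DEV_MD=y
CONFIG_MD_RAID1=y
```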

In your grub.conf you have 

```
title=Gentoo Linux (hd1,0)
root (hd0,0)
```

The root (hd0,0) should be root (hd1,0) to boot from the second drive.

You no longer need

```
udev
```

on the kernel line. devfsd was removed in 2.6.13, so it's no longer an option.

Did you mark the RAID partitions as type fd when you used fdisk?

That's how the kernel knows to assemble the RAID sets at boot.  That has to happen before /dev/md2 exists to be mounted.

What about your /etc/raidtab - did you use persistent superblocks?

----------

## GoofballJM1

 *Quote:*   

> Are you sure that genkernel includes the raid modules in the kernel ?
> 
> I thought it left them out, hence your kernel has no idea what /dev/md2 is.
> 
> Note that the raid modules need to get into the initrd file for you to boot. Did you make that yourself? 

 

I am using genkernel.  RAID support is built into the kernel, and it's the same kernel that is on the RAID drives.  The array works great when I boot from the IDE drive; that kernel image sees the drives fine.

 *NeddySeagoon wrote:*   

> GoofballJM1,
> 
> Did you mark the raid partitions as type fd when you used fdisk?
> 
> 

 

Didn't do that.  A missing step in the process.  Can I still do it even though the drives are formatted and data exists on them?  How would I toggle that option in fdisk?

 *Quote:*   

> What about your /etc/raidtab - did you use persistent superblocks?

 

I emerged raidtools, but I couldn't find anything about that.  

Here's my /etc/mdadm.conf

```
DEVICE  /dev/sda*
DEVICE  /dev/sdb*

ARRAY   /dev/md0 devices=/dev/sda1,/dev/sdb1
ARRAY   /dev/md1 devices=/dev/sda2,/dev/sdb2
ARRAY   /dev/md2 devices=/dev/sda3,/dev/sdb3
```
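For anyone following along, arrays matching a config like that would have been created with something along these lines (a sketch, not necessarily my exact commands; device names taken from the config above):

```
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
cat /proc/mdstat    # watch the mirrors build
```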

It sounds like I have more than one problem here.  I also looked at this howto, but quite frankly, I thought it deviated too far from what I need.  I don't need mixed arrays, I just need two identical drives.

EDIT:  I am going to start over in the morning using the previously mentioned howto.  Now that I am reading it again, I can see the errors I have made.  I will let you know what happens.

----------

## NeddySeagoon

GoofballJM1,

Before you start over, use fdisk to fix the partition types. It's quite harmless to your data.
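For reference, toggling the type is an interactive fdisk session along these lines (the partition number is an example; repeat for each RAID member partition on both drives):

```
fdisk /dev/sda
   t     <- change a partition's type
   3     <- partition number (here, the root partition)
   fd    <- type fd = Linux raid autodetect
   w     <- write the table and exit
```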

Are the raid personalities in the kernel as modules or built in ?

If they are modules, they need to be in the initrd; if they are built in, that's fine.

raidtools and mdadm serve the same purpose. I'm a raidtools user.

----------

## GoofballJM1

 *NeddySeagoon wrote:*   

> GoofballJM1,
> 
> Before you start over, use fdisk to fix the partition types. It's quite harmless to your data.

 

I toggled the Linux raid autodetect type (fd) on the partitions that make up /dev/md2 (my root filesystem).  I will try it out when I get in tomorrow.

EDIT:  same error message after that change

 *Quote:*   

> Are the raid personalities in the kernel as modules or built in ?
> 
> If they are modules, they need to be in the initrd; if they are built in, that's fine.

 

I compiled the kernel using genkernel --menuconfig all.  I made sure the RAID support was configured as modules, so I would assume it is in the initrd image.  Unfortunately, I'm not at work anymore, so I can't tell you for sure.

EDIT:  It appears they were not getting loaded into the initrd, because when I compiled them directly into the kernel, the server booted!  Yeah!

 *Quote:*   

> raidtools and mdadm serve the same purpose. I'm a raidtools user.

 

That's what I figured.

----------

## GoofballJM1

One last issue is this bug.  After doing what the message says, it appears that my /dev nodes are populating.  Here are the contents of my /dev directory.

```
agpgart   initctl  md1     ram15   sdb1    tty14  tty3   tty45  tty60    vcs4
bus       input    md2     ram2    sdb2    tty15  tty30  tty46  tty61    vcs5
cdrom     kmem     mem     ram3    sdb3    tty16  tty31  tty47  tty62    vcs6
cdrw      kmsg     misc    ram4    shm     tty17  tty32  tty48  tty63    vcsa
console   log      null    ram5    snd     tty18  tty33  tty49  tty7     vcsa1
core      loop     oldmem  ram6    stderr  tty19  tty34  tty5   tty8     vcsa12
disk      loop0    port    ram7    stdin   tty2   tty35  tty50  tty9     vcsa2
fb        loop1    psaux   ram8    stdout  tty20  tty36  tty51  ttyS0    vcsa3
fb0       loop2    ptmx    ram9    synth   tty21  tty37  tty52  ttyS1    vcsa4
fbsplash  loop3    pts     random  tts     tty22  tty38  tty53  ttyS2    vcsa5
fd        loop4    ram0    rd      tty     tty23  tty39  tty54  ttyS3    vcsa6
fd0       loop5    ram1    rtc     tty0    tty24  tty4   tty55  urandom  zero
floppy    loop6    ram10   sda     tty1    tty25  tty40  tty56  vcs
full      loop7    ram11   sda1    tty10   tty26  tty41  tty57  vcs1
gpmctl    mapper   ram12   sda2    tty11   tty27  tty42  tty58  vcs12
hdd       md       ram13   sda3    tty12   tty28  tty43  tty59  vcs2
hpet      md0      ram14   sdb     tty13   tty29  tty44  tty6   vcs3
```

Looks right to me.  I can get the system booting up without issues now.  If it is working, how do I remove that meaningless error message?

EDIT:  Fixed this one too. 

```
cp /etc/issue.devfix /etc/issue
```

Thanks for your pointers and getting me in the right direction.  :Wink: 

----------

