# raid recovery

## barrymac

Hello fellow gentooists! 

I've had a nightmare of a week. Two NASes appear to have died, slowly but surely. You'd think that having two identical boxes as well as multiple drives would be protection enough, but apparently not. The disks are fine, mostly. My most valuable work is now on a degraded 3-out-of-4 drive array.

I'm trying to determine if I'll ever be able to reassemble the RAID5 array on another machine, and perhaps even boot it from an eSATA enclosure. I've never used one of these enclosures. Of course, just to make things interesting, the drives are PATA. Adapters are available to convert them to SATA, but then they probably wouldn't fit in the enclosure.

Is it possible to use a hardware RAID card with an array that was previously Linux software RAID? Is this dangerous for the data?

I have only a small PC with no spare slots and only one eSATA port, so I'm trying to do some research before burning more cash pointlessly.

Maybe the worst-case scenario is to use USB enclosures? But really, I would desperately like to boot the old system so I can dump a MySQL database which I hadn't gotten around to backing up recently enough.

----------

## NeddySeagoon

barrymac,

First, if you have three drives of a four drive raid5 array, you don't need to do anything special to make it work.

It's supposed to work that way - it's called degraded mode. The idea is you replace the dead drive before another drive fails and you lose all your data.

This is true regardless of the type of raid you have.

Linux supports four types of raid. Disks used in one type cannot be moved to another without reformatting.

The four types are:-

hardware raid

kernel raid

fakeraid

Windows Dynamic Disks

It's unlikely you have hardware raid or Windows Dynamic Disks.  Hardware raid makes a big hole in your wallet, and there is no point in using Windows Dynamic Disks unless you also use Windows.  That leaves kernel raid and fakeraid.

They are both forms of software raid. Kernel raid runs entirely in the kernel.  Fakeraid needs help from the BIOS.  The data layout on the drives is quite different too.

If you have kernel raid, you can move the drives to new host hardware and, provided your new host's kernel supports raid, it should all just work, even in degraded mode. If you have fakeraid, you need to move the drives to an *identical* hard drive controller with an *identical* BIOS, as the BIOS sets the data layout on the drives.
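On the new host, assembling a degraded kernel-raid set might look something like this sketch. The device names here are hypothetical - substitute whatever names the new box actually assigns to the drives (needs root):

```shell
# Hypothetical device names - use whatever the new host assigns.
# First check each surviving member's superblock:
mdadm -E /dev/sdb1 /dev/sdc1 /dev/sdd1

# Assemble; --run starts the array even though the fourth member is missing:
mdadm --assemble --run /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1

# Or let mdadm locate the members by their UUIDs, in any order:
mdadm --assemble --scan --run
```

The UUIDs in the superblocks, not the device nodes, identify the members, which is why the scan form works regardless of the order the drives are plugged in.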

USB enclosures are a really bad idea for a raid set. They are unreliable and tend to drop out. Maybe at a pinch, if you can manage read only, after everything else has failed.

As your raid seems not to work, I'm worried you have lost more than one drive.

----------

## barrymac

Thanks Neddy for your informative post.

I haven't been clear about my situation. Neither of my NAS boxes will start; it looks like motherboard failure on both systems! No BIOS POST on the display, no keyboard lights, no beep codes!!

But I'm pretty sure the discs are OK; the last time it booted, the system was running fine in a 3-of-4 degraded configuration.

So it seems that software and hardware raid are not compatible. In any case, I'm beginning to think that the RAID support in the motherboard of my PC is fakeraid.

But I'm wondering if it would be possible to boot from the array on another machine, as long as I could plug in the drives.

Is it possible to plug them in with USB adapters and assemble the raid, or do they have to have exactly the same device nodes as before (sda, sdc, sde, sdg)?  To make things interesting, the motherboard in this NAS puts each drive on its own channel, thus skipping the slave on each one.

----------

## NeddySeagoon

barrymac,

Just because you have fakeraid does not mean that it's actually used.

You can ignore the fakeraid properties of a chipset and use it in Just a Bunch Of Drives (JBOD) mode.

Fake raid is slower and less flexible than kernel raid, so given the choice, kernel raid should be used.

The *only* reason for choosing fakeraid is that Windows must share the raid set.

Find another linux system that you can plug one of the drives into. 

Look at the drive with fdisk -l to discover the partition layout.

When you know the partition layout, point mdadm -E at each partition.

From the output of both fdisk -l and mdadm -E, we can tell if you have kernel raid or fakeraid.

mdadm output like  

```
$ sudo mdadm -E /dev/sda1
Password:
/dev/sda1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 9392926d:64086e7a:86638283:4138a597
  Creation Time : Sat Apr 11 16:34:40 2009
     Raid Level : raid1
  Used Dev Size : 40064 (39.13 MiB 41.03 MB)
     Array Size : 40064 (39.13 MiB 41.03 MB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 1
    Update Time : Fri Mar 18 21:14:13 2011
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : ffc79960 - correct
         Events : 4

      Number   Major   Minor   RaidDevice State
this     0       8        1        0      active sync   /dev/sda1

   0     0       8        1        0      active sync   /dev/sda1
   1     1       8       17        1      active sync   /dev/sdb1
   2     2       8       33        2      active sync   /dev/sdc1
   3     3       8       49        3      active sync   /dev/sdd1
```

shows that mdadm understands the raid superblock format, so it's almost certain you have kernel raid.

----------

## barrymac

I know it's kernel raid because I created it myself and learned a lot in the process. Interesting point that kernel raid is actually faster than fakeraid.

I just hope it's possible to reassemble the array on a different machine from the one I created it on, which is now dead, apparently. The discs will definitely get different device nodes. I would also like to be able to boot the old system this way.

I'm afraid to plug them into another machine and try to reassemble the array, because I'm no expert on this stuff and I'm afraid of permanently making the array unmountable. I don't know what could happen to the superblock info and things like that if I tried.

If the drive positions don't matter, then it would even be possible for me to dd the raid partitions into files on a bigger disc, set them up as loopback devices and assemble the array like that, just for the purposes of data recovery!

----------

## barrymac

Sorry for the additional post, but just for clarification:

Is it possible, and SAFE, to plug drives that are configured as a kernel RAID5 array into another machine in any order and reassemble the array?

The reason I'm afraid to try this is that I once caused some problems just by putting a drive back in the wrong slot in the original system, and it wouldn't start. In fact it messed up the array, and it still wouldn't start after I moved the drive back to the correct slot. I had to recover it by booting from a CD.   :Mad: 

Basically I stumbled in the dark into the following situation:

http://www.linuxdoc.org/HOWTO/Software-RAID-HOWTO-8.html#ss8.1

In fact it should be possible to practice the situation using loopback devices and see what happens! 

http://comments.gmane.org/gmane.linux.raid/31866

----------

## NeddySeagoon

barrymac,

Having a backup is always a good idea, and part of your raid set is dead.  You might need an 'undo' function.

Everything in Linux is a file, so you should be able to feed loop devices to mdadm -A and have the raid assemble from the partition images.  I'm assuming here that you donated partitions to the raid set and not whole drives?

If you ensure the kernel in your live box does not have raid auto-assembly enabled then, regardless of the raid superblock version, no auto-assembly will take place.
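The loopback idea can be rehearsed on throwaway files before touching the real images. The sketch below only creates the dummy image files, which needs no root; the losetup and mdadm steps need root and are shown as comments. All names here are made up for the example:

```shell
#!/bin/sh
# Practice run on throwaway files (all names invented for this example).
set -e
mkdir -p /tmp/raidtest

# Three 16 MB files stand in for the three surviving member images:
for n in 0 1 2; do
    dd if=/dev/zero of=/tmp/raidtest/member$n.img bs=1M count=16 2>/dev/null
done
ls -l /tmp/raidtest

# The root-only steps, for when you do this with real partition images:
#   losetup /dev/loop0 /tmp/raidtest/member0.img    (one loop per image)
#   losetup /dev/loop1 /tmp/raidtest/member1.img
#   losetup /dev/loop2 /tmp/raidtest/member2.img
#   mdadm --assemble --run --readonly /dev/md0 /dev/loop0 /dev/loop1 /dev/loop2
```

Assembling `--readonly` means mdadm won't write to the images, so a botched attempt can be retried from fresh copies.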

----------

## barrymac

Yes indeed, I donated large partitions to the RAID5 and used the rest of each disc for swap partitions and redundant boot partitions, which also contain independently bootable systems. So if the raid ever fails to boot, I have four other bootable partitions from which to attempt recovery. I backed up most of the data, but didn't have a proper scheme for dumping MySQL easily and regularly ... yet (procrastination!)

But none of this helps when both systems won't even POST!  :Sad: 

So OK, in the worst case I can always try the loopback approach, chroot into it, run MySQL inside the chroot and dump out the databases.   :Rolling Eyes:  Mad!
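That worst case might go roughly like this - a sketch assuming the array assembles as /dev/md0 and carries the old root filesystem (all paths hypothetical, and it needs root):

```shell
# Mount the assembled array and bind in what the chroot needs:
mount /dev/md0 /mnt/oldroot
mount --bind /dev /mnt/oldroot/dev
mount -t proc proc /mnt/oldroot/proc

# Enter the old system:
chroot /mnt/oldroot /bin/bash

# ...then, inside the chroot:
/etc/init.d/mysql start
mysqldump --all-databases > /root/all-databases.sql
```

The dump lands inside the chroot at /root/all-databases.sql, so copy it somewhere safe before tearing the mounts down.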

Thanks very much for the help, Neddy. I think it's not the first time you have helped me here!

----------

