# [solved] Problem with DMRAID and 2 different Hard Drives

## Child_of_Sun_24

Hi @all

I have got a problem with my fakeraid (controller: Sil3132, RAID0 stripe).

When I try to use the RAID0, dmraid tells me that the raid is not consistent; it only shows the second hard drive and ignores the metadata on the first hard drive (`dmraid -r /dev/sdf` says that it is not a raid device).

I have read this link and tried to modify dmraid/lib/format/ataraid/sil.h, but I can't understand the file because it is very different from the one explained for Promise raid.

https://help.ubuntu.com/community/FakeRaidDebug

The only thing I could see is that on the first drive the metadata shows up (4 times) when I dump it, but that's all I can see.

I can't get it to work because of this error. I have two 1000GB hard drives from Seagate; with Windows it works fine, but dmraid fails.

Some time ago I had another problem with dmraid and Linux kernels newer than 2.6.35, which I solved by disabling the AHCI function on the internal controller, but this time that doesn't help.

I hope someone understands my problem and can help me solve it.

Please excuse my bad English.

----------

## NeddySeagoon

Child_of_Sun_24,

Let's get the silly questions out of the way first.

Is it really fakeraid and not Windows Dynamic Disks ?

Windows Dynamic Disks is software raid for Windows.

Linux can use that too.

----------

## Child_of_Sun_24

No, it's not a dynamic disk, it is a legacy RAID0. The controller is a Dawicontrol 300e with a Silicon Image 3132 chip, with normal MBR partitions.

----------

## NeddySeagoon

Child_of_Sun_24,

Ahh.  I recall your previous related thread.

Have any of your raid0 drives been part of a different fakeraid0 set in the past, so they could have leftover (old) metadata on them, which dmraid now picks up on?

It's not clear from your original post whether you have two different drives in the problem raid0 set or whether the drives are identical.

With fakeraid, you donate the drives to the raid set, then partition.  This results in different disk data structures depending on the raid level you choose.

With raid1, when you use fdisk, you will get a partition table on each drive that describes that particular part of the mirror.

With raid0, you get a partition table on one drive that describes the entire space.  With a 1TB/drive, 2-drive raid0 set it describes the entire 2TB.

When you boot and the kernel sees the drives independently, it will complain about a missing partition table on one, and complain that the partition table on the other drive describes more space than the drive has.  Both error conditions are normal.  This is the kernel's view of the world before dmraid is started.

Notice the bug you reference is written around a raid1 fakeraid setup, not raid0. You may only have metadata on one drive, as the BIOS stores the raid level you are using.

I would not be surprised to find a single copy of the metadata either on one disk or striped across both.

Post your `fdisk -l /dev/...` output for both drives and tarballs of the last 3000 blocks of each drive, and I'll see if I can spot your fakeraid header and calculate the offset.

If you have nowhere to post tarballs, you can email them to me. They will be about 3MB total, uncompressed.
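Capturing those dumps could be scripted along these lines (a sketch only: the device names /dev/sdf and /dev/sdh and the per-drive sector count are assumptions based on this thread, so adjust them to match your own `fdisk -l` output):

```shell
#!/bin/sh
# Capture the trailing sectors of each raid member, where sil fakeraid
# metadata is stored. SECTORS is the total sector count from fdisk -l.
SECTORS=1953525168
TAIL=3000
SKIP=$((SECTORS - TAIL))   # first sector of the tail region

for dev in sdf sdh; do
    # dump only if the block device actually exists on this machine
    [ -b "/dev/$dev" ] && dd if="/dev/$dev" of="$dev-tail.img" \
        bs=512 skip="$SKIP" count="$TAIL"
done
echo "tail region starts at sector $SKIP"
```

Each image is 3000 x 512 bytes, so the pair comes to roughly 3MB uncompressed, matching the size estimate above.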

----------

## Child_of_Sun_24

First I will answer your questions.

No, the drives are new (in my old thread there were 2x 500GB Seagate Barracuda 7200.10 drives). A few days ago I bought two Seagate drives, but I didn't look as closely as I should have  :Sad:  so they are two different models. The second drive is a Seagate Barracuda 7200.12; at the moment I don't know the first drive's model, I will look at it when I create the disk dumps.

On the fdisk part: I know that the first drive shows /dev/sdg, /dev/sdg1, /dev/sdg2, and /dev/sdg3; the second drive only shows /dev/sdh.

But I will post more verbose output in my next post.

----------

## NeddySeagoon

Child_of_Sun_24,

 *Quote:*   

> To the fdisk part, i know that on the first drive it shows /dev/sdg, /dev/sdg1, /dev/sdg2, and /dev/sdg3, the second drive only shows /dev/sdh. 

 

That is the correct fdisk structure for a dmraid raid0 set.

I will need the numbers from fdisk to try to decode the metadata.

----------

## Child_of_Sun_24

https://rapidshare.com/files/4118175354/dumps.tar

Here are the files you wanted: the logs from the fdisk command, hexdumps and raw dumps of the last 3000 sectors of both drives, and the error dmraid gives me.

In my previous post I made a mistake: the drives are /dev/sdf (not /dev/sdg) and /dev/sdh.

The first drive is a Seagate Barracuda LP 1000GB (Green).

Here's the fdisk part:

```
root@sysresccd / % fdisk -l /dev/sdf

Disk /dev/sdf: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xeaca4ba3

   Device Boot      Start         End      Blocks   Id  System
/dev/sdf1   *        2048      206847      102400    7  HPFS/NTFS/exFAT
/dev/sdf2          206848   409806847   204800000    7  HPFS/NTFS/exFAT
/dev/sdf3       409806848  3907041279  1748617216    7  HPFS/NTFS/exFAT
```

```
root@sysresccd / % fdisk -l /dev/sdh

Disk /dev/sdh: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdh doesn't contain a valid partition table
```
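(A quick cross-check of the numbers above, as a sketch: the end sector of /dev/sdf3 is larger than one drive's total sector count, which fits NeddySeagoon's earlier point that a raid0 partition table describes the whole two-drive span.)

```shell
# Values copied from the fdisk -l output above.
END=3907041279        # last sector of /dev/sdf3
PER_DRIVE=1953525168  # total sectors on one 1000GB drive
if [ "$END" -gt "$PER_DRIVE" ]; then
    echo "partition table on sdf spans both drives"
fi
```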

and finally the dmraid error message and the contents of /dev/mapper:

```
root@sysresccd / % ls /dev/mapper
control
root@sysresccd / % dmraid -ay
ERROR: sil: wrong # of devices in RAID set "sil_bhajagabcjbd" [1/2] on /dev/sdh
ERROR: removing inconsistent RAID set "sil_bhajagabcjbd"
ERROR: no RAID set found
no raid sets
```

----------

## NeddySeagoon

Child_of_Sun_24,

I have the dump tarball, but as it's after 1:00 am here, I'll look at it after I've slept.

Poke me with a PM if you haven't heard more by 19:00 UTC.

----------

## Child_of_Sun_24

Ok, thank you.

----------

## NeddySeagoon

Child_of_Sun_24,

```
Disk /dev/sdf: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors

Disk /dev/sdh: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Disk /dev/sdh doesn't contain a valid partition table
```

That is all perfectly normal ... almost. Even though the drives are different models, they have identical sizes; that was a worry, as I don't know how fakeraid copes with differing drive sizes.

Normally, fakeraid sets are made from sequential devices but you have /dev/sdf and /dev/sdh in your raid set.

You appear to have at least 8 drives in your system.

I suspect Windows and Gentoo do not detect your drives in the same order, so dmraid is looking in the wrong place for a part of your raid0.

Post your lspci output, your dmesg output and describe how Windows sees your drives.

How many drives does your Dawicontrol 300e controller accept? 

The Silicon Image 3132 chipset supports 4 drives - that's not to say your card does too.

----------

## Child_of_Sun_24

Between the beginning of the thread and now something in my system has changed: the two drives were directly consecutive under the Gentoo Linux I had set up.

The order has changed since I now use sysresccd, because I have deleted my old Gentoo installation.

Today I had to change my mainboard because the old one gave up, but nothing changed; the problem still exists.

The controller accepts only 2 drives (which are the two Seagate 1000GB drives); the other drives are USB drives: 4 card readers, the stick I start sysresccd from, and my USB backup drive, where all my backups are stored.

Windows sees the drives as one volume. I can't even use SIW to read the drive information; only with the Silicon Image raid tool (when I use the Silicon Image drivers) can I view the complete information and change the raid on the fly.

Dmesg and lspci will follow a bit later.

----------

## NeddySeagoon

Child_of_Sun_24,

It would be useful to disable USB storage altogether so we could be sure of the drive detection order.

To do that you need to build your own kernel, or at least be able to blacklist the usb-storage module so it is not loaded before the SATA drives are all detected.

I don't think you can do that with SystemRescueCD.

Disconnecting all of the USB storage devices would work too but that may mean you have to open your box, depending on your card reader.

edit:  this is just for testing. If booting with no USB storage devices fixes the issue, we can arrange to have usb-storage load after everything else, once the raid0 is up.
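For reference, on a self-built system the blacklist would look something like this (a sketch; the file name is arbitrary, and as noted above this will not work on stock SystemRescueCD):

```
# /etc/modprobe.d/usb-storage.conf
# keep usb-storage from auto-loading before the SATA drives are detected
blacklist usb-storage
```

With the module blacklisted, `modprobe usb-storage` can then be run by hand (e.g. from a startup script) once the /dev/mapper entries for the raid set exist.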

----------

## Child_of_Sun_24

Sorry, this was too long  :Smile:  Will upload the files a bit later.

----------

## Child_of_Sun_24

```
root@sysresccd /root % dmraid -ay
ERROR: sil: wrong # of devices in RAID set "sil_bhajagabcjbd" [1/2] on /dev/sdf
ERROR: removing inconsistent RAID set "sil_bhajagabcjbd"
ERROR: no RAID set found
no raid sets
root@sysresccd /root % ls /dev/sd*
/dev/sda  /dev/sdb  /dev/sdc  /dev/sdd  /dev/sde  /dev/sde1  /dev/sde2  /dev/sde3  /dev/sdf
```

I had the same idea, but here the two drives are sde and sdf; the first 4 are the card-reader drives, but it doesn't work here either.

dmesg:

http://pastebin.com/X3i9Bv6f

lspci -v:

http://pastebin.com/Bn4BmDjZ

I have backed up everything on my computer; I want to try a completely fresh setup of everything, including the raid0 stripe.

Hope that this helps a bit, but I don't really believe it will.

----------

## NeddySeagoon

Child_of_Sun_24,

```
[    0.000000] Kernel command line: scandelay=1 nomodeset vga=791 initrd=initram.igz docache nodmraid blacklist=*sil24* BOOT_IMAGE=altker64 
```

You turned off dmraid in that boot, so SystemRescueCD did not attempt to start fakeraid volumes.

There is nothing to be gained from rebuilding your fakeraid set.  It's unlikely to work, and if it does, you won't know why, so if it fails in the future you won't be able to fix it.

----------

## Child_of_Sun_24

At the moment I have done exactly that. I haven't rebuilt it; I have deleted it and made a new one. I tried raid0, JBOD and raid1; in raid1 mode it accepts the raid, but only without a mirror (it creates /dev/mapper entries, but it only uses the Barracuda 7200.12; the Barracuda LP drive is ignored every time).

Even if I completely delete the array and delete the metadata from the 7200.12, dmraid won't find any metadata on the LP.

So now I have made a new stripe set and removed all drives except the two raid disks and the CD-ROM, so there was no USB drive; it doesn't work.

I have disabled all ACPI BIOS settings and the IDE and SATA controllers, and used a bootable USB stick; it doesn't work.

I have changed the drive order; it doesn't work.

But there is something curious with dmraid: when I run `dmraid -r /dev/sdf` it says that this is a raid drive sil_********, etc., and that the stripe is consistent, but it only shows the sectors of the one disk.

But no failure.

I don't know anything else I can do; nothing seems to work.

----------

## NeddySeagoon

Child_of_Sun_24,

Your raid0 set is destroyed now?

----------

## Child_of_Sun_24

Yes. I have tried other configurations to find out if it is anything else, made a new raid0 stripe, and am reinstalling Windows at the moment, which can use it as normal. Only under Linux the same error every time: it doesn't see the metadata on the LP drive.

Like I said, I have changed the physical drive order, with the result that the LP drive isn't accepted; I have changed the order in the raid set, with the same result.

It's the Seagate Barracuda LP that won't be read every time; the Seagate Barracuda 7200.12 is the accepted drive every time.

I think it is the hard disk that is causing the problems.

----------

## Child_of_Sun_24

Could it be that it has something to do with the device string?

The Seagate Barracuda 7200.12 has a device string length of 11 characters, like the 2x 500GB drives I used before:

ST3100052AS

The non-working Seagate Barracuda LP has an 18-character string:

ST1000DL002-9TT153

I don't know if this has something to do with my problem.
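One way to probe this idea without patching dmraid would be to look at what model strings actually survive in the on-disk metadata (a sketch; the *-tail.img names are hypothetical dump files of each drive's last sectors):

```shell
# Sketch: pull Seagate-style model strings out of raw metadata dumps and
# compare them. The *-tail.img names are hypothetical; point them at
# dumps of your drives' last sectors.
MODEL_RE='ST[0-9][0-9A-Z-]*'
for img in sdf-tail.img sdh-tail.img; do
    [ -f "$img" ] && strings "$img" | grep -Eo "$MODEL_RE" | sort -u
done

# the regex matches both the 11- and 18-character forms quoted above:
echo 'ST1000DL002-9TT153' | grep -Eo "$MODEL_RE"   # prints ST1000DL002-9TT153
```

If the LP's 18-character string shows up truncated or missing in its own metadata while the 7200.12's 11-character string is intact, that would support the string-length theory.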

----------

## Child_of_Sun_24

Hi,

I have given up. I will use the raid only for files and backups, and will use an internal 750 GB drive for my Windows and Linux installations. That way I can use the raid under Windows, and if sometime in the future the raid becomes usable with Linux I'll migrate back.

Thanks for the help, NeddySeagoon.

CoS24

----------

## Child_of_Sun_24

I have now set up my raid0 and swapped the Barracuda LP for my 750 GB Samsung; now the raid0 works perfectly. Only 250GB are not usable, but that is not important.

The LP drive is now my external backup drive  :Smile: 

Thanks,

CoS24

----------

