# [(RE)SOLVED] Windows Dynamic Disk RAID0

## nilekurt

Greetings!

Almost a year ago I bought a couple of storage drives and (sadly) set up a software RAID0 in Windows Vista. It worked as intended so I stuck with it. Now I am back on Gentoo and would like to access the drives without having to back up 1TB of data.

Windows Logical Disk Manager (Dynamic Disk) support is enabled in the kernel.

NTFS support is up and running.

/dev/sda(1,2,3)

/dev/sdb1 *

/dev/sdc1 *

(* = part of the RAID0)

Trying to mount the separate partitions:

```

$sudo ntfs-3g /dev/sdc1 /test

Failed to read last sector (1953533951): Invalid argument

HINTS: Either the volume is a RAID/LDM but it wasn't setup yet,

   or it was not setup correctly (e.g. by not using mdadm --build ...),

   or a wrong device is tried to be mounted,

   or the partition table is corrupt (partition is smaller than NTFS),

   or the NTFS boot sector is corrupt (NTFS size is not valid).

Failed to mount '/dev/sdc1': Invalid argument

The device '/dev/sdc1' doesn't have a valid NTFS.

Maybe you selected the wrong device? Or the whole disk instead of a

partition (e.g. /dev/hda, not /dev/hda1)? Or the other way around?

```


Thanks in advance!

*Last edited by nilekurt on Sun Dec 07, 2008 9:33 pm; edited 2 times in total*

----------

## NeddySeagoon

nilekurt,

Welcome to Gentoo.

Thank goodness the mounts failed. Had they succeeded, your data could have been lost: you depend on the raid structure to read the raid set, and mounting the members individually would give the superblocks different last-mounted timestamps, which would make the kernel think they no longer belong to the same raid set. You must never do anything to the underlying partitions of a raid set.

I'm unsure whether you need dm-raid or kernel raid for Windows Dynamic Disk, but the startup sequence is: assemble your raid, look for your /dev/md (kernel raid) or /dev/mapper/ (dm-raid) node, and mount the raid like you would a single partition.

The kernel hides the underlying raid structure from you after the raid is assembled.  

The content of  /usr/src/linux/Documentation/ldm.txt will be useful to you.
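The sequence above can be sketched as shell commands. This is only an illustration: the device names, member order, and 64k chunk size are assumptions, not values confirmed for this system.

```shell
# Sketch only -- device names, member order, and chunk size are assumptions.
dmesg | grep -i ldm            # the kernel should have parsed the LDM database
# LDM stripe sets carry no md superblock, so assembly uses --build
# (which writes nothing to the disks), never --create:
mdadm --build /dev/md0 --level=0 --raid-devices=2 --chunk=64 /dev/sdb1 /dev/sdc1
mount -t ntfs-3g -o ro /dev/md0 /mnt   # read-only until the assembly is verified
```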

----------

## nilekurt

Whoops! I didn't figure mounting it could cause something like that.

Thank you for the welcoming and suggestions. I have already tried fiddling about somewhat with that, but I guess it requires a bit more.

----------

## NeddySeagoon

nilekurt,

The kernel documentation says

```
Once the LDM driver has divided up the disk, you can use the MD driver to

assemble any multi-partition volumes, e.g.  Stripes, RAID5.
```

MD is the kernel raid driver, so you need kernel raid0 support, either built in or as a module. Check your kernel config like this:

```
grep RAID /usr/src/linux/.config

# CONFIG_RAID_ATTRS is not set

CONFIG_MD_RAID0=y

CONFIG_MD_RAID1=y

# CONFIG_MD_RAID10 is not set

# CONFIG_MD_RAID456 is not set
```

That shows I have both RAID0 and RAID1 built in.

This might be a bit of a showstopper 

```
A newer approach that has been implemented with Vista is to put LDM on top of a

GPT label disk.  This is not supported by the Linux LDM driver yet.
```

What does

```
fdisk -l
```

say about your drives?

fdisk does not understand GPT, so ignore any worrying-looking error messages.

Does your dmesg show   

```
hda: [LDM] hda1 hda2 hda3 hda4 hda5 hda6 hda7
```

or at least the [LDM] bit for the drives?

If all looks good, make yourself a /dev/md0 node with

```
mknod -m 660 /dev/md0 b 9 0
chown root:disk /dev/md0
```

You may not need this step but it's harmless.

Now tell mdadm to assemble the raid. Read its man page. Be sure you understand the difference between assemble and create.
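To make that distinction concrete, here is a sketch with example device names (the names and chunk size are assumptions for illustration):

```shell
# --assemble (-A) only READS existing md superblocks; it never writes:
mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1
# --create (-C) WRITES new superblocks to every member, clobbering whatever
# data or metadata lived there -- never use it on members that already
# hold data you want to keep.
# For foreign formats with no md superblock at all, there is a third mode,
# --build, which assembles without writing anything:
mdadm --build /dev/md0 --level=0 --raid-devices=2 --chunk=64 /dev/sdb1 /dev/sdc1
```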

With the raid assembled as /dev/md0 you should be able to 

```
mount -t ntfs-3g /dev/md0 /some/place
```

----------

## nilekurt

I haven't been able to solve it yet, but I think I've progressed a bit since my last post. 

RAID support wasn't enabled (durr), and I didn't have device-mapper running either. The problem persists despite getting it all set up.

```

$ sudo dmraid -r

No RAID disks

```

```

$ grep RAID /usr/src/linux/.config

# CONFIG_RAID_ATTRS is not set

# CONFIG_BLK_DEV_3W_XXXX_RAID is not set

# CONFIG_SCSI_AACRAID is not set

# CONFIG_MEGARAID_NEWGEN is not set

# CONFIG_MEGARAID_LEGACY is not set

# CONFIG_MEGARAID_SAS is not set

CONFIG_MD_RAID0=y

CONFIG_MD_RAID1=y

CONFIG_MD_RAID10=y

CONFIG_MD_RAID456=y

CONFIG_MD_RAID5_RESHAPE=y

```

```

$ sudo fdisk -l

Disk /dev/sda: 150.0 GB, 150039945216 bytes

255 heads, 63 sectors/track, 18241 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Disk identifier: 0x6c112394

   Device Boot      Start         End      Blocks   Id  System

/dev/sda1   *           1           5       40131   83  Linux

/dev/sda2               6          68      506047+  82  Linux swap / Solaris

/dev/sda3              69        6294    50010345   83  Linux

Disk /dev/sdb: 500.1 GB, 500107862016 bytes

255 heads, 63 sectors/track, 60801 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Disk identifier: 0x88bfbce1

   Device Boot      Start         End      Blocks   Id  System

/dev/sdb1   *           1       60802   488385528+  42  SFS

Disk /dev/sdc: 500.1 GB, 500107862016 bytes

255 heads, 63 sectors/track, 60801 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Disk identifier: 0xcdb63729

   Device Boot      Start         End      Blocks   Id  System

/dev/sdc1               1       60802   488385528+  42  SFS

```

```

$dmesg | grep sd

...

 sdb:<2>ldm_parse_tocblock(): Cannot find TOCBLOCK, database may be corrupt.

 [LDM] sdb1

...

 sdc:<2>ldm_parse_tocblock(): Cannot find TOCBLOCK, database may be corrupt.

 [LDM] sdc1

...

```

Well, this doesn't bode well.

```

$ mknod -m 660 /dev/md0 b 9 0 

mknod: `/dev/md0': File exists

```

Huh?

```

$sudo mount -t ntfs-3g /dev/md0 /test

Failed to read bootsector (size=0)

Failed to mount '/dev/md0': Invalid argument

The device '/dev/md0' doesn't have a valid NTFS.

Maybe you selected the wrong device? Or the whole disk instead of a

partition (e.g. /dev/hda, not /dev/hda1)? Or the other way around?

```

Is this the showstopper?

Thanks for the response.

----------

## NeddySeagoon

nilekurt,

Two things. You don't need dmraid, that's for BIOS fakeraid. It's harmless but unused.

You skipped assembling your raid with mdadm. Until it's assembled, it's not attached to /dev/md0.

```
   Device Boot      Start         End      Blocks   Id  System

/dev/sdb1   *           1       60802   488385528+  42  SFS 

   Device Boot      Start         End      Blocks   Id  System

/dev/sdc1   *           1       60802   488385528+  42  SFS 
```

is good - you do not have GPT disklabels.

That would have been a showstopper.

```
Cannot find TOCBLOCK, database may be corrupt
```

may be bad, we don't know yet, but it probably needs to be fixed.

You don't need to manually make your /dev/md0 - I wasn't sure about that; no harm done.

Next you should have used mdadm to assemble the raid before you tried to mount it.

----------

## nilekurt

Whoops!   :Embarassed: 

I'm a bit unsure about which options to use for assembling it. Without options it fails:

```

$sudo mdadm -A /dev/md0 /dev/sdb1 /dev/sdc1

mdadm: no recogniseable superblock on /dev/sdb

mdadm: /dev/sdb1 has no superblock - assembly aborted

```

Edit:

After reading a similar thread I enabled LDM debugging in kernel and rebooted:

```

$ dmesg | grep ldm

...

 sdc:<7>ldm_validate_partition_table(): Found W2K dynamic disk partition type.

ldm_parse_privhead(): PRIVHEAD version 2.12 (Windows Vista).

ldm_parse_privhead(): Parsed PRIVHEAD successfully.

ldm_parse_privhead(): PRIVHEAD version 2.12 (Windows Vista).

ldm_parse_privhead(): Parsed PRIVHEAD successfully.

ldm_parse_privhead(): PRIVHEAD version 2.12 (Windows Vista).

ldm_parse_privhead(): Parsed PRIVHEAD successfully.

ldm_validate_privheads(): Validated PRIVHEADs successfully.

ldm_parse_tocblock(): Cannot find TOCBLOCK, database may be corrupt.

ldm_parse_tocblock(): Parsed TOCBLOCK successfully.

ldm_parse_tocblock(): Parsed TOCBLOCK successfully.

ldm_parse_tocblock(): Cannot find TOCBLOCK, database may be corrupt.

ldm_validate_tocblocks(): Validated 2 TOCBLOCKs successfully.

```

So I figure it should be fine as long as the md device gets set up properly.

----------

## NeddySeagoon

nilekurt,

That's for /dev/sdc:

```
sdc:<7>ldm_validate_partition_table(): Found W2K dynamic disk partition type.
```

You need the same report for sdb too; then it's a case of getting the raid assembled.

----------

## nilekurt

I probably should have made a note that the output for sdb is the same. 

Furthermore, if /dev/sdc is passed to mdadm before /dev/sdb (as opposed to the previous snippet), it returns the corresponding error for /dev/sdc.

Scratch that!

It now assembles properly (in any order), but according to fdisk there are no partitions on the assembled drive. I fear the worst.

----------

## irgu

I'm not sure there should be any partitions on the assembled device. The assembled device itself is the one to be mounted.

AFAIR, it's also important to use the mdadm --build parameter, otherwise there could be problems.

----------

## NeddySeagoon

nilekurt,

With the raid assembled, you should be able to mount /dev/md0 somewhere (unless the system has given it a different device node) and read the content from the mount point.

It's not normal to have partitions inside a Linux md device, as there is no way to use them.

Instead, you donate partitions to the md device.

----------

## nilekurt

After messing about with --build and --create:

```

$sudo cat /proc/mdstat

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] 

md0 : active raid0 sdb1[0] sdc1[1]

      976766848 blocks 64k chunks

      

unused devices: <none>

```

```

$sudo mount -t ntfs-3g /dev/md/0 /test

NTFS signature is missing.

Failed to mount '/dev/md0': Invalid argument

The device '/dev/md0' doesn't have a valid NTFS.

Maybe you selected the wrong device? Or the whole disk instead of a

partition (e.g. /dev/hda, not /dev/hda1)? Or the other way around?

```

```

$sudo mdadm --examine /dev/md0

mdadm: No md superblock detected on /dev/md0.

```

Have I now overwritten the original partitioning?

fdisk -l shows no apparent changes.

----------

## irgu

--create writes new md superblocks to the members and destroys the NTFS. --build, which writes nothing to the disks, should be used.
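If the --create superblocks are the only damage, a recovery attempt might look like the sketch below. The member order and 64k chunk are assumptions taken from the earlier /proc/mdstat output, and note that the superblock --create wrote lives near the end of each member, so a small amount of data there may already be gone.

```shell
# Hypothetical recovery sketch -- verify member order and chunk size first.
mdadm --stop /dev/md0                  # tear down the array made with --create
mdadm --build /dev/md0 --level=0 --raid-devices=2 --chunk=64 /dev/sdb1 /dev/sdc1
mount -t ntfs-3g -o ro /dev/md0 /test  # mount read-only to inspect the result
```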

----------

## nilekurt

I suspected as much. Thank you everyone for your help. Luckily there wasn't anything absolutely irreplaceable.

Instead of going through this hassle I have decided to scrap it all and set up a hardware RAID.

----------

