# raid using dmraid vanishing

## scaanjoon

Greetings all.  I'm very new to using anything but true hardware RAID, so this is new territory for me.  I've tried creating a RAID-1 array on a system equipped with fakeraid three times now, and in every case the array vanishes after a couple of reboots.  I'm hoping there's a simple, obvious answer that I just haven't run across yet.

I'm using a system equipped with Intel Rapid Storage (i.e. fakeraid).  I created a RAID-1 array through the RAID boot utility.  After a little research I decided that mdadm-3.2.1 was the best Linux option to handle the RAID device.  I emerged it and, lo and behold, it found the device I'd created (named HOLD).  HOLD_0 showed up in /dev/md (a soft link to /dev/md126, with a container at /dev/md127).  I partitioned the array into a single partition, /dev/md126p1, and formatted it as ext4.

The device was stable for roughly 12 boot cycles before it vanished.  /dev/md126 and /dev/md127 are still listed, but the partition /dev/md126p1 is no longer listed, and /dev/md/HOLD_0 is gone.  When I rebooted the system I found that the Intel RAID utility lists the array status as Verify rather than Normal.  As this is fakeraid, any validation / maintenance needs to be done through the operating system.
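(Editor's note: one thing that can make IMSM assembly more deterministic across reboots is pinning the container and the volume in /etc/mdadm.conf.  A sketch only — the `/dev/md/` names are assumptions, and the UUIDs are the ones mdadm reports in the output below:)

```
# /etc/mdadm.conf -- sketch; UUIDs taken from "mdadm --detail --scan"
# and "mdadm -E /dev/md127" on this system
ARRAY /dev/md/imsm0 metadata=imsm UUID=3fdf4eba:19a9f1cb:d75421eb:6f1f12a6
ARRAY /dev/md/HOLD_0 container=/dev/md/imsm0 member=0 UUID=ae306efa:69ef8809:054e334a:29f0b3d2
```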

I've tried reassembling the raid with mdadm --assemble --scan.  That seems to have had no effect.  
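(Editor's note: `--assemble --scan` is usually a no-op while the stale, inactive md nodes still exist; a sketch of the stop-then-reassemble sequence, assuming the device names from the output below:)

```shell
# Sketch: tear down the inactive nodes first, then let mdadm
# rebuild the container and volume from the on-disk IMSM metadata
mdadm --stop /dev/md126   # the member array
mdadm --stop /dev/md127   # the IMSM container
mdadm --assemble --scan   # re-reads metadata from /dev/sdb and /dev/sdc
```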

I've run the following commands and received the following results:

```shell
# mdadm --detail --scan
ARRAY /dev/md/imsm0 metadata=imsm UUID=3fdf4eba:19a9f1cb:d75421eb:6f1f12a6
mdadm: cannot open /dev/md/HOLD_0: No such file or directory
```

```shell
# mdadm -E /dev/md127
/dev/md127:
          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.1.00
    Orig Family : e6acf3ee
         Family : e6acf3ee
     Generation : 000094e4
           UUID : 3fdf4eba:19a9f1cb:d75421eb:6f1f12a6
       Checksum : 7ee79dc8 correct
    MPB Sectors : 2
          Disks : 2
   RAID Devices : 1

  Disk00 Serial : S2H7JD1B219905
          State : active
             Id : 00020000
    Usable Size : 3907024654 (1863.01 GiB 2000.40 GB)

[HOLD]:
           UUID : ae306efa:69ef8809:054e334a:29f0b3d2
     RAID Level : 1 <-- 1
        Members : 2 <-- 2
          Slots : [UU] <-- [UU]
    Failed disk : none
      This Slot : 0
     Array Size : 3907022848 (1863.01 GiB 2000.40 GB)
   Per Dev Size : 3907023112 (1863.01 GiB 2000.40 GB)
  Sector Offset : 0
    Num Stripes : 15261808
     Chunk Size : 64 KiB <-- 64 KiB
       Reserved : 0
  Migrate State : repair
      Map State : normal <-- normal
     Checkpoint : 1662606 (512)
    Dirty State : clean

  Disk01 Serial : S2H7JD1B219915
          State : active
             Id : 00030000
    Usable Size : 3907024654 (1863.01 GiB 2000.40 GB)
```

```shell
# mdadm --detail /dev/md126
/dev/md126:
      Container : /dev/md/imsm0, member 0
     Raid Level : -unknown-
   Raid Devices : 2
  Total Devices : 2
          State : active, Not Started
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

    Number   Major   Minor   RaidDevice State
       1       8       16        0      active sync   /dev/sdb
       0       8       32        1      active sync   /dev/sdc
```

```shell
# mdadm --detail /dev/md127
/dev/md127:
        Version : imsm
     Raid Level : container
  Total Devices : 2
Working Devices : 2
           UUID : 3fdf4eba:19a9f1cb:d75421eb:6f1f12a6
  Member Arrays : /dev/md126

    Number   Major   Minor   RaidDevice
       0       8       16        -        /dev/sdb
       1       8       32        -        /dev/sdc
```

```shell
# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md126 : inactive sdb[1] sdc[0]
      3907023112 blocks super external:/md127/0

md127 : inactive sdc[1](S) sdb[0](S)
      4514 blocks super external:imsm

unused devices: <none>
```
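(Editor's note: the "active, Not Started" state plus "Migrate State : repair" suggests the mdmon daemon, which external-metadata (IMSM) arrays need in order to run resyncs and update the metadata, may not be managing the container.  A sketch of how one might check and restart it, with device names assumed from the output above:)

```shell
# Sketch: IMSM arrays are managed by the mdmon metadata daemon;
# without it the member array stays inactive / "Not Started"
pgrep -a mdmon || mdmon /dev/md127    # start the monitor for the container if missing
mdadm --run /dev/md126                # then try starting the member array
cat /sys/block/md126/md/sync_action   # should show the verify/repair activity
```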

I'd appreciate any ideas anyone might offer.

----------

## magic919

Have you tried disabling the onboard RAID?

----------

## scaanjoon

I've tried disabling, rebooting, and re-enabling the RAID in the BIOS to see if the status would change.  It remained the same.  I think the problem is with mdadm rather than with the Intel fakeraid.

----------

## magic919

Why do you enable it in the BIOS?

----------

## scaanjoon

This is a dual-boot workstation (Gentoo and Windows 7).  The configuration I need for the data drive (i.e. the RAID-1) is 1.5 TB reserved for Linux and 0.5 TB reserved for Windows.  I've run the Windows side through countless boot cycles without difficulty.  The only time the RAID vanishes is when I boot into the Linux side.  As I mentioned, I've tried this three times now.  The first two times I used the 1.5 TB / 0.5 TB config, booting into both Windows and Linux when I had problems.  The last time, in an effort to localize the problem, I set the entire RAID to a single 2 TB Linux partition and only booted into Gentoo.  Even with Win7 out of the mix I'm unable to maintain a stable RAID.

----------

## magic919

It's common enough to get strange results like this when setting up RAID in the BIOS.  Only real hardware RAID would be a safe option.

----------

## scaanjoon

So what you're saying is that dmraid and mdadm are useless?  The only feasible RAID solution is a dedicated hardware RAID card... or just using the fakeraid and Windows 7 for the workstation?

----------

## scaanjoon

Update:  Again, this is for anyone with experience with this kind of configuration.  I couldn't find any indication that the RAID was being monitored, rebuilt, etc. in Linux, so as a last-ditch effort I rebooted into Windows 7.  I loaded the Intel Matrix Storage utility and it instantly started repairing the RAID.  It took the better part of 5 hours, but when it was completed I booted into Gentoo and found that the RAID-1 was once again available and, as far as I can tell, uncorrupted (I'm currently running md5deep against the 1.7 TB of test files I previously copied there).

So the question is: does anyone have experience / advice on getting mdadm, dmraid, or any other Linux RAID utility to play nice with Intel fakeraid in a dual-boot Linux / Win7 configuration?  The only time the RAID seems to be corrupted is after a boot into Linux.  Prior to shutdown I unmount the RAID, run "mdadm --wait-clean --scan" to mark it clean, and then do a graceful shutdown or reboot.  Thoughts, anyone?  I'd appreciate any and all advice.
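(Editor's note: one caveat with that shutdown procedure — `--wait-clean` relies on mdmon to write the clean flag into the IMSM metadata, so it silently does nothing if mdmon isn't running, and it doesn't wait for an in-progress verify.  A hypothetical pre-shutdown sequence; the mount point is a placeholder:)

```shell
# Hypothetical pre-shutdown sequence for the IMSM volume
umount /mnt/hold              # placeholder mount point
pgrep -a mdmon                # confirm the metadata daemon is actually running
mdadm --wait /dev/md126       # block until any resync/repair finishes
mdadm --wait-clean --scan     # then ask mdmon to mark the metadata clean
```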

----------

