# mdadm / kernel forgot my Raid5 Array [solved]

## RayDude

I upgraded my kernel to 3.2.12 a few months ago. Everything had been working for years before that.

I have four drives (/dev/sdf1 through /dev/sdi1) used in a RAID5 array. It crashed today while the system was up, reporting that three drives had failed.

smartctl disagrees and reports all drives as healthy.

When I try to assemble the array, I get no error messages indicating the nature of the failure. Note: since the drive letters tend to move around depending on the order in which a USB drive is detected, I changed mdadm.conf to specify the array by UUID so that mdadm would always find the drives. This has worked for years.
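(For reference, the mdadm.conf entry is along these lines; this is a sketch, the DEVICE line is an assumption, and the UUID is the one shown by --examine below:)

```
# /etc/mdadm.conf: identify the array by UUID so that device names
# shuffling around between boots doesn't matter.
DEVICE partitions
ARRAY /dev/md1 UUID=e15aad38:67509c10:01f9e43d:ac30fbff
```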

Now, however, when I specify the drives on the command line, I get this "verbose" output from mdadm:

```
server etc # mdadm -v -A /dev/md1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1
mdadm: looking for devices for /dev/md1
mdadm: /dev/sdf1 is identified as a member of /dev/md1, slot 0.
mdadm: /dev/sdg1 is identified as a member of /dev/md1, slot 1.
mdadm: /dev/sdh1 is identified as a member of /dev/md1, slot 2.
mdadm: /dev/sdi1 is identified as a member of /dev/md1, slot 3.
mdadm: added /dev/sdg1 to /dev/md1 as 1
mdadm: added /dev/sdh1 to /dev/md1 as 2
mdadm: added /dev/sdi1 to /dev/md1 as 3
mdadm: added /dev/sdf1 to /dev/md1 as 0
mdadm: /dev/md1 assembled from 1 drive - not enough to start the array.
```

There is no indication of what's wrong. The mdadm --examine output for the drives does offer some clues, though:

```
server etc # mdadm -E /dev/sdf1
/dev/sdf1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : e15aad38:67509c10:01f9e43d:ac30fbff (local to host server)
  Creation Time : Sat Sep  4 18:05:49 2010
     Raid Level : raid5
  Used Dev Size : 1465135936 (1397.26 GiB 1500.30 GB)
     Array Size : 4395407808 (4191.79 GiB 4500.90 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 1
    Update Time : Tue May  8 23:23:51 2012
          State : clean
 Active Devices : 1
Working Devices : 1
 Failed Devices : 3
  Spare Devices : 0
       Checksum : 9384e7fc - correct
         Events : 40866
         Layout : left-symmetric
     Chunk Size : 64K
      Number   Major   Minor   RaidDevice State
this     0       8       65        0      active sync
   0     0       8       65        0      active sync
   1     1       0        0        1      faulty removed
   2     2       0        0        2      faulty removed
   3     3       0        0        3      faulty removed
server etc # mdadm -E /dev/sdg1
/dev/sdg1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : e15aad38:67509c10:01f9e43d:ac30fbff (local to host server)
  Creation Time : Sat Sep  4 18:05:49 2010
     Raid Level : raid5
  Used Dev Size : 1465135936 (1397.26 GiB 1500.30 GB)
     Array Size : 4395407808 (4191.79 GiB 4500.90 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 1
    Update Time : Tue May  8 15:59:44 2012
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 93847fa0 - correct
         Events : 40842
         Layout : left-symmetric
     Chunk Size : 64K
      Number   Major   Minor   RaidDevice State
this     1       8       81        1      active sync   /dev/sdf1
   0     0       8       65        0      active sync
   1     1       8       81        1      active sync   /dev/sdf1
   2     2       8       97        2      active sync   /dev/sdg1
   3     3       8      113        3      active sync   /dev/sdh1
server etc # mdadm -E /dev/sdh1
/dev/sdh1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : e15aad38:67509c10:01f9e43d:ac30fbff (local to host server)
  Creation Time : Sat Sep  4 18:05:49 2010
     Raid Level : raid5
  Used Dev Size : 1465135936 (1397.26 GiB 1500.30 GB)
     Array Size : 4395407808 (4191.79 GiB 4500.90 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 1
    Update Time : Tue May  8 15:59:44 2012
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 93847fb2 - correct
         Events : 40842
         Layout : left-symmetric
     Chunk Size : 64K
      Number   Major   Minor   RaidDevice State
this     2       8       97        2      active sync   /dev/sdg1
   0     0       8       65        0      active sync
   1     1       8       81        1      active sync   /dev/sdf1
   2     2       8       97        2      active sync   /dev/sdg1
   3     3       8      113        3      active sync   /dev/sdh1
server etc # mdadm -E /dev/sdi1
/dev/sdi1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : e15aad38:67509c10:01f9e43d:ac30fbff (local to host server)
  Creation Time : Sat Sep  4 18:05:49 2010
     Raid Level : raid5
  Used Dev Size : 1465135936 (1397.26 GiB 1500.30 GB)
     Array Size : 4395407808 (4191.79 GiB 4500.90 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 1
    Update Time : Tue May  8 17:39:32 2012
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 2
  Spare Devices : 0
       Checksum : 93849742 - correct
         Events : 40843
         Layout : left-symmetric
     Chunk Size : 64K
      Number   Major   Minor   RaidDevice State
this     3       8      113        3      active sync   /dev/sdh1
   0     0       8       65        0      active sync
   1     1       0        0        1      faulty removed
   2     2       0        0        2      faulty removed
   3     3       8      113        3      active sync   /dev/sdh1
server etc # 
```

mdadm seems very confused.

From what I've been able to find by googling, it looks like if I do a --create with the drives specified, plus the --assume-clean option, the array may come back to life.

But since this 4TB array is almost full, I seriously hesitate to try that for fear of losing the data.

Does anyone have any experience with this? Should I upgrade the kernel? Perhaps downgrade back to my previous kernel?

Thanks in advance.

Edit: perhaps it has something to do with the "major" number changing to 8?
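Edit 2: the --examine dumps above contain the real clue: the Events counters disagree (40866 on sdf1 vs 40842/40843 on the others), so the kernel treats the three lower-count members as stale. A quick way to compare the counters side by side (a sketch using the device names above; needs root):

```shell
# A sketch: print each member's Events counter. Members whose count lags
# behind the highest one get kicked as "non-fresh" at assembly time.
for d in /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1; do
    printf '%s: Events %s\n' "$d" "$(mdadm -E "$d" 2>/dev/null | awk '/Events/ {print $3}')"
done
```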

----------

## richard.scott

Have you tried booting your old kernel?

Or tried adding --metadata=0 to the assemble command?

Is your partition type FD for your array partitions?

----------

## RayDude

No dice with --metadata:

```
server ~ # mdadm -A /dev/md1 --metadata=0 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1
mdadm: /dev/md1 assembled from 1 drive - not enough to start the array.
```

All are FD:

```
server ~ # fdisk /dev/sdf
Command (m for help): p
Disk /dev/sdf: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x7e0f46f7
   Device Boot      Start         End      Blocks   Id  System
/dev/sdf1              63  2930272064  1465136001   fd  Linux raid autodetect
/dev/sdf2      2930272065  3907024064   488376000   83  Linux
Command (m for help): q
server ~ # fdisk /dev/sdg
Command (m for help): p
Disk /dev/sdg: 1500.3 GB, 1500301910016 bytes
255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x1d175175
   Device Boot      Start         End      Blocks   Id  System
/dev/sdg1              63  2930272064  1465136001   fd  Linux raid autodetect
Command (m for help): q
server ~ # fdisk /dev/sdh
Command (m for help): p
Disk /dev/sdh: 1500.3 GB, 1500301910016 bytes
255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x2af70ec0
   Device Boot      Start         End      Blocks   Id  System
/dev/sdh1              63  2930272064  1465136001   fd  Linux raid autodetect
Command (m for help): q
server ~ # fdisk /dev/sdi
Command (m for help): p
Disk /dev/sdi: 1500.3 GB, 1500301910016 bytes
255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x09b7459b
   Device Boot      Start         End      Blocks   Id  System
/dev/sdi1              63  2930272064  1465136001   fd  Linux raid autodetect
Command (m for help): q
```

Do you know if it's safe to re-create the array with --assume-clean? Or will the --create command destroy the data on it?

Thanks for the help.

----------

## richard.scott

is /dev/md1 already running?

----------

## RayDude

 *richard.scott wrote:*   

> is /dev/md1 already running?

No, I have to stop it with mdadm -S /dev/md1 before I try the -A.

I just wish the damn thing would give me an error message.

----------

## richard.scott

What shows up in /proc/mdstat then? Any clues in there?

I assume you've loaded the relevant modules, e.g. raid5, raid6, etc.

Have you tried booting with the latest ISO?

----------

## RayDude

I haven't tried two of the things you suggested (the old kernel and booting from an ISO) because my server runs my internet mail, and because I have to go to work.

When /dev/md1 is running, cat /proc/mdstat gives me this:

```
server ~ # cat /proc/mdstat 
Personalities : [raid1] [raid6] [raid5] [raid4] [multipath] 
md1 : inactive sdf1[0](S) sdi1[3](S) sdh1[2](S) sdg1[1](S)
      5860543744 blocks
```

What does the (S) mean?

Thanks again.

----------

## RayDude

I should mention that I have another RAID5 array on this computer and it's working fine. It's the / (boot) partition.

----------

## richard.scott

I think it means it's a spare.

Does mdadm --run /dev/md1 start it?

----------

## RayDude

That got us more info.

```
server ~ # mdadm -A --run /dev/md1
mdadm: failed to RUN_ARRAY /dev/md1: Input/output error
mdadm: Not enough devices to start the array.
```

Looking at dmesg, we see this:

```
md: bind<sdg1>
md: bind<sdh1>
md: bind<sdi1>
md: bind<sdf1>
md: kicking non-fresh sdi1 from array!
md: unbind<sdi1>
md: export_rdev(sdi1)
md: kicking non-fresh sdh1 from array!
md: unbind<sdh1>
md: export_rdev(sdh1)
md: kicking non-fresh sdg1 from array!
md: unbind<sdg1>
md: export_rdev(sdg1)
md/raid:md1: device sdf1 operational as raid disk 0
md/raid:md1: allocated 4270kB
md/raid:md1: not enough operational devices (3/4 failed)
RAID conf printout:
 --- level:5 rd:4 wd:1
 disk 0, o:1, dev:sdf1
md/raid:md1: failed to run raid set.
md: pers->run() failed ...
```

What does it mean that it's not fresh? And what air freshener do I need to fix it up?

 :Smile: 

Thanks again.

----------

## RayDude

Updated update: my failure was not due to the kernel bug after all. It was caused by the external SATA link going down while the drives were being written. I guess 604 days of continuous uptime is hard on cheap drive bays. More at the end of the thread...

Update: I've joined the kernel mailing list, and it looks like this is a RAID5 bug introduced in a recent kernel. It's supposed to be a very rare failure, but others are hitting it too.

Apparently I have to rebuild the superblocks to recover the data. This requires knowing the order of the drives and, most importantly, the offsets of the partitions.

I'm hoping someone will put together a guide for me to follow.
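In the meantime, before attempting any superblock rebuild, something along these lines should capture the info a --create would need (a sketch; the output file names are just placeholders):

```shell
# Sketch: record everything a future --create would need, before touching
# anything. "|| true" keeps going if a device can't be read.
for d in /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1; do
    mdadm -E "$d" >> raid-superblocks.txt 2>/dev/null || true   # drive order, chunk size, layout
done
fdisk -l /dev/sdf /dev/sdg /dev/sdh /dev/sdi > raid-partitions.txt 2>/dev/null || true   # partition offsets
```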

----------

## richard.scott

Can you revert to an older kernel until this bug is fixed?

----------

## RayDude

Thanks Richard,

I have reverted to the previous kernel to ensure RAID5 safety, but my problem was not the kernel bug. The kernel developer told me that my metadata was okay but the drives were out of sync with each other.

I have an external eSATA four-drive enclosure, and apparently the eSATA cable went wonky on me.

I re-seated the cable just to make sure and used the following commands to bring it back up:

```
mdadm -v -A --force /dev/md1
fsck.ext4 -y /dev/md1
```
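After the forced assembly, a quick sanity check that all four members really came back (a sketch):

```shell
cat /proc/mdstat
# A healthy array looks something like:
#   md1 : active raid5 sdf1[0] sdg1[1] sdh1[2] sdi1[3]
#         4395407808 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
mdadm -D /dev/md1    # each member's state should read "active sync"
```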

Thanks again for your help. I had no idea there were so many ways to track down RAID errors. And now I know more about RAID!

Brian

----------

## tipp98

Dude, I'm having a similar problem. It started out with only one drive not being identified.

```
...
ata7.01: failed to IDENTIFY (I/O error, err_mask=0x11)
...
```

This looks like a common problem with port-multiplier hardware (e.g. SiI3124).

The array came up anyhow. While I was trying to work out the best way to proceed (knowing that the disks and the array were unchanged and in sync before this happened), a second drive became "faulty"...

```
Jul  9 19:02:35 Falcon kernel: [  805.027931] EXT4-fs (md5): mounted filesystem with ordered data mode. Opts: (null)
Jul  9 19:04:44 Falcon kernel: [  934.335705] ata7.00: failed to read SCR 1 (Emask=0x40)
Jul  9 19:04:44 Falcon kernel: [  934.335711] ata7.01: failed to read SCR 1 (Emask=0x40)
Jul  9 19:04:44 Falcon kernel: [  934.335715] ata7.02: failed to read SCR 1 (Emask=0x40)
Jul  9 19:04:44 Falcon kernel: [  934.335718] ata7.03: failed to read SCR 1 (Emask=0x40)
Jul  9 19:04:44 Falcon kernel: [  934.335721] ata7.04: failed to read SCR 1 (Emask=0x40)
Jul  9 19:04:44 Falcon kernel: [  934.335725] ata7.05: failed to read SCR 1 (Emask=0x40)
Jul  9 19:04:44 Falcon kernel: [  934.335728] ata7.06: failed to read SCR 1 (Emask=0x40)
Jul  9 19:04:44 Falcon kernel: [  934.335736] ata7.15: exception Emask 0x10 SAct 0x0 SErr 0x80000 action 0xe frozen
Jul  9 19:04:44 Falcon kernel: [  934.335739] ata7.15: irq_stat 0x00140010, PHY RDY changed
Jul  9 19:04:44 Falcon kernel: [  934.335743] ata7.15: SError: { 10B8B }
Jul  9 19:04:44 Falcon kernel: [  934.335750] ata7.00: exception Emask 0x100 SAct 0x0 SErr 0x0 action 0x6 frozen
Jul  9 19:04:44 Falcon kernel: [  934.335758] ata7.01: exception Emask 0x100 SAct 0x0 SErr 0x0 action 0x6 frozen
Jul  9 19:04:44 Falcon kernel: [  934.335765] ata7.02: exception Emask 0x100 SAct 0x0 SErr 0x0 action 0x6 frozen
Jul  9 19:04:44 Falcon kernel: [  934.335772] ata7.03: exception Emask 0x100 SAct 0x0 SErr 0x0 action 0x6 frozen
Jul  9 19:04:44 Falcon kernel: [  934.335780] ata7.04: exception Emask 0x100 SAct 0x0 SErr 0x0 action 0x6 frozen
Jul  9 19:04:44 Falcon kernel: [  934.335787] ata7.05: exception Emask 0x100 SAct 0x0 SErr 0x0 action 0x6 frozen
Jul  9 19:04:44 Falcon kernel: [  934.335795] ata7.06: exception Emask 0x100 SAct 0x0 SErr 0x0 action 0x6 frozen
Jul  9 19:04:44 Falcon kernel: [  934.335803] ata7.15: hard resetting link
Jul  9 19:04:46 Falcon kernel: [  936.399945] ata7.15: SATA link down (SStatus 0 SControl 0)
Jul  9 19:04:49 Falcon kernel: [  939.399933] ata7.15: qc timeout (cmd 0xe4)
Jul  9 19:04:49 Falcon kernel: [  939.399947] ata7.15: failed to read PMP GSCR[0] (Emask=0x5)
Jul  9 19:04:49 Falcon kernel: [  939.399951] ata7.15: PMP revalidation failed (errno=-5)
Jul  9 19:04:51 Falcon kernel: [  941.399986] ata7.15: hard resetting link
Jul  9 19:04:53 Falcon kernel: [  943.466625] ata7.15: SATA link down (SStatus 0 SControl 0)
Jul  9 19:04:53 Falcon kernel: [  943.693697] ata7.15: failed to read PMP GSCR[0] (Emask=0x1)
Jul  9 19:04:53 Falcon kernel: [  943.693701] ata7.15: PMP revalidation failed (errno=-5)
Jul  9 19:04:53 Falcon kernel: [  943.693706] ata7.15: limiting SATA link speed to 1.5 Gbps
Jul  9 19:04:58 Falcon kernel: [  948.466629] ata7.15: hard resetting link
Jul  9 19:05:00 Falcon kernel: [  950.649950] ata7.15: SATA link up 1.5 Gbps (SStatus 113 SControl 10)
Jul  9 19:05:00 Falcon kernel: [  950.650150] ata7.00: hard resetting link
Jul  9 19:05:01 Falcon kernel: [  950.970226] ata7.00: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
Jul  9 19:05:01 Falcon kernel: [  950.970255] ata7.01: hard resetting link
Jul  9 19:05:01 Falcon kernel: [  951.290225] ata7.01: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
Jul  9 19:05:01 Falcon kernel: [  951.290254] ata7.02: hard resetting link
Jul  9 19:05:01 Falcon kernel: [  951.610225] ata7.02: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
Jul  9 19:05:01 Falcon kernel: [  951.610254] ata7.03: hard resetting link
Jul  9 19:05:02 Falcon kernel: [  951.930226] ata7.03: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
Jul  9 19:05:02 Falcon kernel: [  951.930256] ata7.04: hard resetting link
Jul  9 19:05:02 Falcon kernel: [  952.250214] ata7.04: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jul  9 19:05:02 Falcon kernel: [  952.250243] ata7.05: hard resetting link
Jul  9 19:05:03 Falcon kernel: [  953.273286] ata7.05: failed to resume link (SControl 0)
Jul  9 19:05:03 Falcon kernel: [  953.273479] ata7.05: SATA link up 1.5 Gbps (SStatus 113 SControl 0)
Jul  9 19:05:03 Falcon kernel: [  953.273527] ata7.06: hard resetting link
Jul  9 19:05:03 Falcon kernel: [  953.593558] ata7.06: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
Jul  9 19:05:03 Falcon kernel: [  953.611283] ata7.00: configured for UDMA/100
Jul  9 19:05:03 Falcon kernel: [  953.623819] ata7.01: configured for UDMA/100
Jul  9 19:05:03 Falcon kernel: [  953.628332] ata7.02: configured for UDMA/100
Jul  9 19:05:03 Falcon kernel: [  953.632728] ata7.03: configured for UDMA/100
Jul  9 19:05:03 Falcon kernel: [  953.642242] ata7.04: configured for UDMA/100
Jul  9 19:05:03 Falcon kernel: [  953.642301] ata7.05: unsupported device, disabling
Jul  9 19:05:03 Falcon kernel: [  953.642304] ata7.05: disabled
Jul  9 19:05:03 Falcon kernel: [  953.642361] ata7: EH complete
Jul  9 19:20:46 Falcon kernel: [ 1896.518629] ata7.01: exception Emask 0x10 SAct 0x0 SErr 0x4010000 action 0xf
Jul  9 19:20:46 Falcon kernel: [ 1896.518635] ata7.01: SError: { PHYRdyChg DevExch }
Jul  9 19:20:46 Falcon kernel: [ 1896.518702] ata7.01: hard resetting link
Jul  9 19:20:47 Falcon kernel: [ 1897.236786] ata7.01: SATA link down (SStatus 0 SControl 310)
Jul  9 19:20:52 Falcon kernel: [ 1902.236495] ata7.01: hard resetting link
Jul  9 19:20:52 Falcon kernel: [ 1902.556777] ata7.01: SATA link down (SStatus 0 SControl 310)
Jul  9 19:20:57 Falcon kernel: [ 1907.556614] ata7.01: hard resetting link
Jul  9 19:20:58 Falcon kernel: [ 1907.876775] ata7.01: SATA link down (SStatus 0 SControl 310)
Jul  9 19:20:58 Falcon kernel: [ 1907.876853] ata7.01: disabled
Jul  9 19:20:58 Falcon kernel: [ 1907.876936] ata7: PMP SError.N set for some ports, repeating recovery
Jul  9 19:20:58 Falcon kernel: [ 1907.876988] ata7.00: hard resetting link
Jul  9 19:20:58 Falcon kernel: [ 1908.596775] ata7.00: SATA link down (SStatus 0 SControl 310)
Jul  9 19:21:03 Falcon kernel: [ 1913.596488] ata7.00: hard resetting link
Jul  9 19:21:04 Falcon kernel: [ 1913.916775] ata7.00: SATA link down (SStatus 0 SControl 310)
Jul  9 19:21:04 Falcon kernel: [ 1913.916876] ata7.00: limiting SATA link speed to 1.5 Gbps
Jul  9 19:21:09 Falcon kernel: [ 1918.916490] ata7.00: hard resetting link
Jul  9 19:21:09 Falcon kernel: [ 1919.236783] ata7.00: SATA link down (SStatus 0 SControl 310)
Jul  9 19:21:09 Falcon kernel: [ 1919.236860] ata7.00: disabled
Jul  9 19:21:09 Falcon kernel: [ 1919.236945] ata7: EH complete
Jul  9 19:21:09 Falcon kernel: [ 1919.236962] ata7.00: detaching (SCSI 6:0:0:0)
Jul  9 19:21:09 Falcon kernel: [ 1919.243286] md/raid:md5: Disk failure on sde1, disabling device.
Jul  9 19:21:09 Falcon kernel: [ 1919.243287] md/raid:md5: Operation continuing on 2 devices.
Jul  9 19:21:09 Falcon kernel: [ 1919.249979] sd 6:0:0:0: [sde] Synchronizing SCSI cache
Jul  9 19:21:09 Falcon kernel: [ 1919.250073] sd 6:0:0:0: [sde]  Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Jul  9 19:21:09 Falcon kernel: [ 1919.250077] sd 6:0:0:0: [sde] Stopping disk
Jul  9 19:21:09 Falcon kernel: [ 1919.250085] sd 6:0:0:0: [sde] START_STOP FAILED
Jul  9 19:21:09 Falcon kernel: [ 1919.250087] sd 6:0:0:0: [sde]  Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Jul  9 19:21:09 Falcon kernel: [ 1919.250113] ata7.01: detaching (SCSI 6:1:0:0)
Jul  9 19:21:09 Falcon kernel: [ 1919.276629] sd 6:1:0:0: [sdf] Synchronizing SCSI cache
Jul  9 19:21:09 Falcon kernel: [ 1919.276683] sd 6:1:0:0: [sdf]  Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Jul  9 19:21:09 Falcon kernel: [ 1919.276689] sd 6:1:0:0: [sdf] Stopping disk
Jul  9 19:21:09 Falcon kernel: [ 1919.276702] sd 6:1:0:0: [sdf] START_STOP FAILED
Jul  9 19:21:09 Falcon kernel: [ 1919.276705] sd 6:1:0:0: [sdf]  Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
...
```

That is where things went to hell and now ...

```
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 
md5 : inactive sdf1[0](S) sde1[2](S) sdg1[1](S) sdh1[4](S)
      3875034428 blocks super 1.2

unused devices: <none>
```

I guess I should have been using --no-degraded.

Anyhow, I cannot assemble the array (i.e. "mdadm -v -A --force /dev/md1" fails). What steps did you take to rebuild the array? I assume I should be able to use the --assume-clean option, but I wanted to get your input since you've been there, done that with an array that was probably more broken than mine.

Also, if your mailing-list discussion included any detailed troubleshooting (for determining whether the superblock or metadata was out of sync), a link here would be handy. If not, don't bother.

Thanks,

Kyle

----------

## tipp98

Never mind, I needed to stop the array first.
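i.e. something along these lines (a sketch, with device names taken from my mdstat output above):

```shell
mdadm -S /dev/md5    # stop the inactive, half-assembled array first
mdadm -v -A --force /dev/md5 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1
```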

----------

