# Surrogate LVM2+Raid1

## cynric

I'm reviving an old Gentoo box that had two 500GB drives in a RAID1 mirror with LVM2 on top, set up following Gentoo's RAID+LVM2 Quick Install guide. Each drive had a single partition; the two partitions were mirrored with RAID1, and LVM sat on top with three logical volumes.

The problem is that I've moved the drives to a new box and can't get LVM to recognize them. Oddly, fdisk reports the partition type as FAT32. RAID and Device-Mapper are compiled into the kernel, and the partitions did sync up after I added them to /dev/md1. None of the *scan or *display commands find anything, though. I'm not sure what other information to provide, but here's everything I can think of. Any help would be greatly appreciated.
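One quick way to verify that device-mapper really is available to the running kernel (a sketch; it assumes /proc is mounted, and works whether the driver is built in or loaded as dm_mod):

```shell
# Report whether the device-mapper driver is registered with the running kernel
if grep -qi device-mapper /proc/devices; then
    echo "device-mapper: present"
else
    echo "device-mapper: missing - load dm_mod or enable it in the kernel config"
fi
```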

Pertinent fdisk:

```
Disk /dev/sde: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x9d015d38

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1   *           1       60801   488384001    c  W95 FAT32 (LBA)

Disk /dev/sdf: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x9d015d38

   Device Boot      Start         End      Blocks   Id  System
/dev/sdf1   *           1       60801   488384001    c  W95 FAT32 (LBA)
```

ls -hAl /dev/md1:

```
brw-rw---- 1 root disk 9, 1 Apr 20 22:32 /dev/md1
```

Pertinent cat /proc/mdstat:

```
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
md1 : active raid1 sdf1[1] sde1[0]
      488383869 blocks super 1.1 [2/2] [UU]
```

vgscan -v:

```
Wiping cache of LVM-capable devices
Wiping internal VG cache
Reading all physical volumes.  This may take a while...
Finding all volume groups
```

lvscan -v:

```
Finding all logical volumes
```

pvscan -v:

```
Wiping cache of LVM-capable devices
Wiping internal VG cache
Walking through all physical volumes
No matching physical volumes found
```

vgdisplay -v:

```
Finding all volume groups
```

lvdisplay -v:

```
Finding all logical volumes
```

pvdisplay -v:

```
Scanning for physical volume names
```

The original /etc/lvm/backup/vg file:

```
# Generated by LVM2: Sat Aug 11 21:04:24 2007

contents = "Text Format Volume Group"
version = 1

description = "Created *after* executing 'lvextend -L +40G vg/backup'"

creation_host = "haven"   # Linux haven 2.6.21-ck2 #3 Tue Jul 3 19:36:03 CDT 2007 i686
creation_time = 1186884264   # Sat Aug 11 21:04:24 2007

vg {
   id = "32UJql-fXR7-gHO0-OCFJ-jKyE-0RPp-Fzf1Hs"
   seqno = 9
   status = ["RESIZEABLE", "READ", "WRITE"]
   extent_size = 8192      # 4 Megabytes
   max_lv = 0
   max_pv = 0

   physical_volumes {

      pv0 {
         id = "eyCEcg-xyhu-w1qA-M1hf-DsJU-8mzD-giXY7e"
         device = "/dev/md1"   # Hint only
         status = ["ALLOCATABLE"]
         pe_start = 384
         pe_count = 119234   # 465.758 Gigabytes
      }
   }

   logical_volumes {

      pictures {
         id = "JUeV5B-ainE-RaOx-0iuK-pY6g-scfM-Zerx7Y"
         status = ["READ", "WRITE", "VISIBLE"]
         segment_count = 1

         segment1 {
            start_extent = 0
            extent_count = 25600   # 100 Gigabytes

            type = "striped"
            stripe_count = 1   # linear

            stripes = [
               "pv0", 0
            ]
         }
      }

      backup {
         id = "0k2e2Y-WHb8-aw3L-fzzG-1orJ-OcYL-YOxcGH"
         status = ["READ", "WRITE", "VISIBLE"]
         segment_count = 2

         segment1 {
            start_extent = 0
            extent_count = 25600   # 100 Gigabytes

            type = "striped"
            stripe_count = 1   # linear

            stripes = [
               "pv0", 25600
            ]
         }

         segment2 {
            start_extent = 25600
            extent_count = 10240   # 40 Gigabytes

            type = "striped"
            stripe_count = 1   # linear

            stripes = [
               "pv0", 66560
            ]
         }
      }

      music {
         id = "2JG3ya-8pLx-q0MC-ruXA-FQpV-gg2g-oOVV2e"
         status = ["READ", "WRITE", "VISIBLE"]
         segment_count = 2

         segment1 {
            start_extent = 0
            extent_count = 42434   # 165.758 Gigabytes

            type = "striped"
            stripe_count = 1   # linear

            stripes = [
               "pv0", 76800
            ]
         }

         segment2 {
            start_extent = 42434
            extent_count = 15360   # 60 Gigabytes

            type = "striped"
            stripe_count = 1   # linear

            stripes = [
               "pv0", 51200
            ]
         }
      }
   }
}

```

----------

## HeissFuss

First of all, this is probably a silly question, but do you have dm-mod loaded?

I'm not sure whether it behaves any differently from pvscan, but does 'lvmdiskscan -v' show anything different?

Your RAID assembled, which seems like a good sign. If something had wiped the disks, I don't think the md metadata would have been readable either.

Since it's just two mirrored disks, you may be able to manually fail one of the drives (for safety) and then try to restore the VG from the backup config you have. You'll need to pvcreate /dev/md1 with the same UUID it had before, using the VG backup file:
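Failing one half of the mirror first might look like this (a sketch; the device names are taken from the fdisk output above, so double-check them against your own system before running anything):

```shell
# Mark sdf1 as failed and pull it from the array; that drive then keeps
# an intact copy of the data in case the recovery attempt goes wrong
mdadm /dev/md1 --fail /dev/sdf1
mdadm /dev/md1 --remove /dev/sdf1

# The array should now show as degraded: [2/1] [U_]
cat /proc/mdstat
```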

```
pvcreate -v --uuid "eyCEcg-xyhu-w1qA-M1hf-DsJU-8mzD-giXY7e" --restorefile /etc/lvm/backup/vg /dev/md1
```

After that you can see if the vg can come online and if your data is there.  If not, you have the other disk as a backup.
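If the pvcreate goes through, restoring the metadata and bringing the group online would go roughly like this (the VG name 'vg' is taken from the backup file above):

```shell
# Write the saved metadata back onto the recreated physical volume
vgcfgrestore -f /etc/lvm/backup/vg vg

# Activate the volume group, then check whether the LVs show up
vgchange -ay vg
lvscan
```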

----------

