# lvm2, lv's not mounting during boot [Solved, kinda]

## needlern1

I'm a total noob with lvm and any kind of raid. I'm building a multimedia box. I've put two sata drives in it and am using genkernel --with-lvm --menuconfig all. Raid and lvm are built into the kernel (vs. modules).

I followed the gentoo lvm2 doc for installation. Getting ready for the first reboot, I realized I had put the wrong fs's on the vg's. On this first attempt I used the doc's example and named the vg "vg". I went back to ground zero, started the lvm2 part again and went forward from there. During this second attempt I somehow wound up creating an extra PV (I forget what I was trying to do).

I completed the build with these fs's: / = reiser3.6 on /dev/sda6; /boot = ext2 on /dev/sda2; swap on /dev/sda5; lv's with xfs for /var, /tmp, /opt, /usr and /home. I also have w2k on /dev/sda1 and am not considering including it in the lvm setup.

During boot-up the system looks for the missing PV that no longer exists. It lists 20 or so identical lines of "Can't find ... PV ... uuid #...". I'd like to use dmsetup to remove it, but don't know how.

When boot completes, the only things mounted are /dev/sda6, proc and so on. I have to do:

```
vgchange -a y
```

and then manually mount the lv's from vg0 (the new vg name):

```
mount /usr
```

and repeat the mount command for the other 4 dir's. Then all is well with the system. Of course, some system services don't get started, as their files can't be found during boot-up.
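For the record, the whole manual workaround amounts to the sequence below. This sketch only collects and prints the commands (a dry run), so they can be reviewed before running them for real as root; the five mount points are the ones from my setup above.

```shell
# Dry-run sketch of the manual recovery: activate all volume groups, then
# mount each LV-backed directory (each already has an /etc/fstab entry).
# The commands are collected into PLAN and printed instead of executed;
# run them by hand as root once they look right.
PLAN="vgchange -a y
"
for dir in /usr /var /tmp /opt /home; do
    PLAN="${PLAN}mount ${dir}
"
done
printf '%s' "$PLAN"
```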

I've read the lvm2 HowTo at tldp.org, the man page for dmsetup and some other threads, and my eyes haven't glazed over too much  :Smile: 

What do you need to know to help me with these two things?

TIA,

Bill

EDIT - the only thing dmsetup returns for "info" is related to the new group vg0. Nothing on the PV I'm trying to get rid of.

----------

## Naughtyus

;( I've got the same problem here

----------

## erikm

As root, do

```
~# pvdisplay
```

This lists the devices on which LVM2 physical volumes are created. Find the offending one, and do

```
~# pvremove <offending volume>
```

You might want to use this partition for something else; remember to change the partition type using fdisk (e.g. type 83 for a regular Linux filesystem).

EDIT: Tab completing a little with 'lv', I found the utility 'lvmdiskscan'; might work better for you than pvdisplay...

----------

## needlern1

Thanks ErikM. I ran pvdisplay and it showed me all of the pv's, including the uuid I'm trying to remove. However, it lists the device as unknown. I think I understand this. I set up my /dev/sda with 3 primary partitions and the last an extended partition.

/dev/sda7 is where I tried to do the PV (vg) install. It now houses my mounted vg0 lv's. I think I need to find the script or file that still remembers that old vg and perhaps modify it. I tried deleting /etc/lvm/.cache and that did not help. Of course I can't unmount vg0, as it houses /var, /tmp, /opt, /usr and /home.
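In case it helps anyone else: the stale vg has to be recorded somewhere on disk, and LVM keeps text copies of its metadata under /etc/lvm (backup/, archive/, and the .cache file). Grepping those for the offending UUID shows which files still mention it. The sketch below fakes that directory tree in a temp dir so it is safe to run anywhere; the UUID and file names are made up.

```shell
# Demo: find which LVM state files still mention a stale PV UUID.
# On a real box you would run:  grep -rl "<uuid>" /etc/lvm
# Here we build a fake /etc/lvm tree in a temp dir so the demo is
# self-contained; the UUID below is hypothetical.
UUID="Qx1fake-uuid-of-the-lost-pv"
tree=$(mktemp -d)
mkdir -p "$tree/backup" "$tree/archive"
printf 'id = "%s"\n' "$UUID" > "$tree/archive/vg_00001.vg"   # stale vg metadata
printf 'id = "some-other-uuid"\n' > "$tree/backup/vg0"       # current vg metadata

hits=$(grep -rl "$UUID" "$tree")   # files that still reference the stale UUID
echo "$hits"
rm -rf "$tree"
```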

During boot up, right before "Setting up the Logical Volume Manager..." is this line:

```
System.map not found - unable to check symbols
```

which might also have a bearing on this. I just double-checked my /boot dir and linked (ln -s) the System.map-genkernel.... to System.map. A reboot produced no difference, so I don't know what that's all about.

I also did lv and pv tab-completes and tried several of the offerings. None of them would remove the pv that "can't be found".

Still stuck,

Bill

----------

## erikm

 *needlern1 wrote:*   

> Thanks ErikM. I ran pvdisplay and it showed me all of the pv's, including the uuid I'm trying to remove. However, it lists the device as unknown. I think I understand this. I set up my /dev/sda with 3 primary partitions and the last an extended partition. 
> 
> /dev/sda7 is where I tried to do the PV (vg) install. It now houses my mounted vg0 lv's. I think I need to find the script or file that still remembers that old vg and perhaps modify it. I tried deleting /etc/lvm/.cache and that did not help. Of course I can't unmount vg0, as it houses /var, /tmp, /opt, /usr and /home.
> 
> During boot up, right before "Setting up the Logical Volume Manager..." is this line:
> ...

 

You should not have to use anything other than pv-, vg- and lvremove to get rid of the extra volume. But for Gog's sake, are you trying to manage your lvm with the volumes mounted?!   :Shocked: 

Tar up your entire system, put it on a backup disk or burn it to CD, start with the live CD and mess with your lvm from there. You might also want to verify that your physical volumes have the right hex code ('8e') in the partition table:

```
hostname ~ # fdisk -l /dev/sda

Disk /dev/sda: 80.0 GB, 80000000000 bytes
255 heads, 63 sectors/track, 9726 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1         123      987966   82  Linux swap / Solaris
/dev/sda2             124         134       88357+  83  Linux
/dev/sda3             135         196      498015   83  Linux
/dev/sda4             197        9726    76549725   8e  Linux LVM
```

EDIT: The bit about System.map not being found is nothing to worry about. It's either the System.map missing from your boot directory, or from your kernel source directory; I can't remember which (I got it and fixed it when I last rebooted, some time back in November   :Wink:   ). Anyway, just copy the one over to the other to get rid of the error message.

----------

## needlern1

ErikM wrote:

 *Quote:*   

> You should not have to use anything other than pv-, vg- and lvremove to get rid of the extra volume. But for Gog's sake, are you trying to manage your lvm with the volumes mounted?! 
> 
> Tar up your entire system, put it on a backup disk or burn it to CD, start with the live CD and mess with your lvm from there. You might also want to verify that your physical volumes have the right hex code ('8e') in the partition table

 

Hex code 8e exists where appropriate, as I can mount vg0; also confirmed with fdisk -l. I understand the concern about backing up and am addressing it. If I don't mount the lv's /usr, /tmp, /var, /home and /opt, there's not much I can accomplish. This holds true both in a "real-time" boot and in a livecd chroot. Just using the livecd (not chrooting) to run "vgdisplay" gives me the same "Couldn't find device with uuid '............'", then "Couldn't find all physical volumes for volume group vg", and finally "Volume group "vg" doesn't exist". Then it displays:

```
--- Volume group ---
VG Name               vg0
System ID
Format                lvm2
Metadata Areas        2
...
(and the last line is)
VG UUID               with a long number
```

I'm at the point where I'm no longer concerned with the error messages about not finding "vg", if I can just get the lv's in vg0 mounted during boot-up. If a mounting answer is not forthcoming pretty soon, it will be quicker to cut my losses and rebuild everything, from a time-invested standpoint.

I ran a "find" for System.map. It only found the one in the kernel tree (/boot was mounted). Copied it to /boot (identical date/time/size to the genkernel System.map), rebooted and still had the same error message. Least of my worries. Oh well.

Thanks again,

Bill

----------

## erikm

 *needlern1 wrote:*   

> 
> 
> Hex code 8e exists where appropriate, as I can mount vg0; also confirmed with fdisk -l. I understand the concern about backing up and am addressing it. If I don't mount the lv's /usr, /tmp, /var, /home and /opt, there's not much I can accomplish. 

 

You can activate your lvm pv's, vg's and lv's from the livecd, without mounting anything. Just use vgchange.

As for your problem, try this:

```
~# man vgreduce
```

  :Wink: 

----------

## needlern1

Again, thanks ErikM. I'm still not able to stop the system looking for the no-longer-existing "vg" PV. All of the lv/pv tools will work wonders on my functioning vg0 (the one I can't seem to get automatically mounted at boot), and none of the tools will do anything for the "vg" PV (because it does not exist).

I was thinking of reformatting my /dev/sda7, which is one of the two PV disks. But, I think, even if I do that and recreate the PVs, the system is still going to try to find the PV "vg" that doesn't exist. There must be some file somewhere on / or /boot to edit. I've searched around and found similar "Couldn't find ..." errors, but none that seem to solve my dilemma.

I tried making a "volume_list" in /etc/lvm/lvm.conf, listing just "vg0" as the one to activate at boot, but / isn't mounted at that point, so that did not help.

I think I'll work on my taxes for awhile. That may afford me a little more pleasure right about now   :Confused: 

Bill

----------

## erikm

 *man vgreduce wrote:*   

> NAME
> 
>        vgreduce - reduce a volume group
> 
> SYNOPSIS
> ...

 

Did you try googling for your error?

----------

## Noe

Hello ErikM,

maybe you will help me....

I have gone all the steps in setting up LVM2 as mentioned in http://gentoo-wiki.com/HOWTO_Install_Gentoo_on_an_LVM2_root_partition#The_Second_Easiest_Way:

  - I have enabled LVM in the kernel together with device-mapper:

```
Device Drivers  --->
  Multi-device support (RAID and LVM)  --->
    [*] Multiple devices driver support (RAID and LVM)
    < >   RAID support
    <*>   Device mapper support
```

  - I have compiled Device-mapper statically into the kernel:

```
USE="static" emerge device-mapper
```

  - I have enabled ramdisk and initial ramdisk support

  - I have modified the lvm.conf in following way to avoid scanning all devices:

```
filter = [ "a|^/dev/hda[12]|", "a|^/dev/vg01$|", "r/.*/" ]
```
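Such a filter can be sanity-checked against device names with grep -E, since the fragments between the delimiters are plain regexes. A rough sketch (real lvm.conf evaluation is first-match-wins over both accept and reject rules, and /dev/hda3 as the PV partition is an assumption for illustration):

```shell
# Emulate the accept patterns from the filter above and see which device
# names would survive scanning. Because of the trailing "r/.*/" rule,
# anything not matched by an accept pattern is skipped - so the partition
# actually holding the PV (assumed /dev/hda3 here) needs its own accept entry.
accepts='^/dev/hda[12] ^/dev/vg01$'
verdicts=""
for dev in /dev/hda1 /dev/hda2 /dev/hda3 /dev/vg01; do
    ok=rejected                       # the trailing r/.*/ rejects by default
    for re in $accepts; do
        echo "$dev" | grep -Eq "$re" && ok=accepted
    done
    verdicts="$verdicts$dev=$ok "
    echo "$dev: $ok"
done
```

If the PV's partition comes out "rejected" here, vgscan inside the initrd will never see it.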

  - I have edited lvm2create_initrd to set its size (to 8192 kB) and created the initrd with it:

```
sh ./lvm2create_initrd -c /etc/lvm/lvm.conf 2.6.11-gentoo-r3
```

  - my /etc/fstab:

```
/dev/hda1                  /boot       ext2            noauto,noatime         1 2
/dev/hda2                  none        swap            sw                     0 0
/dev/mapper/vg01-lvol1     /           xfs             noatime                0 1
/dev/mapper/vg01-lvol2     /home       xfs             noatime                0 0
none                       /proc       proc            defaults               0 0
none                       /dev/shm    tmpfs           nodev,nosuid,noexec    0 0
```

  - my /boot/grub/grub.conf: 

```
title Gentoo with LVM
    kernel /boot/kernel-lvm2-2.6.11-gentoo-r3 root=/dev/ram0 lvm2root=/dev/vg01/lvol1 init=/linuxrc ramdisk=8192 video=vesafb:ywrap,mtrr,1024x768-16@85 splash=silent,theme:emergence
    initrd /boot/initrd-lvm2-2.6.11-gentoo-r3.gz
```

Unfortunately, when booting, the corresponding logical volumes are not found - the system drops to an lvm2rescue prompt, but there is no command there to rescue lvm2.

Could you please help?

----------

## erikm

 *Noe wrote:*   

> Hello ErikM,
> 
> maybe you will help me....
> 
> 

 

Certainly  :Very Happy: .

 *Noe wrote:*   

> 
> 
> I have gone all the steps in setting up LVM2 as mentioned in http://gentoo-wiki.com/HOWTO_Install_Gentoo_on_an_LVM2_root_partition#The_Second_Easiest_Way:
> 
>   - I have enabled LVM in the kernel together with device-mapper:
> ...

 

Doesn't really matter here, but the 'static' USE flag only makes the (in this case: device-mapper) userspace binaries statically linked instead of dynamically linked. It has nothing to do with the kernel.

 *Noe wrote:*   

> 
> 
>   - I have enabled ramdisk and initial ramdisk support
> 
>   - I have modified the lvm.conf in following way to avoid scanning all devices:
> ...

 

What I typically do in an LVM based Gentoo install is as follows:

1. Fdisk my harddrive: first a reasonably sized swap space, then a boot partition, then a root partition (size depends on how much I want to run as logical volumes, usually 100 - 500 MB), and lastly one big LVM2 volume. Verify the hex codes in the partition table.

2. Create the necessary physical volume(s), volume groups and logical volumes. Activate with vgchange.

3. Mount root, create directories in root, mount corresponding logical volumes.

4. Chroot and do the stage 1 on 3 dance, emerging lvm2, device-mapper etc. Make sure the lv's are activated, show up and are functional in the chroot.

5. Reboot.

That is, I typically don't put root on LVM. I can't say I see why you would, quite frankly.
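For completeness, steps 1 and 2 boil down to something like this (a dry-run sketch that just prints the commands rather than executing them; the device name, VG name and LV sizes are made up):

```shell
# Dry-run of the PV/VG/LV creation from steps 1-2. /dev/sda4, vg0 and the
# sizes are hypothetical - adjust to your own partition table. Printing
# instead of executing keeps this safe to run anywhere.
PLAN=""
for cmd in \
    "pvcreate /dev/sda4" \
    "vgcreate vg0 /dev/sda4" \
    "lvcreate -L 5G -n usr vg0" \
    "lvcreate -L 5G -n home vg0" \
    "vgchange -a y vg0"
do
    PLAN="$PLAN$cmd
"
done
printf '%s' "$PLAN"
```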

So, my questions to you are:

1. What is the output of pv-, vg- and lvdisplay <your device>?

2. Are you positive you got the lvm2create_initrd step right (i.e. search, google)?

3. Your initrd still has the gzip extension in grub.conf. Is this consistent with the actual name of the initrd (mine don't have the .gz extension)?

----------

## Noe

Hi ErikM,

I have created boot partition (ext2), swap and a LVM partition (8e code for it).

I have created one pv for the LVM partition, created vg01 and two logical volumes on it - lvol1 and lvol2. vg01 is ok with two logical volumes active.

I have enabled dm-mod with modprobe.

I have mounted /mnt/gentoo/, /mnt/gentoo/boot/, /mnt/gentoo/proc/, /mnt/gentoo/dev/, /mnt/gentoo/dev/mapper.

I have chrooted to the new environment with no problem and did stage3 there, set up portage, set USE variables, and emerged lvm2, device-mapper and mkinitrd.

I have compiled the kernel there, and edited lvm.conf and lvm2create_initrd (setting a concrete size for the initrd file). The script gzips the initrd it creates.

```
sh ./lvm2create_initrd -c /etc/lvm/lvm.conf 2.6.11-gentoo-r3
```

I do not know if the script is totally ok, but people use it and have had success with it when setting up lvm on root (on gentoo).

Then I even export the vg (vgexport -a), deactivate it (vgchange -a n) and exit the chroot. Then I unmount all the mounted filesystems and reboot.

When rebooting, the ramdisk is mounted. But when vg01 should be activated, it is not activated at all.

I even filtered out all other devices by editing lvm.conf 

```
filter = [ "a|^/dev/hda[12]|", "a|^/dev/vg01$|", "r/.*/" ]
```

It is supposed that the node for vg will be created by executing the following lines added to /etc/init.d/checkroot

```
vgscan --mknodes --ignorelockingfailure
vgchange -ay --ignorelockingfailure
```

but I have a suspicion that the system cannot find /etc/init.d/checkroot and never gets to executing these two important lines.

so....

yesterday evening I started to set up my gentoo with LiveCD 2005.1 instead of 2004.1  :Smile:  so I will see if the problem lies in the version of the kernel etc...

----------

## erikm

Well, the only main difference between our methods, apart from you putting root on an lvm (and I must reiterate that I think that requires some form of justification), is that you deactivate your vg's before rebooting. Why do you do that? AFAIK, your vg's should be activated in your chroot, and should not have to be deactivated and reactivated every time you reboot...?   :Confused: 

----------

## Noe

Ooh, ErikM, you may be right!

I can't wait to get home from work to test it. I will respond!

----------

## RoundsToZero

 *needlern1 wrote:*   

> During boot up, right before "Setting up the Logical Volume Manager..." is this line:
> 
> ```
> System.map not found - unable to check symbols
> ```
> ...

 

You said your /usr partition was on a logical volume.  This means that /usr/src/linux/System.map is not available when modules-update wants to run because the LV hasn't been mounted yet (this error happens right before modules are loaded from /etc/modules.autoload.d/kernel-2.6).  However, it is not a big deal.  What I did was just put modules-update in /etc/conf.d/local.start.  The only time this might not work is if you install a new kernel with its new modules, and modules-update needs to be run before the next boot.  In that case, by the time you get to starting "local" when you reboot, it might be too late, and some modules may have failed to load (but who knows, maybe it's fine).  Just run it before you reboot if you are doing something with kernels or modules.  I actually just put modules-update in local.stop as well haha
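In other words, the change is just one line appended to the local script (the path is the baselayout-1 convention; this demo writes to a temp file standing in for /etc/conf.d/local.start so it is harmless to run):

```shell
# Demo of the workaround: append modules-update to local.start so it runs
# once all local filesystems (including /usr on LVM) are mounted.
# A temp file stands in for /etc/conf.d/local.start here.
f=$(mktemp)
{
    echo '# /etc/conf.d/local.start - commands run at the end of boot'
    echo 'modules-update'
} > "$f"
contents=$(cat "$f")
echo "$contents"
rm -f "$f"
```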

----------

## needlern1

@ErikM - I did indicate that I had searched for my error message.

@RoundsToZero - Thanks, but no cigar.

RoundsToZero wrote:

 *Quote:*   

> You said your /usr partition was on a logical volume. This means that /usr/src/linux/System.map is not available when modules-update wants to run because the LV hasn't been mounted yet (this error happens right before modules are loaded from /etc/modules.autoload.d/kernel-2.6). However, it is not a big deal. What I did was just put modules-update in /etc/conf.d/local.start. The only time this might not work is if you install a new kernel with its new modules, and modules-update needs to be run before the next boot. In that case, by the time you get to starting "local" when you reboot, it might be too late, and some modules may have failed to load (but who knows, maybe it's fine). Just run it before you reboot if you are doing something with kernels or modules. I actually just put modules-update in local.stop as well haha

 

I nano'd "modules-update" into /etc/conf.d/local.start and ...stop. Rebooted. Had the same error messages and the unmounted /usr, /var, /tmp, /opt and /home directories. It made no difference, because the system errors were occurring before / gets mounted. I've also tried putting a boot-line command of "vgscan --mknodes" in and it did not do anything new. Now that I have the box back up again, tomorrow I'm going to unmerge lvm (under the liveCD) and re-emerge it and see what I wind up with. Thanks to all.

Bill

----------

## needlern1

Unmerging lvm did not help. I wound up, from the chroot, backing out and removing the lv's, then the pv's. Then I rebooted and did not see any error messages, other than not being able to find the missing 5 dir's.

LiveCD back in at the Chapter 4 point. Recreated everything from that point on. Completed the installation and rebooted. What a treat! Zero messages of any kind! Everything mounted as expected!   :Very Happy:   I'm currently rebuilding glibc, then will head toward MythTV land.

Thanks again for the help offered. I'm going to mark this "Solved, kinda" if I have space. Bill

----------

## Noe

 *ErikM wrote:*   

> Well, the only main difference between our methods, apart from you putting root on an lvm (and I must reiterate that I think that requires some form of justification), is that you deactivate your vg's before rebooting. Why do you do that? AFAIK, your vg's should be activated in your chroot, and should not have to be deactivated and reactivated every time you reboot...?  

 

Hello ErikM,

this was not the solution. 

The corresponding vg is deactivated >>always<< when shutting down (and then rebooting). It is intended that vg01 be activated by vgscan and vgchange (the lines added to checkroot), but this and only this is not successful. It seems that the created initrd is faulty in some way, although I personally doubt it.

I have talked a little with one of my friends about the problem - he will send me his configuration info so I can have a look.

----------

## erikm

 *Noe wrote:*   

> 
> 
> Hello ErikM,
> 
> this was not the solution. 
> ...

 

Well, of course the vg's are deactivated when LVM shuts down, but you should still not do it manually. AFAIK, manually deactivating a vg deactivates it permanently, i.e. it is not brought up again on an LVM restart.

 *Noe wrote:*   

> 
> 
>  It seems that the created initrd is faulty in some way although I personally doubt it.

 

This would be my guess; creating an initrd with a proper pivot root init script is not trivial, and may not work the same way on all kernels and hardware setups. This problem is only due to your insistence on putting root on a logical volume; again (third time and counting), why the hell do you want that?

----------

## Noe

Hello ErikM,

why the hell not?  :Smile: 

The purpose of LVM is to make services uninterruptible - if LVM is combined with RAID, chances are the uptime of the box will be something to look at  :Smile: 

What's the point of using LVM when the root filesystem is not protected by it? If I lost the hdd on which / resides, everything would go to hell right away, and the other disks with LVM probably too. One solution could be to have RAID on the boot and root filesystems and LVM (with RAID) on the others.

Having LVM on filesystems other than root and boot is a piece of cake, but not this crazy case of taking / into account.

Moreover, there is no bootloader (on Linux distros) which knows LVM, so /boot always has to be protected by something else.

----------

## erikm

 *Noe wrote:*   

> Hello ErikM,
> 
> why the hell not? 
> 
> The purpose of LVM is to make the services uninterruptable - if the LVM is combined with RAID chances are the uptime of the box will be something to look at 
> ...

 

If by 'make the services uninterruptable' you mean you are able to unmount, resize and reformat partitions selectively without rebooting using LVM, then sure, this is true for all partitions except the root. It is not possible to unmount root and still have a running system, unless you have chrooted into something like a live CD, using a mount point outside the regular file system, in a ramdisk or similar. This will be tremendously difficult at best. There is no added advantage to RAID when using LVM; your data is protected equally well by RAID, with or without LVM.

That is, you gain absolutely squat by putting root on LVM, as opposed to having root on a small (<100 MB) regular partition containing /bin, /sbin, /lib, /etc and the tmpfs mount points. The only difference is the hassle you are now experiencing.

But then, if you feel like experimenting, be my guest. Linux tinkering is kinda fun, after all...   :Wink: 

----------

## Noe

 *Quote:*   

> If by 'make the services uninterruptable' you mean you are able to unmount, resize and reformat partitions selectively without rebooting using LVM, then sure, this is true for all partitions except the root. It is not possible to unmount root and still have a running system, unless you have chrooted into something like a live CD, using a mount point outside the regular file system, in a ramdisk or similar. This will be tremendously difficult at best.

 

No, ErikM, I didn't mean that. I meant a solution where you have LVM on all filesystems and all of them are - for example - mirrored. In such a case, when your disk breaks, nothing happens. All logical volumes are mirrored, so you can simply pull the broken disk out of the fully running machine, plug in a fresh disk, and with 3 commands you have everything as before - in LVM and fully mirrored.

 *Quote:*   

> That is, you gain absolutely squat by putting root on LVM, as opposed to having root on a small (<100 MB) regular partition containing /bin, /sbin, /lib, /etc and the tmpfs mount points. The only difference is the hassle you are now experiencing.

 

No, really not. The only difference is a good sleep when such a machine with fully functional LVM is running in some production env.

 *Quote:*   

> But then, if you feel like experimenting, be my guest. Linux tinkering is kinda fun, after all...  

 

Yes, it is   :Laughing:   . Being such a big fan of Linux, I must say now that LVM on Linux is not yet what it is intended to be.  :Shocked: 

----------

## erikm

 *Noe wrote:*   

> No, ErikM, I didn't mean that. I meant a solution where you have LVM on all filesystems and all of them are - for example - mirrored. In such a case, when your disk breaks, nothing happens. All logical volumes are mirrored, so you can simply pull the broken disk out of the fully running machine, plug in a fresh disk, and with 3 commands you have everything as before - in LVM and fully mirrored.

 

I'm not following you. There are basically two cases related to your statement here:

1. Running LVM only, in striped (not mirrored) mode. This emulates RAID0, which means there is no redundancy - if a drive dies, your data is lost. LVM cannot do mirroring over PV's.

2. Running LVM combined with software or hardware RAID in a redundant configuration, 1 or higher. In this case, the RAID volume is typically presented as a single block device to the LVM, just as if it was a single disk, and LVM is run in normal mode.

In summary, the redundancy you mention can only be achieved with actual RAID in some form, and is not due to or enhanced in any form by LVM.

AFAICT, LVM gives you a smarter way of handling more partitions, no more, no less.

----------

## Noe

Yes, that was what I meant - actual HW RAID, and LVM with one mirror copy of each logical volume.

Nevertheless, I will try to push things further with this Linux raid/lvm stuff ... and make the solution as redundant as possible.

----------

## erikm

 *Noe wrote:*   

> Yes, that was what I meant - actual HW RAID and LVM with one mirror copy for each logical volume.

 

And this is where I think you've misunderstood things: with sw or hw RAID, data can be mirrored or striped over different physical devices. However, the RAID driver presents the volume to the kernel as one physical device, which can be partitioned like a regular device. If the RAID is configured in a redundant mode (RAID1 or higher), data can be recovered after a dead disk (although if you want true redundancy I would go for at least RAID5).

LVM distributes its data over a volume group, i.e. it writes one sequence of data in one location, and the next sequence in another. It is possible to build a volume group over several block devices (hard disks, one PV each), which in striped LVM operation means data will be spread over the different disks. However, this is stripe mode, not mirror mode, meaning that data is lost if a disk dies.

Thus, if you use RAID5 for instance, and make part of the RAID volume a regular partition and the rest LVM, all data on the RAID volume is redundant. On the other hand, if you don't use RAID, but make the same disks into one big LVM volume group, no data is protected. The redundancy is due to the RAID, not LVM.

This is why I don't see the point of putting your root on LVM.

----------

