# 'Failed to connect to lvmetad' when booting

## Utsuho Reiuji

I did an update yesterday; lvm2 was one of the packages pulled in by portage:

```
*  sys-fs/lvm2

      Latest version available: 2.02.187-r1

      Latest version installed: 2.02.187-r1

      Size of files: 2,350 KiB

      Homepage:      https://sourceware.org/lvm2/

      Description:   User-land utilities for LVM2 (device-mapper) software

      License:       GPL-2
```

Now I am getting the following error message while booting my system:

```
rc boot logging started at Mon Apr 20 10:36:31 2020

 * Setting system clock using the hardware clock [Local Time] ...

 [ ok ]

 * Starting up RAID devices ...

 * 

 [ !! ]

 * Starting the Logical Volume Manager ...

  WARNING: Failed to connect to lvmetad. Falling back to device scanning.

  WARNING: Failed to connect to lvmetad. Falling back to device scanning.

  WARNING: Failed to connect to lvmetad. Falling back to device scanning.

  Reading all physical volumes.  This may take a while...

  WARNING: Failed to connect to lvmetad. Falling back to device scanning.

  WARNING: Failed to connect to lvmetad. Falling back to device scanning.

  WARNING: Failed to connect to lvmetad. Falling back to device scanning.

 [ ok ]

 * Checking local filesystems  ...

/dev/sda3: clean, 1259011/6668288 files, 16684021/26649344 blocks

/dev/sda2: clean, 333/32768 files, 56612/131072 blocks

warehouse: clean, 208499/183148544 files, 611601121/732566272 blocks

/dev/md127: clean, 1027058/61038592 files, 147691415/244126720 blocks

 [ ok ]

 * Remounting root filesystem read/write ...

 [ ok ]

 * Remounting filesystems ...

 [ ok ]

 * Updating /etc/mtab ...

-- snip --
```

To be honest, I am not 100% sure whether this is connected. The system seems to boot fine, and the hard drives seem to work normally.

```
rc-service lvm status

 * status: started
```

Even though the service is running, commands like lvdisplay still print the warning:

```
lvdisplay

  WARNING: Failed to connect to lvmetad. Falling back to device scanning.
```

Any ideas on how I can check what's going on?

----------

## NeddySeagoon

Utsuho Reiuji,

Aren't lvm and lvmetad separate services?

```
$ rc-update show -v | grep lv

                  lvm |                              

       lvm-monitoring |                              

              lvmetad |                              

             lvmpolld |      
```

As the message says, it's a warning. With lots of block devices, device scanning can take a long time.

----------

## ipic

I have the same issue following the LVM2 upgrade.

Whilst I agree device scanning can take a while, I find it suspicious when this sort of thing starts on an otherwise unchanged system (except for LVM).

After the boot I got the same issue as the OP:

```
ian2 ~ # pvscan

  WARNING: Failed to connect to lvmetad. Falling back to device scanning.

  PV /dev/md202   VG vg01            lvm2 [<427.00 GiB / <64.20 GiB free]

  PV /dev/md203   VG vg01            lvm2 [457.07 GiB / 203.83 GiB free]

  PV /dev/md205   VG vg01            lvm2 [<468.18 GiB / <268.18 GiB free]

  PV /dev/md206   VG vg01            lvm2 [504.15 GiB / 504.15 GiB free]

  PV /dev/md102   VG vg00            lvm2 [<518.12 GiB / <51.77 GiB free]

  PV /dev/md103   VG vg00            lvm2 [<459.63 GiB / 20.98 GiB free]

  PV /dev/md105   VG vg00            lvm2 [<429.56 GiB / <325.56 GiB free]

  PV /dev/md106   VG vg00            lvm2 [449.09 GiB / 449.09 GiB free]

  Total: 8 [<3.63 TiB] / in use: 8 [<3.63 TiB] / in no VG: 0 [0   ]

ian2 ~ # 
```

I then did this:

```
ian2 ~ # /etc/init.d/lvmetad --nodeps restart

 * Starting lvmetad ...                                                   [ ok ]

ian2 ~ #
```

After that, the warning is no longer there:

```

ian2 ~ # pvscan

  PV /dev/md202   VG vg01            lvm2 [<427.00 GiB / <64.20 GiB free]

  PV /dev/md203   VG vg01            lvm2 [457.07 GiB / 203.83 GiB free]

  PV /dev/md205   VG vg01            lvm2 [<468.18 GiB / <268.18 GiB free]

  PV /dev/md206   VG vg01            lvm2 [504.15 GiB / 504.15 GiB free]

  PV /dev/md102   VG vg00            lvm2 [<518.12 GiB / <51.77 GiB free]

  PV /dev/md103   VG vg00            lvm2 [<459.63 GiB / 20.98 GiB free]

  PV /dev/md105   VG vg00            lvm2 [<429.56 GiB / <325.56 GiB free]

  PV /dev/md106   VG vg00            lvm2 [449.09 GiB / 449.09 GiB free]

  Total: 8 [<3.63 TiB] / in use: 8 [<3.63 TiB] / in no VG: 0 [0   ]

ian2 ~ #

```

There have been issues in the past with an incorrect hand-off between the initramfs and the boot sequence where LVM is concerned. The last one made me change from genkernel to dracut. I did rebuild the initramfs before booting, so I don't think it's a version mismatch between the two.

I'd put this in the annoyance rather than problem bucket at the moment. At least the boot process doesn't hang up for minutes like it used to when this happened before.

EDIT: OK, scratch that waffle. I should have spotted it when the restart didn't stop anything.

The solution (for me) is to add lvmetad to the boot runlevel. After that, boot and subsequent LVM commands work without issue.

```
ian2 ~ # rc-update add lvmetad boot
```
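
For what it's worth, OpenRC records runlevel membership as symlinks under /etc/runlevels/&lt;level&gt;, so the effect of the rc-update command can be double-checked with a simple file test. Here is a minimal sketch using a temporary directory in place of the real /etc/runlevels/boot (so it can run anywhere):

```shell
# Stand-in for /etc/runlevels/boot; on a real system rc-update
# manages these entries (symlinks pointing at /etc/init.d/<service>).
rl=$(mktemp -d)
ln -s /etc/init.d/lvmetad "$rl/lvmetad"   # what "rc-update add lvmetad boot" creates

# -L catches the symlink even if its target doesn't exist on this machine.
if [ -e "$rl/lvmetad" ] || [ -L "$rl/lvmetad" ]; then
    member="yes"
else
    member="no"
fi
echo "lvmetad in boot runlevel: $member"
rm -rf "$rl"
```

On a live system, `ls -l /etc/runlevels/boot` shows the same information directly.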

Shouldn't post before coffee.

----------

## Utsuho Reiuji

It would have been good if the post-update info had mentioned adding the service in OpenRC. Only lvm is mentioned, and without any notice, users like me simply won't know about the new (?) service...

----------

## NeddySeagoon

Utsuho Reiuji,

The daemon has been there for a long time.

All that's changed is the warning message has been added.

No action is required unless device scanning is removed from LVM.

----------

## ipic

Hi NeddySeagoon,

The e-build gives this advice when it's done:

```
 * Make sure the "lvm" init script is in the runlevels:

 * # rc-update add lvm boot

 * 

 * Make sure to enable lvmetad in /etc/lvm/lvm.conf if you want

 * to enable lvm autoactivation and metadata caching.
```

Whilst I'll have a hard time proving it, I'm convinced the previous instruction was that lvmetad would be started by LVM if use_lvmetad = 1 was set in /etc/lvm/lvm.conf.

Would it be appropriate to raise a bug on the ebuild so that 'rc-update add lvmetad boot' is added to its end-of-build advice?

----------

## NeddySeagoon

ipic,

It's always good to give feedback like that.

Check for an existing bug. 

If everything you wanted to say is already in bugzilla, there is no point in adding a "me too" comment.

----------

## ipic

I found bug 689292.

Looking at the ebuild for sys-fs/lvm2-2.02.187-r1, it looks to me like the fix for this bug is in this version.

/etc/init.d/lvm has a depends section that includes this:

```

   # We may use lvmetad based on the configuration. If we added lvmetad

   # support while lvm2 is running then we aren't dependent on it. For the

   # more common case, if its disabled in the config we aren't dependent

   # on it.

   config /etc/lvm/lvm.conf

   local _use=

   if service_started ; then

      _use=$(service_get_value use)

   else

      if _use_lvmetad ; then

         _use="${_use} lvmetad"

      fi

      if _use_lvmlockd ; then

         _use="${_use} lvmlockd"

      fi

   fi

```

I'll happily admit that I am out of my depth now. To me it looks like the lvm start script declares that it 'uses' lvmetad if the _use_lvmetad check is true.

I have 'use_lvmetad = 1' in /etc/lvm/lvm.conf

I now do not understand why lvmetad was not started by the lvm rc script on reboot.

Of course, having added lvmetad to the rc boot runlevel, it's a bit moot. However, as the OP says, casual users will be caught out.
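
As a toy illustration of the kind of check the init script's comment describes, the snippet below greps a throwaway lvm.conf-style file for an uncommented use_lvmetad = 1 line. This is a simplified stand-in for illustration only, not the real _use_lvmetad helper from the ebuild:

```shell
# Throwaway config file standing in for /etc/lvm/lvm.conf.
conf=$(mktemp)
cat > "$conf" <<'EOF'
global {
    # use_lvmetad = 0   <- commented out, should be ignored
    use_lvmetad = 1
}
EOF

# Match only an uncommented "use_lvmetad = 1" assignment.
if grep -Eq '^[[:space:]]*use_lvmetad[[:space:]]*=[[:space:]]*1' "$conf"; then
    state="enabled"
else
    state="disabled"
fi
echo "lvmetad $state"
rm -f "$conf"
```

With the uncommented use_lvmetad = 1 line present, this prints "lvmetad enabled". The catch discussed in this thread is not the detection itself but what 'use' does with the result once the service isn't in a runlevel.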

----------

## Hu

According to man openrc-run:

```
     need             The service will attempt to start any services it needs

                      regardless of whether they have been added to the run‐

                      level. It will refuse to start until all services it

                      needs have started, and it will refuse to stop until all

                      services that need it have stopped.

     use              The service will attempt to start any services it uses

                      that have been added to the runlevel.
```

As I read that, a use only helps you if the administrator has already put the used service in the right runlevel.  If the administrator does that, then use ensures the service is started early enough to be useful.  If the administrator has not done that, then use is a no-operation statement.  A need would force the named service to try to start, regardless of whether the administrator put it in the runlevel.
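
That reading of the man page can be sketched as a toy model. This is just an illustration of the semantics, not OpenRC's actual dependency resolver, and the service names are examples:

```shell
# Toy model of OpenRC "need" vs "use" dependency handling.
runlevel="lvm localmount"   # services the admin added to the runlevel

start_dep() {
    # $1 = dependency type, $2 = service name
    case $1 in
        need)
            # need: start the service no matter what.
            echo "start $2 (need: forced regardless of runlevel)" ;;
        use)
            # use: start the service only if it is in the runlevel.
            case " $runlevel " in
                *" $2 "*) echo "start $2 (use: in runlevel)" ;;
                *)        echo "skip $2 (use: not in runlevel)" ;;
            esac ;;
    esac
}

use_result=$(start_dep use lvmetad)    # lvmetad is not in the runlevel
need_result=$(start_dep need lvmetad)  # need ignores the runlevel
echo "$use_result"
echo "$need_result"
```

Here the 'use' case skips lvmetad while 'need' forces it, which is exactly why adding lvmetad to the boot runlevel makes the 'use' dependency take effect.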

----------

## ipic

Thanks for the response, learning all the time.

In bug 689292 the proposer says the following:

```
Reason:

- "use" will do the same like "need", i.e. it will make sure that lvmetad/lvmlockd will get started before lvm.

- However, when restarting lvmetad/lvmlockd, i.e. after upgrading lvm2 package, we won't longer try to stop lvm service (keep in mind that lvm service is not a real daemon...) which will always fail if LVM volumes are still in use.
```

The patch was accepted, applied and released in pretty quick order.

However, as the OP says, this is a very visible change in package behaviour compared to the way it has been up to now. The user needs to take extra action as a result of this.

So, don't you think that at least the ebuild's end-of-build comments should include something about it?

----------

## Hu

I think any system that tried to stop the lvm service when it was merely needed by something that was restarting was already misconfigured.  As described, it seems to me like a flaw in the dependency resolver.  It should know it will need to turn the lvm service on again, and not turn it off.

I don't think ebuild end comments are a good place to describe this, because not everyone will read those.  I think discussing the issue here is fine, but if you want a change in the tree, you need to bring the problem to the attention of the relevant maintainer.

----------

## ipic

A bit of teamwork here :Smile: This bug has been raised.

----------

## sdauth

 *ipic wrote:*   

> A bit of teamwork here   This bug has been raised

 

2.02.187-r2 fixes the issue. https://gitweb.gentoo.org/repo/gentoo.git/commit/?id=171bc0144212041972ae2e877078cf073b8252a5

No need to manually add "lvmetad" to the boot runlevel. It correctly detects the setting (use_lvmetad = 1) in /etc/lvm/lvm.conf (like before :Surprised: )

----------

## Utsuho Reiuji

The new r2 of lvm2 indeed fixed the warning during boot, thanks for your help!

----------

