# [solved] why does lvcreate fail ?

## toralf

```
# pvscan
  /run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
  PV /dev/sda4   VG vg0   lvm2 [2.60 TiB / 2.60 TiB free]
  PV /dev/sdb1   VG vg0   lvm2 [2.73 TiB / 2.73 TiB free]
  Total: 2 [5.33 TiB] / in use: 2 [5.33 TiB] / in no VG: 0 [0   ]

# vgscan
  /run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
  Reading all physical volumes.  This may take a while...
  Found volume group "vg0" using metadata type lvm2

# vgdisplay
  /run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
  --- Volume group ---
  VG Name               vg0
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  23
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               5.33 TiB
  PE Size               4.00 MiB
  Total PE              1397896
  Alloc PE / Size       0 / 0
  Free  PE / Size       1397896 / 5.33 TiB
  VG UUID               mS8jX9-xPdR-5RqO-BpLl-3ifd-Rqgx-8hfvx5

# lvcreate -i 2 -l 100%VG -n lv0 /dev/vg0
  /run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
  Using default stripesize 64.00 KiB.
  /dev/vg0/lv0: not found: device not cleared
  Aborting. Failed to wipe start of new LV.

# lvcreate -i 2 -l 100%VG -n lv0 /dev/vg0 --verbose
  /run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
  Using default stripesize 64.00 KiB.
    Finding volume group "vg0"
    Converted 100%VG into 1397896 extents.
    Archiving volume group "vg0" metadata (seqno 25).
    Creating logical volume lv0
    Found fewer allocatable extents for logical volume lv0 than requested: using 1365000 extents (reduced by 32896).
    Creating volume group backup "/etc/lvm/backup/vg0" (seqno 26).
    Activating logical volume "lv0".
    activation/volume_list configuration setting not defined: Checking only host tags for vg0/lv0
    Creating vg0-lv0
    Loading vg0-lv0 table (253:0)
    Resuming vg0-lv0 (253:0)
  /dev/vg0/lv0: not found: device not cleared
  Aborting. Failed to wipe start of new LV.
    Removing vg0-lv0 (253:0)
    Creating volume group backup "/etc/lvm/backup/vg0" (seqno 27).
```

Hmm, this is not from the Gentoo Linux system itself, but rather from a rescue system - maybe I have to wait until I can log in to my remote server?

*Last edited by toralf on Sat Aug 20, 2016 8:17 am; edited 1 time in total*
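For what it's worth, the "device not cleared" abort happens at the very last step: lvcreate has already allocated the LV and activated it, but then fails to open the new `/dev/vg0/lv0` node to zero its start. If the rescue environment can't be avoided, one possible workaround sketch (using lvcreate's standard `-Z n` option to skip the automatic wipe - whether activation itself succeeds still depends on the rescue kernel's device-mapper support) would be:

```shell
# Sketch only: create the striped LV without the automatic zeroing step
# (-Z n tells lvcreate not to wipe the start of the new LV itself).
lvcreate -i 2 -l 100%VG -n lv0 -Z n vg0

# Then wipe the start of the LV manually once the device node exists,
# so stale filesystem signatures from the underlying PVs don't survive.
dd if=/dev/zero of=/dev/vg0/lv0 bs=1M count=4 conv=fsync
```

Skipping the wipe is only safe if you zero (or mkfs) the LV yourself afterwards, otherwise old on-disk metadata may be misdetected.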

----------

## eccerr0r

Just a guess, does the rescue kernel have device mapper built in?  What version of udev is the rescue system using?  What udev rules does it have?
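One way to answer those questions from the rescue shell - a hedged sketch, since paths and available tools vary between rescue images:

```shell
# Does the running kernel expose device-mapper? The block-device section of
# /proc/devices should list a "device-mapper" entry, and the dm control
# node should exist.
grep device-mapper /proc/devices
ls -l /dev/mapper/control

# If the rescue kernel ships its config, check whether dm is built in (=y)
# or a module (=m).
zgrep CONFIG_BLK_DEV_DM /proc/config.gz 2>/dev/null

# udev version and any device-mapper rules it installs (rule directories
# differ per distribution).
udevadm --version
ls /lib/udev/rules.d /etc/udev/rules.d 2>/dev/null | grep -i dm
```

If `/dev/mapper/control` is missing or no dm udev rules are present, the LV's device node will never be created, which matches the "not found: device not cleared" failure above.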

----------

## toralf

 *eccerr0r wrote:*   

> Just a guess, does the rescue kernel have device mapper built in?  What version of udev is the rescue system using?  What udev rules does it have?

Indeed, the rescue system was preventing it; booting into Gentoo itself solved the issue.

TIA

----------

