# [SOLVED] udev update, separate lvm /usr, genkernel...

## vputz

Okay, I read the warnings, I updated anyway since I was using an existing initramfs with separate /usr on lvm and it was working fine with an older udev.  I upgraded kernel to 3.6.11, used genkernel (3.4.45) to compile and install an initramfs, and pressed on.

And now I'm in a weird position.  My boot sequence stops after (typed rather than pasted)

```
Scanning for and activating Volume Groups
12 logical volume(s) in Volume Group "vgbase" now active
Determining root device
mounting /dev/sda3 as root
using mount -t ext4 -o ro
!! the filesystem mounted at /dev/sda3 does not appear to be a valid /, try again
Could not find the root block device in .
```

If I type "shell" to go to a shell, though... /dev/sda3 is mounted read-only at /newroot.  If I unmount it and "mount /dev/sda3 newroot" it mounts like a champ.  So... the kernel IS seeing it.  From ash, I can "mount /dev/mapper/vgbase-usr newroot/usr" and that mounts just fine too.  So lvm is working.  I can mount all the relevant lvm partitions to newroot and even chroot over to it (although devices are having trouble starting if I chroot, so I can't seem to start ethernet or anything).

I set the "static" use flag for lvm2, reemerged it (and udev), and made a new genkernel initramfs (it did say "using cache" for lvm...?).

CONFIG_DEVTMPFS=y and CONFIG_DEVTMPFS_MOUNT=y are set.

/etc/fstab contains the line "/dev/mapper/vgbase-usr   /usr  ext4 noatime 0 0"

So... what the devil is going on?  I shouldn't be missing a kernel option for block devices, because it's seeing and using the disks just fine once I'm in ash.  I'm using genkernel for the initramfs, and I thought that would take care of the separate /usr partition.  The kernel is clearly seeing the /dev/sda* series of partitions from within ash, and the boot message shows that LVM is working or it wouldn't show "vgbase" as active before mounting root.

Seriously, this is killing me.  This is my home server for squid, media, etc., so even my non-Gentoo boxes are inconvenienced.  Halp?  I don't even know how to get diagnostic information on this.

*Last edited by vputz on Fri Jan 25, 2013 10:04 pm; edited 1 time in total*

----------

## vputz

Mind you, something may be strange, because if I chroot to newroot and then do "udevadm info --query=all --name=sda", I get "device node not found", and playing around with initscripts is just confusing because they act strangely in the chrooted shell: "udev is already starting" or "udev stopped by something else."

But I'm powerless to explain it all.  This is confusing as hell.

----------

## vputz

Disregard that last.  Some more info:

The ash shell (I assume my initramfs?) has an /etc/mtab that lists a udev /dev line.  If I chroot into newroot, /etc/mtab lists a tmpfs /dev line, but I can mount udev (using the line in the ash shell) and suddenly "udevadm info --query=all --name=sda" looks purty and gives me all the relevant information.  I still don't have a network interface according to ifconfig, of course.

So wot teh hell!?  Udev works.  LVM works.  /usr is mountable.  But evidently it's not all happening at the right times to make a worky system.  This is DRIVING ME MAD, and of course happened on a day when I was working from home and using files on that system for work (pure hubris to attempt the upgrade at this point in time, but we all do foolish things like that).

----------

## John R. Graham

Did you miss it? *eselect news read new wrote:*   

> 2013-01-23-udev-upgrade
> 
>   Title                     Upgrading udev from 171 (or older) to 197
> 
>   Author                    Samuli Suominen <ssuominen@gentoo.org>
> ...

 - John

----------

## vputz

No, I didn't; that's my confusion!

 *Quote:*   

> - Remove udev-postmount from runlevels

 

Did this.

 *Quote:*   

> The need of CONFIG_DEVTMPFS=y in the kernel

 

Set CONFIG_DEVTMPFS=y in the kernel.  I have no line at all for /dev in /etc/fstab (which I gather is acceptable).

 *Quote:*   

> - The case of predictable network interface names

 

I understand I'm not looking for eth0, but I had read that ifconfig would at least list interfaces; it lists nothing.

 *Quote:*   

> - Support for older kernels than 2.6.39 is dropped

 

3.6.11

 *Quote:*   

> - The case of separate /usr; if it worked for you... we still recommend initramfs...

 

Had a working separate /usr with the old version, and am using initramfs.  And I didn't see any objectionable note in emerging udev.  I may try reemerging udev-init-scripts to check there, or try inserting the (I thought unnecessary) /dev entry in fstab, but I really thought I had addressed the specifics of the news note.

----------

## vputz

Yep, tried re-emerging udev-init-scripts and adding a /dev line to fstab, to no avail.  Tried changing from /dev/sda3 to the UUID in menu.lst, but an interesting tidbit is that it still uses "/dev/sda3" when "determining root device" (even after changing it in menu.lst and recreating the initramfs with --disklabel in genkernel, the boot sequence says "Detected real_root=/dev/sda3").

I'm still just baffled that the boot sequence says it's using "mount -t ext4 -o ro" and can't mount /dev/sda3, yet when I start the shell, /etc/mtab lists "/dev/sda3  /newroot  ext4 ro 0 0"; it's mounted just fine.

----------

## vputz

Okay, massively confused now.  I tried changing to real_root=UUID=...; with a wrong UUID it fails, but with the correct UUID it correctly tries to mount "/dev/sda3" as root.  So that tells me devtmpfs is working, if it can map a UUID to a device name... right?
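For what it's worth, one common way a UUID resolves to a device name is through a symlink like /dev/disk/by-uuid/&lt;uuid&gt; pointing at the real node, and the init script quoted below does resolve symlinked roots with readlink.  A self-contained sketch of that step, using stand-in paths under /tmp (hypothetical, so it runs anywhere):

```shell
#!/bin/sh
# Stand-in for /dev/disk/by-uuid/<uuid> -> ../../sda3; the /tmp paths
# are fake so this can run without real devices.
mkdir -p /tmp/by-uuid
: > /tmp/sda3                        # fake "device node"
ln -sf /tmp/sda3 /tmp/by-uuid/1234-ABCD

# Mirror the init script's symlink-resolution step:
REAL_ROOT=/tmp/by-uuid/1234-ABCD
[ -L "${REAL_ROOT}" ] && REAL_ROOT=$(readlink "${REAL_ROOT}")
echo "${REAL_ROOT}"                  # prints /tmp/sda3
```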

The particular error is specific: "the filesystem mounted at ${REAL_ROOT} does not appear to be a valid /".  It occurs here in the initrd init script (I finally booted with Parted Magic so I could ssh in):

```
		# Try to mount the device as ${NEW_ROOT}
		if [ "${REAL_ROOT}" = '/dev/nfs' ]; then
			findnfsmount
		else
			# If $REAL_ROOT is a symlink
			# Resolve it like util-linux mount does
			[ -L ${REAL_ROOT} ] && REAL_ROOT=`readlink ${REAL_ROOT}`
			# mount ro so fsck doesn't barf later
			if [ "${REAL_ROOTFLAGS}" = '' ]; then
				good_msg "Using mount -t ${ROOTFSTYPE} -o ${MOUNT_STATE}"
				mount -t ${ROOTFSTYPE} -o ${MOUNT_STATE} ${REAL_ROOT} ${NEW_ROOT}
			else
				good_msg "Using mount -t ${ROOTFSTYPE} -o ${MOUNT_STATE},${REAL_ROOTFLAGS}"
				mount -t ${ROOTFSTYPE} -o ${MOUNT_STATE},${REAL_ROOTFLAGS} ${REAL_ROOT} ${NEW_ROOT}
			fi
		fi

		# If mount is successful break out of the loop
		# else not a good root and start over.
		if [ "$?" = '0' ]
		then
			if [ -d ${NEW_ROOT}/dev -a -x "${NEW_ROOT}${REAL_INIT:-/sbin/init}" ] || [ "${REAL_ROOT}" = "/dev/nfs" ]
			then
				break
			else
				bad_msg "The filesystem mounted at ${REAL_ROOT} does not appear to be a valid /, try again"
				got_good_root=0
				REAL_ROOT=''
			fi
		else
			bad_msg "Could not mount specified ROOT, try again"
			got_good_root=0
			REAL_ROOT=''
		fi
```

So... it looks like it's trying "mount -t ext4 -o ro /dev/sda3 /newroot"; it doesn't receive an error code ([ "$?" = '0' ]) so it tests if newroot/dev exists as a directory and if newroot/sbin/init is executable.
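That validity check can be re-created standalone to see exactly what it accepts.  A sketch (the is_valid_root name and the /tmp/fakeroot layout are mine; the bracket expression is lifted from the script):

```shell
#!/bin/sh
# Re-creation of the genkernel validity test. is_valid_root and
# /tmp/fakeroot are hypothetical; the test expression is the script's own.
is_valid_root() {
	NEW_ROOT=$1
	REAL_INIT=$2
	if [ -d ${NEW_ROOT}/dev -a -x "${NEW_ROOT}${REAL_INIT:-/sbin/init}" ]; then
		echo "valid /"
	else
		echo "does not appear to be a valid /"
	fi
}

# Simulate a mounted root with /dev and an executable /sbin/init:
mkdir -p /tmp/fakeroot/dev /tmp/fakeroot/sbin
touch /tmp/fakeroot/sbin/init
chmod +x /tmp/fakeroot/sbin/init

is_valid_root /tmp/fakeroot ""            # prints: valid /
is_valid_root /tmp/fakeroot /sbin/init    # prints: valid /
```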

Right?  But /dev is indeed a directory and sbin/init is indeed executable.  So what is the problem?  I've fsck'd it and it's clean.  This RO mount and check is at line 646; the mounts of other things (including /usr) are around line 872:

```
# Mount the additional things as required by udev & systemd
if [ -f ${NEW_ROOT}/etc/initramfs.mounts ]; then
	fslist=$(get_mounts_list)
else
	fslist="/usr"
fi
```

...so we're failing before usr is mounted, which is before udev ever starts with the regular init.  Right?  But why?

I'm even prepared to resize partitions and put /usr right in / if necessary, but it's not clear at all that this would help.  Please, if anyone can shed light on this I would greatly appreciate it.  I really don't want to reinstall my server.

----------

## vputz

Well.  Dammit  :Smile: 

The problem was the argument "init=linuxrc", which is what my old menu.lst entry had.  Without the leading / (i.e. "init=linuxrc" instead of "init=/linuxrc") it was testing for the wrong thing somehow.  With the slash it boots like a champ.

That's really odd to me.  I'm not sure if the scripts changed so the / was now necessary, or if something else happens, but boy did that burn some midnight oil.  Marking solved.  Talk about a complete red herring.
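In case it helps anyone later: given the init-script snippet quoted a few posts up, one plausible mechanism (I haven't verified genkernel's command-line parsing) is that the bare value ends up in REAL_INIT, and the script builds the path it tests by plain string concatenation, so the leading slash matters:

```shell
#!/bin/sh
# Demonstrates the concatenation in the script's test expression
# "${NEW_ROOT}${REAL_INIT:-/sbin/init}". Whether init=linuxrc really
# lands in REAL_INIT is my assumption; the expansion itself is POSIX sh.
NEW_ROOT=/newroot

REAL_INIT=linuxrc                             # no leading slash, as in the old menu.lst
echo "${NEW_ROOT}${REAL_INIT:-/sbin/init}"    # prints /newrootlinuxrc -- bogus path

REAL_INIT=/linuxrc                            # with the slash
echo "${NEW_ROOT}${REAL_INIT:-/sbin/init}"    # prints /newroot/linuxrc
```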

----------

## Element Dave

 *vputz wrote:*   

> Well.  Dammit 
> 
> problem was the argument "init=linuxrc", which was what my old menu.lst entry had.  But without the / (ie "init=/linuxrc") it was testing for the wrong thing somehow.  Boots like a champ.
> 
> That's really odd to me.  I'm not sure if the scripts changed so the / was now necessary, or if something else happens, but boy did that burn some midnight oil.  Marking solved.  Talk about a complete red herring.

 

I'm glad you got it working.  However, you should remove the "init" entry from your boot menu entirely to avoid potential headaches in the future: the correct "init" will be executed automatically.  The old linuxrc isn't used by an initramfs.  The only reason it works for you is that genkernel ships a compatibility symlink from linuxrc to init (or the other way around), if memory serves.  You only need an "init" entry if you don't want to use the default, in which case you'll know what you want to use and why.

----------

## stackoverflow128

Hi,

I had the same issue with the "does not appear to be a valid /" message.  I fixed it with emerge -v sysvinit.

For some reason the udev update, I think, removed my /sbin/init.

I later learnt that this is what that error message really means.

So I had to boot the system from a live CD and do the usual:

```
livecd usr # cd /
livecd / # mount -t proc proc /mnt/gentoo/proc
livecd / # mount --rbind /dev /mnt/gentoo/dev
livecd / # mount --rbind /sys /mnt/gentoo/sys
livecd / # cp -L /etc/resolv.conf /mnt/gentoo/etc/
livecd / # chroot /mnt/gentoo /bin/bash
livecd / # source /etc/profile
```

Then emerge -v sysvinit, which provides the /sbin/init file.

James Cordell

----------

