# KVM and virtio - can't mount guest root fs

## john.newman

Hello,

I am back with Gentoo after about a five-year hiatus.  Very excited to be back and making great progress so far.  It's amazing how much I remember and how quickly it is coming back.    :Shocked: 

Anyway, I am trying to run a few virtual machines with KVM.  Everything is working fine using .img files, but I am trying to run on a partition using the virtio driver (/dev/vda).  Either approach will be fine, so if I absolutely can't get this to work I can switch back to .img files.  However, I think I am close; I just need someone to say "oh, you need to do this and it will work."

I've set up LVM and have /dev/vms/vm1 available as an LVM partition for one guest.  I installed Gentoo on that (mkfs, stage 3 tarball, cp some /etc files from the host, build a kernel in a chroot).  I couldn't really find much about what should be different for the guest kernel, so I copied the host's .config and didn't change that much.  Anyone have any tips for the guest kernel?  I did compile the virtio drivers INTO the kernel as described here http://www.linux-kvm.org/page/Virtio .. doing "cat .config | grep virtio" confirms they are all set to Y in the guest system.
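For anyone checking the same thing, here is a sketch of the kind of check I mean.  The .config below is a made-up fragment just for illustration; in practice you would run the grep against the real .config in your guest kernel source tree:

```shell
# Hypothetical guest .config fragment, for illustration only.
cat > /tmp/guest.config <<'EOF'
CONFIG_VIRTIO=y
CONFIG_VIRTIO_PCI=y
CONFIG_VIRTIO_BLK=y
CONFIG_VIRTIO_NET=y
EOF

# Every virtio option must be =y (built in), never =m, because with no
# initramfs there is nothing to load modules before root is mounted.
grep -c '^CONFIG_VIRTIO.*=y$' /tmp/guest.config   # prints 4
```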

Anyway, after unmounting the partition, I *should* be able to boot kvm by running:

```
exec kvm -name vm1 -cpu core2duo -smp 8 -m 2048 -boot c -drive file=/dev/vms/vm1,if=virtio,boot=on -localtime -kernel /boot/vmlinuz-2.6.30-gentoo-r5 -append 'root=/dev/vda gentoo=nodevfs' -net nic,macaddr=77:77:77:77:12:34,model=virtio -net tap,ifname=qtap0,script=no,downscript=no
```

So it starts up, but I get a kernel panic when trying to mount the rootfs (I had to type this out manually   :Razz:  ):

```
md: ... autorun DONE.
Root-NFS: No NFS server available, giving up.
VFS: Unable to mount root fs via NFS, trying floppy.
VFS: Cannot open root device "vda" or unknown-block(2,0)
Please append a correct "root=" boot option; here are the available partitions:
0b00      1048575 sr0 driver: sr  (this is my cdrom)
Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(2,0)
Pid: 1, comm: swapper Not tainted 2.6.30-gentoo-r5 #11
Call Trace:
 [<ffffffff814cc23d>] panic+0xa0/0x151
 [<ffffffff817f1773>] ? printk_all_partitions+0x1de/0x1f0
 [<ffffffff810e0d66>] ? sys_mount+0xb9/0xcf
 [<ffffffff817d01eb>] mount_block_root+0x1d3/0x1ea
 [<ffffffff817d027b>] mount_root+0x79/0x99
 [<ffffffff817d040b>] prepare_namespace+0x170/0x19d
 [<ffffffff817cf708>] kernel_init+0x174/0x184
 [<ffffffff8100cd8a>] child_rip+0xa/0x20
 [<ffffffff817cf594>] ? kernel_init+0x0/0x184
 [<ffffffff8100cd80>] ? child_rip+0x0/0x20
```

Does anyone know what is going on?  Again, the virtio drivers are built into the guest kernel..  ?

What's strange is that when kvm first boots, I do not see a hard drive listed at all, but I do see the CD-ROM.  When I boot with a .img file instead of -drive file,if=virtio, I do see the HD there and everything works fine.  Again, I'd rather run off an LVM volume like this, unless it can't work, or someone can convince me that .img files are the better approach?

I found out about the -drive option from this post http://archives.gentoo.org/gentoo-admin/msg_63e77a6009f5eb492d69f872ca18d313.xml .. based on what the author says, everything I've got here should be possible and working, yes?

Also, what's the deal with GRUB on the guest system?  Do I need to do that?

Thanks

----------

## Hu

 *john.newman wrote:*   

> I couldn't really find much about what should be different for the guest kernel, so I copied the host's .config and didn't change that much.

 

The guest hardware is largely fixed by the choice of hypervisor, and rarely has much in common with your host hardware.

 *john.newman wrote:*   

> doing "cat .config | grep virtio"

 

grep -i virtio .config

 *john.newman wrote:*   

> 
> 
> ```
> exec kvm -name vm1 -cpu core2duo -smp 8 -m 2048 -boot c -drive file=/dev/vms/vm1,if=virtio,boot=on -localtime -kernel /boot/vmlinuz-2.6.30-gentoo-r5 -append 'root=/dev/vda gentoo=nodevfs' -net nic,macaddr=77:77:77:77:12:34,model=virtio -net tap,ifname=qtap0,script=no,downscript=no
> ```
> ...

 Is that the right kernel path?  It looks like the path to your host kernel, not your guest kernel.  This might explain why the guest seems not to support virtio disks.

 *john.newman wrote:*   

> So it starts up, but I get a kernel panic when trying to mount the rootfs (I had to type this out manually)

 

Use -serial file:tty.out and modify the kernel command line to print to the serial console.
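For example, something like this (paths and the kernel image location are illustrative, adapted from your command above):

```shell
# Sketch: log the guest's serial console to a file on the host.
# console=ttyS0 tells the guest kernel to write its messages to the
# emulated serial port, which -serial captures into tty.out.
kvm -name vm1 -m 2048 \
    -drive file=/dev/vms/vm1,if=virtio,boot=on \
    -kernel /path/to/guest/vmlinuz \
    -append 'root=/dev/vda console=ttyS0' \
    -serial file:tty.out
```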

 *john.newman wrote:*   

> 
> 
> ```
> md:  ... autorun DONE.
> 
> ...

 Why do you have root-over-NFS and floppy support in your kernels?

 *john.newman wrote:*   

> 
> 
> What's strange is when kvm first boots, I do not see a hard drive listed at all, but I do see the cd rom.

 

That seems suspicious.  Are you sure there is no output from the kvm process on its stdout/stderr?  Does it have permission to open the LVM device node?  What is the output of info block in the monitor?

----------

## john.newman

 *Quote:*   

> grep -i virtio .config

 

 :Idea:   thanks 

WAIT .. are you telling me the guest kernel is supposed to physically live on the host file system??  hahahaha!

```
mount /mnt/vms/vm1

mkdir /boot/vms

cp -a /mnt/vms/vm1/boot/* /boot/vms/

umount /mnt/vms/vm1

kvm ....... -kernel /boot/vms/vmlinuz-2.6.30-gentoo-r5 -append "root=/dev/vda"
```

SUCCESS

```
vm1 ~ # mount
/dev/vda on / type ext3 (rw,noatime)
...
vm1 ~ # emerge 
```

THANKS

 *Quote:*   

> Use -serial file:tty.out and modify the kernel command line to print to the serial console. 

 

again very useful.  Adding "console=ttyS0,115200" to the -append string is how you modify the kvm command .. just in case anyone stumbles across this for reference

Looking at the tty.out file, I did not see anything BAD.  There is lots of unnecessary stuff that I will trim down as I go forward (you pointed out the floppy, NFS, etc).  I just want to get this up and running as a proof of concept to verify that KVM can actually work fast and meet my needs; then I will get everything properly set up and managed.  My host system still needs lots of work as well; I finally got X running with the ATI drivers.   :Mad:   :Mad:   :Mad: 

I do need a strategy to manage the images: backups, cloning, and most importantly sending changesets across VMs.  This is kind of important to figure out before I go too much further.  I just want to get two nodes up and talking to each other this weekend.   :Cool:   Fortunately that should be very easy now that I know where the kernel goes.

 *Quote:*   

> That seems suspicious. Are you sure there is no output from the kvm process on its stdout/stderr?

 

YES it is silent, all is well

 *Quote:*   

> Does it have permission to open the LVM device node? What is the output of info block in the monitor?

 

Yep, permissions are good.  There's a lot of output from that.. I'm not going to bother posting it, since, well, everything is working.  

Thanks again

----------

## Hu

 *john.newman wrote:*   

> WAIT .. are you telling me the guest kernel is supposed to physically live on the host file system??  hahahaha!

 

Sometimes.  If you use -kernel, then kvm reads the host filesystem to load a file into the guest.  If you leave out -kernel and instead provide only a path to a virtual hard disk, then kvm will pass control to the bootloader in the guest, which will search the guest filesystem to find a kernel.  The -kernel option is primarily used for environments where you care more about convenience of loading the kernel than about exactly emulating the way a guest would normally load.
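Concretely, the two styles look something like this (device and kernel paths are examples):

```shell
# Style 1: direct kernel load. kvm reads the kernel image from the HOST
# filesystem, so no bootloader is needed inside the guest at all.
kvm -drive file=/dev/vms/vm1,if=virtio \
    -kernel /boot/vms/vmlinuz-2.6.30-gentoo-r5 \
    -append 'root=/dev/vda'

# Style 2: normal boot. No -kernel; the guest's own bootloader (e.g. GRUB
# installed on /dev/vms/vm1) finds and loads a kernel from the GUEST disk.
kvm -drive file=/dev/vms/vm1,if=virtio,boot=on
```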

----------

## john.newman

OK, thanks for explaining that.  

I will be using the -kernel option then; it seems to fit better with what I am going for.  I'd rather not install GRUB on the guests, but it is good to know I can go that way if I want.  There are so many options here.  

Is it correct that I would still have to make && make modules_install && make install the kernel on the guest system?  I tried skipping all of that and got some nasty errors; then I tried installing just the modules and got fewer; now that I have built and installed the kernel, everything is running absolutely [ ok ].  The errors may or may not have been due to something else I screwed up.  But now I have polished off some Gentoo install scripts and am rapidly adding guests that automatically talk to each other and to the host.   :Cool: 

Plus I can easily mount the guests and manage them.  This is pretty damn good; I am very impressed so far.  Excellent work.  I also have my windoze install running here, this is fantastic

----------

## Hu

 *john.newman wrote:*   

> Is it correct that I would still have to make && make modules_install && make install the kernel on the guest system?

 

Probably, but again, it depends.  You can build a kernel on one system and use it on a system which has not built its own, subject to two main limitations.

First, you must install all required modules into the target system, since the kernel on the target system will search its filesystem for those modules, not the host filesystem.  Alternately, you could build a monolithic kernel and not worry about it.
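As a sketch of the first point, the kernel build system can install modules into a mounted guest root via INSTALL_MOD_PATH (the mount point here is an example):

```shell
# On the host, from the guest kernel's source tree, with the guest
# filesystem mounted (paths illustrative):
mount /dev/vms/vm1 /mnt/vms/vm1
# Installs the modules under /mnt/vms/vm1/lib/modules/<version>/
make modules_install INSTALL_MOD_PATH=/mnt/vms/vm1
umount /mnt/vms/vm1
```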

Second, you cannot emerge (even as a prebuilt binary) packages like sys-apps/lm_sensors, because they wrongly assume that they can search for a kernel .config, and they die if they do not find an acceptable .config.  There are two parts to this problem.  First, the linux-info eclass defaults to dying if it cannot find an acceptable kernel .config when linux_chkconfig_present is used.  You can mitigate its bad behavior with the right environment variable, but the same variable also suppresses checks elsewhere.  Even if you suppress that behavior, there is the secondary problem that sys-apps/lm_sensors will die on its own if it does not see the symbols it wants.  These checks are ostensibly to prevent users from silently creating configurations that may compile fine, but cannot work.  However, they have the unfortunate side effect of inappropriately constraining users who know what they are doing.

----------

## john.newman

OK, you are correct.  I tried it out again and confirmed that a) you absolutely do need to install the modules, and b) you can boot without the full build but will run into strange problems.  So I would say it is best practice to go through all of those steps even where they are not strictly mandatory.

Hu, thanks to your posts, I am now running an almost legit server farm here on my desktop with no issues.   :Cool:   :Very Happy:  I do appreciate it.

OT: right now I am working on a backup process for the virtual machines.  My idea so far is to shut each node in the group down, mount the LV partition, rsync the files to a directory on another disk, unmount the LV, and restart the kvm process.  That is pretty clean, I guess.  I will have a separate manual step for backing up the physical host: boot Gentoo from a USB key and run the same rsync command over the physical host's partitions.  That requires two restarts of the machine, but I've been burnt backing up a running system before, so that is the way to do it.  I think?
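As a sketch of that loop, assuming helpers to start and stop each node (the helper names, VM names, and paths are all hypothetical):

```shell
#!/bin/sh
# Per-VM cold backup sketch: shut down, mount, rsync, unmount, restart.
# stop_vm/start_vm are placeholders for however each node is managed
# (e.g. system_powerdown via the monitor, then relaunching kvm).
BACKUP_DIR=/mnt/backup/vms

for vm in vm1 vm2; do
    stop_vm "$vm"
    mount "/dev/vms/$vm" /mnt/vms/tmp
    rsync -a --delete /mnt/vms/tmp/ "$BACKUP_DIR/$vm/"
    umount /mnt/vms/tmp
    start_vm "$vm"
done
```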

I do need to know how to halt a running virtual node from a terminal on the host, and how to 'attach' a new terminal to an existing kvm process.  Last night I accidentally killed my X server, so the shell that started kvm was no longer available, but kvm was still running.  I needed to get back in there to shut it down cleanly, but I ended up running kill, which is every bit as good as pulling the power cord.    :Rolling Eyes: 

----------

## Hu

The archive method you describe sounds safe.  I would probably risk the inconsistency to keep systems alive, unless you run some application that is both critical and is susceptible to problems if a non-atomic copy is made.

You can generate ACPI events in the guest by using the system_reset and system_powerdown commands in the KVM monitor.  You can redirect the monitor to a socket, so that it is accessible even if you lose access to the main shell.  You should probably run kvm in a screen session if you are not putting it in daemon mode.  The only way to reattach to a KVM would be to start it in VNC mode, at which point you can reattach by starting a new VNC client.
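For example (the socket path is illustrative, and socat is only one of several ways to talk to a unix socket):

```shell
# Start the guest with its monitor bound to a unix socket, so the monitor
# survives the loss of the launching terminal:
kvm -drive file=/dev/vms/vm1,if=virtio \
    -monitor unix:/tmp/vm1.mon,server,nowait

# Later, from any shell, ask the guest for a clean ACPI shutdown:
echo system_powerdown | socat - unix-connect:/tmp/vm1.mon
```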

----------

