# KVM-based VM has slow disk-i/o (udma not available)

## s4l0m0n

Hi guys!

I really spent a fair amount of time searching the web for solutions to my problem, but had no success. Maybe (I hope so) the solution is trivial, but I can't get around it.

I'm running a server which hosts several VMs via KVM (the host runs 2.6.28-gentoo-r5). One of them is a file server and has access to a logical volume which is passed in as hdb. This VM is running 2.6.30, since it fixed some ext4 bugs which crashed the server twice. The KVM guest option is enabled in the kernel, although I don't think it is useful with a C2D (which has native virtualization support).

I noticed that the I/O performance is awful. On the host machine I can write to the LV at 85 MByte/s; inside the VM which uses the LV, I only get 20 MByte/s. So I took a look at dmesg, et voilà:

```
hdb: host side 80-wire cable detection failed, limiting max speed to UDMA33
```

Asking hdparm gives:

```
hdparm -i /dev/hdb

/dev/hdb:

 Model=QEMU HARDDISK, FwRev=0.10.0, SerialNo=QM00002

 Config={ Fixed }

 RawCHS=16383/16/63, TrkSize=32256, SectSize=512, ECCbytes=4

 BuffType=DualPortCache, BuffSize=256kB, MaxMultSect=16, MultSect=16

 CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=3816816640

 IORDY=yes, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}

 PIO modes:  pio0 pio1 pio2 

 DMA modes:  sdma0 sdma1 sdma2 mdma0 mdma1 *mdma2 

 UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5 

 AdvancedPM=no

 Drive conforms to: ATA/ATAPI-5 published, ANSI INCITS 340-2000:  ATA/ATAPI-4,5,6,7

 * signifies the current active mode
```

So I tried hdparm -X udma5 -d1 /dev/hdb, but it had no effect.

As far as I know, cable detection makes sure that only disks connected with a modern 80-wire cable can use UDMA66 and above. But since it's a VM, this should be pointless. The LV on the host resides on a RAID5 with three 1 TB Western Digital disks, each of which is in udma6 mode (verified with hdparm on the host).

I hope I didn't miss any important information. If so, let me know.

So, does anyone know how to activate a higher DMA mode inside the VM? Any hints are highly appreciated!

----------

## Hu

As a guess, the virtual hard drive presented to the guest is incapable of UDMA66, which manifests as the warning about the missing cable.  Remember that the hypervisor presents fake hardware to the guest, so the capabilities of your real hardware are not necessarily available to the guest.  You could try using a different virtual hard drive, such as the scsi or virtio interface.  Those may allow the guest kernel to move data to the hypervisor in a more efficient manner.  Alternately, if you want to give an entire physical hard disk to the guest, you could try the device pass-through support.  I do not know whether it will work here, or if it will give the performance you need.
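
A hedged sketch of what that looks like on the kvm command line; the path and options here are purely illustrative (not taken from the original setup), and the guest kernel needs the virtio_blk and virtio_net drivers for this to work:

```
# illustrative only: present the backing store as a virtio disk
# instead of the default emulated IDE drive
kvm -m 512 \
    -drive file=/dev/volgroup/guest-lv,if=virtio \
    -net nic,model=virtio \
    -net tap
```

Using if=scsi instead of if=virtio would give the guest an emulated SCSI disk instead, which only needs a standard SCSI driver in the guest.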

----------

## s4l0m0n

hm, I think virtio could do the trick. For this I will need to make several changes to the host and the guest, so it'll take some time. Perhaps this can even boost the network interface, which was very slow too. This sounds very promising:

http://www.linux-kvm.org/page/Virtio

Thank you! I will post the results ASAP.

----------

## Mad Merlin

Virtio disks will speed up your disk access a fair bit, but you need to make sure your guest can still find the disk next time it boots, as the disk will be /dev/vda instead of /dev/hda.
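
For example, a root entry in the guest's /etc/fstab would change along these lines (the partition number and mount options here are just illustrative):

```
# before, with the emulated IDE disk:
/dev/hda1   /   ext4   noatime   0 1
# after switching the drive to if=virtio:
/dev/vda1   /   ext4   noatime   0 1
```

The root= parameter in the bootloader configuration needs the same adjustment.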

For the network interface, I found the emulated e1000 card to be very fast (~800-900 Mbit/s), and since the guest I was working with at the time didn't support virtio NICs, I didn't explore that option. Either way, a bridged e1000 is much faster than the default (userspace NAT with a 100 Mbit card).
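
Host-side, the bridge for that kind of setup looks roughly like this (interface names are examples; bridge-utils and a usermode tunctl are assumed):

```
brctl addbr br0            # create the bridge
brctl addif br0 eth0       # attach the physical NIC
ifconfig br0 up
tunctl -t tap0             # create a persistent tap device for the guest
brctl addif br0 tap0
ifconfig tap0 up
# then start the guest with something like:
#   kvm ... -net nic,model=e1000 -net tap,ifname=tap0,script=no
```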

----------

## s4l0m0n

 *Mad Merlin wrote:*   

> Virtio disks will speed up your disk access a fair bit, but you need to make sure your guest can still find the disk next time it boots, as the disk will be /dev/vda instead of /dev/hda.

 

yep, I read about that and will change the paths.

 *Quote:*   

> For the network interface, I found the emulated e1000 card to be very fast (~800-900 Mbit/s), and since the guest I was working with at the time didn't support virtio NICs, I didn't explore that option. Either way, a bridged e1000 is much faster than the default (userspace NAT with a 100 Mbit card).

 

I don't really know if it's the guest's NIC (as in your example, a bridged e1000) or some other problem. I only noticed that my network clients are not able to write more than 3 MByte/s over NFS to the guest. At this speed, doing stage4 backups of my workstation is a real pain in the butt...

----------

## frenkel

Just try using virtio for both the disk and the network card. That should give the fastest I/O possible. If it's still too slow, it might be another problem.

----------

## s4l0m0n

well, I migrated the NIC and the disk to virtio. There is a significant performance boost, but I'm still not happy...

Over NFS, I can now write at 10-20 MByte/s (3 MByte/s without virtio), and 30 MByte/s locally. Does KVM really have such a huge overhead?

----------

## Hu

That depends on several factors.  What is the full command line you are using to start the guest?  What version of KVM are you using?  Some versions used writethrough caching for some backing stores, which was very bad for performance.  If you are storing the guest's disk in a file, the file could be fragmented on the host.
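
For example, if your kvm is new enough, the -drive option takes a cache suboption; cache=none bypasses the host page cache and is often the right choice for a raw backing store such as an LV (a sketch with a placeholder path, not tested against your setup):

```
kvm -drive file=/dev/volgroup/guest-lv,if=virtio,cache=none ...
```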

----------

## frenkel

Are you using disk images? That's a performance penalty too, because you're basically running a filesystem on top of a filesystem. It's better to create a separate partition for it. Furthermore, I would not recommend using LVM either.

----------

## s4l0m0n

this is how I start the machine:

```
kvm -drive file=/mnt/vm/images/lance.img,if=virtio,boot=on -m 512 -net nic,model=virtio,macaddr=52:54:00:12:34:46 -net tap,ifname=tap1,script=no,downscript=no -vnc :0 -name lance -daemonize -drive file=/dev/raid/lance,if=virtio -smp 2
```

lance.img is a qcow2 image and /dev/raid/lance is the LV used by the VM for storage. On the host, I use app-emulation/kvm-85-r2.

/dev/raid/lance was only accepted by kvm after I converted it to a raw image using "kvm-img convert" (I used this howto).

Edit:

it seems that kvm doesn't treat the LV as an image:

```
kvm-img info /dev/raid/lance

image: /dev/raid/lance

file format: host_device

virtual size: 1.8T (1954210119680 bytes)

disk size: 0

```

----------

## frenkel

Your host kernel treats it as a RAID device, but this is an extra level of indirection. Try to create a normal partition and use that for the VM. It should really speed things up.

----------

## s4l0m0n

AFAIK, kvm can only use images. Could you tell me exactly how to give the VM a whole partition?

----------

## frenkel

I'm using it with a whole disk like this:

 *Quote:*   

> -hda /dev/sdb 

 

I don't know if it will work with a partition though; a search on Google seems to say it won't.

----------

## s4l0m0n

 *frenkel wrote:*   

> I'm using it with a whole disk like this:
> 
>  *Quote:*   -hda /dev/sdb  
> 
> I don't know if it will work with a partition though; a search on Google seems to say it won't.

 

That's what I'm talking about. The manpage says that

```
-drive file=/dev/hda,index=0,media=disk
```

and

```
-hda /dev/hda
```

are equivalent. And the same goes for LVs. Anyway, I don't have a disk I could play with, only the raid with LVs on it.

----------

## frenkel

LVs are slower, because one LV could have parts scattered all over your disk, while a partition is physically contiguous. Anyway, if you can't test that, this is the best performance you can get, I guess..

----------

## moe

 *frenkel wrote:*   

> LVs are slower, because one LV could have parts scattered all over your disk, while a partition is physically contiguous. Anyway, if you can't test that, this is the best performance you can get, I guess..

 

LVs could be slower, because the parts could be scattered over one or more disks.

I'm using kvm-88 and gentoo-sources-2.6.30-r4. This is an output from a guest without virtio:

```
# hdparm -tT /dev/sda

/dev/sda:

 Timing cached reads:   2324 MB in  2.00 seconds = 1161.50 MB/sec

 Timing buffered disk reads:  256 MB in  3.03 seconds =  84.61 MB/sec

```

With virtio it's not better:

```
# hdparm -tT /dev/vda

/dev/vda:

 Timing cached reads:   2324 MB in  2.00 seconds = 1161.65 MB/sec

 Timing buffered disk reads:  212 MB in  3.00 seconds =  70.60 MB/sec

```

The disk is a Western Digital Caviar Green with 640GB.
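
Note that hdparm -tT only measures reads; since the original problem was write speed, something like the following inside the guest gives a comparable write figure (the target path is just an example, point it at a file on the disk you want to measure):

```
# minimal write benchmark; conv=fdatasync makes dd flush the data
# to disk before reporting the throughput
TARGET=${TARGET:-/tmp/kvm-write-test.img}
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fdatasync
rm -f "$TARGET"
```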

----------

