# KVM - slow disk IO

## Castor-x

Hi all! I have a "Linux 2.6.34-gentoo-r1 #1 SMP x86_64 AMD Athlon(tm) II X4 620 GNU/Linux" server with KVM installed and running. The problem is that guest OSes other than Linux (Windows, FreeBSD) have VERY slow disk IO. In particular, FreeBSD reads the CD image and writes data to the HD image (raw or qcow2, it doesn't matter) at about 20 KB/s. I've tried different OS versions - Windows XP, Vista, FreeBSD 7.1 and 8.0, in x86 and x86_64 editions.

Has anyone run into this?

----------

## dmitryilyin

linux 2.6.31, kvm 0.12.3, cheap SATA disks

- freebsd 6.4 i386: read 44 MB/s, write 5 MB/s
- linux 2.6.26 with virtio drivers: read 48 MB/s, write 20 MB/s

linux 2.6.32, kvm 0.12.4, expensive SAS RAID

- freebsd 8 amd64: read 132 MB/s, write 10 MB/s
- linux 2.6.32 with virtio drivers: read 241 MB/s, write 66 MB/s
- write without virtio: 39 MB/s

Conclusions:

- kvm disk IO performance is pretty bad
- virtio helps a little
- freebsd doesn't have virtio support and probably won't
- freebsd doesn't have good support for any virtualization

----------

## dmitryilyin

virtualbox: linux 2.6.31, SATA emulation, cheap SATA drive

- read 36 MB/s, write 28 MB/s

without virtualization:

- read 60 MB/s, write 42 MB/s

Conclusion: not much better than kvm.

----------

## dmitryilyin

linux 2.6.32, kvm 0.12.4, memory disk

- linux + virtio: read 900 MB/s, write 400 MB/s
- linux + IDE emulation: read 540 MB/s, write 197 MB/s
- linux + SCSI emulation: read 850 MB/s, write 580 MB/s
- freebsd 8 amd64 + IDE: read 525 MB/s, write 412 MB/s
- freebsd 8 amd64 + SCSI: read 360 MB/s, write 470 MB/s
- without virtualization: read 5100 MB/s, write 1800 MB/s

Conclusions:

- kvm disk IO performance still sucks
- virtio is only marginally better

----------

## idella4

I have kvm; can either of you two instruct me on how to invoke and measure disk I/O, and I'll check it out and post.

Separately, I have an old DVD of Solaris 10, and kvm just can't install it. qemu can, but it's pathetically slow at installing, to the point that I abandoned it a number of times.

I've upgraded kvm and acquired a current CD of OpenSolaris, which may be faulty because kvm is having trouble with it.

Still trying to install it.

----------

## dmitryilyin

You can use dd:

- `dd if=/dev/disk of=/dev/null bs=1M` - read test (use bs=1m on freebsd)
- `dd if=/dev/zero of=/dev/disk bs=1M` - write test (DO NOT DO THIS ON A DISK WITH DATA - IT WILL WIPE THE DISK! use an empty disk)
- `dd if=/dev/zero of=file bs=1M` - write test to a file (give it some time to get past the cache)
- `hdparm -Tt /dev/disk` - read test with and without cache

disk can be:

- hd[a-z] - linux IDE subsystem
- sd[a-z] - linux SCSI, SATA or new PATA
- vd[a-z] - linux virtio
- ad[0-9]+ - freebsd IDE/SATA
- da[0-9]+ - freebsd SCSI
- ada[0-9]+ - freebsd new SATA AHCI (8.0+)

----------

## idella4

me and my big keyboard.

If you want some measurements, it will take a while. Of all the systems you cited, BSD is the one I don't have at hand.

OpenSolaris and Windows will have to do, after I get them installed! BSD maybe later on.

OK, all those tips above relate to BSD and Linux types.

I have a Windows installed, and it's not so good. I made a vfat partition available to the install and got it to copy a 9.18 GB folder. The Windows progress window estimated the completion time at between 64 and 66 minutes.

Not having used it for years, I can't remember the Windows tools for measuring copy speeds. Do you want to suggest one?

I don't know about OpenSolaris. The CD is a live CD, which it should rip through. It keeps taking an age to bring up the desktop, fails to complete it, and/or the mouse keeps freezing.

Not good.

----------

## dmitryilyin

You can use this ftp://ftp.ufanet.ru/pub/unix/BSD/Frenzy/1.3/iso/ FreeBSD live CD to test disk IO performance. Just boot the iso in kvm and use dd to measure IO speed. The same goes for Solaris.

----------

## idella4

Castor-x,

what version are you running, kvm or qemu-kvm?

I have an OpenSolaris VM run by kvm. It's a standard virt-manager raw file image. I just repeated the test a few times.

```
dd if=/dev/zero of=file bs=50k
dd: writing 'file': No space left on device
4643+0 records in
4642+0 records out
237670400 bytes (238 MB) copied, 3.63015 s, 65.5 MB/s
rm file
dd if=/dev/zero of=file bs=50k
dd: writing 'file': No space left on device
4650+0 records in
4649+0 records out
238034944 bytes (238 MB) copied, 1.13926 s, 209 MB/s
rm file
dd if=/dev/zero of=file bs=50k
dd: writing 'file': No space left on device
4649+0 records in
4648+0 records out
237977600 bytes (238 MB) copied, 1.11213 s, 214 MB/s
rm file
dd if=/dev/zero of=file bs=50k
dd: writing 'file': No space left on device
4649+0 records in
4648+0 records out
237977600 bytes (238 MB) copied, 1.23812 s, 192 MB/s
```

The above is from the live .iso. Once installed, here's some more to compare.

```
$ dd if=/dev/zero of=file bs=50k
[cut it short]
114824+0 records in
114824+0 records out
5878988800 bytes (5.9 GB) copied, 215.28 s, 27.3 MB/s
$ rm file
$ dd if=/dev/zero of=file bs=5k
[let it go]
dd: writing 'file': No space left on device
2024935+0 records in
2024934+0 records out
10367662080 bytes (10 GB) copied, 390.45 s, 26.6 MB/s
```

I shall add BSD later.

Done: BSD. This is using a minimal BSD .iso and a qcow2 image file of only 4.4 GB, but it gives some measurements anyway.

```
# dd if=/dev/ad0 of=/dev/null bs=50k
92180+1 records in
92180+1 records out
4719640576 bytes transferred in 33.991728 secs (138846739 bytes/sec)
# dd if=/dev/ad0 of=/dev/null bs=50k
92180+1 records in
92180+1 records out
4719640576 bytes transferred in 389.929465 secs (12103832 bytes/sec)
[repeat]
# dd if=/dev/ad0 of=/dev/null bs=50k
92180+1 records in
92180+1 records out
4719640576 bytes transferred in 37.935013 secs (124413839 bytes/sec)
# dd if=/dev/ad0 of=file bs=5k
/Frenzy/ramdisk/: write failed, file system is full
dd: file: No space left on device
12399+0 records in
12398+0 records out
63477760 bytes transferred in 4.735827 secs (13403733 bytes/sec)
[repeat]
# dd if=/dev/ad0 of=file bs=5k
/Frenzy/ramdisk/: write failed, file system is full
dd: file: No space left on device
12399+0 records in
12398+0 records out
63477760 bytes transferred in 4.384362 secs (14478221 bytes/sec)
# dd if=/dev/ad0 of=file bs=5k
/Frenzy/ramdisk/: write failed, file system is full
dd: file: No space left on device
1240+0 records in
1239+0 records out
63436800 bytes transferred in 1.046345 secs (60627040 bytes/sec)
```

dmitryilyin, do you know what Solaris would call another device? The devices it makes are numerous and not intuitive. It seems to default to something like /dev/cua0.

The Solaris fdisk is unusable. Enter `fdisk -l` and it says unknown command.

----------

## devsk

To get the real disk throughput, as opposed to cache performance, you either use oflag=direct or oflag=sync with dd. Without one of those, you are likely measuring "cache backed by disk with a delay" performance - even more so from within a VM.

There is a way to disable the host cache for the guest's IO in virtualbox; not sure about the others. Combine that with oflag to get at least half-decent numbers.
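As a sketch of the difference this makes on a Linux guest with GNU dd (the file name and size are arbitrary examples; oflag=direct needs filesystem O_DIRECT support, which is why oflag=sync is used here as the portable fallback):

```shell
#!/bin/sh
# Compare a cached write against a synchronous one. The gap between the
# two throughput lines is roughly what the page cache was hiding.
F=/tmp/dd-cache-test.$$
echo "cached:"
dd if=/dev/zero of="$F" bs=1M count=32 2>&1 | tail -n 1
echo "sync (O_SYNC):"
dd if=/dev/zero of="$F" bs=1M count=32 oflag=sync 2>&1 | tail -n 1
rm -f "$F"
```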

----------

## dmitryilyin

FreeBSD's dd doesn't have oflag at all. Your file is only about 200 MB, so you are inside the cache, and your write speeds show it. Use a file of about 10-20 GB to exclude the cache, or write directly to an empty block device.

----------

## pactoo

I wonder how performance would increase if you passed a SAS/SATA controller straight through to the guest, provided VT-d (or AMD-Vi) support is present. It should be close to native performance, but has anybody done any tests?

----------

## dmitryilyin

 *dmitryilyin wrote:*   

> FreeBSD's dd doesn't have oflag at all. Your file is only about 200 MB, so you are inside the cache, and your write speeds show it. Use a file of about 10-20 GB to exclude the cache, or write directly to an empty block device.

 

FreeBSD's block devices are in fact character devices with block semantics, so I/O on them is not cached at all; only I/O on files goes through the cache.

In Linux, I/O on both files and block devices is cached; the "raw" device interface is deprecated and has been replaced by the O_DIRECT option. So use oflag=sync on Linux if you are measuring disk/file I/O speed.

FreeBSD does have O_DIRECT for files too, but there is no support for it in dd, so use dd-rescue for files.

----------

## vitoriung

 *Castor-x wrote:*   

> Hi all! I have a "Linux 2.6.34-gentoo-r1 #1 SMP x86_64 AMD Athlon(tm) II X4 620 GNU/Linux" server with KVM installed and running. The problem is that guest OSes other than Linux (Windows, FreeBSD) have VERY slow disk IO. In particular, FreeBSD reads the CD image and writes data to the HD image (raw or qcow2, it doesn't matter) at about 20 KB/s. I've tried different OS versions - Windows XP, Vista, FreeBSD 7.1 and 8.0, in x86 and x86_64 editions.
> 
> Has anyone run into this?

 

Do you have this issue even when running only one VM? It could be like in my case: when running multiple VMs, one particular VM takes all the IO and makes all the other VMs, including the host, pretty unresponsive.

I set up the CFQ IO scheduler and use ionice to lower the IO priority of those "IO hungry" VMs; however, it doesn't entirely resolve the issue, just reduces it.
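As a sketch of that approach (the PID here is a stand-in for the offending guest's qemu-kvm process; the ionice classes only take effect when CFQ is the active scheduler on the host disk):

```shell
#!/bin/sh
# Demote a process's IO priority with ionice. $$ (this shell) stands in
# for the qemu-kvm PID of the IO-hungry guest.
ionice -c2 -n7 -p $$   # best-effort class, lowest priority (7)
ionice -p $$           # show the class/priority now in effect
```

Class 3 (`-c3`, idle) is even stricter: the process only gets disk time when nothing else wants it.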

I am running kvm and libvirt; most guests are Win2003 and SUSE Enterprise Servers.

A good tool to see what is actually consuming IO is iotop, where you can see which processes (VMs) are taking IO bandwidth. If you don't see much IO activity there, then the issue is inside the guest. My Windows guests actually perform very well, so I wouldn't say it's a kvm issue.

----------

## frostschutz

I'm very happy with KVM + virtio disk performance, then again it's just a softraid1 with consumer hdds on the host side.

----------

## vitoriung

 *frostschutz wrote:*   

> I'm very happy with KVM + virtio disk performance, then again it's just a softraid1 with consumer hdds on the host side.

 

I actually don't use viostore, because I haven't seen any performance advantage on my Win guests, and have instead seen host IO degradation from guests using viostore. However, that could just be down to the circumstances of my configuration.

I understand that the performance of SCSI should be significantly better. Still, when you have 3 SATA disks, you should get decent performance when running 3 guests - it should be like having 3 separate real machines, each using 1 disk. That is, I'm afraid, just theory; in reality, IO can be a big issue when a guest demands all the IO bandwidth, and none of the schedulers seems to cope with this very well, at least in my case.

I am currently looking into the Block IO controller after reading this interesting article - http://lwn.net/Articles/360958/

----------

## vitoriung

 *vitoriung wrote:*   

> I actually don't use viostore, because I haven't seen any performance advantage on my Win guests, and have instead seen host IO degradation from guests using viostore. However, that could just be down to the circumstances of my configuration.

 

Of course it was. I found out that I was using old drivers from Qumranet; Red Hat has released new Windows drivers, which I discovered only yesterday. I'm now using them for both network and storage, and no issues so far.

----------

