# KVM Performance

## db_404

I have a system that runs a number of KVM VMs - all Gentoo (both the host and the guest VMs). The guests all have paravirt IO and networking enabled and seemingly working. However, I often see that the host CPU usage is vastly higher than the guests', e.g. (taken while performing a file copy to a guest):

Host:

```
Tasks: 102 total,   2 running, 100 sleeping,   0 stopped,   0 zombie
Cpu(s):  1.3%us,  1.4%sy,  0.0%ni, 96.5%id,  0.6%wa,  0.1%hi,  0.1%si,  0.0%st
Mem:   4026420k total,  3582820k used,   443600k free,   671768k buffers
Swap:  2097148k total,    90004k used,  2007144k free,     6400k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 4638 root      20   0 2192m 2.0g  716 R   91 52.2 516:03.52 qemu-system-x86
 4664 root      20   0  222m 137m  708 S    2  3.5 106:35.21 qemu-system-x86
    1 root      20   0  3888  536  500 S    0  0.0   0:13.29 init
```

Guest:

```
Tasks:  89 total,   3 running,  86 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.1%us,  0.1%sy,  0.0%ni, 99.6%id,  0.2%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   2060696k total,  2045108k used,    15588k free,    76952k buffers
Swap:  1048568k total,        0k used,  1048568k free,  1747284k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 3267 root      15  -5     0    0    0 S   10  0.0   0:26.78 nfsd
 3272 root      15  -5     0    0    0 R    8  0.0   0:30.12 nfsd
 3271 root      15  -5     0    0    0 R    4  0.0   0:29.51 nfsd
```

Any idea if this is normal, or if not, where to look to find out what's wrong? I've not got a huge amount of experience with KVM, having mostly used Xen (where I've never seen anything like this), so I don't really have a baseline to know whether this is just normal behavior or something that needs to be fixed.
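For what it's worth, one quick sanity check is to confirm the guests really are running on the virtio drivers and not falling back to emulated devices. A minimal sketch, run inside a guest - the interface name `eth0` is an assumption, substitute your own:

```shell
# Check that the paravirt (virtio) drivers are actually in use in the guest.
lsmod | grep virtio              # expect virtio_net, virtio_blk, virtio_pci, ...
ls /sys/bus/virtio/devices/      # one entry per paravirt device
ethtool -i eth0 | grep driver    # should report "driver: virtio_net" (eth0 assumed)
```

If `ethtool -i` reports something like `e1000` or `rtl8139` instead, the guest is on an emulated NIC despite what the VM config says.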

----------

## audiodef

I use KVM, but I know a lot less about it than you do. However, I CAN tell you that if you use KMS, disable it. For me, it VASTLY decreased the performance of my VMs. V-A-S-T-L-Y. Ghastly!

----------

## idella4

db_404,

I'll try to follow this and compare. I have KVM and Xen running.

What command did you use to produce that output?

I'm seeing similar things on my Gentoo 32-bit system that I'd like to clarify/understand better. Was it top?

----------

## db_404

Don't believe I'm using KSM, /sys/kernel/mm/ksm/pages_shared is 0.

I just used top to produce the output. What you see is typical of what I get: a small guest CPU load seems to produce disproportionately high usage on the corresponding qemu-system-x86 process on the host.
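For anyone else wanting to check, KSM state is exposed under sysfs on kernels built with CONFIG_KSM; the paths below are the standard ones, but whether they exist depends on the kernel config:

```shell
# KSM (Kernel Samepage Merging) status on kernels with CONFIG_KSM.
# run=0 means KSM is disabled; pages_shared counts currently merged pages.
cat /sys/kernel/mm/ksm/run
cat /sys/kernel/mm/ksm/pages_shared
```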

----------

## idella4

db_404

here's a comparison. This is using KVM with a Fedora 12 guest. I got the Fedora guest to mount a non-system partition and execute:

```
time dd if=/dev/zero of=file.img count=8000 bs=512k oflag=direct
8000+0 records in
8000+0 records out
4194304000 bytes (4.2 GB) copied, 194.526 s, 21.6 MB/s
```
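As a sanity check on those numbers, dd's reported rate is just bytes over seconds:

```shell
# Cross-check dd's figure: 4194304000 bytes / 194.526 s, in MB/s (10^6 bytes).
awk 'BEGIN { printf "%.1f MB/s\n", 4194304000 / 194.526 / 1000000 }'
# prints 21.6 MB/s, matching dd's own output
```

The `oflag=direct` in the dd command bypasses the guest's page cache, so this measures the virtual disk path rather than guest memory speed.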

Here is the host while this is going on.

```
top - 11:21:58 up  1:09, 10 users,  load average: 0.95, 0.57, 0.50
Tasks: 143 total,   2 running, 140 sleeping,   1 stopped,   0 zombie
Cpu(s): 12.3%us,  8.8%sy,  0.0%ni, 73.6%id,  4.2%wa,  0.0%hi,  1.1%si,  0.0%st
Mem:   2054744k total,  1790668k used,   264076k free,   749468k buffers
Swap:  1052220k total,        0k used,  1052220k free,   474684k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 8084 root      20   0  415m 229m 3492 R   27 11.4   2:57.88 qemu
 2412 root      20   0 62888 5808 3460 S    7  0.3   4:01.10 libvirtd
 3933 idella    20   0  353m  76m  26m S    4  3.8   0:43.16 firefox
```

Here is the guest under load.

```
top - 11:21:09 up 10 min,  2 users,  load average: 0.50, 0.47, 0.41
Tasks:  72 total,   1 running,  71 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.7%us,  4.9%sy,  0.0%ni, 48.8%id, 45.7%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:    261224k total,    72568k used,   188656k free,    10252k buffers
Swap:   524280k total,        0k used,   524280k free,    30064k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 1211 root      20   0  4616 1124  532 D  8.4  0.4   0:03.78 dd
 1210 root      20   0  2424 1044  836 R  2.3  0.4   0:01.83 top
 1187 root      20   0 10868 3040 2436 S  0.3  1.2   0:01.34 sshd
    1 root      20   0  2016  788  576 S  0.0  0.3   0:04.98 init
```

So no, the host is not running at a high level. Perhaps take a look at the amount of memory you have allocated.

The Fedora guest had 266 MB of memory allocated to it.

The host

```
idella@genny ~ $ cat /sys/kernel/mm/ksm/pages_shared
0
idella@genny ~ $ free
             total       used       free     shared    buffers     cached
Mem:       2054744    1795704     259040          0     749552     475028
-/+ buffers/cache:     571124    1483620
Swap:      1052220          0    1052220
```

The guest

```
[root@localhost ~]# cat /sys/kernel/mm/ksm/pages_shared
0
[root@localhost ~]# free
             total       used       free     shared    buffers     cached
Mem:        261224      72680     188544          0      10596      30272
-/+ buffers/cache:      31812     229412
Swap:       524280          0     524280
```

----------

## Hu

 *audiodef wrote:*   

> I CAN tell you that if you use KMS, disable it. For me, it VASTLY decreased the performance of my VMs. V-A-S-T-L-Y. Ghastly!

 KMS or KSM?  You wrote one.  Another poster responded as though it were the other.  I have not noticed any performance problems using Kernel Mode Setting (KMS) with the host X server that displays KVM guests.  I could believe, but have not personally tested, that Kernel Samepage Merging (KSM) could have negative effects on KVM performance.

----------

## Mad Merlin

IIRC, wait time in the guest is accounted as regular CPU time on the host, so if you have a large disk bound process going (like nfsd), you'll see low CPU usage in the guest (but high wait time) and high CPU usage on the host.

----------

## db_404

 *Mad Merlin wrote:*   

> IIRC, wait time in the guest is accounted as regular CPU time on the host, so if you have a large disk bound process going (like nfsd), you'll see low CPU usage in the guest (but high wait time) and high CPU usage on the host.

 

Ah, that's interesting, given that my example was shuffling data around over NFS to a RAID 5 array, so lots of I/O wait. I'll have to try some CPU-bound tasks and see what that does.
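A cheap way to try that is a load that touches no disk at all; if the accounting explanation is right, host-side qemu CPU should then closely track the guest's %us instead of running far ahead of it. The command choice here is just one option:

```shell
# Run inside the guest: a purely CPU-bound load with no disk I/O,
# so any host/guest CPU gap can't be blamed on I/O wait accounting.
timeout 30 md5sum /dev/zero >/dev/null &
top -b -n 1 | head -n 5    # %us should rise, %wa should stay near zero
wait
```

While it runs, compare top's %us in the guest against the qemu process's %CPU on the host.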

----------

