# kvm internal network

## Adel Ahmed

I want to set up an internal high-capacity network, like the internal network in VirtualBox. I will be transferring tens of GBs over this link (using a backup product).

I cannot seem to set something like this up under KVM and virt-manager.

The best transfer rate I could achieve is 40 MB/s using NAT.

thanks

----------

## szatox

4 simple tips from me:

1) use bridged networking

2) use virtio drivers (you need support in the host's kernel, the guest's kernel, and the option on the qemu command line)

3) use jumbo frames (it seems you must enable jumbo on at least one enslaved interface before you can enable it on the bridge)

4) make sure it's not the hard drive that is your bottleneck
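The first three tips above can be sketched with `iproute2`. This is only a sketch: the interface names (`eth0`, `br0`) and the MTU value are assumptions, and the commands need root.

```shell
# Create a bridge and enslave the physical NIC to it
# (eth0 and br0 are assumed names; substitute your own)
ip link add name br0 type bridge
ip link set eth0 master br0
ip link set br0 up

# Jumbo frames: raise the MTU on the enslaved interface first,
# then on the bridge itself (per tip 3 above)
ip link set eth0 mtu 9000
ip link set br0 mtu 9000
```

For tip 2, virtio is selected per guest NIC; in virt-manager set the NIC's "Device model" to virtio, or put `<model type='virtio'/>` inside the `<interface>` element of the libvirt domain XML.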

----------

## NeddySeagoon

blakdeath,

If you are getting 40 MB/s between machines sharing the same rotating rust platters, it won't get much better.

The problem is that you are reading and writing the same HDD but in different areas, so you have lots of slow head movements.

A HDD will do between 120 MB/s on the outside tracks and 40 MB/s on the inside tracks, so your 40 MB/s sounds OK.
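One quick way to check whether the drive itself is the limit is a raw sequential-read test with dd. The device name here is an assumption; point it at your actual disk, and note it needs root.

```shell
# Rough sequential-read benchmark; /dev/sda is an assumed device name.
# iflag=direct bypasses the page cache so you measure the disk, not RAM.
dd if=/dev/sda of=/dev/null bs=1M count=1024 iflag=direct
```

dd prints the achieved throughput on stderr when it finishes; compare that figure against your network transfer rate.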

----------

## Adel Ahmed

I'll give it a shot on my SSD

see how things go

----------

## Adel Ahmed

32 MB/s on my SSD; this is still inadequate.

Any ideas on how to improve it?

----------

## NeddySeagoon

blakdeath,

That suggests the bottleneck is not the SSD, as head movements have been eliminated.

Does VBox have access to a partition, or is its filesystem a file on the host's filesystem?

The latter is slow as there are two passes through the filesystem code: once in VBox and again on the host.

This can be made slower if the filesystem holding the VBox file uses journaling.

----------

## Adel Ahmed

it's a file on the host's file system

and I have no journaling on the host FS

----------

## szatox

I think you're making a mistake trying to measure network performance by sending files over it.

Use a tool that tests only the network, without generating (or being limited by) load on any other component.

Check out iperf or netperf, for example. You need one of those at both ends of the link you want to test. Launch one in listening mode, then launch the other pointing it at the first one. They will tell you how fast the network is and whether you're looking for the bottleneck in the right place.
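The two-ended setup described above looks roughly like this with iperf3 (the guest/host roles and the server IP are assumptions; 192.168.122.1 is just the usual libvirt default-network host address):

```shell
# On one end (e.g. the host), start a server in listening mode:
iperf3 -s

# On the other end (the guest), point the client at the server's IP
# and run for 30 seconds:
iperf3 -c 192.168.122.1 -t 30
```

iperf3 reports the achieved bandwidth on both ends, with no disk involved on either side.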

----------

## Navar

I just use dd, netcat and /dev/zero (local pull) -> /dev/null (remote sink) to test raw network throughput. I haven't found anything more efficient. I suppose you could toss pv in there too.
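The dd/netcat method above might look like this. The IP and port are assumptions, and netcat listen flags vary by implementation (traditional netcat wants `-l -p PORT`, OpenBSD nc wants just `-l PORT`), so adjust for your version.

```shell
# On the remote end: sink everything arriving on port 5001 into /dev/null
# (traditional-netcat syntax assumed)
nc -l -p 5001 > /dev/null

# On the local end: stream 10 GB of zeros over the wire;
# pv in the middle shows live throughput (optional)
dd if=/dev/zero bs=1M count=10240 | pv | nc 192.168.122.10 5001
```

Since /dev/zero and /dev/null cost essentially nothing, the number pv (or dd's summary) reports is close to raw network throughput.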

----------

## Adel Ahmed

Now that I've set up my KVM and libvirt again, I'll give it another shot using the dd method.

----------

