# Root server & 2 VMs: best setup for IPv4 and IPv6 connection

## Pearlseattle

Hi

I've been fiddling around with my server's network setup for a while now and I'm not sure which solution would be the best and simplest one.

Situation:

Root server, with 1 public IPv4 address and a lot of public IPv6 addresses available.

The server runs 2 VMs.

VM1 runs the Apache webserver

VM2 runs the mailserver

Final target:

I'd like both external IPv4 and IPv6 requests to end up at the right VM.

What I did so far:

1)

This was simple: I used QEMU's user-mode networking port forwarding (hostfwd) to forward all incoming requests on a specific port (e.g. port 80 for HTTP) to the correct VM (e.g. VM1 for HTTP/port 80 requests).

Example:

```
KVM_CMD="qemu-kvm \
        -nographic -daemonize \
        -balloon none \
        -pidfile $PIDPATH \
        -drive file=/mnt/vm/images/amd64-gentoo-main/rootfs.img,if=virtio,cache=writeback \
        -drive file=/mnt/vm/images/amd64-gentoo-main/swapfile.img,if=virtio,cache=writeback \
        -m 3072 -smp 3 \
        -kernel /boot/current-kernel \
        -append \"root=/dev/vda\" \
        -net nic,model=virtio,macaddr=DE:AD:BE:EF:29:10 \
        -net user,host=192.168.1.2,restrict=n,net=192.168.1.0/24,hostfwd=tcp::80-:80"
```

Unfortunately this mechanism does not handle IPv6  :Sad: 

2)

I am now thinking about a network bridge (br0 plus tap1/2/3...). I tried it yesterday and it worked (I had tap devices). Using the bridge I could assign each VM one of the many external IPv6 addresses, but because I don't have more than 1 external IPv4 address, I think I'll have to set up iptables to forward incoming IPv4 requests to the correct VM depending on the port.

I'm not sure if option 2 is actually valid. It's good that I can decide from within the VM which external IPv6 address I want to use (or at least I think so), but IPv4 won't work because I don't have any additional IPv4 address (the only one I have is used by the host itself) => I might have to create a virtual NIC on the host that points to an internal network (e.g. 10.0.0.1) and give the VMs that kind of address (e.g. 10.0.0.10 to VM1 and 10.0.0.20 to VM2).
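
For what it's worth, the port-based IPv4 forwarding part of this idea could be sketched roughly like below. This is a hypothetical sketch, not a tested config: it assumes the host's public interface is eth0 and the internal addresses 10.0.0.10 (VM1) and 10.0.0.20 (VM2) from the paragraph above.

```
# Hypothetical sketch: DNAT incoming IPv4 traffic by port to the VMs.
# Assumes public interface eth0, VMs behind a bridge on 10.0.0.10 (VM1,
# web) and 10.0.0.20 (VM2, mail), as described above.

# Forward HTTP to VM1 and SMTP to VM2
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
        -j DNAT --to-destination 10.0.0.10:80
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 25 \
        -j DNAT --to-destination 10.0.0.20:25

# Let the VMs reach the net through the host's single public IPv4
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE

# Allow the forwarded traffic through and enable IPv4 routing
iptables -A FORWARD -d 10.0.0.10 -p tcp --dport 80 -j ACCEPT
iptables -A FORWARD -d 10.0.0.20 -p tcp --dport 25 -j ACCEPT
echo 1 > /proc/sys/net/ipv4/ip_forward
```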

Not sure if what I write makes sense...  :Crying or Very sad: 

All this sounds quite complicated... .

Am I making things too complicated? Any simpler solution for this kind of setup?

Thanks a lot.

----------

## frostschutz

I know IPv4 is dying out, but most hosting providers still let you get one or two extra IPs. Not the case with yours? It would make things so much simpler when running IPv4-enabled VMs.

Without extra IPs, you will have no choice but to set up port forwarding / NAT on the server, giving the VM a local-network IPv4 address, the same way your box at home accesses the net from behind the router.

-net user is the easy way out, and you pay dearly with bad network performance. While on the topic of performance, see also http://www.linux-kvm.org/page/Tuning_KVM

As for IPv6, a lot depends on how those addresses reach your box. If your IPv6 subnet is routed to your box (for example via the link local IPv6 addresses), you could simply divide it into smaller subnets, and route an entire IPv6 subnet to your VM. Your VM could then pick any addresses out of its very own sub-subnet.
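
In the routed case, carving out and routing a sub-subnet could look roughly like this (a sketch only; 2001:db8:1::/48 is the IPv6 documentation prefix standing in for your real delegated prefix, and br0 is the bridge from earlier in the thread):

```
# Hypothetical sketch: the provider routes 2001:db8:1::/48 to the host,
# so carve out a /64 per VM and route it over the bridge.
# 2001:db8::/32 is the documentation prefix; substitute your real one.

# Host-side address on the bridge
ip -6 addr add 2001:db8:1::1/64 dev br0

# Route a whole /64 towards VM1; the VM can then pick any address in it
ip -6 route add 2001:db8:1:10::/64 dev br0

# Enable IPv6 forwarding on the host
sysctl -w net.ipv6.conf.all.forwarding=1
```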

If it's not routed, you have to pick single addresses, make them known on the eth0 interface (neighbor discovery), and assign them to the VMs - so the VMs don't have free pick, instead you have to change host configuration every time a VM needs a new IPv6.
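
The non-routed, per-address variant is usually done with NDP proxying; roughly (again a sketch, with 2001:db8::10 standing in for one of the host's assigned addresses):

```
# Hypothetical sketch: answer neighbor discovery for a single IPv6
# address on eth0 and route it to the VM behind br0.

# Let the kernel answer neighbor solicitations for proxied entries
sysctl -w net.ipv6.conf.eth0.proxy_ndp=1

# Make the VM's address known on the external interface...
ip -6 neigh add proxy 2001:db8::10 dev eth0

# ...and route it to the VM over the bridge
ip -6 route add 2001:db8::10/128 dev br0
```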

 *Quote:*   

> Any simpler solution for this kind of setup?

 

Not unless your datacenter simply lets you bridge everything on eth0. Most don't. Personally I prefer routed setups, so VM A cannot take an IP which belongs to VM B. It's also easier to spot configuration errors. It's simple enough once you get the hang of routing.

----------

## Pearlseattle

Thanks a lot for your reply frostschutz.

 *Quote:*   

> As for IPv6, a lot depends on how those addresses reach your box. If your IPv6 subnet is routed to your box (for example via the link local IPv6 addresses), you could simply divide it into smaller subnets, and route an entire IPv6 subnet to your VM. Your VM could then pick any addresses out of its very own sub-subnet.
> 
> If it's not routed, you have to pick single addresses, make them known on the eth0 interface (neighbor discovery), and assign them to the VMs - so the VMs don't have free pick, instead you have to change host configuration every time a VM needs a new IPv6. 

 

Apart from putting the right entry into /etc/conf.d/net so that something shows up in ifconfig and I can ping the host from another IPv6 host, I know nothing about IPv6: do you by chance have some examples?

 *Quote:*   

> As for IPv6, a lot depends on how those addresses reach your box. If your IPv6 subnet is routed to your box (for example via the link local IPv6 addresses)

 

Sorry, I don't get it; I know nothing about "link local" either...   :Crying or Very sad:  . How can I check this?

Thanks a lot!

----------

## FishB8

Link-local is just a range of IP addresses reserved for computers to use when a DHCP server is not available and they have not been assigned static IP addresses. For IPv4, the computer sends out ARP probes for different addresses in the link-local range; when it finds an address that nobody responds to (i.e. the address is not taken) it assigns itself that address. For IPv6, every interface automatically configures a link-local address for itself.

IPv4 link-local addresses: 169.254.0.0/16 (usable range 169.254.1.0 - 169.254.254.255)

IPv6 link-local addresses: fe80::/10
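
As for the "how can I check this?" question above, a quick way to look at what you have (read-only commands, so safe to run):

```
# Show the automatically configured IPv6 link-local address on eth0
ip -6 addr show dev eth0 scope link

# Show the IPv6 routing table; if the default route points at an
# fe80:: next hop, your provider routes your subnet via link-local
ip -6 route show
```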

Try using an Open vSwitch bridge. That way you can set up the routing rules within the bridge itself.
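
Creating such a bridge and attaching the VMs' network devices only takes a few commands (a sketch; br0/tap1/tap2 are the device names used earlier in the thread):

```
# Hypothetical sketch: an Open vSwitch bridge in place of a plain br0,
# with the VMs' tap devices attached to it.
ovs-vsctl add-br br0
ovs-vsctl add-port br0 tap1
ovs-vsctl add-port br0 tap2

# Inspect the result
ovs-vsctl show
```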

----------

## Pearlseattle

Wow - that Open vSwitch seems to be really powerful - never heard of it before!

Looking into it... .

Will come back if I end up creating a giant endless loop of packet-forwards through the whole Internet   :Wink: 

Thanks a lot!!!!!!!!!!!!

 :Very Happy:   :Very Happy:   :Very Happy: 

----------

## Pearlseattle

Mmmhh, I tried Open vSwitch, but there aren't many instructions & FAQs available yet (at least none that I was able to easily understand) => I was able to set it up and create a bridge, but then I just didn't find any "easy" instructions about how to tell the vSwitch how to route packets.

I ended up, as I tried to describe here http://www.blah-blah.ch/it/how-to-s/virtual-nic-bridging-nat-kvm-and-ipv6/, setting up a more or less classical IPv6 bridge using a veth NIC, with HAProxy rerouting incoming IPv6 traffic to the correct VM depending on the requested port (I almost lost my mind trying to give the VMs direct IPv6 connectivity, but it just did not work).
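
For illustration, a port-based HAProxy relay of that kind could look roughly like this. This is a hypothetical sketch, not the actual config from the linked write-up; the 10.0.0.x backend addresses follow the internal-network example from earlier in the thread.

```
# Hypothetical sketch: listen on the host's public IPv6 addresses and
# relay by port to the VMs' internal addresses (10.0.0.10 = VM1 web,
# 10.0.0.20 = VM2 mail, as earlier in the thread).

frontend http_in
    bind :::80 v6only
    mode tcp
    default_backend vm1_http

backend vm1_http
    mode tcp
    server vm1 10.0.0.10:80

frontend smtp_in
    bind :::25 v6only
    mode tcp
    default_backend vm2_smtp

backend vm2_smtp
    mode tcp
    server vm2 10.0.0.20:25
```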

But still, thanks a lot for your help   :Smile: 

----------

## FishB8

So to help explain how this works:

Open vSwitch is a virtual switch/router. You set up rules for it using flow controls. Flow-based switching is where the trend is currently heading with routing/switching hardware and software. Basically it abstracts the logic from the implementation: instead of having a bunch of expensive network appliances that each have to be configured separately, you have a bunch of cheap network appliances that are all controlled from a flow server. The logic is consolidated and controlled from a centralized computer. The reason everything is trending towards this paradigm is that it is cheaper, faster to deploy/manage, and offers more functionality.

OpenVSwitch is a software implementation of the "dumb appliance". It can do all kinds of fancy routing, but it has no "brains" per se. You have to set up flow rules. 

There are 3 components:

There's the Open VSwitch daemon. It does the grunt work and implements the flow rules. (Remember that the kernel component does the actual lifting though)

The Open VSwitch controller is the brains that tells the daemon how to set itself up.

The Open vSwitch DB server is a database that stores the flow rules so that the controller doesn't forget them after a reboot.

So in order to set up a custom config you have to learn how to use tools like ovs-ofctl: http://openvswitch.org/cgi-bin/ovsman.cgi?page=utilities%2Fovs-ofctl.8
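
A couple of flow rules in ovs-ofctl syntax, to give a feel for it (a sketch only; the bridge name br0 and the OpenFlow port numbers of the VMs' tap devices are assumptions):

```
# Hypothetical sketch: steer traffic by TCP destination port, analogous
# to the port-based forwarding discussed above. Ports 1 and 2 are the
# OpenFlow port numbers of the VMs' tap devices on bridge br0.

# TCP port 80 goes out the port where VM1 (web) is attached
ovs-ofctl add-flow br0 "priority=100,tcp,tp_dst=80,actions=output:1"

# TCP port 25 goes to VM2 (mail)
ovs-ofctl add-flow br0 "priority=100,tcp,tp_dst=25,actions=output:2"

# Everything else behaves like a normal learning switch
ovs-ofctl add-flow br0 "priority=0,actions=normal"
```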

Remember that the flow communication between the controller and the vSwitch daemon is a generic flow protocol (OpenFlow), i.e. the controller could be set up to control other hardware or software flow routers/switches as well as the local one (provided the other implementations support the same version of the flow protocol; it is a rapidly changing protocol, and new versions are coming out quite quickly).

----------

