# Best way to tunnel a public IPv4 address

## JC Denton

So I've been reading this and thinking about how best to implement it for multiple services on the backend that may, or may not, be on the same system.  My initial test was to have an internal router/firewall VM on the backend, but I'm realizing the rub there is that it'll likely involve double NAT, and the backend service (e.g., NGINX or Postfix) would lose the real source address.  So my current thought is to have each backend ride a WireGuard tunnel up to the host with the public IPv4 address.  That box can then forward 25, 80, 443, and others as needed.  Does that seem sensible, or is there a better / more efficient way to do it?

Thanks in advance  :Smile: 

----------

## szatox

 *Quote:*   

> Then that box can forward on 25, 80, 443, and others as needed

 

I'd just set up service-specific proxies.

MTAs log themselves in email headers, and an HTTP proxy can insert X-Forwarded-For (XFF) or similar headers too. Redesigning your private network to expose the client's IP to the backends looks like a lot of effort to create a very fragile solution.
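For the HTTP side, a minimal sketch of such a proxy on the public box (an nginx vhost; the server name and backend address are made up for illustration):

```
# Hypothetical nginx vhost on the public host; 10.0.0.10 stands in for
# the backend. X-Forwarded-For / X-Real-IP carry the real client address.
server {
    listen 80;
    server_name www.example.org;

    location / {
        proxy_pass http://10.0.0.10;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

For SMTP the equivalent information survives anyway, since each relaying MTA records its hop in the Received: headers.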

----------

## NeddySeagoon

JC Denton,

If all the services you want to run are behind the same public IP address or the same subnet, then port forwarding to the host (VM) running each service is all you need.

e.g. I have a Hetzner bare metal host that does nothing on the bare metal except run KVMs.

I have a mail server and web server in KVMs that are reached by port forwarding of the public IP address.
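As a sketch, that port forwarding is just a few DNAT rules on the bare-metal host (the interface name and guest addresses below are placeholders, not my actual setup):

```
# On the bare-metal host, as root; eth0 and the guest addresses are
# placeholders. Forward inbound mail and web traffic to the right KVM.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 25 \
    -j DNAT --to-destination 192.168.122.10:25
iptables -t nat -A PREROUTING -i eth0 -p tcp -m multiport --dports 80,443 \
    -j DNAT --to-destination 192.168.122.20
# Let the forwarded traffic through the FORWARD chain.
iptables -A FORWARD -d 192.168.122.0/24 \
    -m conntrack --ctstate NEW,ESTABLISHED,RELATED -j ACCEPT
```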

It happens that all these installs are on the same physical hardware, but that's not important. It works equally well if the installs are on different physical hardware too.

There is no tunnelling required. The same would be true if I had a /29 subnet; each service could then have its own IP.

Tunnelling would only be required if you wanted to run services at another site not behind the same IPv4 subnet.

Think VPN. When I use my work's VPN, I get an IP address on my employer's network, tunnelled over the public internet.

From your question, I don't think that's what you want.

----------

## JC Denton

I probably should have diagrammed it out a bit.

```
(Mail VM, Web VM) -> ESX/KVM -> My Router -> My ISP (which blocks inbound 25/80/443 and outbound 25) -> VPS Provider -> My VPS (with its own public v4 address and no such firewall restrictions)
```

So my current thought there is to have each "local" VM (e.g., mail, web, etc.) make a separate WireGuard tunnel to the VPS and then have the VPS NAT as needed.
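One such tunnel could be sketched like this, as a wg-quick config on a backend VM (keys, endpoint, and addresses are placeholders, reusing the /30 idea):

```
# /etc/wireguard/wg0.conf on a backend VM (sketch; keys/endpoint are
# placeholders). PersistentKeepalive keeps the NAT mapping alive so the
# VPS can reach back in through my router/ISP.
[Interface]
PrivateKey = <vm-private-key>
Address = 10.30.1.2/30

[Peer]
PublicKey = <vps-public-key>
Endpoint = vps.example.org:51820
AllowedIPs = 10.30.1.1/32
PersistentKeepalive = 25
```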

Does that make a bit more sense?

----------

## pingtoo

I think you are OK with using your VPS as a proxy, so I suggest using *socat* as a port relay service; that should be sufficient for your purpose.

 *JC Denton wrote:*   

> I probably should have diagramed it out a bit.
> 
> ```
> (Mail VM, Web VM) -> ESX/KVM -> My Router -> My ISP (which blocks inbound 25/80/443 and outbound 25) -> VPS Provider -> My VPS (with its own public v4 address and no such firewall restrictions)
> ```
> ...

 

Here is how I envision the implementation (conceptually):

```
   Internet

   +----------------------+
   |                      |
   |   +---------------+  |
   |   |     proxy     |  |
   |   +---------------+  |
   |          VPS         |
   |   +---------------+  |
   |   |     socat     |  |
   |   +---------------+  |
   |                      |
   +----------------------+
               /\
              /  \
             /    \
            +-+  +-+
              |  |
              |  |
              |  |
              +--+
   +----------------------+
   |                      |
   |   +---------------+  |
   |   |     socat     |  |
   |   +---------------+  |
   |       HomeLab        |
   |   +---------------+  |
   |   |  Mail-service |  |
   |   +---------------+  |
   |                      |
   +----------------------+
```

On the VPS, use socat to listen for and accept the connection from the HomeLab (the computer behind the firewall), then connect to the proxy:

```
socat openssl-listen:vps-tunnel-port,fork,reuseaddr,cert=/path/to/server.pem,cafile=/path/to/cacerts.crt TCP4:127.0.0.1:vps-public-proxy-port
```

On the HomeLab computer, use socat to connect to the VPS, then connect to the service:

```
socat openssl:vps.domain.tld:vps-tunnel-port,cert=$HOME/etc/client.pem,cafile=$HOME/etc/cacerts.crt TCP4:homelab-mailserver:smtp
```

You can set up your own private CA and issue your own client and server certificates; that way the tunnel is secured.
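The private CA part could be done roughly like this with openssl (a sketch; all filenames and subject names are placeholders, and socat's cert= option wants the key and certificate concatenated into one PEM):

```shell
# Throwaway private CA; filenames and subjects are placeholders.
openssl req -x509 -newkey rsa:4096 -nodes -keyout ca.key -out cacerts.crt \
    -days 3650 -subj "/CN=tunnel-ca"

# Server key + CSR, then sign the CSR with the CA.
openssl req -newkey rsa:4096 -nodes -keyout server.key -out server.csr \
    -subj "/CN=vps.domain.tld"
openssl x509 -req -in server.csr -CA cacerts.crt -CAkey ca.key \
    -CAcreateserial -out server.crt -days 825

# Client certificate for the HomeLab end, same procedure.
openssl req -newkey rsa:4096 -nodes -keyout client.key -out client.csr \
    -subj "/CN=homelab"
openssl x509 -req -in client.csr -CA cacerts.crt -CAkey ca.key \
    -CAcreateserial -out client.crt -days 825

# socat wants key and cert in a single file.
cat server.key server.crt > server.pem
cat client.key client.crt > client.pem
```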

I suggest using a container to implement the socat function. If you are OK with using Docker, there are plenty of socat Docker images you can download, which can make the whole implementation rather simple.

----------

## Hu

I think that socat is not a good choice here.  As an application-level proxy, it would introduce additional buffering that a NAT/VPN solution would not, and it may not properly propagate subtleties of the TCP connection state.

Using a Docker container for a program as simple as socat is massive overkill.  socat is available in Portage, has few dependencies, and all those dependencies are likely to be installed on a typical non-embedded system already.  Using a Docker image for this means finding a trustworthy image, which seems like more trouble than just `emerge --ask --verbose net-misc/socat`.

Personally, I would do a single VPN connection between ESX/KVM and VPS provider, and let those two hosts handle NAT/routing, so that the individual VMs can remain unaware that there is anything complicated occurring.  Since the ports do not conflict, the VPS system can NAT all 3 ports to the ESX/KVM system, and let KVM determine which VMs receive which ports.
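Sketched on the VPS end (the `wg0` interface name and the ESX/KVM tunnel address 10.30.1.2 are assumptions):

```
# On the VPS, as root; eth0/wg0 and 10.30.1.2 are placeholders.
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A PREROUTING -i eth0 -p tcp -m multiport --dports 25,80,443 \
    -j DNAT --to-destination 10.30.1.2
# If the ESX/KVM side routes replies back through the tunnel, no SNAT is
# needed and the real client addresses survive. Otherwise, masquerade on
# wg0 -- at the cost of the backends seeing the tunnel address instead:
# iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE
```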

----------

## JC Denton

 *Hu wrote:*   

> Personally, I would do a single VPN connection between ESX/KVM and VPS provider, and let those two hosts handle NAT/routing, so that the individual VMs can remain unaware that there is anything complicated occurring.  Since the ports do not conflict, the VPS system can NAT all 3 ports to the ESX/KVM system, and let KVM determine which VMs receive which ports.

 

How would you do that if you have multiple ESX systems managed by a vCenter in the backend, though? That's why I'm thinking about the individual "service" VMs having their own VPN connections.

----------

## pingtoo

Ah, here goes the VM vs docker debate,  love it   :Very Happy: 

 *Hu wrote:*   

> I think that socat is not a good choice here.  As an application level proxy, it would introduce additional buffering that a 
> 
> NAT/VPN solution would not, and may not properly propagate subtleties of the TCP connection state.

 

I think this depends on the intention: do we want a flexible architecture to support the current applications AND all the future unknowns, or do we just want a fixed architecture to support the currently known apps? If all we care about is one web server and one mail server, I believe a fixed architecture is quick to deploy and easy to secure. On the other hand, if the intention is flexibility, then NAT/VPN is the way to go.

If performance is important, then use NAT/VPN; if the proxy concept is acceptable, then socat will do. Actually, come to think of it, any proxy app can do it, for example nginx. In my mind the OP's request is not about performance, since traffic will traverse multiple nodes before reaching the apps.

I think today's standard protocols (SMTP, HTTP) are very well known and most proxy apps will handle them correctly, and if you know the application protocol well, socat can be programmed to support many TCP state manipulations.

 *Hu wrote:*   

> Using a Docker container for a program as simple as socat is massive overkill.  socat is available in Portage, has few dependencies, and all those dependencies are likely to be installed on a typical non-embedded system already.  Using a Docker image for this means finding a trustworthy image, which seems like more trouble than just emerge --ask --verbose net-misc/socat.

 

I suggested downloading an existing image just to help speed up implementation; one can always get an official Alpine Linux (or Gentoo stage3) image and install socat with minimal effort, or, with lots of work, build your own image   :Cool: 

 *Hu wrote:*   

> Personally, I would do a single VPN connection between ESX/KVM and VPS provider, and let those two hosts handle NAT/routing, so that the individual VMs can remain unaware that there is anything complicated occurring.  Since the ports do not conflict, the VPS system can NAT all 3 ports to the ESX/KVM system, and let KVM determine which VMs receive which ports.

 

I support the idea of using a single VPN when the requirements call for performance. Once a VPN is established, the rest is just a matter of routing.

In terms of VM vs container (Docker): I think the OP has a preference for VMs, but I suggest looking into using a container. That way you can place socat together with the application in one container, with all the correct configuration for the socat tunnel. The end result is a simple container up/down to publish the service(s) to the Internet.

----------

## pingtoo

 *JC Denton wrote:*   

> How would you do that if you have multiple ESX systems managed by a vCenter in the backend, though? That's why I'm thinking about the individual "service" VMs having their own VPN connections.

 

I would suggest creating one VM node with a VPN client and making that VM act as a router (or firewall); for example, set up a pfSense VM node and you will be able to do both VPN and routing.

----------

## Hu

 *JC Denton wrote:*   

> How would you do that if you have multiple ESX systems managed by a vCenter in the backend, though? That's why I'm thinking about the individual "service" VMs having their own VPN connections.

 You could do that, if you prefer.  You could also have the VPN linking a chosen ESX with the VPS server, and let that chosen ESX redirect the traffic unencrypted over the local LAN to the right ESX, for non-local VMs.

----------

## JC Denton

 *pingtoo wrote:*   

>  I would suggest create one VM nodes with VPN client and make the VM as router (or firewall) for example setup a pfsence VM node you will be able to do both VPN and routing.

 

 *Hu wrote:*   

> You could also have the VPN linking a chosen ESX with the VPS server, and let that chosen ESX redirect the traffic unencrypted over the local LAN to the right ESX, for non-local VMs.

 

So, interestingly enough, I tried something like this.  As a really quick spin out of the gate, I used two OpenBSD systems and one of my Gentoo boxes.

```
web (Gentoo, NGINX listening on 80/443, on 10.30.0.105/24) -> vpn_vm (OpenBSD, WireGuard on 10.30.1.2/30 and main interface on 10.30.0.254/24) -> **wireguard tunnel** -> vps (OpenBSD, WireGuard on 10.30.1.1/30 and main interface with external public IP)
```

The VPS on the "public" side, with a route added to reach 10.30.0/24 over WireGuard, can happily curl/wget/etc. the 10.30.0.105 system.  But any form of pf rule telling it to NAT or redirect 80/443 to 10.30.0.105 never even hits the WireGuard interface.  I'm beginning to think it might be an OpenBSD oddity, so I'm going to try to redo that setup with Gentoo boxes instead of the OpenBSDs.
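For reference, what I'd been trying on the VPS looks roughly like this (a sketch; `vio0` as the public interface and `wg0` as the tunnel are guesses at the interface names):

```
# On the VPS, IP forwarding must be on, or redirected packets are dropped
# before they ever reach wg0:
#   sysctl net.inet.ip.forwarding=1
#
# /etc/pf.conf sketch; vio0/wg0 are placeholder interface names.
pass in on vio0 proto tcp to (vio0) port { 80 443 } rdr-to 10.30.0.105
# The web VM's default route points at my local router, not vpn_vm, so
# without help its replies never come back through the tunnel. nat-to on
# wg0 forces the return path (at the cost of the backend seeing the
# tunnel address instead of the real client):
pass out on wg0 proto tcp to 10.30.0.105 received-on vio0 nat-to (wg0)
```

The other thing to check is that the VPS's WireGuard peer entry for vpn_vm has 10.30.0.0/24 in its AllowedIPs, since WireGuard's cryptokey routing silently drops traffic for addresses outside that list.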

----------

