# [solved] routing with two different public interfaces

## c00l.wave

To migrate a server from one data center to another, I have to route all traffic arriving at the old IP address to the new server and back again, while already serving content from the new address. Using a proxy is, unfortunately, insufficient. Packets arrive at the second server but are then discarded as martian packets. I've tried several things to fix that, none of which work, so now you're my last hope.  :Wink:

Let's sketch out what I have:

```
           Data Center 1              ||     Data Center 2

              =====================        ==================
         NAT  |      Server 1     |  DNAT  |     Server 2   |
4.3.2.1 ====> | eth0 192.168.70.2 | =====> | eth0   1.2.3.4 |
              | tap0    10.90.0.4 |  VPN   | tap0 10.90.0.1 |
              =====================        ==================
```

The VPN works fine, as does usual routing from/to external IPs on both sides.

What I want:

If a request is made to 4.3.2.1 (e.g. on port 80) it first gets NATed to the first server via a router in DC1.

The server decides to (D)NAT it to 10.90.0.1 which is the IP of server 2 on the other side of my VPN.

The packet arrives at server 2, the connection gets set up and the request is served (daemons listen on the VPN IP). Response packets should not go out on eth0 (public IP 1.2.3.4) but take the same way back that the request came in, i.e. via server 1 (10.90.0.4) acting as a router.

Server 1 receives the packet and applies NAT masquerading.

The response routes out via the router in DC1.

The IP address of the original request is preserved at all times (first source, then destination address).

What I have:

If a request is made to 4.3.2.1 (e.g. on port 80) it first gets NATed to the first server via a router in DC1.

The server decides to (D)NAT it to 10.90.0.1 which is the IP of server 2 on the other side of my VPN.

The packet arrives at server 2. The server does not know how to handle that packet: it is recognized as a martian and gets discarded. If I disable rp_filter, the response is attempted via eth0 instead (which is the wrong gateway).

I thought the source of my martian problem was that the packets have an ambiguous or unknown return route: 10.90.0.0/24 can't be the source of packets from the public internet, at least not as long as I lack a default route back to 10.90.0.4 (which matches the MAC address of all incoming routed packets). So I tried setting up a second routing table, but that doesn't seem to help.

What I did so far:

```
# on server 1
iptables -A FORWARD -i tap0 -s 10.90.0.0/24 -j ACCEPT
iptables -A FORWARD -o tap0 -d 10.90.0.0/24 -j ACCEPT
iptables -t nat -F
iptables -t nat -A PREROUTING -p tcp -i eth0 -s ip.i.am.coming.from --dport 80 -j DNAT --to-destination 10.90.0.1
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```

```
# on server 2

# added to /etc/iproute2/rt_tables:
#     200     vpn-dc1

# run on console:
ip rule add iif tap0 table vpn-dc1
ip rule add oif tap0 table vpn-dc1
ip route add default via 10.90.0.4 dev tap0 table vpn-dc1
ip route add 10.90.0.4 dev tap0 table vpn-dc1
```
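A quick way to check whether the rules and the extra table actually took effect is to inspect them with read-only commands (8.8.8.8 below is just a stand-in for an arbitrary external client address):

```shell
# Show all policy routing rules; the iif/oif tap0 rules should appear
# between the default "local" and "main" entries.
ip rule show

# Show the contents of the vpn-dc1 table.
ip route show table vpn-dc1

# Ask the kernel which route it would pick for a reply packet
# arriving as if from the VPN side.
ip route get 8.8.8.8 from 10.90.0.1 iif tap0
```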

Martians can be seen with net.ipv4.conf.all.log_martians=1 and are accepted with net.ipv4.conf.tap0.rp_filter=0.

With rp_filter = 1, dmesg says:

```
[ 2100.369101] martian source 10.90.0.1 from ip.i.am.coming.from, on dev tap0
[ 2100.369103] ll header: 26:f8:e1:e4:a0:82:86:0c:0e:2b:30:58:08:00
```

The MAC addresses in the ll header are correct and match server 2's and server 1's tap0 interfaces respectively, so I can't see what the problem should be...

If rp_filter is 0, response packets go out via eth0, which is the wrong interface.

What I need is either

- (with rp_filter=1) a way to make those packets enter and stay in the vpn-dc1 routing table and be processed as if they belonged to a separate network, or

- (with rp_filter=0) a way to direct responses going from 10.90.0.1 to addresses outside 10.90.0.0/24 back to 10.90.0.4, which will act as a gateway.

I can't see how to achieve either of these; I would have expected it to already work with the vpn-dc1 routing table. I'm probably missing something here. Do you have any hints?

Thanks a lot in advance!

----------

## papahuhn

Try the following on server 2:

- For incoming packets via tap0, mark them with iptables (MARK).

- Copy the packet mark into the connection mark (CONNMARK --save-mark). That way, response packets emitted by server 2 automatically carry the corresponding connmark.

- In mangle OUTPUT, copy the connmark back to the packet mark (CONNMARK --restore-mark).

- Use an ip rule with fwmark to route those packets back via tap0.
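A minimal sketch of those steps, reusing the vpn-dc1 table from the original post; the mark value 1 is an assumption, not from the thread:

```shell
# Mark packets arriving via tap0 and save the mark on the conntrack entry.
iptables -t mangle -A PREROUTING -i tap0 -j MARK --set-mark 1
iptables -t mangle -A PREROUTING -i tap0 -j CONNMARK --save-mark

# Restore the connection mark onto locally generated reply packets.
iptables -t mangle -A OUTPUT -j CONNMARK --restore-mark

# Route marked packets via the vpn-dc1 table, whose default route
# points back at server 1 (10.90.0.4) over the VPN.
ip rule add fwmark 1 table vpn-dc1
```

The rule order matters: MARK must run before CONNMARK --save-mark, otherwise the connection entry saves an empty mark.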

----------

## c00l.wave

That doesn't seem to work... What does work is adding a rule "from 10.90.0.1 table vpn-dc1", but only for connections I do not NAT further.

If I NAT away from 10.90.0.1 (necessary for FTP) to a VM that also runs on 192.168.70.2, but in DC2, then I cannot simply add "from 192.168.70.2", as that IP is supposed to be reachable from the public internet as well as through that insane routing setup for DC1...

I thought connection marking could come to the rescue, but I can't get it working. What's worse, the TRACE target doesn't log anything, although I do see LOG target and martian logging output in the kernel log. Routing springs to life if I set all interfaces to rp_filter=0, but as eth0 is on a shared network (the server in DC2 is a rented dedicated root server) I would prefer to keep rp_filter=1 on eth0.
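One possible explanation for needing to touch all interfaces: the kernel applies the maximum of net.ipv4.conf.all.rp_filter and the per-interface value, so net.ipv4.conf.tap0.rp_filter=0 alone has no effect while the "all" value is still 1. A hedged sketch that keeps strict filtering on eth0:

```shell
# Effective rp_filter per interface = max(conf.all, conf.<iface>),
# so drop "all" to let the per-interface settings take effect.
sysctl -w net.ipv4.conf.all.rp_filter=0
# Keep strict reverse-path filtering on the shared public interface.
sysctl -w net.ipv4.conf.eth0.rp_filter=1
# Disable it on the VPN interface, where asymmetric routes are expected.
sysctl -w net.ipv4.conf.tap0.rp_filter=0
```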

If I add a LOG rule after packet/connection marking in mangle PREROUTING, I see my mark in the kernel log. If I add a LOG rule in mangle INPUT, the mark appears to be gone. Is that due to NAT routing? TOS marks seem to vanish as well.

I don't really want to use SNAT, as that would mean losing the original client IPs by the time the packets reach the FTP server (e.g. no fail2ban possible). Maybe I could try adding a second interface to 192.168.70.2 for FTP, but if it's possible to get this working with connection marks I would prefer to stay with only one interface.

Or maybe I'm trying to set up connection marking the wrong way, but I think I've tried enough variations by now... Do you have a working example?

----------

## Mad Merlin

What kind of traffic are you serving? If it's just HTTP(S), I've done this before using Pound, a reverse proxy. You run it on the old server and set your backend to the new server. All actual requests are served by the new server, but are available from both IPs. Works like a charm.

If you're serving other types of traffic though, I guess you're out of luck.

----------

## c00l.wave

HTTP is no problem and works fine with "ip rule add from 10.90.0.1 table vpn-dc1". The new server runs two VMs, with an Apache on the host acting as a proxy that passes requests to either the legacy VM (a copy of the server in DC1) or a new VM that will soon host the relaunched websites. Apache listens on all interfaces and can thus be reached via either the public IP or the VPN IP and responds accordingly, so its responses can easily be steered into routing table vpn-dc1.

Unfortunately, the old server also hosts FTP to customers, so I will have to route that traffic as well. Apart from that, Samba has to run for legacy reasons but that's it.

If I disable rp_filter on all devices (including eth0), FTP routing works nicely until a response is on its way back: as connection marking does not work (for whatever reason), the answer is sent via the public network (in DC2) instead of through the VPN (via DC1).

TRACE could help pinpoint the problem - if only it produced any output at all (the packets are matched by the rule, but with no effect).

----------

## c00l.wave

I solved my issues by adding an additional virtual Ethernet interface to my VM, using another IP address. Routing works great with from/to rules on that IP, for FTP as well as Samba.
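For reference, that setup roughly amounts to the following; the interface name eth1 and the address 192.168.70.3 are placeholders, since the thread does not name them:

```shell
# Hypothetical second interface and address for the VM.
ip addr add 192.168.70.3/24 dev eth1

# Steer everything from/to the new address into the vpn-dc1 table,
# whose default route goes back over the VPN to server 1.
ip rule add from 192.168.70.3 table vpn-dc1
ip rule add to 192.168.70.3 table vpn-dc1
```

Because the new address is used exclusively for traffic arriving via the VPN, the from/to rules are unambiguous and no connection marking is needed.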

Why packet/connection marks and TRACE do not work would still be interesting, though.

----------

