# Routing public IPs in a data center [Need Help]

## quantumsummers

I have an interesting problem that requires a solid solution.

I have a hardened Gentoo server that I want to use as a router. It's a nice machine with 4 NICs.

The really interesting part is that I have been given a public subnet that I must route. This is not an attempt at NAT.

The topology is as follows:

eth2 connects to the data center subnet with IP 123.456.142.58 and talks to the data center router, which has IP 123.456.142.157.

Now I want packets to get from eth2 to eth3. eth3 is connected to a switch which will carry the public network, 123.456.140.112/28.

eth3 has IP 123.456.140.113 and should serve as the default gateway for any machine on the 123.456.140.112/28 subnet.

eth1 needs IP 123.456.140.114 and will be heavily filtered.

eth0 is on a LAN, and again no NAT.
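To make the intended layout concrete, here is roughly how it would be expressed with iproute2 (a sketch only; the netmasks on eth2 and eth0 are assumptions, since only the /28 is stated explicitly):

```shell
# eth2: uplink toward the data center router (netmask assumed for illustration)
ip addr add 123.456.142.58/24 dev eth2
ip route add default via 123.456.142.157 dev eth2
# eth3: gateway address for the routed public /28
ip addr add 123.456.140.113/28 dev eth3
# eth1: second address in the /28, to be heavily filtered
ip addr add 123.456.140.114/28 dev eth1
# eth0: private LAN (addressing assumed), no NAT
ip addr add 192.168.1.1/24 dev eth0
```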

I recognize that I need iproute2; however, I am having trouble with packets not going where they are supposed to, and many martians are being logged.

So in /proc/sys/net/ipv4 I have enabled forwarding (for eth2 and eth3 specifically; I don't want eth0 or eth1 to forward packets).

I have also experimented with rp_filter and the arp_* settings. In fact, I have been flipping many switches and testing, with no good (read: complete) solution so far.
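In sysctl form, that per-interface forwarding setup amounts to the following. One known gotcha: writing net.ipv4.ip_forward resets every per-interface conf.*.forwarding flag to the same value, so the global switch must come first and the per-interface exceptions afterwards.

```shell
# Enable forwarding globally first -- this also overwrites every
# conf.<iface>.forwarding flag, so per-interface settings come after it
sysctl -w net.ipv4.ip_forward=1
# Then turn forwarding back off on the interfaces that must not route
sysctl -w net.ipv4.conf.eth0.forwarding=0
sysctl -w net.ipv4.conf.eth1.forwarding=0
```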

Thus I appeal to the community for assistance, which will be met with many thanks.  As a lark, I plan to fully document the solution and ancillary knowledge required to do this the "Gentoo Way" so that others may have an easier time with situations like this.

Best Regards, 

quantumsummers

----------

## Hu

Please post the output of `ip addr ; ip route`, as run both on the Gentoo router and on one of the machines on the eth3 subnet.  Obfuscating the first two octets is fine.  If you are not joining these interfaces in a bridge, you will probably need to enable proxy_arp.
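If proxy_arp does turn out to be needed, enabling it is a one-liner per interface (a sketch; the kernel treats it as enabled when either the "all" flag or the per-interface flag is set):

```shell
# Let the router answer ARP on behalf of hosts reachable through it,
# on both sides of the forwarding path
sysctl -w net.ipv4.conf.eth2.proxy_arp=1
sysctl -w net.ipv4.conf.eth3.proxy_arp=1
```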

When are the martians reported?  What is the text printed when the kernel logs a martian?  What is the output of `for a in /proc/sys/net/ipv4/conf/*; do for b in forwarding log_martians proxy_arp rp_filter; do echo -n "$a/$b: "; cat $a/$b; done; done`?

----------

## quantumsummers

@Hu thanks for helping! When I get to the data center I will grab all the info you are requesting.

Though I can tell you a bit from memory.

```
net.ipv4.ip_forward = 1
net.ipv4.conf.lo.forwarding = 0
net.ipv4.conf.eth0.forwarding = 0
net.ipv4.conf.eth1.forwarding = 0
net.ipv4.conf.eth2.forwarding = 1
net.ipv4.conf.eth3.forwarding = 1
net.ipv4.conf.all.log_martians = 1
net.ipv4.conf.default.log_martians = 1
net.ipv4.conf.lo.log_martians = 1
net.ipv4.conf.eth0.log_martians = 1
net.ipv4.conf.eth1.log_martians = 1
net.ipv4.conf.eth2.log_martians = 1
net.ipv4.conf.eth3.log_martians = 1
net.ipv4.conf.lo.proxy_arp = 0
net.ipv4.conf.eth0.proxy_arp = 0
net.ipv4.conf.eth1.proxy_arp = 0
net.ipv4.conf.eth2.proxy_arp = 0
net.ipv4.conf.eth3.proxy_arp = 0
```

Not sure about reverse path filtering (rp_filter), but I think it's enabled for all interfaces. I have read that it can cause problems with static routes.
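For reference, rp_filter can be inspected and relaxed like this (a sketch; note that the kernel applies the maximum of the "all" value and the per-interface value, so both have to be lowered for it to take effect):

```shell
# Show current reverse-path-filter settings on the routed interfaces
sysctl net.ipv4.conf.all.rp_filter \
       net.ipv4.conf.eth2.rp_filter \
       net.ipv4.conf.eth3.rp_filter
# Relax it; the effective value is max(conf.all, conf.<iface>)
sysctl -w net.ipv4.conf.all.rp_filter=0
sysctl -w net.ipv4.conf.eth2.rp_filter=0
sysctl -w net.ipv4.conf.eth3.rp_filter=0
```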

Is this situation a good candidate for bridging eth2 and eth3, or perhaps a pseudo-bridge with proxy_arp? Packets just need to go from eth2 to eth3 transparently.  I plan on applying some filtering/shaping on eth2. Will that interfere?

Thanks for the help, this has been a difficult problem for me.

----------

## quantumsummers

```
# ip route show
xx.xx.142.156/30 dev eth2  proto kernel  scope link  src xx.xx.142.158
xx.xx.140.112/28 dev eth1  proto kernel  scope link  src xx.xx.140.114
xx.xx.140.112/28 dev eth3  proto kernel  scope link  src xx.xx.140.113
192.168.1.0/24 dev eth0  proto kernel  scope link  src 192.168.1.1
127.0.0.0/8 dev lo  scope link
default via xx.xx.142.157 dev eth2
```

```
# ip addr show (link/ether lines removed)
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
    inet 127.0.0.1/8 brd 127.255.255.255 scope host lo
2: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    inet xx.xx.140.113/28 brd xx.xx.140.127 scope global eth3
3: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 100
    inet xx.xx.142.158/30 brd xx.xx.142.255 scope global eth2
4: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    inet xx.xx.140.114/28 brd xx.xx.140.127 scope global eth1
5: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    inet 192.168.1.1/24 brd 192.168.1.255 scope global eth0
```

----------

## quantumsummers

The martians being logged are of two types.

The first are from a machine that is connected to the switch that should carry the public IPs:

```
Jan 27 15:48:38 guard martian source xx.xx.140.113 from xx.xx.140.117, on dev eth3
Jan 27 15:48:38 guard ll header: ff:ff:ff:ff:ff:ff:00:1b:fc:e2:15:54:08:06
```

The second type appear to come from other machines scanning my IPs:

```
Jan 27 15:43:26 guard martian source xx.xx.140.123 from 201.127.233.150, on dev eth2
Jan 27 15:43:26 guard ll header: 00:15:17:98:76:87:00:0c:db:76:18:1e:08:00
Jan 27 15:44:04 guard martian source xx.xx.140.125 from 76.6.117.96, on dev eth2
Jan 27 15:44:04 guard ll header: 00:15:17:98:76:87:00:0c:db:76:18:1e:08:00
Jan 27 15:44:07 guard martian source xx.xx.140.125 from 76.6.117.96, on dev eth2
Jan 27 15:44:07 guard ll header: 00:15:17:98:76:87:00:0c:db:76:18:1e:08:00
Jan 27 15:44:43 guard martian source xx.xx.140.123 from 92.81.130.27, on dev eth2
Jan 27 15:44:43 guard ll header: 00:15:17:98:76:87:00:0c:db:76:18:1e:08:00
Jan 27 15:44:46 guard martian source xx.xx.140.123 from 92.81.130.27, on dev eth2
Jan 27 15:44:46 guard ll header: 00:15:17:98:76:87:00:0c:db:76:18:1e:08:00
Jan 27 15:46:24 guard martian source xx.xx.140.114 from 189.62.165.112, on dev eth2
Jan 27 15:46:24 guard ll header: 00:15:17:98:76:87:00:0c:db:76:18:1e:08:00
Jan 27 15:46:27 guard martian source xx.xx.140.114 from 189.62.165.112, on dev eth2
Jan 27 15:46:27 guard ll header: 00:15:17:98:76:87:00:0c:db:76:18:1e:08:00
Jan 27 15:46:40 guard martian source xx.xx.140.123 from 76.77.233.16, on dev eth2
Jan 27 15:46:40 guard ll header: 00:15:17:98:76:87:00:0c:db:76:18:1e:08:00
```

----------

## quantumsummers

@Hu

The hosts that need a public IP follow the scheme below; in this case the inet addr is .120 as shown.

I have used this same scheme on other hosts on the public switch as well, with no luck.

```
config_eth1=( "xx.xx.140.120/28 brd xx.xx.140.127" )
routes_eth1=( "default gw xx.xx.140.113" )
```
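At runtime, those conf.d/net entries amount to roughly the following on the host (a sketch using the obfuscated addresses; eth1 here is the host's own interface name from config_eth1):

```shell
# Approximately what baselayout does with the above at ifup time
ip addr add xx.xx.140.120/28 brd xx.xx.140.127 dev eth1
ip link set eth1 up
ip route add default via xx.xx.140.113
```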

Thanks!

EDIT: One thing to note: somehow I have access to the router via ssh, so something is working.  I can provide any additional info from that machine or any other box at that location.

----------

## Hu

I have not encountered this problem before, but from what I know of martians, it looks like at least some of your problems are caused by putting eth1 and eth3 in the same subnet, then making eth3 the dominant interface.  Martians are reported for packets with impossible values, as defined in part by the kernel's routing table.  Based on your ip route show output, the kernel gives the eth1 route higher priority than the eth3 route, so packets coming in eth3 with a source address consistent with the eth1 subnet are being marked as martian.

The simplest fix would be to remove eth1 from that subnet.  Put both IP addresses on eth3, and write your filtering rules according to IP address.  This should be functionally equivalent from a networking perspective, though you will have lower maximum throughput since all traffic for both addresses would go over eth3.  Turning off reverse path filtering might solve it as well, but I consider it unhealthy to have two interfaces with the same subnet and routing information.  If you need both NICs serving traffic for that subnet, bonding might be more appropriate.
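A sketch of that rearrangement, using the addresses from the earlier output (best run from a console rather than over an ssh session to the address being moved):

```shell
# Move the second /28 address from eth1 onto eth3 so that only one
# interface carries a route for xx.xx.140.112/28
ip addr del xx.xx.140.114/28 dev eth1
ip addr add xx.xx.140.114/28 dev eth3
# Filtering for .114 then keys on the destination address, e.g.
# (hypothetical rule) drop everything to .114 except ssh:
iptables -A INPUT -d xx.xx.140.114 -p tcp ! --dport 22 -j DROP
```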

I suggest trying proxy_arp first, and falling back to creating a full bridge if proxy_arp does not address your needs adequately.  Shaping should be fine when doing IP forwarding with proxy_arp.  I have not specifically tried shaping a port which is part of a bridge.  Filtering works fine both with bridges and with simple IP forwarding, though you need some extra kernel features if you want to filter a bridge.
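Should the full-bridge fallback become necessary, the classic bridge-utils recipe looks like this (a sketch only: the member ports carry no addresses once bridged, the exact addressing on br0 depends on what the upstream expects, and filtering a bridge requires the bridge-netfilter/ebtables kernel options):

```shell
# Join eth2 and eth3 at layer 2; br0 takes over the IP configuration
brctl addbr br0
ip addr flush dev eth2
ip addr flush dev eth3
brctl addif br0 eth2
brctl addif br0 eth3
ip link set br0 up
ip addr add xx.xx.140.113/28 dev br0
```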

----------

