# How to configure Full Cone NAT with iptables?

## lvl1s7a

Hi Experts;

I want to find the right iptables commands combination to address the following need:

Please check the diagram : http://i42.tinypic.com/35b7bsg.jpg

- NEs are NATed through the Linux box (using iptables) towards the WAN cloud, where the NTP servers are situated.

- In order to achieve redundancy, the NTP servers are in a load-balancing cluster with one virtual IP address (172.30.4.245).

- The problem is that when the NEs request NTP updates using 172.30.4.245, the NTP response is received from one of the actual IP addresses (.200, .230, .240).

Example:

iptables does not allow this flow, which is normal behaviour, since the requested and responding addresses are not the same (172.30.4.245 vs 172.30.4.230):

Request : UDP 10.68.2.11:23445 ---> 172.30.4.245:123 (this is before NAT; after NAT the source is 10.23.14.72)

Response: UDP 172.30.4.230:123 ---> 10.23.14.72:23445 (Response to the WAN address)

I'm wondering if there is any way to make iptables match the returning UDP flow based only on the source/destination ports, regardless of the IP addresses, and perform the NAT back to the LAN on that basis.
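For context, a typical NAT setup like the one described probably looks something like the sketch below (the interface names and the plain SNAT rule are assumptions, not taken from the thread); the comments explain why conntrack drops the reply:

```shell
# Assumed topology: eth0 = LAN (10.68.2.0/24), eth1 = WAN (10.23.14.72)

# Source-NAT everything leaving towards the WAN:
iptables -t nat -A POSTROUTING -o eth1 -j SNAT --to-source 10.23.14.72

# conntrack tracks each UDP exchange by the full tuple
# (src IP, src port, dst IP, dst port). The outgoing request creates an
# entry towards 172.30.4.245:123, so a reply arriving from
# 172.30.4.230:123 matches no entry and is never de-NATed back to the NE.
# Stock iptables/conntrack has no "full cone" mode that keys on ports alone.
```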

UDP/NTP is just an example; almost all the needed services are set up in the same way (load balancing in a cluster).

Appreciate your help !

Thanks & Regards

----------

## Mad Merlin

If I'm reading this correctly, you're using the load balancing features in iptables to create a VIP and load balance across multiple backend servers.

I might suggest using ipvsadm/keepalived (in masq mode) for the load balancing instead. They are especially tailored for HTTP load balancing, but are flexible enough to load balance almost any type of service. This way, you'll get the responses back from the VIP rather than the real backend IP, and you can have more flexible health checking for bringing backend servers in and out of the pool.
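As a rough sketch (the addresses come from the thread; the round-robin scheduler is an assumption), the VIP could be defined with ipvsadm directly:

```shell
# Define a UDP virtual service on the VIP with round-robin scheduling
ipvsadm -A -u 172.30.4.245:123 -s rr

# Add the three real NTP servers in masquerading (NAT) mode (-m)
ipvsadm -a -u 172.30.4.245:123 -r 172.30.4.200:123 -m
ipvsadm -a -u 172.30.4.245:123 -r 172.30.4.230:123 -m
ipvsadm -a -u 172.30.4.245:123 -r 172.30.4.240:123 -m
```

In masq mode the director rewrites the reply's source address back to the VIP, so the NEs only ever see 172.30.4.245.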

I'll also point out that NTP specifically would benefit from using all 3 servers (and even a fourth) rather than a single server behind a load balancer. With multiple servers configured, NTP will use all of them, and can spot servers that hand out bad time (google "ntp falseticker" for further information) and consequently not sync against them. Additionally, because your NTP packets will reach different backend servers (even with persistence set, you will need to bring backends down sooner or later), the clients will see responses that may confuse them (even when NTP-synced, servers may be tens or hundreds of milliseconds apart).
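Following that advice, the NEs' NTP clients could simply be pointed at all three real servers, bypassing the VIP entirely. A sketch of an ntp.conf fragment (the addresses are from the thread):

```
# /etc/ntp.conf on each NE: poll all three real servers directly,
# letting ntpd select among them and reject any falseticker
server 172.30.4.200 iburst
server 172.30.4.230 iburst
server 172.30.4.240 iburst
```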

----------

