# 2nd Interface not working (EC2 instance)

## comonloon

I have an Amazon EC2 instance running Gentoo... After attaching a 2nd network interface and bringing it up via net.eth1 (manually or during boot), my primary interface stops passing traffic. To be more precise: I can't get both interfaces working at the same time. I've tried restarting the primary after logging in via the secondary, still no luck. I can see the primary renew its DHCP lease, and I can see ARP entries on another box (ctrl1 below) for both interfaces (db-master and ha-nfs). But TCP doesn't work over the primary (e.g. ssh). Anyone have any ideas? Help much appreciated.

```
db-master ~ # ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 02:57:54:88:a1:ac  
          inet addr:10.0.0.101  Bcast:10.0.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:204 errors:0 dropped:6 overruns:0 frame:0
          TX packets:194 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:19281 (18.8 KiB)  TX bytes:21877 (21.3 KiB)
          Interrupt:48 

db-master ~ # ifconfig eth1
eth1      Link encap:Ethernet  HWaddr 02:57:54:a2:33:b6  
          inet addr:10.0.0.100  Bcast:10.0.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:508 errors:0 dropped:6 overruns:0 frame:0
          TX packets:396 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:40723 (39.7 KiB)  TX bytes:58929 (57.5 KiB)
          Interrupt:49 

db-master ~ # route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         router          0.0.0.0         UG    3      0        0 eth0
10.0.0.0        *               255.255.255.0   U     0      0        0 eth0
10.0.0.0        *               255.255.255.0   U     0      0        0 eth1
loopback        localhost       255.0.0.0       UG    0      0        0 lo

db-master ~ # ping -I eth1 -c3 ctrl1
PING ctrl1 (10.0.0.169) from 10.0.0.100 eth1: 56(84) bytes of data.
64 bytes from ctrl1 (10.0.0.169): icmp_req=1 ttl=64 time=0.393 ms
64 bytes from ctrl1 (10.0.0.169): icmp_req=2 ttl=64 time=0.567 ms
64 bytes from ctrl1 (10.0.0.169): icmp_req=3 ttl=64 time=0.527 ms

--- ctrl1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.393/0.495/0.567/0.078 ms

db-master ~ # ping -I eth0 -c3 ctrl1
PING ctrl1 (10.0.0.169) from 10.0.0.101 eth0: 56(84) bytes of data.

--- ctrl1 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2001ms

db-master ~ # arp
Address                  HWtype  HWaddress           Flags Mask            Iface
router                   ether   02:57:54:80:00:01   C                     eth1
ctrl1                    ether   02:57:54:a2:13:7d   C                     eth1
dns                      ether   02:57:54:80:00:01   C                     eth1
ctrl1                    ether   02:57:54:a2:13:7d   C                     eth0

db-master ~ # uname -a
Linux db-master 3.2.1-gentoo-r2 #5 SMP Thu Mar 1 09:30:11 PST 2012 x86_64 Intel(R) Xeon(R) CPU E5645 @ 2.40GHz GenuineIntel GNU/Linux

---cut---
Mar 21 06:51:42 db-master kernel: Initialising Xen virtual ethernet driver.
---cut---

ctrl1 ~ # arp
Address                  HWtype  HWaddress           Flags Mask            Iface
router                   ether   02:57:54:80:00:01   C                     eth0
ha-nfs                   ether   02:57:54:a2:33:b6   C                     eth0
www1                     ether   02:57:54:8d:4e:80   C                     eth0
dns                      ether   02:57:54:80:00:01   C                     eth0
db-master                ether   02:57:54:88:a1:ac   C                     eth0
www2                     ether   02:57:54:8d:4e:7f   C                     eth0
```

----------

## comonloon

In case anyone runs into this in the future... Matt@AWS pointed me at the routing and this article:

http://www.linuxjournal.com/article/7291?page=0,0

```
ip route add default via 10.0.0.1 dev eth0 tab 1
ip route add default via 10.0.0.1 dev eth1 tab 2
ip rule add from 10.0.0.101/32 tab 1
ip rule add from 10.0.0.100/32 tab 2
```

Fixed it... got my HA, DRBD, LVM, NFS shared disk going...
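For anyone adapting this: the rules make each source address use its own routing table, so replies leave via the interface that owns the address instead of all traffic being forced through eth0's default route. The result can be inspected with iproute2 (table numbers and addresses here follow the commands above; this is a sketch, not output from the original box):

```shell
# Show the source-based rules (should list "from 10.0.0.101 lookup 1"
# and "from 10.0.0.100 lookup 2" alongside the default main/local rules)
ip rule show

# Show the per-interface default routes installed above
ip route show table 1   # default via 10.0.0.1 dev eth0
ip route show table 2   # default via 10.0.0.1 dev eth1
```

Note these runtime rules don't survive a reboot; they need to be reapplied from a boot script or your distro's network config.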

----------

## Mad Merlin

Maybe I'm not understanding what you're trying to accomplish here, but why don't you bond the two network devices together (probably in active/passive mode)? It looks like they're both on the same network. That makes the network HA transparent to the layers above.
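For reference, a generic Linux sketch of an active/passive bond with iproute2 (interface names and the address are assumptions taken from the thread; whether EC2's ENI/MAC handling tolerates classic bonding is a separate question, so treat this as illustrative only):

```shell
# Create an active-backup bond with link monitoring every 100 ms
ip link add bond0 type bond mode active-backup miimon 100

# Enslave both interfaces (they must be down first)
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0

# Bring up the bond and give it the primary address
ip link set bond0 up
ip addr add 10.0.0.101/24 dev bond0
```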

----------

## comonloon

I actually have 2 servers using linux-ha's heartbeat, DRBD & NFS. Essentially, the 2nd interface on each carries the cluster IP; if the primary server fails, the secondary takes over the cluster IP. This is currently just a testing setup, but it will simply be providing HA NFS for file storage.

----------

## Mad Merlin

Ah, ok, a floating VIP, that makes more sense. If you wanted, you could do this with an IP alias too (i.e., a second IP address on a particular interface, be it a single interface, a bonded interface, or an additional interface). The alias approach would simplify your configuration (no special routing needed).
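The alias approach boils down to adding and removing a secondary address on failover, which is essentially what heartbeat-style cluster managers do for a VIP. A minimal manual sketch, with the VIP address assumed from the thread:

```shell
# Bring the cluster VIP up as a secondary address on eth0
ip addr add 10.0.0.100/24 dev eth0

# Send gratuitous ARP so peers update their caches to the new owner
# (arping from iputils: -U = unsolicited, -I = interface, -c = count)
arping -U -I eth0 -c 3 10.0.0.100

# On failover away from this node, drop the VIP again
ip addr del 10.0.0.100/24 dev eth0
```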

----------

