# Teaming NIC

## jgaffney

Hello,

I believe I have a problem with my teamed NICs.

I have a Gentoo server with teamed NICs. I recently replaced the Cisco switch it was connected to with an HP switch. Now speeds are very slow when transferring files. If I shut down one of the trunked ports on the switch, speeds are faster, but not as fast as they were when the teaming was working with the Cisco switch.

I don't know much about NIC teaming and am looking for some direction for troubleshooting.

Any help would be appreciated.

Thank you.

----------

## NeddySeagoon

jgaffney,

Channel bonding (teaming) has to be supported at every hop. Does the HP switch support channel bonding?

----------

## jgaffney

I guess "Bonding" would be the term for Linux. And yes, it's supported on the switch; HP calls it "Trunking".

----------

## jgaffney

After some further research I'm more confused.

I've run mii-diag and it says eth0 and eth1 are at 100baseTx-FD. I tend to believe that mii-diag cannot handle Gig NICs???

These are both Gig NICs, and the switch ports themselves show that they are connected at 1 Gig.

On top of that I ran mii-diag on the bond (bond0) and it says it's at 10Meg Half Duplex.

```
rsync1 ~ # mii-diag eth0
Basic registers of MII PHY #1:  1000 796d 0020 6190 05e1 c1e1 000d 2001.
 The autonegotiated capability is 01e0.
The autonegotiated media type is 100baseTx-FD.
 Basic mode control register 0x1000: Auto-negotiation enabled.
 You have link beat, and everything is working OK.
 Your link partner advertised c1e1: 100baseTx-FD 100baseTx 10baseT-FD 10baseT.
   End of basic transceiver information.
rsync1 ~ # mii-diag eth1
Basic registers of MII PHY #1:  1000 796d 0020 6190 05e1 c1e1 000d 2001.
 The autonegotiated capability is 01e0.
The autonegotiated media type is 100baseTx-FD.
 Basic mode control register 0x1000: Auto-negotiation enabled.
 You have link beat, and everything is working OK.
 Your link partner advertised c1e1: 100baseTx-FD 100baseTx 10baseT-FD 10baseT.
   End of basic transceiver information.
rsync1 ~ # mii-tool bond0
bond0: 10 Mbit, half duplex, link ok
rsync1 ~ #
```

So... I'm assuming mii-diag cannot handle 1000base, so it reports 100base as its speed. That's fine if it's true. But how do I get the bond to connect any faster than 10Meg HD?

Tried:

```
rsync1 ~ # mii-tool bond0
bond0: 10 Mbit, half duplex, link ok

rsync1 ~ # mii-tool bond0 --force=100baseTx-FD
SIOCSMIIREG on bond0 failed: No such device

rsync1 ~ # mii-tool bond0 -F 100baseTx-FD
SIOCSMIIREG on bond0 failed: No such device

rsync1 ~ # mii-diag bond0 -F 100baseTx-FD
Setting the speed to "fixed", Control register 2100.
SIOCSMIIREG on bond0 failed: No such device
Basic registers of MII PHY #0:  0000 0004 0004 0004 0004 0004 0004 0004.
 Basic mode control register 0x0000: Auto-negotiation disabled, with
 Speed fixed at 10 mbps, half-duplex.
 You have link beat, and everything is working OK.
 Link partner information is not exchanged when in fixed speed mode.
   End of basic transceiver information.
```
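For what it's worth, the bond itself never "negotiates" a speed; the authoritative place to look is the bonding driver's own status file, `/proc/net/bonding/bond0`. A sketch against hypothetical contents (on the live box, just `cat` the real file; per-slave Speed/Duplex lines only appear with newer bonding drivers, older ones show MII Status alone):

```shell
# Hypothetical sample of /proc/net/bonding/bond0; the field names match
# the bonding driver's format, the speeds shown here are made up.
cat > /tmp/bond0.sample <<'EOF'
Bonding Mode: load balancing (round-robin)
MII Status: up

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
EOF

# Pull out just the per-slave link state:
grep -E '^(Slave Interface|MII Status|Speed|Duplex)' /tmp/bond0.sample
```

If the slaves show the right speed here, the 10Mb/half that mii-tool prints for bond0 can be ignored.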

----------

## cappaberra

I currently have the same issue... still not sure how to do it.    :Confused: 

-cappaberra

----------

## jgaffney

Never did figure it out. 1 Gig was plenty of bandwidth for me, so I quit fighting it. Sorry I can't help   :Sad: 

----------

## frilled

You can't force speed on the bonding interface; instead, you need to apply it to the individual physical interfaces.
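In other words, something like this (a hedged sketch; mii-tool needs root and a real PHY behind each name, and eth0/eth1 are assumed from the posts above):

```shell
# Force 100/full on each physical slave, never on bond0 itself: the bond
# is a virtual device with no MII registers, hence the SIOCSMIIREG error.
slaves="eth0 eth1"
for nic in $slaves; do
    mii-tool -F 100baseTx-FD "$nic" 2>/dev/null \
        || echo "could not set $nic (needs root and a real PHY)"
done
```

The switch ports at the other end should be fixed to 100/full as well; a speed/duplex mismatch across an HP "trunk" would by itself explain slow transfers.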

----------

## cappaberra

It is though.... unless I'm looking in the wrong place. I get exactly the same results as jgaffney above:

 *Quote:*   

> ...I ran mii-diag on the bond (bond0) and it says it's at 10Meg Half Duplex.
> 
> ```
> rsync1 ~ # mii-diag eth0
> 
> ...

 

So, I'm not exactly sure what to try next... it seems like the slave interfaces are auto-negotiated to 100-full, but the bond comes up 10-half.  Puzzling to me....   :Confused: 

-cappaberra

----------

## frilled

I'm quite sure mii-diag does not output anything meaningful on the virtual interface. IIRC it tries to get information from the driver directly, but there's no such information in the bonding structures (as it makes little sense). For starters, I would simply ignore it.

Since both NICs have negotiated 100Mbit (or possibly 1Gbit), the problem might be somewhere else. I know for sure there are NICs that have autonegotiation problems with HP (and other) switches. On the other hand, if you want to do Gigabit, autoneg usually works (and fixing the speed does not), since 1000Base-T uses all 8 wires (as opposed to 4 at <=100Mbit) and NICs and switches recognize that through negotiation.

It's hard to debug without one of those handy testers like Fluke's, since mii-* and ethtool seem to bail out on GBit connections. Try messing with Auto-MDIX on the HP; some HP firmware revisions are known to have problems with that, too. And of course set the switch's logging to debug.

Actually, it's beneficial to start with each interface alone, without bonding (which you seem to have done). If you don't get good speed that way, bonding won't make anything better, really  :Razz: 

If you can't get your hands on a network testing tool, I'd try this testing sequence:

Disable trunking on the switch.

eth0 (w/o bond) connected -> switch, eth1 disconnected 

eth1 (w/o bond) connected -> switch, eth0 disconnected 

If those are okay, enable bonding. If they aren't, boo! Mess with Auto-MDIX. Mess with hardware handshake (also known to barf sometimes). If you want GBit, that's all there is. Otherwise try setting BOTH the switch and the NICs to 100Full, for example; rinse and repeat.

eth0 (with bond) connected -> switch, eth1 disconnected 

eth1 (with bond) connected -> switch, eth0 disconnected 

See above. If it gets worse now, you're screwed ^_^.

Now enable trunking. If something goes wrong, your switch is messed almighty  :Smile: 

If it's okay, connect the second NIC. If it goes haywire now, either your bonding settings or your trunking is messed. Hard to decide without the logs from the switch, though  :Razz: 

Hm. This might be fairly obvious, of course. Still, you could narrow it down a bit. Without a good tool your only other option might be to put sniffing boxes between each NIC and the corresponding switch port. Great mess, too.

I hope (but don't really think) this helps. Share the results.
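The sequence above, as a skeleton script (everything hardware-touching is left commented out; 192.0.2.10 stands in for whatever netperf target you use, and eth0/eth1 are assumed names):

```shell
#!/bin/sh
# Isolation sequence sketch: prove each link alone before blaming the bond.
nics="eth0 eth1"
for nic in $nics; do
    echo "== $nic alone: no bond, switch trunking disabled =="
    # ifconfig "$nic" up
    # netperf -H 192.0.2.10 -t TCP_STREAM    # expect near line rate
    # ifconfig "$nic" down
done
echo "== each NIC again, bonded, trunking still disabled =="
echo "== finally: trunking on, both slaves attached =="
```

The point of the ordering is that each step changes exactly one variable, so the first step that degrades identifies the culprit.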

----------

## cappaberra

I apologize for not sharing more about my topology before you posted... although that might help someone else, my particular environment has no switches in play: two Gentoo boxes with two identical dual-port 100Mbit ethernet cards, connected back-to-back by two crossover cables.

I guess my question would be: is there any tool that can accurately assess the speed/duplex of the bond0 interface?
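For the slaves, ethtool (rather than the mii-* tools, which only read the 10/100 MII registers) reports the real link state; bond0 itself has no PHY to query, so the 10Mb/half it reports appears to be a stub. A sketch against hypothetical `ethtool eth0` output (a 100Mbit card, as in this setup):

```shell
# Hypothetical sample of `ethtool eth0` output; on the live box, run
# the real command and grep the same fields:  ethtool eth0 | grep -E 'Speed|Duplex'
cat > /tmp/ethtool-eth0.sample <<'EOF'
Settings for eth0:
	Speed: 100Mb/s
	Duplex: Full
	Auto-negotiation: on
	Link detected: yes
EOF
grep -E 'Speed|Duplex|Link detected' /tmp/ethtool-eth0.sample
```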

Thanks for your input!!

cappaberra

----------

## joyo222

hi,

    All of our bonded NICs report the same thing through mii-diag and SNMP: 10Mb and half-duplex, even though the underlying cards report 100Mb and full-duplex.

I don't think it's anything to worry about as mii-diag and SNMP are most likely filling in default, guessed values.

From the net-snmp FAQ:

 *Quote:*   

> Some operating systems will provide a mechanism for determining the speed and type of network interfaces, but many do not.  In such cases, the agent attempts to guess the most appropriate values,  usually based on the name of the interface.
> 
>   The snmpd.conf directive "interface" allows you to override these guessed values, and provide alternative values for the name, type and speed of a particular interface.  This is particularly useful for fast-ethernet, or dial-up interfaces, where the speed cannot be guessed from the name.

 
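The directive the FAQ mentions looks like this (a sketch; the type code 6 is ethernetCsmacd from the IANA ifType list, and the speed is given in bits per second):

```
# /etc/snmp/snmpd.conf -- override the guessed values for bond0:
# interface IFNAME TYPE SPEED
interface bond0 6 200000000
```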

----------

## cappaberra

I agree with you...  When I run some netperf tests, it performs as though it has an aggregate throughput of about 200Mb/s (taking protocol overhead into consideration), so I'll just leave it at that.   :Smile: 

Thanks!

----------

