# Gigabit LAN over CAT5 cable

## gcasillo

I recently bought a Netgear GS105 gigabit switch for my home network. Now that all of my machines have gigabit NICs in them, I want to step up to a gigabit LAN. I do a lot of work with DV video files. They're huge and I often move them around from one machine to the next.

After getting everything hooked up to the switch, I was disappointed with the speed I was getting. I have been getting transfer rates of ~13MB/s with scp/sftp. I nosed around the forums, and while I found some folks having similar problems, I didn't find any firm solutions.

I've read that not all CAT5 cable is the same, and I wonder if this could be my problem. I also suspect that the tg3 drivers each of my NICs use could be to blame. One card is a Broadcom AC9100 card; the other, a Broadcom BCM5702.

Can any of you network gurus set me straight as to what cables can and can't handle gigabit? Any other tips would be greatly appreciated.

----------

## steel300

Cat6 is the standard for gigabit networking. It's the same connector as cat 5, but they changed the internals.

----------

## ferrarif5

CAT6 can handle Gigabit and so should CAT5E.

----------

## kashani

scp and sftp aren't the fastest way to move traffic either. Try changing the encryption to blowfish and see if things pick up.
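A sketch of what that looks like (the hostname and file names are placeholders; on the OpenSSH of this vintage, blowfish is a selectable cipher):

```shell
# Hypothetical transfer: select the cheaper blowfish cipher instead of
# the default. "server" and the file names stand in for your own setup.
scp -c blowfish bigvideo.avi server:/data/
```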

Also, no system pushes data around quickly without some RAM and CPU. Back in '99 it took 2 CPUs and 4GB of RAM to keep the gigE card on the Sun 4500 fed with data while the other 10 CPUs did real work.

Additionally, if your hard drives are slow you aren't going to see much of a speed increase either. I'd also check to make sure all machines have actually negotiated GigE speeds and not FastE speeds, which is exactly what you're getting: 13MB/s x 8 is about 104Mb/s.

kashani

----------

## aman

Did you mean to say 13Mb (megabits) or 13MB (megabytes) per second?  13MB/s is along the lines of 104Mb/s, which is slightly better than a 10/100 network.  It's not nearly as high as it should be, but maybe there are just other factors, like the ones discussed in the replies above, that are slowing down your transfers?  Hope this helps.

----------

## aman

And I wouldn't blame the cable unless it is in crap condition; good cat5 will work fine for gigabit connections, and even a crossover cable will work if you have a good autosensing switch and NIC.  Be sure to check with ifconfig and see what the link and duplex are on your connection.

----------

## jmoeller

A lot of motherboards don't even support gigabit speeds, either.  Even if you do get all your other factors in place, like RAM, HD, etc., you still wouldn't be able to attain full Gb speeds over a standard PCI bus.

----------

## eNTi

my benchmarks look like this:

```
root@eNTi $ netio -u hydra

NETIO - Network Throughput Benchmark, Version 1.23
(C) 1997-2003 Kai Uwe Rommel

UDP connection established.
Packet size  1k bytes:  30511 KByte/s (0%) Tx,  31657 KByte/s (5%) Rx.
Packet size  2k bytes:  38452 KByte/s (0%) Tx,  34222 KByte/s (0%) Rx.
Packet size  4k bytes:  34933 KByte/s (0%) Tx,  36089 KByte/s (3%) Rx.
Packet size  8k bytes:  37697 KByte/s (0%) Tx,  37473 KByte/s (0%) Rx.
Packet size 16k bytes:  34066 KByte/s (0%) Tx,  37502 KByte/s (0%) Rx.
Packet size 32k bytes:  30591 KByte/s (0%) Tx,  37149 KByte/s (1%) Rx.
Done.
```

```
root@eNTi $ netio -t hydra

NETIO - Network Throughput Benchmark, Version 1.23
(C) 1997-2003 Kai Uwe Rommel

TCP connection established.
Packet size  1k bytes:  29006 KByte/s Tx,  34306 KByte/s Rx.
Packet size  2k bytes:  30428 KByte/s Tx,  34396 KByte/s Rx.
Packet size  4k bytes:  30531 KByte/s Tx,  34273 KByte/s Rx.
Packet size  8k bytes:  30858 KByte/s Tx,  34379 KByte/s Rx.
Packet size 16k bytes:  29298 KByte/s Tx,  33204 KByte/s Rx.
Packet size 32k bytes:  27609 KByte/s Tx,  34253 KByte/s Rx.
Done.
```

while the disk on the server runs at:

```
root@hydralisk $ hdparm /dev/hde

/dev/hde:
 multcount    = 16 (on)
 IO_support   =  1 (32-bit)
 unmaskirq    =  1 (on)
 using_dma    =  1 (on)
 keepsettings =  0 (off)
 readonly     =  0 (off)
 readahead    = 256 (on)
 geometry     = 39870/16/63, sectors = 40188960, start = 0

<12:50:27><Mon Feb 02><~>
root@hydralisk $ hdparm -tT /dev/hde

/dev/hde:
 Timing buffer-cache reads:   696 MB in  2.01 seconds = 346.84 MB/sec
 Timing buffered disk reads:   80 MB in  3.03 seconds =  26.40 MB/sec
```

and the one on the slave side of netio:

```
<12:54:00><Mon Feb 02><~>
root@eNTi $ hdparm /dev/hda

/dev/hda:
 multcount    = 16 (on)
 IO_support   =  1 (32-bit)
 unmaskirq    =  1 (on)
 using_dma    =  1 (on)
 keepsettings =  0 (off)
 readonly     =  0 (off)
 readahead    = 256 (on)
 geometry     = 65535/16/63, sectors = 160836480, start = 0

<12:54:03><Mon Feb 02><~>
root@eNTi $ hdparm -tT /dev/hda

/dev/hda:
 Timing buffer-cache reads:   1072 MB in  2.00 seconds = 535.81 MB/sec
 Timing buffered disk reads:  136 MB in  3.02 seconds =  45.04 MB/sec
```

yesterday i ordered cat6 cables, because i only have the usual cat5 (not cat5e, though i don't know the difference, or how to tell other than to look at the cable description).

but given my buffered read speeds, i wonder if i could get any more throughput at all? does anyone know a rule of thumb here? how fast do the drive's buffered reads have to be to reach a certain transfer speed?
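As a rule of thumb, an end-to-end copy can't go faster than its slowest stage; here is a toy sketch with figures loosely taken from this thread (all values are illustrative, not measurements):

```python
# Back-of-envelope bottleneck finder: a transfer is bounded by the
# slowest stage in the chain. All figures in MB/s, loosely borrowed
# from the hdparm and spec numbers quoted in this thread.
stages = {
    "sender buffered disk read": 26.4,    # hdparm -t on hydralisk
    "gigabit wire (one way)": 125.0,      # 1000 Mb/s / 8
    "32-bit/33MHz PCI bus": 132.0,        # shared by NIC and disk
    "receiver buffered disk write": 45.0, # hdparm -t on eNTi
}
bottleneck = min(stages, key=stages.get)
print(bottleneck, stages[bottleneck])  # the sender's disk is the limit here
```

So with a ~26MB/s source disk, no cable upgrade can push the copy past that figure.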

i'd like to point out that i only get about 9MB/s over my gigabit lan with scp or cp, especially when copying larger files. this is quite bad and i wonder how it could possibly happen?

```
nt@hydralisk $ scp foo.* enti:~/dl/isos/
nt@enti's password:
foo.img                                        100%  725MB   8.6MB/s   01:19
foo.sub                                        100%   30MB   8.1MB/s   00:04
```

----------

## jonnevers

without a 64-bit PCI slot and a 64-bit PCI gigE NIC you aren't going to be able to fully utilize a gigE network connection, as the data will bottleneck on the PCI bus as well as in various other places

the category of cable used (i.e. cat5/5e/6) refers to the number of times the twisted pairs in the cable are twisted PER FOOT (**EDITED from INCH, as per my mistake**) (i.e. cat5 is twisted 5 times, cat6 is twisted 6 times) - and that extra twist makes a ton of difference...

it significantly cuts down on interference. you may also want to take a look at what you are running your network line by: check for power lines, fluorescent lighting, metal wall studs, electrical equipment like monitors, or radio-frequency-generating devices, etc.

if you have gigE NICs, use cat6 or higher if you can find it. (although  i've never heard of cat7)

with a 100baseT connection i've been able to sustain 11-13MB per second. if other things are on the line, collisions may occur, and TCP follows an additive-increase, multiplicative-decrease procedure: if packets are dropped, it reduces speed dramatically and immediately, then slowly brings it back up as fewer packets are dropped...
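That additive-increase/multiplicative-decrease (AIMD) behaviour can be sketched with a toy model; the window units and the loss pattern here are made up purely for illustration:

```python
# Toy AIMD model of TCP's congestion window: halve on loss, grow by a
# constant otherwise. Units and the loss schedule are illustrative only.
def aimd(rounds, loss_at, cwnd=1, increase=1):
    history = []
    for r in range(rounds):
        if r in loss_at:
            cwnd = max(1, cwnd // 2)  # multiplicative decrease on loss
        else:
            cwnd += increase          # additive increase otherwise
        history.append(cwnd)
    return history

# A single loss at round 10 halves the window; it then climbs back linearly.
print(aimd(rounds=15, loss_at={10}))
```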

-jon

thanks for the correction, you are entirely correct

Last edited by jonnevers on Mon Feb 02, 2004 10:44 pm; edited 1 time in total

----------

## ARC2300

Just out of total curiosity, and I realize you won't get a full 1Gb out of it, but if the PCI bus runs at 33MHz, you're supposed to be able to get 133MB/s transfer over it, right?

1Gb is 125MB worth of data, right? So why can't you utilize the full Gb even though in theory you can?

Just curious.
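The arithmetic behind those figures, assuming decimal units (the oft-quoted 133MB/s uses the exact 33.33MHz clock):

```python
# Checking the raw numbers in the question above (decimal units assumed).
pci_bytes_per_sec = 33_000_000 * 4       # 33MHz clock x 32-bit (4-byte) bus
gige_bytes_per_sec = 1_000_000_000 // 8  # 1Gb/s expressed in bytes/s
print(pci_bytes_per_sec)                 # 132000000, i.e. ~133MB/s in theory
print(gige_bytes_per_sec)                # 125000000, i.e. 125MB/s
# One direction of gigabit fits on paper; full duplex (2 x 125MB/s) does not,
# and the same bus must also carry the disk traffic.
```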

----------

## dsd

 *jonnevers wrote:*   

> the category of cable used (i.e cat5/5e/6) refers to the number of times the twisted pair in the cable are twisted PER INCH( i.e cat 5 is twisted 5 times per inch, cat6 is twisted 6 times per inch) - and that extra twist makes a ton of difference...

 

interesting, thanks for the info. what about cat5E? how does that differ from cat5? can you get cat5d/cat5f or anything like that?

----------

## kashani

cat5e stands for cat5 Enhanced. There are no F or D. I'd never heard of the twists-per-inch figure, but according to the link below it's wrong: cat3 is 3 twists per foot, cat4 is 5-6 twists per foot, and cat5 is 8 twists per foot. After that they don't say.

http://www.networkstuff.net/Category_What/Category5.htm

As to the PCI bus issues, this is the best explanation of what's really going on that I've ever seen.

http://www.beowulf.org/pipermail/beowulf/2001-December/002084.html

The short blurb is: GigE means 1Gb/s in both directions, which does exceed your PCI bus.

kashani

----------

## ARC2300

Thank you for the link to that article.  It clears up a lot of things.

Eh, well.  At least it will be faster when getting loaded than a 100Mb NIC, right?

----------

## NeddySeagoon

ARC2300,

With burst transfers a 32-bit 33MHz PCI bus maxes out at about 133MB/s, but it's not all usable. The data from/to the NIC has to go somewhere, and that somewhere is usually to/from a HDD, which is normally also on the same PCI bus. That means the theoretical 133MB/s is halved, leaving no time for anything else on the PCI bus. Given retries, wait states, etc., it gets slower still in practice.

The reality is you need a 64 bit 66MHz PCI bus to keep up with Gigabit Ethernet.

edit: a bad case of brainfade. The rate is not halved if the data transfer controller (the NIC, say) is operating in bus-master mode; then the transfers are NIC-to-HDD direct.

----------

## bumpus

You can use mii-tool to determine what speed your NIC thinks it's running at. That should help you rule out your network cables as the source of the bottleneck.

```
$ sudo mii-tool

eth0: negotiated 100baseTx-FD, link ok
```

----------

## TEB

I've got some similar problems... I've got a server with an Intel Pro gigabit NIC using the 2.6.1 drivers. I'm copying via samba from my own windows box and maxing out at 13 megabytes per sec via samba 2.x (gentoo).

Both the server and the client NICs sit in 33MHz/32-bit PCI slots, but that should still give much more than 13 megs per sec. I have disk systems on each side that can handle over 70 megs per sec sustained (WD 10krpm SATA drives), and hdparm -Tt reports this too (on the server side at least).

I can see on my switch (an HP Procurve 5304) that the port has autonegotiated at 1000TX, but mii-tool says 100TX, so I'm unsure what to do... I should maybe try the Intel non-GPL drivers instead and see if that helps.

And I'm using cat5e cables.

----------

## gcasillo

Got my cat6 cables today. With my (average quality) cat5 cables, I was getting 15MB/s transfers via wget. Now with cat6 cables in place, I'm getting 30-35MB/s. Much better. Not too much electrical stuff around to interfere: a lamp, a TV that is never on (sigh, missed Janet Jackson's boob), and two small UPSs. So the cabling helped.

Both cards auto-negotiated at 1000TX. hdparm scores fwiw:

120GB RAID0 box:

```
/dev/md0:
 Timing buffer-cache reads:   1752 MB in  2.00 seconds = 876.00 MB/sec
 Timing buffered disk reads:  192 MB in  3.02 seconds =  63.56 MB/sec
```

40GB box:

```
/dev/hda:
 Timing buffer-cache reads:   1808 MB in  2.00 seconds = 904.00 MB/sec
 Timing buffered disk reads:  174 MB in  3.02 seconds =  57.62 MB/sec
```

So all things considered, I'm pretty satisfied now.

----------

## eNTi

one thing i forgot to mention (because i thought it wasn't of importance, but it still might be) is my 3rd pc == router. it holds a firewall and two nics: one gigabit nic (useless, because my hdd readings are very bad, not least because it's a quite old AMD K6-2 300) connected to the gigabit switch (btw. all nics and the switch are netgear), the other a 100mbit card connected to the adsl modem. i wonder if any traffic could possibly go over that pc. that might explain the lack of speed when actually copying large amounts of data. of course this _should_ not happen, because the switch should handle all the traffic by itself and forward between the two faster pc's, but well... i don't really know. any guesses?

one more thing that bugs me: where the heck do i get a 66MHz pci bus? shouldn't a normal pci bus work at 66MHz as well?

and well... i still don't understand the connection between the hdparm readings and the actual amount of data i can get over my systems + gigabit network. are buffered reads == max data transfer over the nics?

----------

## NeddySeagoon

eNTi,

As you say, the traffic should not go through or even to your slow box. Run tcpdump on it while you do a file transfer to check.

64-bit 66MHz PCI is strictly server hardware - you need a server chipset to get that.

Buffered reads (from hdparm) is the data rate you get to and from the cache on the hard drive - without getting data from the drive platters. It is not the sustainable data rate you need for file transfers; that is much less.

----------

## eNTi

thx, that made some things clearer for me. does anyone have ideas on how to trace bottlenecks in the transfer?

----------

## kashani

 *TEB wrote:*   

> 
> 
> I can see on my switch (HP Procurve 5304) that the port is autoneg. at 1000TX but mii-tool says 100TX so im unsure what to do... I should maybe try the Intel non-gpl drivers instead and see if it helps..
> 
> And im using cat 5e cables.

 

Try changing it manually to 1000TX within mii-tool or use ethtool if mii can't do that. You may also need to set it manually on the switch as well.
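A sketch of doing that with ethtool (eth0 is an assumed interface name, and whether forcing works depends on the driver; note that 1000BASE-T formally requires autonegotiation, so re-advertising only gigabit is usually safer than switching autoneg off):

```shell
# Inspect what the NIC actually negotiated (eth0 is a placeholder).
ethtool eth0
# Re-advertise gigabit/full duplex rather than disabling autoneg,
# since 1000BASE-T needs autonegotiation to bring the link up.
ethtool -s eth0 autoneg on speed 1000 duplex full
```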

As those of us who have been in the networking field have learned over the years, autonegotiation usually doesn't.

kashani

----------

## eNTi

this is really weird. i've installed the cat6 today, and you know what? the average data transfer rate went DOWN!!! i do not understand this. shouldn't i get at least some 25MB/s if the worst buffered reads of the slowest harddrive on the network are about 25MB/s too? please help me on this. i'm going nuts!!!

----------

## NeddySeagoon

eNTi,

To trace bottlenecks, start here.

At one PC, do self-transfers via the network using its real IP address, not 127.0.0.1, which cuts out a lot of the network software.

You can send a file to /dev/null via the network, read one from /dev/zero via the network, or even do a copy (read and write to disk). Repeat on each PC in turn, using the protocol of your choice.
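A minimal sketch of such a self-transfer, assuming ssh access is set up; the address is a placeholder for the machine's own LAN IP:

```shell
# Push 500MB of zeros through the full network stack back to this same
# machine (use its real LAN address, not 127.0.0.1), discarding the data
# on arrival so no disk write is involved.
dd if=/dev/zero bs=1M count=500 | ssh 192.168.0.10 'cat > /dev/null'
```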

This is as good as data movement speeds get; adding in the real network can only make it worse.

Now you know which PC to work on, what you do next depends on what you found in the previous stage and what the hardware is.

----------

## Sir_Chancealot

 *ARC2300 wrote:*   

> Just out of total curiosity, and I realize you won't get a full 1Gb out of it, but if the PCI bus runs at 33MHz, you're supposed to be able to get 133MB/s transfer over it, right?
> 
> 1Gb is 125MB worth of data, right? So why can't you utilize the full Gb even though in theory you can?
> 
> Just curious.

 

IP overhead.  There is a lot of information in an IP packet that doesn't directly correspond to the data you are transmitting (headers, etc.).
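A rough sketch of what that overhead costs on full-size frames, assuming a standard 1500-byte MTU and counting TCP/IP headers plus Ethernet framing, preamble, and inter-frame gap:

```python
# Goodput estimate: each full-size TCP segment on gigabit Ethernet carries
# 1460 bytes of payload but occupies 1538 bytes of line time.
line_rate_bps = 1_000_000_000
payload = 1460                           # MSS with a standard 1500-byte MTU
on_wire = 1460 + 40 + 14 + 4 + 8 + 12    # + TCP/IP hdrs, Ethernet hdr, FCS,
                                         #   preamble, inter-frame gap
goodput_bps = line_rate_bps * payload / on_wire
print(round(goodput_bps / 8 / 1e6, 1))   # ~118.7 MB/s of actual data
```

So even on a perfect link, framing alone trims the theoretical 125MB/s to under 119MB/s before any bus or disk limits apply.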

----------

