# Gigabit Network Card PCI

## Jamesbch

Hello,

A while ago I decided to buy a gigabit switch, and now I need a gigabit PCI card for my server. It's an old machine with only a 100 Mbit card on the motherboard, and I'd like to upgrade my network to gigabit, but I don't know yet which gigabit PCI card to buy for it.

At the nearest shop I can buy this:

- US Robotics Gigabit Card,1GB, USR7902, PCI, RJ45, Auto 

- Intel Pro/1000MT Desktop, 1GB, PWLA8391GT, PCI, Ethernet 

- Planet-Giga-Card, Client,1GB, RJ45,32-bit,AUTO-Negotiation 

I'm asking you because I don't know which card is compatible with Linux, which one you'd recommend, any advice, etc. If these cards aren't good, I can buy from another shop with more choice, but I won't spend more than $50-60 (US).

Thank you in advance.

----------

## Cheesebaron

I have only good experience with the Intel Pro 1000; I can't really say anything about the others.

----------

## depontius

A word to the wise...

A gigabit ethernet card can chew up a big chunk of the PCI bus bandwidth, and so can, for instance, a SCSI RAID card.  I have a friend who had both on a PCI bus, and found that they were contending with each other for bandwidth.  In other words, as a file server he couldn't get his full bandwidth out of the SCSI RAID controller AND serve that out to his network at gigabit speed.  There just wasn't enough bandwidth on the PCI bus.  It might be a good excuse to spring for a PCI-express motherboard.

----------

## Jamesbch

Hello Cheesebaron, what's your experience with the Intel card? Does it work well?

Hello depontius, do you mean I can't reach gigabit speed because the motherboard is too old? I don't have any other PCI card, just an AGP card. What do you think?

Thank you.

----------

## cyrillic

I have had good results with Intel gigabit cards.

If any of those other cards you mentioned use a Realtek chipset, then stay away from those.

EDIT: Just checked on Google: the USR7902 uses a Realtek 8169 chipset (bad), and the Planet-Giga-Card doesn't give any useful results (worse).

*Last edited by cyrillic on Tue May 13, 2008 3:43 pm; edited 1 time in total*

----------

## depontius

Jamesbch, you can reach gigabit, but it's a question of what else you can do at the same time. Nominal PCI frequency is 33 MHz, and in one cycle it can move 4 bytes, for 132 MB/s. Gigabit Ethernet moves about 100 MB/s, so you've got 32 MB/s "left over". Modern RAID controllers can easily exceed that 32 MB/s, which is how my friend ended up with the PCI bus as his pinch point.
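The arithmetic above can be sketched in a few lines (a rough back-of-the-envelope using the same round numbers, nothing more):

```python
# Shared 32-bit/33 MHz PCI bus: 4 bytes per cycle
pci_peak = 33_000_000 * 4 / 1_000_000  # = 132 MB/s theoretical peak

# Gigabit Ethernet: 1000 Mbit/s / 8 = 125 MB/s raw, rounded down to
# ~100 MB/s of payload after framing/protocol overhead
gige_payload = 100

leftover = pci_peak - gige_payload
print(f"PCI peak: {pci_peak:.0f} MB/s, left over: {leftover:.0f} MB/s")
```

Anything else on the same bus (RAID controller, etc.) has to fit into that leftover.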

You've only got the one PCI card, but what this all really depends on is the topology on the motherboard and inside the chipset. I believe AGP bandwidth is separate from the PCI bus, but you're probably talking onboard IDE for your disks, and I don't know whether that sits on PCI at all, on the same PCI bus as the slots, or on a different one. I just looked at "lspci" on the system I'm using now and can't really tell what my topology is, though someone better informed probably can.

Your system will work just fine, you just may not get full Gigabit all the time, especially not during heavy disk access.

----------

## Cheesebaron

Yeah, the Intel card performs well and does the job nicely.

----------

## Jamesbch

I use my server for storage; I share a disk over NFS. With a gigabit card I can reach the maximum speed of the IDE HDD (~70 MB/s, I think). I could change the motherboard and buy a new SATA drive to reach 100 MB/s, but I don't want to replace my server now because I don't need that much. It would be great if I could reach 70 MB/s at all. I don't think it's a problem, because the gigabit card can go faster (120 MB/s, couldn't it?).

 *Quote:*   

> JServer ~ # lspci
> 
> 00:00.0 Host bridge: Silicon Integrated Systems [SiS] 645xx (rev 50)
> 
> 00:01.0 PCI bridge: Silicon Integrated Systems [SiS] SiS AGP Port (virtual PCI-to-PCI bridge)
> ...

 

The server has an Asus P4S800 motherboard with a P4 HT. It's quite old, but I haven't had any problems with it, except for the gigabit LAN of course. The MB has only a 100 Mb/s port integrated.

Thank you very much for your replies and advices.

----------

## depontius

With that setup, you shouldn't have any problem.  If you're serving the disk out over NFS, the disk becomes the limiting factor.  My friend's problem was that he had a RAID array of fast SCSI drives.  You need X bandwidth coming out of the drives over the PCI bus, and then the same X bandwidth going out over the Ethernet to serve your NFS.  If 2X > 132 MB/s, then you're constrained.  Technically at 70 MB/s you're constrained, but that 70 MB/s is a "peak" number that is seldom, if ever, met, so you'll probably never be noticeably constrained.  His real disk bandwidth (RAID-5, by the way) was greater than that, so he noticed.

In my estimate I divided the gigabit by 10 to get 100 MB/s, figuring there'd be serialization, framing, and protocol overhead, etc., that doesn't have to go over the PCI bus.
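That 2X constraint is easy to put into a tiny check (a sketch using the thread's numbers; 70 MB/s is the peak IDE rate mentioned above, 50 MB/s is a plausible sustained rate):

```python
PCI_PEAK = 132  # MB/s, shared 32-bit/33 MHz PCI bus

def pci_constrained(disk_rate_mb_s: float) -> bool:
    """When serving NFS, data crosses the shared PCI bus twice:
    once in from the disk controller, once out through the NIC."""
    return 2 * disk_rate_mb_s > PCI_PEAK

print(pci_constrained(70))   # peak IDE rate: technically constrained -> True
print(pci_constrained(50))   # sustained rate: fits on the bus -> False
```

As the post says, the peak number is rarely sustained, so in practice the old IDE drive won't feel the pinch.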

----------

## Jamesbch

Hello everyone,

Finally, I bought an Intel Pro/1000 MT PCI card, and it was cheap.

I installed it and my tests are below. I used a RAM disk on my computer to maximize the speed:

With the hard disk of the server as the source:

Over SSH (client: FileZilla) I reach 20-25 MB/s; it's not very good.

Over FTP (client: same, server: Pure-FTPd) I reach 50 MB/s, which is pretty good because the HDD is old and so is the server.

Over NFS (client: Windows with SuperCopier 1.9b) I reach 30 MB/s.

Now with a RAM disk on the server as the source as well:

Over SSH (client: FileZilla) I reach 20-25 MB/s again. It's not important, because I use FTP or NFS instead.

Over FTP (same as before) I reach 68 MB/s. Better than before, but still not great.

Over NFS (same as before) I reach 50 MB/s.

I'm disappointed, because the RAM disk on my computer can reach >150 MB/s, so it can't be the bottleneck.

I don't know how fast the RAM and the HDD are; how can I find out? What do you think of the performance?
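One simple way to get ballpark disk (or RAM-disk) numbers is plain `dd` (a sketch; the /tmp path is just an example, point it at whichever disk or RAM disk you want to measure):

```shell
# Write 256 MB and let dd report the sustained rate; conv=fdatasync makes it
# wait until the data has actually hit the disk, not just the page cache
dd if=/dev/zero of=/tmp/ddtest bs=1M count=256 conv=fdatasync 2>&1 | tail -n 1

# Read it back (a repeat read may be served from the page cache, so it
# can look much faster than the disk really is)
dd if=/tmp/ddtest of=/dev/null bs=1M 2>&1 | tail -n 1

rm -f /tmp/ddtest
```

Comparing those numbers to the FTP/NFS rates shows whether the disk or the network is the bottleneck.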

----------

## depontius

A friend started messing with gigabit, and there's one other thing you need in order to get full performance.  Your switch needs to support "jumbo frames".  Just for grins, if you've got a crossover cable handy, direct-connect 2 systems and benchmark that.  I'm not sure if the network configuration needs tweaking to use jumbo frames, or if it will just happen if everything in the path supports them.

----------

## Monkeh

 *depontius wrote:*   

> A friend started messing with gigabit, and there's one other thing you need in order to get full performance.  Your switch needs to support "jumbo frames".  Just for grins, if you've got a crossover cable handy, direct-connect 2 systems and benchmark that.  I'm not sure if the network configuration needs tweaking to use jumbo frames, or if it will just happen if everything in the path supports them.

 

A crossover cable isn't needed. Decent NICs support Auto-MDI/MDI-X.

----------

## Cyker

 *Monkeh wrote:*   

>  *depontius wrote:*   A friend started messing with gigabit, and there's one other thing you need in order to get full performance.  Your switch needs to support "jumbo frames".  Just for grins, if you've got a crossover cable handy, direct-connect 2 systems and benchmark that.  I'm not sure if the network configuration needs tweaking to use jumbo frames, or if it will just happen if everything in the path supports them. 
> 
> A crossover cable isn't needed. Decent NICs support Auto-MDI/MDI-X.

 

I've seen very few NICs (like, 1, and that might have been a hallucination) that support Auto-MDI/MDI-X. The only place that's common is in switches.

However, you are right - To connect two machines directly:

For 10/100 - You NEED a crossover cable.

For 1000 - You MUST NOT use a crossover cable.

The reason is that 10/100 run Rx and Tx over different pairs, but 1000 uses two-way signalling with echo cancellation, and using a crossover cable means the pairs go to the wrong place on the other end unless the NIC can sense and intelligently re-map its pins.

Using a crossover cable will cause most GigE NICs' auto-negotiation to detect the other system as 100BaseTX because of this.

----------

## Monkeh

 *Cyker wrote:*   

>  *Monkeh wrote:*    *depontius wrote:*   A friend started messing with gigabit, and there's one other thing you need in order to get full performance.  Your switch needs to support "jumbo frames".  Just for grins, if you've got a crossover cable handy, direct-connect 2 systems and benchmark that.  I'm not sure if the network configuration needs tweaking to use jumbo frames, or if it will just happen if everything in the path supports them. 
> 
> A crossover cable isn't needed. Decent NICs support Auto-MDI/MDI-X. 
> 
> I've seen very few NIC's (Like, 1, and that might have been a hallucination) that support MDI/MDI-X. The only place where that's common is switches.

 

All my gigabit NICs support it. That's three different Intel devices, a Realtek, and a Broadcom.

 *Quote:*   

> However, you are right - To connect two machines directly:
> 
> For 10/100 - You NEED a crossover cable.
> 
> For 1000 - You MUST NOT use a crossover cable.

 

No, for 10/100 you need a crossover if it doesn't support Auto-MDI/MDI-X. Few 10/100s do. For gigabit, you need a proper crossover (most just use 568A on one end and 568B on the other. That will not work, as the other pairs require swapping), unless it supports Auto-MDI/MDI-X (most modern gigabits). If you use a crap crossover, you'll get 100mbit.

----------

## Jamesbch

It seems that my switch supports Jumbo Frames. 

Features:

 *Quote:*   

> - Complies with IEEE802.3, 10Base-T, IEEE802.3u, 100Base-TX and IEEE802.3ab, 1000Base-T 
> 
> - IEEE802.3x, full-duplex flow control compliant; back-pressure, half-duplex flow control 
> 
> - 10/100/1000Mbps 8-port Gigabit Ethernet Switch (RJ-45) 
> ...

 

Must the computers on the network support jumbo frames too? I don't really know what jumbo frames or "Auto-MDI/MDI-X" are. I'll search the net, because I'm a bit lost. What can I do now?

I don't think I have any crossover cable, but I can look for one. What can I expect from this test? I will only use the LAN through the gigabit switch (is that what "NIC" means?).

Thank you very much for your interests.

----------

## Monkeh

All machines must support jumbo frames. If you have machines which are only 100mbit, then you cannot use jumbo frames. Auto-MDI/MDI-X automatically changes the pins in the port so crossover cables are not required.
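For reference, on Linux jumbo frames are just a larger interface MTU; a minimal sketch (the interface name `eth0`, the 9000-byte MTU, and the peer address 192.168.1.10 are all examples, and the NIC driver, the switch, and the other machine must all support the larger frames):

```shell
# Raise the MTU on the gigabit interface (requires root; eth0 is an example)
ip link set dev eth0 mtu 9000

# Verify a jumbo-sized frame really gets through without fragmentation:
# 8972 = 9000-byte MTU - 20 (IP header) - 8 (ICMP header)
ping -M do -s 8972 -c 3 192.168.1.10
```

If the ping fails with "message too long" or gets no reply, something in the path is still at the standard 1500-byte MTU.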

----------

## depontius

A nagging thought bubbled to the surface this morning in the shower...

For the Intel e1000 driver in particular there is an extra throughput-oriented option - E1000_NAPI. (From kernel config)

```
Use Rx Polling (NAPI) (E1000_NAPI)

NAPI is a new driver API designed to reduce CPU and interrupt load
when the driver is receiving lots of packets from the card. It is
still somewhat experimental and thus not yet enabled by default.

If your estimated Rx load is 10kpps or more, or if the card will be
deployed on potentially unfriendly networks (e.g. in a firewall),
then say Y here.

If in doubt, say N.
```

I don't know if this really affects throughput or CPU overhead, but you might try it as one dimension of your performance matrix.  Nor do I know if other gigabit cards have similar options.  I'm going to see my friend with the gigabit network later today, and will ask him about getting jumbo frames working.

EDIT...  Google is our friend:

http://gentoo-wiki.com/TIP_Jumbo_Frames

http://www.cyberciti.biz/faq/rhel-centos-debian-ubuntu-jumbo-frames-configuration/

http://www.linuxforums.org/forum/linux-programming-scripting/34280-how-send-jumbo-frames.html

http://datatag.web.cern.ch/datatag/howto/tcp.html

----------

