# Why is my 100 Mbit PCMCIA card only working at 10 Mbit?

## Jinidog

Hello,

I have an Xircom RealPort network card for my notebook, and I'm sure it is a 100 Mbit card.

I'm using the Xircom driver from kernel 2.6.9, and everything works correctly.

(except that I'm only getting 600 KB per second at best over this card)

Can anyone help?

Thanks.

----------

## ChrisWhite

Well, you can't really pinpoint that to one specific item, but here's where it breaks down:

1) You've been hacked, and some nice hidden process has taken over your net connection.

2) The kernel driver for 2.6.9 sucks and you should upgrade to 2.6.10 to see if it helps.

3) Your cabling

4) Your connection to the other system

5) Your ISP hates you and wants to give you the slowest possible connection.

1 or 2 would make the most sense.  3 - 5 are guesses  :Smile: .

----------

## Jinidog

It's a connection between two PCs.

The laptop reaches the web over the other PC.

So it's not a direct Internet connection.

Point 1 is unlikely, and Point 5 can't be it  :Smile: .

I've had 100 Mbit connections over the same cable between this PC and another PC.

So it must have to do with the laptop and its network card.

I will upgrade to 2.6.10 when gentoo-dev-sources-2.6.10 finally goes stable.

(it's taking much longer than the 2.6.8 and 2.6.9 releases did)

----------

## Jinidog

Updating the kernel did not help.

----------

## Jinidog

Knoppix doesn't do any better...

----------

## yardbird

Just because a card supports 100 Mbit networks doesn't necessarily mean it will actually move ~10 MByte/s. With PCMCIA cards it boils down to whether the card and the socket on your notebook can sustain high speeds. Is your NIC a CardBus card or not? What about your PCMCIA socket?
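One quick check (a sketch, not taken from this thread's logs): a 32-bit CardBus card announces itself through the CardBus bridge in dmesg, while a 16-bit card does not. The dmesg lines in the heredoc below are invented for illustration; on the real notebook you would just run `dmesg | grep -i cardbus` after inserting the card.

```shell
# Tell a 32-bit CardBus NIC from a 16-bit PCMCIA NIC by how the kernel
# registered it. The heredoc is made-up sample dmesg output.
dmesg_sample=$(cat <<'EOF'
Yenta: CardBus bridge found at 0000:00:0a.0
eth0: Xircom cardbus revision 3 at irq 11
EOF
)
if printf '%s\n' "$dmesg_sample" | grep -qi 'cardbus'; then
    verdict="CardBus (32-bit): the bus can sustain 100 Mbit"
else
    verdict="16-bit PCMCIA: expect a ~10-20 Mbit effective ceiling"
fi
echo "$verdict"
```

A 16-bit PC Card sits on an ISA-like bus and typically can't push much past 10-20 Mbit, even when the NIC itself negotiates 100 Mbit on the wire.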

----------

## Jinidog

What information do you need?

The yenta_socket module is used.

----------

## djnauk

 *Jinidog wrote:*   

> What information do you need?
> 
> The yenta_socket module is used.

 

dmesg would be useful; see what it's saying.

Also, you didn't say how you are connected to the other computer. Is it a direct connection via a cross-over cable? If so, does the card on the other side support 100 Mbps? If not, and you're going via a hub/switch, is it 100 Mbps capable, and auto-sensing?

If it's not auto-sensing, it may be running at the speed of the lowest connection.

----------

## Jinidog

I have no clue about auto-sensing or how to check or enable it.

On the other side there is a network adapter from the nForce2.

(it should do 100 Mbit, and I believe I've already had 100 Mbit with this built-in network chip)

The cable is 100 Mbit capable, I'm sure.

This is the output of dmesg:

 *Quote:*   

> 
> 
> bash-2.05b$ dmesg
> 
> Linux version 2.6.10-gentoo-r4 (root@PIII650) (gcc-Version 3.4.3 (Gentoo Linux 3.4.3, ssp-3.4.3-0, pie-8.7.6.6)) #7 Thu Jan 13 10:00:41 CET 2005
> ...


----------

## djnauk

 *Jinidog wrote:*   

> I have no clue about auto-sensing or how to check or enable it.

 

Auto-sensing isn't something you can enable or disable. It should be on all switches and higher-end hubs. Auto-sensing just works out the maximum transmission rate on the line, i.e. 10/100/1000, half or full duplex.

Some hubs that can run at higher speeds may only be able to run at the speed of the lowest connection (due to the way hubs work), which is found by auto-sensing. Some are fixed at a certain speed. Switches can generally run any type of connection on each port, as they store and process each packet instead of just trying to broadcast it.

 *Jinidog wrote:*   

> On the other side there is a network adapter from the nForce2.
> 
> (it should do 100 Mbit, and I believe I've already had 100 Mbit with this built-in network chip)
> 
> The cable is 100 Mbit capable, I'm sure.
> ...

 

As long as the cable is Cat5, 100 meg should be fine. However, the kernel seems to have picked up that it's a 100 meg connection (100BaseT media), and so it should be running at 100 meg.

What hub are you using?

----------

## Jinidog

They are not running at 100 Mbit.

The bandwidth is just 600 KB/s, and the ping latency is too high:

0.2 ms

Perhaps the problem is the other PC?

There is no hub between them, it is a direct connection.

This is the output of the other PC's dmesg:

AMD2800+ jini # dmesg

Linux version 2.6.10-gentoo-r4 (root@AMD2800+) (gcc-Version 3.4.3 (Gentoo Linux 3.4.3, ssp-3.4.3-0, pie-8.7.6.6)) #6 Thu Jan 13 18:33:18 CET 2005

BIOS-provided physical RAM map:

 BIOS-e820: 0000000000000000 - 000000000009fc00 (usable)

 BIOS-e820: 000000000009fc00 - 00000000000a0000 (reserved)

 BIOS-e820: 00000000000f0000 - 0000000000100000 (reserved)

 BIOS-e820: 0000000000100000 - 000000003fff0000 (usable)

 BIOS-e820: 000000003fff0000 - 000000003fff3000 (ACPI NVS)

 BIOS-e820: 000000003fff3000 - 0000000040000000 (ACPI data)

 BIOS-e820: 00000000fec00000 - 00000000fec01000 (reserved)

 BIOS-e820: 00000000fee00000 - 00000000fee01000 (reserved)

 BIOS-e820: 00000000ffff0000 - 0000000100000000 (reserved)

127MB HIGHMEM available.

896MB LOWMEM available.

On node 0 totalpages: 262128

  DMA zone: 4096 pages, LIFO batch:1

  Normal zone: 225280 pages, LIFO batch:16

  HighMem zone: 32752 pages, LIFO batch:7

DMI 2.2 present.

ACPI: RSDP (v000 Nvidia                                ) @ 0x000f6c90

ACPI: RSDT (v001 Nvidia AWRDACPI 0x42302e31 AWRD 0x00000000) @ 0x3fff3000

ACPI: FADT (v001 Nvidia AWRDACPI 0x42302e31 AWRD 0x00000000) @ 0x3fff3040

ACPI: MADT (v001 Nvidia AWRDACPI 0x42302e31 AWRD 0x00000000) @ 0x3fff71c0

ACPI: DSDT (v001 NVIDIA AWRDACPI 0x00001000 MSFT 0x0100000e) @ 0x00000000

ACPI: PM-Timer IO Port: 0x4008

ACPI: Local APIC address 0xfee00000

ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)

Processor #0 6:10 APIC version 16

ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])

Built 1 zonelists

Kernel command line: root=/dev/hda4

Found and enabled local APIC!

mapped APIC to ffffd000 (fee00000)

Initializing CPU#0

CPU 0 irqstacks, hard=c04eb000 soft=c04ea000

PID hash table entries: 4096 (order: 12, 65536 bytes)

Detected 2125.635 MHz processor.

Using pmtmr for high-res timesource

Console: colour VGA+ 80x25

Dentry cache hash table entries: 131072 (order: 7, 524288 bytes)

Inode-cache hash table entries: 65536 (order: 6, 262144 bytes)

Memory: 1034640k/1048512k available (2936k kernel code, 13284k reserved, 883k data, 164k init, 131008k highmem)

Checking if this processor honours the WP bit even in supervisor mode... Ok.

Calibrating delay loop... 4210.68 BogoMIPS (lpj=2105344)

Mount-cache hash table entries: 512 (order: 0, 4096 bytes)

CPU: After generic identify, caps: 0383fbff c1c3fbff 00000000 00000000

CPU: After vendor identify, caps:  0383fbff c1c3fbff 00000000 00000000

CPU: L1 I Cache: 64K (64 bytes/line), D cache 64K (64 bytes/line)

CPU: L2 Cache: 512K (64 bytes/line)

CPU: After all inits, caps:        0383fbff c1c3fbff 00000000 00000020

Intel machine check architecture supported.

Intel machine check reporting enabled on CPU#0.

CPU: AMD Athlon(tm) XP 2800+ stepping 00

Enabling fast FPU save and restore... done.

Enabling unmasked SIMD FPU exception support... done.

Checking 'hlt' instruction... OK.

ACPI: setting ELCR to 0200 (from 0c28)

NET: Registered protocol family 16

PCI: PCI BIOS revision 2.10 entry at 0xfb5a0, last bus=2

PCI: Using configuration type 1

mtrr: v2.0 (20020519)

ACPI: Subsystem revision 20041105

ACPI: Interpreter enabled

ACPI: Using PIC for interrupt routing

ACPI: PCI Root Bridge [PCI0] (00:00)

PCI: Probing PCI hardware (bus 00)

PCI: nForce2 C1 Halt Disconnect fixup

ACPI: PCI Interrupt Routing Table [\_SB_.PCI0._PRT]

ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.HUB0._PRT]

ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.AGPB._PRT]

ACPI: PCI Interrupt Link [LNK1] (IRQs 3 4 5 6 7 10 *11 12 14 15)

ACPI: PCI Interrupt Link [LNK2] (IRQs 3 4 5 6 7 *10 11 12 14 15)

ACPI: PCI Interrupt Link [LNK3] (IRQs 3 4 5 6 7 *10 11 12 14 15)

ACPI: PCI Interrupt Link [LNK4] (IRQs 3 4 *5 6 7 10 11 12 14 15)

ACPI: PCI Interrupt Link [LNK5] (IRQs 3 4 5 6 7 10 11 12 14 15) *0, disabled.

ACPI: PCI Interrupt Link [LUBA] (IRQs 3 4 5 6 7 10 *11 12 14 15)

ACPI: PCI Interrupt Link [LUBB] (IRQs *3 4 5 6 7 10 11 12 14 15)

ACPI: PCI Interrupt Link [LMAC] (IRQs 3 4 5 6 7 10 *11 12 14 15)

ACPI: PCI Interrupt Link [LAPU] (IRQs 3 4 5 6 7 10 11 12 14 15) *0, disabled.

ACPI: PCI Interrupt Link [LACI] (IRQs *3 4 5 6 7 10 11 12 14 15)

ACPI: PCI Interrupt Link [LMCI] (IRQs 3 4 5 6 7 10 11 12 14 15) *0, disabled.

ACPI: PCI Interrupt Link [LSMB] (IRQs 3 4 5 6 7 *10 11 12 14 15)

ACPI: PCI Interrupt Link [LUB2] (IRQs 3 4 *5 6 7 10 11 12 14 15)

ACPI: PCI Interrupt Link [LFIR] (IRQs 3 4 5 6 7 10 11 12 14 15) *0, disabled.

ACPI: PCI Interrupt Link [L3CM] (IRQs 3 4 5 6 7 10 11 12 14 15) *0, disabled.

ACPI: PCI Interrupt Link [LIDE] (IRQs 3 4 5 6 7 10 11 12 14 15) *0, disabled.

ACPI: PCI Interrupt Link [APC1] (IRQs *16), disabled.

ACPI: PCI Interrupt Link [APC2] (IRQs *17), disabled.

ACPI: PCI Interrupt Link [APC3] (IRQs *18), disabled.

ACPI: PCI Interrupt Link [APC4] (IRQs *19), disabled.

ACPI: PCI Interrupt Link [APC5] (IRQs *16), disabled.

ACPI: PCI Interrupt Link [APCF] (IRQs 20 21 22) *0, disabled.

ACPI: PCI Interrupt Link [APCG] (IRQs 20 21 22) *0, disabled.

ACPI: PCI Interrupt Link [APCH] (IRQs 20 21 22) *0, disabled.

ACPI: PCI Interrupt Link [APCI] (IRQs 20 21 22) *0, disabled.

ACPI: PCI Interrupt Link [APCJ] (IRQs 20 21 22) *0, disabled.

ACPI: PCI Interrupt Link [APCK] (IRQs 20 21 22) *0, disabled.

ACPI: PCI Interrupt Link [APCS] (IRQs *23), disabled.

ACPI: PCI Interrupt Link [APCL] (IRQs 20 21 22) *0, disabled.

ACPI: PCI Interrupt Link [APCM] (IRQs 20 21 22) *0, disabled.

ACPI: PCI Interrupt Link [AP3C] (IRQs 20 21 22) *0, disabled.

ACPI: PCI Interrupt Link [APCZ] (IRQs 20 21 22) *0, disabled.

SCSI subsystem initialized

usbcore: registered new driver usbfs

usbcore: registered new driver hub

PCI: Using ACPI for IRQ routing

** PCI interrupts are no longer routed automatically.  If this

** causes a device to stop working, it is probably because the

** driver failed to call pci_enable_device().  As a temporary

** workaround, the "pci=routeirq" argument restores the old

** behavior.  If this argument makes the device work again,

** please email the output of "lspci" to bjorn.helgaas@hp.com

** so I can fix the driver.

spurious 8259A interrupt: IRQ7.

Machine check exception polling timer started.

cpufreq: Detected nForce2 chipset revision C1

cpufreq: FSB changing is maybe unstable and can lead to crashes and data loss.

cpufreq: FSB currently at 170 MHz, FID 12.5

audit: initializing netlink socket (disabled)

audit(1105965151.339:0): initialized

highmem bounce pool size: 64 pages

devfs: 2004-01-31 Richard Gooch (rgooch@atnf.csiro.au)

devfs: boot_options: 0x0

SGI XFS with no debug enabled

Initializing Cryptographic API

inotify device minor=63

Real Time Clock Driver v1.12

Linux agpgart interface v0.100 (c) Dave Jones

agpgart: Detected NVIDIA nForce2 chipset

agpgart: Maximum main memory to use for agp memory: 941M

agpgart: AGP aperture is 512M @ 0x80000000

Hangcheck: starting hangcheck timer 0.5.0 (tick is 180 seconds, margin is 60 seconds).

ACPI: Power Button (FF) [PWRF]

ACPI: Fan [FAN] (on)

ACPI: Thermal Zone [THRM] (6 C)

serio: i8042 AUX port at 0x60,0x64 irq 12

serio: i8042 KBD port at 0x60,0x64 irq 1

Serial: 8250/16550 driver $Revision: 1.90 $ 8 ports, IRQ sharing disabled

ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A

mice: PS/2 mouse device common for all mice

input: AT Translated Set 2 keyboard on isa0060/serio0

input: ImExPS/2 Logitech Explorer Mouse on isa0060/serio1

io scheduler noop registered

io scheduler anticipatory registered

io scheduler deadline registered

io scheduler cfq registered

ne2k-pci.c:v1.03 9/22/2003 D. Becker/P. Gortmaker

http://www.scyld.com/network/ne2k-pci.html

ACPI: PCI Interrupt Link [LNK3] enabled at IRQ 10

PCI: setting IRQ 10 as level-triggered

ACPI: PCI interrupt 0000:01:06.0[A] -> GSI 10 (level, low) -> IRQ 10

eth0: RealTek RTL-8029 found at 0x9000, IRQ 10, 00:80:AD:45:B8:B8.

forcedeth.c: Reverse Engineered nForce ethernet driver. Version 0.30.

ACPI: PCI Interrupt Link [LMAC] enabled at IRQ 11

PCI: setting IRQ 11 as level-triggered

ACPI: PCI interrupt 0000:00:04.0[A] -> GSI 11 (level, low) -> IRQ 11

PCI: Setting latency timer of device 0000:00:04.0 to 64

eth1: forcedeth.c: subsystem: 01019:1b31 bound to 0000:00:04.0

Uniform Multi-Platform E-IDE driver Revision: 7.00alpha2

ide: Assuming 33MHz system bus speed for PIO modes; override with idebus=xx

NFORCE2: IDE controller at PCI slot 0000:00:09.0

NFORCE2: chipset revision 162

NFORCE2: not 100% native mode: will probe irqs later

NFORCE2: BIOS didn't set cable bits correctly. Enabling workaround.

NFORCE2: 0000:00:09.0 (rev a2) UDMA133 controller

    ide0: BM-DMA at 0xf000-0xf007, BIOS settings: hda:DMA, hdb:DMA

    ide1: BM-DMA at 0xf008-0xf00f, BIOS settings: hdc:DMA, hdd:DMA

Probing IDE interface ide0...

hda: ST3160023A, ATA DISK drive

hdb: Maxtor 34098H4, ATA DISK drive

elevator: using anticipatory as default io scheduler

ide0 at 0x1f0-0x1f7,0x3f6 on irq 14

Probing IDE interface ide1...

hdd: LG DVD-ROM DRD-8120B, ATAPI CD/DVD-ROM drive

ide1 at 0x170-0x177,0x376 on irq 15

hda: max request size: 1024KiB

hda: 312581808 sectors (160041 MB) w/8192KiB Cache, CHS=19457/255/63, UDMA(100)

hda: cache flushes supported

 /dev/ide/host0/bus0/target0/lun0: p1 p2 < p5 p6 > p3 p4

hdb: max request size: 128KiB

hdb: 80043264 sectors (40982 MB) w/2048KiB Cache, CHS=65535/16/63, UDMA(100)

hdb: cache flushes not supported

 /dev/ide/host0/bus0/target1/lun0: p1

libata version 1.10 loaded.

ACPI: PCI Interrupt Link [LUB2] enabled at IRQ 5

PCI: setting IRQ 5 as level-triggered

ACPI: PCI interrupt 0000:00:02.2[C] -> GSI 5 (level, low) -> IRQ 5

ehci_hcd 0000:00:02.2: nVidia Corporation nForce2 USB Controller

PCI: Setting latency timer of device 0000:00:02.2 to 64

ehci_hcd 0000:00:02.2: irq 5, pci mem 0xd3005000

ehci_hcd 0000:00:02.2: new USB bus registered, assigned bus number 1

PCI: cache line size of 64 is not supported by device 0000:00:02.2

ehci_hcd 0000:00:02.2: USB 2.0 initialized, EHCI 1.00, driver 26 Oct 2004

hub 1-0:1.0: USB hub found

hub 1-0:1.0: 6 ports detected

ohci_hcd: 2004 Nov 08 USB 1.1 'Open' Host Controller (OHCI) Driver (PCI)

ACPI: PCI Interrupt Link [LUBA] enabled at IRQ 11

ACPI: PCI interrupt 0000:00:02.0[A] -> GSI 11 (level, low) -> IRQ 11

ohci_hcd 0000:00:02.0: nVidia Corporation nForce2 USB Controller

PCI: Setting latency timer of device 0000:00:02.0 to 64

ohci_hcd 0000:00:02.0: irq 11, pci mem 0xd3003000

ohci_hcd 0000:00:02.0: new USB bus registered, assigned bus number 2

hub 2-0:1.0: USB hub found

hub 2-0:1.0: 3 ports detected

ACPI: PCI Interrupt Link [LUBB] enabled at IRQ 3

PCI: setting IRQ 3 as level-triggered

ACPI: PCI interrupt 0000:00:02.1[B] -> GSI 3 (level, low) -> IRQ 3

ohci_hcd 0000:00:02.1: nVidia Corporation nForce2 USB Controller (#2)

PCI: Setting latency timer of device 0000:00:02.1 to 64

ohci_hcd 0000:00:02.1: irq 3, pci mem 0xd3004000

ohci_hcd 0000:00:02.1: new USB bus registered, assigned bus number 3

hub 3-0:1.0: USB hub found

hub 3-0:1.0: 3 ports detected

usbcore: registered new driver usblp

drivers/usb/class/usblp.c: v0.13: USB Printer Device Class driver

i2c /dev entries driver

i2c_adapter i2c-0: nForce2 SMBus adapter at 0x5000

i2c_adapter i2c-1: nForce2 SMBus adapter at 0x5100

Advanced Linux Sound Architecture Driver Version 1.0.6 (Sun Aug 15 07:17:53 2004 UTC).

ACPI: PCI Interrupt Link [LNK1] enabled at IRQ 11

ACPI: PCI interrupt 0000:01:08.0[A] -> GSI 11 (level, low) -> IRQ 11

ALSA device list:

  #0: Sound Blaster Audigy (rev.3) at 0x9400, irq 11

oprofile: using NMI interrupt.

NET: Registered protocol family 2

IP: routing cache hash table of 8192 buckets, 64Kbytes

TCP: Hash tables configured (established 262144 bind 65536)

ip_conntrack version 2.1 (8191 buckets, 65528 max) - 300 bytes per conntrack

ip_tables: (C) 2000-2002 Netfilter core team

ipt_recent v0.3.1: Stephen Frost <sfrost@snowman.net>.  http://snowman.net/projects/ipt_recent/

arp_tables: (C) 2002 David S. Miller

NET: Registered protocol family 1

NET: Registered protocol family 17

ACPI wakeup devices:

HUB0 HUB1 USB0 USB1 USB2 F139 MMAC MMCI UAR1

ACPI: (supports S0 S1 S4 S5)

ReiserFS: hda4: found reiserfs format "3.6" with standard journal

usb 3-1: new low speed USB device using ohci_hcd and address 2

ReiserFS: hda4: using ordered data mode

ReiserFS: hda4: journal params: device hda4, size 8192, journal first block 18, max trans len 1024, max batch 900, max commit age 30, maxtrans age 30

ReiserFS: hda4: checking transaction log (hda4)

ReiserFS: hda4: Using r5 hash to sort names

VFS: Mounted root (reiserfs filesystem) readonly.

Freeing unused kernel memory: 164k freed

Adding 586364k swap on /dev/hda3.  Priority:-1 extents:1

fglrx: module license 'Proprietary. (C) 2002 - ATI Technologies, Starnberg, GERMANY' taints kernel.

[fglrx] Maximum main memory to use for locked dma buffers: 928 MBytes.

ACPI: PCI Interrupt Link [LNK4] enabled at IRQ 5

ACPI: PCI interrupt 0000:02:00.0[A] -> GSI 5 (level, low) -> IRQ 5

[fglrx] module loaded - fglrx 3.14.6 [Oct 30 2004] on minor 0

XFS mounting filesystem hdb1

Ending clean XFS mount for filesystem: hdb1

eth1: no link during initialization.

parport_pc: Ignoring new-style parameters in presence of obsolete ones

parport0: PC-style at 0x378 (0x778) [PCSPP(,...)]

parport0: irq 7 detected

lp0: using parport0 (polling).

NET: Registered protocol family 10

Disabled Privacy Extensions on device c047c4e0(lo)

IPv6 over IPv4 tunneling driver

allocation failed: out of vmalloc space - use vmalloc=<size> to increase size.

[fglrx] AGP detected, AgpState   = 0x1f00421b (hardware caps of chipset)

agpgart: Found an AGP 3.0 compliant device at 0000:00:00.0.

agpgart: Putting AGP V3 device at 0000:00:00.0 into 8x mode

agpgart: Putting AGP V3 device at 0000:02:00.0 into 8x mode

[fglrx] AGP enabled,  AgpCommand = 0x1f004312 (selected caps)

[fglrx] free  AGP = 524562432

[fglrx] max   AGP = 524562432

[fglrx] free  LFB = 116387840

[fglrx] max   LFB = 116387840

[fglrx] free  Inv = 0

[fglrx] max   Inv = 0

[fglrx] total Inv = 0

[fglrx] total TIM = 0

[fglrx] total FB  = 0

[fglrx] total AGP = 131072

eth1: no IPv6 routers present

eth0: no IPv6 routers present

eth1: link up.

eth1: link down.

eth1: link up.

eth1: link down.

----------

## djnauk

OK I've done some looking around:

http://www.ausforum.com/showthread.php?t=10431

and

http://www.etherboot.org/db/nic.php?show=tech_data&id=2

From the looks of it, eth0 on the second system is a Realtek RTL-8029, which is a PCI NE2000 clone only capable of 10 Mbps, not 100 Mbps. It also probably means it's running half-duplex, hence the poor connection.

You've got eth1 as well (running via forcedeth.c), which I think is the nForce adapter. Have you tried connecting the laptop to that port instead and adjusting the settings in /etc/conf.d/net?
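As a sanity check on the 10 Mbit theory, the raw arithmetic lines up with the reported 600 KB/s; a rough sketch (the ~10% framing overhead is an assumption, not a measurement):

```shell
# What a 10 Mbit/s link can deliver in KB/s, before and after typical
# Ethernet/IP/TCP framing overhead (~10%, a rough assumption).
raw_kbs=$(( 10000000 / 8 / 1000 ))     # 10 Mbit/s expressed in KB/s
net_kbs=$(( raw_kbs * 90 / 100 ))      # minus ~10% framing overhead
echo "raw ceiling:    ${raw_kbs} KB/s"   # -> 1250
echo "after framing: ~${net_kbs} KB/s"   # -> 1125
# Half-duplex collisions plus ssh encryption on a slow CPU can easily
# drag that down to the observed 600 KB/s; a true 100 Mbit link would
# top out roughly ten times higher on the same path.
```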

----------

## Jinidog

Yes, the other one is a Realtek.

I cannot switch them, because the other network is the home network, which is physically limited to 10 Mbit.

That card has a BNC connector and the other one is RJ-45, so I cannot swap them.

----------

## djnauk

 *Jinidog wrote:*   

> Yes, the other one is a Realtek.
> 
> I cannot switch them, because the other network is the home network, which is physically limited to 10 Mbit.
> 
> That card has a BNC connector and the other one is RJ-45, so I cannot swap them.

 

As long as you're using the Realtek, I'm afraid you're going to be stuck at 10 Mbps.

----------

## Jinidog

Why?

Because the driver is loaded?

Isn't it possible to have two NICs in one PC with different speeds?

----------

## djnauk

 *Jinidog wrote:*   

> Why?
> 
> Because the driver is loaded?
> 
> Isn't it possible to have two NICs in one PC with different speeds?

 

Yeah, you can have as many different connections at as many different speeds as you want (system capacity notwithstanding), but, according to your second set of logs, the Realtek controls eth0, while the nForce has eth1.

eth1 isn't connected ("eth1: no link during initialization."), so, as long as you're connected on eth0, the connection can't run at any more than 10 Mbps.

----------

## Jinidog

But the connection goes over eth1, I'm completely sure of that.

Perhaps there was no link detected, because the notebook wasn't on when I started the PC.

----------

## djnauk

So you've got the home network connected to the realtek (the extra card with the BNC connection), and you're trying to connect the laptop to the desktop using the rj45 port on the motherboard (the nforce adapter)?

----------

## Jinidog

Exactly  :Smile: 

----------

## think4urs11

Hi!

What does either 

```
ethtool eth0
```

 or 

```
mii-tool -v eth0
```

 report on PIII650? (and, for completeness, on the AMD box for both cards)

You are trying to 

a) transfer a file located on AMD2800+ towards PIII650?

OR

b) transfer a file from 'behind' (meaning a machine connected to AMD2800+ via the 10 Mbit card) towards PIII650?

In case of a): via sftp/scp, or via ftp/smb/whatever?

In case of b), the speed would be OK IMHO.

HTH

T.

----------

## djnauk

Is the cable used to connect the desktop and laptop a Cat5 (or above) cross-over cable?

Have you tried building a kernel without the Realtek support and just loading the nForce driver? Do you still get the same problems?

I can't think of a way to test the speed of a line, other than chucking data at it.
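For chucking data at it without ssh's cipher overhead, raw TCP through netcat works. A hypothetical sketch (the 192.168.0.2 address and port 5001 are made up; the commands are only echoed here so nothing runs by accident):

```shell
# Raw TCP throughput test via netcat -- dry run; run the echoed commands
# on the two machines for real, substituting your own address.
recv_cmd='nc -l -p 5001 > /dev/null'
send_cmd='dd if=/dev/zero bs=1M count=50 | nc 192.168.0.2 5001'
echo "on PIII650 (receiver): $recv_cmd"
echo "on AMD2800+ (sender):  $send_cmd"
# dd reports the elapsed time when the pipe closes: 50 MB in roughly
# 40 s means ~10 Mbit on the wire, while roughly 4 s means ~100 Mbit.
```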

----------

## djnauk

 *Think4UrS11 wrote:*   

> a) transfer a file located on AMD2800+ towards PIII650?
> 
> in case of a) - via sftp/scp or via ftp/smb/whatever?
> 
> 

 

That shouldn't be a problem, as I can throw files back and forth between an AMD 2100+ and a PII-450 server using just about any protocol, maxing out my connection (100BaseT-FD on a dedicated switch, with or without other traffic).

Thanks for the mii-tool info though!  :Smile:  Will remember that for the future! :p

----------

## Jinidog

I can just repeat, it is a point to point connection.

One Networkcard to the other.

So I'm pulling files from AMD2800+ to PIII650.

I'm using ssh.

What really proves that everything is running at just 10 Mbit is the ping time.

It takes 0.2 ms, which is much too long for a 100 Mbit connection.

I'll try removing the Realtek driver from the kernel.

Anyway, thanks for your help.

----------

## think4urs11

Ping time is no proof at all.

I've got ~0.25 ms here between two boxes.

Transfer speed via ssh is ~3.2 MB/s (very slow processor on one of them) and approx. 6-7 MB/s via smb.

Did you check the settings of both cards with ethtool/mii-tool?

----------

## Jinidog

Oh, I believe ping time is a proof for a DIRECT Ethernet connection (no hubs, routers, etc. in between).

What is the difference between 10 Mbit and 100 Mbit?

100 Mbit has much shorter coding intervals; the signal that encodes a binary 0 or 1 is much shorter.

As a ping is a very small packet, it depends on the latency and not on the bandwidth.

100 Mbit has ten times less latency than 10 Mbit, so ping times should be around 0.02 ms.

Status update:

I've removed the Realtek driver from my kernel.

Obviously it was unneeded, and my 10 Mbit card is using the ne2000 driver.

But that changed nothing regarding the connection to PIII650.

----------

## think4urs11

Another try:

What does

```
ethtool ethX
```

or

```
mii-tool -v ethX
```

report for all NICs involved?

Are both ends running at the same speed/duplex setting? If not, fix that first.
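If they don't match, autonegotiation can be overridden from the command line. A sketch, echoed rather than executed so nothing changes by accident; the interface names are placeholders for whichever end is the crossover link:

```shell
# Forcing 100 Mbit full duplex -- dry run; run the echoed commands to apply.
force_mii='mii-tool -F 100baseTx-FD eth0'
force_eth='ethtool -s eth0 speed 100 duplex full autoneg off'
echo "$force_mii"    # older MII-ioctl path, works with many drivers
echo "$force_eth"    # for drivers with ethtool set support
# Caveat: not every driver implements these ioctls (mii-tool already
# failed with SIOCGMIIPHY on the nForce side in this thread), so forcing
# may only be possible from the laptop end of the link.
```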

HTH

T.

----------

## Jinidog

 *Quote:*   

> 
> 
> AMD2800+ linux # ethtool eth1
> 
> Settings for eth1:
> ...

 

eth1 is the connection to the laptop PIII650.

 *Quote:*   

> 
> 
> AMD2800+ linux # mii-tool -v eth1
> 
> SIOCGMIIPHY on 'eth1' failed: Operation not supported
> ...

 

It gives an error.

----------

## Jinidog

From the laptop.

(ethtool is not installed)

 *Quote:*   

> 
> 
> bash-2.05b# mii-tool -v eth0
> 
> eth0: negotiated 100baseTx-FD, link ok
> ...


----------

## djnauk

 *Jinidog wrote:*   

> Oh, I believe ping time is a proof for a DIRECT Ethernet connection (no hubs, routers, etc. in between).
> 
> What is the difference between 10 Mbit and 100 Mbit?
> 
> 100 Mbit has much shorter coding intervals; the signal that encodes a binary 0 or 1 is much shorter.
> ...

 

It probably doesn't work like that. Chances are most of the time for a ping is processing time. Here are pings from my desktop (one to my router, one to my server, and the other to bbc.co.uk):

```
jwright@jonathan jwright $ ping -c 4 10.0.0.1

PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.

64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms

64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.264 ms

64 bytes from 10.0.0.1: icmp_seq=3 ttl=64 time=0.251 ms

64 bytes from 10.0.0.1: icmp_seq=4 ttl=64 time=0.280 ms

--- 10.0.0.1 ping statistics ---

4 packets transmitted, 4 received, 0% packet loss, time 3001ms

rtt min/avg/max/mdev = 0.251/0.264/0.280/0.010 ms

jwright@jonathan jwright $ ping -c 4 10.0.0.2

PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.

64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms

64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.183 ms

64 bytes from 10.0.0.2: icmp_seq=3 ttl=64 time=0.194 ms

64 bytes from 10.0.0.2: icmp_seq=4 ttl=64 time=0.191 ms

--- 10.0.0.2 ping statistics ---

4 packets transmitted, 4 received, 0% packet loss, time 2999ms

rtt min/avg/max/mdev = 0.183/0.217/0.300/0.048 ms

jwright@jonathan jwright $ ping -c 4 news.bbc.co.uk

PING newswww.bbc.net.uk (212.58.226.40) 56(84) bytes of data.

64 bytes from news.bbc.co.uk (212.58.226.40): icmp_seq=1 ttl=121 time=29.0 ms

64 bytes from news.bbc.co.uk (212.58.226.40): icmp_seq=2 ttl=121 time=76.6 ms

64 bytes from news.bbc.co.uk (212.58.226.40): icmp_seq=3 ttl=121 time=30.3 ms

64 bytes from news.bbc.co.uk (212.58.226.40): icmp_seq=4 ttl=121 time=51.5 ms

--- newswww.bbc.net.uk ping statistics ---

4 packets transmitted, 4 received, 0% packet loss, time 3001ms

rtt min/avg/max/mdev = 29.040/46.895/76.625/19.349 ms
```

That's over a 100 Mbps full-duplex connection through a dedicated switch with an 800 Mbps backbone.
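The wire-time share of those round trips can be estimated directly. A rough sketch, taking the 84-byte IP packet from ping's "56(84) bytes of data" and ignoring Ethernet framing:

```shell
# One-way serialization delay of an 84-byte ping packet at each wire
# speed, in microseconds -- the only part of the RTT that actually
# scales with link speed.
awk 'BEGIN {
    bytes = 84
    for (mbit = 10; mbit <= 100; mbit *= 10)
        printf "%3d Mbit: %.0f us on the wire, one way\n", mbit, bytes * 8 / mbit
}'
```

That works out to ~67 µs at 10 Mbit versus ~7 µs at 100 Mbit, one way. Either way it's small next to the ~0.2 ms of interrupt and stack processing in a LAN ping, which is why the ping time by itself can't distinguish the two speeds.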

 *Jinidog wrote:*   

> Status update:
> 
> I've removed the Realtek driver from my kernel.
> 
> Obviously it was unneeded, and my 10 Mbit card is using the ne2000 driver.
> ...

 

In that case I'm not sure what's going on. It may be worth trying another cable or seeing if you can borrow a hub off someone and a pair of patch cables to see if that changes anything.

Or, as a last resort, take the Realtek card out of the system and just try with the nForce port. You can probably pick up network cards on eBay for only a few pounds (most of mine are Intel Pro/100 S NICs, which cost only about £5 a time over the net, instead of about £50 in the shops).

----------

