# LAN not running at expected speed

## alienjon

For those who may have seen my post from a while back, this is a related question.  At the moment I have my desktop (with a Killer e2400 Gigabit Ethernet Controller) directly connected to my DD-WRT-flashed router (a Netgear R6300v2).  It's currently connected with a CAT6 patch cable, though I've also tried a few different CAT5E cables.  Running iperf3 as a server on the router with the desktop as the client (and vice versa), I get the following results (and these numbers have been very consistent across the several runs I've made so far):

 *Quote:*   

> SEE BELOW, THIS WAS BAD INFO

 

While I understand that there are always variables that impact a network's performance, I'm surprised that this isn't higher.  Shouldn't the bandwidth be about 3 times higher than this for Gigabit speeds?  Am I missing something in my understanding of network performance, or might this be a configuration/driver problem somewhere?

Before the question is asked, I understand that I'm talking about fairly impractical speeds for daily usage.  A reliable ~300 Mbit/s isn't anything to scoff at for a regular user, IMHO.

Last edited by alienjon on Mon Jan 02, 2017 3:58 pm; edited 1 time in total
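As a rough sanity check on that expectation, the framing overhead of standard Ethernet puts a ceiling on TCP throughput well below the raw line rate.  The numbers below are textbook values for a 1500-byte MTU, not measurements from this setup:

```python
# Rough sanity check: theoretical TCP payload rate on gigabit Ethernet,
# assuming a standard 1500-byte MTU (no jumbo frames).
LINK_RATE_MBPS = 1000            # raw gigabit line rate

# Per-frame on-wire cost: preamble (8) + Ethernet header (14) + payload (1500)
# + FCS (4) + inter-frame gap (12) = 1538 bytes on the wire, of which
# 1460 bytes are TCP payload (20 IP + 20 TCP header bytes).
wire_bytes = 8 + 14 + 1500 + 4 + 12   # 1538
tcp_payload = 1500 - 20 - 20          # 1460

efficiency = tcp_payload / wire_bytes          # ~0.949
max_goodput = LINK_RATE_MBPS * efficiency      # ~949 Mbit/s

observed = 372   # Mbit/s, from the iperf3 runs
print(f"theoretical TCP goodput: {max_goodput:.0f} Mbit/s")
print(f"observed: {observed} Mbit/s ({observed / max_goodput:.0%} of theoretical)")
```

So a well-behaved gigabit path should deliver on the order of 940-950 Mbit/s of TCP payload, which makes the ~370 Mbit/s result look like roughly 40% of what the link can do.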

----------

## eccerr0r

I've found that hardware makes a big difference.

Pretty much the only way I can get full Gbit over Gbit Ethernet (my record is a bit over 100 MB/sec, which is pretty close to 1 Gbit/sec) is to use PCIe-connected Ethernet with either an SSD or a RAM disk on both ends.  Almost anything else becomes a bottleneck; I typically only get 60 MB/sec or so on average with most of my disks, as I still have a lot of mechanical hard drives.

What and how is the disk attached on both ends?

----------

## alienjon

 *eccerr0r wrote:*   

> I've found that hardware makes a big difference.
> 
> Pretty much the only way I can get Gbit over Gbit (my record is a bit over 100MB/sec, which is pretty close to 1Gb/sec) is to use PCIe connected Ethernet through either SSD or RAMdisk. Almost anything becomes a bottleneck and typically I only get 60MB/sec or so with most of my disks on average as I still have a lot of mechanical hard drives.
> 
> What and how is the disk attached on both ends?

 

I'm using onboard Ethernet (not sure what the connection is, though; I'll try to find out).  The board is an MSI Gaming Z170A.  My Gentoo partition is running on a regular SATA-connected 7200 rpm drive.  My Windows 10 partition, however, is on a SATA-connected SSD (I'd imagine those r/w speeds make Ethernet speeds negligible in comparison, though please correct me if I'm wrong in that assumption).  I'm running these tests in both, but the info I'm giving you is the Windows data, as the r/w speeds should be less of a bottleneck there.

Also, I'm updating my last post.  The data I posted was wrong (it was the final accumulated amount, not the average).  It should have read:

```
Connecting to host 192.168.1.1, port 5201
[  4] local 192.168.1.11 port 53721 connected to 192.168.1.1 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec  41.9 MBytes   351 Mbits/sec
[  4]   1.00-2.00   sec  47.1 MBytes   395 Mbits/sec
[  4]   2.00-3.00   sec  43.9 MBytes   368 Mbits/sec
[  4]   3.00-4.00   sec  40.9 MBytes   343 Mbits/sec
[  4]   4.00-5.00   sec  43.1 MBytes   362 Mbits/sec
[  4]   5.00-6.00   sec  42.1 MBytes   353 Mbits/sec
[  4]   6.00-7.00   sec  43.4 MBytes   364 Mbits/sec
[  4]   7.00-8.00   sec  45.1 MBytes   379 Mbits/sec
[  4]   8.00-9.00   sec  47.6 MBytes   400 Mbits/sec
[  4]   9.00-10.00  sec  48.1 MBytes   404 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec   443 MBytes   372 Mbits/sec                  sender
[  4]   0.00-10.00  sec   443 MBytes   372 Mbits/sec                  receiver
```
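One detail worth noting when reading this output: iperf3 reports the Transfer column in binary MBytes (2^20 bytes) but the Bandwidth column in decimal Mbits/sec (10^6 bits), so the two columns don't reconcile by a simple factor of 8.  A quick check on the summary line:

```python
# iperf3's Transfer column uses binary MBytes (2**20 bytes); its Bandwidth
# column uses decimal Mbits/sec (10**6 bits).  Reconciling the summary line:
transfer_mbytes = 443          # MBytes, from the summary line
interval_sec = 10.0

bits = transfer_mbytes * 2**20 * 8
bandwidth_mbps = bits / interval_sec / 10**6
print(f"{bandwidth_mbps:.0f} Mbits/sec")   # matches the reported 372 Mbits/sec
```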

----------

## eccerr0r

Need to be a bit more clear... so you were actually doing computer to computer?

I initially thought it was Gentoo to a disk attached to the Netgear R6300v2, which would introduce some bottlenecking of its own.

Plus I don't know how the R6300v2 is connected internally.  If it uses a software bridge to connect two Ethernet ports, that could (though not always) bottleneck any connection through it.

The onboard Gbit Ethernet adapters on recent motherboards are usually hooked up via PCIe, so those should be OK.  On older boards they may be hooked up via PCI, shared with IDE/SATA, which again bottlenecks.

----------

## NeddySeagoon

alienjon,

Network hardware connects at 10/100/1000 Mbit/sec.

A 100 Mbit link gets you at best about 11 MB/sec: the packet overhead on the link is close to 10% of the raw 12.5 MB/sec.

dmesg may tell you the link speed.

The route from one hard drive to another over the network involves:

- Data read from HDD to RAM
- Data copy from RAM to network card
- Transmit data
- ... and data reception is the reverse process.

UDP will blindly transmit data over the wires.

TCP expects an acknowledgement.  It's not simply send a packet, then wait for the Ack: there is an adaptive window that allows several packets to be in flight together.  Nonetheless, processing the Ack/Nak is not free.
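The adaptive window NeddySeagoon describes has to cover the bandwidth-delay product: the bytes in flight must equal at least the link rate times the round-trip time, or the sender stalls waiting for Acks.  The RTT values below are illustrative guesses for a LAN, not measurements from this setup:

```python
# TCP needs its window to cover the bandwidth-delay product (BDP):
# bytes in flight >= link rate * round-trip time, or the link sits idle.
# RTT values here are illustrative LAN-scale guesses, not measured.
def bdp_bytes(rate_bits_per_sec: float, rtt_sec: float) -> float:
    """Bytes that must be in flight to saturate the link."""
    return rate_bits_per_sec * rtt_sec / 8

for rtt_ms in (0.2, 1.0, 5.0):
    bdp = bdp_bytes(1e9, rtt_ms / 1000)   # gigabit link
    print(f"RTT {rtt_ms} ms -> window >= {bdp / 1024:.0f} KiB")
```

On a direct gigabit cable the RTT is well under a millisecond, so default window sizes are plenty; the BDP only becomes the limiting factor on higher-latency paths.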

The speed you get from a 'rotating rust' HDD varies according to how far from the centre of the platter the data sits.

It's about 120 MB/sec at the outside and under 40 MB/sec for the inner tracks.

In short, I suspect you are seeing the 'rotating rust' HDD head-to-platter data limit.

----------

## alienjon

Not computer to computer.  I'm connecting the computer directly to the router, and the router itself is running the iperf server.  The question then may be the storage speed on the router, though I'm not sure if I'll be able to find that out easily.  I do have a laptop that I think has a Gbit port.  I'll try to test that later, though the laptop doesn't have an SSD.

If it is the storage bottlenecking here, how should I expect that to impact speed?

----------

## alienjon

 *NeddySeagoon wrote:*   

> In short, I suspect you are seeing the 'rotating rust' HDD head to platter data limit.

 

It would have to be on the router end, then (as eccerr0r had noted as well), as the desktop is SSD.  You also pointed out RAM, however.  What RAM specs would I be looking at in regards to comparing speed here?  I would guess the MHz frequency?  How might that translate?

Also, would it be correct to surmise that the router (or a switch, for that matter) doesn't use any RAM/HD in transmitting the network data passing through?  In other words, if I had 2 computers with Gbit/s Ethernet, SSD r/w speeds, and RAM that wouldn't bottleneck a 1 Gbit/s link, and ran a network test, would I get faster speeds, with any slowdown caused by other factors?

----------

## eccerr0r

Only thing you can try is ramdisk to ramdisk, or if you have SSDs use those.  Don't count on the router being fast enough, those usually use underpowered ARM or MIPS processors with suboptimal interconnect.  You really need a real switch (or even hub, but not a software bridge on underpowered hardware) to get full bandwidth.  I don't know if your router is indeed fast enough to reflect packets back at full Gbit speeds.  Can you run hdparm -T on one of the "disks" on the router to see how fast buffer transfers are?

I've seen modern spinning rust HDDs have burst sequential read speeds in the 180MB/sec range and are thus finally getting fast enough without RAID to fill Gbit Ethernet, but if you have an older, sub 1TB disk - especially laptop disks, likely it won't be fast enough.
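Putting eccerr0r's disk figures next to the wire speed makes the bottleneck question concrete.  The ~118 MB/sec figure below is a common rule-of-thumb TCP payload ceiling for gigabit Ethernet at a 1500-byte MTU, not a measurement from this thread:

```python
# Whichever is slower, the disk or the wire, sets the transfer rate.
# ~118 MB/s is a rule-of-thumb TCP payload ceiling for GbE (1500-byte MTU).
GBE_PAYLOAD_MB_S = 118

for name, disk_mb_s in [("fast modern HDD (burst)", 180),
                        ("typical older HDD", 60)]:
    bottleneck = min(disk_mb_s, GBE_PAYLOAD_MB_S)
    print(f"{name}: ~{bottleneck} MB/s -> ~{bottleneck * 8} Mbit/s on the wire")
```

So a 180 MB/sec disk can keep gigabit Ethernet full, while a 60 MB/sec disk caps the transfer at roughly 480 Mbit/s regardless of the network.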

----------

## alienjon

Thanks for pointing out the r/w speeds of the router - I hadn't considered that earlier (I was only thinking of the other end - the desktop).  I just fired up my 'old' laptop (I think it was new around 2009), but it also has PCIe Gbit Ethernet.  iperf between the desktop and laptop (using the router to communicate between the two) results in:

```
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 192.168.1.124, port 51316
[  5] local 192.168.1.11 port 5201 connected to 192.168.1.124 port 51317
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec   103 MBytes   866 Mbits/sec
[  5]   1.00-2.00   sec  99.1 MBytes   831 Mbits/sec
[  5]   2.00-3.00   sec   107 MBytes   899 Mbits/sec
[  5]   3.00-4.00   sec   102 MBytes   856 Mbits/sec
[  5]   4.00-5.00   sec   105 MBytes   883 Mbits/sec
[  5]   5.00-6.00   sec  87.8 MBytes   736 Mbits/sec
[  5]   6.00-7.00   sec   108 MBytes   906 Mbits/sec
[  5]   7.00-8.00   sec   108 MBytes   906 Mbits/sec
[  5]   8.00-9.00   sec   110 MBytes   919 Mbits/sec
[  5]   9.00-10.00  sec   107 MBytes   894 Mbits/sec
[  5]  10.00-10.07  sec  7.36 MBytes   949 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-10.07  sec  0.00 Bytes  0.00 bits/sec                  sender
[  5]   0.00-10.07  sec  1.02 GBytes   870 Mbits/sec                  receiver
```

Now THAT'S what I was looking for (or, at least, expecting).

@eccerr0r:

```
/dev/sda:
 Timing buffer-cache reads:   224 MB in 0.51 seconds = 446169 kB/s
```

Not sure how that translates with the rest of the conversation, but I did want to post these results.
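For context, that hdparm -T figure converts to far more than gigabit Ethernet can carry, which suggests the router's buffer cache itself is not the limiting factor.  The conversion below treats kB as 1000 bytes; hdparm's units can be binary, but the conclusion is the same either way:

```python
# Putting the hdparm -T number in context (kB treated as 1000 bytes here;
# hdparm's units may be binary, but the conclusion doesn't change).
buffer_cache_kb_s = 446169

mb_s = buffer_cache_kb_s / 1000            # ~446 MB/s
mbit_s = mb_s * 8                          # ~3569 Mbit/s
gigabit_payload = 949                      # rough TCP ceiling on GbE, Mbit/s

print(f"router buffer cache: ~{mb_s:.0f} MB/s = ~{mbit_s:.0f} Mbit/s")
print("faster than gigabit Ethernet?", mbit_s > gigabit_payload)
```

Note, though, that -T measures RAM-cached reads only; the actual flash or attached disk behind it could still be much slower.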

Thanks for all the help with this (both @eccerr0r and @NeddySeagoon).  This was more about diagnostics and better understanding my network than a real problem.  I'm looking to upgrade to a Ubiquiti router eventually and set up a separate server for media sharing and the like (thus allowing me to have better control over the components on the server).

----------

