# Windows faster than Linux?

## grooveman

Hi,

I have set up a FreeNAS box as a Samba server.  I have bumped into something interesting and, in all honesty, a little bit disappointing.

It seems that when I'm using Windows 8 (which I am loath to do), I saturate the full 1 Gb network connection.  I can transfer files at 110 MiB/s (for both reads and writes).

However, when I boot the same computer into Gentoo Linux, that speed drops to about 60 MiB/s on writes, and about 75 MiB/s on reads -- moving the same files on the same hardware, over the same network.

I have spent a few days trying to tweak the smb4.conf file on the FreeNAS box to get a better transfer rate, but nothing I try has worked.

I have tried suggestions here:

a rather long google-link

here:

http://www.eggplant.pro/blog/faster-samba-smb-cifs-share-performance/

here:

https://calomel.org/samba_optimize.html

and here:

https://wiki.amahi.org/index.php/Make_Samba_Go_Faster

And many other places as well, but they usually reference the same directives.

I'm beginning to believe that the problem is not with my smb(4).conf, but rather with my linux clients.  Could it be that there are tweaks to the linux TCP/IP stack on the client side that I should be making?

I have verified this on other machines as well.  The Windows PCs are able to saturate 1 Gb, but not the Linux machines.  They are always 25% to 50% slower.

Any ideas on what I can do to get the Linux machines to saturate a 1 Gb network connection?

Thanks!

G

Formed URL-tags around a link to google due to it being past the length that tends to break the forum layout a little bit. —Chiitoo

----------

## chithanh

Try to identify the bottleneck.

When doing samba transfers, is the CPU at 100% load?

Are you limited by network (iperf) or disk (bonnie++) performance? Which filesystem?
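For reference, a minimal iperf run looks something like this (assuming iperf is installed on both ends; 10.9.99.1 below is a placeholder for the NAS address):

```
# On the FreeNAS box (server side):
iperf -s

# On the Gentoo client: 30-second TCP test, reporting every second.
iperf -c 10.9.99.1 -t 30 -i 1
```

If the measured bandwidth is well below ~940 Mbit/s, the network path (stack, driver, or cabling) is implicated before Samba ever enters the picture.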

----------

## grooveman

CPU does not appear to even be touched during the transfer.  There is no load on the system.  

Filesystem speed is likewise ruled out by copying the same files to another folder on the system.  Those copies are almost instantaneous.

This really seems to be (linux) networking related.

Thanks.

G

----------

## gordonb3

 *grooveman wrote:*   

> 
> 
> I'm beginning to believe that the problem is not with my smb(4).conf, but rather with my linux clients.  Could it be that there are tweaks to the linux TCP/IP stack on the client side that I should be making?
> 
> 

 

That is the logical conclusion if Windows 8 machines can access the files quicker on the same Samba server.  It also means that the links you followed are rather useless, because they are about speeding up the server, not the client.

Now the interesting part in your post is that you mention copying files, plural.  Starting with Vista, Microsoft did some work on the SMB protocol, resulting in what is now known as SMB2.  This incorporates client-side caching of frequently requested data that usually does not change very often, and the Samba server supports this.  CIFS, however, is not SMB2, and as a result will spend more time reloading and interpreting that "static" data.  You will likely be able to verify this by copying a single large file rather than a set of files, which should result in similar times for both Windows and Linux.
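Incidentally, one way to test the protocol-version hypothesis from the Linux side is the vers= option of mount.cifs, which forces a particular dialect (this needs a reasonably recent kernel and cifs-utils; the server name, share, and mountpoints below are placeholders):

```
# Mount the same share twice, forcing old CIFS and SMB2.1 respectively,
# then compare copy speeds between the two mounts.
mount -t cifs //nas/share /mnt/test1 -o username=guest,vers=1.0
mount -t cifs //nas/share /mnt/test2 -o username=guest,vers=2.1
```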

----------

## schorsch_76

Try to use netcat to split the problem into two halves.

Computer A (Receiver):

```
nc -l 10000 > /dev/null
```

Computer B (Sender):

```
cat /dev/zero | pv | nc targetip 10000
```

pv = sys-apps/pv (pipe viewer, to measure the speed)

nc = netcat. There are different versions in the portage tree.

If you get your desired speed on the raw network, you have eliminated the network stack as an error source. Otherwise, the problem sits in the network stack (module options, whatever). With this test (/dev/zero -> tcp socket -> /dev/null) you pipe zeros to the other computer's /dev/null. Zero and null are not filesystem- or CPU-bound.

----------

## chithanh

If you suspect that the network is the bottleneck, iperf (available both on Gentoo and FreeNAS) will be able to verify this.

Any further speculation is of limited use until hard data comes in from measurements.

----------

## gordonb3

 *chithanh wrote:*   

> If you suspect that the network is the bottleneck,...

 

He doesn't. Read the opening post:

 *grooveman wrote:*   

> However, when I boot the same computer to Gentoo Linux, that speed drops to about 60 mib/sec on the write, and about 75 mib on the read -- moving the same files on the same hardware, over the same network. 

 

It is software related, and unless one thinks file transfer should run even faster when both the client machine and the NAS are booted into Windows, the problem is restricted to what is running on the client.

----------

## chithanh

 *gordonb3 wrote:*   

>  *chithanh wrote:*   If you suspect that the network is the bottleneck,... 
> 
> He doesn't. Read the opening post:

 

He does. Read his second post:

 *grooveman wrote:*   

> This really seems to be (linux) networking related.

 

iperf will confirm or refute this hypothesis.

If it confirms, next step would be identifying which exact part of the network is the bottleneck. Could be the Linux TCP/IP stack, the kernel NIC driver, or some misconfiguration which makes the NIC work non-optimally with other network components.
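A few read-only client-side checks can help narrow that down (eth0 below is a placeholder for the actual interface name):

```
# Negotiated link speed and duplex -- a bad autonegotiation alone
# could explain throughput in this range.
ethtool eth0

# Driver and firmware in use by the NIC.
ethtool -i eth0

# Per-interface error and drop counters; watch them during a transfer.
ip -s link show eth0
```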

----------

## grooveman

Thank you.

Since this seems to happen out of the box on all of the Linux boxes I deal with, I was hoping someone knew of a known issue in the Linux TCP/IP stack that would account for this.

I really wanted to see if other people could corroborate this: the notion that Linux is slower than Windows on the network (at least when using SMB).

I will see what I can get out of iperf.

Thank you   :Smile: 

----------

## gordonb3

 *chithanh wrote:*   

>  *gordonb3 wrote:*    *chithanh wrote:*   If you suspect that the network is the bottleneck,... 
> 
> He doesn't. Read the opening post: 
> 
> He does. Read his second post:
> ...

 

So we're both smart-asses  :Laughing: 

Yes, if the low network throughput can be linked to simple non-SMB network transfers as well, that would mean some low-level networking component is not working the way it should.  Because the FreeNAS machine appears to be unaffected by whatever may be causing this, I'd not expect anything to come of it, but I guess it can't hurt.  It's like you suggested: if the client machines all use the same NIC, which is different from the one in the FreeNAS machine, a faulty driver could slow down networking.

----------

## 1clue

 *schorsch_76 wrote:*   

> Try to use netcat to split the problem into two halfs. 
> 
> Computer A (Receiver):
> 
> ```
> ...

 

Just as a control group, I tried this with my systems and got 112 MiB/s; wire speed would be 119 MiB/s.  This is a Gentoo client (Atom C2758 + Intel I354 NIC, igb driver) and an Ubuntu server (i7 920 + Realtek RTL8111 NIC, r8169 driver).

----------

## grooveman

 *gordonb3 wrote:*   

>  *grooveman wrote:*   
> 
> I'm beginning to believe that the problem is not with my smb(4).conf, but rather with my linux clients.  Could it be that there are tweaks to the linux TCP/IP stack on the client side that I should be making?
> 
>  
> ...

 

Hi, Sorry, I missed this post (and the next).  Actually I have tried sending one large file as well, the speed difference is still there.

Thanks.

----------

## szatox

 *Quote:*   

> Computer A (Receiver):
> 
> Code:	
> 
> nc -l 10000 > /dev/null	
> ...

 

There is a serious pitfall here. I noticed it when I was testing "virtual" connections: that pipe is too thin even for gigabit ethernet.

Use nc < /dev/zero and nc > /dev/null instead, and measure the speed with some other tool -- iftop, iptables byte counters, or even tracking progress with ifconfig if you can't do better -- but do not push that data through a pipe.

----------

## grooveman

Ok.  I can send data from my Linux PC to the NAS -- and it is cruising along at around 111 MiB/s, commensurate with the Windows machine moving data over SMB.

However, I'm having trouble sending data (via nc) from the NAS to the linux machine.

The line (executed from the NAS)

```
cat /dev/zero |nc 10.9.99.200 10000
```

appears to do nothing.  I just get my prompt back.

The line:

```
cat /dev/zero >nc 10.9.99.200 10000
```

Seems to hang.  Of course, all the while I'm running:

```
nc -l 10000 > /dev/null
```

on the Linux box, and I have iptraf open in another terminal.  Nothing registers.  Both the NAS and the Linux machine just sit there, the prompt apparently hanging.  iptraf sits at zero.

----------

## 1clue

 *szatox wrote:*   

>  *Quote:*   Computer A (Receiver):
> 
> Code:	
> 
> nc -l 10000 > /dev/null	
> ...

 

@szatox,

I don't understand what you're saying here.  Are you saying that a pipe cannot support more than 1gbps?  What exactly can't support 1gbps?

FWIW I have an Atom box that can cat /dev/zero | pv -ar | pbzip2 > /dev/null at 319 MiB/s; it's doing it right now.  That's 2.68 Gbps.

----------

## Akkara

 *grooveman wrote:*   

> The line:
> 
> ```
> cat /dev/zero >nc 10.9.99.200 10000
> ```
> ...

 

That line simply made a file named "nc" and filled it with zeros.  You might want to remove it now so it doesn't take up space.  :Smile: 

I think the suggestion had been to run:

```
nc 10.9.99.200 10000 </dev/zero
```

----------

## gordonb3

If you have trouble running low-level protocols as suggested by some, how about trying one of the other file sharing protocols supported by FreeNAS? You could try FTP, which is a seriously no-nonsense seventies protocol.

----------

## grooveman

 *Akkara wrote:*   

>  *grooveman wrote:*   The line:
> 
> ```
> cat /dev/zero >nc 10.9.99.200 10000
> ```
> ...

 

 :Laughing:  LOL

hah, missed that one.  Yes, even trying it with your syntax still doesn't work.  The weird thing is that I get an "RSET" in my iptraf, and the connection is cut immediately... weird.

----------

## schorsch_76

 *Akkara wrote:*   

>  *grooveman wrote:*   The line:
> 
> ```
> cat /dev/zero >nc 10.9.99.200 10000
> ```
> ...

 

Oh yes, you are right. I wrote this as I was at work on a Windows machine  :Wink: 

----------

## szatox

1clue, have a look at those:

```
nc -l -p 9999 > /dev/null & nc localhost 9999 < /dev/zero

─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

TX:             cum:   18.7GB   peak:      0b                                                                rates:   10.7Gb  10.8Gb  5.75Gb

RX:                       0B               0b                                                                            0b      0b      0b

TOTAL:                 18.7GB              0b                                                                         10.7Gb  10.8Gb  5.75Gb

```

```
 nc -l -p 9999 > /dev/null & cat /dev/zero | nc localhost 9999

─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

TX:             cum:   9.35GB   peak:   3.91Gb                                                               rates:   3.91Gb  3.87Gb  3.12Gb

RX:                       0B               0b                                                                            0b      0b      0b

TOTAL:                 9.35GB           3.91Gb                                                                        3.91Gb  3.87Gb  3.12Gb

```

Alright, I noticed that drop when I had some load on this box (and the above is with an otherwise idle system). This time it is enough for gigabit ethernet, though the difference between ~10 Gbps and 4 Gbps is still noticeable, and there are already 10Gb networks out there. Bear it in mind when testing throughput.

And a bonus: adding another pipe doesn't really change anything, so it's not an overload issue.

```
nc -l -p 9999 | cat > /dev/null & cat /dev/zero | nc localhost 9999 

─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

TX:             cum:   5.81GB   peak:   4.07Gb                                                               rates:   4.07Gb  3.99Gb  2.91Gb

RX:                       0B               0b                                                                            0b      0b      0b

TOTAL:                 5.81GB           4.07Gb                                                                        4.07Gb  3.99Gb  2.91Gb
```

Tweaked line lengths in order to make the forum layout behave. —Chiitoo

----------

## EggplantSystems

I authored one of the original suggestions: https://eggplant.pro/blog/faster-samba-smb-cifs-share-performance/

Before I get to the client stuff, additional things to consider:

"strict allocate" makes a big difference, but your file system must support unwritten extents (XFS, ext4, BTRFS, OCS2) for it to work

"allocation roundup size" has no noticeable bearing on performance, but should probably be specified to keep space-wastage down when "strict allocate" = Yes

"socket options" only made a minor difference for me, but I am using Linux.  Some of the socket options are different/not-applicable for different operating systems.

Drives are many orders of magnitude slower than CPU and RAM.  Gigabit ethernet -- even unoptimized -- has just enough bandwidth for your typical 7200 RPM SATA drive.  If you're not getting good numbers, the problem is probably going to be file-system related.  Samba has so many low-level tweaks for interfacing with the host filesystem that end up second-guessing the caching/allocation strategies of the host platform.  When the Samba tweaks undermine the native caching/allocation strategies, the results are going to be bad.
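For reference, a minimal sketch of how those two directives might look in the [global] section of smb4.conf (the roundup value is illustrative, not tuned):

```
[global]
    # Preallocate files at their full size; needs a filesystem with
    # unwritten-extent support (XFS, ext4, BTRFS, OCFS2) to be cheap.
    strict allocate = yes
    # Round allocations to 1 MiB to limit wasted space with strict allocate.
    allocation roundup size = 1048576
```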

As for client performance, smbclient/cifs has options similar to those of the server (buffers, socket opts, and in some cases it even defaults to the contents of smb.conf).  You can use cifsiostat to get transfer stats.  On Debian:

```
Mount options for cifs

       See the options section of the mount.cifs(8) man page (cifs-utils package must be installed).
```

Another thing that can make a difference is the corpus you use for testing.  Sending a single 1 GB file is going to go a lot faster than sending 1024 separate 1 MB files.  There is a decent amount of per-file overhead (names, ACLs, etc.).  The single file gets that out of the way once and can focus on raw data thereafter.
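That per-file overhead is easy to make visible: build two corpora of equal total size, one big file versus many small files, and time each copy separately.  Sizes below are scaled down for illustration, and the directory names are arbitrary.

```shell
#!/bin/sh
# Build two 64 MiB test corpora: one single file vs. 64 one-MiB files.
# Scale bs/count up for a realistic benchmark over the share.
mkdir -p corpus/single corpus/many
dd if=/dev/zero of=corpus/single/big.bin bs=1M count=64 2>/dev/null
i=1
while [ "$i" -le 64 ]; do
    dd if=/dev/zero of=corpus/many/part"$i".bin bs=1M count=1 2>/dev/null
    i=$((i + 1))
done
# Same total bytes, very different per-file overhead when copied over SMB.
du -sh corpus/single corpus/many
```

Timing each copy onto the mounted share (e.g. with time cp -r) then shows how much of the gap is per-file setup cost rather than raw bandwidth.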

----------

