# Bad network/samba performance

## servicebag

Hi there,

Two months ago I finally found the time to switch back to Gentoo.

What I still can't figure out is why my network/Samba performance is so bad. Write performance usually drops to 3 to 9 MB/s, while with the same cable under Windows I usually get around 50 MB/s.

Read speed is fine, so it's probably a bad configuration or something else.

I've tried a few things but still haven't been able to improve it. Here is my ifconfig output:

```
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.5  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::224:1dff:fe75:7410  prefixlen 64  scopeid 0x20<link>
        ether 00:24:1d:75:74:10  txqueuelen 1000  (Ethernet)
        RX packets 1150992  bytes 1185151483 (1.1 GiB)
        RX errors 0  dropped 1  overruns 0  frame 0
        TX packets 878246  bytes 712107414 (679.1 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
```

My smb.conf:

```
# Global parameters
[global]
workgroup = WORKGROUP
#map to guest = Bad User
max connections = 300
time server = Yes
unix charset = UTF-8
display charset = UTF-8
interfaces = eth0
#bind interfaces only = yes
```

My fstab entry:

```
//nas/backup                  /mnt/backup         cifs            defaults,user,noauto,file_mode=0777,dir_mode=0777    0 0
```
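For reference, cifs write throughput can depend heavily on the negotiated write size, and on older kernels it can be pinned with the `wsize` mount option. A hypothetical variant of the line above (the 65536 value is illustrative, not a recommendation):

```
//nas/backup                  /mnt/backup         cifs            defaults,user,noauto,file_mode=0777,dir_mode=0777,wsize=65536,rsize=65536    0 0
```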

dmesg:

```
[    0.840968] r8169 0000:05:00.0 eth0: RTL8168c/8111c at 0xffffc90000c74000, 00:24:1d:75:74:10, XID 1c4000c0 IRQ 43
[    0.841053] r8169 0000:05:00.0 eth0: jumbo features [frames: 6128 bytes, tx checksumming: ko]
[    7.154632] r8169 0000:05:00.0 eth0: link down
[    7.154640] r8169 0000:05:00.0 eth0: link down
[    7.154656] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[    9.539001] r8169 0000:05:00.0 eth0: link up
[    9.539009] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
```

Thanks for helping me out!

----------

## vaxbrat

Try using Wireshark to see what's flying back and forth packet-wise. Also, since you are using jumbo frames, are you sure everybody on the local network agrees on frame size? That whole slicing/dicing thing can make a hash of network performance. Even the switches you are using must be able to deal with it, or else packets will fragment and need reassembly.
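A quick way to sanity-check that, assuming Linux clients and a target MTU of 9000 (the address and MTU here are illustrative), is a don't-fragment ping sized to fill a jumbo frame:

```
# Current MTU on the interface
ip link show eth0

# 9000-byte MTU minus 20 bytes IPv4 header minus 8 bytes ICMP header = 8972.
# With -M do the packet is not allowed to fragment, so if anything in the
# path can't pass a jumbo frame, the ping fails instead of silently slicing.
ping -M do -s 8972 -c 3 192.168.1.1
```

If that fails while a plain `ping` works, some device in the path is not honouring the larger frame size.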

----------

## gerdesj

What version of Samba are you using?

Well spotted, vaxbrat - jumbo frames. If you can't guarantee that you know exactly how all the devices in the path interact with them, you will almost certainly see problems.

IMNSHO, JFs are useful purely for iSCSI, and then only when supported along the entire path. That said, it doesn't mean you won't get a performance boost in certain use cases and with certain equipment.

An example: 

Dell EqualLogic, err, something. One of them is a £15,000-odd piece of kit. It's a very well put together iSCSI SAN with dual controllers, each with four GbE NICs, one of which can/should be dedicated to management of the thing (the "dormant" controller has another matching four NICs). They also do one that is similar but with two GbE NICs and a 100 Mb NIC for management per controller, and it's a bit cheaper.

Now, you put in two PowerConnect (whatever is supported) switches and off you go. At one point the PowerConnect 55xx was the base-level supported switch, but these seem to have become switch non grata at Dell. The iSCSI interfaces seem to work OK but the management interfaces go a bit weird. The model with the 100 Mb management NICs is fine in this config.

The one that has snags runs the management NICs at 1 Gb with JFs enabled (you can't turn that off), and you can't turn off JFs per port on a Dell PC 55xx either, so you are stuck. Then you are at the stage of messing with different versions of firmware on both the Eql and the switch to find the best combination. It's messy!

Unless: 

(you are an expert at the ASIC level with your switch AND an expert on your IP stack and system hardware) 

OR (you have a path that is vender approved and supported) 

OR (it somehow just works)

THEN bin jumbo frames  
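For what it's worth, on Linux binning them is just a matter of resetting the interface MTU to the standard 1500 if a larger one was set (eth0 as in the original post; needs root):

```
ip link set dev eth0 mtu 1500
ip link show eth0    # check the mtu field now reads 1500
```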

I get far better performance than that with Samba without doing anything fancy. At home I have cheap Netgear eight-port switches around the place, and I get near wire-speed performance from my laptop (an aging Dell Studio 17 series) to a Medion something-or-other, both running Gentoo. SMB/CIFS is close to a fancy FTP session and there is not a lot of overhead.

1 Gbit/s = 125 MB/s (for NICs, G = 1,000,000,000 and there are 8 bits in a byte) and that is without any overhead. Now I seem to remember that a busy ethernet runs at about 40% efficiency due to Carrier Sense Multiple whatever (CSMA/CD) and the onion effect of the protocols in use, so that gives you a target of 50 MB/s, although I know that can be bettered.
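That back-of-the-envelope sum, spelled out (the 40% figure is a rule of thumb, not a measured constant):

```python
# 1 Gbit/s in bits per second -- NIC speeds use decimal prefixes,
# so "G" here is 1,000,000,000, not 2**30.
link_bps = 1_000_000_000

# 8 bits per byte, and 1 MB = 1,000,000 bytes in this convention.
wire_speed_mb = link_bps // 8 // 1_000_000   # 125 MB/s with zero overhead

# Rough rule of thumb: a busy ethernet runs at ~40% efficiency.
target_mb = wire_speed_mb * 40 // 100        # 50 MB/s realistic target

print(wire_speed_mb, target_mb)              # 125 50
```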

OK, not very scientific, but the message is: bin the fancy stuff and benchmark. Then make a change - one thing at a time - and benchmark again.
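A sketch of that baseline step, assuming iperf3 is installed on both ends (the server address is illustrative): measure raw TCP throughput first, so Samba is out of the picture, then time a real write to the share.

```
# On the NAS / server end:
iperf3 -s

# On the client: raw TCP throughput, no SMB involved. If this already
# tops out around 9 MB/s, the problem is below the Samba layer.
iperf3 -c 192.168.1.10

# Then time an actual write through the cifs mount:
dd if=/dev/zero of=/mnt/backup/testfile bs=1M count=512 conv=fsync
```

Change one thing, re-run, compare.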

You also don't give any hints as to what hardware you are using.

Hope this helps

Cheers

Jon

----------

## vaxbrat

gerdesj, love your passage there. I had an area doing a lot of MATLAB processing of certain data in copious amounts. The servers and switches were HP based, with DL360 and DL380 G6 servers and Z800 workstations mostly running WinXP 64-bit, while my side was running RHEL 5 32-bit (the customer wanted it that way... sheesh) on mostly HP 8400-class stuff that came off of someone's bonepile. There was a lot of navel-gazing about file server performance (AD and Win2k3) on the HP servers talking to the MATLAB wonks.

For whatever reason, the on-board NICs on the Z800s couldn't jumbo-frame themselves out of a wet paper bag. Off the top of my head, I think they were Broadcom of some NetXtreme vintage, but I suspect the XP64 driver support for them (you know, that red-headed stepchild version of Winders) left something to be desired. The Windows guy ended up buying a bunch of Intel EtherExpress Pro cards and dropped them in everything, including my Red Hat boxes. He had a support contract with HP and got some expert to come in and reconfigure the switches. Made a world of difference, and we had the frame size upped to the max everywhere.

OTOH, let's talk about another area where they brought me in to do a Samba domain join for a RHEL box that was needed to run Xilinx. This was all HP DL360 G7-class (Sandy Bridge) stuff that hadn't been running more than 6 months, plus various desktops. Network performance sucked hind extremity. In my wanderings, I noticed an XP32 workstation reported that its network only ran at 10 Mbit. Then I saw the same thing with a Z800 workstation, where I couldn't even get the NIC to negotiate 100 Mbit at a fixed speed. After mumbling several WTFs under my breath, I went into the server room to look at the switch. Turns out it was some turn-of-the-century Catalyst 4700 vintage that only did 10 to the ports and 100 to the uplink! The IT guy in charge of the area had pulled the original switch out several months ago, when another of his areas had a dead switch and needed a replacement. At the time, there were only a couple of people in this new program area, and they were only proposal types. So he pulled one off of his bonepile. Then, HE FORGOT HE DID THIS!

----------

