# poor samba performance

## tnt

I have quite a strange problem...

server <---> 1gbps switch <---> client

both server and client run gentoo amd64

both server and client have >2GHz 4MB L2 cache core2duo CPUs

server 8GB of RAM

client 4GB of RAM

server 8x750GB 7.2k rpm SATA hdds in RAID5

client 2x640GB 7.2k rpm SATA hdds in RAID0

with both FTP and NFS I achieve over 100MB/s from server to client.

with samba (both using a CIFS mount point in /etc/fstab and using smb://server/location in KDE's Dolphin) the transfer tops out at ~30MB/s.

interestingly enough, using smbclient gave (average 65855.5 KiloBytes/sec) on a 12GB file copy.

server smb.conf

```
[global]

        ...

        os level = 32

        log level = 2

        socket options = TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536 SO_KEEPALIVE

        max xmit = 65536

        dead time = 15

        getwd cache = yes

        domain master = Yes

        preferred master = Yes

        ...

```

client smb.conf

```
[global]

        ...

        max connections = 300

        os level = 24

        disable spoolss = yes

        socket options = TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536 SO_KEEPALIVE

        max xmit = 65536

        dead time = 15

        getwd cache = yes

        ...

```

does anyone have an idea what could go wrong with samba?

----------

## gentoo_ram

Try lowering the TCP send and receive buffer sizes to 8K, or maybe 16K at the most. If this is a LAN connection, those large TCP windows probably won't buy you much; all they do is make the copies take a long time to ramp up.

Also, the CIFS mount point will not use smb.conf for the TCP buffer sizes. You'll need mount options like 'rsize=8192,wsize=8192'.
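
For example, an /etc/fstab entry carrying those options could look like this (server name, share, and credentials path are placeholders, not from this thread):

```
# /etc/fstab -- hypothetical CIFS entry with explicit buffer sizes
//server/share  /mnt/share  cifs  rsize=8192,wsize=8192,credentials=/etc/samba/creds,_netdev  0  0
```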

----------

## tnt

setting buffer sizes on both server and client to 16K lowered smbclient throughput from ~68MB/s to ~50MB/s

setting buffer sizes on both server and client to 8K lowered smbclient throughput from ~50MB/s to ~25MB/s

setting buffer sizes on both server and client to 256K unleashed the following: (82087.6 KiloBytes/sec)

setting buffer sizes on both server and client to 1M added ~3.7MB/s: (85717.4 KiloBytes/sec)

setting buffer sizes on both server and client to 4M did not change anything: (85054.8 KiloBytes/sec)

I've left the buffer sizes at 1M
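
For reference, this is how the corresponding smb.conf line could look (a sketch only; note that the kernel caps these values at the net.core.rmem_max / net.core.wmem_max sysctls, so those may need raising to match):

```
[global]
        socket options = TCP_NODELAY SO_RCVBUF=1048576 SO_SNDBUF=1048576 SO_KEEPALIVE
```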

without any mount options, cifs mounts with 'rsize=16384,wsize=57344' and it achieves ~29MB/s

adding 'rsize=8192,wsize=8192' lowered throughput to ~26MB/s

setting 'rsize=16384,wsize=16384' gave ~30MB/s

setting rsize anything above 16384 did not do anything - /proc/mounts showed that rsize stayed at 16384

setting wsize works up to 57344. setting a value above that leads to wsize=4096 in /proc/mounts
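
a quick way to check what actually got negotiated is to pull rsize/wsize out of /proc/mounts; here is a small sketch run against a sample line (the server/share names are hypothetical; in practice you'd grep /proc/mounts for the cifs entry):

```
# Sample cifs line as it might appear in /proc/mounts (hypothetical names)
line='//server/share /mnt/share cifs rw,relatime,rsize=16384,wsize=57344 0 0'

# Extract the negotiated buffer sizes
rsize=$(printf '%s\n' "$line" | grep -o 'rsize=[0-9]*' | cut -d= -f2)
wsize=$(printf '%s\n' "$line" | grep -o 'wsize=[0-9]*' | cut -d= -f2)
printf 'rsize=%s wsize=%s\n' "$rsize" "$wsize"
```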

changing smb.conf on server to include 'max xmit = 1048576' did not change anything.

still poor performance with cifs.  :Sad: 

is there any way to change those cifs limits?

----------

## tnt

I've found a similar situation / topic here:

http://lists.samba.org/archive/linux/2009-April/022979.html

guess I'm not alone...  :Sad: 

----------

## tnt

[partially solved]

I managed to pump up cifs performance from ~30MB/s to ~52MB/s.

cifs without rsize and wsize arguments tries to mount with the largest rsize and wsize it can negotiate with the server. usually, that is 16384 and 57344.

cifs cannot use a larger rsize than CIFSMaxBufSize. this is from the man page for mount.cifs:

 *Quote:*   

> CIFSMaxBufSize defaults to 16K and may be changed (from 8K to the maximum kmalloc size allowed by your kernel) at module install time for cifs.ko. Setting CIFSMaxBufSize to a very large value will cause cifs to use more memory and may reduce performance in some cases.

 

so, I threw cifs out of my kernel and compiled it as a module. on load I passed the CIFSMaxBufSize=130048 argument, and all of my mount points were then mounted with rsize=130048,wsize=57344.

that trick gave me ~52MB/s. 

for now, I can live with that.
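
to make that module option stick across reboots, it can also go into a modprobe configuration file (the filename here is my own choice; the one-off reload assumes no cifs mounts are active):

```
# /etc/modprobe.d/cifs.conf (hypothetical filename)
options cifs CIFSMaxBufSize=130048

# one-off equivalent, run with all cifs shares unmounted:
#   modprobe -r cifs && modprobe cifs CIFSMaxBufSize=130048
```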

----------

## luispa

Congratulations on your improvements. 

 *tnt wrote:*   

> 
> 
> with both FTP and NFS I achieve over 100MB/s from server to client.
> 
> 

 

A bit off-topic: how are you able to achieve this excellent performance with FTP/NFS? That's theoretically impossible over a 1Gb link, so I guess you've found a way around it. Could you please elaborate on what you did in order to achieve such performance?

Thanks

Luis

----------

## tnt

ftp worked out of the box.

this is today's real-life performance, but it's not so impressive because I'm low on space and the disk is quite fragmented on my desktop:

```
Logging in as anonymous ... Logged in!

==> SYST ... done.    ==> PWD ... done.

==> TYPE I ... done.  ==> CWD (1) /share/tnt ... done.

==> SIZE somefile.mkv ... 12124656017

==> PASV ... done.    ==> RETR somefile.mkv ... done.

Length: 12124656017 (11G) (unauthoritative)

100%[===============================================================>] 12,124,656,017 94.9M/s   in 2m 2s

2009-12-06 13:31:37 (94.6 MB/s) - `somefile.mkv' saved [12124656017]

```

and these are options from my /proc/mounts (most of them are defaults):

```
rw,nosuid,nodev,noexec,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,proto=tcp,timeo=600,retrans=5,sec=sys,mountaddr=x.x.x.x,mountvers=3,mountport=42767,mountproto=tcp,addr=x.x.x.x
```

and options in /etc/exports on the server:

```
rw,async,no_subtree_check,no_root_squash
```
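
written out as a full line (the share path and client subnet are placeholders), an /etc/exports entry with those options would look like:

```
/share  192.168.1.0/24(rw,async,no_subtree_check,no_root_squash)
```

note that 'async' lets the server acknowledge writes before they reach disk, which trades safety on a crash for throughput.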

nfs performance (file sent to /dev/null to avoid fragmented partition):

```
dd if=/home/tnt/nfs/somefile.mkv of=/dev/null bs=4096

2960121+1 records in

2960121+1 records out

12124656017 bytes (12 GB) copied, 107.786 s, 112 MB/s
```
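
as a sanity check on dd's figure (pure arithmetic, using the decimal megabytes dd reports):

```
# 12124656017 bytes over 107.786 s, in decimal MB/s
awk 'BEGIN { printf "%.0f MB/s\n", 12124656017 / 107.786 / 1000000 }'
```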

btw, I don't even use jumbo frames.

----------

## luispa

Impressive, thank you very much

Luis

----------

