# Gigabit network question

## jpc82

I was thinking of getting gigabit for my home network; I was wondering if anyone could answer some questions.

I was thinking of getting a CNET or Trendware card, but I can't seem to find which chip they use, so I don't know if they're compatible with Linux.

Stores:

http://canadacomputers.com/networking2.html#net

http://www.tigerdirect.ca/applications/SearchTools/item-details.asp?EdpNo=771339&Sku=T156-2138&CatId=1175

Has anyone ever used Trendware products?  I haven't heard anything about them before and was wondering what their quality is like.  I was also thinking of getting a Trendware 9-port switch.

http://www.tigerdirect.ca/applications/SearchTools/item-details.asp?EdpNo=634751&Sku=T156-2068&CatId=596

----------

## NeddySeagoon

jpc82,

32-bit 33MHz PCI maxes out well before 1Gb/s (the bus tops out around 133MByte/s, shared across all devices), so it's a bit of a waste unless you have some spare 64-bit 66MHz PCI slots.

----------

## jpc82

What kind of speed should I expect?

Right now I have regular 10/100 NICs and a 10/100 switch, and I get about a 3.5-4.5 transfer rate using Samba.  The problem with this speed is that I do a lot of video editing and DVD authoring, and transferring some of these HUGE files can take over half an hour.  With the price drops in gigabit now it seems quite affordable (one of the links has NICs for $20).

----------

## NeddySeagoon

jpc82,

If that speed is 3.5-4.5Mbytes/sec then your 100Mb network can do a lot better. Don't expect to see any speed improvement from using a Gb LAN; the network is not the bottleneck.

You should be able to get over 10Mbytes/sec on a 100Mb LAN.

Find out where your current bottleneck is and fix that first.

----------

## jpc82

I don't really see what would be the bottleneck.

My file server is 

P2 400 

192Mb ram

/ is on a 14GB 5400RPM drive attached to Mobo

files are on two separate 80GB drives attached to a Promise Ultra66

Realtek NIC

2.4 Kernel

Hdparm for drives

```
/dev/hda:
 Timing buffer-cache reads:   500 MB in  2.00 seconds = 250.00 MB/sec
 Timing buffered disk reads:   38 MB in  3.00 seconds =  12.67 MB/sec

/dev/hdg:
 Timing buffer-cache reads:   488 MB in  2.00 seconds = 244.00 MB/sec
 Timing buffered disk reads:   66 MB in  3.02 seconds =  21.85 MB/sec

/dev/hde:
 Timing buffer-cache reads:   488 MB in  2.00 seconds = 244.00 MB/sec
 Timing buffered disk reads:   82 MB in  3.04 seconds =  26.97 MB/sec
```

Client

Athlon 1133Mhz

512 Ram

/ on 7200RPM attached to Mobo

Realtek NIC

2.6 kernel

Hdparm for drive 

```
/dev/hda:
 Timing buffer-cache reads:   804 MB in  2.01 seconds = 400.26 MB/sec
 Timing buffered disk reads:   96 MB in  3.02 seconds =  31.76 MB/sec
```

And they are connected by a D-Link 8-port switch.

NOTE:  the way I got my speed was to transfer a file and then just divide the size by the time, getting the time from the time command.
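For reference, that measurement can be scripted. This is a rough sketch using only coreutils; the paths are placeholders, and in practice the source or destination would sit on the samba mount:

```shell
# Time a copy and divide size by elapsed seconds to get MB/s.
SRC=/tmp/rate-src.bin
DST=/tmp/rate-dst.bin
dd if=/dev/zero of="$SRC" bs=1M count=100 2>/dev/null   # 100MB test file
START=$(date +%s)
cp "$SRC" "$DST"
END=$(date +%s)
SECS=$(( END - START ))
if [ "$SECS" -eq 0 ]; then SECS=1; fi   # avoid divide-by-zero on fast copies
echo "$(( 100 / SECS )) MB/s"
rm -f "$SRC" "$DST"
```

One-second resolution is coarse, but for multi-minute transfers like the ones above it is accurate enough.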

----------

## NeddySeagoon

jpc82,

Timing the way you did it is fine.

You need to do some self-to-self testing on both the client and the server. On both systems, do the following.

Transfer a file self to self via 127.0.0.1 (This does disk reads and writes but does not use much of the network stack)

Transfer a file from /dev/zero via 127.0.0.1 to try disk writes only.

Transfer a file from disk to /dev/null via 127.0.0.1 to try reads only. The speeds you get here are as good as it gets for each PC.

Repeat the above using the PC's own ethernet address. It will be slower now because the data has to reach the network card. This is as fast as the PC can do useful file transfers.
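One way to run those loopback tests is with netcat. A sketch, not exact commands: option syntax varies between netcat versions (-l -p PORT vs -l PORT, and -q may be unsupported), and the sizes here are just examples.

```shell
# Disk-write-only test: stream zeros over 127.0.0.1 into a file on disk.
nc -l -p 5001 > /tmp/loopback-test.bin &
sleep 1
dd if=/dev/zero bs=1M count=50 | nc -q 1 127.0.0.1 5001   # dd prints the rate
wait

# Disk-read-only test: stream that file over 127.0.0.1 into /dev/null.
nc -l -p 5002 > /dev/null &
sleep 1
dd if=/tmp/loopback-test.bin bs=1M | nc -q 1 127.0.0.1 5002
wait
rm -f /tmp/loopback-test.bin
```

Substituting the machine's real ethernet address for 127.0.0.1 gives the second round of tests.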

Since the data must pass over the PCI bus twice (once on its way from the HDD to the motherboard, where it is processed, and again from the motherboard to the NIC), each byte needs at least two bytes of PCI bandwidth.

Check that the PCI bus (and the cards) are using bus mastering.
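One quick way to check that, assuming lspci (from pciutils) is installed:

```shell
# Count PCI devices whose control register reports bus mastering enabled
# ("BusMaster+" in lspci -vv output); prints 0 if none are found.
lspci -vv 2>/dev/null | grep -c 'BusMaster+' || true
```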

----------

## jpc82

Thanks for all the advice.  I will give it a try later today/tomorrow.

So you think that if I did get gigabit right now it would be a waste until I get this bottleneck taken care of?

If I do decide to get gigabit after I fix this bottleneck, what speeds should I expect?

----------

## NeddySeagoon

jpc82,

Yes. Whatever is the problem at the moment will still be the problem with a Gigabit network. If you can fix it to move the bottleneck to the network, then is the time to play with Gigabit.

You should be able to double your file transfer data rate before you hit the ceiling on a 100Mbit link.

----------

## jpc82

OK, I have run some tests and this is what I have so far.

Doing a cp from the samba share to local drive I got about 4.6MB/s

Copying from /dev/zero to a temp file on the client PC I got about 15MB/s.

Copying a temp file to /dev/null on the samba server I got about 30MB/s.

And the output from netperf from the client to the samba server was about 11MB/s

I don't see where the bottleneck is.  Netperf makes it look like the network is working fine, and all the read/write tests seem OK.  Could it be my Samba config?  If so, here it is, in case someone can see anything wrong with it.

Note this was created a long time ago on samba2, and is now being used for samba3

```
# Samba config file created using SWAT
# from localhost.localdomain (127.0.0.1)
# Date: 2002/08/08 14:04:50

# Global parameters
[global]
        netbios name = AthlonTest
        server string = File Server
        security = share
        encrypt passwords = Yes
        obey pam restrictions = Yes
        pam password change = Yes
        passwd program = /usr/bin/passwd %u
        passwd chat = *New*password* %n\n *Retype*new*password* %n\n *passwd:*all*authentication*tokens*updated*successfully*
        unix password sync = Yes
        log file = /var/log/samba/%m.log
        max log size = 0
        socket options = TCP_NODELAY
        read raw = Yes
        write raw = Yes
        dns proxy = No
        guest account = jay
        printing = lprng
        keepalive = 600

[80]
        comment = 80Gig
        path = /tmp
        read only = No
        writable = yes
        guest ok = Yes
```

----------

## jpc82

I have been playing around with the smb.conf file and no matter what I add/remove the speed doesn't change very much.

Also, I emerged iperf and was getting 80-90Mbits/s from it.
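Worth noting when comparing numbers: iperf reports megabits, while the copy speeds in this thread are megabytes. Dividing by 8 shows the iperf figure lines up with the ~10-11MB/s ceiling mentioned earlier:

```shell
# iperf's 80-90Mbit/s converted to the MByte/s figures used elsewhere
# in the thread (8 bits per byte, ignoring protocol overhead).
echo "$(( 80 / 8 )) to $(( 90 / 8 )) MB/s"  # prints "10 to 11 MB/s"
```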

----------

## jpc82

OK, it has to be my network/samba.

I created a 235MB file from /dev/urandom, put it in /dev/shm, and shared it with samba.

I then copied it from the share (ram disk) to /dev/null, and I got only 5.3MB/s

So with this test no hard drives are being used right?  And since I am only getting about 5.3MB/s it then has to be Samba or my network right?
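The RAM-disk setup can be reproduced like this (a smaller 50MB example; the file then gets shared through samba as before):

```shell
# Create a random test file on tmpfs (/dev/shm) so that serving it later
# involves no disk reads at all; print its size as a sanity check.
dd if=/dev/urandom of=/dev/shm/rand-test.bin bs=1M count=50 2>/dev/null
stat -c '%s' /dev/shm/rand-test.bin   # 52428800 bytes (50MB)
rm -f /dev/shm/rand-test.bin          # remove when done testing
```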

----------

## jpc82

OK, another test that I think proves it's samba.

I did an scp between my two fastest PCs and was getting 8-9MB/s; however, when I copy with samba between the same two computers, all I get is about 4.6MB/s again.

----------

## NeddySeagoon

jpc82,

Try booting with Knoppix (or the Gentoo liveCD) to take the operating system out of the list of variables. Do the tests on all the PCs under identical conditions, or at least, as close as you can get.

I can't tie  *Quote:*   

> cp from the samba share to local drive I got about 4.6MB/s

 to a particular set of hardware and software components. From your later posts you seem to have got the idea. Take things away until it suddenly gets much better, then fix the last thing you took away.

----------

## burzmali

If all the PCs in your network are Linux, then I would say ditch samba for nfs.  You can use amd (the auto-mounter daemon) with nfs to get automatic access to nfs shares as needed.  I can get more than 40Mbytes/s transfer between my two PCs using gigabit with a cat-6 crossover cable and nfs shares.  Both my boxes are fairly modern 1GHz+ machines with fast drives, ram, etc.  Not close to the theoretical 128Mbyte/s limit of gigabit, but more than 3 times faster than the ~12Mbytes/s maximum of a 100Mbit network.  Have fun and good luck!

Also, try transferring large files back and forth using ftp, and alternate which box is the server.  I get different max speeds depending on which direction the data is going, and ftp is very efficient for file transfer (low overhead).
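If you do try the nfs route, the server side is only a couple of lines. A minimal sketch, assuming an example export path and subnet (adjust both to your LAN):

```
# /etc/exports on the server (example path and subnet):
/data  192.168.0.0/24(rw,sync,no_subtree_check)

# then reload the export list and mount from a client:
#   exportfs -ra
#   mount -t nfs server:/data /mnt/data
```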

----------

## Malakin

If you stick with Samba replace your socket options line with this one:

```
socket options = SO_RCVBUF=8092 SO_SNDBUF=8092 TCP_NODELAY IPTOS_LOWDELAY
```

 You could try higher values for the buffers but I don't think it will help any more.

When transferring files over gigabit ethernet your bottleneck usually ends up being the write speed of the target drive. Drives write roughly half as fast as they read, so gigabit ethernet would do very little for you even if you did manage to tweak the hell out of everything.

I'm using gigabit ethernet myself, and after quite a bit of tweaking was able to get speeds up to 20MB/s, which is pretty close to the write speed of my target drive. So I'm getting about double what you can get with fast ethernet; certainly nothing spectacular.

----------

## jpc82

I'm starting to think that the problem is the linux samba clients.

When I ran a test on my Windows box reading from my samba share I was getting over 8MB/s.  Is there some sort of config for the client part of samba?

When I get a chance I am going to boot that Windows box into knoppix and see if it still has the same speed then.

----------

## jpc82

I'm pretty sure the problem has to be my Linux samba clients.  When I booted the PC that got 8MB/s in Windows into Linux, I only got 4MB/s.

Can anyone explain this to me, and how to fix it?

----------

## Malakin

 *Quote:*   

> Can anyone explain this to me, and how to fix it?

 Did you try the socket options I mentioned? You have to restart samba of course for them to come into effect. The buffer values I gave you have been the default for Samba for quite some time and you're not using them so that could certainly be causing problems.

Samba is much faster than Windows.

----------

## jpc82

Yes, I tried those values, but there was no noticeable change.

It seems that for me the samba clients are only half the speed of the Windows clients, but I don't know why.  I have tried it on 3 different Linux boxes and all of them get the same speed.

----------

## jpc82

Update:

I have now tested ftp transfer between the two boxes and I am getting about 10MB/s with it.

So something is wrong with samba for sure, and I'm pretty sure its all the linux clients.

----------

## NeddySeagoon

jpc82,

You have two boxes, ftp software and a network in the equation here. 10Mbytes/sec is pretty good for 100Mbit ethernet (there is some overhead too).

To prove the network is the problem you need to do some self to self ftp from /dev/zero to disk and disk to /dev/null to take the network out of the equation. That will establish upper limits for the rest of the hardware.

Do you need samba or can you use ftp all round?

----------

## Malakin

 *Quote:*   

> So something is wrong with samba for sure, and I'm pretty sure its all the linux clients.

 

Have you monitored your CPU utilization on the server while transferring files to it? ftp will load it less than samba does.

I'm getting 20MB/s with Samba-3.0.2a-r2. I looked over my config file again and don't see anything performance-related other than the socket options. I'm using jumbo frames (9000 byte) but I was still getting at least 14MB/s without them. Someone in another thread said they were getting 40MB/s with samba, but they must have been running raid to get that kind of drive write speed.
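A crude way to watch CPU load during a transfer, without installing anything, is to sample the counters in /proc/stat a second apart (this only uses the user/nice/system/idle fields, so it's a rough figure):

```shell
# Read the aggregate CPU tick counters from /proc/stat twice, one second
# apart, and print the busy percentage over that interval.
read -r _ u1 n1 s1 i1 _ < /proc/stat
sleep 1
read -r _ u2 n2 s2 i2 _ < /proc/stat
total=$(( (u2 + n2 + s2 + i2) - (u1 + n1 + s1 + i1) ))
idle=$(( i2 - i1 ))
echo "cpu busy: $(( 100 * (total - idle) / total ))%"
```

Run it while the copy is in flight; a figure near 100% on the server would point at the CPU.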

----------

## jpc82

Yes, I have watched my CPU, and with samba on my P2 400 it doesn't get over 50%.

----------

## jpc82

OK, it has to be the Linux clients, because on the same PC I booted into Gentoo and Knoppix, both getting 4MB/s, and when I booted into Win2k I was able to get 8-9MB/s.

----------

## Malakin

With your tests where you get 10MB/s with ftp and then 4MB/s with samba, you are reading the same file and writing to the same partition, I guess? Weird if so...

You could try nfs if using ftp isn't convenient enough for you.

----------

## jpc82

Yes, every other variable was identical.

I will play around with nfs, but it's a bit of a waste to have two different shares for the exact same thing.

----------

## Copperhead

Have you considered using NFS and Microsoft Services for Unix on your Windows boxes?  At the office, we considered putting Samba on the Solaris server for our new Windows clients, but instead tested Services for Unix.  It was easier to get Windows to be an NFS client to an existing NFS server than it was to get Samba up and running (not dissing samba... it's just we already had NFS).

Think about it... SFU is free (as in beer).

----------

