# LSI MegaRAID SATA 150-4 and Samba big files

## fjenou

Hi All,

I am running a Samba server (3.0.14a-r2) on kernel 2.6.12-gentoo-r9, with an LSI Logic MegaRAID SATA 150-4 controller and a 3-drive RAID 5 array. The megaraid2 driver is built into the kernel, and I am booting Linux from an IDE drive. The hardware is an HP ML350 with a single Xeon processor and 1 GB of RAM.

Normal system operation is OK, except when Windoze stations start writing big files to a Samba public share mounted on the RAID 5 array. The mileage varies, but somewhere after 700 MB or 800 MB I get an "oplock_break: client failure in break - shutting down this smbd" error message (the files are plain binary files, nothing to do with database files). Using smbclient from a Linux station, I get "Error writing file: Call timed out: server did not respond after 20000 milliseconds". There is nothing unusual in the system log (even with a higher Samba log level).

However, when I write big files to another Samba share mounted on the IDE boot drive, there's absolutely NO problem. Also, I can FTP the large files to the RAID 5 share, I can ssh-copy them, etc. It's quite clear that something's wrong between Samba and the megaraid driver.

Has anyone seen this before?
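For context, that message comes from Samba's opportunistic locking (oplock) code. As a diagnostic, oplocks can be switched off per share in smb.conf; a minimal sketch (the share name and path here are examples, not from my setup):

```ini
; Hypothetical diagnostic: disable oplocks on the affected share only
[public]
   path = /mnt/raid5/public
   oplocks = no
   level2 oplocks = no
```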

----------

## stealthy

By your description, the error seems to be in your RAID, be it hardware- or software-related. You yourself ruled out Samba and the network layer as variables by stating that you can successfully do the transfers on a share mounted on a regular IDE drive.

Have you tried copying 1 GB+ files locally onto the RAID 5?

----------

## fjenou

No, there are no problems with big file creation in Linux on the RAID 5 file system. In fact, Samba share backups are tar files generated directly on the RAID 5 file system, with file sizes of some 50 GB.

The server is in production. There are some 30 network users, most of the files are small ones (M$ Office files), and occasionally a user generates these bigger files (which I can upload with FTP).

I'll try a network file copy with NFS now, to see whether I can reproduce the same problem.

----------

## fjenou

Found something interesting.

Actually, when the errors appear during big file writes to the Samba share, the system log registers something like this:

```
Nov  2 22:33:16 samba smbd[7967]: [2005/11/02 22:33:16, 0] lib/util_sock.c:write_socket_data(430)
Nov  2 22:33:16 samba smbd[7967]:   write_socket_data: write failure. Error = Broken pipe
Nov  2 22:33:16 samba smbd[7967]: [2005/11/02 22:33:16, 0] lib/util_sock.c:write_socket(455)
Nov  2 22:33:16 samba smbd[7967]:   write_socket: Error writing 51 bytes to socket 24: ERRNO = Broken pipe
Nov  2 22:33:16 samba smbd[7967]: [2005/11/02 22:33:16, 0] lib/util_sock.c:send_smb(647)
Nov  2 22:33:16 samba smbd[7967]:   Error writing 51 bytes to client. -1. (Broken pipe)
```

Googling this error mostly turned up people with networking problems. My networking is in fact OK, but something struck me: this server runs Fast Ethernet full duplex on a switched network, so I get about 80 Mbit/s of throughput when writing files to Samba shares. Maybe Samba's network timing is too stringent for the RAID 5 array. FTP works fine because its write speed is only some 12 or 13 Mbit/s; SSH is at about 10 Mbit/s.
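The arithmetic behind that hypothesis, using the figures above (a quick sketch; rates are the approximate ones observed, converted to decimal megabytes per second):

```python
def mbit_to_mbyte_per_s(mbit):
    """Convert a link rate in Mbit/s to MB/s (8 bits per byte)."""
    return mbit / 8.0

# Approximate rates from the observations above.
rates = [("SMB, full duplex", 80), ("SMB, capped", 40), ("FTP", 13), ("SSH", 10)]
for label, mbit in rates:
    print(f"{label}: {mbit} Mbit/s ~ {mbit_to_mbyte_per_s(mbit):.1f} MB/s")
```

So SMB pushes roughly 10 MB/s at the array; if a RAID 5 flush ever stalls writes for longer than the client's 20-second timeout, the connection drops, which would match the broken-pipe messages.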

I installed cbqinit on a Linux workstation and limited the speed to 40 Mbit/s when communicating with the Samba server. Now I can copy the big files correctly with smbclient.
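cbqinit is a wrapper around the kernel's CBQ qdisc; the same egress cap can be sketched directly with tc using a token bucket filter (the interface name, burst, and latency values here are assumptions; requires root):

```shell
# Hypothetical equivalent of the cbqinit cap on the workstation's egress.
# eth0 is an assumed interface name; adjust rate to taste.
tc qdisc add dev eth0 root tbf rate 40mbit burst 32kbit latency 400ms

# To remove the cap again:
# tc qdisc del dev eth0 root
```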

I'll try to get more performance out of the RAID 5 array. Maybe I'll try another NIC (the HP ML350 has an Intel Gigabit NIC).

----------

