Tuning NFS (and Gbit Enet ?)
Michael ODonnell
michael.odonnell at comcast.net
Tue Aug 22 15:04:00 EDT 2006
We're upgrading an installation by bringing an existing config
forward from some older, slower Dell boxes to some spiffy new
Dell 690 boxes with integral Broadcom (BCM5752) NICs and a
Dell PowerConnect 2616 (an unmanaged switch) connecting them,
all supposedly capable of Gbit rates. The (smp) kernel is a
2.4.21-47.EL from RHAT.
Using the old config files (which worked OK on the old HW)
on the new HW, we're seeing some depressing NFS throughput
numbers. For example, writes of large files from client to
server are down around 450 KB/s when the /etc/{fstab,exports}
entries look, respectively, like this on client and server:
serverBox:/filesysOnServer /mountPointOnClient nfs bg,soft,intr,nodev,nfsvers=3,rsize=32768,wsize=32768,tcp,sync 0 0
/filesysOnServer *(rw,no_subtree_check,insecure,sync)
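(For reference, a rough way to reproduce that kind of sequential-write
number from the client is a timed copy like the one below; the path is a
placeholder, and dd only gives an approximate figure for NFS because of
client-side caching.)
# time a large sequential write over the NFS mount, including the final flush
time sh -c 'dd if=/dev/zero of=/mountPointOnClient/ddtest bs=1024k count=512 && sync'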
Throughput immediately improved to about 10 MB/s when I experimentally
changed them to be thus:
serverBox:/filesysOnServer /mountPointOnClient nfs defaults 0 0
/filesysOnServer *(rw,async)
...which sucks less but, of course, we expect much better from
Gbit Enet - three times that, or so.
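A possible middle ground (untested, and an async export trades crash
safety for speed) would be to keep the explicit rsize/wsize and tcp
options from the original entries and drop only the sync flags,
something like:
serverBox:/filesysOnServer /mountPointOnClient nfs bg,soft,intr,nodev,nfsvers=3,rsize=32768,wsize=32768,tcp 0 0
/filesysOnServer *(rw,no_subtree_check,insecure,async)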
Since it's been a while since I've fiddled with this stuff
I'll publicly beg for advice (or dope-slaps) here in hopes
that everyone benefits.
One feature we'll unfortunately not be able to make use of is
Jumbo Frames since (we're annoyed to discover) neither these
NICs nor this switch supports them... :-(
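(For the record, on NICs and switches that do handle jumbo frames,
they're typically enabled just by raising the interface MTU; the
interface name here is an assumption:)
ifconfig eth0 mtu 9000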
It's easy to show that sustained xfer rates to the server's
local disk are around 100 MB/s from RAM and 60 MB/s from another
local disk, so the local disks aren't the problem.
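(Ballpark figures like those can be had with simple timed copies; the
path and device names below are placeholders.)
# sequential write to the server's local disk, fed from RAM
time dd if=/dev/zero of=/filesysOnServer/ddtest bs=1024k count=1024
# rough disk-to-disk rate, reading raw from the other local drive
time dd if=/dev/hdb of=/filesysOnServer/ddtest bs=1024k count=1024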
IIRC there are also tweaks to the networking infrastructure
itself (independent of NFS) that can come into play, maybe
TCP buffer size?
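For a 2.4 kernel the usual knobs are the core and TCP socket buffer
limits; something like this (the values are illustrative starting
points, not tuned recommendations):
# raise the socket buffer ceilings for Gbit links
sysctl -w net.core.rmem_max=262144
sysctl -w net.core.wmem_max=262144
# min/default/max buffer sizes used by TCP
sysctl -w net.ipv4.tcp_rmem="4096 87380 262144"
sysctl -w net.ipv4.tcp_wmem="4096 65536 262144"
...with the same settings going into /etc/sysctl.conf if they help.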