Linux, gobs of RAM, RAID and performance suckage...
Bruce Dawson
jbd at codemeta.com
Thu Nov 30 14:02:10 EST 2006
Neil Joseph Schelly wrote:
> On Thursday 30 November 2006 11:59 am, Paul Lussier wrote:
>> Before yesterday we were noticing lots of NFS drop-outs on the clients
>> (300+ of them) and we correlated this pretty much to the backups
>> (amanda). The theory was that local disk I/O was beating out
>> nfs-client requests.
It's been years since I've been in the guts of NFS. Things I remember
for server tuning are:
* Make sure the user-mode processes aren't running into ulimit
problems (first sketch after this list).
* Lockd chews up a lot of kernel resources - just which ones depends
on the implementation. Check shared memory and semaphores for resource
depletion (second sketch below).
* NFS over TCP is a real system hog.
* The socket queue size (which used to be a kernel-configurable
option; I don't know about today's kernels) improves performance if
set larger (third sketch below).
* MTU should be uniform across the network. Things will work if it's
not uniform, but performance will suffer. In general, bigger is better -
up to the maximum size the servers/clients can handle (fourth sketch
below).
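For the ulimit item above, one quick check is to print the limits a
daemon actually runs under, from inside the same environment it starts
in. A minimal Python sketch (picking these three limits as the usual
suspects is my assumption; adjust for your daemons):

    # Sketch: print the resource limits most likely to bite a busy NFS
    # server. Run as the same user/environment as the daemon in question.
    import resource

    limits = {
        "open files (RLIMIT_NOFILE)": resource.RLIMIT_NOFILE,
        "processes (RLIMIT_NPROC)": resource.RLIMIT_NPROC,
        "locked memory (RLIMIT_MEMLOCK)": resource.RLIMIT_MEMLOCK,
    }

    for name, rlim in sorted(limits.items()):
        soft, hard = resource.getrlimit(rlim)
        print("%-32s soft=%s hard=%s" % (name, soft, hard))
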
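For the lockd item, Linux exposes System V shared memory and semaphore
usage under /proc/sysvipc/. A rough sketch of comparing what's in use
against the kernel's configured maximums (the /proc paths are standard
Linux; treating simple row counts as a depletion signal is my
simplification):

    # Sketch: count active SysV shm segments and semaphore sets, then
    # compare against the kernel's configured ceilings.
    def count_rows(path):
        with open(path) as f:
            return max(len(f.readlines()) - 1, 0)  # first line is a header

    shm_in_use = count_rows("/proc/sysvipc/shm")
    sem_in_use = count_rows("/proc/sysvipc/sem")

    shm_max = int(open("/proc/sys/kernel/shmmni").read())
    sem_max = int(open("/proc/sys/kernel/sem").read().split()[3])  # SEMMNI

    print("shm segments: %d of %d" % (shm_in_use, shm_max))
    print("sem sets:     %d of %d" % (sem_in_use, sem_max))
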
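For the socket queues, on a modern Linux kernel these are runtime
sysctls rather than compile-time options. A sketch of reading the
current buffer ceilings through /proc (the write-back example in the
trailing comment uses an arbitrary 1 MB value, not a recommendation):

    # Sketch: inspect the kernel's socket receive/send buffer limits,
    # which bound what NFS (and everything else) can ask for.
    def sysctl(name):
        path = "/proc/sys/" + name.replace(".", "/")
        return int(open(path).read())

    for knob in ("net.core.rmem_max", "net.core.wmem_max",
                 "net.core.rmem_default", "net.core.wmem_default"):
        print("%-22s %d bytes" % (knob, sysctl(knob)))

    # Raising a ceiling (as root) means writing the value back, e.g.:
    #   open("/proc/sys/net/core/rmem_max", "w").write("1048576")
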
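For the MTU item, each local interface's MTU is readable under
/sys/class/net/. A sketch that flags mismatches on this host only;
clients, switches, and routers still need checking separately:

    # Sketch: list the MTU of every local interface, warn on mismatch.
    import os

    mtus = {}
    for iface in os.listdir("/sys/class/net"):
        if iface == "lo":  # loopback MTU is unrelated to the wire
            continue
        with open("/sys/class/net/%s/mtu" % iface) as f:
            mtus[iface] = int(f.read())

    for iface, mtu in sorted(mtus.items()):
        print("%-8s mtu=%d" % (iface, mtu))
    if len(set(mtus.values())) > 1:
        print("warning: interfaces disagree on MTU")
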
An old problem was that connections would "linger" after they'd been
closed. I think modern kernels/NFS distributions have fixed that,
though.
The fastest way to clog NFS is with a bunch of small random-access
reads and writes. This typically shows up in directory updates (for
example: updating a netnews site over NFS), trying to run a database
across an NFS connection with a lot of simultaneous writers, and having
too many umbrellas in your empty rum glass. A sketch of that workload
follows.
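To make that concrete, here's a sketch of the pathological pattern:
lots of tiny synchronous writes at random offsets, each of which turns
into its own NFS round trip. The path and sizes are made up for
illustration; run it against a real NFS mount and compare the wall time
with one equivalent sequential write:

    # Sketch: generate small synchronous writes at random offsets in a
    # file on an NFS mount, timing how long 2000 of them take.
    import os, random, time

    path = "/mnt/nfs/scratch/testfile"  # hypothetical NFS-mounted path
    size = 16 * 1024 * 1024             # 16 MB file
    io = 512                            # tiny I/O unit

    fd = os.open(path, os.O_RDWR | os.O_CREAT | os.O_SYNC)
    os.ftruncate(fd, size)

    start = time.time()
    for _ in range(2000):
        os.lseek(fd, random.randrange(0, size - io), os.SEEK_SET)
        os.write(fd, b"x" * io)  # each write is a separate round trip
    os.close(fd)
    print("2000 random 512-byte writes: %.1fs" % (time.time() - start))
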
Otherwise, if you can wait until I'm able to dig up my old notes on NFS
load-testing, I can get some more ideas. Right now, I'm several thousand
miles from my filing cabinet.
--Bruce