Timing file read/write over NFS
bruce.labitt at autoliv.com
Thu Dec 18 10:51:44 EST 2008
I have a Cell blade that uses NFS for its OS and general storage. I have
written an application that creates and reads large files. The file I/O
is a significant portion of the total execution time. I am trying to
track down several potential sources of slowdowns.
Of course, the first, and prime suspect is the program itself. (Ahem, the
nut behind the wheel...) More questions to follow in other emails... Yes,
I know profiling is my friend...
I first ran netperf to look at raw network speed. Netperf shows the
Cell ==> NFS server link at 775 Mbit/sec, and the NFS server ==> Cell
link at 787 Mbit/sec. The link could be faster, but that is probably
close to the limit given the number of connections and the cable
length. (two Gbit switches + 65m of CAT6 cable)
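For anyone reproducing the measurement, the netperf runs would be along
these lines (hostnames are placeholders, and TCP_STREAM measures one
direction at a time, so the test is run from each side):

```shell
# On the machine receiving data, start the netperf daemon:
netserver

# On the Cell, measure Cell ==> NFS-server throughput:
netperf -H nfs-server -t TCP_STREAM

# For the reverse direction, either run netserver on the Cell and
# netperf -H cell from the server, or use the reverse-stream test
# where the netperf build supports it:
netperf -H nfs-server -t TCP_MAERTS
```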
On the NFS server, running "hdparm -tT /dev/sda" reveals a buffered disk
read of 116 MB/sec. The interface supposedly has a theoretical maximum of
133 MB/sec (UDMA133). So neither of these is too bad.
I bumped up the MTU size from 1500 to 9000 for the link. This resulted in
an increase of file write speed in my program by ~25%. Jumbo frames are
good for large file transfers :)
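For reference, the jumbo-frame change looks something like this (the
interface name eth0 is an assumption; note that both end hosts and
every switch in the path must support 9000-byte frames, or large
packets will silently be dropped):

```shell
# Raise the MTU on the NFS client and server (substitute your
# interface name for eth0):
ip link set dev eth0 mtu 9000

# Verify the new MTU took effect:
ip link show dev eth0 | grep -o 'mtu [0-9]*'
```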
Is there a way to effectively run hdparm on the Cell against the NFS
mount? I don't know what to use for the device name,
ie. # hdparm [flags] [device].
How do I find out the device name? I tried IP://srv/nfsroot/... but
that, of course, was not correct. FWIW, the Cell runs YDL6 kernel
2.6.22-1.ydl1 (Red Hat like) and the NFS-server runs Ubuntu 8.10 kernel
2.6.27-9-generic.
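Since hdparm needs a real block device, it cannot time an NFS mount
directly; a common substitute is timing dd against a file on the mount.
A minimal sketch (the mount point is an assumption -- TESTDIR should
point at the NFS mount, and only defaults to /tmp here so the commands
run anywhere):

```shell
# Time sequential write and read through the filesystem, as a stand-in
# for hdparm on an NFS mount. Point TESTDIR at the NFS mount
# (e.g. /mnt/nfs); /tmp is only a placeholder default.
TESTDIR=${TESTDIR:-/tmp}

# Write test: conv=fsync forces the data to the server before dd
# reports a transfer rate, so the number isn't just cache speed.
dd if=/dev/zero of="$TESTDIR/ddtest" bs=1M count=64 conv=fsync

# Read test. To defeat the client-side page cache, unmount and remount
# the share first, or (as root) drop the caches:
#   echo 3 > /proc/sys/vm/drop_caches
dd if="$TESTDIR/ddtest" of=/dev/null bs=1M

rm -f "$TESTDIR/ddtest"
```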
A second set of tests I performed was to time the link over NFS using
the instructions at http://nfs.sourceforge.net/nfs-howto/ar01s05.html.
I fiddled about with block size and found some block-size groups that
were better than others. What I did find was a 2:1 asymmetry in read
vs. write speeds: roughly, I could get 100 MB/sec read over the
network, but only 45-60 MB/sec write. Anyone have an idea why this
would be? I tried block sizes from 4K to 128K. The smallest block size
gave the slowest write speed; a 64K block gave the best performance on
a 4.3 GB file. I actually want to create files bigger than this, but I
don't want the tests to take forever to run.
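The block sizes from the HOWTO tests correspond to the rsize/wsize NFS
mount options, so the 64K value that tested best can be pinned at mount
time. A sketch, with the server name and paths as placeholders:

```shell
# Mount the export with 64K read/write block sizes (the values that
# tested best above). Server name and paths are placeholders.
mount -t nfs -o rsize=65536,wsize=65536 server:/export /mnt/nfs

# Or persistently, via a line in /etc/fstab:
# server:/export  /mnt/nfs  nfs  rsize=65536,wsize=65536  0  0

# Confirm the options the kernel actually negotiated (the server may
# clamp rsize/wsize to its own maximum):
grep nfs /proc/mounts
```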
Thanks for any and all insights and tips on this.
-Bruce
More information about the gnhlug-discuss mailing list