Timing file read/write over NFS
Tom Buskey
tom at buskey.name
Thu Dec 18 11:29:02 EST 2008
On Thu, Dec 18, 2008 at 10:51 AM, <bruce.labitt at autoliv.com> wrote:
> I have a Cell blade that uses NFS for its OS and general storage. I have
> written an application that creates and reads large files. The file I/O
> is a significant portion of the total execution time. I am trying to
> track down several potential sources of slowdowns.
>
> Of course, the first and prime suspect is the program itself. (Ahem, the
> nut behind the wheel...) More questions to follow in other emails... Yes,
> I know profiling is my friend...
>
> I ran netperf first to look at the network speed. Netperf shows the
> Cell ==> NFS server link at 775 Mbit/sec, whereas the NFS server ==> Cell
> link runs at 787 Mbit/sec. The link could be faster, but it is probably
> close to the limit given the number of connections and the cable length
> (two Gbit switches + 65m of CAT6 cable).
>
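A netperf run along these lines should reproduce that kind of measurement
(nfs-server below is just a placeholder for your server's hostname):

  # bulk TCP throughput from this host to the server, 60-second run
  netperf -H nfs-server -t TCP_STREAM -l 60
  # the same link measured in the other direction (server transmits)
  netperf -H nfs-server -t TCP_MAERTS -l 60
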
> On the NFS server, running "hdparm -tT /dev/sda" reveals a buffered disk
> read of 116 MB/sec. The interface supposedly has a theoretical maximum of
> 133 MB/sec (UDMA133). So neither of these is too bad.
>
> I bumped the MTU up from 1500 to 9000 for the link. That increased my
> program's file write speed by ~25%. Jumbo frames are good for large file
> transfers :)
>
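For anyone trying the same thing, the MTU can be raised on the fly with
something like the following (eth0 is a placeholder; the switches and both
endpoints all have to support jumbo frames, and the setting won't survive a
reboot unless it goes into the distro's network config):

  # raise the MTU on the interface facing the NFS server
  ip link set dev eth0 mtu 9000
  # equivalent with the older tool
  ifconfig eth0 mtu 9000
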
> Is there a way to effectively run hdparm on the Cell (connecting to NFS)?
> I don't know what to call the device,
> i.e. # hdparm [flags] [device].
>
> How do I find out the device name? I tried IP://srv/nfsroot/... but
> that, of course, was not correct. FWIW, the Cell runs YDL6 kernel
> 2.6.22-1.ydl1 (Red Hat like) and the NFS-server runs Ubuntu 8.10 kernel
> 2.6.27-9-generic.
>
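To see how the client identifies the mount and what transfer sizes it
negotiated, these should work on the Cell (nfsstat -m assumes the nfs-utils
tools are installed):

  # list NFS mounts with their server:/export device strings
  mount -t nfs
  # show each NFS mount's options, including rsize and wsize
  nfsstat -m
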
> A second set of tests that I performed was to time the link over NFS using
> the instructions at http://nfs.sourceforge.net/nfs-howto/ar01s05.html. I
> fiddled about with block size and found that some block sizes were
> better than others. What I did find was a 2:1 asymmetry in read vs. write
> speed. Roughly, I could get 100 MB/sec reading over the network, but
> only 45-60 MB/sec writing over the network. Anyone have an idea why that
> would be? I tried block sizes from 4K to 128K. The smallest block sizes
> gave the slowest write speeds; a 64K block gave the best performance on
> a 4.3 GB file. I actually want to create files bigger than that, but
> I don't want the tests to take forever to run.
>
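For a quick version of the HOWTO's timing test, dd against the mount works;
a sketch, assuming the export is mounted at /mnt/nfs and using a file larger
than the client's RAM so caching doesn't flatter the read numbers:

  # time a 4 GB sequential write in 64K blocks
  time dd if=/dev/zero of=/mnt/nfs/testfile bs=64k count=65536
  # drop the client's cached copy, then time the read back
  umount /mnt/nfs && mount /mnt/nfs
  time dd if=/mnt/nfs/testfile of=/dev/null bs=64k
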
> Thanks for any and all insights and tips on this.
> -Bruce
It seems like you're looking in all the right places.
I'd suggest running iozone (http://www.iozone.org). That could help you map
the sweet spot for block size, though it will take a while to run.
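Something like this would sweep record sizes against a file on the NFS
mount (the path and the 4 GB cap are just examples):

  # auto mode, write/rewrite (-i 0) and read/reread (-i 1) only,
  # file sizes capped at 4 GB, results saved as a spreadsheet report
  iozone -a -g 4G -i 0 -i 1 -f /mnt/nfs/iozone.tmp -R -b iozone.xls
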
I'd expect hdparm to be closely tied to PCs with IDE (or SATA?) drives. I
know it won't work with SCSI drives on PCs. It operates on a local block
device, so there's nothing for it to open on an NFS mount.