Classic running out of memory... huh? what? <long>
bruce.labitt at autoliv.com
Fri Jun 12 11:03:41 EDT 2009
gnhlug-discuss-bounces at mail.gnhlug.org wrote on 06/12/2009 07:35:23 AM:
> On Thu, Jun 11, 2009 at 7:30 PM, Bruce
> Labitt<bruce.labitt at myfairpoint.net> wrote:
> > Using netperf I think I got 770e6 bps write rates, but for considerably
> > smaller file sizes. That is about twice the rate - but for a file at least
> > 10 times smaller. Something about these big file writes that
> > aggravates OS's :-)
>
> That's actually kind of interesting. Typically, writing many small
> files is slower than writing one big file -- the overhead of all those
> open()/close() operations is more than just doing a bunch of write()
> operations. Unless the netperf utility is writing just *one* file and
> it all fits in cache. Of course, I haven't really dabbled in NFS
> benchmarking, and what little I did was years ago.
>
I learned the hard way that one needs to do large binary writes if one
wants anywhere near reasonable file write rates. I went from about 3 MB/sec
(text based, line by line) to ~45 MB/sec that way. 45 MB/sec is not that
good, but the upper limit with NFS on gigabit I would guess is about
90 MB/sec. Is it worth a ton of work to get that factor of two? Maybe. If
I could get the simulation time down to 4 hours it would be great. Last
night's run was 14 hours. Perhaps I could get more mileage out of better
coding practice... The joys of high perf computing...
The files I am writing and reading are 12 GB each, more or less, with each
file being a fraction of the whole problem. Later in the program, I read
each file into memory, destroy the file, and create a mega (actually
~100 GB) file that is the concatenation of all the files. Oh, yeah, and I
use OpenMP to use all the processing cores that I can. (I effectively
have threads destroying the old files while I write to the concatenated
file.)
OpenMP is kind of fun - lots to learn there. The first half of my program
is using all 4 cores about 50% of the time. The second half is not
parallelized yet. But that is another topic...
> > Although it would be fun to benchmark alternative storage schemes, that
> > wasn't the original problem...
>
> Right, right, sorry, I wasn't suggesting you go do this right now.
> :) It was more along the lines of, if it gets to that point, you
> should compare USB vs NFS and use whatever is faster for your
> equipment, and not worry about what random people on the Internet say
> they can do.
>
> -- Ben
Actually, back to another poster's question - access is kind of limited.
The rack is locked and in a locked cage. I have full access, but I have
to go pick up the key.