Classic running out of memory... huh? what? <long>

Bruce Labitt bruce.labitt at myfairpoint.net
Thu Jun 11 19:30:54 EDT 2009


> But all this doesn't help the OP at all.  The OP should benchmark
> the performance of what he's got.  It doesn't really matter if X could
> be faster if his particular X is slow.
>
> -- Ben
>   
OP here...  That 45 MB/sec file write rate was real-world, using fwrite 
in C over a network.  The average write rate was 45 MB/sec, with peaks 
nearly hitting the full Gbit Ethernet bandwidth.  This was for an 
11.9 GB file.  (I have 10 files to write.)  The file write took about 
4.5 minutes.  Standard MTU size.  I have not messed with jumbo packets 
in a while; I had jumbo frames running briefly, but then I broke 
something.  Probably time to implement jumbo frames again now that my 
program seems to be running.
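
For what it's worth, the write loop itself is nothing exotic.  A 
stripped-down sketch of the sort of timing harness I mean is below; 
the mount path, chunk size, and total size are placeholders, not my 
actual sim parameters:

    #define _POSIX_C_SOURCE 200112L  /* for clock_gettime() */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    int main(void)
    {
        const size_t chunk = (size_t)4 << 20;        /* 4 MB per fwrite */
        const size_t total = (size_t)12 << 30;       /* ~12 GB overall  */
        char *buf = malloc(chunk);
        if (!buf) { perror("malloc"); return 1; }
        memset(buf, 0, chunk);

        FILE *f = fopen("/mnt/nfs/out.bin", "wb");   /* hypothetical NFS mount */
        if (!f) { perror("fopen"); return 1; }

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t done = 0; done < total; done += chunk)
            if (fwrite(buf, 1, chunk, f) != chunk) { perror("fwrite"); return 1; }
        fclose(f);                      /* flush before the clock stops */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%.1f MB/sec\n", total / secs / 1e6);
        return 0;
    }

Note that fclose() sits inside the timed region, so the final stdio 
buffer flush counts against the measured rate.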

NFS server: Ubuntu 8.10 on a cheap workstation with 8 GB RAM, a fast 
300 GB SATA drive, and a slower 1 TB SATA drive.  NFS client: a 
diskless QS22.

Using netperf I think I got write rates around 770e6 bps (roughly 
96 MB/sec), but for considerably smaller file sizes.  That is about 
twice the rate, but for a file at least 10 times smaller.  Something 
about these big file writes aggravates OSes :-)
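
(I don't recall the exact options I used, but it was probably a plain 
TCP stream test, which is netperf's default, along these lines; the 
host name and duration here are illustrative:

    netperf -H nfs-server -t TCP_STREAM -l 60

where -H names the remote netserver, -t picks the test, and -l sets 
the test duration in seconds.)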

Although it would be fun to benchmark alternative storage schemes, that 
wasn't the original problem...  I was running out of memory, getting the 
OOM error, and having the process killed.  I have seemingly dodged the 
problem by further constraining the buffer allocation.  It is now a 
mere 12 GB.  When I left work the sim had been running for 6.5 hours 
and it appeared I was through the "danger" region.  We shall know for 
sure in the morning.
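
In case "constraining the buffer allocation" is unclear: it just means 
clamping the request to a hard ceiling instead of letting it scale with 
the problem size.  A minimal sketch, with made-up names and sizes (only 
the 12 GB cap matches what I said above):

    #include <stdio.h>
    #include <stdlib.h>

    #define BUF_CAP ((size_t)12 << 30)   /* hard ceiling, ~12 GB */

    int main(void)
    {
        size_t want = (size_t)16 << 30;              /* whatever the sim asks for */
        size_t n = want > BUF_CAP ? BUF_CAP : want;  /* clamp to the cap */
        char *buf = malloc(n);
        if (!buf) { perror("malloc"); return 1; }
        /* With Linux overcommit, malloc can succeed and the OOM killer
         * can still strike when the pages are first touched, which
         * would explain a sim dying mid-run rather than at allocation
         * time. */
        printf("asked for %zu bytes, using %zu\n", want, n);
        free(buf);
        return 0;
    }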

I will run valgrind on a smaller problem to see what is going on.

-Bruce



