Classic running out of memory... huh? what? <long>

Ben Scott dragonhawk at gmail.com
Fri Jun 12 07:30:22 EDT 2009


On Thu, Jun 11, 2009 at 11:54 AM, <bruce.labitt at autoliv.com> wrote:
> Curiously, if I allocate more memory but I constrain the problem to fit in
> RAM (i.e. run a smaller problem), the program always runs to completion.

  On Linux, malloc()'ing memory doesn't commit memory pages (RAM or
swap); by default the kernel overcommits.  Until your program actually
writes into those pages, they're just a virtual memory mapping without
any backing store.
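
  Here's a minimal sketch of the effect (off the cuff, untested; it
assumes a 64-bit box with overcommit at its default setting, and the
4 GB size is just illustrative -- adjust for your machine):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        /* 4 GB; assumes a 64-bit size_t. */
        size_t sz = (size_t)4 * 1024 * 1024 * 1024;

        char *p = malloc(sz);
        if (p == NULL) {
            perror("malloc");
            return 1;
        }

        /* Nothing committed yet; check VmRSS in /proc/<pid>/status. */
        printf("malloc() ok, pid %d; RSS is still small\n", (int)getpid());
        sleep(30);

        /* Writing the pages forces the kernel to actually back them. */
        memset(p, 1, sz);
        printf("pages touched; RSS should now be ~4 GB\n");
        sleep(30);

        free(p);
        return 0;
    }

  Run it and watch the process in top(1): the resident size stays tiny
after the malloc() and only balloons once the memset() touches the
pages.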

  I'd guess that when you're "running a smaller problem," your
algorithms are touching fewer pages, so the kernel doesn't need as
much RAM to actually do the job.

> In both cases, if I use "free" I see that free memory is all the way down
> to 160MB during the file write.  This seems absurdly low somehow ;)

  This is normal.  In Linux, "free" memory is pages not used for
*anything*.  The kernel design assumes that you want your RAM to be
used, not sitting idle just using electricity and generating heat.  So
it will aggressively cache I/O in RAM.  As soon as a process needs
memory, the kernel will release those pages.

  That's the reason for the "+/- buffers/cache" line in free(1).  In
the output you posted, with 172 MB free, you'll see that you actually
have 10708 MB (~10 GB) free if one counts buffers/cache as free.
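
  If you want that same arithmetic programmatically, here's a rough
sketch that reads /proc/meminfo and sums MemFree + Buffers + Cached,
which is the figure free(1) reports on the "+/- buffers/cache" line:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        FILE *f = fopen("/proc/meminfo", "r");
        char line[256], key[64];
        unsigned long kb, memfree = 0, buffers = 0, cached = 0;

        if (f == NULL) {
            perror("/proc/meminfo");
            return 1;
        }

        /* Lines look like "MemFree:   123456 kB". */
        while (fgets(line, sizeof line, f) != NULL) {
            if (sscanf(line, "%63[^:]: %lu", key, &kb) == 2) {
                if (strcmp(key, "MemFree") == 0)
                    memfree = kb;
                else if (strcmp(key, "Buffers") == 0)
                    buffers = kb;
                else if (strcmp(key, "Cached") == 0)
                    cached = kb;
            }
        }
        fclose(f);

        printf("free + buffers/cache: %lu kB (~%lu MB)\n",
               memfree + buffers + cached,
               (memfree + buffers + cached) / 1024);
        return 0;
    }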

-- Ben


