slow last 128MB of RAM in a 2GB system?

Ben Scott dragonhawk at gmail.com
Fri Apr 20 14:13:13 EDT 2007


On 4/20/07, Bill McGonigle <bill at bfccomputing.com> wrote:
> Ouch - is that simply a matter of cache impact on
> performance?  I wouldn't have guessed it would be so high.

  It could be (although it could be something else I'm just not aware of).

  The next time you run memtest86, observe the memory timings in the
upper-left corner of the screen.  Cache RAM is typically an order of
magnitude or more faster than main RAM.
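
  If you want to see that gap for yourself, here's a rough sketch of
a microbenchmark (untested, and the sizes are guesses; build with
something like "gcc -O2 cachewalk.c", older glibc may want -lrt for
clock_gettime).  It walks the same buffer once sequentially and once
with a page-sized stride, so the second pass defeats the cache:

/* Sketch: time cache-friendly vs cache-hostile reads of one buffer. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define BUF_SIZE (64 * 1024 * 1024)   /* 64 MB: much bigger than cache */
#define STRIDE   4096                 /* jump a page at a time */

static double elapsed(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void)
{
    unsigned char *buf = malloc(BUF_SIZE);
    struct timespec t0, t1;
    volatile unsigned long sum = 0;
    long i, j;

    if (!buf) return 1;
    memset(buf, 1, BUF_SIZE);         /* fault all pages in up front */

    /* Sequential: cache lines and the prefetcher work for us. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (i = 0; i < BUF_SIZE; i++)
        sum += buf[i];
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("sequential: %.3f s\n", elapsed(t0, t1));

    /* Strided: same number of reads, but nearly every one misses
       cache and has to go all the way out to main RAM. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (j = 0; j < STRIDE; j++)
        for (i = j; i < BUF_SIZE; i += STRIDE)
            sum += buf[i];
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("strided:    %.3f s\n", elapsed(t0, t1));

    free(buf);
    return (int)(sum & 1);            /* keep the compiler honest */
}

I'd expect the strided pass to come out several times slower, even
though it does exactly the same number of reads.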

  Now think about what happens in, say, a program like "yum".  It's
going to be spinning through the same code paths for all the data,
which could easily mean several thousand iterations at the outer loop
level.  And since yum is written in Python, each of those iterations
runs through the bytecode interpreter, which is essentially a
special-purpose VM layered on top of the hardware.  That's a *lot* of
fetch/execute cycles.  If they are not cached for some reason...
well, yuck.
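
  To make that concrete, the inner loop of a bytecode interpreter
looks roughly like this toy (my sketch, not CPython's actual source):

/* Toy bytecode VM: every VM instruction costs a fetch, a decode
   (the switch), and a jump back to the top -- all in *real* CPU
   instructions. */
#include <stdio.h>

enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

int main(void)
{
    /* "Program": push 2, push 3, add, print, halt */
    int code[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
    int stack[16], sp = 0, pc = 0;

    for (;;) {
        int op = code[pc++];                 /* fetch */
        switch (op) {                        /* decode + execute */
        case OP_PUSH:  stack[sp++] = code[pc++]; break;
        case OP_ADD:   sp--; stack[sp-1] += stack[sp]; break;
        case OP_PRINT: printf("%d\n", stack[sp-1]); break;
        case OP_HALT:  return 0;
        }
    }
}

Each trip around that for(;;) loop burns a bunch of machine
instructions just to retire *one* VM instruction, which is why the
interpreter's dispatch code really wants to stay in cache.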

  The "von Neumann bottleneck" (a term coined by the same Mr. Backus
who recently passed away) has been the bane of computer performance
for 50 years.  So far, with careful compiler code design, complex
logic in the CPU, and lots of fast cache RAM, we've been able to stave
off the worst of it, but it's still probably the most limiting
non-human factor in systems design.

  Of course, your performance discrepancy could be something else
entirely, but I never pass up an opportunity to pontificate.  ;-)

> The speed increase in the last one might be partially due to RAM
> cache from the previous run, but I also see fewer page faults.

  Hmmm.  If yum is using memory-mapped I/O, would having everything
already in the kernel's page cache mean fewer (or at least cheaper)
page faults?
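
  One way to check empirically (a sketch, assuming Linux; I haven't
pointed it at yum itself): mmap a file, touch one byte per page, and
diff the fault counters from getrusage().  Pages already in the page
cache should show up as minor faults (no disk I/O) rather than major
ones:

/* Sketch: count page faults while touching an mmap'd file. */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <sys/resource.h>

int main(int argc, char **argv)
{
    struct stat st;
    struct rusage before, after;
    volatile unsigned char sum = 0;
    unsigned char *p;
    off_t i;
    int fd;

    if (argc != 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }
    fd = open(argv[1], O_RDONLY);
    if (fd < 0 || fstat(fd, &st) < 0) { perror(argv[1]); return 1; }

    p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    getrusage(RUSAGE_SELF, &before);
    for (i = 0; i < st.st_size; i += 4096)   /* touch one byte per page */
        sum += p[i];
    getrusage(RUSAGE_SELF, &after);

    printf("minor faults: %ld  major faults: %ld\n",
           after.ru_minflt - before.ru_minflt,
           after.ru_majflt - before.ru_majflt);
    munmap(p, st.st_size);
    close(fd);
    return 0;
}

Run it twice on the same big file; the second pass should show the
major-fault count drop to near zero.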

-- Ben

