Linux Memory Fragmentation, ideas..

Thomas Charron twaffle at gmail.com
Fri Oct 10 17:29:05 EDT 2014


  I have full visibility.  :-)  I've been developing the whole thing for
several years.  We don't have direct access to the buffers, as they are
created by libdc1394, which in turn interfaces with the firewire_core
modules.

  We're in a bind because the issue has only surfaced now that we're
collecting clinical results, running constantly for days at a time.

  We've slated the 'imager daemon' for future versions, which would solve
the issue.  For the short term I'm trying to find ways to make the existing
code function from an OS standpoint.
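  Some of the OS-standpoint knobs I'm looking at, as a sketch (the values
are guesses that would need tuning on the actual box, and everything here
needs root):

```shell
# Keep a larger reserve of free pages so the buddy allocator has more
# room to merge freed pages back into large blocks (value in kB; a guess):
sysctl -w vm.min_free_kbytes=65536

# Bias reclaim toward dropping dentry/inode cache sooner:
sysctl -w vm.vfs_cache_pressure=200

# On kernels with CONFIG_COMPACTION (>= 2.6.35), force a compaction pass
# to migrate pages and rebuild contiguous regions on demand:
echo 1 > /proc/sys/vm/compact_memory
```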

  Thomas

On Fri, Oct 10, 2014 at 5:00 PM, Bill Freeman <ke1g.nh at gmail.com> wrote:

> I don't know how much visibility into the code you have, but if I were
> designing from scratch, I would consider pre-allocating the buffers for the
> imagers, then have the new instances connect to these buffers, rather than
> allocate them afresh.  This could likely be done with mmap, with the
> buffers locked in memory, and not backed by the swap file, if possible.
> Then the buffers are out of consideration for file system and page buffers,
> or things that lesser IO might allocate and drop.  If mmap doesn't support
> what you need (and I haven't looked at the capability of modern linux
> mmap), then a custom device driver should be able to allocate such buffers
> at boot time and map them.
>
> Bill
>
> On Fri, Oct 10, 2014 at 4:47 PM, Thomas Charron <twaffle at gmail.com> wrote:
>
>>   Hello everyone,
>>
>>   This is a very interesting road, so I'm calling out to see if anyone
>> might have ideas on how to minimize an issue I am having on an embedded
>> imaging system.
>>
>>   This device is performing an insane amount of image processing from two
>> firewire cameras.  We're talking on the order of ~120 fps.  Each of these
>> is being processed, and various image masks are saved to the local hard
>> drive, for later use, and are provided to the client side of the device via
>> an embedded web server within the analyzer software.
>>
>>   What is happening is, once a minute, a new instance of the imagers is
>> launched.  These two imagers then connect to the firewire cameras, and go
>> to work.
>>
>>   Over time, ~1-2 weeks, the imagers start to fail at an increasing
>> rate, as the kernel starts to kill them as the firewire stack cannot
>> allocate enough DMA buffers to communicate with the cameras.  Note, there
>> is plenty of ram in DMA32, however, it is fragmented to the point that the
>> required contiguous 128k page areas are not available.  The system has
>> tens of thousands of free 4k and 8k blocks, though.
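The fragmentation can be watched directly: /proc/buddyinfo lists, per
memory zone, how many free blocks of each order exist. A contiguous 128k
area is an order-5 block (32 contiguous 4k pages), so it's the order-5-and-up
counts in the DMA32 row that matter here:

```shell
# Columns after the zone name are free-block counts for order 0, 1, 2, ...
# (4k, 8k, 16k, ...).  A 128k allocation needs a free order-5 block,
# i.e. the sixth count column onward must be non-zero.
cat /proc/buddyinfo
```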
>>
>>   A short term solution I have found is to simply request the kernel drop
>> all of its caches on the floor.  This, in turn, frees up a LOT of memory,
>> and subsequently, allocs can function without an issue, until it occurs
>> again.
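Concretely, that workaround is the standard drop_caches poke (heavy-handed
but harmless to data; needs root):

```shell
# Flush dirty pages first, then drop the page cache plus the dentry and
# inode slab caches (3 = pagecache + reclaimable slab objects):
sync
echo 3 > /proc/sys/vm/drop_caches
```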
>>
>>   I believe the issue is that the system is creating billions of little
>> files, and the I/O caching system is using the otherwise-unused RAM, of
>> which there is plenty.  The caching, however, is breaking up large page
>> areas into 4 and 8k chunks, fragmenting the RAM significantly.
>>
>>   Is there possibly a way to limit Linux's caching system to prevent the
>> use of a portion of the DMA32 zone in its entirety?  Or perhaps block off
>> portions of the DMA32 zone for use only by firewire and/or DMA transfers?
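One possibility along those lines, depending on the kernel config: the
contiguous memory allocator (CONFIG_CMA / CONFIG_DMA_CMA) can carve out a
reserved region at boot from which only DMA-API allocations may take large
contiguous blocks. Whether firewire_core's buffers actually go through the
DMA API would need checking, so treat this as a pointer, not a fix; the
64M figure is a guess:

```shell
# Hypothetical boot-time reservation via the kernel command line, e.g.
# in /etc/default/grub:
#   GRUB_CMDLINE_LINUX="... cma=64M"
# After a reboot, the reservation should show up in the boot log:
dmesg | grep -i cma
```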
>>
>>   I'm kind of describing the issue out loud here; I wasn't sure if
>> anyone had any good ideas to minimize it.
>>
>> --
>> -- Thomas
>>
>> _______________________________________________
>> gnhlug-discuss mailing list
>> gnhlug-discuss at mail.gnhlug.org
>> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
>>
>>
>


-- 
-- Thomas
