[OT] End-user uses for x86-64 (was: Why are still not at 64 bits)
Jon 'maddog' Hall
maddog at li.org
Fri Feb 16 19:06:41 EST 2007
On Fri, 2007-02-16 at 18:30 -0500, Bill McGonigle wrote:
> On Feb 16, 2007, at 14:31, Jon 'maddog' Hall wrote:
>
> > I will note, however, that you can not mmap in an 8GB flash into a
> > single address space with a 32-bit processor.
>
> maddog, is this another of your profound observations? That 64-bit
> addressing might be more interesting in the low-end/embedded space
> than the high end? Wouldn't that be a kick in the pants for
> everybody expecting the 64-bit train to be headed southbound.
>
> -Bill
(sigh)
Once upon a time in a place far, far away I talked a young man into
porting his 32-bit kernel to a 64-bit platform.
My main goal for this was to create a platform where researchers and
educators could investigate and expand programming algorithms without
ever running out of address space....a place where boundless
programming existed.
I recognized that whenever CPU address space took a step-function jump,
Don Knuth would bring out another version of Volume 3 (Sorting and
Searching) of his never-finished but often-rewritten "Art of Computer
Programming". I realized that every time address space went from 8-bit
to 16-bit to 32-bit, he would bring out yet a few more algorithms that
now made sense, where they had not made sense before because the
address space was too small.
I actually wrote a talk about the concept of boundless programming. I
utilized (believe it or not) the routines defined in the POSIX Real-Time
specification (I think it was POSIX 1003.1b, or something like that):
mmap
lock
threads
semaphores
async I/O
(there are more. I am tired)
and noted that with enough address space you could spread your data out
so far that collisions would hardly ever happen, and when one did happen
you could afford to take a little while to clean it up, since collisions
were so seldom.
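Here is a minimal C sketch of the idea on a modern 64-bit POSIX/Linux
system. This is my illustration, not anything from the original talk;
the 1 TB region size and the scatter function are assumptions picked
just for the example:

/* Reserve a huge, sparse region of address space and scatter
 * objects through it so that placements rarely collide. */
#include <stdio.h>
#include <sys/mman.h>

#define REGION_SIZE (1ULL << 40)   /* reserve 1 TB of address space */
#define SLOT_SIZE   (1ULL << 20)   /* spread objects 1 MB apart */

int main(void)
{
    /* Reserve address space only; no physical memory is committed
     * until a page is actually touched. */
    unsigned char *region = mmap(NULL, REGION_SIZE,
                                 PROT_READ | PROT_WRITE,
                                 MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE,
                                 -1, 0);
    if (region == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Place each object at a widely scattered slot; with 2^20 slots
     * in the region, collisions are rare and can be handled lazily.
     * The multiplicative hash here is just one illustrative choice. */
    unsigned long long key = 1234567ULL;
    unsigned long long slot = (key * 2654435761ULL) % (REGION_SIZE / SLOT_SIZE);
    region[slot * SLOT_SIZE] = 42;  /* first touch commits one page */

    printf("object stored %llu MB into a 1 TB region\n", slot);
    munmap(region, REGION_SIZE);
    return 0;
}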
(By the way, these were techniques that I used back in 1973 on IBM
mainframes, with "huge" address spaces of 16 Mbytes. A lot depends on
the speed of the processor versus the size of memory....temporal
locality and spatial locality, or the lack of them.)
A professor up at Dartmouth heard about this, and did a study of how
long it would take for a 64-bit address space to fill up if you never
re-used the addresses....never did garbage collection. I tried to get
him an Alpha for this, but they were still in too much demand. All I
could get him were some DECstation 3100s (MIPS-based).
At DECstation 3100 speeds and "normal" programming loads, he estimated
that it would be seven hundred years (or something like that) before all
the space was used up and you had to do garbage collection. We both
agreed that the machine would probably crash and have to be rebooted
before then (with Windows it was a given), so it was effectively "never"
that you would have to do garbage collection. And if you were ever
forced to do it, it would be highly effective, probably recovering
terabytes of data area at a time with little overhead.
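For scale, here is my own back-of-envelope arithmetic, not the
professor's model: even consuming address space at a steady 1 GB per
second (far faster than a DECstation 3100 ever could, and an assumed
rate, not a measured one), a 64-bit space takes centuries to exhaust.

/* Back-of-envelope check: time to consume 2^64 bytes of address
 * space at a steady rate, never reusing an address. */
#include <stdio.h>

int main(void)
{
    double space = 18446744073709551616.0;  /* 2^64 bytes */
    double rate  = 1e9;                     /* assumed: 1 GB per second */
    double years = space / rate / (3600.0 * 24.0 * 365.0);
    printf("about %.0f years to exhaust the space\n", years);  /* ~585 */
    return 0;
}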
I was disappointed that no one ever seemed to do much with this area of
work. Don still has not answered my challenge to rewrite Volume 3
again. But he is retired and will not even read email.
Something I am considering lately.
Good night.
md