Packing/unpacking binary data in C - doubles, 64 bits

bruce.labitt at autoliv.com bruce.labitt at autoliv.com
Thu Sep 10 11:37:12 EDT 2009


gnhlug-discuss-bounces at mail.gnhlug.org wrote on 09/10/2009 10:12:12 AM:

> 
> Bruce Labitt writes:
> 
> > Kevin D. Clark wrote:
> > > 2:  Typically, binary stuff is sent over the network in "network byte
> > > order" and network byte order is big-endian.  This statement is not
> > > universally agreed to -- in fact I used to work at a shop where they'd
> > > never even considered this problem and it turned out that they were
> > > sending (most) stuff over the wire in little-endian format.
> > >
> > > 
> > That only works if both ends are the same - definitely not portable.
> > In my case, the client is little-endian and the server is big-endian.
> 
> No, that always works and it is definitely portable.  Read what I said
> again: when you transmit binary integers onto the wire, make sure they
> exist in network-byte-order.
> 

Semantics... little-endian format != network byte order == big-endian
format.

little-endian ==> pack with hton*() ==> network byte order == big-endian format
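
For the 32-bit case the standard calls already do the right conversion on
either kind of host; a minimal round-trip sketch (plain POSIX
htonl()/ntohl(), nothing Cell-specific assumed):

#include <stdint.h>
#include <stdio.h>
#include <arpa/inet.h>   /* htonl(), ntohl() */

int main(void)
{
    uint32_t host = 0x11223344;
    uint32_t wire = htonl(host);   /* big-endian bytes regardless of host */
    uint32_t back = ntohl(wire);   /* receiver converts back; back == host */
    printf("host=0x%08x wire=0x%08x back=0x%08x\n", host, wire, back);
    return 0;
}

On a little-endian box the middle value prints byte-swapped; on a
big-endian box all three match.  Either way the bytes on the wire are
identical.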

I'm trying to write a 'universal' pack and unpack for C that supports the
following types: 8-, 16-, 32-, and 64-bit signed and unsigned ints, plus
float32 and float64.
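
One endian-agnostic way to build such a thing - and this is only a sketch,
with made-up names like pack_u64() - is to emit bytes MSB-first with
shifts, so the code never has to ask what the host's byte order is.
Doubles can ride along as their raw bit patterns, assuming IEEE-754
binary64 on both ends (true for x86 and the Cell's PPE):

#include <stdint.h>
#include <string.h>

/* Write a 64-bit value MSB-first (network byte order) using shifts;
 * works identically on little- and big-endian hosts. */
static void pack_u64(unsigned char *buf, uint64_t v)
{
    for (int i = 0; i < 8; i++)
        buf[i] = (unsigned char)(v >> (56 - 8 * i));
}

static uint64_t unpack_u64(const unsigned char *buf)
{
    uint64_t v = 0;
    for (int i = 0; i < 8; i++)
        v = (v << 8) | buf[i];
    return v;
}

/* Doubles: reinterpret the bits as uint64_t via memcpy (no strict-
 * aliasing trouble), then pack those.  Assumes IEEE-754 binary64. */
static void pack_f64(unsigned char *buf, double d)
{
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);
    pack_u64(buf, bits);
}

static double unpack_f64(const unsigned char *buf)
{
    uint64_t bits = unpack_u64(buf);
    double d;
    memcpy(&d, &bits, sizeof d);
    return d;
}

The same shift trick collapses to 1-, 2-, and 4-byte versions for the
smaller integer types (and float32 via uint32_t), so one small family of
functions covers the whole list.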

> May I politely suggest that you consult a decent computer networking
> book?  Please take a look at the functions htonl() and ntohl().
> 

That is where my problems started :)  I started with something in a book.
Unfortunately, my htonl() and ntohl() are 32-bit at best.  The
documentation for C and 64 bits is sketchy.  It has been a pain to say the
least - everyone has a different implementation and it is hard to find
definitive answers sometimes.

I compute in a 64 bit world... with doubles, and complex doubles.
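
A complex double is just two doubles side by side, so - still building on
the hypothetical pack_f64()/unpack_f64() sketched above - one reasonable
wire layout is real part first, imaginary part second:

#include <complex.h>

/* Pack a C99 double complex as two consecutive big-endian doubles.
 * pack_f64()/unpack_f64() are the hypothetical helpers from the
 * earlier sketch; the 16-byte layout here is just one convention. */
static void pack_c64(unsigned char *buf, double complex z)
{
    pack_f64(buf,     creal(z));
    pack_f64(buf + 8, cimag(z));
}

static double complex unpack_c64(const unsigned char *buf)
{
    return unpack_f64(buf) + unpack_f64(buf + 8) * I;
}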

> 
> 
> Question: from your various postings on this list, I gather that you
> are using MPI.  If this is true, why aren't you just using things like
> MPI_INT, MPI_DOUBLE, and possibly MPI_LONG_LONG?  Why not let your MPI
> library take care of details like this for you?  I guarantee you that
> any decent MPI implementation is going to be well-debugged and
> efficient.  It should also take care of any endian issues that you
> might encounter.
> 

Actually, I am using OpenMP on my Cell to use all 4 PPC threads, and FFTW
to use all the SPUs for FFTs.  Since I only have one blade, I see no value
in MPI.  If I had more than a single blade, MPI would rapidly become more
attractive.

> Regards,
> 
> --kevin
> -- 
> GnuPG ID: B280F24E                God, I loved that Pontiac.
> alumni.unh.edu!kdc                -- Tom Waits
> http://kdc-blog.blogspot.com/ 


Cheers,
Bruce
Something clever should go here...




