This is the mail archive of the
gcc-help@gcc.gnu.org
mailing list for the GCC project.
Re: Reading big-endian binary data on a little-endian machine
- From: Eljay Love-Jensen <eljay at adobe dot com>
- To: Mark Panning <mpanning at seismo dot berkeley dot edu>, gcc-help at gcc dot gnu dot org
- Date: Fri, 15 Aug 2003 14:47:00 -0500
- Subject: Re: Reading big-endian binary data on a little-endian machine
Hi Mark,
No easy way, on this one.
If your program uses a canonical data format, then this is already taken care of.
For example, writing a 16-bit or 32-bit number out with htons and htonl (respectively), and reading them in with ntohs and ntohl will give a consistent canonical binary format. If your application does the same, then the binary format shouldn't be an issue.
Unfortunately, in my experience, many C/C++ programmers will write a structure out blindly, such as "fwrite(myStruct, sizeof myStruct, 1, fp);". That neither converts the 16-bit and 32-bit numbers into a canonical format (such as the aforementioned network byte order), nor excludes the padding bytes, which will often vary from platform to platform. A "platform" being both the OS and the compiler, in this case.
Serializing output and marshalling input are important, yet often overlooked, issues.
Sincerely,
--Eljay