This is the mail archive of the
gcc@gcc.gnu.org
mailing list for the GCC project.
Re: Multiplatform binary generation
- From: Mike Stump <mrs at apple dot com>
- To: "deploy.kde" <deploy dot kde at seznam dot cz>
- Cc: gcc at gcc dot gnu dot org
- Date: Fri, 14 May 2004 12:19:07 -0700
- Subject: Re: Multiplatform binary generation
On Friday, May 14, 2004, at 06:25 AM, deploy.kde wrote:
> I am working on a research project to implement a single executable
> binary for all major Linux desktop platforms.
Great, you've just reinvented FAT executables. I'd suggest reading up
on the NeXT legacy.
Check out the apple-ppc-branch, the `driver driver', and the -arch
flag. lipo is the program that glues together multiple output files (.o
files and executables) for multiple arches.
On Darwin, the kernel (exec code) will select and load the right part
of the file for the CPU, and the dynamic loader does likewise, based
upon the current CPU setting.
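The slice selection described above can be sketched as a toy model. This is only an illustration of the idea, loosely inspired by Mach-O universal binaries; the constants, record layout, and helper names here are invented for the sketch and do not match the real fat-file format:

```python
import struct

# Toy fat container: a count, then one (cpu_type, offset, size) record
# per slice, then the slices themselves. A loader picks the slice whose
# cpu_type matches the running CPU. All values here are illustrative.

CPU_PPC, CPU_X86 = 18, 7  # made-up cpu-type tags for the sketch

def make_fat(slices):
    """slices: list of (cpu_type, bytes). Returns one glued blob."""
    header_size = 4 + 12 * len(slices)      # count word + one record per slice
    records, body, offset = [], b"", header_size
    for cpu, data in slices:
        records.append(struct.pack(">III", cpu, offset, len(data)))
        body += data
        offset += len(data)
    return struct.pack(">I", len(slices)) + b"".join(records) + body

def select_slice(fat, cpu_type):
    """What an exec loader would do: find and return the matching slice."""
    (count,) = struct.unpack_from(">I", fat, 0)
    for i in range(count):
        cpu, off, size = struct.unpack_from(">III", fat, 4 + 12 * i)
        if cpu == cpu_type:
            return fat[off:off + size]
    raise ValueError("no slice for this CPU")

fat = make_fat([(CPU_PPC, b"ppc code"), (CPU_X86, b"x86 code")])
print(select_slice(fat, CPU_X86))  # b'x86 code'
```

In the real system this lookup happens in the kernel's exec path, and lipo is what produces the glued file in the first place.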
We can do things like rebuild software multiple times for different
chips of the same general family and make them FAT, so if you run on
chip X, you can get code for chip X, as well as totally different CPU
architectures.
An example:
gcc -arch ppc -arch ppc970 -arch 386 -arch 686 foo.c -c -o foo.o
or
gcc -arch ppc -arch ppc970 -arch 386 -arch 686 foo.c -o foo
would create one foo.o (or foo) with 4 parts. Internally, the driver
driver runs the real gcc 4 times with 4 different output files, then
glues them all together at the end.
Most normal software can be built this way:
CFLAGS='-arch ...' configure && make && make install
gcc, being special, we build specially; see build_gcc in the branch for
details.
However, having said all that, I think you should just use traditional
techniques and not intermingle contents.
> I would like to hear any of your suggestions or comments before I
> spend months developing it.
Don't.