
Re: libgcc2 and MIN_UNITS_PER_WORD on ia64


In article <OFBD7634B7.F7E86D88-ON88256A1F.00695D27@LocalDomain> you write:
>When I built a cross compiler with host i686-pc-linux-gnu and target
>ia64-linux, the libgcc2.c compile failed because it tried to use mode
>TI, which is not supported on a 32-bit host.

This has been a general problem since the spring of 2000.  I've commented on
it before, but I am not sure where my earlier comments are.  Two changes that
spring created the problem.  One was to disable TImode support on 32-bit
hosts; this was necessary to avoid out-of-range shifts, which were causing
gcc crashes.  The other was Kenner's change to make libgcc2 define TImode
functions instead of DImode functions, which was presumably needed for Ada.
Together, these cause the libgcc2.c compile failure you are seeing when
building a cross compiler from a 32-bit host to a 64-bit target.

I thought I had seen some patches to address this, but I've been so busy
with IA-64 specific problems I haven't been tracking the status of this
problem.

In the Red Hat tools group, what we do is define HOST_WIDE_INT to "long long"
instead of "long" for cross compilers from 32-bit hosts to 64-bit targets.
This is done via an ugly config.gcc hack.  It makes the cross gcc run slower,
but the resulting code is much closer to what you would get with a native
compiler.  Otherwise, a lot of optimizations get disabled, because with
HOST_WIDE_INT set to long we can't do 64-bit calculations for the target
natively on a 32-bit machine.  Thus cross compilers from 32-bit hosts to
64-bit targets generally emit worse code than native 64-bit compilers, and
should be avoided unless necessary.  The "long long" trick makes this go
away.  I believe this config.gcc hack has been discussed on some of the gcc
lists before.  What it really needs is someone to clean it up and make it
more presentable so that we can put it in the FSF tree.
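
To illustrate, the effect of the hack boils down to forcing the wide int
macros from gcc's hwint.h to 64-bit values, roughly as sketched below.  This
is only a sketch of the idea; the actual config.gcc mechanism for injecting
it per host is different (and uglier).

  /* Sketch only: force a 64-bit wide int on a 32-bit host so the compiler
     can do target arithmetic natively.  The macro names are the ones from
     hwint.h; how the real hack defines them is a detail of our trees.  */
  #define HOST_BITS_PER_WIDE_INT 64
  #define HOST_WIDE_INT long long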

>2) In gcc/config/ia64/t-ia64, change the names of the div and mod
>   functions in LIB1ASMFUNCS to use one underscore rather than two, and
>   in gcc/config/ia64/lib1funcs.asm change the corresponding L_*
>   symbols.  After these files were added there was a change to mklibgcc
>   to remove function names from LIB2FUNCS that are in LIB1ASMFUNCS.

I didn't know about that.  This would allow us to clean up the ia64 backend
a little.  Previously, there was no way to remove functions from libgcc2.c,
hence I had to use different names.  It looks like the current code is in a
bit of an in-between state here.  There is a comment in ia64.h documenting
the name change, but the macros that comment was documenting are gone, so
the comment should go too.  Search for "Implicit Calls to Library Routines"
in ia64.h and look at the next comment.

>   Currently the duplicates end up having different names because of the
>   mode change in libgcc2.h, but the libgcc2 versions aren't needed.
>   With MIN_UNITS_PER_WORD defined to be 8, there are duplicate
>   functions (in different .o files), so the link of libgcc.a fails.

Presumably you mean with MIN_UNITS_PER_WORD set to 4 there is a link failure.
However, setting MIN_UNITS_PER_WORD to 4 isn't right either, since there are
currently no IA-64 ports that support 32-bit code.  Even if there were,
it is probably still not right.  The 32-bit alpha port doesn't change
UNITS_PER_WORD.

I think we need some other solution.  Changing HOST_WIDE_INT at configure
time seems to work, as mentioned above.  Maybe changing the tests in
libgcc2.h would help: perhaps we could check the host wide int size and, if
it is 32 bits, disable the DWtype functions in libgcc2.c.
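
The kind of test I have in mind would look roughly like the sketch below.
Note that libgcc2.c is compiled by the cross compiler for the target, so
whether the host wide int width can even be made visible there is an open
question; treat HOST_BITS_PER_WIDE_INT in the sketch as an assumption, not
something libgcc2.h can see today.

  /* Sketch only: on a 64-bit target built from a 32-bit host, skip the
     double-word (TImode) routines, since the compiler cannot handle
     TImode there anyway.  */
  #if MIN_UNITS_PER_WORD > 4 && HOST_BITS_PER_WIDE_INT < 64
    /* omit the DWtype (TImode) functions */
  #else
    /* build the double-word routines as usual */
  #endif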

>What's the purpose of changing the mode in libgcc2.h? The ia64 compiler
>doesn't appear to need mode TI functions in libgcc; what architectures
>do need this?  Is MIN_UNITS_PER_WORD the correct value on which to base
>the change of modes?

libgcc2.c was originally created to provide DImode (64-bit) routines for
32-bit targets.  It is pretty useless for all non-32-bit targets, including
16-bit and 64-bit targets.  Kenner's change tried to make it more useful by
making libgcc2 automatically provide routines that are twice the word size
of the target.
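
Concretely, the double-word type is now keyed off the target word size,
along the lines of the simplified sketch below (the real libgcc2.h typedefs
are more involved than this):

  /* Simplified sketch: the double-word helper type is twice the minimum
     word size of the target, so 32-bit targets get DImode (64-bit)
     helpers and 64-bit targets get TImode (128-bit) helpers.  */
  #if MIN_UNITS_PER_WORD > 4
  typedef int DWtype __attribute__ ((mode (TI)));    /* 128 bits */
  #else
  typedef int DWtype __attribute__ ((mode (DI)));    /*  64 bits */
  #endif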

While there are no obvious uses for TImode in the ia64 compiler, it does get
used internally by gcc for address calculations.  We measure type sizes in
both bytes and bits.  We need the sizes in bits in some places so that we
can handle C bit-field issues easily.  However, on a 64-bit machine, the
address space is 2^64 bytes, which is 2^67 bits.  Thus we need an integer
type larger than DImode to handle type sizes measured in bits.  This is what
we use TImode for, even though there is no C equivalent for TImode integers.
This case is probably pretty rare in C code, but I expect it is more common
in Ada, which is presumably why Kenner made the change.
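
Here is a standalone illustration of the overflow (not gcc code, just
arithmetic showing why 64 bits is not enough for bit sizes on a 64-bit
target):

  #include <stdio.h>

  int main (void)
  {
    unsigned long long bytes = ~0ULL;      /* roughly 2^64 bytes */
    unsigned long long bits = bytes * 8;   /* 2^67 does not fit; it wraps */
    printf ("bytes                       = %llu\n", bytes);
    printf ("bits (truncated to 64 bits) = %llu\n", bits);
    return 0;
  }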

See set_sizetype() in stor-layout.c.  It sets the precision of bitsizetype
to twice the word size.  However, looking at it now, I see that it uses the
host word size, which means bitsizetype gets 64 bits on a 32-bit host and
128 bits on a 64-bit host.  Thus we should not need the TImode libgcc2.c
routines when building on a 32-bit host.  So it looks like much of this
cross compiler problem has been fixed, except for the problem with building
libgcc2.a.

Jim

