[LTO][PATCH] Fix long double precision problem

Jim Blandy jimb@codesourcery.com
Thu Dec 13 22:43:00 GMT 2007


"Doug Kwan (關振德)" <dougkwan at google.com> writes:
>    What about complex long double? There are padding bits between the
> real and imaginary parts.

My understanding is that the base types are not meant to completely
specify how the debugger should interpret a type it's never seen
before; they're just meant to distinguish the source language's base
types.  For example, DWARF just has DW_ATE_float, not
DW_ATE_ieee_float, nor DW_ATE_float with DW_AT_mantissa_bits and
DW_AT_exponent_bias.  Once the DWARF has identified the type, it's up
to the ABI to specify the details of its representation.
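As an illustration of that division of labor, a base-type DIE for long
double carries only a generic encoding and a size; the attribute values
below are a sketch assuming the 32-bit x86 ABI (12-byte long double),
not something mandated by DWARF itself:

```
DW_TAG_base_type
    DW_AT_name       "long double"
    DW_AT_encoding   DW_ATE_float
    DW_AT_byte_size  12
```

Nothing here says "80-bit extended with 2 bytes of padding"; the
debugger learns that from the ABI once it has matched the name, the
encoding, and the size.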

Under that interpretation, all GCC should do for long double complex
is emit a DW_TAG_base_type with DW_ATE_complex_float and
DW_AT_byte_size == 24.  That's enough to distinguish long double
complex from double complex or float complex.  It's not necessary for
the DWARF to explicitly mention the 80-bit length if the ABI dictates
that for a lone 24-byte complex type.  (Do you also have a full
12-byte floating point complex format that would be ambiguous with
that?)

But that interpretation doesn't square too well with providing data
like DW_AT_bit_size and DW_AT_bit_offset.  So the interpretation is
unclear to me.  I don't remember this coming up; it may simply not
have been considered.


