This is the mail archive of the gcc-bugs@gcc.gnu.org mailing list for the GCC project.



[Bug libgcj/26483] Wrong parsing of doubles when interpreted on ia64



------- Comment #10 from wilson at gcc dot gnu dot org  2006-03-08 00:46 -------
Obviously I missed the denorm angle.  And the answer to the question of what
differs between native and interpreted execution is, of course, libffi, which
took me far longer to remember than it should have.

Anyway, looking at libffi, the issue appears to be the stf_spill function in
src/ia64/ffi.c.  This function spills an FP value to the stack, taking the
value as a __float80 argument, which is effectively a long double.  So when we
pass the denorm double to stf_spill, it gets normalized to a long double, and
this normalization appears to be causing all of the trouble.  The long double
value then gets passed to dtoa in fdlibm, which expects a double argument, and
dtoa then fails.  I didn't debug dtoa to see exactly why it fails, but it
seems clear that if we pass it an argument of the wrong type, we are asking
for trouble.
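
For reference, the helper looks roughly like this (a paraphrase from memory of
src/ia64/ffi.c, not a verbatim quote; the fpreg typedef and the exact asm
constraints should be treated as approximate):

  /* 16-byte stack slot for a spilled FP register (approximate shape).  */
  typedef struct
  {
    unsigned long long x[2] __attribute__ ((aligned (16)));
  } fpreg;

  /* The __float80 parameter type is the culprit: passing a double here
     triggers an implicit conversion, which renormalizes denorm inputs
     before the value is spilled.  */
  static void
  stf_spill (fpreg *addr, __float80 value)
  {
    asm ("stf.spill %0 = %1%P0" : "=m" (*addr) : "f" (value));
  }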

On IA-64, FP values are always held in FP registers in the extended register
format, rounded to the appropriate type, so the normalization should have no
effect except on denorm values, I think.  This means only single-denorm and
double-denorm argument values are broken, which is something that would be
easy to miss without a testcase.

Stepping through ffi_call in gdb to the point where stf_spill is called, I see
the incoming value is

  f6  4.9406564584124654417656879286822137e-324  (raw 0x000000000000fc010000000000000800)

which has the minimum double exponent (fc01) and a denorm fraction (0...800).
After the conversion to __float80, we have

  f6  4.9406564584124654417656879286822137e-324  (raw 0x000000000000fbcd8000000000000000)

This now has an exponent invalid for double (fbcd) and a normalized fraction
(800...0).  The register exponent field is biased by 0xffff, so 0xfc01 is
-1022, the minimum normal double exponent, while 0xfbcd is -1074, below the
double range: the value 2^-1074 is unchanged, only its representation.
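
The renormalization is easy to reproduce on any host whose long double is an
80-bit extended type (x86, for example); here is a minimal standalone sketch,
independent of the libffi code:

  #include <inttypes.h>
  #include <stdio.h>
  #include <string.h>

  int
  main (void)
  {
    double d = 0x1p-1074;   /* smallest denorm double: bit pattern 0x1 */
    long double ld;
    uint64_t dbits;
    unsigned char lb[sizeof ld];
    size_t i;

    memset (&ld, 0, sizeof ld);   /* zero any padding bytes for a clean dump */
    ld = d;                       /* the conversion stf_spill's prototype forces */

    memcpy (&dbits, &d, sizeof dbits);
    memcpy (lb, &ld, sizeof ld);

    printf ("double:      0x%016" PRIx64 "\n", dbits);
    printf ("long double:");
    for (i = sizeof ld; i-- > 0; )   /* dump bytes high to low */
      printf (" %02x", lb[i]);
    printf ("\n");
    return 0;
  }

  /* On x86-64 this prints
       double:      0x0000000000000001
       long double: 00 00 00 00 00 00 3b cd 80 00 00 00 00 00 00 00
     i.e. the denorm encoding (exponent field 0, fraction 1) becomes a
     normalized significand with exponent 0x3bcd (16383 - 1074), mirroring
     the fc01 -> fbcd change in the ia64 registers above.  */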

If I rewrite stf_spill to be a macro instead of a function, avoiding the
argument type conversion, then the testcase works for both gcj and gij; a
sketch of the macro form follows.
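
A minimal sketch of the macro form, keeping the asm body of the existing
function (the constraint string is from memory, so treat it as approximate):

  /* As a macro, `value' keeps the caller's type, so a double argument is
     spilled directly, with no implicit widening to __float80 and hence no
     renormalization of denorms.  */
  #define stf_spill(addr, value) \
    asm ("stf.spill %0 = %1%P0" : "=m" (*addr) : "f" (value));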

ldf_spill appears to have the same problem, and is in need of the same
solution.
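
The fill side would presumably become the analogous macro (again a sketch; if
memory serves, the load helper in the libffi source is actually spelled
ldf_fill):

  /* `result' keeps its declared type, so nothing is widened to __float80
     on the way back out of the spill slot.  */
  #define ldf_fill(result, addr) \
    asm ("ldf.fill %0 = %1%P1" : "=f" (result) : "m" (*addr));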


-- 

wilson at gcc dot gnu dot org changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |wilson at gcc dot gnu dot org


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=26483

