This is the mail archive of the mailing list for the GCC project.

Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]

Spurious differences between 32- and 64-bit mode numeric results

Dear developers,
I've just tested a numerically intensive program.
I compiled it on Fedora Core 2 x86_64 using a self-compiled
gcc/g++ 3.4.0 (and the 3.3.3 that ships with Fedora).
Result for both compilers: the program takes more steps until the main
iteration converges when compiled in 64-bit mode than when compiled
in 32-bit mode.

Since I know the program converges faster (albeit at doubled,
and potentially prohibitive, memory requirements) when running with doubles
instead of floats, this seems to indicate that in 64-bit mode, some
numerical calculations are less accurate, at least when performed with
floats. Btw, there's no fancy math involved; the
time-consuming inner loop is all adds and mults.

Could there be a least-significant-bits problem?

Best regards
Andreas Svrcek-Seiler

P.S.: As far as I've seen, the problem is independent of CPU-specific 
optimization flags.
P.P.S.: If someone wants to investigate that, I can easily provide
the sources as well as a test input.
           ( O O )
              o        Wolfgang Andreas Svrcek-Seiler  
              o        (godzilla) 
      .oooO            Tel.:01-4277-52733 
      (   )   Oooo.    
-------\ (----(   )--------------------------------------------------------
        \_)    ) /
