This is the mail archive of the mailing list for the GCC project.


Re: weird optimization in sin+cos, x86 backend

On 3 February 2012 16:24, Vincent Lefevre <> wrote:
> On 2012-02-03 16:57:19 +0100, Michael Matz wrote:
>> > And it may be important that some identities (like cos^2+sin^2=1) be
>> > preserved.
>> Well, you're not going to get this without much more work in sin/cos.
> If you use the glibc sin() and cos(), you already have this (possibly
> up to a few ulp's).
>> > For the glibc, I've finally reported a bug here:
>> >
>> That is about 1.0e22, not the obscene 4.47460300787e+182 of the original
>> poster.
> But 1.0e22 cannot be handled correctly.

Of course it can't.
A double has only 53 bits of significand precision (52 explicitly stored).
So, although you can represent 1.0e22, you cannot represent 1.0e22 + 1.
If you understood how the sin and cos functions actually get
calculated, you would understand why the ability to handle that +1
matters: sin and cos first have to reduce the input value into the
range 0 to 2*Pi, so if you cannot distinguish 1.0e22 from
1.0e22 + 2*Pi, there is no way to perform that reduction that makes
sense.

To represent 1.0e22 + 1 exactly, you need about 74 bits of
significand (log2(1.0e22) is roughly 73.1), far more than the 53 a
double provides.
If you move down to the 1.0e14 range, things start making sense
again: 1.0e14 is well below 2^53 (about 9.0e15), so nearby integers
are still exactly representable.

The easiest way to reason about correctness in floating-point maths
is to think of every value in binary and check that you have enough
bits of precision at every stage of the calculation.

Summary: 1.0e22 is just as obscene as 4.47460300787e+182 from the
point of view of sin and cos.

Kind Regards

