Re: weird optimization in sin+cos, x86 backend

On 2012-02-09 17:18:25 +0000, Joseph S. Myers wrote:
> The crlibm approach, involving exhaustive searches for worst cases for 
> directed rounding, could as I understand it work for functions of one 
> float, double or 80-bit long double argument, but I think the exhaustive 
> searches are still infeasible for binary128 (113-bit mantissa) long 
> double - although you could do functions that are probably (not proven) 
> correctly rounded in that case.  For functions of two arguments such as 
> pow, I don't know if it can produce proven correctly rounded versions even 
> for float.

For float, it might be possible to get all the hardest-to-round cases
of pow(x,y) by running my fast algorithms for each value of y. I just
fear that for large values of x and y, approximating the power function
by small-degree polynomials is very inaccurate (see below), so that my
algorithms won't work well and it may take too much time in practice.
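
To make the idea concrete, here is a brute-force sketch (in C) of
such a search for one fixed y - to be clear, this is not my fast
algorithms, just an illustration of what "hardest to round" means.
The choice of y, the scanned binade, and the use of double pow() as
the reference are arbitrary; a real search needs a correctly rounded
reference (e.g. MPFR) and must cover all 2^32 floats for each y,
which is what makes it expensive.

/* Scan one binade for the float x whose pow(x, y) falls closest to
   a round-to-nearest boundary.  Build with -lm; takes a few seconds
   (2^23 iterations). */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const float y = 3.5f;        /* one fixed exponent, arbitrary */
    float worst_x = 0.0f;
    double worst_dist = 0.5;     /* distance to nearest RN boundary,
                                    as a fraction of one ulp */

    for (float x = 1.0f; x < 2.0f; x = nextafterf(x, 2.0f)) {
        double ref = pow((double)x, (double)y); /* reference; pow()
            itself may be off by a few ulps, hence "sketch" */
        float fl   = (float)ref;
        float f_lo = (fl <= ref) ? fl : nextafterf(fl, -INFINITY);
        float f_hi = nextafterf(f_lo, INFINITY);
        /* position of ref between two consecutive floats, in [0,1) */
        double frac = (ref - (double)f_lo) / ((double)f_hi - (double)f_lo);
        double dist = fabs(frac - 0.5);  /* 0 = exactly on a boundary */
        if (dist < worst_dist) {
            worst_dist = dist;
            worst_x = x;
        }
    }
    printf("hardest case in [1,2): x = %a, %g ulp from a boundary\n",
           worst_x, worst_dist);
    return 0;
}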

For double precision, this is difficult even when y is restricted to
an integer. FYI, I currently have all the worst cases of pow(x,n) for
all positive integers n <= 3443. But for such large values of n
(though still small compared to most of the [0,2^53] range), the
approximation of x^n by polynomials is much less accurate than for
the usual functions (e.g. x^3), so the tests run much more slowly.
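
As a small illustration of why large n hurts (all the numbers below
are arbitrary): compare the maximum error of a degree-2 Taylor
expansion of x^n over the same tiny interval for n = 3 and n = 3443.
For the large n, one needs either a much higher degree or much
smaller subintervals to reach the same accuracy, hence slower tests.

/* C sketch: error of a degree-2 Taylor expansion of x^n around
   x0 = 1 on [x0, x0 + 2^-20], for n = 3 and n = 3443.  Build with
   -lm. */
#include <math.h>
#include <stdio.h>

static double taylor2_pow(double x, double x0, double n)
{
    /* f(x0) + f'(x0) (x-x0) + f''(x0) (x-x0)^2 / 2 for f(x) = x^n */
    double d = x - x0;
    return pow(x0, n)
           * (1.0 + n * d / x0 + n * (n - 1.0) * d * d / (2.0 * x0 * x0));
}

int main(void)
{
    const double x0 = 1.0, h = ldexp(1.0, -20);
    const double ns[2] = { 3.0, 3443.0 };

    for (int i = 0; i < 2; i++) {
        double n = ns[i], max_rel = 0.0;
        for (int k = 0; k <= 1000; k++) {
            double x = x0 + h * (double)k / 1000.0;
            double rel = fabs(taylor2_pow(x, x0, n) / pow(x, n) - 1.0);
            if (rel > max_rel) max_rel = rel;
        }
        printf("n = %4.0f: max relative error ~ %.3g\n", n, max_rel);
    }
    return 0;
}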

> IBM long double (double+double) is a special case likely to need its own 
> implementations of many functions.  In that case it's not very 
> well-defined what correctly rounded even means (you *can* define it to be 
> the closest representable double+double value, but I doubt that's feasible 
> to implement in libm

I think you have the same kind of problems as with the standard
floating-point formats: most cases can be rounded "correctly" by
first computing an approximation with a bit more precision than
twice the precision of double (2 * 53 = 106 bits). Then you have the
cases that are hard to round... With ordinary FP formats, this
problem can be handled heuristically, relying on probabilistic
hypotheses that are conjectured to hold, or by solving particular
cases in another way (e.g. a Taylor expansion of exp(x) when x is
close to 0). For double-double, this may be more difficult, I think,
for instance when, in f(x_hi + x_lo), f(x_hi) is very close to a
rounding boundary...
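
For the standard formats, the usual fix is Ziv's strategy: retry
with increasing precision until the rounding is decided. A sketch
with MPFR, which already applies this internally for its own
functions; the starting precision, the increment, and the choice of
exp() are arbitrary here:

/* Correctly rounded exp() for double via Ziv's strategy.  Build
   with -lmpfr -lgmp. */
#include <stdio.h>
#include <mpfr.h>

static double exp_rn(double x)
{
    mpfr_prec_t prec = 70;       /* a bit more than 53 bits to start */
    mpfr_t t;
    double y;

    mpfr_init2(t, prec);
    for (;;) {
        mpfr_set_prec(t, prec);
        mpfr_set_d(t, x, MPFR_RNDN);
        mpfr_exp(t, t, MPFR_RNDN);   /* error <= 1/2 ulp at prec bits */
        /* Can an approximation with about prec correct bits be
           rounded to 53 bits without ambiguity?  If not, this is a
           hard-to-round case: retry with more precision. */
        if (mpfr_can_round(t, prec - 1, MPFR_RNDN, MPFR_RNDN, 53))
            break;
        prec += 32;
    }
    y = mpfr_get_d(t, MPFR_RNDN);
    mpfr_clear(t);
    return y;
}

int main(void)
{
    printf("%.17g\n", exp_rn(1.0));
    return 0;
}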

Now, I don't think correct rounding is really useful in practice for
double-double, which is mostly used as an intermediate arithmetic
anyway.

> although it could be done for constant folding with MPFR, and the
> basic arithmetic operations don't claim to achieve it).

I think the goal was to have more precision than double while still
being quite fast. The implementation of accurate elementary
functions in double precision could use such an arithmetic.
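
For reference, the error-free transformations that such an
arithmetic is built on (Dekker's Fast2Sum, Knuth's TwoSum) are tiny;
below is just the textbook version in C, assuming round-to-nearest
and no intermediate overflow:

#include <stdio.h>

typedef struct { double hi, lo; } dd;  /* unevaluated sum hi + lo */

/* Fast2Sum (Dekker): requires |a| >= |b|; then hi + lo == a + b
   exactly, with lo the rounding error of the addition. */
static dd fast_two_sum(double a, double b)
{
    dd r;
    r.hi = a + b;
    r.lo = b - (r.hi - a);
    return r;
}

/* TwoSum (Knuth): no ordering requirement, 6 operations. */
static dd two_sum(double a, double b)
{
    dd r;
    double bb;
    r.hi = a + b;
    bb   = r.hi - a;
    r.lo = (a - (r.hi - bb)) + (b - bb);
    return r;
}

int main(void)
{
    dd s = two_sum(1.0, 0x1p-60);      /* 1 + 2^-60 is not a double */
    dd t = fast_two_sum(1.0, 0x1p-60); /* same result: |a| >= |b| holds */
    printf("hi = %a  lo = %a\n", s.hi, s.lo);
    printf("hi = %a  lo = %a\n", t.hi, t.lo);
    return 0;
}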

> Probably the best that's sensible is an output within a few ULP (ULPs 
> defined as if the representation is fixed-width mantissa of 106 or 107 
> bits except for subnormals)

AFAIK, using a 106-bit precision instead of 107 (i.e. allowing more
redundancy in the representation) permits much faster algorithms for
the basic operations.

> of the mathematical value of the function at an input within a few
> ULPs of the specified value - this may also be the best that's
> possible for other floating-point formats for some hard-to-calculate
> functions.

Vincent Lefèvre <> - Web: <>
100% accessible validated (X)HTML - Blog: <>
Work: CR INRIA - computer arithmetic / AriC project (LIP, ENS-Lyon)
