
Re: [RFC] Fix PR28684


Perhaps Brad Lucier can help here, but rearranging A+(B+C) as (A+B)+C
for many finite values of A, B and C can introduce errors of many
thousands of ulp, when the exponents of A, B and C differ significantly.
Consider, for example, A=10^10, B=-10^10, C=0.5.
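
The effect is easy to demonstrate at a scale where it shows up in IEEE
double (the snippet below is mine, not part of the original message;
1e20 and 0.5 are simply chosen so that the exponent gap exceeds the 52
explicit mantissa bits of a double):

/* Illustrative sketch only: compare the two associations directly.
   volatile keeps the compiler from folding the sums at compile time.  */
#include <stdio.h>

int
main (void)
{
  volatile double a = 1.0e20, b = -1.0e20, c = 0.5;

  double left  = a + (b + c);   /* b + c rounds back to -1e20, so this is 0.0 */
  double right = (a + b) + c;   /* a + b cancels exactly, so this is 0.5 */

  printf ("a+(b+c) = %g\n(a+b)+c = %g\n", left, right);
  return 0;
}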

The exercise for the reader is to determine whether the average error
of reassociating A+(B+C) as (A+B)+C over the domains of A, B and C is
any worse than the average ulp error of fsin or fcos over the domain of
its argument.  Statistically, even without the extra accuracy of fsin
or fcos, is it any safer or worse than reassociating just a single pair
of additions, let alone reversing a dot product or decomposing TRSM?
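
To make that concrete, here's a rough sketch of the experiment (again my
own code, not from the original message; the +/-60 exponent range and the
sample count are arbitrary choices): sample A, B and C with uniformly
distributed exponents and measure how far (A+B)+C lands from A+(B+C),
in ulps of the latter.

/* Build with e.g. "gcc -O2 reassoc.c -lm".  */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* One random double: mantissa in [1,2), exponent uniform in
   [-emax, emax], random sign.  */
static double
random_operand (int emax)
{
  double m = 1.0 + (double) rand () / RAND_MAX;
  int e = rand () % (2 * emax + 1) - emax;
  return ldexp ((rand () & 1) ? m : -m, e);
}

int
main (void)
{
  const int n = 1000000, emax = 60;
  double sum_err = 0.0, max_err = 0.0;

  srand (1);
  for (int i = 0; i < n; i++)
    {
      double a = random_operand (emax);
      double b = random_operand (emax);
      double c = random_operand (emax);

      double ref = a + (b + c);    /* the original association */
      double alt = (a + b) + c;    /* the reassociated form */
      if (ref == 0.0)
        continue;                  /* avoid dividing by a zero-sized ulp */

      /* Distance between the two results, in ulps of the original.  */
      double ulp = nextafter (fabs (ref), HUGE_VAL) - fabs (ref);
      double err = fabs (alt - ref) / ulp;

      sum_err += err;
      if (err > max_err)
        max_err = err;
    }

  printf ("mean %.3g ulp, worst %.3g ulp over %d samples\n",
          sum_err / n, max_err, n);
  return 0;
}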


Here's where we're cheating...

Of course, the bias is that uniformly over the domain of possible
inputs it's extremely likely that the exponents of A and B will differ
by more than the available mantissa bits.  In the real world, the
values are not uniformly distributed, so vectorizing reductions becomes
reasonable, use of fsin/fcos is reasonable, and sub-O(N^3) matrix
multiplication is reasonable.  It's a trade-off.  As soon as we allow
any change, the worst case behaviour often goes to hell, but hopefully
it's the average or median case we care about.
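
(Back of the envelope, and purely my own numbers: assuming doubles with
exponents drawn uniformly from the roughly 2046 possible values for
normal numbers, the chance that two exponents lie within 52 of each
other is about 105/2046, i.e. around 5%; the other ~95% of the time the
smaller addend disappears entirely into the rounding of the larger one.)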

Roger
--

