


Re: Setting -frounding-math by default


Andrew Haley wrote:
We know that's what you want.  What we don't know (well, what I don't
know) is *why*.  If you want to do something as specialized as interval
arithmetic, what's the big deal with having to pass special flags to
the compiler?

I contest the "as specialized as" comment. I know that I may look like Don Quixote here, but you may imagine that many people look like Panurge's sheep to me on this point ;-)


Interval arithmetic is not supposed to be an obscure feature: it is a way to compute with real numbers which offers many advantages over the usual floating-point (FP) model (which is quite well supported by hardware and compilers). It directly competes with FP on this ground (which means the potential market is huge), and its implementation is easily built on top of FP.
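
To give an idea of how directly interval arithmetic sits on top of FP
(and why the compiler's handling of the rounding mode, hence
-frounding-math, is the crux), here is a minimal sketch of interval
addition using directed rounding.  This is only an illustration, not
Boost.Interval's actual code, and it is only correct if the compiler
does not constant-fold or move the additions across the fesetround()
calls, i.e. if the file is compiled with -frounding-math:

  #include <cfenv>
  #include <cstdio>

  struct interval { double lo, hi; };

  interval add(interval a, interval b) {
      interval r;
      std::fesetround(FE_DOWNWARD);   // lower bound, rounded towards -infinity
      r.lo = a.lo + b.lo;
      std::fesetround(FE_UPWARD);     // upper bound, rounded towards +infinity
      r.hi = a.hi + b.hi;
      std::fesetround(FE_TONEAREST);  // restore the default rounding mode
      return r;
  }

  int main() {
      interval x = { 0.1, 0.1 };      // degenerate interval: the double nearest to 0.1
      interval s = add(x, x);
      std::printf("[%.17g, %.17g]\n", s.lo, s.hi);  // encloses the exact sum
      return 0;
  }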

Some of the reasons why it is not used as much as it could be are that it
has poor support from compilers, and could have better support from
the hardware as well (with proper hardware support, interval operations
would be roughly as fast as floating-point, and adequate hardware
support is not fundamentally hard).  All this feeds a tradition of
training and education which puts FP first, and traps interval
arithmetic (IA) in a vicious circle:
no hardware/software improvement => unfair comparison => no teaching
=> no demand => no improvement...
It would also benefit from standardization, but this is being taken
care of (see the ongoing IEEE-1788, and the std::interval proposal
for C++/TR2).

As I said in another mail, if you have code which uses interval arithmetic
deep down in an application, but it happens to come from, say, Boost.Interval,
which is a library providing inline functions for efficiency reasons
(not precompiled), then you need to pass these flags when compiling the
whole application (the translation units that include those headers) as well.
And that's a pain, especially with all those modern template libraries
which have everything in headers.  That's a concrete problem I want
to solve that affects my pet library (see my signature) and its users.
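
To make that concrete, here is roughly what one translation unit of such
an application looks like (the file name and the little function are
invented for illustration; the header path and the basic interval
operations are Boost.Interval's).  The interval code is inlined right
there, so the rounding guarantees depend on the flags used for this
file, not on how the library was built (it isn't: it is header-only):

  // app_part.cc -- one TU among many in a larger application
  //
  //   g++ -O2 -frounding-math -c app_part.cc    (and likewise for every
  //   other TU that includes the interval headers)
  #include <boost/numeric/interval.hpp>   // header-only: inline functions
  #include <cstdio>

  typedef boost::numeric::interval<double> I;

  // Deep inside the application: the interval operations used here are
  // compiled as part of *this* file.
  double enclosure_width(double x) {
      I xi(x);                        // degenerate interval [x, x]
      I p = xi * xi + I(0.1) * xi;    // enclosure of x^2 + 0.1*x
      return p.upper() - p.lower();   // width of the resulting enclosure
  }

  int main() {
      std::printf("%.17g\n", enclosure_width(3.0));
      return 0;
  }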


Now, what's so good about intervals that nobody sees? I think that nobody sees the cost of floating-point, which is that it is a complicated model of approximation of the reals: it forces tons of people to learn its details, while all they want is to compute with real numbers. Teaching beginners its pitfalls is one of the first things you have to do. Doing the same thing with intervals would be much easier to teach, and would give strong guarantees on results which anybody who masters real numbers would understand easily.

Concrete example: you have heard that motto "never compare FP for
equality", which some people try to have beginners learn, while experts
know the reality behind it, which is: it depends on what you do
(meaning: learn more).
With intervals, you simply get clean semantics: true, false, or
"I don't know", where the last tells you that you have to do something
special.  No surprise.  Everything is rock solid, and you don't need
endless discussions around "should I use -ffast-math in my code, or
which subset of its sub-flags is best...".  Admittedly, handling the
"I don't know" answer can require some work, but it is clear where work
is needed, and you may decide to ignore it (for -ffast-math users :-) ).
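
As an illustration of this tri-valued comparison (a bare sketch, not
the actual interface of Boost.Interval or of the std::interval
proposal; libraries often use a dedicated tribool type instead of
std::optional<bool>):

  #include <optional>

  struct interval { double lo, hi; };

  // "a < b": true, false, or "I don't know" (std::nullopt) when the
  // intervals overlap and the answer depends on the underlying reals.
  std::optional<bool> less(interval a, interval b) {
      if (a.hi < b.lo)  return true;   // every point of a is below every point of b
      if (a.lo >= b.hi) return false;  // no point of a is below any point of b
      return std::nullopt;             // overlap: do something special
  }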

This was an argument about the cost of teaching the "masses" (aka
"beginners", who in fact already master real-number maths, and who
naively expect computers to be a tool that helps them instead of a tool
that hurts their brain).
For advanced scientific computing, if you had intervals as fast as
floating-point, then a lot of complicated work on, say, static code
analysis for roundoff error propagation (the parts of it which try
to emulate IA with FP by computing bounds at compile-time) would
become pointless, as the hardware would take care of it.
Also, going even further, my guess is that formal proofs of algorithms
and programs dealing with real numbers would be much simplified
if they based their models on intervals rather than floating-point.
Try to evaluate the global educational cost of producing experts
in this area?


My point about improving compiler support first is that I see it as a first step towards reaching a critical mass of applications and users, in order to economically justify hardware support (at which point compiler support will be trivial, but unfortunately we are not there yet; it's like when we had to have FP emulators in the past).  I may be wrong about that.


I don't mean that IA is perfect, nor magic that solves everything without thinking: certainly, convergence and stability issues are similar to those with FP. But, IMO, it's an improvement over FP which should be considered on the same ground as the improvement that FP was over integers 2-3 decades ago.


Now, you know the *why*. I'm not sure whether I convinced you, but I'd be glad to have some help for the *how* ;-)

--
Sylvain Pion
INRIA Sophia-Antipolis
Geometrica Project-Team
CGAL, http://cgal.org/

