Should -fcross-jumping be part of -O1?
Robert Dewar
dewar@gnat.com
Fri Dec 5 01:04:00 GMT 2003
Gabriel Dos Reis wrote:
> Please, who mentions Fortran compared to language X?
The discussion is pretty much language-independent; I merely used
Fortran here as an example (not inappropriate if you are talking about
numerical code). We are talking about gcc here, and certainly our
discussion of optimization in gcc applies to g77 as well as to the
other languages supported by gcc.
> Sure it is. If the result falls into the predicted interval, and
> therefore the expected decision is made based on that, then the
> computation is correct.
Predicted by what? Notice also that you are shifting your focus from
reproducibility to predictability (as I pointed out, these are
different, and as you know the thread was originally about the effect
of optimization on reproducibility).
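To make the reproducibility issue concrete (my own illustration, not
something Gaby gave): on an x87 target, gcc keeps the intermediate
product below in an 80-bit register, so no overflow occurs; on a target
whose double arithmetic is strictly 64-bit, a*b overflows to infinity
and the infinity survives the division. Same program, two answers.

#include <stdio.h>

int main(void)
{
    volatile double a = 1e308, b = 10.0; /* volatile defeats folding */
    double c = a * b / b;  /* a*b overflows double, not extended     */
    printf("%g\n", c);     /* "1e+308" or "inf", depending on target */
    return 0;
}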
> That is an artefact of your rhetoric. "correct" does have a useful
> content. It is no
Well, the above point is a bit unclear, but my point is that correct
means different things to different people, so it is fundamentally a
confusing term. I still don't know precisely how you define it.
> There is a whole branch of mathematical approach to numerics based on
> interval arithmetic, upon which one can base software construction
> with predictable results -- as far as the compiler implementer does
> not go his way of playing language lawyering nonsense.
I am of course perfectly familiar with the use of interval arithmetic,
but you will have to be much clearer on why you think it is relevant
to this discussion. There is some class of actions that you declare to
be nonsense, but you are far from clear in defining what that set of
actions might be. It would be helpful if you could be precise instead
of simply throwing around terms like "nonsense" and "correct".
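For concreteness, here is the kind of computation I assume you have in
mind (a minimal sketch; the interval type and iv_add are my own
illustrative names, and it assumes C99 directed rounding via <fenv.h>):

#include <fenv.h>
#include <stdio.h>

typedef struct { double lo, hi; } interval;

/* Round the lower bound down and the upper bound up, so the true
   mathematical sum is always enclosed by the result. */
static interval iv_add(interval a, interval b)
{
    int save = fegetround();
    interval r;
    fesetround(FE_DOWNWARD);
    r.lo = a.lo + b.lo;
    fesetround(FE_UPWARD);
    r.hi = a.hi + b.hi;
    fesetround(save);
    return r;
}

int main(void)
{
    interval x = { 0.1, 0.1 };  /* endpoints: the double nearest 0.1 */
    interval y = { 0.2, 0.2 };
    interval s = iv_add(x, y);  /* encloses the exact sum; 1 ulp wide */
    printf("[%.17g, %.17g]\n", s.lo, s.hi);
    return 0;
}

Note the irony: gcc does not honor #pragma STDC FENV_ACCESS, so an
optimizer that folds the additions across the fesetround calls (compile
without optimization to be safe) destroys exactly the predictability
that interval arithmetic is built on -- which is the heart of this
thread.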
> In such a context, algorithms are most of the time based on a
> combination of numeric and symbolic computation, and it is far more
> important to get a (numeric) computation correct than to get it fast.
Again, I don't know what you mean by correct. To me, correctness can
only be established by reference to a language definition that defines
what translation is correct. We can't define correct as "what Gaby has
in mind even if he does not define it precisely". Perhaps what would be
helpful is if, for any particular language as an example, you spelled
out exactly the set of semantics that would define correctness for you.
> "correct" is not a rhetorical device.
Well, what I mean by a rhetorical device is that basically so far you
are saying "we want correctness [who could object?] and you should know
what that means, since I can't be bothered to define it precisely."
> I don't have time left, and I'm not quite interested in a distractive
> language lawyering game. But I feel it is really misleading to
> outright reject (correctness and predictability, which together imply)
> reproducibility.
Predictability and correctness are certainly perfectly reasonable
goals, but correctness must be with respect to a language definition.
For example, the Ada standard gives a very precise definition, but it
is carefully designed to be sufficiently non-deterministic to
accommodate all common machine arithmetics. Now, in an IEEE
environment, it is perfectly reasonable to add a precise set of rules
to map Ada operations onto well-defined IEEE operations (not completely
removing non-determinism, since as I am sure you know, the IEEE
standard is not 100% deterministic, but it is close enough for most
purposes). See Sam Figueroa's thesis for a precise proposal for doing
this in Ada 95, or see Apple's spec for Pascal.
Language lawyering is precisely about defining what correct means. If
you don't like the spec for a particular language because it is
incomplete in your view, you can't simply assume that everyone will
agree on the delicate task of precisely mapping language semantics to
IEEE operations; there are quite a few subtle issues to be dealt with
(e.g. the choice of base types in Ada 95).
Once you stress correctness, you are in the language lawyering business
whether you like it or not.
> No, I don't think so. "Correct" always implies expected results in a
> given context. It may become bad only when/if one plays distractive
> lawyering, but then that is true for virtually anything.
That's right, and those expected results are expected according to some
well-defined language definition. Once again, we need a definition, not
just vague ideas of what is or is not obviously correct!
Language lawyering, which you dismiss as distractive (distracting, I
assume), is precisely about determining whether results are correct
according to a precise definition.
You seem to insist that it is obvious what is correct and what is not,
and that you don't need a precise definition suitable for perusal by
language lawyers.
Well, I think that means you don't understand all the issues. It is not
at ALL obvious what the exact criteria for correctness should be if you
demand full IEEE predictability. For any given language, it is a
non-trivial task to determine what the rules are.
For example, what exactly should the rules be for conversion of double
to string in C? Insisting on totally accurate conversion is a
significant burden, and requires the ability to output a very large
number of decimal digits. The IEEE standard has well-defined precision
bounds for conversions of this type, but we know better algorithms now,
so one is tempted to be stricter than IEEE in this regard, and some
would consider that totally precise conversion is worth the cost. This
is not a simple discussion, and it is far from obvious what the choice
should be.
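To make the trade-off concrete (a small sketch of my own, assuming IEEE
binary64 doubles): 15 significant decimal digits can fail to
round-trip, while 17 always recover the exact bits.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    double x = 0.1 + 0.2;  /* not exactly 0.3 in binary64 */
    char buf[64];

    snprintf(buf, sizeof buf, "%.15g", x);  /* prints "0.3" */
    printf("15 digits: %s  round-trips: %s\n", buf,
           strtod(buf, NULL) == x ? "yes" : "no");  /* no */

    snprintf(buf, sizeof buf, "%.17g", x);  /* "0.30000000000000004" */
    printf("17 digits: %s  round-trips: %s\n", buf,
           strtod(buf, NULL) == x ? "yes" : "no");  /* yes */
    return 0;
}

Whether a language should require the shortest such digit string, or
merely one that round-trips, is exactly the kind of rule a precise
definition has to settle.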
Robert