[Bug tree-optimization/109154] [13 regression] jump threading de-optimizes nested floating point comparisons
rguenther at suse dot de
gcc-bugzilla@gcc.gnu.org
Mon Mar 27 10:18:13 GMT 2023
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=109154
--- Comment #15 from rguenther at suse dot de <rguenther at suse dot de> ---
On Mon, 27 Mar 2023, jakub at gcc dot gnu.org wrote:
> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=109154
>
> --- Comment #12 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
> (In reply to Richard Biener from comment #11)
> > _1 should be [-Inf, nextafter (0.0, -Inf)], not [-Inf, -0.0]
>
> Well, that is a consequence of the decision to always flush denormals to
> zero in frange::flush_denormals_to_zero, because some CPUs always do it
> and others do it when asked to (e.g. x86 when linked with -ffast-math).
> Unless we revert that decision and flush denormals to zero only
> selectively (say on alpha in non-IEEE mode (the default), or with fast
> math (which exact suboption?), etc.)
I think flushing denormals makes sense for "forward" propagation,
i.e. computing LHS ranges. For ranges derived from relations it
really hurts (well, specifically for compares against zero).
OTOH, if you consider
_1 = a[1]; // load from a denormal representation
if (_1 < 0.)
then whether _1 should include -0.0 or not depends on what the target
does on the load. I suppose the standard leaves this
implementation-defined?
Given that -ffast-math on x86 enables FTZ, we'd have to be conservative
there as well. But OTOH we don't have any HONOR_DENORMALS macro or the like?
Note the testcase in this PR was about -Ofast ...