This program should print the same thing twice:

#include <stdio.h>

void f1 ()
{
  unsigned long x = 3;
  float y = 1;
  printf ("%g\n", (-x) * y);
}

void f2 ()
{
  unsigned long x = 3;
  float y = 1;
  unsigned long z = -x;
  printf ("%g\n", z * y);
}

int main ()
{
  f1 ();
  f2 ();
}

However, on x86_64 GNU/Linux with current mainline, it prints

-3
1.84467e+19

It is already incorrect in the first GIMPLE dump.  f1 has

  x = 3;
  y = 1.0e+0;
  D.2219 = (float) x;
  D.2220 = -D.2219;

f2 has

  x = 3;
  y = 1.0e+0;
  z = -x;
  D.2223 = (float) z;

In other words, in f1 the conversion to float happens before the negation.  The bug happens with both the C and C++ front ends.
Google ref b/7902206
(In reply to comment #1)
> Google ref b/7902206

The FE behaves correctly when 'double' is used instead of 'float'.
With gcc-3.2.3 or 3.3.6 it prints

1.84467e+19
1.84467e+19

on x86_64-linux; with 3.4.6 up to 4.7.2 it prints

-3
1.84467e+19

Optimization level doesn't seem to make any difference.
I'm beginning to think the test case is invalid. The operands of the multiplication in f1 are unsigned long and float, but they are not converted to double. Instead the unsigned long is converted to float (C1x, N1494, 6.3.1.8, 1st paragraph, 2nd "Otherwise" sub-paragraph), but -3UL is outside the range of a 32-bit float, so the result is undefined (C1x, N1494, 6.3.1.4, 2nd paragraph, last sentence).
Yes, this is invalid.
-3UL is not outside the range of float.  -3UL == 0xfffffffffffffffd.  That is less than FLT_MAX == 3.40282e+38.  It is true that the value cannot be represented exactly in float, but it is not out of range.  So the standard says that the compiler must pick the nearest representable value, either higher or lower.
Ah, indeed - I misremembered a dup that was doing the opposite, converting -1.0 to unsigned and expecting -1U as result.

It's fold converting (float)(-x) to -(float)x:

float f1 ()
{
  unsigned long x = 3;
  return -x;
}
Or rather convert.c:convert_to_real:

      /* Propagate the cast into the operation.  */
      if (itype != type && FLOAT_TYPE_P (type))
        switch (TREE_CODE (expr))
          {
            /* Convert (float)-x into -(float)x.  This is safe for
               round-to-nearest rounding mode.  */
            case ABS_EXPR:
            case NEGATE_EXPR:
              if (!flag_rounding_math
                  && TYPE_PRECISION (type) < TYPE_PRECISION (TREE_TYPE (expr)))
                return build1 (TREE_CODE (expr), type,
                               fold (convert_to_real (type,
                                                      TREE_OPERAND (expr, 0))));
              break;

that's of course not valid for unsigned x, even when using ufloat, as the float result is always signed.
And as usual - convert.c contains premature optimization (this one hardly worth it) and/or duplicates of fold-const.c.  Thus removing the whole NEGATE_EXPR case looks like the correct thing to do.  I'm leaving for Christmas, so not me.  Patch pre-approved if you want to prepare it and do the testing.
Yeah, I wonder if that transformation wasn't meant to be guarded also by FLOAT_TYPE_P (itype); comparing TYPE_PRECISION of a floating type with, say, an integer type or vector type looks fishy.
I'd keep it for FLOAT_TYPE_P -> FLOAT_TYPE_P for now; perhaps premature, but clearly we don't have any GIMPLE (or RTL) optimization that would do the job.

int
foo (double x)
{
  float a = x;
  double b = -x;
  float c = -a;
  float d = b;
  return c == d;
}

int
bar (double x)
{
  return (-(float) x) == ((float) -x);
}

With -O2 -ffast-math, bar is folded into return 1;, while foo is quite a lot of code, although both are equivalent.  If the NEGATE_EXPR case is just removed, then bar isn't optimized at all either.
*** Bug 57886 has been marked as a duplicate of this bug. ***
Fixed for GCC 4.9.0+ by r0-124269, which references this PR but for some reason does not show up in Bugzilla.