[Bug tree-optimization/96133] [10/11 Regression] x86-64 gcc 10.1 using -O3 leads to wrong calculation since r10-1882-g831e688af50c5f77

rguenth at gcc dot gnu.org via gcc-bugzilla@gcc.gnu.org
Thu Jul 9 12:13:06 GMT 2020


https://gcc.gnu.org/bugzilla/show_bug.cgi?id=96133

Richard Biener <rguenth at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|NEW                          |ASSIGNED
           Assignee|unassigned at gcc dot gnu.org|rguenth at gcc dot gnu.org

--- Comment #3 from Richard Biener <rguenth at gcc dot gnu.org> ---
Value numbering stmt = vect__68.15_89 = MEM <vector(2) double> [(double *)vectp_xyz_sRGB.12_85];
RHS MEM <vector(2) double> [(double *)vectp_xyz_sRGB.12_85] simplified to { 1.4308039999999999647428694515838287770748138427734375e-1, 0.0 }
Setting value number of vect__68.15_89 to { 1.4308039999999999647428694515838287770748138427734375e-1, 0.0 } (changed)

I guess we might be confused by the read crossing CTORs.
