[Bug target/82989] [6/7/8 regression] Inexplicable use of NEON for 64-bit math

wilco at gcc dot gnu.org gcc-bugzilla@gcc.gnu.org
Mon Feb 19 18:41:00 GMT 2018


https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82989

--- Comment #19 from Wilco <wilco at gcc dot gnu.org> ---
(In reply to sudi from comment #17)
> Since this looks like a pretty invasive problem, according to my discussions
> with Wilco and Kyrill, I think I will try to propose a smaller, but
> temporary fix using the ?s and special casing 32 for this PR (which could go
> in sooner). I will also open a new PR to handle this at the expand phase and
> clean up the code aimed at gcc 9.

Looking at the history, it seems the bad ?'s have been there a long time, since
at least 2010. Initially the patterns used "w, ?r", thus forcing use of Neon in
most cases. This was quickly discovered to be a bad idea and changed to "w,
?r, ?w", with the first w only enabled for non-cortex-a8 cores. Later "onlya8"
was renamed to "avoid_neon_for_64bits", which remains the default today
(even though that name doesn't actually mean what it says...).
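For readers less familiar with machine descriptions: a hypothetical sketch of
what the "w, ?r, ?w" alternative layout looks like in a define_insn (this is
illustrative only, not the exact pattern from the GCC sources; the pattern
name, asm templates, and predicates here are assumptions):

```
;; Hypothetical sketch of a DI-mode logical-op pattern with the
;; "w, ?r, ?w" alternatives discussed above.  The "?" constraint
;; modifier slightly disparages an alternative in the register
;; allocator's cost calculation, so the plain "w" alternative is
;; preferred when everything else is equal.
(define_insn "*anddi3_sketch"
  [(set (match_operand:DI 0 "s_register_operand" "=w,?r,?w")
        (and:DI (match_operand:DI 1 "s_register_operand" "%w,0,w")
                (match_operand:DI 2 "s_register_operand"  "w,r,w")))]
  "TARGET_NEON"
  "@
   vand\t%P0, %P1, %P2
   #
   vand\t%P0, %P1, %P2"
)
```

In the real sources the per-core gating (e.g. avoid_neon_for_64bits) is done
via the "arch"/"enabled" insn attributes rather than the constraints
themselves, which is why a misnamed tunable can silently flip which
alternatives are available.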

The anddi3_insn/iordi3_insn/xordi3_insn patterns were fixed by commit
8ee7dc6fa1a663c9eea8043f84951c1e073468ff when they were merged back into
arm.md, so those use "w, r, ?w".
