[Bug target/67366] Poor assembly generation for unaligned memory accesses on ARM v6 & v7 cpus

rearnsha at gcc dot gnu.org <gcc-bugzilla@gcc.gnu.org>
Thu Aug 27 09:36:00 GMT 2015


https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67366

--- Comment #2 from Richard Earnshaw <rearnsha at gcc dot gnu.org> ---
(In reply to Richard Biener from comment #1)
> I think this boils down to the fact that memcpy expansion is done too late
> and that (with more recent GCC) the "inlining" done on the GIMPLE level is
> restricted to !SLOW_UNALIGNED_ACCESS, but arm defines STRICT_ALIGNMENT to 1
> unconditionally.

Yep, we have to define STRICT_ALIGNMENT to 1 because not all load instructions
work with misaligned addresses (ldm, for example).  The only way to handle
misaligned copies is through the movmisalign API.
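For reference, the pattern at issue is a memcpy-based unaligned access along the
lines of the sketch below (a hypothetical example in the spirit of the report,
not the PR's exact testcase).  On ARMv6/v7 with -munaligned-access one would
hope for a single unaligned ldr here rather than four byte loads plus shifts.

    /* Portable unaligned 32-bit load, written via memcpy so that the
       compiler, not the programmer, deals with alignment.  */
    #include <string.h>

    unsigned int
    read_u32 (const unsigned char *p)
    {
      unsigned int v;
      memcpy (&v, p, sizeof v);   /* candidate for inline expansion */
      return v;
    }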


