[PATCH] Fix asan optimization for aligned accesses. (PR sanitizer/63316)
Wed Sep 24 13:50:00 GMT 2014
> BTW, I've noticed that perhaps using BIT_AND_EXPR for the
> (shadow != 0) & ((base_addr & 7) + (real_size_in_bytes - 1) >= shadow)
> tests isn't best, maybe we could get better code if we expanded it as
> (shadow != 0) && ((base_addr & 7) + (real_size_in_bytes - 1) >= shadow)
> (i.e. an extra basic block containing the second half of the test
> and fastpath for the shadow == 0 case if it is sufficiently common
> (probably it is)).
BIT_AND_EXPR allows an efficient branchless implementation on platforms that
support chained conditional compares (e.g. ARM). You can't reproduce this on
current trunk, though, because I'm still waiting for the ccmp patches from
Zhenqiang Chen to be approved :(
> Will try to code this up unless somebody beats me to
> that, but if somebody volunteered to benchmark such a change, it would
> be very much appreciated.
AFAIK the LLVM team recently got about a 1% improvement on SPEC from this.