This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
Re: [PATCH] Fix asan optimization for aligned accesses. (PR sanitizer/63316)
- From: ygribov <tetra2005@gmail.com>
- To: gcc-patches@gcc.gnu.org
- Date: Wed, 24 Sep 2014 06:50:47 -0700 (PDT)
- Subject: Re: [PATCH] Fix asan optimization for aligned accesses. (PR sanitizer/63316)
- Authentication-results: sourceware.org; auth=none
- References: <5405DC2A.7050503@samsung.com> <5405DDBE.10703@samsung.com> <20140924092249.GU17454@tucnak.redhat.com>
> BTW, I've noticed that perhaps using BIT_AND_EXPR for the
> (shadow != 0) & ((base_addr & 7) + (real_size_in_bytes - 1) >= shadow)
> tests isn't best, maybe we could get better code if we expanded it as
> (shadow != 0) && ((base_addr & 7) + (real_size_in_bytes - 1) >= shadow)
> (i.e. an extra basic block containing the second half of the test
> and fastpath for the shadow == 0 case if it is sufficiently common
> (probably it is)).
BIT_AND_EXPR allows an efficient branchless implementation on platforms that support chained conditional compares (e.g. ARM). You can't reproduce this on current trunk though, because I'm still waiting for the ccmp patches from Zhenqiang Chen to be approved :(
> Will try to code this up unless somebody beats me to
> that, but if somebody volunteered to benchmark such a change, it would
> be very much appreciated.
AFAIK the LLVM team recently got about 1% on SPEC from this.
-Y
--