[PATCH] Fix asan optimization for aligned accesses. (PR sanitizer/63316)


On Tue, Sep 02, 2014 at 07:09:50PM +0400, Marat Zakirov wrote:
> >Here's a simple optimization patch for Asan. It stores alignment
> >information into ASAN_CHECK, which is then extracted by sanopt to reduce
> >the number of "and 0x7" instructions for sufficiently aligned accesses. I
> >checked it on the Linux kernel by comparing the output of objdump -d -j .text
> >vmlinux | grep "and.*0x7," for the optimized and regular cases. It eliminates
> >12% of the and 0x7's.
> >
> >No regressions. Sanitized GCC was successfully Asan-bootstrapped. No false
> >positives were found in the kernel.

Unfortunately it broke PR63316.  The problem is that you've just replaced
base_addr & 7 with base_addr in the
(base_addr & 7) + (real_size_in_bytes - 1) >= shadow
computation.  The & 7 is of course not useless there; & ~7 would be.
For a base_addr known to be sufficiently aligned, we instead know that
(base_addr & 7) is always 0 and can thus simplify the test to
(real_size_in_bytes - 1) >= shadow,
where (real_size_in_bytes - 1) is a constant.
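
To make that concrete, here is a minimal C model of the two forms of the
test; the function names and the standalone main are just for
illustration, the real thing is of course emitted as GIMPLE by
asan_expand_check_ifn and the shadow byte is read from shadow memory
rather than passed in as a parameter.

#include <assert.h>
#include <stdint.h>

/* Generic slow-path test: works for any alignment, access size <= 8.  */
static int
bad_access_generic (uintptr_t base_addr, unsigned real_size_in_bytes,
		    int8_t shadow)
{
  return (shadow != 0)
	 & ((int8_t) ((base_addr & 7) + (real_size_in_bytes - 1)) >= shadow);
}

/* Simplified test for align >= 8: base_addr & 7 is known to be 0, so the
   left operand of >= collapses to the constant real_size_in_bytes - 1.  */
static int
bad_access_aligned (unsigned real_size_in_bytes, int8_t shadow)
{
  return (shadow != 0) & ((int8_t) (real_size_in_bytes - 1) >= shadow);
}

int
main (void)
{
  /* For an 8-byte aligned base_addr the two forms agree for every shadow
     value and access size; just dropping the & 7 while keeping base_addr
     in the addition (the bug behind PR63316) would not.  */
  for (unsigned size = 1; size <= 8; size++)
    for (int shadow = -128; shadow <= 127; shadow++)
      assert (bad_access_generic (0x1000, size, (int8_t) shadow)
	      == bad_access_aligned (size, (int8_t) shadow));
  return 0;
}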

Fixed thusly, committed to trunk.

BTW, I've noticed that using BIT_AND_EXPR for the
(shadow != 0) & ((base_addr & 7) + (real_size_in_bytes - 1) >= shadow)
tests is perhaps not ideal; we might get better code if we expanded it as
(shadow != 0) && ((base_addr & 7) + (real_size_in_bytes - 1) >= shadow),
i.e. with an extra basic block containing the second half of the test
and a fast path for the shadow == 0 case if that is sufficiently common
(it probably is).  I'll try to code this up unless somebody beats me to
it, but if somebody volunteered to benchmark such a change, it would
be very much appreciated.
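
In C terms, the difference between the two expansions would look roughly
like this (report_error and main are illustrative stand-ins, not GCC
code; the real expansion of course happens at the GIMPLE level):

#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-in for the asan error reporting call.  */
static void
report_error (uintptr_t base_addr)
{
  printf ("bad access at %#lx\n", (unsigned long) base_addr);
}

/* Current expansion: BIT_AND_EXPR.  Both halves of the test are computed
   unconditionally and combined with a non-short-circuiting &.  */
static void
check_bit_and (uintptr_t base_addr, unsigned real_size_in_bytes,
	       int8_t shadow)
{
  int crosses
    = (int8_t) ((base_addr & 7) + (real_size_in_bytes - 1)) >= shadow;
  if ((shadow != 0) & crosses)
    report_error (base_addr);
}

/* Suggested alternative: short-circuit &&, i.e. a separate basic block
   for the second half of the test and a fast path out when shadow == 0,
   which is expected to be the common case.  */
static void
check_short_circuit (uintptr_t base_addr, unsigned real_size_in_bytes,
		     int8_t shadow)
{
  if (shadow != 0
      && (int8_t) ((base_addr & 7) + (real_size_in_bytes - 1)) >= shadow)
    report_error (base_addr);
}

int
main (void)
{
  /* Both variants flag a 4-byte access into a granule with only 2
     addressable bytes, and accept it when shadow == 0.  */
  check_bit_and (0x1000, 4, 2);
  check_short_circuit (0x1000, 4, 2);
  check_bit_and (0x1000, 4, 0);
  check_short_circuit (0x1000, 4, 0);
  return 0;
}

The benchmarking question is then whether the extra branch pays for
itself when shadow is 0 most of the time.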

2014-09-24  Jakub Jelinek  <jakub@redhat.com>

	PR sanitizer/63316
	* asan.c (asan_expand_check_ifn): Fix up align >= 8 optimization.

	* c-c++-common/asan/pr63316.c: New test.

--- gcc/asan.c.jj	2014-09-24 08:26:49.000000000 +0200
+++ gcc/asan.c	2014-09-24 11:00:59.380298362 +0200
@@ -2585,19 +2585,26 @@ asan_expand_check_ifn (gimple_stmt_itera
 	  gimple shadow_test = build_assign (NE_EXPR, shadow, 0);
 	  gimple_seq seq = NULL;
 	  gimple_seq_add_stmt (&seq, shadow_test);
-	  /* Aligned (>= 8 bytes) access do not need & 7.  */
+	  /* Aligned (>= 8 bytes) can test just
+	     (real_size_in_bytes - 1 >= shadow), as base_addr & 7 is known
+	     to be 0.  */
 	  if (align < 8)
-	    gimple_seq_add_stmt (&seq, build_assign (BIT_AND_EXPR,
-						     base_addr, 7));
-	  gimple_seq_add_stmt (&seq, build_type_cast (shadow_type,
-						      gimple_seq_last (seq)));
-	  if (real_size_in_bytes > 1)
-	    gimple_seq_add_stmt (&seq,
-				 build_assign (PLUS_EXPR, gimple_seq_last (seq),
-					       real_size_in_bytes - 1));
-	  gimple_seq_add_stmt (&seq, build_assign (GE_EXPR,
+	    {
+	      gimple_seq_add_stmt (&seq, build_assign (BIT_AND_EXPR,
+						       base_addr, 7));
+	      gimple_seq_add_stmt (&seq,
+				   build_type_cast (shadow_type,
+						    gimple_seq_last (seq)));
+	      if (real_size_in_bytes > 1)
+		gimple_seq_add_stmt (&seq,
+				     build_assign (PLUS_EXPR,
 						   gimple_seq_last (seq),
-						   shadow));
+						   real_size_in_bytes - 1));
+	      t = gimple_assign_lhs (gimple_seq_last_stmt (seq));
+	    }
+	  else
+	    t = build_int_cst (shadow_type, real_size_in_bytes - 1);
+	  gimple_seq_add_stmt (&seq, build_assign (GE_EXPR, t, shadow));
 	  gimple_seq_add_stmt (&seq, build_assign (BIT_AND_EXPR, shadow_test,
 						   gimple_seq_last (seq)));
 	  t = gimple_assign_lhs (gimple_seq_last (seq));
--- gcc/testsuite/c-c++-common/asan/pr63316.c.jj	2014-09-24 10:57:21.879454411 +0200
+++ gcc/testsuite/c-c++-common/asan/pr63316.c	2014-09-24 11:04:16.773241665 +0200
@@ -0,0 +1,22 @@
+/* PR sanitizer/63316 */
+/* { dg-do run } */
+/* { dg-options "-fsanitize=address -O2" } */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+extern void *malloc (__SIZE_TYPE__);
+extern void free (void *);
+#ifdef __cplusplus
+}
+#endif
+
+int
+main ()
+{
+  int *p = (int *) malloc (sizeof (int));
+  *p = 3;
+  asm volatile ("" : : "r" (p) : "memory");
+  free (p);
+  return 0;
+}


	Jakub

