This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.



Re: [PATCH] Come up with --param asan-stack-small-redzone (PR sanitizer/81715).


On 9/25/18 12:40 PM, Jakub Jelinek wrote:
> On Tue, Sep 25, 2018 at 12:10:42PM +0200, Martin Liška wrote:
>> On 9/25/18 11:24 AM, Jakub Jelinek wrote:
>>> On Tue, Sep 25, 2018 at 11:05:30AM +0200, Martin Liška wrote:
>>>> As requested in PR81715, GCC emits bigger middle redzones for small variables.
>>>> It's analyzed in the following comment: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81715#c28
>>>
>>> First of all, does LLVM make the variable sized red zone size only for
>>> automatic variables, or also for global/local statics, or for alloca?
>>
>> Yes, definitely for global variables, as seen here:
>>
>> lib/Transforms/Instrumentation/AddressSanitizer.cpp:
>>   2122      Type *Ty = G->getValueType();
>>   2123      uint64_t SizeInBytes = DL.getTypeAllocSize(Ty);
>>   2124      uint64_t MinRZ = MinRedzoneSizeForGlobal();
>>   2125      // MinRZ <= RZ <= kMaxGlobalRedzone
>>   2126      // and trying to make RZ to be ~ 1/4 of SizeInBytes.
>>   2127      uint64_t RZ = std::max(
>>   2128          MinRZ, std::min(kMaxGlobalRedzone, (SizeInBytes / MinRZ / 4) * MinRZ));
>>   2129      uint64_t RightRedzoneSize = RZ;
>>   2130      // Round up to MinRZ
>>   2131      if (SizeInBytes % MinRZ) RightRedzoneSize += MinRZ - (SizeInBytes % MinRZ);
>>   2132      assert(((RightRedzoneSize + SizeInBytes) % MinRZ) == 0);
>>   2133      Type *RightRedZoneTy = ArrayType::get(IRB.getInt8Ty(), RightRedzoneSize);
>>
>> So roughly SizeInBytes / 4. Confirmed:
> 
> Changing non-automatic vars will be certainly harder.  Let's do it later.
> 
>>> Have you considered also making the red zones larger for very large
>>> variables?
>>
>> I was mainly focused on shrinking, as that's what limits the use of asan-stack in KASAN.
>> But I'm open to it. Would you follow what LLVM does, or do you have a specific idea of
>> how to grow them?
> 
> Dunno if they've done some analysis before picking the current sizes; unless
> you do some, I'd follow their numbers.  It doesn't look totally
> unreasonable: a compromise between not wasting too much on the more frequent
> smaller vars and, for larger vars, catching even larger out-of-bounds accesses.

Agree with that!

> 
>>> What exactly would need changing to support the 12-15 bytes long red zones
>>> for 4-1 bytes long automatic vars?
>>> Just asan_emit_stack_protection or something other?
>>
>> Primarily this function, that would need a generalization. Apart from that we're
>> also doing alignment to ASAN_RED_ZONE_SIZE:
>>
>> 	      prev_offset = align_base (prev_offset,
>> 					MAX (alignb, ASAN_RED_ZONE_SIZE),
>> 					!FRAME_GROWS_DOWNWARD);
>>
>> Maybe it has consequences I don't see right now?
> 
> Actually, I think even:
> +                 && i != n - 1
> in your patch isn't correct, vars could be last even if they aren't == n - 1
> or vice versa, the sorting is done by many factors, vars can go into
> multiple buckets based on predicate etc.
> So, rather than trying to guess what is last here it should be left to
> expand_used_vars for one side, and perhaps based on whether any vars were
> placed at all on the other side (don't remember if asan supports anything
> but FRAME_GROWS_DOWNWARD).

OK, it only affects the size of the redzone, which can be bigger.

> 
>> First feedback is positive:
>> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81715#c30
>>
>> It's questionable whether handling of variables 1-4B wide is worth further changes.
> 
> I'd think the really small vars are quite common, isn't that the case (sure,
> address taken ones will be less common, but still not rare).

So I decided to write the patch properly; I have a working version that survives
the asan.exp tests. The code that creates redzones is more generic and all stack
vars are now aligned just to ASAN_SHADOW_GRANULARITY.

The only missing piece is how to implement asan_emit_redzone_payload more smartly.
That means doing memory stores of size 8, 4, 2 and 1 in order to reduce the number
of insns. Do we have similar code somewhere?

And overall, do you like the generalization I did?

Thanks,
Martin

> 
> 	Jakub
> 

From eb1a81fb08b288a8a4e0b2c5a055931e027f233c Mon Sep 17 00:00:00 2001
From: marxin <mliska@suse.cz>
Date: Tue, 25 Sep 2018 10:54:37 +0200
Subject: [PATCH] First semi-working version.

---
 gcc/asan.c                                    | 90 ++++++++++---------
 gcc/asan.h                                    | 20 +++++
 gcc/cfgexpand.c                               | 10 +--
 .../c-c++-common/asan/asan-stack-small.c      | 28 ++++++
 4 files changed, 102 insertions(+), 46 deletions(-)
 create mode 100644 gcc/testsuite/c-c++-common/asan/asan-stack-small.c

diff --git a/gcc/asan.c b/gcc/asan.c
index 235e219479d..5b8ae77b0c6 100644
--- a/gcc/asan.c
+++ b/gcc/asan.c
@@ -1158,15 +1158,24 @@ asan_pp_string (pretty_printer *pp)
 /* Return a CONST_INT representing 4 subsequent shadow memory bytes.  */
 
 static rtx
-asan_shadow_cst (unsigned char shadow_bytes[4])
+asan_emit_redzone_payload (rtx shadow_mem, unsigned int payload_size,
+			   unsigned char shadow_byte,
+			   unsigned extra_shadow_byte)
 {
-  int i;
-  unsigned HOST_WIDE_INT val = 0;
-  gcc_assert (WORDS_BIG_ENDIAN == BYTES_BIG_ENDIAN);
-  for (i = 0; i < 4; i++)
-    val |= (unsigned HOST_WIDE_INT) shadow_bytes[BYTES_BIG_ENDIAN ? 3 - i : i]
-	   << (BITS_PER_UNIT * i);
-  return gen_int_mode (val, SImode);
+  if (extra_shadow_byte)
+    {
+      emit_move_insn (shadow_mem, gen_int_mode (extra_shadow_byte, QImode));
+      shadow_mem = adjust_address (shadow_mem, VOIDmode, 1);
+      --payload_size;
+    }
+
+  for (unsigned i = 0; i < payload_size; i++)
+    {
+      emit_move_insn (shadow_mem, gen_int_mode (shadow_byte, QImode));
+      shadow_mem = adjust_address (shadow_mem, VOIDmode, 1);
+    }
+
+  return shadow_mem;
 }
 
 /* Clear shadow memory at SHADOW_MEM, LEN bytes.  Can't call a library call here
@@ -1256,7 +1265,6 @@ asan_emit_stack_protection (rtx base, rtx pbase, unsigned int alignb,
   rtx_code_label *lab;
   rtx_insn *insns;
   char buf[32];
-  unsigned char shadow_bytes[4];
   HOST_WIDE_INT base_offset = offsets[length - 1];
   HOST_WIDE_INT base_align_bias = 0, offset, prev_offset;
   HOST_WIDE_INT asan_frame_size = offsets[0] - base_offset;
@@ -1398,7 +1406,7 @@ asan_emit_stack_protection (rtx base, rtx pbase, unsigned int alignb,
 		     + (base_align_bias >> ASAN_SHADOW_SHIFT));
   gcc_assert (asan_shadow_set != -1
 	      && (ASAN_RED_ZONE_SIZE >> ASAN_SHADOW_SHIFT) == 4);
-  shadow_mem = gen_rtx_MEM (SImode, shadow_base);
+  shadow_mem = gen_rtx_MEM (QImode, shadow_base);
   set_mem_alias_set (shadow_mem, asan_shadow_set);
   if (STRICT_ALIGNMENT)
     set_mem_align (shadow_mem, (GET_MODE_ALIGNMENT (SImode)));
@@ -1408,39 +1416,39 @@ asan_emit_stack_protection (rtx base, rtx pbase, unsigned int alignb,
       if (l == 2)
 	cur_shadow_byte = ASAN_STACK_MAGIC_RIGHT;
       offset = offsets[l - 1];
-      if ((offset - base_offset) & (ASAN_RED_ZONE_SIZE - 1))
+
+      char extra_shadow_byte = 0;
+
+      /* If a red-zone is not aligned to ASAN_SHADOW_GRANULARITY then
+	 the previous stack variable has size % ASAN_SHADOW_GRANULARITY != 0.
+	 In that case we have to emit one extra byte that will describe
+	 how many bytes (out of ASAN_SHADOW_GRANULARITY) can be accessed.  */
+      if ((offset - base_offset) & (ASAN_SHADOW_GRANULARITY - 1))
 	{
-	  int i;
 	  HOST_WIDE_INT aoff
 	    = base_offset + ((offset - base_offset)
-			     & ~(ASAN_RED_ZONE_SIZE - HOST_WIDE_INT_1));
-	  shadow_mem = adjust_address (shadow_mem, VOIDmode,
-				       (aoff - prev_offset)
-				       >> ASAN_SHADOW_SHIFT);
-	  prev_offset = aoff;
-	  for (i = 0; i < 4; i++, aoff += ASAN_SHADOW_GRANULARITY)
-	    if (aoff < offset)
-	      {
-		if (aoff < offset - (HOST_WIDE_INT)ASAN_SHADOW_GRANULARITY + 1)
-		  shadow_bytes[i] = 0;
-		else
-		  shadow_bytes[i] = offset - aoff;
-	      }
-	    else
-	      shadow_bytes[i] = ASAN_STACK_MAGIC_MIDDLE;
-	  emit_move_insn (shadow_mem, asan_shadow_cst (shadow_bytes));
+			     & ~(ASAN_SHADOW_GRANULARITY - HOST_WIDE_INT_1));
+	  extra_shadow_byte = offset - aoff;
 	  offset = aoff;
 	}
-      while (offset <= offsets[l - 2] - ASAN_RED_ZONE_SIZE)
-	{
-	  shadow_mem = adjust_address (shadow_mem, VOIDmode,
-				       (offset - prev_offset)
-				       >> ASAN_SHADOW_SHIFT);
-	  prev_offset = offset;
-	  memset (shadow_bytes, cur_shadow_byte, 4);
-	  emit_move_insn (shadow_mem, asan_shadow_cst (shadow_bytes));
-	  offset += ASAN_RED_ZONE_SIZE;
-	}
+
+
+      /* Adjust shadow memory to beginning of shadow memory
+	 (or one byte earlier).  */
+      shadow_mem = adjust_address (shadow_mem, VOIDmode,
+				   (offset - prev_offset)
+				   >> ASAN_SHADOW_SHIFT);
+
+      /* Calculate size of red zone payload.  */
+      unsigned int payload_size
+	= (offsets[l - 2] - offset) / ASAN_SHADOW_GRANULARITY;
+      offset += payload_size * ASAN_SHADOW_GRANULARITY;
+
+      /* Emit red zone content.  */
+      shadow_mem = asan_emit_redzone_payload (shadow_mem, payload_size,
+					      cur_shadow_byte, extra_shadow_byte);
+
+      prev_offset = offset;
       cur_shadow_byte = ASAN_STACK_MAGIC_MIDDLE;
     }
   do_pending_stack_adjust ();
@@ -1501,7 +1509,7 @@ asan_emit_stack_protection (rtx base, rtx pbase, unsigned int alignb,
   for (l = length; l; l -= 2)
     {
       offset = base_offset + ((offsets[l - 1] - base_offset)
-			     & ~(ASAN_RED_ZONE_SIZE - HOST_WIDE_INT_1));
+			     & ~(ASAN_SHADOW_GRANULARITY - HOST_WIDE_INT_1));
       if (last_offset + last_size != offset)
 	{
 	  shadow_mem = adjust_address (shadow_mem, VOIDmode,
@@ -1513,7 +1521,7 @@ asan_emit_stack_protection (rtx base, rtx pbase, unsigned int alignb,
 	  last_size = 0;
 	}
       last_size += base_offset + ((offsets[l - 2] - base_offset)
-				  & ~(ASAN_RED_ZONE_SIZE - HOST_WIDE_INT_1))
+				  & ~(ASAN_SHADOW_GRANULARITY - HOST_WIDE_INT_1))
 		   - offset;
 
       /* Unpoison shadow memory that corresponds to a variable that is 
@@ -1534,7 +1542,7 @@ asan_emit_stack_protection (rtx base, rtx pbase, unsigned int alignb,
 			   "%s (%" PRId64 " B)\n", n, size);
 		}
 
-		last_size += size & ~(ASAN_RED_ZONE_SIZE - HOST_WIDE_INT_1);
+		last_size += size & ~(ASAN_SHADOW_GRANULARITY - HOST_WIDE_INT_1);
 	    }
 	}
     }
diff --git a/gcc/asan.h b/gcc/asan.h
index 2f431b4f938..c0736a0cb6f 100644
--- a/gcc/asan.h
+++ b/gcc/asan.h
@@ -102,6 +102,26 @@ asan_red_zone_size (unsigned int size)
   return c ? 2 * ASAN_RED_ZONE_SIZE - c : ASAN_RED_ZONE_SIZE;
 }
 
+/* Return how much space a stack variable occupies on the stack,
+   including space for a redzone.  */
+
+static inline unsigned int
+asan_var_and_redzone_size (unsigned int size)
+{
+  if (size <= 4)
+    return 16;
+  else if (size <= 16)
+    return 32;
+  else if (size <= 128)
+    return 32 + size;
+  else if (size <= 512)
+    return 64 + size;
+  else if (size <= 4096)
+    return 128 + size;
+  else
+    return 256 + size;
+}
+
 extern bool set_asan_shadow_offset (const char *);
 
 extern void set_sanitized_sections (const char *);
diff --git a/gcc/cfgexpand.c b/gcc/cfgexpand.c
index 35ca276e4ad..e84c82599e6 100644
--- a/gcc/cfgexpand.c
+++ b/gcc/cfgexpand.c
@@ -1125,13 +1125,13 @@ expand_stack_vars (bool (*pred) (size_t), struct stack_vars_data *data)
 	      && stack_vars[i].size.is_constant ())
 	    {
 	      prev_offset = align_base (prev_offset,
-					MAX (alignb, ASAN_RED_ZONE_SIZE),
+					MAX (alignb, ASAN_SHADOW_GRANULARITY),
 					!FRAME_GROWS_DOWNWARD);
 	      tree repr_decl = NULL_TREE;
+	      poly_uint64 size =  asan_var_and_redzone_size (stack_vars[i].size.to_constant ());
 	      offset
-		= alloc_stack_frame_space (stack_vars[i].size
-					   + ASAN_RED_ZONE_SIZE,
-					   MAX (alignb, ASAN_RED_ZONE_SIZE));
+		= alloc_stack_frame_space (size,
+					   MAX (alignb, ASAN_SHADOW_GRANULARITY));
 
 	      data->asan_vec.safe_push (prev_offset);
 	      /* Allocating a constant amount of space from a constant
@@ -2254,7 +2254,7 @@ expand_used_vars (void)
 			 & ~(data.asan_alignb - HOST_WIDE_INT_1)) - sz;
 	  /* Allocating a constant amount of space from a constant
 	     starting offset must give a constant result.  */
-	  offset = (alloc_stack_frame_space (redzonesz, ASAN_RED_ZONE_SIZE)
+	  offset = (alloc_stack_frame_space (redzonesz, ASAN_SHADOW_GRANULARITY)
 		    .to_constant ());
 	  data.asan_vec.safe_push (prev_offset);
 	  data.asan_vec.safe_push (offset);
diff --git a/gcc/testsuite/c-c++-common/asan/asan-stack-small.c b/gcc/testsuite/c-c++-common/asan/asan-stack-small.c
new file mode 100644
index 00000000000..11a56b8db4c
--- /dev/null
+++ b/gcc/testsuite/c-c++-common/asan/asan-stack-small.c
@@ -0,0 +1,28 @@
+/* { dg-do run } */
+
+char *pa;
+char *pb;
+char *pc;
+
+void access (volatile char *ptr)
+{
+  *ptr = 'x';
+}
+
+int main (int argc, char **argv)
+{
+  char a;
+  char b;
+  char c;
+
+  pa = &a;
+  pb = &b;
+  pc = &c;
+
+  access (pb);
+  access (pc);
+  // access 'b' here
+  access (pa + 32);
+
+  return 0;
+}
-- 
2.19.0

