[gcc(refs/users/aoliva/heads/testme)] hardcfr: check before potential sibcalls

Alexandre Oliva aoliva@gcc.gnu.org
Sat Aug 27 03:37:20 GMT 2022


https://gcc.gnu.org/g:72739a9a3d77c953104d06c6e041b8c62b623088

commit 72739a9a3d77c953104d06c6e041b8c62b623088
Author: Alexandre Oliva <oliva@adacore.com>
Date:   Fri Aug 26 02:51:16 2022 -0300

    hardcfr: check before potential sibcalls

Diff:
---
 .../doc/gnat_rm/security_hardening_features.rst    |  40 +++---
 gcc/common.opt                                     |   4 +
 gcc/doc/invoke.texi                                |  27 +++-
 gcc/gimple-harden-control-flow.cc                  | 149 ++++++++++++++++-----
 .../c-c++-common/torture/harden-cfr-notail.c       |   8 ++
 .../c-c++-common/torture/harden-cfr-tail.c         |  25 +++-
 6 files changed, 190 insertions(+), 63 deletions(-)

diff --git a/gcc/ada/doc/gnat_rm/security_hardening_features.rst b/gcc/ada/doc/gnat_rm/security_hardening_features.rst
index 9d762e7c8cc..8c065de49d5 100644
--- a/gcc/ada/doc/gnat_rm/security_hardening_features.rst
+++ b/gcc/ada/doc/gnat_rm/security_hardening_features.rst
@@ -263,25 +263,33 @@ For each block that is marked as visited, the mechanism checks that at
 least one of its predecessors, and at least one of its successors, are
 also marked as visited.
 
-Verification is performed just before returns and tail calls.
+Verification is performed just before a subprogram returns.
+
+Verification may optionally also be performed before returning calls, i.e.,
+calls whose result, if any, is immediately returned to the caller,
+whether or not they are optimized to tail calls.  This may be disabled
+with :switch:`-fno-hardcfr-check-returning-calls`.
+
+Any subprogram from which an exception may escape, i.e., that may
+raise or propagate an exception that isn't handled internally, is
+automatically enclosed by a cleanup handler that performs
+verification, unless this is disabled with
+:switch:`-fno-hardcfr-check-exceptions`.
+
 Verification may also be performed before noreturn calls, whether only
 nothrow ones, with :switch:`-fhardcfr-check-noreturn-calls=nothrow`,
 or all of them, with :switch:`-fhardcfr-check-noreturn-calls=always`.
-Furthermore, any subprogram from which an exception may escape, i.e.,
-that may raise or propagate an exception that isn't handled
-internally, is automatically enclosed by a cleanup handler that
-performs verification, unless this is disabled with
-:switch:`-fno-hardcfr-check-exceptions`.  When a noreturn call returns
-control to its caller through an exception, verification may have
-already been performed before the call, assuming
-:switch:`-fhardcfr-check-noreturn-calls=always` is in effect.  The
-compiler arranges for already-checked noreturn calls without a
-preexisting handler to bypass the implicitly-added cleanup handler and
-thus the redundant check, but calls with a local handler will use it,
-which modifies the set of visited blocks, and checking will take place
-againwhen the caller reaches the next verification point, whether it
-is a return or reraise statement after the exception is otherwise
-handled, or even another noreturn call.
+
+When a noreturn call returns control to its caller through an
+exception, verification may have already been performed before the
+call, assuming :switch:`-fhardcfr-check-noreturn-calls=always` is in
+effect.  The compiler arranges for already-checked noreturn calls
+without a preexisting handler to bypass the implicitly-added cleanup
+handler and thus the redundant check, but calls with a local handler
+will use it, which modifies the set of visited blocks, and checking
+will take place again when the caller reaches the next verification
+point, whether it is a return or reraise statement after the exception
+is otherwise handled, or even another noreturn call.
 
 The instrumentation for hardening with control flow redundancy can be
 observed in dump files generated by the command-line option
diff --git a/gcc/common.opt b/gcc/common.opt
index 57f01330fcf..09cc46d7dbc 100644
--- a/gcc/common.opt
+++ b/gcc/common.opt
@@ -1801,6 +1801,10 @@ fharden-control-flow-redundancy
 Common Var(flag_harden_control_flow_redundancy) Optimization
 Harden control flow by recording and checking execution paths.
 
+fhardcfr-check-returning-calls
+Common Var(flag_harden_control_flow_redundancy_check_returning_calls) Init(-1) Optimization
+Check CFR execution paths also before calls followed by returns of their results.
+
 fhardcfr-check-exceptions
 Common Var(flag_harden_control_flow_redundancy_check_exceptions) Init(-1) Optimization
 Check CFR execution paths also when exiting a function through an exception.
diff --git a/gcc/doc/invoke.texi b/gcc/doc/invoke.texi
index 1a8409d6e3e..3fcf3db6046 100644
--- a/gcc/doc/invoke.texi
+++ b/gcc/doc/invoke.texi
@@ -625,6 +625,7 @@ Objective-C and Objective-C++ Dialects}.
 -fcf-protection=@r{[}full@r{|}branch@r{|}return@r{|}none@r{|}check@r{]} @gol
 -fharden-compares -fharden-conditional-branches @gol
 -fharden-control-flow-redundancy  -fhardcfr-check-exceptions  @gol
+-fhardcfr-check-returning-calls  @gol
 -fhardcfr-check-noreturn-calls=@r{[}always@r{|}nothrow@r{|}never@r{]}  @gol
 -fstack-protector  -fstack-protector-all  -fstack-protector-strong @gol
 -fstack-protector-explicit  -fstack-check @gol
@@ -16557,12 +16558,13 @@ conditionals.
 @item -fharden-control-flow-redundancy
 @opindex fharden-control-flow-redundancy
 Emit extra code to set booleans when entering basic blocks, and to
-verify, at function exits (returns, before tail calls, and optionally,
-before escaping exceptions with @option{-fhardcfr-check-exceptions}, and
-before noreturn calls with @option{-fhardcfr-check-noreturn-calls}), and
-trap when they indicate an execution path that is incompatible with the
-control flow graph.  Tuning options @option{--param hardcfr-max-blocks}
-and @option{--param hardcfr-max-inline-blocks} are available.
+verify, at function exits (returns, and optionally, before escaping
+exceptions with @option{-fhardcfr-check-exceptions}, before returning
+calls with @option{-fhardcfr-check-returning-calls}, and before noreturn
+calls with @option{-fhardcfr-check-noreturn-calls}), and trap when they
+indicate an execution path that is incompatible with the control flow
+graph.  Tuning options @option{--param hardcfr-max-blocks} and
+@option{--param hardcfr-max-inline-blocks} are available.
 
 @item -fhardcfr-check-exceptions
 @opindex fhardcfr-check-exceptions
@@ -16573,6 +16575,19 @@ escape points, as if the function body was wrapped with a cleanup
 handler that performed the check and reraised.  This option is enabled
 by default; use @option{-fno-hardcfr-check-exceptions} to disable it.
 
+@item -fhardcfr-check-returning-calls
+@opindex fhardcfr-check-returning-calls
+@opindex fno-hardcfr-check-returning-calls
+When @option{-fharden-control-flow-redundancy} is active, check the
+recorded execution path against the control flow graph before function
+calls immediately followed by a return of their result, whether or
+not they are optimized to tail calls.  Tail-call optimization takes
+place too late to affect control flow redundancy, but calls annotated
+as mandatory tail calls, and any calls marked early enough as
+potential tail calls, get the same treatment even if not followed by
+a return of their result.  This option is enabled by default; use
+@option{-fno-hardcfr-check-returning-calls} to disable it.
+
 @item -fhardcfr-check-noreturn-calls=@r{[}always@r{|}nothrow@r{|}never@r{]}
 @opindex fhardcfr-check-noreturn-calls
 When @option{-fharden-control-flow-redundancy} is active, check the
diff --git a/gcc/gimple-harden-control-flow.cc b/gcc/gimple-harden-control-flow.cc
index 6d03e24d38a..7e2bee881aa 100644
--- a/gcc/gimple-harden-control-flow.cc
+++ b/gcc/gimple-harden-control-flow.cc
@@ -128,6 +128,38 @@ public:
 
 }
 
+/* Return TRUE if the call at GSI is a potential tail call that
+   should be handled as such.  */
+static bool
+potential_tail_call_p (gimple_stmt_iterator gsi)
+{
+  basic_block bb = gsi_bb (gsi);
+  if (!single_succ_p (bb)
+      || single_succ (bb) != EXIT_BLOCK_PTR_FOR_FN (cfun))
+    return false;
+
+  if (gsi_end_p (gsi))
+    return false;
+
+  gimple *call = gsi_stmt (gsi);
+  if (!call || !is_a <gcall *> (call))
+    return false;
+
+  gsi_next_nondebug (&gsi);
+  if (gsi_end_p (gsi))
+    return false;
+
+  gimple *ret = gsi_stmt (gsi);
+  if (!is_a <greturn *> (ret))
+    return false;
+
+  if (gimple_return_retval (as_a <greturn *> (ret))
+      != gimple_call_lhs (as_a <gcall *> (call)))
+    return false;
+
+  return true;
+}
+
 class rt_bb_visited
 {
   /* Use a sufficiently wide unsigned type to hold basic block numbers.  */
@@ -373,6 +405,45 @@ public:
     gimple_seq_add_stmt (&ckseq, ckfail_init);
   }
 
+  /* Return TRUE if a checkpoint is to be inserted before the
+     statement at GSIP (it may move).  */
+  bool check_before_p (gimple_stmt_iterator *gsip)
+  {
+    bool moved = false;
+    gimple_stmt_iterator gsi = *gsip;
+    if (gsi_end_p (gsi))
+      return false;
+
+    gimple *ret = gsi_stmt (gsi);
+
+    if (ret && is_a <gresx *> (ret))
+      return true;
+
+    if (ret && is_a <greturn *> (ret))
+      {
+	moved = true;
+	gsi_prev_nondebug (&gsi);
+	if (!gsi_end_p (gsi))
+	  ret = gsi_stmt (gsi);
+      }
+
+    if (!ret || !is_a <gcall *> (ret))
+      return false;
+
+    if (gimple_call_noreturn_p (ret)
+	|| (flag_harden_control_flow_redundancy_check_returning_calls
+	    && (gimple_call_must_tail_p (as_a <gcall *> (ret))
+		|| gimple_call_tail_p (as_a <gcall *> (ret))
+		|| potential_tail_call_p (gsi))))
+      {
+	if (moved)
+	  *gsip = gsi;
+	return true;
+      }
+
+    return false;
+  }
+
   /* Insert SEQ before a resx, or noreturn or tail call at the end of
      INSBB, and return TRUE, otherwise return FALSE.  */
   bool insert_exit_check (gimple_seq seq, basic_block insbb)
@@ -392,25 +463,8 @@ public:
        them.  This works for must-tail calls, but tail calling as an
        optimization is detected too late for us.  */
     gimple_stmt_iterator gsi = gsi_last_bb (insbb);
-    gimple *ret = gsi_stmt (gsi);
-
-    if (ret && is_a <gresx *> (ret))
-      {
-	gsi_insert_seq_before (&gsi, seq, GSI_SAME_STMT);
-	return true;
-      }
 
-    if (ret && is_a <greturn *> (ret))
-      {
-	gsi_prev (&gsi);
-	if (!gsi_end_p (gsi))
-	  ret = gsi_stmt (gsi);
-      }
-    if (ret
-	&& is_a <gcall *> (ret)
-	&& (gimple_call_noreturn_p (ret)
-	    || gimple_call_must_tail_p (as_a <gcall *> (ret))
-	    || gimple_call_tail_p (as_a <gcall *> (ret))))
+    if (check_before_p (&gsi))
       gsi_insert_seq_before (&gsi, seq, GSI_SAME_STMT);
     else
       return false;
@@ -419,11 +473,14 @@ public:
   }
 
   /* Insert SEQ on E, or close enough (e.g., before a noreturn or tail
-     call at the end of E->src).  */
-  void insert_exit_check (gimple_seq seq, edge e)
+     call at the end of E->src).  Return TRUE if insertion was before
+     a stmt in E->src rather than at the edge.  */
+  bool insert_exit_check (gimple_seq seq, edge e)
   {
-    if (!insert_exit_check (seq, e->src))
+    bool ret = insert_exit_check (seq, e->src);
+    if (!ret)
       gsi_insert_seq_on_edge_immediate (e, seq);
+    return ret;
   }
 
   /* Add checking code on every exit edge, and initialization code on
@@ -504,13 +561,21 @@ public:
 
 	    edge e = EDGE_PRED (EXIT_BLOCK_PTR_FOR_FN (cfun), i);
 
+	    bool before_stmt_p = insert_exit_check (seq, e);
+
 	    if (dump_file)
-	      fprintf (dump_file,
-		       "Inserting out-of-line check in"
-		       " block %i's edge to exit.\n",
+	      {
+		if (before_stmt_p)
+		  fprintf (dump_file,
+			   "Inserting out-of-line check"
+			   " before stmt in block %i.\n",
 		       e->src->index);
-
-	    insert_exit_check (seq, e);
+		else
+		  fprintf (dump_file,
+			   "Inserting out-of-line check in"
+			   " block %i's edge to exit.\n",
+			   e->src->index);
+	      }
 
 	    gcc_checking_assert (!bitmap_bit_p (noreturn_blocks, e->src->index));
 	  }
@@ -544,14 +609,24 @@ public:
 
 	if (!count_noreturn)
 	  {
-	    if (dump_file)
-	      fprintf (dump_file,
-		       "Inserting inline check in"
-		       " block %i's edge to exit.\n",
-		       single_pred (EXIT_BLOCK_PTR_FOR_FN (cfun))->index);
+	    bool before_stmt_p
+	      = insert_exit_check (ckseq,
+				   single_pred_edge
+				   (EXIT_BLOCK_PTR_FOR_FN (cfun)));
 
-	    insert_exit_check (ckseq,
-			       single_pred_edge (EXIT_BLOCK_PTR_FOR_FN (cfun)));
+	    if (dump_file)
+	      {
+		if (before_stmt_p)
+		  fprintf (dump_file,
+			   "Inserting inline check"
+			   " before stmt in block %i.\n",
+			   single_pred (EXIT_BLOCK_PTR_FOR_FN (cfun))->index);
+		else
+		  fprintf (dump_file,
+			   "Inserting inline check in"
+			   " block %i's edge to exit.\n",
+			   single_pred (EXIT_BLOCK_PTR_FOR_FN (cfun))->index);
+	      }
 	  }
 	else
 	  {
@@ -839,12 +914,14 @@ pass_harden_control_flow_redundancy::execute (function *fun)
 	      if (lookup_stmt_eh_lp (stmt) != 0)
 		continue;
 
-	      /* Don't split blocks at, nor add EH edvges to, tail
+	      /* Don't split blocks at, nor add EH edges to, tail
 		 calls, we will add verification before the call
 		 anyway.  */
-	      if (is_a <gcall *> (stmt)
+	      if (flag_harden_control_flow_redundancy_check_returning_calls
+		  && is_a <gcall *> (stmt)
 		  && (gimple_call_must_tail_p (as_a <gcall *> (stmt))
-		      || gimple_call_tail_p (as_a <gcall *> (stmt))))
+		      || gimple_call_tail_p (as_a <gcall *> (stmt))
+		      || potential_tail_call_p (gsi)))
 		continue;
 
 	      if (!gsi_one_before_end_p (gsi))
diff --git a/gcc/testsuite/c-c++-common/torture/harden-cfr-notail.c b/gcc/testsuite/c-c++-common/torture/harden-cfr-notail.c
new file mode 100644
index 00000000000..92566403ec7
--- /dev/null
+++ b/gcc/testsuite/c-c++-common/torture/harden-cfr-notail.c
@@ -0,0 +1,8 @@
+/* { dg-do compile } */
+/* { dg-options "-fharden-control-flow-redundancy -fno-hardcfr-check-exceptions -fno-hardcfr-check-returning-calls -fdump-tree-hardcfr -ffat-lto-objects" } */
+
+#include "harden-cfr-tail.c"
+
+/* Inline checking after the calls, disabling tail calling.  */
+/* { dg-final { scan-tree-dump-times "__builtin_trap" 3 "hardcfr" } } */
+/* { dg-final { scan-tree-dump-times "Inserting inline check before stmt" 0 "hardcfr" } } */
diff --git a/gcc/testsuite/c-c++-common/torture/harden-cfr-tail.c b/gcc/testsuite/c-c++-common/torture/harden-cfr-tail.c
index 09daa70fa3a..18f03295193 100644
--- a/gcc/testsuite/c-c++-common/torture/harden-cfr-tail.c
+++ b/gcc/testsuite/c-c++-common/torture/harden-cfr-tail.c
@@ -2,8 +2,10 @@
 /* { dg-options "-fharden-control-flow-redundancy -fno-hardcfr-check-exceptions -fdump-tree-hardcfr -ffat-lto-objects" } */
 
 /* We'd like to check that we insert checking so as to not disrupt tail calls,
-   but unfortunately mandatory tail calls are not available in C, and optimizing
-   calls as tail calls only takes place after hardcfr.  */
+   but unfortunately mandatory tail calls are not available in C, and
+   optimizing calls as tail calls only takes place after hardcfr, so we insert
+   checking before calls followed by return stmts with the same return value,
+   because they might end up as tail calls.  */
 
 extern int g(int i);
 
@@ -11,7 +13,20 @@ int f(int i) {
   return g (i);
 }
 
+extern void g2(int i);
+
+void f2(int i) {
+  g2 (i);
+}
+
+int f3(int i) {
+  /* Not regarded as tail call, returning value other than callee's returned
+     value.  */
+  g (i);
+  return i;
+}
+
 /* Inline checking before the tail call.  */
-/* { dg-final { scan-tree-dump-times "__builtin_trap" 1 "hardcfr" } } */
-/* Inline checking before the tail call.  */
-/* { dg-final { scan-tree-dump-times "\\\[tail\]" 1 "hardcfr" { xfail *-*-* } } } */
+/* { dg-final { scan-tree-dump-times "__builtin_trap" 3 "hardcfr" } } */
+/* f and f2 only.  */
+/* { dg-final { scan-tree-dump-times "Inserting inline check before stmt" 2 "hardcfr" } } */

