Speedup int_bit_from_pos


Hi,
int_bit_position is used by ipa-devirt's type walking code.  It has become a bottleneck
since I introduced speculation into contexts (I plan to solve this by changing the
way I cache results).  But this patch seems to make sense anyway: we do not need to go
through folding:
tree
bit_from_pos (tree offset, tree bitpos)
{
  if (TREE_CODE (offset) == PLUS_EXPR)
    offset = size_binop (PLUS_EXPR,
                         fold_convert (bitsizetype, TREE_OPERAND (offset, 0)),
                         fold_convert (bitsizetype, TREE_OPERAND (offset, 1)));
  else
    offset = fold_convert (bitsizetype, offset);
  return size_binop (PLUS_EXPR, bitpos,
                     size_binop (MULT_EXPR, offset, bitsize_unit_node));
}

Because all the callers care only about constant offsets and go via int_bit_position,
which already expects the result to fit in a HOST_WIDE_INT, there is no need to go
through fold_convert at all; it seems to make sense to implement a quick path for that case.
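
For reference, here is a hypothetical sketch (not the actual ipa-devirt code; the
helper name and the WANT_BITS parameter are made up for illustration) of the kind of
field walk where this shows up.  int_bit_position is called for every FIELD_DECL
visited, so the fold_convert calls inside bit_from_pos dominate the profile:

/* Hypothetical example: walk the fields of TYPE and return the FIELD_DECL
   that starts at bit offset WANT_BITS, or NULL_TREE if there is none.
   Assumes constant field offsets, as in the C++ class layouts ipa-devirt
   walks.  */

static tree
find_field_at_bit_offset (tree type, HOST_WIDE_INT want_bits)
{
  for (tree fld = TYPE_FIELDS (type); fld; fld = DECL_CHAIN (fld))
    if (TREE_CODE (fld) == FIELD_DECL
	&& int_bit_position (fld) == want_bits)
      return fld;
  return NULL_TREE;
}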

Bootstrap/regtest x86_64 in progress, OK?

Honza

	* stor-layout.c (int_bit_from_pos): New function.
	* stor-layout.h (int_bit_from_pos): Declare.
	* tree.c (int_bit_position): Use it.
Index: stor-layout.c
===================================================================
--- stor-layout.c	(revision 215409)
+++ stor-layout.c	(working copy)
@@ -858,6 +858,21 @@
 		     size_binop (MULT_EXPR, offset, bitsize_unit_node));
 }
 
+/* Like bit_from_pos, but return the result as HOST_WIDE_INT.
+   OFFSET and BITPOS must be constant.  */
+
+HOST_WIDE_INT
+int_bit_from_pos (tree offset, tree bitpos)
+{
+  HOST_WIDE_INT off;
+  if (TREE_CODE (offset) == PLUS_EXPR)
+    off = (tree_to_shwi (TREE_OPERAND (offset, 0))
+	   + tree_to_shwi (TREE_OPERAND (offset, 1)));
+  else
+    off = tree_to_shwi (offset);
+  return off * BITS_PER_UNIT + tree_to_shwi (bitpos);
+}
+
 /* Return the combined truncated byte position for the byte offset OFFSET and
    the bit position BITPOS.  */
 
Index: stor-layout.h
===================================================================
--- stor-layout.h	(revision 215409)
+++ stor-layout.h	(working copy)
@@ -27,6 +27,7 @@
                                                 unsigned int);
 extern record_layout_info start_record_layout (tree);
 extern tree bit_from_pos (tree, tree);
+extern HOST_WIDE_INT int_bit_from_pos (tree, tree);
 extern tree byte_from_pos (tree, tree);
 extern void pos_from_bit (tree *, tree *, unsigned int, tree);
 extern void normalize_offset (tree *, tree *, unsigned int);
Index: tree.c
===================================================================
--- tree.c	(revision 215409)
+++ tree.c	(working copy)
@@ -2839,7 +2839,8 @@
 HOST_WIDE_INT
 int_bit_position (const_tree field)
 {
-  return tree_to_shwi (bit_position (field));
+  return int_bit_from_pos (DECL_FIELD_OFFSET (field),
+			   DECL_FIELD_BIT_OFFSET (field));
 }
 
 /* Return the byte position of FIELD, in bytes from the start of the record.

