This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.


Re: fixed_scalar_and_varying_struct_p and varies_p


Thanks for both replies.

Richard Guenther <richard.guenther@gmail.com> writes:
> On Thu, Dec 29, 2011 at 8:48 PM, Eric Botcazou <ebotcazou@adacore.com> wrote:
>>> fixed_scalar_and_varying_struct_p passes an _address_ rather than a MEM.
>>> So in these cases fixed_scalar_and_varying_struct_p effectively becomes
>>> a no-op on targets that don't allow MEMs in addresses and takes on
>>> suspicious semantics for those that do.  In the former case, every
>>> address is treated as "unvarying" and f_s_a_v_s_p always returns null.
>>> In the latter case, things like REG addresses are (wrongly) treated as
>>> unvarying while a MEM address might correctly be treated as varying,
>>> leading to false positives.
>>>
>>> It looks like this goes back to when fixed_scalar_and_varying_struct_p
>>> was added in r24759 (1999).
>>
>> Does this mean that MEM_IN_STRUCT_P and MEM_SCALAR_P have also been
>> effectively disabled since then?

Some important callers (cse.c and sched-deps.c) do use the proper
varies_p routine, so it probably isn't quite that extreme.  But...

>>> AIUI, the true_dependence varies_p parameter exists for the benefit
>>> of CSE, so that it can use its local cse_rtx_varies_p function.
>>> All other callers should be using rtx_varies_p instead.  Question is,
>>> should I make that change, or is it time to get rid of
>>> fixed_scalar_and_varying_struct_p instead?
>>
>> I'd vote for the latter (and for eliminating MEM_IN_STRUCT_P and MEM_SCALAR_P
>> in the process, if the answer to the above question is positive), there is no
>> point in resurrecting this now IMO.
>
> I agree.  The tree level routines should be able to figure out most,
> if not all, cases on their own via rtx_refs_may_alias_p (similar to
> the nonoverlapping_component_refs case, which we could simply delete
> as well).

...that's 2 votes for and none so far against. :-)

I compiled the cc1 .ii files on x86_64-linux-gnu with and without
fixed_scalar_and_varying_struct_p.  There were 19 changes in total,
all of them cases where sched2 was able to reorder two memory accesses
because of f_s_a_v_s_p.  I've attached the diff below.

A good example is:

  if (bit_obstack)
    {
      element = bit_obstack->elements;

      if (element)
	/* Use up the inner list first before looking at the next
	   element of the outer list.  */
	if (element->next)
	  {
	    bit_obstack->elements = element->next;
	    bit_obstack->elements->prev = element->prev;
	  }
	else
	  /*  Inner list was just a singleton.  */
	  bit_obstack->elements = element->prev;
      else
	element = XOBNEW (&bit_obstack->obstack, bitmap_element);
    }
  else
    {
      element = bitmap_ggc_free;
      if (element)
	/* Use up the inner list first before looking at the next
	   element of the outer list.  */
	if (element->next)
	  {
	    bitmap_ggc_free = element->next;
	    bitmap_ggc_free->prev = element->prev;
	  }
	else
	  /*  Inner list was just a singleton.  */
	  bitmap_ggc_free = element->prev;
      else
	element = ggc_alloc_bitmap_element_def ();
    }

from bitmap.c, specifically:

	    bitmap_ggc_free = element->next;
	    bitmap_ggc_free->prev = element->prev;

Without f_s_a_v_s_p, sched2 couldn't tell that element->prev didn't
alias bitmap_ggc_free.  This, in turn, is because cfgcleanup had
considered merging this block with:

	    bit_obstack->elements = element->next;
	    bit_obstack->elements->prev = element->prev;

It called merge_memattrs for each pair of instructions that it was
thinking of merging, and because the element->next and element->prev
MEMs were based on different SSA names, we lost the MEM_EXPR completely.
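
To make the effect concrete, here is a minimal sketch -- not the actual
merge_memattrs code in cfgcleanup.c -- of the conservative step that
matters here: when the two candidate MEMs carry different MEM_EXPRs,
the only attribute that is safe to keep on the merged instruction is no
MEM_EXPR at all.  (merge_mem_exprs_sketch is an invented name; MEM_P,
MEM_EXPR, set_mem_expr and operand_equal_p are the usual ones.)

static void
merge_mem_exprs_sketch (rtx x, rtx y)
{
  if (!MEM_P (x) || !MEM_P (y))
    return;

  /* element_1->next and element_2->next are based on different SSA
     names, so their MEM_EXPRs compare unequal...  */
  if (MEM_EXPR (x) && MEM_EXPR (y)
      && !operand_equal_p (MEM_EXPR (x), MEM_EXPR (y), 0))
    {
      /* ...and the only conservative merge is to drop both, which is
	 what later stops sched2 from disambiguating element->prev
	 and bitmap_ggc_free.  */
      set_mem_expr (x, NULL_TREE);
      set_mem_expr (y, NULL_TREE);
    }
}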

As it happens, we decided not to merge the blocks after all.
So an obvious first observation is that query functions like
flow_find_cross_jump and flow_find_head_matching_sequence shouldn't
change the rtl.  We should only do that once we've decided which
instructions we're actually going to merge.

Of course, that's not a trivial change.  It's easy to make
try_head_merge_bb call merge_memattrs during merging, but less
easy for try_crossjump_to_edge and cond_exec_process_if_block.
(Note that the latter, like try_head_merge_bb, can end up merging
fewer instructions than flow_find_* saw.)

But does the choice of SSA name actually count for anything this
late on?  Should we consider MEM_EXPRs node_X->prev and node_Y->prev
to be "similar enough", if node_X and node_Y have equal types?

I've attached a patch to remove fixed_scalar_and_varying_struct_p
just in case it's OK.  Tested on mips64-linux-gnu.

Also, as Eric says, this is really the only middle-end use of
MEM_SCALAR_P and MEM_IN_STRUCT_P.  It looks like the only other
use is in config/m32c/m32c.c:m32c_immd_dbl_mov.  TBH, I don't
really understand what that function is trying to test, so I can't
tell whether it should be using MEM_EXPR instead.

I've attached a patch to remove MEM_IN_STRUCT_P and MEM_SCALAR_P too,
although I don't think the m32c.c change is acceptable.  Tested again
on mips64-linux-gnu.

Richard


diff -udpr ../before/bitmap.s ./bitmap.s
--- ../before/bitmap.s	2011-12-31 14:02:43.069284912 +0100
+++ ./bitmap.s	2011-12-31 14:04:58.937285021 +0100
@@ -128,8 +128,8 @@ _ZL23bitmap_elt_insert_afterP15bitmap_he
 	movq	(%rax), %rcx
 	testq	%rcx, %rcx
 	je	.L22
-	movq	8(%rax), %rsi
 	movq	%rcx, _ZL15bitmap_ggc_free(%rip)
+	movq	8(%rax), %rsi
 	movq	%rsi, 8(%rcx)
 	jmp	.L17
 	.p2align 4,,10
@@ -516,8 +516,8 @@ _Z11bitmap_copyP15bitmap_head_defPKS_:
 	movq	(%rax), %rdx
 	testq	%rdx, %rdx
 	je	.L84
-	movq	8(%rax), %rcx
 	movq	%rdx, _ZL15bitmap_ggc_free(%rip)
+	movq	8(%rax), %rcx
 	movq	%rcx, 8(%rdx)
 	jmp	.L79
 	.p2align 4,,10
@@ -1013,8 +1013,8 @@ _Z14bitmap_set_bitP15bitmap_head_defi:
 	movq	(%rax), %rdx
 	testq	%rdx, %rdx
 	je	.L158
-	movq	8(%rax), %rsi
 	movq	%rdx, _ZL15bitmap_ggc_free(%rip)
+	movq	8(%rax), %rsi
 	movq	%rsi, 8(%rdx)
 	jmp	.L153
 .L152:
@@ -2450,8 +2450,8 @@ _Z16bitmap_set_rangeP15bitmap_head_defjj
 	movq	(%rdi), %rax
 	testq	%rax, %rax
 	je	.L505
-	movq	8(%rdi), %rcx
 	movq	%rax, _ZL15bitmap_ggc_free(%rip)
+	movq	8(%rdi), %rcx
 	movq	%rcx, 8(%rax)
 	jmp	.L500
 .L499:
diff -udpr ../before/cgraphunit.s ./cgraphunit.s
--- ../before/cgraphunit.s	2011-12-31 14:02:50.209284962 +0100
+++ ./cgraphunit.s	2011-12-31 14:05:06.041285031 +0100
@@ -1759,9 +1759,9 @@ _ZL27assemble_thunks_and_aliasesP11cgrap
 .L370:
 	cmpl	%ecx, %edx
 	jae	.L494
+	movq	%r13, 8(%rax,%rdx,8)
 	movq	current_function_decl(%rip), %rdi
 	leal	1(%rdx), %ecx
-	movq	%r13, 8(%rax,%rdx,8)
 	movl	%ecx, (%rax)
 	movzwl	(%rdi), %eax
 	salq	$6, %rax
diff -udpr ../before/combine.s ./combine.s
--- ../before/combine.s	2011-12-31 14:02:55.065285192 +0100
+++ ./combine.s	2011-12-31 14:05:10.937285802 +0100
@@ -26978,8 +26978,8 @@ _ZL22rest_of_handle_combinev:
 	movq	8(%rbx), %rax
 	movq	(%rax), %rdx
 	movq	56(%rdx), %rdi
-	cmpq	8(%rax), %rdi
 	movq	%rdi, _ZL16this_basic_block(%rip)
+	cmpq	8(%rax), %rdi
 	je	.L6258
 	.p2align 4,,10
 	.p2align 3
@@ -27116,8 +27116,8 @@ _ZL22rest_of_handle_combinev:
 	movq	8(%rax), %rax
 	movq	(%rax), %rbx
 	movq	56(%rbx), %rdi
-	cmpq	8(%rax), %rdi
 	movq	%rdi, _ZL16this_basic_block(%rip)
+	cmpq	8(%rax), %rdi
 	je	.L6326
 .L6446:
 	call	_Z23optimize_bb_for_speed_pPK15basic_block_def
diff -udpr ../before/cprop.s ./cprop.s
--- ../before/cprop.s	2011-12-31 14:02:53.017285094 +0100
+++ ./cprop.s	2011-12-31 14:05:08.845285264 +0100
@@ -1772,8 +1772,8 @@ _ZL17execute_rtl_cpropv:
 	call	memset
 	xorl	%edi, %edi
 	call	_Z25bitmap_obstack_alloc_statP14bitmap_obstack
-	movq	%rax, %rdi
 	movq	%rax, _ZL14reg_set_bitmap(%rip)
+	movq	%rax, %rdi
 	movq	cfun(%rip), %rax
 	movq	8(%rax), %rax
 	movq	(%rax), %rdx
diff -udpr ../before/dwarf2cfi.s ./dwarf2cfi.s
--- ../before/dwarf2cfi.s	2011-12-31 14:02:59.209285650 +0100
+++ ./dwarf2cfi.s	2011-12-31 14:05:15.037368489 +0100
@@ -4077,9 +4077,9 @@ _ZL24maybe_record_trace_startP7rtx_defS0
 .L854:
 	cmpl	%edx, %ecx
 	jbe	.L873
+	movq	%rbx, 8(%rax,%rdx,8)
 	movq	dump_file(%rip), %rdi
 	leal	1(%rdx), %ecx
-	movq	%rbx, 8(%rax,%rdx,8)
 	movl	%ecx, (%rax)
 	testq	%rdi, %rdi
 	je	.L847
diff -udpr ../before/dwarf2out.s ./dwarf2out.s
--- ../before/dwarf2out.s	2011-12-31 14:03:09.757285692 +0100
+++ ./dwarf2out.s	2011-12-31 14:05:28.409701082 +0100
@@ -38726,9 +38726,9 @@ _ZL17modified_type_dieP9tree_nodeiiP10di
 	movl	$24, %edi
 	movq	%r11, (%rsp)
 	call	_Z31ggc_internal_cleared_alloc_statm
-	movq	_ZL14limbo_die_list(%rip), %rdx
 	movq	%r15, (%rax)
 	movq	%rbp, 8(%rax)
+	movq	_ZL14limbo_die_list(%rip), %rdx
 	movq	(%rsp), %r11
 	movq	%rdx, 16(%rax)
 	movq	%rax, _ZL14limbo_die_list(%rip)
@@ -40112,9 +40112,9 @@ _ZL21generic_parameter_dieP9tree_nodeS0_
 .L8519:
 	movl	$24, %edi
 	call	_Z31ggc_internal_cleared_alloc_statm
-	movq	_ZL14limbo_die_list(%rip), %rdx
 	movq	%rbp, (%rax)
 	movq	%rbx, 8(%rax)
+	movq	_ZL14limbo_die_list(%rip), %rdx
 	movq	%rdx, 16(%rax)
 	movq	%rax, _ZL14limbo_die_list(%rip)
 	jmp	.L8514
@@ -63923,9 +63923,9 @@ _ZL35dwarf2out_imported_module_or_decl_1
 .L14563:
 	movl	$24, %edi
 	call	_Z31ggc_internal_cleared_alloc_statm
-	movq	_ZL14limbo_die_list(%rip), %rdx
 	movq	%rbx, (%rax)
 	movq	%r14, 8(%rax)
+	movq	_ZL14limbo_die_list(%rip), %rdx
 	movq	%rdx, 16(%rax)
 	movq	%rax, _ZL14limbo_die_list(%rip)
 	jmp	.L14562
diff -udpr ../before/loop-invariant.s ./loop-invariant.s
--- ../before/loop-invariant.s	2011-12-31 14:03:25.985286301 +0100
+++ ./loop-invariant.s	2011-12-31 14:05:43.701701738 +0100
@@ -4748,9 +4748,9 @@ _Z20move_loop_invariantsv:
 	cmpl	%edx, %ecx
 	jbe	.L1191
 .L780:
+	movq	%r12, 8(%rax,%rdx,8)
 	movq	dump_file(%rip), %rdi
 	leal	1(%rdx), %ecx
-	movq	%r12, 8(%rax,%rdx,8)
 	movl	%ecx, (%rax)
 	testq	%rdi, %rdi
 	je	.L783
diff -udpr ../before/sel-sched.s ./sel-sched.s
--- ../before/sel-sched.s	2011-12-31 14:03:47.769535358 +0100
+++ ./sel-sched.s	2011-12-31 14:06:05.441701769 +0100
@@ -9901,9 +9901,9 @@ _ZL15fill_vec_av_setP10_list_nodeS0_P6_f
 	jbe	.L2913
 .L2634:
 	subl	$1, %edx
-	cmpl	$3, sched_verbose(%rip)
 	movq	56(%rsp), %rcx
 	movl	%edx, (%rax)
+	cmpl	$3, sched_verbose(%rip)
 	movq	8(%rax,%rdx,8), %rdx
 	movq	%rdx, 8(%rax,%rcx,8)
 	jle	.L2462
diff -udpr ../before/tree-browser.s ./tree-browser.s
--- ../before/tree-browser.s	2011-12-31 14:03:50.565534986 +0100
+++ ./tree-browser.s	2011-12-31 14:06:07.983951298 +0100
@@ -2028,9 +2028,9 @@ _Z11browse_treeP9tree_node:
 	cmpl	%ecx, %edx
 	jae	.L1142
 	movl	_ZL10TB_verbose(%rip), %edi
-	movq	current_function_decl(%rip), %rsi
-	leal	1(%rdx), %ecx
 	movq	%rbx, 8(%rax,%rdx,8)
+	leal	1(%rdx), %ecx
+	movq	current_function_decl(%rip), %rsi
 	movl	%ecx, (%rax)
 	testl	%edi, %edi
 	movq	%rsi, 56(%rsp)
diff -udpr ../before/tree-ssa-loop-im.s ./tree-ssa-loop-im.s
--- ../before/tree-ssa-loop-im.s	2011-12-31 14:04:05.573535187 +0100
+++ ./tree-ssa-loop-im.s	2011-12-31 14:06:23.129701717 +0100
@@ -7043,8 +7043,8 @@ _Z12tree_ssa_limv:
 	mov	(%rdx), %ecx
 	cmpl	4(%rdx), %ecx
 	jae	.L2109
-	movq	cfun(%rip), %r8
 	movq	%rax, 8(%rdx,%rcx,8)
+	movq	cfun(%rip), %r8
 	leal	1(%rcx), %esi
 	movl	%esi, (%rdx)
 	movq	32(%r8), %rax
diff -udpr ../before/tree-ssa-pre.s ./tree-ssa-pre.s
--- ../before/tree-ssa-pre.s	2011-12-31 14:04:09.261535046 +0100
+++ ./tree-ssa-pre.s	2011-12-31 14:06:26.564141263 +0100
@@ -4790,8 +4790,8 @@ _ZL15phi_translate_1P10pre_expr_dP10bitm
 .L993:
 	movq	-296(%rbp), %rdi
 	movq	8(%rdi), %rsi
-	movzwl	6(%rsi), %eax
 	movq	%rsi, -216(%rbp)
+	movzwl	6(%rsi), %eax
 	leal	-1(%rax), %eax
 	leaq	40(,%rax,8), %rdx
 	leaq	30(%rdx), %rax
diff -udpr ../before/var-tracking.s ./var-tracking.s
--- ../before/var-tracking.s	2011-12-31 14:04:18.165618572 +0100
+++ ./var-tracking.s	2011-12-31 14:06:35.445117953 +0100
@@ -14958,9 +14958,9 @@ _ZL13vt_initializev:
 	movl	$3, %edi
 	movq	%r13, 8(%rbp)
 	movq	16(%rsp), %rdx
+	movb	%al, 2(%rbp)
 	movq	%rdx, 16(%rbp)
 	movq	_ZL14call_arguments(%rip), %r13
-	movb	%al, 2(%rbp)
 	call	_Z14rtx_alloc_stat8rtx_code
 	movb	$0, 2(%rax)
 	movq	%rbp, 8(%rax)
gcc/
	* rtl.h (true_dependence, canon_true_dependence): Remove varies
	parameter.
	* alias.c (fixed_scalar_and_varying_struct_p): Delete.
	(true_dependence_1, write_dependence_p, may_alias_p): Don't call it.
	(true_dependence_1, true_dependence, canon_true_dependence): Remove
	varies parameter.
	* cselib.c (cselib_rtx_varies_p): Delete.
	(cselib_invalidate_mem): Update call to canon_true_dependence.
	* dse.c (record_store, check_mem_read_rtx): Likewise.
	(scan_reads_nospill): Likewise.
	* cse.c (check_dependence): Likewise.
	(cse_rtx_varies_p): Delete.
	* expr.c (safe_from_p): Update call to true_dependence.
	* ira.c (validate_equiv_mem_from_store): Likewise.
	(memref_referenced_p): Likewise.
	* postreload-gcse.c (find_mem_conflicts): Likewise.
	* sched-deps.c (sched_analyze_2): Likewise.
	* store-motion.c (load_kills_store): Likewise.
	* config/frv/frv.c (frv_registers_conflict_p_1): Likewise.
	* gcse.c (mems_conflict_for_gcse_p): Likewise.
	(compute_transp): Update call to canon_true_dependence.
Index: gcc/rtl.h
===================================================================
--- gcc/rtl.h	2012-01-02 10:44:41.000000000 +0000
+++ gcc/rtl.h	2012-01-02 14:33:13.000000000 +0000
@@ -2602,10 +2602,10 @@ extern bool read_rtx (const char *, rtx
 
 /* In alias.c */
 extern rtx canon_rtx (rtx);
-extern int true_dependence (const_rtx, enum machine_mode, const_rtx, bool (*)(const_rtx, bool));
+extern int true_dependence (const_rtx, enum machine_mode, const_rtx);
 extern rtx get_addr (rtx);
-extern int canon_true_dependence (const_rtx, enum machine_mode, rtx, const_rtx,
-				  rtx, bool (*)(const_rtx, bool));
+extern int canon_true_dependence (const_rtx, enum machine_mode, rtx,
+				  const_rtx, rtx);
 extern int read_dependence (const_rtx, const_rtx);
 extern int anti_dependence (const_rtx, const_rtx);
 extern int output_dependence (const_rtx, const_rtx);
Index: gcc/alias.c
===================================================================
--- gcc/alias.c	2012-01-02 10:47:21.000000000 +0000
+++ gcc/alias.c	2012-01-02 14:33:17.000000000 +0000
@@ -157,8 +157,6 @@ static rtx find_base_value (rtx);
 static int mems_in_disjoint_alias_sets_p (const_rtx, const_rtx);
 static int insert_subset_children (splay_tree_node, void*);
 static alias_set_entry get_alias_set_entry (alias_set_type);
-static const_rtx fixed_scalar_and_varying_struct_p (const_rtx, const_rtx, rtx, rtx,
-						    bool (*) (const_rtx, bool));
 static int aliases_everything_p (const_rtx);
 static bool nonoverlapping_component_refs_p (const_tree, const_tree);
 static tree decl_for_component_ref (tree);
@@ -2078,11 +2076,9 @@ memrefs_conflict_p (int xsize, rtx x, in
    changed.  A volatile and non-volatile reference can be interchanged
    though.
 
-   A MEM_IN_STRUCT reference at a non-AND varying address can never
-   conflict with a non-MEM_IN_STRUCT reference at a fixed address.  We
-   also must allow AND addresses, because they may generate accesses
-   outside the object being referenced.  This is used to generate
-   aligned addresses from unaligned addresses, for instance, the alpha
+   We also must allow AND addresses, because they may generate accesses
+   outside the object being referenced.  This is used to generate aligned
+   addresses from unaligned addresses, for instance, the alpha
    storeqi_unaligned pattern.  */
 
 /* Read dependence: X is read after read in MEM takes place.  There can
@@ -2094,39 +2090,6 @@ read_dependence (const_rtx mem, const_rt
   return MEM_VOLATILE_P (x) && MEM_VOLATILE_P (mem);
 }
 
-/* Returns MEM1 if and only if MEM1 is a scalar at a fixed address and
-   MEM2 is a reference to a structure at a varying address, or returns
-   MEM2 if vice versa.  Otherwise, returns NULL_RTX.  If a non-NULL
-   value is returned MEM1 and MEM2 can never alias.  VARIES_P is used
-   to decide whether or not an address may vary; it should return
-   nonzero whenever variation is possible.
-   MEM1_ADDR and MEM2_ADDR are the addresses of MEM1 and MEM2.  */
-
-static const_rtx
-fixed_scalar_and_varying_struct_p (const_rtx mem1, const_rtx mem2, rtx mem1_addr,
-				   rtx mem2_addr,
-				   bool (*varies_p) (const_rtx, bool))
-{
-  if (! flag_strict_aliasing)
-    return NULL_RTX;
-
-  if (MEM_ALIAS_SET (mem2)
-      && MEM_SCALAR_P (mem1) && MEM_IN_STRUCT_P (mem2)
-      && !varies_p (mem1_addr, 1) && varies_p (mem2_addr, 1))
-    /* MEM1 is a scalar at a fixed address; MEM2 is a struct at a
-       varying address.  */
-    return mem1;
-
-  if (MEM_ALIAS_SET (mem1)
-      && MEM_IN_STRUCT_P (mem1) && MEM_SCALAR_P (mem2)
-      && varies_p (mem1_addr, 1) && !varies_p (mem2_addr, 1))
-    /* MEM2 is a scalar at a fixed address; MEM1 is a struct at a
-       varying address.  */
-    return mem2;
-
-  return NULL_RTX;
-}
-
 /* Returns nonzero if something about the mode or address format MEM1
    indicates that it might well alias *anything*.  */
 
@@ -2391,8 +2354,6 @@ nonoverlapping_memrefs_p (const_rtx x, c
 /* Helper for true_dependence and canon_true_dependence.
    Checks for true dependence: X is read after store in MEM takes place.
 
-   VARIES is the function that should be used as rtx_varies function.
-
    If MEM_CANONICALIZED is FALSE, then X_ADDR and MEM_ADDR should be
    NULL_RTX, and the canonical addresses of MEM and X are both computed
    here.  If MEM_CANONICALIZED, then MEM must be already canonicalized.
@@ -2403,8 +2364,7 @@ nonoverlapping_memrefs_p (const_rtx x, c
 
 static int
 true_dependence_1 (const_rtx mem, enum machine_mode mem_mode, rtx mem_addr,
-		   const_rtx x, rtx x_addr, bool (*varies) (const_rtx, bool),
-		   bool mem_canonicalized)
+		   const_rtx x, rtx x_addr, bool mem_canonicalized)
 {
   rtx base;
   int ret;
@@ -2496,21 +2456,16 @@ true_dependence_1 (const_rtx mem, enum m
   if (mem_mode == BLKmode || GET_MODE (x) == BLKmode)
     return 1;
 
-  if (fixed_scalar_and_varying_struct_p (mem, x, mem_addr, x_addr, varies))
-    return 0;
-
   return rtx_refs_may_alias_p (x, mem, true);
 }
 
 /* True dependence: X is read after store in MEM takes place.  */
 
 int
-true_dependence (const_rtx mem, enum machine_mode mem_mode, const_rtx x,
-		 bool (*varies) (const_rtx, bool))
+true_dependence (const_rtx mem, enum machine_mode mem_mode, const_rtx x)
 {
   return true_dependence_1 (mem, mem_mode, NULL_RTX,
-			    x, NULL_RTX, varies,
-			    /*mem_canonicalized=*/false);
+			    x, NULL_RTX, /*mem_canonicalized=*/false);
 }
 
 /* Canonical true dependence: X is read after store in MEM takes place.
@@ -2521,11 +2476,10 @@ true_dependence (const_rtx mem, enum mac
 
 int
 canon_true_dependence (const_rtx mem, enum machine_mode mem_mode, rtx mem_addr,
-		       const_rtx x, rtx x_addr, bool (*varies) (const_rtx, bool))
+		       const_rtx x, rtx x_addr)
 {
   return true_dependence_1 (mem, mem_mode, mem_addr,
-			    x, x_addr, varies,
-			    /*mem_canonicalized=*/true);
+			    x, x_addr, /*mem_canonicalized=*/true);
 }
 
 /* Returns nonzero if a write to X might alias a previous read from
@@ -2535,7 +2489,6 @@ canon_true_dependence (const_rtx mem, en
 write_dependence_p (const_rtx mem, const_rtx x, int writep)
 {
   rtx x_addr, mem_addr;
-  const_rtx fixed_scalar;
   rtx base;
   int ret;
 
@@ -2598,14 +2551,6 @@ write_dependence_p (const_rtx mem, const
   if (nonoverlapping_memrefs_p (x, mem, false))
     return 0;
 
-  fixed_scalar
-    = fixed_scalar_and_varying_struct_p (mem, x, mem_addr, x_addr,
-					 rtx_addr_varies_p);
-
-  if ((fixed_scalar == mem && !aliases_everything_p (x))
-      || (fixed_scalar == x && !aliases_everything_p (mem)))
-    return 0;
-
   return rtx_refs_may_alias_p (x, mem, false);
 }
 
@@ -2687,10 +2632,6 @@ may_alias_p (const_rtx mem, const_rtx x)
   if (GET_CODE (mem_addr) == AND)
     return 1;
 
-  if (fixed_scalar_and_varying_struct_p (mem, x, mem_addr, x_addr,
-                                         rtx_addr_varies_p))
-    return 0;
-
   /* TBAA not valid for loop_invarint */
   return rtx_refs_may_alias_p (x, mem, false);
 }
Index: gcc/cselib.c
===================================================================
--- gcc/cselib.c	2012-01-02 10:47:21.000000000 +0000
+++ gcc/cselib.c	2012-01-02 14:02:08.000000000 +0000
@@ -2143,20 +2143,6 @@ cselib_invalidate_regno (unsigned int re
     }
 }
 
-/* Return 1 if X has a value that can vary even between two
-   executions of the program.  0 means X can be compared reliably
-   against certain constants or near-constants.  */
-
-static bool
-cselib_rtx_varies_p (const_rtx x ATTRIBUTE_UNUSED, bool from_alias ATTRIBUTE_UNUSED)
-{
-  /* We actually don't need to verify very hard.  This is because
-     if X has actually changed, we invalidate the memory anyway,
-     so assume that all common memory addresses are
-     invariant.  */
-  return 0;
-}
-
 /* Invalidate any locations in the table which are changed because of a
    store to MEM_RTX.  If this is called because of a non-const call
    instruction, MEM_RTX is (mem:BLK const0_rtx).  */
@@ -2193,8 +2179,8 @@ cselib_invalidate_mem (rtx mem_rtx)
 	      continue;
 	    }
 	  if (num_mems < PARAM_VALUE (PARAM_MAX_CSELIB_MEMORY_LOCATIONS)
-	      && ! canon_true_dependence (mem_rtx, GET_MODE (mem_rtx), mem_addr,
-		      			  x, NULL_RTX, cselib_rtx_varies_p))
+	      && ! canon_true_dependence (mem_rtx, GET_MODE (mem_rtx),
+					  mem_addr, x, NULL_RTX))
 	    {
 	      has_mem = true;
 	      num_mems++;
Index: gcc/dse.c
===================================================================
--- gcc/dse.c	2012-01-02 10:44:40.000000000 +0000
+++ gcc/dse.c	2012-01-02 14:02:08.000000000 +0000
@@ -1682,7 +1682,7 @@ record_store (rtx body, bb_info_t bb_inf
 	  if (canon_true_dependence (s_info->mem,
 				     GET_MODE (s_info->mem),
 				     s_info->mem_addr,
-				     mem, mem_addr, rtx_varies_p))
+				     mem, mem_addr))
 	    {
 	      s_info->rhs = NULL;
 	      s_info->const_rhs = NULL;
@@ -2279,7 +2279,7 @@ check_mem_read_rtx (rtx *loc, void *data
 	      = canon_true_dependence (store_info->mem,
 				       GET_MODE (store_info->mem),
 				       store_info->mem_addr,
-				       mem, mem_addr, rtx_varies_p);
+				       mem, mem_addr);
 
 	  else if (group_id == store_info->group_id)
 	    {
@@ -2290,7 +2290,7 @@ check_mem_read_rtx (rtx *loc, void *data
 		  = canon_true_dependence (store_info->mem,
 					   GET_MODE (store_info->mem),
 					   store_info->mem_addr,
-					   mem, mem_addr, rtx_varies_p);
+					   mem, mem_addr);
 
 	      /* If this read is just reading back something that we just
 		 stored, rewrite the read.  */
@@ -2377,7 +2377,7 @@ check_mem_read_rtx (rtx *loc, void *data
 	    remove = canon_true_dependence (store_info->mem,
 					    GET_MODE (store_info->mem),
 					    store_info->mem_addr,
-					    mem, mem_addr, rtx_varies_p);
+					    mem, mem_addr);
 
 	  if (remove)
 	    {
@@ -3276,8 +3276,7 @@ scan_reads_nospill (insn_info_t insn_inf
 		      && canon_true_dependence (group->base_mem,
 						GET_MODE (group->base_mem),
 						group->canon_base_addr,
-						read_info->mem, NULL_RTX,
-						rtx_varies_p))
+						read_info->mem, NULL_RTX))
 		    {
 		      if (kill)
 			bitmap_ior_into (kill, group->group_kill);
Index: gcc/cse.c
===================================================================
--- gcc/cse.c	2012-01-02 10:44:41.000000000 +0000
+++ gcc/cse.c	2012-01-02 14:02:08.000000000 +0000
@@ -573,7 +573,6 @@ static struct table_elt *insert (rtx, st
 				 enum machine_mode);
 static void merge_equiv_classes (struct table_elt *, struct table_elt *);
 static void invalidate (rtx, enum machine_mode);
-static bool cse_rtx_varies_p (const_rtx, bool);
 static void remove_invalid_refs (unsigned int);
 static void remove_invalid_subreg_refs (unsigned int, unsigned int,
 					enum machine_mode);
@@ -1846,8 +1845,7 @@ check_dependence (rtx *x, void *data)
 {
   struct check_dependence_data *d = (struct check_dependence_data *) data;
   if (*x && MEM_P (*x))
-    return canon_true_dependence (d->exp, d->mode, d->addr, *x, NULL_RTX,
-		    		  cse_rtx_varies_p);
+    return canon_true_dependence (d->exp, d->mode, d->addr, *x, NULL_RTX);
   else
     return 0;
 }
@@ -2794,67 +2792,6 @@ exp_equiv_p (const_rtx x, const_rtx y, i
   return 1;
 }
 
-/* Return 1 if X has a value that can vary even between two
-   executions of the program.  0 means X can be compared reliably
-   against certain constants or near-constants.  */
-
-static bool
-cse_rtx_varies_p (const_rtx x, bool from_alias)
-{
-  /* We need not check for X and the equivalence class being of the same
-     mode because if X is equivalent to a constant in some mode, it
-     doesn't vary in any mode.  */
-
-  if (REG_P (x)
-      && REGNO_QTY_VALID_P (REGNO (x)))
-    {
-      int x_q = REG_QTY (REGNO (x));
-      struct qty_table_elem *x_ent = &qty_table[x_q];
-
-      if (GET_MODE (x) == x_ent->mode
-	  && x_ent->const_rtx != NULL_RTX)
-	return 0;
-    }
-
-  if (GET_CODE (x) == PLUS
-      && CONST_INT_P (XEXP (x, 1))
-      && REG_P (XEXP (x, 0))
-      && REGNO_QTY_VALID_P (REGNO (XEXP (x, 0))))
-    {
-      int x0_q = REG_QTY (REGNO (XEXP (x, 0)));
-      struct qty_table_elem *x0_ent = &qty_table[x0_q];
-
-      if ((GET_MODE (XEXP (x, 0)) == x0_ent->mode)
-	  && x0_ent->const_rtx != NULL_RTX)
-	return 0;
-    }
-
-  /* This can happen as the result of virtual register instantiation, if
-     the initial constant is too large to be a valid address.  This gives
-     us a three instruction sequence, load large offset into a register,
-     load fp minus a constant into a register, then a MEM which is the
-     sum of the two `constant' registers.  */
-  if (GET_CODE (x) == PLUS
-      && REG_P (XEXP (x, 0))
-      && REG_P (XEXP (x, 1))
-      && REGNO_QTY_VALID_P (REGNO (XEXP (x, 0)))
-      && REGNO_QTY_VALID_P (REGNO (XEXP (x, 1))))
-    {
-      int x0_q = REG_QTY (REGNO (XEXP (x, 0)));
-      int x1_q = REG_QTY (REGNO (XEXP (x, 1)));
-      struct qty_table_elem *x0_ent = &qty_table[x0_q];
-      struct qty_table_elem *x1_ent = &qty_table[x1_q];
-
-      if ((GET_MODE (XEXP (x, 0)) == x0_ent->mode)
-	  && x0_ent->const_rtx != NULL_RTX
-	  && (GET_MODE (XEXP (x, 1)) == x1_ent->mode)
-	  && x1_ent->const_rtx != NULL_RTX)
-	return 0;
-    }
-
-  return rtx_varies_p (x, from_alias);
-}
-
 /* Subroutine of canon_reg.  Pass *XLOC through canon_reg, and validate
    the result if necessary.  INSN is as for canon_reg.  */
 
Index: gcc/expr.c
===================================================================
--- gcc/expr.c	2012-01-02 13:56:28.000000000 +0000
+++ gcc/expr.c	2012-01-02 14:33:13.000000000 +0000
@@ -7192,8 +7192,7 @@ safe_from_p (const_rtx x, tree exp, int
 	 are memory and they conflict.  */
       return ! (rtx_equal_p (x, exp_rtl)
 		|| (MEM_P (x) && MEM_P (exp_rtl)
-		    && true_dependence (exp_rtl, VOIDmode, x,
-					rtx_addr_varies_p)));
+		    && true_dependence (exp_rtl, VOIDmode, x)));
     }
 
   /* If we reach here, it is safe.  */
Index: gcc/ira.c
===================================================================
--- gcc/ira.c	2012-01-02 10:44:41.000000000 +0000
+++ gcc/ira.c	2012-01-02 14:02:08.000000000 +0000
@@ -2335,7 +2335,7 @@ validate_equiv_mem_from_store (rtx dest,
   if ((REG_P (dest)
        && reg_overlap_mentioned_p (dest, equiv_mem))
       || (MEM_P (dest)
-	  && true_dependence (dest, VOIDmode, equiv_mem, rtx_varies_p)))
+	  && true_dependence (dest, VOIDmode, equiv_mem)))
     equiv_mem_modified = 1;
 }
 
@@ -2589,7 +2589,7 @@ memref_referenced_p (rtx memref, rtx x)
 				      reg_equiv[REGNO (x)].replacement));
 
     case MEM:
-      if (true_dependence (memref, VOIDmode, x, rtx_varies_p))
+      if (true_dependence (memref, VOIDmode, x))
 	return 1;
       break;
 
Index: gcc/postreload-gcse.c
===================================================================
--- gcc/postreload-gcse.c	2012-01-02 10:44:41.000000000 +0000
+++ gcc/postreload-gcse.c	2012-01-02 14:02:08.000000000 +0000
@@ -589,8 +589,7 @@ find_mem_conflicts (rtx dest, const_rtx
   if (! MEM_P (dest))
     return;
 
-  if (true_dependence (dest, GET_MODE (dest), mem_op,
-		       rtx_addr_varies_p))
+  if (true_dependence (dest, GET_MODE (dest), mem_op))
     mems_conflict_p = 1;
 }
 
Index: gcc/sched-deps.c
===================================================================
--- gcc/sched-deps.c	2012-01-02 10:44:41.000000000 +0000
+++ gcc/sched-deps.c	2012-01-02 14:02:08.000000000 +0000
@@ -2636,8 +2636,7 @@ sched_analyze_2 (struct deps_desc *deps,
 	    pending_mem = deps->pending_write_mems;
 	    while (pending)
 	      {
-		if (true_dependence (XEXP (pending_mem, 0), VOIDmode,
-				     t, rtx_varies_p)
+		if (true_dependence (XEXP (pending_mem, 0), VOIDmode, t)
 		    && ! sched_insns_conditions_mutex_p (insn,
 							 XEXP (pending, 0)))
 		  note_mem_dep (t, XEXP (pending_mem, 0), XEXP (pending, 0),
Index: gcc/store-motion.c
===================================================================
--- gcc/store-motion.c	2012-01-02 10:44:41.000000000 +0000
+++ gcc/store-motion.c	2012-01-02 14:02:08.000000000 +0000
@@ -309,8 +309,7 @@ load_kills_store (const_rtx x, const_rtx
   if (after)
     return anti_dependence (x, store_pattern);
   else
-    return true_dependence (store_pattern, GET_MODE (store_pattern), x,
-			    rtx_addr_varies_p);
+    return true_dependence (store_pattern, GET_MODE (store_pattern), x);
 }
 
 /* Go through the entire rtx X, looking for any loads which might alias
Index: gcc/config/frv/frv.c
===================================================================
--- gcc/config/frv/frv.c	2012-01-02 10:44:40.000000000 +0000
+++ gcc/config/frv/frv.c	2012-01-02 14:02:08.000000000 +0000
@@ -7229,8 +7229,7 @@ frv_registers_conflict_p_1 (rtx *x, void
       for (i = 0; i < frv_packet.num_mems; i++)
 	if (frv_regstate_conflict_p (frv_packet.mems[i].cond, cond))
 	  {
-	    if (true_dependence (frv_packet.mems[i].mem, VOIDmode,
-				 *x, rtx_varies_p))
+	    if (true_dependence (frv_packet.mems[i].mem, VOIDmode, *x))
 	      return 1;
 
 	    if (output_dependence (frv_packet.mems[i].mem, *x))
Index: gcc/gcse.c
===================================================================
--- gcc/gcse.c	2012-01-02 10:44:41.000000000 +0000
+++ gcc/gcse.c	2012-01-02 14:02:08.000000000 +0000
@@ -968,7 +968,7 @@ mems_conflict_for_gcse_p (rtx dest, cons
       return;
     }
 
-  if (true_dependence (dest, GET_MODE (dest), mci->mem, rtx_addr_varies_p))
+  if (true_dependence (dest, GET_MODE (dest), mci->mem))
     mci->conflict = true;
 }
 
@@ -1682,8 +1682,8 @@ compute_transp (const_rtx x, int indx, s
 		    rtx dest = pair->dest;
 		    rtx dest_addr = pair->dest_addr;
 
-		    if (canon_true_dependence (dest, GET_MODE (dest), dest_addr,
-					       x, NULL_RTX, rtx_addr_varies_p))
+		    if (canon_true_dependence (dest, GET_MODE (dest),
+					       dest_addr, x, NULL_RTX))
 		      RESET_BIT (bmap[bb_index], indx);
 	          }
 	      }
gcc/
	* doc/rtl.texi (MEM_IN_STRUCT_P, MEM_SCALAR_P): Delete.
	(in_struct, return_val): Remove MEM documentation.
	* rtl.h (rtx_def): Remove MEM meanings from in_struct and return_val.
	(MEM_IN_STRUCT_P, MEM_SCALAR_P): Delete.
	(MEM_COPY_ATTRIBUTES): Remove references to MEM_IN_STRUCT_P
	and MEM_SCALAR_P.
	* emit-rtl.c (set_mem_attributes_minus_bitpos): Likewise.
	* cfgexpand.c (add_alias_set_conflicts): Likewise.
	* expr.c (store_field): Likewise.
	* function.c (assign_stack_temp_for_type): Likewise.
	* ifcvt.c (noce_try_cmove_arith): Likewise.
	* reload1.c (reload): Likewise.
	* config/alpha/alpha.c (alpha_set_memflags_1): Likewise.
	(alpha_set_memflags): Likewise.
	* config/m32c/m32c.c (m32c_immd_dbl_mov): Nullify.

gcc/testsuite/
	* gcc.dg/memcpy-4.c: Don't expect /s on MEMs.

Index: gcc/doc/rtl.texi
===================================================================
--- gcc/doc/rtl.texi	2012-01-02 14:33:13.000000000 +0000
+++ gcc/doc/rtl.texi	2012-01-02 14:37:43.000000000 +0000
@@ -669,17 +669,6 @@ In @code{label_ref} and @code{reg_label}
 a reference to a non-local label.
 Stored in the @code{volatil} field and printed as @samp{/v}.
 
-@findex MEM_IN_STRUCT_P
-@cindex @code{mem} and @samp{/s}
-@cindex @code{in_struct}, in @code{mem}
-@item MEM_IN_STRUCT_P (@var{x})
-In @code{mem} expressions, nonzero for reference to an entire structure,
-union or array, or to a component of one.  Zero for references to a
-scalar variable or through a pointer to a scalar.  If both this flag and
-@code{MEM_SCALAR_P} are clear, then we don't know whether this @code{mem}
-is in a structure or not.  Both flags should never be simultaneously set.
-Stored in the @code{in_struct} field and printed as @samp{/s}.
-
 @findex MEM_KEEP_ALIAS_SET_P
 @cindex @code{mem} and @samp{/j}
 @cindex @code{jump}, in @code{mem}
@@ -689,18 +678,6 @@ mem unchanged when we access a component
 are already in a non-addressable component of an aggregate.
 Stored in the @code{jump} field and printed as @samp{/j}.
 
-@findex MEM_SCALAR_P
-@cindex @code{mem} and @samp{/i}
-@cindex @code{return_val}, in @code{mem}
-@item MEM_SCALAR_P (@var{x})
-In @code{mem} expressions, nonzero for reference to a scalar known not
-to be a member of a structure, union, or array.  Zero for such
-references and for indirections through pointers, even pointers pointing
-to scalar types.  If both this flag and @code{MEM_IN_STRUCT_P} are clear,
-then we don't know whether this @code{mem} is in a structure or not.
-Both flags should never be simultaneously set.
-Stored in the @code{return_val} field and printed as @samp{/i}.
-
 @findex MEM_VOLATILE_P
 @cindex @code{mem} and @samp{/v}
 @cindex @code{asm_input} and @samp{/v}
@@ -944,12 +921,6 @@ In an RTL dump, this flag is represented
 @findex in_struct
 @cindex @samp{/s} in RTL dump
 @item in_struct
-In @code{mem} expressions, it is 1 if the memory datum referred to is
-all or part of a structure or array; 0 if it is (or might be) a scalar
-variable.  A reference through a C pointer has 0 because the pointer
-might point to a scalar variable.  This information allows the compiler
-to determine something about possible cases of aliasing.
-
 In @code{reg} expressions, it is 1 if the register has its entire life
 contained within the test expression of some loop.
 
@@ -986,9 +957,6 @@ machines that pass parameters in registe
 may be used for parameters as well, but this flag is not set on such
 uses.
 
-In @code{mem} expressions, 1 means the memory reference is to a scalar
-known not to be a member of a structure, union, or array.
-
 In @code{symbol_ref} expressions, 1 means the referenced symbol is weak.
 
 In @code{call} expressions, 1 means the call is pure.
Index: gcc/rtl.h
===================================================================
--- gcc/rtl.h	2012-01-02 14:36:20.000000000 +0000
+++ gcc/rtl.h	2012-01-02 14:37:43.000000000 +0000
@@ -296,10 +296,7 @@ struct GTY((chain_next ("RTX_NEXT (&%h)"
      barrier.
      1 in a CONCAT is VAL_NEEDS_RESOLUTION in var-tracking.c.  */
   unsigned int volatil : 1;
-  /* 1 in a MEM referring to a field of an aggregate.
-     0 if the MEM was a variable or the result of a * operator in C;
-     1 if it was the result of a . or -> operator (on a struct) in C.
-     1 in a REG if the register is used only in exit code a loop.
+  /* 1 in a REG if the register is used only in exit code a loop.
      1 in a SUBREG expression if was generated from a variable with a
      promoted mode.
      1 in a CODE_LABEL if the label is used for nonlocal gotos
@@ -308,7 +305,10 @@ struct GTY((chain_next ("RTX_NEXT (&%h)"
      together with the preceding insn.  Valid only within sched.
      1 in an INSN, JUMP_INSN, or CALL_INSN if insn is in a delay slot and
      from the target of a branch.  Valid from reorg until end of compilation;
-     cleared before used.  */
+     cleared before used.
+
+     The name of the field is historical.  It used to be used in MEMs
+     to record whether the MEM accessed part of a structure.  */
   unsigned int in_struct : 1;
   /* At the end of RTL generation, 1 if this rtx is used.  This is used for
      copying shared structure.  See `unshare_all_rtl'.
@@ -328,7 +328,6 @@ struct GTY((chain_next ("RTX_NEXT (&%h)"
      1 in a VALUE is VALUE_CHANGED in var-tracking.c.  */
   unsigned frame_related : 1;
   /* 1 in a REG or PARALLEL that is the current function's return value.
-     1 in a MEM if it refers to a scalar.
      1 in a SYMBOL_REF for a weak symbol.
      1 in a CALL_INSN logically equivalent to ECF_PURE and DECL_PURE_P.
      1 in a CONCAT is VAL_EXPR_HAS_REVERSE in var-tracking.c.
@@ -1335,17 +1334,6 @@ #define MEM_VOLATILE_P(RTX)						\
   (RTL_FLAG_CHECK3("MEM_VOLATILE_P", (RTX), MEM, ASM_OPERANDS,		\
 		   ASM_INPUT)->volatil)
 
-/* 1 if RTX is a mem that refers to an aggregate, either to the
-   aggregate itself or to a field of the aggregate.  If zero, RTX may
-   or may not be such a reference.  */
-#define MEM_IN_STRUCT_P(RTX)						\
-  (RTL_FLAG_CHECK1("MEM_IN_STRUCT_P", (RTX), MEM)->in_struct)
-
-/* 1 if RTX is a MEM that refers to a scalar.  If zero, RTX may or may
-   not refer to a scalar.  */
-#define MEM_SCALAR_P(RTX)						\
-  (RTL_FLAG_CHECK1("MEM_SCALAR_P", (RTX), MEM)->return_val)
-
 /* 1 if RTX is a mem that cannot trap.  */
 #define MEM_NOTRAP_P(RTX) \
   (RTL_FLAG_CHECK1("MEM_NOTRAP_P", (RTX), MEM)->call)
@@ -1404,8 +1392,6 @@ #define REG_OFFSET(RTX) (REG_ATTRS (RTX)
 /* Copy the attributes that apply to memory locations from RHS to LHS.  */
 #define MEM_COPY_ATTRIBUTES(LHS, RHS)				\
   (MEM_VOLATILE_P (LHS) = MEM_VOLATILE_P (RHS),			\
-   MEM_IN_STRUCT_P (LHS) = MEM_IN_STRUCT_P (RHS),		\
-   MEM_SCALAR_P (LHS) = MEM_SCALAR_P (RHS),			\
    MEM_NOTRAP_P (LHS) = MEM_NOTRAP_P (RHS),			\
    MEM_READONLY_P (LHS) = MEM_READONLY_P (RHS),			\
    MEM_KEEP_ALIAS_SET_P (LHS) = MEM_KEEP_ALIAS_SET_P (RHS),	\
Index: gcc/emit-rtl.c
===================================================================
--- gcc/emit-rtl.c	2012-01-02 14:33:13.000000000 +0000
+++ gcc/emit-rtl.c	2012-01-02 14:37:43.000000000 +0000
@@ -1572,17 +1572,8 @@ set_mem_attributes_minus_bitpos (rtx ref
   attrs.alias = get_alias_set (t);
 
   MEM_VOLATILE_P (ref) |= TYPE_VOLATILE (type);
-  MEM_IN_STRUCT_P (ref)
-    = AGGREGATE_TYPE_P (type) || TREE_CODE (type) == COMPLEX_TYPE;
   MEM_POINTER (ref) = POINTER_TYPE_P (type);
 
-  /* If we are making an object of this type, or if this is a DECL, we know
-     that it is a scalar if the type is not an aggregate.  */
-  if ((objectp || DECL_P (t))
-      && ! AGGREGATE_TYPE_P (type)
-      && TREE_CODE (type) != COMPLEX_TYPE)
-    MEM_SCALAR_P (ref) = 1;
-
   /* Default values from pre-existing memory attributes if present.  */
   refattrs = MEM_ATTRS (ref);
   if (refattrs)
@@ -1854,17 +1845,6 @@ set_mem_attributes_minus_bitpos (rtx ref
   /* Now set the attributes we computed above.  */
   attrs.addrspace = TYPE_ADDR_SPACE (type);
   set_mem_attrs (ref, &attrs);
-
-  /* If this is already known to be a scalar or aggregate, we are done.  */
-  if (MEM_IN_STRUCT_P (ref) || MEM_SCALAR_P (ref))
-    return;
-
-  /* If it is a reference into an aggregate, this is part of an aggregate.
-     Otherwise we don't know.  */
-  else if (TREE_CODE (t) == COMPONENT_REF || TREE_CODE (t) == ARRAY_REF
-	   || TREE_CODE (t) == ARRAY_RANGE_REF
-	   || TREE_CODE (t) == BIT_FIELD_REF)
-    MEM_IN_STRUCT_P (ref) = 1;
 }
 
 void
Index: gcc/cfgexpand.c
===================================================================
--- gcc/cfgexpand.c	2012-01-02 14:33:13.000000000 +0000
+++ gcc/cfgexpand.c	2012-01-02 14:37:43.000000000 +0000
@@ -357,8 +357,7 @@ aggregate_contains_union_type (tree type
    and due to type based aliasing rules decides that for two overlapping
    union temporaries { short s; int i; } accesses to the same mem through
    different types may not alias and happily reorders stores across
-   life-time boundaries of the temporaries (See PR25654).
-   We also have to mind MEM_IN_STRUCT_P and MEM_SCALAR_P.  */
+   life-time boundaries of the temporaries (See PR25654).  */
 
 static void
 add_alias_set_conflicts (void)
Index: gcc/expr.c
===================================================================
--- gcc/expr.c	2012-01-02 14:36:20.000000000 +0000
+++ gcc/expr.c	2012-01-02 14:37:43.000000000 +0000
@@ -6421,8 +6421,6 @@ store_field (rtx target, HOST_WIDE_INT b
       if (to_rtx == target)
 	to_rtx = copy_rtx (to_rtx);
 
-      if (!MEM_SCALAR_P (to_rtx))
-	MEM_IN_STRUCT_P (to_rtx) = 1;
       if (!MEM_KEEP_ALIAS_SET_P (to_rtx) && MEM_ALIAS_SET (to_rtx) != 0)
 	set_mem_alias_set (to_rtx, alias_set);
 
Index: gcc/function.c
===================================================================
--- gcc/function.c	2012-01-02 14:33:13.000000000 +0000
+++ gcc/function.c	2012-01-02 14:37:43.000000000 +0000
@@ -939,14 +939,7 @@ assign_stack_temp_for_type (enum machine
 
   /* If a type is specified, set the relevant flags.  */
   if (type != 0)
-    {
-      MEM_VOLATILE_P (slot) = TYPE_VOLATILE (type);
-      gcc_checking_assert (!MEM_SCALAR_P (slot) && !MEM_IN_STRUCT_P (slot));
-      if (AGGREGATE_TYPE_P (type) || TREE_CODE (type) == COMPLEX_TYPE)
-	MEM_IN_STRUCT_P (slot) = 1;
-      else
-	MEM_SCALAR_P (slot) = 1;
-    }
+    MEM_VOLATILE_P (slot) = TYPE_VOLATILE (type);
   MEM_NOTRAP_P (slot) = 1;
 
   return slot;
Index: gcc/ifcvt.c
===================================================================
--- gcc/ifcvt.c	2012-01-02 14:33:13.000000000 +0000
+++ gcc/ifcvt.c	2012-01-02 14:37:43.000000000 +0000
@@ -1667,10 +1667,6 @@ noce_try_cmove_arith (struct noce_if_inf
       /* Copy over flags as appropriate.  */
       if (MEM_VOLATILE_P (if_info->a) || MEM_VOLATILE_P (if_info->b))
 	MEM_VOLATILE_P (tmp) = 1;
-      if (MEM_IN_STRUCT_P (if_info->a) && MEM_IN_STRUCT_P (if_info->b))
-	MEM_IN_STRUCT_P (tmp) = 1;
-      if (MEM_SCALAR_P (if_info->a) && MEM_SCALAR_P (if_info->b))
-	MEM_SCALAR_P (tmp) = 1;
       if (MEM_ALIAS_SET (if_info->a) == MEM_ALIAS_SET (if_info->b))
 	set_mem_alias_set (tmp, MEM_ALIAS_SET (if_info->a));
       set_mem_align (tmp,
Index: gcc/reload1.c
===================================================================
--- gcc/reload1.c	2012-01-02 14:33:13.000000000 +0000
+++ gcc/reload1.c	2012-01-02 14:37:43.000000000 +0000
@@ -1111,10 +1111,7 @@ reload (rtx first, int global)
 	      if (reg_equiv_memory_loc (i))
 		MEM_COPY_ATTRIBUTES (reg, reg_equiv_memory_loc (i));
 	      else
-		{
-		  MEM_IN_STRUCT_P (reg) = MEM_SCALAR_P (reg) = 0;
-		  MEM_ATTRS (reg) = 0;
-		}
+		MEM_ATTRS (reg) = 0;
 	      MEM_NOTRAP_P (reg) = 1;
 	    }
 	  else if (reg_equiv_mem (i))
Index: gcc/config/alpha/alpha.c
===================================================================
--- gcc/config/alpha/alpha.c	2012-01-02 14:33:13.000000000 +0000
+++ gcc/config/alpha/alpha.c	2012-01-02 14:37:43.000000000 +0000
@@ -1489,8 +1489,6 @@ alpha_set_memflags_1 (rtx *xp, void *dat
     return 0;
 
   MEM_VOLATILE_P (x) = MEM_VOLATILE_P (orig);
-  MEM_IN_STRUCT_P (x) = MEM_IN_STRUCT_P (orig);
-  MEM_SCALAR_P (x) = MEM_SCALAR_P (orig);
   MEM_NOTRAP_P (x) = MEM_NOTRAP_P (orig);
   MEM_READONLY_P (x) = MEM_READONLY_P (orig);
 
@@ -1520,8 +1518,6 @@ alpha_set_memflags (rtx seq, rtx ref)
      generated from one of the insn patterns.  So if everything is
      zero, the pattern is already up-to-date.  */
   if (!MEM_VOLATILE_P (ref)
-      && !MEM_IN_STRUCT_P (ref)
-      && !MEM_SCALAR_P (ref)
       && !MEM_NOTRAP_P (ref)
       && !MEM_READONLY_P (ref))
     return;
Index: gcc/config/m32c/m32c.c
===================================================================
--- gcc/config/m32c/m32c.c	2012-01-02 14:33:13.000000000 +0000
+++ gcc/config/m32c/m32c.c	2012-01-02 14:37:43.000000000 +0000
@@ -3475,95 +3475,11 @@ #define DEBUG_MOV_OK 0
    for moving an immediate double data to a double data type variable
    location, can be combined into single SImode mov instruction.  */
 bool
-m32c_immd_dbl_mov (rtx * operands, 
+m32c_immd_dbl_mov (rtx * operands ATTRIBUTE_UNUSED,
 		   enum machine_mode mode ATTRIBUTE_UNUSED)
 {
-  int flag = 0, okflag = 0, offset1 = 0, offset2 = 0, offsetsign = 0;
-  const char *str1;
-  const char *str2;
-
-  if (GET_CODE (XEXP (operands[0], 0)) == SYMBOL_REF
-      && MEM_SCALAR_P (operands[0])
-      && !MEM_IN_STRUCT_P (operands[0])
-      && GET_CODE (XEXP (operands[2], 0)) == CONST
-      && GET_CODE (XEXP (XEXP (operands[2], 0), 0)) == PLUS
-      && GET_CODE (XEXP (XEXP (XEXP (operands[2], 0), 0), 0)) == SYMBOL_REF
-      && GET_CODE (XEXP (XEXP (XEXP (operands[2], 0), 0), 1)) == CONST_INT
-      && MEM_SCALAR_P (operands[2])
-      && !MEM_IN_STRUCT_P (operands[2]))
-    flag = 1; 
-
-  else if (GET_CODE (XEXP (operands[0], 0)) == CONST
-           && GET_CODE (XEXP (XEXP (operands[0], 0), 0)) == PLUS
-           && GET_CODE (XEXP (XEXP (XEXP (operands[0], 0), 0), 0)) == SYMBOL_REF
-           && MEM_SCALAR_P (operands[0])
-           && !MEM_IN_STRUCT_P (operands[0])
-           && !(INTVAL (XEXP (XEXP (XEXP (operands[0], 0), 0), 1)) %4)
-           && GET_CODE (XEXP (operands[2], 0)) == CONST
-           && GET_CODE (XEXP (XEXP (operands[2], 0), 0)) == PLUS
-           && GET_CODE (XEXP (XEXP (XEXP (operands[2], 0), 0), 0)) == SYMBOL_REF
-           && MEM_SCALAR_P (operands[2])
-           && !MEM_IN_STRUCT_P (operands[2]))
-    flag = 2; 
-
-  else if (GET_CODE (XEXP (operands[0], 0)) == PLUS
-           &&  GET_CODE (XEXP (XEXP (operands[0], 0), 0)) == REG
-           &&  REGNO (XEXP (XEXP (operands[0], 0), 0)) == FB_REGNO 
-           &&  GET_CODE (XEXP (XEXP (operands[0], 0), 1)) == CONST_INT
-           &&  MEM_SCALAR_P (operands[0])
-           &&  !MEM_IN_STRUCT_P (operands[0])
-           &&  !(INTVAL (XEXP (XEXP (operands[0], 0), 1)) %4)
-           &&  REGNO (XEXP (XEXP (operands[2], 0), 0)) == FB_REGNO 
-           &&  GET_CODE (XEXP (XEXP (operands[2], 0), 1)) == CONST_INT
-           &&  MEM_SCALAR_P (operands[2])
-           &&  !MEM_IN_STRUCT_P (operands[2]))
-    flag = 3; 
-
-  else
-    return false;
-
-  switch (flag)
-    {
-    case 1:
-      str1 = XSTR (XEXP (operands[0], 0), 0);
-      str2 = XSTR (XEXP (XEXP (XEXP (operands[2], 0), 0), 0), 0);
-      if (strcmp (str1, str2) == 0)
-	okflag = 1; 
-      else
-	okflag = 0; 
-      break;
-    case 2:
-      str1 = XSTR (XEXP (XEXP (XEXP (operands[0], 0), 0), 0), 0);
-      str2 = XSTR (XEXP (XEXP (XEXP (operands[2], 0), 0), 0), 0);
-      if (strcmp(str1,str2) == 0)
-	okflag = 1; 
-      else
-	okflag = 0; 
-      break; 
-    case 3:
-      offset1 = INTVAL (XEXP (XEXP (operands[0], 0), 1));
-      offset2 = INTVAL (XEXP (XEXP (operands[2], 0), 1));
-      offsetsign = offset1 >> ((sizeof (offset1) * 8) -1);
-      if (((offset2-offset1) == 2) && offsetsign != 0)
-	okflag = 1;
-      else 
-	okflag = 0; 
-      break; 
-    default:
-      okflag = 0; 
-    } 
-      
-  if (okflag == 1)
-    {
-      HOST_WIDE_INT val;
-      operands[4] = gen_rtx_MEM (SImode, XEXP (operands[0], 0));
-
-      val = (INTVAL (operands[3]) << 16) + (INTVAL (operands[1]) & 0xFFFF);
-      operands[5] = gen_rtx_CONST_INT (VOIDmode, val);
-     
-      return true;
-    }
-
+  /* ??? This relied on the now-defunct MEM_SCALAR and MEM_IN_STRUCT_P
+     flags.  */
   return false;
 }  
 
Index: gcc/testsuite/gcc.dg/memcpy-4.c
===================================================================
--- gcc/testsuite/gcc.dg/memcpy-4.c	2012-01-02 14:33:13.000000000 +0000
+++ gcc/testsuite/gcc.dg/memcpy-4.c	2012-01-02 14:37:43.000000000 +0000
@@ -10,5 +10,5 @@ f1 (char *p)
   __builtin_memcpy (p, "12345", 5);
 }
 
-/* { dg-final { scan-rtl-dump "mem/s/u.*mem/s/u" "expand" { target mips*-*-* } } } */
+/* { dg-final { scan-rtl-dump "mem/u.*mem/u" "expand" { target mips*-*-* } } } */
 /* { dg-final { cleanup-rtl-dump "expand" } } */
