
RE: Patch to Avoid Bad Prefetching


Hi,

I have been working with Richard on improving this cost model. I experimented with a number of heuristics, with different variations and threshold values, and the following patch implements the ones that gave the best results. The patch delivers an improvement of 3.6% on INT2006 and 8.7% on FP2006 relative to the current prefetcher, without causing any significant regressions. The improvement mostly comes from eliminating the large performance degradations caused by the current prefetcher (see the detailed results below).

The patch implements a prefetching cost model with two heuristics:

First Heuristic:  Disable prefetching in a loop if the potential benefit is insignificant. 
Prefetching improves performance by overlapping cache-missing memory operations with CPU operations. Therefore, if a loop does not have a significant amount of CPU work for the machine to execute while waiting on cache misses, the gain from prefetching will be insignificant and hence unlikely to pay for the prefetching cost. To be precise, an upper bound on the benefit from prefetching can be computed by estimating the time needed to execute the CPU operations and dividing that by the time needed to execute the entire loop (with cache misses taken into account). However, this patch avoids such instruction-by-instruction calculations and adopts an approximation that simply looks at the ratio between the total instruction count and the memory reference count, disabling prefetching if that ratio is below a certain threshold (PREFETCH_MIN_INSN_TO_MEM_RATIO). As detailed below, the experiments show that this approximation works very well in practice: together with the second heuristic, it eliminates most of the performance degradations caused by the current prefetching pass. I plan to implement the more precise calculation in the future and submit it in a patch if it indeed gives better results.
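In code, this first check reduces to something like the following (a minimal sketch using the names from the patch below; the complete version is in is_loop_prefetching_profitable):

  /* NINSNS is the estimated instruction count of the loop body and
     MEM_REF_COUNT the number of memory references in it.  With the
     default threshold of 4, a loop with 20 insns and 6 memory refs
     (integer ratio 3) would have prefetching disabled.  */
  insn_to_mem_ratio = ninsns / mem_ref_count;
  if (insn_to_mem_ratio < PREFETCH_MIN_INSN_TO_MEM_RATIO)
    return false;  /* Too little CPU work to hide the misses behind.  */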

Second Heuristic: Disable prefetching for loops with unknown trip counts if the prefetching cost is above a certain threshold.
For now, we only consider the relative I-cache cost and compute it as the ratio between the number of prefetches and the total number of instructions. Since the code uses integer arithmetic, it actually tests the reciprocal: if the instruction-to-prefetch ratio is less than MIN_INSN_TO_PREFETCH_RATIO, prefetching is disabled for that loop. Note that loop unrolling may reduce the prefetching cost; however, different prefetches are affected differently by unrolling, depending on their strides, and I plan to address that in a separate patch. For now, the current patch assumes no unrolling, which over-estimates the cost and hence gives more conservative results (less degradation).
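A minimal sketch of this second check, again using the names from the patch below (EST_NITER < 0 means the trip count is unknown):

  /* For loops with unknown trip counts, require enough instructions
     per prefetch to keep the relative I-cache cost acceptable.  With
     the default threshold of 10, a loop with 8 prefetches needs at
     least 80 insns for prefetching to stay enabled.  */
  if (est_niter < 0)
    {
      insn_to_prefetch_ratio = ninsns / prefetch_count;
      if (insn_to_prefetch_ratio < MIN_INSN_TO_PREFETCH_RATIO)
        return false;
    }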

The patch introduces two parameters, one for each heuristic. Given the logic behind these heuristics and their relative, high-level nature, the default values are expected to work reasonably well on all targets. However, for fine-tuning on different targets, we plan to add backend hooks to define machine-specific values for these parameters. That will be done in a separate patch.
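Until those hooks exist, the two thresholds can be tuned per compilation through GCC's generic --param mechanism. For example (the values here are purely illustrative, not tuned recommendations):

  gcc -O3 -fprefetch-loop-arrays \
      --param prefetch-min-insn-to-mem-ratio=6 \
      --param min-insn-to-prefetch-ratio=15 test.c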

Benchmark Results:
Here are the geometric-mean scores for CPU2006 with and without the patch: 

FP2006:     
No prefetching: 15.3    
Current prefetching: 14.0 (-8.5%)   
Patched prefetching: 15.2 (-0.5%)
So, the patch gives an improvement of 8.7% relative to the existing code.

INT2006:     
No prefetching: 14.6    
Current prefetching: 14.3 (-2.3%)   
Patched prefetching: 14.8 (1.22%)
So, the patch gives an improvement of 3.6% relative to the existing code.

The results were collected on an AMD Shanghai machine using GCC 4.5 revision 145634, checked out on 4/4/2009. The following flags were used:
 
INT2006
COPTIMIZE	= -O3 -funroll-all-loops -minline-all-stringops -mveclibabi=acml -m64 -march=amdfam10 -fprefetch-loop-arrays 
CXXOPTIMIZE	= -O3 -funroll-all-loops -minline-all-stringops -m32 -march=amdfam10 -static -fprefetch-loop-arrays

FP2006
COPTIMIZE	= -O3 -funroll-all-loops -mveclibabi=acml -m64 -march=amdfam10 -fprefetch-loop-arrays 
CXXOPTIMIZE	= -O3 -mveclibabi=acml -m64 -march=amdfam10 -fprefetch-loop-arrays
FOPTIMIZE	= -O3 -funroll-all-loops -ffast-math -mveclibabi=acml -m64 -march=amdfam10 -fprefetch-loop-arrays

Note that the default settings of all prefetching parameters were used for all benchmarks; no additional prefetching command-line arguments were passed. Also, all CPU2006 benchmarks passed except cactusADM, which failed both with and without prefetching at the above GCC revision, so that failure is not related to this patch.

And here are the numbers for the individual benchmarks (with a prefetching impact of 3% or greater). For each benchmark, the first number is the improvement (or degradation if negative) achieved by the current prefetcher relative to no prefetching, and the second number is the improvement achieved by the patched prefetcher relative to no prefetching:


INT2006         current  patched
gcc               -4%      -1%
gobmk             -3%      -2%
hmmer            -29%       0%
libquantum       +19%     +21%

FP2006          current  patched
bwaves           -27%       0%
gamess           -17%      -5%
zeusmp           -10%      -2%
leslie3d         -14%       0%
calculix         -11%      -1%
GemsFDTD         -13%      -1%
tonto             -9%      -1%
lbm               +5%      +5%
wrf              -24%      -1%
sphinx3          -10%       0%

Note that most of the regressions on FP2006 are in the Fortran benchmarks. When we disable prefetching for Fortran, we get a net geometric mean improvement of 0.2% on FP2006. This net improvement is not possible without the patch.

Thanks
-Ghassan


Index: params.h
===================================================================
--- params.h	(revision 145634)
+++ params.h	(working copy)
@@ -172,4 +172,8 @@ typedef enum compiler_param
   PARAM_VALUE (PARAM_SWITCH_CONVERSION_BRANCH_RATIO)
 #define LOOP_INVARIANT_MAX_BBS_IN_LOOP \
   PARAM_VALUE (PARAM_LOOP_INVARIANT_MAX_BBS_IN_LOOP)
+#define MIN_INSN_TO_PREFETCH_RATIO \
+  PARAM_VALUE (PARAM_MIN_INSN_TO_PREFETCH_RATIO)
+#define PREFETCH_MIN_INSN_TO_MEM_RATIO \
+  PARAM_VALUE (PARAM_PREFETCH_MIN_INSN_TO_MEM_RATIO)
 #endif /* ! GCC_PARAMS_H */
Index: tree-ssa-loop-prefetch.c
===================================================================
--- tree-ssa-loop-prefetch.c	(revision 145634)
+++ tree-ssa-loop-prefetch.c	(working copy)
@@ -109,6 +109,23 @@ along with GCC; see the file COPYING3.  
       prefetch instructions with guards in cases where 5) was not sufficient
       to satisfy the constraints?
 
+   The function is_loop_prefetching_profitable() implements a cost model
+   to determine if prefetching is profitable for a given loop. The cost
+   model has two heuristics:
+   1. A heuristic that determines whether the given loop has enough CPU
+      ops that can be overlapped with cache-missing memory ops.  If not,
+      the loop won't benefit from prefetching.  This is implemented by
+      requiring the ratio between the instruction count and the mem ref
+      count to be above a certain minimum.
+   2. A heuristic that disables prefetching in a loop with an unknown trip
+      count if the prefetching cost is above a certain limit.  The relative
+      prefetching cost is estimated by taking the ratio between the
+      prefetch count and the total instruction count (this models the
+      I-cache cost).
+   The limits used in these heuristics are defined as parameters with
+   reasonable default values.  Machine-specific default values will be
+   added later.
+
    Some other TODO:
       -- write and use more general reuse analysis (that could be also used
 	 in other cache aimed loop optimizations)
@@ -476,7 +493,7 @@ gather_memory_references_ref (struct loo
    true if there are no other memory references inside the loop.  */
 
 static struct mem_ref_group *
-gather_memory_references (struct loop *loop, bool *no_other_refs)
+gather_memory_references (struct loop *loop, bool *no_other_refs, unsigned *ref_count)
 {
   basic_block *body = get_loop_body_in_dom_order (loop);
   basic_block bb;
@@ -487,6 +504,7 @@ gather_memory_references (struct loop *l
   struct mem_ref_group *refs = NULL;
 
   *no_other_refs = true;
+  *ref_count = 0;
 
   /* Scan the loop body in order, so that the former references precede the
      later ones.  */
@@ -513,11 +531,17 @@ gather_memory_references (struct loop *l
 	  rhs = gimple_assign_rhs1 (stmt);
 
 	  if (REFERENCE_CLASS_P (rhs))
-	    *no_other_refs &= gather_memory_references_ref (loop, &refs,
-							    rhs, false, stmt);
+	    {
+	      *no_other_refs &= gather_memory_references_ref (loop, &refs,
+							      rhs, false, stmt);
+	      *ref_count += 1;
+	    }
 	  if (REFERENCE_CLASS_P (lhs))
-	    *no_other_refs &= gather_memory_references_ref (loop, &refs,
-							    lhs, true, stmt);
+	    {
+	      *no_other_refs &= gather_memory_references_ref (loop, &refs,
+							      lhs, true, stmt);
+	      *ref_count += 1;
+	    }
 	}
     }
   free (body);
@@ -846,20 +870,20 @@ schedule_prefetches (struct mem_ref_grou
   return any;
 }
 
-/* Determine whether there is any reference suitable for prefetching
-   in GROUPS.  */
+/* Estimate the number of prefetches in the given GROUPS.  */
 
-static bool
-anything_to_prefetch_p (struct mem_ref_group *groups)
+static int
+estimate_prefetch_count (struct mem_ref_group *groups)
 {
   struct mem_ref *ref;
+  int prefetch_count = 0;
 
   for (; groups; groups = groups->next)
     for (ref = groups->refs; ref; ref = ref->next)
       if (should_issue_prefetch_p (ref))
-	return true;
+	  prefetch_count++;
 
-  return false;
+  return prefetch_count;
 }
 
 /* Issue prefetches for the reference REF into loop as decided before.
@@ -1449,6 +1473,73 @@ determine_loop_nest_reuse (struct loop *
     }
 }
 
+/* Do a cost-benefit analysis to determine if prefetching is profitable
+   for the current loop given the following parameters:
+   AHEAD: the iteration ahead distance,
+   EST_NITER: the estimated trip count,  
+   NINSNS: estimated number of instructions in the loop,
+   PREFETCH_COUNT: an estimate of the number of prefetches,
+   MEM_REF_COUNT: total number of memory references in the loop.  */
+
+static bool 
+is_loop_prefetching_profitable (unsigned ahead, HOST_WIDE_INT est_niter, 
+				unsigned ninsns, unsigned prefetch_count, 
+				unsigned mem_ref_count)
+{
+  int insn_to_mem_ratio, insn_to_prefetch_ratio;
+
+  if (mem_ref_count == 0)
+    return false;
+
+  /* Prefetching improves performance by overlapping cache-missing
+     memory accesses with CPU operations.  If the loop does not have 
+     enough CPU operations to overlap with memory operations, prefetching 
+     won't give a significant benefit.  One approximate way of checking 
+     this is to require the ratio of instructions to memory references to 
+     be above a certain limit.  This approximation works well in practice.
+     TODO: Implement a more precise computation by estimating the time
+     for each CPU or memory op in the loop. Time estimates for memory ops
+     should account for cache misses.  */
+  insn_to_mem_ratio = ninsns / mem_ref_count;  
+
+  if (insn_to_mem_ratio < PREFETCH_MIN_INSN_TO_MEM_RATIO)
+    return false;
+
+  /* Profitability of prefetching is highly dependent on the trip count.
+     For a given AHEAD distance, the first AHEAD iterations do not benefit 
+     from prefetching, and the last AHEAD iterations execute useless 
+     prefetches.  So, if the trip count is not large enough relative to AHEAD,
+     prefetching may cause serious performance degradation.  To avoid this
+     problem when the trip count is not known at compile time, we 
+     conservatively skip loops with high prefetching costs.  For now, only
+     the I-cache cost is considered.  The relative I-cache cost is estimated 
+     by taking the ratio between the number of prefetches and the total
+     number of instructions.  Since we are using integer arithmetic, we
+     compute the reciprocal of this ratio.  
+     TODO: Account for loop unrolling, which may reduce the costs of
+     shorter stride prefetches.  Note that not accounting for loop 
+     unrolling over-estimates the cost and hence gives more conservative
+     results.  */
+  if (est_niter < 0)
+    {
+      insn_to_prefetch_ratio = ninsns / prefetch_count;      
+      if (insn_to_prefetch_ratio < MIN_INSN_TO_PREFETCH_RATIO)
+	return false;
+      return true;
+    }
+       
+  if (est_niter <= (HOST_WIDE_INT) ahead)
+    {
+      if (dump_file && (dump_flags & TDF_DETAILS))
+	fprintf (dump_file,
+		 "Not prefetching -- loop estimated to roll only %d times\n",
+		 (int) est_niter);
+      return false;
+    }
+  return true;
+}
+
+
 /* Issue prefetch instructions for array references in LOOP.  Returns
    true if the LOOP was unrolled.  */
 
@@ -1460,6 +1551,8 @@ loop_prefetch_arrays (struct loop *loop)
   HOST_WIDE_INT est_niter;
   struct tree_niter_desc desc;
   bool unrolled = false, no_other_refs;
+  unsigned prefetch_count;
+  unsigned mem_ref_count;
 
   if (optimize_loop_nest_for_size_p (loop))
     {
@@ -1469,12 +1562,13 @@ loop_prefetch_arrays (struct loop *loop)
     }
 
   /* Step 1: gather the memory references.  */
-  refs = gather_memory_references (loop, &no_other_refs);
+  refs = gather_memory_references (loop, &no_other_refs, &mem_ref_count);
 
   /* Step 2: estimate the reuse effects.  */
   prune_by_reuse (refs);
 
-  if (!anything_to_prefetch_p (refs))
+  prefetch_count = estimate_prefetch_count (refs);
+  if (prefetch_count == 0)
     goto fail;
 
   determine_loop_nest_reuse (loop, refs, no_other_refs);
@@ -1485,27 +1579,22 @@ loop_prefetch_arrays (struct loop *loop)
      the loop body.  */
   time = tree_num_loop_insns (loop, &eni_time_weights);
   ahead = (PREFETCH_LATENCY + time - 1) / time;
-  est_niter = estimated_loop_iterations_int (loop, false);
-
-  /* The prefetches will run for AHEAD iterations of the original loop.  Unless
-     the loop rolls at least AHEAD times, prefetching the references does not
-     make sense.  */
-  if (est_niter >= 0 && est_niter <= (HOST_WIDE_INT) ahead)
-    {
-      if (dump_file && (dump_flags & TDF_DETAILS))
-	fprintf (dump_file,
-		 "Not prefetching -- loop estimated to roll only %d times\n",
-		 (int) est_niter);
-      goto fail;
-    }
-
-  mark_nontemporal_stores (loop, refs);
+  est_niter = estimated_loop_iterations_int (loop, false);  
 
   ninsns = tree_num_loop_insns (loop, &eni_size_weights);
   unroll_factor = determine_unroll_factor (loop, refs, ninsns, &desc,
 					   est_niter);
   if (dump_file && (dump_flags & TDF_DETAILS))
-    fprintf (dump_file, "Ahead %d, unroll factor %d\n", ahead, unroll_factor);
+    fprintf (dump_file, "Ahead %d, unroll factor %d, trip count %ld\n"
+	     "insn count %d, mem ref count %d, prefetch count %d\n", 
+	     ahead, unroll_factor, est_niter, ninsns, mem_ref_count, 
+	     prefetch_count);
+
+  if (!is_loop_prefetching_profitable (ahead, est_niter, ninsns, 
+				       prefetch_count, mem_ref_count))
+    goto fail;
+
+  mark_nontemporal_stores (loop, refs);
 
   /* Step 4: what to prefetch?  */
   if (!schedule_prefetches (refs, unroll_factor, ahead))
@@ -1556,7 +1645,11 @@ tree_ssa_prefetch_arrays (void)
       fprintf (dump_file, "    L1 cache size: %d lines, %d kB\n",
 	       L1_CACHE_SIZE_BYTES / L1_CACHE_LINE_SIZE, L1_CACHE_SIZE);
       fprintf (dump_file, "    L1 cache line size: %d\n", L1_CACHE_LINE_SIZE);
-      fprintf (dump_file, "    L2 cache size: %d kB\n", L2_CACHE_SIZE);
+      fprintf (dump_file, "    L2 cache size: %d kB\n", L2_CACHE_SIZE);      
+      fprintf (dump_file, "    min insn-to-prefetch ratio: %d \n", 
+	       MIN_INSN_TO_PREFETCH_RATIO);
+      fprintf (dump_file, "    min insn-to-mem ratio: %d \n", 
+	       PREFETCH_MIN_INSN_TO_MEM_RATIO);
       fprintf (dump_file, "\n");
     }
 
Index: params.def
===================================================================
--- params.def	(revision 145634)
+++ params.def	(working copy)
@@ -761,6 +761,17 @@ DEFPARAM (PARAM_LOOP_INVARIANT_MAX_BBS_I
 	  "max basic blocks number in loop for loop invariant motion",
 	  10000, 0, 0)
 
+DEFPARAM (PARAM_MIN_INSN_TO_PREFETCH_RATIO,
+	  "min-insn-to-prefetch-ratio",
+	  "min. ratio of insns to prefetches to enable prefetching for "
+          "a loop with an unknown trip count",
+	  10, 0, 0)
+
+DEFPARAM (PARAM_PREFETCH_MIN_INSN_TO_MEM_RATIO,
+	  "prefetch-min-insn-to-mem-ratio",
+	  "min. ratio of insns to mem ops to enable prefetching in a loop",
+	  4, 0, 0)
+
 /*
 Local variables:
 mode:c
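
For anyone trying the patch: the per-loop decisions and the parameter dump shown above should be visible in the pass's details dump, e.g. (assuming the pass keeps its current "aprefetch" dump name):

  gcc -O3 -fprefetch-loop-arrays -fdump-tree-aprefetch-details test.c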



ChangeLog Entry:

2009-06-02  Ghassan Shobaki  <ghassan.shobaki@amd.com>

	* tree-ssa-loop-prefetch.c (gather_memory_references): Introduce
	a counter for the number of memory references.
	(anything_to_prefetch_p): Rename to estimate_prefetch_count and
	return the number of prefetches.
	(is_loop_prefetching_profitable): New function with a cost model
	for prefetching.
	(loop_prefetch_arrays): Use the new cost model to determine if
	prefetching is profitable.
	* params.def (PARAM_MIN_INSN_TO_PREFETCH_RATIO,
	PARAM_PREFETCH_MIN_INSN_TO_MEM_RATIO): New parameters.
	* params.h (MIN_INSN_TO_PREFETCH_RATIO,
	PREFETCH_MIN_INSN_TO_MEM_RATIO): New macros.


-----Original Message-----
From: Richard Guenther [mailto:richard.guenther@gmail.com] 
Sent: Friday, April 24, 2009 4:51 AM
To: Zdenek Dvorak
Cc: Shobaki, Ghassan; gcc-patches@gcc.gnu.org
Subject: Re: Patch to Avoid Bad Prefetching

On Thu, Apr 16, 2009 at 7:25 PM, Zdenek Dvorak <rakdver@kam.mff.cuni.cz> wrote:
> Hi,
>
>> However, using the command-line option you propose likely won't do the
>> job for this anyway, as different loops behave differently.  A better
>> solution for such optimization would be per-loop hints, e.g.
>> #pragma loop count used by the intel compiler.
>>
>> [Ghassan] Totally agree that such user hints will give more precise
>> information and hence better performance, but the point is: what's the
>> best that we can do when that precise information is not available?
>> Should we just give up? My answer is "No".
>
> right; so why don't you implement #pragma loop count?  It should take
> just a few hours to do, and would be hugely more useful, as other
> passes can take advantage of it too.
>
>> Also, at the point where the customer pays you to spend hours or days on
>> fiddling with the compiler options, it likely won't hurt you to spend a
>> few minutes on modifying the makefiles (and getting the necessary
>> testcases)
>> to enable profile feedback, either,
>>
>> [Ghassan] Yes, but what about the compile time cost? Many users are not
>> willing to pay that cost every time they compile.
>
> You only need to enable the profile feedback for the final compilation
> (or before performance testing etc.)

To go forward with this, may I propose splitting this patch further.

A first patch to disable prefetching completely for unknown trip-count
loops, that is, a patch introducing is_loop_prefetching_profitable, but
with

+     prefetching may cause serious performance degradation.  To avoid this
+     problem when the trip count cannot be guessed at compile time,
+     do not issue prefetches in this case.  */
+  if (est_niter < 0)
+    return false;

without any new params.  This would reduce the degradation of SPEC
with prefetching.  In this light it may be possible to turn on
prefetching by default at -O3 (or at least with -fprofile-use/generate
and -O3).  Detailed SPEC numbers with/without prefetching would be
useful here (also with/without profile feedback).

A second patch removing the artificial limit on the number of
basic blocks for unrolling completely, without any new params,
given that SPEC numbers do not degrade with this.

Note that, in addition to SPEC, Polyhedron is also a good source of
benchmarks.

Thanks,
Richard.


