This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.



[patch] Speculative prefetching


Hello,

this patch uses value profiling to insert prefetches even for memory
references that do not have the special shape required by -fprefetch-loop-arrays.

What we do is measure whether the difference between two consecutive
addresses of a memory reference is usually a constant; if it is, we
issue a prefetch for the current address plus this constant.
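The measurement can be sketched at source level.  Below is a hypothetical model
of the "constant delta" histogram (the patch's HIST_TYPE_CONST_DELTA); the
struct and function names are made up for the example and do not match the real
gcov counter layout:

```c
#include <assert.h>
#include <stdint.h>

/* Remember the last address seen, lock in the first observed delta as a
   candidate, and count how often it recurs versus the total number of
   deltas observed.  */

struct delta_hist
{
  uintptr_t last;	/* last address seen */
  intptr_t delta;	/* candidate constant delta */
  long count;		/* how often the candidate delta recurred */
  long all;		/* total number of deltas observed */
  int have_last;	/* whether LAST is valid yet */
};

static void
delta_hist_sample (struct delta_hist *h, uintptr_t addr)
{
  if (h->have_last)
    {
      intptr_t d = (intptr_t) (addr - h->last);
      if (h->all == 0)
	h->delta = d;
      if (d == h->delta)
	h->count++;
      h->all++;
    }
  h->last = addr;
  h->have_last = 1;
}

/* The transformation fires only when the delta was constant at least
   half of the time; the patch bails out when 2 * count < all.  */

static int
delta_usually_constant (const struct delta_hist *h)
{
  return h->all > 0 && 2 * h->count >= h->all;
}
```

Feeding this a stream of addresses with stride 64 plus one outlier still
reports a usually-constant delta of 64, which is exactly the situation where
the prefetch pays off.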

This catches various nontrivial cases (such as traversing a linked
list that just happens to be usually allocated sequentially, or
accessing an array from a recursive function).

Results on SpecINT on amd64, base flags -O2 -march=k8 -funroll-loops,
peak flags -O2 -march=k8 -funroll-loops -fspeculative-prefetching:

                                     Estimated                     Estimated
                   Base      Base      Base      Peak      Peak      Peak
   Benchmarks    Ref Time  Run Time   Ratio    Ref Time  Run Time   Ratio
   ------------  --------  --------  --------  --------  --------  --------
   164.gzip          1400   200       700    *     1400   195       716    *
   175.vpr           1400   188       743    *     1400   188       743    *
   176.gcc           1100        --          X     1100        --          X
   181.mcf           1800   357       505    *     1800   291       618    *
   186.crafty        1000    83.4    1199    *     1000    83.6    1196    *
   197.parser        1800   340       529    *     1800   338       532    *
   252.eon           1300        --          X     1300        --          X
   253.perlbmk       1800   188       960    *     1800   186       969    *
   254.gap           1100   154       713    *     1100   158       696    *
   255.vortex        1900   169      1122    *     1900   169      1122    *
   256.bzip2         1500   205       731    *     1500   203       740    *
   300.twolf         3000   373       804    *     3000   372       806    *

For comparison, the same machine, base -O2 -march=k8 -funroll-loops,
peak -O2 -march=k8 -funroll-loops -fprefetch-loop-arrays:

                                     Estimated                     Estimated
                   Base      Base      Base      Peak      Peak      Peak
   Benchmarks    Ref Time  Run Time   Ratio    Ref Time  Run Time   Ratio
   ------------  --------  --------  --------  --------  --------  --------
   164.gzip          1400   197       712    *     1400   193       724    *
   175.vpr           1400   189       739    *     1400   191       734    *
   176.gcc           1100        --          X     1100        --          X
   181.mcf           1800   358       502    *     1800   356       506    *
   186.crafty        1000    83.4    1200    *     1000    83.9    1192    *
   197.parser        1800   338       533    *     1800   343       525    *
   252.eon           1300        --          X     1300        --          X
   253.perlbmk       1800   185       972    *     1800   190       949    *
   254.gap           1100   154       712    *     1100   156       704    *
   255.vortex        1900   170      1114    *     1900   171      1113    *
   256.bzip2         1500   204       736    *     1500   204       735    *
   300.twolf         3000   370       812    *     3000   398       754    *

Zdenek

	* common.opt (fspeculative-prefetching): New.
	* flags.h (flag_speculative_prefetching): Declare.
	* gcov-io.c (gcov_write_counter, gcov_read_counter): Allow negative
	values.
	* opts.c (common_handle_option): Handle -fspeculative-prefetching.
	* passes.c (rest_of_compilation): Ditto.
	* toplev.c (flag_speculative_prefetching): New.
	(process_options): Handle -fspeculative-prefetching.
	* value-prof.c (NOPREFETCH_RANGE_MIN, NOPREFETCH_RANGE_MAX): New
	macros.
	(insn_prefetch_values_to_profile, find_mem_reference_1,
	find_mem_reference_2, find_mem_reference, gen_speculative_prefetch,
	speculative_prefetching_transform): New.
	(value_profile_transformations): Call
	speculative_prefetching_transform.
	(insn_values_to_profile): Call insn_prefetch_values_to_profile.
	* doc/invoke.texi (-fspeculative-prefetching): Document.

Index: common.opt
===================================================================
RCS file: /cvs/gcc/gcc/gcc/common.opt,v
retrieving revision 1.30
diff -c -3 -p -r1.30 common.opt
*** common.opt	10 Mar 2004 06:02:50 -0000	1.30
--- common.opt	14 Mar 2004 15:56:51 -0000
*************** fsingle-precision-constant
*** 649,654 ****
--- 649,658 ----
  Common
  Convert floating point constants to single precision constants
  
+ fspeculative-prefetching
+ Common
+ Use value profiling for speculative prefetching
+ 
  fstack-check
  Common
  Insert stack checking code into the program
Index: flags.h
===================================================================
RCS file: /cvs/gcc/gcc/gcc/flags.h,v
retrieving revision 1.134
diff -c -3 -p -r1.134 flags.h
*** flags.h	10 Mar 2004 06:02:51 -0000	1.134
--- flags.h	14 Mar 2004 15:56:51 -0000
*************** extern int flag_gcse_las;
*** 678,684 ****
--- 678,689 ----
  extern int flag_gcse_after_reload;
  
  /* Nonzero if value histograms should be used to optimize code.  */
+ 
  extern int flag_value_profile_transformations;
+ 
+ /* Nonzero if value histograms should be used for speculative prefetching.  */
+ 
+ extern int flag_speculative_prefetching;
  
  /* Perform branch target register optimization before prologue / epilogue
     threading.  */
Index: gcov-io.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/gcov-io.c,v
retrieving revision 1.15
diff -c -3 -p -r1.15 gcov-io.c
*** gcov-io.c	23 Feb 2004 17:02:50 -0000	1.15
--- gcov-io.c	14 Mar 2004 15:56:51 -0000
*************** gcov_write_counter (gcov_type value)
*** 268,276 ****
      buffer[1] = (gcov_unsigned_t) (value >> 32);
    else
      buffer[1] = 0;
-   
-   if (value < 0)
-     gcov_var.error = -1;
  }
  #endif /* IN_LIBGCOV */
  
--- 268,273 ----
*************** gcov_read_counter (void)
*** 453,461 ****
      value |= ((gcov_type) from_file (buffer[1])) << 32;
    else if (buffer[1])
      gcov_var.error = -1;
!   
!   if (value < 0)
!     gcov_var.error = -1;
    return value;
  }
  
--- 450,456 ----
      value |= ((gcov_type) from_file (buffer[1])) << 32;
    else if (buffer[1])
      gcov_var.error = -1;
! 
    return value;
  }
  
Index: opts.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/opts.c,v
retrieving revision 1.61
diff -c -3 -p -r1.61 opts.c
*** opts.c	10 Mar 2004 06:02:53 -0000	1.61
--- opts.c	14 Mar 2004 15:56:51 -0000
*************** common_handle_option (size_t scode, cons
*** 1228,1233 ****
--- 1228,1237 ----
        flag_value_profile_transformations = value;
        break;
  
+     case OPT_fspeculative_prefetching:
+       flag_speculative_prefetching = value;
+       break;
+ 
      case OPT_frandom_seed:
        /* The real switch is -fno-random-seed.  */
        if (value)
Index: passes.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/passes.c,v
retrieving revision 2.3
diff -c -3 -p -r2.3 passes.c
*** passes.c	3 Mar 2004 16:32:38 -0000	2.3
--- passes.c	14 Mar 2004 15:56:51 -0000
*************** rest_of_compilation (tree decl)
*** 1762,1768 ****
  
        if (flag_branch_probabilities
  	  && flag_profile_values
! 	  && flag_value_profile_transformations)
  	rest_of_handle_value_profile_transformations (decl, insns);
  
        /* Remove the death notes created for vpt.  */
--- 1762,1769 ----
  
        if (flag_branch_probabilities
  	  && flag_profile_values
! 	  && (flag_value_profile_transformations
! 	      || flag_speculative_prefetching))
  	rest_of_handle_value_profile_transformations (decl, insns);
  
        /* Remove the death notes created for vpt.  */
Index: toplev.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/toplev.c,v
retrieving revision 1.887
diff -c -3 -p -r1.887 toplev.c
*** toplev.c	3 Mar 2004 16:32:38 -0000	1.887
--- toplev.c	14 Mar 2004 15:56:51 -0000
*************** int profile_arc_flag = 0;
*** 235,242 ****
--- 235,247 ----
  int flag_profile_values = 0;
  
  /* Nonzero if value histograms should be used to optimize code.  */
+ 
  int flag_value_profile_transformations = 0;
  
+ /* Nonzero if value histograms should be used for speculative prefetching.  */
+ 
+ int flag_speculative_prefetching = 0;
+ 
  /* Nonzero if generating info for gcov to calculate line test coverage.  */
  
  int flag_test_coverage = 0;
*************** process_options (void)
*** 2277,2282 ****
--- 2282,2298 ----
    if (flag_value_profile_transformations)
      flag_profile_values = 1;
  
+   /* Speculative prefetching implies value profiling.  We also switch off
+      the prefetching in the loop optimizer, so that we do not emit double
+      prefetches.  TODO -- we should teach these two to cooperate; the loop
+      based prefetching may sometimes do a better job, especially in connection
+      with reuse analysis.  */
+   if (flag_speculative_prefetching)
+     {
+       flag_profile_values = 1;
+       flag_prefetch_loop_arrays = 0;
+     }
+ 
    /* Warn about options that are not supported on this machine.  */
  #ifndef INSN_SCHEDULING
    if (flag_schedule_insns || flag_schedule_insns_after_reload)
*************** process_options (void)
*** 2396,2406 ****
--- 2412,2432 ----
        warning ("-fprefetch-loop-arrays not supported for this target");
        flag_prefetch_loop_arrays = 0;
      }
+   if (flag_speculative_prefetching)
+     {
+       warning ("-fspeculative-prefetching not supported for this target");
+       flag_speculative_prefetching = 0;
+     }
  #else
    if (flag_prefetch_loop_arrays && !HAVE_prefetch)
      {
        warning ("-fprefetch-loop-arrays not supported for this target (try -march switches)");
        flag_prefetch_loop_arrays = 0;
+     }
+   if (flag_speculative_prefetching && !HAVE_prefetch)
+     {
+       warning ("-fspeculative-prefetching not supported for this target (try -march switches)");
+       flag_speculative_prefetching = 0;
      }
  #endif
  
Index: value-prof.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/value-prof.c,v
retrieving revision 1.9
diff -c -3 -p -r1.9 value-prof.c
*** value-prof.c	27 Feb 2004 14:50:41 -0000	1.9
--- value-prof.c	14 Mar 2004 15:56:52 -0000
*************** Software Foundation, 59 Temple Place - S
*** 34,41 ****
  #include "optabs.h"
  #include "regs.h"
  
! /* In this file value profile based optimizations will be placed (none are
!    here just now, but they are hopefully coming soon).
  
     Every such optimization should add its requirements for profiled values to
     insn_values_to_profile function.  This function is called from branch_prob
--- 34,49 ----
  #include "optabs.h"
  #include "regs.h"
  
! /* In this file value profile based optimizations are placed.  Currently the
!    following optimizations are implemented (for more detailed descriptions
!    see comments at value_profile_transformations):
! 
!    1) Division/modulo specialization.  Provided that we can determine that the
!       operands of the division have some special properties, we may use this to
!       produce more effective code.
!    2) Speculative prefetching.  If we are able to determine that the difference
!       between addresses accessed by a memory reference is usually constant, we
!       may add prefetch instructions.
  
     Every such optimization should add its requirements for profiled values to
     insn_values_to_profile function.  This function is called from branch_prob
*************** Software Foundation, 59 Temple Place - S
*** 50,66 ****
--- 58,101 ----
     -- the expression that is profiled
     -- list of counters starting from the first one.  */
  
+ /* For speculative prefetching, the range in which we do not prefetch (because
+    we assume that it will be in cache anyway).  The asymmetry between the min
+    and max range tries to reflect the fact that sequential prefetching of
+    data is commonly done directly by hardware.  Nevertheless, these
+    values are just a guess and should of course be target-specific.  */
+ 
+ #ifndef NOPREFETCH_RANGE_MIN
+ #define NOPREFETCH_RANGE_MIN (-16)
+ #endif
+ #ifndef NOPREFETCH_RANGE_MAX
+ #define NOPREFETCH_RANGE_MAX 32
+ #endif
+ 
  static void insn_divmod_values_to_profile (rtx, unsigned *,
  					   struct histogram_value **);
+ #ifdef HAVE_prefetch
+ static bool insn_prefetch_values_to_profile (rtx, unsigned *,
+ 					     struct histogram_value **);
+ static int find_mem_reference_1 (rtx *, void *);
+ static void find_mem_reference_2 (rtx, rtx, void *);
+ static bool find_mem_reference (rtx, rtx *, int *);
+ #endif
+ 
  static void insn_values_to_profile (rtx, unsigned *, struct histogram_value **);
  static rtx gen_divmod_fixed_value (enum machine_mode, enum rtx_code, rtx, rtx,
  				   rtx, gcov_type);
  static rtx gen_mod_pow2 (enum machine_mode, enum rtx_code, rtx, rtx, rtx);
  static rtx gen_mod_subtract (enum machine_mode, enum rtx_code, rtx, rtx, rtx,
  			     int);
+ #ifdef HAVE_prefetch
+ static rtx gen_speculative_prefetch (rtx, gcov_type, int);
+ #endif
  static bool divmod_fixed_value_transform (rtx insn);
  static bool mod_pow2_value_transform (rtx);
  static bool mod_subtract_transform (rtx);
+ #ifdef HAVE_prefetch
+ static bool speculative_prefetching_transform (rtx);
+ #endif
  
  /* Release the list of VALUES of length N_VALUES for that we want to measure
     histograms.  */
*************** insn_divmod_values_to_profile (rtx insn,
*** 162,167 ****
--- 197,286 ----
      }
  }
  
+ #ifdef HAVE_prefetch
+ 
+ /* Called from find_mem_reference through for_each_rtx; finds a memory
+    reference.  */
+ 
+ static int
+ find_mem_reference_1 (rtx *expr, void *ret)
+ {
+   rtx *mem = ret;
+ 
+   if (GET_CODE (*expr) == MEM)
+     {
+       *mem = *expr;
+       return 1;
+     }
+   return 0;
+ }
+ 
+ /* Called from find_mem_reference through note_stores; finds out whether
+    the memory reference is a store.  */
+ 
+ static int fmr2_write;
+ static void
+ find_mem_reference_2 (rtx expr, rtx pat ATTRIBUTE_UNUSED, void *mem)
+ {
+   if (expr == mem)
+     fmr2_write = true;
+ }
+ 
+ /* Find a memory reference inside INSN, return it in MEM.  Set WRITE to true
+    if it is a write of the mem.  Return false if no mem is found, true
+    otherwise.  */
+ 
+ static bool
+ find_mem_reference (rtx insn, rtx *mem, int *write)
+ {
+   *mem = NULL_RTX;
+   for_each_rtx (&PATTERN (insn), find_mem_reference_1, mem);
+ 
+   if (!*mem)
+     return false;
+   
+   fmr2_write = false;
+   note_stores (PATTERN (insn), find_mem_reference_2, *mem);
+   *write = fmr2_write;
+   return true;
+ }
+ 
+ /* Find values inside INSN for which we want to measure histograms for
+    speculative prefetching.  Add them to the list VALUES and increment
+    its length in N_VALUES accordingly.  */
+ 
+ static bool
+ insn_prefetch_values_to_profile (rtx insn, unsigned *n_values,
+ 				 struct histogram_value **values)
+ {
+   rtx mem, address;
+   int write;
+ 
+   if (!INSN_P (insn))
+     return false;
+ 
+   if (!find_mem_reference (insn, &mem, &write))
+     return false;
+ 
+   address = XEXP (mem, 0);
+   if (side_effects_p (address))
+     return false;
+       
+   if (CONSTANT_P (address))
+     return false;
+ 
+   *values = xrealloc (*values,
+ 		      (*n_values + 1) * sizeof (struct histogram_value));
+   (*values)[*n_values].value = address;
+   (*values)[*n_values].mode = GET_MODE (address);
+   (*values)[*n_values].seq = NULL_RTX;
+   (*values)[*n_values].insn = insn;
+   (*values)[*n_values].type = HIST_TYPE_CONST_DELTA;
+   (*n_values)++;
+ 
+   return true;
+ }
+ #endif
  /* Find values inside INSN for that we want to measure histograms and adds
     them to list VALUES (increasing the record of its length in N_VALUES).  */
  static void
*************** insn_values_to_profile (rtx insn,
*** 171,176 ****
--- 290,300 ----
  {
    if (flag_value_profile_transformations)
      insn_divmod_values_to_profile (insn, n_values, values);
+ 
+ #ifdef HAVE_prefetch
+   if (flag_speculative_prefetching)
+     insn_prefetch_values_to_profile (insn, n_values, values);
+ #endif
  }
  
  /* Find list of values for that we want to measure histograms.  */
*************** find_values_to_profile (unsigned *n_valu
*** 294,299 ****
--- 418,440 ----
     It would be possible to continue analogically for K * b for other small
     K's, but it is probably not useful.
  
+    5)
+ 
+    Read or write of mem[address], where the value of address usually changes
+    by a constant C != 0 between consecutive accesses during the computation;
+    with -fspeculative-prefetching we then add a prefetch of address + C before
+    the insn.  This handles prefetching of several interesting cases in addition
+    to simple prefetching of addresses that are induction variables, e.g.
+    linked lists allocated sequentially (even when they are processed
+    recursively).
+ 
+    TODO -- we should also check whether the difference from the adjacent
+ 	   memory references is not (usually) small, so that we do
+ 	   not issue overlapping prefetches.  Also we should employ some
+ 	   heuristics to eliminate cases where prefetching evidently spoils
+ 	   the code.
+ 	-- it should somehow cooperate with the loop optimizer's prefetching
+ 
     TODO:
  
     There are other useful cases that could be handled by a similar mechanism,
*************** value_profile_transformations (void)
*** 347,352 ****
--- 488,498 ----
  	      || divmod_fixed_value_transform (insn)
  	      || mod_pow2_value_transform (insn)))
  	changed = true;
+ #ifdef HAVE_prefetch
+       if (flag_speculative_prefetching
+ 	  && speculative_prefetching_transform (insn))
+ 	changed = true;
+ #endif
      }
  
    if (changed)
*************** mod_subtract_transform (rtx insn)
*** 706,708 ****
--- 852,950 ----
  
    return true;
  }
+ 
+ #ifdef HAVE_prefetch
+ /* Generate code for transformation 5 for mem with ADDRESS and a constant
+    step DELTA.  WRITE is true if the reference is a store to mem.  */
+ 
+ static rtx
+ gen_speculative_prefetch (rtx address, gcov_type delta, int write)
+ {
+   rtx tmp;
+   rtx sequence;
+ 
+   /* TODO: we do the prefetching for just one iteration ahead, which
+      often is not enough.  */
+   start_sequence ();
+   if (offsettable_address_p (0, VOIDmode, address))
+     tmp = plus_constant (copy_rtx (address), delta);
+   else
+     {
+       tmp = simplify_gen_binary (PLUS, Pmode,
+ 				 copy_rtx (address), GEN_INT (delta));
+       tmp = force_operand (tmp, NULL);
+     }
+   if (! (*insn_data[(int)CODE_FOR_prefetch].operand[0].predicate)
+       (tmp, insn_data[(int)CODE_FOR_prefetch].operand[0].mode))
+     tmp = force_reg (Pmode, tmp);
+   emit_insn (gen_prefetch (tmp, GEN_INT (write), GEN_INT (3)));
+   sequence = get_insns ();
+   end_sequence ();
+ 
+   return sequence;
+ }
+ 
+ /* Do transform 5) on INSN if applicable.  */
+ 
+ static bool
+ speculative_prefetching_transform (rtx insn)
+ {
+   rtx histogram, value;
+   gcov_type val, count, all;
+   edge e;
+   rtx mem, address;
+   int write;
+ 
+   if (!find_mem_reference (insn, &mem, &write))
+     return false;
+ 
+   address = XEXP (mem, 0);
+   if (side_effects_p (address))
+     return false;
+       
+   if (CONSTANT_P (address))
+     return false;
+ 
+   for (histogram = REG_NOTES (insn);
+        histogram;
+        histogram = XEXP (histogram, 1))
+     if (REG_NOTE_KIND (histogram) == REG_VALUE_PROFILE
+ 	&& XEXP (XEXP (histogram, 0), 0) == GEN_INT (HIST_TYPE_CONST_DELTA))
+       break;
+ 
+   if (!histogram)
+     return false;
+ 
+   histogram = XEXP (XEXP (histogram, 0), 1);
+   value = XEXP (histogram, 0);
+   histogram = XEXP (histogram, 1);
+   /* Skip last value referenced.  */
+   histogram = XEXP (histogram, 1);
+   val = INTVAL (XEXP (histogram, 0));
+   histogram = XEXP (histogram, 1);
+   count = INTVAL (XEXP (histogram, 0));
+   histogram = XEXP (histogram, 1);
+   all = INTVAL (XEXP (histogram, 0));
+ 
+   /* We require that count is at least half of all; this means
+      that for the transformation to fire the value must be constant
+      at least 50% of the time (and 75% gives a guarantee of usage).  */
+   if (!rtx_equal_p (address, value) || 2 * count < all)
+     return false;
+ 
+   /* If the difference is too small, it does not make much sense to
+      prefetch, as the memory is probably already in cache.  */
+   if (val >= NOPREFETCH_RANGE_MIN && val <= NOPREFETCH_RANGE_MAX)
+     return false;
+ 
+   if (dump_file)
+     fprintf (dump_file, "Speculative prefetching for insn %d\n",
+ 	     INSN_UID (insn));
+ 
+   e = split_block (BLOCK_FOR_INSN (insn), PREV_INSN (insn));
+   
+   insert_insn_on_edge (gen_speculative_prefetch (address, val, write), e);
+ 
+   return true;
+ }
+ #endif  /* HAVE_prefetch */
Index: doc/invoke.texi
===================================================================
RCS file: /cvs/gcc/gcc/gcc/doc/invoke.texi,v
retrieving revision 1.427
diff -c -3 -p -r1.427 invoke.texi
*** doc/invoke.texi	13 Mar 2004 21:48:56 -0000	1.427
--- doc/invoke.texi	14 Mar 2004 15:56:52 -0000
*************** in the following sections.
*** 292,298 ****
  -fsched-stalled-insns=@var{n} -sched-stalled-insns-dep=@var{n} @gol
  -fsched2-use-superblocks @gol
  -fsched2-use-traces  -fsignaling-nans @gol
! -fsingle-precision-constant  @gol
  -fstrength-reduce  -fstrict-aliasing  -ftracer  -fthread-jumps @gol
  -funroll-all-loops  -funroll-loops  -fpeel-loops @gol
  -funswitch-loops  -fold-unroll-loops  -fold-unroll-all-loops @gol
--- 292,298 ----
  -fsched-stalled-insns=@var{n} -sched-stalled-insns-dep=@var{n} @gol
  -fsched2-use-superblocks @gol
  -fsched2-use-traces  -fsignaling-nans @gol
! -fsingle-precision-constant  -fspeculative-prefetching @gol
  -fstrength-reduce  -fstrict-aliasing  -ftracer  -fthread-jumps @gol
  -funroll-all-loops  -funroll-loops  -fpeel-loops @gol
  -funswitch-loops  -fold-unroll-loops  -fold-unroll-all-loops @gol
*************** With @option{-fbranch-probabilities}, it
*** 4540,4545 ****
--- 4540,4560 ----
  and actually performs the optimizations based on them.
  Currently the optimizations include specialization of division operation
  using the knowledge about the value of the denominator.
+ 
+ @item -fspeculative-prefetching
+ @opindex fspeculative-prefetching
+ If combined with @option{-fprofile-arcs}, it instructs the compiler to add
+ code to gather information about the addresses of memory references in the
+ program.
+ 
+ With @option{-fbranch-probabilities}, it reads back the data gathered
+ and issues prefetch instructions according to them.  In addition to the
+ opportunities noticed by @option{-fprefetch-loop-arrays}, it also notices more
+ complicated memory access patterns -- for example accesses to data stored in a
+ linked list whose elements are usually allocated sequentially.
+ 
+ In order to prevent issuing double prefetches, usage of
+ @option{-fspeculative-prefetching} implies @option{-fno-prefetch-loop-arrays}.
  
  @item -fnew-ra
  @opindex fnew-ra

